Browsing by Title


  • Keyriläinen, Jani (Helsingin yliopisto, 2004)
  • Koskelo, Otso (Helsingin yliopisto, 2010)
    The main method of modifying the properties of semiconductors is to introduce small amounts of impurities into the material. This is used to control the magnetic and optical properties of materials and to realize p- and n-type semiconductors out of intrinsic material in order to manufacture fundamental components such as diodes. As diffusion can be described as random mixing of material due to the thermal movement of atoms, it is essential to know the diffusion behavior of the impurities in order to manufacture working components. In the modified radiotracer technique, diffusion is studied using radioactive isotopes of elements as tracers. The technique is called modified because the atoms are deployed inside the material by ion beam implantation. With ion implantation, a distinct distribution of impurities can be deposited beneath the sample surface with good control over the amount of implanted atoms. As electromagnetic radiation and other nuclear decay products emitted by radioactive materials can be easily detected, only very low amounts of impurities are needed. This makes it possible to study diffusion in pure materials without essentially modifying the initial properties by doping. In this thesis the modified radiotracer technique is used to study the diffusion of beryllium in GaN, ZnO, SiGe and glassy carbon. GaN, ZnO and SiGe are of great interest to the semiconductor industry, and beryllium, as a small and possibly rapid dopant, hasn't been studied previously using the technique. Glassy carbon has been added to demonstrate the feasibility of the technique. In addition, the diffusion of magnetic impurities, Mn in GaAs and Co in ZnO, has been studied with spintronic applications in mind.
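As an illustration of the diffusion behaviour discussed above (not part of the thesis itself), the broadening of an ion-implanted Gaussian impurity profile under simple Fickian diffusion can be sketched as follows; the depth, width, and diffusivity values are hypothetical:

```python
import math

def gaussian_profile(x, dose, mu, sigma):
    """Concentration at depth x for a Gaussian implantation profile."""
    return dose / (sigma * math.sqrt(2 * math.pi)) * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def diffused_sigma(sigma0, diffusivity, t):
    """Profile width after annealing time t: sigma^2 grows by 2*D*t (Fick's law)."""
    return math.sqrt(sigma0 ** 2 + 2 * diffusivity * t)

# Hypothetical numbers: 50 nm initial width, D = 1e-16 m^2/s, one-hour anneal.
sigma0 = 50e-9
sigma_t = diffused_sigma(sigma0, 1e-16, 3600.0)
```

Since the implanted dose is conserved, the broadened profile's peak concentration drops as its width grows, which is what a depth-resolved radiotracer measurement would track.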
  • Slotte, Jonatan (Helsingin yliopisto, 1999)
  • Oksanen, Juha (Helsingin yliopisto, 2006)
    Digital elevation models (DEMs) have been an important topic in geography and surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5-50 m grid and used at the application scale of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and the morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis.
The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
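A minimal sketch of the simulation-based (Monte Carlo) error propagation idea described above, using a spatially uncorrelated error model on a hypothetical 1-D elevation profile; the grid spacing, error magnitude, and number of realisations are illustrative only:

```python
import random
import statistics

def slope_x(dem, dx):
    """Central-difference slope along a 1-D elevation profile."""
    return [(dem[i + 1] - dem[i - 1]) / (2 * dx) for i in range(1, len(dem) - 1)]

def monte_carlo_slope_sd(dem, dx, error_sd, n_sim, seed=0):
    """Std. dev. of the mid-point slope under spatially uncorrelated DEM error."""
    rng = random.Random(seed)
    mid = len(slope_x(dem, dx)) // 2
    samples = []
    for _ in range(n_sim):
        # One error realisation: independent Gaussian noise added to every cell.
        noisy = [z + rng.gauss(0.0, error_sd) for z in dem]
        samples.append(slope_x(noisy, dx)[mid])
    return statistics.stdev(samples)

# A uniform slope of 0.1 m/m (1 m rise per 10 m cell) with 0.5 m vertical error.
profile = [float(i) for i in range(11)]
sd = monte_carlo_slope_sd(profile, dx=10.0, error_sd=0.5, n_sim=500)
```

Replacing the independent-noise line with a spatially correlated realisation (e.g. generated by process convolution, as in the thesis) is what changes the propagated uncertainty in application-dependent ways.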
  • Kurkela, Aleksi (Helsingin yliopisto, 2008)
    When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, the strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions, and it is this theory that confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory is described using a simpler three-dimensional effective theory, having only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. In addition, numerical lattice simulations have shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them.
In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak coupling expansion of the pressure of QCD is computed in the effective theory framework for an arbitrary number of colors. The last two papers, on the other hand, focus on the construction of the center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show --- in sharp contrast to the perturbative setup --- that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.
  • Fang, Chun (Helsingin yliopisto, 2013)
    One of the central problems in dynamical systems and differential equations is the analysis of the structures of invariant sets. The structures of the invariant sets of a dynamical system or differential equation reflect the complexity of the system or the equation. For example, if every omega-limit set of a finite-dimensional differential equation is a singleton, then each bounded solution of the equation eventually stabilizes at some equilibrium state. In general, a dynamical system or differential equation can have very complicated invariant sets, or so-called chaotic sets. It is of great importance to classify those systems whose minimal invariant sets have certain simple structures and to characterize the complexity of chaotic-type sets in general dynamical systems. In this thesis, we focus on the following two important problems: estimates for the dimension of chaotic sets and stable sets in a system with finite positive entropy, and characterizations of minimal sets of nonautonomous tridiagonal competitive-cooperative systems.
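The singleton omega-limit claim above can be illustrated with a toy example (not from the thesis): a one-dimensional flow in which every bounded solution settles at an equilibrium, so each omega-limit set is a single point. The vector field and step sizes are arbitrary choices:

```python
def euler_flow(x0, f, dt=0.01, steps=2000):
    """Forward-Euler trajectory of x' = f(x); returns the final state."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

# x' = -x(x - 1)(x + 1): equilibria at -1, 0, 1. Every bounded solution
# converges to one of them, i.e. each omega-limit set is a singleton.
f = lambda x: -x * (x - 1) * (x + 1)
final = euler_flow(0.5, f)   # flows towards the stable equilibrium at 1
```

Starting on either side of the unstable equilibrium at 0 selects which stable equilibrium the solution stabilizes at.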
  • Manninen, Hanna (Helsingin yliopisto, 2011)
    Aerosol particles play an important role in the Earth's atmosphere and in the climate system: they scatter and absorb solar radiation, facilitate chemical processes, and serve as seeds for cloud formation. Secondary new particle formation (NPF) is a globally important source of these particles. Currently, however, the mechanisms of particle formation and the vapors participating in this process are not fully understood. In order to fully explain atmospheric NPF and the subsequent growth, we need to measure directly the very initial steps of the formation processes. This thesis investigates the possibility of studying atmospheric particle formation using the recently developed Neutral cluster and Air Ion Spectrometer (NAIS). First, the NAIS was calibrated and intercompared, and found to be in good agreement with the reference instruments both in the laboratory and in the field. It was concluded that the NAIS can be reliably used to measure small atmospheric ions and particles directly at the sizes where NPF begins. Second, several NAIS systems were deployed simultaneously at 12 European measurement sites to quantify the spatial and temporal distribution of particle formation events. The sites represented a variety of geographical and atmospheric conditions. NPF events were detected with the NAIS systems at all of the sites during the year-long measurement period. Various particle formation characteristics, such as formation and growth rates, were used as indicators of the relevant processes and participating compounds in the initial formation. In the case of parallel ion and neutral cluster measurements, we also estimated the relative contributions of ion-induced and neutral nucleation to the total particle formation. At most sites, the particle growth rate increased with increasing particle size, indicating that different condensing vapors participate in the growth of different-sized particles.
The results suggest that, in addition to sulfuric acid, organic vapors contribute to the initial steps of NPF and to the subsequent growth, not just the later steps of particle growth. As a significant new result, we found that the total particle formation rate varied much more between the different sites than the formation rate of charged particles. The results imply that ion-induced nucleation has a minor contribution to particle formation in the boundary layer in most environments. These results provide tools to better quantify the aerosol source provided by secondary NPF in various environments. The particle formation characteristics determined in this thesis can be used in global models to assess the climatic effects of NPF.
  • Vähäkangas, Aleksi (Helsingin yliopisto, 2008)
    The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set from above and from below by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
  • Hyttinen, Antti (Helsingin yliopisto, 2013)
    The causal relationships determining the behaviour of a system under study are inherently directional: by manipulating a cause we can control its effect, but an effect cannot be used to control its cause. Understanding the network of causal relationships is necessary, for example, if we want to predict the behaviour in settings where the system is subject to different manipulations. However, we are rarely able to directly observe the causal processes in action; we only see the statistical associations they induce in the collected data. This thesis considers the discovery of the fundamental causal relationships from data in several different learning settings and under various modeling assumptions. Although the research is mostly theoretical, possible application areas include biology, medicine, economics and the social sciences. Latent confounders, unobserved common causes of two or more observed parts of a system, are especially troublesome when discovering causal relations. The statistical dependence relations induced by such latent confounders often cannot be distinguished from directed causal relationships. The possible presence of feedback, which induces a cyclic causal structure, provides another complicating factor. To achieve informative learning results in this challenging setting, some restricting assumptions need to be made. One option is to constrain the functional forms of the causal relationships to be smooth and simple. In particular, we explore how linearity of the causal relations can be effectively exploited. Another common assumption under study is causal faithfulness, with which we can deduce the lack of causal relations from the lack of statistical associations. Along with these assumptions, we use data from randomized experiments, in which the system under study is observed under different interventions and manipulations.
In particular, we present a full theoretical foundation of learning linear cyclic models with latent variables using second order statistics in several experimental data sets. This includes sufficient and necessary conditions on the different experimental settings needed for full model identification, a provably complete learning algorithm and characterization of the underdetermination when the data do not allow for full model identification. We also consider several ways of exploiting the faithfulness assumption for this model class. We are able to learn from overlapping data sets, in which different (but overlapping) subsets of variables are observed. In addition, we formulate a model class called Noisy-OR models with latent confounding. We prove sufficient and worst case necessary conditions for the identifiability of the full model and derive several learning algorithms. The thesis also suggests the optimal sets of experiments for the identification of the above models and others. For settings without latent confounders, we develop a Bayesian learning algorithm that is able to exploit non-Gaussianity in passively observed data.
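The directional nature of causation stated at the start of the abstract can be demonstrated with a toy linear model (an illustration only, not the thesis's algorithms): intervening on the cause shifts the effect, while intervening on the effect leaves the cause untouched. The coefficients and sample size are arbitrary:

```python
import random

def simulate(n, intervene_on=None, value=0.0, seed=1):
    """Linear causal model x -> y (y = 2x + noise); optionally force one variable."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        if intervene_on == "x":
            x = value          # intervention on the cause
        y = 2.0 * x + rng.gauss(0.0, 0.1)
        if intervene_on == "y":
            y = value          # intervention on the effect
        xs.append(x)
        ys.append(y)
    return xs, ys

mean = lambda v: sum(v) / len(v)

# Setting x shifts y; setting y leaves the distribution of x untouched.
_, y_do_x = simulate(2000, intervene_on="x", value=3.0)
x_do_y, _ = simulate(2000, intervene_on="y", value=3.0)
```

This asymmetry is exactly the information that randomized experiments contribute over passive observation, where x and y are merely correlated.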
  • Marttinen, Pekka (Helsingin yliopisto, 2008)
    Advancements in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and amino acid sequences of proteins. The scale and quality of the data give promise of answering various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with the understanding of the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which is used to describe the structure of data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements.
The problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
  • Toivonen, Hannu (Helsingin yliopisto, 1996)
  • Taipale, Risto (Helsingin yliopisto, 2011)
    Volatile organic compounds (VOCs) are emitted into the atmosphere from natural and anthropogenic sources, vegetation being the dominant source on a global scale. Some of these reactive compounds are deemed major contributors or inhibitors to aerosol particle formation and growth, thus making VOC measurements essential for current climate change research. This thesis discusses ecosystem scale VOC fluxes measured above a boreal Scots pine dominated forest in southern Finland. The flux measurements were performed using the micrometeorological disjunct eddy covariance (DEC) method combined with proton transfer reaction mass spectrometry (PTR-MS), which is an online technique for measuring VOC concentrations. The measurement, calibration, and calculation procedures developed in this work proved to be well suited to long-term VOC concentration and flux measurements with PTR-MS. A new averaging approach based on running averaged covariance functions improved the determination of the lag time between wind and concentration measurements, which is a common challenge in DEC when measuring fluxes near the detection limit. The ecosystem scale emissions of methanol, acetaldehyde, and acetone were substantial. These three oxygenated VOCs made up about half of the total emissions, with the rest comprised of monoterpenes. Contrary to the traditional assumption that monoterpene emissions from Scots pine originate mainly as evaporation from specialized storage pools, the DEC measurements indicated a significant contribution from de novo biosynthesis to the ecosystem scale monoterpene emissions. This thesis offers practical guidelines for long-term DEC measurements with PTR-MS. In particular, the new averaging approach to the lag time determination seems useful in the automation of DEC flux calculations. 
Seasonal variation in the monoterpene biosynthesis and the detailed structure of a revised hybrid algorithm, describing both de novo and pool emissions, should be determined in further studies to improve biological realism in the modelling of monoterpene emissions from Scots pine forests. The increasing number of DEC measurements of oxygenated VOCs will probably enable better estimates of the role of these compounds in plant physiology and tropospheric chemistry.
Keywords: disjunct eddy covariance, lag time determination, long-term flux measurements, proton transfer reaction mass spectrometry, Scots pine forests, volatile organic compounds
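The lag-time determination mentioned in the abstract can be sketched generically (this is a plain cross-covariance search, not the thesis's running-averaged variant): the lag between the wind and concentration records is taken as the shift that maximizes their absolute covariance. The synthetic series and the 7-sample delay are made up for the demonstration:

```python
import random

def cross_cov(w, c, lag):
    """Covariance between w(t) and c(t + lag) for an integer lag >= 0."""
    n = len(w) - lag
    wm = sum(w[:n]) / n
    cm = sum(c[lag:lag + n]) / n
    return sum((w[i] - wm) * (c[i + lag] - cm) for i in range(n)) / n

def find_lag(w, c, max_lag):
    """Lag (in samples) that maximizes the absolute cross-covariance."""
    return max(range(max_lag + 1), key=lambda k: abs(cross_cov(w, c, k)))

# Synthetic check: the concentration record is the wind record delayed
# by 7 samples plus a little instrument noise.
rng = random.Random(0)
w = [rng.gauss(0.0, 1.0) for _ in range(500)]
c = ([0.0] * 7 + [wi + rng.gauss(0.0, 0.1) for wi in w])[:500]
lag = find_lag(w, c, max_lag=20)
```

Near the detection limit the covariance peak becomes shallow and noisy, which is why the thesis's running-averaged covariance functions improve the robustness of this search.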
  • Pohjonen, Aarne (Helsingin yliopisto, 2013)
    The work presented in this thesis is related to the design of the future electron-positron collider, the Compact Linear Collider (CLIC), which is currently under development at CERN. The designed operation of the collider requires accelerating electric field strengths in the ∼ 100 MV/m range to reach the target collision energy range of 0.5 to 5 TeV in a realistic and cost-efficient way. An important factor limiting the application of very high electric fields is the electrical breakdown rate, which depends drastically on the accelerating electric field strength E (approximately proportional to E^30). In order to achieve material properties capable of tolerating higher electric fields, research into the material-related physical origin of electrical breakdown onset needs to be undertaken. The onset stage of electrical breakdown on broad-area metal surfaces under electric field is still unknown, although many theories have been proposed. In many of the theories, it has been common to postulate the existence of a geometric protrusion on the surface that is capable of causing high field enhancement and pre-breakdown electric currents in the vacuum above metal surfaces under electric field. However, such protrusions have never been seen on the metal surface prior to breakdown. It has recently been experimentally observed that the average field that the material can tolerate without breakdown is correlated with the crystal structure of the material. This observation hints that some dislocation mechanism could be related to the onset stage of the breakdown event. In this thesis, the following mechanism that can be responsible for the breakdown onset is analyzed. Application of the electric field exerts stress on a metal surface, which can cause the nucleation and mobility of dislocations, i.e. plasticity.
The localized plastic deformation can eventually lead to protrusion growth on the metal surface. Once a protrusion is formed on the surface, the electric field is enhanced at the protrusion site, further enhancing the protrusion growth. A defect such as a void can act as a stress concentrator which changes the otherwise uniform stress field and acts as an initiation site for plastic deformation caused by dislocations. In this thesis, we have examined the effect of an external stress on a near-surface void in conditions which are relevant for the research and design of the accelerating structures of the CLIC collider. A void present in the near-surface region of the accelerating structure causes local concentration of the stress induced by the external electric field on the conducting metal surface. The presence of such a near-surface void was experimentally observed in a metal sample prepared for an experimental spark setup. By means of the molecular dynamics simulation method we have shown that the stress can cause nucleation and/or movement of dislocations near the void. The mobility of dislocations then leads to the formation of a protrusion on the material surface. We analyzed the nucleation of the dislocations in detail and constructed a simplified analytical model that describes the relevant physical factors affecting the nucleation event. Since the shear stress on the slip plane causes the mobility and nucleation of the dislocations, we analyzed the stress distribution on the slip plane between the void and the surface by using the finite element method and by calculating the atomic-level stress with the molecular dynamics method. The results were also compared to an analytic solution for a void located deep in the bulk under similar stress. It was found that the nearby surface had a significant effect on the stress distribution only when the void depth was less than its diameter.
Below this depth the maximum stress is equal to that for a void located deep in the bulk under similar external stress. The comparison of the finite element results to the atomic-level stress revealed that the pre-existing surface stress near the void surface had a significant effect on the stress distribution. In addition to the tensile stress caused by the electric field on the charged metal surface, pulsed surface heating also induces stress in the material surface region under an alternating electric field. This cyclic thermal stress is known to cause fatigue and severe deformation of the metal surface. We investigated the condition relevant for yield by calculating the atomic-level von Mises strain, which has earlier been related to dislocation nucleation. The strain concentration caused by the void was 1.9 times the bulk value. In order to see activated slip planes, we exaggerated the compressive stress to the extent that dislocation nucleation could be observed within the timespan allowed by the molecular dynamics simulation method. Dislocations were observed to nucleate at the sites of maximum von Mises strain. Taken together, the results presented in this thesis contribute to the understanding of the stress distributions and possible dislocation-related mechanisms under different stressing conditions, assuming the existence of a stress concentrator such as a near-surface void.
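The E^30 dependence of the breakdown rate quoted in the abstract makes the field sensitivity easy to quantify; a short sketch (the field values are arbitrary):

```python
def bdr_ratio(e_new, e_old, exponent=30):
    """Relative change in breakdown rate for BDR proportional to E^exponent."""
    return (e_new / e_old) ** exponent

# A modest 10 % increase in the surface field multiplies the breakdown
# rate by 1.1^30, i.e. roughly 17-fold.
ratio = bdr_ratio(110.0, 100.0)
```

This steepness is why even small improvements in the field a material can tolerate translate into large gains in usable accelerating gradient.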
  • Pylkkänen, Tuomas (Helsingin yliopisto, 2011)
    Spectroscopy can provide valuable information on the structure of disordered matter beyond that which is available through e.g. x-ray and neutron diffraction. X-ray Raman scattering is a non-resonant element-sensitive process which allows bulk-sensitive measurements of core-excited spectra from light-element samples. In this thesis, x-ray Raman scattering is used to study the local structure of hydrogen-bonded liquids and solids, including liquid water, a series of linear and branched alcohols, and high-pressure ice phases. Connecting the spectral features to the local atomic-scale structure involves theoretical references, and in the case of hydrogen-bonded systems the interpretation of the spectra is currently actively debated. The systematic studies of the intra- and intermolecular effects in alcohols, non-hydrogen-bonded neighbors in high-pressure ices, and the effect of temperature in liquid water are used to demonstrate different aspects of the local structure that can influence the near-edge spectra. Additionally, the determination of the extended x-ray absorption fine structure is addressed in a momentum-transfer dependent study. This work demonstrates the potential of x-ray Raman scattering for unique studies of the local structure of a variety of disordered light-element systems.
  • Prause, Istvan (Helsingin yliopisto, 2007)
    Quasiconformal mappings are natural generalizations of conformal mappings. They are homeomorphisms with 'bounded distortion', for which several approaches exist. In this work we study dimension distortion properties of quasiconformal mappings both in the plane and in the higher-dimensional Euclidean setting. The thesis consists of a summary and three research articles. A basic property of quasiconformal mappings is local Hölder continuity. It has long been conjectured that this regularity holds at the Sobolev level (Gehring's higher integrability conjecture). Optimal regularity would also provide sharp bounds for the distortion of Hausdorff dimension. The higher integrability conjecture was solved in the plane by Astala in 1994 and is still open in higher dimensions. Thus in the plane we have a precise description of how Hausdorff dimension changes under quasiconformal deformations for general sets. The first two articles contribute to two remaining issues in the planar theory. The first concerns the distortion of more special sets; for rectifiable sets we expect improved bounds to hold. The second issue consists of understanding the distortion of dimension on a finer level, namely on the level of Hausdorff measures. In the third article we study flatness properties of quasiconformal images of spheres in a quantitative way. These also lead to nontrivial bounds for their Hausdorff dimension, even in the n-dimensional case.
  • Koivunoro, Hanna (Helsingin yliopisto, 2012)
    Boron neutron capture therapy (BNCT) is a biologically targeted radiotherapy modality. So far, 249 cancer patients have received BNCT at the Finnish Research Reactor 1 (FiR 1) in Finland. The effectiveness and safety of radiotherapy depend on the radiation dose delivered to the tumor and healthy tissues, and on the accuracy of the doses. At FiR 1, patient dose calculations are performed with the Monte Carlo (MC) based SERA treatment planning system. Initially, BNCT was applied to head and neck cancer, brain tumors, and malignant melanoma. To evaluate the applicability of BNCT to new target tumors, computational dosimetry studies are needed. So far, clinical BNCT has been performed with neutrons from a nuclear reactor, while accelerator-based neutron sources suitable for hospital operation would be preferable. In this thesis, the BNCT patient dose calculation practice in Finland was evaluated against reference calculations and experimental data in several cases. The suitability of deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion reaction based compact neutron sources for BNCT was evaluated. In addition, the feasibility of BNCT for noninvasive liver tumor treatments was examined. The deviation between SERA and the reference calculations was within 4% for the boron, nitrogen, and photon dose components everywhere except on the phantom or skin surface. These dose components produce 99% of the tumor dose and more than 90% of the healthy tissue dose at the points of relevance for treatment at the FiR 1 facility. A reduced voxel cell size in the SERA edit mesh improves calculation accuracy on the surface. The erratic biased fast-neutron run option in SERA led to significant underestimation (up to 30-60%) of the fast-neutron dose, while more accurate fast-neutron dose calculations without the biased option are too time-consuming for clinical practice.
A large (over 5%) deviation was found between the measured and calculated photon doses, which produce from 25% up to 50% or more of the healthy tissue dose at certain depths. The MC code version MCNP5 is applicable for modelling ionization chamber response within an accuracy of 2% ± 1%, which is sufficient for BNCT. Fusion-based neutron generators are applicable for BNCT treatments if yields of over 10^13 neutrons per second can be obtained. The simulations indicate that noninvasive liver BNCT with epithermal neutron beams can deliver a high tumor dose (about 70 weighted Gy units) to the shallow depths of the liver, while tumor doses in the deepest parts of the organ remain low (approximately 10 weighted Gy units) if the accumulation of boron in the tumor compared with that in the healthy liver is sixfold or less. The patient dose calculation practice is safe and accurate against reference methods for the major dose components induced by thermal neutrons. Final verification of the fast-neutron and photon dose calculations is limited by the high levels of uncertainty in existing measurement methods.
  • Kairema, Anna (Helsingin yliopisto, 2013)
    This dissertation contributes to two interrelated topics. The first contribution concerns the so-called systems of dyadic cubes in the context of metric spaces. The second contribution consists of applications to one- and two-weight norm inequalities for linear and sublinear positive integral operators. Both topics are important in harmonic analysis and are areas of ongoing study. The main novelties of the presented works consist of improving and extending existing results to more general frameworks. The work consists of four research articles and an introductory part. The first two articles, written in collaboration with T. Hytönen, study systems of dyadic cubes in metric spaces. In the Euclidean space, dyadic cubes are well known and define a convenient structure with useful covering and intersection properties. Such dyadic structures are central especially in the modern trend of harmonic analysis. In the first article, extensions of these structures are constructed in general geometrically doubling metric spaces. These consist of a refinement of existing constructions and a completely new construction of finitely many adjacent dyadic systems which behave like "translates" of a fixed system but without requiring a group structure. In this context, "cubes" are not cubes proper but rather more complicated sets that collectively have properties reminiscent of those in the Euclidean case. However, it is natural to ask what type of sets could or should be regarded as cubes. In the second paper, we give a complete answer to this question in the general framework of a geometrically doubling metric space, making use of the "plumpness" notion that has already appeared in geometric measure theory. Turning to the second topic, the two latter articles study weighted norm inequalities. Via the new construction of adjacent dyadic systems, weighted estimates for positive integral operators are obtained in a general framework.
In the third paper, the two-weight problem is investigated for potential-type operators. Both strong and weak type estimates are characterized by "testing type" conditions: to show the full norm inequality it suffices to test the desired estimate on a specific class of simple test functions only. The results improve some previous results in the sense that the considered ambient space is more general (with more general measures and no additional geometric assumptions) and the testing is over a countable collection of test functions only (instead of a significantly larger collection appearing in previous works on the topic). The main technical novelty of the proof is a decomposition of the operator along dyadic systems, giving rise to finitely many "dyadic" versions of the original operator. In the fourth article, the focus is on sharp constant estimates for generalized fractional integral operators. A positive answer and its sharpness are given in the context of a space of homogeneous type. The result is reduced to weak-type inequalities using the results of the third paper. The sharpness requires a construction of functions that locally behave similarly to the basic power functions on the Euclidean space. The result extends a recent Euclidean result.
Keywords: dyadic cube, adjacent dyadic systems, metric space, space of homogeneous type, potential type operator, testing condition, weighted norm inequality, sharp bound
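The Euclidean dyadic cubes that the metric-space constructions above generalize can be sketched concretely: generation-k cubes are products of intervals [m/2^k, (m+1)/2^k), and any two dyadic cubes are either disjoint or nested. A minimal illustration (the sample point is arbitrary):

```python
import math

def dyadic_cube(point, k):
    """Lower-left corner of the generation-k dyadic cube containing the point.

    Generation-k cubes tile R^n by products of intervals [m/2^k, (m+1)/2^k).
    """
    scale = 2 ** k
    return tuple(math.floor(x * scale) / scale for x in point)

def contains(corner_big, k_big, corner_small):
    """Check that the smaller cube's corner lies inside the bigger cube."""
    side = 2 ** -k_big
    return all(cb <= cs < cb + side for cb, cs in zip(corner_big, corner_small))

p = (0.3, 0.7)
c2 = dyadic_cube(p, 2)   # the generation-2 cube containing p
c3 = dyadic_cube(p, 3)   # the generation-3 cube containing p, nested in c2
```

The covering and nesting properties visible here are what the dissertation's constructions recover on geometrically doubling metric spaces, where no such explicit coordinate formula is available.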