Browsing by Title


  • Kankuri, Esko (Helsingin yliopisto, 2002)
  • Vakkari, Ville (Helsingin yliopisto, 2013)
    An aerosol is defined as solid or liquid particles suspended in a gas lighter than the particles, which means that the atmosphere we live in is itself an aerosol. Although aerosol particles are only a trace component of the atmosphere, they affect our lives in several ways. Aerosol particles can cause adverse health effects and deteriorate visibility, but they also affect the Earth's climate, directly by scattering and absorbing solar radiation and indirectly by modulating the properties of clouds. Anthropogenic aerosol particles have a net cooling effect on the climate, but the uncertainty in the amount of cooling is presently as large as the heating effect of carbon dioxide. To reduce the uncertainty in the aerosol climate effects, spatially representative reference data of high quality are needed for the global climate models. To capture the diurnal and seasonal variability, the data have to be collected continuously over time periods that cover at least one full seasonal cycle. Until recently such data have been nearly non-existent for continental Africa, and hence one aim of this work was to establish a permanent measurement station measuring the key aerosol particle properties at a continental location in southern Africa. In close collaboration with the North-West University in South Africa, this aim has now been achieved at the Welgegund measurement station. The other aims of this work were to determine the aerosol particle concentrations, including their seasonal and diurnal variation, and to study the most important aerosol particle sources in continental southern Africa. In this thesis the aerosol size distribution and its seasonal and diurnal variation are reported for different environments, ranging from a clean rural background to an anthropogenically heavily influenced mining region in continental southern Africa. 
Atmospheric regional-scale new particle formation has been observed at a record-high frequency, and it dominates the diurnal variation except in the vicinity of low-income residential areas, where domestic heating and cooking are a stronger source. The concentration of aerosol particles in sizes that can act as cloud condensation nuclei was found to increase during the dry season because of reduced wet removal and increased aerosol production from incomplete combustion, which can be either domestic heating or savannah and grassland fires, depending on location. During the wet season, new particle formation was shown to be an important source of particles in the size range of cloud condensation nuclei.
  • Hiekkalinna, Tero (Helsingin yliopisto, 2012)
    In this thesis, we developed software for automated genome-wide linkage and linkage disequilibrium analysis based on common gene mapping methods for qualitative and quantitative phenotypes. We further developed likelihood-based software for joint linkage and/or linkage disequilibrium (LD) analysis in general pedigrees based on a novel variation of the classical lod score approach, the so-called pseudomarker method, and evaluated its statistical properties as compared with the existing family-based association methods. This was done using real-life migraine and schizophrenia pedigree structures from Finland. In addition, we compared various study designs for association analysis and investigated statistical properties of the likelihood ratio test for conditional analysis of LD given linkage. First, we automated the laborious process of running a variety of genome-wide linkage and linkage disequilibrium analysis software packages, including ANALYZE, MERLIN, GENEHUNTER, and SOLAR. With this software tool, data file format conversion, and running of the analyses are completely automated. This tool has been applied to many large genome-wide mapping studies. Second, we developed user-friendly PSEUDOMARKER software, which performs likelihood-based linkage and/or linkage disequilibrium analysis in general pedigrees. This software allows for joint analysis of heterogeneous relationship structures, such as singletons (i.e. cases and controls), triads, sibships, and large multigenerational pedigrees. The performance of this software was evaluated in comparison to the existing repertoire of other family-based association methods. Third, we performed an extensive simulation study to investigate the statistical properties (i.e. type-I error and power) of PSEUDOMARKER and other commonly used family-based association methods. 
Our results demonstrate that many widely used methods are not valid for testing LD in the presence of linkage, and that likelihood-based methods which can properly account for missing data and individual relationships in pedigrees, such as PSEUDOMARKER, outperform the other approaches over a wide variety of etiological models. We also demonstrated that association mapping in families is far more powerful than in population-based samples. Fourth, we investigated the statistical properties of the likelihood ratio test for association conditional on linkage when inaccurate parametric models were used. Our results showed that while in most situations the test performs appropriately even when the parametric model is misspecified, under certain conditions, when there is complete linkage between disease and marker loci, overly deterministic dominant analysis models can lead to false inferences of LD in the presence of linkage when the true etiological model is recessive in character. In this study, we have developed powerful and easy-to-use tools for joint analysis of linkage and LD in general pedigrees and unrelated individuals, and have demonstrated the superiority of such methods in the general case. Our results provide important information for the human genetics community about optimal ways to collect and analyze data.
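As background for the classical lod score approach on which the pseudomarker method builds, the two-point lod score compares the likelihood of the observed pedigree data at a recombination fraction θ with the likelihood under free recombination (θ = 1/2). This is the standard textbook form, not necessarily the exact statistic implemented in PSEUDOMARKER:

```latex
\mathrm{LOD}(\theta) \;=\; \log_{10}
\frac{L(\text{pedigree data} \mid \theta)}
     {L\!\left(\text{pedigree data} \mid \theta = \tfrac{1}{2}\right)}
```

The score is maximized over θ ∈ [0, 1/2], and a maximized lod score above 3 has traditionally been taken as significant evidence of linkage.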
  • Saltikoff, Elena (Helsingin yliopisto, 2011)
    Mesoscale weather phenomena, such as the sea breeze circulation or lake-effect snow bands, are typically too large to be observed at one point, yet too small to be caught in a traditional network of weather stations. Hence, the weather radar is one of the best tools for observing, analyzing and understanding their behavior and development. A weather radar network is a complex system with many structural and technical features to be tuned, from the location of each radar to the number of pulses averaged in the signal processing. These design parameters have no universal optimal values; their selection depends on the nature of the weather phenomena to be monitored as well as on the applications for which the data will be used. The priorities and critical values are different for forest fire forecasting, aviation weather service or the planning of snow ploughing, to name a few radar-based applications. The main objective of the work performed within this thesis has been to combine knowledge of the technical properties of radar systems with our understanding of weather conditions in order to produce better applications that can efficiently support decision making in weather- and safety-related service duties for modern society in northern conditions. When a new application is developed, it must be tested against "ground truth". Two new verification approaches for radar-based hail estimates are introduced in this thesis. For mesoscale applications, finding a representative reference can be challenging, since these phenomena are by definition difficult to catch with surface observations. Hence, almost any valuable information that can be distilled from unconventional data sources, such as newspapers and holiday photographs, is welcome. However, just as important as obtaining the data is obtaining estimates of data quality, and judging to what extent the two disparate information sources can be compared. 
The presented new applications do not rely on radar data alone, but ingest information from auxiliary sources such as temperature fields. The author concludes that in the future the radar will continue to be a key source of data and information, especially when used effectively together with other meteorological data.
  • Hannula, Miika (Helsingin yliopisto, 2015)
    Dependence logic is a novel logical formalism that has connections to database theory, statistics, linguistics, social choice theory, and physics. Its aim is to provide a systematic and mathematically rigorous tool for studying notions of dependence and independence in different areas. Recently many variants of dependence logic have been studied in the contexts of first-order, modal, and propositional logic. In this thesis we examine independence and inclusion logic, two variants of dependence logic that extend first-order logic with so-called independence or inclusion atoms, respectively. The work consists of two parts, in which we study either axiomatizability or expressivity hierarchies regarding these logics. In the first part we examine whether there exist natural parameters of independence and inclusion logic that give rise to infinite expressivity or complexity hierarchies. Two main parameters are considered: the arity of a dependency atom and the number of universal quantifiers. We show that for both logics, the notion of arity gives rise to strict expressivity hierarchies. With respect to the number of universal quantifiers, however, strictness or collapse of the corresponding hierarchies turns out to be relative to the choice of semantics. In the second part we turn our attention to axiomatizations. Due to their complexity, dependence and independence logic cannot have a complete recursively enumerable axiomatization. Hence, restricting attention to partial solutions, we first axiomatize all first-order consequences of independence logic sentences, thus extending an analogous result for dependence logic. We also consider the class of independence and inclusion atoms, and show that it can be axiomatized using implicit existential quantification. For relational databases this implies a sound and complete axiomatization of embedded multivalued dependencies and inclusion dependencies taken together. 
Lastly, we consider keys together with so-called pure independence atoms and prove both positive and negative results regarding their finite axiomatizability.
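For orientation, the atoms discussed above have the following standard team semantics, where a team X is a set of assignments; this is the usual textbook formulation, not notation taken from the thesis itself:

```latex
% Dependence atom: the values of \bar{x} functionally determine y
X \models {=}(\bar{x}, y)
  \iff \forall s, s' \in X:\; s(\bar{x}) = s'(\bar{x}) \Rightarrow s(y) = s'(y)

% Inclusion atom: every value of \bar{x} in X also occurs as a value of \bar{y}
X \models \bar{x} \subseteq \bar{y}
  \iff \forall s \in X\; \exists s' \in X:\; s(\bar{x}) = s'(\bar{y})

% Independence atom: values of \bar{x} and \bar{y} occur in all combinations
X \models \bar{x} \perp \bar{y}
  \iff \forall s, s' \in X\; \exists s'' \in X:\;
       s''(\bar{x}) = s(\bar{x}) \wedge s''(\bar{y}) = s'(\bar{y})
```

The arity parameter mentioned above is simply the length of the tuples \bar{x}, \bar{y} appearing in these atoms.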
  • Helin, Mari (Helsingin yliopisto, 2014)
    TEACHERS' PROFESSIONAL DEVELOPMENT AS A CONTINUUM: Educational Partnership Between the University and Schools. Building a continuum between teachers' initial and in-service education is a challenge in Finland as well as worldwide. A basic principle and overall model for in-service education is missing. A possible in-service education model for the future is considered in this research. The theoretical part of the study focuses on teachers' professional development and the partnership between university and school. This partnership includes teacher-to-teacher mentoring, which has recently been successfully implemented in Finland. The needs for in-service education were investigated through a systematic literature review of international in-service education models. The position of development models of Finnish in-service education in teacher education programmes was also determined. Different stakeholders, including teachers, headmasters, school administrators, university teacher educators, other teacher educators and state education administrators, were interviewed in groups. In-service education as a continuum of professional development should be every teacher's right and duty, and in-service education could facilitate teachers' tasks, improve their performance and boost their salaries. Utilizing the latest research findings in in-service education was seen by the interviewees as important, which is why universities were regarded as the most suitable organizers of in-service education. The school community, and mentoring in particular, was seen as the main avenue for in-service education. The interviewees felt that the school community will be important in the future as teachers continue to develop together and the school becomes a learning community. The state and community were unanimously seen as the financers of in-service education. 
In the model proposed here, continuing in-service education contains four vertical continuums, which are preceded by induction education. Following the induction phase, any of the four continuums can be selected: the research-based continuum, the practice-based continuum for working-life needs, the partnership continuum and the updating continuum. These continuums would guarantee flexible in-service education possibilities for teachers. Having a number of obligatory courses would ensure that teachers educate themselves regularly. In addition, teachers could be provided with opportunities to educate themselves more continuously. Future in-service education on the basis of these four continuums strives for constant education based on a learning community. Keywords: pre-service education, in-service education, professional development, lifelong education, continuum, school-university partnership, learning community, mentoring, Interplay research project
  • Kilpiö, Anna (University of Helsinki, 2008)
    This study investigates primary and secondary school teachers' social representations and ways of conceptualising new technologies. The focus is both on teachers' descriptions, interpretations and conceptions of technology and on the adoption and formation of these conceptions. In addition, the purpose of this study is to analyse how the national objectives of the information society and the implementation of information and communication technologies (ICT) in schools are reflected in teachers' thinking and everyday practices. The starting point for the study is the idea of a dynamic and mutual relationship between teachers and technology, such that technology does not one-sidedly affect teachers' thinking. This relationship is described in this study as the teachers' technology relationship. This concept emphasises that technology cannot be separated from society, social relations and the context where it is used, but is intertwined with societal practices and is therefore formed in interaction with material and social factors. The theoretical part of this study encompasses three different research traditions: 1) the social shaping of technology, 2) research on how schools and teachers use technology and 3) social representations theory. The study was part of the Helmi Project (Holistic development of e-Learning and business models) in 2001–2005 at the Helsinki University of Technology, SimLab research unit. The Helmi Project focused on different aspects of the utilisation of ICT in teaching. The research data consisted of interviews with teachers and principals. Altogether 37 interviews were conducted in 2003 and 2004 in six different primary and secondary schools in Espoo, Finland. The data were analysed by applying grounded theory. The results showed that the teachers' technology relationship is diverse and context-specific. 
Technology was interpreted differently depending on the context: the teachers' technology-related descriptions and metaphors illustrated, on the one hand, the benefits and possibilities and, on the other hand, the problems and threats of different technologies. The dualistic nature of technology was also expressed in the teachers' thinking about technology as simultaneously a deterministic and irrevocable force and a controllable and functional tool. Teachers did not consider technology as having a stable character; rather, they interpreted technology in relation to the variable context of use. In this way they positioned, or anchored, technology into their everyday practices. The study also analysed the formation of the teachers' technology relationship and the ways teachers familiarise themselves with new technologies. Comparison of different technologies, as well as technology-related metaphors, turned out to be significant in forming the technology relationship. The ways teachers described the familiarisation process and their interpretations of their own technical skills also affected the formation of the technology relationship. In addition, teachers defined technology together with other teachers, and these discussions reflected teachers' interpretations and descriptions.
  • Frilander-Paavilainen, Eeva-Liisa (Helsingin yliopisto, 2005)
  • Svinhufvud, Kimmo (Helsingin yliopisto, 2013)
    This study examines two types of interaction closely connected to the writing of a master's thesis: the seminar and the supervisory encounters between the author and the supervisor. The study consists of a summary and four original articles that analyze thesis interaction from various points of view. The focus is on peer feedback in the seminar, participation in the seminar, and the role of documents in the supervisory encounter. The study uses the method of conversation analysis to analyze encounters videotaped in the seminars and videotaped supervisory encounters. The main research questions are: 1) What do the forms of interaction connected to writing a thesis consist of? 2) How can the seminar and supervision encounters be pedagogically developed? The study analyzes the discussant's text feedback as consisting of three activities: questioning, praise, and so-called problem-solution feedback. The feedback turn is structurally complicated, because the discussant's feedback has several, partly conflicting goals, and the feedback needs to meet several interactional challenges. Since text feedback is the dominant form of activity in the seminar, discussion and participation face particular challenges. In the supervisory encounters, the study concentrates on the role of written documents in the openings of the encounters. A key observation is that the role of the documents is dominant. Documents are used to topicalize the thesis work, and various kinds of encounter openings are tied to the documents. The results of the study are compared to the literature on university pedagogy and the pedagogy of writing. Some practical suggestions are also given in order to further develop supervisory interaction.
  • Kivisaari, Reetta (Helsingin yliopisto, 2008)
    Background: Opioid dependence is a chronic, severe brain disorder associated with enormous health and social problems. The rate of relapse back to opioid abuse is very high, especially in early abstinence, but neuropsychological and neurophysiological deficits during opioid abuse, or soon after cessation of opioids, have scarcely been investigated. The structural brain changes and their correlations with the length of opioid abuse or with abuse onset age are also not known. In this study the cognitive functions, the neural basis of cognitive dysfunction, and structural brain changes were studied in opioid-dependent patients and in age- and sex-matched healthy controls. Materials and methods: All subjects participating in the study (23 opioid-dependent patients, of whom 15 were also benzodiazepine co-dependent and five cannabis co-dependent, and 18 healthy age- and sex-matched controls) went through Structured Clinical Interviews (SCID) to obtain DSM-IV axis I and II diagnoses and to exclude psychiatric illness not related to opioid dependence or personality disorders. Simultaneous magnetoencephalography (MEG) and electroencephalography (EEG) measurements were made on 21 opioid-dependent individuals on the day of hospitalization for withdrawal therapy. The neural basis of auditory processing was studied, and pre-attentive attention and sensory memory were investigated. During withdrawal, 15 opioid-dependent patients participated in neuropsychological tests measuring fluid intelligence, attention and working memory, verbal and visual memory, and executive functions. Fifteen healthy subjects served as controls for the MEG-EEG measurements and neuropsychological assessment. Brain magnetic resonance imaging (MRI) was obtained from 17 patients after approximately two weeks of abstinence, and from 17 controls. 
The areas of different brain structures and the absolute and relative volumes of the cerebrum, cerebral white and gray matter, and cerebrospinal fluid (CSF) spaces were measured, and the Sylvian fissure ratio (SFR) and bifrontal ratio were calculated. Correlations between the cerebral measures and neuropsychological performance were also computed. Results: MEG-EEG measurements showed that, compared to controls, the opioid-dependent patients had a delayed mismatch negativity (MMN) response to novel sounds in the EEG, and a delayed P3am on the hemisphere contralateral to the stimulated ear in MEG. The equivalent current dipole (ECD) of the N1m response was stronger in patients with benzodiazepine co-dependence than in those without benzodiazepine co-dependence or in controls. In early abstinence the opioid-dependent patients performed more poorly than the controls in tests measuring attention and working memory, executive function and fluid intelligence. Test results of the Culture Fair Intelligence Test (CFIT), testing fluid intelligence, and the Paced Auditory Serial Addition Test (PASAT), measuring attention and working memory, correlated positively with the number of days of abstinence. MRI measurements showed that the relative volume of CSF was significantly larger in the opioid-dependent patients, which could also be seen in visual analysis. The Sylvian fissures, expressed by the SFR, were also wider in patients, and this widening correlated negatively with the age of opioid abuse onset. In controls the relative gray matter volume had a positive correlation with composite cognitive performance, but this correlation was not found in the opioid-dependent patients in early abstinence. Conclusions: Opioid-dependent patients had wide Sylvian fissures and CSF spaces, indicating frontotemporal atrophy. Dilatation of the Sylvian fissures correlated with the abuse onset age. During early withdrawal, the cognitive performance of opioid-dependent patients was impaired. While intoxicated, pre-attentive attention to novel stimuli was delayed, and benzodiazepine co-dependence impaired sound detection. 
All these changes point to disturbances in frontotemporal areas.
  • Korhonen, Seija (Helsingin yliopisto, 2012)
    Both in Finland and abroad, basic Finnish language skills are usually taught up to level B of the Common European Framework of Reference. The main objective of this study is to bring out the learners' view: the perceptions and the competences of educated adults who have studied as far as this level, that is, completed the basic courses in Finnish. The number of informants totals 345. They represent 38 native languages and have studied Finnish as a second or foreign language. To obtain as comprehensive an understanding as possible of the target group's Finnish language learning, the study applies two different research methods: introspection and error analysis. Introspective data were collected with a form from all 345 informants, asking them, among other things, what they find difficult and what easy in the Finnish language. The error analysis material consists of essays written by 20 informants. The basic hypothesis of this study is that socio-cultural factors, such as linguistic and cultural background, may affect the learners' perceptions of the Finnish language and the development of their Finnish language skills. The introspective answers highlight dozens of difficult and easy aspects of the Finnish language. The informants mentioned 50 difficult and 47 easy aspects, a more or less equal amount, with over one-third of the aspects found to be the same. While difficulties are mentioned more frequently, the most commonly brought-up aspects come from relatively few informants. The essays conform to the standards of written Finnish 85% of the time, and the medians of the most common types of errors are quite low. The research data point to the Finnish language not being all that difficult, at least not after the completion of basic courses. 
Nevertheless, this study reveals a number of issues related to the learning and teaching of communicative language competences, which deserve to be contemplated in order to further develop basic courses in Finnish and to integrate courses of different reference levels into a constructive system that provides increasingly better support for goal-oriented learning. The results also show that it is worthwhile listening to learners' voices and using introspection to support the planning of instruction: despite individual differences, the perceptions and competences show a good statistical correlation. It would seem that the socio-cultural factors I have studied do not have a statistical impact on learners at this advanced level; one could also say that the socio-cultural factors common to educated informants have a statistically homogenising influence. Keywords: Finnish as a second language, Finnish as a foreign language, perceptions, competences, introspection, error analysis
  • Helander, Jaakko (Helsingin yliopisto, 2000)
  • Ahonen, Heli (Helsingin yliopisto, 2008)
    Reciprocal development of the object and subject of learning: the renewal of the learning practices of front-line communities in a telecommunications company as part of the techno-economic paradigm change. Current changes in production have been seen as an indication of a shift from the techno-economic paradigm of the mass-production era to a new paradigm of the information and communication technological era. The rise of knowledge management in the late 1990s can be seen as one aspect of this paradigm shift, as knowledge creation and customer responsiveness were recognized as the prime factors in business competition. However, paradoxical conceptions concerning learning and agency have been presented in the discussion of knowledge management. One prevalent notion in the literature is that learning is based on individuals' voluntary actions, and this has now become incompatible with the growing interest in knowledge-management systems. Furthermore, the commonly held view of learning as a general process that is independent of the object of learning contradicts the observation that the current need for new knowledge and new competences is caused by ongoing techno-economic changes. Even though the current view acknowledges that individuals and communities have key roles in knowledge creation, this conception defies the idea of individuals' and communities' agency in developing the practices through which they learn. This research therefore presents a new theoretical interpretation of learning and agency based on Cultural-Historical Activity Theory. This approach overcomes the paradoxes in knowledge-management theory and offers means for understanding and analyzing changes in the ways of learning within work communities. This research is also an evaluation of the Competence-Laboratory method, which was developed as part of the study as a special application of Developmental Work Research methodology. 
The research data comprise the videotaped competence-laboratory processes of four front-line work communities in a telecommunications company. The findings reported in the five articles included in this thesis are based on analyses of these data. The new theoretical interpretation offered here is based on the assessment that the findings reported in the articles represent one of the front lines of the ongoing historical transformation of work-related learning, since the research site represents one of the key industries of the new "knowledge society". The research can be characterized as the elaboration of a hypothesis concerning the development of work-related learning. According to the new theoretical interpretation, the object of activity is also the object of distributed learning in work communities. The historical socialization of production has increased the number of actors involved in an activity, which has also increased the number of mutual interdependencies as well as the need for communication. Learning practices and organizational systems of learning are historically developed forms of distributed learning, mediated by specific forms of division of labor, specific tools, and specific rules. However, the learning practices of the mass-production era are becoming increasingly inadequate to accommodate the conditions of the new economy. This was manifested in the front-line work communities at the research site as an aggravating contradiction between the new objects of learning and the prevailing learning practices. The constituent element of this new theoretical interpretation is the idea of a work community's learning as part of its collaborative mastery of the developing business activity. The development of the business activity is at the same time a practical and an epistemic object for the community. 
This kind of changing object cannot be mastered by using learning practices designed for the stable conditions of mass production, because learning has to change along with the changes in the business. According to the model introduced in this thesis, the transformation of learning proceeds through specific stages: predefined learning tasks are first transformed into learning through re-conceptualizing the object of the activity and of the joint learning, and then, as the new object becomes stabilized, into the creation of new kinds of learning practices to master the re-defined object of the activity. This transformation of the form of learning is realized through a stepwise expansion of the work community's agency. To summarize, the conceptual model developed in this study sets the tool-mediated co-development of the subject and the object of learning as the theoretical starting point for developing new, second-generation knowledge-management methods. Key words: knowledge management, learning practice, organizational system of learning, agency
  • Nikitin, Timur (Helsingin yliopisto, 2013)
    Silicon nanocrystals (Si-nc) embedded in a SiO₂ matrix are a promising system for silicon-based photonics. We studied the optical and structural properties of Si-rich silicon oxide SiOₓ (x < 2) films annealed in a furnace at temperatures up to 1200 °C and containing Si-nc. The measured optical properties of the SiOₓ films are compared with values estimated by using the effective medium approximation and X-ray photoelectron spectroscopy (XPS) results. A good agreement is found between the measured and calculated refractive index. The results for absorption suggest high transparency of the nanoscale suboxide. The extinction coefficient for elemental Si is found to be between the values for crystalline and amorphous Si. Thermal annealing increases the degree of Si crystallization; however, the Si–SiO₂ phase separation is not complete even after annealing at 1200 °C. The 1.5-eV photoluminescence probably originates from small (~1 nm) oxidized Si grains or oxygen-related defects, but not from Si-nc with sizes of about 4 nm. The SiOₓ films prepared by molecular beam deposition and ion implantation are structurally and optically very different after preparation but become similar after annealing at ~1100 °C. The laser-induced thermal effects found for SiOₓ films on silica substrates illuminated by focused laser light should be taken into account in optical measurements. Continuous-wave laser irradiation can produce very high temperatures in free-standing SiOₓ and Si/SiO₂ superlattice films, which changes their structure and optical properties. The center of a laser-annealed area is very transparent and consists of amorphous SiO₂. Large Si-nc (up to 300 nm) are observed in the ring around the central region. These Si-nc produce high absorption, and they are typically under compressive stress, which is connected with crystallization from the melt phase. Some of the large Si-nc exhibit surface features, which are interpreted in terms of the eruption of pressurized Si from the film. 
Some of the large Si-nc are removed from the film, leaving holes of similar sizes. The presence of oxygen in the laser-annealing atmosphere decreases the number of removed Si-nc. The structure of the laser-annealed areas is explained by thermodiffusion, which leads to macroscopic Si–SiO₂ phase separation. Comparing the structure of the central regions after laser annealing in oxygen, air, and inert atmospheres rules out Si oxidation as the dominant effect in the formation of the laser-annealed area. By using a strongly focused laser beam, the structural changes in the free-standing films can be confined to submicron areas, which suggests a concept of nonvolatile optical memory with high information density and superior thermal stability.
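The effective-medium comparison described in the abstract can be illustrated with a short numerical sketch. This is a generic Bruggeman effective-medium calculation for a Si/SiO₂ composite, not the specific procedure of the thesis; the refractive indices (n_Si ≈ 3.87 and n_SiO₂ ≈ 1.46, roughly appropriate near 633 nm) and any volume fraction plugged in are illustrative assumptions.

```python
import math

def bruggeman_n_eff(f_si, n_si=3.87, n_sio2=1.46):
    """Effective refractive index of a Si/SiO2 composite from the
    Bruggeman effective medium approximation (real indices only).

    f_si : volume fraction of elemental Si (0..1). The default
    indices are illustrative values near 633 nm, not thesis data.
    """
    e1, e2 = n_si**2, n_sio2**2  # permittivities of Si and SiO2
    # Bruggeman condition: f*(e1-e)/(e1+2e) + (1-f)*(e2-e)/(e2+2e) = 0
    # rearranges to the quadratic 2e^2 - b*e - e1*e2 = 0 with:
    b = f_si * (2*e1 - e2) + (1 - f_si) * (2*e2 - e1)
    e_eff = (b + math.sqrt(b*b + 8*e1*e2)) / 4.0  # positive (physical) root
    return math.sqrt(e_eff)
```

Given a Si volume fraction estimated, for example, from an XPS bonding analysis, the resulting n_eff could then be compared with a measured refractive index, which is the spirit of the comparison reported in the abstract.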
  • Backman, John (Helsingin yliopisto, 2015)
    Aerosol particles are part of the Earth's climatic system and can significantly impact the climate. Their ability to do so depends mainly on the size, concentration and chemical composition of the particles. Aerosol particles can act as cloud condensation nuclei (CCN) and can therefore mediate cloud properties; they can thus perturb the energy balance of the Earth through clouds. Aerosol particles can also interact directly with solar radiation through scattering, absorption, or both. The climatic implications of aerosol-radiation interactions depend on the Earth's surface properties and on the amount of light scattering in relation to light absorption. Light-absorbing aerosol particles, in particular, can alter the vertical temperature structure of the atmosphere and inhibit the formation of convective clouds. The net change in the energy balance imposed by perturbing agents, such as aerosol particles, results in a radiative forcing. Globally, aerosol particles have a net cooling effect on the climate, but not necessarily on a local scale. Accurate measurements of the optical properties of aerosol particles are needed to estimate the climatic effects of aerosols. A widely used means of measuring light absorption by aerosol particles is the filter-based measurement technique, which relies on light-transmission measurements through a filter as the aerosol sample is drawn through it and particles deposit onto it. As the sample deposits, it inevitably interacts with the fibres of the filter, and these interactions need to be taken into account. This thesis investigates different approaches to dealing with filter-induced artefacts and how they affect aerosol light absorption derived with this technique. In addition, the articles included in the thesis report aerosol optical properties at sites that have not previously been reported in the literature.
The locations range from an urban environment in the city of São Paulo, Brazil, and an industrialised region of the South African Highveld to a rural station in Hyytiälä, Finland. In general, sites distant from urban areas tend to scatter more light in relation to light absorption, whereas in urban areas the optical properties show the aerosol particles to be darker.
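The scattering-to-absorption contrast described above is commonly quantified by the single-scattering albedo, ω₀ = σ_sp / (σ_sp + σ_ap): the lower ω₀, the "darker" the aerosol. A minimal sketch with purely illustrative coefficients (not values from the thesis):

```python
def single_scattering_albedo(sigma_sp, sigma_ap):
    """Single-scattering albedo from the scattering (sigma_sp) and
    absorption (sigma_ap) coefficients, given in the same units
    (e.g. Mm^-1). The numbers below are illustrative, not thesis data."""
    return sigma_sp / (sigma_sp + sigma_ap)

# A remote site that scatters much more than it absorbs ...
remote = single_scattering_albedo(sigma_sp=20.0, sigma_ap=1.0)
# ... versus an urban site with more light-absorbing soot:
urban = single_scattering_albedo(sigma_sp=50.0, sigma_ap=15.0)
```

The remote case gives ω₀ close to 1, while the urban case gives a markedly lower value, matching the qualitative conclusion of the abstract.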
  • Rasmus, Kai (Helsingin yliopisto, 2009)
    The Antarctic system comprises the continent itself, Antarctica, and the ocean surrounding it, the Southern Ocean. The system plays an important part in the global climate due to its size, its high-latitude location and the negative radiation balance of its large ice sheets. Antarctica has also been in focus for several decades due to increased ultraviolet (UV) levels caused by stratospheric ozone depletion, and due to the disintegration of its ice shelves. In this study, measurements were made during three austral summers to study the optical properties of the Antarctic system and to produce radiation information for additional modeling studies related to specific phenomena found in the system. During the summer of 1997-1998, measurements of beam absorption and beam attenuation coefficients, and of downwelling and upwelling irradiance, were made in the Southern Ocean along a S-N transect at 6°E. The attenuation of photosynthetically active radiation (PAR) was calculated and used together with hydrographic measurements to judge whether the phytoplankton in the investigated areas of the Southern Ocean are light limited. Using the Kirk formula, the diffuse attenuation coefficient was linked to the absorption and scattering coefficients. The diffuse attenuation coefficients for PAR (K_PAR) were found to vary between 0.03 and 0.09 m⁻¹. Using the values for K_PAR and the definition of the Sverdrup critical depth, the studied Southern Ocean plankton systems were found not to be light limited. Variability in the spectral and total albedo of snow was studied in the Queen Maud Land region of Antarctica during the summers of 1999-2000 and 2000-2001. The measurement areas were the vicinity of the South African Antarctic research station SANAE 4 and a traverse near the Finnish Antarctic research station Aboa. The midday mean total albedos for snow were between 0.83 (clear skies) and 0.86 (overcast skies) at Aboa, and between 0.81 and 0.83 at SANAE 4.
The mean spectral albedo levels at Aboa and SANAE 4 were very close to each other, and the variations in the spectral albedos were due more to differences in ambient conditions than to variations in snow properties. A Monte Carlo model was developed to study the spectral albedo and to develop a novel nondestructive method for measuring the diffuse attenuation coefficient of snow. The method was based on the decay of upwelling radiation moving horizontally away from a source of downwelling light, which was assumed to be related to the diffuse attenuation coefficient. In the model, the attenuation coefficient obtained from the upwelling irradiance was higher than that obtained using vertical profiles of downwelling irradiance. The model results were compared to field measurements made on dry snow in Finnish Lapland, and the two correlated reasonably well. Low-elevation (below 1000 m) blue-ice areas may experience substantial melt-freeze cycles due to absorbed solar radiation and the low heat conductivity of the ice. A two-dimensional (x-z) model was developed to simulate the formation of subsurface ponds and the water circulation within them. The model results show that for a physically reasonable parameter set the formation of liquid water within the ice can be reproduced. The results, however, are sensitive to the chosen parameter values, and their exact values are not well known. Vertical convection and a weak overturning circulation are generated, stratifying the fluid and transporting warmer water downward, thereby causing additional melting at the base of the pond. In a 50-year integration, a global warming scenario, mimicked by an increase in air temperature of 3 degrees per 100 years, leads to a general increase in subsurface water volume; the ice did not disintegrate due to the air-temperature increase over the 50-year integration.
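The light-limitation argument in this abstract can be made concrete with a back-of-envelope calculation from the reported K_PAR range. For exponential attenuation, the depth at which PAR falls to 1% of its surface value is ln(100)/K_PAR. This sketch uses only that relation; the full Sverdrup critical-depth criterion requires additional inputs (e.g. the compensation irradiance), so the mixed-layer depth used here is an invented illustrative value, not a thesis result.

```python
import math

def euphotic_depth_m(k_par):
    """Depth (m) at which PAR drops to 1% of its surface value,
    assuming E(z) = E0 * exp(-k_par * z) with k_par in 1/m."""
    return math.log(100.0) / k_par

# K_PAR range reported in the abstract, in 1/m:
z_clear = euphotic_depth_m(0.03)   # clearest water: roughly 150 m
z_turbid = euphotic_depth_m(0.09)  # most turbid water: roughly 50 m

# With an illustrative mixed-layer depth shallower than even the
# turbid-water 1% light depth, mixed phytoplankton stay well lit:
mixed_layer_m = 40.0               # assumed value, not from the thesis
light_limited = mixed_layer_m > z_turbid
```

Even at the upper end of the measured K_PAR range, the 1% light level lies deeper than the assumed mixing depth, pointing in the same direction as the abstract's conclusion that the studied plankton systems were not light limited.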
  • Wallin, Anders (Helsingin yliopisto, 2011)
    Molecular machinery on the micro-scale, believed to comprise the fundamental building blocks of life, involves forces of 1-100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and the mechanisms of protein/nucleic-acid interactions at the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results indicate that the presented models describing the signal-to-noise ratio in constant-force experiments and feedback-control experiments in optical tweezers agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results from a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved.
The developed instrument and the presented results on feedback control serve as a stepping stone for future contributions to the growing field of single-molecule biology.
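The role of trap stiffness in the signal-to-noise ratio of constant-force stepping experiments can be illustrated with the equipartition theorem: a bead in a harmonic trap of stiffness k has RMS positional noise sqrt(kB·T/k) before any time averaging. This is a textbook estimate, not the thesis's full bandwidth-dependent analysis; the stiffness, step size and temperature values are illustrative assumptions.

```python
import math

KBT = 4.11  # thermal energy kB*T at ~298 K, in pN*nm (assumed)

def rms_noise_nm(k_trap):
    """Equipartition RMS position fluctuation of a trapped bead:
    <x^2> = kB*T / k, so sigma = sqrt(kB*T / k), with k in pN/nm."""
    return math.sqrt(KBT / k_trap)

# Passive trap with an illustrative stiffness of 0.05 pN/nm:
sigma_passive = rms_noise_nm(0.05)
# Position clamp raising the effective stiffness tenfold, as the
# order-of-magnitude increase reported in the abstract:
sigma_clamped = rms_noise_nm(0.5)
# Naive SNR for an 8-nm motor step (illustrative) before averaging:
snr_passive = 8.0 / sigma_passive
snr_clamped = 8.0 / sigma_clamped
```

Raising the effective stiffness tenfold shrinks the thermal position noise by sqrt(10) ≈ 3.2 and correspondingly raises the naive step SNR, which is why the reported order-of-magnitude stiffening matters for resolving motor steps.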