Browsing by Title


Now showing items 19220-19239 of 28496
  • Molari, Juha (Helsingin yliopisto, 2009)
    In the course of my research for my thesis The Q Gospel and Psychohistory, I moved on from the accounts of the Cynics' ideals to psychohistorical explanations. Studying the texts dealing with the Cynics and the Q Gospel, I was struck by the fact that these texts portrayed people living in greater poverty than they had to. I paid particular attention to the fact that the Q Gospel was born in traumatising, warlike circumstances. Psychiatric traumatology helped me understand the Q Gospel and other ancient documents with historical approaches in a way that complies with modern behavioural science. Even though I found some answers to the questions I had posed in my research, the main result of my work is the justification of the question: is there a connection between the ethos expressed through the religious language of the Q Gospel and the predominantly war-related life experiences typical of Palestine at the time? As a number of studies have convincingly shown, traumatic events contribute to the development of psychotic experiences. I approached the problematic nature, significance and complexity of the ideal of poverty and this warlike environment by clarifying the history of psychohistorical literary research and the interpretative contexts associated with Sigmund Freud, Jacques Lacan and Melanie Klein. It is justifiable to question abnormal mentality, but there is no reliable way to trace the abnormal mentality described in any particular text back to a single causal factor. The popular research tendency based on the Oedipus complex is just as controversial as the Oedipus complex itself. The sociological frameworks concerning moral panics and political paranoia about an outer and inner danger fit quite well with the construction of the Q Gospel. Jerrold M. Post, M.D., Professor of Psychiatry, Political Psychology and International Affairs at George Washington University, and founder and director of the Center for the Analysis of Personality and Political Behavior for the Central Intelligence Agency, has focused on the role played by charisma in attracting followers and has detailed the psychological styles of a "charismatic" leader. Among other books, he wrote Political Paranoia and Leaders and Their Followers in a Dangerous World: The Psychology of Political Behavior. His psychoanalytic vocabulary was useful for my understanding of the minds and motivations involved in the Q Gospel's formation. The Q sect began to live in a predestined future, with the reality and safety of this world having collapsed in both their experience and their fantasies. The deep and clear-cut divisions into good and evil that are expressed in the Q Gospel reveal the powerful nature of destructive impulses, envy and overwhelming anxiety. The people responsible for the Q Gospel's origination tried to mount an ascetic defense against anxiety, denying their own needs, focusing their efforts on another objective (God's Kingdom) and regressing to a submissive earlier phase of development (a child's carelessness). This spiritual process was primarily an ecclesiastic or group-dynamical tactic to support the power of group leaders.
  • Kolkkinen, Juha (2008)
    The aim of this study is to evaluate how the key actors of the purchaser-provider model in the City of Helsinki Group perceive a sample of statements formulated about the model. The evaluation covers service quality, economic savings, the benefits of competition, and competence. The study also examines to what extent differences of opinion about the statement sample occur within the participating groups. The theoretical starting points of the study are evaluation theory and the naturalistic evaluation method. As an evaluation approach, the study represents post-positivist multiple constituency evaluation. The study was carried out using Q methodology. According to the results, political affiliation and gender had no bearing on how the statements were valued. The study shows that an undesirable side effect of the purchaser-provider model is the polarisation of opinions between the different actors, which creates potential conflicts. In the joint Q sorting of all participant groups, three distinct statement families emerged. Two of these statement families perceived the purchaser-provider model positively, while one held that the model achieves none of the goals set for it. Based on the results, each stakeholder group evaluates the model subjectively, only from its own premises. Group members do not connect the statements about the model to the service strategy that the model is used to implement. This reveals that the general debate about the model is strongly politicised and reflects the stakeholders' own interests.
  • Torres Fernandez de Castro, Jose Guillermo (2015)
    This thesis is an attempt to find alternative ways of approaching the study of values and political attitudes. The theoretical framework used for this purpose is Schwartz's theory of basic human values. Value profiles are elaborated for ten individual interviews and one focus group. Quantitatively, the Schwartz Value Questionnaire produced scores for each participant. Using Qualitative Content Analysis (QCA), a different profile based on quotations is generated. The results suggest that both measurements inform about the priorities of the interviewee, and that numeric scores can be helpful for understanding the relevance of certain political attitudes expressed in the semi-structured interviews, such as the perceived dimensions of political competition. Additionally, data from five focus groups, conducted with participants from five different municipalities of the State of Mexico, was analyzed using QCA. The qualitative as well as the quantitative differences between the five groups suggest that this method, combined with the framework of Schwartz's basic human values, produces meaningful results that can be related to the socioeconomic profiles of the municipalities.
  • Wallenius, Tarja (Helsingfors universitet, 2010)
    In this study, a quality assessment method based on sampling of primary laser inventory units (microsegments) was analysed. The accuracy of a laser inventory carried out in Kuhmo was analysed as a case study. Field sample plots were measured on the sampled microsegments in the Kuhmo inventory area. Two main questions were considered: did the ALS-based inventory meet the accuracy requirements set for the provider, and how should a reliable, cost-efficient and independent quality assessment be undertaken? The agreement between the control measurements and the ALS-based inventory was analysed in four ways: 1) the root mean squared errors (RMSEs) and bias were calculated; 2) scatter plots with 95% confidence intervals were plotted and the placing of identity lines was checked; 3) Bland-Altman plots were drawn, in which the mean difference of attributes between the control method and the ALS method was calculated and plotted against the average value of the attributes; 4) tolerance limits were defined and combined with the Bland-Altman plots. The RMSE values were compared to a reference study from which the accuracy requirements had been set for the service provider. The accuracy requirements in Kuhmo were achieved; however, the comparison of RMSE values proved difficult. Field control measurements are costly and time-consuming, but they are considered to be robust. However, control measurements may include errors which are difficult to take into account. In Bland-Altman plots neither of the compared methods is treated as completely exact, which offers a fair way to interpret the assessment results. It was suggested that tolerance limits, set in the purchase order and combined with Bland-Altman plots, be adopted in practice. In addition, bias should be calculated for the total area. Some other approaches to quality control were briefly examined. No method was found to fulfil all the required demands of statistical reliability, cost-efficiency, time efficiency, simplicity and speed of implementation. Some benefits and shortcomings of the studied methods were discussed.
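    The agreement statistics described above can be illustrated with a short sketch. The code below is a hypothetical illustration, not the software used in the thesis; the stand attribute values, units and the tolerance limit of 20 m3/ha are invented for demonstration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented example data: mean stand volume per microsegment, m3/ha.
control = np.array([182.0, 154.0, 201.0, 167.0, 175.0, 190.0])  # field control plots
als     = np.array([176.0, 160.0, 195.0, 172.0, 170.0, 198.0])  # ALS-based inventory

diff = als - control
bias = diff.mean()                         # mean difference (ALS minus control)
rmse = np.sqrt(np.mean(diff ** 2))         # root mean squared error
print(f"bias {bias:.1f} m3/ha, RMSE {rmse:.1f} m3/ha")

# Bland-Altman plot: differences against the mean of the two methods,
# with 95% limits of agreement (bias +/- 1.96 * SD of the differences).
mean_of_methods = (als + control) / 2.0
loa = 1.96 * diff.std(ddof=1)
plt.scatter(mean_of_methods, diff)
for y in (bias, bias + loa, bias - loa):
    plt.axhline(y, linestyle="--")
for y in (20.0, -20.0):                    # assumed tolerance limits from the order
    plt.axhline(y, color="red")
plt.xlabel("Mean of control and ALS estimate (m3/ha)")
plt.ylabel("Difference, ALS - control (m3/ha)")
plt.show()
```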
  • Hildén, Timo (Helsingin yliopisto, 2015)
    Gas Electron Multiplier (GEM) detectors are a special type of position-sensitive gas-filled detector used in several particle physics experiments. They are capable of sub-millimeter spatial resolution and an energy resolution (FWHM) of the order of 20%. GEM detectors can operate at rates up to 50 kHz/mm², withstand radiation excellently and can be manufactured up to square-meter sizes. This thesis describes the Quality Assurance (QA) methods used in the assembly of 50 GEM detectors for the TOTEM T2 telescope at the LHC at CERN. Further development of the optical QA methods used in T2 detector assembly led to a unique large-area scanning system capable of sub-µm resolution. The system, its capability and the software used in the analysis of the scans are described in detail. A correlation was found between one of the main characteristics of the detector, the gas gain, and the results of the optical QA method. It was shown that a qualitative estimate of the gain can be made based on accurate optical measurement of the microscopic features of the detector components. The ability to predict the performance of individual detector components is extremely useful in large-scale production of GEM-based detectors.
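    As a rough illustration of the kind of correlation analysis mentioned above, the sketch below compares an optically measured GEM feature (here, hole diameter) with the measured gas gain. All numbers and variable names are invented; the actual analysis software and measured quantities of the thesis are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented toy data: optical scan result vs. measured gas gain for a few foils.
hole_diameter_um = np.array([68.0, 70.0, 71.0, 73.0, 74.0, 76.0])
gas_gain         = np.array([8200., 7900., 7600., 7100., 6900., 6400.])

r, p = pearsonr(hole_diameter_um, gas_gain)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # strong (negative) correlation in this toy data
```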
  • Ihalainen, Toni (Helsingin yliopisto, 2016)
    Quality control methods and test objects were developed and used for structural magnetic resonance imaging (MRI), functional MRI (fMRI) and diffusion-weighted imaging (DWI). Emphasis was put on methods that allow objective quality control for organizations that use several MRI systems from different vendors and with different field strengths. Notable increases in the number of MRI studies and of novel MRI systems, the fast development of MRI technology, and the international discussion about the quality and safety of medical imaging have motivated the development of objective, quantitative and time-efficient methods for quality control. The quality control methods need to keep up with the most modern MRI methods, including parallel imaging, parallel transmit technology, and new diffusion-weighted sequences. The methods need to be appropriate for organizations that use MRI for quantitative measurements or participate in multicenter studies. Two different test object methods for structural MRI were evaluated in a multi-unit medical imaging organization: the Eurospin method and the American College of Radiology (ACR) method. The Eurospin method was originally developed as part of a European Concerted Action, and five standardized test objects were used to create a quality control protocol for six MRI systems. Automatic software was written for image analysis. In contrast, a single multi-purpose test object was used for the ACR method, and image quality for both standard and clinical imaging protocols was measured for 11 MRI systems. A previously published method for fMRI quality control was applied to the evaluation of five MRI systems and was extended to simultaneous electroencephalography and fMRI (EEG-fMRI). The test object results were compared with human data obtained from two healthy volunteers. A body-diameter test object was constructed for DWI testing, and apparent diffusion coefficient (ADC) values and the level of artifacts were measured using conventional and evolving DWI methods. The majority of the measured MRI systems operated at an acceptable level when compared with published recommended values for structural and functional MRI. In general, the measurements were repeatable. The test object study revealed the extent of superficial artifacts (15 mm) and the magnitude of the signal-to-noise ratio (SNR) reduction (15%) in the simultaneous EEG-fMRI images. The observations were in accordance with the data of the healthy human volunteers. The agreement between the ADC values for the different DWI methods was generally good, although differences of up to 0.1 x 10^-3 mm^2/s were observed between acquisition options and field strengths, and along the slice direction. Readout-segmented echo-planar imaging (EPI) and zoomed EPI, together with efficient use of the parallel transmit technology, resulted in lower levels of artifacts than the conventional methods. Other findings included geometric distortions at the edges of the MRI system field-of-view, minor instability of the image center-of-mass in fMRI, and an amplifier difference that affected the EEG signal in EEG-fMRI. The findings showed that although the majority of the results were within acceptable limits, MRI quality control was capable of detecting inferior image quality and revealing information that supported clinical imaging. A comparison between the different systems, and with international reference values, was feasible within the reported limitations. Automated analysis methods were successfully developed and applied in this study. A possible future direction for MRI quality control is the further development of its relevance for clinical imaging.
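    Two of the quantities discussed above, the signal-to-noise ratio of a test-object image and the apparent diffusion coefficient, reduce to simple formulas. The sketch below is a generic illustration, not the analysis software of the study; the ROI handling, the Rayleigh correction factor and the b-values are common conventions assumed here.

```python
import numpy as np

def snr(image, signal_roi, noise_roi):
    """Single-image SNR: mean ROI signal divided by background noise SD,
    corrected by 0.66 for the Rayleigh-distributed magnitude background."""
    return image[signal_roi].mean() / (image[noise_roi].std(ddof=1) / 0.66)

def adc(s0, sb, b=1000.0):
    """Two-point apparent diffusion coefficient in mm^2/s: ADC = ln(S0/Sb) / b."""
    return np.log(s0 / sb) / b

# Example: signals 1000 at b=0 and 400 at b=1000 s/mm^2 give ADC ~ 0.92e-3 mm^2/s.
print(f"ADC = {adc(1000.0, 400.0):.2e} mm^2/s")

# Synthetic image just to exercise the SNR function (not real MRI data).
img = np.random.default_rng(0).normal(100.0, 5.0, size=(64, 64))
print(f"SNR = {snr(img, (slice(20, 40), slice(20, 40)), (slice(0, 10), slice(0, 10))):.1f}")
```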
  • Kankaanhuhta, Ville (Finnish Society of Forest Science, Finnish Forest Research Institute, Faculty of Agriculture and Forestry of the University of Helsinki, School of Forest Sciences of the University of Eastern Finland, 2014)
    The purpose of this thesis was to identify the main factors that have to be taken into account in planning, controlling and improving the quality of forest regeneration activities. The forest regeneration services provided for the non-industrial privately-owned forests in Southern Finland by the local Forest Owners Associations (FOAs) were used as an example. Since the original assumptions of quality management were not completely valid in this context, Lillrank's classification of production processes was used. The classification fitted this field of services well, and a tentative framework for the modelling and standardisation of forest regeneration service processes was proposed for further testing. The regeneration results and costs varied considerably between service providers at different levels. The jointly analysed inventory results and feedback provided a sound starting point for tackling the main causes of the observed statistical variation. The inventory results indicated that the selection of proper regeneration methods and the way they were executed were the most common factors influencing the quality of the service outcomes. The cost-quality analysis of the two most common regeneration chains revealed potential for improving the cost-efficiency of these services. In the case of Norway spruce (Picea abies (L.) Karst.) planting, the regeneration costs were only weakly related to quality. For direct seeding of Scots pine (Pinus sylvestris L.), a significant positive correlation was found. However, the selection of this regeneration chain for the MT (Myrtillus type) and more fertile site types produced poor regeneration results. In the case of Norway spruce planting, the most important factor explaining the outcomes was soil preparation. Mounding produced better results than patching and disc trenching. In the FOAs, the effect of quality management interventions was observable especially in the improvement of resource allocation and practices related to soil preparation.
  • Aaltonen, Serja (Helsingin yliopisto, 2007)
    ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research), where a heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As a part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to map possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems of the project's learning curve had been solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. The problems were typically seen in the tests as too many individual channel failures. By contrast, bonding failures rarely caused the rejection of any component. One sensor type among the three sensor manufacturers proved to have lower quality than the others. The sensors of this manufacturer are very noisy and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during the module production demonstrates that the assembly process has been highly successful.
  • Bärlund, Hanna-Maria (2012)
    Since the beginning of the 1990s, the emphasis on participatory democracy has become stronger in Finnish policy- and decision-making. This development involves various stakeholders participating in negotiations, or more specifically deliberations, around current issues in order to reach consensus and enable the policy process to continue. According to research, the more consensual a democracy is, the more favourable are the policy outcomes towards environmental issues. The three case studies investigated, i.e. the Forest Biodiversity Programme for Southern Finland, the Working Group on Renewable Energy, and the Natura 2000 Network of European Union nature protection areas, support this notion. The case studies focus on how the key players involved have conceived the decision-making process in terms of achieved goals and degree of agreement, as well as on the specific issue context as a backdrop to the development of policy. The cases displayed significant differences of outcomes depending on the achieved level of consensus and deliberation. The outcomes are analysed within the theoretical frameworks of Arend Lijphart's 'consensus vs. majoritarian model of democracy' and Martin Jänicke's 'consensual capacity for ecological modernisation'. Further, applying Joshua Cohen's theory of deliberative democracy and his suggestions for achieving "ideal deliberation", the results suggest that the connection between consensus democracy and more effective environmental conservation policy is not that clear-cut. Nevertheless, consensus democracy provides a promising point of departure for overcoming the main disputes between the stakeholders and for agreeing on common starting points and general goals, which is crucial for any progress in environmental conservation to take place.
  • Kymälainen, Hanna-Riitta (Helsingin yliopisto, 2004)
  • Laine, Katarina (Helsingin yliopisto, 2015)
    The objective of this article was to analyze whether the reporting of 3rd and 4th degree obstetric anal sphincter injuries differs between patient data recording systems. The study was retrospective. The setting included all six delivery units in the Hospital District of Helsinki and Uusimaa (HUS), comprising one third of all deliveries in Finland. The population was all deliveries in HUS in 2012 (n=18099). The incidence of sphincter injury was extracted from three electronic medical record (EMR) systems (Obstetrix, Opera and Oberon), using the national versions of the International Classification of Diseases, 10th revision (ICD-10) and the Nordic Classification of Surgical Procedures (NOMESCO). All observed cases were studied carefully from the patient records, and the reliability of the different systems was analyzed and compared to the data reported to the national registers (the Medical Birth Register, MBR, and the Hospital Discharge Register, HDR). The main outcome measure was the sphincter injury rate in the delivery units. We found that the actual rate of sphincter injury in all the EMRs combined in HUS was higher (1.8%) than the rate derived from any single reporting system (from 1.5% to 1.7%), and the variation was even greater among single delivery units. The coverage in the MBR (88%) was much higher than in the HDR (3%). In conclusion, the simultaneous use of several patient data recording systems is confusing and prone to systematic errors. One common database, preferably an EMR with a structured format, would clarify the registration and enable reliable quality reports, creating a sustainable basis for quality improvements.
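    The rate and coverage figures reported above reduce to simple proportions. The sketch below shows the arithmetic on invented case counts chosen to be consistent with the reported percentages; it is not the actual HUS register extraction.

```python
deliveries = 18099                # all HUS deliveries in 2012 (from the abstract)

cases_all_emrs      = 326         # injuries found in all EMRs combined (invented count)
cases_single_system = 280         # injuries found via one reporting system (invented count)
cases_in_mbr        = 287         # of the combined cases, those also in the MBR (invented)

rate_all    = 100 * cases_all_emrs / deliveries        # ~1.8 %
rate_single = 100 * cases_single_system / deliveries   # ~1.5 %
coverage    = 100 * cases_in_mbr / cases_all_emrs      # register coverage, ~88 %

print(f"combined EMR rate {rate_all:.1f} %, single-system rate {rate_single:.1f} %, "
      f"MBR coverage {coverage:.0f} %")
```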
  • Kai, Zhan (2003)
    This research focuses on quality uncertainty and market efficiency in e-commerce. The purpose of this study is to analyse the economics of the lemons market in electronic commerce. In addition, I try to find methods to deal with this problem. Electronic commerce presents an exciting opportunity to reduce transaction costs, but its future may depend on how non-technological, fundamentally economic issues such as the lemons problem are solved; otherwise it will essentially lead to market failure. Repeat purchases play an important role in my analysis. In my opinion, one of the main reasons why electronic commerce players are losing money is that high-quality products cannot command higher prices in high-quality markets. Due to the lack of sufficiently informed consumers, firms have to spend on dissipative advertising to signal product quality and consumers have to pay higher prices for high-quality products. As a result, market efficiency cannot be achieved. Thus, how to make consumers informed is the core of resolving the lemons problem. I suggest that electronic intermediaries may provide information about product quality to consumers and reduce quality uncertainty. In fact, none of price, advertising or intermediaries alone is a reliable signal of product quality. In order to reduce quality uncertainty and improve market efficiency, sellers are responsible for providing adequate information to buyers. Similarly, buyers should inform sellers of their preferences and tastes. My hope is that lemons could be turned into lemonade.
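    The adverse-selection logic of the lemons problem discussed above can be made concrete with a toy calculation. The valuations and the fifty-fifty quality split below are invented purely for illustration and do not come from the thesis.

```python
# Buyers' valuations and sellers' reservation prices for good and bad items.
high_value, low_value = 100.0, 40.0
high_cost,  low_cost  =  80.0, 20.0
share_high = 0.5                     # share of high-quality items in the pool

# With observable quality both types would trade (value exceeds cost for each type).
# Under quality uncertainty, buyers offer only the expected value of the pool:
pooled_price = share_high * high_value + (1 - share_high) * low_value   # 70.0

if pooled_price < high_cost:
    # High-quality sellers withdraw, only lemons remain, and the price collapses.
    print(f"pooled offer {pooled_price:.0f} < high-quality cost {high_cost:.0f}: "
          f"good sellers exit and only lemons trade at {low_value:.0f}")
```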
  • Denham, Sander (Helsingin yliopisto, 2015)
    Pinus taeda is an important timber species both economically and ecologically. In past years there have been severe economic losses, as well as ecological disruption, due to epidemic outbreaks of Dendroctonus frontalis. Resin flow is the first line of defense in conifer species, acting as both a physical and a chemical barrier to invading pests. This study demonstrates the effectiveness of utilizing aggregation pheromones to attract Ips spp. bark beetles to Pinus taeda plantation stands in order to study the resin flow defense response mechanism. Individual trees were selected to be baited with aggregation pheromones. Trees in close proximity to each baited tree were labeled as monitor trees, and a control was established. Results of a general linear model for the aggregation pheromone attracting Ips spp. beetles indicate that there was a significant difference (p<0.0001) between the baited and control trees. Using a repeated measures ANOVA, differences in resin flow exudation in Pinus taeda were compared among varying stand conditions (fertilizer, fire, and control plots) during the induced Ips spp. bark beetle attack. This study illustrates that different stand conditions elicit more or less of a response of Ips spp. to the baited trees; however, site treatment did not significantly affect resin flow. We conclude that utilizing pheromones to attract Ips spp. bark beetles is an effective technique for studying the resin flow defense in conifers. From a management perspective, it is concerning to see differences in bark beetle activity among different stand conditions while simultaneously seeing no difference in resin flow defense, making this an important aspect of integrated pest management and an area in need of further research.
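    One possible way to run the two analyses named above (a general linear model for beetle catch and a repeated-measures comparison of resin flow) is sketched below. The data file and column names are assumptions, and the repeated measurements on the same trees are handled here with a mixed model grouped by tree rather than the classical repeated-measures ANOVA used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per tree per sampling occasion, with columns
# 'beetles', 'baited' (yes/no), 'resin', 'treatment' (fertilizer/fire/control), 'tree'.
df = pd.read_csv("ips_resin_flow.csv")

# General linear model: beetle catch explained by baiting status.
glm_fit = smf.ols("beetles ~ C(baited)", data=df).fit()
print(glm_fit.summary())

# Resin flow compared among stand treatments, with tree as the repeated-measures group.
mixed_fit = smf.mixedlm("resin ~ C(treatment)", data=df, groups=df["tree"]).fit()
print(mixed_fit.summary())
```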
  • Forsman, Pia (Helsingin yliopisto, 2008)
    This thesis focuses on the issue of testing sleepiness quantitatively. The issue is relevant to policymakers concerned with traffic and occupational safety; such testing provides a tool for safety legislation and surveillance. The findings of this thesis provide guidelines for a posturographic sleepiness tester. Sleepiness ensuing from staying awake for merely 17 h impairs our performance as much as the legally proscribed blood alcohol concentration of 0.5‰ does. Hence, sleepiness is a major risk factor in transportation and occupational accidents. The lack of convenient, commercial sleepiness tests precludes testing for impending sleepiness, in contrast to simple breath testing for alcohol intoxication. Posturography is a potential sleepiness test, since clinical diurnal balance testing suggests the hypothesis that time awake could be posturographically estimable. Relying on this hypothesis, this thesis examines posturographic sleepiness testing for instrumentation purposes. Empirical results from 63 subjects, whose balance we tested with a force platform during wakefulness for a maximum of 36 h, show that sustained wakefulness impairs balance. The results show that time awake is posturographically estimable with 88% accuracy and 97% precision, which validates our hypothesis. The results also show that balance scores tested at 13:30 hours serve as a threshold for detecting excessive sleepiness. Analytical results show that the test length has a marked effect on estimation accuracy: 18 s tests suffice to identify sleepiness-related balance changes, but trade off some of the accuracy achieved with 30 s tests. The procedure to estimate time awake relies on equating the subject's test score to a reference table (comprising balance scores tested during sustained wakefulness, regressed against time awake). Empirical results showed that sustained wakefulness explains 60% of the diurnal balance variation, whereas the time of day explains 40%. The latter fact implies that time awake estimation must also rely on knowing the local times of both the test and the reference scores.
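    The reference-table procedure described above amounts to fitting a regression of balance scores against time awake and inverting it for a new test score. The sketch below uses an invented, roughly linear reference series and assumed variable names; the thesis' actual balance measures and model are not reproduced.

```python
import numpy as np

# Invented reference data recorded during sustained wakefulness.
time_awake = np.array([2, 6, 10, 14, 18, 22, 26, 30, 34], dtype=float)   # hours
balance    = np.array([1.1, 1.2, 1.3, 1.5, 1.8, 2.2, 2.6, 3.1, 3.7])     # sway score

# Reference curve: balance score as a linear function of time awake.
slope, intercept = np.polyfit(time_awake, balance, deg=1)

def estimate_time_awake(score: float) -> float:
    """Invert the reference regression to estimate hours awake from a balance score."""
    return (score - intercept) / slope

print(f"{estimate_time_awake(2.4):.1f} h awake")   # about 22 h with these toy numbers
```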
  • Salonen, J Sakari (Helsingin yliopisto, 2012)
    Palaeoclimatic reconstructions from fossil proxies have provided important insights into the natural variability of climate in the late Quaternary. However, major challenges remain in ensuring the robustness of these reconstructions. Multiple factors may introduce variability and biases into the palaeoclimatic estimates. For example, quantitative reconstructions use diverse modern calibration data-sets and a wide variety of numerical calibration methods. While the choice of calibration data-set and calibration method may significantly influence the reconstructions, the comparison and analysis of these data-sets and methods have received relatively little attention. Further challenges are presented by the validation of the prepared reconstructions and the identification of climatic variables which can be robustly reconstructed from a given proxy. In this work, summer temperature reconstructions are prepared based on late-Quaternary pollen sequences from northern Finland and northern Russia, covering the Holocene and the early part of the last glacial period (Marine Isotope Stages 5d-c). The major aim of this work is to validate these reconstructions and to identify sources of bias in them. Reconstructions are prepared using a number of different calibration methods and calibration sets, to analyse the between-reconstruction variability introduced by the choice of calibration method and calibration set. In addition, novel regression tree methods are used to test the ecological significance of different climatic factors, with the aim of identifying parameters which could feasibly be reconstructed. In the results, it is found that the choice of calibration method, calibration data-set, and fossil pollen sequence can all significantly affect the reconstruction. The problems in choosing calibration data are especially acute in pre-Holocene reconstructions, as it is difficult to find representative calibration data for reconstructions from non-analogue palaeoclimates, which become increasingly common in the more distant past. First-order trends in the reconstructed palaeoclimates are found to be relatively robust. However, the degree of between-reconstruction variability stresses the importance of independent validation, and suggests that ensemble reconstructions using different methods and proxies should be increasingly relied on. The regression-tree analysis of the climatic response in northern European modern pollen samples suggests that secondary climatic determinants, such as winter temperature and continentality, have a major ecological influence in addition to summer temperature, which has been the most commonly reconstructed variable in palaeoclimatic studies. This suggests the potential to reconstruct the secondary parameters from fossil pollen. However, validating the robustness of secondary-parameter reconstructions remains a major challenge for future studies.
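    The regression-tree idea mentioned above can be sketched as follows: trees are fitted on a modern calibration set with climatic variables as predictors of pollen taxon abundances, and the variable importances indicate which climatic determinants matter for each taxon. The file and column names are assumptions; the thesis data and exact method are not reproduced.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical modern calibration set: pollen taxon percentages plus climate parameters.
calib = pd.read_csv("modern_pollen_calibration.csv")
taxa = calib.filter(like="pollen_")                       # e.g. pollen_Pinus, pollen_Betula
climate = calib[["summer_T", "winter_T", "continentality"]]

# One shallow tree per taxon; feature importances show which climate variable dominates.
for taxon in taxa.columns:
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(climate, taxa[taxon])
    importances = dict(zip(climate.columns, tree.feature_importances_.round(2)))
    print(taxon, importances)
```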
  • Sirén, Saija (2015)
    Lipids can be found in all living organisms, and complex lipids are typically determined from biological samples and food products. Samples are usually prepared prior to analysis. Liquid-liquid extraction (LLE) is the most commonly used technique for isolating lipids. Two fundamental protocols are the Folch and the Bligh & Dyer methods. In both methods, the extraction is based on lipid partitioning between chloroform and water-methanol phases. Methyl tert-butyl ether offers an environmentally friendly alternative to chloroform. The total lipid fraction can be further separated by solid-phase extraction (SPE). Complex lipids are typically isolated from other lipid species with silica SPE cartridges. The three main techniques used in the quantitative determination of complex lipids are thin layer chromatography (TLC), high performance liquid chromatography (HPLC) and direct infusion mass spectrometry (MS). Thin layer chromatography is a traditional technique, but its applicability is limited due to poor resolution and the requirement of post-column derivatization. HPLC, in contrast, provides efficient separation and is easily coupled with several detectors; HPLC methods are the most commonly used in lipid analysis. Direct infusion mass spectrometry is an emerging technique: lipid molecules can be precisely identified during a fast measurement, and further advantages are excellent selectivity and sensitivity. A new method for glycolipids was developed during the experimental part of this work. Glycolipids were isolated from bio-oil samples using solid phase extraction cartridges. Normal phase liquid chromatography was used to separate the glycolipids, and detection was carried out with parallel tandem mass spectrometry (MS/MS) and evaporative light scattering detection (ELSD). Quantification was based on the ELSD measurements, whereas MS/MS was adopted to confirm the identification. The developed method was validated and the following parameters were determined: linearity, trueness, precision, measurement uncertainty, and detection and quantification limits. Precision was acceptable, mainly between 5 and 15%. Trueness results were, however, less satisfactory, because the measured concentrations were typically higher than the theoretical concentrations. The results depended on the analyte, but generally varied between 66% and as much as 194%. The validation pointed out that the method needs further development. Mass spectrometric quantification could be considered if appropriate internal standards were available.
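    The validation figures listed above follow standard formulas: trueness as recovery against the theoretical concentration, precision as the relative standard deviation of replicates, and detection/quantification limits from the blank noise and the calibration slope. The sketch below uses invented numbers and is not the validation data of the thesis.

```python
import numpy as np

measured = np.array([52.1, 48.7, 55.3, 50.9, 53.0])    # replicate results, mg/L (invented)
nominal = 50.0                                          # spiked theoretical concentration

trueness = 100 * measured.mean() / nominal                 # recovery, %
precision = 100 * measured.std(ddof=1) / measured.mean()   # relative standard deviation, %

blank_sd = 0.8     # standard deviation of blank responses (invented)
slope = 1.6        # calibration slope, response per mg/L (invented)
lod = 3.3 * blank_sd / slope    # limit of detection
loq = 10.0 * blank_sd / slope   # limit of quantification

print(f"trueness {trueness:.0f} %, precision {precision:.1f} %, "
      f"LOD {lod:.1f} mg/L, LOQ {loq:.1f} mg/L")
```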