Faculty of Science


The collection is not currently up to date: the synchronization process from the E-Thesis server has failed. (The latest update was on Friday, 20 April 2018.)

However, new theses are available on the thesis computers (Opinnäytekioski) in the university campus libraries.

Recent Submissions

  • Lapio, Tommi (Helsingin yliopisto, 2018)
    Immigration is a topical and highly controversial phenomenon. Migration in general is natural behavior for humankind, something we have done throughout history. There is a broad universal consensus that migration is a human right and that people in distress should receive help. Nonetheless, immigration is still sometimes seen as problematic for the receiving societies. This is a discursive turn in which originally oppressed people are now presented as a threat. This research concerns how immigrants are presented as a threat in Finnish newspapers. My research methods are qualitative. Using discourse analysis, I analyze the threats that appear in the Finnish context and how their existence is explained. I want to discover which agents produce and maintain the threat discourse on immigrants in my data. Through rhetorical analysis I show which intentional strategies and rhetorical devices are used to convince the audience that immigrants pose a threat to Finland. The research also includes a literature review on the threat of immigration in the West, which I compare to the Finnish context to see what similarities and differences there are in this discourse. The research is based on critical theory, which serves here as a metatheory. I assume that immigrants are oppressed in this threat discourse, which is produced and maintained by the Finnish media, here the newspapers. This process is about the creation of discourses, in which a desired image is created through the use of power. The actual theory in this research is securitization theory, which focuses specifically on how a phenomenon is constructed as a security issue through intentional choices. My material consists of newspaper data: 70 news items in which immigration is somehow depicted as a threat or challenge. The items come from the four largest newspapers in Finland: Helsingin Sanomat, Aamulehti, Turun Sanomat and Kaleva. I searched for the items using terms such as "immigrants", "asylum seekers" and "foreigners", and went through every item before accepting it into the data. This guarantees that every single item somehow affects the results. According to my results, the immigrants-as-a-threat discourse in Finland follows that of the West in general. The threats that emerge are the growing threat of terrorism, criminality in its different forms, unemployment, financial costs and the alienation of immigrants. It is also essential to notice how strongly the different threats are connected to each other; many of them are both causes and consequences of one another. In news about immigrants, writers use a great deal of quantification. Arguments are mainly attributed to specialists, so it is very common to use speaker categories to justify them. It is also common to use specific details and narratives, and, to underline the seriousness of many threats, different future scenarios. The results of this research support the conclusion that in 2010s Finland, too, immigration has genuinely become a security issue. Certain (even political) actors have targeted it and try to use immigration as an explanatory factor for many societal problems. Some people even consider it the single greatest threat to Finland. The Finnish case therefore fits the idea of securitization theory very well.
  • Sahlberg, Isac (Helsingin yliopisto, 2018)
    In this thesis, we examine a fairly novel area of physics that concerns topological materials, and in particular, topological superconductivity. A goal in the research of topological materials is realizing applications in quantum computing, which could be aided by the emergent quasiparticles that exhibit non-Abelian exchange statistics. These are called Majorana bound states, and they are elusive quasiparticles predicted to be found on the boundary of topological superconductors. We first study a one-dimensional chain of potential impurities placed on the surface of a two-dimensional $p$-wave superconductor. As is usually the case, such chains are composed of perfect lattice structures, which is very challenging to achieve in any laboratory setting. Nevertheless, they serve as a good example of systems where an analytical solution can be well established. We investigate the model without employing any deep-dilute approximation, which gives us an accurate description even far away from the gap center. This is done by formulating the problem as a non-linear eigenvalue equation, which complicates it significantly, but also extends the region of applicability of our theory. We use reciprocal space calculations of two topological invariants to obtain the topological phase diagram of the system. The model is shown to host topological quasiparticle excitations at the ends of the chain, with multiple distinct topological phases. The near-perfect localization of the excitations makes them good candidates for probing Majorana bound states in experimental setups. We then move on to study topological superconductivity in random lattices, as opposed to regular structures which assume arbitrary precision. We frame our work starting with the mathematics of random numbers. 
Our work is thus in stark contrast with previous studies on topological materials, which start off with a perfect lattice structure and investigate some degree of disorder as a perturbation to the regular lattice case. It establishes a first realistic candidate for realizing topological superconductivity in an amorphous material. This could enable a novel approach to creating topological materials and drastically aid the development of fault-tolerant quantum computing.
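A minimal illustration of the reciprocal-space topological invariants mentioned above is the winding number of a generic two-band 1D Hamiltonian h(k) = (d_x(k), d_y(k)). The sketch below uses the standard SSH-type chain as a stand-in, not the thesis's p-wave model; all names and parameters are illustrative.

```python
import math

def winding_number(d, n_k=2000):
    """Accumulate the winding of the vector (d_x(k), d_y(k)) around
    the origin as k sweeps the Brillouin zone [0, 2*pi]."""
    dx0, dy0 = d(0.0)
    prev = math.atan2(dy0, dx0)
    total = 0.0
    for i in range(1, n_k + 1):
        k = 2 * math.pi * i / n_k
        dx, dy = d(k)
        ang = math.atan2(dy, dx)
        diff = ang - prev
        # unwrap the angle jump into (-pi, pi]
        while diff > math.pi:
            diff -= 2 * math.pi
        while diff <= -math.pi:
            diff += 2 * math.pi
        total += diff
        prev = ang
    return round(total / (2 * math.pi))

# SSH-like chain: intra-cell hopping v, inter-cell hopping w.
ssh = lambda v, w: (lambda k: (v + w * math.cos(k), w * math.sin(k)))
print(winding_number(ssh(0.5, 1.0)))  # v < w: topological phase, winding 1
print(winding_number(ssh(1.0, 0.5)))  # v > w: trivial phase, winding 0
```

The invariant jumps from 1 to 0 as the intra-cell hopping grows past the inter-cell hopping, signaling the topological phase transition.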
  • Vanhalakka, Joonas (Helsingin yliopisto, 2018)
    In this thesis we state the Hilton–Milnor theorem, which says that the loop space of the reduced suspension of a one-point union (wedge) of connected CW complexes has the same homotopy type as the infinite weak product of the loop spaces of the reduced suspensions of certain iterated smash products of these CW complexes. The thesis covers the concepts and auxiliary results needed in stating and proving the theorem, the most central of which are Ioan James's reduced products and the Witt formulas. The theorem itself is not proved.
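For two connected CW complexes the theorem can be displayed as follows (a standard formulation of the statement; the notation here is illustrative and not necessarily the thesis's):

```latex
\Omega\Sigma(X \vee Y) \;\simeq\;
  \prod^{\mathrm{w}}_{\alpha \in B}
  \Omega\Sigma\!\left( X^{\wedge a(\alpha)} \wedge Y^{\wedge b(\alpha)} \right)
```

Here $B$ is a set of basic products (a Hall basis of the free Lie algebra on two generators), $a(\alpha)$ and $b(\alpha)$ count the occurrences of each generator in $\alpha$, and the product is the weak (direct-limit) product. The number of basic products of weight $n$ on $m$ generators is given by the Witt formula $\frac{1}{n}\sum_{d \mid n}\mu(d)\,m^{n/d}$.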
  • Laine, Eero-Veikko (Helsingin yliopisto, 2018)
    Internal startups are new ventures launched by companies seeking ways for radical innovation. Conceptually part of Lean Startup, they are strongly influenced by independent startup companies and agile software methodology. Internal startups favor a "lean" approach to their organization, usually have minimal resources and are expected to produce results fast. This thesis explores how to organize testing effectively in the difficult conditions internal startups operate in. We conducted a case study where we interviewed five IT professionals associated with an internal startup in a global IT service and software company. To systematically analyze the material collected in the interviews, we applied thematic synthesis. Our results suggest that the organization of testing in internal startups is affected by the resources provided by the startup's parent company, as well as the demands presented by the company. Our results also suggest that the lean approach favored by internal startups can have a negative effect on testing and product quality. These results are supported by the existing literature on the subject. ACM Classification: • Software and its engineering~Software testing and debugging • Social and professional topics~Project and people management
  • Varis, Jouni (Helsingin yliopisto, 2018)
    DevOps is nowadays a popular term in the software industry. Through the automation that is part of DevOps principles, software can be released to users faster and with higher quality. According to scientific studies, companies that practice automation effectively can outperform their competitors. The aim of this thesis is to study how Finnish consulting companies doing web software development view packaging, testing and deployment automation from a DevOps perspective, and to examine the benefits and challenges of these automations. Based on the survey conducted in the thesis, packaging, testing and deployment automation is considered useful, and concrete benefits had also been obtained with it. These automations can, in addition, be used to move toward fully automating the release process. The results showed, however, that the companies do not aim for a fully automatic release process, in which changes are released automatically to end users provided that certain quality gates are passed. Instead, a semi-automatic release process was already seen as sufficient, since it retains control over when changes take effect. Automating the release process and its stages also involved several challenges and obstacles. The results were in line with earlier studies and confirmed their findings. ACM Computing Classification System (CCS): Software and its engineering → Agile software development Software and its engineering → Software testing and debugging
  • Obscura Acosta, Nidia (Helsingin yliopisto, 2018)
    In this thesis we study the concept of “safe solutions” in different problems whose solutions are walks on graphs. A safe solution to a problem X can be understood as a partial solution common to all solutions to problem X. In problems whose solutions are walks on graphs, safe solutions refer to walks common to all walks which are solutions to the problem. In this thesis we focus on formulating four main graph traversal problems and finding characterizations for those walks contained in all their solutions. We give formulations for these graph traversal problems, prove some of their combinatorial and structural properties, and give safe and complete algorithms for finding their safe solutions based on these characterizations. We use the genome assembly problem and its applications as our main motivating example for finding safe solutions in these graph traversal problems. We begin by motivating and exemplifying the notion of safe solutions through a problem on s-t paths in undirected graphs with at least two non-trivial biconnected components S and T and with s ∈ S, t ∈ T. We continue by reviewing similar and related notions in other fields, especially in combinatorial optimization, and previous work on the bioinformatics problem of genome assembly. We then proceed to characterize the safe solutions to the Eulerian cycle problem, where one must find a circular walk in a graph G which traverses each edge exactly once. We suggest a characterization for them by improving on (Nagarajan, Pop, JCB 2009), and a polynomial-time algorithm for finding them. We then study edge-covering circular walks in a graph G. We look at the characterization from (Tomescu, Medvedev, JCB 2017) for their safe solutions and their suggested polynomial-time algorithm, and we show an optimal O(mn)-time algorithm that we proposed in (Cairo et al. CPM 2017). Finally, we generalize this to edge-covering collections of circular walks.
We characterize safe solutions in an edge-covering setting and provide a polynomial-time algorithm for computing them. We suggested these originally in (Obscura et al. ALMOB 2018).
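The Eulerian cycle problem referred to above has a classical linear-time solution, Hierholzer's algorithm; the sketch below (directed graphs; names are illustrative, and this is not the thesis's code) finds one of the many circular walks of which the safe walks are the common subwalks.

```python
from collections import defaultdict

def eulerian_cycle(edges):
    """Hierholzer's algorithm on a directed graph given as a list of
    (u, v) edges; assumes the graph is connected and every node has
    equal in- and out-degree, so an Eulerian cycle exists."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    start = edges[0][0]
    stack, cycle = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            stack.append(adj[u].pop())  # follow an unused out-edge
        else:
            cycle.append(stack.pop())   # dead end: emit node
    return cycle[::-1]

walk = eulerian_cycle([(0, 1), (1, 2), (2, 0), (0, 3), (3, 0)])
# `walk` is a closed walk using each of the 5 edges exactly once
```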
  • Snellman, Mikael (Helsingin yliopisto, 2018)
    Today many of the most popular service providers, such as Netflix, LinkedIn and Amazon, compose their applications from a group of individual services. These providers need to deploy new changes and features continuously, without any downtime in the application, and to scale individual parts of the system on demand. To address these needs, the use of microservice architecture has grown in popularity in recent years. In microservice architecture, the application is a collection of services which are managed, developed and deployed independently. This independence enables the microservices to be polyglot when needed, meaning that developers can choose the technology stack for each microservice individually, depending on its nature. The independent and polyglot nature of microservices can make developing a single service easier, but it also introduces significant operational overhead when not taken into account while adopting the microservice architecture. These overheads include the need for extensive DevOps practices, monitoring, infrastructure, and preparation for the fallacies of distributed computing. Many cloud-native and microservice-based applications suffer from outages even with thorough unit and integration tests applied. This can be because distributed cloud environments are prone to failures at the node or even regional level, which cause unexpected behavior when the system is not prepared for them. The application's ability to recover and maintain functionality at an acceptable level under such unexpected faults, also known as resilience, should therefore also be tested systematically. In this thesis we give an introduction to the microservice architecture. We inspect an industry case in which a leading banking company suffered from issues regarding resiliency. We examine the challenges of resilience testing microservice-architecture-based applications. We compose a small microservice application which we use to study defensive design patterns and the tools and methods available for testing the resiliency of a microservice architecture.
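One of the defensive design patterns commonly studied in this setting is the circuit breaker: after repeated errors, calls to a failing dependency are rejected immediately so the rest of the system can degrade gracefully. The minimal sketch below uses illustrative names and thresholds and is not the thesis's implementation.

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors and
    fail fast until `reset_timeout` seconds have passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

In a real system the wrapped call would typically be an HTTP request to another microservice, with the thresholds tuned per dependency.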
  • Somero, Sonja (Helsingin yliopisto, 2018)
    Maintaining an application is often expensive. Making changes and fixing defects is cheaper during development than during maintenance. So that as many defects as possible are found at an early stage, software projects can use, among other things, static analysis tools to detect them. JavaScript is one of the most widely used programming languages, but it contains a number of bad practices. In addition, a given construct, such as defining a function, can be implemented in several different ways. To avoid bad practices and to make code more uniform, lint tools have been developed for JavaScript; they perform static analysis on the code, looking for syntax errors and other potential defects in addition to the problems mentioned above. This thesis examines how adopting the lint-type tool ESLint affects the maintainability of a JavaScript application. The study is carried out on a single project in which a JavaScript application was implemented. Maintainability is compared using metrics before and after the problems detected by ESLint were fixed. In addition, the developer of the target project evaluates the code fixed with the help of ESLint. Based on the results, adopting ESLint is quick and easy, and its use is considered to improve maintainability. Furthermore, using ESLint brought previously hidden repetitive structures to light as the code became more uniform.
  • Sinisalo Hannu Ilari, Seppo (Helsingin yliopisto, 2018)
    In this thesis, some of the higher-ranked, popular web content management systems (CMS), namely Drupal, WordPress, Joomla and Plone, are compared by usability from a developer's perspective and by the performance of the resulting sites built with these CMSs, to find out, among other things, how well they suit building websites for different needs. The thesis tries to discover whether a CMS exists in this selected group that is a clear choice above the others in both usability and performance. A substantial portion of the source material for this research comes from measurements and from small demo systems built and used for the purpose, in addition to literature sources and experience garnered from a career as a web developer. We provide an overview of the four selected CMSs: their characteristics, statistics and how they measure up to each other. In so doing, we expand upon the still-narrow research done in this field.
  • Silvennoinen, Aku (Helsingin yliopisto, 2018)
    De-anonymization is an important requirement in real-world V2X systems (e.g., to enable effective law enforcement). In de-anonymization, a pseudonymous identity is linked to a long-term identity in a process known as pseudonym resolution. For de-anonymization to be acceptable from political, social and legislative points of view, it has to be accountable. A system is accountable if no action by it or using it can be taken without some entity being responsible for the action. Being responsible for an action means that the responsible entity cannot deny its responsibility for, or relation to, the action afterwards. The main research question is: how can we achieve accountable pseudonym resolution in V2X communication systems? One possible answer is to develop an accountable de-anonymization service that is compatible with existing V2X pseudonym schemes. Accountability can be achieved by making some entities accountable for the de-anonymization. This thesis proposes a system design that enables i) fine-grained pseudonym resolution; ii) the possibility to inform the subject of the resolution after a suitable time delay; and iii) the possibility for the public to audit the aggregate number of pseudonym resolutions. A trusted execution environment (TEE) is used to ensure these accountability properties. The security properties of the design are verified using symbolic protocol analysis.
  • Shrestha, Shiva Ram (Helsingin yliopisto, 2018)
    Apache Hadoop has provided solutions to obstacles related to Big Data processing. Hadoop stores large datasets in HDFS on a distributed network of commodity hardware and processes them in parallel. The parallel computing power of Hadoop comes from the MapReduce framework, in which map and reduce programs are written in Java; for data analytics, however, the Structured Query Language (SQL) has long been the dominant tool. Thus, to add efficiency and effectiveness to data analytics, Hadoop gained SQL engines such as Hive, Spark SQL and Impala. With these engines on top of the Hadoop ecosystem, end users can write data analytics applications in a well-understood SQL-like language and focus only on the analytics. This thesis provides a comparative performance analysis of two SQL engines, Hive and Spark SQL. Apache Hadoop and its components such as HDFS, MapReduce, YARN and Spark, along with their design and workflow, are covered as requisite background knowledge. Both SQL engines process data with HiveQL statements executed on several Hadoop components. The experiments were accompanied by tuning the configuration parameters of the Hadoop components to provide a more in-depth understanding of both engines. The experimental Hadoop cluster was configured with limited resources, and a fixed data size and evaluation tool (HiBench) were used to provide a fair comparison between the engines. The configuration parameters yielding optimal performance were then chosen to evaluate and compare the performance of Hive and Spark SQL. The experimental results shed light on cluster performance under changes to the Hadoop configuration parameters. The comparison also showed that Spark SQL performs better than Hive, even when configured with minimal cluster resources.
  • Aluthge, Nishadh (Helsingin yliopisto, 2018)
    Exponential growth of the Internet of Things complicates network management in terms of security and device troubleshooting due to the heterogeneity of IoT devices. In the absence of a proper device identification mechanism, network administrators are unable to limit unauthorized access, locate vulnerable or rogue devices, or assess the security policies applicable to these devices. Hence, identifying the devices connected to the network is essential, as it provides important insights that enable the proper application of security measures and improve the efficiency of device troubleshooting. Although active device fingerprinting reveals in-depth information about devices, passive device fingerprinting has gained focus as a consequence of the lack of cooperation of devices in active fingerprinting. We propose a passive, feature-based device identification technique that extracts features from a sequence of packets during the initial startup of a device and then uses machine learning for classification. The proposed system improves the average device prediction F1-score to 0.912, a 14% increase compared with the state-of-the-art technique. In addition, we analyze the impact of the confidence threshold on device prediction accuracy when a previously unknown device is detected by the classifier. As future work we suggest a feature-based approach to detecting anomalies in devices by comparing long-term device behaviors.
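The feature-based identification pipeline described above can be caricatured in pure Python: extract simple statistics from a device's startup packet sequence and classify with a nearest-centroid rule. This is a stand-in for the actual machine-learning classifier; the feature set, names and data are illustrative assumptions.

```python
import math

def features(packets):
    """packets: list of (size_bytes, inter_arrival_s) pairs from a
    device's startup traffic; returns a small feature vector."""
    sizes = [s for s, _ in packets]
    gaps = [g for _, g in packets]
    return (sum(sizes) / len(sizes), max(sizes), sum(gaps) / len(gaps))

def train(labelled):
    """labelled: {device_type: [packet sequence, ...]};
    returns one centroid feature vector per device type."""
    centroids = {}
    for label, seqs in labelled.items():
        vecs = [features(s) for s in seqs]
        centroids[label] = tuple(sum(v[i] for v in vecs) / len(vecs)
                                 for i in range(3))
    return centroids

def predict(centroids, packets):
    # assign the device type whose centroid is nearest in feature space
    v = features(packets)
    return min(centroids, key=lambda lbl: math.dist(v, centroids[lbl]))
```

A confidence threshold, as analyzed in the thesis, would correspond here to rejecting a prediction whose nearest-centroid distance is too large.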
  • Concas, Francesco (Helsingin yliopisto, 2018)
    The Bloom Filter is a space-efficient probabilistic data structure that deals with the problem of set membership. The space reduction comes at the expense of introducing a false positive rate that many applications can tolerate since they require approximate answers. In this thesis, we extend the Bloom Filter to deal with the problem of matching multiple labels to a set, introducing two new data structures: the Bloom Vector and the Bloom Matrix. We also introduce a more efficient variation for each of them, namely the Optimised Bloom Vector and the Sparse Bloom Matrix. We implement them and show experimental results from testing with artificial datasets and a real dataset.
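As context, the plain Bloom Filter can be sketched in a few lines: k hash positions are set on insertion, and a membership query answers "possibly present" only if all k positions are set, which yields false positives but never false negatives. This is a generic illustration (parameters and hashing scheme are arbitrary choices), not the thesis's Bloom Vector or Bloom Matrix.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)  # one byte per bit, for clarity

    def _positions(self, item):
        # derive k positions by salting a cryptographic hash
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # no false negatives; false positives with small probability
        return all(self.bits[p] for p in self._positions(item))
```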
  • Harkonsalo, Olli-Pekka (Helsingin yliopisto, 2018)
    Tässä systemaattisesti tehdyssä kirjallisuuskatsauksessa selvitettiin, millainen on arkkitehtuurin kannalta merkittävien suunnittelupäätöksien tekemiseen käytetty päätöksentekoprosessi käytännössä, mitkä tekijät vaikuttavat suunnittelupäätöksien tekemiseen ja miten arkkitehtien rationaalista päätöksentekoprosessia voidaan tukea. Työssä selvisi, että arkkitehdit tekevät päätöksiään ainakin pääosin rationaalisti ja vaikuttivatkin hyötyvän tästä. Arkkitehdit eivät myöskään suosineet erilaisten systemaattisten päätöksenteko- tai dokumentointimenetelmien käyttöä. Arkkitehtien kokemustaso vaikutti päätöksentekoprosessiin siten, että vähemmän kokeneemmat arkkitehdit tekivät päätöksiään vähemmän rationaalisesti (ja oletettavasti myös vähemmän onnistuneesti) kuin kokeneemmat. Tärkeänä päätöksiin vaikuttavana tekijänä puolestaan nousi esiin arkkitehtien omat kokemukset ja uskomukset. Näiden ja erilaisten vaatimusten ja rajoitusten lisäksi päätöksentekoon vaikuttuvina tekijöinä nousivat esiin myös erilaiset kontekstiin liittyvät tekijät. Näistä nousi esiin myös se, kuka varsinaisesti tekee suunnittelupäätökset ja miten tämä tapahtuu. Kirjallisuuskatsauksessa selvisikin, että suurin osa suunnittelupäätöksistä tehdäänkin ryhmissä eikä vain yhden arkkitehdin toimesta. Ryhmäpäätöksenteko tapahtui useimmiten siten, että arkkitehti oli valmis tekemään lopullisen päätöksen, mutta oli kuitenkin myös valmis huomioimaan muiden mielipiteet. Ryhmäpäätöksentekoon liittyi sekä hyötyjä että haasteita. Työssä selvisi myös, että varsinkin vähemmän kokeneiden arkkitehtien rationaalista päätöksentekoprosessia voitiin tukea kokonaisvaltaisesti arkkitehtuurin kannalta merkittävien suunnittelupäätösten ja niiden järjellisten perustelujen tallentamiseen tarkoitettujen dokumentointimenetelmien käytön avulla. Näiden käytöstä voi spekuloida olevan hyötyä myös kokeneemmille arkkitehdeille, vaikkakin heidän voi tosin epäillä välttävän niiden käyttöä mm. niiden raskauden vuoksi. 
Toisaalta taas rationaalisempaa päätöksentekoprosessia pystyttiin tukemaan myös kannustamalla arkkitehtejä eri päättelytekniikoiden käytössä eri tavoin, mikä olisi dokumentointimenetelmien käyttöä kevyempi vaihtoehto, vaikkakin tässä tapauksessa luovuttaisiin kompromissina dokumentointimenetelmien käytön tuomista muista hyödyistä. ACM Computing Classification System (CCS): • Software and its engineering~Software architectures • Software and its engineering~Software design engineering
  • Kaija, Kasperi (Helsingin yliopisto, 2018)
    Chord is a distributed hash table solution that makes a set of assumptions about its performance and about how that performance is affected as the size of the Chord network increases. This thesis studies those assumptions and the foundation they are based on. The main focus is on how the Chord protocol performs in practice, studied with a custom Chord protocol implementation written in Python. Performance is tested by measuring the length of lookup queries over the network and the cost of maintaining the routing invariants. Additionally, the amount of data exchanged when a new Chord node joins the network, and how data is distributed over the network in general, is also measured. The tests are repeated using various network sizes and states. The measurements are used to formulate models, and those models are then used to draw conclusions about the performance assumptions. Statistical measures of quality are used to estimate the quality of the models. The Ukko high-performance cluster is used for running the Chord networks and executing the tests.
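Chord's core routing invariant, that a key is served by its successor, i.e. the first node whose identifier equals or follows the key on the ring, can be sketched without the network layer. This is an illustrative sketch, not the thesis's implementation; a real Chord node would additionally maintain a finger table to locate the successor in O(log N) messages.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier space of 2**M positions on the ring

def chord_id(name):
    """Hash a node or key name onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

class ChordRing:
    def __init__(self, node_names):
        self.ids = sorted(chord_id(n) for n in node_names)

    def successor(self, key):
        """First node id >= the key's id, wrapping around the ring."""
        kid = chord_id(key)
        i = bisect_left(self.ids, kid)
        return self.ids[i % len(self.ids)]
```

With this invariant, a node join only moves the keys between the new node's predecessor and the new node itself, which is what makes the per-join data transfer measured in the thesis bounded.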