Browsing by Subject "Software systems"


Now showing items 1-20 of 29
  • Heikkinen, Juuso (Helsingin yliopisto, 2021)
    Telecommunication companies are moving towards ever more digitalized and agile ways of working. They are expanding their business into other fields, such as television, thus moving further away from the traditional telecommunications model. Recently, Telia has become the largest television company in the Nordics. One of their main products in the field of television is channel packages, which allow customers to access specific television content. In this study, a benefit analysis for Telia Finland Oyj was conducted to inspect the benefits that test automation brings to the channel package testing process. In total, eight interviews were conducted with Telia employees with knowledge of channel packages. To obtain both a business and a technical perspective, the interviewees were divided into two groups fitting their expertise. In general, test automation was seen as a useful tool. The main business-related benefits mentioned were a faster and cheaper testing process and a faster time-to-market. Test automation was also seen as a way to achieve a more efficient testing process and to increase confidence in testing. Based on the interview results, an epic was defined and analyzed according to the principles of the Scaled Agile Framework (SAFe). This included describing the solution in detail and defining a Minimum Viable Product (MVP). Using example variables and generalized values, several calculations were made to present a framework for the costs of implementing the MVP and the estimated reduction in channel package testing costs. When the MVP was used as part of the channel package testing process, the return on investment (ROI) was not as desirable as expected. With more automated tests relative to the number of test cases, combined with regular use of test automation, the investment would pay itself back and start generating additional savings faster. Based on the epic analysis, a Lean Business Case was defined.
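The payback reasoning in the abstract can be illustrated with a toy calculation; all figures and function names here are hypothetical sketches in the spirit of the thesis's "example variables and generalized values", not numbers from the thesis itself:

```python
def payback_months(implementation_cost, monthly_manual_cost, monthly_automated_cost):
    """Months until the automation investment pays for itself.

    All inputs are hypothetical example figures, not Telia's actual costs.
    """
    monthly_saving = monthly_manual_cost - monthly_automated_cost
    if monthly_saving <= 0:
        return float("inf")  # automation never pays back
    return implementation_cost / monthly_saving

def roi(implementation_cost, total_savings):
    """Simple return on investment over a chosen period."""
    return (total_savings - implementation_cost) / implementation_cost

# Example: a 30 000 unit MVP that cuts monthly testing costs from 5 000 to 3 000
months = payback_months(30_000, 5_000, 3_000)  # 15.0 months to break even
```

The abstract's conclusion maps onto this sketch directly: raising the share of automated test cases and running them regularly increases the monthly saving, which shortens the payback period.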
  • Ollila, Risto (Helsingin yliopisto, 2021)
    The Web has become the world's most important application distribution platform, with web pages increasingly containing not static documents, but dynamic, script-driven content. Script-based rendering relies on imperative browser APIs which become unwieldy to use as an application's complexity grows. An increasingly common solution is to use libraries and frameworks which provide an abstraction over rendering and enable a less error-prone declarative programming model. The details of how web frontend frameworks implement rendering vary widely and can potentially have significant consequences for application performance. Frameworks' rendering strategies are typically invisible to the application developer, and may consequently be poorly understood despite their potential impact. In this thesis, we review rendering strategies used in a number of influential and popular web frontend frameworks. By studying their implementation details, we discover ways to categorize and estimate rendering strategies' performance based on input sizes in update loops. To verify and measure the effects of these differences, we implement a number of benchmarks that measure different aspects of rendering. In our benchmarks, we discover significant performance differences ranging up to an order of magnitude under some conditions. Additionally, we confirm that categorizing rendering strategies based on input sizes of update loops is an effective way to estimate their relative performance. The best performing rendering strategies are found to be ones which minimize input sizes in update loops using techniques such as compile-time optimization and reactive programming models.
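The abstract's central idea, that a rendering strategy's update-loop input size predicts its cost, can be sketched with a toy model; this is illustrative Python, not any real framework's code, and both class names are invented:

```python
class DirtyCheckRenderer:
    """Re-examines every binding on each update: work is O(model size)."""
    def __init__(self, model):
        self.model = model
        self.checks = 0

    def update(self, key, value):
        self.model[key] = value
        for _ in self.model:      # walk all bindings looking for changes
            self.checks += 1

class ReactiveRenderer:
    """Tracks subscriptions per key: work is O(changed bindings)."""
    def __init__(self, model):
        self.model = model
        self.checks = 0

    def update(self, key, value):
        self.model[key] = value
        self.checks += 1          # only the subscribed binding re-runs

model = {f"field{i}": 0 for i in range(1000)}
a = DirtyCheckRenderer(dict(model))
a.update("field0", 1)             # touches all 1000 bindings
b = ReactiveRenderer(dict(model))
b.update("field0", 1)             # touches only the changed binding
```

For a single changed field in a 1000-field model, the first strategy does 1000 units of work and the second does 1, mirroring the order-of-magnitude gaps the benchmarks observed.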
  • Meriläinen, Roosa (Helsingin yliopisto, 2020)
    In a world of constantly growing data masses, efficiently extracting, saving, and accessing data for business intelligence and analytics has become increasingly important to businesses. Analytics and business intelligence software is offered by many providers on the market for organizations of all sizes, and there are multiple ways to build an analytics system, or pipeline, from scratch or integrated with tools available on the market. In this case study we explore and re-design the analytics pipeline solution of a medium-sized software product company by utilizing the design science research methodology. We discuss the current technologies and tools on the market for business intelligence and analytics and consider how they fit into our case study context. As design science suggests, we design, implement and evaluate two prototypes of an analytics pipeline with an Extract, Transform and Load (ETL) solution and a data warehouse. The prototypes represent two different approaches to building an analytics pipeline: an in-house approach and a partially outsourced approach. Our study brings out typical challenges similar businesses may face when designing and building their own business intelligence and analytics software. In our case we lean towards an analytics pipeline with an outsourced ETL process, to be able to pass various types of event data with a consistent data schema into our data warehouse with minimal maintenance work. However, we also show the value of near real-time analytics with an in-house solution, and offer some ideas on how such a pipeline may be built.
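The pipeline shape the abstract describes can be sketched minimally; the function names and the three-field event schema below are illustrative assumptions, not the thesis's actual design:

```python
# Minimal in-house ETL sketch: extract raw events, normalize them to one
# consistent schema, and load them into a warehouse table. The point the
# abstract makes is that transform() is where heterogeneous event types
# are forced into a schema the warehouse can rely on.

def extract(raw_events):
    """Pretend source: a list of heterogeneous event dicts."""
    return list(raw_events)

def transform(events):
    """Coerce every event to one schema so the warehouse stays consistent."""
    schema = ("user_id", "event_type", "timestamp")
    return [{field: e.get(field) for field in schema} for e in events]

def load(rows, warehouse):
    """Append normalized rows to the warehouse table."""
    warehouse.extend(rows)
    return len(rows)

warehouse = []
raw = [{"user_id": 1, "event_type": "login",
        "timestamp": "2020-01-01", "extra": "x"}]
load(transform(extract(raw)), warehouse)   # "extra" is dropped by the schema
```

Outsourcing the ETL step, as the case company leaned towards, essentially means buying the `transform` stage as a managed service so that schema maintenance is not in-house work.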
  • Huotala, Aleksi (Helsingin yliopisto, 2021)
    Isomorphic web applications combine the best parts of static Hypertext Markup Language (HTML) pages and single-page applications. An isomorphic web application shares code between the server and the client. However, there is not much existing research on isomorphic web applications, even though improving the performance, user experience and development experience of web applications are popular research topics in computer science. This thesis studies the benefits and challenges of isomorphism in single-page applications through a gray literature review and a case study. The articles used in the gray literature review were searched from four different websites, and a quality assessment process was conducted to make sure the gray literature could be used in this study. The case study was conducted as a developer survey, in which developers familiar with isomorphic web applications were interviewed. The results of the two studies are then compared and the key findings summarized. The results show that isomorphism in single-page applications brings benefits to both developers and end-users. Isomorphism in single-page applications is challenging to implement and has some downsides, but they mostly affect developers. The performance and search engine optimization of the application are improved. Implementing isomorphism makes it possible to share code between the server and the client, but it increases the complexity of the application. Framework and library compatibility are issues that must be addressed by the developers. The findings of this thesis give developers motivation to implement isomorphism when starting a new project or when transforming existing single-page applications to use isomorphism.
  • Ritala, Susanna (Helsingin yliopisto, 2021)
    Chatbots have been developed for decades, but current interest in them has grown with advances in technology. Chatbots serve people for different purposes, and their operation is based on conversation with a human. Chatbots offer personal service at every hour of the day, which is why the need for them has increased in many fields, such as online sales and healthcare. In chatbot development it is important to consider how they are implemented, as many users still prefer other information sources for solving their problems. One way to measure the quality of chatbot systems is to study their user experience. This thesis empirically examines the user experience of chatbot applications. The empirical part consists of a qualitative study that seeks to answer the following research question: How could the user experience of chatbots be improved? The study was organized with the Osaamisbotti service, which provided a test environment for conducting the research. Eight participants took part, each completing a given task by conversing with a chatbot. The study's data was obtained through protocol analysis and a subsequent interview. The results show that human-like conversational abilities, longer responses and an efficient conversation flow improve the user experience of chatbots. In addition, sufficient informing of the user guides the conversation and avoids error situations. Good availability and ease of use increase chatbots' acceptance and adoption. The results of the thesis can be used in future research and in chatbot development work.
  • Martesuo, Kim (Helsingin yliopisto, 2019)
    Creating a user interface (UI) is often a part of software development. In the software industry, designated UI designers work side by side with developers in agile software development teams. While agile software processes have been researched, there is no general consensus on how UI designers should be integrated with the development team. The existing research points towards the industry favoring tight collaboration between developers and UI designers by having them work together in the same team. The subject is gathering interest, and different ways of integration are appearing in the industry. In this thesis we researched the collaboration between developers and UI designers in agile software development. The goal was to understand the teamwork between UI designers and developers working in the same agile software teams. The research was conducted through individual semi-structured theme interviews with UI designers and developers. The interviewees were from consulting firms located in the Helsinki metropolitan area in Finland, and each reported on a recent project where they worked in an agile software team consisting of UI designers and developers. The data from the interviews was compared to the literature. The results of the interviews were similar to the findings from the literature for the most part. Finding a suitable process for the teamwork, co-location, good social relations and an atmosphere of trust were factors present in both the literature and the interviews. The importance of good software tools for communicating designs, and of developers taking part in the UI design process, stood out from the interviews.
  • Ahonen, Heikki (Helsingin yliopisto, 2020)
    The research group dLearn.Helsinki has created software for assessing the work life competence skills of a person working as part of a group. The software is a research tool for developing the mentioned skills of its users, who can be of any age, from school children to employees in a company. As the users can be of different age groups, the data privacy of these groups has to be considered from different aspects. Children are more vulnerable than adults and may not understand all the risks imposed towards them. Thus, in the European Union the General Data Protection Regulation (GDPR) determines that the privacy and data of children are more protected, and this has to be taken into account when designing software which uses said data. For dLearn.Helsinki this caused changes not only in the data handling of children, but also of other users. To tackle this problem, existing and future use cases needed to be planned and possibly implemented. Another solution was to implement different versions of the software, where the organizations would be separate. One option would be determining organizational differences in the existing SaaS solution. The other option would be creating on-premise versions, where organizations would be locked in according to the customer type. This thesis introduces said use cases, as well as installation options for both SaaS and on-premise versions. With these, broader views of data privacy and the different approaches are investigated, and it can be concluded that no matter the approach, the data privacy of children will always prove a challenge.
  • Bui, Minh (Helsingin yliopisto, 2021)
    Background. In API requests to a confidential data system, there are always sets of rules that users must follow to retrieve desired data within their granted permissions. These rules are made to ensure the security of the system and limit all possible violations. Objective. This thesis is about detecting violations of these rules in such systems. For any violation found, the request is considered to contain an inconsistency, and it must be fixed before any data is retrieved. The thesis also looks for all diagnoses of inconsistent requests; these diagnoses support reconstructing the requests to remove any inconsistency. Method. We chose the design science research methodology to work on solutions. In this methodology, the current problem in distributing data from a smart building serves as the main motivation. System design and development are then carried out to demonstrate the practicality of the found solutions, while a testing system is built to confirm their validity. Results. Inconsistency detection is treated as a diagnostic problem, and many algorithms have been developed over the decades to solve diagnostic problems. These algorithms are based on directed acyclic graph (DAG) algorithms and have been adapted for different purposes. This thesis builds on these algorithms and constraint programming techniques to resolve the issues facing the given confidential data system. Conclusions. A combination of constraint programming techniques and DAG algorithms for diagnostic problems can be used to detect inconsistencies in API requests. Despite the need for performance improvements in the application of these algorithms, the combination works effectively and can resolve the research problem.
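The diagnosis idea the abstract alludes to, finding minimal sets of request elements whose removal resolves every violated rule, can be sketched as a brute-force minimal hitting set search. This is illustrative only: the thesis combines DAG algorithms with constraint programming rather than exhaustive enumeration, and all names below are invented:

```python
from itertools import combinations

def diagnoses(conflicts, universe):
    """Return the minimal-cardinality hitting sets of the conflict sets.

    Each conflict is a set of request elements that cannot all be kept;
    a diagnosis is a smallest set of elements that intersects every conflict,
    i.e. removing those elements from the request resolves all violations.
    """
    hits = []
    for size in range(1, len(universe) + 1):
        for cand in combinations(sorted(universe), size):
            if all(set(cand) & c for c in conflicts):
                hits.append(set(cand))
        if hits:                      # stop at the smallest size that works
            return hits
    return []

# Two rule violations over request elements a, b, c: {a,b} and {b,c}.
# Dropping just "b" resolves both, so {"b"} is the single minimal diagnosis.
conflicts = [{"a", "b"}, {"b", "c"}]
result = diagnoses(conflicts, {"a", "b", "c"})
```

Real diagnosis engines avoid this exponential enumeration, e.g. by exploring a hitting-set DAG with pruning, which is why the abstract emphasizes DAG-based algorithms.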
  • Talonpoika, Ville (Helsingin yliopisto, 2020)
    In recent years, virtual reality devices have entered the mainstream with many gaming-oriented consumer devices. However, the locomotion methods utilized in virtual reality games are yet to gain a standardized form, and different types of games have different requirements for locomotion to optimize player experience. In this thesis, we compare some popular and some uncommon locomotion methods in different game scenarios. We consider their strengths and weaknesses in these scenarios from a game design perspective and offer suggestions on which kinds of locomotion methods would be optimal for different game types. We conducted an experiment with ten participants, seven locomotion methods and five virtual environments to gauge how the locomotion methods compare against each other, utilizing game scenarios requiring timing and precision. Our experiment, while small in scope, produced results we could use to construct useful guidelines for selecting locomotion methods for a virtual reality game. We found that the arm swinger was a favourite for situations where precision and timing were required. Touchpad locomotion was also considered one of the best for its intuitiveness and ease of use. Teleportation is a safe choice for games not requiring a strong feeling of presence.
  • Kangas, Vilma (Helsingin yliopisto, 2020)
    Software testing is an important process for ensuring a program's quality. However, testing has not traditionally been a very substantial part of computer science education. Some attempts to integrate it into the curriculum have been made, but best practices still prove to be an open question. This thesis discusses multiple attempts at teaching software testing over the years. It also introduces CrowdSorcerer, a system for gathering programming assignments with tests from students, which has been used in introductory programming courses at the University of Helsinki. To study whether students benefit from creating assignments with CrowdSorcerer, we analysed the number of assignments and tests they created and whether these correlate with their performance on a testing-related question in the course exam. We also gathered feedback from the students on their experiences of using CrowdSorcerer. Looking at the results, it seems that more research on how to teach testing would be beneficial, and that improving CrowdSorcerer would also be worthwhile.
  • Seppänen, Jukka-Pekka (Helsingin yliopisto, 2021)
    In the dentistry degree programme of the University of Helsinki, clinical training records are tracked with assorted Excel spreadsheets and paper forms. These records are part of a student's progress towards working life, and after completing the required records students are granted the right to work as a dentist. The problems with the current system are the difficulty of tracking students' degree progress and, from the student's perspective, the weak guarantees of their legal protection: the public visibility of the Excel spreadsheets among students enables misuse in which a student alters another student's records. This thesis examines architectural solutions with which the record tracking can be digitalized in the future. As its outcome, the thesis recommends a database and an application architecture model for the system. Because the number of users is very small and use of the system is occasional, the system does not need to be particularly scalable. For the student's legal protection it is essential that every record a student completes is stored in the database and that the database state remains stable throughout the system's life cycle. It is therefore advisable to choose a relational database such as PostgreSQL, which in addition to the relational model supports flexible structures familiar from document databases. As the architecture model, it is advisable to use either a monolith, in which the system is built on a single interface, or alternatively microservices, in which the system is divided into three separate microservices.
  • Baumgartner, Axel (Helsingin yliopisto, 2021)
    In spring 2020 the coronavirus epidemic forced a large part of the population to work remotely. In the ICT field remote work is not unusual and has been studied extensively, especially from the perspective of globally distributed software development teams. What characterizes the remote work forced by the coronavirus epidemic is the rapid and unexpected shift from on-site to remote work, which this thesis examines more closely. The thesis focuses on development teams that use agile software development methods. Research material on agile software development serves as the background and is compared with the results of a case study. The case study investigates phenomena that arose in the target team during the transition to remote work. The goal is to identify phenomena that deviate from the background material and to derive topics for further research from them. The case study shows that the tools in use are suitable for both on-site and remote work. The transition went without major problems and work has continued without interruption. The problems center on communication and its decrease. The role and definition of agile routines is emphasized in remote work as well. Possible further research topics include the use of a virtual whiteboard and of a continuous voice channel.
  • Männistö, Jouni (Helsingin yliopisto, 2021)
    The micro frontend architecture is a special case of the microservice architecture in which a microservice's role includes producing not only data but also a user interface with its functionality. So far, little literature has been published on this architectural model, and the presented models are mostly prototype implementations. This thesis examines a micro frontend solution produced in a particular software project within a design science framework. First, the operating environment, the problems, and the requirements arising from them are described. A solution based on the Web Components technology is then presented, and its validity is evaluated both on the basis of experience from its production use and with the ATAM evaluation method. The results show, among other things, that Web Components enable direct and effective use of the programming interfaces defined by the HTML standard in web development, without the need for frameworks. In addition, the view that developing a micro frontend architecture makes sense only for organizational reasons is questioned: there can also be strong grounds for it, for example due to requirements placed on the software's modifiability, and it can also be implemented by a small team of developers.
  • Haatanen, Heini (Helsingin yliopisto, 2020)
    Robotic process automation (RPA) is a set of technologies with which routine processes in office work can be automated. The main goal of using RPA is to replace people's repetitive and dull office tasks with automation. When a software robot performs a task's process, it imitates human actions. The topic of this master's thesis is the prerequisites for making use of robotic process automation. The goal of the study is to examine what skills and capabilities an organization needs in order to automate its business with RPA. The results can be used in future RPA automation efforts. This study is a qualitative, semi-structured interview study. The data was collected through interviews conducted with the help of a questionnaire, supported by a literature review; snowball sampling was used to gather background information. The prerequisites were studied from three perspectives: the process, the expert, and the organization. The results accordingly describe these areas separately, although all of them are closely related. Based on the results, an important requirement from the organization's perspective is that the process to be automated can be described precisely and can be automated at all. Experts are generally required to have logical thinking, an understanding of technology, and communication skills; the required skills depend on the expert's field, but regardless of role a basic understanding of programming and RPA tools is required. The process must be routine, repetitive, and rule-based, and the automation must be profitable. The results of the study can be used by anyone considering automating work tasks with robotic process automation.
ACM Computing Classification System (CCS): • Social and professional topics~Professional topics~Computing and business~Automation • Information systems~Information systems applications~Process control systems
  • Laukkanen, Olli (Helsingin yliopisto, 2020)
    Decision-making is an important part of all software development, and this is especially true in the context of software architecture. Software architecture can even be thought of as a set of architectural decisions, so decision-making plays a large part in shaping the architecture of a system. This thesis studies architecturally significant decision-making in the context of a software development project. It presents the results of a case study whose primary source of data was interviews. The case is a single decision made in the middle of a subcontracted project, involving the development team and several stakeholders from the client, including architects. The decision was handled quickly by the development team when an acute need for a decision arose. The work relating to the decision-making was mostly done within the agile development process used by the development team; only the final approval from the client was done outside the development process. This final approval was given after the decision had already been made in practice and an implementation based on it had been built, which illustrates how difficult it is to incorporate outside decision-making into software development. The decision-making also involved a division of labour in which one person did the researching and preparing of the decision, constituting most of the work done relating to it. This type of division of labour may perhaps generalize to other decision-making within software development.
  • Lassila, Atte (Helsingin yliopisto, 2019)
    Modern software systems increasingly consist of independent services that communicate with each other through their public interfaces. Requirements for systems are thus implemented through communication and collaboration between a system's different services. This creates challenges in how each requirement is to be tested. One approach to testing the communication procedures between different services is end-to-end testing, with which a system consisting of multiple services can be tested as a whole. However, end-to-end testing has notable disadvantages: the tests are difficult to write and maintain. When end-to-end testing should be adopted is thus not clear. In this research, an artifact for continuous end-to-end testing was designed and evaluated in use at a case company. Using the results gathered from building and maintaining the design, we evaluated what requirements, advantages and challenges are involved in adopting end-to-end testing. Based on the results, we conclude that end-to-end testing can confer significant improvements over manual testing processes. However, because of the equally significant disadvantages of end-to-end tests, their scope should be limited and alternatives should be considered. To alleviate the challenges of end-to-end testing, investment in improving interfaces, as well as deployment tools, is recommended.
  • Speer, Jon (Helsingin yliopisto, 2020)
    The techniques used to program quantum computers are somewhat crude. As quantum computing progresses and becomes mainstream, a more efficient method of programming these devices would be beneficial. We propose a method that applies today's programming techniques to quantum computing, with program equivalence checking used to discern between code suited for execution on a conventional computer and on a quantum computer. This process involves determining a quantum algorithm's implementation using a programming language. This so-called benchmark implementation can be checked against code written by a programmer, with semantic equivalence between the two implying that the programmer's code should be executed on a quantum computer instead of a conventional computer. Using a novel compiler optimization verification tool named CORK, we test for semantic equivalence between a portion of Shor's algorithm (representing the benchmark implementation) and various modified versions of this code (representing arbitrary code written by a programmer). Some of the modified versions are intended to be semantically equivalent to the benchmark, while others are semantically inequivalent. Our testing shows that CORK is able to correctly determine semantic equivalence or inequivalence in a majority of cases.
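As a loose illustration of the benchmark-versus-candidate idea (CORK itself verifies equivalence symbolically, not by exhaustive testing), two implementations of modular exponentiation, the classical workhorse inside Shor's algorithm, can be compared over a finite input domain; all names below are invented:

```python
def modexp_loop(base, exp, mod):
    """Benchmark-style reference: repeated modular multiplication."""
    result = 1
    for _ in range(exp):
        result = (result * base) % mod
    return result

def modexp_builtin(base, exp, mod):
    """Candidate implementation: Python's built-in 3-argument pow."""
    return pow(base, exp, mod)

def equivalent_on(f, g, domain):
    """Testing-based proxy for semantic equivalence over a finite domain."""
    return all(f(*args) == g(*args) for args in domain)

domain = [(b, e, m) for b in range(1, 5)
                    for e in range(0, 5)
                    for m in range(2, 5)]
```

A real equivalence checker must establish agreement on all inputs, which is what makes the problem hard and a tool like CORK interesting; the finite-domain check above can only refute equivalence, never prove it.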
  • Sipilä, Suvi (Helsingin yliopisto, 2020)
    Clean and high-quality code affects the maintainability of software throughout the software lifecycle, and cleanliness and high quality should be pursued from the software development phase onwards. Nowadays software is developed rapidly, which is why the code must be easy to maintain: when it is, it can in principle be maintained by any software developer. This thesis conducted a literature review of clean and high-quality code, aiming to find out what constitutes clean and high-quality code at the class and function level. The purpose of the thesis was to explain why clean and high-quality code is necessary and how it can be improved with different tools such as metrics, refactoring, code review, and unit tests. The thesis also included a survey of software developers, which sought an answer to how clean and high-quality code practices are implemented in working life from the perspective of software developers. 103 software professionals responded to the survey. Based on the responses, 82.5% of respondents felt that they always or usually write clean and high-quality code. The main reasons why clean and high-quality code cannot be written were the challenges of old codebases and schedule pressures. Writing code is a very people-oriented job, so we must understand the code and its purpose. The code must be simple and carefully written. When code is clean and high-quality, it is easier to read and understand, and thus easier to maintain.
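A small, invented example (not from the thesis) of the function-level cleanliness discussed above: the same logic before and after giving the magic number a name and the function an intention-revealing signature:

```python
# Harder to maintain: vague name, unexplained constant, opaque loop.
def calc(xs):
    r = []
    for x in xs:
        if x > 100:
            r.append(x * 0.9)
    return r

# Cleaner: named constants and descriptive identifiers document the intent,
# so any developer can maintain it without guessing what 100 and 0.9 mean.
BULK_DISCOUNT_THRESHOLD = 100
BULK_DISCOUNT_RATE = 0.9

def apply_bulk_discounts(prices):
    """Return discounted prices for items above the bulk threshold."""
    return [price * BULK_DISCOUNT_RATE
            for price in prices
            if price > BULK_DISCOUNT_THRESHOLD]
```

Both functions compute the same result; the difference is purely in how much a future reader must reconstruct, which is the maintainability argument the thesis makes.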
  • Siipola, Tuomas (Helsingin yliopisto, 2021)
    Programming languages are the foundation of the digital infrastructure with which programmers and companies build, among other things, information systems and various products. The development, use, and theoretical properties of programming languages have been studied extensively, but there is still little literature on the continued evolution of widely used programming languages and the decision-making it involves. This thesis studies the evolution of programming languages as a social phenomenon, with particular interest in the processes used to coordinate development work and the roles in which participants in the process act. The thesis focuses on the Rust programming language, developed in an open-source community, whose changes and direction are discussed and decided through a public Request for Comments process. Using descriptive statistics and social network analysis, the study examines how the process works in practice and how participants influence its course.
  • Mäkelä, Nicolas (Helsingin yliopisto, 2020)
    The goal of real-time rendering is to produce synthetic and usually photorealistic images from a virtual scene as part of an interactive application. A scene is a set of light sources and polygonal objects. Photorealism requires a realistic simulation of light, but this contains a recursive problem: light rays can bounce between objects countless times. The objects can contain hundreds of thousands of polygons, so they cannot be processed recursively in real time. The subject of this thesis is a voxel-based lighting method, where the polygonal scene is processed into a voxel grid. When calculating the indirect bounces of light, we can process a small number of voxels instead of the vast number of polygons. The method was introduced in 2011, but it didn't gain much popularity due to its performance requirements. In this thesis, we studied the performance and image quality of the voxel-based lighting algorithm with a modern, low-cost graphics card. The study was conducted through design research. The artefact is a renderer that produces images with the voxel-based algorithm. The results show that the algorithm is capable of a high frame rate of 60 frames per second at a resolution of 1920x720 pixels. However, the algorithm consumes most of the time spent forming the image, which doesn't leave much time to simulate a game world, for example. In addition, the voxelization of the scene is a slow operation, which would require application-specific optimizations in order to be performed every frame and support dynamically moving objects. The image quality improves greatly when compared to a scene that doesn't calculate indirect light bounces, but there is a problem of light bleeding through solid objects.
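The voxelization step can be sketched in miniature; this is illustrative Python, not the thesis's renderer, and the grid-as-dictionary representation is an assumption made for clarity. Scene points are bucketed into a coarse grid so that later lighting passes touch a few voxels rather than every polygon:

```python
def voxel_index(point, scene_min, voxel_size):
    """Map a 3D point to the integer coordinates of its containing voxel."""
    return tuple(int((p - m) // voxel_size)
                 for p, m in zip(point, scene_min))

def voxelize(points, scene_min, voxel_size):
    """Bucket scene sample points into a sparse voxel grid.

    Real voxelization rasterizes triangles into voxels on the GPU; sample
    points stand in for geometry here to keep the sketch short.
    """
    grid = {}
    for p in points:
        grid.setdefault(voxel_index(p, scene_min, voxel_size), []).append(p)
    return grid

points = [(0.1, 0.2, 0.3), (0.2, 0.1, 0.2), (1.5, 0.0, 0.0)]
grid = voxelize(points, scene_min=(0, 0, 0), voxel_size=1.0)
# two points fall in voxel (0, 0, 0), one in voxel (1, 0, 0)
```

The cost trade-off the abstract describes lives in this step: the grid makes indirect lighting cheap to query, but rebuilding it every frame for moving objects is what the thesis found to be slow.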