Browsing by Subject "algorithms"


Now showing items 1-18 of 18
  • Bhattacharjee, Joy; Rabbil, Mehedi; Fazel, Nasim; Darabi, Hamid; Choubin, Bahram; Khan, Md. Motiur Rahman; Marttila, Hannu; Haghighi, Ali Torabi (Elsevier, 2021)
    Science of the Total Environment 797 (2021), 149034
Lake water level fluctuation is a function of hydro-meteorological components, namely inputs to and outputs from the system. The combination of these components from in-situ and remote sensing sources has been used in this study to define multiple scenarios, which are the major explanatory pathways to assess lake water levels. The goal is to analyze each scenario through the application of the water balance equation to simulate lake water levels. The largest lake in Iran, Lake Urmia, has been selected in this study as it needs a great deal of attention in terms of water management issues. We ran a monthly water balance simulation of nineteen scenarios for Lake Urmia from 2003 to 2007 by applying different combinations of data, including observed and remotely sensed water level, flow, evaporation, and rainfall. We used readily available water level data from the Hydrosat, Hydroweb, and DAHITI platforms; evapotranspiration from MODIS and rainfall from TRMM. The analysis suggests that using field data as the initial water level in the algorithm reproduces the fluctuation of the Lake Urmia water level best. The scenario that combines in-situ meteorological components is the closest match to the observed water level of Lake Urmia. Almost all scenarios showed good dynamics with respect to the field water level, but we found that nine out of nineteen scenarios did not differ significantly in terms of dynamics. The results also reveal that, even without any field data, the proposed scenario consisting entirely of remote sensing components is capable of estimating water level fluctuation in a lake. The analysis also underlines the necessity of using proper data sources for water regulation and managerial decisions, and for understanding the temporal phenomenon, not only for Lake Urmia but also for other lakes in semi-arid regions.
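As an illustration of the kind of monthly water balance bookkeeping the abstract above describes, a minimal sketch follows; the state update, the input fields and the level-storage conversion callables are generic assumptions, not the authors' actual model or data.

```python
# Minimal monthly water-balance sketch (illustrative only, not the authors' model).
# Assumes lake storage is updated as: dS = inflow - outflow + (rain - evap) * area,
# and that a hypsometric (level-storage) relation converts storage back to level.

def simulate_levels(initial_level, months, level_to_storage, storage_to_level):
    """Simulate monthly lake levels from hydro-meteorological inputs.

    `months` is a list of dicts with keys 'inflow', 'outflow', 'rain', 'evap', 'area'
    (in consistent volume/depth units); the two conversion callables are assumed to
    come from the lake's level-storage curve.
    """
    storage = level_to_storage(initial_level)
    levels = []
    for m in months:
        storage += m["inflow"] - m["outflow"] + (m["rain"] - m["evap"]) * m["area"]
        storage = max(storage, 0.0)          # a lake cannot hold negative volume
        levels.append(storage_to_level(storage))
    return levels
```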
  • Velkova, Julia; Kaun, Anne (2021)
    The article constitutes a critical intervention in the current, dramatic debate on the consequences of algorithms and automation for society. While most research has focused on negative outcomes, including ethical problems of machine bias and accountability, little has been said about the possibilities of users to resist algorithmic power. The article draws on Raymond Williams’ work on media as practice to advance a framework for studying algorithms with a focus on user agency. We illustrate this framework with the example of the media activist campaign World White Web by the Swedish artist and visual designer Johanna Burai. We suggest that user agency in relation to algorithms can emerge from alternative uses of platforms, in the aftermath of algorithmic logics, and give birth to complicit forms of resistance that work through ‘repair’ politics oriented towards correcting the work of algorithms. We conclude with a discussion of the ways in which the proposed framework helps us rethink debates on algorithmic power.
  • Gritsenko, Daria; Markham, Annette; Pötzsch, Holger; Wijermars, Marielle (2022)
    This introduction to the special issue on algorithmic governance in context offers an outline of the field and summarizes each contribution to the issue.
  • Yli-Jyrä, Anssi Mikael (Northern European Association for Language Technology, 2011)
    NEALT Proceedings Series
The paper reconceptualizes Constraint Grammar as a framework where the rules refine the compact representations of local ambiguity while the rule conditions are matched against a string of feature vectors that summarize the compact representations. Both views of the ambiguity are processed with pure finite-state operations. The compact representations are mapped to feature vectors with the aid of a rational power series. This magical interconnection is no less pure than a prevalent interpretation that requires that the reading set provided by a lexical transducer be magically linearized to a marked concatenation of readings given to pure transducers. The current approach has several practical benefits, including an inward-deterministic way to compute, represent and maintain all the applications of the rules in the sentence.
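The following toy sketch only illustrates the core idea of summarizing a token's reading set as a feature vector that rule conditions are matched against; the paper's pure finite-state construction and the rational power series mapping are not reproduced, and all names below are invented for illustration.

```python
# Toy sketch: a token's set of readings (its local ambiguity) is summarized as a
# feature vector of tags, against which rule conditions are matched, and a rule
# refines the reading set.  Plain Python, not the paper's finite-state machinery.

def feature_vector(cohort):
    """Summarize a set of readings (each a frozenset of tags) as the set of tags present."""
    return frozenset(tag for reading in cohort for tag in reading)

def apply_remove_rule(cohorts, target_tag, condition_tag, offset):
    """REMOVE readings carrying `target_tag` when the cohort at `offset` shows `condition_tag`."""
    out = []
    for i, cohort in enumerate(cohorts):
        j = i + offset
        if 0 <= j < len(cohorts) and condition_tag in feature_vector(cohorts[j]):
            refined = {r for r in cohort if target_tag not in r} or cohort  # never empty a cohort
            out.append(refined)
        else:
            out.append(cohort)
    return out

# Example: "run" is N/V ambiguous; a preceding determiner removes the V reading.
sentence = [{frozenset({"the", "DET"})},
            {frozenset({"run", "N"}), frozenset({"run", "V"})}]
print(apply_remove_rule(sentence, target_tag="V", condition_tag="DET", offset=-1))
```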
  • Kinnunen, Lauri (Helsingin yliopisto, 2022)
This thesis is a review of articles focusing on software-assisted floor plan design for architecture. I group the articles into optimization, case-based design, and machine learning, based on their use of prior examples. I then look into each category and further classify articles along dimensions relevant to their overall approach. Case-based design was a popular research field in the 1990s and early 2000s, when several large research projects were conducted; since then, however, the research has slowed down. Over the past 20 years, optimization methods for solving architectural floor plans have been researched extensively using a number of different algorithms and data models. The most popular approach is to use a stochastic optimization method such as a genetic algorithm or simulated annealing. More recently, a number of articles have investigated the possibility of using machine learning on architectural floor plans. The advent of neural networks and GAN models, in particular, has spurred a great deal of new research. Despite considerable research efforts, assisted floor plan design has not found its way into commercial applications. To aid industry adoption, more work is needed on the integration of computational design tools into existing design workflows.
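Since the review identifies stochastic optimization, and simulated annealing in particular, as the most popular approach, a generic annealing skeleton is sketched below; the `cost` and `random_neighbor` callables are hypothetical placeholders rather than any specific floor-plan formulation from the reviewed articles.

```python
import math
import random

# Generic simulated-annealing skeleton of the kind used in the reviewed floor-plan work.
# In a floor-plan setting the state could be a list of room rectangles, and the cost
# could penalize overlaps, unmet area requirements and missing adjacencies.

def simulated_annealing(initial_state, cost, random_neighbor,
                        t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_temp=100):
    state, best = initial_state, initial_state
    t = t_start
    while t > t_end:
        for _ in range(steps_per_temp):
            candidate = random_neighbor(state)
            delta = cost(candidate) - cost(state)
            # Always accept improvements; accept worsening moves with a temperature-
            # dependent probability so the search can escape local minima.
            if delta < 0 or random.random() < math.exp(-delta / t):
                state = candidate
                if cost(state) < cost(best):
                    best = state
        t *= alpha          # geometric cooling schedule
    return best
```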
  • Koppatz, Maximilian (Helsingin yliopisto, 2022)
Automatic headline generation has the potential to significantly assist editors charged with headlining articles. Approaches to automation in the headlining process range from tools that act as creative aids to complete end-to-end automation. The latter is difficult to achieve, as the journalistic requirements imposed on headlines must be met with little room for error, and the requirements depend on the news brand in question. This thesis investigates automatic headline generation in the context of the Finnish newsroom. The primary question I seek to answer is how well the current state of text generation using deep neural language models can be applied to the headlining process in Finnish news media. To answer this, I have implemented and pre-trained a Finnish generative language model based on the Transformer architecture. I have fine-tuned this language model for headline generation as autoregression of headlines conditioned on the article text. I have designed and implemented a variation of the Diverse Beam Search algorithm, with additional parameters, to generate a diverse set of headlines for a given text. The evaluation of the generative capabilities of this system was done with real-world usage in mind. I asked domain experts in headlining to evaluate a generated set of text-headline pairs; the task was to accept or reject the individual headlines on key criteria. The responses of this survey were then quantitatively and qualitatively analyzed. Based on the analysis and feedback, this model can already be useful as a creative aid in the newsroom despite being far from ready for automation. I have identified concrete improvement directions based on the most common types of errors, and this provides interesting future work.
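A much-simplified sketch of group-based Diverse Beam Search follows, to illustrate the decoding scheme the thesis builds on; `next_token_logprobs` is a hypothetical model interface (prefix -> {token: log-probability}), and the thesis's additional parameters are not reproduced here.

```python
from collections import Counter

# Simplified Diverse Beam Search: beams are split into groups, and each group is
# penalized for reusing tokens that earlier groups already chose at the same step.

def diverse_beam_search(next_token_logprobs, bos, eos, num_groups=3,
                        beams_per_group=2, diversity_strength=0.5, max_len=20):
    groups = [[([bos], 0.0)] for _ in range(num_groups)]   # (token sequence, score)
    for _ in range(max_len):
        used_this_step = Counter()                         # tokens picked by earlier groups
        for g in range(num_groups):
            candidates = []
            for prefix, score in groups[g]:
                if prefix[-1] == eos:                      # finished beams carry over
                    candidates.append((prefix, score))
                    continue
                for tok, logp in next_token_logprobs(prefix).items():
                    penalty = diversity_strength * used_this_step[tok]
                    candidates.append((prefix + [tok], score + logp - penalty))
            groups[g] = sorted(candidates, key=lambda c: c[1], reverse=True)[:beams_per_group]
            for prefix, _ in groups[g]:
                used_this_step[prefix[-1]] += 1
        if all(p[-1] == eos for grp in groups for p, _ in grp):
            break
    return [hyp for grp in groups for hyp in grp]
```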
  • Väliaho, Eemu-Samuli; Lipponen, Jukka A.; Kuoppa, Pekka; Martikainen, Tero J.; Jäntti, Helena; Rissanen, Tuomas T.; Castren, Maaret; Halonen, Jari; Tarvainen, Mika P.; Laitinen, Tiina M.; Laitinen, Tomi P.; Santala, Onni E.; Rantula, Olli; Naukkarinen, Noora S.; Hartikainen, Juha E. K. (2022)
Aim: Atrial fibrillation (AF) detection is challenging because it is often asymptomatic and paroxysmal. We evaluated continuous photoplethysmogram (PPG) for signal quality and detection of AF. Methods: PPGs were recorded using a wrist-band device in 173 patients (76 AF, 97 sinus rhythm, SR) for 24 h. Simultaneously recorded 3-lead ambulatory ECG served as control. The recordings were split into 10-, 20-, 30-, and 60-min time-frames. The sensitivity, specificity, and F1-score of AF detection were evaluated for each time-frame. AF alarms were generated to simulate continuous AF monitoring. Sensitivities, specificities, and positive predictive values (PPVs) of the alarms were evaluated. User experiences of PPG and ECG recordings were assessed. The study was registered in the Clinical Trials database (NCT03507335). Results: The quality of the PPG signal was better during night-time than in daytime (67.3 ± 22.4% vs. 30.5 ± 19.4%, p < 0.001). The 30-min time-frame yielded the highest F1-score (0.9536), identifying AF correctly in 72/76 AF patients (sensitivity 94.7%), with only 3/97 SR patients receiving a false AF diagnosis (specificity 96.9%). The sensitivity and PPV of the simulated AF alarms were 78.2% and 97.2% at night, and 49.3% and 97.0% during the daytime. 82% of patients were willing to use the device at home. Conclusion: The PPG wrist-band provided reliable AF identification both during daytime and night-time. The PPG data quality was better at night. The positive user experience suggests that wearable PPG devices could be feasible for continuous rhythm monitoring.
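The per-patient detection metrics quoted above can be reproduced from a simple confusion matrix; the sketch below uses the 30-min time-frame counts reported in the abstract (72/76 AF patients detected, 3/97 SR patients falsely flagged) and assumes the F1-score was computed per patient from precision and recall.

```python
def classification_metrics(tp, fp, tn, fn):
    """Per-patient AF detection metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # share of AF patients correctly identified
    specificity = tn / (tn + fp)          # share of SR patients correctly classified
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# 30-min time-frame counts from the abstract: 72/76 AF detected, 3/97 SR falsely flagged.
print(classification_metrics(tp=72, fp=3, tn=94, fn=4))
# -> approximately (0.947, 0.969, 0.954), matching the reported values
```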
  • Mäkinen, Ville; Oksanen, Juha; Sarjakoski, Tapani (Stichting AGILE, 2019)
    The digital elevation model (DEM) is an invaluable product in numerous geospatial applications from orthorectification of aerial photographs to hydrological modelling and advanced 3D visualisation. With the current aerial laser scanning methods, superior quality digital elevation models can be produced over land areas, but surfaces over water bodies are visually problematic, especially for streams in 3D. We present a method to generate smooth, monotonically decreasing elevation surfaces over water bodies in DEMs. The method requires the point cloud data and the polygons delineating the water bodies as input data. We show how DEM visualisations improve by applying the presented method.
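A one-dimensional sketch of the monotonicity constraint is shown below: elevations sampled along a stream, ordered from upstream to downstream, are forced to be non-increasing with a running minimum. The actual method works on LIDAR point clouds and water-body polygons in two dimensions; this only illustrates the core idea.

```python
# 1-D illustration of a monotonically decreasing water surface along a stream:
# a running minimum removes any downstream rise in the sampled elevations.

def monotone_downstream(elevations):
    surface, current = [], float("inf")
    for z in elevations:               # upstream -> downstream order
        current = min(current, z)
        surface.append(current)
    return surface

print(monotone_downstream([102.4, 102.6, 101.9, 102.0, 101.5]))
# -> [102.4, 102.4, 101.9, 101.9, 101.5]
```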
  • Savolainen, Laura; Trilling, Damian; Liotsiou, Dimitra (2020)
    How do audiences make sense of and interact with political junk news on Facebook? How does the platform's "emotional architecture" intervene in these sense-making, interactive processes? What kinds of mediated publics emerge on and through Facebook as a result? We study these questions through topic modeling 40,500 junk news articles, quantitatively analyzing their engagement metrics, and a qualitative comment analysis. This exploratory research design allows us to move between levels of public discourse, zooming in from cross-outlet talking points to microsociological processes of meaning-making, interaction, and emotional entrainment taking place within the comment boxes themselves. We propose the concepts of delighting and detesting engagement to illustrate how the interplay between audiences, platform architecture, and political junk news generates a bivalent emotional dynamic that routinely divides posts into highly "loved" and highly "angering." We argue that high-performing (or in everyday parlance, viral) junk news bring otherwise disparate audience members together and orient their dramatic focus toward objects of collective joy, anger, or concern. In this context, the nature of political junk news is performative as they become resources for emotional signaling and the construction of group identity and shared feeling on social media. The emotions that animate junk news audiences typically refer back to a transpiring social relationship between two political sides. This affectively loaded "us" versus "them" dynamic is both enforced by Facebook's emotional architecture and made use of by junk news publishers.
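For readers unfamiliar with the method, a minimal topic-modeling sketch follows; it uses scikit-learn's LDA as a stand-in, and the preprocessing, topic count and tooling are assumptions rather than the study's actual choices.

```python
# Minimal topic-modeling sketch of the general kind used to surface cross-outlet
# talking points.  Every modeling choice below (vectorizer, topic count, LDA itself)
# is an assumption for illustration, not the study's pipeline.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_terms_per_topic(articles, n_topics=10, n_terms=8):
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    counts = vectorizer.fit_transform(articles)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    # For each topic, return the highest-weighted vocabulary terms.
    return [[vocab[i] for i in topic.argsort()[-n_terms:][::-1]]
            for topic in lda.components_]
```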
  • Bhattacharjee, Joy; Marttila, Hannu; Haghighi, Ali Torabi; Saarimaa, Miia; Tolvanen, Anne; Lepistö, Ahti; Futter, Martyn N.; Kløve, Bjørn (American Society of Civil Engineers, 2021)
    Journal of Irrigation and Drainage Engineering, 147(4), 04021006
Spatiotemporal information on historical peatland drainage is needed to relate past land use to observed changes in catchment hydrology. Comprehensive knowledge of the historical development of peatland management is largely lacking at the catchment scale. Aerial photos and light detection and ranging (LIDAR) data enlarge the possibilities for identifying past peatland drainage patterns. Here, our objectives are (1) to develop techniques for semiautomatically mapping the location of ditch networks in peat-dominated catchments using aerial photos and LIDAR data, and (2) to generate time series of drainage networks. Our approaches provide open-access techniques to systematically map ditches in peat-dominated catchments through time. We designed the algorithm so that it identifies ditch networks from raw aerial images and LIDAR data based on combinations of multiple filters and a number of threshold values. Such data are needed to relate spatiotemporal drainage patterns to observed changes in many northern rivers. We demonstrate our approach using data from the Simojoki River catchment (3,160 km²) in northern Finland. The catchment is dominated by forests and peatlands that were almost all drained after 1960. For two representative locations in cultivated peatland (downstream) and peatland forest (upstream) areas of the catchment, we found that the total ditch length density (km/km²), estimated from aerial images and LIDAR data with our proposed algorithm, varied from 2% to 50% compared with the monitored ditch length available from the National Land Survey of Finland (NLSF) in 2018. A different pattern of source variation in ditch network density was observed for whole-catchment estimates and for the available drained-peatland database from the Natural Resources Institute Finland (LUKE). Despite such differences, no significant differences were found using the nonparametric Mann-Whitney U test at the 0.05 significance level on the samples of pixel-identified ditches between (1) aerial images and NLSF vector files and (2) LIDAR data and NLSF vector files.
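The nonparametric comparison mentioned at the end of the abstract can be illustrated as follows; the density samples are hypothetical placeholders, not the study's data.

```python
# Sketch of a two-sided Mann-Whitney U test at the 0.05 level between ditch densities
# derived from two sources (e.g. aerial-image detections vs. NLSF vector files).
# The sample values below are invented for illustration.

from scipy.stats import mannwhitneyu

aerial_density = [2.1, 3.4, 1.8, 2.9, 3.1]   # km/km², placeholder samples
nlsf_density = [2.3, 3.0, 2.0, 2.7, 3.3]

stat, p_value = mannwhitneyu(aerial_density, nlsf_density, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}, differ at the 0.05 level: {p_value < 0.05}")
```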
  • Higuera Ornelas, Adriana (Helsingin yliopisto, 2022)
AI-driven innovation offers numerous possibilities for the public sector. The potential of digital advancements is already palpable within tax administrations. Automation is efficiently used for tax assessments, to perform compliance management, to enhance revenue collection, and to provide services to taxpayers. A digital transformation encompassing Big Data, advanced analytics and ADM systems promises significant benefits and efficiencies for tax administrations. It is essential that public organizations meet the necessary legal framework and safeguards to expand the use of these automated systems, since their sources of information, technical capacity, and extent of application have evolved. Using Finland as a case study, this research assesses the use of automated decision-making systems within the public sector. Constitutional and administrative legal principles serve as guidelines and constraints for administrative activity and decision-making. This study examines the lawfulness of the deployment of ADM systems in the field of taxation by looking at their compatibility with long-standing legal principles. Focus is given to the principles of the rule of law, due process, good administration, access to information, official accountability, confidentiality, and privacy. Numerous public concerns have been raised regarding the use of ADM systems in the public sector. Scholars, academics and journalists have justifiably pointed out the risks and limitations of ADM systems. Despite the legal challenges posed by automation, this research suggests that ADM systems used to pursue administrative objectives can fit with long-standing legal principles given appropriate regulation, design and human capacity.
  • Kemppainen, Esa (Helsingin yliopisto, 2020)
NP-hard optimization problems can be found in various real-world settings such as scheduling, planning and data analysis. Coming up with algorithms that can efficiently solve these problems can save various resources. Instead of developing problem-domain-specific algorithms, we can encode a problem instance as an instance of maximum satisfiability (MaxSAT), which is an optimization extension of Boolean satisfiability (SAT), and then solve instances resulting from this encoding using MaxSAT-specific algorithms. This way we can solve instances in various different problem domains by focusing on developing algorithms for MaxSAT instances. Computing an optimal solution and proving optimality of the found solution can be time-consuming in real-world settings, and finding an optimal solution for problems in these settings is often not feasible. Instead, we are only interested in finding a good-quality solution fast. Incomplete solvers trade guaranteed optimality for better scalability. In this thesis, we study an incomplete solution approach for solving MaxSAT based on linear programming relaxation and rounding. Linear programming (LP) relaxation and rounding has been used to obtain approximation algorithms for various NP-hard optimization problems; as such, we are interested in investigating the effectiveness of this approach on MaxSAT. We describe multiple rounding heuristics that are empirically evaluated on random, crafted and industrial MaxSAT instances from the yearly MaxSAT Evaluations. We compare the rounding approaches against each other and against the state-of-the-art incomplete solvers SATLike and Loandra. The LP relaxation based rounding approaches are not competitive in general against either SATLike or Loandra. However, for some problem domains our approach manages to be competitive against SATLike and Loandra.
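To make the approach concrete, a minimal sketch of the rounding step follows; it assumes LP values for the Boolean variables are already available and shows two generic rounding schemes, not the specific heuristics evaluated in the thesis.

```python
import random

# Sketch of the LP-relaxation-and-rounding idea for weighted MaxSAT: assume an LP
# solver has produced fractional values in [0, 1] for each Boolean variable, round
# them to a truth assignment, and score the satisfied soft clauses.

def threshold_round(lp_values, threshold=0.5):
    """Deterministic rounding: a variable is True iff its LP value exceeds the threshold."""
    return {v: x > threshold for v, x in lp_values.items()}

def randomized_round(lp_values):
    """Randomized rounding: a variable is True with probability equal to its LP value."""
    return {v: random.random() < x for v, x in lp_values.items()}

def satisfied_weight(clauses, assignment):
    """Clauses are (weight, [literals]) with positive/negative ints as literals."""
    total = 0
    for weight, literals in clauses:
        if any(assignment[abs(l)] == (l > 0) for l in literals):
            total += weight
    return total

# Toy instance: (x1 or not x2) with weight 3, (x2 or x3) with weight 2.
clauses = [(3, [1, -2]), (2, [2, 3])]
lp_values = {1: 0.7, 2: 0.4, 3: 0.9}     # pretend these came from the LP relaxation
print(satisfied_weight(clauses, threshold_round(lp_values)))   # -> 5
```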
  • Siipola, Ossi (Helsingin yliopisto, 2022)
Inverted indices are a core index structure for systems such as search engines and databases. An inverted index stores a mapping from terms, numbers, etc. to lists of locations in documents, sets of documents, databases, tables, etc., and allows efficient full-text searches on the indexed structure. The list of locations stored for a term in the inverted index is usually called a postings list. In real-life applications, the size of an inverted index can grow huge, so an efficient representation is needed while efficient queries must still be supported. This thesis explores ways to represent postings lists efficiently while allowing efficient nextGEQ queries on the set; efficient nextGEQ queries are needed to implement inverted indices. First we convert the postings lists into one bitvector, which concatenates each postings list's characteristic bitvector. Representing an integer set efficiently then reduces to representing this bitvector efficiently, and the bitvector is expected to have long runs of 0s and 1s. Run-length encoding of bitvectors has recently led to promising results. Therefore in this thesis we experiment with two encoding methods (Top-k Hybrid coder, RLZ) that encode postings lists via run-length encoding of the bitvector. We also investigate another new bitvector compression method (Zombit-vector), which encodes bitvectors by finding redundancies in runs of 0s and 1s. We compare all encodings to the current state-of-the-art Partitioned Elias-Fano (PEF) coding. The compression results of all encodings were more efficient than the current state-of-the-art PEF encoding. The Zombit-vector's nextGEQ query results were slightly more efficient than PEF's, which makes it more attractive for bitvectors that have long runs of 0s and 1s. More work is needed on the Top-k Hybrid coder and RLZ so that their nextGEQ queries can be compared to the Zombit-vector and PEF.
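A plain-Python sketch of a characteristic bitvector and the nextGEQ operation on it is given below; real implementations use compressed representations (such as PEF or the run-length encodings studied here) with rank/select support, so this only illustrates the interface being compared.

```python
# A postings list stored as a characteristic bitvector, with the nextGEQ(x) operation:
# return the smallest stored document id >= x.  Uncompressed and linear-scan only,
# purely to illustrate the interface.

class BitvectorPostings:
    def __init__(self, universe_size, postings):
        self.bits = [0] * universe_size
        for p in postings:
            self.bits[p] = 1

    def next_geq(self, x):
        """Return the smallest document id >= x in the postings list, or None."""
        for i in range(x, len(self.bits)):
            if self.bits[i]:
                return i
        return None

plist = BitvectorPostings(16, [1, 4, 7, 13])
print(plist.next_geq(5))    # -> 7
print(plist.next_geq(14))   # -> None
```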
  • Granroth, Julia (Helsingin yliopisto, 2020)
In my MA thesis I explore Finnish people's relationship to technology, and especially to information and communication technology, by discussing technological imaginaries. Imaginaries guide attention toward collective sense-making while they convey shared social values, norms and identities that are performed in different speech acts. Everyday algorithms are the starting point of my thesis. I framed the topic of algorithms and the technological imaginaries they produce with the theoretical discussion of datafication and dataveillance. As the influence of technology is only growing in our Western society, I am interested in observing its potential sociopolitical impact. My research question is how technological imaginaries affect society. I am interested in how technology-related future narratives exist and how these narratives are constructed, and in what they tell about Finnish society. My goal is to create a holistic understanding of living with quantified data and to analyze what values technological imaginaries might reveal. The research approach is anthropology of technology. The ethnographic focus is on Helsinki, Finland, and the ethnographic material consists of 39 semi-structured interviews, which are divided between two reference groups that I named 'everyday algorithms' and 'digital marketers.' The interviews were conducted in 2017 and 2018 and the ethnographic material was systematically analyzed with content analysis. The interviewees' affective and analytical responses depended on the topic at hand and on the perspectives through which the interviewees saw them. Technological determinism and dataism, which represent faith in technology as the source of progress and faith in data as objective, rational and good, were themes that emerged in the interviews. What became apparent is that technology is viewed as mediating social utopias, such as social equality, even when actual technology-mediated practices might not support those desires.
  • van Rheenen, Patrick F.; Aloi, Marina; Assa, Amit; Bronsky, Jiri; Escher, Johanna C.; Fagerberg, Ulrika L.; Gasparetto, Marco; Gerasimidis, Konstantinos; Griffiths, Anne; Henderson, Paul; Koletzko, Sibylle; Kolho, Kaija-Leena; Levine, Arie; van Limbergen, Johan; de Carpi, Francisco Javier Martin; Navas-Lopez, Victor Manuel; Oliva, Salvatore; de Ridder, Lissy; Russell, Richard K.; Shouval, Dror; Spinelli, Antonino; Turner, Dan; Wilson, David; Wine, Eytan; Ruemmele, Frank M. (2021)
Objective: We aimed to provide an evidence-supported update of the ECCO-ESPGHAN guideline on the medical management of paediatric Crohn's disease [CD]. Methods: We formed 10 working groups and formulated 17 PICO-structured clinical questions [Patients, Intervention, Comparator, and Outcome]. A systematic literature search from January 1, 1991 to March 19, 2019 was conducted by a medical librarian using MEDLINE, EMBASE, and Cochrane Central databases. A shortlist of 30 provisional statements was further refined during a consensus meeting in Barcelona in October 2019 and subjected to a vote. In total 22 statements reached ≥ 80% agreement and were retained. Results: We established that it was key to identify patients at high risk of a complicated disease course at the earliest opportunity, to reduce bowel damage. Patients with perianal disease, stricturing or penetrating behaviour, or severe growth retardation should be considered for up-front anti-tumour necrosis factor [TNF] agents in combination with an immunomodulator. Therapeutic drug monitoring to guide treatment changes is recommended over empirically escalating anti-TNF dose or switching therapies. Patients with low-risk luminal CD should be induced with exclusive enteral nutrition [EEN], or with corticosteroids when EEN is not an option, and require immunomodulator-based maintenance therapy. Favourable outcomes rely on close monitoring of treatment response, with timely adjustments in therapy when treatment targets are not met. Serial faecal calprotectin measurements or small bowel imaging [ultrasound or magnetic resonance enterography] are more reliable markers of treatment response than clinical scores alone. Conclusions: We present state-of-the-art guidance on the medical treatment and long-term management of children and adolescents with CD.
  • Ekholm, Malin (Helsingin yliopisto, 2020)
Algorithms are effective data-processing programs, which are being applied in an increasing number of contexts and areas of our lives. One such context is that of our working lives, where algorithms are being adapted to take over tasks previously performed by human workers. This has sparked discussion about the capabilities and agency of algorithmic technology, and also about whether or not technology will be replacing the human workforce. Public discussion has actively taken part in constructing both opportunities and fears related to algorithmic technology, but very little research exists about the impact of algorithmic technology at work. A lot of discussion has also centered around the agency of algorithms, as due to advances in technology, agency is no longer something only assigned to, or possessed by, human actors. While some research has been done on the construction of algorithm agency, very little research has been conducted to explore the phenomenon in the context of work. Research about adopting algorithms in companies is very scarce, and the gap is especially crucial from a social scientific perspective. The purpose of this thesis is to investigate how algorithmic agency (or lack thereof) is constructed in the discourse of five employees of an IT company that has applied an algorithm in its operations. I further want to investigate what consequences these constructs have on the work of the employees and the flow of agency in the company. The theoretical and methodological framework is rooted in social constructionism and discursive psychology, and the analysis focuses on the construction of accounts of agency in this context. In order to answer the research questions I conducted a semi-structured focused interview with each of the recruited employees. The results show that algorithmic agency is constructed in multifaceted ways and that several constructs of agency coexist in the discourse of the employees. The algorithm is constructed as an independent actor with agency, but this agency is also restricted by its human developers and by operational staff intervening in its decisions. While accounts of algorithmic agency exist, agency is also constructed as something possessed by the developers and the company, who develop the algorithm in order to reach certain goals. The results also show that the algorithm is constructed as both an enabler of and a restrictor on human agency, but that the adoption of the algorithm has also created new flows of agency, where agency flows from human to algorithm and vice versa. This thesis contributes to previous research on agency, algorithms and work by taking a contemporary, employee-centric perspective on agency not yet taken by previous research. In order to take into account the dynamic processes of agency when adopting algorithmic technology in companies, an extensive social scientific perspective is needed to inform organizational change. To achieve this, more qualitative research is needed to further understand the impact of automation on agency and other interpersonal dynamics.
  • Väliaho, Eemu-Samuli; Kuoppa, Pekka; Lipponen, Jukka A.; Hartikainen, Juha E. K.; Jäntti, Helena; Rissanen, Tuomas T; Kolk, Indrek; Pohjantähti-Maaroos, Hanna; Castrén, Maaret; Halonen, Jari; Tarvainen, Mika P.; Santala, Onni E.; Martikainen, Tero J. (2021)
Atrial fibrillation is often asymptomatic and intermittent, making its detection challenging. Photoplethysmography (PPG) provides a promising option for atrial fibrillation detection. However, the shapes of pulse waves vary in atrial fibrillation, decreasing the accuracy of pulse detection and atrial fibrillation detection. This study evaluated ten robust photoplethysmography features for the detection of atrial fibrillation. The study was a national multi-center clinical study in Finland and the data were combined from two broader research projects (NCT03721601, URL: https://clinicaltrials.gov/ct2/show/NCT03721601 and NCT03753139, URL: https://clinicaltrials.gov/ct2/show/NCT03753139). A photoplethysmography signal was recorded with a wrist band. Five pulse interval variability features, four amplitude features and a novel autocorrelation-based morphology feature were calculated and evaluated independently as predictors of atrial fibrillation. A multivariate predictor model including only the most significant features was established. The models were 10-fold cross-validated. 359 patients were included in the study (atrial fibrillation n = 169, sinus rhythm n = 190). The autocorrelation univariate predictor model detected atrial fibrillation with the highest area under the receiver operating characteristic curve (AUC) value of 0.982 (sensitivity 95.1%, specificity 93.7%). Autocorrelation was also the most significant individual feature (p < 0.00001) in the multivariate predictor model, which detected atrial fibrillation with an AUC of 0.993 (sensitivity 96.4%, specificity 96.3%). Our results demonstrate that autocorrelation independently detects atrial fibrillation reliably without the need for pulse detection. By combining pulse wave morphology-based features such as autocorrelation with information from pulse-interval variability, it is possible to detect atrial fibrillation with high accuracy with a commercial wrist band. Photoplethysmography wrist bands accompanied by atrial fibrillation detection algorithms utilizing autocorrelation could provide a computationally very effective and reliable wearable monitoring method for the screening of atrial fibrillation.
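As an illustration, a possible autocorrelation-based morphology feature is sketched below; the exact feature definition used in the study is not given in the abstract, so the lag range and the choice of taking the maximum normalized autocorrelation are assumptions.

```python
import numpy as np

# Sketch of an autocorrelation-based morphology feature computed directly from a PPG
# segment without pulse detection: the maximum of the normalized autocorrelation at
# lags within a plausible heart-beat range.  Regular (sinus) pulse waves are expected
# to yield high values, irregular (AF) waves low values.

def autocorrelation_feature(ppg, fs, min_lag_s=0.3, max_lag_s=1.5):
    """`ppg` is a 1-D signal segment, `fs` its sampling rate in Hz."""
    x = np.asarray(ppg, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # keep non-negative lags
    ac = ac / ac[0]                                     # normalize so lag 0 equals 1
    lo, hi = int(min_lag_s * fs), int(max_lag_s * fs)
    return float(ac[lo:hi].max())
```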
  • Alahuhta, Veera (Helsingin yliopisto, 2022)
This thesis examines the social interaction and media ideologies of Finnish TikTok users and how they differ from the discourses about TikTok in Finnish news media. My aim is to understand how TikTok is seen as a social media platform and what kind of social interaction and face-work users engage in when they encounter views that differ from their own media ideologies. I use Gershon's framework of media ideologies to understand the implied and explicit opinions of Finnish TikTok users about the proper ways of using the medium, and to compare those views to the Finnish news media's representations of the app and its use. By analysing discourses in the online news of the Finnish Public Broadcasting Company (YLE) and the online news of Finland's biggest newspaper, Helsingin Sanomat (HS), I identify main categories of discourse about TikTok and its use. To understand the users' perspective, I have conducted online ethnography in TikTok with two separate user accounts to collect data from videos, comments and video replies. From this data, I identify multiple media ideologies voiced among the users of TikTok. Lastly, I compare these two data sets to see where they overlap and where they live separate lives. In order to understand the social phenomena related to the debates over different media ideologies, I utilise Erving Goffman's concept of face and face-work. By exploring these questions with these methods and theoretical frameworks, I wish to contribute to the discussion of how young users might adopt new ways of interacting and expressing themselves within a new medium, and how that might differ from the views of people outside that medium. The goal of my thesis is to create an analytical overview of social interaction and media ideologies in TikTok, especially among Finnish TikTokers, and of how these differ from the discourses represented in Finnish news media.