Browsing by Subject "Privacy"


  • Reunamo, Antti (Helsingin yliopisto, 2020)
    The popularity of mobile instant messaging applications has flourished during the last ten years, and people use them to exchange private and personal information on a daily basis. These applications can be freely installed from online marketplaces, and an average user may have several of them installed on the same device. The amount of information available to a third-party eavesdropper via network traffic analysis has therefore grown significantly as well. Security features of these applications have also developed over the years, and the communication between the applications and the background server infrastructure nowadays practically always employs encryption. Recently, more advanced end-to-end encryption methods have been developed to hide the content of the exchanged data even from the messaging service providers. Machine learning techniques have been successfully utilized in analyzing encrypted network traffic, and previous research has shown that this approach can effectively detect which mobile applications are in use and which actions users are performing in them, regardless of encryption. While the eavesdropper cannot access the actual content of the messages and other transferred data, these methods can still lead to serious privacy compromises. This thesis discusses the present state of machine learning-based identification of applications and user actions, how feasible it would be to actually perform such detection in a Wi-Fi network, and what kind of privacy concerns would arise.
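    As a rough illustration of how such traffic-analysis attacks are typically built (a minimal sketch, not the thesis's own method; the flow features and the RandomForestClassifier are illustrative assumptions, and the data is synthetic), side-channel features such as packet sizes and inter-arrival times are extracted per flow and fed to an off-the-shelf classifier:

    ```python
    # Minimal sketch: classifying encrypted flows by side-channel features.
    # Encryption hides payloads but not sizes and timings, which is what
    # makes this attack work. All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def flow_features(pkt_sizes, pkt_times):
        """Summarize one flow into a fixed-length feature vector."""
        gaps = np.diff(pkt_times) if len(pkt_times) > 1 else np.array([0.0])
        return [len(pkt_sizes), np.sum(pkt_sizes), np.mean(pkt_sizes),
                np.std(pkt_sizes), np.mean(gaps), np.std(gaps)]

    # Synthetic stand-in for labeled captures of two user actions
    # (e.g., "send text message" vs. "send photo").
    actions = [(300, 0.05, 0), (900, 0.20, 1)]  # (mean size, mean gap, label)
    X = np.array([flow_features(rng.normal(m, 40, 30),
                                np.cumsum(rng.exponential(g, 30)))
                  for m, g, _ in actions for _ in range(200)])
    y = np.array([lbl for _, _, lbl in actions for _ in range(200)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"action-detection accuracy: {clf.score(X_te, y_te):.2f}")
    ```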
  • Järvinen, K.; Leppäkoski, H.; Lohan, E.; Richter, P.; Schneider, T.; Tkachenko, O.; Yang, Z. (IEEE, 2019)
    In the last decade, we have observed a constantly growing number of Location-Based Services (LBSs) used in indoor environments, such as targeted advertising in shopping malls or finding nearby friends. Although privacy-preserving LBSs have been addressed in the literature, little attention has been paid to the problem of enhancing the privacy of indoor localization, i.e., the process of obtaining the users' locations indoors and, thus, a prerequisite for any indoor LBS. In this work we present PILOT, the first practically efficient solution for Privacy-Preserving Indoor Localization (PPIL), obtained by a synergy of the research areas of indoor localization and applied cryptography. We design, implement, and evaluate protocols for Wi-Fi fingerprint-based PPIL that rely on four different distance metrics. To save energy and network bandwidth for the mobile end devices in PPIL, we securely outsource the computations to two non-colluding semi-honest parties. Our solution mixes different secure two-party computation protocols, and we design size- and depth-optimized circuits for PPIL. We construct efficient circuit building blocks that are of independent interest: Single Instruction Multiple Data (SIMD) capable oblivious access to an array with low circuit depth, and selection of the k-Nearest Neighbors with small circuit size. Additionally, we reduce Received Signal Strength (RSS) values from 8 bits to 4 bits without any significant accuracy reduction. Our most efficient PPIL protocol is 553× faster than that of Li et al. (INFOCOM'14) and 500× faster than that of Ziegeldorf et al. (WiSec'14). Our implementation on commodity hardware has practical run-times of less than 1 second even for the most accurate distance metrics that we consider, and it can process more than half a million PPIL queries per day.
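    For intuition, the plaintext computation that PPIL protocols such as PILOT protect looks roughly like the sketch below (assumed data shapes and a squared-Euclidean metric for illustration; the actual protocols evaluate this obliviously inside secure two-party computation):

    ```python
    # Plaintext baseline of Wi-Fi fingerprint localization: find the k
    # reference points whose stored RSS vectors are closest to the query,
    # then average their known coordinates.
    import numpy as np

    def knn_localize(rss_query, fingerprint_db, positions, k=3):
        # Squared Euclidean distance, one of several metrics one could use.
        d = np.sum((fingerprint_db - rss_query) ** 2, axis=1)
        nearest = np.argsort(d)[:k]           # indices of the k-NN
        return positions[nearest].mean(axis=0)

    # Illustrative data: 100 reference points, RSS from 8 access points,
    # quantized to 4 bits (0..15) as in the paper's accuracy trade-off.
    rng = np.random.default_rng(1)
    db = rng.integers(0, 16, size=(100, 8))
    pos = rng.uniform(0, 50, size=(100, 2))   # (x, y) coordinates in meters
    noisy_query = db[7] + rng.integers(-1, 2, 8)
    print(knn_localize(noisy_query, db, pos))
    ```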
  • Pal, Ranjan; Crowcroft, Jon; Wang, Yixuan; Li, Yong; De, Swades; Tarkoma, Sasu; Liu, Mingyan; Nag, Bodhibrata; Kumar, Abhishek; Hui, Pan (2020)
    In the modern era of mobile apps (the era of surveillance capitalism, as Shoshana Zuboff terms it), huge quantities of surveillance data about consumers and their activities offer a wave of opportunities for economic and societal value creation. In-app advertising, a multi-billion dollar industry, is an essential part of the current digital ecosystem driven by free mobile applications, where the ecosystem entities usually comprise consumer apps, their clients (consumers), ad-networks, and advertisers. Sensitive consumer information is often sold downstream in this ecosystem without the knowledge of consumers, and in many cases to their annoyance. While this practice may in some cases result in long-term benefits for the consumers, it can cause serious information privacy breaches of very significant impact (e.g., breach of genetic data) in the short term. The question we raise in this paper is: is it economically feasible to trade consumer personal information with their formal consent (permission), providing them incentives (monetary or otherwise) in return? In view of (a) the behavioral assumption that humans are 'compromising' beings and have privacy preferences, (b) privacy as a good not having strict boundaries, and (c) the practical inevitability of inappropriate data leakage by data holders downstream in the data-release supply chain, we propose a design of regulated efficient/bounded-inefficient economic mechanisms for oligopoly data-trading markets, using a novel preference-function bidding approach on a simplified sellers-broker market. Our methodology preserves the heterogeneous privacy preservation constraints (at a grouped-consumer, i.e., app, level) up to certain compromise levels, and at the same time satisfies the information demand (via the broker) of agencies (e.g., advertising organizations) that collect client data for the purpose of targeted behavioral advertising.
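    As a toy illustration only (the paper's actual mechanism uses preference-function bidding and regulated efficiency bounds, which are considerably more involved; every name and number below is hypothetical), a broker clearing advertiser demand against per-app privacy-compromise caps might look like this:

    ```python
    # Hypothetical toy market: a broker fills advertiser demand for consumer
    # records, never exceeding each app's (grouped-consumer) privacy cap,
    # buying from the cheapest willing sellers first. A greedy stand-in,
    # not the mechanism designed in the paper.
    def clear_market(demand, sellers):
        """sellers: list of (app_name, price_per_record, privacy_cap)."""
        allocation, spent = {}, 0.0
        for app, price, cap in sorted(sellers, key=lambda s: s[1]):
            take = min(cap, demand)           # respect the app-level cap
            if take <= 0:
                break
            allocation[app] = take
            spent += take * price
            demand -= take
        return allocation, spent, demand      # leftover demand goes unmet

    alloc, cost, unmet = clear_market(
        demand=10_000,
        sellers=[("app_a", 0.02, 4_000),
                 ("app_b", 0.05, 8_000),
                 ("app_c", 0.01, 3_000)])
    print(alloc, f"cost={cost:.2f}", f"unmet={unmet}")
    ```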
  • Khan, Mohsin; Niemi, Valtteri (2017)
    Subscription privacy of a user has been a historical concern in all previous generations of mobile networks, namely GSM, UMTS, and LTE. While some improvement has been achieved in securing the privacy of the long-term identity of a subscriber, so-called IMSI catchers still exist even in LTE and advanced LTE networks. Proposals have been published to tackle this problem in 5G based on pseudonyms and different public-key technologies. This paper looks into the problem of concealing the long-term identity of a subscriber and presents a protocol based on identity-based encryption (IBE) to tackle it. The proposed solution can be extended to a mutual authentication and key agreement protocol between a serving network (SN) and a user equipment (UE). We name the protocol PEFMA (privacy enhanced fast mutual authentication). The SN does not need to contact the home network (HN) on every PEFMA run. In PEFMA, both the UE and the SN have public keys; a UE sends its IMSI after encrypting it with the SN's public key. Since both the UE and the SN have public keys, PEFMA can run without contacting the HN. A qualitative comparison of different techniques shows that our solution is competitive for securing the long-term identity privacy of a user in the 5G network.
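    PEFMA itself builds on identity-based encryption; as a rough stand-in for its core step of concealing the IMSI under the SN's public key, the sketch below uses ECIES-style hybrid encryption with the Python cryptography library (all identifiers are illustrative, and this is explicitly not the paper's IBE scheme):

    ```python
    # ECIES-style stand-in: the UE encrypts its long-term identity under
    # the serving network's public key, so a passive IMSI catcher sees only
    # an ephemeral public key, a nonce, and an AEAD ciphertext.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def conceal_imsi(sn_pub: X25519PublicKey, imsi: str):
        eph = X25519PrivateKey.generate()          # fresh per attach request
        key = HKDF(hashes.SHA256(), 32, None, b"imsi-concealment").derive(
            eph.exchange(sn_pub))
        nonce = os.urandom(12)
        ct = AESGCM(key).encrypt(nonce, imsi.encode(), None)
        return eph.public_key().public_bytes_raw(), nonce, ct

    def reveal_imsi(sn_priv: X25519PrivateKey, eph_pub, nonce, ct) -> str:
        key = HKDF(hashes.SHA256(), 32, None, b"imsi-concealment").derive(
            sn_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub)))
        return AESGCM(key).decrypt(nonce, ct, None).decode()

    sn_priv = X25519PrivateKey.generate()          # SN's long-term key pair
    msg = conceal_imsi(sn_priv.public_key(), "244120123456789")
    print(reveal_imsi(sn_priv, *msg))              # round-trips the IMSI
    ```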
  • Bhardwaj, Shivam (Helsingin yliopisto, 2020)
    The banking and financial sector has often been synonymous with established names, some with a centuries-old presence. In the recent past, these incumbents have experienced consequential disruption by new entrants and rapidly changing consumer demands. These disruptions to the status quo have been characterised by a shift towards the adoption of technology and artificial intelligence, particularly in the services and products offered to end customers. The changing business climate in the financial sector has raised many complex questions for regulators. These complications cover a vast set of issues: concerns relating to the privacy of end users' data, the increasing vulnerability of the financial market, and disproportionately increased compliance requirements for new entrants all form part of the mesh of questions that have arisen in the wake of new services and operations being designed with the aid of artificial intelligence, machine learning and big data analytics. It is against this background that this Thesis seeks to explore the trajectory of the development of the legal landscape for regulating artificial intelligence, both in general and specifically in the financial and banking sector, particularly in the European Union. During the analysis, existing legal enactments, such as the General Data Protection Regulation, have been scrutinised and certain observations have been made regarding the areas that still remain unregulated or open to debate under the law as it stands today. In the same vein, an attempt has been made to explore the emerging discussion on a dedicated legal regime for artificial intelligence in the European Union, and those observations have been viewed from the perspective of the financial sector, thereby creating thematic underpinnings that ought to form part of any legal instrument aiming to optimally regulate technology in the financial sector. To concretise the actual application of such a legal instrument, a European Union member state has been identified, and the evolution of the regulatory regime in its financial sector has been discussed with reference to the said member state's financial supervisory authority, thus highlighting the crucial role of law-making and enactment bodies in creating and sustaining a technologically innovative financial and banking sector. The themes recognised in this Thesis could be the building blocks upon which future legal discourse on artificial intelligence and the financial sector is structured.
  • Pfau, Diana Victoria (Helsingin yliopisto, 2021)
    Surveillance Capitalism, as described by Shoshana Zuboff, is a mutation of capitalism in which the main commodity to be traded is behavioural surplus, or personal data. As the forming of Surveillance Capitalism was significantly furthered by Artificial Intelligence (AI), AI is a central topic of the thesis. Personalisation, which will oftentimes involve the use of AI tools, is based on the collection of large amounts of personal data and bears several risks for data subjects. In Chapter I, I introduce the underlying research questions: first, what effects the use of AI in Surveillance Capitalism has on democracy in the light of personalisation of advertisement, news provision, and propaganda; second, whether the European General Data Protection Regulation (GDPR) and the Charter of Fundamental Rights of the European Union react to these effects appropriately or whether there is still need for additional legislation. In Chapter II, I determine a working definition of Artificial Intelligence. Additionally, the applicability of the GDPR is introduced together with potential problems. A special focus here lies on the underlying rationale of the GDPR. This topic is revisited on several occasions throughout the thesis and reveals that the GDPR's focus on enabling the data subject to exercise control over his or her information conflicts with the underlying rationale of Surveillance Capitalism. In Chapter III, four steps of examination follow. In a first step, I introduce the concept of Surveillance Capitalism. Personalised advertisement is examined together with consent as a legal basis for the processing of personal data. During this examination, profiling, inferences, and the data processing principles of the GDPR are explored in the context of personalisation and AI. A focus of this examination is how individuals and democracy can be impacted. It is found that there is a lack of protection when consent is used as a legal basis for privacy-intrusive personalised advertisement, and it is likely that the data subject will not be able to make an informed decision when asked for consent. Data minimisation, purpose limitation and storage limitation, as important data processing principles, prove to be at odds with the application of Artificial Intelligence in the context of personalisation. Especially when it comes to the deletion of data, further research in AI will be necessary to enable adherence to the storage limitation. In a second step, I examine personalised news and propaganda with regard to their potential impacts on individuals and democracy. Explicit consent as a legal basis for processing special categories of data is examined together with the concept of data protection by design as stipulated in Article 25 GDPR. While explicit consent is found to likely suffer from the same weaknesses as "regular" consent, I propose that data protection by design could solve some of the arising issues if the norm is strengthened in the future. In a third step, I evaluate whether the right to receive and impart information laid down in the Charter of Fundamental Rights of the European Union provides for a right to receive unbiased, or unpersonalised, information. While there are indications that such a right could be acknowledged, its scope remains unclear so far. In a fourth step, I examine the proposal for a European Artificial Intelligence Act, with the unfortunate outcome that this Act might not be able to fill the discovered gaps left by the GDPR.
    I conclude that, taking into consideration all findings of the research, the use of AI in personalisation can significantly harm democracy by potentially impacting the freedom of political discourse, provoking social inequalities, and influencing legislation and science through heavy investment and lobbying. Ultimately, the GDPR leaves significant gaps due to the incompatibility of its underlying rationale with that of Surveillance Capitalism, and there is a need to protect data subjects further. I propose that future legislation on the use of AI in personalisation should react appropriately to the rationale of Surveillance Capitalism.