Browsing by Subject "artificial intelligence"


Now showing items 1-19 of 19
  • Mouchlis, Varnavas D.; Afantitis, Antreas; Serra, Angela; Fratello, Michele; Papadiamantis, Anastasios G.; Aidinis, Vassilis; Lynch, Iseult; Greco, Dario; Melagraki, Georgia (2021)
    De novo drug design is a computational approach that generates novel molecular structures from atomic building blocks with no a priori relationships. Conventional methods include structure-based and ligand-based design, which depend on the properties of the active site of a biological target or its known active binders, respectively. Artificial intelligence, including machine learning, is an emerging field that has positively impacted the drug discovery process. Deep reinforcement learning is a subdivision of machine learning that combines artificial neural networks with reinforcement-learning architectures. This method has successfully been employed to develop novel de novo drug design approaches using a variety of artificial networks including recurrent neural networks, convolutional neural networks, generative adversarial networks, and autoencoders. This review article summarizes advances in de novo drug design, from conventional growth algorithms to advanced machine-learning methodologies and highlights hot topics for further development.
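The reinforcement-learning loop sketched in the abstract can be illustrated with a toy policy-gradient example. Everything below is an illustration of the general idea only: a bandit-style REINFORCE update over a five-letter vocabulary standing in for molecular building blocks, with a synthetic reward function; it is not any of the published de novo design systems.

```python
import math
import random

random.seed(0)

TOKENS = ["A", "B", "C", "D", "E"]  # stand-ins for molecular building blocks

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reward(seq):
    # Synthetic reward: fraction of "C" tokens, a placeholder for a real
    # scoring function such as predicted activity or drug-likeness.
    return seq.count("C") / len(seq)

logits = [0.0] * len(TOKENS)  # policy parameters, one logit per token
lr = 0.5

for episode in range(500):
    probs = softmax(logits)
    idx = [sample(probs) for _ in range(8)]      # generate an 8-token "molecule"
    r = reward([TOKENS[i] for i in idx])
    # REINFORCE update: raise the logits of sampled tokens in proportion
    # to the reward the whole sequence earned.
    for i in idx:
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad / len(idx)

probs = softmax(logits)
best = TOKENS[probs.index(max(probs))]
print(best)  # the policy learns to favour the rewarded token
```

The same loop structure (generate, score, update the generator) underlies the RNN- and GAN-based systems the review surveys; only the generator and the scoring function grow more sophisticated.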
  • Niemi, Hannele (2021)
    This special issue raises two thematic questions: (1) How will AI change learning in the future and what role will human beings play in the interaction with machine learning, and (2) What can we learn from the articles in this special issue for future research? These questions are reflected in the frame of the recent discussion of human and machine learning. AI for learning provides many applications and multimodal channels for supporting people in cognitive and non-cognitive task domains. The articles in this special issue show that agency, engagement, self-efficacy, and collaboration are needed in learning and working with intelligent tools and environments. The importance of social elements is also clear in the articles. The articles also point out that the teacher's role in digital pedagogy primarily involves facilitating and coaching. AI in learning has high potential, but it also has many limitations. Many worries are linked with ethical issues, such as biases in algorithms, privacy, transparency, and data ownership. This special issue also highlights the concepts of explainability and explicability in the context of human learning. We need much more research and research-based discussion to make AI more trustworthy for users in learning environments and to prevent misconceptions.
  • Lindevall, Mari (Helsingin yliopisto, 2021)
    The purpose of this systematic review is to investigate the usage of artificial intelligence in the pharmaceutical industry in the fields of pharmaceutical manufacturing, product development, and quality control. Today, developing and getting a new drug on the market is time-consuming, ineffective, and expensive. Artificial intelligence is seen as one possible solution to the problems of the pharmaceutical industry. The search was conducted in three databases with the following inclusion criteria: studies using AI in either pharmaceutical manufacturing, product development or quality control, English as the language, and Western medicine-based pharmacy as a branch of science. Of the 734 articles retrieved, 77 academic studies were included. The included articles showed artificial neural networks to be the most used artificial intelligence method between 1991 and 2021. This systematic literature review has three main limitations: the possibility of an important search word missing from the search algorithm, the selection of articles according to one person's assessment, and the possibly narrow picture of the artificial intelligence methods used in the pharmaceutical industry, as pharmaceutical companies also research the subject. The use of artificial intelligence in product development has been studied the most, while its use in quality control has been studied the least. In the studies, tablets were a popular drug form, while biological drugs were underrepresented. In total, the number of studies published increased over three decades; however, most of the articles were published in 2020. Nearly half of the articles had some connection to a pharmaceutical company, indicating the interest of both academia and pharmaceutical companies in the use of artificial intelligence in manufacturing, product development, and quality control.
In the future, the efficacy of artificial intelligence, as well as its limitations as a method, should be investigated to determine its potential to play a key role in reforming the pharmaceutical industry. The results of the study show that a wave of artificial intelligence has arrived in the pharmaceutical industry; however, its real benefits will only be seen with future research.
  • Tanoli, Ziaurrehman; Vähä-Koskela, Markus; Aittokallio, Tero (2021)
    Introduction: Drug repurposing provides a cost-effective strategy to re-use approved drugs for new medical indications. Several machine learning (ML) and artificial intelligence (AI) approaches have been developed for systematic identification of drug repurposing leads based on big data resources, hence further accelerating and de-risking the drug development process by computational means. Areas covered: The authors focus on supervised ML and AI methods that make use of publicly available databases and information resources. While most of the example applications are in the field of anticancer drug therapies, the methods and resources reviewed are widely applicable also to other indications including COVID-19 treatment. A particular emphasis is placed on the use of comprehensive target activity profiles that enable a systematic repurposing process by extending the target profile of drugs to include potent off-targets with therapeutic potential for a new indication. Expert opinion: The scarcity of clinical patient data and the current focus on genetic aberrations as primary drug targets may limit the performance of anticancer drug repurposing approaches that rely solely on genomics-based information. Functional testing of cancer patient cells exposed to a large number of targeted therapies and their combinations provides an additional source of repurposing information for tissue-aware AI approaches.
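The target-profile idea described above can be sketched in a few lines: score each drug by how strongly its (off-)target profile overlaps a set of targets implicated in a new indication, then rank candidates. The drug names, gene symbols, and potency scores below are purely illustrative placeholders, not curated pharmacology.

```python
# Toy target-activity profiles: drug -> {target: potency score in [0, 1]}
# (illustrative values only, not real bioactivity data)
profiles = {
    "drugA": {"KIT": 0.9, "ABL1": 0.8, "DDR1": 0.4},
    "drugB": {"EGFR": 0.95, "HER2": 0.7},
    "drugC": {"JAK1": 0.85, "JAK2": 0.8, "ABL1": 0.3},
}

# Targets implicated in a hypothetical new indication
disease_targets = {"ABL1", "DDR1"}

def repurposing_score(profile, targets, cutoff=0.2):
    """Sum potencies of a drug's (off-)targets that overlap the disease set."""
    return sum(score for tgt, score in profile.items()
               if tgt in targets and score >= cutoff)

# Rank drugs by overlap with the new indication's target set
ranked = sorted(profiles,
                key=lambda d: repurposing_score(profiles[d], disease_targets),
                reverse=True)
print(ranked)
```

In the approaches the review covers, the profiles come from large public bioactivity resources and the scoring is learned by supervised ML rather than hand-set, but the ranking-by-target-overlap structure is the same.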
  • Cardoso, Pedro; Branco, Vasco V.; Borges, Paulo A.; Carvalho, Jose C.; Rigal, Francois; Gabriel, Rosalina; Mammola, Stefano; Cascalho, Jose; Correia, Luis (2020)
    Ecological systems are the quintessential complex systems, involving numerous high-order interactions and non-linear relationships. The most widely used statistical modeling techniques can hardly accommodate the complexity of ecological patterns and processes. Finding hidden relationships in complex data is now possible using massive computational power, particularly by means of artificial intelligence and machine learning methods. Here we explored the potential of symbolic regression (SR), commonly used in other areas, in the field of ecology. Symbolic regression searches for both the formal structure of equations and the fitting parameters simultaneously, hence providing the required flexibility to characterize complex ecological systems. Although the method presented here is automated, it is part of a collaborative human-machine effort, and we demonstrate ways to conduct it. First, we test the robustness of SR to extreme levels of noise when searching for the species-area relationship. Second, we demonstrate how SR can model species richness and spatial distributions. Third, we illustrate how SR can be used to find general models in ecology, namely new formulas for species richness estimators and the general dynamic model of oceanic island biogeography. We propose that evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool for ecologists and biogeographers to become aware of hidden relationships and suggest general theoretical models and principles.
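The core of symbolic regression, searching over equation structure and parameters jointly, can be sketched with a deliberately tiny grid search over three candidate functional forms for a species-area relationship. This toy is my own illustration (synthetic data, exhaustive search instead of evolutionary search), not the SR implementation used in the study.

```python
import math

# Synthetic species-area data generated from the power law S = 3 * A^0.25
areas = [1, 10, 100, 1000, 10000]
richness = [3 * a ** 0.25 for a in areas]

# Candidate equation structures: the "symbolic" part of the search
forms = {
    "power":  lambda a, c, z: c * a ** z,
    "linear": lambda a, c, z: c + z * a,
    "log":    lambda a, c, z: c + z * math.log(a),
}

def sse(f, c, z):
    """Sum of squared errors of a candidate equation on the data."""
    return sum((f(a, c, z) - s) ** 2 for a, s in zip(areas, richness))

# Search structure and parameters simultaneously
best = None  # (error, form name, c, z)
for name, f in forms.items():
    for c in [x / 10 for x in range(1, 51)]:        # c in 0.1 .. 5.0
        for z in [x / 100 for x in range(1, 101)]:  # z in 0.01 .. 1.00
            err = sse(f, c, z)
            if best is None or err < best[0]:
                best = (err, name, c, z)

print(best[1], best[2], best[3])  # recovers the power-law structure and its parameters
```

Real SR systems replace the fixed menu of forms with evolutionary search over free-form expression trees, which is what lets them propose equations no one wrote down in advance.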
  • Martins, Pedro; Goncalo Oliveira, Hugo; Carlos Gonçalves, João; Cruz, António; Cardoso, Amílcar; Žnidaršič, Martin; Lavrač, Nada; Linkola, Simo; Toivonen, Hannu; Hervás, Raquel; Méndez, Gonzalo; Gervás, Pablo (2019)
    Computational creativity (CC) is a multidisciplinary research field, studying how to engineer software that exhibits behavior that would reasonably be deemed creative. This paper shows how composition of software solutions in this field can effectively be supported through a CC infrastructure that supports user-friendly development of CC software components and workflows, their sharing, execution, and reuse. The infrastructure allows CC researchers to build workflows that can be executed online and be easily reused by others through the workflow web address. Moreover, it enables the building of procedures composed of software developed by different researchers from different laboratories, leading to novel ways of software composition for computational purposes that were not expected in advance. This capability is illustrated on a workflow that implements a Concept Generator prototype based on the Conceptual Blending framework. The prototype consists of a composition of modules made available as web services, and is explored and tested through experiments involving blending of texts from different domains, blending of images, and poetry generation.
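The workflow idea, independently developed modules composed into executable pipelines, can be sketched with plain functions. The component names below are hypothetical stand-ins for illustration, not the infrastructure's actual API or web-service interface.

```python
# Hypothetical CC "modules": small reusable components that a workflow
# engine could chain together (illustrative only).
def tokenize(text):
    return text.lower().split()

def blend_concepts(words_a, words_b):
    # Naive stand-in for conceptual blending: interleave words drawn
    # from two input spaces.
    out = []
    for a, b in zip(words_a, words_b):
        out.extend([a, b])
    return out

def render(words):
    return " ".join(words)

def run_workflow(steps, data):
    """Run a workflow: each step consumes the previous step's output."""
    for step in steps:
        data = step(data)
    return data

# Compose modules (possibly written by different authors) into one pipeline
blended = blend_concepts(tokenize("dark silent night"),
                         tokenize("bright loud city"))
poem = run_workflow([render], blended)
print(poem)
```

The infrastructure described in the paper adds what this sketch omits: sharing workflows by web address, executing remote components as web services, and reusing other laboratories' modules without re-implementation.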
  • Amadae, S. M. (Faculty of Social Sciences, University of Helsinki, 2020)
    Computational Transformation of the Public Sphere is the organic product of what turned out to be an effective collaboration between MA students and their professor in the Global Politics and Communication program in the Faculty of Social Sciences at the University of Helsinki, in the Fall of 2019. The course, Philosophy of Politics and Communication, is a gateway course into this MA program. As I had been eager to conduct research on the impact of new digital technologies and artificial intelligence (AI) on democratic governance, I saw this course as an opportunity to not only share, but also further develop my knowledge of this topic.
  • Amadae, S. M. (University of Helsinki, Faculty of Social Sciences, 2020)
    Publications of the Faculty of Social Sciences
    This book is an edited collection of MA research papers on the digital revolution of the public sphere and governance. It covers cyber governance in Finland and the securitization of cyber security in Finland. It investigates the cases of Brexit, the 2016 US presidential election of Donald Trump, and the 2019 presidential election of Volodymyr Zelensky. It examines the environmental concerns of climate change and greenwashing, and the impact of digital communication giving rise to the #MeToo and Incel movements. It considers how digitalization can serve to emancipate women through ride-sharing, and how it leads to the question of robot rights. It considers fake news and algorithmic governance with respect to case studies of the Chinese social credit system, the US FICO credit score, along with Facebook, Twitter, Cambridge Analytica and the European effort to regulate and protect data usage.
  • Tiedemann, Jörg (CEUR Workshop Proceedings, 2018)
    CEUR Workshop Proceedings
  • Penttinen, Anna-Maija; Parkkinen, Ilmari; Blom, Sami; Kopra, Jaakko; Andressoo, Jaan-Olle; Pitkänen, Kari; Voutilainen, Merja H.; Saarma, Mart; Airavaara, Mikko (2018)
    Unbiased estimates of neuron numbers within the substantia nigra are crucial for experimental Parkinson's disease models and gene-function studies. Unbiased stereological counting techniques with optical fractionation are successfully implemented, but are extremely laborious and time-consuming. The development of neural networks and deep learning has opened a new way to teach computers to count neurons. Implementation of such a programming paradigm enables a computer to learn from the data and allows the development of an automated cell counting method. The advantages of computerized counting are reproducibility, elimination of human error and fast high-capacity analysis. We implemented whole-slide digital imaging and deep convolutional neural networks (CNN) to count substantia nigra dopamine neurons. We compared the results of the developed method against independent manual counting by human observers and validated the CNN algorithm against previously published data in rats and mice, where tyrosine hydroxylase (TH)-immunoreactive neurons were counted using unbiased stereology. The developed CNN algorithm and fully cloud-embedded Aiforia (TM) platform provide robust and fast analysis of dopamine neurons in rat and mouse substantia nigra.
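The automated counting idea can be illustrated at a much smaller scale: the toy below thresholds a synthetic intensity grid and counts connected bright regions with a flood fill. This is only a sketch of classical object counting, not the CNN-based Aiforia pipeline described above, which learns the appearance of stained neurons rather than applying a fixed threshold.

```python
# Toy "image": 0 = background, higher values = stained cells
image = [
    [0, 9, 9, 0, 0, 0, 8],
    [0, 9, 0, 0, 7, 0, 8],
    [0, 0, 0, 0, 7, 7, 0],
    [6, 0, 0, 0, 0, 0, 0],
    [6, 6, 0, 5, 0, 0, 0],
]

def count_cells(img, threshold=4):
    """Count connected bright regions (4-connectivity) above a threshold."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > threshold and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood fill the whole region
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

print(count_cells(image))  # number of bright regions in the toy image
```

Raising the threshold counts only the brightest regions, which mimics the sensitivity/specificity trade-off any counting pipeline, learned or hand-crafted, must manage.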
  • Tenhunen, Henni; Hirvonen, Petteri; Linna, Miika; Halminen, Olli; Hörhammer, Iiris (IOS PRESS, 2018)
    Studies in Health Technology and Informatics
    An intelligent patient flow management system (IPFM) was piloted at a large primary healthcare center in Finland in August 2017. The goals of the system are to help patients avoid unnecessary calls and visits to their health center and to enhance the use of professional resources through more streamlined patient pathways and the re-allocation of professionals from assessment tasks to actual patient care. These goals should be reflected in decreased service costs through optimized contact forms. Using multiple regression analysis, we studied the associations between IPFM and patients' service utilization (17,943 patients; 73,038 service contacts) during the first five months of the pilot in 2017. The results indicated that the use of IPFM by the patient was associated with a decrease of EUR 31 in the total service costs of the patient in the study period. This decrease is 14% of a patient's average total service cost.
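The regression step can be sketched on toy numbers. For a single binary predictor (used the system or not), the ordinary-least-squares slope reduces exactly to the difference between the two group means; the cost figures below are invented for illustration and are not the study's data, which also adjusted for further covariates.

```python
# Toy data: 1 = patient used the intelligent patient flow system, 0 = did not;
# costs are illustrative only.
used = [1, 1, 1, 1, 0, 0, 0, 0]
costs = [180, 170, 175, 171, 205, 210, 200, 209]

n = len(used)
mean_x = sum(used) / n
mean_y = sum(costs) / n

# Ordinary least squares for one predictor: slope = Cov(x, y) / Var(x).
# For a 0/1 predictor this equals the difference between group means.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(used, costs)) / n
var = sum((x - mean_x) ** 2 for x in used) / n
slope = cov / var
intercept = mean_y - slope * mean_x

print(slope, intercept)  # negative slope: system use associated with lower costs
```

With several predictors (age, diagnoses, contact types), the same idea generalizes to the multiple regression the study used, where the IPFM coefficient is read as the cost difference holding the other covariates fixed.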
  • Bakula, Daniela; Ablasser, Andrea; Aguzzi, Adriano; Antebi, Adam; Barzilai, Nir; Bittner, Martin-Immanuel; Jensen, Martin Borch; Calkhoven, Cornelis F.; Chen, Danica; de Grey, Aubrey D. N. J.; Feige, Jerome N.; Georgievskaya, Anastasia; Gladyshev, Vadim N.; Golato, Tyler; Gudkov, Andrei V.; Hoppe, Thorsten; Kaeberlein, Matt; Katajisto, Pekka; Kennedy, Brian K.; Lal, Unmesh; Martin-Villalba, Ana; Moskalev, Alexey A.; Ozerov, Ivan; Petr, Michael A.; Reason, Matthew; Rubinsztein, David C.; Tyshkovskiy, Alexander; Vanhaelen, Quentin; Zhavoronkov, Alex; Scheibye-Knudsen, Morten (2019)
    An increasing aging population poses a significant challenge to societies worldwide. A better understanding of the molecular, cellular, organ, tissue, physiological, psychological, and even sociological changes that occur with aging is needed in order to treat age-associated diseases. The field of aging research is rapidly expanding with multiple advances transpiring in many previously disconnected areas. Several major pharmaceutical, biotechnology, and consumer companies made aging research a priority and are building internal expertise, integrating aging research into traditional business models and exploring new go-to-market strategies. Many of these efforts are spearheaded by the latest advances in artificial intelligence, namely deep learning, including generative and reinforcement learning. To facilitate these trends, the Center for Healthy Aging at the University of Copenhagen and Insilico Medicine are building a community of Key Opinion Leaders (KOLs) in these areas and launched the annual conference series titled "Aging Research and Drug Discovery (ARDD)" held in the capital of the pharmaceutical industry, Basel, Switzerland (www.agingpharma.org). This ARDD collection contains summaries from the 6th annual meeting that explored aging mechanisms and new interventions in age-associated diseases. The 7th annual ARDD exhibition will transpire 2nd-4th of September, 2020, in Basel.
  • Fortino, Vittorio; Wisgrill, Lukas; Werner, Paulina; Suomela, Sari; Linder, Nina; Jalonen, Erja; Suomalainen, Alina; Marwah, Veer; Kero, Mia; Pesonen, Maria; Lundin, Johan; Lauerma, Antti; Aalto-Korte, Kristiina; Greco, Dario; Alenius, Harri; Fyhrquist, Nanna (2020)
    Contact dermatitis tremendously impacts the quality of life of suffering patients. Currently, diagnostic regimes rely on allergy testing, exposure specification, and follow-up visits; however, distinguishing the clinical phenotype of irritant and allergic contact dermatitis remains challenging. Employing integrative transcriptomic analysis and machine-learning approaches, we aimed to decipher disease-related signature genes to find suitable sets of biomarkers. A total of 89 positive patch-test reaction biopsies against four contact allergens and two irritants were analyzed via microarray. Coexpression network analysis and Random Forest classification were used to discover potential biomarkers and selected biomarker models were validated in an independent patient group. Differential gene-expression analysis identified major gene-expression changes depending on the stimulus. Random Forest classification identified CD47, BATF, FASLG, RGS16, SYNPO, SELE, PTPN7, WARS, PRC1, EXO1, RRM2, PBK, RAD54L, KIFC1, SPC25, PKMYT, HISTH1A, TPX2, DLGAP5, CH25H, and IL37 as potential biomarkers to distinguish allergic and irritant contact dermatitis in human skin. Validation experiments and prediction performances on external testing datasets demonstrated potential applicability of the identified biomarker models in the clinic. Capitalizing on this knowledge, novel diagnostic tools can be developed to guide clinical diagnosis of contact allergies.
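The Random Forest idea, bootstrap resampling plus majority voting over simple trees, can be sketched with single-split decision stumps. The two-feature "expression" values below are invented for illustration and are not the study's microarray data; real forests use deeper trees and random feature subsets.

```python
import random

random.seed(1)

# Toy "expression" data: two features per sample; label 1 = allergic, 0 = irritant.
# Feature 0 is informative (higher in class 1); feature 1 is noise.
data = [([0.9, 0.4], 1), ([0.8, 0.1], 1), ([0.85, 0.9], 1), ([0.95, 0.3], 1),
        ([0.1, 0.5], 0), ([0.2, 0.8], 0), ([0.15, 0.2], 0), ([0.25, 0.6], 0)]

def train_stump(sample):
    """Pick the feature/threshold pair with the fewest errors on this bootstrap."""
    best = None  # (errors, feature, threshold)
    for f in range(2):
        for x, _ in sample:
            t = x[f]
            errs = sum((x2[f] > t) != (y == 1) for x2, y in sample)
            if best is None or errs < best[0]:
                best = (errs, f, t)
    return best[1], best[2]

def train_forest(data, n_trees=25):
    forest = []
    for _ in range(n_trees):
        boot = [random.choice(data) for _ in data]  # bootstrap resample
        forest.append(train_stump(boot))
    return forest

def predict(forest, x):
    votes = sum(x[f] > t for f, t in forest)  # majority vote over stumps
    return 1 if votes * 2 > len(forest) else 0

forest = train_forest(data)
print(predict(forest, [0.9, 0.5]), predict(forest, [0.1, 0.5]))
```

Biomarker ranking, as used in the study, falls out of the same machinery: features that are chosen often by the trees (here, feature 0) are the candidates worth validating.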
  • Alabi, Rasheed Omobolaji; Hietanen, Päivi; Elmusrati, Mohammed; Youssef, Omar; Almangush, Alhadi; Mäkitie, Antti A. (2021)
    Objectives: The purpose of this study was to provide a scoping review on how to address and mitigate burnout in the profession of clinical oncology. It also examines how artificial intelligence (AI) can mitigate burnout in oncology. Methods: We searched Ovid Medline, PubMed, Scopus, and Web of Science for articles that examine how to address burnout in oncology. Results: A total of 17 studies were found to examine how burnout in oncology can be mitigated. These interventions were targeted either at individuals (oncologists) or at the organizations where the oncologists work. The organizational interventions include educational (psychosocial and mindfulness-based) courses, art therapies and entertainment, team-based training, group meetings, motivational packages and rewards, effective leadership and policy change, and staff support. The individual interventions include equipping the oncologists with adequate training in communication skills, well-being and stress management, burnout education, financial independence, relaxation, self-efficacy, resilience, hobby adoption, and work-life balance. Similarly, AI is thought to offer the potential to mitigate burnout in oncology by enhancing the productivity and performance of the oncologists, reducing the workload and providing job satisfaction, and fostering teamwork between the caregivers of patients with cancer. Discussion: Burnout is common among oncologists and can be elicited by different types of situations encountered in the process of caring for patients with cancer. Therefore, for these interventions to achieve the touted benefits, combinatorial strategies that combine several interventions may be viable for mitigating burnout in oncology. Given the potential of AI to mitigate burnout, it is important for healthcare providers to facilitate its use in daily clinical practice.
Conclusion: These combinatorial interventions can ensure job satisfaction, a supportive working environment, job retention for oncologists, and improved patient care. These interventions could be integrated systematically into routine cancer care for a positive impact on quality care, patient satisfaction, the overall success of the oncological ward, and the health organizations at large.
  • Laivuori, Mirjami; Tolva, Johanna; Lokki, A. Inkeri; Linder, Nina; Lundin, Johan; Paakkanen, Riitta; Albäck, Anders; Venermo, Maarit; Mäyränpää, Mikko I.; Lokki, Marja-Liisa; Sinisalo, Juha (2020)
    Lamellar metaplastic bone, osteoid metaplasia (OM), is found in atherosclerotic plaques, especially in the femoral arteries. In the carotid arteries, OM has been documented to be associated with plaque stability. This study investigated the clinical impact of OM load in femoral artery plaques of patients with lower extremity artery disease (LEAD) by using a deep learning-based image analysis algorithm. Plaques from 90 patients undergoing endarterectomy of the common femoral artery were collected and analyzed. After decalcification and fixation, 4-μm-thick longitudinal sections were stained with hematoxylin and eosin, digitized, and uploaded as whole-slide images on a cloud-based platform. A deep learning-based image analysis algorithm was trained to analyze the area percentage of OM in whole-slide images. Clinical data were extracted from electronic patient records, and the association with OM was analyzed. Fifty-one (56.7%) sections had OM. Females with diabetes had a higher area percentage of OM than females without diabetes. In male patients, the area percentage of OM inversely correlated with toe pressure and was significantly associated with severe symptoms of LEAD including rest pain, ulcer, or gangrene. According to our results, OM is a typical feature of femoral artery plaques and can be quantified using a deep learning-based image analysis method. The association of OM load with clinical features of LEAD appears to differ between male and female patients, highlighting the need for a gender-specific approach in the study of the mechanisms of atherosclerotic disease. In addition, the role of plaque characteristics in the treatment of atherosclerotic lesions warrants further consideration in the future.
  • Berg, Anton (Helsingin yliopisto, 2021)
    China represents a digital dictatorship where digital repressive measures are effectively used to control citizens and dissidents. Continuous monitoring and analysis of data on individuals, groups and organizations is the core of China's massive digital surveillance system. The Internet and social media are under censorship, information is being manipulated, and certain individuals and groups are targeted. In order to achieve all of this, China makes extensive use of modern technology such as artificial intelligence and facial recognition. One section of the population that has had to experience the full force and scale of digital repression is China's own indigenous people, the Uyghurs. Based on their research, human rights organizations such as Human Rights Watch have reported that the Chinese authorities have placed Xinjiang Uyghurs under mass arrests and detentions. According to the US State Department's recent estimates, possibly over a million Uyghurs, ethnic Kazakhs and other Muslims are being held in internment camps. The detainees are reportedly subjected to beatings, torture and rape, and some are even killed. The plight of the Uyghurs represents the most extensive mass imprisonment of an ethnic and religious minority since World War II. China is also systematically seeking to destroy Uyghur culture and ethnic characteristics. Mosques are destroyed, the practice of religion and the use of the mother tongue are banned, children are separated from their parents and women are forcibly sterilized. In January 2021, the US Secretary of State, Mike Pompeo, issued a declaration naming the acts China is committing against the Uyghurs and other Muslim minorities as genocide. Many liberal democracies replied with similar statements. This master's thesis seeks to answer three main questions. First, what is digital repression; second, how does China use modern technology for digital repression; and third, how does this repression affect the Uyghurs?
In addition, I consider the ethical dimension and issues associated with digital repression. This includes the broader context of repressive algorithms, such as direct or indirect discrimination, as well as human rights issues, such as privacy and freedom. This is particularly important as we witness how the world is filled with a variety of devices that utilize artificial intelligence but also allow for a new scale of control and surveillance, and as we face the current era of digital dictatorships that do not respect human rights. The current world situation also raises a serious point of reflection, as artificial intelligence has become the subject of a new kind of competition between countries. China, for example, is exporting its surveillance technology and facial recognition capabilities beyond its own borders to countries like Pakistan and Zimbabwe.
  • Jauhiainen, Tommi; Lennes, Mietta; Marttila, Terhi; Digitaalisten ihmistieteiden osasto; Centre of Excellence in Ancient Near Eastern Empires (ANEE); Kieliteknologia (Vake Oy, 2019)
  • Piikkilä, Mimmi (Helsingin yliopisto, 2020)
    This thesis examines the public discussion of artificial intelligence in Finland in 1994-2019. The public discussion is approached through the expectations directed at artificial intelligence. The study investigates what kinds of expectations are attached to artificial intelligence in public discussion. The primary theoretical framework is the sociology of expectations. The hype cycle and technology myths are further theoretical concepts related to expectations, and the role of science fiction in generating expectations is also considered. The theoretical background additionally covers the concept and history of artificial intelligence, theories of the public sphere, and the transformation of journalism. The research material consists of articles from eleven Finnish newspapers from 1994-2019. The material is analyzed with two research methods: topic modeling and frame analysis. Topic modeling, a method of computational social science, and frame analysis, which belongs to the qualitative research tradition, are combined into a novel research method called frame modeling. The original material yielded 2,286 articles on artificial intelligence, which were subjected to topic modeling. After the modeling, thirteen articles were selected from the material for close reading and analyzed by means of frame analysis. The topic modeling produced topics, thirteen of which were grouped into four themes. The topics of the Society theme related to the economy, work, and international politics. The topics of the Technology theme dealt with machine learning, robots, technological development, and social media. The Culture theme comprised a general topic on culture and a separate topic on literature. The Entertainment theme covered games and films. By means of frame analysis, ninety-six expectations were identified in the close-read articles. Using the two analysis methods, it was found that Finnish public discussion shows an activation consistent with the hype cycle in 2014-2019. Nearly all societal discussion of artificial intelligence in the study period took place in those years.
Before the 2010s, artificial intelligence appeared mostly in connection with game reviews. Some of the expectations directed at artificial intelligence were exaggerated and contributed to building the AI myth. A more significant finding, however, was that the expectations were largely directed at promoting economic efficiency. The voices heard in the discussion were mainly those of men in high social positions representing the business world, who benefit financially from the development of artificial intelligence. The conclusion of the study is therefore that public discussion of artificial intelligence should be more polyphonic. Journalists should strive to give a voice also to those who suffer from the automation enabled by artificial intelligence, and to take a more critical attitude toward those who benefit from the development.
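The topic-modeling step can be illustrated at miniature scale. The sketch below groups toy documents by term-vector cosine similarity, a deliberately simplified stand-in for the actual topic model applied to the 2,286 newspaper articles; the example documents are English inventions, not material from the thesis corpus.

```python
import math
from collections import Counter

# Toy "articles" (invented English stand-ins for newspaper text)
docs = [
    "ai will transform work and the economy",
    "the economy and work change with ai",
    "robots and machine learning advance",
    "machine learning powers new robots",
]

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

vecs = [tf_vector(d) for d in docs]

# Greedy grouping: attach each document to the first earlier "topic" it
# resembles, otherwise start a new topic.
topics = []  # list of lists of document indices
for i, v in enumerate(vecs):
    for group in topics:
        if cosine(v, vecs[group[0]]) > 0.3:
            group.append(i)
            break
    else:
        topics.append([i])

print(topics)  # documents about the economy vs. documents about robots
```

Proper topic models infer word distributions probabilistically rather than by a similarity threshold, but the output has the same shape: clusters of articles that a researcher then labels into themes such as Society or Technology and examines by close reading.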
  • Kullström, Niklas (Helsingin yliopisto, 2020)
    My thesis is about photography and its aesthetics in a world of digitized culture. The main hypothesis is that there is an ongoing and fundamental change in the way photographs and images are being produced, distributed and consumed in society, resulting in a new kind of aesthetics that did not previously exist in photography. I argue that digital photography should be seen as part of a wider range of digital imaging, as a separate field from traditional analogue photography. I consider all aspects of photographic practice (artistic, technical and social) and of photographic expression in different artistic, social and scientific practices, both analogue and digital. Fundamental issues are how the digital divide changes our perception, the way we work and how we process and understand images. I complement academic thought with empirical observations derived from my background as a practicing media artist and film and photography professional with almost two decades' experience in the field. I start by introducing a basic history of photography, in order to place the practice in a historically and technologically determined context, followed by defining what a photograph is in an analogue and digital sense. The main discussion looks at aesthetic concepts related to photography and imaging. This is mainly done by deconstructing formal aspects of the image/photograph and examining the photograph's function as a representation of reality and truth. To support my thoughts and to argue against conflicting theories, I mainly rely on writings and thoughts by authors like Bruce Wands, Vilém Flusser, Jerry L. Thompson, Martin Hand and Charlie Gere. From more classic writers on photographic theory I use Susan Sontag, Walter Benjamin, Roland Barthes, Henri Cartier-Bresson and Roger Scruton.
The aim is to create a comprehensive image of the field of thought, both on a contemporary and historical axis, and through this build a solid base for understanding and argumentation. I conclude that we are already living in the future, and that the reality we know will change with an ever-increasing pace, soon taking the step over to augmented and virtual reality. Current and future image makers should consider in depth what it really means to create images in a digital universe. A new way of seeing digitally is crucial for future understanding of the changing digital landscape of images.