Browsing by Subject "Artificial intelligence"


Now showing items 1-20 of 37
  • Erlandsson, Rasmus; Bjerke, Jarle W.; Finne, Eirik A.; Myneni, Ranga B.; Piao, Shilong; Wang, Xuhui; Virtanen, Tarmo; Rasanen, Aleksi; Kumpula, Timo; Kolari, Tiina H. M.; Tahvanainen, Teemu; Tommervik, Hans (2022)
    Although generally given little attention in vegetation studies, ground-dwelling (terricolous) lichens are major contributors to overall carbon and nitrogen cycling, albedo, biodiversity and biomass in many high-latitude ecosystems. Changes in the biomass of mat-forming pale lichens have the potential to affect vegetation, fauna, climate and human activities including reindeer husbandry. Lichens have a complex spectral signature, and terricolous lichens have limited growth height, often growing in mixtures with taller vegetation. This has, so far, prevented the development of remote sensing techniques to accurately assess lichen biomass, which would be a powerful tool in ecosystem and ecological research and rangeland management. We present a Landsat-based remote sensing model developed using deep neural networks, trained with 8914 field records of lichen volume collected over more than 20 years. In contrast to earlier proposed machine learning and regression methods for lichens, our model exploited the ability of neural networks to handle mixed spatial resolution input. We trained candidate models using input of 1 x 1 (30 x 30 m) and 3 x 3 Landsat pixels based on 7 reflective bands and 3 indices, combined with a 10 m spatial resolution digital elevation model. We normalised elevation data locally for each plot to remove region-specific variation while maintaining informative local variation in topography. The final model predicted lichen volume in an evaluation set (n = 159) reaching an R2 of 0.57. NDVI and elevation were the most important predictors, followed by the green band. Even with moderate tree cover density, the model was efficient, offering a considerable improvement over earlier methods based on specific reflectance. Although the model was trained only on data from Scandinavia, when applied to sites in North America and Russia its predictions corresponded well with our visual interpretations of lichen abundance. We also accurately quantified a recent historical (35-year) change in lichen abundance in northern Norway. This new method enables further spatial and temporal studies of variation and change in lichen biomass, relevant to multiple research questions as well as rangeland management and economic and cultural ecosystem services. Combined with information on changes in drivers such as climate, land use and management, and air pollution, our model can be used to provide accurate estimates of ecosystem changes and to improve vegetation-climate models by including pale lichens.
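Two of the preprocessing steps described above, the NDVI index and the local normalisation of elevation, can be sketched in a few lines. The function names and sample values below are illustrative, not taken from the study.

```python
# Minimal sketch of two preprocessing steps described in the abstract:
# NDVI from Landsat red/NIR reflectance, and local normalisation of
# elevation so only within-plot topographic variation remains.
# All names and values here are illustrative, not from the paper.

def ndvi(nir: float, red: float) -> float:
    """Normalised Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def normalise_elevation(plot_elevations: list[float]) -> list[float]:
    """Subtract the plot mean so region-specific elevation differences
    vanish while local topographic variation is preserved."""
    mean = sum(plot_elevations) / len(plot_elevations)
    return [e - mean for e in plot_elevations]
```

Dense vegetation gives a high NDVI (e.g. `ndvi(nir=0.45, red=0.09)` is about 0.67), and `normalise_elevation([310.0, 312.0, 308.0])` returns `[0.0, 2.0, -2.0]`: the regional 310 m offset is gone, the local relief remains.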
  • Mäkelä, Kati; Mäyränpää, Mikko I.; Sihvo, Hanna-Kaisa; Bergman, Paula; Sutinen, Eva; Ollila, Hely; Kaarteenaho, Riitta; Myllärniemi, Marjukka (2021)
    A large number of fibroblast foci (FF) predicts mortality in idiopathic pulmonary fibrosis (IPF). Other prognostic histological markers have not been identified. Artificial intelligence (AI) offers a possibility to quantitate possible prognostic histological features in IPF. We aimed to test the use of AI in IPF lung tissue samples by quantitating FF, interstitial mononuclear inflammation, and intra-alveolar macrophages with a deep convolutional neural network (CNN). Lung tissue samples of 71 patients with IPF from the Finnish IPF registry were analyzed by an AI model developed in the Aiforia® platform. The model was trained to detect tissue, air spaces, FF, interstitial mononuclear inflammation, and intra-alveolar macrophages with 20 samples. For survival analysis, cut-point values for high and low values of histological parameters were determined with maximally selected rank statistics. Survival was analyzed using the Kaplan-Meier method. A large area of FF predicted poor prognosis in IPF (p = 0.01). High numbers of interstitial mononuclear inflammatory cells and intra-alveolar macrophages were associated with prolonged survival (p = 0.01 and p = 0.01, respectively). Of lung function values, low diffusing capacity for carbon monoxide was connected to a high density of FF (p = 0.03) and a high forced vital capacity (% of predicted) was associated with a high intra-alveolar macrophage density (p = 0.03). The deep CNN detected histological features that are difficult to quantitate manually. Interstitial mononuclear inflammation and intra-alveolar macrophages were novel prognostic histological biomarkers in IPF. Evaluating histological features with AI provides novel information on the prognostic estimation of IPF.
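The survival analysis named above rests on the Kaplan-Meier estimator: at each event time, the survival probability is multiplied by the fraction of at-risk patients who survive it. A minimal sketch, assuming (time, event) pairs where event=False marks censoring; this is illustrative only, not the study's code.

```python
# A minimal Kaplan-Meier estimator. Input is a list of (time, event)
# pairs; event=True means the endpoint occurred, event=False means the
# observation was censored at that time. Purely illustrative.

def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each event time."""
    data = sorted(observations)
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, event in data if time == t and event)
        n_with_t = sum(1 for time, _ in data if time == t)
        if deaths:                                  # censoring steps the
            survival *= 1.0 - deaths / n_at_risk    # risk set down but not
            curve.append((t, survival))             # the survival curve
        n_at_risk -= n_with_t
        i += n_with_t
    return curve
```

For example, with one patient censored at t = 3, `kaplan_meier([(2, True), (3, False), (5, True), (7, True)])` gives `[(2, 0.75), (5, 0.375), (7, 0.0)]`: the censored patient leaves the risk set without lowering the curve.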
  • Sjöblom, Nelli; Boyd, Sonja; Manninen, Anniina; Blom, Sami; Knuuttila, Anna; Färkkilä, Martti; Arola, Johanna (2023)
    Background and Aims: Primary sclerosing cholangitis (PSC) is a chronic cholestatic liver disease that obstructs the bile ducts and causes liver cirrhosis and cholangiocarcinoma. Efficient surrogate markers are required to measure disease progression. The cytokeratin 7 (K7) load in a liver specimen is an independent prognostic indicator that can be measured from digitalized slides using artificial intelligence (AI)-based models. Methods: A K7-AI model 2.0 was built to measure the hepatocellular K7 load area of the parenchyma, portal tracts, and biliary epithelium. K7-stained PSC liver biopsy specimens (n = 295) were analyzed. A compound endpoint (liver transplantation, liver-related death, and cholangiocarcinoma) was applied in Kaplan-Meier survival analysis to measure AUC values and positive likelihood ratios for each histological variable detected by the model. Results: The K7-AI model 2.0 was a better prognostic tool than plasma alkaline phosphatase, the fibrosis stage evaluated by Nakanuma classification, or the K7 score evaluated by a pathologist, based on the AUC values of measured variables. A combination of parameters, such as portal tract volume and area of K7-positive hepatocytes analyzed by the model, produced an AUC of 0.81 for predicting the compound endpoint. Portal tract volume measured by the model correlated with the histological fibrosis stage. Conclusions: The K7 staining of histological liver specimens in PSC provides significant information on disease outcomes through objective and reproducible data, including variables that cannot be measured by a human pathologist. The K7-AI model 2.0 could serve as a prognostic tool for clinical endpoints and as a surrogate marker in drug trials.
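The AUC values reported above can be read as the probability that a randomly chosen positive case is scored above a randomly chosen negative case (ties counting one half). A minimal sketch of that computation; the scores and labels are invented.

```python
# A minimal ROC AUC: the fraction of (positive, negative) pairs in which
# the positive case receives the higher score, with ties counted as 0.5.
# Scores and labels below are invented for illustration.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating score gives `roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]) == 1.0`, while a useless one hovers around 0.5.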
  • Sjöblom, Nelli; Boyd, Sonja; Manninen, Anniina; Knuuttila, Anna; Blom, Sami; Färkkilä, Martti; Arola, Johanna (2021)
    Background: The objective was to build a novel method for automated image analysis to locate and quantify the number of cytokeratin 7 (K7)-positive hepatocytes reflecting cholestasis by applying deep learning neural networks (AI model) in a cohort of 210 liver specimens. We aimed to study the correlation between the AI model's results and disease progression. The cohort of liver biopsies, which served as a model of chronic cholestatic liver disease, comprised patients diagnosed with primary sclerosing cholangitis (PSC). Methods: In a cohort of patients with PSC identified from the PSC registry of the University Hospital of Helsinki, K7-stained liver biopsy specimens were scored by a pathologist (human K7 score) and then digitally analyzed for K7-positive hepatocytes (K7%area). The digital analysis was performed by a K7-AI model created on the Aiforia Technologies cloud platform. For validation, the values were the human K7 score, stage of disease (Metavir and Nakanuma fibrosis scores), and plasma liver enzymes indicating clinical cholestasis, all subjected to correlation analysis. Results: The K7-AI model results (K7%area) correlated with the human K7 score (0.896; p < 2.2e−16). In addition, K7%area correlated with the stage of PSC (Metavir 0.446; p < 1.849e−10 and Nakanuma 0.424; p < 4.23e−10) and with plasma alkaline phosphatase (P-ALP) levels (0.369; p < 5.749e−5). Conclusions: The accuracy of the AI-based analysis was comparable to that of the human K7 score. Automated quantitative image analysis correlated with the stage of PSC and with P-ALP. Based on the results of the K7-AI model, we recommend K7 staining in the assessment of cholestasis by means of automated methods that provide fast (9.75 s/specimen) quantitative analysis.
  • Alabi, Rasheed Omobolaji; Elmusrati, Mohammed; Sawazaki-Calone, Iris; Kowalski, Luiz Paulo; Haglund, Caj; Coletta, Ricardo D.; Mäkitie, Antti A.; Salo, Tuula; Almangush, Alhadi; Leivo, Ilmo (2020)
    Background: The proper estimation of the risk of recurrence in early-stage oral tongue squamous cell carcinoma (OTSCC) is mandatory for individual treatment decision-making. However, this remains a challenge even for experienced multidisciplinary centers. Objectives: We compared the performance of four machine learning (ML) algorithms for predicting the risk of locoregional recurrence in patients with OTSCC. These algorithms were Support Vector Machine (SVM), Naive Bayes (NB), Boosted Decision Tree (BDT), and Decision Forest (DF). Materials and methods: The study cohort comprised 311 cases from the five University Hospitals in Finland and the A.C. Camargo Cancer Center, Sao Paulo, Brazil. For comparison of the algorithms, we used the F1 score (the harmonic mean of precision and recall), specificity, and accuracy values. These algorithms and their corresponding permutation feature importance (PFI) with the input parameters were externally tested on 59 new cases. Furthermore, we compared the performance of the algorithm that showed the highest prediction accuracy with the prognostic significance of depth of invasion (DOI). Results: The average specificity of all the algorithms was 71%. The SVM showed an accuracy of 68% and an F1 score of 0.63, NB an accuracy of 70% and an F1 score of 0.64, BDT an accuracy of 81% and an F1 score of 0.78, and DF an accuracy of 78% and an F1 score of 0.70. Additionally, these algorithms outperformed the DOI-based approach, which gave an accuracy of 63%. With PFI analysis, there was no significant difference in the overall accuracies of three of the algorithms: PFI-BDT accuracy increased to 83.1%, PFI-DF increased to 80%, and PFI-SVM decreased to 64.4%, while PFI-NB accuracy increased significantly to 81.4%. Conclusions: Our findings show that the best classification accuracy was achieved with the boosted decision tree algorithm, and these algorithms outperformed the DOI-based approach. Even with the few parameters identified in the PFI analysis, the ML techniques still showed the ability to predict locoregional recurrence. The application of the boosted decision tree machine learning algorithm can stratify OTSCC patients and thus aid their individual treatment planning.
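The F1 score used above is the harmonic mean of precision and recall; a minimal sketch from confusion-matrix counts (the counts in the example are invented).

```python
# Precision, recall and their harmonic mean (the F1 score) from
# confusion-matrix counts: true positives, false positives, false
# negatives. The example counts are invented for illustration.

def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)   # of predicted recurrences, how many were real
    recall = tp / (tp + fn)      # of real recurrences, how many were caught
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For instance, with tp=8, fp=2, fn=4 the precision is 0.8, recall 2/3, and F1 = 8/11 ≈ 0.73; the harmonic mean punishes a gap between the two more than an arithmetic mean would.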
  • Tupasela, Aaro; Di Nucci, Ezio (2020)
    Machine learning platforms have emerged as a new promissory technology that some argue will revolutionize work practices across a broad range of professions, including medical care. During the past few years, IBM has been testing its Watson for Oncology platform at several oncology departments around the world. Published reports, news stories, as well as our own empirical research show that in some cases, the levels of concordance over recommended treatment protocols between the platform and human oncologists have been quite low. Other studies supported by IBM claim concordance rates as high as 96%. We use the Watson for Oncology case to examine the practice of using concordance levels between tumor boards and a machine learning decision-support system as a form of evidence. We address a challenge related to the epistemic authority between oncologists on tumor boards and the Watson Oncology platform by arguing that the use of concordance levels as a form of evidence of quality or trustworthiness is problematic. Although the platform provides links to the literature from which it draws its conclusion, it obfuscates the scoring criteria that it uses to value some studies over others. In other words, the platform "black boxes" the values that are coded into its scoring system.
  • Wilcock, Graham; Jokinen, Kristiina (IEEE, 2022)
    The paper describes an approach that combines work from three fields with previously separate research communities: social robotics, conversational AI, and graph databases. The aim is to develop a generic framework in which a variety of social robots can provide high-quality information to users by accessing semantically rich knowledge graphs about multiple different domains. An example implementation uses a Furhat robot with Rasa open source conversational AI and knowledge graphs in Neo4j graph databases.
  • Ohukainen, Pauli; Kuusisto, Sanna; Kettunen, Johannes; Perola, Markus; Järvelin, Marjo-Riitta; Makinen, Ville-Petteri; Ala-Korpela, Mika (2020)
    Background and aims: Population subgrouping has been suggested as a means to improve coronary heart disease (CHD) risk assessment. We explored here how unsupervised data-driven metabolic subgrouping, based on comprehensive lipoprotein subclass data, would work in large-scale population cohorts. Methods: We applied a self-organizing map (SOM) artificial intelligence methodology to define subgroups based on detailed lipoprotein profiles in a population-based cohort (n = 5789) and utilised the trained SOM in an independent cohort (n = 7607). We identified four SOM-based subgroups of individuals with distinct lipoprotein profiles and CHD risk and compared those to univariate subgrouping by apolipoprotein B quartiles. Results: The SOM-based subgroup with the highest concentrations of non-HDL measures had the highest risk for CHD, and the subgroup with the lowest concentrations the lowest risk. However, apolipoprotein B quartiles produced better resolution of risk than the SOM-based subgroups, and also striking dose-response behaviour. Conclusions: These results suggest that the majority of lipoprotein-mediated CHD risk is explained by apolipoprotein B-containing lipoprotein particles. Therefore, even advanced multivariate subgrouping, with comprehensive data on lipoprotein metabolism, may not advance CHD risk assessment.
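A self-organizing map, as used above, places a small set of units in the data space so that similar profiles map to the same unit, which then defines a subgroup. A toy sketch on 2-D points; the data, unit count and learning schedule are invented, and the study's full SOM pipeline on lipoprotein profiles is far richer.

```python
# A toy self-organizing map on 2-D inputs: each input pulls its
# best-matching unit (BMU) and, early in training, that unit's
# neighbours toward itself. All parameters here are invented.
import math
import random

def train_som(data, n_units=4, epochs=50, lr=0.5, seed=0):
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        radius = max(1.0 - epoch / epochs, 0.1)  # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(len(x)):
                    units[i][d] += lr * h * (x[d] - units[i][d])
    return units

def assign(units, x):
    """Subgroup of x = index of its best-matching unit."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
```

On two well-separated point clouds, points from different clouds end up assigned to different units, which is exactly the subgrouping behaviour the abstract relies on.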
  • Oura, Petteri; Junno, Juho-Antti; Hunt, David; Lehenkari, Petri; Tuukkanen, Juha; Maijanen, Heli (2023)
    Although knee measurements yield high classification rates in metric sex estimation, there is a paucity of studies exploring the knee in artificial intelligence-based sexing. This proof-of-concept study aimed to develop deep learning algorithms for sex estimation from radiographs of reconstructed cadaver knee joints belonging to the Terry Anatomical Collection. A total of 199 knee radiographs were obtained from 100 skeletons (46 male and 54 female cadavers; mean age at death 64.2 years, range 50-102 years) whose tibiofemoral joints were reconstructed in standard anatomical position. The AIDeveloper software was used to train, validate, and test neural network architectures in sex estimation based on image classification. Of the explored algorithms, an MhNet-based model reached the highest overall testing accuracy of 90.3%. The model was able to classify all females (100.0%) and most males (78.6%) correctly. These preliminary findings encourage further research on artificial intelligence-based methods in sex estimation from the knee joint. Combining radiographic data with automated and externally validated algorithms may establish valuable tools to be utilized in forensic anthropology.
  • Cardoso, Ana Sofia; Bryukhova, Sofiya; Renna, Francesco; Reino, Luis; Xu, Chi; Xiao, Zixiang; Correia, Ricardo; Di Minin, Enrico; Ribeiro, Joana; Vaz, Ana Sofia (2023)
    E-commerce has become a booming market for wildlife trafficking, as online platforms are increasingly accessible and easier to navigate by sellers while still lacking adequate supervision. Artificial intelligence models, and specifically deep learning, have been emerging as promising tools for the automated analysis and monitoring of digital online content pertaining to wildlife trade. Here, we used and fine-tuned freely available artificial intelligence models (i.e., convolutional neural networks) to understand the potential of these models to identify instances of wildlife trade. We specifically focused on pangolin species, which are among the most trafficked mammals globally and have received increasing trade attention since the COVID-19 pandemic. Our convolutional neural networks were trained using online images (available from iNaturalist, Flickr and Google) displaying both traded and non-traded pangolin settings. The trained models performed well, identifying over 90% of potential instances of pangolin trade in the considered imagery dataset. These instances included the showcasing of pangolins in popular marketplaces (e.g., wet markets and cages) and the online display of commonly traded pangolin parts and derivatives (e.g., scales). Nevertheless, not all instances of pangolin trade could be identified by our models (e.g., in images with dark colours and shaded areas), leaving space for further research developments. The methodological developments and results from this exploratory study represent an advancement in the monitoring of online wildlife trade. Complementing our approach with other forms of online data, such as text, would be a way forward to deliver more robust monitoring tools for online trafficking.
  • Clubb, James H. A.; Kudling, Tatiana V.; Girych, Mykhailo; Haybout, Lyna; Pakola, Santeri; Hamdan, Firas; Cervera-Carrascon, Victor; Hemmes, Annabrita; Grönberg-Vähä-Koskela, Susanna; Santos, Joao Manuel; Quixabeira, Dafne C. A.; Basnet, Saru; Heiniö, Camilla; Arias, Victor; Jirovec, Elise; Kaptan, Shreyas; Havunen, Riikka; Sorsa, Suvi; Erikat, Abdullah; Schwartz, Joel; Anttila, Marjukka; Aro, Katri; Viitala, Tapani; Vattulainen, Ilpo; Cerullo, Vincenzo; Kanerva, Anna; Hemminki, Akseli (2023)
  • Xu, Dianlei; Li, Tong; Li, Yong; Su, Xiang; Tarkoma, Sasu; Jiang, Tao; Crowcroft, Jon; Hui, Pan (2021)
    Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing and protecting the privacy and security of the data and users. Although it emerged only recently, around 2011, this field of research has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss important open issues and possible theoretical and technical directions.
  • Pollak, Senja; Boggia, Michele; Linden, Carl-Gustav; Leppänen, Leo; Zosa, Elaine; Toivonen, Hannu (The Association for Computational Linguistics, 2021)
  • Zanca, F.; Hernandez-Giron, I.; Avanzo, M.; Guidi, G.; Crijns, W.; Diaz, O.; Kagadis, G. C.; Rampado, O.; Lonne, P.; Ken, S.; Colgan, N.; Zaidi, H.; Zakaria, G. A.; Kortesniemi, M. (2021)
    Purpose: To provide a guideline curriculum related to Artificial Intelligence (AI) for the education and training of European Medical Physicists (MPs). Materials and methods: The proposed curriculum consists of two levels: Basic (introducing MPs to the pillars of knowledge, development and applications of AI, in the context of medical imaging and radiation therapy) and Advanced. Both are common to the subspecialties (diagnostic and interventional radiology, nuclear medicine, and radiation oncology). The learning outcomes of the training are presented as knowledge, skills and competences (KSC approach). Results: For the Basic section, KSCs were stratified into four subsections: (1) Medical imaging analysis and AI basics; (2) Implementation of AI applications in clinical practice; (3) Big data and enterprise imaging; and (4) Quality, regulatory and ethical issues of AI processes. For the Advanced section, a common block was proposed, to be further elaborated by each subspecialty core curriculum. The learning outcomes were also translated into a syllabus of a more traditional format, including practical applications. Conclusions: This AI curriculum is the first attempt to create a guideline expanding the current educational framework for Medical Physicists in Europe. It should be considered a document to sit on top of the subspecialties' curricula, to be adapted by national training and regulatory bodies. The proposed educational program can be implemented via the European School of Medical Physics Expert (ESMPE) course modules and, to some extent, also by the national competent EFOMP organizations, to reach the medical physicist community widely across Europe.
  • Kuismin-Raerinne, Atte (Helsingin yliopisto, 2022)
    The usage of different types of wearable mHealth solutions among consumers has exploded, especially since the start of the COVID-19 pandemic. A big question regarding these devices is the quality and accuracy of the data they produce. As consumers use these devices to measure their heartbeat, blood sugar levels, sleep quality, blood oxygen levels, etc., the quality and accuracy of this data grows more important by the day, not only for the consumer but also for the development of Artificial Intelligence. For Artificial Intelligence in the medical sector, the importance of the data produced by these devices, which consumers wear voluntarily for long periods of time, cannot be overstated. Many mHealth devices also already use Artificial Intelligence in one way or another. The research question of this Thesis is how EU regulation affects the obligations of the producers of mHealth devices with regard to the data quality of these devices. The starting point for the research is to define Artificial Intelligence in general and data quality by EU standards. The method is a legal dogmatic approach to present and future EU regulation on this topic, with the viewpoint of ensuring high-quality data for Artificial Intelligence development. The scope of this research covers the Medical Device Regulations for current regulation and, building on the EU Data Strategy, the Data Governance Act, the proposal for the Data Act and finally the proposal for the Artificial Intelligence Act. I note that many other important aspects of this topic do not fit into the scope of this Thesis, namely access to data, movement of data, data protection, unfair commercial activities and "soft law"-type regulation, especially standards. The result of the research is that the situation is unclear in light of the regulations within the scope of this Thesis. For medical devices, the many obligations do ensure that the devices must work as intended and as such ensure data quality too. Many mHealth solutions, however, do not fit into the scope of either of the Medical Device Regulations, because their intended purpose is not 'medical'. As these devices produce more and more intricate health data, the question left to be answered is when the intended purpose becomes medical. The EU has tried to tackle this problem mainly with soft-law instruments, the latest being the ISO/TS 82304-2 standard on the quality of health and wellness apps, released in 2021. Of the upcoming regulations, the duo of data-related Acts does not shed any light on the problem: they mainly focus on access to data and movement of data, with the data quality provisions focusing on interoperability. The proposal for the Artificial Intelligence Act imposes obligations mainly on AI systems classified as 'high-risk'. The interesting part for this Thesis is how medical devices and their security systems would be classified as high-risk. This, however, leads the research back to the Medical Device Regulations and the issue with devices whose intended purpose is not medical.
  • Wilcock, Graham (2022)
    We have connected social robots to conversational AI systems that search for information in knowledge graphs stored in databases. We use Virtual Furhat robots with Rasa conversational AI. The knowledge graphs are stored in Neo4j graph databases. We have added semantic metadata to the knowledge graphs including taxonomies and other semantic hierarchies extracted from WikiData. Our primary goal is to develop methods to generate more intelligent dialogue responses that leverage the semantic metadata. A further aim is to use the metadata to generate simple explanations if the user asks the robot why it gave a certain response.
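The idea of enriching responses with taxonomic metadata can be sketched with an in-memory triple store: answering "what is X?" by following instance_of and then transitive subclass_of edges. In the paper the graphs live in Neo4j and are queried from Rasa; the triples and predicate names below are invented for illustration.

```python
# A minimal in-memory knowledge graph with a taxonomy, illustrating how
# semantic hierarchy metadata can enrich a dialogue response. The
# entities and predicates here are invented, not from the paper's graphs.

triples = [
    ("Furhat", "instance_of", "social robot"),
    ("social robot", "subclass_of", "robot"),
    ("robot", "subclass_of", "machine"),
]

def types_of(entity):
    """Follow instance_of, then transitive subclass_of edges."""
    found = []
    frontier = [o for s, p, o in triples
                if s == entity and p in ("instance_of", "subclass_of")]
    while frontier:
        t = frontier.pop(0)
        if t not in found:
            found.append(t)
            frontier += [o for s, p, o in triples
                         if s == t and p == "subclass_of"]
    return found
```

Here `types_of("Furhat")` returns `["social robot", "robot", "machine"]`, so a dialogue response can mention not just the direct type but the whole hierarchy, which is the kind of metadata-driven answer described above.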
  • Hatwagner, M.F.; Vastag, G.; Niskanen, V.A.; Kóczy, L.T. (Springer, 2018)
    Lecture Notes in Computer Science
    Fuzzy Cognitive Maps (FCMs) are widely applied for describing the major components of complex systems and their interconnections. The popularity of FCMs is mostly based on their simple system representation, easy model creation and usage, and their decision support capabilities. The preferable way of model construction is based on historical, measured data of the investigated system and a suitable learning technique. Such data are not always available, however. In these cases experts have to define the strength and direction of causal connections among the components of the system, and their decisions are unavoidably affected by more or less subjective elements. Unfortunately, even a small change in the estimated strength may lead to significantly different simulation outcomes, which could pose significant decision risks. Therefore, the preliminary exploration of model ‘sensitivity’ to subtle weight modifications is very important to decision makers. This way their attention can be drawn to possible problems. This paper deals with an advanced version of a behavioral analysis. Based on the experiences of the authors, their method is further improved to generate more life-like, slightly modified model versions based on the original one suggested by experts. The details of the method are described, and its application and results are presented through the example of a banking application. The combination of Pareto-fronts and the Bacterial Evolutionary Algorithm is a novelty of the approach. © Springer International Publishing AG, part of Springer Nature 2018.
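The sensitivity discussed above can be illustrated with the basic FCM update rule: each concept's next activation is a sigmoid of the weighted sum of the others. This is a generic sigmoid-threshold FCM, not the authors' improved method, and the weights are invented.

```python
# A minimal fuzzy cognitive map simulation: iterate
#   A_i(t+1) = sigmoid( sum_j w[j][i] * A_j(t) )
# and compare outcomes for slightly different expert weight estimates.
# Concepts and weights are invented for illustration.
import math

def simulate_fcm(weights, state, steps=25):
    """weights[j][i] is the causal strength of concept j on concept i."""
    n = len(state)
    for _ in range(steps):
        state = [1.0 / (1.0 + math.exp(-sum(weights[j][i] * state[j]
                                            for j in range(n))))
                 for i in range(n)]
    return state

# Two weight matrices differing in a single expert-estimated strength:
base = simulate_fcm([[0.0, 1.0], [0.5, 0.0]], [0.5, 0.5])
shifted = simulate_fcm([[0.0, 1.0], [0.9, 0.0]], [0.5, 0.5])
```

Comparing `base` and `shifted` shows the point the abstract makes: nudging one causal weight moves the simulated fixed point, which is why exploring this sensitivity matters before decisions are based on the model.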
  • De Simone, Belinda; Abu-Zidan, Fikri M.; Gumbs, Andrew A.; Chouillard, Elie; Di Saverio, Salomone; Sartelli, Massimo; Coccolini, Federico; Ansaloni, Luca; Collins, Toby; Kluger, Yoram; Moore, Ernest E.; Litvin, Andrej; Leppaniemi, Ari; Mascagni, Pietro; Milone, Luca; Piccoli, Micaela; Abu-Hilal, Mohamed; Sugrue, Michael; Biffl, Walter L.; Catena, Fausto (2022)
    Aim: We aimed to evaluate the knowledge, attitudes, and practices in the application of AI in the emergency setting among international acute care and emergency surgeons. Methods: An online questionnaire composed of 30 multiple choice and open-ended questions was sent to the members of the World Society of Emergency Surgery between 29th May and 28th August 2021. The questionnaire was developed by a panel of 11 international experts and approved by the WSES steering committee. Results: 200 participants answered the survey; 32 (16%) were female. 172 (86%) surgeons thought that AI will improve acute care surgery. Fifty surgeons (25%) were trained robotic surgeons and could perform robotic surgery, but only 19 (9.5%) were currently performing it. 126 (63%) surgeons did not have a robotic system in their institution, and where one was available it was mainly used for elective surgery. Only 100 surgeons (50%) were able to define different AI terminology. Participants thought that AI is useful to support training and education (61.5%), perioperative decision making (59.5%), and surgical vision (53%) in emergency surgery. There was no statistically significant difference between males and females in ability, interest in training, or expectations of AI (p values 0.91, 0.82, and 0.28, respectively; Mann-Whitney U test). Ability was significantly correlated with interest and expectations (p < 0.0001, Pearson rank correlation, rho = 0.42 and 0.47, respectively) but not with experience (p = 0.9, rho = −0.01). Conclusions: The implementation of artificial intelligence in the emergency and trauma setting is still in an early phase. The support of emergency and trauma surgeons is essential for the progress of AI in their setting, which can be augmented by proper research and training programs in this area.
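The rho values reported above are rank correlations. They can be illustrated with a minimal Spearman-style computation: rank both variables (averaging tied ranks) and take the Pearson correlation of the ranks. Illustrative only, not the study's statistical code.

```python
# A minimal Spearman rank correlation: rank each variable with average
# ranks for ties, then compute the Pearson correlation of the ranks.
# Purely illustrative; the inputs in the examples are invented.

def ranks(xs):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly monotone increasing relationship gives rho = 1 and a decreasing one rho = −1, regardless of the raw scale of either variable.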
  • Li, Shu; Faure, Michael; Havu, Katri (2022)
    The potential of artificial intelligence (AI) has grown exponentially in recent years, which not only generates value but also creates risks. AI systems are characterised by their complexity, opacity and autonomy in operation. Now and in the foreseeable future, AI systems will be operating in a manner that is not fully autonomous. This signifies that providing appropriate incentives to the human parties involved is still of great importance in reducing AI-related harm. Therefore, liability rules should be adapted in such a way as to provide the relevant parties with incentives to efficiently reduce the social costs of potential accidents. Relying on a law and economics approach, we address the theoretical question of what kind of liability rules should be applied to different parties along the value chain related to AI. In addition, we critically analyse the ongoing policy debates in the European Union, discussing the risk that European policymakers will fail to determine efficient liability rules with regard to different stakeholders.