Results for 'Machine Learning Models'

975 found
  1. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions (...)
    60 citations
  2. Machine learning models, trusted research environments and UK health data: ensuring a safe and beneficial future for AI development in healthcare. Charalampia Kerasidou, Maeve Malone, Angela Daly & Francesco Tava - 2023 - Journal of Medical Ethics 49 (12):838-843.
    Digitalisation of health and the use of health data in artificial intelligence, and machine learning (ML), including for applications that will then in turn be used in healthcare are major themes permeating current UK and other countries’ healthcare systems and policies. Obtaining rich and representative data is key for robust ML development, and UK health data sets are particularly attractive sources for this. However, ensuring that such research and development is in the public interest, produces public benefit and (...)
  3. Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have proven largely unsuccessful – at least when measured against traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, (...)
  4. Ensemble Machine Learning Model for Classification of Spam Product Reviews. Muhammad Fayaz, Atif Khan, Javid Ur Rahman, Abdullah Alharbi, M. Irfan Uddin & Bader Alouffi - 2020 - Complexity 2020:1-10.
    Nowadays, online product reviews have been at the heart of the product assessment process for a company and its customers. They give feedback to a company on improving product quality, planning, and monitoring its business schemes in order to increase sales and gain more profit. They are also helpful for customers to select the right products with less effort and time. Most companies make spam reviews of products in order to increase product sales and gain more profit. Detecting spam (...)
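    The ensemble architecture itself is not detailed in the excerpt above, so purely as a hedged illustration of the general pattern (TF-IDF text features feeding several classifiers combined by voting), here is a minimal scikit-learn sketch on toy review data. The example reviews, labels, and model choices are assumptions for illustration, not the authors' setup.

```python
# Hedged sketch of an ensemble spam-review classifier (illustrative only):
# TF-IDF features feeding a soft-voting ensemble of three standard learners.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Toy labelled reviews: 1 = spam, 0 = genuine (placeholder data, not the paper's corpus).
reviews = [
    "great product, works as described",
    "BUY NOW!!! best deal click here",
    "arrived late but the quality is fine",
    "amazing amazing amazing, five stars, visit my page!!!",
]
labels = [0, 1, 0, 1]

model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("nb", MultinomialNB()),
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ],
        voting="soft",  # average predicted probabilities across the three models
    ),
)
model.fit(reviews, labels)
print(model.predict(["limited time offer, click to win a free phone",
                     "solid build, battery lasts two days"]))
```

    A real evaluation would of course use a held-out test set and report precision and recall rather than eyeballing predictions.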
  5. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model (...)
    1 citation
  6. Adaptive Medical Machine Learning Models Should Not Be Classified as Perpetual Research, but Do Require New Regulatory Solutions. Yves Saint James Aquino & Stacy Carter - 2024 - American Journal of Bioethics 24 (10):82-85.
  7. Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that (...)
    4 citations
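    One concrete way inductive risk can surface in a binary classifier of the kind discussed here (a generic illustration, not Karaca's own formalism) is through asymmetric misclassification costs: if missing a disease is judged worse than a false alarm, that value judgement can be encoded as class weights during training or as a lowered decision threshold at prediction time. A minimal sketch on synthetic data, with an assumed 5:1 cost ratio and a 20% threshold:

```python
# Sketch: two ways of encoding asymmetric error costs in a binary disease classifier.
# The cost ratio and threshold are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Option 1: class weights make false negatives five times as costly during training.
clf = LogisticRegression(class_weight={0: 1, 1: 5}, max_iter=1000).fit(X_tr, y_tr)

# Option 2: keep the fitted model and lower the decision threshold at prediction time.
threshold = 0.2  # flag as positive whenever P(disease) exceeds 20%
y_hat = (clf.predict_proba(X_te)[:, 1] >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print(f"false negatives: {fn}, false positives: {fp}")
```

    Either choice trades false positives against false negatives; which trade-off is acceptable is exactly the kind of value-laden decision the abstract describes.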
  8. Application of Machine Learning Models for Tracking Participant Skills in Cognitive Training. Sanjana Sandeep, Christian R. Shelton, Anja Pahor, Susanne M. Jaeggi & Aaron R. Seitz - 2020 - Frontiers in Psychology 11.
  9. Prediction of Banks Efficiency Using Feature Selection Method: Comparison between Selected Machine Learning Models. Hamzeh F. Assous - 2022 - Complexity 2022:1-15.
    This study aims to examine the main determinants of efficiency of both conventional and Islamic Saudi banks and then choose the best fit model among machine learning prediction models: Chi-squared automatic interaction detector, linear regression, and neural network. The data were collected from the annual financial reports of Saudi banks from 2014 to 2018. The Saudi banking sector consists of 11 banks, 4 of which are Islamic. In this study, the major financial ratios are subgrouped into (...)
  10. Machine learning and the quest for objectivity in climate model parameterization. Julie Jebeile, Vincent Lam, Mason Majszak & Tim Räz - 2023 - Climatic Change 176 (101).
    Parameterization and parameter tuning are central aspects of climate modeling, and there is widespread consensus that these procedures involve certain subjective elements. Even if the use of these subjective elements is not necessarily epistemically problematic, there is an intuitive appeal for replacing them with more objective (automated) methods, such as machine learning. Relying on several case studies, we argue that, while machine learning techniques may help to improve climate model parameterization in several ways, they still require (...)
  11. The virtue of simplicity: On machine learning models in algorithmic trading. Kristian Bondo Hansen - 2020 - Big Data and Society 7 (1).
    Machine learning models are becoming increasingly prevalent in algorithmic trading and investment management. The spread of machine learning in finance challenges existing practices of modelling and model use and creates a demand for practical solutions for how to manage the complexity pertaining to these techniques. Drawing on interviews with quants applying machine learning techniques to financial problems, the article examines how these people manage model complexity in the process of devising machine learning-powered trading algorithms. The analysis shows that machine learning quants use Ockham’s razor – things should not be multiplied without necessity – as a heuristic tool to prevent excess model complexity and secure a certain level of human control and interpretability in the modelling process. I argue that understanding the way quants handle the complexity of learning models is a key to grasping the transformation of the human’s role in contemporary data and model-driven finance. The study contributes to social studies of finance research on the human–model interplay by exploring it in the context of machine learning model use.
    3 citations
  12. Two-Stage Hybrid Machine Learning Model for High-Frequency Intraday Bitcoin Price Prediction Based on Technical Indicators, Variational Mode Decomposition, and Support Vector Regression. Samuel Asante Gyamerah - 2021 - Complexity 2021:1-15.
    Due to the inherent chaotic and fractal dynamics in the price series of Bitcoin, this paper proposes a two-stage Bitcoin price prediction model by combining the advantage of variational mode decomposition and technical analysis. VMD eliminates the noise signals and stochastic volatility in the price data by decomposing the data into variational mode functions, while technical analysis uses statistical trends obtained from past trading activity and price changes to construct technical indicators. The support vector regression accepts input from a hybrid (...)
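    The pipeline summarised above (variational mode decomposition of the price series, technical indicators, and support vector regression) is only sketched in outline, so the following rough illustration shows the regression stage alone: a synthetic random-walk series, lagged prices plus a simple moving average standing in for the full indicator set, and an SVR forecaster. The VMD step and all parameter choices are omitted or assumed, so this is orientation only, not the paper's model.

```python
# Rough sketch of the SVR stage only: predict the next price from lagged prices
# plus one moving-average indicator. Synthetic data; the VMD decomposition is omitted.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 500)) + 100.0    # stand-in for an intraday price series

lag, ma_window = 5, 10
X, y = [], []
for t in range(ma_window, len(price) - 1):
    lags = price[t - lag + 1 : t + 1]               # the last `lag` prices
    ma = price[t - ma_window + 1 : t + 1].mean()    # simple moving-average indicator
    X.append(np.append(lags, ma))
    y.append(price[t + 1])                          # next-step target
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))                           # keep time order: no shuffling
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```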
  13. AI and mental health: evaluating supervised machine learning models trained on diagnostic classifications. Anna van Oosterzee - forthcoming - AI and Society:1-10.
    Machine learning (ML) has emerged as a promising tool in psychiatry, revolutionising diagnostic processes and patient outcomes. In this paper, I argue that while ML studies show promising initial results, their application in mimicking clinician-based judgements presents inherent limitations (Shatte et al. in Psychol Med 49:1426–1448. https://doi.org/10.1017/S0033291719000151, 2019). Most models still rely on DSM (the Diagnostic and Statistical Manual of Mental Disorders) categories, known for their heterogeneity and low predictive value. DSM's descriptive nature limits the validity of (...)
  14. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (...)
    7 citations
  15. Model theory and machine learning. Hunter Chase & James Freitag - 2019 - Bulletin of Symbolic Logic 25 (3):319-332.
    About 25 years ago, it came to light that a single combinatorial property determines both an important dividing line in model theory and machine learning. The following years saw a fruitful exchange of ideas between PAC-learning and the model theory of NIP structures. In this article, we point out a new and similar connection between model theory and machine learning, this time developing a correspondence between stability and learnability in various settings of online learning. (...)
    3 citations
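    For orientation on the "single combinatorial property" mentioned in the abstract: the classical link is that a formula is NIP exactly when its associated set system has finite VC dimension (the PAC-learning side), and the correspondence developed in the paper relates stability to online learnability via the Littlestone dimension. The standard definition of VC dimension, stated here for reference only:

```latex
% Shattering and VC dimension (standard definitions, quoted for orientation).
% \mathcal{H} is a family of subsets of a set X; a finite S \subseteq X is
% shattered when every subset of S is cut out by some member of \mathcal{H}.
\[
  \mathcal{H} \text{ shatters } S \subseteq X
  \iff \{\, H \cap S : H \in \mathcal{H} \,\} = 2^{S},
  \qquad
  \mathrm{VCdim}(\mathcal{H}) = \sup \{\, |S| : S \text{ finite},\ \mathcal{H} \text{ shatters } S \,\}.
\]
```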
  16. Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on par with (...)
    8 citations
  17. Machine Learning in Psychometrics and Psychological Research. Graziella Orrù, Merylin Monaro, Ciro Conversano, Angelo Gemignani & Giuseppe Sartori - 2020 - Frontiers in Psychology 10:492685.
    Recent controversies about the level of replicability of behavioral research analyzed using statistical inference have cast interest in developing more efficient techniques for analyzing the results of psychological experiments. Here we claim that complementing the analytical workflow of psychological experiments with Machine Learning-based analysis will both maximize accuracy and minimize replicability issues. As compared to statistical inference, ML analysis of experimental data is model agnostic and primarily focused on prediction rather than inference. We also highlight some potential pitfalls (...)
    6 citations
  18. Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena. Timo Freiesleben, Gunnar König, Christoph Molnar & Álvaro Tejero-Cantero - 2024 - Minds and Machines 34 (3):1-39.
    To learn about real world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g. neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet, current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for (...)
    1 citation
  19. Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a series of tasks, as in image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, (...)
    1 citation
  20. Using machine learning to create a repository of judgments concerning a new practice area: a case study in animal protection law. Joe Watson, Guy Aglionby & Samuel March - 2023 - Artificial Intelligence and Law 31 (2):293-324.
    Judgments concerning animals have arisen across a variety of established practice areas. There is, however, no publicly available repository of judgments concerning the emerging practice area of animal protection law. This has hindered the identification of individual animal protection law judgments and comprehension of the scale of animal protection law made by courts. Thus, we detail the creation of an initial animal protection law repository using natural language processing and machine learning techniques. This involved domain expert classification of (...)
    1 citation
  21. Combining psychological models with machine learning to better predict people’s decisions. Avi Rosenfeld, Inon Zuckerman, Amos Azaria & Sarit Kraus - 2012 - Synthese 189 (S1):81-93.
    Creating agents that proficiently interact with people is critical for many applications. Towards creating these agents, models are needed that effectively predict people's decisions in a variety of problems. To date, two approaches have been suggested to generally describe people's decision behavior. One approach creates a-priori predictions about people's behavior, either based on theoretical rational behavior or based on psychological models, including bounded rationality. A second type of approach focuses on creating models based exclusively on observations of (...)
  22. Privacy and surveillance concerns in machine learning fall prediction models: implications for geriatric care and the internet of medical things. Russell Yang - forthcoming - AI and Society:1-5.
    Fall prediction using machine learning has become one of the most fruitful and socially relevant applications of computer vision in gerontological research. Since its inception in the early 2000s, this subfield has proliferated into a robust body of research underpinned by various machine learning algorithms (including neural networks, support vector machines, and decision trees) as well as statistical modeling approaches (Markov chains, Gaussian mixture models, and hidden Markov models). Furthermore, some advancements have been translated (...)
    1 citation
  23. Machine Learning Classifiers to Evaluate Data From Gait Analysis With Depth Cameras in Patients With Parkinson’s Disease. Beatriz Muñoz-Ospina, Daniela Alvarez-Garcia, Hugo Juan Camilo Clavijo-Moran, Jaime Andrés Valderrama-Chaparro, Melisa García-Peña, Carlos Alfonso Herrán, Christian Camilo Urcuqui, Andrés Navarro-Cadavid & Jorge Orozco - 2022 - Frontiers in Human Neuroscience 16.
    Introduction: The assessments of the motor symptoms in Parkinson’s disease are usually limited to clinical rating scales, and they depend on the clinician’s experience. This study aims to propose a machine learning algorithm using variables from the upper and lower limbs to classify people with PD from healthy people, using data from a portable low-cost device. It can also be used to support the diagnosis and follow-up of patients in developing countries and remote areas. Methods: We used the Kinect® eMotion system to capture (...)
  24. Machine Learning to Assess Relatedness: The Advantage of Using Firm-Level Data. Giambattista Albora & Andrea Zaccaria - 2022 - Complexity 2022:1-12.
    The relatedness between a country or a firm and a product is a measure of the feasibility of that economic activity. As such, it is a driver for investments at a private and institutional level. Traditionally, relatedness is measured using networks derived by country-level co-occurrences of product pairs, that is, counting how many countries export both. In this work, we compare networks and machine learning algorithms trained not only on country-level data, but also on firms, which is something (...)
    1 citation
  25. A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. [REVIEW] Tomáš Kliegr, Štěpán Bahník & Johannes Fürnkranz - 2021 - Artificial Intelligence 295 (C):103458.
  26. Gaussian Process Panel Modeling—Machine Learning Inspired Analysis of Longitudinal Panel Data. Julian D. Karch, Andreas M. Brandmaier & Manuel C. Voelkle - 2020 - Frontiers in Psychology 11.
    In this article, we extend the Bayesian nonparametric regression method Gaussian Process Regression to the analysis of longitudinal panel data. We call this new approach Gaussian Process Panel Modeling (GPPM). GPPM provides great flexibility because of the large number of models it can represent. It allows classical statistical inference as well as machine learning inspired predictive modeling. GPPM offers frequentist and Bayesian inference without the need to resort to Markov chain Monte Carlo-based approximations, which makes the approach (...)
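    The GPPM framework itself is specified in the paper and its accompanying software; purely to illustrate the underlying machinery (a Gaussian process fitted to repeated measurements over time, yielding a predictive mean with uncertainty), here is a minimal scikit-learn sketch on one simulated trajectory. The kernel, the data, and the hyperparameters are assumptions, not the GPPM specification.

```python
# Minimal Gaussian process regression on one simulated panel member's trajectory.
# Illustrates the underlying GP machinery only, not the GPPM framework itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 15).reshape(-1, 1)               # 15 measurement occasions
y = 2.0 + 0.5 * t.ravel() + np.sin(t.ravel()) + rng.normal(0, 0.3, 15)

# Smooth trend plus measurement noise; hyperparameters are fitted by marginal likelihood.
kernel = ConstantKernel() * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

t_new = np.linspace(0, 12, 50).reshape(-1, 1)           # interpolate and extrapolate
mean, std = gp.predict(t_new, return_std=True)          # predictive mean and uncertainty
print(gp.kernel_)                                       # fitted kernel hyperparameters
print(f"forecast at t=12: {mean[-1]:.2f} +/- {2 * std[-1]:.2f}")
```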
  27. Testimonial injustice in medical machine learning: a perspective from psychiatry. George Gillett - 2023 - Journal of Medical Ethics 49 (8):541-542.
    Pozzi provides a thought-provoking account of how machine-learning clinical prediction models (such as Prescription Drug Monitoring Programmes (PDMPs)) may exacerbate testimonial injustice.1 In this response, I generalise Pozzi’s concerns about PDMPs to traditional models of clinical practice and question the claim that inaccurate clinicians are necessarily preferable to inaccurate machine-learning models. I then explore Pozzi’s concern that such models may deprive patients of a right to ‘convey information’. I suggest that machine-learning tools may be used to enhance, rather than frustrate, this right, through the perspective of hermeneutical justice. Pozzi objects to the introduction of machine-learning risk prediction tools in clinical care, due to their being ‘epistemically opaque’, often inaccurate, and a threat to patients’ testimonial justice. Through the example of psychiatry, I suggest this stance idealises traditional models of clinical practice. I propose that clinicians’ judgements are often equally vulnerable to opaque subjectivity, bias and epistemic injustice. For instance, the use of clinical observation or collateral interviewing might be conceptualised as a threat to testimonial justice, by neglecting patients’ own perspectives or voices. In practice, clinical risk prediction often follows inconsistent and somewhat subjective clinical heuristics, which are difficult to summarise or evaluate. Further, although Pozzi diminishes machine-learning …
    2 citations
  28. Show or suppress? Managing input uncertainty in machine learning model explanations. Danding Wang, Wencan Zhang & Brian Y. Lim - 2021 - Artificial Intelligence 294 (C):103456.
  29. Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the (...)
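    As a hedged illustration of the surrogate idea in the abstract, the sketch below trains an opaque model and then fits a depth-limited (sparse) decision tree to the opaque model's own predictions, so that the simple tree serves as a global approximation whose rules can be read off; dedicated rule-list learners would play an analogous role. The dataset and the depth limit are arbitrary choices for illustration, not taken from the paper.

```python
# Global surrogate sketch: approximate an opaque model with a depth-2 decision tree
# trained on the opaque model's predictions (its fidelity target), not the true labels.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X, y = data.data, data.target

opaque = GradientBoostingClassifier(random_state=0).fit(X, y)   # stands in for the "black box"

surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, opaque.predict(X))        # learn to mimic the opaque model's outputs

fidelity = accuracy_score(opaque.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to the opaque model: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```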
  30. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2):205395171774353.
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and (...)
    21 citations
  31. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:I886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  32. A machine learning approach to detecting fraudulent job types. Marcel Naudé, Kolawole John Adebayo & Rohan Nanda - 2023 - AI and Society 38 (2):1013-1024.
    Job seekers find themselves increasingly duped and misled by fraudulent job advertisements, posing a threat to their privacy, security and well-being. There is a clear need for solutions that can protect innocent job seekers. Existing approaches to detecting fraudulent jobs do not scale well, function like a black-box, and lack interpretability, which is essential to guide applicants’ decision-making. Moreover, commonly used lexical features may be insufficient as the representation does not capture contextual semantics of the underlying document. Hence, this paper (...)
  33. Between world models and model worlds: on generality, agency, and worlding in machine learning. Konstantin Mitrokhov - forthcoming - AI and Society:1-13.
    The article offers a discursive account of what generality in machine learning research means and how it is constructed in the development of general artificial intelligence from the perspectives of cultural and media studies. I discuss several technical papers that outline novel architectures in machine learning and how they conceive of the “world”. The agency to learn and the learning curriculum are modulated through worlding (in the sense of setting up and unfolding of the world (...)
  34. On the genealogy of machine learning datasets: A critical history of ImageNet. Hilary Nicole, Andrew Smart, Razvan Amironesei, Alex Hanna & Emily Denton - 2021 - Big Data and Society 8 (2).
    In response to growing concerns of bias, discrimination, and unfairness perpetuated by algorithmic systems, the datasets used to train and evaluate machine learning models have come under increased scrutiny. Many of these examinations have focused on the contents of machine learning datasets, finding glaring underrepresentation of minoritized groups. In contrast, relatively little work has been done to examine the norms, values, and assumptions embedded in these datasets. In this work, we conceptualize machine learning (...)
    11 citations
  35. Beyond model interpretability: socio-structural explanations in machine learning. Andrew Smart & Atoosa Kasirzadeh - forthcoming - AI and Society:1-9.
    What is it to interpret the outputs of an opaque machine learning model? One approach is to develop interpretable machine learning techniques. These techniques aim to show how machine learning models function by providing either model-centric local or global explanations, which can be based on mechanistic interpretations (revealing the inner working mechanisms of models) or non-mechanistic approximations (showing input feature–output data relationships). In this paper, we draw on social philosophy to argue that (...)
    1 citation
  36. Using Machine Learning Algorithm to Describe the Connection between the Types and Characteristics of Music Signal. Bo Sun - 2021 - Complexity 2021:1-10.
    Music classification is conducive to online music retrieval, but the current music classification model finds it difficult to accurately identify various types of music, which makes the classification effect of the current music classification model poor. In order to improve the accuracy of music classification, a music classification model based on multifeature fusion and machine learning algorithm is proposed. First, we obtain the music signal, and then extract various features from the classification of the music signal, and use (...)
  37. The predictive reframing of machine learning applications: good predictions and bad measurements. Alexander Martin Mussgnug - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    Supervised machine learning has found its way into ever more areas of scientific inquiry, where the outcomes of supervised machine learning applications are almost universally classified as predictions. I argue that what researchers often present as a mere terminological particularity of the field involves the consequential transformation of tasks as diverse as classification, measurement, or image segmentation into prediction problems. Focusing on the case of machine-learning enabled poverty prediction, I explore how reframing a measurement (...)
    1 citation
  38. Word associations contribute to machine learning in automatic scoring of degree of emotional tones in dream reports. Reza Amini, Catherine Sabourin & Joseph De Koninck - 2011 - Consciousness and Cognition 20 (4):1570-1576.
    Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word search and emotional affiliation methods, to model and automatically match human judges’ scoring of dream report’s negative emotional tone. The current study added word associations to improve the model’s accuracy. Word associations were established using words’ frequency of co-occurrence with their defining words as found in (...)
    4 citations
  39. Machine learning in human creativity: status and perspectives. Mirko Farina, Andrea Lavazza, Giuseppe Sartori & Witold Pedrycz - 2024 - AI and Society 39 (6):3017-3029.
    As we write this research paper, we notice an explosion in popularity of machine learning in numerous fields (ranging from governance, education, and management to criminal justice, fraud detection, and internet of things). In this contribution, rather than focusing on any of those fields, which have been well-reviewed already, we decided to concentrate on a series of more recent applications of deep learning models and technologies that have only recently gained significant traction in the relevant literature. (...)
    1 citation
  40. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Jessica Dai, Sina Fazelpour & Zachary Lipton (eds.), Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the (...)
    1 citation
  41. Mechanistic Models and the Explanatory Limits of Machine Learning. Emanuele Ratti & Ezequiel López-Rubio - unknown
    We argue that mechanistic models elaborated by machine learning cannot be explanatory by discussing the relation between mechanistic models, explanation and the notion of intelligibility of models. We show that the ability of biologists to understand the model that they work with severely constrains their capacity of turning the model into an explanatory model. The more a mechanistic model is complex, the less explanatory it will be. Since machine learning increases its performances when (...)
    4 citations
  42. Cognition‐Enhanced Machine Learning for Better Predictions with Limited Data. Florian Sense, Ryan Wood, Michael G. Collins, Joshua Fiechter, Aihua Wood, Michael Krusmark, Tiffany Jastrzembski & Christopher W. Myers - 2022 - Topics in Cognitive Science 14 (4):739-755.
    The fields of machine learning (ML) and cognitive science have developed complementary approaches to computationally modeling human behavior. ML's primary concern is maximizing prediction accuracy; cognitive science's primary concern is explaining the underlying mechanisms. Cross-talk between these disciplines is limited, likely because the tasks and goals usually differ. The domain of e-learning and knowledge acquisition constitutes a fruitful intersection for the two fields’ methodologies to be integrated because accurately tracking learning and forgetting over time and predicting (...)
  43. Can Machines Learn How Clouds Work? The Epistemic Implications of Machine Learning Methods in Climate Science. Suzanne Kawamleh - 2021 - Philosophy of Science 88 (5):1008-1020.
    Scientists and decision makers rely on climate models for predictions concerning future climate change. Traditionally, physical processes that are key to predicting extreme events are either directly represented or indirectly represented. Scientists are now replacing physically based parameterizations with neural networks that do not represent physical processes directly or indirectly. I analyze the epistemic implications of this method and argue that it undermines the reliability of model predictions. I attribute the widespread failure in neural network generalizability to the lack (...)
    7 citations
  44. Trust in Intrusion Detection Systems: An Investigation of Performance Analysis for Machine Learning and Deep Learning Models. Basim Mahbooba, Radhya Sahal, Martin Serrano & Wael Alosaimi - 2021 - Complexity 2021:1-23.
    To design and develop AI-based cybersecurity systems (IDS) that users can justifiably trust, one needs to evaluate the impact of trust using machine learning and deep learning technologies. To guide the design and implementation of trusted AI-based systems in IDS, this paper provides a comparison among machine learning and deep learning models to investigate the trust impact based on the accuracy of the trusted AI-based systems regarding the malicious data in IDS. The four machine learning techniques are decision tree, K nearest neighbour, random forest, and naïve Bayes. The four deep learning techniques include LSTM and GRU. Two datasets are used to classify the IDS attack type, including the wireless sensor network detection system and the KDD Cup network intrusion dataset. A detailed comparison of the eight techniques’ performance using all features and selected features is made by measuring the accuracy, precision, recall, and F1-score. Considering the findings related to the data, methodology, and expert accountability, interpretability for AI-based solutions also becomes demanded to enhance trust in the IDS.
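    The two datasets named above (a wireless sensor network detection set and the KDD Cup data) are not reproduced here, so as an assumed stand-in the sketch below runs the four classical learners the abstract lists on a synthetic imbalanced binary dataset and reports the same four metrics. The deep learning side (LSTM, GRU) would need a sequence-modelling framework and is omitted.

```python
# Compare the four classical learners named in the abstract on a synthetic stand-in
# for an intrusion-detection dataset, reporting accuracy, precision, recall, and F1.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "decision tree": DecisionTreeClassifier(random_state=1),
    "k nearest neighbour": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=1),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:20s} acc={accuracy_score(y_te, y_hat):.3f} "
          f"prec={precision_score(y_te, y_hat):.3f} "
          f"rec={recall_score(y_te, y_hat):.3f} "
          f"f1={f1_score(y_te, y_hat):.3f}")
```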
  45. Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading. Bo Hee Min & Christian Borch - 2022 - Big Data and Society 9 (2).
    Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
  46. From Cellphones to Machine Learning. A Shift in the Role of the User in Algorithmic Writing. Galit Wellner - 2018 - In Alberto Romele & Enrico Terrone (eds.), Towards a Philosophy of Digital Media. Cham: Springer Verlag. pp. 205-224.
    Writing is frequently analyzed as a mode of communication. But writing can be done for personal reasons, to remind oneself of things to do, of thoughts, of events. The cellphone has revealed this shift, commencing as a communication device and ending up as a memory prosthesis that records what we see, hear, read and think. The recordings are not necessarily for communicating a message to others, but sometimes just for oneself. Today, when machine learning algorithms read, write and (...)
    10 citations
  47. Machine learning, inductive reasoning, and reliability of generalisations. Petr Spelda - 2020 - AI and Society 35 (1):29-37.
    The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues regarding inductive reasoning and reliability of generalisations. Towards this aim, the paper proceeds as follows. First, it expounds Price’s dual image of representation in terms of the notions of e-representations and i-representations that constitute subject naturalism. For Price, this is not a strictly anti-representationalist position but rather a dualist one (e- and i-representations). Second, the (...)
  48. Machine learning by imitating human learning. Chang Kuo-Chin, Hong Tzung-Pei & Tseng Shian-Shyong - 1996 - Minds and Machines 6 (2):203-228.
    Learning general concepts in imperfect environments is difficult since training instances often include noisy data, inconclusive data, incomplete data, unknown attributes, unknown attribute values and other barriers to effective learning. It is well known that people can learn effectively in imperfect environments, and can manage to process very large amounts of data. Imitating human learning behavior therefore provides a useful model for machine learning in real-world applications. This paper proposes a new, more effective way to (...)
  49. Machine Learning for Predicting Corporate Violations: How Do CEO Characteristics Matter? Ruijie Sun, Feng Liu, Yinan Li, Rongping Wang & Jing Luo - 2024 - Journal of Business Ethics 195 (1):151-166.
    Based on upper echelon theory, we employ machine learning to explore how CEO characteristics influence corporate violations using a large-scale dataset of listed firms in China for the period 2010–2020. Comparing ten machine learning methods, we find that eXtreme Gradient Boosting (XGBoost) outperforms the other models in predicting corporate violations. An interpretable model combining XGBoost and SHapley Additive exPlanations (SHAP) indicates that CEO characteristics play a central role in predicting corporate violations. Tenure has the strongest (...)
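    The XGBoost-plus-SHAP pattern the abstract describes is a standard one; the sketch below shows it on placeholder data. The feature names, the outcome rule, and all parameters are invented for illustration (they are not the study's CEO variables), and the example assumes the xgboost and shap packages are installed.

```python
# Sketch of the XGBoost + SHAP pattern on placeholder data (not the study's dataset).
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "tenure_years": rng.uniform(1, 20, n),    # placeholder CEO characteristics
    "age": rng.uniform(35, 70, n),
    "duality": rng.integers(0, 2, n),         # CEO also chairs the board (0/1)
})
# Placeholder outcome: violation risk loosely tied to tenure, plus noise.
y = (X["tenure_years"] + rng.normal(0, 5, n) > 12).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# TreeExplainer yields per-feature SHAP contributions for each prediction;
# averaging their absolute values gives a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
print(dict(zip(X.columns, mean_abs.round(3))))
```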
  50. What kind of novelties can machine learning possibly generate? The case of genomics. Emanuele Ratti - 2020 - Studies in History and Philosophy of Science Part A 83:86-96.
    Machine learning (ML) has been praised as a tool that can advance science and knowledge in radical ways. However, it is not clear exactly how radical are the novelties that ML generates. In this article, I argue that this question can only be answered contextually, because outputs generated by ML have to be evaluated on the basis of the theory of the science to which ML is applied. In particular, I analyze the problem of novelty of ML outputs (...)
    5 citations
1 — 50 / 975