AI

Practically Prompted #2 – Regulating the Regulators: Europe’s New AI ‘Code of Practice’ and the Ethics of Voluntary Compliance

This is the second in a trial blog series called “Practically Prompted” – an experiment in using large language models to independently select a recent, ethically rich news story and then write a Practical Ethics blog-style post about it. The text below is the model’s work, followed by some light human commentary. See this post for the… Read More »Practically Prompted #2 – Regulating the Regulators: Europe’s New AI ‘Code of Practice’ and the Ethics of Voluntary Compliance

Press Replay on Ethics: How AI Debate Panels Surface Hidden Value-Trade-Offs

TL;DR High-stakes policy decisions often involve conflicts between values, like fairness versus efficiency, or individual rights versus the common good. The various committees (like hospital ethics boards or policy advisory groups) tasked with resolving these conflicts often work in ways that are hard to scrutinize, their conclusions shaped by the specific people in the room.… Read More »Press Replay on Ethics: How AI Debate Panels Surface Hidden Value-Trade-Offs

Dire Wolves and Deep Prompts: Language Models in Applied Ethics

You might have seen the headlines: Colossal Biosciences claims to have brought back the dire wolf. Except, it’s not quite a direct resurrection. What Colossal actually created are genetically engineered proxies: grey wolves modified to have some dire wolf traits. I wondered if the news might renew interest in the ethics of “de-extinction” and perhaps… Read More »Dire Wolves and Deep Prompts: Language Models in Applied Ethics

Friend AI: Personal Enhancement or Uninvited Company?

Written by Christopher Register You can now pre-order a friend—or rather, a Friend, which is designed to be an AI friend. The small, round device contains AI-powered software and a microphone, and it’s designed to be worn on a lanyard around the neck at virtually any time. The austere product website says of Friend that, “When… Read More »Friend AI: Personal Enhancement or Uninvited Company?

Caution With Chatbots? Generative AI in Healthcare

Written by MSt in Practical Ethics student Dr Jeremy Gauntlett-Gilbert Human beings, as a species, love to tell stories and to imagine that there are person-like agents behind events. The Ancient Greeks saw the rivers and the winds as personalised deities, placating them if they appeared ‘angry’. Psychologists in classic 1940s experiments were impressed at… Read More »Caution With Chatbots? Generative AI in Healthcare

Moral AI And How We Get There with Prof Walter Sinnott-Armstrong

Can we build and use AI ethically? Walter Sinnott-Armstrong discusses how this can be achieved in his new book ‘Moral AI and How We Get There’ co-authored with Jana Schaich Borg & Vincent Conitzer. Edmond Awad talks through the ethical implications for AI use with Walter in this short video. With thanks to the Atlantic… Read More »Moral AI And How We Get There with Prof Walter Sinnott-Armstrong

Would You Survive Brain Twinning?

Imagine the following case: A few years in the future, neuroscience has advanced considerably to the point where it is able to artificially support conscious activity that is just like the conscious activity in a human brain. After diagnosis of an untreatable illness, a patient, C, has transferred (uploaded) his consciousness to the artificial substrate… Read More »Would You Survive Brain Twinning?

(Bio)technologies, human identity, and the Medical Humanities

Introducing two journal special issues and a conference Written by Alberto Giubilini Two special issues of the journals Bioethics and Monash Bioethics Review will be devoted to, respectively, “New (Bio)technology and Human Identity” and “Medical Humanities in the 21st Century” (academic readers, please consider submitting an article). Here I would like to briefly explain why… Read More »(Bio)technologies, human identity, and the Medical Humanities

Cross Post: What’s wrong with lying to a chatbot?

Written by Dominic Wilkinson, Consultant Neonatologist and Professor of Ethics, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Imagine that you are on the waiting list for a non-urgent operation. You were seen in the clinic some months ago, but still don’t have a date for the procedure. It is extremely frustrating, but it seems that you will just have to wait.

However, the hospital surgical team has just got in contact via a chatbot. The chatbot asks some screening questions about whether your symptoms have worsened since you were last seen, and whether they are stopping you from sleeping, working, or doing your everyday activities.

Your symptoms are much the same, but part of you wonders if you should answer yes. After all, perhaps that will get you bumped up the list, or at least able to speak to someone. And anyway, it’s not as if this is a real person. Read More »Cross Post: What’s wrong with lying to a chatbot?