13 March 2016
Can machines learn right from wrong? As the first generation of driverless cars and battlefield warbots filter into society, scientists are working to develop moral decision-making skills in robots. Brake or swerve? Shoot or stand down?
These decisions may be based on algorithms created with a defined set of parameters — the international laws of war, for example. Or they might be influenced by ‘ethical adapters,’ programs that simulate human emotions like guilt and shame.
It may even be possible for intelligent machines to develop a moral framework through accumulated experience, much like a child does. But as ‘ethical robotic software’ proliferates, who will be responsible for those decisions? What other emotions might robots acquire? And how will society adapt to machines that appear — and yet aren’t quite — human?
We’ll tackle these and other questions as leading cognitive scientists, roboticists, philosophers, and computer scientists take us inside the emerging field of robot morality.
The Moral Math of Robots is a Signature World Science Festival event.
Meet the Speakers
Ronald Arkin is Regents' Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He has served as a visiting professor at KTH in Stockholm, as Sabbatical Chair at the Sony IDL in Tokyo, and as a member of the Robotics and AI Group at LAAS/CNRS in Toulouse.
Arkin's research interests include behavior-based control and action-oriented perception for mobile robots and unmanned aerial vehicles (UAVs), human-robot interaction, robot ethics, and learning in autonomous systems. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology and is a founding co-chair of the IEEE RAS Technical Committee on Robot Ethics.
Graham Phillips is one of Australia's most accomplished science communicators, with 20 years' experience as a journalist and commentator. Prior to joining Catalyst, Graham was a reporter on ABC TV's science and technology programs Quantum and Hot Chips.
Graham has a PhD in astrophysics and has lectured and researched at various universities as well as the Commonwealth Scientific and Industrial Research Organisation (CSIRO). He has penned columns for almost every major Australian newspaper and has somehow found the time to write four popular science books, including Our Fabulous Future and Secrets of Science II.
Matthias Scheutz is a professor in the Department of Computer Science at Tufts University School of Engineering and director of the Human-Robot Interaction Laboratory. Scheutz's current research and teaching interests focus on complex cognitive and affective robots with natural language capabilities for human-robot interaction.
He is co-director of Tufts' interdisciplinary program in cognitive and brain science, and a program manager of the new Center for Applied Brain and Cognitive Sciences, a joint program with the U.S. Army Natick Soldier Research, Development, and Engineering Center.
Robert Sparrow is a professor in the Philosophy program, a chief investigator in the Australian Research Council Centre of Excellence for Electromaterials Science, and an adjunct professor in the Centre for Human Bioethics at Monash University, where he works on ethical issues raised by new technologies. He is the author of some 70 refereed papers and book chapters on topics ranging from the ethics of military robotics to cloning and nanotechnology. He is a co-chair of the IEEE Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.
Janet Wiles's research program involves bio-inspired computation in complex systems, with applications in cognitive science and biorobotics. Janet formed the Complex and Intelligent Systems research group at the University of Queensland (UQ).
She currently coordinates the UQ node of the ARC Centre of Excellence for the Dynamics of Language, where her research focuses on robots and language.