2019 Statement to the United Nations in Support of a Ban on LAWS

The following statement was read on the floor of the United Nations during the March 2019 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons.

The Future of Life Institute (FLI) is a research and outreach organization that works with scientists to mitigate existential risks facing humanity. FLI is deeply worried about an imprudent application of technology in warfare, especially with regard to emerging technologies in the field of artificial intelligence.

Let me give you an example: In just the last few months, researchers from various universities have shown how easy it is to trick image recognition software. For example, researchers at Auburn University found that if objects, like a school bus or a firetruck, were simply shifted into unnatural positions, so that they were upended or turned on their sides in an image, the image classifier would not recognize them. And this is just one of many, many examples of image recognition software failing because it does not understand the context within the image. This is the same technology that would analyze and interpret data picked up by the sensors of an autonomous weapons system. It’s not hard to see how quickly image recognition software could misinterpret situations on the battlefield if it has to quickly assess everyday objects that have been upended or destroyed.

And challenges with image recognition are only one of many examples of why an increasing number of people in AI research and in the tech field – that is, an increasing number of the people who are most familiar with how the technology works, and how it can go wrong – are saying that this technology cannot be used safely or fairly to select and engage a target. In the last few years, over 4,500 artificial intelligence and robotics researchers have called for a ban on lethal autonomous weapons, over 100 CEOs of prominent AI companies have called for a ban on lethal autonomous weapons, and over 240 companies and nearly 4,000 people have pledged never to develop lethal autonomous weapons.

But as we turn our attention to human-machine teaming, we must also carefully consider research coming from the field of psychology and recognize the limitations there as well. I’m sure everyone in this room has had a beneficial personal experience working with artificial intelligence.
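The image-recognition failure described above can be illustrated with a deliberately simple toy model. This is not the actual Auburn University setup, which tested deep networks on photographs of real objects; it is a minimal sketch, assuming only NumPy, in which a nearest-centroid classifier trained exclusively on "upright" patterns fails when those same patterns are rotated 90 degrees:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 8x8 grids. Class 0 is a horizontal bar, class 1 a vertical
# bar. The classifier is trained only on upright examples, mimicking a model
# that never saw overturned objects during training.
def make(cls, n):
    imgs = []
    for _ in range(n):
        img = rng.normal(0.0, 0.1, (8, 8))
        if cls == 0:
            img[3:5, :] += 1.0  # horizontal bar
        else:
            img[:, 3:5] += 1.0  # vertical bar
        imgs.append(img)
    return np.array(imgs)

train0, train1 = make(0, 50), make(1, 50)
c0, c1 = train0.mean(axis=0), train1.mean(axis=0)  # per-class centroids

def predict(img):
    # Assign the class whose centroid is nearest in raw pixel space.
    return 0 if np.sum((img - c0) ** 2) < np.sum((img - c1) ** 2) else 1

test0 = make(0, 20)
upright_acc = np.mean([predict(im) == 0 for im in test0])

# "Upend" the same objects: rotate each test image by 90 degrees.
rotated_acc = np.mean([predict(np.rot90(im)) == 0 for im in test0])

print(upright_acc, rotated_acc)  # rotated bars now resemble the other class
```

The point of the sketch is that a classifier keyed to pixel-level appearance has no notion of object identity under a change of pose: the rotated bar is, to it, simply an instance of the other class. Real deep networks are far more robust than this toy, but the Auburn results suggest they exhibit a version of the same brittleness on unusual poses.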