Ethics and Autonomous Machines



LDAHL
2-6-20, 11:29am
I saw an interesting article in Bloomberg about self-driving cars: how should they be programmed to respond to situations where a choice must be made between the occupants’ safety and the safety of other vehicles or pedestrians? Do you crash into the tree or veer onto the sidewalk full of people?

Will future buyers demand selfish versus altruistic options? Will insurers shun vehicles that choose the most costly disaster? Will government demand approval authority over the code? Will Asimov’s laws be written into the Uniform Commercial Code?

When the military begins deploying autonomous drones, smart munitions or a more clever class of mine, will they need to be programmed for some acceptable level of collateral damage or force proportionality as they go about their lethal business? Will machines need to choose when troops in the field must bear more risk to preserve the lives of non-combatants? Will computer intelligence some day be called upon to make some of the terrible decisions military officers have always needed to make?
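
To make the question concrete: consumer or military, much of the debate collapses into who sets a few numbers in a cost function. Here is a toy sketch in Python (every class, weight and probability below is invented for illustration; no real vehicle exposes anything like this):

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str            # e.g. "hit the tree", "veer onto the sidewalk"
    occupant_risk: float  # estimated chance of an occupant fatality
    bystander_risk: float # estimated chance of a fatality per bystander
    bystanders: int       # number of people exposed

def expected_harm(o: Outcome, occupant_weight: float = 1.0) -> float:
    # occupant_weight > 1 buys the "selfish" option; < 1 the altruistic one.
    return occupant_weight * o.occupant_risk + o.bystander_risk * o.bystanders

def choose(outcomes: list[Outcome], occupant_weight: float = 1.0) -> Outcome:
    # The entire ethical debate hides inside this one line:
    return min(outcomes, key=lambda o: expected_harm(o, occupant_weight))

tree = Outcome("hit the tree", occupant_risk=0.6, bystander_risk=0.0, bystanders=0)
sidewalk = Outcome("veer onto the sidewalk", occupant_risk=0.05,
                   bystander_risk=0.3, bystanders=4)

print(choose([tree, sidewalk]).label)                       # "hit the tree"
print(choose([tree, sidewalk], occupant_weight=5.0).label)  # "veer onto the sidewalk"

Whoever picks that occupant_weight is doing moral philosophy, whether they admit it or not. Rename "bystanders" to "non-combatants" and it is the drone question too.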

Will machine ethics become an important new branch of philosophy?

Tybee
2-6-20, 11:33am
Not Asimov, rather Ray Bradbury, in "The Veldt."

catherine
2-6-20, 11:47am
Wow. You're making my head explode.

I'm reminded of utilitarian ethicist Peter Singer's ethics analogy about a train track that divides into two. There is a little boy sitting on one track. All of your financial security is tied up in a vintage car, and that car is sitting on the other track. You are the one who controls which track the oncoming train takes. What do you do?

Most would say that of course you save the little boy. But Singer argues that in reality we are all, collectively, choosing to save the vintage car (our own financial security).

Interesting analogy. Not the same of course as the ethics of a self-driving car, but the car/train connection made me think of it.

I agree that this is a very interesting and important new "problem" to solve. Maybe the Autodrive will cancel out and put the decision in the driver's hands at the last minute. No self-driving car wants a guilty conscience.

razz
2-6-20, 12:01pm
People will be people, some struggling to be ethical and some self-interested. It is no different from my experience at the self-checkout at the grocery store. Someone carefully entered all their chosen items at the till, placed them in bags and left without paying.

I came up to the till and asked a staff member if there was a problem with the machine. It turns out this kind of non-payment is not unusual. Corrections added to address the problem include photographing those entering items at the till and new green-light indicators showing when a till is truly free...
Ethics is a complex field and the applications in the OP indicate a need for a monitoring process. I am curious what that will involve.

LDAHL
2-6-20, 12:06pm
Maybe the Autodrive will cancel out and put the decision in the driver's hands at the last minute. No self-driving car wants a guilty conscience.

People try to pass the buck all the time. Why shouldn’t AI?

The “Trolley Problem” in various forms gets used a lot. It even turned up in an episode of “The Good Place”.

I always thought Singer’s utilitarianism was a bit simplistic. He certainly doesn’t live it personally.

Teacher Terry
2-6-20, 12:10pm
I read that self-driving trucks are coming to Texas soon. I also read that if a self-driving car has to choose between hitting a pedestrian and crashing in a way likely to kill the occupants, it will hit the pedestrian.

SteveinMN
2-6-20, 12:14pm
Maybe the Autodrive will cancel out and put the decision in the driver's hands at the last minute. No self-driving car wants a guilty conscience.
I cannot see that ending well. Many (most?) drivers are inattentive (especially if the vehicle is driving for them) and prone to overreacting or simply freezing in a crisis -- and the car is going to hand them a decision that must be made in a matter of seconds (maybe less, since the driving system will have more information than the human does)?

I have long believed that the critical path for autonomous vehicles will not be the technology (it's pretty decent even now); it will be the law -- both the codification of (autonomous) behavioral standards and the ability of those "wronged" to sue based on the driving system's decisions. Given the market-chilling effect of insurance companies refusing to write policies on, say, flood-prone properties, the weight insurers will carry in the use and behavior of autonomous vehicles will be definitional.

happystuff
2-6-20, 3:54pm
I'm reminded of utilitarian ethicist Peter Singer's ethics analogy about a train track that divides into two. There is a little boy sitting on one track. All of your financial security is tied up in a vintage car, and that car is sitting on the other track. You are the one who controls which track the oncoming train takes. What do you do?

Make a slight modification and say it is YOUR little boy on one track and YOUR financial security on the other, and YOU are the one at the controls. I think the results change, and people really would save the boy then. Not sure that it constitutes a true change in ethics, though, but it is something to think about.

jp1
2-13-20, 1:09am
I think Steve has it right. Currently humans drive cars, and when they crash they (or their insurance) pay for the damage caused to other people or property. With an autonomous vehicle, where the actions of the vehicle are dictated by the software, it will be the vehicle manufacturer's or the programmer's responsibility. Once autonomous vehicles are the norm, people won't need auto insurance anymore.

I see insurance submissions for this type of risk a couple of times a month. I decline to quote them all. Not because I think the companies developing this don't know what they are doing, but because I don't know what the law is eventually going to say about who is responsible. Just as it is still being worked out when and where Uber is responsible for accidents in various scenarios (did they have a passenger or not, etc.), the same is the case for self-driving vehicles.

Self-driving vehicles will likely be far safer than human-driven vehicles, but they'll never be as safe as planes. Kids will still run out from between cars to chase balls, or whatever. Whether the car steers away from the kid but has a head-on collision with another car, or decides to mow down the kid, is a decision that someone will have to make before that decision has to be made.