MIT Technology Review recently published a very good piece titled "Why Self-Driving Cars Must Be Programmed to Kill", in which they claim that "carmakers must solve an impossible ethical dilemma of algorithmic morality". This sparked off a Twitter conversation with @e2b in which I stated my view that self-driving cars and similar machines will not be regarded as a set of algorithms but as beings that are subject to ethical considerations.
From Machine Learning to Autonomous Agents
I understand that, technically speaking, the machine still follows a pre-defined program and therefore lacks the "free will" it would need to make independent decisions. However, once deployed, such machines act autonomously in the absence of their creator or operator and react to new situations to the best of their knowledge and capabilities. I therefore suspect most people will think of them as autonomous agents rather than as a set of machine learning algorithms.
These autonomous agents will try to maximize their payoff (e.g. driving safely or energy efficiency) based on environmental inputs and the actions available to them. The actions of course have consequences, so the agent must weigh those consequences to pick the desirable action.
Now the question remains what the agent needs to take into consideration when picking an action. If the agent's decisions can have negative (possibly even fatal) outcomes, this, in my opinion, must enter into its deliberation. If the agent maximized its primary payoff (e.g. economic value) no matter what, it could potentially take drastic actions (e.g. robbing, fraud) that harm other agents or people. Such behaviour might not be tolerated. However, one could argue that deciding not to include these other factors is itself an act of implementing ethics (or a lack thereof) into the agent. In any case, ethical considerations have been made.
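To make the idea concrete, here is a minimal sketch of such a decision rule: the agent scores each available action by its primary payoff minus a penalty for expected harm to others. All names, weights, and numbers are illustrative assumptions on my part, not any real self-driving system's logic.

```python
# Sketch of ethically adjusted payoff maximization.
# HARM_WEIGHT and the action values are assumed for illustration only.

HARM_WEIGHT = 10.0  # how strongly harm outweighs the primary payoff (assumed)

def choose_action(actions):
    """Pick the action with the highest ethically adjusted score."""
    def score(action):
        return action["payoff"] - HARM_WEIGHT * action["expected_harm"]
    return max(actions, key=score)

# A pure payoff maximizer would pick "speed up" (payoff 5.0 vs 2.0),
# but the harm penalty steers the agent toward the safer action.
actions = [
    {"name": "speed up", "payoff": 5.0, "expected_harm": 0.8},
    {"name": "brake",    "payoff": 2.0, "expected_harm": 0.0},
]

print(choose_action(actions)["name"])  # braking scores 2.0 vs -3.0
```

The interesting design choice is exactly the one the paragraph above points at: setting HARM_WEIGHT to zero is not a neutral default but an ethical decision in its own right.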
The header image was taken from the MIT Technology Review article. All rights belong to the respective owner.