A bit of work in recent experimental philosophy (hat tip to Joshua Knobe and the Knobe Effect) has helpfully pointed out that human intuition seems to tie ethical blameworthiness and praiseworthiness to intent. In other words, someone is praiseworthy only when they intended to be (note: intuitions do not always seem so black-and-white about blameworthiness).
If this intuition is treated as a loose provisional premise (see below), then it follows that robots do not actually make ethically significant decisions.
Knobian Robot
KR1: Decisions are ethically significant if, and only if, one intended an ethically significant outcome.
KR2: A robot is not capable of ‘intent’ (because it operates according to pre-intended programming).
KRC: Therefore, a robot is not capable of decisions that are ethically significant.
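For readers who like to see the logical form, the Knobian Robot argument can be sketched as a valid inference in Lean. The predicate names (`ethSig`, `intends`) are my own hypothetical labels, not terms from the post; `ethSig a` stands for "agent a makes ethically significant decisions":

```lean
-- A sketch of the Knobian Robot argument in predicate form.
-- Hypothetical predicates: `ethSig a` = "a makes ethically significant
-- decisions"; `intends a` = "a is capable of intending an ethically
-- significant outcome".
example (Agent : Type) (ethSig intends : Agent → Prop)
    -- KR1: decisions are ethically significant iff intended
    (kr1 : ∀ a, ethSig a ↔ intends a)
    (robot : Agent)
    -- KR2: a robot is not capable of intent
    (kr2 : ¬ intends robot) :
    -- KRC: a robot cannot make ethically significant decisions
    ¬ ethSig robot :=
  fun h => kr2 ((kr1 robot).mp h)
```

The proof is just the left-to-right direction of KR1 composed with KR2, which shows the conclusion rests entirely on those two premises; anyone who rejects KRC must reject KR1 or KR2.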
This, however, does not rule out the possibility of a robot’s programmer making ethically significant decisions in programming the robot. For example, a programmer could intend to program his or her robot in a way that will produce certain ethically significant outcomes (as is the case in the above article). So,…
Knobian Programmer
KP1: Decisions are ethically significant if, and only if, one intended an ethically significant outcome.
KP2: A robot’s programmer can program a robot intending that the robot will operate in a way that is ethically significant.
KPC: Therefore, a robot’s programmer can make ethically significant decisions (about programming).
If intent somehow transferred transitively from a programmer to his or her robot, then I might consider robots capable of ethically significant decisions; however, I do not accept such transitivity.
Feel free to disagree with or challenge the X-Phi premises.