Tuesday, May 31, 2016

10 Questions about Conscious Machines

A FEE.org article by Max Borders:

In the past year or so, there have been a lot of films about artificial intelligence: Her, Chappie, and now there’s Ex Machina.

These films are good for us.

They stretch our thinking. They prompt us to ask serious questions about the possibility and prospects of conscious machines — the answers to which may be needed if we must someday co-exist with newly-sentient beings. Some of them may sound far out, but they force us to think critically about some important principles for the coming age of AI.

Ten come to mind.

  1. Can conscious awareness arise from causal-physical stuff — like that assembled (or grown) in a laboratory — to make a sentient being?
  2. If such beings become conscious, aware, and have volition, does that mean they could experience pain, pleasure, and emotion too?
  3. If these beings have human-like emotions, as well as volition, does that mean they are owed humane and ethical treatment?
  4. If these beings ought to be treated humanely and ethically, does that also confer certain rights upon them — and are they equal to the rights that humans have come to expect from each other? Does the comparison even make sense?
  5. If these beings have rights, is it wrong to program them for the specific task of serving us? What if they derive pleasure from serving us, or are programmed to do so?
  6. If these beings have rights by virtue of their consciousness and volition, does that offer the philosophical basis of rights in general?
  7. If these beings do not have rights that people need to respect, could anything at all grant rights to them?
  8. If these beings have nothing that grants them ethical treatment or rights, what makes humans distinct in this respect?
  9. If we were able to combine human intelligence with AI — a hybrid, if you will, in which the brain was a mix of biological material and sophisticated circuitry — what would be the ethical/legal status of this being?
  10. If it turns out that humans are not distinct in any meaningful sense from robots, at least in terms of justifying rights, does that mean that rights are a social construct?

These questions might make some people uncomfortable. They should. I merely raise them; I do not purport to answer them here.

[source]

Good questions, especially the first. That's the question no scientist has yet solved. Maybe we should first determine whether androids have any kind of consciousness at all, even the kind we attribute to dogs, cats, or dolphins. In Isaac Asimov's novelette The Bicentennial Man, the android Andrew was granted human status once he was made mortal. If humanity defines an android as a human being, does that mean he can vote in elections? Serve on a jury? Own property? I guess what I am asking is: does the Bill of Rights, or the Declaration of Independence's promise of "life, liberty and the pursuit of happiness" (whatever happiness means to an android), pertain to androids? Also, can an android be convicted of a crime in a court of law? Or even be sued? Something to think about.

Data in the Star Trek: The Next Generation TV series and Dorian in the Almost Human TV series were treated, for the most part, as human. In the Next Generation episode "The Measure of a Man," Data fights for his right to self-determination so that he will not be declared the property of Starfleet and disassembled in the name of science.

Maybe in the future there will be an advocacy group for androids like PETA, except in this case PETA would stand for People for the Ethical Treatment of Androids. Who knows.
