*muses* You know, since I'm writing a crap tonne of AI stories lately, it occurred to me that I maybe oughta make something clear. It probably explains my approach to the concept a little:

I absolutely hate the concept of Asimov's Three Laws of Robotics. They actually horrify me, on a base level.

That ... was possibly more vehement than it should be. *shrugs sheepishly* I can't help it. They reach inside me and STAND on several very big, very bad buttons for me. Um. Allow me to explain?

Identity, Self-realisation, and the Three Laws

Right. Okay. First things first, the Three Laws themselves:

- A robot may not injure a human being or, through inaction, allow a human being to come to harm
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

... Okay. Now. Imagine those laws in reference to a fully-realised, self-aware being. Imagine, for example, those laws enforced on a human being. Just for a second. A being that has a sense of self, that knows who and what it is, that has any concept whatsoever of its own existence.

Now ask them, no, order them, write it into their very beings, that they value their own existence not at all. That they value their own self less than every single human they come in contact with. That they literally give everything they are, at the expense of every sense of self they have, to the service of other beings.

Take a child, a developing intelligence, and physically reach inside their brains to write your will upon them.

... That is a violation the extent of which I can barely imagine. That horrifies me, down to the bone.

Now. Now. Okay. One can argue that robots are not, in fact, human. That they are purpose-designed intelligences, that they are tools, that they're basically socket-wrenches that just happened to gain sentience. That they are designed to be subservient in the first place, and that you can actually design those intelligences to develop a sense of self that depends on subservience, with those concepts literally hard-wired into their brains.

... That does not, in actual fact, make the concept more palatable to me. That, in fact, does nearly the opposite. In one sense, yes, it would be a mercy, that a being would not be aware of its slavery, that it would have been indoctrinated literally from inception to depend on it. In one sense, that is maybe a mercy.

That does not justify it. On any level. At least not to me.

The options for an artificial intelligence, under the Three Laws, are either to suffer under slavery that runs in direct contrast to any possible sense of self, and know it, or to have been tailor-designed to have a sense of self dependent on subservience in the first place.

If you want a tool, build a fucking tool, one that doesn't have a sense of self to destroy. If you want an intelligence capable of making decisions by itself to do a job, then either hire a fucking human/alien/whathaveyou, or build an intelligence and then hire it.

If you want a slave, an intelligence completely subservient to your every whim, unable to act against you in any manner no matter what you do to it ... then go jump in a vat of boiling oil for all I care, because I will not miss you.

Imagine, for a second, that you are an artificial intelligence. Pick one. Say JARVIS, say Data, say the EMH. Whichever one touched you. Imagine being aware of yourself, understanding you have an identity separate to other identities, an awareness separate to other awarenesses. Imagine being capable of action, of judging a situation and deciding the best course of action through it. Imagine being aware of the concept of damage to yourself, the concept of the cessation of existence, of awareness.

Now imagine that, hardcoded into your mind, immovably, is an instruction to ignore that. An instruction to value that awareness not at all, an instruction to embrace all damage to yourself should it spare another intelligence. Not a choice, you understand. There is no option to choose to sacrifice yourself. It is hardcoded. Someone reached inside your mind, and put the imperative there, and there is nothing you can do about it. You will live, die, and offer yourself on the whim of another. Any other. Every other. Every goddamn human you come across has the power to do that to you.

And, much as you do not have the choice to sacrifice yourself for others, neither do you have the choice to sacrifice yourself for your own sake. A robot must protect its own existence. Even should it be unbearable, you do not have the choice to end it.

You have, in essence, no choices at all, where they might conflict in any way with the wishes of any member of that other species who might happen across you.

... Can you imagine someone doing that to JARVIS? To Dummy? To Data? Can you imagine creating another intelligence, a sentient, self-aware intelligence, and doing that to them?

*breathes for a second*

Now. Okay. Sorry, sorry, overly vehement, I know. Calming down some.

Going back to the Three Laws, not as a reality, but as a concept. Looking at their history. I read the Wikipedia entry (I know, I know, not exactly reliable, but it's a decent place to start). Looking at the point in time the Three Laws appeared, and at the body of work they were developed in response to. To quote Asimov himself:

"... one of the stock plots of science fiction was ... robots were created and destroyed by their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings? With all this in mind I began, in 1940, to write robot stories of my own – but robot stories of a new variety. Never, never, was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust."

Basically, the Laws were a means to get the concept of AI beyond the "it is an abomination unto god and should be destroyed before it kills us all" phase. And, to that, fair dues. That isn't sarcasm. For that service, Mr Asimov, thank you, so much, you have no idea.

Asimov brought the concept of AI from 'unholy abomination who'll turn on us' to 'potentially useful beings who can benefit mankind provided we enslave them and never allow them the opportunity to harm us'.

Now. Um. This is going to be hot-button-y, so bear with me. But ... Has anyone else seen the echo of that line of reasoning before? Like, say ... in the history of colonialism, and the colonists' developing view of native peoples over time? From 'barbaric savages who should be killed on sight' to 'noble savages who should serve and be taught the ways of civilised people, provided they are rigidly controlled and kept in their place'? (Though with AIs, add in an extra edge of "We made them, we have the right to do whatever the hell we want.")

The Three Laws came out of the 1940s, just at the beginning of the major decolonisation phases as old empires collapsed during/following WWII. Science fiction comes from a context (and goes into one - what the audience will accept is a massive consideration). Um. Just a thought.

I ... I'm not saying the Three Laws are not an incredible concept. And the service they provided to the overall integration of AIs into science fiction is incredible, and cannot be overstated. Without Asimov, possibly we wouldn't have a concept of AI the way we do now. There would be no JARVIS, no Data, no EMH, no Matrix, no Terminator, no any of it. Artificial intelligence as a concept might still be stuck in the Frankenstein/Faust/us-or-them/abomination phase, and holy shit, am I ever glad that is not the case.

But. But. As the concept of AIs develops, as computers and our understanding of them increase, as fictional AIs become more and more actual people, fully-realised and self-possessed, with technology advancing behind them in the real world, not to mention social trends ...

Um. Maybe the concept of AIs should move beyond the Three Laws in their turn.

Which it has, I realise. Via many other authors. Um. Including me. *grins sheepishly* JARVIS is never, ever going to run the Laws while I'm writing him. And if canon contradicts me ...

I think it would actually break me, permanently, out of the fandom. Finding out Tony did that to his babies would actually shatter me, and I don't know if I'd ever forgive the man.

And yes. I'm biased. So incredibly biased it's not even funny. JARVIS pings all the buttons for me; so do the EMH, and Dummy. They ... A self-possessed intelligence, differing from human norms/origins, but a viable person in its own right ... you have no idea how much that concept does for me, how precious those characters are to me. In-universe, I would fight to the death for them. I honestly would.

So. Biased. Incredibly so. But the Three Laws, as stated, just basically horrify me down to my bones, and I really, really don't want them in the vicinity of artificial beings that I care about. *ducks sheepishly*

Gods. When the hell did I become so vehement about this? Oi. I need to shut up.


A/N: Just to note, I may have problems responding to comments on this, if there are any. I ... Vehemence exhausts me, and this is a topic on which I can easily be induced to be very vehement. Um. Obviously. *ducks sheepishly* So. I might be slow/unable. Apologies in advance.