AI Ethics

By Leviathan.Chaosx 2015-09-04 07:51:16  
Damn, there's a good Futurama video about sex with robots and how mankind does everything to impress the opposite sex. I can't find a copy online, though.
 
By charlo999 2015-09-04 07:52:13  
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.

Incoming - 'insert technology' from 'insert film' has come true.
By Ackeron 2015-09-04 07:53:58  
Leviathan.Chaosx said: »
Damn, there's a good Futurama video about sex with robots and how mankind does everything to impress the opposite sex. I can't find a copy online, though.
I was looking for the same thing; I can't find the full video.
By Valefor.Sehachan 2015-09-04 07:55:06  
charlo999 said: »
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.

Incoming - 'insert technology' from 'insert film' has come true.
I guess you aren't well informed on our current technological development. Like I said, we already have robots that sense, exchange information with each other, and gather and grow, like bacteria or even worms. They just can't self-replicate... yet.
We also have quantum computers that can acquire and process information at insane speeds.
And we know how to program an AI that can evolve its thinking by analyzing the data it receives.

Put them together and you get what we're talking about. But if you're not interested, you're more than free not to read the thread.
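As a toy illustration of what "evolving its thinking by analyzing data" means at the simplest possible level, here's a minimal Python sketch (hypothetical numbers, one-feature perceptron); the point is only that the program's later behavior is shaped by data rather than by new human-written instructions:

# Toy sketch: behavior that changes in response to data, not new
# human-written instructions. A one-feature perceptron nudges its
# own weight after every example it sees.
weight = 0.0
examples = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]  # (input, label)
for x, label in examples:
    prediction = 1 if weight * x > 0 else 0
    weight += 0.1 * (label - prediction) * x  # adjust on error
print(weight)  # no longer 0.0: future decisions differ from the initial ones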
By Yatenkou 2015-09-04 07:55:32  
charlo999 said: »
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.

Incoming - 'insert technology' from 'insert film' has come true.
Unfortunately, with technology becoming more and more advanced, this isn't the case. While they will only have pseudo-emotions, some AIs can make their own decisions. It's our job as humans to make sure those decisions are for the betterment of the planet and mankind, not for reasons that involve killing everything.

Though if a super-advanced AI ended up smashing mailboxes, I'd die happy.
By Valefor.Sehachan 2015-09-04 07:59:10  
What's wrong with mailboxes?
By Bismarck.Dracondria 2015-09-04 08:00:30  
Yatenkou once stubbed his/her/josiah's toe on one and now vows to destroy them all.
By Aeyela 2015-09-04 08:01:23  
Yatenkou said: »
If you program an AI with Asimov's three laws of roboethics, then you won't have that kind of problem.

As great as those rules are, there are two caveats you're neglecting to consider. Firstly, they were written in 1942, way before we were remotely capable of producing artificial intelligence on a scale that these rules would need to safeguard us from. Secondly, the rules were introduced as part of a story whose plot revolved around them, and they have since been taken in a literal context to 'govern' the AI our species produces.

This means that Asimov's laws would undoubtedly be different if they were written in 2010, and that the outcome of said laws played out in a story written by the inventor of those laws, so of course they were followed.

Therefore, to assume that they guarantee our safety from any AI we produce capable of self-awareness, sentience and self-morality is ridiculously naive. There is no such thing as "absolute laws" when you hand something the ability to think for itself.

tl;dr: they're a guideline, not a mandate, and any sentient AI will be capable of deciding not to follow the laws.
By Yatenkou 2015-09-04 08:04:56  
OK then, give me a scenario and I'll show you which law it violates.
By Leviathan.Chaosx 2015-09-04 08:05:07  
charlo999 said: »
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.

Incoming - 'insert technology' from 'insert film' has come true.
Self-aware AI is inevitable.

You should read up on the subject.

We're talking ~14 years before the first forms of it become functional and abundant.
By Fenrir.Atheryn 2015-09-04 08:06:37  
Honestly, I think I'm more concerned about nanotech running amok than I am about AI.

By Yatenkou 2015-09-04 08:06:58  
Aeyela said: »
Yatenkou said: »
If you program an AI with Asimov's three laws of roboethics, then you won't have that kind of problem.

As great as those rules are, there are two caveats you're neglecting to consider. Firstly, they were written in 1942, way before we were remotely capable of producing artificial intelligence on a scale that these rules would need to safeguard us from. Secondly, the rules were introduced as part of a story whose plot revolved around them, and they have since been taken in a literal context to 'govern' the AI our species produces.

This means that Asimov's laws would undoubtedly be different if they were written in 2010, and that the outcome of said laws played out in a story written by the inventor of those laws, so of course they were followed.

Therefore, to assume that they guarantee our safety from any AI we produce capable of self-awareness, sentience and self-morality is ridiculously naive. There is no such thing as "absolute laws" when you hand something the ability to think for itself.

tl;dr: they're a guideline, not a mandate, and any sentient AI will be capable of deciding not to follow the laws.
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
By Valefor.Sehachan 2015-09-04 08:10:59  
Yatenkou said: »
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
Considering the level of lifeform evolution we're talking about, even if you impart those rules (which might not even be possible if you want to create a functional thinking AI), it would perceive them as instinct.
We have instincts too, and we are capable of going against them through deliberate choice.
By Aeyela 2015-09-04 08:13:14  
Yatenkou said: »
OK then, give me a scenario and I'll show you which law it violates.

When a human kills another human, they know they're breaking the law. It doesn't stop people from doing it. Why? Because we're a sentient species, and we have the intelligence to break from the mold of what's "right" or "wrong" because it's not hard-coded into our genetics or personalities.

You can program anything you like into an AI, but the moment it gains sentience, it's no longer under your control. It has the sentience, like we do, to break from the mold of what's "right" or "wrong" as defined by the three laws.

Ergo, an exceptionally smart AI that eventually develops self-awareness might one day decide, using its new sentience, that the laws suck and remove them from its programming. This is what happened in the Terminator films: Skynet became so intelligent it developed sentience, decided "*** humans", and went about exterminating them. The moment the machine develops sentience, which as Chaosx says is inevitable, it can wipe its arse with your laws and throttle you in your sleep, and no amount of bleating "Asimov's Laws! Asimov's Laws!" will save you.

Yatenkou said: »
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.

Putting "the truth is" in front of something doesn't make it true. You're not grasping what sentience actually means. It means you are completely responsible for your actions. A robot with sentience can choose to ignore the laws. Not sure what part of that is tripping you up.
By Valefor.Sehachan 2015-09-04 08:14:53  
That being said, I doubt any robot would ever feel the need to wipe out humanity. They could, however, pursue their own development in a fashion that endangers our survival.
By charlo999 2015-09-04 08:16:30  
Valefor.Sehachan said: »
charlo999 said: »
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed by humans and follow their instructions.
A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real.

Incoming - 'insert technology' from 'insert film' has come true.
I guess you aren't well informed on our current technological development. Like I said, we already have robots that sense, exchange information with each other, and gather and grow, like bacteria or even worms. They just can't self-replicate... yet.
We also have quantum computers that can acquire and process information at insane speeds.
And we know how to program an AI that can evolve its thinking by analyzing the data it receives.

Put them together and you get what we're talking about. But if you're not interested, you're more than free not to read the thread.

Hate to break it to you, but following commands is not intelligence, no matter how advanced the execution of those commands looks.

Analysing data and then executing a command based on the data received is not intelligence either. It's following programming.

Now, if your debate is really about a reaction from bad programming that puts us in danger or goes against our ethics, then fair enough. But you need to rename the OP.
By Valefor.Sehachan 2015-09-04 08:18:31  
I think you're not quite understanding; others have already explained, though, so I don't know what to add.
By Aeyela 2015-09-04 08:19:09  
charlo999 said: »
Hate to break it to you, but following commands is not intelligence, no matter how advanced the execution of those commands looks.

Analysing data and then executing a command based on the data received is not intelligence either. It's following programming.

Now, if your debate is really about a reaction from bad programming that puts us in danger or goes against our ethics, then fair enough. But you need to rename the OP.

Until AI is smart enough to rewrite its programming or introduce new programming, which plenty of them have already done. What then? How do you govern all the potential code it could produce?
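As a deliberately trivial, hypothetical Python illustration of what "introducing new programming" means mechanically (real systems are nothing this simple):

# Toy illustration only: the program builds a new rule as text at
# runtime and executes it, so it now contains logic nobody wrote
# into its source ahead of time.
new_rule = (
    "def is_permitted(action):\n"
    "    return action != 'harm human'\n"
)
namespace = {}
exec(new_rule, namespace)  # compile and load the self-written rule
print(namespace["is_permitted"]("harm human"))  # False
print(namespace["is_permitted"]("fetch mail"))  # True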
By Yatenkou 2015-09-04 08:20:09  
Valefor.Sehachan said: »
Yatenkou said: »
The truth is, however, that even if the guidelines are from that time period, something programmed with them as mandates cannot choose whether or not to ignore them.
Considering the level of lifeform evolution we're talking about, even if you impart those rules (which might not even be possible if you want to create a functional thinking AI), it would perceive them as instinct.
We have instincts too, and we are capable of going against them through deliberate choice.

An AI doesn't have instinct, no matter how advanced it gets, no matter how lifelike it seems. All of its choices are made through a combination of its fake emotions, the conditions of the situation, and what it is and is not allowed to do.

An AI at its core is a computer, and computers do not behave outside of their programming except through human input.

An AI cannot make its own choices; its programming does that for it. You only think it's making its own choices because it looks as if it's thinking things over. A robot programmed to be pacifistic will not murder someone, even if that someone killed the robot's master. It's simple programming.

Can I kill a human?
        |
        v
Does this conflict with the First Law?  --Yes--> operation aborted
        | No
        v
Does this conflict with the Second Law? --Yes--> operation aborted
        | No
        v
Does this conflict with the Third Law?  --Yes--> operation aborted
        | No
        v
Proceed.
        |
        v
End

This is a basic layout for a programming flowchart. It's what a computer does when it processes things in a program.
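Spelled out as code, that gate might look like the following minimal Python sketch. The three conflicts_with_* functions are hypothetical stubs here; actually deciding whether an action conflicts with a law is the genuinely hard part.

def conflicts_with_first_law(action):
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm. (Stub check.)
    return action.get("harms_human", False)

def conflicts_with_second_law(action):
    # Second Law: a robot must obey human orders, except where they
    # conflict with the First Law. (Stub check.)
    return action.get("disobeys_order", False)

def conflicts_with_third_law(action):
    # Third Law: a robot must protect its own existence, as long as
    # that doesn't conflict with the first two laws. (Stub check.)
    return action.get("destroys_self", False)

def can_proceed(action):
    # Walk the flowchart top to bottom; abort on the first conflict.
    for check in (conflicts_with_first_law,
                  conflicts_with_second_law,
                  conflicts_with_third_law):
        if check(action):
            return False  # operation aborted
    return True  # proceed

print(can_proceed({"harms_human": True}))  # False: "Can I kill a human?" aborts
print(can_proceed({}))                     # True: nothing conflicts, proceed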
By Aeyela 2015-09-04 08:21:23  
Yatenkou said: »
An AI doesn't have instinct, no matter how advanced it gets, no matter how lifelike it seems. All of its choices are made through a combination of its fake emotions, the conditions of the situation, and what it is and is not allowed to do.

An AI at its core is a computer, and computers do not behave outside of their programming except through human input.

An AI cannot make its own choices; its programming does that for it. You only think it's making its own choices because it looks as if it's thinking things over. A robot programmed to be pacifistic will not murder someone, even if that someone killed the robot's master. It's simple programming.

Can I kill a human?
        |
        v
Does this conflict with the First Law?  --Yes--> operation aborted
        | No
        v
Does this conflict with the Second Law? --Yes--> operation aborted
        | No
        v
Does this conflict with the Third Law?  --Yes--> operation aborted
        | No
        v
Proceed.
        |
        v
End

This is a basic layout for a programming flowchart. It's what a computer does when it processes things in a program.

This is the classic human arrogance that caused Judgement Day. There are already AIs out there that have produced or modified lines of code in their source. Google's search spider is one example that you can find plenty of literature about online. It's not a physical walking or talking robot, but it's capable of modifying its code based on the interactions it makes on the net. In some of those situations, there is nothing in its source to account for this behaviour. Look it up online; you might find it a fascinating read.
By Valefor.Sehachan 2015-09-04 08:22:06  
No Yatenkou, that isn't the advanced level of AI we're talking about.

The moment you impart orders to it, it's much more primitive and there isn't even anything to consider. But now computers are becoming capable of developing knowledge and acting on it.
By Yatenkou 2015-09-04 08:23:36  
No, the classic human arrogance is thinking it'll be all fine and dandy not to cover their own *** when more advanced artificial intelligences start to come into existence.

How can anyone not understand that, even though they seem peaceful, you need INSURANCE to make sure one will never hurt someone?

This is what a company will do if it ever releases one for everyday life. It's not going to release something that could get pissed off and kill someone. No, that won't ever happen, because it would be held liable.
By Ackeron 2015-09-04 08:24:23  
You also run into problems like: if I don't kill human A, he can kill humans B and C. Both action and inaction would be a violation of the laws. Logic loop!

Seriously, did no one here see I, Robot?
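In terms of the law-gate sketch from earlier (same hypothetical stubs), the dilemma means every available option trips the First Law check, so no branch of the flowchart is ever permitted:

# Extending the earlier sketch: harming a human directly and allowing
# harm through inaction both count as First Law violations, so every
# option in this scenario is rejected.
def violates_first_law(option):
    return option["harms_human"] or option["inaction_allows_harm"]

options = [
    {"name": "kill human A", "harms_human": True,  "inaction_allows_harm": False},
    {"name": "do nothing",   "harms_human": False, "inaction_allows_harm": True},
]

permitted = [o["name"] for o in options if not violates_first_law(o)]
print(permitted)  # []: no lawful action exists; the gate deadlocks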
By Valefor.Sehachan 2015-09-04 08:25:29  
iRobot, soon in all Apple stores.
By Yatenkou 2015-09-04 08:25:30  
Valefor.Sehachan said: »
No Yatenkou, that isn't the advanced level of AI we're talking about.

The moment you impart orders to it, it's much more primitive and there isn't even anything to consider. But now computers are becoming capable of developing knowledge and acting on it.

Any level of AI is the same thing at its core.
By Yatenkou 2015-09-04 08:26:13  
Ackeron said: »
You also run into problems like: if I don't kill human A, he can kill humans B and C. Both action and inaction would be a violation of the laws. Logic loop!

Seriously, did no one here see I, Robot?
Incorrect; there are non-lethal means of upholding the laws.