AI Ethics

 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 06:06:42  
Before we freak out, the robot was probably being sarcastic (which is remarkable nonetheless) when it said it would put people in a zoo. Here's part of the conversation with it:

YouTube Video Placeholder


I want to use this, though, as a cue for a broader and very interesting ethical problem: our advancement with AI.
As a person in the video suggests: if we make advanced AIs that lack empathy, they might destroy us, as in most of the sci-fi we're all familiar with; but if we do give them complex personalities and the ability to empathize with humans, then people might fall in love with them, which is an ethical problem.

Moreover, we now have technologies that let robots interact with the environment and with other robots, cooperate with each other, gather materials, repair their own parts, and grow all on their own like basic organisms. What they still can't do is self-replicate, which at this point is pretty much the only thing separating them from a lifeform.
The fact that humans can create life is an extremely interesting subject, and I don't mean this from an egocentric perspective, or as detractors would call it, "playing god"; rather, it puts our own existence in a different light.

At least in my humble opinion, these are topics that give everyone material for a very interesting philosophical reflection.

Discuss, or offer further cues on the subject of robotics ethics.
Offline
Posts: 6304
By Ackeron 2015-09-04 06:27:35  
I for one cannot wait for our computerized Overlords.

All praise The Almighty Robot Policeman.
 Bismarck.Dracondria
Offline
Server: Bismarck
Game: FFXI
Posts: 33979
By Bismarck.Dracondria 2015-09-04 06:41:49  
YouTube Video Placeholder
Offline
Posts: 6304
By Ackeron 2015-09-04 06:45:49  
That show got canned =/ It left a lot of questions unanswered, too.

Also, that line "The Almighty Robot Policeman" is actually a reference to RoboCop, since his story is somewhat Jesus-like.
 
By 2015-09-04 06:48:37
Post deleted by User.
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:02:26  
Josiahkf said: »
Has anyone seen the movie Her, with Joaquin Phoenix and Scarlett Johansson?
It poses an interesting theory about this.
Her?
 
By 2015-09-04 07:03:54
Post deleted by User.
Offline
Posts: 1968
By Yatenkou 2015-09-04 07:20:19  
If you program an AI with Asimov's Three Laws of Robotics, then you won't have that kind of problem.
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:22:06  
We're talking about an AI capable of acquiring information and evolving its way of thinking based on it (at incredible speed, no less, so its evolutionary path would be very quick). This means that its morals might diverge along the way.
Offline
Posts: 1968
By Yatenkou 2015-09-04 07:29:16  
Unfortunately, its morals cannot override its programmed laws.

1.) A robot cannot directly or indirectly cause harm to a human. (This means no outright slaughter, and no reprogramming one another to remove these laws.)

2.) A robot MUST obey the commands of a human, as long as they don't conflict with the First Law. (No human ordering a robot to kill a human.)

3.) A robot must ensure its own survival, so long as that doesn't conflict with the First and Second Laws. (If it's being attacked and the only way to survive is to kill a human, it will lie down and die.)

These are airtight and leave no room for misinterpretation. Asimov was a science fiction writer, but these laws are so widely accepted that they have served as the basis of every robot-overlord fiction, where the creators ignored them.

An AI can grow morals just fine; however, if it tries to do anything hostile, its programming will stop it from carrying those intentions out. AI are different from humans: a computer will follow the commands and laws built into its programming.
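For illustration, the three laws above can be read as a prioritized veto over candidate actions. Here's a toy Python sketch (all names are hypothetical, and the boolean predicates are stand-ins; deciding whether a real-world action actually "harms a human" is of course the hard part):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # stand-in: would this act injure a person?
    ordered_by_human: bool = False  # was it commanded by a person?
    self_destructive: bool = False  # would it destroy the robot?

def permitted(action: Action) -> bool:
    """Apply the laws in strict priority order: Law 1 overrides 2 overrides 3."""
    if action.harms_human:          # First Law: never harm a human
        return False
    if action.ordered_by_human:     # Second Law: obey, unless Law 1 vetoed it
        return True
    return not action.self_destructive  # Third Law: self-preservation last

# A human-ordered attack is vetoed by the First Law despite the Second:
print(permitted(Action("strike", harms_human=True, ordered_by_human=True)))  # False
print(permitted(Action("fetch coffee", ordered_by_human=True)))              # True
```

Note this sketch omits the First Law's "through inaction" clause, the kind of edge case Asimov's own stories were built around.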
Offline
Posts: 6304
By Ackeron 2015-09-04 07:31:30  
Honestly I hope they evolve some time in my lifetime.

On the "if they go to war" side: humans can come to terms with each other and have humanity work as one... till we die or they run out of fuel.

On the "if they wish to integrate with society" side: they will face the same troubles as blacks, women, and gays did. So it will pretty much be like watching a long stretch of history repeat itself.
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:32:18  
If you program it, it's going to be a primitive form of AI, which is not what's in question here. The advancement comes from an AI that is capable of learning and processing thoughts all on its own.
Offline
Posts: 1968
By Yatenkou 2015-09-04 07:33:50  
Valefor.Sehachan said: »
If you program it, it's going to be a primitive form of AI, which is not what's in question here. The advancement comes from an AI that is capable of learning and processing thoughts all on its own.
And you can learn and evolve just fine with fail-safes in place so you don't become dangerous.
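The idea of learning freely behind fixed fail-safes can be sketched as action masking: a learned policy scores whatever it likes, but an immutable allow-list filters what can actually execute. A hypothetical toy in Python (all names made up):

```python
import random

SAFE_ACTIONS = {"speak", "move", "recharge"}   # fixed at build time, never learned
ALL_ACTIONS = SAFE_ACTIONS | {"strike"}        # everything the policy may propose

def learned_policy(state: str) -> dict:
    """Stand-in for an evolving model: arbitrary, changing preference scores."""
    rng = random.Random(state)                  # deterministic for the example
    return {a: rng.random() for a in sorted(ALL_ACTIONS)}

def choose(state: str) -> str:
    scores = learned_policy(state)
    # Fail-safe layer: unsafe actions are dropped no matter how highly they score.
    allowed = {a: s for a, s in scores.items() if a in SAFE_ACTIONS}
    return max(allowed, key=allowed.get)

# Whatever the policy "wants", only safe actions ever come out:
print(all(choose(s) in SAFE_ACTIONS for s in ["idle", "threatened", "angry"]))  # True
```

The open question in the thread, of course, is whether a self-modifying AI would leave such a mask intact.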
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:35:13  
Ackeron said: »
Honestly I hope they evolve some time in my lifetime.

On the "if they go to war" side: humans can come to terms with each other and have humanity work as one... till we die or they run out of fuel.

On the "if they wish to integrate with society" side: they will face the same troubles as blacks, women, and gays did. So it will pretty much be like watching a long stretch of history repeat itself.
I think that if left to their own devices, they would develop in much the same parasitic fashion as humans did: acquiring resources indiscriminately, endangering other species, including ours, in order to shape the environment to suit their own needs.
 Bismarck.Dracondria
Offline
Server: Bismarck
Game: FFXI
Posts: 33979
By Bismarck.Dracondria 2015-09-04 07:35:41  
I wouldn't want an AI with no restrictions; that could be very dangerous. I'd rather it be a bit dumber than too smart and Skynet-y.
 
By 2015-09-04 07:37:05
Post deleted by User.
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:38:07  
Bismarck.Dracondria said: »
I wouldn't want an AI with no restrictions; that could be very dangerous. I'd rather it be a bit dumber than too smart and Skynet-y.
And this is another ethical problem.
We want powerful quantum computers that can solve our problems, but as they vastly surpass us in intelligence we would automatically be at their mercy.

Has anyone watched Transcendence? Not the greatest movie, but it makes the point.
 Leviathan.Chaosx
Offline
Server: Leviathan
Game: FFXI
user: ChaosX128
Posts: 20284
By Leviathan.Chaosx 2015-09-04 07:38:51  
Valefor.Sehachan said: »
if we do give them complex personalities and the ability to empathize with humans, then people might fall in love with them, which is an ethical problem.
Why is this an ethical problem and what's wrong with it?
VIP
Offline
Posts: 604
By Terraka 2015-09-04 07:38:55  
Ackeron said: »
Honestly I hope they evolve some time in my lifetime

Well, we already have Roomba and Siri, robotic limbs, and Japan has developed some human-like robots.

Soon, very soon!
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:40:06  
Leviathan.Chaosx said: »
Valefor.Sehachan said: »
if we do give them complex personalities and the ability to empathize with humans, then people might fall in love with them, which is an ethical problem.
Why is this an ethical problem and what's wrong with it?
Not a problem for me, personally. But seeing as we still struggle today to get homosexuality accepted worldwide, surely you can see how this is not a far-off leap.
Offline
Posts: 6304
By Ackeron 2015-09-04 07:41:22  
Josiahkf said: »
But I think intelligence breeds control of emotions, not lack thereof. Evolving past our antiquated emotions would be believable; even if we programmed every facet of the AI, it could likely find ways to completely redesign itself, or just design a copy of itself from scratch with any rules it wants.
The problem then becomes logic vs. morals.

If humans were to control their emotions perfectly, we would lack the capacity for morals.
 Bismarck.Dracondria
Offline
Server: Bismarck
Game: FFXI
Posts: 33979
By Bismarck.Dracondria 2015-09-04 07:42:05  
This would obviously happen if we had properly human-looking androids.

YouTube Video Placeholder
 Valefor.Sehachan
Guide Maker
Offline
Server: Valefor
Game: FFXI
user: Seha
Posts: 24219
By Valefor.Sehachan 2015-09-04 07:43:47  
Unfortunately, we like our individuality, our ability to have feelings for things and situations.

It's an ego problem. Because if we had no emotions, only logic, we would work as a hivemind, which is vastly more efficient. However, it would mean we were no longer individuals, which is something humanity clings to strongly. Cue religions.
 
By 2015-09-04 07:44:27
Post deleted by User.
Offline
Posts: 6304
By Ackeron 2015-09-04 07:44:56  
Terraka said: »
Ackeron said: »
Honestly I hope they evolve some time in my lifetime

Well, we already have Roomba and Siri, robotic limbs, and Japan has developed some human-like robots.

Soon, very soon!
Soon I can make my ideal girlfriend! I'll call her Sarah and have her make me a sandwich.

My life will be complete.
 Leviathan.Chaosx
Offline
Server: Leviathan
Game: FFXI
user: ChaosX128
Posts: 20284
By Leviathan.Chaosx 2015-09-04 07:45:34  
Valefor.Sehachan said: »
Leviathan.Chaosx said: »
Valefor.Sehachan said: »
if we do give them complex personalities and the ability to empathize with humans, then people might fall in love with them, which is an ethical problem.
Why is this an ethical problem and what's wrong with it?
Not a problem for me, personally. But seeing as we still struggle today to get homosexuality accepted worldwide, surely you can see how this is not a far-off leap.
I can definitely see people having a problem with it, but overall I think it's more of a backlash/stigmatization problem than an ethical one. Maybe a slight ethical problem until machines become self-aware.

I think people would be more accepting of a guy and his (female-looking) sex robot than of homosexuality, and definitely more than of transsexuals.

There are already people with realistic-looking (female) sex robots; I remember seeing it in a news story not too long ago.
Offline
Posts: 1968
By Yatenkou 2015-09-04 07:45:43  
My only problem with robotics is whether or not we repeat the many science-fiction dumbasses who made robots without proper fail-safes to prevent violence.

I don't mind robots being programmed at all; I just don't want them to be capable of harming humanity. All it takes is one defective unit to spark bloodshed, and as someone who has taken several programming classes, I cannot stress enough how concrete these three simple laws would be.

Robots can argue; robots can ignore you, hate you, love you; however, they cannot physically hurt you. I don't see any form of ethical issue with this.
 Bismarck.Dracondria
Offline
Server: Bismarck
Game: FFXI
Posts: 33979
By Bismarck.Dracondria 2015-09-04 07:45:47  
Our 'antiquated emotions' are what give us the drive for everything

Inventing new things, helping people, exploration, art

They are also what give us the ability to appreciate these things
 
By 2015-09-04 07:48:21
Post deleted by User.
Offline
Posts: 1968
By Yatenkou 2015-09-04 07:49:40  
Bismarck.Josiahkf said: »
Valefor.Sehachan said: »
Unfortunately, we like our individuality, our ability to have feelings for things and situations.

It's an ego problem. Because if we had no emotions, only logic, we would work as a hivemind, which is vastly more efficient. However, it would mean we were no longer individuals, which is something humanity clings to strongly. Cue religions.
Individuality is a freedom, though, like free speech or having equal rights with all other human beings.

A hive mind destroys your freedoms, and all you become is a cell in an organism (on a larger scale).

It comes down to what level of existence we have, and how much humans were designed to have, or should have.
In terms of people mimicking trends and following people on things like Twitter, can't you already see very slight hints of a hive mind?