AI - Sam Says Be Afraid

Comments

Ozmen:

There were reasonable people who believed the first nuclear tests would ignite the atmosphere. Just because some smart people think AI will fuck us up completely doesn't mean they know what they're talking about.


Unfortunately, Harris's first assumption really is an assumption. It does seem that intelligence arises from matter: there is no evidence for a soul, a spirit, platonic ideals, or any other non-material component of our intelligence. Yet outside of biological processes, we cannot arrange a string of atoms that actually thinks rather than merely imitating parts of intelligence. Could we one day build an actual intelligence from bare atoms? Maybe. It seems like we could, but we're not there yet.

The second assumption is true only if the conditions necessary for continued development persist unabated, which isn't guaranteed at this point. We're still not properly prepared for climate change or for the resource wars already in progress because of it.

The third assumption, the intelligence difference, is repetition within the same talk. Anyhow, it's always assumed that human intelligence will stand still while AI intelligence advances, even though we seem to be a few discoveries away from becoming masters of our own genes, and mere years away from true man/machine interfaces incorporated into our physical being. Why would human intelligence remain static compared to the AI's? Because we mandated it so? Why? Because of human factors? So AI will become a danger to us because we humans fuck up our own things?

Time will tell what will be. A second renaissance with our children, or for them.

backdraft:

Sure, we will make superintelligent machines, but why always assume that in the process they will become super evil and press on with the sole purpose of getting rid of mankind? I think that's just projection. It's the same with aliens: they're always out to destroy us.

I guess AI taking over the world is a possibility IF we hand it the keys to do so, but I don't see that happening. Everyone's seen Terminator, right?

Ozmen:

It does seem like projection in most cases. Naturally there's a valid argument from unintended results: through human practices like wealth disparity and dominance games, we might create an AI that is a carbon copy of us and our neuroses. But is that intended, or accidental, as with a 'problem child'? And do all 'problem children' misbehave, or does it take the right kind of environment for their problems to turn into socially unacceptable behaviour? Would the same behavioural assumptions even apply to an AI that experiences things much faster than we do?

backdraft:

The way things seem to be progressing now is through machine learning: we build the framework, and the AI creates itself from there. If the goal is a super intelligent and efficient machine, any "human neuroses" would only limit it and would be weeded out quickly. Unintended results are always a problem, though. It's already happening with YouTube's algorithms: they are self-learning, and even YouTube doesn't really know what's going on under the hood.

Add some "prime directives" like in Robocop. Give it as much freedom as possible, but with a few basic instructions that cannot be violated.

Don't kill

Don't take over the world

Don't do it if it inhibits human progress

Ozmen:

All it might take is a single directive: 'Try not to be a dick.'

skeptoid:

When people talk about artificial intelligence, it sounds to me like they're thinking about artificial consciousness, but we certainly don't know enough about either to know whether they are separate ideas. We may create an artificial intelligence that can do 20,000 years of human research in a week, but will it even have a consciousness with which to conceptualize what it is doing? Or will humans be the only ones applying actual meaning to whatever problems the millions-of-times-more-intelligent-than-us computer is solving?

Ozmen:

Supposedly an AI came up with an answer to a math problem we've been trying to solve for a century or two. Supposedly, because we can't make heads or tails of the answer. Is it correct? Maybe. Is it wrong? Maybe. Is it total gibberish? Maybe.

If we create a superintelligent consciousness many magnitudes 'better' than we are, then there is no guarantee that a) it remains interested in the physical world in any manner familiar to us, or b) we and it share any common language, unless it decides to try to decode our simplistic animal grunts and appendage flailings as 'language' and use them to communicate with us. And that assumes it's been developed and programmed with the potential to detect us at all.

But more than likely we'll just create seeming superintelligences that do bad stuff only due to programming errors. Otherwise they'll produce legible science or life-easing solutions, because they've been programmed to do repetitive tests, information gathering, and projections of potentialities far faster than we ever could with our easily distracted meat-processors.
