Will computers or machines ever become self-aware or evolve?

We’ve all seen Star Trek or Battlestar Galactica, in which there are androids, human-android hybrids, and robots that think for themselves. Do you think this will ever become a reality?

Wow, big subject. Right up there with the meaning of life and all.

As we decode the human genome and gain a better understanding of how the brain works, it may become possible to mimic the brain’s behavior.

I would think it would be too much for hardware-based systems, but gel-based or biologically based computer systems may be able to accomplish it, though not any time soon.

The fundamental difference between human reasoning (and animal reasoning, as far as we can tell) and machine reasoning is that ours is primarily associative, not logical, which is why humans have to be taught logic.

There are attempts to use neural networks to mimic associative reasoning, but it is difficult to see how existing computers, which are built on logical operations, could develop associative reasoning.
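For what it’s worth, the classic toy model of associative recall is the Hopfield network: a content-addressable memory that retrieves a stored pattern from a partial or corrupted cue instead of looking it up by address. A minimal sketch (illustrative only, not a claim about how the brain does it):

```python
# Toy Hopfield network: an associative (content-addressable) memory.
# Patterns are +1/-1 vectors; recall starts from a corrupted cue and
# settles toward the nearest stored pattern.
import numpy as np

def train(patterns):
    """Hebbian rule: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, max_steps=10):
    """Repeatedly update the state until it stops changing."""
    s = cue.copy()
    for _ in range(max_steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1          # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
cue = stored[0].copy()
cue[:2] *= -1                          # corrupt two entries of the cue
print(recall(W, cue))                  # recovers the stored pattern
```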

Computers are wonderful for doing all the logical reasoning we humans find so difficult, but I doubt they will be able to undertake much associative reasoning, not least because we humans understand so little about how it works and so would find it difficult to build computers that might develop it themselves.

I personally think the first true step will be made when a system (program) is created that can anticipate situations by learning from previous ones, and that can also evaluate its findings with others (human and non-human). In a sense it can then self-evolve by comparing and learning. This could already be something that can be created… Step out of binary: not only 1 or 0, as in true or false, but also a third value, “might be”.
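On the “not only 1 or 0” point: that idea already exists as three-valued logic. A tiny sketch of Kleene’s version, where the third value stands for “might be” (the 0.5 encoding below is just one convention, not the only one):

```python
# Kleene's strong three-valued logic: 0 = false, 1 = true,
# plus a third value U = "unknown" -- the "might be" of the post.
U = 0.5  # conventional encoding: unknown sits between false and true

def k_not(a):
    return 1 - a           # negating the unknown stays unknown

def k_and(a, b):
    return min(a, b)       # false dominates; true AND unknown = unknown

def k_or(a, b):
    return max(a, b)       # true dominates; false OR unknown = unknown

print(k_and(1, U))         # 0.5 -> still unknown
print(k_or(1, U))          # 1   -> true regardless of the unknown
print(k_not(U))            # 0.5 -> unknown
```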

Eventually this could evolve into a primary instinct (very bluntly put, a reaction from memory) along with judgement and diagnosis.
Not talking emotion here, as that gets even more complex… but if a system were left to learn and evolve in its own way (and it had the resources it needs to do so), who knows where that would lead.

But that is getting way too deep on a Thursday night for me… :wink:

Anyway, the self-evolving and self-aware part… I do believe in that, but not in the sense of how we experience it. You’d need to be a computer to know about that :slight_smile:

What is self aware?

We’re getting close to robots that react to their environment and can even change their environment. We’re getting close to robots with massive processing power.

What I can’t imagine happening any time soon is seeing robots or computers establish culture and society. It is possible it could happen someday, but one could argue that humanity is the only species to ever develop culture. I don’t believe that to be an accident or coincidence.

Wow, way to break it down, guys. My opinion is that if this does ever happen, it won’t be for at least another hundred years. I’m not saying that it can’t happen, because hundreds of years ago no one ever thought that people would be flying through the air and landing on the moon. However, our technology isn’t advanced enough yet to create a truly lifelike android such as Data from Star Trek. As John said, we don’t have the neural network capability yet, and we also don’t have the storage capacity required for an android to have sufficient memory. Another area that would have to be developed a lot further is the control systems for an android’s movement, although a few robots from Japan are already starting to mimic human-like movement.

Self-aware is when an entity can think to itself, “Wow, I exist,” and ask itself, “What is my purpose, and why am I here?” Some people would argue that computers aren’t at that level yet.

Not until scientists start realizing the meaning of the eye not being a camera, and the ear not being a microphone, and couple that with inventing binary logical operators that can convey such data into software!

When that hits ‘home’, we may be on the way to something like that. Eventually, I think we would be pretty dumb not to be able to duplicate our own properties and behaviour, since we are, after all, naught but machines ourselves!

Tony. . .

“BNG22908” <BNG22908@no-mx.forums.opensuse.org> wrote in message
news:BNG22908.3cpb60@no-mx.forums.opensuse.org
>
> We’ve all seen Star Trek or Battlestar Galactica, in which there are
> androids, human-android hybrids, and robots that think for themselves.
> Do you think this will ever become a reality?

I recommend Daniel Dennett’s “Sweet Dreams”. It lays out arguments as to why it might even be possible to implement consciousness by technical means; not directly, of course, but by arguing that consciousness is something that can be understood scientifically in its entirety.

I seriously hope robots etc. do NOT develop AI; otherwise Terminator won’t just be an excellent film but reality, and personally I quite like not being dead!

Well, if we ever manage to build a quantum computer, it might be possible to develop a computer that is self-learning and self-repairing, and maybe even capable of producing offspring. I do realize that the term “quantum computer” covers a very wide range of meanings.

regards
dobby9

I’d like to point out that when measuring “intelligence”, it is not fair to compare the human brain to a computer. Compared to a human, computers might as well be savants. They can do seemingly enormous calculations in an instant where it would take a person much longer. The human brain is theorized to have only about 10 TB of “memory”, which may seem like a lot but can easily be surpassed by a computer (and without all that distortion). On the other hand, human brains have billions of “processors” working in parallel, enabling us to do things no computer can do. We can dream, theorize and form personal opinions about things.

For humans, intelligence is based on the hardware. Every brain is wired differently; we all have the same basic functions, but the wiring depends on the person. For a computer, the exact opposite is true: all the hardware is basically the same, and what makes a computer intelligent is the programming behind it.

I want to think as a human I will always triumph over machine, but secretly, I’m a transhumanist and can’t wait for the singularity. :wink:

badger fruit wrote:

>
> I seriously hope robots etc. do NOT develop AI; otherwise Terminator
> won’t just be an excellent film but reality, and personally I quite
> like not being dead!
>
>

It’s happening already. My Roomba “malfunctioned” a while back and started attacking me. Of course, being that it’s a Roomba, its attack method consisted of casually bumping into my ankles from time to time, but the sheer implied aggressiveness was enough to convince me that the war on mankind had already begun. I wasn’t taking any chances, so I was forced to sneak into a local smelting plant and drop it into a vat of molten iron to ensure it was truly destroyed. I’m no fool.

++ for elsewhere, -- for Skynet.

:smiley:

Cheers,
KV

Hehheh, on the Internet nobody knows you are an intelligent program.

Sincerely, suseforumbot-w00f

:smiley:

The first rule of self-awareness is to comprehend your ultimate demise; that is to say, your own vanity. No computer will ever grasp that concept.

Not to really take a position on the question, but just to throw in a little info for the discussion…

The professor I TA for does robotics and AI research, and the other TA is working on his PhD in AI. I was talking to him last spring, and he was bemoaning the fact that he disagreed with a lot of the conventional wisdom concerning AI: that problems are solved by breaking them down into smaller and smaller sub-problems, until finally you reach the point where you can write a fairly simple program to solve each piece, and then building the solution back up. This kind of decomposition is a common tactic in CS and related fields.

Anyway, his theory was that this approach would never truly succeed, especially at helping machines become self-aware. His idea was that machines needed to learn in a way that mimics humans: by forming the associations mentioned in this thread through actually experiencing them.

The problem, of course, is how you accomplish this. Robot day care? Send them to a robot school for 13 years? It becomes kind of silly.

But if the problem could be solved, it might prove more effective than the current methodology.
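To be clear, what follows is not his research, just a toy illustrating the flavour of learning-from-experience he meant: a tabular Q-learning loop in which behaviour emerges from trial, error and reward rather than from hand-decomposed rules. The corridor world and every parameter are invented for the example:

```python
# Learning by experiencing: tabular Q-learning in a 5-cell corridor.
# The agent starts knowing nothing and improves only through trial,
# error and reward -- there are no hand-built rules about the world.
import random

N_STATES, GOAL = 5, 4              # cells 0..4, reward at cell 4
ACTIONS = (-1, +1)                 # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        qs = [Q[(s, a)] for a in ACTIONS]
        if random.random() < eps or qs[0] == qs[1]:
            a = random.choice(ACTIONS)         # explore / break ties
        else:
            a = ACTIONS[qs.index(max(qs))]     # exploit past experience
        s2 = min(max(s + a, 0), N_STATES - 1)  # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After enough "experience" the learned policy is: always step right.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```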

No, and here’s why… (from my blog)

The term ‘Artificial Intelligence’ was first coined by John McCarthy in 1956, but it was 1968 before it caught the public imagination, in Stanley Kubrick’s sci-fi classic ‘2001: A Space Odyssey’. In the film an ‘intelligent’ computer called HAL (the name derived from IBM by shifting each letter back by one) starts to behave abnormally on a trip to Jupiter, killing all but one of the crew.

The surviving crew member has to disconnect the computer in order to regain control of the ship. The computer sings ‘Daisy Bell’ (“Daisy, Daisy / Give me your answer do / I’m half crazy / All for the love of you”) while it is being disconnected. The choice of song was no coincidence: physicist John Larry Kelly, Jr. had synthesized the same song in 1961 using an IBM 704 computer in a landmark speech-synthesis project at Bell Labs.

In 1965 researcher Herbert Simon stated: “Machines will be capable, within twenty years, of doing any work a man can do”. AI pioneer Marvin Minsky added his own prediction: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” The most notable achievement to date came in 1997, when the IBM supercomputer ‘Deep Blue’ beat chess champion Garry Kasparov. But artificial intelligence has never reached the lofty height of a genuinely ‘thinking’ computer.

So why are we no further along after 50 years of research? Part of the problem lies in the fundamental difference between computers and the human brain. The human brain is a massively parallel organ. As you read this, your lungs are delivering oxygen to your bloodstream, your heart is pumping blood around your body, you are digesting your last meal, your liver is filtering your blood, and you are reading and processing this article.

Computers cannot achieve this level of parallelism. The average computer can only do one thing at a time. Say you have a web browser and a music program open at the same time. The computer appears to be handling the two at once, but it does this by spending a very small amount of time on each program and constantly switching back and forth. It creates the illusion of parallelism because the processor can switch between tasks extremely quickly. Some techniques, such as pipelining, achieve a degree of parallelism, but none comes close to the brain.
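The switching trick itself is easy to demonstrate. Here is a sketch using Python generators as cooperatively scheduled “programs”; the names are made up for the illustration:

```python
# The illusion of parallelism via time slicing: one loop (the "CPU")
# advances each program a tiny step at a time, round-robin.
def program(name, steps):
    for i in range(steps):
        yield f"{name}: step {i}"    # one small slice of work

def round_robin(programs):
    """Run each program one step at a time until all have finished."""
    queue = list(programs)
    while queue:
        prog = queue.pop(0)
        try:
            print(next(prog))
            queue.append(prog)       # back of the line for its next slice
        except StopIteration:
            pass                     # this program is done

# "browser" and "music" appear to run at once, but a single loop
# is simply alternating between them very quickly.
round_robin([program("browser", 3), program("music", 3)])
```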

The human brain contains roughly 100 billion (10 to the power of 11) neurons, joined by something on the order of 100 trillion (10 to the power of 14) synapses. The average microprocessor (as of 2006) contained 1.7 billion transistors. There is a massive gulf between the ‘computing’ power of the human brain and the modern-day computer. Until this problem is solved, it is unlikely that computers will be taking over the world any time soon.

No offense, but I disagree. We might never reach a level of true AI (whatever the definition that we happen to agree upon), but I do not think it will be because of what you describe.

While it is true that the human organism (and life in general) is very complex, and while your analogies and comparisons hold up for simple computers, the basis of your argument is rooted in one premise that is simply not true: that computers will never scale, and will never be able to reach any significant level of parallelism. What do you think cluster computing is? Cloud computing? All it will require is a certain amount of scaling and compartmentalization.
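To make that concrete: even on one desktop machine, genuine (not time-sliced) parallelism is routine today. A minimal sketch using Python’s standard multiprocessing module; the workload is a made-up stand-in for any heavy, independent task:

```python
# Unlike time slicing, separate processes really do run at the same
# time on a multi-core machine -- the same kind of scaling a cluster
# extends across many machines.
from multiprocessing import Pool

def crunch(n):
    """Stand-in for a heavy, independent chunk of work."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8
    with Pool() as pool:                   # one worker per CPU core by default
        results = pool.map(crunch, chunks) # chunks run genuinely in parallel
    print(len(results), "chunks done")
```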

Whether we get there or not, I don’t know. But I don’t believe it will be for lack of parallelism or raw number-crunching.

This definitely is a heavy topic. Although I think this is a long way off, or not even possible for that matter, I have to say it is foolish to call something impossible just because we don’t know how it could be accomplished. On the other hand, the mind supposedly “computes” 3,000,000,000,000 (that’s trillion) commands every second. I know we are making strides in software and in faster hardware, but getting a computer to execute that many commands every second of operation pushes the limits of technology. To add to that, the commands change every time we look at something else or think of something else; it is hard enough to compute 3 trillion commands, let alone 3 trillion constantly, dynamically changing ones. And considering the huge amount of heat our body creates doing so (hence sweating from the forehead, which carries heat away from the head), try subjecting a computer to that heat for a sustained amount of time.

While I think we are making strides in the genome projects of today, the human body and mind are unique things, and even armed with such a unique and complex mind, I don’t think it is feasible to create anything like it, or even nearly like it.
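Taking the post’s 3-trillion figure at face value (it is unverified), a quick back-of-envelope comparison; the CPU numbers below are assumptions for illustration, not measurements:

```python
# Back-of-envelope: the post's (unverified) brain figure versus an
# assumed desktop CPU. Both numbers are rough placeholders.
brain_ops = 3e12          # "3 trillion commands per second", per the post
cpu_ops = 4e9 * 4         # assume 4 GHz, retiring ~4 simple ops per cycle
print(f"{brain_ops / cpu_ops:.0f} such CPUs to match the raw count")  # ~188
```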

I think we can create self-aware robots, and to some extent we already have, but ones like humans themselves? I’d have to lean toward not.

That’s just my 2 cents, though. :rolleyes: :wink: :slight_smile:

There is no way you can create self-awareness in a machine without a conscience, and that idea is simply absurd. But go ahead and prove me wrong…show me the code.

Cryovac wrote:

>
> There is no way you can create self-awareness in a machine without a
> conscience, and that idea is simply absurd. But go ahead and prove me
> wrong…show me the code.
>
>
Dave;

We are intelligent, sentient beings. As for humans, the jury is out. They seem to exhibit only greed, lust, envy, gluttony, pride, sloth and wrath.

Hal