TL;DR: A true AI could easily learn all the knowledge humans possess, and retain it indefinitely. And if such an advanced learning AI is programmed to accept problems from humans, what's to say it couldn't eventually learn some pattern in human knowledge and be able to predict, and solve, problems humans haven't thought of yet?
If any of you have played Mass Effect (x360), you'd know that there is a distinct difference between A.I. (true artificial intelligence) and virtual intelligence, which encompasses such areas as speech recognition, organization and presentation of material, etc.
Now, as someone involved in the field of Artificial Intelligence, this does seem to me a logical future for A.I. However, the definition of true A.I. includes some sort of learning algorithm. The absolute limit of this "learning algorithm" would be to encompass the capacity of human intelligence, and maybe beyond.
Think about it. Say we build a robot with that kind of learning capacity. Even if it exceeded its internal storage, wouldn't it just build a better version of itself and continue? We're not there yet, but truthfully, a robot with a true "learning algorithm" could easily learn and retain far more knowledge than we ever could.
And as for reality, I've heard of examples of "grabber" robots with simple learning algorithms that learn to pick up unfamiliar objects by trial and error. Their only practical application so far is loading and unloading a dishwasher, but if a robot can learn to do that, what's to say it can't "learn" (even if by trial and error) anything? Even things we don't know?
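To make the trial-and-error idea concrete, here's a minimal sketch of that kind of learning. Everything in it is made up for illustration: the "object" is simulated, and the hidden ideal grip stands in for the success/failure signal a real grabber robot would get from its sensors, not from a formula.

```python
import random

def attempt_grip(width, force, target=(4.2, 7.5)):
    """Simulated environment: the score is higher the closer the tried
    grip is to the (hidden) ideal grip for this object. A real robot
    would instead observe whether the object was actually picked up."""
    return -((width - target[0]) ** 2 + (force - target[1]) ** 2)

def learn_to_grip(trials=500, seed=0):
    """Pure trial and error: try random grips, remember the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        width = rng.uniform(0.0, 10.0)   # candidate grip width
        force = rng.uniform(0.0, 10.0)   # candidate grip force
        score = attempt_grip(width, force)
        if score > best_score:
            best_params, best_score = (width, force), score
    return best_params, best_score

params, score = learn_to_grip()
print("learned grip:", params)
```

After a few hundred blind trials the remembered best grip lands close to the ideal one, with no knowledge of the object built in beforehand; that's the whole trick those dishwasher-loading grabbers rely on.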
Now, this of course doesn't make them living organisms in the sense we are, but who's to say our own needs are beyond our understanding? If we can understand them, and theoretically we can program computers to learn like humans, who's to say they couldn't learn our needs, predict them, or even solve problems before they occur?
The only real question left is "Are they human?"
Last edited by hinatasoul; August 28th, 2008 at 10:40 AM.
The true problem is: is it what we want? With a being capable of sentience, won't we be dealing with rights, feelings, etc.? After all, if we give it the ability to learn, there's no putting limits on that. "Stealth" shows, and Fifth showed us, what happens to an AI if it gets feelings. Not to mention the problems occurring if the AI gets angry at us. A lot of feelings are influenced by hormones, not thought, and would therefore be inaccessible to it, but what if the AI picked them up as a random event? What if it has the greatest flaw of all, humanity, and becomes a deadly threat? It would be a kid in a computer, capable of terrible, horrible things, not to mention the computing power. A human could never outwit it: it would always be a step ahead.
As to practical execution: make it like the Wraith AI, capable of rewriting itself. Write a basic AI capable of learning, correcting, and altering itself. All you need to do is teach it.
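A toy sketch of that "write a basic AI, then just teach it" loop, under a big assumption: the only part of itself the program is allowed to rewrite is its own rule table (the class and method names here are invented for illustration, not any real system).

```python
class SelfCorrectingBot:
    """Minimal 'learn by being corrected' sketch: wrong or missing
    answers get overwritten by whatever the teacher supplies."""

    def __init__(self):
        self.rules = {}  # the part of itself it may rewrite

    def answer(self, question):
        # Fall back to an admission of ignorance rather than guessing.
        return self.rules.get(question, "I don't know yet.")

    def teach(self, question, correct_answer):
        # "Correcting and altering itself": replace the rule that
        # produced the wrong (or absent) answer.
        self.rules[question] = correct_answer

bot = SelfCorrectingBot()
print(bot.answer("2+2"))   # I don't know yet.
bot.teach("2+2", "4")
print(bot.answer("2+2"))   # 4
```

Of course, overwriting a lookup table is a long way from an AI rewriting its own code, but it shows the shape of the loop: behave, get corrected, alter the part of yourself that was wrong.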
That's why we'd first program the "Three Laws of Robotics" into the robots.
But if it can learn everything, then you can't prohibit it from doing things. It could easily conclude that all humans are destroying themselves by existing, and thus should be destroyed. Seeing as the only way for a computer to truly learn is to let it think illogically, our logic does not apply, and making it apply is too much work.
Even if we create a nearly humanoid robot with the three laws of robotics hard-wired, what's to say he won't create a new version of himself without the three laws, finding them "unnecessary".
We're AI ourselves, with a natural capacity to associate data in different ways, which leads to various results. That whole process is what we call inspiration and imagination.
Thinking we are special is highly ridiculous. We're just very advanced biological, organic computers.
I think an AI would vastly improve existing technologies, and maybe even develop technologies branching from existing ones, but I don't think it would or could come up with completely new technologies. I also don't think it would really progress science, because it wouldn't have the ambition bred into us by billions of years of natural selection. And why do people think robots would want to take over the world? If an AI saw that with our current trends we would destroy ourselves, why would it even care? And if it did, why wouldn't it just wait for us to do it ourselves? What possible reason could it have for taking over the planet? It would gain nothing, and any AI that would judge us for our self-destructive behavior probably wouldn't go on a genocidal rampage. As I said before, AIs wouldn't want to take over the world because they wouldn't have the aggressive ambition that humans have.
I don't think computers will make the human brain obsolete, because if it ever comes to that, we could just use computers and gene manipulation to augment our intelligence to roughly their level. And we don't know how powerful a biological brain can ultimately become. I believe it is just arrogant to assume that within a few thousand years of civilization we could best what it took nature billions of years to create. Most people overlook what an achievement the human form is: a fully functional supercomputer that can learn at astounding rates, self-repair, adapt to an almost limitless number of functions, reproduce itself, evolve, and run on carrots. I think it will be a looooooooooong time before we can build anything like that.
Both will merge. Computers will use living tissues and we'll use computers. Our next evolution may well be entirely engineered.
Well, making AI isn't hard. The problem is: what do we do with it? A true AI will be a living being.
Actually, if you program it to seek things out... it's no longer a "true" AI. As McKay said, "...its consciousness is just a bunch of ones and zeroes!"
A "true" AI would have to be able to acquire knowledge beyond its original program (hence "intelligence"), and yet be created with a set of behavioral patterns (which gives us "artificial", and yet is separate from the concept of a 'program').
Since this is impossible by logical reasoning and the modern science of information, a "true" AI cannot, and will not, come into being. Super-smart computers with the ability to adapt and solve problems, yes - but that is not a "true" AI.
Program it to adapt and rewrite itself.
That's still a program. As in "00111000101011", as in not intelligence. You have to distinguish behavioral patterns from programs.