
Could A.I. really attain Scientific Progress?


    #16
    Ever heard of the singularity?

    http://en.wikipedia.org/wiki/The_singularity

    Anyway... once computers reach the same level of processing capability as a human brain, and if programmed to emulate biological thought processes (we are getting close to that) and given the right data, I think they could improve on and surpass the human ability to think.

    Comment


      #17
      Originally posted by Mister Oragahn View Post
      Just program it to look for new things, and associate elements which don't look like they can be associated at first hand. A good paper on what imagination is about would help, but I'm sure a complex AI will do that. We're nothing more than AI as well, but in meatbags instead of in metal towers.
      In principle the development of a true A.I. should be possible, and IMO you're right. There is nothing in our brains that cannot be replicated; the problem is that we might well not be clever enough as a species to develop the kind of software that would be necessary to run an A.I. Pretty much all current AI research is done in specific fields like automated timetabling, game playing or glorified bin packing.

      Originally posted by thekillman View Post
      Look, what you need is the internet. With it, along with agents [information-collecting programs], programs which analyse this info, and programs which then steer the other programs, you could easily make an AI, as long as it is programmed to learn and advance.
      I don't think you quite get how complex AI programming is. Saying "so long as it's programmed to learn and advance" is like saying "Oh yeah, and could you whip up a cold fusion reactor whilst you're at it?" In fact, we're probably closer to cold fusion than we are to a true A.I.

      I'm sorry if I sound really negative, but AI is a field I'm looking to enter as a programmer (computer science student atm), and people's perceptions of AI really don't map 1:1 with reality.

      Originally posted by AvatarIII View Post
      Ever heard of the singularity?

      http://en.wikipedia.org/wiki/The_singularity

      Anyway... once computers reach the same level of processing capability as a human brain, and if programmed to emulate biological thought processes (we are getting close to that) and given the right data, I think they could improve on and surpass the human ability to think.
      The technological singularity is an interesting hypothesis... even if I were to buy into it, all it means is that we have the potential hardware available. I mean, say we were to emulate an adult human brain: what would happen when we turned it on? We'd have a braindead human emulation. We'd either have to emulate a child's brain and somehow give it the ability to 'grow' new neurons by gradually exposing potential neurons to it when it requires them, or 'insert' memories and knowledge directly into the AI. But to do either of those we'd need to know exactly how a human brain works, something we are (AFAIK) quite a far cry from. Otherwise we'd have the equivalent of a mentally handicapped brain, with too few neurons, neurons not communicating properly, not being purged properly or not making new connections properly.

      It makes my brain hurt just thinking about how complex an endeavor it would be. Not that I'm saying it wouldn't be interesting, it really would, but it's substantially more complex than just hoping that the technological singularity will fix all our AI woes.

      Comment


        #18
        Originally posted by AvatarIII View Post
        Ever heard of the singularity?

        http://en.wikipedia.org/wiki/The_singularity

        Anyway... once computers reach the same level of processing capability as a human brain, and if programmed to emulate biological thought processes (we are getting close to that) and given the right data, I think they could improve on and surpass the human ability to think.
        I think not.

        An AI could certainly emulate the scientific method: analyze data, compare, draw conclusions, etc. It would technically be capable of gradual, methodical scientific progress. The thing is, however, that we don't actually think that way. We are smarter than any possible AI because we are LESS rational, not more. We engage in irrationality, gratuitous randomness, associative thinking, obsessing over dead ends and all sorts of senseless tinkering, and that is what allows us to sometimes advance in quantum leaps rather than in small gradual steps. It is also what allows us to work on one thing and end up inventing something completely different from the original goal.

        Moreover, our need to advance is driven by our existential needs, desires and fears. As our needs, desires and fears change, so do our thinking patterns. An AI has no such needs other than securing a steady power supply to its hardware and pursuing an unchangeable pre-programmed set of goals. The ability to solve problems alone does not progress make.

        Comment


          #19
          AI has everything to do with illogical thinking, so programming something to think illogically would make a robot capable of learning. How? Take a robot. Program it to go north until it cannot go further, and then go east until it cannot go further. Then, if it cannot go further north either, it has reached its destination. A logically thinking robot will do this over and over again in a square room. An illogically thinking robot will discover that going north-east is actually much faster, and will keep doing that instead. If every computer with internet access ran an info-gathering program and an interaction program to interact with other computers running the same programs, and they were all programmed to think illogically, and then commanded to make their interaction as fast and efficient as possible, then eventually you would wind up with an AI capable of using terabytes of calculating power.
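The square-room example above can be sketched as a toy simulation (purely illustrative; the function names and the 50% chance of an "illogical" diagonal step are invented for this post):

```python
import random

def methodical_walk(width, height):
    """The strictly 'logical' strategy: go north until blocked,
    then east until blocked; return the number of moves used."""
    x, y, moves = 0, 0, 0
    while y < height - 1:          # north leg
        y += 1
        moves += 1
    while x < width - 1:           # east leg
        x += 1
        moves += 1
    return moves

def exploring_walk(width, height, seed=0):
    """A walker with a random ('illogical') element: it sometimes tries a
    diagonal north-east step, which makes progress on both axes at once."""
    rng = random.Random(seed)
    x, y, moves = 0, 0, 0
    while x < width - 1 or y < height - 1:
        if x < width - 1 and y < height - 1 and rng.random() < 0.5:
            x += 1
            y += 1                 # diagonal step found by 'accident'
        elif y < height - 1:
            y += 1
        else:
            x += 1
        moves += 1
    return moves
```

In a 10x10 room the methodical walk always takes 18 moves; every diagonal step the explorer stumbles onto replaces two moves with one, so its total shrinks toward the 9-move diagonal path.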

          Comment


            #20
            Originally posted by Gibsnag View Post
            In principle the development of a true A.I. should be possible, and IMO you're right. There is nothing in our brains that cannot be replicated; the problem is that we might well not be clever enough as a species to develop the kind of software that would be necessary to run an A.I. Pretty much all current AI research is done in specific fields like automated timetabling, game playing or glorified bin packing.



            I don't think you quite get how complex AI programming is. Saying "so long as it's programmed to learn and advance" is like saying "Oh yeah, and could you whip up a cold fusion reactor whilst you're at it?" In fact, we're probably closer to cold fusion than we are to a true A.I.

            I'm sorry if I sound really negative, but AI is a field I'm looking to enter as a programmer (computer science student atm), and people's perceptions of AI really don't map 1:1 with reality.



            The technological singularity is an interesting hypothesis... even if I were to buy into it, all it means is that we have the potential hardware available. I mean, say we were to emulate an adult human brain: what would happen when we turned it on? We'd have a braindead human emulation. We'd either have to emulate a child's brain and somehow give it the ability to 'grow' new neurons by gradually exposing potential neurons to it when it requires them, or 'insert' memories and knowledge directly into the AI. But to do either of those we'd need to know exactly how a human brain works, something we are (AFAIK) quite a far cry from. Otherwise we'd have the equivalent of a mentally handicapped brain, with too few neurons, neurons not communicating properly, not being purged properly or not making new connections properly.

            It makes my brain hurt just thinking about how complex an endeavor it would be. Not that I'm saying it wouldn't be interesting, it really would, but it's substantially more complex than just hoping that the technological singularity will fix all our AI woes.
            The highest intellect we see in nature is in primates, and they're nowhere near true intelligence. Parrots can get up to a six-year-old's vocabulary, which corresponds to a rather mildly mentally disabled adult. Verbal behavior and language are crucial to intelligent evolution. Monkeys can use all the stone tools they want, but until they come up with a way to communicate complex ideas they're stuck flinging their poop at each other. Complex language leads to more complex thoughts, which in turn further complicates language, and on and on.

            Comment


              #21
              Originally posted by Womble View Post
              I think not.

              An AI could certainly emulate the scientific method: analyze data, compare, draw conclusions, etc. It would technically be capable of gradual, methodical scientific progress. The thing is, however, that we don't actually think that way. We are smarter than any possible AI because we are LESS rational, not more. We engage in irrationality, gratuitous randomness, associative thinking, obsessing over dead ends and all sorts of senseless tinkering, and that is what allows us to sometimes advance in quantum leaps rather than in small gradual steps. It is also what allows us to work on one thing and end up inventing something completely different from the original goal.

              Moreover, our need to advance is driven by our existential needs, desires and fears. As our needs, desires and fears change, so do our thinking patterns. An AI has no such needs other than securing a steady power supply to its hardware and pursuing an unchangeable pre-programmed set of goals. The ability to solve problems alone does not progress make.
              All of which we can identify, and then just have to program.

              Comment


                #22
                TLDR: A true AI could easily learn all the knowledge humans know, and be able to retain it infinitely longer. And, if such an advanced learning AI is programmed to accept problems from humans, what's to say that eventually it couldn't learn some sort of pattern of human knowledge and be able to predict, and solve, problems humans haven't thought of yet?

                If any of you have played Mass Effect (x360), you'd know that there is a distinct difference between A.I. (true artificial intelligence) and virtual intelligence, which encompasses such areas as speech recognition, organization and presentation of material, etc.

                Now, as someone involved in the field of artificial intelligence, I'd say this does seem a logical future for A.I. However, the definition of true A.I. includes some sort of learning algorithm. The absolute limit of this "learning algorithm" would be to encompass the capacity of human intelligence, and maybe beyond.

                Think about it. Say we build a robot that has such learning capacity. Even if he exceeded his internal storage, wouldn't he just build a better version of himself and continue? We're not there yet, but truthfully, a robot with a true "learning algorithm" could easily learn and retain way more knowledge than we ever could.

                And as for reality, I've heard of examples of "grabber" robots with simple learning algorithms that can learn to pick up strange objects by trial-and-error. Their only practical application is loading and unloading a dishwasher, but if a robot can learn to do that, what's to say they can't "learn" (even if by trial and error) anything? Even things we don't know?
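That kind of trial-and-error learning can be sketched as a random-search loop (a minimal illustration, not a real grasping algorithm; the `grasp` function and its sweet spot at 0.7 are invented for the example):

```python
import random

def learn_by_trial_and_error(try_action, n_trials=200, seed=42):
    """Toy trial-and-error learner: propose random parameters, keep the
    best one seen. `try_action` returns a score; the learner knows
    nothing about why a trial succeeded or failed."""
    rng = random.Random(seed)
    best_param, best_score = None, float("-inf")
    for _ in range(n_trials):
        param = rng.uniform(0.0, 1.0)   # blind guess at a grip setting
        score = try_action(param)
        if score > best_score:          # remember whatever worked best
            best_param, best_score = param, score
    return best_param, best_score

# Hidden 'physics' of a grasp: success peaks at grip = 0.7 (made up).
def grasp(grip):
    return 1.0 - abs(grip - 0.7)

best, score = learn_by_trial_and_error(grasp)
```

After a couple of hundred blind trials the learner homes in on a good grip without ever being told where the peak is, which is the essence of the grabber-robot anecdote.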

                Now, this of course doesn't make them living organisms in the sense we are, but who's to say our own needs are beyond our understanding? If we can understand them, and theoretically we can program computers to learn like humans, who's to say they couldn't learn our needs, predict them, or even solve problems before they occur?

                The only real question left is "Are they human?"
                Last edited by hinatasoul; 28 August 2008, 10:40 AM.

                Comment


                  #23
                  The true problem is: is it what we want? With a being capable of sentience, won't we be dealing with rights, feelings, etc.? After all, if we give it the ability to learn, there's no putting limits on that. As "Stealth" shows, and as Fifth showed us, there's what happens to an AI if it gets feelings. Not to mention the problems occurring if the AI gets angry at us. A lot of feelings are influenced by hormones, not thought, and would therefore be inaccessible. But what if the AI picked them up as a random event? What if it has the greatest flaw of all, humanity, and becomes a deadly threat? It would be a kid in a computer, capable of terrible, horrible things, not to mention the computing power. A human could never outwit it: it would always be a step ahead.


                  As to practical execution: make it like the Wraith AI, capable of rewriting itself. Write a basic AI capable of learning, correcting and altering itself. All you need to do is teach it.

                  Comment


                    #24
                    That's why we first program the "Three Laws of Robotics" into the robots.
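A minimal sketch of what "programming the laws in first" could look like: every proposed action is checked against the laws in strict priority order before it is executed (the rule names and the flag scheme here are entirely hypothetical):

```python
# Hard-coded safety rules, highest priority first (illustrative only).
LAWS = [
    ("harm_human",       "an action that would injure a human"),
    ("disobey_order",    "an action that ignores a human order"),
    ("self_destruction", "an action that needlessly damages the robot"),
]

def action_permitted(violated_flags):
    """Return (allowed, reason). `violated_flags` is a set of rule names
    the proposed action would break; the first law hit takes precedence."""
    for law, description in LAWS:
        if law in violated_flags:
            return False, description
    return True, "no law violated"
```

With a check like this, an action that breaks both the first and third laws is rejected for the first-law reason, mirroring the strict precedence among the laws.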

                    Comment


                      #25
                      But if it can learn everything, then you can't prohibit it from doing things. It could easily think all humans are destroying themselves by existing, and thus should be destroyed. Seeing as the only way for a computer to learn is to make it think illogically, our logic does not apply, and making it apply is too much work.

                      Comment


                        #26
                        Originally posted by thekillman View Post
                        But if it can learn everything, then you can't prohibit it from doing things. It could easily think all humans are destroying themselves by existing, and thus should be destroyed. Seeing as the only way for a computer to learn is to make it think illogically, our logic does not apply, and making it apply is too much work.
                        ... as is the problem in Mass Effect, which I referenced earlier. The Reapers are an advanced rogue A.I. that just seeds civilization across the galaxy and destroys it later. Although in short form that doesn't sound nearly as good as it does in the game...

                        Even if we create a nearly humanoid robot with the three laws of robotics hard-wired, what's to say he won't create a new version of himself without the three laws, finding them "unnecessary"?

                        Comment


                          #27
                          Or "illogical".

                          Comment


                            #28
                            We're AI, with a capacity to try and associate data in different ways at a natural level, which leads to various results. This whole thing is called inspiration and imagination.

                            Thinking we are special is highly ridiculous. We're just very advanced biological and organic computers.

                            Comment


                              #29
                              I think an AI would vastly improve existing technologies, and maybe even develop technologies branching from existing ones, but I don't think it would or could come up with completely new technologies. I don't think it would really progress science either, because it wouldn't have the ambition bred into us by billions of years of natural selection. Also, why do people think robots would want to take over the world? If an AI saw that with our current trends we would destroy ourselves, why would it even care? And if it did, why wouldn't it wait for us to do it ourselves? What possible reason could it have for taking over the planet? It would gain nothing, and any AI that would judge us for our self-destructive behavior probably wouldn't go on a genocidal rampage. As I said before, AIs wouldn't want to take over the world because they wouldn't have the aggressive ambition that humans have.

                              Comment


                                #30
                                I don't think computers will make the human brain obsolete, because if it ever comes to that we could just use computers and gene manipulation to augment our intelligence to roughly the same level as theirs, and we don't know how powerful a biological brain can ultimately become. I believe it is just arrogant to assume that within a few thousand years of civilization we could best what it took nature billions of years to create. Most people overlook what an achievement the human form is: a fully functional supercomputer that can learn at astounding rates, self-repair, adapt to an almost limitless number of functions, reproduce itself, evolve, and run on carrots. I think it will be a looooooooooong time before we can build anything like that.

                                Comment
