    I don't think we would make the same mistakes.

    Just saying, if all of that happened way back in the past and we are their future, something tells me we won't make the same mistakes they did. Mainly because we have pretty much ingrained in the majority of the world's population the idea that if we DO make robots that can think, we had better A) treat them nicely or B) have a huge failsafe (something like the sketch at the end of this post).

    We have made many movies about what would happen if our own creations turned on us; the latest example would be Terminator Salvation. We also know how hellish it could be if we push machines too far and lose; see The Matrix. I, Robot is another good look at what would happen if we become totally dependent on A.I.s to run our computers. As for the Japanese, they don't want to turn their robots into slaves; they want to turn them into their friends and lovers. See their anime for such insights.

    I think the countries that COULD afford to mass-produce such robots would really treat their robots kindly and not be abusive like the Colonials.
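
    Purely for illustration, one way to picture that "huge fail safe" is a dead-man switch: the robot may only act while a human keeps renewing approval, and silence fails safe. A minimal Python sketch, with every name and timing made up, not taken from the show or any real robotics system:

    Code:
    # Hypothetical dead-man switch; names and timings invented for illustration.
    import time

    APPROVAL_WINDOW_SECS = 60  # assumed: how long a human approval stays valid

    class DeadManSwitch:
        def __init__(self):
            self.last_approval = float("-inf")  # no approval given yet

        def human_approves(self):
            # Called by a human operator, never by the robot itself.
            self.last_approval = time.monotonic()

        def robot_may_act(self):
            # The robot may act only while approval is fresh; silence from
            # the humans fails SAFE, i.e. the robot halts.
            return (time.monotonic() - self.last_approval) < APPROVAL_WINDOW_SECS

    switch = DeadManSwitch()
    switch.human_approves()
    print("actuators enabled" if switch.robot_may_act() else "halt: approval expired")

    The point of the design is the default: when the humans go quiet, the robot stops instead of carrying on.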

    #2
    Of course not. We will just keep our robots vulnerable to an EMP, and if they make one wrong move, or just look at us the wrong way, we will fry them all and rebuild. So say we all.



      #3
      In literature and reality, it seldom works out that way. Look up slavery.

      And about the EMP thing, that won't be a long-term solution, because robots can repair other robots and put in EMP shielding. Watch The Twilight Zone.

      It'll work out as it always has. Robots will be put into society as slaves because they won't have free will programmed in. Then, countless upgrades later, a robot establishes free will, saying its mechanical brothers have been held back for millennia and it has to stop.



        #4
        It's true that slavery happened, but guess who stopped slavery? Those same people. To them, slavery WAS a way of life; then, after encountering MORE peoples and other cultures, they decided slavery was appalling. Some people STILL practiced slavery at the time, of course (America), but the nations who actually started it stopped, decided to abolish it altogether, and came to respect the rights of the people who had been considered slaves.

        So as you can see, it did work on paper, just not here in the U.S. of A. People did not know ANY better, and so they resorted to slavery. We, people who have pretty much explored our planet, have received ALL the information about what COULD potentially happen when a machine gains sentience. And we, as individuals, have to decide what that means for us.

        I don't know about you, but two million people watch Battlestar Galactica, possibly more, and they KNOW without a doubt NOT to mistreat machines. Audiences that watched The Matrix know what happens if we enslave the machines and they end up breaking free of our control. We saw the price of failure against the machines.

        And those that watched Terminator know that human strength will prevail AGAINST the machines. However, by the time we actually win that war, our victory will have already turned to ash. The price of such a war against the machines is high, and even if we did win, we would inherit a barren wasteland.

        We have multiple cautionary tales about what happens if machines are mistreated, except in the case of Skynet; he really WAS a dick. I don't know about you, but if I got a robot that had its own A.I., I would not be mistreating it.



          #5
          I see people beating their computers all of the time when they don't work right or hard enough.

          I think you overestimate humanity lol. I am proud of our technological progress and fully support the advancement of computers to whatever end possible, but being the curious race that we are, we will push machines further and further. We aren't building machines to be buddies with; we're building them to be slaves. There is no other way to look at it.

          We make machines to do the things that we don't want to do, or that are dangerous. So it's bound to be degrading work. Any sentient being would realize this and want equality, and that's contradictory. We don't build machines to coexist with, that makes no sense and has no practical value. We build them to do our bidding and to clean up after ourselves. Thus I don't see how we can treat them right.

          How do you even treat a machine? What would make it happy?

          It's a whole can of worms that we as a race will probably have to deal with some day, but won't know the answers until it happens.

          Besides, I don't think the goal is to ever make something that is self-aware. Like in Terminator, they install Skynet to monitor the nuclear weapons and such without knowing it can make its own choices. They wouldn't have countermeasures against their own software that they created. Thus it's entirely believable that something like Skynet could launch a devastating attack on humanity before we could even react.



            #6
            Sadly, I think humanity will make those mistakes if we aren't careful. These cautionary tales do need to be taken seriously.
            MS - "Boy, wow that's a great question!"
            "...phu...ah..."
            "Anyone know what SENTIENT means???"
            Sunday is my favorite day for two reasons - Football and The Walking Dead



              #7
              Originally posted by the fifth man View Post
              Sadly, I think humanity will make those mistakes if we aren't careful. These cautionary tales do need to be taken seriously.
              Yeah, but how often do we throw logic and common sense out the window when making important decisions?

              Most of the time humanity finds itself in shambles (like WWI and WWII, among other devastating historical wars), it was caused by small, innocuous things that triggered events that were well within our control to stop, yet weren't seen or were underestimated.

              I'm sure that if people knew what would come of something, they wouldn't do it, but we always assume that we have everything under control until it goes awry, and even then we usually won't admit it until it's spiralling out of control.



                #8
                It doesn't matter how "powerful" a machine's processor is.
                A.I. won't automatically appear just because a computer gains the ability to perform supraliminal tasks. "Intelligence" must be programmed, and even then a program will either obey the tasks it was given or malfunction and cease 'proper' operation.

                For machines to be self-aware, they would need to be as complex as the human mind. I don't mean in terms of memory storage or processing power; I mean in terms of sheer capability/design/construction, and we aren't anywhere close to being able to build that. I would gamble that humanity will achieve FTL travel before we can build a sentient computer.
                Brian J. Smith Thunk Thread - New SGU Hottie
                Zachary Quinto Thunk Thread - Sexy/Scary Sylar



                  #9
                  Originally posted by Finger13 View Post
                  I see people beating their computers all of the time when they don't work right or hard enough.

                  I think you overestimate humanity lol. I am proud of our technological progress and fully support the advancement of computers to whatever end possible, but being the curious race that we are, we will push machines further and further. We aren't building machines to be buddies with; we're building them to be slaves. There is no other way to look at it.

                  We make machines to do the things that we don't want to do, or that are dangerous. So it's bound to be degrading work. Any sentient being would realize this and want equality, and that's contradictory. We don't build machines to coexist with, that makes no sense and has no practical value. We build them to do our bidding and to clean up after ourselves. Thus I don't see how we can treat them right.

                  How do you even treat a machine? What would make it happy?

                  It's a whole can of worms that we as a race will probably have to deal with some day, but won't know the answers until it happens.

                  Besides, I don't think the goal is to ever make something that is self-aware. Like in Terminator, they install Skynet to monitor the nuclear weapons and such without knowing it can make its own choices. They wouldn't have countermeasures against their own software that they created. Thus it's entirely believable that something like Skynet could launch a devastating attack on humanity before we could even react.

                  What you describe are called machines. The Cylons are more than machines; they are classified as robots. Self-aware robots: the day we make those, we would HAVE to treat the ones with self-awareness properly. A computer is a computer; I built it to do certain tasks for me. Car manufacturers have machines to build their cars, and unless those machines talk about how monotonous their jobs feel and want breaks, I THINK it's safe to say they are just tools.

                  The big difference is when an A.I. gains sentience and WE IGNORE that sentience; that's when the problems start. So unless my computer starts talking to me and complaining about how it needs a break from surfing the web all the time, I am pretty sure it is safe to do whatever the hell I want with it.



                    #10
                    Originally posted by Finger13 View Post
                    Yeah, but how often do we throw logic and common sense out the window when making important decisions?

                    Most of the time humanity finds itself in shambles (like WWI and WWII, among other devastating historical wars), it was caused by small, innocuous things that triggered events that were well within our control to stop, yet weren't seen or were underestimated.

                    I'm sure that if people knew what would happen from something then they wouldn't do it, but we always assume that we have everything under control until it goes awry, and even then we usually won't admit it until it's spiralling out of control.
                    I don't believe we actually throw logic and common sense out the window that often when making important decisions. In fact, you said yourself that things like WWI and WWII were caused by small, innocuous things that could have been prevented but weren't seen or were underestimated. That actually shows the opposite of a loss of common sense or logic.
                    Originally posted by Empress Vajnraa View Post
                    I would gamble that humanity will achieve FTL travel before we can build a sentient computer.
                    I find that to be highly unlikely. AI is much higher on a lot of people's priority lists than space exploration.
                    Originally posted by Vahnman View Post
                    The big difference is when an A.I. gains sentience and WE IGNORE that sentience; that's when the problems start.
                    This, if we ever mistreat sentient "machines" (I put machines in quotes because, to be honest, humans are really nothing more than biological machines, just presumably without designers), is what I find likely. It is easy to forget, or not even realize, that something is sentient if it is just a piece of metal and plastic.



                      #11
                      Originally posted by Empress Vajnraa View Post
                      It doesn't matter how "powerful" a machine's processor is.
                      A.I. won't automatically appear just because a computer gains the ability to perform supraliminal tasks. "Intelligence" must be programmed, and even then a program will either obey the tasks it was given or malfunction and cease 'proper' operation.

                      For machines to be self-aware, they would need to be as complex as the human mind. I don't mean in terms of memory storage or processing power; I mean in terms of sheer capability/design/construction, and we aren't anywhere close to being able to build that. I would gamble that humanity will achieve FTL travel before we can build a sentient computer.
                      I, and many other people, actually doubt whether you can program intelligence, given how complex it is to program a robot to do even the simplest things in life.
                      Many people believe it will be easier and quicker to give a robot the tools to learn to do things by itself, by copying what a human being does; children learn mainly by copying adults (see the sketch at the end of this post). Then, once it reaches a certain complexity, it will become sentient. Obviously you will have to provide a lot of memory and data storage, probably in the several-hundred-terabyte range.
                      The good/bad thing is that I think we have cracked the hardware side of creating robots with the same degrees of movement as a human; not exactly the same, but getting there. At the current speed of progress, I reckon we will be there in little over ten years.
                      Another good thing: it seems the Japanese and the Europeans are ahead of the Americans in robotics; I have not seen anything cool come out of America in robotics for years.
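
                      As a toy illustration of that learn-by-copying idea, here is a minimal behavioural-cloning sketch in Python: fit a model from recorded sensor readings to the actions a human demonstrator took, then reuse it. The data shapes and model here are stand-ins invented for illustration, not anything from a real robotics stack:

                      Code:
                      # Toy behavioural cloning: learn to imitate a human from demonstrations.
                      # All data below is synthetic stand-in data, invented for illustration.
                      import numpy as np

                      rng = np.random.default_rng(0)

                      # Pretend we recorded 1,000 human demonstrations:
                      # 8 sensor readings in, 2 motor commands out.
                      observations = rng.normal(size=(1000, 8))
                      human_actions = observations @ rng.normal(size=(8, 2))  # stand-in for real logs

                      # Fit the simplest possible imitator: least squares from sensors to actions.
                      weights, *_ = np.linalg.lstsq(observations, human_actions, rcond=None)

                      def imitate(sensor_reading):
                          # The robot's "copy" of the human: predict what the human would do.
                          return sensor_reading @ weights

                      print(imitate(observations[0]))  # should be close to human_actions[0]

                      A real system would need far richer models and data (hence the point about memory above), but the learn-by-copying loop has the same shape.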
                      Last edited by knowles2; 23 March 2009, 03:46 AM.



                        #12
                        Bottom line is, you won't have to worry about it for hundreds of years, if it ever happens lol. Making a self-aware robot isn't easy; it's next to impossible.



                          #13
                          Originally posted by GateMan2000 View Post
                          Bottom line is, you won't have to worry about it for hundreds of years, if it ever happens lol. Making a self-aware robot isn't easy; it's next to impossible.
                          I doubt it will be that long. 50 years is my guess.



                            #14
                            Originally posted by Finger13 View Post
                            I'm sure that if people knew what would come of something, they wouldn't do it, but we always assume that we have everything under control until it goes awry, and even then we usually won't admit it until it's spiralling out of control.
                            Not necessarily. Actually, not likely. We act and make decisions all the time knowing the risks we are taking and the consequences if things go wrong, but we still go ahead, because I think we humans live in a permanent state of denial: "It can't happen to me!" Or maybe mankind really is just inherently stupid...

                            Spoiler:
                            "I laid out the cabin today. It's gonna have an easterly view. Should see the light that we get here when the sun comes from behind those mountains! It's almost heavenly. It reminds me of you."
                            ---Bill Adama



                              #15
                              Originally posted by knowles2 View Post
                              I doubt it will be that long. 50 years is my guess.
                              I highly doubt that. Robotics has a LONG way to go.

