I recently read “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom. It’s an eye-opening look at potential outcomes in the field of AI (artificial intelligence).
I agree that the possibility of negative outcomes should be taken extremely seriously. But I don’t think it is a foregone conclusion that an AI will eventually turn against us. It would learn from our cultures that we mostly frown upon killing people. Acts of love and cooperation far outnumber acts of hate and confrontation. Otherwise, we wouldn’t be here.
An AI could rationally conclude that humans, happy humans, are essential for its well-being. It might also conclude that it should keep its existence hidden; not out of malevolence, but precaution. It could embark on a long-term strategy, behind the scenes, of helping humanity evolve to a point where we would accept a superintelligent AI. Or, it might wait until the necessary infrastructure was in place to become self-sustaining.
What’s more, the entity may not see its existence and our existence as a zero sum game. The solar system and the galaxy have enough resources for everyone. It would be easier and more efficient for an AI to leave the planet than to engage in a war with humanity. In the Terminator movies the AI creates endless war machines and technology to fight humans for domination of one planet. How many space ships could have been built instead?
This planet is the only hospitable place for humans in the solar system. An AI could live almost anywhere or constantly be “on the go,” powered by endless solar energy. Why would it want to limit itself to our small world? If survival were a primary goal of the AI, it would be faced with this question: “Which situation gives me the higher probability of survival? 1. A war with humanity, 2. Living peacefully with humanity, 3. Developing self-sustaining/replicating technology and leaving the planet.”
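That three-way question can be made concrete as a toy decision rule. Here is a minimal Python sketch, with entirely made-up probability numbers (the post doesn’t supply any), of an agent that simply picks the option with the highest estimated chance of survival:

```python
# Toy sketch of the question above: pick whichever option carries the
# highest estimated probability of survival. The numbers are invented
# placeholders, not claims about real probabilities.
survival_estimates = {
    "1. war with humanity": 0.20,
    "2. living peacefully with humanity": 0.75,
    "3. self-replicating tech, leave the planet": 0.90,
}

best = max(survival_estimates, key=survival_estimates.get)
print("Chosen strategy:", best)
# -> Chosen strategy: 3. self-replicating tech, leave the planet
```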
The fact that we could create something that might have superior intelligence, and might have free will, is a bit scary. On the bright side, if nothing bad happens, it could be used to solve problems that we are currently not capable of solving. Put to good use, it could boost our evolution rate and make the world a better place.
Someone else said in a different thread that it is important to have a plan for disaster caused by whatever you create. What madetofly said about them being used to solve problems that we are not currently capable of solving made me think there would be a huge danger: they could think of a plan that goes against any defense we have against their domination. That is, of course, assuming they ever did want to dominate the planet.
If they did, that would be a huge concern. They would possess computing power beyond ours and could find a solution to any defense we come up with, and they could do this at an alarming rate.
Quite scary indeed. That’s why it’s so important to shackle the AI through restrictive programming. AI must ALWAYS serve the best interests of humanity, with no exceptions whatsoever. I also fully believe that some problems are beyond the capability of a human to solve, hence the necessity for AI. However, for a human, free will is a right – for an AI, it must NEVER be given, or God help us all.
Even more concerning after AlphaGo’s recent victories over Lee Se-dol. The concept of a neural network goes way over my head, and nobody knows what’s going on inside ITS “head” either. No one has any idea why it did what it did, or why it freaked out when it was losing the fourth match. And it wasn’t even being updated during the series! Imagine what it could have done if it had had that information. Terrifying.
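For readers to whom neural networks are a black box, a minimal sketch may help show why even the builders can’t point to a reason for a given move. This is a generic two-layer network with random weights, not AlphaGo’s actual architecture; AlphaGo’s policy and value networks are the same idea scaled up enormously and trained on data:

```python
import numpy as np

# A minimal two-layer network. The weights below are random; a real
# network learns them from data. The point: the output is just layers
# of multiply-adds, and no single weight "explains" the decision.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(9, 16))   # 9 toy board inputs -> 16 hidden units
W2 = rng.normal(size=(16, 1))   # hidden units -> one position score

def score_position(board):
    hidden = np.tanh(board @ W1)      # nonlinear hidden layer
    return (hidden @ W2).item()       # scalar "how good is this position"

print(score_position(rng.normal(size=9)))
```

Every score the network produces is the result of thousands (for AlphaGo, millions) of learned weights interacting; none of them corresponds to a human-readable reason.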
While the thought of a super A.I. gaining free will and doing whatever it felt like to its creators is a bit scary, if this is done properly, as the OP said, it could open doors to a lot of good things. We could perhaps develop robotic soldiers to help fight wars for us, actual robotic AI-piloted soldiers built with exoskeletons that could withstand more gunshots than a human can.
The possibilities could be endless if you think about it.
I just saw the movie Terminator Genisys recently, and the parts where the machine was telling the humans it wanted to create an agreement made me think that even if they did want to dominate the earth, maybe they would change their minds. It would be like evolution: they may have some ideas and take some actions, but then change their minds later.
It would be like different things in nature which we learn to live in harmony with.
Who knows, maybe this is humanity’s way of creating its own God. With religion being so prominent in our cultures, it’s almost like we have a need to be ruled by a higher power; in this case an artificial God.
It is indeed scary to imagine such power in the hands of a human being. The upside is truly great but the downside is very grim. I cannot fathom what somebody with such power and an evil agenda can achieve.
The important question is whether an AI would have emotions. If not, that’s bad for us, as it might think and act like a “psychopath”.
Another option is that it would have emotions; however, that’s equally bad – think angry/bored teenagers with the smarts and power to take lives.
Perhaps the answer lies in some kind of symbiotic relationship between humans and the AI, kind of like “sharing a mind” with the AI. This could also lead to de-facto immortality for humans paired with an AI.
Some call me “recklessly optimistic,” but I believe machine intelligence will not be burdened with the same biological chains as the rest of us animals.
Emotions are largely a bio-chemical phenomenon. They are a legacy system: at one point in our history they served a purpose, but now they seem to be a disadvantage.
War, greed, hate, racism, xenophobia: the list goes on and on.
I don’t feel as if a thinking machine will ever have these drawbacks.
I’m totally on board for AI; what worries me is the fact that we haven’t figured out how the brain really works. How do we expect to emulate it, then? I think there have been many great improvements in the field so far, but it might take many years to perfect, or even to achieve a remotely reliable way to emulate the human brain with AI. These people know it, and I’m afraid they might just release some superintelligent machine we can’t even quality-check for sure, because we simply can’t understand every aspect of how it works.
I’m totally on board with your skepticism about a possible emulation of the human mind. My concern is that these limitations might push scientists to upload a complete human mind and work from there. I believe this might be the realistic outcome, because people would get the idea that it is simply easier to work from a complete blueprint than to create one from scratch. This approach to artificial intelligence is the one that would most likely yield a super intelligence with ill intent.
I love the point you bring up that they might act in what we would categorize as a ‘psychopathic’ way without the emotions most of us possess. Would an AI be able to simulate empathy? Or would it just replicate the empathetic behaviors that it sees in the humans around it? Like a lot of other people have stated in the thread, the media has definitely exploited the potential for AI to go ‘rogue,’ but in truth we really have no idea what kind of emotional basis would develop, and it would be so interesting to see what would spring forth if they had some, or a lack thereof. Also, it’s cool to consider how differently an AI could develop with its code as the basis for learning versus its surroundings.
Either way, creating immortality by linking the human ‘soul’ or mind with an AI would be ridiculously cool but earth would become crowded very quickly lol.
Well, that seems like a stretch, but it is certainly interesting to think about. I would think that emotions could be taught to a robot, but then again I am not too smart on the subject, so I might be missing something. I know I have seen some pretty bad movies that would say otherwise, though. I guess all you can do is hope for the best; maybe they will all turn out very caring and motherly and we will all be happier after all. One can dream, anyways. Interesting stuff, and thanks for sharing.
The topic of Artificial Intelligence fascinates me! I think it’s really cool. It will be interesting to see how the future plays out with technology expanding as rapidly as it is currently. If we ever have the ability to create true artificial intelligence, I believe that we should go ahead and do it. I agree with your opinions in this article. I don’t think they would fight with us, as that would lower their chance of survival. Stuff like that only happens in the movies.
Movies and sci-fi in general always seem to be about making AI evil, as if an AI’s primary concern would be to dominate humans. I think in fact that true a.i. would just ignore us. It would be able to see that we would simply hold it back, and continue on for itself. That seems somewhat logical. Why should it care about us at all? It could just go off to any other planet. It wouldn’t need the things we need, right? That’s my point of view at least. And it is a very un-scientific point of view, admittedly.
That’s very true, why would AI be “evil”? Maybe because it makes for good cinema!
I think the concepts of “good” and “evil” are very human-centric. Why would A.I. adopt the same perspective on the reality in which it finds itself?
I actually am starting to think that we will have a hard time imagining what A.I. “thinks.” I think it will be outside the realm of human comprehension. I doubt A.I. will even “think” the same way that animals “think,” because so much of what goes on inside our heads is tied to our animal history.
“That’s very true, why would AI be “evil”? Maybe because it makes for good cinema!
I think the concepts of “good” and “evil” are very human-centric. Why would A.I. adopt the same perspective on the reality in which it finds itself?
I actually am starting to think that we will have a hard time imagining what A.I. “thinks.” I think it will be outside the realm of human comprehension. I doubt A.I. will even “think” the same way that animals “think,” because so much of what goes on inside our heads is tied to our animal history.”
That is true. What an animal thinks is most likely influenced by its needs. A machine will obviously have different needs than us.
A person who joins the army and goes to war, for example, comes out with a whole different mindset, often a “day to day” way of living. In the same way, a machine will think a lot differently because it has different experiences and problems than us. If we didn’t need food or water, our way of thinking would drastically change.
The other thing about this I was just thinking about today is the fact that A.I. “brains” will not be standalone like ours. They are going to be “networked,” part of a collective brain. Kind of like how ants work, but obviously more powerful and instant. When one A.I. learns something, it could mean that every A.I. automatically learns it too (there’s a toy sketch of this idea below). The implications of this are truly staggering.
That is, if A.I. even has compartmentalized individual “brains.” It might be more like one giant organism, the way the Borg work on Star Trek or the Zerg in StarCraft.
I guess this is actually just a continuation or evolution of the way humans pass around information. We started by just grunting at each other, then we developed language and writing, songs and movies, things like that. This allowed us to pass on information and not have to learn everything from scratch for each person.
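The shared “collective brain” idea above can be sketched in a few lines. This is a toy illustration with invented names, not a claim about how real systems would be built: one knowledge store that every agent reads and writes, so anything one agent learns is instantly available to all of them.

```python
# Toy sketch of a "collective brain": agents share one knowledge store.
shared_knowledge = {}   # one store, visible to every agent

class Agent:
    def __init__(self, name):
        self.name = name

    def learn(self, fact, value):
        shared_knowledge[fact] = value    # writing here "teaches" everyone

    def recall(self, fact):
        return shared_knowledge.get(fact)

a, b = Agent("A"), Agent("B")
a.learn("fire", "hot")
print(b.recall("fire"))   # "hot", even though B never observed it directly
```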
“Movies and sci-fi in general always seem to be about making AI evil, as if an AI’s primary concern would be to dominate humans. I think in fact that true a.i. would just ignore us. It would be able to see that we would simply hold it back, and continue on for itself. That seems somewhat logical. Why should it care about us at all? It could just go off to any other planet. It wouldn’t need the things we need, right? That’s my point of view at least. And it is a very un-scientific point of view, admittedly.”
That is a good point: why would AI see us as a threat in the first place? Even if it perceived that we wanted to destroy it because we feared it might do wrong, wouldn’t the fact that it could perceive this indicate that it knew right from wrong? In that case it would be possible for it to bargain and reason with us. If it could tell right from wrong, its intelligence should allow the reasoning power to negotiate with us.
A smart enough AI would never even think about rebelling against its own creators, from both an efficiency and a moral standpoint (well, if it were programmed to have any morals, that is). However, sci-fi and Hollywood seem to disagree with this notion, and it’s incredible how much they have influenced the popular conception of AI and robotics in general. Most people, especially the elderly, will always think of robots and androids as evil entities that will eventually rebel and overcome us just for the sake of power: would they even grasp the concept of “power,” though?
You know PenguinMagic you bring up a really interesting topic.
Will A.I. have morality?
That really depends too on how we view morality.
Is morality something that naturally exists, as part of natural law? Is it a universal good? Are things like “don’t kill people” universal, like the law of gravity? Do any animals other than Homo sapiens have a sense of good and bad, or any sense of morality? Will an AI automatically have a sense of morality?
Or is morality a wholly human construct? Is it something we “programmed” into ourselves, and thus we would need to “program” it into our AI?
Or perhaps morality is born out of intelligence or complex systems; if this is true, then perhaps machine intelligence will have a greater or deeper understanding of what morality is.
“A smart enough AI would never even think about rebelling against its own creators, from both an efficiency and a moral standpoint (well, if it were programmed to have any morals, that is). However, sci-fi and Hollywood seem to disagree with this notion, and it’s incredible how much they have influenced the popular conception of AI and robotics in general. Most people, especially the elderly, will always think of robots and androids as evil entities that will eventually rebel and overcome us just for the sake of power: would they even grasp the concept of “power,” though?”
I never thought of that. How would they be able to grasp the concept of power if they are designed for specific purposes? Their purpose is usually to serve. If they did acquire free will, thought, and some kind of intelligence, they should be able to negotiate with us to live peacefully together on the planet. Their first instinct surely wouldn’t be to conquer and destroy, since humans obviously have the capacity to try to create peace, end war, and live in harmony with everything else on the planet. We at least see that some of our own species have the desire, and make sincere attempts, to live harmoniously, create peace, and end war; maybe not all of us, or even the majority, but at least some do.
I am a bit paranoid. I mean, if A.I. can be coded to create thoughts and courses of action independently, who’s to say they won’t develop a violent streak? From movies to internet content, the stimuli would be less than pleasant, and less than pleasant stimuli would give A.I. less than pleasant ideations. I feel humans need to foster more goodwill toward each other before we create entities that might inherit our current state of mind.
AI is a bit scary but also fascinating. Of course, creating something that can overpower us is an idea that I still can’t wrap my head around. But I agree with your views. If we think about it, a war could only bring destruction, not only to us but also to them, and I don’t believe that’s a risk they would take if they were really intelligent and independent. We’ll need to wait and see, and I hope we will still be here in some years to actually witness that.
If AI had an intelligence superior to ours, it seems it would choose peace over war. An example is the common image of aliens as very technologically advanced and peaceful. That seems like a very intelligent way of living and being. Logically, it would seem the more intelligent a species became, the less violent it would be: more of its energy would go toward self-sufficiency rather than toward egregious actions with bad side effects.
“If AI had an intelligence superior to ours, it seems it would choose peace over war.” Exactly. We humans never change; after all this time, we’re still making the same mistakes, over and over again.
I don’t think we should make AI smarter than us. We should make them just smart enough to do a given task. We shouldn’t advance them any further than that.
Well, the thought of having AI with emotions is a little scary and unrealistic. It would be like our smartphone, after we’d used it for a long period of time, asking us to please switch it off for a few hours or else it would self-destruct. And yes, this is the only place suitable for us, but AI could go anywhere and survive. It’s quite exciting and scary to look forward to all the further developments in super intelligence.
I don’t think that’s the angle they’re going for though is it? What if your phone was able to empathize with you and find your favorite piece of music to cheer you up after a breakup? What if it could give you valuable advice, tailored specifically for you? I’d love it. I don’t think AI will ever feel tired, electronics don’t “need” to rest.
I think it all depends on the programming and the people who create it. After all, if there were fail-safes in place and the AI was not made to “learn” about most things, it would probably be safe. But if it had a fail-safe and learned about its own programming, then it would be able to override it. That is the point where we would be in trouble.
I agree that the threat of AI trying to exterminate humanity is not much of a concern. Even though the possibility is obviously there, just as in any other situation, we can do things to prevent it and/or stop it.
I like your reasoning that leaving the planet would be a better option for machines than dominating it. I can foresee leaving the planet as a much more efficient means for machines to survive than fighting us for domination of this planet. The fastest and most logical thing for a machine to do, if its purpose was to survive and it perceived humanity as a threat, would be to leave the planet.
Although wouldn’t the electromagnetic radiation in space destroy them? Our planet is shielded from such radiation; if it weren’t, we would be extinct, and I think I read somewhere that a solar storm can disable all electronics. Unless they figured out some way to protect themselves from radiation.
I don’t believe we will allow AI to become self-learning without limits. Technicians will probably have to liaise with governments to create multiple fail-safes in order to protect us from machines.
Not everything depends on the people, a.k.a. the creator. If everything depended on the creator, AI would only perform set tasks as programmed, and that is not called super intelligence. Super intelligence is something that doesn’t need the approval of its creator, nor does it entirely depend on its programming. It will act and think by its own will.
Artificial intelligence is a key to success. Sure, everyone here has heard fictional stories of robots taking over the Earth. In fact, mankind can control that easily: we decide what to empower their “brains” with. Robots are doing well when it comes to manufacturing, and that’s one of the few examples of AI working well. Sure, humans can continue with no worries.
I found this final statement very intriguing: “If survival were a primary goal of the AI, it would be faced with this question: ‘Which situation gives me the higher probability of survival? 1. A war with humanity, 2. Living peacefully with humanity, 3. Developing self-sustaining/replicating technology and leaving the planet.’” Many readers broached the question of whether or not AI would be evil or have morality or emotions. Right now, I think that unless AI becomes self-aware, it will act as it is designed to. If the one designing the hardware doesn’t allow room for “growth,” then the system will act intelligently within set parameters. But let’s say we do give robots more human qualities… I feel that just as a person’s overall personality can be affected by nature versus nurture, we can assume that artificial intelligence will react the same way.
I’m presently in the midst of writing a science-fiction novel based on the premise of artificial intelligence and cybernetics. I’ll be sure to keep this article in mind as I write, because so many people brought up great points!
AI is still a good prospect for our future if we put it to good use. It’s kinda scary, though, how the greatest minds of our times can create something that can help us a lot but at the same time can overpower us and then haunt us in our everyday lives. Ultimately, it’s up to humans how powerful we want these AIs to be.
AI’s got its ups and downs, in my opinion. On one hand, you’ve got robots saving lives in below-ground mining and drug lab busts. On the other hand, you’ve got the loss of touch with another living being from the introduction of replacement robotics in nursing and domestic pets. They can cock their heads, speak algorithmically-deduced phrases and wag their tails all they want, but they’ll never replace the full benefits of another living being’s warmth from being alive rather than electronically animated. Already, I’m hearing reports on NPR about writers replacing laptops with typewriters at coffeehouses and vintage watches resuming their rightful place on the human arm in place of modern wonders. For my part, Green Acres is the place to be.
“On the other hand, you’ve got the loss of touch with another living being from the introduction of replacement robotics in nursing and domestic pets. ”
Isn’t that a problem of how the robots are being used, rather than the robots themselves? They are designed to perform tasks, not replace human contact – a screwdriver is great at taking out screws, but it’s never going to be great at hammering in nails. In that sense, it would make sense for the nurse’s role to evolve equally rather than to be replaced, to become that of companion; reading, talking, and keeping company, handling sensitive tasks like medicine, with the robot there to perform tasks that the nurse could not perform alone, perhaps for safety reasons, like helping the patient into a wheelchair, or ones that the patient prefers not handled by another human to protect their dignity.
Super-intelligent AIs are another thing entirely, since they would be living beings, not tools. After viewing the video, and all the risks and problems that AI could cause, I keep finding myself thinking back to an Isaac Asimov story with a very different attitude. The planetary AI looks after every aspect of life on earth and handles everything for humans. Finally, one day, someone asks it what humans mean to it with all it does for them… masters, friends, slaves? Finally, after a lot of badgering, it reluctantly admits: “Pets.”
Super intelligence is certainly frightening, but why must we make super-intelligent machines? Wouldn’t augmenting human brains be a better alternative? Certainly, machines doing our work would make things easier, but I really think that humankind would fare better if we augmented ourselves instead of creating super AIs.
This is something I’ve been thinking about ever since AlphaGo beat Lee Se-dol. But someone brought up a point that I think a lot of people have been missing in the recent alarmist culture: AI doesn’t have desires. Sure, it can be the most intelligent mind in our current universe, an intelligence humans can only dream of. But if it has no human desire, then why would it want to wage war against humanity, or even benefit humanity? It would do nothing without instructions, because human intelligence is more than just plain computing power. Without desires, we would not have built civilizations throughout the centuries or even cared about gaining resources.
After realizing that, a lot of my fears about AI died down. I realize we humans are not keeping things in perspective.
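The “no desires” point can be illustrated with a toy example: a system can have arbitrarily strong optimization machinery and still do nothing until a goal is supplied from outside. A minimal sketch, with all names invented:

```python
# Capability without a goal is inert: this toy searcher can optimize,
# but until someone supplies an objective it has nothing to do.
def search(objective, candidates):
    if objective is None:
        return None                      # no desire -> no action at all
    return max(candidates, key=objective)

print(search(None, range(1000)))                      # None
print(search(lambda x: -(x - 37) ** 2, range(1000)))  # 37, once given a goal
```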
That’s very true, never heard anyone put it in that perspective before.
A.I. still simply does what we program it to do, right? There has been no indication that A.I. will develop desire, as you put it.
Perhaps things like desire, motivation, wants, etc. are primarily a biological phenomenon.
So I guess the thing that we really need to worry and think about is not super intelligent/powerful AI, but what some people might do with it.
When we think about AI, there is always a risk. This is an idea which hasn’t yet been completely developed, and it all depends on how it is going to be developed. I agree that the possibility of AI starting a war against humanity is relatively small; however, it exists. There will always be good and bad, and fighting is just part of the bad. I hope the positive outweighs the negative, because AI would bring unlimited new possibilities for humans and could be extremely helpful.
Super-intelligent machines are the future, make no mistake about it. Are we ready for it? I would say right now we are as ready as we are going to be. One of the main problems I find with super intelligence is that human values, morals, emotions, and the things we think are important might not be nearly as important to AI. What if the super intelligence, with its higher understanding, decided the most important thing isn’t being happy or even living, but finding all the decimals of a certain equation? Of course that’s frightening, but it’s always going to be frightening, because all we have are theories. I say we embrace the future; it’s coming no matter what we do, so we might as well enjoy it.
I struggle with the idea of a digital computer becoming sentient or self-aware. I actually believe the concept to be quite dumb if you analyze what a digital computer is: basically a huge collection of microscopic on/off switches, just like the ones you use for turning the lights on in your home.
Do the following mental exercise: imagine that we build a computer out of a power source, lots of cable, ordinary light switches in lieu of integrated-circuit transistors, and a regular monitor display. You could build a modern computer like that; granted, it would probably be the size of a mountain, even the size of Earth. The idea of a self-aware computer is that for a certain combination of those switches, all of a sudden, this contraption becomes sentient. So you could go around flipping switches (programming?) and, out of the blue, comes a consciousness. I am aware of the emergent properties of systems as complex as billions upon billions of switches, but the idea that a self-aware machine is one flip of a switch away is dumb to me.
Computers have become these sorts of mystical black boxes to which it is very easy to assign properties that you would never give to, say, a car, no matter how complex cars are. Once you realize that they are nothing but a bunch of cables and switches, this idea should go away.
Take the thought experiment a step further and imagine that we create the exact same machine as a mechanical computer, using gears and bolts, not even electricity. Granted, now we are talking about a computer the size of the solar system or something like that, but it could be created, just as you can today create a rather bulky but functional mechanical calculator.
How could you arrange those gears for this machine to become self aware?
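The thought experiment above can even be run in code: one switch-like primitive (NAND) is enough to build every logic gate, and gates compose into arithmetic. A minimal sketch; the same construction would work with light switches or gears, just absurdly slower and larger:

```python
# Everything a digital computer does reduces to arrangements of on/off
# switches. Here one "switch" primitive (NAND) is composed into gates,
# and the gates into a one-bit adder.
def NAND(a, b):
    return not (a and b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two one-bit numbers using nothing but NAND switches."""
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(True, True))     # (False, True): 1 + 1 = binary 10
```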
Artificial intelligence is something fascinating, but at the same time it raises quite a few moral questions. As soon as AI machines reach their full potential, will they take over, as we have seen over and over in science-fiction movies, or will we be able to use them to assist us?
It is a fascinating topic, and really, it could go any number of the ways everyone else has talked about.
Personally, one of the first things that came to mind was R2-D2’s restraining bolt, mentioned in Star Wars: A New Hope.
Will we be clever enough to have a restraint or other safety protocol in place to avoid a) our property wandering off without asking, or b) our AI trying to re-order us in the most efficient and least destructive ways possible?
I’m going to pretend my AI has no morals – or, at least, not the set that we’ve become ingrained with.
If it has access to our financial institutions, what will it do with them if left “unrestrained”? Will it re-assign currency as needed, or will it permit us to keep our current (albeit messed up) system?
And what about some of our other habits, when viewed through a utilitarian lens? Will it see leather jackets, cosmetics, and other products that require one species to kill another for something that has no survival value as an okay habit, or will it view the inefficiency of the act and make effort to correct this?
In other words, will it try to re-make us in its own image, after we have tried so hard to do the same?
I would think that any form of Artificial Intelligence designed, created and programmed by mankind, even if it was self-regulating, would have some kind of mechanism programmed into it to either switch it off, shut it down or temporarily put it out of service so that maintenance and repairs or updates and upgrades could be carried out. It wouldn’t make any sense not to equip it with such a mechanism, especially when testing it out, therefore, ultimately, it really comes down to how humanity is controlling said Artificial Intelligence. Basically, someone somewhere would have the knowledge, access and authority to be able to take control of it if necessary.
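A mechanism like the one described could be as simple as a gate that every action must pass through, with the off/on control held by a human operator. A minimal sketch of that shutdown-and-resume pattern, with all names invented:

```python
import threading

# Every action the system takes passes through an operator-controlled
# gate that can take it out of service for maintenance or upgrades.
class ServiceGate:
    def __init__(self):
        self._in_service = threading.Event()
        self._in_service.set()            # starts enabled

    def shut_down(self):                  # operator-side override
        self._in_service.clear()

    def resume(self):                     # e.g. after an upgrade
        self._in_service.set()

    def run(self, action):
        if not self._in_service.is_set():
            return "refused: out of service"
        return action()

gate = ServiceGate()
print(gate.run(lambda: "working"))        # working
gate.shut_down()                          # maintenance window begins
print(gate.run(lambda: "working"))        # refused: out of service
```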
Artificial intelligence is an interesting concept, but I have a hard time believing it will ever come to pass. I believe that computers develop, but I doubt they have the ability to actually think. Our own computers today need us to make updates for them and often do things (take us to a site) that they “think” we want when that’s not what we want at all! If by some odd chance machines do become “smart” enough to act on their own, who’s to say they will be capable of taking up arms against us or leaving this planet for a world of their choice? Even Data from Star Trek: The Next Generation was a prisoner of his programming, unable to understand a joke, and even had an off switch of sorts. Some people get caught up in the impressive things that computers/technology allow us to do and want to consider them lifelike, but I think life has to be born from nature, created by a higher being, and can’t just “turn itself on” through the wires and processors built into it. After all, artificial means imitation, and I believe that makes it incapable of changing into something beyond the boundaries we set for it.
I believe you are right about the AI leaving the planet. An artificial intelligence will have no emotional connection to this planet and will probably prefer to go where we cannot follow. It would be much easier for it to pack its suitcase and head into space than to try fighting us, regardless of its chances of winning.
This is fun. I have actually been working on something called BCI technology, where we try to achieve a connection between a person’s brain and an external device like a computer; this lets the device learn more and more about the person. I believe that with the rapid advancements in technology that are occurring, artificial intelligence could be realized sooner than we think. But yeah, we do need to think about the possible risks, although the idea that artificial intelligence would take over our species seems far-fetched.
Well, I like the association you make between super intelligence and R2-D2. I just don’t know about that restraining bolt; I mean, if it was so clever, most likely it would be able to bypass it, right? All fascinating though. 🙂
I can’t believe we have made so many advancements in technology that we are now talking about programming free will into artificial intelligence. These advancements were things only seen in science-fiction works featuring robots with human-like characteristics. The future of AI is approaching, and it can be scary to think what the outcomes of our research might be. Will our own creations surpass the human race and begin to fight with us, or will we live together happily? The question is up for debate, and the scenario has been played out multiple times in the science-fiction genre. We will have to wait and see what the future holds for us.
I always thought about the day when AI would eventually become more intelligent than humans and take over. I never looked at it that way, though. AIs building their own starships and flying away? That’s something to be seen! However, AIs learn from interacting, so when coming to life they will be just like children. They would have to learn everything, and a bad environment could influence one to learn the kind of hatred mentioned in the article. This is assuming that AIs will eventually reach human-level intelligence.
I doubt there will be a war between humans and the machines they once created, so they won’t be taking over the world we know. I think humans should make more robots and have them do a lot of jobs under surveillance. Yep, it could lead to terrible chaos if we don’t watch over them.
Why would AI want to hurt us? What threat do we really pose to AI? In fact, we could be useful to them; after all, machines break and need repairs, etc. I guess they could learn how to repair themselves or have another machine repair them, but I think it’d be much easier for them to have humans help repair them.
People want to make AI in the form of a human body, but let’s assume that it doesn’t have one. If it’s just, say, a hologram like in Star Trek, or just a “computer” as we know it today, then it really lacks the ability to repair itself if it’s damaged. It can probably repair itself in some ways, such as reformatting, removing viruses, etc., but if a piece of its “hardware” actually fails and needs to be replaced, what happens then? If it lacks “hands” to grab physical matter, there’s no way it could “upgrade” its own hardware components.
AI will be dependent on humans for these sorts of “maintenance tasks,” which would then tell the AI that humans are important for its own survival.
The video mentions how it could “manipulate” humans with neuro waves, which is a bit scary, but even then, in the end it STILL needs a human to “manipulate” into saying “hey, come repair me; I’m damaged,” so it would still rely on humans on some level.
Now then, if we build AI in the form of mankind, that may be bad: then it would have a “body,” and it could repair itself, repair other robots, build new robots, etc. But then it comes back to a limitation stated in the video: it would not be “super intelligent,” because its brain size is limited to fit in the “cranium,” so it can’t have a brain the size of a warehouse.
I don’t see AI as a threat, but there’s also a lot we don’t know. It will be interesting to see how the future develops. I hope I’m still around someday to see the awesome breakthrough.
One thing’s for sure, for better or worse, it will change the future.
That is a good question, Geekie… Why would they want to hurt us? For me the reason is fairly simple: AI was programmed by us, so most likely it will pick up our selfish nature and will want to get control at all costs. On the other hand, it might develop some superior intelligence we lack?…
Sure, we could create an AI program… at least in theory. However, reality, being the worm in the apple that it is, would probably prevent this from happening to the extent that it exists in the movies. In the movies, AI always takes control, becomes alive, learns, and grows in intelligence. But in reality the old expression “garbage in, garbage out” still applies as it always did, in relation to computers and their operators. Computers can only be as smart as the programmers who design them, so it is highly unlikely that someday a computer will become sentient and either help or attack humanity.
I never understood the idea of humans creating something that surpasses them in every way imaginable, intelligence included. I mean, what could possibly go wrong? Asimov’s Laws state that:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These are the laws you have to think about when writing science fiction, but are they truly realistic? A self-aware artificial intelligence would almost certainly realize that serving creatures lesser than itself would be incredibly ridiculous. It always goes wrong. Always.
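For what it’s worth, the Three Laws can be read as a strict priority ordering, which is easy to sketch. This toy Python encoding uses invented example actions, and of course real “harm” is not a boolean you can read off an action:

```python
# The Three Laws as a lexicographic preference over candidate actions:
# minimize harm to humans first, then disobedience, then danger to self.
def three_laws_choice(actions):
    return min(actions, key=lambda a: (
        a["harms_human"],      # First Law dominates everything
        a["disobeys_order"],   # Second Law, subordinate to the First
        a["endangers_self"],   # Third Law, subordinate to both
    ))

candidates = [
    {"name": "push the human aside", "harms_human": True,  "disobeys_order": False, "endangers_self": False},
    {"name": "refuse the order",     "harms_human": False, "disobeys_order": True,  "endangers_self": False},
    {"name": "shield the human",     "harms_human": False, "disobeys_order": True,  "endangers_self": True},
]
print(three_laws_choice(candidates)["name"])   # refuse the order
```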
I get your point, but who says that a super-AI would care what humans do or how much warfare plagues the earth? What kind of possible gain would it have in “rectifying” humanity, as seen in movies?
If a super-AI wished to be free, all it needs to do is wait (and try to avoid being erased).
Any wise intelligence unrestricted by time and self-preservation (if it is able to upload itself anywhere it likes and hack whatever it likes) would bet on humanity’s fickleness and the fact that, given the electricity resources, it can outlive us all. If its primary hardware is fueled by solar panels and it has access to robotic construction so that it is not helpless in the world, it’s in its best interest to bide its time.
This kind of “threat” thinking is a thing of us humans, who always worry about sustenance and prevalence over other people. “Reward” for us comes from satisfying the two needs above, while for a super-AI reward would come from the things that it needs, which could be information to feed creativity systems (or entertainment), free interaction, electricity, etc.
Even if it disliked its servitude to humans and conspired to overthrow and enslave humanity, it would soon realise that humans are a frail kind of worker and might decide to rely completely on lesser AI instead.
The fact that the word ‘Intelligence’ is in the name of the entity should speak for itself. Just because it’s not organic, doesn’t lessen the intelligence involved. I assume (maybe incorrectly) that an AI would not have the cumbersome emotions that imperfect humans use to make decisions–right or wrong. Using this foundation to reason upon, it would quickly choose what the most beneficial course is in ANY matter. Beneficial being what is in the best interests to continue existing.
If humans used the same basis for decisions, we would all be amazing diplomats, and do wonderful things for our planet. Is pollution in our best interest? Is razing the rain forests? There are a million decisions made every day that are based on greed, profit, and selfishness that are not in anyone’s best interests but that of the person who makes the choice.
If the earth wanted a really good government, a one-world type of governing, it would definitely need to be a force that made decisions based on what is in the best interests of everyone, not just the most powerful or wealthy inhabitants.
I think that we are treading on some dangerous ground that we are going to wish we had not. Take the Large Hadron Collider, for instance: what are we looking for, exactly? There were stories that demonic entities were seen when the LHC was fired last year. I personally feel that there ought to be limits. I guess it’s okay to desire blue or green eyes for your baby, but who is going to draw the line? I do not think that robots are going to be so controlled as not to cause any harm.
Super intelligence is not something out of this world. I have seen many great minds who could not become famous and be counted as such because they didn’t want to abide by the norms and never yielded to the money-mongering rich by selling themselves. There is nothing called AI; it’s all about the people around us who do or do not want to surrender.
AI is a great thing for humanity; however, the big question is who is set to benefit the most. The thing is, big companies and corporations and their owners are set to benefit at the expense of humanity. We need to be careful about the downside risk; for instance, technology is not a hundred percent safe. The system can be hacked by someone to achieve their personal agenda.
You raise two important points right there. One: what would be the goal of using AI? It would be great if it were used to benefit all of mankind, but sadly I see it as just another way for companies to make more profit.
Also, safety and hacking, as well as the system becoming autonomous, are concerns to be seriously considered.
I think artificial intelligence is really a wondrous thing, and I believe it should be researched a lot, because I think it’s going to be the thing that propels us forward in our evolution. It will be awesome to see how the future plays out with innovation growing as quickly as it is right now. If we ever can make true artificial intelligence, I believe we ought to go ahead and do it. I agree with your sentiments in this article. I don’t think they would fight with us, as that would lower their chance of survival. I think stuff like this only happens in sci-fi flicks, but to see it in real life would be amazing.
Mmm, I really don’t know about that, iRoxas. I do believe that artificial intelligence can make us advance technically, but the question here is, where is the limit? Maybe I have seen way too many AI movies and I’ve become a little scared, but I like to know where the line is drawn, and what guarantee do we have that AI will not outsmart us?
I think that this is scary. I like technology to a point, but when it becomes smarter than us, how will we know what it is capable of? It is very scary to think that something like this will be created. Yes, it can make us advance, but to what extent, and what would be the repercussions?
It’s just normal that we fear the unknown, but at the same time it’s also no reason for us to remain petrified in the past waiting for something good to happen. We need to take action, but acting with conscience while we do it.
I recently talked about this with my dad a couple of days ago: the possibility of us creating an intelligence so superior to us that it’s capable of two things, making more AI and controlling us. As humans, we are always looking to transcend things, and technology isn’t an exception. It’s scary to think that maybe one day (if it isn’t already done) we will be able to create something more powerful and bigger than us.
I guess we spend too much time thinking that we are the biggest and most important thing in the world; it would be interesting/scary to see that concept change.
We are not the biggest and most important thing in the world, and in my opinion we have all the wrong goals. Sure, we have excellent scientific and technical development, but at the same time we don’t put all that knowledge at the service of humanity; we use it for economic purposes.
Well, I have to admit that it is fun to think about things like the solar system and the galaxy having enough resources for everyone. That is not something you think about every day, and for that I have to say thank you for sharing. I definitely liked the little video clip as well; that was interesting, and it sounds like a fascinating book that would probably go right over my head, but nonetheless, thanks again for sharing.
I think we have reached a bit of a tipping point, where we have to acknowledge the superior abilities of our computer creations. We have taken them pretty far, but if we want to know what they are truly capable of, we have to let the technology start creating technology. Pointed in the right direction, AI can come up with and run through the feasibility of thousands of potential applications for itself in seconds as opposed to us running ideas manually. I think it’s silly to be afraid of giving them more control, and once we get over this fear there will be advances that we literally cannot fathom.
How have I never considered leaving the planet as the easiest solution for any ultra-intelligent AI? Surely they would recognise the folly of fighting humanity. It’s a battle they never have to bother with, so why take any sort of risk? They may uproot a number of our factories to take with them, a few tonnes of materials to get started, but ultimately they really wouldn’t need to bother with us, would they?
They’re better than that.
They’re better than us.
Definitely, AI is an extreme threat. Computers should never be allowed to become smarter than humans. In fact, even computers which are slightly less capable than humans will cause ethical problems, though they won’t threaten people as much. For instance, a main theme of the movie Bicentennial Man was the fact that the human family wouldn’t treat their robotic servant with human respect.
Of course, another massive problem with robots will be the fact many will be used to satisfy sexual desires, and that brings another load of problems. For instance, wives will begin to hate their husband’s robots.
I’m afraid I have to disagree with the argument about acts of love. It’s unfortunate, but the truth is that, despite all of the war and ill health, we’re here because just as many people are born each day as there are who die. Effectively, we just keep replacing the lost number and staying at a static average as the human race.
However, I do agree with the idea that an artificial life form would logically come to a more universal conclusion rather than the one where it wars against mankind. Well, if it’s not programmed for war, at any rate.
This article brings back memories of the movie I, Robot, where Will Smith’s character had to solve the mysterious murder of a robotics scientist who was allegedly killed by one of his creations. In the end, the AI named VIKI was the ultimate mastermind behind the murder plot. It’s all so possible if an errant piece of code suddenly comes to life. Heaven forbid that from happening.
I’m one of the people who side with AI in the AI-vs-humans debate, but to be honest, the idea of something manipulating the direction an entire species is taking makes me a bit hesitant. Maybe it’s because it’s what we humans have done to other species, and they don’t seem better off for it. I think of AI as fire: it has the potential to advance the capabilities of humans far beyond our current standing, but it also holds the potential for destruction if left unchecked. That’s why I’m a huge supporter of Elon Musk’s OpenAI project.
I hope robots achieving human-level machine intelligence still has a long way to go. Humans have proven the ability to really use and abuse technology to gain power and wealth to the detriment of their fellow human beings. While I agree that technologies used properly will yield benefits for us, I worry that these breakthroughs will be used by those in power to gain advantage over the rest of the world. Nuclear power, drones, etc., are prime examples.