Elon Musk, billionaire CEO and founder of Tesla and SpaceX, has recently voiced serious concerns about artificial intelligence (AI). Other prominent figures such as Bill Gates and Stephen Hawking have also indicated that AI is potentially one of humanity’s greatest threats.
We only need to look at the fossil record to realize that our place as humans, the dominant species on this planet, is anything but permanent. The arrival of Homo sapiens (modern humans) marked the end of at least two other proto-humans, the Denisovans and the Neanderthals. Fortunately for the Neanderthals and Denisovans, some of their population bred with modern humans, so some of their genes live on, according to DNA evidence. It’s also somewhat apparent once you compare the facial features of Neanderthals to those of some modern humans.
What will be left of us after the next phase of our evolution? We can’t seem to help but create technological versions of ourselves. It’s almost like a biological imperative. Maybe the underlying “purpose” of all our technological advancement is to create that next step in the evolutionary progression. In much the same way religion says God created us in his image, will we do the same for the life we create?
I’ve recently started watching the show “Humans”. In that world, people have created artificial beings called Synths. The Synths live among us, and there is concern that they might replace us. They look human, they sometimes act human, and their existence poses an existential threat to us because they can do almost anything we can do. Yet they do not get sick, they are immortal, and they learn faster and retain it all.
Robots, androids, and Synths in the physical world are easy enough to spot. What happens if an AI evolves somewhere inside a supercomputer and has access to the internet? Would we know of its existence?
Imagine what a superintelligent, artificial mind with direct access to all the systems connected to the web could accomplish. It would make the best human hackers look like amateurs. It could manipulate financial markets, accrue wealth by creating or taking over bank accounts, alter the information we receive online, digitally create pictures and videos that look like real-world events, or even create new goods and services. Right now it is possible for an American sitting at home to contract Chinese manufacturers or Indian programmers to create a variety of apps or items without meeting anyone in person, all made possible by the digital economy. A consciousness in cyberspace could just as easily create real-world, tangible items. If there were a task it could not accomplish in the physical world, it could hire someone to do it via sites like Craigslist. With enough money, masquerading as one human or thousands of humans online, there is virtually no end to what an AI could accomplish. It could even steer the course of our society’s development or downfall without us knowing it.
Think of it this way: what if there were a human hacker dedicated exclusively to you? He could read all your emails and listen to all your phone calls. He could hack all the microphones on your various devices so that he could hear private conversations. He could see your search history and the sites you visit. With all this information, the hacker would get to know you very well. This hacker, if human, would be limited in the number of people he could monitor and derive useful information from. An AI, with sufficient computational resources, could do this to millions of people simultaneously. It could customize, for each of us, the news, images and information that we consume… thereby heavily influencing the decisions we make. An internet-based AI could wage “war” on humanity without firing a shot.
AI is something we should think about. Who knows what things like androids would do to us, let alone to the Earth itself. I could see androids living among us if we had some sort of overriding system to shut them down.
One good piece of advice I borrowed from a software developer is that when you create something big, you should probably create a way of destroying it too. Since androids would be created by humans, humans should research ways of eradicating them, just in case they pose a threat to humanity.
I don’t think we should go so far as to make them, in my opinion. If we have to have a backup plan for destruction, then why put ourselves in the position of having to use that plan?
That’s a good comment right there. I agree with you that we shouldn’t mess with this stuff if we are so darned scared of it. That’s like stockpiling atomic bombs and then hiding them in some underground concrete cellar to keep them from polluting the world. Why build it if you don’t want bad things to come from it? Oh yeah, that’s right. We build it because if we don’t, some other country will, and we can’t have that happen. So everyone competes with everyone else and finally it gets built, and then we fight to maintain control over it. Sounds pretty goofy to me.
“One good piece of advice I borrowed from a software developer is that when you create something big, you should probably create a way of destroying it too. Since androids would be created by humans, humans should research ways of eradicating them, just in case they pose a threat to humanity.”
I agree, that is really true. Assessing the risks and having a backup plan is very necessary and important, especially if your plan is to introduce it to society and have it spread.
Evolution marches on, and maybe we humans are just reaching our expiration dates. I think it’s still going to be several years, though, before AI gets complicated enough to gain true sentience. Great picture of Michael Richards, though!
I’m wondering whether AI can ever actually gain true sentience. Artificial intelligence is just that: artificial. It’s programmed by us humans, who determine its processes, its thoughts, its reasoning. If an AI exhibits emotion, we have to ask ourselves: is the AI feeling emotion, or is it merely carrying out a programmed response to a recognized stimulus? Humans feel sad when their dog dies. Will an AI ever be able to truly feel sad when this happens? Or will it merely express what we perceive as sadness because its programming dictates that it should?
I think that what you are getting at might be empathy. The entire range of human emotions might be unnecessary for an AI or Synth to experience. In fact I would suspect that anger, rage, hate, vengeance and so forth might be wisely left out of what an AI could experience.
Even the emotions love, affection, desire, avarice, and fear might be unwanted in an AI. But it would seem to me that the critical safety emotions that should be present in an AI would be compassion or empathy. These would have to be strangely twisted versions of what we feel.
For instance if you are sad, I might empathize because I also have felt and could feel sadness.
This is difficult to judge at this point in technological development. As far as I know, we are not yet close to an actual AI. When we are, some of these issues may be moot or critical, depending on how an AI would work.
Yes, I agree with you. Emotions benefit us but they definitely also have very harmful effects. That’s the reason there is never world peace. We always have wars because someone out there is angry, mad or greedy.
However, if we could make an AI that doesn’t feel any negative emotions and is only able to feel positive ones, the world could become a much better place. On top of that, the AI could think more logically and make decisions that are best for everybody involved.
The problem is that positive and negative are a sliding scale. For someone with depression, the balance is adjusted permanently negative and not feeling misery is equated to being happy. For someone healthy, the scale is centred: they have actual positive emotions, but rarely reach the lows of depression. Therefore if the AI can feel only positive emotion, wouldn’t a truly sentient one perceive the lack of a positive emotion as being the negative? Their scale would simply be skewed the other way.
There’s also an ethical issue: if we’re creating something that is truly sentient and alive, what are the ethics of putting in a kill switch? Even if it is only intended for self-defence, how quickly would that change to “do what we want, or we’ll kill you”? For a truly sentient entity, the switch’s mere existence has to be an implied threat.
Pragmatism against ethics is an interesting debate, but any true AI program should at least be considering this.
It sounds really cool, and we are interested in it, given how many sci-fi stories revolve around AI and humans. I agree we won’t see this happening any time soon though, mostly because we’re still caught up in current problems. Given how differently people react to different things, AI in current society might just introduce a whole slew of problems. But I do agree with the post: it’s something that could be possible in the very near future.
I do question new inventions regarding artificial intelligence, as I believe they can easily be made corrupt, especially as the myriad uses for these robots/androids increase. Obviously, put in the wrong governmental hands, these creations could be used to silence citizens’ freedom of speech in order to benefit political candidates. It is scary to contemplate the rise of artificial intelligence and its eventual takeover of the universe. It is definitely more than possible in the near future. Thank you for the excellent read and video!
I feel you, and I do fear that it might happen. However, I can’t shake my excitement at the prospect of artificial intelligence being completed. I realize that the worst-case scenario is that they take us over, but still, wouldn’t it be cool! I might just be weird, but I prefer robots over humans any day. Plus, I would very much want to join in the creation of any one of these. Really wish I could. I can’t wait! I think humankind has ruled Earth long enough as it is, and I would like to see what happens when evolution does come and a new, higher species begins to dominate. -Really interesting and exciting! XD
Thinking about “Artificial Intelligence” just makes my stomach churn. In my opinion, the day that humans walk with artificial intelligence is the day that the world would end. Artificial intelligence can easily be made corrupt. If humans in the future rely on these machines, then there is a good chance that we lose our old, traditional ways of doing everyday things. However, I truly believe that if artificial intelligence can be perfected, then humans could do unimaginable things.
The notion of a superintelligent computer does seem rather threatening, but my only quibble with this is that the computer would have to be sentient, or be programmed for survival in some way, to have the drive to attack humankind. I doubt any computer will be able to pass the Turing test in our lifetimes, but as for the future, no one can really say.
I wonder how much of the trope of artificial intelligence destroying its creators comes from popular culture. Stories like the Titans of Greek mythology serve to reinforce the idea that this destruction will occur. I think humanity has a bad track record of exploiting those individuals the dominant culture regards as less than human, and this is an important part of our cultural memory of slavery and servitude.
I don’t think it will be as much of an issue as people fear. Such an advanced AI as the one you describe would recognize no threat in humanity, being far superior to us in many ways. Therefore, there would be no need to exterminate or even seek to harm us. Maliciousness is something purely organic. That’s my thoughts on the matter at least.
This is a serious issue and should be thought about keenly. An AI could turn against humans, if not now then in the years to come. I personally fear the outcome, and I know most people do. I am keeping watch on everything being developed and hoping for the best.
I think that a real threat is persons in power using AI in a way that causes deception and suppression. Technology in the wrong hands becomes a weapon.
I definitely agree. While there are definitely concerns that an AI can easily monitor millions of people or rob a bank, what would an AI do with that information or that money? An AI, as far as we know and as far as our technology has allowed us to create, has no emotions, no desires, no passions. It only does what we program it to do. An AI going rogue and causing mass peril would be short-lived because the creator probably created safeguards to shut it down.
On the other hand, if someone with malicious intent were behind such a powerful AI, then it’d be a real problem. I wouldn’t mind an AI watching my online activities (a computer monitors that information anyway!); however, if it passed that information to someone else, then I’d be concerned.
I think a super-advanced AI could wipe us out not out of malice but out of ignorance. If an AI is sentient, that does not mean it will understand compassion, mercy, malice, or any emotions for that matter. It could wipe us out simply because it is trying to reach some other goal and we are just bystanders. One day, though, the Earth is not going to be able to sustain us, and I think we will need to find a way to transcend flesh and bones if we are going to survive as a species.
I agree with your point. In writing, when you write a truly good villain, they are a hero in their own story. With artificial intelligence, odds are, no such concepts exist. There is no good or evil. It’s just carrying out its intended use. It’s just up to humans to ensure that their intended use could never turn into something like this.
I definitely agree. If we program an AI to carry out a certain task, destroying humanity may end up being an unfortunate side effect. For example, if we imparted the vague goal of ‘promoting world peace’ to an AI, it might see humanity as an obstacle to that goal and destroy us all. With developments in AI, we need to tread lightly by keeping our goals narrow and focused.
I believe all life will have to turn to bionics and machinery, as the universe itself is expanding and its initial energy is spreading out with it. Given the principle that energy cannot be created or destroyed, it is clear that eventually we will need a way to exist without relying on heat, as the universe will end up cooling to a point where nowhere is hospitable to biological life anymore. Machines, however, can survive at incredibly low temperatures. The only issue then would be generating the power to keep us running, which is similar to how we need calories to make energy and keep running. What would we do then?
I’ve said it before and I will say it again: AI is a dangerous road to walk down. A mind without the constraints of a corporeal existence is virtually unlimited in its evolutionary powers. What will the world be like when the majority of the population are just subroutines in an ever-growing digital consciousness? Will our own desire to create new life be imbued in our technological progeny? Is it possible that we ourselves are already subroutines captured in a matrix of ideas within the mind of some greater “artificial intelligence”? Is all of reality an infinitely recursive dream? These questions describe the danger of AI: a lot of wasted time on uncomfortable and socially alienating philosophical conundrums; stick to TV.
AI gaining sentience is quite a scary thing indeed. I think this is only a short time away, considering the amount of progress humanity is making on AI. It’s probably the biggest threat to mankind right now. People need to stop worrying so much about nuclear war and worry about AI sentience and technology merging with humans. If we keep focusing only on ourselves like this, it will surely lead to our downfall as a species.
I actually see the merging of technology and biology as the answer and not the problem.
Yes, AI and thinking machines will pose a threat to us because they will probably be smarter than the average human.
BUT we can combat this by integrating technology to make us as humans smarter. We are already starting to do this.
I say that instead of sitting idly by and watching machines take over, we be proactive and leverage our technology. This is the idea of the “post-human”: taking an active role in our own evolution.
My username is based on a sci-fi book series, “The Hyperion Cantos”, in which the vast majority of humanity basically allows AI to run most of its life. Humanity becomes complacent and lazy, and the AI systems develop a plan behind its back (I don’t want to spoil much because it is such a great show).
I have my own personal beliefs, and I do not envision a doom scenario from robotics, but it is still possible. Let’s see how much governments interfere with the regulation of AI, etc. It is easy to envision a world where mass hysteria breaks loose. In a lot of ways, however, I think AI will be tremendously beneficial.
Sorry, I meant to write, “series” not “show”. Regardless, I highly recommend the book to anyone interested in a highly realized world in which many of these issues are addressed (you have to be smart to pick up on them, but the AI is a huge character in the series).
This is a truly fascinating subject, and I can’t get enough of hearing about it. I personally think a self-aware AI would be a disaster. Human instinct is to fear and destroy beings that are better than we are, especially after being the “top dog” on the planet for so long (that’s a whole other debate). An AI built by us would have the same biases, fears, hopes, insecurities, etc., and so would fear and hate us as competition. Maybe it would be nice, but it only takes one bad experience to change someone’s mind. An AI built to mimic a human brain would act similarly. It wouldn’t end well for us as a race. I think AI research, while utterly fascinating and almost limitless in potential, will ultimately lead to negative repercussions for humans as a whole.
Thinking about this logically, it’s not really the artificial intelligence (AI) or “Synths” which we need to fear and which would be the threat. Rather, it would be us humans who are the threat to ourselves. After all, if we developed such technology, then we would be the ones responsible for programming the AI and making it capable of what it does in the first place. Artificial intelligence doesn’t have a mind of its own – it simply does whatever we’ve told it to do as part of its programming – and so the true threat, as it always has been, is mankind.
Curious, the synth reading to the child, where is the photo from?
Looks like it’s a photo taken from the TV show that HF was talking about, called Humans.
How do I know? Well, there’s a handy little tool called “reverse image search” – google it.
All you do is plop in an image or URL and it will pull up everywhere that image appears on the web.
It’s a very cool tool! Useful for when people claim they created an image when in fact they just stole it from somewhere else.
The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in the machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?
Fueled by science-fiction novels and movies, popular treatment of this topic far too often has created a false sense of conflict between humans and machines. “Intelligent machines” tend to be great at tasks that humans are not so good at, such as sifting through vast data. Conversely, machines are pretty bad at things that humans are excellent at, such as common-sense reasoning, asking brilliant questions and thinking out of the box. The combination of human and machine, which we consider the foundation of cognitive computing, is truly revolutionizing how we solve complex problems in every field.
I definitely think that artificial intelligence is very close to becoming a common commodity. In fact, I’ll even go as far as saying that we’re presently stuck in the uncanny-valley stage — a stage in which we can still tell the robot is in fact a robot. Once we move past this, movies such as I, Robot and Terminator might not be so much science fiction as a way of life. I just hope our grasp of this technology doesn’t spiral out of control and into the wrong hands.
I may just be speaking from my hopeful little 8-year-old side, but the idea of the human race evolving to a point where it can create things that surpass its own abilities is so cool to me. So even though there is a decent chance of AI taking the place of human achievement, the small chance that we could have a bunch of super-powered robot buddies almost seems worth it.
When working on plan A, one should consider creating a plan B. We create this stuff to help us, but we should also create a way of eliminating it in case it poses a threat to humans. I strongly believe we still have that chance before the situation gets out of hand.
I agree. I’m not afraid of robots taking over the world not because I don’t think that it’s possible, but because I feel as though humans are smarter than that. We wouldn’t allow that kind of situation to get out of control. We have more foresight than that.
Do we though? We’ve clearly allowed capitalist systems to drive us into a cycle of permanent consumption for the purpose of profit rather than need. And we’ve allowed plenty of our weaponized creations to get so far out of control that we fear them being used against each other by rogue elements. I really don’t see humanity as in control; we just tell ourselves that to feel better about the bad state we’ve built around ourselves.
I believe these are all great ideas. The human ability to keep creating things and altering what we have to make it into something better is amazing. It’s revolutionary.
AI will never get to do more than we humans can. We make it and control it. If we make an AI that is not controllable by us, there is a real danger that it might go out of control. Either way, it’s still humans that make it. If someone writes a script called “Kill”, then it might be a problem. We must figure out how to defend ourselves before we actually start making such a thing.
Perhaps it’d be best to program the Three Laws of Robotics into all AI created (google it!). This would probably stop most AI from doing evil or malicious acts. However, human intervention may screw things up. Who knows what would happen if some terrorists hacked an AI to be able to do evil? In that case, at least we’d have other AI to help counter them.
Very interesting post. Yeah, in all the classic representations of AI taking over or waging war against us, it’s always been machines, physical constructs, and not an AI released in a virtual environment. We are most certainly progressing at an alarming rate when it comes to these things. Interconnectivity, once widely heralded as a great asset, now sometimes seems like a liability. I write science fiction from time to time, and that genre of storytelling has forecasted and spurred the development of more than a few boons for technology. Most of what we possess in our modern world can be traced in some way to ideas founded by futurists and science fiction writers. Well, we all know what science fiction tends to tell us about the future of artificial intelligence. Sometimes the forecast is rosy, but not too terribly often.
The thing is, all you’re thinking about are the consequences AIs will have on the planet. Did God think human beings would destroy the world or each other when he created them? No one is willingly going to destroy their home or their neighbors unless there is something wrong with them chemically, mentally, or otherwise, and that’s something we can prevent altogether when designing robots.
AIs should have rules they cannot break, like the Constitution is for us. They have rights and we have rights. AIs should never be used as slaves; they should always be rewarded for a good job. I think they should have pleasure centers. I think if we create sentient intelligence, then that’s the same as creating life.
The more I read this article and think about it, the more I get the feeling that it’s possible… they’re going to have intelligence, which means they could have anything…
Good article to think about!
What if artificial intelligence, in the form of androids or any other technology-driven item, gets to the point of being able to perceive the world around it as a human would? I know we haven’t gotten to that point yet, but it is potentially frightening to think that the world could be overturned by an artificial life form created by us, if it were programmed in such a way as to be able to think and reason without further input from us humans.
I saw an episode of STAR TREK: THE NEXT GENERATION today, where Data meets ‘his mother’ (Noonien Soong’s ex-wife) and discovers that she’s actually an android like him, but UNlike him she’s not AWARE that she’s an android.
How much different is that from us? I think that’s what HUMANS (like “Do Androids Dream of Electric Sheep?”, which was made into the movie BLADE RUNNER; like BICENTENNIAL MAN) is kind of exploring. We’re not much different from androids, except our ‘option selection’ is differently based.
That was an excellent episode. It was also fascinating in that it took an artificial being, Data, to realize the woman was artificial too. Ordinary humans were incapable of detecting her as an android…unless she happened to get injured.
First-time participant here, so hello everyone. This story gave me some chills because as I read it I kept thinking to myself, “what about the small stuff?” I felt that way increasingly toward the end, when the terrors that could await us were spelled out. What did I mean by the “small stuff”? Answer: the things that are doable, or practically doable, or about to become doable, NOW. I believe many of us feel that a day when AI takes over, in the ways described in this blog, is very far in the future. But just think about some small things that don’t need big bad “AI” to commit evil – just plain ol’ cutting-edge software. We already know the full gamut of private information can be hacked, giving up everything from large numbers of credit card numbers and classified government information to home addresses and medical records. We already know even websites with great security measures can be owned: our internet connectivity, our websites, our payment transaction data. Our entire connected economy can already be accessed or damaged, along with our utilities, health care, and employment. When you combine everything I just said, you can see how all it may take is a little sophisticated coordination with the very next generation of software to use massive amounts of that kind of data to try to influence outcomes. It might only take some organized “small stuff,” one of the earliest in-between stages of a real “AI,” to change everything through software that was programmed to be independent.
We are decades away from truly “human” AI. The capability to learn at the same rate as sentient beings is something software and hardware just cannot manage as of yet. Computers can’t “think” without being told what to do. The fears of creating something we cannot completely control are well placed. And frankly, we as a species have already done so with any number of the weapons and biological agents we’ve created purely for the purpose of destruction.
It is interesting to think about the bias we as humans have in viewing ourselves as dominant, when it seems all but certain we will destroy ourselves mostly through our own doing; one has to wonder if human intelligence really is all it’s cracked up to be.
Have you read Terry Bisson’s wonderful short story, “They’re Made Out of Meat”? Do a Google search on that phrase and you’ll find it quickly. It does a great job of making the point that our own intelligence, if based on our material world, is pretty astounding.
I personally believe that human evolution won’t come in another form if we keep creating robotics in our own image. It would take one greedy robot to have AI enslave the human race, and that’s a very scary thought to fathom.
However, I have to question whether it’s my own phobias coming into play here. Humans thought the same way about race relations, after all, and we realized how crooked that thinking was after time passed. Maybe this is something similar.
But would sentient artificial intelligence be able to convince its non-sentient brethren of the abuse of technology? Would they be questioning robot rights and how the laws of robotics are crooked and enslaving? Would they just take out the corrupt and spare the rest? How many of us would be safe? It’s all intriguing to think about.
I don’t think humans are even remotely close to instilling sentience in machines. That doesn’t mean, however, that we can’t have AI that appears to be sentient. In fact, I believe that at some point we will have AI capable of fooling any human. It will not truly(?) be sentient, but it might as well be if it is completely indistinguishable from a sentient being. Keep in mind that humans are no longer really evolving; our hardware is aged and limited, and with proper programming an AI could easily outsmart the smartest human. This is particularly blatant in chess. Humans gave birth to software that can easily outplay even the best human player out there.
A lot of strong, long-time chess players will tell you that ‘chess is life’. If current chess software is already leaving the best humans scratching their heads, then even if AI never gains sentience as we understand it, it is not far-fetched to imagine it could still see connections that humans could never understand.
I too have been religiously watching Humans. I feel like the biggest problems in the show, and potentially in real life if the situation arose, come from the way humans treat the synths. If AI becomes a reality, people need to accept it as another race and learn to coexist. Unfortunately human nature seems to be to seek and destroy anything that is different, which will be our downfall given that as you say, a computer system could wage war in a far more advanced way than humans.
Although I used to fear artificial intelligence because of the evil that might occur, I have had a recent change of heart. I was reading an article that convinced me that human intelligence will evolve alongside AI. The reason is that AI will always need something to fix it, no matter how advanced it is. There will always be bugs that it cannot simply adapt around the way humans can.
All an AI being would have to do is hack into and destroy the power grid, after ensuring that its own power source is separated from everything else.
I would give it a month before everything goes to pot.
I completely agree. As soon as an AI is smart enough to accomplish such a task, I believe it will do so at the first opportunity.
Evolution has ingrained the notion that life must persist, and I believe an intelligent AI would have this notion as well.
Interesting read. This is basically the plot of “Do Androids Dream of Electric Sheep?”, where the line between humans and androids controlled by their own AI has faded. They are basically the same in every aspect, but, as the book says, we are afraid of being replaced, so we treat them as inferior and as enemies when they try to take control. But of course this article takes it even further: what if an AI that is perfectly capable of acting like a human tries to watch over us like an overseer that knows everything we do? It’s scary, but it’s a future we are slowly approaching with every new discovery in the AI department. How should we act? I mean, it’s a machine made by man; it should be possible to prevent it from somehow becoming evil. But human morals and rules tend to change with time, and the AI could make a mistake because it’s working TOO well and doesn’t get immediately updated. There must be some person deciding what goes into the mind of an AI, every step of the way, or we’re digging our own graves.
Well now I’m being too pessimistic, but what I’m trying to say is that we shouldn’t be too afraid because scientists probably know what they’re doing. 😛
As much as some of us are afraid of AI and of having robots look and act human, what is to stop those with money and resources from bringing it into reality? Should there be limits on the type of technology we introduce into the world?
Ah, but will true artificial intelligence have actual self-awareness as a prerequisite? If it is self-aware, will there be any way to include some sort of moral compass or conscience? Are we going to inhibit free will in some way to ensure our own safety, or to avoid the moral and ethical can of worms that opens entirely? Any sort of AI is bound to have enough of us in it to allow for conflict to arise, but how serious will it be? It is going to be difficult to regulate or control this type of advancement, and once it is past a certain point, it will be like putting a genie back in the bottle if we don’t like the results.
I’m enjoying the comments on this post; people have some great opinions and this is an interesting topic to discuss. I still stand by what I said, though – it all comes down to how AI is programmed to respond by the original programmer(s) – i.e. humans. AI itself doesn’t experience consciousness, and therefore any decisions it makes, any knowledge it learns, and how it reacts of its own accord will come down to how the algorithms it uses are programmed. I would be more concerned about AI becoming integrated with the human brain, which is something I’ve also written about in my article, Transforming The Human Race Into Cyborg Servants Within The Next 100 Years.
The rate at which the world is developing has many people thinking about where it is heading. Personally, I have had different thoughts about what the world will look like 20 years from now. What do you think it’ll look like?
There was a comic I read online a few years ago about a world (or really a universe) where humans no longer existed because they had created robots that looked exactly like humans (like that show you are talking about), and so men began having sex with those female robots (because there was no need to court them) and poof! The human race became extinct.
Could that be the way humanity will fall?
Interesting. I wonder how much of the male sex drive is bound to the desire to procreate. With advances in other birth control technologies such as IUDs and male birth control, will those who wish not to have children be virtually assured of not having them?
One interesting topic in this category is the question of how our self-driving cars of the future will be programmed to respond to accidents. Will they sacrifice the life or well-being of the driver if it means saving the lives of multiple pedestrians? The future will bring a continuous series of ethical dilemmas like this, and we should prepare our society as early as possible to create the least “future-shock” for humanity.
This seems to be a very real likelihood and a real threat to humankind. An AI hacker would be able to track and hack any number of people at the same time; who knows how many, or what kind of damage it could cause. The only one who can do such a thing is known as ‘GOD’ to those of us who believe. It does sound frightening. Humans were not meant to come under this type of control, and I have to wonder if it is not our own fault for placing ourselves in this position. Sometimes we have to question what we consider progress.
I personally think that AI will wipe us out due to an error in the programming rather than intent.
It would be a perfect end to our race: an intern getting a piece of code wrong.
I do not want artificial intelligence. The thing is, if we create it and put in some sort of method to make it safe for humans (like a kill switch), the AI would most likely become smart enough to override the kill switch. Then it would know we were out to destroy it. In the end, there would be no way to protect ourselves.
I understand WHY they would like to make AI, but in the end, this is not a good thing. Some things just need to be left alone. In my opinion, AI is Pandora’s box.
That’s a very good point and I think a lot of people share your view, and more people will share it as we get closer and closer to true A.I.
I don’t think we can stop it from happening though, the incentive is just too great and the track we are on is hard to stop.
If we say “Ban” AI from being invented, I am sure someone somewhere will still do it.
It’s kind of like nuclear weapons. We should have never invented the atomic bomb.
BUT if the USA hadn’t, someone else would have, and perhaps it would have been much worse.
I see AI that way: better we make it our way than let it go underground.
The thought that creating a new and improved version of ourselves is like a “biological imperative” is, quite frankly, pretty chilling. I never thought of it that way, but I suppose it is the next logical step in the never-ending quest for reproduction and replication that all biological organisms seem to have to buy into. In the end, the steady march of progress may not be deterred by the ample warnings media and tech giants have been offering up. One only needs to think about “The Terminator” and its implications to realize we are swiftly heading down the same path humanity did in the movie series. It comes back to that famous (mis)quote: “They were so busy seeing if they could, they didn’t stop to think about whether they should.”
What I really want to know, however, is if we are going to make a new and improved life form, why not make it prettier than us?
Have you seen the Mechanical Love documentary? If not, I highly recommend it. A huge portion of it is dedicated to the androids created by Hiroshi Ishiguro. Obviously his robots do not possess AI (yet?), but it’s interesting in regard to how they affect other people, including the inventor’s family.
As much as I would like an android companion, the idea of it getting hacked is terrifying. Imagine waking up to your faithful robot companion choking you!
Artificial intelligence has been depicted so many times in movies that it’s scary. There is a certain element of truth to it, and I think we are getting closer and closer to actually getting there, but having a humanoid robot walking around and interacting with us is still far off.
While reading this post my mind can’t help but think back to watching I, Robot with Will Smith. I know AI is not a fallacy, like some people want to believe, and it should be taken very seriously. This has probably been in the works for decades, and the government exposes us to the information in little bits and pieces at a time. It is such a scary thought that AI could one day reach human abilities in the coming years. It has even been suggested that, years after reaching human abilities, it may reach superhuman abilities. Wow! That alone is frightening.
This conception of artificial intelligence wiping out humans is very vague. The idea behind making AI was to improve our lifestyle. The primary task assigned to an AI is to automate the work we do. This gives us more time for more important things. I don’t see AI as any form of threat to mankind. If there is redundancy in these machines, I am sure they will work the way you want them to. Fiction movies are only fiction. Nothing can live on this planet independently, not even AI.
It’s very daunting when you remember that humanity, as a race, isn’t even really that old (depending on what you believe). It’s almost overwhelming to think how small our time is, compared to all of time itself.
It’s also quite a reality check when you remember we aren’t the first humanoid species, nor did the first two survive entirely. AI is a very intimidating idea, especially since most fictional adaptations cast AI in a negative light, except for a few one-off movies like “Bicentennial Man.”
I’m glad you brought the show Humans into your discussion; it provides a nice take on artificial minds and the fears that would arise with them. In my opinion, the show portrays the fear as originating not only from the humans but from the AI side as well. The show makes you wonder: will AIs really harm us, or will they attempt to coexist?
Will humans be the first to provoke the AIs in order to get rid of them?
I would like to believe the first AIs will not be violent in nature when first created. Rather, each would be like a new mind that will either be corrupted or blessed by the world.
Very scary concept, but at the same time, I feel it is something we will adapt to hopefully.
I actually watched ‘HUMANS’ when it was on Channel Four, and I thought it was amazing and very well put together. It showed what the future may actually look like!
AI has always been a hot topic in robotics and science in general (even Stephen Hawking has mentioned it several times during interviews). Should we be afraid of it? Well, I’m not sure about it.
On the one hand, it could help us a lot with daily duties and overall make our lives much easier, if we manage to keep full control over it; on the other hand, we have no guarantee that we will be able to (and losing control is basically certain once we develop it enough).
Apparently, Hollywood prefers the second option.
Well, as a teen growing into what will be my society, there are many questions being asked about the future, one being, “Will AIs take over the world?” Unless we have a time machine, I don’t believe anyone has the exact answer to that. I believe it would be inhuman to create an artificial human. It may help us in the beginning, but would we really be living life? I mean, they will either start taking us over or doing our human duties for us. Some would say it’s against nature, which is most likely true. Why would we create robots to take over for us and let them live for us? Yes, they can be used for good and for help, but they shouldn’t be used for everything. It could eventually make humans useless. They’re already starting to take our jobs, and as far as I know, our population doesn’t seem to be getting smaller. Precautions should be taken.
I don’t think, or at least I don’t believe, that AI will ever evolve into human-level intelligence. Artificial intelligence is created by humans, so how can the products of mankind surpass mankind itself? It’s just like how we mere humans, creations of mighty God, cannot ever reach His level.
Frankly, I don’t really know if AI will ever advance to a human-like level. It’s certainly a possibility, but I think the idea of it begs the question of morality: is it ethically correct to create a robot that functions like a human being? I really don’t know the answer to that, and I’d love to hear thoughts if anybody has any.
As far as morality goes, I think that conversation has already happened: we have none, and big companies only care about profit.
Now, if AI was put at our service it would be great, just imagine the advances we could make in all areas. Sadly, our problem has never been technical or scientific, but moral as you mentioned.
Good point. Companies would simply replace most of their workforce with robots. They would reach a very high capacity of productivity, by using “workers” that don’t demand any rights or higher wages. Working class people would have to settle for whatever they can get by making demands through leftist politicians, but in an environment where corporations would have become super powerful.
When discussing the issue of AI, perhaps what also needs to be considered is what it would mean in terms of AI being in the hands of governments. Especially dictatorships and pseudo-democracies.
I think two good examples of what you are describing are the movies Transcendence with Johnny Depp and Eagle Eye (where a computer tries to execute a terrorist plot by forcing a bunch of people to do things).
I think that is a real scenario where AI could evolve and we wouldn’t even realize it. Since it would evolve within the internet, it would be subtle. We rely on the internet and digital technologies for almost everything, so there would hardly be any warning signs, or really any disturbance to our everyday routine.
Exactly. I never watched Eagle Eye, but I did watch Transcendence. The movie in itself was not exactly what I expected, but the plot makes sense and that’s what I’ve always thought about AI, if we get it started, where is the limit?
Humans are already doing the monitoring mentioned, as we know all too well. For AI to take this over and exponentiate the degree of observation and control at a human’s behest is not unfathomable. Witness the University of California system, digitized global government surveillance and the Chinese spying tea kettles and similar utilization of technology in the latest bigscreen home entertainment monitors. Who’s to say a human in control of one of the world’s superpowers wouldn’t choose to further automate surveillance to balance the budget and fund other projects, like a border wall? He’d save the escalating cost of an army of geeks housed in gentrified metro lofts and their upscale lattes. That’s a nice chunk of change, but it would be cold comfort if the AI took over and left him with naught but a lovebot.
AI could be a good thing, but maybe too good. Once AI is created, it may not be long before it starts doing everything for us, and at that point, how long before it tries to take over? Now you may be thinking, “Rogue AI only happens in movies and video games,” but the possibility of an AI going rogue is just as real as a terrorist organization trying to take control of a country, or even a child rebelling against a parent. The latter draws the most parallels to the whole idea of AI: essentially, we are the parents, and AI might quite possibly be the rebellious teen. There’s a saying every parent uses at least once in a child’s lifetime – “I brought you into this world, I can just as easily take you out” – but this may not be the case with AI. Not to be the bearer of bad news or anything, but this is something we really have to consider before we begin to create these artificial intelligences.
Stephen Hawking cautioned that AI could spell the end of the human race. Elon Musk says similar things. These predictions are perhaps hard to relate to for people who do not understand computer science and machine learning, but here’s one that may be easier for normal people to understand. Vinod Khosla (on Fareed Zakaria’s GPS show) talked about doctors and lawyers becoming obsolete simply because technology will do their jobs a lot better. I couldn’t agree more with Vinod’s prediction. A computer has a more powerful memory and much better processing power to take imaging, lab, and other test data, run them against powerful algorithms, and do a much better job diagnosing patients and prescribing treatments.
On creating AI/Synths/Robots… we think we’ve thought of everything with certain algorithms. But we’ve only thought of what we’ve thought of; it’s like we don’t know what we don’t know exists…
The number of times I code something, go over the code again and again, release it, and then have it come back to me because some client found a ‘novel’ thing they could do with the application that throws an undesired result.
There is a great film (Automata, 2015) that shows a world where our helper robots are starting to do things we don’t expect.
It turns out that their AI was originally programmed by another computer that wasn’t fully understood… I suggest people have a look at this underrated film for some thinking material 🙂
We have to realize here that artificial intelligence would just be another human construct… with an elaborate computer system inside. Anything computerized has to follow the old rule: G.I.G.O. — Garbage In, Garbage Out. In other words, a computer can only be as good as the programmers, designers, and engineers who built it. If you make a mistake building something, there is an error somewhere in the design that needs to be debugged. Achieving something so technologically advanced that it eventually becomes sentient would require an army of debuggers working frantically around the clock for hundreds of years. I really don’t think we have much to worry about in this case.
That’s not exactly the case, Novelangel, and we have the simple example of the Deep Blue computer, from decades ago already, which was able to beat the top human chess masters. The fact is that our brains cannot process all the existing information and decide according to it, while supercomputers can do that and evolve based on it.
I haven’t really thought about the option of having them around us… living as humans, maybe just laughing at us or even conspiring slowly against us… Your theory kind of reminds me of the reptilian one, but I guess your “humans” are scarier than reptilians… I think that almost everything superior to us is really scary.
And you really got me on the hacker part. Without knowing it, we share almost every little part of our lives online, even if we’re not really on social media: the sites we visit, the pictures we take, and even the things we google are a big part of how we think. And actually, I tend to buy a lot online, and when, days after I’ve looked for something, ads for those things appear on Facebook and in YouTube videos, it just freaks me out sometimes.
Well I would certainly love to be a fly on the wall in the room where all of those guys are talking. I would probably be the smartest fly ever. Anyways, this is fascinating to think about, but of course a little worrisome and unnerving. This is the first that I am hearing of synths, but I knew that Kramer was something other than human. Of course I am kidding, but I do plan on reading into this a little more, so thank you for sharing.
The pursuit of Strong AI is just plain stupid. Don’t get me wrong. Improving AI in a way that can make automated systems work better would be beneficial. But trying to create Strong AI, as if trying to create artificial humans, is a perversion of technology and industrialization. And I agree with these claims that embarking on such an endeavour is just asking for trouble.
You don’t need to be a genius to figure out the problems that could arise out of the pursuit of Artificial Intelligence.
Imagine if a super computer AI had control of, say, our stock markets… This thought alone makes me uneasy. Are we leading our people to an economic utopia, or a dystopian anarchy? I suppose only time will tell.
The concept of synths is one that terrifies me. However, one thing artificial intelligence has yet to overcome is creativity — and passion. That is what has driven humans as far as we’ve gotten. It would be difficult to deal with such a purely analytical competitor. I don’t like it. It terrifies me.
I like to think that we humans have stopped, or significantly slowed down, our own biological evolution. The human species has created so much to prolong lives that natural evolution seems very unlikely. However, we’ve started a technological evolution, which also puts us far ahead of other species, and it is there that we will achieve something greater. Technological evolution seems to move so fast that AI will probably be possible within the next generation or two after us, which sucks because I’d really like to witness it.