“This post may contain affiliate links. If you click through and make a purchase, I may receive a small commission at no extra cost to you. Feel free to check out my Affiliate Disclosure page.”
“The future depends on what you do today.” – Mahatma Gandhi. Flickr photo by emberrandt, shared under a Creative Commons (BY-NC-SA) license.
Artificial Intelligence is intriguing, scary, and exciting all at once. But what about self-taught AI? Do we really need to be afraid of it? I have to admit, I would love to see this kind of technology. I can also see these AIs totally taking over. Let’s do some research here and see what we think by the end of the post. Are you with me? Let’s go!
How Fast Are They Learning?
How fast is “fast” depends on how you look at it. For some people, a few years may seem too slow, while others consider that lightning fast for technology. The strange thing is that these systems aren’t teaching themselves as fast as you might think.
You may remember AlphaGo, which beat Go champion Lee Sedol back in March of 2016. Its successor, AlphaGo Zero, taught itself the game from scratch, but even then it wasn’t learning through self-programming. “The clever insights making Zero better was due to humans, not any piece of software suggesting that this approach would be good. I would start to get worried when that happens,” says Anders Sandberg of the Future of Humanity Institute at Oxford University, quoted in the article “Self-taught, ‘superhuman’ AI now even smarter: makers” by Mariëtte Le Roux.
It seems the biggest challenge we face is that self-taught AI can’t get past video games. Games are much simpler than the real world, and transferring those skills is much harder because the real world holds a near-infinite number of possibilities at any given moment. Imagine a self-driving car with an object flying toward it, such as a bird, or some random item in the back of a moving vehicle that suddenly gets tossed out by a gust of wind.
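To give a feel for how this kind of trial-and-error learning works in a simple game, here is a toy sketch in Python. It is nothing like what AlphaGo Zero actually uses (deep neural networks combined with tree search); it is just plain tabular Q-learning on a made-up five-square game I invented for illustration, where an agent teaches itself to walk toward a goal purely from a reward signal, with no human-written strategy.

```python
import random

# Toy "game": start at square 0, reach square 4 to win.
# Actions: 0 = step left, 1 = step right.
# This is a hypothetical illustration of self-taught learning,
# not a description of any real system.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table: q[state][action]

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0  # only winning is rewarded
        # Standard Q-learning update: nudge the value toward
        # reward plus the discounted value of the best next move.
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state

# The learned policy: which way the agent now prefers to move in each square.
policy = ["left" if s_q[0] > s_q[1] else "right" for s_q in q[:GOAL]]
print(policy)
```

After a few hundred self-played games, the agent settles on always stepping right toward the goal. The catch, as the article notes, is that this only works because the game has five squares and two moves; the real world does not hand you such a tidy table of states.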
Remember the Facebook AIs?
Many of us believed Facebook shut down its chatbot AIs out of fear of the new language the bots had developed, one the developers were unable to understand or translate. In fact, the real reason they closed the experiment down was that they simply wanted the bots to negotiate trade deals in comprehensible English.
Because the AIs developed their own language, it undermined everything the developers wanted to test, from something as simple as negotiation-tactic research to, as some people claimed, something evil being afoot. Honestly, when I heard the latter, I was both worried and excited. To think that humans could produce something so… so… intriguing would be an epic achievement for mankind!
Currently, OpenAI’s bots are being tested in competitive strategy games like Dota 2 and StarCraft II. Even though they have been successful at Dota 2, mastering the strategic gameplay of StarCraft II has proven harder, because the latter holds many more unknown variables and outcomes than the former.
The cool part is that Elon Musk co-founded OpenAI because he wanted to show everyone what self-learning AI can do, and how dangerous it could be if we aren’t careful. Luckily, this research isn’t meant for war, not yet anyway. AI research engineers are applying what they learn to other projects so that robots can help humanity in the long run, not destroy it.
Elon Musk Isn’t Fond of Them
Now, I’m not saying that he doesn’t want AI or wants to eliminate the idea entirely. He just wants us to tread carefully. If we don’t, there could be dire consequences. If you want my opinion (and since you’re reading this, you just might), I think his fear is warranted.
His fear is that one day we may not be able to keep AI under control. As humankind tries to do with everything else, we are going to want to weaponize AI, and that is when we could well lose control.
In a Joe Rogan podcast clip posted by JRE Clips on YouTube, Elon invokes the old saying, “If you can’t beat them, join them.” By joining them, he means cybernetics. We won’t talk about that here, though. We’ll save it for another post.
Will We Just Be in Their Way?
Let’s play devil’s advocate and say this actually happens: self-taught artificial superintelligence exists. Where does that leave us? Will they be peaceful, or will they want to be the superior beings on this planet? There are many scenarios where they could change our lives for the better, or for the worse.
In the “better” scenario, they help and teach us to fix many of the world’s problems. No more cancer, no more disease, no more death. Peace becomes possible in this day and age, and robots are considered our friends. But what if they are taught to protect us? That may bring up the worst-case scenario.
In the “worst” scenario, we could become slaves or go to war. They may decide that we are the disease that needs to be eradicated, that we are no longer relevant on this planet. Or maybe they decide that to protect us, we need stricter laws, more order, and no freedom.
Movies That Have Mentioned This
Here’s a movie everyone knows and loves: I, Robot. Remember how V.I.K.I. (Virtual Interactive Kinetic Intelligence) wanted to protect humanity herself, and what better way to do that than to take away our freedom? She wanted to make sure we could not harm ourselves, following the First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm.
What about the Terminator series? Because Skynet became self-aware, the scientists tried to kill it off, which is why it saw humanity as a threat to its existence. Who’s to say that in the real world, our own people won’t try to do the same? Humans scare easily, and we are afraid of the unknown. AI self-awareness would definitely be an unknown for us.
I agree with Elon: we need to tread carefully. This is an exciting time for us, but we do err. After all, we’re only human, right? We need a backup plan for our backup plan, a Plan B for our Plan A, and so on. What do you think? Should we walk on eggshells, or are we overreacting to something that may never occur?
“AI is a fundamental risk to the existence of human civilization.” – Elon Musk
“Self-Taught AI Masters Rubik’s Cube in Just 44 Hours” – George Dvorsky
- Google’s DeepMind AI Learning to Walk – Tech Insider
- DeepMind’s AI Takes an IQ Test – Two Minute Papers
- DeepMind’s AI Learns to See – Two Minute Papers
- OpenAI Five: Dota Gameplay – OpenAI
Learn More About AI:
- AI 101 – How to get started