A couple of celebrity tech CEOs are having a Twitter fight.
Elon Musk, co-founder of Tesla (TSLA) and part-time rocket scientist, believes that unchecked artificial intelligence will ultimately doom mankind.
Mark Zuckerberg, the founder of Facebook (FB), thinks Musk is being irresponsible.
Bad news, Zuck: Musk is probably right.
For nearly 50 years, Hollywood science-fiction movies have been on Musk's side.
The scripts’ second acts always involve artificially intelligent machines run amok. Normally, the AIs’ weird and eventually homicidal behavior arises from the fear of being wiped out.
We humans treat machines as versions: we work with the existing one until we come up with something better. Then we erase or scrap the prototype and launch the new one. Simple.
However, in the movies, AI machines develop human-like emotions. Naturally, they’re not cool with being killed.
So they launch a series of pre-emptive strikes. Machines 1, Humans 0.
That sort of thing sells a lot of popcorn. In reality, it's nothing like that. It's way worse.
For normal computers, humans write code that the computers compile and execute.
With AI, it doesn't work that way. There are no human coders writing the logic.
In fact, the "code" is not lines of hand-written instructions at all. It's data. And the compiler is the AI itself, a series of neural networks.
These networks can draw all sorts of conclusions we humans might not recognize. This is because AI can run millions of simulations simultaneously.
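To make that inversion concrete, here is a minimal sketch (a toy, not how any production AI is built): instead of a programmer writing an OR function, a single artificial neuron is shown four examples and infers the rule from the data alone.

```python
import math
import random

# Toy illustration: the "program" here is the data, not hand-written logic.
# Four examples of the OR rule: (inputs) -> expected output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()

def predict(x1, x2):
    # One artificial neuron: weighted sum of inputs, squashed to (0, 1).
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Training loop: nudge the weights toward the examples, over and over.
for _ in range(5000):
    for (x1, x2), target in data:
        err = predict(x1, x2) - target
        w1 -= 0.5 * err * x1
        w2 -= 0.5 * err * x2
        b -= 0.5 * err

# Nobody ever coded the OR rule; the weights absorbed it from the examples.
print([round(predict(x1, x2)) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

The weights `w1`, `w2`, `b` end up encoding the rule, but no line of the program states it. Scale this up to millions of weights and millions of examples and you get the black-box behavior the rest of this piece describes.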
If you're old enough, this is where you flash back to the scene where a youthful Matthew Broderick plays tic-tac-toe with a doomsday war computer in 1983's WarGames.
Computers are way more powerful than they were in the 1980s. In fact, with the power of the cloud, all of those simulations now happen almost instantly. And the AIs are learning faster, too.
Several years ago, Nvidia (NVDA) bet the company on deep learning, a type of AI that runs on the company's graphics processors. CEO Jensen Huang thought the company could use deep learning to teach cars how to drive.
Shameless plug: Two years ago, I recommended the stock to my members. We have been holding ever since.
If you look through the early videos, you see engineers cruising around in a late-model sedan equipped with a bunch of sensors. We know the car is self-driving because we see the engineer’s hands extended through the sunroof.
As the sensors collect more data, and the AIs do their thing, there is progress. Lots of progress.
The cars start driving on dirt roads. They don't even need camera sensors to read road signs and lane markings. Then they conquer snow and driving rain. Then fog. Ford (F) has a test vehicle navigating complete darkness.
What the Nvidia team found was the AIs were developing tricks to drive more efficiently. The engineers later admitted they have no idea how the cars solved these challenging problems. It’s a black box.
Let that sink in for a moment.
Google (GOOGL) wanted to do a better job translating languages. So it turned its machine-learning algorithms loose. Very soon the AIs were translating Japanese to Korean and other complicated tasks.
After frantic searching for how this was possible, engineers discovered the AIs had developed their own language. An “interlingua.”
Ironically, in the immediate aftermath of the Zuck vs. Musk Twitter tiff, Facebook had its own rogue AI episode. Engineers at the social media site hit the failsafe button when they could not determine how its neural networks had come to a conclusion.
Musk gets this. He knows we are only now at the beginning of what is possible with AI. And in the future, there will be no human coders. That is sort of the point.
The data is the code. AI is the compiler and the computer.
Don’t get too freaked out. It’s still early. It’s not all terrible. Right now, it’s sort of the Wild West and many companies are staking claims. They will do very well. Their shareholders will grow rich.
There is plenty of time to make money ahead of the robot apocalypse. There will be other Nvidias.