“What I think machine learning will do is make us (people) behave less like machines, not more,” said Eric Schmidt, executive chair of Alphabet, at “The Magic in the Machine” press event held recently.
He was quick to dismiss fears of a future “robot takeover.” The type of machine learning Google is doing “has something to do with services that are on the Internet; about information that will make (humans) smarter.”
Alphabet is now the parent company of search engine giant Google.
“The Magic in the Machine” talks revolved around the analytics and algorithms working behind the screen, and how prediction could push services such as Google Translate and Google Photos into the mainstream and make Google the leader in this technology.
Days before the press event, Google introduced a new Inbox app feature called Smart Reply, which suggests brief replies based on the user’s previous email activity.
Learning process gradual
Schmidt said Google was trying to understand how the brain worked, but “I don’t think we will build human intelligence by having a computer simulate the brain, partly because it’s too complicated. There are too many neurons in the brain. I think with an understanding of how humans collect memory and focus on things, we can build better computers.”
“Machine learning is like a rocket engine and data is the rocket fuel,” explained Greg Corrado, senior research scientist at Google. He works at the intersection of artificial intelligence, computational neuroscience, and scalable machine learning. Corrado is also the cofounder of Google Brain project alongside Jeff Dean and Andrew Ng.
“The learning process is very gradual,” he said. “For real apps, it takes thousands, even billions, of repetitions of this cycle to make learning better.”
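The cycle Corrado describes can be sketched in a few lines. This is a minimal illustration, not Google's implementation: a toy model repeatedly guesses, measures its error, and nudges a single parameter toward a better answer.

```python
# Minimal sketch of the repetition cycle: predict, measure the error,
# and adjust slightly. Real systems repeat this over millions of examples.
def train(examples, steps=1000, lr=0.01):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the current guess is
            w -= lr * error * x    # nudge w to reduce the error
    return w

# Learn the rule y = 3x from a handful of examples, repeated many times.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(w, 2))  # close to 3.0
```

A single pass over the three examples barely moves the parameter; it is the thousands of repetitions that make the learning converge, which is Corrado's point.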
Machine learning is best illustrated in Google Photos, where mobile device users can search for particular images just by entering a keyword. It can identify animals, food, and even landmarks “almost effortlessly” because it has read the users’ data on the device. Even as more images are stored, Google Photos can quickly comb through thousands of photos in the library.
Google Photos uses 22 layers to recognize image types. The first layer identifies basic features like colors; the representation grows more complex as the layers go deeper, until the network can identify the image users are looking for.
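The idea of layers building on layers can be shown with a toy sketch. This is purely illustrative and not Google Photos' actual model: each "layer" is a function that consumes the previous layer's output, moving from raw numbers toward an abstract label.

```python
# Illustrative only: stacked layers turn raw pixel values into
# increasingly abstract features, ending in a label.
def layer_colors(pixels):
    """First layer: basic features, here the average brightness per region."""
    return [sum(region) / len(region) for region in pixels]

def layer_shapes(features):
    """Deeper layer: combines simple features into coarser patterns
    (here, brightness changes between neighboring regions)."""
    return [features[i] - features[i - 1] for i in range(1, len(features))]

def classify(patterns):
    """Final layer: maps the abstract pattern to a (made-up) label."""
    return "landmark" if max(patterns) > 50 else "food"

image = [[10, 20, 30], [200, 210, 220], [40, 50, 60]]  # toy "pixel regions"
print(classify(layer_shapes(layer_colors(image))))
```

A real deep network differs in that every layer's behavior is learned from data rather than hand-written, but the composition of simple features into complex ones is the same principle.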
“It isn’t magic,” Corrado said. “It’s just a tool or a new way of building software and it’s not something that automatically works. Data, model, and computations are all important.”
Deep learning is a powerful sub-discipline of machine learning, according to Corrado.
“It is slightly more complicated, but the important thing is they (machines) learn,” he said. “The model is based very loosely on what little we know about the brain. Each neuron gets to learn its own function.”
Faster, smarter system
Google is investing a huge amount of time and effort in this technology, with the aim of defining next-generation Internet services.
Replacing Google’s old machine learning system is TensorFlow, “a faster, smarter, and more flexible system,” which the company claims is five times faster than its first-generation system.
Schmidt revealed Google is “open-sourcing TensorFlow.”
“Why would we release software that’s so valuable?” Schmidt asked. “Because we gain if the industry gets smarter. This is so new that if we get everyone using TensorFlow, the smart people who are in the universities and research labs all over the world, then we will get the benefit that they’ll make it stronger and better. But we also bet that there would be more knowledge and more discovery.”
Schmidt explained that Google does not expect artificial intelligence (AI) to “look just like a brain. I think it will be its own structure derived from some ideas.” He said machine learning, or AI, will be more useful and “very good at predicting a sequence of things. If you have 1234567, it can predict the next one quite accurately.”