Google deploys neural network as self-learning system
Researchers at the Google X lab have built a neural network that runs on 16,000 processor cores. On that hardware they simulate a network that taught itself to recognize cats in YouTube images.
The network, which Google distributed across a thousand machines containing the sixteen thousand processor cores, consists of 'neurons' joined by more than a billion virtual interconnections, or synapses. The researchers unleashed their neural network on a collection of ten million thumbnails from YouTube videos, in which it searched for images of cats. The software did this without human intervention: no images were labelled in advance, and the system built its own 'cat' archetype and matched the images against it.
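At its core this is unsupervised feature learning: a network is trained to reconstruct its own input, so that recurring patterns emerge as feature detectors without any labels. The sketch below illustrates the idea with a tiny tied-weight autoencoder in NumPy; the patch size, layer width, learning rate and the random stand-in 'thumbnails' are assumptions for illustration, not Google's actual architecture or data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for the YouTube thumbnails: 1,000 random 16x16 grayscale
    # patches flattened to 256-dimensional vectors (illustrative data only).
    X = rng.random((1000, 256))

    n_hidden = 64                 # number of feature detectors to learn
    W = rng.normal(0.0, 0.01, (256, n_hidden))  # tied encoder/decoder weights
    b_h = np.zeros(n_hidden)      # hidden-layer bias
    b_o = np.zeros(256)           # output (reconstruction) bias
    lr = 0.1                      # learning rate (an assumption)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(50):
        H = sigmoid(X @ W + b_h)        # encode: detect patterns in the input
        X_hat = sigmoid(H @ W.T + b_o)  # decode: reconstruct the input
        err = X_hat - X                 # reconstruction error drives learning
        d_out = err * X_hat * (1.0 - X_hat)
        d_hid = (d_out @ W) * H * (1.0 - H)
        # Gradient descent on the tied weights; no labels are involved anywhere.
        W -= lr * (X.T @ d_hid + d_out.T @ H) / len(X)
        b_o -= lr * d_out.mean(axis=0)
        b_h -= lr * d_hid.mean(axis=0)

    print("mean reconstruction error:", float((err ** 2).mean()))

In Google's vastly larger network, some of the detectors learned in this label-free fashion ended up responding strongly to cat faces.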
This form of machine-learned object recognition is said to closely resemble the way the human visual cortex works: in humans, individual neurons are thought to specialize in recognizing specific faces. The virtual neurons appeared to mirror this phenomenon, though at a much simpler level. The researchers argue that while their simulation is much more comprehensive than previous ones and draws on larger data sets, it still pales in comparison to the size of a human visual cortex.
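One way to see what such a virtual neuron has specialized in is to compute the stimulus that excites it most strongly and inspect the result as an image. A minimal sketch of that idea, continuing from the autoencoder listing above (the unit index, step size and iteration count are illustrative assumptions):

    # Continuing from the autoencoder sketch above: recover the input patch
    # that maximally activates one hidden unit, via gradient ascent on the input.
    unit = 0                      # which feature detector to inspect
    x = rng.random(256)           # start from a random patch
    step = 0.5                    # ascent step size (an assumption)

    for _ in range(200):
        a = sigmoid(x @ W[:, unit] + b_h[unit])  # this unit's activation
        x += step * a * (1.0 - a) * W[:, unit]   # follow d(activation)/d(input)
        x = np.clip(x, 0.0, 1.0)                 # keep pixel values in range

    # Reshaped to 16x16, x approximates the pattern this neuron prefers; this
    # is the kind of probe that can reveal a unit tuned to faces, or to cats.
    print("activation at optimum:", float(sigmoid(x @ W[:, unit] + b_h[unit])))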
The study, which will be presented later this week, is said to show that 'machine learning' or 'deep learning' algorithms scale well to larger simulations with larger data sets. With hardware costs continuing to fall, it should be possible to simulate an entire visual cortex within a few years. In the meantime, the researchers hope to use their algorithms to improve Google's image search results and to produce better translations.