PROGRAMMING
Machine learning in the cloud
One of the most important factors that contributed to the huge successes of machine learning in the last 10 years has been the increase in computing power. In 2010, Dan Cireșan et al. set a new state of the art for handwritten digit recognition using an algorithm developed in the 1980s, augmenting the data set with a procedure described in 1990. The only difference was the amount of computing power: using a modern GPU, they finished training in one day, where a CPU might have taken 50 days.
PROGRAMMING
Autoencoders
In the previous issue, I presented Restricted Boltzmann Machines, which Geoffrey Hinton, a professor at the University of Toronto, introduced in 2006 as a method for speeding up neural network training. In 2007, Yoshua Bengio, a professor at the University of Montreal, presented an alternative to RBMs: autoencoders.
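To give a flavour of the idea before the article digs in: an autoencoder is a network trained to reproduce its own input through a narrower hidden layer. Below is a minimal sketch of a single-hidden-layer autoencoder with tied weights, written in NumPy; the layer sizes, learning rate, and toy data are illustrative assumptions, not the setup from Bengio's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 64, 16          # compress 64 inputs into 16 codes
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_hidden = np.zeros(n_hidden)
b_visible = np.zeros(n_visible)

def encode(x):
    return sigmoid(x @ W + b_hidden)

def decode(h):
    # tied weights: the decoder reuses the encoder matrix transposed
    return sigmoid(h @ W.T + b_visible)

X = rng.random((500, n_visible))      # toy data in [0, 1]
lr = 0.5
for epoch in range(200):
    H = encode(X)
    X_hat = decode(H)
    err = X_hat - X                               # squared-error gradient
    d_out = err * X_hat * (1 - X_hat)             # through output sigmoid
    d_hid = (d_out @ W) * H * (1 - H)             # through hidden sigmoid
    grad_W = X.T @ d_hid + d_out.T @ H            # both uses of W contribute
    W -= lr * grad_W / len(X)
    b_hidden -= lr * d_hid.mean(axis=0)
    b_visible -= lr * d_out.mean(axis=0)

print("reconstruction MSE:", np.mean((decode(encode(X)) - X) ** 2))
```

Because the hidden layer is smaller than the input, the network cannot simply copy the data; it is forced to learn a compressed representation, which is what makes autoencoders useful for pre-training.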
PROGRAMMING
Restricted Boltzmann Machines
In the last article, I presented a short history of deep learning and listed some of the main techniques it uses. Now I'm going to present the components of a deep learning system. Deep learning had its first major success in 2006, when Geoffrey Hinton and Ruslan Salakhutdinov published the paper "Reducing the Dimensionality of Data with Neural Networks", the first efficient and fast application of Restricted Boltzmann Machines (RBMs).
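As a taste of what the article covers, here is a minimal sketch of a binary RBM trained with one step of contrastive divergence (CD-1), in NumPy. The dimensions, learning rate, and random toy data are illustrative assumptions, not the configuration from the Hinton-Salakhutdinov paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 64, 32
W = rng.normal(0, 0.01, (n_visible, n_hidden))
a = np.zeros(n_visible)                   # visible biases
b = np.zeros(n_hidden)                    # hidden biases

X = (rng.random((500, n_visible)) > 0.5).astype(float)  # toy binary data
lr = 0.1
for epoch in range(50):
    # positive phase: hidden probabilities given the data
    p_h = sigmoid(X @ W + b)
    h = (rng.random(p_h.shape) < p_h).astype(float)     # sample hidden units
    # negative phase: reconstruct the visibles, then re-infer the hiddens
    p_v = sigmoid(h @ W.T + a)
    p_h2 = sigmoid(p_v @ W + b)
    # CD-1 update: data statistics minus reconstruction statistics
    W += lr * (X.T @ p_h - p_v.T @ p_h2) / len(X)
    a += lr * (X - p_v).mean(axis=0)
    b += lr * (p_h - p_h2).mean(axis=0)

print("reconstruction error:", np.mean((p_v - X) ** 2))
```

The key property is that visible and hidden units are conditionally independent given the other layer, so both phases reduce to a single matrix multiplication; this is what made RBM training fast enough to be practical.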
PROGRAMMING
Deep learning
In the last 2-3 years, a new buzzword has appeared: deep learning. In 2012, Microsoft presented a pretty impressive demo of an application that recognized spoken English, translated the text to Chinese, and then spoke the translation in the original speaker's voice. In the same year, Google developed a system that, from 10 million YouTube thumbnails, learned by itself to recognize cats (and 22,000 other categories of objects).