Deep Neural Networks
“At some fundamental level, no one understands machine learning.” —Christopher Olah
“Neural networks are one of the most beautiful programming paradigms ever invented.” —Michael Nielsen
This week, we round up a few examples of work on deep neural networks (DNNs), a subfield of machine learning that learns layered representations directly from raw data such as images, video, and speech.
Replicating Deep Mind: Kristjan Korjus is working on a project to reproduce the results of Playing Atari with Deep Reinforcement Learning, by Volodymyr Mnih and colleagues at DeepMind Technologies. Mnih et al. presented a deep learning model that uses reinforcement learning to learn control policies directly from raw sensory input, outperforming human experts on three of the seven Atari games tested.
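At the core of the approach Mnih et al. describe is the Q-learning update rule. A minimal tabular sketch on a hypothetical two-state toy environment illustrates the idea (the paper instead approximates the Q-function with a convolutional network trained on game frames):

```python
import random

# Hypothetical toy environment for illustration: action 1 in state 0
# yields reward 1 and moves to state 1; everything else yields reward 0
# and returns to state 0.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0  # (next_state, reward)
    return 0, 0.0

alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
Q = [[0.0, 0.0], [0.0, 0.0]]       # Q[state][action], tabular stand-in for a DNN

random.seed(0)
state = 0
for _ in range(500):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < eps:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    state = next_state

print(Q[0][1] > Q[0][0])  # the agent learns to prefer the rewarding action
```

Replacing the table `Q` with a neural network evaluated on raw pixels, plus experience replay, is what turns this update into the deep Q-learning of the paper.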
Deep Learning, NLP, and Representations: Christopher Olah at Colah’s Blog looks at deep learning from a natural-language-processing perspective and discusses how DNNs designed for different language-processing tasks have converged on strikingly similar representations.
More Deep Learning Musings: Paul Mineiro at Machined Learnings gives his reactions to Yoshua Bengio’s talks at the 2014 International Conference on Machine Learning, with a focus on the advantages that “deep” architectures may have over “shallow” ones.
Accelerate Machine Learning with the cuDNN Deep Neural Network Library: Larry Brown at NVIDIA explains why neural networks have become one of the most powerful tools for machine learning and introduces cuDNN, a free library of GPU-accelerated primitives for deep neural networks.