Distilling the Knowledge in a Neural Network #NIPS2014
By Geoffrey Hinton @OriolVinyalsML @jeffdean

On transferring knowledge from an ensemble, or
from a large, highly regularized model, into a smaller, distilled model
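The core trick in the paper is training the small model on the teacher's "soft targets": softmax outputs computed at a raised temperature T. A minimal NumPy sketch of that idea (the toy logits, temperature value, and function names here are illustrative, not from the paper's code):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax at temperature T; higher T yields softer probabilities."""
    z = logits / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between teacher and student soft targets at temperature T."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

# Toy example: the student is nudged toward the teacher's full distribution,
# not just its argmax class.
teacher = np.array([5.0, 1.0, 0.1])
student = np.array([4.0, 1.5, 0.2])
loss = distillation_loss(student, teacher, T=2.0)
```

In the paper this soft-target loss is combined with the usual hard-label cross-entropy (at T=1), weighted by a small coefficient on the hard term.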

The ultimate list of biohacks and smart drugs:

-Drink more water
-Get 8 hours of sleep
-Walk outside in the sun
-Leave your phone on silent
-Read a few pages each day
-Eat more vegetables and greens
-Don’t hang out with toxic people
-Work on projects you care about

Implementing SPADE with fastai: generating photo-realistic images using GANs

#MachineLearning #DeepLearning #GANs #FastAI via @TDataScience CC @fastdotai @math_rachel @jeremyphoward
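SPADE (spatially-adaptive normalization) normalizes activations as in batch norm, then modulates them with per-pixel scale and shift maps predicted from the segmentation map. A simplified NumPy sketch of that modulation step; in the real model gamma and beta come from small conv layers over the segmentation map, which are stubbed out here:

```python
import numpy as np

def spade(x, seg_feat, eps=1e-5):
    """Spatially-adaptive normalization, simplified.

    x:        (N, C, H, W) activations
    seg_feat: (N, C, H, W) features derived from the segmentation map
              (stand-in for the learned conv outputs in the real SPADE block)
    """
    # Batch-norm style normalization over batch and spatial dims, per channel
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_norm = (x - mu) / np.sqrt(var + eps)
    # Per-pixel modulation; these would be two separate conv outputs in practice
    gamma, beta = seg_feat, 0.1 * seg_feat
    return (1.0 + gamma) * x_norm + beta

x = np.random.randn(2, 3, 4, 4)
seg_feat = np.random.randn(2, 3, 4, 4)
out = spade(x, seg_feat)
```

The point of making gamma and beta spatially varying is that the segmentation layout survives normalization, instead of being washed out as it would be with a single per-channel affine.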

1/ Does batchnorm make the optimization landscape smoother? Prior work says yes, but our new @iclr2019 paper shows BN causes gradient explosion in randomly initialized deep BN nets. Contradiction? We clarify below
