Twitter

Distilling the Knowledge in a Neural Network #NIPS2014
By Geoffrey Hinton @OriolVinyalsML @jeffdean

On transferring knowledge from an ensemble or from a large, highly regularized model into a smaller, distilled model

https://t.co/PHklruxCUC
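
A minimal sketch of the soft-target distillation loss described in the paper, written here in PyTorch; the temperature, loss weighting, and model names are illustrative placeholders rather than values from the paper.

```python
# Knowledge distillation with temperature-scaled soft targets (Hinton et al., 2014).
# T, alpha, and the usage names below are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft-target KL term (at temperature T) with the usual hard-label loss."""
    # Soften both distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # Scale the KL term by T^2 so its gradients stay comparable to the hard-label term.
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Usage sketch: `teacher` is the frozen large model (or averaged ensemble logits),
# `student` is the smaller distilled model being trained.
# loss = distillation_loss(student(x), teacher(x).detach(), y)
```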

The ultimate list of biohacks and smart drugs:

-Drink more water
-Get 8 hours of sleep
-Walk outside in the sun
-Leave your phone on silent
-Read a few pages each day
-Eat more vegetables and greens
-Don’t hang out with toxic people
-Work on projects you care about

Implementing SPADE using fastai: Generating photo-realistic images using GANs

#MachineLearning #DeepLearning #GANs #FastAI via @TDataScience CC @fastdotai @math_rachel @jeremyphoward https://t.co/PZZyiks8jh
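
For reference, a rough sketch of the SPADE block from Park et al. (2019) that the linked post implements: a parameter-free normalization whose per-pixel scale and shift are predicted from the segmentation map. The hidden width and layer sizes below are illustrative assumptions, not taken from the fastai article.

```python
# Sketch of a SPADE (spatially-adaptive denormalization) block; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, num_features, label_channels, hidden=128):
        super().__init__()
        # Normalize without learned affine parameters; the modulation supplies them.
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the segmentation map to the feature resolution, then predict
        # spatially varying scale (gamma) and shift (beta).
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# Usage sketch:
# spade = SPADE(num_features=64, label_channels=35)
# out = spade(torch.randn(2, 64, 32, 32), torch.randn(2, 35, 256, 256))
```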

1/ Does batchnorm make the optimization landscape smoother? https://t.co/5J92tRz8ag says yes, but our new @iclr2019 paper https://t.co/FM7jrTMWU2 shows that BN causes gradient explosion in randomly initialized deep BN nets. Contradiction? We clarify below
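
A toy check of the claim in this thread, measuring per-layer gradient norms of a randomly initialized deep fully-connected network with batch norm; the depth, width, batch size, and surrogate loss are arbitrary illustrative choices, not the paper's setup.

```python
# Measure gradient norms at random initialization in a deep BN network.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

depth, width, batch = 50, 256, 128
layers = []
for _ in range(depth):
    layers += [nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU()]
net = nn.Sequential(*layers)

x = torch.randn(batch, width)
out = net(x)
out.pow(2).mean().backward()  # arbitrary scalar loss just to obtain gradients

# Print gradient norms of the Linear weights, from the input-side layer onward.
for i, module in enumerate(net):
    if isinstance(module, nn.Linear):
        print(f"layer {i // 3:02d}  grad norm = {module.weight.grad.norm().item():.3e}")
```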

