Abstract
We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
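The greedy layer-by-layer procedure described above is not spelled out in code here, so below is a minimal illustrative sketch (not the authors' implementation) of the core idea: train a stack of restricted Boltzmann machines with one-step contrastive divergence, feeding each layer's hidden activities to the layer above. Binary units, the CD-1 update, the helper names `train_rbm` and `greedy_pretrain`, and the toy data are all assumptions made for illustration; the label units, the top-level associative memory, and the wake-sleep fine-tuning stage are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    return (rng.random(p.shape) < p).astype(p.dtype)

def train_rbm(data, n_hidden, n_epochs=10, lr=0.05, batch_size=100):
    """Train one RBM with single-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(n_epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch_size):
            v0 = data[start:start + batch_size]
            # Positive phase: hidden probabilities driven by the data.
            h0_prob = sigmoid(v0 @ W + b_hid)
            h0 = sample_bernoulli(h0_prob)
            # Negative phase: one step of alternating Gibbs sampling.
            v1_prob = sigmoid(h0 @ W.T + b_vis)
            h1_prob = sigmoid(v1_prob @ W + b_hid)
            # CD-1 update: difference of data-driven and reconstruction statistics.
            W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
            b_vis += lr * (v0 - v1_prob).mean(axis=0)
            b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

def greedy_pretrain(data, layer_sizes):
    """Stack RBMs: each layer is trained on the hidden activities of the layer below."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_vis, b_hid = train_rbm(x.copy(), n_hidden)
        layers.append((W, b_vis, b_hid))
        x = sigmoid(x @ W + b_hid)  # propagate activities up for the next layer
    return layers

# Toy usage: random sparse binary "images" stand in for the digit data.
toy_data = (rng.random((1000, 784)) < 0.1).astype(float)
stack = greedy_pretrain(toy_data, layer_sizes=[500, 500, 2000])
```

The layer sizes in the toy call mirror the three hidden layers used for the digit experiments in the paper; the data, hyperparameters, and training loop are placeholders chosen only to keep the sketch runnable.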
