Spiking neuron network Helmholtz machine

An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the...


Bibliographic Details
Main Authors: Sountsov, Pavel, Miller, Paul
Format: Online
Language: English
Published: Frontiers Media S.A. 2015
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4405618/
id pubmed-4405618
recordtype oai_dc
spelling pubmed-44056182015-05-07 Spiking neuron network Helmholtz machine Sountsov, Pavel Miller, Paul Neuroscience An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms perform inference and learning optimally, a complete description of how any such algorithm (or a novel one) could be implemented in the brain is still lacking. Many proposed solutions address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a network of realistic model spiking neurons to implement a well-studied computational model, the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation because the algorithm it uses to learn its parameters, the wake-sleep algorithm, relies on a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This network can learn an internal model of continuous-valued training data sets without supervision, and it can perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the durations of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. Frontiers Media S.A. 2015-04-21 /pmc/articles/PMC4405618/ /pubmed/25954191 http://dx.doi.org/10.3389/fncom.2015.00046 Text en Copyright © 2015 Sountsov and Miller. 
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
repository_type Open Access Journal
institution_category Foreign Institution
institution US National Center for Biotechnology Information
building NCBI PubMed
collection Online Access
language English
format Online
author Sountsov, Pavel
Miller, Paul
spellingShingle Sountsov, Pavel
Miller, Paul
Spiking neuron network Helmholtz machine
author_facet Sountsov, Pavel
Miller, Paul
author_sort Sountsov, Pavel
title Spiking neuron network Helmholtz machine
title_short Spiking neuron network Helmholtz machine
title_full Spiking neuron network Helmholtz machine
title_fullStr Spiking neuron network Helmholtz machine
title_full_unstemmed Spiking neuron network Helmholtz machine
title_sort spiking neuron network helmholtz machine
description An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms perform inference and learning optimally, a complete description of how any such algorithm (or a novel one) could be implemented in the brain is still lacking. Many proposed solutions address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a network of realistic model spiking neurons to implement a well-studied computational model, the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation because the algorithm it uses to learn its parameters, the wake-sleep algorithm, relies on a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This network can learn an internal model of continuous-valued training data sets without supervision, and it can perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the durations of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
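The abstract describes the wake-sleep algorithm and its local delta learning rule. As an illustrative aside, here is a minimal, non-spiking sketch of wake-sleep training for a tiny two-layer Helmholtz machine with binary stochastic units; all variable names, dimensions, and the toy data are hypothetical, and this deliberately abstracts away the paper's spiking-neuron implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sample_bernoulli(p):
    # Stochastic binary units: fire with probability p.
    return (rng.random(p.shape) < p).astype(float)

n_visible, n_hidden, lr = 6, 3, 0.05
R = np.zeros((n_hidden, n_visible))   # recognition weights (bottom-up)
G = np.zeros((n_visible, n_hidden))   # generative weights (top-down)
b_h = np.zeros(n_hidden)              # generative bias of the hidden layer

def wake_step(x):
    """Wake phase: recognize a real datum, then train the generative model."""
    global G, b_h
    h = sample_bernoulli(sigmoid(R @ x))        # bottom-up hidden sample
    b_h += lr * (h - sigmoid(b_h))              # local delta rule on hidden bias
    G += lr * np.outer(x - sigmoid(G @ h), h)   # delta rule: (target - prediction) * presynaptic

def sleep_step():
    """Sleep phase: dream from the generative model, then train recognition."""
    global R
    h = sample_bernoulli(sigmoid(b_h))          # fantasy hidden state
    x = sample_bernoulli(sigmoid(G @ h))        # fantasy visible state
    R += lr * np.outer(h - sigmoid(R @ x), x)   # delta rule toward the dream's cause

# Toy binary training data: two complementary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)
for step in range(2000):
    wake_step(data[step % 2])
    sleep_step()
```

Note how both updates are local in the sense the abstract emphasizes: each weight change depends only on the presynaptic activity and the postsynaptic error (target minus prediction), which is what makes the rule a candidate for implementation by synaptic plasticity.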
publisher Frontiers Media S.A.
publishDate 2015
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4405618/
_version_ 1613214198307749888