Description of the biological-neural-network tag
People sometimes conceptualize brains as immensely powerful computers. Although their power comes more from a capacity for highly parallelized processing than from an actual speed advantage over supercomputers, brains are capable of learning many complex concepts that people have repeatedly tried and failed to make computers understand. As such, taking inspiration from brains is a logical first step. This inspiration led to the creation of artificial neural networks (ANNs), a popular class of learning algorithms. This tag is a place to ask questions about the biological underpinnings of ANNs and how those underpinnings might inform our usage of them.
Basic brain anatomy
Brains are composed of neurons (as shown in the image below). The dendrites at the left end receive chemical signals from their surroundings and sum them. If the sum is high enough, an electrical impulse called an action potential travels down the axon to the other end, where it is converted back into chemical signals that are perceived by nearby neurons. In this way, neurons form a network that passes signals among its members. Various chemical feedback loops can alter the strength with which incoming signals are perceived (analogous to the adjustment of connection weights during ANN training).
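The summing-and-thresholding behavior described above can be sketched in a few lines. This is a deliberately minimal caricature, not a realistic neuron model: real neurons integrate signals over time and through complex chemistry. The function name and threshold value here are illustrative assumptions.

```python
def neuron_fires(incoming_signals, threshold=3):
    """Return True if the summed incoming signals reach the firing
    threshold, i.e. an action potential propagates down the axon."""
    return sum(incoming_signals) >= threshold

# Three simultaneous excitatory signals reach the threshold...
print(neuron_fires([1, 1, 1]))  # True
# ...but two alone do not, so no action potential is transmitted.
print(neuron_fires([1, 1]))     # False
```

The all-or-nothing return value mirrors the fact that an action potential either fires or it does not; there is no "partial" spike.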
For further information about brain anatomy, see https://faculty.washington.edu/chudler/cells.html
Comparison to ANNs
Although many of the circuits formed by biological neural networks (BNNs) are much more complex than those typically used in ANNs, the underlying structure is the same. Neurons in BNNs are equivalent to nodes in ANNs. The physical proximity of neurons in a BNN, and the resulting access to each other's chemical signals, corresponds to connections between nodes in an ANN. The strength of a neuron's response to a given signal in a BNN is approximated by connection weights in an ANN. Lastly, the summing of incoming signals that leads to the propagation of an action potential (or the lack thereof) is implemented in ANNs via activation functions.