Artificial Neural Network Algorithms
The Neuron
The neuron that forms the basis of all Neural Networks is an imitation of what has been observed in the human brain.
This odd critter is just one of the thousands that swim in our brains.
The eyeless head is the neuron. It connects to the neurons around it through branches called dendrites and tails called axons. The electrical signals that shape our perception of the world around us flow through these connections.
Curiously enough, when a signal is transmitted between an axon and a dendrite, the two do not actually touch.
A gap exists between the two. The signal needs to act like a stuntman jumping a dirt-bike over a deep canyon to continue its trip. This signal-passing hopping mechanism is called the synapse. For the sake of convenience, this is also the word I will use when referring to the transfer of signals in our Neural Networks.
How do we re-imagine the biological neuron? Here is a diagram showing the shape a neuron takes in a Neural Network.
The inputs on the left represent the signals coming into the main neuron in the center. In a human neuron, these might be signals such as scent or touch.
These inputs are independent variables in the Neural Network. They flow through the synapses, pass into the wide grey loop, and appear as output values on the other side. For the most part it is a like-for-like operation.
The biggest difference between the biological mechanism and its artificial equivalent is the level of control you possess over the input values; on the left-hand side, the independent variables.
You can pick which variables enter the Neural Network. It's crucial to remember to either standardize or normalize the values of the independent variables. These methods keep the variables within a comparable range so they are easy for the Neural Network to interpret. That is important for your Neural Network's operational capability.
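As an illustrative sketch (my own example, not from the original article), standardizing and normalizing a small table of input variables might look like this:

```python
import numpy as np

# Three hypothetical input variables for five observations:
# height (cm), age (years), weight (kg)
X = np.array([
    [170.0, 25.0, 70.0],
    [160.0, 40.0, 60.0],
    [180.0, 35.0, 90.0],
    [175.0, 20.0, 80.0],
    [165.0, 30.0, 65.0],
])

# Standardization: rescale each column to mean 0 and standard deviation 1.
X_standardized = (X - X.mean(axis=0)) / X.std(axis=0)

# Normalization: rescale each column to the range [0, 1].
X_normalized = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```

Either way, the point is the same: height in centimeters and age in years end up on a comparable scale before they enter the network.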
Observations
It is similarly important to remember that not every variable stands alone. Together they form a single observation.
For instance, you can list the height, age and weight of a person. These are three distinct descriptors but they belong to a single individual.
As these values pass through the main neuron and move on to the other side, they become output values.
Output Values
Output values can take various forms. Look at this diagram:
They can be:
1. continuous (e.g. a price)
2. binary (yes or no)
3. or categorical.
A categorical output will fan out into several variables. But just as the input variables are separate sections of a whole, the same is true of a categorical output. Imagine it as a bobsled team: many people bundled into a single car.
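To make that fan-out concrete, here is a small sketch (my own illustration, not from the article) of one-hot encoding, a common way a single categorical output is split into several binary variables:

```python
# Hypothetical categorical output with three classes.
categories = ["cat", "dog", "bird"]

def one_hot(label, categories):
    """Fan a single categorical value out into one binary variable per class."""
    return [1 if label == c else 0 for c in categories]

print(one_hot("dog", categories))  # → [0, 1, 0]
```

Three output variables, but, like the bobsled team, they jointly describe one categorical value: exactly one of them is 1.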
Singular Observations
It is important to keep this clear in your mind as you work through this: both the inputs on the left and the outputs on the right belong to a single observation.
The neuron is sandwiched between two single rows of data. There may be three input variables and one output. It doesn’t matter. They are two single corresponding rows of data. One for one.
Back to the Stuntman
If he's marauding over the soft pink terrain of the human brain, he will eventually reach the canyon we mentioned before. He needs to jump it.
Perhaps there is a crowd of beautiful women and a stockpile of booze on the other side. He can hear ZZ Top blasting from unseen speakers.
He needs to get over there. In the brain, he has to take the leap. He would much rather face this dilemma in a Neural Network, where he doesn’t need the bike to reach his nirvana. Here, he has a tightrope linking him to the promised land. This is the synapse.
Weights
Each synapse is assigned a weight. Just as the tautness of the tightrope is integral to the stuntman's survival, the weight assigned to each synapse is integral to the signal that passes along it. Weights are a pivotal factor in a Neural Network's functioning.
Weights are how Neural Networks learn
Based on each weight, the Neural Network decides what information is important, and what isn't.
The weight determines which signals get passed along or not, or to what extent a signal gets passed along. The weights are what you will adjust through the process of learning. When you are training your Neural Network, not unlike with your body, the work is done with weights. Later on I will cover Gradient Descent and Backpropagation.
These concepts cover how and why weights are altered, and what works best.
That’s everything to do with what goes into the neuron, what comes out, and by what means.
What happens inside the neuron?
How are input signals altered in the neuron so they come out the other side as output signals? I'm sad to say it's slightly less adventurous than a tiny stuntman taking risks on his travels to who-knows-where.
It all comes down to plain old addition. First, the neuron takes all of the weighted input values it has received and adds them up. Simple.
It then applies an activation function, which is assigned to either the neuron itself or an entire layer of neurons. I will go deeper into activation functions later on. For now, all you need to know is that this function determines whether a signal gets passed on or not.
That signal goes on to the next neuron down the line then the next, so on and so forth. That’s it.
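Putting the pieces together, a single artificial neuron can be sketched in a few lines of Python. This is an illustrative toy (the weights, inputs, and sigmoid activation are my own choices, not from the article): it multiplies each input by its synapse weight, sums the results, and passes the sum through an activation function.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One artificial neuron: a weighted sum of inputs, then an activation."""
    # 1. Multiply each input signal by the weight of its synapse and add them up.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 2. Apply an activation function (here, the sigmoid) that determines
    #    how strongly the signal gets passed on to the next neuron.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Three input variables flowing through three weighted synapses.
output = neuron([0.5, 0.3, 0.2], [0.4, 0.7, 0.2])
```

The output is a value between 0 and 1, which becomes the input signal for the next neuron down the line.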
Additional Reading
For deeper learning on the neuron in Artificial Neural Networks, you can read the paper Efficient BackProp by Yann LeCun et al. (1998).
Translated from: https://medium.com/swlh/artificial-neural-networks-the-neuron-b046f53547fe