This post mainly collects the content of CS229, week 4. It doesn't feel like there is much new here... probably because I was already fairly familiar with the LeNet-5 classifier.
Non-linear Hypotheses
Introduces the Neural Network as a way to handle non-linear hypotheses.
Neurons and the Brain
Roughly: some neuroscience experiments, from which the neural network is abstracted as a mathematical model.
Model Representation I
Input Layer
Our input nodes (layer 1), also known as the “input layer”, go into another node (layer 2).
Output Layer
The node which finally outputs the hypothesis function is known as the “output layer”.
Hidden Layer
The intermediate layers of nodes between the input and output layers are called the “hidden layers.”
The other notation

$a_i^{(j)}$ = “activation” of unit $i$ in layer $j$

$\Theta^{(j)}$ = matrix of weights controlling the function mapping from layer $j$ to layer $j+1$
Visually, a simplistic representation looks like:

$$\begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} \rightarrow \left[\ \right] \rightarrow h_\theta(x)$$

We label these intermediate or “hidden” layer nodes $a_0^{(2)} \cdots a_n^{(2)}$ and call them “activation units.”

If we had one hidden layer, it would look like:

$$\begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} \rightarrow \begin{bmatrix} a_1^{(2)} \\ a_2^{(2)} \\ a_3^{(2)} \end{bmatrix} \rightarrow h_\theta(x)$$

The values for each of the “activation” nodes are obtained as follows:

$$\begin{aligned}
a_1^{(2)} &= g(\Theta_{10}^{(1)} x_0 + \Theta_{11}^{(1)} x_1 + \Theta_{12}^{(1)} x_2 + \Theta_{13}^{(1)} x_3) \\
a_2^{(2)} &= g(\Theta_{20}^{(1)} x_0 + \Theta_{21}^{(1)} x_1 + \Theta_{22}^{(1)} x_2 + \Theta_{23}^{(1)} x_3) \\
a_3^{(2)} &= g(\Theta_{30}^{(1)} x_0 + \Theta_{31}^{(1)} x_1 + \Theta_{32}^{(1)} x_2 + \Theta_{33}^{(1)} x_3) \\
h_\Theta(x) &= a_1^{(3)} = g(\Theta_{10}^{(2)} a_0^{(2)} + \Theta_{11}^{(2)} a_1^{(2)} + \Theta_{12}^{(2)} a_2^{(2)} + \Theta_{13}^{(2)} a_3^{(2)})
\end{aligned}$$
The dimensions of these matrices of weights are determined as follows: if a network has $s_j$ units in layer $j$ and $s_{j+1}$ units in layer $j+1$, then $\Theta^{(j)}$ will be of dimension $s_{j+1} \times (s_j + 1)$.

How the bias unit enters the matrix: the $+1$ comes from the addition in $\Theta^{(j)}$ of the “bias nodes,” $x_0$ and $\Theta_0^{(j)}$. In other words, the output nodes do not include the bias node, while the inputs do.
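To make the dimension rule concrete, here is a minimal NumPy sketch (my own illustration; the 3-4-1 layer sizes and the helper name `theta_shape` are assumptions, not from the lecture):

```python
import numpy as np

def theta_shape(s_j, s_j_plus_1):
    """Shape of Theta^(j): s_{j+1} x (s_j + 1); the extra column is for the bias unit."""
    return (s_j_plus_1, s_j + 1)

# Hypothetical network: 3 input units, 4 hidden units, 1 output unit.
layer_sizes = [3, 4, 1]
for j in range(len(layer_sizes) - 1):
    print(f"Theta^({j + 1}) has shape {theta_shape(layer_sizes[j], layer_sizes[j + 1])}")
# Theta^(1): (4, 4) -- maps layer 1 (3 units + bias) to the 4 hidden units
# Theta^(2): (1, 5) -- maps layer 2 (4 units + bias) to the single output unit
```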
Model Representation II
A vectorized implementation of the above functions. Define a new variable $z_k^{(j)}$ that encompasses the parameters inside the $g$ function, so that $a_k^{(j)} = g(z_k^{(j)})$.
The vector representation of $x$ and $z^{(j)}$ is:

$$x = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} \qquad z^{(j)} = \begin{bmatrix} z_1^{(j)} \\ z_2^{(j)} \\ \vdots \\ z_n^{(j)} \end{bmatrix}$$

Setting $x = a^{(1)}$, we can rewrite the equation as:

$$z^{(j)} = \Theta^{(j-1)} a^{(j-1)}$$
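Below is a minimal NumPy sketch of this vectorized forward pass, assuming sigmoid for $g$ and randomly chosen weights (the names `sigmoid` and `forward` and the 3-4-1 layer sizes are my own, not course code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, thetas):
    """Forward propagation: a^(1) = x, then z^(j) = Theta^(j-1) a^(j-1) and a^(j) = g(z^(j))."""
    a = x
    for theta in thetas:
        a = np.insert(a, 0, 1.0)  # add the bias unit a_0 = 1
        z = theta @ a             # z^(j) = Theta^(j-1) a^(j-1)
        a = sigmoid(z)            # a^(j) = g(z^(j))
    return a                      # the last activation is h_Theta(x)

rng = np.random.default_rng(0)
thetas = [rng.normal(size=(4, 4)),  # Theta^(1): 3 inputs (+ bias) -> 4 hidden units
          rng.normal(size=(1, 5))]  # Theta^(2): 4 hidden units (+ bias) -> 1 output unit
print(forward(np.array([0.5, -1.0, 2.0]), thetas))
```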
Examples and Intuitions
Mainly introduces how a Neural Network can be used to represent logical functions.
Logical functions: with suitable weights, single sigmoid units compute AND, OR, and NOT, and stacking them gives XNOR (see the sketch below).
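For instance, the lectures use weights like $[-30,\ 20,\ 20]$ so that $g(-30 + 20x_1 + 20x_2)$ behaves as $x_1 \text{ AND } x_2$. The NumPy code below is my own sketch of those standard weight choices and checks the truth tables:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit(theta, x):
    """One sigmoid unit: g(theta . [1; x]); with these weights the output is ~0 or ~1."""
    return sigmoid(theta @ np.insert(x, 0, 1.0))

AND = np.array([-30.0, 20.0, 20.0])   # g(-30 + 20*x1 + 20*x2) = x1 AND x2
OR  = np.array([-10.0, 20.0, 20.0])   # x1 OR x2
NOR = np.array([ 10.0, -20.0, -20.0]) # (NOT x1) AND (NOT x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        x = np.array([x1, x2], dtype=float)
        a1 = unit(AND, x)                    # hidden unit: x1 AND x2
        a2 = unit(NOR, x)                    # hidden unit: (NOT x1) AND (NOT x2)
        xnor = unit(OR, np.array([a1, a2]))  # output: a1 OR a2 = x1 XNOR x2
        print(x1, x2, "AND:", round(float(a1)), "XNOR:", round(float(xnor)))
```

The intuition is that a single unit can only represent linearly separable functions like AND and OR, while XNOR needs the extra hidden layer to combine them.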
Multiclass Classification
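As I understand this part of the course, for $K$ classes the output layer has $K$ units, training labels are one-hot vectors, and the prediction is the class whose output unit is largest. A tiny sketch (the class names and numbers are hypothetical):

```python
import numpy as np

# Hypothetical output of a 4-unit output layer for the classes
# [pedestrian, car, motorcycle, truck].
h = np.array([0.02, 0.91, 0.05, 0.10])

y_car = np.array([0, 1, 0, 0])  # one-hot training label for "car"
print("predicted class index:", int(np.argmax(h)))  # -> 1, i.e. "car"
```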
- Post title: 07_Neural Network
- Create time: 2022-01-07 21:19:22
- Post link: Machine-Learning/07-neural-network/
- Copyright notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.