【Andrew Ng Course Quizzes】Course 1 - Neural Networks and Deep Learning - Week 2 Quiz
Week 2 Quiz - Neural Network Basics
 * What does a neuron compute?
 * [ ] A neuron computes an activation function followed by a linear function (z = Wx + b)
 * [x] A neuron computes a linear function (z = Wx + b) followed by an activation function
 * [ ] A neuron computes a function g that scales the input x linearly (Wx + b)
 * [ ] A neuron computes the mean of all features before applying the output to an activation function

Note: the output of a neuron is a = g(Wx + b), where g is the activation function (sigmoid, tanh, ReLU, …).
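For example, a minimal numpy sketch of a single sigmoid neuron (the shapes and values here are hypothetical, chosen only for illustration):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W = np.random.randn(1, 3)  # weights for 3 input features
b = np.random.randn(1, 1)  # bias
x = np.random.randn(3, 1)  # one input example

z = np.dot(W, x) + b       # step 1: the linear function
a = sigmoid(z)             # step 2: the activation function
print(a.shape)             # (1, 1)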
 * Which of these is the “Logistic Loss”?
 * Check here <https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_error_function_and_logistic_regression>.

Note: we are using a cross-entropy loss function.
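For a single example, the logistic (cross-entropy) loss is L(ŷ, y) = -(y·log(ŷ) + (1 - y)·log(1 - ŷ)). A minimal numpy sketch:

import numpy as np

def logistic_loss(y_hat, y):
    # cross-entropy loss for one prediction y_hat in (0, 1) and label y in {0, 1}
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(logistic_loss(0.9, 1))  # small loss: confident and correct
print(logistic_loss(0.9, 0))  # large loss: confident and wrong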
 * Suppose img is a (32, 32, 3) array, representing a 32x32 pixel image with 3 color channels: red, green, and blue. How do you reshape it into a column vector?

x = img.reshape((32 * 32 * 3, 1))
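A quick shape check (with a random array standing in for a real image):

import numpy as np

img = np.random.randn(32, 32, 3)   # stand-in for a 32x32 RGB image
x = img.reshape((32 * 32 * 3, 1))
print(x.shape)                     # (3072, 1), a column vector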
 * Consider the following two random arrays “a” and “b”:

a = np.random.randn(2, 3) # a.shape = (2, 3)
b = np.random.randn(2, 1) # b.shape = (2, 1)
c = a + b

What will be the shape of “c”?
Answer: b (a column vector) is copied 3 times by broadcasting so that it can be added to each column of a. Therefore, c.shape = (2, 3).
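This is easy to verify directly:

import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b        # b is broadcast across the 3 columns of a
print(c.shape)   # (2, 3)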
 * Consider the following two random arrays “a” and “b”:

a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
c = a * b

What will be the shape of “c”?
Answer: the “*” operator performs element-wise multiplication, which requires the two arrays to have compatible (broadcastable) shapes; (4, 3) and (3, 2) are not, so this raises an error and cannot be computed.
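Running it confirms the failure:

import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)
try:
    c = a * b
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (4,3) (3,2)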
 * Suppose you have n_x input features per example. Recall that X = [x^(1), x^(2), …, x^(m)]. What is the dimension of X?
Answer: (n_x, m)
Note: a simple way to validate this is to apply the formula Z^(l) = W^(l) A^(l) with l = 1 (a numpy sketch follows the list); then we have:
 * A^(1) = X
 * X.shape = (n_x, m)
 * Z^(1).shape = (n^(1), m)
 * W^(1).shape = (n^(1), n_x)
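A small sketch of how stacking m column vectors produces an (n_x, m) matrix (n_x = 4 and m = 10 are assumed values, chosen only for illustration):

import numpy as np

n_x, m = 4, 10
examples = [np.random.randn(n_x, 1) for _ in range(m)]  # m column vectors x^(i)
X = np.hstack(examples)   # stack the columns side by side
print(X.shape)            # (4, 10), i.e. (n_x, m)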
 * Recall that np.dot(a, b) performs a matrix multiplication on a and b, whereas `a * b` performs an element-wise multiplication.
Consider the following two random arrays “a” and “b”:

a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a, b)

What is the shape of c?
Answer: c.shape = (12288, 45); this is a plain matrix multiplication, and the inner dimensions (150) match.
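The contrast between the two operations can be checked directly:

import numpy as np

a = np.random.randn(12288, 150)
b = np.random.randn(150, 45)
c = np.dot(a, b)   # matrix product: inner dimensions (150) agree
print(c.shape)     # (12288, 45)
# a * b would raise a ValueError here, since (12288, 150) and (150, 45)
# are not broadcastable for element-wise multiplication.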
 * Consider the following code snippet:

# a.shape = (3,4)
# b.shape = (4,1)
for i in range(3):
    for j in range(4):
        c[i][j] = a[i][j] + b[j]

How do you vectorize this?
Answer: c = a + b.T
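A runnable comparison of the loop and the vectorized form (using random a and b for illustration):

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

# loop version
c_loop = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c_loop[i][j] = a[i][j] + b[j, 0]  # b[j, 0] extracts the scalar

# vectorized version: b.T has shape (1, 4) and broadcasts over the 3 rows
c_vec = a + b.T
print(np.allclose(c_loop, c_vec))  # True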
 * Consider the following code:

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b

What will be the shape of “c”?
Answer: this invokes broadcasting: b is copied three times to become (3, 3), then multiplied element-wise with a. So c.shape = (3, 3).
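Again verifiable in a couple of lines:

import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b        # b is broadcast to (3, 3), then multiplied element-wise
print(c.shape)   # (3, 3)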
 * Consider the following computation graph, where u = a * b, v = a * c, and w = b + c:

J = u + v - w
  = a * b + a * c - (b + c)
  = a * (b + c) - (b + c)
  = (a - 1) * (b + c)

Answer: (a - 1) * (b + c)
Blogger's note: apologies, the figure of the computation graph could not be reproduced here.
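The simplification can be sanity-checked numerically (a, b, c below are arbitrary values):

import numpy as np

a, b, c = 2.0, 3.0, 4.0
u = a * b
v = a * c
w = b + c
J = u + v - w
print(np.isclose(J, (a - 1) * (b + c)))  # True: the simplification holds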