Coursera - Neural Networks and Deep Learning - Week 3 - Section 1 - Shallow Neural Networks
January 15, 2025
Week 3: Shallow Neural Networks
Section 1: Shallow Neural Networks
1. Video: Neural Networks Overview
What is a Neural Network?
2. Video: Neural Network Representation
Neural Network Representation
3. Video: Computing a Neural Network's Output
Neural Network Representation
Neural Network Representation learning
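The per-example computation covered in this video can be sketched in NumPy as follows. The tanh hidden layer, sigmoid output, and the layer sizes (3 inputs, 4 hidden units) are illustrative assumptions, not prescribed by this outline:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_single(x, W1, b1, W2, b2):
    """Forward pass for one example x of shape (n_x, 1) through a
    one-hidden-layer network: tanh in the hidden layer, sigmoid at
    the output (one common choice for binary classification)."""
    z1 = W1 @ x + b1      # (n_h, 1)
    a1 = np.tanh(z1)      # (n_h, 1)
    z2 = W2 @ a1 + b2     # (1, 1)
    a2 = sigmoid(z2)      # (1, 1), prediction in (0, 1)
    return a2

# Illustrative shapes: n_x = 3 inputs, n_h = 4 hidden units.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3)) * 0.01
b1 = np.zeros((4, 1))
W2 = rng.standard_normal((1, 4)) * 0.01
b2 = np.zeros((1, 1))
x = rng.standard_normal((3, 1))
y_hat = forward_single(x, W1, b1, W2, b2)
```

Note the shape convention: `W1` has one row per hidden unit, so `W1 @ x` stacks all four units' pre-activations in one product.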
4. Video: Vectorizing Across Multiple Examples
Vectorizing across multiple examples
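The vectorization idea from this video can be sketched by stacking the m training examples as columns of a matrix X, so the per-example loop disappears; the shapes below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_vectorized(X, W1, b1, W2, b2):
    """Forward pass over all m examples at once. X has shape (n_x, m)
    with one example per column, so every Z and A matrix also holds
    one column per example; b1 and b2 broadcast across the columns."""
    Z1 = W1 @ X + b1      # (n_h, m)
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2     # (1, m)
    A2 = sigmoid(Z2)
    return A2

# Illustrative shapes: n_x = 3, n_h = 4, m = 5 examples.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3)) * 0.01
b1 = np.zeros((4, 1))
W2 = rng.standard_normal((1, 4)) * 0.01
b2 = np.zeros((1, 1))
X = rng.standard_normal((3, 5))
A2 = forward_vectorized(X, W1, b1, W2, b2)   # one prediction per column
```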
5. Video: Explanation for Vectorized Implementation
Justification for vectorized implementation
Recap of vectorizing across multiple examples
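The justification can be checked numerically: column i of `W @ X` equals `W @ x_i`, so a single matrix product reproduces the per-example loop. A small sketch with assumed shapes:

```python
import numpy as np

# Check that the i-th column of W @ X equals W @ x_i, which is why
# one matrix product processes all m examples at once.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))          # m = 5 examples as columns
Z = W @ X
for i in range(X.shape[1]):
    z_i = W @ X[:, i:i+1]                # per-example product
    assert np.allclose(Z[:, i:i+1], z_i)
```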
6. Video: Activation Functions
Activation functions
Pros and cons of activation functions
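The four activations compared in this video can be written in a few lines of NumPy; the comments summarize the trade-offs discussed in the lecture:

```python
import numpy as np

def sigmoid(z):
    # Output in (0, 1): useful at the output layer for binary labels,
    # but the gradient vanishes for large |z|.
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Zero-centered output in (-1, 1): usually better than sigmoid in
    # hidden layers, but it still saturates for large |z|.
    return np.tanh(z)

def relu(z):
    # max(0, z): cheap and non-saturating for z > 0; a common default
    # for hidden layers.
    return np.maximum(0, z)

def leaky_relu(z, alpha=0.01):
    # Like ReLU but with a small slope alpha for z < 0, so negative
    # inputs still pass some gradient.
    return np.where(z > 0, z, alpha * z)
```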
7. Video: Why do you need Non-Linear Activation Functions?
Activation function
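The point of this video can be verified numerically: with identity (linear) activations, two layers collapse into a single linear map, so the hidden layer adds no expressive power. A minimal sketch, with all shapes chosen for illustration:

```python
import numpy as np

# Two linear layers: W2 @ (W1 @ x + b1) + b2 equals a single linear
# layer with W_eff = W2 @ W1 and b_eff = W2 @ b1 + b2.
rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal((4, 1))
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal((1, 1))
x = rng.standard_normal((3, 1))

two_linear_layers = W2 @ (W1 @ x + b1) + b2
W_eff, b_eff = W2 @ W1, W2 @ b1 + b2     # equivalent single layer
one_linear_layer = W_eff @ x + b_eff
assert np.allclose(two_linear_layers, one_linear_layer)
```

Replacing either identity with a non-linearity such as tanh breaks this collapse, which is why hidden layers need non-linear activations.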
8. Video: Derivatives of Activation Functions
Sigmoid activation function
Tanh activation function
ReLU and Leaky ReLU
9. Video: Gradient Descent for Neural Networks
Gradient descent for neural networks
Formulas for computing derivatives
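The formulas summarized in this video can be sketched as one full training step. This assumes the setup used throughout the section (tanh hidden layer, sigmoid output, cross-entropy cost); the toy data, layer sizes, and learning rate are illustrative, not the course's reference implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(A2, Y):
    return float(np.mean(-(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))))

def forward(X, p):
    A1 = np.tanh(p["W1"] @ X + p["b1"])
    A2 = sigmoid(p["W2"] @ A1 + p["b2"])
    return A1, A2

def gradient_step(X, Y, p, lr=0.1):
    """One gradient-descent step using the summary formulas:
    dZ2 = A2 - Y, dZ1 = W2^T dZ2 * (1 - A1^2)."""
    m = X.shape[1]
    A1, A2 = forward(X, p)
    dZ2 = A2 - Y                                  # (1, m)
    dW2 = dZ2 @ A1.T / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = (p["W2"].T @ dZ2) * (1 - A1 ** 2)       # tanh'(Z1) = 1 - A1^2
    dW1 = dZ1 @ X.T / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    for k, g in (("W1", dW1), ("b1", db1), ("W2", dW2), ("b2", db2)):
        p[k] = p[k] - lr * g
    return p

# Toy run: the cost should fall over repeated steps.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 8))
Y = (X[0:1, :] > 0).astype(float)
p = {"W1": rng.standard_normal((3, 2)) * 0.01, "b1": np.zeros((3, 1)),
     "W2": rng.standard_normal((1, 3)) * 0.01, "b2": np.zeros((1, 1))}
loss_before = cross_entropy(forward(X, p)[1], Y)
for _ in range(200):
    p = gradient_step(X, Y, p)
loss_after = cross_entropy(forward(X, p)[1], Y)
```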
10. Video: Backpropagation Intuition (Optional)
Computing gradients
Neural network gradients
Summary of gradient descent
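The backprop intuition can be sanity-checked numerically: perturb one weight and compare the finite-difference slope of the cost with the analytic gradient dW2 = (A2 - Y) A1^T / m. A sketch with assumed toy shapes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
X = rng.standard_normal((2, 4))
Y = rng.integers(0, 2, size=(1, 4)).astype(float)
W1, b1 = rng.standard_normal((3, 2)), np.zeros((3, 1))
W2, b2 = rng.standard_normal((1, 3)), np.zeros((1, 1))

def cost(W2):
    A1 = np.tanh(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    return float(np.mean(-(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))))

# Analytic gradient from backprop: dZ2 = A2 - Y, dW2 = dZ2 @ A1.T / m.
A1 = np.tanh(W1 @ X + b1)
A2 = sigmoid(W2 @ A1 + b2)
dW2 = (A2 - Y) @ A1.T / X.shape[1]

# Central difference on a single entry of W2.
eps = 1e-6
W2p, W2m = W2.copy(), W2.copy()
W2p[0, 0] += eps
W2m[0, 0] -= eps
numeric = (cost(W2p) - cost(W2m)) / (2 * eps)
assert abs(numeric - dW2[0, 0]) < 1e-6
```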
11. Video: Random Initialization
What happens if you initialize weights to zero?
Random initialization
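The symmetry problem from this video can be demonstrated directly: with W1 = 0 every hidden unit computes the same activation and receives the same gradient, so the units can never differentiate, while small random values break the tie. The network shapes and the 0.01 scale follow the lecture's convention; the data is an illustrative assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 5))
Y = rng.integers(0, 2, size=(1, 5)).astype(float)

def hidden_grads(W1):
    """dW1 for a one-hidden-layer network with symmetric output
    weights (all ones), one row of dW1 per hidden unit."""
    b1 = np.zeros((4, 1))
    W2, b2 = np.ones((1, 4)), np.zeros((1, 1))
    A1 = np.tanh(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    dZ2 = A2 - Y
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
    return dZ1 @ X.T / X.shape[1]

# Zero init: every row of dW1 is identical, so every update keeps the
# hidden units identical to each other.
dW1_zero = hidden_grads(np.zeros((4, 3)))
assert np.allclose(dW1_zero, dW1_zero[0:1, :])

# Small random init (the lecture's W = randn * 0.01): rows differ,
# so the units can learn different features.
dW1_rand = hidden_grads(rng.standard_normal((4, 3)) * 0.01)
assert not np.allclose(dW1_rand, dW1_rand[0:1, :])
```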