Stochastic gradient descent
Stochastic gradient descent (SGD) is an optimization algorithm used to train machine learning models. At each step it randomly selects a single training example, or a small subset of examples (a mini-batch), computes the gradient of the loss function on that sample, and adjusts the parameters a small step in the direction that decreases the loss. Repeating this over many steps drives the parameters toward a (local) minimum of the loss. Because each gradient is estimated from only a fraction of the data, the updates are noisy but far cheaper than computing the gradient over the full dataset.
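To make the loop concrete, here is a minimal sketch in Python (NumPy) of mini-batch SGD applied to a least-squares problem. The synthetic data, learning rate, batch size, and epoch count are illustrative assumptions, not part of the definition above.

```python
# Minimal sketch of mini-batch stochastic gradient descent
# on a synthetic linear regression (least squares) problem.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true + noise (assumed for illustration).
X = rng.normal(size=(1000, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)

w = np.zeros(3)     # parameters to learn
lr = 0.01           # learning rate (step size)
batch_size = 32     # size of each randomly selected batch
epochs = 20

for epoch in range(epochs):
    # Shuffle indices so each epoch visits the data in a new random order.
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        # Gradient of the mean squared error on this batch:
        # d/dw mean((Xb @ w - yb)^2) = (2 / b) * Xb.T @ (Xb @ w - yb)
        grad = 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)
        # Step in the direction that decreases the batch loss.
        w -= lr * grad

print("learned:", w, "true:", w_true)
```

Setting `batch_size = 1` recovers classic single-sample SGD, while `batch_size = len(X)` recovers full-batch gradient descent; the mini-batch setting trades gradient noise against per-step cost between those extremes.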
Similar Concepts
- accelerated gradient descent methods
- batch gradient descent
- conjugate gradient descent
- convergence of gradient descent
- gradient descent for linear regression
- gradient descent for neural networks
- hybrid optimization algorithms combining gradient descent
- mini-batch gradient descent
- non-convex optimization using gradient descent
- online gradient descent
- proximal gradient descent
- stochastic approximation
- stochastic calculus
- stochastic differential equations
- stochastic optimization