Batch gradient descent performs redundant computations for large datasets, as it recomputes gradients for similar examples before each parameter update. SGD does away with this redundancy by performing one update at a time. It is therefore usually much faster and can also be used to learn online.
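To make the contrast concrete, here is a minimal sketch on a toy least-squares problem; the data, step size, and epoch count are arbitrary illustrative choices, not taken from the text. Batch gradient descent makes one update per full pass, while SGD updates after every example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # 1000 examples, 5 features
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)

lr, n_epochs = 0.01, 20

# Batch gradient descent: one update per full pass over the data.
w = np.zeros(5)
for _ in range(n_epochs):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad

# SGD: one update per example; parameters move after every single example,
# which also makes it natural for online learning on streaming data.
w_sgd = np.zeros(5)
for _ in range(n_epochs):
    for i in rng.permutation(len(y)):       # shuffle to decorrelate updates
        x_i, y_i = X[i], y[i]
        grad_i = 2 * (x_i @ w_sgd - y_i) * x_i
        w_sgd -= lr * grad_i
```

With the same learning rate, SGD performs 1000 updates per epoch versus one for the batch version, which is where its speed advantage on large datasets comes from.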

A single-example gradient is not guaranteed to point in the correct direction, but one can show that the stochastic gradient equals the full gradient in expectation, so most updates point in a good direction. Using only one point may send an individual iteration the wrong way, yet over many iterations the trajectory still moves toward the minimum, and each update is far cheaper to compute.
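This can be checked numerically. The sketch below continues the toy setup above (same `X`, `y`) and uses cosine similarity simply as a convenient measure of direction; the threshold and printed quantities are illustrative:

```python
# Compare single-example gradients to the full-batch gradient at a fixed point.
w0 = np.zeros(5)
full_grad = 2 * X.T @ (X @ w0 - y) / len(y)

per_example = np.array([2 * (X[i] @ w0 - y[i]) * X[i] for i in range(len(y))])

# Cosine similarity between each single-example gradient and the full gradient.
cos = per_example @ full_grad / (
    np.linalg.norm(per_example, axis=1) * np.linalg.norm(full_grad))

print("fraction pointing in a descent direction:", np.mean(cos > 0))
# The average of the per-example gradients recovers the full gradient exactly,
# which is the "correct direction in expectation" property.
print("mean single-example gradient matches full gradient:",
      np.allclose(per_example.mean(axis=0), full_grad))
```

Individually the single-example gradients are noisy, but most of them have positive cosine similarity with the full gradient, and their mean equals it, which is why SGD makes progress despite occasional wrong steps.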