Actually, the backpropagation algorithm is essentially gradient descent applied to a neural network. As you may know, gradient descent adjusts the model parameters (the weights) in order to reduce the error of the model's predictions. This means that you start with some random weights, compute the prediction, compute the error, and then take a step in the direction of the so-called anti-gradient of the cost function. From calculus, the gradient points in the direction of the fastest growth of a function, and the anti-gradient points in the opposite direction. So gradient descent changes the weights by some small delta in the anti-gradient direction, which moves the cost function closer to its minimum.
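To make the update rule concrete, here is a minimal sketch of gradient descent on a one-dimensional toy cost function (the cost J(w) = (w - 3)^2, the learning rate, and the starting weight are all illustrative assumptions, not anything from your setup):

```python
# Minimal gradient descent on a toy cost J(w) = (w - 3)^2.
# The cost, learning rate, and starting point are illustrative
# assumptions chosen just to show the update rule.

def cost(w):
    return (w - 3.0) ** 2

def gradient(w):
    # dJ/dw = 2 * (w - 3); the anti-gradient is the negative of this.
    return 2.0 * (w - 3.0)

w = 0.0             # some "random" initial weight
learning_rate = 0.1

for step in range(50):
    w -= learning_rate * gradient(w)  # step in the anti-gradient direction

print(w, cost(w))  # w approaches 3, the minimum of the cost
```

This is the whole idea: w <- w - learning_rate * gradient(w). The only complication in a neural network is computing that gradient for every weight, which is exactly what backpropagation does.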
Now, when you have a neural network, you do the same, but there are weights attached to each neuron on each layer of the network. The backpropagation algorithm is simply a technique for measuring the impact of each neuron (in practice, each weight) on the final prediction and, in turn, on the final error. In other words, backpropagation is the way to apply gradient descent to every weight of the neural network according to that weight's impact on the overall prediction error; it computes this impact layer by layer, from the output back to the input, using the chain rule.
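Here is a minimal sketch of this manual backward pass for a tiny two-layer network on the XOR problem (the architecture, data, loss, learning rate, and random seed are all my own illustrative choices, and constant factors from the loss derivative are folded into the learning rate):

```python
import numpy as np

# A tiny two-layer network trained with manual backpropagation.
# Everything here (sizes, data, learning rate) is an illustrative
# assumption; the point is the chain-rule bookkeeping.

rng = np.random.default_rng(0)

# Toy data: the XOR mapping (4 samples, 2 features).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 1.0

for epoch in range(5000):
    # Forward pass: compute the prediction.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    y_hat = sigmoid(h @ W2 + b2)   # network output

    # Backward pass: propagate the error back through the layers.
    # For MSE loss, the output delta is (y_hat - y) * sigmoid'(z),
    # where sigmoid'(z) = y_hat * (1 - y_hat).
    grad_out = (y_hat - y) * y_hat * (1 - y_hat)     # output-layer delta
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)    # hidden-layer delta (chain rule)

    # Gradient descent step on every weight and bias.
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(y_hat, 2))  # should approach [[0], [1], [1], [0]]
```

Notice how each layer's delta is computed from the delta of the layer after it; that backward flow of error terms is what gives the algorithm its name.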
Regarding the second part of your question: in general, it is good to understand the whole process of why and how backpropagation is used. Nevertheless, if you are using a modern deep learning framework (for example, TensorFlow, Keras, or PyTorch), you do not need to implement backpropagation manually. You just have to specify the architecture of the neural network and its hyperparameters, and the framework will perform backpropagation for you.
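For comparison, here is the same toy problem sketched in PyTorch (the layer sizes, loss, and optimizer are illustrative choices): a single loss.backward() call replaces all of the manual gradient code above.

```python
import torch
import torch.nn as nn

# The same idea in PyTorch: you only describe the architecture and
# the training step; autograd performs backpropagation for you.

model = nn.Sequential(
    nn.Linear(2, 4),
    nn.Sigmoid(),
    nn.Linear(4, 1),
    nn.Sigmoid(),
)

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)

for epoch in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # backpropagation happens here, automatically
    optimizer.step()  # gradient descent step on every weight
```

Keras and TensorFlow work the same way: model.fit() (or tf.GradientTape) records the forward pass and computes all the gradients for you.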