Machine learning (ML) interatomic potentials offer new opportunities to accurately simulate larger material systems over longer time scales. In the past decade, a large number of ML potential models have been proposed, but the assessment of their reliability and of the quality of the resulting simulations has largely lagged behind. Quantifying the uncertainty in such ML potentials is imperative because their functional forms are highly flexible and do not explicitly encode the bonding information between atoms. In this talk, I will discuss a class of dropout uncertainty neural network potentials that provide rigorous uncertainty estimates, interpretable from both Bayesian and frequentist statistical perspectives. I will demonstrate the strengths and potential limitations of this approach using examples involving the fitting of carbon allotropes, and show how to propagate the model uncertainty through molecular simulations. Finally, I will discuss recent developments in the field, including model uncertainty quantification and calibration.
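The core idea behind dropout-based uncertainty can be illustrated with a minimal Monte Carlo dropout sketch: keep dropout active at prediction time, run many stochastic forward passes, and use the spread of the outputs as an uncertainty estimate. The tiny network and weights below are purely illustrative assumptions, not the potential discussed in the talk:

```python
import random
import statistics

random.seed(0)

# Illustrative weights for a toy 1D "potential" with one hidden layer.
W1 = [0.8, -0.5, 1.2, 0.3]   # input -> hidden weights
W2 = [0.6, 0.9, -0.4, 0.7]   # hidden -> output weights
P_DROP = 0.25                # dropout probability

def forward(x, stochastic=True):
    """One forward pass; dropout is applied to the hidden layer."""
    out = 0.0
    for w1, w2 in zip(W1, W2):
        h = max(0.0, w1 * x)                # ReLU hidden unit
        if stochastic and random.random() < P_DROP:
            continue                        # unit dropped for this pass
        out += w2 * h / (1.0 - P_DROP)      # inverted-dropout scaling
    return out

def predict_with_uncertainty(x, n_samples=200):
    """Monte Carlo dropout: sample mean is the prediction,
    sample standard deviation serves as the uncertainty."""
    samples = [forward(x) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

energy, sigma = predict_with_uncertainty(1.5)
print(f"E = {energy:.3f} +/- {sigma:.3f}")
```

In a molecular simulation, a per-configuration sigma of this kind can flag atomic environments far from the training data, which is one way the model uncertainty discussed in the talk can be propagated.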