Real-world scenarios in molecular dynamics simulations are highly complex and cannot be efficiently approximated by single-core applications. To achieve convincing results in a reasonable amount of time, scientists use supercomputers with a large number of compute units.
So far, the demonstrator MD-Flexible for the AutoPas particle simulation library was limited to a single process. MD-Flexible has now been turned into a massively parallel application that uses MPI for inter-process communication and adaptive domain decomposition for workload balancing. It subdivides the simulation domain into a regular grid, which is balanced either by ALL’s Tensor method or by the Inverted Pressure method presented here. After a brief introduction to Molecular Dynamics and Adaptive Domain Decomposition, we take a look at the technologies involved in the parallelization of MD-Flexible and compare it to other well-known massively parallel Molecular Dynamics applications. Before going into detail about the implementation of massive parallelism, we explain at a high level how MD-Flexible worked before and what has changed during the implementation. Next, we describe the details of the adaptive domain decomposition, the resulting inter-process communication, and other requirements that arise from the parallelization. The result has been evaluated by running three scenarios with up to 128 processes; the best speedup achieved with 128 processes is 40. The performance of the parallelization is investigated during the evaluation, and several suggestions are given on how it can be improved.
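To make the regular-grid subdivision mentioned above more concrete, the following is a minimal sketch, not taken from MD-Flexible itself, of how such a decomposition can be set up with MPI's Cartesian topology. The 3D grid shape, the periodic boundaries, and all identifiers are illustrative assumptions.

```cpp
// Sketch: regular-grid domain decomposition via an MPI Cartesian topology.
// This is an illustrative assumption, not MD-Flexible's actual implementation.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int numProcesses = 0, rank = 0;
  MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Let MPI factor the process count into a 3D grid, e.g. 128 -> 8 x 4 x 4.
  int dims[3] = {0, 0, 0};
  MPI_Dims_create(numProcesses, 3, dims);

  // Periodic boundaries in all three directions, as is common in MD scenarios.
  int periods[3] = {1, 1, 1};
  MPI_Comm cartComm;
  MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, /*reorder=*/1, &cartComm);

  // Each rank derives its coordinates, i.e. which subdomain of the regular
  // grid it owns; neighbor ranks for halo exchange follow from these.
  int coords[3];
  MPI_Cart_coords(cartComm, rank, 3, coords);
  std::printf("Rank %d owns grid cell (%d, %d, %d) of a %dx%dx%d grid\n",
              rank, coords[0], coords[1], coords[2], dims[0], dims[1], dims[2]);

  MPI_Comm_free(&cartComm);
  MPI_Finalize();
  return 0;
}
```

An adaptive scheme such as ALL’s Tensor method or the Inverted Pressure method would then shift the boundaries of these subdomains between iterations according to the measured load, rather than keeping the grid uniform.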