MPI (Message Passing Interface) is a cross-language communication protocol for programming parallel computers. Both point-to-point and collective communication (such as broadcast) are supported.
It is a message-passing application programming interface, together with protocol and semantic specifications that describe how its features must behave in any implementation. MPI's goals are high performance, scalability, and portability.
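As a sketch of the two communication styles mentioned above, the following C program sends a value point-to-point from rank 0 to rank 1, then broadcasts it to all ranks. It assumes an MPI implementation such as Open MPI or MPICH is installed, and is compiled with `mpicc` and launched with `mpirun`:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int value = 0;

    /* Point-to-point: rank 0 sends to rank 1 (needs at least 2 processes). */
    if (size > 1) {
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }

    /* Collective: rank 0 broadcasts its value to every rank. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d of %d has value %d\n", rank, size, value);

    MPI_Finalize();
    return 0;
}
```

Launched with, e.g., `mpirun -np 4 ./a.out`, every rank ends up holding the same value after the broadcast.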
MPI remains the dominant model used in high-performance computing today.
However, the MPI-1 model has no shared-memory concept, and MPI-2 has only a limited distributed shared-memory concept.
Nonetheless, MPI programs are regularly run on shared-memory machines.
Designing programs around the MPI model, rather than around an explicit shared-memory model on NUMA architectures, has advantages because MPI encourages memory locality.
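The limited distributed shared-memory concept of MPI-2 mentioned above refers to one-sided communication: a process exposes a window of its memory that other ranks can read or write without a matching receive call. A minimal sketch, assuming an MPI-2-capable implementation and at least two processes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank exposes one int through an RMA (remote memory access) window. */
    int buf = rank * 100;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    int remote = -1;
    if (rank == 0 && size > 1) {
        /* Rank 0 reads rank 1's buffer directly; rank 1 makes no receive call. */
        MPI_Get(&remote, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 0 && size > 1)
        printf("rank 0 read %d from rank 1\n", remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

The fences delimit an access epoch; outside them, the contents of a window another rank may be writing are not guaranteed to be consistent, which is one sense in which this shared-memory facility is "limited".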
Although MPI belongs in layers 5 and higher of the OSI reference model, implementations may cover most layers, with sockets and the Transmission Control Protocol (TCP) used in the transport layer.
Most MPI implementations consist of a specific set of routines (an API) directly callable from C, C++, and Fortran, as well as from any language able to interface with such libraries, such as C#, Java, or Python.
MPI's advantages over older message-passing libraries are portability (MPI has been implemented for almost every distributed-memory architecture) and speed (each implementation is in principle optimized for the hardware on which it runs).
It can also be used without installing large supporting software for data computation.