MPICH2 Release Information
The following is reproduced essentially verbatim from files contained within the MPICH2 tarball downloaded from http://www.mpich.org/downloads/.
NOTE: MPICH2 has been effectively deprecated by the open source community in favor of MPICH-3, which Scyld ClusterWare distributes as a set of mpich-scyld RPMs. Scyld ClusterWare continues to distribute mpich2-scyld, although we encourage users to migrate to MPICH-3, which enjoys active support from the community.
===============================================================================
Changes in 1.5
===============================================================================
# OVERALL: Nemesis now supports an "--enable-yield=..." configure
option for better performance/behavior when oversubscribing
processes to cores. Some form of this option is enabled by default
on Linux, Darwin, and systems that support sched_yield().
# OVERALL: Added support for Intel Many Integrated Core (MIC)
architecture: shared memory, TCP/IP, and SCIF based communication.
# OVERALL: Added support for IBM BG/Q architecture. Thanks to IBM
for the contribution.
# MPI-3: const support has been added to mpi.h, although it is
disabled by default. It can be enabled on a per-translation-unit
basis with "#define MPICH2_CONST const" (see the sketch at the end
of this section).
# MPI-3: Added support for MPIX_Type_create_hindexed_block.
# MPI-3: The new MPI-3 nonblocking collective functions are now
available as "MPIX_" functions (e.g., "MPIX_Ibcast"); a usage sketch
appears at the end of this section.
# MPI-3: The new MPI-3 neighborhood collective routines are now available as
"MPIX_" functions (e.g., "MPIX_Neighbor_allgather").
# MPI-3: The new MPI-3 MPI_Comm_split_type function is now available
as an "MPIX_" function.
# MPI-3: The new MPI-3 tools interface is now available as "MPIX_T_"
functions. This is a beta implementation right now with several
limitations, including no support for multithreading. Several
performance variables related to CH3's message matching are exposed
through this interface.
# MPI-3: The new MPI-3 matched probe functionality is supported via
the new routines MPIX_Mprobe, MPIX_Improbe, MPIX_Mrecv, and
MPIX_Imrecv (see the sketch at the end of this section).
# MPI-3: The new MPI-3 nonblocking communicator duplication routine,
MPIX_Comm_idup, is now supported. It will only work for
single-threaded programs at this time.
# MPI-3: MPIX_Comm_reenable_anysource support
# MPI-3: Native MPIX_Comm_create_group support (updated version of
the prior MPIX_Group_comm_create routine).
# MPI-3: MPI_Intercomm_create's internal communication no longer
interferes with point-to-point communication, even if point-to-point
operations on the parent communicator use the same tag or
MPI_ANY_TAG.
# Build system: Completely revamped build system to rely fully on
autotools. Parallel builds ("make -j8" and similar) are now supported.
# Build system: Renamed "./maint/updatefiles" to "./autogen.sh" and
"configure.in" to "configure.ac".
# JUMPSHOT: Improvements to Jumpshot to handle thousands of
timelines, including performance improvements to slog2 in such
cases.
# JUMPSHOT: Added navigation support to locate a chosen drawable's
ends when the viewport has been scrolled far from the drawable.
# PM/PMI: Added support for memory binding policies.
# PM/PMI: Various improvements to the process binding support in
Hydra. Several new pre-defined binding options are provided.
# PM/PMI: Upgraded to hwloc-1.5
# PM/PMI: Several improvements to PBS support to natively use the PBS
launcher.
# Several other minor bug fixes, memory leak fixes, and code cleanup.
A full list of changes is available using:
svn log -r8478:HEAD https://svn.mcs.anl.gov/repos/mpi/mpich2/tags/release/mpich2-1.5
... or at the following link:
https://trac.mcs.anl.gov/projects/mpich2/log/mpich2/tags/release/mpich2-1.5?action=follow_copy&rev=HEAD&stop_rev=8478&mode=follow_copy
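
The following minimal C sketch (not part of the MPICH2 distribution)
illustrates two of the MPI-3 preview features listed above: enabling the
per-translation-unit const support and issuing a nonblocking broadcast
through the "MPIX_" prefix. It assumes MPIX_Ibcast takes the same
arguments as the standard MPI_Ibcast.

    /* Sketch only: enable const-qualified prototypes in this translation
       unit (MPICH2 1.5) and overlap a nonblocking broadcast with local
       work. Assumes MPIX_Ibcast mirrors the MPI_Ibcast signature. */
    #define MPICH2_CONST const
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int buf[4] = {0, 0, 0, 0};
        int rank;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            buf[0] = 1; buf[1] = 2; buf[2] = 3; buf[3] = 4;
        }

        /* Start the broadcast without blocking... */
        MPIX_Ibcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD, &req);
        /* ...do unrelated local work here, then complete the collective. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        printf("rank %d: buf[3] = %d\n", rank, buf[3]);
        MPI_Finalize();
        return 0;
    }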
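
Similarly, the matched probe routines can be used to receive a message of
unknown size in a thread-safe way. The sketch below assumes the preview
message handle type is named MPIX_Message; check the mpi.h shipped with
this release for the exact type name.

    /* Sketch only: probe, allocate, then receive exactly the matched
       message using the MPI-3 preview routines in MPICH2 1.5. */
    #include <mpi.h>
    #include <stdlib.h>

    void recv_unknown_size(MPI_Comm comm, int source, int tag)
    {
        MPIX_Message msg;
        MPI_Status status;
        int count;
        int *buf;

        /* Probe and atomically claim the matching message. */
        MPIX_Mprobe(source, tag, comm, &msg, &status);
        MPI_Get_count(&status, MPI_INT, &count);

        buf = (int *) malloc(count * sizeof(int));
        /* Receive the specific message claimed above. */
        MPIX_Mrecv(buf, count, MPI_INT, &msg, MPI_STATUS_IGNORE);
        /* ... use buf ... */
        free(buf);
    }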
===============================================================================
Changes in 1.4.1
===============================================================================
# OVERALL: Several improvements to the ARMCI API implementation
within MPICH2.
# Build system: Added beta support for DESTDIR while installing
MPICH2.
# PM/PMI: Upgrade hwloc to 1.2.1rc2.
# PM/PMI: Initial support for the PBS launcher.
# Several other minor bug fixes, memory leak fixes, and code cleanup.
A full list of changes is available using:
svn log -r8675:HEAD \
https://svn.mcs.anl.gov/repos/mpi/mpich2/tags/release/mpich2-1.4.1
... or at the following link:
https://trac.mcs.anl.gov/projects/mpich2/log/mpich2/tags/release/mpich2-1.4.1?action=follow_copy&rev=HEAD&stop_rev=8675&mode=follow_copy
===============================================================================
Changes in 1.4
===============================================================================
# OVERALL: Improvements to fault tolerance for collective
operations. Thanks to Rui Wang @ ICT for reporting several of these
issues.
# OVERALL: Improvements to the universe size detection. Thanks to
Yauheni Zelenko for reporting this issue.
# OVERALL: Bug fixes for Fortran attributes on some systems. Thanks
to Nicolai Stange for reporting this issue.
# OVERALL: Added new ARMCI API implementation (experimental).
# OVERALL: Added new MPIX_Group_comm_create function to allow
non-collective creation of sub-communicators (see the sketch at the
end of this section).
# FORTRAN: Bug fixes in the MPI_DIST_GRAPH_ Fortran bindings.
# PM/PMI: Support for a manual "none" launcher in Hydra to allow
higher-level tools to be built on top of Hydra. Thanks to Justin
Wozniak for reporting this issue, providing several patches for the
fix, and testing it.
# PM/PMI: Bug fixes in Hydra to handle non-uniform layouts of hosts
better. Thanks to the MVAPICH group at OSU for reporting this issue
and testing it.
# PM/PMI: Bug fixes in Hydra to handle cases where only a subset of
the available launchers or resource managers are compiled
in. Thanks to Satish Balay @ Argonne for reporting this issue.
# PM/PMI: Support for a different username to be provided for each
host; this only works for launchers that support this (such as
SSH).
# PM/PMI: Bug fixes for using Hydra on AIX machines. Thanks to
Kitrick Sheets @ NCSA for reporting this issue and providing the
first draft of the patch.
# PM/PMI: Bug fixes in memory allocation/management for environment
variables that were showing up on older platforms. Thanks to Steven
Sutphen for reporting the issue and providing detailed analysis to
track down the bug.
# PM/PMI: Added support for providing a configuration file to pick
the default options for Hydra. Thanks to Saurabh T. for reporting
the issues with the current implementation and working with us to
improve this option.
# PM/PMI: Improvements to the error code returned by Hydra.
# PM/PMI: Bug fixes for handling "=" in environment variable values
in Hydra.
# PM/PMI: Upgrade the hwloc version to 1.2.
# COLLECTIVES: Performance and memory usage improvements for MPI_Bcast
in certain cases.
# VALGRIND: Fix incorrect Valgrind client request usage when MPICH2 is
built for memory debugging.
# BUILD SYSTEM: "--enable-fast" and "--disable-error-checking" are once
again valid simultaneous options to configure.
# TEST SUITE: Several new tests for MPI RMA operations.
# Several other minor bug fixes, memory leak fixes, and code cleanup.
A full list of changes is available using:
svn log -r7838:HEAD \
https://svn.mcs.anl.gov/repos/mpi/mpich2/tags/release/mpich2-1.4
... or at the following link:
https://trac.mcs.anl.gov/projects/mpich2/log/mpich2/tags/release/mpich2-1.4?action=follow_copy&rev=HEAD&stop_rev=7838&mode=follow_copy
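
As a rough illustration of the experimental MPIX_Group_comm_create
routine mentioned above, the sketch below (not from the MPICH2 sources)
builds a sub-communicator over the lower half of a communicator's ranks
without a collective call over the parent communicator. It assumes the
argument list matches the later standardized MPI_Comm_create_group
(comm, group, tag, &newcomm); consult the installed headers for the
exact prototype.

    /* Sketch only: non-collective sub-communicator creation with the
       experimental MPIX_Group_comm_create added in MPICH2 1.4. Only the
       ranks included in the group need to make this call. */
    #include <mpi.h>

    int make_lower_half_comm(MPI_Comm comm, MPI_Comm *newcomm)
    {
        MPI_Group comm_group, half_group;
        int n, half_size, i;
        int ranks[128];            /* assumes n/2 <= 128 for brevity */

        MPI_Comm_size(comm, &n);
        half_size = n / 2;
        for (i = 0; i < half_size; i++)
            ranks[i] = i;

        MPI_Comm_group(comm, &comm_group);
        MPI_Group_incl(comm_group, half_size, ranks, &half_group);

        /* Assumed signature: (comm, group, tag, &newcomm). */
        MPIX_Group_comm_create(comm, half_group, 0, newcomm);

        MPI_Group_free(&half_group);
        MPI_Group_free(&comm_group);
        return MPI_SUCCESS;
    }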
----------------------------------------------------------------------
KNOWN ISSUES
----------------------------------------------------------------------
### Known runtime failures
* MPI_Alltoall might fail in some cases because of the newly added
fault-tolerance features. If you are seeing this error, try setting
the environment variable MPICH_ENABLE_COLL_FT_RET=0.
### Threads
* ch3:sock does not (and will not) support fine-grained threading.
* MPI-IO APIs are not currently thread-safe when using fine-grained
threading (--enable-thread-cs=per-object).
* ch3:nemesis:tcp fine-grained threading is still experimental and may
have correctness or performance issues. Known correctness issues
include dynamic process support and generalized request support.
### Lacking channel-specific features
* ch3 does not presently support communication across heterogeneous
platforms (e.g., a big-endian machine communicating with a
little-endian machine).
* ch3:nemesis:mx does not support dynamic processes at this time.
* Support for "external32" data representation is incomplete. This
affects the MPI_Pack_external and MPI_Unpack_external routines, as
well the external data representation capabilities of ROMIO.
* ch3 has known problems in some cases when threading and dynamic
processes are used together on communicators of size greater than
one.
### Build Platforms
* Builds using the native "make" program on OpenSolaris fail for
unknown reasons. A workaround is to use GNU Make instead. See the
following ticket for more information:
http://trac.mcs.anl.gov/projects/mpich2/ticket/1122
* Build fails with the Intel compiler suite 13.0 because of weak
symbol issues in the compiler. A workaround is to disable weak
symbol support by passing --disable-weak-symbols to configure. See
the following ticket for more information:
https://trac.mcs.anl.gov/projects/mpich2/ticket/1659
* The sctp channel is fully supported on FreeBSD and Mac OS X. At
the time of this release, the SCTP stack in the Linux kernel still
contained bugs, which will hopefully be resolved soon. The channel
is known not to work under Solaris or Windows. For Solaris, the
SCTP API available in the kernel of standard Solaris 10 is a subset
of the standard API used by the sctp channel; cooperation with the
Sun SCTP developers to support ch3:sctp under Solaris in future
releases is ongoing. For Windows, no kernel-based SCTP stack is
currently known to exist.
### Process Managers
* The MPD process manager can only handle relatively small amounts of
data on stdin and may also have problems if there is data on stdin
that is not consumed by the program.
* The SMPD process manager does not work reliably with threaded MPI
processes. MPI_Comm_spawn() does not currently work for >= 256
arguments with smpd.
### Performance issues
* In some cases, SMP-aware collectives do not perform as well as
non-SMP-aware collectives, e.g., MPI_Reduce with message sizes
larger than 64 KiB. They can be disabled with the configure option
"--disable-smpcoll".
* MPI_Irecv operations that are not explicitly completed before
MPI_Finalize is called may fail to complete before MPI_Finalize
returns, and thus never complete. Furthermore, any matching send
operations may erroneously fail. By "explicitly completed" we mean
that the request associated with the operation is completed by one
of the MPI_Test or MPI_Wait routines (see the sketch below).
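
The following minimal sketch (not from the MPICH2 distribution; run with
at least two processes) shows the explicit completion described above:
the request returned by MPI_Irecv is completed with MPI_Wait before
MPI_Finalize is called.

    /* Sketch only: explicitly complete a nonblocking receive before
       MPI_Finalize to avoid the issue described above. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Request req = MPI_REQUEST_NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 1) {
            MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        } else if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        }

        /* Explicit completion: without this MPI_Wait, the receive might
           never complete and the matching send could erroneously fail. */
        if (req != MPI_REQUEST_NULL)
            MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }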
### C++ Binding
* The MPI datatypes corresponding to Fortran datatypes are not
available (e.g., no MPI::DOUBLE_PRECISION).
* The C++ binding does not implement a separate profiling interface,
as allowed by the MPI-2 Standard (Section 10.1.10 Profiling).
* MPI::ERRORS_RETURN may still throw exceptions in the event of an
error rather than silently returning.