README

Copyright (c) 2004-2007 The Trustees of Indiana University and Indiana
                        University Research and Technology
                        Corporation.  All rights reserved.
Copyright (c) 2004-2007 The University of Tennessee and The University
                        of Tennessee Research Foundation.  All rights
                        reserved.
Copyright (c) 2004-2008 High Performance Computing Center Stuttgart,
                        University of Stuttgart.  All rights reserved.
Copyright (c) 2004-2007 The Regents of the University of California.
                        All rights reserved.
Copyright (c) 2006-2014 Cisco Systems, Inc.  All rights reserved.
Copyright (c) 2006-2011 Mellanox Technologies. All rights reserved.
Copyright (c) 2006-2012 Oracle and/or its affiliates.  All rights reserved.
Copyright (c) 2007      Myricom, Inc.  All rights reserved.
Copyright (c) 2008      IBM Corporation.  All rights reserved.
Copyright (c) 2010      Oak Ridge National Labs.  All rights reserved.
Copyright (c) 2011      University of Houston. All rights reserved.
Copyright (c) 2013-2014 Intel, Inc.  All rights reserved.
$COPYRIGHT$

Additional copyrights may follow

$HEADER$

===========================================================================

When submitting questions and problems, be sure to include as much
extra information as possible.  This web page details all the
information that we request in order to provide assistance:

     http://www.open-mpi.org/community/help/

The best way to report bugs, send comments, or ask questions is to
sign up on the user's and/or developer's mailing list (for user-level
and developer-level questions; when in doubt, send to the user's
list):

        users@open-mpi.org
        devel@open-mpi.org

Because of spam, only subscribers are allowed to post to these lists
(ensure that you subscribe with and post from exactly the same e-mail
address -- joe@example.com is considered different from
joe@mycomputer.example.com!).  Visit these pages to subscribe to the
lists:

     http://www.open-mpi.org/mailman/listinfo.cgi/users
     http://www.open-mpi.org/mailman/listinfo.cgi/devel

Thanks for your time.

===========================================================================

Much, much more information is also available in the Open MPI FAQ:

    http://www.open-mpi.org/faq/

===========================================================================

The following abbreviated list of release notes applies to this code
base as of this writing (April 2014):

General notes
-------------

- Open MPI now includes two public software layers: MPI and OpenSHMEM.
  Throughout this document, references to Open MPI implicitly include
  both of these layers.  When a distinction between these two layers
  is necessary, we will refer to them as the "MPI" and "OSHMEM"
  layers, respectively.

- OpenSHMEM is a collaborative effort between academia, industry, and
  the U.S. Government to create a specification for a standardized API
  for parallel programming in the Partitioned Global Address Space
  (PGAS).  For more information about the OpenSHMEM project, including
  access to the current OpenSHMEM specification, please visit:

     http://openshmem.org/

  This OpenSHMEM implementation is provided on an experimental basis;
  it has been lightly tested and will only work in Linux environments.
  Although this implementation attempts to be portable to multiple
  different environments and networks, it is still new and will likely
  experience growing pains typical of any new software package.
  End-user feedback is greatly appreciated.

  This implementation will currently most likely provide optimal
  performance on Mellanox hardware and software stacks.  Overall
  performance is expected to improve as other network vendors and/or
  institutions contribute platform-specific optimizations.

  See below for details on how to enable the OpenSHMEM implementation.

- Open MPI includes support for a wide variety of supplemental
  hardware and software packages.  When configuring Open MPI, you may
  need to supply additional flags to the "configure" script in order
  to tell Open MPI where the header files, libraries, and any other
  required files are located.  As such, running "configure" by itself
  may not include support for all the devices (etc.) that you expect,
  especially if their support headers / libraries are installed in
  non-standard locations.  Network interconnects are an easy example
  to discuss -- Myrinet and OpenFabrics networks, for example, both
  have supplemental headers and libraries that must be found before
  Open MPI can build support for them.  You must specify where these
  files are with the appropriate options to configure.  See the
  listing of configure command-line switches, below, for more details.

- The majority of Open MPI's documentation is here in this file, the
  included man pages, and on the web site FAQ
  (http://www.open-mpi.org/).  This will eventually be supplemented
  with cohesive installation and user documentation files.

- Note that Open MPI documentation uses the word "component"
  frequently; the word "plugin" is probably more familiar to most
  users.  As such, end users can generally substitute the word
  "plugin" wherever they see "component" in our documentation.  For
  what it's worth, we use the word "component" for historical reasons,
  mainly because it is part of our acronyms and internal API function
  calls.

- The run-time systems that are currently supported are:
  - rsh / ssh
  - LoadLeveler
  - PBS Pro, Torque
  - Platform LSF (v7.0.2 and later)
  - SLURM
  - Cray XE and XC
  - Oracle Grid Engine (OGE) 6.1, 6.2 and open source Grid Engine

- Systems that have been tested are:
  - Linux (various flavors/distros), 32 bit, with gcc
  - Linux (various flavors/distros), 64 bit (x86), with gcc, Absoft,
    Intel, and Portland (*)
  - OS X (10.6, 10.7, 10.8, 10.9), 32 and 64 bit (x86_64), with gcc and
    Absoft compilers (*)

  (*) Be sure to read the Compiler Notes, below.

- Other systems have been lightly (but not fully) tested:
  - Cygwin 32 & 64 bit with gcc
  - ARMv4, ARMv5, ARMv6, ARMv7 (when using non-inline assembly; only
    ARMv7 is fully supported when -DOMPI_DISABLE_INLINE_ASM is used).
  - Other 64 bit platforms (e.g., Linux on PPC64)
  - Oracle Solaris 10 and 11, 32 and 64 bit (SPARC, i386, x86_64),
    with Oracle Solaris Studio 12.2 and 12.3

Compiler Notes
--------------

- Mixing compilers from different vendors when building Open MPI
  (e.g., using the C/C++ compiler from one vendor and the Fortran
  compiler from a different vendor) has been successfully employed by
  some Open MPI users (discussed on the Open MPI user's mailing list),
  but such configurations are not tested and not documented.  For
  example, such configurations may require additional compiler /
  linker flags to make Open MPI build properly.

- In general, the latest versions of compilers of a given vendor's
  series have the fewest bugs.  We have seen cases where Vendor XYZ's
  compiler version A.B fails to compile Open MPI, but version A.C
  (where C>B) works just fine.  If you run into a compile failure, you
  might want to double check that you have the latest bug fixes and
  patches for your compiler.

- Users have reported issues with older versions of the Fortran PGI
  compiler suite when using Open MPI's (non-default) --enable-debug
  configure option.  Per the above advice of using the most recent
  version of a compiler series, the Open MPI team recommends using the
  latest version of the PGI suite, and/or not using the --enable-debug
  configure option.  If it helps, here's what we have found with some
  (not comprehensive) testing of various versions of the PGI compiler
  suite:

    pgi-8 : NO known good version with --enable-debug
    pgi-9 : 9.0-4 known GOOD
    pgi-10: 10.0-0 known GOOD
    pgi-11: NO known good version with --enable-debug
    pgi-12: 12.10 known GOOD (and 12.8 and 12.9 both known BAD with
            --enable-debug)
    pgi-13: 13.10 known GOOD

- Similarly, there is a known Fortran PGI compiler issue with long
  source directory path names that was resolved in 9.0-4 (9.0-3 is
  known to be broken in this regard).

- IBM's xlf compilers: there is NO known good version that can
  build/link the MPI f08 bindings or build/link the OSHMEM Fortran
  bindings.

- On NetBSD-6 (at least AMD64 and i386), and possibly on OpenBSD,
  libtool misidentifies properties of f95/g95, leading to obscure
  compile-time failures if used to build Open MPI.  You can work
  around this issue by ensuring that libtool will not use f95/g95
  (e.g., by specifying FC=<some_other_compiler>, or otherwise ensuring
  a different Fortran compiler will be found earlier in the path than
  f95/g95), or by disabling the Fortran MPI bindings with
  --disable-mpi-fortran.

- Absoft 11.5.2 plus a service pack from September 2012 (which Absoft
  says is available upon request), or a version later than 11.5.2
  (e.g., 11.5.3), is required to compile the new Fortran mpi_f08
  module.

- Open MPI does not support the Sparc v8 CPU target.  However, as of
  the Solaris Studio 12.1 and later compilers, one should not specify
  -xarch=v8plus or -xarch=v9.  The -m32 and -m64 options for producing
  32 and 64 bit targets, respectively, are now preferred by the
  Solaris Studio compilers.  GCC may require either "-m32" or
  "-mcpu=v9 -m32", depending on GCC version.

- It has been noticed that if one uses CXX=sunCC (where sunCC is a
  link in the Solaris Studio compiler release), the OMPI build system
  has an issue with sunCC and does not build libmpi_cxx.so, and the
  "make install" therefore fails.  We suggest using CXX=CC, which
  works, instead of CXX=sunCC.

- If one tries to build OMPI on Ubuntu with Solaris Studio using the
  C++ compiler and the -m32 option, the following warning may appear:

    CC: Warning: failed to detect system linker version, falling back to
    custom linker usage

  and the build will fail.  This error can be overcome by either
  setting LD_LIBRARY_PATH to the location of the 32 bit libraries
  (most likely /lib32), or giving LDFLAGS="-L/lib32 -R/lib32" to the
  configure command.  Officially, Solaris Studio is not supported on
  Ubuntu Linux distributions, so additional problems might be
  incurred.
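
  For example, a full configure invocation reflecting these
  workarounds might look like the following (a sketch only; the
  Solaris Studio compiler names and the 32 bit library path are
  assumptions that depend on your installation):

    shell$ ./configure CC=cc CXX=CC FC=f95 CFLAGS=-m32 CXXFLAGS=-m32 \
           FCFLAGS=-m32 LDFLAGS="-L/lib32 -R/lib32" ...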

- The Solaris Studio 12.2 compilers may have a problem compiling
  VampirTrace on some Linux platforms.  You can either upgrade to a
  later version of the Solaris Studio compilers (e.g., 12.3 does not
  have this problem), or disable building VampirTrace.

- Open MPI does not support the gccfss compiler (GCC For SPARC
  Systems; a now-defunct compiler project from Sun).

- At least some versions of the Intel 8.1 compiler seg fault while
  compiling certain Open MPI source code files.  As such, it is not
  supported.

- The Intel 9.0 v20051201 compiler on IA64 platforms seems to have a
  problem with optimizing the ptmalloc2 memory manager component (the
  generated code will segv).  As such, the ptmalloc2 component will
  automatically disable itself if it detects that it is on this
  platform/compiler combination.  The only effect that this should
  have is that the MCA parameter mpi_leave_pinned will be inoperative.

- It has been reported that the Intel 9.1 and 10.0 compilers fail to
  compile Open MPI on IA64 platforms.  As of 12 Sep 2012, there is
  very little (if any) testing performed on IA64 platforms (with any
  compiler).  Support is "best effort" for these platforms, but it is
  doubtful that any effort will be expended to fix the Intel 9.1 /
  10.0 compiler issues on this platform.

- Early versions of the Intel 12.1 Linux compiler suite on x86_64 seem
  to have a bug that prevents Open MPI from working.  Symptoms
  include immediate segv of the wrapper compilers (e.g., mpicc) and
  MPI applications.  As of 1 Feb 2012, if you upgrade to the latest
  version of the Intel 12.1 Linux compiler suite, the problem will go
  away.

- Early versions of the Portland Group 6.0 compiler have problems
  creating the C++ MPI bindings as a shared library (e.g., v6.0-1).
  Tests with later versions show that this has been fixed (e.g.,
  v6.0-5).

- The Portland Group compilers prior to version 7.0 require the
  "-Msignextend" compiler flag to extend the sign bit when converting
  from a shorter to longer integer.  This is different from other
  compilers (such as GNU).  When compiling Open MPI with the Portland
  compiler suite, the following flags should be passed to Open MPI's
  configure script:

  shell$ ./configure CFLAGS=-Msignextend CXXFLAGS=-Msignextend \
         --with-wrapper-cflags=-Msignextend \
         --with-wrapper-cxxflags=-Msignextend ...

  This will both compile Open MPI with the proper compile flags and
  also automatically add "-Msignextend" when the C and C++ MPI wrapper
  compilers are used to compile user MPI applications.

- Using the MPI C++ bindings with older versions of the Pathscale
  compiler on some platforms is an old issue that seems to be a
  problem when Pathscale uses a back-end GCC 3.x compiler. Here's a
  proposed solution from the Pathscale support team (from July 2010):

      The proposed work-around is to install gcc-4.x on the system and
      use the pathCC -gnu4 option. Newer versions of the compiler (4.x
      and beyond) should have this fixed, but we'll have to test to
      confirm it's actually fixed and working correctly.

  We don't anticipate that this will be much of a problem for Open MPI
  users these days (our informal testing shows that not many users are
  still using GCC 3.x).  Contact Pathscale support if you continue to
  have problems with Open MPI's C++ bindings.

- Using the Absoft compiler to build the MPI Fortran bindings on Suse
  9.3 is known to fail due to a Libtool compatibility issue.

- MPI Fortran API support has been completely overhauled since the
  Open MPI v1.5/v1.6 series.

  ********************************************************************
  ********************************************************************
  *** There is now only a single Fortran MPI wrapper compiler and a
  *** single Fortran OSHMEM wrapper compiler: mpifort and oshfort,
  *** respectively.  mpif77 and mpif90 still exist, but they are
  *** symbolic links to mpifort.
  ********************************************************************
  *** Similarly, Open MPI's configure script only recognizes the FC
  *** and FCFLAGS environment variables (to specify the Fortran
  *** compiler and compiler flags, respectively).  The F77 and FFLAGS
  *** environment variables are IGNORED.
  ********************************************************************
  ********************************************************************

  As a direct result, it is STRONGLY recommended that you specify a
  Fortran compiler that uses file suffixes to determine Fortran code
  layout (e.g., free form vs. fixed).  For example, with some versions
  of the IBM XLF compiler, it is preferable to use FC=xlf instead of
  FC=xlf90, because xlf will automatically determine the difference
  between free form and fixed Fortran source code.

  However, many Fortran compilers allow specifying additional
  command-line arguments to indicate which Fortran dialect to use.
  For example, if FC=xlf90, you may need to use "mpifort -qfixed ..."
  to compile fixed format Fortran source files.

  You can use either ompi_info or oshmem_info to see with which
  Fortran compiler Open MPI was configured and compiled.

  There are up to three sets of Fortran MPI bindings that may be
  provided, depending on your Fortran compiler:

  - mpif.h: This is the first MPI Fortran interface that was defined
    in MPI-1.  It is a file that is included in Fortran source code.
    Open MPI's mpif.h does not declare any MPI subroutines; they are
    all implicit.

  - mpi module: The mpi module file was added in MPI-2.  It provides
    strong compile-time parameter type checking for MPI subroutines.

  - mpi_f08 module: The mpi_f08 module was added in MPI-3.  It
    provides many advantages over the mpif.h file and mpi module.  For
    example, MPI handles have distinct types (vs. all being integers).
    See the MPI-3 document for more details.

    *** The mpi_f08 module is STRONGLY recommended for all new MPI
        Fortran subroutines and applications.  Note that the mpi_f08
        module can be used in conjunction with the other two Fortran
        MPI bindings in the same application (only one binding can be
        used per subroutine/function, however).  Full interoperability
        between mpif.h/mpi module and mpi_f08 module MPI handle types
        is provided, allowing mpi_f08 to be used in new subroutines in
        legacy MPI applications.

  Per the OSHMEM specification, there is only one Fortran OSHMEM
  binding provided:

  - shmem.fh: All Fortran OpenSHMEM programs **should** include
    'shmem.fh', and Fortran OSHMEM programs that use constants defined
    by OpenSHMEM **MUST** include 'shmem.fh'.

  The following notes apply to the above-listed Fortran bindings:

  - All Fortran compilers support the mpif.h/shmem.fh-based bindings,
    with one exception: the MPI_SIZEOF interfaces will only be present
    when Open MPI is built with a Fortran compiler that supports the
    INTERFACE keyword and ISO_FORTRAN_ENV.  Most notably, this
    excludes the GNU Fortran compiler suite before version 4.9.

  - The level of support provided by the mpi module is based on your
    Fortran compiler.

    If Open MPI is built with a non-GNU Fortran compiler, or if Open
    MPI is built with the GNU Fortran compiler >= v4.9, all MPI
    subroutines will be prototyped in the mpi module.  All calls to
    MPI subroutines will therefore have their parameter types checked
    at compile time.

    If Open MPI is built with an old gfortran (i.e., < v4.9), a
    limited "mpi" module will be built.  Due to the limitations of
    these compilers, and per guidance from the MPI-3 specification,
    all MPI subroutines with "choice" buffers are specifically *not*
    included in the "mpi" module, and their parameters will not be
    checked at compile time.  Specifically, all MPI subroutines with
    no "choice" buffers are prototyped and will receive strong
    parameter type checking at compile time (e.g., MPI_INIT,
    MPI_COMM_RANK, etc.).

    Similar to the mpif.h interface, MPI_SIZEOF is only supported on
    Fortran compilers that support INTERFACE and ISO_FORTRAN_ENV.

  - The mpi_f08 module is new and has been tested with the Intel
    Fortran compiler and gfortran >= 4.9.  Other modern Fortran
    compilers may also work (but are, as yet, only lightly tested).
    It is expected that this support will mature over time.

    Many older Fortran compilers do not provide enough modern Fortran
    features to support the mpi_f08 module.  For example, gfortran <
    v4.9 does not provide enough support for the mpi_f08 module.

  You can examine the output of the following command to see all
  the Fortran features that are/are not enabled in your Open MPI
  installation:

  shell$ ompi_info | grep -i fort


General Run-Time Support Notes
------------------------------

- The Open MPI installation must be in your PATH on all nodes (and
  potentially LD_LIBRARY_PATH (or DYLD_LIBRARY_PATH), if libmpi/libshmem
  is a shared library), unless using the --prefix or
  --enable-mpirun-prefix-by-default functionality (see below).

- Open MPI's run-time behavior can be customized via MCA ("MPI
  Component Architecture") parameters (see below for more information
  on how to get/set MCA parameter values).  Some MCA parameters can be
  set in a way that renders Open MPI inoperable (see notes about MCA
  parameters later in this file).  In particular, some parameters have
  required options that must be included; a combined example follows
  this list.

  - If specified, the "btl" parameter must include the "self"
    component, or Open MPI will not be able to deliver messages to the
    same rank as the sender.  For example: "mpirun --mca btl tcp,self
    ..."
  - If specified, the "btl_tcp_if_exclude" parameter must include the
    loopback device ("lo" on many Linux platforms), or Open MPI will
    not be able to route MPI messages using the TCP BTL.  For example:
    "mpirun --mca btl_tcp_if_exclude lo,eth1 ..."

- Running on nodes with different endian and/or different datatype
  sizes within a single parallel job is supported in this release.
  However, Open MPI does not resize data when datatypes differ in size
  (for example, sending a 4 byte MPI_DOUBLE and receiving an 8 byte
  MPI_DOUBLE will fail).


MPI Functionality and Features
------------------------------

- All MPI-2.2 and nearly all MPI-3 functionality is supported.  The
  only MPI-3 functionality that is missing is the new MPI-3 remote
  memory access (aka "one-sided") functionality.

- When using MPI deprecated functions, some compilers will emit
  warnings.  For example:

  shell$ cat deprecated_example.c
  #include <mpi.h>
  void foo(void) {
      MPI_Datatype type;
      MPI_Type_struct(1, NULL, NULL, NULL, &type);
  }
  shell$ mpicc -c deprecated_example.c
  deprecated_example.c: In function 'foo':
  deprecated_example.c:4: warning: 'MPI_Type_struct' is deprecated (declared at /opt/openmpi/include/mpi.h:1522)
  shell$

- MPI_THREAD_MULTIPLE support is included, but is only lightly tested.
  It likely does not work for thread-intensive applications.  Note
  that *only* the MPI point-to-point communication functions for the
  BTLs listed here are considered thread safe.  Other support
  functions (e.g., MPI attributes) have not been certified as safe
  when simultaneously used by multiple threads.
  - tcp
  - sm
  - self

  Note that Open MPI's thread support is in a fairly early stage; the
  above devices may *work*, but the latency is likely to be fairly
  high.  Specifically, efforts so far have concentrated on
  *correctness*, not *performance* (yet).

  YMMV.

- MPI_REAL16 and MPI_COMPLEX32 are only supported on platforms where a
  portable C datatype can be found that matches the Fortran type
  REAL*16, both in size and bit representation.

- The "libompitrace" library is bundled in Open MPI and is installed
  by default (it can be disabled via the --disable-libompitrace
  flag).  This library provides a simplistic tracing of select MPI
  function calls via the MPI profiling interface.  Linking it in to
  your application (e.g., via -lompitrace) will automatically
  output to stderr when some MPI functions are invoked:

  shell$ mpicc hello_world.c -o hello_world -lompitrace
  shell$ mpirun -np 1 hello_world
  MPI_INIT: argc 1
  Hello, world, I am 0 of 1
  MPI_BARRIER[0]: comm MPI_COMM_WORLD
  MPI_FINALIZE[0]
  shell$

  Keep in mind that the output from the trace library is going to
  stderr, so it may output in a slightly different order than the
  stdout from your application.

  This library is being offered as a "proof of concept" / convenience
  from Open MPI.  If there is interest, it is trivially easy to extend
  it to printf for other MPI functions.  Patches and/or suggestions
  would be gratefully appreciated on the Open MPI developer's list.

OSHMEM Functionality and Features
---------------------------------

- All OpenSHMEM-1.0 functionality is supported.


MPI Collectives
---------------

- The "hierarch" coll component (i.e., an implementation of MPI
  collective operations) attempts to discover network layers of
  latency in order to segregate individual "local" and "global"
  operations as part of the overall collective operation.  In this
  way, network traffic can be reduced -- or possibly even minimized
  (similar to MagPIe).  The current "hierarch" component only
  separates MPI processes into on- and off-node groups.

  Hierarch has had sufficient correctness testing, but has not
  received much performance tuning.  As such, hierarch is not
  activated by default -- it must be enabled manually by setting its
  priority level to 100:

    mpirun --mca coll_hierarch_priority 100 ...

  We would appreciate feedback from the user community about how well
  hierarch works for your applications.

- The "fca" coll component: the Mellanox Fabric Collective Accelerator
  (FCA) is a solution for offloading collective operations from the
  MPI process onto Mellanox QDR InfiniBand switch CPUs and HCAs.

- The "ML" coll component is an implementation of MPI collective
  operations that takes advantage of communication hierarchies
  in modern systems.  An ML collective operation is implemented by
  combining multiple independently progressing collective primitives
  implemented over different communication hierarchies; hence an ML
  collective operation is also referred to as a hierarchical
  collective operation.  The number of collective primitives that are
  included in an ML collective operation is a function of the number
  of subgroups (hierarchies).  Typically, MPI processes in a single
  communication hierarchy such as a CPU socket, node, or subnet are
  grouped together into a single subgroup (hierarchy).  The number of
  subgroups is configurable at runtime, and each collective operation
  can be configured to have a different number of subgroups.

  The component frameworks and components used by / required for an
  "ML" collective operation are:

  Frameworks:
  * "sbgp" - Provides functionality for grouping processes into subgroups
  * "bcol" - Provides collective primitives optimized for a particular
             communication hierarchy

  Components:
  * sbgp components     - Provide grouping functionality over a CPU socket
                          ("basesocket"), shared memory ("basesmuma"),
                          Mellanox's ConnectX HCA ("ibnet"), and other
                          interconnects supported by PML ("p2p")

  * BCOL components     - Provide optimized collective primitives for
                          shared memory ("basesmuma"), Mellanox's ConnectX
                          HCA ("iboffload"), and other interconnects
                          supported by PML ("ptpcoll")

  * "ofacm"             - Provides connection manager functionality for
                          InfiniBand communications
  * "verbs"             - Provides commonly used verbs utilities
  * "netpatterns"       - Provides an implementation of algorithm patterns
  * "commpatterns"      - Provides collectives for bootstrap


OSHMEM Collectives
------------------

- The "fca" scoll component: the Mellanox Fabric Collective Accelerator
  (FCA) is a solution for offloading collective operations from the
  MPI process onto Mellanox QDR InfiniBand switch CPUs and HCAs.

- The "basic" scoll component: Reference implementation of all OSHMEM
  collective operations.


Network Support
---------------

- There are two MPI network models available: "ob1" and "cm".  "ob1"
  uses BTL ("Byte Transfer Layer") components for each supported
  network.  "cm" uses MTL ("Matching Transport Layer") components for
  each supported network.

  - "ob1" supports a variety of networks that can be used in
    combination with each other (per OS constraints; e.g., there are
    reports that the GM and OpenFabrics kernel drivers do not operate
    well together):

    - OpenFabrics: InfiniBand, iWARP, and RoCE
    - Loopback (send-to-self)
    - Shared memory
    - TCP
    - Intel Phi SCIF
    - SMCUDA
    - Cisco usNIC
    - uGNI (Cray Gemini, Ares)
    - vader (XPMEM)

  - "cm" supports a smaller number of networks (and they cannot be
    used together), but may provide better overall MPI performance:

    - Myrinet MX and Open-MX
    - InfiniPath PSM
    - Mellanox MXM
    - Portals4

    Open MPI will, by default, choose to use "cm" when the InfiniPath
    PSM or the Mellanox MXM MTL can be used.  Otherwise, "ob1" will be
    used and the corresponding BTLs will be selected. Users can force
    the use of ob1 or cm if desired by setting the "pml" MCA parameter
    at run-time:

      shell$ mpirun --mca pml ob1 ...
      or
      shell$ mpirun --mca pml cm ...

- Similarly, there are two OSHMEM network models available: "yoda"
  and "ikrit".  "yoda" also uses the BTL components for many supported
  networks.  "ikrit" interfaces directly with Mellanox MXM.  (A
  selection example appears after the following list.)

  - "yoda" supports a variety of networks that can be used:

    - OpenFabrics: InfiniBand, iWARP, and RoCE
    - Loopback (send-to-self)
    - Shared memory
    - TCP

  - "ikrit" only supports Mellanox MXM.

- MXM is the Mellanox Messaging Accelerator library utilizing a full
  range of IB transports to provide the following messaging services
  to the upper level MPI/OSHMEM libraries:

  - Usage of all available IB transports
  - Native RDMA support
  - Progress thread
  - Shared memory communication
  - Hardware-assisted reliability

- The usnic BTL provides support for Cisco's usNIC device ("userspace
  NIC") on Cisco UCS servers with the Virtualized Interface Card
  (VIC).  Although the usNIC is accessed via the OpenFabrics / Verbs
  API stack, this BTL is specific to the Cisco usNIC device.

- uGNI is a Cray library for communicating over the Gemini and Ares
  interconnects.

- The OpenFabrics Enterprise Distribution (OFED) software package v1.0
  will not work properly with Open MPI v1.2 (and later) due to how its
  Mellanox InfiniBand plugin driver is created.  The problem is fixed
  in OFED v1.1 (and later).

- Better memory management support is available for OFED-based
  transports using the "ummunotify" Linux kernel module.  OFED memory
  managers are necessary for better bandwidth when re-using the same
  buffers for large messages (e.g., benchmarks and some applications).

  Unfortunately, the ummunotify module was not accepted by the Linux
  kernel community (and is still not distributed by OFED).  But it
  still remains the best memory management solution for MPI
  applications that use the OFED network transports.  If Open MPI is
  able to find the <linux/ummunotify.h> header file, it will build
  support for ummunotify and include it by default.  If MPI processes
  then find the ummunotify kernel module loaded and active, then their
  memory managers (which have been shown to be problematic in some
  cases) will be disabled and ummunotify will be used.  Otherwise, the
  same memory managers from prior versions of Open MPI will be used.
  The ummunotify Linux kernel module can be downloaded from:

    http://lwn.net/Articles/343351/

- The use of fork() with OpenFabrics-based networks (i.e., the openib
  BTL) is only partially supported, and only on Linux kernels >=
  v2.6.15 with libibverbs v1.1 or later (first released as part of
  OFED v1.2), per restrictions imposed by the OFED network stack.

- The Myrinet MX BTL has been removed; MX support is now only
  available through the MX MTL.  Please use a prior version of Open
  MPI if you need the MX BTL support.

- Linux "knem" support is used when the "sm" (shared memory) BTL is
  compiled with knem support (see the --with-knem configure option)
  and the knem Linux module is loaded in the running kernel.  If the
  knem Linux kernel module is not loaded, the knem support is (by
  default) silently deactivated during Open MPI jobs.

  See http://runtime.bordeaux.inria.fr/knem/ for details on knem.

- XPMEM is used by the vader shared-memory BTL when the XPMEM
  libraries are installed. XPMEM allows Open MPI to map pages from
  other processes into the current process' memory space. This
  allows single-copy semantics for shared memory without the need
  for a system call.
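
  For example, to request the vader shared-memory BTL (plus the
  required "self" component) explicitly at run time, a sketch (the
  process count and application name are placeholders):

    shell$ mpirun --mca btl vader,self -np 2 ./my_mpi_app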

Open MPI Extensions
-------------------

- An MPI "extensions" framework has been added (but is not enabled by
  default).  See the "Open MPI API Extensions" section below for more
  information on compiling and using MPI extensions.

- The following extensions are included in this version of Open MPI:

  - affinity: Provides the OMPI_Affinity_str() routine for retrieving
    a string that describes the resources a process is bound to.  See
    its man page for more details.
  - cr: Provides routines for accessing the checkpoint/restart
    functionality.  See ompi/mpiext/cr/mpiext_cr_c.h for a listing of
    available functions.
  - example: A non-functional extension; its only purpose is to
    provide an example for how to create other extensions.

===========================================================================

Building Open MPI
-----------------

Open MPI uses a traditional configure script paired with "make" to
build.  Typical installs can be of the pattern:

---------------------------------------------------------------------------
shell$ ./configure [...options...]
shell$ make all install
---------------------------------------------------------------------------

There are many available configure options (see "./configure --help"
for a full list); a summary of the more commonly used ones is included
below.

Note that for many of Open MPI's --with-<foo> options, Open MPI will,
by default, search for header files and/or libraries for <foo>.  If
the relevant files are found, Open MPI will build support for <foo>;
if they are not found, Open MPI will skip building support for <foo>.
However, if you specify --with-<foo> on the configure command line and
Open MPI is unable to find relevant support for <foo>, configure will
assume that it was unable to provide a feature that was specifically
requested and will abort so that a human can resolve the issue.

INSTALLATION OPTIONS

--prefix=<directory>
  Install Open MPI into the base directory named <directory>.  Hence,
  Open MPI will place its executables in <directory>/bin, its header
  files in <directory>/include, its libraries in <directory>/lib, etc.

--disable-shared
  By default, libmpi and libshmem are built as shared libraries, and
  all components are built as dynamic shared objects (DSOs).  This
  switch disables this default; it is really only useful when used
  with --enable-static.  Specifically, this option does *not* imply
  --enable-static; enabling static libraries and disabling shared
  libraries are two independent options.

--enable-static
  Build libmpi and libshmem as static libraries, and statically link in all
  components.  Note that this option does *not* imply
  --disable-shared; enabling static libraries and disabling shared
  libraries are two independent options.

  Be sure to read the description of --without-memory-manager, below;
  it may have some effect on --enable-static.

--disable-wrapper-rpath
  By default, the wrapper compilers (e.g., mpicc) will enable "rpath"
  support in generated executables on systems that support it.  That
  is, they will include a file reference to the location of Open MPI's
  libraries in the application executable itself.  This means that
  the user does not have to set LD_LIBRARY_PATH to find Open MPI's
  libraries (e.g., if they are installed in a location that the
  run-time linker does not search by default).

  On systems that utilize the GNU ld linker, recent enough versions
  will actually utilize "runpath" functionality, not "rpath".  There
  is an important difference between the two:

  "rpath": the location of the Open MPI libraries is hard-coded into
      the MPI/OSHMEM application and cannot be overridden at run-time.
  "runpath": the location of the Open MPI libraries is hard-coded into
      the MPI/OSHMEM application, but can be overridden at run-time by
      setting the LD_LIBRARY_PATH environment variable.

  For example, consider that you install Open MPI vA.B.0 and
  compile/link your MPI/OSHMEM application against it.  Later, you install
  Open MPI vA.B.1 to a different installation prefix (e.g.,
  /opt/openmpi/A.B.1 vs. /opt/openmpi/A.B.0), and you leave the old
  installation intact.

  In the rpath case, your MPI application will always use the
  libraries from your A.B.0 installation.  In the runpath case, you
  can set the LD_LIBRARY_PATH environment variable to point to the
  A.B.1 installation, and then your MPI application will use those
  libraries.

  Note that in both cases, however, if you remove the original A.B.0
  installation and set LD_LIBRARY_PATH to point to the A.B.1
  installation, your application will use the A.B.1 libraries.
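
  In the runpath case, overriding the library location at run time is
  just a matter of setting LD_LIBRARY_PATH before launching.  A sketch
  using the hypothetical installation prefix from the example above
  (the application name is a placeholder):

    shell$ export LD_LIBRARY_PATH=/opt/openmpi/A.B.1/lib:$LD_LIBRARY_PATH
    shell$ mpirun -np 4 ./my_mpi_app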

  This rpath/runpath behavior can be disabled via
  --disable-wrapper-rpath.

--enable-dlopen
  Build all of Open MPI's components as standalone Dynamic Shared
  Objects (DSOs) that are loaded at run-time (this is the default).
  The opposite of this option, --disable-dlopen, causes two things:

  1. All of Open MPI's components will be built as part of Open MPI's
     normal libraries (e.g., libmpi).
  2. Open MPI will not attempt to open any DSOs at run-time.

  Note that this option does *not* imply that OMPI's libraries will be
  built as static objects (e.g., libmpi.a).  It only specifies the
  location of OMPI's components: standalone DSOs or folded into the
  Open MPI libraries.  You can control whether Open MPI's libraries
  are built as static or dynamic via --enable|disable-static and
  --enable|disable-shared.

--with-platform=FILE
  Load configure options for the build from FILE.  Options on the
  command line that are not in FILE are also used.  Options that
  appear both on the command line and in FILE are replaced by what is
  in FILE.
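
  For example (a sketch; the platform file path is an assumption --
  the Open MPI source tree ships sample platform files under
  contrib/platform/):

    shell$ ./configure --with-platform=contrib/platform/optimized ...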

NETWORKING SUPPORT / OPTIONS

--with-fca=<directory>
  Specify the directory where the Mellanox FCA library and
  header files are located.

  FCA is the support library for Mellanox QDR switches and HCAs.

--with-hcoll=<directory>
  Specify the directory where the Mellanox hcoll library and header
  files are located.  This option is generally only necessary if the
  hcoll headers and libraries are not in default compiler/linker
  search paths.

  hcoll is the support library for MPI collective operation offload on
  Mellanox ConnectX-3 HCAs (and later).

--with-knem=<directory>
  Specify the directory where the knem libraries and header files are
  located.  This option is generally only necessary if the knem headers
  and libraries are not in default compiler/linker search paths.

  knem is a Linux kernel module that allows direct process-to-process
  memory copies (optionally using hardware offload), potentially
  increasing bandwidth for large messages sent between processes on
  the same server.  See http://runtime.bordeaux.inria.fr/knem/ for
  details.

--with-mx=<directory>
  Specify the directory where the MX libraries and header files are
  located.  This option is generally only necessary if the MX headers
  and libraries are not in default compiler/linker search paths.

  MX is the support library for Myrinet-based networks.  An open
  source software package named Open-MX provides the same
  functionality on Ethernet-based clusters (Open-MX can provide
  MPI performance improvements compared to TCP messaging).

--with-mx-libdir=<directory>
  Look in directory for the MX libraries.  By default, Open MPI will
  look in <mx directory>/lib and <mx directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-mxm=<directory>
  Specify the directory where the Mellanox MXM library and header
  files are located.  This option is generally only necessary if the
  MXM headers and libraries are not in default compiler/linker search
  paths.

  MXM is the support library for Mellanox network adapters.

--with-mxm-libdir=<directory>
  Look in directory for the MXM libraries.  By default, Open MPI will
  look in <mxm directory>/lib and <mxm directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-usnic
  Abort configure if Cisco usNIC support cannot be built.

--with-verbs=<directory>
  Specify the directory where the verbs (also known as OpenFabrics, and
  previously known as OpenIB) libraries and header files are located.
  This option is generally only necessary if the verbs headers and
  libraries are not in default compiler/linker search paths.

  "OpenFabrics" refers to operating system bypass networks, such as
  InfiniBand, usNIC, iWARP, and RoCE (aka "IBoE").

--with-verbs-libdir=<directory>
  Look in directory for the verbs libraries.  By default, Open
  MPI will look in <openib directory>/lib and <openib
  directory>/lib64, which covers most cases.  This option is only
  needed for special configurations.
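
  For example, a configure invocation for an OpenFabrics cluster with
  verbs and knem installed under non-default prefixes might look like
  the following (a sketch; the directory names are hypothetical):

    shell$ ./configure --with-verbs=/opt/ofed --with-knem=/opt/knem ...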

--with-openib=<directory>
  DEPRECATED synonym for --with-verbs.

--with-openib-libdir=<directory>
  DEPRECATED synonym for --with-verbs-libdir.

--with-portals4=<directory>
  Specify the directory where the Portals4 libraries and header files
  are located.  This option is generally only necessary if the Portals4
  headers and libraries are not in default compiler/linker search
  paths.

  Portals4 is the support library for Cray interconnects, but is also
  available on other platforms (e.g., there is a Portals4 library
  implemented over regular TCP).

--with-portals4-libdir=<directory>
  Location of libraries to link with for Portals4 support.

--with-portals4-max-md-size=SIZE
--with-portals4-max-va-size=SIZE
  Set configuration values for Portals4.

--with-psm=<directory>
  Specify the directory where the QLogic InfiniPath PSM library and
  header files are located.  This option is generally only necessary
  if the InfiniPath headers and libraries are not in default
  compiler/linker search paths.

  PSM is the support library for QLogic InfiniPath network adapters.

--with-psm-libdir=<directory>
  Look in directory for the PSM libraries.  By default, Open MPI will
  look in <psm directory>/lib and <psm directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-sctp=<directory>
  Specify the directory where the SCTP libraries and header files are
  located.  This option is generally only necessary if the SCTP headers
  and libraries are not in default compiler/linker search paths.

  SCTP is a special network stack over Ethernet networks.

--with-sctp-libdir=<directory>
  Look in directory for the SCTP libraries.  By default, Open MPI will
  look in <sctp directory>/lib and <sctp directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-scif=<dir>
  Look in directory for the Intel SCIF support libraries.

RUN-TIME SYSTEM SUPPORT

--enable-mpirun-prefix-by-default
  This option forces the "mpirun" command to always behave as if
  "--prefix $prefix" was present on the command line (where $prefix is
  the value given to the --prefix option to configure).  This prevents
  most rsh/ssh-based users from needing to modify their shell startup
  files to set the PATH and/or LD_LIBRARY_PATH for Open MPI on remote
  nodes.  Note, however, that such users may still desire to set PATH
  -- perhaps even in their shell startup files -- so that executables
  such as mpicc and mpirun can be found without needing to type long
  path names.  --enable-orterun-prefix-by-default is a synonym for
  this option.

--enable-sensors
  Enable internal sensors (default: disabled).

--enable-orte-static-ports
  Enable ORTE static ports for the TCP OOB (default: enabled).

--with-alps
  Force the building of support for the Cray Alps run-time
  environment.  If Alps support cannot be found, configure will abort.

--with-loadleveler
  Force the building of LoadLeveler scheduler support.  If LoadLeveler
  support cannot be found, configure will abort.

--with-lsf=<directory>
  Specify the directory where the LSF libraries and header files are
  located.  This option is generally only necessary if the LSF headers
  and libraries are not in default compiler/linker search paths.

  LSF is a resource manager system, frequently used as a batch
  scheduler in HPC systems.

  NOTE: If you are using LSF version 7.0.5, you will need to add
        "LIBS=-ldl" to the configure command line.  For example:

            ./configure LIBS=-ldl --with-lsf ...

        This workaround should *only* be needed for LSF 7.0.5.

--with-lsf-libdir=<directory>
  Look in directory for the LSF libraries.  By default, Open MPI will
  look in <lsf directory>/lib and <lsf directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-pmi
  Build PMI support (by default on non-Cray XE/XC systems, it is not built).
  On Cray XE/XC systems, the location of pmi is detected automatically as
  part of the configure process.  For non-Cray systems, if the pmi2.h header
  is found in addition to pmi.h, then support for PMI2 will be built.

--with-slurm
  Force the building of SLURM scheduler support.
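
  For example, on a SLURM-managed cluster you might combine these two
  options (a sketch; whether PMI2 support is actually built still
  depends on finding the pmi.h / pmi2.h headers, as described above):

    shell$ ./configure --with-slurm --with-pmi ...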

--with-sge
  Specify to build support for the Oracle Grid Engine (OGE) resource
  manager and/or the Open Grid Engine.  OGE support is disabled by
  default; this option must be specified to build OMPI's OGE support.

  The Oracle Grid Engine (OGE) and Open Grid Engine packages are
  resource manager systems, frequently used as a batch scheduler in
  HPC systems.

--with-tm=<directory>
  Specify the directory where the TM libraries and header files are
  located.  This option is generally only necessary if the TM headers
  and libraries are not in default compiler/linker search paths.

  TM is the support library for the Torque and PBS Pro resource
  manager systems, both of which are frequently used as a batch
  scheduler in HPC systems.

MISCELLANEOUS SUPPORT LIBRARIES

--with-blcr=<directory>
  Specify the directory where the Berkeley Labs Checkpoint / Restart
  (BLCR) libraries and header files are located.  This option is
  generally only necessary if the BLCR headers and libraries are not
  in default compiler/linker search paths.

  This option is only meaningful if the --with-ft option is also used
  to activate Open MPI's fault tolerance behavior.

--with-blcr-libdir=<directory>
  Look in directory for the BLCR libraries.  By default, Open MPI will
  look in <blcr directory>/lib and <blcr directory>/lib64, which
  covers most cases.  This option is only needed for special
  configurations.

--with-dmtcp=<directory>
  Specify the directory where the Distributed MultiThreaded
  Checkpointing (DMTCP) libraries and header files are located.  This
  option is generally only necessary if the DMTCP headers and
  libraries are not in default compiler/linker search paths.

  This option is only meaningful if the --with-ft option is also used
  to activate Open MPI's fault tolerance behavior.

--with-dmtcp-libdir=<directory>
  Look in directory for the DMTCP libraries.  By default, Open MPI
  will look in <dmtcp directory>/lib and <dmtcp directory>/lib64,
  which covers most cases.  This option is only needed for special
  configurations.

--with-libevent(=value)
  This option specifies where to find the libevent support headers and
  library.  The following values are permitted:

    internal:    Use Open MPI's internal copy of libevent.
    external:    Use an external libevent installation (rely on default
                 compiler and linker paths to find it)
    <no value>:  Same as "internal".
    <directory>: Specify the location of a specific libevent
                 installation to use

  By default (or if --with-libevent is specified with no VALUE), Open
  MPI will build and use the copy of libevent that it has in its
  source tree.  However, if the VALUE is "external", Open MPI will
  look for the relevant libevent header file and library in default
  compiler / linker locations.  Or, VALUE can be a directory tree
  where the libevent header file and library can be found.  This
  option allows operating systems to include Open MPI and use their
  default libevent installation instead of Open MPI's bundled libevent.

  libevent is a support library that provides event-based processing,
  timers, and signal handlers.  Open MPI requires libevent to build;
  passing --without-libevent will cause configure to abort.

--with-libevent-libdir=<directory>
  Look in directory for the libevent libraries.  This option is only
  usable when building Open MPI against an external libevent
  installation.  Just like other --with-FOO-libdir configure options,
  this option is only needed for special configurations.
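
  For example, to build against a system libevent installed under a
  non-default prefix (a sketch; the directory is hypothetical):

    shell$ ./configure --with-libevent=/opt/libevent \
           --with-libevent-libdir=/opt/libevent/lib ...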
1095
1096--with-hwloc(=value)
1097  Build hwloc support (default: enabled).  This option specifies where
1098  to find the hwloc support headers and library.  The following values
1099  are permitted:
1100
1101    internal:    Use Open MPI's internal copy of hwloc.
1102    external:    Use an external hwloc installation (rely on default
1103                 compiler and linker paths to find it)
1104    <no value>:  Same as "internal".
1105    <directory>: Specify the location of a specific hwloc
1106                 installation to use
1107
1108  By default (or if --with-hwloc is specified with no VALUE), Open MPI
1109  will build and use the copy of hwloc that it has in its source tree.
1110  However, if the VALUE is "external", Open MPI will look for the
1111  relevant hwloc header files and library in default compiler / linker
1112  locations.  Or, VALUE can be a directory tree where the hwloc header
1113  file and library can be found.  This option allows operating systems
1114  to include Open MPI and use their default hwloc installation instead
1115  of Open MPI's bundled hwloc.
1116
1117  hwloc is a support library that provides processor and memory
1118  affinity information for NUMA platforms.
1119
1120--with-hwloc-libdir=<directory>
1121  Look in directory for the hwloc libraries.  This option is only
1122  usable when building Open MPI against an external hwloc
1123  installation.  Just like other --with-FOO-libdir configure options,
1124  this option is only needed for special configurations.
1125
1126--disable-hwloc-pci
1127  Disable building hwloc's PCI device-sensing capabilities.  On some
1128  platforms (e.g., SusE 10 SP1, x86-64), the libpci support library is
1129  broken.  Open MPI's configure script should usually detect when
1130  libpci is not usable due to such brokenness and turn off PCI
1131  support, but there may be cases when configure mistakenly enables
1132  PCI support in the presence of a broken libpci.  These cases may
1133  result in "make" failing with warnings about relocation symbols in
1134  libpci.  The --disable-hwloc-pci switch can be used to force Open
1135  MPI to not build hwloc's PCI device-sensing capabilities in these
1136  cases.
1137
1138  Similarly, if Open MPI incorrectly decides that libpci is broken,
1139  you can force Open MPI to build hwloc's PCI device-sensing
1140  capabilities by using --enable-hwloc-pci.
1141
1142  hwloc can discover PCI devices and locality, which can be useful for
1143  Open MPI in assigning message passing resources to MPI processes.
1144
1145--with-libltdl(=value)
1146  This option specifies where to find the GNU Libtool libltdl support
1147  library.  The following values are permitted:
1148
1149    internal:    Use Open MPI's internal copy of libltdl.
1150    external:    Use an external libltdl installation (rely on default
1151                 compiler and linker paths to find it)
1152    <no value>:  Same as "internal".
1153    <directory>: Specify the location of a specific libltdl
1154                 installation to use
1155
1156  By default (or if --with-libltdl is specified with no VALUE), Open
1157  MPI will build and use the copy of libltdl that it has in its source
1158  tree.  However, if the VALUE is "external", Open MPI will look for
1159  the relevant libltdl header file and library in default compiler /
1160  linker locations.  Or, VALUE can be a directory tree where the
1161  libltdl header file and library can be found.  This option allows
1162  operating systems to include Open MPI and use their default libltdl
1163  installation instead of Open MPI's bundled libltdl.
1164
1165  Note that this option is ignored if --disable-dlopen is specified.
1166
1167--disable-libompitrace
1168  Disable building the simple "libompitrace" library (see note above
1169  about libompitrace)
1170
1171--with-valgrind(=<directory>)
1172  Directory where the valgrind software is installed.  If Open MPI
1173  finds Valgrind's header files, it will include additional support
1174  for Valgrind's memory-checking debugger.
1175
1176  Specifically, it will eliminate a lot of false positives from
1177  running Valgrind on MPI applications.  There is a minor performance
1178  penalty for enabling this option.
1179
1180--disable-vt
1181  Disable building the VampirTrace that is bundled with Open MPI.
1182
1183MPI FUNCTIONALITY
1184
1185--with-mpi-param-check(=value)
1186  Whether or not to check MPI function parameters for errors at
1187  runtime.  The following values are permitted:
1188
1189    always:  MPI function parameters are always checked for errors 
1190    never:   MPI function parameters are never checked for errors 
1191    runtime: Whether MPI function parameters are checked depends on
1192             the value of the MCA parameter mpi_param_check (default:
1193             yes).
1194    yes:     Synonym for "always" (same as --with-mpi-param-check).
1195    no:      Synonym for "never" (same as --without-mpi-param-check).
1196
1197  If --with-mpi-param-check is not specified, "runtime" is the default.
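
  For example, a hypothetical build that compiles in the run-time
  selectable checks, and an illustrative run that then turns them off
  via the mpi_param_check MCA parameter, might look like this:

    shell$ ./configure --with-mpi-param-check=runtime ...
    shell$ mpirun --mca mpi_param_check 0 -np 2 hello_world_mpi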
1198
1199--with-threads=value
1200  Since thread support is only partially tested, it is disabled by
1201  default.  To enable threading, use "--with-threads=posix".  This is
1202  most useful when combined with --enable-mpi-thread-multiple.
1203
1204--enable-mpi-thread-multiple
1205  Allows the MPI thread level MPI_THREAD_MULTIPLE.  See
1206  --with-threads; this is currently disabled by default. Enabling
1207  this feature will automatically --enable-opal-multi-threads.
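
  For example, a hypothetical configuration that enables
  MPI_THREAD_MULTIPLE support might look like this:

    shell$ ./configure --with-threads=posix --enable-mpi-thread-multiple ...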
1208
1209--enable-opal-multi-threads
1210  Enables thread lock support in the OPAL and ORTE layers. Does
1211  not enable MPI_THREAD_MULTIPLE - see above option for that feature.
1212  This is currently disabled by default.
1213
1214--enable-mpi-cxx
1215  Enable building the C++ MPI bindings (default: disabled).
1216
1217  The MPI C++ bindings were deprecated in MPI-2.2, and removed from
1218  the MPI standard in MPI-3.0.
1219
1220--enable-mpi-java
1221  Enable building of an EXPERIMENTAL Java MPI interface (disabled by
1222  default).  You may also need to specify --with-jdk-dir,
1223  --with-jdk-bindir, and/or --with-jdk-headers.  See README.JAVA.txt
1224  for details.
1225
1226  Note that this Java interface is INCOMPLETE (meaning: it does not
1227  support all MPI functionality) and LIKELY TO CHANGE.  The Open MPI
1228  developers would very much like to hear your feedback about this
1229  interface.  See README.JAVA.txt for more details.
1230
1231--enable-mpi-fortran(=value)
1232  By default, Open MPI will attempt to build all 3 Fortran bindings:
1233  mpif.h, the "mpi" module, and the "mpi_f08" module.  The following
1234  values are permitted:
1235
1236    all:        Synonym for "yes".
1237    yes:        Attempt to build all 3 Fortran bindings; skip
1238                any binding that cannot be built (same as
1239                --enable-mpi-fortran).
1240    mpifh:      Build mpif.h support.
1241    usempi:     Build mpif.h and "mpi" module support.
1242    usempif08:  Build mpif.h, "mpi" module, and "mpi_f08"
1243                module support.
1244    none:       Synonym for "no".
1245    no:         Do not build any MPI Fortran support (same as
1246                --disable-mpi-fortran).  This is mutually exclusive
1247                with building the OSHMEM Fortran interface.
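
  For example, to build only the mpif.h and "mpi" module bindings (and
  skip the "mpi_f08" module), a hypothetical invocation would be:

    shell$ ./configure --enable-mpi-fortran=usempi ...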
1248
1249--enable-mpi-ext(=<list>)
1250  Enable Open MPI's non-portable API extensions.  If no <list> is
1251  specified, all of the extensions are enabled.
1252
1253  See "Open MPI API Extensions", below, for more details.
1254
1255--with-io-romio-flags=flags
1256  Pass flags to the ROMIO distribution configuration script.  This
1257  option is usually only necessary to pass
1258  parallel-filesystem-specific preprocessor/compiler/linker flags back
1259  to the ROMIO system.
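
  For example, a site building for a Lustre parallel filesystem might
  pass ROMIO's file system selection flag through like this (the exact
  flags needed are site- and ROMIO-version-specific, so treat this only
  as an illustration):

    shell$ ./configure --with-io-romio-flags="--with-file-system=ufs+nfs+lustre" ...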
1260
1261--enable-sparse-groups
1262  Enable the use of sparse groups.  This can significantly reduce
1263  memory usage, especially when creating large communicators.
1264  (Disabled by default)
1265
1266OSHMEM FUNCTIONALITY
1267
1268--disable-oshmem
1269  Disable building the OpenSHMEM implementation (by default, it is
1270  enabled).
1271
1272--disable-oshmem-fortran
1273  Disable building only the Fortran OSHMEM bindings. Please see 
1274  the "Compiler Notes" section herein which contains further 
1275  details on known issues with various Fortran compilers.
1276
1277MISCELLANEOUS FUNCTIONALITY
1278
1279--without-memory-manager
1280  Disable building Open MPI's memory manager.  Open MPI's memory
1281  manager is usually built on Linux based platforms, and is generally
1282  only used for optimizations with some OpenFabrics-based networks (it
1283  is not *necessary* for OpenFabrics networks, but some performance
1284  loss may be observed without it).
1285
1286  However, it may be necessary to disable the memory manager in order
1287  to build Open MPI statically.
1288
1289--with-ft=TYPE
1290  Specify the type of fault tolerance to enable.  Options: LAM
1291  (LAM/MPI-like), cr (Checkpoint/Restart).  Fault tolerance support is
1292  disabled unless this option is specified.
1293
1294--enable-peruse 
1295  Enable the PERUSE MPI data analysis interface.
1296
1297--enable-heterogeneous
1298  Enable support for running on heterogeneous clusters (e.g., machines
1299  with different endian representations).  Heterogeneous support is
1300  disabled by default because it imposes a minor performance penalty.
1301
1302  *** THIS FUNCTIONALITY IS CURRENTLY BROKEN - DO NOT USE ***
1303
1304--with-wrapper-cflags=<cflags>
1305--with-wrapper-cxxflags=<cxxflags>
1306--with-wrapper-fflags=<fflags>
1307--with-wrapper-fcflags=<fcflags>
1308--with-wrapper-ldflags=<ldflags>
1309--with-wrapper-libs=<libs>
1310  Add the specified flags to the default flags that are used in Open
1311  MPI's "wrapper" compilers (e.g., mpicc -- see below for more
1312  information about Open MPI's wrapper compilers).  By default, Open
1313  MPI's wrapper compilers use the same compilers used to build Open
1314  MPI and specify a minimum set of additional flags that are necessary
1315  to compile/link MPI applications.  These configure options give
1316  system administrators the ability to embed additional flags in
1317  OMPI's wrapper compilers (which is a local policy decision).  The
1318  meanings of the different flags are:
1319
1320  <cflags>:   Flags passed by the mpicc wrapper to the C compiler
1321  <cxxflags>: Flags passed by the mpic++ wrapper to the C++ compiler
1322  <fcflags>:  Flags passed by the mpifort wrapper to the Fortran compiler
1323  <ldflags>:  Flags passed by all the wrappers to the linker
1324  <libs>:     Libraries passed by all the wrappers to the linker
1325
1326  There are other ways to configure Open MPI's wrapper compiler
1327  behavior; see the Open MPI FAQ for more information.
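
  For example, a hypothetical site policy that embeds an extra
  optimization flag and an rpath into the wrappers might be configured
  like this (the flags and paths are illustrative only):

    shell$ ./configure --with-wrapper-cflags=-O2 \
             --with-wrapper-ldflags="-Wl,-rpath,/opt/openmpi/lib" ...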
1328
1329There are many other options available -- see "./configure --help".
1330
1331Changing the compilers that Open MPI uses to build itself uses the
1332standard Autoconf mechanism of setting special environment variables
1333either before invoking configure or on the configure command line.
1334The following environment variables are recognized by configure:
1335
1336CC          - C compiler to use
1337CFLAGS      - Compile flags to pass to the C compiler
1338CPPFLAGS    - Preprocessor flags to pass to the C compiler
1339
1340CXX         - C++ compiler to use
1341CXXFLAGS    - Compile flags to pass to the C++ compiler
1342CXXCPPFLAGS - Preprocessor flags to pass to the C++ compiler
1343
1344FC          - Fortran compiler to use
1345FCFLAGS     - Compile flags to pass to the Fortran compiler
1346
1347LDFLAGS     - Linker flags to pass to all compilers
1348LIBS        - Libraries to pass to all compilers (it is rarely
1349              necessary for users to specify additional LIBS)
1350
1351PKG_CONFIG  - Path to the pkg-config utility
1352
1353For example:
1354
1355  shell$ ./configure CC=mycc CXX=myc++ FC=myfortran ...
1356
1357*** NOTE: We generally suggest using the above command line form for
1358    setting different compilers (vs. setting environment variables and
1359    then invoking "./configure").  The above form will save all
1360    variables and values in the config.log file, which makes
1361    post-mortem analysis easier if problems occur.
1362
1363Note that if you intend to compile Open MPI with a "make" other than
1364the default one in your PATH, then you must either set the $MAKE
1365environment variable before invoking Open MPI's configure script, or
1366pass "MAKE=your_make_prog" to configure.  For example:
1367
1368  shell$ ./configure MAKE=/path/to/my/make ...
1369
1370This could be the case, for instance, if you have a shell alias for
1371"make", or you always type "gmake" out of habit.  Failure to tell
1372configure which non-default "make" you will use to compile Open MPI
1373can result in undefined behavior (meaning: don't do that).
1374
1375Note that you may also want to ensure that the value of
1376LD_LIBRARY_PATH is set appropriately (or not at all) for your build
1377(or whatever environment variable is relevant for your operating
1378system).  For example, some users have been tripped up by selecting a
1379non-default Fortran compiler via FC, but then failing to set
1380LD_LIBRARY_PATH to include the directory containing that non-default
1381Fortran compiler's support libraries.  This causes Open MPI's
1382configure script to fail when it tries to compile / link / run simple
1383Fortran programs.
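
For example, with a hypothetical Fortran compiler installed under
/opt/myfortran (the paths are illustrative only), one might do:

  shell$ export LD_LIBRARY_PATH=/opt/myfortran/lib:$LD_LIBRARY_PATH
  shell$ ./configure FC=/opt/myfortran/bin/myfortran ...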
1384
1385It is required that the compilers specified be compile and link
1386compatible, meaning that object files created by one compiler must be
1387able to be linked with object files from the other compilers and
1388produce correctly functioning executables.
1389
1390Open MPI supports all the "make" targets that are provided by GNU
1391Automake, such as:
1392
1393all       - build the entire Open MPI package
1394install   - install Open MPI
1395uninstall - remove all traces of Open MPI from the $prefix
1396clean     - clean out the build tree
1397
1398Once Open MPI has been built and installed, it is safe to run "make
1399clean" and/or remove the entire build tree.
1400
1401VPATH and parallel builds are fully supported.
1402
1403Generally speaking, the only thing that users need to do to use Open
1404MPI is ensure that <prefix>/bin is in their PATH and <prefix>/lib is
1405in their LD_LIBRARY_PATH.  Users may need to ensure to set the PATH
1406and LD_LIBRARY_PATH in their shell setup files (e.g., .bashrc, .cshrc)
1407so that non-interactive rsh/ssh-based logins will be able to find the
1408Open MPI executables.
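
For example, for Bourne-style shells, lines like the following could be
added to .bashrc (substitute your actual installation prefix for
<prefix>):

  export PATH=<prefix>/bin:$PATH
  export LD_LIBRARY_PATH=<prefix>/lib:$LD_LIBRARY_PATH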
1409
1410===========================================================================
1411
1412Open MPI Version Numbers and Binary Compatibility
1413-------------------------------------------------
1414
1415Open MPI has two sets of version numbers that are likely of interest
1416to end users / system administrators:
1417
1418    * Software version number
1419    * Shared library version numbers
1420
1421Both are described below, followed by a discussion of application
1422binary interface (ABI) compatibility implications.
1423
1424Software Version Number
1425-----------------------
1426
1427Open MPI's version numbers are the union of several different values:
1428major, minor, release, and an optional quantifier.
1429
1430  * Major: The major number is the first integer in the version string
1431    (e.g., v1.2.3). Changes in the major number typically indicate a
1432    significant change in the code base and/or end-user
1433    functionality. The major number is always included in the version
1434    number.
1435
1436  * Minor: The minor number is the second integer in the version
1437    string (e.g., v1.2.3). Changes in the minor number typically
1438    indicate an incremental change in the code base and/or end-user
1439    functionality. The minor number is always included in the version
1440    number. Starting with Open MPI v1.3.0, the minor release number
1441    took on additional significance (see this wiki page for more
1442    details):
1443
1444    o Even minor release numbers are part of "super-stable"
1445      release series (e.g., v1.4.0). Releases in super stable series
1446      are well-tested, time-tested, and mature. Such releases are
1447      recommended for production sites. Changes between subsequent
1448      releases in super stable series are expected to be fairly small.
1449    o Odd minor release numbers are part of "feature" release
1450      series (e.g., 1.3.7). Releases in feature releases are
1451      well-tested, but they are not necessarily time-tested or as
1452      mature as super stable releases. Changes between subsequent
1453      releases in feature series may be large.
1454
1455  * Release: The release number is the third integer in the version
1456    string (e.g., v1.2.3). Changes in the release number typically
1457    indicate a bug fix in the code base and/or end-user
1458    functionality. If the release number is 0, it is omitted from the
1459    version number (e.g., v1.2 has a release number of 0).
1460
1461  * Quantifier: Open MPI version numbers sometimes have an arbitrary
1462    string affixed to the end of the version number. Common strings
1463    include:
1464
1465    o aX: Indicates an alpha release. X is an integer indicating
1466      the number of the alpha release (e.g., v1.2.3a5 indicates the
1467      5th alpha release of version 1.2.3).
1468    o bX: Indicates a beta release. X is an integer indicating
1469      the number of the beta release (e.g., v1.2.3b3 indicates the 3rd
1470      beta release of version 1.2.3).
1471    o rcX: Indicates a release candidate. X is an integer
1472      indicating the number of the release candidate (e.g., v1.2.3rc4
1473      indicates the 4th release candidate of version 1.2.3).
1474    o rV or hgV: Indicates the Subversion / Mercurial repository
1475      number string that the release was made from (V is usually an
1476      integer for Subversion releases and usually a string for
1477      Mercurial releases). Although all official Open MPI releases are
1478      tied to a single, specific Subversion or Mercurial repository
1479      number (which can be obtained from the ompi_info command), only
1480      some releases have the Subversion / Mercurial repository number
1481      in the version number. Development snapshot tarballs, for
1482      example, have the Subversion repository included in the version
1483      to reflect that they are a development snapshot of an upcoming
1484      release (e.g., v1.2.3r1234 indicates a development snapshot of
1485      version 1.2.3 corresponding to Subversion repository number
1486      1234). 
1487
1488    Quantifiers may be mixed together -- for example v1.2.3rc7r2345
1489    indicates a development snapshot of an upcoming 7th release
1490    candidate for version 1.2.3 corresponding to Subversion repository
1491    number 2345.
1492
1493Shared Library Version Number
1494-----------------------------
1495
1496Open MPI started using the GNU Libtool shared library versioning
1497scheme with the release of v1.3.2.
1498
1499NOTE: Only official releases of Open MPI adhere to this versioning
1500      scheme. "Beta" releases, release candidates, nightly
1501      tarballs, developer snapshots, and Subversion/Mercurial snapshot
1502      tarballs likely will all have arbitrary/meaningless shared
1503      library version numbers.
1504
1505For deep voodoo technical reasons, only the MPI API libraries were
1506versioned until Open MPI v1.5 was released (i.e., only libmpi*.so was
1507versioned; libopen-rte.so and libopen-pal.so were not versioned until v1.5).
1508Please see https://svn.open-mpi.org/trac/ompi/ticket/2092 for more
1509details.
1510
1511NOTE: This policy change causes an ABI incompatibility between the Open
1512      MPI v1.4 and v1.5 series; MPI applications compiled/linked
1513      against the v1.4 series will not be able to upgrade to the Open
1514      MPI v1.5 series without re-linking.  Sorry folks!
1515
1516The GNU Libtool official documentation details how the versioning
1517scheme works.  The quick version is that the shared library versions
1518are a triple of integers: (current,revision,age), or "c:r:a".  This
1519triple is not related to the Open MPI software version number.  There
1520are six simple rules for updating the values (taken almost verbatim
1521from the Libtool docs):
1522
1523 1. Start with version information of "0:0:0" for each shared library.
1524
1525 2. Update the version information only immediately before a public
1526    release of your software. More frequent updates are unnecessary,
1527    and only guarantee that the current interface number gets larger
1528    faster.
1529
1530 3. If the library source code has changed at all since the last
1531    update, then increment revision ("c:r:a" becomes "c:r+1:a").
1532
1533 4. If any interfaces have been added, removed, or changed since the
1534    last update, increment current, and set revision to 0.
1535
1536 5. If any interfaces have been added since the last public release,
1537    then increment age.
1538
1539 6. If any interfaces have been removed since the last public release,
1540    then set age to 0.
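
As a purely illustrative example of these rules: if a library at
version "2:3:1" gets a release containing only internal bug fixes, its
version becomes "2:4:1" (rule 3).  If that release had instead also
added new interfaces (without removing any), the version would become
"3:0:2" (rules 3, 4, and 5).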
1541
1542Here's how we apply those rules specifically to Open MPI:
1543
1544 1. The above rules do not apply to MCA components (a.k.a. "plugins");
1545    MCA component .so versions stay unspecified.
1546
1547 2. The above rules apply exactly as written to the following
1548    libraries starting with Open MPI version v1.5 (prior to v1.5,
1549    libopen-pal and libopen-rte were still at 0:0:0 for reasons
1550    discussed in bug ticket #2092
1551    https://svn.open-mpi.org/trac/ompi/ticket/2092):
1552
1553    * libopen-rte
1554    * libopen-pal
1555    * libmca_common_*
1556
1557 3. The following libraries use a slightly modified version of the
1558    above rules: rules 4, 5, and 6 only apply to the official MPI
1559    interfaces (functions, global variables).  The rationale for this
1560    decision is that the vast majority of our users only care about
1561    the official/public MPI interfaces; we therefore want the .so
1562    version number to reflect only changes to the official MPI API.
1563    Put simply: non-MPI API / internal changes to the
1564    MPI-application-facing libraries are irrelevant to pure MPI
1565    applications.
1566
1567    * libmpi
1568    * libmpi_mpifh
1569    * libmpi_usempi_tkr
1570    * libmpi_usempi_ignore_tkr
1571    * libmpi_usempif08
1572    * libmpi_cxx
1573
1574 4. Note, however, that libmpi.so can have its "revision" number
1575    incremented if libopen-rte or libopen-pal change (because these
1576    two libraries are wholly included in libmpi.so).  Specifically:
1577    the revision will change, but since we have defined that the only
1578    relevant API interface in libmpi.so is the official MPI API,
1579    updates to libopen-rte and libopen-pal do not change the "current"
1580    or "age" numbers of libmpi.so.
1581
1582Application Binary Interface (ABI) Compatibility
1583------------------------------------------------
1584
1585Open MPI provided forward application binary interface (ABI)
1586compatibility for MPI applications starting with v1.3.2.  Prior to
1587that version, no ABI guarantees were provided.  
1588
1589Starting with v1.3.2, Open MPI provides forward ABI compatibility in
1590all versions of a given feature release series and its corresponding
1591super stable series.  For example, on a single platform, an MPI
1592application linked against Open MPI v1.7.2 shared libraries can be
1593updated to point to the shared libraries in any successive v1.7.x or
1594v1.8 release and still work properly (e.g., via the LD_LIBRARY_PATH
1595environment variable or other operating system mechanism).
1596
1597* A bug that causes an ABI compatibility issue was discovered after
1598  v1.7.3 was released.  The bug only affects users who configure their
1599  Fortran compilers to use "large" INTEGERs by default, but still have
1600  "normal" ints for C (e.g., 8 byte Fortran INTEGERs and 4 byte C
1601  ints).  In this case, the Fortran MPI_STATUS_SIZE value was computed
1602  incorrectly.
1603
1604  Fixing this issue breaks ABI *only in the sizeof(INTEGER) !=
1605  sizeof(int) case*.  However, since Open MPI provides ABI guarantees
1606  for the v1.7/v1.8 series, this bug is only fixed if Open MPI is
1607  configured with the --enable-abi-breaking-fortran-status-i8-fix
1608  flag, which, as its name implies, breaks ABI.  For example:
1609
1610    shell$ ./configure --enable-abi-breaking-fortran-status-i8-fix \
1611             CC=icc F77=ifort FC=ifort CXX=icpc \
1612             FFLAGS=-i8 FCFLAGS=-i8 ...
1613
1614* A second bug was discovered after v1.7.3 was released that causes
1615  ABI to be broken for gfortran users who are using the "mpi" Fortran
1616  module.  In short, for versions of gfortran that do not support
1617  "ignore TKR" functionality (i.e., gfortran <=v4.8), Open MPI was
1618  providing interfaces for MPI subroutines with choice buffers (e.g.,
1619  MPI_Send) in the Fortran mpi module.  The MPI-3.0 specification
1620  expressly states not to do this.  To be consistent with MPI-3, Open
1621  MPI v1.7.4 therefore removed all MPI interfaces with choice buffers
1622  from the no-ignore-TKR version of the Fortran mpi module, even
1623  though this breaks ABI between v1.7.3 and v1.7.4.  Affected users
1624  should be able to recompile their MPI applications with v1.7.4 with
1625  no changes.
1626
1627  Other Fortran compilers that provide "ignore TKR" functionality are
1628  not affected by this change.
1629
1630Open MPI reserves the right to break ABI compatibility at new feature
1631release series.  For example, the same MPI application from above
1632(linked against Open MPI v1.7.2 shared libraries) will likely *not*
1633work with Open MPI v1.9 shared libraries.
1634
1635===========================================================================
1636
1637Checking Your Open MPI Installation
1638-----------------------------------
1639
1640The "ompi_info" command can be used to check the status of your Open
1641MPI installation (located in <prefix>/bin/ompi_info).  Running it with
1642no arguments provides a summary of information about your Open MPI
1643installation.   
1644
1645Note that the ompi_info command is extremely helpful in determining
1646which components are installed as well as listing all the run-time
1647settable parameters that are available in each component (as well as
1648their default values).
1649
1650The following options may be helpful:
1651
1652--all       Show a *lot* of information about your Open MPI
1653            installation. 
1654--parsable  Display all the information in an easily
1655            grep/cut/awk/sed-able format.
1656--param <framework> <component>
1657            A <framework> of "all" and a <component> of "all" will
1658            show all parameters to all components.  Otherwise, the
1659            parameters of all the components in a specific framework,
1660            or just the parameters of a specific component can be
1661            displayed by using an appropriate <framework> and/or
1662            <component> name.
1663--level <level>
1664            By default, ompi_info only shows "Level 1" MCA parameters
1665            -- parameters that can affect whether MPI processes can
1666            run successfully or not (e.g., determining which network
1667            interfaces to use).  The --level option will display all
1668            MCA parameters from level 1 to <level> (the max <level>
1669            value is 9).  Use "ompi_info --param <framework>
1670            <component> --level 9" to see *all* MCA parameters for a
1671            given component.  See "The Modular Component Architecture
1672            (MCA)" section, below, for a fuller explanation.
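
For example, the following (illustrative) commands show a summary of
the installation, all information in an easily parsable form, and every
MCA parameter of every component, respectively:

  shell$ ompi_info
  shell$ ompi_info --all --parsable
  shell$ ompi_info --param all all --level 9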
1673
1674Changing the values of these parameters is explained in the "The
1675Modular Component Architecture (MCA)" section, below.
1676
1677When verifying a new Open MPI installation, we recommend running six 
1678tests:
1679
16801. Use "mpirun" to launch a non-MPI program (e.g., hostname or uptime)
1681   across multiple nodes.
1682
16832. Use "mpirun" to launch a trivial MPI program that does no MPI
1684   communication (e.g., the hello_c program in the examples/ directory
1685   in the Open MPI distribution).
1686
16873. Use "mpirun" to launch a trivial MPI program that sends and
1688   receives a few MPI messages (e.g., the ring_c program in the
1689   examples/ directory in the Open MPI distribution).
1690
16914. Use "oshrun" to launch a non-OSHMEM program across multiple nodes.
1692   
16935. Use "oshrun" to launch a trivial OSHMEM program that does no OSHMEM
1694   communication (e.g., the hello_shmem.c program in the examples/
1695   directory in the Open MPI distribution).
1696
16976. Use "oshrun" to launch a trivial OSHMEM program that puts and gets
1698   a few messages (e.g., the ring_shmem.c program in the examples/
1699   directory in the Open MPI distribution).
1700
1701If you can run all six of these tests successfully, that is a good
1702indication that Open MPI built and installed properly.
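
As a purely illustrative sequence (assuming a hostfile named
"my_hostfile" that lists your nodes, and that the example programs have
been built by running "make" in the examples/ directory), the six tests
might look like this:

  shell$ mpirun -np 2 -hostfile my_hostfile hostname
  shell$ mpirun -np 4 -hostfile my_hostfile examples/hello_c
  shell$ mpirun -np 4 -hostfile my_hostfile examples/ring_c
  shell$ oshrun -np 2 -hostfile my_hostfile hostname
  shell$ oshrun -np 4 -hostfile my_hostfile examples/hello_shmem
  shell$ oshrun -np 4 -hostfile my_hostfile examples/ring_shmem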
1703
1704===========================================================================
1705
1706Open MPI API Extensions
1707-----------------------
1708
1709Open MPI contains a framework for extending the MPI API that is
1710available to applications.  Each extension is usually a standalone set of
1711functionality that is distinct from other extensions (similar to how
1712Open MPI's plugins are usually unrelated to each other).  These
1713extensions provide new functions and/or constants that are available
1714to MPI applications.
1715
1716WARNING: These extensions are neither standard nor portable to other
1717MPI implementations!
1718
1719Compiling the extensions
1720------------------------
1721
1722Open MPI extensions are not enabled by default; they must be enabled
1723by Open MPI's configure script.  The --enable-mpi-ext command line
1724switch accepts a comma-delimited list of extensions to enable, or, if
1725it is specified without a list, all extensions are enabled.
1726
1727Since extensions are meant to be used by advanced users only, this
1728file does not document which extensions are available or what they
1729do.  Look in the ompi/mpiext/ directory to see the extensions; each
1730subdirectory of that directory contains an extension.  Each has a
1731README file that describes what it does.
1732
1733Using the extensions
1734--------------------
1735
1736To reinforce the fact that these extensions are non-standard, you must
1737include a separate header file after <mpi.h> to obtain the function
1738prototypes, constant declarations, etc.  For example:
1739
1740-----
1741#include <mpi.h>
1742#if defined(OPEN_MPI) && OPEN_MPI
1743#include <mpi-ext.h>
1744#endif
1745
1746int main() {
1747    MPI_Init(NULL, NULL);
1748
1749#if defined(OPEN_MPI) && OPEN_MPI
1750    {
1751        char ompi_bound[OMPI_AFFINITY_STRING_MAX];
1752        char current_binding[OMPI_AFFINITY_STRING_MAX];
1753        char exists[OMPI_AFFINITY_STRING_MAX];
1754        OMPI_Affinity_str(OMPI_AFFINITY_LAYOUT_FMT, ompi_bound,
1755                          current_binding, exists);
1756    }
1757#endif
1758    MPI_Finalize();
1759    return 0;
1760}
1761-----
1762
1763Notice that the Open MPI-specific code is surrounded by the #if
1764statement to ensure that it is only ever compiled by Open MPI.  
1765
1766The Open MPI wrapper compilers (mpicc and friends) should
1767automatically insert all relevant compiler and linker flags necessary
1768to use the extensions.  No special flags or steps should be necessary
1769compared to "normal" MPI applications.
1770
1771===========================================================================
1772
1773Compiling Open MPI Applications
1774-------------------------------
1775
1776Open MPI provides "wrapper" compilers that should be used for
1777compiling MPI and OSHMEM applications:
1778
1779C:          mpicc, oshcc
1780C++:        mpiCC, oshCC (or mpic++ if your filesystem is case-insensitive)
1781Fortran:    mpifort, oshfort
1782
1783For example:
1784
1785  shell$ mpicc hello_world_mpi.c -o hello_world_mpi -g
1786  shell$
1787
1788For OSHMEM applications:
1789
1790  shell$ oshcc hello_shmem.c -o hello_shmem -g
1791  shell$ 
1792
1793All the wrapper compilers do is add a variety of compiler and linker
1794flags to the command line and then invoke a back-end compiler.  To be
1795specific: the wrapper compilers do not parse source code at all; they
1796are solely command-line manipulators, and have nothing to do with the
1797actual compilation or linking of programs.  The end result is an MPI
1798executable that is properly linked to all the relevant libraries.
1799
1800Customizing the behavior of the wrapper compilers is possible (e.g.,
1801changing the compiler [not recommended] or specifying additional
1802compiler/linker flags); see the Open MPI FAQ for more information.
1803
1804Alternatively, Open MPI also installs pkg-config(1) configuration
1805files under $libdir/pkgconfig.  If pkg-config is configured to find
1806these files, then compiling / linking Open MPI programs can be
1807performed like this:
1808
1809  shell$ gcc hello_world_mpi.c -o hello_world_mpi -g \
1810              `pkg-config ompi-c --cflags --libs`
1811  shell$
1812
1813Open MPI supplies multiple pkg-config(1) configuration files; one for
1814each different wrapper compiler (language):
1815
1816------------------------------------------------------------------------
1817ompi       Synonym for "ompi-c"; Open MPI applications using the C
1818           MPI bindings
1819ompi-c     Open MPI applications using the C MPI bindings
1820ompi-cxx   Open MPI applications using the C or C++ MPI bindings
1821ompi-fort  Open MPI applications using the Fortran MPI bindings
1822------------------------------------------------------------------------
1823
1824The following pkg-config(1) configuration files *may* be installed,
1825depending on which command line options were specified to Open MPI's
1826configure script.  They are not necessary for MPI applications, but
1827may be used by applications that use Open MPI's lower layer support
1828libraries.
1829
1830orte:       Open MPI Run-Time Environment applications
1831opal:       Open Portable Access Layer applications
1832
1833===========================================================================
1834
1835Running Open MPI Applications
1836-----------------------------
1837
1838Open MPI supports both mpirun and mpiexec (they are exactly
1839equivalent) to launch MPI applications.  For example:
1840
1841  shell$ mpirun -np 2 hello_world_mpi
1842  or
1843  shell$ mpiexec -np 1 hello_world_mpi : -np 1 hello_world_mpi
1844
1845are equivalent.  Some of mpiexec's switches (such as -host and -arch)
1846are not yet functional, although they will not error if you try to use
1847them.  
1848
1849The rsh launcher (which defaults to using ssh) accepts a -hostfile
1850parameter (the option "-machinefile" is equivalent); you can specify a
1851-hostfile parameter indicating a standard mpirun-style hostfile (one
1852hostname per line):
1853
1854  shell$ mpirun -hostfile my_hostfile -np 2 hello_world_mpi
1855
1856If you intend to run more than one process on a node, the hostfile can
1857use the "slots" attribute.  If "slots" is not specified, a count of 1
1858is assumed.  For example, using the following hostfile:
1859
1860---------------------------------------------------------------------------
1861node1.example.com
1862node2.example.com
1863node3.example.com slots=2
1864node4.example.com slots=4
1865---------------------------------------------------------------------------
1866
1867  shell$ mpirun -hostfile my_hostfile -np 8 hello_world_mpi
1868
1869will launch MPI_COMM_WORLD rank 0 on node1, rank 1 on node2, ranks 2
1870and 3 on node3, and ranks 4 through 7 on node4.
1871
1872Other starters, such as the resource manager / batch scheduling
1873environments, do not require hostfiles (and will ignore the hostfile
1874if it is supplied).  They will also launch as many processes as there
1875are slots allocated by the scheduler if no "-np" argument has been
1876provided.  For example, running a SLURM job with 8 processors:
1877
1878  shell$ salloc -n 8 mpirun a.out
1879
1880The above command will reserve 8 processors and run 1 copy of mpirun,
1881which will, in turn, launch 8 copies of a.out in a single
1882MPI_COMM_WORLD on the processors that were allocated by SLURM.
1883
1884Note that the values of component parameters can be changed on the
1885mpirun / mpiexec command line.  This is explained in the section
1886below, "The Modular Component Architecture (MCA)".
1887
1888Open MPI supports oshrun to launch OSHMEM applications. For example:
1889
1890   shell$ oshrun -np 2 hello_world_oshmem
1891
1892OSHMEM applications may also be launched directly by resource managers
1893such as SLURM. For example, when Open MPI is configured with --with-pmi
1894and --with-slurm, OSHMEM applications can be launched via srun:
1895
1896   shell$ srun -N 2 hello_world_oshmem 
1897
1898
1899===========================================================================
1900
1901The Modular Component Architecture (MCA)
1902
1903The MCA is the backbone of Open MPI -- most services and functionality
1904are implemented through MCA components.  Here is a list of all the
1905component frameworks in Open MPI:
1906
1907---------------------------------------------------------------------------
1908
1909MPI component frameworks:
1910-------------------------
1911
1912allocator - Memory allocator
1913bcol      - Base collective operations
1914bml       - BTL management layer
1915btl       - MPI point-to-point Byte Transfer Layer, used for MPI
1916            point-to-point messages on some types of networks
1917coll      - MPI collective algorithms
1918crcp      - Checkpoint/restart coordination protocol
1919dpm       - MPI-2 dynamic process management
1920fbtl      - file byte transfer layer: abstraction for individual 
1921            read/write operations for OMPIO
1922fcoll     - collective read and write operations for MPI I/O
1923fs        - file system functions for MPI I/O
1924io        - MPI-2 I/O
1925mpool     - Memory pooling
1926mtl       - Matching transport layer, used for MPI point-to-point
1927            messages on some types of networks
1928op        - Back end computations for intrinsic MPI_Op operators
1929osc       - MPI-2 one-sided communications
1930pml       - MPI point-to-point management layer
1931pubsub    - MPI-2 publish/subscribe management
1932rcache    - Memory registration cache
1933rte       - Run-time environment operations
1934sbgp      - Collective operation sub-group
1935sharedfp  - shared file pointer operations for MPI I/O
1936topo      - MPI topology routines
1937vprotocol - Protocols for the "v" PML
1938
1939OSHMEM component frameworks:
1940-------------------------
1941
1942atomic    - OSHMEM atomic operations
1943memheap   - OSHMEM memory allocators that support the 
1944            PGAS memory model
1945scoll     - OSHMEM collective operations
1946spml      - OSHMEM "pml-like" layer: supports one-sided,
1947            point-to-point operations 
1948 
1949
1950Back-end run-time environment (RTE) component frameworks:
1951---------------------------------------------------------
1952
1953dfs       - Distributed file system
1954errmgr    - RTE error manager
1955ess       - RTE environment-specific services
1956filem     - Remote file management
1957grpcomm   - RTE group communications
1958iof       - I/O forwarding
1959odls      - OpenRTE daemon local launch subsystem
1960oob       - Out of band messaging
1961plm       - Process lifecycle management
1962ras       - Resource allocation system
1963rmaps     - Resource mapping system
1964rml       - RTE message layer
1965routed    - Routing table for the RML
1966sensor    - Software and hardware health monitoring
1967snapc     - Snapshot coordination
1968sstore    - Distributed scalable storage
1969state     - RTE state machine
1970
1971Miscellaneous frameworks:
1972-------------------------
1973
1974backtrace   - Debugging call stack backtrace support
1975compress    - Compression algorithms
1976crs         - Checkpoint and restart service
1977db          - Internal database support
1978event       - Event library (libevent) versioning support
1979hwloc       - Hardware locality (hwloc) versioning support
1980if          - OS IP interface support
1981installdirs - Installation directory relocation services
1982memchecker  - Run-time memory checking
1983memcpy      - Memory copy support
1984memory      - Memory management hooks
1985pstat       - Process status
1986shmem       - Shared memory support (NOT related to OSHMEM)
1987timer       - High-resolution timers
1988
1989---------------------------------------------------------------------------
1990
1991Each framework typically has one or more components that are used at
1992run-time.  For example, the btl framework is used by the MPI layer to
1993send bytes across different types of underlying networks.  The tcp btl,
1994for example, sends messages across TCP-based networks; the openib btl
1995sends messages across OpenFabrics-based networks; the MX btl sends
1996messages across Myrinet MX / Open-MX networks.
1997
1998Each component typically has some tunable parameters that can be
1999changed at run-time.  Use the ompi_info command to check a component
2000to see what its tunable parameters are.  For example:
2001
2002  shell$ ompi_info --param btl tcp
2003
2004shows some of the parameters (and their default values) for the tcp btl
2005component.  
2006
2007Note that ompi_info only shows a small number of a component's MCA
2008parameters by default.  Each MCA parameter has a "level" value from 1
2009to 9, corresponding to the MPI-3 MPI_T tool interface levels.  In Open
2010MPI, we have interpreted these nine levels as three groups of three:
2011
2012 1. End user / basic
2013 2. End user / detailed
2014 3. End user / all
2015
2016 4. Application tuner / basic
2017 5. Application tuner / detailed
2018 6. Application tuner / all
2019
2020 7. MPI/OSHMEM developer / basic
2021 8. MPI/OSHMEM developer / detailed
2022 9. MPI/OSHMEM developer / all
2023
2024Here's how the three groups are defined:
2025
2026 1. End user: Generally, these are parameters that are required for
2027    correctness, meaning that someone may need to set these just to
2028    get their MPI/OSHMEM application to run correctly. 
2029 2. Application tuner: Generally, these are parameters that can be
2030    used to tweak MPI application performance.
2031 3. MPI/OSHMEM developer: Parameters that either don't fit in the other two,
2032    or are specifically intended for debugging / development of Open
2033    MPI itself.
2034
2035Each group is broken down into three classifications:
2036
2037 1. Basic: For parameters that everyone in this category will want to
2038    see.
2039 2. Detailed: Parameters that are useful, but you probably won't need
2040    to change them often.
2041 3. All: All other parameters -- probably including some fairly
2042    esoteric parameters.
2043
2044To see *all* available parameters for a given component, specify that
2045ompi_info should use level 9:
2046
2047  shell$ ompi_info --param btl tcp --level 9
2048
2049These values can be overridden at run-time in several ways.  At
2050run-time, the following locations are examined (in order) for new
2051values of parameters:
2052
20531. <prefix>/etc/openmpi-mca-params.conf
2054
2055   This file is intended to set any system-wide default MCA parameter
2056   values -- it will apply, by default, to all users who use this Open
2057   MPI installation.  The default file that is installed contains many
2058   comments explaining its format; a brief example appears below.
2059
20602. $HOME/.openmpi/mca-params.conf
2061
2062   If this file exists, it should be in the same format as
2063   <prefix>/etc/openmpi-mca-params.conf.  It is intended to provide
2064   per-user default parameter values.
2065
20663. environment variables of the form OMPI_MCA_<name> set equal to a
2067   <value>
2068
2069   Where <name> is the name of the parameter.  For example, set the
2070   variable named OMPI_MCA_btl_tcp_frag_size to the value 65536
2071   (Bourne-style shells):
2072
2073   shell$ OMPI_MCA_btl_tcp_frag_size=65536
2074   shell$ export OMPI_MCA_btl_tcp_frag_size
2075
20764. the mpirun/oshrun command line: --mca <name> <value>
2077 
2078   Where <name> is the name of the parameter.  For example:
2079
2080   shell$ mpirun --mca btl_tcp_frag_size 65536 -np 2 hello_world_mpi
2081
2082These locations are checked in order.  For example, a parameter value
2083passed on the mpirun command line will override an environment
2084variable; an environment variable will override the system-wide
2085defaults.
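
For illustration only, a minimal <prefix>/etc/openmpi-mca-params.conf
(or $HOME/.openmpi/mca-params.conf) might contain lines like these,
using the same "<name> = <value>" format described in the installed
default file:

  # Hypothetical example values -- adjust for your own site
  btl = tcp,sm,self
  btl_tcp_frag_size = 65536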
2086
2087Each component typically activates itself when relevant.  For example,
2088the MX component will detect that MX devices are present and will
2089automatically be used for MPI communications.  The SLURM component
2090will automatically detect when running inside a SLURM job and activate
2091itself.  And so on.
2092
2093Components can be manually activated or deactivated if necessary, of
2094course.  The most common components that are manually activated,
2095deactivated, or tuned are the "BTL" components -- components that are
2096used for MPI point-to-point communications on many common types of
2097networks. 
2098
2099For example, to use *only* the TCP and "self" (process loopback)
2100components for MPI communications, specify them in a
2101comma-delimited list to the "btl" MCA parameter:
2102
2103   shell$ mpirun --mca btl tcp,self hello_world_mpi
2104
2105To add shared memory support, add "sm" to the comma-delimited list
2106(list order does not matter):
2107    
2108   shell$ mpirun --mca btl tcp,sm,self hello_world_mpi
2109
2110To deactivate a specific component, the comma-delimited list can be
2111prepended with a "^" to negate it:
2112
2113   shell$ mpirun --mca btl ^tcp hello_world_mpi
2114
2115The above command will use any BTL component other than the tcp
2116component.
2117
2118===========================================================================
2119
2120Common Questions
2121----------------
2122
2123Many common questions about building and using Open MPI are answered
2124on the FAQ:
2125
2126    http://www.open-mpi.org/faq/
2127
2128===========================================================================
2129
2130Got more questions?
2131-------------------
2132
2133Found a bug?  Got a question?  Want to make a suggestion?  Want to
2134contribute to Open MPI?  Please let us know!
2135
2136When submitting questions and problems, be sure to include as much
2137extra information as possible.  This web page details all the
2138information that we request in order to provide assistance:
2139
2140     http://www.open-mpi.org/community/help/
2141
2142User-level questions and comments should generally be sent to the
2143user's mailing list (users@open-mpi.org).  Because of spam, only
2144subscribers are allowed to post to this list (ensure that you
2145subscribe with and post from *exactly* the same e-mail address --
2146joe@example.com is considered different than
2147joe@mycomputer.example.com!).  Visit this page to subscribe to the
2148user's list:
2149
2150     http://www.open-mpi.org/mailman/listinfo.cgi/users
2151
2152Developer-level bug reports, questions, and comments should generally
2153be sent to the developer's mailing list (devel@open-mpi.org).  Please
2154do not post the same question to both lists.  As with the user's list,
2155only subscribers are allowed to post to the developer's list.  Visit
2156the following web page to subscribe:
2157
2158     http://www.open-mpi.org/mailman/listinfo.cgi/devel
2159
2160Make today an Open MPI day!
2161

README.JAVA.txt

1***************************************************************************
2IMPORTANT NOTE
3
4JAVA BINDINGS ARE PROVIDED ON A "PROVISIONAL" BASIS - I.E., THEY ARE
5NOT PART OF THE CURRENT OR PROPOSED MPI STANDARDS. THUS, INCLUSION OF
6JAVA SUPPORT IS NOT REQUIRED BY THE STANDARD. CONTINUED INCLUSION OF
7THE JAVA BINDINGS IS CONTINGENT UPON ACTIVE USER INTEREST AND
8CONTINUED DEVELOPER SUPPORT.
9
10***************************************************************************
11
12This version of Open MPI provides support for Java-based
13MPI applications.
14
15The rest of this document provides step-by-step instructions on
16building OMPI with Java bindings, and compiling and running
17Java-based MPI applications. Also, part of the functionality is
18explained with examples. Further details about the design,
19implementation and usage of Java bindings in Open MPI can be found
20in [1]. The bindings follow a JNI approach, that is, we do not
21provide a pure Java implementation of MPI primitives, but a thin
22layer on top of the C implementation. This is the same approach
23as in mpiJava [2]; in fact, mpiJava was taken as a starting point
24for Open MPI Java bindings, but they were later totally rewritten.
25
26 [1] O. Vega-Gisbert, J. E. Roman, and J. M. Squyres. "Design and
27     implementation of Java bindings in Open MPI". In preparation
28     (2013).
29
30 [2] M. Baker et al. "mpiJava: An object-oriented Java interface to
31     MPI". In Parallel and Distributed Processing, LNCS vol. 1586,
32     pp. 748-762, Springer (1999).
33
34============================================================================
35
36Building Java Bindings
37
38If this software was obtained as a developer-level
39checkout as opposed to a tarball, you will need to start your build by
40running ./autogen.pl. This will also require that you have a fairly
41recent version of autotools on your system - see the HACKING file for
42details.
43
44Java support requires that Open MPI be built at least with shared libraries
45(i.e., --enable-shared) - any additional options are fine and will not
46conflict. Note that this is the default for Open MPI, so you don't
47have to explicitly add the option. The Java bindings will build only
48if --enable-mpi-java is specified, and a JDK is found in a typical
49system default location.
50
51If the JDK is not in a place where we automatically find it, you can
52specify the location. For example, this is required on the Mac
53platform as the JDK headers are located in a non-typical location. Two
54options are available for this purpose:
55
56--with-jdk-bindir=<foo> - the location of javac and javah
57--with-jdk-headers=<bar> - the directory containing jni.h
58
59For simplicity, typical configurations are provided in platform files
60under contrib/platform/hadoop. These will meet the needs of most
61users, or at least provide a starting point for your own custom
62configuration.
63
64In summary, therefore, you can configure the system using the
65following Java-related options:
66
67$ ./configure --with-platform=contrib/platform/hadoop/<your-platform>
68...
69
70or
71
72$ ./configure --enable-mpi-java --with-jdk-bindir=<foo>
73              --with-jdk-headers=<bar> ...
74
75or simply
76
77$ ./configure --enable-mpi-java ...
78
79if JDK is in a "standard" place that we automatically find.
80
81----------------------------------------------------------------------------
82
83Running Java Applications
84
85For convenience, the "mpijavac" wrapper compiler has been provided for
86compiling Java-based MPI applications. It ensures that all required MPI
87libraries and class paths are defined. You can see the actual command
88line using the --showme option, if you are interested.
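
For example, to compile a hypothetical MyApp.java (the class name is
illustrative only):

$ mpijavac MyApp.java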
89
90Once your application has been compiled, you can run it with the
91standard "mpirun" command line:
92
93$ mpirun <options> java <your-java-options> <my-app>
94
95For convenience, mpirun has been updated to detect the "java" command
96and ensure that the required MPI libraries and class paths are defined
97to support execution. You therefore do NOT need to specify the Java
98library path to the MPI installation, nor the MPI classpath. Any class
99path definitions required for your application should be specified
100either on the command line or via the CLASSPATH environmental
101variable. Note that the local directory will be added to the class
102path if nothing is specified.
103
104As always, the "java" executable, all required libraries, and your
105application classes must be available on all nodes.
106
107----------------------------------------------------------------------------
108
109Basic usage of Java bindings
110
111There is an MPI package that contains all classes of the MPI Java
112bindings: Comm, Datatype, Request, etc. These classes have a direct
113correspondence with classes defined by the MPI standard. MPI primitives
114are just methods included in these classes. The convention used for
115naming Java methods and classes is the usual camel-case convention,
116e.g., the equivalent of MPI_File_set_info(fh,info) is fh.setInfo(info),
117where fh is an object of the class File.
118
119Apart from classes, the MPI package contains predefined public attributes
120under a convenience class MPI. Examples are the predefined communicator
121MPI.COMM_WORLD or predefined datatypes such as MPI.DOUBLE. Also, MPI
122initialization and finalization are methods of the MPI class and must
123be invoked by all MPI Java applications. The following example illustrates
124these concepts:
125
126import mpi.*;
127
128class ComputePi {
129
130    public static void main(String args[]) throws MPIException {
131
132        MPI.Init(args);
133
134        int rank = MPI.COMM_WORLD.getRank(),
135            size = MPI.COMM_WORLD.getSize(),
136            nint = 100; // Intervals.
137        double h = 1.0/(double)nint, sum = 0.0;
138
139        for(int i=rank+1; i<=nint; i+=size) {
140            double x = h * ((double)i - 0.5);
141            sum += (4.0 / (1.0 + x * x));
142        }
143
144        double sBuf[] = { h * sum },
145               rBuf[] = new double[1];
146
147        MPI.COMM_WORLD.reduce(sBuf, rBuf, 1, MPI.DOUBLE, MPI.SUM, 0);
148
149        if(rank == 0) System.out.println("PI: " + rBuf[0]);
150        MPI.Finalize();
151    }
152}
153
154----------------------------------------------------------------------------
155
156Exception handling
157
158Java bindings in Open MPI support exception handling. By default, errors
159are fatal, but this behavior can be changed. The Java API will throw
160exceptions if the MPI.ERRORS_RETURN error handler is set:
161
162    MPI.COMM_WORLD.setErrhandler(MPI.ERRORS_RETURN);
163
164If you add this statement to your program, it will show the line
165where it breaks, instead of just crashing in case of an error.
166Error-handling code can be separated from main application code by
167means of try-catch blocks, for instance:
168
169    try
170    {
171        File file = new File(MPI.COMM_SELF, "filename", MPI.MODE_RDONLY);
172    }
173    catch(MPIException ex)
174    {
175        System.err.println("Error Message: "+ ex.getMessage());
176        System.err.println("  Error Class: "+ ex.getErrorClass());
177        ex.printStackTrace();
178        System.exit(-1);
179    }
180
181
182----------------------------------------------------------------------------
183
184How to specify buffers
185
186In MPI primitives that require a buffer (either send or receive) the
187Java API admits a Java array. Since Java arrays can be relocated by
188the Java runtime environment, the MPI Java bindings need to make a
189copy of the contents of the array to a temporary buffer, then pass the
190pointer to this buffer to the underlying C implementation. From the
191practical point of view, this implies an overhead associated to all
192buffers that are represented by Java arrays. The overhead is small
193for small buffers but increases for large arrays.
194
195There is a pool of temporary buffers with a default capacity of 64K.
196If a temporary buffer of 64K or less is needed, then the buffer will
197be obtained from the pool. But if the buffer is larger, then it will
198be necessary to allocate the buffer and free it later.
199
200The default capacity of pool buffers can be modified with an 'mca'
201parameter:
202
203    mpirun --mca mpi_java_eager size ...
204
205Where 'size' is the number of bytes, or kilobytes if it ends with 'k',
206or megabytes if it ends with 'm'.
207
208An alternative is to use "direct buffers" provided by standard
209classes available in the Java SDK such as ByteBuffer. For convenience
210we provide a few static methods "new[Type]Buffer" in the MPI class
211to create direct buffers for a number of basic datatypes. Elements
212of the direct buffer can be accessed with methods put() and get(),
213and the number of elements in the buffer can be obtained with the
214method capacity(). This example illustrates its use:
215
216    int myself = MPI.COMM_WORLD.getRank();
217    int tasks  = MPI.COMM_WORLD.getSize();
218
219    IntBuffer in  = MPI.newIntBuffer(MAXLEN * tasks),
220              out = MPI.newIntBuffer(MAXLEN);
221
222    for(int i = 0; i < MAXLEN; i++)
223        out.put(i, myself);      // fill the buffer with the rank
224
225    Request request = MPI.COMM_WORLD.iAllGather(
226                      out, MAXLEN, MPI.INT, in, MAXLEN, MPI.INT);
227    request.waitFor();
228    request.free();
229
230    for(int i = 0; i < tasks; i++)
231    {
232        for(int k = 0; k < MAXLEN; k++)
233        {
234            if(in.get(k + i * MAXLEN) != i)
235                throw new AssertionError("Unexpected value");
236        }
237    }
238
239Direct buffers are available for: BYTE, CHAR, SHORT, INT, LONG,
240FLOAT, and DOUBLE. There is no direct buffer for booleans.
241
242Direct buffers are not a replacement for arrays, because they have
243higher allocation and deallocation costs than arrays. In some
244cases arrays will be a better choice. You can easily convert a
245buffer into an array and vice versa.
246
247All non-blocking methods must use direct buffers and only
248blocking methods can choose between arrays and direct buffers.
249
250The above example also illustrates that it is necessary to call
251the free() method on objects whose class implements the Freeable
252interface. Otherwise a memory leak is produced.
253
254----------------------------------------------------------------------------
255
256Specifying offsets in buffers
257
2258In a C program, it is common to specify an offset in an array with
2259"&array[i]" or "array+i", for instance to send data starting from
2260a given position in the array. The equivalent form in the Java bindings
2261is to "slice()" the buffer to start at an offset. Making a "slice()"
2262on a buffer is only necessary when the offset is not zero. Slices
263work for both arrays and direct buffers.
264
265    import static mpi.MPI.slice;
266    ...
267    int numbers[] = new int[SIZE];
268    ...
269    MPI.COMM_WORLD.send(slice(numbers, offset), count, MPI.INT, 1, 0);
270
271----------------------------------------------------------------------------
272
273If you have any problems, or find any bugs, please feel free to report
274them to Open MPI user's mailing list (see
275http://www.open-mpi.org/community/lists/ompi.php).
276