Molpro 2023.1.0

Webpage

https://www.molpro.net/

Version

2023.1.0

Build Environment

  • GCC 12.1.1 (gcc-toolset-12)
  • Open MPI 4.1.5
  • Eigen 3.4.0
  • MKL 2023.2.0

Files Required

  • molpro-2023.1.0.tar.gz
  • ga-5.8.2.tar.gz
  • work.patch
  • patch-argos-binput.F
  • patch-cic-ItfFortranInt.h
  • patch-common_modules-common_cconf1
    • These patches change some parameters for huge CI calculations and modify the default path for temporary files.
    • The patch files are placed in the /apl/molpro/2023.1.0/patches directory.
  • token

Build Procedure

#!/bin/sh

GA_VERSION=5.8.2
GA_ARCHIVE=/home/users/${USER}/Software/GlobalArrays/${GA_VERSION}/ga-${GA_VERSION}.tar.gz

MOLPRO_VERSION=2023.1.0
MOLPRO_DIRNAME=molpro-${MOLPRO_VERSION}
PARALLEL=12
BASEDIR=/home/users/${USER}/Software/Molpro/${MOLPRO_VERSION}
MOLPRO_TARBALL=${BASEDIR}/${MOLPRO_DIRNAME}.tar.gz

PATCH0=${BASEDIR}/work.patch
PATCH1=${BASEDIR}/patch-argos-binput.F
PATCH2=${BASEDIR}/patch-cic-ItfFortranInt.h
PATCH3=${BASEDIR}/patch-common_modules-common_cconf1

TOKEN=${BASEDIR}/token

WORKDIR=/gwork/users/${USER}
GA_INSTALLDIR=${WORKDIR}/ga-temporary
INSTALLDIR=/apl/molpro/${MOLPRO_VERSION}

#------------------------------------------
umask 0022
ulimit -s unlimited

export LANG=
export LC_ALL=C
export OMP_NUM_THREADS=1

cd $WORKDIR
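# remove leftovers of a previous build attempt (deletion continues in the background)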
if [ -d ga-${GA_VERSION} ]; then
  mv ga-${GA_VERSION} ga_tmp
  rm -rf ga_tmp &
fi
if [ -d ga-temporary ]; then
  mv ga-temporary ga_tmp_tmp
  rm -rf ga_tmp_tmp &
fi
if [ -d ${MOLPRO_DIRNAME} ]; then
  mv ${MOLPRO_DIRNAME} molpro_tmp
  rm -rf molpro_tmp &
fi

module -s purge
module -s load gcc-toolset/12
module -s load openmpi/4.1.5/gcc12
module -s load eigen/3.4.0

tar zxf ${GA_ARCHIVE}
cd ga-${GA_VERSION}

export CFLAGS="-mpc80"
export FFLAGS="-mpc80"
export FCFLAGS="-mpc80"
export CXXFLAGS="-mpc80"

export F77=mpif90
export F90=mpif90
export FC=mpif90
export CC=mpicc
export CXX=mpicxx
export MPIF77=mpif90
export MPICC=mpicc
export MPICXX=mpicxx
export GA_FOPT="-O3"
export GA_COPT="-O3"
export GA_CXXOPT="-O3"

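# build Global Arrays with 8-byte integers (--enable-i8) and the MPI progress-rank port (--with-mpi-pr)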
./autogen.sh
./configure --enable-i8 \
            --with-mpi-pr \
            --prefix=${GA_INSTALLDIR}

make -j ${PARALLEL}
make check
make install

# mkl for molpro
module -s load mkl/2023.2.0

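# extract and patch the Molpro source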
cd ${WORKDIR}
tar zxf ${MOLPRO_TARBALL}
cd ${MOLPRO_DIRNAME}

patch -p0 < ${PATCH0}
patch -p0 < ${PATCH1}
patch -p0 < ${PATCH2}
patch -p0 < ${PATCH3}

export PATH="${GA_INSTALLDIR}/bin:$PATH" # where ga-config exists

CPPFLAGS="-I${GA_INSTALLDIR}/include" \
LDFLAGS="-L${GA_INSTALLDIR}/lib64" \
    ./configure --prefix=${INSTALLDIR} \
                --enable-slater

make -j ${PARALLEL}
cp $TOKEN lib/.token # this file will be protected manually later

make tuning

MOLPRO_OPTIONS="" make quicktest
MOLPRO_OPTIONS="-n2" make test

make install
cp -a testjobs ${INSTALLDIR}/molpro*/
cp -a bench ${INSTALLDIR}/molpro*/
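
The token copied to lib/.token above is protected manually afterwards, as noted in the script. The sketch below shows one possible way to do that and to sanity-check the installed binary; the molpro* subdirectory layout, the permission mode, and the input file name are assumptions, not part of the actual procedure.

# post-install sketch (directory layout, permissions, and input file are assumptions)
MOLPRO_ROOT=$(echo /apl/molpro/2023.1.0/molpro*)

# restrict access to the license token installed as lib/.token
chmod 640 ${MOLPRO_ROOT}/lib/.token

# quick sanity run with 2 MPI processes (test.inp is a placeholder input)
${MOLPRO_ROOT}/bin/molpro -n 2 test.inp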

Tests

  • h2o_rvci and h2o_rvci_dip failed with the following error message.
    • The compiler version, MPI implementation (Open MPI, Intel MPI, MVAPICH), and BLAS library (MKL or OpenBLAS) do not seem to be related to this issue.
    • This failure is simply ignored for now.

Running job h2o_rvci.test
At line 453 of file ../src/vscf/mod_surf_headquarter.F90 (unit = 86)
Fortran runtime error: Cannot open file '********/molpro-2023.1.0/testjobs/h2o_dip.pot_NEW': No such file or directory
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[29951,1],0]
  Exit code:    2
--------------------------------------------------------------------------
**** PROBLEMS WITH JOB h2o_rvci.test
h2o_rvci.test: ERRORS DETECTED: non-zero return code ... inspect output
**** For further information, look in the output file
**** /**********/molpro-2023.1.0/testjobs/h2o_rvci.errout
Running job h2o_rvci_dip.test
At line 453 of file ../src/vscf/mod_surf_headquarter.F90 (unit = 86)
Fortran runtime error: Cannot open file '/*************/molpro-2023.1.0/testjobs/h2o_dip.pot_NEW': No such file or directory
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[31011,1],0]
  Exit code:    2
--------------------------------------------------------------------------
**** PROBLEMS WITH JOB h2o_rvci_dip.test
h2o_rvci_dip.test: ERRORS DETECTED: non-zero return code ... inspect output
**** For further information, look in the output file
**** /*************/molpro-2023.1.0/testjobs/h2o_rvci_dip.errout

Notes

  • When MKL was enabled during the GA build, the build failed. We therefore don't use MKL for GA.
  • gcc11 failed to build Molpro due to a compilation error. (gcc10 can build it without problems.)
  • Vanilla Open MPI is employed for this version. HPC-X 2.13.1 (which was used in previous builds) seems to cause problems occasionally.
    • Moreover, the hcoll library (bundled with HPC-X 2.13.1/2.11) may not work well with Molpro. hcoll is not used in vanilla Open MPI.
    • We will continue to investigate this issue.
  • MVAPICH can build Molpro without problems. However, MCSCF calculations were terribly slow (and possibly unstable) in this version. We thus don't employ MVAPICH.
  • If the disk option (the default setting for single-node runs) is used in PNO-LCCSD, it sometimes crashes with the following error message.
    • We don't know how to avoid this.

ERROR: Error setting an MPI file to atomic mode
The problem occurs in PNO-LCCSD

  • According to the official documentation, Open MPI may not work with --ga-impl disk when multiple parallel Molpro calculations are executed simultaneously on a node. (We have not yet reproduced this error, though.) A command-line sketch follows these notes.
    • We have seen several jobs hang, but we couldn't distinguish those cases from the ones where hcoll is the suspect.
    • ref: https://www.molpro.net/manual/doku.php?id=ga_installation
  • In parallel Molpro calculations (3 or more MPI processes), the calculated results (such as energies) can differ from run to run. We assume this is because atomic operations are carried out in a non-deterministic order.
    • In normal HF/DFT calculations these differences are negligibly small; the printed values (such as energies) are identical in most cases. However, in some complicated MCSCF calculations, for example, the deviations can be large. Serial runs (n=1 or 2; with the mpi-pr build of GA one process acts as a helper, so n=2 corresponds to a single compute process) always gave identical results in all the cases we have investigated.
    • (This is not specific to version 2023.1.0; it also happened in earlier versions.)
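
If the disk-based implementation is suspected in a PNO-LCCSD crash or hang, the implementation can be selected explicitly on the Molpro command line (see the manual page linked above). A minimal sketch; the value "ga" and the input file name are assumptions here, and we have not verified that this avoids the problem:

# request the GA-based implementation instead of the disk (MPI file) one
molpro -n 8 --ga-impl ga input.inp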