LAMMPS 2Aug23 (Intel oneAPI Classic)

Webpage

https://www.lammps.org

Version

2Aug23

Build Environment

  • Intel oneAPI Compiler Classic 2023.2.0
  • Intel MKL 2023.2.0
  • Intel MPI 2021.10.0

Files Required

  • lammps-stable_2Aug2023.tar.gz
  • (Several additional packages are downloaded automatically during the procedure below.)
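
The tarball can be fetched in advance, e.g. from the GitHub tag for this
release (the URL is an assumption based on the usual LAMMPS tag naming;
adjust it if the release has moved):

wget -O lammps-stable_2Aug2023.tar.gz \
  https://github.com/lammps/lammps/archive/refs/tags/stable_2Aug2023.tar.gz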

Build Procedure

#!/bin/sh

VERSION=2Aug23
NAME=lammps-stable_2Aug2023
INSTALL_PREFIX=/apl/lammps/2023-Aug2-intel

BASEDIR=/home/users/${USER}/Software/LAMMPS/${VERSION}
LAMMPS_TARBALL=${BASEDIR}/${NAME}.tar.gz

WORKDIR=/gwork/users/${USER}
LAMMPS_WORKDIR=${WORKDIR}/${NAME}

VMD_MOLFILE_INC=/home/users/${USER}/Software/VMD/1.9.4/vmd-1.9.4a57/plugins/include

PARALLEL=12

#------------------------------------------------------------------
umask 0022
export LANG=C
ulimit -s unlimited

module -s purge

# load intel env (2023.2.0)
. ~/intel/oneapi/compiler/latest/env/vars.sh

module -s load intelmpi/2021.10.0
module -s load mkl/2023.2.0

PYTHONEXE=/usr/bin/python3.6m
PYTHONINC=/usr/include/python3.6m

cd ${WORKDIR}
if [ -d ${NAME} ]; then
  # move leftovers from a previous build aside and delete them in the background
  mv ${NAME} lammps_erase
  rm -rf lammps_erase &
fi

tar zxf ${LAMMPS_TARBALL}

cd ${NAME}
# replace -xHost with -march=core-avx2; "intel" package acceleration does not
# work with -xHost under the classic compilers (see Notes below)
sed -i -e "s/xHost/march=core-avx2/" cmake/CMakeLists.txt
mkdir build && cd build

# Disabled PKGs:
# FFMPEG, ADIOS, VTK: not available
# QUIP: not available
# ML-HDNNP: failed to build
# ML-IAP: failed to build
# KIM: CDDL is incompatible with GPL
# MPIIO: not maintained?

cmake ../cmake \
  -DLAMMPS_MACHINE=rccs \
  -DENABLE_TESTING=on \
  -DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
  -DCMAKE_C_COMPILER=icc \
  -DCMAKE_CXX_COMPILER=icpc \
  -DCMAKE_Fortran_COMPILER=ifort \
  -DCMAKE_MPI_C_COMPILER=mpiicc \
  -DCMAKE_MPI_CXX_COMPILER=mpiicpc \
  -DCMAKE_MPI_Fortran_COMPILER=mpiifort \
  -DCMAKE_C_FLAGS_RELEASE="-O3 -DNDEBUG" \
  -DCMAKE_CXX_FLAGS_RELEASE="-O3 -DNDEBUG" \
  -DCMAKE_Fortran_FLAGS_RELEASE="-O3 -DNDEBUG" \
  -DPython_EXECUTABLE=${PYTHONEXE} \
  -DPython_INCLUDE_DIR=${PYTHONINC} \
  -DLAMMPS_EXCEPTIONS=on \
  -DBUILD_SHARED_LIBS=on \
  -DBUILD_TOOLS=on \
  -DBUILD_MPI=on \
  -DBUILD_OMP=on \
  -DBUILD_LAMMPS_GUI=off \
  -DFFT=MKL \
  -DFFT_SINGLE=on \
  -DFFT_MKL_THREADS=on \
  -DWITH_JPEG=on \
  -DWITH_PNG=on \
  -DWITH_GZIP=on \
  -DPKG_ADIOS=off \
  -DPKG_AMOEBA=on \
  -DPKG_ASPHERE=on \
  -DPKG_ATC=on \
  -DPKG_AWPMD=on \
  -DPKG_BOCS=on \
  -DPKG_BODY=on \
  -DPKG_BPM=on \
  -DPKG_BROWNIAN=on \
  -DPKG_CG-DNA=on \
  -DPKG_CG-SPICA=on \
  -DPKG_CLASS2=on \
  -DPKG_COLLOID=on \
  -DPKG_COLVARS=on \
  -DPKG_COMPRESS=on \
  -DPKG_CORESHELL=on \
  -DPKG_DIELECTRIC=on \
  -DPKG_DIFFRACTION=on \
  -DPKG_DIPOLE=on \
  -DPKG_DPD-BASIC=on \
  -DPKG_DPD-MESO=on \
  -DPKG_DPD-REACT=on \
  -DPKG_DPD-SMOOTH=on \
  -DPKG_DRUDE=on \
  -DPKG_ELECTRODE=on \
  -DPKG_EFF=on \
  -DPKG_EXTRA-COMPUTE=on \
  -DPKG_EXTRA-DUMP=on \
  -DPKG_EXTRA-FIX=on \
  -DPKG_EXTRA-MOLECULE=on \
  -DPKG_EXTRA-PAIR=on \
  -DPKG_FEP=on \
  -DPKG_GPU=off \
  -DPKG_GRANULAR=on \
  -DPKG_H5MD=on \
  -DPKG_INTEL=on \
  -DPKG_INTERLAYER=on \
  -DPKG_KIM=off \
  -DDOWNLOAD_KIM=off \
  -DPKG_KOKKOS=off \
  -DKokkos_ARCH_ZEN3=off \
  -DKokkos_ENABLE_OPENMP=off \
  -DPKG_KSPACE=on \
  -DPKG_LATBOLTZ=on \
  -DPKG_LEPTON=on \
  -DPKG_MACHDYN=on \
  -DDOWNLOAD_EIGEN3=on \
  -DPKG_MANIFOLD=on \
  -DPKG_MANYBODY=on \
  -DPKG_MC=on \
  -DPKG_MDI=on \
  -DPKG_MEAM=on \
  -DPKG_MESONT=on \
  -DPKG_MGPT=on \
  -DPKG_MISC=on \
  -DPKG_ML-HDNNP=off \
  -DDOWNLOAD_N2P2=off \
  -DPKG_ML-IAP=off \
  -DPKG_ML-PACE=on \
  -DPKG_ML-QUIP=off \
  -DDOWNLOAD_QUIP=off \
  -DPKG_ML-RANN=on \
  -DPKG_ML-SNAP=on \
  -DPKG_MOFFF=on \
  -DPKG_MOLECULE=on \
  -DPKG_MOLFILE=on \
  -DMOLFILE_INCLUDE_DIR=${VMD_MOLFILE_INC} \
  -DPKG_MPIIO=off \
  -DPKG_MSCG=on \
  -DPKG_NETCDF=on \
  -DPKG_OPENMP=on \
  -DPKG_OPT=on \
  -DPKG_ORIENT=on \
  -DPKG_PERI=on \
  -DPKG_PHONON=on \
  -DPKG_PLUGIN=on \
  -DPKG_PLUMED=on \
  -DDOWNLOAD_PLUMED=on \
  -DPKG_POEMS=on \
  -DPKG_PTM=on \
  -DPKG_PYTHON=on \
  -DPKG_QEQ=on \
  -DPKG_QMMM=on \
  -DPKG_QTB=on \
  -DPKG_REACTION=on \
  -DPKG_REAXFF=on \
  -DPKG_REPLICA=on \
  -DPKG_RIGID=on \
  -DPKG_SCAFACOS=on \
  -DDOWNLOAD_SCAFACOS=on \
  -DPKG_SHOCK=on \
  -DPKG_SMTBQ=on \
  -DPKG_SPH=on \
  -DPKG_SPIN=on \
  -DPKG_SRD=on \
  -DPKG_TALLY=on \
  -DPKG_UEF=on \
  -DPKG_VORONOI=on \
  -DDOWNLOAD_VORO=on \
  -DPKG_VTK=off \
  -DPKG_YAFF=on \
  -DBLAS_LIBRARIES="-qmkl" \
  -DCMAKE_BUILD_TYPE=Release

make VERBOSE=1 -j ${PARALLEL}

# run the test suite with 2 OpenMP threads
export OMP_NUM_THREADS=2

make test # some tests fail; see the Tests section below
make install

cp -a ../examples ${INSTALL_PREFIX}

cd ${INSTALL_PREFIX}
for f in etc/profile.d/*; do
  ln -s "${f}" .
done

cd lib64
# provide generic liblammps.so names alongside the machine-suffixed ones
if [ -f liblammps_rccs.so ]; then
  ln -s liblammps_rccs.so liblammps.so
fi
if [ -f liblammps_rccs.so.0 ]; then
  ln -s liblammps_rccs.so.0 liblammps.so.0
fi
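
# sanity check (optional): the installed binary should start up and report the
# enabled packages under "Installed packages:" in its help output
${INSTALL_PREFIX}/bin/lmp_rccs -h

# if the profile.d scripts above also expose the python module (an assumption;
# depends on the environment setup), the shared library can be checked with:
# python3 -c "from lammps import lammps; print(lammps().version())"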

List of Available Packages

AMOEBA ASPHERE ATC AWPMD BOCS BODY BPM BROWNIAN CG-DNA CG-SPICA CLASS2 COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF ELECTRODE EXTRA-COMPUTE EXTRA-DUMP EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR H5MD INTEL INTERLAYER KSPACE LATBOLTZ LEPTON MACHDYN MANIFOLD MANYBODY MC MDI MEAM MESONT MGPT MISC ML-PACE ML-RANN ML-SNAP MOFFF MOLECULE MOLFILE MSCG NETCDF OPENMP OPT ORIENT PERI PHONON PLUGIN PLUMED POEMS PTM PYTHON QEQ QMMM QTB REACTION REAXFF REPLICA RIGID SCAFACOS SHOCK SMTBQ SPH SPIN SRD TALLY UEF VORONOI YAFF

Tests

A copy of the test results is available at /apl/lammps/2023-Aug2-intel/Testing/. Please see there for details.
(Although many tests failed, most of the failures are caused by minor numerical differences. Some of the failures may not be negligible, however.)

         13 - AtomStyles (Failed)
         37 - SimpleCommands (SEGFAULT)
         39 - Groups (Failed)
         40 - Regions (Subprocess aborted)
        118 - MolPairStyle:coul_diel (Failed)
        125 - MolPairStyle:coul_shield (Failed)
        127 - MolPairStyle:coul_slater_long (Failed)
        169 - MolPairStyle:lj_class2_soft (Failed)
        188 - MolPairStyle:lj_cut_soft (Failed)
        194 - MolPairStyle:lj_expand_coul_long (Failed)
        211 - MolPairStyle:lj_spica_coul_long (Failed)
        212 - MolPairStyle:lj_spica_coul_table (Failed)
        213 - MolPairStyle:lj_switch3_coulgauss_long (Failed)
        237 - MolPairStyle:tip4p_long_soft (Failed)
        240 - MolPairStyle:wf_cut (Failed)
        249 - AtomicPairStyle:buck_coul_cut_qeq_point (Failed)
        250 - AtomicPairStyle:buck_coul_cut_qeq_shielded (Failed)
        267 - AtomicPairStyle:edip (Failed)
        273 - AtomicPairStyle:lepton_sphere (Failed)
        274 - AtomicPairStyle:lj_cut_sphere (Failed)
        275 - AtomicPairStyle:lj_expand_sphere (Failed)
        277 - AtomicPairStyle:meam (Failed)
        280 - AtomicPairStyle:meam_spline (Failed)
        281 - AtomicPairStyle:meam_sw_spline (Failed)
        284 - AtomicPairStyle:reaxff-acks2 (Failed)
        285 - AtomicPairStyle:reaxff-acks2_efield (Failed)
        286 - AtomicPairStyle:reaxff (Failed)
        287 - AtomicPairStyle:reaxff_lgvdw (Failed)
        288 - AtomicPairStyle:reaxff_noqeq (Failed)
        289 - AtomicPairStyle:reaxff_tabulate (Failed)
        290 - AtomicPairStyle:reaxff_tabulate_flag (Failed)
        307 - ManybodyPairStyle:comb (Failed)
        315 - ManybodyPairStyle:ilp-graphene-hbn (Failed)
        316 - ManybodyPairStyle:ilp-graphene-hbn_notaper (Failed)
        320 - ManybodyPairStyle:lcbop (Failed)
        329 - ManybodyPairStyle:pace_product (Failed)
        330 - ManybodyPairStyle:pace_recursive (Failed)
        345 - ManybodyPairStyle:tersoff (Failed)
        350 - ManybodyPairStyle:tersoff_shift (Failed)
        360 - BondStyle:gaussian (Subprocess aborted)
        363 - BondStyle:harmonic_restrain (Failed)
        399 - KSpaceStyle:ewald_conp_charge (Failed)
        409 - KSpaceStyle:pppm_ad (Failed)
        410 - KSpaceStyle:pppm_cg (Failed)
        412 - KSpaceStyle:pppm_cg_tiled (Failed)
        421 - KSpaceStyle:pppm_disp_tip4p (Failed)
        429 - KSpaceStyle:pppm_tip4p (Failed)
        434 - KSpaceStyle:scafacos_direct (Failed)
        435 - KSpaceStyle:scafacos_ewald (Failed)
        436 - KSpaceStyle:scafacos_fmm (Failed)
        437 - KSpaceStyle:scafacos_fmm_tuned (Failed)
        438 - KSpaceStyle:scafacos_p2nfft (Failed)
        442 - FixTimestep:addforce_const (Failed)
        444 - FixTimestep:addtorque_const (Failed)
        451 - FixTimestep:deform_tri (Failed)
        468 - FixTimestep:nph (Failed)
        469 - FixTimestep:nph_sphere (Failed)
        471 - FixTimestep:npt_iso (Failed)
        472 - FixTimestep:npt_sphere_aniso (Failed)
        473 - FixTimestep:npt_sphere_iso (Failed)
        482 - FixTimestep:nvt-psllod (Failed)
        483 - FixTimestep:nvt-sllod (Failed)
        501 - FixTimestep:rigid_npt_small (Failed)
        515 - FixTimestep:smd_couple (Failed)
        523 - FixTimestep:temp_csld (Failed)
        531 - FixTimestep:wall_morse_const (Failed)
        533 - FixTimestep:wall_table_linear (Failed)
        534 - FixTimestep:wall_table_spline (Failed)
        549 - DihedralStyle:table_cut_linear (Failed)
        551 - DihedralStyle:table_linear (Failed)
        552 - DihedralStyle:table_spline (Failed)
        562 - ImproperStyle:inversion_harmonic (Failed)
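
An individual failure can be re-run from the build directory with ctest for
closer inspection, e.g. (the -R pattern below matches the reaxff tests from
the list above):

cd ${LAMMPS_WORKDIR}/build
export OMP_NUM_THREADS=2
ctest -R "AtomicPairStyle:reaxff" --output-on-failure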

Notes

  • If your calculation can be accelerated by the "intel" package, this build may be faster than the GCC build. If it can't, please consider using the GCC build instead.
  • If you employ the "intel" package, please disable libhcoll (see the sketch after this list).
    • You don't need to worry about this if you use the module to set up the LAMMPS environment; I_MPI_COLL_EXTERNAL=no is defined in the module file (lammps/2023-Aug2-intel).
  • Acceleration by the "intel" package does not work with the Intel Classic Compilers if "-xHost" is specified. In the procedure above, we therefore replaced that optimization option manually.
    • With the new oneAPI compilers (icx, icpx, ifx), "-xHost" works. However, the performance is a bit worse than the classic compilers with "-march=core-avx2" (in the case of the rhodo benchmark).
  • There seem to be some problems with the new oneAPI compilers (icx, icpx, ifx):
    • ScaFaCoS in this LAMMPS version does not support the new compilers (it failed to build).
    • There is a performance issue with the "intel" package, as described above.
  • Open MPI also works fine: the segmentation fault in SimpleCommands disappears, and the other test results are almost the same as with Intel MPI. However, the performance of the Open MPI build is a bit less stable than that of the Intel MPI build.
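
A minimal usage sketch, assuming the module name from the notes above and a
hypothetical input file in.lammps (scheduler directives and the rank count
are site- and job-dependent):

#!/bin/sh
module -s load lammps/2023-Aug2-intel  # also sets I_MPI_COLL_EXTERNAL=no

export OMP_NUM_THREADS=1
mpiexec -n 16 lmp_rccs -sf intel -in in.lammps  # -sf intel enables "intel" package styles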