Submit Gaussian Jobs (g16sub, g09sub)

(Last update: Nov 19, 2024)

On the RCCS system, Gaussian jobs can be submitted with the dedicated commands g16sub and g09sub.
The quick start guide also has a page about g16sub and g09sub; please check it as well.

Basic Usage

Prepare a Gaussian input file (mol.gjf) and transfer it somewhere under your home directory. Then move to the directory containing the input file and run the "g16sub mol.gjf" command. The amount of memory and the number of CPU cores (%mem, %nprocshared, %cpu) are added automatically by g16sub/g09sub; any %mem, %nprocshared, and %cpu lines already in the input file are ignored and usually overwritten. Other header lines such as %chk and %oldchk are not modified by g16sub/g09sub.
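For illustration, a minimal input file might look like the following (a hypothetical HF/STO-3G single-point calculation on water; the molecule, method, and file names are just examples). Note that no %mem, %nprocshared, or %cpu line is needed, while lines such as %chk are kept as you wrote them. Gaussian input files should end with a blank line.

%chk=mol.chk
# HF/STO-3G

water single point (hypothetical example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200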

[user@ccfep somewhere]$ g16sub mol.gjf

(Please do not type the "$" or anything before it; type only the "g16sub mol.gjf" part.)

By default, 8 CPU cores, 9.6 GB of memory (%mem), and a wall-time limit of 72 hours are requested. The available memory is determined by the number of CPU cores, so please increase the number of cores if you need a larger amount of memory.
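For example, assuming the default allocation of 1.2 GB per core (9.6 GB for 8 cores, as shown in the queue table below), requesting 16 cores should give roughly 19.2 GB of %mem:

$ g16sub -np 16 mol.gjf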

Example

When you run the "g16sub" command, output like the following is displayed and the command itself returns almost immediately (the *** parts depend on where you placed the input file). If the Gaussian job is submitted successfully, a message like 4008669.ccpbs1 is shown (4008669 is the job ID); otherwise, error messages are shown.

$ g16sub mol.gjf
QUEUE detail
------------------------------------------------------------------------------
QUEUE(MACH)  Jobtype  MaxMem     DefMem     TimLim     DefCPUs(Min-Max)
------------------------------------------------------------------------------
   H(  ap)           1.8GB      1.2GB      72:00:00   8(1-128)
------------------------------------------------------------------------------
JOB detail
======================================================================
MOL name(s)    : mol
INP file(s)    : mol.gjf.ap
OUT file(s)    : mol.out
Current dir    : /lustre/home/users/***
SCRATCH dir    : /lwork/users/${USER}/${PBS_JOBID}/gaussian

QUEUE          : H
Memory         : 9.6GB
Time limit     : 72:00:00
Job script     : /lustre/home/users/***/H-1571896.sh
Input modified : y
======================================================================

/usr/local/bin/jsub -q H /lustre/home/users/***/H-1571896.sh

4008669.ccpbs1
$

In this example, 8 CPU cores and a maximum walltime of 72 hours are requested.

Options of g16sub/g09sub

The following options are available for g16sub; g09sub also supports most of them.

--help

Show the list of available options. There are some options not described on this page (they are usually not very useful, though).
 

-np (# of CPU cores)

Specify the number of CPU cores. If not specified, 8 cores are used. Valid values are 1-64 or 128. The amount of memory (%mem) is determined from this value and added to the input file.
 

--walltime (time)

Specify the maximum walltime of the job in (hours):(minutes):(seconds) format. The default value is 72:00:00 (72 hours).
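For example, to reduce the limit to 24 hours (the value and the input file name are just examples):

$ g16sub --walltime 24:00:00 mol.gjf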
 

-j (jobtype)

Specify the jobtype. Valid values are "core", "vnode", "gpu", and "largemem". You normally only need to specify "largemem", since the other values are determined automatically from the -np and -ng values: if ng > 0 the jobtype is "gpu", if np is 1-63 it is "core", and if np is 64 or 128 it is "vnode". Specify "-j largemem" if you want to use a large-memory node.
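For example, to submit to a large-memory node (the input file name is just an example; other options such as -np can be combined as usual):

$ g16sub -j largemem mol.gjf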
 

-N

Use an alternative scratch space. The default scratch directory is /lwork/users/${USER}/${PBS_JOBID}, on the local disk of the computation node; /lwork is fast but its capacity is limited. With this option, the scratch space is created under /gwork/users/${USER} instead. /gwork is a huge space shared among all computation nodes and login servers, but its I/O performance is not as good as /lwork. Use this option when you need a large scratch space.
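For example, to place the scratch files under /gwork/users/${USER} (the input file name is just an example):

$ g16sub -N mol.gjf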
 

-M

Email is sent to you when the job begins and when it finishes.
 

-rev (revision name)

You can choose the Gaussian 16 revision. The following revisions are available.

  • g16b01  (Gaussian 16 Revision B.01)
  • g16c01  (Gaussian 16 Revision C.01)
  • g16c02  (Gaussian 16 Revision C.02; default)
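For example, to use Revision B.01 instead of the default C.02 (the input file name is just an example):

$ g16sub -rev g16b01 mol.gjf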

 

-P

Only prepare the job. The input file and the job script (H-*****.sh file) are created, but the job is not submitted.
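For example, you can prepare the script, inspect or edit it, and then submit it yourself with jsub (the script name here is taken from the example output above; the actual name differs for each run):

$ g16sub -P mol.gjf
$ jsub -q H H-1571896.sh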
 

-ng (# of GPU)

Specify number of GPUs. The max value is 8. Also, (# of CPU cores)/(# of GPUs) must be less than or equal to 16.
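For example, a job with 2 GPUs and 32 CPU cores satisfies the constraint (32/2 = 16 ≤ 16), and the jobtype becomes "gpu" automatically; the input file name is just an example:

$ g16sub -ng 2 -np 32 mol.gjf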
 

--name (job name)
-C (job name)

Add a name to the job. Job names are shown in the output of the jobinfo command. -C is an alias for --name; they behave identically.
 

--autoname
-X

Name the job automatically. The name is created by removing the directory path and file extension from the input file name. -X is an alias for --autoname; they behave identically.
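For example, for an input file named ch3cl.gjf, the following two commands should give the job the same name, "ch3cl" (assuming --autoname derives the name as described above):

$ g16sub --name ch3cl ch3cl.gjf
$ g16sub -X ch3cl.gjf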
 

-O

Overwrite the existing file.
 

-q (queue name)

Specify the queue name. You usually do not need to specify this (the default is "-q H").
 

For example, the following two lines have the same meaning.

$ g16sub ch3cl.gjf
$ g16sub -q H -j core -rev g16c02 -np 8 -walltime 72:00:00 ch3cl.gjf

To run with only 4 CPU cores, you can do the following.

$ g16sub -np 4 ch3cl.gjf

Tips

Scratch Directory

The default scratch directory is /lwork/users/${USER}/${PBS_JOBID}, which is on the local disk of the computation node. The available /lwork space is proportional to the requested number of CPU cores (11.9 GB/core), so increase the number of CPU cores if you need more scratch space. You can also use /gwork as the scratch space by adding the "-N" option. /gwork has a much larger capacity than /lwork and there is no limit on the size you can use there, but its I/O performance is not as good as /lwork.

Data in /lwork is removed immediately after the job terminates. Data in /gwork lives longer than data in /lwork, but it is not permanent either. Please copy anything you need from the scratch directory to your home directory.

The actual scratch directory (SCRATCH) is not yet determined when g16sub is executed, so g16sub prints the scratch directory path in the following template form:

SCRATCH dir: /lwork/users/${USER}/${PBS_JOBID}/gaussian

${USER} and ${PBS_JOBID} are replaced by your user ID and the job ID (4008669.ccpbs1 in the example above), respectively. The actual scratch directory path can be checked in the Gaussian output file (mol.out in the example above).
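As a concrete illustration, with a hypothetical user name "zzz" and the job ID from the example above, the actual scratch directory would be:

/lwork/users/zzz/4008669.ccpbs1/gaussian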

Number of Cores

  • (This is a hint for jobtype=core jobs. Inter-node parallel runs are not available on the RCCS system, due to the lack of TCP Linda.)
  • More CPU cores does not always mean a faster run; too many CPU cores can even reduce performance.
    • In terms of cost-performance, a smaller number of cores is favorable.
    • With too few CPU cores, however, your job will take quite a long time.
    • The optimal number strongly depends on the calculation, your priorities, and many other factors.
  • If you run into an insufficient-memory (or insufficient /lwork space) problem, please increase the number of CPU cores.
    • (The error messages are sometimes difficult to interpret; still, increasing the number of cores may solve the problem.)

Submit without g16sub/g09sub

In some special cases, you may need to run Gaussian jobs without the support of g16sub/g09sub. You can prepare a job script template in one of the following ways.

  • Use a sample job script in the /apl/gaussian/16c02/samples directory (for Gaussian 16 C.02) as a template.
  • Run g16sub with the -P option and use the generated job script (H-*****.sh) as your template.

In either case, you need to submit the Gaussian job with the "jsub" command. If you want to keep files from the scratch directory (e.g., rwf files), you may need to add a cp command to the job script, or use /gwork/users/$USER instead of /lwork/users/$USER as your working directory. Files under /gwork are eventually deleted automatically, but not as quickly as those under /lwork, so there may be time to copy them manually after the job terminates. (Please keep scheduled maintenance days in mind; files in /gwork may be removed during maintenance.)
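As a rough sketch (not taken from the actual sample scripts, and assuming your script uses the same /lwork/.../gaussian layout shown above), a line like the following could be added near the end of the job script, after the Gaussian run finishes but before the job exits:

# Hypothetical example: copy the rwf file(s) from the node-local scratch back to
# the directory the job was submitted from (PBS_O_WORKDIR) before /lwork is cleaned up.
cp /lwork/users/${USER}/${PBS_JOBID}/gaussian/*.rwf ${PBS_O_WORKDIR}/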