
RCCS Reference Manual

Maintenance: Mar 1, 9:00 - 19:00
Login to the frontend nodes and updates of user information may be disabled during this period.

[Last update] 2020-Sep-7

Login to RCCS

  • Frontend nodes ('ccfep') can be accessed using ssh with public key authentication.
  • GPU equipped frontend nodes ('ccgpup', 'ccgpuv') can be accessed from 'ccfep'.
  • All computers stop from 9:00 to 19:00 on the first Monday of each month for maintenance. The maintenance time might be extended.
  • Access to the frontend nodes is allowed only from IPv4 addresses assigned to Japan or from other registered IP addresses. See Application for SSH connection from outside Japan for details.

Register SSH Public Key and Password for Web

Please prepare an ssh public/private key pair. If you do not know the procedure, please search the internet for instructions.
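For example, a key pair can be generated with the standard ssh-keygen tool (the file name id_rccs is just an example; which key types RCCS accepts is not stated in this manual, so check the registration page if a key is rejected):

```shell
# Generate an ed25519 key pair (you will be asked for a passphrase).
ssh-keygen -t ed25519 -f ~/.ssh/id_rccs
# Paste the contents of the .pub file into the "Public key" field:
cat ~/.ssh/id_rccs.pub
```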

First registration / Missing your username or password for web

  1. Open the page to request a registration mail in your web browser.
  2. Enter the e-mail address written in your application, then press the "E-mail new password" button.
  3. After you receive an email from RCCS, open the URL in the mail to log in with your web browser.
  4. Enter your new password in "Password" and "Confirm password".
  5. Paste your public key in "Public key".
  6. Press "Save" button.

Using your username and password for web

  1. Open the login page in your web browser, enter your username and password, and press the "Log in" button.
  2. Press "My account" which is located in top right corner.
  3. Press "Edit" tab.
  4. To change your password, enter the current password and the new password.
  5. Paste your public key.
  6. Press "Save" button.

Login Shell

  • /bin/csh (tcsh), /bin/bash and /bin/zsh are available.
  • You can select your login shell on the same page as the ssh public key. It takes some time for the change of login shell to take effect.
  • You can customize .login or .cshrc in your home directory, but be careful when doing so.

Whole System of RCCS

  • The whole system of RCCS is shown in the figure below.
  • The interactive nodes are ccfep (8 nodes), ccgpup, and ccgpuv. You can build and debug applications on them.
  • ccfep (8 nodes) is the login node. You cannot log in to the other interactive nodes directly from the internet.
  • There are four kinds of disks, namely /work, /ramd, /home and /save. Access speed and data lifetime are different.
  • Width of lines between disks and computers represents transfer speed. Wider is faster.
  • Disk /work is temporary space for your calculation. All files will be DELETED after the completion of your job.
  • Disk /ramd is RAM disk of each node. The size is about 176 GB or 752 GB, depending on node type. The sum of memory used in the job and RAM disk is controlled by the queuing system.
  • There are no differences between /home and /save other than the names. (Formerly, there was a difference in backup policy.)
  • Use of /tmp, /var/tmp, or /dev/shm is not allowed. Jobs using those directories will be killed by the administrator.

RCCS Resources

CPU Points and Queue Factor

CPU points are consumed when you use CPUs or GPUs.
The queue factor of each system is determined as follows.

System               CPU Queue Factor                 GPU Queue Factor
                     42 points / (1 node * 1 hour)    -
                     28 points / (1 node * 1 hour)    -
                     1.0 point / (1 core * 1 hour)    -
(jobtype=gpu, gpup)  1.0 point / (1 core * 1 hour)    10 points / (1 GPU * 1 hour)
                     1.0 point / (1 core * 1 hour)    15 points / (1 GPU * 1 hour)
  • On ccfep and ccgpuv, CPU points will be consumed according to CPU time spent.
  • On ccgpup, CPU points won't be required.
  • On computation nodes, CPU points are calculated from the elapsed time of processes.
  • We don't charge money for the supercomputer usage.
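As an illustration of how the queue factors above translate into consumed points: a 24-hour, 1-node job on a system with a 42 points/(node*hour) factor costs 42 * 1 * 24 = 1008 points, and a 12-hour, 6-core job at 1.0 point/(core*hour) costs 72 points.

```shell
# Hypothetical point costs derived from the queue factors above.
node_job=$((42 * 1 * 24))   # 1 node * 24 hours at 42 points/(node*hour)
core_job=$((1 * 6 * 12))    # 6 cores * 12 hours at 1.0 point/(core*hour)
echo "$node_job $core_job"  # 1008 72
```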

If you want to know your current CPU points, run "showlim -c".

Checking Resources

  • CPU points used by batch jobs and total disk usage (corresponding to "showlim -c" and "showlim -d", respectively) are updated every 10 minutes.
  • CPU points used on interactive nodes are updated at 5:15.
  • If CPU points run out, all running jobs of your group will be killed and further job submissions won't be allowed.
  • If your disk usage exceeds the limit, new job submissions will be rejected.

Individual Limitation of Resources

Access the "Limiting Resources Page" with your web browser.

  • Only the representative user can limit the maximum resources of each member.
  • Normal users can only view the values of the maximum resources.
  • The maximum available number of CPUs, points, and amount of disk space can be limited.

Queueing System

Overview of Queueing System

Queue Classes

Queue class for all users

System  Class  Node  Memory       Limitation for a job
cclx    PN     ccnf  18.8GB/core  1-10 nodes (40-400 cores)
cclx    PN           4.4GB/core   1-32 nodes (40-1280 cores)
cclx    PN           4.8GB/core   1-36 cores
cclx    PN     ccca  7.3GB/core   1-48 gpus,
(jobtype=gpu, gpup)               2-24 cores/node (2 gpus/node),
                                  1-12 cores/node (1 gpu/node)
cclx    PN     ccca  7.3GB/core   1-8 gpus, 1-3 cores/gpu
                                  (multinode jobs not allowed)

The numbers of cores and jobs available per group are limited according to the
points assigned to the group (tier boundaries: 3,000,000 / 1,000,000 /
300,000 / 100,000 points).
  • Max elapsed time for jobtype=core jobs with ncpus>18 must be <= 1 week (168 hours).
  • Max elapsed time for jobtype=gpuv jobs must be <= 1 week (168 hours).
  • Max elapsed time for the other jobs is until the next maintenance. Only half of the nodes can be assigned to jobs longer than 1 week.
  • A job which uses <= 526 nodes should be assigned within the same Omni-Path group.
  • 272 nodes of Node "ccnn" are used only for 1-4 node jobs.
  • Jobtype "small" jobs whose walltime is less than 1 day might use Node "ccnf".
  • Jobtype "core" jobs whose walltime is less than 1 day and which request 4 to 18 cores might use Node "ccca".
  • GPU Direct peer-to-peer communication is available on any ccca node (jobtype = gpup, gpuv, or gpu).
    • jobtype=gpup jobs run on NVIDIA Tesla P100 equipped nodes.
    • jobtype=gpuv jobs run on NVIDIA Tesla V100 equipped nodes.
    • jobtype=gpu jobs run on either P100- or V100-equipped nodes.

Special queue class

The settings of the special queue class are as follows.

System  Class     Wall Time  Memory      # of cores per job  # of cores per group
cclx    (occupy)  7 days     4.4GB/core  ask us              allowed number of cores

Show Job Status

ccfep% jobinfo [-c] [-s] [-l|-m|-w [-g|-a]] [-n] -h cclx


ccfep% jobinfo [-c] [-s] [-l|-m|-w [-g|-a]] [-n] -q (PN|PNR[0-9])

  • -c .... Use the latest information instead of the cached one.
  • -s .... Show summary.
  • -m .... Show also memory information.
  • -w .... Show also working directory where job was submitted.
  • -l .... Show list of jobs.
  • -g .... Show also jobs of the group members.
  • -a .... Show jobs of all users.
  • -n .... Show node information.
  • -h .... Specify host.
  • -q .... Specify queue class.


e.g. Summary View

Display numbers of running/queued/held jobs, numbers of CPU cores/GPUs running/queued/held.

ccfep% jobinfo -h cclx

If you are not using special class queues, the following command is equivalent to the one above.

ccfep% jobinfo -q PN

You can add the "-c" option to show the latest data (otherwise cached data is displayed), although it takes longer than the normal usage.

e.g. See Status of Jobs

ccfep% jobinfo -h cclx -l

If you don't use special queues, example above is the same as:

ccfep% jobinfo -q PN -l

In case you want to know the status of jobs just submitted, the following command would be useful.

ccfep% jobinfo -q PN -c -l


e.g. Status of your group jobs

If you add the "-g" and "-l" options, jobs of your group members will also be shown.

ccfep% jobinfo -q PN -g -l -c

If you add "-a" instead of "-g", all the jobs will be shown (other users' information is hidden, though).

e.g. Show working directory of jobs

If you forgot where you submitted the jobs, the "-w" option can be useful.
Working directory (PBS_O_WORKDIR) of jobs will also be shown.

ccfep% jobinfo -q PN -w

(You can add "-c" to the example above. )


Submit Your Jobs

Description of the header part

You have to write a script file (csh, bash, or zsh) to submit your job. An example for each system follows.

  • csh, bash (/bin/sh), zsh can be used for the job submission script.
  • lines started with #PBS are common, regardless of the shell type.
  • Sample scripts can be found in ccfep:/local/apl/lx/(application name)/samples/.
Meaning                        Header part                  Importance
The first line                 (csh)  #!/bin/csh -f        (choose one)
                               (bash) #!/bin/sh
                               (zsh)  #!/bin/zsh
Needed number of CPUs          #PBS -l select=[Nnode:]ncpus=Ncore:mpiprocs=Nproc:ompthreads=Nthread:jobtype=Jobtype[:ngpus=Ngpu]   Required
Wall time                      #PBS -l walltime=72:00:00    Required
Mail at start and end          #PBS -m abe                  Optional
Prevent rerun                  #PBS -r n                    Optional
Change to submitted directory  cd ${PBS_O_WORKDIR}          Recommended
  • Nnode: # of physical node
  • Ncore: # of reserved cores per physical node
  • Nproc: # of processes per node
  • Nthread: # of threads per process
  • Jobtype: large, small, core, gpu, gpup, gpuv
    • large: 18.8GB / core
    • small: 4.4GB / core
    • core: job for less than 18 cores
    • gpu, gpup, gpuv: GPU jobs
  • Ngpu: # of GPUs
Example of "Needed Number of CPU": Case of 80 mpi processes on 2 nodes
#PBS -l select=2:ncpus=40:mpiprocs=40:ompthreads=1:jobtype=small
Example of "Needed Number of CPU": Case of GPGPU
#PBS -l select=1:ncpus=6:mpiprocs=1:ompthreads=1:jobtype=gpu:ngpus=1

You can find some other examples in this page.
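Putting the header parts together, a minimal complete job script might look like the following sketch (the resource numbers and the program name ./a.out are placeholders, not values from this manual):

```shell
#!/bin/sh
#PBS -l select=1:ncpus=4:mpiprocs=4:ompthreads=1:jobtype=core
#PBS -l walltime=01:00:00
#PBS -m abe
#PBS -r n

# Move to the directory where the job was submitted.
cd ${PBS_O_WORKDIR}

# Run a (hypothetical) MPI program with 4 processes.
mpirun -np 4 ./a.out
```

Such a script would then be submitted with jsub, e.g. "jsub -q PN script.sh".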

Job submission

After you write the script, type following command to submit.

ccfep% jsub -q (PN|PNR[0-9]) [-g XXX] [-W depend=(afterok|afterany):JOBID1[:JOBID2...]] script.csh

If you want to submit your jobs under the 'Supercomputing Consortium for Computational Materials Science' group, use the -g option. (XXX is the name of that group.)
You can describe dependencies between jobs using the -W option.
If a job should run only after the dependent job exits successfully, use the keyword "afterok".
If a job should run after the dependent job finishes regardless of its exit status (including abnormal exit), use the keyword "afterany".
Sample script files exist in ccfep:/local/apl/lx/*/samples/.

Sequential Jobs (step jobs)

You can easily submit sequential jobs by using the --step or --stepany option.

step job 1 (afterok): run a series of jobs sequentially. If a job exits abnormally, the following jobs will be deleted.

ccfep% jsub -q (PN|PNR[0-9]) [-g XXX] --step [-W depend=(afterok|afterany):JOBID1[:JOBID2...]] script.csh script2.csh ...

step job 2 (afterany): run a series of jobs sequentially. When a job finishes, the next job will run regardless of the exit status.

ccfep% jsub -q (PN|PNR[0-9]) [-g XXX] --stepany [-W depend=(afterok|afterany):JOBID1[:JOBID2...]] script.csh script2.csh ...


ccfep% jsub -q PN --stepany job1.csh job2.csh job3.csh


Delete Jobs

First of all, get the ID of the target jobs using the "jobinfo" command. Then run the following command, where RequestID is the job ID.

ccfep% jdel [-h cclx] RequestID

Hold/Release Jobs

You can prevent a queued job from running (in other words, hold it) with the following command. (Use jobinfo to get the target job ID.)

ccfep% jhold [-h cclx] RequestID

You can release the restriction using the jrls command.

ccfep% jrls [-h cclx] RequestID

Get Information of Finished Jobs

You can get information about finished jobs, such as finish time, elapsed time, and parallel efficiency, with the jobeff command.

ccfep% jobeff -h (cclx|cckf) [-d "last_n_day"] [-a] [-o item1[,item2[,...]]]

You can customize the items displayed with the -o option. Available keywords are:

  • queue: Queue name
  • jobid: Job ID
  • user: User name
  • group: Group name
  • node: Job head node name
  • Node: All node names
  • start: Start time(YYYY/MM/DD HH:MM)
  • Start: Start time(YYYY/MM/DD HH:MM:SS)
  • finish: Finish time(YYYY/MM/DD HH:MM)
  • Finish: Finish time(YYYY/MM/DD HH:MM:SS)
  • elaps: Elapsed time
  • cputime: Total CPU time
  • used_memory: Used memory size
  • ncpu: Number of reserved cpu cores
  • ngpu: Number of reserved gpus
  • nproc: Number of MPI processes
  • nsmp: Number of threads per process
  • peff: Parallel efficiency of the job
  • attention: Flag marking jobs with bad efficiency
  • command: Job name
  • point: CPU points
  • all: show all the major items


e.g. 1: Show job ID, user name, start and end datetime, and CPU points of jobs finished in the last 10 days.

ccfep% jobeff -h cclx -d 10 -o jobid,user,start,finish,point

e.g. 2: Show job ID, end datetime, CPU points, and working directory of jobs in FY2020.

ccfep% jobeff -h cclx -y 2020 -o jobid,finish,point,Workdir

e.g. 3: Show major parameters of jobs finished in these two days.

(The "all" option does not show CPU points. Please use the example above to show CPU points of jobs.)

ccfep% jobeff -h cclx -d 2 -o all


Build and Run

Command to Build

System Language Non-Parallel Auto-Parallel OpenMP MPI
Fortran ifort ifort -parallel ifort -qopenmp mpiifort
C icc icc -parallel icc -qopenmp mpiicc
C++ icpc icpc -parallel icpc -qopenmp mpiicpc
Fortran pgfortran pgfortran -Mconcur pgfortran -mp  
C pgcc pgcc -Mconcur pgcc -mp  
C++ pgcpp pgcpp -Mconcur pgcpp -mp  

Please also check the package program list page for the available libraries and MPI environments.

Running Parallel Program


Please specify the correct numbers of MPI processes and OpenMP threads in the "mpiprocs" and "ompthreads" values of the select line in your job script.
e.g. 12 CPUs, 4 MPI processes and 3 OpenMP threads case -> ncpus=12:mpiprocs=4:ompthreads=3
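As the example suggests, the values should satisfy ncpus = mpiprocs * ompthreads (per node); a quick sanity check:

```shell
# ncpus should equal mpiprocs * ompthreads per node.
mpiprocs=4
ompthreads=3
ncpus=$((mpiprocs * ompthreads))
echo "$ncpus"   # 12
```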

Please also check the sample job scripts provided by RCCS (located at /local/apl/(system name)/(software name)/samples ).

Number of threads specification for OpenMP jobs

When a job is submitted with "jsub", the value specified by "ompthreads" is used as the number of OpenMP threads.
You can override this value by setting the OMP_NUM_THREADS environment variable manually.
If you run a command outside of "jsub" (on a frontend node, for example), you should set the OMP_NUM_THREADS environment variable yourself.
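For example, to run an OpenMP program directly on a frontend node (my_openmp_program is a hypothetical name):

```shell
# Outside of jsub, set the thread count yourself; OpenMP runtimes
# read the OMP_NUM_THREADS environment variable.
export OMP_NUM_THREADS=4
printenv OMP_NUM_THREADS     # 4
# ./my_openmp_program        # would now run with 4 threads
```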

Host list specification for MPI

The list of hosts can be found in the file whose name is given by the PBS_NODEFILE environment variable.
The MPI environments installed by RCCS (Intel MPI and OpenMPI) automatically honor this environment variable.
Therefore, you can omit the machine file specification when invoking the "mpirun" command.
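As a sketch of what the machine file looks like, the snippet below builds a stand-in file (PBS_NODEFILE is only set by the queueing system inside a job; the hostnames here are made up) and counts slots and unique hosts:

```shell
# Inside a job, $PBS_NODEFILE names a file with one hostname per MPI slot.
# Here we fabricate one for illustration.
PBS_NODEFILE=$(mktemp)
printf 'ccnn001\nccnn001\nccnn002\nccnn002\n' > "$PBS_NODEFILE"

nslots=$(($(wc -l < "$PBS_NODEFILE")))            # total MPI slots
nhosts=$(($(sort -u "$PBS_NODEFILE" | wc -l)))    # distinct hosts
echo "$nslots $nhosts"   # 4 2
```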

e.g. 4 MPI * 3 OpenMP hybrid parallel

#PBS -l select=1:ncpus=12:mpiprocs=4:ompthreads=3:jobtype=core
#PBS -l walltime=24:00:00
mpirun -np 4 /some/where/my/program options

  • The OpenMP thread count is determined from the "ompthreads" value in the select line. (This is equivalent to OMP_NUM_THREADS=3.)
  • The MPI machine file can be omitted when invoking mpirun (if that mpirun was installed by RCCS).
    • You can also pass the file named by the PBS_NODEFILE environment variable as the machine file explicitly; this does not change anything.


Development Tools

Some tools can be used on the command line, but the X Window (GUI) versions are usually more convenient.

Intel Inspector XE

  • Memory / Thread inspector
  • (GUI command) inspxe-gui
  • (CUI command) inspxe-cl

Intel Vtune Amplifier XE

  • Hotspot analyzer
  • (GUI command) amplxe-gui
  • (CUI command) amplxe-cl

Allinea Forge

  • Debugger
  • (GUI command) ddt

Environment Modules

Environment Modules ("module" command) is available from July 2018. See this page for detailed information.

Package Programs

Requesting installation of software

Please fill in the following items and send them to rccs-admin[at]

  • Software name and version that you want to use
  • Overview of the software and its features
  • Why installation on the RCCS supercomputers is necessary
  • URL of the software development site

Special Commands of RCCS

Related Queueing System

You can find descriptions about "jobinfo", "jsub", "jdel", "jhold", "jrls", and "jobeff" above in this page.

Submitting Gaussian Jobs

Case of Gaussian 16

ccfep% g16sub [-q "QUE_NAME"] [-j "jobtype"] [-g "XXX"] [-walltime "hh:mm:ss"] [-noedit] \
              [-rev "g16xxx"] [-np "ncpus"] [-ngpus "n"] [-mem "size"] [-save] [-mail] input_files

  • The command "g09sub" is also available for Gaussian 09.
  • The default walltime is 72 hours. Please set the expected calculation time plus some margin; a shorter walltime can also help a job start earlier.
  • If you want to know the meaning of options and examples, please just type "g16sub".
  • %mem, %nproc, %cpu in the original input files will be overwritten by g16sub / g09sub. Please specify those values with -mem, -np etc. command line options.
    • You can prevent this automatic overwrite by -noedit option. But this is not recommended.
    • Safe maximum %mem value will be automatically assigned by g09sub/g16sub. You may not need to set that manually unless you need to reduce the amount of memory.

example case: g16c01, timelimit 24 hours, 12 cores job (Gaussian input file name is my_gaussian_input.gjf)

ccfep% g16sub -rev g16c01 -walltime 24:00:00 -np 12 my_gaussian_input.gjf


Job wait time estimator (waitest)

Job start time can be estimated by "waitest" command on ccfep. Please read this page (login required) for details about "waitest".

Showing Used Resources

ccfep% showlim (-cpu|-c|-disk|-d) [-m]

  • -cpu|-c: Show used points and the limit.
  • -disk|-d: Show current disk usage and the limit.
  • -m: Show the values for each member.

example 1: show CPU points (approved and used) of YOU and YOUR WHOLE GROUP

ccfep% showlim -c

example 2: show CPU points (approved and used) of YOU, YOUR GROUP MEMBERS, and WHOLE GROUP

ccfep% showlim -c -m

example 3: show disk status (approved and used)  of YOU, YOUR GROUP MEMBERS, and WHOLE GROUP

ccfep% showlim -d -m


Utility Commands for Batch Jobs

Limit the walltime of command

/local/apl/lx/ps_walltime -d duration -- command [arguments...]

  • -d duration: duration of time the command is allowed to run, described as in "-d 72:00:00".
  • The command will be killed after the specified duration.
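ps_walltime is an RCCS-specific tool; for comparison only, the standard coreutils "timeout" command gives a similar effect (this is a generic sketch, not an RCCS recommendation):

```shell
# Kill the command when the time limit expires ("sleep 2" stands in
# for a long-running command; 1s is the limit here).
timeout 1s sleep 2
echo $?   # 124 (timeout's exit status when the limit was hit)
```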

Showing statistics of the current job

  • Shows statistics similar to those in the mail notification sent at the end of your job (when requested in the #PBS header lines).
  • The statistics reflect the state of the job at the time the command is executed.

Output Items

  • resources_used.cpupercent: CPU usage efficiency. The maximum value is the number of threads multiplied by 100. An invalid value may be shown when multiple nodes are used.
  • resources_used.cput: Sum of the actual CPU time consumed on each CPU.
  • resources_used.mem: Amount of memory actually used.
  • resources_used.ncpus: Number of CPUs actually used.
  • resources_used.walltime: Real (wall-clock) time of the calculation.
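To illustrate how these values relate (an interpretation for illustration, not a formula stated in this manual), a rough CPU efficiency can be estimated as cput divided by walltime times ncpus; the numbers below are hypothetical:

```shell
# Rough efficiency estimate: 100 * cput / (walltime * ncpus), in percent.
cput=230000      # hypothetical resources_used.cput (seconds)
walltime=60000   # hypothetical resources_used.walltime (seconds)
ncpus=4          # hypothetical resources_used.ncpus
eff=$((100 * cput / (walltime * ncpus)))
echo "$eff"   # 95
```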

Manipulation of Files on Computation Nodes

You can access local files on computation nodes, which cannot be directly accessed from the frontend nodes (ccfep), via the "remsh" command.

remsh hostname command options

  • hostname: Hostname such as cccc???, cccca???, ccnn???, or ccnf???.
  • command: Command to be executed on the node. Acceptable commands are: ls, cat, cp, and find.
  • options: Command-line arguments passed to the command.

Example: how to access ramdisk of a computation node, ccnnXXX, by user "zzz"

remsh ccnnXXX ls /ramd/users/zzz

remsh ccnnXXX cat /ramd/users/zzz/99999/fort.10 | tail

Host names and job IDs of your jobs can be found in the output of the "jobinfo" command. (See above for the usage.)