Submitting Jobs


You use bsub to submit a job to the LSF batch system. The general syntax is

bsub [bsub_options] executable [executable_options]

Extensive documentation on the bsub command can be found in the LSF manual "Running Jobs with Platform LSF".

Some common parameters are listed below. More specialised options are mentioned in the appropriate sections.

-n <processors>
    The number of processors you need. (required)

-R 'span[ptile=64]' / -R 'span[hosts=1]'
    Request to always get 64 processors together on one node, i.e. full nodes.
    If you use 64 processors or fewer, use -R 'span[hosts=1]' instead to ensure that all processes run on one node.

-W <minutes> or -W <hours:minutes>
    The run time limit of your job. (required)

-q <queue>
    The queue your job should run in (default: short). (required)

-app <profile>
    Use an application profile to request a predefined amount of memory for each process (default: 300M).
    Alternatively, use -M <memory in MB>.

-i <filename>
    Specifies additional input data. (optional)

-e <filename>.err
    Directs stderr to a separate error log file. (optional)

-o <filename>.log
    Directs stdout, stderr and the job summary to the log file. (optional)

-N
    In addition to -o, sends the job summary separately to the user by mail. (optional)

-I
    Runs the job interactively, showing its output directly in your shell session. Useful for debugging or short runs. (optional)

-R 'affinity[core(<processors>)]'
    Number of processors to pin to one process (threads per process, default: 1). (optional)

-J <job_name>
    Job name. (advised)

-w <expression>
    Sets a dependency expression. (optional)
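Putting several of these options together, a typical submission might look like the following sketch. The executable, job name and resource numbers are illustrative placeholders, not values prescribed by the cluster:

```shell
# Hypothetical example: submit a 64-process job named "flow_sim" to the
# short queue with a 2-hour run time limit, keeping all processes on one node.
# %J is expanded by LSF to the job ID, giving each run its own log files.
bsub -n 64 -R 'span[hosts=1]' -W 2:00 -q short \
     -J flow_sim -o flow_sim.%J.log -e flow_sim.%J.err \
     ./my_simulation
```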


Job names are important for the LSF scheduler (when searching for particular jobs, working with job dependencies, etc.). We therefore ask you to specify meaningful and preferably unique job names.

In addition, we will adjust job names in the near future.

Our policy is:

  • Job names should not contain whitespace or semicolons. If a name does, it will be truncated to the first part of the original job name, taking whitespace and semicolons as delimiters.
  • Jobs with missing or excessively long names will be assigned a job name of the form $USER_$RANDOMSTRING, where $RANDOMSTRING is a random string 20 characters long.
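Unique job names also make dependency expressions (-w) straightforward. The following is a sketch with hypothetical script names; done() is an LSF dependency condition that is satisfied once the named job has finished successfully:

```shell
# Hypothetical example: "postprocess" only starts after "preprocess"
# has completed successfully. Script names are placeholders.
bsub -J preprocess  -n 1 -W 30 -q short ./prepare_data.sh
bsub -J postprocess -n 1 -W 60 -q short -w 'done(preprocess)' ./analyze_data.sh
```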

Usage

Overview of how to use Mogon

There are some requirements that have to be met before you can start using Mogon. You'll find more information about them here.

Enroll for the Mogon introductory course as soon as possible, especially if you have no experience working in an HPC environment.

Mogon is a Linux-based cluster currently running Scientific Linux/CentOS with LSF as its batch system. There are two login nodes for all users, except for the ETAP users, who have their own 4 login nodes.

Policy for login nodes

Calculations are to be submitted as jobs; jobs are not to be run on the login nodes. Processes run directly on these nodes should be limited to tasks such as editing, data transfer and management, data analysis, compiling code and debugging, as long as they are not resource-intensive (memory, CPU, network and/or I/O). Any resource-intensive work must be run on the compute nodes through the batch system.

Please do not impair the work of other users by cluttering login-nodes.

Therefore: any process that consumes extensive resources on a login node may be killed, especially when it begins to impact other users on that node. If a process is creating significant problems on the system, it will be killed immediately and the user will be contacted via email.

Repeated abuse of login-nodes may result in notification of your group administrator and potentially locking your account.

Avoiding the Batch System

While it is not (easily) possible to calculate on compute hosts without using the batch system, it is in any case prohibited: any computational process not associated with a corresponding job will be killed.


For any publication that made use of the HPC resources, we kindly ask you to add one of the following sentences to the acknowledgement section of your publication.

Parts of this research were conducted using the supercomputer Mogon and/or advisory services offered by Johannes Gutenberg University Mainz, which is a member of the AHRP and the Gauss Alliance e.V.

The authors gratefully acknowledge the computing time granted on the supercomputer Mogon at Johannes Gutenberg University Mainz.