Using GPUs

The titan queues (titanshort/long) currently include hosts that each carry 4 GeForce GTX TITAN cards, so a usage request of up to cuda=4 can be selected (see below). Likewise, the hosts in the tesla queues (teslashort/long) are each equipped with 4 Tesla K20m cards.

GPU Usage

To use a GPU you have to explicitly reserve it as a resource in the bsub call:
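A minimal submission could look like the following sketch. The resource string rusage[cuda=1] follows the cuda=N convention mentioned above; the queue name, runtime, and the script name my_gpu_job.sh are placeholders to adapt to your job:

```shell
# request one GPU in the titanshort queue with a 30-minute runtime limit
bsub -q titanshort -W 00:30 -R 'rusage[cuda=1]' ./my_gpu_job.sh
```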

The code or application to be run needs to

  1. be an executable script or program.
  2. carry a shebang.

While this is good practice for LSF in general, it is strictly enforced for GPU resource requests.
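A minimal job script satisfying both requirements might look like this sketch (the program name my_cuda_program is a placeholder; remember to make the script executable with chmod +x):

```shell
#!/bin/bash
# the shebang above is mandatory for GPU resource requests

# launch the actual GPU application
./my_cuda_program
```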

Using multiple GPUs

If supported by the queue, you can request multiple GPUs like
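A request for two GPUs might look like the following sketch; queue, runtime, and script name are placeholders:

```shell
# request two GPUs on one host in the titanshort queue
bsub -q titanshort -W 02:00 -R 'rusage[cuda=2]' ./my_gpu_job.sh
```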

Be sure to add a sufficient runtime estimate with -W. Multiple CPUs can be requested as usual with -n together with the ptile span option.

Using multiple nodes and multiple GPUs

In order to use multiple nodes, you have to request entire nodes and entire GPU sets, e.g.
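A sketch of such a request, assuming for illustration that each titan node provides 16 CPU cores and 4 GPUs (adjust the numbers to the actual hardware of your queue):

```shell
# 32 slots at 16 per node = 2 entire nodes, each with its full set of 4 GPUs
bsub -q titanshort -W 04:00 -n 32 -R 'span[ptile=16]' -R 'rusage[cuda=4]' ./my_mpi_gpu_job.sh
```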

In this example two entire titan nodes, including their full CPU sets, will be used.

Your job script / job command has to export the environment of your job to the remote ranks. mpirun implementations provide an option for this (see your mpirun man page).
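For example, Open MPI forwards selected variables with -x, while Intel MPI forwards the whole environment with -genvall; check the man page of the mpirun you actually use (the program name below is a placeholder):

```shell
# Open MPI: export a specific environment variable to all ranks
mpirun -x CUDA_VISIBLE_DEVICES ./my_mpi_gpu_app

# Intel MPI: export the entire environment to all ranks
mpirun -genvall ./my_mpi_gpu_app
```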

Jobs spanning multiple GPU nodes must claim entire nodes: the full GPU set as well as the full CPU set, either by setting affinity(core) or by using ptile.