Valgrind on Mogon

Dear Developers,

If you are interested in finding memory leaks or cache misses, you may have heard of the Valgrind tool suite. This tool suite can be used on Mogon, too. You can find the relevant documentation (and a reference to the excellent Valgrind documentation) here.
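For example, checking a program for memory leaks or profiling its cache behaviour could look like this (a minimal sketch; ./my_program is a placeholder for your own binary):

valgrind --leak-check=full ./my_program    # memory-leak check
valgrind --tool=cachegrind ./my_program    # cache profiling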

Your HPC-Team

Intel® Parallel Studio 2017 is available on Mogon

Dear Users,

The new release of the enhanced Intel® Parallel Studio XE 2017 Edition(s) is now available on the Mogon cluster.

You can use modules as usual to load the environment variables.

The following components of the cluster edition of Intel® Parallel Studio XE 2017 are installed on the cluster:

Intel® C++ Compiler:
    module load intel/composer/2017

Intel® Fortran Compiler:
    module load intel/composer/2017

Intel® Distribution for Python:
    module load intel/python/2.7
    module load intel/python/3.5

Intel® Math Kernel Library (C, C++, Fortran):
    module load intel/mkl/2017/intel64
    module load intel/mkl/2017/intel64-ilp64-f95mods
    module load intel/mkl/2017/intel64-lp64-f95mods
    module load intel/mkl/2017/mic
    module load intel/mkl/2017/mic-ilp64-f95mods
    module load intel/mkl/2017/mic-lp64-f95mods

Intel® Data Analytics Acceleration Library (C++, Java):
    module load intel/daal/2017

Intel® Integrated Performance Primitives (C, C++):
    module load intel/ipp/2017

Intel® Threading Building Blocks:
    module load intel/tbb/2017

Intel® Advisor (C, C++, Fortran):
    module load intel/advisor/2017

Intel® Inspector (C, C++, Fortran):
    module load intel/inspector/2017

Intel® VTune™ Amplifier XE (C, C++, Fortran, C#, Java, Python, Go):
    module load intel/vtune/2017

Intel® MPI Library (C, C++, Fortran):
    module load intel/mpi/2017
    module load mpi/intelmpi/2017

Intel® Trace Analyzer and Collector (C, C++, Fortran)

Intel® Cluster Checker
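As a quick start, compiling a C++ program with the Intel compiler could look like this (a minimal sketch; hello.cpp stands in for your own source file):

module load intel/composer/2017
icpc -O2 -o hello hello.cpp    # icpc is the Intel C++ compiler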

Your HPC Team


Matlab flags changed

Dear Users,

Since we observe a considerable number of Matlab jobs on the cluster, and have seen some trouble in connection with the batch system, we have made some changes to the Matlab startup script.

Before describing the changes, we want to remind you that your scripts should be compiled with the Matlab Compiler, since we have only a few licenses on the cluster and those are mainly meant for interactive use (code checking, profiling, ...).

One major issue that comes up repeatedly is a mismatch in resource reservation. By default, Matlab tries to use all hardware on a machine. This may be fine on your local workstation, but it is not on a cluster. For Matlab there is only the choice between taking one core and taking them all; this is controlled with the flag -singleCompThread. Since it seems that not many users are aware of this, we have changed the startup script to use -singleCompThread by default. If you have a script that takes advantage of Matlab's internal, parallelized routines and want to allow it to use the full machine, you have to use the new flag -multiCompThread and, of course, reserve a full node.

Let's look at two examples of how to start Matlab:

1.) A Matlab script that needs only one computational thread:

matlab -nosplash -r my_singleComp_script

2.) A Matlab script that makes heavy use of Matlab's internal, parallelized routines (only inside an interactive job):

matlab -nosplash -multiCompThread -r my_multiComp_script

As mentioned above, the code should be compiled when run on the cluster as a job. Accordingly, compilation should look like this:

1.) mcc -R -m my_singleComp_script or
2.) mcc -multiCompThread -m my_multiComp_script

Note that it's NOT '-R -multiCompThread' but just '-multiCompThread'.

Your job reservation has to be adapted to the needs of your script, of course.

We have added the environment variable MATLABROOT to the module files to make calling your compiled code a little more convenient. The wrapper script generated by mcc takes the Matlab root as its first argument, e.g. for the first example above:
./run_my_singleComp_script.sh $MATLABROOT arg1 arg2 arg3 ...

If you make heavy use of the floating-point unit, you might consider using -R 'affinity[core(2)]' or more advanced reservations.
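Putting the pieces together, submitting the compiled single-threaded example as a batch job could look like this (a sketch assuming LSF's bsub; the wrapper script name and arguments are placeholders from the examples above):

bsub -R 'affinity[core(2)]' ./run_my_singleComp_script.sh $MATLABROOT arg1 arg2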

Your HPC-Team


New OpenMPI-Module(s) supporting MPI 3.1 standard

Dear Users,

We are pleased to announce the installation of new OpenMPI (version 1.10.2) modules supporting the MPI 3.1 standard. Please check out the respective page in our wiki.

For Fortran users: Please note the different compiler versions; they have to match the compiler version noted in the module string. All other software can be compiled against the default module with the system's compiler (gcc 4.4.7). The wiki gives a more detailed description.
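For example, compiling and running an MPI program could look like this (a sketch; the exact module name may differ, so please check the wiki):

module load mpi/openmpi/1.10.2
mpicc -o hello_mpi hello_mpi.c    # compile against the loaded OpenMPI
mpirun -np 4 ./hello_mpi          # run with 4 processes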

Your HPC-Team

Python 3.5 with numpy linked against MKL

Dear Users,


We now provide a new version of Python, 3.5, along with, as usual, a great number of scientific Python modules. In particular numpy, the basic array library, is compiled against Intel's MKL, resulting in a tremendous speed-up in numerical computations, e.g. matrix multiplication.
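To verify that numpy is indeed linked against MKL, you can print its build configuration after loading the module (numpy.show_config() lists the BLAS/LAPACK libraries used):

python3 -c 'import numpy; numpy.show_config()'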


See our wiki entry for more details.


Profiling on Mogon

We strive to offer profiling tools for high-performance computing. If you develop software, please check out our documentation on Intel's VTune and on Allinea; both are parallel profiling tools. (Documentation for Allinea on Mogon is forthcoming.)
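As a starting point, a basic hotspot analysis with VTune from the command line could look like this (a sketch; amplxe-cl is the command-line driver shipped with VTune 2017, and ./my_app stands in for your own binary):

module load intel/vtune/2017
amplxe-cl -collect hotspots ./my_app    # collect a hotspot profile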

If interested, drop us a mail to arrange a slot during the Thursday workshop.


Dear Users,

We aim to simplify and standardize the installation of software. Effective immediately, in addition to requests via , there is now the option to use a small form that anticipates the questions we would otherwise have to ask.

Your HPC-Team


New Module: R Version 3.2.2

We are pleased to announce the installation of the statistical software R, version 3.2.2, as a new module. The module includes many packages, among them the Bioconductor packages. Parallelisation support is included. Check out the local R documentation.
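As a quick check that parallelisation support is in place after loading the module, you can ask R how many cores it sees (a minimal sketch using the base parallel package):

Rscript -e 'library(parallel); detectCores()'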