Winter Workshop on Complex Systems 2020

The Winter Workshop on Complex Systems is a one-week workshop where young researchers from all over the world gather to discuss complex systems.

A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, diseases, the human brain, infrastructure such as power grids, transportation, or communication systems, social and economic organizations (like cities), an ecosystem, a living cell, and ultimately the entire universe.

The study of complex systems calls for an interdisciplinary approach combining disciplines such as Biology, Computer Science, Data Science, Economics, Mathematics, Physics and Sociology.

Important Dates

Start of applications: October 7th
Deadline for applications: October 20th
Notification of acceptance: November 10th

The workshop registration fee is 200 CHF.

Julia for HPC

Many researchers face this problem when developing numerical applications. We prototype using a high-level language, appealing for its ease of programming, readability, nice plotting, and interactive debugging. When it comes to production runs, we translate the prototypes into lower-level compiled languages to benefit from runtime performance and parallelisation possibilities, but lose many interesting features of high-level languages.

Our contribution presented at JuliaCon 2019 (Baltimore MD, USA) is an illustration of Julia solving "the two-language problem". We replace our Matlab prototype and the CUDA C + MPI production code with a single Julia code that serves both prototyping and production tasks. We showcase the port to Julia of a massively parallel multi-GPU hydro-mechanical stencil-based solver in 3-D. The iterative solver can be applied to a wide range of coupled differential equations.
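As a minimal illustration of what a stencil-based iterative update looks like in Julia (this sketch is not the authors' 3-D multi-GPU solver, just a toy 1-D diffusion example with made-up parameters):

```julia
# Toy 1-D diffusion solved with an explicit finite-difference stencil.
# Illustrative only; the actual 3-D hydro-mechanical solver is far richer.
function diffuse!(T, D, dx, dt, nt)
    for it in 1:nt
        # second-order central stencil, updated on interior points only
        T[2:end-1] .+= dt * D .* (T[1:end-2] .- 2 .* T[2:end-1] .+ T[3:end]) ./ dx^2
    end
    return T
end

nx = 101
dx = 1.0 / (nx - 1)
D  = 1.0
dt = dx^2 / D / 2.5          # explicit stability limit: dt < dx^2 / (2D)
T  = zeros(nx)
T[nx ÷ 2] = 1.0              # initial heat pulse in the middle
diffuse!(T, D, dx, dt, 500)
```

The appeal for production use is that array code like this can later target GPUs with only small changes, e.g. by swapping `Array` for a GPU array type, which is the route the Julia port takes.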

Figure 1. Weak scaling of the parallel GPU MPI hydro-mechanical solver. We report the parallel efficiency for both the Julia and the CUDA C implementations on 1024 and 5120 GPUs (full machine), respectively, on the hybrid Cray XC50 "Piz Daint" at the Swiss National Supercomputing Centre, CSCS.

We report close to optimal weak scaling on 1024 NVIDIA Tesla P100 GPUs on the hybrid Cray XC50 "Piz Daint" supercomputer at the Swiss National Supercomputing Centre, CSCS (Figure 1). We compare these results, obtained with our Julia prototype, to a reference scaling realised with the multi-GPU production solver written in CUDA C + MPI, which achieved high performance and nearly ideal parallel efficiency on up to 5120 NVIDIA Tesla P100 GPUs on "Piz Daint". The corresponding publication is soon in press.
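For reference, weak-scaling parallel efficiency is the single-process runtime divided by the N-process runtime at fixed work per process. A tiny Julia helper, with hypothetical timings for illustration only:

```julia
# Weak-scaling parallel efficiency: runtime on 1 process divided by runtime
# on N processes, with the problem size per process held constant.
parallel_efficiency(t1, tN) = t1 / tN

# Hypothetical timings (seconds); not measured values from the paper.
t1, t1024 = 10.0, 10.4
eff = parallel_efficiency(t1, t1024)   # close to 1.0 means near-ideal scaling
```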

Nvidia GTC 2019

This year's Nvidia GPU Technology Conference took place in San Jose, Silicon Valley, CA. Besides the opening keynote by CEO Jensen Huang, a former researcher from the Swiss Geocomputing Centre, Ludovic Räss, gave a talk on geo-supercomputing. The recording is accessible hereafter or on GTC on-demand:


For further information, feel free to contact lraess[at]

Is faster better … and greener?

Common thinking suggests that environmentally friendly, or "green", computing is tightly linked to the rapid execution of a given computer program or routine. This post reports some interesting relations between programming languages, their relative execution times, and their energy consumption.

In case you are already running behind schedule, here is the conclusion: C is fast and has an especially low energy consumption footprint.

Resolving thermomechanical coupling in two and three dimensions

Thibault Duretz, Ludovic Räss, Yury Podladchikov and Stefan Schmalholz recently published a new study about multi-physics couplings with focus on thermomechanical interactions.

Thermomechanical strain localisation in 3-D for a cylindrical and spherical inclusion. The solution is obtained using a pseudo-transient GPU-based solver.

Their contribution (accessible here) assesses the ability of an iterative technique to resolve nonlinear interactions between thermal and mechanical processes. This solution approach is particularly well suited for parallel hardware such as GPUs. The authors benchmark their proposed solver against a more classical direct-iterative approach (TM2Di).
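The core idea of a pseudo-transient iteration is to march the residual of the governing equations in pseudo-time until it vanishes. A schematic Julia sketch on a toy 1-D Poisson problem (illustrative only; the published thermomechanical solver is considerably more involved):

```julia
# Schematic pseudo-transient iteration: advance the residual equation in
# pseudo-time until convergence. Toy problem: d2u/dx2 = -1 with u = 0 at
# both boundaries, whose exact solution is u(x) = x(1 - x)/2.
function pseudo_transient!(u, dx; tol=1e-8, maxiter=100_000)
    dτ   = dx^2 / 2.1                 # pseudo-time step (stability-limited)
    err  = Inf
    iter = 0
    while err > tol && iter < maxiter
        # residual of the steady equation at interior points
        R = (u[1:end-2] .- 2 .* u[2:end-1] .+ u[3:end]) ./ dx^2 .+ 1.0
        u[2:end-1] .+= dτ .* R        # pseudo-time update toward R = 0
        err   = maximum(abs.(R))
        iter += 1
    end
    return u, iter
end

nx = 51
dx = 1.0 / (nx - 1)
u, iters = pseudo_transient!(zeros(nx), dx)
```

Every update only touches local neighbours, which is why this class of solver maps so naturally onto GPUs.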

The codes illustrating and supporting this work can be accessed on the software page, or on the authors' Bitbucket repository.

Conference Jose Fernando Mendes, 14 Nov 2018

Jose Fernando Mendes
Professor at the University of Aveiro

Structural Properties of Multiplex Networks

Many complex systems, both natural and man-made, can be represented as multiplex or interdependent networks. Multiple dependencies make a system more fragile: damage to one element can lead to avalanches of failures throughout the system.
In this talk I will present recent developments on the structural properties of multiplex networks. The transition found is asymmetric. It is hybrid in nature, having a discontinuity like a first-order transition but exhibiting critical behavior, only above the transition, like a second-order transition. A complete understanding of the transition therefore cannot be had without first understanding this critical behavior. I will discuss and describe the nature of such hybrid phase transitions and the appearance of avalanches at criticality.
José Fernando F. Mendes is a theoretical physicist working on statistical physics. His research focuses mainly on the study of complex systems and on the structure and evolution of complex networks such as the World Wide Web, the Internet, and biological networks. Other interests include granular media, self-organized criticality, non-equilibrium phase transitions, and deposition models.
He is co-author of over 130 scientific papers that have received about 18,000 citations, with his most cited works receiving more than 3,000 citations.