TYC@Imperial: GPU-Accelerated Kinetic Lattice Monte Carlo for Experimental-Scale Studies

Jeffrey Kelling

Helmholtz-Zentrum Dresden-Rossendorf, Germany

Monday 16th October, 2017
Time: 12.00pm
Venue: TBC
Contact: Ms Hafiza Bibi

Abstract: Micro- and nano-structured materials, including composites, are crucial for future energy technologies. Key processes during production and lifetime are governed by self-organization through phase separation at the micro- and nanoscale.

Examples include nano-structured silicon thin-film absorber layers in solar cells, which provide tailored band gaps [1] in addition to reduced production costs. In the case of micro-patterned electrolyte matrices, used in a range of fuel-cell technologies, both production and aging are governed by phase separation, which affects the efficiency and lifetime of large industrial installations.

Simulations of these out-of-equilibrium, inhomogeneous real-world systems provide important insights, revealing reaction pathways for the self-organization and self-alignment of nanostructures. To this end, 3D kinetic Metropolis lattice Monte Carlo simulations can model physical systems at experimental scales in an atomistic way, thereby side-stepping many of the caveats associated with the alternative, phase-field simulations.
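To make the method concrete, the following is a minimal serial NumPy sketch of kinetic Metropolis lattice Monte Carlo with Kawasaki (particle-exchange) dynamics on a 3D lattice gas — the conserved-density dynamics relevant to phase separation. It is an illustration only, not the GPU implementation presented in the talk; the function names and the choice of coupling J = 1 are assumptions for the example.

```python
import numpy as np

OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def wrap(site, off, L):
    """Neighbour of `site` shifted by `off` on an L^3 periodic lattice."""
    return tuple((site[i] + off[i]) % L for i in range(3))

def exchange_dE(lat, s1, s2):
    """Energy change for swapping occupancies of neighbouring sites s1, s2
    in a lattice gas with E = -J * (number of occupied-occupied bonds), J = 1."""
    L = lat.shape[0]
    n1 = sum(lat[wrap(s1, o, L)] for o in OFFSETS if wrap(s1, o, L) != s2)
    n2 = sum(lat[wrap(s2, o, L)] for o in OFFSETS if wrap(s2, o, L) != s1)
    return -(lat[s2] - lat[s1]) * (n1 - n2)

def kawasaki_sweep(lat, beta, rng):
    """One Metropolis sweep of Kawasaki exchange dynamics: pick a random
    nearest-neighbour pair, swap with probability min(1, exp(-beta * dE)).
    Particle number is conserved, as required for phase separation."""
    L = lat.shape[0]
    for _ in range(lat.size):
        s1 = tuple(rng.integers(0, L, size=3))
        s2 = wrap(s1, OFFSETS[rng.integers(6)], L)
        if lat[s1] == lat[s2]:
            continue  # exchanging identical occupancies changes nothing
        dE = exchange_dE(lat, s1, s2)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            lat[s1], lat[s2] = lat[s2], lat[s1]
    return lat
```

At low temperature (large beta), repeated sweeps coarsen an initially random half-filled lattice into compact domains — the atomistic analogue of the phase-separation processes discussed above.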

These same long-time and large-scale simulations also provide important insights into more fundamental physical problems. The question of superuniversality, that is, whether and how different types of quenched disorder affect universal properties, is still under investigation even for fundamental models like the Ising model [2], realizations of which can be found in important complex magnetic systems as well as in binary mixtures.

We propose massively parallel simulation techniques exploiting the architecture of modern graphics processing units (GPUs) to address these problems, ranging from kinetic Metropolis Monte Carlo to Potts models with quenched disorder.

While pioneering work in this area [3] focused on efficient but correlated stochastic cellular automaton implementations, our simulations can be made virtually correlation-free [4].
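The parallel update pattern that GPUs exploit can be illustrated with the standard checkerboard decomposition: every site on one sublattice has all its neighbours on the other, so a whole sublattice may be updated simultaneously without conflicting reads and writes. The sketch below shows this for a 2D Ising model with single-spin-flip Metropolis updates in vectorized NumPy; it stands in for the massively parallel kernel and is not the implementation from [5] or [6].

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep over a 2D Ising lattice (J = 1, periodic
    boundaries), split into two half-sweeps over the 'black' and 'white'
    sublattices. Within one half-sweep, every update is independent of the
    others, so it could run fully in parallel on a GPU."""
    ii, jj = np.indices(spins.shape)
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        # sum of the four nearest neighbours, recomputed after each half-sweep
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb  # energy cost of flipping each spin
        # accept with probability min(1, exp(-beta * dE)), sublattice only
        flip = mask & (rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0, None)))
        spins[flip] *= -1
    return spins
```

Updating every site of a sublattice with one synchronized schedule is exactly what introduces the correlations of stochastic cellular automaton schemes [3]; randomizing which sites are touched within the parallel domains is one way such correlations can be suppressed [4].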

Here, we present two implementations for large-scale simulations on GPUs: one optimized for fast time-to-solution in experimental-scale simulations [5], the other enabling highly efficient parameter studies and large sample sizes for large-scale simulations [6]. Harnessing the compute power of modern (multi-)GPU installations increases energy efficiency and reduces time-to-solution.


[1] Appl. Phys. Lett. 103, 133106 (2013); Appl. Phys. Lett. 103, 203103 (2013); Nano Lett. 16, 1942 (2016)
[2] e.g. EPL 117, 10012 (2017); Phys. Rev. B 88, 042129 (2013); Phys. Rev. B 78, 224419 (2008); J. Phys. A 24, L1087 (1991)
[3] J. Comput. Phys. 228, 4469 (2009); Phys. Procedia 15, 92 (2011)
[4] http://arxiv.org/abs/1705.01022; https://arxiv.org/abs/1701.03638
[5] Eur. Phys. J. Spec. Top. 210, 175 (2012)
[6] Phys. Rev. E 94, 022107 (2016)


Follow @tyc_london for updates from the Thomas Young Centre.