Research Interests

My research interests span a variety of areas, including high-performance computing, parallel and distributed computing, smoothed particle hydrodynamics, computational fluid dynamics, turbulence, computational astrophysics, data science, and fundamental machine learning.

If you are a student looking to work with me at the undergraduate or graduate level, please check out my current student opportunities.

Parallel and Distributed Computing

Within the next several years, supercomputing clusters will achieve exascale computation, that is, 10^18 floating-point operations per second. Designing parallel algorithms that take advantage of future many-core and multi-core architectures will require significant effort to overcome memory and communication bottlenecks. I work on high-performance computational fluid dynamics codes that make efficient use of powerful supercomputers. I am a development area lead in the collaboration that builds Phantom, a parallel, distributed smoothed particle hydrodynamics (SPH) simulation code.

Numerical Analysis and Convergence

The accuracy and convergence properties of numerical methods must be understood in order to validate that they produce useful results. All simulations are underpinned by the quality of their numerical methods. This requires careful numerical analysis and convergence testing to understand the sources of error and the physical conditions under which results are valid. A recent project of mine in this area studied the ability of SPH to model the Kelvin-Helmholtz instability. I showed that SPH does converge to the correct solution, provided that the interpolation error from the smoothing kernel is appropriately handled.
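As a toy illustration of convergence testing, the observed order of convergence of a method can be estimated from error measurements at two resolutions. This is a minimal sketch with made-up numbers, not results from any particular study:

```python
import numpy as np

def observed_order(h_coarse, h_fine, err_coarse, err_fine):
    """Estimate the observed order of convergence p, assuming the
    error scales as err ~ C * h**p for grid/particle spacing h."""
    return np.log(err_coarse / err_fine) / np.log(h_coarse / h_fine)

# Hypothetical L2 errors from a second-order method: halving the
# spacing reduces the error by a factor of four.
p = observed_order(h_coarse=0.02, h_fine=0.01,
                   err_coarse=4.0e-4, err_fine=1.0e-4)
print(f"observed order of convergence: {p:.2f}")
```

In practice one measures errors at several resolutions against a reference solution and checks that the fitted order matches the method's formal order.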

Computational Fluid Dynamics

I am broadly interested in all types of simulations that use Smoothed Particle Hydrodynamics (SPH). In SPH, a fluid is discretised into a set of particles, with each particle carrying a portion of the mass, energy and momentum of the fluid. Collectively, the particles reproduce the behaviour of the fluid. SPH is widely used throughout engineering and science, and even in Hollywood movies!

Multi-Physics Algorithms

I am interested in all SPH simulations involving magnetic fields and coupled multi-physics. I have developed methods that improve the accuracy of magnetic fields in SPH. In particular, I have developed a divergence cleaning method that removes errors, in the form of magnetic monopoles, from the magnetic field. These methods have enabled a range of astrophysical simulations, from jets and outflows during star formation, to magnetised turbulence, to the magnetic field structure of the Milky Way.
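Schematically, hyperbolic/parabolic divergence cleaning couples a scalar field $\psi$ to the induction equation so that divergence errors are propagated away as waves and damped. This is a simplified sketch of the idea; the full SPH formulation contains additional terms for stability and energy conservation:

```latex
\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}
  = \left(\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}\right)_{\rm ideal}
    - \nabla \psi ,
\qquad
\frac{\mathrm{d}\psi}{\mathrm{d}t}
  = - c_{\rm h}^{2}\, \nabla \cdot \mathbf{B} - \frac{\psi}{\tau} ,
```

where $c_{\rm h}$ is the cleaning wave speed and $\tau$ is the parabolic damping timescale. The $-\nabla\psi$ term redistributes $\nabla\cdot\mathbf{B}$ errors, while the $-\psi/\tau$ term dissipates them.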

Turbulence

Turbulence is everywhere in nature. It is produced in coffee cups by stirring and by the dimples on golf balls, and it is present in our atmosphere, in the Sun, and throughout the interstellar medium. I am interested in SPH simulations of Kolmogorov, subsonic, supersonic and magnetohydrodynamical turbulence. Further understanding the statistical behaviour of turbulence in SPH simulations is crucial to trusting simulation results. I performed the first SPH simulations of magnetohydrodynamical supersonic turbulence, which I compared against grid-based computational fluid dynamics methods.
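A standard statistic for characterising turbulence is the kinetic energy spectrum E(k). As a minimal one-dimensional sketch (real analyses work on 3D fields interpolated from the particles; the single sine-mode field here is purely illustrative):

```python
import numpy as np

def energy_spectrum(v):
    """Kinetic energy spectrum E(k) of a 1D periodic velocity
    field, normalised so that sum(E) = 0.5 * mean(v**2)
    (a discrete Parseval identity)."""
    n = len(v)
    vk = np.fft.rfft(v) / n
    e = 0.5 * np.abs(vk)**2
    # Double positive-frequency bins to account for the conjugate
    # negative frequencies dropped by rfft...
    e[1:] *= 2.0
    if n % 2 == 0:
        e[-1] /= 2.0  # ...except the unpaired Nyquist bin.
    return e

# A single sine mode: all kinetic energy sits in bin k = 3.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
v = np.sin(3.0 * x)
spec = energy_spectrum(v)
print(np.argmax(spec))
```

For fully developed turbulence one then checks the inertial-range slope of E(k), e.g. the Kolmogorov k^(-5/3) scaling for subsonic turbulence.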

Synthetic Data Generation

I am interested in the generation of high-quality synthetic data sets. Synthetic data is artificially created data that resembles real data, both at the level of individual records and in its statistical properties. Creating synthetic data is a significant challenge, as real data contains many non-linear relationships that are difficult to capture. One focus area is financial transaction sequences and networks, used to model the financial behaviour of an individual or a population.
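In its simplest form, generating synthetic data means fitting a distribution to real data, sampling from it, and checking that the statistics match. A minimal sketch, where the "real" transaction amounts are themselves simulated stand-ins (everything here is hypothetical, and matching a single marginal is far easier than the sequential and network structure described above):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for real transaction amounts: positive, right-skewed,
# roughly log-normal (hypothetical, for illustration only).
real = rng.lognormal(mean=3.0, sigma=0.8, size=5000)

# Fit a log-normal in log space and sample a synthetic data set.
mu, sigma = np.log(real).mean(), np.log(real).std()
synthetic = rng.lognormal(mean=mu, sigma=sigma, size=5000)

# A first sanity check: synthetic summary statistics should match
# the real ones.
print(real.mean(), synthetic.mean())
```

Real synthetic-data pipelines must go much further, validating joint distributions, temporal correlations, and graph structure, not just marginals.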

Interpretable Machine Learning

Machine learning models are algorithms that learn patterns by training on a set of data. Once learned, those patterns can be used to predict future events or data. This avoids the need to explicitly teach patterns to the model, but can make it difficult to understand what the model has learned. I am particularly interested in the interpretability of machine learning, that is, understanding why a model makes certain predictions, both globally and for individual predictions.
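One simple global interpretability technique is permutation importance: shuffle one feature column, breaking its link to the target, and measure how much the model's error grows. A minimal self-contained sketch with synthetic data and a least-squares linear model standing in for an arbitrary trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the target depends strongly on feature 0
# and not at all on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=500)

# Fit a linear model by least squares (any model would do here).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(A):
    return A @ coef

baseline = np.mean((y - predict(X))**2)

def permutation_importance(X, y, feature, n_repeats=10):
    """Global importance: mean increase in MSE when one feature
    column is randomly permuted."""
    scores = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        scores.append(np.mean((y - predict(Xp))**2) - baseline)
    return float(np.mean(scores))

imp = [permutation_importance(X, y, j) for j in range(2)]
print(imp)  # feature 0 should dominate
```

Shuffling feature 0 destroys most of the model's predictive power, while shuffling the irrelevant feature 1 barely changes the error; local methods instead explain a single prediction at a time.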