The 2nd workshop AI and Physical Sciences @AMU will take place in Marseille on Tuesday, November 12, 2024. The goal is to foster interactions between the fields of AI and the physical sciences, and to build a community of AMU researchers around these fields by gathering experts for the second time and exchanging valuable experience and knowledge at the interface of these disciplines.
The workshop scope includes (but is not limited to) the following topics:
- AI for problems in physical sciences
- Physics-informed machine learning, including machine learning methods for dynamical systems (e.g., PINNs, FNOs, DeepONets)
- AI-based computer vision and imaging for physical sciences
- Contributions of physical sciences to machine learning
- Machine learning algorithms on specific hardware
- AI and quantum mechanics
Registration is free but mandatory (see below).
Program
9:00: Welcome
9:10: Introduction to the workshop
Including an evocation of the 2024 Nobel Prizes in Physics and Chemistry, by Thierry Artières (LIS)
9:30: Invited talk
Enhancing Uncertainty Quantification in Chemical Modeling: A Comparative Study of Deep Evidential Regression and Ensembles with Post-Hoc Calibration by Bidhan Chandra Garain (ICR, AMU)
Machine learning (ML) models are increasingly used to speed up structural optimization and molecular dynamics simulations, offering cost-effective solutions for exploring complex chemical systems. However, these applications require reliable predictions, and poorly calibrated uncertainty estimates can lead to errors or excessive quantum calculations. Standard approaches like deep ensembles and deep evidential regression often lack well-calibrated uncertainty, which may overlook key data or lead to computational inefficiencies.
In this talk, I will introduce a framework for uncertainty quantification that enhances predictive reliability through post-hoc calibration techniques. This approach improves active learning by more accurately identifying areas needing additional data or high-precision calculations, comparing the effectiveness of deep evidential regression and ensemble methods. The framework aims to offer adaptable uncertainty quantification, ultimately facilitating more efficient and accurate ML-driven chemical modeling.
[slides]
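As a toy illustration of what post-hoc calibration can mean in this setting (a sketch under an assumed Gaussian predictive model, not the speaker's actual method), one common recipe rescales an ensemble's predictive standard deviation by a scalar chosen to minimize the negative log-likelihood on a validation set:

```python
import numpy as np

# Hypothetical toy data: ensemble predictions on a validation set.
rng = np.random.default_rng(0)
y_true = rng.normal(0.0, 1.0, size=500)
# An intentionally overconfident "ensemble": correct mean on average,
# but a reported std that underestimates the actual error.
mu = y_true + rng.normal(0.0, 1.0, size=500)   # predictive mean with unit error
sigma = np.full(500, 0.3)                       # reported predictive std

def gaussian_nll(y, mu, sigma):
    """Average negative log-likelihood of y under N(mu, sigma^2)."""
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - mu)**2 / (2 * sigma**2))

# Post-hoc variance scaling: pick the scalar s minimizing the validation NLL
# of N(mu, (s*sigma)^2). Closed form: s^2 = mean(((y - mu) / sigma)^2).
s = np.sqrt(np.mean(((y_true - mu) / sigma) ** 2))

print(f"scale factor s = {s:.2f}")
print(f"NLL before calibration: {gaussian_nll(y_true, mu, sigma):.2f}")
print(f"NLL after calibration:  {gaussian_nll(y_true, mu, s * sigma):.2f}")
```

The same idea extends to evidential outputs; the talk compares such calibrated uncertainties across methods.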
10:10: Invited talk
Artificial Intelligence applied to nuclear fusion plasmas for instability detection and turbulence surrogate models by David Zarzoso Fernandez (M2P2, AMU)
Nuclear fusion holds the potential to provide a nearly limitless, clean energy source by mimicking the processes that power the Sun, offering a sustainable alternative to fossil fuels without harmful emissions or long-lived radioactive waste. If achieved at scale, fusion could revolutionize global energy systems, reducing our dependence on nonrenewable resources and significantly cutting greenhouse gas emissions.
However, on the route towards steady-state energy production by means of nuclear fusion reactions, several issues need to be addressed. First, the confinement of energetic particles (EP) is crucial to ensure the transfer of energy to the thermal plasma and to achieve self-sustained fusion reactions. These EP can drive some instabilities that tend to de-confine them. Second, together with EP, turbulence is another major element in the description of a fusion plasma. Turbulence is known to limit the performance of nuclear fusion devices reducing the confinement of particles and energy. Therefore, controlling EP-driven instabilities and turbulence is of paramount importance.
Two approaches are typically used to study these two phenomena: experiments and numerical modelling. However, both approaches have limitations: humans cannot identify and classify every instability that may arise in a single experiment, and turbulence analysis through simulations can become computationally expensive and, at times, unfeasible. In that context, we have investigated the use of AI algorithms to assist scientists in gaining deeper insights into experiments and creating surrogate models for turbulence.
10:50: Break
11:10: Keynote
Physics-Aware Deep Learning for Modeling Dynamical Systems by Patrick Gallinari (ISIR – Sorbonne Université and Criteo AI Lab, Paris)
Deep learning has recently gained traction in modeling complex physical processes across industrial and scientific fields such as environment, climate, health, biology, and engineering. This rapidly evolving interdisciplinary topic presents new challenges for machine learning. I will introduce deep learning approaches for physics-aware machine learning, as part of the broader topic of AI4Science, focusing on modeling the dynamic physical systems ubiquitous in science. I will discuss some of the main challenges of this topic, including incorporating physical priors in machine learning models, generalization issues for modeling physical processes, and neural operators, illustrated with applications from different domains.
Patrick Gallinari is a professor at Sorbonne University, affiliated with the ISIR laboratory (Institute of Intelligent Systems and Robotics), and a distinguished researcher at Criteo AI Lab in Paris. He is a pioneer in the field of neural networks. His research focuses on statistical learning and deep learning, with applications in various domains such as semantic data processing and complex data analysis. A few years ago, he spearheaded research on physics-aware machine learning and contributed to seminal works in this field. Additionally, he holds a national AI chair titled “Deep Learning for Physical Processes with applications to Earth System Science”.
[slides]
12:10: Lunch
13:30: Poster Session
Development of neural network embedded on FPGAs for the computation of the energy deposited in the ATLAS liquid argon calorimeter by Raphaël Bertrand (CPPM)
The LHC is the largest particle collider ever built. It probes the fine structure of our universe by colliding protons at very high energy, and enabled the discovery of the Higgs boson in 2012 using data collected by the ATLAS and CMS detectors, which study the remnants of the collisions at the LHC. Within the second phase of the upgrade of the LHC, which will start in 2026, the readout electronics of the ATLAS Liquid Argon (LAr) Calorimeter are being prepared for high-luminosity operation, with up to 200 simultaneous proton-proton interactions expected. This busy environment increases the difficulty of computing the energy from the raw detector data. The energy computation must be performed in real time using dedicated electronic boards based on FPGAs. To cope with the high number of simultaneous collisions, and thus overlapping electronic signals in the calorimeter, new machine learning approaches are explored. The main challenge is to develop neural networks that are efficient for the energy computation while being of reduced size, so as to fit into FPGAs and meet the stringent requirement on the computation time (on the order of 100 nanoseconds). In this presentation, I will discuss the use of recurrent neural networks (RNNs) and dense networks for the energy computation and their implementation on Intel FPGAs. The challenges of developing neural networks that require a small number of computational operations will also be discussed.
Decomposing data-fields with multiple transports using the neural implicit flow by Arthur Marmin (I2M)
This work presents a neural network-based methodology for the decomposition of transport-dominated fields using the shifted proper orthogonal decomposition (sPOD) and neural implicit flow. Classical sPOD methods typically require a priori knowledge of the transport operators to determine the co-moving fields. However, such knowledge is usually difficult or even impossible to obtain, limiting the applicability and benefits of the sPOD. To address this issue, our approach estimates both the transports and the co-moving fields simultaneously using neural implicit flow. This is achieved by training two sub-networks dedicated to learning the transports and the co-moving fields, respectively. Applications to synthetic data and a wildland fire model demonstrate the ability of the neural sPOD approach to separate the different fields effectively.
This is joint work with Philipp Krah, Beata Zorawski, Shubhaditya Burela, Arthur Marmin and Kai Schneider.
[related paper] [poster]
Inferring the location and orientation of cell divisions on time-lapse image sequences by Jean-Francois Rupprecht (CPT)
We propose a two-stage method to characterize cell divisions. In the first stage, the division detection problem is recast into a semantic segmentation task on image sequences. In the second stage, a local regression on individual divisions yields the orientation and distance between daughter cells. We apply our formalism to confocal image sequences of neural tube formation in chicken embryos, where divisions occur within a well-defined plane. We show that our two-stage method can be implemented using simple networks, e.g., a U-Net for the segmentation and a 4-layer CNN for the regression. Optimization of the networks was achieved through a systematic exploration of hyperparameters. In particular, we show that considering several frames as inputs significantly improves the segmentation performance. We reach an F1 score of 96% for the detection, with angle errors within the bounds of the uncertainty of the ground-truth annotations.
Physics-informed machine learning for hyperpolarized solid-state nuclear magnetic resonance by Valentin Emiya (LIS, ICR)
We present ongoing work, initiated in 2024, on machine learning methods for hyperpolarized solid-state nuclear magnetic resonance. We address the problem of spin diffusion in the context of a spherical solid of interest with unknown radius in a liquid under microwave irradiation. Our approach is physically informed in a twofold flavour: a physics-based data augmentation technique extends the average polarization measured at several instants, and a physics-informed neural network (PINN) is trained on a diffusion PDE together with the augmented data and the initial and boundary conditions. Based on experimental results, we discuss several modeling hypotheses and the ability to learn the radius of the solid together with the parameters of the network.
This is joint work with Matis Brossa, Alban Bourcier, Théo Trossevin, Quentin Diacono (Licence MPCI), Pierre Thureau and Samuel Cousin (ICR), Arthur Pinon (Gothenburg University), and Valentin Emiya (LIS)
[poster]
FASTER: AI methods for fast and accurate turbulent transport prediction in tokamaks by Guillaume Fuhr (PIIM)
Accurate simulation of fusion plasma turbulence is required for reactor operation and control, but present techniques are either too slow or lack accuracy. The FASTER project aims to circumvent these conflicting constraints of accuracy and tractability and to provide real-time-capable turbulent transport models with increased physics fidelity for predicting tokamak temperature, density, and rotation velocity through the use of machine learning techniques. In recent years, a new type of neural-network (NN) based quasilinear turbulent transport model has been developed for the simulation of fusion plasmas, giving increasingly promising and fast results and allowing its use in integrated simulations [1][2]. These surrogate models are obtained by training NNs on large datasets of simulations generated with reduced quasilinear codes like QuaLikiz [3] or TGLF [4]. While extremely powerful, this technique limits the accuracy of the surrogate model to that of the original one. One way to further improve the capabilities of NN-based quasilinear models is to train them on datasets generated with higher-fidelity codes. For instance, the linear response of state-of-the-art gyrokinetic flux-tube codes such as GKW [5] or GENE [6] could be used. Thanks to the growth of HPC resources, the generation of a dataset of a few million linear gyrokinetic simulations is now within the reach of a single research group. The size of the dataset can be further increased by mobilizing the community and collecting gyrokinetic simulations performed worldwide. To this end, we have extended the IMAS data model to include a unified standard for the inputs and outputs of gyrokinetic simulations. This standard is used to store gyrokinetic simulation results from different codes in a common database: the GyroKinetic DataBase (GKDB).
The GKDB is designed to be a repository of open-source simulation data, a platform for code benchmarking, and a springboard for the development of fast and accurate turbulent transport models. The project is hosted and documented on GitLab (https://gitlab.com/gkdb/). Thanks to the unified data model used for the database, quasilinear as well as linear and nonlinear simulations can be stored with compatible inputs and outputs. This offers the possibility of building fast quasilinear models by training neural networks on the linear simulation data and of testing their robustness against the nonlinear simulation data. Moreover, code comparison is always challenging due to the different normalizations and conventions used. The IMAS “gyrokinetics” standard greatly facilitates the benchmarking of codes (δf flux-tube gyrokinetic simulations and/or quasilinear models) against each other. We will give an overview of the FASTER project and present some proofs of concept of database usage, including data access and visualization.
References:
[1] K.L. van de Plassche et al., Phys. Plasmas 27, 022310 (2020)
[2] O. Meneghini et al., Nucl. Fusion 61, 026006 (2020)
[3] J. Citrin et al., Plasma Phys. Control. Fusion 59, 124005 (2017) – www.qualikiz.com
[4] G.M. Staebler et al., Phys. Plasmas 14, 055909 (2007)
[5] A.G. Peeters et al., Comput. Phys. Comm. 180, 2650 (2009) and GKW website
[6] F. Jenko et al., Phys. Plasmas 7, 1904 (2000) and GENE website
Comparative Study of U-Net Architectures and Count Loss Functions for Phage Counting in TIRF Microscopy by Chandrasekar Subramani Narayana (LIS, LAI)
This study explores the binding interactions between antibodies and antigens using M13 bacteriophage as a vector for antibody presentation on antigen-coated surfaces. The filamentous M13 phage, approximately 1 µm in length and 5 nm in diameter, displays antibodies on its head. To visualize these interactions, the phages are conjugated with a fluorescent dye and observed through Total Internal Reflection Fluorescence (TIRF) Microscopy. This technique provides high-resolution imaging near the surface (~100 nm), allowing precise monitoring of binding events. We investigated binding dynamics under conditions free from external forces in a solution environment, focusing on how the phages bind to antigens. This required accurate counting of phages. To address this, we developed an encoder-decoder network based on U-Net, trained with count loss. We explored the significance of temporal information in image stacks to improve phage identification by using variants of U-Net: 2D U-Net (input from a single frame), 2.5D U-Net (mean image of stacks), and 3D U-Net (full image stacks). The investigation showed that count loss outperformed density-based localization loss, especially in 3D U-Net, which captured both spatial and temporal dynamics. We also addressed the concept shift typically observed due to variations in data distribution used for model training. To mitigate this, we generated synthetic data by manipulating various components contributing to the construction of fluorescence microscopy images. This approach allowed us to train the model and make predictions on unseen data, ultimately leading to better capture of kinetic curves, as well as the association and dissociation events of antibodies. These findings have broader implications for diagnostics and therapeutics.
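The contrast between a count loss and a density-based localization loss can be sketched in a few lines (a hypothetical numpy illustration, not the authors' implementation): the count loss compares only the integral of the predicted density map to the true number of phages, so small localization shifts are not penalized:

```python
import numpy as np

def density_loss(pred_density, true_density):
    """Pixel-wise MSE against a ground-truth density map (localization-based)."""
    return np.mean((pred_density - true_density) ** 2)

def count_loss(pred_density, true_count):
    """Penalize only the integrated count: |sum(density) - N|."""
    return abs(pred_density.sum() - true_count)

# Hypothetical example: a prediction that places mass at slightly shifted
# positions still gets the total count exactly right.
true_density = np.zeros((8, 8)); true_density[2, 2] = 1.0; true_density[5, 6] = 1.0
pred_density = np.zeros((8, 8)); pred_density[2, 3] = 1.0; pred_density[5, 5] = 1.0

print(density_loss(pred_density, true_density))  # positive: shifts are penalized
print(count_loss(pred_density, true_count=2))    # zero: the count is exact
```

In the study, such a count-level objective is what trains the U-Net variants; the 3D variant additionally exploits the temporal axis of the image stacks.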
Exploring cellular dynamics in predation via panoptic segmentation, by Florian Saby (LCB, IRPHE)
This talk explores the use of panoptic segmentation as a method for species identification to enhance the analysis of collective behavior in different patterns. By integrating panoptic segmentation, which enables the simultaneous detection of individual instances and semantic categories, we aim to improve the accuracy of species recognition in complex environments. This approach provides valuable insights into how species exhibit collective behaviors under varying patterns, contributing to a deeper understanding of interaction dynamics within biological systems. The results demonstrate the potential of computer vision techniques to advance studies in behavioral ecology.
14:15: Invited talk
Multi-level Neural Networks for Accurate Solutions of Boundary-Value Problems by Régis Cottereau (LMA, AMU)
The solution of partial differential equations using deep learning approaches has shown promising results for several classes of initial- and boundary-value problems. However, their ability to surpass classical discretization methods such as the finite element method, particularly in terms of accuracy, remains a significant challenge. Deep learning methods usually struggle to reliably decrease the error in their approximate solution. A new methodology to better control the error for deep learning methods is presented here. The main idea consists in computing an initial approximation to the problem using a simple neural network, and in estimating, in an iterative manner, a correction by solving the problem for the residual error with a new network of increasing complexity. This sequential reduction of the residual of the partial differential equation allows one to decrease the solution error, which, in some cases, can be reduced to machine precision. The underlying explanation is that at each level the method captures smaller scales of the solution using a new network. Numerical examples in 1D and 2D dealing with linear and non-linear problems are presented to demonstrate the effectiveness of the proposed approach.
Joint work with Ziad Aldirany, Marc Laforest and Serge Prudhomme
[related paper] [slides]
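The level-by-level residual correction can be mimicked with a deliberately simple stand-in for the networks (random-feature least squares; an assumed toy setup, not the authors' code): a coarse model fits the large scale of a two-scale target, then a richer model fits the residual and captures the small scale:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
u = np.sin(2 * np.pi * x) + 0.1 * np.sin(20 * np.pi * x)  # two-scale target

def fit_random_features(x, y, n_features, scale):
    """Least-squares fit on random cosine features (a stand-in for a network)."""
    w = rng.normal(0, scale, n_features)          # random frequencies
    b = rng.uniform(0, 2 * np.pi, n_features)     # random phases
    phi = np.cos(np.outer(x, w) + b)
    coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return phi @ coef

# Level 0: a coarse model captures the large scale of the solution.
u0 = fit_random_features(x, u, n_features=10, scale=5.0)
r0 = u - u0
# Level 1: a richer model fits the *residual*, capturing the small scale.
u1 = fit_random_features(x, r0, n_features=200, scale=80.0)

err0 = np.max(np.abs(u - u0))
err1 = np.max(np.abs(u - (u0 + u1)))
print(err0, err1)  # the corrected approximation is more accurate
```

The talk's method applies this idea to the PDE residual itself, with genuine networks at each level; this sketch only illustrates why fitting residuals level by level drives the error down.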
14:55: Oral session
Reinforcement learning for bio-inspired navigation in turbulent flows by Aurore Loisy (IRPHE)
Navigation is about optimising a route to go from A to B. Animal and robotic navigation however fundamentally differ from the routing of planes or ships, because it is autonomous: the self-propelled “agent” has only access to information from its own sensors to make decisions. This type of problem is well-suited for reinforcement learning, a branch of artificial intelligence which has gained popularity by beating human players at games. In this talk I will show how we can leverage modern (deep) reinforcement learning techniques to solve two navigation problems inspired by the animal world: the vertical migration of plankton through the water column and the search for an odor source by insects.
Diffusion Models for Rapid Generation of Granular Media Samples by Muhammad Moeeze Hassan (LMA, SNCF)
Discrete Element Methods (DEM) are crucial to understanding the complex dynamics of granular media such as sand, rocks and powders. Railway ballast also behaves as a granular medium, and is used on train tracks for various purposes, most notably load distribution and vibration damping. For an accurate understanding of these behaviors, DEM simulations tend to be the most accurate methods, with their inherent ability to model individual particles and their neighborhood interactions. However, these simulations are computationally prohibitive in traditional continuum mechanics pipelines. The computation time scales up significantly with the number of particles, since such methods are difficult to parallelize. Furthermore, the required simulations usually follow a sample setup phase before the actual simulation, in which the sample on which the simulation is to be run is generated in an appropriate manner. In the case of a railway track, this means generating ballast particles in the air, letting them fall to the ground, compacting them, and then running the actual simulation. This setup phase is almost always a significant part of the simulation, and it quickly becomes prohibitive when DEM is used to generate a large database, say for an AI model or for developing an inverse method. This project proposes a method to rapidly generate large 2D samples of granular media, concretely the cross sections of railway ballast layers. The method uses diffusion models that learn to generate realistic samples from a pre-existing small DEM database. The models follow a Markovian diffusion process, in which noise is sequentially removed from samples of a noise distribution through learnt neural network transformations, ultimately generating new samples. Commonly, these transformations include transformers, which excel at capturing spatial patterns and fine details in samples.
The method first creates small patches of these ballast cross sections from trained diffusion models and spreads them on a checkerboard grid, where they are used for conditional generation of the adjacent areas, ensuring spatial consistency. The method can hence generate longer samples by generating patches conditioned on their neighbors. We also discovered that Stable Diffusion can generate the conditional patches once the unconditioned patches are available, without any further fine-tuning, significantly reducing the expected training computation resources. Segmentation is done using the watershed algorithm to convert RGB images into an array suitable as input to DEM software. In this way, the method can generate large samples of granular media in a fraction of the time it usually takes DEM to produce a sample of the same size. The method is tested on the DEM database of railway ballast, and a statistical study is done to ensure the realism of the generated samples.
[slides]
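The checkerboard scheduling described above can be made concrete with a small sketch (an assumed two-pass ordering for illustration, not the authors' exact pipeline): patches on one parity of the grid are generated unconditionally first, then the other parity is generated conditioned on its already-available neighbors:

```python
# Assumed scheme: patches at (i + j) even form the first, unconditional pass;
# patches at (i + j) odd form the second pass, each conditioned on the
# neighboring first-pass patches, which enforces spatial consistency.
def generation_passes(rows, cols):
    first = [(i, j) for i in range(rows) for j in range(cols) if (i + j) % 2 == 0]
    second = [(i, j) for i in range(rows) for j in range(cols) if (i + j) % 2 == 1]
    return first, second

unconditional, conditional = generation_passes(2, 3)
print(unconditional)  # [(0, 0), (0, 2), (1, 1)]
print(conditional)    # [(0, 1), (1, 0), (1, 2)]
# Every second-pass patch is adjacent to at least one first-pass patch,
# so it always has generated context to condition on.
```

Larger samples follow by extending the grid: only the patch generator's context window, not the sample size, bounds the conditional step.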
Artificial Intelligence applied to Plasma Spectroscopy by Mohammed Koubiti (PIIM)
Artificial Intelligence (AI) occupies an important place in the field of plasma science [1], including plasma physics [2-4]. In this context, recent years have seen increased activity related to the use of machine learning, particularly in the field of plasma spectroscopy. Combining machine or deep learning methods with plasma spectroscopy serves various purposes, including real-time inference of plasma dynamics in magnetic fusion devices [5] or the prediction of plasma parameters [6]. In this talk, we discuss the various applications of machine-learning and deep-learning algorithms to spectroscopic data in plasmas, with a focus on magnetic fusion plasmas without excluding other types of plasmas. We also present our own work on the use of neural networks such as Convolutional Neural Networks (CNNs), applied to theoretical Balmer-alpha line spectra emitted by hydrogen isotopes, for the prediction of the hydrogen isotopic ratio in tokamak plasmas [7-8].
[slides]
References
[1] R. Anirudh et al., IEEE Transactions on Plasma Science, 51, 1750-1838 (2023).
[2] C. M. Samuell et al, Rev. Sci. Instrum., 92, 043520 (2021).
[3] B. Dorland, Machine Learning for Plasma Physics and Fusion Energy, J. Plasma Phys. (2022).
[4] Machine Learning Methods in Plasma Physics, Contrib. Plasma Phys. 63, Issues 5-6 (2023).
[5] L. Malhotra et al, 4th IAEA Techn. Meeting on Fusion Data Processing, Validation and analysis (2021).
[6] D. Nishijima et al, Rev. Sci. Instrum., 92 023505 (2021).
[7] M. Koubiti, EPJD, 77, 137 (2023).
[8] N. Saura, M. Koubiti, S. Benkadda, Study of line spectra emitted by hydrogen isotopes in tokamaks through Deep-Learning algorithms, submitted to Nuclear Materials and Energy
15:55: Break
16:15: Oral session
Comparative study of transformer robustness for multiple particle tracking without clutter by Piyush Mishra (I2M, Inst. Fresnel, Turing Centre for Living Systems)
The tracking of multiple particles in lengthy image sequences is challenged by the stochastic nature of displacements, particle detection errors, and the combinatorial explosion of all possible trajectories. As such, extensive work has focused on the modeling of noisy trajectories to try to predict the most likely trajectory-to-measurement associations. Recently, transformers have been shown to significantly accelerate the evaluation of probabilistic models for the system dynamics and the detection clutter generated from false positives. However, little work has focused on clutter-free scenarios with multiple particles moving erratically, where the challenge resides not in the model complexity but in the combinatorial burden of considering all possible trajectory-to-measurement associations. This is a common occurrence in fluorescence microscopy at low framerates. This work offers a proof-of-concept study of the benefit of the transformer architecture in such scenarios through the simulation of two-particle systems undergoing Brownian motion. Specifically, we designed a transformer for this estimation task and compared it with the Multiple Hypothesis Tracker (MHT), the optimal estimator when all trajectory-to-measurement associations can be computed. We first show increased robustness of the transformer against erratic displacements over long sequences, with significantly lower computational complexity than MHT. Then, we show that while the transformer requires very little training to significantly outperform MHT on long sequences, it cannot match the theoretically optimal performance of MHT on short sequences even with extensive training. Hence, our work motivates the broader application of transformers in high-SNR sequences and opens the way to the development of frugal methods through the combination of statistical and neural network frameworks for particle tracking.
[related paper] [slides]
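The combinatorial burden the abstract refers to can be quantified in the simplest clutter-free setting (an illustrative count, assuming exactly one measurement per particle per frame and no missed detections): each frame contributes a factor of n! possible particle-to-measurement assignments, so the number of full-sequence association hypotheses grows exponentially with sequence length:

```python
from math import factorial

def n_association_hypotheses(n_particles, n_frames):
    """Hypothesis count when each frame yields exactly one measurement per
    particle (no clutter, no missed detections): n_particles! permutations
    per frame, multiplied over frames."""
    return factorial(n_particles) ** n_frames

# For two-particle systems as in the study, hypotheses double every frame.
for t in (5, 10, 20):
    print(t, n_association_hypotheses(2, t))  # 32, 1024, 1048576
```

This is why an exact enumerator like MHT becomes intractable on long sequences even without clutter, while a learned tracker amortizes the cost.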
On Physics-Informed Neural Networks and DeepONet for Vascular Flow Simulations in Aortic Aneurysms by Oscar L. Cruz-Gonzalez (IRPHE)
This study explores the application of Physics-Informed Neural Networks (PINNs) and Deep Operator Networks (DeepONet) in predicting vascular flow simulations in the context of a 3D Abdominal Aortic Aneurysm (AAA) idealized model. Traditional computational methods for simulating vascular flow are often time-consuming and computationally expensive. To address these challenges, we dive into new data assimilation algorithms that offer possible alternatives. PINNs integrate the physical laws governing fluid dynamics within a neural network framework, allowing for accurate simulations. DeepONet, a novel operator learning approach, further enhances this capability by effectively capturing the underlying operator structure of the problem. We validate our models against CFD simulations for benchmark datasets and demonstrate good agreement between the results. The proposed methodology serves as a starting point for future research in the application of Deep Learning in cardiovascular disease.
[slides]
Inferring rheological properties from PIV using Physics-Informed Neural Networks, by Martin Lardy (IRPHE/IBDM)
Embryonic tissues are striking examples of complex fluids that self-organize into highly organized structures. In vitro tissue models, including biomimetic tissues and organoids, serve as invaluable experimental platforms for exploring the intricate interplay between cellular deformations, intercellular interactions, and tissue rheology. These models can be subjected to mechanical perturbations, such as pipette or microfluidic aspirations, enabling detailed investigations into their rheological properties. This approach has been fruitful in unraveling behaviors analogous to active viscoelastic materials (e.g., the emergence of active stresses) or complex amorphous cellular materials (e.g., non-Newtonian properties). Nonetheless, deciphering how spatial variation of gene expression within tissues influences their rheological properties remains a difficult challenge. Here, we explore the inference of rheological laws in heterogeneous tissue flow scenarios, such as those encountered in organoid microfluidic aspirations. We propose an approach using a Physics-Informed Neural Network (PINN) algorithm, in which a fully connected feedforward network represents the velocity field. The algorithm optimizes the network parameters to fit the output field to the provided experimental/numerical data, while simultaneously optimizing rheological parameters so as to satisfy prescribed physical equations for the system. This talk will show our machine learning algorithm in action on numerical simulation data. We will demonstrate its ability to handle various scenarios by applying it to different geometries, to two distinct rheological models (Herschel-Bulkley and Carreau), and to noisy situations.
Reference: Cai, S., Mao, Z., Wang, Z., Yin, M., & Karniadakis, G. E. (2021). Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mechanica Sinica, 37(12), 1727-1738.
[slides]
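To give a flavour of the parameter-inference step (a drastically reduced sketch: an assumed analytical power-law channel-flow profile and a grid search stand in for the PINN's network and PDE residual):

```python
import numpy as np

# Assumed setup: planar channel flow of a power-law fluid, with normalized
# velocity profile u(y) = 1 - |y|**((n + 1) / n) and flow index n as the
# unknown rheological parameter. Synthetic "PIV" data play the role of the
# experimental velocity field the PINN would be fitted to.
rng = np.random.default_rng(2)
y = np.linspace(-1, 1, 101)
n_true = 0.5
u_data = (1 - np.abs(y) ** ((n_true + 1) / n_true)) + rng.normal(0, 0.01, y.size)

def data_loss(n):
    """Misfit between the model profile for flow index n and the noisy data."""
    u_model = 1 - np.abs(y) ** ((n + 1) / n)
    return np.mean((u_model - u_data) ** 2)

# Grid search over the rheological parameter (the PINN instead descends a
# combined data + physics-residual loss over network and material parameters).
candidates = np.linspace(0.2, 1.5, 131)
n_hat = candidates[np.argmin([data_loss(n) for n in candidates])]
print(f"recovered flow index n close to the true value {n_true}: {n_hat:.2f}")
```

The actual method replaces the closed-form profile with a network constrained by the momentum balance of the chosen rheological model, which is what allows heterogeneous geometries.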
17:15: Round table discussion
On the issues, challenges and structuring of AI and Physical Sciences within AMU
18:00: End of the workshop
Venue
The workshop will take place on the Saint-Charles Campus in Marseille, in the Turbulence Building.
From the campus entrance, go left behind the University Library, skirting it to the left along the railings; see map here.
Registration
News (November 4): the room’s capacity (74) has been reached. New registrations will be placed on a waiting list, and attendance is subject to the availability of seats freed by cancellations. An email will be sent to newly registered people on November 11 about the status of their attendance.
Registration is free and mandatory here.
Call for contributions
The call for contributions is closed.
Any permanent or non-permanent researcher is welcome to present published or ongoing work related to AI and Physical Sciences. Proposals will be scheduled as oral presentations or posters (remote presentations are not allowed).
Proposals should be submitted here before Friday, October 11, including a title, an abstract and possible related publications. English is preferred (at least slides/posters in English).
Organizing committee
- Mario Barbatti (ICR)
- Hachem Kadri, Thierry Artières and Valentin Emiya (LIS)
- Sandrine Anthoine and Christophe Gomez (I2M)
Contact: firstname.lastname@univ-amu.fr