Category: Publishing

What is the significance of a discovery or a result? How does it emerge? Learn more about the publications of FGSE researchers.

FGSE publications are also deposited in SERVAL.

  • Why did some ancient animals fossilize while others vanished?

    Cretaceous fossil shrimp from Jbel Oum Tkout, Morocco registered at the Museum d’histoire naturelle de Marrakech (© Sinéad Lynch – UNIL).

    Why do some ancient animals become fossils while others disappear without a trace? A new study from the University of Lausanne, published in Nature Communications, reveals that part of the answer lies in the body itself. The research shows that an animal’s size and chemical makeup can play an important role in determining whether it’s preserved for millions of years—or lost to time.

    Fossils are more than just bones; some of the most remarkable finds include traces of soft tissues like muscles, guts, and even brains. These rare fossils offer vivid glimpses into the past, but scientists have long puzzled over why such preservation happens only for certain animals and organs but not others.

    To dig into this mystery, a team of scientists from the University of Lausanne (UNIL) in Switzerland turned to the lab. They conducted state-of-the-art decay experiments, allowing a range of animals including shrimps, snails, starfish, and planarians (worms) to decompose under precisely controlled conditions. As the bodies broke down, the researchers used micro-sensors to monitor the surrounding chemical environment, particularly the balance between oxygen-rich (oxidizing) and oxygen-poor (reducing) conditions.

The results were striking. The researchers discovered that larger animals and those with a higher protein content tend to create reducing (oxygen-poor) conditions more rapidly. These conditions are crucial for fossilization because they slow down decay and trigger chemical reactions such as mineralization, in which tissues are replaced by more durable minerals.

“This means that, in nature, two animals buried side by side could have vastly different fates as fossils, simply because of differences in size or body chemistry,” says Nora Corthésy, PhD student at UNIL and lead author of the study. “One might vanish entirely, while the other could be immortalized in stone,” adds Farid Saleh, Swiss National Science Foundation Ambizione Fellow at UNIL and senior author of the paper. According to the study, animals such as large arthropods are more likely to be preserved than small planarians or other aquatic worms. “This could explain why fossil communities dating from the Cambrian and Ordovician periods (around 500 million years ago) are dominated by arthropods,” notes Corthésy.

These findings not only help explain the patchy nature of the fossil record but also offer valuable insight into the chemical processes that shape the ancient life we can reconstruct today. Pinpointing the factors that drive soft-tissue fossilization brings us closer to understanding how exceptional fossils form—and why we only see fragments of the past.

    Source

    N. Corthésy, J. B. Antcliffe, and F. Saleh, Taxon-specific redox conditions control fossilisation pathways, Nature Communications, 2025

Research funding

SNSF Ambizione Grant (PZ00P2_209102)


Questions to Nora Corthésy,
lead author of the study at UNIL

    Why did you choose shrimps, snails and starfish to conduct your study?

    These present-day animals were the best representatives of extinct animals we had in the lab. From a phylogenetic (relationship between species) and compositional point of view, they are close to certain animals of the past. The composition of the cuticles and appendages of modern shrimps, for example, is more or less similar to that of ancient arthropods.

    How can we know that animals lived, then disappeared without a trace, if we have no evidence of this?

When studying preservation in the laboratory, it becomes possible to distinguish between ecological and preservational absences in the fossil record. If an animal decays rapidly, its absence is likely due to poor preservation. If it decays slowly, its absence is more likely to be ecological, that is, a true absence from the original ecosystem. Our study shows that larger, protein-rich organisms are more likely to be preserved and turned into fossils. We can therefore hypothesize that smaller, less protein-rich organisms, which have very little chance of lowering the redox potential around them, may not have been fossilized for preservational reasons. It is therefore possible that some organisms could never have been preserved, and that we may never, or only with great difficulty, be able to observe them. Nevertheless, all of this remains hypothetical, as we are unable to travel back millions of years to confirm exactly what lived in these ancient ecosystems.

    What about the external conditions in which fossils are formed, such as climate?

The effect of these conditions is very complicated to assess, since it is nearly impossible to replicate ancient climatic conditions in the laboratory. Nevertheless, we know that certain sediments can facilitate the preservation of organic matter, giving clues as to which deposits are the most favorable for finding fossils. We also know that factors such as salinity and temperature play a role in preservation. For example, high salinity can increase an organism’s preservation potential, as large amounts of salt slow down decay in a similar way to low temperatures. Our study focuses solely on the effect of organic matter and organism size on redox conditions around a carcass. It is therefore one indicator among others, and much remains to be done to understand the impact of various natural conditions on fossil preservation.

  • A classification of drugs based on their environmental impact

Scientists at UNIL and Unisanté classified 35 commonly used drugs in Switzerland based on their impact on aquatic biodiversity. The aim of this research is to provide medical staff with a tool for considering the environmental risks associated with certain common drugs when prescribing them. The proposed list is subject to change as new data become available, data scarcity being a limiting factor for classification.

Every day in Western countries, thousands of drugs are consumed, whether to relieve pain, regulate blood pressure or treat infections. But what happens after these products are ingested? Excreted in urine, many substances end up in wastewater. Treatment systems only partially eliminate them, and the remainder ends up in lakes, rivers and streams, posing a risk to aquatic ecosystems. This risk is now recognized, but it is difficult for doctors to know how to integrate it into their practice.

At the University of Lausanne (UNIL), scientists from the Faculty of Biology and Medicine (FBM) and the Faculty of Geosciences and Environment (FGSE) have carried out an unprecedented classification of widely used drugs according to their ecotoxicity, i.e. their danger to the aquatic ecosystem. Published in the International Journal of Environmental Research and Public Health, the study reveals that drugs commonly prescribed in general medicine – to combat inflammation or infection, for example – have significant consequences for the health of fish, algae and bacteria essential to aquatic biodiversity.

    Painkillers and antibiotics among the most problematic

    The researchers classified 35 drugs commonly consumed in Switzerland into categories ranging from low to high toxicity for aquatic ecosystems. To do this, they cross-referenced three pieces of information: the 50 most widely sold drugs in Switzerland (by weight), those for which ecotoxicity thresholds exist, and the concentration of those found in the rivers of Vaud and Lake Geneva (in the form of active ingredient).
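The cross-referencing step can be pictured as a simple set intersection: only substances appearing in all three datasets can be classified. A minimal sketch with made-up mini-lists (the names below stand in for the real Swiss sales, ecotoxicity-threshold and river-monitoring data, which are far larger):

```python
# Hypothetical mini-datasets standing in for the three real sources:
# top-selling drugs in Switzerland, substances with published ecotoxicity
# thresholds, and substances detected in Vaud rivers and Lake Geneva.
top_selling = {"diclofenac", "paracetamol", "metformin", "ibuprofen"}
has_ecotox_threshold = {"diclofenac", "paracetamol", "ibuprofen", "ciprofloxacin"}
detected_in_rivers = {"diclofenac", "ibuprofen", "paracetamol"}

# Only substances present in all three datasets can be classified.
classifiable = top_selling & has_ecotox_threshold & detected_in_rivers
print(sorted(classifiable))
```

Any substance missing an ecotoxicity threshold drops out at this step, which is why data scarcity limits the classification to 35 drugs.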

Among the most problematic drugs are common painkillers and anti-inflammatories such as diclofenac, which is toxic to the fish liver and can lead to fish death. There are also antibiotics such as ciprofloxacin, which can eliminate bacteria useful to the ecosystem’s balance and encourage the emergence of antibiotic-resistant bacteria. Paracetamol, on the other hand, falls in the category with the lowest environmental risks.

    One health: for people and the planet

“This classification is far from complete, because of the lack of data. It does, however, give some initial indications for practitioners,” comments Nathalie Chèvre, ecotoxicologist at the FGSE and co-director of the study. “Of the 2,000 or so drugs on the European market, we have only classified 35. This is a good start, but more ecotoxicity thresholds need to be established and accepted to enable us to continue this kind of analysis,” adds Tiphaine Charmillot, a researcher at the FBM and Unisanté and first author of the article.

In Switzerland, new treatment stages are being introduced at wastewater treatment plants (WWTPs), with promising results. “However, they are costly both economically and ecologically,” says Nathalie Chèvre. “Nor do they solve the problem of poor connections and wet-weather discharges. So it’s always preferable to fight at source.”

    In the meantime, the scientists hope that this approach, which represents a first step, will encourage the integration of environmental considerations into therapeutic choices, as is already advocated within the framework of various initiatives such as “smarter medicine – Choosing Wisely Switzerland ”. The idea is to control the environmental impact of healthcare professionals’ practices, while offering the best possible quality of care.

    In practice, this could mean using this classification to prioritize the least harmful option when prescribing medication, in cases where two treatments have the same therapeutic efficacy – for example, favoring the use of mefenamic acid over diclofenac for the treatment of pain; avoiding unnecessary prescriptions, such as antibiotics for non-bacterial infections (e.g., colds); and finally, proposing non-pharmacological approaches where possible (treatment of chronic pain by physiotherapy or behavioral therapy; treatment of mild depression by phytotherapy, etc.). 

    “The concept of health should encompass human health, the health of all living things and the health of the natural environment. Eco-responsible medicine also benefits patients directly, by avoiding over-medication, but also indirectly, by promoting a healthier environment, which is essential for well-being”.

    Nicolas Senn, researcher at FBM and Unisanté, and co-director of the study.

    Source

    T. Charmillot, N. Chèvre, N. Senn, Developing an Ecotoxicological Classification for Frequently Used Drugs in Primary Care , International Journal of Environmental Research and Public Health, 2025.

Drug classification¹

High to very high environmental risk for aquatic life and ecosystems

• Antibiotics (clarithromycin, azithromycin, ciprofloxacin, sulfamethoxazole)
• Painkiller, anti-inflammatory (diclofenac, ibuprofen)
• Antiepileptic, mood stabilizer (carbamazepine)
• Iodinated contrast agents (iopromide, iomeprol)

Moderate environmental risk for aquatic life and ecosystems

    • Antibiotics (clindamycin, erythromycin, metronidazole, trimethoprim)
    • Antidepressant (venlafaxine)
    • Painkiller, anti-inflammatory (ketoprofen, mefenamic acid, naproxen)
    • Beta-blocker (metoprolol, propranolol, sotalol)

Low to very low environmental risk for aquatic life and ecosystems

• Antibiotics (ofloxacin, sulfadiazine)
• Antidepressant (amisulpride, citalopram, mirtazapine)
• Antidiabetic (metformin)
• Painkiller (paracetamol, tramadol)
• Antiepileptic (gabapentin, lamotrigine, primidone)
• Anti-hypertensive (candesartan, irbesartan)
• Beta-blocker (atenolol)
• Diuretic (hydrochlorothiazide)

1. The word “drug” is used here to refer to the active ingredients of the drug.
  • AI enables a major innovation in glacier modelling and offers groundbreaking simulation of the last Alpine glaciation

    Tancrède Leger, Institut des dynamiques de la surface terrestre

    Scientists at the University of Lausanne (UNIL) have used AI for the first time to massively speed up computer calculations and simulate the last ice cover in the Alps. Much more in line with field observations, the new results show that the ice was thinner than in previous models.

    This innovative method opens the door to countless new simulations and predictions linked to climate upheavals. The research is published in Nature Communications.

Some 25,000 years ago, the Alps were covered by a layer of ice up to 2 kilometers thick. For almost 15 years, this glaciation has been reconstructed with 3D numerical models based on climate reconstructions, thermodynamics and ice physics. However, these models have sparked debate in the scientific community, as until now there has not been full correspondence between the simulations and the physical traces – rocks, moraines, etc. – found in the field, particularly erosion lines, which bear witness to past ice thicknesses.

A team of scientists from the University of Lausanne (UNIL) has just solved this persistent problem. For the first time, they have used artificial intelligence to massively boost their new glacial evolution model, generating a large series of simulations of unprecedented accuracy: they correspond much more closely to the physical traces left on the ground. Their results show an average ice cover 35-50% thinner than in previous reference simulations. Model resolution has been increased from two kilometers to 300 meters, and it is only thanks to this precision that the complex topography of the Alps can be described numerically.

In line with the current state of scientific knowledge based on field observations, the new model shows, for example, that certain peaks such as the Matterhorn and the Grand Muveran clearly protruded from the ice.

The research is significant in more ways than one. Firstly, the ability to correctly model the glacial past is essential to understanding our environment. For over 2 million years, the Earth has experienced alternating glacial and warm cycles, which have profoundly shaped the landscape in which we live. The new models now correspond much more closely to the evidence left on the ground following the retreat of the glaciers, and make it possible to better quantify many natural phenomena, such as glacial erosion, which has largely contributed to sculpting the relief of the Alps.

Secondly, the innovative methodology used in this research marks a new era in numerical modelling. “By using recent technology, and applying it to the last major glaciation in the Alps, we can finalize a 17,000-year simulation at very high resolution (300 m) in 2.5 days, whereas such spatial resolution would have taken 2.5 years to calculate using traditional methods, which are also extremely costly and energy-intensive,” explains Tancrède Leger, researcher at UNIL’s Faculty of Geosciences and Environment (FGSE) and first author of the study.

    With this approach, the model first learns about the physics of ice flow, using Deep Learning methods. It then receives data on the climate of the period (temperature, precipitation, etc.), to calculate ice supply and melt.

    Deep learning calculations are then performed not by the traditional central processing unit (CPU), but via a GPU (or graphics processing unit), which enables numerous operations to be performed in parallel, boosting the computer’s computing power phenomenally.
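The kind of computation that benefits is grid-based: each step of an ice-flow model updates every cell of the terrain grid at once, a pattern that maps naturally onto the thousands of parallel units of a GPU. A toy illustration of such a stencil update (plain NumPy, purely illustrative and not the UNIL model):

```python
import numpy as np

# Toy ice-thickness grid in metres; at 300 m resolution the real Alpine
# domain contains millions of such cells, all updated simultaneously.
H = np.zeros((6, 6))
H[2:4, 2:4] = 100.0  # an initial ice patch

def relax(H, k=0.2):
    """One diffusion-like smoothing step: every interior cell moves toward
    the mean of its four neighbours. The whole grid is updated in a single
    vectorized operation, the access pattern GPUs parallelize so well."""
    out = H.copy()
    out[1:-1, 1:-1] += k * (H[:-2, 1:-1] + H[2:, 1:-1]
                            + H[1:-1, :-2] + H[1:-1, 2:] - 4.0 * H[1:-1, 1:-1])
    return out

H1 = relax(H)
print(H1[2, 2], H1[1, 2])  # the patch thins while its surroundings thicken
```

On a GPU, frameworks such as JAX or PyTorch run exactly this kind of whole-array update across thousands of cores at once, which is where the reported speed-up comes from.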

    “It’s as if we once had six Ferraris at our disposal, and now we have ten thousand small cars. We’ve gone from very large machine clusters to a simple 30 cm graphics card. We’re not doing anything new, but we’re doing it a thousand times faster, making it possible to achieve resolutions that were not even considered before”.

Guillaume Jouvet, FGSE professor behind the AI model and co-first author of the study.

    This progress will enable new research to be launched. In particular, a new SNSF-funded project is about to get underway to use this revolutionary method to better predict the impact of the melting Greenland and Antarctic ice sheets on global sea level rise.



  • First traces of water on Mars date back 4.45 billion years

    Designated Northwest Africa (NWA) 7034, and nicknamed Black Beauty, this Martian meteorite weighs approximately 320 g – © NASA
    Jack Gillespie, Institute of Earth Sciences

    By analyzing a Martian meteorite, scientists from the University of Lausanne and Curtin University have discovered traces of water in the crust of Mars dating back 4.45 billion years, i.e. to near the very beginning of the planet’s formation.

    This new information strengthens the hypothesis that the planet may have been habitable at some point in its history.

    Thanks to observations from Mars rovers and spacecraft, we’ve known for decades that the planet Mars was once home to water, and probably had rivers and lakes. However, many questions remain. When did this precious liquid first appear in the history of Mars, and did the Red Planet, in the course of its evolution, create the conditions necessary for the emergence of life?

    By analyzing the composition of a mineral (zircon) found in a Martian meteorite, scientists from the University of Lausanne, Curtin University and the University of Adelaide have succeeded in dating traces of water in the crust of Mars. According to the study, published in Science Advances, hydrothermal activity dates back 4.45 billion years, just 100 million years after the planet’s formation.  

    “Our data suggests the presence of water in the crust of Mars at a comparable time to the earliest evidence for water on Earth’s surface, around 4.4 billion years ago,” comments Jack Gillespie, first author of the study and researcher at the University of Lausanne’s Faculty of Geosciences and Environment. “This discovery provides new evidence for understanding the planetary evolution of Mars, the processes that took place on it and its potential to have harboured life”.

    A Martian meteorite found in the desert

The scientists worked on a small piece of the meteorite NWA 7034 “Black Beauty”, which was discovered in the Sahara Desert in 2011. “Black Beauty” originates from the Martian surface and was ejected toward Earth by an impact on Mars around 5-10 million years ago. The analysis focused on zircon, a mineral contained in the meteorite. Highly resistant, zircon crystals are key tools for dating geological processes: they contain chemical elements that make it possible to reconstruct the date and conditions under which they crystallized (temperature, interaction with fluids, etc.). “Zircon contains traces of uranium, an element that acts as a natural clock,” explains Jack Gillespie. “This element decays to lead over time at a precisely known rate. By comparing the ratio of uranium to lead, we can calculate the age of crystal formation.”
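The uranium-lead clock Gillespie describes follows the standard radioactive decay law: the longer a crystal has existed, the more of its uranium-238 has become lead-206. A minimal sketch of the age calculation (textbook formula and decay constant, not the study’s actual data reduction):

```python
import math

LAMBDA_238 = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age(pb206_u238):
    """Age in years from a measured 206Pb/238U atomic ratio, assuming the
    crystal started lead-free and remained a closed system ever since."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238

# A ratio near 0.994 corresponds to an age of roughly 4.45 billion years.
print(f"{u_pb_age(0.994) / 1e9:.2f} Ga")
```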

    Through nano-scale spectroscopy, the team identified element patterns in this unique zircon, including unusual amounts of iron, aluminium, and sodium. These elements were incorporated as the zircon formed 4.45 billion years ago, suggesting water was present during early Martian magmatic activity.

    These new findings further support the hypothesis that the Red Planet may have once offered conditions favorable to life at some point in its history.

“Hydrothermal systems were essential for the development of life on Earth, and our findings suggest Mars also had water, a key ingredient for habitable environments, during the earliest history of crust formation.”

    Aaron Cavosie from Curtin’s School of Earth and Planetary Sciences, co-author

Lead author Dr Jack Gillespie from the University of Lausanne was a Postdoctoral Research Associate at Curtin’s School of Earth and Planetary Sciences when work began on the study, which was co-authored by researchers from Curtin’s Space Science and Technology Centre, the John de Laeter Centre and the University of Adelaide, with funding from the Australian Research Council, Curtin University, and the Swiss National Science Foundation.

    Source

    J. Gillespie, A. J. Cavosie, D. Fougerouse, C. L. Ciobanu, W. D. A. Rickard, D. W. Saxey, G. K. Benedix, and P. A. Bland, Zircon trace element evidence for early hydrothermal activity on Mars, Science Advances, 2024 (DOI 10.1126/sciadv.adq3694)

  • Discovery of the first ancestors of scorpions, spiders and horseshoe crabs

    Lorenzo Lustri, Institute of Earth Sciences

    Who were the earliest ancestors of scorpions, spiders and horseshoe crabs?

A PhD student from the University of Lausanne (Switzerland), with the support of a CNRS researcher, has identified a fossil that bridges the gap between modern species and those from the Cambrian period (505 million years ago), solving a long-standing paleontological mystery.

Modern scorpions, spiders and horseshoe crabs belong to the vast lineage of arthropods, which appeared on Earth nearly 540 million years ago. More precisely, they belong to a subphylum of organisms equipped with pincers – the chelicerae, hence the name chelicerates – used notably for biting, grasping prey or injecting venom. But what are the ancestors of this very specific group?

    This question has puzzled paleontologists ever since the study of ancient fossils began. It was impossible to identify with certainty any forms among early arthropods that shared enough similarities with modern species to be considered ancestors. The mystery was further compounded by the lack of fossils available for the key period between -505 and -430 million years ago, which would have facilitated genealogical investigation.

    One of the Setapedites abundantis fossils that have been used to trace the origins of spiders, scorpions and horseshoe crabs. © UNIL

Lorenzo Lustri, then a PhD student at the Faculty of Geosciences and Environment of the University of Lausanne (UNIL), provided the missing piece of the puzzle. Together with his supervisors, he studied a hundred fossils dating back 478 million years from the Fezouata Shale of Morocco and identified the candidate that links modern organisms to those of the Cambrian (505 million years ago). The study was published in Nature Communications.

    Fossils from the Fezouata Shale were discovered in the early 2000s and have undergone extensive analysis. However, the fossil illustrated in the publication, one of the most abundant in the deposit, had never been described before. Measuring between 5 and 10 millimeters in size, it has been named Setapedites abundantis. This animal makes it possible, for the first time, to trace the entire lineage of chelicerates, from the appearance of the earliest arthropods to modern spiders, scorpions and horseshoe crabs.

    “Initially, we only intended to describe and name this fossil. We had absolutely no idea that it would hold so many secrets,” confides Lorenzo Lustri, the paper’s first author, who defended his PhD in March 2023. “It was therefore an exhilarating surprise to realize, after careful observations and analysis, that it also filled an important gap in the evolutionary tree of life.”

    Still, the fossil has yet to reveal all its secrets. In fact, some of its anatomical features allow for a deeper understanding of the early evolution of the chelicerate group, and perhaps even link to this group other fossil forms whose affinities remain highly debated.

    A temporary exhibition on the Fezouata biota, in collaboration with UNIL, will soon be held at the Palais de Rumine in Lausanne, Switzerland.

    Method

    Reconstruction of Setapedites abundantis ©Elissa Sorojsrisom

    To obtain these results, the scientists studied a hundred fossils and used an X-ray scanner to reconstruct their anatomy in detail and in 3D. They were then able to draw comparisons with numerous fossil chelicerates from other sites, as well as with their more ancient relatives. Finally, the importance of the Fezouata fossil became clear with the help of phylogenetic analyses, which mathematically reconstruct the family tree of different species based on the “coding” of all their anatomical traits.  
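The trait-coding idea behind such phylogenetic analyses can be illustrated with the classic Fitch parsimony algorithm, which counts the minimum number of state changes a character requires on a candidate tree; trees requiring fewer changes across all characters fit the data better. A toy sketch with an invented 0/1 trait (the tree and coding below are hypothetical, not taken from the study):

```python
def fitch_score(tree, states):
    """Minimum number of state changes one character requires on a fixed
    binary tree (Fitch parsimony). `tree` is a nested tuple of taxon names;
    `states` maps each taxon to its coded character state."""
    changes = 0

    def visit(node):
        nonlocal changes
        if isinstance(node, str):        # leaf: its observed state
            return {states[node]}
        left, right = visit(node[0]), visit(node[1])
        if left & right:                 # children can agree: no change
            return left & right
        changes += 1                     # children disagree: one change
        return left | right

    visit(tree)
    return changes

# Invented 0/1 coding of a single trait for four taxa, for illustration.
tree = (("Setapedites", "horseshoe_crab"), ("scorpion", "spider"))
states = {"Setapedites": 0, "horseshoe_crab": 0, "scorpion": 1, "spider": 1}
print(fitch_score(tree, states))  # this grouping needs only one change
```

Real analyses repeat this scoring over hundreds of anatomical characters and many candidate trees, keeping the most parsimonious arrangement.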

  • Unveiling the sustainability landscape in cultural organizations: A global benchmark

    Julie Grieshaber and Martin Müller, Institute of Geography and Sustainability, authors of the study (© UNIL)

    Are museums, theaters, and opera houses truly walking the talk when it comes to social and environmental sustainability? The University of Lausanne delved into this pressing question, conducting an international survey with over 200 major cultural organizations. The verdict? While there’s significant room for improvement across the spectrum, Anglophone countries lead the charge.

    Cultural organizations, with their wide-reaching influence and power to shape narratives and imaginations, are poised to be trailblazers in championing sustainability causes. Recognizing this pivotal role, researchers from UNIL’s Institute of Geography and Sustainability initiated a comprehensive international survey to assess progress in the realms of social and environmental sustainability.

    This global benchmark survey was answered by 206 leading museums, theaters, and opera houses on every continent. Respondents answered questions on diverse criteria, ranging from the inclusiveness and well-being of employees (social aspects) to waste management, energy consumption, catering practices, and carbon impact (environmental considerations).

Published in Sustainability: Science, Practice and Policy, a leading global journal for sustainability, the results underscore a collective need for improvement: 60% of respondents integrated sustainability into their strategies only within the last five years. On average, cultural organizations obtained just 37 of 100 possible points in the sustainability score, doing better on social sustainability than on environmental sustainability. UNIL professor Martin Müller, who spearheaded the research, notes a gap between declarations and implementation.

    Sustainability champions: a global strategy, a dedicated team and cross-functionality

However, amidst the challenges, the study unveils sustainability champions, 14 in all. A correlation emerges between social and environmental sustainability: those excelling in one area tend also to shine in the other. The top 14 cultural organizations include notable Anglophone institutions such as the National Galleries of Scotland and the Sydney Opera House. The study guaranteed the anonymity of the participating institutions, so only the top performers who gave their explicit consent are named.

    What sets the top-ranking organizations apart is their integration of sustainability into overall strategy and the establishment of dedicated internal groups, so-called green teams, that drive coordinated actions. National contexts and political decisions further influence these endeavors.

“In England, for instance, publicly funded organizations must report on sustainability, adding an extra layer of accountability.”

    Julie Grieshaber, co-author.

    “We’re incredibly proud”, says Anne Lyden, Director General of the National Galleries of Scotland, the most sustainable museum in the study. “We actively support Scotland’s aim to reach net-zero before 2045, cutting our carbon footprint by 60% between 2008 and 2022”, she adds. “We understand how important it is to play our part in making a more sustainable future, not just for Scotland but the world.”

    Louise Herron, CEO of the Sydney Opera House (first-ranked organisation in the study), says: “Sustainability has been part of the Opera House’s DNA since the beginning and over recent years, we’ve been focused on bringing together our efforts to drive social and environmental change, embedding sustainability into our organisational strategy and making it part of everyone’s daily lives. These are urgent challenges that we’re facing, which can only be tackled through coordinated action and as cultural organisations we have a tremendous opportunity to inspire others and bring about change together.”

    Establishing a model to follow

Looking ahead, the UNIL researchers aim to extend their impact. Plans include forging a global alliance of cultural organizations committed to sustainability and introducing a label to structure these efforts effectively. Professor Martin Müller, who has secured substantial funding for a program to promote practical innovation based on scientific research, is poised to be at the forefront of this transformative journey. The future promises not just academic analysis but a concrete path towards a sustainable cultural landscape.

    Survey methodology


    Questionnaires were completed by 206 organizations from all continents. The data was analysed according to a model comprising three areas: governance (commitment, strategy, implementation, transparency); social (integrity, partnerships, urban integration, community, access, diversity & inclusion, employee well-being, learning & inspiration); and environmental (climate, biodiversity, water, waste, energy, mobility & transport, food & beverage, supply chain).

    The organizations included in the survey were selected according to criteria such as their importance to the sector (based on a body of literature), their attractiveness (number of visitors) and the costs invested in their development. The idea was to select deliberately large organizations as the major players in the field.

  • Algorithms made more “robust” by 13 Swiss and U.S. scientists to anticipate the future of the climate using AI     

Machine learning algorithms, which are increasingly used in climate applications, currently face a major problem: they struggle to correctly predict climate regimes for which they were not trained, which generates uncertainties in projections. In a study published in the journal Science Advances, a team of researchers from the University of Lausanne and several American universities shows that transforming the data fed to the algorithms according to well-established physical principles can make them more “robust” for solving climate problems. This method has been successfully tested on three different atmospheric models. The implications of this finding go beyond climate science.

Climate change projection is an exercise in generalization: physical models calibrated against past and present climates are extrapolated into the future. However, current climate models face challenges related to the need to represent processes at scales smaller than the model grid size, which generates uncertainties in projections. Although recent machine learning algorithms offer advantages for improving these process representations, they tend to extrapolate poorly to climate regimes for which they were not trained.

    To overcome these limitations, the research team has proposed an innovative approach called “Climate-invariant Machine Learning”. This approach seeks to merge the physical understanding of climate models with the capabilities of machine learning algorithms to improve consistency, data efficiency, and generalization across diverse climate regimes. The results suggest that this integration of physical knowledge could strengthen the reliability of data-driven climate process models in the future.

    Seven points to better understand the key aspects of the study: 

    1. What climate applications are machine learning algorithms used for?

    Machine learning (ML) algorithms play a key role in enhancing climate models by simulating intricate processes such as storm dynamics, oceanic eddies, and cloud formation, which are costly with traditional methods. They’re pivotal in remote sensing for cloud detection and classification, and for downscaling global climate models to produce detailed local projections that better align with our observational record.

    2. Why have machine learning algorithms struggled with predicting climate change effects?

    Machine learning models, particularly neural networks, excel within the scope of their training data but can falter significantly with data that differ markedly from what they’ve seen before. This discrepancy arises because these models make implicit assumptions that may not hold true under novel climate conditions, leading to potential inaccuracies in projections outside their training regimes.

    3. What atmospheric models did the study focus on?

    The initial focus was on an “ocean world” model, a simplified representation of Earth’s climate system without continents, which helped identify and understand errors in extrapolation. The researchers progressed to more sophisticated, realistic atmospheric models that simulate the Earth’s climate dynamics. Integrating machine learning into these models is a promising venture to improve their realism, particularly for long-term climate projections, thereby contributing significantly to climate change adaptation and mitigation strategies.

    4. What does this mean beyond climate science?

    Beyond climate science, the methodology of the study offers a blueprint for incorporating physical principles into machine learning across various disciplines. By transforming the data using known physical invariances, the researchers can train ML models that work across various physical regimes despite only having been trained in a couple of them. This could work in any scientific domain with known invariances, e.g., in planetary science to create models generalizing across planets or in fluid dynamics to create models generalizing across flow regimes.
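    The idea of transforming inputs with known physics can be made concrete with a standard textbook example (this is a minimal sketch of the general principle, not the study's actual pipeline): specific humidity shifts strongly between cold and warm climates, whereas relative humidity does not, so feeding a model the transformed variable makes it more climate-invariant. The Tetens-style saturation formula below is a standard approximation; the numerical inputs are illustrative.

```python
import math

def saturation_vapor_pressure(temp_k):
    """Tetens-type approximation of saturation vapor pressure over water, in Pa."""
    t_c = temp_k - 273.15
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def specific_to_relative_humidity(q, temp_k, pressure_pa):
    """Map specific humidity q (kg/kg) to relative humidity (0-1).

    Relative humidity is far more "climate-invariant" than q: its
    distribution shifts much less between cold and warm climates,
    so a model trained on it extrapolates better across regimes.
    """
    e = q * pressure_pa / (0.622 + 0.378 * q)  # vapor pressure implied by q
    return e / saturation_vapor_pressure(temp_k)

# Two climates, illustrative numbers: q roughly doubles with +10 K warming,
# but the transformed feature stays in a similar range.
cold = specific_to_relative_humidity(0.005, 283.15, 101325.0)
warm = specific_to_relative_humidity(0.010, 293.15, 101325.0)
```

    A network trained on the raw `q` in the cold climate would see out-of-distribution inputs in the warm one; after the transform, both climates occupy overlapping feature space.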

    5. How could this change climate science research?

    The study’s approach has the potential to advance climate science by enabling more accurate and generalizable process modeling. For example, machine learning models that have been trained on current climate data could, with appropriate physical adjustments, offer reliable projections for future climates. This breakthrough in generalization could lead to advancements in both weather forecasting and long-term climate projections.

    6. What are the next steps for this research?

    The groups which participated in this collaboration are exploring diverse avenues, including enhancing the generalizability of state-of-the-art data-driven weather forecasting models to future climates and incorporating these robust, climate-invariant modules into existing climate models. The goal is to keep pushing these data-driven models beyond their current limits, fostering a culture of rigorous and vigorous testing that could unearth new physical principles that extend beyond current observational capabilities.

    7. What’s the long-term impact of these findings and the potential of AI in climate science?

    The authors of the study anticipate that these findings will foster deeper collaborations between the climate science and artificial intelligence communities. They encourage climate scientists to view data-driven models not as a replacement for traditional methodologies but as an augmentation of them, and AI researchers to develop techniques that are not just data-informed but also domain-aware. On that basis, they anticipate a future where AI contributes significantly to advancing our understanding of climate processes, enhancing our collective ability to respond to the challenges posed by climate change.

    Additional information

    Tom Beucler, Pierre Gentine, Janni Yuval, Ankitesh Gupta, Liran Peng, Jerry Lin, Sungduk Yu, Stephan Rasp, Fiaz Ahmed, Paul A. O’Gorman, J. David Neelin, Nicholas J. Lutsko, Michael Pritchard, “Climate-Invariant Machine Learning”, Science Advances, 2024. [full text PDF]

  • Alpine glaciers will lose at least a third of their volume by 2050, whatever happens

    Alpine glaciers will lose at least a third of their volume by 2050, whatever happens

    The Aletsch glacier in 2009 (© UNIL, Guillaume Jouvet)

    Even if greenhouse gas emissions were to cease altogether, the volume of ice in the European Alps would fall by 34% by 2050. If the trend observed over the last 20 years continues at the same rate, however, almost half the ice volume will be lost, as scientists from UNIL have demonstrated in a new international study.

    By 2050, i.e. in 26 years’ time, we will have lost at least 34% of the volume of ice in the European Alps, even if global warming were to stop completely and immediately. This is the prediction of a new computer model developed by scientists from the Faculty of Geosciences and Environment at the University of Lausanne (UNIL), in collaboration with the University of Grenoble, ETHZ and the University of Zurich. In this scenario, developed using machine-learning algorithms and climate data, warming is stopped in 2022, but glaciers continue to suffer losses due to inertia in the climate system. This most optimistic of predictions is far from a realistic future scenario, however, as greenhouse gas emissions continue to rise worldwide.

    In reality, more than half the volume of ice will disappear

    Another more realistic projection from the study shows that, without drastic changes or measures, if the melting trend of the last 20 years continues, almost half (46%) of the Alps’ ice volume will actually have disappeared by 2050. This figure could even rise to 65%, if we extrapolate the data from the last ten years alone.
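    To see how a past loss rate translates into a 2050 figure, consider a deliberately simplified compounding-trend calculation. This is not the study's model (which uses deep learning and glaciological physics); the loss rate below is invented purely to illustrate the arithmetic of trend extrapolation.

```python
# Deliberately simplified linear-trend illustration (NOT the study's model):
# compound a constant fractional ice loss per year out to 2050.
# The annual rate used here is an invented, illustrative number.

def volume_in_2050(v_now, annual_loss_fraction, years=26):
    """Remaining volume after compounding a constant fractional loss per year."""
    return v_now * (1 - annual_loss_fraction) ** years

v0 = 100.0                                # arbitrary units of ice volume today
remaining = volume_in_2050(v0, 0.023)     # assume ~2.3 %/yr loss (illustrative)
lost_pct = 100 * (1 - remaining / v0)     # fraction of today's ice gone by 2050
```

    The point of the exercise: even a modest-sounding annual rate, compounded over 26 years, removes a large share of the total, which is why the window chosen for fitting the trend (last 20 years vs. last 10) changes the headline percentage so much.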

    2050: the near future

    Unlike traditional models, which project estimates for the end of the century, the new study, published in Geophysical Research Letters, considers the shorter term, making it easier to see the relevance in our own lifetimes and thus encouraging action. How old will our children be in 2050? Will there still be snow in 2038, when Switzerland may host the Olympic Games? These estimates are all the more important as the disappearance of kilometers of ice will have marked consequences for the population, infrastructure and water reserves. “The data used to build the scenarios stop in 2022, a year that was followed by an exceptionally hot summer. It is therefore likely that the situation will be even worse than the one we present”, states Samuel Cook, researcher at UNIL and first author of the study.

    Artificial intelligence boosts models

    Guillaume Jouvet, Institute of Earth Surface Dynamics

    The simulations were carried out using artificial-intelligence algorithms. The scientists used deep-learning methods to train their model to understand physical concepts, and fed it real climate and glaciological data. “Machine learning is revolutionizing the integration of complex data into our models. This essential step, previously notoriously complicated and computationally expensive, is now becoming more accurate and efficient”, explains Guillaume Jouvet, prof. at the FGSE and co-author of the study.

    The modelling was performed with the IGM model developed by the UNIL ICEgroup.

    Source
  • How can we protect biodiversity? By improving monitoring of global genetic diversity

    How can we protect biodiversity? By improving monitoring of global genetic diversity

    The “Greek frog”, Rana graeca, is one of the species included in the analysis. Photo credit: Andreas Meyer

    Genetic diversity is crucial if species are to adapt to climate change. An international study co-conducted by UNIL researchers shows that current efforts to monitor genetic diversity in Europe are incomplete and insufficient. It proposes a novel approach for identifying and pinpointing important geographical areas on which to focus.

    • The genetic diversity of animals and plants is essential for their adaptation to climate change.
    • Current monitoring of this diversity is inadequate and could lead to the loss of important genetic variants.
    • A study co-directed by UNIL provides information on where to monitor genetic diversity in Europe.
    • It confirms that better monitoring of species and their genetic diversity is urgently needed internationally.

    Every living thing on our planet is distinguished from its fellow creatures by small differences in its hereditary material. So, when the environment changes and becomes unfavorable to populations of species (plants and animals), this genetic variability can enable them to adapt to the new conditions, rather than becoming extinct or having to migrate to other habitats. In simple terms, then, gene diversity is one of the keys to species survival. In 2022, the International Convention on Biological Diversity (CBD) placed increased emphasis on the need to protect the genetic diversity found in wild species, a fundamental component of biological diversity and one that has been generally neglected previously.

    Global warming is already putting a great deal of pressure on many species in Europe, particularly those having populations at the climatic limits of their range. The ability of species to resist greater heat or drought, as well as new species colonizing their environment, therefore determines their survival. It is in these borderline situations that it is most urgent to measure genetic diversity, in order to assess the ability of the species in question to persist.

    An international study co-directed by UNIL and published in Nature Ecology & Evolution has examined the monitoring of genetic diversity in Europe. Olivier Broennimann and Antoine Guisan, from the Faculty of Biology and Medicine and the Faculty of Geosciences and Environment, have made an essential contribution, developing a novel tool to identify geographical areas where genetic monitoring should be a priority. Their results show that efforts to monitor genetic diversity in Europe are incomplete and need to be supplemented.

    By analyzing all genetic monitoring programs in Europe, the study showed the geographic areas in which greater monitoring efforts are needed, mainly in southeastern Europe (Turkey and the Balkans). “Without better European monitoring of genetic diversity, we risk losing important genetic variants,” says Peter Pearman, lead author of the study and a former UNIL collaborator. Improved monitoring would make it possible to detect areas favorable to these variants, and to protect them in order to maintain the genetic diversity that is essential to the long-term survival of species. Some of these threatened species also provide invaluable services to humans, such as crop pollination, pest control, water purification and climate regulation.

    The study incorporated the efforts of 52 scientists who represent 60 universities and research institutes from 31 countries. The results suggest that European genetic diversity monitoring programs should be adapted systematically to span full environmental gradients, and to include all sensitive and high-biodiversity regions. In view of recent agreements to halt the decline in biodiversity, to which Switzerland is a signatory country, the study also points out that better monitoring of species in general, and their genetic diversity in particular, is urgently needed at an international level. This will enable better land-use planning, and better support for ecosystem conservation and restoration actions, which help to ensure the persistence of species and the services they provide.

    Source

    • Peter Pearman, Olivier Broennimann, [+ 48 authors], Antoine Guisan & Michael Bruford, Monitoring species genetic diversity in Europe varies greatly and overlooks potential climate change impacts, Nature Ecology & Evolution, 2023.
  • They decode colonization methods through the communication of E. coli bacteria

    They decode colonization methods through the communication of E. coli bacteria

    Researchers have reproduced an intestine-like structure on a silicone chip. Here, a real mouse intestine, image credit: HistoPathology Core Facility, Institut Pasteur.
    Pietro de Anna, Institute of Earth Sciences

    Naturally present in our digestive tract, E. coli bacteria have very specific ways of communicating and colonizing complex environments. Scientists at UNIL have reproduced the complex structure of an intestine on a microchip and unraveled these mechanisms for the first time. The study, published in Nature Communications, represents a step towards a better understanding of host-microbe interactions.

    On average, a human intestine contains around 2 kilos of bacteria, or almost 10,000 billion individuals, belonging to 1,000 different species. Among them are the Escherichia coli bacteria, which are usually harmless and beneficial, but some minority strains are pathogenic. However, we don’t really understand the mechanisms that govern their behavior. How do they communicate with each other?  How do they colonize complex environments such as the human digestive tract?

    At UNIL, a team from the Faculty of Geosciences and Environment (FGSE), in collaboration with a team from the Faculty of Biology and Medicine (FBM), has unraveled the processes by which these bacteria colonize a sinuous environment. Two mechanisms were studied: their movement in response to chemical stimuli (chemotaxis), and their ability to estimate the number of similar bacteria in the vicinity (quorum sensing) and to react if there are too many. The research was published in Nature Communications. It opens the door to a better understanding of the relationship between microbes and their hosts, and the consequences for the latter’s health.

    An intestine on a chip

    To carry out their research, the scientists reproduced the structure of an intestine on a microfluidic chip, into which they injected nutrients (glucose) and E. coli bacteria. The bacteria spread in this confined environment, then accumulate in the cavities to take part in a small feast. They consume oxygen and absorb the available glucose, releasing substances produced by their metabolic activity (such as AI-2). “This signal acts as a chemical stimulus, attracting other bacteria swimming nearby. As a result, the cavity fills up more and more, until it becomes crowded,” explains Pietro De Anna, professor at the FGSE and co-author of the study.

    A second phenomenon then occurs. When all the nutrients have been consumed, the bacteria activate a kind of sensor, enabling them to estimate the density of the crowd around them. “By estimating the quantity of signals, the bacteria suddenly notice that the place, which is a dead end, is full to bursting: their survival is at stake”, explains the professor. The bacteria then produce biomass faster than usual. They multiply as much as possible, to grow enough to get out of the cavity. 
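    The two-stage behavior described above can be caricatured in a few lines. This is a toy sketch, not the study's model: cells in a dead-end cavity secrete a signal (think AI-2), and once the accumulated signal crosses a threshold, the "sensor" trips and growth accelerates. All rates, units and the threshold are invented for illustration.

```python
# Toy quorum-sensing switch: signal accumulates with population; past a
# threshold the cells infer the cavity is crowded and grow faster.
# Rates, threshold and units are invented for illustration.

def simulate_cavity(steps=60, base_rate=0.05, boosted_rate=0.20,
                    secretion=1.0, threshold=500.0):
    cells, signal = 10.0, 0.0
    history = []
    for _ in range(steps):
        signal += secretion * cells       # each cell releases signal molecules
        quorum = signal > threshold       # density estimated from the signal
        rate = boosted_rate if quorum else base_rate
        cells *= 1 + rate                 # growth speeds up once crowded
        history.append((cells, quorum))
    return history

hist = simulate_cavity()
```

    Early in the run the quorum flag is off and growth is slow; once the signal pool crosses the threshold, the same population switches to the boosted growth regime, mirroring the "full to bursting" escape response the researchers describe.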

    “Understanding how bacteria colonize complex, heterogeneous environments such as the gut is essential for understanding natural phenomena, and those that lead to their functioning or to pathology,” comments Pietro De Anna. “Furthermore, as AI-2 is a mode of interspecies communication, our research could provide valuable insights for other bacterial species.”

    Source
  • Digital Twin Cities : the need to integrate more complexity in the analysis

    Digital Twin Cities : the need to integrate more complexity in the analysis

    Céline Rozenblat, Institute of Geography and Durability (IGD)

    Virtual representations of cities – or Digital Twin Cities – are in full expansion and are intended to facilitate the management and planning of urban systems in the short and medium term. But are they relevant?

    The European Union has mandated a group of experts in modeling, which includes Prof. Céline Rozenblat, in order to evaluate them. The result: the models suffer from a lack of complexity, and neglect in particular the socio-economic components of the urban fabric, as well as the approaches at different levels and scales necessary for long-term sustainable urban development planning.

    Created more than fifty years ago by NASA to test rockets, “digital twins” have progressively spread to many fields. In recent years, the development of Digital Twin Cities has grown in parallel with research on the subject (more than 400 articles published in three years).

    They are virtual representations of the processes and systems that make up a city, intended to facilitate its management and planning in the short and medium term. Their modeling relies on large amounts of data from human and physical systems, supplied in near real time by automated sensors.

    In 2022, a commission of experts including Céline Rozenblat, professor at the IGD, was mandated by the European Union to establish an ISO standard defining the criteria that local digital twins must meet. An article published in Nature Computational Science, “The role of complexity for digital twins of cities”, reviews the existing models and raises several criticisms: a lack of transparency in the data and models used; a narrow focus on the infrastructures and buildings of cities; and analyses carried out at a single scale, most often stopping at the administrative limits of the central city. This “mechanical” approach shows significant shortcomings and neglects elements that are crucial to the development of a city, such as the economic fabric and social links. Moreover, interactions at micro-levels can influence elements at larger scales.

    According to the experts, digital city models therefore need to incorporate several levels of complexity and different types of data, in order to represent real cities as faithfully as possible. This would give city governance the means to act at several levels, as well as at different scales of space and time.

  • Media coverage of climate change research does not inspire action

    Media coverage of climate change research does not inspire action

    Media coverage of scientific advances on climate issues does not activate the mechanisms known in psychology to trigger action in individuals and groups. This is the conclusion of a study conducted by social scientists and geoscientists from the University of Lausanne (UNIL – Switzerland).

    The planet is warming because of human activities, and the consequences will be devastating for all living beings, including humans. At present, everyone is potentially exposed to this information in the media. But how do scientific journals and the media relay research related to these issues? Is the scientific focus of climate warming research reflected in what the media decide to present?

    In a study published in Global Environmental Change, scientists from UNIL specialized in geosciences and psychology have examined these questions. They analyzed the roughly 50,000 scientific publications on climate change from the year 2020 to identify which parts of this impressive body of research made their way into the mainstream media. The analysis showed that most of the research selected by the media was biased toward the natural sciences. It focused overly on large-scale climate projections in the future and on a narrow range of threats, such as polar bears, drought and melting glaciers. The paper shows that this type of narrative does not activate the mechanisms, known from psychological research, that might engage pro-environmental behaviors in readers. On the contrary, the media’s selective choice of certain elements of climate change research could backfire, provoking denial and avoidance.

    Presenting the problem, but also the solutions

    The study speaks of a possible distancing reaction on the part of the public, resulting from this globalizing approach. “The individuals exposed to these facts, not feeling directly concerned by them, will tend towards a peripheral, superficial and distracted treatment of the information. Only a central, deep and attentive consideration will allow the public to transform what they know into mechanisms of action and commitment”, explains Fabrizio Butera, professor at the Institute of Psychology of the UNIL, and co-author of the study. Marie-Elodie Perga, professor at the UNIL Institute of Earth Surface Dynamics and co-author of the paper, adds: “If the goal of mediating research is to have a societal impact, then it seems that we are pushing all the buttons that don’t work.”

    If the goal of mediating research is to have a societal impact, then it seems that we are pushing all the buttons that don’t work!

    Marie-Elodie Perga, professor at the UNIL Institute of Earth Surface Dynamics

    Large-scale threats can create fear. But, as Fabrizio Butera reminds us, “research on human behavior shows that fear can lead to behavioral change in individuals and groups, but only if the problem presented is accompanied by solutions.” Faced with purely descriptive articles that emphasise only highly selected elements of climate change, the public will tend to ignore the problem, seek out less anxiety-provoking information and surround themselves with networks that present a more serene reality.  

    Research, scientific journals and media

    What can be done, then, to communicate in an effective, encouraging way that prompts society to engage more widely in climate protection? “Treating environmental issues in a transversal, solution-oriented way would be useful. It would show that climate change has direct consequences for our lifestyles, our immediate environment or our finances, for example,” says Marie-Elodie Perga.

    This approach requires a change in the behaviour of communication managers in research institutions, in publishers, and in the media. “For the time being, the most renowned scientific publications favor end-of-century studies,” she explains. “Journalists then give very wide coverage to the publications of these journals, which are the most highly rated. In France, for example, a group of journalists has drawn up a charter advocating the adaptation of media coverage of these issues and calling for more cross-disciplinarity,” says Marie-Elodie Perga. In isolation, a human being will not have an impact, but collective actions are very effective. There are solutions, but they need to be brought to light, beyond local initiatives.

    Additional information

    This research was facilitated through the Center for Climate Impact and Action (CLIMACT), affiliated with UNIL and EPFL. CLIMACT’s mission is to promote systemic solutions to climate change. It collaborates with the political, media and cultural worlds to strengthen the dialogue between science and society.

  • A roadmap for integrating species in biodiversity restoration

    A roadmap for integrating species in biodiversity restoration

    A river forest ecosystem in northern California, USA, within the historical range of the American beaver (Castor canadensis). Integrating ecosystem engineers into restoration and management decisions can lead to better outcomes for ecosystem functioning. (Photo credit: © Understory)

    Through their mere presence, certain species, whether plants or animals, can profoundly modify the landscape, create new habitats for wildlife and increase biodiversity. At the University of Lausanne (UNIL), scientists have developed a “toolbox” describing the mechanisms and consequences of introducing these “ecosystem engineers”. The roadmap is intended in particular for environmental agencies and conservation program managers. It aims to enable the integration of these species into biodiversity conservation projects, whatever the ecosystem.

    Gianalberto Losapio, Institute of Earth Surface Dynamics (IDYST)

    Generally speaking, all species in an ecosystem interact with one another and with their environment, thereby contributing to how the habitat functions. Some species, however, exert a much greater influence than others on their fellow organisms and on the environment. They are known as ecosystem engineers.

    One of the best-known examples is the beaver. By building dams, beavers modify the flow of watercourses and transform terrestrial ecosystems into wetlands, triggering a whole cascade of processes and the arrival of new animals. Yet while individual cases are well documented, the mechanisms at work as a whole are not yet well understood.

    In collaboration with a team from Stanford, UNIL scientists have developed a “toolbox” to predict and measure the influence of species on ecosystems under different conditions. This roadmap could be used by various actors, such as managers of protected areas, environmental agencies, and heads of conservation and restoration programs. The goal is to include ecosystem engineers in the processes of preserving biodiversity and maintaining ecosystems. Their review was published in the journal Functional Ecology.

    From observation to drawing up a procedure

    To establish this framework, the scientists proceeded in several steps. First, they gathered the existing knowledge and literature on ecosystem engineers. On this basis, the researchers developed a framework for modeling the effects of species and then quantifying them. Finally, they devised a procedure to make it possible to include these natural regulators in the field as much as possible.

    “This guide aims to help specialists and communities ask the right questions when setting up conservation programs. For example: what is the goal to be achieved? What are the characteristics of the site, and the spatial context?” explains Gianalberto Losapio, researcher at UNIL’s Faculty of Geosciences and Environment and lead author of the study. “If you want to reintroduce a specific species of fish into an environment, for example, you cannot simply bring the animals to the chosen site; you have to think more globally,” he illustrates. The guide also provides tools for assessing the impact of the actions taken, so that the activity can be adapted if necessary. “Some restoration projects end up being abandoned because the trees that were planted die, or the introduced species cannot survive,” the researcher adds. “We believe that a holistic approach will have a better chance of success.”

    Bibliographical reference
    • G. Losapio, L. Genes, C. J. Knight, T. N. McFadden, L. Pavan, Monitoring and modelling the effects of ecosystem engineers on ecosystem functioning, Functional Ecology, April 2, 2023
      doi.org/10.1111/1365-2435.14315
  • Oxidation phenomena took place on Earth much earlier than previously thought

    Oxidation phenomena took place on Earth much earlier than previously thought

    Johanna Marin Carbonne & Juliette Dupeyron (© Bastien Ruols, UNIL)

    By analyzing thousands of data points from rock samples 3.8 to 1.8 billion years old, a team from the University of Lausanne (Switzerland) has demonstrated that iron oxidation occurred on Earth much earlier than previously thought. The discovery raises many questions. What caused this oxidation? Bacteria? Oxygen?

    When did oxygen first appear on Earth? According to the scientific consensus, it accumulated massively in the atmosphere nearly 2.4 billion years ago, oxidizing its environment. Before this event, traditional hypotheses hold that there was practically no oxidation.

    Scientists at the University of Lausanne (UNIL) were therefore surprised to observe traces of iron oxidation in rocks dating from 3.8 to 1.8 billion years ago, well before what is commonly known as “the great oxidation”. The results were published in Earth and Planetary Science Letters.

    (© Bastien Ruols, UNIL)

    Behind this discovery is an innovative analytical method used by the scientists. Thanks to this novel approach, they were able to analyze mineral grains down to 5 microns in size and study a much wider range of rocks than conventional methods allow. “I had different types of very old rocks in my drawers that I had collected over the years, from Australia, South Africa and Gabon,” explains Johanna Marin Carbonne, co-author of the study and professor at the Faculty of Geosciences and Environment of UNIL. “As a first step, we analyzed them with this promising method, which had never been done at this scale.” Researcher Juliette Dupeyron, then a Master’s student and now a Doctoral student at the Institute of Earth Sciences, set about compiling these data and identifying trends. She came up with these unexpected results. 

    Two hypotheses to choose from 

    What do these new data mean? That these ancient rocks were confronted with an oxidant long before the massive appearance of oxygen. Three hypotheses are currently emerging. The iron could have been oxidized by UV light, by microorganisms such as bacteria, or by oxygen, produced by bacteria. “The UV light scenario probably took place, but in small proportions and would not be sufficient to explain our observations,” comments Juliette Dupeyron, first author of the study. As for the other two hypotheses, it is not possible to distinguish them at this stage. “What is certain is that these results raise questions about the timing of Earth’s surface oxygenation.”

    However, a lot remains to be unraveled. “We would like to see further scientific studies to investigate this discovery,” says the researcher. “There is still a lot of work to be done to shed light on these observations.” Johanna Marin Carbonne adds: “Only 10% of the rocks of this age are currently available to scientists. It is therefore difficult to reconstruct with certainty all the phenomena that took place at that time.”

    The techniques behind this discovery

    Traditionally, scientists have used multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS) to analyze ancient rocks, a method that requires separating the mineral of interest, in this case pyrite, from the rest of the rock and then applying various chemical processes to recover the desired chemical element, iron. The disadvantage is that analyzing pyrite-poor rocks is tedious, so previous studies were limited to a single rock type.

    The secondary ion mass spectrometry (SIMS) and laser ablation mass spectrometry (LA-MC-ICP-MS) techniques used in this study open new doors because they allow the analysis of mineral grains of the order of microns and can thus be applied to a much wider range of rocks. Finally, they offer the possibility of studying the surface of the sample directly, without the need to separate the mineral of interest from the rest of the rock, thus preserving the rock’s structure.

    What do rocks tell us about the past?

    Rocks bear the mark of their past interactions with their environment. By analyzing very old rocks, it is possible to trace their evolution and to deduce the environmental phenomena at work when they formed. In this study, the scientists focused on the pyrite grains present in the rocks; pyrite is a mineral containing sulfur and iron. The chemical composition of the mineral – more specifically its isotope composition – holds very important information for understanding the past.
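    Isotope compositions of this kind are conventionally reported in delta notation: the per-mil deviation of a measured isotope ratio from that of a reference standard (for iron, the IRMM-014 material). As a minimal sketch in Python, the ratio values below are invented for illustration, not data from the study:

```python
# Delta notation for iron isotopes: per-mil deviation of a sample's
# 56Fe/54Fe ratio from the IRMM-014 reference standard.
# The sample ratio below is hypothetical, for illustration only.

def delta_fe56(ratio_sample: float, ratio_standard: float) -> float:
    """Return delta-56Fe in per mil relative to the standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

R_IRMM014 = 15.698   # approximate natural 56Fe/54Fe of the reference
r_pyrite = 15.720    # hypothetical ratio measured in one pyrite grain

print(f"delta-56Fe = {delta_fe56(r_pyrite, R_IRMM014):.2f} per mil")
```

Positive values indicate enrichment in the heavy isotope; it is the pattern of such values across pyrite grains that carries the environmental signal.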

  • Solution travel in porous media: from microscopic flow to macroscopic transport

    Solution travel in porous media: from microscopic flow to macroscopic transport

    An important study published in Nature Communications describes through modeling the physical influence of microstructures in porous media on the global transport of particles entrained by fluids.

    This fundamental advance in the field of fluid dynamics is anything but trivial in view of its numerous fields of application, from the environment to medicine, from scavenging in rivers to remineralization or decontamination of soils, to blood flow or to the transport of molecules in biological membranes and tissues.

    A few drops of serendipity

    How did Dr. Ankur D. Bordoloi and Dr. David Scheidweiler, two postdocs at the Institute of Earth Sciences (ISTE), successfully model fluid transport in porous media?

    During their work in the Fluid Mechanics Laboratory, studying the transport of particles inside an artificial substrate that closely mimics the structures of porous media, these two young researchers found that their observations did not match the predictions of established models (based mainly on passive diffusion principles). In particular, they found that the particles (colloids) took significantly longer to pass through the medium than expected.

    They therefore extended their study by observing the flow of these colloids in the very heart of the microstructures constituting the substrate. By compiling thousands of fluorescence microscopy images, they were able to identify that convection currents were created inside “dead-end” pores, keeping some of the particles “trapped”. This result was unexpected, as it was not imagined that such currents could be created on such a small scale.

    A mathematical model was then established to describe and predict the transport speed of microparticles and the time needed for them to cross a porous medium. The key factors of this model are the thickness of the medium and the distribution of the sizes of the pores it contains. A better understanding of these subtle mechanisms opens up many development perspectives. Pietro de Anna’s team is working on several projects related to this topic, including the growth dynamics of bacteria in such media. Understanding these phenomena is particularly interesting considering, for example, that the cells lining the wall of the intestine or the kidney develop microvilli, creating an environment similar to those studied here. The dynamics of absorption or diffusion of drug molecules in these organs could thus be approached from a new angle.

    The Fluid Mechanics Laboratory

    The Fluid Mechanics Laboratory works mainly on identifying the mechanisms that link phenomena that can be observed at the macroscopic level to processes that exist at the microscopic level.

    To go further with Prof. Pietro de Anna

    Why was the flow of fluids in a porous medium not better known until now?

    The flows through porous media are very slow (a few microns per second). Most scientists therefore considered that liquids and microparticles pass through them passively, without any particular dynamics, and that it was sufficient to determine an average flow velocity and a diffusion coefficient to describe their transport. The very complex structures of these media were also a hindrance to further studies: no existing experimental device made it possible to recreate this complexity or to observe the flows finely within the microstructures.
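    The scale of these numbers can be sketched quickly. In the back-of-the-envelope estimate below, all values (medium length, pore velocity, colloid diffusion coefficient) are illustrative order-of-magnitude assumptions, not measurements from the study:

```python
# Order-of-magnitude estimate of transport in a slow porous flow.
# All parameter values are illustrative assumptions.
L = 0.01     # medium length: 1 cm, in metres
v = 5e-6     # mean pore velocity: a few microns per second, in m/s
D = 4e-13    # diffusion coefficient of a ~1 micron colloid, m^2/s
             # (Stokes-Einstein order of magnitude in water)

t_advection = L / v   # time to cross the medium by the mean flow alone
peclet = v * L / D    # Peclet number: advection vs. diffusion at the medium scale

print(f"advective transit ~ {t_advection:.0f} s, Peclet ~ {peclet:.0f}")
```

A Peclet number far above 1 means the mean flow, not diffusion, sets the crossing time, which is why an average-velocity description long seemed sufficient.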

    What were the key steps to successfully define this model?

    As part of our research, we worked on building a synthetic medium that recreates the complexity of a porous medium and lets us observe the flows within its microstructures. We succeeded in producing transparent polymer wafers (microfluidic devices) whose internal structure we can shape to our needs. These microfluidics take the form of thin slides through which we pass colloid suspensions.

    Schematic of the experiment performed: a. Microfluidic plate containing a solution of colloids. b. Zoom on the microstructures composing the microfluidic.

    Once this setup was in place, we performed several experiments measuring particle transport through the microfluidics to determine whether the simple models used so far held true. We found anomalies with respect to the expected results: the particles were delayed beyond what diffusion alone predicts.

    Porous structures are networks of channels, filled with particles or microorganisms in suspension, interspersed with “dead-end” pores in which the flow is interrupted. These structures are present in soils, industrial filters, membranes and biological tissues. Microfluidics are designed to recreate the conditions of porous media with a homogeneous distribution of isolated channels and pores. This method is described in more detail in the Geoblog article: Day of a researcher – Pietro de Anna

    We therefore designed an experiment that allows us to observe the transport mechanisms occurring at the microscopic level, in order to understand the unexpected effects observed at the macroscopic level.

    Prof. Pietro de Anna
    Particle flux observed at the exit of the microfluidics (blue dots) compared to the expected flux (green line). A longer time than expected is required for the colloids to pass through the medium.

    The microfluidics were filled with a suspension of colloids and a wash solution was injected into one end of the plate. The movement of the suspended colloids was determined by recording images at regular intervals using a fluorescence microscope. These images were then superimposed to evaluate the movement or stagnation of suspended colloids.
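    The superposition step can be sketched as follows. This is a hedged illustration in which synthetic NumPy arrays stand in for the microscope frames; the array sizes and the threshold are arbitrary assumptions:

```python
# Stack synthetic "fluorescence frames" and superimpose them.
# In a maximum-intensity projection, any pixel a colloid ever lit up stays
# bright; comparing it with the time-average helps separate moving
# particles from colloids stagnating in one place.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((100, 64, 64))      # 100 synthetic frames of 64x64 pixels

max_projection = frames.max(axis=0)     # brightest value ever seen per pixel
mean_projection = frames.mean(axis=0)   # average intensity per pixel

# Pixels bright at their peak but dim on average were visited only briefly
# (moving colloids); the 0.4 gap is an arbitrary illustrative threshold.
transient = (max_projection - mean_projection) > 0.4
print(max_projection.shape)
```

In the real experiment thousands of such frames are combined, which is what reveals pores where colloids remain trapped throughout.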

    Images of colloids in suspension at the beginning of the experiment and after 6h of treatment. The colloids present in the channels (green) are mostly washed out, while those in the pores (red) remain trapped.

    At the end of the experiment and the superposition of thousands of images, we were able to observe that a convective movement is created in the “dead-end” pores, which retains the particles inside. This is quite unexpected as it was not thought that such movements could be formed at such a small scale (about 20 microns).

    Picture of colloids swirling in a pore (in blue). This photo has been selected for the [Figure 1.A.] 2022 competition and will be exhibited at the Lausanne City Hall from September 21 to October 3, 2022.
    Diagram of the observed convection currents.

    From these observations we developed a mathematical model to describe these vortex phenomena. The model makes it possible to describe the transport of particles through a porous medium at any stage of their journey. Its determining elements are the length of the traversed material and the pore-size distribution.
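    A toy version of such a model can be sketched as a random walk with trapping. The sketch below is not the authors’ actual formulation; the step count, trapping probability, and waiting-time distribution are all illustrative assumptions:

```python
# Toy trapping model: particles advect through a chain of pore "steps";
# at each step they may fall into a dead-end pore and wait an extra,
# exponentially distributed time before rejoining the flow.
import random

random.seed(1)

def transit_time(n_steps=200, dt_advect=1.0, p_trap=0.1, mean_wait=20.0):
    """Total time for one particle to cross the medium."""
    t = 0.0
    for _ in range(n_steps):
        t += dt_advect                                # advective step in a channel
        if random.random() < p_trap:                  # particle enters a dead-end pore
            t += random.expovariate(1.0 / mean_wait)  # trapped waiting time
    return t

times = [transit_time() for _ in range(2000)]
mean_t = sum(times) / len(times)
print(round(mean_t))   # well above the 200 time units advection alone would take
```

On average each trapping event adds mean_wait, so the expected transit is n_steps * (dt_advect + p_trap * mean_wait) = 600 time units: trapping in dead-end pores produces exactly the kind of delayed breakthrough the experiments revealed.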

    Bibliographical reference

    • Bordoloi, A.D., Scheidweiler, D., Dentz, M. et al. Structure induced laminar vortices control anomalous dispersion in porous media. Nat Commun 13, 3820 (2022).
      doi.org/10.1038/s41467-022-31552-5