Professor
GIGA Interfaculty Research Center of Excellence in Biomedical Research, University of Liège
Engineering and Medicine, KU Leuven
Executive Director of the VPH Institute (the international society for in silico medicine)
Dr. Geris earned her MSc and PhD in Mechanical Engineering at KU Leuven and then completed post-doctoral training at KU Leuven and the University of Oxford.
Who have been your mentors?
My PhD supervisor, Jos Vander Sloten, who showed me how to be a supportive manager. Marco Viceconti, a colleague from Bologna and a leader in the in silico medicine community, who showed me how to follow through on a vision and build support for it from different angles. And many, many other colleagues from different fields and backgrounds. I shamelessly copy the good practices and great examples I see from so many colleagues. Sometimes we have explicit conversations about these practices, sometimes we don’t. But if you are open to it, there is always something to learn from your colleagues, regardless of their career stage.
How are you currently applying computational models of cellular behavior to your bone fracture research program?
We use various types of in silico technologies to answer different questions about bone fracture healing:
- Bioinformatics tools to analyse (spatial) single-cell RNAseq data from various fracture healing situations (large defects, loaded defects, inflammatory phase). This allows us to build a biological blueprint and identify the key players in these processes.
- Agent-based models of the inflammatory phase (Edoardo Borgiani’s COMMBINI), combining intracellular mechanisms with cell-based processes and including mechanical loading. These allow us to better understand the interaction between inflammation and mechanics and to test hypotheses.
- Continuum models of the reparative phase of the healing process, simulating the actions and interactions of key players, including cells (as densities), growth factors (as concentrations) and tissue types (as densities). These allow us to better understand the influence of external factors such as blood flow, administration of growth factors and the presence of biomaterials (see the sketch below).
- Continuum models of the ingrowth of neotissue and bone in 3D scaffolds used in the treatment of large bone defects. These models allow us to optimize the scaffold’s design.
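To make the continuum approach a bit more concrete, here is a minimal sketch in Python of a 1D reaction-diffusion model coupling a cell density to a growth factor concentration. It is an illustration only, not the group’s research code: the equations are generic logistic-growth, secretion and decay terms, and every value (`D_c`, `prolif`, `secrete`, `decay`, the grid and time step) is a placeholder.

```python
import numpy as np

# 1D reaction-diffusion toy model of the reparative phase:
# c = cell density (normalized), g = growth factor concentration.
# All parameter values are illustrative placeholders.
nx, dx, dt, steps = 100, 0.1, 0.001, 5000
D_c, D_g = 0.01, 0.05          # diffusion coefficients (cells migrate slower)
prolif, secrete, decay = 5.0, 1.0, 0.5

c = np.zeros(nx); c[:5] = 1.0        # cells invade from one edge of the callus
g = np.zeros(nx); g[nx // 2] = 1.0   # growth factor released at the fracture site

def laplacian(u):
    """Second spatial derivative with zero-flux (Neumann) boundaries."""
    u_pad = np.pad(u, 1, mode="edge")
    return (u_pad[2:] - 2.0 * u + u_pad[:-2]) / dx**2

for _ in range(steps):
    dc = D_c * laplacian(c) + prolif * g * c * (1.0 - c)  # growth stimulated by g
    dg = D_g * laplacian(g) + secrete * c - decay * g     # secretion by cells, decay
    c += dt * dc
    g += dt * dg

print(f"cells cover {100.0 * (c > 0.5).mean():.0f}% of the domain at t = {steps * dt}")
```

The same building blocks (diffusion, production, consumption) generalize, in principle, to the multi-species 3D models described above.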
What are advantages and challenges associated with using computational models to inform biological or clinical research?
Several advantages of in silico modeling have already been demonstrated:
- Integrating data from different time and length scales into a single framework that provides a holistic picture of the processes under study.
- Performing in silico trials to refine, reduce (or, in some very specific cases, even replace) part of the in vitro and in vivo (animal/human) testing.
- Designing and optimizing therapeutic strategies in silico.
- Making sense of real-world data and integrating it back into the R&D process.

There are of course various challenges:
- Model credibility assessment is crucial. This is more than just validation (comparison between simulation outcomes and experimental data); it also includes verification (ensuring the computer simulations implement the mathematics accurately and without errors) and uncertainty quantification (the effect on the simulation outcome of the assumptions made and of uncertainty in the input data; see the sketch after this list).
- Availability of credible input and validation data. Even though many papers report on these processes, many factors in the experimental set-ups are not accurately reported, leading to lots of missing information.
- Regulatory science and standards: the community has worked very hard over the past decade to engage with regulatory agencies and standards bodies to develop the regulatory science for in silico medicine tools.
- Education and information of experimental scientists, clinicians and patients. Thanks to the AI (r)evolution, there is now more curiosity about what in silico tools can mean in research and the clinic. The in silico medicine community is also focusing much more on stakeholder engagement, education and communication.
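As a toy illustration of the uncertainty quantification step, the sketch below propagates assumed input uncertainties through a stand-in model by Monte Carlo sampling. The `healing_score` function and both input distributions are hypothetical placeholders; the point is that the simulation outcome becomes an interval rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(42)

def healing_score(stiffness, gf_dose):
    """Hypothetical stand-in for a full healing simulation."""
    return gf_dose * np.exp(-0.5 * (stiffness - 1.0) ** 2)

# Propagate assumed input uncertainty (placeholder distributions).
n = 10_000
stiffness = rng.normal(1.0, 0.2, size=n)   # ~20% uncertainty on tissue stiffness
gf_dose = rng.uniform(0.8, 1.2, size=n)    # dosing uncertainty

scores = healing_score(stiffness, gf_dose)
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"mean outcome {scores.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```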
Are there specific barriers that once addressed might make using computational models to inform ex vivo study design more common?
As mentioned above, credibility assessment is important, as is the availability of data; the two go hand in hand. Having several success stories will allow people to assess what in silico models can do for them, and will open the path towards interdisciplinary interactions. Having conversations on model development before any experiments are done will also help ensure that all the data necessary for model validation is collected.
What advice would you give investigators who want to incorporate computational models into their research program? What learning resources would you recommend?
Talk to colleagues developing these models. They would be more than happy to discuss the potential of these models for your questions. Do not expect miracles though: a good model starts with a good question of interest and focuses on a well-defined context of use. That means there is no model that works for all questions, and additional work (and data) will be necessary. If you are looking for information on computational modeling, I recommend the following sources:
- Code & cure animated video series, explaining basic concepts in 10 short videos.
- The Digital Twin Theory podcast series, going more in detail on the different concepts related to in silico medicine.
- The Virtual Human Twin roadmap, describing the need, current status, technology, infrastructure, ELSI, regulatory science, standards, market and uptake of personalized computer models in all areas of health & care (including research). It is a big document, but it is written in an accessible manner and is a good place to start to gain a 360° view of the field.
Can you speak to ways your multi-disciplinary collaboration informs your individual research programs?
My team is composed of biomedical, mechanical & materials engineers, mathematicians, biomedical scientists, dentists, medical doctors and laboratory technicians. This interdisciplinarity is crucial to advancing the scientific questions we are trying to answer, using the best tools (or all the relevant tools) that help provide the necessary pieces of the puzzle. We developed the experimental side of our activities to ensure that the appropriate data would be available to calibrate and validate our models. It is really gratifying to see people interact across their disciplinary backgrounds, with computer modelers performing experiments and biomedical scientists running models.
How do your computational models account for the complexity of cell-matrix interactions and vascularization processes that are critical for bone healing in ex vivo systems?
I’m not sure I understand the question from a biological perspective: there is no real bone ‘healing’ in ex vivo systems. We can model certain processes, but it would never be real ‘healing’ as encountered in vivo. What computer models can do is link the observations made in various in vitro and ex vivo systems together to create a holistic picture of healing that might approach the in vivo reality. Different modeling strategies allow us to capture essential aspects of cell-matrix interactions and vascularization processes as we understand them today. These approaches can focus on the biological aspects, the mechanics, the chemistry, or all of them at the same time, depending on the exact question of interest you are studying.
What strategies do you use to validate your computational predictions experimentally, and how do you handle discrepancies between model outputs and observed biological responses?
We use a variety of data sources for model calibration and validation. These include 2D cell culture experiments, organ-on-chip experiments, ex vivo experiments and in vivo experiments. While in vivo experiments are of course the closest we can get to reality without going to human patients, many elements of the processes are hard to measure in a longitudinal manner or in a way that preserves all the contextual information (the microenvironment). That is why we use a combination of various methods to extract data for specific parts of the models. We also use machine learning techniques to perform in silico screenings to identify which parts of the model need particular attention during validation. A discrepancy between computational models and biological observations can have several causes:
- The model made wrong assumptions > revise the model. Talk to experimentalists to ensure that the known mechanisms are included
- The model did not represent certain mechanisms in sufficient detail to capture the observations > increase the granularity of the model
- The model suffers from neither of the above, yet the results are not in line with the experiments > this is the most interesting situation because it points towards a gap in the knowledge. When all the things we know about a process do not ‘add up’ to the observed result, it means something is missing in our story. The model can then be used to perform a screening to investigate what kind of changes to the model could lead to the observed results, giving us hints as to which mechanisms require elaboration from the experimental perspective (see the sketch after this list).
- The experiments are not the correct ones to study the mechanisms of interest. While experimental data is often considered to be ‘real data’ (compared to simulated data from the model output), that does not mean the experiments were performed correctly, used the right in vitro/in vivo model, or properly addressed the question we are investigating.
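A minimal sketch of such a screening, assuming a hypothetical stand-in model and parameter set: each parameter is perturbed one at a time to see which change would bring the simulated outcome closest to the observed value.

```python
# Toy one-at-a-time screening: which parameter change could explain the
# mismatch between simulation and observation? Everything here is hypothetical.

def simulate(params):
    """Stand-in for a full healing simulation; returns a scalar outcome."""
    return params["prolif"] * params["gf_dose"] / (1.0 + params["decay"])

baseline = {"prolif": 1.0, "gf_dose": 1.0, "decay": 0.5}
observed = 1.1   # experimental value the baseline model fails to reproduce

print(f"baseline gap: {abs(simulate(baseline) - observed):.3f}")
for name in baseline:
    for factor in (0.5, 2.0):                       # halve / double each parameter
        trial = dict(baseline, **{name: baseline[name] * factor})
        gap = abs(simulate(trial) - observed)
        print(f"{name} x{factor}: gap = {gap:.3f}")
```

The parameters whose perturbation closes most of the gap are the first candidates for closer experimental investigation.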
Can you elaborate on how your modeling framework balances simplification for computational feasibility with maintaining sufficient biological realism to guide ex vivo study design?
A good model (whether it is in silico, in vitro or in vivo) starts with the definition of the question of interest and the context of use. Once you are clear on what you want to address, you can identify the factors that you need to take into account and to what level of detail. In more exploratory settings, we use an iterative process to find the right balance between sufficient detail and computational feasibility. Adding more parameters does not necessarily enhance comprehension when they are not relevant to the question being addressed.
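As a toy illustration of that iterative process, the sketch below keeps additional model detail only while it buys a meaningful gain in agreement with the data; the fit and cost numbers are invented for the example.

```python
# Hypothetical fit quality (R^2) and relative CPU cost per model version.
data_fit_by_version = {1: 0.60, 2: 0.82, 3: 0.85, 4: 0.855}
cost_by_version = {1: 1, 2: 5, 3: 40, 4: 400}

chosen = 1
for version in sorted(data_fit_by_version)[1:]:
    gain = data_fit_by_version[version] - data_fit_by_version[chosen]
    if gain > 0.05:          # keep extra detail only if it buys real accuracy
        chosen = version

print(f"selected model version {chosen} (relative cost {cost_by_version[chosen]}x)")
```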
Given the variability in donor tissue and cell behavior, how robust are your models to inter-individual biological differences, and have you explored integrating patient-specific data?
There are different levels of personalization used in computer models when making them patient-specific. The first level is the creation of a generic model, where we aim for a result that sits somewhere in the realm of physiologically relevant results observed in experiments or the clinic. The second level aims to capture population behavior: the model reproduces the mean/median behavior observed in experiments or the clinic and, when parameter values are varied according to the observed variability in properties, also captures the variability in results around that mean/median. The third level of personalization is where you inform your model with as much subject-specific data as possible and aim to obtain a result that captures the observed behavior of that individual patient. Models frequently start at the first level of personalization and then move to the higher levels. Model credibility assessment at each of these levels comes with specific challenges, as, for instance, validation of actual patient-specific models is not always possible due to the lack of sufficient data that can be obtained without harming the patient. In the group we are now working on moving from the first level to the second level for some of our models. It is of course easy to use a patient-specific geometry based on medical images, but that alone is not enough to make a model truly patient-specific. Biological factors, lifestyle factors and others are all much harder to quantify and translate into changes in the parameter values of the model.
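The step from the first to the second level can be sketched as follows: instead of a single generic parameter set, parameter values are drawn from distributions matched to the variability observed across donors. The model, parameters and distributions below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def bone_regrowth(density, load):
    """Hypothetical scalar model of regenerated bone volume."""
    return density * (1.0 + 0.3 * np.tanh(load))

# Level 1: one generic parameter set -> one representative result.
generic = bone_regrowth(density=1.0, load=1.0)

# Level 2: draw parameters from distributions matching the variability
# reported across donors (values here are illustrative placeholders).
density = rng.normal(1.0, 0.15, size=5_000)
load = rng.normal(1.0, 0.25, size=5_000)
population = bone_regrowth(density, load)

print(f"generic: {generic:.2f}")
print(f"population: median {np.median(population):.2f}, "
      f"IQR {np.percentile(population, 25):.2f}-{np.percentile(population, 75):.2f}")
```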
How do you envision integrating these computational tools into translational pipelines to optimize preclinical testing or accelerate the development of bone regenerative therapies?
The optimization of preclinical testing and the acceleration of the development of bone regenerative therapies are the reasons why we started developing these models in the first place. In silico tools can provide additional information and insight at every step of the lifecycle of a therapy, from early design optimization (finding more promising designs, performing in silico screening to detect potential failure modes), through pre-clinical (design and interpretation of experiments) and clinical (stratification of patients, design of clinical trials) stages, to post-market (interpretation of real-world data). In our team we focus mostly on the low-TRL, early developments, and we have developed a wide range of models to help with all the different aspects mentioned above. To name one example, our work on the simulation of neotissue growth inside 3D porous scaffolds has led to the optimization of a scaffold microstructure geometry that is now being used by a Liège start-up creating 3D-printed CaP-based biomaterials for maxillofacial applications (already in clinical use).
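As a toy illustration of in silico design optimization, the sketch below scans candidate pore sizes against a hypothetical neotissue-filling score and selects the best one. Real scaffold optimization relies on full 3D growth simulations; the trade-off function here is invented for the example.

```python
import numpy as np

def fill_fraction(pore_um):
    """Hypothetical trade-off: small pores occlude, large pores fill slowly."""
    return np.exp(-((pore_um - 650.0) / 250.0) ** 2)

candidates = np.arange(200, 1201, 50)          # candidate pore sizes (micrometres)
scores = [fill_fraction(p) for p in candidates]
best = candidates[int(np.argmax(scores))]
print(f"best candidate pore size: {best} um (score {max(scores):.2f})")
```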
