Research

You can use our plugin to insert elements from your activity report (raweb). Note that since the report only exists in English, the page generated here will also be in English.

The presentation page

Example: abs

Overall objectives

Biomolecules and their function(s).

Computational Structural Biology (CSB) is the scientific domain concerned with the development of algorithms and software to understand and predict the structure and function of biological macromolecules. This research field is inherently multi-disciplinary. On the experimental side, biology and medicine provide the objects studied, while biophysics and bioinformatics supply experimental data, which are of two main kinds. On the one hand, genome sequencing projects supply protein sequences: ~200 million sequences have been archived in UniProtKB/TrEMBL. On the other hand, structure determination experiments (notably X-ray crystallography, nuclear magnetic resonance, and cryo-electron microscopy) give access to geometric models of molecules – atomic coordinates. Alas, only ~150,000 structures have been solved and deposited in the Protein Data Bank (PDB), a number to be compared against the ~2×10^8 sequences found in UniProtKB/TrEMBL. With one structure for ~1000 sequences, we hardly know anything about biological functions at the atomic/structural level. Complementing experiments, physical chemistry/chemical physics supply the required models (energies, thermodynamics, etc.). More specifically, recall that a protein with n atoms has d=3n Cartesian coordinates, and fixing these (up to rigid motions) defines a conformation. As conveyed by the iconic lock-and-key metaphor for interacting molecules, biology is based on the interactions stable conformations make with each other. Turning these intuitive notions into quantitative ones requires delving into statistical physics, as macroscopic properties are average properties computed over ensembles of conformations. Developing effective algorithms to perform accurate simulations is especially challenging for two main reasons.
The first one is the high dimension of conformational spaces (see d=3n above, typically several tens of thousands) and the nonlinearity of the energy functionals used. The second one is the multiscale nature of the phenomena studied: with biologically relevant time scales beyond the millisecond, and atomic vibration periods of the order of femtoseconds, simulating such phenomena typically requires 10^12 conformations/frames, a brute-force tour de force rarely achieved 38.
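The 10^12 figure quoted above follows from a back-of-the-envelope division of the millisecond time scale by a femtosecond-scale integration step; a minimal sketch with these illustrative values:

```python
# Multiscale gap: a millisecond-scale biological process sampled with
# a femtosecond-scale molecular dynamics timestep (illustrative values).
millisecond = 1e-3   # target biological time scale, in seconds
timestep = 1e-15     # typical MD integration step (~1 fs), in seconds

frames = millisecond / timestep
print(f"{frames:.0e} integration steps")  # ~1e+12 steps
```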

Computational Structural Biology: three main challenges.

The first challenge, sequence-to-structure prediction, aims to infer the possible structure(s) of a protein from its amino acid sequence. While recent progress has been made, notably using deep learning techniques 37, the models obtained so far are static and coarse-grained.

The second one is protein function prediction. Given a protein with known structure, i.e., 3D coordinates, the goal is to predict the partners of this protein, in terms of stability and specificity. This understanding is fundamental to biology and medicine, as illustrated by the example of the SARS-CoV-2 virus responsible for the Covid-19 pandemic. To infect a host, the virus first fuses its envelope with the membrane of a target cell, and then injects its genetic material into that cell. Fusion is achieved by a so-called class I fusion protein, also found in other viruses (influenza, SARS-CoV-1, HIV, etc.). Fusion is a highly dynamic process involving large-amplitude conformational changes of the molecules. It is poorly understood, which hinders our ability to design therapeutics to block it.

Figure 1: The synergy between modeling and experiments, and the challenges faced in CSB, illustrated on the problem of designing miniproteins blocking the entry of SARS-CoV-2 into cells. From 29. Of note: the first step of infection by SARS-CoV-2 is the attachment of the receptor binding domain (RBD, blue molecule) of its spike to a target protein found on the membrane of our cells, ACE2 (orange molecule). A strategy to block infection is therefore to engineer a molecule binding the RBD, preventing its attachment to ACE2. (A) Design of a helical protein (orange) mimicking a region of the ACE2 protein. (B) Assessment of binding modes (conformations, binding energies) of candidate miniproteins neutralizing the RBD.

Finally, the third one, large assembly reconstruction, aims at solving (coarse-grain) structures of molecular machines involving tens or even hundreds of subunits. This research vein was promoted about 15 years ago by the work on the nuclear pore complex 26. It is often referred to as reconstruction by data integration, as it requires combining coarse-grain models (notably from cryo-electron microscopy (cryo-EM) and native mass spectrometry) with atomic models of subunits obtained from X-ray crystallography. Fitting the latter into the former requires exploring the conformation space of subunits, whence the importance of protein dynamics.

As an illustration of these three challenges, consider the problem of designing proteins blocking the entry of SARS-CoV-2 into our cells (Fig. 1). The first challenge is illustrated by the problem of predicting the structure of a blocker protein from its sequence of amino acids – a tractable problem here, since the miniproteins used comprise only on the order of 50 amino acids (Fig. 1(A), 29). The second challenge is illustrated by the calculation of the binding modes and the binding affinity of the designed proteins for the RBD of SARS-CoV-2 (Fig. 1(B)). Finally, the last challenge is illustrated by the problem of solving structures of the virus engaging a cell, to understand how many spikes are involved in the fusion mechanism leading to infection. In 29, the promising designs suggested by modeling were assessed by an array of wet-lab experiments (affinity measurements, circular dichroism for thermal stability assessment, structure resolution by cryo-EM). The hyperstable minibinders identified provide starting points for SARS-CoV-2 therapeutics 29. This is truly remarkable work; note, however, that the designed proteins stem from a template (the bottom helix from ACE2) and are rather small.

Figure 2: The main challenges of molecular simulation: Finding significant local minima of the energy landscape, computing statistical weights of catchment basins by integrating Boltzmann’s factor, and identifying transitions. Practically, d>100.

Protein dynamics: core CS and mathematics challenges.

To present challenges in structural modeling, let us recall the following ingredients (Fig. 2). First, a molecular model with n atoms is parameterized over a conformational space 𝒳 of dimension d=3n in Cartesian coordinates, or d=3n−6 in internal coordinates (upon removing rigid motions), also called degrees of freedom (d.o.f.). Second, recall that the potential energy landscape (PEL) is the mapping V(·) from ℝ^d to ℝ providing a potential energy for each conformation 39, 36. Example potential energies (PE) are CHARMM, AMBER, MARTINI, etc. Such PEs belong to the realm of molecular mechanics, and implement atomic or coarse-grain models. They may include a solvent model, either explicit or implicit. Their definition requires a significant number of parameters (up to 1,000), fitted to reproduce physico-chemical properties of (bio-)molecules 40.

These PEs are usually considered good enough to study non-covalent interactions (our focus), even though they do not cover the modification of chemical bonds. In any case, we take such a function for granted 1.
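To make the notion of a potential energy function V(·) concrete, here is a deliberately minimal toy example: a pairwise Lennard-Jones term over n atoms. Real force fields such as CHARMM or AMBER add bonded terms, electrostatics, and solvent models; everything below is purely illustrative.

```python
import numpy as np

def lennard_jones_energy(coords, epsilon=1.0, sigma=1.0):
    """Toy potential V: R^(3n) -> R evaluated on one conformation.

    coords: (n, 3) array of Cartesian atomic positions.
    A stand-in for real force fields, which involve many more terms
    and hundreds of fitted parameters.
    """
    n = len(coords)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            sr6 = (sigma / r) ** 6
            energy += 4.0 * epsilon * (sr6 ** 2 - sr6)
    return energy

# Two atoms at the LJ minimum distance 2^(1/6) * sigma: energy = -epsilon.
pair = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])
print(lennard_jones_energy(pair))  # ~ -1.0
```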

The PEL codes all structural, thermodynamic, and kinetic properties, which can be obtained by averaging properties of conformations over so-called thermodynamic ensembles. Structure requires characterizing active conformations and important intermediates in functional pathways, which involve significant basins. Thermodynamics consists in assigning occupation probabilities to these conformations by integrating Boltzmann's distribution. Finally, transitions between the states, modeled, say, by a master equation (a continuous-time Markov process), correspond to kinetics. Classical simulation methods based on molecular dynamics (MD) and Monte Carlo sampling (MC) have been developed in the lineage of the seminal work by the 2013 recipients of the Nobel prize in chemistry (Karplus, Levitt, Warshel), awarded “for the development of multiscale models for complex chemical systems”. However, except for highly specialized cases where massive calculations have been used 38, neither MD nor MC gives access to the aforementioned time scales. In fact, the main limitation of such methods is that they treat structural, thermodynamic, and kinetic aspects at once 32. The absence of specific insights into these three complementary pieces of the puzzle makes it impossible to optimize simulation methods, and generally results in the inability to obtain converged simulations on biologically relevant time scales.
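The master-equation view of kinetics can be sketched on a toy three-state system: with illustrative basin energies (in units of kT; not from any real molecule), a rate matrix obeying detailed balance has the Boltzmann distribution as its stationary state.

```python
import numpy as np

# Toy 3-state system: states stand for significant basins of the PEL,
# with illustrative energies in units of kT.
E = np.array([0.0, 1.0, 2.5])
weights = np.exp(-E)            # Boltzmann's factors
pi = weights / weights.sum()    # thermodynamics: occupation probabilities

# Kinetics: rate matrix K of a continuous-time Markov process, with
# Metropolis-like rates obeying detailed balance pi_i K_ij = pi_j K_ji.
n = len(E)
K = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            K[i, j] = min(1.0, pi[j] / pi[i])
np.fill_diagonal(K, -K.sum(axis=1))  # rows sum to zero

# Master equation dp/dt = p K (row-vector convention):
# the Boltzmann distribution pi is stationary.
print(np.allclose(pi @ K, 0.0))  # True
```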

The hardness of structural modeling stems from three intertwined reasons.

First, PELs of biomolecules usually exhibit a number of critical points exponential in the dimension 27; fortunately, they enjoy a multi-scale structure 30. Intuitively, the significant local minima/basins are those which are deep or isolated/wide, two notions made precise by the concepts of persistence and prominence. Mathematically, problems are plagued by the curse of dimensionality and measure concentration phenomena. Second, biomolecular processes are inherently multi-scale, with motions spanning 15 and 4 orders of magnitude in time and amplitude respectively 25. Developing methods able to exploit this multi-scale structure has remained elusive. Third, macroscopic properties of biomolecules, i.e., observables, are average properties computed over ensembles of conformations, which calls for a multi-scale statistical treatment of both thermodynamics and kinetics.
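The notion of persistence can be illustrated on a sampled 1D energy profile: sweeping conformations by increasing energy, each local minimum is born at its own energy value and dies when its basin merges with a deeper one; its persistence is the death-minus-birth gap. A minimal sketch with illustrative values:

```python
def minima_persistence(values):
    """0-dimensional persistence of the local minima of a sampled 1D profile.

    Returns {index_of_minimum: persistence}; the global minimum never dies
    and gets float('inf'). Union-find sweep by increasing value.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent = {}                      # sample index -> representative minimum

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    persistence = {}
    for i in order:
        neighbors = [j for j in (i - 1, i + 1) if j in parent]
        if not neighbors:
            parent[i] = i            # new basin: i is a local minimum
        elif len(neighbors) == 1 or find(neighbors[0]) == find(neighbors[1]):
            parent[i] = find(neighbors[0])
        else:                        # two basins merge: the shallower one dies
            a, b = (find(j) for j in neighbors)
            old, young = (a, b) if values[a] <= values[b] else (b, a)
            persistence[young] = values[i] - values[young]
            parent[young] = old
            parent[i] = old
    persistence[order[0]] = float("inf")
    return persistence

# Toy profile: minima at indices 0 (deep), 2 (shallow), 4 (intermediate).
profile = [0.0, 3.0, 1.0, 2.0, 0.5, 4.0]
print(minima_persistence(profile))  # {2: 1.0, 4: 2.5, 0: inf}
```

Thresholding on persistence keeps the deep or isolated minima and discards insignificant ruggedness, which is the intuition behind screening significant basins.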

Validating models.

A natural and critical question concerns the validation of the models proposed in structural bioinformatics. For all three types of questions of interest (structures, thermodynamics, kinetics), there exist experiments against which the models must be confronted – when the experiments can be conducted.

For structures, the models proposed can readily be compared against experimental results stemming from X-ray crystallography, NMR, or cryo-electron microscopy. For thermodynamics, which we illustrate here with binding affinities, predictions can be compared against measurements provided by calorimetry or surface plasmon resonance. Lastly, kinetic predictions can also be assessed by various experiments, such as binding affinity measurements (for the prediction of k_on and k_off), or fluorescence-based methods (for folding kinetics).
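The kinetic and thermodynamic observables mentioned above are linked: the dissociation constant K_d, the quantity measured by calorimetry or surface plasmon resonance, equals k_off/k_on. A one-line sanity check with illustrative rate values:

```python
# Link between binding kinetics and binding affinity: Kd = koff / kon.
# Illustrative rate values, not taken from any specific experiment.
kon = 1e6    # association rate constant, in M^-1 s^-1
koff = 1e-2  # dissociation rate constant, in s^-1

Kd = koff / kon  # dissociation constant, in M
print(f"Kd = {Kd:.0e} M")  # ~1e-08 M, i.e. a ~10 nM binder
```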

Last activity report: 2025

The results

New results

Modeling the dynamics of proteins

Keywords: Protein flexibility, protein conformations, collective coordinates, conformational sampling, loop closure, kinematics, dimensionality reduction.

Simpler protein domain identification using spectral clustering

The decomposition of a biomolecular complex into domains is an important step to investigate biological functions and ease structure determination. A successful approach to do so is the SPECTRUS algorithm, which provides a segmentation based on spectral clustering applied to a graph coding inter-atomic fluctuations derived from an elastic network model.

In 19, we make three straightforward and useful additions to SPECTRUS. For single structures, we show that high-quality partitionings can be obtained from a graph Laplacian derived from pairwise interactions, without normal modes. For sets of homologous structures, we introduce a Multiple Sequence Alignment (MSA) mode, exploiting both sequence-based information and the geometric information embodied in experimental structures. Finally, we propose to analyze the clusters/domains delivered using the so-called D-Family matching algorithm, which establishes a correspondence between the domains yielded by two decompositions, and can be used to handle fragmentation issues.

Our domains compare favorably to those of the original SPECTRUS, and to those of the deep-learning-based method Chainsaw. Using two complex cases, we show in particular that ours is the only method handling complex conformational changes involving several sub-domains. Finally, a comparison of our method and Chainsaw against the manually curated domain classification ECOD as a reference shows that high-quality domains are obtained without using any evolutionary information.

Our method is provided in the Structural Bioinformatics Library; see SBL and Spectral domain explorer.
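The spectral bipartition at the heart of this family of methods can be sketched on a toy point set. Note that SPECTRUS and our method build the graph from inter-atomic fluctuations or pairwise interactions; the sketch below uses a simple distance-based Gaussian affinity purely for illustration.

```python
import numpy as np

# Toy "protein": two compact groups of atoms standing in for two domains.
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0],   # domain A
                   [6.0, 0.0, 0.0], [7.0, 0.0, 0.0], [6.5, 1.0, 0.0]])  # domain B

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
W = np.exp(-dist**2 / 8.0)          # Gaussian affinity (bandwidth is arbitrary)
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
_, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
fiedler = eigvecs[:, 1]             # eigenvector of 2nd smallest eigenvalue
labels = (fiedler > 0).astype(int)  # its sign gives a 2-way decomposition

print(labels)  # atoms 0-2 fall in one domain, atoms 3-5 in the other
```

For k domains, one would keep the first k eigenvectors and cluster the rows of the resulting embedding (e.g., with k-means), which is the standard spectral clustering pipeline.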

Algorithmic foundations

Keywords: Computational geometry, computational topology, optimization, graph theory, data analysis, statistical physics.

Improved seeding strategies for k-means and k-GMM

In 18, we revisit randomized seeding techniques for k-means clustering and k-GMM (Gaussian Mixture Model fitting with Expectation-Maximization), formalizing their three key ingredients: the metric used for seed sampling, the number of candidate seeds, and the metric used for seed selection. This analysis yields novel families of initialization methods exploiting a lookahead principle (conditioning seed selection on enhanced coherence with the final metric used to assess the algorithm) and a multipass strategy to tame the effect of randomization.

Experiments show a significant improvement over classical contenders. In particular, for k-means, our methods improve on the recently designed multi-swap strategy (similar results in terms of sum of squared errors (SSE), with seeding ×6 faster), which was the first to outperform the greedy k-means++ seeding.

Our experimental analysis also sheds light on subtle properties of k-means often overlooked, including the (lack of) correlation between the SSE upon seeding and the final SSE, the variance reduction phenomena observed in iterative seeding methods, and the sensitivity of the final SSE to the pool size for greedy methods.

Practically, our most effective seeding methods are strong candidates to become one of the–if not the–standard technique(s). From a theoretical perspective, our formalization of seeding opens the door to a new line of analytical approaches.
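The greedy k-means++ baseline mentioned above can be sketched as follows: candidates are sampled with probability proportional to the squared distance to the current seeds (D² sampling), and the candidate that most reduces the potential is kept. This is a simplified illustration of the baseline, not the exact lookahead/multipass variants studied in 18.

```python
import numpy as np

def greedy_kmeanspp_seeds(points, k, n_candidates=5, rng=None):
    """Greedy k-means++ seeding: D^2 candidate sampling followed by
    selection of the candidate minimizing the resulting potential
    (sum of squared distances to the nearest seed). A simplified sketch."""
    rng = rng or np.random.default_rng()
    seeds = [points[rng.integers(len(points))]]
    d2 = np.sum((points - seeds[0])**2, axis=1)
    for _ in range(k - 1):
        probs = d2 / d2.sum()
        candidates = rng.choice(len(points), size=n_candidates, p=probs)
        # Greedy selection: evaluate the potential each candidate yields.
        best = min(candidates, key=lambda c: np.minimum(
            d2, np.sum((points - points[c])**2, axis=1)).sum())
        seeds.append(points[best])
        d2 = np.minimum(d2, np.sum((points - points[best])**2, axis=1))
    return np.array(seeds)

# Three well-separated blobs: seeding places one seed per blob.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in (0.0, 5.0, 10.0)])
seeds = greedy_kmeanspp_seeds(pts, k=3, rng=rng)
print(seeds)
```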

Modeling high dimensional point clouds with the spherical cluster model

In collaboration with L. Goldenberg (former Inria intern).

A parametric cluster model is a statistical model providing geometric insights into the points defining a cluster. The spherical cluster model (SC) approximates a finite point set P ⊂ ℝ^d by a sphere S(c,r) as follows. Taking r as a fraction η ∈ (0,1) (a hyper-parameter) of the standard deviation of distances between the center c and the data points, the cost of the SC model is the sum, over all data points lying outside the sphere S, of their power distance with respect to S. The center c of the SC model is the point minimizing this cost. Note that η=0 yields the celebrated center of mass used in k-means clustering. We make three contributions 21.

First, we show that fitting a spherical cluster yields a strictly convex but not smooth combinatorial optimization problem. Second, we present an exact solver using the Clarke gradient on a suitable stratified cell complex defined from an arrangement of hyper-spheres. Finally, we present experiments on a variety of datasets ranging in dimension from d=9 to d=10,000, with two main observations. First, the exact algorithm is orders of magnitude faster than Broyden-Fletcher-Goldfarb-Shanno (BFGS) based heuristics for datasets of small/intermediate dimension and small values of η, and for high dimensional datasets (say d>100) whatever the value of η. Second, the center of the SC model behaves as a parameterized high-dimensional median.
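The cost of the SC model, as defined above, is direct to transcribe; the sketch below evaluates it at a given center and checks that η=0 recovers the k-means potential (sum of squared distances, minimized by the center of mass). It is not the exact solver, which operates on a stratified cell complex with the Clarke gradient.

```python
import numpy as np

def sc_cost(center, points, eta):
    """Cost of the spherical cluster model S(c, r) at a given center c.

    r is eta times the standard deviation of center-to-point distances;
    points outside the sphere contribute their power distance d^2 - r^2.
    A direct transcription of the model's definition (illustrative only).
    """
    d = np.linalg.norm(points - center, axis=1)
    r = eta * d.std()
    outside = d > r
    return np.sum(d[outside]**2 - r**2)

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [4.0, 4.0]])
com = pts.mean(axis=0)  # center of mass (1.5, 1.5)

# eta = 0: r = 0, so the cost is the plain sum of squared distances.
print(sc_cost(com, pts, eta=0.0))  # 22.0
```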

The SC model is of direct interest for high dimensional multivariate data analysis, and the application to the design of mixtures of SC will be reported in a companion paper.

Applications in structural bioinformatics and beyond

Keywords: Docking, scoring, interfaces, protein complexes, phylogeny, evolution.

Fold or flop: quality assessment of AlphaFold predictions on whole proteomes

The reliability of AlphaFold predictions is primarily assessed by the method's self-reported confidence score, the predicted Local Distance Difference Test (pLDDT). For model organisms, AlphaFold predictions show that 30% to 40% of all amino acids fall into the low-confidence pLDDT range. Moreover, pLDDT has occasionally failed to flag predictions that are physically implausible. This raises two fundamental questions: can we identify more robust indicators of reliability? And do unreliable predictions exhibit shared structural or biophysical traits?

To address these questions, we introduce semi-global statistics characterizing packing properties at multiple scales, and perform dimensionality reduction and clustering at once 23. We use these to carry out a systematic whole-proteome structural quality assessment of the predictions contained in the AlphaFold Database (AFDB), investigating connections between unreliable predictions, fold classification, and intrinsic disorder propensity.

Our results reveal consistent relationships between low-confidence predictions, clustering of intrinsically disordered regions (IDRs), and distinctive packing properties, thereby highlighting both strengths and limitations of current self-assessment metrics. This work provides a framework for deeper confidence assessment of AlphaFold predictions and offers generalizable strategies for distinguishing reliable from unreliable structural models.

Characterizing the fragmentation of AlphaFold predictions

The Nobel-prize-winning program AlphaFold computes plausible structures of (well-)folded proteins. Its main quality assessment is based on the predicted Local Distance Difference Test (pLDDT), a per-amino-acid confidence score. To enhance quality assessment, we provide novel quantitative measures identifying coherent amino acid (a.a.) stretches along the sequence in terms of pLDDT values 22. These measures, which rely on standard tools from topological data analysis and combinatorics, qualify the coherence/fragmentation of AlphaFold predictions. The outcome of our analysis can readily be used to select reliable regions/domains within proteins whose pLDDT values span the entire pLDDT range.
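As an elementary illustration of identifying coherent stretches along the sequence, one can extract maximal runs of residues above a pLDDT threshold; the measures in 22, based on topological data analysis, refine this crude view. The threshold and scores below are illustrative.

```python
def confident_stretches(plddt, threshold=70.0, min_len=3):
    """Maximal runs of consecutive residues with pLDDT >= threshold.

    Returns (start, end) index pairs, end exclusive. A deliberately
    elementary sketch of identifying coherent stretches along a sequence.
    """
    stretches, start = [], None
    for i, v in enumerate(plddt):
        if v >= threshold and start is None:
            start = i                              # a run opens
        elif v < threshold and start is not None:
            if i - start >= min_len:
                stretches.append((start, i))       # a long-enough run closes
            start = None
    if start is not None and len(plddt) - start >= min_len:
        stretches.append((start, len(plddt)))      # run reaching the terminus
    return stretches

# Toy per-residue scores: two confident stretches separated by low values.
scores = [30, 85, 90, 92, 40, 95, 96, 97, 98, 20]
print(confident_stretches(scores))  # [(1, 4), (5, 9)]
```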

Orphan genes survey

Orphan genes are protein-coding genes that lack detectable homologs in other species, making them lineage-specific and evolutionarily enigmatic. This review 20 synthesizes research on orphan genes in animals and fungi, summarizing their prevalence, proposed origins (including divergence and de novo emergence), and biological roles. Orphan genes are implicated in diverse processes such as reproduction, development, adaptation, and disease, highlighting their functional importance. They are especially interesting for computational biology because identifying them challenges homology-based annotation methods and requires novel comparative and statistical approaches. By consolidating scattered knowledge, this work provides a foundation for developing better computational tools to detect, classify, and model the evolution and function of orphan genes.

Orphan genes detection and classification

Building on the broader synthesis of orphan gene prevalence and function, we provide a focused, data-driven case in plant-parasitic nematodes of the genus Meloidogyne. Using comparative genomics across 85 nematode species, we show that orphan genes are not rare anomalies but constitute 18% of the genome, with strong transcriptional support 24. By integrating synteny and ancestral sequence reconstruction, the work quantifies the relative contributions of divergence and de novo gene birth, directly addressing questions raised in the earlier review. Proteomic and translatomic evidence further validates these genes as bona fide coding sequences with distinctive molecular features. Together, this study builds a new and effective pipeline for detecting and classifying orphan genes, and exemplifies how computational approaches can move from cataloging orphan genes to dissecting their origins and linking them to lineage-specific adaptations such as parasitism.

You can add formatted text here as needed.

  • Research axis 1

    …….

  • Research axis 2

    ……….

  • Research axis 3

    ……….