Diffusion on Language Model Encodings for Protein Sequence Generation

1 Constructor University, 2 AIRI
International Conference on Machine Learning (ICML) 2025

*Indicates Equal Contribution

Abstract

Protein sequence design has seen significant advances through discrete diffusion and autoregressive approaches, yet the potential of continuous diffusion remains underexplored. Here, we present DiMA, a latent diffusion framework that operates on protein language model representations. Through systematic exploration of architectural choices and diffusion components, we develop a robust methodology that generalizes across multiple protein encoders ranging from 8M to 3B parameters. We demonstrate that our framework achieves consistently high performance across sequence-only (ESM-2, ESMc), dual-decodable (CHEAP), and multimodal (SaProt) representations using the same architecture and training approach. We conduct an extensive evaluation of existing methods alongside DiMA using multiple metrics across two protein modalities, covering quality, diversity, novelty, and distribution matching of generated proteins. DiMA consistently produces novel, high-quality, and diverse protein sequences and achieves strong results compared to baselines such as autoregressive, discrete diffusion, and flow-matching language models. The model demonstrates versatile functionality, supporting conditional generation tasks including protein family generation, motif scaffolding and infilling, and fold-specific sequence design, despite being trained solely on sequence data. This work provides a universal continuous diffusion framework for protein sequence generation, offering both architectural insights and practical applicability across various protein design scenarios.

Model Overview

In this study, we develop DiMA, a new latent diffusion model that operates on protein language model representations. We demonstrate that continuous diffusion on protein embeddings enables effective sequence and structure generation across multiple tasks and encoder architectures. DiMA addresses a key limitation of previous continuous diffusion approaches, which have been tied to specific representations, by establishing a unified framework that generalizes across diverse protein encoders. By operating in the continuous latent space of these pre-trained encoders, our approach circumvents the challenges associated with discrete sequence modeling while maintaining the expressiveness needed for complex protein design tasks. The framework is designed to be encoder-agnostic, allowing it to benefit from advances in protein representation learning without requiring architectural modifications.
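As a concrete illustration of the latent space DiMA operates in, the sketch below obtains continuous per-residue embeddings from a pre-trained ESM-2 encoder via the HuggingFace transformers library. The specific checkpoint, the example sequence, and the direct use of the raw hidden states are illustrative assumptions; DiMA's actual encoder wrappers, normalization, and any dimensionality reduction are not shown here.

```python
# Minimal sketch: encoding an amino acid sequence into continuous latents
# with a pre-trained protein language model (ESM-2 via HuggingFace transformers).
# Checkpoint choice and raw hidden-state usage are illustrative, not DiMA's exact setup.
import torch
from transformers import AutoTokenizer, EsmModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
encoder = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")
encoder.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example amino acid sequence
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state: (batch, tokens, hidden_dim) continuous latent representation
    latents = encoder(**inputs).last_hidden_state

print(latents.shape)  # e.g., torch.Size([1, 35, 320]) for the 8M-parameter ESM-2
```

Because the diffusion model only sees these continuous latents, swapping in a larger encoder (e.g., a 3B-parameter ESM-2 or a multimodal encoder such as SaProt) changes the latent dimensionality but not the overall training recipe.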

The DiMA framework consists of three main components: (1) a pre-trained protein language model encoder that maps amino acid sequences to continuous latent representations, (2) a diffusion denoiser that generates latent vectors from Gaussian noise, and (3) sequence and structure decoders that reconstruct amino acid sequences and protein structures from the generated latent representations. During training, the model learns to denoise corrupted protein representations. During inference, the framework supports both unconditional generation and conditional tasks including motif scaffolding, fold conditioning, and family-specific generation. The approach enables joint sequence-structure generation while operating entirely in continuous latent space.
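The sketch below shows what "learning to denoise corrupted protein representations" looks like in a generic latent-diffusion training step: a clean latent is mixed with Gaussian noise according to a schedule, and the denoiser is trained to recover the clean latent with an MSE objective. The noise schedule, the x0-prediction parameterization, and the `denoiser` network are placeholders assumed for illustration; DiMA's actual schedule, self-conditioning, and denoiser architecture follow the paper.

```python
# Minimal sketch of a latent-diffusion training step on protein-LM embeddings.
# Schedule, parameterization, and the `denoiser` module are generic placeholders.
import torch
import torch.nn.functional as F

def diffuse(x0, alpha_bar):
    """Forward process: x_t = sqrt(abar) * x_0 + sqrt(1 - abar) * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * eps
    return x_t, eps

def training_step(denoiser, x0, alphas_bar):
    """One denoising step: corrupt clean latents x0 and predict them back."""
    t = torch.randint(0, len(alphas_bar), (x0.shape[0],), device=x0.device)
    abar = alphas_bar[t].view(-1, 1, 1)        # broadcast over (batch, length, dim)
    x_t, _ = diffuse(x0, abar)
    x0_pred = denoiser(x_t, t)                 # denoiser conditioned on the timestep
    return F.mse_loss(x0_pred, x0)             # reconstruction loss in latent space
```

At inference time, sampling starts from pure Gaussian noise and iteratively applies the denoiser to produce a clean latent, which the sequence decoder (and, where available, the structure decoder) maps back to an amino acid sequence or structure.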

BibTeX

@inproceedings{meshchaninov2025dima,
    title={Diffusion on Language Model Encodings for Protein Sequence Generation},
    author={Meshchaninov, Viacheslav and Strashnov, Pavel and Shevtsov, Andrey and Nikolaev, Fedor and Ivanisenko, Nikita and Kardymon, Olga and Vetrov, Dmitry},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2025}
}