Protein sequence design has seen significant advances through discrete diffusion and autoregressive approaches, yet the potential of continuous diffusion remains underexplored. Here, we present DiMA, a latent diffusion framework that operates on protein language model representations. Through systematic exploration of architectural choices and diffusion components, we develop a robust methodology that generalizes across multiple protein encoders ranging from 8M to 3B parameters. We demonstrate that our framework achieves consistently high performance across sequence-only (ESM-2, ESMc), dual-decodable (CHEAP), and multimodal (SaProt) representations using the same architecture and training approach. We conduct an extensive evaluation of existing methods alongside DiMA using multiple metrics across two protein modalities, covering quality, diversity, novelty, and distribution matching of generated proteins. DiMA consistently produces novel, high-quality, and diverse protein sequences and achieves strong results compared to baselines such as autoregressive, discrete diffusion, and flow-matching language models. The model demonstrates versatile functionality, supporting conditional generation tasks including protein family generation, motif scaffolding, infilling, and fold-specific sequence design, despite being trained solely on sequence data. This work provides a universal continuous diffusion framework for protein sequence generation, offering both architectural insights and practical applicability across various protein design scenarios.
In this study, we develop DiMA, a new latent diffusion model that operates on protein language model representations. We demonstrate that continuous diffusion on protein embeddings enables effective sequence and structure generation across multiple tasks and encoder architectures. DiMA addresses a key limitation of previous continuous diffusion approaches, which have been restricted to specific representations, by establishing a unified framework that generalizes across diverse protein encoders. By operating in the continuous latent space of these pre-trained encoders, our approach circumvents the challenges associated with discrete sequence modeling while maintaining the expressiveness needed for complex protein design tasks. The framework is designed to be encoder-agnostic, allowing it to benefit from advances in protein representation learning without requiring architectural modifications, as illustrated by the sketch below.
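To make the overall pipeline concrete, the following is a minimal, illustrative sketch of latent diffusion over frozen protein language model embeddings. The denoiser architecture, noise schedule, x0-prediction objective, and hyperparameters shown here are simplifying assumptions for demonstration only and do not reproduce DiMA's exact configuration; the ESM-2 8M checkpoint is used as one example of an interchangeable encoder.

```python
# Sketch: continuous diffusion on frozen protein LM embeddings (illustrative only).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen protein language model encoder (ESM-2, 8M parameters, as one example).
enc_name = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(enc_name)
encoder = AutoModel.from_pretrained(enc_name).to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

hidden = encoder.config.hidden_size  # 320 for the 8M checkpoint

# Hypothetical denoiser: a small transformer that predicts the clean latent x0
# from the noised latent x_t and the diffusion timestep t.
class Denoiser(nn.Module):
    def __init__(self, dim, n_layers=4, n_heads=8, t_max=1000):
        super().__init__()
        self.time_emb = nn.Embedding(t_max, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(dim, dim)

    def forward(self, x_t, t):
        h = x_t + self.time_emb(t)[:, None, :]  # broadcast time embedding over residues
        return self.out(self.backbone(h))

T = 1000
betas = torch.linspace(1e-4, 0.02, T, device=device)  # assumed linear variance schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative \bar{alpha}_t

denoiser = Denoiser(hidden).to(device)
opt = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)

def training_step(sequences):
    """One diffusion training step: encode sequences, noise the latents, predict x0."""
    batch = tokenizer(sequences, return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        x0 = encoder(**batch).last_hidden_state  # (B, L, hidden) clean latents

    t = torch.randint(0, T, (x0.size(0),), device=device)
    a = alphas_bar[t].view(-1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # variance-preserving forward noising

    x0_pred = denoiser(x_t, t)
    loss = nn.functional.mse_loss(x0_pred, x0)      # x0-prediction objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example usage with toy sequences:
# training_step(["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ"])
```

Because the encoder is frozen and only supplies latents, swapping in a different representation (e.g., a larger ESM-2 variant or another protein encoder) amounts to changing `enc_name` and the latent dimensionality; this is the sense in which the framework is encoder-agnostic.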
@inproceedings{meshchaninov2025dima,
title={Diffusion on Language Model Encodings for Protein Sequence Generation},
author={Meshchaninov, Viacheslav and Strashnov, Pavel and Shevtsov, Andrey and Nikolaev, Fedor and Ivanisenko, Nikita and Kardymon, Olga and Vetrov, Dmitry},
booktitle={International Conference on Machine Learning (ICML)},
year={2025}
}