
Beyond attention

Machine learning

Investigating recurrent memory augmentation of pre-trained transformer models shows that information can be stored in memory for sequences of up to 2 million tokens.

Beyond attention: breaking the limits of transformer context length with recurrent memory

Conference on Neural Information Processing Systems

A. Bulatov, Y. Kuratov, Y. Kapushev, M. Burtsev

A major limitation for the broader scope of problems solvable by transformers is the quadratic scaling of computational complexity with input size. In this study, we investigate the recurrent memory augmentation of pre-trained transformer models to extend input context length while linearly scaling compute. Our approach demonstrates the capability to store information in memory for sequences of up to an unprecedented two million tokens while maintaining high retrieval accuracy. Experiments with language modelling tasks show perplexity improvement as the number of processed input segments increases. These results underscore the effectiveness of our method, which has significant potential to enhance long-term dependency handling in natural language understanding and generation tasks, as well as enable large-scale context processing for memory-intensive applications.
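The idea of extending context with linear compute can be sketched as segment-level recurrence: the long input is split into fixed-size segments, a small set of memory embeddings is prepended to each segment, and the memory positions of the output are carried over to the next segment. The sketch below is illustrative only, assuming a toy stand-in for the transformer; the function names, shapes, and the linear-map "model" are not from the paper.

```python
import numpy as np

def toy_transformer(x, W):
    # Stand-in for a pre-trained transformer block: a fixed linear
    # map with a tanh nonlinearity over token embeddings. The real
    # method uses an actual transformer; this is a placeholder.
    return np.tanh(x @ W)

def process_long_sequence(tokens, segment_len, num_mem, dim, seed=0):
    """Split a long sequence of token embeddings into segments and
    carry a small block of memory embeddings between them.

    Compute grows linearly with the number of segments, while each
    segment's attention cost is bounded by (num_mem + segment_len)^2.
    All names and shapes here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    memory = np.zeros((num_mem, dim))  # initial memory state
    outputs = []
    for start in range(0, len(tokens), segment_len):
        segment = tokens[start:start + segment_len]
        # Prepend memory embeddings to the segment input.
        x = np.concatenate([memory, segment], axis=0)
        y = toy_transformer(x, W)
        # The memory positions of the output become the memory for
        # the next segment -- this is the recurrent step.
        memory = y[:num_mem]
        outputs.append(y[num_mem:])
    return np.concatenate(outputs, axis=0), memory
```

Because only the fixed-size memory block is passed between segments, the per-segment cost is constant, so total compute scales linearly with input length.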
