Our papers are the official record of our discoveries. They allow others to build on and apply our work. Each paper is the result of many months of research, so we take special care to make our papers clear, beautiful and inspirational, and to publish them in leading journals.

  • The working limitations of large language models

    Machine learning

    M. Burtsev, M. Reeves, A. Job · MIT Sloan Management Review

    The limits of LLMs

    Large language models like ChatGPT can generate human-like text, but businesses that overestimate their abilities risk misusing the technology.

  • DeepPavlov Dream: platform for building generative AI assistants

    Machine learning

    DZ, DK, FI, MT, DE, et al. · Annual Meeting of the Association for Computational Linguistics

    DeepPavlov Dream

    A new open-source platform is specifically tailored for developing complex dialogue systems, like generative conversational AI assistants.

  • Monolingual and cross-lingual knowledge transfer for topic classification

    Computational linguistics

    DK, M. Burtsev · arXiv

    Cross-lingual knowledge

    Models trained on a Russian topical dataset of knowledge-grounded human-human conversation can handle real-world tasks across languages.

  • GENA-LM: A family of open-source foundational models for long DNA sequences

    Machine learning

    VF, YK, MP, AS, DS, et al. · arXiv

    Speaking DNA

    A family of transformer-based DNA language models can interpret genomic sequences, opening new possibilities for complex biological research.

  • Machine learning

    Submitted

    BERT enhanced with recurrence

    The quadratic complexity of attention in transformers is tackled by combining token-based memory and segment-level recurrence in the Recurrent Memory Transformer (RMT).