Compresr specializes in context compression technology for large language model (LLM) agents, achieving up to 90% reduction in token usage while maintaining semantic meaning. Their proprietary model, cmprsr-v1, is designed for diverse applications, including finance, legal, and healthcare sectors.
Compress context for LLM queries to save on token usage; Utilize pre-compressed knowledge for efficient data retrieval; Implement custom compression for finance and legal documents; Achieve extreme compression rates for specific queries; Reduce API costs for engineering teams
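The first use case above, compressing context before an LLM query, can be sketched in a few lines. This is an illustrative toy, not Compresr's actual API: the `compress` function below simply drops stopwords as a crude stand-in for semantic compression, and `token_reduction` measures the resulting whitespace-token savings.

```python
# Hypothetical sketch of a context-compression step before an LLM call.
# `compress` is an illustrative stand-in, NOT Compresr's cmprsr-v1 model:
# it drops stopwords as a crude proxy for semantic compression.

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "for"}

def compress(context: str) -> str:
    """Toy compressor: keep only non-stopword tokens."""
    return " ".join(w for w in context.split() if w.lower() not in STOPWORDS)

def token_reduction(original: str, compressed: str) -> float:
    """Fraction of (whitespace) tokens removed by compression."""
    before, after = len(original.split()), len(compressed.split())
    return 1 - after / before

context = "The quarterly report of the finance team is attached to the email for review"
small = compress(context)
print(small)                                          # compressed context
print(f"reduction: {token_reduction(context, small):.0%}")
```

A production compressor would preserve semantic meaning with a learned model rather than a stopword list; the point here is only the shape of the workflow: compress first, then send the shorter context to the LLM.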
Co-founder & COO/CPO
Co-founder & CTO
Co-founder & CAIO
Doctoral Assistant (PhD) at EPFL
Compresr's main product is the cmprsr-v1 compression model, designed for applications across the finance, legal, and healthcare sectors.
Key features of Compresr's offerings include query-time context compression, pre-compressed knowledge bases for efficient retrieval, and custom compression pipelines for domain-specific documents such as finance and legal filings.
The benefits of using Compresr's technology include lower operational costs for engineering teams and data scientists, improved LLM performance, and the ability to handle larger datasets without compromising quality.
Backed by Y Combinator; achieves up to 64% cost reduction on API usage; demonstrated a +3% accuracy improvement at 2X compression
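The cost figure above can be sanity-checked with back-of-the-envelope arithmetic. The token volume and per-token price below are illustrative assumptions, not Compresr's published figures; the sketch only shows how compression ratio translates into API spend.

```python
# Back-of-the-envelope estimate of API cost savings from prompt compression.
# All numbers are illustrative assumptions, not Compresr's published figures.

def monthly_cost(tokens_per_month: float, price_per_1k: float) -> float:
    """API cost given token volume and price per 1K tokens."""
    return tokens_per_month / 1000 * price_per_1k

TOKENS = 100_000_000   # assumed monthly prompt tokens
PRICE = 0.01           # assumed USD per 1K input tokens
COMPRESSION = 0.5      # 2X compression keeps half the tokens

baseline = monthly_cost(TOKENS, PRICE)
compressed = monthly_cost(TOKENS * COMPRESSION, PRICE)
print(f"savings: {baseline - compressed:.0f} USD ({1 - compressed / baseline:.0%})")
```

At 2X compression, input-token spend halves; higher ratios (the profile cites up to 90% token reduction) would cut the input-side bill proportionally, though total savings depend on how much of a bill is input versus output tokens.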