Mercury: Revolutionizing AI with Diffusion LLMs
What is Mercury?
Mercury, developed by Inception, represents a new era in Large Language Models (LLMs) by leveraging diffusion technology. These diffusion LLMs (dLLMs) offer significant advantages in speed, efficiency, accuracy, and controllability compared to traditional auto-regressive LLMs.
How does Mercury work?
Unlike conventional LLMs, which generate text sequentially, one token at a time, Mercury's dLLMs generate tokens in parallel. This parallel generation dramatically increases throughput and makes fuller use of GPU parallelism, which is why Mercury suits real-time AI applications.
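To make the cost difference concrete, here is a toy sketch (not Inception's actual algorithm) of why parallel generation is faster: an autoregressive decoder needs one sequential model call per token, while a diffusion-style decoder refines every position at once over a small, fixed number of denoising steps.

```python
# Toy illustration of decoding cost -- NOT Inception's actual algorithm.
# An autoregressive decoder makes one model call per token; a diffusion-style
# decoder updates all positions in parallel over a few denoising steps.

def autoregressive_decode(num_tokens: int) -> int:
    """Return the number of sequential model calls to emit num_tokens."""
    calls = 0
    for _ in range(num_tokens):  # one forward pass per generated token
        calls += 1
    return calls

def diffusion_decode(num_tokens: int, denoise_steps: int = 8) -> int:
    """Return the number of sequential model calls: each call refines all
    num_tokens positions at once, so the count is independent of length."""
    calls = 0
    for _ in range(denoise_steps):  # refine the whole sequence each pass
        calls += 1
    return calls

print(autoregressive_decode(1024))  # 1024 sequential calls
print(diffusion_decode(1024))       # 8 sequential calls
```

The sequential-call count is what dominates wall-clock latency on a GPU, which is why collapsing it from thousands to a handful matters.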
Key Features and Benefits:
- Blazing Fast Inference: Experience ultra-low latency, enabling responsive AI interactions.
- Frontier Quality: Benefit from high accuracy and controllable text generation.
- Cost-Effective: Reduce operational costs with maximized GPU efficiency.
- OpenAI API Compatible: Seamlessly integrate Mercury into existing workflows as a drop-in replacement for traditional LLMs.
- Large Context Window: Both Mercury Coder and Mercury support a 128K context window.
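Because Mercury exposes an OpenAI-compatible API, a drop-in integration is mostly a matter of pointing an existing client at a different base URL. A minimal sketch using only the standard library; the base URL and model name below are assumptions to confirm against Inception's documentation:

```python
import json
import urllib.request

# Assumed endpoint and model name -- verify against Inception's API docs.
BASE_URL = "https://api.inceptionlabs.ai/v1"

payload = {
    "model": "mercury-coder",
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
}

# Build an OpenAI-style chat-completions request. Because the schema matches
# the OpenAI API, existing client code and SDKs need no structural changes.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_INCEPTION_API_KEY",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; the response follows the
# familiar chat-completions shape (choices[0].message.content, etc.).
```

Equivalently, the official OpenAI SDK can be reused by setting its `base_url` parameter to the same endpoint.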
AI Applications Powered by Mercury:
Mercury's speed and efficiency unlock a wide range of AI applications:
- Coding: Accelerate coding workflows with lightning-fast autocomplete, tab suggestions, and editing.
- Voice: Deliver responsive voice experiences in customer service, translation, and sales.
- Search: Instantly surface relevant data from any knowledge base, minimizing research time.
- Agents: Run complex multi-turn systems while maintaining low latency.
Mercury Models:
- Mercury Coder: Optimized for coding workflows, supporting streaming, tool use, and structured output. Pricing: Input $0.25 | Output $1 per 1M tokens.
- Mercury: General-purpose dLLM providing ultra-low latency, also supporting streaming, tool use, and structured output. Pricing: Input $0.25 | Output $1 per 1M tokens.
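Both models advertise structured output through the same OpenAI-compatible surface. A hedged sketch of a structured-output request body, following the OpenAI `response_format`/`json_schema` convention; the model identifier and schema are illustrative assumptions, not confirmed Mercury specifics:

```python
import json

# Illustrative structured-output request in the OpenAI-compatible format
# that Mercury advertises. Model name and schema fields are assumptions;
# confirm against Inception's documentation.
payload = {
    "model": "mercury",  # hypothetical model identifier
    "messages": [
        {
            "role": "user",
            "content": "Extract city and temperature: 'It is 21C in Oslo.'",
        }
    ],
    # Constrain the reply to a JSON object matching this schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "weather_reading",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "temperature_c": {"type": "number"},
                },
                "required": ["city", "temperature_c"],
            },
        },
    },
}

body = json.dumps(payload)  # ready to POST to /chat/completions
```

Constraining output this way lets downstream code parse model replies directly instead of scraping free-form text.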
Why choose Mercury?
Testimonials from industry professionals highlight Mercury's exceptional speed and impact:
- Jacob Kim, Software Engineer: "I was amazed by how fast it was. The multi-thousand tokens per second was absolutely wild, nothing like I've ever seen."
- Oliver Silverstein, CEO: "After trying Mercury, it's hard to go back. We are excited to roll out Mercury to support all of our voice agents."
- Damian Tran, CEO: "We cut routing and classification overheads to sub-second latencies even on complex agent traces."
Who is Mercury for?
Mercury is designed for enterprises seeking to:
- Enhance AI application performance.
- Reduce AI infrastructure costs.
- Gain a competitive edge with cutting-edge AI technology.
How to integrate Mercury:
Mercury is available through major cloud providers, including AWS Bedrock and Azure AI Foundry, and is also accessible via platforms like OpenRouter and Quora. You can also get started directly with Inception's API.
To explore fine-tuning, private deployments, and forward-deployed engineering support, contact Inception.
Mercury offers a transformative approach to AI, making it faster, more efficient, and more accessible for a wide range of applications. Try the Mercury API today and experience the next generation of AI.
Best Alternative Tools to "Mercury"

Mammouth AI offers access to top AI models like GPT, Claude, and Gemini in one subscription. Stay updated with the latest AI advancements for just €10/month.

AI Runner is an offline AI inference engine for art, real-time voice conversations, LLM-powered chatbots, and automated workflows. Run image generation, voice chat, and more locally!

Explore TavonnAI, the ultimate platform for open-source AI. Generate images, animated GIFs, and chat with AI using 30+ LLMs. Try it free today!

MultiChat AI allows you to chat with top LLMs like GPT-4, Claude-3, Gemini 1.5 Pro, and more, all in one place. Also offers AI image generation and editing tools.

KoboldCpp: Run GGUF models easily for AI text & image generation with a KoboldAI UI. Single file, zero install. Supports CPU/GPU, STT, TTS, & Stable Diffusion.

What-A-Prompt is a user-friendly prompt optimizer for enhancing inputs to AI models like ChatGPT and Gemini. Select enhancers, input your prompt, and generate creative, detailed results to boost LLM outputs. Access a vast library of optimized prompts.

Chat with AI using your API keys. Pay only for what you use. GPT-4, Gemini, Claude, and other LLMs supported. The best chat LLM frontend UI for all AI models.

Explore AI Library, the comprehensive catalog of over 2150 neural networks and AI tools for generative content creation. Discover top AI art models, tools for text-to-image, video generation, and more to boost your creative projects.

TemplateAI is the leading NextJS template for AI apps, featuring Supabase auth, Stripe payments, OpenAI/Claude integration, and ready-to-use AI components for fast full-stack development.

Sagify is an open-source Python tool that streamlines machine learning pipelines on AWS SageMaker, offering a unified LLM Gateway for seamless integration of proprietary and open-source large language models to boost productivity.

Chat & Ask AI is an advanced AI chatbot powered by multiple LLMs, offering faster AI chat, image generation, writing tools, AI assistants, and WhatsApp integration.

deepsense.ai offers custom AI software development and consulting, specializing in LLMs, MLOps, computer vision, and AI-powered automation to drive business growth. Partner with trusted AI experts.

Symbl.ai transforms unstructured conversations into knowledge, events, and insights using state-of-the-art understanding and generative models.

Meteron AI is an all-in-one AI toolset that handles LLM and generative AI metering, load-balancing, and storage, freeing developers to focus on building AI-powered products.