AXON

Adaptive eXchange Oriented Notation

The next-generation data serialization format engineered for LLM interactions. Save up to 95% on token costs without sacrificing type safety or validation.

60-95% Token Reduction
204x More Efficient for Repeated Data
93% Test Coverage

See the Difference

Paste your JSON data and watch AXON compress it in real-time

Note: This is a conceptual demonstration. Real AXON uses binary encoding and is more sophisticated.

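To make the idea concrete, here is a toy sketch of why structured JSON wastes tokens: field names repeat on every row, while a compact columnar layout states them once. The `compactify` helper and its `|`/`~`/`;` delimiters are invented for illustration; real AXON uses binary encoding and is far more sophisticated.

```typescript
// Conceptual sketch only; not the real AXON encoding.
type Row = Record<string, string | number | boolean>;

function compactify(rows: Row[]): string {
  const keys = Object.keys(rows[0] ?? {});
  const header = keys.join("|");                                   // field names, stated once
  const body = rows
    .map(r => keys.map(k => String(r[k])).join("|"))               // values only, per row
    .join(";");
  return `${header}~${body}`;
}

const users = [
  { name: "Alice", age: 30, active: true },
  { name: "Bob", age: 30, active: true },
];

const json = JSON.stringify(users);
const compact = compactify(users);
console.log(json.length, compact.length); // the compact form is much shorter
```

Even this naive text transform cuts the character count roughly in half; AXON's binary encoders go much further.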

Get Started in Seconds

Simple API, powerful compression

Installation
npm install @axon-format/core
Basic Usage
import { encode, decode } from '@axon-format/core';

// Your data (repeated values like these compress especially well)
const data = {
  users: [
    { name: "Alice", age: 30, active: true },
    { name: "Bob", age: 30, active: true },
    { name: "Carol", age: 30, active: true }
  ]
};

// Encode to AXON (saves 60-95% tokens)
const encoded = encode(data);

// Send to LLM...
const response = await sendToLLM(encoded);

// Decode back to original
const decoded = decode(response);
With Schema Validation
import { encode, decode, type Schema } from '@axon-format/core';

// Define schema as a plain object
const userSchema: Schema = {
  name: 'User',
  fields: [
    { name: 'name', type: 'str' },
    { name: 'age', type: 'u8' },
    { name: 'email', type: 'str' },
    { name: 'verified', type: 'bool' }
  ]
};

// Your data
const userData = {
  name: "Alice",
  age: 30,
  email: "alice@example.com",
  verified: true
};

// Encode with validation
const encoded = encode(userData, { schemas: [userSchema] });

// Type-safe decode
const decoded = decode(encoded, { schemas: [userSchema] });

Drop-in Replacement

Works with your existing JSON data structures

Type Safe

13 validated types with schema support

Battle Tested

93% test coverage, 342 passing tests

Open Source

MIT licensed, community-driven

Calculate Your Savings

See how much you could save on your LLM API costs

Example: at a 75% compression rate (AXON typically achieves 60-95% depending on data patterns), a $125.00/month JSON token spend drops to $31.25/month with AXON. You save $93.75 per month, or $1,125 per year.
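The arithmetic behind the calculator is simple enough to run yourself. The helper below is hypothetical (not part of the AXON API) and just mirrors the example figures above:

```typescript
// Hypothetical helper; not part of @axon-format/core.
function estimateSavings(monthlyJsonCost: number, compressionRate: number) {
  const withAxon = monthlyJsonCost * (1 - compressionRate); // what you pay after compression
  const monthly = monthlyJsonCost - withAxon;               // monthly savings
  return { withAxon, monthly, yearly: monthly * 12 };
}

const s = estimateSavings(125, 0.75);
console.log(s); // { withAxon: 31.25, monthly: 93.75, yearly: 1125 }
```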

These estimates are conservative. Actual savings may be higher depending on your data patterns.

Why AXON?

Built specifically for the AI era

Massive Cost Savings

60-95% token reduction means dramatically lower API costs. Save thousands per month on GPT-4, Claude, and other LLM APIs.

Type Safety

13 validated types including u8, i32, f64, bool, iso8601, uuid, and more. Catch errors before they reach production.
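As a rough sketch of what validating a typed field means, here is a minimal checker for three of the listed types. This is illustrative only; AXON's real schema validation covers all 13 types and is richer than this:

```typescript
// Illustrative validator sketch; not AXON's actual implementation.
// "u8" is taken in its usual meaning: an integer in [0, 255].
type FieldType = "str" | "u8" | "bool";

function validateField(type: FieldType, value: unknown): boolean {
  switch (type) {
    case "str":
      return typeof value === "string";
    case "u8":
      return Number.isInteger(value) && (value as number) >= 0 && (value as number) <= 255;
    case "bool":
      return typeof value === "boolean";
  }
}

console.log(validateField("u8", 30));  // true
console.log(validateField("u8", 300)); // false: out of u8 range
```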

5 Compression Algorithms

RLE, Dictionary, Delta, Bit Packing, and Varint encoding. Automatically selects the best algorithm for your data.
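To illustrate one of these strategies, here is a minimal run-length encoder. This is a sketch of the general RLE technique, not AXON's internal encoder, which selects and applies its algorithms automatically:

```typescript
// Run-length encoding sketch: collapse runs of equal values into [value, count] pairs.
function rleEncode<T>(values: T[]): Array<[T, number]> {
  const runs: Array<[T, number]> = [];
  for (const v of values) {
    const last = runs[runs.length - 1];
    if (last && last[0] === v) {
      last[1]++;          // extend the current run
    } else {
      runs.push([v, 1]);  // start a new run
    }
  }
  return runs;
}

// Three identical "active" flags collapse into a single run.
console.log(rleEncode([true, true, true, false])); // [[true, 3], [false, 1]]
```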

6 Encoding Modes

Compact, Nested, Columnar, Stream, Sparse, and JSON modes. Optimized for different data structures and use cases.
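The columnar idea in particular is easy to picture: rows of objects become one array per field, which groups similar values together for the compressors above. The shape below is a hypothetical sketch; AXON's actual columnar mode is binary and schema-aware:

```typescript
// Columnar layout sketch: array-of-objects in, object-of-arrays out.
function toColumnar(rows: Array<Record<string, unknown>>) {
  const cols: Record<string, unknown[]> = {};
  for (const row of rows) {
    for (const [key, value] of Object.entries(row)) {
      (cols[key] ??= []).push(value); // append each value to its field's column
    }
  }
  return cols;
}

const cols = toColumnar([
  { name: "Alice", age: 30 },
  { name: "Bob", age: 25 },
]);
console.log(cols); // { name: ["Alice", "Bob"], age: [30, 25] }
```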

Battle Tested

93.51% test coverage with 342 passing tests. Thoroughly tested and ready for production workloads.

Open Source

MIT licensed and community-driven. Free to use, modify, and distribute. Contribute on GitHub.

Perfect For

LLM API Calls

Reduce token costs when sending structured data to GPT-4, Claude, or other language models. Perfect for function calling and structured outputs.

RAG Systems

Compress context data in Retrieval Augmented Generation pipelines. Fit more context within token limits while reducing costs.

AI Agents

Efficiently pass data between AI agents and tools. Reduce latency and costs in multi-agent systems.

Chatbot Context

Manage conversation history and user data more efficiently. Extend conversation length within token budgets.

Ready to Dive Deeper?

Explore comprehensive documentation, API reference, and advanced usage guides

View Complete Documentation