Adaptive eXchange Oriented Notation
The next-generation data serialization format engineered for LLM interactions. Save up to 95% on token costs without sacrificing type safety or validation.
Paste your JSON data and watch AXON compress it in real-time
Note: This is a conceptual demonstration. Real AXON uses binary encoding and is more sophisticated.
Simple API, powerful compression
npm install @axon-format/core

import { encode, decode } from '@axon-format/core';
// Your data
const data = {
  users: [
    { name: "Alice", age: 30, active: true },
    { name: "Bob", age: 30, active: true },
    { name: "Carol", age: 30, active: true }
  ]
};

// Encode to AXON (saves 60-95% tokens)
const encoded = encode(data);

// Send to LLM...
const response = await sendToLLM(encoded);

// Decode back to original
const decoded = decode(response);

import { encode, decode, type Schema } from '@axon-format/core';
// Define schema as a plain object
const userSchema: Schema = {
  name: 'User',
  fields: [
    { name: 'name', type: 'str' },
    { name: 'age', type: 'u8' },
    { name: 'email', type: 'str' },
    { name: 'verified', type: 'bool' }
  ]
};

// Your data
const userData = {
  name: "Alice",
  age: 30,
  email: "alice@example.com",
  verified: true
};

// Encode with validation
const encoded = encode(userData, { schemas: [userSchema] });

// Type-safe decode
const decoded = decode(encoded, { schemas: [userSchema] });

Works with your existing JSON data structures
13 validated types with schema support
93% test coverage, 342 passing tests
MIT licensed, community-driven
See how much you could save on your LLM API costs
AXON typically achieves 60-95% compression depending on data patterns
These estimates are conservative; actual savings may be higher.
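For a rough sense of the math behind the calculator, a back-of-the-envelope estimate can be sketched as below. The helper name, token volume, and price are illustrative assumptions, not measured AXON figures or published API pricing.

```typescript
// Hypothetical estimator: how much a given token reduction saves per month.
// compressionRatio is the fraction of tokens removed (AXON cites 0.6-0.95).
function estimateMonthlySavings(
  tokensPerMonth: number,
  pricePerMillionTokens: number,
  compressionRatio: number
): number {
  const tokensSaved = tokensPerMonth * compressionRatio;
  return (tokensSaved / 1_000_000) * pricePerMillionTokens;
}

// e.g. 100M tokens/month at an assumed $10 per 1M tokens, with a 60% reduction:
console.log(estimateMonthlySavings(100_000_000, 10, 0.6)); // 600 (dollars)
```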
Built specifically for the AI era
60-95% token reduction means dramatically lower API costs. Save thousands per month on GPT-4, Claude, and other LLM APIs.
13 validated types including u8, i32, f64, bool, iso8601, uuid, and more. Catch errors before they reach production.
RLE, Dictionary, Delta, Bit Packing, and Varint encoding. Automatically selects the best algorithm for your data.
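To illustrate two of the techniques named above, here is a minimal sketch of run-length encoding and delta encoding on plain arrays. These are conceptual demonstrations only; AXON's actual encoder is binary and these function names are not part of its API.

```typescript
// RLE: collapse runs of equal values into [value, count] pairs.
function rleEncode<T>(values: T[]): Array<[T, number]> {
  const runs: Array<[T, number]> = [];
  for (const v of values) {
    const last = runs[runs.length - 1];
    if (last && last[0] === v) last[1]++;
    else runs.push([v, 1]);
  }
  return runs;
}

// Delta: keep the first value, then store differences between neighbors,
// which tends to produce small numbers that pack into fewer bytes.
function deltaEncode(values: number[]): number[] {
  return values.map((v, i) => (i === 0 ? v : v - values[i - 1]));
}

console.log(rleEncode([30, 30, 30, 25]));       // [[30, 3], [25, 1]]
console.log(deltaEncode([100, 101, 103, 110])); // [100, 1, 2, 7]
```

Note how the repeated `age: 30` values in the quick-start example are exactly the kind of pattern RLE and dictionary encoding exploit.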
Compact, Nested, Columnar, Stream, Sparse, and JSON modes. Optimized for different data structures and use cases.
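The idea behind a columnar mode can be sketched as follows: rows of objects become one array per field, so each key is written once and same-typed values sit together where they compress well. The helper below is a hypothetical illustration, not AXON's implementation.

```typescript
// Pivot an array of uniform objects into a column-per-field layout.
function toColumnar<T extends Record<string, unknown>>(
  rows: T[]
): Record<string, unknown[]> {
  const columns: Record<string, unknown[]> = {};
  for (const row of rows) {
    for (const [key, value] of Object.entries(row)) {
      (columns[key] ??= []).push(value);
    }
  }
  return columns;
}

const rows = [
  { name: "Alice", age: 30 },
  { name: "Bob", age: 31 }
];
console.log(toColumnar(rows));
// { name: ["Alice", "Bob"], age: [30, 31] }
```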
93.51% test coverage with 342 passing tests. Thoroughly tested and ready for production workloads.
MIT licensed and community-driven. Free to use, modify, and distribute. Contribute on GitHub.
Reduce token costs when sending structured data to GPT-4, Claude, or other language models. Perfect for function calling and structured outputs.
Compress context data in Retrieval Augmented Generation pipelines. Fit more context within token limits while reducing costs.
Efficiently pass data between AI agents and tools. Reduce latency and costs in multi-agent systems.
Manage conversation history and user data more efficiently. Extend conversation length within token budgets.
Explore comprehensive documentation, API reference, and advanced usage guides
View Complete Documentation