Adaptive eXchange Oriented Notation
A data serialization format built for LLMs. Feed it JSON, XML, or any verbose format; AXON produces compact, typed, validated output that cuts token costs by up to 95% while preserving every bit of meaning.
npm install @axon-format/core

Paste JSON or XML and watch AXON compress it in real time.
Conceptual demo: production AXON uses binary encoding for even greater savings.
Import, encode, done. AXON works with your existing data structures; no migration required.
npm install @axon-format/core

import { encode, decode } from '@axon-format/core';
// Your data
const data = {
  users: [
    { name: "Alice", age: 30, active: true },
    { name: "Bob", age: 30, active: true },
    { name: "Carol", age: 30, active: true }
  ]
};
// Encode to AXON (saves 60-95% tokens)
const encoded = encode(data);
// Send to LLM...
const response = await sendToLLM(encoded);
// Decode back to original
const decoded = decode(response);

import { encode, decode, type Schema } from '@axon-format/core';
// Define schema as a plain object
const userSchema: Schema = {
  name: 'User',
  fields: [
    { name: 'name', type: 'str' },
    { name: 'age', type: 'u8' },
    { name: 'email', type: 'str' },
    { name: 'verified', type: 'bool' }
  ]
};
// Your data
const userData = {
  name: "Alice",
  age: 30,
  email: "alice@example.com",
  verified: true
};
// Encode with validation
const encoded = encode(userData, { schemas: [userSchema] });
// Type-safe decode
const decoded = decode(encoded, { schemas: [userSchema] });

Accepts JSON, XML, and other data formats
13 validated types with schema support
93% test coverage, 342 passing tests
MIT licensed, community-driven
Plug in your numbers. The savings compound fast.
AXON typically achieves 60–95% compression, depending on data patterns. These estimates are conservative; actual savings may be higher.
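The arithmetic behind the calculator is simple enough to sketch. The function name, the per-token price, and the example figures below are illustrative assumptions, not part of AXON's API:

```typescript
// Estimate monthly dollar savings from token compression.
// pricePerMillionTokens and all example numbers are assumptions for illustration.
function estimateMonthlySavings(
  tokensPerRequest: number,
  requestsPerMonth: number,
  pricePerMillionTokens: number, // e.g. your provider's input-token price in USD
  compressionRate: number        // 0.6–0.95 per the figures above
): number {
  const tokensSaved = tokensPerRequest * requestsPerMonth * compressionRate;
  return (tokensSaved / 1_000_000) * pricePerMillionTokens;
}

// 2,000-token payloads, 100k requests/month, $3 per 1M tokens, 75% compression:
const saved = estimateMonthlySavings(2000, 100_000, 3, 0.75);
// 2000 * 100000 * 0.75 = 150,000,000 tokens saved → $450/month
```

Because the savings scale linearly with both traffic and payload size, high-volume pipelines see the compounding effect fastest.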
Every feature exists to solve a real problem in LLM data pipelines.
60–95% fewer tokens on every API call. Smaller payloads mean lower bills and more budget for the interactions that matter.
u8 through f64, booleans, ISO 8601 dates, UUIDs, and more. Catch malformed data before it reaches your model.
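To illustrate what typed validation buys you, here is a minimal standalone checker in the spirit of the schema syntax shown earlier. It covers only three of the types and is not AXON's implementation; it just shows the kind of malformed data a typed schema catches:

```typescript
// Illustrative sketch only: a tiny subset of a typed field validator.
type FieldType = 'str' | 'u8' | 'bool';

interface Field { name: string; type: FieldType; }

function validateField(value: unknown, type: FieldType): boolean {
  switch (type) {
    case 'str':  return typeof value === 'string';
    case 'u8':   return Number.isInteger(value) && (value as number) >= 0 && (value as number) <= 255;
    case 'bool': return typeof value === 'boolean';
  }
}

// Returns the names of fields whose values fail their declared type.
function validate(record: Record<string, unknown>, fields: Field[]): string[] {
  return fields
    .filter(f => !validateField(record[f.name], f.type))
    .map(f => f.name);
}

// Malformed data is flagged before it ever reaches the model:
validate({ name: 'Alice', age: 300, verified: 'yes' }, [
  { name: 'name', type: 'str' },
  { name: 'age', type: 'u8' },       // 300 overflows u8
  { name: 'verified', type: 'bool' } // 'yes' is not a boolean
]);
// → ['age', 'verified']
```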
RLE, Dictionary, Delta, Bit Packing, and Varint. Automatically selected based on your data, no configuration needed.
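These techniques are standard compression building blocks. A toy run-length and delta encoder shows why they work so well on data like the repeated user records above (a simplified sketch, not AXON's wire format):

```typescript
// Run-length encoding: collapse runs of equal values into [value, count] pairs.
function rleEncode<T>(values: T[]): Array<[T, number]> {
  const out: Array<[T, number]> = [];
  for (const v of values) {
    const last = out[out.length - 1];
    if (last && last[0] === v) last[1]++;
    else out.push([v, 1]);
  }
  return out;
}

// Delta encoding: keep the first value, then store successive differences.
// The small deltas then pack into fewer bytes with varint.
function deltaEncode(values: number[]): number[] {
  return values.map((v, i) => (i === 0 ? v : v - values[i - 1]));
}

rleEncode([30, 30, 30, true, true, true] as Array<number | boolean>);
// → [[30, 3], [true, 3]]

deltaEncode([1700000000, 1700000060, 1700000120]); // e.g. timestamps
// → [1700000000, 60, 60]
```

Each technique targets a different redundancy pattern, which is why selecting among them automatically per field pays off.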
Compact, Nested, Columnar, Stream, Sparse, and JSON. Each mode is optimized for a specific data shape and access pattern.
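Columnar mode, for instance, pivots an array of records into per-field arrays, so each column is homogeneous and full of runs that RLE and dictionary coding can exploit. The shape below is illustrative only; AXON's actual columnar encoding is binary:

```typescript
interface User { name: string; age: number; active: boolean; }

// Pivot rows into columns: each field becomes one homogeneous array.
// Hypothetical helper for illustration, not an AXON API.
function toColumnar(rows: User[]) {
  return {
    name:   rows.map(r => r.name),
    age:    rows.map(r => r.age),
    active: rows.map(r => r.active),
  };
}

toColumnar([
  { name: 'Alice', age: 30, active: true },
  { name: 'Bob',   age: 30, active: true },
  { name: 'Carol', age: 30, active: true },
]);
// → { name: ['Alice','Bob','Carol'], age: [30,30,30], active: [true,true,true] }
```

Row-oriented modes like Compact or Nested win instead when records are accessed whole; matching the mode to the data shape is the point of having six of them.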
342 tests, 93% coverage, zero dependencies. Ship with confidence. AXON won't break your build or bloat your bundle.
Free to use, modify, and ship. No vendor lock-in, no license fees, no strings attached. Built by the community, for the community.
Encode function calls, structured outputs, and tool-use payloads in a fraction of the tokens. Fits naturally into any OpenAI or Anthropic pipeline.
Pack more retrieved context into your token window. More context means better answers and sharper reasoning from your model.
When agents pass structured data between each other, every token counts. AXON keeps inter-agent communication lean and fast.
Store more chat history and user context in your token window. Longer conversations with better continuity and stronger recall.
Type system reference, compression algorithms, schema validation, encoding modes, and real-world examples. All in one place.