Pragmatica Aether

The Unified Application Runtime for Java

Java was designed for managed environments. Applets ran in browsers. Servlets ran in app servers. EJBs ran in containers. The fat-jar era threw that away — we started bundling web servers, serialization, service discovery, and configuration management into every application, then wrapping it all in Docker.

Aether returns Java to its natural habitat. Applications handle business logic. The runtime handles everything else.

The Problem

Every Java microservice carries a heavy coat of infrastructure: web servers, serialization, DI containers, service discovery, config management, metrics, retry logic, circuit breakers.

Your pom.xml doesn't distinguish business dependencies from infrastructure dependencies. They compile together, deploy together, and break together. A security patch in a web server library requires rebuilding every service.

DI frameworks fight with service meshes for routing control. Cloud SDK retry logic conflicts with application-level resilience libraries. The conflicts surface as bugs in production — not during development.

The Architecture

Separate the layers. Let the runtime manage infrastructure — resource provisioning, scaling, transport, discovery, retries, circuit breakers, configuration, observability, security. None of these are application concerns.

Update the runtime — roll it out across nodes without touching applications. Update business logic — deploy new versions without touching infrastructure. Each independently, each without downtime.

When layers don't share a deployment unit, they don't share a deployment schedule.

What You Write

A slice is a Java interface — the same mental model as a service. If you've written a Spring service, you can write a slice.

@Slice
public interface OrderService {
    Promise<OrderResult> placeOrder(PlaceOrderRequest request);

    static OrderService orderService(InventoryService inventory,
                                     PricingEngine pricing) {
        return request -> inventory.check(request.items())
                                   .flatMap(pricing::calculate)
                                   .map(OrderResult::placed);
    }
}

What You Don't Write

  • No HTTP clients — inter-slice calls are direct method invocations via generated proxies
  • No service discovery — the runtime tracks where every slice instance lives
  • No retry logic — built-in retry with exponential backoff and node failover
  • No circuit breakers — the reliability fabric handles failure automatically
  • No serialization code — request/response types are serialized transparently
  • No configuration management — consensus-based config propagates cluster-wide

Typed Java interfaces with request/response semantics. Not message passing, not actors. Your existing service-based designs and team knowledge transfer directly. The only design requirement: slice methods should be idempotent.
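In practice, idempotency usually reduces to deduplicating by a stable request identifier. A minimal sketch of that pattern in plain Java, with CompletableFuture standing in for Aether's Promise (the class and method names here are illustrative, not part of the Aether API):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical order handler: a retry with the same request id returns the
// result of the first attempt instead of placing the order twice.
public class IdempotentOrders {
    private final Map<String, CompletableFuture<String>> completed = new ConcurrentHashMap<>();

    public CompletableFuture<String> placeOrder(String requestId, String item) {
        // computeIfAbsent guarantees the order is processed at most once per id
        return completed.computeIfAbsent(requestId,
                id -> CompletableFuture.completedFuture("placed:" + item));
    }

    public static void main(String[] args) {
        var orders = new IdempotentOrders();
        String first = orders.placeOrder("req-1", "book").join();
        String retry = orders.placeOrder("req-1", "book").join(); // runtime retry
        System.out.println(first.equals(retry)); // same result, no duplicate order
    }
}
```

A method written this way is safe under the runtime's automatic retry and failover.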

Under The Hood

Key architectural decisions:

  • Consensus KV Store. Leaderless Rabia consensus — single source of truth for config, deployment state, and discovery. No external coordination services.
  • Two-Layer Topology. Core nodes (3-9) run consensus. Worker groups scale to thousands via SWIM gossip — O(1) failure detection, zero consensus overhead.
  • Declarative Deployment. Blueprints carry code + config + database migrations. One artifact, one command, atomic deployment.
  • ClassLoader Isolation. Per-slice classloader — two slices can use different versions of the same library without conflict.
  • Native Async PostgreSQL. Built-in driver with binary protocol and pipelining. No external connection pooler needed.
  • Transport Security. Mutual TLS for all TCP, AES-256-GCM for SWIM gossip. HKDF-derived from cluster secret — no external PKI.
  • RBAC and Audit. ADMIN/OPERATOR/VIEWER roles with per-route enforcement. SHA-256 API keys, 7-type audit trail.
  • QUIC Transport. Stream-per-message-type multiplexing, mandatory TLS 1.3, 0-RTT reconnection.
  • Infrastructure Independence. Single aether-node.jar for core and worker. Node discovers its role from topology.
  • Four Cloud Providers. AWS, GCP, Azure, Hetzner — all without vendor SDKs. Adding a provider is implementing an interface.
An example blueprint:

# Blueprint: describe what you want
id = "org.example:commerce:1.0.0"

[[slices]]
artifact = "org.example:inventory-service:1.0.0"
instances = 3

[[slices]]
artifact = "org.example:order-processor:1.0.0"
instances = 5

aether deploy commerce-blueprint.jar

The blueprint artifact carries slice configs, database migrations, and application settings. The cluster resolves artifacts, runs schema migrations, loads slices, distributes instances across nodes, registers routes, and starts serving traffic. One artifact, one command.

Fault Tolerance

The system survives failure of less than half the core nodes. A 5-node cluster tolerates 2 simultaneous failures. A 7-node cluster tolerates 3.
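These figures follow from standard majority-quorum arithmetic: a cluster of n core nodes tolerates floor((n - 1) / 2) simultaneous failures. A quick check:

```java
public class Quorum {
    // Majority consensus needs more than half the nodes alive,
    // so n core nodes tolerate floor((n - 1) / 2) simultaneous failures.
    static int toleratedFailures(int coreNodes) {
        return (coreNodes - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(toleratedFailures(5)); // 2
        System.out.println(toleratedFailures(7)); // 3
    }
}
```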

When a node fails, recovery is automatic. Requests retry on healthy nodes. A replacement provisions, connects, restores state, and begins serving traffic. No human intervention.

Zero-Downtime Deployments

Four strategies, built in:

  • Rolling — weighted traffic routing with gradual shift
  • Canary — progressive traffic shift through configurable stages with auto-evaluation and auto-rollback
  • Blue-green — atomic switchover via consensus (~5ms routing change)
  • A/B testing — deterministic traffic split by request context
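A deterministic split means the same request context always lands on the same variant, with no per-request routing state. One common way to achieve that, sketched here in plain Java (this is an illustration of the technique, not the runtime's actual routing code), is hashing a stable key into a bucket:

```java
public class AbSplit {
    // Map a stable request key (e.g. a user id) to a bucket in [0, 100).
    // Math.floorMod keeps the bucket non-negative for negative hash codes.
    static String variant(String requestKey, int percentToB) {
        int bucket = Math.floorMod(requestKey.hashCode(), 100);
        return bucket < percentToB ? "B" : "A";
    }

    public static void main(String[] args) {
        // Deterministic: repeated calls with the same key give the same variant
        System.out.println(variant("user-42", 20).equals(variant("user-42", 20)));
    }
}
```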

Deploy during business hours. Health degrades? Instant rollback.

Scaling That Manages Itself

Three Dimensions

Slice scaling — more instances on existing nodes. Classes already loaded; scaling takes milliseconds.

Node scaling — add machines. Node connects, restores state, begins serving.

Worker group scaling — thousands of nodes via SWIM gossip. Zero consensus overhead.

Predictive Intelligence

  • Tier 1 — Decision Tree (1-second). Reactive: CPU, latency, queue depth, error rate.
  • Tier 2 — TTM Predictor (60-second). ONNX ML model, 2-hour sliding window, 11-metric feature vector. Scale before the spike.
  • Tier 3 — LLM-based (planned). Long-term capacity planning.

If TTM fails, Decision Tree continues. No scaling disruption.
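The fallback can be pictured as a chain of advisors: if the predictive tier throws or abstains, the reactive rule still produces a decision. A simplified sketch (the thresholds, metric names, and interfaces are illustrative, not Aether's):

```java
import java.util.Optional;

public class TieredScaler {
    record Metrics(double cpu, double p95LatencyMs) {}

    // Tier 2 stand-in: an ML predictor that may fail or abstain
    interface Predictor { Optional<Integer> desiredScale(Metrics m); }

    // Tier 1 stand-in: a simple reactive rule that always answers
    static int reactiveRule(Metrics m) {
        return m.cpu() > 0.8 || m.p95LatencyMs() > 50 ? 2 : 1;
    }

    static int decide(Predictor ttm, Metrics m) {
        try {
            return ttm.desiredScale(m).orElseGet(() -> reactiveRule(m));
        } catch (RuntimeException predictorFailed) {
            return reactiveRule(m); // predictor failure never blocks scaling
        }
    }

    public static void main(String[] args) {
        Predictor broken = m -> { throw new RuntimeException("model unavailable"); };
        System.out.println(decide(broken, new Metrics(0.9, 10))); // falls back to Tier 1
    }
}
```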

Legacy and Greenfield

Legacy Migration

Pick a boundary, extract an interface, annotate with @Slice, wrap the implementation:

Promise.lift(() -> legacyService.generate(request));

Start in Ember — single JVM alongside your existing application. Strangler fig from there: extract a hot path, deploy as a slice, route traffic, repeat.
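The wrapping step is just an adapter from a blocking legacy call to an async slice signature. A self-contained sketch with CompletableFuture standing in for Aether's Promise (Promise.lift plays the same role in Aether; the service names are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

public class LegacyAdapter {
    // Existing blocking legacy code, left untouched
    static class LegacyReportService {
        String generate(String request) { return "report:" + request; }
    }

    // The slice-shaped facade: async signature, legacy body.
    // In Aether this is Promise.lift(() -> legacy.generate(request)).
    static CompletableFuture<String> generateReport(LegacyReportService legacy, String request) {
        return CompletableFuture.supplyAsync(() -> legacy.generate(request));
    }

    public static void main(String[] args) {
        System.out.println(generateReport(new LegacyReportService(), "q1").join());
    }
}
```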

Greenfield

Each slice can be a single method. No operational tradeoffs for small slices — Aether handles all infrastructure.

One slice at 50 instances during peak while another idles at minimum. That granularity is the default, not a special configuration.

JBCT patterns compose naturally within slices. Each method is a data transformation pipeline.
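The pipeline shape is plain function composition: each step consumes the previous step's output. A minimal standalone illustration in standard Java (the JBCT types themselves are not shown; the step names are invented for the example):

```java
import java.util.function.Function;

public class Pipeline {
    record Order(int items) {}
    record Priced(int items, int total) {}

    // Each step is a pure function; the slice method is their composition
    static final Function<Order, Priced> price = o -> new Priced(o.items(), o.items() * 10);
    static final Function<Priced, String> confirm = p -> "placed:" + p.total();
    static final Function<Order, String> placeOrder = price.andThen(confirm);

    public static void main(String[] args) {
        System.out.println(placeOrder.apply(new Order(3))); // placed:30
    }
}
```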

Tooling and Observability

CLI and API

  • aether — 60+ CLI commands, batch and REPL modes
  • aether-forge — local simulator with visual dashboard
  • Management API — 60+ REST endpoints, WebSocket streams
  • Passive Load Balancer — cluster-aware, smart routing, automatic failover

Built-in Observability

  • Prometheus metrics (Micrometer)
  • Per-method P50/P95/P99 tracking
  • Dynamic tracing — toggle per method at runtime, no restart
  • Dynamic log levels per logger at runtime
  • Cluster event aggregator — 11 event types, WebSocket feed

Topology Dashboard — compile-time data-flow graph with swim-lane layout, pub-sub routing, and search filtering.

Proven in Practice

  • 8K req/s sustained throughput
  • <5ms p95 latency
  • 0.00% error rate across 5.9M requests
  • 3,000+ tests

Measured on a Docker cluster on a laptop (i7-11800H, 32GB), with real PostgreSQL and a mixed read/write workload. Scales to 15K req/s with sub-50ms p95.

Licensing

Business Source License 1.1, transitioning to Apache 2.0 after 4 years.

  • Free for internal use — no cost
  • Free for non-production — evaluate, develop, test
  • Commercial license — only for offering Aether as a service

Pilot Program

Direct team access, priority support, roadmap influence, favorable terms.

Ideal fit: a Java backend team with scaling challenges, microservices pain, and a production target.

Resources

Documentation

Getting started, developer guide, feature catalog, operator guide.

View Docs

Source Code

Aether, Pragmatica Core, JBCT tools, and integrations.

GitHub

Questions?

Let's discuss whether Aether is right for your use case.

Book a Call