Let Java Be Java
Java was designed for managed environments. Applets ran in browsers. Servlets ran in app servers. EJBs ran in containers. The fat-jar era threw that away — we started bundling web servers, serialization, service discovery, and configuration management into every application, then wrapping it all in Docker.
Aether returns Java to its natural habitat. Applications handle business logic. The runtime handles everything else.
Every Java microservice carries a heavy coat of infrastructure: web servers, serialization, DI containers, service discovery, config management, metrics, retry logic, circuit breakers.
Your pom.xml doesn't distinguish business dependencies from infrastructure dependencies. They compile together, deploy together, and break together. A security patch in Netty requires rebuilding every service that embeds a web server — which is all of them.
Spring's DI fights with Kubernetes service mesh for routing control. Your cloud SDK's retry logic conflicts with Resilience4j. Every layer claims authority over the same cross-cutting concerns, and the conflicts surface as bugs in production — not during development.
This is an architecture problem. Architecture problems have architectural solutions.
Separate the layers. Let the runtime manage infrastructure — resource provisioning, scaling, transport, discovery, retries, circuit breakers, configuration, observability, security. None of these are application concerns.
Update Java — roll it out across nodes without touching applications. Update business logic — deploy new versions without touching infrastructure. Each independently, each without downtime. When layers don't share a deployment unit, they don't share a deployment schedule.
A slice is an interface annotated with @Slice, plus a business-logic implementation:
@Slice
public interface OrderService {
    Promise<OrderResult> placeOrder(PlaceOrderRequest request);

    static OrderService orderService(InventoryService inventory,
                                     PricingEngine pricing) {
        return request -> inventory.check(request.items())
            .flatMap(available -> pricing.calculate(available))
            .map(priced -> OrderResult.placed(priced));
    }
}
The only visible contract is a method call on an imported interface. The only design requirement: slice methods should be idempotent, which lets retries, scaling, and fault tolerance work transparently.
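What idempotency means in practice, as a minimal sketch (the deduplication store, the requestId field, and the Promise.success factory are illustrative assumptions, not Aether API): before acting, the slice checks whether this request was already processed, so a retried call converges on the same result instead of performing the side effect twice.

@Slice
public interface PaymentService {
    Promise<Receipt> charge(ChargeRequest request);

    static PaymentService paymentService(ReceiptStore store, CardGateway gateway) {
        // A retry carrying the same requestId finds the stored receipt and
        // returns it; only a first-time request reaches the gateway.
        return request -> store.findByRequestId(request.requestId())
            .flatMap(existing -> existing
                .map(Promise::success)
                .orElseGet(() -> gateway.charge(request)
                    .flatMap(receipt -> store.save(request.requestId(), receipt))));
    }
}

Whether the store lives inside the slice or behind another slice is a design choice; the point is that a second delivery of the same request is harmless.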
Five architectural decisions make this possible.
# Blueprint: describe what you want
id = "org.example:commerce:1.0.0"

[[slices]]
artifact = "org.example:inventory-service:1.0.0"
instances = 3

[[slices]]
artifact = "org.example:order-processor:1.0.0"
instances = 5
aether blueprint apply commerce.toml
The cluster resolves artifacts, loads slices, distributes instances across nodes, registers routes, and starts serving traffic. Convergence to desired state is automatic.
The system survives the failure of a minority of its nodes. Not graceful degradation: actual redundancy. The arithmetic is quorum-based: a cluster of 2f + 1 nodes tolerates f simultaneous failures, so a 5-node cluster tolerates 2 and a 7-node cluster tolerates 3.
When a node fails, recovery is automatic. Requests are immediately retried on healthy nodes. A replacement is provisioned, connects to peers, restores state from a cluster snapshot, and begins serving traffic. No human intervention required.
Rolling updates leverage fault tolerance for safe deployments with weighted traffic routing.
aether update start org.example:order-processor 2.0.0
aether update routing <id> -r 1:3 # 25% to v2
aether update routing <id> -r 1:1 # 50/50
aether update complete <id> # 100% to v2
Deploy during business hours. Shift traffic gradually. If health degrades — instant rollback with one command.
Slice scaling — spin up more instances of a specific slice on existing nodes. Classes are already loaded; scaling takes milliseconds.
Node scaling — add more machines to the cluster. The node connects, restores state, and begins accepting work.
Independent controls, combined effect. No coordination between the two dimensions required.
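In practice, slice scaling can be as small as a blueprint edit. A sketch reusing commerce.toml from above (assuming, per the convergence behavior described earlier, that re-applying a changed blueprint drives the cluster to the new desired state):

# commerce.toml: order-processor raised from 5 to 50 for peak load
[[slices]]
artifact = "org.example:order-processor:1.0.0"
instances = 50

aether blueprint apply commerce.toml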
If TTM fails, the Decision Tree continues with default thresholds. No scaling disruption.
Same slice model, different granularity. Legacy and greenfield coexist in the same cluster.
Your legacy system doesn't need a rewrite. Pick a boundary, extract an interface, annotate with @Slice, wrap the implementation:
Promise.lift(() -> legacyService.generate(request));
One line to enter the Aether world. Start in Ember — single JVM, same process as your existing application. No worse than what you have today. From there, the strangler fig pattern: extract a hot path, deploy as a slice, route traffic, repeat.
One sprint to first slice in production.
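Spelled out, the wrapped boundary is just another slice. A sketch built around the one-liner above (LegacyReportService, Report, and ReportRequest are placeholder names for your existing code):

@Slice
public interface ReportService {
    Promise<Report> generate(ReportRequest request);

    static ReportService reportService(LegacyReportService legacyService) {
        // Promise.lift runs the blocking legacy call and wraps its outcome,
        // so callers see an ordinary slice with the standard contract.
        return request -> Promise.lift(() -> legacyService.generate(request));
    }
}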
Each slice can be as lean as a single method — and that's the recommended approach. Small slices carry no operational penalty, because Aether absorbs the infrastructure overhead.
One slice serving 50 instances during peak load while another idles at minimum. That granularity would be operationally insane with traditional microservices. With Aether, it's the default.
JBCT patterns — Leaf, Sequencer, Fork-Join — compose naturally within slices. Each slice method is a data transformation pipeline: parse input, gather data, process, respond.
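A sketch of those shapes inside one slice method (Promise.all as a fork-join combinator, and its two-argument map, are assumptions about the Pragmatica Core API; the domain types are illustrative):

@Slice
public interface CheckoutService {
    Promise<CheckoutResult> checkout(CheckoutRequest request);

    static CheckoutService checkoutService(CartService carts,
                                           PricingEngine pricing,
                                           ShippingService shipping) {
        // Sequencer: load the cart first, since everything depends on it.
        return request -> carts.load(request.cartId())
            // Fork-Join: pricing and shipping are independent, so they run
            // concurrently and join into a single result.
            .flatMap(cart -> Promise.all(pricing.calculate(cart), shipping.quote(cart))
                .map((priced, quote) -> CheckoutResult.from(priced, quote)));
    }
}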
Three environments, zero code changes.
Single-process runtime. Multiple logical nodes in the same JVM. Fast startup, standard debugger. Deploy slices alongside your existing application — the zero-risk entry point for legacy migration.
5-node cluster simulator on your laptop. Real consensus, real routing, real failure scenarios. Web dashboard with live metrics. Chaos injection — kill nodes, crash leaders, trigger rolling restarts.
Production cluster. Same slices, same code, different scale. Your code doesn't know which environment it's running in. Moving from Ember to Aether is a configuration change, not a code change.
Not a concept paper. Automated tests against real clusters.
Performance figures are as of v0.15.1. Tests cover cluster formation, leader election, node failures, network partitions, rolling updates, and chaos scenarios.
Aether uses BSL 1.1, transitioning to Apache 2.0 after four years. BSL protects our ability to sustain development while keeping Aether accessible: large cloud providers can't wrap and resell it, but you can use it freely in your applications.
It's the same model used by MariaDB, CockroachDB, and other successful projects.
We're looking for pilot partners to deploy Aether in production.
Source code for Aether, Pragmatica Core, JBCT tools, and integrations.
pragmaticalabs/pragmatica