The Unified Application Runtime for Java
Java was designed for managed environments. Applets ran in browsers. Servlets ran in app servers. EJBs ran in containers. The fat-jar era threw that away — we started bundling web servers, serialization, service discovery, and configuration management into every application, then wrapping it all in Docker.
Aether returns Java to its natural habitat. Applications handle business logic. The runtime handles everything else.
Every Java microservice carries a heavy coat of infrastructure: web servers, serialization, DI containers, service discovery, config management, metrics, retry logic, circuit breakers.
Your pom.xml doesn't distinguish business dependencies from infrastructure dependencies. They compile together, deploy together, and break together. A security patch in a web server library requires rebuilding every service.
DI frameworks fight with service meshes for routing control. Cloud SDK retry logic conflicts with application-level resilience libraries. The conflicts surface as bugs in production — not during development.
Separate the layers. Let the runtime manage infrastructure — resource provisioning, scaling, transport, discovery, retries, circuit breakers, configuration, observability, security. None of these are application concerns.
Update the runtime — roll it out across nodes without touching applications. Update business logic — deploy new versions without touching infrastructure. Each independently, each without downtime.
When layers don't share a deployment unit, they don't share a deployment schedule.
A slice is a Java interface — the same mental model as a service. If you've written a Spring service, you can write a slice.
```java
@Slice
public interface OrderService {
    Promise<OrderResult> placeOrder(PlaceOrderRequest request);

    static OrderService orderService(InventoryService inventory,
                                     PricingEngine pricing) {
        return request -> inventory.check(request.items())
            .flatMap(pricing::calculate)
            .map(OrderResult::placed);
    }
}
```
Typed Java interfaces with request/response semantics. Not message passing, not actors. Your existing service-based designs and team knowledge transfer directly. The only design requirement: slice methods should be idempotent.
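Idempotency matters because the runtime may retry a request on another node after a failure. A minimal sketch of the idea, using plain Java collections (the class, method names, and `orderId` key are hypothetical illustrations, not Aether APIs): a client-supplied key makes a replayed request observe the first result instead of placing a duplicate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not an Aether API: a slice method made safe to
// retry by keying each order on a client-supplied identifier.
public class IdempotentPlacement {
    private final Map<String, String> placedOrders = new ConcurrentHashMap<>();

    // computeIfAbsent makes a retry a no-op: the first call stores the
    // result, and any replay returns that stored result unchanged.
    public String placeOrder(String orderId, String payload) {
        return placedOrders.computeIfAbsent(orderId, id -> "placed:" + payload);
    }

    public static void main(String[] args) {
        IdempotentPlacement svc = new IdempotentPlacement();
        String first = svc.placeOrder("order-42", "2x widget");
        String retry = svc.placeOrder("order-42", "2x widget"); // simulated retry
        System.out.println(first.equals(retry)); // same result, no duplicate order
    }
}
```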
Key architectural decisions.
One aether-node.jar for core and worker alike; each node discovers its role from the topology.

Blueprint: describe what you want
```toml
id = "org.example:commerce:1.0.0"

[[slices]]
artifact = "org.example:inventory-service:1.0.0"
instances = 3

[[slices]]
artifact = "org.example:order-processor:1.0.0"
instances = 5
```

```shell
aether deploy commerce-blueprint.jar
```
The blueprint artifact carries slice configs, database migrations, and application settings. The cluster resolves artifacts, runs schema migrations, loads slices, distributes instances across nodes, registers routes, and starts serving traffic. One artifact, one command.
The system survives failure of less than half the core nodes. A 5-node cluster tolerates 2 simultaneous failures. A 7-node cluster tolerates 3.
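Those tolerance numbers follow from majority quorum: a cluster of n core nodes stays available while a majority survives, so it tolerates ⌊(n − 1) / 2⌋ simultaneous failures. A one-method sketch of the arithmetic (the class and method name are illustrative, not part of Aether):

```java
// Illustrative quorum arithmetic, not an Aether API.
public class QuorumMath {
    // A majority quorum of n nodes tolerates floor((n - 1) / 2) failures:
    // the remaining n - f nodes must still form a strict majority.
    static int toleratedFailures(int coreNodes) {
        return (coreNodes - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(toleratedFailures(5)); // 2
        System.out.println(toleratedFailures(7)); // 3
    }
}
```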
When a node fails, recovery is automatic. Requests retry on healthy nodes. A replacement provisions, connects, restores state, and begins serving traffic. No human intervention.
Four strategies, built in:

- Instant rollback — deploy during business hours; if health degrades, roll back instantly.
- Slice scaling — more instances on existing nodes. Classes are already loaded; scaling takes milliseconds.
- Node scaling — add machines. A node connects, restores state, and begins serving.
- Worker group scaling — thousands of nodes via SWIM gossip. Zero consensus overhead.
If TTM fails, the Decision Tree continues. No scaling disruption.
Pick a boundary, extract an interface, annotate with @Slice, wrap the implementation:

```java
Promise.lift(() -> legacyService.generate(request));
```
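A fuller sketch of that extraction, under stated assumptions: `CompletableFuture` stands in for Aether's `Promise`, `LegacyReportService` and `ReportService` are hypothetical names, and the `@Slice` annotation is omitted so the sketch compiles on its own. The shape is the point: the blocking legacy call stays untouched, and the factory lifts it behind a typed async interface.

```java
import java.util.concurrent.CompletableFuture;

// Strangler-fig wrapper sketch. CompletableFuture stands in for Aether's
// Promise; the legacy class and interface names are hypothetical.
public class LegacyWrapper {
    // The blocking legacy code, left unchanged.
    static class LegacyReportService {
        String generate(String request) { return "report:" + request; }
    }

    // The extracted slice-shaped interface: typed request in, async result out.
    interface ReportService {
        CompletableFuture<String> generate(String request);
    }

    // Factory wraps the synchronous call, mirroring Promise.lift(...).
    static ReportService reportService(LegacyReportService legacy) {
        return request -> CompletableFuture.supplyAsync(() -> legacy.generate(request));
    }

    public static void main(String[] args) {
        ReportService slice = reportService(new LegacyReportService());
        System.out.println(slice.generate("q3").join()); // report:q3
    }
}
```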
Start in Ember — single JVM alongside your existing application. Strangler fig from there: extract a hot path, deploy as a slice, route traffic, repeat.
Each slice can be a single method. No operational tradeoffs for small slices — Aether handles all infrastructure.
One slice at 50 instances during peak while another idles at minimum. That granularity is the default, not a special configuration.
JBCT patterns compose naturally within slices. Each method is a data transformation pipeline.
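What such a pipeline looks like, as a runnable sketch: `CompletableFuture`'s `thenCompose`/`thenApply` play the roles of `Promise`'s `flatMap`/`map` from the slice example above, and the stage names are hypothetical. Each method body reads as one linear chain of transformations.

```java
import java.util.concurrent.CompletableFuture;

// Pipeline sketch only: CompletableFuture stands in for Promise, and the
// stage methods are hypothetical analogues of inventory.check / pricing.
public class PipelineSketch {
    static CompletableFuture<Integer> checkStock(int items) {
        return CompletableFuture.completedFuture(items);      // async stage
    }
    static CompletableFuture<Integer> price(int items) {
        return CompletableFuture.completedFuture(items * 10); // async stage
    }

    static CompletableFuture<String> placeOrder(int items) {
        return checkStock(items)
            .thenCompose(PipelineSketch::price)     // like flatMap: chains an async stage
            .thenApply(total -> "placed:" + total); // like map: pure transformation
    }

    public static void main(String[] args) {
        System.out.println(placeOrder(3).join()); // placed:30
    }
}
```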
Benchmark: Docker cluster on a laptop (i7-11800H, 32GB), real PostgreSQL, mixed read/write load. Scales to 15K req/s with sub-50ms p95.
Business Source License 1.1, transitioning to Apache 2.0 after 4 years.
Direct team access, priority support, roadmap influence, favorable terms.
Ideal fit: a Java backend with scaling challenges, microservices pain, and a production target.