# The Twelve-Factor App

## Introduction
The Twelve-Factor App is a methodology for building modern, scalable, and maintainable software-as-a-service applications. Originally published by Heroku co-founder Adam Wiggins, these twelve principles provide a framework for developing applications that are portable across execution environments, suitable for cloud deployment, and capable of scaling without significant architectural changes. Understanding these factors is essential for any developer building production systems in containerized, serverless, or cloud-native environments.
## Core Concepts
The twelve factors address the most common structural and operational problems encountered when building distributed systems. They promote declarative configuration, clean contracts between components, maximum portability, and continuous deployment readiness.
## Overview of All Twelve Factors
| # | Factor | Summary |
|---|---|---|
| 1 | Codebase | One codebase tracked in version control, many deploys |
| 2 | Dependencies | Explicitly declare and isolate dependencies |
| 3 | Config | Store configuration in the environment |
| 4 | Backing Services | Treat backing services as attached resources |
| 5 | Build, Release, Run | Strictly separate build and run stages |
| 6 | Processes | Execute the app as one or more stateless processes |
| 7 | Port Binding | Export services via port binding |
| 8 | Concurrency | Scale out via the process model |
| 9 | Disposability | Maximize robustness with fast startup and graceful shutdown |
| 10 | Dev/Prod Parity | Keep development, staging, and production as similar as possible |
| 11 | Logs | Treat logs as event streams |
| 12 | Admin Processes | Run admin/management tasks as one-off processes |
## Factor 1: Codebase
A twelve-factor app is always tracked in a version control system (Git, Mercurial, etc.). There is a one-to-one correlation between a codebase and an app. If there are multiple codebases, it's not an app — it's a distributed system, and each component is an app that should individually comply with twelve-factor.
Multiple deploys (staging, production, developer local) share the same codebase but may run different versions.
## Factor 2: Dependencies
A twelve-factor app never relies on implicit system-wide packages. All dependencies are declared explicitly via a dependency manifest (e.g., pom.xml for Maven, build.gradle for Gradle) and isolated using a tool that ensures no dependencies leak in from the surrounding system.
```xml
<!-- pom.xml — Explicit dependency declaration -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>twelve-factor-demo</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <dependencies>
    <!-- Every dependency is explicitly declared -->
    <dependency>
      <groupId>com.sparkjava</groupId>
      <artifactId>spark-core</artifactId>
      <version>2.9.4</version>
    </dependency>
    <dependency>
      <groupId>com.google.code.gson</groupId>
      <artifactId>gson</artifactId>
      <version>2.10.1</version>
    </dependency>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>s3</artifactId>
      <version>2.21.0</version>
    </dependency>
  </dependencies>
</project>
```

## Factor 3: Config
Configuration that varies between deploys (database URLs, credentials, feature flags) must be stored in environment variables, not in code. This enforces strict separation of config from code.
```java
import java.util.Optional;

public class AppConfig {
    private final String databaseUrl;
    private final String databaseUser;
    private final String databasePassword;
    private final int serverPort;
    private final String environment;

    public AppConfig() {
        // Factor 3: All config comes from environment variables
        this.databaseUrl = requireEnv("DATABASE_URL");
        this.databaseUser = requireEnv("DATABASE_USER");
        this.databasePassword = requireEnv("DATABASE_PASSWORD");
        this.serverPort = Integer.parseInt(
                Optional.ofNullable(System.getenv("PORT")).orElse("8080"));
        this.environment = Optional.ofNullable(System.getenv("APP_ENV"))
                .orElse("development");
    }

    private String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException(
                    "Required environment variable '" + name + "' is not set");
        }
        return value;
    }

    public String getDatabaseUrl() { return databaseUrl; }
    public String getDatabaseUser() { return databaseUser; }
    public String getDatabasePassword() { return databasePassword; }
    public int getServerPort() { return serverPort; }
    public String getEnvironment() { return environment; }

    public static void main(String[] args) {
        try {
            AppConfig config = new AppConfig();
            System.out.println("Server starting on port: " + config.getServerPort());
            System.out.println("Environment: " + config.getEnvironment());
            System.out.println("Database: " + config.getDatabaseUrl());
        } catch (IllegalStateException e) {
            System.err.println("Configuration error: " + e.getMessage());
            System.exit(1);
        }
    }
}
```

## Factor 4: Backing Services
A twelve-factor app treats backing services — databases, message queues, SMTP services, caching systems — as attached resources accessible via a URL or locator stored in config. The app makes no distinction between local and third-party services; swapping a local PostgreSQL for Amazon RDS requires only a config change.
```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;
import software.amazon.awssdk.services.sqs.model.SendMessageResponse;
import java.net.URI;

public class BackingServiceExample {
    // Factor 4: Backing service is an attached resource.
    // The queue URL comes from configuration, not hardcoded.
    private final SqsClient sqsClient;
    private final String queueUrl;

    public BackingServiceExample() {
        String endpoint = System.getenv("AWS_SQS_ENDPOINT"); // null in prod, set for local
        this.queueUrl = System.getenv("QUEUE_URL");
        var builder = SqsClient.builder()
                .region(Region.of(System.getenv("AWS_REGION")));
        // Swap local vs. cloud with just a config change
        if (endpoint != null) {
            builder.endpointOverride(URI.create(endpoint));
        }
        this.sqsClient = builder.build();
    }

    public String sendMessage(String body) {
        SendMessageResponse response = sqsClient.sendMessage(
                SendMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .messageBody(body)
                        .build());
        return response.messageId();
    }

    public static void main(String[] args) {
        BackingServiceExample service = new BackingServiceExample();
        String messageId = service.sendMessage("{\"event\": \"user.created\"}");
        System.out.println("Sent message: " + messageId);
    }
}
```

## Factor 5: Build, Release, Run
The deployment pipeline is strictly separated into three stages:
- Build: Converts code into an executable bundle (compile, package dependencies)
- Release: Combines the build artifact with deploy-specific config to produce an immutable release
- Run: Launches the app in the execution environment
Every release has a unique ID (timestamp or version). Releases are append-only and immutable.
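The run stage can surface the release identity without ever recomputing it. A minimal sketch, assuming the release stage stamps the environment with `RELEASE_ID` and `BUILD_COMMIT` (hypothetical variable names, not part of the examples above):

```java
public class ReleaseInfo {
    // Sketch only: RELEASE_ID and BUILD_COMMIT are hypothetical variables
    // that a release stage would stamp into each deploy's environment.
    // The run stage reads them; it never rebuilds or reconfigures the artifact.
    public static void main(String[] args) {
        String releaseId = System.getenv().getOrDefault("RELEASE_ID", "unknown");
        String commit = System.getenv().getOrDefault("BUILD_COMMIT", "unknown");
        System.out.println("Running release " + releaseId + " (commit " + commit + ")");
    }
}
```

Exposing this value in logs or a health endpoint makes it trivial to confirm which immutable release a given process is running.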
## Factor 6: Processes
Twelve-factor processes are stateless and share-nothing. Any data that needs to persist is stored in a backing service (database, object store). Session state, if needed, goes into a datastore with time-expiration such as Redis.
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

// ANTI-PATTERN: In-memory session storage
class StickySessionHandler {
    // This violates Factor 6 — state is lost on restart or scale-out
    private final Map<String, String> sessions = new ConcurrentHashMap<>();

    public void saveSession(String sessionId, String data) {
        sessions.put(sessionId, data);
    }
}

// CORRECT: Externalized session storage (here using the Lettuce Redis client)
public class ExternalSessionHandler {
    private final RedisCommands<String, String> redis;

    public ExternalSessionHandler() {
        // Factor 4 + Factor 6: state lives in a backing service
        this.redis = RedisClient.create(System.getenv("REDIS_URL"))
                .connect().sync();
    }

    public void saveSession(String sessionId, String data) {
        redis.setex(sessionId, 3600, data); // TTL of 1 hour
    }

    public String getSession(String sessionId) {
        return redis.get(sessionId);
    }
}
```

## Factor 7: Port Binding
The app is self-contained and exports HTTP (or other services) by binding to a port. It does not rely on runtime injection of a web server (like deploying a WAR into Tomcat). The app itself embeds the server.
```java
import static spark.Spark.*;
import com.google.gson.Gson;
import java.util.Map;

public class PortBindingApp {
    public static void main(String[] args) {
        // Factor 7: App binds to a port and serves requests
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        port(port);
        Gson gson = new Gson();

        get("/health", (req, res) -> {
            res.type("application/json");
            return gson.toJson(Map.of(
                    "status", "UP",
                    "port", port));
        });

        get("/api/greeting/:name", (req, res) -> {
            res.type("application/json");
            String name = req.params("name");
            return gson.toJson(Map.of(
                    "message", "Hello, " + name + "!"));
        });

        System.out.println("Application bound to port " + port);
    }
}
```

## Factor 8: Concurrency
Scale by running multiple processes, not by making a single process larger. Different work types (web requests, background jobs, scheduled tasks) are handled by different process types.
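The process model above can be sketched as a single codebase that launches different process types depending on how it is started. The class name and the `PROCESS_TYPE` variable are illustrative assumptions, not part of the original examples:

```java
// Sketch: one codebase, multiple process types selected at launch.
// A process manager (ECS, Kubernetes, a Procfile) runs N copies of each type.
public class ProcessLauncher {
    public static void main(String[] args) {
        // Process type from the first argument, or the (hypothetical)
        // PROCESS_TYPE environment variable, defaulting to "web"
        String type = args.length > 0 ? args[0]
                : System.getenv().getOrDefault("PROCESS_TYPE", "web");
        switch (type) {
            case "web":
                System.out.println("Starting web process (HTTP request handling)");
                // startHttpServer();
                break;
            case "worker":
                System.out.println("Starting worker process (queue consumption)");
                // startQueueConsumer();
                break;
            case "clock":
                System.out.println("Starting clock process (scheduled tasks)");
                // startScheduler();
                break;
            default:
                System.err.println("Unknown process type: " + type);
                System.exit(1);
        }
    }
}
```

Scaling out then means running more copies of the relevant process type, never growing one process into a multi-role monolith.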
## Factor 9: Disposability
Processes should start fast and shut down gracefully. On receiving a SIGTERM, the process should finish current requests, release resources, and exit. This supports elastic scaling, rapid deployment, and robustness.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdownApp {
    private final ExecutorService executor = Executors.newFixedThreadPool(10);
    private volatile boolean running = true;

    public void start() {
        // Factor 9: Register shutdown hook for graceful termination
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("SIGTERM received. Shutting down gracefully...");
            running = false;
            executor.shutdown();
            try {
                if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                    System.err.println("Forcing shutdown after timeout");
                    executor.shutdownNow();
                }
            } catch (InterruptedException e) {
                executor.shutdownNow();
                Thread.currentThread().interrupt();
            }
            System.out.println("Shutdown complete.");
        }));

        System.out.println("Application started. Processing jobs...");
        while (running) {
            executor.submit(this::processJob);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }

    private void processJob() {
        // Simulate work
        System.out.println(Thread.currentThread().getName() + " processing job");
    }

    public static void main(String[] args) {
        new GracefulShutdownApp().start();
    }
}
```

## Factor 10: Dev/Prod Parity
Keep the gap between development and production small across three dimensions:
- Time gap: Deploy hours after coding, not weeks
- Personnel gap: Developers who write code are closely involved in deploying and observing it
- Tools gap: Keep dev and prod tooling as similar as possible
```yaml
# docker-compose.yml — Dev environment matching production
version: '3.8'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: jdbc:postgresql://db:5432/myapp
      DATABASE_USER: myapp
      DATABASE_PASSWORD: secret
      REDIS_URL: redis://cache:6379
      QUEUE_URL: http://localstack:4566/000000000000/my-queue
    depends_on:
      - db
      - cache
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: secret
  cache:
    image: redis:7-alpine
  localstack:
    image: localstack/localstack
    environment:
      SERVICES: sqs,s3
```

## Factor 11: Logs
A twelve-factor app never concerns itself with routing or storage of its output stream. It writes all logs to stdout as an unbuffered event stream. The execution environment captures, collates, and routes these streams to observability platforms.
```java
import java.time.Instant;
import java.util.logging.*;

public class LogStreamApp {
    // Factor 11: Logs go to stdout as structured events
    private static final Logger logger = Logger.getLogger(LogStreamApp.class.getName());

    static {
        // Remove default handlers (java.util.logging writes to stderr by default)
        Logger rootLogger = Logger.getLogger("");
        for (Handler handler : rootLogger.getHandlers()) {
            rootLogger.removeHandler(handler);
        }
        // Write to stdout only
        ConsoleHandler consoleHandler = new ConsoleHandler() {
            { setOutputStream(System.out); }
        };
        consoleHandler.setFormatter(new Formatter() {
            @Override
            public String format(LogRecord record) {
                return String.format(
                        "{\"timestamp\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}%n",
                        Instant.ofEpochMilli(record.getMillis()),
                        record.getLevel(),
                        record.getMessage());
            }
        });
        rootLogger.addHandler(consoleHandler);
    }

    public static void main(String[] args) {
        logger.info("Application starting");
        logger.info("Processing request for user=12345");
        logger.warning("Slow query detected: 2340ms");
        logger.severe("Connection to backing service lost");
        logger.info("Application shutting down");
    }
}
```

## Factor 12: Admin Processes
Administrative tasks — database migrations, console sessions, one-off scripts — should run as one-off processes in an identical environment to the app's regular long-running processes. They ship with the same codebase and config.
```java
public class AdminMigration {
    public static void main(String[] args) {
        // Factor 12: Run as one-off process with same config
        // Example: java -cp app.jar com.example.AdminMigration migrate
        if (args.length == 0) {
            System.err.println("Usage: AdminMigration <command>");
            System.err.println("Commands: migrate, seed, cleanup");
            System.exit(1);
        }
        String command = args[0];
        AppConfig config = new AppConfig(); // Same config class as the main app
        switch (command) {
            case "migrate":
                System.out.println("Running database migrations against: "
                        + config.getDatabaseUrl());
                runMigrations(config);
                break;
            case "seed":
                System.out.println("Seeding database...");
                seedDatabase(config);
                break;
            case "cleanup":
                System.out.println("Cleaning up expired sessions...");
                cleanupSessions(config);
                break;
            default:
                System.err.println("Unknown command: " + command);
                System.exit(1);
        }
        System.out.println("Admin task '" + command + "' completed.");
    }

    private static void runMigrations(AppConfig config) {
        // Flyway or Liquibase migration logic would go here
        System.out.println("Applied 3 pending migrations.");
    }

    private static void seedDatabase(AppConfig config) {
        System.out.println("Inserted 100 seed records.");
    }

    private static void cleanupSessions(AppConfig config) {
        System.out.println("Removed 2,847 expired sessions.");
    }
}
```

## Complete Architecture: Twelve-Factor App on AWS
## Factor Compliance Checklist
## Best Practices
- Start with config and dependencies: These two factors give you the biggest immediate return in portability and reproducibility.
- Use containers for parity: Docker and docker-compose make Factor 10 (dev/prod parity) trivial to achieve.
- Never store state in the filesystem: Ephemeral storage disappears when containers restart; use S3 or a database instead.
- Automate the build-release-run pipeline: Use CodePipeline, GitHub Actions, or GitLab CI to enforce strict stage separation.
- Emit structured logs to stdout: JSON-formatted log lines are easily parsed by CloudWatch, Datadog, and ELK stacks.
- Design for disposability from day one: Implement health checks, graceful shutdown hooks, and idempotent operations.
- Use a process manager, not manual scaling: Let ECS, Kubernetes, or a PaaS manage process count and types.
- Treat backing services as swappable: Write code against interfaces, not specific implementations, and inject connection strings via environment.
- Run admin tasks in the same environment: Package migration scripts in the same Docker image and run them as one-off ECS tasks or Kubernetes Jobs.
- Audit factor compliance regularly: Use the checklist above during code reviews and architecture discussions.
## Common Anti-Patterns vs. Twelve-Factor Solutions
## Related Concepts
- Serverless and Container Workloads: Twelve-factor apps are naturally suited for containerized and serverless deployments
- Eventual Consistency: Stateless processes and backing services often operate with eventual consistency
- REST HTTP Verbs and Status Codes: Factor 7 (port binding) typically exposes RESTful HTTP interfaces
- Asynchronous Programming: Factor 8 (concurrency) often involves asynchronous processing patterns
- OAuth: Factor 3 (config) applies directly to token and credential management