Serverless and Container Workloads
Introduction
Modern applications demand flexible deployment strategies that balance cost, scalability, and operational complexity. Serverless computing and containerized workloads represent two dominant paradigms that solve different classes of infrastructure problems. Understanding when to apply each approach — and how to migrate between them — is a core skill for cloud-native Java developers.
Monolithic Architecture
A traditional Spring Boot application bundles all services — orders, users, payments — into a single deployable artifact. This simplifies early development but creates scaling and deployment bottlenecks as the application grows.
The monolith places every domain concern in a single JVM process. Deploying any feature forces the entire application to restart, and a spike in payment-processing load forces scaling of the entire pod, including the idle user service.
Java: Monolithic Spring Boot Application
java
// All services in a single Spring Boot application
@SpringBootApplication
public class MonolithApplication {
public static void main(String[] args) {
SpringApplication.run(MonolithApplication.class, args);
}
}
@RestController
@RequestMapping("/orders")
class OrderController {
private final OrderRepository orderRepo;
private final UserService userService;
private final PaymentService paymentService;
OrderController(OrderRepository orderRepo, UserService userService,
PaymentService paymentService) {
this.orderRepo = orderRepo;
this.userService = userService;
this.paymentService = paymentService;
}
@PostMapping
public ResponseEntity<Order> createOrder(@RequestBody CreateOrderRequest req) {
User user = userService.findById(req.getUserId());
if (user == null) return ResponseEntity.badRequest().build();
PaymentResult payment = paymentService.charge(user, req.getAmount());
if (!payment.isSuccess()) return ResponseEntity.status(402).build();
Order order = new Order(user.getId(), req.getItems(), payment.getTransactionId());
return ResponseEntity.ok(orderRepo.save(order));
}
}
@Service
class UserService {
private final UserRepository repo;
UserService(UserRepository repo) { this.repo = repo; }
public User findById(String id) { return repo.findById(id).orElse(null); }
}
@Service
class PaymentService {
public PaymentResult charge(User user, BigDecimal amount) {
// Call payment gateway synchronously within the same JVM
String txId = UUID.randomUUID().toString();
return new PaymentResult(true, txId);
}
}
Microservices with Containers (Docker)
Microservices split each domain into independently deployable services. Docker packages each service with its runtime, eliminating the "works on my machine" problem and enabling per-service scaling.
Java: Dockerfile for a Spring Boot Microservice
java
// Dockerfile — place in the root of your microservice module
// (shown as a Java-annotated code block for documentation purposes)
/*
# Stage 1: Build
FROM eclipse-temurin:21-jdk-alpine AS build
WORKDIR /app
COPY mvnw pom.xml ./
COPY .mvn .mvn
RUN ./mvnw dependency:go-offline -q
COPY src ./src
RUN ./mvnw package -DskipTests -q
# Stage 2: Runtime (minimal image)
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", \
"-XX:+UseContainerSupport", \
"-XX:MaxRAMPercentage=75.0", \
"-jar", "app.jar"]
*/
// In Java, read the PORT environment variable at startup:
@SpringBootApplication
public class OrderServiceApplication {
public static void main(String[] args) {
// Honour PORT env var injected by container runtime / Kubernetes
String port = System.getenv().getOrDefault("PORT", "8080");
System.setProperty("server.port", port);
SpringApplication.run(OrderServiceApplication.class, args);
}
}
Container Orchestration with Kubernetes
Kubernetes manages the lifecycle of containers at scale: scheduling, self-healing, rolling deployments, and horizontal pod autoscaling.
Kubernetes Deployment Manifest
java
/*
# kubernetes/order-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-service
labels:
app: order-service
spec:
replicas: 3
selector:
matchLabels:
app: order-service
template:
metadata:
labels:
app: order-service
spec:
containers:
- name: order-service
image: myregistry/order-service:1.4.2
ports:
- containerPort: 8080
resources:
requests:
cpu: "250m"
memory: "256Mi"
limits:
cpu: "500m"
memory: "512Mi"
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 20
periodSeconds: 10
env:
- name: SPRING_PROFILES_ACTIVE
value: "kubernetes"
---
apiVersion: v1
kind: Service
metadata:
name: order-service
spec:
selector:
app: order-service
ports:
- port: 80
targetPort: 8080
type: ClusterIP
*/
// Spring Boot readiness endpoint is auto-configured when using Actuator:
// management.endpoint.health.probes.enabled=true
// management.health.livenessstate.enabled=true
// management.health.readinessstate.enabled=true
public class KubernetesReadinessExample {}
Serverless Computing (AWS Lambda)
AWS Lambda executes code in response to events without provisioning servers. The platform handles capacity, patching, and scaling. You pay per invocation and duration.
Java: AWS Lambda Handler
java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.logging.Logger;
public class OrderEventHandler implements RequestHandler<SQSEvent, SQSBatchResponse> {
private static final Logger log = Logger.getLogger(OrderEventHandler.class.getName());
// Initialized once per container lifecycle — reused on warm starts
private static final DynamoDbClient dynamo = DynamoDbClient.create();
private static final String TABLE = System.getenv("ORDER_TABLE");
@Override
public SQSBatchResponse handleRequest(SQSEvent event, Context context) {
List<SQSBatchResponse.BatchItemFailure> failures = new ArrayList<>();
for (SQSEvent.SQSMessage msg : event.getRecords()) {
try {
processMessage(msg);
} catch (Exception e) {
log.severe("Failed to process message " + msg.getMessageId() + ": " + e.getMessage());
// Return messageId so SQS retries only the failed item
failures.add(SQSBatchResponse.BatchItemFailure.builder()
.withItemIdentifier(msg.getMessageId())
.build());
}
}
return SQSBatchResponse.builder()
.withBatchItemFailures(failures)
.build();
}
private void processMessage(SQSEvent.SQSMessage msg) {
String body = msg.getBody();
log.info("Processing order event: " + body);
dynamo.putItem(PutItemRequest.builder()
.tableName(TABLE)
.item(Map.of(
"PK", AttributeValue.fromS("ORDER#" + msg.getMessageId()),
"SK", AttributeValue.fromS("EVENT"),
"body", AttributeValue.fromS(body),
"ts", AttributeValue.fromN(String.valueOf(System.currentTimeMillis()))
))
.build());
}
}
Event-Driven Architecture with SQS and Lambda
Decoupling producers from consumers via SQS allows each component to scale independently and buffers traffic spikes without dropping messages.
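The buffering effect described above can be sketched in-process with a bounded `BlockingQueue` standing in for SQS. This is a toy analogy only (no durability, no visibility timeouts, no retries), but it shows how a buffer lets a bursty producer and a steady consumer run at independent rates:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BufferedDecouplingDemo {
    public static List<String> run() throws InterruptedException {
        // Bounded queue plays the role of the SQS buffer between the two sides
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
        List<String> processed = new ArrayList<>();

        // Producer: bursts 50 events at once (a traffic spike)
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 50; i++) queue.offer("order-" + i);
        });
        // Consumer: drains at its own pace, unaffected by the burst
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 50; i++) processed.add(queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run().size() + " events processed");
    }
}
```

In production the in-memory queue is replaced by SQS, which additionally provides durability, per-message visibility timeouts, and DLQ routing for messages that repeatedly fail.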
DynamoDB Integration
Lambda functions pair naturally with DynamoDB: both scale to zero, both charge per operation, and both are regionally managed services requiring no connection pools.
Java: Lambda DynamoDB Integration
java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;
import java.util.HashMap;
import java.util.Map;
public class DynamoOrderRepository {
private final DynamoDbClient client;
private final String tableName;
public DynamoOrderRepository(DynamoDbClient client, String tableName) {
this.client = client;
this.tableName = tableName;
}
public void save(String orderId, String userId, String status) {
Map<String, AttributeValue> item = new HashMap<>();
item.put("PK", AttributeValue.fromS("ORDER#" + orderId));
item.put("SK", AttributeValue.fromS("DETAIL"));
item.put("userId", AttributeValue.fromS(userId));
item.put("status", AttributeValue.fromS(status));
item.put("createdAt", AttributeValue.fromN(
String.valueOf(System.currentTimeMillis())));
client.putItem(PutItemRequest.builder()
.tableName(tableName)
.item(item)
// Prevent overwriting an existing order
.conditionExpression("attribute_not_exists(PK)")
.build());
}
public Map<String, AttributeValue> findById(String orderId) {
GetItemResponse response = client.getItem(GetItemRequest.builder()
.tableName(tableName)
.key(Map.of(
"PK", AttributeValue.fromS("ORDER#" + orderId),
"SK", AttributeValue.fromS("DETAIL")
))
.consistentRead(true) // Strong consistency for reads after write
.build());
return response.hasItem() ? response.item() : null;
}
public void updateStatus(String orderId, String newStatus) {
client.updateItem(UpdateItemRequest.builder()
.tableName(tableName)
.key(Map.of(
"PK", AttributeValue.fromS("ORDER#" + orderId),
"SK", AttributeValue.fromS("DETAIL")
))
.updateExpression("SET #s = :status, updatedAt = :ts")
.expressionAttributeNames(Map.of("#s", "status"))
.expressionAttributeValues(Map.of(
":status", AttributeValue.fromS(newStatus),
":ts", AttributeValue.fromN(
String.valueOf(System.currentTimeMillis()))
))
.build());
}
}
Architecture Comparison
Choosing between monolith, containers, and serverless involves trade-offs across cost model, operational complexity, cold start latency, and scalability ceiling.
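To make the cost trade-off concrete, here is a back-of-envelope calculation comparing per-invocation Lambda pricing against always-on Fargate tasks. The rates below are illustrative assumptions, not current AWS pricing; check the official pricing pages before relying on them.

```java
public class CostComparison {
    // Illustrative rates (assumptions, not authoritative AWS pricing)
    static final double LAMBDA_PER_MILLION_REQUESTS = 0.20;
    static final double LAMBDA_PER_GB_SECOND = 0.0000166667;
    static final double FARGATE_VCPU_HOUR = 0.04048;
    static final double FARGATE_GB_HOUR = 0.004445;

    // Lambda: pay per request plus per GB-second of execution
    static double lambdaMonthlyCost(long requests, double avgDurationMs, double memoryGb) {
        double requestCost = requests / 1_000_000.0 * LAMBDA_PER_MILLION_REQUESTS;
        double gbSeconds = requests * (avgDurationMs / 1000.0) * memoryGb;
        return requestCost + gbSeconds * LAMBDA_PER_GB_SECOND;
    }

    // Fargate: pay for vCPU-hours and GB-hours whether traffic arrives or not
    static double fargateMonthlyCost(double vcpu, double memoryGb, int taskCount) {
        double hours = 730.0 * taskCount; // always-on tasks, ~730 h per month
        return hours * (vcpu * FARGATE_VCPU_HOUR + memoryGb * FARGATE_GB_HOUR);
    }

    public static void main(String[] args) {
        // 5M requests/month, 120 ms average duration, 512 MB memory
        System.out.printf("Lambda:  $%.2f%n", lambdaMonthlyCost(5_000_000L, 120, 0.5));
        // Two always-on 0.5 vCPU / 1 GB tasks
        System.out.printf("Fargate: $%.2f%n", fargateMonthlyCost(0.5, 1.0, 2));
    }
}
```

At this modest, spiky volume the pay-per-use model wins by a wide margin; as sustained throughput grows, the Lambda GB-second term grows linearly while the container cost stays flat, and the curves eventually cross.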
Java: Monolith vs Lambda Structure Side-by-Side
java
// ──────────────────────────────────────────────────
// APPROACH A: Traditional Spring Boot Monolith
// ──────────────────────────────────────────────────
@SpringBootApplication
public class MonolithApp {
public static void main(String[] args) {
SpringApplication.run(MonolithApp.class, args);
}
}
@RestController
@RequestMapping("/api")
class OrderEndpoint {
@PostMapping("/orders")
public ResponseEntity<String> createOrder(@RequestBody String payload) {
// Synchronous DB call in the request thread
return ResponseEntity.ok("Order created");
}
}
// ──────────────────────────────────────────────────
// APPROACH B: AWS Lambda Handler (no Spring context)
// ──────────────────────────────────────────────────
public class LambdaOrderHandler
implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
// Initialized once per container — simulates Spring singleton bean lifetime
private static final DynamoDbClient dynamo = DynamoDbClient.create();
@Override
public APIGatewayProxyResponseEvent handleRequest(
APIGatewayProxyRequestEvent event, Context context) {
String body = event.getBody();
// Process order...
return new APIGatewayProxyResponseEvent()
.withStatusCode(200)
.withBody("{\"status\":\"created\"}");
}
}
Deployment Pipeline Diagram
(Diagram not reproduced in this text version.)
Service Mesh Diagram
(Diagram not reproduced in this text version.)
Best Practices
Right-size the architecture for your use case. Serverless excels at spiky, event-driven workloads with sub-15-minute execution windows. Containers shine for long-running, memory-intensive, or latency-sensitive applications. Avoid over-engineering a simple CRUD app into microservices.
Monitor and mitigate cold starts with AWS X-Ray. Instrument Lambda functions with the X-Ray SDK to capture initialization time separately from invocation time. Use Provisioned Concurrency for latency-sensitive endpoints and SnapStart for Java Lambda functions to reduce cold start time by up to 90%.
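Independent of X-Ray, a handler can distinguish cold from warm invocations on its own, because static initialization runs exactly once per execution environment. A minimal sketch of this idea (class and field names are illustrative, not from any AWS SDK):

```java
public class ColdStartTimer {
    // Captured during class loading — i.e., during the cold start
    static final long INIT_TIME = System.nanoTime();
    static boolean coldStart = true;

    // First call in an execution environment reports init latency; later calls are warm
    public static String handle() {
        if (coldStart) {
            coldStart = false;
            long initMs = (System.nanoTime() - INIT_TIME) / 1_000_000;
            return "cold start, init took ~" + initMs + " ms";
        }
        return "warm start";
    }

    public static void main(String[] args) {
        System.out.println(handle()); // first invocation: cold
        System.out.println(handle()); // subsequent invocations: warm
    }
}
```

Emitting such a flag as a structured log field or CloudWatch metric complements X-Ray traces when deciding where Provisioned Concurrency or SnapStart is worth the cost.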
Build a CI/CD pipeline for Lambda using SAM or CDK. Treat infrastructure as code. Use sam deploy --guided for initial setup and parameterized CloudFormation stacks for environment promotion. Implement canary traffic shifting via CodeDeploy Lambda deployment configurations.
Use container image Lambda for large Java runtimes. When your deployment artifact exceeds the 50 MB zipped limit (common for Spring Boot), package the Lambda as a container image (up to 10 GB). This removes the ZIP size limit and enables consistent local testing with Docker Desktop.
Use SQS for decoupling producers from consumers. A standard SQS queue acts as a durable buffer, absorbing traffic spikes and enabling independent scaling of producers and consumers. Configure a Dead-Letter Queue (DLQ) and alarm on the DLQ's ApproximateNumberOfMessagesVisible metric to detect poison-pill messages early.
Apply DynamoDB single-table design from the start. Modeling multiple entity types in one DynamoDB table eliminates cross-table joins, reduces latency, and simplifies IAM policies. Define your access patterns before your schema, and use composite sort keys to enable range queries without GSIs where possible.
Enforce least-privilege IAM roles. Each Lambda function should have its own execution role scoped to the exact DynamoDB tables, SQS queues, and CloudWatch log groups it needs. Never attach AdministratorAccess to a Lambda role.
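As an illustration of such a scoped role, a policy granting only the DynamoDB and SQS actions an order handler needs might look like the following. Account ID, region, and resource names are placeholders; shown, like the manifests above, as a comment block:

```java
/*
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OrderTableAccess",
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem", "dynamodb:GetItem", "dynamodb:UpdateItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/OrderTable"
    },
    {
      "Sid": "ConsumeOrderQueue",
      "Effect": "Allow",
      "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:order-events"
    },
    {
      "Sid": "WriteLogs",
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/order-handler:*"
    }
  ]
}
*/
public class LeastPrivilegePolicyExample {}
```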
Related Concepts
- Eventual Consistency — Understanding how DynamoDB's eventual and strong consistency modes affect Lambda-driven workflows
- Asynchronous Programming — Reactive patterns for non-blocking Lambda handlers
- Security and Cryptography — Encrypting DynamoDB data at rest and SQS messages in transit