A monolith’s greatest strength is its simplicity, but that simplicity is also its ultimate undoing.
Let’s watch a simple e-commerce monolith in action.
Imagine a single ecommerce-app.jar file running on a server. It handles everything: user authentication, product catalog, order processing, and payment gateway integration. When a user browses products, the monolith’s internal ProductService queries the products table. When they add to cart, CartService interacts with users and products. When they check out, OrderService talks to CartService, UserService, and an external PaymentGatewayClient. All these services live within the same process, sharing memory, threads, and often, a single database.
// Example: ProductService within a Monolith
import java.util.List;

public class ProductService {
    private final DatabaseConnection dbConnection;

    public ProductService(DatabaseConnection dbConnection) {
        this.dbConnection = dbConnection;
    }

    public List<Product> getAllProducts() {
        // Direct database query
        return dbConnection.executeQuery("SELECT * FROM products");
    }

    public Product getProductById(String id) {
        // Parameterized query: concatenating id into the SQL string
        // would open the door to SQL injection
        return dbConnection.executeQuery("SELECT * FROM products WHERE id = ?", id);
    }
}
The problem this solves is obvious: get a product to market fast. One codebase, one deployment, one team if it’s small enough. Internal communication is just method calls. Debugging is often just stepping through code in a single IDE.
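To make "internal communication is just method calls" concrete, here is a minimal sketch (class and method names are illustrative, not from the source) of two monolith components wired together with plain object references:

```python
# Minimal sketch: in a monolith, "inter-service" communication is a plain
# method call inside one process. Class and method names are illustrative.
class CartService:
    def __init__(self):
        self._carts = {}  # user_id -> list of product ids

    def add_to_cart(self, user_id, product_id):
        self._carts.setdefault(user_id, []).append(product_id)

    def get_cart(self, user_id):
        return self._carts.get(user_id, [])


class OrderService:
    def __init__(self, cart_service):
        # Plain object reference: no network, no serialization, no timeouts
        self.cart_service = cart_service

    def place_order(self, user_id):
        items = self.cart_service.get_cart(user_id)  # direct method call
        return {"user_id": user_id, "items": items, "status": "placed"}


cart = CartService()
orders = OrderService(cart)
cart.add_to_cart("u1", "p42")
print(orders.place_order("u1"))  # → {'user_id': 'u1', 'items': ['p42'], 'status': 'placed'}
```

A failure in `CartService` here surfaces as an ordinary exception with one stack trace, which is exactly the debugging simplicity described above.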
The mental model is straightforward: a single, large kingdom. All citizens (features) live within its walls. The king (deployment pipeline) dictates when anything changes. The royal guard (database) is shared by all.
However, as the kingdom grows, the king’s decrees become slower. Adding a new feature requires understanding the entire kingdom’s layout, lest you accidentally disrupt the royal bakery by changing the water supply route. Scaling becomes "buy a bigger castle." If one part of the kingdom gets sick (a bug in the payment processing), the entire kingdom grinds to a halt.
Now, consider a microservices approach for the same e-commerce platform. Instead of one ecommerce-app.jar, we have many smaller services: user-service, product-catalog-service, order-service, payment-service. Each is a separate application, often running in its own container, communicating over the network via APIs (like REST or gRPC).
# Example: Product Catalog Service (Microservice)
from flask import Flask, jsonify
import requests

app = Flask(__name__)

@app.route('/products', methods=['GET'])
def get_all_products():
    # This service talks to its own database
    products_data = get_products_from_db()
    return jsonify(products_data)

@app.route('/products/<id>', methods=['GET'])
def get_product(id):
    # This service talks to its own database
    product_data = get_product_by_id_from_db(id)
    return jsonify(product_data)

# Example of the Product Service calling the Order Service
@app.route('/products/<id>/details', methods=['GET'])
def get_product_with_order_count(id):
    product_data = get_product_by_id_from_db(id)
    # Network call to the Order Service; a timeout guards against a slow peer
    order_count_response = requests.get(
        f"http://order-service:5000/orders/count_for_product/{id}",
        timeout=2,
    )
    order_count = order_count_response.json().get("count", 0)
    product_data["order_count"] = order_count
    return jsonify(product_data)
Here, product-catalog-service might expose an API endpoint /products. When a user requests product information, the API Gateway (another microservice) routes the request to product-catalog-service. If the user then wants to place an order, the order-service (another microservice) handles it, potentially calling user-service to verify credentials and payment-service to process payment. Each service has its own database and its own deployment pipeline.
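The gateway's core job, mapping incoming paths to backing services, can be sketched as a prefix-to-upstream table. The service names and port below are assumptions carried over from the examples above, and a real gateway would also handle authentication, retries, and load balancing:

```python
# Minimal sketch of API Gateway routing: a prefix-to-service table.
# Service names and ports are illustrative, matching the examples above.
ROUTE_TABLE = {
    "/products": "http://product-catalog-service:5000",
    "/orders":   "http://order-service:5000",
    "/users":    "http://user-service:5000",
    "/payments": "http://payment-service:5000",
}

def route(path):
    """Return the upstream URL for an incoming request path, or None."""
    for prefix, base_url in ROUTE_TABLE.items():
        if path == prefix or path.startswith(prefix + "/"):
            return base_url + path
    return None  # unknown route: the gateway would answer 404

print(route("/products/42"))  # → http://product-catalog-service:5000/products/42
```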
This allows for independent scaling. If the product catalog is under heavy load, you can scale up only the product-catalog-service instances without affecting others. Teams can develop and deploy their services independently, leading to faster iteration cycles. Technology diversity is also possible; the product-catalog-service could be written in Java, while payment-service uses Go.
The core problem microservices solve is managing complexity in large, evolving systems by breaking it down into smaller, manageable, independent units. This leads to greater agility and resilience.
The most surprising thing about microservices is how much operational overhead they introduce, not just in terms of infrastructure, but in the cognitive load of understanding distributed system behavior. Debugging a request that traverses five services becomes a puzzle involving distributed tracing, correlation IDs, and understanding network latency. What was once a single stack trace is now a series of logs across multiple machines and services.
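The correlation-ID technique mentioned above can be sketched in a few lines: each service reuses the ID it received, or mints one at the system edge, and attaches it to every downstream call. The `X-Correlation-ID` header name is a common convention rather than a standard, so treat it as an assumption:

```python
import uuid

# Sketch of correlation-ID propagation. "X-Correlation-ID" is a common
# convention, not a standard header; adjust to your tracing setup.
CORRELATION_HEADER = "X-Correlation-ID"

def ensure_correlation_id(incoming_headers):
    """Reuse the caller's correlation ID, or mint one at the system edge."""
    cid = incoming_headers.get(CORRELATION_HEADER)
    if cid is None:
        cid = str(uuid.uuid4())
    return cid

def outbound_headers(incoming_headers):
    """Headers to attach to every downstream call, so that logs from
    all five services can be joined on a single ID."""
    return {CORRELATION_HEADER: ensure_correlation_id(incoming_headers)}

# An edge request with no ID gets a fresh one; downstream hops reuse it.
first_hop = outbound_headers({})
second_hop = outbound_headers(first_hop)
assert first_hop[CORRELATION_HEADER] == second_hop[CORRELATION_HEADER]
```

With every service logging this ID, the "series of logs across multiple machines" can be filtered back into something resembling the single stack trace the monolith gave you for free.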
The next challenge you’ll face is managing inter-service communication efficiently.