Prism mocks your API from a specification, giving you a fully functional, albeit fake, backend to test against.
Let’s see it in action. Imagine you’ve got a simple OpenAPI 3 spec for a pet store:
openapi: 3.0.0
info:
  title: Pet Store API
  version: 1.0.0
paths:
  /pets:
    get:
      summary: List all pets
      responses:
        '200':
          description: A list of pets.
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: integer
                    name:
                      type: string
                    species:
                      type: string
  /pets/{petId}:
    get:
      summary: Info for a specific pet
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: Details of a specific pet.
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
                  species:
                    type: string
        '404':
          description: Pet not found.
To mock this, you’d install Prism:
npm install -g @stoplight/prism-cli
Then, run Prism with your spec file:
prism mock spec.yaml --host 127.0.0.1 --port 4010
Now, if you curl your mock server:
curl http://127.0.0.1:4010/pets
You’ll get a 200 OK response with a JSON array of fake pets, like this:
[
  {
    "id": 5270720,
    "name": "Judd",
    "species": "dog"
  },
  {
    "id": 3930942,
    "name": "Gus",
    "species": "cat"
  }
]
What about the 404? A mock server has no real data, so Prism can’t know that pet 999999 “doesn’t exist” — by default it serves the lowest-numbered 2xx response defined for the matched operation. To exercise the 404 you defined in your spec, request it explicitly with the Prefer header:
curl -H "Prefer: code=404" http://127.0.0.1:4010/pets/999999
Prism then returns a 404 Not Found, built from your spec’s defined error response.
The problem Prism solves is the classic "backend not ready" or "backend too slow" issue during frontend development or API integration testing. Instead of waiting for a real API, which might be in development, deployed to a staging environment, or simply not available, you can spin up a local, fast, and predictable mock server. This decouples your development workflow, allowing teams to work in parallel.
Internally, Prism parses your OpenAPI or similar specification. It builds an in-memory representation of your API’s paths, methods, request parameters, and response schemas. When a request hits the mock server, Prism matches the incoming HTTP method and path against its parsed spec. If a match is found, it then looks at the query parameters, path parameters, and request body (if applicable) to determine which response to generate. For 200 OK responses, it uses the defined schema to generate realistic-looking fake data. For error responses (like 404, 400, etc.), it returns the specified status code and an empty or example response body if provided in the spec.
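Prism’s actual router is considerably more sophisticated, but the core matching step can be sketched in a few lines of Python (the function names and the `routes` structure here are illustrative, not Prism’s real API):

```python
import re

def compile_path_template(template: str) -> re.Pattern:
    """Turn an OpenAPI path template like /pets/{petId} into a regex."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    return re.compile(f"^{pattern}$")

def match_request(method: str, path: str, routes: dict):
    """Find the spec operation matching an incoming request.

    `routes` maps (method, path-template) pairs to operation metadata.
    Returns the operation plus any extracted path parameters.
    """
    for (route_method, template), operation in routes.items():
        if route_method != method:
            continue
        match = compile_path_template(template).match(path)
        if match:
            return operation, match.groupdict()
    return None, {}

routes = {
    ("GET", "/pets"): {"summary": "List all pets"},
    ("GET", "/pets/{petId}"): {"summary": "Info for a specific pet"},
}

op, params = match_request("GET", "/pets/42", routes)
# params == {"petId": "42"}
```

Once the operation and its path parameters are in hand, the response schema attached to that operation drives what the mock returns.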
The exact levers you control are primarily within the specification itself. The structure of your OpenAPI document dictates what Prism will mock. For example, if you want Prism to generate specific example data for a 200 response, you’d add an example field within the content object for that response:
responses:
  '200':
    description: A list of pets.
    content:
      application/json:
        schema:
          type: array
          items:
            type: object
            properties:
              id:
                type: integer
              name:
                type: string
              species:
                type: string
        example:
          - id: 101
            name: Fido
            species: dog
          - id: 102
            name: Whiskers
            species: cat
When you provide this example in your spec, Prism will serve that exact JSON array for the /pets endpoint instead of generating random data, making your mock even more tailored to specific test cases.
Prism supports specifications beyond OpenAPI 3.x, including OpenAPI 2 (Swagger 2.0) and Postman Collections, making it a versatile tool. It also has capabilities for request validation, ensuring your client is sending requests that conform to the spec, and response validation, checking that responses adhere to the spec. This dual validation provides even more confidence in your API contract.
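Request validation can be pictured with a toy check like the one below (this `validate_param` helper is illustrative; Prism’s real validator covers the full JSON Schema vocabulary, headers, and request bodies):

```python
def validate_param(value: str, schema: dict) -> list:
    """Check a raw path/query parameter against a minimal schema subset.

    Returns a list of error messages; an empty list means the value is valid.
    """
    errors = []
    if schema.get("type") == "integer":
        try:
            int(value)
        except ValueError:
            errors.append(f"expected integer, got {value!r}")
    return errors

# A client calling /pets/abc violates the spec: petId must be an integer.
errors = validate_param("abc", {"type": "integer"})
# non-empty: the client sent a bad petId
```

A validating mock rejects such a request with a 4xx before any response is even generated, surfacing contract violations early.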
Beyond serving example data, Prism can also generate data based on schema types. If you don’t provide an example, Prism creates placeholder data that matches the schema: a random integer where the schema says an id is an integer, a generated word where it says a name is a string, and so on. (Run Prism with the -d/--dynamic flag to get fresh random values on every request.) This automatic generation is incredibly useful for quickly getting a functional mock without having to hand-craft every single data point.
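Schema-driven generation is conceptually simple: walk the schema and emit a value of the right shape at each node. A minimal sketch, handling only the handful of types our pet store uses (Prism itself supports the much richer JSON Schema feature set):

```python
import random
import string

def generate_from_schema(schema: dict):
    """Produce placeholder data matching a small subset of JSON Schema."""
    t = schema.get("type")
    if t == "object":
        return {name: generate_from_schema(prop)
                for name, prop in schema.get("properties", {}).items()}
    if t == "array":
        # Emit a couple of items so the array's shape is visible.
        return [generate_from_schema(schema["items"]) for _ in range(2)]
    if t == "integer":
        return random.randint(0, 10_000_000)
    if t == "string":
        return "".join(random.choices(string.ascii_lowercase, k=5))
    return None

pet_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "species": {"type": "string"},
    },
}
fake_pet = generate_from_schema(pet_schema)
```

Each run yields a different but structurally valid pet, which is exactly the behavior you see from the mock server’s generated responses.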
The next step after mocking is often integrating with a real backend or exploring more advanced mocking scenarios like dynamic data generation or stateful mocks.