Traveling Coderman

Node.js Microservice Architecture: Testing Knex SQL queries with Jest and testcontainers

In the previous post, we created a module with queries to the Postgres database.

import { v4 as uuid } from "uuid";
import { db } from "../../database/db";
import { Todo, TodoId } from "./todo.type";

export function getTodo(id: TodoId): Promise<Todo | undefined> {
  return db.table<Todo>("todos").where("id", id).first<Todo>();
}

export function getTodos(): Promise<Todo[]> {
  return db.table<Todo>("todos").select("*");
}

export async function createTodo(todo: Omit<Todo, "id">): Promise<Todo> {
  const todoWithId = {
    ...todo,
    id: uuid(),
  };
  await db.table<Todo>("todos").insert(todoWithId);
  return todoWithId;
}

export async function updateTodo(
  id: TodoId,
  todo: Omit<Todo, "id">
): Promise<Todo | "NotFound"> {
  const todoWithId = {
    ...todo,
    id,
  };
  const changedRowCount = await db
    .table<Todo>("todos")
    .where("id", id)
    .update(todoWithId);
  return changedRowCount === 0 ? "NotFound" : todoWithId;
}

export async function deleteTodo(id: TodoId): Promise<"Ok" | "NotFound"> {
  const deletedRowCount = await db
    .table<Todo>("todos")
    .where("id", id)
    .delete();
  return deletedRowCount === 0 ? "NotFound" : "Ok";
}

In this post, we create test cases that check that the SQL queries executed by these functions work as expected and that they work with the database schema. We configure the test suite to automatically spawn a single Postgres database, make the test cases and the underlying application connect to it, and tear it down once the test cases have run.

Available Options

Before we delve into the implementation, let's take a look at the options and their pros and cons.

In our setup, we have tests that require a database and we have tests that don't require a database. Ideally, the tests that don't require a database should be in the majority.

We have several options:

  • One database per test file: Each test file requiring a database sets up and tears down a dedicated database. Great isolation, but the performance does not scale and many Docker containers are spawned. On the bright side, if you run npm run test -- --watch and no database test is affected, then no database is spawned.
  • Single database, started on-demand: The first test file requiring a database sets it up. Better performance, worse isolation. It is unclear how to share the connection details and where and when to tear the database down, and database setup errors appear in unexpected places.
  • Single database, always started: Before all test files, a single database is set up globally and torn down afterwards. Better performance, worse isolation. A database is always spawned, regardless of whether any database test is executed.

Note: These are the options if we don't separate database tests from other unit tests. I believe it is best not to separate them and instead to have a single command that runs all checks. This command runs identically locally and on CI. Any separation would lead to some checks running less frequently, which undermines their purpose of catching defects early.

In this post, I present how to achieve the setup with a single database that is always started. I believe it is the best compromise between isolation and performance.

Installing Testcontainers

We use testcontainers to spawn Docker containers. Its npm package allows us to spawn and configure the containers from TypeScript code.

npm install --save-dev testcontainers

First test case

Before we configure the setup of the database, we create a first test case in a file src/controllers/todos/todo.dao.spec.ts. If we ask for the todos in an empty database, then an empty list of todos should be returned.

import { getTodos } from "./todo.dao";

describe("the todo dao", () => {
  it("retrieves no todos if the database is empty", async () => {
    await expect(getTodos()).resolves.toEqual([]);
  });
});

Right now it fails, since there is no database to connect to.

Note: If you want to jump directly to the complete working code, check out the GitHub repo. In this post, there are a lot of incremental changes to the files, and the snippets you see are still extended in later sections.

Global setup

Jest provides an option in the jest.config.js to run a global setup script before the execution of all tests. If you run npm run test, then it runs exactly once before all tests. If you run npm run test -- --watch, then it runs again before the tests on each file system change.

process.env = Object.assign(process.env, {
  DISABLE_LOGGER: "true",
});

module.exports = {
  // ...
  globalSetup: "<rootDir>/src/database/db-test.setup.ts",
};

In this script src/database/db-test.setup.ts, we spawn the database container.

import { GenericContainer, StartedTestContainer } from "testcontainers";

function spawnDatabase(): Promise<StartedTestContainer> {
  return new GenericContainer("postgres:14")
    .withEnv("POSTGRES_USER", "postgres")
    .withEnv("POSTGRES_DB", "postgres")
    .withEnv("POSTGRES_PASSWORD", "secret")
    .withExposedPorts(5432)
    .withTmpFs({ "/temp_pgdata": "rw,noexec,nosuid,size=65536k" })
    .start();
}

module.exports = spawnDatabase;

We specify the image name and version postgres:14, as it is defined on Docker Hub. We pass environment variables to the Postgres Docker container. The standard port 5432 of a Postgres database is exposed on a random port on localhost.

Note: It's always a random port on localhost so that the tests do not depend on a specific port being free. However, this is also the reason why sharing the random port with the tests and the application is complicated.

We also configure the docker container to use a temporary file system /temp_pgdata for the storage of the database data.

Sharing the database port

With this global setup, we have a database container running during the tests. However, since it is exposed at a random port on localhost, the tests and the application cannot find it. The application still expects the database to be on port 5432.

npm run test

● the todo dao › retrieves no todos if the database is empty

connect ECONNREFUSED 127.0.0.1:5432

Sharing this random port is tricky.

The Jest global setup does not have a return value that is accessible in the tests. It even runs in a separate node environment. That means that global variables written in the Jest global setup are not accessible in the tests.

To overcome this challenge, we use a workaround that Puppeteer also uses: We write the database port to a temporary file and read it from there in the tests.

This workaround requires us to take multiple steps:

  • We need to write the port to a temporary file.
  • We need to introduce a different configuration that reads the database port from this file.
  • We need to configure Jest to start with that configuration.

Writing the database port to a temporary file

In the Jest global setup db-test.setup.ts, we create a second function shareDatabaseConfig that is executed after the spawning of the database.

import { GenericContainer, StartedTestContainer } from "testcontainers";
import { mkdir, writeFile } from "fs/promises";
import os from "os";
import path from "path";

function spawnDatabase(): Promise<StartedTestContainer> {
  // ...
}

async function shareDatabaseConfig(container: StartedTestContainer) {
  const variablesDir = path.join(
    os.tmpdir(),
    "jest_testcontainers_global_setup"
  );
  await mkdir(variablesDir, { recursive: true });
  await writeFile(
    path.join(variablesDir, "databasePort"),
    container.getMappedPort(5432).toString()
  );
}

async function setupDatabase(): Promise<void> {
  const container = await spawnDatabase();
  await shareDatabaseConfig(container);
}

module.exports = setupDatabase;

In this function shareDatabaseConfig, we first create a directory jest_testcontainers_global_setup in the operating system's temporary directory. Afterwards, we create a file jest_testcontainers_global_setup/databasePort and fill it with the randomly assigned port of the database.

Note: In case you are unfamiliar with why there are two ports, 5432 and a random one: The Postgres container expects traffic on port 5432. Docker allows mapping this container port to a different host port, so that you can potentially run multiple databases at once.
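The two halves of this file-based handshake can be sanity-checked in isolation. A minimal sketch, using synchronous fs calls for brevity; the directory and file names match the ones used above, while the function names writePort and readPort are illustrative:

```typescript
import { mkdirSync, writeFileSync, readFileSync } from "fs";
import os from "os";
import path from "path";

// The location that the global setup and the test config agree on
const variablesDir = path.join(os.tmpdir(), "jest_testcontainers_global_setup");
const portFile = path.join(variablesDir, "databasePort");

// Global-setup side: persist the randomly mapped port
function writePort(port: number): void {
  mkdirSync(variablesDir, { recursive: true });
  writeFileSync(portFile, port.toString());
}

// Test-config side: read the port back, undefined if the file is empty
function readPort(): number | undefined {
  const raw = readFileSync(portFile, "utf8");
  return raw ? parseInt(raw, 10) : undefined;
}
```

Because both sides derive the path from os.tmpdir(), the handshake works even though Jest runs the global setup and the tests in separate node environments.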

Introducing a test configuration

We only want to read that port in test cases, not during local execution and not in production. Therefore, we introduce a new environment test in the config type config.type.ts besides the already existing environments local and production.

// ...

export type Environment =
  // The service running in a production cluster available for customers
  | "production"
  // The service running locally on a development machine
  | "local"
  // The service running with Jest executing tests
  | "test";

export interface Config {
  environment: Environment;
  // ...
}

// ...

Then, we can return a different config for this environment in the file get-config.ts.

import { Config, Environment, ProcessVariables } from "../config.type";
import { getTestConfig } from "./get-test.config";
import { getLocalConfig } from "./get-local.config";
import { getProductionConfig } from "./get-production.config";

export function getConfig(processVariables: ProcessVariables): Config {
  const environment: Environment = processVariables.ENV || "local";
  switch (environment) {
    case "production":
      return getProductionConfig(processVariables);
    case "local":
      return getLocalConfig(processVariables);
    case "test":
      return getTestConfig(processVariables);
  }
}

In the file get-test.config.ts, we read the database port from the temporary file.

import path from "path";
import os from "os";
import { Config, ProcessVariables } from "../config.type";
import { readFileSync } from "fs";

function readDatabasePort(): number | undefined {
  const variablesDir = path.join(
    os.tmpdir(),
    "jest_testcontainers_global_setup"
  );
  const port = readFileSync(path.join(variablesDir, "databasePort"), "utf8");
  return port ? parseInt(port) : undefined;
}

export function getTestConfig(processVariables: ProcessVariables): Config {
  return {
    environment: "test",
    logLevel: processVariables.LOG_LEVEL ?? "info",
    authentication: {
      enabled: false,
      jwksUrl: "<jwksUrl> is not set in a test environment",
    },
    database: {
      user: "postgres",
      host: "localhost",
      database: "postgres",
      port: readDatabasePort(),
      password: "secret",
      ssl: false,
    },
  };
}

Now, we can configure Jest in the jest.config.js to use the test environment instead of the local environment.

process.env = Object.assign(process.env, {
  DISABLE_LOGGER: "true",
  ENV: "test",
});

module.exports = {
  // ...
};

Migrating the database schema

If we run the tests with npm run test, the database can now be accessed. However, Postgres complains about the non-existing table todos.

npm run test

● the todo dao › retrieves no todos if the database is empty

expect(received).resolves.toEqual()

Received promise rejected instead of resolved
Rejected to value: [error: select * from "todos" - relation "todos" does not exist]

We need to run the database schema migrations before the tests are executed. In a file src/database/db-test.functions.ts we create a function migrateDatabase.

import { db } from "./db";

export async function migrateDatabase() {
  await db.migrate.latest({ directory: "./src/database/migrations" });
}

This function allows us to put an expressive statement beforeAll(migrateDatabase) into each test file requiring the database. Only the first call actually migrates the database. In the other cases, Knex discovers that the migrations already took place and skips the schema migration.

import { migrateDatabase } from "../../database/db-test.functions";
import { getTodos } from "./todo.dao";

describe("the todo dao", () => {
  beforeAll(migrateDatabase);

  it("retrieves no todos if the database is empty", async () => {
    await expect(getTodos()).resolves.toEqual([]);
  });
});
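Knex's built-in idempotency makes repeated beforeAll(migrateDatabase) calls safe. If you later add setup steps that are not idempotent on their own, a small memoizing helper can enforce the same at-most-once behavior explicitly. A sketch, where once is a hypothetical utility and not part of the post's code:

```typescript
// Wraps an async function so that all calls share the first invocation's promise
function once<T>(fn: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (cached === undefined) {
      cached = fn();
    }
    return cached;
  };
}

// Usage: every test file can call setupOnce in beforeAll,
// but the expensive work runs a single time per Jest worker
const setupOnce = once(async () => {
  // e.g. seed reference data that the migrations do not cover
});
```

Note that Jest workers are separate processes, so the memoization is per worker, not global across the whole test run.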

Note: Ideally, we would extend the Jest global setup to migrate the database schema after the database is spawned. However, I was not able to get the migrations to run in the global setup due to some module import errors. It seems like the Jest environment does not like something about the module format of Knex or about how Knex dynamically loads the TypeScript migration scripts from the source code. None of the fixes I found on the web worked in this setup with Knex running in Jest. If you know how to fix it, let me know on Mastodon @tcoderman@techhub.social and I will improve this section, referring to you.

Adding more tests

Now that the tables are created, we can add more tests.

import { omit } from "lodash/fp";
import { migrateDatabase } from "../../database/db-test.functions";
import {
  createTodo,
  deleteTodo,
  getTodo,
  getTodos,
  updateTodo,
} from "./todo.dao";
import { createStubTodo } from "./todo.stub";
import { v4 as uuid } from "uuid";

describe("the todo dao", () => {
  beforeAll(migrateDatabase);

  it("retrieves no todos if the database is empty", async () => {
    await expect(getTodos()).resolves.toEqual([]);
  });

  it("inserts a todo and retrieves it", async () => {
    const todo = omit("id", createStubTodo());
    const createdTodo = await createTodo(todo);
    expect(createdTodo).toHaveProperty("id");
    await expect(getTodos()).resolves.toEqual([createdTodo]);
    await expect(getTodo(createdTodo.id)).resolves.toEqual(createdTodo);
  });

  it("updates an existing todo", async () => {
    const toBeCreatedTodo = omit("id", createStubTodo());
    const createdTodo = await createTodo(toBeCreatedTodo);
    const toBeUpdatedTodo = omit("id", createStubTodo());
    const updatedTodo = await updateTodo(createdTodo.id, toBeUpdatedTodo);
    await expect(getTodos()).resolves.toEqual([updatedTodo]);
    await expect(getTodo(createdTodo.id)).resolves.toEqual(updatedTodo);
  });

  it("deletes an existing todo", async () => {
    const toBeCreatedTodo = omit("id", createStubTodo());
    const createdTodo = await createTodo(toBeCreatedTodo);
    await expect(deleteTodo(createdTodo.id)).resolves.toEqual("Ok");
    await expect(getTodos()).resolves.toEqual([]);
    await expect(getTodo(createdTodo.id)).resolves.toEqual(undefined);
  });

  it("does not find a non-existing todo on update", async () => {
    const todo = omit("id", createStubTodo());
    await expect(updateTodo(uuid(), todo)).resolves.toEqual("NotFound");
  });

  it("does not find a non-existing todo on delete", async () => {
    await expect(deleteTodo(uuid())).resolves.toEqual("NotFound");
  });
});

On execution, we quickly notice that the tests affect each other. The database is not cleaned up after each test.

Cleaning the tables

In the file db-test.functions.ts, we create a function truncateTables that cleans all tables. A truncate is a fast delete of all entries in a table.

import { db } from "./db";

export async function migrateDatabase() {
  // ...
}

export async function truncateTables() {
  await db.table("todos").truncate();
}

In each test file, we can call this function after each test.

beforeAll(migrateDatabase);
afterEach(truncateTables);

Note: The table name todos is hardcoded in the function truncateTables. In a more complex application, you might want to go for a more sophisticated approach to avoid forgetting to clean some tables.
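One such approach is to truncate all user tables in a single statement. A sketch of the statement-building part, where buildTruncateStatement is a hypothetical helper; with Knex, the table names could for example come from db.select("tablename").from("pg_tables").where("schemaname", "public"), and the statement could then be executed with db.raw(...):

```typescript
// Builds one TRUNCATE statement covering all given tables.
// RESTART IDENTITY resets sequences, CASCADE follows foreign key references.
function buildTruncateStatement(tableNames: string[]): string {
  if (tableNames.length === 0) {
    throw new Error("no tables to truncate");
  }
  const quoted = tableNames.map((name) => `"${name}"`).join(", ");
  return `TRUNCATE TABLE ${quoted} RESTART IDENTITY CASCADE`;
}
```

When collecting the table names, remember to exclude Knex's own bookkeeping tables knex_migrations and knex_migrations_lock, otherwise the next migrateDatabase call would rerun all migrations from scratch.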

Shutting down the database

If you run npm run test, then a special Docker container ryuk from testcontainers takes care of shutting down the database container once the node environment has shut down (that is, once the command npm run test has terminated).

However, if you run npm run test -- --watch, then the node environment never shuts down. You can observe this by running watch docker ps while npm run test -- --watch is running.

Still, Jest runs the global setup on each file system change and therefore spawns more and more databases. To avoid consuming more and more resources with a long-running npm run test -- --watch locally, we shut down the database in the global Jest teardown script.

// ...

module.exports = {
  // ...
  globalSetup: "<rootDir>/src/database/db-test.setup.ts",
  globalTeardown: "<rootDir>/src/database/db-test.teardown.ts",
};

In the Jest global setup db-test.setup.ts, we share the spawned container to a global variable.

import { GenericContainer, StartedTestContainer } from "testcontainers";
import { mkdir, writeFile } from "fs/promises";
import os from "os";
import path from "path";

function spawnDatabase(): Promise<StartedTestContainer> {
  // ...
}

async function shareDatabaseConfig(container: StartedTestContainer) {
  // ...
}

function shareContainerForTeardown(container: StartedTestContainer) {
  (globalThis as any).__DATABASE_CONTAINER__ = container;
}

async function setupDatabase(): Promise<void> {
  const container = await spawnDatabase();
  await shareDatabaseConfig(container);
  shareContainerForTeardown(container);
}

module.exports = setupDatabase;

Wait, what? Why didn't we share the database port the same way? In contrast to the individual test cases, the global teardown runs in the same node environment as the global setup. Hence, we can access global variables from the global setup in the global teardown.
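The (globalThis as any) casts can be contained in one place. A sketch of a typed accessor pair around the global container handle; the names are illustrative and not part of the post's code:

```typescript
// The minimal shape the teardown relies on; StartedTestContainer satisfies it
interface StoppableContainer {
  stop(): Promise<unknown>;
}

const CONTAINER_KEY = "__DATABASE_CONTAINER__";

// Global-setup side: store the container on the shared global object
function shareContainer(container: StoppableContainer): void {
  (globalThis as any)[CONTAINER_KEY] = container;
}

// Teardown side: retrieve it, failing loudly if the setup never ran
function sharedContainer(): StoppableContainer {
  const container = (globalThis as any)[CONTAINER_KEY];
  if (container === undefined) {
    throw new Error("database container was not stored by the global setup");
  }
  return container as StoppableContainer;
}
```

The explicit error makes a missing or misconfigured global setup fail with a clear message instead of an opaque "cannot read property stop of undefined" in the teardown.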

In the global teardown db-test.teardown.ts, we call stop() on the started container __DATABASE_CONTAINER__.

async function teardownDatabase(): Promise<void> {
  await (globalThis as any).__DATABASE_CONTAINER__.stop();
}

module.exports = teardownDatabase;

Appendix: Date workaround

There is something weird going on with Knex and date formats passed to Postgres. It's fixable by specifying a date parser in the database file db.ts.

import { Knex } from "knex";
import { types } from "pg";
import { config } from "../configuration/config";

// 1082 is the OID of the Postgres `date` type: keep dates as plain strings
types.setTypeParser(1082, (date: string) => date);

export const db = require("knex")({
  client: "pg",
  connection: config.database,
}) as Knex;
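To see why the identity parser helps: ECMAScript parses a date-only string like "2023-03-01" as UTC midnight, so converting a Postgres date column to a JS Date can display as the previous day in timezones behind UTC. A sketch of the behavior, independent of pg:

```typescript
// The parser registered above: keep the raw ISO string from Postgres
const identityDateParser = (value: string): string => value;

// Date-only strings are parsed as UTC midnight by the Date constructor,
// so in e.g. UTC-5 this Date renders as February 28 in local time
const parsed = new Date("2023-03-01");

// The identity parser avoids the conversion entirely:
// identityDateParser("2023-03-01") stays "2023-03-01"
```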

Conclusion

We added tests for the DAO todo.dao.ts. These tests verify the SQL queries of the DAO via executing them against a real Postgres database with the latest database schema. The tests spawn one shared database, clean it after each test and shut it down after all tests.

Become a GitHub sponsor to access the code of this post as a GitHub repo.