
Node.js Microservice Architecture: Knex.js schema migrations with Postgres in Node.js

We create the first database table with Knex schema migrations and make sure that we can reset the database state locally.

We want to be able to continuously evolve the database schema with fluent migrations. The library Knex bases migrations on a directory of TypeScript files. Each file describes one migration, including its rollback step. Knex manages the execution state in a dedicated migrations database table.

For Knex to find the database locally and in production, we provide it with the connection settings in a file ./src/database/knexfile.ts.

Note: I place the Knexfile at ./src/database/knexfile.ts instead of at the top level ./. This way, TypeScript considers it part of the code base, and it can import the configuration ./src/configuration/config and type definitions.
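The configuration module ./src/configuration/config is not shown in this post; it comes from an earlier part of the series. For orientation, here is a minimal sketch of what its database settings could look like. The property names and environment variables are assumptions, not the actual code of the series.

// Hypothetical sketch of ../configuration/config: the real module is
// defined in an earlier part of the series and may differ.
export const config = {
  database: {
    host: process.env.DATABASE_HOST ?? "localhost",
    port: Number(process.env.DATABASE_PORT ?? 5432),
    user: process.env.DATABASE_USER ?? "postgres",
    password: process.env.DATABASE_PASSWORD ?? "postgres",
    database: process.env.DATABASE_NAME ?? "todos",
  },
};

With that in place, the Knexfile itself looks as follows.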

import { Knex } from "knex";
import { config } from "../configuration/config";

module.exports = {
  client: "pg",
  connection: config.database,
  migrations: {
    directory: "./migrations",
  },
} as Knex.Config;

This leads to the following directory structure.

src
`-- database
    |-- db.ts
    |-- knexfile.ts
    `-- migrations
        |-- 1-<name>.ts
        |-- 2-<name>.ts
        `-- <n>-<name>.ts
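The file db.ts exports the Knex instance that the application code uses for queries (the health controller below imports it as db). It is not shown in this post either; a minimal sketch, assuming it merely instantiates Knex with the same connection settings:

import { knex } from "knex";
import { config } from "../configuration/config";

// Sketch: one shared Knex instance for the whole application.
export const db = knex({
  client: "pg",
  connection: config.database,
});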

Migrations 🔗

Knex now knows where the database is. We can write the first migration. We want the todo entities to be persisted, so we create a table todos.

import { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable("todos", (table) => {
    table.uuid("id").primary();
    table.string("name").notNullable();
    table.string("assignee").notNullable();
    table.date("dueDate").notNullable();
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable("todos");
}

We can run the migrations with a shell command.

npx knex --knexfile ./src/database/knexfile.ts migrate:latest

Knex executes all migrations that have not been successfully executed yet. If we add a second migration script later, then Knex inspects the migrations database table and only executes the second script.
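Knex also offers commands to inspect and revert migrations during local development: migrate:rollback executes the down functions of the most recent batch, and migrate:status lists completed and pending migrations.

npx knex --knexfile ./src/database/knexfile.ts migrate:rollback
npx knex --knexfile ./src/database/knexfile.ts migrate:status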

Note: In production, this migration command should be run on each release. If you run multiple replicas of the application, the migration should run only once, not once per replica. The specific configuration depends on your runtime technology.

Seeds 🔗

Locally, it is helpful to populate the database with example data. This makes it easier to experiment with the application. To achieve this, Knex offers seed scripts.

In the Knexfile ./src/database/knexfile.ts, we tell Knex where to find the seed scripts.

import { Knex } from "knex";
import { config } from "../configuration/config";

module.exports = {
  client: "pg",
  connection: config.database,
  migrations: {
    directory: "./migrations",
  },
  seeds: {
    directory: "./seeds",
  },
} as Knex.Config;

In a file ./src/database/seeds/todos.ts, we insert three todos into the empty table.

import { Knex } from "knex";
import { Todo } from "../../controllers/todos/todo.type";

const todos: Todo[] = [
  {
    id: "142e8446-f5ba-4813-bfb5-f3192a37f1bf",
    name: "Create Todo",
    assignee: "Naomi",
    dueDate: "2025-01-05",
  },
  {
    id: "bf6091cc-69f8-4a9e-93ed-0294c3a8ac2f",
    name: "Delete Todo",
    assignee: "Peter",
    dueDate: "2029-06-11",
  },
  {
    id: "6e5bad4c-7f1c-4ab4-9361-c6bd697b4257",
    name: "Change Todo",
    assignee: "Sheila",
    dueDate: "2032-11-27",
  },
];

export async function seed(knex: Knex): Promise<void> {
  await knex("todos").truncate();
  await knex("todos").insert(todos);
}

Similar to the migrations, we can execute the seed scripts.

npx knex --knexfile ./src/database/knexfile.ts seed:run
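Knex executes every file in the seeds directory on each seed:run, so seed scripts should be safe to re-run; the truncate call above takes care of that. A new seed file can be scaffolded with seed:make.

npx knex --knexfile ./src/database/knexfile.ts seed:make <name>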

Utility scripts 🔗

The commands with the custom Knexfile ./src/database/knexfile.ts are rather long. Therefore, we create scripts in the package.json to migrate and seed. Additionally, we provide scripts to reset the local database to a default state.

There are three steps in a reset of the local database:

  • Wipe: Delete the existing database Docker volume
  • Migrate: Create the tables
  • Seed: Populate the example data

{
  // ...
  "scripts": {
    // ...
    "db:wipe": "docker-compose down; docker volume rm nodemicroservicearchitecture_database-data; docker-compose up -d",
    "db:migrate": "npx knex --knexfile ./src/database/knexfile.ts migrate:latest",
    "db:seed": "npx knex --knexfile ./src/database/knexfile.ts seed:run",
    "db:reset": "npm run db:wipe && npm run db:migrate && npm run db:seed"
  }
  // ...
}

The script db:wipe removes the Docker volume, which is only possible after the Postgres database has been shut down with docker-compose down. Afterwards, it spins the database up again with docker-compose up -d.

Note: The volume name nodemicroservicearchitecture_database-data is assembled by Docker Compose and might change between versions. You might need to adjust it after a Docker upgrade.
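With these scripts in place, bringing the local database back to a known state with tables and example data is a single command.

npm run db:reset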

Accessing the new table 🔗

The new table todos is created with the first migration. We can change the health endpoint to access this table and verify that it was created.

import { NextFunction, Request, Response } from "express";
import { db } from "../database/db";

export async function healthController(
  _request: Request,
  response: Response,
  next: NextFunction
): Promise<void> {
  try {
    await db.from("todos").select("id").limit(1);
    response.sendStatus(204);
  } catch (error) {
    next(error);
  }
}
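Beyond the health check, the same db instance can be used for further queries against the new table. As a sketch (not part of the post's code base), a small helper to check that the seed data arrived:

import { db } from "../database/db";

// Sketch: count the rows in the todos table, e.g. after running db:seed.
export async function countTodos(): Promise<number> {
  const [row] = await db("todos").count("id");
  return Number(row.count);
}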

Conclusion 🔗

We have set up the first Postgres database table with Knex schema migrations. New migrations can be added and executed with scripts from the package.json. The local setup includes example data and allows for a one-step database reset.

Become a GitHub sponsor to access the code of this post as a GitHub repo.