Testing Lambda functions with Vitest
Testing serverless projects locally can be a pain. Separating business logic from calls to AWS services or external endpoints reduces the need to run the entire function locally.
The first thing to do to add a test suite to your project is to install Vitest:
npm install --save-dev vitest
Having the library installed locally also makes it easier to use, thanks to the IDE's code auto-completion.
Create tests
Create a ./tests folder in the project and a simple test file inside it. The test file's extension can be .js or .mjs, and its name needs to end with .test, like “mytest.test.mjs”.
import { test, expect } from 'vitest'
test('simple test', async () => {
expect(true).toBe(true)
})
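The npm run test command used in the next step requires a test script in package.json (the complete set of useful scripts is shown at the end of the article); a minimal entry is:
"scripts": {
  "test": "vitest run"
}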
Execute the tests with npm run test:
RUN v0.34.6 /home/user/example-nodejs
✓ tests/example.test.mjs (1)
✓ simple test
Test Files 1 passed (1)
Tests 1 passed (1)
Start at 15:37:40
Duration 777ms (transform 45ms, setup 0ms, collect 24ms, tests 4ms, environment 0ms, prepare 174ms)
Lambda handler management
A Lambda function can be tested locally using different approaches:
- Testing the entire logic using locally emulated AWS services.
- Testing the entire logic providing AWS credentials to connect to a remote environment.
- Unit-testing only the “core” logic in isolation without triggering AWS APIs.
Using local or remote services does not require many code changes: with this approach the Lambda function is simply executed as is.
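For the first two approaches, the only change that may be needed is making the client configuration overridable. A minimal sketch, assuming a locally emulated DynamoDB (e.g. DynamoDB Local or LocalStack) reachable through a hypothetical DYNAMODB_ENDPOINT environment variable:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient } from "@aws-sdk/lib-dynamodb";
// DYNAMODB_ENDPOINT is a hypothetical variable: when it is set, point the client
// at the local emulator; otherwise the default AWS endpoint (and credentials) is used.
const client = DynamoDBDocumentClient.from(new DynamoDBClient(
  process.env.DYNAMODB_ENDPOINT ? { endpoint: process.env.DYNAMODB_ENDPOINT } : {}
));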
Focusing on unit tests, they require separating the core logic from the AWS API/SDK calls. This can be achieved quite simply with the AWS SDK v3, since it already separates the command from the client that executes it:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";
// create the client
const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));
// Lambda handler
export async function handler(event) {
// execute a command
const { Item: item } = await client.send(new GetCommand({
TableName: 'table-name',
Key: { key: 'object-key' }
}));
// do things with "item"
return item;
}
Now it’s necessary to extract the command execution and the remaining logic into a separate function that receives the client as a parameter:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";
// create the client
const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));
// Lambda handler
export async function handler(event) {
return customLogic(client);
}
// Custom logic handler
export async function customLogic(client) {
// execute a command
const { Item: item } = await client.send(new GetCommand({
TableName: 'table-name',
Key: { key: 'object-key' }
}));
// do things with "item"
return item;
}
Now you can import the “core logic” function and test it by replacing the AWS SDK client with a mocked version:
import { test, expect } from 'vitest'
import { customLogic } from '../src/index.mjs'
const client = { send: async () => ({ Item: {} }) } // mocked client
test('simple test', async () => {
// custom logic execution
const item = await customLogic(client)
// result evaluation
expect(item).toStrictEqual({})
})
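As a variant, Vitest's built-in vi.fn() can replace the hand-rolled object; a sketch that also asserts on the command customLogic sends:
import { test, expect, vi } from 'vitest'
import { customLogic } from '../src/index.mjs'
test('sends the expected command', async () => {
  // vi.fn() records every call, so the test can inspect the command afterwards
  const send = vi.fn().mockResolvedValue({ Item: { key: 'object-key' } })
  const item = await customLogic({ send })
  expect(item).toStrictEqual({ key: 'object-key' })
  // the GetCommand passed to send() exposes its parameters via the "input" property
  expect(send).toHaveBeenCalledTimes(1)
  expect(send.mock.calls[0][0].input).toMatchObject({ TableName: 'table-name' })
})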
AWS SDK mocking
Mocking can be simplified using the community-driven AWS SDK Client mock library.
npm install --save-dev aws-sdk-client-mock
Import the “aws-sdk-client-mock” library into the test file and provide a mock implementation for each executed command:
import { mockClient } from "aws-sdk-client-mock";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";
const client = mockClient(DynamoDBDocumentClient);
client
.on(GetCommand)
.callsFake((input) => {
const key = input.Key.key;
// do things with item's key
return { Item: {} };
})
Now let’s integrate this into the unit test:
import { test, expect } from 'vitest'
import { customLogic } from '../src/index.mjs'
import { mockClient } from 'aws-sdk-client-mock'
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb'
// create mocked clients
const client = mockClient(DynamoDBDocumentClient)
// add mocked command implementation
client
.on(GetCommand)
.callsFake(() => {
// emulate the real response; returning a promise is not required
return { Item: { key: 'object-key' } }
})
test('simple test', async () => {
const item = await customLogic(client)
expect(item).toStrictEqual({ key: 'object-key' })
})
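When several tests share the same mocked client, it is worth resetting it between tests so that registered behaviors and recorded calls don't leak from one test to another; a minimal sketch using Vitest's beforeEach:
import { beforeEach } from 'vitest'
beforeEach(() => {
  // reset() clears both the recorded calls and the registered behaviors...
  client.reset()
  // ...so the behaviors the tests rely on must be registered again
  client.on(GetCommand).callsFake(() => ({ Item: { key: 'object-key' } }))
})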
Executing tests
These tests are very simple to execute both locally and in a pipeline. Running them costs less than spinning up a Docker container, emulating an AWS service locally, or deploying via the SAM CLI (except for the sync command).
Add some useful scripts to the package.json:
"scripts": {
"test": "vitest run",
"test:report": "vitest run --outputFile test-results/junit.xml --reporter default --reporter junit",
"test:watch": "vitest"
}
- npm run test runs all tests.
- npm run test:watch runs all tests and listens for changes: when a test file is saved, it is automatically re-executed and the result is shown.
- npm run test:report runs all tests in a pipeline context and exports a test-results/junit.xml report that can be parsed by the pipeline (e.g. Bitbucket).