Mocked API with API Gateway

Fabio Gollinucci
Mar 20, 2024 · 5 min read


I recently had to test a serverless solution whose main job is to call an HTTP API exposed by the client. The main problem was the absence of a test environment with resources and data separate from production.

Infrastructure schema

Application level mocking

To test this, you can act at the application level: instead of making outgoing calls, you pretend that the API was called and that it responded with certain fixed data. In practice, this can be implemented by having the test suite mock the client that makes the HTTP calls to the API.

With this approach I can test the application layer, but I’m not really exercising the HTTP client: the test only validates the kind of data exchanged, not timing or the infrastructure setup.
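
As a minimal sketch of this idea (assuming a Jest test suite and an axios-based client, neither of which is prescribed here; the module under test is hypothetical), the mock could look like this:

// product-client.test.js: mocks the HTTP client with Jest, no real call goes out
const axios = require("axios");
const { getProduct } = require("./product-client"); // hypothetical module under test

jest.mock("axios");

test("returns the product from the remote API", async () => {
  // axios is replaced with a stub that resolves with fixed data
  axios.get.mockResolvedValue({
    data: { id: "example", sku: "example", name: "Example", price: 15.49 }
  });

  const product = await getProduct("example");

  expect(axios.get).toHaveBeenCalled();
  expect(product.sku).toBe("example");
});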

Mocked API

Another useful approach was to create a clone API endpoint that always responds with fixed data. There are online services that let you record incoming calls and describe the responses, but I wanted to replicate this kind of service on AWS.

The simplest thing that comes to mind is a Lambda function, called from API Gateway, with a fixed response in the code:

Resources:
  MockedApi:
    Type: AWS::Serverless::Api
    Properties:
      Name: !Ref AWS::StackName
      StageName: api

  MockedResponseFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub "${AWS::StackName}-response"
      Runtime: nodejs16.x
      Handler: index.handler
      InlineCode: !Sub |
        exports.handler = async (event) => ({
          statusCode: 200,
          body: JSON.stringify({
            "id": "example",
            "sku": "example",
            "name": "Example",
            "description": "This is an example",
            "price": 15.49
          })
        })
      Events:
        HttpRequest:
          Type: Api
          Properties:
            RestApiId:
              Ref: MockedApi
            Method: GET
            Path: /

However, I didn’t want to add running code, both to avoid Lambda costs and to keep its limits from getting in the way of stress tests.

API Gateway integrations with AWS services, including Lambda but also DynamoDB and EventBridge, are described through the API Gateway integration and its integration responses; one of the available integration types is called “mock”. This configuration can be expressed by declaring API methods and resources, which is very verbose in my opinion, or via the DefinitionBody property, passing a Swagger definition:

Resources:
  MockedApi:
    Type: AWS::Serverless::Api
    Properties:
      Name: !Ref AWS::StackName
      StageName: api
      DefinitionBody:
        swagger: "2.0"
        info:
          title: "Mocked API"
        schemes:
          - "https"
        paths:
          /:
            get:
              produces:
                - "application/json"
              responses:
                "200":
                  description: "OK"
              x-amazon-apigateway-integration:
                type: "mock"
                requestTemplates:
                  application/json: |
                    {
                      "statusCode": 200
                    }
                responses:
                  default:
                    statusCode: "200"
                    responseTemplates:
                      application/json: |
                        {
                          "id": "example",
                          "sku": "example",
                          "name": "Example",
                          "description": "This is an example",
                          "price": 15.49
                        }

Obviously, POST, PUT and DELETE requests can be mocked as well!
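
For example, a sketch of a mocked POST method under the same DefinitionBody could look like this (the 201 status code and the response payload are purely illustrative, not taken from the setup above):

paths:
  /:
    post:
      produces:
        - "application/json"
      responses:
        "201":
          description: "Created"
      x-amazon-apigateway-integration:
        type: "mock"
        requestTemplates:
          application/json: |
            {
              "statusCode": 201
            }
        responses:
          default:
            statusCode: "201"
            responseTemplates:
              application/json: |
                {
                  "id": "example",
                  "status": "created"
                }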

By creating a stack from this template, an endpoint is created which, when contacted on the declared method, responds with a simple 200:

curl -X GET https://xxxxxxxx.execute-api.eu-west-1.amazonaws.com/api/
{
  "id": "example",
  "sku": "example",
  "name": "Example",
  "description": "This is an example",
  "price": 15.49
}

This makes it possible to test with the overhead of a real HTTP call, including the associated network usage, memory and response times.
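
An end-to-end check can now point the real HTTP client at the mocked endpoint. A rough sketch, assuming Node 18+ with its global fetch (the URL is the placeholder from above):

// check-mocked-api.js: hits the mocked API Gateway endpoint with a real HTTP call
const assert = require("node:assert");

const MOCKED_API_URL = "https://xxxxxxxx.execute-api.eu-west-1.amazonaws.com/api/";

async function main() {
  const response = await fetch(MOCKED_API_URL);
  assert.strictEqual(response.status, 200);

  const product = await response.json();
  assert.strictEqual(product.sku, "example");
  assert.strictEqual(product.price, 15.49);
  console.log("mocked API responded as expected");
}

main();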

Conditional responses

The API Gateway integration template is not hugely flexible, but it does allow you to create custom responses based on request parameters:

paths:
  /:
    get:
      produces:
        - "application/json"
        - "text/plain"
      responses:
        "200":
          description: "OK"
        "404":
          description: "Not Found"
      x-amazon-apigateway-integration:
        type: "mock"
        requestTemplates:
          application/json: |
            {
              #if( $input.params('error') == "yes" )
              "statusCode": 404
              #else
              "statusCode": 200
              #end
            }
        responses:
          "404":
            statusCode: "404"
            responseTemplates:
              text/plain: |
                'Not Found'
          default:
            statusCode: "200"
            responseTemplates:
              application/json: |
                {
                  "id": "example",
                  "sku": "example",
                  "name": "Example",
                  "description": "This is an example",
                  "price": 15.49
                }

In this example, adding the query parameter causes the request to fail:

curl -i -X GET https://xxxxxxxxx.execute-api.eu-west-1.amazonaws.com/api/?error=yes
HTTP/2 404
content-type: application/json
content-length: 12
...

'Not Found'

If you need greater flexibility in describing the response, you can always fall back to the Lambda function solution.

Method Throttling

However, these solutions do not allow you to test behaviors such as call throttling. Real HTTP APIs are limited, whether the limit is declared or not: an enterprise-grade service will probably enforce a calls-per-second limit per user (API key), while a more “domestic” service may simply be constrained by its hardware. Even an AWS service has limits imposed by the platform, some of which can be raised and others not. More simply, the limit may come from the project budget: an API with no limits would imply infinite resources and money.

API Gateway has a very useful throttling configuration that can be set with a couple of simple parameters on the API resource:

Resources:
  MockedApi:
    Type: AWS::Serverless::Api
    Properties:
      Name: !Ref AWS::StackName
      StageName: api
      MethodSettings:
        - ResourcePath: "/*"
          HttpMethod: "*"
          ThrottlingRateLimit: 10
          ThrottlingBurstLimit: 1
      DefinitionBody:
        # ...

These two settings control the overall number of requests allowed per second (ThrottlingRateLimit) and the maximum number of “concurrent” requests allowed in a short burst (ThrottlingBurstLimit). So a ThrottlingRateLimit of 10 and a ThrottlingBurstLimit of 1 mean that API Gateway will handle 10 requests per second, but only one at a time.

As a benchmark tool I used autocannon, which is very simple and immediate to use. Its parameters control the number of parallel connections (-c), the number of requests per second made by each connection (-r) and the test duration (-d), which is enough to describe a few cases. For example, with the throttling configuration set to RateLimit 1 and BurstLimit 1, I can stay under the limit with:

autocannon -c 1 -r 1 -d 5 https://xxxxxxxx.execute-api.eu-west-1.amazonaws.com/api/
...
6 requests in 5.03s, 2.7 kB read

Doubling the per-connection request rate causes requests to start failing:

autocannon -c 1 -r 2 -d 5 https://xxxxxxxx.execute-api.eu-west-1.amazonaws.com/api/
...
5 2xx responses, 5 non 2xx responses
12 requests in 5.04s, 5.07 kB read

The counts don’t quite match expectations, though: only 5 of those requests should have returned a 200 and all the others a 429, so something else is at work. This happens because API Gateway uses an algorithm called “token bucket” to decide whether a request should be throttled or not.

In short, the token bucket algorithm works like this (a small simulation follows the list):

  • Every second the system (virtually) adds tokens to the bucket, in the amount indicated by the RateLimit value.
  • Every request made to the endpoint method removes a token from the bucket; when the bucket is empty, requests are throttled.
  • The bucket can only hold a certain number of tokens, indicated by the BurstLimit value; if the bucket is full, no more tokens are added.
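
A tiny simulation of the idea (my own sketch, not API Gateway’s actual implementation) makes the observed numbers easier to reason about:

// token-bucket.js: rough simulation of the throttling behaviour described above
class TokenBucket {
  constructor(rateLimit, burstLimit) {
    this.rateLimit = rateLimit;   // tokens added each second
    this.burstLimit = burstLimit; // maximum number of tokens the bucket can hold
    this.tokens = burstLimit;     // the bucket starts full
  }

  // Called once per second: refill without exceeding the burst limit
  refill() {
    this.tokens = Math.min(this.burstLimit, this.tokens + this.rateLimit);
  }

  // Called for every request: true = allowed, false = throttled (429)
  tryConsume() {
    if (this.tokens > 0) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Simulate 5 seconds at 2 requests per second with RateLimit 1 and BurstLimit 1
const bucket = new TokenBucket(1, 1);
let allowed = 0;
let throttled = 0;
for (let second = 0; second < 5; second++) {
  if (second > 0) bucket.refill();
  for (let i = 0; i < 2; i++) {
    if (bucket.tryConsume()) {
      allowed++;
    } else {
      throttled++;
    }
  }
}
console.log({ allowed, throttled }); // { allowed: 5, throttled: 5 }

With the bucket starting full and refilling once per second, the simulation ends up with 5 allowed and 5 throttled requests, in line with the autocannon run above.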

API Usage Plan

The same throttling configuration can be applied using an API Usage Plan with a related API key, with the same results as above.

Resources:
  MockedApi:
    Type: AWS::Serverless::Api
    Properties:
      Name: !Ref AWS::StackName
      StageName: api
      Auth:
        ApiKeyRequired: true
        UsagePlan:
          CreateUsagePlan: SHARED
          Description: Throttled API key
          Throttle:
            BurstLimit: 10
            RateLimit: 1
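
Note that with ApiKeyRequired set to true, calls must now send the key in the x-api-key header, both from curl and from autocannon (via its -H flag); the key below is a placeholder:

curl -X GET -H "x-api-key: xxxxxxxxxxxxxxxxxxxx" https://xxxxxxxx.execute-api.eu-west-1.amazonaws.com/api/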

In conclusion: staying below the limit, no requests are blocked; going above the limit, the exact number of blocked requests is not entirely predictable.

Metrics and alarms

An interesting addition is monitoring errors caused by excessive concurrency via API Gateway metrics. If the mocked API is to be used as a target during end-to-end tests, it may be useful to add alarms, perhaps with notifications to alert the development team.

CloudWatch metrics for ApiGateway

The metrics to keep an eye on live in the “AWS/ApiGateway” namespace, with your stack name as the ApiName dimension.
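
As a sketch, an alarm on throttled responses (429s are counted in the 4XXError metric) could be added under Resources in the same template; the threshold and period here are just examples:

  ThrottlingAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: !Sub "${AWS::StackName}-throttling"
      Namespace: AWS/ApiGateway
      MetricName: 4XXError
      Dimensions:
        - Name: ApiName
          Value: !Ref AWS::StackName
      Statistic: Sum
      Period: 60
      EvaluationPeriods: 1
      Threshold: 10
      ComparisonOperator: GreaterThanThreshold
      TreatMissingData: notBreaching

An SNS topic can then be attached via AlarmActions to notify the team.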
