
API MOCKING

Mocking an AI Chatbot API with Blackbird

Matt Voget
July 31, 2024 | 8 min read

I’ve been developing APIs for years, and in every API project I hit the same struggle: I need mocks, but I spend too much time setting up, hosting, and maintaining them. I’d rather spend that time implementing the APIs or client code. API mocking is important: it’s useful in development, debugging, and testing, and it can power the “try-it-out” portion of an API’s docs.

As a software engineer I’ve used a variety of mocking tools and libraries, including MockServer, SoapUI, and Mock Service Worker. These tools are helpful and can be quick to set up for personal use, but they still require a lot of infrastructure and configuration to use effectively on a dev team or in CI/CD scenarios. Every software company I’ve worked at had a different home-grown way of configuring and hosting mocks for its APIs, and the hosting part was usually managed by a separate DevOps or infra team.

Blackbird is a new tool I helped create at Ambassador that, among other things, can quickly create shareable mock instances of APIs and host them in a SaaS environment. This post walks through an example of using Blackbird to set up mocking for an AI chatbot application that both consumes and implements an API.

AI Chatbot Example

Let’s assume the following scenario: I want to create an AI chatbot app that uses a microservice to consume OpenAI’s API. The microservice will host a few RESTful endpoints for configuring and sending chat requests over to OpenAI and sending back responses from the LLM. Here is a diagram of what this will look like:

Mocking an AI Chatbot API

I can already tell that I’ll want a mock server set up for the Chatbot API microservice, which will enable development and testing of the chatbot UI. But I’ll also need a mock server for the OpenAI API, which will allow me to develop and test my microservice without relying on live (and potentially expensive) requests to OpenAI’s API.

Mocking the Chatbot API

Let’s start with my microservice. As with most APIs, I like to start with an OpenAPI spec to serve as the contract for this microservice. Blackbird has its own AI-powered API spec creator that I will use as a starting point. After logging into Blackbird and selecting “Create new API”, I can create my spec using the following prompt:

I want to create an API for a microservice for my AI chatbot. I want the API to be Open API version 3.0.1. I want my API to have a post endpoint called chat that accepts a request body with a single property for a prompt and handles a response with a single property for a chat message. I also want a get endpoint called model for information about the AI model being used and a post endpoint to change models.

And here is Blackbird’s response:

{
  "openapi": "3.0.1",
  "info": {
    "description": "API for AI Chatbot Microservice",
    "title": "AI Chatbot API",
    "version": "1.0.0"
  },
  "tags": [
    { "name": "Chat" },
    { "name": "Model" }
  ],
  "paths": {
    "/chat": {
      "post": {
        "tags": ["Chat"],
        "summary": "Initiate a chat with the AI chatbot",
        "operationId": "chat",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": { "$ref": "#/components/schemas/ChatRequest" }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Successful response",
            "content": {
              "application/json": {
                "schema": { "$ref": "#/components/schemas/ChatResponse" }
              }
            }
          },
          "404": { "description": "Not Found" },
          "500": { "description": "Internal Server Error" }
        }
      }
    },
    "/model": {
      "get": {
        "tags": ["Model"],
        "summary": "Get information about the AI model being used",
        "operationId": "getModel",
        "responses": {
          "200": {
            "description": "Successful response",
            "content": {
              "application/json": {
                "schema": { "$ref": "#/components/schemas/ModelInfo" }
              }
            }
          },
          "404": { "description": "Not Found" },
          "500": { "description": "Internal Server Error" }
        }
      },
      "post": {
        "tags": ["Model"],
        "summary": "Change the AI model being used",
        "operationId": "changeModel",
        "responses": {
          "200": { "description": "Successful response" },
          "404": { "description": "Not Found" },
          "500": { "description": "Internal Server Error" }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "ChatRequest": {
        "type": "object",
        "properties": {
          "prompt": { "type": "string" }
        },
        "required": ["prompt"],
        "additionalProperties": false
      },
      "ChatResponse": {
        "type": "object",
        "properties": {
          "chatMessage": { "type": "string" }
        },
        "required": ["chatMessage"],
        "additionalProperties": false
      },
      "ModelInfo": {
        "type": "object",
        "properties": {
          "modelName": { "type": "string" },
          "modelVersion": { "type": "string" }
        },
        "required": ["modelName", "modelVersion"],
        "additionalProperties": false
      }
    }
  }
}

Perhaps I’ll make a few tweaks, but it actually produced a robust spec for me. As with most LLMs, the quality of the output depends on the quality of the input prompt, so it’s nice that Blackbird did a little prompt engineering for me to produce a valid OAS file, rather than me coaxing one out of ChatGPT.
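Since I’ll be tweaking the file by hand, a quick programmatic sanity check helps keep it valid before it lands in the repo. Here is a minimal sketch using only the Python standard library (the helper name is my own; a dedicated OpenAPI validator would be more thorough):

```python
import json

def sanity_check_spec(text: str) -> list:
    """Return a list of problems found in an OpenAPI 3.0.x JSON document."""
    try:
        spec = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    # OpenAPI 3.0.x documents declare their version in the "openapi" field
    if not str(spec.get("openapi", "")).startswith("3.0"):
        problems.append("missing or non-3.0.x 'openapi' field")
    for field in ("info", "paths"):
        if field not in spec:
            problems.append(f"missing top-level '{field}'")
    # every path key must start with a forward slash
    for path in spec.get("paths", {}):
        if not path.startswith("/"):
            problems.append(f"path {path!r} does not start with '/'")
    return problems
```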

Blackbird also gave me the ability to download my spec, which I will put into my code repo. As a bonus, it gave me the option to launch a mock server right away:


AI Chatbot API

Within seconds I see a public URL for a mock server on my chatbot API. Let’s test it out by curling the endpoints:

curl -s https://matts-org-ec914.blackbird-relay.a8r.io/ai-chatbot-api/model | jq
{
  "modelName": "in Duis ex aute adipisicing",
  "modelVersion": "ullamco"
}

curl -s \
  -H "Content-Type: application/json" \
  --request POST \
  --data '{"prompt": "Hello chatbot API!"}' \
  https://matts-org-ec914.blackbird-relay.a8r.io/ai-chatbot-api/chat | jq
{
  "chatMessage": "mollit ex nisi"
}

A couple of nice things to note: the mock URL that Blackbird provided is publicly accessible, which is great for collaboration, but Blackbird also supports locking it down with an API key in case I want to keep the mock URL internal to my team.

Now, with the Chatbot API mocked, development on the chatbot app frontend can begin while the API itself is being implemented.
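For example, the frontend’s API layer can be written and unit-tested against the contract immediately, pointed at the mock URL. Here is a sketch in Python with hypothetical helper names of my own (swap in the real service URL once it exists):

```python
import json
from urllib import request

# Base URL of the Blackbird mock from above; replace with the real service later.
CHATBOT_BASE = "https://matts-org-ec914.blackbird-relay.a8r.io/ai-chatbot-api"

def build_chat_request(prompt: str) -> bytes:
    """Encode a ChatRequest body: a single required 'prompt' string."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def parse_chat_response(body: bytes) -> str:
    """Decode a ChatResponse body: a single required 'chatMessage' string."""
    return json.loads(body)["chatMessage"]

def send_chat(prompt: str) -> str:
    """POST a prompt to /chat and return the chat message (network call)."""
    req = request.Request(
        f"{CHATBOT_BASE}/chat",
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return parse_chat_response(resp.read())
```

Keeping the encode/decode helpers separate from the network call means they can be unit-tested without hitting the mock at all.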

Mocking OpenAI’s API

Before we implement the Chatbot API, I also want to mock out the interface between the Chatbot API and the external OpenAI API. Let’s use Blackbird again, but this time from the CLI.

First, I’m going to download Blackbird’s CLI:

# 1. Create directory if it doesn't exist
sudo mkdir -p /usr/local/bin
# 2. Download the latest binary
sudo curl -fL https://storage.googleapis.com/releases.datawire.io/blackbird/v0.3.0-beta/darwin/amd64/blackbird -o /usr/local/bin/blackbird
# 3. Make the binary executable:
sudo chmod a+x /usr/local/bin/blackbird

Next I’m going to log in, which will connect my CLI to the Blackbird webapp:

blackbird login

In order to create a mock for OpenAI’s API, I will need to download their API spec file.

Once downloaded, I can spin up another mock server with the Blackbird CLI in a single command:

blackbird mock create openai -s openai_api.yaml
✔ input validated
✔ spec file validated as an OpenAPI 3.0.x spec
✔ checking existing deployments
✔ environment is ready
✔ creating application for deployment
✔ uploading specification file
✔ mock instance created
✔ mock instance is ready, URL: https://matts-org-ec914.blackbird-relay.a8r.io/openai/

Once again, the OpenAI mock server is ready in seconds with a public URL. Let’s test out the chat completion endpoint:

curl -s \
  -H "Content-Type: application/json" \
  --request POST \
  --data '{
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello!" }
    ]
  }' \
  https://matts-org-ec914.blackbird-relay.a8r.io/openai/chat/completions
{
"type": "UNAUTHORIZED",
"title": "Invalid security scheme used",
"status": 401,
"detail": "Your request does not fullfil the security requirements and no HTTP unauthorized response was found in the spec, so Spectra is generating this error for you.",
"headers": {
"WWW-Authenticate": "Bearer"
}
}

Testing this endpoint gave me back a 401 error saying my request didn’t satisfy the spec’s security requirements. Sure enough, I forgot to include an Authorization header with a bearer token (something I’ll need to remember when implementing the code that consumes this API).

curl -s \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer abc123" \
  --request POST \
  --data '{
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello!" }
    ]
  }' \
  https://matts-org-ec914.blackbird-relay.a8r.io/openai/chat/completions
{
  "choices": [
    {
      "finish_reason": "function_call",
      "index": 27121527,
      "message": {
        "role": "assistant",
        "content": "est dolore",
        "function_call": {
          "name": "veniam elit ea dolore minim",
          "arguments": "velit elit ut occaecat",
          "adipisicing_42": -74878644.6990763
        },
        "tool_calls": [
          { ... },
          ...
        ]
      },
      "logprobs": {
        "content": null
      }
    },
    {
      "finish_reason": "content_filter",
      "index": 42608321,
      "message": { ... },
      "logprobs": null,
      "est573": 33824876.70214461,
      "incididunt_1f3": 82999925.43383965
    }
  ],
  "created": 17226006,
  "id": "dolore officia dolor occaecat",
  "model": "irure adipisicing reprehenderit",
  "object": "chat.completion",
  "service_tier": "default",
  "system_fingerprint": "dolor ut",
  "usage": {
    "ad5": -10679086,
    "completion_tokens": 96235456,
    "inc0d": -845831.3006245494,
    "non_6": -10729837,
    "prompt_tokens": -12780052,
    "reprehenderit_e5a": "sint sunt",
    "sed_74_": false,
    "total_tokens": -31534344
  }
}

Much better. And looking at this mock response, I notice a few goodies:

  • There are Lorem Ipsum properties in the response, such as "nullaf": "consequat". These are added because the OpenAI spec file indicates additionalProperties: true in the Chat Completion response; the mock server added them for me automatically, which is great for testing.
  • The choices array in the response contains a varying number of elements each time I curl the endpoint. Also super useful in testing.
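Both quirks suggest that code consuming this API should parse responses defensively. Here is a hypothetical helper of my own (not part of any SDK) that tolerates extra properties and a choices array of any length:

```python
def first_assistant_message(completion: dict):
    """Return the first assistant message content in a chat completion, or None.

    Tolerates mock quirks: extra lorem-ipsum properties are ignored, and the
    choices array may contain any number of (possibly partial) elements.
    """
    for choice in completion.get("choices", []):
        message = choice.get("message") or {}
        content = message.get("content")
        if message.get("role") == "assistant" and isinstance(content, str):
            return content
    return None
```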

This mock server will now enable development on my chatbot API microservice without needing to call out to the live OpenAI API.
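Concretely, the microservice’s OpenAI-facing client can be developed against the mock first. Here is a minimal sketch of the request-building half (the helper name and placeholder key are mine; the mock only checks that a bearer token is present):

```python
import json

# Blackbird mock of the OpenAI API from above; swap for the live base URL later.
OPENAI_BASE = "https://matts-org-ec914.blackbird-relay.a8r.io/openai"

def build_completion_request(prompt: str, api_key: str, model: str = "gpt-3.5-turbo"):
    """Return (url, headers, body) for a chat completions call."""
    headers = {
        "Content-Type": "application/json",
        # The 401 above was the reminder: every request needs a bearer token.
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    })
    return f"{OPENAI_BASE}/chat/completions", headers, body
```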

Wrapping it up

Let’s use Blackbird to review the state of the APIs and mocks for my app. First, let’s see which APIs we’ve cataloged in Blackbird:

blackbird api list
✔ getting a list of available APIs
+----------------+----------------+------------------+------------+
| API NAME       | SLUG NAME      | SPEC FILE        | CREATED BY |
+----------------+----------------+------------------+------------+
| AI Chatbot API | ai-chatbot-api | chatbot_api.json | Matt       |
+----------------+----------------+------------------+------------+

This only shows the Chatbot API spec; I never uploaded the OpenAI spec into Blackbird’s catalog, since I created that mock ad hoc from the downloaded spec file. Uploading it to Blackbird is also easy:

blackbird api create openai -s openai_api.yaml
✔ openapi file loaded
✔ input validated as an OpenAPI 3.0.x spec
✔ API created
+----------+-----------+-----------------+------------+
| API NAME | SLUG NAME | SPEC FILE       | CREATED BY |
+----------+-----------+-----------------+------------+
| openai   | openai    | openai_api.yaml | Matt       |
+----------+-----------+-----------------+------------+
blackbird api list
✔ getting a list of available APIs
+----------------+----------------+------------------+------------+
| API NAME       | SLUG NAME      | SPEC FILE        | CREATED BY |
+----------------+----------------+------------------+------------+
| AI Chatbot API | ai-chatbot-api | chatbot_api.json | Matt       |
| openai         | openai         | openai_api.yaml  | Matt       |
+----------------+----------------+------------------+------------+

The benefit is that I can now re-launch a mock instance without needing a local copy of the spec; I can reference it by API name through Blackbird.

Next let’s review our mock servers:

blackbird mock list
+---------------------+------+--------+---------------------------------------------------------------------+------+
| NAME                | TYPE | STATUS | URL                                                                 | USER |
+---------------------+------+--------+---------------------------------------------------------------------+------+
| AI Chatbot API Mock | Mock | Ready  | https://matts-org-ec914.blackbird-relay.a8r.io/ai-chatbot-api-mock/ | Matt |
| openai              | Mock | Ready  | https://matts-org-ec914.blackbird-relay.a8r.io/openai/              | Matt |
+---------------------+------+--------+---------------------------------------------------------------------+------+

Both are live and serving up mock responses. As I mentioned earlier, I can use the Chatbot API mock to help start development and testing on the Chatbot UI. And I also have the OpenAI API mock to begin work on the Chatbot API. This is exactly what I need to get started building my AI chatbot app with a team of developers.

Key Takeaways

  • Blackbird gave me two easy ways to catalog and mock APIs — either from the UI or from the CLI
  • Each mock server has an out-of-the-box public URL. This is by far the biggest time saver for me; manually setting up the infra would’ve taken hours and is not something in my wheelhouse as an app developer.
  • Related to the point above, I don’t need to worry about maintaining the infra to host additional mock servers, or tear down existing ones. Blackbird does the hosting for me.
  • Blackbird’s AI helper for quickly creating an API spec is very handy. Without it, I’d be stumbling through JSON/YAML or using a clunky form editor like SwaggerHub.
  • Finally (and a somewhat subtle point), using Blackbird as a CLI tool means I can script it into things like a CI/CD pipeline. When these spec files change in a code repo, it will be easy to auto-update their mocks. I will explore this case in a future blog post.

Mocking is still something I dread, but Blackbird makes it quick and painless, especially when it comes to setting up the hosting infra. Blackbird can do plenty more beyond mocking to speed up my API development, which I plan to explore in future posts. Stay tuned!
