Telegram bot powered by Amazon Bedrock

Fabio Gollinucci
5 min read · Nov 28, 2023


The time has come: the public launch of Amazon Bedrock finally lets me play with ML in a professional way, without requiring deep knowledge of the topic.

The following is a serverless integration that connects a Telegram bot with Amazon Bedrock.

Infrastructure schema

Setup

I started by requesting access to some models from the web console.

Model access list
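
Model access can also be verified from code: the control-plane SDK (@aws-sdk/client-bedrock, a separate package from the runtime client used later) can list the foundation models available in the account and region. A minimal sketch:

import { BedrockClient, ListFoundationModelsCommand } from '@aws-sdk/client-bedrock'

const bedrock = new BedrockClient()

// list the foundation models visible in this account/region
const { modelSummaries } = await bedrock.send(new ListFoundationModelsCommand({}))
for (const model of modelSummaries) {
  console.log(model.providerName, model.modelId)
}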

I spent some time playing around in the playground, asking questions and tuning settings.

Chat playground

Chat bot logic

As the next step, I added some logic on top of my bot template daaru00/aws-telegram-bot-connector, using a Step Functions state machine triggered by incoming messages.

MessageReceivedStateMachine:
  Type: AWS::Serverless::StateMachine
  Properties:
    Name: !Ref AWS::StackName
    DefinitionUri: states/message.asl.yaml
    DefinitionSubstitutions:
      EventBusName: !Ref MessagesEventBus
      EventSource: !Ref AWS::StackName
      BedrockIntegrationFunctionArn: !GetAtt BedrockIntegrationFunction.Arn
    Policies:
      - EventBridgePutEventsPolicy:
          EventBusName: !Ref MessagesEventBus
      - LambdaInvokePolicy:
          FunctionName: !Ref BedrockIntegrationFunction
    Events:
      ReceivedEvent:
        Type: EventBridgeRule
        Properties:
          EventBusName: !Ref MessagesEventBus
          Pattern:
            source:
              - !Ref AWS::StackName
            detail-type:
              - 'Webhook Event Received'
            detail:
              message:
                from:
                  username: !Ref UsernameWhitelist
The referenced definition file (states/message.asl.yaml) implements the flow in three steps:

StartAt: SendTyping
States:
  SendTyping:
    Type: Task
    Resource: 'arn:aws:states:::events:putEvents'
    Parameters:
      Entries:
        - EventBusName: ${EventBusName}
          Source: ${EventSource}
          DetailType: Send Chat Action
          Detail:
            chat_id.$: $.detail.message.chat.id
            action: typing
    ResultPath: null
    Next: GetResponse

  GetResponse:
    Type: Task
    Resource: '${BedrockIntegrationFunctionArn}'
    Parameters:
      text.$: $.detail.message.text
    ResultPath: $.response
    Next: SendResponse

  SendResponse:
    Type: Task
    Resource: 'arn:aws:states:::events:putEvents'
    Parameters:
      Entries:
        - EventBusName: ${EventBusName}
          Source: ${EventSource}
          DetailType: Send Message
          Detail:
            chat_id.$: $.detail.message.chat.id
            text.$: $.response
    ResultPath: null
    End: true

The state machine's first step sends a “typing…” indicator back to the user, to keep them engaged while the response is being generated.
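
The Telegram API call itself is performed by the connector template when it consumes the “Send Chat Action” event; under the hood it presumably boils down to something like this sketch (the BOT_TOKEN environment variable and the handler shape are assumptions, not taken from the template):

// hypothetical consumer of the "Send Chat Action" event:
// it forwards the action to the Telegram Bot API sendChatAction endpoint
export async function handler ({ detail }) {
  await fetch(`https://api.telegram.org/bot${process.env.BOT_TOKEN}/sendChatAction`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      chat_id: detail.chat_id, // as set by the SendTyping state
      action: detail.action // 'typing'
    })
  })
}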

Next, I use a Lambda function to interact with the ML models through the Bedrock APIs.

BedrockIntegrationFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Ref AWS::StackName
    Handler: bedrock.handler
    MemorySize: 512
    Timeout: 20
    Environment:
      Variables:
        INSTRUCTIONS: !Ref BotInstructions
        HISTORY_BUCKET: !Ref ChatHistoryBucket
    Policies:
      - S3CrudPolicy:
          BucketName: !Ref ChatHistoryBucket
      - Statement:
          - Effect: "Allow"
            Action: "bedrock:InvokeModel"
            Resource: "*"

At the moment, the version of the AWS SDK bundled in the Lambda runtime does not yet include the Bedrock client, so you need to install the package:

npm install @aws-sdk/client-bedrock-runtime

The Bedrock Runtime SDK is quite easy to use: it requires the id of the model to invoke and the payload to send.

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime'

const bedrock = new BedrockRuntimeClient()

export async function handler ({ text }) {
  // invoke the AI21 Jurassic-2 Mid model with the user's message as prompt
  let { body } = await bedrock.send(new InvokeModelCommand({
    modelId: 'ai21.j2-mid-v1',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      'prompt': text
    })
  }))

  // the response body is a byte array containing a JSON document
  body = JSON.parse(Buffer.from(body).toString())
  return body.completions[0].data.text
}

The payload sent to the model contains the “question” asked to the model; this operation is called “inference”. The parameters differ between models; consult the AWS documentation for more information.
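
For example, Anthropic Claude expects a differently shaped body than AI21 Jurassic-2; here is a sketch based on the documented Claude text-completion format:

// Claude's text-completion format requires a Human/Assistant prompt
// and uses different parameter and response field names
const { body } = await bedrock.send(new InvokeModelCommand({
  modelId: 'anthropic.claude-v2',
  contentType: 'application/json',
  accept: 'application/json',
  body: JSON.stringify({
    prompt: `\n\nHuman: ${text}\n\nAssistant:`,
    max_tokens_to_sample: 200,
    temperature: 0.7
  })
}))

// the completion comes back in a model-specific field
const { completion } = JSON.parse(Buffer.from(body).toString())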

Providing context

After some testing I noticed poor quality in the responses. Digging around, I found that the chat playground sends the entire chat history on every model invocation, along with additional instructions that describe the bot's behavior.

In a nutshell, when the model is invoked the entire chat is sent to it, leaving it to complete the response at the bottom; in practice it works like auto-completion.

Question: Present yourself

Answer: Hello World! How can I assist you today?

Question: What's the colour of the sun?

Answer: Yellow

Question: And the sea?

Instructions: You are a friendly chatbot

Answer:

I implemented this feature by adding an S3 bucket that stores the history as a plain text file, and by saving questions and answers from the Lambda function that interacts with Bedrock.

// retrieve the previous chat history from S3, if any

let history = ''
try {
  const { Body: body } = await s3.send(new GetObjectCommand({
    Bucket: HISTORY_BUCKET,
    Key: chat_id.toString(),
  }))
  history = await readIncomingMessage(body)
  console.log(`HistoryLength: ${history.length}`)
} catch (error) {
  // a missing key simply means this is a new chat
  if (error.Code !== 'NoSuchKey') {
    throw error
  }
}
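
The readIncomingMessage helper is not shown in the article; a minimal version, assuming the S3 Body is a readable stream, could simply buffer it into a string:

// hypothetical helper: buffer the S3 response stream into a UTF-8 string
async function readIncomingMessage (stream) {
  const chunks = []
  for await (const chunk of stream) {
    chunks.push(chunk)
  }
  return Buffer.concat(chunks).toString('utf-8')
}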

// compose inference prompt parameter (LINE_END is a newline constant, e.g. '\n')

const prompt = [
  history,
  LINE_END,
  LINE_END,
  `Question: ${text}`,
  LINE_END,
  LINE_END,
  'Answer: ',
].join('')

// invoke model

let { body, contentType, $metadata } = await bedrock.send(new InvokeModelCommand({
  modelId: 'ai21.j2-mid-v1',
  contentType: 'application/json',
  accept: 'application/json',
  body: JSON.stringify({
    'prompt': prompt,
    'maxTokens': 200,
    'temperature': 0.7,
    'topP': 1,
    'stopSequences': [],
    'countPenalty': { 'scale': 0 },
    'presencePenalty': { 'scale': 0 },
    'frequencyPenalty': { 'scale': 0 }
  })
}))

// parse the response and compose the updated history

const response = JSON.parse(Buffer.from(body).toString()).completions[0].data.text

history = [
  history,
  `Question: ${text}`,
  LINE_END,
  LINE_END,
  `Answer: ${response}`,
  LINE_END,
].join('')

// update history

await s3.send(new PutObjectCommand({
  Bucket: HISTORY_BUCKET,
  Key: chat_id.toString(),
  Body: history
}))

I don’t want the history to grow too long, so I added a lifecycle rule that deletes files that haven’t been touched for more than one day (i.e. no interaction with the bot).

ChatHistoryBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Ref AWS::StackName
    PublicAccessBlockConfiguration:
      BlockPublicAcls: true
      IgnorePublicAcls: true
      BlockPublicPolicy: true
      RestrictPublicBuckets: true
    LifecycleConfiguration:
      Rules:
        - Id: delete-old-chats
          ExpirationInDays: 1
          Status: Enabled

I also reset the stored history every time the chat history is cleared on the client and the /start command is sent again. The command itself is properly handled by replacing it with a request for an introductory message.

let history = ''
if (text !== '/start') {
  try {
    const { Body: body } = await s3.send(new GetObjectCommand({
      Bucket: HISTORY_BUCKET,
      Key: chat_id.toString(),
    }))
    history = await readIncomingMessage(body)
    console.log(`HistoryLength: ${history.length}`)
  } catch (error) {
    if (error.Code !== 'NoSuchKey') {
      throw error
    }
  }
} else {
  // on /start the history stays empty and the command
  // is replaced with a request for an introduction
  text = 'Present yourself'
}

Finally, I added some information to customize the model's answers with more context about the bot itself, the user, and the current date and time.

const instructions = 'You are a chatbot, you answer questions'
const userContext = `The person you are interacting with is called ${user}`
const dateTimeContext = `Now is ${new Date()}`

// instructions and context are injected right before the final question
const prompt = [
  history,
  LINE_END,
  LINE_END,
  `Instructions: ${instructions}. ${userContext}. ${dateTimeContext}.`,
  LINE_END,
  LINE_END,
  `Question: ${text}`,
  LINE_END,
  LINE_END,
  'Answer: ',
].join('')

Now the tests started to give excellent feedback: the bot understands the context of individual messages much better and is able to adapt to the conversation.

Telegram conversation with the Bedrock model

Custom models

I haven’t yet looked closely at creating a custom model or training agents. From what I understand, it is necessary to purchase some provisioned throughput and provide a series of question/answer pairs via JSONL files in an S3 bucket:

{"input": "<prompt text>", "output": "<expected generated text>"}
{"input": "<prompt text>", "output": "<expected generated text>"}
{"input": "<prompt text>", "output": "<expected generated text>"}
