Getting Started with AWS Bedrock: Invoking Amazon Titan Text Lite v1
Amazon Bedrock is a fully managed service that lets developers build and scale generative AI applications using foundation models from Amazon and other providers. In this guide, I’ll walk you through getting started with AWS Bedrock and invoking the Amazon Titan Text Lite v1 model for text generation.
Prerequisites
Before you begin, ensure that you have the following:
- An AWS account with access to Amazon Bedrock (for testing, we’ll use the AmazonBedrockFullAccess managed policy); you can verify model access with the check shown after this list
- AWS CLI installed and configured with appropriate permissions
- Boto3 (AWS SDK for Python) installed on your machine
You can install Boto3 using:
pip install boto3
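Note that Amazon Bedrock also requires you to request access to individual models in the console before you can invoke them. As a quick sanity check (a minimal sketch; the byProvider filter simply narrows the list to Amazon’s own models), you can list the foundation models visible to your account:
import boto3

# Use the Bedrock control-plane client ("bedrock", not "bedrock-runtime") to list models
bedrock_mgmt = boto3.client("bedrock", region_name="us-east-1")

models = bedrock_mgmt.list_foundation_models(byProvider="Amazon")
for summary in models["modelSummaries"]:
    print(summary["modelId"])
If amazon.titan-text-lite-v1 shows up in the output, you’re good to proceed.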
Step 1: Set Up AWS Credentials
If you haven’t already configured your AWS credentials, run:
aws configure
Enter your AWS Access Key ID, Secret Access Key, and your preferred region where Amazon Bedrock is available (e.g., us-east-1).
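To confirm that Boto3 can see the credentials you just configured, a quick check (assuming the default profile and the standard credential chain) is to ask STS who you are:
import boto3

# Prints the ARN of the identity your credentials resolve to
print(boto3.client("sts").get_caller_identity()["Arn"])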
Step 2: Initialize the Bedrock Client
To interact with Amazon Bedrock, we need to initialize the AWS Bedrock runtime client using Boto3:
import boto3
import json
# Initialize Bedrock client
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
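If your credentials live under a named profile rather than the default one, you can build the same client from a boto3 Session (the profile name below is just a placeholder):
import boto3

# "bedrock-dev" is a hypothetical profile name; replace it with your own
session = boto3.Session(profile_name="bedrock-dev")
bedrock = session.client("bedrock-runtime", region_name="us-east-1")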
Step 3: Invoke Amazon Titan Text Lite v1
Let’s create a simple script to invoke Amazon Titan Text Lite v1 for generating a text response.
# Define the input text
question = "What is the capital of India?"

# Prepare the payload
payload = {
    "inputText": question,
    "textGenerationConfig": {
        "maxTokenCount": 100,
        "temperature": 0.5,
        "topP": 0.9
    }
}

# Invoke Titan Text Lite v1
response = bedrock.invoke_model(
    modelId="amazon.titan-text-lite-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(payload)
)

# Parse response
result = json.loads(response["body"].read().decode("utf-8"))

# Extract and print the output text
if "results" in result and isinstance(result["results"], list):
    print("Answer:", result["results"][0]["outputText"].strip())
else:
    print("Unexpected response format:", result)
Step 4: Running the Script
Save the script as invoke_bedrock.py and run it using:
python invoke_bedrock.py
Expected Output:
Answer: New Delhi is the capital of India. It is situated in the country's federal district, which is known as the National Capital Territory of Delhi (NCT), and is located in the Indian subcontinent.
Step 5: Fine-tuning Model Parameters
Amazon Titan models let you tune temperature and topP to vary the responses:
- temperature: Controls randomness (lower = more deterministic, higher = more creative)
- topP: Controls nucleus sampling, i.e., the cumulative probability mass the model samples from (higher = more diverse responses)
Adjust these values in the textGenerationConfig section for different results, as in the sketch below.
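To see the effect in practice, here’s a minimal sketch that wraps the invocation in a helper and compares two temperature settings (the generate helper and its defaults are my own, not part of the Bedrock API):
import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate(prompt, temperature=0.5, top_p=0.9, max_tokens=100):
    # Build the same Titan payload as above, with tunable generation parameters
    payload = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": top_p
        }
    }
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-lite-v1",
        contentType="application/json",
        accept="application/json",
        body=json.dumps(payload)
    )
    result = json.loads(response["body"].read())
    return result["results"][0]["outputText"].strip()

# Lower temperature -> more deterministic phrasing; higher -> more variation
for temp in (0.1, 0.9):
    print(f"temperature={temp}:", generate("Describe New Delhi in one sentence.", temperature=temp))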
Conclusion
You have successfully invoked the Amazon Titan Text Lite v1 model using AWS Bedrock! You can now integrate this into your applications for chatbots, summarization, and content generation.
Happy coding! 🚀