Client Setup
All providers use the OpenAI SDK with provider-specific headers. Choose your provider to get started.
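As an illustration, a minimal client sketch using the OpenAI Python SDK; the `base_url` and header values here are placeholders, not any specific provider's endpoint:

```python
from openai import OpenAI

# Placeholder values: substitute your provider's endpoint, API key,
# and any provider-specific headers.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.your-provider.example/v1",
    default_headers={"X-Custom-Header": "value"},
)
```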
Input File Format
Create a JSONL file with one JSON object per line. Each line represents a single request with a unique `custom_id` value; AWS Bedrock requires a minimum of 100 records per file.
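For example, a single request line in the OpenAI batch format (the model name is illustrative, and field names may differ for other providers):

```jsonl
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}}
```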
AWS Bedrock Prerequisites
Before using AWS Bedrock batch processing, ensure you have:
- S3 Bucket: For storing input and output files
- IAM Execution Role: With permissions for S3 access and Bedrock model invocation
- User Permissions: Including `iam:PassRole` to pass the execution role to Bedrock
Workflow Steps
The batch process follows these steps for all providers:
- Upload: Upload JSONL file → Get file ID
- Create: Create batch job → Get batch ID
- Monitor: Check status until complete
- Fetch: Download results
Step-by-Step Examples
1. Upload Input File
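A minimal sketch of this step with the OpenAI SDK, reusing the `client` from Client Setup; the filename `requests.jsonl` is a placeholder:

```python
# Upload the JSONL input file; purpose="batch" marks it for batch processing.
batch_input_file = client.files.create(
    file=open("requests.jsonl", "rb"),
    purpose="batch",
)
file_id = batch_input_file.id  # needed when creating the batch job
print(f"Uploaded file: {file_id}")
```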
2. Create Batch Job
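Continuing the sketch, the job is created from the returned file ID; the endpoint and 24-hour completion window follow the OpenAI batch API and may differ on other providers:

```python
# Create a batch job targeting the chat completions endpoint.
batch = client.batches.create(
    input_file_id=file_id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(f"Batch ID: {batch.id}, status: {batch.status}")
```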
3. Check Batch Status
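Retrieving the job returns its current status (see the Batch Status Reference below); a sketch:

```python
# Retrieve the current state of the batch job.
batch = client.batches.retrieve(batch.id)
print(batch.status)          # validating, in_progress, completed, or failed
print(batch.request_counts)  # completed / failed / total request counts
```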
4. Fetch Results
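Once the status is `completed`, the output file can be downloaded by ID; a sketch using the OpenAI SDK's file-content call:

```python
# Download the output file and save it locally.
if batch.status == "completed":
    content = client.files.content(batch.output_file_id)
    content.write_to_file("results.jsonl")
    # Failed requests, if any, land in a separate file (batch.error_file_id).
```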
Batch Status Reference
- `validating`: Initial validation of the batch
- `in_progress`: Processing the requests
- `completed`: All requests processed successfully
- `failed`: Batch processing failed
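These statuses can be polled in a loop; a sketch with an arbitrary interval (some providers also expose additional terminal statuses such as expired or cancelled):

```python
import time

# Poll until the batch reaches a terminal status.
while True:
    batch = client.batches.retrieve(batch.id)
    if batch.status in ("completed", "failed"):
        break
    time.sleep(60)  # batch jobs can take minutes to hours
print(f"Final status: {batch.status}")
```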
Best Practices
- File Format: Use meaningful `custom_id` values and valid JSONL format
- Error Handling: Implement proper error handling and status monitoring
- Security: Store API keys securely, use minimal IAM permissions
- AWS Bedrock Specific:
  - Minimum 100 records required in JSONL file (see the validation sketch below)
  - Verify IAM roles and S3 bucket permissions
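A hypothetical pre-flight check reflecting these practices; the function name and the `custom_id` field check are illustrative and assume the request format shown above:

```python
import json

# Hypothetical helper: check JSONL validity, unique custom_id values,
# and AWS Bedrock's 100-record minimum before uploading.
def validate_batch_file(path: str, min_records: int = 100) -> None:
    seen_ids = set()
    with open(path) as f:
        lines = [line for line in f if line.strip()]
    for i, line in enumerate(lines, start=1):
        record = json.loads(line)  # raises on invalid JSON
        custom_id = record.get("custom_id")
        if not custom_id or custom_id in seen_ids:
            raise ValueError(f"line {i}: missing or duplicate custom_id")
        seen_ids.add(custom_id)
    if len(lines) < min_records:
        raise ValueError(f"{len(lines)} records; Bedrock requires {min_records}")
```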
AWS Bedrock Permissions
User Permissions (for API calls)
These are the minimum permissions required to use the Bedrock Batch APIs. For complete official guidance, see AWS Bedrock Batch Inference Permissions.
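A sketch of a minimal identity policy along these lines; the actions are Bedrock's model invocation job APIs, and the account ID and role name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:CreateModelInvocationJob",
        "bedrock:GetModelInvocationJob",
        "bedrock:ListModelInvocationJobs",
        "bedrock:StopModelInvocationJob"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_BEDROCK_BATCH_ROLE"
    }
  ]
}
```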
Service Role Permissions (for batch execution)
The service role (`role_arn`) used for creating and executing the batch job requires both a trust relationship and a permission policy:
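Hedged sketches of both documents follow; the bucket name is a placeholder, and AWS's official guidance additionally recommends scoping the trust policy with source-account conditions.

Trust Relationship:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "bedrock.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Permission Policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET",
        "arn:aws:s3:::YOUR_BUCKET/*"
      ]
    }
  ]
}
```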