The Image Edit API lets you modify images using text instructions. You can edit specific parts of an image, add new elements, or extend the image beyond its original boundaries. Just provide your source image and describe what changes you want to make.

Supported Providers

  • OpenAI: Supports dall-e-2 and gpt-image-1 models
  • Vertex AI: Supports imagen-3.0-capability-001 model
  • AWS Bedrock: Supports amazon-nova-canvas model
  • Azure OpenAI: Supports gpt-image-1 model

Requirements

| Provider | Model | Format | Size Limit | Image Count |
| --- | --- | --- | --- | --- |
| OpenAI | gpt-image-1 | PNG, WebP, JPG | < 50MB | Up to 16 images |
| OpenAI | dall-e-2 | Square PNG | < 4MB | 1 image only |
| Vertex AI | imagen-3.0-capability-001 | PNG, JPG | < 20MB | 1 image only |
| AWS Bedrock | amazon-nova-canvas | PNG, JPG | < 5MB | 1 image only |
| Azure OpenAI | gpt-image-1 | PNG, WebP, JPG | < 50MB | Up to 16 images |
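The limits above can be checked client-side before making a request, which avoids a failed upload. A minimal sketch, assuming the limits in the table; `PROVIDER_LIMITS` and `validate_images` are hypothetical helpers, not part of the API:

```python
import os

# Per-model limits from the table above (bytes, allowed extensions, max images).
PROVIDER_LIMITS = {
    "gpt-image-1": {"max_bytes": 50 * 1024 * 1024, "exts": {".png", ".webp", ".jpg", ".jpeg"}, "max_images": 16},
    "dall-e-2": {"max_bytes": 4 * 1024 * 1024, "exts": {".png"}, "max_images": 1},
    "imagen-3.0-capability-001": {"max_bytes": 20 * 1024 * 1024, "exts": {".png", ".jpg", ".jpeg"}, "max_images": 1},
    "amazon-nova-canvas": {"max_bytes": 5 * 1024 * 1024, "exts": {".png", ".jpg", ".jpeg"}, "max_images": 1},
}

def validate_images(model: str, paths: list) -> None:
    """Raise ValueError if the files exceed the model's documented limits."""
    limits = PROVIDER_LIMITS[model]
    if len(paths) > limits["max_images"]:
        raise ValueError(f"{model} accepts at most {limits['max_images']} image(s)")
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        if ext not in limits["exts"]:
            raise ValueError(f"{model} does not accept {ext} files")
        if os.path.getsize(path) > limits["max_bytes"]:
            raise ValueError(f"{path} exceeds the size limit for {model}")
```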

Example Usage

OpenAI supports both gpt-image-1 (up to 16 images, PNG/WebP/JPG, <50MB each) and dall-e-2 (a single square PNG under 4MB). The parameters are described in the OpenAI API documentation.
from openai import OpenAI

BASE_URL = "https://{controlPlaneUrl}/api/llm"
API_KEY = "your-truefoundry-api-key"

# Configure OpenAI client with TrueFoundry settings
client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
)

response = client.images.edit(
    model="openai-main/gpt-image-1",
    image=[
        open("image1.png", "rb"),  # First source image
        open("image2.png", "rb")   # Second source image (up to 16 images supported)
    ],
    prompt="Replace the background with a beach scene and add palm trees on both sides",
    mask=open("mask.png", "rb")    # Optional mask to specify edit areas
)

print(response.data[0].url)
gpt-image-1 supports up to 16 images in PNG, WebP, or JPG format with a 50MB size limit per image. dall-e-2 requires a single square PNG image under 4MB.
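The examples here open files inline and never close them, which is fine for a script but leaks file handles in long-running services. One way to close every source image (and the optional mask) once the request completes is `contextlib.ExitStack`; `edit_with_files` is a hypothetical wrapper, not part of the API:

```python
from contextlib import ExitStack

def edit_with_files(client, model, prompt, image_paths, mask_path=None):
    """Call client.images.edit, closing all opened files when the call returns."""
    with ExitStack() as stack:
        images = [stack.enter_context(open(p, "rb")) for p in image_paths]
        kwargs = {"model": model, "image": images, "prompt": prompt}
        if mask_path is not None:
            kwargs["mask"] = stack.enter_context(open(mask_path, "rb"))
        return client.images.edit(**kwargs)
```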
Vertex AI’s Imagen 3.0 model supports editing a single image in PNG or JPG format (max 20MB). The parameters are described in the Google Vertex AI documentation.
from openai import OpenAI

BASE_URL = "https://{controlPlaneUrl}/api/llm"
API_KEY = "your-truefoundry-api-key"

# Configure OpenAI client with TrueFoundry settings
client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
)

response = client.images.edit(
    model="image-edit/imagen-3.0-capability-001",
    image=open("image.png", "rb"),  # REQUIRED, source image
    prompt="Replace the background with a beach scene and add palm trees on both sides",  # Optional
    mask=open("mask.png", "rb"),  # REQUIRED
    n=2,
    extra_body={
        "maskMode": "MASK_MODE_BACKGROUND",
        "maskClasses": [162, 170],  # ONLY if maskMode="MASK_MODE_SEMANTIC"
        "dilation": 0.01,
        "baseSteps": 35,
        "editMode": "EDIT_MODE_INPAINT_REMOVAL"
    }
)

print(response.data[0].b64_json)
Vertex AI supports a single image in PNG or JPG format with a maximum size of 20MB.
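Vertex AI (and Bedrock) return the edited image inline as base64 in b64_json rather than as a URL. A minimal sketch for writing it to disk; `save_b64_image` is a hypothetical helper, not part of the API:

```python
import base64

def save_b64_image(b64_json: str, out_path: str) -> None:
    """Decode a base64-encoded image payload and write it to disk."""
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(b64_json))
```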
AWS Bedrock’s Nova Canvas model supports editing a single image in PNG or JPG format (max 5MB). The parameters are described in the AWS Bedrock documentation.
from openai import OpenAI

BASE_URL = "https://{controlPlaneUrl}/api/llm"
API_KEY = "your-truefoundry-api-key"

# Configure OpenAI client with TrueFoundry settings
client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
)

response = client.images.edit(
    model="tfy-ai-bedrock/amazon-nova-canvas",
    image=open("image.png", "rb"),  # REQUIRED, source image
    prompt="Replace the background with a beach scene and add palm trees on both sides",
    # Mask image OR mask prompt is required, not both
    mask=open("mask.png", "rb"),  # Mask image to specify edit areas
    extra_body={
        "taskType": "OUTPAINTING",            # Optional: defaults to INPAINTING
        # "maskPrompt": "Box in the center.", # Required if mask image is not specified; must NOT be included if mask image is specified
        "negativeText": "dogs, cats",         # Optional: text prompt defining what not to include in the image

        # If taskType=OUTPAINTING
        "outPaintingMode": "PRECISE"          # Optional: DEFAULT | PRECISE

        # For VIRTUAL_TRY_ON, use a curl request instead
    }
)

print(response.data[0].b64_json)
Stability AI’s Stable Image Inpaint model is also available through AWS Bedrock. The parameters are described in the AWS Bedrock documentation.
from openai import OpenAI

BASE_URL = "https://{controlPlaneUrl}/api/llm"
API_KEY = "your-truefoundry-api-key"

# Configure OpenAI client with TrueFoundry settings
client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
)

response = client.images.edit(
    model="tfy-ai-bedrock/stable-image-inpaint1",
    image=open("image.png", "rb"),  # REQUIRED, source image
    prompt="Replace the background with a beach scene and add palm trees on both sides",  # REQUIRED
    output_format="jpeg",  # Optional
    mask=open("mask.png", "rb"),  # Optional
    extra_body={
        "style_preset": "anime",    # Optional
        "negative_prompt": "dogs",  # Optional
        "seed": 42,                 # Optional
        "grow_mask": 7              # Optional
    }
)

print(response.data[0].b64_json)
AWS Bedrock supports a single image in PNG or JPG format with a maximum size of 5MB.
Azure OpenAI’s gpt-image-1 model supports up to 16 images in PNG, WebP, or JPG format (max 50MB each). The parameters are described in the Azure OpenAI documentation.
from openai import OpenAI

BASE_URL = "https://{controlPlaneUrl}/api/llm"
API_KEY = "your-truefoundry-api-key"

# Configure OpenAI client with TrueFoundry settings
client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
)

response = client.images.edit(
    model="azure-main/gpt-image-1",
    image=[
        open("image1.png", "rb"),  # First source image
        open("image2.png", "rb")   # Second source image (up to 16 images supported)
    ],
    prompt="Replace the background with a beach scene and add palm trees on both sides",
    mask=open("mask.png", "rb")    # Optional mask to specify edit areas
)

print(response.data[0].url)
Azure OpenAI supports up to 16 images in PNG, WebP, or JPG format with a 50MB size limit per image.

Response Format

The API returns an ImagesResponse object containing:
ImagesResponse(
    created=1755685741,
    data=[
        Image(
            url='https://oaidalleapiprodscus.blob.core.windows.net/private/org-ojH41IdW0UR2VlysxKUx8AjA/user-9QSCTtrOEHbbiQRFfFbwT8fx/img-PlwCalRpn4j5jQxG1wKvQYGc.png?st=2025-08-20T09%3A29%3A01Z&se=2025-08-20T11%3A29%3A01Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=32836cae-d25f-4fe9-827b-1c8c59c442cc&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2025-08-20T10%3A01%3A35Z&ske=2025-08-21T10%3A01%3A35Z&sks=b&skv=2024-08-04&sig=g21zsMrRuM8aRjO5lLyVVwxZD7K4Ng1OoI7QZ5e8Y4Q%3D',
            b64_json=None,
            revised_prompt=None
        )
    ],
    background=None,
    output_format=None,
    quality=None,
    size=None,
    usage=None
)