# Fal AI
Fal AI provides fast, scalable access to state-of-the-art image generation models including FLUX, Stable Diffusion, Imagen, and more.
## Overview
| Property | Details |
|---|---|
| Description | Fal AI offers optimized infrastructure for running image generation models at scale with low latency. |
| Provider Route on LiteLLM | `fal_ai/` |
| Provider Doc | Fal AI Documentation |
| Supported Operations | `/images/generations` |
## Setup

### API Key

```python
import os

# Set your Fal AI API key
os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"
```
Get your API key from fal.ai.
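Because the key is read from the environment at request time, a missing or empty value only surfaces once a request is made. A small fail-fast check can catch setup mistakes earlier — the helper below is a hypothetical sketch, not part of LiteLLM:

```python
import os

# Placeholder key for illustration; in practice, load it from your secret store.
os.environ.setdefault("FAL_AI_API_KEY", "your-fal-api-key")

def require_fal_key() -> str:
    """Return the Fal AI API key, or fail with an actionable message."""
    key = os.environ.get("FAL_AI_API_KEY", "").strip()
    if not key:
        raise RuntimeError("FAL_AI_API_KEY is not set; get a key from fal.ai")
    return key

print(require_fal_key())
```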
## Supported Models

| Model Name | Description | Documentation |
|---|---|---|
| `fal_ai/fal-ai/flux-pro/v1.1-ultra` | FLUX Pro v1.1 Ultra - High-quality image generation | Docs |
| `fal_ai/fal-ai/imagen4/preview` | Google's Imagen 4 - Highest quality model | Docs |
| `fal_ai/fal-ai/recraft/v3/text-to-image` | Recraft v3 - Multiple style options | Docs |
| `fal_ai/fal-ai/stable-diffusion-v35-medium` | Stable Diffusion v3.5 Medium | Docs |
| `fal_ai/bria/text-to-image/3.2` | Bria 3.2 - Commercial-grade generation | Docs |
## Image Generation

### Usage - LiteLLM Python SDK
**Basic Image Generation**

```python
import litellm
import os

# Set your API key
os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"

# Generate an image
response = litellm.image_generation(
    model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
    prompt="A serene mountain landscape at sunset with vibrant colors"
)

print(response.data[0].url)
```
**Google Imagen 4 Generation**

```python
import litellm
import os

os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"

# Generate with Imagen 4
response = litellm.image_generation(
    model="fal_ai/fal-ai/imagen4/preview",
    prompt="A vintage 1960s kitchen with flour package on countertop",
    aspect_ratio="16:9",
    num_images=1
)

print(response.data[0].url)
```
**Recraft v3 with Style**

```python
import litellm
import os

os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"

# Generate with a specific style
response = litellm.image_generation(
    model="fal_ai/fal-ai/recraft/v3/text-to-image",
    prompt="A red panda eating bamboo",
    style="realistic_image",
    image_size="landscape_4_3"
)

print(response.data[0].url)
```
**Async Image Generation**

```python
import litellm
import asyncio
import os

os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"

async def generate_image():
    response = await litellm.aimage_generation(
        model="fal_ai/fal-ai/stable-diffusion-v35-medium",
        prompt="A cyberpunk cityscape with neon lights",
        guidance_scale=7.5,
        num_inference_steps=50
    )
    print(response.data[0].url)
    return response

asyncio.run(generate_image())
```
**Advanced FLUX Pro Generation**

```python
import litellm
import os

os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"

# Generate with advanced parameters
response = litellm.image_generation(
    model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
    prompt="A majestic dragon soaring over mountains",
    n=2,
    size="1792x1024",  # Maps to aspect_ratio="16:9"
    seed=42,
    safety_tolerance="2",
    enhance_prompt=True
)

for image in response.data:
    print(f"Generated image: {image.url}")
```
### Usage - LiteLLM Proxy Server

#### 1. Configure your config.yaml
**Fal AI Image Generation Configuration**

```yaml
model_list:
  - model_name: flux-ultra
    litellm_params:
      model: fal_ai/fal-ai/flux-pro/v1.1-ultra
      api_key: os.environ/FAL_AI_API_KEY
    model_info:
      mode: image_generation
  - model_name: imagen4
    litellm_params:
      model: fal_ai/fal-ai/imagen4/preview
      api_key: os.environ/FAL_AI_API_KEY
    model_info:
      mode: image_generation
  - model_name: stable-diffusion
    litellm_params:
      model: fal_ai/fal-ai/stable-diffusion-v35-medium
      api_key: os.environ/FAL_AI_API_KEY
    model_info:
      mode: image_generation

general_settings:
  master_key: sk-1234
```
#### 2. Start LiteLLM Proxy Server

**Start Proxy Server**

```shell
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```
#### 3. Make requests
**Generate via Proxy - OpenAI SDK**

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-1234"
)

response = client.images.generate(
    model="flux-ultra",
    prompt="A beautiful sunset over the ocean",
    n=1,
    size="1024x1024"
)

print(response.data[0].url)
```
**Generate via Proxy - LiteLLM SDK**

```python
import litellm

response = litellm.image_generation(
    model="litellm_proxy/imagen4",
    prompt="A cozy coffee shop interior",
    api_base="http://localhost:4000",
    api_key="sk-1234"
)

print(response.data[0].url)
```
**Generate via Proxy - cURL**

```shell
curl --location 'http://localhost:4000/v1/images/generations' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer sk-1234' \
  --data '{
    "model": "stable-diffusion",
    "prompt": "A serene Japanese garden with cherry blossoms",
    "n": 1,
    "size": "1024x1024"
  }'
```
## Using Model-Specific Parameters
LiteLLM forwards any additional parameters in your request directly to the Fal AI API, so you can pass model-specific options alongside the standard OpenAI-compatible ones.
**Pass Model-Specific Parameters**

```python
import litellm

# Any parameters beyond the standard ones are forwarded to Fal AI
response = litellm.image_generation(
    model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
    prompt="A beautiful sunset",
    # Model-specific Fal AI parameters
    aspect_ratio="16:9",
    safety_tolerance="2",
    enhance_prompt=True,
    seed=42
)
```
For the complete list of parameters supported by each model, see:

- FLUX Pro v1.1 Ultra parameters
- Imagen 4 parameters
- Recraft v3 parameters
- Stable Diffusion v3.5 parameters
- Bria 3.2 parameters
## Supported Parameters
Standard OpenAI-compatible parameters that work across all models:
| Parameter | Type | Description | Default |
|---|---|---|---|
| `prompt` | string | Text description of the desired image | Required |
| `model` | string | Fal AI model to use | Required |
| `n` | integer | Number of images to generate (1-4) | 1 |
| `size` | string | Image dimensions (maps to model-specific format) | Model default |
| `api_key` | string | Your Fal AI API key | Environment variable |
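To illustrate what "maps to model-specific format" can mean for `size`, the sketch below reduces an OpenAI-style `WIDTHxHEIGHT` string to the closest aspect ratio from a candidate list, which is how `1792x1024` can end up as `16:9`. The candidate list and nearest-match rule here are illustrative assumptions, not LiteLLM's actual mapping table:

```python
def nearest_aspect_ratio(size: str,
                         supported=("21:9", "16:9", "4:3", "1:1", "3:4", "9:16")) -> str:
    """Pick the supported aspect ratio closest to a WIDTHxHEIGHT size string."""
    width, height = (int(part) for part in size.lower().split("x"))
    target = width / height

    def ratio_value(ratio: str) -> float:
        w, h = (int(part) for part in ratio.split(":"))
        return w / h

    # Choose the candidate whose width/height ratio is numerically closest
    return min(supported, key=lambda r: abs(ratio_value(r) - target))

print(nearest_aspect_ratio("1792x1024"))  # -> 16:9
print(nearest_aspect_ratio("1024x1024"))  # -> 1:1
```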
## Getting Started

1. Sign up at fal.ai
2. Get your API key from your account settings
3. Set the `FAL_AI_API_KEY` environment variable
4. Choose a model from the Fal AI model gallery
5. Start generating images with LiteLLM