fal-video
Use fal.ai for image to video generation, model comparison, queue-based inference, pricing and usage tracking, and portable multi-model experiment workflows with local prompt and cost logs.
| name | fal-video |
| description | Use fal.ai for image to video generation, model comparison, queue-based inference, pricing and usage tracking, and portable multi-model experiment workflows with local prompt and cost logs. |
| metadata | {"short-description":"fal.ai video generation, comparison, and cost-aware experiment tooling."} |
Use this skill when the user wants to generate media through fal.ai, compare multiple marketplace models, or build repeatable experiment workflows with prompts, inputs, outputs, and costs tracked in a consistent way.
fal gives you one platform for many models, but it does not give you one truthful schema for all of them. The right abstraction is shared runner behavior plus explicit per-model presets.
Before generating, ask:
Core principles:
Model presets referenced by this skill:

- seedance-pro-i2v
- kling-v3-pro-i2v
- hailuo-02-standard-i2v

For video jobs, this skill uses fal's queue API:

- `POST https://queue.fal.run/{endpoint_id}`
- `GET https://queue.fal.run/{endpoint_id}/requests/{request_id}/status`
- `GET https://queue.fal.run/{endpoint_id}/requests/{request_id}`

Authentication uses:

- `Authorization: Key $FAL_KEY`
- `FAL_API_KEY` is also accepted by the bundled scripts

Important platform headers for repeatable comparison runs:

- `X-Fal-Store-IO: 1`
- `x-app-fal-disable-fallback: true`

The runner also captures response headers such as:

- `x-fal-request-id`
- `x-fal-billable-units`

The official fal-client SDK is valid and documented, but this repo's first requirement is portability inside a Codex skill. The scripts therefore keep a deterministic raw-HTTP queue path and also use fal-client automatically when it is installed.
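The raw-HTTP queue path can be sketched with the standard library alone. This is a minimal illustration under stated assumptions, not the bundled runner: the URLs and header names come from this document, while the payload shape, polling interval, and the `COMPLETED` status value are assumptions to verify against fal's queue documentation.

```python
import json
import os
import time
import urllib.request

QUEUE_BASE = "https://queue.fal.run"

def queue_headers(store_io=True, disable_fallback=True):
    """Build the headers described above. FAL_KEY (or FAL_API_KEY,
    which the bundled scripts also accept) supplies the credential."""
    key = os.environ.get("FAL_KEY") or os.environ.get("FAL_API_KEY", "")
    headers = {
        "Authorization": f"Key {key}",
        "Content-Type": "application/json",
    }
    if store_io:
        headers["X-Fal-Store-IO"] = "1"
    if disable_fallback:
        headers["x-app-fal-disable-fallback"] = "true"
    return headers

def status_url(endpoint_id, request_id):
    return f"{QUEUE_BASE}/{endpoint_id}/requests/{request_id}/status"

def result_url(endpoint_id, request_id):
    return f"{QUEUE_BASE}/{endpoint_id}/requests/{request_id}"

def submit(endpoint_id, payload):
    """POST the job to the queue; the JSON response includes request_id
    (and x-fal-request-id arrives as a response header)."""
    req = urllib.request.Request(
        f"{QUEUE_BASE}/{endpoint_id}",
        data=json.dumps(payload).encode(),
        headers=queue_headers(),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def wait_for_result(endpoint_id, request_id, interval=5.0):
    """Poll the status URL, then fetch the final result payload.
    The 'COMPLETED' status string is an assumption to confirm."""
    while True:
        req = urllib.request.Request(
            status_url(endpoint_id, request_id), headers=queue_headers()
        )
        with urllib.request.urlopen(req) as resp:
            status = json.loads(resp.read())
        if status.get("status") == "COMPLETED":
            break
        time.sleep(interval)
    req = urllib.request.Request(
        result_url(endpoint_id, request_id), headers=queue_headers()
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Keeping the URL and header construction in small pure functions also makes the deterministic path easy to test without network access.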
In this repo's retained live runs, some endpoints were most reliable when invoked with:

```
uv run --with fal-client python3 ...
```

Prompt like direction, not like marketing copy:
For sprite-animation comparison:

- 1280x720 flat-background anchor plate for walk-cycle image-to-video runs

Do not overload early comparison runs with cinematic flourishes. The first job is to test motion usefulness and identity preservation.
For image-to-video sprite walk cycles, use a direction-specific neutral plate instead of a spritesheet guide:
Checkerboards and alternating-pixel guides are useful for still-image spritesheet generation, but video models tend to treat them as physical floors or rooms. That can introduce perspective, camera drift, and character rotation.
In retained sprite walk runs, bytedance/seedance-2.0/image-to-video with a minimal payload and generate_audio=false produced useful motion references. Extract raw frames first, build contact sheets/GIFs for curation, and defer background cleanup to selected frames.
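As a concrete starting point, a minimal payload for such a run might look like the sketch below. Only `generate_audio=false` comes from the retained runs; the other field names (`prompt`, `image_url`) and the URL are assumptions to verify against the endpoint's published schema.

```python
# Minimal image-to-video payload sketch for a sprite walk run.
# generate_audio=False mirrors the retained runs; "prompt" and "image_url"
# are assumed field names -- check the endpoint schema before submitting.
payload = {
    "prompt": (
        "pixel-art character walk cycle, side view, locked camera, "
        "flat neutral background"
    ),
    # Neutral anchor plate, not a spritesheet guide (see the note above).
    "image_url": "https://example.com/anchor-plate-1280x720.png",
    "generate_audio": False,
}
```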
Bundled scripts:

- scripts/fal_queue_video_run.py
- scripts/fal_platform_models.py
- scripts/fal_video_experiment_matrix.py
Machine-readable tracking:
- experiments/fal/ledger.jsonl
- experiments/fal/ledger.csv
- experiments/fal/<timestamp>-<slug>/batch.json

Human-readable tracking:

- prompts/<timestamp>-...-prompts.md
- learnings/<timestamp>-...-learnings.md

Generated media should still live under the appropriate public/assets/.../concepts/... path for the asset family being tested.
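Appending one row per run to both ledger files can be as simple as the following sketch. The column set shown is an assumption for illustration; the bundled scripts define the real schema.

```python
import csv
import json
from pathlib import Path

def append_ledger_row(row, ledger_dir="experiments/fal"):
    """Append one run record to ledger.jsonl and ledger.csv.
    `row` is a flat dict, e.g. request_id, model, billable_units;
    the exact columns here are illustrative, not the scripts' schema."""
    d = Path(ledger_dir)
    d.mkdir(parents=True, exist_ok=True)

    # JSONL: one self-contained JSON object per line, append-only.
    with (d / "ledger.jsonl").open("a") as f:
        f.write(json.dumps(row) + "\n")

    # CSV: write the header only when creating the file.
    csv_path = d / "ledger.csv"
    is_new = not csv_path.exists()
    with csv_path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if is_new:
            writer.writeheader()
        writer.writerow(row)
```

Append-only files keep every run auditable; nothing is rewritten when a later batch lands.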
❌ Anti-pattern: flattening all fal models into one fake schema
Why bad: you lose the knobs that actually matter and comparisons become misleading.
Better: use shared runner behavior plus explicit per-model presets.
❌ Anti-pattern: recording only prompts and outputs
Why bad: you cannot audit request IDs, retries, or costs later.
Better: always save raw JSON, normalized manifests, and ledger rows.
❌ Anti-pattern: comparing models with hidden fallback routing
Why bad: you may think you tested one endpoint but actually hit another route.
Better: set x-app-fal-disable-fallback: true on strict comparison runs.
❌ Anti-pattern: forcing every model to pretend it supports the same size and duration controls
Why bad: it creates fake parity and bad assumptions.
Better: normalize the task, then document the actual resolved arguments used per model.
❌ Anti-pattern: waiting until the end to think about spend
Why bad: expensive comparison batches get hard to control.
Better: estimate before the run and reconcile after the run.
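Estimate-then-reconcile can be a few lines. In this sketch the per-unit prices are placeholders, not real fal pricing, and it assumes you have already converted the recorded `x-fal-billable-units` values into per-request costs; look up actual pricing per endpoint before a real batch.

```python
# Hypothetical per-unit prices (USD) -- placeholder numbers only.
PRICES = {
    "seedance-pro-i2v": 0.30,
    "kling-v3-pro-i2v": 0.45,
    "hailuo-02-standard-i2v": 0.20,
}

def estimate_batch_cost(jobs):
    """Pre-run estimate. jobs is a list of (model, expected_billable_units);
    unknown models contribute 0.0 so they stand out for manual review."""
    return sum(PRICES.get(model, 0.0) * units for model, units in jobs)

def reconcile(estimated, actual_costs):
    """Post-run check: compare the estimate against per-request costs
    derived from the recorded x-fal-billable-units headers."""
    actual = sum(actual_costs)
    return {"estimated": estimated, "actual": actual, "delta": actual - estimated}
```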
❌ Anti-pattern: using checkerboards as sprite video start backgrounds
Why bad: models often convert the guide into a floor, horizon, or scene and stop behaving like a locked sprite reference.
Better: use a neutral flat-background anchor plate and keep grids for still-image guide sheets.
Reference material bundled with this skill:

- references/fal-platform-notes.md
- references/fal-queue-and-inference.md
- references/fal-video-models.md
- assets/model-presets.json

A good fal workflow is not just "can it generate." It is: