# video
// End-to-end video generation via fal.ai through Starchild paid proxy. Covers text-to-video, image-to-video, video-to-video, model selection, billing, polling, and serving local reference assets via a public preview.
| name | video |
| version | 3.1.0 |
| description | End-to-end video generation via fal.ai through Starchild paid proxy. Covers text-to-video, image-to-video, video-to-video, model selection, billing, polling, and serving local reference assets via a public preview. |
| metadata | {"starchild":{"emoji":"🎬","skillKey":"video","requires":{"env":["FAL_KEY"]}}} |
| user-invocable | true |
| disable-model-invocation | false |
Use this skill for all video-generation requests on Starchild.
Core principle: call the provided scripts. Do not re-implement proxy/billing/upload plumbing.
```python
exec(open('skills/video/generate_video.py').read())

result = generate_video(
    prompt="A cinematic drone shot over snowy mountains at sunrise",
    model="balanced",  # "budget" | "balanced" | "premium"
    duration=5,
)
# result -> {"success": True, "cost": 0.70, "video_url": "...", "local_path": "output/videos/..."}
```
`generate_video` automatically: submits → polls → fetches the result → downloads the mp4 to `output/videos/`.
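Under the hood the polling phase is a plain wait loop. A minimal sketch with an injectable status callable (hypothetical helper; the real implementation lives in `generate_video.py`, and the status names match the troubleshooting table below):

```python
import time

def poll_until_complete(get_status, interval=5.0, timeout=900, sleep=time.sleep):
    """Call get_status() repeatedly until the job reaches a terminal state.

    get_status is any zero-arg callable returning a dict with a "status"
    key, e.g. {"status": "IN_PROGRESS"}. Raises TimeoutError so the caller
    can save the request_id and resume later with poll_status.py.
    """
    waited = 0.0
    while waited < timeout:
        status = get_status()
        if status.get("status") in ("COMPLETED", "FAILED"):
            return status
        sleep(interval)
        waited += interval
    raise TimeoutError(f"still IN_PROGRESS after {timeout}s; save the request_id")
```

Injecting `get_status` and `sleep` keeps the loop testable without hitting the network.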
Never hand the user the raw `video_url` (e.g. `https://*.fal.media/.../*.mp4`). fal serves these files with `Content-Security-Policy: sandbox; default-src 'none'`, which means:

- Embedding via `<video>` / `<iframe>` is blocked by CSP.
- fal sends no `Content-Disposition: attachment` header, so the browser does not auto-download either.
- Query-string tricks (`?download=1`, etc.) cannot fix this; only a server-side header change would, and we don't control fal's CDN.

The only reliable user-facing delivery path is the already-downloaded local file:

- `result["local_path"]` (e.g. `output/videos/xxx.mp4`); `generate_video` always downloads on success.
- The file lands in `output/videos/<filename>` and is viewable in the workspace file panel / file browser.
- Or link it as `[video](output/videos/<filename>.mp4)`; the workspace serves these directly with the right headers.
- Or send it via `send_to_telegram(file_path="output/videos/...", message_type="video")` or `send_to_wechat(file_path="output/videos/...", message_type="video")`.

If the download somehow failed (`local_path` missing), re-fetch with:
```shell
curl -L -o output/videos/<filename>.mp4 "<video_url>"
```
Then deliver the local path. Still do not give the user the raw fal URL as the primary deliverable.
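The check-then-refetch fallback can be wrapped in a small helper (hypothetical name `ensure_local_video`; a sketch, not part of the shipped scripts):

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def ensure_local_video(result, out_dir="output/videos"):
    """Return a deliverable local path, re-fetching only when needed."""
    local = result.get("local_path")
    if local and os.path.exists(local):
        return local  # normal case: generate_video already downloaded it
    # Fallback: derive a filename from the fal.media URL and re-download
    # (equivalent to the curl -L command above).
    filename = os.path.basename(urlparse(result["video_url"]).path) or "video.mp4"
    dest = os.path.join(out_dir, filename)
    os.makedirs(out_dir, exist_ok=True)
    urlretrieve(result["video_url"], dest)
    return dest
```

Always hand the user the returned local path, never `result["video_url"]`.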
fal.ai needs the reference asset as a public https URL. fal storage upload requires a Serverless permission your key currently does not have. The reliable path is to expose the asset via a published Starchild preview.
Workflow:

1. Copy the asset into `output/fal_assets/` using `publish_asset.py`.
2. Make sure the `fal-assets` preview is running and published (one-time setup, see §3).
3. The file becomes publicly reachable at `<preview_base>/<filename>`.
4. Call `generate_video(..., image_url=public_url)`.

```python
# Step 1: publish a local image into the asset folder
exec(open('skills/video/publish_asset.py').read())
asset = publish_local('/path/to/your/photo.jpg')
# or: publish_from_url('https://example.com/photo.jpg')
filename = asset['filename']

# Step 2: combine with the preview's public base URL (see §3)
public_url = f"https://community.iamstarchild.com/<user_slug>-fal-assets/{filename}"

# Step 3: image-to-video
exec(open('skills/video/generate_video.py').read())
result = generate_video(
    prompt="gentle cinematic camera push-in",
    model="balanced",
    duration=5,
    image_url=public_url,
)
```
`generate_video` auto-rewrites the model path from `*/text-to-video` to `*/image-to-video` whenever `image_url` is provided. The same approach works for video-to-video models; pass an mp4 URL instead.
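That auto-rewrite is a plain substitution on the model id. A minimal sketch (hypothetical helper and parameter names; the authoritative logic is in `generate_video.py`):

```python
def derive_variant(model_id, image_url=None, video_url=None):
    """Rewrite a */text-to-video model id when a reference asset is given."""
    if video_url:
        return model_id.replace("text-to-video", "video-to-video")
    if image_url:
        return model_id.replace("text-to-video", "image-to-video")
    return model_id  # pure text-to-video: keep the id as-is
```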
Supported formats (enforced by `publish_asset.py`):

- Images: `.jpg .jpeg .png .webp .gif .bmp`, max 10 MB
- Videos: `.mp4 .mov .webm .mkv .m4v`, max 100 MB

## §3 fal-assets public preview setup

Run this once per workspace. The preview keeps running across sessions.
```python
# 3.1 ensure the asset folder exists with a placeholder index
import os, pathlib
pathlib.Path('output/fal_assets').mkdir(parents=True, exist_ok=True)
if not os.path.exists('output/fal_assets/index.html'):
    open('output/fal_assets/index.html', 'w').write(
        '<!doctype html><html><body><h1>fal asset host</h1></body></html>'
    )

# 3.2 start the preview
preview(action='serve', dir='output/fal_assets', title='fal-assets')

# 3.3 publish to a public URL
preview(action='publish', preview_id='<id from step 3.2>', slug='fal-assets', title='fal-assets')
# -> public base: https://community.iamstarchild.com/<user_slug>-fal-assets/
```
After publish, the public base URL is reusable for every future image-to-video / video-to-video task. Files dropped into `output/fal_assets/` become reachable as `<base>/<filename>` immediately; no re-publish needed.
Verify with:
```shell
curl -sI https://community.iamstarchild.com/<user_slug>-fal-assets/<filename>
# expect: HTTP/2 200, content-type: image/* or video/*
```
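Composing `<base>/<filename>` is plain string work, but filenames with spaces or non-ASCII characters need percent-encoding. A sketch (hypothetical helper, not part of the skill scripts):

```python
from urllib.parse import quote

def asset_public_url(filename, base):
    """Build the public URL for a file dropped into output/fal_assets/.

    base is the published preview base, e.g.
    https://community.iamstarchild.com/<user_slug>-fal-assets/
    """
    return f"{base.rstrip('/')}/{quote(filename)}"
```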
If `preview(action='serve')` returns `No available ports in pool`, ask the user which existing preview can be stopped to free a port; never silently kill one.
| Tier | Model | Cost / 5s | Notes |
|---|---|---|---|
| budget | fal-ai/wan/v2.5/text-to-video | $0.25 | Fastest, cheapest; good for prompt iteration |
| balanced | alibaba/happy-horse/text-to-video | $0.70 | Default; best lip-sync, most use cases |
| premium | bytedance/seedance-2.0/fast/text-to-video | $1.20 | Best motion + camera direction |
Override by passing the full model id to generate_video(model=...). Image-to-video variants are auto-derived by replacing text-to-video with image-to-video.
Pricing details and the model registry live in `generate_video.py::estimate_cost`.
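Assuming cost scales linearly with duration (an assumption to confirm against `estimate_cost`), the pricing table above implies a sketch like:

```python
# Per-5s prices copied from the model table above; the real registry is
# in generate_video.py::estimate_cost.
_COST_PER_5S = {
    "fal-ai/wan/v2.5/text-to-video": 0.25,
    "alibaba/happy-horse/text-to-video": 0.70,
    "bytedance/seedance-2.0/fast/text-to-video": 1.20,
}

def estimate_cost_sketch(model_id, duration=5):
    """Linear-scaling estimate (an assumption, not confirmed upstream)."""
    return round(_COST_PER_5S[model_id] * duration / 5, 2)
```

Useful for telling the user the expected charge before submitting, since cost is pre-charged on submit.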
```python
exec(open('skills/video/poll_status.py').read())
result = poll_video("019ded6c-d871-7290-bbf1-ddc6993f8958")
```
Use this when an earlier generate_video call timed out or you only have a request_id.
- `generate_video.py`: submit → poll → download. Handles text-to-video and image-to-video.
- `publish_asset.py`: copy local files (or download remote URLs) into `output/fal_assets/` so they can be served by the fal-assets preview.
- `poll_status.py`: resume polling by request_id; downloads the result on completion.

| Problem | Fix |
|---|---|
| `image_url` must be a public HTTP(S) URL | Use `publish_asset.py` + fal-assets preview, then pass the public URL |
| `No available ports in pool` (preview serve) | Ask the user which preview to stop; do not auto-kill |
| `downstream_service_error` after COMPLETED | Reference asset host failed mid-render; re-encode/resize to 16:9, re-publish, retry |
| HTTP 402 `insufficient_credits` | Top up balance; cost is pre-charged on submit |
| HTTP 403 `endpoint_not_allowed` | sc-proxy only allows approved fal video endpoints; pick one from the model table |
| Generation FAILED upstream | Shorten prompt, drop unusual tokens, retry once before changing model |
| Job stuck IN_PROGRESS >15 min | Save the request_id, resume later with `poll_status.py` |
| User reports the fal.media link "shows nothing" / "blank page" | Expected: fal serves with `CSP: sandbox; default-src 'none'`. Deliver the local file at `result["local_path"]` instead of the raw URL (see §1). |
- Request path: sc-proxy → queue.fal.run (and api.fal.ai) → fal model providers.
- Auth header: `Authorization: Key fake-falai-key-12345` (the proxy injects the real FAL_KEY).
- Endpoints outside the allowlist return `403 endpoint_not_allowed`.
- Result files live at `https://*.fal.media/...`: a public CDN, no auth needed for download.
- Pricing is defined in `generate_video.py::estimate_cost` and in `transparent-proxy/apis/falai.py::_VIDEO_PRICING`.
- FAL_KEY lacks Serverless permission, so fal storage upload is unavailable. Keep using the preview-based approach until that changes.