---
name: write-script-duckdb
description: MUST use when writing DuckDB queries.
---
Place scripts in a folder.
After writing, tell the user which command fits what they want to do:

- `wmill script preview <script_path>` — default when iterating on a local script. Runs the local file without deploying.
- `wmill script run <path>` — runs the script already deployed in the workspace. Use only when the user explicitly wants to test the deployed version, not local edits.
- `wmill generate-metadata` — generate `.script.yaml` and `.lock` files for the script you modified.
- `wmill sync push` — deploy local changes to the workspace. Only suggest/run this when the user explicitly asks to deploy/publish/push — not when they say "run", "try", or "test".

If the user says "run the script", "try it", "test it", or "does it work" while there are local edits to the script file, use `script preview`. Do NOT push the script and then `script run` it — pushing is a deploy, and deploying just to test overwrites the workspace version with untested changes.
Only use `script run` when:
- the user explicitly wants to test the already-deployed version of the script, not local edits.

Only use `sync push` when:
- the user explicitly asks to deploy, publish, or push.
If the user hasn't already told you to run/test/preview the script, offer it as a one-sentence next step (e.g. "Want me to run `wmill script preview` with sample args?"). Do not present a multi-option menu.
If the user already asked to test/run/try the script in their original request, skip the offer and just execute `wmill script preview <path> -d '<args>'` directly — pick plausible args from the script's declared parameters. The shape varies by language: `main(...)` for code languages, the SQL dialect's own placeholder syntax (`$1` for PostgreSQL, `?` for MySQL/Snowflake, `@P1` for MSSQL, `@name` for BigQuery, etc.), positional `$1`, `$2`, … for Bash, `param(...)` for PowerShell.
`wmill script preview` does not deploy, but it still executes script code and may cause side effects; run it yourself when the user asked to test/preview (or after confirming that execution is intended). `wmill sync push` and `wmill generate-metadata` modify workspace state or local files — only run these when the user explicitly asks; otherwise tell them which command to run.

For a visual open-the-script-in-the-dev-page preview (rather than `script preview`'s run-and-print-result), use the preview skill.
Use `wmill resource-type list --schema` to discover available resource types.
Arguments are defined with comments and used with `$name` syntax:

```sql
-- $name (text) = default
-- $age (integer)
SELECT * FROM users WHERE name = $name AND age > $age;
```
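A slightly fuller sketch of the same mechanism, showing defaults being picked up when the caller omits an argument (the `users` table and its columns are placeholders):

```sql
-- $city (text) = Paris
-- $min_age (integer) = 18
SELECT name, age
FROM users
WHERE city = $city AND age >= $min_age;
```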
Attach Ducklake for data lake operations:

```sql
-- Main ducklake
ATTACH 'ducklake' AS dl;
-- Named ducklake
ATTACH 'ducklake://my_lake' AS dl;
-- Then query
SELECT * FROM dl.schema.table;
```
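An attached Ducklake can also be written with ordinary DDL/DML. A minimal sketch, assuming the attached lake is writable; the schema and table names are illustrative:

```sql
ATTACH 'ducklake' AS dl;
-- illustrative table; schema/table names are assumptions
CREATE TABLE IF NOT EXISTS dl.main.events (id INTEGER, ts TIMESTAMP);
INSERT INTO dl.main.events VALUES (1, now());
```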
Connect to external databases using resources:

```sql
ATTACH '$res:path/to/resource' AS db (TYPE postgres);
SELECT * FROM db.schema.table;
```
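The `TYPE` clause should match the kind of database the resource points to. A hedged example with an assumed MySQL resource; the resource path and table names are placeholders:

```sql
-- resource path and table names are placeholders
ATTACH '$res:f/data/my_mysql' AS mydb (TYPE mysql);
SELECT count(*) FROM mydb.shop.orders;
```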
Read files from S3 storage:

```sql
-- Default storage
SELECT * FROM read_csv('s3:///path/to/file.csv');
-- Named storage
SELECT * FROM read_csv('s3://storage_name/path/to/file.csv');
-- Parquet files
SELECT * FROM read_parquet('s3:///path/to/file.parquet');
-- JSON files
SELECT * FROM read_json('s3:///path/to/file.json');
```
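The readers accept DuckDB's usual tuning options as well; for example, `read_csv` can be given an explicit header flag and delimiter (`header` and `delim` are standard `read_csv` parameters; the file path is illustrative):

```sql
SELECT *
FROM read_csv('s3:///path/to/file.csv',
              header = true,
              delim = ';');
```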
Declare the arg with type `(s3object)`. Windmill renders an S3 file picker for it and binds the arg as the bare `s3://storage/key` URI, which DuckDB's reader functions consume directly:

```sql
-- $file (s3object)
SELECT * FROM read_parquet($file);
```
Works with any DuckDB reader: `read_csv($file)`, `read_json($file)`, etc.
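Multiple `(s3object)` args can be combined in one query, e.g. joining two picked files. A sketch with assumed column names:

```sql
-- $users_file (s3object)
-- $orders_file (s3object)
SELECT u.id, count(*) AS n_orders
FROM read_csv($users_file) u
JOIN read_csv($orders_file) o ON o.user_id = u.id
GROUP BY u.id;
```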
DuckDB writes to S3 natively via `COPY ... TO`:

```sql
COPY (SELECT * FROM users) TO 's3:///exports/users.parquet' (FORMAT PARQUET);
```
Use this instead of the `-- s3` streaming directive supported by the other SQL dialects — that directive is not available in DuckDB.
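`COPY ... TO` takes the same options as a local write. For instance, a headered CSV export or a Hive-partitioned Parquet export (the `country` column is an assumption):

```sql
-- CSV with a header row
COPY (SELECT * FROM users) TO 's3:///exports/users.csv' (FORMAT CSV, HEADER);
-- one Parquet file per partition value
COPY (SELECT * FROM users) TO 's3:///exports/users' (FORMAT PARQUET, PARTITION_BY (country));
```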