# game-art-orchestrator
Automates the process of generating game assets matching a specific art style using RAG, Few-Shot visual prompting (via `generate_image`), and a Human-In-The-Loop VLM evaluation pipeline.
| Field | Value |
| --- | --- |
| name | game-art-orchestrator |
| description | Automates the process of generating game assets matching a specific art style using RAG, Few-Shot visual prompting (via `generate_image`), and a Human-In-The-Loop VLM evaluation pipeline. |
This skill provides a 2-phase pipeline for strict adherence to a style's `Generation_DNA` and Global rules when generating new game assets.
Whenever the user asks you to "draw", "generate", or "create" an asset, BEFORE generating the image, you MUST:
1. Scan `StyleLibrary/` and pick the closest matching subfolder based on semantic meaning.
2. Read `Generation_DNA.md` inside that style folder.
3. Run `python .agents/skills/game-art-orchestrator/scripts/retrieve_orchestrator_context.py <relative_path_to_style> "<user_prompt>"` (for example: `python .agents/skills/game-art-orchestrator/scripts/retrieve_orchestrator_context.py StyleLibrary/SciFi "laser blaster"`). A scripted sketch of this call appears after this list.
4. Use the `=== ORCHESTRATOR CONTEXT PAYLOAD ===` printed to the console by the script. CRITICAL: Do NOT attempt to manually parse the index JSON files or manually select reference images. You MUST strictly use the text rules and absolute image paths provided by the script.
5. Combine the `Generation_DNA.md` with the retrieved Global RAG rules and image paths.
6. For flat asset categories (`Characters`, `Items`, `Obstacles`, `UI`, `VFX`), you MUST explicitly inject the following constraints into your image generation prompt:
   - "SOLID PURE MAGENTA BACKGROUND (#FF00FF), no gradients, no shadows on the background. Do not generate a floor."
   - "Generate EXACTLY ONE single asset in the center of the image. DO NOT generate multiple variants, character sheets, split views, or multiple angles. ONLY ONE FIGURE."
7. Execute the `generate_image` tool (Gemini Nanobanana integration), passing the combined DNA rules into `Prompt` and the paths of the Few-shot images into `ImagePaths`. IMPORTANT: At this stage, append instructions to focus ONLY on Step 1: Sketching & Silhouettes. Do not generate full colors or polish yet.

Instead of generating multiple variants simultaneously, the orchestration follows a structured 2-step approval process, described below.
### Step 1: Sketching & Silhouettes
Present the generated sketch/silhouette to the user for confirmation. CRITICAL: You MUST embed the generated image directly in your chat response using absolute-path markdown syntax, e.g. `![sketch](/absolute/path/to/image.png)`, so the user can see it. On Windows, you MUST replace all backslashes (`\`) with forward slashes (`/`) or use the `file:///` form so the markdown parses correctly; a path-normalization sketch follows.
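A small helper sketch for that path rule; the `to_markdown_image` name and the choice of `file:///` output are assumptions:

```python
from pathlib import PureWindowsPath

def to_markdown_image(path: str, alt: str = "sketch") -> str:
    """Turn a (possibly Windows) absolute path into a markdown image tag."""
    posix = PureWindowsPath(path).as_posix()          # backslashes -> forward slashes
    return f"![{alt}](file:///{posix.lstrip('/')})"   # file:/// form per the rule above

print(to_markdown_image(r"C:\work\.gemini\artifacts\sketch_01.png"))
# -> ![sketch](file:///C:/work/.gemini/artifacts/sketch_01.png)
```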
HUMAN-IN-THE-LOOP CHECKPOINT: Stop your execution and ask the user whether the sketch/silhouette is approved or needs changes before proceeding.
### Step 2: Flat Colors, Shading & Material Polish
Once (and ONLY after) the user confirms the sketch, update the generation prompt to apply Flat Colors, Shading, Material Polish, and Post-processing while strictly maintaining the approved silhouette. Execute the `generate_image` tool to create the final render.
(Optional) Run `python scripts/downscale_image.py <image_paths>` if the final image is large and you anticipate token overflow when reading it.
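A plausible sketch of what `scripts/downscale_image.py` might do, assuming Pillow and an assumed 1024px cap; the real script may differ:

```python
import sys
from PIL import Image  # Pillow

MAX_SIDE = 1024  # assumed cap; pick whatever keeps reads under the token budget

for path in sys.argv[1:]:
    img = Image.open(path)
    if max(img.size) > MAX_SIDE:
        img.thumbnail((MAX_SIDE, MAX_SIDE))  # in-place, aspect-ratio preserving
        img.save(path)
```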
Adopt the persona in `prompts/evaluator-vlm.md`. Evaluate the final rendered image against the user's initial prompt, the retrieved Global Rules, and `Evaluation_Rules.json`. Provide a Binary Validation Checklist followed by an Aesthetic Score (0-100) and `Correction_Guidance`.
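For illustration, one hypothetical shape for the evaluator's verdict; the dataclass, field names, and the 80-point gate below are assumptions, while the authoritative format is defined by `prompts/evaluator-vlm.md` and `Evaluation_Rules.json`:

```python
from dataclasses import dataclass, field

@dataclass
class EvalResult:
    checklist: dict[str, bool] = field(default_factory=dict)  # Binary Validation Checklist
    aesthetic_score: int = 0                                  # 0-100
    correction_guidance: str = ""                             # feeds Round 2 Prompt Translation

result = EvalResult(
    checklist={"magenta_background": True, "single_asset": True, "style_match": False},
    aesthetic_score=72,
    correction_guidance="Silhouette matches, but the rim lighting violates the flat-shading DNA rule.",
)
# Hypothetical gate: any failed check or a low score triggers Round 2.
needs_round_2 = not all(result.checklist.values()) or result.aesthetic_score < 80
```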
FINAL CHECKPOINT: Present the final image and the Evaluator's scoring to the user. CRITICAL: You MUST embed the final image directly in your chat response using absolute-path markdown syntax, e.g. `![final render](/absolute/path/to/image.png)`. Remember to use forward slashes (`/`) for paths. Ask them whether to approve the asset or request a Round 2 correction pass.
If the user requests Round 2, use the Orchestrator to perform a Prompt Translation on the updated Correction_Guidance. Convert the qualitative feedback into explicit technical parameters (update positive text, inject negative keywords, adjust weights) and re-run Step 2 rendering.
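A minimal sketch of that Prompt Translation step; the complaint-to-keyword table and `translate_guidance` helper are invented for illustration:

```python
# Illustrative mapping from qualitative complaints to negative keywords; not part of the skill.
NEGATIVE_MAP = {
    "too dark": ["underexposed", "heavy shadows"],
    "rim lighting": ["rim light", "backlight glow"],
    "blurry": ["soft focus", "motion blur"],
}

def translate_guidance(base_prompt: str, guidance: str) -> tuple[str, list[str]]:
    """Map qualitative Correction_Guidance onto explicit prompt parameters."""
    guidance_lower = guidance.lower()
    negatives = [kw for complaint, kws in NEGATIVE_MAP.items()
                 if complaint in guidance_lower for kw in kws]
    positive = f"{base_prompt}. Strictly keep the approved silhouette unchanged."
    return positive, negatives
```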
When the user explicitly approves a generated variant:
1. Determine the project name (e.g., `FlappyTrippy`). If unknown, verify with the user before exporting.
2. Determine the asset category (`Characters`, `Environments`, `Items`, `UI`, `Obstacles`).
3. Export to `<workspace>/Assets/Projects/<Project_Name>/GameAssets/<Style_Name>/<Category>/`: copy the approved image from the `.gemini/artifacts/` folder into the target directory, ensuring the original remains in artifacts for rendering.
4. Name the file `[object_name]_[YYYY-MM-DD].png` (e.g., `laser_pistol_2026-04-20.png`).
5. If the category is `UI`, you MUST automatically perform three tasks:
   - Use Python (`PIL`) to auto-crop the transparent bounding box (`getbbox()`) and proportionally resize the UI elements to standard mobile game dimensions (e.g., Icons max 256px, Buttons max 512px, Panels max 1024px) to optimize memory; see the sketch at the end of this section.
   - Write a `<Project_Name>_UI_Integration_Guide.md` file in the project root. This document MUST detail how to assemble the generated UI components in Unity (e.g., specifying which elements require 9-Slicing, how to nest Icons inside blank Button bases, and identifying reusable components) to guarantee optimal VRAM usage and responsive UI scaling for developers.
   - Update `<workspace>/Assets/Projects/<Project_Name>/GameAssets/Generated_Asset_Catalog.md`: use your code editing tool to append a new row to the table in this file, logging the Category, Asset Name, Target Style, Absolute File Path, and a short 1-sentence prompt description. If the catalog file doesn't exist yet, create it with a standard markdown table header. Make sure to replace all backslashes (`\`) with forward slashes (`/`).

Finally, run the `/finish` workflow to extract any successful prompting techniques or rendering gotchas. Also, remind the user to run `/commit` to save the newly generated assets.
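To make the `PIL` post-processing task concrete, a sketch assuming Pillow and RGBA inputs (i.e., the magenta key already replaced by transparency); the size caps are the ones stated above, and the `postprocess_ui` helper is illustrative:

```python
from PIL import Image  # Pillow

MAX_SIZE = {"Icon": 256, "Button": 512, "Panel": 1024}  # caps from the rule above

def postprocess_ui(path: str, kind: str) -> None:
    img = Image.open(path).convert("RGBA")
    bbox = img.getchannel("A").getbbox()   # getbbox() on the alpha band: non-transparent region
    if bbox:
        img = img.crop(bbox)               # auto-crop transparent margins
    cap = MAX_SIZE.get(kind, 1024)
    img.thumbnail((cap, cap))              # proportional downscale; never upscales
    img.save(path)

# Hypothetical usage:
# postprocess_ui("Assets/Projects/FlappyTrippy/GameAssets/SciFi/UI/play_button_2026-04-20.png", "Button")
```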