| name | gee-workbench |
| description | Use the workflow-centered PyGEE Workbench as the execution surface for Chat with GEE tasks: run Earth Engine Python code, manage data/catalog actions, update map layers, track workflow records/tasks, and report each operation's goal, method, changed data/code, verification, and next step. |
# GEE Workbench
Use this skill when the user wants a local, browser-based Earth Engine workbench similar to geemap or the GEE Code Editor, especially for running Python `ee` code, viewing map layers, managing data catalogs, tracking workflow records, and inspecting spectral or time-series results.
Companion documents in this folder:
- `GEE_QUALITY_GUIDE.md`: code quality, performance, and client/server rules for Earth Engine Python.
- `GEE_INTERACTION_PROTOCOL.md`: clarify-confirm-execute-report rules for non-trivial GEE work.
- `GEE_WORKFLOW_TEMPLATES.md`: reusable workbench task templates.
- `GEE_DATASET_RECIPES.md`: compact dataset-specific loading and preprocessing patterns.
- `GEE_ERROR_PATTERNS.md`: common GEE failure modes and repair rules.
- `GEE_ACTION_ROUTING.md`: LLM-led routing guidance for deciding whether a request should become a light run under the current experiment or a heavy action that starts a new experiment.
Read `GEE_QUALITY_GUIDE.md` before changing the execution chain, `/api/run_code` usage, or workbench script sync behavior.
When the task is non-trivial, dataset-sensitive, export-related, or scientifically consequential, read the relevant companion document before generating code.
When the task changes script intent or may affect experiment boundaries, read `GEE_ACTION_ROUTING.md` before deciding whether to patch the current experiment or start a new one.
## Core Contract
- Treat the workbench as the canonical execution surface for GEE data actions. For imagery, vectors, assets, exports, catalog updates, layers, charts, and inspector work, write or run code through the workbench Python pane or `/api/run_code`.
- Do not perform ordinary GEE data actions as hidden one-off terminal scripts when the same action can be expressed in workbench code. Platform maintenance, packaging, tests, and dependency installation may still use the terminal.
- Keep user-visible state synchronized: code should appear in the Script tab when relevant, map outputs should appear as layers, datasets should be reflected in Data/Catalog where applicable, and exports/imports should create Workflow and Task records.
- Do not restart the workbench for normal data operations. Restart only after changing platform code.
- Organize outputs as project artifacts: keep code, local data, assets, charts, layers, tasks, and workflow records linked whenever the API supports it.
- Use Workflow as the durable experiment notebook. Major goal changes should create a new experiment; parameter changes, bug fixes, exports/imports, screenshots, and reruns should be attached to the active experiment with linked scripts, data, layers, tasks, and charts.
- Use an LLM-led action router for non-trivial experiment-boundary decisions:
  - The router should judge whether the request continues the current experiment or creates a new one.
  - Program logic should validate router output and dispatch execution, but should not be the primary semantic judge.
## Preferred Execution Chain
For normal GEE analysis requests, use this exact chain unless there is a platform bug:
- Read the current workbench state from `/api/state` if the active map/script matters.
- Generate or update the full intended Python script in `projects/default_project/scripts/generated/current_workbench.py`.
- For a light task, obtain or derive one stable `request_id`, then submit that exact file content to `/api/run_code` once.
- Read the returned state payload and verify that:
  - expected layers or time series exist;
  - logs contain the intended metadata;
  - Workflow and Tasks reflect the run.
- Report the result in Chinese using the response contract.
When sending code to `/api/run_code`, prefer this transport order (a sketch follows this list):
- Best: write the script to `current_workbench.py`, serialize a file-backed payload with `request_id`, `user_request`, `route_decision`, and `script`, then POST with `curl --data-binary @payload.json`.
- Acceptable: a short inline `curl -d '{"script":"..."}'` only for tiny smoke tests.
- Avoid: large shell-escaped JSON strings assembled inline.
- Avoid: local Python or Node HTTP clients when sandbox/network policy may block localhost access.
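A minimal sketch of the file-backed transport, assuming the workbench runs on `127.0.0.1:5050`, that `jq` is available, and that a `route_decision` with only a `decision` field is accepted (the full schema is richer; see the routing contract below):

```bash
# Sketch of the preferred file-backed transport; endpoint and fields assumed.
BASE=http://127.0.0.1:5050
SCRIPT=projects/default_project/scripts/generated/current_workbench.py
# Stable per script content, so a retried submission dedupes cleanly.
REQUEST_ID="light-$(sha256sum "$SCRIPT" | cut -c1-16)"

# Optional: read current state first if the active map/script matters.
curl -s "$BASE/api/state" > state.json

# Build the payload from the script file instead of shell interpolation.
jq -n \
  --arg script "$(cat "$SCRIPT")" \
  --arg request_id "$REQUEST_ID" \
  --arg user_request "Compute NDVI for the current scene" \
  --argjson route_decision '{"decision": "LIGHT_RUN"}' \
  '{script: $script, request_id: $request_id,
    user_request: $user_request, route_decision: $route_decision}' \
  > payload.json

# Submit once, then verify layers/logs/workflow from the returned state.
curl -s --data-binary @payload.json \
  -H 'Content-Type: application/json' "$BASE/api/run_code"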
## Execution Anti-Patterns
Do not do these for routine workbench data operations:
- Do not replace the real script with `print("ping")` or other smoke tests unless you immediately restore the full script.
- Do not send long multi-line scripts through fragile shell interpolation when a file-backed payload is available.
- Do not use hidden terminal-side Earth Engine analysis as a substitute for workbench execution.
- Do not re-submit the same light task through a second transport when the first submission already has the same `request_id` or `route_signature`.
- Do not blindly retry an identical failed `LIGHT_RUN`; inspect the returned error, change the assumptions, or use `force_retry=true` deliberately.
- Do not debug `/api/run_code` by repeatedly changing transport methods without first confirming whether the failure is:
  - workbench logic,
  - localhost connectivity,
  - shell quoting, or
  - sandbox policy.
If a run is slow, first separate:
- Earth Engine compute time,
- workbench HTTP transport time,
- local command/sandbox failures,
- script replacement mistakes.
For simple products like NDVI from the current Landsat scene, Earth Engine compute should be fast. If the overall task takes long, assume the bottleneck is likely in the execution chain, not the geospatial algorithm.
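One hedged way to isolate the transport component is curl's built-in timing against a tiny smoke payload (explicitly allowed above), rather than re-submitting the real task:

```bash
# Time a tiny smoke payload (not the real task, to avoid duplicate
# light-run submissions) to isolate HTTP transport from EE compute.
curl -s -o /dev/null \
  -w 'connect=%{time_connect}s total=%{time_total}s\n' \
  -H 'Content-Type: application/json' \
  -d '{"script": "print(\"ping\")"}' \
  http://127.0.0.1:5050/api/run_code
```

If this round-trip is fast but the real run is slow, look at Earth Engine compute or script replacement mistakes rather than the transport.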
## Action Routing Contract
Before executing a non-trivial request, classify it internally as one of: `LIGHT_RUN`, `HEAVY_EXPERIMENT`, or `AMBIGUOUS`.
Execution rules:
- `LIGHT_RUN`: patch the current script and attach the result as a new run under the active experiment.
- `HEAVY_EXPERIMENT`: create a new experiment and attach the new script and outputs there.
- `AMBIGUOUS`: prefer `LIGHT_RUN` unless the current experiment would become semantically misleading.
Execution guardrails for `LIGHT_RUN` (a `request_id` sketch follows this list):
- include a stable `request_id`;
- let `/api/run_code` enforce single-flight submission;
- treat the backend `route_signature` as the dedupe key;
- if the backend returns a duplicate or in-flight response, do not try a second transport path.
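One possible way to derive a stable `request_id`, sketched under the assumption that any deterministic, collision-resistant string is acceptable to the backend (the exact format is not specified by this contract):

```python
import hashlib

def derive_request_id(user_request: str, script_path: str) -> str:
    """Derive a stable request_id from the request text and script content.

    Deterministic inputs mean a retried submission of the same light task
    reuses the same id, letting the backend dedupe it. The "light-" prefix
    and digest length are illustrative choices.
    """
    with open(script_path, "rb") as f:
        script_bytes = f.read()
    digest = hashlib.sha256(user_request.encode("utf-8") + script_bytes)
    return "light-" + digest.hexdigest()[:16]
```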
Routing policy:
- The classification should be produced primarily by an LLM router using compressed current-experiment context and the new user request.
- The LLM router should return structured JSON with the fields:
  - `decision`
  - `reason`
  - `goal_change`
  - `data_source_change`
  - `time_structure_change`
  - `output_type_change`
  - `main_flow_rewrite`
  - `needs_new_title`
- Program logic should only:
  - package context,
  - validate the JSON schema (see the sketch after this list),
  - dispatch the selected execution path,
  - and write workflow records consistently.
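A minimal sketch of the validate-and-dispatch side, assuming the router returns exactly the fields listed above; the function and constant names are illustrative, not part of the workbench API:

```python
# Hypothetical validator for router output; field names follow the
# contract above, everything else is illustrative.
REQUIRED_FIELDS = {
    "decision", "reason", "goal_change", "data_source_change",
    "time_structure_change", "output_type_change",
    "main_flow_rewrite", "needs_new_title",
}
VALID_DECISIONS = {"LIGHT_RUN", "HEAVY_EXPERIMENT", "AMBIGUOUS"}

def validate_route_decision(route: dict) -> dict:
    missing = REQUIRED_FIELDS - route.keys()
    if missing:
        raise ValueError(f"router output missing fields: {sorted(missing)}")
    if route["decision"] not in VALID_DECISIONS:
        raise ValueError(f"unknown decision: {route['decision']!r}")
    # AMBIGUOUS defaults to a light run per the execution rules above.
    if route["decision"] == "AMBIGUOUS":
        route = {**route, "decision": "LIGHT_RUN"}
    return route
```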
Examples that are usually light:
- compute NDVI for the current image;
- adjust cloud threshold;
- switch RGB bands;
- add one reducer statistic;
- export the current result.
Examples that are usually heavy:
- change Landsat to Sentinel-2;
- convert a single-scene workflow into a monthly or 15-day monitoring workflow;
- change from browsing imagery to annual change analysis;
- change from map display to a new batch production pipeline.
## Response Contract
After each user-requested operation, report briefly in Chinese:
- Goal: what was completed.
- Method: the workbench code/API path used.
- Data changes: datasets, layers, assets, charts, exports, or local files added/modified/deleted.
- Code changes: scripts or plugin/platform files added/modified/deleted.
- Verification: what was checked and the observed status.
- Next step: one practical recommendation.
If the turn only discusses design, give the same structure in proposal form: target, planned method, expected data/code effects, validation plan, and next decision.
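An illustrative skeleton for the Chinese report; the field labels below are one possible rendering of the contract's six fields, not a mandated wording:

```
目标：……
方法：……
数据变更：……
代码变更：……
验证：……
下一步：……
```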
## What The Plugin Provides
The workbench lives in `assets/workbench/` and includes:
- `app.py`: Flask + Earth Engine backend.
- `templates/index.html`: Leaflet-based browser workbench.
- `requirements.txt`: Python dependencies.
- `examples/`: ready-to-run Earth Engine scripts.
Core browser API available inside the Python pane:
- `ee`
- `Map`
- `mask_s2_clouds`
- `mask_landsat_c2_l2`
- `PROJECT`
- `RECENT_START_DATE`
- `RECENT_END_DATE`
- `LANDSAT_START_DATE`
- `DEFAULT_RADIUS_METERS`
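A hedged sketch of how these injected globals might be combined; the exact signature of `mask_s2_clouds` (assumed here to map over a single image) is not specified above, and the dataset id is illustrative:

```python
# Sketch using the injected globals; assumes mask_s2_clouds(image) -> image.
recent = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterDate(RECENT_START_DATE, RECENT_END_DATE)
    .map(mask_s2_clouds)
)
print("scenes in window:", recent.size().getInfo())
```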
Map methods:
- `Map.clear_layers()`
- `Map.set_center(lon, lat, zoom=None)`
- `Map.center_object(ee_object, zoom=11)`
- `Map.set_options("SATELLITE")`
- `Map.add_layer(ee_object, vis, name, shown=True, opacity=1.0)`
- `Map.add_time_series(images, labels, vis, name, opacity=1.0)`
Workbench UI features to prefer when relevant:
- Classified local raster display: use discrete numeric classes with per-class colors; `#00000000` can be used for transparent/no-data classes (see the palette sketch after this list).
- Screenshot capture: save map views under `charts/maps/` and analysis outputs under `charts/data_charts/`, then link them into Workflow.
- Inspector: use Data Table, Spectral, and Time Series views for pixel-level QA, preserving no-data timestamps in time series.
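A minimal palette sketch, assuming the `#00000000` transparency convention also applies to `Map.add_layer` vis palettes (the note above only guarantees it for classified local rasters); the dataset, thresholds, and colors are illustrative:

```python
# Illustrative NDVI bucketing into 4 discrete classes; class 0 = no-data.
scene = (
    ee.ImageCollection("LANDSAT/LC09/C02/T1_L2")
    .filterDate("2025-06-01", "2025-07-01")
    .first()
)
ndvi = scene.normalizedDifference(["SR_B5", "SR_B4"])
classes = (
    ee.Image(0)
    .where(ndvi.gt(0.2), 1)
    .where(ndvi.gt(0.4), 2)
    .where(ndvi.gt(0.6), 3)
)
Map.add_layer(
    classes,
    # First color renders class 0 transparent per the workbench convention.
    {"min": 0, "max": 3,
     "palette": ["#00000000", "#fee08b", "#a6d96a", "#1a9850"]},
    "NDVI classes",
)
```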
## Install Into A Workspace

When using this plugin from Codex, copy the bundled workbench into a user workspace:

```bash
cp -R plugins/gee-workbench/assets/workbench ./gee-workbench
cd ./gee-workbench
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

If Earth Engine is not already authenticated, run:

```bash
earthengine authenticate
```
## Start

```bash
cd ./gee-workbench
source .venv/bin/activate
GEE_WORKBENCH_PROJECT=your-ee-project python app.py --host 127.0.0.1 --port 5050
```

Then open http://127.0.0.1:5050/ in a browser.
## Configuration

Environment variables:
- `GEE_WORKBENCH_PROJECT`: Earth Engine / Google Cloud project.
- `GEE_WORKBENCH_HOST`: Flask host.
- `GEE_WORKBENCH_PORT`: Flask port.
- `GEE_WORKBENCH_DATE`: date for the default examples.
- `GEE_WORKBENCH_URL`: helper script target URL.
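For repeated sessions, these can be exported once per shell; the values below are placeholders, and the assumption that only the project variable is strictly required is mine, not the document's:

```bash
# Placeholder values; adjust per workspace.
export GEE_WORKBENCH_PROJECT=your-ee-project
export GEE_WORKBENCH_HOST=127.0.0.1
export GEE_WORKBENCH_PORT=5050
export GEE_WORKBENCH_DATE=2025-06-01
export GEE_WORKBENCH_URL=http://127.0.0.1:5050
python app.py
```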
## Example Tasks

Use scripts in `examples/` as templates:
- `latest_landsat_daxing.py`
- `latest_sentinel2_toa_daxing.py`
- `monthly_landsat_2025_daxing.py`
- `ndvi_15day_sentinel2_2025_daxing.py`
For a user request, prefer writing code into the workbench Python pane or POSTing it to `/api/run_code`. Do not restart the workbench for ordinary map/data operations; restart only when changing platform code.
Action-router backend endpoints:
- `POST /api/action_router/context`: returns compact current-experiment context for LLM routing.
- `POST /api/action_router/validate`: validates a proposed LLM route decision against the expected JSON schema.
- `POST /api/run_code`: accepts `script`, `user_request`, `route_decision`, optional `request_id`, and optional `force_retry`. For `LIGHT_RUN`, the backend enforces single-flight submission and deduplicates repeated submissions.
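A hedged sketch of the routing round-trip; the request bodies are assumptions, since the exact payload schemas are not specified here:

```bash
# Illustrative round-trip; request/response bodies are assumptions.
BASE=http://127.0.0.1:5050

# 1. Fetch compact current-experiment context for the LLM router.
curl -s -X POST "$BASE/api/action_router/context" \
  -H 'Content-Type: application/json' -d '{}' > context.json

# 2. After the LLM produces a route decision, validate it server-side
#    before dispatching execution.
curl -s -X POST "$BASE/api/action_router/validate" \
  -H 'Content-Type: application/json' \
  --data-binary @route_decision.json
```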
When generating workbench code (a full sketch follows this list):
- Start with a clear block of imports and parameters.
- Use `Map.clear_layers()` only when the user wants to replace the current map state.
- Use `Map.add_layer(...)` for static images/vectors and `Map.add_time_series(...)` for temporal collections or composites.
- Print concise run metadata so the Console and Workflow can explain what happened.
- Preserve time axes for masked/no-data observations when inspecting time series.
- For exports, prefer Earth Engine tasks visible in the Tasks pane and include destination metadata.
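A compact sketch that follows these rules end to end. The dataset, window, and location are illustrative; `mask_landsat_c2_l2` is assumed to map over single images, and `Map.add_time_series` is assumed to accept an image collection for its `images` argument:

```python
# --- Parameters (ee, Map, mask_landsat_c2_l2 are injected by the pane) ---
COLLECTION = "LANDSAT/LC09/C02/T1_L2"
START, END = "2025-06-01", "2025-09-01"
POINT = ee.Geometry.Point([116.33, 39.73])  # illustrative location

scenes = (
    ee.ImageCollection(COLLECTION)
    .filterBounds(POINT)
    .filterDate(START, END)
    .map(mask_landsat_c2_l2)
    .sort("system:time_start")
)
rgb_vis = {"bands": ["SR_B4", "SR_B3", "SR_B2"], "min": 7000, "max": 20000}

# Static layer: latest scene; no clear_layers() since this adds to the map.
latest = scenes.sort("system:time_start", False).first()
Map.center_object(POINT, zoom=11)
Map.add_layer(latest, rgb_vis, "Latest Landsat RGB")

# Temporal layer: one frame per scene, labeled by acquisition date.
dates = scenes.aggregate_array("system:time_start").map(
    lambda t: ee.Date(t).format("YYYY-MM-dd")
)
labels = dates.getInfo()
Map.add_time_series(scenes, labels, rgb_vis, "Landsat summer 2025")

# Concise run metadata for the Console and the Workflow record.
print(f"collection={COLLECTION} window={START}..{END} scenes={len(labels)}")
```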
## Operational Notes
- The workbench is local development software, not a hardened multi-user service.
- Code entered into the browser Python pane should be treated as trusted local code.
- Inspector charts may be expensive for large time series; keep frame counts reasonable for interactive use.