// Kailash enterprise infra: progressive infrastructure, dialect-portable SQL, store factory, task queues, worker registry, idempotency.
| name | enterprise-infrastructure |
|---|---|
| description | Kailash enterprise infra: progressive levels, dialect SQL, scheduler (cron/interval), durable execution + checkpointing, task queues, worker registry, idempotency. |
Comprehensive guide to Kailash's progressive infrastructure model for scaling from single-process SQLite to multi-worker PostgreSQL/MySQL deployments, plus the scheduler and durable-execution primitives that ship in kailash.runtime, kailash.middleware.gateway, and kailash.servers.
Read this first before grepping the source tree. Every primitive listed below ships in the kailash package today.
| Primitive | Module | Purpose |
|---|---|---|
| WorkflowScheduler | kailash.runtime.scheduler | Cron + interval + one-shot scheduling (APScheduler-backed SQLite jobstore) |
| FabricScheduler | dataflow.fabric.scheduler | DataFlow product-refresh cron (asyncio + croniter, supervised tasks) |
| ExecutionTracker | kailash.runtime.execution_tracker | Per-node checkpoint primitive (records completion + cached output) |
| Checkpoint / ExecutionJournal / DurableRequest | kailash.middleware.gateway.durable_request | Per-request event log + checkpoint blob + state machine |
| CheckpointManager + DBCheckpointStore | kailash.middleware.gateway.checkpoint_manager + kailash.infrastructure.checkpoint_store | Tiered checkpoint persistence (memory/disk/cloud + DB-backed durable store) |
| DurableWorkflowServer | kailash.servers.durable_workflow_server | Server-mode wiring of checkpointing + dedup + event store |
| SQLTaskQueue | kailash.infrastructure.task_queue | DB-backed work queue for distributed execution (FOR UPDATE SKIP LOCKED) |
| SQLWorkerRegistry | kailash.infrastructure.worker_registry | Worker-fleet membership + heartbeats + dead-worker reaping |
| IdempotentExecutor | kailash.infrastructure.idempotency | At-most-once execution semantics (claim-execute-store) |
| ConnectionManager | kailash.db.connection | Dialect-portable connection pooling |
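The claim-execute-store pattern listed for IdempotentExecutor can be sketched with a plain SQLite table. This is an illustrative sketch only -- `run_once` and the `idempotency_keys` table are hypothetical names, not the real kailash API -- but it shows the core idea: a unique-key insert acts as the claim, so the work function runs at most once per key.

```python
import sqlite3

# Illustrative claim-execute-store sketch (not the kailash implementation).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idempotency_keys (key TEXT PRIMARY KEY, result TEXT)")

def run_once(key: str, fn) -> str:
    try:
        # Claim: the PRIMARY KEY constraint makes this insert at-most-once.
        conn.execute("INSERT INTO idempotency_keys (key) VALUES (?)", (key,))
    except sqlite3.IntegrityError:
        # Key already claimed: return the stored result instead of re-running.
        row = conn.execute(
            "SELECT result FROM idempotency_keys WHERE key = ?", (key,)
        ).fetchone()
        return row[0]
    result = fn()  # Execute
    conn.execute(  # Store
        "UPDATE idempotency_keys SET result = ? WHERE key = ?", (result, key)
    )
    conn.commit()
    return result

calls = []
def work():
    calls.append(1)
    return "done"

first = run_once("job-42", work)
second = run_once("job-42", work)  # claim fails, cached result returned
```

A production version (as the table above implies) also has to handle a claim whose execution crashed before the store step; this sketch omits that recovery path.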
The enterprise infrastructure layer provides:
- Progressive levels keyed off `KAILASH_DATABASE_URL`
- DB-backed task queue with `FOR UPDATE SKIP LOCKED`
- Durable execution: ExecutionTracker checkpoints + CheckpointManager tiered storage + DurableRequest resumable state machine + DurableWorkflowServer server-mode wiring
- `kailash_meta` table with downgrade protection

```python
# Level 0: Zero config (default)
from kailash.runtime import LocalRuntime
runtime = LocalRuntime()  # SQLite stores, in-process

# Level 1: Set KAILASH_DATABASE_URL=postgresql://user:pass@localhost/kailash
# Auto-detects PG, all stores use shared ConnectionManager

# Level 2: Set KAILASH_QUEUE_URL=redis://localhost:6379/0
# OR KAILASH_QUEUE_URL=postgresql://user:pass@localhost/kailash
```

```python
# Recurring schedules: APScheduler-backed cron + interval (persists across restarts)
from kailash.runtime.scheduler import WorkflowScheduler
scheduler = WorkflowScheduler()  # default jobstore: kailash_schedules.db
scheduler.start()
scheduler.schedule_cron(my_workflow, "0 22 * * *")     # daily 22:00 UTC
scheduler.schedule_interval(my_workflow, seconds=300)  # every 5 minutes
```

```python
# Durable execution: server-mode with checkpointing + recovery
from kailash.servers.durable_workflow_server import DurableWorkflowServer
server = DurableWorkflowServer(enable_durability=True)  # default CheckpointManager
```
The user's workflow code is identical at all levels.
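The level-detection rule described above (queue URL beats database URL beats zero config) can be sketched as a pure function over the environment. The function name `detect_level` is illustrative, not the kailash API; the variable names come from this document's environment-variable table.

```python
import os

# Illustrative sketch of progressive-level detection (not the kailash API).
def detect_level(env: dict) -> int:
    if env.get("KAILASH_QUEUE_URL"):
        return 2  # multi-worker + task queue
    if env.get("KAILASH_DATABASE_URL") or env.get("DATABASE_URL"):
        return 1  # shared DB, still in-process (DATABASE_URL is the fallback)
    return 0      # zero config: SQLite stores, in-process

level0 = detect_level({})
level1 = detect_level({"DATABASE_URL": "postgresql://user:pass@localhost/kailash"})
level2 = detect_level({"KAILASH_QUEUE_URL": "redis://localhost:6379/0"})
current = detect_level(dict(os.environ))  # whatever the running process sees
```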
- Dialect-portable SQL: `?` placeholders, `_validate_identifier()`
- Scheduling: WorkflowScheduler (cron + interval + one-shot), FabricScheduler (DataFlow product refresh), multi-instance hazards
- Durable execution: ExecutionTracker per-node checkpoints, CheckpointManager + DBCheckpointStore, DurableWorkflowServer, resume-from-checkpoint contract

| Level | Config | Runtime | Persistence |
|---|---|---|---|
| 0 | None | In-process, LocalRuntime | SQLite + in-memory |
| 1 | KAILASH_DATABASE_URL | In-process, shared DB | PostgreSQL/MySQL/SQLite |
| 2 | KAILASH_QUEUE_URL | Multi-worker + task queue | Shared DB + Redis or SQL queue |
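The per-node checkpointing that ExecutionTracker provides (record completion plus cached output, skip completed nodes on resume) can be sketched in a few lines. The `NodeCheckpoints` class and `run_node` method are hypothetical names for illustration, not the real kailash interface.

```python
# Illustrative sketch of per-node checkpointing (not the kailash API):
# record each node's completion + output, and on resume reuse the cache.
class NodeCheckpoints:
    def __init__(self):
        self._done: dict = {}

    def run_node(self, node_id: str, fn):
        if node_id in self._done:     # resume path: node already completed
            return self._done[node_id]
        output = fn()
        self._done[node_id] = output  # checkpoint: completion + cached output
        return output

tracker = NodeCheckpoints()
executions = []

def expensive():
    executions.append(1)
    return {"rows": 10}

first = tracker.run_node("load_csv", expensive)
resumed = tracker.run_node("load_csv", expensive)  # skipped, cached output
```

A durable version would persist `_done` to the database between calls, which is the role the DBCheckpointStore row in the table above describes.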
| Variable | Purpose | Default |
|---|---|---|
| KAILASH_DATABASE_URL | Infrastructure stores | None (Level 0) |
| DATABASE_URL | Fallback for KAILASH_DATABASE_URL | None |
| KAILASH_QUEUE_URL | Task queue broker | None (no queue) |
- Use `?` canonical placeholders in all SQL -- ConnectionManager translates automatically
- Call `_validate_identifier()` before interpolating identifiers
- Use `dialect.upsert()` instead of check-then-act (TOCTOU race)
- Use `async with conn.transaction() as tx:` for multi-statement operations
- Avoid `AUTOINCREMENT` in shared DDL (SQLite-specific)

Use this skill when you need to:
- Run a DurableWorkflowServer that wires checkpointing + dedup + event sourcing in one process

For complex infrastructure questions, invoke:
- infrastructure-specialist - Progressive infrastructure, dialect portability, store factory
- testing-specialist - Infrastructure testing with real databases
- security-reviewer - SQL injection prevention, transaction safety
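The first best practice above -- write `?` placeholders everywhere and let ConnectionManager translate per dialect -- amounts to a paramstyle rewrite at execution time. Here is a minimal sketch of that translation; `translate_placeholders` is a hypothetical helper for illustration, not the actual kailash code.

```python
# Illustrative paramstyle translation (not the kailash implementation):
# canonical `?` stays as-is for qmark drivers (sqlite3) and becomes `%s`
# for format-style drivers (psycopg, MySQL), skipping string literals.
def translate_placeholders(sql: str, paramstyle: str) -> str:
    if paramstyle == "qmark":
        return sql
    if paramstyle == "format":
        out, in_string = [], False
        for ch in sql:
            if ch == "'":
                in_string = not in_string
            if ch == "?" and not in_string:
                out.append("%s")  # placeholder outside a string literal
            else:
                out.append(ch)
        return "".join(out)
    raise ValueError(f"unsupported paramstyle: {paramstyle}")

pg_sql = translate_placeholders("SELECT * FROM t WHERE id = ?", "format")
lite_sql = translate_placeholders("SELECT * FROM t WHERE id = ?", "qmark")
```

Doing the rewrite centrally is what lets every store share one SQL text while still running on PostgreSQL, MySQL, and SQLite.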