working-with-runtime-system
| name | working-with-runtime-system |
| description | Guide to understanding and working with the kdn runtime system architecture |
| argument-hint | |
The runtime system provides a pluggable architecture for managing workspaces on different container/VM platforms (Podman, MicroVM, Kubernetes, etc.). This skill provides detailed guidance on understanding and working with the runtime system.
The runtime system enables kdn to support multiple backend platforms through a common interface. Each runtime implementation handles the platform-specific details of creating, starting, stopping, and managing workspace instances.
- `pkg/runtime/runtime.go`: Contract all runtimes must implement
- `pkg/runtime/registry.go`: Manages runtime registration and discovery
- `pkg/runtime/<runtime-name>/`: Platform-specific packages (e.g., fake)
- `pkg/runtimesetup/register.go`: Automatically registers all available runtimes

Commands use `runtimesetup.RegisterAll()` to automatically register all available runtimes:
```go
import "github.com/openkaiden/kdn/pkg/runtimesetup"

// In command preRun
manager, err := instances.NewManager(storageDir)
if err != nil {
	return err
}

// Register all available runtimes
if err := runtimesetup.RegisterAll(manager); err != nil {
	return err
}
```
This automatically registers all runtimes from pkg/runtimesetup/register.go that report as available (e.g., only registers Podman if podman CLI is installed).
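The "only register what's available" behavior can be sketched with a PATH lookup. This is an illustration, not the kdn code: the `available` helper is hypothetical, and `exec.LookPath` is one plausible way a runtime might detect its CLI.

```go
package main

import (
	"fmt"
	"os/exec"
)

// available reports whether a CLI tool a runtime depends on is installed,
// by searching for the binary on PATH.
func available(binary string) bool {
	_, err := exec.LookPath(binary)
	return err == nil
}

func main() {
	// A registration helper might skip runtimes whose tooling is missing.
	for _, bin := range []string{"sh", "definitely-not-installed-xyz"} {
		fmt.Printf("%s available: %v\n", bin, available(bin))
	}
}
```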
Some runtimes may implement additional optional interfaces to provide extended functionality beyond the base Runtime interface. These are checked at runtime using type assertions, allowing runtimes to opt-in to features they support.
The StorageAware interface enables runtimes to persist data in a dedicated storage directory managed by the registry.
```go
type StorageAware interface {
	Initialize(storageDir string) error
}
```
How it works:
When a runtime implements StorageAware, the registry will:
1. Provide a dedicated storage directory at `<registry-storage>/<runtime-type>/`
2. Call `Initialize(storageDir)` with the path

Example implementation:
```go
type myRuntime struct {
	storageDir  string
	storageFile string
	instances   map[string]Instance
}

// Implement StorageAware
func (r *myRuntime) Initialize(storageDir string) error {
	r.storageDir = storageDir
	r.storageFile = filepath.Join(storageDir, "instances.json")
	// Load existing state from disk
	return r.loadFromDisk()
}

func (r *myRuntime) Create(ctx context.Context, params runtime.CreateParams) (runtime.RuntimeInfo, error) {
	// ... create instance logic ...

	// Persist to storage directory
	if err := r.saveToDisk(); err != nil {
		return runtime.RuntimeInfo{}, fmt.Errorf("failed to persist instance: %w", err)
	}
	return info, nil
}
```
runtimesetup.ListRuntimes() returns structured information about all available runtimes (excluding the internal fake runtime). It is used by kdn runtime list.
The function relies on two mandatory methods that all runtimes must implement:
```go
// Description returns a human-readable description of the runtime.
Description() string

// Local reports whether the runtime executes workspaces on the local machine.
Local() bool
```
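A hypothetical runtime might satisfy these two methods as follows; the `describer` interface and the description text are illustrative, not from the codebase:

```go
package main

import "fmt"

// myRuntime is a hypothetical runtime used to illustrate the two methods.
type myRuntime struct{}

func (r *myRuntime) Description() string { return "Runs workspaces in local example containers" }
func (r *myRuntime) Local() bool         { return true }

// describer mirrors the mandatory methods a listing helper depends on.
type describer interface {
	Description() string
	Local() bool
}

func main() {
	var d describer = &myRuntime{}
	fmt.Printf("%s (local: %v)\n", d.Description(), d.Local())
}
```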
The AgentLister interface enables runtimes to report which agents they support. This is used by the info command to discover available agents without requiring direct knowledge of runtime-specific configuration.
```go
type AgentLister interface {
	ListAgents() ([]string, error)
}
```
How it works:
When a runtime implements AgentLister, the runtimesetup.ListAgents() function calls its ListAgents() method to collect the supported agents.
Example implementation (Podman runtime):
```go
func (p *podmanRuntime) ListAgents() ([]string, error) {
	if p.config == nil {
		return []string{}, nil
	}
	return p.config.ListAgents()
}
```
This pattern decouples agent discovery from runtime-specific configuration details, allowing the info command to query agents generically through the runtime interface.
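The generic query presumably relies on the same type-assertion opt-in pattern used throughout this guide. A self-contained sketch, where the runtime types and the `agentsFor` helper are assumptions for illustration:

```go
package main

import "fmt"

// AgentLister is the optional interface from the guide.
type AgentLister interface {
	ListAgents() ([]string, error)
}

// withAgents implements the interface; withoutAgents does not.
type withAgents struct{ agents []string }

func (w *withAgents) ListAgents() ([]string, error) { return w.agents, nil }

type withoutAgents struct{}

// agentsFor queries any runtime generically, returning nil for
// runtimes that don't implement AgentLister.
func agentsFor(rt any) ([]string, error) {
	if lister, ok := rt.(AgentLister); ok {
		return lister.ListAgents()
	}
	return nil, nil
}

func main() {
	a, _ := agentsFor(&withAgents{agents: []string{"agent-a", "agent-b"}})
	b, _ := agentsFor(&withoutAgents{})
	fmt.Println(a, b)
}
```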
The FlagProvider interface enables runtimes to declare CLI flags that appear on the init command. This decouples runtime-specific options from the command layer.
```go
type FlagDef struct {
	Name        string
	Usage       string
	Completions []string
}

type FlagProvider interface {
	Flags() []FlagDef
}
```
How it works:
When a runtime implements FlagProvider:
1. `runtimesetup.ListFlags()` discovers and deduplicates flags from all available `FlagProvider` runtimes
2. The `init` command registers them as cobra flags (with shell completions if `Completions` is non-empty)
3. Flag values are collected into a `map[string]string` and passed through `AddOptions.RuntimeOptions` → `CreateParams.RuntimeOptions`
4. The runtime reads `params.RuntimeOptions` in `Create()`

Example implementation:
```go
func (r *myRuntime) Flags() []runtime.FlagDef {
	return []runtime.FlagDef{{
		Name:        "my-driver",
		Usage:       "Driver to use (podman, vm)",
		Completions: []string{"podman", "vm"},
	}}
}

func (r *myRuntime) Create(ctx context.Context, params runtime.CreateParams) (runtime.RuntimeInfo, error) {
	driver := params.RuntimeOptions["my-driver"]
	// ... use driver value ...
}
```
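Step 1 above mentions deduplication. A sketch of how flag definitions from several providers might be merged by name; the `dedupeFlags` helper is an assumption, not the actual `ListFlags()` implementation:

```go
package main

import "fmt"

type FlagDef struct {
	Name        string
	Usage       string
	Completions []string
}

// dedupeFlags merges flag definitions from several runtimes,
// keeping the first definition seen for each flag name.
func dedupeFlags(providers ...[]FlagDef) []FlagDef {
	seen := map[string]bool{}
	var out []FlagDef
	for _, defs := range providers {
		for _, d := range defs {
			if !seen[d.Name] {
				seen[d.Name] = true
				out = append(out, d)
			}
		}
	}
	return out
}

func main() {
	a := []FlagDef{{Name: "my-driver", Usage: "Driver to use"}}
	b := []FlagDef{{Name: "my-driver"}, {Name: "image", Usage: "Base image"}}
	for _, d := range dedupeFlags(a, b) {
		fmt.Println(d.Name)
	}
}
```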
The Terminal interface enables interactive terminal sessions for connecting to running instances. This is used by the terminal command.
```go
type Terminal interface {
	// Terminal starts an interactive terminal session inside a running instance.
	// The agent parameter is used to load agent-specific configuration for the terminal session.
	// The command is executed with stdin/stdout/stderr connected directly to the user's terminal.
	Terminal(ctx context.Context, agent string, instanceID string, command []string) error
}
```
Example implementation (Podman runtime):
```go
func (p *podmanRuntime) Terminal(ctx context.Context, agent string, instanceID string, command []string) error {
	if agent == "" {
		return fmt.Errorf("%w: agent is required", runtime.ErrInvalidParams)
	}
	if instanceID == "" {
		return fmt.Errorf("%w: instance ID is required", runtime.ErrInvalidParams)
	}
	if len(command) == 0 {
		return fmt.Errorf("%w: command is required", runtime.ErrInvalidParams)
	}

	// Build podman exec -it <container> <command...>
	args := []string{"exec", "-it", instanceID}
	args = append(args, command...)
	return p.executor.RunInteractive(ctx, args...)
}
```
How optional interfaces work:
The Terminal interface follows the same pattern as StorageAware - it's optional, and runtimes that don't support interactive sessions simply don't implement it. The instances manager checks for Terminal support at runtime using type assertion:
```go
if terminalRuntime, ok := runtime.(Terminal); ok {
	return terminalRuntime.Terminal(ctx, agent, instanceID, command)
}
return errors.New("runtime does not support terminal sessions")
```
This pattern allows runtimes to provide additional capabilities without requiring all runtimes to implement every possible feature.
The Experimental interface marks a runtime's support as experimental. Its presence alone is the signal: the method has no return value and callers never invoke it directly.
```go
type Experimental interface {
	IsExperimental()
}
```
When the init command runs, it calls manager.GetRuntime(runtimeType) and checks for this interface. If present, it prints a warning to stderr before creating the workspace:
```
⚠️ <DisplayName> runtime support is experimental
```
The <DisplayName> comes from the runtime's DisplayName() method, not its Type() identifier.
The warning is suppressed in JSON output mode (--output json).
Example implementation:
```go
func (r *myRuntime) IsExperimental() {}
```
Add this method to a runtime when its implementation is not yet stable enough for production use.
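The presence-only check in `init` can be sketched as follows; the `warnIfExperimental` helper and the runtime type names are illustrative, but the warning text mirrors the guide:

```go
package main

import "fmt"

// Experimental is the marker interface; implementing it is the signal.
type Experimental interface {
	IsExperimental()
}

type stableRuntime struct{}

type experimentalRuntime struct{}

func (r *experimentalRuntime) IsExperimental() {}

// warnIfExperimental mimics what init might do after GetRuntime:
// print a warning if the runtime carries the marker interface.
func warnIfExperimental(rt any, displayName string) bool {
	if _, ok := rt.(Experimental); ok {
		fmt.Printf("⚠️ %s runtime support is experimental\n", displayName)
		return true
	}
	return false
}

func main() {
	warnIfExperimental(&stableRuntime{}, "Stable")
	warnIfExperimental(&experimentalRuntime{}, "MicroVM")
}
```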
All runtimes must return valid WorkspaceState values in RuntimeInfo.State. The instances manager enforces validation at the boundary using a fail-fast approach.
The following four states are the only valid values (defined in github.com/openkaiden/kdn-api/cli/go):
- `running` - The instance is actively running
- `stopped` - The instance is created but not running
- `error` - The instance encountered an error
- `unknown` - The instance state cannot be determined

The instances manager validates all RuntimeInfo values returned from runtimes at three boundaries:
- `Add()` - validates state after `runtime.Create()`
- `Start()` - validates state after `runtime.Start()`
- `Stop()` - validates state after `runtime.Info()`

If a runtime returns an invalid state, the manager immediately returns an error:
```go
// In pkg/instances/manager.go
runtimeInfo, err := rt.Create(ctx, params)
if err != nil {
	return nil, fmt.Errorf("failed to create runtime instance: %w", err)
}

// Validate state at boundary
if err := runtime.ValidateState(runtimeInfo.State); err != nil {
	return nil, fmt.Errorf("runtime %q returned invalid state: %w", runtimeType, err)
}
```
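`runtime.ValidateState()` itself is not shown in this guide. A minimal, self-contained sketch consistent with the four valid states; the real implementation may differ in names and error wording:

```go
package main

import "fmt"

// The four valid workspace states from the guide.
const (
	StateRunning = "running"
	StateStopped = "stopped"
	StateError   = "error"
	StateUnknown = "unknown"
)

// validateState rejects anything outside the four-value set, fail-fast.
func validateState(s string) error {
	switch s {
	case StateRunning, StateStopped, StateError, StateUnknown:
		return nil
	}
	return fmt.Errorf("invalid runtime state: %q (must be one of: running, stopped, error, unknown)", s)
}

func main() {
	fmt.Println(validateState("running")) // nil for a valid state
	fmt.Println(validateState("created")) // error for a platform-specific state
}
```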
Benefits of boundary validation:

- Invalid states fail fast, at the moment they enter the system
- Errors identify exactly which runtime returned the bad state
- Runtime implementations don't need to duplicate validation logic
Runtime implementations must map platform-specific states to the four valid states. You do NOT need to call runtime.ValidateState() yourself - the manager does this automatically.
Example: Podman runtime state mapping
```go
// In pkg/runtime/podman/info.go
func mapPodmanState(podmanState string) api.WorkspaceState {
	switch podmanState {
	case "running":
		return api.WorkspaceStateRunning
	case "created", "exited", "stopped", "paused", "removing":
		return api.WorkspaceStateStopped
	case "dead":
		return api.WorkspaceStateError
	default:
		return api.WorkspaceStateUnknown
	}
}

func (p *podmanRuntime) Info(ctx context.Context, id string) (runtime.RuntimeInfo, error) {
	// Get podman-specific state
	podmanState := getPodmanContainerState(id)

	// Map to valid WorkspaceState (no validation needed - manager handles it)
	state := mapPodmanState(podmanState)

	return runtime.RuntimeInfo{
		ID:    id,
		State: state,
		Info:  info,
	}, nil
}
```
If a runtime returns an invalid state, the error message clearly identifies the problem:
```
runtime "my-runtime" returned invalid state: invalid runtime state: "created"
(must be one of: running, stopped, error, unknown)
```
This tells you:
- Which runtime returned the invalid state (`"my-runtime"`)
- Which invalid value it returned (`"created"`)
- The set of valid states
- That you don't need to call `runtime.ValidateState()` yourself - the manager handles this

Reference implementation: See `pkg/runtime/podman/info.go` for a complete state mapping example

Boundary validation tests: See `pkg/instances/manager_test.go` for tests that verify the manager rejects invalid states
When implementing a new runtime, the container-side path used for $SOURCES must not be a direct child of /.
- `/sources` → not ok: `$SOURCES/..` resolves to `/`, which means the containment check in `pkg/config/config.go` would accept any path as a sibling mount, including escaping paths like `/etc`
- `/workspace/sources` → ok: `$SOURCES/..` resolves to `/workspace`, a safe shared root for sibling repos
- `/mnt/sources` → ok
- `/mnt/sub/sources` → ok

Users can mount sibling source directories using `$SOURCES/../sibling`. The containment check validates that `$SOURCES`-based targets stay within the parent of `$SOURCES`. If `$SOURCES` is mounted at a root-level directory, that parent is `/` and the check provides no protection.
There is no specific depth requirement for $HOME.
Podman runtime paths (reference):
```go
var containerWorkspaceSources = path.Join("/workspace", "sources") // parent is /workspace ✓

var containerHome = path.Join("/home", constants.ContainerUser) // no depth constraint
```
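The containment rule above can be sketched as a path check: resolve the `$SOURCES`-relative target and require it to stay under the parent of `$SOURCES`. This is an illustration of the rule, not the code in `pkg/config/config.go`:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// withinSourcesParent reports whether a $SOURCES-relative target stays
// inside the parent directory of the sources mount point.
func withinSourcesParent(sources, relTarget string) bool {
	parent := path.Dir(sources)             // e.g. /workspace for /workspace/sources
	resolved := path.Join(sources, relTarget) // path.Join cleans ".." segments
	prefix := parent
	if prefix != "/" {
		prefix += "/"
	}
	return resolved == parent || strings.HasPrefix(resolved, prefix)
}

func main() {
	// With a safely nested mount, escapes past the shared root are rejected.
	fmt.Println(withinSourcesParent("/workspace/sources", "../sibling")) // true
	fmt.Println(withinSourcesParent("/workspace/sources", "../../etc")) // false
	// With a root-level mount, the parent is "/" and everything passes.
	fmt.Println(withinSourcesParent("/sources", "../etc")) // true: no protection
}
```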
Use the /add-runtime skill which provides step-by-step instructions for creating a new runtime implementation. The fake runtime in pkg/runtime/fake/ serves as a reference implementation.
Related skills:

- `/add-runtime` - Step-by-step guide to create a new runtime implementation
- `/working-with-steplogger` - Add progress feedback to runtime operations
- `/working-with-podman-runtime-config` - Configure the Podman runtime

Key files:

- `pkg/runtime/runtime.go`
- `pkg/runtime/registry.go`
- `pkg/runtimesetup/register.go`
- `pkg/runtime/fake/`
- `pkg/runtime/podman/`