generate-addon
// Use when generating or modifying a KubeBlocks addon Helm chart, adding an engine/version/topology, or selecting KubeBlocks API references before writing addon YAML.
| name | generate-addon |
| description | Use when generating or modifying a KubeBlocks addon Helm chart, adding an engine/version/topology, or selecting KubeBlocks API references before writing addon YAML. |
Reference resolution: when this source-derived skill mentions docs/..., resolve it from the shared support package beside the installed user skills: ~/.codex/skills/kubeblocks-addon-source-docs/docs/... for Codex or ~/.claude/skills/kubeblocks-addon-source-docs/docs/... for Claude Code. In the shared kubeblocks-addon-docs checkout, the same files live under skills/kubeblocks-addon-source-docs/docs/.... When it mentions scripts/..., resolve it from the same support package under scripts/.... If you are working inside a checkout of the original apecloud/kubeblocks-addon-skills, repo-relative paths are also valid.
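The resolution order above can be sketched as a small helper. This is only a sketch of the lookup described in the paragraph; the `resolve_doc` name and the final repo-relative fallback are assumptions, not part of the skill itself.

```shell
# Hypothetical helper following the resolution order described above.
resolve_doc() {
  local rel="$1" base
  for base in "$HOME/.codex/skills/kubeblocks-addon-source-docs" \
              "$HOME/.claude/skills/kubeblocks-addon-source-docs" \
              "skills/kubeblocks-addon-source-docs"; do
    if [ -f "$base/$rel" ]; then
      echo "$base/$rel"
      return 0
    fi
  done
  # Inside a checkout of apecloud/kubeblocks-addon-skills,
  # repo-relative paths are valid as-is.
  echo "$rel"
}
```

The same helper covers `scripts/...` references by passing the relative path unchanged.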
Execute the full KubeBlocks addon development workflow for the following goal:
Goal: $ARGUMENTS
Parse the goal to identify:
SCRIPT_DIR="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
[ -f "$SCRIPT_DIR/.env" ] && source "$SCRIPT_DIR/.env"
[ -n "$KUBECONFIG" ] && export KUBECONFIG
# Registry decision: probe docker.io; fall back to ALIYUN_IMAGE_REGISTRY if unreachable
if curl -s --connect-timeout 15 "https://registry-1.docker.io/v2/" -o /dev/null 2>/dev/null; then
IMAGE_REGISTRY="docker.io"
SKOPEO_CREDS="--no-creds"
elif [ -n "$ALIYUN_IMAGE_REGISTRY" ]; then
IMAGE_REGISTRY="${ALIYUN_IMAGE_REGISTRY}"
SKOPEO_CREDS="--creds ${ALIYUN_DOCKER_USERNAME}:${ALIYUN_DOCKER_PASSWORD}"
else
IMAGE_REGISTRY="docker.io"
SKOPEO_CREDS="--no-creds"
fi
echo "Image registry: ${IMAGE_REGISTRY}"
# Node arch/OS — used by skopeo to probe the correct manifest variant
NODE_ARCH=$(kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.architecture}' 2>/dev/null || echo "amd64")
NODE_OS=$(kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.operatingSystem}' 2>/dev/null || echo "linux")
echo "Node arch: ${NODE_ARCH} os: ${NODE_OS}"
# Detect installed KubeBlocks version
KB_VERSION=$(kubectl get deployment -n kb-system kubeblocks \
-o jsonpath='{.metadata.labels.app\.kubernetes\.io/version}' 2>/dev/null \
| grep -oE '^[0-9]+\.[0-9]+' || echo "unknown")
echo "KubeBlocks version: $KB_VERSION"
Select the API reference based on detected version:
- 1.0.x → use docs/kb-api-reference-1.0.md
- 1.1.x → use docs/kb-api-reference-1.1.md
- 1.2.x or unknown (latest) → use docs/kb-api-reference.md

Read the selected API reference file before generating any YAML.
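The mapping can be applied directly to the `$KB_VERSION` detected in Phase 0. A minimal sketch; the `API_REF` variable name is an assumption:

```shell
# Pick the API reference file matching the detected KubeBlocks version.
KB_VERSION="${KB_VERSION:-unknown}"
case "$KB_VERSION" in
  1.0) API_REF="docs/kb-api-reference-1.0.md" ;;
  1.1) API_REF="docs/kb-api-reference-1.1.md" ;;
  *)   API_REF="docs/kb-api-reference.md" ;;  # 1.2.x or unknown → latest
esac
echo "API reference: $API_REF"
```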
Routing decision:
# What files already exist?
find addons/<engine>/ -type f 2>/dev/null | sort
# Check values.yaml for existing version array structure
cat addons/<engine>/values.yaml 2>/dev/null
# Check if kblib exists (needed for Chart.yaml dependency)
ls addons/kblib/ 2>/dev/null && echo "kblib present" || echo "kblib absent"
Read all existing files in addons/<engine>/. Also read a reference addon (e.g., addons/redis/) to understand the conventions used in this particular repo.
Follow rules in docs/coding-rules.md and the version-specific API reference selected in Phase 0:
For a new addon:
- Chart.yaml with kblib dependency and KB annotations
- values.yaml with versions array
- templates/_helpers.tpl with naming, regex, and annotation helpers
- templates/clusterdefinition.yaml
- templates/cmpd-&lt;component&gt;.yaml (one per component type)
- templates/cmpv-&lt;component&gt;.yaml (one per component type)
- config/&lt;engine&gt;-config.yaml (ConfigMap for config file template)
- scripts/&lt;engine&gt;-start.sh (startup script)

For adding a new major version to an existing addon:
- Add the new version entry to the &lt;engine&gt;Versions array in values.yaml
- The {{- range }} loop in cmpd-*.yaml and cmpv-*.yaml auto-generates the new resources

Critical rules:
- Annotations kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1 and apps.kubeblocks.io/skip-immutable-check: "true"
- name: {{ printf "%s-%s" .componentDef $.Chart.Version }}
- compDef in ClusterDefinition: use a regex helper (e.g. ^redis-\d+), not a hard-coded string
- configs[].template (not templateRef) for ConfigMap references
- roles[].isExclusive: true on leader/primary roles
- No ClusterVersion resource anywhere

# Update dependencies first if Chart.yaml has kblib
helm dependency update addons/<engine>
# Then validate rendering
helm template test-addon addons/<engine>
If this fails: read the error, fix the specific issue, re-run. Retry up to 3 times. If still failing after 3 attempts, report to user.
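The fix-and-retry loop can be sketched as a generic helper. `retry3` is a hypothetical name, and in practice the agent inspects and fixes the chart between attempts rather than blindly re-running:

```shell
# Run a command up to 3 times; report if it still fails.
retry3() {
  local attempt
  for attempt in 1 2 3; do
    "$@" && return 0
    echo "Attempt ${attempt}/3 failed: $*" >&2
  done
  echo "Still failing after 3 attempts; report to user" >&2
  return 1
}

# retry3 helm template test-addon addons/<engine>
```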
Evaluate the generated code (rules from docs/coding-rules.md review checklist):
- Do the serviceVersion fields and appVersion in Chart.yaml match the requested version?
- Are all _helpers.tpl definitions still present?
- Is the configs[].template field (not templateRef) used?
- Does each configs[].volumeName have a matching entry in runtime.volumes AND runtime.containers[*].volumeMounts?
- Is roles[].isExclusive: true set for leader/primary roles?

Do NOT flag: securityContext, privileged mode, resource requests/limits — intentional for DB internals.
If issues found: fix them in Phase 1, then re-run review.
kubectl cluster-info
# If this fails, the cluster is unreachable — report to user
helm dependency update addons/<engine>
helm template test-addon addons/<engine> --debug \
--set "image.registry=${IMAGE_REGISTRY}" \
| kubectl apply -f -
ENGINE=<engine>
TIMEOUT=90
INTERVAL=5
for ((i=0; i<TIMEOUT; i+=INTERVAL)); do
PHASE=$(kubectl get clusterdefinition $ENGINE -o jsonpath='{.status.phase}' 2>/dev/null)
echo " [${i}s] ClusterDefinition: ${PHASE:-pending}"
[[ "$PHASE" == "Available" ]] && break
sleep $INTERVAL
done
Also check each ComponentDefinition and ComponentVersion.
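The same polling pattern applies to those resources. A sketch, parameterized over any probe command that prints a phase; `wait_for_available` and the `WAIT_*` variables are hypothetical names:

```shell
# Poll a probe command until it prints "Available" or the timeout elapses.
wait_for_available() {
  local timeout="${WAIT_TIMEOUT:-90}" interval="${WAIT_INTERVAL:-5}" elapsed=0 phase
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$("$@" 2>/dev/null)
    echo " [${elapsed}s] phase: ${phase:-pending}"
    [ "$phase" = "Available" ] && return 0
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1
}

# e.g.:
# wait_for_available kubectl get componentdefinition <name> -o jsonpath='{.status.phase}'
# wait_for_available kubectl get componentversion <name> -o jsonpath='{.status.phase}'
```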
kubectl describe componentdefinition <name>
kubectl describe clusterdefinition <name>
KB_POD=$(kubectl get pods -n kb-system -l app.kubernetes.io/name=kubeblocks \
-o jsonpath='{.items[0].metadata.name}')
kubectl logs -n kb-system "$KB_POD" --tail=100
Common errors and fixes:
- field is immutable → missing apps.kubeblocks.io/skip-immutable-check: "true" annotation
- unknown field "templateRef" → should be template: in configs[]
- configmap "xxx" not found → the ConfigMap named in configs[].template wasn't applied

ls addons-cluster/<engine>/Chart.yaml || echo "addons-cluster not found — skipping instance tests"
# Topologies
helm template test-addon addons/<engine> | python3 -c "
import sys, yaml
for doc in yaml.safe_load_all(sys.stdin):
if doc and doc.get('kind') == 'ClusterDefinition':
for t in doc.get('spec', {}).get('topologies', []):
print(t['name'], '(default)' if t.get('default') else '')
"
# Available service versions
kubectl get componentversion <engine> -o jsonpath='{.spec.releases[*].serviceVersion}' \
| tr ' ' '\n' | sort -V
for VERSION in <versions-to-test>; do
if skopeo inspect "docker://${IMAGE_REGISTRY}/apecloud/${ENGINE}:${VERSION}" \
--override-arch "${NODE_ARCH}" --override-os "${NODE_OS}" \
$SKOPEO_CREDS 2>/dev/null 1>/dev/null; then
echo "$VERSION: EXISTS"
else
echo "$VERSION: MISSING — will skip"
fi
done
Skip tests for missing images. Do NOT modify version tags in YAML.
For each (topology, version) combination where the image exists:
Create workspace/tests/<engine>-<topology>-test.yaml with:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
name: kb-test-<engine> # keep name short and unique
namespace: default
spec:
terminationPolicy: Delete # required
clusterDef: <engine>
topology: <topology>
componentSpecs:
- name: <component-name> # must match topology's component name exactly
serviceVersion: "<version>"
replicas: 1
resources:
limits: { cpu: "0.5", memory: "512Mi" }
requests: { cpu: "0.1", memory: "256Mi" }
volumeClaimTemplates:
- name: data
spec:
accessModes: [ReadWriteOnce]
storageClassName: ""
resources:
requests:
storage: 1Gi
For multi-component topologies (e.g., replication with sentinel), include a componentSpecs entry for each component in the topology with the correct name field.
kubectl apply -f workspace/tests/<engine>-<topology>-test.yaml
Poll for Running phase up to 180 seconds. Check pod status every 15 seconds.
KB v1 has NO cluster-level "Failed" phase. Watch for pod-level failures:
- CrashLoopBackOff, ImagePullBackOff, ErrImagePull, CreateContainerConfigError → abort immediately
- FailedScheduling, FailedMount after 45s → infrastructure issue, abort

On failure: run Phase 5 (Diagnose) before making any code changes.
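The abort decision can be sketched as a classifier over the container waiting reason. `is_fatal_reason` is a hypothetical helper; feed it reasons extracted from the pods' status:

```shell
# Return 0 (fatal, abort immediately) for unrecoverable waiting reasons.
is_fatal_reason() {
  case "$1" in
    CrashLoopBackOff|ImagePullBackOff|ErrImagePull|CreateContainerConfigError)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

# Example extraction (one reason per line):
# kubectl get pods -l app.kubernetes.io/instance="$CLUSTER" \
#   -o jsonpath='{range .items[*].status.containerStatuses[*]}{.state.waiting.reason}{"\n"}{end}'
```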
When cluster instances are failing:
CLUSTER=<cluster-name>
kubectl get cluster "$CLUSTER" -o yaml
kubectl get component -l app.kubernetes.io/instance="$CLUSTER" -o wide
kubectl get pods -l app.kubernetes.io/instance="$CLUSTER" -o wide
for POD in $(kubectl get pods -l app.kubernetes.io/instance="$CLUSTER" \
-o jsonpath='{.items[*].metadata.name}'); do
echo "=== Pod: $POD ==="
for C in $(kubectl get pod "$POD" -o jsonpath='{.spec.initContainers[*].name}' 2>/dev/null); do
echo "--- Init: $C ---"
kubectl logs "$POD" -c "$C" --tail=80 2>&1
done
for C in $(kubectl get pod "$POD" -o jsonpath='{.spec.containers[*].name}'); do
echo "--- Container: $C ---"
kubectl logs "$POD" -c "$C" --tail=80 2>&1
done
kubectl get events --field-selector involvedObject.name="$POD" --sort-by='.lastTimestamp'
done
KB_POD=$(kubectl get pods -n kb-system -l app.kubernetes.io/name=kubeblocks \
-o jsonpath='{.items[0].metadata.name}')
kubectl logs -n kb-system "$KB_POD" --tail=60
Identify root cause. Then:
Summarize: