engine-hints-elasticsearch
| Field | Value |
|---|---|
| name | engine-hints-elasticsearch |
| description | Use when developing, reviewing, or testing Elasticsearch KubeBlocks addon behavior and needing Elasticsearch-specific topology, probe, storage, config, or operational constraints. |
Reference resolution: when this source-derived skill mentions docs/..., resolve it from the shared support package beside the installed user skills: ~/.codex/skills/kubeblocks-addon-source-docs/docs/... for Codex or ~/.claude/skills/kubeblocks-addon-source-docs/docs/... for Claude Code. In the shared kubeblocks-addon-docs checkout, the same files live under skills/kubeblocks-addon-source-docs/docs/.... When it mentions scripts/..., resolve it from the same support package under scripts/.... If you are working inside a checkout of the original apecloud/kubeblocks-addon-skills, repo-relative paths are also valid.
Engine name in KubeBlocks: elasticsearch
All Elasticsearch components require at least 2Gi memory. JVM auto-sets heap to 50% of the container memory limit; at 1Gi limit the heap is 512Mi, which is insufficient even for v7/v8 — OOMKill (exit 137) observed in multi-container pods. Override the generic memory: 512Mi default for all components:
| Component | Version | Topology | Minimum memory limit | Reason |
|---|---|---|---|---|
| master | any | any | 2Gi | JVM heap = 50% of limit; 512Mi heap (1Gi limit) OOMKills during cluster formation |
| data | any | any | 2Gi | JVM heap = 50% of limit; 512Mi heap (1Gi limit) OOMKills under indexing/search load |
| mdit | any | any | 2Gi | Combined master+data+ingest roles amplify heap pressure; OOMKill at 1Gi |
| coordinator | any | any | 2Gi | JVM heap = 50% of limit; 512Mi insufficient under query routing load |
| kibana | v8+ | any | 1Gi | Node.js JS heap exhausts 512Mi before startup probe fires → CrashLoopBackOff |
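A minimal Cluster spec fragment with these limits is sketched below; the apiVersion, the cluster name (es-demo), and the selection of components are assumptions to adapt to your KubeBlocks release, not a canonical example from this addon.
# Sketch: per-component memory overrides in the Cluster spec (fragment only).
# apps.kubeblocks.io/v1 is assumed; older releases use apps.kubeblocks.io/v1alpha1
# with the same componentSpecs/resources layout.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: es-demo                  # hypothetical cluster name
spec:
  componentSpecs:
    - name: mdit
      replicas: 3
      resources:
        requests:
          memory: 2Gi
        limits:
          memory: 2Gi            # JVM heap auto-sizes to ~1Gi (50% of this limit)
    - name: kibana
      replicas: 1
      resources:
        limits:
          memory: 1Gi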
single-node
Uses discovery.type: single-node — a single ES node acting as master + data.
| Operation | Support | Note |
|---|---|---|
| HorizontalScaling Out/In | N/A | Cannot add nodes to a single-node cluster |
| HscaleOfflineInstances | N/A | Same reason |
| HscaleOnlineInstances | N/A | Same reason |
| SwitchOver | N/A | No secondary exists |
| Failover (all 11 cases) | K8s restart only | No HA election; recovery is Kubernetes restartPolicy, not ES election |
multi-node / mdit / multi-node-index (3 nodes and above)
Full HScale, SwitchOver, and Failover support.
When scaling in data nodes, the KubeBlocks controller will log:
wait to delete "Pod/<name>" in Component: data
This is correct behavior — the controller is waiting for the member-leave lifecycle action to complete ES shard relocation before safely deleting the pod. Even on a fresh cluster with no user data, this process involves shard rebalancing across the remaining nodes and can take 3–10 minutes.
Do not interpret this as a hang. Use a scale-in timeout of 600s for data-node components.
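A hedged sketch of such a scale-in with the 600s budget follows. The OpsRequest field names assume the operations.kubeblocks.io/v1alpha1 API (KubeBlocks 0.9+), and es-demo is a hypothetical cluster name; older releases use apps.kubeblocks.io/v1alpha1 with clusterRef and an absolute replicas count instead.
# Sketch: remove one data replica and let the member-leave action drain shards.
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: es-data-scale-in
spec:
  clusterName: es-demo           # hypothetical cluster name
  type: HorizontalScaling
  horizontalScaling:
    - componentName: data
      scaleIn:
        replicaChanges: 1
After applying it, wait with the recommended budget, e.g. kubectl wait opsrequest es-data-scale-in --for=jsonpath='{.status.phase}'=Succeed --timeout=600s.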
Elasticsearch prohibits in-place downgrades at the data layer. If on-disk index data is from a newer ES version, the node will refuse to start with a version mismatch error.
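Before planning any version change, confirm the running node version (version.number in the root endpoint response). A small sketch, reusing the credentials from the connectivity section below:
# filter_path is a standard Elasticsearch response-filtering parameter
kubectl exec <pod-name> -- curl -s -u "$ES_USER:$ES_PASS" \
  "http://localhost:9200/?filter_path=version.number"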
| Method | Support | Note |
|---|---|---|
es-dump | PASSED | Preferred backup method for all ES versions |
full-backup | PASSED | Supported for all ES versions |
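To run one of these methods on demand, a Backup object like the sketch below can be applied. The backupPolicyName is an assumption (find the real one with kubectl get backuppolicy), and the fields assume the dataprotection.kubeblocks.io/v1alpha1 API.
# Sketch: on-demand backup using the es-dump method.
apiVersion: dataprotection.kubeblocks.io/v1alpha1
kind: Backup
metadata:
  name: es-demo-dump-1
spec:
  backupPolicyName: es-demo-elasticsearch-backup-policy   # assumed name; verify with kubectl get backuppolicy
  backupMethod: es-dump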
Elasticsearch components do not have roles defined in their ComponentDefinition. Use Approach B (direct Service) for all components.
CLUSTER_NAME=<cluster-name>   # name of your KubeBlocks cluster
COMPONENT=<component>         # e.g. master, data, mdit
SVC_NAME="${CLUSTER_NAME}-${COMPONENT}-internet"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ${SVC_NAME}
  namespace: default
  labels:
    app.kubernetes.io/instance: ${CLUSTER_NAME}
    apps.kubeblocks.io/component-name: ${COMPONENT}
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/instance: ${CLUSTER_NAME}
    apps.kubeblocks.io/component-name: ${COMPONENT}
  ports:
    - name: http
      port: 9200
      targetPort: http
      protocol: TCP
EOF
# Wait for external IP
for ((i=0; i<120; i+=5)); do
IP=$(kubectl get svc ${SVC_NAME} -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
[[ -n "$IP" ]] && echo "✓ LB ready: $IP" && break
sleep 5
done
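# Optional check (sketch): query cluster health through the new LoadBalancer.
# Assumes ES_USER/ES_PASS are set as shown in the credentials section below,
# and that the addon serves plain HTTP on 9200 (as in this doc's other examples).
curl -s -u "$ES_USER:$ES_PASS" "http://$IP:9200/_cluster/health"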
# Disable: delete the service
kubectl delete svc ${SVC_NAME}
Do not use the OpsRequest Expose approach for ES: it will hang, because ES components have no roles and injecting a roleSelector causes the controller to wait forever.
N/A — ParametersDef is not implemented for Elasticsearch.
kubectl get parametersdef --no-headers 2>/dev/null | grep elasticsearch
# Expected: no output
Mark all Reconfiguring rows as N/A (ParametersDef not yet implemented).
# Get credentials
kubectl get secret -l app.kubernetes.io/instance=$CLUSTER_NAME -o name
ES_USER=$(kubectl get secret <cluster-name>-<component>-account-root -o jsonpath='{.data.username}' | base64 -d)
ES_PASS=$(kubectl get secret <cluster-name>-<component>-account-root -o jsonpath='{.data.password}' | base64 -d)
# Test connectivity
kubectl exec -it <pod-name> -- curl -u "$ES_USER:$ES_PASS" http://localhost:9200/_cluster/health
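If a quick pass/fail signal is needed, a filtered health check is a small extension of the command above (filter_path is a standard Elasticsearch response-filtering parameter):
# Expect "status":"green"; a single-node cluster whose indices have replica shards reports "yellow"
kubectl exec <pod-name> -- curl -s -u "$ES_USER:$ES_PASS" \
  "http://localhost:9200/_cluster/health?filter_path=status"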