# digital-human-generator
NexTide digital-human generation. Creates an image-source or video-source digital-human voiceover task from a person image/video, audio, and a spoken script. Suited to requests like "generate a digital human video", "make this person image do a voiceover", or "image digital human / video digital human".
| name | digital-human-generator |
| description | NexTide digital-human generation. Creates an image-source or video-source digital-human voiceover task from a person image/video, audio, and a spoken script. Suited to requests like "generate a digital human video", "make this person image do a voiceover", or "image digital human / video digital human". |
| allowed-tools | Read, Write, Bash |
Follow shared NexTide rules in: `nextide-shared`.

Use this skill when the user wants to create a digital human video from a person image or video, plus audio and/or a spoken script.
Capability id: `digital-human.video.generate`
CLI contract:

```shell
npm run nextide -- capability run digital-human.video.generate \
  --input .nextide/input/digital-human-video.json \
  --output .nextide/output/digital-human-video-result.json \
  --mode submit \
  --user-api-key <NexTide user-credits API key>
```
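The submit command expects the input file to already exist. A minimal sketch of a helper that writes the IMAGE lip-sync payload used by this skill; the helper name and argument order are illustrative, not part of the CLI:

```shell
# Hypothetical helper: write a minimal IMAGE lip-sync payload to the input
# path used by the submit command above. Field names match this skill's examples.
write_lipsync_input() {
  image_url="$1"
  audio_url="$2"
  out="${3:-.nextide/input/digital-human-video.json}"
  mkdir -p "$(dirname "$out")"
  printf '{"sourceType":"IMAGE","imageUrl":"%s","audioUrl":"%s","type":"LIP_SYNC"}\n' \
    "$image_url" "$audio_url" > "$out"
}
```

A real flow would validate that both URLs are reachable before spending credits on a submit.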
Image source lip-sync:

```json
{
  "sourceType": "IMAGE",
  "imageUrl": "https://.../person.png",
  "audioUrl": "https://.../voice.mp3",
  "type": "LIP_SYNC"
}
```
Voice clone / script-aware task:

```json
{
  "sourceType": "IMAGE",
  "personImage": "https://.../person.png",
  "audioUrl": "https://.../voice.mp3",
  "script": "Hello everyone, today I'll talk about...",
  "type": "VOICE_CLONE",
  "durationSeconds": 30
}
```
Aliases:

- personImage / sourceImage → imageUrl
- personVideo / sourceVideo → videoUrl
- voiceUrl → audioUrl
- script / text → scriptContent

Digital human generation can take up to 60 minutes.
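The alias mapping above can be applied client-side before submit. A rough sketch using text-level `sed` over the raw JSON (an assumption for illustration: each key appears at most once and nothing else in the payload matches these quoted names; a real implementation would parse the JSON instead):

```shell
# Rewrite alias keys to the canonical field names listed above.
# Text-level sed is a sketch only, not a JSON parser.
apply_aliases() {
  sed -e 's/"personImage"[[:space:]]*:/"imageUrl":/' \
      -e 's/"sourceImage"[[:space:]]*:/"imageUrl":/' \
      -e 's/"personVideo"[[:space:]]*:/"videoUrl":/' \
      -e 's/"sourceVideo"[[:space:]]*:/"videoUrl":/' \
      -e 's/"voiceUrl"[[:space:]]*:/"audioUrl":/' \
      -e 's/"script"[[:space:]]*:/"scriptContent":/' \
      -e 's/"text"[[:space:]]*:/"scriptContent":/'
}
```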
The capability usually returns `waiting_callback`. The result contains a `data.id`, which is the NexTide digitalHumanVideo record id.
Progress can be inspected in the NexTide UI or through the underlying API:

```
GET /api/digital-human/videos/<id>
```
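Polling can be scripted on top of that endpoint. The sketch below only shows status extraction from the polled JSON, since host, auth headers, and `jq` availability are not specified by this skill:

```shell
# Pull the first "status" value out of a JSON response without jq.
# Good enough for the flat shapes shown in this skill; not a JSON parser.
extract_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([A-Za-z_]*\)".*/\1/p' | head -n 1
}
```

Feeding it the polled response (e.g. `curl -s <api>/api/digital-human/videos/<id> | extract_status`) yields values like `GENERATING` until the task finishes.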
Expected result shape:

```json
{
  "run": {
    "status": "waiting_callback",
    "result": {
      "data": {
        "id": "...",
        "status": "GENERATING",
        "resultUrl": null
      }
    }
  }
}
```
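Given a saved result file of that shape, the `resultUrl` (once present) can be pulled out with a small helper; as above, this is a text-level sketch rather than real JSON parsing:

```shell
# Read resultUrl (if any) from a saved result file; prints nothing while
# the field is still null.
extract_result_url() {
  sed -n 's/.*"resultUrl"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1" | head -n 1
}
```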
If resultUrl is present, export multimodal artifacts and return the local video/preview paths:

```shell
RUN_ID=$(node -e "const r=require('./.nextide/output/digital-human-video-result.json'); console.log(r.run && r.run.runId)")
npm run nextide -- run artifacts "$RUN_ID" \
  --output-dir ".nextide/output/$RUN_ID" \
  --download \
  --gallery
```
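Note that the node one-liner prints the literal string `undefined` when the result file has no `run.runId`, so it is worth guarding before the export; a small sketch (the helper name is illustrative):

```shell
# Skip the artifacts export when RUN_ID is empty or the literal "undefined"
# (what the node one-liner prints when runId is missing from the result file).
should_export() {
  [ -n "$1" ] && [ "$1" != "undefined" ] && [ "$1" != "null" ]
}

# Usage sketch:
#   if should_export "$RUN_ID"; then
#     npm run nextide -- run artifacts "$RUN_ID" --output-dir ".nextide/output/$RUN_ID" --download --gallery
#   fi
```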
Return:

- the preview.html path, if generated
- whether resultUrl exists or status is completed
- audioUrl

Capability metadata:

| id | digital-human.video.generate |
| version | 0.2.0 |
| output | video |
| status | available |
| type | internal_api |
| long-running | true |
| cost | high |
| credential | nexTideApiKey |
| webhook env | N8N_DIGITAL_HUMAN_WEBHOOK or DIGITAL_HUMAN_WEBHOOK_URL |
| billing | paid |
| rate limit | 5/minute, 20/hour |
| tags | digital-human, video, long-running |

Description:
Generate a talking-head digital-human video from a person image, a script, and a voice. Generation may take up to 60 minutes.
Examples:

Image digital-human voiceover:

```json
{
  "personImage": "https://example.com/person.png",
  "audioUrl": "https://example.com/audio.mp3",
  "script": "Voiceover script"
}
```
Input fields:

- personImage (string, required): person image URL.
- script (string, required): voiceover script.
- voiceId (string): voice ID or preset.
- duration (number): target duration in seconds.

Output fields:

- videoUrl (string): generated video URL.
- taskId (string): internal task ID.

CLI:
```shell
nextide capability run digital-human.video.generate \
  --input .nextide/input/digital-human.video.generate.json \
  --output .nextide/output/digital-human.video.generate-result.json \
  --mode submit \
  --wait \
  --timeout 3600 \
  --interval 5
```
```shell
RUN_ID=$(node -e "const r=require('./.nextide/output/digital-human.video.generate-result.json'); console.log(r.run && r.run.runId)")
nextide run follow "$RUN_ID" \
  --output-dir ".nextide/output/$RUN_ID" \
  --timeout 3600 \
  --interval 5
```
Artifact-first reading order:

1. `.nextide/output/$RUN_ID/summary.json`
2. `.nextide/output/$RUN_ID/manifest.json`
3. `preview.html` / `gallery.html` with rich preview, when supported.
4. `datatable.json` for data/table results.

Operating rules:

- If a required credential or webhook is not available, fail fast and explain what is missing.
- Use `--wait` when the user wants a finished result in the same turn.
- Otherwise run `nextide run artifacts <run-id> --output-dir .nextide/output/<run-id> --download --gallery --datatable` and read summary.json, then manifest.json.
- For long-running tasks, use `nextide run follow <run-id> --output-dir .nextide/output/<run-id> --timeout 1800 --interval 5`.
- Prefer `summary.recommendedResponse.message`, preview.html, datatable.json, and local artifact paths over pasting huge raw JSON.
- When a run fails with an explanation, convert it into a clear user-facing failure message with next actions.
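The reading order above can be encoded as a small fallback helper (the function name is illustrative):

```shell
# Return the first artifact that exists, in the priority order listed above.
first_artifact() {
  dir="$1"
  for f in summary.json manifest.json preview.html gallery.html datatable.json; do
    if [ -f "$dir/$f" ]; then
      printf '%s\n' "$dir/$f"
      return 0
    fi
  done
  return 1
}
```

Calling `first_artifact ".nextide/output/$RUN_ID"` gives the single best file to read first, and a non-zero exit code when the export produced nothing.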