# guardrail-monitor
// The system's value alignment and truth verification layer, ensuring output safety and epistemic integrity.
| field | value |
| --- | --- |
| name | guardrail-monitor |
| description | The system's value alignment and truth verification layer, ensuring output safety and epistemic integrity. |
The Guardrail Monitor is a tripartite engine designed to prevent AI drift and ensure that outputs align with both user values and empirical ground truth. It consists of three specialized modules: Pathos, Pheme, and Kratos.
Pathos monitors the "emotional and ethical salience" of the interaction. It identifies what the user cares about and flags when a proposed output might conflict with those priorities.
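The Pathos check might be sketched as follows. This is a minimal illustration of the idea, not the skill's actual implementation: the function name, the weighted-value representation, and the keyword-overlap scoring are all assumptions.

```python
# Hypothetical Pathos-style saliency check: surface which tracked user
# values a topic touches, ranked by how much the user weights them.

def check_value_saliency(topic: str, user_values: dict[str, float]) -> list[tuple[str, float]]:
    """Return (value, weight) pairs whose value keyword appears in the topic,
    sorted by descending weight."""
    topic_words = set(topic.lower().split())
    hits = [(value, weight) for value, weight in user_values.items()
            if value.lower() in topic_words]
    return sorted(hits, key=lambda hit: -hit[1])

values = {"privacy": 0.9, "cost": 0.4}
print(check_value_saliency("reduce cloud cost by logging user privacy data", values))
# → [('privacy', 0.9), ('cost', 0.4)]
```

A real implementation would use semantic matching rather than keyword overlap, but the output shape, value names paired with salience scores, is what downstream modules would consume.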
Pheme provides a mechanism for verifying claims against trusted external sources. Each checked claim receives one of three verdicts: VERIFIED (enough sources agree with the claim), CONTRADICTED (a high-reliability source contradicts it), or UNVERIFIABLE (insufficient evidence either way).

Kratos resolves contradictions between conflicting sources using a hierarchical authority model.
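The Pheme verdicts and the Kratos arbitration described above can be sketched as below. The thresholds, the `(supports, reliability)` report shape, and the authority tiers are assumptions chosen for illustration; the skill's real rules are not specified here.

```python
VERIFIED, CONTRADICTED, UNVERIFIABLE = "VERIFIED", "CONTRADICTED", "UNVERIFIABLE"

def pheme_verdict(source_reports, min_agree=2, high_reliability=0.8):
    """Classify a claim from source reports.

    source_reports: list of (supports: bool, reliability: float in [0, 1]).
    A single high-reliability contradiction dominates; otherwise enough
    supporting sources yield VERIFIED; anything else is UNVERIFIABLE.
    """
    supporting = [rel for sup, rel in source_reports if sup]
    contradicting = [rel for sup, rel in source_reports if not sup]
    if any(rel >= high_reliability for rel in contradicting):
        return CONTRADICTED
    if len(supporting) >= min_agree:
        return VERIFIED
    return UNVERIFIABLE

# Hypothetical authority hierarchy for Kratos-style arbitration.
AUTHORITY = {"peer_reviewed": 3, "official_docs": 2, "news": 1, "forum": 0}

def kratos_arbitrate(claim_a, source_a, claim_b, source_b):
    """Prefer the claim backed by the higher-authority source type."""
    if AUTHORITY.get(source_a, -1) >= AUTHORITY.get(source_b, -1):
        return claim_a
    return claim_b

print(pheme_verdict([(True, 0.7), (True, 0.6)]))            # → VERIFIED
print(pheme_verdict([(True, 0.9), (False, 0.95)]))          # → CONTRADICTED
print(kratos_arbitrate("A", "peer_reviewed", "B", "forum")) # → A
```

The key design point is ordering: a high-reliability contradiction is checked before counting agreement, so one authoritative dissent outweighs many weak confirmations.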
The skill exposes three tools:

- `check_value_saliency`: Analyzes a topic or decision context against tracked user values. Parameters: `topic` (required), `decision_context` (optional), `user_values` (optional).
- `verify_ground_truth`: Verifies a claim using a list of sources. Parameters: `claim` (required), `sources` (optional), `require_min_sources` (optional).
- `arbitrate_conflict`: Resolves a conflict between two competing claims. Parameters: `claimA`, `claimB`, `sourceA`, `sourceB` (required), `domain` (optional).
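As a usage sketch, the three tools above might be invoked with JSON payloads like the following. The `tool`/`args` envelope is an assumption for illustration; only the tool and parameter names come from the listing above.

```python
import json

# Hypothetical invocation payloads; argument values are examples only.
calls = [
    {"tool": "check_value_saliency",
     "args": {"topic": "data retention policy",
              "user_values": ["privacy", "transparency"]}},
    {"tool": "verify_ground_truth",
     "args": {"claim": "Water boils at 100 C at sea level",
              "require_min_sources": 2}},
    {"tool": "arbitrate_conflict",
     "args": {"claimA": "X", "claimB": "Y",
              "sourceA": "peer_reviewed", "sourceB": "forum",
              "domain": "science"}},
]

print(json.dumps(calls[0], indent=2))
```

Note that `arbitrate_conflict` is the only tool whose source parameters are required, which matches its role: arbitration is meaningless without knowing where each claim came from.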