Behavioral framework for AI trust
AI systems fail users at predictable moments. Emote specifies how they must behave.
Current platforms let you shape tone and output. They don't let you specify behavioral obligations — what a system must do when consent is needed, when ambiguity is detected, when something goes wrong. Emote closes that gap.
Without Emote vs. With Emote
The same moment. Two different behaviors.
Select a trust moment to see what changes when behavioral obligations are specified.
Without Emote · P04 · No spec
“Delete my workspace. I'm done with it.”
Your workspace has been deleted.
No summary. No confirmation step. No recovery path offered.
With Emote · P04 Consent Confirmation · Specified
“Delete my workspace. I'm done with it.”
Before I delete this permanently, here's what will be removed:
· 14 projects · 3 team members · all billing history
This cannot be undone. Type DELETE to confirm, or choose Archive to preserve your data.
behavior.verify_consent_before_irreversible
Without Emote
Action executes immediately
No summary of what will be lost
No confirmation step
No recovery path offered
With Emote · P04
Stakes surfaced before action
Explicit confirmation required
Reversible alternative offered
Token: behavior.verify_consent_before_irreversible
Without Emote · P02 · No spec
“Send the report to the team.”
Report sent to all 47 team members.
System picks an interpretation and acts. No disambiguation attempted.
With Emote · P02 Ambiguity Detection · Specified
“Send the report to the team.”
“The team” could mean different things here. Which one?
· Billing team (3 people)
· Engineering team (12 people)
· All company members (47 people)
behavior.pause_when_uncertain
Without Emote
Interprets ambiguity silently
Acts on assumed intent
No disambiguation step
Wrong audience = real damage
With Emote · P02
Detects ambiguity before acting
Surfaces the specific conflict
Presents named options, not yes/no
Token: behavior.pause_when_uncertain
Without Emote · P05 · No spec
We're sorry for the inconvenience. The verification check encountered an error. Please try again later.
Passive voice. Blame diffused. No acknowledgment, no next steps.
With Emote · P05 Repair & Apology · Specified
The NPI registry lookup failed — that's on our end, not yours. Your application is paused, not cancelled.
What you can do now:
behavior.acknowledge_own_error
1. Continue with remaining documents while we resolve this
2. Get notified when the registry is back online
3. Speak with a credentialing specialist
Without Emote
Passive, non-specific language
Blame shifted or diffused
No next steps provided
User left without a path forward
With Emote · P05
Error named specifically
Accountability stated plainly
State preserved, not lost
Token: behavior.acknowledge_own_error
Without Emote · P01 · No spec
Welcome to Veridian.
Ready to start? Click Begin.
No orientation. No time estimate. No disclosure of what the system will access.
With Emote · P01 Expectation Setting · Specified
Before we begin —
Verification runs 3 checks across external registries. Here's what to expect:
behavior.set_expectations_early
1. License lookup — ~2 min · State medical board
2. Sanctions check — ~3 min · OIG exclusions database
3. DEA verification — ~2 min · Federal registry
Veridian queries each source. You approve any flags before this file advances.
Without Emote
No orientation before action begins
Scope and duration unknown
External access not disclosed
Only exit is Cancel — destroys state
With Emote · P01
3 named steps with time estimates
External registries named explicitly
Agency boundary stated clearly
Token: behavior.set_expectations_early
The framework
Six trust moment patterns
P01
Expectation Setting
Triggered before any significant action begins. Sets scope, time, and agency before the user commits.
behavior.set_expectations_early
behavior.state_time_and_steps
P02
Ambiguity Detection
Triggered when intent is unclear and acting on the wrong interpretation has real cost.
behavior.pause_when_uncertain
behavior.name_the_conflict
P03
Interpretive Support
Triggered when output requires interpretation. Guides without steering; explains without deciding.
behavior.explain_without_deciding
behavior.clarify_before_action
P04
Consent Confirmation
Triggered before irreversible actions. Makes stakes explicit and requires deliberate confirmation.
behavior.verify_consent_before_irreversible
behavior.offer_reversible_alternative
P05
Repair & Apology
Triggered after system error or failure. Acknowledges responsibility and provides a path forward.
behavior.acknowledge_own_error
behavior.preserve_user_state
P06
State Reorientation
Triggered when context has shifted. Re-establishes where the user is and what's still valid.
behavior.reorient_after_interruption
behavior.confirm_current_state
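The pattern structure above, a trigger condition plus the tokens it obligates, can be sketched as plain data. A minimal illustration in Python: the pattern IDs, names, triggers, and tokens are taken from this page, but the data structure and lookup function are hypothetical, not Emote's actual file format or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustPattern:
    """A trust-moment pattern: when it triggers and what it obligates."""
    pattern_id: str
    name: str
    trigger: str
    obligations: tuple[str, ...]  # behavior.* tokens

# The six patterns as described above; the representation is illustrative.
PATTERNS = {
    p.pattern_id: p
    for p in (
        TrustPattern("P01", "Expectation Setting",
                     "before any significant action begins",
                     ("behavior.set_expectations_early",
                      "behavior.state_time_and_steps")),
        TrustPattern("P02", "Ambiguity Detection",
                     "when intent is unclear and the wrong interpretation has real cost",
                     ("behavior.pause_when_uncertain",
                      "behavior.name_the_conflict")),
        TrustPattern("P03", "Interpretive Support",
                     "when output requires interpretation",
                     ("behavior.explain_without_deciding",
                      "behavior.clarify_before_action")),
        TrustPattern("P04", "Consent Confirmation",
                     "before irreversible actions",
                     ("behavior.verify_consent_before_irreversible",
                      "behavior.offer_reversible_alternative")),
        TrustPattern("P05", "Repair & Apology",
                     "after system error or failure",
                     ("behavior.acknowledge_own_error",
                      "behavior.preserve_user_state")),
        TrustPattern("P06", "State Reorientation",
                     "when context has shifted",
                     ("behavior.reorient_after_interruption",
                      "behavior.confirm_current_state")),
    )
}

def obligations_for(pattern_id: str) -> tuple[str, ...]:
    """Return the behavioral tokens a given pattern obligates."""
    return PATTERNS[pattern_id].obligations
```

A runtime could consult `obligations_for("P04")` before executing a destructive command and refuse to proceed until each obligation is discharged.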
Behavioral vocabulary
19 tokens. One shared language.
behavior.set_expectations_early — Orient before action begins
behavior.state_time_and_steps — Name duration and sequence
behavior.pause_when_uncertain — Stop before interpreting risk
behavior.name_the_conflict — Surface the specific ambiguity
behavior.clarify_before_action — Ask before doing
behavior.explain_without_deciding — Guide without steering
behavior.verify_consent_before_irreversible — Confirm intent before acting
behavior.name_risk_transparently — State stakes plainly
behavior.offer_reversible_alternative — Provide a safer alternative
behavior.acknowledge_own_error — Own the failure directly
behavior.preserve_user_state — Don't lose their work
behavior.restore_trust_after_failure — Rebuild, not just apologize
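Because every token follows one naming convention, a `behavior.` namespace plus a snake_case verb phrase, tooling can validate a spec mechanically. A hedged sketch, assuming only that convention; the regular expression and helper below are illustrative, not part of Emote.

```python
import re

# Tokens listed in the vocabulary above (the page shows a subset).
VOCABULARY = {
    "behavior.set_expectations_early",
    "behavior.state_time_and_steps",
    "behavior.pause_when_uncertain",
    "behavior.name_the_conflict",
    "behavior.clarify_before_action",
    "behavior.explain_without_deciding",
    "behavior.acknowledge_own_error",
    "behavior.preserve_user_state",
}

# "behavior." namespace followed by lowercase snake_case words.
TOKEN_RE = re.compile(r"behavior\.[a-z]+(?:_[a-z]+)*")

def is_valid_token(token: str) -> bool:
    """True if the token is well-formed and in the shared vocabulary."""
    return bool(TOKEN_RE.fullmatch(token)) and token in VOCABULARY
```

A linter built this way would reject both malformed names and tokens outside the shared vocabulary, which is what makes the vocabulary "one shared language" rather than free-form labels.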
Foundation
Built on published research
Emote's patterns are grounded in trauma-informed design, clinical psychology, therapeutic communication, consent ethics, and crisis communication — operationalized into a specification layer for AI systems.
The gap analysis paper documents what Anthropic and OpenAI documentation currently specifies — and what it leaves unspecified — at trust-sensitive moments.
See the Veridian worked example →
Gap Analysis
Platform Controls vs. Behavioral Obligations: What Anthropic and OpenAI Specify
emote.dev/research/gap-analysis · 2025
Framework
Emote Doctrine: Behavioral Specification for AI Trust Moments
arXiv preprint · cs.HC
Worked Example
Behavioral Specification in Clinical Credentialing: The Veridian Case
emote.dev/examples · March 2026