Last Updated: 2026-03-23

AI Governance

Hone Studio is an AI-first platform. This page explains exactly what our AI does, what data it sees, what safeguards we have, and what it will never do.

Core Principles

Every AI Output Is a Draft

No AI output in Hone Studio takes effect without human review. Every generated document, extracted fact, research summary, and assistant response is presented to a human for review, editing, approval, or rejection.

| Feature | What AI Does | What Humans Do |
| --- | --- | --- |
| Document generation | Produces draft sections from prompts and knowledge base | Review, edit, approve, or reject every section |
| Knowledge extraction | Extracts facts, decisions, and corrections from documents | Review facts, adjust confidence, delete incorrect ones |
| AI assistant | Answers questions with cited sources | Verify citations, assess quality, provide feedback |
| Document review | Analyzes documents and suggests improvements | Accept or reject each suggestion |
| Research | Synthesizes web and knowledge base sources | Evaluate sources, verify claims, choose what to include |

Your Data Is Never Used for Training

No client data is used to train, fine-tune, or improve AI models. This is not just our policy — it is contractually guaranteed by every AI provider we use.

| Provider | Model / Service | Guarantee |
| --- | --- | --- |
| Anthropic | Claude (current generation) | Not used for training under API terms; zero-retention agreement requested |
| Google | Gemini Embedding API | Zero data retention for paid API tier; not used for training |
| Cohere | Rerank API | Zero-retention agreement requested, pending confirmation; not used for training |
| Perplexity AI | Web Search API | Zero data retention; not used for training |
| Firecrawl | Web Extraction API | Receives public URLs only (no client content transmitted); not used for training |

Zero-retention and no-training guarantees reflect each provider's API terms as of 2026-03-05.

Citation Support

Important: AI-generated citations are assistive, not authoritative. All citations must be manually verified before use in official documentation.

The platform includes citation features designed to help users locate potential sources for AI-generated content. Source markers link claims to specific document chunks or facts. A sources panel shows which knowledge base entries were retrieved as context. Confidence signals indicate how well the retrieved context matches the query. These features support verification but do not guarantee accuracy or completeness.

What Data Goes to AI Providers

A typical assistant query flows through four stages:

  1. Your query. The user's question or prompt enters the pipeline.
  2. Embedding API (Google Gemini). Receives: query text only. Returns: numeric vector. Zero retention.
  3. Vector search (your database). Receives: query vector. Returns: relevant documents. No external call.
  4. Reranking (Cohere). Receives: documents + query. Returns: relevance scores. Zero retention.
  5. LLM (Anthropic Claude). Receives: context + instructions + message. Returns: generated text with citations. ZDR requested.
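The retrieval flow above can be sketched in code. This is an illustrative toy, not Hone Studio's implementation: every function name is hypothetical, and the embedding, search, and generation logic are stand-ins that only show what each provider receives at each stage.

```python
# Toy sketch of the retrieval pipeline. Each stage sends only the
# minimum data the corresponding provider needs.

def embed(text):
    # Stand-in for the embedding API: receives query text only,
    # returns a numeric vector (here, toy character counts).
    return [text.count(c) for c in "abcde"]

def vector_search(vector, corpus, top_k=3):
    # Stand-in for the database vector search: receives the query
    # vector, returns candidate documents. No external call.
    def score(doc):
        doc_vec = [doc.count(c) for c in "abcde"]
        return sum(a * b for a, b in zip(vector, doc_vec))
    return sorted(corpus, key=score, reverse=True)[:top_k]

def rerank(query, docs):
    # Stand-in for the reranking API: receives documents + query,
    # returns documents ordered by relevance.
    overlap = lambda d: len(set(query.split()) & set(d.split()))
    return sorted(docs, key=overlap, reverse=True)

def generate(context, instructions, message):
    # Stand-in for the LLM call: receives retrieved context,
    # server-set instructions, and the user message.
    return f"Answer to {message!r} grounded in {len(context)} sources."

def answer(query, corpus):
    vector = embed(query)                       # 1. embedding
    candidates = vector_search(vector, corpus)  # 2. vector search
    context = rerank(query, candidates)[:2]     # 3. reranking
    return generate(context, "cite sources", query)  # 4. LLM

print(answer("academic calendar dates",
             ["calendar dates doc", "billing doc", "grading doc"]))
```

The key property the sketch illustrates: user identity, file names, and workspace metadata never enter any stage; only the query text, vectors, and retrieved excerpts move between steps.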

What Is Sent vs. Not Sent

| Sent to AI Providers | Not Sent to AI Providers |
| --- | --- |
| Document text included in prompts | Your email address or identity |
| Conversation messages | File names, folder names, document metadata |
| Retrieved knowledge base excerpts | Workspace names or organizational structure |
| System instructions (client-configured) | API keys, tokens, or credentials |
| Search queries | Other clients' data, ever |
| AI-generated web search queries (to Perplexity) | Raw document content (to Perplexity or Firecrawl) |
| User-specified public URLs (to Firecrawl) | |

Data Minimization Controls

  • Parameter whitelisting: A strict allowlist limits which API parameters can flow to the LLM
  • Server-side prompt control: Model selection, token limits, and system prompts are set by the server
  • Tool output sanitization: When the AI uses tools, outputs are sanitized and size-limited before inclusion in subsequent prompts
  • No metadata leakage: User emails, file names, workspace names, and organizational metadata are stripped from all prompts
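The first and last controls above can be sketched together as a server-side allowlist filter. This is a hypothetical illustration: the parameter names and limits are invented, but the pattern is the one described, with only allowlisted fields passing through and the model, token limit, and system prompt fixed by the server.

```python
# Hypothetical sketch of parameter allowlisting: only known-safe
# fields from a client request may reach the LLM call; identities,
# metadata, and credentials are dropped server-side.

ALLOWED_PARAMS = {"message", "conversation_id"}  # strict allowlist

def build_llm_request(client_request: dict) -> dict:
    # Drop any parameter not explicitly allowlisted.
    safe = {k: v for k, v in client_request.items() if k in ALLOWED_PARAMS}
    # Model selection, token limits, and system prompt are set by
    # the server, never by the client.
    safe["model"] = "server-selected"
    safe["max_tokens"] = 4096
    return safe

req = build_llm_request({
    "message": "Summarize the enrollment policy",
    "user_email": "staff@example.edu",        # stripped
    "api_key": "example-not-a-real-key",      # stripped
})
```

An allowlist fails closed: a new field added to the client request is excluded by default until it is deliberately reviewed and added, which is the safer direction for data minimization.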

AI Systems Inventory

| Area | What AI Does | Model |
| --- | --- | --- |
| Document Generation | Produces draft sections grounded in knowledge base | Anthropic Claude |
| Research | Orchestrates multi-source research, normalizes evidence | Anthropic Claude |
| AI Assistant | Conversational AI with knowledge retrieval and citations | Anthropic Claude |
| Knowledge Extraction | Extracts structured knowledge from documents | Anthropic Claude |
| Semantic Search | Converts text to vector representations | Google Gemini |
| Search Relevance | Improves search result ordering | Cohere |
| Web Research | Searches the web for relevant context during research, assistant, and generation | Perplexity AI |
| Web Extraction | Extracts content from user-specified public URLs | Firecrawl |

Risk Classification

All AI systems are classified as advisory/generative.

What AI Does

  • AI generates content for human review — it does not make decisions
  • AI does not autonomously act on data
  • AI does not interact with external systems without explicit user initiation

What AI Does NOT Do

  • Make decisions about student enrollment, grades, admissions, or financial aid
  • Determine eligibility for any program or benefit
  • Autonomously modify institutional records
  • Send communications on behalf of the institution
  • Access or process biometric data
  • Perform surveillance or behavioral profiling
  • Make hiring, disciplinary, or personnel decisions
  • Take any action without explicit user initiation

Quality Controls

| Control | How It Works |
| --- | --- |
| Automated quality review | AI outputs are scored for quality; low-quality outputs are flagged |
| Citation verification | Citations are verified against actual source material |
| Confidence scoring | Extracted facts are assigned confidence scores; low-confidence facts are flagged |
| Token budgets | Configurable hourly and daily limits catch runaway operations |
| Iteration limits | AI operations have bounded iteration and tool-call limits |
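The token-budget control can be sketched as a windowed counter checked before each AI call. The class, limits, and window sizes here are illustrative assumptions, not the platform's actual enforcement code.

```python
# Hypothetical sketch of token budget enforcement: hourly and daily
# counters are checked before each AI call, so a runaway operation
# is cut off once it would exceed either limit.

from collections import defaultdict
import time

class TokenBudget:
    def __init__(self, hourly_limit, daily_limit):
        self.hourly_limit = hourly_limit
        self.daily_limit = daily_limit
        self.usage = defaultdict(int)  # (window kind, window index) -> tokens

    def try_spend(self, tokens, now=None):
        now = time.time() if now is None else now
        hour, day = int(now // 3600), int(now // 86400)
        if self.usage[("h", hour)] + tokens > self.hourly_limit:
            return False  # would exceed the hourly budget
        if self.usage[("d", day)] + tokens > self.daily_limit:
            return False  # would exceed the daily budget
        self.usage[("h", hour)] += tokens
        self.usage[("d", day)] += tokens
        return True

budget = TokenBudget(hourly_limit=100_000, daily_limit=500_000)
assert budget.try_spend(60_000, now=0) is True
assert budget.try_spend(60_000, now=10) is False  # hourly limit would be exceeded
```

Checking the budget before spending (rather than after) means an operation is stopped at the boundary instead of one oversized call past it.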

Bias Mitigation

| Approach | Detail |
| --- | --- |
| Model-level alignment | Anthropic's Constitutional AI training reduces harmful outputs |
| Citation transparency | Claims are traceable to sources, so users can check for bias |
| Multi-domain classification | Knowledge extraction spans multiple domains to prevent skew |
| Deterministic operations | Analysis uses settings that reduce output variability |
| Human review | All outputs are reviewed by institutional experts |
| Feedback mechanism | Users can flag quality or bias concerns |

Operational Resilience

Kill Switches

We can immediately disable any AI capability without code deployment via feature toggles. Kill switches exist for knowledge extraction, quality scoring, search features, and token budget enforcement. If an AI capability behaves unexpectedly, we can disable it within minutes while preserving all other platform functionality.
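A kill switch of this kind is typically a runtime feature-flag check in front of each AI capability. The sketch below is a simplified assumption about the pattern; the flag names are taken from the capabilities listed above, but the store and API are invented for illustration.

```python
# Hypothetical sketch of a kill switch: each AI capability checks a
# feature-flag store before running, so any capability can be
# disabled at runtime without a code deployment.

FLAGS = {
    "knowledge_extraction": True,
    "quality_scoring": True,
    "search": True,
    "token_budget_enforcement": True,
}

def is_enabled(capability):
    # Unknown capabilities default to disabled (fail closed).
    return FLAGS.get(capability, False)

def run_extraction(document):
    if not is_enabled("knowledge_extraction"):
        # Skip cleanly; the rest of the platform is unaffected.
        return "extraction disabled"
    return f"extracted facts from {len(document)} chars"

FLAGS["knowledge_extraction"] = False  # flip the kill switch at runtime
assert run_extraction("Some document text") == "extraction disabled"
```

Because the flag is read on every invocation, flipping it takes effect on the next call, which is what makes a response time of minutes possible without a deploy.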

Graceful Degradation

All AI provider calls are protected by circuit breakers. When an AI provider is unavailable or degraded, the platform continues operating — users can browse documents, access their knowledge base, read conversations, and manage workspaces. AI-powered features resume automatically when the provider recovers.
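The circuit-breaker pattern can be shown in a few lines. This is a generic minimal sketch, not the platform's code: after repeated failures the breaker opens and subsequent calls return a fallback immediately instead of waiting on a degraded provider.

```python
# Minimal circuit breaker around a provider call: after `threshold`
# consecutive failures the breaker opens and calls fail fast with a
# fallback, so the rest of the platform keeps working.

class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, fallback):
        if self.failures >= self.threshold:
            return fallback          # breaker open: fail fast
        try:
            result = fn()
            self.failures = 0        # success resets the breaker
            return result
        except Exception:
            self.failures += 1       # count the failure
            return fallback

breaker = CircuitBreaker(threshold=2)

def flaky_provider():
    raise ConnectionError("provider unavailable")

for _ in range(3):
    answer = breaker.call(flaky_provider, fallback="AI features temporarily unavailable")
```

A production breaker also adds a cooldown ("half-open") state that periodically retries the provider, which is how AI features resume automatically once the provider recovers.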

Self-Healing Operations

Background AI operations (knowledge extraction, quality scoring) are monitored continuously. If an operation stalls or fails, it is automatically detected and recovered without human intervention. No user action is required — the system self-heals.
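One common way to implement this is a watchdog that scans background operations and re-queues any whose heartbeat has gone silent. The sketch below assumes that pattern; the field names and deadline are illustrative, not the platform's actual schema.

```python
# Hypothetical watchdog for stalled background operations: any
# "running" operation whose last heartbeat is older than the
# deadline is re-queued for automatic retry.

import time

def recover_stalled(operations, max_silence=300, now=None):
    """Return the ids of operations re-queued by the watchdog."""
    now = time.time() if now is None else now
    recovered = []
    for op in operations:
        if op["status"] == "running" and now - op["last_heartbeat"] > max_silence:
            op["status"] = "queued"   # re-queue; no human intervention needed
            recovered.append(op["id"])
    return recovered

ops = [
    {"id": 1, "status": "running", "last_heartbeat": 0},    # stalled
    {"id": 2, "status": "running", "last_heartbeat": 900},  # healthy
    {"id": 3, "status": "done",    "last_heartbeat": 100},  # finished
]
assert recover_stalled(ops, max_silence=300, now=1000) == [1]
```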

FERPA and AI

  • AI processes institutional knowledge content under the institution's authority
  • If education records are in the knowledge base, they are processed under the DPA as a "school official" function
  • AI-generated content containing FERPA-derived information inherits the FERPA classification of its source material
  • No AI-generated content is disclosed to parties outside the institution's authorized users
  • AI providers do not use education records for training. Zero-retention agreements are confirmed with Google and Perplexity, and requested from Anthropic and Cohere

Model Change Management

  1. Assess impact: Which features are affected? What changes?
  2. Test: Verify output quality, citation accuracy, and extraction precision against benchmarks
  3. Notify: All clients receive at least 14 days' advance notice before planned model changes. For urgent changes, minimum 24 hours with a rollback plan
  4. Document rollback: Every model change has a documented reversion plan
  5. Update governance: This page and our internal AI Governance Policy are updated to reflect changes

Usage Tracking

Every AI API call is logged for audit and monitoring.

  • Who: User identity (for audit trail)
  • What: Provider, model, feature (generation, assistant, research, etc.)
  • How much: Input/output tokens, calculated cost
  • When: UTC timestamp, request latency
  • Where: Workspace and entity context
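The fields above map naturally onto a per-call record. This dataclass is an illustrative assumption about the shape of such a log entry; the field names are invented, not the platform's actual schema.

```python
# Hypothetical per-call usage record mirroring the who/what/how
# much/when/where fields described above.

from dataclasses import dataclass, asdict

@dataclass
class AIUsageRecord:
    user_id: str        # who: identity for the audit trail
    provider: str       # what: e.g. "anthropic"
    model: str
    feature: str        # generation, assistant, research, ...
    input_tokens: int   # how much
    output_tokens: int
    cost_usd: float
    timestamp_utc: str  # when
    latency_ms: int
    workspace_id: str   # where: workspace and entity context

record = AIUsageRecord(
    user_id="u_123", provider="anthropic", model="claude",
    feature="assistant", input_tokens=1200, output_tokens=350,
    cost_usd=0.0087, timestamp_utc="2026-03-23T14:05:00Z",
    latency_ms=2140, workspace_id="ws_42",
)
```

Note what is absent by design: no prompt or response content, only metadata about the call, which keeps the audit log itself low-risk.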