
Security & Trust

SciAI is designed for research integrity. From content firewalls to artifact provenance, every layer is built for auditability and trust.

Defense in Depth

Multiple security layers protect your research data and ensure process integrity.

Content Firewall

All external content (PDFs, web pages) is sanitized before LLM processing. Known prompt injection patterns are detected and redacted.

  • PDF text extraction with sanitization
  • Web fetcher with domain allowlist
  • Injection pattern detection
  • Malicious content blocking
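
As a rough sketch of how such a firewall might work in Python (the pattern list, sanitize, and domain_allowed names below are illustrative assumptions, not SciAI's actual interface):

    import re
    from urllib.parse import urlparse

    # Illustrative patterns only; a production firewall would maintain a broader,
    # regularly updated catalogue of known prompt-injection phrasings.
    INJECTION_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
        re.compile(r"disregard (the )?system prompt", re.IGNORECASE),
    ]

    # Hypothetical domain allowlist for the web fetcher.
    ALLOWED_DOMAINS = {"arxiv.org", "pubmed.ncbi.nlm.nih.gov"}

    def sanitize(text: str) -> str:
        """Redact suspected injection attempts before text reaches an LLM."""
        for pattern in INJECTION_PATTERNS:
            text = pattern.sub("[REDACTED: suspected prompt injection]", text)
        return text

    def domain_allowed(url: str) -> bool:
        """Only fetch web content from explicitly allowlisted domains."""
        return urlparse(url).hostname in ALLOWED_DOMAINS

The key design point is that redaction and allowlisting happen before any external text is placed into a model prompt, not after.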

Data Classification

Data is tagged with a sensitivity level that determines where it can be processed. More sensitive data is routed only to providers cleared for that level.

  • PUBLIC: open data, any model
  • INTERNAL: standard cloud providers
  • CONFIDENTIAL: trusted providers only
  • RESTRICTED: local models only
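
A minimal sketch of classification-based routing, assuming a hypothetical provider clearance map (none of these provider names or helpers come from SciAI itself):

    from enum import IntEnum

    class Sensitivity(IntEnum):
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        RESTRICTED = 3

    # Hypothetical provider tiers: each provider lists the highest
    # classification it is cleared to process.
    PROVIDER_CLEARANCE = {
        "generic-cloud-model": Sensitivity.PUBLIC,
        "standard-cloud-provider": Sensitivity.INTERNAL,
        "trusted-provider": Sensitivity.CONFIDENTIAL,
        "local-model": Sensitivity.RESTRICTED,
    }

    def eligible_providers(level: Sensitivity) -> list[str]:
        """Return the providers cleared to process data at this level."""
        return [name for name, cleared in PROVIDER_CLEARANCE.items() if cleared >= level]

    # RESTRICTED data may only be processed by local models.
    assert eligible_providers(Sensitivity.RESTRICTED) == ["local-model"]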

Artifact Integrity

Every artifact is SHA-256 hashed at creation. Hashes are verified on access. Tampering is detectable.

  • SHA-256 content hashing
  • Hash verification on read
  • Immutable storage references
  • Tamper detection
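
For illustration, hashing and verification of this kind take only a few lines of Python; the content_hash and verify helpers below are hypothetical, not SciAI's API:

    import hashlib

    def content_hash(data: bytes) -> str:
        """SHA-256 digest recorded when an artifact is created."""
        return hashlib.sha256(data).hexdigest()

    def verify(data: bytes, recorded_hash: str) -> bool:
        """Re-hash on read; any mismatch indicates tampering or corruption."""
        return content_hash(data) == recorded_hash

    blob = b"experimental protocol, version 3"
    digest = content_hash(blob)
    assert verify(blob, digest)
    assert not verify(blob + b" (edited)", digest)  # tampering is detectable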

Locking & Deviations

Artifacts can be locked to prevent modification. Changes to locked artifacts require a formal deviation workflow.

  • Lock artifacts at any time
  • Changes require deviation request
  • Deviation includes rationale
  • Approval required to proceed
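
A sketch of what the lock check might look like, assuming hypothetical Artifact and Deviation types (SciAI's actual data model may differ):

    from dataclasses import dataclass, field

    @dataclass
    class Deviation:
        rationale: str
        approved: bool = False

    @dataclass
    class Artifact:
        content: bytes
        locked: bool = False
        deviations: list[Deviation] = field(default_factory=list)

        def update(self, new_content: bytes, deviation: Deviation | None = None) -> None:
            """Reject edits to locked artifacts unless an approved deviation is attached."""
            if self.locked and (deviation is None or not deviation.approved):
                raise PermissionError("Locked artifact: an approved deviation is required.")
            if deviation is not None:
                self.deviations.append(deviation)
            self.content = new_content

    plan = Artifact(content=b"analysis plan", locked=True)
    plan.update(b"revised plan", Deviation("added covariate", approved=True))  # allowed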

Approval Gates

Sensitive operations require explicit human approval. No silent execution of high-impact actions.

  • Stage transitions require approval
  • Locked artifact changes need sign-off
  • Sensitive agent actions flagged
  • Approval audit trail
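
As an illustration of an approval gate on stage transitions, assuming a hypothetical Project object with stage and audit_log fields (not SciAI's actual interface):

    from dataclasses import dataclass, field

    @dataclass
    class Project:
        stage: str
        audit_log: list = field(default_factory=list)

    def transition_stage(project: Project, target_stage: str, approver: str | None) -> None:
        """Advance a project only with explicit, attributed human approval."""
        if approver is None:
            raise PermissionError(f"Transition to {target_stage} requires human approval.")
        project.audit_log.append({
            "action": "stage_transition",
            "from": project.stage,
            "to": target_stage,
            "approved_by": approver,
        })
        project.stage = target_stage

    p = Project(stage="S2")
    transition_stage(p, "S3", approver="reviewer@lab.example")  # allowed
    # transition_stage(p, "S4", approver=None)                  # would raise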

Audit Trail

Complete history of all runs, approvals, checkpoints, and changes. Full transparency for review.

  • Agent run logs
  • Approval history
  • Stage checkpoint snapshots
  • Change attribution
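
One plausible shape for such a trail is an append-only JSONL log of attributed, timestamped events. The sketch below is illustrative, not SciAI's actual log format:

    import json
    import time

    def record_event(log_path: str, actor: str, action: str, details: dict) -> None:
        """Append an attributed, timestamped event to an append-only JSONL log."""
        entry = {
            "timestamp": time.time(),
            "actor": actor,      # e.g. an agent id or a user id
            "action": action,    # e.g. "artifact_created", "approval_granted"
            "details": details,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    record_event("audit.jsonl", actor="agent:extraction-1",
                 action="artifact_created", details={"artifact_id": "A-42"})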

What We Don't Claim

SciAI does not guarantee correctness

AI agents can make mistakes. Language models hallucinate. Statistical methods require human judgment to apply correctly.

What we do guarantee: SciAI captures the process with integrity. You can see what was done, when, by which agent, and whether it was reviewed.

Scientific correctness remains your responsibility. We make it possible to prove your process was rigorous—not that your conclusions are right.

Current vs Planned

We're transparent about what's implemented now and what's on the roadmap.

✓ Implemented

  • Stage machine (S0–S11)
  • Agent Fleet with SSE streaming
  • Artifact creation with metadata
  • OAuth authentication
  • Project management
  • Demo project seeding

⏳ Planned

  • Real LLM integration
  • PDF text extraction
  • Claim extraction pipeline
  • Full artifact locking
  • Deviation workflows
  • Team collaboration

For Regulated Environments

Self-Hosted Option

For institutions that require on-premises deployment, SciAI can be self-hosted in your infrastructure with full data sovereignty.

Contact Enterprise Sales

Questions about security?

Book a demo and we'll walk through our security model in detail.

Book a Demo