WHITEPAPER
RANKIGI: A Cryptographic Governance Layer for Autonomous AI Agents
How enterprises can establish tamper-evident audit infrastructure before regulation mandates it.
March 2026 · Rankigi Inc.
1. Abstract
Autonomous AI agents are increasingly deployed across regulated industries to execute actions with real-world consequences. Existing observability and logging infrastructure, designed for applications rather than autonomous decision-makers, lacks the cryptographic tamper-evidence, behavioral profiling, and compliance mapping required for enterprise governance. This paper presents RANKIGI, a passive sidecar governance layer that cryptographically records, verifies, and reports on agent behavior using SHA-256 hash chains, Ed25519 identity verification, and automated compliance mapping to EU AI Act, SOC 2, and HIPAA frameworks. We describe the system architecture, security model, and the Know Your Agent (KYA) standard for agent identity and accountability.
2. The Governance Gap
The deployment of AI agents across finance, healthcare, legal, and infrastructure has created a governance gap: these agents execute consequential actions — accessing sensitive data, initiating transactions, generating legally binding documents — without industry-standard mechanisms for accountability. Traditional logging fails on three counts. First, logs are mutable: database administrators can modify entries, log management systems can overwrite data, and compromised servers can have their history rewritten. Second, logs lack identity: there is no standard way to cryptographically verify which agent produced a given log entry. Third, logs are not compliance-aware: raw event data cannot be directly mapped to regulatory frameworks without significant manual effort.
The regulatory landscape is converging on mandatory governance requirements. EU AI Act Articles 9, 12, and 13 require risk management systems, event logging, and transparency measures for high-risk AI systems. SOC 2 Trust Service Criteria increasingly apply to AI-driven processes. HIPAA audit controls extend to any autonomous agent that accesses protected health information. Organizations deploying agents without governance infrastructure accumulate unquantified compliance exposure with each passing day.
3. Architecture Overview
RANKIGI operates as a passive sidecar observer alongside existing agent infrastructure. The architecture is designed around four core principles: non-blocking operation (agents continue regardless of RANKIGI availability), data minimization (hashes and metadata only, no raw sensitive data), cryptographic verifiability (any party with read access can independently verify the record), and append-only immutability (database-level triggers prevent modification of the audit trail).
3.1 Sidecar Observation Model. The RANKIGI SDK (available for Node.js and Python) integrates into existing agent code with minimal surface area. Agents call a single method — trackToolCall() or track_tool_call() — to record actions. The SDK buffers events, retries on failure, and uses a daemon thread (Python) or background flush (Node.js) to minimize latency impact. Ingestion p95 target is under 200 milliseconds.
3.2 SHA-256 Hash Chain. Every agent event is hashed using the formula: hash = SHA-256(prev_hash | occurred_at | org_id | agent_id | canonical_payload). The pipe-delimited input produces a hex-encoded hash that is stored alongside the event. Each event's hash includes the hash of the previous event, creating a chain where any modification to any past entry is immediately detectable. Genesis blocks use a prev_hash of 64 zeros. The canonical payload is computed using deterministic JSON serialization (sorted keys, maximum 10 levels of nesting) to ensure consistent hashing regardless of key order.
3.3 Agent Passport System. Each agent is issued a cryptographic identity (Agent Passport) using Ed25519 key pairs. The public key is registered with RANKIGI; the private key remains with the agent. Agents sign events with their private key, and RANKIGI verifies signatures against the registered public key. This provides non-repudiation: each action is cryptographically attributed to a specific, verified agent. Passports include risk scores computed from behavioral data, chain integrity status, and event volume.
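The chain construction described in 3.2 can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the exact byte encoding, timestamp format, and full canonicalization rules (such as enforcement of the 10-level nesting limit) are simplified here.

```python
import hashlib
import json

GENESIS_PREV_HASH = "0" * 64  # genesis blocks use a prev_hash of 64 zeros

def canonical_payload(payload: dict) -> str:
    # Deterministic JSON serialization: sorted keys, compact separators.
    # (The production rule also caps nesting at 10 levels; omitted here.)
    return json.dumps(payload, sort_keys=True, separators=(",", ":"))

def event_hash(prev_hash: str, occurred_at: str, org_id: str,
               agent_id: str, payload: dict) -> str:
    # hash = SHA-256(prev_hash | occurred_at | org_id | agent_id | canonical_payload)
    data = "|".join([prev_hash, occurred_at, org_id, agent_id,
                     canonical_payload(payload)])
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

h1 = event_hash(GENESIS_PREV_HASH, "2026-03-01T12:00:00Z",
                "org-1", "agent-1", {"tool": "db.query", "rows": 3})
h2 = event_hash(h1, "2026-03-01T12:00:05Z",
                "org-1", "agent-1", {"tool": "email.send"})
```

Because h2 commits to h1, altering the first event changes h1 and invalidates every later entry; re-deriving the chain from stored events and comparing hashes therefore detects tampering anywhere in the history.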
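The signing and verification flow of the Agent Passport system (3.3) can be sketched with the third-party "cryptography" package. This is an assumption for illustration; the SDK's actual key handling and event encoding may differ.

```python
# Sketch of Agent Passport signing/verification; assumes the third-party
# "cryptography" package is installed (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Agent side: key pair generated at passport issuance.
# The private key never leaves the agent; the public key is registered.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

event = b"agent-1|2026-03-01T12:00:00Z|db.query"
signature = private_key.sign(event)

def verify(pub, sig, data) -> bool:
    # Verifier side: check attribution against the registered public key.
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

ok = verify(public_key, signature, event)                  # genuine event
forged = verify(public_key, signature, event + b"|extra")  # tampered event
```

A signature that verifies against the registered public key cryptographically attributes the event to that agent; any modification of the signed bytes causes verification to fail, which is the basis of the non-repudiation claim.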
3.4 Behavioral Profiling. After every 100 events, RANKIGI computes a behavioral profile for each agent. The profile captures tool usage distribution, action frequency patterns, drift scores (measuring deviation from established behavior), anomaly frequency, and temporal patterns. Profiles enable early detection of behavioral drift — before users or regulators notice the change.
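One simple way to realize a drift score of the kind 3.4 describes is a distance between the baseline and recent tool-usage distributions. The sketch below uses total variation distance; this is an illustrative choice, not necessarily RANKIGI's actual metric.

```python
from collections import Counter

def tool_distribution(events):
    # Normalize tool-call counts into a probability distribution.
    counts = Counter(e["tool"] for e in events)
    total = sum(counts.values())
    return {tool: n / total for tool, n in counts.items()}

def drift_score(baseline, recent):
    # Total variation distance: 0.0 = identical usage, 1.0 = fully disjoint.
    tools = set(baseline) | set(recent)
    return 0.5 * sum(abs(baseline.get(t, 0.0) - recent.get(t, 0.0))
                     for t in tools)

baseline = tool_distribution([{"tool": "db.query"}] * 90 +
                             [{"tool": "email.send"}] * 10)
recent = tool_distribution([{"tool": "db.query"}] * 50 +
                           [{"tool": "payments.transfer"}] * 50)
score = drift_score(baseline, recent)  # a new dominant tool pushes this up
```

An agent that suddenly starts calling a tool absent from its baseline (payments.transfer above) produces a sharp jump in the score, which is the kind of deviation a profiler can surface before users or regulators notice it.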
4. Security Model
The RANKIGI security model is designed for zero-trust operation. Data at rest is encrypted using AES-256. Data in transit uses TLS 1.3. API keys are peppered with a server-side secret and SHA-256 hashed before storage; raw keys cannot be recovered. Database access is scoped by organization using row-level security (RLS) enforced at the Postgres layer — there is no cross-tenant data access path. The event ledger is append-only: database triggers prevent UPDATE and DELETE operations on the event_hash_chain table. These triggers are enforced at the database engine level and cannot be bypassed by application code. Per-agent advisory locks prevent concurrent chain forks during event ingestion.
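The API-key handling described above can be illustrated as follows. This is a minimal sketch: HMAC-SHA-256 is one standard way to combine a server-side pepper with SHA-256 hashing, and the actual construction may differ.

```python
import hashlib
import hmac
import secrets

# Server-side pepper: a secret held outside the database, so leaked
# digests cannot be brute-forced offline without it.
PEPPER = secrets.token_bytes(32)

def hash_api_key(raw_key: str) -> str:
    # Store only this digest; the raw key cannot be recovered from it.
    return hmac.new(PEPPER, raw_key.encode("utf-8"), hashlib.sha256).hexdigest()

def check_api_key(presented: str, stored_digest: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_api_key(presented), stored_digest)

stored = hash_api_key("rk_live_example_key")
```

At authentication time the presented key is re-hashed with the same pepper and compared to the stored digest, so the database never holds material from which a raw key could be reconstructed.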
5. Compliance Framework
RANKIGI maps agent behavior to three major compliance frameworks:
EU AI Act. Article 9 (risk management): continuous behavioral profiling and drift detection. Article 12 (record-keeping): tamper-evident hash chain with cryptographic verification. Article 13 (transparency): governance reports with plain-language summaries of agent behavior. Article 14 (human oversight): configurable policy enforcement that can flag, alert, or block specific agent actions.
SOC 2. CC6.1 (logical access controls): RBAC with admin/auditor/read-only roles. CC7.2 (system monitoring): real-time event ingestion and alerting. CC8.1 (change management): behavioral drift detection and anomaly flagging. PI1.1 (processing integrity): hash chain verification provides mathematical proof of data integrity.
HIPAA. Section 164.312(b) (audit controls): complete event audit trail for any agent accessing protected health information. Section 164.312(c) (integrity controls): SHA-256 hash chain ensures audit records cannot be altered. Section 164.312(d) (entity authentication): Agent Passport system provides cryptographic agent identity verification.
6. The KYA Standard
Know Your Agent (KYA) is a proposed standard for agent identity, accountability, and behavioral verification — analogous to Know Your Customer (KYC) in financial services. KYA requires three capabilities: cryptographic agent identity (who is this agent?), behavioral accountability (what has this agent done?), and continuous verification (is this agent behaving as expected?). RANKIGI provides the infrastructure to implement KYA through Agent Passports (identity), hash-chained audit trails (accountability), and behavioral profiling with drift detection (verification). As autonomous agents increasingly interact with each other and with external systems, KYA provides the trust foundation for the emerging agentic web.
7. Conclusion
The governance gap for autonomous AI agents represents both a risk and an opportunity for enterprises. Organizations that establish tamper-evident audit infrastructure now — before regulation mandates it — gain a defensible compliance position, accelerated enterprise sales cycles, and a foundation for responsible AI deployment. RANKIGI provides this infrastructure as a passive, non-blocking sidecar that integrates with existing agent stacks in minutes. The cryptographic guarantees are mathematical, not procedural. The audit trail speaks for itself.