Integration Guide
LangChain Integration
Zero-config governance for LangChain agents. Add a callback handler and every tool call, agent decision, LLM response, and error is automatically captured in your tamper-evident audit trail.
How It Works
The RANKIGI callback handler implements LangChain's BaseCallbackHandler interface. Attach it to any agent, chain, or executor and it passively observes every action — tool invocations, agent decisions, LLM outputs, and errors. Every event is SHA-256 hashed and appended to your tamper-evident chain.
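The hash chaining described above can be sketched in a few lines. This is illustrative only — it assumes nothing about RANKIGI's actual wire format or event schema — but it shows the core property: each entry's SHA-256 hash covers the previous entry's hash, so altering any past event invalidates every hash after it.

```python
import hashlib
import json

def chain_event(prev_hash: str, event: dict) -> str:
    """Append-only hash chain: the new hash covers the previous hash plus
    a canonical JSON serialization of the event."""
    payload = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

# Build a tiny two-event chain (hypothetical event shapes, not RANKIGI's).
genesis = "0" * 64
h1 = chain_event(genesis, {"type": "tool_call", "tool": "search"})
h2 = chain_event(h1, {"type": "agent_output", "result": "done"})
```

Because `h2` depends on `h1`, rewriting the first event forces a different `h1`, which in turn produces a different `h2` — the tampering is detectable anywhere downstream.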
The handler is completely non-blocking. If RANKIGI is unavailable, your agent continues operating normally. Events are buffered and retried silently. No exceptions are ever raised in your agent's execution path.
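The non-blocking behavior follows a standard buffer-and-retry pattern. The sketch below is not the SDK's internals — `BufferedEventSink` and its `transport` callable are hypothetical names — but it shows why a transport failure never reaches the agent: emitting only appends to a local queue, and a failed flush silently re-buffers the batch.

```python
import time
from collections import deque

class BufferedEventSink:
    """Illustrative buffer-and-retry sink: events queue locally, a failed
    flush re-buffers them, and no exception ever propagates to the caller."""

    def __init__(self, transport, max_buffer=1000):
        self._transport = transport  # callable that ships a batch of events
        self._buffer = deque(maxlen=max_buffer)

    def emit(self, event):
        # Appending to a local deque never blocks the agent's execution path.
        self._buffer.append({**event, "ts": time.time()})

    def flush(self):
        batch = list(self._buffer)
        self._buffer.clear()
        try:
            self._transport(batch)
        except Exception:
            # Re-buffer in original order for a later retry; swallow the error.
            self._buffer.extendleft(reversed(batch))

# Usage with a trivially working transport:
sent = []
sink = BufferedEventSink(sent.extend)
sink.emit({"type": "tool_call", "tool": "search"})
sink.flush()  # sent now holds the shipped event
```

The `deque(maxlen=...)` bound also caps memory if the backend stays down for a long time, at the cost of dropping the oldest events.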
Installation
Node.js / TypeScript
```shell
npm install @rankigi/sdk @langchain/core
```

Python

```shell
pip install rankigi langchain-core
```

Quick Start — Node.js
```typescript
import { RangigiCallbackHandler } from "@rankigi/sdk/langchain";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";

const handler = new RangigiCallbackHandler({
  apiKey: process.env.RANKIGI_API_KEY,
  agentId: process.env.RANKIGI_AGENT_ID,
});

const llm = new ChatOpenAI({ modelName: "gpt-4" });

// `tools` and `prompt` are your own LangChain tools and agent prompt,
// defined elsewhere in your application.
const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt });
const executor = new AgentExecutor({
  agent,
  tools,
  callbacks: [handler],
});

// RANKIGI now governs this agent automatically.
// Every tool call is hashed and chained.
const result = await executor.invoke({ input: "your task" });
```

Quick Start — Python
```python
import os

from rankigi.langchain import RangigiCallbackHandler
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI

handler = RangigiCallbackHandler(
    api_key=os.environ["RANKIGI_API_KEY"],
    agent_id=os.environ["RANKIGI_AGENT_ID"],
)

llm = ChatOpenAI(model="gpt-4")

# `tools` is your own list of LangChain tools, defined elsewhere.
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    callbacks=[handler],
)

# RANKIGI now governs this agent automatically.
# Every tool call is hashed and chained.
result = agent.run("your task here")
```

Constructor
Node.js

```typescript
import { RangigiCallbackHandler } from "@rankigi/sdk/langchain";

const handler = new RangigiCallbackHandler({
  apiKey: process.env.RANKIGI_API_KEY,
  agentId: process.env.RANKIGI_AGENT_ID,
});
```

Python
```python
import os

from rankigi.langchain import RangigiCallbackHandler

handler = RangigiCallbackHandler(
    api_key=os.environ["RANKIGI_API_KEY"],
    agent_id=os.environ["RANKIGI_AGENT_ID"],
)
```

Events Captured
The handler captures these LangChain callbacks automatically:

```text
on_tool_start   → tool name + input recorded, timer started
on_tool_end     → tool_call event + latency measurement
on_tool_error   → error event (severity: warn)
on_agent_action → agent_action event (tool selection + reasoning)
on_agent_finish → agent_output event (return values)
on_llm_end      → response_generated event (LLM output)
on_llm_error    → error event (severity: critical)
on_chain_error  → error event (severity: critical)
```

Tool calls include automatic latency measurement: the handler records the start time on on_tool_start, computes the elapsed time on on_tool_end, and emits a separate tool_latency event.
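The latency measurement above can be sketched as follows. This is not the SDK's code — `ToolLatencyTracker` is a hypothetical name — but it shows the pattern: record a monotonic start time keyed by the invocation's run ID in on_tool_start, then compute elapsed time in on_tool_end, so concurrent tool calls don't clobber each other's timers.

```python
import time

class ToolLatencyTracker:
    """Per-invocation latency: start times are keyed by run ID so
    overlapping tool calls are timed independently."""

    def __init__(self):
        self._starts = {}

    def on_tool_start(self, run_id):
        # time.monotonic() is immune to wall-clock adjustments.
        self._starts[run_id] = time.monotonic()

    def on_tool_end(self, run_id):
        start = self._starts.pop(run_id, None)
        if start is None:
            return None  # end without a matching start is ignored
        return (time.monotonic() - start) * 1000.0  # latency in milliseconds
```

Popping the start time on completion keeps the dict from growing without bound across many tool calls.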
Verbose Mode
Enable verbose mode to print all tracked events to stderr for debugging.
Node.js
```typescript
const handler = new RangigiCallbackHandler({
  apiKey: process.env.RANKIGI_API_KEY,
  agentId: process.env.RANKIGI_AGENT_ID,
  verbose: true, // prints tracking events to stderr
});
```

Python
```python
handler = RangigiCallbackHandler(
    api_key=os.environ["RANKIGI_API_KEY"],
    agent_id=os.environ["RANKIGI_AGENT_ID"],
    verbose=True,  # prints tracking events to stderr
)
```