Platform
What you can build. How it stays safe.
Forge isn't a collection of features. It's a single runtime where every capability was designed to work with every other. Here's what that means for the people who use it.
Build Faster
Your team shouldn't spend 70% of their time wiring infrastructure. Forge gives them three ways to build agents, plus a visual workflow designer, a built-in knowledge pipeline, and a natural language command interface, so they can focus on business logic, not plumbing.
Scenario
Your team needs a customer escalation agent by Friday.
A product manager describes what it should do. An engineer opens the Agent Builder, writes a system prompt, assigns tools from the registry (CRM lookup, ticket creation, Slack notification), sets a $0.50 budget cap, and deploys. The agent runs in a governed sandbox with its own egress rules. No framework. No vector database. No integration work. The agent is in production the same day.
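For illustration only: Forge's actual SDK isn't shown on this page, so every name and field in the sketch below is an assumption, but a declarative agent definition along those lines might look like this:

```typescript
// Hypothetical sketch -- field names and values are illustrative, not Forge's real SDK surface.
interface AgentDefinition {
  name: string;
  mode: "prompt-only" | "code" | "hybrid"; // the three creation paths described below
  systemPrompt: string;
  tools: string[];                         // granted from the tool registry (per-agent allowlist)
  budgetCapUsd: number;                    // checked before each model call
  egress: { allow: string[] };             // sandbox-level network allowlist
}

const escalationAgent: AgentDefinition = {
  name: "customer-escalation",
  mode: "prompt-only",
  systemPrompt:
    "Triage inbound escalations, look up the account, open a ticket, and notify the on-call channel.",
  tools: ["crm.lookup", "tickets.create", "slack.notify"],
  budgetCapUsd: 0.5,
  egress: { allow: ["crm.internal.example.com", "hooks.slack.com"] },
};
```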
Three ways to build an agent
- Prompt-only: Describe what the agent should do. No code required.
- Code + AI delegation: Deterministic logic that hands off to AI when it needs reasoning.
- Hybrid with fallback: Code runs first. If it fails, AI takes over automatically.
Visual workflow orchestration
- Drag-and-drop node designer with branching, loops, and parallel execution
- Human-in-the-loop gates that actually halt — not "act first, notify later"
- Webhook triggers, deployment versioning, nested sub-workflows (a sketch of a gated workflow follows this list)
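As a rough illustration of what the designer could produce, the sketch below shows a webhook-triggered workflow with a branch and an approval gate; the node kinds and field names are assumptions made for the example, not the real export format:

```typescript
// Hypothetical sketch -- node kinds and fields are illustrative, not the designer's real export format.
type FlowNode =
  | { kind: "trigger"; id: string; on: "webhook" }
  | { kind: "agent"; id: string; agent: string }
  | { kind: "branch"; id: string; when: string; then: string; else: string }
  | { kind: "approval"; id: string; notify: "slack" | "email" | "sms"; haltUntilApproved: true };

const refundWorkflow: { version: number; nodes: FlowNode[] } = {
  version: 3, // deployments are versioned, so v3 can be rolled back to v2
  nodes: [
    { kind: "trigger", id: "incoming", on: "webhook" },
    { kind: "agent", id: "triage", agent: "customer-escalation" },
    { kind: "branch", id: "check", when: "refund > 500", then: "gate", else: "auto-refund" },
    // Execution halts here until a human approves -- not "act first, notify later".
    { kind: "approval", id: "gate", notify: "slack", haltUntilApproved: true },
    { kind: "agent", id: "auto-refund", agent: "refund-processor" },
  ],
};
```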
Built-in knowledge pipeline
- Upload PDF, Word, HTML, scanned images — all processed automatically
- Built-in OCR and format conversion. No external services required.
- Immediately searchable from chat, agents, and investigations
171+ tools, ready to use
- Full MCP client and server — connect any MCP-compatible system
- Import any OpenAPI spec as tools and bring your own APIs (see the sketch after this list)
- Per-agent tool allowlists — agents only access what you grant
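A minimal sketch of that flow, assuming a registry helper (importOpenApiSpec) and a grant shape that aren't documented on this page:

```typescript
// Hypothetical sketch -- the import helper and allowlist shape are assumptions, not Forge's real API.

// Stand-in for the registry call that turns an OpenAPI spec into individual tools.
function importOpenApiSpec(specUrl: string): string[] {
  console.log(`importing ${specUrl}`);
  return ["billing.getInvoice", "billing.createCreditNote"];
}

interface ToolGrant {
  agent: string;
  allow: string[]; // per-agent allowlist: the agent can reach only these tools
}

const imported = importOpenApiSpec("https://api.example.com/openapi.json");

const grant: ToolGrant = {
  agent: "customer-escalation",
  allow: [...imported, "slack.notify"], // anything not listed here simply isn't callable
};
```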
Deploy Safely
Your security team has been blocking AI deployments because they can't get straight answers about control. Forge gives them structural guarantees, not policies. The bypass doesn't exist.
Scenario
Your CISO asks: "Can an agent call an unauthorized API?"
The answer isn't "we have a policy that prevents it." The answer is: "The runtime doesn't provide the capability." There is no ai.chat() in agent code. Network egress is filtered at the sandbox level before the call leaves the process. Budget caps are checked before the model call executes, not reported after. Your CISO reviews the architecture, sees structural controls, and signs off.
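To make the structural claim concrete, here is a sketch of the shape of that control: agent code holds a delegation handle rather than a model client, and the runtime refuses the call up front if the budget would be exceeded. The function names and checks are illustrative assumptions, not Forge internals:

```typescript
// Hypothetical sketch of the structural control -- names are illustrative, not Forge internals.
// Agent code gets a capability handle, never a raw model client, so there is nothing to bypass.
interface Delegation {
  prompt: string;
  maxCostUsd: number;
}

interface RuntimeContext {
  spentUsd: number;
  budgetCapUsd: number;
  egressAllow: string[];
}

function delegate(ctx: RuntimeContext, d: Delegation): string {
  // Budget is checked BEFORE the model call executes, not reported after the fact.
  if (ctx.spentUsd + d.maxCostUsd > ctx.budgetCapUsd) {
    throw new Error("budget cap exceeded; call refused");
  }
  // Egress is filtered at the sandbox boundary before anything leaves the process;
  // model traffic goes through the runtime's own allowlisted endpoint.
  return callModelViaRuntime(d.prompt);
}

function callModelViaRuntime(prompt: string): string {
  return `governed response to: ${prompt}`; // stand-in for the runtime's model path
}
```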
Governance by architecture
- No raw AI API calls exist in agent code. The bypass doesn't exist.
- Network egress allowlists enforced at the sandbox, not a proxy
- Every AI interaction logged to an immutable audit ledger
Budget and cost controls
- Per-agent and per-execution USD budget caps
- Caps enforced before the model call, not reported after
- Tenant model allowlists — control which AI models are available
Human-in-the-loop that actually works
- Approval gates halt execution completely until a human approves
- Notifications via Slack, email, or SMS with full execution context
- Approvers see exactly what they're approving — not a generic "OK?"
Sandboxed execution
- Each agent runs in its own WebAssembly sandbox with CPU/memory limits
- Configurable sandbox profiles per agent type (see the sketch after this list)
- 18-stage validation pipeline catches issues before they reach production
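As an illustrative example of what a sandbox profile could carry (the fields below are assumptions, not Forge's actual schema):

```typescript
// Hypothetical sketch -- the profile fields are assumptions, not Forge's actual schema.
interface SandboxProfile {
  runtime: "wasm";           // each agent runs in its own WebAssembly sandbox
  cpuMillis: number;         // per-execution CPU budget
  memoryMb: number;
  wallClockSeconds: number;
  egressAllow: string[];     // enforced at the sandbox boundary, not at an external proxy
}

const defaultProfile: SandboxProfile = {
  runtime: "wasm",
  cpuMillis: 2_000,
  memoryMb: 256,
  wallClockSeconds: 30,
  egressAllow: ["crm.internal.example.com"],
};

// A heavier profile for long-running document agents, assigned per agent type.
const documentAgentProfile: SandboxProfile = {
  ...defaultProfile,
  memoryMb: 1_024,
  wallClockSeconds: 300,
};
```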
Understand Everything
When something goes wrong with an AI agent, you need to know exactly what happened, why, and how much it cost. Forge gives you one trace from request to result — not five dashboards from five systems with five different definitions of "request."
Scenario
A customer reports a wrong answer from your AI assistant.
Your operations lead opens the workspace, clicks the trace explorer, and sees the full execution path: which agent handled it, which tools it called, what data it retrieved, which model it used, what it cost, and exactly where the reasoning went wrong. One system, one trace, one answer — found in minutes, not hours of log archaeology.
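For a sense of what a single trace might contain, here is a hedged sketch; the record shape and field names are assumptions, not the trace explorer's real schema:

```typescript
// Hypothetical sketch -- the record shape and names are assumptions, not the trace explorer's real schema.
interface TraceStep {
  kind: "agent" | "tool" | "model";
  name: string;
  costUsd: number;
  note?: string;
}

interface Trace {
  requestId: string;
  tenant: string;
  steps: TraceStep[];  // one ordered path from request to result
  totalCostUsd: number;
}

const wrongAnswerTrace: Trace = {
  requestId: "req_81f3",
  tenant: "acme",
  steps: [
    { kind: "agent", name: "support-assistant", costUsd: 0 },
    { kind: "tool", name: "kb.search", costUsd: 0.002, note: "3 passages retrieved" },
    { kind: "model", name: "primary-chat-model", costUsd: 0.031, note: "reasoning step that went wrong" },
  ],
  totalCostUsd: 0.033,
};
```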
Document intelligence
Structured investigations with paragraph-level citations. Every finding traces back to its source. Every artifact has an evidence chain.
Cross-session memory
Agents remember previous interactions. Auto-capture of key facts. Context compression keeps costs controlled without losing institutional knowledge.
Cost tracking per everything
Per-agent, per-execution, per-workflow, per-tenant. Know exactly what each AI operation costs before it runs, not after.
Scale to Everyone
The biggest waste in enterprise AI: building powerful capabilities that only ten engineers can use. Forge makes AI accessible to the entire organization. IT configures governance once. Everyone builds on that foundation.
Scenario
A compliance analyst needs to review 200 vendor contracts.
She doesn't write code. She doesn't configure a pipeline. She opens a workspace, uploads the documents, and asks: "Which of these contracts have liability clauses that exceed our standard cap?" The platform processes the documents, runs the investigation, and returns findings with citations pointing to the exact paragraphs. She has the answer within the hour, and she never needed to understand embeddings, vector databases, or prompt engineering.
Natural language to everything
The AI Assist panel is a command interface to the entire platform. Create agents, configure workflows, query knowledge — in plain English. Not a chatbot. A control surface.
Multi-tenant by design
Structural isolation at the database and runtime level. Each tenant has their own governance, their own knowledge, their own agents. Share nothing that shouldn't be shared.
Model-agnostic
OpenAI, Anthropic, Google Gemini, Cloudflare Workers AI, xAI Grok. Intelligent model routing via ProviderMux. Switch models without changing agent code.
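A sketch of what a routing preference could look like, with fields and model identifiers that are assumptions for illustration:

```typescript
// Hypothetical sketch -- the routing fields and model identifiers are illustrative assumptions.
interface ModelRoute {
  prefer: string[];          // tried in order by the router
  fallbackOnError: boolean;  // fail over to the next preferred model
  tenantAllowlist: string[]; // only these models are available to this tenant
}

const route: ModelRoute = {
  prefer: ["anthropic/claude", "openai/gpt", "google/gemini"],
  fallbackOnError: true,
  tenantAllowlist: ["anthropic/claude", "openai/gpt", "google/gemini"],
};
// Agent code never names a model directly, so switching providers is a config change, not a code change.
```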
Workspaces for everyone
Governed AI environments with chat, files, artifacts, and full execution ledgers. The compliance analyst, the operations manager, and the data scientist all work in the same governed environment.
By the numbers
- 72 platform capabilities
- 171+ built-in tools
- 18 validation stages
- 16 capability groups
All 16 capability groups
Agent Intelligence & Execution
10 capabilities — three creation paths, sandboxed runtime, delegation, budgets
Workflow Orchestration
6 capabilities — visual designer, branching, loops, HITL gates, versioning
Workspaces
11 capabilities — governed environments, chat, files, delegation trees, traces
Document Intelligence (DIUX)
4 capabilities — investigations, citations, findings, artifacts
Knowledge Fabric
4 capabilities — ingestion, OCR, chunking, embedding, retrieval
Memory Fabric
4 capabilities — cross-session recall, auto-capture, summarization, DLP
Prompt Engineering & Skills
6 capabilities — Prompt Studio, versioning, semantic discovery, TTL
Tool Ecosystem & MCP
7 capabilities — 171+ tools, MCP, OpenAPI import, registry
Governance & Security
7 capabilities — architecture-level governance, egress, sandboxes, HITL
Validation Pipeline
7 capabilities — 18-stage L0-L6 progressive validation
AI Models & Providers
5 capabilities — model registry, ProviderMux, multi-provider
Observability & Cost
6 capabilities — analytics, tracing, billing, telemetry
Multi-Tenancy & Admin
8 capabilities — tenant isolation, users, roles, settings
AI Assist
2 capabilities — natural language interface, context-aware
Blueprints & Chatbots
4 capabilities — guided creation, embeddable bots
Integrations & Messaging
5 capabilities — Slack, email, SMS, webhooks, inbox
See it for yourself.
Join the waitlist for early access.