Overview

Philosophy

Detection is deterministic. AI is explanatory. OpenAudit AI is built on a clear separation between these two concerns.

The problem with AI-first security tools

As language models became capable enough to read code, a wave of "AI audit" tools emerged. Ask GPT-4 to find vulnerabilities, receive a list of findings. Simple. Fast. But fundamentally unreliable for security work.

The issue is not that AI is useless for security. It is that language models are stochastic. Run the same prompt on the same contract twice — you may get different findings. The model may hallucinate a vulnerability that doesn't exist. It may miss one that does. It has no ground truth against which to verify its claims.

AI-first security:
  • Non-deterministic output
  • Hallucinated findings
  • Hard to test or version
  • Cannot be trusted in CI
  • Different results each run

OpenAudit AI approach:
  • Deterministic rule engine
  • No hallucinations
  • Version-controlled rules
  • CI/CD native
  • Same output every run

The OpenAudit AI model

OpenAudit AI separates the security layer from the explanation layer with a clear boundary:

  • The rule engine is responsible for detection. It runs deterministic checks against the Solidity AST. Either a rule fires, or it doesn't. No inference. No guessing.
  • The AI layer is responsible for explanation. Given a structured finding from the rule engine, it generates a human-readable description of what the finding means and how to fix it. It does not decide whether something is a vulnerability.

Important: The AI layer is completely optional. You can run OpenAudit AI with zero AI involvement and get the same structured security findings.
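The boundary can be sketched in a few lines. This is a hypothetical illustration, not OpenAudit AI's actual API: the names (`Finding`, `check_tx_origin`, `explain`) are invented, and a real rule would walk the Solidity AST rather than match source text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    rule_id: str
    message: str
    line: int

def check_tx_origin(source_lines):
    """Deterministic rule: flag uses of tx.origin.
    Either it fires or it doesn't. No inference, no guessing."""
    findings = []
    for lineno, line in enumerate(source_lines, start=1):
        if "tx.origin" in line:
            findings.append(Finding("SOL-TX-ORIGIN",
                                    "tx.origin used; prefer msg.sender", lineno))
    return findings

def explain(finding, model=None):
    """Optional AI layer: turns a structured finding into prose.
    It never decides whether something is a vulnerability."""
    if model is None:  # AI disabled: the structured finding stands on its own
        return f"{finding.rule_id} at line {finding.line}: {finding.message}"
    return model(finding)  # hypothetical model callable

contract = [
    "function withdraw() public {",
    "  require(tx.origin == owner);",
    "}",
]
findings = check_tx_origin(contract)
```

Note that `explain` consumes a finding the rule engine already produced; removing the AI layer removes explanations, never detections.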

Three principles

1. Deterministic

A tool that produces different findings on the same code cannot be trusted in production workflows. OpenAudit AI's detection is grounded in explicit, enumerable rules. Every rule is testable, versioned, and documentable.

2. Reproducible

Teams need to share findings, compare them across branches, and track them over time. Reproducibility means anyone on the team, in any environment, running the same version of the tool against the same code gets the same output.
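One way to make that property checkable is to hash a canonical serialization of the findings. The sketch below is illustrative (the `findings_digest` helper and its field names are assumptions, not part of the tool): if two runs produce the same digest, they produced the same findings.

```python
import hashlib
import json

def findings_digest(findings):
    """Serialize findings canonically (stable ordering, sorted keys)
    and hash the result. Same tool version + same code => same digest."""
    canonical = json.dumps(
        sorted(findings, key=lambda f: (f["rule_id"], f["line"])),
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

run_a = [{"rule_id": "SOL-001", "line": 14}, {"rule_id": "SOL-007", "line": 3}]
# Same findings reported in a different order still hash identically:
run_b = [{"rule_id": "SOL-007", "line": 3}, {"rule_id": "SOL-001", "line": 14}]
```

A digest like this can be stored per branch, making drift between environments immediately visible.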

3. CI/CD friendly

Security tooling that cannot be integrated into automated pipelines has limited leverage. OpenAudit AI emits structured JSON, provides meaningful exit codes, and is designed to be a blocking gate in deployment workflows — not an afterthought.
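A blocking gate reduces to mapping findings onto an exit code. This is a minimal sketch under assumed severity names, not OpenAudit AI's actual gating logic:

```python
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_exit_code(findings, fail_on="high"):
    """Return a nonzero exit code if any finding meets the threshold,
    so the CI step fails and blocks the deployment."""
    threshold = SEVERITY_RANK[fail_on]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return 1 if blocking else 0
```

In a pipeline the result would be passed to `sys.exit(...)`; because detection is deterministic, the gate's verdict is the same on every re-run of the same commit.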

What AI is good for in security tooling

AI language models are exceptionally good at explaining things in plain language. Given a structured finding — a rule ID, a code snippet, a location — a well-prompted model can produce a clear, developer-friendly explanation of the issue and how to fix it.
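Because the finding is structured, the prompt can be assembled mechanically. A hypothetical sketch (field names and wording are assumptions; the actual model call is omitted):

```python
def explanation_prompt(finding):
    """Build a prompt from a structured finding. The model only explains;
    detection already happened in the rule engine."""
    return (
        f"A deterministic rule ({finding['rule_id']}) flagged the code below "
        f"at line {finding['line']}.\n"
        f"Code:\n{finding['snippet']}\n"
        "Explain the issue to a developer and suggest a fix. "
        "Do not speculate about other vulnerabilities."
    )

finding = {
    "rule_id": "SOL-REENTRANCY",
    "line": 42,
    "snippet": 'msg.sender.call{value: amount}("");',
}
prompt = explanation_prompt(finding)
```

The final instruction line is the boundary in prompt form: the model is told what was found and asked only to explain it.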

This is exactly what OpenAudit AI uses AI for. The AI doesn't audit. The AI explains. The distinction matters for trust, testability, and operational reliability.