The ZEN Engine is written in Rust and achieves exceptional throughput across all language bindings. On a MacBook Pro M3 single-core, the engine processes an average of 91,000 evaluations per second across 74 real-world decision benchmarks.

Download the benchmark report for full results across all 74 decision scenarios.

Benchmark highlights

Scenario | Throughput
Realtime fraud detection | 191K req/s
Dynamic FX rate pricing | 184K req/s
Insurance agent commission | 148K req/s
Loan approval | 45K req/s
Company analysis (complex) | 4K req/s
Performance varies based on decision structure. Pure decision tables and expression nodes achieve 150K+ evaluations per second. Decisions using Function nodes run slower (3-50K req/s) due to JavaScript runtime overhead.

Language binding performance

All bindings share the same Rust core. The binding layer adds minimal overhead for most languages.
Language | Binding | Overhead | Notes
Rust | Native | None | Baseline performance
Node.js | NAPI-RS | Low | Near-native, minimal serialization overhead
Python | PyO3 | Low | Efficient type conversions, async support
Go | Custom C + CGO | Low | CGO boundary overhead, native evaluation
WebAssembly | WASM + NAPI | Low | Browser and edge deployments
Swift | UniFFI | Low | Idiomatic Swift APIs
Kotlin/Java/Android | JNA (UniFFI) | High | JVM boundary and object marshalling overhead

Performance optimization tips

  • Pre-compile decisions. Use ZenDecisionContent to parse and compile decisions once, then reuse them for multiple evaluations. This is available in the Node.js, Python, and Go SDKs.
  • Reuse engine instances. Create a single engine at startup rather than instantiating one per request; engine creation has initialization costs.
  • Batch evaluations when possible. If you're evaluating the same decision with many inputs, batch them to amortize any per-call overhead.
  • Profile your specific decisions. Performance varies significantly based on decision complexity. A simple lookup table runs 50x faster than a complex multi-stage graph with custom functions.
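The pre-compilation tip can be sketched as a simple cache: compile a decision once on first use, then hand back the same compiled object on every later request. This is a minimal, self-contained sketch of the pattern; `compile_decision` and `DecisionCache` are hypothetical stand-ins (the real SDKs expose this via ZenDecisionContent), stubbed here with JSON parsing so the example runs on its own.

```python
import json

# Stand-in for the engine's expensive one-time parse/compile step.
# In the real SDKs this would be building a ZenDecisionContent.
def compile_decision(raw: str) -> dict:
    return json.loads(raw)

class DecisionCache:
    """Compile each decision once; reuse it for all later evaluations."""

    def __init__(self):
        self._cache: dict[str, dict] = {}

    def get(self, key: str, raw: str) -> dict:
        if key not in self._cache:
            self._cache[key] = compile_decision(raw)  # only on first use
        return self._cache[key]

cache = DecisionCache()
raw = '{"nodes": [], "edges": []}'
first = cache.get("pricing", raw)
second = cache.get("pricing", raw)
assert first is second  # same compiled object, no re-parse
```

The same idea applies to engine instances: hold one long-lived cache (or engine) at process startup instead of rebuilding it per request.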

Benchmark methodology

All benchmarks were run on a MacBook Pro with Apple M3 chip using single-core execution. Each scenario represents a real-world business rule pattern:
  • Decision tables only (100K+ req/s): Pure table lookups and expressions
  • Mixed graphs (50-100K req/s): Multiple nodes, branching logic
  • Function-heavy (3-50K req/s): Decisions using Function nodes with JavaScript execution
The benchmark measures pure evaluation time excluding I/O, network latency, and decision loading. Production throughput depends on your infrastructure, decision structure, and data serialization overhead.
The slowest benchmarks (Company analysis, Insurance breakdown, AML) use Function nodes extensively. If you need maximum throughput, prefer decision tables and expression nodes over custom JavaScript functions.
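The methodology above, timing only the evaluation loop while excluding I/O and decision loading, can be sketched as a small throughput harness. The `evaluate` function here is a hypothetical stand-in for a pre-compiled decision's evaluate call, so the numbers it produces say nothing about the engine itself; only the measurement shape matters.

```python
import time

def evaluate(context: dict) -> dict:
    # Stand-in decision: a trivial table-style lookup.
    return {"tier": "gold" if context["spend"] > 1000 else "standard"}

def measure_throughput(n: int = 100_000) -> float:
    """Return evaluations per second, timing only the hot loop."""
    context = {"spend": 1500}
    start = time.perf_counter()
    for _ in range(n):
        evaluate(context)
    elapsed = time.perf_counter() - start
    return n / elapsed

rps = measure_throughput()
print(f"{rps:,.0f} evaluations/second")
```

Run a harness like this against your own decisions with representative inputs before relying on the headline figures; a Function-node-heavy graph will land far from a pure-table one.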