The ZEN Engine is written in Rust, giving you direct access to the core engine with zero FFI overhead.

Installation

Add to your Cargo.toml:
[dependencies]
zen-engine = "*"        # pin to a released version in production
zen-expression = "*"    # optional: standalone expression evaluation
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }

Basic usage

use zen_engine::DecisionEngine;
use zen_engine::model::DecisionContent;
use serde_json::json;

#[tokio::main]
async fn main() {
    let decision_content: DecisionContent =
        serde_json::from_str(include_str!("./pricing-rules.json")).unwrap();

    let engine = DecisionEngine::default();
    let decision = engine.create_decision(decision_content.into());

    let response = decision.evaluate(json!({
        "customer": { "tier": "gold", "yearsActive": 3 },
        "order": { "subtotal": 150, "items": 5 }
    }).into()).await.unwrap();

    println!("{}", response.result);
    // => {"discount":0.15,"freeShipping":true}
}

Loader

The loader pattern enables dynamic decision loading from any source. ZEN Engine provides several built-in loaders.

FilesystemLoader

Load decisions from a directory:
use zen_engine::DecisionEngine;
use zen_engine::loader::{FilesystemLoader, FilesystemLoaderOptions};
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let loader = FilesystemLoader::new(FilesystemLoaderOptions {
        root: "./decisions".to_string(),
        keep_in_memory: true,
    });
    let engine = DecisionEngine::default().with_loader(Arc::new(loader));

    let response = engine.evaluate("pricing.json", json!({ "amount": 100 }).into()).await.unwrap();
    println!("{}", response.result);
}

MemoryLoader

Store decisions in memory:
use zen_engine::DecisionEngine;
use zen_engine::loader::MemoryLoader;
use zen_engine::model::DecisionContent;
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let loader = Arc::new(MemoryLoader::default());

    let content: DecisionContent = serde_json::from_str(include_str!("./pricing.json")).unwrap();
    loader.add("pricing", content);

    let engine = DecisionEngine::default().with_loader(loader);

    let response = engine.evaluate("pricing", json!({ "amount": 100 }).into()).await.unwrap();
    println!("{}", response.result);
}

Closure loader

Define custom loading logic with an async callback; load_from_source here is a placeholder for your own fetch function:
use zen_engine::DecisionEngine;
use zen_engine::loader::LoaderError;
use zen_engine::model::DecisionContent;
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let engine = DecisionEngine::default().with_closure_loader(|key| async move {
        // Load from any source: HTTP, S3, database, etc.
        match key.as_str() {
            "pricing" => {
                let content: DecisionContent = load_from_source("pricing").await?;
                Ok(Arc::new(content))
            }
            _ => Err(LoaderError::NotFound(key).into()),
        }
    });

    let response = engine.evaluate("pricing", json!({ "amount": 100 }).into()).await.unwrap();
    println!("{}", response.result);
}

Custom loader

Implement the DecisionLoader trait for full control. As before, load_from_source is a placeholder for your own fetch function:
use zen_engine::loader::{DecisionLoader, LoaderError, LoaderResponse};
use zen_engine::model::DecisionContent;
use std::collections::HashMap;
use std::future::Future;
use std::sync::{Arc, RwLock};

#[derive(Default)]
pub struct MyLoader {
    cache: RwLock<HashMap<String, Arc<DecisionContent>>>,
}

impl DecisionLoader for MyLoader {
    fn load<'a>(&'a self, key: &'a str) -> impl Future<Output = LoaderResponse> + 'a {
        async move {
            // Check cache
            if let Some(content) = self.cache.read().unwrap().get(key) {
                return Ok(content.clone());
            }

            // Load from source
            let content = load_from_source(key).await?;
            let arc_content = Arc::new(content);

            // Cache it
            self.cache.write().unwrap().insert(key.to_string(), arc_content.clone());

            Ok(arc_content)
        }
    }
}
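
Register a custom loader the same way as the built-in ones:
let engine = DecisionEngine::default().with_loader(Arc::new(MyLoader::default()));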

Cloud storage loaders

For production-ready cloud storage loaders (AWS S3, Azure Blob Storage, Google Cloud Storage) with zip support, see the GoRules Agent reference implementation.

Pre-compilation

Pre-compile decisions for improved evaluation performance:
let mut decision = engine.create_decision(content.into());
decision.compile();

// Subsequent evaluations are faster
let response = decision.evaluate(input.into()).await.unwrap();
Compilation parses and optimizes the decision graph ahead of time, reducing overhead during evaluation. This is especially beneficial when the same decision is evaluated many times.
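
When the same small set of decisions is evaluated repeatedly, a common approach is to parse and compile them all once at startup. A minimal sketch, mirroring the compile call above (the file names are hypothetical):
use zen_engine::DecisionEngine;
use zen_engine::model::DecisionContent;

// Sketch: parse and compile a set of decisions once at startup, then reuse
// the compiled values for every evaluation. The file names are illustrative.
let engine = DecisionEngine::default();
let decisions: Vec<_> = [include_str!("./pricing.json"), include_str!("./shipping.json")]
    .into_iter()
    .map(|raw| {
        let content: DecisionContent = serde_json::from_str(raw).unwrap();
        let mut decision = engine.create_decision(content.into());
        decision.compile();
        decision
    })
    .collect();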

Error handling

use zen_engine::EvaluationError;

match decision.evaluate(input.into()).await {
    Ok(response) => println!("{}", response.result),
    Err(e) => eprintln!("Evaluation failed: {:?}", e),
}
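
A failed evaluation can also degrade gracefully. As a sketch, reusing the discount and freeShipping fields from the basic-usage example as illustrative defaults:
use serde_json::json;

let result = match decision.evaluate(input.into()).await {
    Ok(response) => response.result,
    Err(e) => {
        // Log the error and fall back to safe defaults (illustrative fields).
        eprintln!("Evaluation failed, using defaults: {:?}", e);
        json!({ "discount": 0.0, "freeShipping": false }).into()
    }
};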

Tracing

Enable tracing to inspect decision execution:
use zen_engine::EvaluationOptions;

let response = decision.evaluate_with_opts(input, EvaluationOptions {
    trace: true,
    ..Default::default()
}).await.unwrap();

println!("{:?}", response.trace);
// Each node's input, output, and performance timing

println!("{}", response.performance);
// Total evaluation time
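
For a full picture of a single run, you can also serialize the entire response. A sketch, assuming the response type derives serde::Serialize (true of recent zen-engine releases):
// Dumps the result, per-node traces, and timings as pretty-printed JSON.
println!("{}", serde_json::to_string_pretty(&response).unwrap());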

Expression utilities

The zen-expression crate provides expression evaluation outside of decisions:
use zen_expression::{evaluate_expression, evaluate_unary_expression};
use serde_json::json;

// Standard expressions
let result = evaluate_expression("a + b", json!({ "a": 5, "b": 3 }).into()).unwrap();
// => 8

let total = evaluate_expression("sum(items)", json!({ "items": [1, 2, 3, 4] }).into()).unwrap();
// => 10

// Unary expressions (comparison against $)
let is_valid = evaluate_unary_expression(">= 5", json!({ "$": 10 }).into()).unwrap();
// => true

let in_list = evaluate_unary_expression("'US', 'CA', 'MX'", json!({ "$": "US" }).into()).unwrap();
// => true

High performance with Isolate

For repeated evaluations, use Isolate to reuse allocated memory:
use zen_expression::Isolate;
use serde_json::json;

let context = json!({ "tax": { "percentage": 10 } });
let mut isolate = Isolate::with_environment(context.into());

// Reuses memory across evaluations
for amount in [50, 100, 150, 200] {
    let tax = isolate.run_standard(&format!("{} * tax.percentage / 100", amount)).unwrap();
    println!("Tax on {}: {}", amount, tax);
}

Design notes

Single-threaded expression engine

The expression engine is single-threaded by design for maximum performance. This avoids synchronization overhead and enables optimizations like memory reuse in Isolate.

Thread-pinned futures

Although evaluate is async, the returned Future is !Send — it must complete on the same thread where it was started. This is intentional: sending data across threads would be costly in this scenario, and pinning enables significant performance gains. However, this can be awkward with async runtimes that expect Send futures. For multi-threaded workloads, use LocalPoolHandle from tokio-util to spawn pinned tasks:
use std::future::Future;
use std::sync::OnceLock;
use std::thread::available_parallelism;
use tokio::task::JoinHandle;
use tokio_util::task::LocalPoolHandle;

fn parallelism() -> usize {
    available_parallelism().map(Into::into).unwrap_or(1)
}

fn worker_pool() -> LocalPoolHandle {
    static LOCAL_POOL: OnceLock<LocalPoolHandle> = OnceLock::new();
    LOCAL_POOL
        .get_or_init(|| LocalPoolHandle::new(parallelism()))
        .clone()
}

fn spawn_pinned<F, Fut>(create_task: F) -> JoinHandle<Fut::Output>
where
    F: FnOnce() -> Fut + Send + 'static,
    Fut: Future + 'static,
    Fut::Output: Send + 'static,
{
    worker_pool().spawn_pinned(create_task)
}
Usage:
let result = spawn_pinned(|| async {
    let engine = DecisionEngine::default();
    let decision = engine.create_decision(content.into());
    decision.evaluate(input.into()).await
}).await.unwrap();

Best practices

Use FilesystemLoader with keep_in_memory: true. This caches parsed decisions in memory for optimal performance.

Initialize the engine once. Create a single DecisionEngine instance at application startup and reuse it for all evaluations.

Use Isolate for repeated expression evaluation. It reuses allocated memory, drastically improving throughput.
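
A minimal sketch combining the first two recommendations (the decision file names are hypothetical):
use std::sync::Arc;
use zen_engine::DecisionEngine;
use zen_engine::loader::{FilesystemLoader, FilesystemLoaderOptions};
use serde_json::json;

#[tokio::main]
async fn main() {
    // One engine for the whole process; keep_in_memory caches parsed decisions.
    let loader = FilesystemLoader::new(FilesystemLoaderOptions {
        root: "./decisions".to_string(),
        keep_in_memory: true,
    });
    let engine = DecisionEngine::default().with_loader(Arc::new(loader));

    // Reuse the same engine for every evaluation.
    for key in ["pricing.json", "shipping.json"] {
        let response = engine
            .evaluate(key, json!({ "amount": 100 }).into())
            .await
            .unwrap();
        println!("{}: {}", key, response.result);
    }
}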