The GoRules architecture separates rule management from rule execution. This separation lets business users author and test rules centrally while developers deploy execution engines in any environment.
Architecture overview
Enterprise GoRules deployments consist of two main components:
| Component | Purpose |
|---|---|
| GoRules BRMS | Management layer — author, test, version, simulate, and publish rules |
| Execution layer | Runtime layer — evaluate rules in your applications via Agent or SDK |
Management: GoRules BRMS
The BRMS (Business Rules Management System) is where you create and manage rules. It provides a web interface for business users and APIs for developer automation.
Capabilities
- Authoring — Visual editor for decision tables, graphs, expressions, functions, and custom apps
- Testing — Built-in simulator to validate rules before deployment
- Versioning — Git-like branching, merging, and history for all rules
- Publishing — Create release artifacts and deploy to any environment
- Collaboration — Multi-user editing with role-based access control and conflict resolution
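The publishing and automation capabilities can also be driven from a pipeline through the BRMS API. The snippet below is only a sketch of that idea in Python: the endpoint path, payload shape, and environment variable names are illustrative placeholders, not the actual BRMS API contract — consult the BRMS API reference for the real routes and authentication scheme.

```python
import os

import requests

BRMS_URL = os.environ["BRMS_URL"]          # e.g. https://brms.internal.example.com (placeholder)
API_TOKEN = os.environ["BRMS_API_TOKEN"]   # service-account token used by the pipeline (placeholder)

# Hypothetical call: create a release for a project from CI/CD.
# The real path, method, and body are defined by the BRMS API reference.
response = requests.post(
    f"{BRMS_URL}/api/projects/loan-approval/releases",   # placeholder path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"name": "release-candidate", "branch": "main"},  # placeholder payload
    timeout=30,
)
response.raise_for_status()
print("Release created:", response.json())
```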
Components
The BRMS bundles three components in a single Docker image, distributed via Docker Hub:
| Component | Purpose |
|---|---|
| BRMS UI | Web interface for rule authoring and collaboration |
| BRMS API | RESTful interface for integration and automation |
| ZEN Engine | High-performance rules evaluation for testing and simulation |
Infrastructure requirements
| Requirement | Specification |
|---|---|
| Database | PostgreSQL 12+ (required for all BRMS data) |
| Runtime | Docker (Linux x86_64) |
| Memory | 1GB minimum per instance |
| CPU | 0.5 vCPU minimum |
| Network | Access to portal.gorules.io for licensing |
The BRMS is stateless — all data (projects, decisions, users, audit logs) lives in PostgreSQL. This enables horizontal scaling by adding instances behind a load balancer.
Execution
After publishing rules from the BRMS, you have two options for executing them in your target environments. Both the Agent and the ZEN Engine SDK are open source.
Whether you use GoRules Cloud or self-hosted BRMS, the execution layer (Agent or SDK) always runs inside your own infrastructure. Your data never leaves your environment during rule evaluation.
GoRules Agent
The Agent is an open-source headless rules engine that serves decisions over a REST API. Built entirely in Rust, it delivers high performance with minimal resource usage, and is distributed via Docker Hub.
Best for: Language-agnostic integration, centralized execution, automatic updates without code changes.
How deployment works:
- Publish — In the BRMS, merge changes to the main branch and create a release. The release artifact is deployed to object storage (S3, Azure Blob, GCS).
- Detect — The Agent polls object storage at a configured interval. When the remote ETag differs from the locally cached one, the Agent knows a new version is available (see the polling sketch below).
- Reload — The Agent pulls the new package into memory and compiles the rules at runtime. The switch happens atomically with zero downtime.
This pattern supports multiple environments (DEV, UAT, PROD) with independent Agents polling from different storage paths or buckets.
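The detect step can be pictured as a simple ETag comparison loop. The sketch below is illustrative only: it assumes an S3 bucket and key for the release artifact, and the Agent implements this internally in Rust, so you never write this code yourself.

```python
import time

import boto3

s3 = boto3.client("s3")
BUCKET = "gorules-releases"   # assumed bucket name
KEY = "prod/release.zip"      # assumed path for the PROD environment

def current_etag() -> str:
    """Fetch only the object's metadata; the ETag changes whenever a new release is uploaded."""
    return s3.head_object(Bucket=BUCKET, Key=KEY)["ETag"]

local_etag = None
while True:
    remote_etag = current_etag()
    if remote_etag != local_etag:
        # A new release artifact is available: download it and swap the
        # in-memory rule set atomically (the Agent does this with zero downtime).
        print(f"New release detected ({remote_etag}), reloading rules...")
        local_etag = remote_etag
    time.sleep(30)  # polling interval
```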
| Requirement | Specification |
|---|---|
| Runtime | Docker (Linux x86_64) |
| Memory | 512MB minimum |
| CPU | 0.25 vCPU minimum |
| Storage access | S3, Azure Blob, GCS, or bundled files |
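Because the Agent speaks plain HTTP, any language can call it. The example below is a sketch in Python using requests; the evaluation route and payload shape are placeholders — check the Agent documentation for the exact endpoint exposed by your version.

```python
import requests

AGENT_URL = "http://gorules-agent.internal:8080"   # assumed internal address

# Hypothetical route: evaluate a published "pricing" decision with an input context.
response = requests.post(
    f"{AGENT_URL}/api/evaluate/pricing",            # placeholder path
    json={"context": {"customerTier": "gold", "cartTotal": 240.0}},  # example input
    timeout=5,
)
response.raise_for_status()
print(response.json())   # decision output as JSON
```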
ZEN Engine SDK
Embed the ZEN Engine directly in your application code for in-process evaluation.
Best for: Sub-millisecond latency, offline capability, tight integration.
Supported languages: Rust, Go, Python, Node.js, Java, Kotlin, Swift
How deployment works:
- Export — Use the BRMS API to export decision files to your Git repository
- Build — Your CI/CD pipeline packages the rules with the application
- Execute — The SDK loads rules from bundled files or fetches them from storage at startup (see the sketch below)
This pattern gives you full control over when rules deploy and enables offline execution.
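As a sketch of the in-process pattern, the Python bindings (the zen-engine package) can resolve decision keys to JSON files bundled with the application through a loader callback. The file name and input context below are examples; verify the exact API against the SDK documentation for your language.

```python
import zen  # pip install zen-engine

def loader(key: str) -> str:
    # Resolve decision keys to JSON files shipped with the application.
    with open(f"./rules/{key}", "r") as f:
        return f.read()

engine = zen.ZenEngine({"loader": loader})

# Evaluate a bundled decision entirely in-process, with no network hop.
result = engine.evaluate("pricing.json", {"customerTier": "gold", "cartTotal": 240.0})
print(result)
```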
Multi-environment setup
A typical enterprise deployment separates environments while sharing a single BRMS instance.
Each environment (DEV, UAT, PROD) runs its own Agent instances with IAM-scoped access to environment-specific storage paths. CI/CD integration is optional — you can publish directly from the BRMS or integrate with your existing release pipelines.
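As an illustration of how a pipeline can promote a release between those environments, the sketch below copies an artifact from a UAT prefix to a PROD prefix in the same bucket; the PROD Agent picks it up on its next ETag check. Bucket and key names are assumptions for the example.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "gorules-releases"   # assumed bucket; prefixes separate the environments

# Promote a release that passed UAT to PROD by copying the artifact between
# environment prefixes; IAM policies keep each Agent scoped to its own prefix.
s3.copy_object(
    Bucket=BUCKET,
    CopySource={"Bucket": BUCKET, "Key": "uat/release.zip"},
    Key="prod/release.zip",
)
```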
Scaling
Both BRMS and Agent support horizontal and vertical scaling. Because management and execution are decoupled, you can scale each layer independently based on your needs.
Horizontal scaling — Add more instances behind a load balancer. Both BRMS and Agent are stateless, so you can scale out without coordination.
Vertical scaling — Increase memory and CPU to handle larger rule sets or higher throughput. The Agent’s Rust implementation provides linear scaling with available resources.
| Metric | Agent | SDK |
|---|---|
| Latency | 10-20ms (same VNET) | < 1ms |
| Throughput | 1-10K req/s (single core) | 10-100K req/s (single core) |
| Hot-reload | Yes (zero downtime) | Depends on implementation |
Performance varies with rule complexity; extremely complex models containing thousands of rules can run slower.
Decision tables and expression nodes achieve the highest throughput, while decisions using Function nodes run slower due to JavaScript runtime overhead. See Performance benchmarks for detailed SDK measurements.
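Because throughput depends heavily on the shape of your rule model, it is worth measuring with your own decisions. A minimal timing harness for in-process evaluation might look like the sketch below (same assumed zen-engine loader API and example file name as above).

```python
import time

import zen  # pip install zen-engine

def loader(key: str) -> str:
    with open(f"./rules/{key}", "r") as f:
        return f.read()

engine = zen.ZenEngine({"loader": loader})
context = {"customerTier": "gold", "cartTotal": 240.0}  # example input

# Warm up once so compilation is excluded, then time repeated evaluations.
engine.evaluate("pricing.json", context)
iterations = 10_000
start = time.perf_counter()
for _ in range(iterations):
    engine.evaluate("pricing.json", context)
elapsed = time.perf_counter() - start

print(f"avg latency: {elapsed / iterations * 1e3:.3f} ms, "
      f"throughput: {iterations / elapsed:.0f} eval/s")
```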
High availability
For production deployments:
- Multiple BRMS instances — Run 2+ replicas behind a load balancer
- Database replication — Use managed PostgreSQL with read replicas
- Storage redundancy — Enable multi-AZ and versioning on object storage
- Multiple Agent instances — Run at least 2 Agent replicas in production with health checks for failover
See Disaster recovery for more details.