Performance Optimizations
The Rule Engine is designed for high-performance event processing with multiple optimization strategies. This document explains the performance characteristics and optimization techniques used throughout the system.
Performance Budget
Target: Process events in <5ms for typical scenarios
Typical measurements:
- No matching rules: <0.1ms
- Simple rule (1 trigger, 1 condition): 0.3-0.5ms
- Complex rule (multiple conditions, variables): 1-3ms
- Multiple workspaces (3 workspaces): 2-5ms
Optimization goal: Scale with complexity of matching rules, not total rule count
O(1) TriggerMatcher Indexing
The Problem
Naive approach: Check every rule for every event
// BAD: O(n) where n = total rules
function findMatchingRules(event: ClientEvent, allRules: Rule[]): Rule[] {
return allRules.filter(rule =>
rule.triggers.some(trigger => triggerMatches(trigger, event))
);
}
Time complexity: O(n * t) where n = rules, t = triggers per rule
At scale:
- 10 rules: ~10 iterations
- 100 rules: ~100 iterations
- 1000 rules: ~1000 iterations
The Solution: Map-Based Indexing
Build index during registration:
// lib/ruleEngine/triggers/TriggerMatcher.ts:193-202
private indexEventType(ruleId: string, eventType: string): void {
if (eventType === "*") {
this.wildcardRules.add(ruleId);
} else {
if (!this.triggerIndex.has(eventType)) {
this.triggerIndex.set(eventType, new Set());
}
this.triggerIndex.get(eventType)!.add(ruleId);
}
}
Index structure:
Map {
"pageView" => Set { "rule-1", "rule-5", "rule-12" },
"lead" => Set { "rule-2", "rule-7" },
"purchase" => Set { "rule-3", "rule-7", "rule-9" }
}
Lookup:
// lib/ruleEngine/triggers/TriggerMatcher.ts:69-83
public findMatchingRuleIds(event: ClientEvent): Set<string> {
const matchingRuleIds = new Set<string>();
// O(w) - Add wildcard rules
this.wildcardRules.forEach(ruleId => matchingRuleIds.add(ruleId));
// O(1) - Map lookup by event type
const eventType = event.event;
const typeMatches = this.triggerIndex.get(eventType);
// O(m) - Add matching rule IDs
if (typeMatches) {
typeMatches.forEach(ruleId => matchingRuleIds.add(ruleId));
}
return matchingRuleIds;
}
Time complexity: O(w + m) where:
- w = wildcard rules (typically 0-5)
- m = rules matching this event type (typically 1-10)
Independent of: Total rule count (could be 1000+)
Performance gain:
- 100 rules, 5 matching: 100 iterations → 5 iterations = 20x faster
- 1000 rules, 10 matching: 1000 iterations → 10 iterations = 100x faster
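The same pattern, reduced to a standalone sketch (illustrative names, not the engine's actual registration code), shows how the index is built once at registration time and then queried per event:

```typescript
// Standalone sketch of the indexing pattern: build the Map once, then each
// lookup is one Map.get plus iteration over the (small) matching sets.
const triggerIndex = new Map<string, Set<string>>();
const wildcardRules = new Set<string>();

function indexRule(ruleId: string, eventType: string): void {
  if (eventType === "*") {
    wildcardRules.add(ruleId);
    return;
  }
  if (!triggerIndex.has(eventType)) {
    triggerIndex.set(eventType, new Set());
  }
  triggerIndex.get(eventType)!.add(ruleId);
}

indexRule("rule-1", "pageView");
indexRule("rule-2", "lead");
indexRule("rule-3", "*");

// Lookup: wildcard rules plus the set registered for this event type
const candidates = new Set<string>(wildcardRules);
(triggerIndex.get("pageView") ?? new Set<string>()).forEach(id => candidates.add(id));
console.log([...candidates]); // e.g. ["rule-3", "rule-1"]
```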
Space Complexity Trade-off
Memory usage: O(n * t) where n = rules, t = triggers per rule
Typical scenario:
- 100 rules
- 2 triggers per rule
- 20 unique event types
- Index size: ~200 entries
- Memory: <10KB
Conclusion: Negligible memory cost for significant performance gain
Deep Clone Only When Needed
The Problem
Requirement: Each workspace must process events independently
Naive approach: Clone event for every rule
// BAD: Clone for every rule
for (const rule of rules) {
const ruleEventClone = deepClone(event); // Expensive!
processRule(rule, ruleEventClone);
}
Cost: With 10 rules in a workspace, the event is cloned 10 times
The Solution: Clone Once Per Workspace
// lib/ruleEngine/core/ruleEngine.ts:328-510
for (const workspaceId of allWorkspaceIds) {
// OPTIMIZATION: Clone once per workspace
const workspaceEventClone = deepClone(event);
const rules = this.workspaceRules.get(workspaceId) || [];
const context = this.createExecutionContext(workspaceEventClone, workspaceId);
// All rules in this workspace share the same event clone
const matchingRules = this.findMatchingRules(rules, context);
for (const rule of matchingRules) {
// Rules can modify context.event (the clone)
const actionResult = this.actionExecutor.execute(rule.actions, context);
if (actionResult.modifiedEvent) {
context.event = actionResult.modifiedEvent; // Update shared clone
}
}
}
Performance:
- 3 workspaces, 10 rules each: 3 clones (not 30)
- 10 workspaces, 5 rules each: 10 clones (not 50)
Deep clone cost: Typical event (1-5KB) = ~0.1-0.5ms per clone
Savings: ~90% reduction in clone operations
Deep Clone Implementation
Uses a recursive deep-clone implementation:
// lib/ruleEngine/utils/objectUtils.ts
export function deepClone<T>(obj: T): T {
if (obj === null || typeof obj !== 'object') {
return obj;
}
if (obj instanceof Date) {
return new Date(obj.getTime()) as any;
}
if (obj instanceof Array) {
const cloneArr: any[] = [];
for (let i = 0; i < obj.length; i++) {
cloneArr[i] = deepClone(obj[i]);
}
return cloneArr as any;
}
if (obj instanceof Object) {
const cloneObj: any = {};
for (const key in obj) {
if (obj.hasOwnProperty(key)) {
cloneObj[key] = deepClone(obj[key]);
}
}
return cloneObj;
}
return obj;
}
Handles:
- Nested objects
- Arrays
- Date objects
- Null/undefined
Doesn't handle (intentionally):
- Functions (not needed in events)
- Circular references (no cycle detection; would recurse indefinitely)
- WeakMap/WeakSet (not needed)
Why not structuredClone()? Not available in all browsers we support
Why not JSON.parse(JSON.stringify())? Loses Date objects and has performance issues
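If native support becomes a safe baseline, a feature-detecting wrapper could prefer structuredClone() and fall back to the manual clone. This is a sketch of that option, not the engine's current behavior:

```typescript
import { deepClone } from "./objectUtils"; // path assumed; matches the utils module above

// Sketch only: prefer the native structured clone when the runtime provides
// it, otherwise fall back to the manual deepClone shown earlier.
export function cloneEvent<T>(obj: T): T {
  const native = (globalThis as any).structuredClone;
  if (typeof native === "function") {
    return native(obj) as T;
  }
  return deepClone(obj);
}
```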
Early Exit in Condition Groups
Short-Circuit Evaluation
AND logic (within group):
// lib/ruleEngine/conditions/conditionEvaluator.ts:27-29
public evaluateAll(conditions: Condition[], context: ExecutionContext): boolean {
return conditions.every(condition => this.evaluate(condition, context));
}
Array.every() behavior:
- Returns false on the first failing condition
- Doesn't evaluate remaining conditions
Example:
conditions: [
{ field: "page.url", operator: "contains", value: "checkout" }, // ✓ PASS
{ field: "event.type", operator: "equals", value: "lead" }, // ✗ FAIL - stops here
{ field: "data.price", operator: "greaterThan", value: 100 }, // Not evaluated
{ field: "data.items", operator: "arrayContains", value: "X" } // Not evaluated
]
Performance gain: 50% reduction in evaluations on average
OR logic (between groups):
// lib/ruleEngine/conditions/conditionEvaluator.ts:58-74
public evaluateConditionGroups(conditionGroups: ConditionGroup[], context: ExecutionContext): boolean {
if (!conditionGroups || conditionGroups.length === 0) {
return true;
}
// OR between groups - any group matching = true
return conditionGroups.some(group => {
if (!group.conditions || group.conditions.length === 0) {
return true;
}
return this.evaluateAll(group.conditions, context);
});
}
Array.some() behavior:
- Returns true on the first matching group
- Doesn't evaluate remaining groups
Example:
conditionGroups: [
{ conditions: [A, B, C] }, // All pass - returns true, stops here
{ conditions: [D, E, F] }, // Not evaluated
{ conditions: [G, H, I] } // Not evaluated
]
Performance gain: 66% reduction in group evaluations for this example
Regex Caching
The Problem
Regex compilation is expensive:
// BAD: Compiles regex every evaluation
function matches(str: string, pattern: string): boolean {
return new RegExp(pattern).test(str); // ~100μs compilation
}
Scenario: Rule evaluated 1000 times with same pattern
- Compilation time: 1000 × 100μs = 100ms wasted
The Solution: RegexCache
Location: /lib/ruleEngine/utils/regexCache.ts
export class RegexCache {
private cache: Map<string, RegExp | null> = new Map();
private maxSize: number = 100;
public getOrCreate(pattern: string, flags: string = ""): RegExp | null {
const key = `${pattern}::${flags}`;
// Return cached if exists
if (this.cache.has(key)) {
return this.cache.get(key)!;
}
// Compile and cache
try {
const regex = new RegExp(pattern, flags);
// Evict oldest if cache full
if (this.cache.size >= this.maxSize) {
const firstKey = this.cache.keys().next().value;
this.cache.delete(firstKey);
}
this.cache.set(key, regex);
return regex;
} catch (error) {
console.error(`Invalid regex pattern: ${pattern}`, error);
this.cache.set(key, null); // Cache failure to avoid retrying
return null;
}
}
}
Features:
- Caches compiled RegExp objects
- Key includes flags ("pattern::i" vs "pattern::")
- Caches failures (invalid patterns stored as null)
- Size limit (100 patterns max)
- FIFO eviction (removes the oldest entry when full)
Usage
// lib/ruleEngine/conditions/conditionEvaluator.ts:370-379
private matches(str: any, pattern: any, caseSensitive?: boolean): boolean {
if (typeof str !== "string" || typeof pattern !== "string") return false;
const flags = caseSensitive === false ? 'i' : '';
const regex = this.regexCache.getOrCreate(pattern, flags);
if (!regex) {
return false; // Invalid pattern
}
return regex.test(str);
}
Performance Impact
First match (cache miss):
- Compile regex: ~100μs
- Test: ~1μs
- Total: ~101μs
Subsequent matches (cache hit):
- Lookup: ~0.01μs
- Test: ~1μs
- Total: ~1.01μs
Speedup: 100x faster for repeated patterns
Real-world scenario:
- Rule checks URL contains "checkout" on every pageView
- Page has 10 pageView events per session
- Without cache: 10 × 100μs = 1000μs = 1ms
- With cache: 100μs + 9 × 1μs = 109μs ≈ 0.1ms
- Savings: 0.9ms per session
Infinite Loop Prevention
The Problem
Scenario: Rule fires event that triggers itself
// Rule 1: On pageView, fire customEvent
{
triggers: [{ type: "oneTrackEvent", config: { eventType: "pageView" } }],
actions: [{ type: "fireEvent", config: { eventType: "customEvent" } }]
}
// Rule 2: On customEvent, fire pageView
{
triggers: [{ type: "oneTrackEvent", config: { eventType: "customEvent" } }],
actions: [{ type: "fireEvent", config: { eventType: "pageView" } }]
}
// Result: pageView → customEvent → pageView → customEvent → ... INFINITE LOOP
Without protection: Stack overflow, browser hang
The Solution: Recursion Depth Tracking
// lib/ruleEngine/core/ruleEngine.ts:127-128
private eventProcessingDepth: number = 0;
private readonly MAX_RECURSION_DEPTH = 10;
// lib/ruleEngine/core/ruleEngine.ts:283-293
public processEvent(event: ClientEvent): ProcessedEvent[] {
// Increment depth
this.eventProcessingDepth++;
if (this.eventProcessingDepth > this.MAX_RECURSION_DEPTH) {
console.error(
`[RuleEngine] Maximum recursion depth (${this.MAX_RECURSION_DEPTH}) exceeded. ` +
`This likely indicates an infinite loop where rules are firing events that trigger themselves. ` +
`Event type: ${event.event}, Event ID: ${eventId}`
);
this.eventProcessingDepth--;
return [event]; // Safe fallback: return original event
}
try {
// ... process event
} finally {
// Always decrement, even on error
this.eventProcessingDepth--;
}
}
Max depth: 10 levels of nesting
Why 10? Balances legitimate use cases vs infinite loops:
- Legitimate: Event A → B → C (3 levels) is reasonable
- Infinite loop: Depth 10+ almost always indicates misconfiguration
Behavior on overflow:
- Log error with context
- Decrement counter
- Return original event unchanged
- Continue processing (no cascade failure)
Performance: Single integer increment/decrement (~0.001μs)
Lazy Background Monitor Initialization
The Problem
Background monitors have performance cost:
| Monitor | Cost |
|---|---|
| ScrollDepthMonitor | Scroll event listener (passive) |
| TimeOnPageMonitor | 1-second interval timer |
| PostMessageMonitor | Message event listener |
| DOMEventMonitor | One-time listeners (low cost) |
Goal: Don't start monitors if no rules use them
The Solution: Conditional Initialization
// lib/ruleEngine/triggers/backgroundTriggerManager.ts:31-71
export function initializeBackgroundTriggers(ruleEngine: RuleEngine): void {
if (isInitialized || typeof window === "undefined") {
return;
}
// Get required trigger types from registered rules
const requiredTriggers = ruleEngine.getRequiredBackgroundTriggers();
// Only initialize monitors that are actually needed
if (requiredTriggers.has("scrollDepth")) {
scrollDepthMonitor = new ScrollDepthMonitor(ruleEngine);
scrollDepthMonitor.start();
activeMonitors.add("scrollDepth");
}
if (requiredTriggers.has("timeOnPage")) {
timeOnPageMonitor = new TimeOnPageMonitor(ruleEngine);
timeOnPageMonitor.start();
activeMonitors.add("timeOnPage");
}
// ... similar for other monitors
}
Check required triggers:
// lib/ruleEngine/core/ruleEngine.ts:226-242
public getRequiredBackgroundTriggers(): Set<string> {
const requiredTriggers = new Set<string>();
// Check all rules across all workspaces
this.workspaceRules.forEach(rules => {
rules.forEach(rule => {
rule.triggers.forEach(trigger => {
// Only add background trigger types (not oneTrackEvent)
if (trigger.type !== "oneTrackEvent") {
requiredTriggers.add(trigger.type);
}
});
});
});
return requiredTriggers;
}
Performance impact:
| Scenario | Monitors Started | Cost |
|---|---|---|
| No background triggers | 0 | Zero overhead |
| 1 scroll trigger | 1 (scrollDepth) | 1 passive listener |
| 1 time trigger | 1 (timeOnPage) | 1 interval timer |
| All trigger types | 4 | All monitors |
Savings: If 80% of workspaces don't use background triggers, 80% reduction in monitor overhead
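For a sense of what each monitor costs, a background monitor is essentially one listener or timer plus a hand-off into the engine. A minimal sketch of a scroll-depth monitor (assumed shape and thresholds; the real ScrollDepthMonitor may differ):

```typescript
// Minimal sketch of a scroll-depth monitor: one passive scroll listener that
// reports threshold crossings into the engine. Thresholds are hard-coded here;
// the real monitor tracks whatever its rules configure.
class ScrollDepthMonitorSketch {
  private reported = new Set<number>();

  constructor(private ruleEngine: { processEvent(event: any): any }) {}

  start(): void {
    window.addEventListener("scroll", this.onScroll, { passive: true });
  }

  stop(): void {
    window.removeEventListener("scroll", this.onScroll);
  }

  private onScroll = (): void => {
    const doc = document.documentElement;
    const scrollable = doc.scrollHeight - window.innerHeight;
    const percent = scrollable > 0 ? Math.round((window.scrollY / scrollable) * 100) : 100;
    for (const threshold of [25, 50, 75, 100]) {
      if (percent >= threshold && !this.reported.has(threshold)) {
        this.reported.add(threshold);
        // Hand the synthetic event to the engine; the event shape is assumed.
        this.ruleEngine.processEvent({ event: "scrollDepth", data: { threshold } });
      }
    }
  };
}
```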
Dynamic Start/Stop
Start monitor when rule registered:
// lib/ruleEngine/core/ruleEngine.ts:165-174
enabledRules.forEach(rule => {
rule.triggers.forEach(trigger => {
this.triggerMatcher.registerTrigger(rule.id, trigger);
// Start background monitors dynamically if needed
if (trigger.type !== "oneTrackEvent" && this.isInitialized) {
startMonitor(trigger.type, this);
}
});
});
Stop monitor when no longer needed:
// lib/ruleEngine/core/ruleEngine.ts:1328-1330
removedTriggerTypes.forEach(triggerType => {
stopMonitor(triggerType, this); // Only stops if no rules still use it
});
Benefit: Monitor lifecycle matches actual usage
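The "only stops if no rules still use it" behavior can be pictured as re-checking the required trigger set before teardown. A sketch under that assumption (names mirror the module-level state shown above, but this is not the actual implementation):

```typescript
// Sketch of the guard described above: before stopping a monitor, re-check
// whether any remaining rule still requires that trigger type.
function stopMonitorIfUnused(triggerType: string, ruleEngine: RuleEngine): void {
  if (ruleEngine.getRequiredBackgroundTriggers().has(triggerType)) {
    return; // Another rule still depends on this monitor
  }
  if (triggerType === "scrollDepth" && scrollDepthMonitor) {
    scrollDepthMonitor.stop();
    scrollDepthMonitor = null;
    activeMonitors.delete("scrollDepth");
  }
  // ... similar for other monitor types
}
```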
Execution Limit Checking Optimization
Early Check Before Matching
Problem: Checking the execution limit only after fully evaluating a rule wastes that evaluation work
Solution: Check limit BEFORE trigger/condition evaluation
// lib/ruleEngine/core/ruleEngine.ts:346-365
for (const rule of rules) {
const shouldSuppress = rule.executionLimit?.suppressOnLimitReached !== false;
if (rule.executionLimit && shouldSuppress) {
// Check if limit reached
if (!this.canRuleExecute(rule)) {
// Would this rule match if it could execute?
if (this.wouldRuleMatch(rule, context)) {
console.log(`Rule hit execution limit - suppressing event`);
ruleHitExecutionLimit = true;
suppressedWorkspaces.push(workspaceId);
break;
}
}
}
}
Optimization: Only evaluate wouldRuleMatch() if limit already reached
Typical scenario:
- Rule has maxExecutions: 1
- Subsequent events: Limit reached, quick check + suppress
Performance:
- Without optimization: Full evaluation every time (wasted)
- With optimization: Quick limit check + skip evaluation
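canRuleExecute() itself isn't shown above; conceptually it is a counter comparison against the rule's configured limit. A minimal sketch, assuming the count lives in the ruleExecutionTracker Map described under Memory Management and the limit is rule.executionLimit.maxExecutions (the real method may also handle time windows):

```typescript
// Sketch of the limit check; field names are assumptions based on this
// document, not the engine's verified implementation.
private canRuleExecute(rule: Rule): boolean {
  const max = rule.executionLimit?.maxExecutions;
  if (max === undefined) {
    return true; // No limit configured
  }
  const executions = this.ruleExecutionTracker.get(rule.id) ?? 0;
  return executions < max;
}
```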
Variable Resolution Caching
System Variables Pre-Resolved
System variables resolved once per context:
// lib/ruleEngine/core/ruleEngine.ts:1072-1098
private initializeSystemVariables(context: ExecutionContext): void {
const systemVars: SystemVariables = {
event: context.event,
timestamp: Date.now(),
workspace: { id: context.workspaceId },
page: {
url: typeof window !== "undefined" ? window.location.href : "",
path: typeof window !== "undefined" ? window.location.pathname : "",
hostname: typeof window !== "undefined" ? window.location.hostname : "",
title: typeof document !== "undefined" ? document.title : "",
referrer: typeof document !== "undefined" ? document.referrer : ""
},
device: { type: getDeviceType() },
browser: { name: getBrowserName() }
};
for (const [key, value] of Object.entries(systemVars)) {
context.variables.set(key, value);
}
}
Benefit: Page URL, title, etc. only read once per event (not per rule)
Savings: If 10 rules reference page.url:
- Without caching: 10 × window.location.href lookups
- With caching: 1 × lookup, 9 × Map lookups
Performance: Map lookup (~0.001μs) vs DOM access (~0.1μs) = 100x faster
Workspace Variables Pre-Resolved
// lib/ruleEngine/core/ruleEngine.ts:1057-1064
const workspaceVars = this.workspaceVariables.get(workspaceId) || [];
const allVariables = [...this.globalVariables, ...workspaceVars];
// Resolve all variables into context ONCE
for (const variable of allVariables) {
const value = this.variableResolver.resolve(variable, context);
context.variables.set(variable.name, value);
}
Benefit: Workspace/global variables resolved once, not per rule
Example: Variable reads cookie value
- Without caching: Parse document.cookie for every rule
- With caching: Parse once, lookup from Map for each rule
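To see why this matters, reading a single cookie means scanning and decoding the whole document.cookie string. A sketch of such a parse (illustrative; the actual variable resolver may differ):

```typescript
// Sketch: resolving one cookie requires splitting and decoding document.cookie.
// Doing this once per event and storing the result in context.variables is
// cheaper than repeating it for every rule that references the variable.
function getCookieValue(name: string): string | undefined {
  for (const pair of document.cookie.split("; ")) {
    const eq = pair.indexOf("=");
    const key = eq === -1 ? pair : pair.slice(0, eq);
    if (key === name) {
      return decodeURIComponent(eq === -1 ? "" : pair.slice(eq + 1));
    }
  }
  return undefined;
}
```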
Path Resolution Optimization
Variable Map Lookup First
// lib/ruleEngine/utils/pathResolver.ts:3-23
export function resolvePath(path: string, context: ExecutionContext): any {
// OPTIMIZATION: Check variables first (fast Map lookup)
if (context.variables.has(path)) {
return context.variables.get(path);
}
// Special prefix handling (only if not a variable)
if (path.startsWith("event.")) {
return getValueFromPath(path.substring(6), context.event);
}
if (path.startsWith("page.")) {
const pageVar = context.variables.get("page");
if (pageVar) {
return getValueFromPath(path.substring(5), pageVar);
}
}
// Fallback: Direct path on event
return getValueFromPath(path, context.event);
}
Fast path: Variable lookup (Map) = O(1)
Slow path: Deep object traversal = O(d) where d = depth
Optimization priority:
- Map lookup (~0.001μs)
- String prefix check (~0.01μs)
- Object traversal (0.1-1μs depending on depth)
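getValueFromPath() is the slow path referred to above: a dot-separated walk over the object. A minimal sketch (the real utility in pathResolver.ts may additionally handle array indices or bracket notation):

```typescript
// Minimal sketch of dot-path traversal: cost grows with path depth, which is
// why the Map lookup is tried first.
function getValueFromPath(path: string, obj: unknown): unknown {
  let current: any = obj;
  for (const segment of path.split(".")) {
    if (current === null || current === undefined) {
      return undefined;
    }
    current = current[segment];
  }
  return current;
}

// getValueFromPath("data.items.price", event) walks 3 levels: O(d)
```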
Debug Mode Zero-Cost Abstraction
Early Return Pattern
Every debug method:
public broadcastSomething(data: any): void {
// FIRST LINE: Check if enabled
if (!this.debugEnabled || typeof window === "undefined") return;
// Only executes if debug enabled
// ...
}
Cost when disabled: Single boolean check (~0.0001μs)
Cost when enabled: Full broadcast (~10-50μs)
Production impact: Negligible (0.01ms per 100 calls)
No Object Creation When Disabled
Avoided allocations:
// BAD: Creates object even if debug disabled
const debugInfo = { ruleId, eventType, duration }; // Allocation happens
this.debugBroadcaster.broadcast(debugInfo); // Then checked if enabled
// GOOD: Check first, allocate only if needed
if (this.debugEnabled) {
const debugInfo = { ruleId, eventType, duration }; // Only allocates if enabled
this.debugBroadcaster.broadcast(debugInfo);
}
Memory savings: ~100-500 bytes per debug call (garbage collector overhead eliminated)
Performance Monitoring Built-In
Timing Every Event
// lib/ruleEngine/core/ruleEngine.ts:278
const startTime = performance.now();
// ... process event
const endTime = performance.now();
this.debug(`Processed event in ${(endTime - startTime).toFixed(2)}ms`);
Use performance.now() instead of Date.now():
- Higher precision (microsecond vs millisecond)
- Monotonic (doesn't go backwards)
- Not affected by system clock changes
Performance.now() cost: ~1μs (negligible overhead)
Rule Execution Timing
// lib/ruleEngine/core/ruleEngine.ts:390-437
for (const rule of matchingRules) {
const ruleStartTime = performance.now();
try {
// ... execute rule
} finally {
const ruleExecutionTime = performance.now() - ruleStartTime;
// Broadcast execution time for monitoring
}
}Enables: Performance regression detection, bottleneck identification
Memory Management
Execution Tracker Cleanup
Execution counts stored per rule:
private ruleExecutionTracker: Map<string, number> = new Map();
Cleanup on destroy:
// lib/ruleEngine/core/ruleEngine.ts:1339-1348
public destroy(): void {
this.workspaceRules.clear();
this.workspaceVariables.clear();
this.globalVariables = [];
this.ruleExecutionTracker.clear(); // Prevent memory leak
this.isInitialized = false;
destroyBackgroundTriggers();
}Why important: Long-lived SPAs need cleanup to avoid memory leaks
RegexCache Size Limit
private maxSize: number = 100;
if (this.cache.size >= this.maxSize) {
const firstKey = this.cache.keys().next().value;
this.cache.delete(firstKey); // FIFO eviction (oldest entry)
}
Prevents: Unbounded cache growth with dynamic patterns
Typical size: 10-30 patterns (well under limit)
Performance Benchmarks
Trigger Matching
| Scenario | Time | Operations |
|---|---|---|
| No matching rules | 0.05ms | Map lookup + empty set |
| 5 matching rules (100 total) | 0.1ms | Map lookup + Set iteration |
| 10 matching rules (1000 total) | 0.15ms | Map lookup + Set iteration |
Scales with: Matching rules (m), not total rules (n)
Condition Evaluation
| Scenario | Time | Operations |
|---|---|---|
| Simple string equals | 0.01ms | String comparison |
| Regex match (cached) | 0.02ms | Regex test |
| Regex match (uncached) | 0.1ms | Compile + test |
| Deep path (5 levels) | 0.05ms | Object traversal |
| Complex condition (10 checks) | 0.2ms | Multiple comparisons |
Variable Resolution
| Scenario | Time | Operations |
|---|---|---|
| Constant | 0.001ms | Return value |
| Pre-resolved variable | 0.002ms | Map lookup |
| Event property | 0.01ms | Path resolution |
| DOM element | 0.1ms | querySelector |
| Cookie | 0.05ms | Parse document.cookie |
| Custom JavaScript | 0.5-5ms | Function execution |
Complete Event Processing
| Scenario | Time | Breakdown |
|---|---|---|
| No matching rules | 0.1ms | Context creation + trigger lookup |
| 1 simple rule | 0.5ms | + Condition eval + action |
| 3 complex rules | 2ms | + Multiple conditions + variables |
| 5 workspaces | 4ms | + Clone × 5 + processing × 5 |
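These figures can be reproduced with a small harness around processEvent(); a sketch (the event payload and iteration count are illustrative):

```typescript
// Sketch of a micro-benchmark for processEvent(). A warm-up call matters
// because the first run pays one-time costs (regex compilation, index setup).
function benchmarkProcessEvent(ruleEngine: RuleEngine, iterations = 1000): void {
  const event = { event: "pageView", data: { url: "/checkout" } } as any;
  ruleEngine.processEvent(event); // Warm-up

  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    ruleEngine.processEvent(event);
  }
  const total = performance.now() - start;
  console.log(`avg ${(total / iterations).toFixed(3)}ms per event`);
}
```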
Key Takeaways for Developers
- O(1) trigger lookup via Map indexing - Scales with matching rules, not total
- Clone once per workspace - Not per rule (90% reduction)
- Early exit in conditions - .every() and .some() short-circuit
- Regex caching - 100x speedup for repeated patterns
- Recursion depth limit - Prevents infinite loops (max 10)
- Lazy monitor initialization - Zero cost if not used
- Variable pre-resolution - System/workspace vars resolved once
- Debug mode zero-cost - Early return when disabled
- Regex cache size limit - FIFO eviction prevents unbounded memory growth
- Performance.now() for timing - Microsecond precision for monitoring
Overall strategy: Optimize the common path, gracefully handle edge cases