Documentation
Complete guide to using jsondb-high, the blazing fast JSON database for Node.js with a Rust-powered core via N-API.
Getting Started
Introduction
jsondb-high is a blazing fast, feature-rich JSON database for Node.js with a Rust-powered core via N-API. It offers zero dependencies, ACID durability, and multi-core parallelism for production workloads.
Features
- Blazing Fast: Core logic in Rust via N-API (~2M ops/s reads, ~260k ops/s writes)
- Multi-Process Safe: OS-level advisory file locking prevents data corruption across multiple processes
- Multi-Core Processing: Adaptive parallelism using Rayon - automatically scales with your CPU
- Atomic Operations: Group Commit Write-Ahead Logging (WAL) ensures ACID durability with near-zero overhead
- O(1) Indexing: In-memory Map indices for instant, constant-time lookups
- Encryption: AES-256-GCM encryption for data at rest
- Zero Dependencies: Self-contained native binary; no external database servers required
- Middleware: Support for before and after hooks on all operations
- TTL Support: Auto-expire keys after a specified time (like Redis)
- Pub/Sub: EventEmitter-style subscriptions to data changes
- Aggregations: Built-in sum, avg, min, max, groupBy, distinct
Installation
# Using bun (recommended)
bun add jsondb-high
# Using npm
npm install jsondb-high
Requirements
- Node.js: >= 16.0.0
- Rust Toolchain: Installed and in PATH (Required for initial build)
- C++ Build Tools: Required by Cargo on some platforms (e.g., Visual Studio Build Tools on Windows)
Quick Start
import JSONDatabase from 'jsondb-high';
const db = new JSONDatabase('db.json');
// Write
await db.set('user.1', { name: 'Alice', role: 'admin' });
// Read
const user = await db.get('user.1');
console.log(user); // { name: 'Alice', role: 'admin' }
Hybrid Architecture (v4.5+)
jsondb-high offers multiple storage and safety modes. Choose based on your performance and durability needs.
Durability Modes
Configure the Write-Ahead Log (WAL) to balance speed and safety:
const db = new JSONDatabase('db.json', {
durability: 'batched', // 'none' | 'lazy' | 'batched' | 'sync'
walFlushMs: 10, // Sync every 10ms (Group Commit)
lockMode: 'exclusive'
});
| Mode | Throughput | Latency | Durability Window |
|---|---|---|---|
| none | ~260k ops/s | 0.003ms | Manual save only |
| lazy | ~200k ops/s | 0.003ms | 100ms |
| batched (Recommended) | ~240k ops/s | 5ms | 10ms |
| sync | ~2k ops/s | 0.5ms | Immediate |
Locking Modes
Prevent corruption from multiple processes using the same file:
const db = new JSONDatabase('db.json', {
lockMode: 'exclusive' // 'exclusive' | 'shared' | 'none'
});
O(1) Indexing
Define indices in the constructor for O(1) read performance:
const db = new JSONDatabase('db.json', {
indices: [{ name: 'email', path: 'users', field: 'email' }]
});
// Instant Lookup
const user = await db.findByIndex('email', 'alice@corp.com');
Multi-Core Parallel Processing
The database automatically detects available CPU cores and uses parallel processing for large datasets (≥100 items). It falls back to efficient single-threaded operation for small workloads to avoid overhead.
System Info
Check system capabilities for parallel processing:
const info = db.getSystemInfo();
console.log(info);
// {
// availableCores: 8,
// parallelEnabled: true,
// recommendedBatchSize: 1000
// }
How It Works
- Adaptive: Automatically uses 1-N cores based on workload size and system resources
- Efficient: Small workloads (<100 items) run single-threaded to avoid parallel overhead
- Resource-Aware: Leaves 1 core free for system/main thread
- Scalable: Performance scales linearly with available cores for large datasets
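The recommendedBatchSize reported by getSystemInfo() can be used to split very large write sets into chunks. The sketch below uses only the documented getSystemInfo() and batchSetParallel() calls; the data and the chunking loop are illustrative, and chunking is optional rather than required by the API.
const info = db.getSystemInfo();
// Build a large set of writes (illustrative data)
const ops = Array.from({ length: 50000 }, (_, i) => ({
  path: `events.${i}`,
  value: { id: i, ts: Date.now() }
}));
// Submit in chunks of the recommended size (illustrative pattern)
for (let i = 0; i < ops.length; i += info.recommendedBatchSize) {
  await db.batchSetParallel(ops.slice(i, i + info.recommendedBatchSize));
}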
DBOptions Interface
Configuration options passed to the constructor:
interface DBOptions {
indices?: IndexConfig[]; // Array of index configurations
wal?: boolean; // Enable Write-Ahead Logging
encryptionKey?: string; // AES-256-GCM encryption key (32 chars)
autoSaveInterval?: number; // Auto-save interval in ms
lockMode?: 'exclusive' | 'shared' | 'none'; // v4.5: Process locking mode
lockTimeoutMs?: number; // v4.5: Lock timeout in ms
durability?: 'none' | 'lazy' | 'batched' | 'sync'; // v4.5: Durability mode
walBatchSize?: number; // v4.5: WAL batch size
walFlushMs?: number; // v4.5: WAL flush interval in ms
schemas?: Record<string, Schema>; // v5.1: Path-based schemas
slowQueryThresholdMs?: number; // v5.1: Slow query threshold in ms
}
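A typical constructor call combining several of these options might look like the following; all option names come from the interface above, while the values are illustrative rather than defaults.
const db = new JSONDatabase('app.json', {
  indices: [{ name: 'email', path: 'users', field: 'email' }],
  durability: 'batched',
  walFlushMs: 10,
  lockMode: 'exclusive',
  lockTimeoutMs: 5000,
  autoSaveInterval: 60000,
  encryptionKey: 'a-32-character-encryption-key!!!', // must be exactly 32 characters
  slowQueryThresholdMs: 50
});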
Basic Operations
set(path, value)
Writes data. Creates nested paths automatically.
await db.set('config.theme', 'dark');
await db.set('users.1.settings.notifications', true);
get(path, defaultValue?)
Retrieves data. Returns defaultValue if path doesn't exist.
const val = await db.get('config.theme', 'light');
const user = await db.get('users.1');
has(path)
Checks if a key exists.
if (await db.has('users.1')) {
// User exists
}
delete(path)
Removes a key or object property.
await db.delete('users.1.settings'); // Delete nested property
await db.delete('users.1'); // Delete entire object
Array Operations
push(path, ...items)
Adds items to an array. Dedupes automatically.
await db.push('users.1.tags', 'premium', 'beta');
pull(path, ...items)
Removes items from an array (deep equality).
await db.pull('users.1.tags', 'beta');
Math Operations (Atomic)
add(path, amount)
Atomic increment. Returns new value.
const newCount = await db.add('users.1.loginCount', 1);
subtract(path, amount)
Atomic decrement. Returns new value.
const newCredits = await db.subtract('users.1.credits', 50);
Query Builder
Chainable query builder with aggregation support.
const results = await db.query('users')
.where('age').gt(18)
.where('role').eq('admin')
.limit(10)
.skip(0)
.sort({ age: -1 }) // Descending
.select(['id', 'name', 'email'])
.exec();
Where Clauses
| Method | Description |
|---|---|
| .eq(value) | Equal |
| .ne(value) | Not equal |
| .gt(value) | Greater than |
| .gte(value) | Greater or equal |
| .lt(value) | Less than |
| .lte(value) | Less or equal |
| .between(min, max) | Between range |
| .in([...]) | In array |
| .notIn([...]) | Not in array |
| .contains('x') | String contains |
| .startsWith('x') | String starts with |
| .endsWith('x') | String ends with |
| .matches(/^x/) | Regex match |
| .exists() | Field exists |
| .isNull() | Is null |
| .isNotNull() | Is not null |
Aggregations
const count = db.query('users').count();
const total = db.query('orders').sum('amount');
const average = db.query('orders').avg('amount');
const min = db.query('orders').min('amount');
const max = db.query('orders').max('amount');
const unique = db.query('users').distinct('role');
const grouped = db.query('users').groupBy('department');
Find Operations
find(path, predicate)
Find a single item matching predicate (function or object matcher).
// With function predicate
const user = await db.find('users', u => u.age > 18);
// With object matcher
const admin = await db.find('users', { role: 'admin' });
findAll(path, predicate)
Find all items matching predicate.
const adults = await db.findAll('users', u => u.age >= 18);
findByIndex(indexName, value)
O(1) lookup using a registered index.
const user = await db.findByIndex('email', 'alice@corp.com');
Pagination
Helper for API endpoints.
const page = await db.paginate('users', 1, 20);
// Returns: {
// data: [...],
// meta: { total, pages, page, limit, hasNext, hasPrev }
// }
Batch & Parallel Operations
batch(operations)
Execute multiple writes in a single I/O tick.
await db.batch([
{ type: 'set', path: 'logs.1', value: 'log data' },
{ type: 'delete', path: 'temp.cache' },
{ type: 'add', path: 'stats.visits', value: 1 }
]);
batchSetParallel(operations)
Execute thousands of set operations efficiently using all available cores. Automatically parallelized when there are ≥100 items.
const operations = [];
for (let i = 0; i < 10000; i++) {
operations.push({
path: `users.${i}`,
value: { id: i, name: `User ${i}`, active: true }
});
}
const result = await db.batchSetParallel(operations);
console.log(`Completed ${result.count} operations`);
// Returns: { success: boolean, count: number, error?: string }
parallelQuery(path, filters)
High-performance filtering using native Rust parallel iteration.
// Filter with multiple conditions - uses parallel processing for large collections
const activeAdults = await db.parallelQuery('users', [
{ field: 'age', op: 'gte', value: 18 },
{ field: 'status', op: 'eq', value: 'active' }
]);
// Available operators: eq, ne, gt, gte, lt, lte, contains, startswith, endswith, in, notin, regex, containsAll, containsAny
parallelAggregate(path, operation, field?)
Compute aggregations efficiently across large datasets.
const count = await db.parallelAggregate('orders', 'count');
const totalRevenue = await db.parallelAggregate('orders', 'sum', 'amount');
const avgOrderValue = await db.parallelAggregate('orders', 'avg', 'amount');
const minOrder = await db.parallelAggregate('orders', 'min', 'amount');
const maxOrder = await db.parallelAggregate('orders', 'max', 'amount');
Transactions
Atomic read-modify-write with automatic rollback on error.
await db.transaction(async (data) => {
if (data.bank.balance >= 100) {
data.bank.balance -= 100;
data.users['1'].wallet += 100;
}
return data;
});
Snapshots
Create and restore backups.
const backupPath = await db.createSnapshot('daily');
console.log('Backup saved to:', backupPath);
// Restore later
await db.restoreSnapshot(backupPath);
TTL (Time to Live)
Auto-expire keys like Redis.
setWithTTL(path, value, ttlSeconds)
// Set with TTL (expires in 60 seconds)
await db.setWithTTL('session.abc123', { userId: 1 }, 60);
setTTL(path, ttlSeconds)
Set TTL on an existing key.
db.setTTL('temp.data', 300);
getTTL(path)
Get remaining TTL in seconds (-1 = no TTL, -2 = key doesn't exist).
const ttl = await db.getTTL('session.abc123');
clearTTL(path)
Remove TTL from a key (make it persistent).
db.clearTTL('session.abc123');
hasTTL(path)
Check if a key has TTL set.
if (db.hasTTL('session.abc123')) {
// Key has TTL
}
TTL Events
// Listen for expirations
db.on('ttl:expired', ({ path }) => {
console.log('Key expired:', path);
});
Pub/Sub (Subscriptions)
Subscribe to key changes with pattern matching.
// Subscribe to all user changes
const unsubscribe = db.subscribe('users.*', (newValue, oldValue) => {
console.log('User changed:', newValue);
});
// Subscribe to specific path
db.subscribe('config.theme', (value) => {
applyTheme(value);
});
// Wildcards supported
db.subscribe('**', (value, old) => {
// Called for ALL changes
});
// Unsubscribe when done
unsubscribe();
Encryption
AES-256-GCM encryption for data at rest.
const db = new JSONDatabase('secure.json', {
encryptionKey: 'your-32-character-secret-key!!!!'
});
// All data is encrypted before writing to disk
await db.set('secrets', { apiKey: 'xyz123' });
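One convenient way to produce a 32-character key (not mandated by the library) is to hex-encode 16 random bytes with Node's built-in crypto module. Keep the key outside the code base, since data encrypted with a lost key cannot be recovered; DB_KEY below is a hypothetical environment variable name.
import { randomBytes } from 'node:crypto';
// 16 random bytes hex-encoded yield exactly 32 characters.
// DB_KEY is a hypothetical env var; generate the key once and persist it.
const key = process.env.DB_KEY ?? randomBytes(16).toString('hex');
const secureDb = new JSONDatabase('secure.json', { encryptionKey: key });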
Middleware
Intercept operations before/after they happen.
// Before hook - modify data before write
db.before('set', 'users.*', (ctx) => {
ctx.value.updatedAt = Date.now();
return ctx;
});
// After hook - react after write
db.after('set', 'users.*', (ctx) => {
console.log('User updated:', ctx.path);
return ctx;
});
Utility Methods
// Get all keys under a path
const keys = await db.keys('users');
// Get all values under a path
const values = await db.values('users');
// Count items
const count = await db.count('users');
// Clear all data
await db.clear();
// Get database statistics
const stats = await db.stats();
// { size: 1234, keys: 10, indices: 2, ttlKeys: 5, subscriptions: 3 }
// Force save to disk (Durable write)
await db.save();
// Explicit durability sync (v4.5+)
await db.sync();
// Check WAL status (v4.5+)
const wal = db.walStatus();
// { enabled: true, committedLsn: 12345 }
// Manually trigger index rebuild
db.rebuildIndex();
// Clean shutdown
await db.close();
Events
db.on('change', ({ path, value, oldValue }) => { ... });
db.on('batch', ({ operations }) => { ... });
db.on('transaction:commit', () => { ... });
db.on('transaction:rollback', ({ error }) => { ... });
db.on('snapshot:created', ({ path, name }) => { ... });
db.on('snapshot:restored', ({ path }) => { ... });
db.on('ttl:expired', ({ path }) => { ... });
db.on('error', (error) => { ... });
TypeScript Interfaces
IndexConfig
interface IndexConfig {
name: string; // Index name
path: string; // Path to collection
field: string; // Field to index
}
QueryFilter
interface QueryFilter {
field: string;
op: 'eq' | 'ne' | 'gt' | 'gte' | 'lt' | 'lte' |
'contains' | 'startswith' | 'endswith' |
'in' | 'notin' | 'regex' | 'containsAll' | 'containsAny';
value: any;
}
BatchOperation
interface BatchOperation {
type: 'set' | 'delete' | 'push' | 'add' | 'subtract';
path: string;
value?: unknown;
}
ParallelResult
interface ParallelResult {
success: boolean;
count: number;
error?: string;
}
SystemInfo
interface SystemInfo {
availableCores: number;
parallelEnabled: boolean;
recommendedBatchSize: number;
}
PaginationResult & PaginationMeta
interface PaginationMeta {
total: number;
pages: number;
page: number;
limit: number;
hasNext: boolean;
hasPrev: boolean;
}
interface PaginationResult<T> {
data: T[];
meta: PaginationMeta;
}
MiddlewareContext & MiddlewareFn
interface MiddlewareContext<T = unknown> {
path: string;
value: T;
operation: string;
timestamp: number;
}
type MiddlewareFn<T = unknown> =
(ctx: MiddlewareContext<T>) => MiddlewareContext<T> | void;
TTLEntry
interface TTLEntry {
path: string;
expiresAt: number;
}
Schema Types
SchemaType
type SchemaType = 'object' | 'array' | 'string' | 'number' | 'boolean' | 'null';
Schema
interface Schema {
type: SchemaType;
properties?: Record<string, Schema>;
required?: string[];
minLength?: number;
maxLength?: number;
pattern?: string;
minimum?: number;
maximum?: number;
exclusiveMinimum?: number;
exclusiveMaximum?: number;
items?: Schema;
minItems?: number;
maxItems?: number;
uniqueItems?: boolean;
enum?: unknown[];
}
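Schemas are registered per path through the schemas option in DBOptions (v5.1). A minimal sketch, assuming the schema describes the objects written under the registered path and that non-conforming writes are rejected (error handling is not shown here):
const db = new JSONDatabase('db.json', {
  schemas: {
    users: {
      type: 'object',
      required: ['name'],
      properties: {
        name: { type: 'string', minLength: 1 },
        age: { type: 'number', minimum: 0 },
        role: { type: 'string', enum: ['admin', 'member'] }
      }
    }
  }
});
// Conforms to the schema above
await db.set('users.1', { name: 'Alice', age: 30, role: 'admin' });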
Query Types
SortDirection & SortOptions
type SortDirection = 1 | -1;
interface SortOptions {
[key: string]: SortDirection;
}
ParallelConfig
interface ParallelConfig {
enabled?: boolean; // Enable parallel processing (auto-detected by default)
threshold?: number; // Minimum items before using parallel (default: 100)
maxThreads?: number; // Maximum threads (default: auto-detected cores - 1)
}
JoinConfig
interface JoinConfig {
from: string; // Source collection path
to: string; // Target collection path
localField: string; // Field in source collection
foreignField: string; // Field in target collection
as: string; // Output field name
}
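The JoinConfig fields appear to mirror the parameter list of the low-level parallelLookup call in the NativeDb class below (from/to/localField/foreignField/as alongside leftPath/rightPath/leftField/rightField/asField). A sketch of a config object describing an orders-to-users lookup; the collection and field names are hypothetical:
const ordersWithUsers: JoinConfig = {
  from: 'orders',       // source collection path
  to: 'users',          // target collection path
  localField: 'userId', // field on each order
  foreignField: 'id',   // field on each user
  as: 'user'            // matched user is attached under this field
};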
NativeDb Class (Low-level)
The underlying Rust-powered native database class:
class NativeDb {
constructor(path: string, wal: boolean);
// v4.5: Create database with full options
static newWithOptions(
path: string,
lockMode: string,
durability: string,
walBatchSize?: number,
walFlushMs?: number
): NativeDb;
// Core operations
load(): void;
save(): void;
sync(): void; // v4.5: Explicit sync for durability
walStatus(): any; // v4.5: Get WAL status
// Data operations
get(path: string): any;
set(path: string, value: any): void;
has(path: string): boolean;
delete(path: string): void;
push(path: string, value: any): void;
// Parallel operations
batchSetParallel(operations: Array<[string, any]>): ParallelResult;
parallelQuery(path: string, filters: QueryFilter[]): any;
parallelAggregate(path: string, operation: string, field?: string | null): any;
parallelLookup(leftPath: string, rightPath: string,
leftField: string, rightField: string, asField: string): any;
// Index operations
registerIndex(name: string, field: string): void;
updateIndex(name: string, key: any, path: string, isDelete: boolean): void;
findIndexPaths(name: string, key: any): string[];
clearIndex(name: string): void;
// Schema operations
registerSchema(path: string, schemaJson: string): void;
validatePath(path: string, value: any): void;
// Transactions
begin_transaction(): void;
commit_transaction(): void;
rollback_transaction(): void;
create_savepoint(name: string): void;
rollback_to_savepoint(name: string): void;
// System info
getSystemInfo(): SystemInfo;
}
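A minimal sketch of driving the native class directly, assuming your build exposes NativeDb as a named export alongside the default JSONDatabase export (check the package's generated typings for the exact export name). The method calls themselves come from the class listing above; the walBatchSize and walFlushMs values are illustrative.
import { NativeDb } from 'jsondb-high'; // assumed named export; verify against your typings
const native = NativeDb.newWithOptions('db.json', 'exclusive', 'batched', 128, 10);
native.load();                          // read any existing file contents
native.set('counters.visits', 0);
console.log(native.get('counters.visits')); // 0
native.save();                          // durable write to disk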