An ontology is to your data what a blueprint is to a building. It defines the types of things in your world, their properties, and how they connect — so your data is structured, validated, and queryable from day one.
Every industry has its own vocabulary and rules. An ontology captures these so your data engine enforces them automatically.
A hospital's ontology defines patients, medications, conditions, and practitioners — and enforces drug interaction checks automatically.
entity Patient:
  required: date_of_birth, mrn, gender
  unique: mrn
  standard: HL7 FHIR R4

entity Medication:
  required: drug_name, ndc_code
  validate: ndc_code matches /^\d{4}-\d{4}-\d{2}$/

relationship PRESCRIBED:
  from: Patient -> Medication
  required: prescriber, date_prescribed
  constraint: check_interactions(Patient.medications)

relationship INTERACTS_WITH:
  from: Medication -> Medication
  required: severity (critical | moderate | low)
A patient gets prescribed Ciprofloxacin in the ER. Nobody checks it against the Metformin prescribed by endocrinology last week. The dangerous drug interaction goes undetected.
The PRESCRIBED relationship triggers an automatic INTERACTS_WITH check. The ontology flags the critical interaction before the prescription is saved. The ER doctor gets an alert in real time.
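The interaction check above can be sketched in a few lines. This is a hypothetical illustration, not Scalix Prime's implementation: the `INTERACTS_WITH` lookup table and the `check_interactions` helper mirror the ontology's names, but their shapes are assumptions.

```python
# Hypothetical sketch of the PRESCRIBED-triggered interaction check.
# The lookup table stands in for the ontology's INTERACTS_WITH edges.
INTERACTS_WITH = {
    ("ciprofloxacin", "metformin"): "critical",
}

def check_interactions(patient_meds, new_drug):
    """Return (existing_med, new_drug, severity) for every known interaction."""
    hits = []
    for med in patient_meds:
        pair = tuple(sorted((med, new_drug)))  # edges are undirected
        severity = INTERACTS_WITH.get(pair)
        if severity:
            hits.append((med, new_drug, severity))
    return hits

# A non-empty result blocks the save and alerts the prescriber.
alerts = check_interactions(["metformin"], "ciprofloxacin")
```

Because the constraint runs on relationship creation, the check happens at write time rather than in a nightly batch job.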
A supply chain ontology tracks suppliers, components, facilities, and risk signals — and automatically propagates risks upstream through the parts hierarchy.
entity Supplier:
  required: name, country, tier (OEM | Tier1 | Tier2 | Tier3)
  validate: compliance_certs is not empty

entity Component:
  required: part_number, category
  validate: single_source_risk when supplier_count == 1

relationship FLAGGED_FOR:
  from: Supplier -> RiskSignal
  auto_propagate: upstream to all PART_OF chains
73% of your EV battery components route through one port. You find out when the port closes and production stops for six weeks.
The ontology's auto-propagation rule pushes the risk signal up the PART_OF chain. Every product affected is flagged before you even read the news.
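The auto_propagate rule amounts to a breadth-first walk up the PART_OF graph. A minimal sketch, assuming a child-to-parents mapping (the graph shape and part names are illustrative, not from the source):

```python
from collections import deque

# part_of[child] -> list of parents (assemblies containing the child)
part_of = {
    "cell_module": ["battery_pack"],
    "battery_pack": ["ev_sedan", "ev_suv"],
}

def propagate_upstream(flagged_component, part_of):
    """Breadth-first walk up the PART_OF chain; returns every node flagged."""
    flagged, queue = set(), deque([flagged_component])
    while queue:
        node = queue.popleft()
        if node in flagged:
            continue  # already visited; assemblies can share parents
        flagged.add(node)
        queue.extend(part_of.get(node, []))
    return flagged

affected = propagate_upstream("cell_module", part_of)
```

The visited-set check matters: real bills of materials are DAGs, not trees, so the same parent can be reached along multiple chains.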
A financial ontology models accounts, transactions, counterparties, and regulations — automatically detecting circular fund flows and structuring patterns.
entity Account:
  required: holder, jurisdiction, kyc_status
  validate: kyc_status in (Verified, Enhanced, Pending)

entity Transaction:
  required: amount, currency, timestamp
  validate: amount > 0

relationship SENT:
  from: Account -> Transaction
  constraint: flag_circular_flow(depth=4, window=90d)
  constraint: flag_if amount_sum > threshold(jurisdiction)
Four accounts pass money in a circle, each transfer below $10K. Your flat transaction log shows four normal transfers. The $2.4M layering scheme goes unnoticed for months.
The ontology's flag_circular_flow constraint detects the loop pattern automatically. The $2.4M layering scheme is flagged in sub-millisecond graph traversal time.
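At its core, flag_circular_flow(depth=4) is a depth-bounded search for money returning to its origin. A hypothetical sketch (the transfer graph, account names, and function shape are illustrative):

```python
# transfers: sender -> receivers observed within the 90-day window
transfers = {
    "A": ["B"], "B": ["C"], "C": ["D"], "D": ["A"],
}

def flag_circular_flow(start, transfers, depth=4):
    """Return a cycle back to start of at most `depth` hops, or None."""
    def dfs(node, path):
        if len(path) > depth:
            return None  # bound the search: deeper loops aren't this rule's job
        for nxt in transfers.get(node, []):
            if nxt == start:
                return path + [nxt]  # closed the loop
            if nxt not in path:  # avoid revisiting intermediate accounts
                found = dfs(nxt, path + [nxt])
                if found:
                    return found
        return None
    return dfs(start, [start])

cycle = flag_circular_flow("A", transfers)
```

Each individual transfer looks normal in a flat log; only the graph view, where SENT edges can be traversed, exposes the loop.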
An intelligence ontology links people, organisations, locations, and intel reports — automatically correlating overlapping entities across classification boundaries.
entity Person:
  required: classification (UNCLASS | SECRET | TOP_SECRET)
  validate: access_check(viewer.clearance >= classification)

entity IntelReport:
  required: source_type (HUMINT | SIGINT | OSINT | GEOINT)
  required: classification, confidence_score

relationship CORROBORATES:
  from: IntelReport -> IntelReport
  auto_compute: when entities overlap across reports
SIGINT picks up a phone number. HUMINT has a source mentioning the same contact. GEOINT shows activity at a linked facility. Three analysts in three systems never connect the dots.
The ontology auto-computes CORROBORATES relationships when entities overlap across reports. The 4-entity chain surfaces in 0.4ms. One unified picture instead of three blind spots.
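The auto_compute rule reduces to a pairwise entity-overlap check across reports. A minimal sketch, with made-up report IDs and entity tags standing in for resolved entities:

```python
from itertools import combinations

# report -> set of resolved entities it mentions (illustrative data)
reports = {
    "sigint_042": {"phone:+0-000-0000", "person:K"},
    "humint_117": {"person:K", "org:Acme"},
    "geoint_089": {"org:Acme", "facility:F7"},
}

def compute_corroborates(reports):
    """Return (report_a, report_b, shared_entities) for every overlapping pair."""
    edges = []
    for a, b in combinations(sorted(reports), 2):
        shared = reports[a] & reports[b]
        if shared:
            edges.append((a, b, shared))
    return edges

edges = compute_corroborates(reports)
```

Chaining these edges is what surfaces the multi-hop picture: sigint_042 links to humint_117 through the person, and humint_117 links to geoint_089 through the organisation.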
A codebase ontology maps modules, functions, dependencies, and vulnerabilities — automatically computing impact chains and detecting circular dependencies.
entity Function:
  required: name, module, language
  computed: risk_score from (callers * complexity * age)

entity Module:
  required: path, language
  computed: coupling_score from call_graph_density

relationship CALLS:
  from: Function -> Function
  constraint: detect_circular(depth=5)
  auto_compute: impact_chain for downstream callers

relationship HAS_VULNERABILITY:
  from: Function -> Vulnerability
  auto_propagate: to all CALLS.callers (upstream)
You change AuthToken.validate(). You don't know 14 payment flows depend on it across 3 microservices. You find out in production at 2 AM.
The ontology's auto-computed impact chain shows every caller. Before you deploy, you see the full blast radius — 14 flows, 3 services, zero surprises.
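The impact chain is the CALLS graph walked in reverse: from the changed function out to everything that transitively calls it. A hypothetical sketch (the call graph below is illustrative):

```python
from collections import deque

# callers: callee -> functions that call it (reverse CALLS edges)
callers = {
    "AuthToken.validate": ["session.refresh", "payments.charge"],
    "payments.charge": ["checkout.submit"],
}

def impact_chain(changed_fn, callers):
    """All upstream callers transitively affected by changing changed_fn."""
    affected, queue = set(), deque(callers.get(changed_fn, []))
    while queue:
        fn = queue.popleft()
        if fn in affected:
            continue  # call graphs have shared paths; visit each node once
        affected.add(fn)
        queue.extend(callers.get(fn, []))
    return affected

blast_radius = impact_chain("AuthToken.validate", callers)
```

The same reversed graph powers the HAS_VULNERABILITY propagation: a CVE on a callee taints every function in its impact chain.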
Scalix Prime doesn't just store your ontology — it evolves it. A five-stage pipeline monitors usage, detects drift, and recommends improvements. Humans approve every change.
1. Monitor every entity creation, relationship link, property update, and query pattern — 7 event types in total.
2. Batch events into usage windows. Compute frequency, co-occurrence, and pattern drift across entity types.
3. Suggest schema changes: new entity types, missing relationships, property additions, index optimisations — 6 recommendation types.
4. Humans review every recommended change. Nothing modifies your data model without explicit approval.
5. The engine applies changes safely with full version control, audit trails, and one-click rollback.
A composite score (0.0–1.0) measures how well your schema matches actual data usage. It combines unknown entity ratios, unmatched queries, and validation failure rates.
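The source names the three inputs but not the formula, so the equal weighting below is an assumption for illustration only:

```python
def fitness_score(unknown_entity_ratio, unmatched_query_ratio,
                  validation_failure_rate):
    """Composite schema-fitness score. Each input is in [0, 1];
    a schema that perfectly matches usage scores 1.0."""
    # Assumed equal weighting of the three penalty terms.
    penalty = (unknown_entity_ratio + unmatched_query_ratio +
               validation_failure_rate) / 3
    return round(1.0 - penalty, 3)

score = fitness_score(0.02, 0.10, 0.03)  # -> 0.95
```

A production scoring function would likely weight the terms by impact, but the structure — one minus a blend of mismatch rates — captures the idea.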
Explore the interactive demo to see an ontology in action, or request access to try Scalix Prime with your own data.