Scalix Prime User Guide

Everything you need to explore your data, build apps, automate workflows, and get answers from your knowledge graph.

What is Scalix Prime?

Scalix Prime is an enterprise data platform that connects all your information into a knowledge graph — a living map of how your data relates. Instead of keeping data in isolated spreadsheets, databases, and systems, Prime brings it together so you can see the full picture.

With Scalix Prime, you can:

  • Explore your data visually — Browse entities and their relationships across departments and systems.
  • Ask questions in plain English — AI agents search your data and return answers grounded in real information.
  • Build apps without coding — Create dashboards, operational tools, and reports using drag-and-drop widgets.
  • Automate workflows — Design multi-step AI pipelines that search, reason, and take action on your data.
  • Transform and analyze data — Clean, filter, aggregate, and visualize data with point-and-click tools.

Everything runs through the Workshop — a browser-based workspace where you access all of these tools from a single sidebar.


Key Concepts

Domains

A domain is a top-level grouping for related data — like healthcare, financial-services, or supply-chain. Each domain has its own entity types, relationships, and data schema. When you select a domain in the Workshop, you're scoping everything to that data boundary.

Entity Types & Entities

An entity type is a category of objects — Patient, Account, Supplier. Each entity is a specific instance with properties (like a patient's name, date of birth, and medical record number) and relationships to other entities.

Relationships

Relationships are connections between entities. For example, a Patient HAS_CONDITION Condition, or an Account TRANSFERS_TO Account. These connections are what make a knowledge graph powerful — they reveal patterns that isolated data can't show.
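
At its simplest, this structure can be modeled as (source, relationship, target) triples. A minimal Python sketch (the entity identifiers are invented for illustration):

```python
# A knowledge graph can be modeled as (source, relationship, target) triples.
triples = [
    ("patient:42", "HAS_CONDITION", "condition:diabetes"),
    ("patient:42", "HAS_CONDITION", "condition:hypertension"),
    ("acct:100", "TRANSFERS_TO", "acct:200"),
]

def related(entity, rel):
    """All targets connected to `entity` by relationship `rel`."""
    return [t for s, r, t in triples if s == entity and r == rel]

print(related("patient:42", "HAS_CONDITION"))
# ['condition:diabetes', 'condition:hypertension']
```

Questions like "which conditions does this patient have?" reduce to following these connections — which is what the graph engine does, at scale and with indexes.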

Ontology

The ontology is the blueprint of your knowledge graph. It defines which entity types exist, what properties they have, and how they connect. Scalix Prime observes your data patterns and recommends ontology improvements for your approval.

Tip

Use the Ontology Explorer in Workshop to browse domains, entity types, and relationships interactively. Click any entity to see its full details and follow its connections.


Quick Start

Get started with Scalix Prime Workshop in three steps:

  1. Open the Workshop — Navigate to your Workshop URL. You'll land on the Home page with quick access to all tools and your saved apps.
  2. Explore your data — Go to Ontology Explorer, select a domain, and browse entity types. Click any entity to see its properties and relationships.
  3. Ask a question — Open AI Chat, pick an agent, and ask a question about your data in plain English. The agent will search the knowledge graph and return results.

First-time users


Each Workshop page includes a guided tour that runs automatically on your first visit. Look for the Tour button in any page header to replay it at any time.


Ontology Explorer

The Ontology Explorer is a three-panel browser for navigating your knowledge graph. Select a domain, pick an entity type, and drill into individual entities and their relationships.

How to use it

  1. Select a domain — Use the dropdown in the left panel to choose which data area to explore (e.g., Healthcare, Financial Services).
  2. Choose an entity type — Click a type button like Patient or Account. The center panel loads all entities of that type.
  3. Browse entities — Search or scroll through the list. Click any row to open its detail panel on the right.
  4. Explore relationships — In the detail panel, expand relationship groups to see connected entities. Click a related entity to navigate to it.

Use the Open in Graph button to see an entity and its neighborhood as an interactive visual diagram.


AI Chat

AI Chat lets you interact with your knowledge graph using natural language. Ask questions, run searches, and trigger actions through specialized AI agents.

Choosing an agent

  • General — For open-ended questions about your data and platform capabilities.
  • Data Query — Specialized for searching the knowledge graph. Translates your questions into structured queries.
  • Action — Executes workflows: creates entities, triggers pipelines, or updates data. Always asks for confirmation first.
  • Admin — Manages ontology schemas, monitors system health, and configures settings.
  • Analytics — Computes aggregations, spots patterns, and generates data summaries.

How it works

Type your question in plain English. The agent will search the knowledge graph, call relevant tools, and return an answer grounded in your actual data. You'll see tool calls as expandable sections in the conversation — click them to see exactly what the agent searched for and what it found.

Tip

Click the suggested questions at the bottom of the chat to explore common tasks. You can modify the message before sending.


App Builder

Build data applications without writing code. Create dashboards, operational tools, and analytical views by dragging widgets onto a grid canvas.

How to use it

  1. Create or open an app — From the Home page, click New App or open an existing one.
  2. Add widgets — Drag widgets from the left palette onto the canvas. Options include charts, tables, search bars, graph viewers, KPI cards, forms, timelines, maps, and more.
  3. Configure — Click any widget to open its settings on the right. Set the data source (domain + entity type), display options, column mappings, and event handlers.
  4. Preview & share — Click Preview to see your app as end users will experience it. Use Export to download the app definition, or Share to copy a preview URL.

Widget types

Data Widgets

Table, Chart (bar/line/pie/scatter), KPI Card, Pivot Table, Graph Viewer, Map, Cypher Editor

Interactive Widgets

Search, Filter, Form, Action Button, Chat, Tabs, Timeline, Gantt

Tip

Widgets can communicate through variables. For example, a Search widget can pass its results to a Table widget. Configure this in each widget's Event Mappings section.
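
Conceptually, an event mapping binds one widget's published output to another widget's input through a shared variable. This toy Python sketch illustrates the idea only — the actual configuration lives in each widget's Event Mappings panel, and its format may differ:

```python
# Shared variable store standing in for the app's widget state (illustrative).
app_state = {}

def publish(variable, value):
    """A widget fires an event and writes its output variable."""
    app_state[variable] = value

def subscribe(variable, default=None):
    """Another widget reads the variable as its data source."""
    return app_state.get(variable, default)

# A Search widget publishes its results...
publish("search.results", [{"name": "Acme Corp"}, {"name": "Apex Ltd"}])

# ...and a Table widget bound to the same variable picks them up.
rows = subscribe("search.results", default=[])
print(len(rows))  # 2
```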


Agent Studio

Build multi-step AI workflows visually. Chain together search, reasoning, transformation, and action steps to automate complex data tasks.

Node types

  • Input — Entry point. Receives a user query, webhook payload, or scheduled trigger.
  • Search — Searches the knowledge graph using text, semantic, or hybrid search.
  • Reason — Sends data to an AI model for analysis. Adjust the temperature slider for focused (0) or creative (2) responses.
  • Transform — Manipulates data — extract fields, format text, compute values.
  • Condition — Branches the flow based on a true/false expression.
  • Action — Executes operations — create entities, send notifications, call external APIs.
  • Output — Returns the final result.

Building a pipeline

  1. Add nodes — Drag nodes from the left palette onto the canvas.
  2. Connect them — Drag from a node's output handle to another node's input handle to create a data flow.
  3. Configure each node — Click a node to set its parameters in the right panel.
  4. Run the pipeline — Click Run Pipeline. Nodes light up green as they complete or red on failure.
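
Conceptually, each node is a function and each edge passes state forward. A toy Python sketch of an Input → Search → Condition → Output chain (the entity names and node behavior here are invented; real nodes execute inside Agent Studio):

```python
def input_node(query):
    """Entry point: wrap the trigger payload."""
    return {"query": query}

def search_node(state):
    """Stand-in for a knowledge-graph search over toy entities."""
    entities = ["Patient/001", "Patient/002", "Account/009"]
    state["results"] = [e for e in entities if e.startswith(state["query"])]
    return state

def condition_node(state):
    """Branch flag: did the search find anything?"""
    state["found"] = len(state["results"]) > 0
    return state

def output_node(state):
    """Return the final result (empty on the false branch)."""
    return state["results"] if state["found"] else []

# Wiring the nodes together, as the canvas edges would:
def pipeline(query):
    return output_node(condition_node(search_node(input_node(query))))

print(pipeline("Patient"))  # ['Patient/001', 'Patient/002']
```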


Data Pipeline

Import, clean, transform, and load data — all visually. Bring data in from files or APIs, process it step by step, and load it into the knowledge graph or export it.

Step types

  • Source — Where data comes from: CSV upload, PostgreSQL, REST API, S3 bucket, or an existing domain entity.
  • Clean — Remove nulls, trim whitespace, normalize case, remove duplicates, fix date formats.
  • Transform — Rename columns, cast types, compute new fields, split or merge columns.
  • Filter — Keep only rows matching a condition (equals, contains, greater than, regex, etc.).
  • Join — Combine two data sources on a matching key. Supports inner, left, right, and full outer joins.
  • Aggregate — Group by field(s) and compute count, sum, average, min, or max per group.
  • Destination — Where results go: create ontology entities, export CSV, or POST to a webhook.
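
Conceptually, each step is a function applied to rows of data. A minimal Python sketch of a Clean → Filter → Aggregate sequence (the sample rows are invented):

```python
rows = [
    {"supplier": " Acme ", "region": "EU", "amount": "1200"},
    {"supplier": "Acme",   "region": "EU", "amount": "800"},
    {"supplier": None,     "region": "US", "amount": "500"},
]

# Clean: drop null suppliers, trim whitespace, cast amount to int.
clean = [
    {"supplier": r["supplier"].strip(), "region": r["region"], "amount": int(r["amount"])}
    for r in rows
    if r["supplier"] is not None
]

# Filter: keep only EU rows.
eu = [r for r in clean if r["region"] == "EU"]

# Aggregate: sum amount per supplier.
totals = {}
for r in eu:
    totals[r["supplier"]] = totals.get(r["supplier"], 0) + r["amount"]

print(totals)  # {'Acme': 2000}
```
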

Tip

Click any step card to see a live preview of what the data looks like after that step. Use this to verify your transformations before running the full pipeline.


Analysis Workbench

Explore and visualize data without writing queries. Load data from any domain, apply transformations, and view results as tables or charts.

Workflow

  1. Select data — Choose a domain and entity type from the dropdowns, then click Load Data.
  2. Build a pipeline — Add analysis steps: Filter, Sort, Group, Aggregate, Column Select, or Limit. Each step transforms the data for the next.
  3. Visualize — Switch between Table, Chart, or Split view. In chart mode, pick a chart type (bar, line, pie, scatter) and map your X and Y axes.
  4. Export — Download your results as CSV for use in Excel or other tools.
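
Each analysis step transforms the output of the previous one, much like chained list operations. A Python sketch of Sort → Limit → Column Select over made-up rows:

```python
rows = [
    {"name": "TXN-3", "amount": 900},
    {"name": "TXN-1", "amount": 50000},
    {"name": "TXN-2", "amount": 1200},
]

# Sort (descending by amount), Limit (top 2), Column Select (name only).
top = sorted(rows, key=lambda r: r["amount"], reverse=True)[:2]
names = [r["name"] for r in top]
print(names)  # ['TXN-1', 'TXN-2']
```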


Administration

The Admin panel provides tools for managing your platform. It is organized into tabs:

  • Ontology — Browse and manage your domains, entity types, relationships, and properties.
  • Monitoring — View system health, component status, and uptime metrics.
  • Data Quality — See quality scores across dimensions like completeness, consistency, and accuracy. Review and resolve flagged issues.
  • Standards — Browse industry standards (FHIR, MITRE ATT&CK, STIX, Schema.org) and import them into your domains.
  • Classification — Configure security classification levels and redaction rules for sensitive data.
  • Extensions — View loaded engine extensions and their status.
  • MCP Tools — Browse the tools available to the AI system for queries, mutations, and admin tasks.
  • Entity Resolution — Find and merge potential duplicate entities using fuzzy matching.
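
Fuzzy matching of this kind can be approximated with simple string-similarity scoring. A rough Python sketch using the standard library's SequenceMatcher (the 0.6 threshold is an arbitrary illustration, not the platform's actual setting):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Case-insensitive similarity score between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pairs = [("Acme Corporation", "ACME Corp."), ("Acme Corporation", "Apex Ltd")]
for a, b in pairs:
    score = similarity(a, b)
    verdict = "possible duplicate" if score > 0.6 else "distinct"
    print(f"{a!r} vs {b!r}: {score:.2f} -> {verdict}")
```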


Code Intelligence

Map your entire codebase as an interactive knowledge graph. Scalix Prime parses source code in 6 languages (Python, TypeScript, Go, Rust, Java, C#) and builds a graph of modules, functions, classes, dependencies, and call chains.

What you can do

Dependency Mapping

See every import, call chain, and circular dependency across your monorepo. Identify which modules are tightly coupled and which are isolated.

Impact Analysis

Before changing a function, trace every downstream consumer. Know exactly which services, APIs, and tests will be affected — even across microservice boundaries.

Dead Code Detection

Automatically find functions, classes, and modules with zero callers. Clean up legacy code with confidence.

Security Hotspots

Detect potential SQL injection risks, unsanitized inputs, and known vulnerabilities in dependencies. Trace call paths from user input to database queries.

Semantic Code Search

Search by meaning, not keywords. "Find all functions that handle authentication" returns results even if they're named validateToken or checkCredentials.


Healthcare & Life Sciences

Scalix Prime includes built-in support for the HL7 FHIR R4 healthcare standard, mapping patient records, conditions, medications, and encounters into a standards-compliant knowledge graph.

What you can do

Cross-Department Visibility

Link patient records across ER, endocrinology, cardiology, and pharmacy. See the complete patient journey in one connected view.

Drug Interaction Detection

Automatically flag contraindicated medication combinations. The graph reveals dangerous interactions that siloed systems miss — like when an ER physician prescribes a drug that conflicts with a medication from a different department.

Clinical Trial Matching

Trace patient medical history through conditions, observations, and medications to match eligible candidates for active clinical trials.

Care Gap Identification

Detect missing follow-up appointments, overdue screenings, and incomplete treatment protocols.

Built-in standards

Scalix Prime supports FHIR R4 entity types (Patient, Condition, Observation, MedicationRequest, Encounter, Practitioner) with ICD-10, SNOMED-CT, LOINC, RxNorm, and NPI coding systems.


Supply Chain & Logistics

Model your entire supply chain as a connected graph — every supplier, sub-supplier, warehouse, shipment route, and material flow. See the ripple effects of disruptions before they hit your production line.

What you can do

Full Network Visibility

Map every tier of your supply network. Discover sub-supplier dependencies that procurement teams don't know about.

Disruption Impact Analysis

When a supplier goes offline, instantly trace which products, warehouses, and delivery commitments are affected.

Single-Source Risk Detection

Identify critical components sourced from a single supplier or region. Surface concentration risks before they become crises.

ESG & Compliance Monitoring

Flag suppliers with ESG risks, missing certifications, or regulatory non-compliance across your entire network.


Defense & Intelligence

Built for environments where security is non-negotiable. Scalix Prime runs completely offline — no internet, no cloud dependency, no telemetry. Classification labels are enforced on every piece of data.

What you can do

Multi-Source Intelligence Fusion

Connect and cross-reference data from HUMINT, SIGINT, OSINT, and GEOINT sources in a single graph. Discover connections that siloed databases hide.

Network & Pattern Analysis

Track how threat networks evolve over time. Identify key nodes, communication patterns, and emerging relationships.

Classification-Level Access Control

Access controls map directly to security classification levels. An analyst with SECRET clearance sees only SECRET-and-below data — enforced automatically.
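
The enforcement rule is a simple ordering over classification levels. A Python sketch (the level names and records here are invented for illustration):

```python
LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP_SECRET"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def visible(records, clearance):
    """Return only records at or below the analyst's clearance level."""
    return [r for r in records if RANK[r["classification"]] <= RANK[clearance]]

records = [
    {"name": "report-a", "classification": "UNCLASSIFIED"},
    {"name": "report-b", "classification": "SECRET"},
    {"name": "report-c", "classification": "TOP_SECRET"},
]

print([r["name"] for r in visible(records, "SECRET")])
# ['report-a', 'report-b']
```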

Air-Gapped Deployment

Deploy as a single binary on hardened devices in disconnected environments. No network access required.

Built-in standards

Scalix Prime includes MITRE ATT&CK v14 (adversary tactics and techniques) and STIX 2.1 (structured threat intelligence) as importable ontology standards.


Financial Services

Trace transaction flows across accounts, institutions, and jurisdictions. Scalix Prime's real-time graph queries make it viable for fraud detection, AML/KYC compliance, and risk monitoring.

What you can do

Transaction Tracing

Follow money across accounts, institutions, and jurisdictions. Detect circular flows, layering, and structuring patterns used in money laundering.

Fraud Detection

Build fraud detection graphs that link accounts, devices, IP addresses, and behavioral signals. Score every transaction in real time.

Hidden Exposure Discovery

Detect circular dependencies and cascading risks across counterparties that traditional risk systems miss.

Regulatory Reporting

Full audit trail on every data access and modification. Generate compliance reports for AML, KYC, and MiFID II requirements.


Platform Capabilities

Scalix Prime combines several technologies into one integrated platform:

ScalixCore Engine

The core data engine unifies tables and graphs in the same system. Query with SQL, Cypher, or natural language — all against the same data, in the same transaction.

Scalix Brain

AI layer providing multi-agent chat, natural language query routing, domain auto-discovery, hybrid search (text + semantic + fuzzy + graph), and the code intelligence pipeline.

ScalixDB

Managed PostgreSQL with vector search and physical isolation per client pod. Handles storage, replication, and data persistence for the knowledge graph.

Scalix Router

Translates natural language questions into optimized queries, selects the best AI model for each task, and validates every response against the knowledge graph before returning it.

Key benefit

You don't need to interact with these components separately. The Workshop provides a unified interface — just use the tools, and the platform handles the rest.


API Reference

Every operation available in the Workshop is also available through the REST API. Enterprise clients can integrate directly without using the drag-and-drop UI — ingest data, query the graph, manage schemas, and run AI agents all via HTTP.

Base URL

https://your-instance.scalix.cloud/api/v1/ — All endpoints accept and return JSON. Authentication uses Bearer tokens in the Authorization header.
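
For example, a client might construct an authenticated JSON request with Python's standard library like this (the token and query payload are placeholders):

```python
import json
import urllib.request

BASE_URL = "https://your-instance.scalix.cloud/api/v1"
TOKEN = "example-token"  # placeholder: use your real Bearer token

def build_request(path, payload):
    """Build an authenticated JSON POST request (construction only, not sent)."""
    return urllib.request.Request(
        url=BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/search", {"query": "overdue invoices", "limit": 10})
print(req.full_url)  # https://your-instance.scalix.cloud/api/v1/search
```

Sending it is one more line — `urllib.request.urlopen(req)` — or use any HTTP client you prefer.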

Health & Status

GET /api/v1/health

System health check — returns status, uptime, registered domains, and component checks.

Data Ingestion

Create entities and relationships in the knowledge graph programmatically. Supports single, batch, and CSV upload.

POST /api/v1/ingest/entity

Create a single entity (node) with domain, type, name, properties, and optional classification level.

POST /api/v1/ingest/relationship

Create a relationship (edge) between two entities by specifying source, target, and relationship type.

POST /api/v1/ingest/batch

Ingest multiple entities and relationships in a single request for high-throughput data loading.

POST /api/v1/ingest/csv

Upload a CSV file (multipart/form-data) and auto-parse rows into entities.

PATCH /api/v1/ingest/entity/{node_id}

Update properties on an existing entity.

DELETE /api/v1/ingest/entity/{node_id}

Remove an entity from the graph.

Search & Traversal

POST /api/v1/search

Hybrid search combining full-text, vector (semantic), and graph-aware ranking. Returns matched entities with relevance scores.

POST /api/v1/graph/traverse

BFS/DFS graph traversal from a starting node with configurable depth and relationship type filters.
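
A depth-limited BFS with a relationship-type filter works like this sketch (toy adjacency data; the real endpoint runs the traversal on the knowledge graph server-side):

```python
from collections import deque

# Toy adjacency list: node -> list of (relationship, neighbor) edges.
GRAPH = {
    "acct:1": [("TRANSFERS_TO", "acct:2"), ("OWNED_BY", "person:9")],
    "acct:2": [("TRANSFERS_TO", "acct:3")],
    "acct:3": [("TRANSFERS_TO", "acct:1")],
    "person:9": [],
}

def traverse(start, max_depth, rel_filter=None):
    """Breadth-first traversal with a depth limit and relationship filter."""
    seen, order = {start}, [start]
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # depth limit reached; do not expand further
        for rel, neighbor in GRAPH.get(node, []):
            if rel_filter and rel not in rel_filter:
                continue  # edge type excluded by the filter
            if neighbor not in seen:
                seen.add(neighbor)
                order.append(neighbor)
                queue.append((neighbor, depth + 1))
    return order

print(traverse("acct:1", max_depth=2, rel_filter={"TRANSFERS_TO"}))
# ['acct:1', 'acct:2', 'acct:3']
```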

AI Agents & Reasoning

POST /api/v1/query

Run a natural language query through the full agent pipeline (scout, analyst, compliance). Returns structured results with provenance.

POST /api/v1/brain/reason

Direct AI reasoning with tool calling and agent trace output. Use this for custom agent integrations.

GET /api/v1/brain/tools

List all MCP tools available to the AI system for queries, mutations, and admin tasks.

Domain & Schema Management

GET /api/v1/domains

List all registered domains with entity type counts and statistics.

GET /api/v1/domains/{domain_id}

Full domain schema — node types, relationship types, property definitions, and current stats.

POST /api/v1/schema/{domain_id}/propose

Propose a schema change (add type, add property, modify relationship). Requires approval before applying.

GET /api/v1/schema/{domain_id}/history

Complete schema version history with diffs — every change is versioned and auditable.

Training & Drift Detection

GET /api/v1/training/{domain_id}/drift

Data drift report — composite drift score, affected entity types, and recommended schema changes.

POST /api/v1/training/{domain_id}/train

Trigger a training cycle that analyzes recent data patterns and generates schema recommendations.

GET /api/v1/training/{domain_id}/recommendations

Retrieve pending recommendations for review. Each includes a description, confidence score, and impact preview.

Connectors

GET /api/v1/connectors

List data connectors (PostgreSQL, S3, Snowflake, Kafka, REST API). Includes status and last sync time.

POST /api/v1/connectors

Create a new data connector with connection configuration and sync schedule.

POST /api/v1/connectors/{connector_id}/test

Test connector connectivity and latency before using it in production.

Code Analysis

GET /api/v1/dead-code

Detect unused functions, classes, variables, and imports across the indexed codebase.

GET /api/v1/impact/{symbol}

Blast radius analysis — direct and transitive impact of changing a specific symbol.

GET /api/v1/context/{symbol}

360-degree symbol context — callers, callees, type references, documentation, and usage patterns.

Full API documentation

The API serves interactive OpenAPI (Swagger) documentation at /docs and ReDoc at /redoc. These include request/response schemas, example payloads, and a "Try It" interface.

Example: Ingest and query via curl

# Create an entity
curl -X POST https://your-instance.scalix.cloud/api/v1/ingest/entity \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "domain": "finance",
    "entity_type": "Transaction",
    "name": "TXN-2024-001",
    "properties": {"amount": 50000, "currency": "USD"}
  }'

# Search the knowledge graph
curl -X POST https://your-instance.scalix.cloud/api/v1/search \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "high risk transactions over 10000", "limit": 20}'

# Ask a question via the agent pipeline
curl -X POST https://your-instance.scalix.cloud/api/v1/query \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "Which accounts have circular fund flows?", "domain": "finance"}'


Performance

Scalix Prime is built for speed. Here's what that means for your day-to-day experience:

  • Real-time graph queries — Navigate complex data relationships with sub-millisecond latency; finding how two entities are connected across 3 hops takes less than 1 μs.
  • Instant search — Combined text, semantic, and fuzzy search returns results in milliseconds across your entire knowledge graph.
  • Live permission checks — Access control is enforced on every single property access without any noticeable delay.
  • No waiting for queries — Whether you're exploring data in Ontology Explorer, running analysis, or asking the AI a question, results come back fast enough for real-time interaction.

What this means

Performance at this level makes use cases like real-time fraud detection, clinical decision support, and live supply chain monitoring practical — not just theoretical.


Security & Compliance

Your data is isolated

Every customer gets their own dedicated database instance. Your data is physically separated from other customers' data — not just logically partitioned. Cross-tenant data access is structurally prevented.

Encryption & integrity

  • In transit — All network communication uses TLS 1.3.
  • At rest — Customer-managed encryption keys (CMEK) and automated key rotation are on the roadmap.
  • Data integrity — The write-ahead log (WAL) is protected with CRC32 integrity checksums. Every write is verified on read.
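
The write-and-verify cycle can be illustrated with Python's standard `zlib.crc32`:

```python
import zlib

record = b'{"id": "TXN-2024-001", "amount": 50000}'

# On write: store the checksum alongside the record.
stored_crc = zlib.crc32(record)

# On read: recompute and compare before trusting the bytes.
def verify(data, expected_crc):
    return zlib.crc32(data) == expected_crc

print(verify(record, stored_crc))         # True
print(verify(record + b"x", stored_crc))  # False: corruption detected
```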

Fine-grained access control

Access permissions go all the way down to individual properties on individual entities. For example, a nurse can see a patient's vitals but not their psychiatric notes. Roles inherit permissions from parent roles, and wildcard rules make broad grants easy while keeping fine-grained overrides simple.
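
One way such rules compose — a broad wildcard grant with a fine-grained deny override — can be sketched in Python (the policy format below is illustrative, not the platform's actual syntax):

```python
from fnmatch import fnmatch

# Hypothetical policy shape: a broad wildcard grant plus a fine-grained deny.
POLICY = {
    "nurse": {
        "allow": ["Patient.*"],                 # wildcard: all Patient properties
        "deny": ["Patient.psychiatric_notes"],  # override: except this one
    },
}

def can_read(role, prop):
    """Deny rules win; otherwise any matching allow rule grants access."""
    rules = POLICY[role]
    if any(fnmatch(prop, pattern) for pattern in rules["deny"]):
        return False
    return any(fnmatch(prop, pattern) for pattern in rules["allow"])

print(can_read("nurse", "Patient.vitals"))             # True
print(can_read("nurse", "Patient.psychiatric_notes"))  # False
```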

Compliance

GDPR

Full data subject rights — right to erasure, data portability, and audit trails on all data access.

HIPAA

PHI safeguards with property-level access control, audit logging, encryption at rest and in transit.

SOC 2

Framework covering security, availability, and confidentiality trust service criteria.


Smart Schema

Scalix Prime includes a schema evolution system that improves your data model over time. Instead of manually defining every entity type and relationship, the platform observes how your data behaves and recommends changes for your approval.

How it works

  1. Observe — The system monitors data operations: entity creation, property updates, relationship changes, query patterns, and access frequencies.
  2. Recommend — Based on observed patterns, it suggests new entity types, missing relationships, property additions, index optimizations, and schema refinements.
  3. You decide — Every recommendation requires your review and approval before it's applied. The system never changes your data model without permission.
  4. Drift alerts — If new data patterns diverge from your current schema, you'll be alerted before applications break — especially valuable when data sources change without notice.
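
A drift score can be as simple as the share of incoming properties not covered by the current schema. A toy Python illustration (the platform's actual scoring is richer):

```python
# Current schema for one entity type (toy example).
SCHEMA = {"Patient": {"name", "dob", "mrn"}}

incoming = [
    {"name": "A. Smith", "dob": "1980-01-01", "mrn": "123"},
    {"name": "B. Jones", "dob": "1975-05-05", "mrn": "456", "insurer": "Acme Health"},
]

def drift_score(entity_type, records):
    """Fraction of observed properties not covered by the schema."""
    known = SCHEMA[entity_type]
    total = unknown = 0
    for record in records:
        for prop in record:
            total += 1
            unknown += prop not in known
    return unknown / total

print(round(drift_score("Patient", incoming), 3))  # 0.143 -> 'insurer' is new
```
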

Human in the loop

The schema system is advisory only. All changes are versioned, auditable, and reversible. You stay in control.

Built-in industry standards

You can import pre-built ontology schemas for common industry standards:

  • HL7 FHIR R4 — Healthcare interoperability (Patient, Condition, Observation, etc.)
  • MITRE ATT&CK v14 — Cybersecurity threat taxonomy
  • STIX 2.1 — Structured Threat Intelligence eXpression
  • Schema.org — General-purpose linked data vocabulary

Import any of these from the Administration > Standards tab in Workshop.


Frequently Asked Questions

Do I need to know SQL or Cypher?

No. The AI Chat agents translate your plain English questions into queries automatically. Workshop tools like Ontology Explorer and Analysis Workbench are fully point-and-click. If you do know SQL or Cypher, the Query tool in Chat lets you run them directly.

Can I export my data?

Yes. The Analysis Workbench lets you download results as CSV. App Builder can export app definitions as JSON. Data Pipeline supports exporting to CSV and webhooks.

What happens if the AI gives a wrong answer?

AI Chat agents always ground their answers in your actual data. You can see exactly what the agent searched for and what it received by expanding the tool call sections in the conversation. The data shown is real — the AI interprets it but doesn't fabricate data points.

Can multiple people use the Workshop at the same time?

Yes. Each user has their own session, saved apps, and conversation history. Access control ensures users only see data they're authorized to access.

How do I add my own data?

Use the Data Pipeline tool to import data from CSV files, databases, APIs, or S3 buckets. The pipeline lets you clean and transform the data before loading it into the knowledge graph.

How do I get help inside the Workshop?

Every page has a Tour button that walks you through the key features. Look for the small ? icons next to complex controls for contextual explanations. You can also ask the AI Chat General agent questions about how to use the platform.

Can I use Scalix Prime without the Workshop UI?

Yes. Every operation available in the Workshop is exposed through the REST API at /api/v1/. You can ingest data, search the graph, manage schemas, run AI agents, and administer connectors entirely via curl, Python, or any HTTP client. Interactive API docs are served at /docs (Swagger) and /redoc. See the API Reference section for details.

Can Scalix Prime run without internet?

Yes. Scalix Prime supports fully air-gapped deployment for defense and high-security environments. The engine runs as a self-contained binary with no external dependencies.