Core Concepts
These fundamental concepts form the foundation of how Outrun ingests, processes, and delivers your data, and understanding them will help you make the most of Outrun's data synchronization platform.
The Outrun Data Flow
Outrun follows a systematic approach to data synchronization:
1. Ingestion: raw data is collected from sources
2. Consolidation: data is merged and cleaned
3. Standardization: data is transformed into standard objects
4. Delivery: data is sent to destinations
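The four stages above can be sketched as a simple pipeline. The stage names come from this page; the record shapes, field names, and function bodies below are illustrative placeholders, not Outrun's actual implementation.

```python
# Illustrative sketch of the four-stage Outrun data flow.
# Record shapes and merge policy are hypothetical assumptions.

def ingest(source_records):
    """Ingestion: collect raw data from a source, preserving the original payload."""
    return [{"source_id": "crm-1", "record": r} for r in source_records]

def consolidate(raw):
    """Consolidation: merge and clean, deduplicating on an external id."""
    seen = {}
    for row in raw:
        seen[row["record"].get("external_id")] = row  # last write wins (assumed)
    return list(seen.values())

def standardize(consolidated):
    """Standardization: map raw fields onto a standard object shape."""
    return [{"object_type": "person", "name": row["record"].get("name")}
            for row in consolidated]

def deliver(objects):
    """Delivery: send standardized objects to a destination (stubbed here)."""
    return len(objects)  # pretend each object was delivered

raw = ingest([{"external_id": "a", "name": "Ada"},
              {"external_id": "a", "name": "Ada L."},
              {"external_id": "b", "name": "Grace"}])
delivered = deliver(standardize(consolidate(raw)))
print(delivered)  # the two duplicate "a" records collapse to one, so 2
```

Note how the raw payload is carried through untouched in `record` while each stage adds its own structure around it; that mirrors the data-preservation philosophy described later on this page.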
Key Concepts
Objects
The standardized object types that represent all business data: People, Organizations, Relationships, and Analytics.
- Universal data model
- Source and destination mappings
- Cross-object relationships
Ingestion
How Outrun collects and stores raw data from your sources using real-time streams or batch jobs.
- Real-time vs batch collection
- Raw data preservation
- Stream storage and metadata
Standardization
How raw data transforms into standardized People, Organizations, and Relationships.
- Standard object types
- Field mapping and transformation
- Cross-system compatibility
Delivery
How standardized data is delivered to your target systems with intelligent mapping and rate limiting.
- Destination-specific transformation
- Rate limiting and performance
- Error handling and retry logic
Storage
Multi-region storage architecture with intelligent regional data placement to meet compliance requirements.
- Multi-region replication
- Regional data centers
- Intelligent data placement
Security & Compliance
Comprehensive security measures, SOC 2 compliance, and data ownership policies.
- SOC 2 compliance framework
- Data encryption and access controls
- Regional compliance strategies
Data Relationships
How Outrun maintains connections between data objects across different systems.
- Cross-system relationships
- Relationship mapping
- Connection maintenance
Data Storage Architecture
Understanding how Outrun stores and processes your data:
Stream Data
The `stream_data` table holds raw data as received from APIs, partitioned by `source_id`:
- First-in, first-out: chronological data storage
- Metadata enriched: system metadata stored in a separate JSONB column
- Original format: data preserved as close to source format as possible in a `record` JSONB column
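As a concrete illustration, a single `stream_data` row could look like the dictionary below. Only `stream_data`, `source_id`, and `record` come from this page; the other column names and values are assumptions made for the example.

```python
import json
from datetime import datetime, timezone

# Hypothetical shape of one stream_data row: the raw payload is kept
# verbatim in `record`, while system metadata lives in a separate column.
stream_row = {
    "source_id": "salesforce-prod",                         # partition key (from this page)
    "received_at": datetime.now(timezone.utc).isoformat(),  # assumed timestamp column
    "metadata": {"api_version": "v58", "batch": 42},        # separate JSONB metadata column
    "record": {"Id": "003xx", "FirstName": "Ada"},          # original API payload, untouched
}

# The record column preserves the source format exactly as received:
print(json.dumps(stream_row["record"]))
```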
Consolidated Data
The `consolidated_data` table holds merged and cleaned data, keyed by `source_id` + `object_type`:
- Deduplication: duplicate records identified and merged via `external_id`
- Data quality: validation and cleansing applied
- Relationship mapping: cross-record relationships established
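The deduplication step can be sketched as follows: rows sharing an `external_id` (within the same source and object type) merge into one consolidated record. The merge policy used here, where later fields overwrite earlier ones, is an assumption for illustration, not documented Outrun behavior.

```python
# Sketch of deduplication via external_id. Rows with the same
# (source_id, object_type, external_id) key are merged into one record.

def consolidate(rows):
    merged = {}
    for row in rows:
        key = (row["source_id"], row["object_type"], row["external_id"])
        # Assumed merge policy: later fields overwrite earlier ones.
        merged.setdefault(key, {}).update(row["fields"])
    return merged

rows = [
    {"source_id": "crm", "object_type": "person", "external_id": "42",
     "fields": {"name": "Ada"}},
    {"source_id": "crm", "object_type": "person", "external_id": "42",
     "fields": {"email": "ada@example.com"}},
]
result = consolidate(rows)
print(len(result))                      # 1: the two rows merge into one record
print(result[("crm", "person", "42")])  # both fields combined
```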
Standardized Objects
- `people` table: contacts, leads, users from any system
- `organizations` table: companies, accounts, business entities
- `relationships` table: connections between people and organizations
- `search_analytics_data` table: search metrics, analytics, and performance data
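To make the four standardized object types concrete, here is one hypothetical record for each. Only the table names come from this page; every field name and value is an illustrative assumption.

```python
# Illustrative records for the four standardized object types.
# Field names are assumptions; only the table names appear on this page.
person = {"table": "people", "name": "Ada Lovelace", "email": "ada@example.com"}
organization = {"table": "organizations", "name": "Analytical Engines Ltd"}
relationship = {"table": "relationships",
                "person": person["email"],
                "organization": organization["name"],
                "role": "founder"}  # cross-object link between the two records above
analytics = {"table": "search_analytics_data", "query": "engines", "clicks": 7}

for obj in (person, organization, relationship, analytics):
    print(obj["table"])
```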
The Outrun Philosophy
Standardization Over Customization
Outrun focuses on creating standardized approaches rather than custom integrations:
- Opinionated Mappings: Pre-built field mappings for common use cases
- Standard Objects: Universal data models that work across systems
- Best Practices: Built-in data quality and validation rules
- Simplified Setup: Minimal configuration required
Data Preservation
We maintain data integrity throughout the process:
- Original Format: Raw data stored as received from APIs
- Audit Trail: Complete history of data transformations
- Metadata Enrichment: System information without altering source data
- Reversible Process: Ability to trace back to original data
Performance & Reliability
Built for enterprise-scale data synchronization:
- Rate Limit Management: Intelligent API quota management
- Error Handling: Comprehensive retry and recovery logic
- Monitoring: Real-time sync status and performance metrics
- Scalability: Designed to handle large data volumes
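The retry and recovery logic mentioned above can be sketched as retry-with-exponential-backoff. The backoff schedule, attempt count, and exception type below are assumptions for illustration, not Outrun's documented behavior.

```python
import time

# Minimal retry-with-backoff sketch. Schedule and exception type are
# hypothetical; a real system would also distinguish retryable errors.

def deliver_with_retry(send, payload, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == max_attempts:
                raise  # recovery exhausted; surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

# A stand-in destination that fails twice (e.g. rate limited), then succeeds.
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")
    return "ok"

print(deliver_with_retry(flaky_send, {"id": 1}))  # succeeds on the 3rd attempt
```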
Next Steps
Start with Ingestion
Learn how Outrun collects and stores data from your sources.
Learn About Ingestion →

Understanding these concepts will help you design effective data synchronization strategies with Outrun.