Data Out
Best practices for exporting data from Omnium using delta queries, events, and scrolling for optimal performance.
Overview
Omnium provides multiple mechanisms for exporting data and keeping external systems synchronized. Choose the right approach based on your use case.
Data Export (Output)
Delta Queries
Delta queries allow you to efficiently retrieve only the changes since a specific timestamp. This is ideal for polling-based integrations.
Example: Get orders modified since a timestamp:
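The request below is a sketch: the exact endpoint path, host, and parameter casing may differ in your environment, but the pattern is a standard query filtered by a changedSince timestamp:

```http
GET /api/orders?changedSince=2024-05-01T10:00:00Z HTTP/1.1
Host: api.omnium.example
Authorization: Bearer <your-api-token>
```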
Benefits:
- Retrieve only what has changed
- Reduce data transfer and processing
- Simple to implement with timestamp-based polling
- Works with standard API rate limits
Best Practice: Store the last successful query timestamp as a high-water mark to ensure exactly-once processing on your side.
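A minimal sketch of the high-water-mark pattern, reusing the illustrative orders endpoint from the example above:

```http
# 1. Poll with the last stored timestamp (illustrative endpoint and values).
GET /api/orders?changedSince=2024-05-01T10:00:00Z HTTP/1.1

# 2. After the batch is processed successfully, persist the time the query
#    was issued and use it as changedSince in the next poll.
GET /api/orders?changedSince=2024-05-01T10:05:00Z HTTP/1.1
```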
Practical Example: Inventory Delta Query with Auto-Scrolling
Here's a real-world example showing how delta queries automatically trigger scrolling for large result sets:
Sample request for inventory delta query:
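An illustrative request; the exact inventory endpoint and parameter names may vary in your environment:

```http
GET /api/inventory?changedSince=2024-05-01T00:00:00Z HTTP/1.1
Host: api.omnium.example
Authorization: Bearer <your-api-token>
```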
Sample response when result set exceeds 1000 items:
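The shape below is illustrative (the item fields are placeholders), but totalHits and scrollId behave as described in this section:

```json
{
  "totalHits": 4500,
  "scrollId": "c2Nyb2xsLWlkLWV4YW1wbGU=",
  "items": [
    { "sku": "100234", "stockQuantity": 12, "lastModified": "2024-04-30T22:14:09Z" },
    { "sku": "100235", "stockQuantity": 3, "lastModified": "2024-04-30T23:41:55Z" }
  ]
}
```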
Notice that the scrollId property has been populated because totalHits exceeds 1000.
Important: Whenever a result set is larger than 1000 items, the response contains a scrollId that must be used with the scroll endpoint to fetch the remaining items:
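A sketch of the follow-up call. The inventory scroll path is assumed here; the products equivalent documented later in this guide is POST /api/products/scroll:

```http
POST /api/inventory/scroll HTTP/1.1
Content-Type: application/json

{ "scrollId": "c2Nyb2xsLWlkLWV4YW1wbGU=" }
```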
Continue calling the scroll endpoint with the latest scrollId until all items are retrieved.
Event-Driven Export
Omnium fires events for most operations, whether executed via API or UI. Subscribe to these events for near real-time data synchronization.
Event Subscription Types:
- Webhooks
- Azure Storage Queues
- Azure Service Bus
- Apache Kafka
Setting Up Event Subscriptions:
Event subscribers are registered in the Omnium UI under Configuration > Event subscriptions. You can filter events by:
- Entity type (orders, products, inventory, etc.)
- Order type (Click and Collect, standard, etc.)
- Status changes
- Store
- Market
Example Event Payload:
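The payload below is illustrative; the available properties depend on the entity and event type, so treat this as a sketch rather than the exact schema:

```json
{
  "eventType": "OrderStatusChanged",
  "entityType": "Order",
  "entityId": "ORD-10023",
  "storeId": "store-001",
  "market": "NO",
  "timestamp": "2024-05-01T10:02:17Z"
}
```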
Learn More: See our Events documentation for detailed information on event subscriptions, retry policies, and event logging.
Scrolling for Large Datasets
For initial loads, large backfills, and recovery scenarios, use scrolling to retrieve large datasets efficiently.
Example: Initial scroll request:
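Using the products scroll endpoint referenced later in this guide; modifiedFrom is the documented parameter, and any additional body fields are omitted for brevity:

```http
POST /api/products/scroll HTTP/1.1
Content-Type: application/json

{
  "modifiedFrom": "2024-01-01T00:00:00Z"
}
```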
Response includes a scrollId:
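Illustrative response shape; the item fields are placeholders:

```json
{
  "totalHits": 25000,
  "scrollId": "c2Nyb2xsLWlkLWV4YW1wbGU=",
  "items": [
    { "id": "PROD-001", "name": "Example product" },
    { "id": "PROD-002", "name": "Another product" }
  ]
}
```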
Continue scrolling with scrollId:
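Pass the scrollId from the previous response back to the same endpoint (request shape assumed):

```http
POST /api/products/scroll HTTP/1.1
Content-Type: application/json

{ "scrollId": "c2Nyb2xsLWlkLWV4YW1wbGU=" }
```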
Key Features:
- Consistent snapshot of data at query time
- Handles datasets too large for standard pagination
- Must complete or terminate scroll to free resources
- Stricter rate limits than standard endpoints
Important Warning: Unfinished scroll operations continue to hold server resources. Always complete your scroll operations by fetching all results, or the scroll context will remain open until it expires (typically after 5 minutes of inactivity). Leaving many scrolls open can negatively impact system performance.
Learn More: See our Scrolling documentation for best practices and detailed workflow.
Event-Driven vs Polling Integrations
Choosing the right integration pattern depends on your use case. Here's guidance on when to use each approach.
Polling integrations use delta queries (via the changedSince or modifiedSince parameters) to periodically check for changes, while event-driven integrations receive push notifications when changes occur.
Quick Comparison
| Aspect | Event-Driven (Push) | Polling (Pull/Delta Queries) | 
|---|---|---|
| Latency | Near real-time | Controlled delay (e.g., 30 sec to hours/daily) | 
| Best For | Low-volume, time-critical changes | High-volume, frequently changing data | 
| Complexity | Higher (event processing, replays) | Lower (simple scheduled jobs) | 
| Message Volume | Can be very high for noisy domains | Batched, controlled volume | 
| Infrastructure | Requires event processing system | Standard HTTP polling | 
| Control | Omnium pushes when changes occur | You control polling frequency | 
| Examples | Order status, payments | Inventory sync, product updates | 
When Events Shine
Use events when you need near real-time propagation and precision:
- Order lifecycle changes that drive customer communications or fulfillment
- Customer profile updates such as consent, loyalty points, or address changes
- Low-volume but time-critical operations
Benefits:
- Low latency (near real-time)
- Fine-grained change signals
- Natural fit for reactive workflows
- Immediate downstream processing
Trade-offs:
- Potentially very high message counts for noisy domains (inventory, for example, can be noisy, though not always)
- Complex error handling and replays
- Harder to aggregate or reorder for downstream batch systems
- Requires robust event processing infrastructure
Example Use Cases:
- Notify customer when order is ready for pickup
- Update product availability on website immediately when stock changes
- Trigger fulfillment workflows on order placement
When Polling is Better
For high-churn domains such as inventory, polling with delta queries is often more efficient:
Recommended Pattern:
- Poll delta queries at fixed intervals (e.g., every 30 seconds to 1 minute for high-priority data, or hourly/daily for less time-sensitive data - tuned to your business needs)
- Apply changes downstream using bulk operations
- Keep a durable high-water mark to ensure exactly-once application on your side
Store the timestamp of successful processing and use it for the next poll.
Benefits:
- Lower operational overhead and simpler error handling
- Natural batching for downstream systems that prefer bulk writes
- Easier throughput control and backoff under load
- Reduced message volume for high-frequency changes
- Better for aggregation and bulk processing
- You control the polling frequency
Trade-offs:
- Small, controlled delay compared to pure events (typically 30-60 seconds)
- You must manage scheduling and checkpoints
- Requires durable storage for high-water marks (last successful timestamp)
Example Use Cases:
- Synchronizing inventory levels to ERP systems
- Bulk product updates to search indexes
- Periodic synchronization of customer data to marketing platforms
Hybrid Approach
The best integrations often combine both patterns:
- Events for critical, low-volume, latency-sensitive changes:
  - Order status changes
  - Payment confirmations
  - Critical inventory thresholds
- Polling (delta queries) for high-volume, frequently changing entities:
  - Inventory levels
  - Product catalog updates
  - Bulk price changes
- Scrolling for initial loads, large backfills, and recovery:
  - Initial data synchronization
  - System recovery after outages
  - Full catalog exports
Example Hybrid Architecture:
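One possible arrangement, purely illustrative:

```text
                 ┌─ Events (webhooks/queues) ──▶ order status, payments ─▶ notifications, fulfillment
Omnium ──────────┼─ Delta queries (scheduled) ─▶ inventory, prices ──────▶ ERP, website stock
                 └─ Scroll (on demand) ────────▶ full catalog ───────────▶ search index, backfills
```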
Example Use Case: Exporting Product Data for External Systems
A common scenario is exporting product data from Omnium to external systems such as search engines, marketplaces, or AI-driven product feeds (ACP anyone? 🔥).
In these cases, batch-oriented polling is usually the better approach:
- Many external systems expect bulk imports, not a constant stream of granular updates.
- Polling with delta queries (or scroll) lets you collect all recent changes at a fixed interval and send them in controlled batches downstream.
- This reduces chatter and network overhead while improving stability and predictability.
Example (recommended):
- Every 30 seconds, call POST /api/products/scroll with modifiedFrom set to your last successful timestamp.
- Add filters to retrieve only the subset you actually need (for example by channel, assortment, market, or product type).
- Use includeFields/excludeFields to limit the payload to only the fields required by the external system. This reduces transfer size and makes the integration faster and more reliable.
- Persist the timestamp of the last successful update so that if your application fails or restarts, you can resume from the same point by setting modifiedFrom accordingly.
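A sketch of such a request; modifiedFrom and includeFields are named in this guide, while the market filter and field names shown are illustrative:

```http
POST /api/products/scroll HTTP/1.1
Content-Type: application/json

{
  "modifiedFrom": "2024-05-01T10:00:00Z",
  "market": "NO",
  "includeFields": ["id", "name", "prices", "stockQuantity"]
}
```

If the response contains a scrollId, keep calling the scroll endpoint until every page is consumed before persisting the new timestamp.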
This way, you get up-to-date product data without flooding the receiving system with unnecessary updates.
Best Practices Summary
- Use events for time-critical, low-volume changes
- Use polling (delta queries) for high-volume, frequently changing entities
- Use scrolling for initial loads and large backfills
- Consider implementing a hybrid approach for optimal performance
Next Steps
Now that you understand data export patterns:
- Explore the Events documentation for event subscription details
- Review the Scrolling documentation for large dataset handling
- Check out the Data In documentation for import strategies
- Review the Integrations overview for additional integration patterns
