




The initial sync occurs when you first connect a system and retrieves historical data to establish a baseline. This includes records from a configurable time period (typically 30-90 days) and can take several minutes to hours depending on data volume. Initial syncs are complete snapshots of the requested data. Incremental syncs occur on subsequent runs and retrieve only records created or modified since the last successful sync. Makini tracks sync timestamps and uses them to query for changes efficiently. Incremental syncs are much faster, usually completing in seconds to minutes. This approach minimizes API load on source systems while keeping your data current.
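A minimal sketch of how a sync loop of this kind can be driven from a stored timestamp, assuming a hypothetical endpoint and an `updated_after` query parameter; the actual paths, parameters, and response shape should be taken from the Makini API reference.

```typescript
// Sketch of an initial-vs-incremental sync driven by a stored cursor.
// Endpoint path, `updated_after` parameter, and response shape are assumptions.
interface SyncState {
  lastSuccessfulSync: string | null; // ISO 8601 timestamp of the last good run
}

async function runSync(state: SyncState, apiToken: string): Promise<SyncState> {
  const startedAt = new Date().toISOString();

  // Initial sync: no previous timestamp, so fall back to the configured
  // baseline window (here, the last 90 days). Incremental sync: changes only.
  const since = state.lastSuccessfulSync
    ?? new Date(Date.now() - 90 * 24 * 60 * 60 * 1000).toISOString();

  const url = new URL("https://api.makini.io/work-orders"); // hypothetical endpoint
  url.searchParams.set("updated_after", since);             // hypothetical parameter

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!response.ok) {
    throw new Error(`Sync failed with status ${response.status}`);
  }

  const records = await response.json();
  await persistRecords(records); // your own storage layer

  // Advance the cursor only after a successful run, so a failed sync
  // is retried from the same point next time.
  return { lastSuccessfulSync: startedAt };
}

async function persistRecords(records: unknown[]): Promise<void> {
  // Placeholder: write the records to your database or queue.
}
```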
Design your webhook receiver to handle duplicates and out-of-order webhooks, as network issues or retries can cause both scenarios. Keep the receiver lightweight—ideally writing incoming webhooks to a queue or reliable storage—then process them asynchronously. This prevents timeouts and allows your system to handle high-volume webhook spikes. Respond with a 200 status code immediately after receiving the webhook, before processing begins. Implement idempotency by tracking processed webhook IDs and skipping duplicates. Use constant-time comparison for signature verification to prevent timing attacks. If webhook processing fails, log the error but still return 200 to prevent unnecessary retries. Set up monitoring and alerts for webhook failures so you can investigate issues promptly. For critical workflows, combine webhooks with periodic polling as a fallback mechanism.
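The sketch below illustrates those points in one place, assuming an Express receiver, an HMAC-SHA256 signature, and `x-webhook-signature` / `x-webhook-id` headers; those names are placeholders, so check your webhook configuration for the real scheme.

```typescript
// Minimal webhook receiver sketch: verify the signature with a constant-time
// comparison, skip duplicates by webhook ID, enqueue for async processing,
// and acknowledge with 200 immediately. Header names and signing scheme are
// assumptions for illustration only.
import express from "express";
import crypto from "crypto";

const app = express();
app.use(express.raw({ type: "application/json" })); // keep the raw body for signature checks

const SECRET = process.env.WEBHOOK_SECRET ?? "";
const processedIds = new Set<string>(); // replace with durable storage in production
const queue: Buffer[] = [];             // replace with a real queue (SQS, Redis, etc.)

function isValidSignature(body: Buffer, signature: string): boolean {
  const expected = crypto.createHmac("sha256", SECRET).update(body).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Constant-time comparison prevents timing attacks; check lengths first
  // because timingSafeEqual throws on buffers of unequal length.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

app.post("/webhooks/makini", (req, res) => {
  const signature = req.header("x-webhook-signature") ?? "";
  if (!isValidSignature(req.body, signature)) {
    return res.sendStatus(401);
  }

  const webhookId = req.header("x-webhook-id") ?? "";
  if (processedIds.has(webhookId)) {
    return res.sendStatus(200); // duplicate delivery: acknowledge and skip
  }
  processedIds.add(webhookId);

  queue.push(req.body); // hand off to an async worker; never process inline
  return res.sendStatus(200);
});

app.listen(3000);
```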
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets—our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
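As a rough sketch of the read and write sides of that pattern, the example below pages through results with a continuation token and writes in delayed batches of 50, logging per-record failures; the endpoints, the `cursor` parameter, and the exact batch size are illustrative assumptions rather than the precise API surface.

```typescript
// Paginated reads plus batched writes with a delay between batches.
const BATCH_SIZE = 50;
const BATCH_DELAY_MS = 1000;

async function fetchAllPages(baseUrl: string, token: string): Promise<any[]> {
  const all: any[] = [];
  let cursor: string | undefined;

  do {
    const url = new URL(baseUrl);
    if (cursor) url.searchParams.set("cursor", cursor); // hypothetical continuation token
    const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
    const page = await res.json();
    all.push(...page.data);
    cursor = page.nextCursor; // undefined when there are no more pages
  } while (cursor);

  return all;
}

async function writeInBatches(records: any[], writeOne: (r: any) => Promise<void>) {
  const failures: { record: any; error: unknown }[] = [];

  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE);

    // Log per-record failures so one bad record does not hide the rest of the batch.
    const results = await Promise.allSettled(batch.map(writeOne));
    results.forEach((result, idx) => {
      if (result.status === "rejected") {
        failures.push({ record: batch[idx], error: result.reason });
        console.error(`Record ${i + idx} failed:`, result.reason);
      }
    });

    // Pause between batches to avoid overwhelming the target system.
    await new Promise((resolve) => setTimeout(resolve, BATCH_DELAY_MS));
  }

  return failures;
}
```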
Write operation limitations vary by system. Common limitations include: field-level restrictions (some fields may be read-only), business rule validation (orders may require certain fields or valid vendor codes), permission requirements (the connected account needs specific permissions), timing restrictions (some systems prevent modifications after certain workflow states), and rate limits on write operations. Custom fields in target systems may not be writable through standard APIs. Some systems have transactional requirements—for example, purchase order line items must be created in the same transaction as the order header. During implementation, we identify write operation limitations for your specific use cases and design workflows that work within those constraints.
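To make the transactional-requirement point concrete, here is a hypothetical purchase order write that submits the header and its line items in a single request; the endpoint and payload shape are placeholders, since the real schema depends on the connected system.

```typescript
// Illustrative only: header and line items sent together in one request,
// with business-rule validation errors surfaced from the response.
interface PurchaseOrderLine {
  itemNumber: string;
  quantity: number;
  unitPrice: number;
}

interface PurchaseOrderPayload {
  vendorCode: string;         // must be a valid vendor in the target system
  description: string;
  lines: PurchaseOrderLine[]; // created in the same transaction as the header
}

async function createPurchaseOrder(payload: PurchaseOrderPayload, token: string) {
  const res = await fetch("https://api.makini.io/purchase-orders", { // hypothetical endpoint
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });

  if (!res.ok) {
    // Business-rule validation (missing fields, invalid vendor codes, read-only
    // fields, workflow-state restrictions) typically surfaces here.
    const detail = await res.text();
    throw new Error(`Purchase order rejected (${res.status}): ${detail}`);
  }
  return res.json();
}
```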
