




Makini provides a `/sync-status` API endpoint that returns the current synchronization state for a connection. The response includes the last successful sync timestamp, the sync status (in progress, completed, or failed), any error messages, and the next scheduled sync time. You can query this endpoint to monitor sync health and detect issues early. For workflow-based syncs using Makini Flows, each workflow execution is logged with detailed status information, including start time, completion time, success/failure status, and any errors encountered. The Makini dashboard also provides visual sync status monitoring across all connections.
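As a rough illustration, the sketch below polls the endpoint and reacts to the reported state. The base URL, bearer-token header, `connectionId` query parameter, and response field names are assumptions made for this example, not the documented schema; check the API reference for exact names.

```python
import requests

# Illustrative values only; substitute your real base URL, token, and connection ID.
BASE_URL = "https://api.makini.io"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
CONNECTION_ID = "conn_123"


def check_sync_status(connection_id: str) -> dict:
    """Fetch the current synchronization state for one connection."""
    response = requests.get(
        f"{BASE_URL}/sync-status",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"connectionId": connection_id},  # assumed parameter name
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


status = check_sync_status(CONNECTION_ID)
# Field names below are illustrative; consult the API reference for the exact schema.
if status.get("status") == "failed":
    print(f"Sync failed: {status.get('error')}")
else:
    print(f"Last sync: {status.get('lastSyncAt')}, next sync: {status.get('nextSyncAt')}")
```

A simple pattern is to run a check like this on a schedule and alert when the status has been "failed" for more than one consecutive poll.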
Makini sends webhooks for several event types: sync completion (successful or failed), connection authentication required (when credentials need renewal), connection status changes (online/offline), and system errors requiring attention. Each webhook payload includes the event type, timestamp, connection ID, and event-specific details like error messages or affected entities. You can configure which events trigger webhooks on a per-connection basis. For workflow-based integrations using Makini Flows, you can also set up custom webhooks triggered by specific conditions in your business logic, providing granular control over real-time notifications.
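To show how these notifications might be consumed, here is a minimal receiver sketch using Flask. The endpoint path, event type strings, and payload field names are assumptions for illustration; the source only guarantees that each payload carries the event type, timestamp, connection ID, and event-specific details.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/makini/webhooks", methods=["POST"])
def handle_makini_webhook():
    """Receive a Makini webhook and route it by event type."""
    event = request.get_json(force=True)

    # Field and event names are illustrative, not the documented schema.
    event_type = event.get("type")
    connection_id = event.get("connectionId")

    if event_type == "sync.failed":
        alert_on_call(connection_id, event.get("error"))
    elif event_type == "connection.auth_required":
        notify_admin_to_reauthenticate(connection_id)
    elif event_type == "connection.status_changed":
        update_connection_dashboard(connection_id, event.get("status"))

    # Acknowledge quickly; defer heavy processing to a background queue.
    return jsonify({"received": True}), 200


# Placeholder hooks into your own alerting and monitoring stack.
def alert_on_call(connection_id, error):
    print(f"ALERT: sync failed on {connection_id}: {error}")


def notify_admin_to_reauthenticate(connection_id):
    print(f"Connection {connection_id} needs credentials renewed")


def update_connection_dashboard(connection_id, status):
    print(f"Connection {connection_id} is now {status}")
```

Returning a 2xx response promptly is the key design choice here: most webhook senders retry on timeouts, so slow synchronous processing can cause duplicate deliveries.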
Connection-specific errors often relate to system configuration, permissions, or connectivity issues. Common scenarios include the system being offline or unreachable, expired credentials, API rate limits on the source system, or permission changes in the source system. Use the connection status endpoint to check connection health before making API calls. Implement a circuit breaker pattern: if a connection repeatedly fails, temporarily stop making requests to it to avoid cascading failures. Log connection-specific errors separately so you can identify problematic connections. When errors occur, check whether the issue affects all operations or only specific entity types; this helps narrow down permission or configuration problems. For on-premises systems, verify network connectivity and firewall rules. If connection errors persist, contact support with the connection ID and the affected operations.
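One way to apply the circuit breaker advice is a small per-connection breaker, as in the sketch below. The thresholds, cool-down period, and helper names are arbitrary choices for illustration, not part of any Makini SDK.

```python
import time


class CircuitBreaker:
    """Stop calling a connection after repeated failures, then retry after a cool-down."""

    def __init__(self, failure_threshold: int = 5, reset_timeout: float = 300.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failure_count = 0
        self.opened_at = None  # time the breaker opened, or None while closed

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # closed: requests flow normally
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            return True  # half-open: allow one trial request after the cool-down
        return False  # open: skip the call entirely

    def record_success(self):
        self.failure_count = 0
        self.opened_at = None

    def record_failure(self):
        self.failure_count += 1
        if self.failure_count >= self.failure_threshold:
            self.opened_at = time.monotonic()


# One breaker per connection ID keeps one failing connection from affecting others.
breakers: dict[str, CircuitBreaker] = {}


def call_with_breaker(connection_id: str, make_request):
    breaker = breakers.setdefault(connection_id, CircuitBreaker())
    if not breaker.allow_request():
        raise RuntimeError(f"Circuit open for connection {connection_id}; skipping call")
    try:
        result = make_request()
        breaker.record_success()
        return result
    except Exception:
        breaker.record_failure()
        raise
```

Keeping a separate breaker per connection also makes the per-connection error logging straightforward: each `record_failure` call is a natural place to emit a log line tagged with the connection ID.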
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets: our API returns results in pages, with continuation tokens for fetching subsequent pages. When writing large volumes of data, break the operation into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging so you can identify which specific records fail within a batch. For very large operations (thousands of records), consider an asynchronous processing pattern in which you queue operations and process them in the background.
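The sketch below combines both patterns under stated assumptions: the pagination loop assumes a `cursor` request parameter and a `nextCursor` response field (hypothetical names for the continuation token), and `write_batch` stands in for whatever bulk-write call your integration uses.

```python
import logging
import time

import requests

logger = logging.getLogger("bulk_ops")

BASE_URL = "https://api.makini.io"  # assumed base URL for illustration
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}


def fetch_all_pages(path: str, params: dict | None = None) -> list[dict]:
    """Follow continuation tokens until every page has been retrieved."""
    items, cursor = [], None
    while True:
        page_params = dict(params or {})
        if cursor:
            page_params["cursor"] = cursor  # hypothetical token parameter
        resp = requests.get(f"{BASE_URL}{path}", headers=HEADERS, params=page_params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        items.extend(body.get("data", []))
        cursor = body.get("nextCursor")  # hypothetical token field
        if not cursor:
            return items


def write_in_batches(records: list[dict], write_batch, batch_size: int = 100, delay: float = 2.0) -> list[dict]:
    """Write records in small batches with a pause between batches.

    `write_batch` is your own function that sends one batch and raises on failure.
    Returns the records that could not be written so they can be retried or reviewed.
    """
    failed: list[dict] = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        try:
            write_batch(batch)
        except Exception:
            # Log the whole batch; a finer-grained API response would let you
            # record only the individual records that failed.
            logger.exception("Batch starting at record %d failed", start)
            failed.extend(batch)
        time.sleep(delay)  # avoid overwhelming the target system
    return failed
```

For the asynchronous variant mentioned above, the same `write_in_batches` function can be run by a background worker that consumes records from a queue instead of being called inline with the user request.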
