




Have any questions? We’re here to help you
Integration timelines vary by complexity. For standard implementations with no customizations, connections can be live within 1-2 weeks. This includes authentication setup and basic workflow configuration. For implementations requiring custom workflows or specific business logic, timelines typically range from 2-6 weeks depending on the scope. Complex enterprise deployments with multiple systems and custom requirements may take 6-10 weeks. These timelines are significantly shorter than traditional integration projects, which often take 2-24 months.
Makini implements automatic retry logic for failed webhook deliveries. If your endpoint is unavailable or returns an error status code, we retry delivery with exponentially increasing intervals starting at 30 seconds. Retries continue for up to 24 hours. If delivery ultimately fails, the webhook is logged but not delivered. You can view failed webhooks in the Makini dashboard and manually retry them. To prevent webhook loss during extended downtime, implement a polling backup strategy—periodically check the sync status and query for recent changes if no webhooks have been received within the expected time window. Design your webhook receiver to be idempotent, as retry logic may result in duplicate deliveries.
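As an illustration, a receiver can deduplicate on a delivery identifier so retried deliveries are acknowledged without being reprocessed. The sketch below assumes an Express endpoint and a hypothetical `X-Delivery-Id` header and `event_id` field; check the payload Makini actually sends for the identifier to key on.

```typescript
import express from "express";

const app = express();

// Track processed delivery IDs so retried webhooks are handled only once.
// In production this would live in a shared store (e.g. Redis or a database).
const processedDeliveries = new Set<string>();

app.post("/webhooks/makini", express.json(), (req, res) => {
  // The delivery-ID header and event_id field are assumptions; use whatever
  // unique identifier the actual webhook payload provides.
  const deliveryId = req.header("X-Delivery-Id") ?? req.body.event_id;

  if (deliveryId && processedDeliveries.has(deliveryId)) {
    // Duplicate delivery from a retry: acknowledge without reprocessing.
    res.status(200).send("already processed");
    return;
  }

  // Process the event (update local records, enqueue downstream work, etc.).
  handleEvent(req.body);

  if (deliveryId) processedDeliveries.add(deliveryId);

  // Respond 2xx promptly so no further retries are scheduled.
  res.status(200).send("ok");
});

function handleEvent(payload: unknown): void {
  console.log("received webhook", payload);
}

app.listen(3000);
```

The same deduplication store can double as the polling backup's bookkeeping: if no delivery has been recorded within the expected window, query the API for recent changes and reconcile.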
Makini provides sandbox connections for testing without affecting production systems. Sandbox connections include sample data representing common scenarios: standard purchase orders, orders with custom fields, orders in various states (draft, approved, completed), and error cases like invalid vendors or out-of-stock items. Sandbox data is read-only for safety—write operations return success responses without modifying data. This allows thorough testing of your integration logic without risk. For testing with specific systems, we recommend using dedicated test instances of the actual systems (like SAP sandbox environments) connected through Makini, which provides the most realistic testing experience.
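Because sandbox writes succeed without persisting anything, tests should assert on the response rather than on stored state. A minimal sketch is below; the base URL, endpoint path, header names, and payload fields are illustrative assumptions, so confirm them against the API reference.

```typescript
// Exercising write logic against a sandbox connection.
const BASE_URL = "https://api.makini.io"; // assumed base URL
const SANDBOX_TOKEN = process.env.MAKINI_SANDBOX_TOKEN ?? "";

async function testCreatePurchaseOrder(): Promise<void> {
  const response = await fetch(`${BASE_URL}/api/v1/purchase-orders`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${SANDBOX_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      vendor_id: "VENDOR-001", // hypothetical vendor from the sample data
      items: [{ sku: "SKU-123", quantity: 2 }], // hypothetical line item
    }),
  });

  // Sandbox write operations return success without modifying data,
  // so the check here is only that the request was accepted.
  if (!response.ok) {
    throw new Error(`expected success response, got ${response.status}`);
  }
  console.log("sandbox write accepted:", await response.json());
}

testCreatePurchaseOrder();
```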
Makini uses cursor-based pagination for retrieving large datasets. API responses include a `next_cursor` field when additional results are available. To retrieve the next page, include the cursor value in your next request: `GET /api/v1/purchase-orders?cursor=CURSOR_VALUE`. Cursor-based pagination is more reliable than offset-based pagination because it handles data changes between requests—if records are added or deleted while you're paginating, you won't miss records or see duplicates. Page size is configurable up to a maximum limit (typically 100-500 records per page depending on entity type). For optimal performance, use the largest page size your application can handle efficiently. The API response also includes total count when available from the source system.
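A paging loop then follows `next_cursor` until it is absent. The sketch below uses the documented `cursor` query parameter and `next_cursor` response field; the base URL, auth header, page-size parameter, and the `data` results field are assumptions to verify against the API reference.

```typescript
// Sketch of cursor-based pagination over purchase orders.
const BASE_URL = "https://api.makini.io"; // assumed base URL
const TOKEN = process.env.MAKINI_TOKEN ?? "";

async function fetchAllPurchaseOrders(): Promise<unknown[]> {
  const results: unknown[] = [];
  let cursor: string | undefined;

  do {
    const url = new URL("/api/v1/purchase-orders", BASE_URL);
    url.searchParams.set("limit", "100"); // assumed page-size parameter
    if (cursor) url.searchParams.set("cursor", cursor);

    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${TOKEN}` },
    });
    if (!response.ok) {
      throw new Error(`request failed with status ${response.status}`);
    }

    const page = await response.json();
    results.push(...page.data);             // assumed results field
    cursor = page.next_cursor ?? undefined;  // absent on the last page
  } while (cursor);

  return results;
}

fetchAllPurchaseOrders().then((orders) =>
  console.log(`fetched ${orders.length} purchase orders`)
);
```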
