Have any questions? We're here to help you.
**How does API authentication work?**

All API requests require authentication via bearer token. After you successfully connect a system through Makini's authentication module, you receive an API token. Include this token in the Authorization header of your requests: `Authorization: Bearer YOUR_API_TOKEN`. Each connection has a unique token, allowing you to manage multiple customer connections independently. Tokens remain valid as long as the underlying system credentials are valid and the connection is active; if a customer changes their system credentials, you'll need to reconnect to obtain a new token.
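As a minimal sketch of an authenticated request in Python (using the `requests` library; the base URL and the `/work-orders` path are illustrative placeholders, not confirmed endpoints):

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # per-connection token from Makini's authentication module

# Base URL and endpoint path are illustrative; substitute the values for your connection.
BASE_URL = "https://api.makini.example"

response = requests.get(
    f"{BASE_URL}/work-orders",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
response.raise_for_status()  # surfaces 401s from expired or revoked tokens
print(response.json())
```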
**Does Makini support on-premises systems, or only cloud-based ones?**

Yes, Makini supports both cloud-based and on-premises systems. For on-premises installations, each connection requires double the connection credits of a cloud connection. The connection process typically requires opening specific ports and whitelisting Makini's IP addresses in your firewall configuration; for some on-premises systems, a VPN tunnel may be necessary. We provide detailed technical requirements during implementation planning. Where security policies prohibit external connections, we offer self-hosted deployment options in which Makini runs entirely within your infrastructure, eliminating the need for external network access to on-premises systems.
**Can I filter API data by date?**

Makini's API supports date filtering on most endpoints using query parameters. You can filter by creation date, modification date, or entity-specific date fields such as order date or delivery date. Common patterns include date-only values like `modified_after=2024-01-01` to retrieve records updated since a specific date, or full ISO 8601 timestamps like `modified_after=2024-01-01T00:00:00Z` when you need finer precision. For optimal performance, use incremental data retrieval rather than repeatedly fetching all records: the sync status endpoint provides the last sync timestamp, which you can use as the `modified_after` value for your next query. This approach minimizes data transfer and API load while ensuring you capture all changes.
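A hedged sketch of the incremental pattern in Python follows; the `/sync/status` path and the `lastSyncTimestamp` field name are assumptions about the sync status endpoint described above, and the base URL is again a placeholder:

```python
import requests

BASE_URL = "https://api.makini.example"  # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Assumed path and field name for the sync status endpoint;
# check the API reference for the exact route and response shape.
status = requests.get(f"{BASE_URL}/sync/status", headers=HEADERS)
status.raise_for_status()
last_sync = status.json()["lastSyncTimestamp"]

# Incremental pull: fetch only records changed since the last successful sync.
changed = requests.get(
    f"{BASE_URL}/work-orders",
    headers=HEADERS,
    params={"modified_after": last_sync},
)
changed.raise_for_status()
records = changed.json()  # assuming the endpoint returns a JSON array
print(f"{len(records)} records modified since {last_sync}")
```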
**What is the recommended approach for bulk operations?**

For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets: our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system, and implement comprehensive error logging so you can identify which specific records in a batch fail. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
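The sketch below illustrates both patterns in Python under the same assumptions as the earlier examples: a generator that follows continuation tokens through paginated results, and a writer that posts records in batches of 50 with a delay and per-record error logging. The `cursor`, `items`, and `nextCursor` names are assumed, not confirmed field names:

```python
import logging
import time

import requests

BASE_URL = "https://api.makini.example"  # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bulk")


def fetch_all(path):
    """Yield every record from a paginated endpoint by following continuation tokens.

    The `cursor` parameter and `items`/`nextCursor` fields are assumed names;
    the real pagination contract may differ.
    """
    cursor = None
    while True:
        params = {"cursor": cursor} if cursor else {}
        resp = requests.get(f"{BASE_URL}{path}", headers=HEADERS, params=params)
        resp.raise_for_status()
        page = resp.json()
        yield from page["items"]
        cursor = page.get("nextCursor")
        if not cursor:  # no continuation token means this was the last page
            return


BATCH_SIZE = 50          # within the 50-100 records-per-batch guideline above
BATCH_DELAY_SECONDS = 2  # pause between batches to avoid overwhelming the target system


def write_in_batches(path, records):
    """Write records in small batches, logging exactly which records fail."""
    for start in range(0, len(records), BATCH_SIZE):
        for record in records[start:start + BATCH_SIZE]:
            try:
                resp = requests.post(f"{BASE_URL}{path}", headers=HEADERS, json=record)
                resp.raise_for_status()
            except requests.RequestException as exc:
                # Per-record logging identifies which records in a batch failed.
                log.error("record %s failed: %s", record.get("id", "<no id>"), exc)
        time.sleep(BATCH_DELAY_SECONDS)
```

For very large jobs, the same `write_in_batches` loop can be moved behind a task queue so batches are processed asynchronously in the background rather than inside a request cycle.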
