



Makini's unified API acts as a common denominator across all connected systems. We map each system's data structure to a standardized data model, exposing consistent endpoints regardless of the underlying platform. This means you write the same code to retrieve purchase orders from SAP, NetSuite, or Dynamics—the API calls and response formats are identical. You always get data in the same structure, making it easy to build consistent business logic. The unified approach eliminates the need to learn each system's unique API, manage multiple authentication methods, or handle varying data formats.
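To make the "same code for every system" point concrete, here is a minimal TypeScript sketch of fetching purchase orders through the unified API. The base URL, the `/purchase-orders` path, and the `PurchaseOrder` fields are illustrative assumptions, not documented endpoint names; the point is that the call looks the same whichever system sits behind the connection.

```typescript
// Hypothetical base URL and path used for illustration only.
const BASE_URL = "https://api.makini.example";

interface PurchaseOrder {
  id: string;
  number: string;
  status: string;
  // ...other fields from the unified data model
}

// The same request works whether the connection points at SAP, NetSuite,
// or Dynamics; only the bearer token differs per connection.
async function getPurchaseOrders(apiToken: string): Promise<PurchaseOrder[]> {
  const response = await fetch(`${BASE_URL}/purchase-orders`, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```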
All API requests require authentication via a bearer token. After successfully connecting a system through Makini's authentication module, you receive an API token. Include this token in the Authorization header of your requests: `Authorization: Bearer YOUR_API_TOKEN`. Each connection has a unique token, allowing you to manage multiple customer connections independently. Tokens remain valid as long as the underlying system credentials are valid and the connection is active. If a customer changes their system credentials, you'll need to reconnect to obtain a new token.
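Because each connection carries its own token, a thin request helper that looks up the correct token per connection keeps multi-tenant code tidy. The sketch below assumes you persist tokens yourself (here in an in-memory map) and uses a hypothetical base URL; only the `Authorization: Bearer ...` header format comes from the documentation above.

```typescript
// connectionId -> API token, populated when each customer completes Makini's
// authentication flow. In production this would live in your own datastore.
const tokensByConnection = new Map<string, string>();

async function makiniRequest(connectionId: string, path: string): Promise<unknown> {
  const token = tokensByConnection.get(connectionId);
  if (!token) {
    throw new Error(`No token stored for connection ${connectionId}; reconnect the system.`);
  }
  const response = await fetch(`https://api.makini.example${path}`, {
    headers: { Authorization: `Bearer ${token}` }, // required on every request
  });
  if (response.status === 401) {
    // Token no longer valid, e.g. the customer's system credentials changed.
    throw new Error(`Connection ${connectionId} needs to be reauthorized.`);
  }
  return response.json();
}
```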
When customers change their system credentials, the existing Makini connection will lose access and workflows will begin failing with authentication errors. Makini provides webhook notifications when connections require reauthorization, allowing you to proactively notify customers. Customers can reconnect by logging into the system through Makini's authentication flow again, which issues a new API token. The reconnection process takes only a few minutes. Best practice is to implement connection health monitoring and automated alerts when connections require attention, so you can address issues before they impact operations.
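A minimal sketch of a webhook handler that reacts to a reauthorization notice is shown below. The event type name, payload fields, and the monitoring and notification helpers are hypothetical and should be checked against the payload Makini actually sends; they only illustrate the "flag the connection, then alert the customer" pattern.

```typescript
// Assumed webhook payload shape for illustration.
interface MakiniWebhookEvent {
  type: string;         // e.g. "connection.reauthorization_required" (assumed name)
  connectionId: string;
}

async function handleMakiniWebhook(event: MakiniWebhookEvent): Promise<void> {
  if (event.type === "connection.reauthorization_required") {
    // Flag the connection as unhealthy so dependent workflows can pause
    // instead of failing repeatedly with authentication errors.
    await markConnectionUnhealthy(event.connectionId);
    // Proactively ask the customer to run Makini's authentication flow again.
    await notifyCustomer(
      event.connectionId,
      "Your system connection needs to be re-authorized. Please log in again.",
    );
  }
}

// Hypothetical integration points into your own monitoring and messaging.
async function markConnectionUnhealthy(connectionId: string): Promise<void> { /* ... */ }
async function notifyCustomer(connectionId: string, message: string): Promise<void> { /* ... */ }
```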
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets—our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
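The sketch below shows the batching pattern described above for API-based bulk writes: fixed-size batches, a delay between batches, and per-record error logging so failed records can be identified and retried. The batch size, delay, and the caller-supplied `writeRecord` function are illustrative starting points, not prescribed values.

```typescript
const BATCH_SIZE = 50;        // within the 50-100 records per batch suggested above
const BATCH_DELAY_MS = 1000;  // pause between batches to avoid overwhelming the target system

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function writeInBatches<T>(
  records: T[],
  writeRecord: (record: T) => Promise<void>,
): Promise<{ failed: { record: T; error: unknown }[] }> {
  const failed: { record: T; error: unknown }[] = [];

  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE);

    // Settle the whole batch so one bad record doesn't abort the others.
    const results = await Promise.allSettled(batch.map(writeRecord));
    results.forEach((result, index) => {
      if (result.status === "rejected") {
        // Record exactly which item failed so it can be inspected or retried.
        failed.push({ record: batch[index], error: result.reason });
      }
    });

    if (i + BATCH_SIZE < records.length) {
      await sleep(BATCH_DELAY_MS);
    }
  }
  return { failed };
}
```

For very large jobs, the same helper can be driven from a background queue rather than an interactive request, which matches the asynchronous processing pattern mentioned above.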
