



All API requests require authentication via bearer token. After successfully connecting a system through Makini's authentication module, you receive an API token. Include this token in the Authorization header of your requests: `Authorization: Bearer YOUR_API_TOKEN`. Each connection has a unique token, allowing you to manage multiple customer connections independently. Tokens remain valid as long as the underlying system credentials are valid and the connection is active. If a customer changes their system credentials, you'll need to reconnect to obtain a new token.
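For illustration, here is a minimal sketch of an authenticated request in Python using the `requests` library; the base URL and endpoint path are placeholders rather than Makini's documented routes:

```python
import requests

# Minimal sketch of an authenticated request. The base URL and path
# below are placeholders, not Makini's documented routes.
API_TOKEN = "YOUR_API_TOKEN"  # token issued when the connection is created

response = requests.get(
    "https://api.example.com/purchase-orders",  # placeholder endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```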
You can trigger syncs manually through both the API and the Makini dashboard. The API provides endpoints to initiate syncs for specific entities (purchase orders, work orders, etc.) on a given connection. Manual syncs are useful when you need immediate data updates outside the regular schedule, when onboarding new customers, or when recovering from sync failures. Manual syncs follow the same incremental logic as scheduled syncs, retrieving only changed records since the last successful sync. You can also trigger full re-syncs that ignore the last sync timestamp and retrieve all records within the configured historical period.
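As a sketch of what triggering a manual sync might look like, the snippet below assumes a hypothetical `/syncs` endpoint, payload shape, and incremental/full mode flag; these are illustrative assumptions, not Makini's actual API:

```python
import requests

# Hypothetical sketch of triggering a manual sync for one entity type on a
# connection. Endpoint and payload shape are assumptions, not the real API.
def trigger_sync(token: str, entity: str, full_resync: bool = False) -> dict:
    """Start a manual sync; full_resync=True ignores the last sync timestamp."""
    response = requests.post(
        "https://api.example.com/syncs",  # placeholder endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={
            "entity": entity,  # e.g. "purchase-orders" or "work-orders"
            "mode": "full" if full_resync else "incremental",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: incremental sync of purchase orders on this connection
# trigger_sync("YOUR_API_TOKEN", "purchase-orders")
```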
Makini supports create, read, update, and delete (CRUD) operations, though availability varies by system and entity type. Most systems support creating and updating core entities like purchase orders, work orders, and inventory items. Read operations are universally supported across all entity types. Delete operations are less commonly supported due to system constraints—many industrial systems use soft deletes or status changes rather than true deletion. Update operations may be limited to specific fields depending on system configuration and business rules. For example, some systems prevent modifying purchase orders after approval. We recommend validating specific operation support for your use case during the technical deep dive.
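Because operation support varies by system and entity, it can help to detect unsupported operations and fail gracefully rather than assuming every write will succeed. The sketch below is illustrative only: the PATCH endpoint and the status codes used to signal an unsupported operation are assumptions, not documented behavior:

```python
import requests

# Illustrative sketch: attempt an update and detect when the underlying
# system does not support it. Endpoint and status-code handling are assumed.
def try_update_purchase_order(token: str, po_id: str, fields: dict) -> bool:
    response = requests.patch(
        f"https://api.example.com/purchase-orders/{po_id}",  # placeholder
        headers={"Authorization": f"Bearer {token}"},
        json=fields,
        timeout=30,
    )
    if response.status_code in (405, 501):
        # Operation not supported for this system or entity type.
        return False
    response.raise_for_status()  # raise on other errors (auth, validation)
    return True
```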
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets; our API returns results in pages, with a continuation token for fetching each subsequent page. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
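As one possible pattern, the sketch below pages through a read endpoint with a continuation token, then writes records back in small batches with delays and per-record error logging. All endpoint paths, response field names (`data`, `next`), the batch size, and the delay are illustrative assumptions, not Makini defaults:

```python
import time
import logging
import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bulk")

BATCH_SIZE = 50    # records per batch (illustrative value)
BATCH_DELAY = 2.0  # seconds between batches (illustrative value)

def fetch_all(token: str, url: str) -> list[dict]:
    """Page through results; the cursor field names are assumptions."""
    items, cursor = [], None
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        params = {"cursor": cursor} if cursor else {}
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        items.extend(page.get("data", []))
        cursor = page.get("next")  # continuation token for the next page
        if not cursor:
            return items

def write_in_batches(token: str, url: str, records: list[dict]) -> None:
    """Write records in small batches, logging each failing record."""
    headers = {"Authorization": f"Bearer {token}"}
    for start in range(0, len(records), BATCH_SIZE):
        for record in records[start:start + BATCH_SIZE]:
            try:
                resp = requests.post(url, headers=headers, json=record, timeout=30)
                resp.raise_for_status()
            except requests.RequestException as exc:
                # Record-level logging identifies exactly which writes failed.
                log.error("record %s failed: %s", record.get("id"), exc)
        time.sleep(BATCH_DELAY)  # rate-limit to avoid overwhelming the target
```

For operations beyond this scale, the same record-level error log doubles as a retry queue: failed records can be re-queued and processed in the background rather than blocking the main batch run.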
