




API tokens must be stored securely and should never be exposed on the client side or in public repositories. Store tokens in secure environment variables or a dedicated secrets management system such as AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault. Never hardcode tokens in application code or commit them to version control. Implement proper access controls so only authorized services can read stored tokens. Use separate tokens for production and for development/testing environments. Rotate tokens periodically, and immediately revoke any token you suspect has been compromised. Makini tokens provide access to customer data, so treat them with the same security standards you'd apply to database credentials.
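As a quick illustration, here is a minimal TypeScript sketch of loading a token from an environment variable at startup instead of hardcoding it. The variable name MAKINI_API_TOKEN, the base URL, and the Bearer header scheme are assumptions for the example, not confirmed Makini specifics; adapt them to your deployment and the API reference.

```typescript
// Minimal sketch (Node 18+ for global fetch). MAKINI_API_TOKEN and the
// base URL below are illustrative names, not confirmed Makini specifics.
const MAKINI_BASE_URL = process.env.MAKINI_BASE_URL ?? "https://api.example.com";

function loadMakiniToken(): string {
  const token = process.env.MAKINI_API_TOKEN;
  if (!token) {
    // Fail fast at startup rather than sending unauthenticated requests later.
    throw new Error("MAKINI_API_TOKEN is not set");
  }
  return token;
}

// Attach the token per request; never log it or write it to disk.
async function makiniGet(path: string): Promise<unknown> {
  const response = await fetch(`${MAKINI_BASE_URL}${path}`, {
    headers: { Authorization: `Bearer ${loadMakiniToken()}` },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```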
When customers change their system credentials, the existing Makini connection loses access and workflows begin failing with authentication errors. Makini provides webhook notifications when connections require reauthorization, allowing you to proactively notify customers. Customers reconnect by going through Makini's authentication flow and logging into their system again, which issues a new API token; the process takes only a few minutes. Best practice is to implement connection health monitoring and automated alerts when connections require attention, so you can address issues before they impact operations.
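As a sketch of that proactive monitoring, the Express handler below reacts to a reauthorization webhook. The route path, the event type string, and the connectionId field are assumptions for illustration only; the actual payload schema comes from Makini's webhook documentation.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical webhook receiver for connection-status events. The event
// shape (type, connectionId) is an assumption; consult Makini's webhook
// documentation for the actual schema.
app.post("/webhooks/makini", (req, res) => {
  const event = req.body;

  if (event.type === "connection.reauthorization_required") {
    // Alert the customer and your on-call channel before workflows
    // start failing with authentication errors.
    notifyCustomer(event.connectionId);
  }

  // Acknowledge quickly; do any heavy work asynchronously.
  res.sendStatus(200);
});

function notifyCustomer(connectionId: string): void {
  // Placeholder: send an email or in-app prompt that links the customer
  // back into Makini's authentication flow to issue a new token.
  console.log(`Connection ${connectionId} needs reauthorization`);
}

app.listen(3000);
```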
Makini supports create, read, update, and delete (CRUD) operations, though availability varies by system and entity type. Most systems support creating and updating core entities like purchase orders, work orders, and inventory items. Read operations are universally supported across all entity types. Delete operations are less commonly supported due to system constraints—many industrial systems use soft deletes or status changes rather than true deletion. Update operations may be limited to specific fields depending on system configuration and business rules. For example, some systems prevent modifying purchase orders after approval. We recommend validating specific operation support for your use case during the technical deep dive.
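One way to handle the delete gap in code is to attempt the delete and fall back to a status change (a soft delete) when the system rejects it. The endpoint paths, status codes, and the CANCELLED value below are hypothetical, offered only as a sketch; confirm each system's actual behavior during the technical deep dive.

```typescript
// Hypothetical endpoints and status values; confirm per system.
const BASE_URL = process.env.MAKINI_BASE_URL ?? "https://api.example.com";
const authHeaders = () => ({
  Authorization: `Bearer ${process.env.MAKINI_API_TOKEN}`,
});

async function cancelWorkOrder(id: string): Promise<void> {
  // First try a true delete.
  const del = await fetch(`${BASE_URL}/work-orders/${id}`, {
    method: "DELETE",
    headers: authHeaders(),
  });
  if (del.ok) return;

  // Many industrial systems disallow deletion; fall back to a soft
  // delete by changing the record's status instead.
  if (del.status === 405 || del.status === 501) {
    const patch = await fetch(`${BASE_URL}/work-orders/${id}`, {
      method: "PATCH",
      headers: { ...authHeaders(), "Content-Type": "application/json" },
      body: JSON.stringify({ status: "CANCELLED" }),
    });
    if (!patch.ok) {
      throw new Error(`Soft delete failed with status ${patch.status}`);
    }
    return;
  }
  throw new Error(`Delete failed with status ${del.status}`);
}
```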
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets—our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
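The sketch below shows both patterns under stated assumptions: continuation-token pagination for reads (the items/nextCursor page shape is illustrative, not the API's confirmed response format) and chunked writes with a pause between batches plus per-record error logging. The batch size and delay are starting points to tune against the target system.

```typescript
// Illustrative batch parameters and page shape; tune per target system.
const BATCH_SIZE = 100;      // 50-100 records per batch, per the guidance above
const BATCH_DELAY_MS = 1000; // pause between batches to avoid overwhelming the system

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

interface Page<T> {
  items: T[];
  nextCursor?: string; // continuation token; absent on the last page
}

// Read side: follow continuation tokens until the last page.
async function* fetchAllPages<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): AsyncGenerator<T[]> {
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    yield page.items;
    cursor = page.nextCursor;
  } while (cursor);
}

// Write side: chunk records, settle each batch so one failure doesn't
// abort the rest, and log exactly which records failed.
async function writeInBatches<T>(
  records: T[],
  writeOne: (record: T) => Promise<void>,
): Promise<T[]> {
  const failed: T[] = [];
  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE);
    const results = await Promise.allSettled(batch.map(writeOne));
    results.forEach((result, j) => {
      if (result.status === "rejected") {
        console.error(`Record ${i + j} failed:`, result.reason);
        failed.push(batch[j]);
      }
    });
    if (i + BATCH_SIZE < records.length) await sleep(BATCH_DELAY_MS);
  }
  return failed; // feed into a retry queue or background worker for large runs
}
```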
