




API tokens must be stored securely and should never be exposed client-side or in public repositories. Keep tokens in environment variables or in a dedicated secrets manager such as AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault. Never hardcode tokens in application code or commit them to version control. Apply access controls so that only authorized services can read stored tokens. Use separate tokens for production and for development/testing. Rotate tokens periodically, and revoke a token immediately if you suspect it has been compromised. Makini tokens provide access to customer data, so treat them with the same security standards you would apply to database credentials.
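As a minimal illustration of this pattern, the Node/TypeScript sketch below reads the token from the environment at startup and attaches it server-side as a bearer credential; the variable name MAKINI_API_TOKEN, the base URL, and the apiGet helper are placeholders for this example, not documented Makini values.

```typescript
// Minimal sketch (Node 18+/TypeScript): read the token from the environment
// at startup instead of hardcoding it. MAKINI_API_TOKEN and the base URL
// are placeholder names for this example, not documented Makini values.
const token = process.env.MAKINI_API_TOKEN;
if (!token) {
  throw new Error("MAKINI_API_TOKEN is not set; refusing to start");
}

const API_BASE = "https://api.example.com"; // placeholder base URL

// The token stays server-side: it is attached as a bearer credential here
// and must never be shipped to a browser or committed with this code.
async function apiGet(path: string): Promise<unknown> {
  const res = await fetch(`${API_BASE}${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  return res.json();
}
```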
Makini Flows is our embedded workflow automation platform, built on n8n, which we consider the best workflow automation tool available. It's fully integrated into Makini and runs on our infrastructure. Flows allows you to build complex integration logic using a visual workflow builder—no code required, though code is supported for advanced use cases. Workflows can be triggered by schedules, webhooks, API calls, or events from connected systems. You can perform data transformations, implement conditional logic, call external APIs, and orchestrate multi-step processes. Flows includes over 1,000 pre-built connectors beyond Makini's industrial systems, enabling integrations with databases, messaging platforms, cloud services, and more. Most customer activations are completed using Flows due to its flexibility and ease of use.
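As a rough sketch of the webhook trigger pattern, the example below POSTs a JSON payload to a workflow's webhook URL to start a run; the URL and payload shape are assumptions for illustration, since Flows provides the actual webhook address when you configure the trigger node.

```typescript
// Hypothetical sketch: starting a Flows workflow through a webhook trigger.
// The URL and payload shape are assumptions; Flows provides the real webhook
// address when you configure the trigger node in the workflow builder.
const WEBHOOK_URL = "https://example.com/webhook/order-sync"; // placeholder

async function triggerFlow(payload: Record<string, unknown>): Promise<void> {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) {
    throw new Error(`Flow trigger failed: HTTP ${res.status}`);
  }
}

// Example: kick off the workflow when an order changes in your own system.
triggerFlow({ event: "order.updated", orderId: "12345" }).catch(console.error);
```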
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets—our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
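The sketch below condenses both patterns, paginated reads and batched writes with delays and per-record error logging; the /records path, the continuationToken parameter, and the items/nextToken response fields are illustrative assumptions, not the documented API shape.

```typescript
// Sketch of both bulk patterns with assumed names: the /records path, the
// continuationToken parameter, and the items/nextToken response fields are
// illustrative, not the documented API shape.
const API_BASE = "https://api.example.com"; // placeholder
const headers = {
  Authorization: `Bearer ${process.env.MAKINI_API_TOKEN ?? ""}`,
};

// 1. Paginated reads: follow continuation tokens until the last page.
async function fetchAll(path: string): Promise<unknown[]> {
  const results: unknown[] = [];
  let token: string | undefined;
  do {
    const url = new URL(API_BASE + path);
    if (token) url.searchParams.set("continuationToken", token);
    const page = await (await fetch(url, { headers })).json();
    results.push(...page.items);
    token = page.nextToken;
  } while (token);
  return results;
}

// 2. Batched writes: small batches, a pause between batches, and per-record
//    logging so individual failures within a batch are identifiable.
async function writeInBatches(records: object[], batchSize = 50): Promise<void> {
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    const outcomes = await Promise.allSettled(
      batch.map((record) =>
        fetch(`${API_BASE}/records`, {
          method: "POST",
          headers: { ...headers, "Content-Type": "application/json" },
          body: JSON.stringify(record),
        })
      )
    );
    outcomes.forEach((outcome, j) => {
      if (outcome.status === "rejected") {
        console.error(`record ${i + j} failed:`, outcome.reason);
      } else if (!outcome.value.ok) {
        console.error(`record ${i + j} rejected: HTTP ${outcome.value.status}`);
      }
    });
    await new Promise((resolve) => setTimeout(resolve, 1000)); // delay between batches
  }
}
```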
Write operation limitations vary by system. Common limitations include:

- Field-level restrictions: some fields may be read-only.
- Business rule validation: orders may require certain fields or valid vendor codes.
- Permission requirements: the connected account needs specific permissions.
- Timing restrictions: some systems prevent modifications after certain workflow states.
- Rate limits on write operations.

Custom fields in target systems may not be writable through standard APIs. Some systems also have transactional requirements; for example, purchase order line items must be created in the same transaction as the order header (see the sketch below). During implementation, we identify write operation limitations for your specific use cases and design workflows that work within those constraints.
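As an illustration of the transactional case, the sketch below submits a purchase order header and its line items in a single request and logs business rule validation failures separately; the endpoint path, field names, and the 422 status are assumptions for this example, not Makini's documented schema.

```typescript
// Illustrative sketch of the transactional case: the purchase order header
// and its line items are submitted together in one request. The endpoint
// path, field names, and the 422 status are assumptions for this example,
// not Makini's documented schema.
const API_BASE = "https://api.example.com"; // placeholder

async function createPurchaseOrder(): Promise<void> {
  const res = await fetch(`${API_BASE}/purchase-orders`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MAKINI_API_TOKEN ?? ""}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      vendorCode: "VEND-001", // must pass the target system's validation rules
      lineItems: [
        // created atomically with the header, never afterwards
        { itemNumber: "PART-100", quantity: 10 },
      ],
    }),
  });

  if (res.status === 422) {
    // Business rule validation failed (e.g. an unknown vendor code or a
    // read-only field was set); log the detail instead of retrying blindly.
    console.error("validation error:", await res.json());
  } else if (!res.ok) {
    throw new Error(`write failed: HTTP ${res.status}`);
  }
}
```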
