




Makini's unified API acts as a common denominator across all connected systems. We map each system's data structure to a standardized data model and expose consistent endpoints regardless of the underlying platform. You write the same code to retrieve purchase orders from SAP, NetSuite, or Dynamics: the API calls and response formats are identical, so you always receive data in the same structure and can build consistent business logic on top of it. The unified approach eliminates the need to learn each system's unique API, manage multiple authentication methods, or handle varying data formats.
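As a rough sketch, the call below shows what a unified request for purchase orders might look like; the base URL, path, header names, and response shape are illustrative placeholders, not Makini's documented endpoints.

```python
# Minimal sketch of a unified API call; the host, path, and response shape
# below are assumptions for illustration, not Makini's documented API.
import requests

BASE_URL = "https://api.example-unified.com"  # hypothetical unified API host


def get_purchase_orders(connection_token: str) -> list[dict]:
    """Fetch purchase orders in the unified format, regardless of the backing system."""
    response = requests.get(
        f"{BASE_URL}/purchase-orders",
        headers={"Authorization": f"Bearer {connection_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]  # same shape whether the source is SAP, NetSuite, or Dynamics


# The same function works for any connected system; only the connection token differs.
sap_orders = get_purchase_orders("SAP_CONNECTION_TOKEN")
netsuite_orders = get_purchase_orders("NETSUITE_CONNECTION_TOKEN")
```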
Industrial systems are often heavily customized, and Makini is built to handle this. For reading data, Makini can access virtually any field or custom table in connected systems. Through the connection settings interface, you can specify custom fields, tables, or entities to include in API responses. These show up alongside standard fields in the unified model. For custom objects not in our default model, you can request them through the interface and they'll be available immediately. For writing data, customization support varies by system but covers most common scenarios. During implementation, we work with you to identify required customizations and ensure they're properly configured before going live.
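The snippet below is a hypothetical illustration of how a custom field configured in the connection settings might appear next to standard fields in a response; the record shape and field names are invented for the example.

```python
# Hypothetical unified purchase-order record; "customFields" and the field names
# shown are assumptions for illustration, not a guaranteed response shape.
order = {
    "id": "PO-1001",
    "status": "open",
    "total": 1250.00,
    "customFields": {  # custom fields configured through the connection settings
        "plant_code": "DE-07",
        "approval_level": 2,
    },
}

# Read custom fields defensively, since their presence depends on per-connection configuration.
plant_code = order.get("customFields", {}).get("plant_code")
if plant_code is None:
    # Fall back or log when a configured custom field is missing for this connection.
    print("plant_code not configured for this connection")
else:
    print(f"Routing order {order['id']} to plant {plant_code}")
```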
Testing should cover authentication, data retrieval, data writing, error handling, and workflow logic. Start by connecting a test system through Makini's authentication flow, using sandbox or non-production instances of your target systems when available. Test API calls for each entity type you'll use (purchase orders, work orders, etc.) to verify data mapping and field coverage. Test error scenarios by providing invalid inputs or attempting operations without proper permissions. For workflow-based integrations, test each workflow step independently before testing end-to-end. Confirm that webhooks are delivered and that signature verification works. Test with realistic data volumes to identify performance issues, and include connection failure scenarios to verify that your error handling and retry logic work correctly.
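As one concrete example, here is a sketch of a webhook signature check, assuming an HMAC-SHA256 signature computed over the raw request body with a shared secret; confirm the actual header name and signing scheme against Makini's webhook documentation before relying on it.

```python
# Sketch of verifying a webhook signature; the HMAC-SHA256 scheme over the raw
# body with a shared secret is an assumption, not Makini's confirmed method.
import hashlib
import hmac


def verify_signature(raw_body: bytes, received_signature: str, secret: str) -> bool:
    """Return True if the received signature matches the expected HMAC digest."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, received_signature)


def test_rejects_tampered_payload():
    secret = "test-webhook-secret"
    body = b'{"event":"work_order.updated","id":"WO-42"}'
    good_sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    assert verify_signature(body, good_sig, secret)
    # A modified payload must fail verification.
    assert not verify_signature(b'{"event":"work_order.updated","id":"WO-43"}', good_sig, secret)
```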
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets: our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system, and log errors comprehensively so you can identify which specific records in a batch failed. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
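The sketch below combines continuation-token pagination for reads with small, delayed batches for writes; the parameter names (cursor, next_cursor), batch size, and endpoints are assumptions chosen for illustration rather than documented values.

```python
# Sketch of paginated reads and batched writes against a hypothetical unified API;
# the host, paths, and pagination parameter names are assumptions for illustration.
import time

import requests

BASE_URL = "https://api.example-unified.com"  # hypothetical unified API host


def fetch_all(path: str, token: str) -> list[dict]:
    """Follow continuation tokens until every page has been retrieved."""
    records, cursor = [], None
    while True:
        params = {"cursor": cursor} if cursor else {}
        resp = requests.get(
            f"{BASE_URL}{path}",
            headers={"Authorization": f"Bearer {token}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["data"])
        cursor = payload.get("next_cursor")  # assumed continuation-token field name
        if not cursor:
            return records


def write_in_batches(path: str, token: str, items: list[dict],
                     batch_size: int = 50, delay_seconds: float = 1.0) -> list[dict]:
    """Write records in small batches with a delay, recording failures per record."""
    failures = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        for item in batch:
            resp = requests.post(
                f"{BASE_URL}{path}",
                headers={"Authorization": f"Bearer {token}"},
                json=item,
                timeout=30,
            )
            if not resp.ok:
                # Log exactly which record failed so it can be retried later.
                failures.append({"item": item, "status": resp.status_code, "body": resp.text})
        time.sleep(delay_seconds)  # pause between batches to avoid overwhelming the target system
    return failures
```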
