



Makini is a unified API platform for industrial systems integration. We provide connectivity to over 2,000 ERP, CMMS, and WMS systems through a single, standardized API. Instead of building separate integrations for each system, you connect once to Makini and gain access to all supported platforms. This approach transforms integration projects that typically cost tens of thousands of dollars and take months into a manageable operational expense with deployment times of 1-2 weeks.
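As a rough illustration of the connect-once model, the sketch below fetches work orders through a single unified request shape regardless of which back-end system is connected. The endpoint URL, header names, and response fields here are assumptions for illustration, not Makini's documented API.

```typescript
// Hypothetical sketch: one request shape, regardless of which CMMS/ERP/WMS sits behind it.
// The endpoint path, header name, and response fields are illustrative assumptions.
interface WorkOrder {
  id: string;
  status: string;
  description: string;
}

async function listWorkOrders(connectionId: string, apiKey: string): Promise<WorkOrder[]> {
  const res = await fetch("https://api.example.com/v1/work-orders", {
    headers: {
      Authorization: `Bearer ${apiKey}`,
      // The connected system (SAP, a cloud CMMS, etc.) is selected by the connection, not the code.
      "X-Connection-Id": connectionId,
    },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as WorkOrder[];
}
```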
Connection credits are Makini's billing unit. Each system integration consumes a specific number of credits based on complexity. Systems are divided into three tiers: Tier 1 (simple systems such as cloud CMMS), Tier 2 (mid-complexity ERP systems), and Tier 3 (complex systems such as SAP). On-premises installations require double the credits of their cloud equivalents. For example, a cloud SAP S/4HANA connection might use 4 credits, while an on-premises SAP ECC installation uses 8 credits. Connection credits are consumed when you establish a connection and returned to your pool when you disconnect, which allows flexible allocation across customers: you're not locked into specific connections.
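A minimal sketch of the credit math follows. The Tier 3 values (4 credits cloud, 8 on-premises) follow the SAP example above; the Tier 1 and Tier 2 values are assumptions used only to show the shape of the calculation.

```typescript
// Illustrative credit calculation. Tier 1 and Tier 2 base values are assumptions;
// Tier 3 matches the SAP example (4 credits cloud, 8 on-premises).
type Tier = 1 | 2 | 3;

function connectionCredits(tier: Tier, onPremises: boolean): number {
  const baseByTier: Record<Tier, number> = { 1: 1, 2: 2, 3: 4 }; // assumed per-tier values
  const base = baseByTier[tier];
  return onPremises ? base * 2 : base; // on-premises installations cost double
}

console.log(connectionCredits(3, false)); // cloud SAP S/4HANA -> 4 credits
console.log(connectionCredits(3, true));  // on-premises SAP ECC -> 8 credits
```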
Data synchronization frequency is configurable based on your requirements. For real-time needs, Makini supports webhook-based synchronization where changes trigger immediate updates. For scheduled syncing, common intervals range from every 15 minutes to daily, depending on data volume and business requirements. The initial sync after connecting a system retrieves historical data based on your configuration—typically 30-90 days of historical records. Subsequent syncs are incremental, retrieving only records created or modified since the last sync. Sync frequency doesn't affect pricing. You can also trigger manual syncs on-demand via API when needed for specific workflows.
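The sketch below shows what an incremental sync step might look like: fetch only records modified since the last run, then store the new cursor for the next run. The endpoint, query parameter name, and response shape are assumptions for illustration.

```typescript
// Sketch of an incremental sync: retrieve only records modified since the last run.
// Endpoint, parameter name, and response shape are illustrative assumptions.
interface SyncResult {
  records: unknown[];
  syncedAt: string;
}

async function incrementalSync(apiKey: string, lastSyncedAt: string): Promise<SyncResult> {
  const url = new URL("https://api.example.com/v1/assets");
  url.searchParams.set("modifiedSince", lastSyncedAt); // assumed parameter name
  const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
  if (!res.ok) throw new Error(`Sync failed: ${res.status}`);
  const records = (await res.json()) as unknown[];
  return { records, syncedAt: new Date().toISOString() };
}

// A scheduler (cron job, queue worker, etc.) would call incrementalSync on the chosen
// interval (every 15 minutes to daily) and persist `syncedAt` as the cursor for the next run.
```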
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets—our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
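Putting those recommendations together, the sketch below paginates with a continuation token, writes in batches of 50 with a delay between batches, and logs exactly which records in a batch fail. The endpoint paths, token field, and timing values are assumptions chosen to match the guidance above, not a definitive implementation.

```typescript
// Sketch of batched writes with pagination, throttling, and per-record error logging.
// Endpoints, the continuation-token field, and timing values are illustrative assumptions.
const BATCH_SIZE = 50;        // within the 50-100 records-per-batch guideline
const BATCH_DELAY_MS = 1000;  // pause between batches to avoid overwhelming the target system

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Read a large dataset page by page using a continuation token.
async function* fetchAllPages(apiKey: string): AsyncGenerator<unknown[]> {
  let cursor: string | undefined;
  do {
    const url = new URL("https://api.example.com/v1/work-orders");
    if (cursor) url.searchParams.set("cursor", cursor); // assumed continuation-token parameter
    const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
    if (!res.ok) throw new Error(`Page fetch failed: ${res.status}`);
    const page = (await res.json()) as { items: unknown[]; nextCursor?: string };
    yield page.items;
    cursor = page.nextCursor;
  } while (cursor);
}

// Write records in small batches, logging which individual records fail.
async function writeInBatches(apiKey: string, records: unknown[]): Promise<void> {
  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE);
    const results = await Promise.allSettled(
      batch.map((record) =>
        fetch("https://api.example.com/v1/work-orders", {
          method: "POST",
          headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
          body: JSON.stringify(record),
        }).then((res) => {
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
        })
      )
    );
    // Record exactly which items failed so they can be retried or inspected later.
    results.forEach((result, idx) => {
      if (result.status === "rejected") {
        console.error(`Record ${i + idx} failed:`, result.reason);
      }
    });
    await sleep(BATCH_DELAY_MS); // throttle before the next batch
  }
}
```

For operations spanning thousands of records, the same write loop would typically run inside a background worker fed by a queue rather than inside a request handler.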
