




Have any questions? We're here to help you
What security measures does Makini have in place?
Makini is SOC 2 Type 2 compliant and undergoes penetration testing twice annually. We use industry-standard encryption protocols, including TLS 1.2+ for data in transit and AES-256 for data at rest. Customer credentials are encrypted using secure key management practices. Our infrastructure follows security best practices, including network segmentation, access controls, and regular security audits. For highly regulated industries or customers with strict compliance requirements, we offer self-hosted deployment options that keep all data within your infrastructure. We've successfully met the security requirements of enterprises including financial institutions and government contractors.
What are connection credits and how are they consumed?
Connection credits are Makini's billing unit. Each system integration consumes a set number of credits based on its complexity. Systems fall into three tiers: Tier 1 (simple systems such as cloud CMMS platforms), Tier 2 (mid-complexity ERP systems), and Tier 3 (complex systems like SAP). On-premises installations require double the credits of their cloud equivalents. For example, a cloud SAP S/4HANA connection might use 4 credits, while an on-premises SAP ECC installation uses 8. Credits are consumed when you establish a connection and returned to your pool when you disconnect, so you can allocate them flexibly across customers rather than being locked into specific connections.
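As a rough sketch of the credit arithmetic, assuming hypothetical per-tier values (only the Tier 3 figure comes from the example above; actual values depend on your Makini plan):

```typescript
// Illustrative credit arithmetic; the Tier 1 and Tier 2 cloud values are
// placeholders, while Tier 3 matches the SAP example above (4 credits).
type Tier = 1 | 2 | 3;

const CLOUD_CREDITS: Record<Tier, number> = { 1: 1, 2: 2, 3: 4 };

function creditsFor(tier: Tier, onPremises: boolean): number {
  const base = CLOUD_CREDITS[tier];
  return onPremises ? base * 2 : base; // on-premises doubles the cloud cost
}

console.log(creditsFor(3, false)); // cloud SAP S/4HANA   -> 4 credits
console.log(creditsFor(3, true));  // on-premises SAP ECC -> 8 credits
```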
Does Makini support write operations, or is it read-only?
Yes, Makini supports write operations, including creating, updating, and in some cases deleting records in connected systems. Common write operations include creating purchase orders, updating work order status, modifying inventory levels, and creating vendor records. Write support varies by system and entity type: core entities like purchase orders have comprehensive write support across major systems, while more specialized entities may have limited write support in some systems. Write operations use the same unified API, so the code that creates a purchase order in SAP is identical to the code that creates one in NetSuite. Validate your write requirements during implementation to confirm that your target systems support the operations you need.
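As a minimal sketch of what a unified write can look like, here is a purchase-order creation call. The endpoint path, auth scheme, and payload fields below are assumptions for illustration, not Makini's documented API; the point is that the same request shape works regardless of which system sits behind the connection:

```typescript
// Hypothetical endpoint and payload; consult the Makini API reference for
// the actual routes, headers, and field names.
async function createPurchaseOrder(connectionToken: string) {
  const response = await fetch("https://api.makini.io/purchase-orders", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${connectionToken}`, // assumed per-connection auth
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      vendorId: "VEND-1001", // illustrative unified field names
      lines: [{ itemId: "ITEM-42", quantity: 10, unitPrice: 4.5 }],
    }),
  });
  if (!response.ok) throw new Error(`Write failed: ${response.status}`);
  return response.json(); // same call whether the backend is SAP or NetSuite
}
```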
How should we handle bulk operations or large data volumes?
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets: our API returns results in pages, with continuation tokens for fetching subsequent pages. When writing large volumes of data, break the work into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system, and implement comprehensive error logging so you can identify which specific records in a batch failed. For very large operations (thousands of records), consider asynchronous processing patterns in which you queue operations and process them in the background.
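A minimal sketch of the batching pattern described above: fixed-size batches, a delay between them, and per-record failure capture. The createRecord parameter stands in for whatever write call you use (for example, the purchase-order sketch earlier), and the batch size and delay are starting points to tune against the target system's limits:

```typescript
// Client-side batching with rate limiting and per-record error logging.
const BATCH_SIZE = 50;       // within the 50-100 records/batch guideline
const BATCH_DELAY_MS = 1000; // pause between batches to avoid overload

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function writeInBatches<T>(
  records: T[],
  createRecord: (record: T) => Promise<void>,
): Promise<{ record: T; error: unknown }[]> {
  const failures: { record: T; error: unknown }[] = [];
  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE);
    // Run the whole batch, capturing failures per record instead of aborting.
    const results = await Promise.allSettled(batch.map(createRecord));
    results.forEach((result, j) => {
      if (result.status === "rejected") {
        failures.push({ record: batch[j], error: result.reason });
      }
    });
    if (i + BATCH_SIZE < records.length) await sleep(BATCH_DELAY_MS);
  }
  return failures; // log these, then retry or escalate record by record
}
```

Returning the failed records rather than throwing on the first error keeps a single bad record from aborting the run, which matters once batches reach thousands of records.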
