




Have any questions? We’re here to help you
Makini is SOC 2 Type 2 compliant and undergoes penetration testing twice annually. We use industry-standard encryption protocols including TLS 1.2+ for data in transit and AES-256 for data at rest. Customer credentials are encrypted using secure key management practices. Our infrastructure follows security best practices including network segmentation, access controls, and regular security audits. For highly regulated industries or customers with strict compliance requirements, we offer self-hosted deployment options that keep all data within your infrastructure. We've successfully met security requirements for enterprises including financial institutions and government contractors.
Design your webhook receiver to handle duplicates and out-of-order webhooks, as network issues or retries can cause both scenarios. Keep the receiver lightweight—ideally writing incoming webhooks to a queue or reliable storage—then process them asynchronously. This prevents timeouts and allows your system to handle high-volume webhook spikes. Respond with a 200 status code immediately after receiving the webhook, before processing begins. Implement idempotency by tracking processed webhook IDs and skipping duplicates. Use constant-time comparison for signature verification to prevent timing attacks. If webhook processing fails, log the error but still return 200 to prevent unnecessary retries. Set up monitoring and alerts for webhook failures so you can investigate issues promptly. For critical workflows, combine webhooks with periodic polling as a fallback mechanism.
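Below is a minimal receiver sketch in Python using Flask. It assumes a hypothetical `X-Signature` header carrying an HMAC-SHA256 digest of the raw request body (Makini's actual signing scheme may differ), and the in-memory queue and ID set stand in for the durable queue and store you would use in production:

```python
import hashlib
import hmac
import json
import queue
import threading

from flask import Flask, request

app = Flask(__name__)

WEBHOOK_SECRET = b"your-webhook-secret"  # hypothetical shared signing secret

# In-memory stand-ins; use a durable queue and store in production.
incoming: queue.Queue = queue.Queue()
processed_ids: set = set()

@app.post("/webhooks/makini")
def receive_webhook():
    body = request.get_data()

    # Constant-time comparison prevents timing attacks on the signature.
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    received = request.headers.get("X-Signature", "")  # header name assumed
    if not hmac.compare_digest(expected, received):
        return "invalid signature", 401

    event = json.loads(body)

    # Idempotency: skip webhook IDs we have already accepted.
    if event.get("id") in processed_ids:
        return "", 200
    processed_ids.add(event.get("id"))

    # Acknowledge immediately; real work happens asynchronously.
    incoming.put(event)
    return "", 200

def worker():
    while True:
        event = incoming.get()
        try:
            handle_event(event)
        except Exception as exc:
            # Log the failure; we have already returned 200, so no retry storm.
            print(f"webhook {event.get('id')} failed: {exc}")
        finally:
            incoming.task_done()

def handle_event(event):
    ...  # your business logic

threading.Thread(target=worker, daemon=True).start()
```

Because the endpoint only verifies, deduplicates, and enqueues, it stays fast under webhook spikes; all slow or failure-prone work lives in the worker.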
For bulk operations, we recommend batch processing with appropriate rate limiting and error handling. Makini Flows provides built-in batch processing capabilities with configurable batch sizes, delays between batches, and error handling. For API-based bulk operations, implement pagination when retrieving large datasets—our API returns results in pages with continuation tokens for fetching subsequent pages. When writing large volumes of data, break operations into smaller batches (typically 50-100 records per batch) with delays between batches to avoid overwhelming the target system. Implement comprehensive error logging to identify which specific records fail in a batch. For very large operations (thousands of records), consider asynchronous processing patterns where you queue operations and process them in the background.
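A sketch of the batching pattern in Python; `write_batch` is a placeholder for whatever call writes a batch of records to the target system, and the defaults mirror the batch sizes and delays suggested above:

```python
import logging
import time

def write_in_batches(records, write_batch, batch_size=100, delay_seconds=1.0):
    """Write records in small batches, pausing between batches for rate limiting."""
    failed = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        try:
            write_batch(batch)
        except Exception:
            # Retry one record at a time to identify exactly which ones fail.
            for record in batch:
                try:
                    write_batch([record])
                except Exception as exc:
                    logging.error("record %s failed: %s", record, exc)
                    failed.append(record)
        time.sleep(delay_seconds)  # throttle so the target system isn't overwhelmed
    return failed
```

The returned list of failed records is what you would hand off to a retry queue or manual review in an asynchronous processing setup.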
Makini uses cursor-based pagination for retrieving large datasets. API responses include a `next_cursor` field when additional results are available. To retrieve the next page, include the cursor value in your next request: `GET /api/v1/purchase-orders?cursor=CURSOR_VALUE`. Cursor-based pagination is more reliable than offset-based pagination because it tolerates data changes between requests: if records are added or deleted while you're paginating, you won't miss records or see duplicates. Page size is configurable up to a maximum limit (typically 100-500 records per page, depending on entity type). For optimal performance, use the largest page size your application can handle efficiently. The API response also includes a total count when the source system provides one.
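A Python sketch of the pagination loop; the `cursor` parameter and `next_cursor` field come from the description above, while the Bearer auth header, the `limit` parameter name, and the `data` response envelope are assumptions that may differ from the actual API:

```python
import requests

def iter_purchase_orders(base_url, token, page_size=100):
    """Yield every purchase order by following next_cursor across pages."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"  # auth scheme assumed

    cursor = None
    while True:
        params = {"limit": page_size}  # page-size parameter name assumed
        if cursor:
            params["cursor"] = cursor
        resp = session.get(f"{base_url}/api/v1/purchase-orders", params=params)
        resp.raise_for_status()
        page = resp.json()

        yield from page.get("data", [])  # response envelope assumed

        cursor = page.get("next_cursor")  # absent when no more results
        if not cursor:
            break
```

Because the function is a generator, callers can process records as each page arrives instead of holding the full dataset in memory.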
