Use Cases
-
Mar 23, 2026

Auto Provisioning for B2B SaaS: HRIS-Driven Workflows

Auto provisioning is the automated creation, update, and removal of user accounts when a source system (usually an HRIS, ATS, or identity provider) changes. For B2B SaaS teams, it turns employee lifecycle events into downstream account creation, role assignment, and deprovisioning workflows without manual imports or ticket queues. Knit's Unified API connects HRIS, ATS, and other upstream systems to your product so you can build this workflow without stitching together point-to-point connectors.

If your product depends on onboarding employees, assigning access, syncing identity data, or triggering downstream workflows, provisioning cannot stay manual for long.

That is why auto provisioning matters.

For B2B SaaS, auto provisioning is not just an IT admin feature. It is a core product workflow that affects activation speed, compliance posture, and the day-one experience your customers actually feel. At Knit, we see the same pattern repeatedly: a team starts by manually creating users or pushing CSVs, then quickly runs into delays, mismatched data, and access errors across systems.

In this guide, we cover:

  • What auto provisioning is and how it differs from manual provisioning
  • How an automated provisioning workflow works step by step
  • Which systems and data objects are involved
  • Where SCIM fits — and where it is not enough
  • Common implementation failures
  • When to build in-house and when to use a unified API layer

What is auto provisioning?

Auto provisioning is the automated creation, update, and removal of user accounts and permissions based on predefined rules and source-of-truth data. The provisioning trigger fires when a trusted upstream system — an HRIS, ATS, identity provider, or admin workflow — records a change: a new hire, a role update, a department transfer, or a termination.

That includes:

  • Creating a new user when an employee or customer record is created
  • Updating access when attributes such as team, role, or location change
  • Removing access when the user is deactivated or leaves the organization

This third step — account removal — is what separates a real provisioning system from a simple user-creation script. Provisioning without clean deprovisioning is how access debt accumulates and how security gaps appear after offboarding.

For B2B SaaS products, the provisioning flow typically sits between a source system that knows who the user is, a policy layer that decides what should happen, and one or more downstream apps that need the final user, role, or entitlement state.

Why auto provisioning matters for SaaS products

Provisioning is not just an internal IT convenience.

For SaaS companies, the quality of the provisioning workflow directly affects onboarding speed, time to first value, enterprise deal readiness, access governance, support load, and offboarding compliance. If enterprise customers expect your product to work cleanly with their Workday, BambooHR, or ADP instance, provisioning becomes part of the product experience — not just an implementation detail.

The problem is bigger than "create a user account." It is really about:

  • Using the right source of truth (usually the HRIS, not a downstream app)
  • Mapping user attributes correctly across systems with different schemas
  • Handling role logic without hardcoding rules that break at scale
  • Keeping downstream systems in sync when the source changes
  • Making failure states visible and recoverable

When a new employee starts at a customer's company and cannot access your product on day one, that is a provisioning problem — and it lands in your support queue, not theirs.

How auto provisioning works, step by step

Most automated provisioning workflows follow the same pattern regardless of which systems are involved.

1. A source system changes

The signal may come from an HRIS (a new hire created in Workday, BambooHR, or ADP), an ATS (a candidate hired in Greenhouse or Ashby), a department or role change, or an admin action that marks a user inactive. For B2B SaaS teams building provisioning into their product, the most common source is the HRIS — the system of record for employee status.

2. The system detects the event

The trigger may come from a webhook, a scheduled sync, a polling job, or a workflow action taken by an admin. Most HRIS platforms do not push real-time webhooks natively, which is why Knit provides virtual webhooks that normalize polling into event-style delivery your application can subscribe to.
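As a rough illustration of this detection step, the sketch below maps a raw lifecycle event to a provisioning action. The payload fields (`event_type`, `employment_status`) are invented for the example; real webhook and virtual-webhook payloads vary by provider.

```python
# Minimal sketch of the detection step: classify a raw lifecycle event.
# Field names below are hypothetical, not any vendor's actual schema.

def classify_event(payload: dict) -> str:
    """Map a raw lifecycle event to a provisioning action."""
    event = payload.get("event_type", "").lower()
    status = payload.get("employment_status", "").lower()
    if event == "employee.created":
        return "create"
    if status in ("terminated", "inactive"):
        return "deprovision"
    return "update"
```

Whatever the delivery mechanism, collapsing events into a small action vocabulary like this keeps the downstream workflow independent of any one source system.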

3. User attributes are normalized

Before the action is pushed downstream, the workflow normalizes fields across systems. Common attributes include user ID, email, team, location, department, job title, employment status, manager, and role or entitlement group. This normalization step is where point-to-point integrations usually break — every HRIS represents these fields differently.
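The normalization step can be sketched as a per-provider field mapping onto one canonical schema. The provider names and field names below are illustrative, not the actual Workday or BambooHR schemas.

```python
# Sketch of attribute normalization: provider-specific field names are
# mapped onto one canonical schema. Mappings here are invented examples.

CANONICAL_FIELDS = {
    "provider_a": {"workEmail": "email", "dept": "department",
                   "jobTitle": "title", "empStatus": "status"},
    "provider_b": {"email_address": "email", "team_name": "department",
                   "position": "title", "state": "status"},
}

def normalize(provider: str, record: dict) -> dict:
    """Translate one raw record into the canonical attribute set."""
    mapping = CANONICAL_FIELDS[provider]
    return {canonical: record[raw]
            for raw, canonical in mapping.items() if raw in record}
```

Everything after this step operates only on canonical fields, which is what lets the provisioning rules stay identical across upstream systems.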

4. Provisioning rules are applied

This is where the workflow decides whether to create, update, or remove a user; which role to assign; which downstream systems should receive the change; and whether the action should wait for an approval or additional validation. Keeping this logic outside individual connectors is what makes the system maintainable as rules evolve.
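A minimal sketch of such a rules layer, kept outside the connectors, might look like the following. The department-to-role table is an invented example of a business rule.

```python
# Sketch of a provisioning rules layer. The role table is illustrative.

ROLE_BY_DEPARTMENT = {"engineering": "developer", "sales": "sales_user"}

def decide(user: dict) -> dict:
    """Return the provisioning action (and role) for one normalized user."""
    if user.get("status") == "inactive":
        return {"action": "deprovision"}
    role = ROLE_BY_DEPARTMENT.get(user.get("department", "").lower(), "member")
    action = "create" if user.get("is_new") else "update"
    return {"action": action, "role": role}
```

Because this function only sees normalized attributes, changing a role rule means editing one table rather than touching every connector.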

5. Accounts and access are provisioned downstream

The provisioning layer creates or updates the user in downstream systems and applies app assignments, permission groups, role mappings, team mappings, and license entitlements as defined by the rules.

6. Status and exceptions are recorded

Good provisioning architecture does not stop at "request sent." You need visibility into success or failure state, retry status, partial completion, skipped records, and validation errors. Silent failures are the most common cause of provisioning-related support tickets.
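One lightweight way to make those states explicit is a per-record result object with retry bookkeeping, sketched below; the status vocabulary and retry cap are assumptions, not a prescribed design.

```python
# Sketch of per-record provisioning status so failures are never silent.
from dataclasses import dataclass

@dataclass
class ProvisioningResult:
    user_id: str
    target: str
    status: str = "pending"        # pending | success | failed | skipped
    attempts: int = 0
    error: str = ""

    def record_failure(self, message: str, max_attempts: int = 3) -> bool:
        """Record one failed attempt; return True if a retry is allowed."""
        self.attempts += 1
        self.error = message
        if self.attempts >= max_attempts:
            self.status = "failed"
            return False
        return True
```

Persisting records like this gives support and ops teams something to query before a user files a "cannot log in" ticket.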

7. Deprovisioning is handled just as carefully

When a user becomes inactive in the source system, the workflow should trigger account disablement, entitlement removal, access cleanup, and downstream reconciliation. Provisioning without clean deprovisioning creates a security problem and an audit problem later. This step is consistently underinvested in projects that focus only on new-user creation.

Systems and data objects involved

Provisioning typically spans more than two systems. Understanding which layer owns what is the starting point for any reliable architecture.

Layer | Common systems | What they contribute
Source of truth | HRIS, ATS, admin panel, CRM, customer directory | Who the user is and what changed
Identity / policy layer | IdP, IAM, role engine, workflow service | Access logic, group mapping, entitlements
Target systems | SaaS apps, internal tools, product tenants, file systems | Where the user and permissions need to exist
Monitoring layer | Logs, alerting, retry queue, ops dashboard | Visibility into failures and drift

The most important data objects are usually: user profile, employment or account status, team or department, location, role, manager, entitlement group, and target app assignment.

When a SaaS product needs to pull employee data or receive lifecycle events from an HRIS, the typical challenge is that each HRIS exposes these objects through a different API schema. Knit's Unified HRIS API normalizes these objects across 60+ HRIS and payroll platforms so your provisioning logic only needs to be written once.

Manual vs. automated provisioning

Approach | What it looks like | Main downside
Manual provisioning | Admins create users one by one, upload CSVs, or open tickets | Slow, error-prone, and hard to audit
Scripted point solution | A custom job handles one source and one target | Works early, but becomes brittle as systems and rules expand
Automated provisioning | Events, syncs, and rules control create/update/remove flows | Higher upfront design work, far better scale and reliability

Manual provisioning breaks first in enterprise onboarding. The more users, apps, approvals, and role rules involved, the more expensive manual handling becomes. Enterprise buyers — especially those running Workday or SAP — will ask about automated provisioning during the sales process and block deals where it is missing.

Where SCIM fits in an automated provisioning strategy

SCIM (System for Cross-domain Identity Management) is a standard protocol used to provision and deprovision users across systems in a consistent way. When both the identity provider and the SaaS application support SCIM, it can automate user creation, attribute updates, group assignment, and deactivation without custom integration code.

But SCIM is not the whole provisioning strategy for most B2B SaaS products. Even when SCIM is available, teams still need to decide what the real source of truth is, how attributes are mapped between systems, how roles are assigned from business rules rather than directory groups, how failures are retried, and how downstream systems stay in sync when SCIM is not available.

The more useful question is not "do we support SCIM?" It is: do we have a reliable provisioning workflow across the HRIS, ATS, and identity systems our customers actually use? For teams building that workflow across many upstream platforms, Knit's Unified API reduces that to a single integration layer instead of per-platform connectors.

SAML auto provisioning vs. SCIM

SAML and SCIM are often discussed together but solve different problems. SAML handles authentication — it lets users log into your application via their company's identity provider using SSO. SCIM handles provisioning — it keeps the user accounts in your application in sync with the identity provider over time. SAML auto provisioning (sometimes called JIT provisioning) creates a user account on first login; SCIM provisioning creates and manages accounts in advance, independently of whether the user has logged in.

For enterprise customers, SCIM is generally preferred because it handles pre-provisioning, attribute sync, group management, and deprovisioning. JIT provisioning via SAML creates accounts reactively and cannot handle deprovisioning reliably on its own.
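For concreteness, deprovisioning over SCIM uses a standard PatchOp request body (RFC 7644) that sets `active` to false; this is exactly the lifecycle step JIT provisioning cannot perform, since it only acts when a user logs in.

```python
# The SCIM 2.0 PatchOp body (RFC 7644) used to deactivate a user.

def scim_deactivate_payload() -> dict:
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {"op": "replace", "path": "active", "value": False},
        ],
    }

# Delivered as: PATCH /scim/v2/Users/{id}
```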

Common implementation failures

Provisioning projects fail in familiar ways.

The wrong source of truth. If one system says a user is active and another says they are not, the workflow becomes inconsistent. HRIS is almost always the right source for employment status — not the identity provider, not the product itself.

Weak attribute mapping. Provisioning logic breaks when fields like department, manager, role, or location are inconsistent across systems. This is the most common cause of incorrect role assignment in enterprise accounts.

No visibility into failures. If a provisioning job fails silently, support only finds out when a user cannot log in or cannot access the right resources. Observability is not optional.

Deprovisioning treated as an afterthought. Teams often focus on new-user creation and underinvest in access removal — exactly where audit and security issues surface. Every provisioning build should treat deprovisioning as a first-class requirement.

Rules that do not scale. A provisioning script that works for one HRIS often becomes unmanageable when you add more target systems, role exceptions, conditional approvals, and customer-specific logic. Abstraction matters early.

Native integrations vs. unified APIs for provisioning

When deciding how to build an automated provisioning workflow, SaaS teams typically evaluate three approaches:

Native point-to-point integrations mean building a separate connector for each HRIS or identity system. This offers maximum control but creates significant maintenance overhead as each upstream API changes its schema, authentication, or rate limits.

Embedded iPaaS platforms (like Workato or Tray.io embedded) let you compose workflows visually. These work well for internal automation but add a layer of operational complexity when the workflow needs to run reliably inside a customer-facing SaaS product.

Unified API providers like Knit normalize many upstream systems into a single API endpoint. You write the provisioning logic once and it works across all connected HRIS, ATS, and other platforms. This is particularly effective when provisioning depends on multiple upstream categories — HRIS for employee status, ATS for new hire events, identity providers for role mapping. See how Knit compares to other approaches in our Native Integrations vs. Unified APIs guide.

Auto provisioning and AI agents

As SaaS products increasingly use AI agents to automate workflows, provisioning becomes a data access question as well as an account management question. An AI agent that needs to look up employee data, check role assignments, or trigger onboarding workflows needs reliable access to HRIS and ATS data in real time.

Knit's MCP Servers expose normalized HRIS, ATS, and payroll data to AI agents via the Model Context Protocol — giving agents access to employee records, org structures, and role data without custom tooling per platform. This extends the provisioning architecture into the AI layer: the same source-of-truth data that drives user account creation can power AI-assisted onboarding workflows, access reviews, and anomaly detection. Read more in Integrations for AI Agents.

When to build auto provisioning in-house

Building in-house can make sense when the number of upstream systems is small (one or two HRIS platforms), the provisioning rules are deeply custom and central to your product differentiation, your team is comfortable owning long-term maintenance of each upstream API, and the workflow is narrow enough that a custom solution will not accumulate significant edge-case debt.

When to use a unified API layer

A unified API layer typically makes more sense when customers expect integrations across many HRIS, ATS, or identity platforms; the same provisioning pattern repeats across customer accounts with different upstream systems; your team wants faster time to market on provisioning without owning per-platform connector maintenance; and edge cases — authentication changes, schema updates, rate limits — are starting to spread work across product, engineering, and support.

This is especially true when provisioning depends on multiple upstream categories. If your provisioning workflow needs HRIS data for employment status, ATS data for new hire events, and potentially CRM or accounting data for account management, a Unified API reduces that to a single integration contract instead of three or more separate connectors.

Final takeaway

Auto provisioning is not just about creating users automatically. It is about turning identity and account changes in upstream systems — HRIS, ATS, identity providers — into a reliable product workflow that runs correctly across every customer's tech stack.

For B2B SaaS, the quality of that workflow affects onboarding speed, support burden, access hygiene, and enterprise readiness. The real standard is not "can we create a user." It is: can we provision, update, and deprovision access reliably across the systems our customers already use — without building and maintaining a connector for every one of them?

Frequently asked questions

What is auto provisioning?
Auto provisioning is the automatic creation, update, and removal of user accounts and access rights when a trusted source system changes — typically an HRIS, ATS, or identity provider. In B2B SaaS, it turns employee lifecycle events into downstream account creation, role assignment, and deprovisioning workflows without manual imports or admin tickets.

What is the difference between SAML auto provisioning and SCIM?
SAML handles authentication — it lets users log into an application via SSO. SCIM handles provisioning — it keeps user accounts in sync with the identity provider over time, including pre-provisioning and deprovisioning. SAML JIT provisioning creates accounts on first login; SCIM manages the full account lifecycle independently of login events. For enterprise use cases, SCIM is the stronger approach for reliability and offboarding coverage.

What is the main benefit of automated provisioning?
The main benefit is reliability at scale. Automated provisioning eliminates manual import steps, reduces access errors from delayed updates, ensures deprovisioning happens when users leave, and makes the provisioning workflow auditable. For SaaS products selling to enterprise customers, it also removes a common procurement blocker.

How does HRIS-driven provisioning work?
HRIS-driven provisioning uses employee data changes in an HRIS (such as Workday, BambooHR, or ADP) as the trigger for downstream account actions. When a new employee is created in the HRIS, the provisioning workflow fires to create accounts, assign roles, and onboard the user in downstream SaaS applications. When the employee leaves, the same workflow triggers deprovisioning. Knit's Unified HRIS API normalizes these events across 60+ HRIS and payroll platforms.

What is the difference between provisioning and deprovisioning?
Provisioning creates and configures user access. Deprovisioning removes or disables it. Both should be handled by the same workflow — deprovisioning is not an edge case. Incomplete deprovisioning is the most common cause of access debt and audit failures in SaaS products.

Does auto provisioning require SCIM?
No. SCIM is one mechanism for automating provisioning, but many HRIS platforms and upstream systems do not support SCIM natively. Automated provisioning can be built using direct API integrations, webhooks, or scheduled sync jobs. Knit provides virtual webhooks for HRIS platforms that do not support native real-time events, allowing provisioning workflows to be event-driven without requiring SCIM from every upstream source.

When should a SaaS team use a unified API for provisioning instead of building native connectors?
A unified API layer makes more sense when the provisioning workflow needs to work across many HRIS or ATS platforms, the same logic should apply regardless of which system a customer uses, and maintaining per-platform connectors would spread significant engineering effort. Knit's Unified API lets SaaS teams write provisioning logic once and deploy it across all connected platforms, including Workday, BambooHR, ADP, Greenhouse, and others.

Want to automate provisioning faster?

If your team is still handling onboarding through manual imports, ticket queues, or one-off scripts, it is usually a sign that the workflow needs a stronger integration layer.

Knit connects SaaS products to HRIS, ATS, payroll, and other upstream systems through a single Unified API — so provisioning and downstream workflows do not turn into connector sprawl as your customer base grows.

Use Cases
-
Sep 26, 2025

Payroll Integrations for Leasing and Employee Finance

Introduction

In today's fast-evolving business landscape, companies are streamlining employee financial offerings, particularly in payroll-linked payments and leasing solutions. These include auto-leasing programs, payroll-based financing, and other benefits designed to enhance employee financial well-being.

By integrating directly with an organization’s Human Resources Information System (HRIS) and payroll systems, solution providers can offer a seamless experience that benefits both employers (B2B) and employees (B2C). This guide explores the importance of payroll integration, challenges businesses face, and best practices for implementing scalable solutions, with insights drawn from the B2B auto-leasing sector.

Why Payroll Integrations Matter for Leasing and Financial Benefits

Payroll-linked leasing and financing offer key advantages for companies and employees:

  • Seamless Employee Benefits – Employees gain access to tax savings, automated lease payments, and simplified financial management.
  • Enhanced Compliance – Automated approval workflows ensure compliance with internal policies and external regulations.
  • Reduced Administrative Burden – Automatic data synchronization eliminates manual processes for HR and finance teams.
  • Improved Employee Experience – A frictionless process, such as automatic payroll deductions for lease payments, enhances job satisfaction and retention.

Common Challenges in Payroll Integration

Despite its advantages, integrating payroll-based solutions presents several challenges:

  • Diverse HR/Payroll Systems – Companies use various HR platforms (e.g., Workday, SAP SuccessFactors, BambooHR, or in some cases custom/bespoke solutions), making integration complex and costly.
  • Data Security & Compliance – Employers must ensure sensitive payroll and employee data are securely managed to meet regulatory requirements.
  • Legacy Infrastructure – Many enterprises rely on outdated, on-prem HR systems, complicating real-time data exchange.
  • Approval Workflow Complexity – Ensuring HR, finance, and management approvals in a unified dashboard requires structured automation.

Key Use Cases for Payroll Integration

Integrating payroll systems into leasing platforms enables:

  • Employee Verification – Confirm employment status, salary, and tenure directly from HR databases.
  • Automated Approvals – Centralized dashboards allow HR and finance teams to approve or reject leasing requests efficiently.
  • Payroll-Linked Deductions – Automate lease or financing payments directly from employee payroll to prevent missed payments.
  • Offboarding Triggers – Notify leasing providers of employee exits to handle settlements or lease transfers seamlessly.

End-to-End Payroll Integration Workflow

A structured payroll integration process typically follows these steps:

  1. Employee Requests Leasing Option – Employees select a lease program via a self-service portal.
  2. HR System Verification – The system validates employment status, salary, and tenure in real-time.
  3. Employer Approval – HR or finance teams review employee data and approve or reject requests.
  4. Payroll Setup – Approved leases are linked to payroll for automated deductions.
  5. Automated Monthly Deductions – Lease payments are deducted from payroll, ensuring financial consistency.
  6. Offboarding & Final Settlements – If an employee exits, the system triggers any required final payments.
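Step 5 above (automated monthly deductions) can be sketched as a small, testable function. The 30% cap on deductions as a share of gross pay is an invented policy for illustration, not a regulatory rule.

```python
# Sketch of the payroll-deduction step: compute one pay cycle's lease
# deduction. The cap percentage is an illustrative policy, not a rule.

def lease_deduction(gross_pay: float, monthly_lease: float,
                    max_pct_of_gross: float = 0.30) -> float:
    """Deduct the lease payment, capped at a share of gross pay."""
    cap = round(gross_pay * max_pct_of_gross, 2)
    return min(monthly_lease, cap)
```

Keeping this financial logic in one place, rather than inside each HR connector, is what the "Secure Financial Logic" best practice below argues for.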

Best Practices for Implementing Payroll Integration

To ensure a smooth and efficient integration, follow these best practices:

  • Use a Unified API Layer – Instead of integrating separately with each HR system, employ a single API to streamline updates and approvals.
  • Optimize Data Syncing – Transfer only necessary data (e.g., employee ID, salary) to minimize security risks and data load.
  • Secure Financial Logic – Keep payroll deductions, financial calculations, and approval workflows within a secure, scalable microservice.
  • Plan for Edge Cases – Adapt for employees with variable pay structures or unique deduction rules to maintain flexibility.

Key Technical Considerations

A robust payroll integration system must address:

  • Data Security & Compliance – Ensure compliance with GDPR, SOC 2, ISO 27001, or local data protection regulations.
  • Real-time vs. Batch Updates – Choose between real-time synchronization or scheduled batch processing based on data volume.
  • Cloud vs. On-Prem Deployments – Consider hybrid approaches for enterprises running legacy on-prem HR systems.
  • Authentication & Authorization – Implement secure authentication (e.g., SSO, OAuth2) for employer and employee access control.

Recommended Payroll Integration Architecture

A high-level architecture for payroll integration includes:

┌────────────────┐   ┌─────────────────┐
│ HR System      │   │ Payroll         │
│(Cloud/On-Prem) │ → │(Deduction Logic)│
└────────────────┘   └─────────────────┘
       │ (API/Connector)
       ▼
┌──────────────────────────────────────────┐
│ Unified API Layer                        │
│ (Manages employee data & payroll flow)   │
└──────────────────────────────────────────┘
       │ (Secure API Integration)
       ▼
┌───────────────────────────────────────────┐
│ Leasing/Finance Application Layer         │
│ (Approvals, User Portal, Compliance)      │
└───────────────────────────────────────────┘

A single API integration that connects various HR systems enables scalability and flexibility. Solutions like Knit offer pre-built integrations with 40+ HRMS and payroll systems, reducing complexity and development costs.

Actionable Next Steps

To implement payroll-integrated leasing successfully, follow these steps:

  • Assess HR System Compatibility – Identify whether your target clients use cloud-based or on-prem HRMS.
  • Define Data Synchronization Strategy – Determine if your solution requires real-time updates or periodic batch processing.
  • Pilot with a Mid-Sized Client – Test a proof-of-concept integration with a client using a common HR system.
  • Leverage Pre-Built API Solutions – Consider platforms like Knit for simplified connectivity to multiple HR and payroll systems.

Conclusion

Payroll-integrated leasing solutions provide significant advantages for employers and employees but require well-planned, secure integrations. By leveraging a unified API layer and automating approval workflows and payroll deductions, businesses can streamline operations while enhancing employee financial wellness.

For companies looking to reduce overhead and accelerate implementation, adopting a pre-built API solution can simplify payroll integration while allowing them to focus on their core leasing offerings. Now is the time to map out your integration strategy, define your data requirements, and build a scalable solution that transforms the employee leasing experience.

Ready to implement a seamless payroll-integrated leasing solution? Take the next step today by exploring unified API platforms and optimizing your HR-tech stack for maximum efficiency. To talk to our solutions experts at Knit, you can reach out to us here.

Use Cases
-
Sep 26, 2025

Streamline Ticketing and Customer Support Integrations

How to Streamline Customer Support Integrations

Introduction

Seamless CRM and ticketing system integrations are critical for modern customer support software. However, developing and maintaining these integrations in-house is time-consuming and resource-intensive.

In this article, we explore how Knit’s Unified API simplifies customer support integrations, enabling teams to connect with multiple platforms—HubSpot, Zendesk, Intercom, Freshdesk, and more—through a single API.

Why Efficient Integrations Matter for Customer Support

Customer support platforms depend on real-time data exchange with CRMs and ticketing systems. Without seamless integrations:

  • Support agents struggle with disconnected systems, slowing response times.
  • Customers experience delays, leading to poor service experiences.
  • Engineering teams spend valuable resources on custom API integrations instead of product innovation.

A unified API solution eliminates these issues, accelerating integration processes and reducing ongoing maintenance burdens.

Challenges of Building Customer Support Integrations In-House

Developing custom integrations comes with key challenges:

  • Long Development Timelines – Every CRM or ticketing tool has unique API requirements, leading to weeks of work per integration.
  • Authentication Complexities – OAuth-based authentication requires security measures that add to engineering overhead.
  • Data Structure Variations – Different platforms organize data differently, making normalization difficult.
  • Ongoing Maintenance – APIs frequently update, requiring continuous monitoring and fixes.
  • Scalability Issues – Scaling across multiple platforms means repeating the integration process for each new tool.

Use Case: Automating Video Ticketing for Customer Support

Consider, for example, a company offering video-assisted customer support where users can record and send videos along with support tickets. Its integration requirements include:

  1. Creating a Video Ticket – Associating video files with support requests.
  2. Fetching Ticket Data – Automatically retrieving ticket and customer details from Zendesk, Intercom, or HubSpot.
  3. Attaching Video Links to Support Conversations – Embedding video URLs into CRM ticket histories.
  4. Syncing Customer Data – Keeping user information updated across integrated platforms.

With Knit’s Unified API, these steps become significantly simpler.

How Knit’s Unified API Simplifies Customer Support Integrations

By leveraging Knit’s single API interface, companies can automate workflows and reduce development time. Here’s how:

  1. User Records a Video → System captures the ticket/conversation ID.
  2. Retrieve Ticket Details → Fetch customer and ticket data via Knit’s API.
  3. Attach the Video Link → Use Knit’s API to append the video link as a comment on the ticket.
  4. Sync Customer Data → Auto-update customer records across multiple platforms.
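Step 3 of the workflow above could be sketched as a single request builder against a unified API. The endpoint path, payload shape, and header names below are hypothetical placeholders, not Knit's actual API reference.

```python
# Sketch of attaching a video link as a ticket comment via a unified
# API. Endpoint path and payload fields are hypothetical examples.
import json
from urllib import request

def attach_video_comment(base_url: str, token: str,
                         ticket_id: str, video_url: str) -> request.Request:
    """Build the POST request that appends a video link to a ticket."""
    body = json.dumps({"ticketId": ticket_id,
                       "comment": f"Customer video: {video_url}"}).encode()
    req = request.Request(f"{base_url}/ticketing/comments", data=body,
                          method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req
```

Because the unified layer normalizes the comment model, the same call works whether the ticket lives in Zendesk, Intercom, or HubSpot.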

Knit’s Ticketing API Suite for Developers

Knit provides pre-built ticketing APIs to simplify integration with customer support systems.

Best Practices for a Smooth Integration Experience

For a successful integration, follow these best practices:

  • Utilize Knit’s Unified API – Avoid writing separate API logic for each platform.
  • Leverage Pre-built Authentication Components – Simplify OAuth flows using Knit’s built-in UI.
  • Implement Webhooks for Real-time Syncing – Automate updates instead of relying on manual API polling.
  • Handle API Rate Limits Smartly – Use batch processing and pagination to optimize API usage.
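The rate-limit practice in the last bullet can be sketched as a generic retry wrapper with exponential backoff. The `RuntimeError` below stands in for whatever exception your HTTP client raises on a 429 response; it is an assumption for the example, not a specific library's API.

```python
# Sketch of rate-limit-aware retries: exponential backoff with a cap.
import time

def with_backoff(call, max_attempts=4, base_delay=0.5):
    """Retry `call`, doubling the delay after each rate-limit failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:          # stand-in for an HTTP 429 error
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Combined with batching and pagination, backoff like this keeps bulk syncs inside each platform's rate limits without manual tuning per provider.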

Technical Considerations for Scalability

  • Pass-through Queries – If Knit doesn’t support a specific endpoint, developers can pass through direct API calls.
  • Optimized API Usage – Cache ticket and customer data to reduce frequent API calls.
  • Custom Field Support – Knit allows easy mapping of CRM-specific data fields.

How to Get Started with Knit

  1. Sign Up on Knit’s Developer Portal.
  2. Integrate the Universal API to connect multiple CRMs and ticketing platforms.
  3. Use Pre-built Authentication components for user authorization.
  4. Deploy Webhooks for automated updates.
  5. Monitor & Optimize integration performance.

Streamline your customer support integrations with Knit and focus on delivering a world-class support experience!


📞 Need expert advice? Book a consultation with our team. Find time here
Developers
-
Apr 19, 2026

API Pagination Stability: How to Avoid Duplicates, Gaps, and Cursor Drift (2026)

If you are looking to unlock 40+ HRIS and ATS integrations with a single API key, check out Knit API. If not, keep reading.

Note: This is a part of our series on API Pagination where we solve common developer queries in detail with common examples and code snippets. Please read the full guide here where we discuss page size, error handling, pagination stability, caching strategies and more.

Ensure that the pagination remains stable and consistent between requests. Newly added or deleted records should not affect the order or positioning of existing records during pagination. This ensures that users can navigate through the data without encountering unexpected changes.

5 ways for pagination stability

To ensure that API pagination remains stable and consistent between requests, follow these guidelines:

1. Use a stable sorting mechanism

If you're implementing sorting in your pagination, ensure that the sorting mechanism remains stable. 

This means that when multiple records have the same value for the sorting field, their relative order should not change between requests. 

For example, if you sort by the "date" field, make sure that records with the same date always appear in the same order.
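The usual fix is to add a unique, immutable column as a tiebreaker in the sort key, so ties on the primary field cannot reorder between requests:

```python
# Records sharing a "date" are further ordered by immutable "id",
# making the overall order fully deterministic across requests.
records = [
    {"id": 3, "date": "2026-01-02"},
    {"id": 1, "date": "2026-01-02"},
    {"id": 2, "date": "2026-01-01"},
]

stable = sorted(records, key=lambda r: (r["date"], r["id"]))
```

In SQL terms this is the difference between `ORDER BY date` and `ORDER BY date, id`.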

2. Avoid changing data order

Avoid making any changes to the order or positioning of records during pagination, unless explicitly requested by the API consumer.

If new records are added or existing records are modified, they should not disrupt the pagination order or cause existing records to shift unexpectedly.

3. Use unique and immutable identifiers

It's good practice to use unique and immutable identifiers for the records being paginated.

This ensures that even if the data changes, the identifiers remain constant, allowing consistent pagination. The identifier can be a primary key or another unique value associated with each record.

4. Handle record deletions gracefully

If a record is deleted between paginated requests, it should not affect the pagination order or cause missing records. 

Ensure that the deletion of a record does not leave a gap in the pagination sequence.

For example, if record X is deleted, subsequent requests should not suddenly skip to record Y without any explanation.

5. Use deterministic pagination techniques

Employ pagination techniques that offer deterministic results. Techniques like cursor-based pagination or keyset pagination, where the pagination is based on specific attributes like timestamps or unique identifiers, provide stability and consistency between requests.
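As a sketch of why keyset pagination stays deterministic, the snippet below (using Python's built-in sqlite3 and an illustrative posts table) filters strictly after the last-seen id instead of using OFFSET, so a concurrent insert cannot shift later pages:

```python
import sqlite3

# Keyset pagination over an in-memory SQLite table. Table and column
# names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 8)])

def fetch_page(after_id, limit=3):
    # Page boundary is a value ("id > ?"), not a row position.
    return conn.execute(
        "SELECT id, title FROM posts WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, limit),
    ).fetchall()

page1 = fetch_page(after_id=0)                 # ids 1, 2, 3
conn.execute("INSERT INTO posts VALUES (0, 'backfilled')")  # concurrent insert
page2 = fetch_page(after_id=page1[-1][0])      # still ids 4, 5, 6 — no shift
print([r[0] for r in page2])  # → [4, 5, 6]
```

With OFFSET-based pagination, the backfilled row would have shifted every subsequent page boundary; here the second page is unaffected.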

Also Read: 5 caching strategies to improve API pagination performance

Frequently Asked Questions

What is pagination stability in APIs?

Pagination stability means a client paginating through a dataset gets consistent, complete results — no duplicates, no missing records — even if the underlying data is modified during the pagination session. Stable pagination is critical for integration sync use cases where completeness matters. Unstable pagination — most commonly caused by offset on mutable data — is one of the most frequent but hardest-to-debug data integrity issues in API integrations. Knit builds pagination stability into its sync engine using cursor-based and keyset pagination with checkpointing, so concurrent writes to platforms like Workday, BambooHR, or SAP SuccessFactors don't corrupt in-progress data fetches.

Why does offset pagination produce inconsistent results?

Offset pagination produces inconsistent results because it defines page boundaries by row position (skip N, return M) rather than by a stable record pointer. If a record is inserted into the dataset after page 1 is fetched, every record shifts forward by one — the record pushed from page 1 into page 2 territory gets skipped. Deletes cause the reverse: records shift backward and appear twice. Offset is only reliable for truly static datasets where no inserts, updates, or deletes occur between pagination requests. For any live dataset, cursor-based or keyset pagination is the correct approach.

How do you implement stable cursor-based pagination?

Stable cursor-based pagination requires three things: a stable sort field (an indexed column like id or created_at that doesn't change once set), a cursor that encodes the last-seen value of that field (typically base64-encoded to prevent client manipulation), and a query that filters strictly after that value rather than using OFFSET. The server returns the cursor for the last record in each page; the client passes it back as the after parameter on the next request. To handle concurrent inserts, sort by a monotonically increasing field — auto-increment id is the most reliable, or a combination of created_at and id for tie-breaking when timestamps collide.
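A minimal sketch of the encode/decode side, assuming a simple JSON-wrapped id as the cursor payload (field names are illustrative):

```python
import base64
import json

# Opaque cursor encoding: the server wraps the last-seen id in
# base64-encoded JSON so clients treat it as an opaque token.
def encode_cursor(last_id):
    raw = json.dumps({"id": last_id}).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_cursor(token):
    padded = token + "=" * (-len(token) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))["id"]

token = encode_cursor(42)
assert decode_cursor(token) == 42
# On the next request the server queries: WHERE id > 42 ORDER BY id LIMIT n
```

Base64-encoding the cursor discourages clients from constructing or manipulating it by hand, which lets you change the cursor's internal format later without breaking integrations.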

What is keyset pagination and when should I use it?

Keyset pagination (also called seek pagination) filters results using the actual values of one or more indexed columns rather than a row count offset. Instead of "skip 10,000 rows", a keyset query says "return records where id > 10000 ORDER BY id LIMIT 100". This is dramatically faster on large tables because the database uses an index seek rather than a full scan. Use keyset pagination when your dataset has millions of records, you need consistent performance across all pages (not just early ones), or deep pagination is a common access pattern. The main limitation is that it doesn't support jumping to an arbitrary page by number — access is sequential.

How do you handle pagination when records are deleted mid-sync?

Deletes mid-sync are only a problem with offset pagination — cursor and keyset pagination are unaffected because they don't depend on row position. If you must use offset, mitigate deletes by: fetching in reverse order (newest first) so deletes push records toward earlier already-fetched pages; using soft-deletes where records are marked deleted but not removed, filtering them out after fetching; or using a change-data-capture approach where you consume a log of inserts, updates, and deletes rather than paginating the live table. For integration sync, delta-based fetching — pulling only records modified since the last sync, including delete events — avoids the full re-pagination problem entirely.

What is cursor drift and how do you prevent it?

Cursor drift occurs when the sort field used for cursor pagination is not truly stable — for example, using updated_at as the cursor field when records can be re-updated between page requests. If a record from page 1 gets its updated_at timestamp bumped while you're fetching page 3, it will reappear in a later page (paginating by ascending updated_at) or be skipped (if descending). Prevent cursor drift by paginating on immutable fields: auto-increment id is the most reliable, or a combination of created_at and id for tie-breaking. If you need both creation-order and modification-order access, expose separate cursor-paginated endpoints for each rather than trying to serve both with one cursor.
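The drift scenario can be reproduced in a few lines of plain Python — the dataset and field names here are illustrative:

```python
# Simulation of cursor drift: paginating on a mutable field (updated_at)
# re-surfaces a record that is bumped mid-pagination, while paginating on
# the immutable id stays consistent.
records = [{"id": i, "updated_at": i} for i in range(1, 7)]

def page_by(field, after, limit=2):
    hits = sorted((r for r in records if r[field] > after),
                  key=lambda r: r[field])
    return hits[:limit]

# Fetch page 1 by updated_at (ids 1 and 2), then bump record 1.
page1 = page_by("updated_at", after=0)
records[0]["updated_at"] = 10   # record id=1 re-updated mid-pagination

# A later page by updated_at returns id=1 a second time — a duplicate.
drifted = page_by("updated_at", after=6)

# The same scenario paginated by immutable id is unaffected.
stable = page_by("id", after=2)

print([r["id"] for r in drifted])  # → [1]  (already-seen record reappears)
print([r["id"] for r in stable])   # → [3, 4]
```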

Developers
-
Apr 19, 2026

Common API Pagination Errors and How to Fix Them (2026)

Note: This is a part of our series on API Pagination where we solve common developer queries in detail with common examples and code snippets. Please read the full guide here where we discuss page size, error handling, pagination stability, caching strategies and more.

It is important to account for edge cases such as reaching the end of the dataset and invalid or out-of-range page requests, and to handle these errors gracefully.

Always provide informative error messages and proper HTTP status codes to guide API consumers in handling pagination-related issues.

How to handle common errors and invalid requests in API pagination

Here are some key considerations for handling edge cases and error conditions in a paginated API:

1. Out-of-range page requests

When an API consumer requests a page that is beyond the available range, it is important to handle this gracefully. 

Return an informative error message indicating that the requested page is out of range and provide relevant metadata in the response to indicate the maximum available page number.

2. Invalid pagination parameters

Validate the pagination parameters provided by the API consumer. Check that the values are within acceptable ranges and meet any specific criteria you have defined. If the parameters are invalid, return an appropriate error message with details on the issue.

3. Handling empty result sets

If a paginated request results in an empty result set, indicate this clearly in the API response. Include metadata that indicates the total number of records and the fact that no records were found for the given pagination parameters. 

This helps API consumers understand that there are no more pages or data available.

4. Server errors and exception handling

Handle server errors and exceptions gracefully. Implement error handling mechanisms to catch and handle unexpected errors, ensuring that appropriate error messages and status codes are returned to the API consumer. Log any relevant error details for debugging purposes.

5. Rate limiting and throttling

Consider implementing rate limiting and throttling mechanisms to prevent abuse or excessive API requests. 

Enforce sensible limits to protect the API server's resources and ensure fair access for all API consumers. Return specific error responses (e.g., HTTP 429 Too Many Requests) when rate limits are exceeded.

6. Clear and informative error messages

Provide clear and informative error messages in the API responses to guide API consumers when errors occur. 

Include details about the error type, possible causes, and suggestions for resolution if applicable. This helps developers troubleshoot and address issues effectively.

7. Consistent error handling approach

Establish a consistent approach for error handling throughout your API. Follow standard HTTP status codes and error response formats to ensure uniformity and ease of understanding for API consumers.

For example, consider the following API in Django

from django.http import JsonResponse
from django.views.decorators.http import require_GET

from myapp.models import Post  # adjust this import to wherever your Post model lives

POSTS_PER_PAGE = 10

@require_GET
def get_posts(request):
    # Retrieve and validate the pagination parameter from the request
    try:
        page = int(request.GET.get('page', 1))
    except ValueError:
        return JsonResponse({'error': 'Invalid page parameter. Must be an integer.'}, status=400)

    # Retrieve sorting parameter from the request
    sort_by = request.GET.get('sort_by', 'date')

    # Retrieve filtering parameter from the request
    filter_by = request.GET.get('filter_by', None)

    # Get the total count of posts (example value; in practice use Post.objects.count())
    total_count = 100

    # Calculate pagination details
    total_pages = (total_count + POSTS_PER_PAGE - 1) // POSTS_PER_PAGE

    # Handle out-of-range page requests before doing any further work
    if page < 1 or page > total_pages:
        error_message = 'Invalid page number. Page out of range.'
        return JsonResponse({'error': error_message}, status=400)

    next_page = page + 1 if page < total_pages else None
    prev_page = page - 1 if page > 1 else None

    # Retrieve posts based on pagination, sorting, and filtering parameters
    posts = retrieve_posts(page, sort_by, filter_by)

    # Handle empty result set: still return pagination metadata so the
    # consumer knows no records matched
    if not posts:
        return JsonResponse({'data': [], 'pagination': {'total_records': total_count, 'current_page': page,
                                                        'total_pages': total_pages, 'next_page': next_page,
                                                        'prev_page': prev_page}}, status=200)

    # Construct the API response
    response = {
        'data': posts,
        'pagination': {
            'total_records': total_count,
            'current_page': page,
            'total_pages': total_pages,
            'next_page': next_page,
            'prev_page': prev_page
        }
    }

    return JsonResponse(response, status=200)

def retrieve_posts(page, sort_by, filter_by):
    # Fetch posts from the database based on pagination, sorting, and filtering parameters
    offset = (page - 1) * POSTS_PER_PAGE
    query = Post.objects.all()

    # Add sorting condition
    if sort_by == 'date':
        query = query.order_by('-date')
    elif sort_by == 'title':
        query = query.order_by('title')

    # Add filtering condition
    if filter_by:
        query = query.filter(category=filter_by)

    # Apply pagination via slicing (translates to LIMIT/OFFSET in SQL)
    return list(query[offset:offset + POSTS_PER_PAGE])

8. Consider an alternative

If you work with a large number of APIs but do not want to deal with pagination or error handling yourself, consider a unified API solution like Knit. You connect to the unified API only once, and all the authorization, authentication, rate limiting, pagination — everything is taken care of by the unified API while you enjoy seamless access to data from more than 50 integrations.

Sign up for Knit today to try it out yourself in our sandbox environment (getting started with us is completely free).

Frequently Asked Questions

What are common API pagination errors?

The most common API pagination errors are: invalid or expired cursor tokens (the client retries a cursor that has timed out), missing records due to offset drift (inserts between pages shift results, silently skipping records), duplicate records on consecutive pages (a record updated between requests appears twice), out-of-range page requests returning 400 or empty responses, and inconsistent total counts when the dataset is modified mid-pagination. The root cause of most pagination bugs is using offset on mutable data — switching to cursor-based or keyset pagination eliminates the majority of these issues. Knit handles these edge cases internally when syncing from enterprise HRIS and ATS platforms, retrying expired cursors and surfacing sync errors clearly rather than silently dropping records.

Why are records missing from paginated API responses?

Missing records in paginated API responses are almost always caused by offset pagination on a dataset that was modified between page requests. When a record is deleted from page 1 after you've fetched it, every subsequent record shifts one position forward - the first record of page 2 is now the last record of page 1, and your client skips it entirely. The fix is to switch to cursor-based or keyset pagination, which uses a stable pointer that doesn't shift when records are inserted or deleted. If you must use offset, fetch records in reverse chronological order so insertions push records toward earlier already-fetched pages rather than creating gaps later.

How do you handle an invalid or expired pagination cursor?

When a pagination cursor expires or becomes invalid, the API should return a clear error — typically HTTP 400 with a descriptive code like cursor_expired or invalid_cursor — rather than silently returning wrong results. On the client side, handle this by restarting pagination from the beginning or from the last known good checkpoint, depending on whether your use case tolerates re-fetching records. Set cursor TTLs based on realistic client behaviour — cursors that expire in minutes will frustrate developers paginating large datasets. Knit implements automatic cursor retry and pagination checkpointing when syncing from enterprise APIs, so a single expired cursor doesn't trigger a full resync.
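A sketch of the client-side recovery pattern, with the API and its expiry behaviour simulated in-process (a real server would return HTTP 400 with a machine-readable cursor_expired code):

```python
# Client-side handling for an expired pagination cursor: retry from the
# last known-good checkpoint instead of restarting the whole sync.
class CursorExpired(Exception):
    pass

DATA = list(range(1, 11))
_expired_once = {"done": False}

def fetch(after, limit=4):
    # Simulate exactly one cursor expiry when resuming after record 4.
    if after == 4 and not _expired_once["done"]:
        _expired_once["done"] = True
        raise CursorExpired()
    return [x for x in DATA if x > after][:limit]

def sync():
    checkpoint = 0   # last record safely processed
    results = []
    while True:
        try:
            page = fetch(checkpoint)
        except CursorExpired:
            continue  # retry from the checkpoint, not from the beginning
        if not page:
            return results
        results.extend(page)
        checkpoint = page[-1]   # advance checkpoint only after processing

print(sync())  # all ten records arrive exactly once, despite the mid-sync expiry
```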

What HTTP status codes should a paginated API return for errors?

Paginated APIs should use standard HTTP status codes: 400 for invalid pagination parameters (bad page number, malformed cursor, page size exceeding maximum), 404 if the resource being paginated no longer exists, 422 for semantically invalid parameters (negative offset, zero page size), and 429 for rate limit exceeded on rapid page-through requests. Avoid returning 200 with an empty results array for genuinely invalid requests — it masks errors from clients. Always include a machine-readable error code in the response body alongside the human-readable message, so clients can programmatically distinguish cursor_expired from invalid_page_size without parsing strings.

How do you handle duplicate records in paginated API responses?

Duplicate records across paginated responses occur when offset pagination is used on a dataset where records can move between pages due to concurrent writes. The reliable fix is cursor-based or keyset pagination, where each page starts from a stable pointer that doesn't shift. If you cannot change the pagination method, track seen record IDs on the client and deduplicate before processing — but this is a workaround, not a fix. Knit uses cursor-based pagination internally to prevent duplicates when syncing employee records from platforms like Workday and BambooHR, where the underlying dataset changes continuously. If sort order can change mid-pagination, document this explicitly so integrators know to expect and handle duplicates.

Why does my paginated API return a 400 error for large page numbers?

APIs that return 400 errors for large page numbers are enforcing a maximum offset or page depth limit. Deep pagination with offset (e.g. OFFSET 10,000,000) is expensive on the database — it requires scanning and discarding millions of rows before returning results, and many APIs cap this to protect performance. If you need to access deep into a large dataset, the correct approach is cursor-based pagination, which fetches records from a stable pointer rather than skipping rows. If you're building an API and need to support deep access, implement cursor or keyset pagination and document the maximum supported offset clearly in your API reference.

Developers
-
Apr 19, 2026

5 API Pagination Techniques You Must Know (2026)

Note: This is a part of our series on API Pagination where we solve common developer queries in detail with common examples and code snippets. Please read the full guide here where we discuss page size, error handling, pagination stability, caching strategies and more.

There are several common API pagination techniques that developers employ to implement efficient data retrieval. Here are a few useful ones you must know:

1. Offset and Limit Pagination

This technique involves using two parameters: "offset" and "limit." The "offset" parameter determines the starting point or position in the dataset, while the "limit" parameter specifies the maximum number of records to include on each page.

For example, an API request could include parameters like "offset=0" and "limit=10" to retrieve the first 10 records.

GET /api/posts?offset=0&limit=10

2. Cursor-Based Pagination

Instead of relying on numeric offsets, cursor-based pagination uses a unique identifier or token to mark the position in the dataset. The API consumer includes the cursor value in subsequent requests to fetch the next page of data.

This approach ensures stability when new data is added or existing data is modified. The cursor can be based on various criteria, such as a timestamp, a primary key, or an encoded representation of the record.

For example - GET /api/posts?cursor=eyJpZCI6MX0

In the above API request, the cursor value `eyJpZCI6MX0` represents the identifier of the last fetched record. This request retrieves the next page of posts after that specific cursor.
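To make the cursor less mysterious, here is that exact token decoded with Python's standard library — it is just base64-encoded JSON with the padding stripped:

```python
import base64
import json

# Decode the example cursor: eyJpZCI6MX0 is base64 for a small JSON
# object carrying the last-fetched record's id.
token = "eyJpZCI6MX0"
padded = token + "=" * (-len(token) % 4)   # restore stripped base64 padding
cursor = json.loads(base64.urlsafe_b64decode(padded))
print(cursor)  # → {'id': 1}
```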

3. Page-Based Pagination

Page-based pagination involves using a "page" parameter to specify the desired page number. The API consumer requests a specific page of data, and the API responds with the corresponding page, typically along with metadata such as the total number of pages or total record count. 

This technique simplifies navigation and is often combined with other parameters like "limit" to determine the number of records per page.

For example - GET /api/posts?page=2&limit=20

In this API request, we are requesting the second page, where each page contains 20 posts.

4. Time-Based Pagination

In scenarios where data has a temporal aspect, time-based pagination can be useful. It involves using time-related parameters, such as "start_time" and "end_time", to specify a time range for retrieving data. 

This technique enables fetching data in chronological or reverse-chronological order, allowing for efficient retrieval of recent or historical data.

For example - GET /api/events?start_time=2023-01-01T00:00:00Z&end_time=2023-01-31T23:59:59Z

Here, this request fetches events that occurred between January 1, 2023, and January 31, 2023, based on their timestamp.

5. Keyset Pagination

Keyset pagination relies on sorting and using a unique attribute or key in the dataset to determine the starting point for retrieving the next page. 

For example, if the data is sorted by a timestamp or an identifier, the API consumer includes the last seen timestamp or identifier as a parameter to fetch the next set of records. This technique ensures efficient retrieval of subsequent pages without duplication or missing records.

To further simplify this, consider an API request GET /api/products?last_key=XYZ123. Here, XYZ123 represents the last seen key or identifier. The request retrieves the next set of products after the one with the key XYZ123.

Also read: 7 ways to handle common errors and invalid requests in API pagination

FAQs

What is API pagination?

API pagination is a technique for splitting large datasets into smaller, sequential chunks (pages) so clients can retrieve them incrementally rather than fetching everything at once. Without pagination, a single API request on a large dataset can time out, exhaust memory, or return millions of records the client doesn't need. Pagination controls - like page numbers, offsets, or cursors - let the client request exactly the range of data it needs, keeping response times fast and server load manageable.

What are pagination techniques?

The main API pagination techniques are: offset and limit (skip N records, return the next M), page-based (request page 3 of 10), cursor-based (use an opaque pointer to the last-seen record), time-based (fetch records created/updated after a given timestamp), and keyset/seek pagination (filter by the value of a sortable indexed column). Each suits different use cases - cursor-based is best for real-time feeds and large datasets, offset works for simple sorted results, and time-based is ideal for incremental data sync.

What are the different types of pagination?

The five most common types are:

(1) Offset pagination - uses offset and limit parameters, simple to implement but degrades on large datasets due to full table scans;

(2) Page-based pagination - uses page and per_page, conceptually simple but has the same performance limitations as offset;

(3) Cursor-based pagination - uses an opaque cursor token pointing to the last record, stable and performant even on large or frequently-updated datasets;

(4) Time-based pagination - fetches records within a time window using since and until parameters;

(5) Keyset pagination - filters by the value of an indexed column, combining the stability of cursors with direct SQL efficiency.

How to do pagination on API?

To implement pagination on an API: choose a pagination style (offset, cursor, or keyset depending on your dataset size and update frequency), add the relevant query parameters to your GET endpoint (e.g. ?limit=100&offset=0 or ?after=cursor_token), return pagination metadata in the response (total count, next cursor or next page URL), and handle the last page by returning an empty next cursor or a has_more: false flag. On the client side, follow the next link or cursor in each response until no further pages are returned.
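Those client-side steps can be sketched as a small loop, with the paginated API simulated in-process (names like has_more and next are illustrative conventions, not a specific API's contract):

```python
# A client following next cursors until has_more is false.
DATA = list(range(1, 26))

def fetch_page(after=0, limit=10):
    # Simulated server endpoint: returns one page plus pagination metadata.
    page = [x for x in DATA if x > after][:limit]
    return {
        "data": page,
        "next": page[-1] if len(page) == limit else None,
        "has_more": len(page) == limit,
    }

def fetch_all():
    results, cursor = [], 0
    while True:
        resp = fetch_page(after=cursor)
        results.extend(resp["data"])
        if not resp["has_more"]:
            return results
        cursor = resp["next"]   # follow the server-provided cursor

print(len(fetch_all()))  # → 25
```

Note that the client never computes page numbers itself; it only follows what the server hands back, which is what keeps the loop correct across all pagination styles.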

What are the advantages of cursor-based pagination over offset pagination?

Cursor-based pagination has three key advantages over offset:

- Stability - if records are inserted or deleted between page requests, offset pagination skips or duplicates records; cursors point to a specific position so page boundaries remain consistent;

- Performance - offset pagination requires the database to scan and discard all preceding rows, which is slow on large tables; cursor-based queries use indexed lookups;

- Consistency at scale - cursor pagination works reliably on datasets with millions of records where offset becomes prohibitively slow.

The tradeoff is that cursor pagination doesn't support random page access or total record counts as easily.

What are API pagination best practices?

Key best practices: use cursor-based or keyset pagination for large or frequently-updated datasets rather than offset; always return a next cursor or link in the response so clients don't need to calculate the next page themselves; set a sensible default and maximum page size (e.g. default 100, max 1000) to prevent unbounded requests; include a has_more boolean or empty next to signal the final page clearly; use consistent parameter names (limit, after, before) so clients don't need to re-learn the interface per endpoint; and document the pagination model explicitly, since different endpoints on the same API sometimes use different styles.

When should I use time-based pagination?

Time-based pagination is best suited for incremental data sync use cases - where you want to fetch only records created or updated after a specific timestamp, rather than fetching all records from scratch on each run. It's commonly used in webhook alternative patterns, audit log retrieval, and integration sync loops. The main limitation is that it assumes records have reliable, indexed created_at or updated_at timestamps, and it can miss records if clock skew or delayed writes cause them to land before the since boundary.
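A sketch of an incremental sync loop that mitigates the clock-skew caveat with a small overlap window plus id-based deduplication — record shapes and the overlap size are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Time-based incremental sync with an overlap window: re-fetch a short
# window before the last sync point so late-arriving rows with slightly
# older timestamps are not missed, then dedupe by id.
RECORDS = [
    {"id": 1, "updated_at": datetime(2026, 1, 1, 10, 0)},
    {"id": 2, "updated_at": datetime(2026, 1, 1, 11, 0)},
    {"id": 3, "updated_at": datetime(2026, 1, 1, 12, 0)},
]

def fetch_since(since):
    # Simulated time-based API: all records updated at or after `since`.
    return [r for r in RECORDS if r["updated_at"] >= since]

def incremental_sync(last_sync, seen, overlap=timedelta(minutes=5)):
    fresh = [r for r in fetch_since(last_sync - overlap)
             if r["id"] not in seen]
    seen.update(r["id"] for r in fresh)
    return fresh

seen = set()
first = incremental_sync(datetime(2026, 1, 1, 9, 0), seen)    # all three
second = incremental_sync(datetime(2026, 1, 1, 12, 0), seen)  # overlap, but deduped
print(len(first), len(second))  # → 3 0
```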

How does API pagination affect integration performance?

Pagination style significantly affects integration performance. Offset pagination becomes slow on large tables and can produce inconsistent results under concurrent writes - a common problem when syncing employee data from HRIS platforms that update frequently. Cursor-based pagination is more reliable for integration sync loops because it handles insertions and deletions between pages gracefully. When building integrations against third-party APIs, always check which pagination style they use and implement retry logic with backoff for rate-limited page requests. Knit manages all kinds of pagination for you when you're running syncs on Knit so you don't have to worry about how different apps might behave.

Product
-
Mar 29, 2026

Top 5 Nango Alternatives

5 Best Nango Alternatives for Streamlined API Integration

Are you in the market for Nango alternatives that can power your API integration solutions? In this article, we’ll explore five top platforms—Knit, Merge.dev, Apideck, Paragon, and Tray Embedded—and dive into their standout features, pros, and cons. Discover why Knit has become the go-to option for B2B SaaS integrations, helping companies simplify and secure their customer-facing data flows.

TL;DR


Nango is an open-source embedded integration platform that helps B2B SaaS companies quickly connect various applications via a single interface. Its streamlined setup and developer-friendly approach can accelerate time-to-market for customer-facing integrations. However, coverage is somewhat limited compared to broader unified API platforms—particularly those offering deeper category focus and event-driven architectures.

Nango also relies heavily on open source communities for adding new connectors, which makes connector scaling less predictable for complex or niche use cases.

Pros (Why Choose Nango):

  • Straightforward Setup: Shortens integration development cycles with a simplified approach.
  • Developer-Centric: Offers documentation and workflows that cater to engineering teams.
  • Embedded Integration Model: Helps you provide native integrations directly within your product.

Cons (Challenges & Limitations):

  • Limited Coverage Beyond Core Apps: May not support the full depth of specialized or industry-specific APIs.
  • Standardized Data Models: With Nango you have to create your own standard data models, which involves a learning curve and isn't as straightforward as prebuilt unified APIs like Knit or Merge.
  • Opaque Pricing: While Nango is free to build with and has low initial pricing, very limited support is provided at first, and if you need support you may have to move to their enterprise plans.

Now let’s look at a few Nango alternatives you can consider for scaling your B2B SaaS integrations, each with its own unique blend of coverage, security, and customization capabilities.

1. Knit

Knit - How it compares as a nango alternative

Overview
Knit is a unified API platform specifically tailored for B2B SaaS integrations. By consolidating multiple applications—ranging from CRM to HRIS, Recruitment, Communication, and Accounting—via a single API, Knit helps businesses reduce the complexity of API integration solutions while improving efficiency. See how Knit compares directly to Nango →

Key Features

  • Bi-Directional Sync: Offers both reading and writing capabilities for continuous data flow.
  • Secure - Event-Driven Architecture: Real-time, webhook-based updates ensure no end-user data is stored, boosting privacy and compliance.
  • Developer-Friendly: Streamlined setup and comprehensive documentation shorten development cycles.

Pros

  • Simplified Integration Process: Minimizes the need for multiple APIs, saving development time and maintenance costs.
  • Enhanced Security: Event-driven design eliminates data-storage risks, reinforcing privacy measures.
  • New Integrations Support: Knit enables you to build your own APIs in minutes, or builds new integrations for you in a couple of days, so you can scale with confidence.

2. Merge.dev

Overview
Merge.dev delivers unified APIs for crucial categories like HR, payroll, accounting, CRM, and ticketing systems—making it a direct contender among top Nango alternatives.

Key Features

  • Extensive Pre-Built Integrations: Quickly connect to a wide range of platforms.
  • Unified Data Model: Ensures consistent and simplified data handling across multiple services.

Pros

  • Time-Saving: Unified APIs cut down deployment time for new integrations.
  • Simplified Maintenance: Standardized data models make updates easier to manage.

Cons

  • Limited Customization: The one-size-fits-all data model may not accommodate every specialized requirement.
  • Data Constraints: Large-scale data needs may exceed the platform’s current capacity.
  • Pricing: Merge's platform fee might be steep for mid-sized businesses.

3. Apideck

Overview
Apideck offers a suite of API integration solutions that give developers access to multiple services through a single integration layer. It’s well-suited for categories like HRIS and ATS.

Key Features

  • Unified API Layer: Simplifies data exchange and management.
  • Integration Marketplace: Quickly browse available integrations for faster adoption.

Pros

  • Broad Coverage: A diverse range of APIs ensures flexibility in integration options.
  • User-Friendly: Caters to both developers and non-developers, reducing the learning curve.

Cons

  • Limited Depth in Categories: May lack the robust granularity needed for certain specialized use cases.

4. Paragon

Overview
Paragon is an embedded integration platform geared toward building and managing customer-facing integrations for SaaS businesses. It stands out with its visual workflow builder, enabling lower-code solutions.

Key Features

  • Low-Code Workflow Builder: Drag-and-drop functionality speeds up integration creation.
  • Pre-Built Connectors: Quickly access popular services without extensive coding.

Pros

  • Accessibility: Allows team members of varying technical backgrounds to design workflows.
  • Scalability: Flexible infrastructure accommodates growing businesses.

Cons

  • May Not Support Complex Integrations: Highly specialized needs might require additional coding outside the low-code environment.

5. Tray Embedded

Overview
Tray Embedded is another formidable competitor in the B2B SaaS integrations space. It leverages a visual workflow builder to enable embedded, native integrations that clients can use directly within their SaaS platforms.

Key Features

  • Visual Workflow Editor: Allows for intuitive, drag-and-drop integration design.
  • Extensive Connector Library: Facilitates quick setup across numerous third-party services.

Pros

  • Flexibility: The visual editor and extensive connectors make it easy to tailor integrations to unique business requirements.
  • Speed: Pre-built connectors and templates significantly reduce setup time.

Cons

  • Complexity for Advanced Use Cases: Handling highly custom scenarios may require development beyond the platform’s built-in capabilities.

Conclusion: Why Knit Is a Leading Nango Alternative

When searching for Nango alternatives that offer a streamlined, secure, and B2B SaaS-focused integration experience, Knit stands out. Its unified API approach and event-driven architecture protect end-user data while accelerating the development process. For businesses seeking API integration solutions that minimize complexity, boost security, and enhance scalability, Knit is a compelling choice.

Interested in trying Knit? - Contact us for a personalized demo and see how Knit can simplify your B2B SaaS integrations
Product
-
Mar 29, 2026

Finch API Vs Knit API - What Unified HR API is Right for You?

Whether you are a SaaS founder, BD, CX, or tech person, you know how crucial data safety is to closing important deals. If your customer senses even the slightest risk to their internal data, it could be the end of all potential or existing collaboration with you.

But ensuring complete data safety — especially when you need to integrate with multiple 3rd party applications to ensure smooth functionality of your product — can be really challenging. 

While a unified API makes it easier to build integrations faster, not all unified APIs work the same way. 

In this article, we will explore the different data sync strategies adopted by different unified APIs, using Finch API and Knit as examples — their mechanisms, their differences, and which one you should go for if you are looking for a unified API solution.

Let’s dive deeper.

But before that, let us first revisit the primary components of a unified API and how exactly they make building integration easier.

How does a unified API work?

As we have mentioned in our detailed guide on Unified APIs,  

“A unified API aggregates several APIs within a specific category of software into a single API and normalizes data exchange. Unified APIs add an additional abstraction layer to ensure that all data models are normalized into a common data model of the unified API which has several direct benefits to your bottom line”.

The mechanism of a unified API can be broken down into 4 primary elements — 

  • Authentication and authorization
  • Connectors (1:Many)
  • Data syncs 
  • Ongoing integration management

1. Authentication and authorization

Every unified API — whether it's Finch API, Merge API or Knit API — follows certain protocols (such as OAuth) to help your end users authenticate and authorize your SaaS application's access to the 3rd party apps they already use.
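To make this concrete, here is a minimal sketch of the first step of an OAuth authorization-code flow: building the URL an end user is redirected to in order to grant access. The endpoint, client ID, and scope below are hypothetical placeholders, not any specific provider's values.

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint; every provider names this differently.
AUTHORIZE_URL = "https://auth.example-hris.com/oauth/authorize"

def build_authorization_url(client_id, redirect_uri, scopes, state):
    """Build the URL the end user visits to grant your app access."""
    params = {
        "response_type": "code",   # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,            # CSRF protection; verify it on the callback
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

url = build_authorization_url(
    "my-app", "https://app.example.com/callback", ["employees:read"], "xyz123"
)
```

After the user approves, the provider redirects back with a short-lived code that your backend exchanges for an access token.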

2. Connectors 

Not all apps within a single category of software have the same data models. As a result, SaaS developers often spend a great deal of time and effort understanding and building upon each specific data model.

A unified API standardizes all these different data models into a single common data model (also called a 1:many connector) so SaaS developers only need to understand the nuances of one connector provided by the unified API and integrate with multiple third party applications in half the time. 
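As a rough sketch, here is what such a 1:many normalization layer does internally. The two provider payload shapes below are simplified illustrations, not real vendor schemas:

```python
# Two hypothetical HRIS providers return employees in different shapes;
# the connector maps both into one common data model.
COMMON_FIELDS = ("id", "first_name", "last_name", "work_email")

def normalize_provider_a(raw):
    # Provider A nests the name and calls the email "workEmail".
    return {
        "id": str(raw["employeeId"]),
        "first_name": raw["name"]["given"],
        "last_name": raw["name"]["family"],
        "work_email": raw["workEmail"],
    }

def normalize_provider_b(raw):
    # Provider B uses flat keys but a different id field.
    return {
        "id": str(raw["worker_id"]),
        "first_name": raw["first_name"],
        "last_name": raw["last_name"],
        "work_email": raw["email"],
    }

a = normalize_provider_a(
    {"employeeId": 42, "name": {"given": "Ada", "family": "Lovelace"}, "workEmail": "ada@acme.com"}
)
b = normalize_provider_b(
    {"worker_id": "7", "first_name": "Alan", "last_name": "Turing", "email": "alan@acme.com"}
)
# Both records now expose the same common data model.
assert set(a) == set(b) == set(COMMON_FIELDS)
```

Your app codes against the common fields once; adding a new provider only means adding one more mapping function on the connector side.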

3. Data Sync

The primary aim of any integration is to ensure smooth and consistent data flow — from the source (the 3rd party app) to your app and back — at all times.

We will discuss different data sync models adopted by Finch API and Knit API in the next section.

4. Ongoing integration management

Every SaaS company knows that maintaining existing integrations takes more time and engineering bandwidth than the monumental task of building them in the first place. That is why most SaaS companies today are looking for unified API solutions with an integration management dashboard — a central place showing the health of all live integrations, any issues, and possible resolutions with RCA. This enables customer success teams to fix integration issues then and there, without the aid of the engineering team.


How does data sync happen in unified APIs?

For any unified API, data sync is a two-fold process —

  • Data sync between the source (3rd party app) and the unified API provider
  • Data sync between the unified API and your app

Between the third party app and unified API

First of all, to make any data exchange happen, the unified API needs to read data from the source app (in this case the 3rd party app your customer already uses).

However, this data syncing involves two specific stages — the initial data sync and subsequent delta syncs.

Initial data sync between source app and unified API

Initial data sync is what happens when your customer authenticates and authorizes the unified API platform (let’s say Finch API in this case) to access their data from the third party app while onboarding Finch. 

Upon getting this initial access, Finch API copies and stores the data on its own servers for ease of use. Most unified APIs out there use this approach of copying and storing customer data from the source app in their own databases in order to run the integrations smoothly.

While this is common practice even among the top unified APIs, it poses multiple challenges to customer data safety (we'll discuss this later in this article). Before that, let's have a look at delta syncs.

What are delta syncs?

Delta syncs, as the name suggests, include every data sync that happens after the initial sync as a result of changes to customer data in the source app.

For example, if a customer of Finch API is using a payroll app, every time payroll data changes — a salary revision, a new investment declaration, an additional deduction, and so on — a delta sync informs Finch API of the specific change in the source app.

There are two ways to handle delta syncs — webhooks and polling.

In both cases, Finch API serves data from its stored copy (explained below).

In the case of webhooks, the source app sends delta event information directly to Finch API as and when it happens. As a result of that “change notification” via the webhook, Finch updates its stored copy of the data to reflect the new information it received.

If the third party app does not support webhooks, Finch API sets regular intervals at which it polls the entire data of the source application to create a fresh copy, making sure any changes made since the last poll are reflected in its database. Polling frequency can be every 24 hours or less.
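When webhooks are unavailable, the delta effectively has to be computed by diffing a fresh poll against the stored copy. Here is a minimal sketch of that diffing step (the record shapes are illustrative):

```python
# Diff two snapshots of source data, keyed by record id, to derive the delta
# that a polling-based sync would otherwise miss.
def diff_snapshots(stored, fresh):
    """Return created / updated / deleted record ids between two polls."""
    created = [rid for rid in fresh if rid not in stored]
    deleted = [rid for rid in stored if rid not in fresh]
    updated = [rid for rid in fresh if rid in stored and fresh[rid] != stored[rid]]
    return {"created": created, "updated": updated, "deleted": deleted}

stored = {"e1": {"salary": 100}, "e2": {"salary": 90}}
fresh = {"e1": {"salary": 110}, "e3": {"salary": 80}}  # e1 changed, e2 left, e3 joined
delta = diff_snapshots(stored, fresh)
```

Note that this requires fetching and comparing the entire dataset on every poll, which is exactly the cost that webhook-based delta syncs avoid.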

This data storage model can pose real challenges for your sales and CS teams when customers worry about how their data is being handled (in some cases it is stored on servers outside the customer's geography). Convincing them otherwise is not easy, and this friction can mean additional paperwork that delays the time to close a deal.

Data syncs between unified API and your app 

The next step in the data sync strategy is to use the user data sourced from the third party app to run your business logic. The two most popular approaches for syncing data between a unified API and your SaaS app are pull and push.

What is Pull architecture?


The pull model is a request-driven architecture: the client sends a data request, and the server responds with the data. If your unified API uses a pull-based approach, you need to make API calls to the data providers using a polling infrastructure. For small volumes of data, a classic pull approach still works, but maintaining polling infrastructure and making regular API calls for large amounts of data quickly becomes impractical.

What is Push architecture?


On the contrary, the push model works primarily via webhooks: you subscribe to certain events by registering a webhook, i.e., a destination URL where data is to be sent. When the event takes place, the server notifies you with the relevant payload. With a push architecture, no polling infrastructure needs to be maintained at your end.
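Here is a minimal sketch of the consumer side of a push architecture: a handler registry that your HTTP endpoint dispatches incoming webhook payloads to. The event names and payload shape are illustrative, not any specific vendor's format.

```python
import json

# Map event types to handler functions; your real endpoint would also
# verify the webhook's signature before dispatching.
HANDLERS = {}

def on(event_type):
    """Register a handler for a webhook event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("employee.updated")
def handle_employee_updated(payload):
    return f"updated {payload['id']}"

def receive_webhook(body):
    """What your HTTP endpoint does with the raw request body."""
    event = json.loads(body)
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return "ignored"  # acknowledge unknown events, but skip them
    return handler(event["data"])

result = receive_webhook(json.dumps({"type": "employee.updated", "data": {"id": "e1"}}))
```

The key property: the provider initiates the call when something changes, so your side does no periodic fetching at all.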

How does Finch API send you data?

There are 3 ways Finch API can interact with your SaaS application.

  • First, for each connected user, you maintain a polling infrastructure at your end and periodically poll the Finch copy of the customer data. This approach only works when you have a limited number of connected users.
  • Second, you can write your own sync functions for more frequent or more specific data syncs. This ad-hoc sync is easier than regular polling, but it still requires you to maintain polling infrastructure at your end for each connected customer.
  • Third, Finch API can use webhooks to send data to your SaaS app. Based on your preference, it can either send you a notification via webhook to start polling at your end, or send you the appropriate payload whenever an event happens.

How does Knit API send data?

Knit is the only unified API that does NOT store any customer data at our end. 

Yes, you read that right. 

In our previous HR tech venture, we faced customer dissatisfaction over this data storage model (discussed above) firsthand. So, when we set out to build the Knit Unified API, we knew we had to find a way for SaaS businesses to no longer need to convince their customers about security: the unified API architecture should speak for itself. We built a 100% events-driven webhook architecture, delivering both the initial and delta syncs to your application via webhooks and events only.

The benefits of a completely event-driven webhook architecture are threefold —

  • It saves you hours of engineering resources that you would otherwise spend building, maintaining and running polling infrastructure.
  • It ensures on-time data delivery regardless of the payload size, so you can scale as you wish.
  • It supports real-time use cases, which a polling-based architecture doesn't.

Finch API vs Knit API

For a full feature-by-feature comparison, see our Knit vs Finch comparison page →

Let’s look at the other components of the unified API (discussed above) and what Knit API and Finch API offer.

1. Authorization & authentication

Knit’s auth component offers a JavaScript SDK which is highly flexible and covers a wider range of use cases than the Reach/iFrame approach Finch API uses for the front end. This in turn gives you more customization capability over the auth component your customers interact with while using Knit API.

2. Ongoing integration management

The Knit API integration dashboard doesn't just provide RCA and resolution; it goes the extra mile and proactively identifies and fixes integration issues before your customers raise a request.

Knit provides deep RCA and resolution, including the ability to identify which records were synced and the ability to rerun syncs.

In comparison, the Finch API customer dashboard doesn't offer as deep an analysis, requiring more work at your end.

Final thoughts

Wrapping up: Knit API is the only unified API that does not store customer data at its end, and it offers a scalable, secure, event-driven push architecture for small as well as large data loads.

By now, if you are convinced that Knit API is worth a try, click here to get your API keys. Or if you want to learn more, see our docs.
Product
-
Mar 29, 2026

Top 5 Finch Alternatives

TL;DR:

Finch is a leading unified API player, particularly popular for its connectors in the employment systems space, enabling SaaS companies to build 1:many integrations with applications specific to employment operations. Customers can leverage Finch's unified connector to integrate with multiple applications in the HRIS and payroll categories in one go. Owing to Finch, companies find connecting with their preferred employment applications (HRIS and payroll) seamless, cost-effective, time-efficient, and overall an optimized process.

While Finch has the most exhaustive coverage for employment systems, it is not without its downsides. The most prominent is that a majority of the connectors offered are what Finch calls “assisted” integrations. Assisted essentially means a human-in-the-loop integration, where a person has admin access to your user's data and manually downloads and uploads the data as and when needed. Another is that for most assisted integrations you can only get information once a week, which might not be ideal if you're building for use cases that depend on real-time information.

Pros and cons of Finch
Why choose Finch (Pros)

● Ability to scale HRIS and payroll integrations quickly

● In-depth data standardization and write-back capabilities

● Simplified onboarding experience within a few steps

However, some of the challenges include (Cons):

● Most integrations are assisted (human-in-the-loop) instead of being true API integrations

● Integrations only available for employment systems

● Not suitable for real-time data syncs

● Limited flexibility for frontend auth component

● Requires users to take the onus for integration management

Pricing: Starts at $35/connection per month for read-only APIs. Write APIs for employees, payroll and deductions are available on their Scale plan, for which you’d have to get in touch with their sales team.

Now let's look at a few alternatives you can consider alongside Finch for scaling your integrations.

Finch alternative #1: Knit

Knit is a leading alternative to Finch, providing unified APIs across many integration categories, allowing companies to use a single connector to integrate with multiple applications. Here’s a list of features that make Knit a credible alternative to Finch to help you ship and scale your integration journey with its 1:many integration connector:

Pricing: Starts at $2400 Annually

Here’s when you should choose Knit over Finch:

● Wide horizontal and deep vertical coverage: Like Finch, Knit provides deep vertical coverage within the application categories it supports; unlike Finch, it also offers wider horizontal coverage of applications. In addition to applications within the employment systems category, Knit supports a unified API for ATS, CRM, e-Signature, Accounting, Communication and more. This means users can leverage Knit to connect with a wider ecosystem of SaaS applications.

● Events-driven webhook architecture for data sync: Knit has built a 100% events-driven webhook architecture, which ensures data sync in real time, something that cannot be accomplished with approaches that require a polling infrastructure. Knit ensures that as soon as data updates happen, they are dispatched to the organization's data servers, without the need to pull data periodically. In addition, Knit guarantees scalability and delivery irrespective of the data load, offering a 99.99% SLA. It thus ensures security, scale and resilience for event-driven stream processing, with near real-time data delivery.

● Data security: Knit is the only unified API provider in the market today that doesn't store any copy of the customer data at its end. All data requests are pass-through in nature and are never stored on Knit's servers. This takes security and privacy to the next level: since no data is stored on Knit's servers, it is not vulnerable to unauthorized access by any third party. This makes convincing customers about the security of the application easier and faster.

● Custom data models: While Knit provides a unified and standardized model for building and managing integrations, it comes with various customization capabilities as well. First, it supports custom data models. This ensures that users are able to map custom data fields, which may not be supported by unified data models. Users can access and map all data fields and manage them directly from the dashboard without writing a single line of code. These DIY dashboards for non-standard data fields can easily be managed by frontline CX teams and don’t require engineering expertise.  

● Sync when needed: Knit allows users to limit data sync and API calls as per the need. Users can set filters to sync only targeted data which is needed, instead of syncing all updated data, saving network and storage costs. At the same time, they can control the sync frequency to start, pause or stop sync as per the need.

● Ongoing integration management: Knit’s integration dashboard provides comprehensive capabilities. In addition to offering RCA and resolution, Knit plays a proactive role in identifying and fixing integration issues before a customer can report it. Knit ensures complete visibility into the integration activity, including the ability to identify which records were synced, ability to rerun syncs etc.

As an alternative to Finch, Knit ensures:

● No-Human in the loop integrations

● No need for maintaining any additional polling infrastructure

● Real time data sync, irrespective of data load, with guaranteed scalability and delivery

● Complete visibility into integration activity and proactive issue identification and resolution

● No storage of customer data on Knit’s servers

● Custom data models, sync frequency, and auth component for greater flexibility

See the full Knit vs Finch comparison →

Finch alternative #2: Merge

Another leading contender among Finch alternatives for API integration is Merge. One of the key reasons customers choose Merge over Finch is the diversity of integration categories it supports.

Pricing: Starts at $7,800/year and goes up to $55K

Why you should consider Merge to ship SaaS integrations:

● Higher number of unified API categories; Merge supports 7 unified API categories, whereas Finch only offers integrations for employment systems

● Supports API-based integrations and doesn’t focus only on assisted integrations (as is the case for Finch), as the latter can compromise customer’s PII data

● Facilitates data sync at a higher frequency as compared to Finch; Merge ensures daily if not hourly syncs, whereas Finch can take as much as 2 weeks for data sync

However, you may want to consider the following gaps before choosing Merge:

● Requires a polling infrastructure that the user needs to manage for data syncs

● Limited flexibility in case of auth component to customize customer frontend to make it similar to the overall application experience

● Webhooks based data sync doesn’t guarantee scale and data delivery

Finch alternative #3: Workato

Workato is considered another alternative to Finch, albeit in the traditional and embedded iPaaS category.

Pricing: Pricing is available on request based on workspace requirement; Demo and free trial available

Why you should consider Workato to ship SaaS integrations:

● Supports 1200+ pre-built connectors, across CRM, HRIS, ticketing and machine learning models, facilitating companies to scale integrations extremely fast and in a resource efficient manner

● Helps build internal integrations, API endpoints and workflow applications, in addition to customer-facing integrations; co-pilot can help build workflow automation better

● Facilitates building interactive workflow automations with Slack, Microsoft Teams, with its customizable platform bot, Workbot

However, there are some points you should consider before going with Workato:

● Lacks an intuitive or robust tool to help identify, diagnose and resolve issues with customer-facing integrations themselves i.e., error tracing and remediation is difficult

● Doesn’t offer sandboxing for building and testing integrations

● Limited ability to handle large, complex enterprise integrations

Finch alternative #4: Paragon

Paragon is another embedded iPaaS that companies have been using to power their integrations as an alternative to Finch.

Pricing: Pricing is available on request based on workspace requirement;

Why you should consider Paragon to ship SaaS integrations:

● Significant reduction in production time and resources required for building integrations, leading to faster time to market

● Fully managed authentication, backed by penetration testing to secure customers’ data and credentials; managed on-premise deployment to support the strictest security requirements

● Provides a fully white-labeled and native-modal UI, in-app integration catalog and headless SDK to support custom UI

However, a few points need to be paid attention to, before making a final choice for Paragon:

● Requires technical knowledge and engineering involvement to custom-code solutions or custom logic to catch and debug errors

● Requires building one integration at a time, with engineering involvement for each, reducing the pace of integration and hindering scalability

● Limited UI/UX customization capabilities

Finch alternative #5: Tray.io

Tray.io provides integration and automation capabilities, in addition to being an embedded iPaaS to support API integration.

Pricing: Supports unlimited workflows and usage-based pricing across different tiers starting from 3 workspaces; pricing is based on the plan, usage and add-ons

Why you should consider Tray.io to ship SaaS integrations:

● Supports multiple pre-built integrations and automation templates for different use cases

● Helps build and manage API endpoints and support internal integration use cases in addition to product integrations

● Provides Merlin AI which is an autonomous agent to build automations via chat interface, without the need to write code

However, Tray.io has a few limitations that users need to be aware of:

● Difficult to scale at speed, as it requires building one integration at a time and demands technical expertise

● Data normalization capabilities are rather limited, with additional resources needed for data mapping and transformation

● Limited backend visibility with no access to third-party sandboxes

TL;DR

We have talked about the different providers through which companies can build and ship API integrations, including unified APIs, embedded iPaaS, and more. These are all credible alternatives to Finch, with diverse strengths suited to different use cases. While the number of integrations Finch supports within employment systems is undoubtedly large, there are gaps these alternatives seek to bridge:

Knit: Provides unified APIs for different categories, supporting both read and write use cases. A great alternative that doesn't require a polling infrastructure for data sync (it has a 100% webhooks-based architecture), and also supports in-depth integration management with the ability to rerun syncs and track when records were synced.

Merge: Provides a greater coverage for different integration categories and supports data sync at a higher frequency than Finch, but still requires maintaining a polling infrastructure and limited auth customization.

Workato: Supports a rich catalog of pre-built connectors and can also be used for building and maintaining internal integrations. However, it lacks intuitive error tracing and remediation.

Paragon: Fully managed authentication and fully white labeled UI, but requires technical knowledge and engineering involvement to write custom codes.

Tray.io: Supports multiple pre-built integrations and automation templates and even helps in building and managing API endpoints. But, requires building one integration at a time with limited data normalization capabilities.

Thus, consider the following while choosing a Finch alternative for your SaaS integrations:

● Support for both read and write use-cases

● Security both in terms of data storage and access to data to team members

● Pricing framework, i.e., whether it is usage-based, API call-based, user-based, etc.

● Features needed and the speed and scope to scale (1:many and number of integrations supported)

Depending on your requirements, you can choose an alternative that offers a greater number of API categories, stronger security measures, near-real-time data sync and normalization, along with customization capabilities.

Insights
-
Apr 19, 2026

Is MCP the Future of AI Integration? Roadmap, What Shipped, and What's Next (2026)

Since Anthropic introduced the Model Context Protocol in November 2024, MCP has moved faster than almost any open standard in recent memory. What began as an experimental protocol has become the de facto integration layer for agentic AI - natively supported by Anthropic, OpenAI, Google, and Microsoft, and deployed across millions of daily active developer tool users as of early 2026. The question is no longer whether MCP will become the universal standard for AI-tool integration. It's how quickly enterprises can build on top of what's already here - and what comes next as the specification matures.

The Current State of MCP: Building Momentum

Before exploring what lies ahead, it's essential to understand where MCP stands today. The protocol has experienced explosive growth, with thousands of MCP servers developed by the community and increasing enterprise adoption. The ecosystem has expanded to include integrations with popular tools like GitHub, Slack, Google Drive, and enterprise systems, demonstrating MCP's versatility across diverse use cases.
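Under the hood, MCP messages are JSON-RPC 2.0, and `tools/list` and `tools/call` are methods defined by the specification. Here is a minimal sketch of what a client sends over the wire (the tool name and arguments are illustrative):

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Serialize a JSON-RPC 2.0 request, as MCP transports them."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover which tools the server exposes...
list_tools = jsonrpc_request("tools/list", req_id=1)

# ...then invoke one by name with structured arguments
# ("get_employee" and its arguments are hypothetical).
call_tool = jsonrpc_request(
    "tools/call",
    params={"name": "get_employee", "arguments": {"employee_id": "e1"}},
    req_id=2,
)
```

Because every server speaks this same message shape, a client that can call one MCP server can call any of them, which is what makes the protocol an integration layer rather than another point-to-point connector.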

Understanding the future direction of MCP can help teams plan their adoption strategy and anticipate new capabilities. Many planned features directly address current limitations. Here's a look at key areas of development for MCP based on public roadmaps and community discussions. 

Read more: The Pros and Cons of Adopting MCP Today

MCP 2026 Roadmap: What Shipped and What's Still Coming

The MCP roadmap focuses on unlocking scale, security, and extensibility across the ecosystem.

Remote MCP Support and Authentication

✅ Shipped (early 2025). Remote MCP over HTTP/SSE is live and widely deployed. OAuth 2.1 support is partially implemented, with full SSO integration still in progress.

MCP Registry: The Centralized Discovery Service

One of the most transformative elements of the MCP roadmap is the development of a centralized MCP Registry. This discovery service will function as the "app store" for MCP servers, enabling:

  • Centralized Server Discovery: Developers and organizations will be able to browse, evaluate, and deploy MCP servers through a unified interface. This registry will include metadata about server capabilities, versioning information, and verification status.
  • Third-Party Marketplaces: The registry will serve as an API layer that enables third-party marketplaces and discovery services to build upon, fostering ecosystem growth and competition.
  • Verification and Trust: The registry will implement verification mechanisms to ensure MCP servers meet security and quality standards, addressing current concerns about server trustworthiness.

Microsoft has already demonstrated early registry concepts with their Azure API Center integration for MCP servers, showing how enterprises can maintain private registries while benefiting from the broader ecosystem.

Agent Orchestration and Hierarchical Systems

Multi-agent orchestration primitives are in the spec but not yet standardized. Most production implementations still use custom scaffolding for agent-to-agent handoffs. The roadmap includes substantial enhancements for multi-agent systems and complex orchestrations:

  • Agent Graphs: MCP is evolving to support structured multi-agent systems where different agents can be organized hierarchically, enabling sophisticated coordination patterns. This includes namespace isolation to control which tools are visible to different agents and standardized handoff patterns between agents.
  • Asynchronous Operations: The protocol will support long-running operations that can survive disconnections and reconnections, essential for robust enterprise workflows. This capability will enable agents to handle complex, time-consuming tasks without requiring persistent connections.
  • Hierarchical Multi-Agent Support: Drawing inspiration from organizational structures, MCP will enable "supervisory" agents that manage teams of specialized agents, creating more scalable and maintainable AI systems.

Read more: Scaling AI Capabilities: Using Multiple MCP Servers with One Agent

Enhanced Security and Authorization

Fine-grained permissions and audit logging are on the spec roadmap; human-in-the-loop hooks are being adopted at the application layer rather than protocol level. Security remains a paramount concern as MCP scales to enterprise adoption. The roadmap addresses this through multiple initiatives:

  • Fine-Grained Authorization: Future MCP versions will support granular permission controls, allowing organizations to specify exactly what actions agents can perform under what circumstances. This includes support for conditional permissions based on context, time, or other factors.
  • Secure Authorization Elicitation: The protocol will enable developers to integrate secure authorization flows for downstream APIs, ensuring that MCP servers can safely access external services while maintaining proper consent chains.
  • Human-in-the-Loop Workflows: Standardized mechanisms for incorporating human approval and guidance into agent workflows will become a core part of the protocol. This includes support for mid-task user confirmation and dynamic policy enforcement.

Multimodality and Streaming Support

Text and structured data streaming are live. Full video/audio multimodal support is still rolling out.

Current MCP implementations focus primarily on text and structured data. The roadmap includes significant expansions to support the full spectrum of AI capabilities:
  • Additional Modalities: Video, audio, and other media types will receive first-class support in MCP, enabling agents to work with rich media content. This expansion is crucial as AI models become increasingly multimodal.
  • Streaming and Chunking: For handling large datasets and real-time interactions, MCP will implement comprehensive streaming support. This includes multipart messages, bidirectional communication for interactive experiences, and efficient handling of large file transfers.
  • Memory-Efficient Processing: New implementations will include sophisticated chunking strategies and memory management to handle large datasets without overwhelming system resources.

Reference Implementations and Compliance

TypeScript and Python SDKs are stable. Java and Go SDKs are now available. Rust is in community development.

  • Multi-Language Support: Beyond the current Python and TypeScript implementations, the roadmap includes reference implementations in Java, Go, and other major programming languages. This expansion will make MCP accessible to a broader developer community.
  • Compliance Test Suites: Automated testing frameworks will ensure that different MCP implementations adhere strictly to the specification, boosting interoperability and reliability across the ecosystem.
  • Performance Optimizations: Future implementations will include optimizations for faster local communication, better resource utilization, and reduced latency in high-throughput scenarios.

Ecosystem Development and Tooling

The roadmap recognizes that protocol success depends on supporting tools and infrastructure:

  • Enhanced Debugging Utilities: Advanced debugging tools, including improved MCP Inspectors and management UIs, will make it easier for developers to build, test, and deploy MCP servers.
  • Cloud Platform Integration: Tighter integration with major cloud platforms (Azure, AWS, Google Cloud) will streamline deployment and management of MCP servers in enterprise environments.
  • Standardized Multi-Tool Servers: To reduce deployment overhead, the ecosystem will develop standardized servers that bundle multiple related tools, making it easier to deploy comprehensive MCP capabilities.

Specification Evolution and Governance

As MCP matures, its governance model is becoming more structured to ensure the protocol remains an open standard:

  • Community-Driven Working Groups: The MCP project is organized into projects and working groups that handle different aspects of the protocol's evolution. This includes transport protocols, client implementation, and cross-cutting concerns.
  • Transparent Standardization Process: The process for evolving the MCP specification involves community-driven working groups and transparent standardization processes, reducing fragmentation risk.
  • Versioned Releases: The protocol will follow structured versioning (e.g., MCP 1.1, 2.0) as it matures, providing clear upgrade paths and compatibility guarantees.

Built for AI developers

Give your AI agents enterprise-grade integrations today.

Knit's MCP Servers connect Claude, GPT-5, and any MCP-compatible agent to 100+ enterprise APIs — Workday, BambooHR, Greenhouse, Salesforce — without custom connectors or OAuth headaches.

Implications of MCP for Builders, Strategists, and Enterprises

As MCP evolves from a niche protocol to a foundational layer for context-aware AI systems, its implications stretch across engineering, product, and enterprise leadership. Understanding what MCP enables and how to prepare for it can help organizations and teams stay ahead of the curve.

For Developers and Technical Architects

MCP introduces a composable, protocol-driven approach to building AI systems that is significantly more scalable and maintainable than bespoke integrations.

Key Benefits:

  • Faster Prototyping & Integration: Developers no longer need to hardcode tool interfaces or context management logic. MCP abstracts this with a clean and consistent interface.
  • Plug-and-Play Ecosystem: Reuse community-built servers and tools without rebuilding pipelines from scratch.
  • Multi-Agent Ready: Build agents that cooperate, delegate tasks, and invoke other agents in a standardized way.
  • Language Flexibility: With official SDKs expanding to Java, Go, and Rust, developers can use their preferred stack.
  • Better Observability: Debugging tools like MCP Inspectors will simplify diagnosing workflows and tracking agent behavior.

How to Prepare:

  • Start exploring MCP via small-scale local agents.
  • Participate in community-led working groups or follow MCP GitHub repos.
  • Plan for gradual modular migration of AI components into MCP-compatible servers.

For Product Managers and Innovation Leaders

MCP offers PMs a unified, open foundation for embedding AI capabilities across product experiences—without the risk of vendor lock-in or massive rewrites down the line.

Key Opportunities:

  • Faster Feature Delivery: Modular AI agents can be swapped in/out as use cases evolve.
  • Multi-modal and Cross-App Experiences: Orchestrate product flows that span chat, voice, document, and UI-based interactions.
  • Future-Proofing: Products built on MCP benefit from interoperability across emerging AI stacks.
  • Human Oversight & Guardrails: Design workflows where AI is assistive, not autonomous, by default—reducing risk.
  • Discovery & Extensibility: With MCP Registries, PMs can access a growing catalog of trusted tools and AI workflows to extend product capabilities.

How to Prepare:

  • Map high-friction, multi-tool workflows in your product that MCP could simplify.
  • Define policies for human-in-the-loop moments and user approval checkpoints.
  • Work with engineering teams to adopt the MCP Registry for tool discovery and experimentation.

For Enterprise IT, Security, and AI Strategy Teams

For enterprises, MCP represents the potential for secure, scalable, and governable AI deployment across internal and customer-facing applications.

Strategic Advantages:

  • Enterprise-Grade Security: Upcoming OAuth 2.1, fine-grained permissions, and SSO support allow alignment with existing identity and compliance frameworks.
  • Unified AI Governance: Establish policy-driven, auditable AI workflows across departments (HR, IT, Finance, Support, etc.).
  • De-Risked AI Adoption: MCP’s open standard reduces dependence on proprietary orchestration stacks and black-box APIs.
  • Cross-Cloud Compatibility: MCP supports deployment across AWS, Azure, and on-prem, making it cloud-agnostic and hybrid-ready.
  • Cost Efficiency: Standardization reduces duplicative effort and long-term maintenance burdens from fragmented AI integrations.

How to Prepare:

  • Create internal sandboxes to evaluate and benchmark MCP-based workflows.
  • Define IAM, policy, and audit strategies for agent interactions and downstream tool access.
  • Explore enterprise-specific use cases like AI-assisted ticketing, internal search, compliance automation, and reporting.

For AI and Data Teams

MCP also introduces a new layer of control and coordination for data and AI/ML teams building LLM-powered experiences or autonomous systems.

What it Enables:

  • Seamless Tool and Model Integration: MCP doesn’t replace models; it orchestrates them. Use GPT-4, Claude, or fine-tuned LLMs as modular backends for agents.
  • Contextual Control: Embed structured, contextual memory and state tracking across interactions.
  • Experimentation Velocity: Mix and match tools across different model backends for faster experimentation.

How to Prepare:

  • Identify existing LLM or RAG pipelines that could benefit from agent-based orchestration.
  • Evaluate MCP’s streaming and chunking capabilities for handling large corpora or real-time inference.
  • Begin building internal MCP servers around common datasets or APIs for shared use.

Cross-Functional Collaboration

Ultimately, MCP adoption is a cross-functional effort. Developers, product leaders, security architects, and AI strategists all stand to gain, but also must align.

Best Practices for Collaboration:

  • Establish shared standards for agent behavior, tool definitions, and escalation protocols.
  • Adopt the MCP Registry as a centralized catalog of approved agents/tools within the organization.
  • Use versioning and policy modules to maintain consistency across evolving use cases.

Ecosystem Enablers

  • Protocol Stewards: Anthropic (original authors), MCP Working Groups (open governance on GitHub)
  • Cloud Providers: Microsoft Azure (early registry prototypes via Azure API Center), AWS (integration path discussed)
  • Tool & Agent Platforms: LangChain, AutoGen, Semantic Kernel, Haystack (integrating MCP orchestration models)
  • Infrastructure Projects: OpenAI Tools, Claude Tool Use, HuggingFace tools (partial MCP compatibility emerging)
  • Developer Community: Thousands of contributors on GitHub, Discord, and in hackathons; MCP CLI and SDK maintainers
  • Enterprise Adopters: Early pilots across financial services, healthcare, and industrial automation sectors
  • Academic & Research: Collaborations with academic labs exploring MCP for AI safety, interpretability, and HCI research

Industry Adoption and Market Trends

The trajectory of MCP adoption suggests significant market transformation ahead. This growth is driven by several factors:

  • Enterprise Digital Transformation: Organizations are increasingly recognizing that AI integration is not optional but essential for competitive advantage. MCP provides the standardized foundation needed for scalable AI deployment.
  • Developer Productivity: The protocol promises to reduce initial development time by up to 30% and ongoing maintenance costs by up to 25% compared to custom integrations. This efficiency gain is driving adoption among development teams seeking to accelerate AI implementation.
  • Ecosystem Network Effects: As more MCP servers become available, the value proposition for adopting the protocol increases exponentially. This network effect is accelerating adoption across both enterprise and open-source communities.

Challenges and Considerations

Despite its promising future, MCP faces several challenges that could impact its trajectory:

Security and Trust

The rapid proliferation of MCP servers has raised security concerns. Research by Equixly found command injection vulnerabilities in 43% of tested MCP implementations, with additional concerns around server-side request forgery and arbitrary file access. The roadmap's focus on enhanced security measures directly addresses these concerns, but implementation will be crucial.

Enterprise Readiness

While MCP shows great promise, current enterprise adoption faces hurdles. Organizations need more than just protocol standardization; they require comprehensive governance, policy enforcement, and integration with existing enterprise architectures. The roadmap addresses these needs, but execution remains challenging.

Complexity Management

As MCP evolves to support more sophisticated use cases, there's a risk of increasing complexity that could hinder adoption. The challenge lies in providing advanced capabilities while maintaining the simplicity that makes MCP attractive to developers.

Competition and Fragmentation

The emergence of competing protocols like Google's Agent2Agent (A2A) introduces potential fragmentation risks. While A2A positions itself as complementary to MCP, focusing on agent-to-agent communication rather than tool integration, the ecosystem must navigate potential conflicts and overlaps.

Real-World Applications and Case Studies

The future of MCP is already taking shape through early implementations and pilot projects:

  • Enterprise Process Automation: Companies are using MCP to create AI agents that can navigate complex workflows spanning multiple enterprise systems. For example, employee onboarding processes that previously required manual coordination across HR, IT, and facilities systems can now be orchestrated through MCP-enabled agents.
  • Financial Services: Banks and financial institutions are exploring MCP for compliance monitoring, risk assessment, and customer service applications. The protocol's security enhancements make it suitable for handling sensitive financial data while enabling sophisticated AI capabilities.
  • Healthcare Integration: Healthcare organizations are piloting MCP implementations that enable AI systems to access patient records, scheduling systems, and clinical decision support tools while maintaining strict privacy and compliance requirements.

Looking Ahead: The Next Five Years

The next five years will be crucial for MCP's evolution from promising protocol to industry standard. Several trends will shape this journey:

Standardization and Maturity

MCP is expected to achieve full standardization by 2026, with stable specifications and comprehensive compliance frameworks. This maturity will enable broader enterprise adoption and integration with existing technology stacks.

AI Agent Proliferation

As AI agents become more sophisticated and autonomous, MCP will serve as the foundational infrastructure enabling their interaction with the digital world. The protocol's support for multi-agent orchestration positions it well for this future.

Integration with Emerging Technologies

MCP will likely integrate with emerging technologies like blockchain for trust and verification, edge computing for distributed AI deployment, and quantum computing for enhanced security protocols.

Ecosystem Consolidation

The MCP ecosystem will likely see consolidation as successful patterns emerge and standardized solutions replace custom implementations. This consolidation will reduce complexity while increasing reliability and security.

TL;DR: The Future of MCP

  • Bright Future & Strong Roadmap: MCP’s roadmap directly addresses current limitations—security, remote server support, and complex orchestration—while positioning it for long-term success as the universal AI-tool integration standard.

  • Next-Generation Capabilities: Multi-agent orchestration, multimodal data support (video, audio, streaming), and enterprise-grade authentication will unlock advanced, scalable AI workflows.

  • Enterprise & Developer Alignment: Focused efforts on security, scalability, and developer experience are reducing barriers to enterprise adoption and accelerating developer productivity.

  • Strategic Imperative: As AI integration becomes mission-critical for enterprises, MCP provides a standardized foundation to build, scale, and govern AI-driven ecosystems.

  • Challenges Ahead: Security hardening, enterprise readiness, and preventing protocol fragmentation remain key hurdles. Success will depend on open governance, active community collaboration, and transparent evolution of the standard.

  • Early Adopter Advantage: Teams that adopt MCP now can gain a competitive edge through faster time-to-market, composable agent architectures, and access to a rapidly expanding ecosystem of tools.

MCP is on track to redefine how AI systems interact with tools, data, and each other. With industry backing, active development, and a clear technical direction, it’s well-positioned to become the backbone of context-aware, interconnected AI. The next phase will determine whether MCP achieves its bold vision of becoming the universal standard for AI integration, but its momentum suggests a transformative shift in how AI applications are built and deployed.

Next Steps:

Wondering whether going the MCP route is right? Check out: Should You Adopt MCP Now or Wait? A Strategic Guide

Frequently Asked Questions (FAQ)

1. Does MCP have a future?

Yes - and the 2026 roadmap makes the case. After early concerns about protocol fragility, MCP is now a multi-company open standard under the Linux Foundation, with AWS, Cloudflare, and Google all publishing production commitments. The 2026 roadmap focuses on enterprise readiness (audit trails, SSO auth, gateway patterns), transport scalability (stateless Streamable HTTP), and agent communication primitives (the Tasks primitive for async agent calls). The "MCP is dead" narrative peaked in March 2026 and was driven by specific limitations - most of which are active roadmap items. For teams building AI agents that need to connect to enterprise SaaS systems, MCP's trajectory in 2026 is solidly upward.

2. Is MCP future-proof?

MCP is designed for evolution, not a fixed protocol. It's now governed under the Linux Foundation with a formal SEP (Specification Evolution Process) for community-driven changes. The 2026 roadmap explicitly addresses the gaps that triggered "is MCP dying?" concerns: context bloat is being addressed through reference-based results and better streaming; auth limitations are being fixed with SSO-integrated flows (Cross-App Access); and enterprise observability (audit trails, gateway patterns) is a first-class 2026 priority. Whether MCP specifically or a successor protocol wins long-term, the patterns it's establishing - standardised tool discovery, capability negotiation, agent-to-server communication - are durable.

3. What will replace MCP?

Most likely nothing replaces MCP wholesale, but it continues to evolve. The protocols most frequently discussed as alternatives - A2A (Google's Agent-to-Agent Protocol) and direct CLI interfaces - solve different problems. A2A handles agent-to-agent communication; MCP handles agent-to-tool and data-source communication. They're complementary, not competing. In April 2026, AWS, Google, and Cloudflare have all doubled down on MCP rather than moving away from it. The realistic trajectory is MCP as the tool-layer standard, with A2A or similar handling orchestration between agents.

4. What is the MCP roadmap for 2026?

The official MCP 2026 roadmap (published March 2026, maintained by the Linux Foundation) has four priority areas: (1) Transport evolution — making Streamable HTTP work statelessly at scale, with proper load balancer and proxy support; (2) Agent communication — closing lifecycle gaps in the Tasks primitive (retry semantics, expiry policies); (3) Governance maturation — a formal contributor ladder and delegation model so the project doesn't depend on a small group; (4) Enterprise readiness — audit trails, SSO-integrated auth, and gateway patterns. On the horizon: event-driven triggers (webhooks), streamed and reference-based results, and a Skills primitive for composed capabilities.

5. Is MCP the next big thing in AI?

MCP is already a significant part of the AI infrastructure stack - with major adoption from AWS, Google, Cloudflare, and hundreds of independent server builders as of early 2026. Whether it stays dominant depends on how well it solves its current limitations. The strongest argument for MCP's continued centrality: it solves the right problem (making AI agents interoperable with external systems) at the right layer (below the agent framework, above the raw API). For enterprise use cases requiring structured data, audit, and multi-system coordination, MCP is well-positioned as the tool-layer standard.

6. How does MCP compare to A2A (Agent-to-Agent Protocol)?

MCP and A2A solve different problems. MCP (Model Context Protocol) connects AI agents to tools and data sources - it's the protocol for an agent to call an API, read a database, or execute a function. A2A (Google's Agent-to-Agent Protocol) connects AI agents to other AI agents - it's the protocol for one agent to delegate a task to another agent as a peer. In a production multi-agent system in 2026, you'd typically use both: MCP for each agent's tool access, and A2A for orchestrating work across agents. Google, AWS, and other major MCP contributors have adopted A2A, treating the two protocols as complementary rather than competing.

Insights
-
Apr 19, 2026

MCP Agent Orchestration: Chaining, Handoffs, and Multi-Agent Patterns Explained

The Model Context Protocol (MCP) started with a simple yet powerful goal: a standard interface that lets AI agents invoke tools and external APIs in a consistent manner. But the true potential of MCP goes far beyond just calling a calculator or querying a database. It serves as a critical foundation for orchestrating complex, modular, and intelligent agent systems where multiple AI agents can collaborate, delegate, chain operations, and operate with contextual awareness across diverse tasks.

Suggested reading: Scaling AI Capabilities: Using Multiple MCP Servers with One Agent

In this blog, we dive deep into the advanced integration patterns that MCP unlocks for multi-agent systems. From structured handoffs between agents to dynamic chaining and even complex agent graph topologies, MCP serves as the "glue" that enables these interactions to be seamless, interoperable, and scalable.

What Are Advanced Integrations in MCP?

At its core, an advanced integration in MCP refers to designing intelligent workflows that go beyond single agent-to-server interactions. Instead, these integrations involve:

  • Multiple AI agents collaborating on a shared task
  • Orchestrators (either rule-based or LLM-driven) planning execution logic
  • Agents calling other agents as if they were tools
  • Context handoffs that preserve relevant state and reduce rework
  • Dynamically generated pipelines that change based on real-time input or system state

Multi-agent orchestration is the process of coordinating multiple intelligent agents to collectively perform tasks that exceed the capability or specialization of any single agent. These agents might each possess specific skills: some may draft content, others may analyze legal compliance, while another might optimize pricing models.

MCP enables such orchestration by standardizing the interfaces between agents and exposing each agent's functionality as if it were a callable tool. This plug-and-play architecture leads to highly modular and reusable agent systems. Here are a few advanced integration patterns where MCP plays a crucial role:

Pattern 1: Single Agent Delegating to Specialized Sub-Agents (Handoffs)

Think of a general-purpose AI agent acting as a project manager. Rather than doing everything itself, it delegates sub-tasks to more specialized agents based on domain expertise—mirroring how human teams operate.

For instance:

  • A ContentManagerAgent might delegate script writing to a ScriptWriterAgent
  • A FinancialAdvisorAgent could hand off forecasting tasks to a QuantAgent
  • A MedicalAssistantAgent might rely on a DiagnosisAgent and PharmaAgent

This pattern mirrors the division of labor in organizations and is crucial for scalability and maintainability.

How MCP Enables This:

MCP allows the parent agent to invoke any sub-agent using a standardized interface. When the ContentManagerAgent calls generate_script(topic), it doesn’t need to know how the script is written; it just trusts the ScriptWriterAgent to handle it. MCP acts as the “middleware,” allowing:

  • Tool registration
  • Input/output format enforcement
  • Context transfer (metadata, task ID, session state)

Each sub-agent effectively behaves like a callable microservice.

Example Flow:

ProjectManagerAgent receives the task: "Create a digital campaign for a new fitness app."

Steps:

  1. plan_campaign(details) → CampaignStrategistAgent
  2. draft_copy(campaign_brief) → CopywritingAgent
  3. design_assets(campaign_brief) → DesignAgent
  4. budget_allocation(campaign_brief) → FinanceAgent

Each agent is called via MCP and returns structured outputs to the primary agent, which then integrates them.
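The delegation flow above can be sketched in plain Python. This is an illustrative stand-in, not real MCP SDK code: the agent functions, registry, and payload fields are all hypothetical, but the shape — sub-agents behind a uniform, name-based tool interface that merges structured outputs into shared context — mirrors what MCP standardizes.

```python
from typing import Any, Callable

def plan_campaign(ctx: dict[str, Any]) -> dict[str, Any]:
    # Hypothetical sub-agent: turns raw task details into a campaign brief.
    return {"campaign_brief": f"Brief for: {ctx['details']}"}

def draft_copy(ctx: dict[str, Any]) -> dict[str, Any]:
    # Hypothetical sub-agent: drafts copy from the brief produced upstream.
    return {"copy": f"Ad copy based on {ctx['campaign_brief']}"}

# Tool registry: the parent agent sees only names behind one shared interface,
# the way MCP exposes each sub-agent as a callable tool.
REGISTRY: dict[str, Callable[[dict[str, Any]], dict[str, Any]]] = {
    "plan_campaign": plan_campaign,
    "draft_copy": draft_copy,
}

def invoke(tool: str, ctx: dict[str, Any]) -> dict[str, Any]:
    """Invoke a sub-agent by name and merge its structured output into context."""
    return {**ctx, **REGISTRY[tool](ctx)}

ctx = {"details": "digital campaign for a new fitness app"}
ctx = invoke("plan_campaign", ctx)
ctx = invoke("draft_copy", ctx)
print(ctx["copy"])
# → Ad copy based on Brief for: digital campaign for a new fitness app
```

Because every sub-agent shares the same dict-in/dict-out contract, the orchestrator can add, swap, or reorder agents without touching its own logic.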

Benefits:

  • Decoupling: Agents can be developed, deployed, and improved independently.
  • Specialization: Each agent focuses on doing one task well.
  • Reusability: Sub-agents can be reused in multiple workflows.

Challenges:

  • Error Propagation: Failures in sub-agents must be handled gracefully.
  • Context Management: Ensuring the right amount of context is shared without overloading or under-informing sub-agents.

Pattern 2: Chaining Agent Outputs to Inputs (Pipelines)

Concept:

In a pipeline pattern, agents are arranged in a linear sequence, each one performing a task, transforming the data, and passing it on to the next agent. Think of this like an AI-powered assembly line.

Real-World Example: Technical Blog Generation

Let’s say you’re building a content automation pipeline for a SaaS company.

Pipeline:

  1. research_topic("MCP for Agents") → ResearchAgent
  2. draft_article(research_summary) → WriterAgent
  3. optimize_seo(article_draft) → SEOAgent
  4. edit_for_tone(seo_article) → EditorAgent
  5. publish(platform, final_article) → PublishingAgent

Each stage is executed sequentially or conditionally, with the MCP orchestrator managing the flow.
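A minimal sketch of the pipeline pattern, assuming each stage is a tool-like function whose output feeds the next stage's input. The stage names mirror the steps above but are hypothetical placeholders, not real MCP server endpoints:

```python
from functools import reduce

# Each stage transforms the payload and hands it to the next stage, the way
# an orchestrator chains MCP tool calls. Bodies are illustrative stubs.
def research_topic(text: str) -> str:
    return f"research({text})"

def draft_article(text: str) -> str:
    return f"draft({text})"

def optimize_seo(text: str) -> str:
    return f"seo({text})"

PIPELINE = [research_topic, draft_article, optimize_seo]

def run_pipeline(topic: str) -> str:
    # Thread the payload through every stage in declared order.
    return reduce(lambda payload, stage: stage(payload), PIPELINE, topic)

print(run_pipeline("MCP for Agents"))
# → seo(draft(research(MCP for Agents)))
```

Reordering or swapping a stage is just a change to the `PIPELINE` list — the composability benefit described below.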

How MCP Enables This

MCP ensures each stage adheres to a common interface:

  • Standardized JSON input/output
  • Metadata tagging for each invocation
  • Error reporting and retry logic
  • Traceable workflow IDs

Benefits:

  • Composability: Any agent/tool can be swapped or reordered.
  • Observability: Each stage can be logged, audited, and improved independently.
  • Parallelism: Certain steps can run concurrently where appropriate.

Challenges:

  • Data Transformation: Outputs must match the expected input formats.
  • Latency: Sequential processing can be slower; caching and batching might help.

Pattern 3: Agent Graphs and Complex Topologies

Some problems require non-linear workflows—where agents form a graph instead of a simple chain. In these topologies:

  • Agents can communicate bi-directionally
  • Feedback loops exist
  • Tasks trigger new sub-tasks dynamically
  • Context is shared across multiple nodes

Example Scenario: Crisis Response Management

Agents:

  • AlertAgent: Detects disasters from news, social media
  • CommsAgent: Prepares public announcements
  • LogisticsAgent: Arranges relief supplies
  • DataAgent: Aggregates real-time data
  • CoordinationAgent: Routes control to the right nodes

Workflow:

  • AlertAgent triggers CommsAgent and LogisticsAgent simultaneously
  • DataAgent feeds new updates to all others
  • CoordinationAgent reroutes tasks based on progress
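The simultaneous trigger of CommsAgent and LogisticsAgent is a fan-out step in the graph. Here is a minimal asyncio sketch, with hypothetical agent coroutines standing in for real MCP calls:

```python
import asyncio

async def comms_agent(alert: str) -> dict:
    # Hypothetical agent: prepares a public announcement for the alert.
    return {"announcement": f"Public notice: {alert}"}

async def logistics_agent(alert: str) -> dict:
    # Hypothetical agent: arranges relief supplies for the alert.
    return {"supplies": f"Relief dispatch for: {alert}"}

async def handle_alert(alert: str) -> dict:
    # Fan out to both agents concurrently, then merge their structured outputs.
    comms, logistics = await asyncio.gather(
        comms_agent(alert), logistics_agent(alert)
    )
    return {**comms, **logistics}

result = asyncio.run(handle_alert("flood in region X"))
print(sorted(result.keys()))
# → ['announcement', 'supplies']
```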

How MCP Helps:

  • Namespaced tool definitions allow agents to see only relevant tools.
  • Consistent invocation semantics enable plug-and-play composition.
  • Agent-to-agent handoffs become just another tool call.

Benefits:

  • Scalability: Add new agents to the graph without redesigning everything.
  • Dynamic Routing: Agents can reroute requests based on real-time feedback.

Challenges:

  • Debugging: More complex interactions are harder to trace.
  • State Management: Keeping global state consistent across a distributed system.

Example: Cross-Domain Workflow - Legal Document Generation

Let’s walk through a real-world scenario combining handoffs, chaining, and agent graphs:

Task: Generate a legally compliant, region-specific terms of service (ToS).

Step-by-Step:

  1. ClientAgent receives a request from a SaaS company.
  2. It calls gather_requirements(client_profile) → RequirementsAgent.
  3. research_laws(region) → LegalResearchAgent.
  4. draft_terms(requirements, legal_research) → LegalDraftAgent.
  5. review_terms(draft) → LegalReviewAgent.
  6. translate_terms(draft, languages) → LocalizationAgent.
  7. style_terms(translated_drafts) → EditingAgent.

At each stage, agents communicate using MCP, and each tool call is standardized, logged, and independently maintainable.

Benefits of Using MCP for Orchestration 

  • Tool/Agent Reusability: Wrap once, reuse forever. Any agent or API exposed via MCP can be plugged into different workflows, regardless of the use case or orchestrator.
  • Separation of Concerns: MCP separates execution (handled by agents/tools) from planning and control (handled by the orchestrator), making both systems easier to reason about and debug.
  • Observability & Debuggability: Every interaction, whether it succeeds or fails, is logged, versioned, and auditable. This is critical for systems operating at scale or under compliance requirements.
  • Scalability: Need to add a new language model? Just register it as an MCP tool. The rest of your architecture doesn’t break. This modularity is key to scaling across domains.
  • Interoperability: MCP abstracts away language, framework, and protocol differences. A Python-based tool can talk to a Go-based agent via MCP with no special configuration.

Read also: Why MCP Matters: Unlocking Interoperable and Context-Aware AI Agents

Security and Governance in Multi-Agent Systems 

Multi-agent systems, especially in regulated domains like healthcare, finance, and legal tech, need granular control and transparency. Here’s how MCP helps:

  • Authentication: Each agent/tool has secure credentials. MCP ensures only authorized parties can initiate calls.
  • Authorization: Role-based permissions define which agents can access which tools. For instance, a junior HR agent may not invoke generate_offer_letter() directly.
  • Audit Trails: Every call, context payload, and response is logged and timestamped. This is critical for forensics, debugging, and legal compliance.
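A minimal sketch of the role-based authorization check described above. The policy table, role names, and tool names are hypothetical; a real deployment would back this with its identity provider:

```python
# Allow-list policy: which roles may invoke which tools. Illustrative only.
POLICY: dict[str, set[str]] = {
    "generate_offer_letter": {"hr_manager"},
    "read_employee_directory": {"hr_manager", "hr_junior"},
}

def authorize(role: str, tool: str) -> bool:
    """Permit the call only if the caller's role is on the tool's allow-list."""
    return role in POLICY.get(tool, set())

# A junior HR agent may read the directory but not generate offer letters.
assert authorize("hr_junior", "read_employee_directory")
assert not authorize("hr_junior", "generate_offer_letter")
print("policy checks passed")
```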

MCP as the Execution Backbone of Multi-Agent AI

In a world where AI systems are becoming modular, distributed, and task-specialized, MCP plays an increasingly crucial role. It abstracts complexity, ensures consistency, and enables the kind of agent-to-agent collaboration that will define the next era of AI workflows.

In 2026, MCP operates alongside a second emerging standard: A2A (Agent-to-Agent Protocol), introduced by Google. Where MCP governs how agents connect to tools and data sources, A2A governs direct agent-to-agent communication - how one agent calls another agent as a peer, rather than as a tool. The two protocols are complementary: MCP handles the tool and resource layer; A2A handles the agent coordination layer above it. For teams building multi-agent systems today, the practical architecture is often MCP for external integrations + A2A (or an orchestration framework like LangGraph) for inter-agent routing.

Whether you're building content pipelines, compliance engines, scientific research chains, or human-in-the-loop decision systems, MCP helps you scale reliably and flexibly.

By making tools and agents callable, composable, and context-aware, MCP is not just a protocol; it’s an enabler of next-gen AI systems.

FAQs

1. What is MCP agent orchestration?

MCP agent orchestration is the use of the Model Context Protocol as the standardised communication layer through which an orchestrator agent coordinates multiple specialised sub-agents. Rather than each agent-to-agent connection requiring custom integration, MCP provides a common protocol so that agents can discover and invoke other agents as tools — passing context, receiving outputs, and chaining them into multi-step workflows. The orchestrator handles task decomposition and routing; MCP handles the transport and tool-calling mechanics. This separation means you can swap or extend individual agents without rewriting the orchestration logic. Knit uses this pattern in its own multi-agent architecture, exposing HRIS, ATS, and payroll agents as MCP-compatible tools that any orchestrator can call.

2. What is the difference between MCP and agent orchestration frameworks like LangGraph or CrewAI?

They operate at different layers of the stack and complement rather than compete with each other. MCP is a protocol - it defines how agents and tools communicate (discovery, invocation, transport). LangGraph and CrewAI are orchestration frameworks - they define how a workflow is structured (which agent runs when, how state is managed, how branching works). In practice: LangGraph or CrewAI handle the high-level orchestration logic, while MCP handles the standardised connections to the tools and sub-agents those frameworks call. You can use LangGraph to orchestrate a workflow and MCP to connect that workflow to external tools - they're designed to work together.

3. What are the main multi-agent orchestration patterns with MCP?

Three core patterns emerge in MCP-based multi-agent systems. The first is handoff - one orchestrator agent delegates a subtask to a specialised sub-agent, waits for its output, and continues the workflow. The second is chaining - the output of one agent is passed as the input to the next, forming a sequential pipeline (e.g., research agent → summarisation agent → formatting agent). The third is agent graphs - multiple agents run in parallel or conditional branches, with a central orchestrator managing state and collecting results. All three patterns rely on MCP's tool-calling mechanics to invoke sub-agents and pass context between them.

4. How does context pass between agents in an MCP multi-agent workflow?

In MCP-based multi-agent workflows, context passes through structured metadata attached to each tool call. When the orchestrator invokes a sub-agent via MCP, it includes a payload containing relevant context - a workflow ID, prior agent outputs, user-provided parameters, and any shared state. The sub-agent processes this context, executes its task, and returns a structured response that the orchestrator uses to determine the next step. Persistent state across long-running workflows typically lives in an external store (database, memory layer) rather than in-context, since MCP itself is stateless between calls — each tool invocation is independent.
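As a concrete illustration, a context payload attached to a sub-agent invocation might look like the following. Every field name here is hypothetical, not mandated by the MCP spec; note that persistent state is referenced, not embedded:

```python
import json

# Hypothetical context payload an orchestrator attaches to a tool call.
payload = {
    "workflow_id": "wf-2041",                    # ties the call to the workflow
    "prior_outputs": {"research": "summary..."}, # upstream agent results
    "params": {"region": "EU"},                  # user-provided parameters
    "state_ref": "db://workflows/wf-2041",       # pointer to external state store
}

# The sub-agent receives this as structured input alongside the invocation.
print(json.dumps(payload, indent=2))
```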

5. Is MCP an orchestration engine that can manage agent workflows directly?

No. MCP is not an orchestration engine in itself; it’s a protocol layer. Think of it as the execution and interoperability backbone that allows agents to communicate in a standardized way. The orchestration logic (i.e., deciding what to do next) must come from a planner, rule engine, or LLM-based controller like LangGraph, CrewAI, PydanticAI, Google ADK, the OpenAI Agents SDK, or Autogen. MCP ensures that, once a decision is made, the actual tool or agent execution is reliable, traceable, and context-aware.

6. What’s the advantage of using MCP over direct API calls or hardcoded integrations between agents?

Direct integrations are brittle and hard to scale. Without MCP, you’d need to manage multiple formats, inconsistent error handling, and tightly coupled workflows. MCP introduces a uniform interface where every agent or tool behaves like a plug-and-play module. This decouples planning from execution, enables composability, and dramatically improves observability, maintainability, and reuse across workflows.

7. How does MCP enable dynamic handoffs between agents in real-time workflows?

MCP supports context-passing, metadata tagging, and invocation semantics that allow an agent to call another agent as if it were just another tool. This means Agent A can initiate a task, receive partial or complete results from Agent B, and then proceed or escalate based on the outcome. These handoffs are tracked with workflow IDs and can include task-specific context like user profiles, conversation history, or regulatory constraints.

8. Can MCP support workflows with branching, parallelism, or dynamic graph structures?
Yes. While MCP doesn’t orchestrate the branching logic itself, it supports complex topologies through its flexible invocation model. An orchestrator can define a graph where multiple agents are invoked in parallel, with results aggregated or routed dynamically based on responses. MCP’s standardized input/output formats and session management features make such branching reliable and traceable.

9. How is state or context managed when chaining multiple agents using MCP?
Context management is critical in multi-agent systems, and MCP allows you to pass structured context as metadata or part of the input payload. This might include prior tool outputs, session IDs, user-specific data, or policy flags. However, long-term or persistent state must be managed externally, either by the orchestrator or a dedicated memory store. MCP ensures the transport and enforcement of context but doesn’t maintain state across sessions by itself.

10. How does MCP handle errors and partial failures during multi-agent orchestration?
MCP defines a structured error schema, including error codes, messages, and suggested resolution paths. When a tool or agent fails, this structured response allows the orchestrator to take intelligent actions, such as retrying the same agent, switching to a fallback agent, or alerting a human operator. Because every call is traceable and logged, debugging failures across agent chains becomes much more manageable.

11. Is it possible to audit, trace, or monitor agent-to-agent calls in an MCP-based system?
Absolutely. One of MCP’s core strengths is observability. Every invocation, successful or not, is logged with timestamps, input/output payloads, agent identifiers, and workflow context. This is critical for debugging, compliance (e.g., in finance or healthcare), and optimizing workflows. Some MCP implementations even support integration with observability stacks like OpenTelemetry or custom logging dashboards.

12. Can MCP be used in human-in-the-loop workflows where humans co-exist with agents?
Yes. MCP can integrate tools that involve human decision-makers as callable components. For example, a review_draft(agent_output) tool might route the result to a human for validation before proceeding. Because humans can be modeled as tools in the MCP schema (with asynchronous responses), the handoff and reintegration of their inputs remain seamless in the broader agent graph.

13. Are there best practices for designing agents to be MCP-compatible in orchestrated systems?
Yes. Ideally, agents should be stateless (or accept externalized state), follow clearly defined input/output schemas (typically JSON), return consistent error codes, and expose a set of callable functions with well-defined responsibilities. Keeping agent functions atomic and predictable allows them to be chained, reused, and composed into larger workflows more effectively. Versioning tool specs and documenting side effects is also crucial for long-term maintainability.

Insights
-
Apr 19, 2026

MCP Architecture Explained: Tools, Resources, and Prompts (Deep Dive)

The Model Context Protocol (MCP) is revolutionizing the way AI agents interact with external systems, services, and data. By following a client-server model, MCP bridges the gap between static AI capabilities and the dynamic digital ecosystems they must work within. In previous posts, we’ve explored the basics of how MCP operates and the types of problems it solves. Now, let’s take a deep dive into the core components that make MCP so powerful: Tools, Resources, and Prompts.

Each of these components plays a unique role in enabling intelligent, contextual, and secure AI-driven workflows. Whether you're building AI assistants, integrating intelligent agents into enterprise systems, or experimenting with multimodal interfaces, understanding these MCP elements is essential.

1. Tools: Enabling AI to Take Action

What Are Tools?

In the world of MCP, Tools are action enablers. Think of them as verbs that allow an AI model to move beyond generating static responses. Tools empower models to call external services, interact with APIs, trigger business logic, or even manipulate real-time data. These tools are not part of the model itself but are defined and managed by an MCP server, making the model more dynamic and adaptable.

Tools help AI transcend its traditional boundaries by integrating with real-world systems and applications, such as messaging platforms, databases, calendars, web services, or cloud infrastructure.

Key Characteristics of Tools

  • Discovery: Clients can discover which tools are available through the tools/list endpoint. This allows dynamic inspection and registration of capabilities.
  • Invocation: Tools are triggered using the tools/call endpoint, allowing an AI to request a specific operation with defined input parameters.
  • Versatility: Tools can vary widely, from performing math operations and querying APIs to orchestrating workflows and executing scripts.

Examples of Common Tools

  • search_web(query) – Perform a web search to fetch up-to-date information.
  • send_slack_message(channel, message) – Post a message to a specific Slack channel.
  • create_calendar_event(details) – Create and schedule an event in a calendar.
  • execute_sql_query(sql) – Run a SQL query against a specified database.

How Tools Work

An MCP server advertises a set of available tools, each described in a structured format. Tool metadata typically includes:

  • Tool Name: A unique identifier.
  • Description: A human-readable explanation of what the tool does.
  • Input Parameters: Defined using JSON Schema, this sets expectations for what input the tool requires.

When the AI model decides that a tool should be invoked, it sends a call_tool request containing the tool name and the required parameters. The MCP server then executes the tool’s logic and returns either the output or an error message.
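
A minimal, self-contained sketch of that cycle is shown below. The `add_numbers` tool and the handler names are invented for illustration; a real server would use an MCP SDK rather than hand-rolled handlers:

```python
# Sketch of an MCP server's tool table and call handling.
TOOLS = {
    "add_numbers": {
        "description": "Add two numbers and return the sum.",
        "inputSchema": {  # JSON Schema describing the expected parameters
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    }
}

def handle_tools_list():
    """Respond to tools/list with the advertised tool metadata."""
    return [{"name": name, **meta} for name, meta in TOOLS.items()]

def handle_tools_call(name, arguments):
    """Respond to tools/call: validate the input, execute, return a result."""
    if name not in TOOLS:
        return {"isError": True, "content": f"unknown tool: {name}"}
    missing = [k for k in TOOLS[name]["inputSchema"]["required"]
               if k not in arguments]
    if missing:
        return {"isError": True, "content": f"missing arguments: {missing}"}
    return {"isError": False, "content": arguments["a"] + arguments["b"]}

print(handle_tools_call("add_numbers", {"a": 2, "b": 3}))
# → {'isError': False, 'content': 5}
```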

Why Tools Matter

Tools are central to bridging model intelligence with real-world action. They allow AI to:

  • Interact with live, real-time data and systems
  • Automate backend operations, workflows, and integrations
  • Respond intelligently based on external input or services
  • Extend capabilities without retraining the model

Best Practices for Implementing Tools

To ensure your tools are robust, safe, and model-friendly:

  • Use Clear and Descriptive Naming
    Give tools intuitive names and human-readable descriptions that reflect their purpose. This helps models and users understand when and how to use them correctly.
  • Define Inputs with JSON Schema
    Input parameters should follow strict schema definitions. This helps the model validate data, autocomplete fields, and avoid incorrect usage.
  • Provide Realistic Usage Examples
    Include concrete examples of how a tool can be used. Models learn patterns and behavior more effectively with demonstrations.
  • Implement Robust Error Handling and Input Validation
    Always validate inputs against expected formats and handle errors gracefully. Avoid assumptions about what the model will send.
  • Apply Timeouts and Rate Limiting
    Prevent tools from hanging indefinitely or being spammed by setting execution time limits and throttling requests as needed.
  • Log All Tool Interactions for Debugging
    Maintain detailed logs of when and how tools are used to help with debugging and performance tuning.
  • Use Progress Updates for Long Tasks
    For time-consuming operations, consider supporting intermediate progress updates or asynchronous responses to keep users informed.

Security Considerations

Ensuring tools are secure is crucial for preventing misuse and maintaining trust in AI-assisted environments.

  • Input Validation
    Rigorously enforce schema constraints to prevent malformed requests. Sanitize all inputs, especially commands, file paths, and URLs, to avoid injection attacks or unintended behavior. Validate lengths, formats, and ranges for all string and numeric fields.
  • Access Control
    Authenticate all sensitive tool requests. Apply fine-grained authorization checks based on user roles, privileges, or scopes. Rate-limit usage to deter abuse or accidental overuse of critical services.
  • Error Handling
    Never expose internal errors or stack traces to the model. These can reveal vulnerabilities. Log all anomalies securely, and ensure that your error-handling logic includes cleanup routines in case of failures or crashes.
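
To make the first of these concrete, here is a sketch of input validation for a hypothetical tool that takes a filename and a result limit. The specific constraints (allow-list pattern, range of 1–500) are examples, not MCP requirements:

```python
import re

# Example allow-list: letters, digits, dot, underscore, and hyphen only.
SAFE_NAME = re.compile(r"[A-Za-z0-9._-]+")

def validate_tool_input(filename: str, limit: int) -> dict:
    """Reject path traversal, disallowed characters, and out-of-range values."""
    if ".." in filename or filename.startswith(("/", "\\")):
        raise ValueError("path traversal rejected")
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError("filename contains disallowed characters")
    if not 1 <= limit <= 500:
        raise ValueError("limit must be between 1 and 500")
    return {"filename": filename, "limit": limit}
```

Raising a structured error before the tool's logic runs keeps malformed model output from ever touching the filesystem or database.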

Testing Tools: Ensuring Reliability and Resilience

Effective testing is key to ensuring tools function as expected and don’t introduce vulnerabilities or instability into the MCP environment.

  • Functional Testing
    Verify that each tool performs its expected function correctly using both valid and invalid inputs. Cover edge cases and validate outputs against expected results.
  • Integration Testing
    Test the entire flow between model, MCP server, and backend systems to ensure seamless end-to-end interactions, including latency, data handling, and response formats.
  • Security Testing
    Simulate potential attack vectors like injection, privilege escalation, or unauthorized data access. Ensure proper input sanitization and access controls are in place.
  • Performance Testing
    Stress-test your tools under simulated load. Validate that tools continue to function reliably under concurrent usage and that timeout policies are enforced appropriately.

2. Resources: Contextualizing AI with Data

What Are Resources?

If Tools are the verbs of the Model Context Protocol (MCP), then Resources are the nouns. They represent structured data elements exposed to the AI system, enabling it to understand and reason about its current environment.

Resources provide critical context, whether it's a configuration file, a user profile, or a live sensor reading. They bridge the gap between static model knowledge and dynamic, real-time inputs from the outside world. By accessing these resources, the AI gains situational awareness, enabling more relevant, adaptive, and informed responses.

Unlike Tools, which the AI uses to perform actions, Resources are passively made available to the AI by the host environment. These can be queried or referenced as needed, forming the informational backbone of many AI-powered workflows.

Types of Resources

Resources are usually identified by URIs (Uniform Resource Identifiers) and can contain either text or binary content. This flexible format ensures that a wide variety of real-world data types can be seamlessly integrated into AI workflows.

Text Resources

Text resources are UTF-8 encoded and well-suited for structured or human-readable data. Common examples include:

  • Source code files – e.g., file://main.py
  • Configuration files – JSON, YAML, or XML used for system or application settings
  • Log files – System, application, or audit logs for diagnostics
  • Plain text documents – Notes, transcripts, instructions

Binary Resources

Binary resources are base64-encoded to ensure safe and consistent handling of non-textual content. These are used for:

  • PDF documents – Contracts, reports, or scanned forms
  • Audio and video files – Voice notes, call recordings, or surveillance footage
  • Images and screenshots – UI captures, camera input, or scanned pages
  • Sensor inputs – Thermal images, biometric data, or other binary telemetry

Examples of Resources

Below are typical resource identifiers that might be encountered in an MCP-integrated environment:

  • file://document.txt – The contents of a file opened in the application
  • db://customers/id/123 – A specific customer record from a database
  • user://current/profile – The profile of the active user
  • device://sensor/temperature – Real-time environmental sensor readings

Why Resources Matter

  • Provide relevant context for the AI to reason effectively and personalize output
  • Bridge static model capabilities with real-time data, enabling dynamic behavior
  • Support tasks that require structured input, such as summarization, analysis, or extraction
  • Improve accuracy and responsiveness by grounding the AI in current data rather than relying solely on user prompts
  • Enable application-aware interactions through environment-specific information exposure

How Resources Work

Resources are passively exposed to the AI by the host application or server, based on the current user context, application state, or interaction flow. The AI does not request them actively; instead, they are made available at the right moment for reference.

For example, while viewing an email, the body of the message might be made available as a resource (e.g., mail://current/message). The AI can then summarize it, identify action items, or generate a relevant response, all without needing the user to paste the content into a prompt.

This separation of data (Resources) and actions (Tools) ensures clean, modular interaction patterns and enables AI systems to operate in a more secure, predictable, and efficient manner.
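
A simplified sketch of a host exposing resources by URI follows, with text content stored inline and binary content base64-encoded. The URIs and response shape are loosely modeled on MCP conventions but simplified for illustration:

```python
import base64

# Illustrative resource table: text is stored as UTF-8, binary content as
# base64 so it travels safely inside JSON responses.
RESOURCES = {
    "file://notes.txt": {"mimeType": "text/plain", "text": "Quarterly notes"},
    "file://scan.png": {
        "mimeType": "image/png",
        "blob": base64.b64encode(b"\x89PNG\r\n").decode("ascii"),
    },
}

def read_resource(uri: str) -> dict:
    """Return a resource's contents, or raise if it is not exposed."""
    if uri not in RESOURCES:
        raise KeyError(f"resource not exposed: {uri}")
    return {"uri": uri, **RESOURCES[uri]}
```

Because the host decides what goes into the table, the model can only ever see data the application chose to surface.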

Best Practices for Implementing Resources

  • Use descriptive URIs that reflect resource type and context clearly (e.g., user://current/settings)
  • Provide metadata and MIME types to help the AI interpret the resource correctly (e.g., application/json, image/png)
  • Support dynamic URI templates for common data structures (e.g., db://users/{id}/orders)
  • Cache static or frequently accessed resources to minimize latency and avoid redundant processing
  • Implement pagination or real-time subscriptions for large or streaming datasets
  • Return clear, structured errors and retry suggestions for inaccessible or malformed resources

Security Considerations

  • Validate resource URIs before access to prevent injection or tampering
  • Block directory traversal and URI spoofing through strict path sanitization
  • Enforce access controls and encryption for all sensitive data, particularly in user-facing contexts
  • Minimize unnecessary exposure of sensitive binary data such as identification documents or private media
  • Log and rate-limit access to sensitive or high-volume resources to prevent abuse and ensure compliance

3. Prompts: Structuring AI Interactions

What Are Prompts?

Prompts are predefined templates, instructions, or interface-integrated commands that guide how users or the AI system interact with tools and resources. They serve as structured input mechanisms that encode best practices, common workflows, and reusable queries.

In essence, prompts act as a communication layer between the user, the AI, and the underlying system capabilities. They eliminate ambiguity, ensure consistency, and allow for efficient and intuitive task execution. Whether embedded in a user interface or used internally by the AI, prompts are the scaffolding that organizes how AI functionality is activated in context.

Prompts can take the form of:

  • Suggestive query templates
  • Interactive input fields with placeholders
  • Workflow macros or presets
  • Structured commands within an application interface

By formalizing interaction patterns, prompts help translate user intent into structured operations, unlocking the AI's potential in a way that is transparent, repeatable, and accessible.

Examples of Prompts

Here are a few illustrative examples of prompts used in real-world AI applications:

  • “Show me the {metric} for {product} in the {time_period} region.”
  • “Summarize the contents of {resource_uri}.”
  • “Create a follow-up task for this email.”
  • “Generate a compliance report based on {policy_doc_uri}.”
  • “Find anomalies in {log_file} between {start_time} and {end_time}.”

These prompts can be either static templates with editable fields or dynamically generated based on user activity, current context, or exposed resources.

How Prompts Work

Just like tools and resources, prompts are advertised by the MCP (Model Context Protocol) server. They are made available to both the user interface and the AI agent, depending on the use case.

  • In a user interface, prompts provide a structured, pre-filled way for users to interact with AI functionality. Think of them as smart autocomplete or command templates.
  • Within an AI agent, prompts help organize reasoning paths, guide decision-making, or trigger specific workflows in response to user needs or system events.

Prompts often contain placeholders, such as {resource_uri}, {date_range}, or {user_intent}, which are filled dynamically at runtime. These values can be derived from user input, current application context, or metadata from exposed resources.
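
Filling those placeholders at runtime can be as simple as string formatting. The template below reuses the `{metric}`-style placeholders from the examples above:

```python
# Prompt template in the {placeholder} style shown earlier; values come from
# user input or application context at runtime.
TEMPLATE = "Show me the {metric} for {product} in the {region} region."

def fill_prompt(template: str, **values: str) -> str:
    """Substitute placeholders; raises KeyError if a required value is missing."""
    return template.format(**values)

prompt = fill_prompt(TEMPLATE, metric="revenue", product="Widget A", region="EMEA")
# → "Show me the revenue for Widget A in the EMEA region."
```

The KeyError on a missing value is deliberate: it surfaces an incomplete prompt before it ever reaches the model.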

Why Prompts Are Powerful

Prompts offer several key advantages in making AI interactions more useful, scalable, and reliable:

  • Lower the barrier to entry by giving users ready-made, understandable templates to work with; no need to guess what to type.
  • Accelerate workflows by pre-configuring tasks and minimizing repetitive manual input.
  • Ensure consistent usage of AI capabilities, particularly in team environments or across departments.
  • Provide structure for domain-specific applications, helping AI operate within predefined guardrails or business logic.
  • Improve the quality and predictability of outputs by constraining input format and intent.

Best Practices for Implementing Prompts

When designing and implementing prompts, consider the following best practices to ensure robustness and usability:

  • Use clear and descriptive names for each prompt so users can easily understand its function.
  • Document required arguments and expected input types (e.g., string, date, URI, number) to ensure consistent usage.
  • Build in graceful error handling: if a required value is missing or improperly formatted, provide helpful suggestions or fallback behavior.
  • Support versioning and localization to allow prompts to evolve over time and be adapted for different regions or user groups.
  • Enable modular composition so prompts can be nested, extended, or chained into larger workflows as needed.
  • Continuously test across diverse use cases to ensure prompts work correctly in various scenarios, applications, and data contexts.

Security Considerations

Prompts, like any user-facing or dynamic interface element, must be implemented with care to ensure secure and responsible usage:

  • Sanitize all user-supplied or dynamic arguments to prevent injection attacks or unexpected behavior.
  • Limit the exposure of sensitive resource data or context, particularly when prompts may be visible across shared environments.
  • Apply rate limiting and maintain logs of prompt usage to monitor abuse or performance issues.
  • Guard against prompt injection and spoofing, where malicious actors try to manipulate the AI through crafted inputs.
  • Establish role-based permissions to restrict access to prompts tied to sensitive operations (e.g., financial summaries, administrative tools).

Example Use Case

Imagine a business analytics dashboard integrated with MCP. A prompt such as:

“Generate a sales summary for {region} between {start_date} and {end_date}.”

…can be presented to the user in the UI, pre-filled with defaults or values pulled from recent activity. Once the user selects the inputs, the AI fetches relevant data (via resources like db://sales/records) and invokes a tool (e.g., a report generator) to compile a summary. The prompt acts as the orchestration layer tying these components together in a seamless interaction.

The Synergy: Tools, Resources, and Prompts in Concert

While Tools, Resources, and Prompts are each valuable as standalone constructs, their true potential emerges when they operate in harmony. When thoughtfully integrated, these components form a cohesive, dynamic system that empowers AI agents to perform meaningful tasks, adapt to user intent, and deliver high-value outcomes with precision and context-awareness.

This trio transforms AI from a passive respondent into a proactive collaborator, one that not only understands what needs to be done, but knows how, when, and with what data to do it.

How They Work Together: A Layered Interaction Model

To understand this synergy, let’s walk through a typical workflow where an AI assistant is helping a business user analyze sales trends:

  1. Prompt
    The interaction begins with a structured prompt:
    “Show sales for product X in region Y over the last quarter.”
    This guides the user’s intent and helps the AI parse the request accurately by anchoring it in a known pattern.

  2. Tool
    Behind the scenes, the AI agent uses a predefined tool (e.g., fetch_sales_data(product, region, date_range)) to carry out the request. Tools encapsulate the logic for specific operations—like querying a database, generating a report, or invoking an external API.

  3. Resource
    The result of the tool's execution is a resource: a structured dataset returned in a standardized format, such as:
    data://sales/q1_productX.json.
    This resource is now available to the AI agent for further processing, and may be cached, reused, or referenced in future queries.

  4. Further Interaction
    With the resource in hand, the AI can now:
    • Summarize the findings
    • Visualize the trends using charts or dashboards
    • Compare the current data with historical baselines
    • Recommend follow-up actions, like alerting a sales manager or adjusting inventory forecasts
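
The four steps above can be sketched end to end. Here `fetch_sales_data` is a stand-in for a real tool, and the `data://` resource URI is hypothetical:

```python
# Stand-in tool: a real implementation would query a sales database via MCP.
def fetch_sales_data(product, region, date_range):
    return {
        "uri": f"data://sales/{product}_{region}.json",  # hypothetical URI
        "rows": [{"month": "Jan", "units": 120}, {"month": "Feb", "units": 90}],
    }

def run_workflow(product, region, date_range):
    # 1. Prompt anchors the user's intent in a known pattern.
    prompt = f"Show sales for product {product} in region {region} over {date_range}."
    # 2. Tool carries out the request behind the scenes.
    resource = fetch_sales_data(product, region, date_range)
    # 3. The returned resource grounds further reasoning...
    total = sum(row["units"] for row in resource["rows"])
    # 4. ...which the agent turns into a summary or follow-up action.
    return {"prompt": prompt, "resource": resource["uri"], "total_units": total}

result = run_workflow("X", "EMEA", "last quarter")
```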

Built for AI developers

Enterprise MCP servers that implement all three primitives — out of the box.

Knit's MCP Servers expose HRIS, ATS, and ERP data as Tools, Resources, and Prompts for any MCP-compatible agent — Workday, BambooHR, Greenhouse, NetSuite, and 100+ more. No custom server to build or maintain.

Why This Matters

This multi-layered interaction model allows the AI to function with clarity and control:

  • Tools provide the actionable capabilities, the verbs the AI can use to do real work.
  • Resources deliver the data context, the nouns that represent information, documents, logs, reports, or user assets.
  • Prompts shape the user interaction model, the grammar and structure that link human intent to system functionality.

The result is an AI system that is:

  • Context-aware, because it can reference real-time or historical resources
  • Task-oriented, because it can invoke tools with well-defined operations
  • User-friendly, because it engages with prompts that remove guesswork and ambiguity

This framework scales elegantly across domains, enabling complex workflows in enterprise environments, developer platforms, customer service, education, healthcare, and beyond.

Conclusion: Building the Future with MCP

The Model Context Protocol (MCP) is not just a communication mechanism—it is an architectural philosophy for integrating intelligence across software ecosystems. By rigorously defining and interconnecting Tools, Resources, and Prompts, MCP lays the groundwork for AI systems that are:

  • Modular and Composable: Components can be independently built, reused, and orchestrated into workflows.
  • Secure by Design: Access, execution, and data handling can be governed with fine-grained policies.
  • Contextually Intelligent: Interactions are grounded in live data and operational context, reducing hallucinations and misfires.
  • Operationally Aligned: AI behavior follows best practices and reflects real business processes and domain knowledge.

Next Steps:

See how these components are used in practice in the rest of our MCP series.

FAQs

1. What is MCP architecture?

MCP (Model Context Protocol) architecture is the client-server framework that defines how AI models connect to external data sources and tools. In MCP architecture, an MCP host (the AI application - e.g. Claude Desktop or a custom agent) connects to one or more MCP servers via a standardized protocol. Each MCP server exposes three types of capabilities: tools (functions the AI can call to take actions), resources (data the AI can read for context), and prompts (reusable templates that structure how the AI interacts with that server). The protocol handles capability discovery, request/response formatting, and transport - so any MCP-compatible client can connect to any MCP-compatible server without custom wiring. Knit offers MCP servers, making enterprise data accessible to any MCP-compatible AI agent.

2. What is the difference between MCP tools, resources, and prompts?

The three MCP primitives serve distinct roles. Tools are executable functions — the AI calls a tool to take an action (run a query, write a record, call an API). They are model-controlled: the AI decides when to call them based on the task. Resources are read-only data sources — the AI reads from a resource to get context (a file, a database record, a knowledge base). They are application-controlled: the host decides when to surface them. Prompts are reusable interaction templates — pre-defined workflows or instruction structures that guide how the AI should use the server's tools and resources for a given task. They are user-controlled: exposed to the user as selectable options rather than triggered autonomously by the model.

3. What is the difference between MCP and a regular API?

A regular API requires a client to know exactly what endpoints exist, how to authenticate, what parameters to pass, and how to parse responses - all bespoke per API. MCP adds a discovery and standardization layer on top: an MCP client can connect to any MCP server and automatically discover what tools, resources, and prompts it exposes, without prior knowledge of the server's implementation. For AI agents specifically, this matters because the model can reason about which tools to call based on their descriptions - rather than being hard-coded to call specific endpoints. MCP essentially makes APIs self-describing and AI-native.

4. How does MCP client-server architecture work?

In MCP's client-server architecture, the MCP host (an AI application like Claude or a custom agent framework) contains an MCP client that manages connections to one or more MCP servers. Each server runs as a separate process - either locally or remotely - and exposes its capabilities (tools, resources, prompts) via the MCP protocol. When an AI agent needs to take an action or access data, the client sends a request to the appropriate server using JSON-RPC over the configured transport (stdio for local servers, HTTP/SSE for remote). The server executes the request and returns a structured response. This separation means servers can be built, deployed, and updated independently of the AI application - and a single agent can connect to multiple servers simultaneously, composing capabilities from many sources.

5. How do Tools and Resources complement each other in MCP?
Tools perform actions (e.g., querying a database), while Resources provide the data context (e.g., the query result). Together they enable workflows that are both action-driven and data-grounded.

6. What’s the difference between invoking a Tool and referencing a Resource?
Invoking a Tool is an active request (using tools/call), while referencing a Resource is passive, the AI can access it when made available without explicitly requesting execution.

7. Why are JSON Schemas critical for Tool inputs?
Schemas prevent misuse by enforcing strict formats, ensuring the AI provides valid parameters, and reducing the risk of injection or malformed requests.

8. How can binary Resources (like images or PDFs) be used effectively?
Binary Resources, encoded in base64, can be referenced for tasks like summarizing a report, extracting data from a PDF, or analyzing image inputs.

9. What safeguards are needed when exposing Resources to AI agents?
Developers should sanitize URIs, apply access controls, and minimize exposure of sensitive binary data to prevent leakage or unauthorized access.

10. How do Prompts reduce ambiguity in AI interactions?
Prompts provide structured templates (with placeholders like {resource_uri}), guiding the AI’s reasoning and ensuring consistent execution across workflows.

11. Can Prompts dynamically adapt based on available Resources?
Yes. Prompts can auto-populate fields with context (e.g., a current email body or log file), making AI responses more relevant and personalized.

12. What testing strategies apply specifically to Tools?
Alongside functional testing, Tools require integration tests with MCP servers and backend systems to validate latency, schema handling, and error resilience.

13. How do Tools, Resources, and Prompts work together in a layered workflow?
A Prompt structures intent, a Tool executes the operation, and a Resource provides or captures the data—creating a modular interaction loop.

14. What’s an example of misuse if these elements aren’t implemented carefully?
Without input validation, a Tool could execute a harmful command; without URI checks, a Resource might expose sensitive files; without guardrails, Prompts could be manipulated to trigger unsafe operations.

API Directory
-
Apr 19, 2026

Getting Started with Greenhouse API Integration: Developer's Guide to Integration

In this article, we give a quick overview of popular Greenhouse APIs and key endpoints, answer common FAQs, and walk through generating your Greenhouse API keys and authenticating your requests. We also share links to the documentation you will need to integrate with Greenhouse effectively.

Overview of Greenhouse API

Greenhouse is an applicant tracking software (ATS) and hiring platform that empowers organizations to foster fair and equitable hiring practices. Whether you're a developer looking to integrate Greenhouse into your company's tech stack or an HR professional seeking to streamline your hiring workflows, the Greenhouse API offers a wide range of capabilities.

Let's explore the common Greenhouse APIs, popular endpoints, and how to generate your Greenhouse API keys.

Common Greenhouse APIs

Greenhouse offers eight APIs for different integration needs. Here are the most commonly used:

1. Harvest API

⚠️ Deprecation notice: Harvest v1/v2 is deprecated and will be removed on August 31, 2026. Migrate to Harvest v3 before that date.

The Harvest API is the primary gateway to your Greenhouse data, providing full read and write access to candidates, applications, jobs, interviews, feedback, and offers. Common actions include:

  • updating candidate information
  • adding attachments to candidate profiles
  • merging candidate profiles
  • managing the application process (advancing, hiring, or rejecting candidates)

Harvest v3 endpoints (base: https://harvest.greenhouse.io):

  • GET /v3/applications — list candidate applications
  • PATCH /v3/applications/{id} — update a candidate application
  • GET /v3/candidates — list candidates
  • POST /v3/candidates — create a candidate

Authentication (Harvest v3): Bearer token (JWT) obtained from https://auth.greenhouse.io/token, or OAuth2 (client credentials or authorization code flow). The v1/v2 pattern of HTTP Basic Auth with an API key does not apply to v3.

Pagination (Harvest v3): Cursor-based. Pass the cursor value from the previous response header to retrieve the next page. Returns up to 500 results per page via the per_page parameter.

Harvest v3 API reference →

2. Job Board API

Through the Greenhouse Job Board API, you gain access to a JSON representation of your company's offices, departments, and published job listings. Use it to build custom career pages and department-specific job listing sites.

Key endpoints:

  • GET /boards/{board_token}/jobs - list active job postings
  • POST /boards/{board_token}/jobs/{id} - submit a candidate application

Authentication: GET endpoints require no authentication - job board data is publicly accessible. The POST endpoint (application submission) requires HTTP Basic Auth with a Base64-encoded Job Board API key.
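
As a sketch: the listing endpoint needs only a plain GET, while the application-submission POST carries a Basic Auth header built from the Base64-encoded Job Board API key. The `boards-api.greenhouse.io` host below is the one Greenhouse documents for this API; verify it against the Job Board docs:

```python
import base64

def job_board_jobs_url(board_token: str) -> str:
    """Public jobs listing for a board; no authentication required."""
    return f"https://boards-api.greenhouse.io/v1/boards/{board_token}/jobs"

def basic_auth_header(api_key: str) -> dict:
    """Basic Auth header for the application-submission POST: the Job Board
    API key is the username with an empty password, Base64-encoded."""
    token = base64.b64encode(f"{api_key}:".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}
```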

Job Board API documentation →

3. Assessment API

The Assessment API connects Greenhouse to third-party candidate testing platforms. Use it to send customized assessments (coding tests, take-home assignments, personality tests) to candidates for a particular role, and to push completion status and results back into Greenhouse once the candidate finishes.

Example endpoints (these are implemented and hosted by the testing partner; Greenhouse calls them):

  • GET https://www.testing-partner.com/api/list_tests — list available tests for a candidate
  • GET https://www.testing-partner.com/api/test_status?partner_interview_id=12345 — check the status of a take-home test

Authentication: HTTP Basic Authentication over HTTPS

4. Ingestion API

The Ingestion API allows sourcing partners to push candidate leads into Greenhouse and retrieve job and application status information.

Key endpoints:

  • GET https://api.greenhouse.io/v1/partner/candidates — retrieve data for a particular candidate
  • POST https://api.greenhouse.io/v1/partner/candidates — create one or more candidates
  • GET https://api.greenhouse.io/v1/partner/jobs — retrieve jobs visible to current user

Authentication: OAuth 2.0 and Basic Auth

5. Audit Log API

The Audit Log API provides a structured, queryable record of system activity in your Greenhouse account — useful for compliance auditing, security monitoring, and integration debugging.

Authentication: HTTP Basic Authentication over HTTPS

6. Onboarding API

The Greenhouse Onboarding API allows you to retrieve and update employee data and company information for onboarding workflows. Unlike the other Greenhouse APIs, it uses GraphQL rather than REST: every request is an HTTP POST to a single endpoint, with queries for reads and mutations for writes.

Authentication: HTTP Basic Authentication over HTTPS


How to get a Greenhouse API token?

To make requests to Greenhouse's API, you need an API key. Here are the steps to generate one in Greenhouse:

Step 1: Go to the Greenhouse website and log in to your Greenhouse account using your credentials.

Step 2: Click on the "Configure" tab at the top of the Greenhouse interface.

Step 3: From the sidebar menu under "Configure," select "Dev Center."

Step 4: In the Dev Center, find the "API Credential Management" section.

Step 5: Click on "Create New API Key."

Step 6: Configure your API Key

  1. Select the API type you want to use.
  2. Give your API key a description that helps you identify its purpose or the application it will be used for.
  3. Specify the permissions you want to grant to this API key by clicking on “Manage Permissions”. Greenhouse provides granular control over the data and functionality that can be accessed with this key. You can restrict access to specific parts of the Greenhouse API to enhance security.

Step 7: After configuring the API key, click "Create" to generate the API token. Greenhouse will display the token on screen as a long string of characters.

Step 8: Copy the API token and store it securely. Treat it as sensitive information, and do not expose it in publicly accessible code or repositories.

Important: Be aware that you won't have the ability to copy this API Key again, so ensure you store it securely.

Once you have obtained the API token, include it in the headers of your HTTP requests to authenticate against the Greenhouse API, following Greenhouse's API documentation for your specific integration needs.

Always prioritize the security of your API token to protect your Greenhouse account and data. If the API token is compromised, revoke it and generate a new one through the same process. 

Now, let’s jump in on how to authenticate for using the Greenhouse API.

How to authenticate Greenhouse API?

Harvest API v3 (current)

To authenticate with the Greenhouse API, follow these steps:

Step 1: Harvest v3 uses Bearer token authentication. Obtain a JWT access token by making a POST request to https://auth.greenhouse.io/token using OAuth2 client credentials. Pass the token in the Authorization header:

Authorization: Bearer YOUR_JWT_ACCESS_TOKEN

Step 2: For partner integrations that connect to multiple Greenhouse accounts, Harvest v3 also supports the full OAuth2 authorization code flow. Scopes are granular - for example, harvest:applications:list to read applications and harvest:candidates:create to create candidates.
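A sketch of the client-credentials token request, using only standard OAuth2 form fields (the client ID, secret, and exact scope strings are placeholders; confirm the parameter names against the Harvest v3 auth docs):

```python
from urllib.parse import urlencode

TOKEN_URL = "https://auth.greenhouse.io/token"

# Standard OAuth2 client-credentials form body; scope names follow the
# harvest:<resource>:<action> pattern described above.
payload = {
    "grant_type": "client_credentials",
    "client_id": "YOUR_CLIENT_ID",
    "client_secret": "YOUR_CLIENT_SECRET",
    "scope": "harvest:applications:list harvest:candidates:create",
}
body = urlencode(payload)
# POST `body` to TOKEN_URL with Content-Type: application/x-www-form-urlencoded,
# then send the returned access_token as: Authorization: Bearer <token>
print(body)
```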

Harvest API v1/v2 (deprecated - removed August 31, 2026)

The legacy Harvest v1/v2 used HTTP Basic Auth. The API key was passed as the username with the password left blank. In practice, most HTTP clients handle this when you set the username to your API key and leave the password empty:

curl -u "YOUR_API_KEY:" https://harvest.greenhouse.io/v1/applications

If you are currently using v1/v2 Basic Auth, you must migrate to Harvest v3 token-based auth before August 31, 2026. Refer to the Harvest v3 migration guide for the updated auth flow.

Auth for Job Board API

GET endpoints require no authentication. The POST endpoint (submitting applications) requires HTTP Basic Auth with a Base64-encoded Job Board API key as the username.

Auth for Ingestion and Assessment APIs

Both use HTTP Basic Authentication over HTTPS. These APIs are designed for Greenhouse technology partners and require enrollment in the Greenhouse Partner Program.

Common FAQs on Greenhouse API

Check out some of the top FAQs for Greenhouse API to scale your integration process:

1. Is there pagination support for large datasets?

Yes, many API endpoints that return a collection of results support pagination.
When results are paginated, the response includes a Link header (per RFC 5988) with the following relations:

  • "next": This corresponds to the URL leading to the next page.
  • "prev": This corresponds to the URL leading to the previous page.
  • "last": This corresponds to the URL leading to the last page.

When this header is not present, it means there is only a single page of results, which is the first page.
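A small parser for that Link header (a sketch; the example URLs are illustrative):

```python
import re

def parse_link_header(value: str) -> dict:
    """Parse an RFC 5988 Link header into a {rel: url} mapping."""
    links = {}
    for part in value.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="([^"]+)"', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

link = ('<https://harvest.greenhouse.io/v1/candidates?page=2>; rel="next", '
        '<https://harvest.greenhouse.io/v1/candidates?page=9>; rel="last"')
print(parse_link_header(link)["next"])
```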

2. Are there rate limits for API requests?

Yes, Greenhouse imposes rate limits on API requests to ensure fair usage. The limit applies per 10-second window and is reported in the `X-RateLimit-Limit` header.
If you exceed it, the API responds with an HTTP 429 error. To monitor how many requests remain before throttling occurs, check the `X-RateLimit-Limit` and `X-RateLimit-Remaining` headers.
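A common way to handle 429s is exponential backoff; the sketch below stubs out the HTTP call to keep it self-contained:

```python
import time

def with_retry(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on HTTP 429, backing off exponentially between tries.

    `call()` returns (status, payload); the names are illustrative.
    """
    for attempt in range(max_attempts):
        status, payload = call()
        if status != 429:
            return payload
        time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError("rate limit: retries exhausted")

# Stub: the first two calls are throttled, the third succeeds.
responses = iter([(429, None), (429, None), (200, {"ok": True})])
result = with_retry(lambda: next(responses), base_delay=0.01)
print(result)  # {'ok': True}
```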

3. Does Greenhouse have a sandbox?

Yes, Greenhouse provides a sandbox that enables you to conduct testing and simulations effectively.

The sandbox is created as a blank canvas where you can manually input fictitious data, such as mock job listings, candidate profiles, or organizational information.

Refer to Greenhouse's sandbox documentation for more info.

4. What are the challenges of building Greenhouse API integration?

Building Greenhouse API integration on your own can be challenging, especially for a team with limited engineering resources. For example,

  • It can take from four weeks up to three months to fully plan, build, and deploy a Greenhouse integration, and ongoing maintenance and monitoring of the API can take ~10 hours each week - time that could be invested into improving your core product features.
  • The data models of Greenhouse API may not match the data models from other ATS applications (if you are building multiple ATS integrations). In that case, you need to spend additional bandwidth decoding and understanding nuances of Greenhouse API which again is not very productive use of in-house engineering resources.
  • You need to design and implement workflows to serve specific use cases of your customers using Greenhouse APIs.
  • The API is updated frequently, so you need to track changes continuously to make sure your user experience is not diminished.
  • You need to handle integration issues manually as and when they arise.

5. When should I consider integrating with Greenhouse API?

Here are some of the common Greenhouse API use cases that would help you evaluate your integration need:

  • Access job listings from 1000+ job boards with recommendations based on historical trends
  • Create scorecard of key attributes for fair and consistent evaluation
  • Automate candidate-experience surveys and enable candidates to self-schedule interviews
  • Access 40+ reports and dashboard snapshots
  • Automated tasks and reminders for smooth onboarding
  • Structured process for paperwork
  • Easy reports to understand onboarding trends

Ready to build?

If you want to quickly implement your Greenhouse API integration but don’t want to deal with authentication, authorization, rate limiting or integration maintenance, consider choosing a unified API like Knit.

Knit helps you integrate with 30+ ATS and HR applications, including Greenhouse, with just a single unified API. It brings down your integration building time from 3 months to a few hours.

Plus, Knit takes care of all the authentication, monitoring, and error handling that comes with building Greenhouse integration, thus saving you an additional 10 hours each week.

Ready to scale? Book a quick call with one of our experts or get your Knit API keys today. (Getting started is completely free)
API Directory
-
Apr 19, 2026

HubSpot API Integration Guide: CRM, Contacts, Deals & OAuth (2026) | Knit

HubSpot is a cloud-based software platform designed to facilitate business growth by offering an integrated suite of tools for marketing, sales, customer service, and customer relationship management (CRM). Known for its user-friendly interface and robust integration capabilities, HubSpot provides businesses with the resources needed to enhance their operations and customer interactions. The platform is particularly popular among companies focusing on digital marketing and customer engagement strategies, making it a versatile solution for businesses of all sizes and industries.

HubSpot's comprehensive offerings include the Marketing Hub, which aids businesses in attracting visitors, converting leads, and closing customers through features like email marketing, social media management, and SEO analytics. The Sales Hub empowers sales teams to manage pipelines and automate tasks efficiently, while the Service Hub focuses on improving customer satisfaction with tools for ticketing and feedback management. Additionally, HubSpot's CRM offers a centralized database for tracking and nurturing leads, and the CMS Hub provides an intuitive content management system for website creation and optimization.

Key highlights of HubSpot APIs

  1. Easy Data Access: The HubSpot API facilitates seamless access to data, enhancing accuracy and efficiency in business operations.
  2. Automation: Supports marketing automation, sales enablement, and customer service, streamlining processes and increasing productivity.
  3. Custom Integration: Enables seamless data exchange between HubSpot and other systems, improving overall productivity.
  4. Real-Time Sync: Utilizes webhooks for prompt notifications of changes, ensuring up-to-date data synchronization.
  5. Scalable: Capable of handling up to 150 requests per second, accommodating growing business needs.
  6. Developer-Friendly: Extensive documentation and support make it accessible for developers to implement and manage.
  7. Global Support: Offers assistance across different regions, in line with HubSpot's international presence.
  8. Error Handling and Logging: Standard features included to manage and troubleshoot integration issues effectively.
  9. Rate Limiting: Ensures fair usage and prevents abuse by limiting the number of requests per second.
  10. Version Control: Maintains stability and compatibility across different API versions.
  11. Data Transformation: Managed within integration logic to suit specific business requirements.
  12. Webhook Support: Allows for event subscriptions, enhancing real-time data handling capabilities.
  13. Detailed Analytics and Reporting: Comprehensive data insights to aid decision-making.
  14. Sandbox Environment: A 60-day testing environment is available for development and experimentation.

HubSpot API Endpoints

Imports

  • POST https://api.hubapi.com/crm/v3/imports/ : Start a New Import
  • GET https://api.hubapi.com/crm/v3/imports/ : Get Active Imports
  • GET https://api.hubapi.com/crm/v3/imports/{importId} : Get Import Information
  • POST https://api.hubapi.com/crm/v3/imports/{importId}/cancel : Cancel an Active Import
  • GET https://api.hubapi.com/crm/v3/imports/{importId}/errors : Get Import Errors

Note (2026): HubSpot introduced date-based API versioning with the 2026-03 release. New integrations should use the date-versioned endpoint format (e.g. /crm/objects/2026-03/contacts) instead of /crm/v3/. Legacy v3 and v4 paths continue to work until their end-of-life date — check the HubSpot developer changelog for the deprecation timeline. As of this writing, the /v4/ endpoints are expected to keep working until March 2027.
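Starting an import is a multipart POST that carries the CSV file plus an importRequest JSON part describing the column mapping. A minimal sketch of that JSON (field names follow HubSpot's import docs; verify against the current reference, and note that "0-1" is the object type ID for contacts):

```python
import json

# importRequest JSON sent alongside the CSV in the multipart body of
# POST /crm/v3/imports/
import_request = {
    "name": "contacts import",
    "files": [{
        "fileName": "contacts.csv",
        "fileFormat": "CSV",
        "fileImportPage": {
            "hasHeader": True,
            "columnMappings": [
                {"columnObjectTypeId": "0-1",  # 0-1 = contacts
                 "columnName": "Email", "propertyName": "email"},
                {"columnObjectTypeId": "0-1",
                 "columnName": "First Name", "propertyName": "firstname"},
            ],
        },
    }],
}
body = json.dumps(import_request)
print(body[:40])
```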

CRM Object Schemas

  • GET https://api.hubapi.com/crm-object-schemas/v3/schemas : Get All CRM Object Schemas
  • DELETE https://api.hubapi.com/crm-object-schemas/v3/schemas/{objectType} : Delete a Schema
  • PATCH https://api.hubapi.com/crm-object-schemas/v3/schemas/{objectType} : Update Object Schema
  • POST https://api.hubapi.com/crm-object-schemas/v3/schemas/{objectType}/associations : Create an Association Between Object Types
  • DELETE https://api.hubapi.com/crm-object-schemas/v3/schemas/{objectType}/associations/{associationIdentifier} : Remove an Association from a Schema
  • DELETE https://api.hubapi.com/crm-object-schemas/v3/schemas/{objectType}/purge : Delete CRM Object Schema

Exports

  • POST https://api.hubapi.com/crm/v3/exports/export/async : Start an Export of CRM Data
  • GET https://api.hubapi.com/crm/v3/exports/export/async/tasks/{taskId}/status : Get Export Task Status

Calling Extensions

  • POST https://api.hubapi.com/crm/v3/extensions/calling/recordings/ready : Mark Recording as Ready for Transcription
  • POST https://api.hubapi.com/crm/v3/extensions/calling/{appId}/settings : Configure a Calling Extension
  • PATCH https://api.hubapi.com/crm/v3/extensions/calling/{appId}/settings/recording : Update Calling App's Recording Settings

Cards

  • GET https://api.hubapi.com/crm/v3/extensions/cards-dev/sample-response : Get Sample Card Detail Response
  • GET https://api.hubapi.com/crm/v3/extensions/cards-dev/{appId} : Get All Cards for a Given App
  • DELETE https://api.hubapi.com/crm/v3/extensions/cards-dev/{appId}/{cardId} : Delete Card Definition

Video Conferencing

  • DELETE https://api.hubapi.com/crm/v3/extensions/videoconferencing/settings/{appId} : Delete Video Conference Application Settings

Lists

  • POST https://api.hubapi.com/crm/v3/lists/ : Create List
  • GET https://api.hubapi.com/crm/v3/lists/folders : Retrieve Folder with Child Nodes
  • PUT https://api.hubapi.com/crm/v3/lists/folders/move-list : Move List to Folder
  • DELETE https://api.hubapi.com/crm/v3/lists/folders/{folderId} : Delete CRM Folder
  • PUT https://api.hubapi.com/crm/v3/lists/folders/{folderId}/move/{newParentFolderId} : Move a Folder in CRM
  • PUT https://api.hubapi.com/crm/v3/lists/folders/{folderId}/rename : Rename a Folder in CRM
  • GET https://api.hubapi.com/crm/v3/lists/idmapping : Translate Legacy List Id to Modern List Id
  • GET https://api.hubapi.com/crm/v3/lists/object-type-id/{objectTypeId}/name/{listName} : Fetch List by Name
  • GET https://api.hubapi.com/crm/v3/lists/records/{objectTypeId}/{recordId}/memberships : Get Lists Record is Member Of
  • POST https://api.hubapi.com/crm/v3/lists/search : Search Lists by Name or Page Through All Lists
  • GET https://api.hubapi.com/crm/v3/lists/{listId} : Fetch List by ID
  • DELETE https://api.hubapi.com/crm/v3/lists/{listId}/memberships : Delete All Records from a List
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/memberships/add : Add Records to a List
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/memberships/add-and-remove : Add and/or Remove Records from a List
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/memberships/add-from/{sourceListId} : Add All Records from a Source List to a Destination List
  • GET https://api.hubapi.com/crm/v3/lists/{listId}/memberships/join-order : Fetch List Memberships Ordered by Added to List Date
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/memberships/remove : Remove Records from a List
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/restore : Restore a Deleted List
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/update-list-filters : Update List Filter Definition
  • PUT https://api.hubapi.com/crm/v3/lists/{listId}/update-list-name : Update List Name
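Adding records to a list, for example, is a single PUT with a JSON array of record IDs in the body (a sketch; the list ID and record IDs below are illustrative, and the body shape should be confirmed against the v3 lists reference):

```python
import json

LIST_ID = 123  # illustrative list ID
url = f"https://api.hubapi.com/crm/v3/lists/{LIST_ID}/memberships/add"

# Body is a JSON array of CRM record IDs.
record_ids = ["101", "102", "103"]
body = json.dumps(record_ids)

# PUT `body` to `url` with headers:
#   Authorization: Bearer <private app token>
#   Content-Type: application/json
print(url)
```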

CRM Objects

  • GET https://api.hubapi.com/crm/v3/objects/calls : Read a Page of Calls
  • POST https://api.hubapi.com/crm/v3/objects/calls/batch/archive : Archive a Batch of Calls by ID
  • POST https://api.hubapi.com/crm/v3/objects/calls/batch/create : Create a Batch of Calls
  • POST https://api.hubapi.com/crm/v3/objects/calls/batch/read : Read a Batch of Calls by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/calls/batch/update : Update a Batch of Calls by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/calls/batch/upsert : Create or Update a Batch of Calls by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/calls/search : Search Calls in CRM
  • PATCH https://api.hubapi.com/crm/v3/objects/calls/{callId} : Partial Update of CRM Call Object
  • GET https://api.hubapi.com/crm/v3/objects/carts : Read a Page of Carts
  • POST https://api.hubapi.com/crm/v3/objects/carts/batch/archive : Archive a Batch of Carts by ID
  • POST https://api.hubapi.com/crm/v3/objects/carts/batch/create : Create a Batch of Carts
  • POST https://api.hubapi.com/crm/v3/objects/carts/batch/read : Read a Batch of Carts by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/carts/batch/update : Update a Batch of Carts by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/carts/batch/upsert : Create or Update a Batch of Carts by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/carts/search : Search Carts in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/carts/{cartId} : Delete Cart Object
  • GET https://api.hubapi.com/crm/v3/objects/commerce_payments : List Commerce Payments
  • POST https://api.hubapi.com/crm/v3/objects/commerce_payments/batch/read : Read a Batch of Commerce Payments by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/commerce_payments/search : Search Commerce Payments
  • GET https://api.hubapi.com/crm/v3/objects/commerce_payments/{commercePaymentId} : Get Commerce Payment Details
  • POST https://api.hubapi.com/crm/v3/objects/communications : Create a Communication
  • POST https://api.hubapi.com/crm/v3/objects/communications/batch/archive : Archive a Batch of Communications by ID
  • POST https://api.hubapi.com/crm/v3/objects/communications/batch/create : Create a Batch of Communications
  • POST https://api.hubapi.com/crm/v3/objects/communications/batch/read : Read a Batch of Communications by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/communications/batch/update : Update a Batch of Communications by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/communications/batch/upsert : Create or Update a Batch of Communications by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/communications/search : Search Communications in CRM
  • PATCH https://api.hubapi.com/crm/v3/objects/communications/{communicationId} : Partial Update of Communication Object
  • POST https://api.hubapi.com/crm/v3/objects/companies : Create a Company
  • POST https://api.hubapi.com/crm/v3/objects/companies/batch/archive : Archive a Batch of Companies by ID
  • POST https://api.hubapi.com/crm/v3/objects/companies/batch/create : Create a Batch of Companies
  • POST https://api.hubapi.com/crm/v3/objects/companies/batch/read : Read a Batch of Companies by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/companies/batch/update : Update a Batch of Companies by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/companies/batch/upsert : Create or Update a Batch of Companies by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/companies/merge : Merge Two Companies with Same Type
  • POST https://api.hubapi.com/crm/v3/objects/companies/search : Search Companies in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/companies/{companyId} : Delete a Company Object
  • POST https://api.hubapi.com/crm/v3/objects/contacts : Create a Contact in HubSpot CRM
  • POST https://api.hubapi.com/crm/v3/objects/contacts/batch/archive : Archive a Batch of Contacts by ID
  • POST https://api.hubapi.com/crm/v3/objects/contacts/batch/create : Create a Batch of Contacts
  • POST https://api.hubapi.com/crm/v3/objects/contacts/batch/read : Read a Batch of Contacts by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/contacts/batch/update : Update a Batch of Contacts by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/contacts/batch/upsert : Create or Update a Batch of Contacts by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/contacts/gdpr-delete : GDPR Delete Contact
  • POST https://api.hubapi.com/crm/v3/objects/contacts/merge : Merge Two Contacts with Same Type
  • POST https://api.hubapi.com/crm/v3/objects/contacts/search : Search Contacts in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/contacts/{contactId} : Delete Contact by ID
  • GET https://api.hubapi.com/crm/v3/objects/deals : Read a Page of Deals
  • POST https://api.hubapi.com/crm/v3/objects/deals/batch/archive : Archive a Batch of Deals by ID
  • POST https://api.hubapi.com/crm/v3/objects/deals/batch/create : Create a Batch of Deals
  • POST https://api.hubapi.com/crm/v3/objects/deals/batch/read : Read a Batch of Deals by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/deals/batch/update : Update a Batch of Deals by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/deals/batch/upsert : Create or Update a Batch of Deals by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/deals/merge : Merge Two Deals with Same Type
  • POST https://api.hubapi.com/crm/v3/objects/deals/search : Search Deals in CRM
  • POST https://api.hubapi.com/crm/v3/objects/deals/splits/batch/read : Read a Batch of Deal Split Objects by Deal Object Internal ID
  • POST https://api.hubapi.com/crm/v3/objects/deals/splits/batch/upsert : Create or Replace Deal Splits for Deals
  • DELETE https://api.hubapi.com/crm/v3/objects/deals/{dealId} : Delete a Deal Object
  • POST https://api.hubapi.com/crm/v3/objects/discounts : Create a Discount
  • POST https://api.hubapi.com/crm/v3/objects/discounts/batch/archive : Archive a Batch of Discounts by ID
  • POST https://api.hubapi.com/crm/v3/objects/discounts/batch/create : Create a Batch of Discounts
  • POST https://api.hubapi.com/crm/v3/objects/discounts/batch/read : Read a Batch of Discounts by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/discounts/batch/update : Update a Batch of Discounts by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/discounts/batch/upsert : Create or Update a Batch of Discounts by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/discounts/search : Search Discounts in CRM
  • PATCH https://api.hubapi.com/crm/v3/objects/discounts/{discountId} : Partial Update of Discount Object
  • GET https://api.hubapi.com/crm/v3/objects/emails : Read a Page of Emails
  • POST https://api.hubapi.com/crm/v3/objects/emails/batch/archive : Archive a Batch of Emails by ID
  • POST https://api.hubapi.com/crm/v3/objects/emails/batch/create : Create a Batch of Emails
  • POST https://api.hubapi.com/crm/v3/objects/emails/batch/read : Read a Batch of Emails by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/emails/batch/update : Update a Batch of Emails by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/emails/batch/upsert : Create or Update a Batch of Emails by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/emails/search : Search Emails in CRM
  • GET https://api.hubapi.com/crm/v3/objects/emails/{emailId} : Get Email Object Details
  • GET https://api.hubapi.com/crm/v3/objects/feedback_submissions : Get Feedback Submissions
  • POST https://api.hubapi.com/crm/v3/objects/feedback_submissions/batch/read : Read a Batch of Feedback Submissions by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/feedback_submissions/search : Search Feedback Submissions
  • GET https://api.hubapi.com/crm/v3/objects/feedback_submissions/{feedbackSubmissionId} : Get Feedback Submission Details
  • POST https://api.hubapi.com/crm/v3/objects/fees : Create a Fee with Given Properties
  • POST https://api.hubapi.com/crm/v3/objects/fees/batch/archive : Archive a Batch of Fees by ID
  • POST https://api.hubapi.com/crm/v3/objects/fees/batch/create : Create a Batch of Fees
  • POST https://api.hubapi.com/crm/v3/objects/fees/batch/read : Read a Batch of Fees by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/fees/batch/update : Update a Batch of Fees by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/fees/batch/upsert : Create or Update a Batch of Fees by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/fees/search : Search Fees in CRM
  • GET https://api.hubapi.com/crm/v3/objects/fees/{feeId} : Get Fee Object Details
  • GET https://api.hubapi.com/crm/v3/objects/goal_targets : Read a Page of Goal Targets
  • POST https://api.hubapi.com/crm/v3/objects/goal_targets/batch/read : Read a Batch of Goal Targets by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/goal_targets/search : Search Goal Targets
  • GET https://api.hubapi.com/crm/v3/objects/goal_targets/{goalTargetId} : Read Goal Target Object
  • GET https://api.hubapi.com/crm/v3/objects/invoices : Read a Page of Invoices
  • POST https://api.hubapi.com/crm/v3/objects/invoices/batch/read : Read a Batch of Invoices by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/invoices/search : Search Invoices in CRM
  • GET https://api.hubapi.com/crm/v3/objects/invoices/{invoiceId} : Get Invoice Details
  • POST https://api.hubapi.com/crm/v3/objects/leads : Create a Lead
  • POST https://api.hubapi.com/crm/v3/objects/leads/batch/archive : Archive a Batch of Leads by ID
  • POST https://api.hubapi.com/crm/v3/objects/leads/batch/create : Create a Batch of Leads
  • POST https://api.hubapi.com/crm/v3/objects/leads/batch/read : Read a Batch of Leads by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/leads/batch/update : Update a Batch of Leads by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/leads/batch/upsert : Create or Update a Batch of Leads by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/leads/search : Search Leads in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/leads/{leadsId} : Delete Lead Object
  • POST https://api.hubapi.com/crm/v3/objects/line_items : Create Line Item
  • POST https://api.hubapi.com/crm/v3/objects/line_items/batch/archive : Archive a Batch of Line Items by ID
  • POST https://api.hubapi.com/crm/v3/objects/line_items/batch/create : Create a Batch of Line Items
  • POST https://api.hubapi.com/crm/v3/objects/line_items/batch/read : Read a Batch of Line Items by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/line_items/batch/update : Update a Batch of Line Items by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/line_items/batch/upsert : Create or Update a Batch of Line Items by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/line_items/search : Search Line Items in CRM
  • PATCH https://api.hubapi.com/crm/v3/objects/line_items/{lineItemId} : Partial Update of Line Item Object
  • GET https://api.hubapi.com/crm/v3/objects/meetings : Read a Page of Meetings
  • POST https://api.hubapi.com/crm/v3/objects/meetings/batch/archive : Archive a Batch of Meetings by ID
  • POST https://api.hubapi.com/crm/v3/objects/meetings/batch/create : Create a Batch of Meetings
  • POST https://api.hubapi.com/crm/v3/objects/meetings/batch/read : Read a Batch of Meetings by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/meetings/batch/update : Update a Batch of Meetings by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/meetings/batch/upsert : Create or Update a Batch of Meetings by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/meetings/search : Search Meetings in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/meetings/{meetingId} : Archive Meeting Object
  • POST https://api.hubapi.com/crm/v3/objects/notes : Create a Note in HubSpot CRM
  • POST https://api.hubapi.com/crm/v3/objects/notes/batch/archive : Archive a Batch of Notes by ID
  • POST https://api.hubapi.com/crm/v3/objects/notes/batch/create : Create a Batch of Notes
  • POST https://api.hubapi.com/crm/v3/objects/notes/batch/read : Read a Batch of Notes by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/notes/batch/update : Update a Batch of Notes by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/notes/batch/upsert : Create or Update a Batch of Notes by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/notes/search : Search CRM Notes
  • PATCH https://api.hubapi.com/crm/v3/objects/notes/{noteId} : Partial Update of CRM Note Object
  • GET https://api.hubapi.com/crm/v3/objects/orders : Read a Page of Orders
  • POST https://api.hubapi.com/crm/v3/objects/orders/batch/archive : Archive a Batch of Orders by ID
  • POST https://api.hubapi.com/crm/v3/objects/orders/batch/create : Create a Batch of Orders
  • POST https://api.hubapi.com/crm/v3/objects/orders/batch/read : Read a Batch of Orders by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/orders/batch/update : Update a Batch of Orders by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/orders/batch/upsert : Create or Update a Batch of Orders by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/orders/search : Search Orders in CRM
  • GET https://api.hubapi.com/crm/v3/objects/orders/{orderId} : Read Order Object by ID
  • GET https://api.hubapi.com/crm/v3/objects/postal_mail : Read a Page of Postal Mail
  • POST https://api.hubapi.com/crm/v3/objects/postal_mail/batch/archive : Archive a Batch of Postal Mail by ID
  • POST https://api.hubapi.com/crm/v3/objects/postal_mail/batch/create : Create a Batch of Postal Mail
  • POST https://api.hubapi.com/crm/v3/objects/postal_mail/batch/read : Read a Batch of Postal Mail by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/postal_mail/batch/update : Update a Batch of Postal Mail by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/postal_mail/batch/upsert : Create or Update a Batch of Postal Mail by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/postal_mail/search : Search Postal Mail Objects in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/postal_mail/{postalMailId} : Delete Postal Mail Object
  • POST https://api.hubapi.com/crm/v3/objects/products : Create a Product
  • POST https://api.hubapi.com/crm/v3/objects/products/batch/archive : Archive a Batch of Products by ID
  • POST https://api.hubapi.com/crm/v3/objects/products/batch/create : Create a Batch of Products
  • POST https://api.hubapi.com/crm/v3/objects/products/batch/read : Read a Batch of Products by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/products/batch/update : Update a Batch of Products by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/products/batch/upsert : Create or Update a Batch of Products by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/products/search : Search Products in CRM
  • GET https://api.hubapi.com/crm/v3/objects/products/{productId} : Get Product Details by Product ID
  • GET https://api.hubapi.com/crm/v3/objects/quotes : Read a Page of Quotes
  • POST https://api.hubapi.com/crm/v3/objects/quotes/batch/archive : Archive a Batch of Quotes by ID
  • POST https://api.hubapi.com/crm/v3/objects/quotes/batch/create : Create a Batch of Quotes
  • POST https://api.hubapi.com/crm/v3/objects/quotes/batch/read : Read a Batch of Quotes by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/quotes/batch/update : Update a Batch of Quotes by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/quotes/batch/upsert : Create or Update a Batch of Quotes by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/quotes/search : Search Quotes in CRM
  • GET https://api.hubapi.com/crm/v3/objects/quotes/{quoteId} : Get Quote Details by Quote ID
  • GET https://api.hubapi.com/crm/v3/objects/subscriptions : Read a Page of Subscriptions
  • POST https://api.hubapi.com/crm/v3/objects/subscriptions/batch/read : Read a Batch of Subscriptions by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/subscriptions/search : Search Subscriptions in CRM
  • GET https://api.hubapi.com/crm/v3/objects/subscriptions/{subscriptionId} : Get Subscription Details
  • GET https://api.hubapi.com/crm/v3/objects/tasks : Read a Page of Tasks
  • POST https://api.hubapi.com/crm/v3/objects/tasks/batch/archive : Archive a Batch of Tasks by ID
  • POST https://api.hubapi.com/crm/v3/objects/tasks/batch/create : Create a Batch of Tasks
  • POST https://api.hubapi.com/crm/v3/objects/tasks/batch/read : Read a Batch of Tasks by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/tasks/batch/update : Update a Batch of Tasks by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/tasks/batch/upsert : Create or Update a Batch of Tasks by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/tasks/search : Search CRM Tasks
  • DELETE https://api.hubapi.com/crm/v3/objects/tasks/{taskId} : Delete Task Object
  • GET https://api.hubapi.com/crm/v3/objects/taxes : Read a Page of Taxes
  • POST https://api.hubapi.com/crm/v3/objects/taxes/batch/archive : Archive a Batch of Taxes by ID
  • POST https://api.hubapi.com/crm/v3/objects/taxes/batch/create : Create a Batch of Taxes
  • POST https://api.hubapi.com/crm/v3/objects/taxes/batch/read : Read a Batch of Taxes by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/taxes/batch/update : Update a Batch of Taxes by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/taxes/batch/upsert : Create or Update a Batch of Taxes by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/taxes/search : Search Taxes in CRM
  • DELETE https://api.hubapi.com/crm/v3/objects/taxes/{taxId} : Delete Tax Object
  • GET https://api.hubapi.com/crm/v3/objects/tickets : Read a Page of Tickets
  • POST https://api.hubapi.com/crm/v3/objects/tickets/batch/archive : Archive a Batch of Tickets by ID
  • POST https://api.hubapi.com/crm/v3/objects/tickets/batch/create : Create a Batch of Tickets
  • POST https://api.hubapi.com/crm/v3/objects/tickets/batch/read : Read a Batch of Tickets by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/tickets/batch/update : Update a Batch of Tickets by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/tickets/batch/upsert : Create or Update a Batch of Tickets by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/tickets/merge : Merge Two Tickets with Same Type
  • POST https://api.hubapi.com/crm/v3/objects/tickets/search : Search CRM Tickets
  • GET https://api.hubapi.com/crm/v3/objects/tickets/{ticketId} : Get Ticket Details by Ticket ID
  • GET https://api.hubapi.com/crm/v3/objects/{objectType} : Read a Page of CRM Objects
  • POST https://api.hubapi.com/crm/v3/objects/{objectType}/batch/archive : Archive a Batch of CRM Objects by ID
  • POST https://api.hubapi.com/crm/v3/objects/{objectType}/batch/create : Create a Batch of CRM Objects
  • POST https://api.hubapi.com/crm/v3/objects/{objectType}/batch/read : Read a Batch of CRM Objects by Internal ID or Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/{objectType}/batch/update : Update a Batch of CRM Objects
  • POST https://api.hubapi.com/crm/v3/objects/{objectType}/batch/upsert : Create or Update a Batch of CRM Objects by Unique Property Values
  • POST https://api.hubapi.com/crm/v3/objects/{objectType}/search : Search CRM Objects
  • PATCH https://api.hubapi.com/crm/v3/objects/{objectType}/{objectId} : Partial Update of CRM Object

CRM Owners

  • GET https://api.hubapi.com/crm/v3/owners/ : Get a Page of CRM Owners
  • GET https://api.hubapi.com/crm/v3/owners/{ownerId} : Read an Owner by Given ID or UserID

Pipelines

  • GET https://api.hubapi.com/crm/v3/pipelines/{objectType} : Retrieve All Pipelines for a Specified Object Type
  • PUT https://api.hubapi.com/crm/v3/pipelines/{objectType}/{pipelineId} : Replace a Pipeline
  • GET https://api.hubapi.com/crm/v3/pipelines/{objectType}/{pipelineId}/audit : Get Pipeline Audit
  • GET https://api.hubapi.com/crm/v3/pipelines/{objectType}/{pipelineId}/stages : Get Pipeline Stages
  • DELETE https://api.hubapi.com/crm/v3/pipelines/{objectType}/{pipelineId}/stages/{stageId} : Delete a Pipeline Stage
  • GET https://api.hubapi.com/crm/v3/pipelines/{objectType}/{pipelineId}/stages/{stageId}/audit : Get Audit of Pipeline Stage Changes

Properties

  • POST https://api.hubapi.com/crm/v3/properties/{objectType} : Create a Property for a Specified Object Type
  • POST https://api.hubapi.com/crm/v3/properties/{objectType}/batch/archive : Archive a Batch of Properties
  • POST https://api.hubapi.com/crm/v3/properties/{objectType}/batch/create : Create a Batch of Properties
  • POST https://api.hubapi.com/crm/v3/properties/{objectType}/batch/read : Read a Batch of Properties
  • GET https://api.hubapi.com/crm/v3/properties/{objectType}/groups : Read All Property Groups for Specified Object Type
  • PATCH https://api.hubapi.com/crm/v3/properties/{objectType}/groups/{groupName} : Update a Property Group
  • DELETE https://api.hubapi.com/crm/v3/properties/{objectType}/{propertyName} : Archive a Property

Associations

  • GET https://api.hubapi.com/crm/v4/associations/definitions/configurations/all : Get All User Configurations
  • GET https://api.hubapi.com/crm/v4/associations/definitions/configurations/{fromObjectType}/{toObjectType} : Get User Configurations on Association Definitions
  • POST https://api.hubapi.com/crm/v4/associations/definitions/configurations/{fromObjectType}/{toObjectType}/batch/create : Batch Create User Configurations Between Two Object Types
  • POST https://api.hubapi.com/crm/v4/associations/definitions/configurations/{fromObjectType}/{toObjectType}/batch/purge : Batch Delete User Configurations Between Two Object Types
  • POST https://api.hubapi.com/crm/v4/associations/definitions/configurations/{fromObjectType}/{toObjectType}/batch/update : Batch Update User Configurations Between Two Object Types
  • POST https://api.hubapi.com/crm/v4/associations/usage/high-usage-report/{userId} : Generate High Usage Report for User
  • POST https://api.hubapi.com/crm/v4/associations/{fromObjectType}/{toObjectType}/batch/associate/default : Create Default Associations
  • POST https://api.hubapi.com/crm/v4/associations/{fromObjectType}/{toObjectType}/batch/labels/archive : Delete Specific Association Labels
  • POST https://api.hubapi.com/crm/v4/associations/{fromObjectType}/{toObjectType}/batch/read : Batch Read Associations for CRM Objects
  • GET https://api.hubapi.com/crm/v4/associations/{fromObjectType}/{toObjectType}/labels : Get Association Types Between Object Types
  • DELETE https://api.hubapi.com/crm/v4/associations/{fromObjectType}/{toObjectType}/labels/{associationTypeId} : Delete Association Definition
  • PUT https://api.hubapi.com/crm/v4/objects/{fromObjectType}/{fromObjectId}/associations/default/{toObjectType}/{toObjectId} : Create Default Association Between Two Object Types
  • GET https://api.hubapi.com/crm/v4/objects/{objectType}/{objectId}/associations/{toObjectType} : List All Associations of an Object by Object Type
  • DELETE https://api.hubapi.com/crm/v4/objects/{objectType}/{objectId}/associations/{toObjectType}/{toObjectId} : Delete Associations Between Two Records

Timeline Events

  • POST https://api.hubapi.com/integrators/timeline/v3/events : Create a Single Timeline Event
  • POST https://api.hubapi.com/integrators/timeline/v3/events/batch/create : Create Multiple Timeline Events
  • GET https://api.hubapi.com/integrators/timeline/v3/events/{eventTemplateId}/{eventId} : Get Event Details
  • GET https://api.hubapi.com/integrators/timeline/v3/events/{eventTemplateId}/{eventId}/detail : Get Event Detail Template Rendered
  • GET https://api.hubapi.com/integrators/timeline/v3/events/{eventTemplateId}/{eventId}/render : Render Event Template as HTML
  • POST https://api.hubapi.com/integrators/timeline/v3/{appId}/event-templates : Create Event Template for App
  • DELETE https://api.hubapi.com/integrators/timeline/v3/{appId}/event-templates/{eventTemplateId} : Delete Event Template for App
  • POST https://api.hubapi.com/integrators/timeline/v3/{appId}/event-templates/{eventTemplateId}/tokens : Add Token to Event Template
  • DELETE https://api.hubapi.com/integrators/timeline/v3/{appId}/event-templates/{eventTemplateId}/tokens/{tokenName} : Remove Token from Event Template

HubSpot API FAQs

What is the HubSpot API?

The HubSpot API is a set of REST APIs that allow developers to read and write data in HubSpot's CRM, Marketing, Sales, and Service Hubs. Knit provides a unified CRM API that normalizes HubSpot's data models alongside Salesforce, Pipedrive, and other CRMs — so teams building multi-CRM integrations write once rather than implementing each CRM's API separately. Through the API you can create and update contacts, companies, deals, and tickets; trigger workflows; send emails; manage pipelines; and subscribe to real-time events via webhooks.

How do I get access to the HubSpot API?

  • The recommended way to access the HubSpot API is via a private app:
    • Go to Settings → Integrations → Private Apps in your HubSpot account
    • Create a private app and configure the OAuth scopes for the CRM objects your integration needs
    • Copy the generated access token and pass it as a Bearer token in your request headers: Authorization: Bearer YOUR_ACCESS_TOKEN
    • For integrations connecting to multiple HubSpot accounts, use a public app with OAuth 2.0 instead
    • Note: the legacy API key method is deprecated — private apps are the current standard
    • Knit handles HubSpot authentication on your behalf, so you connect once and skip the OAuth implementation entirely
  • Source: Private Apps | HubSpot Developers
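Under those assumptions, a minimal Python sketch of an authenticated request to the contacts endpoint — the token value is a placeholder, and `build_request` is just an illustrative helper, not part of any HubSpot SDK:

```python
import urllib.request

# Minimal sketch: build a request to the CRM contacts endpoint using a
# private-app access token passed as a Bearer token (placeholder below).
def build_request(token: str, limit: int = 10) -> urllib.request.Request:
    url = f"https://api.hubapi.com/crm/v3/objects/contacts?limit={limit}"
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
    })

req = build_request("YOUR_ACCESS_TOKEN")
# Send with urllib.request.urlopen(req) once a real token is in place.
```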
What is the HubSpot API versioning change in 2026?

  • HubSpot introduced date-based API versioning with the 2026-03 release, replacing the previous numeric scheme (v1, v2, v3, v4):
    • New endpoint format: /api-name/2026-03/resource — for example, GET /crm/objects/2026-03/contacts
    • Each date version has an 18-month support window before end-of-life
    • New versions release on a fixed March/September cadence
    • Legacy /crm/v3/ and /crm/v4/ paths continue to work until their end-of-life date — no forced migration yet
    • For new integrations, use the date-versioned endpoints from the start
    • Knit updates its normalization layer when HubSpot releases new versions, so integrations built on Knit don't require code changes
  • Source: 2026-03 API Reference | HubSpot Developers

How do I authenticate with the HubSpot API?

  • Answer: HubSpot offers multiple authentication methods for its API:
    • Private Apps: Generate a personal access token within your HubSpot account. This method is recommended for most integrations.
    • OAuth: Use OAuth 2.0 for applications that require user authorization.
    • API Key: HubSpot has deprecated API keys in favor of private apps; transition existing integrations to private apps.
  • Source: HubSpot API Authentication

What are HubSpot API rate limits?

  • Answer: HubSpot enforces the following rate limits:
    • Private apps and OAuth apps: 100 requests per 10 seconds (per app per account)
    • Exceeding the limit returns a 429 Too Many Requests response — use the Retry-After header value to back off
    • For bulk operations, use HubSpot's batch API endpoints (batch create, batch update, batch read) to reduce call volume
    • Prefer webhooks over polling for real-time data to avoid unnecessary API calls
    • Knit manages rate limiting automatically when syncing CRM data, queuing and retrying requests within HubSpot's limits
  • Source: Usage Details & Limits | HubSpot Developers
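The back-off guidance above can be sketched as a small retry loop. Here `call` is a hypothetical stand-in for any HubSpot request function returning a status code, headers, and body — not a HubSpot SDK function:

```python
import time

# Hedged sketch: retry a request when HubSpot returns 429, sleeping for
# the number of seconds given in the Retry-After response header.
# `call` is a hypothetical stand-in returning (status_code, headers, body).
def with_backoff(call, max_retries: int = 3):
    for _ in range(max_retries):
        status, headers, body = call()
        if status != 429:
            return status, body
        # Default to a 1-second pause if the header is missing.
        time.sleep(float(headers.get("Retry-After", "1")))
    raise RuntimeError("still rate limited after retries")
```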

How can I retrieve all contacts using the HubSpot API?

  • Answer: To retrieve all contacts, use the CRM API's contacts endpoint:
    • Send a GET request to /crm/v3/objects/contacts.
    • Paginate through large result sets by passing the after cursor returned in each response's paging.next.after field.
    • You can also specify which fields to include in the response using the properties parameter.
  • Source: HubSpot CRM API | Contacts
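That cursor loop can be sketched as follows, with `fetch_page` a hypothetical stand-in for the actual GET call:

```python
# Sketch of HubSpot v3 cursor pagination: follow paging.next.after until
# the response carries no further cursor.
# `fetch_page(after)` is a hypothetical stand-in for
# GET /crm/v3/objects/contacts?after=...
def iter_contacts(fetch_page):
    after = None
    while True:
        page = fetch_page(after)
        yield from page.get("results", [])
        nxt = page.get("paging", {}).get("next")
        if not nxt:
            break
        after = nxt["after"]
```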

Does the HubSpot API support webhooks for real-time data updates?

  • Answer: Yes, HubSpot supports webhooks through its app subscriptions feature:
    • Subscribe to create, update, delete, and property change events for contacts, companies, deals, tickets, and other CRM objects
    • HubSpot sends an HTTP POST with a JSON payload to your configured endpoint when the subscribed event fires
    • Webhook subscriptions require a public app — they are not available on private apps
    • Verify incoming webhook payloads using the X-HubSpot-Signature header
    • Knit delivers normalized HubSpot webhook events alongside events from other connected CRMs, so you maintain one webhook listener across all platforms
  • Source: Webhooks | HubSpot Developers
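For the signature check, one of HubSpot's schemes (v1) hashes the app's client secret concatenated with the raw request body. A sketch under that assumption — confirm which signature version your app uses in HubSpot's webhook documentation before relying on this:

```python
import hashlib
import hmac

# Hedged sketch of a v1-style X-HubSpot-Signature check: SHA-256 over
# (client secret + raw request body), hex-encoded. HubSpot also offers
# newer HMAC-based signature versions; verify which one your app uses.
def verify_v1(client_secret: str, raw_body: bytes, signature: str) -> bool:
    expected = hashlib.sha256(client_secret.encode() + raw_body).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature)
```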

What is a HubSpot private app?

  • Answer: A HubSpot private app is the current recommended authentication method for single-account integrations, replacing the deprecated API key:
    • Created under Settings → Integrations → Private Apps
    • Generates a long-lived access token scoped to the CRM objects and permissions you select at creation time
    • No redirect URI or OAuth flow required — the token is available immediately in the HubSpot UI
    • Available on all HubSpot plans, including free
    • For integrations connecting to multiple HubSpot customer accounts, use a public app with OAuth 2.0 instead
    • Knit supports private app token connections for direct integrations, in addition to its standard OAuth flow
  • Source: Private Apps | HubSpot Developers

Can I create custom objects in HubSpot via the API?

  • Answer: Yes, HubSpot's CRM API allows you to define and manage custom objects, so you can tailor HubSpot's data structure to fit your business needs.
  • Source: HubSpot CRM API | Custom Objects

Get Started with HubSpot API Integration

For quick and seamless integration with the HubSpot API, Knit offers a convenient solution. Its AI-powered integration platform lets you build any HubSpot API integration use case. By integrating with Knit just once, you can also connect to multiple other CRMs, HRIS, accounting, and other systems through one unified approach. Knit takes care of authentication, authorization, and ongoing integration maintenance, which saves time and ensures a smooth, reliable connection to the HubSpot API.

To sign up for free, click here. To check the pricing, see our pricing page.

API Directory
-
Apr 19, 2026

Lever ATS API: Postings, Candidates & OAuth Guide (2026) | Knit

Lever is a talent acquisition platform that helps companies simplify and improve their hiring process. With tools for tracking applicants and managing relationships, Lever makes it easy for teams to attract, engage, and hire the best talent. Its user-friendly design and smart features help companies of all sizes make better hiring decisions while improving the candidate experience.

Lever also offers APIs that allow businesses to integrate the platform with their existing systems. These APIs automate tasks like syncing candidate data and managing job postings, making the hiring process more efficient and customizable.

Key highlights of Lever APIs are as follows:

1. Seamless Integration: Easily connects with existing HR systems, CRMs, and other tools to streamline recruitment workflows.
2. Automation: Automates tasks such as syncing opportunity and candidate data, managing job postings, and updating applicant status in real time.
3. Custom Endpoints: Provides flexible endpoints for opportunities, postings, interviews, and users, allowing for tailored solutions.
4. Real-time Data: Offers real-time updates, ensuring your recruitment process stays up to date with minimal manual effort.
5. Well-documented: Comes with comprehensive documentation to help developers quickly build and maintain custom integrations.
6. Scalable: Supports businesses of all sizes, from startups to enterprises, helping them automate and improve their hiring processes.

This article will provide an overview of the Lever API endpoints. These endpoints enable businesses to build custom solutions, automate workflows, and streamline HR operations.

Lever API Endpoints

Here are the most commonly used API endpoints in the latest version. All endpoints are accessed via the base URL https://api.lever.co/v1 and use Basic Auth: your API key is the username, and the password field is left blank. List endpoints use cursor-based pagination, returning a next token and a hasNext boolean in each response.

⚠️ Note: The /candidates/ endpoint path is deprecated. Use /opportunities/ for all candidate and application data — this has been the current standard since 2020.
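Both conventions can be sketched in a few lines of Python. `fetch` is a hypothetical stand-in for the HTTP call against a list endpoint; how the next token is passed back (commonly as a query parameter) should be confirmed against Lever's pagination docs:

```python
import base64

# Basic Auth per the note above: API key as the username, blank password.
def lever_auth_header(api_key: str) -> str:
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return f"Basic {token}"

# Cursor pagination sketch: `fetch(cursor)` is a hypothetical stand-in for
# a GET on a list endpoint; it returns the parsed JSON response page.
def iter_items(fetch):
    cursor = None
    while True:
        page = fetch(cursor)
        yield from page.get("data", [])
        if not page.get("hasNext"):
            break
        cursor = page["next"]
```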

    Opportunities

    • POST /opportunities : The 'Create an opportunity' API endpoint allows integrations to create candidates and opportunities in a Lever account. It supports both JSON and multipart/form-data requests, enabling the inclusion of candidate information and resume files. The API deduplicates candidates based on email and links new opportunities to existing contacts if a match is found. It accepts various parameters such as perform_as, parse, and perform_as_posting_owner, and fields like name, headline, stage, location, phones, emails, links, tags, sources, origin, owner, followers, resumeFile, files, postings, createdAt, archived, and contact. The response includes the opportunity ID, status, and a message indicating the result of the request.
    • POST /opportunities/:opportunity/addLinks : The 'Update Contact Links by Opportunity' API allows users to add or remove links associated with a contact through a specified opportunity. The API requires the opportunity ID as a path parameter and accepts an optional 'perform_as' parameter to perform the update on behalf of a specified user. The body of the request must include an array of links to be added or removed. The response includes a status and a message indicating the result of the operation.
    • POST /opportunities/:opportunity/addSources : The 'Update Opportunity Sources' API allows users to add or remove sources from a specified opportunity. The API requires the opportunity ID as a path parameter and an array of sources in the request body. Optionally, the request can include a 'perform_as' header to specify the user on whose behalf the operation is performed. The response indicates whether the operation was successful and provides a descriptive message.
    • POST /opportunities/:opportunity/addTags : The Update Opportunity Tags API allows users to add or remove tags from a specified opportunity. The API requires the opportunity ID as a path parameter. Users can optionally specify a 'perform_as' parameter to perform the update on behalf of another user. The request body must include an array of tags to be added or removed. The response indicates whether the operation was successful and provides a message detailing the result.
    • PUT /opportunities/:opportunity/archived : This API updates the archived state of an Opportunity. If the Opportunity is already archived, the archive reason can be changed, or it can be unarchived by specifying null as the reason. If the Opportunity is active, it will be archived with the provided reason. The API accepts an optional 'perform_as' header to perform the update on behalf of a specified user. The request body requires a 'reason' for archiving, and optionally accepts 'cleanInterviews' to remove pending interviews and 'requisitionId' to hire a candidate against a specific requisition. The response includes the status of the operation, a message, and details of the updated opportunity.
    • POST /opportunities/:opportunity/files : The 'Upload a Single File' API endpoint allows users to upload a file to a specified opportunity. The request must be made using the POST method to the '/opportunities/:opportunity/files' URL, where ':opportunity' is the ID of the opportunity. The request must include a 'perform_as' query parameter to specify the user on whose behalf the upload is performed. The file must be included in the request body as a binary file with a maximum size of 30MB, and the request must be of type 'multipart/form-data'. Upon successful upload, the API returns a response containing details of the uploaded file, including its ID, download URL, extension, name, upload timestamp, status, and size. In case of an error, such as a timeout, an error response with a code and message is returned.
    • DELETE /opportunities/:opportunity/files/:file : The 'Delete a file' API endpoint allows for the deletion of a specified file associated with an opportunity. The request must include the 'perform_as' header to specify the user on whose behalf the operation is performed. The endpoint requires the 'opportunity' and 'file' path parameters to identify the specific file to be deleted. An optional 'offset' query parameter can be used to skip a number of items before starting to collect the result set for pagination. Upon successful deletion, the API returns a 204 No Content response, indicating that the file has been successfully removed. The response includes details of the user who created the file. Note that the endpoint via /candidates/ is deprecated, and it is recommended to use the /opportunities/ endpoint for this operation.
    • PUT /opportunities/:opportunity/stage : The 'Update Opportunity Stage' API allows users to change the current stage of a specified opportunity. The endpoint requires the opportunity's unique identifier as a path parameter. Optionally, the update can be performed on behalf of another user by providing their identifier in the 'perform_as' query parameter. The request body must include the new stage ID for the opportunity. The response indicates whether the update was successful and provides a message with additional information.
    • GET /opportunities/deleted : The 'List Deleted Opportunities' API endpoint allows users to retrieve a list of all deleted opportunities from their Lever account. The endpoint supports filtering by the timestamp of deletion using the optional query parameters 'deleted_at_start' and 'deleted_at_end'. If 'deleted_at_start' is provided, the API returns opportunities deleted from that timestamp onwards. If 'deleted_at_end' is provided, it returns opportunities deleted up to that timestamp. The response includes an array of deleted opportunities, each with a unique 'opportunity_id' and the 'deleted_at' timestamp.
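Drawing on the fields listed for POST /opportunities above, an illustrative JSON body might look like the following — every value here is made up, and only a subset of the accepted fields is shown:

```python
import json

# Hypothetical payload for POST /opportunities (JSON variant); field names
# come from the endpoint description above, values are illustrative only.
payload = {
    "name": "Jane Doe",
    "headline": "Backend Engineer",
    "emails": ["jane@example.com"],
    "tags": ["engineering"],
    "origin": "sourced",
}
body = json.dumps(payload)
```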

    Archive Reasons

    • GET https://api.lever.co/v1/archive_reasons : The 'List all archive reasons' API endpoint allows users to retrieve a list of all archive reasons from their Lever account. The endpoint supports filtering by the type of archive reason, which can be either 'hired' or 'non-hired'. The request requires basic authentication using an API key. The response includes a list of archive reasons, each with an ID, text description, status (active or inactive), and type (hired or non-hired).
    • GET https://api.lever.co/v1/archive_reasons/:archive_reason : The 'Retrieve a Single Archive Reason' API endpoint allows users to fetch details of a specific archive reason by its unique identifier. The request requires an API key for authentication and the archive reason ID as a path parameter. The response includes the archive reason's ID, description, status, and type.

    Audit Events

    • GET https://api.lever.co/v1/audit_events : The 'List all audit events' API retrieves all audit events from your Lever account, sorted in descending chronological order. It supports filtering by event type, user ID, target type, target ID, and creation timestamps. The response includes detailed information about each event, such as the event ID, creation time, type, user details, target details, and additional metadata.

    Contacts

    • GET https://api.lever.co/v1/contacts/:contact : The 'Retrieve a Single Contact' API endpoint allows users to fetch details of a specific contact using their unique identifier. The request requires an API key for authentication, which should be included in the headers. The contact ID is specified as a path parameter. The response includes detailed information about the contact, such as their name, headline, location, email addresses, and phone numbers.

    EEO Responses

    • GET https://api.lever.co/v1/eeo/responses : The 'Retrieve anonymous EEO responses' API endpoint allows users to list anonymous Equal Employment Opportunity (EEO) responses. The API requires an API key for authentication and supports optional query parameters 'fromDate' and 'toDate' to filter the responses by date range. The response includes details such as application timestamps, current stage, gender, race, veteran status, and disability information of the applicants.
    • GET https://api.lever.co/v1/eeo/responses/pii : This API endpoint retrieves EEO (Equal Employment Opportunity) responses with Personally Identifiable Information (PII) from the Lever platform. It supports optional query parameters such as 'expand' to expand user IDs and posting ID into full objects, 'created_at_start' to specify the start date for retrieving data, and 'created_at_end' to specify the end date. The response includes detailed information about each EEO response, such as application dates, current stage, contact information, gender, race, veteran status, disability status, and more.

    Feedback Templates

    • POST https://api.lever.co/v1/feedback_templates : This API endpoint allows the creation of a feedback template for an account. The request must include the name of the feedback template, instructions, the group UID, and an array of fields. Each field can have various types such as date, currency, multiple-select, and score-system, among others. The fields array must include at least one field of type score-system. The response returns the created feedback template with its unique ID, name, instructions, group details, fields, and timestamps for creation and last update.
    • DELETE https://api.lever.co/v1/feedback_templates/:feedback_template : The 'Delete a feedback template' API endpoint allows users to delete a specific feedback template associated with their account. This endpoint requires the unique identifier of the feedback template to be specified in the path parameters. Only templates that were created via the 'Create a feedback template' endpoint can be deleted using this endpoint. The request must be authenticated using an API key provided in the headers. Upon successful deletion, the API returns no content.

    Form Templates

    • GET https://api.lever.co/v1/form_templates : This API endpoint retrieves all active profile form templates for an account. It includes all form data such as instructions and fields. The response can be customized to include only specific attributes like id, text, and group using the 'include' query parameter. The API requires an API key for authentication. The response contains details about each form template, including its id, creation and update timestamps, text, instructions, group information, whether it is secret by default, and the fields it contains. Each field has its own id, type, text, description, and whether it is required.
    • GET https://api.lever.co/v1/form_templates/:form_template : The Retrieve a Profile Form Template API endpoint allows users to fetch a single profile form template by its unique identifier. This endpoint is useful for obtaining a reference template when creating a new profile form. The request requires an API key for authentication and the form template ID as a path parameter. The response includes details such as the template's creation and update timestamps, text description, instructions, group information, and fields with their respective types, descriptions, and requirement status.

    Opportunities

    • GET https://api.lever.co/v1/opportunities : The 'List all opportunities' API endpoint retrieves all pipeline opportunities for contacts in your Lever account. It supports various optional query parameters to filter the results, such as 'include', 'expand', 'tag', 'email', 'origin', 'source', 'confidentiality', 'stage_id', 'posting_id', 'archived_posting_id', 'created_at_start', 'created_at_end', 'updated_at_start', 'updated_at_end', 'advanced_at_start', 'advanced_at_end', 'archived_at_start', 'archived_at_end', 'archived', 'archive_reason_id', 'snoozed', and 'contact_id'. The response includes detailed information about each opportunity, such as 'id', 'name', 'headline', 'contact', 'emails', 'phones', 'confidentiality', 'location', 'links', 'createdAt', 'updatedAt', 'lastInteractionAt', 'lastAdvancedAt', 'snoozedUntil', 'archivedAt', 'archiveReason', 'stage', 'stageChanges', 'owner', 'tags', 'sources', 'origin', 'sourcedBy', 'applications', 'resume', 'followers', 'urls', 'dataProtection', and 'isAnonymized'.
    • GET https://api.lever.co/v1/opportunities/:opportunity : The 'Retrieve a Single Opportunity' API endpoint allows users to fetch detailed information about a specific opportunity using its unique identifier. The request requires basic authentication using an API key and the opportunity ID as a path parameter. The response includes comprehensive details about the opportunity, such as contact information, confidentiality status, location, associated tags, sources, and data protection settings. It also provides timestamps for creation, updates, interactions, and stage changes, along with URLs for candidate lists and specific candidate views.
    • POST https://api.lever.co/v1/opportunities/:opportunity/feedback : This API endpoint allows the creation of a feedback form for a specific opportunity. The request requires the opportunity ID as a path parameter and the user ID to perform the action on behalf of as a query parameter. The request body must include the base template ID and can optionally include panel and interview IDs, field values, and timestamps for creation and completion. The response returns the details of the created feedback form, including its fields, associated panel and interview, and timestamps.
    • PUT https://api.lever.co/v1/opportunities/:opportunity/feedback/:feedback : The Update Feedback API allows you to update a feedback form for a specific opportunity. The endpoint requires the opportunity ID and feedback ID as path parameters. You can perform the update on behalf of a specified user by providing the 'perform_as' query parameter. The request body can include a 'completedAt' timestamp and an array of 'fieldValues' to update specific fields in the feedback form. The response includes detailed information about the updated feedback, including account ID, user ID, profile ID, and the fields with their respective values and options.
    • GET https://api.lever.co/v1/opportunities/:opportunity/files : The 'List all files for an opportunity' API endpoint allows users to retrieve a list of all files associated with a specific opportunity. The endpoint requires an API key for authentication and the unique identifier of the opportunity as a path parameter. Users can optionally filter the files by their upload timestamp using the 'uploaded_at_start' and 'uploaded_at_end' query parameters. The response includes an array of file objects, each containing details such as the file ID, download URL, extension, name, upload timestamp, processing status, and size.
    • GET https://api.lever.co/v1/opportunities/:opportunity/files/:file : The Retrieve a Single File API endpoint allows users to retrieve metadata for a specific file associated with an opportunity. The endpoint requires the opportunity ID and file ID as path parameters and uses an API key for authentication. The response includes details such as the file ID, download URL, file extension, name, upload timestamp, status, and size. Note that the endpoint via /candidates/ is deprecated, and users should use the /opportunities/ endpoint for the same response.
    • GET https://api.lever.co/v1/opportunities/:opportunity/files/:file/download : The 'Download a file' API endpoint allows users to download a file associated with a specific opportunity in Lever. The endpoint requires the opportunity ID and file ID as path parameters, and an API key for authentication in the headers. The response includes the file's binary content and headers indicating the file type and disposition. Note that if the file could not be processed correctly by Lever, a 422 Unprocessable Entity status will be returned.
    • GET https://api.lever.co/v1/opportunities/:opportunity/forms : This API endpoint retrieves all profile forms associated with a specific opportunity for a candidate. The request requires an API key for authentication and the unique identifier of the opportunity as a path parameter. The response includes details of the form such as its ID, type, text, instructions, base template ID, fields, user, stage, and timestamps for creation, completion, and deletion. Each field in the form has attributes like type, text, description, requirement status, ID, value, and currency if applicable.
    • GET https://api.lever.co/v1/opportunities/:opportunity/forms/:form : The Retrieve a Profile Form API endpoint allows you to retrieve a specific profile form associated with an opportunity. The endpoint requires the opportunity ID and form ID as path parameters. The response includes detailed information about the form, such as its ID, type, text, instructions, base template ID, fields, user ID, stage ID, and timestamps for when the form was completed, created, and deleted. The fields array contains various types of fields, each with its own properties like type, text, description, required status, ID, value, and currency if applicable.
    • GET https://api.lever.co/v1/opportunities/:opportunity/interviews : The 'List all interviews for an opportunity' API endpoint retrieves all interview events associated with a specific opportunity. The endpoint requires an API key for authentication and the unique identifier of the opportunity as a path parameter. The response includes detailed information about each interview, such as the interview ID, panel, subject, notes, interviewers, timezone, creation date, interview date, duration, location, feedback templates, feedback forms, feedback reminders, user who scheduled the interview, stage, cancellation timestamp, related postings, and Google Calendar event URL. This endpoint is useful for obtaining a comprehensive list of interviews for a given opportunity.
    • PUT https://api.lever.co/v1/opportunities/:opportunity/interviews/:interview : The 'Update an Interview' API endpoint allows users to update an existing interview associated with a specific opportunity. This endpoint requires the entire interview object to be present in the PUT request, as missing fields will be deleted. It is important to note that this endpoint cannot be used to update interviews created within the Lever application; only interviews within panels where 'externallyManaged' is true can be updated. The request must include the 'perform_as' query parameter to specify the user on whose behalf the update is performed. The request body should contain details such as the panel ID, subject, note, interviewers, date, duration, location, feedback template, and feedback reminder. The response includes the updated interview details, including the interview ID, panel, subject, note, interviewers, timezone, creation date, duration, location, feedback template, feedback forms, feedback reminder, user, stage, and postings.
    • GET https://api.lever.co/v1/opportunities/:opportunity/notes : The 'List all notes' API endpoint retrieves all notes associated with a specific opportunity. The endpoint is accessed via a GET request to '/opportunities/:opportunity/notes', where ':opportunity' is the unique identifier for the opportunity. The request requires an API key for authentication. The response includes an array of notes, each with details such as the note ID, text, fields, user, and timestamps for creation, completion, and deletion. The 'hasNext' boolean indicates if there are more notes to fetch. Note that the endpoint via '/candidates/' is deprecated.
    • GET https://api.lever.co/v1/opportunities/:opportunity/notes/:note : The Retrieve a Single Note API endpoint allows users to fetch a specific note associated with an opportunity using the opportunity and note identifiers. The request requires an API key for authentication and the unique identifiers for both the opportunity and the note as path parameters. The response includes detailed information about the note, such as its text content, associated fields, user information, and timestamps for creation, completion, and deletion. This endpoint is part of the Lever API and is used to access notes related to opportunities.
    • DELETE https://api.lever.co/v1/opportunities/:opportunity/notes/:noteId : The 'Delete a Note' API endpoint allows users to delete a note associated with a specific opportunity. This endpoint is restricted to notes created via the API and cannot delete notes created within the Lever application. The request requires the opportunity ID and note ID as path parameters, and the API key for authentication in the headers. A successful deletion will result in a 204 No Content response, indicating that the note has been successfully deleted.
    • GET https://api.lever.co/v1/opportunities/:opportunity/offers : The 'List all offers' API endpoint retrieves all offers associated with a specific opportunity. The endpoint is accessed via a GET request to '/opportunities/:opportunity/offers'. The request requires an API key for authentication, passed in the Authorization header. The 'opportunity' path parameter is mandatory and specifies the unique identifier for the opportunity. An optional query parameter 'expand' can be used to expand the creator ID into a full object in the response. The response includes an array of offer objects, each containing details such as the offer ID, creation timestamp, status, creator ID, and various fields related to the offer. Additionally, information about sent and signed documents is provided, including file names, upload timestamps, and download URLs. The response also indicates if there are more offers to fetch with the 'hasNext' boolean.
    • GET https://api.lever.co/v1/opportunities/:opportunity/offers/:offer/download : The 'Download Offer File' API allows users to download a specific version of an offer file associated with an opportunity. The endpoint requires the opportunity and offer identifiers as path parameters. An optional query parameter 'status' can be used to specify whether to download the 'sent' or 'signed' version of the offer file. If no status is provided, the most recent document is returned. The API requires an API key for authentication. The response includes a 'downloadUrl' from which the offer file can be downloaded.
    • POST https://api.lever.co/v1/opportunities/:opportunity/panels : The 'Create a Panel' API endpoint allows users to create a panel and associate it with a specific opportunity. The endpoint requires the opportunity ID as a path parameter and the 'perform_as' user ID as a query parameter. The request body must include the timezone and a non-empty array of interview objects. Optional fields include applications, feedback reminders, notes, and an external URL. The response returns details of the created panel, including its ID, associated applications, interviews, and other metadata.
    • DELETE https://api.lever.co/v1/opportunities/:opportunity/panels/:panel : This API endpoint deletes a panel associated with a specific opportunity. Only panels with 'externallyManaged' set to true can be deleted via this API. The request requires the opportunity ID and panel ID as path parameters, and a 'perform_as' query parameter to specify the user on whose behalf the delete operation is performed. The API key must be included in the headers for authentication. The endpoint returns a 204 No Content status upon successful deletion. Note that the endpoint via /candidates/ is deprecated, and the /opportunities/ endpoint should be used instead.
    • GET https://api.lever.co/v1/opportunities/:opportunity/referrals : The 'List all referrals for a candidate for an Opportunity' API endpoint retrieves all referrals associated with a specific opportunity. The endpoint is accessed via a GET request to '/opportunities/:opportunity/referrals', where ':opportunity' is the unique identifier for the opportunity. The request requires an API key for authentication, provided in the headers. The response includes a list of referral objects, each containing details such as the referral ID, type, text, instructions, fields (including referrer name, relationship, and comments), base template ID, user ID, referrer ID, stage, and timestamps for creation and completion. The response also indicates if there are more referrals to fetch with the 'hasNext' boolean.
    • GET https://api.lever.co/v1/opportunities/:opportunity/referrals/:referral : The 'Retrieve a Single Referral' API endpoint allows users to fetch details of a specific referral associated with a given opportunity. The endpoint requires the opportunity ID and referral ID as path parameters. The response includes detailed information about the referral, such as the referral ID, type, text, instructions, fields (including referrer name, relationship, and comments), base template ID, user ID, referrer ID, stage ID, and timestamps for creation and completion. This endpoint is authenticated using an API key.
    • GET https://api.lever.co/v1/opportunities/:opportunity/resumes : The 'List all resumes for an opportunity' API endpoint allows users to retrieve all resumes associated with a specific opportunity. The endpoint is accessed via a GET request to the URL 'https://api.lever.co/v1/opportunities/:opportunity/resumes', where ':opportunity' is the unique identifier for the opportunity. The request requires an 'Authorization' header with a Basic authentication API key. Optional query parameters 'uploaded_at_start' and 'uploaded_at_end' can be used to filter resumes by their upload timestamps. The response includes a list of resumes, each with details such as the resume ID, creation timestamp, file information (name, extension, download URL, upload timestamp, status, and size), and parsed data including positions and educational background. The endpoint is designed to provide comprehensive resume data for a given opportunity.
    • GET https://api.lever.co/v1/opportunities/:opportunity/resumes/:resume : The 'Retrieve a Single Resume' API endpoint allows users to retrieve metadata for a specific resume associated with an opportunity. The endpoint requires the opportunity ID and resume ID as path parameters. The response includes details such as the resume's creation timestamp, file information (name, extension, download URL, upload timestamp, status, and size), and parsed data including positions and educational background. This endpoint is accessed via a GET request and requires an API key for authentication. Note that the endpoint via /candidates/ is deprecated, and users should use the /opportunities/ endpoint for the same response.
    • GET https://api.lever.co/v1/opportunities/:opportunity/resumes/:resume/download : The 'Download a resume file' API allows users to download a resume file associated with a specific opportunity if it exists. The endpoint requires the opportunity ID and resume ID as path parameters. It uses basic authentication with an API key. The response includes the binary content of the resume file, typically in PDF format, along with headers indicating the content type and disposition. If the file cannot be processed, a 422 Unprocessable Entity status is returned. Note that the endpoint via /candidates/ is deprecated, and users should use the /opportunities/ endpoint instead.

    Postings

    • POST https://api.lever.co/v1/postings : The 'Create a Posting' API endpoint allows integrations to create job postings in a Lever account. This API does not trigger the approvals chain, but postings can be created as drafts and later go through approvals within Lever Hire. The API accepts requests in JSON format and requires a 'perform_as' query parameter to specify the user on whose behalf the posting is created. The request body includes details such as the job title, state, distribution channels, owner, hiring manager, categories, tags, content, workplace type, and requisition codes. The response returns the created posting's details, including its ID, title, timestamps, user information, categories, content, distribution channels, salary details, state, tags, URLs, workplace type, and requisition codes.
    • GET https://api.lever.co/v1/postings/:posting : The 'Retrieve a Single Posting' API endpoint allows users to fetch the details of a specific job posting, including the job description and associated metadata. The endpoint requires the unique posting ID as a path parameter and optionally accepts a 'distribution' query parameter to specify whether to return internal or external custom application questions. The response includes detailed information about the job posting, such as its title, creation and update timestamps, user and owner details, confidentiality level, categories like team and department, job content including descriptions and lists, location, state, distribution channels, requisition codes, salary details, and URLs for listing, viewing, and applying for the job. The API is authenticated using an API key.
    • GET https://api.lever.co/v1/postings/:posting/apply : The Retrieve Posting Application Questions API endpoint provides a list of questions included in a job posting's application form. It indicates whether each field is required. The API is accessed via a GET request to the specified URL with the posting ID as a path parameter. The request requires an API key for authentication. The response includes custom questions, EEO questions, personal information fields, and URLs related to the application. Custom questions may include fields like previous work experience and favorite programming language. EEO questions cover topics such as gender, race, veteran status, and disability, with options for each. Personal information fields include full name, email, current company, phone, resume, and additional information. URLs may include LinkedIn, other websites, and GitHub profiles. Note that collecting disability information is only allowed for US contractors, and EEO questions may not be saved if legally restricted.

    Requisition Fields

    • POST https://api.lever.co/v1/requisition_fields : The 'Create a requisition field' API allows users to create a set of requisition field schemas that can be used across their account for any requisition. The API accepts POST requests with a JSON body containing the field identifier, human-readable field name, field type, and optionally, whether the field is required. Fields can be of type 'number', 'text', 'date', 'object', or 'dropdown'. For 'object' type fields, subfields must be specified, and for 'dropdown' type fields, options must be provided. The response includes the created field's details, including its identifier, name, type, and any subfields or options.
    • PUT https://api.lever.co/v1/requisition_fields/:requisition_field : The 'Update a requisition field' API allows users to update an existing requisition field by sending a PUT request to the specified endpoint. The request must include the full requisition_field object with updated properties in the request body. Any properties not included will be considered deleted. The API requires a Content-Type header set to 'application/json' and an API key for authentication. The response returns the updated requisition field data.
    • POST https://api.lever.co/v1/requisition_fields/:requisition_field/options : The 'Updating Dropdown Fields' API allows users to append, update, or delete options in a dropdown field without needing to replace the entire object. The API supports POST, PUT, and DELETE methods. The POST method is used to add new options to the dropdown. The request requires a 'Content-Type' header set to 'application/json' and an 'Authorization' header with the API key. The path parameter ':requisition_field' specifies the dropdown field to be updated. The request body contains an array of 'values', each with a 'text' property representing the dropdown option. The response returns the 'id' and 'text' of the newly created option.

    Requisitions

    • GET https://api.lever.co/v1/requisitions : The 'List all requisitions' API endpoint allows users to retrieve a list of requisitions from the Lever platform. Users can filter the requisitions based on various query parameters such as 'created_at_start', 'created_at_end', 'requisition_code', 'status', and 'confidentiality'. The API requires an API key for authentication. The response includes detailed information about each requisition, including its ID, requisition code, name, confidentiality status, creation timestamp, creator ID, headcount details, status, hiring manager, owner, compensation band, employment status, location, internal notes, postings, department, team, offer IDs, approval details, custom fields, and timestamps for closure and updates.
    • GET https://api.lever.co/v1/requisitions/:requisition : The 'Retrieve a Single Requisition' API endpoint allows users to fetch detailed information about a specific requisition using its unique identifier. The API requires an API key for authentication and the requisition ID as a path parameter. The response includes comprehensive details about the requisition such as its code, name, status, headcount, compensation band, location, department, and more. It also provides information about the approval process, custom fields, and timestamps related to the requisition's lifecycle.

    Sources

    • GET https://api.lever.co/v1/sources : The 'List all sources' API endpoint retrieves a list of all sources in your Lever account. It requires a GET request to the URL 'https://api.lever.co/v1/sources' with basic authentication using an API key. The response includes a JSON object with a 'data' array, where each element contains a 'text' field representing the name of the source and a 'count' field indicating the number of occurrences of that source.

    Stages

    • GET https://api.lever.co/v1/stages : The 'List all stages' API retrieves all pipeline stages available in your Lever account. It requires a GET request to the endpoint 'https://api.lever.co/v1/stages' with basic authentication using an API key. The response includes an array of stages, each with a unique 'id' and a 'text' field representing the name of the stage. This API does not require any path or query parameters, nor does it require a request body.
    • GET https://api.lever.co/v1/stages/:stage : The 'Retrieve a Single Stage' API endpoint allows users to fetch details of a specific stage using its unique identifier. The request requires an API key for authentication, which should be included in the headers. The stage ID is a path parameter that specifies which stage to retrieve. The response includes the stage's unique identifier and its name or description.

    Diversity Surveys

    • GET https://api.lever.co/v1/surveys/diversity/:posting : The 'Retrieve a diversity survey' API endpoint allows users to retrieve the diversity survey associated with a specific posting. The endpoint requires a posting ID as a path parameter and optionally accepts a country code as a query parameter to filter candidate self-select location surveys. If no country code is provided and the account's survey type is set to candidate self-select, all active surveys will be returned. The response includes details such as survey ID, creation and update timestamps, survey text, candidate locations, instructions, and fields with options.

    Tags

    • GET https://api.lever.co/v1/tags : The 'List all tags' API endpoint allows users to retrieve all tags associated with their Lever account. The request requires an API key for authentication, which should be included in the headers. The response returns a list of tags, each with a 'text' field indicating the tag name and a 'count' field indicating the number of times the tag is used.

    Uploads

    • POST https://api.lever.co/v1/uploads : The 'Upload a file' API endpoint allows users to upload files temporarily to be used in conjunction with the 'Apply to a posting' endpoint. This endpoint accepts requests of type 'multipart/form-data' and allows uploading of binary files up to 30MB in size. The uploaded file will be available for 24 hours, after which it cannot be referenced. The response includes details such as the expiration timestamp, filename, unique file ID, URI for accessing the file, and the file size.

    Users

    • POST https://api.lever.co/v1/users : This API endpoint allows integrations to create a user in the Lever account with a default role of Interviewer. Users can also be created with roles such as Limited Team Member, Team Member, Admin, or Super Admin. The request must be of type application/json and include the user's name and email as required fields. Optional fields include accessRole, externalDirectoryId, jobTitle, and managerId. The response includes the user's ID, name, username, email, timestamps for creation and deactivation, external directory ID, access role, photo URL, linked contact IDs, job title, and manager ID.
    • GET https://api.lever.co/v1/users/:user : The 'Retrieve a Single User' API endpoint allows you to fetch the full user record for a specific user by their user ID. The request requires an API key for authentication, which should be included in the Authorization header. The user ID is specified as a path parameter in the URL. The response includes detailed information about the user, such as their ID, name, username, email, creation and deactivation timestamps, external directory ID, access role, photo URL, linked contact IDs, job title, and manager ID.
    • POST https://api.lever.co/v1/users/:user/deactivate : The 'Deactivate a user' API allows you to deactivate a user in the Lever system. Deactivated users remain in the system for historical record keeping but can no longer log in and use Lever. The API requires a POST request to the endpoint '/users/:user/deactivate' with the user's unique identifier in the path parameter. The request must include the 'Content-Type' header set to 'application/json' and an 'Authorization' header with the API key. The response includes details of the deactivated user such as their ID, name, username, email, creation and deactivation timestamps, external directory ID, access role, photo URL, linked contact IDs, job title, and manager ID.
    • POST https://api.lever.co/v1/users/:user/reactivate : The Reactivate a User API allows you to reactivate a user that has been previously deactivated. The endpoint requires a POST request to '/users/:user/reactivate' with the user's unique identifier in the path parameters. The request must include headers for 'Content-Type' as 'application/json' and 'Authorization' with a valid API key. The response returns the user's details including id, name, username, email, creation timestamp, and other relevant information. The 'deactivatedAt' field will be null indicating the user is active.

    Webhooks

    • POST https://api.lever.co/v1/webhooks : The 'Create a Webhook' API allows users to create a new webhook by specifying the webhook URL and event type. The API requires the url and event parameters, while configuration, conditions, and verifyConnection are optional. Webhooks use HMAC-SHA256 request signing via a signature token returned at creation — verify this token on every incoming payload to confirm authenticity. Supported events include: application creation, hiring, stage changes, archival modifications, interview lifecycle events, and contact updates. Webhook delivery is retried up to 5 times with increasing intervals on failure; delivery logs are available in Lever settings for up to 2 weeks (max 1,000 requests). Upon successful creation, the API returns a response containing the webhook's unique ID, event type, URL, configuration details including the signature token, and timestamps. A Super Admin must enable the webhook group in account settings for data transmission to commence.
    • DELETE https://api.lever.co/v1/webhooks/:webhookId : The 'Delete a Webhook' API allows users to delete a specific webhook by its unique identifier. The request requires an API key for authentication, which should be included in the headers. The 'webhookId' is a required path parameter that specifies the webhook to be deleted. Upon successful deletion, the API returns a 204 No Content status, indicating that the webhook has been successfully removed.
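
The signature check described above can be sketched in Python. This is a minimal example assuming Lever's documented scheme of signing the payload's token and triggeredAt fields with the signature token returned at webhook creation; confirm the exact fields against the current Lever webhook docs before relying on it:

```python
import hashlib
import hmac


def verify_lever_signature(signature_token: str, payload: dict) -> bool:
    """Recompute HMAC-SHA256 over the payload's token + triggeredAt,
    keyed by the signatureToken from webhook creation, and compare it
    to the payload's signature field in constant time."""
    message = f"{payload['token']}{payload['triggeredAt']}".encode()
    expected = hmac.new(
        signature_token.encode(), message, hashlib.sha256
    ).hexdigest()
    # hmac.compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, payload["signature"])
```

Reject any webhook delivery that fails this check before processing the event.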

    Here’s a detailed reference to all the Lever API endpoints.

    Lever API FAQs

    Here are the frequently asked questions about Lever APIs to help you get started:

    What are the two Lever APIs?

    Lever exposes two external APIs: the Postings API and the Data API. The Postings API is publicly accessible without authentication and is designed for building job listing sites — it returns active job postings, descriptions, and application forms. The Data API provides full programmatic access to opportunities, applications, pipeline stages, feedback, and offers — it requires either an API key or OAuth 2.0 depending on whether you're building a private integration or a partner app. Knit's Unified ATS API wraps both Lever APIs behind a single normalized endpoint, so you don't need to implement separate auth flows for each.

    How do I get a Lever API key?

    Knit handles Lever API authentication on your behalf, so your application connects to Knit's normalized endpoint rather than managing Lever credentials directly. For direct access, Lever API keys are generated in Settings → Integrations → API Credentials in your Lever account. Keys are scoped to specific permission levels — read-only, read/write, or full access — and should be scoped to the minimum required for your integration. Note that the Data API requires a paid Lever plan; the Postings API is accessible on all plans including free.
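
As noted in the endpoint descriptions above, the Data API uses HTTP Basic authentication with the API key. A minimal header builder, assuming Lever's usual key-as-username, blank-password convention:

```python
import base64


def lever_auth_header(api_key: str) -> dict:
    """Build the HTTP Basic auth header for the Lever Data API:
    the API key is the username and the password is left empty."""
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

Attach this header to every Data API request; most HTTP clients will also accept the key directly as the Basic-auth username.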

    Does Lever support OAuth 2.0?

    Yes, Lever supports OAuth 2.0 for partner integrations — this is the required auth method if you're building a marketplace integration that connects to multiple Lever customer accounts. The authorization endpoint is https://auth.lever.co/authorize and the token endpoint is https://auth.lever.co/oauth/token. Access tokens expire after 1 hour; refresh tokens last 1 year or 90 days of inactivity. Knit manages the full OAuth token lifecycle for Lever integrations, including token refresh and re-authorization, automatically.

    What is the Lever Postings API?

    The Lever Postings API is a read-only, publicly accessible REST API that returns active job postings from a Lever account. It's designed specifically for building job listing sites — you can retrieve job titles, descriptions, departments, locations, and application form links without any authentication. The Postings API does not provide access to opportunity data, pipeline stages, or internal ATS records. For full ATS access including candidate and application data, you need the Lever Data API with appropriate authentication.

    What data can I access through the Lever Data API?

    The Lever Data API provides access to opportunities, applications, postings, interviews, feedback forms, offers, and users within a Lever account. Knit normalizes Lever Data API responses into a unified candidate and application schema that works across other ATS platforms like Greenhouse, Workday, and iCIMS — useful when building multi-ATS integrations. Through the API you can read and create opportunities, update application stages, post interview feedback, manage tags, and retrieve reporting data. Write operations require appropriate permission scoping on the API key or OAuth token.

    Does Lever have API rate limits?

    Lever enforces rate limits on the Data API — the standard limit is 10 requests per second per API key, with burst capacity up to 20 requests per second using a token bucket algorithm. Knit handles Lever API rate limiting automatically when syncing candidate and application data, batching requests within Lever's limits. Sustained bursts above the threshold will result in 429 responses with a Retry-After header. Best practice is to implement exponential backoff on 429 responses and use webhooks for real-time event notifications rather than polling the API continuously.
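
The backoff advice above can be sketched as a small retry wrapper. Here do_request is a placeholder for your HTTP call returning (status_code, body), and base_delay is exposed so the schedule is tunable; a production client would also honor the Retry-After header:

```python
import time


def request_with_backoff(do_request, max_retries=5, base_delay=1.0):
    """Call do_request() until it returns a non-429 status, sleeping
    base_delay, 2*base_delay, 4*base_delay, ... between attempts."""
    for attempt in range(max_retries):
        status, body = do_request()
        if status != 429:
            return body
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still rate limited after {max_retries} attempts")
```

Pairing this with webhooks for event delivery keeps most integrations well under the 10 requests/second limit.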

    How does pagination work in the Lever API?

    Lever uses cursor-based pagination — not offset. List endpoints return a next cursor token and a hasNext boolean. To fetch the next page, pass the next value as a query parameter in your subsequent request. Page size is configurable between 1 and 100 items (default 100). This approach means results stay stable even if records are added or modified between requests. Knit handles Lever pagination internally when syncing data, so your application always receives a complete, consistent dataset regardless of volume.
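
The cursor loop described above looks like this in practice. fetch_page stubs the HTTP call (a real client would GET the list endpoint with the cursor token as a query parameter, checking the endpoint docs for the exact parameter name) and simulates three pages:

```python
def fetch_page(cursor):
    """Stub for a GET to a Lever list endpoint; returns one page shaped
    like Lever's responses: data, hasNext, and a next cursor token."""
    pages = {
        None:    {"data": [1, 2], "hasNext": True, "next": "tok-b"},
        "tok-b": {"data": [3, 4], "hasNext": True, "next": "tok-c"},
        "tok-c": {"data": [5], "hasNext": False},
    }
    return pages[cursor]


def list_all():
    """Follow the next cursor until hasNext is false, accumulating data."""
    results, cursor = [], None
    while True:
        page = fetch_page(cursor)
        results.extend(page["data"])
        if not page.get("hasNext"):
            return results
        cursor = page["next"]
```

Because the cursor pins your position in the result set, the loop terminates cleanly even if records change mid-sync.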

    1. What is the difference between the Lever API and Postings API?
    2. Does the Lever API archive include all candidates who were rejected or otherwise exited the interview process?
    3. Does each company customize stages in Lever API?
    4. How do I add global tags to a Lever account?
    5. Does each company customize tags in Lever?
    6. What are the rate limits for the Lever API?
    7. How can I get a sandbox account to test the Lever API?
    Find more FAQs here.

    Get started with Lever API

    Lever grants API access for integrations only after an internal review of your request. If you want to integrate with multiple HRMS or recruitment APIs quickly instead, you can get started with Knit, one API for all top HR integrations.

    To sign up for free, click here. To check the pricing, see our pricing page.