How It Works
We generate an idempotency key from tenantId + jobId + applicationId:
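Conceptually, the derivation looks like the sketch below. The real key is computed server-side; the hashing scheme here is illustrative, not the actual implementation:

```python
import hashlib

def idempotency_key(tenant_id: str, job_id: str, application_id: str) -> str:
    """Illustrative only: the API derives the real key server-side.

    A stable hash over the three identifiers means any retry with the same
    tenantId + jobId + applicationId maps to the same key.
    """
    raw = f"{tenant_id}:{job_id}:{application_id}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# The same inputs always produce the same key, so retries deduplicate:
assert idempotency_key("t1", "j1", "a1") == idempotency_key("t1", "j1", "a1")
assert idempotency_key("t1", "j1", "a1") != idempotency_key("t1", "j1", "a2")
```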
Since criteria are stored server-side and linked to your jobId, they don’t need to be part of the idempotency key. This simplifies retry logic significantly.
Behavior
| Scenario | What Happens |
|---|---|
| First request | Creates new scoring job |
| Duplicate while processing | Returns same scoringJobId, job continues |
| Duplicate after completion | Returns same scoringJobId with completed status |
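The behavior above can be modeled with a small in-memory sketch (ScoringStore and its internals are hypothetical, purely to illustrate the deduplication semantics):

```python
import uuid

class ScoringStore:
    """Hypothetical in-memory model of the idempotency behavior above."""

    def __init__(self):
        # (tenantId, jobId, applicationId) -> job record
        self._jobs = {}

    def submit(self, tenant_id, job_id, application_id):
        key = (tenant_id, job_id, application_id)
        if key in self._jobs:
            return self._jobs[key]  # duplicate: same scoringJobId comes back
        record = {"scoringJobId": str(uuid.uuid4()), "status": "processing"}
        self._jobs[key] = record    # first request: creates a new job
        return record

store = ScoringStore()
first = store.submit("t1", "j1", "a1")
dup = store.submit("t1", "j1", "a1")          # while processing
assert dup["scoringJobId"] == first["scoringJobId"]

first["status"] = "completed"                 # simulate the job finishing
late = store.submit("t1", "j1", "a1")         # duplicate after completion
assert late["status"] == "completed"
```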
Idempotency Window
30 days. After 30 days, calling with the same IDs creates a new job. This window is designed to cover retry scenarios while still allowing candidates to be re-scored over time.
Safe Retry Pattern
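A minimal retry wrapper, assuming a hypothetical zero-argument submit callable that performs the scoring request and raises on 5xx or network errors:

```python
import random
import time

def submit_with_retry(submit, max_attempts=5, base_delay=1.0):
    """Retry wrapper: safe because duplicate submissions are idempotent."""
    for attempt in range(max_attempts):
        try:
            return submit()
        except Exception:
            # In real code, catch only network errors and 5xx responses;
            # 4xx errors should not be retried.
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff (1s, 2s, 4s, ...) plus jitter so many
            # clients don't retry in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Because retries reuse the same tenantId + jobId + applicationId, a retry that lands after the first attempt actually succeeded simply returns the existing scoringJobId rather than creating a duplicate job.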
What Changes Create New Jobs?
| Field Changed | New Job Created? |
|---|---|
| jobId | Yes |
| applicationId | Yes |
| rescore: true | Yes |
| resumeUrl | No (idempotent) |
| language | No (idempotent) |
| candidate.applicationAnswers | No (idempotent) |
Re-scoring Candidates
Updating criteria via the criteria endpoints does not automatically re-score existing candidates. This is by design:
- Scores are immutable audit records tied to the criteria version at scoring time
- Prevents unexpected billing from automatic re-processing
- Maintains consistency for candidates already in your pipeline
Using the rescore Parameter
To re-score a candidate, set rescore: true in your scoring request:
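For example, a request body might look like the sketch below. The identifier values are placeholders, and any shape beyond the fields listed in the tables above is an assumption:

```python
import json

# Placeholder IDs; rescore: true forces a new, billable scoring job even
# though jobId + applicationId match a previous submission.
payload = {
    "jobId": "job_123",
    "applicationId": "app_456",
    "resumeUrl": "https://example.com/resume.pdf",
    "rescore": True,
}

body = json.dumps(payload)
```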
Billing note: Each scoring job counts as one application scored. Idempotent retries (same IDs without rescore: true) are not charged again. Re-scoring with rescore: true counts as an additional billable score.
After Criteria Update
When you update criteria via PATCH /v1/jobs/{jobId}/criteria/{criterionId}, future scoring requests use the new criteria; existing scores are left unchanged. To re-score existing candidates with the updated criteria, submit a new scoring request with rescore: true for each of them.
Use Cases
Automatic Retries on Network Failure
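Because duplicate submissions are deduplicated, a request that dies mid-flight can simply be resent with the same IDs. A sketch, assuming a hypothetical score_application helper that performs the POST:

```python
def submit_once_safely(score_application, payload):
    """Resend on connection failure; idempotency prevents duplicate jobs."""
    try:
        return score_application(payload)
    except ConnectionError:
        # The first attempt may or may not have reached the server.
        # Either way, resending the same IDs cannot create a second job.
        return score_application(payload)
```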
Webhook Missed, Polling Instead
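If the completion webhook never arrives, you can fall back to polling the job status. A sketch, assuming a hypothetical get_status helper that fetches a scoring job's status string:

```python
import time

def poll_until_done(get_status, scoring_job_id, interval=2.0, timeout=60.0):
    """Poll a scoring job until it reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(scoring_job_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)  # still processing; wait and check again
    raise TimeoutError(f"scoring job {scoring_job_id} still pending")
```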
Batch Import Recovery
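For batch imports, track your own submission state so a crashed run can resume; the API's idempotency deduplicates anything the local state misses. A sketch with a hypothetical submit helper:

```python
def import_batch(submit, applications, state):
    """Re-runnable batch import.

    `state` maps applicationId -> scoringJobId and persists across runs;
    `submit` performs the scoring request for one application.
    """
    failed = []
    for app in applications:
        if app["applicationId"] in state:
            continue  # already submitted in a previous run
        try:
            state[app["applicationId"]] = submit(app)
        except Exception:
            failed.append(app["applicationId"])  # retry these later
    return failed
```

Rerunning import_batch with the same state retries only the failures; even if state is lost, resubmitting every item is safe because duplicates return the existing scoringJobId.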
Best Practices
Always retry on 5xx errors
Server errors are transient. Retrying is safe due to idempotency.
Use exponential backoff
Increase delays between retries: 1s, 2s, 4s, etc.
Add jitter to prevent thundering herd
Add a random delay so that many clients don’t all retry at the same moment.
Track your own state
Don’t rely solely on API idempotency - track which jobs you’ve submitted.
Handle partial failures
In batch processing, track which items succeeded and which failed so you can retry only the failures.
Idempotency Checklist
Implement retry logic with exponential backoff
Handle 429 responses with the Retry-After header
Retry only on 5xx errors, not 4xx
Add jitter to retry delays
Track submitted jobs for your own state management