Overview

The SkillMatchIQ Batch Processing API enables you to analyze multiple candidate resumes against job descriptions at scale. Our AI-powered engine provides detailed matching scores, skill gap analysis, and actionable recommendations.

🚀 Lightning Fast

Process hundreds of candidates in minutes with our optimized parallel processing engine.

📊 Rich Analytics

Get detailed insights including match scores, skill gaps, and visual knowledge graphs.

🔄 Flexible Input

Support for ZIP files, CSV, cloud storage (Google Drive, AWS S3), and databases.

💡 Pro Tip: Our API processes each job description once per batch, significantly reducing processing time and costs when analyzing multiple candidates.

Authentication

All API requests require authentication using your API key. Include your key in the request headers:

Authorization: Bearer YOUR_API_KEY

To get your API key:

  1. Sign up for a SkillMatchIQ account
  2. Navigate to your dashboard
  3. Click on "API Keys" in the settings menu
  4. Generate a new API key
⚠️ Security: Keep your API key secure and never expose it in client-side code or public repositories.

Quick Start

Get started with our API in just a few minutes. The equivalent examples below, in Python, Node.js, and cURL, submit resumes from a ZIP file for analysis:

Python

import requests
import time

# Your API configuration
API_KEY = "your_api_key_here"
API_BASE_URL = "https://api.skillmatchiq.com"

# Prepare your files
with open('resumes.zip', 'rb') as f:
    files = {'candidate_source_file': ('resumes.zip', f, 'application/zip')}
    
    data = {
        'input_type': 'ZIP_FILE',
        'job_description_text': 'Looking for a Senior Software Engineer with Python and React experience...'
    }
    
    headers = {'Authorization': f'Bearer {API_KEY}'}
    
    # Submit the batch job
    response = requests.post(
        f'{API_BASE_URL}/api/v1/batch-jobs',
        data=data,
        files=files,
        headers=headers
    )
    
    response.raise_for_status()
    batch_job_id = response.json()['batch_job_id']
    print(f"Batch submitted: {batch_job_id}")

Node.js

const FormData = require('form-data');
const fs = require('fs');
const axios = require('axios');

const API_KEY = 'your_api_key_here';
const API_BASE_URL = 'https://api.skillmatchiq.com';

async function analyzeCandidates() {
    // Prepare form data
    const form = new FormData();
    form.append('input_type', 'ZIP_FILE');
    form.append('job_description_text', 'Looking for a Senior Software Engineer...');
    form.append('candidate_source_file', fs.createReadStream('resumes.zip'));
    
    // Submit batch job
    const submitResponse = await axios.post(
        `${API_BASE_URL}/api/v1/batch-jobs`,
        form,
        {
            headers: {
                ...form.getHeaders(),
                'Authorization': `Bearer ${API_KEY}`
            }
        }
    );
    
    const batchJobId = submitResponse.data.batch_job_id;
    console.log(`Batch submitted: ${batchJobId}`);
}

analyzeCandidates().catch(console.error);

cURL

# Submit batch job
curl -X POST https://api.skillmatchiq.com/api/v1/batch-jobs \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "input_type=ZIP_FILE" \
  -F "job_description_text=Looking for a Senior Software Engineer with Python and React experience..." \
  -F "[email protected]"

# Response: {"batch_job_id": "abc123-def456-ghi789", "status": "pending", ...}

Batch Processing

Our batch processing system is designed to handle large volumes of candidates efficiently. There are two processing modes:

Asynchronous Processing (Recommended)

Best for large batches: submit your job, monitor its progress, and retrieve results when it completes (a polling sketch follows these steps).

Submit Job

Upload your candidates and job description. Receive a batch ID immediately.

Monitor Progress

Check status anytime using your batch ID. Get real-time progress updates.

Retrieve Results

Once complete, fetch detailed results including scores and recommendations.
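
Put together, the asynchronous workflow is a submit-poll-fetch loop. Below is a minimal polling sketch in Python, reusing API_KEY and API_BASE_URL from the Quick Start; the status and results endpoints it calls are documented in full under API Endpoints.

import time
import requests

def wait_for_results(batch_job_id, poll_interval=10):
    """Poll the status endpoint until the job finishes, then fetch results."""
    headers = {'Authorization': f'Bearer {API_KEY}'}

    while True:
        status_response = requests.get(
            f'{API_BASE_URL}/api/v1/batch-jobs/{batch_job_id}/status',
            headers=headers
        )
        status_response.raise_for_status()
        status = status_response.json()
        print(f"Progress: {status['processed_candidates']}/{status['total_candidates']}")

        if status['status'] == 'completed':
            break
        if status['status'] == 'failed':
            raise RuntimeError(status['error_message'] or 'Batch job failed')

        time.sleep(poll_interval)  # status checks are not rate-limited, but be gentle

    results_response = requests.get(
        f'{API_BASE_URL}/api/v1/batch-jobs/{batch_job_id}/results',
        headers=headers
    )
    results_response.raise_for_status()
    return results_response.json()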

Synchronous Processing

For smaller batches (under 100 candidates), get results immediately in a single API call.

POST /api/v1/batch-jobs/sync
{
    "input_type": "CSV_FILE",
    "job_description_text": "Your job description here...",
    "max_candidates": 50,
    "timeout": 300
}
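
A Python sketch of a synchronous call, assuming the endpoint accepts the same multipart form fields as the asynchronous endpoint (the CSV upload shown here) alongside max_candidates and timeout:

import requests

# Sketch: small batches only; results arrive in the response body, so no
# polling is needed (the multipart CSV upload is an assumption here)
with open('candidates.csv', 'rb') as f:
    response = requests.post(
        f'{API_BASE_URL}/api/v1/batch-jobs/sync',
        data={
            'input_type': 'CSV_FILE',
            'job_description_text': 'Your job description here...',
            'max_candidates': 50,
            'timeout': 300
        },
        files={'candidate_source_file': ('candidates.csv', f, 'text/csv')},
        headers={'Authorization': f'Bearer {API_KEY}'}
    )

response.raise_for_status()
results = response.json()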

Input Types

SkillMatchIQ supports multiple input sources to fit your workflow:

  • ZIP_FILE: ZIP archive containing resume files (PDF, DOCX, TXT). Plans: Trial, Starter, Professional, Enterprise
  • CSV_FILE: CSV with candidate data and resume text. Plans: Trial, Starter, Professional, Enterprise
  • GOOGLE_DRIVE: Google Drive folder containing resumes. Plans: Starter, Professional, Enterprise
  • AWS_S3: AWS S3 bucket with resume files. Plans: Professional, Enterprise
  • DATABASE: Direct database query (PostgreSQL, MySQL, SQLite). Plans: Professional, Enterprise

CSV File Format

When using CSV files, ensure your data follows this format:

candidate_id,name,resume_text,filename
1,John Doe,"Software Engineer with 5 years Python experience...",john_doe.pdf
2,Jane Smith,"Data Scientist specializing in machine learning...",jane_smith.docx
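
If you assemble this file programmatically, Python's standard csv module takes care of quoting resume text that contains commas or newlines; a minimal sketch:

import csv

# Write candidate rows in the expected column order; csv.writer quotes
# resume_text values containing commas or newlines automatically
with open('candidates.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['candidate_id', 'name', 'resume_text', 'filename'])
    writer.writerow([1, 'John Doe',
                     'Software Engineer with 5 years Python experience...',
                     'john_doe.pdf'])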

API Endpoints

Complete reference for all available API endpoints:

POST /api/v1/batch-jobs

Submit a new batch job for asynchronous processing.

Request Parameters

  • input_type (enum, required): One of ZIP_FILE, CSV_FILE, GOOGLE_DRIVE, AWS_S3, DATABASE
  • job_description_text (string, required): Full job description text
  • candidate_source_file (file, conditional): Required for ZIP_FILE and CSV_FILE input types
  • source_config_json (JSON string, conditional): Configuration for cloud/database sources
  • metadata_json (JSON string, optional): Additional metadata (job_profile, tags, etc.)

Response (HTTP 202 Accepted)

{
    "batch_job_id": "abc123-def456-ghi789",
    "user_id": 123,
    "status": "pending",
    "total_candidates": 0,
    "processed_candidates": 0,
    "candidates_charged": 0,
    "estimated_candidates": 150,
    "created_at": "2024-01-15T10:30:00Z",
    "updated_at": "2024-01-15T10:30:00Z"
}
GET /api/v1/batch-jobs/{batch_job_id}/status

Check the current status and progress of a batch job.

Response

{
    "batch_job_id": "abc123-def456-ghi789",
    "user_id": 123,
    "status": "processing",
    "total_candidates": 150,
    "processed_candidates": 75,
    "candidates_charged": 75,
    "estimated_candidates": 150,
    "created_at": "2024-01-15T10:30:00Z",
    "updated_at": "2024-01-15T10:35:45Z",
    "completed_at": null,
    "error_message": null
}

Status Values

  • pending - Job received and waiting to start
  • processing - Currently analyzing candidates
  • completed - Successfully finished
  • failed - Error occurred
GET /api/v1/batch-jobs/{batch_job_id}/results

Retrieve the simplified results of a completed batch job, suitable for UI display.

Query Parameters

  • limit (integer, default 100): Maximum number of results to return (max: 100)
  • skip (integer, default 0): Number of results to skip for pagination

Response (Simplified)

{
  "batch_job_id": "abc123-def456-ghi789",
  "user_id": 42,
  "status": "completed",
  "job_title": "Senior Full Stack Developer",
  "job_summary": "Requires: Python, React, AWS, Docker, TypeScript",
  "total_candidates": 25,
  "processed_candidates": 25,
  "summary_stats": {
    "average_score": 73.2,
    "highest_score": 94.8,
    "candidates_above_80": 8
  },
  "processed_items": [
    {
      "candidate_id_internal": "candidate_001",
      "original_filename": "john_doe_resume.pdf",
      "match_score": {
        "overall_score": 94.8,
        "skill_match_percentage": 92.3
      },
      "match_results_summary": {
        "skills": {
          "matches": [{"name": "React"}, {"name": "Python"}],
          "gaps": [{"name": "AWS Lambda"}]
        }
      }
    }
  ]
}
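
Because limit is capped at 100, batches with more candidates must be paged using skip. A sketch that collects every processed item, reusing API_KEY and API_BASE_URL from the Quick Start:

import requests

def fetch_all_results(batch_job_id, page_size=100):
    """Page through the results endpoint until no items remain."""
    headers = {'Authorization': f'Bearer {API_KEY}'}
    items = []
    skip = 0

    while True:
        response = requests.get(
            f'{API_BASE_URL}/api/v1/batch-jobs/{batch_job_id}/results',
            params={'limit': page_size, 'skip': skip},
            headers=headers
        )
        response.raise_for_status()
        page = response.json().get('processed_items', [])
        items.extend(page)

        if len(page) < page_size:  # a short page means we reached the end
            return items
        skip += page_size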

Analytics & Insights

Track your recruitment metrics and identify trends over time with our analytics API.

GET /api/analytics/meta

Retrieve processed analytics data for your dashboard.

Query Parameters

  • time_range (enum, default MONTH): WEEK (7 days), MONTH (30 days), QUARTER (90 days), or YEAR (365 days)
  • job_filter (string, default null): Filter results by a specific job ID

Response

{
  "total_analyses": 1247,
  "avg_score": 73.2,
  "score_change": 5.4,
  "best_match": 94.8,
  "worst_match": 32.1,
  "median_score": 75.5,
  "score_distribution": {
    "excellent": 156,
    "good": 201,
    "fair": 67,
    "poor": 22
  },
  "top_matched_skills": [
    {"skill": "Python", "count": 234, "percentage": 82.3},
    {"skill": "React", "count": 189, "percentage": 66.5}
  ],
  "top_gap_skills": [
    {"skill": "AWS Lambda", "count": 98, "percentage": 34.5},
    {"skill": "Kubernetes", "count": 87, "percentage": 30.6}
  ],
  "insights": [
    {
      "message": "Average match scores improved by 5.4% this month",
      "type": "positive"
    }
  ]
}
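
Fetching these metrics from Python takes a single GET; for example, the last quarter's numbers (again reusing API_KEY and API_BASE_URL from the Quick Start):

import requests

# Quarterly analytics; add job_filter to scope the numbers to one job
response = requests.get(
    f'{API_BASE_URL}/api/analytics/meta',
    params={'time_range': 'QUARTER'},
    headers={'Authorization': f'Bearer {API_KEY}'}
)
response.raise_for_status()
analytics = response.json()
print(f"Average score: {analytics['avg_score']} ({analytics['total_analyses']} analyses)")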

Real-World Examples

See how different organizations use our API:

Example 1: Recruitment Agency

Process weekly candidate submissions from multiple clients:

import requests
import json

# Weekly batch processing (get_job_description, wait_and_get_results, and
# send_client_report are application-level helpers defined elsewhere)
def process_weekly_candidates():
    # Upload ZIP file with all resumes
    with open('weekly_resumes.zip', 'rb') as f:
        files = {'candidate_source_file': f}
        data = {
            'input_type': 'ZIP_FILE',
            'job_description_text': get_job_description('tech_role_2024'),
            'metadata_json': json.dumps({
                'job_profile': {
                    'title': 'Senior Full Stack Developer',
                    'department': 'Engineering',
                    'location': 'Remote'
                },
                'tags': ['urgent', 'high-priority'],
                'client_id': 'client_123'
            })
        }
        
        response = requests.post(
            'https://api.skillmatchiq.com/api/v1/batch-jobs',
            files=files,
            data=data,
            headers={'Authorization': f'Bearer {API_KEY}'}
        )
    
    batch_id = response.json()['batch_job_id']
    
    # Wait for completion and get results
    results = wait_and_get_results(batch_id)
    
    # Email top 10 matches to client
    top_matches = results['processed_items'][:10]
    send_client_report(top_matches, 'client_123')

Example 2: Enterprise HR System

Integrate with existing applicant tracking system using database connection:

import requests
import json

# Database configuration for PostgreSQL (job_description and API_KEY are
# defined elsewhere in your application)
source_config = {
    "type": "postgresql",
    "connection_string": "postgresql://hr_user:pass@localhost:5432/ats",
    "query": "SELECT id, name, resume_content FROM applicants WHERE position_id = 12345"
}

# Submit batch job with database source
response = requests.post(
    'https://api.skillmatchiq.com/api/v1/batch-jobs',
    data={
        'input_type': 'DATABASE',
        'job_description_text': job_description,
        'source_config_json': json.dumps(source_config),
        'metadata_json': json.dumps({
            'ats_integration': True,
            'position_id': 12345
        })
    },
    headers={'Authorization': f'Bearer {API_KEY}'}
)

Example 3: University Career Center

Match students with internship opportunities using Google Drive:

# Process student resumes from Google Drive
config = {
    "folder_id": "1a2b3c4d5e6f7g8h9i0j",
    "credentials": {
        # OAuth2 credentials
    }
}

# Match against multiple internship positions (internship_positions is a
# list of position dicts defined by your application)
for position in internship_positions:
    response = requests.post(
        'https://api.skillmatchiq.com/api/v1/batch-jobs',
        data={
            'input_type': 'GOOGLE_DRIVE',
            'job_description_text': position['description'],
            'source_config_json': json.dumps(config),
            'metadata_json': json.dumps({
                'position_type': 'internship',
                'semester': 'Spring 2024',
                'department': position['department']
            })
        },
        headers={'Authorization': f'Bearer {API_KEY}'}
    )
    
    # Store results for student advisors
    batch_id = response.json()['batch_job_id']
    track_internship_matches(position['id'], batch_id)

Error Codes

Understanding and handling API errors:

  • 400 Bad Request: Invalid parameters. Check required fields and data formats.
  • 401 Unauthorized: Invalid API key. Verify your API key is correct and active.
  • 402 Payment Required: Quota or plan limit exceeded. Upgrade your plan or wait for the quota to reset.
  • 404 Not Found: The batch ID doesn't exist. Check that the batch_job_id is correct.
  • 413 Payload Too Large: Reduce the file size or split the batch into smaller ones.
  • 429 Too Many Requests: Rate limit exceeded. Wait before retrying.
  • 500 Internal Server Error: Contact support if the error persists.

Error Response Format

{
    "error": {
        "code": "INVALID_INPUT_TYPE",
        "message": "Invalid value for 'input_type' field",
        "details": {
            "field": "input_type",
            "provided": "zip",
            "expected": ["ZIP_FILE", "CSV_FILE", "GOOGLE_DRIVE", "AWS_S3", "DATABASE"]
        }
    }
}
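
A defensive client combines the status codes above with the structured error body. The sketch below retries on 429 with exponential backoff and surfaces the error code and message otherwise; it assumes error responses follow the JSON format shown above.

import time
import requests

def post_with_retry(url, max_retries=3, **kwargs):
    """POST with exponential backoff on 429; raise with details on other errors."""
    for attempt in range(max_retries):
        response = requests.post(url, **kwargs)

        if response.status_code == 429:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s...
            continue

        if not response.ok:
            error = response.json().get('error', {})
            raise RuntimeError(
                f"{response.status_code} {error.get('code')}: {error.get('message')}"
            )

        return response

    raise RuntimeError('Rate limit: retries exhausted')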

Best Practices

🚀 Performance Optimization

Batch Sizing

Keep batches under 500 candidates for optimal processing time. Split larger datasets into multiple batches (see the sketch below).

Use Metadata

Include job_profile in metadata for better matching accuracy and analytics tracking.

Async for Large Jobs

Always use asynchronous processing for batches over 50 candidates.
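
A minimal way to split a large dataset, where resume_paths and submit_batch are illustrative names for your file list and your submission helper:

# Split a large set of resume files into batches of at most 500
BATCH_SIZE = 500

def chunked(items, size):
    """Yield consecutive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

for batch_number, file_batch in enumerate(chunked(resume_paths, BATCH_SIZE), 1):
    print(f"Submitting batch {batch_number}: {len(file_batch)} resumes")
    submit_batch(file_batch)  # zip these files and POST as in the Quick Start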

📊 Data Quality

  • Ensure resume files are properly formatted (PDF, DOCX, TXT)
  • Provide detailed job descriptions including required and preferred skills
  • Use consistent skill naming across job postings
  • Include metadata for better analytics and tracking

🔒 Security

  • Store API keys in environment variables, never in code (see the snippet after this list)
  • Use HTTPS for all API communications
  • Implement proper access controls for cloud storage
  • Regularly rotate API keys
  • Monitor API usage for unusual patterns
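
For the first point, reading the key from the environment is usually enough; the variable name here is illustrative:

import os

# Read the key from the environment instead of hard-coding it
API_KEY = os.environ['SKILLMATCHIQ_API_KEY']  # illustrative variable name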

📈 Monitoring

Track these key metrics for optimal performance:

  • Average processing time per candidate
  • Match score distribution
  • Common skill gaps across positions
  • API error rates and types
  • Batch completion rates

⚡ Rate Limits

Our API has the following rate limits:

  • Batch submissions: 10 per minute, 100 per hour
  • Status checks: No limit (read-only)
  • Results retrieval: No limit (read-only)
  • Analytics: No limit (read-only)

Next Steps