
Chetan Gupta

REST API Calls for Data Engineers: A Practical Guide with Examples


Introduction

As a Data Engineer, you rarely work only with databases. Modern data pipelines frequently ingest data from REST APIs—whether it’s pulling data from SaaS tools (Salesforce, Jira, Google Analytics), internal microservices, or third-party providers.

Understanding how REST APIs work and how to interact with them efficiently is a core data engineering skill.

This blog covers:

  • What REST APIs are (briefly, practically)
  • Common REST methods from a data engineering perspective
  • Authentication patterns
  • Pagination, filtering, and rate limiting
  • Real-world examples using Python
  • Best practices for production data pipelines

What Is a REST API? (A Data Engineer's Perspective)

REST (Representational State Transfer) APIs allow systems to communicate over HTTP using standard methods.

From a data engineer’s standpoint:

  • REST APIs are data sources
  • JSON is the most common data format
  • APIs are often incremental, paginated, and rate-limited
  • APIs feed data lakes, warehouses, or streaming systems

Core REST HTTP Methods You’ll Use

Method   | Usage for Data Engineers
---------|-----------------------------------------------------
GET      | Fetch data (most common)
POST     | Submit parameters, create resources, complex queries
PUT      | Update existing resources
DELETE   | Rarely used in pipelines

In data engineering pipelines, GET and POST cover the vast majority of calls.


Anatomy of a REST API Request

A typical REST API call consists of:

https://api.example.com/v1/orders?start_date=2025-01-01&limit=100

Components:

  • Base URL: https://api.example.com
  • Endpoint: /v1/orders
  • Query Parameters: start_date, limit
  • Headers: Authentication, content type
  • HTTP Method: GET / POST
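If you want to see exactly how these components are assembled, the requests library can expose the final encoded URL via a PreparedRequest. A quick sketch, reusing the example URL above:

import requests

req = requests.Request(
    "GET",
    "https://api.example.com/v1/orders",
    params={"start_date": "2025-01-01", "limit": 100},
)
prepared = req.prepare()

# Query parameters are URL-encoded into the final request:
print(prepared.url)
# https://api.example.com/v1/orders?start_date=2025-01-01&limit=100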

Example 1: Simple GET Request (Fetching Data)

Use Case

Fetch daily sales data from an external system.

API Request

GET https://api.company.com/v1/sales

Python Example (requests library)

import requests

url = "https://api.company.com/v1/sales"

headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Accept": "application/json"
}

# timeout prevents a scheduled job from hanging on a dead connection
response = requests.get(url, headers=headers, timeout=30)

data = response.json()
print(data)

Typical JSON Response

{
  "sales": [
    {
      "order_id": 101,
      "amount": 250.50,
      "currency": "USD",
      "order_date": "2025-01-10"
    }
  ]
}

This JSON is later (see the flattening sketch below):

  • Flattened
  • Transformed
  • Stored in a data lake or warehouse
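As a quick illustration of the flattening step, pandas.json_normalize can turn the nested sales array into a flat table (assuming the response shape shown above):

import pandas as pd

# Flatten the nested "sales" array; the columns become
# order_id, amount, currency, order_date.
df = pd.json_normalize(data["sales"])
print(df.head())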

Example 2: Query Parameters (Filtering Data)

Use Case

Pull incremental data to avoid reprocessing historical records.

GET /v1/sales?start_date=2025-01-01&end_date=2025-01-31

Python Code

params = {
    "start_date": "2025-01-01",
    "end_date": "2025-01-31"
}

response = requests.get(url, headers=headers, params=params)
sales_data = response.json()

Best Practice: Always design pipelines to be incremental; a simple watermark pattern is sketched below.
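A minimal sketch of that pattern, assuming a local state file named sales_watermark.json (in production this usually lives in a control table or metadata store):

import json
import os
import requests

# Hypothetical local state file holding the last loaded date.
WATERMARK_FILE = "sales_watermark.json"

def load_watermark(default="2025-01-01"):
    """Return the last successfully loaded date, or a default for the first run."""
    if os.path.exists(WATERMARK_FILE):
        with open(WATERMARK_FILE) as f:
            return json.load(f)["last_loaded_date"]
    return default

# url and headers as defined in Example 1
params = {"start_date": load_watermark(), "end_date": "2025-01-31"}
response = requests.get(url, headers=headers, params=params, timeout=30)

# Only advance the watermark after the load succeeds.
if response.status_code == 200:
    with open(WATERMARK_FILE, "w") as f:
        json.dump({"last_loaded_date": "2025-01-31"}, f)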


Example 3: POST Request (Complex Queries)

Some APIs require POST when filters are complex.

API Call

POST /v1/sales/search

Payload

{
  "region": ["US", "EU"],
  "min_amount": 100,
  "date_range": {
    "from": "2025-01-01",
    "to": "2025-01-31"
  }
}

Python Example

search_url = "https://api.company.com/v1/sales/search"

payload = {
    "region": ["US", "EU"],
    "min_amount": 100,
    "date_range": {
        "from": "2025-01-01",
        "to": "2025-01-31"
    }
}

# POST to the dedicated search endpoint (not the base /v1/sales url)
response = requests.post(search_url, headers=headers, json=payload, timeout=30)
data = response.json()

Authentication Methods (Very Important)

1. API Key Authentication

Authorization: ApiKey abc123

2. Bearer Token (OAuth 2.0)

Authorization: Bearer eyJhbGciOi...

3. Basic Auth (Less Secure)

requests.get(url, auth=("username", "password"))

🔐 Data Engineering Tip
Always store credentials outside your code, in one of the following (sketch below):

  • Environment variables
  • Secret managers (AWS Secrets Manager, Azure Key Vault)
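A minimal sketch of reading a token from an environment variable. The variable name API_TOKEN is an assumption; substitute whatever your scheduler or secret manager injects:

import os
import requests

# Never hard-code tokens; read them from the environment.
# API_TOKEN is an assumed variable name.
token = os.environ["API_TOKEN"]

headers = {
    "Authorization": f"Bearer {token}",
    "Accept": "application/json",
}
response = requests.get("https://api.company.com/v1/sales", headers=headers, timeout=30)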

Example 4: Pagination (Very Common in APIs)

Most APIs limit results per request.

API Response with Pagination

{
  "data": [...],
  "page": 1,
  "total_pages": 10
}

Python Pagination Logic

all_data = []
page = 1

while True:
    params = {"page": page, "limit": 100}
    response = requests.get(url, headers=headers, params=params, timeout=30)
    result = response.json()

    all_data.extend(result["data"])

    # Stop once the last page has been fetched.
    if page >= result["total_pages"]:
        break

    page += 1

Always handle pagination, or you’ll silently miss data.


Example 5: Handling Rate Limits

APIs often limit requests:

429 Too Many Requests

Retry Logic Example

import time

response = requests.get(url, headers=headers, timeout=30)

# Naive approach: wait a fixed interval and retry once.
if response.status_code == 429:
    time.sleep(60)
    response = requests.get(url, headers=headers, timeout=30)

📌 Production pipelines should instead use (sketched below):

  • Exponential backoff
  • Retry limits
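A minimal sketch combining both ideas. The function name get_with_backoff and the retry parameters are illustrative, and the Retry-After header is assumed to contain seconds:

import time
import requests

def get_with_backoff(url, headers, params=None, max_retries=5):
    """GET with exponential backoff on 429s and transient server errors."""
    delay = 1  # seconds; doubles after each failed attempt
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, params=params, timeout=30)
        if response.status_code not in (429, 500, 502, 503, 504):
            return response
        # Prefer the server's Retry-After hint when it is present.
        retry_after = response.headers.get("Retry-After")
        time.sleep(int(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError(f"Giving up after {max_retries} retries: {url}")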

Example 6: Error Handling (Critical for Pipelines)

response = requests.get(url, headers=headers, timeout=30)

if response.status_code != 200:
    raise Exception(
        f"API failed with status {response.status_code}: {response.text}"
    )

Common HTTP Status Codes:

  • 200 – Success
  • 400 – Bad Request
  • 401 – Unauthorized
  • 404 – Not Found
  • 429 – Too Many Requests
  • 500 – Server Error
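For simple cases, requests also provides a built-in shortcut, raise_for_status(), which raises requests.exceptions.HTTPError for any 4xx/5xx response:

import requests

response = requests.get(url, headers=headers, timeout=30)
# Raises requests.exceptions.HTTPError on any 4xx/5xx status,
# so the pipeline fails loudly instead of processing bad data.
response.raise_for_status()
data = response.json()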

REST API Data Flow in a Data Pipeline

REST API
   ↓
Python / Spark Job
   ↓
Raw Zone (JSON)
   ↓
Transformation (Flattening, Cleaning)
   ↓
Data Warehouse (Snowflake / BigQuery / Redshift)
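As a minimal sketch of the first two hops, here is a helper that lands the untouched API response in a raw zone. The local raw/ path is a stand-in for an object store such as S3 or ADLS:

import json
import os
from datetime import datetime, timezone

def land_raw(payload: dict, entity: str) -> str:
    """Write the raw API response to the raw zone for later reprocessing."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = f"raw/{entity}/{entity}_{ts}.json"  # illustrative layout
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(payload, f)
    return path

# e.g. land_raw(response.json(), "sales")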

Best Practices for Data Engineers

✔ Always design idempotent pipelines
✔ Log request/response metadata
✔ Store raw API responses for reprocessing
✔ Use incremental loads (timestamps, IDs)
✔ Monitor failures and latency
✔ Respect API rate limits


Conclusion

REST APIs are a primary data ingestion mechanism for data engineers. Mastering REST calls—authentication, pagination, retries, and error handling—will make your pipelines reliable, scalable, and production-ready.

If you understand REST APIs deeply, integrating any new data source becomes significantly easier.
If you'd like to connect, reach out on LinkedIn or drop me a message. I'd love to explore how I can help drive your data success!
