

Synchronous vs. Asynchronous API Requests: Technical Overview

This section provides a high-level technical explanation of how our synchronous and asynchronous API endpoints work. We'll examine the internal processes, the advantages of each approach, and best practices for implementation. For authentication details, please refer to the Authentication & Security section.

Both synchronous and asynchronous endpoints provide access to the same data, but they handle request processing differently. The choice between them depends on your data volume, response time requirements, and integration patterns.

Synchronous Processing

sequenceDiagram
    participant Client
    participant Sync Endpoint

    Client->>Sync Endpoint: GET /v2/devices/status-sync
    Note over Sync Endpoint: Process request immediately
    Sync Endpoint-->>Client: 200 OK with complete data in response body
    Note over Client: Process data immediately

When you make a request to a synchronous endpoint, the following happens:

  1. Direct Processing: The API server receives your request and immediately begins to process it. Your client application establishes an HTTP connection with our server, and this connection remains open while the server works on your request. During this time, your client is waiting for a response and cannot perform other operations using that same connection.

  2. Response Delivery: Once all processing is complete, the server packages the entire result set into a single HTTP response. This data is sent back to your client application in the response body, typically in JSON format. The entire dataset must be prepared before any part of it can be sent to the client.

This entire process—from request to response—happens within a single HTTP transaction, which is why it's called "synchronous." Your client application must wait for the entire process to complete before it can continue, making this approach simple but potentially limiting for large datasets or long-running operations.
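A synchronous call can be sketched in a few lines. This sketch uses only the Python standard library; the base URL is a placeholder, and the url_opener parameter exists only so the function can be exercised without a live server — in practice urllib.request.urlopen is used directly, with your authentication headers added per the Authentication & Security section.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder host; substitute your own


def fetch_device_status_sync(url_opener=urllib.request.urlopen,
                             timeout: float = 60.0) -> dict:
    """Issue one blocking GET and return the parsed JSON body.

    The call blocks until the server has prepared the entire response,
    so an explicit timeout guards against hanging past the typical
    30-60 second connection limit.
    """
    url = f"{BASE_URL}/v2/devices/status-sync"
    with url_opener(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Note that the single `with` block spans the whole transaction: the connection stays open, and the client does nothing else on it, until the full body has arrived.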

Technical Constraints

Synchronous endpoints are subject to several technical limitations:

  1. Timeout Limits: Most HTTP connections have timeout limits (typically 30-60 seconds). If data processing exceeds this time, the connection may be terminated before a response is delivered.

  2. Memory Constraints: The API server must hold the entire response in memory before sending it. Our system limits responses to 6MB to prevent resource exhaustion.

  3. Scaling Challenges: Under high load, synchronous requests can consume significant server resources, potentially affecting overall system performance.
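When a synchronous call does hit the timeout limit from item 1, the usual remedy is to re-issue the same query against the asynchronous endpoint. A minimal sketch of detecting that condition with the standard library; the opener parameter is only there to make the sketch testable without a live server.

```python
import socket
import urllib.request


def try_sync_fetch(url: str, timeout: float = 30.0,
                   opener=urllib.request.urlopen):
    """Return the raw response body, or None when the call timed out:
    the cue to fall back to the asynchronous POST /v2/devices/status."""
    try:
        with opener(url, timeout=timeout) as resp:
            return resp.read()
    except (TimeoutError, socket.timeout):
        # The server could not finish preparing the data within the
        # connection limit; retrying synchronously is unlikely to help.
        return None
```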

Asynchronous Processing

The asynchronous communication flow is more complex, but your client is not blocked while the data is being prepared, and the approach can handle much larger datasets. The flow is visualized in the diagram below:

sequenceDiagram
    participant Client
    participant POST Endpoint
    participant GET Endpoint
    participant Backend

    Client->>POST Endpoint: POST /v2/devices/status (with timerange and fileformat)
    POST Endpoint->>Backend: Start data preparation
    Backend-->>POST Endpoint: Return request_uuid
    POST Endpoint-->>Client: Response with status "Submitted" and request_uuid

    Note over Client: Wait and start polling...

    Client->>GET Endpoint: GET /v2/devices/status/{request_uuid}
    GET Endpoint->>Backend: Check status
    Backend-->>GET Endpoint: Status information
    GET Endpoint-->>Client: Status "In Progress"

    Note over Client: Wait and request again...

    Client->>GET Endpoint: GET /v2/devices/status/{request_uuid}
    GET Endpoint->>Backend: Check status
    Backend-->>GET Endpoint: Status information
    GET Endpoint-->>Client: Status "In Progress"

    Note over Backend: Data preparation completes

    Note over Client: Further request after waiting

    Client->>GET Endpoint: GET /v2/devices/status/{request_uuid}
    GET Endpoint->>Backend: Check status
    Backend-->>GET Endpoint: Status with URLs
    GET Endpoint-->>Client: Status "Completed" with download URLs

    Client->>Backend: Request file via download URL
    Backend-->>Client: Delivers result file (CSV/PARQUET)

What happens:

  1. Request Submission: When you submit a request to an asynchronous endpoint, the API immediately returns a request identifier (request_uuid) and places your actual data request in a processing queue.

  2. Background Processing: In the background, the system starts preparing your requested data. This happens independently of the HTTP connection that submitted the request, so your client is free to perform other work while processing continues.

  3. Status Tracking: The API reports the state of your request via the GET endpoint. The state starts as "Submitted" when you first send your request to the POST endpoint.

  4. Result Storage: Once processing is complete, the results are stored in a temporary cloud storage location and download URLs are generated. These URLs are returned by the GET endpoint.

  5. Result Retrieval: You use the provided URLs to download the result files directly from storage.
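The polling portion of the steps above can be sketched as a small loop. This sketch uses only the standard library; the base URL, the poll interval, the JSON field names (status, download_urls), and the injectable fetch parameter are assumptions for illustration — check the exact response schema against the API reference for these endpoints.

```python
import json
import time
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder host; substitute your own


def poll_until_complete(request_uuid: str,
                        fetch=None,
                        interval: float = 5.0,
                        max_attempts: int = 60) -> list:
    """Poll GET /v2/devices/status/{request_uuid} until the request
    reports "Completed", then return the download URLs."""
    if fetch is None:
        def fetch(url):  # default: real HTTP GET returning parsed JSON
            with urllib.request.urlopen(url) as resp:
                return json.loads(resp.read().decode("utf-8"))
    status_url = f"{BASE_URL}/v2/devices/status/{request_uuid}"
    for _ in range(max_attempts):
        body = fetch(status_url)
        if body["status"] == "Completed":
            return body["download_urls"]
        time.sleep(interval)  # wait before polling again
    raise TimeoutError(f"request {request_uuid} still not complete "
                       f"after {max_attempts} polls")
```

Because each poll is an independent, short-lived request, none of the synchronous timeout and memory constraints apply; the interval and attempt cap simply bound how long your client is willing to keep checking.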

Technical Advantages

The asynchronous approach offers several technical benefits:

  1. No Time Constraints: Processing can take as long as needed without concern for HTTP timeouts.

  2. Larger Data Sets: There's no practical limit to the size of data that can be returned, as results are provided as downloadable files rather than in the HTTP response.

Info

If you require the same data multiple times, you can cache the download URL and fetch the file again instead of submitting a new request. This avoids unnecessary API calls.
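One way to act on this tip is to keep a local copy of each downloaded file, keyed by its URL, so repeated needs for the same data trigger neither a new API request nor a new download. A minimal sketch using only the standard library; the cache directory and the opener parameter (there for testability) are illustrative, and since download URLs point at temporary storage, a cached URL is only usable while it remains valid.

```python
import hashlib
import pathlib
import urllib.request


def cached_download(url: str, cache_dir: str = "./download-cache",
                    opener=urllib.request.urlopen) -> pathlib.Path:
    """Download url once; later calls for the same URL return the
    local copy instead of fetching again."""
    cache = pathlib.Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    # Key the cached file by a hash of the URL to get a safe filename.
    target = cache / hashlib.sha256(url.encode("utf-8")).hexdigest()
    if not target.exists():
        with opener(url) as resp:
            target.write_bytes(resp.read())
    return target
```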