REST Outbound Connector
What Is It?
The REST Outbound Connector exposes your ALP-CONNEX data points to external systems via a standard HTTP REST API. It also supports webhooks — automatically pushing data point updates to external URLs whenever values change.
In short: the REST Outbound Connector is the outbound gateway that delivers your data to web applications, dashboards, and third-party systems over HTTP.
How It Works
The connector operates in two modes simultaneously:
- REST API (pull) — External systems can query current data point values on demand via HTTP GET requests.
- Webhooks (push) — The connector monitors the data store for changes and automatically sends updates to configured webhook URLs.
Valkey → REST Outbound Connector → HTTP GET (on demand)
                                 → Webhook POST (on change)
REST API
GET /api/datapoints
Retrieves current data point values from the data store. Supports optional filtering by IEC 104 address.
Query Parameters:
| Parameter | Type | Range | Required | Description |
|---|---|---|---|---|
| casdu | integer | 0–65,535 | No | Filter by Common Address of ASDU |
| ioa | integer | 0–16,777,215 | No | Filter by Information Object Address |
Examples:
Get all data points:
GET /api/datapoints
Filter by CASDU:
GET /api/datapoints?casdu=225
Filter by IOA:
GET /api/datapoints?ioa=455
Filter by both:
GET /api/datapoints?casdu=225&ioa=455
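As a sketch of how a client might assemble these queries, the helper below builds the request URL from optional filters. The base URL and function name are illustrative assumptions, not part of the connector:

```python
from urllib.parse import urlencode

def build_datapoints_url(base_url, casdu=None, ioa=None):
    """Build a GET /api/datapoints URL with optional CASDU/IOA filters."""
    params = {}
    if casdu is not None:
        params["casdu"] = casdu
    if ioa is not None:
        params["ioa"] = ioa
    query = urlencode(params)
    return f"{base_url}/api/datapoints" + (f"?{query}" if query else "")

# Mirrors the examples above:
print(build_datapoints_url("http://localhost:8080"))
print(build_datapoints_url("http://localhost:8080", casdu=225))
print(build_datapoints_url("http://localhost:8080", casdu=225, ioa=455))
```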
Response (200 OK):
{
"datapoints": [
"{json-datapoint-1}",
"{json-datapoint-2}"
]
}
Error Responses:
| Status | When | Example |
|---|---|---|
| 400 Bad Request | Invalid casdu or ioa parameter | { "code": "invalid_casdu", "message": "casdu must be an unsigned integer (0-65535)." } |
| 503 Service Unavailable | Data store (Valkey) is not connected | Standard ProblemDetails response |
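A minimal sketch of the server-side check that would produce the 400 response above. The function shape is an assumption; only the range and the error body follow the documented behavior:

```python
def validate_casdu(raw):
    """Validate a casdu query parameter.

    Returns (value, None) on success, or (None, problem) where the
    problem dict matches the documented 400 error body.
    """
    try:
        value = int(raw)
        if 0 <= value <= 65535:
            return value, None
    except (TypeError, ValueError):
        pass
    return None, {
        "code": "invalid_casdu",
        "message": "casdu must be an unsigned integer (0-65535).",
    }

print(validate_casdu("225"))    # accepted
print(validate_casdu("70000"))  # rejected: out of the 0-65535 range
```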
Webhooks
Webhooks allow the connector to push data point updates to external systems in real time, without polling.
How Webhooks Work
- The connector monitors the process:updates stream in the central data store for new data point entries.
- When new entries appear, they are collected into a batch.
- The batch is sent as a single HTTP POST request to each configured webhook URL.
- Custom headers (e.g., authentication tokens) can be included in every request.
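The steps above can be sketched as follows: given a webhook definition (a URL plus optional custom headers) and a batch of updates, assemble the single POST request. The function name and Content-Type default are assumptions for illustration:

```python
import json

def build_webhook_request(webhook, datapoints):
    """Assemble one HTTP POST request (url, headers, body) for a batch.

    Custom headers from the webhook configuration (e.g. auth tokens)
    are merged into every request.
    """
    headers = {"Content-Type": "application/json"}
    headers.update(webhook.get("headers", {}))
    body = json.dumps({"datapoints": datapoints})
    return webhook["url"], headers, body

url, headers, body = build_webhook_request(
    {"url": "https://example.com/webhook",
     "headers": {"Authorization": "Bearer your-token"}},
    ["{json-datapoint-1}", "{json-datapoint-2}"],
)
print(url, headers, body)
```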
Webhook Configuration
Each webhook is defined with a URL and optional custom headers:
{
"url": "https://example.com/webhook",
"headers": {
"Authorization": "Bearer your-token",
"X-Custom-Header": "value"
}
}
Webhook Payload
The POST request body contains the batch of updated data points:
{
"datapoints": [
"{json-datapoint-1}",
"{json-datapoint-2}"
]
}
Webhook Behavior
- Parallel delivery — Updates are published to all configured webhooks in parallel.
- Independent failures — If one webhook fails, the others still receive their updates.
- No webhooks configured — The connector continues running without publishing. Configure webhooks at any time via the Management UI.
- Failed deliveries — Logged for troubleshooting but do not block subsequent updates.
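The delivery semantics above (parallel, with independent failures) can be sketched like this. The `send` callable stands in for the actual HTTP POST; the loop shape and names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def deliver_to_all(webhooks, send):
    """Deliver one batch to every webhook in parallel.

    A failure on one webhook is recorded (it would be logged) but does
    not prevent the others from receiving their updates.
    """
    def attempt(hook):
        try:
            send(hook)
            return hook["url"], "delivered"
        except Exception as exc:
            return hook["url"], f"failed: {exc}"

    results = {}
    with ThreadPoolExecutor() as pool:
        for url, status in pool.map(attempt, webhooks):
            results[url] = status
    return results
```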
Configuration
The connector is configured in two ways:
- Management UI / API — The primary method. When you create a REST Outbound connector in the Management UI, the configuration (listen addresses, webhooks) is stored in the database and automatically pushed to the connector via the central data store.
- Application settings / Environment variables — For initial startup parameters (connector ID, data store connection, stream polling options).
Startup Parameters
These parameters are set via appsettings.json or environment variables:
| Parameter | Description | Required | Default |
|---|---|---|---|
| Id | Unique connector ID (from the Management UI) | Yes | — |
| ConnectionStrings:Redis | Data store (Valkey) connection string | No | localhost:6379 |
Stream polling options:
| Parameter | Description | Default |
|---|---|---|
| Redis:ConnectionRetrySeconds | Retry interval when data store connection fails | 2 |
| Redis:StreamPollIntervalMilliseconds | Interval for polling the data stream | 500 |
| Redis:StartFromLatest | Start from latest stream entry (true) or replay all existing entries (false) | true |
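A sketch of how these options could drive the polling loop, under two assumptions: stream positions follow the Redis/Valkey convention where "$" means "only new entries" and "0" means "replay from the start", and `read_entries` stands in for the actual stream read. The real connector's internals may differ:

```python
import time

def initial_stream_position(start_from_latest):
    # StartFromLatest=true → only entries added after startup ("$");
    # false → replay all existing entries ("0").
    return "$" if start_from_latest else "0"

def poll_stream(read_entries, start_from_latest=True,
                poll_interval_ms=500, max_polls=3):
    """Poll the update stream, sleeping StreamPollIntervalMilliseconds
    between reads. read_entries(position) -> (entries, next_position)."""
    position = initial_stream_position(start_from_latest)
    collected = []
    for _ in range(max_polls):
        entries, position = read_entries(position)
        collected.extend(entries)
        time.sleep(poll_interval_ms / 1000)
    return collected
```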
Environment Variables
Configuration values can be overridden using environment variables. Use double underscores (__) to represent nested levels:
# Connector ID
Id=ae2f909e-62bc-451e-a24e-4ea9e03153e3
# Data store (Valkey) connection
ConnectionStrings__Redis=valkey:6379
# Stream polling
Redis__ConnectionRetrySeconds=2
Redis__StreamPollIntervalMilliseconds=500
Redis__StartFromLatest=false
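To illustrate the double-underscore convention, the helper below expands flat environment variable names into nested configuration sections, mirroring how .NET-style configuration maps `__` to `:`. This is a sketch, not the connector's actual loader:

```python
def env_to_nested(env):
    """Expand double-underscore names into nested dicts, e.g.
    Redis__StartFromLatest -> {"Redis": {"StartFromLatest": ...}}."""
    config = {}
    for name, value in env.items():
        node = config
        parts = name.split("__")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return config

print(env_to_nested({
    "Id": "ae2f909e-62bc-451e-a24e-4ea9e03153e3",
    "ConnectionStrings__Redis": "valkey:6379",
    "Redis__StartFromLatest": "false",
}))
```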
Docker example:
docker run \
-e Id=ae2f909e-62bc-451e-a24e-4ea9e03153e3 \
-e ConnectionStrings__Redis=valkey:6379 \
-p 8080:8080 \
alp-connex-rest-out
Managed Settings (via Management UI)
The following settings are configured through the Management UI and pushed to the connector automatically:
| Setting | Description |
|---|---|
| Display Name | Human-readable name for the connector |
| Description | Description of the connector's purpose |
| Log Level | Override logging level (Debug, Information, Warning, Error, Critical) |
| Is Active | Whether the connector is active |
| Host URLs | URLs the service should listen on (overrides default ports) |
| Webhooks | Array of webhook configurations (URL + custom headers) |
Example runtime configuration (stored in the data store):
{
"displayName": "My REST Connector",
"description": "Exposes data points to the dashboard",
"logLevel": "Information",
"isActive": true,
"hostUrls": ["http://0.0.0.0:5030"],
"webhooks": [
{
"url": "https://dashboard.example.com/api/updates",
"headers": {
"Authorization": "Bearer token123"
}
}
]
}
Dynamic Configuration Updates
When you change a connector's settings in the Management UI, the backend pushes the updated configuration to the data store. The connector detects the change and shuts down; the container orchestrator (Docker, Kubernetes) then restarts it, and the new configuration is applied automatically on startup.
Data Store Integration
The connector reads data from the central data store (Valkey).
Data point values (key-value):
process:image:casdu:{CASDU}:ioa:{IOA}
The REST API reads current values from these keys using pattern matching based on the query filters.
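The key scheme above can be turned into a lookup key or glob-style pattern from the query filters, for example (the function name, and the use of "*" as the wildcard for scan-style matching, are assumptions):

```python
def datapoint_key_pattern(casdu=None, ioa=None):
    """Build a key or glob pattern following the
    process:image:casdu:{CASDU}:ioa:{IOA} scheme."""
    casdu_part = str(casdu) if casdu is not None else "*"
    ioa_part = str(ioa) if ioa is not None else "*"
    return f"process:image:casdu:{casdu_part}:ioa:{ioa_part}"

print(datapoint_key_pattern())                     # match everything
print(datapoint_key_pattern(casdu=225))            # one CASDU, any IOA
print(datapoint_key_pattern(casdu=225, ioa=455))   # exact key
```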
Event stream:
process:updates
The webhook publisher monitors this stream for new data point entries. Each entry contains casdu, ioa, source, and data fields.
The backend publishes connector configuration to the data store under:
connectors:{CONNECTOR_ID}:config — connector settings (webhooks, host URLs, log level)