# Sinks
Sinks deliver encoded bytes to their destination. You select a sink with the sink.type field. If
omitted, the default is stdout.
Most sinks buffer data before delivering it, which affects when you see output. For a full explanation of how this works, see Sink Batching.
Choosing the right URL:
Network sinks (http_push, remote_write, loki, otlp_grpc) accept a URL that is
resolved inside the process running the scenario. The same YAML run from your host CLI
vs. POSTed to a containerized sonda-server reaches different hosts. See
Endpoints & networking for when to use localhost,
Compose service names, or Kubernetes Service DNS.
## stdout
Writes events to standard output via a buffered writer. This is the default sink.
No additional parameters.
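A minimal scenario snippet, for completeness (this is also what you get when `sink` is omitted entirely):

```yaml
sink:
  type: stdout
```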
## file
Writes events to a file. Parent directories are created automatically.
| Parameter | Type | Required | Description |
|---|---|---|---|
| path | string | yes | Filesystem path to write to. |
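In scenario YAML (the path shown is illustrative):

```yaml
sink:
  type: file
  path: /tmp/sonda-output.txt
```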
You can also use the --output CLI flag as a shorthand:
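For example (the path is illustrative; the other flags mirror the metrics examples used throughout this page):

```shell
sonda metrics --name cpu --rate 10 --duration 30s \
  --output /tmp/sonda-output.txt
```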
## tcp
Writes events over a persistent TCP connection. The connection is established when the scenario starts.
| Parameter | Type | Required | Description |
|---|---|---|---|
| address | string | yes | Remote address in host:port format. |
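A minimal example (the address is illustrative):

```yaml
sink:
  type: tcp
  address: "127.0.0.1:9000"
```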
## udp
Sends each encoded event as a single UDP datagram. No connection is established -- an ephemeral
local port is bound and each event is sent via send_to.
| Parameter | Type | Required | Description |
|---|---|---|---|
| address | string | yes | Remote address in host:port format. |
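A minimal example (the address is illustrative):

```yaml
sink:
  type: udp
  address: "127.0.0.1:9999"
```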
## http_push
Note
This sink requires the http Cargo feature flag. Pre-built release binaries include this
feature. If building from source: cargo build --features http -p sonda.
Batches encoded events and delivers them via HTTP POST. Events accumulate in a buffer until the batch size is reached, then the buffer is flushed as a single POST request.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | yes | -- | Target URL for HTTP POST requests. |
| content_type | string | no | application/octet-stream | Value for the Content-Type header. |
| batch_size | integer | no | 4096 (4 KiB) | Flush threshold in bytes. Raise for high-rate scenarios. |
| headers | map | no | none | Extra HTTP headers sent with every request. |
```yaml
sink:
  type: http_push
  url: "http://localhost:8428/api/v1/import/prometheus"
  content_type: "text/plain"
  batch_size: 32768
```
CLI equivalent -- use --sink http_push with --endpoint, --content-type, and
optionally --batch-size:
```shell
sonda metrics --name cpu --rate 10 --duration 30s \
  --sink http_push --endpoint http://localhost:8428/api/v1/import/prometheus \
  --content-type "text/plain"
```
### Pushing to VictoriaMetrics
```yaml
encoder:
  type: prometheus_text
sink:
  type: http_push
  url: "http://localhost:8428/api/v1/import/prometheus"
  content_type: "text/plain"
```
### Custom headers
Use the headers map for protocols that require specific headers:
```yaml
sink:
  type: http_push
  url: "http://localhost:8428/api/v1/write"
  headers:
    Content-Type: "application/x-protobuf"
    Content-Encoding: "snappy"
    X-Prometheus-Remote-Write-Version: "0.1.0"
```
## remote_write
Batches metrics as Prometheus remote write requests. Designed to be paired with the
remote_write encoder, which produces length-prefixed protobuf TimeSeries bytes. The sink
accumulates entries and, on flush, wraps them in a WriteRequest, snappy-compresses it, and
HTTP POSTs with the correct protocol headers.
Note
This sink requires the remote-write Cargo feature flag. Pre-built release binaries include
this feature. If building from source: cargo build --features remote-write -p sonda.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | yes | -- | Remote write endpoint URL. |
| batch_size | integer | no | 5 | Flush threshold in number of TimeSeries entries. Raise for high-rate scenarios. |
```yaml
encoder:
  type: remote_write
sink:
  type: remote_write
  url: "http://localhost:8428/api/v1/write"
  batch_size: 5
```
CLI equivalent -- use --encoder remote_write --sink remote_write --endpoint:
```shell
sonda metrics --name cpu --rate 10 --duration 30s \
  --encoder remote_write \
  --sink remote_write --endpoint http://localhost:8428/api/v1/write
```
Compatible endpoints:
| Backend | URL |
|---|---|
| VictoriaMetrics | http://host:8428/api/v1/write |
| vmagent | http://host:8429/api/v1/write |
| Prometheus | http://host:9090/api/v1/write |
| Thanos Receive | http://host:19291/api/v1/receive |
| Cortex / Mimir | http://host:9009/api/v1/push |
## kafka
Batches events and publishes them to a Kafka topic. Uses a pure-Rust Kafka client (rskafka) for
static binary compatibility -- no C dependencies or OpenSSL required.
Note
This sink requires the kafka Cargo feature flag. Pre-built release binaries include this
feature. If building from source: cargo build --features kafka -p sonda.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| brokers | string | yes | -- | Comma-separated broker addresses (e.g. "broker1:9092,broker2:9092"). |
| topic | string | yes | -- | Kafka topic name. |
| tls | object | no | none | TLS encryption settings. See TLS. |
| sasl | object | no | none | SASL authentication settings. See SASL. |
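A minimal scenario snippet (broker address and topic are illustrative):

```yaml
sink:
  type: kafka
  brokers: "127.0.0.1:9092"
  topic: sonda-metrics
```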
CLI equivalent -- use --sink kafka with --brokers and --topic:
```shell
sonda metrics --name cpu --rate 10 --duration 30s \
  --sink kafka --brokers 127.0.0.1:9092 --topic sonda-metrics
```
Events are buffered until 64 KiB is accumulated, then published as a single Kafka record to
partition 0 of the configured topic. Broker-side auto-topic-creation is supported: the sink
retries metadata lookups, giving the broker time to create the topic if
auto.create.topics.enable=true.
### TLS
Most managed Kafka services (Confluent Cloud, AWS MSK, Aiven) require TLS-encrypted connections.
Add a tls block to enable encryption.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| tls.enabled | boolean | yes | false | Set to true to connect over TLS. |
| tls.ca_cert | string | no | Mozilla bundled roots | Path to a PEM-encoded CA certificate file. Use this for self-signed or internal CAs. |
```yaml
sink:
  type: kafka
  brokers: "broker.example.com:9093"
  topic: sonda-metrics
  tls:
    enabled: true
```
When ca_cert is omitted, Sonda trusts Mozilla's bundled root certificates (via webpki-roots).
This works out of the box for any broker whose certificate is signed by a public CA.
For brokers with self-signed or internal CA certificates, point ca_cert to the PEM file:
```yaml
sink:
  type: kafka
  brokers: "kafka-internal.corp:9093"
  topic: sonda-metrics
  tls:
    enabled: true
    ca_cert: /etc/ssl/certs/internal-ca.pem
```
Tip
TLS uses rustls (pure Rust) -- there is no dependency on OpenSSL. This keeps the static
musl binary fully self-contained.
### SASL
SASL authenticates your client to the Kafka broker. Sonda supports three mechanisms:
| Mechanism | When to use |
|---|---|
| PLAIN | Confluent Cloud, Aiven, most SaaS Kafka services. |
| SCRAM-SHA-256 | AWS MSK (serverless and provisioned with SCRAM enabled). |
| SCRAM-SHA-512 | Self-managed Kafka clusters preferring stronger hashing. |
| Field | Type | Required | Description |
|---|---|---|---|
| sasl.mechanism | string | yes | "PLAIN", "SCRAM-SHA-256", or "SCRAM-SHA-512". |
| sasl.username | string | yes | SASL username (API key for Confluent Cloud). |
| sasl.password | string | yes | SASL password (API secret for Confluent Cloud). |
```yaml
sink:
  type: kafka
  brokers: "broker.example.com:9093"
  topic: sonda-metrics
  tls:
    enabled: true
  sasl:
    mechanism: PLAIN
    username: sonda
    password: changeme
```
Warning
SASL can be used without TLS, but Sonda prints a warning because credentials are sent in plaintext over the network. Always enable TLS alongside SASL in production.
### Common configurations
- Confluent Cloud uses TLS + SASL PLAIN. Your API key is the username; your API secret is the password.
- AWS MSK with SASL/SCRAM authentication uses port 9096.
- Self-managed Kafka clusters typically pair an internal CA certificate with SCRAM auth.
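Combining the blocks above, a Confluent Cloud configuration looks roughly like this (the bootstrap address is a placeholder; substitute your cluster's endpoint and API credentials):

```yaml
sink:
  type: kafka
  brokers: "pkc-xxxxx.region.provider.confluent.cloud:9092"
  topic: sonda-metrics
  tls:
    enabled: true
  sasl:
    mechanism: PLAIN
    username: YOUR_API_KEY
    password: YOUR_API_SECRET
```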
Info
TLS and SASL are configured via scenario YAML only. There are no CLI flags for these options --
use a scenario file with --scenario.
## loki
Note
This sink requires the http Cargo feature flag. Pre-built release binaries include this
feature. If building from source: cargo build --features http -p sonda.
Batches log lines and delivers them to Grafana Loki via HTTP POST. Each call to write appends one log line, and the batch is flushed when it reaches the configured size.
Stream labels are configured at the top-level labels field of the scenario, not inside the
sink block. This is consistent with how labels work for all other signal types. The scenario-level
labels are used as Loki stream labels in the push API envelope.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | yes | -- | Base URL of the Loki instance. |
| batch_size | integer | no | 5 | Flush threshold in number of log entries. Raise for high-rate scenarios. |
```yaml
version: 2
defaults:
  rate: 10
  sink:
    type: loki
    url: "http://localhost:3100"
    batch_size: 50
scenarios:
  - signal_type: logs
    name: app_logs_loki
    labels:
      job: sonda
      env: dev
```
CLI equivalent -- use --sink loki --endpoint and --label for stream labels:
```shell
sonda logs --mode template --rate 10 --duration 30s \
  --sink loki --endpoint http://localhost:3100 \
  --label job=sonda --label env=dev
```
The sink POSTs to {url}/loki/api/v1/push.
## otlp_grpc
Batches OTLP protobuf data and delivers it via gRPC to an OpenTelemetry Collector. Designed to
be paired with the otlp encoder, which produces length-prefixed protobuf Metric or LogRecord
bytes. The sink accumulates entries and, on flush or when batch_size is reached, wraps them in
an ExportMetricsServiceRequest or ExportLogsServiceRequest and sends via gRPC unary call.
Feature flag and build requirement
This sink requires the otlp Cargo feature flag. Pre-built release binaries and Docker
images do not include this feature. You must build from source:
cargo build --features otlp -p sonda.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| endpoint | string | yes | -- | gRPC endpoint URL of the OTEL Collector (e.g. "http://localhost:4317"). |
| signal_type | string | yes | -- | "metrics" or "logs" -- must match the scenario signal type. |
| batch_size | integer | no | 5 | Flush threshold in number of data points or log records. Raise for high-rate scenarios. |
```yaml
encoder:
  type: otlp
sink:
  type: otlp_grpc
  endpoint: "http://localhost:4317"
  signal_type: metrics
  batch_size: 5
```

```yaml
encoder:
  type: otlp
sink:
  type: otlp_grpc
  endpoint: "http://localhost:4317"
  signal_type: logs
  batch_size: 50
```
CLI equivalents:
```shell
# Metrics
sonda metrics --name cpu --rate 10 --duration 30s \
  --encoder otlp \
  --sink otlp_grpc --endpoint http://localhost:4317 --signal-type metrics

# Logs (--signal-type defaults to "logs" automatically)
sonda logs --mode template --rate 10 --duration 30s \
  --encoder otlp \
  --sink otlp_grpc --endpoint http://localhost:4317
```
Scenario-level labels are automatically converted to OTLP Resource attributes, so they appear
as resource metadata in the Collector's output.
Compatible receivers:
| Backend | Default gRPC port |
|---|---|
| OpenTelemetry Collector | 4317 |
| Grafana Alloy | 4317 |
| Datadog Agent (OTLP) | 4317 |
| Elastic APM Server | 8200 |
## Retry with backoff
Network sinks can encounter transient failures -- connection resets, HTTP 5xx responses, broker
unavailability. By default, Sonda does not retry, which is ideal for CI pipelines where you want
fast failure feedback. For long-running soak tests or Kubernetes deployments where brief interruptions
are expected, you can add a retry: block to any network sink.
Retry applies to these sinks: http_push, remote_write, loki, otlp_grpc, kafka, tcp.
It does not apply to stdout, file, or udp.
### Configuration
Add the retry: block inside any network sink definition:
```yaml
sink:
  type: http_push
  url: "http://victoriametrics:8428/api/v1/import/prometheus"
  retry:
    max_attempts: 3         # retries after initial failure (total = 4 attempts)
    initial_backoff: 100ms  # first retry delay
    max_backoff: 5s         # backoff cap
```
All three fields are required when retry: is present:
| Field | Type | Description |
|---|---|---|
| max_attempts | integer | Number of retry attempts after the initial failure. Must be at least 1. Total calls = max_attempts + 1. |
| initial_backoff | duration string | Delay before the first retry (e.g. 100ms, 1s). |
| max_backoff | duration string | Upper bound on any single backoff delay. Must be >= initial_backoff. |
### How it works
Sonda uses exponential backoff with full jitter:
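As a sketch of what "full jitter" typically means (this is the standard formula, assumed here rather than taken from Sonda's source): the cap doubles from initial_backoff on each attempt until it reaches max_backoff, and the actual delay is drawn uniformly between zero and that cap.

```python
import random

def full_jitter_delay(attempt: int, initial_backoff: float, max_backoff: float) -> float:
    """Full-jitter backoff: uniform delay in [0, min(max_backoff, initial_backoff * 2**attempt)]."""
    cap = min(max_backoff, initial_backoff * (2 ** attempt))
    return random.uniform(0.0, cap)

# With initial_backoff=100ms and max_backoff=5s, the cap grows
# 0.1s, 0.2s, 0.4s, ... per attempt until hitting the 5s ceiling.
delays = [full_jitter_delay(a, 0.1, 5.0) for a in range(5)]
assert all(0.0 <= d <= 5.0 for d in delays)
```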
The jitter prevents synchronized retries when multiple sinks retry at the same time ("thundering herd"). Each retry attempt is logged to stderr:
```
sonda: retry 1/3 after 127ms (error: connection refused)
sonda: retry 2/3 after 312ms (error: connection refused)
sonda: all 3 retries exhausted (last error: connection refused)
```
If all retries are exhausted, the batch is discarded. This prevents unbounded buffer growth -- since Sonda generates synthetic data, losing a batch is acceptable.
### Error classification
Not all errors trigger retries. Each sink classifies errors as retryable (transient) or permanent:
| Sink | Retryable | Not retried |
|---|---|---|
| http_push, remote_write, loki | HTTP 5xx, 429 (rate limit), transport errors | HTTP 4xx (except 429) |
| otlp_grpc | UNAVAILABLE, DEADLINE_EXCEEDED, RESOURCE_EXHAUSTED | INVALID_ARGUMENT, UNAUTHENTICATED |
| kafka | All produce errors (typically transient broker/network issues) | -- |
| tcp | Connection reset, broken pipe (reconnects automatically) | -- |
Warning
Permanent errors (like a 401 Unauthorized or a malformed request returning 400) are never retried. Sonda logs a warning and discards the batch immediately.
### CLI flags
You can also configure retry from the command line with three flags. All three must be provided together (all-or-nothing):
```shell
sonda metrics --name cpu --rate 10 --duration 60s \
  --sink http_push --endpoint http://localhost:8428/api/v1/import/prometheus \
  --retry-max-attempts 3 --retry-backoff 100ms --retry-max-backoff 5s
```
CLI retry flags override any retry: block in the YAML scenario file. See
CLI Reference for details.