Exporters let you stream performance metrics, received OTEL events, and generated threshold crossing alerts to your internal applications for real-time data analysis. Exporters are currently available only in on-prem deployments, and only the Kafka exporter is supported at this time.
Kafka Exporter
A Kafka exporter is used to configure Provider Connectivity Assurance to forward data to a Kafka broker. The Kafka broker must have network connectivity with your installed Provider Connectivity Assurance instance.
You can create one or more export topics in your Kafka exporter. Topics must be pre-provisioned on the Kafka broker. You determine what data is streamed to each export topic by creating one or more data subscriptions. A data subscription can be one of three data types:
Session performance metrics
For streaming performance metrics for a given object type
Fault management events
For streaming consumed events from a Mobility Collector Fault Monitoring component
Alerts events
For streaming generated threshold crossing alerts based on observed performance metrics crossing static or dynamic policy levels
Additionally, a heartbeat topic can be specified to have the exporter continuously send a message on a specified interval.
Session performance metrics
When selecting the Session performance metrics data type, you must also specify one or more Session types. Performance metrics for the selected session types are streamed to the exporter as one JSON message per monitored object, sent to the export topic specified in the data subscription. The included metrics are determined by the ingestion dictionary for the Session type; metadata is determined by the metadata mappings configured in the settings.
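To illustrate how a downstream application might handle these messages, here is a minimal sketch that decodes one session performance metrics payload. Field names follow the example payload in this section; the Kafka consumer that actually delivers the raw bytes (e.g. a confluent-kafka or kafka-python client) is omitted and left as an assumption.

```python
import json

def decode_metric_message(raw: bytes) -> dict:
    """Decode one session performance metrics message.

    Field names follow the example payload in this documentation;
    the Kafka consumer delivering `raw` is omitted here.
    """
    msg = json.loads(raw)
    return {
        "object": msg["MonitoredObjectName"],
        "timestamp_ns": msg["Timestamp"],
        "metrics": msg["Metrics"],
        "metadata": msg.get("MetaData", {}),
    }

# Trimmed sample payload for illustration:
sample = (b'{"Metrics": {"cpuutilizationavg": 45.1},'
          b' "MetaData": {"schema": "pod-utilization"},'
          b' "MonitoredObjectName": "amf-pod-100",'
          b' "Timestamp": 1762907730000000000}')
decoded = decode_metric_message(sample)
print(decoded["metrics"]["cpuutilizationavg"])  # 45.1
```

Note that `Timestamp` is in nanoseconds since the Unix epoch, as shown in the example payload below.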
Example Metric Payload
{
"Metrics": {
"cpuutilizationavg": 45.138061,
"memoryusageavg": 20.886177
},
"MetaData": {
"agentid": "1c73c704-0244-4940-be89-7e8d668912da",
"index": "servname#sbi-unf",
"node_id": "node-0020",
"schema": "pod-utilization",
"source_ip": "172.16.10.10",
"test": "test1"
},
"MonitoredObjectID": "node-0020_servname#sbi-unf_cisco-mobilitycore-pm-pod-utilization",
"MonitoredObjectName": "amf-pod-100_servname#sbi-unf_cisco-mobilitycore-pm-pod-utilization",
"Timestamp": 1762907730000000000,
"Direction": -1,
"ObjectType": "cisco-mobilitycore-pm-pod-utilization",
"SessionId": "node-0020_servname#sbi-unf_cisco-mobilitycore-pm-pod-utilization",
"ErrorCode": 0,
"Topology": [],
"SourceLocation": {
"Lat": 0,
"Lon": 0
},
"DestinationLocation": {
"Lat": 0,
"Lon": 0
}
}
OTEL Events
OTEL events are streamed based on what is collected from an OTEL collector. This is primarily used to consume events from the Mobility Collector FM component, which receives SNMP traps from the network and converts them to OTEL events through a customizable mapping service. Events are sent to the export topic specified in the data subscription; no additional filtering options are available at this time.
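Downstream consumers typically need to flatten the nested OTLP structure into a simple key/value map per log record. A minimal sketch, assuming the `resourceLogs` / `scopeLogs` / `logRecords` layout shown in the example event in this section, and handling only the `stringValue` and `intValue` attribute variants:

```python
import json

def log_record_attributes(otel_event: dict) -> list[dict]:
    """Flatten each log record's attributes into a {key: value} dict.

    Structure follows the example OTEL event in this documentation;
    only stringValue and intValue attribute variants are handled.
    """
    out = []
    for rl in otel_event.get("resourceLogs", []):
        for sl in rl.get("scopeLogs", []):
            for rec in sl.get("logRecords", []):
                attrs = {}
                for a in rec.get("attributes", []):
                    v = a["value"]
                    attrs[a["key"]] = v.get("stringValue", v.get("intValue"))
                out.append(attrs)
    return out

# Trimmed sample event for illustration:
event = json.loads('''{"resourceLogs":[{"scopeLogs":[{"logRecords":[{"attributes":[
  {"key":"severity","value":{"stringValue":"unknown"}},
  {"key":"startTimestamp","value":{"intValue":"1764088204000"}}]}]}]}]}''')
print(log_record_attributes(event))
```

Note that OTLP JSON encodes `intValue` as a string (e.g. `"1764088204000"`), so convert with `int()` if you need numeric timestamps.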
Example OTEL Event
{
"resourceLogs": [
{
"resource": {
"attributes": [
{"key": "device.ip", "value": {"stringValue": "127.0.0.1"}},
{"key": "tenant_id", "value": {"stringValue": "38365bed-fbd0-4027-ab74-694b36adddbd"}},
{"key": "user_roles", "value": {"stringValue": "skylight-admin"}},
{"key": "debug_tenant_id", "value": {"stringValue": "38365bed-fbd0-4027-ab74-694b36adddbd"}},
{"key": "debug_user_roles", "value": {"stringValue": "skylight-admin"}},
{"key": "validation_error", "value": {"stringValue": "user_roles missing tenant-admin"}},
{"key": "kafka.topic.name", "value": {"stringValue": "otel-38365bed-fbd0-4027-ab74-694b36adddbd-in"}}
]
},
"scopeLogs": [
{
"scope": { "name": "matrix-fm" },
"logRecords": [
{
"timeUnixNano": "1764088204000000000",
"observedTimeUnixNano": "1764088204000000000",
"severityNumber": 17,
"severityText": "error",
"body": {
"stringValue": "SNMPTRAP timestamp=[2025-11-25T16:30:04Z] address=[UDP: [127.0.0.1]:44118->[127.0.0.1]:162] pdu_security=[TRAP2, SNMP v2c, community pca_1] vars[.1.3.6.1.2.1.1.3.0 = Timeticks: (587279923) 67 days, 23:19:59.23\t.1.3.6.1.6.3.1.1.4.1.0 = OID: .1.3.6.1.4.1.24961.2.103.2.0.4\t.1.3.6.1.2.1.1.3.0 = Timeticks: (9028244) 1 day, 1:04:42.44\t.1.3.6.1.4.1.24961.2.103.1.1.5.1.2.0 = STRING: ;;connection-failure;;\t.1.3.6.1.4.1.24961.2.103.1.1.5.1.3.0 = STRING: ;;dummy-dev;;\t.1.3.6.1.4.1.24961.2.103.1.1.5.1.4.0 = STRING: ;;/ncs:devices/ncs:device[ncs:name='dummy-dev';]"
},
"attributes": [
{"key": "data_type", "value": {"stringValue": "alert"}},
{"key": "event_class", "value": {"stringValue": "ec9c261a51b22aa6"}},
{"key": "severity", "value": {"stringValue": "unknown"}},
{"key": "startTimestamp", "value": {"intValue": "1764088204000"}},
{"key": "state", "value": {"stringValue": "raised"}},
{"key": "vendor", "value": {"stringValue": "cisco"}},
{"key": "tenant_id", "value": {"stringValue": "38365bed-fbd0-4027-ab74-694b36adddbd"}},
{"key": "user_roles", "value": {"stringValue": "skylight-admin"}},
{"key": "entityId", "value": {"stringValue": "ottawa-lab-2"}},
{"key": "entityClass", "value": {"stringValue": "cisco-iosxr"}},
{"key": "instanceId", "value": {"stringValue": "ec9c261a51b22aa61764088204000"}}
],
"eventName": "NA"
}
]
}
]
}
]
}
Alert events
Provider Connectivity Assurance can generate threshold crossing alerts (TCAs) when observed performance metrics cross either static or dynamic policy levels. These TCA alerts can be exported to an external Kafka topic.
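Provider Connectivity Assurance evaluates these policies internally; purely as an illustration of the static vs. dynamic distinction, a static policy compares a metric against a fixed level, while a dynamic policy derives the level from observed history. The baseline formula below (mean plus `k` standard deviations) is an illustrative assumption, not the product's actual algorithm:

```python
from statistics import mean, stdev

def crosses_static(value: float, threshold: float) -> bool:
    """Static policy: a fixed, pre-configured threshold."""
    return value > threshold

def crosses_dynamic(value: float, history: list[float], k: float = 3.0) -> bool:
    """Dynamic policy (illustrative): threshold derived from a baseline,
    here mean + k standard deviations of recent observations."""
    return value > mean(history) + k * stdev(history)

print(crosses_static(45.1, 80.0))                   # False: below the fixed level
print(crosses_dynamic(95.0, [40, 42, 41, 43, 39]))  # True: far above the baseline
```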
Configuring Heartbeat Messages
You can also configure a heartbeat message that is sent to a designated topic at a fixed interval, indicating the operational status of the service. This lets downstream applications track service availability, which is pivotal for maintaining operational integrity.
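On the consuming side, a common pattern is to treat the exporter as unavailable after several intervals pass with no heartbeat. A minimal sketch; the grace factor of three missed intervals is an illustrative downstream choice, not a Provider Connectivity Assurance setting:

```python
def heartbeat_stale(last_seen_s: float, now_s: float,
                    interval_s: float, missed: int = 3) -> bool:
    """Treat the service as unavailable once more than `missed`
    heartbeat intervals have elapsed since the last message.

    The `missed` grace factor is an illustrative downstream choice.
    """
    return (now_s - last_seen_s) > missed * interval_s

# With a 10-second interval: 30s of silence is still within the grace
# window, 31s is not.
print(heartbeat_stale(last_seen_s=100.0, now_s=130.0, interval_s=10.0))  # False
print(heartbeat_stale(last_seen_s=100.0, now_s=131.0, interval_s=10.0))  # True
```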
How to create an exporter
Go to settings

Go to Exporters

Hit the + button to launch the New exporter configuration form

Enter the following fields:
Exporter name: Display name for the exporter
Kafka client ID: The client id used when connecting to the Kafka broker. If left unset, the client id will default to the name of the exporter.
Bootstrap servers: A comma separated list of Kafka bootstrap servers to connect to.
Security & authentication
Authentication method: The method used to authenticate with the Kafka broker; choose either None or TLS.
If using mutual-TLS, upload the CA certificate that was used to sign the Kafka broker’s certificate or the broker’s self-signed certificate. To have the Kafka broker trust the certificate used by Provider Connectivity Assurance, you must also download the CA certificate using an HTTP API. This can be done via Postman, Bruno, cURL, etc. An example cURL command:
curl https://${TENANT_URL}/api/v4/export/exporters-client-ca -H "Authorization: Bearer ${TOKEN}" -o client-ca-cert.pem
An API token can be obtained using OAuth 2.0 or directly via a Personal Access Token.
The downloaded certificate must be added to the Kafka broker’s trust store. Refer to your Kafka provider’s documentation for details on how to accomplish this. For example:
keytool -import -trustcacerts -alias pca-ca -file client-ca-cert.pem -keystore certs/broker/truststore.jks -storepass changeit -noprompt
Heartbeat Configuration
Enabled: Turn on to send a heartbeat message to the Kafka broker
Interval (seconds): The interval at which the heartbeat message is sent
Topic: The topic to which the heartbeat message is sent

Select the Export Topics tab:
Press the Add Export Topic button
Fill in the name of your export topic. Note this topic must already exist on your Kafka broker.
Choose a Data type for the export topic. Additional details on the data types are described above.
Once an exporter is saved, it will automatically connect to the specified Kafka broker.
Advanced Troubleshooting
In an on-prem deployment, you can troubleshoot using basic Kubernetes utilities.
The connection to the Kafka broker is made from the analytics-streamer-api pod in the deployment. To troubleshoot, open the provider-connectivity-assurance shell and view the pod logs:
# From the directory where you've installed provider-connectivity-assurance
sudo ./provider-connectivity-assurance shell
kubectl -n pca logs deployment/analytics-streamer-api -f
© 2026 Cisco and/or its affiliates. All rights reserved.