Exporters


Exporters let you stream metrics, events, and alerts to your internal applications. Exporters are currently available only in on-prem deployments, and only the Kafka exporter type is supported at this time.

Kafka Exporter

A Kafka exporter configures PCA to forward data to a Kafka broker. The Kafka broker must have network connectivity to your installed Provider Connectivity Assurance instance.

You can create one or more export topics in your Kafka exporter; topics must be pre-provisioned on the Kafka broker. You determine what data is streamed to each export topic by creating one or more data subscriptions. A data subscription can be one of three types:

  • Metrics

    • For metrics, you select one or more object types to include

  • OTEL Events

  • Alerts (coming soon)

Additionally, a heartbeat topic can be specified to have the exporter continuously send a message at a specified interval.
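The heartbeat behavior can be sketched as a simple timed send loop. This is an illustrative model only, not the PCA implementation; the `StubProducer`, the `run_heartbeat` helper, and the `pca-heartbeat` topic name are all assumptions for the example (a real deployment would use a Kafka producer client such as kafka-python's `KafkaProducer`):

```python
import time

class StubProducer:
    """Stand-in for a Kafka producer; records messages instead of sending them."""
    def __init__(self):
        self.sent = []

    def send(self, topic, value):
        self.sent.append((topic, value))

def run_heartbeat(producer, topic, interval_seconds, max_beats=None):
    """Send a heartbeat message to `topic` every `interval_seconds`.

    `producer` can be any object with a send(topic, value) method,
    e.g. a kafka-python KafkaProducer (an assumption, not PCA internals).
    `max_beats` bounds the loop for demonstration; the exporter itself
    runs continuously.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        producer.send(topic, b"heartbeat")
        beats += 1
        if max_beats is not None and beats >= max_beats:
            break
        time.sleep(interval_seconds)
    return beats

producer = StubProducer()
# "pca-heartbeat" is a hypothetical topic name for illustration
beats = run_heartbeat(producer, "pca-heartbeat", interval_seconds=0, max_beats=3)
```

The stub lets you verify the scheduling logic without a live broker.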

Metrics

Metrics are sent as JSON payloads, one per monitored object, to the export topic specified by a data subscription provisioned in the exporter. The metric fields are determined by the ingestion dictionary for the object type; the MetaData fields are determined by the metadata mappings provisioned for the deployment.

Example Metric Payload

{
  "Metrics": {
    "cpuutilizationavg": 45.138061,
    "memoryusageavg": 20.886177
  },
  "MetaData": {
    "agentid": "1c73c704-0244-4940-be89-7e8d668912da",
    "index": "servname#sbi-unf",
    "node_id": "node-0020",
    "schema": "pod-utilization",
    "source_ip": "172.16.10.10",
    "test": "test1"
  },
  "MonitoredObjectID": "node-0020_servname#sbi-unf_cisco-mobilitycore-pm-pod-utilization",
  "MonitoredObjectName": "amf-pod-100_servname#sbi-unf_cisco-mobilitycore-pm-pod-utilization",
  "Timestamp": 1762907730000000000,
  "Direction": -1,
  "ObjectType": "cisco-mobilitycore-pm-pod-utilization",
  "SessionId": "node-0020_servname#sbi-unf_cisco-mobilitycore-pm-pod-utilization",
  "ErrorCode": 0,
  "Topology": [],
  "SourceLocation": {
    "Lat": 0,
    "Lon": 0
  },
  "DestinationLocation": {
    "Lat": 0,
    "Lon": 0
  }
}
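A consumer of the export topic can decode these payloads with standard JSON tooling. The sketch below uses field names from the example above; the `parse_metric` helper and the truncated sample payload are illustrative, not part of the product:

```python
import json
from datetime import datetime, timezone

def parse_metric(raw: bytes) -> dict:
    """Decode one exported metric message and normalize its
    nanosecond-precision Timestamp to an aware UTC datetime.
    Helper name is illustrative."""
    payload = json.loads(raw)
    ts_ns = payload["Timestamp"]
    payload["TimestampUTC"] = datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc)
    return payload

# Abbreviated version of the example payload above
sample = (
    b'{"Metrics": {"cpuutilizationavg": 45.138061},'
    b' "Timestamp": 1762907730000000000,'
    b' "ObjectType": "cisco-mobilitycore-pm-pod-utilization"}'
)
metric = parse_metric(sample)
```

Note that `Timestamp` is expressed in nanoseconds since the Unix epoch, so it must be divided by 1e9 before use with second-based time APIs.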

OTEL Events

OTEL events are streamed as they are collected from an OTEL collector. They are sent to the export topic specified by a data subscription provisioned in the exporter.

Example OTEL Event

{
  "resourceLogs": [
    {
      "resource": {
        "attributes": [
          {"key": "device.ip", "value": {"stringValue": "127.0.0.1"}},
          {"key": "tenant_id", "value": {"stringValue": "38365bed-fbd0-4027-ab74-694b36adddbd"}},
          {"key": "user_roles", "value": {"stringValue": "skylight-admin"}},
          {"key": "debug_tenant_id", "value": {"stringValue": "38365bed-fbd0-4027-ab74-694b36adddbd"}},
          {"key": "debug_user_roles", "value": {"stringValue": "skylight-admin"}},
          {"key": "validation_error", "value": {"stringValue": "user_roles missing tenant-admin"}},
          {"key": "kafka.topic.name", "value": {"stringValue": "otel-38365bed-fbd0-4027-ab74-694b36adddbd-in"}}
        ]
      },
      "scopeLogs": [
        {
          "scope": { "name": "matrix-fm" },
          "logRecords": [
            {
              "timeUnixNano": "1764088204000000000",
              "observedTimeUnixNano": "1764088204000000000",
              "severityNumber": 17,
              "severityText": "error",
              "body": {
                "stringValue": "SNMPTRAP timestamp=[2025-11-25T16:30:04Z] address=[UDP: [127.0.0.1]:44118->[127.0.0.1]:162] pdu_security=[TRAP2, SNMP v2c, community pca_1] vars[.1.3.6.1.2.1.1.3.0 = Timeticks: (587279923) 67 days, 23:19:59.23\t.1.3.6.1.6.3.1.1.4.1.0 = OID: .1.3.6.1.4.1.24961.2.103.2.0.4\t.1.3.6.1.2.1.1.3.0 = Timeticks: (9028244) 1 day, 1:04:42.44\t.1.3.6.1.4.1.24961.2.103.1.1.5.1.2.0 = STRING: ;;connection-failure;;\t.1.3.6.1.4.1.24961.2.103.1.1.5.1.3.0 = STRING: ;;dummy-dev;;\t.1.3.6.1.4.1.24961.2.103.1.1.5.1.4.0 = STRING: ;;/ncs:devices/ncs:device[ncs:name='dummy-dev';]"
              },
              "attributes": [
                {"key": "data_type", "value": {"stringValue": "alert"}},
                {"key": "event_class", "value": {"stringValue": "ec9c261a51b22aa6"}},
                {"key": "severity", "value": {"stringValue": "unknown"}},
                {"key": "startTimestamp", "value": {"intValue": "1764088204000"}},
                {"key": "state", "value": {"stringValue": "raised"}},
                {"key": "vendor", "value": {"stringValue": "cisco"}},
                {"key": "tenant_id", "value": {"stringValue": "38365bed-fbd0-4027-ab74-694b36adddbd"}},
                {"key": "user_roles", "value": {"stringValue": "skylight-admin"}},
                {"key": "entityId", "value": {"stringValue": "ottawa-lab-2"}},
                {"key": "entityClass", "value": {"stringValue": "cisco-iosxr"}},
                {"key": "instanceId", "value": {"stringValue": "ec9c261a51b22aa61764088204000"}}
              ],
              "eventName": "NA"
            }
          ]
        }
      ]
    }
  ]
} 
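OTLP-style payloads like the one above wrap every attribute value in a typed field (`stringValue`, `intValue`, and so on), which consumers usually flatten before use. The sketch below, with a trimmed-down copy of the example event, shows one way to do this; the `flatten_attributes` helper is illustrative:

```python
import json

def flatten_attributes(attrs: list) -> dict:
    """Collapse OTLP key/value attribute pairs into a plain dict,
    unwrapping the typed value wrapper (stringValue, intValue, ...)."""
    out = {}
    for a in attrs:
        # each wrapper dict carries exactly one typed field
        out[a["key"]] = next(iter(a["value"].values()))
    return out

# Abbreviated version of the example event above
event = json.loads('''{
  "resourceLogs": [{
    "resource": {"attributes": [
      {"key": "device.ip", "value": {"stringValue": "127.0.0.1"}}
    ]},
    "scopeLogs": [{
      "scope": {"name": "matrix-fm"},
      "logRecords": [{
        "severityText": "error",
        "attributes": [
          {"key": "state", "value": {"stringValue": "raised"}},
          {"key": "startTimestamp", "value": {"intValue": "1764088204000"}}
        ]
      }]
    }]
  }]
}''')

record = event["resourceLogs"][0]["scopeLogs"][0]["logRecords"][0]
attrs = flatten_attributes(record["attributes"])
```

Note that OTLP encodes 64-bit integers such as `intValue` and `timeUnixNano` as JSON strings, so numeric fields may still need an explicit `int(...)` conversion.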

How to create an exporter

  1. Go to Settings

  2. Go to Exporters

  3. Click the + button to launch the New exporter configuration form

  4. Enter the following fields:

    1. Name: Display name for the exporter

    2. Client ID: The client ID used when connecting to the Kafka broker

    3. Bootstrap Servers: A comma-separated list of Kafka bootstrap servers to connect to

    4. Export Topic: The name of the topic that exported metrics will be published to

      1. Note that in a future release, you will be able to specify multiple export topics for a single exporter. This can be emulated in the current release by creating multiple exporters to the same Kafka broker.

    5. Object Type: Selects the object type; monitored objects of that type are exported to the export topic

      1. Note that in a future release, you will be able to select multiple object types per export topic

    6. Heartbeat Configuration

      1. Enabled: Turn on to send a heartbeat message to the Kafka broker

      2. Interval (seconds): The interval at which the heartbeat message is sent

      3. Topic: The topic to which the heartbeat message is sent

Once an exporter is saved, it automatically connects to the specified Kafka broker.
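Because export and heartbeat topics must be pre-provisioned on the broker, it can be worth checking for missing topics before saving an exporter. A minimal sketch, assuming you can list the broker's topics (for example with kafka-python's `KafkaConsumer(bootstrap_servers=...).topics()`); the function name and topic names are hypothetical:

```python
def missing_topics(required, existing):
    """Return the required topic names that are absent from the broker.

    `existing` would typically come from a Kafka client, e.g.
    KafkaConsumer(bootstrap_servers=...).topics() in kafka-python
    (an assumption; use whatever client your environment provides).
    """
    return sorted(set(required) - set(existing))

# Hypothetical topic names for illustration
gaps = missing_topics(
    ["pca-metrics", "pca-heartbeat"],
    {"pca-metrics", "__consumer_offsets"},
)
```

An empty result means every topic the exporter needs already exists on the broker.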

Advanced Troubleshooting

In an on-prem deployment, you can troubleshoot using basic Kubernetes utilities.

The connection to the Kafka broker is made from the analytics-streamer-api pod in the deployment. To troubleshoot, open the provider-connectivity-assurance shell and follow the pod's logs:

# From the directory where you've installed provider-connectivity-assurance
sudo ./provider-connectivity-assurance shell
kubectl -n pca logs deployment/analytics-streamer-api -f