Once the Fault & Mobility Performance Monitoring Collector has been installed (as covered in the previous step), the final step is to configure Provider Connectivity Assurance to ingest and process mobility performance monitoring data.
The high-level process and the step-by-step procedure are outlined below.
The following diagram provides a visual of the components to be configured:
Getting Ready
There are a few prerequisites you must complete before you continue with this configuration procedure.
Software Tools Required
A text editor (notepad++, Microsoft Visual Studio Code, TextEdit, gedit, Vim, GNU nano, etc.) to edit and save configurations. Do not use a word processor (Word, Wordpad, Pages, etc.).
A REST API client (Postman, Bruno) to query API endpoints and GET, PUT, POST, PATCH configurations as per the procedure.
API Documentation
The Skylight Analytics API (now Provider Connectivity Assurance API) and Sensor Orchestrator are API collections that contain the APIs required to configure the Telemetry Collector. Each can be downloaded from https://api.accedian.io by clicking “Platform and Session Metric Query APIs” or “Sensor Agent Management APIs” (highlighted below) and then clicking the Download button at the top of each page. Once you have downloaded both files to your PC, import them as Collections in your REST API client.
Once you select either of these two API collections, the download button appears at the top of the page:
Personal Access Token
You will require a personal access token (PAT) to authenticate the Telemetry Collector with Provider Connectivity Assurance. The PAT is also required to authenticate as a user when calling the APIs.
To create this access token, you must access and log in to the Zitadel Administration user interface, which is included as the identity and access management service within Provider Connectivity Assurance. If DNS is available, the Zitadel Administration user interface is accessed at https://auth.<deployment URL>. If DNS is not available, use the IP address of the Provider Connectivity Assurance deployment tenant and port 3443:
https://<IP address>:3443
To create this access token, follow these steps in the Zitadel Administration user interface:
Change the organization to the Provider Connectivity Assurance tenant.
Note: The name “PCA” may have been used for the tenant during installation, or a different name may have been chosen by the installer. Check the drop-down list to find your specific tenant name.
Under the Users tab, click the + New button to create a new "Service User". In the token type drop-down menu, select the Bearer option.
Next, assign the "tenant-admin" role to this user. In the left-side navigation, select the Authorizations menu option and click the + New button.
In the “Search for a project” pop up, choose Analytics.
Select tenant-admin role.
Generate a new Personal Access Token with an appropriate expiration date. Select Personal Access Tokens and click the + New button.
Note: The token must be regenerated and updated before it expires, so choose a timeline that works for your organization.
Save this Personal Access Token to a file on your computer as you will not be able to retrieve it again later from the Zitadel administration user interface. It will be required in the Telemetry Collector configuration procedure, and for sending API calls with the API client of your choice (Postman or Bruno).
The generated value of the token will look something like this:
pNy2-3NuLapIXxPHWokyUEc4H2i5t67NRyStCHOK466MvOVseVykfgeI4AOfzWKLT4kkvr3KWDg
REST API Client Setup
In your API client, configure the collection to use the Personal Access Token generated in a prior step: navigate to the Collection Authorization configuration, choose Auth Type ▶ Bearer Token, and paste the Personal Access Token into the Token field.
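Outside the API client, the same Bearer-token header can be exercised with curl. The base URL and token below are illustrative placeholders, not values from this deployment:

```shell
# Placeholder values for illustration only; substitute your tenant's
# deployment URL and the PAT generated in Zitadel.
BASE_URL="https://mytenant.example.com"
PAT="pNy2-EXAMPLE-TOKEN"

# Every request in the imported collections carries this header:
AUTH_HEADER="Authorization: Bearer ${PAT}"
echo "${AUTH_HEADER}"

# Uncomment to issue a real request once BASE_URL and PAT are set:
# curl -s -H "${AUTH_HEADER}" "${BASE_URL}/api/v3/ingestion-dictionaries"
```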
Now that these prerequisites are complete, you are ready to start the configuration procedure.
1. Configure and Deploy Sensor Collector
Use the Provider Connectivity user interface to create a new Sensor Collector by selecting Sensors ▶ Collectors ▶ Sensor Collector.
Choose the + button on the right to create a new Sensor Collector.
Select Gateway in the Type field and then click the Configure Sensor Collector button to proceed.
Give the Sensor Collector a meaningful name. This must be a unique name compared to all other named Sensor Collectors on the tenant, and it must not contain spaces.
Tip: The best practice is to create a name that includes:
- An indication of the incoming data type
- Name of the server upon which it will be deployed
- Name of the Provider Connectivity tenant to which it is connected
Example: Using a name format like PM_mobility_<server_name>_<tenant_name> makes it easier to locate the Sensor Collector in the inventory later, especially when there are many Sensor Collectors listed.
In the Metric configuration field, select telemetry-collector.
Populate the zone field with a unique value. The recommended best practice is to use the same value that was used as the Sensor Collector name. For example, in this screenshot, the zone and the name are both set to “my-new-sensor-collector”.
Click the checkmark in the top right corner to save.
Select your Sensor Collector from the list and click the ellipsis (three dots) in the upper-right corner and choose Download (Docker). This triggers the download to your local machine.
Note: The package is built dynamically, so it requires a few minutes to complete; the banner at the top displays Preparing file for download... (this may take a minute).
Transfer the .tar file to the target machine where the Sensor Collector will run.
You need to either generate a self-signed certificate or use a certificate that is authorized by a certificate authority of choice.
If you choose the self-signed option, run the following commands from a terminal on a machine that has OpenSSL installed:

# generate CA DER certificate and private key
openssl req -x509 -newkey rsa:2048 -keyout ca.key -out ca.crt -outform DER -days 365

# generate TLS private key
openssl genpkey -algorithm RSA -outform DER -out tls.key -pkeyopt rsa_keygen_bits:2048

# generate certificate request for TLS certificates
openssl req -new -key tls.key -out ca.csr -outform DER

# sign TLS certificate with CA certificate
openssl x509 -req -in ca.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt -days 365 -outform DER
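For automation, the interactive certificate-detail prompts can be avoided by passing the subject inline, and the resulting DER file can be sanity-checked before it is copied anywhere. A minimal sketch (the CN value is a placeholder):

```shell
# Non-interactive variant of the CA step: -subj suppresses the prompts,
# -nodes leaves the CA key unencrypted (acceptable only for lab use).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout ca.key -out ca.crt -outform DER -days 365

# Confirm the DER certificate parses and carries the expected subject
openssl x509 -in ca.crt -inform DER -noout -subject
```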
Once you have transferred the .tar file (step 8) and obtained a certificate (step 9), execute the following commands on the target machine:
mkdir sensorCollectorDir
mv sensorCollectorArchive.tar.gz sensorCollectorDir
cd sensorCollectorDir/
gunzip sensorCollectorArchive.tar.gz
tar xvf sensorCollectorArchive.tar
Copy the certificates into the sensorCollector target directory using these commands:
# go to the folder where sensorCollector is unpacked
mkdir .tls_custom
# copy ca.crt ca.key tls.crt tls.key files into sensorCollector's ".tls_custom" folder
cp ca.crt .tls_custom/
cp ca.key .tls_custom/
cp tls.crt .tls_custom/
cp tls.key .tls_custom/
Run the Sensor Collector using the run.sh shell script:
# run roadrunner ./run.sh
Once you have executed the run.sh shell script above, check that the Sensor Collector is running correctly; instructions are here.
If you are having issues running the Sensor Collector, information on troubleshooting Sensor Collector can be found here.
Once Sensor Collector is running correctly, you can proceed to the next section on Telemetry Collector installation.
2. Configure, Customize and Deploy Telemetry Collector
2.1 Create a Telemetry Collector Configuration
Every Telemetry Collector must have its run time configuration defined in Provider Connectivity Assurance. This will be pushed down to the Telemetry Collector when it is initially deployed and "phones home" to obtain the configuration.
To prepare the Telemetry Collector instance for Mobility PM, the high-level process is as follows:
Use the Provider Connectivity Assurance user interface to create a Telemetry Collector configuration, choosing a default configuration which will be customized for our Mobility use case
Edit the default out-of-the-box configuration to customize it for Mobility PM and provide the connectivity (IP address) details for the Sensor Collector that is running in our environment
Finally, we will upload our customized configuration to the Provider Connectivity Assurance tenant, so that it is available to the Telemetry Collector when it “phones home”
To create a Telemetry Collector configuration
Use the Provider Connectivity Assurance user interface to create a New Telemetry Collector by navigating to Sensors ▶ Collectors ▶ Telemetry.
Click + on the upper right.
The New telemetry collector side bar appears, as shown below:
Enter the following details:
Name: It is recommended to choose a name that makes it easy to correspond to the Sensor Collector.
Naming Tip: Use the name that was chosen for the Sensor Collector, prefixed with TC for “telemetry_collector”, e.g. TC_PM_mobility_<server_name>_<tenant_name>.
Transform configuration: Choose the "Cisco-Telemetry-IOS-XR" option (which will be edited in the next step to work for Mobility PM).
Data collector server: Enter the IP address of the host where Sensor Collector is deployed, or alternatively, the Docker bridge IP if you have installed the Sensor Collector and Telemetry Collector on the same host. Tip: Do not use localhost or 127.0.0.1.
Click ✓ to save the changes.
Now, there is a saved Telemetry Collector configuration in Provider Connectivity Assurance that is provisioned to connect to the correct Sensor Collector. The next step is to edit that configuration to work for Mobility PM instead of IOS XR telemetry.
2.2 Edit the Transform Configuration to Ingest Mobility PM Data
In this step, we are editing and customizing the out-of-the-box IOS-XR Telemetry Collector configuration that we created in the previous step, and modifying it so that it will ingest mobility core PM.
To edit the transform configuration
1. Download the telegraf.conf file from the Cisco box file share: Cisco box
Note: You must be logged in to Box to download the files successfully.
2. Using a text editor like GNU nano or vi (not a word processor like Microsoft Word), edit the telegraf.conf file. Find the inputs.kafka_consumer configuration and replace the brokers placeholder with the IP address of the machine where the Fault & Mobility PM Collector is running.
3. Save the telegraf.conf file and execute the base64 command in a terminal window to encode it:
The output of the base64 command is displayed in the screenshot below. Keep the terminal open with this output because the output must be copied and pasted into a configuration file in the next steps.
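On Linux, the encoding step looks like the following sketch; the telegraf.conf contents here are a placeholder standing in for your edited file. The -w0 flag keeps GNU base64 from wrapping the output across multiple lines (on macOS, use `base64 -i telegraf.conf` instead):

```shell
# Placeholder file standing in for your edited telegraf.conf
printf '[[inputs.kafka_consumer]]\n' > telegraf.conf

# Encode the whole file as a single base64 line
base64 -w0 telegraf.conf > telegraf.conf.b64
cat telegraf.conf.b64
```

Keeping the encoded output in a file makes it easy to paste into the configuration without accidental line breaks.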
4. Retrieve the existing configuration from Provider Connectivity Assurance using the GET API. This API requires the Agent ID, which is available in the user interface under Sensors ▶ Collectors ▶ Telemetry, as highlighted here. Keep note of this Agent ID; it is required in a few upcoming steps:
The GET API call that requires the Agent ID is shown here:
If needed, here is the API reference documentation on retrieving the configuration by Agent ID.
5. In the response of the GET API call, find the following dataCollection location within the JSON structure and replace the base64-encoded telegraf configuration with the one created in step 3.
6. Using Postman or your preferred API client, execute a PUT request to submit the updated configuration back to Provider Connectivity Assurance. Ensure you use the same Agent ID noted in step 4. Here is the reference API document for the PUT command.
2.3 Create Your Telemetry Collector Secrets
The Telemetry Collector needs a secrets.yaml file containing its Agent identifier and the Personal Access Token (PAT) for authentication.
Here is a sample file you can customize with your telemetry collector information:
agentConfig:
  identification:
    agentId: {{agentIdFromPCA}}
    authenticationToken: {{patFromZitadel}}
Note: The secrets.yaml file is sensitive to spacing and alignment. Be sure to keep the same structure and paste the PAT as a single line. There should be one carriage return (new line) after the PAT.
2.4 Deploy the Telemetry Collector for Mobility PM Ingestion
Create a working directory for your Telemetry Collector on the host where you intend to run the instance.
In that directory, create a ./secrets subdirectory. Within that subdirectory, create a secrets.yaml file containing its agent identifier and the Personal Access Token (PAT) for authentication.
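Because YAML is whitespace-sensitive, a quick check for stray tab characters can save a failed deployment. A sketch with illustrative placeholder values (use your real Agent ID and PAT in practice):

```shell
# Illustrative secrets.yaml with placeholder values; the nesting shown
# here mirrors the sample above.
cat > secrets.yaml <<'EOF'
agentConfig:
  identification:
    agentId: 0123456789abcdef
    authenticationToken: pNy2-EXAMPLE-TOKEN
EOF

# YAML indentation must use spaces, never tabs
if grep -q "$(printf '\t')" secrets.yaml; then
  echo "FAIL: tab characters found in secrets.yaml"
else
  echo "OK: no tabs in secrets.yaml"
fi
```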
Next, install the certificates in the ./secrets subdirectory. Whether you generated self-signed certificates in the Sensor Collector setup above, or alternatively obtained certificates from another authority, this step is to convert them to PEM format and place them in the same directory as the secrets.yaml file.
Execute these OpenSSL commands for conversion and copying into the ./secrets directory:
# convert certificates to PEM format
openssl x509 -in ca.crt -inform DER -out ca.pem -outform PEM
openssl x509 -in tls.crt -inform DER -out tls.pem -outform PEM
# In the same folder where the secrets.yaml file resides, save the ca.pem and tls.pem certificate files created above.
# Mount those files into the container (here assuming that this local folder is ./secrets)
cp ca.pem ./secrets
cp tls.pem ./secrets
In the Telemetry Collector working directory, create a docker-compose.yml file that resembles the following:
services:
  telemetry-collector:
    container_name: "telemetry-collector"
    image: "gcr.io/sky-agents/agent-telemetry-{{arch}}:{{version}}"
    environment:
      AGENT_MANAGEMENT_PROXY: "{{ sensorcollector-ip }}"
      AGENT_MANAGEMENT_PROXY_PORT: 55777
    volumes:
      - ./secrets/secrets.yaml:/run/secrets/secrets.yaml
      - ./secrets/ca.pem:/usr/local/share/ca-certificates/ca.pem
      - ./secrets/tls.pem:/usr/local/share/ca-certificates/tls.pem
arch represents the architecture and is either amd64 or arm64.
version corresponds to the latest available software version of the Telemetry Collector for your environment. As of August 8, the latest version is: r25.07
AGENT_MANAGEMENT_PROXY corresponds to your local Sensor Collector. The {{ sensorcollector-ip }} placeholder should be replaced with the host IP (or Docker bridge IP) of your deployed Sensor Collector.
AGENT_MANAGEMENT_PROXY_PORT corresponds to the Agent Proxy port you chose when creating your Sensor Collector configuration. 55777 is the default value. However, if you choose something else, then this must match.
At this stage, you can deploy the Telemetry Collector by running the Docker compose up command:
docker compose up -d
The Telemetry Collector will connect to Provider Connectivity Assurance and retrieve the configuration that we previously created.
Verify the logs to ensure connectivity was successful.
Obtain the container ID using this command:
sudo docker ps
CONTAINER ID   IMAGE                                                COMMAND                  CREATED       STATUS             PORTS   NAMES
923e4a3623b4   gcr.io/sky-agents/agent-telemetry-amd64:versiontag   "/docker-entrypoint.…"   2 hours ago   Up About an hour
Now, you can tail the logs for that container:
sudo docker logs -f <container id>
Here are the specific Docker logs to search for the confirmation that the Telemetry Collector has successfully connected and is operating correctly:
Additional Troubleshooting Tips
If for any reason you are not seeing the data show up in PCA, use the following command on the host that is running the Telemetry Collector to confirm that it can reach the Kafka bus and that the expected PM Mobility data is flowing. It acts like a "tail" on the Kafka bus, requires nothing on the host beyond Docker and an internet connection, and produces log messages that confirm the Sensor Collector and Telemetry Collector are working properly:
docker run -it confluentinc/cp-kafka kafka-console-consumer --bootstrap-server {{server_ip}}:9092 --topic pca_kpi_topic
3. Install Ingestion Dictionaries for Mobility PM Data
This section describes the high-level process to ingest mobility PM data into Provider Connectivity Assurance:
Let the mobility PM data flow from the Fault and Mobility PM collector to Provider Connectivity Assurance (via the Telemetry Collector and the Sensor Collector) but do not enable the metrics by toggling them on in the user interface yet; there are changes to make before data is enabled/toggled on.
This will automatically create the new object type Ingestion Dictionaries under the Settings ▶ Ingestion ▶ OpenMetrics area within the Provider Connectivity Assurance user interface.
We will customize the automatically generated ingestion dictionaries to standardize metric names and assign appropriate units to the incoming metrics.
Once the dictionaries are customized, we will use the Provider Connectivity user interface to “toggle on” and enable the mobility PM metrics.
3.1 Customize the Automatically-Created Dictionaries
Once data is flowing and the default OpenMetrics ingestion dictionaries show up in Provider Connectivity Assurance, use your API client software to retrieve them one at a time for each mobility PM object type, and apply the customizations provided.
To customize the dictionaries
1. Get the IDs of all the object type ingestion dictionaries on the PCA tenant.
To get the IDs, use the broader GET “all” dictionaries API call, which returns the IDs for every ingestion dictionary in the tenant:
GET {{baseUrl}}/api/v3/ingestion-dictionaries
This document describes the ‘GET ALL’ API functionality.
2. From that superset of ALL ingestion dictionaries on the tenant, extract only the mobilitycore ingestion dictionaries, because they are the only ones to be customized for mobility PM.
In the response in your API client from the previous step, search for object types that have the prefix “openmetricscisco-mobilitycore”, and note the ID for each of those dictionaries. Save this information as a list of IDs to be used in the subsequent customization steps. An example ingestion dictionary ID looks similar to “openmetricscisco-mobilitycore-pm-workload-utilization”.
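If the GET “all” response has been saved to a file, the mobility-core IDs can be pulled out with standard shell tools. A sketch, using an abbreviated, illustrative response body (the real JSON from your tenant carries more fields):

```shell
# Illustrative stand-in for the GET "all" response saved from your API client
cat > all-dictionaries.json <<'EOF'
{"data":[
  {"id":"openmetricscisco-mobilitycore-pm-apn"},
  {"id":"openmetricscisco-mobilitycore-pm-workload-utilization"},
  {"id":"openmetricsother-vendor-dictionary"}
]}
EOF

# Keep only the mobility-core dictionary IDs, one per line
grep -o '"openmetricscisco-mobilitycore[^"]*"' all-dictionaries.json \
  | tr -d '"' | sort -u > dictionary_ids.txt
cat dictionary_ids.txt
```

The resulting dictionary_ids.txt is the list to work through in the retrieval and PATCH steps that follow.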
3. Retrieve one ingestion dictionary at a time, using the dictionary ID.
To retrieve one dictionary at a time for editing, use the GET API with one dictionary ID:
GET {{baseUrl}}/api/v3/ingestion-dictionaries/:IngestionDictionaryId
4. Edit the ingestion dictionary for one openmetricscisco-mobilitycore object type.
In the response payload in your API client, edit each individual dictionary and replace the dictionaryType and metrics as follows:
Change the dictionaryType value from the default “custom” value to “global”.
Open the corresponding customized ingestion dictionaries JSON file in the Cisco box file share: Cisco box
Note: You must be logged in to Box to download the files successfully.
While keeping the rest of the default dictionary JSON response intact, replace only the “data” ▶ “attributes” ▶ “metrics” array with the one you find in the custom dictionary file in Cisco box.
Be especially careful to keep the “_id”, “_rev”, “dictionaryName” and “tenantId” values intact.
5. PATCH the new customized dictionary. Once your new custom dictionary is assembled, send it back to your Provider Connectivity Assurance tenant using the PATCH API:
PATCH {{baseUrl}}/api/v3/ingestion-dictionaries/:IngestionDictionaryId
IMPORTANT: Repeat steps 3, 4 and 5 for every openmetricscisco-mobilitycore dictionary ID.
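The repetition lends itself to a small loop. This dry-run sketch only prints the curl command it would send for each ID (remove the `echo` to execute them); BASE_URL, PAT, and the per-ID payload file naming are assumptions for illustration, not part of the product:

```shell
BASE_URL="https://mytenant.example.com"   # placeholder deployment URL
PAT="pNy2-EXAMPLE-TOKEN"                  # placeholder Personal Access Token

# IDs collected earlier; two shown here for illustration
printf '%s\n' \
  "openmetricscisco-mobilitycore-pm-apn" \
  "openmetricscisco-mobilitycore-pm-workload-utilization" > dictionary_ids.txt

# Dry run: print one PATCH per dictionary ID (drop "echo" to send for real)
while read -r id; do
  echo curl -s -X PATCH \
    -H "Authorization: Bearer ${PAT}" \
    -H "Content-Type: application/json" \
    -d @"custom_${id}.json" \
    "${BASE_URL}/api/v3/ingestion-dictionaries/${id}"
done < dictionary_ids.txt
```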
4. Enable Data Flow and Provision Metadata
Once you have finished the previous step and the custom dictionaries are all patched in Provider Connectivity Assurance, data flow must be enabled, and metadata must be mapped, both of which are achieved via the Provider Connectivity UI.
4.1 Enable Data Flow for Individual Metrics
Reload the user interface in your browser window (dictionaries are loaded into memory at login, so they need to be refreshed), then navigate to Settings ▶ Ingestion ▶ OpenMetrics and look for a list of object types prefixed with “cisco-mobilitycore”, as shown below:
For every cisco-mobilitycore object type in the list, select it and review the metrics. Validate that all your metrics appear with user-friendly camelCase names. If anything is not camelCase, double-check that you patched that dictionary, and report any issues via Intercom.
For every object type, toggle on the metrics that are needed by your customer and use case. A maximum of 40 metrics per object can be enabled (toggled on).
Once you have finished toggling on all the metrics for an object type, be sure to save your selections using the top-right checkmark button, highlighted below:
4.2 Create Metadata Categories
There are five dynamic metadata fields being ingested into Provider Connectivity Assurance from the Fault & Mobility Performance Monitoring Collector. This procedure categorizes and maps them to existing fields so that they can be picked up in dashboards.
Navigate to Settings ▶ Metadata ▶ Categories.
Click the + Add category button and add the following categories:
• agentId
• index
• node_id
• schema
• source_ip
4.3 Assign Dynamic Metadata Mappings
Navigate to Settings ▶ Metadata ▶ Dynamic Mappings.
For each of the five categories created in the previous step (agentId, index, node_id, source_ip and schema) repeat the following steps:
Type the category name into the search box (see the example for “agentId” in the screenshot below); it retrieves all the cisco-mobilitycore-pm object types that contain that same category:
For every item in the list, choose Add mapping. A pop-up menu appears. Type (or select from the list) the same category name that you used in the previous step (for example, “agentId”):
The screen updates to reflect that a new mapping has been made. This step associates the field coming in dynamically from the Telemetry Collector with existing fields in Provider Connectivity Assurance so that the dynamic data can be used in dashboards.
Repeat this for all the items listed on the screen, then click the blue checkmark icon to save the configuration.
Once you have completed these mapping steps for agentId, index, node_id, source_ip and schema, you can move to the next step, setting up the dashboards.
Tip: If you are curious, the five dynamic metadata values you have just set up can be viewed by choosing any cisco-mobilitycore-pm object in the inventory (it can take some time for data to start flowing, so check back if the values aren’t populated in the first few minutes after the procedure is complete):
5. Set Up Dashboards
This section will help familiarize you with the steps involved in creating dashboards in Provider Connectivity Assurance. This is not a prescriptive set of dashboards. Rather, it is an overview of how to create dashboards and interpret the mobility performance monitoring data, so that you can build dashboards that meet your specific customer use cases.
A few tips about the mobility data:
The Mobility Performance Monitoring data collected in Provider Connectivity Assurance are categorized primarily by “node_id” and “index”.
node_id is a device identifier and can be used for the most detailed breakdowns of data in dashboards.
index is used to identify sub-categories on each device. For example, a device can have many interfaces, and the index would represent each interface on the device.
In the Inventory ▶ Sessions screen in Provider Connectivity Assurance, each session (also known as an object) has a name that is constructed to follow the same pattern. The naming pattern is:
<nodename>_<index>_cisco-mobilitycore-pm_<schema>
In the example screenshot below, the node name is “cee1”, the index is “bulk-stats”, and the schema is “5gsystem”:
Keep this tip in mind when building dashboards and deciphering data.
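Because the underscore is the field separator in that pattern, a session name can be decomposed programmatically, which is handy when scripting checks against the inventory. A sketch using the example name above:

```shell
# Session/object name following the documented pattern:
# <nodename>_<index>_cisco-mobilitycore-pm_<schema>
name="cee1_bulk-stats_cisco-mobilitycore-pm_5gsystem"

# Split on "_" into node name, index, object type, and schema
IFS='_' read -r node index objtype schema <<EOF
${name}
EOF

echo "node=${node} index=${index} type=${objtype} schema=${schema}"
```

Note that hyphens (as in "bulk-stats") survive the split, since only underscores delimit the fields.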
5.1 User Interface for Creating Dashboards
This section provides a summary of user interface actions for creating dashboards. For more comprehensive guidance, refer to these instructions on Authoring Dashboards.
To get started, navigate to the Dashboards widget and click + Create to create a new dashboard:
You will be prompted to name and save the dashboard:
To edit a dashboard, select the dashboard from the list and use the Author mode button in the top right corner:
To add a new section to a dashboard, click + Card:
Widgets are used to visualize data, and cards can contain one or more widgets (e.g. Aggregate, Timeseries, Table).
5.2 Designing appropriate customer dashboards
Dashboards in Provider Connectivity Assurance are crafted by reviewing metrics that are available and deciding what would be interesting to the customer. As an example, the object “cisco-mobilitycore-pm-apn” has a metric “activeSessions4g”.
Let’s walk through creating a dashboard to display that metric in a timeseries, grouped by device.
Example Dashboard - Timeseries (line graph) of active 4g sessions, grouped by device
Using Dashboard ▶ Create ▶ Author ▶ + Card actions described previously, name the card “Active 4G Sessions by node”:
Choose to add Data using the + (plus) button on the right side:
This opens a list of all the cisco-mobilitycore-pm object types available. For this example, highlight cisco-mobilitycore-pm-apn and choose activeSessions4G using the + (plus) button:
Next, select Group by using the + (plus) button on the right side, and choose “node_id”:
To design a dashboard that filters to show only specific devices, use the Session identifier.
Choose Filters ▶ Sessions ▶ + button.
Use the search box or filter on type = cisco-mobilitycore-pm to narrow down the list of devices. Then select the devices to show in the dashboard. Use the top-right checkmark to save the filter:
Tip: Note that multiple metrics can be combined in the same widget for comparison, including permutations of metrics and categories.
For convenience, Widgets, Cards and Dashboards can be cloned. Dashboards may be exported and imported between systems, provided that the same metrics and metadata are present on the importing system.
For more detailed instructions on how to configure Widgets, refer to these instructions.
At this stage, your mobility performance monitoring data is being ingested into Provider Connectivity Assurance, and you can create dashboard visualizations to meet the customer’s use cases.
© 2025 Cisco and/or its affiliates. All rights reserved.
For more information about trademarks, please visit: Cisco trademarks
For more information about legal terms, please visit: Cisco legal terms
For legal information about Accedian Skylight products, please visit: Accedian legal terms and trademarks