Deployment Architecture
Traffic Processor can be deployed in two primary configurations, each of which includes a Management LAN connection for communication with other Traffic Processor servers.
General Deployment Information
Regardless of the specific interface, Traffic Processor is logically positioned before the Serving Gateway (SGW). The Traffic Processor server is installed on the active link leading to the SGW. A failover standby link (bypass) is also connected in parallel, without a Traffic Processor server on that link.
Traffic Processor servers act as a physical Layer 1 hop, meaning they do not perform any routing or switching. Traffic is passed directly through to the other side. This design allows existing failover mechanisms, such as BGP or IP-SLA checks, to remain unchanged.
Deployment Configurations
S1-U/N3 Interface Deployment: In this model, Traffic Processor is deployed on the S1-U/N3 interface. Only S1-U/N3 traffic is passed through the Traffic Processor server. Support for S1-MME/S11 Mirror traffic is optional.
An alternative in this configuration is to install Traffic Processor on the SGi/N6 interface (after the PGW), in which case only SGi traffic should be passed through the server.
Data Plane Deployment: In this model, Traffic Processor is deployed on the data plane interfaces.
Throughput Assumptions
This specification assumes that the throughput limit of a Traffic Processor server is sufficient to support a 1:1 ratio with the serving gateways, where peak traffic through a single serving gateway does not exceed 320 Gbps.
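As a quick sanity check, the 1:1 sizing rule above can be expressed in a few lines of Python. The gateway names and peak figures below are illustrative, not from a real deployment:

```python
# Sizing sketch: one Traffic Processor server per serving gateway (1:1),
# assuming peak traffic per gateway stays below the 320 Gbps ceiling.
# Gateway names and peak values are hypothetical examples.

PER_SERVER_LIMIT_GBPS = 320

def validate_sizing(gateway_peaks_gbps):
    """Return the gateways whose peak exceeds the per-server limit."""
    return [name for name, peak in gateway_peaks_gbps.items()
            if peak > PER_SERVER_LIMIT_GBPS]

site = {"sgw-1": 180, "sgw-2": 310, "sgw-3": 355}
oversubscribed = validate_sizing(site)   # gateways needing a design review
servers_needed = len(site)               # 1:1 ratio with gateway count
```

Any gateway returned by `validate_sizing` would violate the stated assumption and should be raised during planning.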
Example Deployment Architectures

Example Traffic-Processor w/ Control Traffic Processor Deployment (CTP) Architecture

Example Traffic-Processor No CTP Deployment Architecture
Table: Example Cable Matrix
| Run # | Cable Type | Cable Detail | Connector | LOC A Element ID | LOC A From Port | LOC Z Element ID | LOC Z To Port | Customer IP |
|---|---|---|---|---|---|---|---|---|
| 1 | MM Fiber10G | Straight | LC | Traffic-Processor-Server | N1-0 | P/S-GW | NA | |
| 2 | MM Fiber10G | Straight | LC | Traffic-Processor-Server | N1-1 | Router | NA | |
| 3 | MM Fiber10G | Straight | LC | Traffic-Processor-Server | N2-0 | P/S-GW | NA | |
| 4 | MM Fiber10G | Straight | LC | Traffic-Processor-Server | N2-1 | Router | NA | |
| 5 | Cat-6 | Straight | RJ45 | Traffic-Processor-Server | M1 | Management Switch | NA | |
| 6 | Cat-6 | Straight | RJ45 | Traffic-Processor-Server | iLo/iDrac/CIMC | Management Switch (iLo/iDrac) | NA | |
Alternatively, Traffic Processor can be deployed in a 'server on a stick' configuration, connected only to a core router ahead of the S/P-GW. Operators can then use Layer 2 or Layer 3 switching/routing mechanisms (e.g., VLAN steering, BGP/BFD, or VRFs) to send only selected traffic, such as S1-U/N3, to and from the Traffic Processor server.

Alternative Example Traffic-Processor w/ CTP Deployment Architecture
Pre Operating System Installation
The platform supports Rocky Linux 8.10 or Red Hat Enterprise Linux 8.10.
Action Item: Provide which Operating System will be installed.
Post Operating System Installation
Traffic Processor and Management System software must be installed immediately after the operating system is installed to avoid potential installation conflicts.
If any other software is to be installed before or after the Traffic Processor and Management System software, it should be identified and provided so that it can be evaluated for conflicts with the Traffic Processor software.
Action Item: Provide a list of software(s) that need to be installed after the operating system.
Management System Pre Installation Checklist
Checklist: Remote Access
Customer shall:
- Provide VPN access to Traffic Processor and Management System servers
- The strong preference is a site-to-site IPsec VPN
- An alternative is an SSH connection to a jump host and from there to the LAN
Action Item: Provide VPN setup.
Firewall Rules Checklist (Grouped by Destination)
This section outlines the required firewall rules grouped by the destination server or service (ingress).
Rules Destined for Management System (MS)
These are the inbound rules required on the Management System.
| Source | Protocol | Port | Purpose |
|---|---|---|---|
| VPN Endpoint | TCP | 22 | Access via SSH |
| VPN Endpoint | TCP | 443 | UI via HTTPS |
| TP/CTP | TCP | 443 | API via HTTPS |
| TP/CTP | TCP | 19092 | OAM/Statistics messaging via Redpanda/Kafka |
| TP/CTP | TCP | 5000 | CTP updates (if hosted on MS) |
| TP/CTP | TCP | 5001 | CTP snapshot (if hosted on MS) |
| Customer Network Services | TCP | 8084 | RAN Power State Recommendation API |
Rules Destined for Traffic Processor (TP) / Control Traffic Processor (CTP)
These are the inbound rules required on the TP/CTP endpoints.
| Source | Protocol | Port | Purpose |
|---|---|---|---|
| Mgmt Server | TCP | 22 | Access via SSH |
| VPN Endpoint | TCP | 22 | Access via SSH |
Rules Destined for Customer Network Services
These are the inbound rules required on the various Customer Network Services (like NTP, SMTP, logging servers, etc.).
| Source | Protocol | Port | Purpose |
|---|---|---|---|
| Mgmt Server | UDP | 123 | NTP (Time) |
| Mgmt Server | TCP | 25 | SMTP (Email) |
| Mgmt Server | UDP | 161 | SNMP (Monitoring) |
| Mgmt Server | TCP/UDP | 514 | RSYSLOG (Logs) |
| TP/CTP | UDP | 123 | NTP (Time) |
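The three tables above can be captured as data so the communication matrix is machine-checkable before firewall change requests go out. This is a hedged sketch using the labels from the tables; it does not configure any firewall:

```python
# Ingress rules from the tables above, keyed by destination.
# Each entry is (source, protocol, port). Labels mirror the document.

RULES = {
    "MS": {
        ("VPN Endpoint", "TCP", 22), ("VPN Endpoint", "TCP", 443),
        ("TP/CTP", "TCP", 443), ("TP/CTP", "TCP", 19092),
        ("TP/CTP", "TCP", 5000), ("TP/CTP", "TCP", 5001),
        ("Customer Network Services", "TCP", 8084),
    },
    "TP/CTP": {
        ("Mgmt Server", "TCP", 22), ("VPN Endpoint", "TCP", 22),
    },
    "Customer Network Services": {
        ("Mgmt Server", "UDP", 123), ("Mgmt Server", "TCP", 25),
        ("Mgmt Server", "UDP", 161), ("Mgmt Server", "TCP", 514),
        ("Mgmt Server", "UDP", 514), ("TP/CTP", "UDP", 123),
    },
}

def is_allowed(src, dst, proto, port):
    """True if the matrix permits this ingress flow at the destination."""
    return (src, proto, port) in RULES.get(dst, set())
```

A connectivity test plan can iterate over `RULES` to verify each permitted flow and spot-check that unlisted flows are blocked.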
Action Item: Confirm communication matrix is complete.
Control Traffic Mirror (e.g., S1-MME and S11)
To set up the environment properly, the Customer needs to state whether S1-MME and S11 traffic will be mirrored, as well as how many cells/ECIs pass through each (Control) Traffic Processor server.
Action Item: Confirm if control traffic will be mirrored and provide the number of cells/ECIs whose traffic will be passed through (Control) Traffic Processor servers.
Checklist: SNMP integration (Optional)
The SNMP Monitoring Agent is a process running on the server. This agent monitors all the different services that are involved in the solution.
If any problem is detected, the monitor will send an alert via a trap message to an external OSS (if integrated). The OSS will then check the map containing the different services/components to determine which one is reporting the issue.
The MIB file with the OIDs to be used by the OSS will be distributed. The SNMP Monitoring Agent supports SNMP version 2c or 3; with version 2c, access to the agent's table is secured by a community string, so the SNMP manager must be configured with the same community string.
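The OSS-side lookup described above amounts to a map from trap OID to reporting component. A minimal sketch follows, with placeholder OIDs; the real values come from the distributed MIB file:

```python
# Illustrative OID-to-component map of the kind the OSS consults when a
# trap arrives. These OIDs are placeholders, not values from the real MIB.

OID_MAP = {
    "1.3.6.1.4.1.99999.1.1": "Traffic Processor service",
    "1.3.6.1.4.1.99999.1.2": "Management System dashboard",
    "1.3.6.1.4.1.99999.1.3": "Statistics pipeline",
}

def component_for_trap(oid):
    """Resolve a trap OID to the reporting component, or flag it unknown."""
    return OID_MAP.get(oid, "unknown component")
```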
Action Item: Optional – provide IP and port where SNMP traps should be sent.
Checklist: SMTP connection
The SNMP agent is meant to integrate with the customer's OSS for monitoring alerts sent from the product instances, while the SMTP integration allows the alerts to reach the customer support teams via email. The system sends emails via an SMTP server reachable from the server LAN, and these are delivered to the customer support team email alias. These emails trigger an escalation and a response from customer support. This integration reduces response and troubleshooting times and is required for all deployments.
Action Item: Please provide the SMTP server IP and port.
Checklist: NTP connection
To keep events and statistics across the servers in sync, all servers' system time must be synchronized. The solution uses the Network Time Protocol daemon (ntpd), which can be installed and configured to use an internal or public NTP server.
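Before installation, reachability of the provided NTP server can be verified with a minimal SNTP query. This is an assumed pre-check helper, not part of the product:

```python
import socket
import struct

def build_sntp_request():
    """48-byte SNTP client request: LI=0, VN=3, Mode=3 -> first byte 0x1b."""
    return b"\x1b" + 47 * b"\x00"

def query_ntp(server, timeout=2.0):
    """Return the server's transmit time as a Unix timestamp, or raise on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_sntp_request(), (server, 123))
        data, _ = s.recvfrom(48)
        # Transmit timestamp seconds are at offset 40; NTP epoch is 1900,
        # so subtract the 70-year offset to get a Unix timestamp.
        secs = struct.unpack("!I", data[40:44])[0]
        return secs - 2208988800
```

For example, `query_ntp("pool.ntp.org")` (a public pool, used here only as an illustration) should return a value close to the local clock if UDP/123 is open.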
Action Item: Please provide the NTP server IP address to use.
Checklist: Backup location (Optional)
The dashboard properties can be configured to archive backups of the configurations and historical user data on a scheduled interval. This location must be mounted on the server's file system with write permissions enabled for all users. A remote disk is suggested, as it provides the best data redundancy.
Action Item: Optional – provide backups location.
Data Retention Policy
In the event that the Customer collects enough statistics that the server starts to run low on space, a location is required to archive these statistics (if necessary). The system can be configured to copy data older than N days to the provided location, provided it is mounted on the server's file system.
Data retention, by default, is configured as follows:
- Management System dashboard metrics are stored in per-second form for the past 30 days. After 30 days, these metrics are compressed to one-hour values and stored for 11 months (one year of data retention total). After one year, metrics are deleted from the system unless a server backup location is specified.
- Deep-dive analytics (FlowAnalyst) data is retained for 7 days, then deleted unless a backup location is provided.
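The default retention tiers above can be sketched as a small decision function. The day counts are taken from the text; the function itself is illustrative:

```python
# Retention sketch: default tiers for dashboard metrics and FlowAnalyst data.
# "archive" applies only when a backup location has been configured.

def dashboard_retention_action(age_days, backup_configured=False):
    if age_days <= 30:
        return "per-second"        # raw per-second samples kept
    if age_days <= 365:
        return "hourly"            # compressed to one-hour values
    return "archive" if backup_configured else "delete"

def flowanalyst_retention_action(age_days, backup_configured=False):
    if age_days <= 7:
        return "retain"
    return "archive" if backup_configured else "delete"
```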
Action Item: Please confirm the data retention policy is ok or propose changes.
Packet Per Second Limit Monitoring
This monitors the rate of packets passing through each Traffic Processor NIC port pair. Two threshold levels must be provided so that action can be taken when the packets-per-second limit is reached. When a limit is reached, an SNMP trap is sent, and one of four behaviors can be configured:
- Send an SNMP trap but take no further action.
- Send an SNMP trap AND turn off the Traffic Processor service to trigger a traffic failover to the alternate path.
- Send an SNMP trap AND take down a set of network interfaces on the management NIC of the Traffic Processor servers. (These network interface names must be the same across all Traffic Processor servers. The intent is for this to act as a signal to an external customer monitor so that traffic failover can be triggered without turning off the Traffic Processor application.)
- Send an SNMP trap AND put the Traffic Processor service in BYPASS mode.
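A hedged sketch of the two-threshold decision described above; the behavior names are illustrative, and the actual configuration keys may differ:

```python
# PPS monitor sketch: crossing limit 1 warns via trap; crossing limit 2
# triggers whichever of the four behaviors has been configured.

TRAP_ONLY = "send trap"
STOP_SERVICE = "send trap + stop Traffic Processor service"
DOWN_MGMT_IFACES = "send trap + take down signalling interfaces"
BYPASS = "send trap + enter BYPASS mode"

def pps_action(pps, limit1, limit2, behavior=TRAP_ONLY):
    """Return the action for the current packets-per-second reading."""
    if pps < limit1:
        return None          # below both thresholds: no action
    if pps < limit2:
        return TRAP_ONLY     # limit 1 crossed: warn only
    return behavior          # limit 2 crossed: configured behavior
```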
Action Item: Confirm which behavior is desired when traffic packets per second exceed limits 1 and 2.
Accounts
If a Customer needs to have access to the Management System, provide the following
information for all users who need access:
- First and last names
- Email address
Additionally, provide a default password. The Management System will direct users to change their password on first login.
Alternatively, the Management System allows users to log in with company credentials via configured LDAP connections. Admin users may configure these connections on the LDAP Configuration page in the Management System, or the required fields can be provided instead.
Action Item: Provide information to create user accounts for Management System.
Traffic Processor(s) Pre Installation Checklist
Checklist: Complete the Management System Installation Checklist
The Traffic Processor installation assumes the Management System has been fully installed and its checklists have been completed.
Action Item: Confirm the Management System checklist is complete.
Checklist: Failover setup (Required)
Traffic Processor servers need to be connected between switches/routing equipment on the data plane interfaces such that if there is a failure on Traffic Processor (e.g. hardware failure), traffic will route to the failover path (physical bypass path).
End-to-end network-path integrity through Traffic Processor should be assessed via message-based protocols such as LACP, BFD, BGP, or ARP.
Traffic Processor can be configured so that if a link fails, the failed port's forwarding-pair port is also forcibly brought down, keeping the link states consistent on both sides.
Action Item: Please confirm which failover mechanism is to be used and whether any extra configuration is necessary.
Best practice is to employ LACP, BFD, BGP, or ARP to verify connectivity and force a failover when conditions are not met.
This should be accompanied by setting an appropriate minimum number of active links on both sides of Traffic Processor when port channels/LAGs are used.
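The port-pair link mirroring described in this checklist can be sketched as follows, using the port names from the example cable matrix (N1-0/N1-1, N2-0/N2-1). This is illustrative, not the product's implementation:

```python
# Link mirroring sketch: when one port of a forwarding pair loses link,
# its partner is forced down too, so both sides of Traffic Processor see
# a consistent failure and the network fails over to the bypass path.

PAIRS = {"N1-0": "N1-1", "N1-1": "N1-0", "N2-0": "N2-1", "N2-1": "N2-0"}

def ports_to_force_down(link_state):
    """Given {port: link_up}, return partner ports that should be brought down."""
    forced = set()
    for port, up in link_state.items():
        if not up:
            partner = PAIRS.get(port)
            # Only force down partners that are still up.
            if partner and link_state.get(partner, False):
                forced.add(partner)
    return forced
```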
Checklist: Live Traffic Routing setup
- Please confirm whether traffic will have encapsulation beyond GTP (e.g., MPLS).
- Only data plane traffic should be sent to Traffic Processor. If there are additional interfaces, please route them around the Traffic Processor (e.g., on the bypass link).
- Are there any VLANs that should or should not be optimized? If so, please provide the VLANs.
Action Item: Confirm items above.
MTU Size
Action Item: Provide the MTU size to be configured.
© 2026 Cisco and/or its affiliates. All rights reserved.
For more information about trademarks, please visit: Cisco trademarks
For more information about legal terms, please visit: Cisco legal terms
For legal information about Accedian Skylight products, please visit: Accedian legal terms and trademarks