Mark Noe, Director of Technical Alliances, NetNumber
Recently, I had the opportunity to speak at the Big 5G Event on “The Programmable Telco Network.” In this blog, I share some of the key points from my presentation: what we have learned in preparing CSPs to embrace a cloud native architecture for the delivery of network services, and how this transformation also factors in the use of public cloud infrastructure.
Before discussing network programming in a cloud native environment, it is important to understand the benefits of transforming the existing 4G network from a monolithic network element environment into a microservices architecture, or a “cloud native environment.” Cloud native microservices use containers (smaller, independently deployable pieces of an application) that are much easier to scale, move, upgrade, and automate, providing improved agility and faster time to market.
With the delivery of 4G, network services have mainly been built on the concept of network function virtualization (NFV). But NFV environments, though they have improved 4G network delivery, consist of monolithic applications: self-contained environments that combine all the application components, such as the external network interface, the server-side application, and the database, into a single-tiered application. The downsides of this approach are higher cost and lack of agility, as monolithic applications use resources inefficiently and are difficult to move, change, upgrade, and scale up or down.
With the evolution of 5G, the hyperscalers (Amazon, Google, Microsoft) are now pushing into the CSP solution space with cloud-based 4G/5G services and private network services, particularly for the enterprise market. As a result, CSP operations are being transformed from traditional to cloud native deployments in order to compete with the hyperscalers on agility, while also taking advantage of partnerships and using hyperscaler infrastructure to achieve that agility at a lower cost.
Cloud native transformation began across the enterprise IT industry many years ago. Today we all benefit every day from services built on or utilizing cloud native technology (e.g., Salesforce, Netflix, Uber). These cloud native services have mostly been built on the hyperscalers’ public cloud infrastructure, but many are also deployed in private data centers or in a hybrid of the two.
As CSPs start their journey to cloud native, the deployment of network functions to support specific network services may initially take place in their own private data centers. CSPs are also expected to expand their private domain into public cloud infrastructure and to leverage managed services from vendors as part of this transformation. As the hyperscalers continue to build out a geo-redundant presence in new markets and expand partnerships with CSPs, public cloud platforms become increasingly able to meet CSPs’ latency and regulatory requirements.
All that said, cloud native transformation is not easy for the CSP. It is complex and requires the CSP to consider some important factors that we highlight below:
- Resilience Models: Resilience can be achieved via local redundancy or geo-redundant data centers, on public, private, or hybrid clouds. Achieving it requires a clear understanding of the cloud data center geographic footprint (availability zones) and services, the latency of signaling flows, bandwidth requirements, secure communication interconnect options, and more.
- Security, privacy, and regulatory: Cloud providers have built compute infrastructure in almost all large metropolitan areas, but the services, availability features, capabilities, capacity, and costs vary by region. Since most CSPs must align with in-country regulatory policies, it is important to understand these details when considering public cloud or hybrid models.
- Automation: A cloud native infrastructure significantly increases the number of containerized applications to be managed, so automation is critical. APIs play an important role in automating deployment, enabling the microservice upgrade and testing benefits of a continuous integration and continuous deployment (CI/CD) model.
- Configuration Orchestration: Configuration may be part of the initial deployment, or it may be handled separately using a configuration store. In the configuration-store approach, the store is prepared and uploaded with the configurations a network function or service needs, instead of the function being configured at deployment time. Once the network function is deployed, it connects to the store, downloads its configuration, and is immediately placed in service. Because the configuration is separate from the deployment process, it can continue to be updated as needed for future deployments.
- Deployment Management: Once deployed, ongoing day-2 management will need to consider new capabilities as part of ETSI MANO lifecycle management services, 3GPP management services (MnS), and/or a vendor-specific deployment server.
- Staff skillsets: Moving from virtualized environments to cloud native requires knowledge of and experience with new open-source tools across the board (e.g., Docker, Kubernetes, Helm, Kafka, ELK, Prometheus), including the associated security aspects (security benchmarks, vulnerability scans, runtime audits, and more).
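To make the configuration-store pattern described above concrete, here is a minimal Python sketch of a network function that downloads its configuration at startup rather than being configured as part of the deployment itself. The names used here (`ConfigStore`, `NetworkFunction`, "amf-1") are hypothetical illustrations, not part of any NetNumber product; in a real deployment the store would be an external service such as etcd or a vendor-specific configuration server.

```python
import json

class ConfigStore:
    """Stand-in for an external configuration store (hypothetical sketch)."""
    def __init__(self):
        self._configs = {}

    def upload(self, function_name, config):
        # Operators prepare and upload configurations ahead of deployment.
        self._configs[function_name] = json.dumps(config)

    def download(self, function_name):
        raw = self._configs.get(function_name)
        return json.loads(raw) if raw is not None else None

class NetworkFunction:
    """Sketch of a network function that pulls its configuration at startup
    instead of being configured as part of the deployment process."""
    def __init__(self, name, store):
        self.name = name
        self.store = store
        self.config = None
        self.in_service = False

    def deploy(self):
        # On startup: connect to the store, download the configuration,
        # and go into service immediately if a configuration is found.
        self.config = self.store.download(self.name)
        if self.config is not None:
            self.in_service = True
        return self.in_service

# Configuration is uploaded to the store before the function is deployed.
store = ConfigStore()
store.upload("amf-1", {"plmn": "001-01", "capacity": 1000})

nf = NetworkFunction("amf-1", store)
nf.deploy()  # downloads config and places the function in service
```

Because the store is decoupled from the deployment pipeline, the same upload step can be re-run at any time to stage updated configurations for future deployments, which is the key benefit the pattern provides.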
NetNumber has participated in several proofs of concept (POCs) and can share key lessons from working with CSPs that are undergoing the transformation to cloud native and considering network services deployed across a public cloud or hybrid model. In the next entry in this blog series, we will break these important factors down in more granular detail. We welcome your comments, suggestions, and requests. Connect with us in the comments if you’d like to learn more.