Categories: Artificial Intelligence, Networking

The Next Evolution in SONiC Intelligence

At PalC Networks, our work with SONiC has always been about more than automation.
Automation is efficient, but it's still reactive.

What we wanted was awareness: a network that could interpret, coordinate, and adapt.

That idea took form through Agentic AI: a framework where SONiC's critical functions (configuration, telemetry, topology, security) are handled by specialized, intelligent agents.

But as we scaled, one challenge became clear: Intelligence, if isolated, becomes another form of silo.

Each agent could perform brilliantly on its own, but real autonomy requires more: a shared consciousness. That's where the MCP (Multi-Agent Coordination Plane) comes in.
It's the layer that turns multiple intelligent agents into a cooperative, adaptive ecosystem.

From Orchestration to Collaboration

Traditional orchestration relies on centralized control using one brain to manage the entire network.

Modern networks are organic, with thousands of devices, millions of telemetry signals, and unpredictable traffic patterns.

Scaling intelligence doesn't mean building a bigger brain. It means building many smaller ones, each capable of learning, reasoning, and collaborating.

Thatโ€™s the foundation of MCP:

Every SONiC agent becomes an independent node that can:

  • Understand its local state
  • Exchange context with peers
  • Coordinate decisions through the MCP layer

Together, they form a federation of specialized minds: faster, more resilient, and inherently aware of the whole.

MCP Explained: The Missing Link Between Automation and Autonomy

The Multi-Agent Coordination Plane is a distributed intelligence fabric that connects multiple SONiC agents into one unified reasoning system.

In our Agentic AI architecture, MCP acts like a nervous system for the network:

  • Each agent (Config, Telemetry, Topology, Security) behaves like a neuron.
  • MCP is the synaptic layer that carries signals and aligns actions.
  • Together, they create a collective intelligence that is self-aware, self-optimizing, and contextually driven.

How MCP Works

1. Distributed Reasoning
Each agent monitors, configures, or optimizes within its domain, while MCP ensures a shared state across them.

2. Context Sharing
When telemetry flags congestion, MCP routes that insight to configuration and topology agents, prompting proactive adjustments.

3. Decision Synchronization
MCP prevents conflicting actions, ensuring coordinated, safe changes across agents.

4. Learning & Feedback
Over time, MCP identifies patterns in cause and effect, improving the networkโ€™s ability to predict and prevent disruptions.
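To make the mechanics concrete, here is a deliberately minimal Python sketch of this publish/subscribe pattern. The agent names, topics, and events are invented for illustration; this is a toy model of the idea, not PalC's actual MCP implementation.

from collections import defaultdict

class CoordinationPlane:
    """Toy MCP: agents publish context; subscribed peers react to it."""

    def __init__(self):
        self.subscribers = defaultdict(list)
        self.shared_state = {}  # last-known context, visible to every agent

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        self.shared_state[topic] = event       # shared state across agents
        for handler in self.subscribers[topic]:
            handler(event)                     # insight routed to peers

mcp = CoordinationPlane()

# The config agent reacts to congestion flagged by the telemetry agent.
mcp.subscribe(
    "telemetry/congestion",
    lambda e: print(f"config-agent: rebalancing traffic away from {e['port']}"),
)

# The telemetry agent flags congestion; MCP routes the insight onward.
mcp.publish("telemetry/congestion", {"port": "Ethernet12", "utilization": 0.93})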

Business Impact: Why MCP Matters for Enterprises

MCP is as much a business enabler as an architectural improvement.
It turns reactive infrastructure into a self-optimizing system that saves time, reduces cost, and lowers risk.

| Enterprise Challenge | MCP Solution |
| --- | --- |
| Manual, reactive operations | Predictive, AI-driven coordination before failures occur |
| Configuration errors | Pre-validation, rollback, and cross-agent verification |
| Fragmented monitoring | Unified loop between telemetry, config, and topology |
| Scaling complexity | Distributed, localized decision-making for faster remediation |
| Compliance and audits | Built-in traceability for every autonomous action |

By distributing reasoning across nodes, MCP transforms the data center from an operational burden into a resilient, compliant, and aware ecosystem.

Turning SONiC Agents into Collaborators

Here's how collaboration unfolds inside PalC's Agentic AI framework:

  • Intent Interpretation: The orchestrator translates operator intent (e.g., "Deploy a 4-leaf, 1-spine fabric with telemetry enabled").
  • Delegation via MCP: Tasks are distributed across agents: Configuration sets up interfaces, Topology maps links, Telemetry preps sensors.
  • State Synchronization: Agents continuously share updates, ensuring decisions remain consistent and validated.
  • Adaptive Execution: MCP learns from each event, fine-tuning coordination for future scenarios.

SONiC, through MCP, shifts from being managed to self-managing.
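As a rough illustration of the delegation step, here is a toy sketch in the same spirit as the earlier one. The intent string comes from the example above; the per-agent task breakdown is invented, and a real MCP task schema would be far richer.

INTENT = "Deploy a 4-leaf, 1-spine fabric with telemetry enabled"

# Hypothetical decomposition of the operator intent into per-agent tasks.
TASKS = {
    "config-agent":    ["create fabric interfaces", "apply BGP underlay"],
    "topology-agent":  ["map leaf-spine links", "validate cabling plan"],
    "telemetry-agent": ["enable gNMI streaming", "register congestion sensors"],
}

def delegate(intent, tasks):
    print(f"orchestrator: intent = {intent!r}")
    for agent, steps in tasks.items():
        for step in steps:
            print(f"  MCP -> {agent}: {step}")  # each task dispatched via MCP

delegate(INTENT, TASKS)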

When Each Agent Thinks and Learns

Each agent grows smarter through experience:

  • Config Agent: Learns from historical changes to suggest safer rollouts.
  • Telemetry Agent: Detects patterns to predict congestion or performance drift.
  • Topology Agent: Recalculates paths dynamically under load or failure.
  • Security Agent: Applies policies based on live context, not static rules.

Through MCP, these agents share learning, building a network that’s intelligently aware.

| Traditional Automation | Agentic AI + MCP |
| --- | --- |
| Centralized control | Distributed coordination |
| Static rule execution | Context-aware reasoning |
| Manual incident handling | Autonomous self-healing |
| Configuration scripts | Intent-driven adaptability |

MCP turns SONiC fabrics into cooperative, evolving systems.

PalC's Vision: Engineering Distributed Autonomy

Our MCP framework fuses AI reasoning, SONiC's openness, and operational discipline into a distributed, resilient control model.

The goal is to give networks the ability to handle complexity, so humans can focus on innovation.

The outcome:

  • Networks that heal themselves.
  • Operations that think in context.
  • Infrastructure that acts with intent.

Key Takeaways

  • MCP (Multi-Agent Coordination Plane) enables real-time coordination among SONiC agents.
  • Agentic AI transforms SONiC from automated to intelligent.
  • PalC Networks delivers the engineering and ecosystem to make open autonomy practical.
  • The result: open, intelligent, business-aware data centers built for the future.

Contact us today to learn how PalC Networks can support your journey towards future-ready infrastructure.

Categories: Artificial Intelligence, Networking

How PalC Networks builds trust and resilience into open networking deployments

Why "Open" Needs "Assurance"

Open networking is no longer a fringe experiment; it is the foundation of modern data center infrastructure.
SONiC, the open-source network operating system born at Microsoft and nurtured by the Linux Foundation, is now powering hyperscale and enterprise data centers alike.
But in regulated industries (finance, government, healthcare, and telecom), openness alone isn't enough.
These environments demand traceability, compliance, and continuous assurance.

The question isn't just "Can SONiC run at scale?"
It's "Can it meet audit, compliance, and security standards without losing its open DNA?"

That's where hardening becomes essential.

What "Hardened SONiC" Really Means

In PalC's terminology, Hardened SONiC is not just a patched OS.
It's a tested, validated, and continuously supported build of SONiC, engineered for production use in environments where downtime or misconfiguration is unacceptable.

A hardened SONiC image from PalC includes:

  • Extended regression and conformance testing across multi-vendor ASICs and hardware platforms.
  • Security baselines: patched CVEs, role-based access controls (RBAC), secure logging, and firmware validation.
  • Operational guardrails: validated upgrade/rollback workflows, version locking, and signed images.
  • Lifecycle visibility: telemetry and alert hooks tied to TAC processes for proactive support.

In short: we take SONiC's open flexibility and wrap it in enterprise-grade reliability.

Why Regulated Environments Need a Hardened SONiC Approach

Regulated sectors such as BFSI, government networks, and telecom carriers live under strict mandates for data integrity, availability, and traceability.
These mandates translate directly into network design expectations.

Let's break that down.

1. Compliance by Design

Every software component must be auditable, from kernel to NOS to telemetry stack.
Hardened SONiC provides version-controlled builds, cryptographic signing, and artifact traceability that meet regulatory audit standards such as ISO 27001, PCI DSS, or RBI/BIS mandates in BFSI.

2. Security by Default

Unpatched CVEs are unacceptable.
PalC's hardened builds include ongoing vulnerability tracking, secure boot enablement, ACL enforcement, and integration with external authentication (LDAP, TACACS+, RADIUS).

3. Operational Stability

Regulated enterprises operate under SLA-driven performance commitments.
SONiC's modular architecture can be both an advantage and a risk: untested combinations can fail in production.
PalC's validation suite ensures all supported features (L2/L3/MPLS/EVPN/VXLAN) and vendor ASICs pass regression across 500+ functional and fault scenarios.

4. Observability and Accountability

Telemetry is not optional.
Each packet path, queue behavior, and interface statistic must be traceable.
Hardened SONiC integrates gNMI-based telemetry with PalC's NetPro Suite, enabling historical replay and audit visibility across compliance cycles.

The PalC Approach: Engineering Confidence into Openness

1. Build Validation: Qualification Across Platforms

Each PalC SONiC build goes through multi-phase qualification:

  • Hardware Compatibility Validation
    Tested on Broadcom, Marvell, and Intel platforms, ensuring feature parity and driver consistency.
  • Functional Regression
    500+ test cases covering Layer 2/3 protocols, EVPN-VXLAN, QoS, ACLs, and multi-chassis link aggregation.
  • Negative Testing
    Simulating failed links, route flaps, process restarts, and misconfigurations to validate SONiC's failover logic.
  • Performance Benchmarking
    Line-rate throughput and latency benchmarks using IXIA or TRex frameworks, compared against OEM baselines.

This forms our Hardened SONiC Qualification Matrix: a continuous integration pipeline that ensures each release is ready for production, not just lab demos.

2. Secure Configuration Baselines

Security in SONiC begins with the image, but extends into runtime.
Our hardening templates implement:

  • Role-Based Access Control (RBAC) for administrative isolation.
  • AAA integration with corporate identity providers (LDAP, RADIUS, or SSO).
  • Config Integrity Checkpoints: SHA-signed configuration backups and change validation.
  • Secure Management Channels: enforced SSHv2, TLS 1.2+, SNMPv3, gNMI/gRPC over SSL.
  • Disabling of default accounts and unused services as part of Day 0 provisioning.

These configurations align with CIS Benchmarks and NIST 800-53 guidelines, ensuring compliance readiness from the first boot.
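As an illustration of what such a baseline check can look like in practice, here is a small Python sketch that audits an sshd configuration against expected values and records a checksum of the SONiC startup config. The baseline values and file paths are illustrative, not PalC's actual hardening templates.

import hashlib

# Illustrative Day 0 baseline; a real template would cover far more settings.
SSHD_BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "Protocol": "2",
}

def audit_sshd(path="/etc/ssh/sshd_config"):
    """Return baseline keys whose running value deviates from the baseline."""
    actual = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] in SSHD_BASELINE:
                actual[parts[0]] = parts[1]
    return {key: (want, actual.get(key))
            for key, want in SSHD_BASELINE.items()
            if actual.get(key) != want}

def config_checksum(path="/etc/sonic/config_db.json"):
    """SHA-256 of the startup config, stored as an integrity checkpoint."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

if __name__ == "__main__":
    print("sshd drift:", audit_sshd() or "none")
    print("config_db.json checksum:", config_checksum())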

3. Lifecycle Assurance & Patch Management

Open-source agility is a double-edged sword: patches evolve quickly.
PalC's sustain program integrates SONiC patch cycles with enterprise change windows:

  • Patch Validation Pipelines: New commits undergo automated test runs in PalC's CI/CD lab.
  • Version Locking: Enterprises can freeze on validated releases while security patches continue to be backported.
  • Rollback Automation: Instant rollback capability in case of regression, integrated with our orchestration tools.

This process ensures that openness doesn't compromise predictability.
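For context, SONiC's built-in image manager already exposes the primitives that such rollback automation wraps. A simplified manual sequence (the image name is a placeholder) might look like this:

# List installed images and the current boot default
sonic-installer list

# Roll back by re-selecting a previously validated image as the default
sudo sonic-installer set-default SONiC-OS-<previously-validated-version>
sudo reboot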

4. Telemetry & Compliance Observability

In regulated environments, you can't just prove uptime; you must prove why it was maintained.
Using NetPro Suite, hardened SONiC deployments gain:

  • Real-time gNMI telemetry streams from switches.
  • Prometheus exporters for metrics collection.
  • Grafana dashboards for visual compliance reporting.
  • Integration with SIEM tools (e.g., Splunk, Elastic, or OpenSearch) for anomaly correlation.

Auditors can replay network states, review link utilization, and validate SLA adherence from a single pane.
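As a small example of the collection side, here is a sketch using the open-source pygnmi client to stream interface counters over gNMI. The device address, credentials, and path are placeholders, and NetPro Suite's actual collectors are not shown here.

from pygnmi.client import gNMIclient

# Sample interface counters every 10 seconds over a gNMI stream.
subscribe_req = {
    "subscription": [{
        "path": "openconfig-interfaces:interfaces/interface/state/counters",
        "mode": "sample",
        "sample_interval": 10_000_000_000,  # interval is in nanoseconds
    }],
    "mode": "stream",
    "encoding": "json",
}

with gNMIclient(target=("10.0.0.1", 8080), username="admin",
                password="admin", insecure=True) as gc:
    for update in gc.subscribe2(subscribe=subscribe_req):
        # Each update can be timestamped and archived for audit replay,
        # or re-exported toward Prometheus/SIEM pipelines.
        print(update)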

5. TAC-Driven Operational Model

Even the best-engineered network will face incidents.
The difference lies in response speed and insight.

PalC's Technical Assistance Center (TAC) operates in three tiers:

  • L1: Immediate triage, log analysis, and guided recovery.
  • L2: Root-cause diagnosis, topology validation, escalation management.
  • L3: Engineering-level debugging and patch integration directly with SONiC community branches.

Every support case feeds back into our Hardened SONiC Knowledge Base, ensuring learnings become new safeguards.

This is sustainability through feedback loops: the more we support, the smarter the platform gets.

SONiC in FinTech Core Networks

In one of India's leading FinTech payment operators, PalC deployed a SONiC-based open fabric across three high-availability data centers.
The goals were clear: vendor independence, audit readiness, and zero unplanned downtime.

Challenges included:

  • Legacy OEM lock-in and opaque management.
  • Manual firmware rollbacks during audits.
  • Limited visibility across multi-vendor devices.

Our Solution:

  • Hardened SONiC builds validated against the client's exact ASICs.
  • Automated compliance telemetry, feeding into their security audit dashboards.
  • Integrated TAC support with pre-agreed SLA response tiers.
  • NetPro Sustain for continuous monitoring and regression validation after every change window.

The result:

  • 40% reduction in operational costs.
  • 100% audit traceability across firmware and configuration changes.
  • Zero downtime during compliance audits.

Proof that openness can coexist with regulation, if engineered right.

Hardened SONiC Best Practices

Here's a distilled checklist based on our field experience:

| Stage | Best Practice | Outcome |
| --- | --- | --- |
| Design | Define compliance mapping (ISO 27001, PCI, NIST). | Architecture aligns with regulation before deployment. |
| Image Prep | Use signed, tested, and version-controlled SONiC images. | Verified integrity, no drift between nodes. |
| Access Control | Implement RBAC + AAA + MFA for all admins. | Prevents privilege escalation. |
| Telemetry | Enable gNMI, stream to secure collectors. | Continuous visibility and auditability. |
| Change Management | Use configuration-as-code and CI/CD validation. | Safe, repeatable updates. |
| Support | Integrate with enterprise ticketing via TAC APIs. | Rapid triage and documentation. |

Why PalC Networks Leads in Hardened SONiC

PalC isn't just deploying open networking; we're industrializing it.

Our contribution to the SONiC ecosystem spans RFC drafts, validation tooling, and active community participation.
But what differentiates us in regulated sectors is our ability to bridge open innovation with enterprise discipline.

We combine:

  • SONiC engineering depth (protocol enhancements, FRR stack contributions).
  • End-to-end deployment experience (design → validation → TAC).
  • A proven sustain model that aligns open-source agility with compliance rigidity.

For enterprises navigating audits, risk frameworks, and strict SLAs, PalC Networks delivers the confidence to run SONiC at scale.

Summary

The future of data centers is open, but it must also be trustworthy.
Hardened SONiC offers the best of both worlds: agility without risk, freedom without fragility.
When compliance meets code, and automation meets assurance,
you donโ€™t just build a network.
You build trust at line rate.

Contact us today to learn how PalC Networks can support your journey towards future-ready infrastructure.

Categories: Artificial Intelligence, Networking

The Shift Toward Reasoning Networks

Every evolution in networking has pursued one goal: reducing human friction.
From command-line configurations to intent-driven automation, each step simplified execution but not understanding.
As networks now span clouds, edges, and AI clusters, complexity is no longer merely operational; it has become cognitive.
Artificial Intelligence (AI) is stepping into that gap, not just as a data analytics tool but as a reasoning layer for networks that can learn, infer, and decide.
At the center of this shift is Retrieval-Augmented Generation (RAG), a framework that allows AI to think with the network's own knowledge.
RAG marks the point where network AI stops merely predicting and starts understanding.

The Evolution of Network Intelligence

| Era | Core Approach | Limitation | Next Step |
| --- | --- | --- | --- |
| Manual Era | Human-driven configs | Error-prone, inconsistent | Scripted automation |
| Automation Era | SDN, CI/CD, SONiC pipelines | Reactive, limited context | Contextual AI reasoning |
| AI Era | Retrieval + Generation | Needs domain understanding | Self-operating cognition |

The next leap isn't automation; it's comprehension.
Networks that don't just execute playbooks, but understand why they're executing them.

How RAG Fits in Networking

Networks are knowledge systems. They generate massive amounts of unstructured intelligence (telemetry, syslogs, event traps, policy states), most of which remains underutilized.

RAG converts this operational exhaust into reasoning fuel. It enables AI models to:

  • Retrieve live context: What's happening across fabrics, clusters, and tenants.
  • Ground reasoning: Align insights with real-time configurations.
  • Generate precision: Produce factual, explainable outcomes.

In networking terms, RAG is the bridge between observability and cognition: it converts visibility into understanding.

Inside the RAG Loop

RAG's value lies not only in the workflow, but also in the reasoning feedback that emerges from it.

  1. Collect & Curate: SONiC telemetry, NetPro metrics, logs, configs.
  2. Index Knowledge: Create a searchable intelligence layer of historical and live data.
  3. Retrieve Context: Query relevant slices ("What caused leaf-03 reboot last night?").
  4. Generate Reasoning: AI synthesizes causal narratives or configuration recommendations.
  5. Learn & Adapt: Verified responses become part of the retrieverโ€™s future context.

This loop makes networks progressively smarter, not just faster.
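Here is a deliberately tiny Python sketch of the retrieve-then-generate pattern behind that loop. The keyword-overlap retriever stands in for a real vector index, the template stands in for an LLM, and the log lines are invented:

KNOWLEDGE = [
    "2024-05-01 leaf-03 syslog: FAN2 failure detected, thermal threshold exceeded",
    "2024-05-01 leaf-03 syslog: thermal shutdown initiated, device rebooted",
    "2024-04-28 spine-01 config: BGP timers changed to 10/30",
]

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: len(terms & set(doc.lower().split())),
                  reverse=True)[:k]

def generate(query, context):
    """Stand-in for an LLM call: the answer is grounded in retrieved evidence."""
    evidence = "\n  ".join(context)
    return f"Q: {query}\nGrounded evidence:\n  {evidence}"

query = "What caused leaf-03 reboot last night?"
print(generate(query, retrieve(query, KNOWLEDGE)))

Verified answers fed back into the corpus would implement the fifth step, Learn & Adapt.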

Where RAG Redefines NetOps

  • Root Cause Reasoning: Move beyond correlation to infer causation with evidence.
  • Policy Intelligence: Detect and explain compliance drifts across vendors.
  • Cognitive Assistants: Natural-language diagnostics for L1 engineers.
  • Contextual Configs: Generate validated SONiC/BGP/EVPN templates grounded in current state.
  • Adaptive Learning: Retain lessons from every RCA, ticket, or anomaly.

In effect, RAG creates a knowledge memory for the network: a living library that improves operational trust and speed.

PalC Networks' Perspective: From Telemetry to Reasoning

At PalC Networks, our journey through SONiC-based fabrics, AI observability, and cloud-native orchestration has naturally converged toward RAG-driven network cognition.

Our focus areas include:

  • Integrating NetPro Suite as a real-time retrieval layer, grounding AI in verified telemetry.
  • Domain-tuned AI models that understand network semantics, from L2 loops to RoCEv2 optimizations.
  • Cross-vendor contextual reasoning to unify visibility across SONiC, Cisco, Juniper, and Arista environments.

As contributors to the SONiC ecosystem and the Linux Foundation, we're advancing an open, cognitive networking paradigm in which intelligence is shared, transparent, and self-improving.

Turning Data into Cognitive Advantage

Enterprises adopting RAG-based network intelligence typically realize:

  • 60% faster RCA through retrieval-grounded context.
  • Reduced operational overhead via explainable AI triage.
  • Improved onboarding as natural language replaces CLI silos.
  • Lower TCO by extending reasoning across multi-vendor networks.

Looking Ahead: From Intelligent to Autonomous Networks

The next generation of networks won't just detect or report; they'll reason, decide, and adapt.
AI agents will retrieve evidence, simulate outcomes, and execute remediations with policy assurance.

RAG is the cognitive fabric that turns static data into continuous intelligence.
It's how networks evolve from visibility to comprehension, and from automation to autonomy.

In Closing

Retrieval-Augmented Generation marks a turning point in networking, where AI becomes both a memory and a mind.

At PalC Networks, we believe the future of network operations lies in intelligence built on understanding: networks that can explain themselves as well as they perform.

Contact us today to learn how PalC Networks can support your journey towards future-ready infrastructure.

Categories: OpenStack, Networking

This document provides an overview of integrating Telegraf, InfluxDB, and Grafana to monitor SONiC (Software for Open Networking in the Cloud) devices using gNMI (gRPC Network Management Interface). It highlights the advantages of this setup and compares it with other monitoring solutions.

Components Overview

1. Telegraf

  • A lightweight, open-source server agent for collecting and sending metrics.
  • Supports multiple input plugins, including gNMI, to collect telemetry data from SONiC devices.
  • Can be configured to push data to InfluxDB for storage and visualization.

2. InfluxDB

  • A high-performance time-series database designed to handle large volumes of real-time data.
  • Efficiently stores telemetry data collected from network devices.
  • Supports querying and analysis using InfluxQL or Flux.

3. Grafana

  • An open-source visualization and monitoring tool.
  • Provides dashboards for real-time and historical data analysis.
  • Supports alerting and integrates well with InfluxDB.

4. gNMI (gRPC Network Management Interface)

  • A modern network management protocol based on gRPC.
  • Enables efficient and secure telemetry data collection.
  • Used by SONiC to provide structured and real-time network telemetry.

Advantages of This Setup

  • Real-Time Monitoring: gNMI provides real-time telemetry data, ensuring up-to-date insights into network performance.
  • Scalability: Telegraf's lightweight architecture and InfluxDB's efficient time-series storage enable scalable monitoring.
  • Flexibility: Supports multiple plugins and data sources, making it adaptable for various monitoring needs.
  • Efficient Data Storage: InfluxDB optimizes storage for high-frequency data, reducing overhead compared to traditional relational databases.
  • Customizable Dashboards: Grafana offers extensive visualization options, making network analysis intuitive and user-friendly.
  • Automation & Alerting: Grafana's built-in alerting allows proactive network issue detection and response.

Advantages of gNMI Over Other Protocols

| Feature | gNMI | SNMP | NETCONF/YANG | RESTCONF |
| --- | --- | --- | --- | --- |
| Transport | gRPC-based (binary) | UDP-based (text) | SSH-based (XML) | HTTP-based (XML/JSON) |
| Performance | High (streaming support) | Low (polling-based) | Moderate (RPC-based) | Moderate (REST-based) |
| Security | TLS encryption | Minimal security | Secure with SSH | Secure with TLS |
| Scalability | High | Moderate | Moderate | Moderate |
| Data Model | Structured (Protobuf/YANG) | Unstructured (OID) | Structured (YANG) | Structured (YANG) |
| Telemetry | Streaming & polling | Polling only | RPC-based retrieval | RPC-based retrieval |
| Ease of Use | Modern & developer-friendly | Legacy, complex | Requires XML handling | Requires REST API knowledge |

Comparison with Other Solutions

| Feature | Telegraf + InfluxDB + Grafana | SNMP-based Monitoring | ELK Stack (Elasticsearch, Logstash, Kibana) |
| --- | --- | --- | --- |
| Real-time Data | Yes (gNMI streaming) | No (polling-based) | Limited (log-based) |
| Data Efficiency | High (time-series storage) | Moderate | High (searchable logs) |
| Visualization | Extensive (Grafana) | Basic | Advanced (Kibana) |
| Alerting | Yes | Limited | Yes |
| Scalability | High | Moderate | High |
| Protocol Support | gNMI, SNMP, others | SNMP, NetFlow | Logs, Metrics, APM |

gNMI for Streaming Telemetry from SONiC Devices

gNMI streaming telemetry offers an efficient alternative by continuously transmitting data from network devices with incremental updates. Instead of relying on SNMP's polling mechanism, which collects data regardless of changes, gNMI allows operators to subscribe to specific data points using well-defined sensor identifiers. This approach provides near real-time, model-driven, and analytics-ready insights, enabling more effective network automation, traffic optimization, and proactive troubleshooting.

Telegraf Configuration

[[inputs.gnmi]]
  # Address and port of the gNMI gRPC server (update with the SONiC device IPs)
  addresses = [":", ":"]

  # Credentials
  username = ""
  password = ""

  # gNMI encoding requested (one of: "proto", "json", "json_ietf", "bytes")
  encoding = "json"

  # Redial in case of failures after this interval
  redial = "10s"

  # Enable TLS (on some Telegraf versions this option is named enable_tls)
  tls_enable = true

  # Use TLS but skip chain and host verification
  insecure_skip_verify = true

  # Subscription to get temperature details
  [[inputs.gnmi.subscription]]
    name = "temperature_sensor"
    origin = "openconfig"
    path = "<url>"
    sample_interval = "60s"
Note: once the configuration has been updated, restart the Telegraf service (sudo systemctl restart telegraf).
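To verify that subscription data is landing in the database, a quick query sketch can help. This assumes InfluxDB 2.x with the influxdb-client Python package; the URL, token, org, and bucket names are placeholders for your environment.

from influxdb_client import InfluxDBClient

# Pull the last hour of the temperature_sensor measurement written by Telegraf.
flux_query = '''
from(bucket: "telegraf")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature_sensor")
  |> limit(n: 5)
'''

with InfluxDBClient(url="http://localhost:8086",
                    token="<token>", org="<org>") as client:
    for table in client.query_api().query(flux_query):
        for record in table.records:
            print(record.get_time(), record.get_field(), record.get_value())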

Dashboards

Strategic Takeaway

This observability stack is not just a combination of open-source tools; it's a production-ready framework engineered for real-time visibility across SONiC environments.

By combining gNMI streaming, Telegraf, InfluxDB, and Grafana, and tuning them specifically for SONiC-based networking, PalC Networks helps organizations monitor infrastructure with precision, scalability, and speed. We've implemented custom telemetry paths, dashboard packs, and threshold-driven alerting systems.

If you're adopting SONiC and planning to integrate it with a monitoring stack, reach out to us. Our team supports everything from architecture design to implementation, validation, and ongoing maintenance.

Explore Our Open Networking Capabilities

If you need support or guidance in exploring OpenStack, open networking, or data center infrastructure optimization, we are here to help.

Contact us today to learn how PalC Networks can support your journey towards future-ready infrastructure.

Categories: Networking

SONiC (Software for Open Networking in the Cloud) emerged from Microsoft's innovation hub in 2016, designed to transform Azure's complex cloud infrastructure connectivity. Built on a Debian foundation, SONiC adopts a microservice-driven, containerized design, where core applications operate within independent Docker containers. This separation allows for seamless integration across various platforms. Its northbound interfaces (NBIs), including gNMI, REST, SNMP, CLI, and OpenConfig YANG models, facilitate smooth integration with automation systems, providing a robust, scalable, and adaptable networking solution.

Why an Open NOS is the Future

Adopting an open Network Operating System (NOS) like SONiC promotes disaggregation, a concept that liberates hardware from software constraints, allowing for a flexible, plug-and-play approach. This modular structure enables a unified software interface across different hardware architectures, enhancing supply chain flexibility and avoiding vendor lock-in. Custom in-house automation frameworks can be preserved without needing per-vendor adjustments. The DevOps-oriented model of SONiC accelerates feature deployment and rapid bug fixes, reducing dependence on vendor-specific release schedules. Moreover, the open-source ecosystem fosters innovation and broad collaboration, supporting custom use cases across various deployments. This freedom translates into significant cost efficiencies, reducing Total Cost of Ownership (TCO), Operational Expenditure (OpEx), and Capital Expenditure (CapEx), making it a compelling choice for modern networks.

Why SONiC Stands Apart

Among the numerous open-source NOS solutions available, SONiC distinguishes itself through its growing adoption across enterprises, hyperscale data centers, and service providers. This success highlights its versatility and robustness. Open-source contributions have meticulously refined SONiC, tailoring it for specific use cases and enhancing its features while allowing adaptable architectures.

Key Attributes of SONiC:

Open Source:

  • Vendor-neutral: Operates on any compatible vendor hardware.
  • Accelerated feature deployment: Custom modifications and quick bug resolutions.
  • Community-driven: Contributions benefit the broader SONiC ecosystem.
  • Cost-effective: Reduces TCO, OpEx, and CapEx significantly.

Disaggregation:

  • Modular architecture: Containerized components enhance resilience and simplify functionality.
  • Decoupled functionalities: Allows independent customization of software components.

Uniformity:

  • Abstracted hardware: SAI simplifies underlying hardware complexities.
  • Portability: Ensures consistent performance across diverse hardware environments.

DevOps Integration:

  • Automation: Seamless orchestration and monitoring.
  • Programmability: Utilizes full ASIC capabilities.

Real-World SONiC Deployment

Despite SONiC's evident advantages, network operators often have critical questions: Is SONiC suitable for my network? What does support entail? How do I ensure code quality? And how do I train my team? The true test of a NOS is its user experience. For open-source solutions like SONiC, success depends on delivering a seamless experience while ensuring robust vendor-backed support.

Operators considering SONiC typically fall into two categories: those with a self-sustaining ecosystem capable of handling an open NOS and those exploring its potential. For the former, SONiC may be customized to meet specific network demands, potentially involving a private distribution or vendor-backed commercial versions. The latter group, often seeking simpler use cases, typically relies on community SONiC, balancing its open-source nature with vendor validation.

SONiC: Leading Disaggregation and Open Networking

The shift towards open networking, driven by disaggregation, has brought SONiC into the spotlight. Its flexibility, cost-effectiveness, and capacity for innovation make it an attractive option, especially for hyperscalers, enterprises, and service providers. By embracing open architecture and standard merchant silicon, SONiC provides a cost-efficient alternative to traditional closed systems, delivering equivalent features and support at a fraction of the price.

Managing SONiC: Overcoming Deployment Challenges

While companies like Microsoft, LinkedIn, and eBay have successfully integrated SONiC, they often encounter challenges such as vendor fragmentation, compatibility issues, and inconsistent support across hardware platforms. Managing diverse hardware can become complex due to each vendorโ€™s unique configurations, while updates can disrupt operations by introducing compatibility problems. Integrating different SONiC versions across vendors may also lead to inconsistencies and unreliable telemetry data, adding further complications. Additionally, varying support levels make troubleshooting difficult, and operators often require additional training to navigate the nuances of each vendorโ€™s implementation. Although these challenges can feel overwhelming, with careful planning, skilled personnel, and the right support, they can be effectively managed for a smooth and successful SONiC deployment.

PalC's SONiC NetPro Suite directly addresses these challenges by offering comprehensive solutions designed to streamline SONiC adoption. Through the Ready, Deploy, and Sustain packages, PalC ensures end-to-end support, providing expert guidance to fill gaps in in-house expertise and fostering a DevOps-friendly culture for smooth operations. PalC's strong partnerships with switch and ASIC vendors guarantee seamless integration and full infrastructure support, while its rigorous QA processes ensure performance and reliability. By offering customized support and clear cost assessments, PalC simplifies SONiC deployment, minimizing disruptions and ensuring a scalable, secure, and optimized network infrastructure.

Conclusion

Choosing the right SONiC version is crucial for network optimization. By assessing feature needs, community support, hardware compatibility, and security, organizations can make informed decisions that enhance network performance. SONiCโ€™s evolution from a Microsoft project to a leading open-source NOS underscores its transformative potential and solidifies its role in future cloud networking.

Categories: Networking

In the ever-evolving landscape of network infrastructure management, PalC Networks introduces a cutting-edge solution that empowers organizations to efficiently control and optimize their network resources. Our Network Management System โ€“ PalC NetPulse is designed with a robust set of features, following the FCAPS framework, which covers Fault, Configuration, Accounting, Performance, and Security management. Let’s delve into what sets PalC NetPulse apart and how it caters to the diverse needs of network operators and OEM/NOS vendors.

The Power of PalC NetPulse
Efficiency and Control:

  • L1 Cross Connect for EdgeCore Cassini Boxes: We’ve integrated L1 cross-connect capabilities for EdgeCore Cassini boxes, enhancing your control over network resources.
  • Interface Configuration Management: Our NMS software offers streamlined configuration management for both Ethernet and Optical interfaces, simplifying network setup and maintenance.
  • Inventory and User Management: PalC NetPulse provides robust inventory and user management, allowing you to keep track of network assets.

Comprehensive Monitoring:

  • Peripheral Device Monitoring: Stay on top of your network's health by monitoring peripheral devices. PalC NetPulse offers a holistic view of your infrastructure.

  • Optical Parameter Monitoring: Get in-depth insights into optical parameters to ensure optimal performance.
  • Topology Viewer and Alarms/Event Notification: Visualize your network's topology and receive real-time alarms and notifications. Stay informed and respond swiftly.
  • Microservices-Compatible Architecture: Our NMS is built on a microservices-compatible architecture, ensuring flexibility and scalability for your evolving network needs.

What Lies Ahead in Our Roadmap:
We are committed to enhancing our NMS software further to meet the ever-changing demands of network management:
  • Layer-2 Configuration Features: We are adding more layer-2 configuration features to provide you with a more comprehensive toolkit.

  • Advanced Automation: Our NetPulse will have enhanced network automation capabilities, simplifying complex tasks.
  • Integration with Cloud-Based Environments: In line with the industry’s shift towards cloud-based solutions, we are working on seamless integration with cloud environments.

Supported Software for Versatility
PalC NetPulse is compatible with various software platforms, making it versatile and adaptable to different deployment scenarios:

  • SONiC for Data-Center Deployments: Perfect for data-center networks, PalC NetPulse integrates seamlessly with SONiC.

  • Goldstone for Packet Optical Deployments: If your network requires packet optical solutions, PalC NetPulse has you covered with Goldstone compatibility.
  • OcNOS for Data Center/CSR Requirements: Data center and carrier-grade service provider networks benefit from our support for OcNOS.
Categories: Networking

PerfSONAR (performance Service-Oriented Network monitoring Architecture) is a network measurement toolkit that provides end-to-end network diagnostics over integrated network paths. It is an on-the-go issue detection system for network health based on various route diagnostic tools in its toolkit. It covers the essential L3 network connectivity testing tools, hosting protocols such as ICMP, OWAMP, TWAMP, and other methods such as traceroute and iperf. It also provides a uniform interface that allows for the scheduling of measurements, storage of data in uniform formats, and scalable ways to retrieve data and generate visualizations.

PerfSONAR allows scheduled tests to run on a regular basis or on demand, as required, from an interactive shell or from a GUI (perfSONAR Web Admin). The test results can then be stored locally or sent to a centralized server where results from multiple hosts are aggregated for a better view of what is happening in the network. The toolkit includes numerous utilities responsible for carrying out the actual network measurements; these form the foundational layer of perfSONAR.

Figure 1 – perfSONAR architecture

In the architecture diagram (Figure 1) mentioned above, the key components have been outlined below:

  • perfsonar tools – This module contains the different utilities responsible for carrying out the actual network measurements. The TWAMP tool consists of the twping binary, used on the client side, and the twampd binary, which runs on the server/responder side. Similarly, there are owamp, powstream, traceroute, tracepath, paris-traceroute, and other diagnostic methods. These tools can be used to measure the following performance metrics:
  • Latency – measuring one-way and two-way delays
  • Packet loss, duplicate packets, jitter
  • Tracing the network path
  • Trace path – identifying the path MTU
  • Throughput – measuring the bandwidth availability

An overview of supported tools:

  • owamp – A set of tools primarily used for measuring packet loss and one-way delay. It includes the command owping for single short-lived tests and the powstream command for long-running background tests.
  • twamp – A tool primarily used for measuring packet loss and two-way delay. It has increased accuracy over tools like ping without the same clock synchronization requirements as OWAMP. The client tool is named twping and can be run against TWAMP servers. You can use the provided twampd server; many routers also come with vendor implementations of TWAMP servers that can be tested against.
  • iperf3 – A rewrite of the classic iperf tool used to measure network throughput and associated metrics.
  • iperf2 – Also known as just iperf, a common tool used to measure network throughput that has been around for many years.
  • nuttcp – Another throughput tool with some useful options not found in other tools.
  • traceroute – The classic packet trace tool used to identify network paths.
  • tracepath – Another path trace tool that also measures path MTU.
  • paris-traceroute – A packet trace tool that attempts to identify paths in the presence of load balancers.
  • ping – The classic utility for determining reachability, round-trip time (RTT), and basic packet loss.
  • pScheduler – Responsible for scheduling tests based on user configuration and available time slots on the machine. It finds time slots to run the tools while avoiding scheduling conflicts that would negatively impact results, executes the tools, gathers the results, and sends them to the configured archiver.
  • Archiving – This module contains the esmond component, which stores the measurement information for the tests run. It is often referred to as the measurement archive (MA), and the term is used interchangeably with esmond. Esmond can be installed on each end device or on the central server that collects all test results.
  • psconfig pscheduler agent – This module reads the user configuration from a JSON file and uses it to communicate with pScheduler to schedule the tests.
  • psconfig maddash agent – The agent responsible for reading the same configuration JSON file and creating graphs and charts based on the conducted test results.

The user has different installation options for different end devices: centralized servers, clients, and servers. Here, the centralized server refers to devices that register clients and servers, manage the conduct of tests, collect results, and act as the central point that other devices communicate with for anything related to test parameters. Clients are devices that conduct the throughput, latency, and RTT tests with another end device. Servers are end devices that act as the receiver or reflector for measuring network health. Tests are conducted from the client to the server, either from the client interactive shell on demand or from the central server using a configuration JSON or the psconfig web admin GUI.
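For a feel of the on-demand workflow, here are a few representative invocations from a client shell (the host name is a placeholder):

# One-way latency and loss test against an OWAMP server
owping server.example.net

# Two-way delay test against a TWAMP responder
twping server.example.net

# Ask pScheduler to run a throughput test and send the result to the archiver
pscheduler task throughput --dest server.example.net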

PerfSONAR can be installed in CentOS- or Ubuntu-based Linux environments. The various offerings are:

  • perfsonar-tools – Installs only the command-line binaries needed to run on-demand tests, such as iperf, iperf3, and owamp. Often used for debugging network disruptions rather than for dedicated monitoring of a network.
  • perfsonar-testpoint – Contains the command-line binaries to run on-demand tests, plus tools to schedule tests, conduct tests based on central-server instructions in JSON files, and participate in node discovery conducted by central servers. This package lacks software to store measurement data in a local archive.
  • perfsonar-core – Dedicated to devices that run tests themselves, instruct others to run tests, and store measurement data themselves.
  • perfsonar-toolkit – Includes everything present in perfsonar-core, plus the web admin to conduct tests via the GUI.
  • perfsonar-centralmanagement – Installed on central management servers that instruct others to run tests and manage a number of hosts, while also being able to retrieve and archive the test results from those hosts.

Figure 2 below explains the bundle options.

We can run network measurement tests in two different modes.

  • Full mesh mode – All perfSONAR hosts in the network are involved in running tests to each other; essentially, every host runs perfSONAR.
  • Disjoint mode – Tests are run from one set of perfSONAR hosts to another set of hosts (the second set can be third-party devices).

Once the scheduled tests are completed, results are published to the central management server archiver, where they are picked up by the maddash agent and visualizations are generated. Any present issues can be detected by observing the graphs and charts drawn up by the maddash agent.

TWAMP and TWAMP-light

twping, hosted by TWAMP (Two-Way Active Measurement Protocol), is a tool used to generate IP performance metrics between a client and a server device using two-way (round-trip) tests. TWAMP is a development of the OWAMP (One-Way Active Measurement Protocol) method. TWAMP uses four entities in the topology: the control-client and session-sender are packaged into the client device, and the session-reflector and server are packaged into the server device.

Initially, a TCP socket connection is established between the client and server on the dedicated TWAMP TCP port 862, after which the server sends a greeting message that contains the security authentication modes supported by the server. Upon receiving the greeting message, the client sends the test conditions and the information about which IP of the server device the test packets will be sent to. If the server agrees to conduct the described tests, the test begins as soon as the client sends a Start-Sessions or Start-N-Session message. This process is referred to as the TWAMP-control process.

As part of a test, the client sends a stream of UDP-based test packets to the server, and the server responds to each received packet with a response UDP-based test packet. When the client receives the response packets from the session-reflector, the information is used to calculate two-way delay, packet loss, and packet delay variation between the two devices.

The user is provided with an option to bypass the TWAMP-control process and conduct a TWAMP-light test. In a TWAMP-light test, the only entities present are the session-sender and the session-reflector. The initial TCP connection between the server and client is bypassed, and only the stream of UDP-based test packets is sent and received. To run a TWAMP-light test, however, the server IP and the UDP port that will reflect the TWAMP packets need to be added to the reflector configuration on the server. The client running a TWAMP-light test must send its UDP packets to this configured IP and UDP port; otherwise the packets are dropped and the test metrics are lost.

Figures 3 and 4 show the TWAMP and TWAMP-light protocol components.

Figure 3 – TWAMP

Figure 4 – TWAMP-light

Categories: Networking

NetBox is the leading solution for modeling and documenting modern networks. By combining the traditional disciplines of IP address management (IPAM) and datacenter infrastructure management (DCIM) with powerful APIs and extensions, NetBox provides the ideal “source of truth” to power network automation.

Built for Networks

Unlike general-purpose CMDBs, NetBox has curated a data model which caters specifically to the needs of network engineers and operators. It delivers a wide assortment of object types carefully crafted to best serve the needs of infrastructure design and documentation. These cover all facets of network technology, from IP address management to cabling to overlays and more:

  • Hierarchical regions, sites, and locations
  • Racks, devices, and device components
  • Cables and wireless connections
  • Power distribution tracking
  • Data circuits and providers
  • Virtual machines and clusters
  • IP prefixes, ranges, and addresses
  • VRFs and route targets
  • FHRP groups (VRRP, HSRP, etc.)
  • AS numbers
  • VLANs and scoped VLAN groups
  • L2VPN overlays
  • Tenancy assignments
  • Contact management

Customizable & Extensible

In addition to its expansive and robust data model, NetBox offers myriad mechanisms through which it can be customized and extended. Its powerful plugins architecture enables users to extend the application to meet their needs with minimal development effort.

  • Custom fields
  • Custom model validation
  • Export templates
  • Webhooks
  • Plugins
  • REST & GraphQL APIs

Always Open

Because NetBox is an open-source application licensed under Apache 2, its entire code base is completely accessible to the end user, and there’s never a risk of vendor lock-in. Additionally, NetBox development is an entirely public, community-driven process to which everyone can provide input.

NetBox Development

GitHub repository: https://github.com/netbox-community/netbox

Powered by Python

NetBox runs as a web application atop the Django Python framework with a PostgreSQL database.

The complete documentation for NetBox can be found at docs.netbox.dev. A public demo instance is available at https://demo.netbox.dev.

Adding Sites and Interfaces via the REST API

Sites can be added either through the GUI or programmatically through the REST API. The API supports the full workflow end to end: list or create sites, add an interface to a device, fetch the interface details by ID, assign an IP address to that interface, and fetch the IP address by ID to confirm the assignment, as sketched below.
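A minimal Python sketch of those calls (the NetBox URL, API token, and device ID are placeholders; field names follow recent NetBox versions and may differ slightly on older releases):

import requests

NETBOX = "https://netbox.example.com"
HEADERS = {
    "Authorization": "Token 0123456789abcdef",  # replace with a real API token
    "Content-Type": "application/json",
}

# List sites (GET /api/dcim/sites/)
sites = requests.get(f"{NETBOX}/api/dcim/sites/", headers=HEADERS).json()
for site in sites["results"]:
    print(site["id"], site["name"])

# Add an interface to an existing device (POST /api/dcim/interfaces/)
iface = requests.post(
    f"{NETBOX}/api/dcim/interfaces/",
    headers=HEADERS,
    json={"device": 1, "name": "eth0", "type": "1000base-t"},
).json()

# Create an IP address and assign it to that interface
# (POST /api/ipam/ip-addresses/)
ip = requests.post(
    f"{NETBOX}/api/ipam/ip-addresses/",
    headers=HEADERS,
    json={
        "address": "192.0.2.10/24",
        "assigned_object_type": "dcim.interface",
        "assigned_object_id": iface["id"],
    },
).json()

# Fetch the IP address back by ID to confirm the assignment
check = requests.get(f"{NETBOX}/api/ipam/ip-addresses/{ip['id']}/",
                     headers=HEADERS).json()
print(check["id"], check["address"])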

Categories: Networking

This is the second article in my "Edge Computing" series. In this article I'm going to provide an overview of Project Akraino, an open-source edge computing platform initiative from LF Edge (Linux Foundation Edge). For more details about edge computing implementations, read my introduction article, "Introduction to Edge Computing & Open Source Edge Platforms", available at the link below.

LF Edge is an umbrella organization that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. By bringing together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.

Akraino Edge Stack

Akraino is a Linux Foundation project, initiated by AT&T and Intel, that intends to develop a fully integrated edge infrastructure solution and is completely focused on edge computing. Akraino is a set of open infrastructures (such as ONAP, OpenStack, Airship, Kubernetes, and Calico) and application blueprints for the edge, spanning a broad variety of use cases, including 5G, AI, edge IaaS/PaaS, and IoT, for both provider and enterprise edge domains. The Akraino edge stack is targeted at all three types of edge computing implementation: MEC, fog computing/IoT, and cloudlet.

Since edge computing solutions require large-scale deployment (typically covering 1,000-plus locations), the key requirement for the Akraino project is to keep costs low and to support large-scale deployments via automation. The goal of Akraino is to supply a fully integrated solution that supports zero-touch provisioning and zero-touch lifecycle management of the integrated stack.

Akraino is targeted at different use cases and implementations; the community manages these by defining blueprints for each deployment. A blueprint is the declarative configuration of an entire stack: the infrastructure/cloud platform, the APIs for managing it, and the applications. Here, declarative configuration management refers to a set of tools that allow users/operators to declare the desired state of a system (be it a physical machine, an EC2 VPC, a cloud account, or anything else) and then let the configuration management system automatically bring the system to the declared state.

Every blueprint consists of the following main components.

  • Declarative Configuration, which defines all the components used within that reference architecture, such as hardware, software, the tools to manage the entire stack, and the point of delivery, i.e., the method used to deploy at a site
  • The required hardware and software to realize the deployment
  • The CI/CD pipeline for continuous integration and delivery
  • POD (Point of Delivery), which defines the BOM of the hardware components needed to realize a particular deployment at different scale requirements

Blueprints have been created by the Akraino community and focus exclusively on the edge in all of its different forms. What unites all of these blueprints is that they have been tested by the community and are ready for adoption as-is, or for use as a starting point for customizing a new edge blueprint.

Akraino supports VM, container, and bare-metal workloads based on the application deployment. To meet this, the Akraino community works with multiple upstream open-source communities, such as Airship, OpenStack, and ONAP, to deliver a fully integrated stack. The link below provides the list of blueprints approved by Akraino.

https://wiki.akraino.org/pages/viewpage.action?pageId=1147243

In this article, I'm going to cover one of the blueprints and explain it in detail. Since this series talks about 5G deployment, the blueprint I'm going to discuss comes under the "5G MEC System Blueprint Family" and is called "Enterprise Applications on Lightweight 5G Telco Edge (EALTEdge)".

EALTEdge Introduction

The main objective of EALTEdge is to provide a platform that various telecom operators can leverage to give value-added services to end users, with the intent of making a complete ecosystem for an enterprise-level 5G telco edge platform. EALTEdge is targeted at the telco edge and provides a lightweight MEP solution. I wrote an article some time back about the collaboration between 5G telco providers and cloud providers; EALTEdge is one such implementation by 5G providers that enables enterprises to run their applications at the 5G MEC edge.

This lightweight MEC platform enables real-time enterprise applications on the 5G telco edge. The following are some of the use cases of an EALTEdge deployment:

  • Optimized Streaming Media
  • Machine Vision in Campus Networks
  • Mobile Office

Architecture

The diagram below represents the high-level architecture of the EALTEdge platform.

It consists of MEC Management (MECM) and MEP platform components. The OCD helps in deploying the MEP and MECM components and consists of a list of playbooks to deploy the platform and infrastructure.

MECM Components:

  • Application LCM: Handles the life-cycle management of applications.
  • Catalog: Provides application package management.
  • Monitoring: Monitoring and visualization of platform and applications.
  • Common DB: Persistent database.

MEC Host Components:

  • MEP Agent: Client libraries for application developers for service registry and discovery.
  • API Gateway: Single entry point for MEP services.
  • Certificate Management: Cloud-native certificate creation and management.
  • Secret Management: Cloud-native secret generation and management.
  • Event Bus: Message bus service offered to applications.
  • CNI: Container network.
  • Service Registry: Provides visibility of the services available on the MEC server.
  • Monitoring: Monitoring and visualization of platform and applications.
  • Common DB: Persistent database.

The diagram below represents the software used in the different layers of the EALTEdge platform.

Deployment Architecture

Typically the EALTEdge platform is deployed across three different nodes (including the OCD node); the software used on these nodes is listed below. The deployment architecture consists of the following nodes:

  • One-Click Deployment Node
  • MECM Node
  • MEC Hosts Node
Categories: Networking

Kingston Smiler Selvaraj
Founder & CEO at PalC Networks; Co-Chair at TIP (Telecom Infra Project) OOPT NOS Goldstone subgroup

A few months back I wrote an article about the synergy between cloud providers and 5G providers, in which I briefly covered edge computing. Refer to the link below to read that article.

In this article I’m going to talk more details about the Edge Computing and Various Open Source alternatives of Edge Computing Platform. This is going to be a series of article and this article covers the introduction. Before getting into the edge computing platforms, let’s look into what is Edge Computing and various implementation of Edge Computing.

Edge Computing: Edge computing refers to running applications closer to the end user by deploying compute, storage, and network functions relatively close to end users and/or IoT endpoints. Edge computing provides a highly distributed computing environment that can be used to deploy applications and services as well as to store and process content in close proximity. Based on the type of edge device, the proximity and implementation of edge computing varies.

For example, if the edge device is a mobile phone, then the proximity of edge computing in the 5G era is the network operator's data centers at the edge of the 5G network, using the MEC implementation (service provider edge). On the other hand, if the edge device is a set of IoT nodes inside a manufacturing plant, then the proximity of edge computing is on-site within the production facility, using the fog computing implementation (user edge). The diagram below represents a high-level view of the user edge and the service provider edge.

So the implementation of edge computing differs based on the end nodes and has been defined as different implementations:

  • Mobile Edge Computing / Multi-Access Edge Computing (MEC)
  • Fog Computing
  • Cloudlet

Mobile Edge Computing / Multi-Access Edge Computing (MEC)

MEC brings the computational and storage capacities to the edge of the network within the 5G Radio Access Network. The MEC nodes or servers are usually co-located with the Radio Network Controller or a macro base-station to reduce latency. MEC provides the ecosystem wherein the operators can open their Radio Access Network (RAN) edge to authorized third-parties, allowing them to flexibly and rapidly deploy innovative applications and services towards mobile subscribers, enterprises and vertical segments.

The MEC is an initiative from an Industry Specification Group (ISG) within ETSI. The purpose is to create a standardized, open environment that allows the efficient and seamless integration of applications from different vendors, service providers, and third parties across multi-vendor Multi-access Edge Computing platforms. The full specification of MEC is available at https://www.etsi.org/committee/1425-mec

Some of the use cases of MEC are

  • Video analytics
  • OTT (Over the Top services)
  • Location services
  • Internet-of-Things (IoT)
  • Augmented reality
  • Optimized local content distribution and data caching

Fog Computing (FC)

Fog computing is a term created by Cisco that refers to extending cloud computing to the edge of an enterprise's network. Fog computing is a decentralized computing infrastructure placed at any point between the end devices and the cloud. The nodes are heterogeneous in nature and thus can be based on different kinds of elements, including but not limited to routers, switches, access points, IoT gateways, and set-top boxes. Since cloud computing is not viable for many IoT applications, fog computing is often used to address the needs of IoT and Industrial IoT (IIoT). Fog computing reduces the bandwidth needed and reduces the back-and-forth communication between sensors/IoT nodes and the cloud, which can otherwise affect performance badly. Some of the use cases of FC are as follows:

  • Transportation / Logistics
  • Utilities / Energy / Manufacturing
  • Smart Cities / Smart Buildings
  • Retail / Enterprise / Hospitality
  • Service Providers / Data Centers
  • Oil / Gas / Mining
  • Healthcare
  • Agriculture
  • Government / Military
  • Residential / Consumer
  • Wearables / Ubiquitous Computing

Cloudlet

Cloudlets are similar to the public cloud, wherein the public cloud provider offers the end user different offerings like compute, network, and storage; in cloudlets, however, these are offered at the edge, closer to the user's location. A cloudlet is basically a small-scale cloud; unlike cloud computing, which provides unlimited resources, a cloudlet can only provide limited resources. The services provided by cloudlets are over one-hop access with high bandwidth, thus offering low latency for applications. A cloudlet provides better security and privacy, since users are directly connected to it. Cloudlets are often compared to, and confused with, fog computing. Typically fog computing is associated with IoT/IIoT use cases, whereas cloudlets are associated with use cases that require traditional cloud offerings at the edge.

One of the important aspects of cloudlets is handoff across clouds/cloudlets. When a mobile device user moves away from the cloudlet they are currently using, the services on the first cloudlet need to be offloaded to a second cloudlet while maintaining end-to-end network quality. This resembles VM migration in cloud computing but differs considerably in that the VM handoff happens over a Wide Area Network (WAN).

So far we have seen the various implementations of edge computing. Now let's look at the various platforms, especially the open-source platforms, available in the market to realize the edge platform.

Below is the list of platforms available in the open-source community for the MEC segment:

  • Akraino Edge Stack (LF Edge)
  • CORD (Linux Foundation)
  • Airship (OpenStack Foundation)
  • StarlingX (OpenStack Foundation)

The above list is not exhaustive.

There are many other projects for the fog/IoT use case, such as:

  • EdgeX Foundry (LF Edge)
  • KubeEdge
  • Eclipse IoFog (Eclipse)
  • Baetyl (LF Edge)
  • Eclipse Kura (Eclipse)
  • Fledge (LF Edge)
  • Edge Virtualization Engine (LF Edge)
  • Home Edge etc. (LF Edge)

Below are some of the projects in the cloudlet space:

  • OpenStack++
  • Elijah Cloudlet Project.

This is going to be a series of articles; in the next article, we will look at the MEC edge stack in detail with Akraino Edge Stack and CORD.