
SONiC (Software for Open Networking in the Cloud) emerged from Microsoft’s innovation hub in 2016, designed to transform Azure’s complex cloud infrastructure connectivity. Built on a Debian foundation, SONiC adopts a microservice-driven, containerized design, where core applications operate within independent Docker containers. This separation allows for seamless integration across various platforms. Its northbound interfaces (NBIs), including gNMI, REST, SNMP, CLI, and OpenConfig YANG models, facilitate smooth integration with automation systems, providing a robust, scalable, and adaptable networking solution.
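
As a simplified illustration of this containerized design, the sketch below, run on the switch itself, lists the running Docker containers and compares them against the core services usually present on a stock SONiC image. The exact container set varies by release and enabled features, so treat this as a minimal sketch rather than a definitive check:

  import subprocess

  # Core SONiC services typically found on a stock image; the exact set
  # varies by release and enabled features.
  EXPECTED = {"database", "swss", "syncd", "bgp", "teamd", "pmon", "lldp", "snmp"}

  # SONiC runs each service in its own Docker container, so a plain
  # 'docker ps' on the switch reflects the state of the NOS itself.
  out = subprocess.run(
      ["docker", "ps", "--format", "{{.Names}}"],
      capture_output=True, text=True, check=True,
  ).stdout.split()

  running = set(out)
  print("running:", sorted(running & EXPECTED))
  print("missing:", sorted(EXPECTED - running))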

Why an Open NOS is the Future

Adopting an open Network Operating System (NOS) like SONiC promotes disaggregation, a concept that liberates hardware from software constraints, allowing for a flexible, plug-and-play approach. This modular structure enables a unified software interface across different hardware architectures, enhancing supply chain flexibility and avoiding vendor lock-in. Custom in-house automation frameworks can be preserved without needing per-vendor adjustments. The DevOps-oriented model of SONiC accelerates feature deployment and rapid bug fixes, reducing dependence on vendor-specific release schedules. Moreover, the open-source ecosystem fosters innovation and broad collaboration, supporting custom use cases across various deployments. This freedom translates into significant cost efficiencies, reducing Total Cost of Ownership (TCO), Operational Expenditure (OpEx), and Capital Expenditure (CapEx), making it a compelling choice for modern networks.

Why SONiC Stands Apart 

Among the numerous open-source NOS solutions available, SONiC distinguishes itself through its growing adoption across enterprises, hyperscale data centers, and service providers. This success highlights its versatility and robustness. Open-source contributions have meticulously refined SONiC, tailoring it for specific use cases and enhancing its features while allowing adaptable architectures.

Key Attributes of SONiC:

Open Source:

  • Vendor-neutral: Operates on any compatible vendor hardware.
  • Accelerated feature deployment: Custom modifications and quick bug resolutions.
  • Community-driven: Contributions benefit the broader SONiC ecosystem.
  • Cost-effective: Reduces TCO, OpEx, and CapEx significantly.

Disaggregation:

  • Modular architecture: Containerized components enhance resilience and simplify functionality.
  • Decoupled functionalities: Allows independent customization of software components.

Uniformity:

  • Abstracted hardware: SAI simplifies underlying hardware complexities.
  • Portability: Ensures consistent performance across diverse hardware environments.

DevOps Integration:

  • Automation: Seamless orchestration and monitoring.
  • Programmability: Utilizes full ASIC capabilities.

Real-World SONiC Deployment

Despite SONiC’s evident advantages, network operators often have critical questions: Is SONiC suitable for my network? What does support entail? How do I ensure code quality? How do I train my team? The true test of a NOS is its user experience. For open-source solutions like SONiC, success depends on delivering a seamless experience while ensuring robust vendor-backed support.

Operators considering SONiC typically fall into two categories: those with a self-sustaining ecosystem capable of handling an open NOS and those exploring its potential. For the former, SONiC may be customized to meet specific network demands, potentially involving a private distribution or vendor-backed commercial versions. The latter group, often seeking simpler use cases, typically relies on community SONiC, balancing its open-source nature with vendor validation.

SONiC: Leading Disaggregation and Open Networking

The shift towards open networking, driven by disaggregation, has brought SONiC into the spotlight. Its flexibility, cost-effectiveness, and capacity for innovation make it an attractive option, especially for hyperscalers, enterprises, and service providers. By embracing open architecture and standard merchant silicon, SONiC provides a cost-efficient alternative to traditional closed systems, delivering equivalent features and support at a fraction of the price.

Managing SONiC: Overcoming Deployment Challenges

While companies like Microsoft, LinkedIn, and eBay have successfully integrated SONiC, they often encounter challenges such as vendor fragmentation, compatibility issues, and inconsistent support across hardware platforms. Managing diverse hardware can become complex due to each vendor’s unique configurations, while updates can disrupt operations by introducing compatibility problems. Integrating different SONiC versions across vendors may also lead to inconsistencies and unreliable telemetry data, adding further complications. Additionally, varying support levels make troubleshooting difficult, and operators often require additional training to navigate the nuances of each vendor’s implementation. Although these challenges can feel overwhelming, with careful planning, skilled personnel, and the right support, they can be effectively managed for a smooth and successful SONiC deployment.

PalC’s SONiC NetPro Suite directly addresses these challenges by offering comprehensive solutions designed to streamline SONiC adoption. Through the Ready, Deploy, and Sustain packages, PalC ensures end-to-end support, providing expert guidance to fill gaps in in-house expertise and fostering a DevOps-friendly culture for smooth operations. PalC’s strong partnerships with switch and ASIC vendors guarantee seamless integration and full infrastructure support, while its rigorous QA processes ensure performance and reliability. By offering customized support and clear cost assessments, PalC simplifies SONiC deployment, minimizing disruptions and ensuring a scalable, secure, and optimized network infrastructure.

Conclusion

Choosing the right SONiC version is crucial for network optimization. By assessing feature needs, community support, hardware compatibility, and security, organizations can make informed decisions that enhance network performance. SONiC’s evolution from a Microsoft project to a leading open-source NOS underscores its transformative potential and solidifies its role in future cloud networking.


In the ever-evolving landscape of network infrastructure management, PalC Networks introduces a cutting-edge solution that empowers organizations to efficiently control and optimize their network resources. Our Network Management System – PalC NetPulse is designed with a robust set of features, following the FCAPS framework, which covers Fault, Configuration, Accounting, Performance, and Security management. Let’s delve into what sets PalC NetPulse apart and how it caters to the diverse needs of network operators and OEM/NOS vendors.

The Power of PalC NetPulse
Efficiency and Control:

  • L1 Cross Connect for EdgeCore Cassini Boxes: We’ve integrated L1 cross-connect capabilities for EdgeCore Cassini boxes, enhancing your control over network resources.
  • Interface Configuration Management: Our NMS software offers streamlined configuration management for both Ethernet and Optical interfaces, simplifying network setup and maintenance.
  • Inventory and User Management: PalC NetPulse provides robust inventory management, allowing you to keep track of network assets and user accounts.

Comprehensive Monitoring:

  • Peripheral Device Monitoring: Stay on top of your network’s health by monitoring peripheral devices. PalC NetPulse offers a holistic view of your infrastructure.
  • Optical Parameter Monitoring: Get in-depth insights into optical parameters to ensure optimal performance.
  • Topology Viewer and Alarms/Event Notification: Visualize your network’s topology and receive real-time alarms and notifications. Stay informed and respond swiftly.
  • Microservices-Compatible Architecture: Our NMS is built on a microservices-compatible architecture, ensuring flexibility and scalability for your evolving network needs.

What Lies Ahead in Our Roadmap:
We are committed to enhancing our NMS software further to meet the ever-changing demands of network management:

  • Layer-2 Configuration Features: We are adding more layer-2 configuration features to provide you with a more comprehensive toolkit.
  • Advanced Automation: NetPulse will gain enhanced network automation capabilities, simplifying complex tasks.
  • Integration with Cloud-Based Environments: In line with the industry’s shift towards cloud-based solutions, we are working on seamless integration with cloud environments.

Supported Software for Versatility
PalC NetPulse is compatible with various software platforms, making it versatile and adaptable to different deployment scenarios:

  • SONiC for Data-Center Deployments: Perfect for data-center networks, PalC NetPulse integrates seamlessly with SONiC.
  • Goldstone for Packet Optical Deployments: If your network requires packet optical solutions, PalC NetPulse has you covered with Goldstone compatibility.
  • OcNOS for Data Center/CSR Requirements: Data center and carrier-grade service provider networks benefit from our support for OcNOS.

perfSONAR (performance Service-Oriented Network monitoring Architecture) is a network measurement toolkit that provides end-to-end network diagnostics over integrated network paths. It is an on-the-go issue-detection system for network health, based on the various route diagnostic tools in its toolkit. It covers the essential L3 network connectivity testing tools, hosting protocols such as ICMP, OWAMP, and TWAMP, and other methods such as traceroute and iperf. It also provides a uniform interface that allows for the scheduling of measurements, storage of data in uniform formats, and scalable ways to retrieve data and generate visualizations.

perfSONAR allows scheduled tests to run on a regular basis or on demand, as required, from an interactive shell or from a GUI (perfSONAR Web Admin). The test results can then be stored locally or sent to a centralized server, where the results from multiple hosts are aggregated to give a better view of what is happening in the network. The toolkit includes numerous utilities responsible for carrying out the actual network measurements; these form the foundational layer of perfSONAR.

Figure 1 – perfSONAR architecture

The key components in the architecture diagram (Figure 1) above are outlined below:

  • perfsonar tools – This module contains the different utilities responsible for carrying out the actual network measurements. The twamp tool consists of the twping binary, which is used on the client side, and the twampd binary, which runs on the server/responder side. Similarly, there are owamp, powstream, traceroute, tracepath, paris-traceroute, and other diagnostic methods. These tools can be used to measure the following performance metrics:
  • Latency – measuring one-way and two-way delays
  • Packet loss, duplicate packets, jitter
  • Tracing network path
  • Trace path – Identifying the path MTU
  • Throughput – Measuring the bandwidth availability

An overview of supported tools –

  • owamp – A set of tools primarily used for measuring packet loss and one-way delay. It includes the command owping for single short-lived tests and the powstream command for long-running background tests.
  • twamp – A tool primarily used for measuring packet loss and two-way delay. It offers increased accuracy over tools like ping without the clock synchronization requirements of OWAMP. The client tool is named twping and can be run against TWAMP servers. You can use the provided twampd server, or test against the vendor implementations of TWAMP servers that many routers ship with.
  • iperf3 – A rewrite of the classic iperf tool used to measure network throughput and associated metrics.
  • iperf2 – Also known as just iperf, a common tool used to measure network throughput that has been around for many years.
  • nuttcp – Another throughput tool with some useful options not found in other tools.
  • traceroute – The classic packet trace tool used in identifying network paths.
  • tracepath – Another path trace tool that also measures path MTU.
  • paris-traceroute – A packet trace tool that attempts to identify paths in the presence of load balancers.
  • ping – The classic utility for determining reachability, round-trip time (RTT), and basic packet loss.
  • pScheduler – pScheduler is responsible for scheduling tests based on user configuration and the available time slots on the machine. It finds time slots to run the tools while avoiding scheduling conflicts that would negatively impact results, executes the tools and gathers their results, and sends the results to the configured archiver. A minimal invocation sketch follows this list.
  • Archiving – This module contains the esmond component, which stores the measurement information for the tests that run. It is often referred to as the measurement archive (MA), and the term is used interchangeably with esmond. Esmond can be installed on each end device, or on a central server that collects all test results.
  • pSConfig pScheduler agent – This module reads the user configuration from a JSON file and uses it to communicate with pScheduler to schedule the tests.
  • pSConfig MaDDash agent – The agent responsible for reading the same configuration JSON file and creating graphs and charts based on the conducted test results.
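
To make the pScheduler role concrete, here is a minimal sketch of driving an on-demand test from Python. The host names are hypothetical; the pscheduler task throughput --source/--dest form follows the documented CLI, and pScheduler itself finds a slot and runs the underlying tool (iperf3 by default for throughput):

  import subprocess

  # Ask pScheduler for a one-off throughput test between two perfSONAR
  # hosts. pScheduler finds a free time slot, runs the underlying tool,
  # and prints the result when the run completes.
  cmd = [
      "pscheduler", "task", "throughput",
      "--source", "psnode-a.example.net",  # hypothetical sender
      "--dest", "psnode-b.example.net",    # hypothetical receiver
  ]
  result = subprocess.run(cmd, capture_output=True, text=True)
  print(result.stdout)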

The user has different installation options for different roles: centralized servers, clients, and servers. Here, a centralized server refers to a device that registers clients and servers, orchestrates tests, collects results, and acts as the central point with which other devices communicate anything related to test parameters. Clients are devices that conduct the throughput, latency, and RTT tests with another end device. Servers are end devices that act as the receiver or reflector for measuring network health. The tests are conducted from the client to the server, either on demand from the client’s interactive shell or from the central server using a configuration JSON or the pSConfig Web Admin GUI.

perfSONAR can be installed in CentOS- or Ubuntu-based Linux environments. The various offerings are:

  • perfsonar-tools – This package installs only the command-line binaries needed to run on-demand tests, such as iperf, iperf3, and owamp. It is often used for debugging network disruptions rather than for dedicated monitoring of the network.
  • perfsonar-testpoint – This package contains the command-line binaries to run on-demand tests, plus the tools to schedule tests, conduct tests based on central-server instructions in JSON files, and participate in node discovery conducted by central servers. It lacks software to store measurement data in a local archive.
  • perfsonar-core – This package is dedicated to devices that run tests themselves, instruct others to run tests, and store measurement data locally.
  • perfsonar-toolkit – This package includes everything present in perfsonar-core, plus the web admin for conducting tests via a GUI.
  • perfsonar-centralmanagement – This package is installed on central management servers that instruct others to run tests and manage a number of hosts, while also being able to retrieve and archive the test results from those hosts.

Figure 2 below explains the bundle options.

We can run network measurement tests in two different modes.

  • Full mesh mode – where all the perfSONAR hosts in the network are involved in running tests to each other; essentially, every host runs perfSONAR (a minimal configuration template for this mode is sketched after this list)
  • Disjoint mode – where the tests are run from one set of perfSONAR hosts to another set of hosts (these hosts can be other 3rd-party devices as well)
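
For a flavor of the configuration that drives full-mesh testing, below is a minimal pSConfig template sketch, generated from Python for convenience. The host names are hypothetical, and only the commonly documented top-level sections (addresses, groups, tests, schedules, tasks) are shown, so treat it as a starting point rather than a complete template:

  import json

  template = {
      # The perfSONAR hosts taking part in the tests (hypothetical names).
      "addresses": {
          "host-a": {"address": "psnode-a.example.net"},
          "host-b": {"address": "psnode-b.example.net"},
      },
      # A mesh group: every address tests against every other address.
      "groups": {
          "example-mesh": {
              "type": "mesh",
              "addresses": [{"name": "host-a"}, {"name": "host-b"}],
          }
      },
      # The test type and its pScheduler spec.
      "tests": {
          "example-throughput": {
              "type": "throughput",
              "spec": {"source": "{% address[0] %}", "dest": "{% address[1] %}"},
          }
      },
      # Repeat every 4 hours, allowing a 10-minute scheduling slip.
      "schedules": {"every-4h": {"repeat": "PT4H", "slip": "PT10M"}},
      # A task ties a group, a test, and a schedule together.
      "tasks": {
          "mesh-throughput": {
              "group": "example-mesh",
              "test": "example-throughput",
              "schedule": "every-4h",
          }
      },
  }

  with open("psconfig.json", "w") as f:
      json.dump(template, f, indent=2)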

Once the scheduled tests are completed, the results are published to the central management server’s archiver, from which the MaDDash agent picks them up and generates visualizations. Any present issues can be detected by observing the graphs and charts drawn up by the MaDDash agent.

TWAMP and TWAMP-light

twping, the client tool of TWAMP (Two-Way Active Measurement Protocol), is used to generate IP performance metrics between a client and a server device using two-way, or round-trip, tests. TWAMP is a development of the OWAMP (One-Way Active Measurement Protocol) method. TWAMP uses four entities in the topology: the control-client and session-sender are packaged into the client device, and the session-reflector and server are packaged into the server device.

Initially, a TCP connection is established between the client and server on TCP port 862, the port dedicated to TWAMP, after which the server sends a greeting message that contains the security/authentication modes the server supports. Upon receiving the greeting message, the client sends the test conditions, including which IP on the server device the test packets will be sent to. If the server agrees to conduct the described tests, the test begins as soon as the client sends a Start-Sessions message. This process is referred to as the TWAMP-Control process.

As part of a test, the client sends a stream of UDP-based test packets to the server, and the server responds to each received packet with a response UDP-based test packet. When the client receives the response packets from the session-reflector, the information is used to calculate the two-way delay, packet loss, and packet delay variation between the two devices.

The user is provided with an option to bypass the TWAMP-Control process and conduct a TWAMP-light test. In a TWAMP-light test, the only entities present are the session-sender and the session-reflector. The initial TCP connection between the server and client is skipped, and only the stream of UDP-based test packets is sent by the client and reflected back. To run a TWAMP-light test, however, the server IP and the UDP port that will reflect the TWAMP packets need to be added as part of the reflector configuration on the server. The client running a TWAMP-light test must send its UDP packets to this configured IP and UDP port; otherwise the packets are dropped and the test metrics are lost.
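
As a minimal sketch (responder host name hypothetical, options limited to twping’s basic packet-count flag), a round-trip test against a TWAMP responder can be launched from Python like this; for a TWAMP-light responder, the configured UDP port would be supplied along with the address:

  import subprocess

  # Send 100 TWAMP test packets to the responder and report two-way
  # delay, packet loss, and delay variation from the summary output.
  cmd = ["twping", "-c", "100", "twamp-server.example.net"]
  result = subprocess.run(cmd, capture_output=True, text=True)
  print(result.stdout)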

Figures 3 and 4 show the TWAMP and TWAMP-light protocol components.

Figure 3 – TWAMP

Figure 4 – TWAMP-light


NetBox is the leading solution for modeling and documenting modern networks. By combining the traditional disciplines of IP address management (IPAM) and datacenter infrastructure management (DCIM) with powerful APIs and extensions, NetBox provides the ideal “source of truth” to power network automation.

Built for Networks

Unlike general-purpose CMDBs, NetBox has curated a data model which caters specifically to the needs of network engineers and operators. It delivers a wide assortment of object types carefully crafted to best serve the needs of infrastructure design and documentation. These cover all facets of network technology, from IP address management to cabling to overlays and more:

  • Hierarchical regions, sites, and locations
  • Racks, devices, and device components
  • Cables and wireless connections
  • Power distribution tracking
  • Data circuits and providers
  • Virtual machines and clusters
  • IP prefixes, ranges, and addresses
  • VRFs and route targets
  • FHRP groups (VRRP, HSRP, etc.)
  • AS numbers
  • VLANs and scoped VLAN groups
  • L2VPN overlays
  • Tenancy assignments
  • Contact management

Customizable & Extensible

In addition to its expansive and robust data model, NetBox offers myriad mechanisms through which it can be customized and extended. Its powerful plugins architecture enables users to extend the application to meet their needs with minimal development effort.

  • Custom fields
  • Custom model validation
  • Export templates
  • Webhooks
  • Plugins
  • REST & GraphQL APIs

Always Open

Because NetBox is an open-source application licensed under Apache 2, its entire code base is completely accessible to the end user, and there’s never a risk of vendor lock-in. Additionally, NetBox development is an entirely public, community-driven process to which everyone can provide input.

NetBox Development

GitHub repository: https://github.com/netbox-community/netbox

Powered by Python

NetBox runs as a web application atop the Django Python framework with a PostgreSQL database.

* The complete documentation for NetBox can be found at docs.netbox.dev. A public demo instance is available at https://demo.netbox.dev.

Adding sites to NetBox:

a) Using GUI

b) Getting details of sites using the REST API:
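
A minimal sketch using Python’s requests library against NetBox’s documented /api/dcim/sites/ endpoint; the URL and API token are placeholders:

  import requests

  NETBOX = "https://netbox.example.com"  # placeholder NetBox URL
  HEADERS = {
      "Authorization": "Token 0123456789abcdef",  # placeholder API token
      "Accept": "application/json",
  }

  # List all sites. NetBox wraps list responses in a paginated envelope:
  # {"count": ..., "next": ..., "previous": ..., "results": [...]}
  resp = requests.get(f"{NETBOX}/api/dcim/sites/", headers=HEADERS)
  resp.raise_for_status()
  for site in resp.json()["results"]:
      print(site["id"], site["name"], site["slug"])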

Adding an interface to a device:
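
A minimal sketch, reusing NETBOX and HEADERS from the previous snippet; the device ID and interface name are hypothetical, and "1000base-t" is one of NetBox’s standard interface type slugs:

  # Create a new interface on the device with ID 1.
  payload = {
      "device": 1,            # hypothetical device ID
      "name": "Ethernet1",    # hypothetical interface name
      "type": "1000base-t",   # a standard NetBox interface type slug
      "enabled": True,
  }
  resp = requests.post(f"{NETBOX}/api/dcim/interfaces/",
                       headers=HEADERS, json=payload)
  resp.raise_for_status()
  interface_id = resp.json()["id"]
  print("created interface", interface_id)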

Getting interface details by ID:
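
Continuing the same session, a single interface can be fetched by its ID; detail endpoints return the object directly rather than the paginated envelope:

  # Retrieve one interface by ID.
  resp = requests.get(f"{NETBOX}/api/dcim/interfaces/{interface_id}/",
                      headers=HEADERS)
  resp.raise_for_status()
  iface = resp.json()
  print(iface["name"], iface["type"]["value"], iface["device"]["name"])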

Assigning an IP address to a device interface:
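
A sketch of the same step via the API, continuing the session above. On recent NetBox releases the binding is expressed with the assigned_object_type/assigned_object_id fields; the address below is taken from the documentation prefix:

  # Create an IP address and bind it to the interface in one call.
  payload = {
      "address": "192.0.2.10/24",                # example address
      "assigned_object_type": "dcim.interface",  # bind to an interface
      "assigned_object_id": interface_id,
  }
  resp = requests.post(f"{NETBOX}/api/ipam/ip-addresses/",
                       headers=HEADERS, json=payload)
  resp.raise_for_status()
  ip_id = resp.json()["id"]
  print("created IP", ip_id)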

Getting an IP address by ID:
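
And the matching read-back, again continuing the session above:

  # Retrieve the IP address object by its ID.
  resp = requests.get(f"{NETBOX}/api/ipam/ip-addresses/{ip_id}/",
                      headers=HEADERS)
  resp.raise_for_status()
  ip = resp.json()
  print(ip["address"], ip["status"]["value"])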



This is the second article in my “Edge Computing” series. In this article I’m going to provide an overview of Project “Akraino”, an open-source Edge Computing platform initiative from LF Edge (Linux Foundation Edge). For more details about edge computing implementations, read my introduction article, “Introduction to Edge Computing & Open Source Edge Platforms”, available at the link below.

LF Edge is an umbrella organization that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. By bringing
together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.

Akraino Edge Stack

Akraino is a Linux Foundation project, initiated by AT&T and Intel, that intends to develop a fully integrated edge infrastructure solution and is completely focused on Edge Computing. Akraino is a set of open infrastructures (such as ONAP, OpenStack, Airship, Kubernetes, and Calico) and application blueprints for the Edge, spanning a broad variety of use cases, including 5G, AI, Edge IaaS/PaaS, and IoT, for both provider and enterprise edge domains. The Akraino edge stack targets all three types of edge computing implementation: MEC, Fog Computing / IoT, and Cloudlet.

Since edge computing solutions require large-scale deployment (typically covering 1,000-plus locations), the key requirement for the Akraino project is to keep costs low and to support large-scale deployments via automation. The goal of Akraino is to supply a fully integrated solution that supports zero-touch provisioning and zero-touch lifecycle management of the integrated stack.

Akraino targets different use cases and implementations, and the community manages these by defining Blueprints for each deployment. A Blueprint is the declarative configuration of the entire stack, i.e., the infrastructure/cloud platform, the APIs for managing it, and the applications. Here, declarative configuration management refers to a set of tools that allow users/operators to declare the desired state of a system (be it a physical machine, an EC2 VPC, a cloud account, or anything else) and then allow the configuration management system to automatically bring the system to the declared state.

Every blueprint consists of the following main components.

  • Declarative Configuration, which defines all the components used within that reference architecture, such as the hardware, the software, the tools to manage the entire stack, and the point of delivery, i.e., the method used to deploy to a site
  • The required hardware and software to realize the deployment
  • The CI/CD pipeline for continuous integration and delivery
  • POD (Point of Delivery), which defines the BOM of the hardware components needed to deploy a particular deployment at different scale requirements

Blueprints have been created by the Akraino community and focus exclusively on the edge in all of its different forms. What unites all of these blueprints is that they have been tested by the community and are ready for adoption as-is, or can be used as a starting point for customizing a new edge blueprint.

Akraino supports VM, container, and bare-metal workloads, depending on the application deployment. To achieve this, the Akraino community works with multiple upstream open source communities, such as Airship, OpenStack, ONAP, etc., to deliver a fully integrated stack. The link below provides the list of blueprints approved by Akraino.

https://wiki.akraino.org/pages/viewpage.action?pageId=1147243

In this article, I’m going to cover one of the blueprints in detail. Since this series talks about 5G deployment, the blueprint I’m going to discuss comes under the “5G MEC System Blueprint Family” and is called “Enterprise Applications on Lightweight 5G Telco Edge (EALTEdge)”.

EALTEdge Introduction

The main objective of EALTEdge is to provide a platform that various telecom operators can leverage to offer value-added services to end users, with the intent of building a complete ecosystem for an enterprise-grade 5G Telco Edge platform. EALTEdge targets the Telco Edge and provides a lightweight MEP solution. Some time back I wrote a detailed article about the collaboration between 5G telco providers and cloud providers, linked below. EALTEdge is one such implementation by 5G providers to enable enterprises to run their applications at the 5G MEC edge.

This lightweight MEC platform enables real-time enterprise applications on the 5G telco edge. The following are some of the use cases of an EALTEdge deployment:

  • Optimized Streaming Media
  • Machine Vision in Campus Networks
  • Mobile Office

Architecture

The diagram below represents the high-level architecture of the EALTEdge platform.

It consists of MEC Management (MECM) and MEP platform components. The OCD (One-Click Deployment) node helps in deploying the MEP and MECM components and consists of a set of playbooks to deploy the platform and infrastructure.

MECM Components:

  • Application LCM: Handles the life cycle management of applications.
  • Catalog: Provides application package management.
  • Monitoring: Monitoring and visualization of platform and applications.
  • Common DB: Persistent database.

MEC Host Components:

  • MEP Agent: Client Libraries for application developer for service registry and discovery.
  • API Gateway: Single entry point for MEP Services.
  • Certificate Management: Cloud Native Certificate Creation and Management.
  • Secret Management: Cloud Native Secret Generation and Management.
  • Event BUS: Message BUS Service offered to applications.
  • CNI: Container Network.
  • Service Registry: The service registry provides visibility of the services available on the MEC
    server.
  • Monitoring: Monitoring and Visualization of platform and applications.
  • Common DB: Persistent Database.

The diagram below represents the software used in the different layers of the EALTEdge platform.

Deployment Architecture

Typically, the EALTEdge platform is deployed across three different nodes (including the OCD node); the software used in each of these nodes is shown in the diagram above. The Deployment Architecture consists of the following nodes:

  • One-Click Deployment Node
  • MECM Node
  • MEC Hosts Node


A few months back I wrote an article about the synergy between cloud providers and 5G providers, in which I briefly covered edge computing. Refer to the link below to read that article.

In this article I’m going to talk in more detail about Edge Computing and the various open source alternatives for an Edge Computing platform. This is going to be a series of articles, and this one covers the introduction. Before getting into the edge computing platforms, let’s look at what Edge Computing is and the various implementations of it.

Edge Computing: Edge computing refers to running applications closer to the end user by deploying compute, storage, and network functions relatively close to end users and/or IoT endpoints. Edge computing provides a highly distributed computing environment that can be used to deploy applications and services as well as to store and process content in close proximity. Based on the type of edge device, the proximity and implementation of edge computing vary.

For example, if the edge device is a mobile phone, then the proximity of edge computing in the 5G era is the network operator’s data centers at the edge of the 5G network, using the MEC implementation (service provider edge). On the other hand, if the edge devices are IoT nodes inside a manufacturing plant, then the proximity of edge computing is on-site, within the production facility, using the Fog Computing implementation (user edge). The diagram below represents a high-level view of the user edge and the service provider edge.

So the implementation of Edge Computing differs based on the end nodes, and it has been defined as different implementations:

  • Mobile Edge Computing / Multi-Access Edge Computing (MEC)
  • Fog Computing
  • Cloudlet

Mobile Edge Computing / Multi-Access Edge Computing (MEC)

MEC brings the computational and storage capacities to the edge of the network within the 5G Radio Access Network. The MEC nodes or servers are usually co-located with the Radio Network Controller or a macro base-station to reduce latency. MEC provides the ecosystem wherein the operators can open their Radio Access Network (RAN) edge to authorized third-parties, allowing them to flexibly and rapidly deploy innovative applications and services towards mobile subscribers, enterprises and vertical segments.

MEC is an initiative from the Industry Specification Group (ISG) within ETSI. The purpose is to create a standardized, open environment that will allow the efficient and seamless integration of applications from different vendors, service providers, and third parties across multi-vendor Multi-access Edge Computing platforms. The full specification of MEC is available at https://www.etsi.org/committee/1425-mec

Some of the use cases of MEC are:

  • Video analytics
  • OTT (Over the Top services)
  • Location services
  • Internet-of-Things (IoT)
  • Augmented reality
  • Optimized local content distribution and data caching

Fog Computing (FC)

Fog computing is a term created by Cisco that refers to extending cloud computing to the edge of an enterprise’s network. Fog computing is a decentralized computing infrastructure placed at any point between the end devices and the cloud. The nodes are heterogeneous in nature and can thus be based on different kinds of elements, including but not limited to routers, switches, access points, IoT gateways, and set-top boxes. Since cloud computing is not viable for many IoT applications, fog computing is often used to address the needs of IoT and Industrial IoT (IIoT). Fog computing reduces the bandwidth needed and cuts the back-and-forth communication between sensors/IoT nodes and the cloud, which can otherwise hurt performance badly. Some of the use cases of FC are as follows:

  • Transportation / Logistics
  • Utilities / Energy / Manufacturing
  • Smart Cities / Smart Buildings
  • Retail / Enterprise / Hospitality
  • Service Providers / Data Centers
  • Oil / Gas / Mining
  • Healthcare
  • Agriculture
  • Government / Military
  • Residential / Consumer
  • Wearables / Ubiquitous Computing

Cloudlet

Cloudlets are similar to the public cloud: where a public cloud provider offers the end user compute, network, and storage in the public cloud, a cloudlet offers them at the edge, closer to the user’s location. A cloudlet is basically a small-scale cloud; however, unlike cloud computing, which provides effectively unlimited resources, a cloudlet can only provide limited resources. The services provided by cloudlets are over one-hop access with high bandwidth, thus offering low latency for applications. A cloudlet provides better security and privacy, since users are directly connected to it. Cloudlets are often compared with, or confused with, fog computing. Typically, fog computing is associated with IoT/IIoT use cases, whereas cloudlets are associated with use cases that require traditional cloud offerings at the edge.

One of the important aspects of cloudlets is handoff across clouds/cloudlets. When a mobile device user moves away from the cloudlet they are currently using, the services on the first cloudlet need to be offloaded to the second cloudlet while maintaining end-to-end network quality. This resembles VM migration in cloud computing but differs considerably in that the handoff happens over a Wide Area Network (WAN).

So far we have seen the various implementations of edge computing. Now let’s look at the various platforms, especially the open source platforms, available in the market to realize the edge platform.

Below is a list of platforms available in the open source community for the MEC segment:

  • Akraino Edge Stack (LF Edge)
  • CORD (Linux Foundation)
  • Airship (OpenStack Foundation)
  • StarlingX (OpenStack Foundation)

The above list is not exhaustive.

There are many other projects for the Fog / IoT use case, such as:

  • EdgeX Foundry (LF Edge)
  • KubeEdge
  • Eclipse IoFog (Eclipse)
  • Baetyl (LF Edge)
  • Eclipse Kura (Eclipse)
  • Fledge (LF Edge)
  • Edge Virtualization Engine (LF Edge)
  • Home Edge etc. (LF Edge)

Below are some of the projects in the cloudlet space:

  • OpenStack++
  • Elijah Cloudlet Project.

This is going to be a series of articles; in the next article, we will look at the MEC edge stack in detail with Akraino Edge Stack and CORD.