OpenStack Kilo, the 11th release of the open source project, was released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.
The Kilo cycle brings two major efforts which are meant to better expand and scale the Neutron development community: core plugin decomposition and advanced services split. These changes should not directly impact OpenStack users but are expected to reduce code footprint, improve feature velocity, and ultimately bring faster innovation speed. Let’s take a look at each individually:
Neutron, by design, has a pluggable architecture which allows a custom backend implementation of the Networking API. The plugin is a core piece of the deployment and acts as the “glue” between the logical API and the actual implementation. As the project evolved, more and more plugins were introduced, coming from open-source projects and communities as well as from various vendors in the networking industry (like Cisco, Nuage, Midokura and others). At the beginning of the Kilo cycle, Neutron had dozens of plugins and drivers, spanning core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 service plugins for FWaaS, LBaaS and VPNaaS – the majority of them included directly within the Neutron project repository. The amount of code to review across those drivers and plugins grew to the point where it no longer scaled. The expectation that core Neutron reviewers review code which they had no knowledge of, or could not test due to lack of a proper hardware or software setup, was not realistic. This also caused some frustration among the vendors themselves, who sometimes failed to get their plugin code merged on time.
The first effort to improve the situation was to decompose the core Neutron plugins and ML2 drivers from the Neutron repository. The idea is that plugins and drivers leave only a small “shim” (or proxy) in the Neutron tree, and move all of their backend logic out to a separate repository. The benefit is clear: Neutron reviewers can now focus on reviewing core Neutron code, while the vendors and plugin maintainers can iterate at their own pace. While vendors were encouraged to immediately start the decomposition of their plugins, it was not required that all plugins complete the decomposition in the Kilo timeframe, mainly to allow the vendors enough time to complete the process.
The process is documented in detail, and the decomposition progress of the various plugins is tracked there as well.
While the first effort is focused solely on core Neutron plugins and ML2 drivers, a parallel effort was put in place to address similar concerns with the L4-L7 advanced services (FWaaS, LBaaS, and VPNaaS). Similar to the core plugins, advanced services previously stored their code in the main Neutron repository, resulting in a lack of focus and review attention from Neutron core reviewers. Starting with Kilo, these services are now split into their own repositories; Neutron now includes four different repositories: one for basic L2/L3 networking, and one each for FWaaS, LBaaS, and VPNaaS. As the number of service plugins is still relatively low, vendor and plugin code will remain in each of the service repositories at this point.
It is important to note here that this change should not affect OpenStack users. Even with the services now split, there is no change to the API or CLI interfaces, and they all still use the same Neutron client as before. That said, we do see this split laying the foundation for deeper changes in the future, with each of the services having the potential to become independent from Neutron and offer its own REST endpoint, configuration file, and CLI/API client. This will enable teams focused exclusively on one or more advanced services to make a bigger impact.
Security-groups are one of the most popular Neutron features, allowing tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a Neutron port, effectively creating a firewall close to the virtual machines (VMs).
As a security measure, Neutron’s security group implementation always applies “bonus” default rules automatically to block spoofing attacks, preventing a VM from sending or receiving traffic with a MAC or IP address which does not belong to its Neutron port. While most users find security-groups and the default anti-spoofing rules helpful and necessary to protect their VMs, some asked for the option to turn them off for specific ports. This is mainly required in cases where network functions are running within the VMs, a common use case for network functions virtualization (NFV).
Think for example of a router application deployed within an OpenStack VM; it receives packets that are not necessarily addressed to it and transmits (routes) packets that are not necessarily generated from one of its ports. With security-groups applied, it will not be able to perform these tasks.
Let’s examine the following topology as an example:
Host 1, with the IPv4 address of 192.168.0.1, wants to reach Host 2, which is configured with 172.16.0.1. The two hosts are connected via two VMs running a router application, which are configured to route between the networks and act as default gateways for the hosts. The MAC addresses of the relevant ports are shown as well. Now, let’s examine the traffic flow when Host 1 is trying to send traffic to Host 2:
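To illustrate why routed traffic breaks here, the following is a highly simplified sketch (not actual Neutron code; port data is hypothetical) of the anti-spoofing check that Neutron’s iptables rules effectively enforce per port:

```python
# Simplified sketch (NOT actual Neutron code) of the anti-spoofing check:
# egress traffic is dropped unless its source MAC and IP match the
# addresses that belong to the Neutron port.

def passes_antispoofing(port, src_mac, src_ip):
    """Return True if a frame with this source MAC/IP may leave the port."""
    return src_mac == port["mac_address"] and src_ip in port["ip_addresses"]

# Hypothetical router-VM port on the 192.168.0.0/24 side of the topology.
router_port = {
    "mac_address": "fa:16:3e:00:00:01",
    "ip_addresses": ["192.168.0.254"],
}

# Traffic the router VM generates itself is allowed:
print(passes_antispoofing(router_port, "fa:16:3e:00:00:01", "192.168.0.254"))  # True

# A packet routed on behalf of Host 1 keeps Host 1's source IP
# (192.168.0.1), so the default rules drop it:
print(passes_antispoofing(router_port, "fa:16:3e:00:00:01", "192.168.0.1"))  # False
```

The real enforcement happens in iptables rules programmed by the L2 agent, but the filtering decision is the same: forwarded packets carry someone else’s source address and are dropped.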
Prior to Kilo, you could either disable or enable security-groups for the entire cloud. Starting with the Kilo release, it is now possible to enable or disable the security-group feature per port using a new port-security-enabled attribute, so that the tenant administrator can decide exactly if and where a firewall is needed in the topology. This new attribute is supported with the Open vSwitch agent (ovs-agent) in conjunction with the IptablesFirewallDriver.
Going back to the previous topology, it is now possible to turn off security-groups on the VMs being used as routers, while keeping them active on the Host VM ports, so that routing can take place properly:
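As a rough sketch with the neutron CLI ($ROUTER_PORT_ID is a placeholder for the actual port UUID), this involves clearing the port’s security groups and then disabling port security:

```shell
# Remove any security groups from the router VM's port, then disable
# port security so the default anti-spoofing rules are no longer applied.
neutron port-update $ROUTER_PORT_ID --no-security-groups
neutron port-update $ROUTER_PORT_ID --port-security-enabled=False
```

These commands require a running cloud with the port-security extension enabled; the exact flags are worth verifying against your release’s CLI reference.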
Some additional information on this feature, along with a configuration example, can be found in this post by Red Hat’s Terry Wilson.
IPv6 has been a key area of focus in Neutron lately, with major features introduced during the Juno release to allow address assignment for tenant networks using SLAAC and DHCPv6, as well as support for provider networks with Router Advertisement (RA) messages generated by an external router. While the IPv6 code base continues to mature, the Kilo release brings several further enhancements.
Load-Balancing-as-a-Service (LBaaS) is one of the advanced services of Neutron. It allows tenants to create load-balancers on demand, backed by open-source or proprietary service plugins that offer different load balancing technologies. The open source solution available with Red Hat Enterprise Linux OpenStack Platform is based on the HAProxy service plugin.
The first version of the LBaaS API included basic load balancing capabilities and established a simple, straightforward flow for setting up a load-balancer service.
This was useful for getting initial implementations and deployments of LBaaS, but it wasn’t intended to be an enterprise-class alternative to a full-blown load-balancer. The second version of the API adds capabilities to offer a more robust load-balancing solution. Accomplishing this required a redesign of the LBaaS architecture, along with a new version of the API.
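The basic v1 flow looked roughly like this with the neutron CLI (the pool/VIP names, member address, and $SUBNET_ID are placeholders):

```shell
# Create a pool, add a backend member to it, then expose it through a VIP.
neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id $SUBNET_ID
neutron lb-member-create --address 10.0.0.10 --protocol-port 80 web-pool
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
    --subnet-id $SUBNET_ID web-pool
```

This sketch assumes the HAProxy service plugin is enabled; other backends follow the same API flow.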
DVR, first introduced in the Juno release, allows the deployment of Neutron routers across the Nova Compute nodes, so that each Compute node handles the routing for its locally hosted VMs. This is expected to result in better performance and scalability of the virtual routers, and is seen as an important milestone towards a more efficient L3 traffic flow in OpenStack.
As a reminder, the default OpenStack architecture with Neutron involves a dedicated cluster of Network nodes that handles most of the network services in the cloud, including DHCP, L3 routing, and NAT. That means that traffic from the Compute nodes must reach the Network nodes to get routed properly. With DVR, the Compute node can itself handle inter-subnet (east-west) routing as well as NAT for floating IPs. DVR still relies on dedicated Network nodes for the default SNAT service, which allows basic outgoing connectivity for VMs.
Prior to Kilo, distributed routers only supported overlay tunnel networks (i.e., GRE, VXLAN) for tenant separation. This hindered the adoption of the feature, as many clouds opted to use 802.1Q VLAN tenant networks. With Kilo, this configuration is now possible and distributed routers can serve VLAN networks as well as tunnel networks.
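As an illustrative sketch, enabling DVR involves settings along these lines (file locations and values vary between deployments; verify against your release’s networking guide):

```ini
# neutron.conf on the controller: create new routers as distributed
[DEFAULT]
router_distributed = True

# l3_agent.ini on each Compute node: handle east-west routing and
# floating-IP NAT locally
[DEFAULT]
agent_mode = dvr

# l3_agent.ini on the Network node: additionally provide the
# centralized default SNAT service
[DEFAULT]
agent_mode = dvr_snat
```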
To get more information about DVR, I strongly recommend reading the great three-part post series from Red Hat’s Assaf Muller.
One of the major features introduced in the Juno release was the L3 High Availability (L3 HA) solution, which allowed an active/standby setup of the neutron-l3-agent across different Network nodes. The solution, based on keepalived, utilizes the Virtual Router Redundancy Protocol (VRRP) internally for forming groups of highly available virtual routers. By design, for each group there is one active router (which forwards traffic) and one or more standby routers (which are waiting to take control in case of a failure of the active one). The scheduling of master/backup routers is done randomly across the different Network nodes, so that the load (i.e., forwarding router instances) is spread among all nodes.
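A rough sketch of the relevant neutron.conf options on the controller (the numeric values are illustrative defaults, not recommendations):

```ini
# neutron.conf: create new routers as HA (VRRP) routers
[DEFAULT]
l3_ha = True
# Number of L3 agents each HA router is scheduled to
# (one active instance, the rest standby)
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2
```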
One of the limitations of the Juno-based solution was that Neutron had no way to report the HA router state, which made troubleshooting and maintenance harder. With Kilo, operators may now run the neutron l3-agent-list-hosting-router <router_id> command and see where the active instance is currently hosted.
Floating IPs are public IPv4 addresses that can be dynamically added to a VM instance on the fly, so that the VM can be reachable from external systems, usually the Internet. Originally, when assigning a floating IP for a specific VM, the IP would be randomly picked from a pool and there was no guarantee that a VM would consistently receive the same IP address. Starting with Kilo, the user can now choose a specific floating IP address to be assigned to a given VM by utilizing a new ‘floating_ip_address’ API attribute.
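With the neutron CLI this looks roughly as follows (the external network name, the address, and the IDs are placeholders):

```shell
# Ask for a specific address from the external network's allocation pool,
# then associate it with the VM's Neutron port.
neutron floatingip-create --floating-ip-address 203.0.113.10 public
neutron floatingip-associate $FLOATINGIP_ID $VM_PORT_ID
```

The requested address must be free and fall within the external network’s allocation range, otherwise the call fails.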
Kilo also allows specifying the desired MTU for a network, and advertising that MTU to guest operating systems when it is set. This new capability avoids MTU mismatches, which can lead to undesirable results such as connectivity issues, packet drops, and degraded network performance.
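A rough configuration sketch (option names as introduced around the Kilo timeframe; verify against your release’s configuration reference before relying on them):

```ini
# neutron.conf: advertise each network's MTU to guests
# (via DHCP for IPv4 and Router Advertisements for IPv6)
[DEFAULT]
advertise_mtu = True

# ml2_conf.ini: the physical-path MTU used to derive per-network MTUs
# (overlay networks subtract their encapsulation overhead from this value)
[ml2]
path_mtu = 1500
```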
The OpenStack Networking community is actively working to make Neutron a more stable and mature codebase. Among the different performance and stability enhancements introduced in Kilo, I wanted to highlight two: the switch of the ML2/Open vSwitch plugin to a programmatic interface instead of shelling out to Open vSwitch CLI commands, and more comprehensive testing across the code base.
While these two changes are not introducing any new feature functionality to users per se, they do represent the continuous journey of improving Neutron’s code base, especially the core L2 and L3 components which are critical to all workloads.
Liberty, the next release of OpenStack, is already under development. We are busy planning and finalizing the sessions for the design summit in Vancouver, where new feature and enhancement proposals are scheduled to be discussed. You can view the approved Neutron specifications to track which proposals are accepted into the project and expected to land in Liberty.
If you want to try out OpenStack, or to check out some of the above enhancements for yourself, you are welcome to visit our community site, where you can connect with other users and find packages of the most up-to-date OpenStack releases available for download.
If you are looking for enterprise-level support and our partner certification program, Red Hat also offers Red Hat Enterprise Linux OpenStack Platform.
The Kilo logo is a trademark/service mark of the OpenStack Foundation.