Node internal networking
This chapter explains how a user can connect workloads (VMs and Docker containers) to services and network ports of a node. To do this, it explains the internal networking concepts in detail. Most workloads need to be connected to a network, as networking is the main form of communication for workloads. They either connect to external servers as clients, or they are servers themselves, in which case they need to be made visible to their communication partners. The Nerve networking system enables both use cases.
The image below shows an example node consisting of the host/domain-0 and the real-time VM running the CODESYS runtime (labeled rtvm). To further illustrate the networking concepts, one Virtual Machine workload and two Docker workloads are also deployed. The virtual machine is depicted outside of the host, and the Docker containers are depicted in the Docker network inside of the host. For the sake of the example, however, the workloads are not yet connected. This is done in the examples further below.
The physical ports P1 to P5 and I/O of the Nerve Device (the MFN 100 in this case) are displayed on the left, touching the large dark rectangle that represents the host. The light blue interfaces connected to them inside the host are Linux bridged interfaces, displayed with their names on the host. Highlighted by a dark blue dashed frame is the libvirt network with NAT interfaces on the left and isolated interfaces on the right. Slightly above them is the rtvm interface for communication with the RTVM. The system is set up so that the interfaces connected to the physical ports can be reached by connecting through the physical ports.
Highlighted by an orange dashed frame is the Docker network, including the default Docker network (the orange interface labeled bridge) as well as Docker network equivalents of the Linux bridged interfaces, plus one additional isolated network. For easier representation, the physical ports P1 to P5 are duplicated to the right of the Docker network, again touching the host. This shows that Docker network interfaces can also be reached directly by connecting to the physical ports, while also making clear that the libvirt network and the Docker network are separate from each other.
All interfaces colored in purple are related to the RTVM. Interfaces labeled eth are symbolic representations of interfaces that are used by virtual machines and Docker containers for communication with the Nerve Blue system. The interfaces actually used depend on the Docker container or virtual machine.
Connections are displayed in three ways. Blue lines are connections that are predefined by the system. Blue arrows between bridged interfaces and the libvirt network indicate NAT. Green lines, used in the examples further below, are connections that can be established by the user.
As mentioned above, the image below represents the MFN 100. The physical ports and the connections to their respective interfaces differ between Nerve Devices, so refer to the device guide for information on the specific device.
See the table below for more information on the interfaces, their usage and their IP ranges.
Legend | |
---|---|
Physical Ports | The physical ports are device dependent. They are included here to clarify the image above. The MFN 100 is used as an example. Refer to the device guide for information on the specific hardware model of the Nerve Device.
Bridged Interfaces | |
NAT Interfaces | If a deployed virtual machine uses one of the predefined NAT interfaces, the IP address of the respective interface is assigned by a DHCP server with a subnet mask of 255.255.255.0. The DHCP pool contains the upper half of the respective address space, e.g. 192.168.122.128 to 192.168.122.254.
Isolated Interfaces | Isolated interfaces can be used to allow communication between two virtual machines. These networks cannot communicate outside of the system.
Docker network | The Docker network includes the default Docker network bridge and the Docker network equivalents of the Linux bridged interfaces, plus one additional isolated network.
Other interfaces | Interfaces that can be used like a NAT network but without port forwarding. These interfaces do not communicate outside of the system.
Connections | |
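The DHCP pool rule for NAT interfaces (the upper half of the address space) can be sketched with Python's standard `ipaddress` module. The 192.168.122.0/24 network used here follows from the example pool and the 255.255.255.0 subnet mask given in the legend:

```python
import ipaddress

# Default NAT network from the legend: 192.168.122.0, mask 255.255.255.0
net = ipaddress.ip_network("192.168.122.0/24")

# The DHCP pool contains the upper half of the address space,
# excluding the broadcast address (.255).
upper_half_start = net.network_address + net.num_addresses // 2
pool = [host for host in net.hosts() if host >= upper_half_start]

print(pool[0], "-", pool[-1])  # 192.168.122.128 - 192.168.122.254
```

The same calculation applies to the other predefined NAT interfaces, each with its own /24 address space.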
The following sections are conceptual explanations. Workloads are attached to internal networks during the provisioning process. Refer to the provisioning chapters (Virtual Machine workloads and Docker workloads) in the user guide for instructions on how to provision workloads.
Attaching virtual machines to a network
Virtual machine networking is comparable to installing a network card in the virtual machine and attaching it to the network with the network name given in the network drawing. For this example, two "network cards" are installed in a user-deployed virtual machine. They are located in the User VM and are labeled eth0 and eth1 in this example. Green lines indicate user-established connections.
There are two connections established here: eth0 of the User VM is connected to the mgmt bridged interface for communication inside of the system. eth1 of the User VM is connected to the default NAT interface for an internet connection protected by NAT on P2 of the Nerve Device. Both interfaces have IP addresses in the designated ranges: 172.20.2.15 for eth0 was manually configured in the virtual machine, and 192.168.122.16 for eth1 was assigned by the DHCP server.
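These address assignments can be verified with Python's standard `ipaddress` module. The /24 prefixes below are assumptions derived from the 255.255.255.0 subnet mask given in the legend:

```python
import ipaddress

# Networks from the example; the /24 prefixes are assumptions based
# on the 255.255.255.0 subnet mask given in the legend above.
mgmt = ipaddress.ip_network("172.20.2.0/24")            # mgmt bridged interface
default_nat = ipaddress.ip_network("192.168.122.0/24")  # default NAT interface

eth0 = ipaddress.ip_address("172.20.2.15")     # manually configured
eth1 = ipaddress.ip_address("192.168.122.16")  # assigned by the DHCP server

print(eth0 in mgmt)         # True
print(eth1 in default_nat)  # True
```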
Settings example
To achieve the functionality above, configure the interfaces of the Virtual Machine workload the following way during the provisioning process in the Management System:
Communication of a virtual machine with the RTVM
A virtual machine can communicate with the RTVM by connecting an interface to the bridged interface rtvm. In the example below this is done with the interface eth0 of the User VM, which has the manually configured IP address 172.20.3.15.
Settings example
To achieve the functionality above, configure the interfaces of the Virtual Machine workload the following way during the provisioning process in the Management System:
Communication of two virtual machines through isolated networks
Nerve Blue offers isolated network interfaces for communication of workloads inside of the system. These interfaces do not communicate outside of the system. They can be used to establish communication between two virtual machines.
Both virtual machines have a "network card" installed. User VM 1 is connected to the isolated1 interface through eth0, and User VM 2 is connected through its interface eth0 to the same network interface, isolated1. Each interface has been assigned an IP address by a DHCP server in the designated range: 192.168.130.15 for eth0 of User VM 1 and 192.168.130.25 for eth0 of User VM 2.
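Communication over an isolated network is plain IP networking between the two guests. As a minimal sketch, the following Python pair shows User VM 2 listening on a TCP port and User VM 1 connecting to it and receiving an echo. The snippet binds to localhost with an OS-assigned port so it can run on a single machine; inside the VMs, the isolated-network addresses 192.168.130.25 and 192.168.130.15 from the example would be used instead.

```python
import socket
import threading

ready = threading.Event()
addr = {}

def user_vm_2():
    # Server side (User VM 2): accept one connection and echo the message.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))  # inside the VM: ("192.168.130.25", <port>)
        srv.listen(1)
        addr["server"] = srv.getsockname()
        ready.set()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

threading.Thread(target=user_vm_2, daemon=True).start()
ready.wait()

# Client side (User VM 1): connect to User VM 2 and send a message.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(addr["server"])
    cli.sendall(b"hello from User VM 1")
    reply = cli.recv(1024)

print(reply)  # b'hello from User VM 1'
```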
Settings example
To achieve the functionality above, configure the interfaces of both Virtual Machine workloads the following way during the provisioning process in the Management System:
Communication of a Docker container outside of the system
For Docker containers the situation is different. Docker containers can be attached to the Docker default network or the respective Docker network interfaces to access other parts of the system or communicate outside of the system. The Docker default network is called bridge and has the IP address 172.17.0.1 assigned. This interface is available on all physical ports (here P1 to P5). For this example, the Docker container is connected to the extern1 interface. In order to make a server accessible to other workloads, map the port and protocol of the Docker container to the outside by specifying the network name, here extern1, in the port mapping section during workload provisioning.
The Docker container is connected to the extern1 interface in the Docker network and is available at P3 outside of the system at an IP address in the range from 172.18.8.2 to 172.18.8.254.
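The usable address range on extern1 can be derived with Python's standard `ipaddress` module. Two assumptions are made here: the /24 prefix follows from the stated range, and 172.18.8.1 is taken to belong to the interface itself, analogous to the default bridge at 172.17.0.1:

```python
import ipaddress

# extern1 network; the /24 prefix is an assumption based on the
# address range 172.18.8.2 to 172.18.8.254 stated above.
extern1 = ipaddress.ip_network("172.18.8.0/24")

# Assume the first usable address (172.18.8.1) belongs to the
# interface itself, as with the default bridge at 172.17.0.1.
hosts = list(extern1.hosts())
container_range = hosts[1:]

print(container_range[0], "-", container_range[-1])  # 172.18.8.2 - 172.18.8.254
```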
Note
The Docker default network bridge is always defined as an interface. This means that all deployed Docker workloads can be reached through the bridge interface.
Settings example
To achieve the functionality above, configure the Docker network name the following way during the provisioning process in the Management System:
List of reserved TCP/UDP ports
In general, Nerve Blue reserves the port range 47200 to 47399 on both TCP and UDP for internal usage. The following list states ports that are reserved in version 2.1.
Port | Interface | Protocol | Reserved for |
---|---|---|---|
22 | none | TCP | SSH daemon |
3333 | 172.20.2.1 | TCP | Local UI |
47200 | 127.0.0.1 | TCP/UDP | System Log |
47201 | 127.0.0.1 | UDP | Filebeat |
47300 | 127.0.0.1 | TCP | Local MQTT broker |
47301 | 127.0.0.1 | TCP | Local MQTT broker |
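When choosing external ports during port mapping, these reserved ports must be avoided. A small helper like the following (a hypothetical convenience function, not part of Nerve Blue) sketches the check against the table above:

```python
# Individually reserved ports in version 2.1, from the table above.
RESERVED_PORTS = {22, 3333, 47300, 47301}
# The range reserved for internal usage: 47200 to 47399, TCP and UDP.
RESERVED_RANGE = range(47200, 47400)

def is_reserved(port: int) -> bool:
    """Return True if the given TCP/UDP port is reserved by the system."""
    return port in RESERVED_PORTS or port in RESERVED_RANGE

print(is_reserved(8080))   # False
print(is_reserved(47250))  # True
print(is_reserved(3333))   # True
```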