Node internal networking

This chapter explains how to connect workloads (VMs and Docker containers) to services and network ports of a node, and describes the internal networking concepts in detail. Most workloads need to be connected to a network, as networking is the main form of communication for workloads: they either connect to external servers, or they are servers themselves and need to be reachable by their communication partners. The Nerve networking system enables both use cases.

The image below shows an example node consisting of the host/domain-0. To further illustrate the networking concepts, one Virtual Machine workload (User VM) and one Docker workload (Docker container 1) are deployed. The virtual machine is depicted outside of the host and the Docker container inside the Docker network within the host. For the sake of explanation, however, the workloads are not yet connected. This is done in the examples further below.

Note

The CODESYS runtime and the User VM in the depiction below are marked with an asterisk. This is because there are two different installation types of Nerve:

  • without a hypervisor
    This variant contains the CODESYS runtime but cannot run VM workloads.
  • with the KVM hypervisor
    This variant does not contain the CODESYS runtime but it can run VM workloads.

Depending on the installation type, either the CODESYS runtime is available or VM workloads can be deployed, but not both. In all illustrations further below, both user VMs and the CODESYS runtime are included for informative purposes.

Networking

| Interface color | Description |
| --- | --- |
| Gray with blue frame | Physical network ports |
| Light blue | Linux network bridges displayed with their bridge names on the host |
| Dark blue | libvirt network interfaces for Virtual Machine workloads |
| Orange | Docker network bridges for Docker workloads |
| Green | Nerve internal network bridges shared between virtual machines, Docker containers and the host. They are not connected to any external, physical interface. |
| Purple | Interfaces related to the CODESYS runtime |

The physical ports P1 to P5 and I/O of the Nerve Device (the MFN 100 in this case) are displayed on the left, touching the large dark rectangle that represents the host. The light blue interfaces connected to them inside the host are Linux bridged interfaces displayed with their names on the host. Highlighted by a dark blue dashed frame is the libvirt network with NAT interfaces. The system is set up so that the network bridges can be reached by connecting through the physical ports and that virtual machines can reach outside of the system through the NAT interfaces.

Highlighted by an orange dashed frame is the Docker network including the default Docker network (the orange interface labeled bridge), as well as Docker network equivalents of the Linux bridged interfaces. For the sake of easier representation, the physical ports P1 to P5 are duplicated to the right of the Docker network, again touching the host. This is done to show that Docker network interfaces can also be reached directly by connecting to the physical ports while also making sure to show that the libvirt network and the Docker network are separate from each other.

The Nerve internal network is highlighted by the green frame. It consists of the rtvm interface and the isolated interfaces. The isolated interfaces are designed for communication inside of the system, enabling communication between workloads. Using the isolated interfaces, communication can be established between Docker workloads, between Virtual Machine workloads, or between Docker and Virtual Machine workloads. The rtvm interface is designed for communication with the CODESYS runtime and can be used to enable communication of the CODESYS runtime with Docker workloads.

All interfaces colored in purple are related to the CODESYS runtime. Interfaces labeled eth are symbolic representations of interfaces that are used by virtual machines and Docker containers for communication with the Nerve system. The actual interfaces used depend on the Docker container or virtual machine.

Connections are displayed in three ways: blue lines are connections predefined by the system; blue arrows between bridged interfaces and the libvirt network indicate NAT; and green lines, used in the examples further below, are example connections that can be defined by the user.

As mentioned above, the image represents the MFN 100. Refer to the device guide for information on other Nerve Devices, as the physical ports and the connections to their respective interfaces differ.

See the tables below for more information on the interfaces, their usage and their IP ranges. The tables are structured by workload type and show which interfaces can be used for each workload type.

Networks for Virtual Machine workloads

Bridged interfaces
To connect to any of the bridged interfaces below, select Bridged from the drop-down menu and enter the name of the interface when provisioning a Virtual Machine workload.
  • mgmt
    This interface is intended for the configuration of the system. There is no DHCP server present on this interface. To connect to it, configure the network adapter of the workstation. The IP address has to be in the range from 172.20.2.5 to 172.20.2.254 with a 255.255.255.0 subnet mask. The mgmt interface is connected to the eth1 interface of the CODESYS runtime.
  • wan
This interface is designated for internet connection and is configured as a DHCP client by default.
  • extern1, extern2, extern3
    These interfaces can be used for customer specific external connections.
NAT interfaces
To connect to any of the NAT interfaces below, select NAT from the drop-down menu and enter the name of the interface when provisioning a Virtual Machine workload.
If a deployed virtual machine uses one of the predefined NAT interfaces, the IP address of the respective interface is assigned by a DHCP server with a subnet mask of 255.255.255.0. The DHCP pool contains the lower half of the respective address space, e.g. 192.168.123.2 to 192.168.123.128 for extern1-nat. The only exception is the default network.
  • default
    This network is the NAT interface of the wan bridged interface with an IP address in the range from 192.168.122.2 to 192.168.122.254.
  • extern1-nat, extern2-nat, extern3-nat
    These networks are the NAT interfaces of the extern1, extern2 and extern3 bridged interfaces. The DHCP IP address ranges of these interfaces are the following:
    • extern1-nat
      192.168.123.2 to 192.168.123.128
    • extern2-nat
      192.168.124.2 to 192.168.124.128
    • extern3-nat
      192.168.125.2 to 192.168.125.128
Nerve internal network
To connect to any of the interfaces in the Nerve internal network below, select Bridged from the drop-down menu and enter the name of the interface when provisioning a Virtual Machine workload.

Isolated interfaces
Isolated interfaces can be used to allow Virtual Machine workloads to communicate with Docker workloads and other Virtual Machine workloads. These networks cannot communicate outside of the system. The IP addresses of these interfaces are assigned by DHCP servers in the following ranges:
  • isolated1
    172.18.64.2 to 172.18.66.254
  • isolated2
    172.18.68.2 to 172.18.70.254
  • isolated3
    172.18.72.2 to 172.18.74.254
  • isolated4
    172.18.76.2 to 172.18.78.254
  • isolated5
    172.18.80.2 to 172.18.82.254
The following ranges are reserved for static IP addresses:
  • isolated1
    172.18.67.0/24
  • isolated2
    172.18.71.0/24
  • isolated3
    172.18.75.0/24
  • isolated4
    172.18.79.0/24
  • isolated5
    172.18.83.0/24
Note that when creating a VM on a Nerve Device according to Provisioning a Virtual Machine workload, these networks need to be specified with the br- prefix, for example br-isolated1.
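
The DHCP pools and static ranges above follow a regular pattern. As a purely illustrative aid (not part of Nerve), the following Python sketch encodes the ranges from the list above and classifies an address on an isolated network:

```python
# Sketch: classify an address on a Nerve isolated network as belonging to
# the DHCP pool or to the range reserved for static assignment.
# The ranges are taken from the list above; the helper itself is illustrative.
import ipaddress

# (dhcp_first, dhcp_last, static_subnet) per isolated network
ISOLATED_RANGES = {
    "isolated1": ("172.18.64.2", "172.18.66.254", "172.18.67.0/24"),
    "isolated2": ("172.18.68.2", "172.18.70.254", "172.18.71.0/24"),
    "isolated3": ("172.18.72.2", "172.18.74.254", "172.18.75.0/24"),
    "isolated4": ("172.18.76.2", "172.18.78.254", "172.18.79.0/24"),
    "isolated5": ("172.18.80.2", "172.18.82.254", "172.18.83.0/24"),
}

def classify(network: str, address: str) -> str:
    first, last, static = ISOLATED_RANGES[network]
    ip = ipaddress.ip_address(address)
    if ipaddress.ip_address(first) <= ip <= ipaddress.ip_address(last):
        return "DHCP pool"
    if ip in ipaddress.ip_network(static):
        return "static range"
    return "outside the documented ranges"

print(classify("isolated1", "172.18.64.15"))  # DHCP pool
print(classify("isolated1", "172.18.67.10"))  # static range
```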

rtvm
Use this interface for communication with the CODESYS runtime and to establish connections between Virtual Machine workloads and the CODESYS runtime. The IP addresses are assigned in the range from 172.20.3.3 to 172.20.3.254. 172.20.3.2 is reserved for eth0 in the CODESYS runtime.

Networks for Docker and Docker Compose workloads

Docker network
Interfaces in the Docker network are behind NAT by default, so there is no drop-down menu to select Bridged or NAT when provisioning a Docker workload. Enter the network name of one of the networks below to connect a Docker workload to an interface.
  • bridge
    This is the default Docker network designated for communication with Docker. Interfaces of Docker containers that are connected to the gateway receive IP addresses in the default Docker network, ranging from 172.17.0.2 to 172.17.0.244. By default, this interface can be reached through any physical port (excluding io0).
  • mgmt, wan, extern1, extern2, extern3
    These interfaces are the Docker network equivalents of the bridged interfaces. They can be reached through the physical ports P1 to P5. The DHCP IP address ranges of these interfaces are the following:
    • mgmt
      172.18.0.2 to 172.18.3.254
    • wan
      172.18.4.2 to 172.18.7.254
    • extern1
      172.18.8.2 to 172.18.11.254
    • extern2
      172.18.12.2 to 172.18.15.254
    • extern3
      172.18.16.2 to 172.18.19.254
Nerve internal network
Since there is no drop-down menu to select Bridged or NAT when provisioning a Docker workload, enter the network name of one of the networks below to connect a Docker workload to an interface.

Isolated interfaces
Isolated interfaces can be used to allow Docker workloads to communicate with Virtual Machine workloads and other Docker workloads. These networks cannot communicate outside of the system. The IP addresses of these interfaces are assigned by DHCP servers in the following ranges:
  • isolated1
    172.18.64.2 to 172.18.66.254
  • isolated2
    172.18.68.2 to 172.18.70.254
  • isolated3
    172.18.72.2 to 172.18.74.254
  • isolated4
    172.18.76.2 to 172.18.78.254
  • isolated5
    172.18.80.2 to 172.18.82.254
The following ranges are reserved for static IP addresses:
  • isolated1
    172.18.67.0/24
  • isolated2
    172.18.71.0/24
  • isolated3
    172.18.75.0/24
  • isolated4
    172.18.79.0/24
  • isolated5
    172.18.83.0/24
rtvm
Use this interface for communication with the CODESYS runtime and to establish connections between Docker workloads and the CODESYS runtime. The IP addresses are assigned in the range from 172.20.3.3 to 172.20.3.254. 172.20.3.2 is reserved for eth0 in the CODESYS runtime.

Physical ports, other interfaces and connections

Physical ports
The physical ports are device dependent. They are included here for clarification of the image above; the MFN 100 is used as an example. Refer to the device guide for information on the specific hardware model of the Nerve Device.
  • P1 to P5
    Ethernet ports of the MFN 100. Note that P5 is an SFP port.
  • I/O
    While also an Ethernet port, this port is reserved for communication of the CODESYS runtime with the fieldbus.
Other interfaces

eth
These interfaces are symbolic representations of interfaces that are used by virtual machines and Docker containers for communication with the Nerve system. The actual interfaces used depend on the Docker container or virtual machine. The only exceptions are eth0 and eth1 for the CODESYS runtime, as they are always defined with these names.

io0
This interface is defined for communication between the I/O port and the CODESYS runtime.
Connections

Blue lines
Blue lines signify connections that are predefined by the Nerve system.

Green lines
Green lines are example connections that could be defined by a user to connect virtual machines or Docker containers to the network.

Blue arrows
Blue arrows indicate a NAT between bridged interfaces and the libvirt network.

The following sections are conceptual explanations. Workloads are attached to internal networks during the provisioning process. Refer to the provisioning chapters (Virtual Machine workloads and Docker workloads) in the user guide for instructions on how to provision workloads.

Addressing workloads by DNS name

Docker containers and virtual machines can be addressed by their DNS names without needing to know their exact IP addresses. The DNS name of a Docker container or virtual machine consists of the hostname or container name and the name of the network it is attached to. This behavior is explained further in the examples below, which highlight the DNS name that can be used where applicable.

Addressing a Docker workload by DNS name

The DNS name of a Docker workload is defined in the provisioning process of the workload. Following the Provisioning a Docker workload chapter, the DNS name is determined by the Container name and Network name settings formatted as <containername>.<networkname>. As an example, the following screenshot shows these settings of a Docker workload:

Docker network settings

According to these settings, this Docker container can be reached under docker-container.isolated1 after being deployed to a node.
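
As an illustration, another workload attached to the same isolated1 network could resolve this name with standard tooling. A minimal Python sketch, assuming the Docker workload above has been deployed:

```python
# Minimal sketch: resolve the DNS name of the Docker workload shown above.
# Assumes this runs in a workload attached to the same isolated1 network.
import socket

ip = socket.gethostbyname("docker-container.isolated1")
print(f"docker-container.isolated1 resolves to {ip}")
```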

Addressing a Virtual Machine workload by DNS name

For Virtual Machine workloads the same principle applies, but the hostname part of the DNS name does not come from the workload provisioning settings in the Management System. The hostname is defined within the virtual machine, usually at creation time. For example, if a Windows VM is generated on a Nerve Device, the hostname defined during the installation of the Windows OS is used for addressing the virtual machine by DNS name, e.g. DOCUMENTATION-PC.

The network name is determined in the Management System by the New interface setting. As an example, the following shows this setting of a Virtual Machine workload:

Virtual machine network setting

According to the machine name or hostname setting of the virtual machine and the network setting in the Management System, this virtual machine can be reached under DOCUMENTATION-PC.isolated1.

Attaching virtual machines to a network

Virtual machine networking is comparable to installing a network card in the virtual machine and attaching it to the network with the network name given in the network drawing. For this example, two "network cards" are installed in a user-deployed virtual machine. They are located in the User VM and are labeled eth0 and eth1. Green lines indicate user-established connections.

Networking Example 1

There are two connections established here:

  • eth0 of the User VM is connected to the mgmt bridged interface so that the virtual machine can be reached through physical port P1. If the User VM has an SSH server set up, a service technician could gain access through P1 of the MFN 100.

  • eth1 of the User VM is connected to the default NAT interface for an internet connection protected by NAT on P2 of the Nerve Device.

Both interfaces have IP addresses in the designated ranges. 172.20.2.15 for eth0 was manually configured in the virtual machine and 192.168.122.16 for eth1 was assigned by the DHCP server.
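
As a quick, purely illustrative check of the first connection: from a workstation attached to P1 and configured with an address in the documented mgmt range, reachability of the SSH server in the User VM could be verified with a short Python sketch (assuming an SSH server is actually set up in the VM):

```python
# Illustrative sketch: verify that the SSH port of the User VM is reachable.
# 172.20.2.15 is the statically configured mgmt address from the example;
# run this from a workstation connected to P1 with an address in the
# 172.20.2.5 to 172.20.2.254 range.
import socket

with socket.create_connection(("172.20.2.15", 22), timeout=5) as conn:
    banner = conn.recv(64)  # an SSH server sends its version banner first
    print(banner.decode(errors="replace"))
```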

Settings example

To achieve the functionality above, configure the interfaces of the Virtual Machine workload the following way during the provisioning process in the Management System:

Settings Example 1

Settings Example 2

Communication of a Docker container with external devices

Docker containers can be attached to the Docker default network or to the respective Docker network interfaces to access other parts of the system or to communicate outside of the system. The Docker default network is called bridge and has the IP address 172.17.0.1 assigned. This interface is available on all physical ports (here P1 to P5). Therefore, when a Docker container is connected to the Docker default network, the forwarded ports, as defined in the port mapping section during workload provisioning, are exposed on all physical interfaces.

For this example, the Docker container is connected to the extern1 interface instead. To make a Docker port accessible from outside, specify the port and protocol in the port mapping section during workload provisioning. The chosen network, here extern1, defines on which physical port the exported Docker port can be accessed. Since the Docker container is connected to the extern1 interface in the Docker network, the exported port is available to external devices at physical port P3.

Networking Example 4

The Docker container can be reached from the outside by connecting to the specified exported port on the IP address of the P3 interface, from an IP address in the range from 172.18.8.2 to 172.18.8.254.
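
For illustration, an external device could reach the container as sketched below. Both the IP address of the P3 interface and the exported port are assumptions for this sketch; use the actual values from the node and the port mapping settings:

```python
# Illustrative sketch: connect to the exported port of the Docker container
# through physical port P3. P3_IP and PORT are assumptions, not values
# defined by Nerve; substitute the real P3 address and the mapped port.
import socket

P3_IP = "172.18.8.1"  # hypothetical address of the P3/extern1 interface
PORT = 8080           # hypothetical port from the port mapping section

with socket.create_connection((P3_IP, PORT), timeout=5) as conn:
    conn.sendall(b"hello\n")
    print(conn.recv(1024))
```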

Settings example

To achieve the functionality above, configure the Docker network name the following way during the provisioning process in the Management System. The Container name and the port mapping settings are placeholders chosen for this example:

Settings Example 4

Communication of a Docker container and a virtual machine through isolated networks

Nerve offers isolated network interfaces in the Nerve internal network for communication of workloads inside of the system. They can be used to establish communication between Docker containers, between virtual machines, or as in this example, between a Docker container and a virtual machine. Note that these interfaces do not communicate outside of the system.

Networking Example 3

The virtual machine has a "network card" installed. User VM 1 is connected to the isolated1 interface through eth0. Docker container 1 is connected through its interface eth0 to the same network interface, isolated1. Each interface has been assigned an IP address by a DHCP server in the designated range: 172.18.64.15 for eth0 of User VM 1 and 172.18.64.25 for eth0 of Docker container 1.

Settings example

To achieve the functionality above, configure the interfaces of the Virtual Machine workload and the Docker workload the following way during the provisioning process in the Management System:

  • Virtual machine

    Settings Example 3: VM

Assuming that the virtual machine is running Windows with the hostname DOCUMENTATION-PC which was defined in the creation process, the virtual machine can be reached under DOCUMENTATION-PC.isolated1.

  • Docker container

    Settings Example 3: Docker

The Docker container can be reached under docker-container-1.isolated1 due to the Container name and the isolated network chosen in the Docker workload settings.
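
To illustrate traffic across the isolated network, a minimal Python sketch: the virtual machine contacts a service in the Docker container by DNS name. The port is an assumption for this sketch; the container would need to expose a service on it:

```python
# Illustrative sketch, run inside DOCUMENTATION-PC: contact a service in
# Docker container 1 over isolated1 by DNS name. Port 9000 is a
# hypothetical service port, not something predefined by Nerve.
import socket

with socket.create_connection(("docker-container-1.isolated1", 9000), timeout=5) as conn:
    conn.sendall(b"ping")
    print(conn.recv(1024))
```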

List of reserved TCP/UDP ports

In general, Nerve reserves the port range 47200 to 47399 on both TCP and UDP for internal usage. The following table lists the ports that are reserved in version 2.1.

| Port | Interface | Protocol | Reserved for |
| --- | --- | --- | --- |
| 22 | none | TCP | SSH daemon |
| 3000 | 172.20.2.1 | TCP | Previous location of Local UI |
| 3333 | 172.20.2.1 | TCP | Local UI |
| 47200 | 127.0.0.1 | TCP/UDP | System Log |
| 47201 | 127.0.0.1 | UDP | Filebeat |
| 47300 | 127.0.0.1 | TCP | Local MQTT broker |
| 47301 | 127.0.0.1 | TCP | Local MQTT broker |
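
When choosing ports for workload port mappings, avoiding the reserved range prevents conflicts. A small illustrative helper (not part of Nerve) based on the table above:

```python
# Sketch: check a candidate workload port against Nerve's reserved range
# (47200 to 47399, TCP and UDP) and the other ports listed in the table.
RESERVED_RANGE = range(47200, 47400)
RESERVED_PORTS = {22, 3000, 3333}

def is_safe_workload_port(port: int) -> bool:
    return port not in RESERVED_RANGE and port not in RESERVED_PORTS

print(is_safe_workload_port(8080))   # True
print(is_safe_workload_port(47300))  # False: local MQTT broker
```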