
Release notes

This page contains a list of features, changes, known issues and known limitations of the Nerve releases.

Version 2.3.1

This version was released on September 16, 2021. Unless otherwise stated, all known issues and limitations of the previous version are still valid.

Features

Nerve Data Services

  • Added support for the Kafka Producer output.
  • Custom variables can now be defined by the user for all outputs that use a JSON format (ZeroMQ Publisher, MQTT Publisher and Kafka Producer).
  • Incremental data mode is available for the OPC UA Client, JSON inputs (ZeroMQ Subscriber, MQTT Subscriber) and JSON outputs (ZeroMQ Publisher, MQTT Publisher and Kafka Producer).
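
As an informal illustration of the new Kafka Producer output, the sketch below shows how a downstream service might consume the JSON messages it publishes. The broker address, topic name and exact payload layout are assumptions made for this example only and must be replaced with the values configured in the Gateway.

    # Minimal sketch of a downstream consumer for the Kafka Producer output.
    # Broker address, topic name and payload field layout are assumptions for
    # illustration; use the values from your actual Gateway configuration.
    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "nerve-data",                        # hypothetical topic name
        bootstrap_servers="localhost:9092",  # hypothetical broker address
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        sample = message.value
        # With incremental data mode enabled, each message is expected to carry
        # only the variables that changed; user-defined custom variables appear
        # as additional JSON fields (assumed payload layout).
        print(sample)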

Improvements

  • Extended the functionality of applying configuration files to Docker workloads. Now they can also be applied when the workload is stopped or suspended.
  • Added a reconnection mechanism to the Gateway for OPC UA Client connections.
  • The Gateway can now process more than 100 variables.
  • Improved the CPU usage of the Gateway.
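
For context on the reconnection improvement above, the following generic sketch shows what a client-side OPC UA reconnection loop with exponential backoff looks like. It is not the Gateway's internal implementation; the endpoint URL and retry parameters are placeholders.

    # Generic sketch of an OPC UA reconnection loop with exponential backoff.
    # This only illustrates the kind of behaviour the improvement refers to;
    # it is not the Gateway's implementation. The endpoint URL is a placeholder.
    import time

    from opcua import Client  # pip install opcua

    ENDPOINT = "opc.tcp://localhost:4840"  # placeholder endpoint URL

    def connect_with_retry(max_backoff=60):
        """Keep trying to connect, doubling the wait after each failure."""
        delay = 1
        while True:
            client = Client(ENDPOINT)
            try:
                client.connect()
                return client
            except Exception:
                # Broad catch for the sketch: connection and timeout errors alike
                # trigger another attempt after an exponentially growing delay.
                time.sleep(delay)
                delay = min(delay * 2, max_backoff)

    client = connect_with_retry()
    print("connected to", ENDPOINT)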

Bug fixes

  • Fixed a display error for VM resource allocation. In the previous version, the Management System UI remembered the resources of the last selected VM workload.
  • Fixed an issue where the CODESYS runtime would not have a decrypted license applied after an RTVM reboot.
  • Fixed an issue where a certificate file would be modified by the Gateway after being imported through the GUI.
  • Fixed an issue where the node would enter a broken state if a workload was deployed to a node twice in a row while another deployment was still in progress. A similar issue when deploying workloads through the API was also fixed.
  • Fixed an issue where the Nerve Data Services Gateway would not terminate existing connections or reject new connections if the defined space threshold had been exceeded.
  • Fixed an issue where the Data Services and Docker workloads would be unavailable. This was due to an error regarding the internal Nerve networks.
  • Fixed an issue where the Gateway would cause a connector to fall back from subscription to polling after losing the connection and failing to reconnect. At times, the connector would also become unavailable.
  • Fixed an issue where the central NerveDB would refuse connections from nodes.
  • Fixed an issue where the Gateway would crash if an OPC UA server reported erroneous or unknown status codes.
  • Fixed an issue where the Gateway would not start if at least one OPC UA NodeId did not exist.
  • Fixed an issue where the DB writer in the Management System would not work if at least one OPC UA connection had problems.
  • Fixed an issue where the deployment of a Docker workload would be stuck in the created state.
  • Fixed an issue where the initialization of the OPC UA Client input connector would abort on a faulty variable.
  • Fixed an issue where the data retention policy in the Data Services would be reset to its default after a node update.

Known issues and limitations

  • Updating the Management System from 2.3.0 to 2.3.1 causes errors in the update process. The update must be performed manually by a Nerve Service technician. Contact TTTech Industrial customer support by submitting an issue through the TTTech Industrial support portal.
  • The Local UI becomes unresponsive when trying to preview a database with a high number of columns in the Data Services.
  • Variables arriving in a TimescaleDB are discarded if they have a name longer than 64 characters. A warning is logged for each occurrence.
  • The Gateway may create a subscription again after reconnecting to the OPC UA server, which can cause old values to be received again.
  • Nerve Data Services: After updating a node to 2.3.1, all Docker workloads connected to the Data Services network need to be updated. For any workload connected to the nerve-dp network, change the network to nerve-ds. Workloads already connected to nerve-ds need to be updated as well, without changing anything. In both cases, select Update in the workload version settings and update the Docker workloads on all nodes where they are deployed.
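
For the network migration described in the last item, the following sketch shows one informal way to check locally which Nerve network a running workload container is still attached to. It uses the Docker SDK for Python and is not part of the official update procedure; the container name is a placeholder.

    # Local check (not the official migration procedure): list the networks a
    # running workload container is attached to, so you can spot containers
    # that still use the old "nerve-dp" network. Container name is a placeholder.
    import docker  # pip install docker

    client = docker.from_env()
    container = client.containers.get("my-data-workload")  # placeholder name

    # The Networks dict is keyed by network name, e.g. "nerve-ds" or "nerve-dp".
    networks = container.attrs["NetworkSettings"]["Networks"]
    for name in networks:
        note = "  <- still on the old network, switch to nerve-ds" if name == "nerve-dp" else ""
        print(name + note)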

Scaling and performance limitations

This release has been tested to perform within the following scaling boundaries:

Maximum number of concurrent devices                     200
Maximum number of concurrently logged in users
Maximum workload size                                    50 GB
Maximum number of concurrent remote access sessions
Maximum number of workloads in the workload repository   200
Maximum data upload per node                             5 datagrams per second with at least 10 sensor values, for 200 nodes in parallel
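
Read as at least 10 sensor values per datagram, the data upload limit corresponds to a minimum of 5 × 10 = 50 sensor values per second per node, or roughly 10,000 sensor values per second across 200 nodes in parallel.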