
Streaming Custom Clusters


Overview

The Streaming Custom Clusters feature enables you to stream complex LabVIEW cluster data directly to the Nominal platform. This feature supports high-speed data collection using a Producer/Consumer design pattern, allowing data to be buffered locally and published in batches for optimal performance.

Supported Data Types

Cluster Element Requirements

Clusters must contain only the following data types:

  • Signed Integers (I8, I16, I32, I64)
  • Unsigned Integers (U8, U16, U32, U64)
  • Singles (SGL)
  • Doubles (DBL)
  • Booleans
  • Strings

Nested Clusters

  • Nested clusters are supported and can contain any combination of the above data types
  • Nesting depth is limited only by LabVIEW's cluster limitations

Channel Naming Convention

When data is streamed to Nominal, channels are automatically created using a dot-delimited naming format based on cluster hierarchy and element names.

Naming Examples

Simple Cluster:

Cluster: "SensorData"
├── Temperature (DBL)
├── Pressure (SGL)
└── Active (Boolean)

Resulting Channels:
- SensorData.Temperature
- SensorData.Pressure  
- SensorData.Active

Nested Cluster:

Cluster: "VehicleData"
├── Engine
│   ├── RPM (U16)
│   └── Temperature (DBL)
├── GPS
│   ├── Latitude (DBL)
│   └── Longitude (DBL)
└── Speed (SGL)

Resulting Channels:
- VehicleData.Engine.RPM
- VehicleData.Engine.Temperature
- VehicleData.GPS.Latitude
- VehicleData.GPS.Longitude
- VehicleData.Speed
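
The flattening rule can be pictured in a few lines of illustrative Python (not part of the toolkit; modeling a cluster as a nested dictionary is just an assumption for the example):

```python
def flatten_channels(name, value):
    """Recursively flatten a nested structure into dot-delimited channel names."""
    if isinstance(value, dict):          # a dict stands in for a nested cluster
        channels = {}
        for element, element_value in value.items():
            channels.update(flatten_channels(f"{name}.{element}", element_value))
        return channels
    return {name: value}                 # scalar element -> one channel

vehicle_data = {
    "Engine": {"RPM": 3200, "Temperature": 88.5},
    "GPS": {"Latitude": 51.5072, "Longitude": -0.1276},
    "Speed": 42.0,
}

for channel in flatten_channels("VehicleData", vehicle_data):
    print(channel)
# VehicleData.Engine.RPM
# VehicleData.Engine.Temperature
# VehicleData.GPS.Latitude
# VehicleData.GPS.Longitude
# VehicleData.Speed
```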

Producer/Consumer Design Pattern

This feature is designed for high-throughput applications where data collection occurs faster than network transmission. The buffering system allows:

  • Fast data collection at high rates without blocking
  • Batch transmission for efficient network utilization
  • Asynchronous operation between data collection and transmission
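
At its core this is a shared buffer sitting between a fast acquisition side and a slower transmission side. The toolkit implements the buffering in LabVIEW, but the gist, sketched very roughly in Python, is:

```python
from collections import deque

buffer = deque()                              # stands in for the toolkit's internal buffer

# Producer side: appending is a cheap in-memory operation, so acquisition
# is never blocked by network latency.
for i in range(1000):
    buffer.append({"Temperature": 20.0 + 0.01 * i, "Active": True})

# Consumer side: runs independently and sends whatever has accumulated as one
# batch, which is far cheaper than 1000 individual network calls.
batch = [buffer.popleft() for _ in range(len(buffer))]
print(f"transmitting one batch of {len(batch)} samples")
```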

API Functions

1. Initialize Streaming Cluster

VI Path: stream_custom_cluster.lvclass:initialize.vi

Purpose: Sets up backend buffers and analyzes the cluster structure.

Inputs:

  • Cluster data type (used for analysis only)

Outputs:

  • stream_custom_cluster.lvclass object containing buffers

Notes:

  • This function does not send or capture data
  • Performs datatype validation and buffer setup
  • Must be called first in the sequence
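
As a rough mental model only (not the toolkit's implementation), Initialize can be pictured as walking the cluster definition, checking every element against the supported types, and allocating an empty buffer. The function name and the string/dict model of a cluster type below are assumptions made for illustration:

```python
SUPPORTED_TYPES = {"I8", "I16", "I32", "I64", "U8", "U16", "U32", "U64",
                   "SGL", "DBL", "Boolean", "String"}

def initialize(cluster_type):
    """Validate every element against the supported types and return an empty buffer."""
    def check(element_type, path):
        if isinstance(element_type, dict):                  # nested cluster
            for name, sub_type in element_type.items():
                check(sub_type, f"{path}.{name}")
        elif element_type not in SUPPORTED_TYPES:
            raise TypeError(f"Unsupported element type at {path}: {element_type}")

    for name, element_type in cluster_type.items():
        check(element_type, name)
    return {"cluster_type": cluster_type, "buffer": []}     # nothing is sent or captured yet

stream = initialize({"Temperature": "DBL", "Pressure": "SGL", "Active": "Boolean"})
```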

2. Add Streaming Cluster Data

VI Path: stream_custom_cluster.lvclass:add data.vi

Purpose: Adds cluster data to the internal buffer for later transmission.

Inputs:

  • stream_custom_cluster.lvclass object
  • Cluster data (must match Initialize function data type exactly)

Outputs:

  • Updated stream_custom_cluster.lvclass object

Notes:

  • The data type must be identical to the one wired to the Initialize function
  • Fast, non-blocking operation for high-speed data collection
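
Continuing the same illustrative Python model, adding data is nothing more than a cheap in-memory append; in LabVIEW the matching data type is enforced by the cluster you wire in, which the sketch below simply assumes:

```python
def add_data(stream, cluster_value):
    """Append one cluster value to the in-memory buffer; no network activity."""
    stream["buffer"].append(cluster_value)   # cheap, so the acquisition loop is never blocked
    return stream

stream = {"cluster_type": {"Temperature": "DBL"}, "buffer": []}
for i in range(1000):
    add_data(stream, {"Temperature": 20.0 + 0.01 * i})      # 1000 samples buffered, none sent yet
```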

3. Write Cluster Data Batches

VI Path: Nominal Client.lvclass:nominal_channel_writer.write_data_cluster.vi

Purpose: Transmits all buffered data to the Nominal platform in batches.

Inputs:

  • stream_custom_cluster.lvclass object
  • Nominal Client connection

Outputs:

  • Updated stream_custom_cluster.lvclass object (with cleared buffer)

Notes:

  • Sends all buffered data in optimized batches
  • Clears the buffer after successful transmission
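
In the same illustrative model, the write step drains whatever has accumulated, pushes it out in chunks, and leaves the buffer empty. The batch size and the send callback below are invented for the example; the real VI handles batching and the Nominal Client connection internally:

```python
def write_data_batches(stream, send, batch_size=500):
    """Send everything in the buffer in fixed-size batches, then clear the buffer."""
    pending = stream["buffer"]
    for start in range(0, len(pending), batch_size):
        send(pending[start:start + batch_size])    # one network call per batch
    stream["buffer"] = []                          # cleared after transmission
    return stream

stream = {"buffer": [{"Temperature": 20.0 + 0.01 * i} for i in range(1200)]}
write_data_batches(stream, send=lambda batch: print(f"sent {len(batch)} samples"))
# sent 500 samples
# sent 500 samples
# sent 200 samples
```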

4. Destroy Streaming Cluster

VI Path: stream_custom_cluster.lvclass:destroy.vi

Purpose: Releases memory used by the buffering system.

Inputs:

  • stream_custom_cluster.lvclass object
  • Wait until all data is streamed? (Boolean, default: TRUE)
  • Timeout when waiting (U32, default: 10000ms)

Outputs:

  • The object's buffers and resources are released

Notes:

  • When "Wait until all data is streamed" is enabled, waits for buffer size = 0
  • 10-second default timeout prevents indefinite waiting
  • Essential for proper memory management
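
The wait-with-timeout behavior can be sketched as a simple polling loop (again a hypothetical Python analogue, with the buffer assumed to be drained by a separate consumer):

```python
import time

def destroy(stream, wait_until_streamed=True, timeout_ms=10_000):
    """Optionally wait for the buffer to drain, then release it."""
    if wait_until_streamed:
        deadline = time.monotonic() + timeout_ms / 1000.0
        # Poll until a separate consumer has emptied the buffer, or give up at the timeout.
        while stream["buffer"] and time.monotonic() < deadline:
            time.sleep(0.01)
    stream["buffer"] = None            # release the buffered data

stream = {"cluster_type": {"Temperature": "DBL"}, "buffer": []}
destroy(stream)                        # buffer already empty, so this returns immediately
```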

Implementation Example

Basic Producer/Consumer Pattern

1. Initialize:
   - Call Initialize Streaming Cluster with your cluster type
   - Store the returned object for subsequent operations

2. Producer Loop:
   - Collect data at high speed
   - Call Add Streaming Cluster Data for each data point
   - Continue without waiting for transmission

3. Consumer Loop:
   - Periodically call Write Cluster Data Batches
   - Monitor buffer levels if needed
   - Adjust batch frequency based on requirements

4. Cleanup:
   - Call Destroy Streaming Cluster when finished
   - Ensure all data is transmitted before cleanup
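
As a rough Python analogue of this sequence (the function names are hypothetical stand-ins for the four VIs, not the toolkit's API, and the two threads stand in for parallel loops on a LabVIEW block diagram):

```python
import queue
import threading
import time

# Hypothetical stand-ins for the four VIs, using a thread-safe queue as the buffer.
def initialize():
    return queue.Queue()                                     # 1. Initialize: set up the buffer

def add_data(buffer, sample):
    buffer.put(sample)                                       # 2. Producer: buffer one cluster value

def write_data_batches(buffer):
    batch = []
    while not buffer.empty():                                # 3. Consumer: drain and send one batch
        batch.append(buffer.get())
    if batch:
        print(f"transmitting {len(batch)} samples")

def destroy(buffer, timeout_ms=10_000):
    deadline = time.monotonic() + timeout_ms / 1000.0
    while not buffer.empty() and time.monotonic() < deadline:
        time.sleep(0.01)                                     # 4. Cleanup: wait for the buffer to drain

buffer = initialize()
acquisition_done = threading.Event()

def producer():
    for i in range(2000):
        add_data(buffer, {"Temperature": 20.0 + 0.01 * i, "Active": True})
        time.sleep(0.001)                                    # simulated ~1 kHz acquisition
    acquisition_done.set()

def consumer():
    while not (acquisition_done.is_set() and buffer.empty()):
        write_data_batches(buffer)
        time.sleep(0.1)                                      # send a batch every 100 ms

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
destroy(buffer)
```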