Sample Rate

The sample rate determines the update frequency for the controller and the SynqNet network. Every sample, the controller must read the cyclic sampled data from the network hardware, process the data, and write the transmit data to the Rincon buffer. The Rincon handles the data receive/transmit between the controller and the SynqNet network.

Determining an appropriate sample rate for a SynqNet system depends on several factors:

SynqNet Cyclic Update Rate

At higher cyclic update rates, network performance improves. The cyclic update rate is limited by the network loading: more nodes require more data packets, and more features within a node require larger data packets. To determine the network load, view the bandwidth usage (%). See SynqNet Timing Values for more details. The bandwidth usage must be less than 100%.
Bandwidth usage < 100%.

At present, most SynqNet systems are limited by the controller's processing power and not by the network bandwidth. For example, an 8 node (with one axis per node) system operating at 4kHz only uses 16% of the network bandwidth. If there are sample rate limitations due to network bandwidth, the data packets can be optimized to remove unused features. In the future, as new and more powerful controllers become available, the system designer will need to pay attention to the maximum sample rate and data packet load.

The minimum cyclic update rate for SynqNet systems is 1000 Hz (period = 1 millisecond). The maximum cyclic update rate for SynqNet systems is dependent on the packet load.
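As a rough feasibility check, the bandwidth figure from the example above can be scaled to other configurations. This sketch assumes usage scales linearly with node count and sample rate, which is a simplification (real usage depends on each node's packet sizes); it is calibrated only from the single data point given in the text (8 single-axis nodes at 4 kHz = 16%).

```python
# Illustrative estimate only -- not a SynqNet API. Assumes bandwidth usage
# scales linearly with (nodes x sample rate), calibrated from the text's
# example: 8 single-axis nodes at 4 kHz -> 16% bandwidth usage.
REFERENCE_USAGE_PER_NODE_HZ = 0.16 / (8 * 4000)

def bandwidth_usage(nodes: int, sample_rate_hz: float) -> float:
    """Estimated fraction of network bandwidth used (must stay below 1.0)."""
    return REFERENCE_USAGE_PER_NODE_HZ * nodes * sample_rate_hz

# Reproduce the example from the text:
usage = bandwidth_usage(nodes=8, sample_rate_hz=4000)
print(f"{usage:.0%}")    # 16%
print(usage < 1.0)       # True: within the bandwidth limit
```

Under this linear model, doubling both the node count and the sample rate (16 nodes at 8 kHz) would still estimate only 64% usage, consistent with the text's point that most systems are controller-limited rather than bandwidth-limited.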

SynqNet Drive Period

SynqNet drives that have processors update their torque, velocity, and/or position loops at fixed frequencies. To make sure the controller's closed-loop control loops are synchronized with the drive's loops, the controller update period must be a multiple of the drive update period.
SynqNet network update period MUST be a multiple of the SynqNet drive update period for ALL drives on the network.

Most SynqNet drives have a 16kHz update rate (62.5 microseconds). See also the Drive Update Frequency and Period table.

See Valid Network Sample Rates for different network setups (24kHz, 16kHz, 8kHz).

The reason for this restriction is to guarantee that the drive's PLL can lock to the SynqNet SYNQ signal with a whole number of drive update periods per network sample period. If the drives on a SynqNet network have different update rates, then the controller sample period MUST be a common multiple of the update periods of ALL the drives on the network. SynqNet nodes without drive processors do not restrict the SynqNet sample rate. For example, MEI's RMB-10V2 does not have a drive processor, so it is compatible with any SynqNet update rate.
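The multiple-of-the-drive-period rule can be expressed as a simple validation check. This is an illustrative sketch, not a SynqNet API call; the function name and tolerance are assumptions for the example.

```python
def valid_network_period(controller_period_us: float, drive_periods_us) -> bool:
    """The controller (network) update period must be an integer multiple
    of EVERY drive's update period on the network."""
    return all(
        abs(controller_period_us / p - round(controller_period_us / p)) < 1e-9
        for p in drive_periods_us
    )

# A 500 us controller period with 16 kHz drives (62.5 us): 500 / 62.5 = 8 -> valid.
print(valid_network_period(500.0, [62.5]))   # True
# A 600 us controller period would NOT be a multiple of 62.5 us (600 / 62.5 = 9.6).
print(valid_network_period(600.0, [62.5]))   # False
```

With mixed drive update rates, the same check enforces the common-multiple requirement across the whole list of drive periods.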

SynqNet Tx Time

The transmit time determines when the cyclic data is sent within the Controller/SynqNet sample period. The Tx Time is expressed as a percentage from 0% to 100%.

Smaller Tx Time values will cause the cyclic data to be transmitted earlier in the sample period.
Larger Tx Time values will cause the cyclic data to be transmitted later in the sample period.

Smaller Tx Time values will reduce the latency between the feedback data from the node and the servo demand value sent from the controller to the node. Decreasing the latency will improve the servo closed-loop performance. The (Tx Time * controller period) must be larger than the controller's foreground calculation time. See SynqNet Timing Values for more details.
Maximum Foreground Time < (Controller Period * TxTime)

Larger Tx Time values will increase the latency between the feedback data and the demand value. But, using a larger Tx Time value may allow the controller to operate at a higher sample rate, thereby increasing the servo bandwidth.

For SynqNet servo drives, the latency change is only effective if the magnitude of the Tx Time change is large enough to cross the boundary of the drive's sample period. For SynqNet nodes without drive processors, changing the Tx Time will always directly impact the control latency.

For example, suppose the SynqNet cyclic period is 500 microseconds and the drive period is 62.5 microseconds. Changing the Tx Time by 1% will cause a 5 microsecond change in the transmit time. To cross the drive period boundary, the Tx Time would need to be changed by about 13% (62.5 / 5 = 12.5) or less. Since the transmit data timing may not be exactly aligned with the drive's update time, the actual Tx Time change needed to cross the drive period boundary may be significantly less. If the node were an RMB-10V2 (no drive processor), then a 13% Tx Time change would cause a 65 microsecond change in latency.
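The arithmetic in this example can be worked through directly. The numbers below come from the text (500 microsecond cyclic period, 62.5 microsecond drive period); the variable names are illustrative only.

```python
# Worked numbers from the example above.
cyclic_period_us = 500.0   # SynqNet cyclic (controller) period
drive_period_us = 62.5     # 16 kHz drive update period

# A 1% Tx Time change shifts the transmit point by 1% of the cyclic period.
tx_step_us = cyclic_period_us / 100
print(tx_step_us)                     # 5.0 microseconds per 1%

# Tx Time change (in %) needed to shift the transmit point across one full
# drive update period. This is an upper bound: because the transmit point is
# generally not aligned with a drive update, the boundary may be crossed with
# a smaller change.
percent_per_drive_period = drive_period_us / tx_step_us
print(percent_per_drive_period)       # 12.5 (about 13%)
```

For a node without a drive processor, every 1% of Tx Time change (5 microseconds here) translates directly into control latency, so a 13% change gives the 65 microsecond shift described above.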

The default Tx Time is 75%. A reasonable range for Tx Time values is from 65% to 95%.



Example:

Sample Rate = 2000 (period = 500 microsec)
TxTime = 75% (375 microsec)

Foreground Time = 275 microsec
Background Time = 300 microsec
Delta = 1
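The example configuration above can be checked against the foreground-time rule (Maximum Foreground Time < Controller Period * TxTime). This is a sketch using the values from the example; the variable names are assumptions for illustration.

```python
# Check the example configuration against the foreground-time constraint.
sample_rate_hz = 2000
period_us = 1e6 / sample_rate_hz    # 500 us controller period
tx_time = 0.75                      # Tx Time of 75%
foreground_us = 275.0               # measured foreground calculation time

# The cyclic data is transmitted this far into each sample period.
tx_point_us = period_us * tx_time
print(tx_point_us)                  # 375.0 microseconds

# The foreground calculations must finish before the transmit point.
print(foreground_us < tx_point_us)  # True: the configuration is legal
```

Here the foreground time (275 microseconds) finishes 100 microseconds before the 375 microsecond transmit point, so this sample rate and Tx Time combination satisfies the constraint.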

Copyright © 2001-2021 Motion Engineering