
Sample Rate

The sample rate determines the update frequency for the controller and the SynqNet network. Every sample, the controller must read the cyclic sampled data from the network hardware, process the data, and write the transmit data to the Rincon buffer. The Rincon handles the data receive/transmit between the controller and the SynqNet network.

The appropriate sample rate for a SynqNet system depends on several factors:

SynqNet Cyclic Update Rate

Higher cyclic update rates improve network performance. The cyclic update rate is limited by the network load: more nodes require more data packets, and more features within a node require larger data packets. To determine the network load, view the bandwidth usage (%); it must be less than 100%. See SynqNet Timing Values for more details.

At present, most SynqNet systems are limited by the controller's processing power and not by the network bandwidth. For example, an 8 node (with one axis per node) system operating at 4kHz only uses 16% of the network bandwidth. If there are sample rate limitations due to network bandwidth, the data packets can be optimized to remove unused features. In the future, as new and more powerful controllers become available, the system designer will need to pay attention to the maximum sample rate and data packet load.

The minimum cyclic update rate for SynqNet systems is 1000 Hz (period = 1 millisecond). The maximum cyclic update rate for SynqNet systems is TBD.

SynqNet Drive Period

The SynqNet network update period MUST be a multiple of the SynqNet drive update period for ALL drives on the network. Most SynqNet drives have a 16 kHz update rate (62.5 microsecond period). See also the Drive Update Frequency and Period table. For example, valid SynqNet update rates for a network with 16 kHz drives are:

Sample Rate (Hz)    Period (microsec)
16000               62.5
8000                125
5333.3              187.5
4000                250
3200                312.5
2666.7              375
2285.7              437.5
2000                500
1777.8              562.5
1600                625
1454.5              687.5
1333.3              750
1230.8              812.5
1142.9              875
1066.7              937.5
1000                1000
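The rows above follow directly from the multiple-of-the-drive-period rule. A minimal sketch (the 62.5 microsecond drive period is from the text; the function name is illustrative):

```python
DRIVE_PERIOD_US = 62.5  # 16 kHz drive update period, from the table above

def valid_sample_rates(drive_period_us, max_multiple):
    """Return (rate_hz, period_us) pairs for each integer multiple
    of the drive period, i.e. each valid controller period."""
    rates = []
    for n in range(1, max_multiple + 1):
        period_us = drive_period_us * n        # controller period must be n * drive period
        rates.append((round(1e6 / period_us, 1), period_us))
    return rates

# Reproduce the table for a network of 16 kHz drives.
for rate, period in valid_sample_rates(DRIVE_PERIOD_US, 16):
    print(f"{rate:8.1f} Hz   {period:7.1f} us")
```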

The reason for this restriction is to guarantee that each drive's PLL can lock to an integer multiple of the SynqNet SYNQ signal's sample period. If drives on a SynqNet network have different update rates, then the controller sample period MUST be a common multiple of ALL the drive periods on the network. SynqNet nodes without drive processors do not restrict the SynqNet sample rate. For example, MEI's RMB-10V2 does not have a drive processor, so it is compatible with any SynqNet update rate.
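For a network with mixed drive update rates, the smallest valid controller period is the least common multiple of the drive periods. A sketch under stated assumptions (periods are handled in integer nanoseconds so that 62.5 microseconds stays exact; the 20 kHz second drive in the usage example is hypothetical):

```python
from math import gcd

def min_common_period_ns(drive_periods_ns):
    """Smallest controller period (ns) that is a multiple of every
    drive period on the network."""
    result = drive_periods_ns[0]
    for p in drive_periods_ns[1:]:
        result = result * p // gcd(result, p)  # least common multiple
    return result

# Example: a 16 kHz drive (62,500 ns) plus a hypothetical 20 kHz drive
# (50,000 ns) force a controller period of at least 250,000 ns (250 us),
# i.e. a maximum sample rate of 4 kHz.
print(min_common_period_ns([62500, 50000]))
```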

SynqNet Tx Time

The transmit time determines when the cyclic data is sent within the Controller/SynqNet sample period. The Tx Time is expressed as a percentage from 0% to 100%.
Smaller Tx Time values will cause the cyclic data to be transmitted earlier in the sample period.
Larger Tx Time values will cause the cyclic data to be transmitted later in the sample period.

Smaller Tx Time values will reduce the latency between the feedback data from the node and the servo demand value sent from the controller to the node. Decreasing the latency will improve the servo closed-loop performance. The (Tx Time * controller period) must be larger than the controller's foreground calculation time. See SynqNet Timing Values for more details.
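The constraint that (Tx Time * controller period) exceed the foreground calculation time can be checked directly. A minimal sketch with illustrative numbers (the 150 microsecond foreground time is an assumption, not a value from this document):

```python
def min_tx_time_percent(foreground_time_us, controller_period_us):
    """Smallest Tx Time (%) whose transmit point still falls after the
    controller's foreground calculations complete."""
    return 100.0 * foreground_time_us / controller_period_us

# Example: an assumed 150 us foreground calculation in a 500 us (2 kHz)
# sample period requires a Tx Time of at least 30%.
print(min_tx_time_percent(150.0, 500.0))
```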

Larger Tx Time values will increase the latency between the feedback data and the demand value. However, a larger Tx Time value may allow the controller to operate at a higher sample rate, thereby increasing the servo bandwidth.

For SynqNet servo drives, the latency change is only effective if the magnitude of the Tx Time change is large enough to cross the boundary of the drive's sample period. For SynqNet nodes without drive processors, changing the Tx Time will always directly impact the control latency.

For example, suppose the SynqNet cyclic period is 500 microseconds and the drive period is 62.5 microseconds. Changing the Tx Time by 1% shifts the transmit time by 5 microseconds. To cross the drive period boundary, the Tx Time would need to change by at most 12.5% (62.5 / 500). Since the transmit data timing may not be exactly aligned with the drive's update time, the actual Tx Time change needed to cross the drive period boundary may be significantly less. If the node were an RMB-10V2 (no drive processor), a 12.5% Tx Time change would cause a 62.5 microsecond change in latency.
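The arithmetic in the example above can be sketched as follows (values taken from the example; function names are illustrative):

```python
def tx_time_shift_us(percent_change, controller_period_us):
    """Transmit-time shift (us) produced by a given Tx Time change (%)."""
    return percent_change / 100.0 * controller_period_us

def percent_to_cross_boundary(drive_period_us, controller_period_us):
    """Worst-case Tx Time change (%) needed to shift the transmit time
    by one full drive period (for nodes with drive processors, smaller
    changes may have no effect on latency)."""
    return 100.0 * drive_period_us / controller_period_us

# Example from the text: 500 us cyclic period, 62.5 us drive period.
print(tx_time_shift_us(1.0, 500.0))            # shift per 1% of Tx Time
print(percent_to_cross_boundary(62.5, 500.0))  # worst-case boundary crossing
```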

The default Tx Time is 75%. A reasonable range for Tx Time values is from 65% to 95%.

Copyright © 2001-2021 Motion Engineering