Release Note
MPI Library Version 04.03.00
Release Type: Production Release
MPI Version: 04.03.00
Release Date: 11Mar2013
New Features
Version 04.03.00
<none>
General Changes
Version 04.03.00
<none>
Fixed Bugs
Version 04.03.00
mpiUserLimitConfig() hangs

Reference Number: MPI 2616
Type: Fixed Bug
MPI Version: 04.03.00
Problem/Cause:
Repeated calls to mpiUserLimitConfig() were causing the MPI to go into an infinite loop.
A race condition caused a limit disable call to fail. As a result, a loop in userlimit.c, which waits for the limit to disable, continued looping without an exit.
Fix/Solution:
The wait loop was revised to time out and exit if the limit did not disable.
The race condition causing the disable failure was also removed.
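For illustration only, a minimal sketch of a disable-wait loop with a timeout; the function and variable names are placeholders, not the actual userlimit.c source:

    #include <time.h>

    /* Placeholder sketch: wait for the limit to disable, but give up
       after a fixed timeout instead of spinning forever. */
    static int waitForLimitDisable(volatile const int *limitEnabled,
                                   double timeoutSeconds)
    {
        time_t start = time(NULL);

        while (*limitEnabled) {
            if (difftime(time(NULL), start) > timeoutSeconds) {
                return -1;   /* timed out: exit with an error */
            }
        }
        return 0;            /* limit disabled normally */
    }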
Effect on Application Code:
Repeated mpiUserLimitConfig() calls will execute properly.
ControlConfigSet() overwrites Axis 2 gear config

Reference Number: MPI 2615
Type: Fixed Bug
MPI Version: 04.03.00
Problem/Cause:
The Axis 2 gear ratio parameters are corrupted after calling mpiControlConfigSet() to increase the number of allocated Motion Supervisors.
The recent changes in how user limits are enabled (a list rather than a count) caused limits to be active during the allocation process. Since the data for the limits was no longer valid, some limits would trigger and their outputs would overwrite the Axis 2 (or other) Gear Config variables.
Fix/Solution:
Any limit with a limit number greater than MfwBufferData.SystemConfig.Enabled.UserLimits is not processed. The MPI sets MfwBufferData.SystemConfig.Enabled.UserLimits to 0 during allocation.
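Conceptually, the guard behaves like the hypothetical sketch below (placeholder names, not the actual MPI or firmware source):

    /* Hypothetical sketch: limits beyond the enabled count are skipped,
       so their outputs cannot overwrite unrelated firmware data such as
       the Gear Config variables. */
    static void processUserLimits(long totalUserLimits,
                                  long enabledUserLimits)
    {
        long i;

        for (i = 0; i < totalUserLimits; i++) {
            if (i >= enabledUserLimits) {
                continue;              /* not processed */
            }
            processUserLimit(i);       /* hypothetical per-limit processing */
        }
    }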
Effect on Application Code:
The user limits are disabled during initialization/reallocation so no data is corrupted by their outputs.
Dependent Objects Memory Leak

Reference Number: D-04233
Type: Fixed Bug
MPI Version: 04.03.00
Problem/Cause:
If mpiControlConfigSet is called after MPI objects are allocated and those objects are then used, every MPI call that uses such an object:
- leaks memory
- prints a message on the console
Fix/Solution:
MPI objects allocated prior to an mpiControlConfigSet call that performs dynamic reallocation are now considered "obsoleted". The user must delete these objects and reallocate them.
Effect on Application Code:
If it is desired to capture the configuration of these objects, the user must retrieve the configuration (using, for example, mpiAxisConfigGet on an MPIAxis object) PRIOR to calling mpiControlConfigSet. The stored configuration can then be applied to the newly obtained MPIAxis object AFTER calling mpiControlConfigSet, as shown in the sketch below.
If an "obsoleted" object as described above is used in an MPI method, the error returned is:
ERROR 0x631: Control: Object cannot be used after reallocation
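A minimal sketch of this capture-and-restore pattern, assuming typical MPI calling conventions; the header name, the exact argument lists, and the motionCount field are assumptions and may differ by MPI version:

    #include "mpi.h"   /* assumed umbrella header name */

    /* Capture an axis configuration BEFORE the reallocating
       mpiControlConfigSet call, then re-apply it AFTER.
       Error checking omitted for brevity. */
    void reallocateKeepingAxisConfig(MPIControl control, MPIAxis *axis,
                                     long axisNumber, long newMotionCount)
    {
        MPIAxisConfig    axisConfig;
        MPIControlConfig controlConfig;

        mpiAxisConfigGet(*axis, &axisConfig, NULL);  /* capture BEFORE */
        mpiAxisDelete(*axis);                /* obsoleted object must be deleted */

        mpiControlConfigGet(control, &controlConfig, NULL);
        controlConfig.motionCount = newMotionCount;          /* assumed field name */
        mpiControlConfigSet(control, &controlConfig, NULL);  /* dynamic reallocation */

        *axis = mpiAxisCreate(control, axisNumber);  /* recreate AFTER */
        mpiAxisConfigSet(*axis, &axisConfig, NULL);  /* restore configuration */
    }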
Open Issues
Existing Bugs
Limitations
Multiple Drive Map Files

Reference Number: N/A
Type: Limitation
MPI Version: 04.02.xx
Problem:
Multiple *.dm files in the node directory may cause unexpected results when using meiConfig. It is recommended that you only have one file in this directory with the .dm extension per drive type. When multiple drive map files are present, it is possible that meiConfig will use the wrong one or an out of date .dm file.
Cause:
meiConfig expects a clean directory and does not know about copies and test .dm files. meiConfig reads every file with a .dm extension and uses the first instance where the CONTENTS of the drive map file match the drive type.
mpiFilterPostfilterSectionGet and mpiFilterPostfilterGet Unable to Identify Postfilter Types

Reference Number: MPI 2284
Type: Limitation
MPI Version: 04.02.xx
Problem:
When postfilters are set on the controller using mpiFilterPostfilterSectionSet(...) or mpiFilterPostfilterSet(...) and a variable representing a frequency is close to zero or to the Nyquist frequency, mpiFilterPostfilterSectionGet and mpiFilterPostfilterGet are unable to identify the postfilter type.
Cause:
This limitation occurs because the postfilter type is not stored on the controller; instead, the MPI attempts to identify it. However, when a specified frequency is close to zero or to the Nyquist frequency, the precision needed to correctly identify the postfilter type exceeds the precision of the variables on the controller, so the type cannot be identified.
Identification problems occur when a specified frequency is within 0.5% of the Nyquist frequency of either zero or the Nyquist frequency. For a sample rate of 2 kHz, the Nyquist frequency is 1 kHz, so identification problems occur for specified frequencies in the range 0-5 Hz or 995-1000 Hz.
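For illustration, the unreliable bands follow directly from the sample rate; this small standalone program (not part of the MPI) reproduces the 2 kHz example above:

    #include <stdio.h>

    int main(void)
    {
        double sampleRateHz = 2000.0;              /* 2 kHz example */
        double nyquistHz    = sampleRateHz / 2.0;  /* 1 kHz */
        double marginHz     = 0.005 * nyquistHz;   /* 0.5% of Nyquist = 5 Hz */

        printf("Unreliable near zero:    0.0 - %.1f Hz\n", marginHz);
        printf("Unreliable near Nyquist: %.1f - %.1f Hz\n",
               nyquistHz - marginHz, nyquistHz);
        return 0;
    }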
Single Thread Access to Interrupts per MPIControl Object

Reference Number: MPI 2363
Type: Limitation
MPI Version: 04.02.xx
Description:
Two built-in MPI features use interrupts: the event service routine and the SyncInterrupt feature. However, only one thread may access interrupts when accessing a controller over a client-server connection.
The controller event service routine started by mpiControlEventServiceStart(...) uses interrupts when MPIWaitFOREVER is specified for the sleep parameter. Other values for the sleep parameter allow the event service thread to run in polling mode. To use the SyncInterrupt feature, the event collection thread must use polling, or the SyncInterrupt routine must call mpiControlProcessEvents(...).
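As a sketch only (the exact mpiControlEventServiceStart argument list and the priority constant below are assumptions that vary by MPI version), the key point is that a finite sleep value selects polling mode, leaving the interrupt free for a SyncInterrupt routine:

    /* Sketch: start the event service thread in polling mode by passing
       a finite sleep instead of MPIWaitFOREVER. Argument list, priority
       constant, and stack size are illustrative, not authoritative. */
    void startEventServicePolling(MPIControl control)
    {
        const long sleepMs = 10;   /* finite sleep => polling mode */

        mpiControlEventServiceStart(control,
                                    MPIThreadPriorityDEFAULT, /* assumed constant */
                                    0,                        /* default stack size */
                                    sleepMs);                 /* not MPIWaitFOREVER */
    }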
sqNodeFlash Returns Timeout Error on eZMP Server

Reference Number: MPI 2512
Type: Limitation
MPI Version: 04.02.xx
Description:
While running server.exe on the eZMP with S200, a timeout error occurs when sqNodeFlash.exe is run on the client PC.
SimServer Still Under Development

Reference Number: N/A
Type: Limitation
MPI Version: 04.02.xx
Description:
At the time of the release, the development of SimServer had not been completed. This feature is released as a standalone package and is not included in the MPI distributables. SimServer is planned to be completed and supported in the next MPI release (04.02.01).