Release Note
MPI/XMP Solaris Package

                        New Version      Previous Version
  MPI/XMP Library
  MPI/Solaris Library
  Release Date

Welcome to the MPI-Solaris Support package. The distribution was built and tested on Solaris 8 (SPARC) with both XMP and ZMP motion controllers. The libraries have been built with standard Sun tools. This document provides an overview of the release and describes the new features and changes from the standard WinNT MPI software releases.

Each MPI-Solaris distribution has a particular version number. To properly run client/server applications between Solaris and Win32 systems, the Win32 systems must have the standard Win32 MPI release with the identical version number.
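The version check above can be scripted. A minimal sketch, where the version strings are placeholders for the values reported by the two installations:

```shell
# Placeholder version strings; substitute the versions reported by the
# Solaris and Win32 MPI installations.
SOLARIS_MPI_VERSION="XX.XX.XX"
WIN32_MPI_VERSION="XX.XX.XX"

if [ "$SOLARIS_MPI_VERSION" = "$WIN32_MPI_VERSION" ]; then
    echo "versions match"
else
    echo "version mismatch: client/server operation is unsupported" >&2
fi
```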


Solaris System Requirements

This release requires a Sun SPARC processor running the Solaris 8 operating system.
NOTE: Only use the standard Sun language tools and library versions that are supplied with the Solaris 8 distribution.

Target System:       Sun SPARC
Kernel Version:      Solaris 8
Supported Compiler:  Sun X/Open5
Compiler Switches:   -mt
Linker Switches:     -lpthread -lposix4 -lrt
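A hypothetical compile/link line using the switches above (cc is the Sun compiler; app.c and the output name are placeholders, and in practice the MPI include and library paths from this release would also be added):

```shell
# Compile and link a single-file multithreaded application with the Sun
# compiler, using the switches listed in the table above. app.c is a
# placeholder source file.
cc -mt -o app app.c -lpthread -lposix4 -lrt
```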


Solaris Software Installation Instructions

The MPI-Solaris distribution is in the Solaris package format. By default, the MPI is installed into the /opt/MEI directory.

To install the distribution, perform the following steps as the 'root' user:

mv MPI_ReleaseXX.XX.XX_Solaris.ZIP /MEI/
cd /MEI

Then extract the ZIP archive and install the package with the Solaris pkgadd utility; the device driver is loaded automatically when pkgadd runs.


The MPI-Solaris installation is designed as a standalone MPI release. All Solaris-specific files are installed into Solaris/SPARC subdirectories. Key components of the distribution are:

  • Solaris device driver, installed in the /opt/MEI/XMP/MPI/Solaris/SPARC subdirectory and into the kernel/drv directory.

  • The release and debug versions of the MPI-Solaris libraries, installed in the /opt/MEI/XMP/lib/Solaris/SPARC subdirectory.

  • Makefiles for various sample applications and utilities.

General Operation

Starting and Building the MEI XMP/ZMP driver

An XMP/ZMP device driver for Solaris is included in every release in the /opt/MEI/XMP/MPI/Solaris/SPARC/driver directory.

The driver is loaded automatically when pkgadd is run.
The MEIXMP driver can also be loaded by running the MEI install shell script. You must be logged in as 'root' to run this script.

     $ cd /opt/MEI/XMP/MPI/Solaris/SPARC
     $ ./install_meixmp

The driver may be uninstalled by running the remove_meixmp shell script.
To rebuild the MEI XMP/ZMP driver:

     $ cd /opt/MEI/XMP/MPI/Solaris/SPARC/driver
     $ make -f meixmp.mak


Host/Client Communication between Win32 and Solaris Applications

Running the MPI Server for remote access

The MPI Server provides access to the MPI library from socket-based clients including MEI's Windows-based Motion Console and Motion Scope applications.

There are two ways to run the MPI Server on Solaris.

  • via user program or command
  • using the inet daemon (inetd)

To run from the command line:


$ cd /opt/MEI/XMP/bin/Solaris
$ ./server

The MPI Server may also be run as a command from a shell script or programming language. This is often the most convenient method during system test and debug, since it requires no changes to the system /etc files.
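As a sketch of the command-driven approach, a test script can launch the server in the background and shut it down when finished. Here `sleep 60` stands in for the actual ./server binary so the pattern can be exercised anywhere; on the target system, substitute /opt/MEI/XMP/bin/Solaris/server:

```shell
# Launch a long-running program in the background and remember its PID,
# then terminate it during cleanup.
start_server() {
    "$@" &            # run the given command in the background
    SERVER_PID=$!     # remember its PID for later shutdown
}
stop_server() {
    kill "$SERVER_PID" 2>/dev/null
    wait "$SERVER_PID" 2>/dev/null
}

start_server sleep 60
# ... run client tests against the server here ...
stop_server
echo "server stopped"
```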

The file /etc/inetd.conf contains the configuration for the inet daemon. For detailed information on the format of the configuration file, type man inetd.conf.

MEI provides a sample entry for the MPI Server in /MEI/XMP/MPI/Solaris/sampleInetd.conf. You must be logged in as 'root' to modify the system /etc/inetd.conf file. Simply add the line from sampleInetd.conf to your /etc/inetd.conf file. Similarly, modify the system /etc/services file using the sample entry from /MEI/XMP/MPI/Solaris/SPARC/services.
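The steps above amount to appending the two sample entries to the live system files and telling inetd to reread its configuration. A sketch, run as 'root', assuming the standard Solaris practice of sending SIGHUP to inetd:

```shell
# Append the MEI sample entries to the live system files (never
# overwrite them), then signal inetd to reread its configuration.
cat /MEI/XMP/MPI/Solaris/sampleInetd.conf >> /etc/inetd.conf
cat /MEI/XMP/MPI/Solaris/SPARC/services  >> /etc/services
pkill -HUP inetd    # assumption: pkill is available on Solaris 8
```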


Running XMP support utilities under Solaris

Pre-compiled utilities are included in the Solaris release. These utilities support two modes of communication: they can be loaded and run directly on the target system, or run across Ethernet from a Win32 host with the -server flag.

NOTE: Solaris file names and commands are case sensitive.

Here is the syntax for a few of the support utilities:

  Flash Utility
  • From the /MEI/XMP/bin/Solaris directory, type:
         ./flash ../XMPnnnXn.bin

  • Running via TCP/IP from a Win32 host, type:
         flash -server 'target' XMPnnnXn.bin
  VM3 Utility
  • Access on the Solaris System is only available through client/server.

  • Running via TCP/IP from a Win32 host, type:
          VM3 -server 'target'
    For operation instructions, please refer to the VM3 section.
  Motion Console Utility
  • Access on a Solaris System is only available through client/server.
  • Running via TCP/IP from a Win32 host.
  • For operation instructions, please refer to the Motion Console section.
  Motion Scope Utility
  • Access on a Solaris System is only available through client/server.
  • Running via TCP/IP from a Win32 host.
  • For operation instructions, please refer to the Motion Scope section.


MPI Sample Applications

This release installs a makefile for building the sample applications with Sun C tools. The makefile provides an interface to build the sample applications from the command line using the make utility.
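A typical command-line invocation, assuming the samples and their makefile are installed under the release tree (the directory below is a placeholder; use the makefile installed by this release):

```shell
# Build the sample applications from the command line with make.
# The directory is a placeholder for the samples directory installed
# by this release.
cd /opt/MEI/XMP/app/Solaris
make
```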


Known Bugs and Issues


The MPI library requires resource locks that must be shared between processes. However, an application may crash without releasing a shared resource lock, thus preventing any MPI-based programs from running. Solaris provides a non-portable robust resource lock (mutex), but a robust mutex cannot also be recursive, and the MPI library requires recursive resource locks (i.e., a thread already holding a lock can obtain it again, provided it releases the lock the same number of times).

Previously, recovering from this situation required rebooting the system. This release contains a shell script that releases all shared resource locks after an MPI-based application crash. The /MEI/XMP/MPI/Solaris/shmclobber.sh script should be run whenever an application crashes. However, all other MPI applications (including remote applications such as Motion Console) should be terminated before running shmclobber.sh. You should also shut down the MPI Server if it has been started. The status of the memory containing the MPI library shared resource locks may be seen by using this Solaris command:

$ ipcs -m -o

Look in the output of this command for the KEY field whose value is 0xc0febabe. The corresponding NATTACH field indicates the number of attached MPI applications.
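This check can be automated: filter the `ipcs -m -o` output for the 0xc0febabe key and print the NATTACH count. In the sketch below a here-document stands in for real ipcs output (IDs, mode, and owner fields are illustrative) so the pipeline can be tried anywhere; on the target, pipe `ipcs -m -o` into the same awk filter:

```shell
# Sample output standing in for `ipcs -m -o` on the target system.
sample_ipcs() {
cat <<'EOF'
IPC status from <running system>
T   ID     KEY         MODE         OWNER   GROUP  NATTACH
m   201    0xc0febabe  --rw-rw-rw-  root    root   2
EOF
}

# Print the number of MPI applications still attached to the shared
# segment: match the KEY column, print the last (NATTACH) column.
sample_ipcs | awk '$3 == "0xc0febabe" { print $NF }'
```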


Client/Server on Solaris

If a ZMP controller is installed in the Solaris system, a Solaris client cannot run with the Solaris server.

An MPI error message will be generated when an application is run from a Solaris client to a Solaris server.

Please see the General Release Notes for outstanding MPI bugs and limitations.


Copyright © 2001-2021 Motion Engineering