
8.1 Server Structure

Player is implemented in C++ and makes use of the POSIX-compliant pthread interface for writing multi-threaded programs. Initially, Player was written with a very large number of threads (2 per client + 1 per device); we found this model to be rather inefficient (especially with LinuxThreads) and poorly scalable due to scheduler delay and context-switch time. Thus we have eliminated many threads, so that the total thread count no longer grows with the number of clients. To support code modularity and reusability there is still generally one thread per device, though some light-weight devices (e.g., the laserbeacon device) do not require their own threads.

A single thread services all clients: it listens for new client connections on the selected TCP port(s), reads commands and requests from all current clients, and writes data and replies to all clients.
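
As an illustration, a minimal sketch of such a single-threaded service loop, built on select(), might look like the following; the port number and the message handling are simplified placeholders, not Player's actual protocol code:

#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <vector>

int main() {
  // Listen on one TCP port (6665 is Player's customary default).
  int listenfd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr = {};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = INADDR_ANY;
  addr.sin_port = htons(6665);
  bind(listenfd, (sockaddr*)&addr, sizeof(addr));
  listen(listenfd, 5);

  std::vector<int> clients;
  for (;;) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listenfd, &readfds);
    int maxfd = listenfd;
    for (int fd : clients) {
      FD_SET(fd, &readfds);
      if (fd > maxfd) maxfd = fd;
    }
    select(maxfd + 1, &readfds, NULL, NULL, NULL);

    // 1) accept new client connections
    if (FD_ISSET(listenfd, &readfds))
      clients.push_back(accept(listenfd, NULL, NULL));

    // 2) read commands and requests from current clients
    for (int fd : clients)
      if (FD_ISSET(fd, &readfds)) {
        char buf[1024];
        read(fd, buf, sizeof(buf));  // parse into a command buffer (elided)
      }

    // 3) write pending data and replies back to clients (elided)
  }
}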

When the server receives a request for a device that is not already set up, it calls the Setup() method of the object that controls the indicated device. The invocation of Setup() spawns another thread to communicate with that device. So, in total, we have one server thread and one thread per open device.
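
A hedged sketch of what such a Setup() might look like follows; the class layout and the Cycle() helper are assumptions for illustration, not code copied from Player's source:

#include <pthread.h>

class Device {
 public:
  // Called when the first client opens this device: connect to the
  // hardware, then spawn the thread that will service it.
  int Setup() {
    // ... open the serial port or socket for the physical device ...
    return pthread_create(&thread_, NULL, &Device::Main, this);
  }

  // Called when the last client closes the device.
  int Shutdown() {
    pthread_cancel(thread_);
    return pthread_join(thread_, NULL);
  }

 private:
  // Per-device service loop: runs until the device is shut down.
  static void* Main(void* arg) {
    Device* self = static_cast<Device*>(arg);
    for (;;) {
      // read the latest command from the command buffer, send it to the
      // device, read sensor data back, write it into the data buffer
      self->Cycle();
    }
    return NULL;
  }

  void Cycle() { /* device I/O elided */ }

  pthread_t thread_;
};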

The overall system structure of Player is shown in Figure 8.1. The center portion of the figure is Player itself; on the left are the physical devices and on the right are the clients. As described above, each client has a TCP socket connection to Player. If the client is executing on the same host as Player, then this socket is simply a loopback connection; otherwise, there is a physical network in between the two. At the other end, Player connects to each device by whatever method is appropriate for that device. For most devices, including the laser, camera, and robot microcontroller, Player makes this connection via an RS-232 serial line. However, connections to the ACTS vision server and Festival speech synthesizer are via a TCP socket.

Within Player, the various threads communicate through a shared global address space. As indicated in Figure 8.1, each device has associated with it a command buffer and a data buffer. These buffers, which are each protected by mutual exclusion locks, provide an asynchronous communication channel between the device threads and the client reader and writer threads. For example, when the client reader thread receives a new command for a device, it writes the command into the command buffer for that device. At some later point in time, when the device thread is ready for a new command, it will read the command from its command buffer and send it on to the device. Analogously, when a device thread receives new data from its device, it writes the data into its data buffer. Later, when the client writer thread is ready to send new data from that device to a particular client, it reads the data from the data buffer and passes it on to the client. In this way, the client service thread is decoupled from the device service threads (and thus the clients are decoupled from the devices). Also, just by the nature of threads, the devices are decoupled from each other.
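
These buffers can be sketched as follows (an assumed structure, not Player's exact classes): each Put() overwrites the previous contents under the lock, and each Get() copies out the most recent contents.

#include <pthread.h>
#include <cstring>

class Buffer {
 public:
  Buffer() { pthread_mutex_init(&lock_, NULL); }

  // Writer side: a device thread (for a data buffer) or the client
  // service thread (for a command buffer). Each write replaces the
  // previous contents.
  void Put(const void* src, size_t len) {
    if (len > sizeof(contents_)) len = sizeof(contents_);
    pthread_mutex_lock(&lock_);
    memcpy(contents_, src, len);
    len_ = len;
    pthread_mutex_unlock(&lock_);
  }

  // Reader side: always yields the most recent contents.
  size_t Get(void* dst) {
    pthread_mutex_lock(&lock_);
    memcpy(dst, contents_, len_);
    size_t len = len_;
    pthread_mutex_unlock(&lock_);
    return len;
  }

 private:
  pthread_mutex_t lock_;
  char contents_[1024];
  size_t len_ = 0;
};

The same structure serves both directions; only the roles of reader and writer are reversed between the data buffer and the command buffer.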

Figure 8.1: Overall system architecture of Player
[figure: buffers.eps]

8.1.1 Device data

By default, each client receives new data from each device to which it is subscribed at 10Hz. Of course, receiving data at 10Hz may not be reasonable for all clients; thus we provide a method for changing the frequency, and also for placing the server in a request/reply mode. It is important to remember that even when a client receives data slowly, there is no backlog: it always receives the most current data, having simply missed out on some intervening information. Also, these frequency changes affect the server's behavior with respect to each client individually; a client at 30Hz and a client at 5Hz can be connected simultaneously, and the server will feed each one data at its preferred rate.
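
The per-client scheduling bookkeeping might look like the following sketch (the Client struct and DueClients() are hypothetical names, not Player's actual symbols):

#include <vector>

struct Client {
  double frequency;  // requested updates per second (10Hz by default)
  double next_due;   // next time this client should be sent data
};

// Called each server cycle with the current time; returns the clients
// that are due for data and advances their individual schedules.
std::vector<Client*> DueClients(std::vector<Client>& clients, double now) {
  std::vector<Client*> due;
  for (Client& c : clients) {
    if (now >= c.next_due) {
      due.push_back(&c);
      c.next_due = now + 1.0 / c.frequency;  // 0.1s apart at the default
    }
  }
  return due;
}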

There are four (per-client) modes of data delivery, as follows:

PLAYER_DATAMODE_PUSH_ALL: data from all subscribed devices is pushed to the client at the current frequency.
PLAYER_DATAMODE_PUSH_NEW: data is pushed at the current frequency, but only data that is new since the last cycle is sent.
PLAYER_DATAMODE_PULL_ALL: data from all subscribed devices is sent only when the client explicitly requests it.
PLAYER_DATAMODE_PULL_NEW: data is sent only on request, and only if it is new.

The default mode is currently PLAYER_DATAMODE_PUSH_NEW, which many clients will find most useful. In general, the *PUSH* modes, which essentially provide continuous streams of data, are good for simple (or multi-threaded) client programs in which the process periodically blocks on the socket to wait for new data. Conversely, the *PULL* modes are good for client programs that are very slow and/or aperiodic. Along the other dimension, the *NEW* modes are the most efficient, as they never cause ``old'' data to be sent to the client. However, if a client program does not cache sensor data locally, then it might prefer one of the *ALL* modes in order to receive all sensor data on every cycle; in this way the client can operate each cycle on the sensor data in place as it is received.
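
The decision logic implied by these two dimensions can be sketched as follows (the enum values and the ShouldSend() function are illustrative, not Player's actual symbols):

enum DataMode { PUSH_ALL, PUSH_NEW, PULL_ALL, PULL_NEW };

// 'fresh' is true if the device produced new data since the last send to
// this client; 'requested' is true if the client asked for data this cycle.
bool ShouldSend(DataMode mode, bool fresh, bool requested) {
  switch (mode) {
    case PUSH_ALL: return true;                // continuous stream
    case PUSH_NEW: return fresh;               // stream, skipping stale data
    case PULL_ALL: return requested;           // only on explicit request
    case PULL_NEW: return requested && fresh;  // on request, only if new
  }
  return false;
}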

Of course, it is possible for a device to generate new data faster than the client reads it from the server. In particular, there is no method by which a device can raise an interrupt to signal that it has data ready. Thus the data received by the client will always be slightly stale, having sat for some time in the shared data buffer inside Player. This ``buffer sit'' time can be minimized by increasing the frequency with which the server sends data to the client. In any case, all data is timestamped by the originating device driver, ideally as close as possible to the time at which the data was gathered from the device. This timestamp is generally a very close approximation to the time at which the sensed phenomenon occurred and can be used by client programs requiring (fairly) precise timing information.
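
Driver-side timestamping under these assumptions might look like this sketch (the struct and helper are hypothetical):

#include <sys/time.h>
#include <cstring>

struct TimestampedData {
  struct timeval timestamp;  // when the reading was taken
  char payload[1024];
  size_t len;
};

// Stamp the reading immediately after it arrives from the hardware, so
// the timestamp approximates when the sensed phenomenon occurred.
void StampAndStore(TimestampedData& slot, const char* raw, size_t len) {
  gettimeofday(&slot.timestamp, NULL);
  memcpy(slot.payload, raw, len);
  slot.len = len;
}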

8.1.2 Device commands

Analogous to the issue of data freshness is the fact that there is no guarantee that a command given by a client will ever be sent to the intended physical device. Player does not implement any device locking, so when multiple clients are connected to a Player server, they can simultaneously write into a single device's command buffer. There is no queuing of commands: each new command overwrites the old one, and each time the device's service thread reads its command buffer, it sends along to the device whatever command it finds there. We chose not to implement locking in order to provide maximal power and flexibility to the client programs. In our view, if multiple clients are concurrently controlling a single device, such as a robot's wheels, then those clients are probably cooperative, in which case they should implement their own arbitration mechanism at a higher level than Player. If the clients are not cooperative, then the subject of research is presumably the interaction of competitive agents, in which case device locking would be a hindrance.
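
Reusing the Buffer class sketched earlier, the last-writer-wins behavior looks like this toy demonstration (the client labels and command values are invented):

#include <cstdio>

int main() {
  Buffer cmd;  // a device's command buffer (defined in the earlier sketch)
  int stop = 0, go = 100;        // invented command values
  cmd.Put(&stop, sizeof(stop));  // client A writes "stop"
  cmd.Put(&go, sizeof(go));      // client B writes "go", overwriting it
  int latest;
  cmd.Get(&latest);              // the device thread sees only "go"
  printf("command sent to device: %d\n", latest);
  return 0;
}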

8.1.3 Device configurations

Whereas the data and command for each device are stored in simple buffers that are successively overwritten, configuration requests and replies are stored in queues. Configuration requests, which are sent from client to server, are stored in the device's incoming queue; configuration replies, which are sent from server to client, are stored in the device's outgoing queue. These queues are fixed-size: the queue element size is currently fixed at 1KB for all devices, and the queue length is determined at compile time by each device's constructor.
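
A fixed-size queue with these properties might be sketched as follows; the class name and layout are assumptions, with only the 1KB element size and the per-device length coming from the text. Unlike the command buffer, a full queue rejects new requests rather than overwriting pending ones.

#include <pthread.h>
#include <cstring>

class ConfigQueue {
 public:
  // 'length' is fixed per device at compile time; elements are 1KB each.
  explicit ConfigQueue(size_t length)
      : length_(length), head_(0), count_(0) {
    elements_ = new char[length_][1024];
    pthread_mutex_init(&lock_, NULL);
  }
  ~ConfigQueue() { delete[] elements_; }

  // Returns false when the queue is full; pending requests are never
  // overwritten, in contrast to the command buffer.
  bool Push(const void* req, size_t len) {
    pthread_mutex_lock(&lock_);
    bool ok = (count_ < length_) && (len <= 1024);
    if (ok) {
      memcpy(elements_[(head_ + count_) % length_], req, len);
      ++count_;
    }
    pthread_mutex_unlock(&lock_);
    return ok;
  }

  // Copies the oldest element (up to 1KB) into 'dst'; returns false
  // when the queue is empty.
  bool Pop(void* dst) {
    pthread_mutex_lock(&lock_);
    bool ok = (count_ > 0);
    if (ok) {
      memcpy(dst, elements_[head_], 1024);
      head_ = (head_ + 1) % length_;
      --count_;
    }
    pthread_mutex_unlock(&lock_);
    return ok;
  }

 private:
  char (*elements_)[1024];
  size_t length_, head_, count_;
  pthread_mutex_t lock_;
};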

