Basic FAQ

From The Player Project

Revision as of 17:42, 15 August 2006 by Rtv (Talk)

The old FAQ is located here; it may be useful if you are still using very outdated versions of the Player Project software.



I have problems, documentation didn't help me, how can I get help?

See the Getting help page.

What's the right way to report bugs and ask questions on the mailing lists?

Nobody actually asks this question, but many should. The guidelines for messages on the mailing lists can be found on the Getting help page.

It is very important that you read that page before posting to the mailing list, at least if you want to be helped efficiently.

Which versions of Player, Stage and Gazebo are compatible?

TODO: insert compatibility table here 

Where can I find binary packages?

We don't maintain binary packages, but some users do. Look on the Download page.

What's the story of Player's creation?

The story of the Player/Stage project can be read on the Player history page.

What other information about the Player/Stage Project is available online?

Some online sources in rough order of usefulness:

What is a "local" or "user" installation?

See the Local installation tutorial.

Why are Player and Stage (etc) so named?

See Shakespeare's 'Seven Ages of Man' speech.

How can I make movies from screenshots?

Many P/S/G programs (including playerv, playernav, stage, and gazebo) can dump screenshots, and you might want to assemble these screenshots into a movie, for example to include in a presentation. Unfortunately, there is no good (universal) method for animating frames into a movie that will play on all platforms. Some known methods, all of which have pros and cons:

  • On Linux, use mencoder (comes with mplayer). Works great, but the movies it makes generally don't run on Windows machines (some kind of DIVX problem). Sometimes Windows Media Player will play these movies, but Powerpoint won't let you embed them in a slide (maddening, isn't it?). Encoding with MPEG1 does work, but it looks terrible.
  • On Windows, there is a nice freeware binary called BMP2AVI (google it) that does the trick. Simple, but pretty effective.
  • On Windows/OS X, you can pay $30 for the full version of QuickTime, and use that to make your movies. You can generally tweak it so that the movies play on all platforms (QuickTime on Windows and MPlayer on Linux).
  • xvidcap: Captures snapshots or movies of areas of the screen.
  • wink: Input formats: Capture screenshots from your PC, or use images in BMP/JPG/PNG/TIFF/GIF formats. Output formats: Macromedia Flash, Standalone EXE, PDF, PostScript, HTML or any of the above image formats. Use Flash/html for the web, EXE for distributing to PC users and PDF for printable manuals.

Please let us know if you can come up with a better solution.


What is Player?

Player is a device server that provides a powerful, flexible interface to a variety of sensors and actuators (e.g., robots). Because Player uses a TCP socket-based client/server model, robot control programs can be written in any programming language and can execute on any computer with network connectivity to the robot. In addition, Player supports multiple concurrent client connections to devices, creating new possibilities for distributed and collaborative sensing and control.

More information on the Player page.

How is Player different from other robot interfaces?

Previous work in the area of robot programming interfaces has focused primarily on providing a development environment that suits a particular control philosophy. While such tools are very useful, we believe that implementing them at such a low level imposes unnecessary restrictions on the programmer, who should have the choice to build any kind of control system while still enjoying device abstraction and encapsulation.

Thus in Player we make a clear distinction between the programming interface and the control structure, opting for a maximally general programming interface, with the belief that users will develop their own tools for building control systems. Further, most robot interfaces confine the programmer to a single language, providing a (generally closed-source) language-specific library to which the user must link their programs. In contrast, the TCP socket abstraction of Player allows for the use of virtually any programming language. In this way, it is much more "minimal" than other robot interfaces.

What hardware and software does Player support?

There is a list of supported devices.

How do I get/build/install Player?

The Download page has the links and the instructions.

How do I cross-compile Player (e.g., for the iPAQ or Gumstix)?

There is a tutorial for this.

What is the difference between Player and Stage and Gazebo? What is the difference between Player device drivers and simulated device models in Stage or Gazebo?

See the explanation on the How Player works page.

When I try to connect to Player, I get "connection refused."

That's usually because either Player isn't running or because you're trying the wrong port. To check whether Player is running and to verify on which port(s) it is listening, use netstat. In Linux, the following should help (arguments will be different for other platforms):

  • $ netstat --inet --tcp -lp

You should see a list of all processes currently listening on TCP ports; look for player.
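If you prefer to check programmatically (or just want to see what "connection refused" means at the socket level), a simple TCP probe does the same job as netstat. A minimal Python sketch, assuming Player's default port of 6665; a refused connection means nothing is listening on that port:

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, unreachable hosts
        return False

# Player listens on TCP port 6665 by default; change this if you
# started the server with a different -p option.
if __name__ == "__main__":
    print(is_listening("localhost", 6665))
```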

How do I add a device driver to Player?

Read this tutorial.

When I run Player (possibly under Stage), it exits with the message "unknown host; probably should quit." What's the deal?

(This seems to occur mostly on OS X.) Add an entry to your /etc/hosts for your machine's name. For example, if your machine is called foobar: 127.0.0.1 localhost foobar

There's probably already a line for 127.0.0.1 (known as the "loopback address"); you can just append your hostname to the end of that line.

I have a syntax error involving PKG_CHECK_MODULES. What's the fix?

If you get a syntax error involving PKG_CHECK_MODULES, it is likely that aclocal can't find the pkg.m4 file, which defines this macro. This is common on OS X with Fink, as the pkg-config package puts this file in /sw/share/aclocal, while the standard OS X aclocal program looks in /usr/share/aclocal. Unfortunately, there is no reliable search path mechanism for aclocal, so the best fix is just to copy (or symlink) /sw/share/aclocal/pkg.m4 to /usr/share/aclocal. This FAQ entry was taken from the Autonomy Lab's P/S wiki.

How can I get high data rates from my SICK LMS laser rangefinder?

It's possible to get scans at 75Hz from the SICK LMS, if you use an RS422 connection, which is a high-speed serial line. If you use a run-of-the-mill RS232 connection, the best you can expect is about 10Hz, depending on angular aperture and resolution. Look here for one way to use a USB-RS422 converter to get high-speed laser scans. For a detailed explanation of how the LMS works, timing considerations, and what data rates can be expected, look here. More info and tips on using Player to get high-speed laser scans can be found here.
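The RS232 ceiling is easy to see with back-of-the-envelope arithmetic. A sketch, assuming a 181-sample scan (180 degrees at 1-degree resolution), 2 bytes per sample, a few bytes of telegram overhead, and 8N1 framing (roughly 10 line bits per data byte); the exact figures vary with the LMS configuration:

```python
def max_scan_rate(baud, samples=181, bytes_per_sample=2, overhead=10):
    """Upper bound on scans/second over a serial link with 8N1 framing."""
    bytes_per_second = baud / 10.0          # 10 line bits per data byte (8N1)
    bytes_per_scan = samples * bytes_per_sample + overhead
    return bytes_per_second / bytes_per_scan

print(round(max_scan_rate(38400), 1))    # RS232 at 38.4 kbps: ~10 scans/s
print(round(max_scan_rate(500000), 1))   # RS422 at 500 kbps: headroom for 75 Hz
```

This matches the figures above: a stock RS232 line tops out around 10 Hz, while RS422 at 500 kbps comfortably carries the sensor's full 75 Hz output.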

How do I connect my (Sony or Canon) PTZ camera to a standard serial port (instead of, e.g., the AUX port on my Pioneer)?

ActivMedia robots that are equipped with a PTZ camera often have the camera connected to the AUX port on the P2OS board in the robot. Player does not support control of the camera through this connection, for reasons explained here. Instead, Player requires a standard, direct, serial line to the camera.

Documentation about the Sony PTZ units is available here. In particular, page 15 of this manual has a wiring diagram.

Here are some detailed wiring instructions, courtesy of Jason L. Bryant at the Navy Center for Applied Research in Artificial Intelligence:

Instructions for rewiring a Pioneer robot so that the PTZ camera can be connected to a serial port (ttyS1) on the on-board computer rather than to the robot's microcontroller.

Purchase a VISCA-to-DB-9 conversion cable (item # 0002V448 on-line), as well as a length of 20-wire ribbon cable (ours is about 18 inches long). You will also need a 20-pin header connector.

Attach the 20-pin header to one end of the ribbon, taking note of the location of pin 1 on both the ribbon and the header connector. At the other end of the cable, split the ribbon into two 10-pin sections. Cut about 1 inch off the last pin of each section (pins 10 and 20) so that you now have two 9-pin cable ends. Now attach two male DB-9 serial connectors to the ends (being sure that pins 1 and 11 go into the proper slots of the connectors). The serial connection with pin 1 will eventually go to the serial port on the microcontroller, and the other connection will hook to the VISCA-to-DB-9 conversion cable.

Remove the top plate and nose from your Pioneer robot. Next, locate and remove the 20-pin header with a 9-wire rainbow-colored ribbon from the serial port on the on-board computer. This header connects to serial ports ttyS0 and ttyS1; however, in the default Pioneer configuration, port ttyS1 is unused. The other end of this ribbon connects to the serial port on the microcontroller (look in your Pioneer manual for the location of this port, or just follow the cable).

Now place the 20-pin header of the cable you just made into the now-free serial ports on the computer. Snake the wires under the robot's control panel and into the back section of the chassis. Connect the serial connection from ttyS0 (the one with pins 1-9) to the now-free serial port on the microcontroller. Connect the other serial connection (pins 11-19) to the female DB-9 connector on the VISCA-to-DB-9 conversion cable and snake the rest of this cable up and outside the robot cover. Replace the nose and top cover of your robot. Once you connect the other end of the VISCA cable to the camera, you will have a working PTZ camera on port /dev/ttyS1.

You can test that the connections work by running /usr/local/Aria/bin/demo on the robot, selecting 'C' for camera control, then the appropriate key for your particular camera (Sony or Canon) connected to a serial port ('@' for a Canon), and finally '2' for serial port /dev/ttyS1.

How can I read image data from a camera?

There are several options for accessing image data from a camera in Player:

  1. Write a (plugin) Player driver which reads the data directly from the camera (through the camera interface).
  2. Use the socket interface to return the image data to the client side.
  3. Use an external streaming system, like Quicktime RTSP, gstreamer, VideoLAN or OpenH323.

The first is the recommended way of accessing the camera. By building a driver in Player, the need to transmit camera information via a network is minimized. The Player server can process the image, extract whatever information you require, and return that if necessary. That is how the blobfinder and cmvision "virtual sensors" work. For custom vision processing algorithms (that do not belong in the Player source tree), users can create "plugin" drivers.

Raw image data can be read on the client side using an appropriate proxy (e.g., CameraProxy in the C++ client, or playerc_camera_t in the C client). Be aware that this option will severely increase network traffic.
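To see why, consider the bandwidth of an uncompressed stream. A quick back-of-the-envelope sketch (the frame size and rate here are illustrative, not Player defaults):

```python
def raw_stream_rate(width, height, bytes_per_pixel, fps):
    """Bytes per second for an uncompressed image stream."""
    return width * height * bytes_per_pixel * fps

# A modest 320x240 RGB camera at 10 frames/s:
rate = raw_stream_rate(320, 240, 3, 10)
print(rate / 1e6, "MB/s")  # ~2.3 MB/s before any protocol overhead
```

Larger frames or higher rates scale this linearly, which is why server-side processing or compression is usually preferable.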

Setting up an external streaming server allows you to access the "live" video feed using many other popular programs. Since the data is not travelling via Player, there is less impact on the performance of Player. Also, streaming servers typically compress the images before sending, reducing the network load somewhat. That said, there are no samples in Player/Stage to show you how to do this, as it is completely outside of the project.

Searching the mailing lists for "camera" will bring up most of the previous discussions of this matter.


What is the purpose of the key in a provides or requires field (e.g., the "odometry" in "odometry::position:0")?

Look here.

What is set odometry, and what does set odometry do?

It's a request to a position2d device (e.g., a mobile robot) to set its internal odometry to a particular (X,Y,theta) value. It doesn't move the robot, just transforms the coordinate system in which odometry will be reported.
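The effect is a rigid transform of the odometric frame: the pose at the moment of the request becomes the requested (X,Y,theta), and later readings are re-based accordingly. A sketch of the 2D pose algebra involved (illustrative, not Player's actual code):

```python
import math

def compose(a, b):
    """Compound 2D poses: pose b, expressed in frame a, in a's parent frame."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def inverse(a):
    """Inverse of a 2D pose, so that compose(inverse(a), a) is the identity."""
    x, y, th = a
    return (-x * math.cos(th) - y * math.sin(th),
             x * math.sin(th) - y * math.cos(th),
            -th)

def reported_pose(raw, raw_at_set, requested):
    """Pose the device reports after a set-odometry request: the motion
    since the request, re-based onto the requested pose."""
    return compose(requested, compose(inverse(raw_at_set), raw))
```

For example, immediately after the request the reported pose equals the requested pose, and driving 1 m straight ahead afterwards advances the reported pose 1 m along the requested heading.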

Suppose I write a Plugin, how do I set it up to have its own messages?

The 'opaque' interface is designed for this purpose. It allows you to exchange messages with arbitrary content. On the client side, there's an OpaqueProxy. Of course, there will not be XDR wrappers for your custom messages, so you have to do your own (de)marshaling on each side.

The opaque interface is usually used to prototype new interfaces and/or extensions to existing interfaces. After some testing and refinement, these additions can be submitted for consideration to be included in player.h, at which point they'll be fully supported, with XDR wrappers, client-side proxies, etc.
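As a concrete illustration, the (de)marshaling for a hypothetical opaque message - a command id plus two floats - might look like the following on the client side. The message layout here is invented for illustration; use whatever layout your plugin defines, and keep both ends byte-order consistent:

```python
import struct

# Hypothetical message layout: uint32 command id + two 32-bit floats,
# packed in network byte order so all clients agree.
FMT = "!Iff"

def marshal(cmd_id, a, b):
    """Pack a custom message into the bytes sent via the opaque interface."""
    return struct.pack(FMT, cmd_id, a, b)

def demarshal(payload):
    """Unpack the bytes received via the opaque interface."""
    return struct.unpack(FMT, payload)

msg = marshal(7, 1.5, -2.25)
print(demarshal(msg))  # (7, 1.5, -2.25)
```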


What is Stage?

Stage is a scalable multiple-robot simulator; it simulates a population of mobile robots moving in and sensing a two-dimensional bitmapped environment. When used as a Player plugin, Stage provides virtual Player robots which interact with simulated rather than physical devices. Stage can also be used as a library to create custom simulations. Various sensor models are provided, including sonar, scanning laser rangefinder, pan-tilt-zoom camera with color blob detection, and odometry.

More information on the Stage page.

How do I get/build/install Stage?

See the Download page.

When configuring Stage, Player is not found, but I just installed Player OK. What's up?

Stage uses pkg-config to find Player, so the problem probably lies with your pkg-config setup. First, make sure you have pkg-config installed. Then run it from the command line to make sure it finds Player.

Here's a successful manual run of pkg-config:

 $ pkg-config --cflags playercore

Here's an unsuccessful run, which produces a hint on how to fix it:

 $ pkg-config --cflags playercore
 Package playercore was not found in the pkg-config search path.
 Perhaps you should add the directory containing `playercore.pc'
 to the PKG_CONFIG_PATH environment variable
 No package 'playercore' found

So you need to add the path to Player's installed pkg-config metadata (*.pc) files. These are in $(prefix)/lib/pkgconfig. $(prefix) defaults to /usr/local/ unless you specified it differently on your configure command line. So here's the fix for a default install:

 $ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
 $ pkg-config --cflags playercore

How can I make movies of my Stage simulations?

For Stage 1.6 and up, use the File:Export menu in the GUI to dump screenshots, then see "How can I make movies from screenshots?" question above.

Where is the "sonar" model in Stage?

Stage does not have a "sonar" model, but it does have a "ranger" model that does a reasonable job of modeling sonar and IR rangefinders, or similar arrays of simple rangefinders. The ranger models a sonar or IR cone by raytracing a configurable number of thin diverging beams.
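The idea can be sketched in a few lines (an illustrative occupancy-grid raytracer, not Stage's implementation): cast several thin rays spread across the cone's field of view and report the nearest hit.

```python
import math

def ray_range(grid, x, y, angle, max_range, step=0.05):
    """March along one thin beam until it enters an occupied cell."""
    for i in range(int(max_range / step) + 1):
        r = i * step
        gx = int(x + r * math.cos(angle))
        gy = int(y + r * math.sin(angle))
        if 0 <= gy < len(grid) and 0 <= gx < len(grid[0]) and grid[gy][gx]:
            return r
    return max_range

def cone_range(grid, x, y, heading, fov, beams, max_range):
    """Approximate a sonar/IR cone as several diverging thin beams."""
    angles = [heading + fov * (i / (beams - 1) - 0.5) for i in range(beams)]
    return min(ray_range(grid, x, y, a, max_range) for a in angles)
```

With a wall ahead of the sensor, the straight-ahead beam returns the shortest range, which is what a real sonar's first echo would report.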


What is Gazebo?

Gazebo is a 3D, dynamic, multi-robot simulator. Whereas Stage is intended to simulate the behavior of very large populations of robots with moderate fidelity, Gazebo simulates the behavior of small populations of robots (less than 10) with high fidelity. Read the Gazebo page for more information.

Where does Gazebo run?

How do I get/build/install Gazebo?

See the Download page.

How do I build Gazebo on OS X?

Directions for building Gazebo on OS X can be found in the Gazebo manual, available from the documentation page.

How can I make movies of Gazebo?

Gazebo will not make movies directly, but can be instructed to export still frames, which you can then animate (see the how-to-make-movies question above). In versions up to and including 0.3.0, click on the window you wish to export, then press the 'W' key; frames are saved in PPM format in a directory named "frames-". Note that saving frames will significantly affect Gazebo's performance.
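The exported frames are easy to post-process, since binary PPM (P6) is just a short ASCII header followed by raw RGB bytes. A minimal reader sketch (assumes the header has no comment lines, which matches typical machine-written PPMs):

```python
def read_ppm(path):
    """Read a binary (P6) PPM file: header lines, then raw RGB bytes."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P6", "not a binary PPM"
        width, height = map(int, f.readline().split())
        maxval = int(f.readline())
        pixels = f.read(width * height * 3)
    return width, height, maxval, pixels
```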

How can I read image data from a camera?

For libgazebo users, raw image data is available through the gz_camera_t interface.

For Player users, see the FAQ entry on reading camera data; from Player's perspective, Gazebo cameras work just like real cameras (which means you can develop image processing algorithms using Gazebo-simulated images).


How do I get the latest code?

All the code for the project is maintained in a CVS repository at SourceForge. An excellent source of CVS documentation (besides the CVS manual) is here. Project-specific instructions for CVS access, both anonymous and read/write, are here.

We keep our code organized into CVS modules, and that is how you should access it. You should not check out directories directly, because you will bypass any module dependencies that we have set up. The following modules will likely be of greatest interest to you:

  • player
  • stage
  • gazebo

How do I build from CVS?

Since we're using the GNU Autotools, it's a little different to build from CVS instead of from a distribution. First, you need autoconf and automake installed. They are already installed on any reasonable UNIX-like machine, but you might need to upgrade them; you can download both packages from any GNU mirror. We're currently using:

  • autoconf 2.53
  • automake 1.6

Newer versions will probably work, but older ones probably won't. If you do use newer versions, keep in mind that you should not use any macros that aren't available in the versions listed above, because that will likely break the build for other developers.

Building from CVS always involves the same steps:

  1. autoreconf -i -s OR ./bootstrap
  2. ./configure [options]
  3. make
  4. make install (optional)

The autoreconf tool runs the right Autotools in the right order to generate necessary files, including a configure script. You only need to supply the -i -s arguments the first time you use autoreconf on a checked out copy. If autoreconf doesn't work for you (older versions were pretty buggy), then you can run the bootstrap script instead, which does the same thing.

You usually only need to run autoreconf when some part of the build system (such as acinclude.m4) has changed; at other times, you can just run configure, or even just make. However, it's safest to run autoreconf whenever you update from CVS, in case something important changed. The exact dependencies among the various files and tools are of course deterministic but extremely complex, and it's best not to think about them.

One more thing: since we're using automake, we don't write Makefiles. Instead, we write Makefile.ams (automake files), which are like meta-Makefiles. Except in special cases, Makefiles (and Makefile.ins) are auto-generated and should not be checked in.
