Using Player

Introduction

This how-to serves as a guide to getting a Player-controlled robot to do various things. Before proceeding, it assumes that you know what The Player Project is and that you have installed it on your computer (there is plenty of information about this on the internet, but you could also have a look at my install guide).

Mapping an environment

When I used Player in my master's project, it came with some very basic mapping drivers (pmap). However, the driver could not build a map incrementally in real time (it worked from logged data files and did offline processing to produce a map). Depending on your application and requirements, this may or may not be an issue. If you wish to build maps incrementally as the data arrives, you need to interface a suitable SLAM library to Player. This can be tricky and time consuming. I went through this very process during my master's, and I ended up creating a wrapper for GMapping. You can find out about my GMapping wrapper here.

After you've installed the GMapping plugin on your machine, Player should be capable of building maps of the environment in real time. To test this out, let's use a Stage-simulated robot. Create a configuration file called 'stage_gmapping.cfg' and add the following contents:

# Stage plugin
driver
(
    name "stage"
    provides ["simulation:0"]
    plugin "stageplugin"
    worldfile "simple.world"
)
driver
(
    name "stage"
    provides ["position2d:0" "laser:0" "graphics2d:0" "graphics3d:0"]
    model "r0"
)
# GMapping plugin
driver
(
    name "gmapping"
    plugin "libgmapping"
    provides ["position2d:2" "map:0"]
    requires ["position2d:0" "laser:0"]
    # Map bounds in metres
    xmin -10.0
    xmax 10.0
    ymin -10.0
    ymax 10.0
)

Start Player with this configuration file, and then in another terminal run 'playerv'. In playerv, subscribe to the Stage position2d proxy and enable command mode. Also subscribe to GMapping's map proxy and enable continuous updates. If all is well, you should be able to drive the robot around and see a map of the environment being built as you go, as in the video clip of what happens on my computer.
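
If you would rather do this from your own client program instead of playerv, here is a minimal sketch of driving the robot and requesting the map as it is built. It assumes Player's C++ client library (libplayerc++), the default server port 6665, and the proxy indices from the configuration file above; treat it as a starting point rather than a polished implementation.

#include <libplayerc++/playerc++.h>
#include <iostream>

int main(int argc, char **argv)
{
    PlayerCc::PlayerClient client("localhost", 6665);
    PlayerCc::Position2dProxy position(&client, 0);  // Stage's position2d:0
    PlayerCc::MapProxy map(&client, 0);              // GMapping's map:0

    position.SetMotorEnable(true);

    for(int i = 0; i < 200; i++)
    {
        client.Read();               // process incoming data from the server

        // Wander forward with a slow turn, just to cover some ground.
        position.SetSpeed(0.2, 0.1);

        // Ask the GMapping driver for the current occupancy grid.
        map.RequestMap();
        std::cout << "map is " << map.GetWidth() << " x " << map.GetHeight()
                  << " cells at " << map.GetResolution() << " m/cell" << std::endl;
    }

    position.SetSpeed(0.0, 0.0);
    return 0;
}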

Input Player camera frames to OpenCV

If your project involves both Player and computer vision, odds are you'll want to grab images from the Player server and convert them into a format that OpenCV can handle. I wrote the function below to do exactly this. Note that you may need to swap the red and blue channels to get your picture looking right.

#include <libplayerc++/playerc++.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>
#include <cstring>

IplImage *player2opencv(PlayerCc::CameraProxy *camera, bool swap_RB)
{
    // Decompress the frame if the server sent it compressed.
    camera->Decompress();

    int32_t image_size = camera->GetImageSize();
    int32_t image_width = camera->GetWidth();
    int32_t image_height = camera->GetHeight();
    if(image_size <= 0 || image_width <= 0 || image_height <= 0)
    {
        std::cerr << "problem reading camera" << std::endl;
        return NULL;
    }

    // Copy the raw frame out of the proxy.
    uint8_t *tmp = new uint8_t[image_size];
    camera->GetImage(tmp);

    // Work out the number of channels (1 for greyscale, 3 for colour).
    int channels = image_size / (image_width * image_height);
    IplImage *cv_img = cvCreateImage(cvSize(image_width, image_height), IPL_DEPTH_8U, channels);

    // Copy row by row: OpenCV may pad each row to widthStep bytes, whereas
    // the Player buffer is tightly packed.
    for(int row = 0; row < image_height; row++)
    {
        memcpy(cv_img->imageData + row * cv_img->widthStep,
               tmp + row * image_width * channels,
               image_width * channels);
    }
    delete [] tmp;

    if(swap_RB)
    {
        // Swap the red and blue channels (OpenCV conventionally stores BGR).
        IplImage *cv_tmp = cvCloneImage(cv_img);
        cvConvertImage(cv_img, cv_tmp, CV_CVTIMG_SWAP_RB);
        cvReleaseImage(&cv_img);
        return cv_tmp;
    }

    return cv_img;
}
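
Here is a minimal sketch of how the function above might be used, assuming the camera is available on a camera:0 interface at the default port 6665 and that OpenCV's highgui is used for display. swap_RB is set to true here; set it to false if your colours already look right.

#include <libplayerc++/playerc++.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

// player2opencv() as defined above goes here (or in a shared header).

int main(int argc, char **argv)
{
    PlayerCc::PlayerClient client("localhost", 6665);
    PlayerCc::CameraProxy camera(&client, 0);

    while(true)
    {
        client.Read();                                   // fetch the latest frame

        IplImage *frame = player2opencv(&camera, true);  // swap red/blue channels
        if(frame == NULL)
            continue;

        cvShowImage("player camera", frame);
        cvReleaseImage(&frame);

        if(cvWaitKey(10) >= 0)                           // quit on any key press
            break;
    }

    return 0;
}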
