Robobot mission


Back to robobot


== Robobot mission software ==

'''NB! This description is not valid for the 2023 version of the software.'''

[[File:robobot_mission_block_overview.png | 600px]]

This figure shows the mission functional blocks. The primary connection is to the robobot_bridge, through which the robot is controlled. The red block is the main part, where the behaviour is decided and controlled. The data elements in the bottom row of boxes support those decisions. A camera block is available, where the image processing is intended to happen; some vision example code is included as inspiration.

The blocks marked with a circle arrow each run their own thread to handle incoming data (from the bridge or the camera).

The mission application executable is in /home/local/mission/build:

local@Oscar:~ $ cd mission/build
local@Oscar:~/mission/build $ ./mission 
Received, but not used: # Welcome to robot bridge (HAL) - send 'help' for more info
Received, but not used: bridge 1 crc mod99
# Video device 0: width=1280, height=720, format=MJPG, FPS=25
# Vision::setup: Starting image capture loop
# Setup finished OK

When the application starts, the bridge welcome messages are received but ignored ("Received, but not used"). The camera is then opened, and the image size (1280 x 720) and format in use are printed. Finally, the setup is reported as finished.

== Main ==

The main program of the mission example looks like this:

1 int main(int argc, char **argv) 
2 {
3  if (setup(argc, argv))
4  { // start mission
5    std::cout << "# Robobot mission starting ...\n";
6    step1();
7    step2();
8    std::cout << "# Robobot mission finished ...\n";
9    // remember to close camera
10   vision.stop();
11   sound.say("I am finished..", 0.2);
12   while (sound.isSaying())
13     sleep(1);
14   bridge.tx("regbot mute 1\n");
15 }
16 return 0;
17 }

A C++ program starts at the main(..) function.

First, the interfaces and data sources need to be set up in line 3; if setup fails, the program terminates.

Line 5 is just a print to the console.

Lines 6 and 7 are the two mission parts used in this example.

Then there is just the cleanup left. Line 14 shows how to send data to the Regbot through the bridge: "bridge.tx()" transmits a text string to the bridge, and when the string starts with "regbot", the rest is sent on to the Regbot, as "regbot" is a data source name.
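
For example, the mission commands used further down this page follow the same convention (the routing described in the comments is taken from the explanation above):

 bridge.tx("regbot mclear\n");   // starts with "regbot": the bridge forwards "mclear" to the Regbot
 bridge.tx("regbot start\n");    // also forwarded: starts the mission stored on the Regbot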

=== Mission step ===

The first mission step in this example is:

1  void step1()
2  {
3   sound.say(". Step one.", 0.3);
4   // remove old mission
5   bridge.tx("regbot mclear\n");
6   // clear events received from last mission
7   event.clearEvents();
8   // add mission lines
9   bridge.tx("regbot madd vel=0.2:time=1\n");
10  bridge.tx("regbot madd tr=0.1:time=1,turn=-90\n");
11  bridge.tx("regbot madd :time=1\n");
12  // start this mission
13  bridge.tx("regbot start\n");
14  // wait until finished
15  //
16  cout << "Waiting for step 1 to finish (event 0 is send, when mission is finished)\n";
17  event.waitForEvent(0);
18 //   sound.say(". Step one finished.");
19 }

Line 3 calls the "sound" function "sound.say("string", volume)"; it converts the text to speech (in English) and plays the generated sound file (aa.wav).

Line 5 sends a message to the Regbot to clear any stored mission (this also stops the active control of the robot wheels if a mission is running).

Line 7 clears events. Events can be generated in any mission line (with a number from 1 to 30), and events are generated automatically at the start (event 33) and end (event 0) of a Regbot mission.
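
As a sketch of how a user event could be used (the 'event=1' parameter is an assumption about the Regbot mission-line syntax, it is not taken from this example):

 bridge.tx("regbot madd vel=0.2,event=1:time=1\n");  // assumed: raise event 1 when this line is active
 bridge.tx("regbot start\n");
 event.waitForEvent(1);                              // block until event 1 arrives from the Regbot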

Lines 9 to 11 add new mission lines: the first part, "regbot", tells the bridge that the message is for the Regbot, and the second part, "madd", tells the Regbot that this is a mission line to add. The rest of the string is decoded as a mission line.

In case of syntax error, a message is sent back from the Regbot, like:

regbot:# UMissionThread::addLine syntax error thread 1 line 0: failed parameter at 2:time=1

The first part, "regbot:", shows that the message comes from the Regbot; the rest of the line says that there is an error in "thread 1 line 0". The offending part is shown as "2:time=1". Here the error was that the velocity part "vel=0.2" was written as "vel=0,2", and the comma is reserved as a separator between commands.

Line 13 tells the Regbot to start the just downloaded mission lines.

Line 17 waits for event 0 to happen, indicating "end of the mission".

=== Setup ===

The setup function calls all data modules in turn. The data modules subscribe to the relevant data from the bridge and the Regbot.

The bridge module receives the returned data from the bridge and, in turn, asks each data module whether it handles that message type. The code is at line 40 in the "bridge.cpp" file.
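
The dispatch idea can be sketched roughly like this (the module names and the 'decode()' method are assumptions for illustration; 'msg' is the received text line - see bridge.cpp for the actual code):

 // sketch only: let each data module try to decode the incoming message
 bool used = imu.decode(msg) or pose.decode(msg)
          or irdist.decode(msg) or event.decode(msg);
 if (not used)
   printf("Received, but not used: %s\n", msg);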

== Vision ==

The vision setup opens the camera with these lines in the file "vision.cpp":

 // line 64 ff
 // prepare to open the camera
 int deviceID = dev;        // 0 = open default camera
 int apiID = cv::CAP_V4L2;  // Video for Linux version 2
 // open selected camera using selected API
 cap.open(deviceID, apiID);
 // check if we succeeded
 camIsOpen = cap.isOpened();
 if (not camIsOpen)
 {
   cerr << "ERROR! Unable to open camera\n";
 }

==== Capture image thread ====

If opening is successful, then a thread is started (line 102):

 // start thread to keep buffer empty
 printf("# Vision::setup: Starting image capture loop\n");
 listener = new thread(startloop, this);

The "startLoop" calls "loop" (line 136ff)

 void UVision::loop()
 {
   while (camIsOpen and not terminate)
   { // keep framebuffer empty
     if (useFrame)
     { // grab and decode next image
       cap.read(frame);
       // mark as available
       gotFrame = not frame.empty();
       useFrame = not gotFrame;
     }
     else
       // just grab the image - mark it as used
       cap.grab();
     frameSerial++;
   }
 }

As long as no one has set the boolean "useFrame" to true, the loop just calls "cap.grab()" to keep the frame buffer empty. When "useFrame" is set to true, the next image is decoded and saved in the "frame" image buffer.

The function "getNewestFrame()" will tell the loop to capture an image and then wait until the image is in the frame buffer.

==== Process image ====

The function "processImage()" is intended to be called from one of the mission steps, and this example is overly complicated, but some of the important lines are shown here:

170  bool UVision::processImage(float seconds)
     { // process images in 'seconds' seconds
     ...
182  getNewestFrame();    
     if (gotFrame)
     { // save the image - with a number
         const int MSL = 100;
         char s[MSL];
         snprintf(s, MSL, "sandberg_%03d.png", n);
         t3.now();
200      cv::imwrite(s, frame);
         printf("Image save took %.3f sec\n", t3.getTimePassed());
     }
     ...
     ballBoundingBox.clear();
207  terminate = doFindBall();
     ...
      return terminate or not camIsOpen;
 }

Line 182 requests a fresh image; "gotFrame" is true if this succeeded.

Line 200 saves the image to a file (n is the frame number).

Line 207 calls an image analysis function and returns true if a ball is found.

==== Find ball OpenCV example ====

To illustrate some of the OpenCV calls, the highlights of the example function "doFindBall()" are:

bool UVision::doFindBall()
{ // process pipeline to find
    // bounding boxes of balls with matched colour
242 cv::Mat yuv;
244 cv::cvtColor(frame, yuv, cv::COLOR_BGR2YUV);
    int h = yuv.rows;
    int w = yuv.cols;
247 cv::imwrite("yuv_balls_01.png", yuv);
    // color for filter
251 cv::Vec3b yuvOrange = cv::Vec3b(128,88,187);
252 cv::Mat gray1(h,w, CV_8UC1);
    // test all pixels
    for (int r = 0; r < h; r++)
    { // get pointers to pixel-row for destination image
256   uchar * pOra = (uchar*) gray1.ptr(r); // gray
      for (int c = 0; c < w; c++)
      { // go through all pixels in this row
        int d;
260     cv::Vec3b p = yuv.at<cv::Vec3b>(r,c);
261     d = uvDistance(p, yuvOrange);
262     *pOra = 255 - d;
        pOra++; // increase to next destination pixel
      }
    }
    // do static threshold at value 230, max is 255, and mode is 3 (zero all pixels below threshold) 
    cv::Mat gray2;
285 cv::threshold(gray1, gray2, 230, 255, 3);
    // remove small items with an erode/dilate
    // last parameter is iterations and could be increased
    cv::Mat gray3, gray4;
290 cv::erode(gray2, gray3, cv::Mat(), cv::Point(-1,-1), 1);
    cv::dilate(gray3, gray4, cv::Mat(), cv::Point(-1,-1), 1);

Line 242 creates an OpenCV image handle called "yuv".

Line 244 converts the fresh image to YUV colour coding; this isolates the brightness in the Y channel and the colour in the two dimensions U and V.

Line 247 saves the YUV image to a file (as if it were a BGR image); this file can be used to find the colour (U and V values) of the ball to be detected.

Line 251 inserts the found colour (determined with an image application from the "yuv_balls_01.png" file).

Line 252 creates a grayscale image of the same size as the original image (gray values from 0 to 255; CV_8UC1 is 8-bit unsigned with one channel).

Line 256 gets a pointer to the first pixel in the current row of the grayscale image (used to write the filtered result).

Line 260 gets the YUV pixel at position (r,c) as a vector with 3 byte-sized values (cv::Vec3b).

Line 261 gets the colour difference between the selected colour (line 251) and the UV value of this pixel, by adding the distance in the U direction to the distance in the V direction:

225 int UVision::uvDistance(cv::Vec3b pix, cv::Vec3b col)
    { /// format is Y,V,U and Y is not used
      int d = abs(pix[1] - col[1]) + abs(pix[2] - col[2]);
      // limited to 255 as described below (completion of the function is assumed)
      return d < 255 ? d : 255;
    }

The output is limited to maximum 255.

Line 262 writes the result to the grayscale image so that a small colour distance becomes white (255).

Line 285 then thresholds the image into a new image called "gray2". Values above 230 (i.e. a UV distance of at most 25 from the selected colour, since the pixel value is 255 minus the distance) are likely to be from the ball colour we are looking for.

Line 290 (and the following dilate) then filters the thresholded image further to remove small isolated items.
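
The bounding boxes mentioned at the top of the function (the 'ballBoundingBox' list cleared in processImage) could then be found from the filtered image with something like this sketch (not the code from the repository; 'ballBoundingBox' is assumed to be a std::vector<cv::Rect>):

 // sketch: find connected components in the filtered image and keep their bounding boxes
 std::vector<std::vector<cv::Point>> contours;
 cv::findContours(gray4, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
 for (auto & contour : contours)
 { // ignore very small detections (the size limit is a guess)
   cv::Rect bb = cv::boundingRect(contour);
   if (bb.width > 10 and bb.height > 10)
     ballBoundingBox.push_back(bb);
 }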


== Camera ==

=== Camera calibration ===

To use the camera to determine distances, calibration is needed.

A rough calibration is used that, in most cases, is sufficient.
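
If a more accurate calibration is wanted, OpenCV can estimate both the camera matrix and the distortion vector from a set of chessboard images. A condensed sketch (not part of the mission software; the chessboard size, square size and the 'calibrationImages' list are assumptions) could be:

 // sketch: calibrate from images of a chessboard with 9x6 inner corners and 25 mm squares
 std::vector<std::vector<cv::Point3f>> objectPoints;  // known 3D corner positions, one set per image
 std::vector<std::vector<cv::Point2f>> imagePoints;   // detected 2D corners, one set per image
 cv::Size patternSize(9, 6);
 for (const cv::Mat & img : calibrationImages)        // 'calibrationImages' loaded elsewhere
 {
   std::vector<cv::Point2f> corners;
   if (cv::findChessboardCorners(img, patternSize, corners))
   { // corners found in this image
     imagePoints.push_back(corners);
     std::vector<cv::Point3f> obj;
     for (int y = 0; y < patternSize.height; y++)
       for (int x = 0; x < patternSize.width; x++)
         obj.push_back(cv::Point3f(x * 0.025f, y * 0.025f, 0.f));
     objectPoints.push_back(obj);
   }
 }
 cv::Mat camMat, distCoef;
 std::vector<cv::Mat> rvecs, tvecs;
 double rms = cv::calibrateCamera(objectPoints, imagePoints, calibrationImages[0].size(),
                                  camMat, distCoef, rvecs, tvecs);
 printf("# calibration reprojection error %.3f pixels\n", rms);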

The camera calibration consists of a camera matrix and a lens distortion vector. This is set in the camera class definition in the file 'ucamera.h':

/** camera matrix is a 3x3 matrix (raspberry PI typical values)
  *    pix    ---1----  ---2---  ---3---   -3D-
  *  1 (x)      980        0       640     (X)
  *  2 (y)       0        980      480     (Y)
  *  3 (w)       0         0        1      (Z)
  * where [1,1] and [2,2] is focal length,
  * and   [1,3] is half width  center column (assuming image is 1280 pixels wide)
  * and   [2,3] is half height center row    (assuming image is 960 pixels high)
  * [X,Y,Z] is 3D position (in camera coordinates: X=right, Y=down, Z=front),
  * [x,y,w] is pixel position for 3D position, when normalized, so that w=1
 */
 const cv::Mat cameraMatrix = (cv::Mat_<double>(3,3) << 
                      980,    0,    640, 
                      0,    980,    480,  
                      0,       0,     1);
 /**
  * camera radial distortion vector 
  * 1 (k1)   0.14738
  * 2 (k2)   0.0117267
  * 3 (p1)   0
  * 4 (p2)   0
  * 5 (k3)  -0.14143
  * where k1, k2 and k3 is radial distortion params
  * and p1, p2 are tangential distortion 
  * see https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html 
  *   */
 const cv::Mat distortionCoefficients = (cv::Mat_<double>(1,5) << 
                      0.14738, 
                      0.0117267, 
                      0,  
                      0, 
                     -0.14143);

The matrix 'cameraMatrix' holds the focal length set to approximately 980 pixels (in this image resolution) and the optical centre of the image set to the geometric centre of the image frame.

The vector 'distortionCoefficients' is set to some values estimated in an earlier student project.
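
As a quick sanity check of these numbers (a worked example, not code from the repository), the pinhole model maps a point in camera coordinates to a pixel as u = f*X/Z + cx and v = f*Y/Z + cy. Ignoring lens distortion, a point 0.1 m to the right of the optical axis and 1 m in front of the camera then lands at:

 // worked example with the camera matrix above (distortion ignored)
 float X = 0.10, Y = 0.0, Z = 1.0;    // camera coordinates [m]: right, down, front
 float u = 980 * X / Z + 640;         // = 738 (image column)
 float v = 980 * Y / Z + 480;         // = 480 (image row, the optical centre row)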

=== Camera coordinate conversion matrix ===

The camera is placed on the robot at some distance from the origin of the robot coordinate system.

The default position and tilt is set in the class definition (uvision.h):

 const float camPos[3] = {0.13,-0.02, 0.23};       // in meters
 const float camTilt = 22 * M_PI / 180; // in radians
 cv::Mat1f camToRobot;

The conversion matrix from camera coordinates to robot coordinates is built in setup() in uvision.cpp:

 float st = sin(camTilt);
 float ct = cos(camTilt);
 camToRobot = (cv::Mat1f(4,4) << ct,  0.f, st, camPos[0],
                                 0.f ,  1.f, 0.f , camPos[1],
                                 -st, 0.f, ct, camPos[2],
                                 0.f ,  0.f, 0.f , 1.f);


This coordinate conversion matrix is used to find the position of an object (e.g. a ball) in robot coordinates, once its position is known in camera coordinates.
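
A minimal sketch of that conversion (assuming the position 'xc, yc, zc' is given in the frame expected by 'camToRobot' and using homogeneous coordinates):

 // sketch: convert a position measured by the camera to robot coordinates
 cv::Mat1f posCam = (cv::Mat1f(4,1) << xc, yc, zc, 1.f);  // xc, yc, zc found by the vision code
 cv::Mat1f posRobot = camToRobot * posCam;                // 4x1 result
 float xr = posRobot(0,0);   // forward [m]
 float yr = posRobot(1,0);   // left [m]
 float zr = posRobot(2,0);   // up [m]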

== ArUco ==

OpenCV has a library function to detect ArUco codes and estimate their position in camera coordinates.

This requires that the camera is calibrated with a camera matrix and a lens distortion vector. These are implemented in the camera class (UCamera.h).

The coordinate system used for detection is camera coordinates (x,y,z), where x is to the right, y is down, and z is forward, with rotations around the same axes.

[[File:i12432_annotated_20200105_091123.850.png | 400px]]

An ArUco marker seen by the robot in my home domain.

=== Example code ===

Example code that extracts the codes and saves the ArUco marker positions in robot coordinates is implemented as the 'ArUcoVals' class in the uaruco.h and uaruco.cpp files. The found values are stored in an array 'arucos' of class objects of type 'ArUcoVal' (also in the uaruco.h and uaruco.cpp files).

The extraction is done in the function:

int ArUcoVals::doArUcoProcessing(cv::Mat frame, int frameNumber, UTime imTime)
{
 cv::Mat frameAnn;
 const float arucoSqaureDimensions = 0.100;      //meters
 vector<int> markerIds;
 vector<vector<cv::Point2f>> markerCorners; //, rejectedcandidates;
 cv::aruco::DetectorParameters parameters;
 // search DICT on docs.opencv.org
 cv::Ptr < cv::aruco::Dictionary> markerDictionary = cv::aruco::getPredefinedDictionary(cv::aruco::PREDEFINED_DICTIONARY_NAME::DICT_4X4_100); 
 // marker position info
 vector<cv::Vec3d> rotationVectors, translationVectors;
 UTime t; // timing calculation (for log)
 t.now(); // start timing
 // clear all flags (but maintain old information)
 setNewFlagToFalse();
 /** Each marker has 4 marker corners into 'markerCorners' in image pixel coordinates (float x,y).
  *  Marker ID is the detected marker ID, a vector of integer.
  * */
 cv::aruco::detectMarkers(frame, markerDictionary, markerCorners, markerIds);
 // marker size is same as detected markers
 if (markerIds.size() > 0)
 { // there are markers in image
   if (debugImages)
   { // make a copy for nice saved images
     frame.copyTo(frameAnn);
     cv::aruco::drawDetectedMarkers(frameAnn, markerCorners, markerIds);
   }
   // 
   cv::aruco::estimatePoseSingleMarkers(markerCorners, 
                                        arucoSqaureDimensions, 
                                        cam->cameraMatrix, 
                                        cam->distortionCoefficients, 
                                        rotationVectors, 
                                        translationVectors);
 }
 else
   printf("# No markers found\n");

The functions 'cv::aruco::detectMarkers(...)' and 'cv::aruco::estimatePoseSingleMarkers(...)' provide the functionality, but they require some data setup to work, as shown above.

All found markers are then saved, and their position and orientation are converted to robot coordinates. Robot coordinates are defined as (x,y,z), where x is forward, y is left and z is up (plus orientation around the same axes). The reference position is the centre between the driving wheels at ground level.

Each marker is then updated with this code:

   ArUcoVal * v = getID(i);
   if (v != NULL)
   { // save marker info and do coordinate conversion for marker
     v->lock.lock();
     v->markerId = i;
     // set time and frame-number for this detection
     v->imageTime = imTime;
     v->frameNumber = frameNumber;
      // v->frame = frame; // might use frame.copyTo(v->frame) instead
     v->rVec = rotationVectors[j];
     v->tVec = translationVectors[j];
     //
     if (debugImages)
     { // show detected orientation in image X:red, Y:green, Z:blue.
       cv::aruco::drawAxis(frameAnn, cam->cameraMatrix, cam->distortionCoefficients, v->rVec, v->tVec, 0.03f); //X:red, Y:green, Z:blue.
     }
     //
     v->markerToRobotCoordinate(cam->cam2robot);
     v->isNew = true;
     v->lock.unlock();

The coordinate conversion is in the 'markerToRobotCoordinate(cv::Mat cam2robot)' function.

This function also determines whether the marker is vertical or horizontal and, based on this, estimates the orientation as one angle around the robot z-axis (up). This can then be used as a destination pose, using the x,y from the marker position and the heading (called 'markerAngle').

The values for a marker are protected by a resource lock, as they are generated and used by different threads.
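
A hypothetical use from a mission step could look like this (the name 'aruco' for the ArUcoVals object and the 'markerAngle' member are assumptions; only members mentioned on this page are used):

 // sketch: read the latest detection of marker 6 (an arbitrary ID) from the mission code
 ArUcoVal * v = aruco.getID(6);
 if (v != nullptr and v->isNew)
 { // lock, since the ArUco thread may update the values at any time
   v->lock.lock();
   float heading = v->markerAngle;   // orientation around the robot z-axis (up)
   v->isNew = false;                 // mark as used (assumed convention)
   v->lock.unlock();
   printf("# marker %d seen, heading %.2f rad\n", v->markerId, heading);
 }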

== Installation ==

=== Get software ===

Get the ROBOBOT software from the svn repository:

svn checkout svn://repos.gbar.dtu.dk/jcan/robobot

Or update if the code is there already (as it is on the Raspberry Pi)

svn up

NB! this will most likely create a conflict if you have changed the code.

=== Compile ===

Build the Makefiles and compile:

cd ~/mission
mkdir -p build
cd build
cmake ..
make -j3

Once the "build" directory is made and "make" is called for the first time, then the last line "make -j3" is needed.

Then test-run the application:

./mission

It should print that the camera is open and the bridge is connected to the REGBOT hardware.
