LattePanda 3 Delta project - building a 2DOF ball balancing robotic platform with ROS2


This article shows how to use the LattePanda to build a 2DOF ball balancing robotic platform with ROS2.

(Original article by amgalbu)


This is the second blog of the LattePanda 3 Delta review. The goal of this post is to show how the LattePanda can be used to build robotics applications.
The project I have in mind is a 2DOF ball balancing platform.

The basic concept is:

1. a webcam captures the image of the platform, where a ball is free to move

2. through video analysis, the current position of the ball is detected

3. a control loop moves two servos to tilt the platform, keeping the ball at the center of the platform or (eventually) making it follow a predefined path


Since this is a typical robotics application, this is a good opportunity to learn something that is new to me but very widespread in the robotics field: ROS2


What is ROS2

ROS stands for Robot Operating System. This is a set of software libraries and tools for building robot applications. The project started in 2007, and since then a wealth of drivers, state-of-the-art algorithms and developer tools have been added.
At the base of the ROS2 architecture is the concept of "node".
A node in ROS is responsible for a single, modular purpose (e.g. one node for controlling wheel motors, one node for controlling a laser range-finder, etc.). Each node can send and receive data to and from other nodes via topics, services, actions, or parameters.

Topics are a vital element of the ROS graph that act as a bus for nodes to exchange messages. A node may publish data to any number of topics and simultaneously have subscriptions to any number of topics.

Topics are one of the main ways in which data is moved between nodes and therefore between different parts of the system.
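To make the pattern concrete, here is a minimal publish/subscribe bus in plain C++. This is a conceptual sketch only: TopicBus and its methods are illustrative names, not the rclcpp API; the point is that publishers and subscribers only share a topic name, never a direct reference to each other.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal illustration of topic-based decoupling: nodes exchange
// messages through a named topic instead of calling each other directly.
class TopicBus {
public:
    using Callback = std::function<void(const std::string&)>;

    // A node registers interest in a topic.
    void subscribe(const std::string& topic, Callback cb) {
        subscribers_[topic].push_back(std::move(cb));
    }

    // A node publishes a message; every subscriber of that topic is notified.
    void publish(const std::string& topic, const std::string& msg) {
        for (auto& cb : subscribers_[topic]) cb(msg);
    }

private:
    std::map<std::string, std::vector<Callback>> subscribers_;
};
```

In real ROS2 code the same roles are played by `rclcpp::Publisher` and `rclcpp::Subscription`, with typed messages instead of raw strings.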


The nodes

In this project, I will build three nodes:

1. the video processing node: this node will capture data from the webcam, detect the ball position and publish the coordinates to the topic "panda_pos"

2. the path planner node: this node will calculate the desired position of the ball at a certain moment in time and publish such coordinates to the topic "panda_path"

3. the arduino bridge node: this node will subscribe to both the "panda_pos" and "panda_path" topics and simply forward the data to the ATMEGA32U4 microcontroller through the serial connection


The ATMEGA32U4 microcontroller will read data (current ball position and desired ball position) from the serial line and apply a PID control loop to minimize the error (i.e. desired ball position - current ball position). The output of the PID is the angle to apply to the servos to correct such an error.
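The control law can be sketched as a single discrete update step. This is a plain C++ illustration of the math, not the code used in the sketch (which relies on a PID library); it assumes the conventional error definition, setpoint minus measurement.

```cpp
// One discrete PID step. 'integral' and 'prevError' persist between calls.
struct PidState {
    double integral = 0.0;
    double prevError = 0.0;
};

double pidStep(double setpoint, double measurement,
               double kp, double ki, double kd,
               double dt, PidState& s) {
    double error = setpoint - measurement;           // desired - current position
    s.integral += error * dt;                        // accumulate past error
    double derivative = (error - s.prevError) / dt;  // rate of change of the error
    s.prevError = error;
    // The output is the corrective term applied to the servo angle.
    return kp * error + ki * s.integral + kd * derivative;
}
```

With only a proportional gain (ki = kd = 0), the output is simply kp times the position error.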


Having a clear goal in mind, we can start working with ROS2 on the LattePanda 3 Delta board.


Installing ROS2

First of all, let's install the ROS2 platform. This is very easy since we are running Ubuntu 22.04, so I just had to go through the tutorial at the following URL.


Just to recap, here are the commands to type in a terminal


1. Enable Ubuntu Universe repository
sudo apt install software-properties-common
sudo add-apt-repository universe


2. Add ROS2 GPG key

sudo apt update && sudo apt install curl
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg


3. Add repository to sources list

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null


4. Install ROS2

sudo apt update
sudo apt install ros-humble-desktop


5. Install colcon (build tool)

sudo apt install python3-colcon-common-extensions


Creating the package

To create a package, we first need to create a workspace. A ROS workspace is a directory with a particular structure. Commonly there is a src subdirectory, where the source code of the ROS packages is located. Typically the directory starts otherwise empty.
colcon performs out-of-source builds. By default it will create the following directories as peers of the src directory:


1. The build directory will be where intermediate files are stored. For each package a subfolder will be created in which e.g. CMake is being invoked.

2. The install directory is where each package will be installed to. By default each package will be installed into a separate subdirectory.

3. The log directory contains various logging information about each colcon invocation.


mkdir -p ~/ros2_ws/src
cd ~/ros2_ws


Creating the package and the first node

To create the package and the first node (the path planner) I entered the following commands


cd ~/ros2_ws/src
ros2 pkg create --build-type ament_cmake --node-name panda_path panda_acrobat


This creates a package with a sample node that we can build and run. Since I am going to develop in C++, I selected ament_cmake as the build type. Another option available in ROS2 is to write Python code. To build the node, run


cd ~/ros2_ws
colcon build


To run the node, we first need to properly set up the environment. colcon automatically generates a bash file with all the environment variables to set.


. install/local_setup.bash


Finally, we can run the node


ros2 run panda_acrobat panda_path


which will print on the terminal


hello world panda_acrobat package


Now all the boilerplate is ready and we can start coding the nodes


Video processing node


The video processing node uses OpenCV to track the ball (I started from this nice blog). The OpenCV VideoCapture class handles image acquisition from the video input device. Once we have the image frame, we convert it from BGR (OpenCV's default channel order) to HSV, because HSV is a little easier to handle when we begin thresholding the colors of the ball later.


// Convert BGR (OpenCV's default channel order) to HSV
// and apply a blur to reduce noise
Mat hsvFrame;
cvtColor(frame, hsvFrame, cv::COLOR_BGR2HSV);

Applying a small blur will help reduce the noise in the image and improve the tracking accuracy (note that a 1 x 1 kernel is effectively a no-op; a slightly larger kernel such as 3 x 3 is usually more effective).

blur(hsvFrame, hsvFrame, cv::Size(1, 1));

We are finally ready to threshold the image. The inRange function sets any pixel within the range to 255 and any pixel outside the range to 0. This results in a black-and-white image of the ball.

Scalar lowerBound = cv::Scalar(55, 100, 50);
Scalar upperBound = cv::Scalar(90, 255, 255);
Mat threshFrame;
inRange(hsvFrame, lowerBound, upperBound, threshFrame);

Now that we have a black-and-white image, we need to find the center of the ball. OpenCV includes a function known as moments that can automatically calculate the centroid of the binary image.

//Calculate X,Y centroid
Moments m = moments(threshFrame, false);
Point com(m.m10 / m.m00, m.m01 / m.m00);

Finally, let's just draw a marker over the centroid and show the image.

//Draw crosshair
Scalar color = cv::Scalar(0, 0, 255);
drawMarker(frame, com, color, cv::MARKER_CROSS, 50, 5);

imshow("Tennis Ball", frame);
imshow("Thresholded Tennis Ball", threshFrame);

Path planner node

The first version of the path planner node will simply return the coordinates of the center of the image grabbed by the webcam. So the code for this node is extremely simple: I create a timer that, every 500 ms, publishes the coordinates of the center of the image.
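What that timer callback computes can be sketched in plain C++ (plannedSetpoint is an illustrative name, and the frame size is an assumption; in the real node the result would be packed into a message and published on "panda_path"):

```cpp
#include <utility>

// First version of the planner: the setpoint is always the center of the
// camera frame. A later version could walk the ball along a predefined path.
std::pair<int, int> plannedSetpoint(int frameWidth, int frameHeight) {
    return {frameWidth / 2, frameHeight / 2};
}
```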


Arduino bridge node

The Arduino bridge subscribes to the above-mentioned topics (panda_pos and panda_path) and forwards the data received on those topics to the ATMEGA32U4 microcontroller. The messages sent are human-readable strings with the following format


· to send the current ball position:
C<X position>;<Y position>\n

· to send the desired ball position:
P<X position>;<Y position>\n
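These serial messages can be built with a simple formatting helper. This is a minimal plain C++ sketch (makeMessage is an illustrative name, not taken from the original code):

```cpp
#include <cstdio>
#include <string>

// Build one serial message: 'C' for current position, 'P' for desired
// position, followed by "<x>;<y>" and a terminating newline.
std::string makeMessage(char type, int x, int y) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%c%d;%d\n", type, x, y);
    return buf;
}
```

For example, a ball detected at (120, 85) would be reported as the string "C120;85\n".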


Arduino sketch

The Arduino sketch performs two main tasks:

1. read data sent by the Arduino bridge node on the serial line

2. run the PID control loop


To write and download the Arduino sketch, simply install the Arduino IDE on the LattePanda board and work as if you had an Arduino Leonardo board connected on the serial port named /dev/ttyACM0. As simple as that!


The only annoying thing is that the Arduino IDE needs administrative privileges to access the serial port, so you need to launch the IDE from a terminal with sudo (alternatively, you can add your user to the dialout group)


cd arduino-1.8.19
sudo ./arduino


Reading data from Arduino bridge

To read the data sent by the ROS2 node, I implemented a simple Finite State Machine, as per the diagram below



The state machine waits for either a 'C' (for current position) or a 'P' (for desired position). When one of these characters is received, the state machine waits for one or more '0'...'9' characters for the X coordinate. When the ';' character is received, the state machine waits for one or more '0'...'9' characters for the Y coordinate. Finally, when a Carriage Return or Line Feed character is received, the message is complete and the state machine returns to the initial state.


Controlling servos

To implement PID, I installed the PID library by Brett Beauregard, which can be installed from the Library Manager of the Arduino IDE.



This is the definition and initialization of PID objects


#define OUTPUT_MIN    -127
#define OUTPUT_MAX    128

PID xPID(&xPos, &xOutput, &xPath, 0.1, 0.5, 0, DIRECT);
PID yPID(&yPos, &yOutput, &yPath, 0.1, 0.5, 1, DIRECT);

void setup() {
  // clamp the PID outputs and enable automatic mode
  xPID.SetOutputLimits(OUTPUT_MIN, OUTPUT_MAX);
  yPID.SetOutputLimits(OUTPUT_MIN, OUTPUT_MAX);
  xPID.SetMode(AUTOMATIC);
  yPID.SetMode(AUTOMATIC);
}

Servos are connected to pins 5 and 6, according to the diagram below




#define XSERVO_PIN 5
#define YSERVO_PIN 6

Servo xServo;
Servo yServo;

void setup() {
  // attach the servos to their PWM pins
  xServo.attach(XSERVO_PIN);
  yServo.attach(YSERVO_PIN);
}

PIDs (and consequently the servos) are updated 10 times per second.

#define UPDATE_PERIOD_MS 100

void writeServos() {
  long delta = millis() - updateMillis;
  if (delta < UPDATE_PERIOD_MS)
    return;

  // map xOutput to angle and drive the X servo
  double xAngle = map(xOutput, OUTPUT_MIN, OUTPUT_MAX, ANGLE_MIN, ANGLE_MAX);
  xServo.write(xAngle);

  Serial.print("X Servo: ");
  Serial.print(xOutput);
  Serial.print(" -> ");
  Serial.println(xAngle);

  // map yOutput to angle and drive the Y servo
  double yAngle = map(yOutput, OUTPUT_MIN, OUTPUT_MAX, ANGLE_MIN, ANGLE_MAX);
  yServo.write(yAngle);

  Serial.print("Y Servo: ");
  Serial.print(yOutput);
  Serial.print(" -> ");
  Serial.println(yAngle);

  updateMillis = millis();
}

Mechanical construction 

Here are some images of the plate where the ball moves 

Launch the ROS2 application


To launch the application


1. Open the Arduino IDE and download the sketch

2. Open a new terminal and launch the panda_path node
cd ~/ros2_ws
. install/local_setup.bash
ros2 run panda_acrobat panda_path


This node publishes the desired position (in this example, the center of the image)


3. Open a new terminal and launch the panda_cam node
cd ~/ros2_ws
. install/local_setup.bash
ros2 run panda_acrobat panda_cam


This node processes the image from the webcam, detects the presence of a yellow blob (as you can see in the two top windows) and publishes the coordinates of the center of the blob.


4. Open a new terminal and launch the panda_arduino node
cd ~/ros2_ws
. install/local_setup.bash
ros2 run panda_acrobat panda_arduino


This node subscribes to the topics published by the panda_path and panda_cam nodes and forwards these values to the integrated Arduino Leonardo through the serial line. In the screenshot, "Path" is the value received from the panda_path topic; "Pos" is the value received from the panda_pos topic.


Source code

(Preliminary) source code is available on my GitHub.



This project confirms the first impression I had during my review of the LattePanda 3 Delta SBC: it's an incredible board that can bridge the gap between desktop computing and robotics. You have all the power of a complete x86 desktop at your fingertips AND, at the same time, the hard real-time capabilities provided by the Arduino platform. To be honest, after some tweaking, it would have been great if the LattePanda 3 Delta had two additional features:


1. Arduino-compatible pin headers. The LattePanda 3 Delta provides the Arduino signals on a single-line header strip. To connect an Arduino shield, you need some wiring

2. A more powerful microcontroller. The target of the LattePanda 3 Delta SBC is amateurs and makers, but you can build quite advanced applications on this platform. These applications, in my opinion, may require a more powerful microcontroller and, in my dreams, also an FPGA to implement logic when the required performance cannot be met by the microcontroller


Apart from these nice-to-have features, I am very glad to have been awarded the opportunity to roadtest this platform. It will be the core of one of my future projects!