Welcome to Pacbot!
We are part of RIT's Multi-Disciplinary Robotics Club, a student-led organization that aims to teach, create, and manage robotics projects, using skills learned across electrical, mechanical, and software engineering. Pacbot requires collaboration across all three disciplines.
Our team focuses on developing a small fully autonomous robot to play the arcade game Pac-Man on a physical field. Our annual competition is in April, where we compete against other Pacbot teams from various universities. A picture of our competition board can be found here.
The History of the RIT Pacbot Team
The Rochester Institute of Technology (RIT) Pacbot team is a part of the RIT Robotics Club, also known as MDRC. The team participates in the annual Pacbot competition, where autonomous robots navigate a maze to collect pellets and evade or capture ghosts. This document chronicles the history of the RIT Pacbot team, drawing from a team meeting held on May 2, 2025, following the competition hosted at RIT on April 19, 2025.
The RIT Pacbot team's participation in the competition dates back to its early years. Quinn Tucker, who joined RIT in the fall of 2019, recalled being told about the team's unexpected victory in the 2019 Pacbot competition. Ben Giacalone, also a freshman in 2019, corroborated this, describing it as a win achieved through "complete accident." At that time, the robots from various teams were generally unreliable, and their artificial intelligence (AI) capabilities were rudimentary, primarily relying on heuristic approaches. The RIT robot that secured the first-place finish was described as being hastily assembled the night before the competition and barely functional.
Following this surprising success, the team faced challenges in subsequent years. The onset of the COVID-19 pandemic in 2020 significantly disrupted the competition schedule, with the Pacbot competition being canceled in 2020 and likely in 2021 as well. This period of uncertainty led to demotivation among some team members. Despite the lack of competitions, the team continued to work on their robot and AI, though progress on the physical robot was slow, with early prototypes being simple box-shaped designs that lacked effective movement.
The team's efforts gained renewed momentum in 2023 when the Pacbot competition was reinstated. Ben Giacalone, who had been exploring reinforcement learning (RL) algorithms, rejoined the team with the goal of applying these techniques to the robot's high-level AI. After considerable experimentation, the team achieved a breakthrough, developing an RL-based AI that outperformed their previous heuristic methods. This marked a significant advancement in the team's approach to the competition.
In the fall of 2023, Quinn Tucker undertook an independent study under Dr. Zhao, focusing on implementing various RL algorithms for the Pacbot challenge. This effort, partly motivated by gaining academic credit for his work on the team, saw the exploration of Deep Q Network (DQN), Proximal Policy Optimization (PPO), and AlphaZero algorithms. While AlphaZero proved computationally intensive, the DQN agent ultimately demonstrated the best performance and became the foundation for the team's high-level policy in subsequent competitions. Ben Giacalone played a crucial role in guiding Quinn with the fundamentals of reinforcement learning during this period.
Michael Elia joined the team as a freshman in 2022. Under the leadership of Christian Guerrero, the team initially focused on heuristic-based AI. However, progress in this area was disappointing, with the models frequently encountering simple scenarios they couldn't navigate effectively. The team did attempt to build a robot that year, but it remained unfinished due to incomplete electronics.
The 2023-2024 academic year saw Michael, Quinn, and Ben Giacalone, all software-focused, attempt to build a robot. This endeavor proved unsuccessful, with the resulting robot unable to move. This experience highlighted the necessity of dedicated hardware expertise within the team.
The arrival of Brian Doolin, with a background in BattleBots, proved to be a turning point for the team's hardware capabilities. Brian introduced Onshape, a collaborative CAD software, to streamline the robot design process. He was tasked with creating the smallest possible robot, leading to the current three-wheeled omni-drive design, which fits within a five-inch diameter circle. This design prioritized compactness to navigate the seven-inch wide passages of the competition arena, a lesson learned from past robots getting stuck. The robot features a top shell, a mount for a screen (used for software debugging), a battery, and three motors in a triangular configuration to ensure consistent ground contact and encoder accuracy. The team utilizes Pico boards and a custom-designed PCB, referred to as a "hat," for motor control, power distribution, and sensor connections. This PCB, designed by Michael Elia using EasyEDA software, significantly improved the organization and reduced the size of the robot's electronics compared to earlier wire-based setups and proto boards.
The robot is equipped with several types of sensors. Encoders on each wheel provide data on wheel speed and position. Initially, the team used infrared distance sensors, which were unreliable. They then transitioned to time-of-flight sensors, which offered improved accuracy. The current model of time-of-flight sensors can detect objects as close as one millimeter, though their effective range is limited by the cone of light they emit hitting the arena floor. To address the robot's orientation, the team incorporated an Inertial Measurement Unit (IMU) featuring an accelerometer, gyroscope, and magnetometer. While the magnetometer proved too noisy for reliable use, the accelerometer and gyroscope provide accurate rotational data, crucial for localization.
Localization, determining the robot's position within the maze, has been a significant challenge. In 2023-2024, Quinn Tucker developed a particle filter-based localization system implemented in Rust. While theoretically sound, this approach proved computationally expensive and struggled with inaccuracies stemming from the computer vision (CV) system's lag in tracking the robot's position. Despite this limitation, the robot performed well in simulation and even achieved second place in one practice run at the 2024 competition, scoring higher than any previous team. However, the CV lag hampered its performance in official runs.
Following the 2024 competition, Michael Elia initiated a significant restructuring of the team's codebase, breaking it into modular crates within the MDRC Pacbot repository on GitHub. The particle filter was not ported to this new system due to its performance issues. Instead, the team shifted towards a stateless localization approach for the 2024-2025 season. Ben Brodeur played a key role in developing this new system, which initially involved raycasting based on CV location and sensor readings. However, this was prone to errors when the robot's rotation deviated. The final stateless localization method adopted involved comparing sensor readings to expected values at discrete locations within the maze grid and selecting the most likely location based on proximity to the CV data and a point system. This simpler, computationally efficient algorithm could run directly on the robot, eliminating the delay associated with off-robot processing.
The 2025 competition, hosted at RIT, saw the team achieve first place; UIUC, who had won the competition they hosted, did not attend the RIT event. The team also showcased their custom-built competition field, constructed from MDF boards using the club's CNC router.
Looking to the future, Ben Brodeur, the team lead for the next year, outlined several areas for improvement. Enhancing the accuracy of localization remains a priority, potentially by re-exploring state-based methods to address sensor range limitations in long corridors. Improving low-level motor control by incorporating feed-forward terms alongside PID control is also seen as important for smoother movements and preventing the robot from getting stuck.
The team also discussed potential hardware upgrades. While a suspension system was deemed overly complex, exploring flexible 3D-printed mounts or rubber dampeners could help mitigate vibrations. Considering motors with perpendicular axles could further reduce the robot's footprint. The reliability of the current brushed DC motors, which occasionally fail due to brush disintegration, is also a factor to consider for future iterations.
A recurring "bad idea," jokingly raised by Quinn Tucker, was to train the reinforcement learning model directly on the physical robot instead of in simulation, a proposition deemed impractical due to the potential for damage and inefficiency. Another explored but ultimately discarded concept was to have the reinforcement learning model control the low-level motor commands directly. The consensus was that hand-coded motor control, possibly incorporating model-based control techniques, would be more effective.
The team acknowledges the significant progress made over the years, from a barely functional robot to a competition-winning design and the ability to host their own event. The current success is attributed to the dedication and hard work of numerous team members, particularly the graduating seniors Michael Elia and Brian Doolin, as well as the contributions of alumni like Quinn Tucker and Ben Giacalone. As Michael Elia prepares to hand over the team leadership to Ben Brodeur, there is a sense of pride in past achievements and optimism for the future of the RIT Pacbot team. The wealth of knowledge and experience accumulated within the team, documented in their GitHub repository and the memories of its members, will serve as a valuable resource for future generations of RIT students who take on the challenge of autonomous Pacbot navigation.
Rust API
Our code is written entirely in Rust, a modern systems programming language focused on speed and memory safety. If you don't know Rust, check out The Rust Book.
You can read more about each crate in the repository Readme.
Auto-generated documentation for the Rust code can be found at https://rit-mdrc.github.io/mdrc-pacbot/api/core_pb/
The rules of the competition can be found here: https://docs.google.com/document/d/1S_LQgnrct9sxjBMF8caiuIUB6S4VU-Ep4F8Lx8vzH5Y/edit?usp=sharing
Grid

- Grid & Coordinate System: The 2D structure `Grid` and the corresponding coordinate system
- Computed Grid: The utility structure `ComputedGrid`, built around `Grid`, that provides pre-calculated information
Grid & Coordinate System

Grid

Pacman is played on a 2D integer grid, meaning the location of all objects (including walls, ghosts, pellets, and Pacman) can be fully described with two integers.

The `Grid` data type, an alias for `[[bool; GRID_COLS]; GRID_ROWS]`, provides information about which cells are walls. `Grid` is `GRID_ROWS` x `GRID_COLS` in size. The official Pacbot grid is 28 cells x 31 cells, but this code currently supports up to 32 x 32 grids. Any coordinate less than (0, 0) or greater than (31, 31) is treated as a wall, and is not stored in `Grid`.
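As an illustration, a bounds-checked wall lookup over this alias might look like the sketch below. A `wall_at` helper appears in the localization code later in these docs, but this body and its indexing order are assumptions for the example, not necessarily the real `core_pb` implementation.

```rust
use nalgebra::Point2;

const GRID_ROWS: usize = 32;
const GRID_COLS: usize = 32;
type Grid = [[bool; GRID_COLS]; GRID_ROWS];

// Sketch: coordinates outside (0, 0)..=(31, 31) count as walls, matching
// the rule above. The indexing order here is an assumption.
fn wall_at(grid: &Grid, p: Point2<i8>) -> bool {
    if p.x < 0 || p.y < 0 || p.x > 31 || p.y > 31 {
        return true;
    }
    grid[p.x as usize][p.y as usize]
}
```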
Standard Grids

A number of standard grids are provided in `core_pb::grid::standard_grid`. These include several common configurations:

- `GRID_PACMAN` - The official Pacbot `Grid`
- `GRID_BLANK` - A `Grid` entirely composed of walls (except for the space at (1, 1)), copy-paste-able to create new `Grid`s
- `GRID_OUTER` - A `Grid` with an empty pathway around the inside of the outer edge
- `GRID_PLAYGROUND` - A `Grid` with many small areas for testing motor control algorithms

These can be used as-is or edited to create custom `Grid`s. However, most parts of the application do not support `Grid`s outside the `StandardGrid`s.
Upgrading to ComputedGrid

The `ComputedGrid` struct provides additional pre-calculated information about `Grid`s. The section Grid Rules contains more information about special rules, like that a `Grid` must have walls around the entire outer edge, among others.

Once a `Grid` is upgraded to a `ComputedGrid` via `ComputedGrid::try_from(grid)`, which is necessary for many other parts of this library, it becomes immutable.
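A minimal sketch of the upgrade, assuming `ComputedGrid` is importable from `core_pb` alongside the standard grids (check the API docs linked above for the exact paths):

```rust
use core_pb::grid::standard_grid::GRID_PACMAN;
// use core_pb::grid::ComputedGrid; // assumed path; check the API docs

fn main() {
    // try_from validates the Grid Rules below, returning an Err if any fail.
    let computed = ComputedGrid::try_from(GRID_PACMAN)
        .expect("the official grid satisfies the Grid Rules");
    // `computed` is immutable; edit the Grid before upgrading, not after.
}
```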
Coordinates

Throughout `mdrc_pacbot`, we use `nalgebra::Point2` as much as possible to describe location. This has two properties, `x` and `y`. Most of the time, we use the same coordinate system you've used in math class for many years:
```
+y; 90°            (31, 31)
🡑
|       (...)
|
(0, 0) -----🡒 +x; 0°
```
where `+x` and 0° point to the right, `+y` points up, and angles increase in the counter-clockwise direction. Using this system, angle functions like sin and cos behave as expected.
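For example, stepping a distance `r` at heading `theta` needs no sign flips or axis swaps. This small helper is purely illustrative, not part of the codebase:

```rust
use nalgebra::{Point2, Vector2};

// With +x right, +y up, and counter-clockwise-positive angles,
// cos gives the x component and sin gives the y component directly.
fn step(p: Point2<f32>, theta: f32, r: f32) -> Point2<f32> {
    p + Vector2::new(theta.cos(), theta.sin()) * r
}

fn main() {
    let origin = Point2::new(0.0_f32, 0.0);
    // Heading 90° (PI/2 radians) moves straight up the +y axis.
    println!("{}", step(origin, std::f32::consts::FRAC_PI_2, 1.0)); // ≈ (0, 1)
}
```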
The only downside to this system is that it differs from the official Pacbot Pac-Man implementation. There, they use `row` and `col`, with coordinates in the form `(row, col)`, with the intended orientation like so:
```
(0, 0) --> +col
|       (...)
v
+row               (31, 31)
```
We choose not to do this because it places (0, 0) at the top left, which makes physics calculations confusing. So we rotate the official game by 90° and translate `(row, col)` to `(x, y)`. If you want to view the grid without the rotation, click the "Rotated Grid" checkbox in the GUI.
On the official Pacbot grid (in either rotation):

- (1, 1) is the top left walkable corner
- (1, 26) is the top right walkable corner
- (29, 1) is the bottom left walkable corner
- (29, 26) is the bottom right walkable corner

Pacbot's starting position is (23, 13), facing right.
Physical Coordinates

The integer coordinate system described above extends naturally to a physical floating point coordinate system. When points are given as floating point values, integer coordinates (like (1.0, 1.0)) represent the center of the corresponding `Grid` cell (in this case, (1, 1)).

Note that walls in the physical Pacbot grid are "eroded" by half a cell so that paths are two `Grid` cells wide. This is why, except for the outer edge, there can't be a wall in `Grid` that is one cell wide - both sides would get eroded by half a cell, leaving an infinitely thin wall. Curiously, this erosion means the physical edges of walls fall on integer coordinates, at the centers of the `Grid` cells that are declared as walls.
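Since cell centers sit at integer physical coordinates, converting a physical point back to its `Grid` cell is just rounding each component. A tiny hypothetical helper:

```rust
use nalgebra::Point2;

// The nearest Grid cell to a physical point: round each component,
// because cell (1, 1)'s center is at physical (1.0, 1.0).
fn nearest_cell(p: Point2<f32>) -> Point2<i8> {
    Point2::new(p.x.round() as i8, p.y.round() as i8)
}
```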
Computed Grid

`ComputedGrid` provides additional information about a `Grid` that is calculated when the `ComputedGrid` is first created, meaning it is very cheap to retrieve.

For this documentation, a "walkable space" is one where Pacman can travel.
Grid Rules

In order for a `Grid` to be successfully upgraded to a `ComputedGrid` via `ComputedGrid::try_from(grid)`, all of the following must be true:

- All the cells around the outside of the grid (with `x` or `y` equal to 0 or 31) are walls
- There is at least one walkable cell (where Pacman can spawn)
- There are no 2x2 empty squares
- There is no wall with a walkable space both above and below it
- There is no wall with a walkable space both to its left and right

The last two points can be simplified to say that all walls are at least 2 cells thick. A sketch of two of these checks follows the note below.

Note:

- It is not necessary that a `ComputedGrid` is connected, or that every walkable space is accessible from every other one
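A minimal sketch of two of the rules as plain array checks. These helpers are assumptions for illustration; `ComputedGrid::try_from` is the real validator.

```rust
const N: usize = 32; // grid dimension, per the docs above
type Grid = [[bool; N]; N];

// Rule: all cells on the outer edge (x or y equal to 0 or 31) are walls.
fn outer_edge_is_walls(grid: &Grid) -> bool {
    (0..N).all(|i| grid[0][i] && grid[N - 1][i] && grid[i][0] && grid[i][N - 1])
}

// Rule: no 2x2 region is entirely walkable (no 2x2 empty squares).
fn no_empty_2x2(grid: &Grid) -> bool {
    (0..N - 1).all(|r| {
        (0..N - 1).all(|c| grid[r][c] || grid[r + 1][c] || grid[r][c + 1] || grid[r + 1][c + 1])
    })
}
```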
Pre-computed Information

When a `ComputedGrid` is constructed, it spends extra time calculating a number of variables related to the game grid. This information is documented in the ComputedGrid documentation.
```mermaid
---
title: RIT MDRC Pacbot Network Architecture (github.com/RIT-MDRC/mdrc-pacbot)
---
flowchart LR
subgraph Legend
direction TB
Y(GUI):::gui
Z(Game Server):::game_server
X(Robot):::robot
end
%% ensure the legend is vertical
Y ~~~ A
subgraph "mdrc-pacbot"
A(WASM GUIs):::gui & B(Rust GUIs):::gui <-->|Websocket| D{MDRC Pacbot Server\nserver_pb}
D <-->|TCP| E(Raspberry Pi Picos):::robot
D <-->|TCP| F(Simulated Robots):::robot
D <-->|Websocket| J(Simulation Manager)
I(Rust Game Server):::game_server <-->|Websocket| D
subgraph gui_pb
A
B
end
subgraph pico_pb
E
end
subgraph sim_pb
I
J
F
end
end
subgraph "Pacbot-2 (github.com/Pacbot-Competition/Pacbot-2)"
C(Competition GUIs):::gui <-->|Websocket| G(Official Go Game Server):::game_server
end
C <-->|Websocket| I
G <-->|Websocket| D
classDef gui stroke:#f00
classDef game_server stroke:#0f0
classDef robot stroke:#ff0
```
Localization

The RIT Pacbot team's localization requires 4 sensors, each facing in a cardinal direction (right, up, left, down). The sensors have a range that extends at least half the length of the longest corridor of the grid.
General Steps
- First Pass Prediction
- Second Pass Prediction
- Combine Directional Information
First Pass Prediction

The `cv_location` passed into the localizer function is a rough estimate of where Pacbot is on the grid. To get an initial estimate of where Pacbot is based on sensor data, a raycast is performed in each of the cardinal directions from the `cv_location`.
```rust
fn get_sim_ray_cast(loc: Point2<i8>, grid: &Grid, radius: f32) -> [f32; 4] {
    // For each cardinal direction, march cell by cell until a wall is hit,
    // then subtract the robot's radius to match what a sensor would read.
    VECTORS.map(|dir| {
        let mut dist: i8 = 0;
        let mut p = loc;
        let dir = dir.map(|x| x as i8);
        while !wall_at(grid, p) {
            p += dir;
            dist += 1;
        }
        dist as f32 - radius
    })
}
```
For the first prediction, an assumption is made: the walls that are hit by the actual sensors are the same walls hit by the corresponding raycasts from `cv_location`. This assumption is not always true, given the inaccuracy of `cv_location`, but it can be corrected in the second pass prediction. Knowing this, each of the sensors can give an estimate of Pacbot's location on the corresponding axis by simply adding the raycast vector and subtracting the sensor vector.
```rust
fn get_estimated_poses(
    grid: &Grid,
    cv_location: Point2<i8>,
    distance_sensors: &[Result<Option<f32>, ()>; 4],
    radius: f32,
) -> [Option<Point2<f32>>; 4] {
    let cv_distances = get_sim_ray_cast(cv_location, grid, radius);
    let cv_location = cv_location.map(|x| x as f32);
    // Each working sensor yields a position estimate along its axis: shift
    // cv_location by (expected raycast distance - measured distance).
    [0, 1, 2, 3].map(|i| {
        distance_sensors[i]
            .ok()
            .flatten()
            .map(|dist| cv_location + (VECTORS[i] * (cv_distances[i] - dist)))
    })
}
```
Although the first prediction may be incorrect, the information from the estimated poses allows for error correction.
Second Pass Prediction

Once an estimated position is created from each of the sensors, each axis is checked to determine whether its sensors actually hit the wall that was hit by the `cv_location` raycast. The check is done by observing whether the predicted position differs from `cv_location` by more than `CV_ERROR`. If the estimation is off by more than `CV_ERROR` for both directions along an axis (right and left, or up and down), then we must perform the second pass operation.
Before describing how the second pass works, it is important to understand that if one of the axes has incorrect values for both sensors, then the other axis must have a correct value. This property is a product of the grid, which requires Pacbot to travel in a cardinal direction and prevents it from travelling diagonally through any corridors. Requiring at least one sensor to hit the same wall as the `cv_location` raycast is essential for the second pass localization to correct the errors from the first pass.
For the second pass, the program must find a position to raycast from in which the rays hit the same walls that the sensors do. To do so, we check the position given by the axis that produced a position within `CV_ERROR`. If this position has a greater value along that axis than `cv_location`, then we must raycast again from one unit above the original `cv_location` on the corresponding axis. If the position has a lower value along that axis than `cv_location`, then we must raycast again from one unit below the original `cv_location` on the corresponding axis. Once `new_location` is found, repeat the operations of the first pass with `new_location` in place of `cv_location`. Since the assumption that was made is now true (the rays hit the same walls as the sensors), the operation should give the correct position.
```rust
// If both x-axis estimates (right and left sensors) disagree with
// cv_location by more than CV_ERROR, shift one cell along y (using a
// trusted y-axis estimate to pick the direction) and raycast again.
if [poses[0], poses[2]].iter().all(|x| {
    x.map(|pos| get_dist(pos, cv_location_f32) > CV_ERROR)
        .unwrap_or(true)
}) {
    let mut new_location = cv_location_int;
    if let Some(pos) = [poses[1], poses[3]].into_iter().flatten().next() {
        if pos.y > cv_location_f32.y {
            new_location.y += 1;
        } else {
            new_location.y -= 1;
        }
        poses = get_estimated_poses(&grid, new_location, distance_sensors, robot.radius);
    }
}
// Symmetric case: both y-axis estimates are off, so shift one cell along x.
if [poses[1], poses[3]].iter().all(|x| {
    x.map(|pos| get_dist(pos, cv_location_f32) > CV_ERROR)
        .unwrap_or(true)
}) {
    let mut new_location = cv_location_int;
    if let Some(pos) = [poses[0], poses[2]].into_iter().flatten().next() {
        if pos.x > cv_location_f32.x {
            new_location.x += 1;
        } else {
            new_location.x -= 1;
        }
        poses = get_estimated_poses(&grid, new_location, distance_sensors, robot.radius);
    }
}
```
Combine Directional Information

The program integrates all of the sensor data by simply using one of the two sensors for each axis. It prioritizes the values closest to `cv_location` and values that are not `None` or `Err(())` for each axis. If both sensors on an axis are not functional or do not detect a wall, that axis remains whatever is given by `cv_location`.
```rust
// For each axis, take the estimate closest to cv_location, falling
// back to cv_location itself if neither sensor produced an estimate.
let x = [poses[0], poses[2]]
    .into_iter()
    .flatten()
    .min_by_key(|pos| {
        NotNan::new(get_dist(*pos, cv_location_f32)).unwrap_or(NotNan::new(0.0).unwrap())
    })
    .unwrap_or(cv_location_f32)
    .x;
let y = [poses[1], poses[3]]
    .into_iter()
    .flatten()
    .min_by_key(|pos| {
        NotNan::new(get_dist(*pos, cv_location_f32)).unwrap_or(NotNan::new(0.0).unwrap())
    })
    .unwrap_or(cv_location_f32)
    .y;
```
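The two chosen components then make up the final estimate (a one-line illustration using the variables above):

```rust
// Combine the per-axis picks into the robot's most likely location.
let estimated_location = Point2::new(x, y);
```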