Welcome to Pacbot!
We are part of RIT's Multi-Disciplinary Robotics Club, a student-led organization that aims to teach, create, and manage robotics projects, using skills learned across electrical, mechanical, and software engineering. Pacbot requires collaboration across all three disciplines.
Our team focuses on developing a small fully autonomous robot to play the arcade game Pac-Man on a physical field. Our annual competition is in April, where we compete against other Pacbot teams from various universities. A picture of our competition board can be found here.
Rust API
Our code is written entirely in Rust, a modern systems programming language focused on speed and memory safety. If you don't know Rust, check out The Rust Book.
You can read more about each crate in the repository Readme.
Auto-generated documentation for the Rust code can be found at https://rit-mdrc.github.io/mdrc-pacbot/api/core_pb/
The rules of the competition can be found here: https://docs.google.com/document/d/1S_LQgnrct9sxjBMF8caiuIUB6S4VU-Ep4F8Lx8vzH5Y/edit?usp=sharing
Grid
- Grid & Coordinate System: The 2D structure `Grid` and the corresponding coordinate system
- Computed Grid: The utility structure `ComputedGrid` built around `Grid` that provides pre-calculated information
Grid & Coordinate System
Grid
Pacman is played on a 2D integer grid, meaning the location of all objects (including walls, ghosts, pellets, and Pacman) can be fully described with two integers.
The `Grid` data type, an alias for `[[bool; GRID_COLS]; GRID_ROWS]`, provides information about which cells are walls.

`Grid` is `GRID_ROWS` x `GRID_COLS` in size. The official Pacbot grid is 28 cells by 31 cells, but this code currently supports up to 32 x 32 grids.

Any coordinate less than (0, 0) or greater than (31, 31) is treated as a wall and is not stored in `Grid`.
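As a rough sketch of how such a grid can be queried (the constants and the `is_wall` helper below are illustrative assumptions, not the crate's real API; only the `[[bool; GRID_COLS]; GRID_ROWS]` alias comes from the description above):

```rust
// Illustrative sketch only: the real crate's constants and indexing order may differ.
const GRID_ROWS: usize = 32;
const GRID_COLS: usize = 32;
type Grid = [[bool; GRID_COLS]; GRID_ROWS];

/// Anything outside (0, 0)..=(31, 31) is treated as a wall, as described above.
fn is_wall(grid: &Grid, x: isize, y: isize) -> bool {
    if x < 0 || y < 0 || x >= GRID_ROWS as isize || y >= GRID_COLS as isize {
        return true;
    }
    grid[x as usize][y as usize]
}
```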
Standard Grids
A number of standard grids are provided in `core_pb::grid::standard_grid`. These include several common configurations:

- `GRID_PACMAN` - The official Pacbot `Grid`
- `GRID_BLANK` - A `Grid` entirely composed of walls (except for the space at (1, 1)), copy-paste-able to create new `Grid`s
- `GRID_OUTER` - A `Grid` with an empty pathway around the inside of the outer edge
- `GRID_PLAYGROUND` - A `Grid` with many small areas for testing motor control algorithms

These can be used as-is or edited to create custom `Grid`s. However, most parts of the application do not support `Grid`s outside the `StandardGrid`s.
Upgrading to ComputedGrid
The `ComputedGrid` struct provides additional pre-calculated information about `Grid`s.

The section Grid Rules contains more information about special rules, like that a `Grid` must have walls around the entire outer edge, among others.

Once a `Grid` is upgraded to a `ComputedGrid` via `ComputedGrid::try_from(grid)`, which is necessary for many other parts of this library, it becomes immutable.
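For example, a standard grid might be upgraded roughly like this (a minimal sketch; the exact module paths, and whether `GRID_PACMAN` is usable directly as a plain `Grid`, are assumptions):

```rust
use core_pb::grid::standard_grid::GRID_PACMAN; // assumed path, see the module above
use core_pb::grid::ComputedGrid;               // assumed path

fn build_computed_grid() {
    // Upgrading validates the Grid Rules and pre-computes extra information.
    // try_from fails if the grid breaks a rule (e.g. missing outer walls).
    let computed = ComputedGrid::try_from(GRID_PACMAN)
        .expect("the official grid satisfies the grid rules");

    // From here on, `computed` is immutable.
    let _ = computed;
}
```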
Coordinates
Throughout `mdrc_pacbot`, we use `nalgebra::Point2` as much as possible to describe location. This has two properties, `x` and `y`. Most of the time, we use the same coordinate system you've used in math class for many years:
+y; 90° (31, 31)
🡑
| (...)
|
(0, 0)-----🡒 +x; 0°
where `+x` and 0° are to the right, `+y` is up, and angle is more positive in the counter-clockwise direction. Using this system, angle functions like sin and cos behave as expected.
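As a tiny illustration of that convention (a sketch, not code from the crate), a unit step along a heading uses `cos` for `x` and `sin` for `y`:

```rust
use nalgebra::{Point2, Vector2};

// Sketch: 0° points along +x and angles increase counter-clockwise,
// so a unit step along a heading is (cos θ, sin θ).
fn step(from: Point2<f32>, angle_rad: f32) -> Point2<f32> {
    from + Vector2::new(angle_rad.cos(), angle_rad.sin())
}
```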
The only downside to this system is that it differs from the official Pacbot Pac-Man implementation. There, they use `row` and `col`, with coordinates in the form `(row, col)`, with the intended orientation like so:
(0, 0) --> +col
| (...)
v
+row (31, 31)
We choose not to do this because it places (0, 0) at the top left, which makes physics calculations confusing. So we rotate the official game by 90° and translate `(row, col)` to `(x, y)`. If you want to view the grid without the rotation, click the "Rotated Grid" checkbox in the GUI.
On the official Pacbot grid (in either rotation):
- (1, 1) is the top left walkable corner
- (1, 26) is the top right walkable corner
- (29, 1) is the bottom left walkable corner
- (29, 26) is the bottom right walkable corner
Pacbot's starting position is (23, 13), facing right.
Physical Coordinates
The integer coordinate system described above is easily extendable to a physical floating point coordinate system.
When points are given as floating point values, integer coordinates (like 1.0, 1.0) represent the center of the corresponding `Grid` cell (in this case, (1, 1)).

Note that walls in the physical Pacbot grid are "eroded" by half a cell so that paths are two `Grid` cells wide. This is why, except for the outer edge, there can't be a wall in `Grid` that is one cell wide - both sides would get eroded by half a cell and there would be an infinitely thin wall.

Curiously, this results in the edges of walls falling on integer coordinates that lie at the center of the `Grid` cells that are declared as walls.
Computed Grid
`ComputedGrid` provides additional information about a `Grid` that is calculated when the `ComputedGrid` is first created, meaning it is very cheap to retrieve.
For this documentation, a "walkable space" is one where Pacman can travel.
Grid Rules
In order for a `Grid` to be successfully upgraded to a `ComputedGrid` via `ComputedGrid::try_from(grid)`, all of the following must be true:

- All the cells around the outside of the grid (with `x` or `y` equal to 0 or 31) are walls
- There is at least one walkable cell (where Pacman can spawn)
- There are no 2x2 empty squares
- There is no wall with a walkable space both above and below it
- There is no wall with a walkable space both to the left and right

The last two points can be simplified to say that all walls are 2 cells thick.

Note:

- It is not necessary that a `ComputedGrid` is connected, or that every walkable space is accessible from every other one
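As an illustration of how one of these rules might be checked (a sketch under the same assumptions as the earlier `Grid` example, not the crate's actual validation code):

```rust
const GRID_ROWS: usize = 32;
const GRID_COLS: usize = 32;
type Grid = [[bool; GRID_COLS]; GRID_ROWS];

/// Sketch of the "no wall with a walkable space both above and below it" rule,
/// assuming grid[x][y] indexing with +y pointing up.
fn no_vertically_thin_walls(grid: &Grid) -> bool {
    for x in 0..GRID_ROWS {
        for y in 1..GRID_COLS - 1 {
            let wall = grid[x][y];
            let open_below = !grid[x][y - 1];
            let open_above = !grid[x][y + 1];
            if wall && open_below && open_above {
                return false; // this wall would be eroded into nothing
            }
        }
    }
    true
}
```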
Pre-computed Information
When a `ComputedGrid` is constructed, it spends extra time calculating a number of variables related to the game grid. This information is documented in the ComputedGrid documentation.
---
title: RIT MDRC Pacbot Network Architecture (github.com/RIT-MDRC/mdrc-pacbot)
---
flowchart LR
subgraph Legend
direction TB
Y(GUI):::gui
Z(Game Server):::game_server
X(Robot):::robot
end
%% ensure the legend is vertical
Y ~~~ A
subgraph "mdrc-pacbot"
A(WASM GUIs):::gui & B(Rust GUIs):::gui <-->|Websocket| D{MDRC Pacbot Server\nserver_pb}
D <-->|TCP| E(Raspberry Pi Picos):::robot
D <-->|TCP| F(Simulated Robots):::robot
D <-->|Websocket| J(Simulation Manager)
I(Rust Game Server):::game_server <-->|Websocket| D
subgraph gui_pb
A
B
end
subgraph pico_pb
E
end
subgraph sim_pb
I
J
F
end
end
subgraph "Pacbot-2 (github.com/Pacbot-Competition/Pacbot-2)"
C(Competition GUIs):::gui <-->|Websocket| G(Official Go Game Server):::game_server
end
C <-->|Websocket| I
G <-->|Websocket| D
classDef gui stroke:#f00
classDef game_server stroke:#0f0
classDef robot stroke:#ff0
Localization
The RIT Pacbot team's localization requires 4 sensors, each facing in a cardinal direction (right, up, left, down). The sensors have a range that extends half the length of the longest corridor in the grid.
General Steps
- First Pass Prediction
- Second Pass Prediction
- Combine Directional Information
First Pass Prediction
The `cv_location` passed into the localizer function is a rough estimate of where pacbot is on the grid. To get an initial estimate of where pacbot is based on sensor data, a raycast is performed in each of the cardinal directions from the `cv_location`.
fn get_sim_ray_cast(loc: Point2<i8>, grid: &Grid, radius: f32) -> [f32; 4] {
    // For each cardinal direction, step one cell at a time until a wall is hit,
    // then subtract the robot's radius so the result is comparable to a distance
    // sensor reading taken from the robot's edge.
    VECTORS.map(|dir| {
        let mut dist: i8 = 0;
        let mut p = loc;
        let dir = dir.map(|x| x as i8);
        while !wall_at(grid, p) {
            p += dir;
            dist += 1;
        }
        dist as f32 - radius
    })
}
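The `VECTORS` constant used above is not shown in these snippets; a plausible definition, matching the (right, up, left, down) sensor order described earlier, would look something like this (an assumption, not the crate's actual code):

```rust
use nalgebra::Vector2;

// Assumed definition: unit vectors for the four sensors in the order
// right, up, left, down, matching the indices used for poses[0..4].
const VECTORS: [Vector2<f32>; 4] = [
    Vector2::new(1.0, 0.0),  // +x, right
    Vector2::new(0.0, 1.0),  // +y, up
    Vector2::new(-1.0, 0.0), // -x, left
    Vector2::new(0.0, -1.0), // -y, down
];
```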
For the first prediction, an assumption is made: the walls that are hit by the actual sensors are the same walls hit by the corresponding raycasts from `cv_location`. This assumption is not always true given the inaccuracy of `cv_location`, but can be fixed in the second pass prediction. Knowing this, each of the sensors can give an estimate of pacbot's location on the corresponding axis by simply adding the raycast vector and subtracting the sensor vector.
fn get_estimated_poses(
    grid: &Grid,
    cv_location: Point2<i8>,
    distance_sensors: &[Result<Option<f32>, ()>; 4],
    radius: f32,
) -> [Option<Point2<f32>>; 4] {
    // Distances a perfect sensor would measure if the robot sat exactly at cv_location
    let cv_distances = get_sim_ray_cast(cv_location, grid, radius);
    let cv_location = cv_location.map(|x| x as f32);
    [0, 1, 2, 3].map(|i| {
        // For each working sensor that sees a wall, shift cv_location along that
        // sensor's direction by the difference between simulated and measured distance
        distance_sensors[i]
            .ok()
            .flatten()
            .map(|dist| cv_location + (VECTORS[i] * (cv_distances[i] - dist)))
    })
}
Although the first prediction may be incorrect, the information from the estimated poses allows for error correction.
Second Pass Prediction
Once an estimated position is created from each of the sensors, each axis is checked to determine if it
actually hit the wall that was hit by the cv_location
raycast. The check is done by observing if the
predicted position is different from cv_location
by more than CV_ERROR
. If the estimation is off
by more than CV_ERROR
for both directions along an axis (right and left or up and down), then we must perform the second pass operation.
Before describing how the second pass works, it is important to understand that if one of the axis
has incorrect values for both sensors, then the other axis must have a correct value. This property
is a product of the grid, which requires pacbot to travel in a cardinal direction and also prevents it from
travelling diagonally through any corridors. Requiring at least one sensor to hit the same wall as
the cv_location
raycast is essential for the second pass localization to correct the errors from
the first pass.
For the second pass, the program must find a position to raycast from in which the rays hit the same walls that the sensors do. To do so, we check the position given by the axis that gives a position within `CV_ERROR`. If this position has a greater value along the axis than the `cv_location`, then we must raycast again from one unit above the original `cv_location` on the corresponding axis. If the position has a lower value along the axis than the `cv_location`, then we must raycast again from one unit below the original `cv_location` on the corresponding axis. Once `new_location` is found, repeat the operations of the first pass with `new_location` in place of `cv_location`. Since the assumption that was made is now true (the rays hit the same walls as the sensors), the operation should give the correct position.
// If both x-axis estimates (right/left sensors) are missing or more than CV_ERROR
// from cv_location, their rays likely hit different walls than expected; nudge the
// raycast origin one cell along y, guided by a trusted y-axis estimate, and retry.
if [poses[0], poses[2]].iter().all(|x| {
    x.map(|pos| get_dist(pos, cv_location_f32) > CV_ERROR)
        .unwrap_or(true)
}) {
    let mut new_location = cv_location_int;
    if let Some(pos) = [poses[1], poses[3]].into_iter().flatten().next() {
        if pos.y > cv_location_f32.y {
            new_location.y += 1;
        } else {
            new_location.y -= 1;
        }
        poses = get_estimated_poses(&grid, new_location, distance_sensors, robot.radius);
    }
}
// Same correction for the y-axis estimates (up/down sensors), nudging along x.
if [poses[1], poses[3]].iter().all(|x| {
    x.map(|pos| get_dist(pos, cv_location_f32) > CV_ERROR)
        .unwrap_or(true)
}) {
    let mut new_location = cv_location_int;
    if let Some(pos) = [poses[0], poses[2]].into_iter().flatten().next() {
        if pos.x > cv_location_f32.x {
            new_location.x += 1;
        } else {
            new_location.x -= 1;
        }
        poses = get_estimated_poses(&grid, new_location, distance_sensors, robot.radius);
    }
}
Combine Directional Information
The program integrates all of the sensor data by simply using one of the two sensors for each axis. It prioritizes the values closest to `cv_location` and values that are not `None` or `Err()` for each axis. If both sensors are not functional or do not detect a wall, that axis will remain whatever is given by `cv_location`.
// For each axis, take the estimate closest to cv_location; if neither sensor on
// that axis produced an estimate, fall back to cv_location itself.
let x = [poses[0], poses[2]]
    .into_iter()
    .flatten()
    .min_by_key(|pos| {
        NotNan::new(get_dist(*pos, cv_location_f32)).unwrap_or(NotNan::new(0.0).unwrap())
    })
    .unwrap_or(cv_location_f32)
    .x;
let y = [poses[1], poses[3]]
    .into_iter()
    .flatten()
    .min_by_key(|pos| {
        NotNan::new(get_dist(*pos, cv_location_f32)).unwrap_or(NotNan::new(0.0).unwrap())
    })
    .unwrap_or(cv_location_f32)
    .y;