Our particle filter implementation followed a structured approach to localization. The filter began by generating a set of randomly distributed particles across the map, with a higher concentration around the initial estimate of the robot's position. Each particle represented a candidate pose (position and orientation) of the robot. As the robot moved through its environment, we used odometry data to update each particle's pose, propagating every particle as if it were the actual robot and adding motion noise to account for odometry inaccuracy.
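A minimal NumPy sketch of these two steps is shown below; the function names, spread values, and noise parameters are illustrative assumptions rather than values taken from the actual implementation.

```python
import numpy as np

def initialize_particles(n, initial_pose, spread=(0.5, 0.5, 0.2)):
    """Draw n particles around an initial [x, y, theta] estimate.

    The spread values (meters, meters, radians) are assumed; a real filter
    would tune them to the confidence in the initial pose.
    """
    x0, y0, th0 = initial_pose
    particles = np.empty((n, 3))
    particles[:, 0] = np.random.normal(x0, spread[0], n)
    particles[:, 1] = np.random.normal(y0, spread[1], n)
    particles[:, 2] = np.random.normal(th0, spread[2], n)
    return particles

def motion_update(particles, delta, noise_std=(0.02, 0.02, 0.01)):
    """Apply an odometry delta (dx, dy, dtheta in the robot frame) to every
    particle, adding Gaussian noise so the cloud spreads with motion uncertainty.
    """
    dx, dy, dtheta = delta
    n = len(particles)
    # Per-particle noisy copies of the measured motion.
    noisy_dx = dx + np.random.normal(0, noise_std[0], n)
    noisy_dy = dy + np.random.normal(0, noise_std[1], n)
    noisy_dth = dtheta + np.random.normal(0, noise_std[2], n)

    theta = particles[:, 2]
    # Rotate the body-frame translation into each particle's map frame.
    particles[:, 0] += noisy_dx * np.cos(theta) - noisy_dy * np.sin(theta)
    particles[:, 1] += noisy_dx * np.sin(theta) + noisy_dy * np.cos(theta)
    # Wrap headings back into [-pi, pi).
    particles[:, 2] = (theta + noisy_dth + np.pi) % (2 * np.pi) - np.pi
    return particles
```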
For each particle, we simulated what the robot's LIDAR scan would look like if the robot were at that particle's pose. By comparing these simulated scans with the actual LIDAR readings, we assigned each particle a weight reflecting how closely the two matched. Once all particles were weighted, we performed a resampling step in which particles were selected with probability proportional to their weights, so higher-weighted particles were more likely to survive into the next iteration.
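The sketch below shows one simple way to express the weighting and resampling steps; it assumes a `simulate_scan(pose)` helper that ray-casts against the map, and the Gaussian range-error model and `sigma` value are illustrative assumptions, not the exact likelihood used in the project.

```python
import numpy as np

def measurement_update(particles, actual_ranges, simulate_scan, sigma=0.2):
    """Weight each particle by how well its simulated scan matches the real one.

    simulate_scan(pose) is assumed to return an array of expected ranges,
    one per LIDAR beam, for a given [x, y, theta] pose.
    """
    weights = np.empty(len(particles))
    for i, pose in enumerate(particles):
        expected = simulate_scan(pose)
        error = actual_ranges - expected
        # Gaussian likelihood of the observed ranges given the expected ones.
        weights[i] = np.exp(-0.5 * np.sum(error ** 2) / sigma ** 2)
    weights += 1e-300  # guard against an all-zero weight vector
    return weights / np.sum(weights)

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight."""
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy()
```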
The final estimate of the robot's pose was calculated as a weighted average of the particle poses, providing a continuous, smooth estimate of the robot's location within the map. We also implemented functionality to "reset" the filter with a new initial pose estimate when necessary, using RViz's initial-pose tool (shown as a green arrow) to re-initialize the particle distribution. The entire process was visualized in RViz, showing the particle cloud, laser scans, and estimated robot position in real time, which was crucial for debugging and demonstrating the effectiveness of the algorithm.
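A possible form of that weighted average is sketched below; note that the heading must be averaged circularly (via sine and cosine components) so that angles near +/- pi do not cancel each other out. The reset behavior can then simply re-run the initialization step with the pose received from RViz.

```python
import numpy as np

def estimate_pose(particles, weights):
    """Weighted mean of the particle cloud.

    x and y use an ordinary weighted average; theta uses a circular mean
    so headings that wrap around +/- pi average correctly.
    """
    x = np.average(particles[:, 0], weights=weights)
    y = np.average(particles[:, 1], weights=weights)
    sin_th = np.average(np.sin(particles[:, 2]), weights=weights)
    cos_th = np.average(np.cos(particles[:, 2]), weights=weights)
    theta = np.arctan2(sin_th, cos_th)
    return np.array([x, y, theta])
```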