Shubham Kedia

Hey, This is Shubham

With a mind engineered for robotics, vision, and machine learning, and a knack for maneuvering through complex algorithms, I bring expertise in research and innovation. Let's navigate the realm of autonomous systems with a touch of humor and a dash of machine magic!

About Me

I'm fueled by endless cups of coffee and an insatiable curiosity for pushing the boundaries of technology. So, buckle up and join me on this exhilarating journey through the world of robots!

Get to know me!

Hey! It's Shubham Kedia, and I'm a Roboticist based in Illinois, USA. With a strong foundation in Robotics and Software Development, I have undertaken various projects, including developing a Visual-LiDAR Fusion SLAM system, integrating perception and planning for autonomous vehicle navigation, and implementing deep learning-based modules for multi-object 3D detection and tracking. My experience also extends to working as a Graduate Research Assistant at the Intelligent Motion Lab, University of Illinois Urbana-Champaign.

Additionally, I have more than four years of industry experience as a Senior Engineer at Mahindra & Mahindra Pvt. Ltd., contributing to software development, modeling, and state estimation. Proficient in programming languages such as Python and C++, I possess the technical skills necessary to excel in the field. I am driven by a passion for innovation and a desire to contribute to the advancement of robotics and automation. Feel free to contact me here.

Contact

My Skills

C
C++
Python
Linux
MATLAB
ROS
GIT
Eigen
GTSAM
Open3D
Pytorch
NVIDIA Jetson
dSPACE
OOP
SLAM
Sensor Fusion
Artificial Intelligence (AI)
HIL

Projects

Below are the details of the projects in which I have actively participated.


Visual-LiDAR sensor fusion for adaptive and robust Simultaneous Localization and Mapping

A Simultaneous Localization and Mapping (SLAM) system for unstructured environments, capable of dynamically handling LiDAR-degenerate scenarios and visually challenging conditions through sensor fusion. The fusion approach uses Pose Graph Optimization (PGO), incorporating adaptive fusion weights generated by a Deep Neural Network (DNN) model. The architecture of the DNN model draws inspiration from introspective learning principles employed in vision systems.
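As a simplified sketch of the adaptive-fusion idea: a full pose-graph optimizer fuses measurements through information-weighted factors, but the effect of DNN-generated weights can be illustrated by blending two relative-pose estimates. Function names and the weight values below are hypothetical, not the actual implementation:

```python
import numpy as np

def fuse_relative_poses(lidar_delta, visual_delta, w_lidar, w_visual):
    """Blend two 2D relative-pose estimates (x, y, yaw) with adaptive
    weights -- a stand-in for the information-weighted factors a pose-graph
    optimizer would use. Averaging yaw directly is only valid for small
    angle differences, as is typical between consecutive frames."""
    w = np.array([w_lidar, w_visual])
    w = w / w.sum()  # normalize so the weights sum to one
    return w[0] * np.asarray(lidar_delta) + w[1] * np.asarray(visual_delta)

# Example: LiDAR is degenerate (low weight), so the visual estimate dominates.
fused = fuse_relative_poses([1.0, 0.0, 0.05], [0.8, 0.1, 0.02],
                            w_lidar=0.1, w_visual=0.9)
```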

Case Study

Integrated Perception and Planning

An integrated framework for autonomous navigation that combines perception and planning using optimization techniques. This framework enables real-time state estimation and path planning with exceptional precision and reliability. In the perception system, a cost map is generated using the Euclidean Distance Transform (EDT), which effectively represents environmental constraints. To accomplish real-time trajectory optimization, we formulate a non-linear optimization problem with the cost function and solve it using a direct collocation method.
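A minimal sketch of the cost-map step, assuming a binary occupancy grid and an exponential cost falloff with distance to the nearest obstacle; the function name and parameter values are illustrative, not the actual implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edt_cost_map(occupancy, obstacle_cost=10.0, decay=1.0):
    """Build a cost map from a binary occupancy grid (1 = obstacle).
    The Euclidean Distance Transform gives each cell its distance to the
    nearest obstacle; cost decays exponentially with that distance."""
    # distance_transform_edt measures, for each nonzero cell, the distance
    # to the nearest zero cell -- so mark free space nonzero, obstacles zero.
    dist = distance_transform_edt(occupancy == 0)
    return obstacle_cost * np.exp(-decay * dist)

grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1  # a single obstacle in the middle
cost = edt_cost_map(grid)
# Cost peaks at the obstacle cell and falls off toward the corners.
```

A trajectory optimizer can then sample this cost map along candidate states, which is how the environmental constraints enter the non-linear cost function.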

Case Study

Multi-Object 3D Detection and Tracking using Deep Learning

A dynamic object tracking module that monitors and tracks different objects, including cars and pedestrians, using frames captured from a vehicle's perspective. Detections obtained from the PointRCNN model are processed with a Kalman filter. The approach was implemented and evaluated on the KITTI dataset, achieving a high Multiple Object Tracking Accuracy (MOTA) score of 0.7806 for the car class.
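The Kalman-filter stage can be sketched as follows. This is a minimal 1-D constant-velocity filter for a single track, not the actual 3D implementation; the class name and noise values are illustrative:

```python
import numpy as np

class KalmanTrack:
    """Minimal constant-velocity Kalman filter for one tracked object.
    A real 3D tracker would carry a larger state (position, size, yaw)."""
    def __init__(self, x0, dt=0.1):
        self.x = np.array([x0, 0.0])                 # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = 0.01 * np.eye(2)                    # process noise
        self.R = np.array([[0.1]])                   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

track = KalmanTrack(x0=0.0)
for z in [0.1, 0.2, 0.3, 0.4]:   # detections drifting steadily forward
    track.predict()
    track.update(np.array([z]))
# The filter infers a positive velocity from the forward-drifting detections.
```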

Case Study

Reinforcement Learning-based Vehicle Controller for Vehicle-Pedestrian Interactions

We investigate the impact of pedestrians' reactive behaviors on a learning-based vehicle controller. We utilize a model-free algorithm, specifically an actor-critic approach with proximal policy optimization, to train a safe vehicle policy. This algorithm has demonstrated promising results in effectively handling various pedestrian dynamics. The experiments are conducted on different classes and sub-classes of pedestrian behaviors. In the animation, the vehicle is represented by a red rectangle, while the pedestrians with diverse reactive dynamics are depicted as colored dots. The simulation environment used for this study is OpenAI Gym.
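The core of proximal policy optimization is its clipped surrogate objective, which keeps each policy update conservative. A minimal NumPy sketch (the function name and example values are illustrative):

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """PPO's clipped surrogate objective: the probability ratio between the
    new and old policies is clipped to [1 - eps, 1 + eps], so an update
    cannot profit from moving the policy too far in one step."""
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # PPO maximizes the elementwise minimum; negate it to get a loss.
    return -np.mean(np.minimum(unclipped, clipped))

# A doubled action probability with positive advantage is clipped at 1 + eps.
loss = ppo_clip_loss(new_logp=np.log([2.0]), old_logp=np.log([1.0]),
                     advantages=np.array([1.0]))
```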

Case Study

Motion Primitives Based Kinodynamic RRT for Autonomous Vehicle Navigation

A SLAM-assisted navigation module for an autonomous vehicle operating under unknown dynamics. The navigation objective entails reaching a desired goal configuration along a collision-free trajectory while respecting the system's dynamics. To achieve this, we utilize LiDAR-based Hector SLAM to perform mapping of the environment, obstacle detection, and tracking of the vehicle's conformance to the trajectory during various states. For motion planning, we employ rapidly exploring random trees (RRTs) on a set of generated motion primitives. This allows us to explore and search for dynamically feasible trajectory sequences that satisfy the system's constraints and ensure collision-free paths to the goal configuration.
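A minimal sketch of the primitive-based tree expansion: sample a point, find the nearest node in the tree, and extend it with the motion primitive that gets closest to the sample. The primitives below are hypothetical, and collision checking is omitted for brevity:

```python
import math
import random

# Hypothetical motion primitives: (forward distance, heading change).
PRIMITIVES = [(1.0, 0.0), (1.0, math.pi / 6), (1.0, -math.pi / 6)]

def apply_primitive(state, prim):
    """Roll a (x, y, heading) state forward through one motion primitive."""
    x, y, th = state
    d, dth = prim
    th = th + dth
    return (x + d * math.cos(th), y + d * math.sin(th), th)

def kinodynamic_rrt(start, goal_xy, n_iters=500, goal_tol=1.0, seed=0):
    """Grow a tree of dynamically feasible states until one lands within
    goal_tol of the goal position (collision checking omitted)."""
    rng = random.Random(seed)
    tree = [start]
    for _ in range(n_iters):
        sample = (rng.uniform(-10, 10), rng.uniform(-10, 10))
        # Nearest tree node to the sample, by squared planar distance.
        nearest = min(tree, key=lambda s: (s[0] - sample[0]) ** 2
                                        + (s[1] - sample[1]) ** 2)
        # Extend with whichever primitive ends closest to the sample.
        new = min((apply_primitive(nearest, p) for p in PRIMITIVES),
                  key=lambda s: (s[0] - sample[0]) ** 2
                              + (s[1] - sample[1]) ** 2)
        tree.append(new)
        if (new[0] - goal_xy[0]) ** 2 + (new[1] - goal_xy[1]) ** 2 < goal_tol ** 2:
            return new, len(tree)
    return None, len(tree)

goal_state, n_nodes = kinodynamic_rrt(start=(0.0, 0.0, 0.0), goal_xy=(5.0, 5.0))
```

Because each edge is a primitive rollout rather than a straight line, every path through the tree is dynamically feasible by construction.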

Case Study

Optimal Control using Physics Informed Neural Networks (PINNs)

The Physics-Informed Neural Network (PINN) method is used to learn the Hamilton-Jacobi-Bellman (HJB) PDE for optimal control problems, and the learned solution can be deployed in real time for trajectory optimization. The animation illustrates the solution for the non-linear differential drive model, where the green car is the initial state and the blue car is the goal. The red line represents the trajectory from the HJB PINN method, while the black line is from the shooting method. PINNs provide real-time optimal control solutions over long time horizons, significantly faster than the shooting method: the shooting method requires approximately 400 seconds to compute 500 time-steps, whereas the proposed method takes only 0.3 seconds for the same computation (using only the CPU).
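For reference, the value function \(V(x, t)\) learned by the PINN satisfies the HJB PDE, written here in a generic form (the symbols are standard, not specific to the differential-drive model):

```latex
\frac{\partial V}{\partial t}(x, t)
  + \min_{u} \Big[ \nabla_x V(x, t)^{\top} f(x, u) + \ell(x, u) \Big] = 0,
\qquad V(x, T) = \phi(x),
```

where \(f\) is the system dynamics, \(\ell\) the running cost, and \(\phi\) the terminal cost. The network is trained by minimizing the residual of this PDE at sampled collocation points, which is what makes the learned controller cheap to evaluate at deployment time.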

Case Study

Hardware-in-Loop + Driver-in-Loop Simulation

This work was done during my professional experience. The objective was to create a Hardware-in-the-Loop (HIL) simulation ecosystem for a plug-in hybrid vehicle based on the P4 architecture. The dSPACE SCALEXIO real-time platform was utilized to simulate various aspects of the vehicle, including the vehicle systems, vehicle dynamics, and internal combustion (IC) powertrain. The HIL system was also interfaced with a real electric drive powertrain and EV controllers. This ecosystem supports dual inputs: one from the driver for driver-in-loop simulation and another from the SCALEXIO platform for automated testing and software development purposes.

Case Study

Contact

Kindly complete this form, and I will contact you at the earliest opportunity.