Robotics Software Engineer specializing in perception-driven autonomy, semantic mapping, and intelligent control. Focused on developing systems that combine vision, optimization, and learning for real-world robotic applications.
A curated selection of my robotics research and engineering work, focused on real-time control, 3D perception, optimization, and applied AI-driven autonomy.
A hybrid semantic exploration framework for multi-object search with persistent memory, integrating vision-language models, semantic mapping, and frontier-based navigation for intelligent exploration and reasoning.
SAGE (Semantic-Aware Guided Exploration) is a framework designed for multi-object search in unknown environments using persistent 3D semantic memory.
It combines exploration, semantic understanding, and memory-based reasoning to enable robots to search and identify objects efficiently using open-vocabulary prompts.
The system integrates multiple AI and robotics components, including vision-language models, persistent 3D semantic mapping, and frontier-based navigation.
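A core step in this kind of pipeline is ranking frontier candidates by how semantically promising they are for the current prompt. The sketch below is illustrative only, not SAGE's actual scoring function: it assumes precomputed embedding vectors (e.g. from a CLIP-style model) for each frontier region and for the open-vocabulary prompt, and the weights `w_sem` and `w_dist` are made-up tuning parameters.

```python
import numpy as np

def score_frontiers(frontiers, frontier_embeds, prompt_embed,
                    robot_xy, w_sem=1.0, w_dist=0.1):
    """Rank frontier candidates by a weighted sum of semantic similarity
    to the prompt embedding minus a travel-cost term (illustrative)."""
    scores = []
    for xy, emb in zip(frontiers, frontier_embeds):
        # Cosine similarity between frontier and prompt embeddings
        sim = float(np.dot(emb, prompt_embed) /
                    (np.linalg.norm(emb) * np.linalg.norm(prompt_embed)))
        # Euclidean travel cost from the robot to the frontier
        dist = float(np.linalg.norm(np.asarray(xy) - np.asarray(robot_xy)))
        scores.append(w_sem * sim - w_dist * dist)
    best = int(np.argmax(scores))
    return best, scores
```

In practice the similarity would come from cross-modal fusion over the persistent semantic memory rather than a single embedding per frontier.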
Evaluation:
To validate SAGE, its object detection and mapping accuracy are compared against 3D semantic segmentation produced by OpenFusion over the same semantic classes.
Performance is measured using Success Rate (SR) and Success weighted by Path Length (SPL) metrics for single and multi-object search tasks.
The project is currently under active development, with further experiments in semantic fusion, frontier optimization, and real-world deployment in progress.
SAGE introduces a semantic exploration architecture that fuses frontier-based exploration, 3D mapping, and vision-language models into a unified pipeline for open-vocabulary multi-object search.
Through persistent semantic memory and cross-modal fusion, it enables robots to recall, reason, and plan toward objects intelligently during long-term autonomous missions.
A modular ROS 2 reinforcement-learning framework built for real-time robotics applications, enabling vectorized training, live introspection, and plug-in environments for reproducible DRL research.
A modular ROS 2 Deep Reinforcement Learning (DRL) framework developed as a commissioned project to provide a standardized, extensible platform for end-to-end learning in robotics.
The goal was to lower the entry barrier for students and research teams by enabling quick prototyping, reproducible training, and real-time introspection within ROS 2.
The framework integrates tightly with Stable-Baselines3 and supports plug-in-based environments, allowing new tasks to be added without modifying the RL core.
It comes with practical examples (CarRacing, LunarLander, CartPole) and extensive documentation covering observation/action space design, reward shaping, and hyperparameter tuning.
It also supports vectorized environments for parallel training and can introspect live ROS 2 topics during learning, enabling developers to visualize and debug agent behavior in real time.
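A plug-in environment in such a framework would implement the standard Gymnasium `reset`/`step` contract so that Stable-Baselines3 can train on it unchanged. The toy task below is purely illustrative (the class name and reward values are invented, and the real framework's registration mechanism is not shown):

```python
class GoalReachEnv:
    """Minimal 1-D task following the Gymnasium step/reset API:
    action 1 moves right, action 0 moves left; reach `goal` to finish."""

    def __init__(self, goal=5):
        self.goal = goal
        self.state = 0

    def reset(self, seed=None):
        # Gymnasium-style reset: returns (observation, info)
        self.state = 0
        return self.state, {}

    def step(self, action):
        # Gymnasium-style step: (obs, reward, terminated, truncated, info)
        self.state += 1 if action == 1 else -1
        terminated = self.state == self.goal
        reward = 1.0 if terminated else -0.01
        return self.state, reward, terminated, False, {}
```

Because the interface is the standard one, wrapping several such instances in a vectorized environment for parallel rollout collection requires no changes to the task code itself.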
The framework emphasizes reproducibility, scalability, and transparency, making it an ideal foundation for both industrial and educational reinforcement-learning applications.
Built and tested under ROS 2 Jazzy with CUDA-enabled PyTorch 2.2 for GPU training.
A professionally developed ROS 2 reinforcement learning framework unifying algorithm design, training, and evaluation in robotics.
It bridges educational usability and research-grade scalability, empowering students, researchers, and engineers to prototype and deploy intelligent robotic behaviors efficiently.
A high-precision nonlinear control system for differential-drive robots that predicts future motion and optimizes control inputs over a finite horizon, enabling smooth, constraint-aware trajectory tracking.
A nonlinear Model Predictive Controller (nMPC) based local planner developed for a Differential Drive Mobile Robot (DDMR).
Unlike conventional reactive planners, the nMPC predicts future robot states through a kinematic model and optimizes control inputs over a finite horizon.
The controller minimizes a cost function while enforcing hard constraints on obstacle clearance, velocity, and input bounds.
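The prediction step rests on the unicycle kinematic model: given a sequence of (v, ω) inputs, the robot's future poses are integrated forward over the horizon. A plain-NumPy sketch of that rollout (in the actual planner these dynamics are symbolic CasADi expressions and IPOPT enforces the constraints; `dt` here is an arbitrary sample time):

```python
import numpy as np

def rollout(x0, controls, dt=0.1):
    """Predict future (x, y, theta) states of a differential-drive robot
    under a sequence of (v, omega) inputs via Euler integration."""
    x, y, th = x0
    states = []
    for v, w in controls:
        x += v * np.cos(th) * dt   # forward motion along heading
        y += v * np.sin(th) * dt
        th += w * dt               # heading change from angular velocity
        states.append((x, y, th))
    return np.array(states)
```

The MPC cost is then evaluated on these predicted states, and the optimizer searches over the control sequence subject to the clearance, velocity, and input bounds.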
The implementation leverages CasADi for symbolic modeling and IPOPT as the underlying nonlinear solver.
The planner was benchmarked against standard ROS local planners.
Results demonstrate smoother, dynamically feasible trajectories, particularly in cluttered or narrow environments.
The entire system was simulated in Gazebo using a TurtleBot, with a GPU-enabled Docker container for reproducibility.
See also the Optimization Lab – PyTorch-based MPC (ROS 2) for a lightweight educational re-implementation using PyTorch instead of CasADi.
A high-performance nonlinear MPC for mobile robots using CasADi and IPOPT, delivering smooth, constraint-aware motion planning and serving as a foundation for subsequent PyTorch-based re-implementations in ROS 2.
An educational ROS 2 lab demonstrating real-time control through gradient-based optimization with PyTorch, teaching how to implement MPC without external solvers.
A PyTorch-based Model Predictive Control (MPC) framework developed as part of a university optimization lab, demonstrating how numerical optimization can be applied to control and planning problems in robotics.
Unlike the earlier CasADi-based MPC, this version leverages PyTorch autograd and optimizers (Adam/LBFGS) directly, without relying on external NLP solvers, to teach students how to formulate and solve control problems from first principles.
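The core idea can be sketched in a few lines: treat the control sequence as a tensor with `requires_grad=True`, roll the unicycle model forward inside the computation graph, and let Adam minimize the tracking cost by backpropagating through the rollout. This is a minimal sketch under assumed dynamics and cost weights, not the lab's actual `mpc_local_planner` code:

```python
import torch

def solve_mpc(x0, goal, horizon=20, iters=100, dt=0.1, lr=0.1):
    """Gradient-based MPC: optimize (v, omega) over the horizon with Adam,
    differentiating through a unicycle rollout (no external NLP solver)."""
    u = torch.zeros(horizon, 2, requires_grad=True)  # decision variables
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x, y, th = x0
        cost = torch.tensor(0.0)
        for v, w in u:
            # Differentiable unicycle dynamics
            x = x + v * torch.cos(th) * dt
            y = y + v * torch.sin(th) * dt
            th = th + w * dt
            # Quadratic tracking cost plus a small control-effort penalty
            cost = cost + (x - goal[0])**2 + (y - goal[1])**2 \
                        + 1e-3 * (v**2 + w**2)
        cost.backward()
        opt.step()
    return u.detach()
```

Swapping Adam for `torch.optim.LBFGS` (with a closure) gives the second solver variant the lab covers; constraints are handled by penalty terms rather than the hard bounds IPOPT provides.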
Developed as a commissioned project, the lab provides a complete ROS 2 Jazzy package (mpc_local_planner) that serves as both a tutorial and a working local planner.
It includes comprehensive documentation explaining how the control problem is formulated and solved from first principles.
Built and tested under ROS 2 Jazzy using CUDA-enabled PyTorch 2.2.
A university lab project showcasing optimization for robotics using PyTorch as a numerical solver.
It bridges classical control and differentiable programming by re-implementing nMPC entirely in PyTorch, illustrating how learning-based and optimization-based control can converge within modern ROS 2 pipelines.
Designed and implemented an automated force–displacement measurement system using a UR10 robot, FT sensor, and RGB-D visualization, enabling reproducible AIRSKIN pad calibration.
A collaborative project with Blue Danube Robotics – AIRSKIN developed at UAS Technikum Vienna to automate tactile pad sensitivity measurements.
The system measures the force and displacement required to trigger an AIRSKIN pad at defined grid points. From this, the spring constant and local sensitivity are derived to detect mechanical weak points and support further product development.
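With force-displacement pairs recorded at each grid point, the spring constant follows from a least-squares fit of F = k·x through the origin. The helper and the measurement values below are hypothetical, shown only to illustrate the derivation:

```python
import numpy as np

def spring_constant(displacements_mm, forces_n):
    """Least-squares fit of F = k * x through the origin.
    Returns k in N/mm: k = (x . F) / (x . x)."""
    x = np.asarray(displacements_mm, dtype=float)
    f = np.asarray(forces_n, dtype=float)
    return float(np.dot(x, f) / np.dot(x, x))

# Hypothetical readings at one grid point of the pad
k = spring_constant([0.5, 1.0, 1.5, 2.0], [1.1, 2.0, 3.1, 3.9])
```

Grid points whose fitted k deviates strongly from their neighbors would then flag candidate mechanical weak points.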
Built entirely with ROS Noetic and Docker, the system integrates a UR10 robot arm, a force-torque sensor, an RGB-D camera, and MoveIt-based motion planning.
Once all measurement points are defined, MoveIt executes a fully automated sequence. The system visualizes force vectors in RViz and overlays a 3D point cloud from an integrated RGB-D camera.
Automated robotic test bench for AIRSKIN pad calibration, measuring and visualizing tactile sensitivity through force–displacement mapping.
Custom particle filter for 2D localization with optimized raycasting and resampling, achieving reliable pose estimation with only 100 particles.
A Monte Carlo Localization (MCL) system, also known as a Particle Filter, implemented in C++ for Differential Drive Mobile Robots using ROS Noetic.
The algorithm estimates a robot’s pose on a known map by maintaining a set of weighted samples (“particles”), each representing a possible state hypothesis.
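Keeping the particle count as low as 100 hinges on a resampling scheme that preserves diversity. A common choice is systematic (low-variance) resampling: draw a single random offset and step through the cumulative weights at fixed intervals. The original implementation is in C++; this Python sketch assumes the weights are already normalized:

```python
import random

def low_variance_resample(particles, weights):
    """Systematic resampling: one uniform draw, then fixed-step walks
    through the cumulative weight distribution (weights must sum to 1)."""
    n = len(particles)
    step = 1.0 / n
    r = random.uniform(0.0, step)  # single random offset
    c = weights[0]                 # running cumulative weight
    i = 0
    out = []
    for m in range(n):
        u = r + m * step
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])   # duplicate high-weight hypotheses
    return out
```

Compared to independent multinomial resampling, this introduces less sampling variance per step, which is what lets a small particle set stay concentrated on the true pose.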
Robust and efficient Monte Carlo Localization achieving high accuracy with minimal particles through adaptive resampling, enabling fast and reliable robot pose estimation in dynamic indoor environments.
A cascaded control system enabling real-time UAV tracking with a high-speed pan–tilt camera, combining field-oriented motor control and Kalman-filtered trajectory prediction.
A control system for tracking UAVs using a pan–tilt camera with cascaded position and velocity control.
Developed at Automation and Control Institute (TU Wien), the system enables accurate drone tracking in real time with predictive correction via Kalman filtering.
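The cascade structure itself is simple: an outer position loop generates a velocity setpoint (saturated to the motor limits), and a faster inner loop tracks that setpoint. The gains, limits, and pure-P form below are illustrative placeholders, not the tuned controller from the project:

```python
def cascaded_step(pos_ref, pos, vel,
                  kp_pos=4.0, kp_vel=8.0, vel_limit=2.0):
    """One update of a cascaded position-velocity controller:
    outer loop -> saturated velocity setpoint, inner loop -> torque command."""
    # Outer position loop with saturation at the axis velocity limit
    vel_ref = max(-vel_limit, min(vel_limit, kp_pos * (pos_ref - pos)))
    # Inner velocity loop (runs at a higher rate in the real system)
    torque = kp_vel * (vel_ref - vel)
    return vel_ref, torque
```

In the actual system the inner loop is closed by field-oriented control on the PMSM drives, and `pos_ref` comes from the Kalman-filtered prediction of the UAV trajectory rather than a static target.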
Designed a cascaded position–velocity control system for a high-speed pan–tilt camera tracking UAVs, integrating FOC-driven PMSM motors and Kalman-filtered trajectory prediction for robust real-time tracking.
Outside of my academic research and industrial work, I enjoy building and experimenting with robotic systems in my free time, exploring mechanical design, embedded control, and intelligent motion planning.
These projects allow me to prototype, test, and iterate on new ideas that blend classical robotics with modern AI-driven methods.
Designed and built a 6-DOF robotic arm using stepper-driven harmonic-drive-inspired gear reductions, integrated with ROS MoveIt for collision-aware motion planning and synchronized sim-to-real execution.
A 6-DOF robotic arm designed and controlled entirely through open-source tools, combining 3D-printed mechanics, ROS MoveIt motion planning, and real-to-sim synchronization for flexible robotic manipulation.