CARET: Context-Aware Interpretation of Human Actions for Implementation in Assistive Robotics
Grant Agreement No.: 2024/55/D/ST7/02627
Granting Authority: National Science Centre (NCN)
Call: SONATA 20
Project duration: 01.10.2025 – 30.09.2028
Project’s Grant Amount: 1 063 962,00 PLN
Grant Amount for WUT: 1 063 962,00 PLN
Project Manager at PW: Vibekananda Dutta, Ph.D.
Project Team: Xin He, Ph.D.; Zuyang Fan, MSc
Project description:
This project develops a context-aware control framework for human-assisting robots that integrates perception, semantic reasoning, and task execution. The system combines real-time human posture recognition with object detection and scene context to infer user intent and select appropriate assistive actions. Central to the approach is an ontology-based semantic representation of users, objects, tasks, and the environment, which supports rule-based reasoning and dynamic task planning. Detected objects and user states are continuously instantiated in the ontology, closing a perception–reasoning–action loop. The framework is to be implemented on a legged robot platform (a Boston Dynamics Spot equipped with a manipulator) and on an assistive robotic platform (Agile X) to evaluate scenarios in which the robot must detect user needs and deliver selected services under varying lighting conditions. The experiments are expected to demonstrate the system's ability to perform context-dependent assistive tasks with robust delivery success, even under perceptual uncertainty. The study highlights the effectiveness of integrating symbolic reasoning with low-level perception to achieve adaptive, explainable, and human-centered robotic behavior.
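The closed perception–reasoning–action loop described above can be sketched in simplified form: detections are instantiated as facts in an ontology, and rule-based reasoning over those facts selects an assistive action. This is a minimal illustration only; the triple representation, the rule conditions, and the action names below are assumptions for the example, not the project's actual ontology or rule base.

```python
from dataclasses import dataclass

# Hypothetical ontology fact: a (subject, predicate, object) triple.
@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

class Ontology:
    """Toy stand-in for an ontology of users, objects, and the environment."""
    def __init__(self):
        self.facts: set[Triple] = set()

    def instantiate(self, subject: str, predicate: str, obj: str) -> None:
        # Perception results are continuously instantiated as facts.
        self.facts.add(Triple(subject, predicate, obj))

    def holds(self, subject: str, predicate: str, obj: str) -> bool:
        return Triple(subject, predicate, obj) in self.facts

# Illustrative rules: a condition over ontology facts mapped to an action.
RULES = [
    (lambda o: o.holds("user", "posture", "fallen"),
     "call_for_help"),
    (lambda o: o.holds("user", "posture", "reaching")
           and o.holds("cup", "on", "table"),
     "fetch_cup"),
]

def reason(ontology: Ontology) -> str:
    """Rule-based reasoning step: the first matching rule selects the action."""
    for condition, action in RULES:
        if condition(ontology):
            return action
    return "idle"

def loop_step(detections, ontology: Ontology) -> str:
    """One cycle of the loop: instantiate detections, then reason to an action."""
    for subject, predicate, obj in detections:
        ontology.instantiate(subject, predicate, obj)
    return reason(ontology)
```

For example, detecting a reaching posture together with a cup on the table would select the `fetch_cup` action, while an empty fact base falls back to `idle`. In the actual framework an OWL-style ontology and a reasoner would replace the hand-written triples and lambda rules.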