This project investigates context-aware human–robot collaboration for bricklaying in a visually guided workspace. Bricks are scattered randomly across the site, and the wall is built through an iterative co-design process in which a human can intervene at any moment while the robot executes sequential placements. A depth-camera and OpenCV pipeline localizes candidate bricks and translates the sensing results into human-interpretable instructions, while a decision-tree model continuously updates the robot's actions based on context and assembly logic. By integrating human intent, material behavior, and tectonic assembly knowledge into a shared workflow, the system supports adaptive, co-creative construction outcomes.
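
To make the perception step concrete, the following is a minimal sketch of how a depth-based brick localizer of this kind might be written in Python with OpenCV. The function name, table-depth threshold, and pixel-area bounds are hypothetical placeholders chosen for illustration, not the project's actual implementation.

```python
import cv2
import numpy as np

def locate_brick_candidates(depth_mm, table_depth_mm=1200,
                            brick_area_px=(800, 6000)):
    """Segment brick-sized blobs sitting above the work surface.

    depth_mm : HxW uint16 depth image in millimetres (registered frame)
    Returns a list of (cx, cy, angle_deg) pick candidates in pixel space.
    All thresholds here are illustrative assumptions.
    """
    # Pixels measurably closer to the camera than the table plane
    # are treated as potential bricks; zeros are invalid depth.
    mask = ((depth_mm > 0) & (depth_mm < table_depth_mm - 20))
    mask = mask.astype(np.uint8) * 255

    # Remove sensor speckle before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (brick_area_px[0] <= area <= brick_area_px[1]):
            continue  # reject blobs too small or too large to be a brick
        # Oriented bounding box gives a grasp centroid and rotation angle.
        (cx, cy), (_w, _h), angle = cv2.minAreaRect(c)
        candidates.append((cx, cy, angle))
    return candidates
```

In a full pipeline, the resulting pixel-space candidates would be reprojected into robot coordinates via the camera calibration and then passed to the decision-tree layer, which selects the next placement action based on the current assembly state.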