Abstract
Real-time artificial intelligence (AI) guidance for robotic surgery aims to transform intraoperative data—such as endoscopic video, robotic kinematics, tool telemetry, and physiologic signals—into context-aware recommendations that can support surgeons during active procedures. Despite rapid progress, translation into routine practice remains constrained by strict latency requirements, limited and delayed ground-truth labels for clinically meaningful outcomes, and performance variability across hospitals, surgeons, devices, and procedure styles. This systematic review synthesizes peer-reviewed research published from 2020 to 2025 on machine learning and AI methods designed for intraoperative guidance in robot-assisted and minimally invasive surgery. Using transparent, structured screening and reporting practices consistent with contemporary systematic review standards, we organize the literature into four functional categories: (a) perception and recognition (phase, tool, and anatomy understanding), (b) risk and error detection, (c) context-driven decision support and guidance policies, and (d) governance capabilities including logging, auditability, and explanation artifacts. To evaluate deployment relevance beyond model accuracy, we introduce the Guidance Readiness Quadrant, which compares studies along four dimensions: latency feasibility; evidence realism and leakage control; safety and fail-safe behavior under uncertainty; and governance and interpretability features for human-in-the-loop oversight. Overall, perception performance has improved markedly, but operational usefulness depends on disciplined validation, calibrated alerting aligned with workflow capacity, and robust monitoring to manage drift and degraded inputs. The review concludes with a deployment-oriented agenda focused on standardized operational metrics, privacy-aware cross-site evaluation, and safety-centered system design.