Personal Information

Lab Research Project
Research on Intelligent Foundation Models for Game-Theoretic Decision-Making under Incomplete Information in Complex Environments
Research Topic
Research on Nonlinear Arbitration Methods in Human-Machine Shared Control
Academic Output
Patents authored or co-authored: 0. Papers accepted or published: 2. Papers submitted and under review: 2.
Journal Articles
- UNA-SAC: An Uncertainty-Aware Nonlinear Arbitration Method for Human-AI Shared Control
  Shuyue Jiang, Yun-Bo Zhao, Yu Kang, Fei Xie, and Yun-Sheng Zhao
  IEEE Trans. Artif. Intell., 2026
  [Abs] [doi] [pdf]
With the continuous development of artificial intelligence (AI), human-AI shared control has become an essential paradigm for achieving reliable collaboration, where the key challenge lies in efficiently arbitrating between human and AI policies. However, the inherent uncertainty of AI policies and their approximation errors often undermine the robustness and effectiveness of traditional linear arbitration. To address this issue, this paper proposes a nonlinear arbitration method based on the Soft Actor-Critic (SAC) framework, termed UNA-SAC. The method introduces a moment network to model AI policy uncertainty and incorporates a cognition-inspired mechanism to adjust the human policy, thereby constructing a distributional nonlinear arbitration form. Theoretical analysis demonstrates that the proposed method provides advantages in gradient optimization and effectively mitigates the cumulative effect of uncertainty-induced bias. Experimental results further validate its superiority in driving assistance scenarios: UNA-SAC achieves significant improvements in convergence speed, task success rate, robustness, and operational performance compared with linear arbitration and other baseline methods.
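The abstract contrasts a fixed linear blend of human and AI actions with a nonlinear arbitration whose weighting responds to AI policy uncertainty. The paper's exact formulation is not reproduced here; the following is a minimal illustrative sketch under assumed names (`arbitrate`, `beta`, and the exponential weighting are all hypothetical choices, not the published method):

```python
import math

def arbitrate(a_human, a_ai_mean, a_ai_var, beta=1.0):
    """Blend human and AI actions, down-weighting the AI action as its
    predictive variance (e.g., from a moment network) grows.

    Illustrative only: the weighting rule and names are assumptions,
    not UNA-SAC's actual formulation.
    """
    # Confidence in the AI action decays with its uncertainty; in (0, 1].
    w_ai = math.exp(-beta * a_ai_var)
    # Nonlinear arbitration: the blend weight varies with uncertainty,
    # unlike a fixed linear rule a = c*a_h + (1 - c)*a_ai.
    return w_ai * a_ai_mean + (1.0 - w_ai) * a_human
```

With zero variance the AI action is followed exactly; as variance grows, authority shifts smoothly toward the human action.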
- A Dual Confidence Evaluation-Based Shared Control Approach for Human-Machine Collaboration
  Yaqing Zhou, Yun-Bo Zhao, Pengfei Li, Xia Tian, Shuyue Jiang, and Yu Kang
  Neurocomputing, 2026
  [Abs] [doi] [pdf]
Shared control has become a key strategy for enhancing the safety and adaptability of human-machine collaboration systems, particularly in complex and uncertain environments. However, existing rule-based and confidence-based authority allocation approaches often suffer from limited generalizability or excessive reliance on physiological signals, which hinders their practical deployment. This paper proposes a Dual Confidence-Based Shared Control (DC-SC) approach that enables dynamic and interpretable authority allocation by quantifying the decision confidence of both humans and machines. The human confidence model is constructed through a knowledge-task matching function that measures the cognitive alignment between the operator’s expertise and task difficulty, while the machine confidence model assesses decision reliability via an uncertainty-tolerance matching mechanism. These two types of confidence indicators are jointly used to construct a shared control policy, in which the fusion weights are dynamically adjusted using environmental feedback within a policy gradient optimization framework, thereby maximizing human-machine collaborative performance. Theoretical analysis validates the soundness of the confidence models, and experiments conducted in benchmark environments such as LunarLander and UAV path planning demonstrate that DC-SC significantly outperforms both reinforcement learning baselines and traditional shared control approaches in terms of policy performance and system safety.
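The abstract describes two confidence models, knowledge-task matching for the human and uncertainty-tolerance matching for the machine, whose scores jointly set the fusion weights of a shared control policy. A minimal sketch of that idea, with all function names, the sigmoid matching functions, and the normalization being assumptions rather than the paper's definitions:

```python
import math

def human_confidence(knowledge, task_difficulty):
    """Knowledge-task matching (assumed sigmoid form): confidence rises
    as the operator's expertise exceeds the task's difficulty."""
    return 1.0 / (1.0 + math.exp(-(knowledge - task_difficulty)))

def machine_confidence(uncertainty, tolerance):
    """Uncertainty-tolerance matching (assumed sigmoid form): confidence
    rises as the machine's uncertainty falls within its tolerance."""
    return 1.0 / (1.0 + math.exp(-(tolerance - uncertainty)))

def fuse_actions(a_human, a_machine, c_human, c_machine):
    """Convex combination with confidence-normalized weights; in the
    paper these weights are further tuned online via policy gradients."""
    w_h = c_human / (c_human + c_machine)
    return w_h * a_human + (1.0 - w_h) * a_machine
```

When the two confidences are equal the actions are averaged; otherwise authority shifts toward whichever side is more confident.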
Blog Posts