TY - GEN
T1 - Unmanned Vehicle Autonomy for Long-Duration Surveillance Missions
AU - Rastgoftar, Hossein
AU - Jiang, Jinning
AU - Atkins, Ella
N1 - Funding Information:
This work was supported in part by National Science Foundation (NSF) Grant CNS 1739525 and Office of Naval Research (ONR) Grant N000141410596.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/11
Y1 - 2018/11
N2 - Long-duration unmanned vehicle missions with unreliable communication require flexible onboard planning to ensure controls remain appropriate for an uncertain environment. While the autonomous underwater vehicle (AUV) can command its own actions, environmental dynamics are uncontrollable and typically uncertain. This paper applies the Markov Decision Process (MDP) to an environmental sampling mission with uncertain sampling target locations. AUV movements are discretized, and the reward function maximizes the likelihood of collecting valuable data. By modeling environmental features as MDP state features rather than as injected noise, mission planning can be made robust to changes in environmental conditions. This paper efficiently integrates transitions over controllable AUV motions in the presence of uncertain, uncontrollable, spatially varying environmental dynamics. An AUV exploration case study is investigated with MDP states including location, time, three-dimensional water current velocity, temperature, surface air pressure, and other spatially and temporally varying ocean environmental parameters generated from the Regional Ocean Modeling System (ROMS). This paper contributes a novel controllable/uncontrollable partitioning of the AUV decision state and examines its use for an AUV operating in a realistic ocean environment.
AB - Long-duration unmanned vehicle missions with unreliable communication require flexible onboard planning to ensure controls remain appropriate for an uncertain environment. While the autonomous underwater vehicle (AUV) can command its own actions, environmental dynamics are uncontrollable and typically uncertain. This paper applies the Markov Decision Process (MDP) to an environmental sampling mission with uncertain sampling target locations. AUV movements are discretized, and the reward function maximizes the likelihood of collecting valuable data. By modeling environmental features as MDP state features rather than as injected noise, mission planning can be made robust to changes in environmental conditions. This paper efficiently integrates transitions over controllable AUV motions in the presence of uncertain, uncontrollable, spatially varying environmental dynamics. An AUV exploration case study is investigated with MDP states including location, time, three-dimensional water current velocity, temperature, surface air pressure, and other spatially and temporally varying ocean environmental parameters generated from the Regional Ocean Modeling System (ROMS). This paper contributes a novel controllable/uncontrollable partitioning of the AUV decision state and examines its use for an AUV operating in a realistic ocean environment.
KW - Autonomous Underwater Vehicle
KW - Long Duration Surveillance
KW - Markov Decision Process
UR - http://www.scopus.com/inward/record.url?scp=85068319839&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85068319839&partnerID=8YFLogxK
U2 - 10.1109/AUV.2018.8729743
DO - 10.1109/AUV.2018.8729743
M3 - Conference contribution
AN - SCOPUS:85068319839
T3 - AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings
BT - AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, AUV 2018
Y2 - 6 November 2018 through 9 November 2018
ER -