A Note on the Reward Function for PHD Filters with Sensor Control

The context is sensor control for multi-object Bayes filtering in the framework of partially observed Markov decision processes (POMDPs). The current information state is represented by the multi-object probability density function (pdf), while the reward function associated with each sensor control (action) is the information gain measured by the alpha, or Rényi, divergence. Assuming that both the predicted and updated states can be represented by independent, identically distributed (IID) cluster random finite sets (RFSs) or, as a special case, Poisson RFSs, this work derives analytic expressions for the corresponding Rényi-divergence-based information gains. An implementation of the Rényi divergence via the sequential Monte Carlo (SMC) method is presented. The performance of the proposed reward function is demonstrated by a numerical example in which a moving range-only sensor is controlled to estimate the number and states of several moving objects using the PHD filter.
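
For orientation, the reward described above is the Rényi (alpha) divergence between the updated and the predicted multi-object densities; in the Poisson (PHD) special case the set integral collapses to ordinary integrals over the single-object state space. The notation below (lambda_0 for the predicted intensity, lambda_1 for the updated one) is ours, reconstructed from the standard RFS definitions rather than quoted from the paper:

D_\alpha(f_1 \| f_0)
  = \frac{1}{\alpha - 1} \log \int f_1(X)^{\alpha} \, f_0(X)^{1 - \alpha} \, \delta X,
  \qquad \alpha > 0, \ \alpha \neq 1

% For Poisson RFSs with intensity functions \lambda_1 (updated) and \lambda_0
% (predicted), the set integral evaluates in closed form:
D_\alpha
  = \frac{1}{\alpha - 1} \Big[ \int \lambda_1(x)^{\alpha} \lambda_0(x)^{1 - \alpha} \, dx
    - \alpha \int \lambda_1(x) \, dx - (1 - \alpha) \int \lambda_0(x) \, dx \Big]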

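A minimal sketch of the SMC evaluation mentioned in the abstract, assuming the predicted and the hypothesised updated PHD are carried by one common particle set and differ only in their (unnormalised) weights; the function and argument names here are ours, not the paper's:

import numpy as np

def renyi_divergence_poisson(w_pred, w_upd, alpha=0.5):
    """Rényi divergence between two Poisson RFS (PHD) intensities represented
    by weighted particles on a common support.

    w_pred : unnormalised weights of the predicted PHD (they sum to the
             expected number of objects, not to 1)
    w_upd  : weights of the hypothesised updated PHD, same particles
    alpha  : Rényi order, alpha > 0 and alpha != 1
    """
    w_pred = np.asarray(w_pred, dtype=float)
    w_upd = np.asarray(w_upd, dtype=float)
    # Writing lambda_upd(x) = g(x) * lambda_pred(x), with g(x_i) = w_upd[i] / w_pred[i],
    # turns the cross term  int lambda_upd^alpha * lambda_pred^(1-alpha) dx  into a
    # weighted sum over the particles; no density ratios are evaluated explicitly.
    cross = np.sum(w_upd ** alpha * w_pred ** (1.0 - alpha))
    return (cross - alpha * w_upd.sum() - (1.0 - alpha) * w_pred.sum()) / (alpha - 1.0)

This reward plugs into a greedy one-step control rule: score every admissible sensor action by the information gain of its hypothesised update and keep the best. In the usage sketch below, which reuses renyi_divergence_poisson from above, update_weights is a hypothetical placeholder for a concrete PHD measurement-update step applied to the measurement predicted for each action:

def select_action(actions, particles, w_pred, update_weights, alpha=0.5):
    """Return the action with the largest Rényi information gain."""
    best_action, best_gain = None, float("-inf")
    for action in actions:
        # Hypothesised PHD update for this action (placeholder helper).
        w_upd = update_weights(action, particles, w_pred)
        gain = renyi_divergence_poisson(w_pred, w_upd, alpha)
        if gain > best_gain:
            best_action, best_gain = action, gain
    return best_action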

Bibliographic Details
Main Authors: Ristic, B., Vo, Ba-Ngu, Clark, D.
Format: Journal Article
Published: Aerospace & Electronic Systems Society, 2011
DOI: 10.1109/TAES.2011.5751278
Online Access: http://hdl.handle.net/20.500.11937/30999
Repository: Curtin Institutional Repository, Curtin University Malaysia