Policy abstraction for transfer learning using learning vector quantization in reinforcement learning framework

Bibliographic Details
Main Author: Ahmad Afif, Mohd Faudzi
Format: Thesis
Language: English
Published: 2015
Subjects: TK Electrical engineering. Electronics. Nuclear engineering
Online Access: http://umpir.ump.edu.my/id/eprint/13521/
http://umpir.ump.edu.my/id/eprint/13521/16/Policy%20abstraction%20for%20transfer%20learning%20using%20learning%20vector%20quantization%20in%20reinforcement%20learning%20framework.pdf
author Ahmad Afif, Mohd Faudzi
building UMP Institutional Repository
collection Online Access
description People grow up exposed every day to an environment with an effectively infinite state space, interacting with other living beings and machines. Some events are routines that are always expected, while others are unpredicted and not completely known beforehand. When people encounter a routine again in the future, they do not need the same effort as they did the first time: based on experience, irrelevant information that does not affect the goal is ignored. For example, a new worker on his or her first day will carefully note the road to the office, including the road's name, signboards, and buildings, while also paying attention to the traffic. After several months, he or she will possibly focus only on the buildings and the traffic. Furthermore, when people face an unpredicted event, they usually try to cope with the situation using knowledge acquired from past experience. For example, if an accident jams the worker's daily route, he or she will try to find an alternative route based on the distance to and the location of the office. This shows that people have the ability to benefit in the future from their previous experience and knowledge. Moreover, this knowledge is not stored in a concrete, highly detailed form, but in an abstract form that is ready to be used for routine events and also to assist in unknown events. Such abilities are acquired through the most significant ability of a human being: the ability to learn from successes and failures.
format Thesis
id ump-13521
institution Universiti Malaysia Pahang
institution_category Local University
language English
publishDate 2015
recordtype eprints
repository_type Digital Repository
spelling ump-13521 2021-11-17T01:28:01Z http://umpir.ump.edu.my/id/eprint/13521/ 2015-08 Thesis NonPeerReviewed pdf en http://umpir.ump.edu.my/id/eprint/13521/16/Policy%20abstraction%20for%20transfer%20learning%20using%20learning%20vector%20quantization%20in%20reinforcement%20learning%20framework.pdf Ahmad Afif, Mohd Faudzi (2015) Policy abstraction for transfer learning using learning vector quantization in reinforcement learning framework. PhD thesis, Kyushu University.
title Policy abstraction for transfer learning using learning vector quantization in reinforcement learning framework
topic TK Electrical engineering. Electronics. Nuclear engineering
url http://umpir.ump.edu.my/id/eprint/13521/
http://umpir.ump.edu.my/id/eprint/13521/16/Policy%20abstraction%20for%20transfer%20learning%20using%20learning%20vector%20quantization%20in%20reinforcement%20learning%20framework.pdf