Improving Generalization Performance in Co-Evolutionary Learning

Recently, the generalization framework in co-evolutionary learning has been theoretically formulated and demonstrated in the context of game playing. The generalization performance of a strategy (solution) is estimated using a sample of random test strategies (test cases) by taking the average game outcome, with confidence bounds provided by Chebyshev's theorem. Chebyshev's bounds have the advantage that they hold for any distribution of game outcomes. However, such a distribution-free framework leads to unnecessarily loose confidence bounds. In this paper, we take advantage of the near-Gaussian nature of average game outcomes and provide tighter bounds based on parametric testing. This enables us to use small samples of test strategies to guide and improve the co-evolutionary search. We demonstrate our approach in a series of empirical studies involving the iterated prisoner's dilemma (IPD) and the more complex game of Othello in a competitive co-evolutionary learning setting. The new approach is shown to improve on classical co-evolutionary learning in that we obtain increasingly higher generalization performance using relatively small samples of test strategies, without the large performance fluctuations typical of the classical approach. The new approach also leads to faster co-evolutionary search, where we can strictly control the conditions (sample sizes) under which the speedup is achieved, without weakening the precision of the estimates.
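To make the bound comparison in the abstract concrete, the following sketch (ours, not the authors' code) contrasts the distribution-free Chebyshev confidence half-width with the parametric Gaussian one for an average of n game outcomes. The function name, the toy win/draw/loss outcome model, and the confidence level delta = 0.05 are illustrative assumptions.

# Sketch: Chebyshev vs. Gaussian confidence half-widths for the mean
# game outcome of a strategy against n random test strategies.
import math
import random

def confidence_half_widths(outcomes, delta=0.05):
    """Half-widths of a (1 - delta) confidence interval for the mean."""
    n = len(outcomes)
    mean = sum(outcomes) / n
    var = sum((x - mean) ** 2 for x in outcomes) / (n - 1)
    sigma = math.sqrt(var)
    chebyshev = sigma / math.sqrt(n * delta)  # holds for any outcome distribution
    z = 1.96                                  # z_{1 - delta/2} for delta = 0.05
    gaussian = z * sigma / math.sqrt(n)       # assumes near-Gaussian averages
    return mean, chebyshev, gaussian

# Toy experiment: game outcomes coded as 0 (loss), 0.5 (draw), 1 (win).
random.seed(0)
outcomes = [random.choice([0.0, 0.5, 1.0]) for _ in range(50)]
mean, cheb, gauss = confidence_half_widths(outcomes)
print(f"mean={mean:.3f}  Chebyshev +/-{cheb:.3f}  Gaussian +/-{gauss:.3f}")

At delta = 0.05 the ratio of half-widths is 1 / (1.96 * sqrt(0.05)) ≈ 2.3, so matching Chebyshev's precision with the Gaussian bound takes roughly 2.3^2 ≈ 5 times fewer test strategies — which is why the parametric bounds make small test samples usable for guiding the search.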

Bibliographic Details
Main Authors: Chong, Siang Yew; Tino, Peter; Ku, Day Chyi; Yao, Xin
Format: Article (peer reviewed)
Language: English
Published: February 2012, IEEE Transactions on Evolutionary Computation, 16 (1), pp. 70-85. ISSN 1089-778X
DOI: 10.1109/TEVC.2010.2051673 (http://dx.doi.org/10.1109/TEVC.2010.2051673)
Subjects: QA75.5-76.95 Electronic computers. Computer science
Online Access: http://shdl.mmu.edu.my/3386/
http://shdl.mmu.edu.my/3386/1/10.pdf