Combination of generative artificial intelligence and deep reinforcement learning: performance comparison
| Main Author: | |
|---|---|
| Format: | Final Year Project / Dissertation / Thesis |
| Published: | 2024 |
| Subjects: | |
| Online Access: | http://eprints.utar.edu.my/6814/ http://eprints.utar.edu.my/6814/1/2200481_LIM_FANG_NIE.pdf |
| Summary: | In this study, we explore the integration of Generative Adversarial Networks (GANs) and Deep Reinforcement Learning (DRL) methods, focusing on a performance comparison between different architectures of Sequence Generative Adversarial Networks (SeqGAN) and policy gradient algorithms. We address key challenges in text generation, such as maintaining narrative coherence over long sequences, reducing text repetition, and optimizing SeqGAN for diverse textual outputs. The study incorporates architectural innovations such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells, which enhance SeqGAN's ability to capture long-range dependencies in sequences, while attention mechanisms improve contextual awareness by selectively focusing on relevant parts of the sequence. Through extensive experiments, we analyze the influence of various neural network configurations and regularization mechanisms, including gradient penalties, on the quality of the generated text. Our findings show a 15% increase in BLEU scores, highlighting significant improvements in text coherence and diversity across various datasets and demonstrating the effectiveness of integrating SeqGAN with policy gradient methods for automated content generation. |
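The abstract describes training SeqGAN's generator with policy gradients, where a discriminator's score serves as the reward signal. The thesis's actual models (LSTM/GRU generators with attention) are not reproduced here; the following is only a minimal, self-contained sketch of the underlying REINFORCE idea, using a toy per-position categorical policy and a hypothetical stand-in reward that favors alternating tokens in place of a learned discriminator.

```python
import math
import random

random.seed(0)

VOCAB = ["a", "b"]   # toy vocabulary (illustrative only)
SEQ_LEN = 4

# Toy generator: independent per-position logits, a drastic simplification
# of SeqGAN's recurrent (LSTM/GRU) generator.
logits = [[0.0 for _ in VOCAB] for _ in range(SEQ_LEN)]

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def sample_sequence():
    """Sample token indices from the current policy."""
    seq = []
    for t in range(SEQ_LEN):
        p = softmax(logits[t])
        seq.append(random.choices(range(len(VOCAB)), weights=p)[0])
    return seq

def reward(seq):
    # Hypothetical stand-in for the discriminator: fraction of adjacent
    # token pairs that differ (rewards non-repetitive sequences).
    return sum(1.0 for i in range(1, len(seq)) if seq[i] != seq[i - 1]) / (len(seq) - 1)

LR = 0.5
for step in range(400):
    seq = sample_sequence()
    adv = reward(seq) - 0.5   # simple baseline to reduce variance
    # REINFORCE: grad of log-softmax w.r.t. logits is (one-hot - p);
    # move logits toward sampled tokens in proportion to the advantage.
    for t, tok in enumerate(seq):
        p = softmax(logits[t])
        for k in range(len(VOCAB)):
            grad = (1.0 if k == tok else 0.0) - p[k]
            logits[t][k] += LR * adv * grad

avg = sum(reward(sample_sequence()) for _ in range(200)) / 200
print(round(avg, 2))
```

After training, the average reward of sampled sequences rises well above the uniform-policy baseline of 0.5, which is the same mechanism SeqGAN uses at scale: the discriminator's judgment, rather than a hand-written rule, pushes the generator toward realistic text.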
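The reported 15% BLEU improvement refers to n-gram overlap between generated and reference text. As a rough illustration of what that metric measures (not the thesis's evaluation code, which likely uses a standard toolkit), here is a simplified sentence-level BLEU up to bigrams with a brevity penalty; the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clipped counts: a candidate n-gram is credited at most as many
        # times as it appears in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty discourages very short candidates.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the generated text stays coherent over long sequences".split()
good = "the generated text stays coherent over long sequences".split()
short = "the text stays coherent".split()
print(round(bleu(good, ref), 2))   # exact match → 1.0
print(bleu(short, ref) < bleu(good, ref))
```

A higher BLEU score, as in the 15% gain claimed above, indicates that the generated sequences share more (and longer) n-grams with the references while remaining comparable in length.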