Feedforward neural network for solving particular fractional differential equations


Bibliographic Details
Main Author: Admon, Mohd Rashid
Format: Thesis
Language: English
Published: 2024
Online Access: http://psasir.upm.edu.my/id/eprint/118402/
http://psasir.upm.edu.my/id/eprint/118402/1/118402.pdf
Summary: Fractional differential equations (FDEs) model real-world phenomena that exhibit memory effects. However, existing numerical methods are mostly traditional, prompting the need for innovative approaches. Artificial neural networks (ANNs), a machine learning tool, have shown promising capability in solving differential equations. This research aims to develop a scheme based on a feedforward neural network (FNN) with a vectorized algorithm (FNNVA) for solving FDEs in the Caputo sense (FDEsC) using selected first-order optimization techniques: simple gradient descent (GD), the momentum method (MM), and the adaptive moment estimation method (Adam). A single-hidden-layer FNN based on Chelyshkov polynomials with an extreme learning machine algorithm (SHLFNNCP-ELM) is then constructed for solving FDEsC. Next, a scheme based on an extended single-hidden-layer FNN using a second-order optimization technique, the Broyden–Fletcher–Goldfarb–Shanno method (ESHLFNN-BFGS), is designed to solve FDEs in the Caputo–Fabrizio sense (FDEsCF). This study also addresses fractal-fractional differential equations in the Caputo sense with a power-law kernel (FFDEsCP) using an FNN with two hidden layers, a vectorized algorithm, and Adam (FNN2HLVA-Adam). In the first scheme, a vectorized algorithm and automatic differentiation are implemented to minimize computational cost. Numerical results indicate that FNNVA with Adam, in one or two hidden layers with 5 or 10 nodes and an appropriate learning rate, offers superior accuracy compared with FNNVA with GD or MM. The second approach relies on Chelyshkov basis functions for approximation and uses the extreme learning machine algorithm for weight determination, achieving high accuracy and low computational time. The third scheme employs the BFGS solver during the learning process and attains satisfactory numerical results in fewer iterations.
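Since every scheme above targets derivatives in the Caputo sense, a small numerical illustration of that operator may help fix ideas. The sketch below uses the standard L1 finite-difference rule for the Caputo derivative of order 0 < α < 1 on a uniform grid; it is a generic textbook discretization, not the thesis's neural-network method, and the function names and grid are illustrative assumptions.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, t, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1).

    f_vals: samples of f on the uniform grid t (t[0] = 0).
    Returns the approximate derivative at every grid point (0 at t = 0).
    """
    dt = t[1] - t[0]
    n = len(t)
    d = np.zeros(n)
    c = dt ** (-alpha) / gamma(2.0 - alpha)
    for i in range(1, n):
        k = np.arange(i)
        # weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
        b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)
        # first differences f(t_{i-k}) - f(t_{i-k-1})
        diffs = f_vals[i - k] - f_vals[i - k - 1]
        d[i] = c * np.sum(b * diffs)
    return d

# Check against a known closed form: for f(t) = t and alpha = 0.5,
# the Caputo derivative is t^{0.5} / Gamma(1.5).
t = np.linspace(0.0, 1.0, 201)
approx = caputo_l1(t, t, alpha=0.5)
exact = t ** 0.5 / gamma(1.5)
max_err = np.max(np.abs(approx - exact))
```

For a linear f the piecewise-constant treatment of f′ inside the L1 rule is exact, so the discrepancy here is only floating-point noise; for general f the rule is first-order accurate in the grid spacing.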
The final scheme utilizes the two-hidden-layer FNNVA with Adam optimization and a suitable number of nodes and learning rate to handle problems involving both memory and fractal effects; the numerical solutions obtained are consistent with the reference solutions. In conclusion, all proposed schemes deliver more accurate results than existing methods while maintaining low computational cost.
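The extreme learning machine idea behind the second scheme can be sketched generically: the hidden-layer weights are drawn at random and frozen, and only the output weights are obtained by linear least squares, which is why training is fast. The toy below fits a smooth function with a random tanh layer rather than the thesis's Chelyshkov basis; all names, sizes, and the target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(x, y, n_hidden=60):
    """ELM fit in 1D: random frozen hidden layer, least-squares output weights."""
    W = rng.normal(scale=3.0, size=n_hidden)   # random input weights (frozen)
    b = rng.normal(size=n_hidden)              # random biases (frozen)
    H = np.tanh(np.outer(x, W) + b)            # hidden activations, shape (n, n_hidden)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only these weights are trained
    return W, b, beta

def elm_predict(x, W, b, beta):
    return np.tanh(np.outer(x, W) + b) @ beta

# Fit a smooth target on [0, 1] and measure the worst-case error.
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * x)
W, b, beta = elm_fit(x, y)
err = np.max(np.abs(elm_predict(x, W, b, beta) - y))
```

Because the only trainable parameters enter linearly, a single least-squares solve replaces iterative gradient descent, mirroring the low computational time the summary reports for the ELM-based scheme.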