Evaluating automated grammar corrective feedback tools: a comparative study of Grammarly and QuillBot in ESL expository essays
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Penerbit Universiti Kebangsaan Malaysia, 2024 |
| Online Access: | http://journalarticle.ukm.my/24291/ http://journalarticle.ukm.my/24291/1/Akademika_94_2_23.pdf |
| Summary: | The advent of artificial intelligence and the proliferation of automated grammar feedback applications have garnered great interest among ESL learners as tools to facilitate language acquisition. While ample studies have examined the utility of applications like Grammarly and QuillBot, little research compares their effectiveness in identifying and classifying errors in Malaysian ESL students' writing samples. This study conducted such a comparative analysis using expository essays authored by Malaysian ESL students, employing a descriptive quantitative approach to data collection and analysis. Five writing samples were examined with both applications to ascertain the frequency of flagged errors, and the mistakes were categorised according to James' (1998) error classification schemata. Results demonstrated that, overall, Grammarly detected more errors than QuillBot. Additionally, both applications recognised substantially more grammatical and substance inaccuracies than other error types such as lexical, syntactic, or semantic issues. Grammarly provided detailed descriptions and suggestions for each error identified, whereas QuillBot only highlighted the errors with brief explanations. These findings suggest that both tools can meaningfully support ESL learners in their language learning process. However, further investigation into their respective strengths and limitations is merited given the nuances observed. Overall, this exploratory study highlights the promise of automated writing evaluation in enabling self-directed editing to enhance language learning among ESL learners. |
|---|