Revista Neosapiencia, ISSN 3091-1982. January-June 2026, Vol. 4, No. 1, pp. 403-421.
The impact of automated feedback using artificial
intelligence on the development of writing skills in students
of English as a foreign language
El impacto de la retroalimentación automatizada mediante inteligencia
artificial en el desarrollo de las habilidades de escritura en estudiantes
de inglés como lengua extranjera
Received: 23-01-2026. Accepted: 13-03-2026. Published: 26-03-2026.
Roberto Fernando Lozada Lozada
Independent Researcher, Quito, Ecuador
rflozada@gmail.com
https://orcid.org/0009-0007-4634-7981
Maicol Alexander Suntasig Guallichico
Independent Researcher, Quito, Ecuador
maicolsuntasig@gmail.com
https://orcid.org/0009-0006-0415-7672
Ximena Alexandra Estrada Chango
Independent Researcher, Quito, Ecuador
xime_alexa27@hotmail.com
https://orcid.org/0009-0000-2646-416X
Jhessica Alexandra Jumbo Obaco
Independent Researcher, Loja, Ecuador
jessijumbo@hotmail.com
https://orcid.org/0009-0008-5972-1378
Cristian David Chucho Muñoz
Independent Researcher, Cañar, Ecuador
cristianchucho7@gmail.com
https://orcid.org/0009-0006-8694-9381
Luis Alberto Deleg Juela
Independent Researcher, Cuenca, Ecuador
luisdelegjuela@gmail.com
https://orcid.org/0009-0008-0769-8655
Abstract
The teaching of English as a Foreign Language (EFL) has increasingly incorporated artificial
intelligence (AI), leading to significant transformations in writing instruction. This research
analyzes the impact of AI-driven automated feedback on the development of writing skills in EFL
learners, with particular attention to the types of technologies employed. Methodologically, a
mixed-methods systematic review was conducted following the PRISMA protocol, examining
studies published between 2020 and 2025. The findings reveal that automated feedback tools can
be broadly classified into two categories: traditional Natural Language Processing (NLP)-based
systems (e.g., Grammarly, Criterion, Pigai), which focus primarily on grammatical accuracy, error
detection, and surface-level textual features; and Large Language Models (LLMs) (e.g.,
ChatGPT), which generate more holistic feedback related to coherence, organization, and content
development. Both categories demonstrate positive effects on writing accuracy and revision
practices, largely due to the immediacy and personalization of feedback. However, the review also
identifies challenges associated with technological dependence, uneven feedback depth, and
ethical concerns when feedback is used without pedagogical mediation. The study concludes that
AI-based automated feedback constitutes an innovative pedagogical resource for EFL writing
development, provided it is integrated within structured instructional frameworks and
complemented by active teacher mediation.
Keywords: artificial intelligence, automated feedback, academic writing, foreign language, EFL
Introduction
In today's globalized world, English proficiency is essential for communication and can even be considered a path to academic and professional opportunities. In this context, automated feedback through artificial intelligence emerges as a strategic resource for improving academic writing skills in English as a Foreign Language (EFL) instruction. Developing writing proficiency in EFL remains a challenge in training processes: it involves not only mastering foreign grammatical structures and vocabulary, but also constructing coherent and cohesive texts for academic and professional contexts.
It is worth noting that, since its emergence, artificial intelligence (AI) has generated, among other benefits, the possibility of automated feedback in the teaching-learning process, transforming traditional pedagogical practices.
According to Liu (2024), automated feedback provides suggested responses generated by
computerized systems in real time to improve written production. Furthermore, it is noted that
artificial intelligence encompasses the set of technologies capable of simulating human cognitive
processes, such as knowledge acquisition and decision-making (Lee & Moore, 2024). However,
this is compounded by the limitations that arise in English language studies in foreign language
contexts. In this sense, learning English in countries where it is neither an official language nor in
everyday use presents barriers to real-world exposure (Sarıca & Deneme, 2025). Therefore, it
would seem that academic writing in EFL is in a stage of pedagogical transition, in which its
integration with artificial intelligence does not replace traditional teaching; on the contrary, it
complements it, thus providing opportunities for students to strengthen their autonomy and achieve
international standards in written production.
Despite the above, there are other conflicting opinions regarding the effectiveness of these tools.
Authors such as Mekheimer (2025) and Alnemrat (2025) highlight the positive results in text
quality, while others, such as Steiss (2024), warn that AI-generated feedback can be superficial or
ambiguous. Therefore, the need to combine the potential of AI with pedagogical mediation is
recognized. Currently, some research related to automated feedback systems, such as Grammarly,
Criterion, and ChatGPT, has confirmed that these systems contribute to improving the accuracy
and coherence of texts in EFL learners. In particular, Dizon (2024) and Shi and Aryadoust (2024)
assert that these tools generate benefits by providing immediate and personalized feedback,
although the quality of the suggestions depends on the learner's level of language proficiency.
Similarly, Fleckenstein et al. (2023) confirm the positive effects on text revision, provided, of course, that teacher support is maintained.
Similarly, in Latin America, studies have focused on understanding the influence of AI on
language instruction. In this regard, research by Nunes et al. (2022) and Marzuki et al. (2023) has
determined that the implementation of automated platforms leads to greater autonomy, although
they highlight limitations related to technological access and the lack of educational policies that
promote its use. Furthermore, recent research in Ecuador explores the benefits of automated
feedback in university settings, identifying advantages in correcting recurring errors and in student
motivation (Jaramillo, 2025).
From a social perspective, this research responds to the growing demand for communicative
competence in English as a foundation for global academic and professional integration. From a
practical standpoint, it offers teachers and students innovative strategies to optimize writing
instruction. Methodologically, this research adopts a systematic review approach based on the
PRISMA protocol, ensuring rigor and transparency. Academically, it provides recent evidence on
the effectiveness of automated feedback in the Latin American and Ecuadorian context.
In this context, the purpose of this article is to analyze the impact of automated feedback using
artificial intelligence on the development of writing skills in students of English as a foreign
language.
One of the most enduring problems in language instruction is the development of writing
proficiency in English as a Foreign Language (EFL), especially in situations where English is not
a common language of communication. In addition to grammatical and lexical correctness,
academic writing demands the capacity to create arguments, arrange ideas logically, and adhere to
global standards for academic communication. Written feedback is essential in this approach
because it helps students improve their writing and fosters their progressive growth as independent
authors.
However, time constraints, large class sizes, and teachers' overwhelming workloads frequently limit the ability to provide fast and customized feedback through traditional teacher-provided methods. In response to these problems, advances in artificial intelligence (AI) have reshaped pedagogical approaches in EFL writing education through automated feedback systems that can respond instantly and individually to students' written work.
The necessity to thoroughly and critically investigate the real effects of AI-driven automated
feedback on the growth of academic writing abilities in EFL learners makes this study pertinent.
Recent studies show gains in textual coherence, grammatical accuracy, and student motivation,
but they also raise issues with technology dependence, the sporadic shallowness of automated
recommendations, and the need for pedagogical mediation. Therefore, it is crucial to synthesize
the most recent scientific findings in order to comprehend not only the advantages but also the
constraints and situational circumstances that can make automated feedback a useful teaching tool.
By organizing contemporary research published between 2020 and 2025 and employing the
PRISMA methodology to guarantee methodological rigor and openness in the selection and
analysis of studies, this study advances the field of EFL teaching and learning from an academic
standpoint. By doing this, it highlights current discussions, gaps, and trends surrounding the
application of AI to written feedback, especially in EFL contexts.
Practically speaking, the results give teachers, teacher educators, and curriculum designers
important information by offering evidence-based recommendations on how to include automated
feedback systems as an addition to, rather than a substitute for, teacher input. This method
preserves the growth of critical and metacognitive writing abilities while promoting informed
pedagogical decision-making that increases learner autonomy.
Lastly, from a social and educational standpoint, this study is particularly pertinent in Ecuadorian
and Latin American contexts, where issues with technology access and English language training
continue to exist. Analyzing how AI-based automated feedback affects academic writing helps
create more inclusive educational policies and creative teaching methods that enhance educational
quality and facilitate students' academic and professional integration in a world growing more
interconnected by the day.
1.1 Artificial Intelligence, Feedback, and Writing Development from an Information
Processing and Self-Regulated Learning Perspective
Both the Self-Regulated Learning (SRL) framework and information processing theory provide
theoretical justification for the efficacy of AI-driven automated feedback in EFL writing teaching.
Learning is defined as the active encoding, storing, and retrieval of information through cognitive
systems with constrained processing power from the standpoint of information processing.
Because it lessens cognitive strain and stops the consolidation of faulty language forms, immediate
feedback is essential to this process. AI-based feedback enables learners to make real-time
linguistic adjustments by giving prompt answers during the writing process, which promotes more
effective encoding of lexical and grammatical knowledge (Atkinson & Shiffrin, 1968; Mayer,
2020).
Additionally, a supplementary explanation for the noted advantages of automated feedback is
provided by the Self-Regulated Learning framework. This viewpoint holds that the capacity of
students to organize, track, and assess their own performance is a necessary component of effective
learning (Liu, 2024; Shi & Aryadoust, 2024). With AI-driven feedback, students can
autonomously see mistakes, edit texts, and gauge their progress without continual teacher
assistance, supporting the monitoring and evaluation stages. Learner autonomy and continued
involvement in the writing process are fostered by the accessibility and immediateness of AI
feedback, which promotes iterative revision cycles.
Both theoretical frameworks stress, therefore, that meaningful learning is not ensured by feedback
alone. Immediate feedback may result in superficial adjustment rather than in-depth cognitive
processing if reflective engagement is not maintained. In order to ensure that learners actively
evaluate and use input rather than passively accepting automated suggestions, AI-based feedback
must be incorporated into instructional designs that foster metacognitive awareness (Zimmerman,
2002; Panadero, 2017).
Materials and methods
With the aim of providing relevant evidence in the field of language teaching, this research was
structured as a systematic review, which facilitates the analysis and comparison of contributions
from different studies related to academic writing in English as a foreign language. Therefore, this
study adopted a mixed-methods approach, combining qualitative and quantitative systematic
review, based on the PRISMA protocol. This protocol combines interpretive analysis of findings
(qualitative) with the use of synthesis techniques and numerical categorization of studies
(quantitative), allowing for a deeper understanding of the phenomena under investigation, as well
as the identification of patterns, trends, and gaps in the literature. As a result, it allows for the
organization and transparency of the process of searching, selecting, and analyzing the literature
(Page et al., 2021; Moher et al., 2020). Snyder (2022) indicates that systematic review guarantees
academic rigor by synthesizing the available evidence around a research problem; on the other
hand, Kitchenham and Charters (2021) point out that this approach is ideal for identifying trends
and gaps in emerging areas such as AI-driven automated feedback.
This investigation was carried out in accordance with the PRISMA protocol and used a mixed-
methods systematic review methodology, integrating qualitative and quantitative synthesis.
Database searching, screening according to inclusion and exclusion criteria, and narrative-
comparative synthesis of the chosen papers were the three steps in the review process.
The initial search identified 250 records across Scopus, Web of Science, ERIC, Scielo, and Redalyc. After duplicates were eliminated and relevance and quality standards were applied, 60 articles were examined in full text. This screening procedure yielded a final sample of 25 peer-reviewed studies (n = 25) published between 2020 and 2025. This explicit specification of the final corpus guarantees scientific transparency and strengthens the legitimacy and dependability of the findings.
Consequently, a descriptive-analytical study was developed with an exploratory scope, as it
investigated and identified the contributions, limitations, and perspectives of automated feedback
in English as a foreign language instruction. In this sense, this design was pertinent because it
allowed for the comparison of international, regional, and national evidence under previously
established criteria (Creswell & Creswell, 2021).
Therefore, the analysis was developed in three phases: (1) a search based on specific expressions in English and Spanish, including terms such as artificial intelligence, automated feedback, and EFL writing (Table 1), to ensure a broad and relevant retrieval of published articles and indexed metadata (Scopus, Web of Science, ERIC, Scielo, and Redalyc); (2) the definition of inclusion and exclusion criteria; and (3) a narrative and comparative synthesis of the selected studies, all to guarantee reliability. Additionally, source triangulation and cross-validation techniques were employed (Gough et al., 2021; Booth et al., 2021). Another aspect of
the applied method is the population, which consisted of academic articles published between 2020
and 2025 that address AI-powered automated feedback in English language teaching.
Table 1.
Search strings used

Database | Search string
Scopus | ("Artificial Intelligence" OR "AI") AND ("Automated Feedback" OR "Automated Writing Evaluation") AND ("EFL" OR "English as a Foreign Language") AND (2020–2025)
Web of Science | ("AI feedback" AND "EFL writing" AND "academic writing" AND "2020–2025")
ERIC | ("Automated feedback" OR "AI writing tools") AND ("foreign language learning" OR "English writing")
Scielo / Redalyc | ("automated feedback" AND "artificial intelligence" AND "academic writing" AND "2020–2025")
Source: Own elaboration
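For transparency, the Boolean search strings in Table 1 can also be expressed as plain data, as in this minimal Python sketch. It is an illustration of the query logic only, not the actual retrieval scripts used in the review; the helper function `uses_date_filter` is a hypothetical convenience added here.

```python
# Boolean search strings from Table 1 expressed as data, so the query
# logic can be inspected or reused programmatically (illustrative only).

SEARCH_STRINGS = {
    "Scopus": (
        '("Artificial Intelligence" OR "AI") AND '
        '("Automated Feedback" OR "Automated Writing Evaluation") AND '
        '("EFL" OR "English as a Foreign Language") AND (2020-2025)'
    ),
    "Web of Science": (
        '("AI feedback" AND "EFL writing" AND "academic writing" AND "2020-2025")'
    ),
    "ERIC": (
        '("Automated feedback" OR "AI writing tools") AND '
        '("foreign language learning" OR "English writing")'
    ),
    "Scielo/Redalyc": (
        '("automated feedback" AND "artificial intelligence" AND '
        '"academic writing" AND "2020-2025")'
    ),
}

def uses_date_filter(query):
    """Return True if the query string restricts results to the 2020-2025 window."""
    return "2020" in query and "2025" in query

for db, query in SEARCH_STRINGS.items():
    print(f"{db}: date filter in string = {uses_date_filter(query)}")
```

Note that, as in Table 1, the ERIC string carries no date restriction in the query itself; in such cases the year filter would be applied through the database interface instead.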
The PRISMA diagram (Figure 1) reflects the screening process: of the initial 250 records, only 25 met the established inclusion criteria (year, language, thematic relevance, and methodological rigor). The inclusion and exclusion criteria that guided the search and organization of the databases are shown in Table 2, ensuring the relevance, currency, and scientific validity of the selected studies, which are summarized in Table 3.
Fig. 1. PRISMA diagram of the selection process
Records identified: 250
Records after removing duplicates: 210
Records examined by title/abstract: 210
Excluded records (not relevant): 150
Articles reviewed in full text: 60
Articles excluded by criteria: 35
Studies included in the final synthesis: 25
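The screening arithmetic in Figure 1 can be made explicit and checkable with a short sketch; the counts are those reported above, and the intermediate variable names are introduced here for clarity.

```python
# PRISMA screening flow from Figure 1, with the arithmetic between
# stages made explicit (illustrative restatement of the reported counts).

identified = 250                                  # records identified in the databases
after_dedup = 210                                 # records after removing 40 duplicates
excluded_title_abstract = 150                     # judged not relevant at title/abstract stage
full_text_reviewed = after_dedup - excluded_title_abstract    # articles reviewed in full text
excluded_by_criteria = 35                         # excluded by the inclusion criteria
included = full_text_reviewed - excluded_by_criteria          # studies in the final synthesis

print(f"{identified} identified -> {after_dedup} deduplicated -> "
      f"{full_text_reviewed} full-text reviews -> {included} included")
```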
Table 2.
Inclusion and exclusion criteria

Inclusion criteria | Exclusion criteria
Articles published between 2020 and 2025 | Publications before 2020
Studies on AI and written feedback | Documents on AI in other skills (reading, speaking, listening)
EFL/ESL contexts | Exclusively native-speaker contexts
Full access to the text | Abstracts or papers without full text
Systematic reviews and empirical studies | Opinions, blogs, or essays without scientific backing
Source: Own elaboration
Table 3.
Summary of selected articles

Country | AI tool / approach | Main results
Jordan | ChatGPT | Automated feedback improved coherence and organization in EFL argumentative essays.
Iran | ChatGPT + teacher feedback | The AI-teacher combination increased grammatical accuracy and reduced recurring errors.
Saudi Arabia | Various AI tools | Automated feedback generated improvements in informative texts, although with limitations in style.
Japan | Grammarly | AWE reduced grammatical errors, but students relied too heavily on the proofreader.
China | Systematic review (Grammarly, Pigai, Criterion) | Positive evidence in writing accuracy, but longitudinal research is lacking.
Norway | Theoretical perspective on AI | Automated feedback is understood as a cultural tool, but it can limit metacognition.
Germany | Meta-analysis | AWE systems show moderate positive effects on writing quality.
China | Pigai | Frequent use motivated students to write more, albeit with technological dependence.
Ecuador | Grammarly | The frequency of spelling and grammar errors among university students decreased.
USA | Systematic review | ChatGPT offers fast feedback, but requires teacher validation to avoid superficial responses.
China | Pigai | Immediate feedback strengthened autonomy and self-editing in EFL students.
India | ChatGPT | Students perceived greater motivation and confidence in their academic writing.
Egypt | ChatGPT | AI feedback increased the frequency of revisions and improved text quality.
Indonesia | AI (Grammarly, Quillbot) | Improvements in text structure, although with unequal technological access.
Brazil | Criterion | Advances in narrative writing among schoolchildren; positive evidence in EFL contexts.
Türkiye | Hybrid review on AI | Students value the immediacy of the feedback, but doubt its reliability.
Azerbaijan | ChatGPT | Pedagogical opportunities, although it raises ethical concerns in EFL writing.
Indonesia | Grammarly, Quillbot, Ginger | Useful tools for vocabulary and grammar, but limited in style and creativity.
Singapore | Systematic review | AI feedback shows a positive impact on text accuracy and length.
Germany | Human vs AI comparison | Human feedback is still more in-depth; AI is useful in initial feedback.
USA | Automatic evaluation | AWE is effective in linguistic accuracy, but has criticisms regarding contextualization.
South Korea | ChatGPT | Students experienced contradictions: trust in AI, but a need for teacher guidance.
China | Review of LLMs | Identifies the ethical benefits and risks of using language models in education.
China | AIAS framework | AI applied to English teaching improves formative feedback, still under experimental development.
China | ChatGPT | Controlled trial showed improvements in critical writing with AI support.
Source: Own elaboration
Results and discussion
According to the findings, two main categories clearly dominate the artificial intelligence technologies examined in the selected studies. First, the most frequently examined systems are conventional automated writing evaluation tools based on Natural Language Processing (NLP), such as Grammarly, Criterion, Pigai, QuillBot, and Ginger; this is especially true of research conducted before 2023. These systems concentrate mainly on surface-level textual aspects, lexical choice, and grammatical accuracy.
Second, more recent research demonstrates an increasing focus on Large Language Models (LLMs), particularly ChatGPT, which are intended to produce context-aware feedback on coherence, organization, and content development in addition to linguistic accuracy. The growing presence of LLMs in the literature reflects a shift away from rule-based feedback systems toward generative models capable of comprehensive and flexible responses to student writing.
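To make the contrast concrete, the kind of surface-level, rule-based checking that NLP-based AWE tools automate can be caricatured in a few lines of Python. This is a deliberately simplified toy with made-up rules; commercial systems such as Grammarly rely on far richer models, and LLM feedback on coherence and organization cannot be reduced to such patterns.

```python
import re

# Toy surface-level checker: each rule is a regex pattern paired with a
# feedback message. This only illustrates the *category* of feedback
# that rule-based AWE tools target; real systems are far more complex.
RULES = [
    (re.compile(r"\ba ([aeiouAEIOU]\w*)"),
     "article: consider 'an' before a vowel sound"),
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE),
     "repetition: duplicated word"),
    (re.compile(r"\s{2,}"),
     "spacing: multiple consecutive spaces"),
]

def surface_feedback(text):
    """Return rule-based comments on surface-level features only."""
    comments = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            comments.append(f"'{match.group(0)}' -> {message}")
    return comments

sample = "She wrote a essay about the the topic."
for comment in surface_feedback(sample):
    print(comment)
```

The point of the sketch is that such feedback targets detectable surface patterns, which is precisely why discourse-level qualities like argumentation and academic voice fall outside the reach of this category of tools.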
According to comparative analysis, LLM-based systems dominate qualitatively in terms of
research interest and pedagogical discussion, but NLP-based tools dominate statistically across the
literature. LLMs are more commonly linked to higher-order writing processes like idea formation
and argumentation coherence, whereas NLP tools are regularly linked to gains in grammatical
precision and error reduction. However, research also indicates that LLMs raise more pedagogical
and ethical issues, such as the possibility of over-reliance and accidental plagiarism, which
emphasizes the necessity of planned instructional mediation.
The analysis of the 25 studies selected through the PRISMA protocol provides consolidated evidence about the effect of AI-driven automated feedback on the development of writing abilities in English as a Foreign Language (EFL). The results show that the effectiveness of automated feedback is multifaceted rather than uniform, depending on the linguistic focus of the feedback, learner engagement, the presence of pedagogical mediation, and the contextual conditions in which these technologies are implemented.
A dominant pattern across the reviewed studies is the positive effect of automated feedback on
linguistic accuracy, particularly in grammar, spelling, and sentence-level structure. Studies carried
out in Ecuador (Jaramillo, 2025), China (Liu, 2024), Brazil (Nunes et al., 2022), and Japan (Dizon,
2024) consistently show that students' writing has improved in formal accuracy and decreased in
recurrent errors. The promptness and regularity of feedback, which enable students to spot and fix
mistakes during iterative revision processes, are directly related to these gains. This implies that
surface-level language growth, a fundamental aspect of EFL academic writing, is particularly well-
supported by automated feedback.
Numerous studies demonstrate enhancements in textual coherence and organization beyond
grammatical accuracy, especially when generative AI tools like ChatGPT are used (Alnemrat,
2025; Mekheimer, 2025). According to these studies, learners can benefit from automated
feedback by using it to better organize arguments and construct paragraphs. This conclusion,
though, is not always true. While automated systems offer helpful recommendations at the lexical
and syntactic levels, other research (Baz, 2025; Setiawan, 2025) notes that they are limited in their
ability to handle discourse-level characteristics including academic voice, stylistic nuance, and
rhetorical appropriateness. This discrepancy suggests that when writing demands shift from micro-linguistic precision to higher-order cognitive and rhetorical skills, the influence of automated feedback diminishes.
Learner autonomy and revision behavior are the subject of another noteworthy finding. According
to a number of studies (Liu, 2024; Mekheimer, 2025; Mahapatra, 2024), students are more inclined
to make autonomous, repetitive revisions to their writings when automated feedback is used.
Constant feedback seems to encourage self-regulation and learner accountability by moving
writing practices toward a more process-oriented approach. In this way, by allowing students to
actively interact with their own writings, automated feedback serves as a stimulant for independent
learning.
However, there are significant risks associated with this autonomy. Engeness (2025) and Steiss
(2024) caution that if students accept ideas without critically analyzing them, an over-reliance on
automated feedback may result in shallow editing practices. This conflict highlights a paradox
found in the literature review: automated feedback raises the number of revisions but does not
always ensure greater metacognitive participation. Therefore, rather than simply applying
feedback mechanically, students must be able to analyze, assess, and justify it in order to build
critical writing skills.
The results show that students generally have a positive perception of automated feedback in terms
of motivation and affective variables. According to research from South Korea, India, and Turkey
(Mahapatra, 2024; Sarıca & Deneme Gençoğlu, 2025; Woo et al., 2023), students appreciate AI-
generated feedback's instantaneity, accessibility, and lack of judgment. Particularly for EFL
learners who frequently feel nervous when composing academic texts, these traits help to lessen
writing anxiety and boost confidence. Students do, however, also voice concerns about the
accuracy and comprehensiveness of automated feedback, underscoring the necessity for
instruction in evaluating recommendations produced by AI.
Among the most important conclusions of this analysis is the crucial function that teacher mediation plays. Studies comparing automated feedback alone with combined AI-teacher feedback models (Asadi et al., 2025; Fleckenstein et al., 2023) consistently show better learning outcomes when human supervision complements automated systems. In these hybrid approaches,
teachers concentrate on higher-order skills like argumentation quality, coherence, and critical
thinking, while AI tools mainly assist with initial error identification and revision. This
demonstrates that rather than taking the place of instructor competence, automated feedback
should be viewed as an additional instructional tool.
Lastly, regional and contextual factors have a big impact on how effective automated feedback is.
The degree to which AI tools can be successfully incorporated into EFL instruction is influenced
by institutional infrastructure, teacher training conditions, and technological access, according to
research done in Latin American and developing contexts (Nunes et al., 2022; Marzuki et al., 2023;
Jaramillo, 2025). The benefits of automated feedback are not uniform in settings with poor
connectivity or inadequate pedagogical support, which supports the notion that pedagogical
development is not guaranteed by technology innovation alone.
Overall, this systematic review's findings show that AI-powered automated feedback improves
EFL writing growth in quantifiable ways, especially in terms of grammatical accuracy, revision
techniques, and learner motivation. However, contextual preparedness, teacher mediation, and
critical integration are necessary for its pedagogical efficacy. In order to promote both linguistic
correctness and higher-order writing skills, automated feedback should be viewed as a component
of a well-rounded teaching strategy that incorporates both human direction and technological
assistance.
Teacher mediation is crucial in minimizing the misuse of AI-based feedback, especially when it
comes to accidental plagiarism and mechanical text substitution, according to a key result from the
analyzed studies. Students may rely too much on automated recommendations in the absence of
clear instructional guidance, utilizing AI-generated language without critically analyzing the
material or comprehending the underlying linguistic principles.
Thus, three crucial areas should be the focus of effective teacher mediation. Instructors must first
clearly explain the limitations and goal of AI feedback, emphasizing that these technologies are
meant to assist with editing rather than produce original content. Second, educators should provide
writing assignments that challenge students to defend changes, consider their feedback selections,
or turn in annotated versions that detail how they understood and used the automated ideas. This
approach discourages passive acceptance of AI outcomes and encourages metacognitive
interaction. Third, by incorporating conversations about ethical AI use, authorship, and originality
into writing training, instructors may highlight academic integrity.
Teacher mediation guarantees that automated systems improve learning without taking the place
of cognitive effort by presenting AI feedback as a formative support tool within a guided
instructional procedure. By doing this, educational intervention protects against the dangers of
reliance and accidental plagiarism while simultaneously optimizing the advantages of immediacy
and personalization.
Conclusions
The results of this systematic review show that automated feedback powered by artificial
intelligence (AI) improves writing abilities in English as a Foreign Language (EFL) in a positive
and quantifiable way. This is especially true when considering the kinds of AI technologies that
are most commonly discussed in the literature. Traditional Natural Language Processing (NLP)-based tools, like Grammarly, Criterion, Pigai, QuillBot, and Ginger, clearly predominate in the examined research. These tools are regularly linked to gains in spelling, grammar, and sentence-level correctness. Because of their accessibility and emphasis on surface-level linguistic traits,
these tools continue to be the most extensively used systems in a variety of educational contexts.
More recent research, by contrast, focuses on the ability of Large Language Models (LLMs), in particular ChatGPT, to produce comprehensive, context-aware feedback that addresses coherence, organization, and content development. The results suggest that the instructional effectiveness of LLM-based tools is more variable and context-dependent, despite their encouraging potential to enhance higher-order writing processes. According to the studies, when used without clear instructional supervision, LLMs raise more concerns about overreliance, decreased critical engagement, and the possibility of inadvertent plagiarism than typical NLP tools.
Overall, the findings of the literature review show that the type of technology used strongly
shapes how effective AI-based feedback is. NLP-based automated writing evaluation tools
consistently and reliably improve linguistic accuracy, whereas LLMs offer broader pedagogical
possibilities but require more deliberate integration. In both categories, teacher mediation
stands out as a critical component for ensuring that automated feedback promotes meaningful
learning rather than mechanical text correction or substitution.
From a pedagogical standpoint, these results imply that AI-driven feedback should be treated as
a differentiated educational resource rather than a one-size-fits-all solution. To ensure that
students engage critically with feedback and uphold authorship and academic integrity,
educators are encouraged to strategically combine LLM-based systems for content and
organization support with NLP-based tools for accuracy-focused feedback. Future studies
should further explore longitudinal effects, comparative instructional approaches, and ethical
frameworks guiding the use of generative AI in EFL writing instruction.
References
Alnemrat, A. (2025). AI vs. teacher feedback on EFL argumentative writing: A quasi-
experimental study. Frontiers in Education, 10, Article 1614673.
https://doi.org/10.3389/feduc.2025.1614673
Asadi, M., Rahimi, M., & Hosseini, H. (2025). The impact of integrating ChatGPT with
teachers’ feedback on EFL writing skills. Heliyon, 11(2), e20917.
https://doi.org/10.1016/j.heliyon.2025.e20917
Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control
processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and
motivation (Vol. 2, pp. 89–195). Academic Press.
Baz, M. A. (2025). The effect of feedback on informative text writing: AI or teacher? Open
Praxis, 17(3), 315–329. https://doi.org/10.55982/openpraxis.17.3.871
Booth, A., Sutton, A., & Papaioannou, D. (2021). Systematic approaches to a successful
literature review (2nd ed.). SAGE.
Creswell, J. W., & Creswell, J. D. (2021). Research design: Qualitative, quantitative, and mixed
methods approaches (5th ed.). SAGE.
Ding, L., & Zou, D. (2024). Automated writing evaluation systems: A systematic review of
Grammarly, Pigai, and Criterion with a perspective on future directions in the age of
generative artificial intelligence. Education and Information Technologies, 29(6), 7557–7584.
https://doi.org/10.1007/s10639-024-12894-7
Dizon, G. (2024). A systematic review of Grammarly in L2 English writing. Cogent Education,
11(1), 2397882.
https://doi.org/10.1080/2331186X.2024.2397882
Engeness, I. (2025). Exploring AI-driven feedback as a cultural tool: A cultural-historical
perspective. Integrative Psychological and Behavioral Science, 59(2), 455–472.
https://doi.org/10.1007/s12124-025-09894-8
Fleckenstein, J., Leifheit, L., & Köller, O. (2023). Automated feedback and writing: A multi-
level meta-analysis. Frontiers in Artificial Intelligence, 6, Article 1162454.
https://doi.org/10.3389/frai.2023.1162454
Gough, D., Oliver, S., & Thomas, J. (2021). An introduction to systematic reviews (2nd ed.).
SAGE.
He, Y. (2024). A reflection on EFL learners’ motivation to write with automated writing
evaluation. The International Review of Research in Open and Distributed Learning,
25(2), 181–199.
https://doi.org/10.19173/irrodl.v25i2.7769
Jaramillo, J. J. (2025). From struggle to mastery: AI-powered writing skills in ESL contexts.
Applied Sciences, 15(14), Article 8079.
https://doi.org/10.3390/app15148079
Kitchenham, B., & Charters, S. (2021). Guidelines for performing systematic literature reviews
in software engineering (updated ed.). EBSE.
Lee, S. S., & Moore, R. L. (2024). Harnessing generative AI for automated feedback in higher
education: A systematic review. Online Learning Journal, 28(3), 1–22.
https://doi.org/10.24059/olj.v28i3.4312
Liu, W. (2024). A systematic review of automated writing evaluation feedback: Validity, effects,
and student engagement. Language Teaching Research Quarterly, 45, 86–105.
https://doi.org/10.32038/ltrq.2024.45.05
Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: A mixed-
methods intervention study. Smart Learning Environments, 11(1), Article 5.
https://doi.org/10.1186/s40561-024-00295-9
Marzuki, M., Fadhli, R., & Widodo, A. (2023). The impact of AI writing tools on content and
structure of students’ essays. Cogent Education, 10(1), 2236469.
https://doi.org/10.1080/2331186X.2023.2236469
Mayer, R. E. (2020). Multimedia learning (3rd ed.). Cambridge University Press.
Mekheimer, M. (2025). Generative AI-assisted feedback and EFL writing: Proficiency, revision
frequency, and writing quality. Discover Education, 4(1), Article 21.
https://doi.org/10.1007/s44217-025-00602-7
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2020). Preferred reporting items for
systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7),
e1000097. https://doi.org/10.1371/journal.pmed.1000097
Nunes, A., Cordeiro, C., Limpo, T., & Castro, S. L. (2022). Effectiveness of automated writing
evaluation systems in school settings: A systematic review. Journal of Computer Assisted
Learning, 38(2), 289–304.
https://doi.org/10.1111/jcal.12621
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., …
Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting
systematic reviews. BMJ, 372, n71.
https://doi.org/10.1136/bmj.n71
Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for
research. Frontiers in Psychology, 8, Article 422.
https://doi.org/10.3389/fpsyg.2017.00422
Sadigzade, Z. (2025). AI-powered feedback in ESL writing classes: Pedagogical opportunities
and ethical concerns. Journal of Azerbaijan Language and Education Studies, 5(1), 44–61.
Sarıca, T., & Deneme Gençoğlu, S. (2025). EFL students’ perceptions of AI-assisted writing
tools: A systematic narrative hybrid review. The Literacy Trek, 11(1), 33–55.
https://doi.org/10.33531/literacytrek.2025.11.1.33
Setiawan, F. (2025). Exploring artificial intelligence as automated feedback: Grammarly,
QuillBot, and Ginger in EFL contexts. Eternal (English Teaching Journal), 11(1), 1206–1220.
https://doi.org/10.26877/eternal.v11i1.1206
Shi, H., & Aryadoust, V. (2024). A systematic review of AI-based automated written feedback
research. ReCALL, 36(1), 85–104.
https://doi.org/10.1017/S0958344023000336
Steiss, J. (2024). Comparing the quality of human and ChatGPT feedback: A formative feedback
study. Computers & Education, 205, Article 104890.
https://doi.org/10.1016/j.compedu.2024.104890
Sweller, J., Ayres, P., & Kalyuga, S. (2019). Cognitive load theory (2nd ed.). Springer.
Wilson, J., & Roscoe, R. D. (2020). Automated writing evaluation and feedback: Multiple
metrics of efficacy. Journal of Educational Computing Research, 58(1), 39–70.
https://doi.org/10.1177/0735633119830766
Woo, D. J., Susanto, H., & Guo, K. (2023). EFL students’ attitudes and contradictions in a
machine-in-the-loop activity system. arXiv.
https://doi.org/10.48550/arXiv.2307.13699
Ya, W., Zhang, Y., & Chen, Q. (2025). Practical and ethical challenges of large language models
in education: A systematic scoping review. arXiv.
https://doi.org/10.48550/arXiv.2303.13379
Yan, L., Xu, J., Wang, T., & Li, P. (2025). From assessment to practice: Implementing the AIAS
framework in EFL teaching and learning. arXiv.
https://doi.org/10.48550/arXiv.2501.00964
Zhang, K., Liu, J., & Zhao, H. (2025). Enhancing critical writing through AI feedback: A
randomized trial. Journal of Educational Technology & Society, 28(2), 77–90.
Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into
Practice, 41(2), 64–70.
https://doi.org/10.1207/s15430421tip4102_2
Copyright (2026) © Roberto Fernando Lozada Lozada, Maicol Alexander Suntasig Guallichico,
Ximena Alexandra Estrada Chango, Jhessica Alexandra Jumbo Obaco, Cristian David Chucho
Muñoz, Luis Alberto Deleg Juela
This text is protected under a Creative Commons 4.0 International license. You are free to
Share (copy and redistribute the material in any medium or format) and to Adapt (remix,
transform, and build upon the material) for any purpose, even commercially, provided that you
comply with the attribution conditions. You must give appropriate credit to the original work,
provide a link to the license, and indicate if changes were made. You may do so in any
reasonable manner, but not in any way that suggests that the licensor endorses you or your use
of the work.