Generative artificial intelligence (GenAI) has rapidly entered English as a foreign language (EFL) academic writing through tools that draft, paraphrase, translate, summarize, evaluate, and imitate disciplinary texts. The central question for language education is no longer whether learners will encounter these tools but how pedagogies can help them use AI critically, ethically, and rhetorically. This article offers an integrative conceptual review of peer-reviewed scholarship on AI-assisted writing, automated writing evaluation, digital literacies, identity, academic integrity, and EFL/ESL pedagogy. Drawing on studies from applied linguistics, language education, educational technology, and discourse studies, this paper synthesizes four recurring issues: AI writing systems as feedback infrastructures, learner agency and identity in human-AI composing, the risks of dependency and homogenized discourse, and the need for assessment practices that value process evidence rather than detection alone. The review argues that GenAI should be conceptualized as a literacy environment that mediates language, power, authorship, and intercultural communication. It proposes a Critical GenAI Writing Literacy Cycle comprising six stages: orienting to task and genre, prompting strategically, comparing outputs, verifying evidence, transforming texts through human revision, and disclosing AI use through reflective accountability. The framework contributes to the scope of JLLI by connecting applied linguistics, technology-enhanced language learning, digital discourse, cultural studies, and language education. It concludes that responsible GenAI integration requires pedagogical designs that protect linguistic diversity, strengthen critical reading, and position EFL writers as accountable authors rather than passive consumers of machine-generated prose.
Academic writing has always been technologically mediated. Dictionaries, grammar handbooks, corpora, word processors, translation systems, plagiarism checkers, automated writing evaluation (AWE), and learning management systems have all shaped students’ planning, drafting, revising, and submitting of texts. However, GenAI has intensified this mediation because it can produce extended, fluent, and seemingly context-sensitive prose within seconds. This development is particularly consequential for EFL learners. GenAI tools can make English academic writing more accessible by offering lexical alternatives, rhetorical models, feedback, translation support, and opportunities for low-stakes practice to non-native English speakers. At the same time, they can obscure authorship, reproduce linguistic norms without critique, fabricate evidence, weaken engagement with sources, and invite surveillance-based responses, such as automated AI detection. Therefore, the challenge for language educators is not simply technological; it is linguistic, cultural, ethical, and pedagogical.
The rise of GenAI must be understood within the context of the longer history of AI and automated feedback in writing instruction. Research on AWE shows that automated systems can support revision when embedded in guided pedagogy, but their feedback is limited when learners treat scores or suggestions as authoritative substitutes for rhetorical judgment (Li et al., 2015; Warschauer & Ware, 2006; Zhang & Hyland, 2018). Recent research on ChatGPT and related tools has extended this debate. Kohnke et al. (2023) describe potential affordances for language teaching and learning while emphasizing the need for digital competencies. Godwin-Jones (2022) similarly frames intelligent writing assistance as a partnership that requires pedagogical orchestration. Studies with EFL learners indicate that AI writing tools may enhance motivation, organization, and perceived writing confidence; however, these benefits coexist with concerns about dependency, creativity, critical thinking, and integrity (Darwin et al., 2024; Marzuki et al., 2023; Song & Song, 2023; Xiao & Zhi, 2023).
A narrow response to GenAI often reduces the issue to cheating. Such a response is understandable because universities must protect the validity of assessments and academic integrity. However, this is insufficient for language education. If teachers only prohibit AI or rely on detectors, they miss the chance to teach students how machine-generated language works, how prompts shape discourse, why evidence must be verified, and how multilingual writers can retain their voice in standardized academic genres. Moreover, AI detectors have been shown to be biased against non-native English writing, making punitive detection especially risky in EFL and international educational contexts (Liang et al., 2023). A more constructive approach is to treat GenAI as a site for critical digital literacy: learners should learn to interrogate outputs, compare alternatives, evaluate evidence, reflect on authorship, and make ethical choices regarding disclosure.
This study develops an integrative review and conceptual framework for the use of GenAI in EFL academic writing. This aligns with the scope of the Journal of Language and Literature Inquiry, which brings together applied linguistics, language education, digital discourse, communication, and culture. The study addresses three guiding questions. First, what benefits and risks have been reported in the scholarship on AI-mediated EFL and L2 academic writing? Second, what dimensions of critical digital literacy are needed when EFL writers compose texts using GenAI? Third, what pedagogical framework can help teachers integrate GenAI responsibly while protecting learner agency, academic integrity, and linguistic diversity? This paper synthesizes dispersed literature into a critical language-education framework that can guide curriculum design, classroom practice, and future empirical research.
Digital literacy in language education is not merely the ability to operate software programs. It includes the capacity to participate in digitally mediated communication, interpret multimodal texts, evaluate information, and understand how technology shapes meaning. Hafner et al. (2015) argue that digital literacies are central to language learning because learners increasingly communicate through networked, multimodal, and participatory environments. Reinhardt (2019) shows that social media and digital platforms create authentic opportunities for language learning, but also require learners to navigate context, identity, and audience. Kern (2014) uses the metaphor of technology as pharmakon, both remedy and poison, to emphasize that technological mediation can expand learning while also generating new vulnerabilities. This ambivalence is highly relevant to GenAI.
Critical digital literacy extends digital literacy by asking who benefits from a technology, whose language is privileged, what assumptions are embedded in systems, and how users can act with agency. In applied linguistics, the questions of agency and identity are inseparable from language learning. Peirce (1995) introduced investment as a way to understand why learners engage with language practices in relation to their identity, power, and imagined futures. Darvin and Norton (2015) later expanded investment to include ideology, capital, and identity, offering a useful lens for AI-mediated writing research. When EFL learners use GenAI, they do not simply receive technical assistance; they negotiate access to valued academic discourse, institutional expectations, and symbolic capital. The output of an AI system may appear neutral, but it often reflects dominant academic registers and cultural assumptions that can shape how learners understand their writing identities.
Intercultural and sociolinguistic perspectives are also necessary for this study. Baker (2012) argues for intercultural awareness that recognizes culture and language as dynamic rather than fixed. Androutsopoulos (2015) similarly conceptualizes digital multilingualism as shaped by networked audiences, technologies, and global media flows. These ideas matter because GenAI can standardize expression toward dominant Anglophone norms, potentially smoothing out local voices, translanguaging practices, and culturally situated rhetorical choices. Therefore, a critical approach to GenAI must not only ask whether a text is grammatically accurate but also whether it represents the writer's intended stance, audience, discipline, and cultural positioning.
The term critical GenAI writing literacy is used in this article to refer to the ability to use GenAI in academic writing through informed, ethical, reflective, and rhetorically purposeful practices. This includes prompt literacy, output evaluation, source verification, genre awareness, revision judgment, disclosure, and awareness of bias. This definition draws on recent work that explicitly links critical digital literacy to AI-mediated language learning and writing. Darvin (2025) argues that GenAI-mediated L2 writing requires critical digital literacy because learners must negotiate power, authorship, and algorithmic mediation. Liu et al. (2025) show that critical digital literacies are connected to agentic practices in AI-mediated informal learning of English. Lim and Darvin (2026) further emphasize the negotiation of agency in human-AI interactions. Together, these studies suggest that language education should move beyond tool training and toward a critical inquiry into how AI shapes writing, learning, and identity.
This study adopts an integrative conceptual review rather than a systematic review or meta-analysis. The purpose of this study is to synthesize established and emerging scholarship across related fields and generate a pedagogical framework for EFL academic writing. An integrative approach is suitable because GenAI writing research is developing quickly and draws on multiple traditions, including AWE, computer-assisted language learning, applied linguistics, digital literacy, academic integrity, and EdTech.
The literature base was selected through a relevance-oriented review of peer-reviewed journal articles in Scopus- or Web of Science-indexed venues and other recognized scholarly outlets in language learning and educational technology. Priority was given to articles that addressed at least one of four areas: AI or automated feedback in L2/EFL writing, GenAI in language teaching and learning, digital or critical digital literacies, and ethical issues such as integrity, authorship, bias, and assessment. Foundational studies on learner identity and investment were included because they provide a theoretical basis for understanding agency in AI-mediated writing. Recent studies from 2023 to 2026 were prioritized for GenAI-specific discussions, whereas earlier AWE and digital literacy studies were included to avoid treating GenAI as an entirely new phenomenon detached from previous scholarship.
The analysis followed a thematic synthesis. After repeated readings of the selected literature, recurring concepts were organized into four themes: feedback infrastructure, agency and identity, risks and ethical tensions, and pedagogical redesign. These themes were then used to construct the proposed Critical GenAI Writing Literacy Cycle. This review does not claim exhaustive coverage of all GenAI publications and does not estimate effect sizes. Instead, it offers a theoretically informed synthesis that can be tested and refined in future studies.
3.1. Theme 1: AI Writing Systems as Feedback Infrastructures
A major affordance of AI writing tools is the feedback they provide. Long before ChatGPT became widely discussed, AWE systems were used to provide feedback on grammar, vocabulary, mechanics, organization, and scores. Early research warned that automated systems should not be evaluated only by technical accuracy but also by how they are integrated into classroom learning (Warschauer & Ware, 2006). Li et al. (2015) found that AWE feedback can support ESL writing when students are guided to interpret and revise rather than simply accept machine suggestions. Zhang and Hyland (2018) showed that learners' affective, behavioral, and cognitive engagement mediates the usefulness of teacher and automated feedback. Ranalli (2021) similarly highlighted the importance of trust and engagement when L2 students use automated feedback. These studies show that feedback is not inherently pedagogical; it becomes so through learner interpretation, teacher mediation, and revision practice.
Meta-analytic and experimental studies provide further insights. Zhai and Ma (2023) reported that AWE can improve writing quality, while Ngo et al. (2024) synthesized evidence on AWE effectiveness in EFL/ESL writing across studies. Wei et al. (2023) found positive impacts of AWE in a randomized controlled trial with Chinese EFL learners. However, such findings should not be generalized to claim that automated feedback is always beneficial. Automated systems tend to privilege features that can be detected, scored, or generated, which means they may overemphasize surface correctness and underemphasize arguments, evidence, reader engagement, and disciplinary reasoning. In academic writing, the most important decisions often involve not merely how to phrase an idea but which idea is worth developing and how it is supported.
GenAI expands automated feedback from correction to co-composition. Tools can propose thesis statements, reorganize paragraphs, summarize sources, generate counterarguments, and rewrite texts in different registers. Kohnke et al. (2023) describe opportunities for language learning, including conversation practice and writing support, while also noting the need for teacher and learner competencies. Song and Song (2023) reported that ChatGPT may enhance academic writing skills and motivation among EFL students. Xiao and Zhi (2023) similarly documented learners' experiences using ChatGPT for language learning tasks. These findings support the view that GenAI can provide scaffolding, especially for students who struggle with lexical access, genre familiarity, or confidence.
However, the pedagogical value of GenAI feedback depends on task design. If students ask a tool to correct a draft and then submit the output, the feedback collapses into substitution. If they compare the AI response with their original text, annotate accepted and rejected changes, and explain their revision decisions, feedback becomes an opportunity for developing metalinguistic awareness. Therefore, teachers need to shift their focus from product accuracy to process accountability. The central classroom question should not be whether AI improves the text but what the learner learns from evaluating, revising, and transforming AI-mediated feedback.
3.2. Theme 2: Agency, Identity, and Voice in Human-AI Composing
EFL academic writing is not only a skill of sentence production; it is also a process of becoming a participant in academic discourse. Learners must negotiate what counts as evidence, how to position claims, how to express caution and cite others, and how to project an authorial voice. Investment theories show that learners' engagement with language practices is shaped by identity, power, and access to valued forms of capital (Darvin & Norton, 2015; Peirce, 1995). GenAI intensifies this negotiation because it offers immediate access to polished academic language that may appear more legitimate than the learner's developing voice.
Recent scholarship suggests that AI tools can both empower and constrain L2 writers. Moorhouse et al. (2025) argue that GenAI tools can empower L2 academic writers by supporting idea development, language refinement, and confidence. Liu et al. (2025) connect AI-mediated English learning with agentic practices, showing that learners can appropriate tools for their own purposes. Simultaneously, Darvin (2025) warns that GenAI-mediated L2 writing requires critical digital literacy because agency is negotiated within systems that encode power relations. Lim and Darvin (2026) similarly describe human-AI interaction as a space where learners negotiate agency rather than simply operate tools.
Voice is a key issue. Academic writing often requires conventional forms of coherence, hedging, and citation; however, voice is also built through choices of argument, evidence, stance, rhythm, and cultural reference. GenAI can help learners approximate a disciplinary style, but it may also produce generic prose that sounds fluent while lacking situated meaning. Kubota (2023) argues that AI-assisted L2 writing contains contradictions, including the tension between supporting multilingual writers and reproducing dominant linguistic standards. This contradiction is visible when students use AI to make their writing sound more 'native-like.' Such revisions may improve readability, but they can also reinforce the idea that the best academic English is standardized, decontextualized, and detached from the writer's linguistic history.
Therefore, a critical pedagogy of GenAI should distinguish between support and replacement. Support occurs when AI helps learners notice alternatives, test rhetorical options, and strengthen clarity while retaining ownership of meaning. Replacement occurs when AI determines the argument, evidence, structure, and voice. Teachers can make this distinction teachable by asking students to submit revision rationales. For example, students might identify three AI suggestions they accepted, three they rejected, and one sentence in which they intentionally restored their original voice. Such practices make agency visible and help learners understand that achieving academic fluency does not require surrendering authorship.
3.3. Theme 3: Risks, Ethics, and Academic Integrity
The risks of GenAI in EFL academic writing are real and significant. The first risk is dependency on technology. Students who rely heavily on AI for idea generation, translation, paraphrasing, and revision may have fewer opportunities to productively struggle with language. Struggle is not a romantic ideal; it is part of developing rhetorical control, lexical precision, and disciplinary thinking. Darwin et al. (2024) show that EFL students perceive both benefits and limitations of AI in relation to critical thinking. Marzuki et al. (2023) similarly report teacher concerns about the effects of AI writing tools on content, organization, creativity, and originality. These concerns suggest that AI integration should be accompanied by explicit learning outcomes and not left to informal student experimentation.
The second risk is epistemic authority. GenAI outputs often present information in a confident style, even when the content is inaccurate or unsupported. In academic writing, such fluency can mislead students into treating the generated text as knowledge. This is especially dangerous when learners are unfamiliar with a topic, discipline, or citation conventions. AI can summarize concepts, but it cannot replace careful reading of peer-reviewed sources, method evaluation, or understanding of theoretical debates. Therefore, source verification must be taught as a core skill of academic writing. Students should be required to trace every factual claim to a credible source and distinguish between AI-generated wording and evidence from the literature.
The third risk concerns integrity and surveillance. Lund et al. (2023) argue that large language models create new ethical questions for scholarly publishing, including authorship and transparency. Cotton et al. (2024) discuss academic integrity in the era of ChatGPT and emphasize the need for assessment redesign. Perkins (2023) similarly argues that integrity policies must address AI tools rather than assuming that traditional plagiarism frameworks are sufficient. However, a detection-centered approach may be harmful. Liang et al. (2023) found that GPT detectors can be biased against non-native English writers. In EFL contexts, this finding is critical because students may already write differently from native-speaker expectations. Punitive detection can therefore reproduce inequality by treating linguistic difference as grounds for suspicion.
A more balanced integrity approach combines transparency, assessment redesign, and education. Teachers can require AI-use statements that specify which tools were used, for what purposes, and how students evaluated or revised outputs. Assessment can include drafts, prompt logs, annotated AI outputs, oral defenses, or reflective memos. These practices are not merely policing mechanisms; they make the development of writing visible. They also teach students that ethical AI use involves accountable authorship. The goal is not to eliminate AI from writing, which may be unrealistic and pedagogically narrow, but to ensure that students remain responsible for their claims, sources, structure, and final language choices.
3.4. Theme 4: Toward a Critical GenAI Writing Literacy Cycle
The reviewed literature points to a central pedagogical principle: GenAI should be integrated as an object of inquiry as well as a tool for support. Students need opportunities to use AI, critique it, revise with it, and reflect on its effects. This article proposes the Critical GenAI Writing Literacy Cycle as a practical framework for EFL academic writing courses. The cycle consists of six recursive stages:
The first stage is task and genre orientation. Before using AI, students identify the assignment's purpose, audience, genre, criteria, and source requirements. This step prevents prompt writing from replacing task interpretation. Students might analyze model articles, identify rhetorical moves, and discuss how different disciplines organize claims and evidence. The second stage is strategic prompting. Students learn that prompts are not neutral instructions; rather, they frame the audience, role, stance, genre, and constraints. Prompt literacy includes asking for alternatives, requesting explanations, and limiting unsupported assertions. The third stage involves comparing outputs. Students generate or examine more than one AI response and compare differences in accuracy, tone, evidence, organization, and cultural assumptions. This stage develops critical reading because students must judge rather than accept what they read.
The fourth stage is verification of evidence. Students check claims against peer-reviewed sources, course readings, and credible databases. They learn that AI output is not a citable authority unless the tool itself is analyzed as an object of study. The fifth stage is transformation through human revision. Students use their own reading, arguments, disciplinary knowledge, and linguistic judgment to reshape the text. This stage emphasizes that revision is not a cosmetic correction but a rhetorical decision-making process. The sixth stage is disclosure and reflection. Students submit a brief AI-use statement and a reflection explaining what the tool contributed, what the student changed, and what limitations or risks they identified.
This cycle can be implemented through classroom activities. In an AI contrastive reading log, students compare an AI-generated paragraph with a paragraph from a published article and identify differences in evidence, specificity, and authorial stance. In a prompt-evolution portfolio, students document how changing prompts changes output quality and explain what they learned about genre and audience. In an output annotation task, students highlight claims that require verification, sentences that sound generic, and phrases that need localization. In a voice-restoration task, students revise an AI-polished paragraph to recover their intended stance, cultural examples, or disciplinary nuances. In an integrity memo, students state how AI was used and take responsibility for the final claims and citations.
The cycle also supports the redesign of assessment. Rather than grading only the final essay, teachers can assess process evidence, including planning notes, prompt logs, source verification tables, annotated drafts, and revision rationales. Rubrics can evaluate four dimensions: rhetorical purpose, evidence and verification, language and genre control, and ethical disclosure. This design reduces the temptation to outsource the entire task because students know that their processes will be evaluated. It also reduces unfair reliance on detectors, which are not reliable enough to serve as the sole basis for judgment, especially in multilingual contexts (Liang et al., 2023).
The framework requires professional development for teachers. Many language teachers are expected to manage AI-related issues without clear institutional guidance. Teacher learning should include hands-on exploration of AI tools, analysis of sample outputs, discussion of policies, and development of assessment tasks that fit the local context. Because access to AI tools is unequal, institutions should also consider whether students have comparable access, whether paid tools create advantages, and whether data privacy policies are clear and enforced. Critical GenAI literacy is therefore not only a learner competence but also an institutional responsibility (see Table 1).
Table 1. Critical GenAI Writing Literacy Cycle for EFL Academic Writing
3.5. Classroom Implementation Scenario
To illustrate how the cycle can operate in an EFL academic writing course, consider a four-week unit on writing literature-based argumentative essays. In week one, students read two short peer-reviewed articles on a shared topic and identify thesis statements, citation practices, hedging, and paragraph structure. Before any AI use is allowed, they write a brief problem statement and explain the intended audience of their essay. This ensures that the learner, not the machine, initiates the writing process. The teacher then demonstrates how different prompts produce different rhetorical outcomes. A vague prompt such as 'write an introduction about digital literacy' is compared with a constrained prompt that specifies audience, source limits, disciplinary stance, and the requirement to avoid invented references. Students discuss which output is more useful and which still requires human judgment.
In week two, students create a prompt log and request three possible outlines from a GenAI tool. They are not permitted to copy an outline directly. Instead, they annotate each outline by labeling claims that are too general, assumptions that are culturally narrow, and points that require evidence. They then design their own outline using course readings and database sources. This task turns the AI output into an object of reading and planning. It also helps students see that fluent structures do not guarantee scholarly adequacy.
In week three, students independently draft one body paragraph and then ask AI for feedback on clarity, cohesion, and counterargument. They must separate language-level suggestions from content-level ones. For every accepted AI suggestion, they write a short rationale; for every rejected suggestion, they explain why it would weaken the argument, misrepresent the evidence, or erase their intended voice. Peer review follows, allowing students to compare human and AI feedback. This sequence positions AI as one feedback source among others, rather than as the final evaluator.
In week four, students submit a final essay with a process portfolio. The portfolio includes the original problem statement, prompt log, annotated AI outputs, source verification notes, draft excerpts, and AI-use disclosure. The final grade assigns significant weight to rhetorical development, evidence, source integration, and reflection. Such a design makes misconduct less attractive because the process is visible, but it also avoids treating students as potential suspects. More importantly, it teaches transferable practices: how to question generated language, protect ownership, and use digital tools without losing academic responsibility.
The synthesis shows that GenAI in EFL academic writing cannot be understood through a simple opposition between innovation and academic dishonesty. It is better understood as a new layer in the ecology of academic literacies. Like previous writing technologies, GenAI can scaffold learning by supporting noticing, practice, feedback, and revision. Unlike previous tools, it can produce large portions of text in ways that blur the boundary between assistance and authorship. This blurring creates both opportunities and dangers. The opportunity is that learners can access explanations, examples, and language support that may previously have been unavailable. The danger is that learners may accept fluent output without critical engagement, thereby weakening their source-based reasoning and voice.
The proposed framework responds to this ambiguity by positioning students as critical authors. It does not assume that AI is inherently empowering, nor does it assume that prohibitions are sustainable. Instead, it asks teachers to design tasks in which students must interpret AI outputs, verify claims, and make accountable decisions. This approach resonates with Kern's (2014) view of technology as both promise and peril and with digital literacy scholarship that treats learners as participants in mediated communication rather than passive users (Hafner et al., 2015; Reinhardt, 2019). It also extends investment theory by showing how AI tools mediate access to academic capital while potentially reshaping identity and voice (Darvin & Norton, 2015).
The contribution to JLLI's interdisciplinary scope is twofold. First, this article connects applied linguistics and language education to digital discourse and cultural studies. GenAI is a discourse technology that produces, circulates, and normalizes language. Its outputs shape how academic English is imagined and which voices are considered legitimate. Second, this study offers practical implications for curriculum and assessment. Language education should not only teach students to write better essays but also teach them to understand the conditions under which texts are produced. In the GenAI era, this includes understanding prompts, algorithms, biases, evidence, and disclosure.
The framework is especially relevant for EFL contexts in the Global South and other multilingual settings such as Malaysia. Students in these contexts may use AI to access academic English, but they may also encounter tools trained on dominant language norms and institutional policies developed in Anglophone countries. Therefore, critical GenAI literacy should include local language practices, cultural examples, and non-deficit views of multilingual writing. Teachers can encourage students to ask when AI makes a text clearer and when it makes it less meaningful. This question is central to the cultural and intercultural dimensions of language education.
4.1. Implications for Pedagogy, Policy, and Research
From a pedagogical perspective, the first implication is that AI use should be taught explicitly. Silence creates a hidden curriculum in which some students experiment productively while others either avoid useful tools or use them inappropriately. Explicit instruction can include demonstrations of hallucination, prompt comparison, source verification, and revision decision-making. The second implication is that writing assessment should value process. When only the final essay matters, GenAI becomes a shortcut. When planning, drafting, verification, and reflection are assessed, GenAI becomes a resource that students must manage critically. The third implication is that voice should be assessed in addition to accuracy. Teachers can ask whether the final text represents the student's argument, context, and intended stance, not merely whether it sounds grammatically polished.
Regarding policy, institutions should avoid rules that are either so broad that all AI use is treated as misconduct or so vague that students do not know what is allowed. A better policy distinguishes between prohibited uses, permitted support, required disclosure, and assessment-specific expectations. Policies should also address equity. Paid AI tools may create unequal access, and detection tools may misidentify multilingual writers. Therefore, policies should include due process, human review, and educational responses before punitive action.
Future studies should move beyond perceptions and short-term performance. Longitudinal classroom research is needed to examine whether critical AI pedagogy improves writing development, source use, and learner autonomy over time. Comparative studies across languages, disciplines, and regions can show how local contexts shape the adoption of AI. Further research is needed on teacher professional development, student disclosure practices, and the effects of AI on authorial voice. Finally, corpus-based and discourse-analytic studies can examine how AI-polished student writing differs from non-AI writing in terms of stance, cohesion, lexical diversity, citation behavior, and cultural specificity.
4.2. Limitations
This study has some limitations. This is an integrative conceptual review, not a systematic review with exhaustive database searches, formal screening statistics, or effect size calculations. The literature on GenAI is also developing rapidly, so any synthesis is provisional and subject to change. In addition, the proposed Critical GenAI Writing Literacy Cycle has not been empirically tested. It should be understood as a theoretically grounded framework for future classroom interventions, not as a validated model. Further empirical research should examine how the cycle works with learners at different proficiency levels, in different disciplines, and in contexts with unequal technological access.
GenAI has changed the conditions of EFL academic writing, but it has not removed the need for language pedagogy. On the contrary, it has made language pedagogy more important. Students need support not only in grammar and vocabulary, but also in judging evidence, reading AI outputs critically, protecting voice, understanding bias, and acting with academic integrity. The reviewed literature shows that AI tools can support feedback, confidence, and access to academic discourse; however, they can also encourage dependency, homogenize expression, and create new ethical risks. A critical literacy approach offers a balanced solution.
The Critical GenAI Writing Literacy Cycle proposed in this study provides teachers with a practical way to integrate AI without surrendering the goals of writing education. By orienting to genre, prompting strategically, comparing outputs, verifying evidence, transforming through human revision, and disclosing use, EFL learners can become accountable authors in AI-mediated environments and avoid academic dishonesty. For language and literature inquiry, the central issue is not whether machines can generate text but how human writers learn to interpret, challenge, and transform machine-mediated language in culturally meaningful ways.