
Can AI Bring Thinking Back?

Updated: Dec 22, 2025

English Language Education in the Age of Screens, Shortcuts and Spectacle


Author

Shams Bhatti

(English Language Teacher and Educational Consultant)

Affiliation: Keystone Academic Consultants Limited

 

Author note

The author is an English language teacher with over twenty-four years of professional experience across universities, industrial training centres, corporate education contexts (including Aramco) and further education colleges in the United Kingdom. His work spans ESOL, Functional Skills, ESL, EFL, EAL, ESP and post-16 English provision, with a sustained focus on literacy development, assessment integrity and academically rigorous pedagogy. Alongside classroom teaching, he is engaged in curriculum design, educational content development, the production of formative and summative assessment resources and mark schemes, proofreading services and academic leadership through Keystone Academic Consultants Limited. His professional perspective is shaped by long-term practice at the intersection of teaching, assessment and policy, and by a commitment to quality-driven, ethically grounded and technology-aware education.



Why this matters now

As generative artificial intelligence (AI) becomes an everyday presence in schools, colleges and universities, English language education finds itself at a critical crossroads. On the one hand, AI threatens authorship, assessment validity and the cognitive processes through which reading and writing develop. On the other hand, it may represent one of the few screen-based technologies capable of redirecting attention away from socially corrosive platforms towards sustained, knowledge-rich engagement. This article argues that British education policy must move beyond a narrow risk-management stance and establish a safeguarded, human-centred settlement that protects literacy, integrity and moral development while harnessing AI’s genuine educational potential.

 


Introduction

The integration of generative AI into education is frequently framed as a technical or managerial challenge. In English language education, however, it is fundamentally a cognitive and ethical endeavour. AI now operates at the point where learners form sentences, structure arguments, and rehearse ideas—activities that have traditionally been the very substance of learning.

Classroom evidence increasingly reveals a troubling pattern: learners submit fluent, AI-assisted coursework but struggle to reproduce comparable quality under controlled assessment conditions. At the same time, attention spans are being eroded by short-form video, algorithmic feeds and continuous digital stimulation. In this context, AI is often treated solely as a threat. This article contends that such a binary framing is inadequate. The real policy challenge is not whether AI should be resisted or embraced, but how it should be governed, pedagogically framed and morally situated within English language teaching and learning.

 

From CALL and MALL to Generative AI: a structural shift


Earlier waves of educational technology, including Computer-Assisted Language Learning (CALL) and Mobile-Assisted Language Learning (MALL), functioned primarily as tools. They supported practice, extended exposure and enriched classroom activity, but they did not usually replace learner cognition. Access, however, was uneven. CALL and MALL depended on infrastructure, institutional investment and trained staff, leaving many learners—particularly in deprived contexts—excluded.


Generative AI marks a structural departure. It is not merely assistive but performative. With minimal infrastructure, often just a smartphone, learners can now generate grammatically fluent text, summaries and explanations on demand. This shift dramatically lowers access barriers, especially for English learners globally. At the same time, it introduces unprecedented risks to authorship, independence and skill formation.


The contrast between the United Kingdom’s cautious, safeguard-oriented policy stance and China’s early, curriculum-embedded AI education for learners as young as six highlights divergent philosophies. Britain emphasises professional judgement, risk mitigation and assessment integrity. China treats AI literacy as a strategic national capability to be developed from early schooling. Neither approach is inherently superior, but the comparison underscores the need for Britain to articulate a clear, education-specific rationale for AI use in English language learning.

 

Attention, memory and the erosion of deep reading


The cognitive challenges confronting English language education did not begin with AI. Long before generative models became mainstream, short-form video platforms and social media feeds were reshaping attention. Rapid task-switching, novelty-seeking and emotional stimulation have become habitual, with significant implications for literacy development.


Language learning depends on repetition, retrieval and sustained exposure. Deep reading, in particular, is a learned cognitive practice that supports syntactic awareness, inferencing, vocabulary growth and conceptual depth. When learners replace extended reading with summaries or fragments, they lose more than information; they lose the conditions under which language competence matures.


From a cognitive science perspective, the distinction between performance and learning is critical. Apparent fluency produced quickly—whether through skimming or AI assistance—often masks fragile understanding. Conditions that feel efficient can be educationally deceptive, producing short-term gains without long-term retention or transfer.

 

Cognitive bypass and the integrity gap in English writing


Writing in English-language education is not simply a communicative outcome; it is a process of thinking. Planning, drafting, revising and editing are the mechanisms through which learners internalise grammar, register and discourse structure. When AI generates text on a learner’s behalf, these mechanisms are bypassed.


This has produced what may be termed an integrity gap: a widening distance between the quality of submitted coursework and the competence demonstrated under exam conditions. Regulators and awarding bodies have responded appropriately by strengthening malpractice guidance and emphasising centre responsibility. Yet policy responses focused solely on detection and prohibition risk missing the developmental issue. The deeper problem is not misconduct alone but missed learning.


For English learners, habitual reliance on AI risks creating surface fluency without control, confidence without competence and correctness without authorship.

 

AI as cognitive counterweight: from dopamine loops to dialogue


While these risks are real, it would be a mistake to treat AI as the primary cause of cognitive decline. The more damaging influence on attention and memory arguably comes from platforms designed to monetise distraction, outrage and performative identity. Against this backdrop, AI is comparatively morally quiet. It does not demand validation, reward exhibitionism or amplify extremity.


Used deliberately, AI can support cognitive dialogue rather than dopamine-driven consumption. It can facilitate clarification, rehearsal, comparison, feedback and revision. In this sense, AI may function as a form of educational harm reduction: redirecting screen time from socially corrosive feeds towards language-rich, reflective interaction.


This reframes the familiar contrast between the “howling individual” and the scholar. The problem is not that scholarship lacks value, but that contemporary attention economies fail to reward it. AI cannot repair that economy alone, but it can offer learners an alternative cognitive pathway—if education policy chooses to cultivate it.

 

Speaking, listening and the limits of simulation


AI-based speaking and listening tools are among the most promising areas for English language education. They can reduce anxiety, increase practice opportunities and provide immediate feedback. Evidence suggests they can support confidence and engagement, particularly for learners with limited access to authentic interaction.


However, simulation has limits. Real communicative competence involves unpredictability, turn-taking pressure, pragmatic judgement and social consequence. AI should therefore be framed as a rehearsal space, not a replacement for human interaction. Policy must ensure that gains in confidence translate into authentic communicative performance.


Policy implications for English language education in Britain


A credible British response to AI in English-language education requires a shift from reactive safeguarding to intentional design. The following reforms offer a balanced settlement.

 

1. Establish AI literacy within English curricula

AI literacy should be defined as a language outcome: evaluating accuracy, register, coherence and appropriacy; revising AI output; justifying linguistic choices. This is not computing education but advanced literacy.

2. Redesign assessment to foreground process

Assessment must make thinking visible through drafts, annotations, in-class writing, oral defences and reflective commentaries. This protects validity while preserving meaningful coursework.

3. Strengthen safeguarding and procurement standards

Clear national expectations for data protection, age appropriacy and tool approval are essential, particularly given emerging risks such as AI-enabled harassment.

4. Protect deep reading as cognitive infrastructure

Sustained reading must be treated as a protected outcome. AI should support reading through scaffolding and discussion, not replace engagement with texts.

5. Reposition AI as an alternative to harmful screen practices

Policy should explicitly frame AI as a preferable cognitive activity compared with attention-extractive platforms, without presenting it as a substitute for teachers or books.

6. Invest in sustained teacher development

Teacher capacity is the system bottleneck. Professional development must address pedagogy, assessment design, safeguarding and ethical judgement, not merely tool familiarity.

 

Conclusion


British education policy is right to prioritise safeguarding, assessment integrity and professional judgement. Yet a policy stance defined primarily by caution risks underusing AI’s capacity to support literacy, reflection and moral seriousness. The choice is not between banning AI and surrendering to it, but between allowing it to inherit the logic of the attention economy or reclaiming it as a tool for thinking.


In English-language education, where reading, writing, and speech shape identity and opportunity, the stakes are especially high. A safeguarded, human-centred AI settlement can protect authorship, restore cognitive discipline and offer learners an alternative to the noise of spectacle. If implemented with seriousness and intent, AI may not eclipse the scholar but help make scholarship visible again.

 

References


Bjork, R.A. and Bjork, E.L. (2011) ‘Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning’, in Gernsbacher, M.A., Pew, R.W., Hough, L.M. and Pomerantz, J.R. (eds.) Psychology and the real world: Essays illustrating fundamental contributions to society. New York: Worth Publishers, pp. 56–64.


City & Guilds (2024) Managing cases of suspected malpractice in examinations and assessments. Available at:

(Accessed: 20 December 2025).

 

Department for Education (2023, updated 2025) Generative artificial intelligence (AI) in education. London: DfE. Available at:

(Accessed: 20 December 2025).

 

Department for Education (2025) Artificial intelligence in schools and colleges: Everything you need to know. The Education Hub. Available at:

(Accessed: 20 December 2025).


Du, J., Gao, X. and Wang, S. (2024) ‘Artificial intelligence-powered chatbots in English as a second or foreign language learning: A systematic review’, Computer Assisted Language Learning.


Ofqual (2024) Ofqual’s approach to regulating the use of artificial intelligence in the qualifications sector. Coventry: Ofqual. Available at:

(Accessed: 20 December 2025).

 

Ofsted (2024, updated 2025) Ofsted’s approach to artificial intelligence. Manchester: Ofsted. Available at:

(Accessed: 20 December 2025).


Orben, A. and Przybylski, A.K. (2019) ‘The association between adolescent well-being and digital technology use’, Nature Human Behaviour, 3, pp. 173–182. Available at:

(Accessed: 20 December 2025).

 

UNESCO (2023, updated 2025) Guidance for generative AI in education and research. Paris: UNESCO. Available at:

(Accessed: 20 December 2025).


Wolf, M. (2018) Reader, come home: The reading brain in a digital world. New York: HarperCollins.

 


© Shams Bhatti, Keystone Academic Consultants Ltd. All rights reserved.


This material has been created by Shams Bhatti and is published by Keystone Academic Consultants Ltd. It may be used, printed and shared for non-commercial educational purposes, including classroom teaching, learner self-study and internal staff training, provided that the material is not altered and that full credit is given to the author and the company.


This material may not be sold, republished, uploaded to public websites, adapted, or incorporated into commercial products or services without prior written permission from the copyright holder.

 
 
 

