Portrait 03 of 15

Malmi

Education in an AI-rich future

she/her · 29 · Wurundjeri Woi Wurrung Country (Melbourne), VIC

AI & education · Critical thinking

Digital fluency is not the same as digital wisdom.

As a primary school teacher in Melbourne, Malmi watches students use AI the way they use Google - faster, not deeper. She sees the gap widening between how fluently children use tools and how confidently they can think without them. Her portrait examines whether Australia is rushing AI into classrooms before children have the cognitive foundations to use it well, and what that choice will cost future generations.

Through Malmi’s eyes

Malmi (she/her), 29 | Wurundjeri Woi Wurrung Country, Kulin Nations (Melbourne, Victoria)

Malmi moves through the school with practiced alertness, shifting between students, devices, and the needs the system fails to register. Her days unfold as continuous triage, anchored in one commitment: “remain student-focused.” Yet the more she adjusts in real time, the more visible the gap becomes. The system around her was designed for an earlier era, and it now strains to keep pace with the digital world younger generations inhabit.

Her students swipe and tap with native ease, but their fluency is thin and often misses the practical competencies they need. “They may not know how to file documents, set up keyboard shortcuts, or use digital calendars,” she explains. And as generative AI enters the classroom, the gap widens. “Students don’t understand the nuances of how to use AI in ways that are effective,” Malmi observes. “They use it as a more advanced Google,” treating it as a faster search engine rather than a tool for deeper thinking. The tools outpace their literacy, and the confidence on display often conceals a quiet dependence. “It is risky to treat technology as the solution to all problems,” she cautions. Even well-intentioned adoption can backfire. A shortcut becomes a substitute, and soon the tool is doing the thinking the student never learned to attempt. “It’s almost a double-edged sword,” she points out. “Introducing these platforms without literacy can be dangerous.”

What she seeks is simple yet structurally profound: a curriculum that teaches students how to think ‘with’ technology. “It needs to be more explicitly taught,” she says, “how to use the tools in a way that helps you think” rather than outsourcing the thinking to them. “Technology should be an aid that helps humans be better at being human, not something we use for the sake of its existence.” She sees AI as another wave in a long line of tools. Her main concern is the rapid rollout paired with the assumption that exposure alone will deliver the competence schools expect. “Somewhere along the way we shifted to this idea that just by using the tools students would receive 21st-century skills. We need a higher standard for what good tech use is, because not all tech use is equal,” she argues. Her expectations remain grounded. “All technology has some impact. I know this will have an impact, I’m just not sure what it is yet.”

In the absence of clear policy direction, Malmi rewrites digital practices where she can, stitching literacy through the gaps the system leaves open. She knows, though, that individual effort can only go so far. The strain runs deeper than a single classroom and will shape the level of cognitive resilience young people will carry into the rest of their lives.

Australia’s education system was built for a slower world, long before algorithms and constant connectivity rearranged how children learn. Each year, the distance between use and understanding grows wider, and Malmi knows it will not close on its own. The faults she sees are not in the students. They sit in the assumptions holding up a system now struggling to keep pace with the world moving around it.

Current trajectory

The policy trajectory of AI and education

Malmi’s story reveals an education system integrating AI faster than it develops the cognitive skills and teacher capacity required to use it well. Policy assumes that exposure produces competence, allowing tools to scale quickly while instructional design, curriculum alignment, and assessment reform lag behind. As long as assessment continues to reward outputs over process, this gap widens. Teachers are incentivised to prioritise completion and correctness over reasoning, judgement, and responsible AI use. Surface-level digital engagement therefore expands more quickly than students’ capacity to evaluate, question, and think critically. If this trajectory continues, early reliance on automated support will shape learning pathways in lasting ways, embedding dependency, widening disparities in cognitive development, and limiting young people’s informed participation in civic and economic life.

Future generations potential

The potential of future generations policy to intervene

A future generations policy lens realigns AI adoption with developmental readiness and institutional capability. It sequences technological integration alongside curriculum reform, teacher training, and assessment redesign, ensuring that skills development keeps pace with tool deployment. This approach reduces dependency and strengthens durable cognitive skills. It builds educational resilience and equips future learners to think independently, evaluate critically, and adapt responsibly within AI-rich environments.

Policy landscape

Today’s policy landscape: AI in education

AI is moving rapidly into Australian classrooms, reshaping how students learn and how teachers teach. The central policy question is no longer whether AI belongs in education, but how, when, and under what safeguards it should be introduced to strengthen human capability rather than displace it.

Adoption is accelerating faster than preparedness. In Australia, 66% of teachers report using AI, nearly double the OECD average of 36%. Yet the 2025 AI Preparedness Survey by the Association of School Business Administrators found that no surveyed schools in Australia and New Zealand consider themselves fully prepared, and 54% lack any formal AI governance framework. Even with modest sample sizes, the pattern is clear: schools are absorbing powerful tools more quickly than the system can guide, regulate, or embed them responsibly.

This pace carries developmental risk. When schools introduce AI before students build foundational reasoning and metacognitive skills, technology can displace essential stages of cognitive formation. Adolescence is a critical period for forming independent judgement; premature AI integration can therefore have lasting consequences for workforce readiness, democratic resilience, and personal agency.

The risks are amplified by cognitive offloading, the outsourcing of mental effort to external tools. When adolescents rely on AI before their prefrontal cortex matures by their mid-20s, they may bypass the effort required to build discernment, persistence, and deep reasoning. Learning follows biological timelines, not political ones. Developmental windows that close cannot be reopened. Evidence suggests that over-reliance on AI reduces engagement in the higher-order thinking processes that human-to-human education cultivates. If dependency forms before critical thinking, deficits in judgment, creativity, and agency can persist into adulthood.

Weak critical-literacy provision compounds these vulnerabilities. Without strong programs, AI tools widen gaps in discernment and collective knowledge. Among adults, 97% demonstrate poor or limited ability to verify information online and often overestimate their competence. Younger users face greater risk. UNICEF's 2021 analysis notes that children are “particularly vulnerable to mis/disinformation” because their cognitive capacities are still developing, and other research links excessive or dependency-forming technology use to attentional deficits and mental-health challenges. In this context, AI can amplify the risks of misinformation when students lack the literacy to evaluate its outputs.

Governance has struggled to keep pace. States including New South Wales, Queensland, and Victoria have adopted differing approaches to AI in schools, producing a fragmented national landscape. In 2023, Education Ministers approved the Australian Framework for Generative AI in Schools, with implementation beginning in 2024. At the same time, broader misinformation reforms stalled amid concerns about government overreach, leaving the country reliant on voluntary industry codes with limited enforcement. The establishment of the Australian AI Safety Institute, announced in November 2025 and operational from early 2026, signals recognition of governance gaps, yet its focus on risk and compliance must be matched by equivalent investment in educational preparedness.

The result is structural misalignment. As national regulation lags, schools increasingly shoulder responsibility for preparing students to navigate AI-driven environments, compensating for governance failures beyond the classroom. Meanwhile, assessment regimes continue to prioritise outputs over reasoning, limiting incentives for the metacognitive and evaluative skills that responsible AI use requires.


The value of a future generations approach

When schools introduce high-capability AI tools before students develop reasoning, evaluation, and learning-to-learn skills, early reliance reshapes how those skills form. Habitual dependence can crowd out deliberate practice in analysis and synthesis, widening gaps in judgement, information literacy, and epistemic confidence. These effects do not remain confined to school. They shape workforce performance, civic participation, and the ability to navigate complex information environments throughout adult life.

A future generations policy approach places governance design at the centre of technological change. It aligns the pace and scope of AI adoption with stages of cognitive development and institutional capacity. Instead of prioritising exposure, it prioritises sequencing and readiness. Policy can specify when tools enter classrooms, under what pedagogical conditions they are used, and which safeguards accompany them. Such design strengthens critical reasoning, metacognition, and digital judgement before automation scales. When alignment holds, students use technology to extend thinking, not replace it.

International experience shows that alternatives exist. Singapore links long-horizon skills planning to curriculum design and ties assessment to developmental readiness, supporting capability formation across student cohorts. Estonia, on the other hand, deploys digital tools through teacher-led models supported by strong public digital governance, expanding access while reinforcing evaluation and reasoning within classrooms instead of substituting for them.

Future Generations Policy Analysis

A Future Generations Policy lens on the Australian Framework for Generative AI in Schools

Case Study: The Australian Framework for Generative AI in Schools

Released in November 2023, the Australian Framework for Generative AI in Schools sets national expectations for how AI should operate in education. Built around six principles (Teaching and Learning; Human and Social Wellbeing; Transparency; Fairness; Accountability; and Privacy, Security and Safety), it states that AI must enhance learning while protecting equity, human agency, and data privacy.

The Framework aligns with Australia’s broader policy architecture, including the AI Ethics Framework, the Alice Springs (Mparntwe) Education Declaration, and UN Sustainable Development Goal 4. Its most distinctive feature is its connection to the Australian Curriculum, particularly the Digital Technologies strand developed by ACARA, which provides the mandated structure for digital literacy across year levels.

In practice, implementation remains uneven. Capacity varies significantly across states and territories, and disparities in digital infrastructure risk widening existing inequities. While the AI Compliance and Standards Working Group, led by Education Services Australia (ESA), is developing National Product Expectations under the 2024-25 workplan to standardise privacy, transparency, and reliability requirements, system-wide preparedness and consistent application remain works in progress.

Fairness dimensions

Life stage equity

Misalignment

Introducing AI during formative learning stages, before foundational reasoning and metacognitive skills are secure, shifts how thinking habits develop and passes forward cohorts whose early learning prioritised task completion over judgement formation.

Distribution Across and Within Generations

Misalignment

When capability-building depends on school resources, teacher capacity, and informal support, cognitive resilience accumulates unevenly, carrying forward stratified outcomes in discernment, adaptability, and opportunity.

Future Opportunities and Path Dependency

Misalignment

Early reliance on AI as a substitute for reasoning normalises shallow skill formation, locking future learners into dependency pathways that become harder to reverse once expectations and assessment practices adjust around tool use.

Proportionate and Justified Trade-offs

Partial Alignment

Accelerating AI adoption trades speed and short-term efficiency for the slower work of cognitive capability-building, shifting the long-term cost of remediation and retraining onto future cohorts, institutions, and labour markets.

Precautionary Approach

Partial Alignment

Governance that emphasises access and safeguards without equal attention to cognitive readiness increases the chances that future generations inherit learning environments that reinforce dependence rather than support independent judgment.

Our opportunity to shape Australia’s future

Education policy shapes the cognitive capacities future Australians will draw on in work, civic life, and public decision-making. When schools introduce advanced AI tools faster than students develop judgement and evaluative skills, early reliance can hardwire dependency into learning pathways. Over time, this weakens reasoning depth, distorts discernment, and reduces the ability to navigate complex information environments with confidence.

A future generations perspective helps identify where policy design matters most. One important leverage point lies in sequencing. Aligning the timing and conditions of AI adoption with stages of cognitive development, while ensuring schools have sufficient pedagogical space and institutional support, can help safeguard the development of foundational reasoning and evaluative skills.

The two speculative futures below, inspired by Malmi’s story, explore how educational and cognitive conditions could diverge by 2040 under continued trajectories or under redesigned policy frameworks.

Two possible futures

If we stay the course

A future of digital dependence

It’s 2040, and the pattern Malmi noticed at 29 has hardened into a national crisis. Early AI adoption produced a brief surge of enthusiasm: lesson preparation became faster, administration smoother, and digital competence appeared to rise. Beneath the surface, however, a literacy gap widened. Schools placed AI tools in students’ hands before many had developed the reasoning to use them well. Support remained inconsistent, and teachers squeezed verification skills into minutes between curriculum demands.

Policy churn deepened the strain. Every new government arrived with new frameworks, initiatives, and compliance layers. Teacher workload rose and burnout followed. Capability-building thinned. Students learned to complete tasks efficiently, yet struggled to explain how they reached an answer or whether a source could be trusted. Dependence first became common, then largely invisible.

The workforce gradually split. A smaller group with strong critical thinking and technical fluency commanded higher wages. Many others moved through roles where reliance on AI limited advancement. Entry-level pathways narrowed as employers assumed baseline judgement that many workers had never been taught to develop. Retraining programs expanded, but they arrived unevenly and often too late to close the gap.

Civic consequences proved more difficult to contain. Distinguishing reliable information from manipulation became harder for many citizens. Trust in institutions declined. Algorithmic silos deepened as fewer people recognised how digital feeds shaped what they saw. Waves of reactive regulation did little to rebuild shared understanding because the underlying evaluative skills were missing. What began as casual deference to AI models in adolescence turned into dependence in adulthood. A few managed to catch up through intensive retraining, but most were left with lasting gaps in judgment and reasoning.

By 2040, Australia stands as a cautionary case: a nation that prioritised speed over depth, and convenience over capability, then paid the cost through democratic fragility, widening economic stratification, and lost human potential that earlier policy choices failed to safeguard.

If we choose differently

A future of digital wisdom

It’s 2040, and Australia has embraced what Malmi argued for years: digital fluency is not the same as digital wisdom. A national 15-year Education Futures Strategy replaced the churn of reactive policy cycles with a long-horizon approach. The strategy positioned digital literacy, critical thinking, and information verification alongside reading, mathematics, and ethical reasoning as core civic capabilities. These foundations became universal entitlements, no longer dependent on family income, school postcode, or access to private enrichment.

The shift succeeded because it followed how children learn. Instead of rigid year-level benchmarks, schools adopted clear capability progressions. Students moved from basic digital practices to source verification, algorithm awareness, and responsible use through mastery-based pathways, advancing when ready rather than when the calendar dictated. Teacher Innovation Hubs connected classrooms with research in cognitive science, AI ethics, and media literacy, while policymakers reduced compliance burdens to create space for professional judgment.

Professional learning became continuous rather than episodic. Governments rebalanced teacher workload so educators could focus on capability development and student resilience instead of drowning in administrative burden. Classroom practice changed accordingly. Students were expected to show their reasoning, explain their sources, and test AI outputs instead of accepting them uncritically. Tool use became visible, accountable, and intentional.

By 2040, the effects are evident in everyday life. Students verify instinctively. They use AI to draft, test, and extend ideas, not to replace thinking. In workplaces, graduates bring judgement alongside technical fluency, allowing roles to evolve with technology instead of disappearing beneath it. Democratic participation remains resilient because citizens navigate complex, information-dense environments with confidence and clarity.

The myth of the “digital native” has faded. In its place is a shared understanding that capability must be taught, sequenced, and supported. Australia stopped chasing each new tool and redesigned education around discernment, timing, and cognitive development, ensuring these capacities grow deliberately and equitably across generations rather than emerging through chance and privilege.

Malmi — Portraits of Our Future