AI in Learning Organizations

Opportunities Brought by AI

Generative AI is reshaping learning organizations across seven opportunity areas that span innovation, student empowerment, and operational efficiency.

🚀

Innovation

New models of teaching, content creation, and organizational design enabled by AI tools.

🎯

Empowerment

Students and instructors gain new capabilities and pathways for deeper engagement.

Efficiency

Reduction of repetitive administrative and instructional workloads through automation.

📚

Personalized Learning

Adaptive pathways tailored to individual student needs, pacing, and objectives.

🔄

Rapid Feedback

Immediate, iterative feedback loops supporting continuous improvement in learning.

🤖

24/7 Tutoring

AI-powered support available around the clock, reducing barriers to help-seeking.

📉

Reduced Workload

Teacher administrative burden decreases, freeing time for high-value human interaction.

GenAI as Student Learning Assistant: What the Data Show

  • 91%+ of students use generative AI tools for learning (Kanont et al., 2024)
  • 50.9% use AI primarily to seek information (Kanont et al., 2024)
  • 47.7% use text generation as their primary AI tool type (Kanont et al., 2024)
  • 27.1% use AI more than once daily, indicating deep workflow integration (Kanont et al., 2024)

Philosophical Grounding

Technology as Non-Neutral: Heidegger's Warning

The essence of technology is by no means anything technological. We have become "chained to technology," viewing our world only through a lens of efficiency.

— Heidegger, The Question Concerning Technology (1954)

Heidegger challenges the "instrumental" view of technology — the common assumption that technology is simply a neutral tool subject to human control. His analysis reveals something far more unsettling.

Enframing (Gestell)

Modern technology is defined by Enframing — a way of understanding the world that dictates how we act, regardless of our intentions. It is not a tool we use; it uses us.

Standing Reserve (Bestand)

Under Enframing, the world is revealed only as a "standing reserve" — resources to be optimized, ordered, and exploited. A river becomes a hydroelectric resource. People become "human resources."

Non-Neutrality

Technology shapes our perception of reality, forcing us to view everything through a lens of efficiency, utility, and calculative thinking — even when we believe we are freely choosing.

Beyond Human Control

Enframing is not merely a human activity but a "destining" that develops beyond direct human control, effectively controlling us rather than the reverse.

The Greatest Risk: "The Highest Danger"

  • Loss of Human Essence: The danger is that Enframing will completely dominate, causing humans to forget other, more meaningful ways of experiencing the world — through art, poetry, and genuine relationship.
  • The "Final Delusion": In its quest to be "master and possessor of nature," humanity reduces itself to "human resources" and becomes unable to see the world as anything other than a resource, losing its own authentic being.
  • Forgetting of Being: Technology forces a "forgetting of being," where the mystery of existence is covered over by technical, calculative, and measurable data.

Systems Linked to Digital Transformation

Different system types carry different implications for who bears costs, who holds power, and what temporal logic governs their operation:

System Type | Who Bears Cost? | Power Axis | Tech Examples
Systems for Design | Future users (if poorly designed) | Distributed agency | AI co-creation, CAD, dev environments
Systems of Support | System operators (ideally) | Enabling / augmenting | LMS, ITS, ERP support, EPSS
Systems of Mediation | Epistemic commons | Perceptual shaping | Search engines, LLMs, social feeds
Systems of Exchange | Third parties, environment | Reciprocity or asymmetry | Marketplaces, APIs, blockchain
Systems of Legitimation | Non-credentialed groups | Authority maintenance | Peer review, credentialing, ethics boards
Systems of Control / Surveillance | Monitored populations | Disciplinary power | Learning analytics, HRMS, algorithmic mgmt
Systems of Exploitation | Workers, communities, nature | Surplus extraction | AdTech, logistics systems, gig platforms

Organizational Reality

Challenges of Digital Transformation & AI

  • 60–70% of digital transformation initiatives fail
  • 90–95% of AI-specific transformation initiatives fail

Organizational Challenges (Part 1)

  • AI Business Models Under Strain: Recent reports from OpenAI, Anthropic, and Microsoft indicate likely financial instability within 2 years — organizations adopting these tools face vendor risk.
  • Unsustainable Resource Demands: Water and energy use by AI systems are already high and growing — long-term sustainability is an unresolved question.
  • Unreliable Outputs (AI Slop): Generative AI models are probabilistic and can produce different results on each run, so consistent, reliable output is not guaranteed.
  • Vendor Uncertainty Risk: Adopting AI into core work processes when the vendor may disappear in a year harms future organizational stability and institutional memory.
  • Human Capital Loss: When knowledgeable workers are replaced by AI, their tacit knowledge and expertise are permanently lost to the organization.

Organizational Challenges (Part 2)

  • Strategic Misalignment: Unclear vision and inconsistent leadership commitment — many leaders do not know what transformation is actually for.
  • Cultural Resistance: Opposition from groups threatened by change is consistent and often underestimated.
  • Technical Integration Failures: Legacy system complexity creates cascading integration problems.
  • Failure to Plan and Invest: Successful AI adoption requires full change management strategy, which is costly and requires significant human resources.
  • Training as Afterthought: Workforce training is almost always underfunded or ignored entirely in transformation planning.

Key Organizational Concerns with GenAI

Challenge Area | Statistic | Source
AI Risk Awareness | 61% of leaders acknowledge growing risks | EY, 2024
Security Gaps | Only 24% of GenAI initiatives are properly secured | IBM, 2024
Data Breaches | Average breach cost: $4.88 million | IBM, 2024
Employee Concerns | 50% worry about AI inaccuracies | McKinsey, 2025
Cybersecurity | 51% concerned about AI-related vulnerabilities | McKinsey, 2025
Job Disruption | 15–25% of jobs affected by 2027 | BLS, 2023
Gender Impact | 79% of women in high-risk automation roles | WEF, 2024

Human Impacts of Digital Transformation

  • 15–25% of jobs face significant disruption by 2025–2027, affecting millions of families (revised estimates put 2025 US job losses above 1 million).
  • 79% of employed women work in roles at high risk of automation.
  • 43% of workers fear personal privacy violations in AI systems.
  • Career uncertainty creates anxiety, stress, and economic insecurity across entire communities.
  • Whole communities face potential displacement without adequate support systems in place.

Foundational Model

What Is Digital Transformation as a Process Model?

Digital transformation is not a one-time technology adoption event — it is a cyclical, data-informed process of organizational learning and adaptation. Based on Lardi (2022) and Schmarzo (2020), the model comprises five interconnected stages:

1. Organizational Monitoring: Capture big data focused on organizational performance, the foundation of any informed transformation effort.
2. Organizational Insights: Identify insights regarding performance, analyze potential technology solutions, and operationalize valuable digital transformations.
3. Organizational Optimization: Optimize operations, processes, functions, resource allocation, technologies, and tasks to improve outcomes.
4. Insights Monetization: Improve organizational performance to increase value and returns, translating digital insights into tangible gains.
5. Digital Transformation: Determine how best to adopt new technologies and processes, including automation and AI, guided by the data and insights gathered in prior stages.
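The cyclical character of the model can be sketched as an ordered, wrapping sequence: after the final stage, the organization returns to monitoring. A minimal Python illustration; only the stage names come from the model above, and the loop mechanics are an assumption for demonstration.

```python
# The five stages of the digital transformation process model (Lardi, 2022;
# Schmarzo, 2020), treated as a repeating cycle rather than a one-time event.
STAGES = [
    "Organizational Monitoring",
    "Organizational Insights",
    "Organizational Optimization",
    "Insights Monetization",
    "Digital Transformation",
]

def next_stage(current: str) -> str:
    """Return the stage that follows `current`, wrapping back to the start."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]

# Completing the cycle returns the organization to monitoring
print(next_stage("Digital Transformation"))  # Organizational Monitoring
```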

Failure Analysis

Why Digital Transformations Fail

A systematic review across public and private sectors (Syed et al., 2023) identified five primary failure categories. Understanding these is the prerequisite for avoiding them.

  • Organizational: Poor stakeholder engagement and collaboration (42%)
  • Cultural: Resistance to change and new ways of working (38%)
  • Leadership: Lack of clear vision for transformation (35%)
  • Implementation: Insufficient integration planning (31%)
  • Environmental: Misalignment with broader ecosystem needs (28%)
Source: Syed et al. (2023), systematic review across public and private sectors.

The Three Categories of Transformation Risk

Organizational Risks

Cultural resistance, leadership changes, and resource constraints — often the most underestimated category of risk in planning documents.

Implementation Risks

Technical integration failures, user adoption issues, and performance degradation — the execution layer where plans meet reality.

Environmental Risks

Regulatory changes, competitive pressures, and technology evolution — external forces that can invalidate sound internal plans.

A Practical & Ethical Question: Should We?

  • Do you have a clear, detailed plan? Having a plan does not mean it should go forward — clarity of purpose must precede action.
  • Consider all costs: Beyond financial constraints, there are human constraints and human consequences that must be understood before proceeding.
  • Scrutinize the internal source: Is the person pushing for adoption credible? Are they thinking systemically for long-term positive outcomes, or only for short-term gains?

Risk Management

FMEA: Systematic Risk Assessment for AI Adoption

Failure Modes and Effects Analysis (FMEA) provides a structured, quantifiable approach to identifying and prioritizing transformation risks before they materialize.

Risk Priority Number (RPN) = Occurrence × Severity × Detectability

Occurrence (1–10)

How frequently does this risk occur? Based on historical data and current reports — not intuition.

Severity (1–10)

What is the impact on transformation objectives and stakeholders? Often not discovered until failure has already set in.

Detectability (1–10)

How quickly can problems be identified and addressed? The black-box nature of AI makes this a major risk factor — especially with embedded bias that may go undetected.
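The RPN calculation above can be turned into a simple prioritization routine. A minimal Python sketch, using hypothetical risk entries: the names and 1–10 ratings below are illustrative, not drawn from any published FMEA.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    occurrence: int     # 1-10: how frequently the failure mode occurs
    severity: int       # 1-10: impact on objectives and stakeholders
    detectability: int  # 1-10: higher = harder to detect before harm is done

    @property
    def rpn(self) -> int:
        # Risk Priority Number = Occurrence x Severity x Detectability
        return self.occurrence * self.severity * self.detectability

# Hypothetical risks for an AI-adoption FMEA
risks = [
    Risk("Embedded bias in model outputs", occurrence=6, severity=8, detectability=9),
    Risk("Vendor discontinues product", occurrence=4, severity=7, detectability=3),
    Risk("Staff resistance to new workflow", occurrence=7, severity=5, detectability=4),
]

# Address the highest RPN first
for r in sorted(risks, key=lambda r: r.rpn, reverse=True):
    print(f"{r.name}: RPN = {r.rpn}")
```

Note how the hard-to-detect bias risk dominates the ranking even though it is not the most frequent failure mode, which is exactly the black-box concern the Detectability factor is meant to surface.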

Need for Decision-Support Frameworks

We don't know what we don't know. We can't know what we don't look for. We often don't even know what to look for and need guidance. "There are unknown unknowns."

— Donald Rumsfeld, US Secretary of Defense (2002)

People in organizations do not want to know about issues that may complicate their work and force them to make difficult decisions.

— Alvesson et al. (2022), citing Jackall (1988)

Frameworks for Action

Decision-Support Models for Digital Transformation

Unlike other IT and AI guidelines that provide only principles, the Digital Transformation Adoption Decision-Support (DTAD) and Ethical Choices with Ed Tech (ECET) frameworks provide both principles and guiding questions to support decision-making and risk reduction.

DTAD Framework (Corporate)

Designed for corporate contexts, DTAD guides organizations to reflect on why and how they seek to adopt AI or IT, if they can use it in their setting, and critically — whether they should (the ethical dimension).

ECET Framework (K-12 / Higher Ed)

Designed for educational settings, ECET supports institutional reflection on adoption decisions through the same three lenses: purpose, feasibility, and ethics — adapted for teaching and learning contexts.

ECET AI Component — Evaluation Framework

The ECET tool evaluates AI adoption across three dimensions (Idea, Feasibility, Ethics) and six criteria areas:

Idea (scored /70)
  • Explainable & Trustworthy: Does the company explain and provide evidence about how this AI supports teaching and/or learning outcomes?
  • Educationally Useful: Can you explain how the AI tool helps reach your learning outcomes?
  • Adaptable & Agency: Can you adapt the technology to support teaching and learning in your local environment?
  • Fair & Equitable: Does the AI tool support all intended users?
  • Usable & Supportable: Is the AI tool easy enough to use to offer students pedagogical and technical support?
  • Safe & Reliable: Is the AI tool safe and reliable for users?

Feasibility (scored /60)
  • Explainable & Trustworthy: Are technology requirements clearly explained?
  • Educationally Useful: Are activities effective within available instructional time?
  • Adaptable & Agency: Will all needed tech resources be available?
  • Fair & Equitable: Will all users have digital access?
  • Usable & Supportable: Is training and documentation readily available?
  • Safe & Reliable: Can the AI tool be used securely with your institution's systems?

Ethics (scored /70)
  • Explainable & Trustworthy: Is the AI technology supplier trustworthy?
  • Educationally Useful: Does the product measure learning ethically?
  • Adaptable & Agency: Does the AI tool support independent learning as adapted to your setting?
  • Fair & Equitable: Does the AI tool provide all learners with equitable educational benefits?
  • Usable & Supportable: Is the tool technically usable by all educational participants?
  • Safe & Reliable: Will the AI technology do no harm to users?

Adapted from Warren, S. J., Beck, D., & McGuffin, K. (2023). In Applied Ethics for Instructional Design and Technology. EdTechBooks.org.
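The per-dimension tallies might be computed as a simple sum against each dimension's maximum. A minimal Python sketch, assuming each guiding question is rated on a numeric scale; the published instrument defines the actual items behind the /70, /60, and /70 maxima, so the ratings below are illustrative.

```python
# Maximum attainable score per ECET dimension, per the table above
DIMENSION_MAX = {"Idea": 70, "Feasibility": 60, "Ethics": 70}

def dimension_score(ratings: dict[str, int], dimension: str) -> tuple[int, int]:
    """Sum per-question ratings and return (score, maximum) for a dimension."""
    total = sum(ratings.values())
    maximum = DIMENSION_MAX[dimension]
    if total > maximum:
        raise ValueError(f"{dimension} score {total} exceeds maximum {maximum}")
    return total, maximum

# Hypothetical ratings for the Feasibility dimension (six criteria, 0-10 each)
feasibility = {
    "Explainable & Trustworthy": 7,
    "Educationally Useful": 8,
    "Adaptable & Agency": 6,
    "Fair & Equitable": 5,
    "Usable & Supportable": 9,
    "Safe & Reliable": 7,
}
score, maximum = dimension_score(feasibility, "Feasibility")
print(f"Feasibility: {score}/{maximum}")  # Feasibility: 42/60
```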

Human Factors

Building Trust and Addressing AI Bias

What Builds Trust in DT & AI?

Technological Performance

The strongest predictor of trust — response quality, accuracy, and reliability matter most (Cui et al., 2025).

User Experience

Enjoyment, ease of use, and interaction quality significantly shape trust and adoption behavior (Cui et al., 2025).

Ethics & Safety

Privacy, security, and algorithmic fairness each impact user adoption decisions — not just technical factors (Cui et al., 2025).

Individual Differences

Age, gender, and social influence moderate how trust is formed — one-size-fits-all approaches will fail (Cui et al., 2025).

Trust & Transparency by Design

  • Explainability: Users need to understand how AI arrives at outputs — transparency builds trust (Cui et al., 2025).
  • Confirmation loops: Allow users to verify, correct, or refine AI suggestions before accepting them (Nicolescu & Tudorache, 2022).
  • Feedback & error reporting mechanisms: Critical for maintaining trust over time — not an add-on but a core design requirement (Nicolescu & Tudorache, 2022).
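A confirmation loop of the kind Nicolescu & Tudorache (2022) describe can be sketched as a gate that applies nothing without an explicit user decision. A minimal Python sketch; the `review` callable and its accept/edit/reject protocol are illustrative stand-ins for a real interface.

```python
from typing import Callable, Optional

def confirm_suggestion(suggestion: str, review: Callable[[str], str]) -> Optional[str]:
    """Present an AI suggestion for human review before it takes effect.

    `review` returns one of "accept", "edit:<corrected text>", or "reject".
    Nothing is applied without an explicit user decision.
    """
    decision = review(suggestion)
    if decision == "accept":
        return suggestion
    if decision.startswith("edit:"):
        return decision[len("edit:"):]  # the user-corrected version wins
    return None                         # rejected: discard the suggestion

# Hypothetical reviewer that refines the draft before accepting it
result = confirm_suggestion(
    "Draft feedback: good work",
    lambda s: "edit:Draft feedback: good analysis",
)
print(result)  # Draft feedback: good analysis
```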

Addressing AI Bias

The Problem

  • AI systems mirror and amplify the biases present in their training data.
  • Consequences include discriminatory hiring, biased lending, and unfair law enforcement outcomes.
  • Noble (2018) and Benjamin (2019) document extensive real-world harms from algorithmic bias in practice.

The Solution

  • Diverse teams designing and auditing AI systems from the outset.
  • Systematic, transparent auditing processes built into the development cycle.
  • Investment in bias detection and mitigation before implementation — not as an afterthought.
  • Algorithmic accountability frameworks providing structure and ongoing oversight.
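One concrete form a systematic audit check can take is a demographic parity comparison: the gap in positive-outcome rates between groups. A minimal Python sketch; the data, threshold, and single-metric focus are illustrative assumptions, since real audits combine multiple fairness metrics with human review.

```python
def positive_rate(outcomes: list) -> float:
    """Fraction of decisions that were positive (1) for a group."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list, group_b: list) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical hiring-model decisions (1 = recommended, 0 = not recommended)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 0.625 positive rate
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 0.25 positive rate

gap = parity_gap(group_a, group_b)
THRESHOLD = 0.1  # assumed audit tolerance, set by policy, not by this code
print(f"Parity gap: {gap:.3f} -> "
      f"{'flag for review' if gap > THRESHOLD else 'pass'}")
```

Running a check like this on every model release, and publishing the results, is one way to make auditing transparent rather than an afterthought.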

Practical Guidance

Implementation Success Factors & Future Research

Phased Rollout

Staged implementation allowing for learning and adaptation at each step — not a single "big bang" deployment.

Comprehensive Training

Systematic skill development for all stakeholder groups — not just technical staff but every affected person.

Change Management Integration

Combine AI adoption with established organizational change management practices — technology alone is never sufficient.

Continuous Feedback Loops

Regular gathering of and genuine response to stakeholder input throughout the transformation lifecycle.

Future Research Directions

Emotional Design

How do affective responses shape long-term AI relationships? What design choices build sustainable engagement? (Cui et al., 2025)

AI Pedagogy

What instructional designs best support AI-augmented learning? How do we design for human flourishing alongside AI assistance? (Kanont et al., 2024)

Longitudinal Metrics

How does trust evolve over months and years of use? What predicts sustained engagement versus disillusionment? (Yu et al., 2024)

A Path Forward

Heidegger's "Saving Power": Non-Technological Responses

Where the danger is, there grows also what saves.

— Martin Heidegger, The Question Concerning Technology (1954)

A Free Relationship

Heidegger calls for a "free" relationship with technology — one in which we use it without letting it dominate our inner lives or our understanding of existence.

Meditative Thinking

The alternative to "calculative thinking" is "meditative thinking" — a reflective, poetic way of living that acknowledges the mystery and intrinsic value of things, not just their utility.

Re-evaluating Techne

Return to a more ancient, Greek understanding of techne — technology as poiesis, a "bringing forth" or revealing of nature, rather than a "challenging forth" or assault on nature.

Summary

Key Takeaways

01

Systematic Decision-Support, Not Just Adoption

Organizations must move beyond "can we?" to ask "should we?" using frameworks like DTAD to evaluate opportunities, risks, and human impacts before committing.

02

Trust Is Built Through Transparency

Users need to understand how AI works, verify outputs, and see accountability mechanisms before they will genuinely adopt new systems — performance alone is insufficient.

03

Risk Management Covers Technical and Human Factors

Address bias, sustainability, vendor stability, and workforce impacts — not just implementation timelines and ROI metrics. The human dimension is not optional.

04

Stakeholder Engagement from Day One

Phased rollouts, comprehensive training, and genuine change management integration prevent the 42% organizational failure rate — planning for people is planning for success.

05

Map, Model, and Iterate

Map stakeholder perspectives (CATWOE), model the system holistically, and iterate through continuous feedback — transformation is not linear and cannot be treated as such.

Scholarly Grounding

References

📚 Companion Resource Sites

Digital Transformation Resources

Frameworks, tools, and evidence-based practices — Schmarzo's Laws of DT, Warren's Dynamic Systems Engineering framework, and a curated reference library.

dtresources.systemly.net

Ethical Choices with Educational Technology

The ECET framework — a research-validated model for ethical decision-making in educational technology adoption and instructional design.

ecet.systemly.space

Warren — Refereed Journal Articles & Book Chapters

Warren — Conference Presentations (AERA, AECT, DSI)

Warren — Invited Presentations on AI (2023–2025)

Primary References

Foundational Systems Theory

Political Economy & Social Theory

Technology Accountability & Ethics

Educational Technology & Learning Theory