AI in Learning Organizations
Opportunities Brought by AI
Generative AI is reshaping learning organizations through seven broad opportunity areas that span innovation, student empowerment, and operational efficiency.
Innovation
New models of teaching, content creation, and organizational design enabled by AI tools.
Empowerment
Students and instructors gain new capabilities and pathways for deeper engagement.
Efficiency
Reduction of repetitive administrative and instructional workloads through automation.
Personalized Learning
Adaptive pathways tailored to individual student needs, pacing, and objectives.
Rapid Feedback
Immediate, iterative feedback loops supporting continuous improvement in learning.
24/7 Tutoring
AI-powered support available around the clock, reducing barriers to help-seeking.
Reduced Workload
Teacher administrative burden decreases, freeing time for high-value human interaction.
GenAI as Student Learning Assistant: What the Data Show
Philosophical Grounding
Technology as Non-Neutral: Heidegger's Warning
The essence of technology is by no means anything technological. We have become "chained to technology," viewing our world only through a lens of efficiency.
— Heidegger, The Question Concerning Technology (1954)

Heidegger challenges the "instrumental" view of technology — the common assumption that technology is simply a neutral tool subject to human control. His analysis reveals something far more unsettling.
Enframing (Gestell)
Modern technology is defined by Enframing — a way of understanding the world that dictates how we act, regardless of our intentions. It is not a tool we use; it uses us.
Standing Reserve (Bestand)
Under Enframing, the world is revealed only as a "standing reserve" — resources to be optimized, ordered, and exploited. A river becomes a hydroelectric resource. People become "human resources."
Non-Neutrality
Technology shapes our perception of reality, forcing us to view everything through a lens of efficiency, utility, and calculative thinking — even when we believe we are freely choosing.
Beyond Human Control
Enframing is not merely a human activity but a "destining" that develops beyond direct human control, effectively controlling us rather than the reverse.
The Greatest Risk: "The Highest Danger"
- Loss of Human Essence: The danger is that Enframing will completely dominate, causing humans to forget other, more meaningful ways of experiencing the world — through art, poetry, and genuine relationship.
- The "Final Delusion": In its quest to be "master and possessor of nature," humanity reduces itself to "human resources" and becomes unable to see the world as anything other than a resource, losing its own authentic being.
- Forgetting of Being: Technology forces a "forgetting of being," where the mystery of existence is covered over by technical, calculative, and measurable data.
Systems Linked to Digital Transformation
Different system types carry different implications for who bears costs, who holds power, and what temporal logic governs their operation:
| System Type | Who Bears Cost? | Power Axis | Tech Examples |
|---|---|---|---|
| Systems for Design | Future users (if poorly designed) | Distributed agency | AI co-creation, CAD, dev environments |
| Systems of Support | System operators (ideally) | Enabling / augmenting | LMS, ITS, ERP support, EPSS |
| Systems of Mediation | Epistemic commons | Perceptual shaping | Search engines, LLMs, social feeds |
| Systems of Exchange | Third parties, environment | Reciprocity or asymmetry | Marketplaces, APIs, blockchain |
| Systems of Legitimation | Non-credentialed groups | Authority maintenance | Peer review, credentialing, ethics boards |
| Systems of Control / Surveillance | Monitored populations | Disciplinary power | Learning analytics, HRMS, algorithmic mgmt |
| Systems of Exploitation | Workers, communities, nature | Surplus extraction | AdTech, logistics systems, gig platforms |
Organizational Reality
Challenges of Digital Transformation & AI
Organizational Challenges (Part 1)
- AI Business Models Under Strain: Recent reports from OpenAI, Anthropic, and Microsoft indicate likely financial instability within 2 years — organizations adopting these tools face vendor risk.
- Unsustainable Resource Demands: Water and energy use by AI systems are already high and growing — long-term sustainability is an unresolved question.
- Unreliable Outputs (AI Slop): Generative AI models are inferential and draw on different parts of training data each run — consistent, reliable output is not guaranteed.
- Vendor Uncertainty Risk: Adopting AI into core work processes when the vendor may disappear in a year harms future organizational stability and institutional memory.
- Human Capital Loss: When knowledgeable workers are replaced by AI, their tacit knowledge and expertise are permanently lost to the organization.
Organizational Challenges (Part 2)
- Strategic Misalignment: Unclear vision and inconsistent leadership commitment — many leaders do not know what transformation is actually for.
- Cultural Resistance: Opposition from groups threatened by change is consistent and often underestimated.
- Technical Integration Failures: Legacy system complexity creates cascading integration problems.
- Failure to Plan and Invest: Successful AI adoption requires full change management strategy, which is costly and requires significant human resources.
- Training as Afterthought: Workforce training is almost always underfunded or ignored entirely in transformation planning.
Key Organizational Concerns with GenAI
| Challenge Area | Statistic | Source |
|---|---|---|
| AI Risk Awareness | 61% of leaders acknowledge growing risks | EY, 2024 |
| Security Gaps | Only 24% of GenAI initiatives are properly secured | IBM, 2024 |
| Data Breaches | Average cost: $4.88 million | IBM, 2024 |
| Employee Concerns | 50% worry about AI inaccuracies | McKinsey, 2025 |
| Cybersecurity | 51% concerned about AI-related vulnerabilities | McKinsey, 2025 |
| Job Disruption | 15–25% of jobs affected by 2027 | BLS, 2023 |
| Gender Impact | 79% of women in high-risk automation roles | WEF, 2024 |
Human Impacts of Digital Transformation
- 15–25% of jobs face significant disruption by 2025–2027, affecting millions of families; revised 2025 figures put US job losses above 1 million.
- 79% of employed women work in roles at high risk of automation.
- 43% of workers fear personal privacy violations in AI systems.
- Career uncertainty creates anxiety, stress, and economic insecurity across entire communities.
- Whole communities face potential displacement without adequate support systems in place.
Foundational Model
What Is Digital Transformation as a Process Model?
Digital transformation is not a one-time technology adoption event — it is a cyclical, data-informed process of organizational learning and adaptation. Based on Lardi (2022) and Schmarzo (2020), the model comprises five interconnected stages.
Failure Analysis
Why Digital Transformations Fail
A systematic review across public and private sectors (Syed et al., 2023) identified five primary failure categories. Understanding these is the prerequisite for avoiding them.
The Three Categories of Transformation Risk
Organizational Risks
Cultural resistance, leadership changes, and resource constraints — often the most underestimated category of risk in planning documents.
Implementation Risks
Technical integration failures, user adoption issues, and performance degradation — the execution layer where plans meet reality.
Environmental Risks
Regulatory changes, competitive pressures, and technology evolution — external forces that can invalidate sound internal plans.
A Practical & Ethical Question: Should We?
- Do you have a clear, detailed plan? Having a plan does not mean it should go forward — clarity of purpose must precede action.
- Consider all costs: Beyond financial constraints, there are human constraints and human consequences that must be understood before proceeding.
- Scrutinize the internal source: Is the person pushing for adoption credible? Are they thinking systemically for long-term positive outcomes, or only for short-term gains?
Risk Management
FMEA: Systematic Risk Assessment for AI Adoption
Failure Modes and Effects Analysis (FMEA) provides a structured, quantifiable approach to identifying and prioritizing transformation risks before they materialize.
Occurrence (1–10)
How frequently does this risk occur? Based on historical data and current reports — not intuition.
Severity (1–10)
What is the impact on transformation objectives and stakeholders? Often not discovered until failure has already set in.
Detectability (1–10)
How quickly can problems be identified and addressed? The black-box nature of AI makes this a major risk factor — especially with embedded bias that may go undetected.
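The three ratings above are conventionally combined into a Risk Priority Number (RPN = Occurrence × Severity × Detectability), with higher RPNs worked first. A minimal sketch of that calculation, assuming the usual 1–10 scales in which a high Detectability rating means the failure is *hard* to detect; the risk names and ratings below are illustrative assumptions, not figures from this section:

```python
# Minimal FMEA Risk Priority Number (RPN) sketch.
# Risk names and ratings are illustrative assumptions, not data from this deck.

def rpn(occurrence: int, severity: int, detectability: int) -> int:
    """Classic FMEA priority: Occurrence x Severity x Detectability (each 1-10).

    By convention a HIGH detectability rating means the failure is HARD to
    detect, so a higher RPN always means a higher-priority risk.
    """
    for rating in (occurrence, severity, detectability):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings must be on a 1-10 scale")
    return occurrence * severity * detectability

# Hypothetical AI-adoption risks with (occurrence, severity, detectability).
risks = {
    "vendor instability": (6, 8, 4),
    "embedded bias goes undetected": (5, 9, 9),
    "unreliable outputs (AI slop)": (8, 6, 3),
}

# Rank risks from highest to lowest priority.
ranked = sorted(risks.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{name}: RPN = {rpn(*ratings)}")
```

Note how the black-box bias risk dominates the ranking: its moderate occurrence is outweighed by high severity and very poor detectability, which is exactly the pattern the Detectability criterion is meant to surface.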
Need for Decision-Support Frameworks
We don't know what we don't know. We can't know what we don't look for. We often don't even know what to look for and need guidance. "There are unknown unknowns."
— Donald Rumsfeld, US Secretary of Defense (2002)

People in organizations do not want to know about issues that may complicate their work and force them to make difficult decisions.
— Alvesson et al. (2022), citing Jackall (1988)

Frameworks for Action
Decision-Support Models for Digital Transformation
Unlike other IT and AI guidelines that provide only principles, the Digital Transformation Adoption Decision-Support (DTAD) and Ethical Choices with Ed Tech (ECET) frameworks provide both principles and guiding questions to support decision-making and risk reduction.
DTAD Framework (Corporate)
Designed for corporate contexts, DTAD guides organizations to reflect on why and how they seek to adopt AI or IT, if they can use it in their setting, and critically — whether they should (the ethical dimension).
ECET Framework (K-12 / Higher Ed)
Designed for educational settings, ECET supports institutional reflection on adoption decisions through the same three lenses: purpose, feasibility, and ethics — adapted for teaching and learning contexts.
ECET AI Component — Evaluation Framework
The ECET tool evaluates AI adoption across three dimensions (Idea, Feasibility, Ethics) and six criteria areas:
| Dimension | Explainable & Trustworthy | Educationally Useful | Adaptable & Agency | Fair & Equitable | Usable & Supportable | Safe & Reliable | Score |
|---|---|---|---|---|---|---|---|
| Idea | Does the company explain and provide evidence about how this AI supports teaching and/or learning outcomes? | Can you explain how the AI tool helps reach your learning outcomes? | Can you adapt the technology to support teaching and learning in your local environment? | Does the AI tool support all intended users? | Is the AI tool easy enough to use to offer students pedagogical and technical support? | Is the AI tool safe and reliable for users? | /70 |
| Feasibility | Are technology requirements clearly explained? | Are activities effective within available instructional time? | Will all needed tech resources be available? | Will all users have digital access? | Is training and documentation readily available? | Can the AI tool be used securely with your institution's systems? | /60 |
| Ethics | Is the AI technology supplier trustworthy? | Does the product measure learning ethically? | Does the AI tool support independent learning as adapted to your setting? | Does the AI tool provide all learners with equitable educational benefits? | Is the tool technically usable by all educational participants? | Will the AI technology do no harm to users? | /70 |
Adapted from Warren, S. J., Beck, D., & McGuffin, K. (2023). In Applied Ethics for Instructional Design and Technology. EdTechBooks.org.
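Tallying the criterion ratings against each dimension's maximum can be sketched as follows. The dimension maxima (/70, /60, /70) follow the table above; the 1–10 per-criterion rating scale and the sample ratings are assumptions for illustration, not part of the published tool:

```python
# Hedged sketch of tallying ECET evaluation scores.
# Dimension maxima follow the table above; the 1-10 per-criterion rating
# scale and the sample ratings are illustrative assumptions.

MAX_SCORES = {"Idea": 70, "Feasibility": 60, "Ethics": 70}

def dimension_score(ratings: list[int], dimension: str) -> tuple[int, int]:
    """Sum per-criterion ratings and report them against the dimension maximum."""
    total = sum(ratings)
    maximum = MAX_SCORES[dimension]
    if total > maximum:
        raise ValueError(f"{dimension} total {total} exceeds maximum {maximum}")
    return total, maximum

# Illustrative ratings for the six Idea criteria, in table order.
idea_ratings = [8, 7, 6, 9, 7, 8]
total, maximum = dimension_score(idea_ratings, "Idea")
print(f"Idea: {total}/{maximum}")
```

A low score in any one dimension (for instance, a strong Idea but weak Feasibility) is a signal to pause the adoption decision rather than average the problem away.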
Human Factors
Building Trust and Addressing AI Bias
What Builds Trust in DT & AI?
Technological Performance
The strongest predictor of trust — response quality, accuracy, and reliability matter most (Cui et al., 2025).
User Experience
Enjoyment, ease of use, and interaction quality significantly shape trust and adoption behavior (Cui et al., 2025).
Ethics & Safety
Privacy, security, and algorithmic fairness each impact user adoption decisions — not just technical factors (Cui et al., 2025).
Individual Differences
Age, gender, and social influence moderate how trust is formed — one-size-fits-all approaches will fail (Cui et al., 2025).
Trust & Transparency by Design
- Explainability: Users need to understand how AI arrives at outputs — transparency builds trust (Cui et al., 2025).
- Confirmation loops: Allow users to verify, correct, or refine AI suggestions before accepting them (Nicolescu & Tudorache, 2022).
- Feedback & error reporting mechanisms: Critical for maintaining trust over time — not an add-on but a core design requirement (Nicolescu & Tudorache, 2022).
Addressing AI Bias
The Problem
- AI systems mirror and amplify the biases present in their training data.
- Consequences include discriminatory hiring, biased lending, and unfair law enforcement outcomes.
- Noble (2018) and Benjamin (2019) document extensive real-world harms from algorithmic bias in practice.
The Solution
- Diverse teams designing and auditing AI systems from the outset.
- Systematic, transparent auditing processes built into the development cycle.
- Investment in bias detection and mitigation before implementation — not as an afterthought.
- Algorithmic accountability frameworks providing structure and ongoing oversight.
Practical Guidance
Implementation Success Factors & Future Research
Phased Rollout
Staged implementation allowing for learning and adaptation at each step — not a single "big bang" deployment.
Comprehensive Training
Systematic skill development for all stakeholder groups — not just technical staff but every affected person.
Change Management Integration
Combine AI adoption with established organizational change management practices — technology alone is never sufficient.
Continuous Feedback Loops
Regular gathering of and genuine response to stakeholder input throughout the transformation lifecycle.
Future Research Directions
Emotional Design
How do affective responses shape long-term AI relationships? What design choices build sustainable engagement? (Cui et al., 2025)
AI Pedagogy
What instructional designs best support AI-augmented learning? How do we design for human flourishing alongside AI assistance? (Kanont et al., 2024)
Longitudinal Metrics
How does trust evolve over months and years of use? What predicts sustained engagement versus disillusionment? (Yu et al., 2024)
A Path Forward
Heidegger's "Saving Power": Non-Technological Responses
Where the danger is, there grows also what saves.
— Martin Heidegger, The Question Concerning Technology (1954)

A Free Relationship
Heidegger calls for a "free" relationship with technology — one in which we use it without letting it dominate our inner lives or our understanding of existence.
Meditative Thinking
The alternative to "calculative thinking" is "meditative thinking" — a reflective, poetic way of living that acknowledges the mystery and intrinsic value of things, not just their utility.
Re-evaluating Techne
Return to a more ancient, Greek understanding of techne — technology as poiesis, a "bringing forth" or revealing of nature, rather than a "challenging forth" or assault on nature.
Summary
Key Takeaways
Systematic Decision-Support, Not Just Adoption
Organizations must move beyond "can we?" to ask "should we?" using frameworks like DTAD to evaluate opportunities, risks, and human impacts before committing.
Trust Is Built Through Transparency
Users need to understand how AI works, verify outputs, and see accountability mechanisms before they will genuinely adopt new systems — performance alone is insufficient.
Risk Management Covers Technical and Human Factors
Address bias, sustainability, vendor stability, and workforce impacts — not just implementation timelines and ROI metrics. The human dimension is not optional.
Stakeholder Engagement from Day One
Phased rollouts, comprehensive training, and genuine change management integration help organizations avoid the 42% transformation failure rate — planning for people is planning for success.
Map, Model, and Iterate
Map stakeholder perspectives (CATWOE), model the system holistically, and iterate through continuous feedback — transformation is not linear and cannot be treated as such.
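The CATWOE mapping named above comes from Checkland's Soft Systems Methodology: Customers, Actors, Transformation, Worldview, Owners, and Environmental constraints. It can be sketched as a simple checklist; the example entries below describe a hypothetical AI tutoring adoption and are assumptions, not content from this summary:

```python
# Minimal CATWOE stakeholder-mapping sketch (Checkland's Soft Systems
# Methodology). The entries describe a hypothetical AI tutoring adoption
# and are illustrative assumptions, not content from this deck.

catwoe = {
    "Customers": "students and instructors affected by the system",
    "Actors": "IT staff, instructional designers, vendor support",
    "Transformation": "unassisted study -> AI-supported, feedback-rich study",
    "Worldview": "timely feedback improves learning outcomes",
    "Owners": "provost's office and governing board (who can stop the change)",
    "Environment": "budget limits, privacy law, accreditation requirements",
}

# Print the mnemonic letter alongside each element and its description.
for element, description in catwoe.items():
    print(f"{element[0]} - {element}: {description}")
```

Filling in all six elements before modeling the system forces the "should we?" question onto the table early: the Worldview and Owners entries in particular expose whose purposes the transformation actually serves.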
Scholarly Grounding
References
📚 Companion Resource Sites
Digital Transformation Resources
Frameworks, tools, and evidence-based practices — Schmarzo's Laws of DT, Warren's Dynamic Systems Engineering framework, and a curated reference library.
dtresources.systemly.net

Ethical Choices with Educational Technology
The ECET framework — a research-validated model for ethical decision-making in educational technology adoption and instructional design.
ecet.systemly.space

Warren — Refereed Journal Articles & Book Chapters
- Warren, S. J., Boston Vogt, E., Tincher, B., & Yang, J. (2025). Enhancing quality assurance through strategic artificial intelligence integration: A framework for higher education digital transformation. Quality Assurance in Education. https://doi.org/10.1108/QAE-09-2024-0183
- Warren, S. J., & Churchill, C. (2024). A model for applying cognitive theory to the firm to improve organizational learning for sustained knowledge production and competitive advantage. Performance Improvement Journal. https://doi.org/10.56811/PFI-594355
- Warren, S. J., Churchill, C., & Hayes, A. (2024). A service-based measurement model for determining disruptive workforce training technology value: Return on investment calculations and example. In J. Delello & R. McWhorter (Eds.), Disruptive Technologies in Education and Workforce Development (1st ed., pp. 206–231). IGI Global Business Science Reference. https://doi.org/10.4018/979-8-3693-3003-6.ch010
- Warren, S. J., & Beck, D. E. (2023). The Ethical Choices with Educational Technology Framework: A description of its research-based validation model and process. In M. J. Spector, B. B. Lockee, & M. D. Childress (Eds.), Learning, Design, and Technology: An International Compendium of Theory, Research, Practice and Policy (1st ed.). Springer Nature.
- Warren, S. J., Beck, D., & McGuffin, K. (2023). In support of ethical instructional design: Translation and use of the ECET ID tool for educational developers. In S. Moore & T. Dousay (Eds.), Applied Ethics for Instructional Design and Technology: Design, Decision Making, and Contemporary Issues (1st ed., pp. 41–62). EdTechBooks.org. https://doi.org/10.59668/270.12644 · ethicalchoices.carrd.co
- Warren, S. J., McGuffin, K., Moran, S., & Beck, D. E. (2023). Educational technology and its environmental impacts: Ethical considerations in the adoption of technology at scale using Life Cycle Cost Analysis and Total Cost of Ownership approaches. In S. Moore & T. Dousay (Eds.), Applied Ethics for Instructional Design and Technology: Design, Decision Making, and Contemporary Issues (1st ed., pp. 3–26). EdTechBooks.org.
- Warren, S. J., & Churchill, C. (2022). Strategic, operations, and evaluation planning for higher education distance education. Distance Education, 43(2), 239–270. https://doi.org/10.1080/01587919.2022.2064821
Warren — Conference Presentations (AERA, AECT, DSI)
- Warren, S. J., Churchill, C., Fog, A., Tincher, B., Robins Boone, J., & Robinson, S. L. (2025). Measuring return on investment from digital transformation in corporate training environments. Decision Sciences Institute 2025 Annual Meeting. Orlando, FL. Conference program
- Warren, S. J., Grotewold, K. S., & Beck, D. E. (2024, November 23). Exploration and application of the Information Technology Adoption Risk Evaluation Decision-Making Scorecard. Decision Sciences Institute 2024 Annual Meeting. Phoenix, AZ.
- Warren, S. J., Beck, D. E., Grotewold, K. S., & Tincher, B. (2024, November 25). The Digital Transformation Adoption Decision Framework: Validation process and application examples. Decision Sciences Institute 2024 Annual Meeting. Phoenix, AZ. Conference program
- Warren, S. J., & Tincher, B. (2024, November 25). Addressing the analytics chasm with digital transformation systems modeling. Decision Sciences Institute 2024 Annual Meeting. Phoenix, AZ.
- Warren, S. J., & Tincher, B. (2024, November 25). The use of concurrent systems methodology to analyze complex situations in preparation for digital transformation. Decision Sciences Institute 2024 Annual Meeting. Phoenix, AZ.
- Grotewold, K. S., Warren, S. J., & Beck, D. (2024). Ethical Choices in Educational Technology Framework for AI: Applied examples for decision-making with scoring. Association for Educational Communications and Technology 2024 Annual Meeting. Kansas City, MO.
- Grotewold, K. S., & Warren, S. J. (2024). Instructors' views of the effectiveness, efficiency, and trustworthiness of an artificial intelligence literature review tool. American Educational Research Association 2024 Annual Meeting. Philadelphia, PA.
- Warren, S. J., & McGuffin, K. (2024, November 23). Systems thinking using environmental engineering to evaluate and mitigate the environmental impacts of information technology. Decision Sciences Institute 2024 Annual Meeting. Phoenix, AZ.
- Beck, D., & Warren, S. (2023). The Ethical Choices with Educational Technology (ECET) Framework: A practitioners session. Proceedings of the Annual Meeting of the Association for Educational Communications & Technology.
- Warren, S., Robinson, H., & Beck, D. (2023). Restructuring a doctoral program informed by ethics of care for decision-making regarding systems, policies, processes, and communication. Proceedings of the Annual Meeting of the Association for Educational Communications & Technology.
- Warren, S. J., Moore, S., Beck, D., Leary, H., Tilberg-Webb, H., & Lin, L. (2021). Ethical issues in practical problems: Implications for design, decision making, and leadership. Proceedings of the Annual Meeting of the Association for Educational Communications & Technology.
- Beck, D., & Warren, S. J. (2021). ECET: A proposed framework to guide ethical instructor choices with learning technologies. Association for Educational Communications and Technology Annual Meeting.
- Beck, D., & Warren, S. J. (2020). ECET: A proposed framework to guide ethical instructor choices with learning technologies. Association for Educational Communications and Technology Annual Meeting.
Warren — Invited Presentations on AI (2023–2025)
- Warren, S. J. (2025, November 6). Reasoning in the age of machines. Lubbock Christian University. Lubbock, TX. https://lcu2025st.systemly.space/
- Warren, S. J. (2025, November 6). Digital transformation in the age of AI: Navigating challenges and embracing opportunities in K-12, higher education, and business. Lubbock Economics Forum. Lubbock, TX. https://lcu2025.systemly.net/
- Warren, S. J. (2025, November 6). The wild west of generative AI: Considering gen AI's impact on user experience. Lubbock Christian University. Lubbock, TX.
- Warren, S. J. (2023). AI support tools for teaching, learning, and research. UNT College of Information CODE Series. University of North Texas, Denton, TX.
- Warren, S. J. (2023). Balancing innovation and security: AI and cybersecurity policy and practice. Lubbock Christian University and CoNetrix Cybersecurity Symposium. Lubbock, TX.
- Warren, S. J. (2023). Challenges and opportunities with AI for trust in higher education: Potential impacts on admissions, pedagogy, and student workforce transitions. LCU Scholars Colloquium. Lubbock Christian University, Lubbock, TX.
- Warren, S. J. (2023). AI support tools in school and work environments: Considering the next 50 years. LCU Scholars Colloquium. Lubbock Christian University, Lubbock, TX.
- Warren, S. J. (2023). Celebration of scholarship contributors luncheon and the future of higher education technologies. LCU Scholars Colloquium. Lubbock Christian University, Lubbock, TX.
Primary References
- Cui, Y., Zeng, M. L., Du, X. K., & He, W. M. (2025). What shapes learners' trust in AI? A meta-analytic review of its antecedents and consequences. IEEE Access, 13, Article 11170322.
- Heidegger, M. (1977). The Question Concerning Technology and Other Essays (W. Lovitt, Trans.). Harper & Row. (Original work published 1954)
- Lardi, K. (2022). The Human Side of Digital Business Transformation. Wiley.
- Nicolescu, L., & Tudorache, M. T. (2022). Human-computer interaction in customer service: The experience with AI chatbots—A systematic literature review. Electronics, 11(10), 1579.
- Kanont, K., Pingmuang, P., Simasathien, T., Wisnuwong, S., Wiwatsiripong, B., Poonpirome, K., Songkram, N., & Khlaisang, J. (2024). Generative-AI, a learning assistant? Factors influencing higher-ed students' technology acceptance. Electronic Journal of e-Learning, 22(6), 18–33.
- Schmarzo, B. (2020). The Economics of Data, Analytics, and Digital Transformation: The theorems, laws, and empowerments to guide your organization's digital transformation. Packt Publishing.
- Syed, R., Bandara, W., & Eden, R. (2023). Public sector digital transformation barriers: A developing country experience. Information Polity, 28(1), 5–27.
- Yu, X., Yang, Y., & Li, S. (2024). Users' continuance intention towards an AI painting application. PLoS ONE, 19(5), e0301821.
Foundational Systems Theory
- Checkland, P. (1981). Systems Thinking, Systems Practice. Wiley.
- Churchman, C. W. (1968). The Systems Approach. Dell.
- Simon, H. A. (1969). The Sciences of the Artificial. MIT Press.
- Ulrich, W. (1983). Critical Heuristics of Social Planning. Haupt.
- Jackson, M. C. (2003). Systems Thinking: Creative Holism for Managers. Wiley.
Political Economy & Social Theory
- Braverman, H. (1974). Labor and Monopoly Capital. Monthly Review Press.
- Foucault, M. (1977). Discipline and Punish. Pantheon Books.
- Habermas, J. (1984). The Theory of Communicative Action (Vol. 1). Beacon Press.
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
Technology Accountability & Ethics
- Benjamin, R. (2019). Race After Technology. Polity Press.
- Eubanks, V. (2018). Automating Inequality. St. Martin's Press.
- Noble, S. U. (2018). Algorithms of Oppression. NYU Press.
- Costanza-Chock, S. (2020). Design Justice. MIT Press.
Educational Technology & Learning Theory
- Engeström, Y. (1987). Learning by Expanding. Orienta-Konsultit.
- Lave, J., & Wenger, E. (1991). Situated Learning. Cambridge University Press.
- Vygotsky, L. S. (1978). Mind in Society. Harvard University Press.