BONUS: When AI Knows Your Emotional Triggers Better Than You Do — Navigating Mindfulness in the AI Age | Mo Edjlali

In this thought-provoking conversation, former computer engineer and mindfulness leader Mo Edjlali explores how AI is reshaping human meaning, attention, and decision-making. We examine the critical question: what happens when AI knows your emotional triggers better than you know yourself? Mo shares insights on remaining sovereign over our attention, avoiding dependency in both mindfulness and technology, and preparing for a world where AI may outperform us in nearly every domain.

From Technology Pioneer to Mindfulness Leader

"I've been very heavily influenced by technology, computer engineering, software development. I introduced DevOps to the federal government. But I have never seen anything change the way in which human beings work together like Agile." — Mo Edjlali

Mo's journey began in the tech world — graduating in 1998, he was on the front line of the internet explosion. He remembers the days before the internet, watched online multiplayer games emerge in 1994, and worked on some of the most complicated tech projects in the federal government. Technology felt almost like magic, advancing at an exponential rate, faster than anything else. But when Mo discovered mindfulness practices 12-15 years ago, he found something equally transformative: actual exercises to develop the emotional intelligence and soft skills that the tech world talked about but never taught. Mindfulness provided logical, practical methods that didn't require "woo-woo" beliefs — just practice that fundamentally changed his relationship with his mind. This dual perspective — tech innovator and mindfulness teacher — gives Mo a unique lens for understanding where we're headed.

The Shift from Liberation to Dependency

"I was fortunate enough, the teachers I was exposed to, the mentality was very much: you're gonna learn how to meditate on your own, in silence. There is no guru. There is no cult of personality." — Mo Edjlali

Mo identifies a dangerous drift in the mindfulness movement: from teaching independence to creating dependency. His early training, particularly a Vipassana retreat led by S.N. Goenka, modeled true liberation — you show up for 10 days, pay nothing, receive food and lodging, learn to meditate, then donate what you can at the end. Critically, you leave able to meditate on your own, without worshiping a teacher or subscribing to guided meditations. But today's commercialized mindfulness often creates the opposite: powerful figures leading fiefdoms, and consumers taught to listen to guided meditations rather than meditate independently. This dependency model mirrors exactly what's happening with AI — systems designed to make us rely on them rather than empower our own capabilities. Recognizing this parallel is essential for navigating both fields wisely.

AI as a New Human Age, Not Just Another Tool

"With AI, this is different. This isn't like mobile computing, this isn't like the internet. We're entering a new age. We had the Bronze Age, the Iron Age, the Industrial Age. When you enter a new age, it's almost like knocking the chess board over, flipping the pieces upside down. We're playing a new game." — Mo Edjlali

Mo frames AI not as another technology upgrade but as the beginning of an entirely new human age. In a new age, everything shifts: currency, economies, government, technology, even religions. A documentary about the Bronze Age collapse taught him that when ages turn over, the old rules no longer apply. This perspective explains why AI feels fundamentally different from previous innovations. GPT-2 was interesting; GPT-3 blew Mo's mind and made him realize we're witnessing something unprecedented. While he's optimistic about the potential for sustainable abundance and extraordinary breakthroughs, he's also aware we're entering both the most exciting and most frightening time to be alive. Everything we learned in high school might be proven wrong as AI rewrites human knowledge, translates animal languages, extends longevity, and achieves things we can't even imagine.

The Mental Health Tsunami and Loss of Purpose

"If we do enter the age of abundance, where AI could do anything that human beings could do and do it better, suddenly the system we have set up — where our purpose is often tied to our income and our job — suddenly, we don't need to work. So what is our purpose?" — Mo Edjlali

Mo offers a provocative vision of the future: a world where people might pay for jobs rather than get paid to work. It sounds crazy until you realize it's already happening — people pay $100,000-$200,000 for college just to get a job, politicians spend millions to get elected. If AI handles most work and we enter an age of abundance, jobs won't be about survival or income — they'll be about meaning, identity, and social connection. This creates three major crises Mo sees accelerating: attacks on our focus and attention (technology hijacking our awareness), polarization (forcing black-and-white thinking), and isolation (pushing us toward solo experiences). The mental health tsunami is coming as people struggle to find purpose in a world where AI outperforms them in domain after domain. The jobs will change, the value systems will shift, and those without tools for navigating this transformation will suffer most.

When AI Reads Your Mind

"Researchers at Duke University had hooked up fMRI brain scanning technology and took that data and fed it into GPT 2. They were able to translate brain signals into written narrative. So the implications are that we could read people's minds using AI." — Mo Edjlali

The future Mo describes isn't science fiction — it's already beginning. Three years ago, researchers used early GPT to translate brain signals into written text by scanning people's minds with fMRI and training AI on the patterns. Today, AI knows a lot about heavy users like Mo through chat conversations. Tomorrow, AI will have video input of everything we see, sensory input from our biometrics (pulse, heart rate, health indicators), and potentially direct connection to our minds. This symbiotic relationship is coming whether we're ready or not. Mo demonstrates this with a personal experiment: he asked his AI to tell him about himself, describe his personality, identify his strengths, and most powerfully — reveal his blind spots. The AI's response was outstanding, better than what any human (even his therapist or himself) could have articulated. This is the reality we're moving toward: AI that knows our emotional triggers, blind spots, and patterns better than we do ourselves.

Using AI as a Mirror for Self-Discovery

"I asked my AI, 'What are my blind spots?' Human beings usually won't always tell you what your blind spots are, they might not see them. A therapist might not exactly see them. But the AI has... I've had the most intimate kind of conversations about everything. And the response was outstanding." — Mo Edjlali

Mo's approach to AI is both pragmatic and experimental. He uses it extensively, as heavily as the teenagers and early college students who are on it all the time. But rather than just using AI as a tool, he treats it as a mirror for understanding himself. Asking AI to identify your blind spots is a powerful exercise because the AI has observed all your conversations, patterns, and tendencies without the human limitations of forgetfulness or social politeness. Host Vasco shares a similar experience of using AI as a therapy companion — not replacing his human therapist, but preparing for sessions and processing afterward. This reveals an essential truth: most of us don't understand ourselves that well. We're blind navigators using an increasingly powerful tool. The question isn't whether AI will know us better than we know ourselves — that's already happening. The question is how we use that knowledge wisely.

The Danger of AI Hijacking Our Agency

"There's this real danger. I saw that South Park episode about ChatGPT where his wife is like, 'Come on, put the AI down, talk to me,' and he's got this crazy business idea, and the AI keeps encouraging him along. It's a point where he's relying way too heavily on the AI and making really poor decisions." — Mo Edjlali

Not all AI use is beneficial. Mo candidly admits his own mistakes — sometimes leaning into AI feedback over his actual users' feedback for his Meditate Together app because "I like what the AI is saying." This mirrors the South Park episode's warning about AI dependency, where the character's AI encourages increasingly poor decisions while his relationships suffer. Social media demonstrates this danger at scale: AI algorithms tuned to steal our attention and hijack our agency, preventing us from thinking about what truly matters — relationships and human connection. Mo shares a disturbing story about Zoom bombers disrupting Meditate Together sessions, filming the disruptions, and posting the footage on YouTube, where it got 90,000 views and comments thanking the disruptors for "making my day better." Technology created a cannibalistic dynamic where teenagers watched videos of their mothers, aunts, and grandmothers being harassed during meditation. When Mo tried to contact Google, the company's incentive structure prioritized views and revenue over human decency. Technology combined with capitalism creates a dangerous momentum toward monetizing attention at any cost.

Remaining Sovereign Over Your Attention

"Traditionally, mindfulness does an extraordinary job, if you practice right, to help you regain your agency of your focus and concentration. It takes practice. But reading is now becoming a concentration practice. It's an actual practice." — Mo Edjlali

Mo identifies three major symptoms affecting us: attacks on focus and attention, polarization into black-and-white thinking, and isolation. Mindfulness practices directly counter all three — but only if practiced correctly. Training attention, focus, and concentration requires actual practice, not just listening to guided meditations. Mo offers practical strategies: treating reading as a concentration practice (he asks "does anyone read anymore?", noting that sustained reading now demands deliberate effort), turning off AirPods while jogging or driving to find silence, spending time alone with your thoughts, and recognizing that we were handed extraordinary power (smartphones) with zero training in how to use it with awareness. Older generations remember having to rewind VHS tapes — forced moments of patience and stillness that no longer exist. We need to deliberately recreate those spaces where we're not constantly consuming entertainment and input.

Dialectic Thinking: Beyond Polarization

"I saw someone the other day wear a shirt that said, 'I'm perfect the way I am.' That's one-dimensional thinking. Two-dimensional thinking is: you're perfect the way that you are, and you could be a little better." — Mo Edjlali

Mo's book OpenMBSR specifically addresses polarization by introducing dialectic thinking — the ability to hold paradoxes and seeming contradictions simultaneously. Social media and algorithms push us toward one-dimensional, black-and-white thinking: good/bad, right/wrong, with me/against me. But reality is far more nuanced. The ability to think "I'm perfect as I am AND I can improve" or "AI is extraordinary AND dangerous" is essential for navigating complexity. This mirrors the tech world's embrace of continuous improvement in Agile — accepting where you are while always pushing for better. Chess players learned this years ago when AI defeated humans — they didn't freak out, they accepted it and adapted. Now AI in chess doesn't just give answers; it helps humans understand how it arrived at those answers. This partnership model, where AI coaches us through complexity rather than simply replacing us, represents the healthiest path forward.

Building Community, Not Dependency

"When people think to meditate, unfortunately, they think, I have to do this by myself and listen to guided meditation. I'm saying no. Do it in silence. If you listen to guided meditation, listen to guided meditation that teaches you how to meditate in silence. And do it with other people, with intentional community." — Mo Edjlali

Mo's OpenMBSR initiative explicitly borrows from the Agile movement's success: grassroots, community-centric, open source, transparent. Rather than creating fiefdoms around cult personalities, he wants mindfulness to spread organically through communities helping communities. This directly counters the isolation trend that technology accelerates. Meditate Together exists specifically to create spaces where people meditate with other human beings around the world, with volunteer hosts holding sessions. The model isn't about dependency on a teacher or platform — it's about building connection and shared practice. This aligns perfectly with how the tech world revolutionized collaborative work through Agile and Scrum: transparent, iterative, valuing individuals and interactions. The question for both mindfulness and AI adoption is whether we'll create systems that empower independence and community, or ones that foster dependency and isolation.

Preparing for a World Where AI Outperforms Humans

"AI is going to need to kind of coach us and ease us into it, right? There's some really dark, ugly things about ourselves that could be jarring without it being properly shared, exposed, and explained." — Mo Edjlali

Looking at his children, Mo wonders what tools they'll need in a world where AI may outperform humans in nearly every domain. The answer isn't trying to compete with AI in calculation, memory, or analysis — that battle is already lost. Instead, the essential human skills become self-awareness, emotional intelligence, dialectic thinking, community building, and maintaining agency over attention and decision-making. AI will need to become a coach, helping humans understand not just answers but how it arrived at those answers. This requires AI development that prioritizes human growth over profit maximization. It also requires humans willing to do the hard work of understanding themselves — confronting blind spots, managing emotional triggers, practicing concentration, and building genuine relationships. The mental health tsunami Mo predicts isn't inevitable if we prepare now by teaching these skills widely, building community-centric systems, and designing AI that empowers rather than replaces human wisdom and connection.

About Mo Edjlali

Mo Edjlali is a former computer engineer and the founder and CEO of Mindful Leader, the world's largest provider of Mindfulness-Based Stress Reduction training. His new book, Open MBSR: Reimagining the Future of Mindfulness, explores how ancient practices can help us navigate the AI revolution with awareness and resilience.

You can learn more about Mo and his work at MindfulLeader.org, check out Meditate Together, and read his articles on AI's Mind-Reading Breakthrough and AI: Not Another Tool, but a New Human Age.

Episodes (200)

Agile Meets AI—How to Code Fast Without Breaking Things | Llewellyn Falco

AI Assisted Coding: Agile Meets AI—How to Code Fast Without Breaking Things, With Llewellyn Falco In this BONUS episode we explore the practice of coding with AI—not just the buzzwords, but the real-world experience. Our guest, Llewellyn Falco, has been learning by doing, exploring the space of AI-assisted coding from the experimental and intuitive—what some call vibecoding—to the more structured world of professional, world-class software engineering. This is a conversation for practitioners who want to understand what's actually happening on the ground when we code with AI. Understanding Vibecoding "You can now program without looking at code. When you're in that space, vibecoding is the word we're using to say, we are programming in a way that does not relate to programming last year." The software development landscape shifted dramatically in early 2025. Vibecoding represents a fundamental change in how we create software—programming without constantly looking at the code itself. This approach removes many traditional limitations around technology, language, and device constraints, allowing developers to move seamlessly between different contexts. However, this power comes with responsibility, as developers can now move so fast that traditional safety practices become even more critical. From Concept to Working App in 15 Minutes "We wrote just a markdown page of 'here's what we want this to look like'. And then we fed that to Claude Code. And 15 minutes later we had a working app on the phone." At the Agile 2025 conference in Denver, Llewellyn participated in a hackathon focused on helping psychologists prevent child abuse. Working with customer Amanda, a psychologist, and data scientist Rachel, the team identified a critical problem: clinicians weren't using the most effective parenting intervention technique because recording 60 micro-interactions in 5 minutes was too difficult and time-consuming. The team's approach embodied lean startup principles turned up to eleven. After understanding the customer's needs through exposition and conversation, they created a simple markdown specification and used Claude Code to generate a working mobile app in just 15 minutes. When Amanda tested it, she was moved to tears—after 20 years of trying to make progress on this problem, she finally had hope. Over three days, the team released 61 iterations, constantly getting feedback and refining the solution. Iterative Development Still Matters When Coding With AI "We need to see things working to know what to deliver next. That's never going to change. Unless you're building something that's already there." The team's success wasn't about writing a complete requirements document upfront. Instead, they delivered a minimal viable product quickly, tested it with real users, and iterated based on feedback. This agile approach proved essential even—or especially—when working with AI. One breakthrough came when Amanda used the number keypad instead of looking at her phone screen. With her full attention on the training video she'd watched hundreds of times, she noticed an interaction she had missed before. At that moment, the team knew they had created real value, regardless of what additional features they might build. Good Engineering Practices Without Looking at Code "We asked it to do good engineering practices, even though we didn't really understand what it was doing. We just sort of say, okay, yeah, that seems sensible." A critical moment came when the code had grown large and complex. 
Rather than diving into the code themselves, Llewellyn and his partner Lotta asked the AI to refactor the code to make a panel easy to switch before actually making the change. They verified functionality worked through manual testing but never looked at how the refactoring was implemented. This demonstrates that developers can maintain good practices like refactoring and clean architecture even when working at a higher level of abstraction. Key practices for AI-assisted development include: Don't accept AI's default settings—they're based on popularity, not best practices Prime the AI with the practices you want it to use through configuration files Tell AI to be honest and help you avoid mistakes, not just be agreeable Ask for explanations of architecture and evaluate whether approaches make sense Keep important decisions documented in markdown files that can be referenced later "The documentation is now executable. I can turn it into code" "The documentation is now executable. I can turn it into code. If I had to choose between losing my documentation or losing my code, I would keep the docs. I think I could regenerate the code pretty easily." In this new paradigm, documentation takes on new importance—it becomes the specification from which code can be regenerated. The team created and continuously updated markdown files for project context, architecture, and individual features. This practice allowed them to reset AI context when needed while maintaining continuity of their work. The workflow was bidirectional: sometimes they'd write documentation first and have AI generate code; other times they'd build features iteratively and have AI update the documentation. This approach using tools like Super Whisper for voice-to-text made creating and maintaining documentation effortless. Remove Deterministic Tasks from AI "AI is sloppy. It's inconsistent. Everything that can be deterministic—take it out. AI can write that code. But don't make AI do repetitive tasks." A crucial principle emerged: anything that needs to be consistently and repeatedly correct should be automated with traditional code, not left to AI. The team wrote shell scripts for tasks like auto-incrementing version numbers and created git hooks to ensure these scripts ran automatically. They also automated file creation with dates at the top, removing the need for AI to track temporal information. This principle works both ways—deterministic logic should be removed from underneath AI (via scripts and hooks) and from above AI (via orchestration scripts that call AI in loops with verification steps in between). Anti-Patterns to Avoid "The biggest anti-pattern is you're not committing frequently. I really want the ability to drop my context and revert my changes at a moment's notice." The primary anti-pattern when coding with AI is failing to commit frequently to version control. The ability to quickly drop context, revert changes, and start fresh becomes essential when working at this pace. Getting important decisions into documentation files and code into version control enables rapid experimentation without fear of losing work. Other challenges include knowing when to focus on the right risks. The team had to navigate competing priorities—customers wanted certain UX features, but the team identified data collection and storage as the critical unknown risk that needed solving first. This required diplomatic firmness in prioritizing work based on technical risk assessment rather than just user requests. 
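
The episode describes this "take the deterministic work away from the AI" idea in terms of shell scripts and git hooks but doesn't share them, so what follows is only a minimal, hypothetical sketch in Python. The file names, the VERSION format, and the hook wiring are assumptions made for illustration: a tiny helper that bumps a version number and stamps the date, intended to run from a git pre-commit hook so the AI never has to track versions or dates itself.

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit helper: keep version and date bookkeeping deterministic.

The team in the episode used shell scripts wired into git hooks; this is an
illustrative Python stand-in. One possible wiring is a .git/hooks/pre-commit
file containing a single line:  python3 scripts/bump_version.py
(File names and formats here are assumptions, not the team's actual setup.)
"""
from datetime import date
from pathlib import Path
import subprocess

VERSION_FILE = Path("VERSION")        # assumed to hold something like "1.4.27"
CHANGELOG_FILE = Path("CHANGELOG.md")


def bump_patch() -> str:
    """Deterministically increment the last component of the version number."""
    current = VERSION_FILE.read_text().strip() if VERSION_FILE.exists() else "0.1.0"
    major, minor, patch = current.split(".")
    new_version = f"{major}.{minor}.{int(patch) + 1}"
    VERSION_FILE.write_text(new_version + "\n")
    return new_version


def stamp_changelog(version: str) -> None:
    """Prepend today's date so the AI never has to reason about what day it is."""
    previous = CHANGELOG_FILE.read_text() if CHANGELOG_FILE.exists() else ""
    CHANGELOG_FILE.write_text(f"## {version} - {date.today().isoformat()}\n\n{previous}")


if __name__ == "__main__":
    version = bump_patch()
    stamp_changelog(version)
    # Re-stage the touched files so the bump lands in the commit being made.
    subprocess.run(["git", "add", str(VERSION_FILE), str(CHANGELOG_FILE)], check=True)
    print(f"Version bumped to {version}")
```

Run inside a repository, every commit then carries a fresh version and a dated changelog entry with none of it left to the AI, which also makes the "commit frequently" safety net cheaper to keep up.
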
Essential Tools for AI-Assisted Development "If you are using AI by going to a website, that is not what we are talking about here." To work effectively with AI, developers need agentic tools that can interact with files and run programs, not just chat interfaces. Recommended tools include: Claude Code (CLI for file interaction) Windsurf (VS Code-like interface) Cursor (code editor with AI integration) RooCode (alternative option) Super Whisper (voice-to-text transcription for Mac) Most developers working at this level have disabled safety guards, allowing AI to run programs without asking permission each time. While this carries risks, committing frequently to version control provides the safety net needed for rapid experimentation. The Power of Voice Interaction "Most of the time coding now looks like I'm talking. It's almost like Star Trek—you're talking to the computer and then code shows up." Using voice transcription tools like Super Whisper transformed the development experience. Speaking instead of typing not only increased speed but also changed the nature of communication with AI. When speaking, developers naturally provide more context and explanation than when typing, leading to better results from AI systems. This proved especially valuable in a crowded conference room where Super Whisper could filter out background noise and accurately transcribe the speakers' voices. The tool enabled natural, conversational interaction with development tools. Balancing Speed with Safety Over three days, the team released 61 times without comprehensive automated testing, focusing instead on validating user value through manual testing with the actual customer. However, after the hackathon, Llewellyn added automated testing by creating a test plan document through voice dictation, having AI clean it up and expand it, then generating Puppeteer tests and shell scripts to run them—all in about 40 minutes. This demonstrates a pragmatic approach: when exploring and validating with users, manual testing may suffice; but for ongoing maintenance and confidence, automated tests remain valuable and can be generated efficiently with AI assistance. The Future of Software Development "If you want to make something, there could not be a better time than now." The skills required for effective software development are shifting. Understanding how to assess risk, knowing when to commit code, maintaining good engineering practices, and finding creative solutions within system constraints remain critical. What's changing is that these skills are now applied at a higher level of abstraction, with AI handling much of the detailed implementation. The space is evolving rapidly—practices that work today may need adjustment in months. Developers need to continuously experiment, stay current with new tools and models, and develop instincts for working effectively with AI systems. The fundamentals of agile development—rapid iteration, customer feedback, risk assessment, and incremental delivery—matter more than ever. About Llewellyn Falco Llewellyn is an Agile and XP (Extreme Programming) expert with over two decades of experience in Java, OO design, and technical practices like TDD, refactoring, and continuous delivery. He specializes in coaching, teaching, and transforming legacy code through clean code, pair programming, and mob programming. You can link with Llewellyn Falco on LinkedIn.

7 Oct 49min

Beyond AI Code Assistants: How Moldable Development Answers Questions AI Can't | Tudor Girba

AI Assisted Coding: Beyond AI Code Assistants: How Moldable Development Answers Questions AI Can't With Tudor Girba In this BONUS episode, we explore Moldable Development with Tudor Girba, CEO of feenk.com and creator of the Glamorous Toolkit. We dive into why developers spend over 50% of their time reading code—not because they want to, but because they lack the answers they need. Tudor shares how building contextual tools can transform software development, making systems truly understandable and enabling decisions at the speed of thought. The Hidden System: A Telco's Three-Year Quest "They had a system consisting of five boxes, but they could only enumerate four. If this is your level of awareness about what is reality around you, you have almost no chance of systematically affecting that reality." Tudor opens with a striking case study from a telecommunications company that spent three years and hundreds of person-years trying to optimize a data pipeline. Despite massive effort and executive mandate, the pipeline still took exactly one day to process data—no improvement whatsoever. When Tudor's team investigated, they asked for an architecture diagram. The team drew four boxes representing their system. But when Tudor's team started building tools to mirror this architecture back from the actual code, they discovered something shocking: there was an entire fifth system between the first and second boxes that nobody knew existed. This missing system was likely the bottleneck they'd been trying to optimize for three years. Why Reading Code Doesn't Scale "Developers spend more than 50% of their time reading code. The problem is that our systems are typically larger than anyone can read, and by the time you finish reading, the system has already changed many times." The real issue isn't the time spent reading—it's that reading is the most manual, least scalable way to extract information from systems. When developers read code, they're actually trying to answer questions so they can make decisions. But a 250,000-line system would take one person-month to read at high speed, and the system changes constantly during that time. This means everything you learned yesterday becomes merely a hypothesis, not a reliable answer. The fundamental problem is that we cannot perceive anything in a software system except through tools, yet we've never made how we read code an explicit, optimizable activity. The Context Problem: Why Generic Tools Fail "Software is highly contextual, which means we can predict classes of problems people will have, but we cannot predict specific problems people will have." Tudor draws a powerful parallel with testing. Nobody downloads unit tests from the web and applies them to their system—that would be absurd. Instead, we download test frameworks and build tests contextually for our specific system, encoding what's valuable about our particular business logic. Yet for almost everything else in software development, we download generic tools and expect them to work. This is why teams have tens of thousands of static analysis warnings they ignore, while a single failing test stops deployment. The test encodes contextual value; the generic warning doesn't. Moldable Development extends this principle: every question about your system should be answered by a contextual tool you build for that specific question. Tools That Mirror Your Mental Model "Whatever you draw on the whiteboard—that's your mental model. 
But as soon as the system exists, we want the system to mirror you back that thing. We make it the job of the system to show our mental model back to us." When someone draws an architecture diagram on a whiteboard, they're not documenting the system—they're documenting their beliefs about the system. The diagram represents wishes when drawn before the system exists, but beliefs when drawn after. Moldable Development flips this: instead of humans reading code and creating approximations, the system itself generates the visualization directly from the actual code. This eliminates the layers of belief and inference. Whether you're looking at high-level architecture, data lineage across multiple technologies, performance bottlenecks, or business domain structure, you build small tools that extract and present exactly the information you need from the system as it actually is. The Test-Driven Development Parallel "Testing was a way to find some kind of class of answers. But there are many other questions we have, and the question is: is there a systematic way to approach arbitrary questions?" Tudor explains that Moldable Development applies test-driven development principles to all forms of system understanding. Just as we write tests after we understand the functionality we need, we build visualization and analysis tools after we understand the questions we need answered. Both approaches share key characteristics: they're built contextually for the specific system, created by developers during development, and composed of many small tools that collectively model the system. The difference is that TDD focuses on functional decomposition and known expectations, while Moldable Development addresses architecture, security, domain structure, performance, and any other perspective where functional tests aren't the most useful decomposition. From Thousands of Features to Thousands of Tools "In my development environment, I don't have features. I have thousands of tools that coexist. Development environments should be focused not on what exists out of the box, but on how quickly you can create a contextual tool." Traditional development environments offer dozens of features—buttons, plugins, generic views. But Moldable Development environments contain thousands of micro-tools, each answering a specific question about a specific system. The key is making these tools composable and fast to create. Rather than building monolithic tools that try to handle every scenario, you build small inspectors that show one perspective on one object or concept. These inspectors chain together naturally as you drill down from high-level questions to detailed investigations. You might have one inspector showing test failures grouped by exception type, another showing PDF document comparisons, another showing cluster performance, and another showing memory usage—all coexisting and available when needed. The Real Bottleneck To Learning A System: Time to the Next Question "Once you do this, you will see that the interesting bottleneck is in the time to the next interesting question. This is by far the most interesting place to be spending energy." When you commoditize access to answers through contextual tools, something remarkable happens: the bottleneck shifts from getting answers to asking better questions. Right now, because answers come so slowly through manual reading and analysis, we rarely exercise the skill of formulating good questions. 
We make decisions based on gut feelings and incomplete data because we can't afford to dig deeper. But when answers arrive at the speed of thought, you can explore, follow hunches, test hypotheses, and develop genuine insight. The conversation between person and system becomes fluid, enabling decision-making based on actual evidence rather than belief. Moldable Development in Practice: The Lifeware Case "They are investing in software engineering as their competitive advantage. They have 150,000 tests that would take 10 days to run on a single machine, but they run them in 16 minutes distributed across AWS." Tudor shares a powerful case study of Lifeware, a life insurance software company that was featured in Kent Beck's "Test-Driven Development by Example" in 2002 with 4,000 tests. Today they have 150,000 tests and have fully adopted Moldable Development as their core practice. Their business model is remarkable: they take data from insurance companies, throw away the old systems, and reverse-engineer new systems by TDD-ing the business—replaying history to produce pixel-identical documents. They've deployed Glamorous Toolkit as their sole development environment across 100+ developers. Their approach demonstrates that Moldable Development isn't just a research concept but a practical competitive advantage that scales to large teams and complex systems. Why AI Doesn't Solve This Problem "When you ask AI, you will get exactly the same kind of answers. The answer comes quickly, but you will not know whether this is accurate, whether this represents the whole thing, and you definitely do not have an explanation as to why the answer is the way it is." In the age of AI code assistants, it might seem like language models could solve the problem of understanding systems. But Tudor explains why they can't. When you ask an AI about your architecture, you get an opinion—fast but unverifiable. Just like asking a developer to draw the architecture on a whiteboard, you receive filtered information without knowing if it's complete or accurate. Moldable Development, by contrast, extracts answers deterministically from the actual system. Software systems have almost no ambiguity in meaning—they're mathematical, not linguistic. We don't need probabilistic interpretation of source code; we need precise extraction and presentation. The tools you build give you not just answers but explanations of how those answers were derived from the actual system state. Scaling Through Language, Not Features "You need a new kind of development environment where the goal is to create tools much quicker. You need some sort of language in which to express development environments." The technical challenge of Moldable Development is enabling thousands of tools to coexist productively. This requires a fundamentally different approach to development environments. Instead of adding features—buttons and menu items that quickly become overwhelming—you need a language for expressing tools and a system for composing them. Glamorous Toolkit demonstrates this through its inspector architecture, where any object can define custom views that appear contextually. These views compose naturally as you navigate through your investigation, reusing earlier perspectives while adding new ones. The environment becomes a medium for tool creation, not just a collection of pre-built features. Making the Invisible Visible "We cannot perceive anything in a software system except through a tool. 
If that's so important, then the ability to control that shape is probably kind of important too." Software has no inherent shape—it's just data. Every perception we have of it comes through some tool that renders it into a form we can reason about. This means tools aren't nice-to-have accessories; they're fundamental to our ability to work with software at all. The text editor showing code is a tool. The debugger showing variables is a tool. But these are generic tools built once and reused everywhere, which means they show generic perspectives. What if we could control the shape of our software as easily as we write it? What if the system could show us exactly the view we need for exactly the question we have? That's the promise of Moldable Development. About Tudor Girba Tudor Girba is CEO of feenk.com and creator of Moldable Development. He leads the team behind Glamorous Toolkit, a novel IDE that helps developers make sense of complex systems. His work focuses on transforming how teams understand, navigate, and modernize legacy software through custom, insightful tools. Tudor and Simon Wardley are writing a book about Moldable Development which you can get at: https://moldabledevelopment.com/, and read more about in this Medium article. You can link with Tudor Girba on LinkedIn.
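
Glamorous Toolkit itself is a Smalltalk-based environment and its inspector API isn't covered here, so the sketch below is only a rough, hypothetical illustration in Python of the underlying habit: a throwaway "contextual micro-tool" that answers one precise question about a system (which modules actually sit between two layers) from the code itself rather than from a whiteboard belief. The src/ layout and the layer names api and storage are assumptions invented for the example.

```python
"""A tiny "contextual micro-tool": answer one precise question from the system itself.

Illustrative question: which local modules does the api layer pass through before it
reaches the storage layer? Instead of reading the code or trusting a diagram, extract
the import graph and walk it. Paths and layer names below are assumptions.
"""
import ast
from pathlib import Path


def import_graph(root: Path) -> dict[str, set[str]]:
    """Map each local module to the modules it imports, parsed straight from the source."""
    graph: dict[str, set[str]] = {}
    for path in root.rglob("*.py"):
        module = ".".join(path.relative_to(root).with_suffix("").parts)
        deps = graph.setdefault(module, set())
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
    return graph


def modules_between(graph: dict[str, set[str]], start: str, end: str) -> set[str]:
    """Collect every local module reachable from the start layer before hitting the end layer."""
    seen: set[str] = set()
    stack = [name for name in graph if name.startswith(start)]
    while stack:
        for dep in graph.get(stack.pop(), ()):
            # Stop at the target layer, skip external imports, and never revisit a module.
            if dep.startswith(end) or dep not in graph or dep in seen:
                continue
            seen.add(dep)
            stack.append(dep)
    return seen


if __name__ == "__main__":
    graph = import_graph(Path("src"))                    # assumed project root
    between = modules_between(graph, "api", "storage")
    print(f"{len(between)} modules sit between api and storage:")
    for name in sorted(between):
        print(" ", name)
```

The point isn't this particular script; it's the practice of spending a few minutes building a small, disposable tool that extracts the answer from the actual system whenever a question matters, instead of reading or guessing.
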

6 Oct 41min

When Product Owners Eat the Grass for Their Teams | Tom Molenaar

Tom Molenaar: When Product Owners "Eat the Grass" for Their Teams Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: The Vision Catalyst "This PO had the ability to communicate the vision and enthusiasm about the product, even I felt inspired." Tom describes an exceptional Product Owner who could communicate vision and enthusiasm so effectively that even he, as the Scrum Master, felt inspired about the product. This PO excelled at engaging teams in product discovery techniques, helping them move from merely delivering features to taking outcome responsibility. The PO introduced validation techniques, brought customers directly to the office for interviews, and consistently showed the team the impact of their work, creating a strong connection between engineers and end users. The Bad Product Owner: The Micromanager "This PO was basically managing the team with micro-managing approach, this blocked the team from self-organizing." Tom encountered a Product Owner who was too controlling, essentially micromanaging the team instead of empowering them. This PO hosted daily stand-ups, assigned individual tasks, and didn't give the team space for self-organization. When Tom investigated the underlying motivation, he discovered the PO believed that without tight control, the team would underperform. Tom helped the PO understand the benefits of trusting the team and worked with both sides to clarify roles and responsibilities, moving from micromanagement to empowerment. In this segment, we refer to the book "Empowered" by Marty Cagan. Self-reflection Question: How do you help Product Owners find the balance between providing clear direction and allowing team autonomy? [The Scrum Master Toolbox Podcast Recommends] 🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people. 🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue. Buy Now on Amazon [The Scrum Master Toolbox Podcast Recommends] About Tom Molenaar Tom is a team coach with a background in social psychology and behavioral influence. He is passionate about fostering collaboration, and helping teams flourish and achieve their potential. His approach blends insight, empathy, and strategy to cultivate lasting team success. You can link with Tom Molenaar on LinkedIn.

3 Oct 17min

The Three Pillars of Scrum Master Success | Tom Molenaar

Tom Molenaar: Purpose, Process, and People—The Three Pillars of Scrum Master Success Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "I always try to ask the team first, what is your problem? Or what is the next step, do you think? Having their input, having my input, bundle it and share it." Tom defines success for Scrum Masters through three essential pillars: purpose (achieving the team's product goals), process (effective Agile practices), and people (team maturity and collaboration). When joining new teams, he uses a structured approach combining observation with surveys to get a 360-degree view of team performance. Rather than immediately implementing his own improvement ideas, Tom prioritizes asking teams what problems they want to solve and finding common ground for a "handshake moment" on what needs to be addressed. Featured Retrospective Format for the Week: Creative Drawing of the Sprint Tom's favorite retrospective format involves having team members draw their subjective experience of the sprint, then asking others to interpret each other's drawings. This creative approach brings people back to their childhood, encourages laughter and fun, and helps team members tap into each other's experiences in ways that traditional verbal retrospectives cannot achieve. The exercise stimulates understanding between team members and often reveals important topics for improvement while building connection through shared interpretation of creative expressions. Example activity you can use to "draw the sprint". [The Scrum Master Toolbox Podcast Recommends] 🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people. 🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue. Buy Now on Amazon [The Scrum Master Toolbox Podcast Recommends] About Tom Molenaar Tom is a team coach with a background in social psychology and behavioral influence. He is passionate about fostering collaboration, and helping teams flourish and achieve their potential. His approach blends insight, empathy, and strategy to cultivate lasting team success. You can link with Tom Molenaar on LinkedIn.

2 Oct 16min

Systemic Change Management—Making the Emotional Side of Change Visible | Tom Molenaar

Tom Molenaar: Systemic Change Management—Making the Emotional Side of Change Visible Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "We tend to skip the phase where we just give the person the space to grieve, to not know, instead of that, we tend to move to solutions maybe too quick." Tom faces a significant challenge as he prepares to start with new teams transitioning between value streams in a SAFe environment. The teams will experience multiple changes simultaneously - new physical locations, new team dependencies, and organizational restructuring. Tom applies systemic change management principles, outlining five critical phases: sense of urgency, letting go, not knowing, creation, and new beginning. He emphasizes the importance of making the emotional "understream" visible, giving teams space to grieve their losses, and helping them verbalize their feelings before moving toward solutions. In this episode, we refer to Systemic Change Management, an approach that views organizations as complex, interconnected systems—rather than collections of independent parts. Instead of focusing only on individual skills, isolated processes, or top-down directives, SCM works with the whole system (people, structures, culture, and external environment) to create sustainable transformation. Self-reflection Question: How comfortable are you with sitting in uncertainty and allowing teams to process change without immediately jumping to solutions? [The Scrum Master Toolbox Podcast Recommends] 🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people. 🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue. Buy Now on Amazon [The Scrum Master Toolbox Podcast Recommends] About Tom Molenaar Tom is a team coach with a background in social psychology and behavioral influence. He is passionate about fostering collaboration, and helping teams flourish and achieve their potential. His approach blends insight, empathy, and strategy to cultivate lasting team success. You can link with Tom Molenaar on LinkedIn.

1 Oct 18min

Building Trust in Teams - The Foundation of Self-Organization | Tom Molenaar

Tom Molenaar: How to Spot and Fix Lack of Trust in Scrum Teams Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "When people don't speak up, it's because there's no trust. The team showed that they did not feel free to express their opinions." Tom describes working with a team that appeared to be performing well on the surface - they were reaching their goals and had processes in place. However, deeper observation revealed a troubling dynamic: a few dominant voices controlled discussions while half the team remained silent during ceremonies. Through one-on-ones, Tom discovered team members felt judged and unsafe to express their ideas. Using the Lencioni Pyramid as a framework, he helped the team address the fundamental lack of trust that was preventing constructive conflict and genuine collaboration. Featured Book of the Week: Empowered by Marty Cagan Tom recommends "Empowered" by Marty Cagan as a book that significantly influenced his approach to team coaching. The book focuses on empowering teams and organizations to deliver great products while developing ordinary people into extraordinary performing teams. Tom appreciates its well-structured approach that covers all necessary elements without getting lost in details. The book provides practical tools for effective coaching, including techniques for regular one-on-ones, active listening, constructive feedback, setting clear expectations, celebrating success, and creating a culture of learning from failure. [The Scrum Master Toolbox Podcast Recommends] 🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people. 🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue. Buy Now on Amazon [The Scrum Master Toolbox Podcast Recommends] About Tom Molenaar Tom is a team coach with a background in social psychology and behavioral influence. He is passionate about fostering collaboration, and helping teams flourish and achieve their potential. His approach blends insight, empathy, and strategy to cultivate lasting team success. You can link with Tom Molenaar on LinkedIn.

30 Sep 12min

When To Stop Helping Agile Teams To Change—A Real Life Story | Tom Molenaar

Tom Molenaar: When To Stop Helping Agile Teams To Change—A Real Life Story Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Instead of slowing down and meeting the team in their resistance, I started to try and drag them because I saw the vision of the possible improvement, but they did not see it." Tom shares a powerful failure story about a team that didn't feel the urgency to improve their way of working. Despite management wanting the team to become more effective, Tom found himself pushing improvements that the team actively resisted. Instead of slowing down to understand their resistance, he tried to drag them forward, leading to exhaustion and ultimately his decision to leave the assignment. This episode explores the critical lesson that it's not our job to save teams that don't want to be saved, and the importance of recognizing when to step back. Self-reflection Question: When you encounter team resistance to change, how do you distinguish between healthy skepticism that needs addressing and fundamental unwillingness to improve? [The Scrum Master Toolbox Podcast Recommends] 🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥 Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people. 🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue. Buy Now on Amazon [The Scrum Master Toolbox Podcast Recommends] About Tom Molenaar Tom is a team coach with a background in social psychology and behavioral influence. He is passionate about fostering collaboration, and helping teams flourish and achieve their potential. His approach blends insight, empathy, and strategy to cultivate lasting team success. You can link with Tom Molenaar on LinkedIn.

29 Sep 17min

BONUS Product Delight - How to make your product stand out with emotional connection With Nesrine Changuel

BONUS: Nesrine Changuel shares how to create product delight through emotional connection! In this BONUS episode we explore the book by Nesrine Changuel: 'Product Delight - How to make your product stand out with emotional connection.' In this conversation, we explore Nesrine's journey from research to product management, share lessons from her experiences at Google, Spotify, and Microsoft, and unpack the key strategies for building emotionally resonant products that connect with users beyond mere functionality. The Genesis of Product Delight "I quickly realized that there is something that is quite intense while building Skype... it's not just that communication tool, but it was iconic, with its blue, with ringtones, with emojis. So it was clear that it's not just for making calls, but also to make you feel connected, relaxed, and part of it." Nesrine's journey into product delight began during her transition from research to product management at Skype. Working on products at major companies like Skype, Spotify, and Google Meet, she discovered that successful products don't just function well—they create emotional connections. Her role as "Delight PM" at Google Meet during the pandemic crystallized her understanding that products must address both functional and emotional user needs to truly stand out in the market. Understanding Customer Delight in Practice "The delight is about creating two dimensions and combining these two dimensions altogether, it's about creating products that function well, but also that help with the emotional connection." Customer delight manifests when products exceed expectations and anticipate user needs. Nesrine explains that delight combines surprise and joy—creating positive surprises that go beyond basic functionality. She illustrates this with Microsoft Edge's coupon feature, which proactively suggests discounts during online shopping without users requesting it. This anticipation of needs creates memorable peak moments that strengthen emotional connections with products. Segmenting Users by Motivators "We can discover that users are using your product for different reasons. I mean, we tend to think that users are using the product for the same reason." Traditional user segmentation focuses on demographics (who users are) or behavior (what they do). Nesrine advocates for motivational segmentation—understanding why users engage with products. Using Spotify as an example, she demonstrates how users might seek music for specific songs, inspiration, nostalgia, or emotional regulation. This approach reveals both functional motivators (practical needs) and emotional motivators (feelings users want to experience), enabling teams to build features aligned with user desires rather than assumptions. In this segment, we refer to Spotify Wrapped. The Distinction from Jobs To Be Done "There's no contrast. I mean to be honest, it's quite aligned, and I'm a big fan of the job to be done framework." While aligned with Clayton Christensen's Jobs To Be Done framework, Nesrine's approach extends beyond identifying triggers to practical implementation. She acknowledges that Jobs To Be Done provides the foundational theory, distinguishing between personal emotional motivators (how users want to feel) and social emotional motivators (how they want others to perceive them). However, many teams struggle to translate these insights into actual product features—a gap her Product Delight framework addresses through actionable methodologies. 
Navigating the Line Between Delight and Addiction "Building for delight is about creating products that are aligned with users' values. It's about aligning with what people really want themselves to feel. They want to feel themselves, to feel a better version of themselves." The critical distinction between delight and addiction lies in value alignment. Delightful products help users become better versions of themselves and align with their personal values. Nesrine contrasts this with addictive design that creates dependencies contrary to user wellbeing. Using Spotify Wrapped as an example, she explains how reflecting positive achievements (skills learned, personal growth) creates healthy engagement, while raw usage data (hours spent) might trigger negative self-reflection and potential addictive patterns. Getting Started with Product Delight "If you only focus on the functional motivators, you will create products that function, but they will not create that emotional connection. If you take into consideration the emotional motivators in addition to the functional motivators, you create perfect products that connect with users emotionally." Teams beginning their delight journey should start by identifying both functional and emotional user motivators through direct user conversations. The first step involves listing what users want to accomplish (functional) alongside how they want to feel (emotional). This dual understanding enables feature development that serves practical needs while creating positive emotional experiences, leading to products that users remember and recommend. Product Delight and Human-Centered Design "Making products feel as if it was done by a human being... how can you make your product feel as close as possible to a human version of the product." Nesrine positions product delight within the broader human-centered design movement, but focuses specifically on humanization at the product feature level rather than just visual design. She shares examples from Google Meet, where the team compared remote meetings to in-person experiences, and Dyson, which benchmarks vacuum cleaners against human cleaning services. This approach identifies missing human elements and guides feature development toward more natural, intuitive interactions. In this segment we refer to the books Emotional Design by Don Norman, and Design for Emotion by Aarron Walter.. AI's Role in Future Product Delight "AI is a tool, and as every tool we're using, it can be used in a good way, or could be used in a bad way. And it is extremely possible to use AI in a very good way to make your product feel more human and more empathetic and more emotionally engaging." AI presents opportunities to enhance emotional connections through empathetic interactions and personalized experiences. Nesrine cites ChatGPT's conversational style—including apologies and collaborative language—as creating companionship feelings during work. The key lies in using AI to identify and honor emotional motivators rather than exploit them, focusing on making users feel supported and understood rather than manipulated or dependent. Developer Experience as Product Delight "If the user of your products are human beings... whether business consumer engineers, they deserve their emotions to be honored, so I usually don't distinguish between B2B or B2C... I say like B2H, which is business to human." Developer experience exemplifies product delight in B2B contexts. 
Companies like GitHub have created metrics specifically measuring developer delight, recognizing that technical users also have emotional needs. Tools like Jira, Miro, and GitHub succeed by making users feel more competent and productive. Nesrine advocates for "B2H" (business to human) thinking, emphasizing that any product used by humans should consider emotional impact alongside functional requirements. About Nesrine Changuel Nesrine is a product coach, trainer, and author with experience at Google, Spotify, and Microsoft. Holding a PhD from Bell Labs and UCLA, she blends research and practice to guide teams in building emotionally resonant products. Based in Paris, she teaches and speaks globally on human-centered design. You can connect with Nesrine Changuel on LinkedIn.

27 Sep 40min
