AI Assisted Coding: Beyond AI Code Assistants: How Moldable Development Answers Questions AI Can't | Tudor Girba

In this BONUS episode, we explore Moldable Development with Tudor Girba, CEO of feenk.com and creator of the Glamorous Toolkit. We dive into why developers spend over 50% of their time reading code—not because they want to, but because they lack the answers they need. Tudor shares how building contextual tools can transform software development, making systems truly understandable and enabling decisions at the speed of thought.

The Hidden System: A Telco's Three-Year Quest

"They had a system consisting of five boxes, but they could only enumerate four. If this is your level of awareness about what is reality around you, you have almost no chance of systematically affecting that reality."

Tudor opens with a striking case study from a telecommunications company that spent three years and hundreds of person-years trying to optimize a data pipeline. Despite massive effort and executive mandate, the pipeline still took exactly one day to process data—no improvement whatsoever. When Tudor's team investigated, they asked for an architecture diagram. The team drew four boxes representing their system. But when Tudor's team started building tools to mirror this architecture back from the actual code, they discovered something shocking: there was an entire fifth system between the first and second boxes that nobody knew existed. This missing system was likely the bottleneck they'd been trying to optimize for three years.

Why Reading Code Doesn't Scale

"Developers spend more than 50% of their time reading code. The problem is that our systems are typically larger than anyone can read, and by the time you finish reading, the system has already changed many times."

The real issue isn't the time spent reading—it's that reading is the most manual, least scalable way to extract information from systems. When developers read code, they're actually trying to answer questions so they can make decisions. But a 250,000-line system would take one person-month to read at high speed, and the system changes constantly during that time. This means everything you learned yesterday becomes merely a hypothesis, not a reliable answer. The fundamental problem is that we cannot perceive anything in a software system except through tools, yet we've never made how we read code an explicit, optimizable activity.

The Context Problem: Why Generic Tools Fail

"Software is highly contextual, which means we can predict classes of problems people will have, but we cannot predict specific problems people will have."

Tudor draws a powerful parallel with testing. Nobody downloads unit tests from the web and applies them to their system—that would be absurd. Instead, we download test frameworks and build tests contextually for our specific system, encoding what's valuable about our particular business logic. Yet for almost everything else in software development, we download generic tools and expect them to work. This is why teams have tens of thousands of static analysis warnings they ignore, while a single failing test stops deployment. The test encodes contextual value; the generic warning doesn't. Moldable Development extends this principle: every question about your system should be answered by a contextual tool you build for that specific question.
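
To make the parallel concrete, here is a minimal sketch (Python, written for pytest) of what a contextual tool can look like: a team-specific architecture rule encoded as a test, so a violation stops the build the way a failing unit test would. The package names core and ui are hypothetical stand-ins for whatever rule matters in your system.

```python
import ast
import pathlib

# Hypothetical contextual rule: modules under core/ must never import ui.
FORBIDDEN = {"core": {"ui"}}

def imports_of(path: pathlib.Path) -> set[str]:
    """Top-level module names imported by one Python file."""
    tree = ast.parse(path.read_text())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def test_core_does_not_import_ui():
    for package, banned in FORBIDDEN.items():
        for path in pathlib.Path(package).rglob("*.py"):
            offending = imports_of(path) & banned
            assert not offending, f"{path} imports {sorted(offending)}"
```

Like a unit test, the rule is cheap to write, specific to this codebase, and binary: it either holds or it fails loudly.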

Tools That Mirror Your Mental Model

"Whatever you draw on the whiteboard—that's your mental model. But as soon as the system exists, we want the system to mirror you back that thing. We make it the job of the system to show our mental model back to us."

When someone draws an architecture diagram on a whiteboard, they're not documenting the system—they're documenting their beliefs about the system. The diagram represents wishes when drawn before the system exists, but beliefs when drawn after. Moldable Development flips this: instead of humans reading code and creating approximations, the system itself generates the visualization directly from the actual code. This eliminates the layers of belief and inference. Whether you're looking at high-level architecture, data lineage across multiple technologies, performance bottlenecks, or business domain structure, you build small tools that extract and present exactly the information you need from the system as it actually is.
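
As an illustration of what "mirroring back" can mean in practice, here is a small hypothetical sketch that derives a boxes-and-arrows view from the code itself, in this case the imports between top-level Python packages, and emits it as Graphviz DOT. This is only an illustration of the direction (Glamorous Toolkit itself is Smalltalk-based and works on live objects): the point is that the diagram is computed from the system, not drawn from memory.

```python
import ast
import pathlib

def package_edges(root: str = ".") -> set[tuple[str, str]]:
    """Which top-level package imports which, read from the code itself."""
    edges = set()
    for path in pathlib.Path(root).rglob("*.py"):
        package = path.parts[0] if len(path.parts) > 1 else path.stem
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                edges.add((package, node.module.split(".")[0]))
    return edges

if __name__ == "__main__":
    # Emit Graphviz DOT; render with e.g. `dot -Tpng`.
    print("digraph architecture {")
    for src, dst in sorted(package_edges()):
        if src != dst:
            print(f'  "{src}" -> "{dst}";')
    print("}")
```

Had the telco team generated their boxes this way, the fifth system could not have stayed hidden: every edge in the picture is backed by an actual import.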

The Test-Driven Development Parallel

"Testing was a way to find some kind of class of answers. But there are many other questions we have, and the question is: is there a systematic way to approach arbitrary questions?"

Tudor explains that Moldable Development applies test-driven development principles to all forms of system understanding. Just as we write tests after we understand the functionality we need, we build visualization and analysis tools after we understand the questions we need answered. Both approaches share key characteristics: they're built contextually for the specific system, created by developers during development, and composed of many small tools that collectively model the system. The difference is that TDD focuses on functional decomposition and known expectations, while Moldable Development addresses architecture, security, domain structure, performance, and any other perspective where functional tests aren't the most useful decomposition.

From Thousands of Features to Thousands of Tools

"In my development environment, I don't have features. I have thousands of tools that coexist. Development environments should be focused not on what exists out of the box, but on how quickly you can create a contextual tool."

Traditional development environments offer dozens of features—buttons, plugins, generic views. But Moldable Development environments contain thousands of micro-tools, each answering a specific question about a specific system. The key is making these tools composable and fast to create. Rather than building monolithic tools that try to handle every scenario, you build small inspectors that show one perspective on one object or concept. These inspectors chain together naturally as you drill down from high-level questions to detailed investigations. You might have one inspector showing test failures grouped by exception type, another showing PDF document comparisons, another showing cluster performance, and another showing memory usage—all coexisting and available when needed.
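
The inspector idea can be sketched in a few lines. The toy Python below is not Glamorous Toolkit's real API (GT is Smalltalk-based); it only shows the shape: each view is a tiny function registered for a type, many views coexist, and inspecting an object surfaces whichever perspectives apply to it.

```python
from collections import defaultdict

VIEWS = defaultdict(list)

def view(for_type, title):
    """Register a micro-tool: one perspective on one kind of object."""
    def register(render):
        VIEWS[for_type].append((title, render))
        return render
    return register

@view(dict, "Keys")
def dict_keys(obj):
    return sorted(obj)

@view(dict, "Sizes")
def dict_sizes(obj):
    return {key: len(str(value)) for key, value in obj.items()}

def inspect(obj):
    """Show every registered perspective that applies to this object."""
    for cls in type(obj).__mro__:
        for title, render in VIEWS.get(cls, []):
            print(f"--- {title} ---")
            print(render(obj))

inspect({"pipeline": "boxes", "stages": [1, 2, 3, 4, 5]})
```

Adding the thousandth view costs the same as adding the first, a few lines, and views compose because the inspector looks them up from the object at hand rather than from a global menu.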

The Real Bottleneck to Learning a System: Time to the Next Question

"Once you do this, you will see that the interesting bottleneck is in the time to the next interesting question. This is by far the most interesting place to be spending energy."

When you commoditize access to answers through contextual tools, something remarkable happens: the bottleneck shifts from getting answers to asking better questions. Right now, because answers come so slowly through manual reading and analysis, we rarely exercise the skill of formulating good questions. We make decisions based on gut feelings and incomplete data because we can't afford to dig deeper. But when answers arrive at the speed of thought, you can explore, follow hunches, test hypotheses, and develop genuine insight. The conversation between person and system becomes fluid, enabling decision-making based on actual evidence rather than belief.

Moldable Development in Practice: The Lifeware Case

"They are investing in software engineering as their competitive advantage. They have 150,000 tests that would take 10 days to run on a single machine, but they run them in 16 minutes distributed across AWS."

Tudor shares a powerful case study of Lifeware, a life insurance software company that was featured in Kent Beck's "Test-Driven Development by Example" in 2002 with 4,000 tests. Today they have 150,000 tests and have fully adopted Moldable Development as their core practice. Their business model is remarkable: they take data from insurance companies, throw away the old systems, and reverse-engineer new systems by TDD-ing the business—replaying history to produce pixel-identical documents. They've deployed Glamorous Toolkit as their sole development environment across 100+ developers. Their approach demonstrates that Moldable Development isn't just a research concept but a practical competitive advantage that scales to large teams and complex systems.

Why AI Doesn't Solve This Problem

"When you ask AI, you will get exactly the same kind of answers. The answer comes quickly, but you will not know whether this is accurate, whether this represents the whole thing, and you definitely do not have an explanation as to why the answer is the way it is."

In the age of AI code assistants, it might seem like language models could solve the problem of understanding systems. But Tudor explains why they can't. When you ask an AI about your architecture, you get an opinion—fast but unverifiable. Just like asking a developer to draw the architecture on a whiteboard, you receive filtered information without knowing if it's complete or accurate. Moldable Development, by contrast, extracts answers deterministically from the actual system. Software systems have almost no ambiguity in meaning—they're mathematical, not linguistic. We don't need probabilistic interpretation of source code; we need precise extraction and presentation. The tools you build give you not just answers but explanations of how those answers were derived from the actual system state.
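
A small hypothetical example of the difference: the query below answers the question "which classes extend Repository?" by walking the parsed source, so the answer carries its own evidence, a file and line for every hit, instead of arriving as an unverifiable opinion. The base-class name is made up for illustration.

```python
import ast
import pathlib

def subclasses_of(base: str, root: str = ".") -> list[tuple[str, str, int]]:
    """Every class extending `base`, with file and line as evidence."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.ClassDef):
                names = {b.id for b in node.bases if isinstance(b, ast.Name)}
                if base in names:
                    hits.append((node.name, str(path), node.lineno))
    return hits

for name, file, line in subclasses_of("Repository"):
    print(f"{name}  ({file}:{line})")
```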

Scaling Through Language, Not Features

"You need a new kind of development environment where the goal is to create tools much quicker. You need some sort of language in which to express development environments."

The technical challenge of Moldable Development is enabling thousands of tools to coexist productively. This requires a fundamentally different approach to development environments. Instead of adding features—buttons and menu items that quickly become overwhelming—you need a language for expressing tools and a system for composing them. Glamorous Toolkit demonstrates this through its inspector architecture, where any object can define custom views that appear contextually. These views compose naturally as you navigate through your investigation, reusing earlier perspectives while adding new ones. The environment becomes a medium for tool creation, not just a collection of pre-built features.

Making the Invisible Visible

"We cannot perceive anything in a software system except through a tool. If that's so important, then the ability to control that shape is probably kind of important too."

Software has no inherent shape—it's just data. Every perception we have of it comes through some tool that renders it into a form we can reason about. This means tools aren't nice-to-have accessories; they're fundamental to our ability to work with software at all. The text editor showing code is a tool. The debugger showing variables is a tool. But these are generic tools built once and reused everywhere, which means they show generic perspectives. What if we could control the shape of our software as easily as we write it? What if the system could show us exactly the view we need for exactly the question we have? That's the promise of Moldable Development.

About Tudor Girba

Tudor Girba is CEO of feenk.com and creator of Moldable Development. He leads the team behind Glamorous Toolkit, a novel IDE that helps developers make sense of complex systems. His work focuses on transforming how teams understand, navigate, and modernize legacy software through custom, insightful tools. Tudor and Simon Wardley are writing a book about Moldable Development, which you can get at https://moldabledevelopment.com/ and read more about in this Medium article.

You can link with Tudor Girba on LinkedIn.

Episodes (200)

AI Assisted Coding: Transactional AI Development - Commit, Validate, and Rollback With Sergey Sergyenko

AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration

In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes.

Vibecoding: An Automation Design Instrument

"I would define Vibecoding as an automation design instrument. It's not a tool that can deliver end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do."

Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation.

Pair Programming with the Machine

"If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step."

One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance. This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails.

The Transactional Model: Atomic Commits and Rollback

"When you're working with AI, the more atomic commits it delivers, more easy for you to kind of guide and navigate it through the process of development."

Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then rollback and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration.
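
A rough sketch of that transactional loop, assuming plain git and pytest (the commands are standard; the wiring around them is illustrative): validate the AI's atomic change, commit it with the prompt recorded in the message, or roll the working tree back and re-prompt.

```python
import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

def change_validates() -> bool:
    # Stand-in for whatever verification the team trusts, e.g. the test suite.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def commit_or_rollback(prompt: str) -> bool:
    """Treat one AI-generated change as a transaction."""
    if change_validates():
        git("add", "-A")
        git("commit", "-m", f"AI-assisted change\n\nPrompt: {prompt}")
        return True
    git("checkout", "--", ".")  # discard edits to tracked files
    git("clean", "-fd")         # and any files the AI created
    return False
```

Because each change is atomic, rollback costs almost nothing, and the commit history doubles as the context trail described in the next segment.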
Context Management: The Weak Point and the Solution

"Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent."

Context management challenges current AI coding tools—they forget, lose thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself. Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows.

TDD 2.0: Humans Write Tests, AI Writes Code

"I would never allow AI to write the test. I would do it by myself. Still, it can write the code."

Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (the subject of a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation.

The Two Cardinal Rules: Architecture and Verification

"I would never allow AI to invent architecture. Writing AI agentic coding, Vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake."

Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding. Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework becomes the guardrails that prevent AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation.

Prototype vs. Production: Two Different Workflows

"When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, and rapid prototype."

Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI.

The Future: AI Literacy as Mandatory

"Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer."

Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency. Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration.

Resources for Practitioners

"We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful."

Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real-time.

About Sergey Sergyenko

Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams. Their tech stack includes Ruby, Rails, Elixir, and ReactJS.

Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk available in your membership area. If you are not a member don't worry, you can get the 1-month trial and watch the whole conference. You can cancel at any time.

You can link with Sergey Sergyenko on LinkedIn.

27 Nov 41min

AI Assisted Coding: Augmented AI Development - Software Engineering First, AI Second With Dawid Dahl

BONUS: Augmented AI Development - Software Engineering First, AI Second

In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt.

The AAID Philosophy: Don't Abandon Your Brain

"Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps."

Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step.

In this segment we refer to Marcus Hammarberg's work and his book The Bungsu Story.

Software Engineering First, AI Second: A Hill to Die On

"You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on."

What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward. Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that.

Why TDD is Non-Negotiable with AI

"Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change."

Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes 2-3 mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare.

The Problem with "Prompt and Pray" Autonomous Agents

"When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours. This system will not work."

Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops.

Prerequisites: Product Discovery Must Come First

"Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done."

AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts.

What's Wrong with Other AI Frameworks

"When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea."

Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later (backwards approach). They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered, process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily.

Getting Started: Learn Fundamentals First

"The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine."

Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster. But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey.

About Dawid Dahl

Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency.

You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.

26 Nov 45min

AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding | Lou Franco

AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding

In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality.

Vibecoding vs. AI-Assisted Coding: Reading Code Matters

"I read all the code that it outputs, so I need smaller steps of changes."

Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later when changes become difficult.

The Moment of Shift: Staying in the Zone

"It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time."

Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state. The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking—it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design.

Thinking in Commits: The Right Size for AI Work

"I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt."

Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly—if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI—small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

The Tech Debt Question: Same Code, Same Debt

"Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... I'm faster and can make more code, but I invest some of that savings back into cleaning things up."

As the author of "Swimming in Tech Debt," Lou brings a unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything. When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices regardless of how code is generated.

When Vibecoding Creates Debt: AI Resistance as a Symptom

"When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes."

Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices including code review, even when AI makes generation fast.

Can AI Fix Tech Debt? Yes, With Guidance

"You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want."

Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists.

Reinvesting Productivity Gains in Quality

"I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity."

Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial. If AI makes you 30% faster at implementation, using 10% of that gain on refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder.

The 100x Code Concern: Accountability Remains Human

"Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it."

When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions.

Practical Workflow: Inline Prompting and Small Changes

"Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, full codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control.

Resources and Community

Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources.

About Lou Franco

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups, as well as Trello and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum.

You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.

25 Nov 37min

AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI With Elina Patjas

AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI

In this special episode, Elina Patjas shares her remarkable journey from designer to solo developer, building LexieLearn—an AI-powered study tool with 1,500+ users and paying customers—entirely through AI-assisted coding. She reveals the practical workflow, anti-patterns to avoid, and why the future of software might not need permanent apps at all.

The Two-Week Transformation: From Idea to App Store

"I did that, and I launched it to App Store, and I was like, okay, so… If I can do THIS! So, what else can I do? And this all happened within 2 weeks."

Elina's transformation happened fast. As a designer frustrated with traditional software development where maybe 10% of your original vision gets executed, she discovered Cursor and everything changed. Within two weeks, she went from her first AI-assisted experiment to launching a complete app in the App Store. The moment that shifted everything was realizing that AI had fundamentally changed the paradigm from "writing code" to "building the product." This wasn't about learning to code—it was about finally being able to execute her vision 100% the way she wanted it, with immediate feedback through testing.

Building LexieLearn: Solving Real Problems for Real Users

"I got this request from a girl who was studying, and she said she would really appreciate to be able to iterate the study set... and I thought: 'That's a brilliant idea! And I can execute that!' And the next morning, it was 9.15, I sent her a screen capture."

Lexie emerged from Elina's frustration with ineffective study routines and gamified edtech that didn't actually help kids learn. She built an AI-powered study tool for kids aged 10-15 that turns handwritten notes into adaptive quizzes revealing knowledge gaps—private, ad-free, and subscription-based. What makes Lexie remarkable isn't just the technology, but the speed of iteration. When a user requested a feature, Elina designed and implemented it overnight, sending a screen capture by 9:15 AM the next morning. This kind of responsiveness—from customer feedback to working feature in hours—represents a fundamental shift in how software can be built. Today, Lexie has over 1,500 users with paying customers, proving that AI-assisted development isn't just for prototypes anymore.

The Workflow: It's Not Just "Vibing"

"I spend 30 minutes designing the whole workflow inside my head... all the UX interactions, the data flow, and the overall architectural decisions... so I spent a lot of time writing a really, really good spec. And then I gave that to Claude Code."

Elina has mixed feelings about the term "vibecoding" because it suggests carelessness. Her actual workflow is highly disciplined. She spends significant time designing the complete workflow mentally—all UX interactions, data flow, and architectural decisions—then writes detailed specifications. She often collaborates with Claude to write these specs, treating the AI as a thinking partner. Once the spec is clear, she gives it to Claude Code and enters a dialogue mode: splitting work into smaller tasks, maintaining constant checkpoints, and validating every suggestion. She reads all the code Claude generates (32,000 lines client-side, 8,000 server-side) but doesn't write code herself anymore. This isn't lazy—it's a new kind of discipline focused on design, architecture, and clear communication rather than syntax.

Reading Code vs. Writing Code: A New Skill Set

"AI is able to write really good code, if you just know how to read it... But I do not write any code. I haven't written a single line of code in a long time."

Elina's approach reveals an important insight: the skill shifts from writing code to reading and validating it. She treats Claude Code as a highly skilled companion that she needs to communicate with extremely well. This requires knowing "what good looks like"—her 15 years of experience as a designer gives her the judgment to evaluate what the AI produces. She maintains dialogue throughout development, using checkpoints to verify direction and clarify requirements. The fast feedback loop means when she fails to explain something clearly, she gets immediate feedback and can course-correct instantly. This is fundamentally different from traditional development where miscommunication might not surface until weeks later.

The Anti-Pattern: Letting AI Run Rampant

"You need to be really specific about what you want to do, and how you want to do it, and treat the AI as this highly skilled companion that you need to be able to communicate with."

The biggest mistake Elina sees is treating AI like magic—giving vague instructions and expecting it to "just figure it out." This leads to chaos. Instead, developers need to be incredibly specific about requirements and approach, treating AI as a skilled partner who needs clear communication. The advantage is that the iteration loop is so fast that when you fail to explain something properly, you get feedback immediately and can clarify. This makes the learning curve steep but short. The key is understanding that AI amplifies your skills—if you don't know what good architecture looks like, AI won't magically create it for you.

Breaking the Gatekeeping: One Person, Ten Jobs

"I think that I can say that I am a walking example of what you can do, if you have the proper background, and you know what good looks like. You can do several things at a time. What used to require 10 people, at least, to build before."

Elina sees herself as living proof that the gatekeeping around software development is breaking down. Someone with the right background and judgment can now do what previously required a team of ten people. She's passionate about others experiencing this same freedom—the ability to execute their vision without compromise, to respond to user feedback overnight, to build production-quality software solo. This isn't about replacing developers; it's about expanding who can build software and what's possible for small teams. For Elina, working with a traditional team would actually slow her down now—she'd spend more time explaining her vision than the team would save through parallel work.

The Future: Intent-Based Software That Emerges and Disappears

"The software gets built in an instance... it's going to this intent-based mode when we actually don't even need apps or software as we know them."

Elina's vision for the future is radical: software that emerges when you need it and disappears when you don't. Instead of permanent apps, you'd have intent-based systems that generate solutions in the moment. This shifts software from a product you download and learn to a service that materializes around your needs. We're not there yet, but Elina sees the trajectory clearly. The speed at which she can now build and modify Lexie—overnight feature implementations, instant bug fixes, continuous evolution—hints at a future where software becomes fluid rather than fixed.

Getting Started: Just Do It

"I think that the best resource is just your own frustration with some existing tools... Just open whatever tool you're using, is it Claude or ChatGPT, and start interacting and discussing, getting into this mindset that you're exploring what you can do, and then just start doing."

When asked about resources, Elina's advice is refreshingly direct: don't look for tutorials, just start. Let your frustration with existing tools drive you. Open Claude or ChatGPT and start exploring, treating it as a dialogue partner. Start building something you actually need. The learning happens through doing, not through courses. Her own journey proves this—she went from experimenting with Cursor to shipping Lexie to the App Store in two weeks, not because she found the perfect tutorial, but because she just started building. The tools are good enough now that the biggest barrier isn't technical knowledge—it's having the courage to start and the judgment to evaluate what you're building.

About Elina Patjas

Elina is building Lexie, an AI-powered study tool for kids aged 10–15. Frustrated by ineffective "read for exams" routines and gamified edtech fluff, she designed Lexie to turn handwritten notes into adaptive quizzes that reveal knowledge gaps—private, ad-free, and subscription-based. Lexie is learning, simplified.

You can link with Elina Patjas on LinkedIn.

24 Nov 41min

Coaching Product Owners from Isolation to Collaboration | Sara Di Gregorio

Sara Di Gregorio: Coaching Product Owners from Isolation to Collaboration

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

The Great Product Owner: Using User Story Mapping to Break Down PO Isolation

"One of the key strengths is the ability to build a strong collaborative relationship with the Scrum team. We constantly exchange feedback, with the shared goal of improving both our collaborating and the way of working." - Sara Di Gregorio

Sara considers herself fortunate—she currently works with Product Owners who exemplify what great collaboration looks like. One of their key strengths is the ability to build strong collaborative relationships with the Scrum team. They don't wait for sprint reviews to exchange feedback; instead, they constantly communicate with the shared goal of improving both collaboration and ways of working. These Product Owners involve the team early, using techniques like user story mapping after analysis phases to create open discussions around upcoming topics and help the team understand potential dependencies. They make themselves truly available—they observe daily stand-ups not as passive attendees but as engaged contributors. If the team needs five minutes to discuss something afterward, the Product Owner is ready. They attend Scrum events with genuine interest in working with the team, not just fulfilling an attendance requirement. They encourage open dialogue, even participating in retrospectives to understand how the team is working and where they can improve collaboration.

What sets these Product Owners apart is their communication approach. They don't come in thinking they know everything or that they need to do everything alone. Their mindset is collaborative: "We're doing this together." They recognize that developers aren't just executors—they're users of the product, experts who can provide valuable perspectives. When Product Owners ask "Why do you want this?" and developers respond with "If we do it this way, we can be faster, and you can try your product sooner," that's when magic happens. Great Product Owners understand that strong communication skills and collaborative relationships create better products, better teams, and better outcomes for everyone involved.

Self-reflection Question: How are your Product Owners involving the team early in discovery and analysis, and are they building collaborative relationships or just attending required events?

The Bad Product Owner: The Isolated Expert Who Thinks Teams Just Execute

"Sometimes they feel very comfortable in their subject, so they assume they know everything, and the team has only to execute what they asked for." - Sara Di Gregorio

Sara has encountered Product Owners who embody the worst anti-pattern: they believe they don't need to interact with the development team because they're confident in their subject matter expertise. They assume they know everything, and the team's job is simply to execute what they ask for. These Product Owners work isolated from the development team, writing detailed user stories alone and skipping the interesting discussions with developers. They only involve the team when they think it's necessary, treating developers as order-takers rather than collaborators who could contribute valuable insights. The impact is significant—teams lose the opportunity to understand the "why" behind features, Product Owners miss perspectives that could improve the product, and collaboration becomes transactional instead of transformational.

Sara's approach to addressing this anti-pattern is patient but deliberate. She creates space for dialogue and provides training with the Product Owner to help them understand how important it is to collaborate and cooperate with the team. She shows them the impact of including the team from the beginning of feature study. One powerful technique she uses is user story mapping workshops, bringing both the team and Product Owner together. The Product Owner explains what they want to deliver from their point of view, but then something crucial happens: the team asks lots of questions to understand "Why do you want this?"—not just "I will do it." Through this exercise, Sara watched Product Owners have profound realizations. They understood they could change their mindset by talking with developers, who often are users of the product and can offer perspectives like "If we do it this way, we can be faster, and you can try your product sooner." The workshop helps teams understand the big picture of what the Product Owner is asking for while helping the Product Owner reflect on what they're actually asking. It transforms the relationship from isolation to collaboration, from directive to dialogue, from assumption to shared understanding.

In this segment, we refer to the User Story Mapping blog post by Jeff Patton.

Self-reflection Question: Are your Product Owners writing user stories in isolation, or are they involving the team in discovery to create shared understanding and better solutions?

About Sara Di Gregorio

Sara is a people-centered Scrum Master who champions trust, collaboration, and real value over rigid frameworks. With experience introducing Agile practices, she fosters empathy, inclusion, and clarity in every team. As an Advanced Scrum Master, she helps teams grow, perform, and deliver with enthusiasm and purpose.

You can link with Sara Di Gregorio on LinkedIn.

21 Nov 14min

How to Know Your Team Has Internalized Agile Values | Sara Di Gregorio

Sara Di Gregorio: How to Know Your Team Has Internalized Agile Values Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.   "Scrum isn't just a process to follow, it's a way of working." - Sara Di Gregorio   For Sara, success as a Scrum Master isn't measured by what the team delivers—it's measured by how they grow. She knows that if you facilitate team growth in communication and collaboration, delivery will naturally improve.  The indicators she watches for are subtle but powerful. When teams come to her with specific requests outside the regular schedule—"Can we have 30 minutes to talk and reflect mid-sprint?"—she knows something has shifted. When teams want to reflect outside the retrospective cycle, it means they've internalized the value of continuous improvement, not just going through the motions. She listens for the word "goal" during sprint planning.  When team members start their planning by talking about goals, she feels a surge of recognition: "Okay, for me, this is very, very, very important." Success shows up in unexpected places. One of her colleague's teams pushed back during a cross-team meeting, saying "We're going out of the timebox" and suggesting they move the discussion to a different time. That kind of proactive leadership and accountability signals maturity. It means the team isn't just attending Scrum events because they have to—they truly understand why each event matters and actively participate to make them valuable.  When Sara first met a team, they asked if she wanted to change things. She said no. What she focuses on is how people improve and understand the process better. For her, it starts with the people—when people change and understand the value, that's when real changes happen in the company. It's about helping people feel good and be guided well, because when they're working well, that's when transformation becomes possible. As Sara reminds us, Scrum isn't just a process to follow—it's a way of working that teams must embrace, understand, and make their own.   Self-reflection Question: Are your teams coming to you asking for reflection time outside scheduled events, and what does that tell you about how deeply they've internalized continuous improvement? Featured Retrospective Format for the Week: Unstructured Retrospective After facilitating many structured retrospectives, Sara started experimenting with an unstructured format that brought new energy to team reflection. Instead of using predefined frameworks, she brings white paper, sticky notes, and sharpies of different colors. She opens with a simple question: "Guys, what impacted you mostly during the last week? How do you feel today?" Sometimes she starts with data and metrics; other times, she begins with how the team is feeling.  The key is creating open space for conversation rather than forcing it into a predetermined structure. What Sara discovered is remarkable: "They are more engaged, more open, and more present in the conversation, maybe because it was something new." Instead of the same structured format every time, the unstructured approach breaks the routine and creates space for true reflections that bring out something deeper and more meaningful. It allows people to express what's genuinely going on for them, not just what fits into a predefined template.  
Sara doesn't abandon structured formats entirely—she alternates between structured and unstructured sessions to keep retrospectives fresh and engaging. For hybrid teams, she recommends scheduling unstructured retrospectives for days when the team is in the office together. The physical presence combined with the open format creates an environment where teams can be more vulnerable, more creative, and more honest about what's really happening. The unstructured retrospective isn't about chaos—it's about trusting the team to surface what matters most to them, with the Scrum Master providing light facilitation and space for authentic reflection.

About Sara Di Gregorio

Sara is a people-centered Scrum Master who champions trust, collaboration, and real value over rigid frameworks. With experience introducing Agile practices, she fosters empathy, inclusion, and clarity in every team. As an Advanced Scrum Master, she helps teams grow, perform, and deliver with enthusiasm and purpose.

You can link with Sara Di Gregorio on LinkedIn.

20 Nov 14min

Facilitating Deeper Retrospectives—When to Step In and When to Step Back | Sara Di Gregorio

Sara Di Gregorio: Facilitating Deeper Retrospectives—When to Step In and When to Step Back

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"When they start connecting and having an interesting discussion, I go to the corner, and I'm only trying to listen." - Sara Di Gregorio

Sara faces a challenge that many Scrum Masters encounter: teams that want to discuss too many topics during retrospectives without going deep on any of them. Her team had plenty to talk about, but conversations stayed surface-level, never reaching the insights that drive real improvement. Sara recognized that the aim of the retrospective isn't to talk about everything—it's to go deeper on the topics the team genuinely cares about. So she started coaching teams to select just three main topics they wanted to discuss, helping them understand why prioritization matters and making explicit which topics are most important.

Her real skill emerged in how she facilitated the discussions. When she saw communication starting to flow and team members becoming deeply connected to the topic, she moved to the corner and listened. She didn't abandon the team—she remained present, ready to help shy or quiet members speak up, and watched the clock to respect timeboxes. But she understood that when teams connect authentically, the Scrum Master's job is to create space, not fill it.

Sara learned to ask better questions too. Instead of repeatedly asking "Why? Why? Why?"—which can feel accusatory—she reformulated: "How did you approach it? What happens?" When teams started blaming other teams, she redirected: "What can we influence? What can we do from our side?" She used visual tools like white paper, sharpies, and sticky notes to help teams visualize their discussion steps and create structured moments for questions.

Sometimes, when teams discussed complex technical topics beyond her understanding, she empowered them: "You are the main expert of this topic. Please, when someone sees that we're going out of topic or getting too detailed, raise your hand and help me bring the communication back to what we've chosen to talk about."

This balance—knowing when to step in with structure and when to step back and listen—is what transforms retrospectives from checkbox events into genuine opportunities for team growth.

Self-reflection Question: In your facilitation, are you creating space for deep team connection, or are you inadvertently filling the space that teams need to discover insights for themselves?

About Sara Di Gregorio

Sara is a people-centered Scrum Master who champions trust, collaboration, and real value over rigid frameworks. With experience introducing Agile practices, she fosters empathy, inclusion, and clarity in every team. As an Advanced Scrum Master, she helps teams grow, perform, and deliver with enthusiasm and purpose.

You can link with Sara Di Gregorio on LinkedIn.

19 Nov 15min

Rebuilding Agile Team Connection in the Remote Work Era | Sara Di Gregorio

Sara Di Gregorio: Rebuilding Agile Team Connection in the Remote Work Era

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"The book helped me to shift from reacting to connecting, which completely changed the quality of conversation." - Sara Di Gregorio

When COVID forced Sara's team into full remote work, she noticed something troubling—the team was losing real connection. Replicating in-office meetings online simply didn't work. People attended meetings but weren't truly present. The spontaneous coffee-machine conversations that built relationships and surfaced important information had vanished.

So Sara started experimenting. She introduced 5-minute chit-chat sessions at the start of every meeting: "Guys, how are you today? What happened yesterday?" She created "coffee all together" moments—10-minute virtual breaks where the team could drink coffee or have aperitivos together, sometimes three times per week. She established weekly feedback sessions every Friday morning—30 minutes to recap the week and understand what could improve.

These weren't just social niceties; they were deliberate efforts to recreate the human connections that remote work had stripped away. Sara recognized that mechanized interactions—"here are the things I need you to do, let's talk next steps"—kill team dynamics. Teams need moments where they relate to each other as people, not just as functions. The experiments worked because they created space for genuine connection, allowing the team to maintain the trust and collaboration that makes effective teamwork possible, even when working remotely.

In this episode, we refer to Non-Violent Communication concepts and practices.

Self-reflection Question: How are you creating moments for your remote or hybrid team to connect as people, not just as colleagues executing tasks?

Featured Book of the Week: Nonviolent Communication by Marshall Rosenberg

Sara credits Nonviolent Communication by Marshall Rosenberg (translated into Italian as "Words are Windows, or They are Walls") as having had a deep impact on her career. The book explores how to listen without judging, how to ask the right questions, and how to observe people to understand their real needs. Above all, it teaches how to communicate in a way that builds connection rather than creating barriers.

For Sara, the book was remarkably practical—she didn't just read it, she experimented with its techniques afterward. She explains: "I think that without this mindset, it's easy to fall into reactive communication, trying to defend, justify, or give quick answers. But that often blocks real understanding." The book helped her shift from reacting to connecting, which completely changed the quality of her conversations.

As a Scrum Master working with people every day—facilitating meetings, mediating conflicts, supporting teams—the way we communicate determines whether we open dialogue or close it. Sara found that taking time to reflect instead of giving quick answers transformed her ability to help teams discover dependencies, improve dialogue, and address communication issues. For anyone in the Scrum Master role, this book provides essential skills for building the kind of connection that makes true collaboration possible.

In this segment, we also refer to the NVC episodes we have on the podcast. Check those out to learn more about Nonviolent Communication.

About Sara Di Gregorio

Sara is a people-centered Scrum Master who champions trust, collaboration, and real value over rigid frameworks. With experience introducing Agile practices, she fosters empathy, inclusion, and clarity in every team. As an Advanced Scrum Master, she helps teams grow, perform, and deliver with enthusiasm and purpose.

You can link with Sara Di Gregorio on LinkedIn.

18 Nov 17min
