Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase | Lou Franco

BONUS: Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase

In this fascinating conversation, veteran software engineer and author Lou Franco shares hard-won lessons from decades at startups, Trello, and Atlassian. We explore his book "Swimming in Tech Debt," diving deep into the 8 Questions framework for evaluating tech debt decisions, personal practices that compound over time, team-level strategies for systematic improvement, and leadership approaches that balance velocity with sustainability. Lou reveals why tech debt is often the result of success, how to navigate the spectrum between ignoring debt and rewriting too much, and practical techniques individuals, teams, and leaders can use starting today.

The Exit Interview That Changed Everything

"We didn't go slower by paying tech debt. We went actually faster, because we were constantly in that code, and now we didn't have to run into problems." — Lou Franco

Lou's understanding of tech debt crystallized during an exit interview at Atalasoft, a small startup where he'd spent years. An engineer leaving the company confronted him: "You guys don't care about tech debt." Lou had been focused on shipping features, believing that paying tech debt would slow them down. But this engineer told a different story — when they finally fixed their terrible build and installation system, they actually sped up. They were constantly touching that code, and removing the friction made everything easier. This moment revealed a fundamental truth: tech debt isn't just about code quality or engineering pride. It's about velocity, momentum, and the ability to move fast sustainably. Lou carried this lesson through his career at Trello (where he learned the dangers of rewriting too much) and Atlassian (where he saw enterprise-scale tech debt management). These experiences became the foundation for "Swimming in Tech Debt."

Tech Debt Is the Result of Success

"Tech debt is often the result of success. Unsuccessful projects don't have tech debt." — Lou Franco

This reframes the entire conversation about tech debt. Failed products don't accumulate debt — they disappear before it matters. Tech debt emerges when your code survives long enough to outlive its original assumptions, when your user base grows beyond initial expectations, when your team scales faster than your architecture anticipated. At Atalasoft, they built for 10 users and got 100. At Trello, mobile usage exploded beyond their web-first assumptions. Success creates tech debt by changing the context in which code operates. This means tech debt conversations should happen at different intensities depending on where you are in the product lifecycle. Early startups pursuing product-market fit should minimize tech debt investments — move fast, learn, potentially throw away the code. Growth-stage companies need balanced approaches. Mature products benefit significantly from tech debt investments because operational efficiency compounds over years. Understanding this lifecycle perspective helps teams make appropriate decisions rather than applying one-size-fits-all rules.

The 8 Questions Framework for Tech Debt Decisions

"Those 8 questions guide you to what you should do. If it's risky, has regressions, and you don't even know if it's gonna work, this is when you're gonna do a project spike." — Lou Franco

Lou introduces a systematic framework for evaluating whether to pay tech debt, inspired by Bob Moesta's push-pull forces from product management. The 8 questions create a complete picture:

  1. Visibility — Will people outside the team understand what we're doing?

  2. Alignment — Does this match our engineering values and target architecture?

  3. Resistance — How hard is this code to work with right now?

  4. Volatility — How often do we touch this code?

  5. Regression Risk — What's the chance we'll introduce new problems?

  6. Project Size — How big is this to fix?

  7. Estimate Risk — How uncertain are we about the effort required?

  8. Outcome Uncertainty — How confident are we the fix will actually improve things?

High volatility and high resistance with low regression risk? Pay the debt now. High regression risk with no tests? Write tests first, then reassess. Uncertain outcomes on a big project? Do a spike or proof of concept. The framework prevents both extremes — ignoring costly debt and undertaking risky rewrites without proper preparation.
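These heuristics are simple enough to capture in a lightweight scoring aid. The sketch below is illustrative only: the field names, 1-to-5 scales, and thresholds are assumptions of this write-up, not a rubric from the book.

```python
from dataclasses import dataclass

@dataclass
class DebtAssessment:
    """Scores for one piece of debt, on assumed 1-5 scales."""
    volatility: int           # how often we touch this code
    resistance: int           # how hard it is to work with
    regression_risk: int      # chance of introducing new problems
    project_size: int         # how big the fix is
    outcome_uncertainty: int  # 5 = no idea whether the fix will help
    has_tests: bool

def recommend(a: DebtAssessment) -> str:
    """Encodes the decision heuristics described above."""
    if a.regression_risk >= 4 and not a.has_tests:
        return "Write tests first, then reassess"
    if a.project_size >= 4 and a.outcome_uncertainty >= 4:
        return "Do a spike or proof of concept before committing"
    if a.volatility >= 4 and a.resistance >= 4 and a.regression_risk <= 2:
        return "Pay the debt now"
    return "Record it on the tech debt backlog and revisit quarterly"

# Hot, painful code that is safe to change -> pay it down now.
print(recommend(DebtAssessment(5, 4, 1, 2, 2, has_tests=True)))
```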

Personal Practices That Compound Daily

"When I sit down at my desk, the first thing I do is I pay a little tech debt. I'm looking at code, I'm about to change it, do I even understand it? Am I having some kind of resistance to it? Put in a little helpful comment, maybe a little refactoring." — Lou Franco

Lou shares personal habits that create compounding improvements over time. Start each coding session by paying a small amount of tech debt in the area you're about to work in — add a clarifying comment, extract a confusing variable, improve a function name. This warms you up, reduces friction for your actual work, and leaves the code slightly better than you found it. The clean-as-you-go philosophy means tech debt never accumulates faster than you can manage it. But Lou's most powerful practice comes at the end of each session: mutation testing by hand. Before finishing for the day, deliberately break something — change a plus to a minus, a less-than to a less-than-or-equal. See if the tests catch it. Often they don't, which reveals a coverage gap to close. When a test does fail, don't fix the break immediately: leave the red test as the bridge to tomorrow's coding session. It connects today's momentum to tomorrow's work, ensuring you always start with context and purpose rather than cold-starting each day.

Mutation Testing: Breaking Things on Purpose

"Before I'm done working on a coding session, I break something on purpose. I'll change a plus to a minus, a less than to a less than equals, and see if tests break. A lot of times tests don't break. Now you've found a problem in your test." — Lou Franco

Manual mutation testing — deliberately breaking code to verify tests catch the break — reveals a critical gap in most test suites. You can have 100% code coverage and still have untested behavior. A line of code that's executed during tests isn't necessarily tested — the test might not actually verify what that line does. By changing operators, flipping booleans, or altering constants, you discover whether your tests protect against actual logic errors or just exercise code paths. Lou recommends doing this manually as part of your daily practice, but automated tools exist for systematic discovery: Stryker (for JavaScript, C#, Scala) and MutMut (for Python) can mutate your entire codebase and report which mutations survive uncaught. This isn't just about test quality — it's about understanding what your code actually does and building confidence that changes won't introduce subtle bugs.
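Here is a minimal illustration of that coverage-versus-verification gap (hypothetical function and test names, not code from the episode). The first test executes every line of discount yet lets a hand-made boundary mutation survive; the second would catch it. mutmut automates the same loop for Python: mutmut run applies mutations across the codebase and reports the survivors.

```python
# shipping.py (hypothetical) -- "covered" is not the same as "verified"
def discount(total: float) -> float:
    if total >= 100:          # hand mutation to try: change >= to >
        return total * 0.5
    return total

# test_shipping.py (hypothetical)
def test_discount_runs():
    # Executes every line (100% coverage) but asserts almost nothing,
    # so the ">= to >" mutation at total == 100 survives uncaught.
    assert discount(150) > 0

def test_discount_boundary():
    # Verifies the boundary behavior; this test fails if it mutates.
    assert discount(100) == 50.0
    assert discount(99) == 99.0
```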

Team-Level Practices: Budgets, Backlogs, and Target Architecture

"Create a target architecture document — where would we be if we started over today? Every PR is an opportunity to move slightly toward that target." — Lou Franco

At the team level, Lou advocates for three interconnected practices. First, create a target architecture document that describes where you'd be if starting fresh today — not a detailed design, but architectural patterns, technology choices, and structural principles that represent current best practices. This isn't a rewrite plan; it's a North Star. Every pull request becomes an opportunity to move incrementally toward that target when touching relevant code. Second, establish a budget split between PM-led feature work and engineering-led tech debt work — perhaps 80/20 or whatever ratio fits your product lifecycle stage. This creates predictable capacity for tech debt without requiring constant negotiation. Third, hold quarterly tech debt backlog meetings separate from sprint planning. Treat this backlog like PMs treat product discovery — explore options, estimate impacts, prioritize based on the 8 Questions framework. Some items fit in sprints; others require dedicated engineers for a quarter or two. This systematic approach prevents tech debt from being perpetually deprioritized while avoiding the opposite extreme of engineers disappearing into six-month "improvement" projects with no visible progress.

The Atlassian Five-Alarm Fire

"The Atlassian CTO's 'five-alarm fire' — stopping all feature development to focus on reliability. I reduced sync errors by 75% during that initiative." — Lou Franco

Lou shares a powerful example of leadership-driven tech debt management at scale. The Atlassian CTO called a "five-alarm fire" — halting all feature development across the company to focus exclusively on reliability and tech debt. This wasn't panic; it was strategic recognition that accumulated debt threatened the business. Lou worked on reducing sync errors, achieving a 75% reduction during this focused period. The initiative demonstrated several leadership principles: willingness to make hard calls that stop revenue-generating feature work, clear communication of why reliability matters strategically, trust that teams will use the time wisely, and commitment to see it through despite pressure to resume features. This level of intervention is rare and shouldn't be frequent, but it shows what's possible when leadership truly prioritizes tech debt. More commonly, leaders should express product lifecycle constraints (startup urgency vs. mature product stability), give teams autonomy to find appropriate projects within those constraints, and require accountability through visible metrics and dashboards that show progress.

The Rewrite Trap: Why Big Rewrites Usually Fail

"A system that took 10 years to write has implicit knowledge that can't be replicated in 6 months. I'm mostly gonna advocate for piecemeal migrations along the way, reducing the size of the problem over time." — Lou Franco

Lou lived through Trello's iOS navigation rewrite — a classic example of throwing away working code to start fresh, only to discover all the edge cases, implicit behaviors, and user expectations baked into the "old" system. A codebase that evolved over several years contains implicit knowledge — user workflows, edge case handling, performance optimizations, and subtle behaviors that users rely on even if they never explicitly requested them. Attempting to rewrite this in six months inevitably misses critical details. Lou strongly advocates for piecemeal migrations instead. The Trello "Decaffeinate Project" exemplifies this approach — migrating from CoffeeScript to TypeScript incrementally, with public dashboards showing the percentage remaining, interoperable technologies allowing gradual transition, and the ability to pause or reverse if needed. Keep both systems running in parallel during migrations. Use runtime observability to verify new code behaves identically to old code. Reduce the problem size steadily over months rather than attempting big-bang replacements. The only exception: sometimes keeping parallel systems requires scaffolding that creates its own complexity, so evaluate whether piecemeal migration is actually simpler or if you're better off living with the current system.
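One common way to get that runtime verification is a parallel run in the spirit of GitHub's Scientist library: the old code stays authoritative while the new code is exercised on a sample of real calls and mismatches are logged. A minimal sketch, with hypothetical function names:

```python
import logging
import random

log = logging.getLogger("migration")

def parallel_run(old_impl, new_impl, sample_rate: float = 0.1):
    """Wrap two implementations: always serve the old one, and on a
    sample of calls compare it against the new one, logging mismatches.
    Error handling and async execution are omitted for brevity."""
    def wrapper(*args, **kwargs):
        result = old_impl(*args, **kwargs)  # old code stays authoritative
        if random.random() < sample_rate:
            try:
                candidate = new_impl(*args, **kwargs)
                if candidate != result:
                    log.warning("mismatch: args=%r old=%r new=%r",
                                args, result, candidate)
            except Exception:
                log.exception("new implementation raised")
        return result
    return wrapper

# Hypothetical usage during a migration:
legacy_price = lambda cart: sum(cart)     # stand-in for the old path
rewritten_price = lambda cart: sum(cart)  # stand-in for the new path
price = parallel_run(legacy_price, rewritten_price, sample_rate=1.0)
print(price([10, 20, 30]))                # 60, with any mismatch logged
```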

Making Tech Debt Visible Through Dashboards

"Put up a dashboard, showing it happen. Make invisible internal improvements visible through metrics engineering leadership understands." — Lou Franco

One of tech debt's biggest challenges is invisibility — non-technical stakeholders can't see the improvement from refactoring or test coverage. Lou learned to make tech debt work visible through dashboards and metrics. The Decaffeinate Project tracked percentage of CoffeeScript files remaining, providing a clear progress indicator anyone could understand. When reducing sync errors, Lou created dashboards showing error rates declining over time. These visualizations serve multiple purposes: they demonstrate value to leadership, create accountability for engineering teams, build momentum as progress becomes visible, and help teams celebrate wins that would otherwise go unnoticed. The key is choosing metrics that matter to the business — error rates, page load times, deployment frequency, mean time to recovery — rather than pure code quality metrics like cyclomatic complexity that don't translate outside engineering. Connect tech debt work to customer experience, reliability, or developer productivity in ways leadership can see and value.
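A Decaffeinate-style progress number is cheap to produce. The sketch below assumes the migration boundary is visible in file extensions, as it was for CoffeeScript to TypeScript; emit one data point per CI run and chart the series:

```python
from pathlib import Path

def migration_progress(root: str) -> float:
    """Percent of files already migrated, assuming old and new code
    are distinguishable by extension (.coffee vs .ts here)."""
    remaining = len(list(Path(root).rglob("*.coffee")))
    migrated = len(list(Path(root).rglob("*.ts")))
    total = remaining + migrated
    return 100.0 * migrated / total if total else 100.0

# Print one data point per CI run; the dashboard plots the trend.
print(f"{migration_progress('src'):.1f}% migrated")
```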

Onboarding as a Tech Debt Opportunity

"Unit testing is a really great way to learn a system. It's like an executable specification that's helping you prove that you understand the system." — Lou Franco

Lou identifies onboarding as an underutilized opportunity for tech debt reduction. When new engineers join, they need to learn the codebase. Rather than just reading code or shadowing, Lou suggests having them write unit tests in areas they're learning. This serves dual purposes: tests are executable specifications that prove understanding of system behavior, and they create safety nets in areas that likely lack coverage (otherwise, why would new engineers be confused by the code?). The new engineer gets hands-on learning, the team gets better test coverage, and everyone wins. This practice also surfaces confusing code — if new engineers struggle to understand what to test, that's a signal the code needs clarifying comments, better naming, or refactoring. Make onboarding a systematic tech debt reduction opportunity rather than passive knowledge transfer.
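In practice this often takes the form of a characterization test: the newcomer pins down what the code does today and lets the team confirm or correct the expectation. A tiny sketch, with a hypothetical billing function standing in for the confusing module:

```python
# billing.py (hypothetical module the new engineer is learning)
def prorate(days_used: int, cycle_days: int, price: float) -> float:
    return round(price * days_used / cycle_days, 2)

# test_billing.py -- written during onboarding as an executable
# specification of the behavior the newcomer believes they observed.
def test_prorate_mid_cycle():
    # If this result surprises anyone on the team, the code needs a
    # clarifying comment, a better name, or a fix.
    assert prorate(days_used=15, cycle_days=30, price=10.0) == 5.0
```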

Leadership's Role: Constraints, Autonomy, and Accountability

"Leadership needs to express the constraints. Tell the team what you're feeling about tech debt at a high level, and what you think generally is the appropriate amount of time to be spent on it. Then give them autonomy." — Lou Franco

Lou distills leadership's role in tech debt management to three elements. First, express constraints — communicate where you believe the product is in its lifecycle (early startup, rapid growth, mature cash cow) and what that means for tech debt tolerance. Are we pursuing product-market fit where code might be thrown away? Are we scaling a proven product where reliability matters? Are we maintaining a stable system where operational efficiency pays dividends? These constraints help teams make appropriate trade-offs. Second, give autonomy — once constraints are clear, trust teams to identify specific tech debt projects that fit those constraints. Engineers understand the codebase's pain points better than leaders do. Third, require accountability — teams must make their work visible through dashboards, metrics, and regular updates. Autonomy without accountability becomes invisible engineering projects that might not deliver value. Accountability without autonomy becomes micromanagement that wastes engineering judgment. The balance creates space for teams to make smart decisions while keeping leadership informed and confident in the investment.

AI and the Future of Tech Debt

"I really do AI-assisted software engineering. And by that, I mean I 100% review every single line of that code. I write the tests, and all the code is as I would have written it, it's just a lot faster. Developers are still responsible for it. Read the code." — Lou Franco

Lou has a chapter about AI in his book, addressing the elephant in the room: will AI-generated code create massive tech debt? His answer is nuanced. AI can accelerate development tremendously if used correctly — Lou uses it extensively but reviews every single line, writes all tests himself, and ensures the code matches what he would have written manually. The problem emerges with "vibe coders" — non-developers using AI to generate code they don't understand, creating unmaintainable messes that become someone else's problem. Developers remain responsible for all code, regardless of how it's generated. This means you must read and understand AI-generated code, not blindly accept it. Lou also raises supply chain security concerns — dependencies can contain malicious code, and AI might introduce vulnerabilities developers miss. His recommendation: stay six months behind on dependency updates, let others discover the problems first, and consider separate sandboxed development machines to limit security exposure. AI is a powerful tool, but it doesn't eliminate the need for engineering judgment, testing discipline, or code review practices.
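The "stay six months behind" rule can even be scripted. The sketch below uses PyPI's public JSON API to find the newest release older than a cutoff; the selection policy is a simplification (it ignores pre-releases and yanked files), so treat it as an assumption-laden starting point:

```python
import json
import urllib.request
from datetime import datetime, timedelta

def latest_mature_version(package: str, min_age_days: int = 180):
    """Newest PyPI release of `package` that is at least min_age_days
    old -- a rough encoding of the 'six months behind' heuristic."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as response:
        releases = json.load(response)["releases"]
    cutoff = datetime.utcnow() - timedelta(days=min_age_days)
    mature = {}
    for version, files in releases.items():
        if files:  # some versions have no uploaded files
            uploaded = min(datetime.fromisoformat(f["upload_time"])
                           for f in files)
            if uploaded <= cutoff:
                mature[version] = uploaded
    # Pick the most recently uploaded version that has aged enough.
    return max(mature, key=mature.get) if mature else None

print(latest_mature_version("requests"))
```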

The Style Guide Beyond Formatting

"Have a style guide that goes beyond formatting to include target architecture. This is the kind of code we want to write going forward." — Lou Franco

Lou advocates for style guides that extend beyond tabs-versus-spaces formatting rules to include architectural guidance. Document patterns you want to move toward: how should components be structured, what state management approaches do we prefer, how should we handle errors, what testing patterns should we follow? This creates a shared understanding of the target architecture without requiring a massive design document. When reviewing pull requests, teams can reference the style guide to explain why certain approaches align with where the codebase is headed versus perpetuating old patterns. This makes tech debt conversations less personal and more objective — it's not about criticizing someone's code, it's about aligning with team standards and strategic direction. The style guide becomes a living document that evolves as the team learns and technology changes, capturing collective wisdom about what good code looks like in your specific context.
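Teams sometimes back the architectural half of such a guide with an automated fitness check, so PR review can point at a failing test instead of a person. A minimal sketch, assuming a hypothetical layered layout in which myapp/views must not import myapp.persistence:

```python
import ast
from pathlib import Path

FORBIDDEN_PREFIX = "myapp.persistence"  # assumed layering rule

def imported_modules(source: str):
    """Yield dotted module names imported by a Python source file."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module

def test_views_do_not_import_persistence():
    # Encodes one style-guide rule: views go through the service layer.
    for path in Path("myapp/views").rglob("*.py"):
        for module in imported_modules(path.read_text()):
            assert not module.startswith(FORBIDDEN_PREFIX), (
                f"{path} imports {module}; see the target-architecture "
                "section of the style guide")
```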

Recommended Resources

Some of the resources mentioned in this episode include Bob Moesta's push-pull forces framework, the mutation-testing tools Stryker (JavaScript, C#, Scala) and mutmut (Python), and Trello's Decaffeinate Project.

About Lou Franco

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups as well as at Trello and Atlassian, he's seen both sides of tech debt, as coder and as leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum.

You can link with Lou Franco on LinkedIn and learn more at LouFranco.com.

Episodes (200)

AI Assisted Coding: Transactional AI Development - Commit, Validate, and Rollback With Sergey Sergyenko

AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration

In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes.

Vibecoding: An Automation Design Instrument

"I would define Vibecoding as an automation design instrument. It's not a tool that can deliver end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do."

Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation.

Pair Programming with the Machine

"If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step."

One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance. This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails.

The Transactional Model: Atomic Commits and Rollback

"When you're working with AI, the more atomic commits it delivers, more easy for you to kind of guide and navigate it through the process of development."

Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then roll back and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration.
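The loop is straightforward to sketch. The snippet below is a hypothetical Python wrapper around it, with a stubbed agent hook and pytest assumed as the validation gate; each iteration either lands as one atomic commit that records its prompt, or is rolled back at minimal cost:

```python
import subprocess

def sh(*cmd: str) -> bool:
    """Run a command; True if it exited 0."""
    return subprocess.run(cmd).returncode == 0

def generate_with_agent(prompt_file: str) -> None:
    """Hypothetical hook: ask your coding agent to apply the ONE
    commit-sized change described in prompt_file."""
    ...

def transactional_step(prompt_file: str) -> bool:
    """One transaction: generate, validate, then commit or roll back."""
    generate_with_agent(prompt_file)
    if sh("pytest", "-q"):  # validation gate before anything lands
        message = (f"ai-assisted change\n\nPrompt: {prompt_file}\n"
                   "Rollback: git revert this commit")
        return sh("git", "commit", "-am", message)
    sh("git", "checkout", "--", ".")  # discard the failed attempt
    return False                      # (tracked files only; a sketch)
```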
Context Management: The Weak Point and the Solution

"Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent."

Context management challenges current AI coding tools—they forget, lose the thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself. Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows.

TDD 2.0: Humans Write Tests, AI Writes Code

"I would never allow AI to write the test. I would do it by myself. Still, it can write the code."

Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (the subject of a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation.

The Two Cardinal Rules: Architecture and Verification

"I would never allow AI to invent architecture. Writing AI agentic coding, Vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake."

Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding. Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework becomes the guardrails that prevent AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation.

Prototype vs. Production: Two Different Workflows

"When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, and rapid prototype."

Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI.

The Future: AI Literacy as Mandatory

"Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer."

Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency. Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration.

Resources for Practitioners

"We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful."

Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real time.

About Sergey Sergyenko

Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams. Their tech stack includes Ruby, Rails, Elixir, and ReactJS.

Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk available in your membership area. If you are not a member, don't worry: you can get the 1-month trial and watch the whole conference. You can cancel at any time.

You can link with Sergey Sergyenko on LinkedIn.

27 Nov 41min

AI Assisted Coding: Augmented AI Development - Software Engineering First, AI Second With Dawid Dahl

BONUS: Augmented AI Development - Software Engineering First, AI Second

In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt.

The AAID Philosophy: Don't Abandon Your Brain

"Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps."

Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step. In this segment we refer to Marcus Hammarberg's work and his book The Bungsu Story.

Software Engineering First, AI Second: A Hill to Die On

"You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on."

What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward. Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that.

Why TDD is Non-Negotiable with AI

"Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change."

Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes 2-3 mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare.

The Problem with "Prompt and Pray" Autonomous Agents

"When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours. This system will not work."

Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops.

Prerequisites: Product Discovery Must Come First

"Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done."

AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts.

What's Wrong with Other AI Frameworks

"When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea."

Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later (a backwards approach). They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered, process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily.

Getting Started: Learn Fundamentals First

"The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine."

Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster. But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey.

About Dawid Dahl

Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency.

You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.

26 Nov 45min

AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding | Lou Franco

AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding

In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality.

Vibecoding vs. AI-Assisted Coding: Reading Code Matters

"I read all the code that it outputs, so I need smaller steps of changes."

Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later when changes become difficult.

The Moment of Shift: Staying in the Zone

"It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time."

Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state. The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking—it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design.

Thinking in Commits: The Right Size for AI Work

"I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt."

Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly—if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI—small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

The Tech Debt Question: Same Code, Same Debt

"Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... I'm faster and can make more code, but I invest some of that savings back into cleaning things up."

As the author of "Swimming in Tech Debt," Lou brings a unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything. When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices, regardless of how code is generated.

When Vibecoding Creates Debt: AI Resistance as a Symptom

"When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes."

Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices, including code review, even when AI makes generation fast.

Can AI Fix Tech Debt? Yes, With Guidance

"You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want."

Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists.

Reinvesting Productivity Gains in Quality

"I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity."

Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some of that acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial. If AI makes you 30% faster at implementation, using 10% of that gain on refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder.

The 100x Code Concern: Accountability Remains Human

"Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it."

When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions.

Practical Workflow: Inline Prompting and Small Changes

"Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, and full codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control.

Resources and Community

Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources.

About Lou Franco

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups as well as at Trello and Atlassian, he's seen both sides of tech debt, as coder and as leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum.

You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.

25 Nov 37min

AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI With Elina Patjas

AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI

In this special episode, Elina Patjas shares her remarkable journey from designer to solo developer, building LexieLearn—an AI-powered study tool with 1,500+ users and paying customers—entirely through AI-assisted coding. She reveals the practical workflow, anti-patterns to avoid, and why the future of software might not need permanent apps at all.

The Two-Week Transformation: From Idea to App Store

"I did that, and I launched it to App Store, and I was like, okay, so… If I can do THIS! So, what else can I do? And this all happened within 2 weeks."

Elina's transformation happened fast. As a designer frustrated with traditional software development where maybe 10% of your original vision gets executed, she discovered Cursor and everything changed. Within two weeks, she went from her first AI-assisted experiment to launching a complete app in the App Store. The moment that shifted everything was realizing that AI had fundamentally changed the paradigm from "writing code" to "building the product." This wasn't about learning to code—it was about finally being able to execute her vision 100% the way she wanted it, with immediate feedback through testing.

Building LexieLearn: Solving Real Problems for Real Users

"I got this request from a girl who was studying, and she said she would really appreciate to be able to iterate the study set... and I thought: "That's a brilliant idea! And I can execute that!" And the next morning, it was 9.15, I sent her a screen capture."

Lexie emerged from Elina's frustration with ineffective study routines and gamified edtech that didn't actually help kids learn. She built an AI-powered study tool for kids aged 10-15 that turns handwritten notes into adaptive quizzes revealing knowledge gaps—private, ad-free, and subscription-based. What makes Lexie remarkable isn't just the technology, but the speed of iteration. When a user requested a feature, Elina designed and implemented it overnight, sending a screen capture by 9:15 AM the next morning. This kind of responsiveness—from customer feedback to working feature in hours—represents a fundamental shift in how software can be built. Today, Lexie has over 1,500 users with paying customers, proving that AI-assisted development isn't just for prototypes anymore.

The Workflow: It's Not Just "Vibing"

"I spend 30 minutes designing the whole workflow inside my head... all the UX interactions, the data flow, and the overall architectural decisions... so I spent a lot of time writing a really, really good spec. And then I gave that to Claude Code."

Elina has mixed feelings about the term "vibecoding" because it suggests carelessness. Her actual workflow is highly disciplined. She spends significant time designing the complete workflow mentally—all UX interactions, data flow, and architectural decisions—then writes detailed specifications. She often collaborates with Claude to write these specs, treating the AI as a thinking partner. Once the spec is clear, she gives it to Claude Code and enters a dialogue mode: splitting work into smaller tasks, maintaining constant checkpoints, and validating every suggestion. She reads all the code Claude generates (32,000 lines client-side, 8,000 server-side) but doesn't write code herself anymore. This isn't lazy—it's a new kind of discipline focused on design, architecture, and clear communication rather than syntax.

Reading Code vs. Writing Code: A New Skill Set

"AI is able to write really good code, if you just know how to read it... But I do not write any code. I haven't written a single line of code in a long time."

Elina's approach reveals an important insight: the skill shifts from writing code to reading and validating it. She treats Claude Code as a highly skilled companion that she needs to communicate with extremely well. This requires knowing "what good looks like"—her 15 years of experience as a designer gives her the judgment to evaluate what the AI produces. She maintains dialogue throughout development, using checkpoints to verify direction and clarify requirements. The fast feedback loop means when she fails to explain something clearly, she gets immediate feedback and can course-correct instantly. This is fundamentally different from traditional development, where miscommunication might not surface until weeks later.

The Anti-Pattern: Letting AI Run Rampant

"You need to be really specific about what you want to do, and how you want to do it, and treat the AI as this highly skilled companion that you need to be able with."

The biggest mistake Elina sees is treating AI like magic—giving vague instructions and expecting it to "just figure it out." This leads to chaos. Instead, developers need to be incredibly specific about requirements and approach, treating AI as a skilled partner who needs clear communication. The advantage is that the iteration loop is so fast that when you fail to explain something properly, you get feedback immediately and can clarify. This makes the learning curve steep but short. The key is understanding that AI amplifies your skills—if you don't know what good architecture looks like, AI won't magically create it for you.

Breaking the Gatekeeping: One Person, Ten Jobs

"I think that I can say that I am a walking example of what you can do, if you have the proper background, and you know what good looks like. You can do several things at a time. What used to require 10 people, at least, to build before."

Elina sees herself as living proof that the gatekeeping around software development is breaking down. Someone with the right background and judgment can now do what previously required a team of ten people. She's passionate about others experiencing this same freedom—the ability to execute their vision without compromise, to respond to user feedback overnight, to build production-quality software solo. This isn't about replacing developers; it's about expanding who can build software and what's possible for small teams. For Elina, working with a traditional team would actually slow her down now—she'd spend more time explaining her vision than the team would save through parallel work.

The Future: Intent-Based Software That Emerges and Disappears

"The software gets built in an instance... it's going to this intent-based mode when we actually don't even need apps or software as we know them."

Elina's vision for the future is radical: software that emerges when you need it and disappears when you don't. Instead of permanent apps, you'd have intent-based systems that generate solutions in the moment. This shifts software from a product you download and learn to a service that materializes around your needs. We're not there yet, but Elina sees the trajectory clearly. The speed at which she can now build and modify Lexie—overnight feature implementations, instant bug fixes, continuous evolution—hints at a future where software becomes fluid rather than fixed.

Getting Started: Just Do It

"I think that the best resource is just your own frustration with some existing tools... Just open whatever tool you're using, is it Claude or ChatGPT and start interacting and discussing, getting into this mindset that you're exploring what you can do, and then just start doing."

When asked about resources, Elina's advice is refreshingly direct: don't look for tutorials, just start. Let your frustration with existing tools drive you. Open Claude or ChatGPT and start exploring, treating it as a dialogue partner. Start building something you actually need. The learning happens through doing, not through courses. Her own journey proves this—she went from experimenting with Cursor to shipping Lexie to the App Store in two weeks, not because she found the perfect tutorial, but because she just started building. The tools are good enough now that the biggest barrier isn't technical knowledge—it's having the courage to start and the judgment to evaluate what you're building.

About Elina Patjas

Elina is building Lexie, an AI-powered study tool for kids aged 10–15. Frustrated by ineffective "read for exams" routines and gamified edtech fluff, she designed Lexie to turn handwritten notes into adaptive quizzes that reveal knowledge gaps—private, ad-free, and subscription-based. Lexie is learning, simplified.

You can link with Elina Patjas on LinkedIn.

24 Nov 41min

Coaching Product Owners from Isolation to Collaboration | Sara Di Gregorio

Sara Di Gregorio: Coaching Product Owners from Isolation to Collaboration

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

The Great Product Owner: Using User Story Mapping to Break Down PO Isolation

"One of the key strengths is the ability to build a strong collaborative relationship with the Scrum team. We constantly exchange feedback, with the shared goal of improving both our collaborating and the way of working." - Sara Di Gregorio

Sara considers herself fortunate—she currently works with Product Owners who exemplify what great collaboration looks like. One of their key strengths is the ability to build strong collaborative relationships with the Scrum team. They don't wait for sprint reviews to exchange feedback; instead, they constantly communicate with the shared goal of improving both collaboration and ways of working. These Product Owners involve the team early, using techniques like user story mapping after analysis phases to create open discussions around upcoming topics and help the team understand potential dependencies. They make themselves truly available—they observe daily stand-ups not as passive attendees but as engaged contributors. If the team needs five minutes to discuss something afterward, the Product Owner is ready. They attend Scrum events with genuine interest in working with the team, not just fulfilling an attendance requirement. They encourage open dialogue, even participating in retrospectives to understand how the team is working and where they can improve collaboration.

What sets these Product Owners apart is their communication approach. They don't come in thinking they know everything or that they need to do everything alone. Their mindset is collaborative: "We're doing this together." They recognize that developers aren't just executors—they're users of the product, experts who can provide valuable perspectives. When Product Owners ask "Why do you want this?" and developers respond with "If we do it this way, we can be faster, and you can try your product sooner," that's when magic happens. Great Product Owners understand that strong communication skills and collaborative relationships create better products, better teams, and better outcomes for everyone involved.

Self-reflection Question: How are your Product Owners involving the team early in discovery and analysis, and are they building collaborative relationships or just attending required events?

The Bad Product Owner: The Isolated Expert Who Thinks Teams Just Execute

"Sometimes they feel very comfortable in their subject, so they assume they know everything, and the team has only to execute what they asked for." - Sara Di Gregorio

Sara has encountered Product Owners who embody the worst anti-pattern: they believe they don't need to interact with the development team because they're confident in their subject matter expertise. They assume they know everything, and the team's job is simply to execute what they ask for. These Product Owners work isolated from the development team, writing detailed user stories alone and skipping the interesting discussions with developers. They only involve the team when they think it's necessary, treating developers as order-takers rather than collaborators who could contribute valuable insights. The impact is significant—teams lose the opportunity to understand the "why" behind features, Product Owners miss perspectives that could improve the product, and collaboration becomes transactional instead of transformational.

Sara's approach to addressing this anti-pattern is patient but deliberate. She creates space for dialogue and provides training with the Product Owner to help them understand how important it is to collaborate and cooperate with the team. She shows them the impact of including the team from the beginning of feature study. One powerful technique she uses is user story mapping workshops, bringing both the team and Product Owner together. The Product Owner explains what they want to deliver from their point of view, but then something crucial happens: the team asks lots of questions to understand "Why do you want this?"—not just "I will do it." Through this exercise, Sara watched Product Owners have profound realizations. They understood they could change their mindset by talking with developers, who often are users of the product and can offer perspectives like "If we do it this way, we can be faster, and you can try your product sooner." The workshop helps teams understand the big picture of what the Product Owner is asking for while helping the Product Owner reflect on what they're actually asking. It transforms the relationship from isolation to collaboration, from directive to dialogue, from assumption to shared understanding.

In this segment, we refer to the User Story Mapping blog post by Jeff Patton.

Self-reflection Question: Are your Product Owners writing user stories in isolation, or are they involving the team in discovery to create shared understanding and better solutions?

[The Scrum Master Toolbox Podcast Recommends]

🔥 In the ruthless world of fintech, success isn't just about innovation—it's about coaching! 🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

Buy Now on Amazon

About Sara Di Gregorio

Sara is a people-centered Scrum Master who champions trust, collaboration, and real value over rigid frameworks. With experience introducing Agile practices, she fosters empathy, inclusion, and clarity in every team. As an Advanced Scrum Master, she helps teams grow, perform, and deliver with enthusiasm and purpose.

You can link with Sara Di Gregorio on LinkedIn.

21 Nov 14min

How to Know Your Team Has Internalized Agile Values | Sara Di Gregorio

Sara Di Gregorio: How to Know Your Team Has Internalized Agile Values

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"Scrum isn't just a process to follow, it's a way of working." - Sara Di Gregorio

For Sara, success as a Scrum Master isn't measured by what the team delivers—it's measured by how the team grows. She knows that if you facilitate team growth in communication and collaboration, delivery will naturally improve. The indicators she watches for are subtle but powerful. When teams come to her with specific requests outside the regular schedule—"Can we have 30 minutes to talk and reflect mid-sprint?"—she knows something has shifted. When teams want to reflect outside the retrospective cycle, it means they've internalized the value of continuous improvement instead of just going through the motions.

She also listens for the word "goal" during sprint planning. When team members start their planning by talking about goals, she feels a surge of recognition: "Okay, for me, this is very, very, very important." Success shows up in unexpected places, too. One of her colleague's teams pushed back during a cross-team meeting, saying "We're going out of the timebox" and suggesting they move the discussion to another time. That kind of proactive leadership and accountability signals maturity: the team isn't just attending Scrum events because they have to—they truly understand why each event matters and actively participate to make it valuable.

When Sara first met one of her teams, they asked if she wanted to change things. She said no. What she focuses on is how people improve and understand the process better. For her, it starts with the people—when people change and understand the value, that's when real change happens in the company. It's about helping people feel good and be guided well, because when they're working well, that's when transformation becomes possible. As Sara reminds us, Scrum isn't just a process to follow—it's a way of working that teams must embrace, understand, and make their own.

Self-reflection Question: Are your teams coming to you asking for reflection time outside scheduled events, and what does that tell you about how deeply they've internalized continuous improvement?

Featured Retrospective Format for the Week: The Unstructured Retrospective

After facilitating many structured retrospectives, Sara started experimenting with an unstructured format that brought new energy to team reflection. Instead of using predefined frameworks, she brings white paper, sticky notes, and sharpies of different colors. She opens with a simple question: "Guys, what impacted you most during the last week? How do you feel today?" Sometimes she starts with data and metrics; other times, she begins with how the team is feeling. The key is creating open space for conversation rather than forcing it into a predetermined structure.

What Sara discovered is remarkable: "They are more engaged, more open, and more present in the conversation, maybe because it was something new." Instead of repeating the same structured format every time, the unstructured approach breaks the routine and creates space for true reflection—something deeper and more meaningful. It allows people to express what's genuinely going on for them, not just what fits into a predefined template.

Sara doesn't abandon structured formats entirely—she alternates between structured and unstructured sessions to keep retrospectives fresh and engaging. For hybrid teams, she also recommends scheduling unstructured retrospectives on days when the team is in the office together. Physical presence combined with the open format creates an environment where teams can be more vulnerable, more creative, and more honest about what's really happening. The unstructured retrospective isn't about chaos—it's about trusting the team to surface what matters most to them, with the Scrum Master providing light facilitation and space for authentic reflection.

20 Nov 14min

Facilitating Deeper Retrospectives—When to Step In and When to Step Back | Sara Di Gregorio

Sara Di Gregorio: Facilitating Deeper Retrospectives—When to Step In and When to Step Back

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"When they start connecting and having an interesting discussion, I go to the corner, and I'm only trying to listen." - Sara Di Gregorio

Sara faces a challenge that many Scrum Masters encounter: teams that want to discuss too many topics during retrospectives without going deep on any of them. Her team had plenty to talk about, but conversations stayed surface-level, never reaching the insights that drive real improvement. Sara recognized that the aim of the retrospective isn't to talk about everything—it's to go deeper on the topics the team genuinely cares about. So she started coaching teams to select just three main topics to discuss, helping them understand why prioritization matters and making explicit which topics are most important.

Her real skill emerged in how she facilitated the discussions. When she saw communication starting to flow and team members becoming deeply connected to the topic, she moved to the corner and listened. She didn't abandon the team—she remained present, ready to help shy or quiet members speak up, and watched the clock to respect timeboxes. She understood that when teams connect authentically, the Scrum Master's job is to create space, not fill it.

Sara also learned to ask better questions. Instead of repeatedly asking "Why? Why? Why?"—which can feel accusatory—she reformulated: "How did you approach it? What happened?" When teams started blaming other teams, she redirected: "What can we influence? What can we do from our side?" She used visual tools like white paper, sharpies, and sticky notes to help teams visualize their discussion steps and create structured moments for questions. Sometimes, when teams discussed complex technical topics beyond her understanding, she empowered them: "You are the main experts on this topic. Please, when someone sees that we're going off topic or getting too detailed, raise your hand and help me bring the conversation back to what we've chosen to talk about."

This balance—knowing when to step in with structure and when to step back and listen—is what transforms retrospectives from checkbox events into genuine opportunities for team growth.

Self-reflection Question: In your facilitation, are you creating space for deep team connection, or are you inadvertently filling the space that teams need to discover insights for themselves?

19 Nov 15min

Rebuilding Agile Team Connection in the Remote Work Era | Sara Di Gregorio

Sara Di Gregorio: Rebuilding Agile Team Connection in the Remote Work Era

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"The book helped me to shift from reacting to connecting, which completely changed the quality of conversation." - Sara Di Gregorio

When COVID forced Sara's team into full remote work, she noticed something troubling—the team was losing real connection. Replicating in-office meetings online simply didn't work. People attended meetings but weren't truly present, and the spontaneous coffee-machine conversations that built relationships and surfaced important information had vanished.

So Sara started experimenting. She introduced 5-minute chit-chat sessions at the start of every meeting: "Guys, how are you today? What happened yesterday?" She created "coffee all together" moments—10-minute virtual breaks, sometimes three times per week, where the team could drink coffee or have aperitivos together. She established weekly feedback sessions every Friday morning—30 minutes to recap the week and understand what could improve. These weren't just social niceties; they were deliberate efforts to recreate the human connections that remote work had stripped away.

Sara recognized that mechanized interactions—"here are the things I need you to do, let's talk next steps"—kill team dynamics. Teams need moments where they relate to each other as people, not just as functions. The experiments worked because they created space for genuine connection, allowing the team to maintain the trust and collaboration that make effective teamwork possible, even when working remotely.

In this episode, we refer to Nonviolent Communication concepts and practices.

Self-reflection Question: How are you creating moments for your remote or hybrid team to connect as people, not just as colleagues executing tasks?

Featured Book of the Week: Nonviolent Communication by Marshall Rosenberg

Sara credits Nonviolent Communication by Marshall Rosenberg (translated into Italian under a title that renders as "Words are Windows, or They are Walls") with having a deep impact on her career. The book explores how to listen without judging, how to ask the right questions, and how to observe people to understand their real needs. Above all, it teaches how to communicate in a way that builds connection rather than creating barriers.

For Sara, the book was remarkably practical—she didn't just read it, she experimented with the techniques afterward. She explains: "I think that without this mindset, it's easy to fall into reactive communication, trying to defend, justify, or give quick answers. But that often blocks real understanding." The book helped her shift from reacting to connecting, which completely changed the quality of her conversations.

As a Scrum Master working with people every day—facilitating meetings, mediating conflicts, supporting teams—the way we communicate determines whether we open dialogue or close it. Sara found that taking time to reflect instead of giving quick answers transformed her ability to help teams discover dependencies, improve dialogue, and address communication issues. For anyone in the Scrum Master role, this book provides essential skills for building the kind of connection that makes true collaboration possible.

In this segment, we also refer to the NVC episodes we have on the podcast. Check those out to learn more about Nonviolent Communication.

18 Nov 17min
