EU AI Act Enters Critical Phase, Reshaping Global AI Governance

Today isn't just another day in the European regulatory calendar—it's a seismic moment for artificial intelligence. As of August 2, 2025, the European Union AI Act enters its second phase, triggering a host of new obligations for anyone building, adapting, or selling general-purpose AI—otherwise known as GPAI—within the Union's formidable market. Listeners, this isn't just policy theater. It's the world's most ambitious leap toward governing the future of code, cognition, and commerce.

Let's dispense with hand-waving and go straight to brass tacks. The GPAI model providers—those luminaries engineering large language models like GPT-4 and Gemini—are now staring down a battery of obligations. Think transparency filings, copyright vetting, and systemic risk management—because, as the Commission's newly minted Guidelines declare, models capable of serious downstream impact demand serious oversight. For the uninitiated, the Commission presumes "systemic risk" from raw computational horsepower: if your training run blows past 10^25 floating-point operations, you're in the regulatory big leagues. Accordingly, companies have to assess and mitigate everything from algorithmic bias to misuse scenarios, all the while logging serious incidents and safeguarding their infrastructure like digital Fort Knox.
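That 10^25-FLOP line is easier to reason about with a back-of-the-envelope calculation. The sketch below uses the common "roughly 6 FLOPs per parameter per training token" heuristic for dense transformer training; the heuristic and the example model sizes are illustrative assumptions on my part, not anything the Act itself prescribes.

```python
# Back-of-the-envelope check against the EU AI Act's 10^25 FLOP
# systemic-risk presumption. The 6*N*D estimate (6 FLOPs per parameter
# per token) is a widely used heuristic, not part of the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs/param/token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training run crosses the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens stays under
# the threshold (~6.3e24 FLOPs)...
print(presumed_systemic_risk(70e9, 15e12))   # False
# ...while a hypothetical 400B-parameter model on 15T tokens crosses it
# (~3.6e25 FLOPs).
print(presumed_systemic_risk(400e9, 15e12))  # True
```

Under this heuristic, the regulatory line falls somewhere between today's mid-size and frontier-scale training runs, which is exactly why only a handful of providers are expected to land in the "systemic risk" bucket.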

A highlight this week: the AI Office’s Code of Practice for General-Purpose AI is newly finalized. While voluntary, the code offers what Brussels bureaucrats call “presumption of conformity.” Translation: follow the code, and you’re presumed compliant—legal ambiguity evaporates, administrative headaches abate. The three chapters—transparency, copyright, and safety/security—outline everything from pre-market data disclosures to post-market monitoring. Sound dry? It’s actually the closest thing the sector has to an international AI safety playbook. Yet, compliance isn’t a paint-by-numbers affair. Meta just made headlines for refusing to sign the Code of Practice. Why? Because real compliance means real scrutiny, and not every developer wants to upend R&D pipelines for Brussels’ blessing.

But beyond corporate politicking, penalties loom large: authorities can now levy fines for non-compliance. Enforcement powers will get sharper still come August 2026, when provisions for systemic-risk models grow more muscular. The intent is unmistakable: prevent unmonitored models from rewriting reality—or, worse, democratising the tools for cyberattacks and automated disinformation.

The world is watching, from Washington to Shenzhen. Will the EU’s governance-by-risk-category approach become a global template, or just a bureaucratic sandpit? Either way, today’s phase change is a wake-up call: Europe plans to pilot the ethics and safety of the world’s most powerful algorithms—and in doing so, it’s reshaping the very substrate of the information age.

Thanks for tuning in. Remember to subscribe for more quiet, incisive analysis. This has been a quiet please production, for more check out quiet please dot ai.

Some great Deals https://amzn.to/49SJ3Qs

For more check out http://www.quietplease.ai

Episodes (198)

Buckle Up for Europe's AI Regulatory Roadmap: No Detours Allowed

Welcome to the fast lane of European AI regulation—no seat belts required, unless you count the dozens of legal provisions about to reshape the way we build and deploy artificial intelligence. As I'm recording this, just days away from the August 2, 2025, enforcement milestone, there's a distinctly charged air. The EU AI Act, years in the making, isn't being delayed—not for Airbus, not for ASML, not even after a who's-who of industry leaders sent panicked open letters to Ursula von der Leyen and the European Commission, pleading for a pause. The Commission's answer? A polite but ironclad "no." The regulatory Ragnarok is happening as scheduled.

Let's cut straight to the core: the EU AI Act is the world's first comprehensive legal framework governing the use of artificial intelligence. Its risk-based model isn't just a talking point—the Act has already made certain uses illegal, from biometric categorization based on sensitive data to emotion recognition in the workplace, and of course, manipulative systems that influence behavior unnoticed. Those rules have been in effect since February.

Now, as of this August, new obligations kick in for providers of general-purpose AI models—think foundational models like GPT-style large language models, image generators, and more. The General-Purpose AI Code of Practice, published July 10, lays out the voluntary gold standard for compliance. There's a carrot here: less paperwork and more legal certainty for organizations that sign on. Voluntary, yes—but ignore it at your peril, given the risk of crushing fines of up to 35 million euros or 7% of global turnover.

The Commission has been busy clarifying thresholds, responsibility-sharing between upstream and downstream actors, and handling those labyrinthine integration and modification scenarios. The logic is simple: modify a model with significant new compute power? Congratulations, you inherit all compliance responsibility. And if your model is open-source, you're only exempt if no money changes hands and the model isn't a systemic risk. No free passes for the most potent systems, open-source or not.

To smooth the rollout, the AI Office and the European Artificial Intelligence Board have spun out guidelines, FAQs, and the newly opened AI Service Desk for support. France's Mistral, Germany's Federal Network Agency, and hundreds of stakeholders across academia, business, and civil society have their fingerprints on the rules. But be prepared: initial confusion is inevitable. Early enforcement will be "graduated," with guidance and consultation—until August 2027, when the Act's teeth come out for all, including high-risk systems.

What does it mean for you? Increased trust and more visible transparency—chatbots have to disclose they're bots, deepfakes need obvious labels, and every high-risk system comes under the microscope. Europe is betting that by dictating terms to the world's biggest AI players, it will shape what's next. Like it or not, the future of AI is being drawn up in Brussels—and compliance is mandatory, not optional.

Thanks for tuning in. Don't forget to subscribe. This has been a quiet please production, for more check out quiet please dot ai.
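The headline penalty quoted throughout these episodes is "up to 35 million euros or 7% of global turnover", which in practice means whichever of the two is higher. A minimal sketch of that arithmetic, assuming the figures as quoted here; actual penalties depend on the infringement category and are set by enforcement authorities:

```python
# Sketch of the "greater of a fixed cap or a turnover share" fine formula
# quoted in the episode (EUR 35M or 7% of worldwide annual turnover).
# The defaults mirror the quoted figures; real fines vary by infringement tier.

def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Upper bound of the fine: whichever cap is higher."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# For a small provider with EUR 100M turnover, the fixed cap dominates:
print(f"{max_fine_eur(100e6):,.0f}")   # 35,000,000
# For a hyperscaler with EUR 200B turnover, the 7% share dominates:
print(f"{max_fine_eur(200e9):,.0f}")   # 14,000,000,000
```

The "greater of" construction is why the same rule bites very differently depending on company size: below roughly EUR 500 million in turnover, the fixed cap is the binding number; above it, the percentage takes over.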

21 July 3min

"EU AI Act Becomes Reality: No More Delays, Hefty Fines Await Unprepared Businesses"

Let's just call it: the EU AI Act is about to become reality—no more discussions, no more delays, no more last-minute reprieves. The European Commission has dug its heels in. Despite this month's frantic lobbying, from the likes of Airbus and ASML to Mistral, asking for a two-year pause, the Commission simply said, "Our legal deadlines are established. The rules are already in force." The first regulations have been binding since February, and the heavy hitters—transparency, documentation, and technical standards for general-purpose AI—hit on August 2, 2025. If your AI touches the European market and you're not ready, the fines alone might make your CFO reconsider machine learning as a career path—think €35 million or 7% of your global turnover.

Zoom in on what's actually changing and why some tech leaders are sweating. The EU AI Act is the world's first sweeping legal framework for artificial intelligence—a risk-based regime doing for AI what GDPR did for privacy. Certain AI is now outright banned: biometric categorization based on sensitive data, emotion recognition in your workplace Zoom calls, manipulative systems changing your behavior behind the scenes, and, yes, the dreaded social scoring. If you're building general-purpose AI—think large language models, multimodal models—your headaches start on August 2. You'll need to document your training data, lay out your model development and evaluation, publish summaries, and keep transparency reports up to date. Copyrighted material in your training set? Document it, prove you had the rights, or face the consequences. Even confidential data must be protected under new, harmonized technical standards the Commission is quietly making the gold standard.

This week's news is all about guidelines and the GPAI Code of Practice, finalized on July 10 and made public in detail just yesterday. The Commission wants providers to get on board with this voluntary code: comply and, supposedly, you'll have a reduced administrative burden and more legal certainty. Ignore it, and you might find yourself tangled in legal ambiguity or at the sharp end of enforcement from the likes of Germany's Bundesnetzagentur or, if you're Danish, the Agency for Digital Government. Denmark, ever the overachiever, enacted its national AI oversight law early—on May 8—setting the pace for everyone else.

If you remember the GDPR scramble, this déjà vu is justified. Every EU member state must designate its own national AI authorities by August 2. The European Artificial Intelligence Board is set to coordinate these efforts, making sure no one plays fast and loose with the AI rules. Businesses complain about complexity; regulators remain unmoved. And while the new guidelines offer some operational clarity, don't expect a gentle phase-in like GDPR's. The Act positions the EU as the de facto global regulator—again. Non-EU companies using AI in Europe? Welcome to the jurisdictional party.

Thanks for tuning in. Don't forget to subscribe. This has been a quiet please production, for more check out quiet please dot ai.

19 July 3min

Denmark Leads EU's AI Regulation Revolution: Enforcing Landmark AI Act Months Ahead of Deadline

Imagine waking up in Copenhagen this week, where Denmark just cemented its reputation as a tech regulation trailblazer, becoming the first EU country to fully implement the EU Artificial Intelligence Act—months ahead of the August 2, 2025, mandatory deadline. Industry insiders from Brussels to Berlin are on edge, their calendars marked by the looming approach of enforcement. The clock, quite literally, is ticking.

Unlike the United States' scattershot, state-level approach, the EU AI Act is structured, systematic, and—let's not mince words—ambitious. This is the world's first unified legal framework governing artificial intelligence. The Act's phased rollout means that today, in July 2025, we are in the eye of the regulatory storm. Since February, particularly risky AI practices, such as biometric categorization targeting sensitive characteristics and emotion recognition in workplaces, have been banned outright. Builders and users of AI across Europe are scrambling to ramp up what the EU calls "AI literacy." If your team can't explain the risks and logic of the systems they deploy, you might be facing more than just a stern memo—a €35 million fine or 7% of global turnover can land quickly and without mercy.

August 2025 is the next inflection point. From then, any provider or deployer of general-purpose AI—think OpenAI, Google, Microsoft—must comply with stringent documentation, transparency, and data-provenance obligations. The European Commission's just-published General-Purpose AI Code of Practice, after months of wrangling with nearly 1,000 stakeholders, offers a voluntary but incentivized roadmap. Adherence means a lighter administrative load and regulatory tranquility—stray, and the burden multiplies. But let's be clear: the Code does not guarantee legal safety; it simply clarifies the maze.

What most AI companies are quietly asking themselves: will this European model reverberate globally? The Act's architecture, in many ways reminiscent of the GDPR playbook, is already nudging discussion in Washington, New Delhi, and Beijing. And make no mistake, the EU's choice of a risk-based approach—categorizing systems from minimal to "unacceptable" risk—means the law can evolve alongside technological leaps.

There's plenty of jockeying behind the scenes. German authorities are prepping regulatory sandboxes; IBM is running compliance campaigns, while Meta and Amazon haven't yet committed to the new code. But in this moment, the message is discipline, transparency, and relentless readiness. You can feel the regulatory pressure in every boardroom and dev sprint. The EU is betting that by constraining the wild, it can foster innovation that's not just profitable, but trustworthy.

Thank you for tuning in—don't miss the next update, and be sure to subscribe. This has been a quiet please production, for more check out quiet please dot ai.

17 July 3min

Europe's AI Reckoning: The EU's Groundbreaking Regulation Shakes Up the Tech Landscape

Imagine waking up in Brussels this morning and realizing that, as of a few weeks ago, the European Union's AI Act is no longer just the stuff of policy briefings and think tank debates—it's a living, breathing regulation that's about to transform tech across the continent and beyond. Effective since February, the EU AI Act is carving out a new global reality for artificial intelligence, and the clock is ticking—August 2, 2025, is D-day for the new transparency rules targeting general-purpose AI models. That means if you're building, selling, or even adapting models like GPT-4, DALL-E, or Google's Gemini for the EU market, you're now on the hook for some of the world's most comprehensive and contentious AI requirements.

Let's get specific. The law is already imposing AI literacy obligations across the board: whether you're a provider, a deployer, or an importer, you need your staff to have a real grasp of how AI works. No more black-box mystique or "it's just an algorithm" hand-waving. By August, anyone providing a general-purpose AI model will have to publish detailed summaries of their training data—like a nutrition label for algorithms. And we're not talking about vague assurances. The EU is demanding documentation "sufficiently detailed" to let users, journalists, and regulators trace the DNA of what these models have been fed. Think less "trust us," more "show your work—or risk a €15 million fine or 3% of worldwide annual turnover." These are GDPR-level risks, and the comparison isn't lost on anyone in tech.

But let's not pretend it's frictionless. In the past week alone, Airbus, Siemens Energy, Lufthansa, ASML, and a who's-who of European giants fired off an open letter begging the European Commission for a two-year delay. They argue the rules bring regulatory overload, threaten competitiveness, and, with key implementation standards still being thrashed out, are almost impossible to obey. The Commission has so far said no—August 2 is still the target date—but Executive Vice President Henna Virkkunen has left a crack in the door, hinting at "targeted delays" if essential standards aren't ready.

This tension is everywhere. The voluntary Code of Practice released July 10 is a preview of the coming world: transparency, stricter copyright compliance, and systemic risk management. Companies like OpenAI and Google are reviewing the text; Meta and Amazon are holding their cards close. There's a tug-of-war between innovation and caution, global ambition and regulatory rigor.

Europe wants to be the AI continent—ambitious, trusted, safe. Yet writing rules for technology that evolves faster than the legislation is a near-impossible engineering problem. The real test starts now: will the AI Act make Europe the model for AI governance, or slow it down while others—looking at you, Silicon Valley and Shanghai—race ahead? The debate is no longer theoretical, and as deadlines close in, the world is watching.

Thanks for tuning in, and don't forget to subscribe. This has been a quiet please production, for more check out quiet please dot ai.

14 July 3min

Navigating the Labyrinth of the EU AI Act: A Race Against Compliance and Innovation

Today is July 12th, 2025, and if you thought European bureaucracy moved slowly, let me introduce you to the EU AI Act — which is somehow both glacial and frighteningly brisk at the same time. Since February, this sweeping new law has been the talk of Brussels, Berlin, Paris, and, frankly, anyone in tech who gets heart palpitations at the mere mention of "compliance matrix." The AI Act is now the world's most ambitious legal effort to bring artificial intelligence systems to heel — classifying them by risk, imposing bans on "unacceptable" uses, and, for the AI providers and deployers among us, requiring that staff be schooled in what the Act coyly calls "AI literacy."

But let's not pretend this is all running like automated clockwork. As of this week, the European Commission just published the final Code of Practice for general-purpose AI, or GPAI — those foundational models like GPT-4, DALL-E, and their ilk. Now, OpenAI, Google, and other industry titans are frantically revisiting their compliance playbooks, because the fines for getting things wrong can reach up to 7% of global turnover. That'll buy a lot of GPU clusters, or a lot of legal fees. The code is technically "voluntary," but signing on means less red tape and more legal clarity — which, in the EU, is like being handed the cheat codes for Tetris.

Transparency is the new battle cry. The Commission's Henna Virkkunen described the Code as a watershed for "tech sovereignty." AI companies that sign up will need to share not only what their models can do, but also what they were trained on — think of it as a nutrition label for algorithms. Paolo Lazzarino from the law firm ADVANT Nctm says this helps us judge what's coming out of the AI kitchen, data by data.

Yet not everyone is popping champagne. More than forty-five heavyweights of European industry — Airbus, Siemens Energy, even Mistral — have called for a two-year "stop the clock" on the most burdensome rules, arguing the AI Act's moving goalposts and regulatory overload could choke EU innovation right when the US and China are speeding ahead. Commission spokesperson Thomas Regnier, unwavering, stated: "No stop the clock, no pause." And don't expect any flexibility from Brussels in trade talks with Washington or under the Digital Markets Act: as far as the EU is concerned, these rules are about European values, not bargaining chips.

Here's where it gets interesting for the would-be compliance artist: the real winners in this grand experiment might be the very largest AI labs, who can afford an armada of lawyers and ethicists, while smaller players are left guessing at requirements — or quietly shifting operations elsewhere.

So, will the Act make Europe the world's beacon for ethical AI, or just a museum for lost startups? The next few months will tell. Thanks for tuning in, listeners. Don't forget to subscribe for more — this has been a quiet please production, for more check out quiet please dot ai.

12 July 3min

EU's AI Act Rewrites the Global AI Rulebook

Welcome to the era where artificial intelligence isn't just changing our world, but being reshaped by law at an unprecedented pace. Yes, I'm talking about the European Union's Artificial Intelligence Act—the so-called AI Act—which is rapidly transitioning from legislative text to concrete reality. The Act officially entered into force last August, and the compliance countdown is all but inescapable, especially with that pivotal August 2, 2025 deadline looming for general-purpose AI models.

Let's get right to it: the EU AI Act is the world's first comprehensive regulatory framework for AI, designed to set the rules of play not only for European companies, but for any organization worldwide that wants access to the EU's massive market. Forget the GDPR—this is next-level, shaping the global conversation about AI accountability, safety, and risk.

Here's how it works. The AI Act paints AI risk with a bold palette: unacceptable, high, limited, and minimal risk. Unacceptable risk? Banned outright. High risk? Think biometric surveillance, AI in critical infrastructure, employment—those undergo strict compliance and transparency measures. Meanwhile, your run-of-the-mill chatbots? Minimal or limited risk, with lighter obligations. And then there's the beast: general-purpose AI models, like those powering the latest generative marvels. These are subject to special transparency and evaluation rules, with slightly fewer hoops for open-source models.

Now, if you're hearing a faint whirring sound, that's the steady hum of tech CEOs furiously lobbying Brussels. Just last week, leaders from companies like ASML, Meta, Mistral, and even Carrefour threw their weight behind an open letter—46 European CEOs asking the Commission to hit pause on the AI Act. Their argument? The guidelines aren't finalized, the compliance landscape is murky, and Europe risks throttling its own innovation before it can compete with the US and China. They call their campaign #stoptheclock.

But the EU Commission's Thomas Regnier shot that down on Friday—no stop the clock, no grace period, and absolutely no pause. The timeline is the timeline: August 2025 for general-purpose models, August 2026 for high-risk models, and phased requirements in between. And for the record, this is no empty threat—the Act creates national notifying bodies, demands conformity assessments, and empowers a European Artificial Intelligence Board to keep Member States in line.

What's more, as of February, every provider and deployer of AI in the EU must ensure their staff have a "sufficient level of AI literacy." That's not just a suggestion; it's law. The upshot? Organizations are scrambling to develop robust training programs and compliance protocols, even as the final Code of Practice for general-purpose AI models is still delayed, thanks in part to lobbying from Google, Meta, and others.

Will this new regulatory order truly balance innovation and safety? Or will Europe's bold move become a cautionary tale of AI overregulation? Only time will tell, but one thing is certain: the next year is make or break for every AI provider with European ambitions.

Thanks for tuning in, and don't forget to subscribe. This has been a quiet please production, for more check out quiet please dot ai.
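The four-tier risk palette described above boils down to a simple lookup from tier to obligation level. A minimal sketch: the tier names come from the Act, but the one-line obligation summaries are my paraphrases of this episode's description, not legal text.

```python
# Hedged sketch of the AI Act's risk taxonomy as summarized in the episode.
# Tier names follow the Act; the obligation strings are informal paraphrases.

RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring)",
    "high": "strict conformity assessment and transparency (e.g. biometrics, hiring)",
    "limited": "light transparency duties (e.g. chatbots must disclose they are bots)",
    "minimal": "no specific obligations",
}

def obligations_for(tier: str) -> str:
    """Look up the obligation summary for a risk tier, case-insensitively."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}") from None

print(obligations_for("Unacceptable"))  # banned outright (e.g. social scoring)
```

Note that general-purpose AI models sit outside this table: they get their own transparency and evaluation track, with an extra systemic-risk layer on top for the largest models.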

10 July 3min

Europe's AI Reckoning: Racing to Comply with High-Stakes Regulations

Europe's AI summer may feel more like a nervous sprint than a picnic right now, especially for those of us living at the intersection of code, capital, and compliance. The EU's Artificial Intelligence Act is no longer a looming regulation—it's a fast-moving train, and as of today, July 7th, 2025, there are no signs of it slowing down. That's despite a deluge of complaints, lobbying blitzes, and even a CEO-endorsed hashtag campaign aimed at hitting pause. ASML, Mistral, Alphabet, Meta, and a crowd of nearly 50 other tech heavyweights signed an open letter in the last week, warning the European Commission that the deadline is not just ambitious, it's borderline reckless, risking Europe's edge in the global AI arms race.

Thomas Regnier, the Commission's spokesperson, essentially dropped the regulatory mic last Friday: "Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause." No amount of LinkedIn drama or industry angst could budge the schedule. By August 2025, general-purpose AI models—think everything from smart chatbots to foundational LLMs—must comply. Come August 2026, high-risk AI applications like biometric surveillance and automated hiring tools are up next. European policymakers seem adamant about legal certainty, hoping that a crystal-clear timeline will attract long-term investment and prevent another "GDPR scramble."

But listening to industry leaders like Ulf Kristersson, the Swedish Prime Minister, and organizations such as CCIA Europe, you'd think the AI Act is a bureaucratic maze designed in a vacuum. The complaint isn't just about complexity. It's about survival for smaller firms, who are now openly considering relocating AI projects to the US or elsewhere to dodge regulatory quicksand. Compared to the EU's risk-tiered, legally binding approach, the US is sticking to voluntary, sector-by-sector frameworks, while China is going all-in on state-mandated AI dominance.

Still, there are flickers of pragmatism from Brussels. The Commission is flirting with a Digital Simplification Omnibus—yes, that is the real name—and promising an AI Act Service Desk to hand-hold companies through the paperwork labyrinth. There's even a delayed but still-anticipated Code of Practice, now expected at year's end, intended to demystify compliance for developers and enterprise leaders alike.

Yet beneath this regulatory bravado, a question lingers—will Europe's ethical ambition be its competitive undoing? As the world watches, it's not just the substance of the AI Act that matters, but whether Europe can balance principle with the breakneck pace of global innovation.

Thanks for tuning in to this breakdown of Europe's regulatory moment. Don't forget to subscribe for more. This has been a quiet please production, for more check out quiet please dot ai.

7 July 2min

The EU AI Act: Transforming the Tech Landscape

Today, the European Union's Artificial Intelligence Act isn't just regulatory theory; it's a living framework, already exerting tangible influence over the tech landscape. If you've been following Brussels headlines—or your company's compliance officer's worried emails—you know that since February 2, 2025, the first phase of the EU AI Act has been in effect. That means any artificial intelligence system classified as posing "unacceptable risk" is banned across all EU member states. We're talking about systems that do things like social scoring or deploy manipulative biometric categorization. And it's not a soft ban, either: violations can trigger penalties as staggering as €35 million or 7% of global turnover. The stakes are real.

Let's talk implications, because this isn't just about a few outlier tools. From Berlin to Barcelona, every organization leveraging AI in the EU market must now ensure not only that their products and processes are compliant, but that their people are, too. There's a new legal duty for AI literacy—staff must actually understand how these systems work, their risks, and the ethical landmines they could set off. This isn't a box-ticking exercise. If your workforce doesn't get it, your entire compliance posture is at risk.

Looking ahead, the grip will only tighten. By August 2, 2025, obligations hit general-purpose AI providers—think big language models, foundational AIs powering everything from search engines to drug discovery. Those teams will have to produce exhaustive documentation about their models, detail the data used for training, and publish summaries respecting EU copyright law. If a model carries "systemic risk"—meaning reasonably foreseeable harm to fundamental rights—developers must actively monitor, assess, and mitigate those effects, reporting serious incidents and demonstrating robust cybersecurity.

And don't think this is a one-size-fits-all regime. The EU AI Act is layered: high-risk AI systems, like those controlling critical infrastructure or evaluating creditworthiness, have their own timelines and escalating requirements, fully coming into force by August 2027. Meanwhile, the EU is building the institutional scaffolding: national authorities, an AI Office, and a European Artificial Intelligence Board are coming online to monitor, advise, and enforce.

The recent AI Continent Action Plan released by the European Commission is galvanizing the region's AI capabilities—think massive new computing infrastructure, high-quality data initiatives, and a centralized AI Act Service Desk to help navigate the compliance labyrinth.

So, what's the real impact? European innovation isn't grinding to a halt—it's being forced to evolve. Companies that embed transparency, risk management, and ethical rigor into their AI are discovering that trust can be a competitive advantage. But for those who see regulation as an afterthought, the next few years are going to be rocky.

Thanks for tuning in. Don't forget to subscribe. This has been a quiet please production, for more check out quiet please dot ai.

5 July 3min
