"Shaping the AI Future: Mondaq's Public Consultation on the AI Act Implementation"

In a significant development, the European Union is actively engaging in a broad public consultation on implementation strategies for the Artificial Intelligence Act (AI Act), following the act's formal adoption by the Council of the European Union on May 21, 2024. This legislative milestone is pivotal for Europe's digital and technological landscape, as the act is intended to regulate the development and application of artificial intelligence (AI) within the region.

The AI Act represents a comprehensive framework devised to ensure that the deployment of AI technologies across the EU respects fundamental rights, while fostering an environment of trust and security for both citizens and businesses. The phased implementation process signifies a carefully calibrated approach by the EU, aiming to gradually integrate these regulatory measures without hindering the dynamic growth of the AI sector.

The EU has long positioned itself as a global frontrunner in digital rights and privacy, with instruments like the General Data Protection Regulation (GDPR) setting international standards. The AI Act is poised to build on this legacy, addressing the unique challenges and possibilities presented by AI technologies. Among the key objectives of the AI Act are promoting human oversight, ensuring transparency in AI functionalities, and safeguarding against biases, thereby mitigating risks associated with automated decision-making systems.

Given the broad implications of the AI Act, the ongoing public consultation is a critical element of the legislative process. It offers stakeholders, including tech companies, civil society organizations, AI developers, and the general public, a platform to express their views, concerns, and aspirations regarding the act's implementation. This inclusive approach not only enriches the legislative procedure with diverse perspectives but also aims to build a consensus on how Europe navigates the complex terrain of AI governance.

One of the distinguishing features of the AI Act is its risk-based classification system, which categorizes AI applications according to their potential impact on society and individuals. High-risk applications, encompassing areas like employment, education, law enforcement, and critical infrastructure, will be subject to stringent compliance requirements. This includes mandatory risk assessments, enhanced data governance, and transparency obligations, ensuring that such technologies are deployed responsibly.

As Europe embarks on this ambitious legislative journey, the global conversation around AI regulation is set to intensify. The EU's approach, characterized by its emphasis on fundamental rights and robust risk management, could serve as a blueprint for other jurisdictions grappling with similar regulatory challenges. However, the success of the AI Act will largely depend on the effective engagement of all stakeholders during the consultation phase and beyond, underscoring the importance of collaborative efforts in shaping the future of AI governance.

As the public consultation unfolds, the world watches keenly. The outcomes of this process will not only influence the trajectory of AI development in Europe but could also contribute to establishing international norms for the responsible use of one of the 21st century's most transformative technologies.

Episodes (199)

EU AI Act's Deadline Looms: A Tectonic Shift for AI in Europe

Blink and the EU AI Act’s next compliance deadline is on your doorstep—August 2, 2025, isn’t just a date, it’s a tectonic shift for anyone touching artificial intelligence in Europe. Picture it: Ursula von der Leyen in Brussels, championing “InvestAI” to funnel €200 billion into Europe’s AI future, while, just days ago, the final General-Purpose AI Code of Practice landed on the desks of stakeholders across the continent. The mood? Nervous, ambitious, and very much under pressure.

Let’s cut straight to the chase—this is the world’s first comprehensive legal framework for regulating AI, and it’s poised to recode how companies everywhere build, scale, and deploy AI systems. The Commission has drawn a bright line: there will be no “stop the clock,” no gentle handbrake for last-minute compliance. This, despite the CEOs of Airbus, ASML, and Mistral practically pleading for a two-year pause, warning that the rules are so intricate they might strangle innovation before it flourishes. But Brussels is immovable. As a Commission spokesperson quipped at the July 4 press conference, “We have legal deadlines established in a legal text.” Translation: adapt or step aside.

From August onwards, if you’re offering or developing general-purpose AI—think OpenAI’s GPT, Google’s Gemini, or Europe’s own Aleph Alpha—transparency and safety are no longer nice-to-haves. Documentation requirements, copyright clarity, risk mitigation, deepfake labeling—these obligations are spelled out in exquisite legal detail and will become enforceable by 2026 for new models. For today’s AI titans, 2027 is the real D-Day. Non-compliance? Stiff fines of up to 7% of global revenue, which means nobody can afford to coast.

Techies might appreciate that the regulation’s risk-based system reflects a distinctly European vision of “trustworthy AI”—human rights at the core, and not just lip service. That includes outlawing predictive policing algorithms, indiscriminate biometric scraping, and emotion detection in workplace or policing contexts. Critically, the Commission’s new 60-member AI Scientific Panel is overseeing systemic risk, model classification, and technical compliance, driving consultation with actual scientists, not just politicians.

What about the rest of the globe? This is regulatory extraterritoriality in action. Where Brussels goes, others follow—like New York’s privacy laws in the 2010s, only faster and with higher stakes. If you’re coding from San Francisco or Singapore but serving EU markets, welcome to the world’s most ambitious sandbox.

The upshot? For leaders in AI, the message has never been clearer: rethink your strategy, rewrite your documentation, and get those compliance teams in gear—or risk becoming a cautionary tale when the fines start rolling.

Thanks for tuning in, and don’t forget to subscribe. This has been a quiet please production; for more, check out quiet please dot ai. Some great deals: https://amzn.to/49SJ3Qs. For more, check out http://www.quietplease.ai

24 July · 3 min

Buckle Up for Europe's AI Regulatory Roadmap: No Detours Allowed

Welcome to the fast lane of European AI regulation—no seat belts required, unless you count the dozens of legal provisions about to reshape the way we build and deploy artificial intelligence. As I’m recording this, just days away from the August 2, 2025, enforcement milestone, there’s a distinctly charged air. The EU AI Act, years in the making, isn’t being delayed—not for Airbus, not for ASML, not even after a who’s-who of industry leaders sent panicked open letters to Ursula von der Leyen and the European Commission, pleading for a pause. The Commission’s answer? A polite but ironclad “no.” The regulatory Ragnarok is happening as scheduled.

Let’s cut straight to the core: the EU AI Act is the world’s first comprehensive legal framework governing the use of artificial intelligence. Its risk-based model isn’t just a talking point—certain uses are already illegal, from biometric categorization based on sensitive data to emotion recognition in the workplace, and of course, manipulative systems that influence behavior unnoticed. Those rules have been in effect since February.

Now, as of this August, new obligations kick in for providers of general-purpose AI models—think foundational models like GPT-style large language models, image generators, and more. The General-Purpose AI Code of Practice, published July 10, lays out the voluntary gold standard for compliance. There’s a carrot here: less paperwork and more legal certainty for organizations that sign on. Voluntary, yes—but ignore it at your peril, given the risk of crushing fines of up to 35 million euros or 7% of global turnover.

The Commission has been busy clarifying thresholds, responsibility-sharing between upstream and downstream actors, and handling those labyrinthine integration and modification scenarios. The logic is simple: modify a model with significant new compute power? Congratulations, you inherit all compliance responsibility. And if your model is open-source, you’re only exempt if no money changes hands and the model isn’t a systemic risk. No free passes for the most potent systems, open-source or not.

To smooth the rollout, the AI Office and the European Artificial Intelligence Board have spun out guidelines, FAQs, and the newly opened AI Service Desk for support. France’s Mistral, Germany’s Federal Network Agency, and hundreds of stakeholders across academia, business, and civil society have their fingerprints on the rules. But be prepared: initial confusion is inevitable. Early enforcement will be “graduated,” with guidance and consultation—until August 2027, when the Act’s teeth come out for all, including high-risk systems.

What does it mean for you? Increased trust and more visible transparency—chatbots have to disclose they’re bots, deepfakes need obvious labels, and every high-risk system comes under the microscope. Europe is betting that by dictating terms to the world’s biggest AI players, it will shape what’s next. Like it or not, the future of AI is being drawn up in Brussels—and compliance is mandatory, not optional.

21 July · 3 min

"EU AI Act Becomes Reality: No More Delays, Hefty Fines Await Unprepared Businesses"

Let’s just call it: the EU AI Act is about to become reality—no more discussions, no more delays, no more last-minute reprieves. The European Commission has dug in its heels. Despite this month’s frantic lobbying, from the likes of Airbus and ASML to Mistral, asking for a two-year pause, the Commission simply said, “Our legal deadlines are established. The rules are already in force.” The first regulations have been binding since February, and the heavy hitters—transparency, documentation, and technical standards for general-purpose AI—hit on August 2, 2025. If your AI touches the European market and you’re not ready, the fines alone might make your CFO reconsider machine learning as a career path—think €35 million or 7% of your global turnover.

Zoom in on what’s actually changing and why some tech leaders are sweating. The EU AI Act is the world’s first sweeping legal framework for artificial intelligence, risk-based just as the GDPR was for privacy. Certain AI is now outright banned: biometric categorization based on sensitive data, emotion recognition in your workplace Zoom calls, manipulative systems changing your behavior behind the scenes, and, yes, the dreaded social scoring. If you’re building general-purpose AI—think large language models, multimodal models—your headaches start from August 2. You’ll need to document your training data, lay out your model development and evaluation, publish summaries, and keep transparency reports up to date. Copyrighted material in your training set? Document it, prove you had the rights, or face the consequences. Even confidential data must be protected under new, harmonized technical standards the Commission is quietly making the gold standard.

This week’s news is all about guidelines and the GPAI Code of Practice, finalized on July 10 and made public in detail just yesterday. The Commission wants providers to get on board with this voluntary code: comply and, supposedly, you’ll have a reduced administrative burden and more legal certainty. Ignore it, and you might find yourself tangled in legal ambiguity or at the sharp end of enforcement from the likes of Germany’s Bundesnetzagentur or, if you’re Danish, the Agency for Digital Government. Denmark, ever the overachiever, enacted its national AI oversight law early—on May 8—setting the pace for everyone else.

If you remember the GDPR scramble, this déjà vu is justified. Every EU member state must designate its own national AI authorities by August 2. The European Artificial Intelligence Board is set to coordinate these efforts, making sure no one plays fast and loose with the AI rules. Businesses whine about complexity; regulators remain unmoved. And while the new guidelines offer some operational clarity, don’t expect a gentle phase-in like the GDPR’s. The Act positions the EU as the de facto global regulator—again. Non-EU companies using AI in Europe? Welcome to the jurisdictional party.

19 July · 3 min

Denmark Leads EU's AI Regulation Revolution: Enforcing Landmark AI Act Months Ahead of Deadline

Imagine waking up in Copenhagen this week, where Denmark just cemented its reputation as a tech regulation trailblazer, becoming the first EU country to fully implement the EU Artificial Intelligence Act—months ahead of the August 2, 2025, mandatory deadline. Industry insiders from Brussels to Berlin are on edge, their calendars marked by the looming approach of enforcement. The clock, quite literally, is ticking.

Unlike the United States’ scattershot, state-level approach, the EU AI Act is structured, systematic, and—let’s not mince words—ambitious. This is the world’s first unified legal framework governing artificial intelligence. The Act’s phased rollout means that today, in July 2025, we are in the eye of the regulatory storm. Since February, particularly risky AI practices, such as biometric categorization targeting sensitive characteristics and emotion recognition in workplaces, have been banned outright. Builders and users of AI across Europe are scrambling to ramp up what the EU calls “AI literacy.” If your team can’t explain the risks and logic of the systems they deploy, you might be facing more than just a stern memo—a €35 million fine or 7% of global turnover can land quickly and without mercy.

August 2025 is the next inflection point. From then, any provider or deployer of general-purpose AI—think OpenAI, Google, Microsoft—must comply with stringent documentation, transparency, and data-provenance obligations. The European Commission’s just-published General-Purpose AI Code of Practice, after months of wrangling with nearly 1,000 stakeholders, offers a voluntary but incentivized roadmap. Adherence means a lighter administrative load and regulatory tranquility—stray, and the burden multiplies. But let’s be clear: the Code does not guarantee legal safety; it simply clarifies the maze.

What most AI companies are quietly asking themselves: will this European model reverberate globally? The Act’s architecture, in many ways reminiscent of the GDPR playbook, is already nudging discussion in Washington, New Delhi, and Beijing. And make no mistake, the EU’s choice of a risk-based approach—categorizing systems from minimal to “unacceptable risk”—means the law evolves alongside technological leaps.

There’s plenty of jockeying behind the scenes. German authorities are prepping regulatory sandboxes; IBM is running compliance campaigns, while Meta and Amazon haven’t yet committed to the new code. But in this moment, the message is discipline, transparency, and relentless readiness. You can feel the regulatory pressure in every boardroom and dev sprint. The EU is betting that by constraining the wild, it can foster innovation that’s not just profitable, but trustworthy.

17 July · 3 min

Europe's AI Reckoning: The EU's Groundbreaking Regulation Shakes Up the Tech Landscape

Imagine waking up in Brussels this morning and realizing that, as of a few weeks ago, the European Union’s AI Act is no longer just the stuff of policy briefings and think tank debates—it’s a living, breathing regulation that’s about to transform tech across the continent and beyond. Effective since February, the EU AI Act is carving out a new global reality for artificial intelligence, and the clock is ticking—August 2, 2025, is D-day for the new transparency rules targeting general-purpose AI models. That means if you’re building, selling, or even adapting models like GPT-4, DALL-E, or Google’s Gemini for the EU market, you’re now on the hook for some of the world’s most comprehensive and contentious AI requirements.

Let’s get specific. The law is already imposing AI literacy obligations across the board: whether you’re a provider, a deployer, or an importer, you need your staff to have a real grasp of how AI works. No more black-box mystique or “it’s just an algorithm” hand-waving. By August, anyone providing a general-purpose AI model will have to publish detailed summaries of their training data, like a nutrition label for algorithms. And we’re not talking about vague assurances. The EU is demanding documentation “sufficiently detailed” to let users, journalists, and regulators trace the DNA of what these models have been fed. Think less “trust us,” more “show your work—or risk a €15 million fine or 3% of worldwide annual turnover.” These are GDPR-level risks, and the comparison isn’t lost on anyone in tech.

But let’s not pretend it’s frictionless. In the past week alone, Airbus, Siemens Energy, Lufthansa, ASML, and a who’s-who of European giants fired off an open letter begging the European Commission for a two-year delay. They argue the rules bring regulatory overload, threaten competitiveness, and, with key implementation standards still being thrashed out, are almost impossible to obey. The Commission has so far said no—August 2 is still the target date—but Executive Vice President Henna Virkkunen has left a crack in the door, hinting at “targeted delays” if essential standards aren’t ready.

This tension is everywhere. The voluntary Code of Practice released July 10 is a preview of the coming world: transparency, stricter copyright compliance, and systemic risk management. Companies like OpenAI and Google are reviewing the text; Meta and Amazon are holding their cards close. There’s a tug-of-war between innovation and caution, global ambition and regulatory rigor.

Europe wants to be the AI continent—ambitious, trusted, safe. Yet building rules for tech that evolves while you write the legislation is an impossible engineering problem. The real test starts now: will the AI Act make Europe the model for AI governance, or slow it down while others—looking at you, Silicon Valley and Shanghai—race ahead? The debate is no longer theoretical, and as deadlines close in, the world is watching.

14 July · 3 min

Navigating the Labyrinth of the EU AI Act: A Race Against Compliance and Innovation

Today is July 12th, 2025, and if you thought European bureaucracy moved slowly, let me introduce you to the EU AI Act—which is somehow both glacial and frighteningly brisk at the same time. Since February, this sweeping new law has been the talk of Brussels, Berlin, Paris, and, frankly, anyone in tech who gets heart palpitations at the mere mention of “compliance matrix.” The AI Act is now the world’s most ambitious legal effort to bring artificial intelligence systems to heel—classifying them by risk, imposing bans on “unacceptable” uses, and, for the AI providers and deployers among us, requiring that staff be schooled in what the Act coyly calls “AI literacy.”

But let’s not pretend this is all running like automated clockwork. As of this week, the European Commission just published the final Code of Practice for general-purpose AI, or GPAI—those foundational models like GPT-4, DALL-E, and their ilk. Now, OpenAI, Google, and other industry titans are frantically revisiting their compliance playbooks, because the fines for getting things wrong can reach up to 7% of global turnover. That’ll buy a lot of GPU clusters, or a lot of legal fees. The code is technically “voluntary,” but signing on means less red tape and more legal clarity, which, in the EU, is like being handed the cheat codes for Tetris.

Transparency is the new battle cry. The Commission’s Henna Virkkunen described the Code as a watershed for “tech sovereignty.” Now, AI companies that sign up will need to share not only what their models can do, but also what they were trained on—think of it as a nutrition label for algorithms. Paolo Lazzarino from the law firm ADVANT Nctm says this helps us judge what’s coming out of the AI kitchen, data by data.

Yet, not everyone is popping champagne. More than forty-five heavyweights of European industry—Airbus, Siemens Energy, even Mistral—have called for a two-year “stop the clock” on the most burdensome rules, arguing the AI Act’s moving goalposts and regulatory overload could choke EU innovation right when the US and China are speeding ahead. Commission spokesperson Thomas Regnier, unwavering, stated: “No stop the clock, no pause.” And don’t expect any flexibility from Brussels in trade talks with Washington or under the Digital Markets Act: as far as the EU is concerned, these rules are about European values, not bargaining chips.

Here’s where it gets interesting for the would-be compliance artist: the real winners in this grand experiment might be the very largest AI labs, who can afford an armada of lawyers and ethicists, while smaller players are left guessing at requirements—or quietly shifting operations elsewhere.

So, will the Act make Europe the world’s beacon for ethical AI, or just a museum for lost startups? The next few months will tell.

12 July · 3 min

EU's AI Act Rewrites the Global AI Rulebook

Welcome to the era where artificial intelligence isn’t just changing our world but is itself being reshaped by law at an unprecedented pace. Yes, I’m talking about the European Union’s Artificial Intelligence Act, the so-called AI Act, which, as of now, is rapidly transitioning from legislative text to concrete reality. The Act officially entered into force last August, and the compliance countdown is all but inescapable, especially with that pivotal August 2, 2025, deadline looming for general-purpose AI models.

Let’s get right to it: the EU AI Act is the world’s first comprehensive regulatory framework for AI, designed to set the rules of play not only for European companies, but for any organization worldwide that wants access to the EU’s massive market. Forget the GDPR—this is next-level, shaping the global conversation about AI accountability, safety, and risk.

Here’s how it works. The AI Act paints AI risk with a bold palette: unacceptable, high, limited, and minimal risk. Unacceptable risk? Banned outright. High-risk? Think biometric surveillance, AI in critical infrastructure, employment—those undergo strict compliance and transparency measures. Meanwhile, your run-of-the-mill chatbots? Minimal or limited risk, with lighter obligations. And then there’s the beast: general-purpose AI models, like those powering the latest generative marvels. These are subject to special transparency and evaluation rules, with slightly fewer hoops for open-source models.

Now, if you’re hearing a faint whirring sound, that’s the steady hum of tech CEOs furiously lobbying Brussels. Just last week, leaders from companies like ASML, Meta, Mistral, and even Carrefour threw their weight behind an open letter—46 European CEOs asking the Commission to hit pause on the AI Act. Their argument? The guidelines aren’t finalized, the compliance landscape is murky, and Europe risks throttling its own innovation before it can compete with the US and China. They call their campaign #stoptheclock.

But the EU Commission’s Thomas Regnier shot that down on Friday—no stop the clock, no grace period, and absolutely no pause. The timeline is the timeline: August 2025 for general-purpose models, August 2026 for high-risk models, and phased requirements in between. And for the record, this is no empty threat—the Act creates national notifying bodies, demands conformity assessments, and empowers a European Artificial Intelligence Board to keep Member States in line.

What’s more, as of February, every provider and deployer of AI in the EU must ensure their staff have a “sufficient level of AI literacy.” That’s not just a suggestion; it’s law. The upshot? Organizations are scrambling to develop robust training programs and compliance protocols, even as the final Code of Practice for general-purpose AI models is still delayed, thanks in part to lobbying from Google, Meta, and others.

Will this new regulatory order truly balance innovation and safety? Or will Europe’s bold move become a cautionary tale for overregulation in AI? Only time will tell, but one thing is certain: the next year is make or break for every AI provider with European ambitions.

10 July · 3 min

Europe's AI Reckoning: Racing to Comply with High-Stakes Regulations

Europe’s AI summer may feel more like a nervous sprint than a picnic right now, especially for those of us living at the intersection of code, capital, and compliance. The EU’s Artificial Intelligence Act is no longer a looming regulation—it’s a fast-moving train, and as of today, July 7th, 2025, there are no signs of it slowing down. That’s despite a deluge of complaints, lobbying blitzes, and even a CEO-endorsed hashtag campaign aimed at hitting pause. ASML, Mistral, Alphabet, Meta, and a crowd of nearly 50 other tech heavyweights signed an open letter in the last week, warning the European Commission that the deadline is not just ambitious, it’s borderline reckless, risking Europe’s edge in the global AI arms race.

Thomas Regnier, the Commission’s spokesperson, essentially dropped the regulatory mic last Friday: “Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause.” No amount of LinkedIn drama or industry angst could budge the schedule. By August 2025, general-purpose AI models—think everything from smart chatbots to foundational LLMs—must comply. Come August 2026, high-risk AI applications like biometric surveillance and automated hiring tools are up next. European policymakers seem adamant about legal certainty, hoping that a crystal-clear timeline will attract long-term investment and prevent another “GDPR scramble.”

But listening to leaders like Swedish Prime Minister Ulf Kristersson and organizations such as CCIA Europe, you’d think the AI Act is a bureaucratic maze designed in a vacuum. The complaint isn’t just about complexity. It’s about survival for smaller firms, who are now openly considering relocating AI projects to the US or elsewhere to dodge regulatory quicksand. Compared to the EU’s risk-tiered, legally binding approach, the US is sticking to voluntary sector-by-sector frameworks, while China is going all-in on state-mandated AI dominance.

Still, there are flickers of pragmatism from Brussels. The Commission is flirting with a Digital Simplification Omnibus—yes, that is the real name—and promising an AI Act Service Desk to handhold companies through the paperwork labyrinth. There’s even a delayed but still-anticipated Code of Practice, now expected at year’s end, intended to demystify compliance for developers and enterprise leaders alike.

Yet, beneath this regulatory bravado, a question lingers—will Europe’s ethical ambition be its competitive undoing? As the world watches, it’s not just the substance of the AI Act that matters, but whether Europe can balance principle with the breakneck pace of global innovation.

7 July · 2 min

Popular in Business & Economy

framgangspodden
varvet
badfluence
uppgang-och-fall
svd-ledarredaktionen
rss-borsens-finest
avanzapodden
rss-kort-lang-analyspodden-fran-di
rikatillsammans-om-privatekonomi-rikedom-i-livet
affarsvarlden
rss-dagen-med-di
lastbilspodden
fill-or-kill
tabberaset
kapitalet-en-podd-om-ekonomi
borsmorgon
dynastin
montrosepodden
market-makers
rss-inga-dumma-fragor-om-pengar