Generative AI and Democracy: Shaping the Future

In a significant stride towards regulating artificial intelligence, the European Union's pioneering piece of legislation known as the AI Act has been finalized and approved. This landmark regulation aims to address the myriad complexities and risks associated with AI technologies while fostering innovation and trust within the digital space.

The AI Act introduces a comprehensive legal framework designed to govern the use and development of AI across the 27 member states of the European Union. It marks a crucial step in the global discourse on AI governance, setting a precedent that could inspire similar regulatory measures worldwide.

At its core, the AI Act categorizes AI systems according to the risk they pose to safety and fundamental rights. The framework distinguishes between unacceptable risk, high risk, limited risk, and minimal risk applications. This risk-based approach ensures that stricter requirements are imposed on systems that have significant implications for individual and societal well-being.
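To make the risk-based structure concrete, here is a minimal, illustrative Python sketch of how a compliance team might model the four tiers internally. The tier names mirror the categories above; the obligation lists are simplified assumptions for illustration, not the Act's legal text.

```python
from enum import Enum, auto


class RiskTier(Enum):
    """The four categories described above, simplified."""
    UNACCEPTABLE = auto()   # banned outright (e.g., social scoring)
    HIGH = auto()           # strict obligations before and after market entry
    LIMITED = auto()        # mainly transparency duties
    MINIMAL = auto()        # largely unregulated

# Illustrative, non-exhaustive obligations per tier; a real compliance team
# would derive these from the Act's articles and annexes, not from this map.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance and bias mitigation",
        "technical documentation and logging",
        "human oversight and conformity assessment",
    ],
    RiskTier.LIMITED: ["tell users they are interacting with AI",
                       "label AI-generated content"],
    RiskTier.MINIMAL: ["no mandatory obligations (voluntary codes encouraged)"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for duty in obligations_for(RiskTier.HIGH):
        print("-", duty)
```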

AI applications considered a clear threat to people’s safety, livelihoods, and rights, such as social scoring systems and exploitative subliminal manipulation technologies, are outright banned under this act. Meanwhile, high-risk categories include critical infrastructures, employment and workers management, and essential private and public services, which could have major adverse effects if misused.

For high-risk AI applications, the act mandates rigorous transparency and data management provisions. These include requirements for high-quality data sets that are free from biases to ensure that AI systems operate accurately and fairly. Furthermore, these systems must incorporate robust security measures and maintain detailed documentation to facilitate audit trails. This ensures accountability and enables oversight by regulatory authorities.

The AI Act also stipulates that AI developers and deployers in high-risk sectors maintain clear and accurate records of their AI systems’ functioning. This facilitates assessments and compliance checks by the designated authorities responsible for overseeing AI implementation within the Union.
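As a purely illustrative complement to the record-keeping duties described above, the sketch below models a single audit-trail entry that a deployer of a hypothetical high-risk system might retain. The schema, field names, and example values are assumptions; the Act mandates logging and traceability but leaves the concrete format to providers and harmonised standards.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json


@dataclass
class AuditLogEntry:
    """Hypothetical audit-trail record for a high-risk AI system."""
    system_id: str
    model_version: str
    timestamp_utc: str
    input_summary: str          # abridged, redacted description of the input
    output_summary: str         # what the system decided or recommended
    human_reviewer: Optional[str] = None  # who exercised oversight, if anyone
    notes: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self), ensure_ascii=False)


# Example entry a deployer might keep for later compliance checks
# (all identifiers below are made up):
entry = AuditLogEntry(
    system_id="credit-scoring-eu-01",
    model_version="2025.07.1",
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    input_summary="loan application, anonymised applicant features",
    output_summary="score 0.62, referred to human underwriter",
    human_reviewer="underwriting team",
)
print(entry.to_json())
```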

Moreover, the act acknowledges the rapid pace of development in the AI sector and includes provisions for updating and revising regulatory requirements as technology advances and new challenges emerge.

Additionally, the legislation emphasizes consumer protection and the rights of individuals, underscoring the importance of transparency in AI operations. Consumers must be explicitly informed when they are interacting with AI systems, unless it is unmistakably apparent from the circumstances.

The path to the enactment of the AI Act was marked by extensive debates and consultations with various stakeholders, including tech industry leaders, academic experts, civil society organizations, and the general public. These discussions highlighted the necessity of balancing innovation and ethical considerations in the development and deployment of artificial intelligence technologies.

As the European Union sets forth this regulatory framework, the AI Act is expected to play a pivotal role in shaping the global landscape of AI governance. It not only aims to protect European citizens but also to establish a standardized approach that could serve as a blueprint for other regions considering similar legislation.

As the AI field continues to evolve, the European Union’s AI Act will undoubtedly be a subject of much observation and analysis, serving as a critical reference point in the ongoing dialogue on how best to manage and harness the potential of artificial intelligence for the benefit of society.

Episodes (201)

Headline: "Europe's AI Reckoning: A High-Stakes Clash of Tech, Policy, and Global Ambition"

Headline: "Europe's AI Reckoning: A High-Stakes Clash of Tech, Policy, and Global Ambition"

Let's not sugarcoat it—the past week in Brussels was electric, and not just because of a certain heatwave. The European Union's Artificial Intelligence Act, the now-world-famous EU AI Act, is moving from high theory to hard enforcement, and it's already remapping how technologists, policymakers, and global corporations think about intelligence in silicon. Two days from now, on August 2nd, the most consequential tranche of the Act's requirements goes live, targeting general-purpose AI models—think the ones that power language assistants, creative generators, and much of Europe's digital infrastructure. In the weeks leading up to this, the European Commission pulled no punches. Ursula von der Leyen doubled down on the continent's ambition to be the global destination for "trustworthy AI," unveiling the €200 billion InvestAI initiative plus a fresh €20 billion fund for gigafactories designed to build out Europe's AI backbone.

The recent publication of the General-Purpose AI Code of Practice on July 10th sent a shockwave through boardrooms and engineering hubs from Helsinki to Barcelona. This code, co-developed by a handpicked cohort of experts and 1,000-plus stakeholders, landed after months of fractious negotiation. Its central message: if you're scaling or selling sophisticated AI in Europe, transparency, copyright diligence, and risk mitigation are no longer optional—they're your new passport to the single market. The Commission dismissed all calls for a delay; there's no "stop the clock." Compliance starts now, not after the next funding round or product launch.

But the drama doesn't end there. Back in February, chaos erupted when the draft AI Liability Directive was pulled amid furious debates over core liability issues. So, while the AI Act defines the tech rules of the road, legal accountability for AI-based harm remains a patchwork—an unsettling wild card for major players and start-ups alike.

If you want detail, look to France's CNIL and their June guidance. They carved "legitimate interest" into GDPR compliance for AI, giving the French regulatory voice outsized heft in the ongoing harmonization of privacy standards across the Union.

Governance, too, is on fast-forward. Sixty independent scientists are now embedded as the AI Scientific Panel, quietly calibrating how models are classified and how "systemic risk" ought to be taxed and tamed. Their technical advice is rapidly becoming doctrine for future tweaks to the law.

Not everybody is thrilled, of course. Industry lobbies have argued that the EU's prescriptive risk-based regime could push innovation elsewhere—London, perhaps, where Peter Kyle's Regulatory Innovation Office touts a more agile, innovation-friendly alternative. Yet here in the EU, as of this week, the reality is set. Hefty fines—up to 7% of global turnover—back up these new directives.

Listeners, the AI Act is more than a policy experiment. It's a stress test of Europe's political will and technological prowess. Will the gamble pay off? For now, every AI engineer, compliance officer, and political lobbyist in Europe is on red alert.

Thanks for tuning in—don't forget to subscribe for more sharp takes on AI's unfolding future. This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

28 July · 3 min

EU AI Act: Regulatory Reality Dawns as Landmark Legislation Takes Effect

Have you felt it, too? That faint tremor running through every boardroom and startup, from Lisbon to Helsinki, as we approach the next milestone in the EU Artificial Intelligence Act saga? We've sprinted past speculation—now, as July 26, 2025, dawns, we're staring at regulatory reality. The long-anticipated second phase of the EU AI Act hits in less than a week, with August 2nd the date circled in red on every compliance officer's calendar. Notably, this phase brings the first legally binding obligations for providers of general-purpose AI models—think of the likes of OpenAI or Mistral, but with strict European guardrails.

This is the moment Ursula von der Leyen, President of the European Commission, seemed to foreshadow in February when she unleashed the InvestAI initiative, a €200 billion bet to cement Europe as an "AI continent." Sure, the PR shine is dazzling, but under the glossy surface there's a slog of bureaucracy and multi-stakeholder bickering. Over a thousand voices—industry, academia, civil society—clashed and finally hammered out the General-Purpose AI Code of Practice, submitted to the European Commission just weeks ago.

Why all the fuss over this so-called Code? It's the cheat sheet, the copilot, for every entity wrangling with the new regime, wrestling with transparency mandates, copyright headaches, and the ever-elusive specter of "systemic risk." The Code is voluntary, for now, but don't kid yourself: Brussels expects it to shape best practices and spark a compliance arms race. And, to the chagrin of lobbyists fishing for delays, the Commission rejected calls to "stop the clock." From August 2, there's no more grace period. The AI Act's teeth are fully bared.

But the Act doesn't just slam the brakes on dystopic AIs. It empowers the European AI Office, tasks a new Scientific Panel with evidence-based oversight, and requires each member state to stand up a conformity authority—think AI police for the digital realm. Fines? They bite hard: up to €35 million or 7% of global turnover if you deploy a prohibited system.

Meanwhile, debate simmers over the abandoned AI Liability Directive—a sign that harmonizing digital accountability remains the trickiest Gordian knot of all. But don't overlook this irony: by codifying risks and thresholds, the EU's hard rules have paradoxically driven a burst of regulatory creativity outside the EU. The UK's Peter Kyle is pushing the Regulatory Innovation Office's cross-jurisdictional collaboration, seeking a lighter touch, more "sandbox" than command-and-control.

So what's next for AI in Europe and beyond? Watch the standard-setters tussle. Expect the market to stratify—major AI players compelled to disclose, mitigate, and sometimes reengineer. For AI startups dreaming of exponential scale, the new gospel is risk literacy and compliance by design. The era where "move fast and break things" ruled tech is well and truly sunsetted, at least on this side of the Channel.

Thanks for tuning in. Subscribe for sharper takes, and remember: This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

26 July · 3 min
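For a sense of scale on the penalty ceiling mentioned in the episode above, here is a small worked sketch in Python. It assumes the commonly cited reading that the cap for the most serious violations is €35 million or 7% of worldwide annual turnover, whichever is higher; the turnover figures are invented.

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 flat_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Ceiling of the fine for the most serious violations: the flat cap or
    the turnover-based cap, whichever is higher (simplified; the Act also
    defines lower tiers and separate treatment for SMEs)."""
    return max(flat_cap_eur, turnover_share * global_annual_turnover_eur)


# Invented turnover figures, purely for illustration:
for name, turnover in [("small start-up", 20_000_000),
                       ("mid-size provider", 400_000_000),
                       ("global platform", 80_000_000_000)]:
    print(f"{name}: fine ceiling of EUR {max_fine_eur(turnover):,.0f}")
```

With these made-up numbers, the flat €35 million cap dominates for the two smaller firms, while the 7% rule pushes the ceiling to €5.6 billion for the large platform.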

EU AI Act's Deadline Looms: A Tectonic Shift for AI in Europe

Blink and the EU AI Act's next compliance deadline is on your doorstep—August 2, 2025, isn't just a date, it's a tectonic shift for anyone touching artificial intelligence in Europe. Picture it: Ursula von der Leyen in Brussels, championing "InvestAI" to funnel €200 billion into Europe's AI future, while, just days ago, the final General Purpose AI Code of Practice landed on the desks of stakeholders across the continent. The mood? Nervous, ambitious, and very much under pressure.

Let's cut straight to the chase—this is the world's first comprehensive legal framework for regulating AI, and it's poised to recode how companies everywhere build, scale, and deploy AI systems. The Commission has drawn a bright line: there will be no "stop the clock," no gentle handbrake for last-minute compliance. This, despite Airbus, ASML, and Mistral's CEOs practically pleading for a two-year pause, warning that the rules are so intricate they might strangle innovation before it flourishes. But Brussels is immovable. As a Commission spokesperson quipped at the July 4th press conference, "We have legal deadlines established in a legal text." Translation: adapt or step aside.

From August onwards, if you're offering or developing general purpose AI—think OpenAI's GPT, Google's Gemini, or Europe's own Aleph Alpha—transparency and safety are no longer nice-to-haves. Documentation requirements, copyright clarity, risk mitigation, deepfake labeling—these obligations are spelled out in exquisite legal detail and will become enforceable by 2026 for new models. For today's AI titans, 2027 is the real D-Day. Non-compliance? Stiff fines up to 7% of global revenue, which means nobody can afford to coast.

Techies might appreciate that the regulation's risk-based system reflects a distinctly European vision of "trustworthy AI"—human rights at the core, and not just lip service. That includes outlawing predictive policing algorithms, indiscriminate biometric scraping, and emotion detection in workplaces or policing contexts. Critically, the Commission's new 60-member AI Scientific Panel is overseeing systemic risk, model classification, and technical compliance, driving consultation with actual scientists, not just politicians.

What about the rest of the globe? This is regulatory extraterritoriality in action. Where Brussels goes, others follow—like New York's privacy laws in the 2010s, only faster and with higher stakes. If you're coding from San Francisco or Singapore but serving EU markets, welcome to the world's most ambitious sandbox.

The upshot? For leaders in AI, the message has never been clearer: rethink your strategy, rewrite your documentation, and get those compliance teams in gear—or risk becoming a cautionary tale when the fines start rolling.

Thanks for tuning in, and don't forget to subscribe. This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

24 July · 3 min

Buckle Up for Europe's AI Regulatory Roadmap: No Detours Allowed

Welcome to the fast lane of European AI regulation—no seat belts required, unless you count the dozens of legal provisions about to reshape the way we build and deploy artificial intelligence. As I'm recording this, just days away from the August 2, 2025, enforcement milestone, there's a distinctly charged air. The EU AI Act, years in the making, isn't being delayed—not for Airbus, not for ASML, not even after a who's-who of industry leaders sent panicked open letters to Ursula von der Leyen and the European Commission, pleading for a pause. The Commission's answer? A polite but ironclad "no." The regulatory Ragnarok is happening as scheduled.

Let's cut straight to the core: the EU AI Act is the world's first comprehensive legal framework governing the use of artificial intelligence. Its risk-based model isn't just a talking point—certain uses are already illegal, from biometric categorization based on sensitive data to emotion recognition in the workplace, and of course, manipulative systems that influence behavior unnoticed. Those rules have been in effect since February.

Now, as of this August, new obligations kick in for providers of general-purpose AI models—think foundational models like GPT-style large language models, image generators, and more. The General-Purpose AI Code of Practice, published July 10, lays out the voluntary gold standard for compliance. There's a carrot here: less paperwork and more legal certainty for organizations who sign on. Voluntary, yes—but ignore it at your peril, given the risk of crushing fines up to 35 million euros or 7% of global turnover.

The Commission has been busy clarifying thresholds, responsibility-sharing for upstream versus downstream actors, and handling those labyrinthine integration and modification scenarios. The logic is simple: modify a model with significant new compute power? Congratulations, you inherit all compliance responsibility. And if your model is open-source, you're only exempt if there's no money changing hands and the model isn't a systemic risk. No free passes for the most potent systems, open-source or not.

To smooth the rollout, the AI Office and the European Artificial Intelligence Board have spun out guidelines, FAQs, and the newly opened AI Service Desk for support. France's Mistral, Germany's Federal Network Agency, and hundreds of stakeholders across academia, business, and civil society have their fingerprints on the rules. But be prepared: initial confusion is inevitable. Early enforcement will be "graduated," with guidance and consultation—until August 2027, when the Act's teeth come out for all, including high-risk systems.

What does it mean for you? Increased trust, more visible transparency—chatbots have to disclose they're bots, deepfakes need obvious labels, and every high-risk system comes under the microscope. Europe is betting that by dictating terms to the world's biggest AI players, it will shape what's next. Like it or not, the future of AI is being drawn up in Brussels—and compliance is mandatory, not optional.

Thanks for tuning in. Don't forget to subscribe. This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

21 July · 3 min
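The modification and open-source points in the episode above boil down to two decision rules. The sketch below is a rough approximation only: the one-third-of-original-training-compute threshold reflects how the Commission's 2025 guidelines on general-purpose AI have been widely summarised, and should be treated as an assumption to verify against the current guidance, not settled law.

```python
def downstream_is_new_provider(modification_compute_flops: float,
                               original_training_compute_flops: float,
                               threshold_fraction: float = 1 / 3) -> bool:
    """Does a downstream actor who modifies a general-purpose model take on
    provider obligations for the modified model?

    Assumption: the widely reported one-third-of-original-compute threshold;
    check the Commission's current GPAI guidance before relying on it."""
    return modification_compute_flops > threshold_fraction * original_training_compute_flops


def open_source_exemption_applies(monetised: bool, systemic_risk: bool) -> bool:
    """Per the episode above: releasing a model open source only lightens the
    load if no money changes hands and the model is not a systemic-risk model."""
    return not monetised and not systemic_risk


# Hypothetical example: a fine-tune using 40% of the original training compute,
# offered by a company that charges for access to the modified model.
print(downstream_is_new_provider(4e24, 1e25))                               # True
print(open_source_exemption_applies(monetised=True, systemic_risk=False))   # False
```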

"EU AI Act Becomes Reality: No More Delays, Hefty Fines Await Unprepared Businesses"

"EU AI Act Becomes Reality: No More Delays, Hefty Fines Await Unprepared Businesses"

Let's just call it: the EU AI Act is about to become reality—no more discussions, no more delays, no more last-minute reprieves. The European Commission has dug its heels in. Despite this month's frantic lobbying, from the likes of Airbus and ASML to Mistral, asking for a two-year pause, the Commission simply said, "Our legal deadlines are established. The rules are already in force." The first regulations have been binding since February, and the heavy hitters—transparency, documentation, and technical standards for general-purpose AI—hit on August 2, 2025. If your AI touches the European market and you're not ready, the fines alone might make your CFO reconsider machine learning as a career path—think €35 million or 7% of your global turnover.

Zoom in on what's actually changing and why some tech leaders are sweating. The EU AI Act is the world's first sweeping legal framework for artificial intelligence, risk-based just like GDPR was for privacy. Certain AI is now outright banned: biometric categorization based on sensitive data, emotion recognition in your workplace Zoom calls, manipulative systems changing your behavior behind the scenes, and, yes, the dreaded social scoring. If you're building general-purpose AI—think large language models, multimodal models—your headaches start from August 2. You'll need to document your training data, lay out your model development and evaluation, publish summaries, and keep transparency reports up to date. Copyrighted material in your training set? Document it, prove you had the rights, or face the consequences. Even confidential data must be protected under new, harmonized technical standards the Commission is quietly making the gold standard.

This week's news is all about guidelines and the GPAI Code of Practice, finalized on July 10 and made public in detail just yesterday. The Commission wants providers to get on board with this voluntary code: comply and, supposedly, you'll have a reduced administrative burden and more legal certainty. Ignore it, and you might find yourself tangled in legal ambiguity or at the sharp end of enforcement from the likes of Germany's Bundesnetzagentur or, if you're Danish, the Agency for Digital Government. Denmark, ever the overachiever, enacted its national AI oversight law early—on May 8—setting the pace for everyone else.

If you remember the GDPR scramble, this déjà vu is justified. Every EU member state must designate its own national AI authorities by August 2. The European Artificial Intelligence Board is set to coordinate these efforts, making sure no one plays fast and loose with the AI rules. Businesses whine about complexity; regulators remain unmoved. And while the new guidelines offer some operational clarity, don't expect a gentle phase-in like GDPR. The Act positions the EU as the de facto global regulator—again. Non-EU companies using AI in Europe? Welcome to the jurisdictional party.

Thanks for tuning in. Don't forget to subscribe. This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

19 July · 3 min

Denmark Leads EU's AI Regulation Revolution: Enforcing Landmark AI Act Months Ahead of Deadline

Imagine waking up in Copenhagen this week, where Denmark just cemented its reputation as a tech regulation trailblazer, becoming the first EU country to fully implement the EU Artificial Intelligence Act—months ahead of the August 2, 2025, mandatory deadline. Industry insiders from Brussels to Berlin are on edge, their calendars marked by the looming approach of enforcement. The clock, quite literally, is ticking.

Unlike the United States' scattershot, state-level approach, the EU AI Act is structured, systematic, and—let's not mince words—ambitious. This is the world's first unified legal framework governing artificial intelligence. The Act's phased rollout means that today, in July 2025, we are in the eye of the regulatory storm. Since February, particularly risky AI practices, such as biometric categorization targeting sensitive characteristics and emotion recognition in workplaces, have been banned outright. Builders and users of AI across Europe are scrambling to ramp up what the EU calls "AI literacy." If your team can't explain the risks and logic of the systems they deploy, you might be facing more than just a stern memo—a €35 million fine or 7% of global turnover can land quickly and without mercy.

August 2025 is the next inflection point. From then, any provider or deployer of general-purpose AI—think OpenAI, Google, Microsoft—must comply with stringent documentation, transparency, and data-provenance obligations. The European Commission's just-published General-Purpose AI Code of Practice, after months of wrangling with nearly 1,000 stakeholders, offers a voluntary but incentivized roadmap. Adherence means a lighter administrative load and regulatory tranquility—stray, and the burden multiplies. But let's be clear: the Code does not guarantee legal safety; it simply clarifies the maze.

What most AI companies are quietly asking themselves: will this European model reverberate globally? The Act's architecture, in many ways reminiscent of the GDPR playbook, is already nudging discussion in Washington, New Delhi, and Beijing. And make no mistake, the EU's choice of a risk-based approach—categorizing systems from minimal to "unacceptable risk"—means the law evolves alongside technological leaps.

There's plenty of jockeying behind the scenes. German authorities are prepping regulatory sandboxes; IBM is running compliance campaigns, while Meta and Amazon haven't yet committed to the new code. But in this moment, the message is discipline, transparency, and relentless readiness. You can feel the regulatory pressure in every boardroom and dev sprint. The EU is betting that by constraining the wild, it can foster innovation that's not just profitable, but trustworthy.

Thank you for tuning in—don't miss the next update, and be sure to subscribe. This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

17 July · 3 min

Europe's AI Reckoning: The EU's Groundbreaking Regulation Shakes Up the Tech Landscape

Imagine waking up in Brussels this morning and realizing that, as of a few weeks ago, the European Union's AI Act is no longer just the stuff of policy briefings and think tank debates—it's a living, breathing regulation that's about to transform tech across the continent and beyond. Effective since February, the EU AI Act is carving out a new global reality for artificial intelligence, and the clock is ticking—August 2, 2025, is D-day for the new transparency rules targeting general-purpose AI models. That means if you're building, selling, or even adapting models like GPT-4, DALL-E, or Google's Gemini for the EU market, you're now on the hook for some of the world's most comprehensive and contentious AI requirements.

Let's get specific. The law is already imposing AI literacy obligations across the board: whether you're a provider, a deployer, or an importer, you need your staff to have a real grasp of how AI works. No more black-box mystique or "it's just an algorithm" hand-waving. By August, anyone providing a general-purpose AI model will have to publish detailed summaries of their training data, like a nutrition label for algorithms. And we're not talking about vague assurances. The EU is demanding documentation "sufficiently detailed" to let users, journalists, and regulators trace the DNA of what these models have been fed. Think less "trust us," more "show your work"—or risk a €15 million fine or 3% of worldwide annual turnover. These are GDPR-level risks, and the comparison isn't lost on anyone in tech.

But let's not pretend it's frictionless. In the past week alone, Airbus, Siemens Energy, Lufthansa, ASML, and a who's-who of European giants fired off an open letter begging the European Commission for a two-year delay. They argue the rules bring regulatory overload, threaten competitiveness, and, with key implementation standards still being thrashed out, are almost impossible to obey. The Commission has so far said no—August 2 is still the target date—but Executive Vice President Henna Virkkunen has left a crack in the door, hinting at "targeted delays" if essential standards aren't ready.

This tension is everywhere. The voluntary Code of Practice released July 10 is a preview of the coming world: transparency, stricter copyright compliance, and systemic risk management. Companies like OpenAI and Google are reviewing the text; Meta and Amazon are holding their cards close. There's a tug-of-war between innovation and caution, global ambition and regulatory rigor.

Europe wants to be the AI continent—ambitious, trusted, safe. Yet building rules for tech that evolves while you write the legislation is a formidable engineering problem. The real test starts now: will the AI Act make Europe the model for AI governance, or slow it down while others—looking at you, Silicon Valley and Shanghai—race ahead? The debate is no longer theoretical, and as deadlines close in, the world is watching.

Thanks for tuning in, and don't forget to subscribe. This has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

14 July · 3 min
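The "nutrition label for algorithms" idea in the episode above is easiest to picture as a structured summary document. The sketch below is illustrative only: the Commission publishes its own official template for the public training-data summary, and every key and value here is an invented stand-in, not that template.

```python
import json

# Invented example of a structured training-data summary; the official EU
# template differs, and all values below are placeholders.
training_data_summary = {
    "model_name": "example-gpai-model",
    "provider": "Example AI GmbH",
    "data_sources": [
        {"type": "web_crawl", "approx_share": "60%", "period": "2021-2024"},
        {"type": "licensed_corpora", "approx_share": "25%"},
        {"type": "synthetic_data", "approx_share": "15%"},
    ],
    "languages": ["en", "de", "fr", "sv"],
    "copyright_measures": "opt-out signals honoured; rights-reserved sources excluded",
    "personal_data_measures": "filtering and de-identification applied before training",
    "last_updated": "2025-07-26",
}

print(json.dumps(training_data_summary, indent=2, ensure_ascii=False))
```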

Navigating the Labyrinth of the EU AI Act: A Race Against Compliance and Innovation

Today is July 12th, 2025, and if you thought European bureaucracy moved slowly, let me introduce you to the EU AI Act — which is somehow both glacial and frighteningly brisk at the same time. Since February, this sweeping new law has been the talk of Brussels, Berlin, Paris, and, frankly, anyone in tech who gets heart palpitations at the mere mention of "compliance matrix." The AI Act is now the world's most ambitious legal effort to bring artificial intelligence systems to heel — classifying them by risk, imposing bans on "unacceptable" uses, and, for the AI providers and deployers among us, requiring that staff now be schooled in what the Act coyly calls "AI literacy."

But let's not pretend this is all running like automated clockwork. As of this week, the European Commission just published the final Code of Practice for general-purpose AI, or GPAI, those foundational models like GPT-4, DALL-E, and their ilk. Now, OpenAI, Google, and other industry titans are frantically revisiting their compliance playbooks, because the fines for getting things wrong can reach up to 7% of global turnover. That'll buy a lot of GPU clusters, or a lot of legal fees. The code is technically "voluntary," but signing on means less red tape and more legal clarity, which, in the EU, is like being handed the cheat codes for Tetris.

Transparency is the new battle cry. The Commission's Henna Virkkunen described the Code as a watershed for "tech sovereignty." Now, AI companies that sign up will need to share not only what their models can do, but also what they were trained on — think of it as a nutrition label for algorithms. Paolo Lazzarino from the law firm ADVANT Nctm says this helps us judge what's coming out of the AI kitchen, data by data.

Yet not everyone is popping champagne. More than forty-five heavyweights of European industry — Airbus, Siemens Energy, even Mistral — have called for a two-year "stop the clock" on the most burdensome rules, arguing the AI Act's moving goalposts and regulatory overload could choke EU innovation right when the US and China are speeding ahead. Commission spokesperson Thomas Regnier, unwavering, stated: "No stop the clock, no pause." And don't expect any flexibility from Brussels in trade talks with Washington or under the Digital Markets Act: as far as the EU is concerned, these rules are about European values, not bargaining chips.

Here's where it gets interesting for the would-be compliance artist: the real winners in this grand experiment might be the very largest AI labs, who can afford an armada of lawyers and ethicists, while smaller players are left guessing at requirements — or quietly shifting operations elsewhere.

So, will the Act make Europe the world's beacon for ethical AI, or just a museum for lost startups? The next few months will tell. Thanks for tuning in, listeners. Don't forget to subscribe for more — this has been a Quiet Please production; for more, check out quiet please dot ai. Some great Deals: https://amzn.to/49SJ3Qs. For more check out http://www.quietplease.ai.

12 July · 3 min
