AI lab TL;DR | Carys J. Craig - The Copyright Trap and AI Policy

🔍 In this TL;DR episode, Carys J Craig (Osgoode Professional Development) explains the "copyright trap" in AI regulation, where relying on copyright favors corporate interests over creativity. She challenges misconceptions about copying and property rights, showing how this approach harms innovation and access. Carys offers alternative ways to protect human creativity without falling into this trap.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:46] Q1 - What is the "Copyright Trap," and why could it harm AI and creativity?

⏲️[10:05] Q2 - Can you explain the three routes that lead into the copyright trap and their relevance to AI?

⏲️[22:08] Q3 - What alternatives should policymakers consider to protect creators and manage AI?

⏲️[28:45] Wrap-up & Outro


💭 Q1 - What is the "Copyright Trap," and why could it harm AI and creativity?


🗣️ “To turn to copyright law is to turn to really a false friend. The idea that copyright is going to be our friend, is going to help us in this situation, (...) it's likely to do more harm than good."

🗣️ “We are imagining increasingly in these policy debates that copyright and protection of copyright owners will be a kind of counterweight to corporate power and to the sort of extractive logics of Big Tech and AI development. I think that that is misguided. And in fact, we're playing into the interests of both the entertainment industries and Big Tech.”

🗣️ "When we run into the copyright trap, this sort of conviction that copyright is going to be the right regulatory tool, we are sort of defining how this technology is going to evolve in a way that I think will backfire and will actually undermine the political objectives of those who are pointing to the inequities and the unfairness behind the technology and the way that it's being developed.”

🗣️ "AI industry, big tech industry and the creative industry stakeholders are all, I think, perfectly happy to approach these larger policy questions through the sort of logic of copyright, sort of proprietary logic of ownership, control, exchange in the free market, licencing structures that we're already seeing taking hold."

🗣️ "What we're going to see, I think, if we run into the copyright trap is that certainly smaller developers, but really everyone will be training the technology on incomplete data sets, the data sets that reflect the sort of big packaged data products that have been exchanged for value between the main market actors. So that's going to lessen the quality really of what's going in generally by making it more exclusive and less inclusive."


💭 Q2 - Can you explain the three routes that lead into the copyright trap and their relevance to AI?


🗣️ "The first route that I identify is what's sometimes called the if-value-then-right fallacy. So that's the assumption that if something has value, then there should be or must be some right over it."

🗣️ "Because something has value, whether economic or social, doesn't mean we should turn it into property that can be owned and controlled through these exclusive rights that we find in copyright law."

🗣️ "The second route that I identify is a sort of obsession with copying and the idea that copying is inherently just a wrongful activity. (...) The reality is that there's nothing inherently wrongful about copying. And in fact, this is how we learn. This is how we create."

🗣️ "One of the clearest routes into the copyright trap is saying, well, you know, you have to make copies of texts in order to train AI. So of course, copyright is implicated. And of course, we have to prevent that from happening without permission. (...) But our obsession with the individuated sort of discrete copies of works behind the scenes is now an anachronism that we really need to let go.”

🗣️ "Using the figure of the artist as a reason to expand copyright control, and assuming that that's going to magically turn into lining the pockets of artists and creators seems to me to be a fallacy and a route into the copyright trap."


💭 Q3 - What alternatives should policymakers consider to protect creators and manage AI?


🗣️ "The health of our cultural environment (...) [should be] the biggest concern and not simply or only protecting creators as a separate class of sort of professional actors."

🗣️ "I think what we could do is shift our copyright policy focus to protecting and encouraging human authorship by refusing to protect AI-generated outputs."

🗣️ "If the outputs of generative AI are substantially similar to works on which the AI was trained, then those are infringing outputs and copyright law will apply to them such that to distribute those infringing copies would produce liability under the system as it currently exists.“

🗣️ "There are privacy experts who might be much better placed to say how should we curate or ensure that we regulate the data on which the machines are trained, and I would be supportive of those kinds of interventions at the input stage."

🗣️ “Copyright seems like a tempting way to do it but that's not what it does. And so maybe rather than some of the big collective licencing solutions that are being imagined in this context, we'd be better off thinking about tax solutions, where we properly tax big tech and then we use that tax in a way that actually supports the things that we as a society care about, including funding culture and the arts."


📌 About Our Guest

🎙️ Carys J Craig | Osgoode Hall Law School

🌐 Article | The AI-Copyright Trap

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4905118

🌐 Carys J Craig

https://www.osgoode.yorku.ca/faculty-and-staff/craig-carys-j/


Carys is the Academic Director of the Osgoode Professional Development LLM Program in Intellectual Property Law, and recently served as Osgoode’s Associate Dean. A recipient of multiple teaching awards, Carys researches and publishes widely on intellectual property law and policy, with an emphasis on authorship, users’ rights and the public domain.

#AI #ArtificialIntelligence #GenerativeAI

Episodes (37)

AI lab TL;DR | Žiga Turk - Brussels is About to Protect Citizens from Intelligence

🔍 In this TL;DR episode, Professor Žiga Turk (University of Ljubljana, Slovenia) discusses his recent contribution for the Wilfried Martens Centre for European Studies on how “Brussels is About to Protect Citizens from Intelligence” with the AI lab.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[01:55] Q1 - Why do you think AI regulation prioritises limiting risks over promoting innovation and freedom of expression? How can governments balance security and privacy with technological innovation?

⏲️[05:13] Q2 - You view AI as a 'general technology' that shouldn't be specifically regulated, advocating for technology-neutral laws. What does this mean in practice?

⏲️[09:46] Wrap-up & Outro


🗣️ “The mistake is to try to regulate technology, it is behaviours that have to be regulated. (...) If politicians (...) go about regulating every new technology that appears, they will always be behind the curve.”

🗣️ “The even bigger danger is [the] kind of chilling effect [AI regulation] would have for European industries, people and businesses who will not have access to the latest and greatest AI tools (...).”

🗣️ “Some AI tools are coming to European customers with a delay or not at all. This puts the whole European economy, its citizens [and] its scientists at a disadvantage with their competition.”

🗣️ “Investors would be hesitant. Do I want to invest in [AI] in Europe, which is so tightly regulated?”

🗣️ “I don't think it matters whether you make a deepfake with Photoshop or let AI do it. If deepfakes need to be labelled, they should be labelled regardless of the technology.”

🗣️ “Admire (...) the thinkers and politicians of the Enlightenment era (...) [for] not going ‘[the printed press] will create all kinds of unacceptable risks, we have to regulate ex-ante (...)’. Instead, they created (...) legislation on freedom of expression.”

🗣️ “In the early days (...), the US created regulation that actually freed Internet companies from some potential dangers of hosting user content on their platforms, which created this whole Internet industry and creativity around platforms.”


📌 About Our Guest

🎙️ Žiga Turk | Professor, University of Ljubljana (Slovenia)

𝕏 https://twitter.com/@zigaTurkEU

🌐 Wilfried Martens Centre for European Studies - Brussels is About to Protect Citizens from Intelligence

https://www.martenscentre.eu/blog/brussels-is-about-to-protect-citizens-from-intelligence/

🌐 Regulating artificial intelligence: A technology-independent approach. European View, 23(1), 87-93

https://doi.org/10.1177/17816858241242890

🌐 Prof. Žiga Turk

https://www.zturk.com/p/english.html


Žiga Turk is a Professor at the University of Ljubljana (Slovenia) and a member of the Academic Council of the Wilfried Martens Centre for European Studies. He holds degrees in engineering and computer science. Prof. Turk was Minister for Growth, as well as Minister of Education, Science, Culture and Sports in the Government of Slovenia and Secretary General of the Felipe Gonzalez Reflection Group on the Future of Europe. As an academic, author and public speaker, he studies communication, internet science and scenarios of future global developments, particularly the role of technology and innovation.

22 April 2024 · 10 min

AI lab hot item | MEP Axel Voss: In Search of Pragmatic Solutions for AI Devs & the Creative Sector

🔥 In this 'Hot Item', MEP Axel Voss (Germany, EPP) & the AI lab discuss his intentions to bring the creative industry and AI developers around the table in mid-April for a first exchange to gain a better understanding of the issues perceived on both sides.


📌 Hot Item Highlights

⏲️[00:00] Intro

⏲️[00:53] MEP Axel Voss (Germany, EPP)

⏲️[09:51] Wrap-up & Outro


🗣️ “The copyright problem was already discussed in a way five years ago, but now we have a new technology in place. (...) This problem occurs once again. We should not wait for an imbalanced situation.”

🗣️ “We need to solve the [AI] problem, not only for the press publishers but for the whole creative sector.”

🗣️ “Technology can't just ignore existing laws (...) and (...) existing laws should not hinder new developments (...). We need a balance. (...) We actually have to try to find a pragmatic solution at the end, [and] it should be (...) legally binding (...).”

🗣️ “We need these ideas of what is feasible with the technology (...). If we are thinking about [respecting] copyright (...) it might be critical (...) to say what is copyright protected, (...) who is the copyright holder. (...) [A] problem [might be:] how to do this.”

🗣️ “If we have globally acting machines (...), should there be a kind of global approach to it? We are inviting some of our like-minded friends in the world (...). So the transatlantic system might play a role.”

🗣️ “We also have to think (...) if there is a way to align some aspects of copyright in the interest of the AI developers.”

🗣️ “We have a lot of issues to align. This should be a starting point. It should at first produce some ideas of problems that a legislator or a moderator (...) might be helpful with [in] what needs to be solved.”


📌 About Our Guest

🎙️ MEP Axel Voss (Germany, EPP)

𝕏 https://twitter.com/AxelVossMdEP

🌐 MEP Axel Voss

https://www.europarl.europa.eu/meps/en/96761/AXEL_VOSS/home

https://www.axel-voss-europa.de


Axel Voss (CDU) is a Member of the European Parliament for Germany in the European People's Party (EPP) Group. He is the EPP Group coordinator for the Committee on Legal Affairs (JURI), a deputy member of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), and, from 2020 to 2022, a member and rapporteur in the Special Committee on Artificial Intelligence. He is (shadow-)rapporteur for the EU AI Act and was rapporteur on the Directive on Copyright in the Digital Single Market (CDSM).

28 March 2024 · 10 min

AI lab TL;DR | Nuno Sousa e Silva - Are AI Models’ Weights Protected Databases?

🔍 In this TL;DR episode, Assistant Professor Nuno Sousa e Silva (Universidade Católica Portuguesa) discusses his recent Kluwer Copyright Blog contribution, “Are AI Models’ Weights Protected Databases?”, with the AI lab.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[01:41] Q1 - What are weights in an AI model?

⏲️[05:14] Q2 - Why could the EU Database Directive apply to AI models in certain cases, and what would the consequences be?

⏲️[09:23] Wrap-up & Outro


🗣️ “Models are basically tools that humans use to simplify the real world, to boil it down, to describe it, and the way that this is done is through mathematical functions.”

🗣️ “Weights are nothing but a set of numerical values that represent the strength of the connection of neurons in a neural network.”

🗣️ “[The Database Directive’s] aim is to protect the investment in the creation, presentation, and verification of data, so basically data products and the producer of data products. Admittedly, this had no AI models in mind.”

🗣️ “We know how much money and effort is put into developing [an AI] model and that the model is really the weights.”

🗣️ “For EU-based companies that qualify, that means that they have a right to control the reuse or extraction of a substantial part of that database, in other words (...): a right to control the use of the model beyond contractual rules.”

🗣️ “Some people say that if we want to talk about open source in AI, it needs to be both the disclosure of the training set and the model weights.”


📌 About Our Guest

🎙️ Nuno Sousa e Silva | Lawyer (Partner @ PTCS) & Law Professor (Universidade Católica Portuguesa)

🌐 Kluwer Copyright Blog - Are AI Models’ Weights Protected Databases?

https://copyrightblog.kluweriplaw.com/2024/01/18/are-ai-models-weights-protected-databases/

🌐 EU Database Directive (96/9/EC)

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A31996L0009

🌐 Nuno Sousa e Silva

https://www.nss.pt/en/#about


Nuno Sousa e Silva is a Lawyer (Partner at PTCS) and a Law Professor. He graduated from the Law School of the Catholic University of Portugal (Porto), obtained a Master of Laws and a PhD from the same University, and holds an LLM degree in Intellectual Property and Competition Law (MIPLC). Nuno acts frequently as an arbitrator, advisor, and legal expert for companies, governments, and international institutions. He published four books and over fifty articles on Intellectual Property, IT Law, EU Law, and Private Law. He has taught and given lectures in Portugal, Germany, Hungary, Poland, Denmark, and the UK.

12 March 2024 · 10 min

1:1 with Pamela Samuelson

In this podcast Pamela Samuelson (UC Berkeley School of Law) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view.


📌 Episode Highlights

⏲️[00:00] Intro

⏲️[02:59] Q1 - The Deepdive: AI Decrypted | What significant practical obstacles in complying with a transparency obligation about copyrighted works in training data do you identify?

⏲️[10:50] Q2 - The Deepdive: AI Decrypted | Looking at the disassembly or tokenization in the training process, can you explain why “generative AI models are generally not designed to copy training data; they are designed to learn from the data at an abstract and uncopyrightable level”?

⏲️[18:58] Q3 - The Deepdive: AI Decrypted | On generative AI outputs: 1) why is the idea that an AI could or should be recognised as author problematic, and 2) could prompts be detailed enough to meet the threshold of authorship?

⏲️[26:30] Q4 - The Deepdive: AI Decrypted | On licensing AI input your submission states: “(...) it will be impossible under current technologies to calibrate payments made under a collective licensing arrangement to actual usage of individual authors’ works.” What’s at stake?

⏲️[35:37] Outro


🗣️ “A rule that (...) you have to keep very, very accurate records about what your training datasets are (...) is just (...) impractical if you care about (...) a large number of people instead of a few big companies being able to participate in the (...) generative AI space.”

🗣️ “Data basically is in a certain form in the in-copyright works that are part of the training data but the model does not embody the training data in a recognisable way. (...) It's just not the way we think about the component elements of copyright works.”

🗣️ “If you think [licensing] will mean that authors will be able to continue to make a living, we're talking about really small change here in terms of each author's entitlement. It's not like you're going to get $10,000 or $50,000 a year.”

🗣️ “The collective license idea doesn't pay attention to (...) that we're talking about billions of works, (...) billions of authors, (...) a lot of things that essentially have no commercial value.”

🗣️ “[Collective licensing:] it's so impractical that it's just not really feasible. (...) No question that collecting societies would (...) be the big beneficiaries of this, not the authors.”

🗣️ “If a voluntary licensing regime works (...), I think that's fine. (...) [A] mandate that everything be licensed (...) is kind of unrealistic.”


📌 About Our Guest

🎙️ Pamela Samuelson | Richard M. Sherman Distinguished Professor of Law and Information, UC Berkeley School of Law

𝕏 https://twitter.com/PamelaSamuelson

🌐 Comments in Response to the U.S. Copyright Office’s Notice of Inquiry on Artificial Intelligence and Copyright by Pamela Samuelson, Christopher Jon Sprigman, and Matthew Sag (30 October 2023)

🌐 U.S. Copyright Office Issues Notice of Inquiry on Copyright and Artificial Intelligence

🌐 Allocating Ownership Rights in Computer-Generated Works (Pamela Samuelson, 1985)

🌐 Common Crawl

🌐 Shutterstock Expands Partnership with OpenAI, Signs New Six-Year Agreement to Provide High-Quality Training Data

🌐 Prof Pamela Samuelson


Pamela Samuelson is the Richard M. Sherman Distinguished Professor of Law and Information at UC Berkeley. She is recognized as a pioneer in digital copyright law, intellectual property, cyberlaw and information policy. Professor Samuelson is a director of the internationally-renowned Berkeley Center for Law & Technology. She is co-founder and chair of the board of Authors Alliance, a nonprofit organization that promotes the public interest in access to knowledge. She also serves on the board of directors of the Electronic Frontier Foundation, as well as on the advisory boards for the Electronic Privacy Information Center, the Center for Democracy & Technology, and Public Knowledge. Professor Samuelson has written and published extensively in the areas of copyright, software protection and cyberlaw, with recent publications looking into the possible intersections of generative AI and copyright.

17 January 2024 · 38 min

1:1 with Andres Guadamuz

In this podcast Andres Guadamuz (University of Sussex) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view.


📌 Episode Highlights

⏲️[00:00] Intro

⏲️[01:24] The TL;DR Perspective

⏲️[10:34] Q1 - The Deepdive: AI Decrypted | You look at the inputs and outputs of AI. For the inputs, the key question is: does mining data infringe copyright? For the outputs, the main question is: can derivative works infringe copyright and what role do exceptions play?

⏲️[20:28] Q2 - The Deepdive: AI Decrypted | In your blog entitled “Will we ever be able to detect AI usage”, you wonder if that is really the right question to ask and suggest alternatives. What are your key thoughts?

⏲️[23:53] Outro


🗣️ “To think of copyright like any granular, tiny speck of information that went into the training of an input means that you own that [AI] output. That's ridiculous to me. That means there are billions of authors for every single ChatGPT entry.”

🗣️ “What [AI providers are] doing is a temporary copy or transient copy. (...) They don't need them after the model is trained. (...) What's happening is they make a copy and then extract information.”

🗣️ “Some of these actions [by AI providers] could fall under existing exceptions and limitations. (...) They make a copy (...) that allows the generativity to work.”

🗣️ “AI is actually making it easier for small-time creators to create quality content. (...) What we're starting to see: it’s enabling more creators to do stuff.”


📌 About Our Guest

🎙️ Andres Guadamuz | Reader in Intellectual Property Law, University of Sussex

𝕏 https://twitter.com/technollama

🌐 Openness, AI, and the Changing Creative Landscape (TechnoLlama blog)

🌐 Corridor Crew’s Anime Rock, Paper, Scissors

🌐 A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs (SSRN)

🌐 Will We Ever Be Able to Detect AI Usage? (TechnoLlama blog)

🌐 Asking Whether AI Outputs Are Art Is Asking the Wrong Question (TechnoLlama blog)

🌐 TechnoLlama blog

🌐 Dr Andres Guadamuz


Dr Andres Guadamuz (aka technollama) is a Reader in Intellectual Property Law at the University of Sussex and the Editor in Chief of the Journal of World Intellectual Property. His main research areas are artificial intelligence and copyright, open licensing, cryptocurrencies, and smart contracts. He has written two books and over 40 articles and book chapters, and also blogs regularly about different technology regulation topics, notably on his TechnoLlama blog.

23 November 2023 · 25 min

AI lab hot item | Michiel Van Lerbeirghe (ML6) - Copyright Transparency: An AI Firm’s Perspective

🔥 In this 'Hot Item', Michiel Van Lerbeirghe (ML6) & the AI lab explore how the push for copyright transparency in the EU AI Act could impact smaller European AI providers and how we can move towards a practical solution.


📌 Hot Item Highlights

⏲️[00:00] Intro

⏲️[00:45] Michiel Van Lerbeirghe (ML6)

⏲️[08:28] Wrap-up & Outro


🗣️ “Copyright protection is subjective: it is definitely not up to providers of foundation models to rule whether the criteria are met. However, under the current version of the AI Act, they would be required to make that assessment.”

🗣️ “The current obligation regarding copyright [transparency] is almost impossible to comply with. (...) The obligation is still under review, and we hope that we can evolve to a mechanism that makes more sense.”

🗣️ “While transparency is definitely a good thing that should be supported, (...) the upcoming [copyright transparency] obligation could prove to be very difficult, and not to say impossible, to comply with.”

🗣️ “Copyright can actually go very far and a lot of different content can potentially be protected by copyright. (...) From a practical point of view: where would the [transparency] obligation start and where would it end?”


📌 About Our Guest

🎙️ Michiel Van Lerbeirghe | Legal Counsel, ML6

🌐 Assessing the impact of the EU AI Act proposal (ML6 Blog Post)

🌐 ML6

🌐 Michiel Van Lerbeirghe


Michiel is an IP lawyer focusing on artificial intelligence. After working for law firms for multiple years, he recently became the in-house legal counsel for ML6, a leading European service provider building and implementing AI systems for several multinationals.

14 November 2023 · 10 min

AI lab hot item | Brian Williamson (Communications Chambers) - Latest AI Policy Developments

🔥 In this 'Hot Item', Brian Williamson (Communications Chambers) & the AI lab discuss the latest AI policy developments, from the U.K. AI Summit to the U.S. White House Executive Order on AI safety.


📌 Hot Item Highlights

⏲️[00:00] Intro

⏲️[00:33] Brian Williamson (Communications Chambers)

⏲️[12:25] Wrap-up & Outro


🗣️ “We didn't seek to regulate computing or have a law of computing. We did focus on particular problems that arose over time, and computing led to a focus on data protection, but that's different to having a law of computing.”

🗣️ “What should we do? We should not seek a law of AI (...), not now, possibly not ever.”

🗣️ “The EU is working to agree [on] a law for AI, but (...) the perceived challenges continue to evolve, as does the technology. So, that's a difficult thing to do, but I actually think it's the wrong thing to do at this point in time, if ever.”

🗣️ “We should remain technology agnostic and focus on delivering a solution.”

🗣️ “We need to do the hard work of thinking about whether existing regulation and market adaptation is going to be sufficient (...) but just trying to fix the problems now with a law in advance won't work.”


📌 About Our Guest

🎙️ Brian Williamson | Partner, Communications Chambers

𝕏 https://twitter.com/MarethBrian

🌐 Communications Chambers

🌐 Brian Williamson


Brian Williamson is a London-based partner of the consultancy Communications Chambers. His clients include governments, regulators, telcos, and tech companies. He has a background in economics and physics.

9 November 2023 · 13 min

AI lab hot item | Kai Zenner (European Parliament) - EU AI Act Trilogue: The Focus Points

🔥 In this 'Hot Item', Kai Zenner (Head of Office & Digital Policy Adviser for MEP Axel Voss) & the AI lab discuss the state of play of the EU AI Act trilogue negotiations.


📌 Hot Item Highlights

⏲️[00:00] Intro

⏲️[00:49] Kai Zenner (European Parliament)

⏲️[12:00] Wrap-up & Outro


🗣️ “We do not have a lot of time left (...). There's a 50-50 chance (...). [We] really want this deal, because we don't believe that it would be a wise move to delay the adoption of the AI Act after the European election. We will really give our best to close this file at the end of this year.”

🗣️ “[Remote biometric identification (RBI):] to find a middle ground here is almost impossible, because the European Parliament really wants to ban it completely and to not allow any loopholes.”

🗣️ “[Prohibited AI practices:] we really need to go into details, again always trying to find this difficult compromise between making the ban not too broad, that there are loopholes, but also not too narrow.”

🗣️ “[High-risk AI systems:] we need to be extremely careful that no activities or deployment cases for the use of AI are listed that are not really risky. A lot of rather technical work needs to be invested there.”

🗣️ “The European Parliament of course had much more time to come to a position (...) compared to the Council (...). Therefore, parliamentarians of course saw the rise of ChatGPT.”

🗣️ “We tried to make this AI value chain a little bit more transparent, to accelerate the information sharing from upstream to downstream and also to make sure that even though foundation models are not the focus of the AI Act, that they need to fulfil certain minimum criteria.”

🗣️ “Where we can probably meet both Council and Parliament is (...) making sure that all the actors in the AI value chain are at least somehow covered by the AI Act and (...) that we allow the downstream actors to become compliant (...) by having all information necessary.”

🗣️ “[AI governance:] the European Parliament pushed for the creation of an AI office (...) and we have a Parliament that really wants to learn from the mistakes with the General Data Protection Regulation (GDPR).”


📌 About Our Guest

🎙️ Kai Zenner | Head of Office & Digital Policy Adviser for MEP Axel Voss, European Parliament

𝕏 https://twitter.com/ZennerBXL

🌐 MEP Axel Voss

🌐 Kai Zenner


Kai Zenner is the Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament. He is heavily involved in the political negotiations on the AI Act and the AI Liability Directive. Kai has been a member of the OECD.AI Network of Experts since 2021, was awarded best MEP Assistant in 2023, and ranked #13 in Politico's Power 40 - class of 2023.

19 October 2023 · 13 min
