The Artificial Intelligence Show

The Artificial Intelligence Show (formerly The Marketing AI Show) is the podcast that helps your business grow smarter by making AI approachable and actionable. The AI Show podcast is brought to you by the creators of the Marketing AI Institute, AI Academy for Marketers, and the Marketing AI Conference (MAICON). Hosts Paul Roetzer, founder and CEO of Marketing AI Institute, and Mike Kaput, Chief Content Officer, break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join Paul and Mike on The AI Show as they work to accelerate AI literacy for all.

Episodes (179)

#50: Prompt Engineering Best Practices from OpenAI, How GPT-4 Could Reshape Healthcare, and The Hidden Costs of AI Adoption

Thanks for joining us for episode 50! While AI breakthroughs slowed down this week, insights, best practices, and conversations continued. Paul Roetzer and Mike Kaput catch up on the artificial intelligence news impacting marketing and business leaders.

OpenAI drops chat prompt suggestions

Logan Kilpatrick from OpenAI shared helpful tips on crafting prompts. Quite simply (or so it seems), Kilpatrick offers six strategies for getting better results: write clear instructions, provide reference text, split complex tasks into simpler subtasks, give GPTs time to "think," use external tools, and test changes systematically. Is it that easy? What has OpenAI learned, and how can marketers follow these strategies while still differentiating themselves?

Could generative AI transform healthcare?

Could generative AI transform healthcare for the better? One expert thinks so. Dr. Robert M. Wachter, professor and chair of the Department of Medicine at the University of California, San Francisco, outlines why in a new essay commissioned by Microsoft. In it, Dr. Wachter says he's optimistic that generative AI systems like GPT-4 have the potential to reshape how healthcare works. The article caught Paul's attention, and Paul and Mike break it down on the podcast, discussing not only marketing but also better patient outcomes and a reduction in healthcare costs.

High costs and AI adoption

According to a new report from The Information: "More than 600 of Microsoft's largest customers, including Bank of America, Walmart, Ford, and Accenture, have been testing the AI features in its Microsoft Office 365 productivity apps, and at least 100 of the customers are paying a flat fee of $100,000 for up to 1,000 users for one year, according to a person with direct knowledge of the pilot program." The proposed pricing models for AI features will shape business leaders' decision-making around AI adoption, especially at small businesses.
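The six strategies above can be made concrete. Below is a toy sketch (not an official OpenAI example) of how several of them might shape the messages sent to a chat model; the function name, role text, and prompt wording are all illustrative assumptions.

```python
# Illustrative only: applies several of OpenAI's six prompting strategies
# when assembling a chat request. No API call is made here.

def build_messages(task: str, reference_text: str) -> list[dict]:
    """Assemble chat messages that apply several of the six strategies."""
    system = (
        # Strategy 1 - write clear instructions: state role, format, constraints.
        "You are a marketing copy editor. Answer in exactly three bullet points.\n"
        # Strategy 4 - give the model time to "think": ask for reasoning first.
        "First reason step by step, then give your final answer.\n"
        # Strategy 2 - provide reference text: restrict answers to the source.
        "Use only the reference text below; say 'not found' otherwise.\n"
        f"Reference:\n{reference_text}"
    )
    # Strategy 3 - split complex tasks: send one narrow subtask per request.
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    task="Summarize the key claim.",
    reference_text="GPT-4 may reshape healthcare workflows.",
)
print(messages[0]["role"])  # system
```

Strategies 5 (use external tools) and 6 (test changes systematically) live outside the prompt itself: the former via function or plugin calls, the latter by evaluating prompt variants against a fixed test set.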
This helpful episode of The Marketing AI Show can be found on your favorite podcast player. Be sure to explore the links below.

6 Jun 2023 · 44min

#49: Google AI Ads, Microsoft AI Copilots, Cities and Schools Embrace AI, Top VC’s Best AI Resources, Fake AI Pentagon Explosion Picture, and NVIDIA’s Stock Soars

Google Introduces AI-Powered Ads

Google just announced new AI features within Google Ads, from landing page summarization to generative AI that helps produce relevant and effective keywords, headlines, descriptions, images, and other assets for your campaigns.

Microsoft Rolls Out AI Copilots and AI Plugins

Two years ago, Microsoft rolled out its first AI "copilot." This year, Microsoft introduced additional copilots across core products and services, including AI-powered chat in Bing, Microsoft 365 Copilot, and others across products like Microsoft Dynamics and Microsoft Security.

Cities and Schools Embrace Generative AI

We see some very encouraging action from schools and cities regarding generative AI. According to Wired, New York City Schools have announced they will reverse their ban on ChatGPT and generative AI. Additionally, the City of Boston's chief information officer sent guidelines to every city official encouraging them to start using generative AI to understand its potential.

AI Resources from Andreessen Horowitz

Andreessen Horowitz recently shared a curated list of resources, their "AI Canon," that they've relied on to get smarter about modern AI. It includes papers, blog posts, courses, and guides that have had an outsized impact on the field over the past several years.

DeepMind's AI Risk Early Warning System

In its latest paper, DeepMind introduces a framework for evaluating novel threats (misleading statements, biased decisions, or repeating copyrighted content), co-authored with colleagues from a number of universities and organizations.

OpenAI's Thoughts on the Governance of Superintelligence

Sam Altman, Greg Brockman, and Ilya Sutskever recently published their thoughts on the governance of superintelligence. They say that proactivity and risk mitigation are critical, alongside special treatment and coordination of superintelligent systems.
White House Takes New Steps to Advance Responsible AI

Last week, the Biden-Harris Administration announced new efforts that "will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals' rights and safety and delivers results for the American people." This includes an updated roadmap and a new report on the risks and opportunities related to AI in education.

Fake Image of Pentagon Explosion Causes Dip in the Stock Market

A fake image purporting to show an explosion near the Pentagon was shared by multiple verified Twitter accounts on Monday, causing confusion and leading to a brief dip in the stock market. Based on the actions and reactions of the day, are we unprepared for this technology?

Meta's Massively Multilingual Speech Project

Meta announced its Massively Multilingual Speech (MMS) project, which combines self-supervised learning with a new dataset that provides labeled data for over 1,100 languages and unlabeled data for nearly 4,000 languages. Meta is also publicly sharing its models and code so that others in the research community can build upon its work.

More Funding Rounds

Anthropic raised $450 million in Series C funding. Figure raised $70 million in Series A funding to accelerate robot development, fund manufacturing, design an end-to-end AI data engine, and drive commercial progress. OpenAI CEO Sam Altman has raised $115 million in a Series C funding round for Worldcoin, which aims to distribute a crypto token to people "just for being a unique individual."

NVIDIA Stock Soars on Historic Earnings Report

Nvidia's stock blew past already-high expectations in last Wednesday's earnings report. Dependency on Nvidia is so widespread that Big Tech companies have been working on developing their own competing chips, much like Apple spent years developing its own chips so it could avoid having to rely on (and pay) other companies to outfit its devices.

30 May 2023 · 1h 1min

#48: Artificial Intelligence Goes to Washington, the Biggest AI Safety Risks Today, and How AI Could Be Regulated

AI came to Washington in a big way. OpenAI CEO Sam Altman appeared before Congress for his first-ever testimony, speaking at a hearing called by Senators Richard Blumenthal and Josh Hawley. The topic? How to oversee and establish safeguards for artificial intelligence.

The hearing lasted nearly three hours and focused largely on Altman, though Christina Montgomery, an IBM executive, and Gary Marcus, a leading AI expert, academic, and entrepreneur, also testified. During the hearing, Altman covered a wide range of topics, including the different risks posed by generative AI, what should be done to address those risks, and how companies should develop AI technology. Altman even suggested that AI companies be regulated, possibly through the creation of one or more federal agencies and/or some type of licensing requirement.

The hearing was divisive. Some experts applauded what they saw as much-needed urgency from the federal government to tackle important AI safety issues. Others criticized the hearing for being far too friendly, citing worries that companies like OpenAI are angling to have undue influence over the regulatory and legislative process.

An important note: This hearing appeared to be informational in nature. It was not called because OpenAI is in trouble. And it appears to be the first of many such hearings and committee meetings on AI moving forward.

In this episode, Paul and Mike tackle the hearing from three different angles, as well as a series of lower-profile government meetings that occurred. First, they do a deep dive into what happened, what was discussed, and what it means for marketers and business leaders. Then they take a closer look at the biggest issues in AI safety that were discussed during the hearing and that the hearing was designed to address.
At one point during the hearing, Altman said, "My worst fear is we cause significant harm to the world." Lawmakers and the AI experts at the hearing cited several AI safety risks they're losing sleep over. Overarching concerns included election misinformation, job disruption, copyright and licensing, generally harmful or dangerous content, and the pace of change.

Finally, Paul and Mike talk through the regulatory measures proposed during the hearing and what dangers there are, if any, of OpenAI or other AI companies tilting the regulatory process in their favor. Some tough questions were raised in the process. Senate Judiciary Chair Senator Dick Durbin suggested the need for a new agency to oversee the development of AI, and possibly an international agency. Gary Marcus said there should be a safety review, similar to the FDA's process for drugs, to vet AI systems before they are deployed widely, advocating for what he called a "nimble monitoring agency." On the subject of agencies, Senator Blumenthal cautioned that the agency or agencies must be well-resourced, with both money and the appropriate experts. Without those, he said, AI companies would "run circles around us." As expected, this discussion wasn't without controversy.

Tune in to this critically important episode of The Marketing AI Show. Find it on your favorite podcast player and be sure to explore the links below. Listen to the full episode of the podcast. Want to receive our videos faster? Subscribe on YouTube, visit our website, receive our weekly newsletter, register for a free webinar, come to our next Marketing AI Conference, or enroll in AI Academy for Marketers. Join our community on Slack, LinkedIn, Twitter, Instagram, and Facebook.

23 May 2023 · 55min

#47: Huge Google AI Updates, Teaching Large Language Models to Have Values, and How AI Will Impact Productivity and Labor

Another week of big news from Google

Google just announced major AI updates, including an AI makeover of search. The updates were announced at Google's I/O developer conference, and some of the more important ones are discussed on the podcast.

A new next-generation large language model called PaLM 2 "excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency better than our previous state-of-the-art LLMs." Next, an AI makeover of search through Google's "Search Generative Experience" will deliver conversational results to search queries. This will become available to users who sign up for Google's Search Labs sandbox. Additional improvements include new AI writing tools for Gmail, the removal of the waitlist for Bard, and the ability to create full documents, generate slides, and fill in spreadsheets across tools like Docs, Slides, and Sheets.

What's next for Claude

Anthropic, a major AI player and creator of the AI assistant Claude, just published research that could have a big impact on AI safety. In it, the company outlines an approach called "Constitutional AI": giving a large language model "explicit values determined by a constitution, rather than values determined implicitly via large-scale human feedback." The concept is designed to address the limitations of large-scale human feedback, which traditionally determines the values and principles of AI behavior, and it aims to enhance the transparency, safety, and usefulness of AI models while reducing the need for human intervention. The constitution of an AI model consists of a set of principles that guide its outputs; in Claude's case, it encourages the model to avoid toxic or discriminatory outputs, refrain from assisting in illegal or unethical activities, and aim to be helpful, honest, and harmless.
Anthropic emphasizes that this living document is subject to revision and improvement based on further research and feedback.

More on the economy and knowledge workers

In a recent Brookings Institution article titled "Machines of Mind: The Case for an AI-powered Productivity," the authors explore the potential impact of AI, specifically large language models (LLMs), on the economy and knowledge workers. The authors predict LLMs will have a massive impact on knowledge work in the near future. They say: "We expect millions of knowledge workers, ranging from doctors and lawyers to managers and salespeople, to experience similar ground-breaking shifts in their productivity within a few years, if not sooner." The productivity gains from AI will be realized directly through output created per hour worked (i.e., increased efficiency) and indirectly through accelerated innovation that drives future productivity growth. The authors say they broadly agree with a recent Goldman Sachs estimate that AI could raise global GDP by a whopping 7%. But there's more to it, so be sure to tune in.
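The critique-and-revise loop at the heart of Constitutional AI can be sketched in miniature. This is a toy illustration, not Anthropic's implementation: in the real approach the model itself critiques and rewrites its draft against the constitution's principles, whereas here a simple keyword check and redaction stand in for those model calls.

```python
# Toy Constitutional AI loop: a "constitution" of principles drives an
# automated critique step and a revision step. Keyword matching is a
# stand-in for the model's self-critique; all principles are illustrative.

CONSTITUTION = [
    ("avoid toxic language", ["idiot", "stupid"]),
    ("avoid unsafe instructions", ["how to hack"]),
]

def critique(response: str) -> list[str]:
    """Return the constitutional principles the response violates."""
    violated = []
    for principle, banned in CONSTITUTION:
        if any(word in response.lower() for word in banned):
            violated.append(principle)
    return violated

def revise(response: str) -> str:
    """Revise the response so it passes the critique step."""
    for _principle, banned in CONSTITUTION:
        for word in banned:
            # A real system would ask the model to rewrite; we just redact.
            response = response.replace(word, "[removed]")
    return response

draft = "Only an idiot would skip this step."
if critique(draft):
    draft = revise(draft)
print(draft)  # Only an [removed] would skip this step.
```

The key design point is that the principles live in an explicit, inspectable list rather than being implicit in thousands of human preference labels, which is what makes the constitution revisable in the way Anthropic describes.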

16 May 2023 · 55min

#46: Geoff Hinton Leaves Google, Google and OpenAI Have “No Moat,” and the Most Exciting Things About the Future of AI

Hinton departs Google

Geoffrey Hinton, a pioneer of deep learning and a VP and engineering fellow at Google, has left the company after 10 years due to new fears he has about the technology he helped develop. Hinton says he wants to speak openly about his concerns, and that part of him now regrets his life's work. He told MIT Technology Review: "I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future. How do we survive that?"

He worries that extremely powerful AI will be misused by bad actors, especially in elections and war scenarios, to cause harm to humans. He's also concerned that once AI is able to string together different tasks and actions (like we're seeing with AutoGPT), intelligent machines could take harmful actions on their own. This isn't necessarily an attack on Google specifically. Hinton said that he has plenty of good things to say about the company. But he wants "to talk about AI safety issues without having to worry about how it interacts with Google's business."

"No Moats"

"We have no moat, and neither does OpenAI," claims a leaked Google memo revealing that the company is concerned about losing the AI competition to open-source technology. The memo, written by a senior software engineer, states that while Google and OpenAI have been focused on each other, open-source projects have been solving major AI problems faster and more efficiently. The memo's author says that Google's large AI models are no longer seen as an advantage, with open-source models being faster, more customizable, and more private. What do these new developments and rapid shifts mean?

The exciting future of AI

We talk about a lot of heavy AI topics on this podcast, and it's easy to get concerned about the future or overwhelmed.
But Paul recently published a LinkedIn post that's getting much attention because it talks about what excites him most about AI. Paul wrote, "Someone recently asked me what excited me most about AI. I struggled to find an answer. I realized I spend so much time thinking about AI risks and fears (and answering questions about risks and fears), that I forget to appreciate all the potential for AI to do good. So, I wanted to highlight some things that give me hope for the future…" We won't spoil it in this blog post, so tune in to the podcast to hear Paul's thoughts.

Listen to this week's episode on your favorite podcast player and be sure to explore the links below for more thoughts and perspectives on these important topics.

9 May 2023 · 52min

#45: ChatGPT Business, AI Disrupts Politics, and AI-Powered Growth and Layoffs in Big Tech

Big changes are coming to ChatGPT

OpenAI just announced two big updates to ChatGPT. The first is a soon-to-be-released subscription tier called ChatGPT Business. Designed for enterprises, the plan will follow OpenAI's API data usage policies, meaning user data won't, by default, be used to train ChatGPT. The second is a feature that now allows ChatGPT users to turn off their chat history, which will prevent those conversations from being used to train ChatGPT.

We got a startling preview of how AI is going to impact politics

In the U.S., the 2024 presidential election season kicked off with an attack ad generated 100% by artificial intelligence. The ad imagines a future dystopia in which President Joe Biden wins re-election. The images, voices, and video clips are stunningly real and were created with widely available AI tools. They foreshadow an election season in which AI can be used by all parties and actors to generate hyper-realistic synthetic content at scale.

At the same time, lawmakers in the U.S. and Europe signaled this week that they're taking more aggressive action to regulate AI. In the U.S., four major federal agencies, including the Federal Trade Commission and the Department of Justice, released a joint statement on their stance toward AI companies. The agencies clarified that they would not treat AI companies differently from other firms when enforcing rules and regulations. In Europe, the European Parliament has reached a deal to move forward on the world's first "AI rulebook," the Artificial Intelligence Act, a broad suite of regulations that will govern the use of AI within the European Union. These include safeguards against the misuse of AI systems and rules that protect citizens from AI risks.
AI's major impact on big tech companies

A recent round of tech earnings calls saw major companies like Microsoft, Google, and Meta posting strong or better-than-expected results, and some of that growth was driven by AI. In Microsoft's case, Azure revenue was up 27% year-on-year, and Microsoft said it was already generating new sales from its AI products. Google was less specific about its AI plans but committed to incorporating generative AI into its products moving forward. Reports have surfaced that Meta is playing catch-up to retool its infrastructure for AI, but it still saw an unexpected increase in sales in the past quarter.

At the same time, these companies face enormous pressure from shareholders to get leaner. Some have conducted layoffs already, with more expected to come. And they're all relying on AI to capture efficiencies. We saw a stark example of this in practice with a recent announcement from Dropbox that it's cutting staff by 16%, or 500 people. How should knowledge workers think about this? What steps should we be taking?

Today's rapid-fire topics include Runway Gen-1 for mobile, PwC's $1 billion investment in generative AI, AI and human empathy in healthcare, Replit's funding round, and Hinton's Google exit.

2 May 2023 · 53min

#44: Inside ChatGPT’s Revolutionary Potential, Major Google AI Announcements, and Big Problems with AI Training Are Discovered

New announcements, fast training with repercussions, and more are discussed in this week's Marketing AI Show with Paul Roetzer and Mike Kaput. Read more, then tune in!

Stunning results from ChatGPT plugins

The way we all work is about to change in major ways thanks to ChatGPT, and few are ready for how fast this is about to happen. In a new TED Talk, OpenAI co-founder and president Greg Brockman shows off the power and potential of the all-new ChatGPT plugins, and the results are stunning. Thanks to plugins, ChatGPT can now browse the internet and interact with third-party services and applications, resulting in AI agents that can take actions in the real world to help us with our work. In the talk, Brockman shows how knowledge workers will soon work hand-in-hand with machines, and how this is going to start changing things months (or even weeks) from now, not years. Paul and Mike talk about the capabilities that caught their eye and what this means for the future of work.

Google just announced some huge AI updates

However, some within the company say Google is committing ethical lapses in its rush to compete with OpenAI and others. There were three significant updates. Google announced that its AI research team, Brain, would merge with DeepMind, creating Google DeepMind. It was also revealed that Google is working on a project titled "Magi," which involves Google reinventing its core search engine from the ground up to be an AI-first product, as well as adding more AI features to search in the short term. Details are light at the moment, but the New York Times has confirmed some AI features will roll out in the US this year and that ads will remain a part of AI-powered search results. Finally, Google announced Bard had been updated with new features to help you code: Bard can now generate code and help you debug it.
As these updates rolled out, reporting from Bloomberg revealed that some Google employees think the company is committing ethical lapses by rushing the development of AI tools, particularly around Bard and the accuracy of its responses.

What problems arise when training AI tools?

AI companies like OpenAI are coming under fire for how their AI tools are trained, and social media channels are pushing back. Reddit, which is often scraped to train language models, just announced it would charge for API access in order to stop AI companies from training models on Reddit data without compensation. Twitter recently made a similar move, and Elon Musk publicly threatened to sue Microsoft for, he says, "illegally using Twitter data" to train models. Other companies are sure to follow suit.

An investigative report by the Washington Post recently found that large language models from Google and Meta were trained on data from major websites like Wikipedia, The New York Times, and Kickstarter. The report raises concerns that models may be using data from certain sites improperly. In one example, the Post found that models had trained on an ebook piracy site and likely did not have permission to use that data. Not to mention, the copyright symbol appeared more than 200 million times in the dataset the Post studied.

And if that wasn't enough, StableLM and AI Drake were discussed!

25 Apr 202354min

#43: AWS Gets Into the Generative AI Game, AutoGPT and Autonomous AI Agents, and How AI Could Impact Millions of Knowledge Workers Sooner Than You Think

AI-driven automation is quickly becoming a fundamental part of businesses' tech stacks, but there are also potential dangers associated with this technology. "I feel like the large language model is going to be as fundamental to the tech stack as a CRM has been for the last ten to 15 years," Paul said. Mike added that businesses should look for models that allow them to customize the model with their own data and integrate it into their own applications. "You want to tune these models on that data, whether it's for internal or external use cases, and you want to be highly confident in the privacy and security of that data and how these models work within your organization," he said.

AI technology is rapidly advancing and is capable of performing complex tasks autonomously with minimal human intervention. Paul discussed this technology and the need for safety and alignment when building these applications. "We're not going back, we're not going to just stop trying to build these action transformers," he said. "But I really hope that the people that are building these things understand the potential ramifications of what they're building and do everything in their power internally and with their peers who are working on similar technology, to do everything possible, to do it in a responsible way, and to do it with safety, first and foremost."

Mike then discussed the impact of AI on labor, noting that AI tech has accelerated the ability of AI to perform knowledge work, including strategic and creative work. AI is advancing quickly and is likely to significantly reduce the time it takes to complete knowledge tasks such as writing, design, coding, and planning. Paul noted that AI-driven disruption of knowledge work is a very real possibility and that organizations and leaders should plan for significant job loss.
He also pointed to a survey of almost 800 people, which showed that lack of education and training was the top obstacle to adoption. "It's coming; it's going to intelligently automate large portions of your work," Roetzer said. "Based on my own experiences and research, as well as the context of dozens of conversations, it is reasonable to assume the time to complete most knowledge tasks such as writing, design, coding, planning, et cetera, can be reduced on average 20% to 30% at minimum with current generative AI technology. And the tech is getting faster and smarter at a compounding rate, so these percentages are only going to rise."

AI technology also has the potential to create a wide range of new jobs and career paths. Paul and Mike discussed the potential impacts of AI on the job market and the economy. Mike noted that "this is not just wild speculation," and Paul agreed, saying, "I do believe it's going to create lots of new jobs and career paths we can't imagine." He went on to explain that "the flaws and limitations of generative AI are greater than are being discussed in the media and will prevent mass disruption in the near term." Paul also highlighted the importance of humans in the AI process, noting that "the dependence of the machine on the human to make sure it's accurate and safe and aligned with human values" is essential. He suggested creating "generative AI policies that explicitly say how you're using language tools, image generation tools, video generation tools, etc."

This episode is brought to you by BrandOps, built to optimize your marketing strategy by delivering the most complete view of marketing performance and allowing you to compare results to competitors and benchmarks.
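The 20% to 30% estimate quoted above is easy to make concrete with back-of-the-envelope math. The 25 hours per week of knowledge work below is an assumed figure for illustration only, not from the episode.

```python
# Back-of-the-envelope math for the 20-30% time-reduction estimate.
# knowledge_hours_per_week is an illustrative assumption.

knowledge_hours_per_week = 25     # assumed hours spent on knowledge tasks
low, high = 0.20, 0.30            # reduction range cited in the episode

saved_low = knowledge_hours_per_week * low
saved_high = knowledge_hours_per_week * high
print(f"Hours saved per week: {saved_low:.0f} to {saved_high:.1f}")
```

Under that assumption, even the low end of the range frees up roughly five hours a week per worker, which is where the "plan for significant disruption" argument comes from.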

18 Apr 2023 · 1h 11min
