The Mystery Behind Large Graphs

Our guest in this episode is David Tench, a Grace Hopper postdoctoral fellow at Lawrence Berkeley National Laboratory who specializes in scalable graph algorithms and compression techniques for tackling massive datasets.

In this episode, we learn how his techniques enable real-time analysis of massive graphs, in applications such as particle tracking in physics experiments and social network analysis, by reducing storage requirements while preserving critical structural properties.

David also challenges the common belief that giant graphs are sparse by pointing to a potential bias: perhaps we only see sparse graph datasets because large dense graphs are so hard to analyze. The truth is out there…

David encourages you to reach out to him if you have a large-scale graph application that you can't handle with your current methods and hardware. He promises to "look for the hammer that might help you with your nail".

Episodes (587)

Program Aided Language Models

We are joined by Aman Madaan and Shuyan Zhou, both PhD students at the Language Technologies Institute at Carnegie Mellon University. They join us to discuss their latest published paper, PAL: Program-aided Language Models. Aman and Shuyan started by sharing how applications of LLMs have evolved, contrasting the performance of LLMs on arithmetic tasks with their performance on coding tasks. Aman introduced the PAL model, explained how it helps LLMs improve at arithmetic tasks, and shared examples of the tasks PAL was tested on. Shuyan discussed how PAL's performance was evaluated on the BIG-Bench Hard tasks. They discussed the kinds of mistakes LLMs tend to make and how PAL circumvents these limitations, as well as how these developments in LLMs can improve children's learning. Rounding up, Aman discussed CoCoGen, a project that converts NLP tasks into graphs, and Shuyan and Aman shared their next research steps. Follow Shuyan on Twitter @shuyanzhxyc and Aman @aman_madaan.
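As a rough, hypothetical sketch of the program-aided idea discussed in the episode (not the authors' implementation), the snippet below prompts a model to write a short Python program for a word problem and lets the interpreter do the arithmetic; fake_llm and its canned output are stand-ins for a real model call.

```python
# Minimal PAL-style sketch: the model produces code as its reasoning,
# and executing that code yields the final answer, so the arithmetic
# is done by the Python interpreter rather than by the LLM.

def fake_llm(prompt: str) -> str:
    """Hypothetical stub for an LLM call; returns a program a model might generate."""
    return (
        "trays = 3\n"
        "muffins_per_tray = 12\n"
        "eaten = 7\n"
        "answer = trays * muffins_per_tray - eaten\n"
    )

def pal_solve(question: str):
    prompt = f"Write Python code that computes the answer.\nQ: {question}\nA:"
    program = fake_llm(prompt)   # model writes its reasoning as code
    namespace = {}
    exec(program, namespace)     # interpreter performs the arithmetic
    return namespace["answer"]

if __name__ == "__main__":
    q = "A baker makes 3 trays of 12 muffins and 7 are eaten. How many are left?"
    print(pal_solve(q))          # -> 29
```

In the paper's setting the program is generated few-shot from exemplar prompts; the canned program here just keeps the sketch self-contained and runnable.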

13 Nov 2023 · 32 min

Which Programming Language is ChatGPT Best At

In this episode, we have Alessio Buscemi, a software engineer at Lifeware SA and formerly a postdoctoral researcher at the University of Luxembourg. He joins us to discuss his paper, A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages. Alessio shared his thoughts on whether ChatGPT is a threat to software engineers and discussed how LLMs can help software engineers become more efficient.

6 Nov 2023 · 40 min

GraphText

On the show today, we are joined by Jianan Zhao, a Computer Science student at Mila and the University of Montreal whose research focuses on graph databases and natural language processing. He joins us to discuss how to use graphs with LLMs efficiently.

31 Oct 2023 · 30 min

arXiv Publication Patterns

Today, we are joined by Rajiv Movva, a PhD student in Computer Science at Cornell Tech. His research interests lie at the intersection of responsible AI and computational social science. He joins us to discuss the findings of his work analyzing LLM publication patterns on arXiv. He shared the dataset he used for the survey and the criteria for deciding which papers to analyze. Rajiv shared some of the trends he observed from his analysis: for one, there has been an increase in LLM research. He also broke down the proportions of papers published by universities, organizations, and industry leaders in LLMs such as OpenAI and Google, and noted that the majority of the papers center on the social impact of LLMs. He also discussed other exciting applications of LLMs, such as in education.

23 Oct 2023 · 28 min

Do LLMs Make Ethical Choices

We are excited to be joined by Josh Albrecht, the CTO of Imbue. Imbue is a research company whose mission is to create AI agents that are more robust, safer, and easier to use. He joins us to share the findings of his work, Despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety.

16 Oct 2023 · 29 min

Emergent Deception in LLMs

On today's show, we are joined by Thilo Hagendorff, leader of the Ethics of Generative AI research group at the University of Stuttgart. He joins us to discuss his research, Deception Abilities Emerged in Large Language Models. Thilo discussed how machine psychology is useful in machine learning tasks. He shared examples of cognitive tasks that LLMs have improved at solving, and gave his thoughts on whether there's a ceiling to the tasks ML can solve.

9 Oct 2023 · 27 min

Agents with Theory of Mind Play Hanabi

Nieves Montes, a PhD student at the Artificial Intelligence Research Institute in Barcelona, Spain, joins us. Her PhD research revolves around value-based reasoning in relation to norms. She shares her latest study, Combining theory of mind and abductive reasoning in agent-oriented programming.

2 Oct 2023 · 38 min

LLMs for Evil

We are joined by Maximilian Mozes, a PhD student at University College London. His PhD research focuses on Natural Language Processing (NLP), particularly the intersection of adversarial machine learning and NLP. He joins us to discuss his latest research, Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities.

25 Sep 2023 · 26 min
