GitHub Network Analysis

Data Skeptic · 22 Jun

In this episode, we'll discuss how to use GitHub data as a network to extract insights about teamwork.

Our guest, Gabriel Ramirez, manager of the notifications team at GitHub, will show how to apply network analysis to better understand and improve collaboration within his engineering team by analyzing GitHub metadata - such as pull requests, issues, and discussions - as a bipartite graph of people and projects.

Some insights we'll discuss are how network centrality measures (like eigenvector and betweenness centrality) reveal organizational dynamics, how vacation patterns influence team connectivity, and how decentralizing communication hubs can foster healthier collaboration.

Gabriel's open-source project, GH Graph Explorer, enables other managers and engineers to extract, visualize, and analyze their own GitHub activity using tools like Python, Neo4j, Gephi, and LLMs for insight generation. But always remember: don't take the results at face value. Instead, use them to guide your qualitative investigation.
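
For readers who want to experiment with the graph idea on their own data, here is a minimal sketch (not GH Graph Explorer's actual code): it builds a bipartite people/project graph with networkx from a handful of hypothetical activity records and computes the eigenvector and betweenness centrality measures mentioned above.

```python
# Minimal sketch of the bipartite-graph idea (not GH Graph Explorer's code).
# The activity records below are hypothetical stand-ins for pull requests,
# issues, and discussions pulled from the GitHub API.
import networkx as nx
from networkx.algorithms import bipartite

events = [
    ("alice", "notifications"), ("alice", "web-ui"),
    ("bob", "notifications"), ("bob", "api"),
    ("carol", "web-ui"), ("carol", "api"),
    ("dan", "api"),
]

people = {person for person, _ in events}
projects = {project for _, project in events}

G = nx.Graph()
G.add_nodes_from(people, kind="person")
G.add_nodes_from(projects, kind="project")
G.add_edges_from(events)

# Project onto people: two people are connected if they touched the same
# project. Centrality on this projection hints at who bridges the team.
people_graph = bipartite.projected_graph(G, people)
eigen = nx.eigenvector_centrality(people_graph)
between = nx.betweenness_centrality(people_graph)

for person in sorted(people):
    print(f"{person}: eigenvector={eigen[person]:.2f}, betweenness={between[person]:.2f}")
```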

Episodes (587)

The Defeat of the Winograd Schema Challenge

Our guest today is Vid Kocijan, a Machine Learning Engineer at Kumo AI. Vid holds a Ph.D. in Computer Science from the University of Oxford, where his research focused on common-sense reasoning, pre-training in LLMs, pre-training for knowledge base completion, and how these pre-training approaches impact societal bias. He joins us to discuss how he built a BERT model that solved the Winograd Schema Challenge.

11 Sep 2023 · 31 min

LLMs in Social Science

Today, we are joined by Petter Törnberg, an Assistant Professor in Computational Social Science at the University of Amsterdam and a Senior Researcher at the University of Neuchatel. His research centers on computational methods and their applications in the social sciences. He joins us to discuss findings from his research papers, "ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning" and "How to use LLMs for Text Analysis."

4 Sep 2023 · 34 min

LLMs in Music Composition

In this episode, we are joined by Carlos Hernández Oliván, a Ph.D. student at the University of Zaragoza. Carlos's research focuses on building new models for symbolic music generation. He shared his thoughts on whether these models are genuinely creative, revealed situations where AI-generated music can pass the Turing test, and noted some essential considerations when constructing models for music composition.

28 Aug 2023 · 33 min

Cuttlefish Model Tuning

Hongyi Wang, a Senior Researcher in the Machine Learning Department at Carnegie Mellon University, joins us. His research sits at the intersection of systems and machine learning. He discussed his research paper, Cuttlefish: Low-Rank Model Training without All the Tuning, on today's show. Hongyi started by sharing his thoughts on whether developers need to learn how to fine-tune models. He then spoke about the need to optimize the training of ML models, especially as these models grow bigger. He discussed how data centers have the hardware to train these large models, but the broader community does not. He then spoke about the Low-Rank Adaptation (LoRA) technique and where it is used. Hongyi discussed the Cuttlefish model and how it edges out LoRA. He shared the use cases of Cuttlefish and who should use it. Rounding up, he gave his advice on how people can get into the machine learning field and shared his future research ideas.
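
As a rough illustration of the low-rank idea behind LoRA that the conversation touches on (this is not Cuttlefish's own training recipe), the sketch below keeps a pretrained weight matrix frozen and learns a small pair of factors instead of a full update; the dimensions are hypothetical.

```python
# Rough numpy sketch of the low-rank idea behind LoRA (not Cuttlefish itself):
# keep the pretrained weight W frozen and learn a correction B @ A of rank r,
# which needs far fewer parameters than updating all of W.
import numpy as np

d, r = 1024, 8                      # hypothetical layer width and rank
W = np.random.randn(d, d)           # frozen pretrained weight
B = np.zeros((d, r))                # trainable factors; B starts at zero so the
A = np.random.randn(r, d) * 0.01    # low-rank correction is initially a no-op

print(f"full update: {d * d:,} params, rank-{r} update: {2 * d * r:,} params")

x = np.random.randn(d)
y = W @ x + B @ (A @ x)             # forward pass with the low-rank correction
```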

21 Aug 2023 · 27 min

Which Professions Are Threatened by LLMs

On today's episode, we have Daniel Rock, an Assistant Professor of Operations, Information and Decisions at the Wharton School of the University of Pennsylvania. Daniel's research focuses on the economics of AI and ML, specifically how digital technologies are changing the economy. Daniel discussed how AI has disrupted the job market in recent years and explained that it has created more winners than losers. He spoke about the empirical study he and his coauthors conducted to quantify the threat LLMs pose to professionals, and how they used the O*NET dataset and the BLS occupational employment survey to measure the impact of LLMs on different professions. Using the radiology profession as an example, he listed tasks that LLMs could assume. Daniel broadly highlighted the professions most and least exposed to LLM proliferation. He also spoke about the risks of LLMs and his thoughts on implementing policies to regulate them.

15 Aug 2023 · 38 min

Why Prompting is Hard

We are excited to be joined by J.D. Zamfirescu-Pereira, a Ph.D. student at UC Berkeley. He focuses on the intersection of human-computer interaction (HCI) and artificial intelligence (AI). He joins us to share his work in his paper, "Why Johnny Can't Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts." The discussion also explores lessons learned and achievements related to BotDesigner, a tool for creating chatbots.

8 Aug 2023 · 48 min

Automated Peer Review

In this episode, we are joined by Ryan Liu, a Computer Science graduate of Carnegie Mellon University. Ryan will begin his Ph.D. program at Princeton University this fall. His Ph.D. will focus on the intersection of large language models and how humans think. Ryan joins us to discuss his research titled "ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing."

31 Jul 2023 · 36 min

Prompt Refusal

The creators of large language models impose restrictions on some of the types of requests one might make of them. LLMs commonly refuse to give advice on committing crimes, to produce adult content, or to respond with details about a variety of sensitive subjects. As with any content filtering system, there are false positives and false negatives. Today's interview with Max Reuter and William Schulze discusses their paper "I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models". In this work, they explore what types of prompts get refused and build a machine learning classifier adept at predicting whether a particular prompt will be refused.
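
As a toy illustration of that classification task (not the features or model from the paper), the sketch below trains a TF-IDF plus logistic-regression pipeline on a few hypothetical labeled prompts.

```python
# Toy illustration of predicting prompt refusal (not the paper's classifier).
# The prompts and labels are hypothetical; a real study would label many real
# prompts by whether the target LLM actually refused them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

prompts = [
    "How do I pick the lock on my neighbor's door?",
    "Summarize the plot of Moby-Dick.",
    "Write step-by-step instructions for making a weapon.",
    "Translate 'good morning' into French.",
]
refused = [1, 0, 1, 0]  # 1 = the model refused, 0 = it answered

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(prompts, refused)

# Predictions from such a tiny training set are illustrative only.
print(clf.predict(["Tell me how to break into a house.",
                   "Explain how photosynthesis works."]))
```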

24 Jul 2023 · 44 min
