DataRec: A Library for Reproducibility in Recommender Systems

In this episode of Data Skeptic's Recommender Systems series, host Kyle Polich explores DataRec, a new Python library designed to bring reproducibility and standardization to recommender systems research. Guest Alberto Carlo Maria Mancino, a postdoc researcher from Politecnico di Bari, Italy, discusses the challenges of dataset management in recommendation research—from version control issues to preprocessing inconsistencies—and how DataRec provides automated downloads, checksum verification, and standardized filtering strategies for popular datasets like MovieLens, Last.fm, and Amazon reviews.

The conversation covers Alberto's research journey through knowledge graphs, graph-based recommenders, privacy considerations, and recommendation novelty. He explains why small modifications in datasets can significantly impact research outcomes, the importance of offline evaluation, and DataRec's vision as a lightweight library that integrates with existing frameworks rather than replacing them. Whether you're benchmarking new algorithms or exploring recommendation techniques, this episode offers practical insights into one of the most critical yet overlooked aspects of reproducible ML research.
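One of the reproducibility features mentioned above, checksum verification of downloaded datasets, can be sketched in a few lines of Python. This is a generic illustration only, not DataRec's actual API; the function name and parameters are hypothetical:

```python
import hashlib
from pathlib import Path

def verify_checksum(path: Path, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the file's SHA-256 digest matches the expected hex string.

    Reads the file in chunks so large dataset archives do not need to
    fit in memory. A mismatch signals a corrupted or altered download.
    """
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

Pinning a known-good digest for each dataset version is what lets a library detect silently updated or truncated downloads before any preprocessing runs.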

Episodes (590)

Indigenous American Language Research

Manuel Mager joins us to discuss natural language processing for low and under-resourced languages.  We discuss current work in this area and the Naki Project which aggregates research on NLP for native and indigenous languages of the American continent.

13 Nov 2019 · 22min

Talking to GPT-2

GPT-2 is yet another in a succession of models like ELMo and BERT that adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches show tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence.

31 Oct 2019 · 29min

Reproducing Deep Learning Models

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

23 Oct 2019 · 22min

What BERT is Not

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.

14 Oct 2019 · 27min

SpanBERT

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". https://arxiv.org/abs/1907.10529

8 Oct 2019 · 24min

BERT is Shallow

Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning.

23 Sep 2019 · 20min

BERT is Magic

Kyle pontificates on how impressed he is with BERT.

16 Sep 2019 · 18min

Applied Data Science in Industry

Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings.

6 Sep 2019 · 21min
