DataRec: A Library for Reproducibility in Recommender Systems

In this episode of Data Skeptic's Recommender Systems series, host Kyle Polich explores DataRec, a new Python library designed to bring reproducibility and standardization to recommender systems research. Guest Alberto Carlo Maria Mancino, a postdoc researcher from Politecnico di Bari, Italy, discusses the challenges of dataset management in recommendation research—from version control issues to preprocessing inconsistencies—and how DataRec provides automated downloads, checksum verification, and standardized filtering strategies for popular datasets like MovieLens, Last.fm, and Amazon reviews.

The conversation covers Alberto's research journey through knowledge graphs, graph-based recommenders, privacy considerations, and recommendation novelty. He explains why small modifications in datasets can significantly impact research outcomes, the importance of offline evaluation, and DataRec's vision as a lightweight library that integrates with existing frameworks rather than replacing them. Whether you're benchmarking new algorithms or exploring recommendation techniques, this episode offers practical insights into one of the most critical yet overlooked aspects of reproducible ML research.
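The checksum verification described above is a general technique for catching silently corrupted or mismatched dataset versions. Below is a minimal sketch of that idea in plain Python using only the standard library; the function names and the pinned-digest workflow are illustrative assumptions, not DataRec's actual API.

```python
# Illustrative sketch of dataset checksum verification (NOT DataRec's API):
# a downloaded file is hashed and compared against a pinned digest so that
# two researchers can confirm they are working with byte-identical data.
import hashlib


def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_dataset(path, expected_digest):
    """Return True if the file on disk matches the pinned checksum."""
    return sha256_of(path) == expected_digest
```

Pinning a digest alongside the dataset name and version is what lets a library guarantee that preprocessing starts from the same raw bytes on every machine.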

Episodes (589)

Jumpstart Your ML Project

Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.

15 Dec 2019 · 20 min

Serverless NLP Model Training

Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.

10 Dec 2019 · 29 min

Team Data Science Process

Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process, a popular six-phase workflow for doing data science.

3 Dec 2019 · 41 min

Ancient Text Restoration

Thea Sommerschield joins us this week to discuss the development of Pythia, a machine learning model trained to assist in the reconstruction of ancient language texts.

1 Dec 2019 · 41 min

ML Ops

Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

27 Nov 2019 · 36 min

Annotator Bias

Modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom once held that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora. Thanks to these advancements, an NLP researcher can get value out of fewer examples, using transfer learning as a head start and focusing on the nuances of the language specifically relevant to the task at hand. Thus, small specialized corpora are both useful and practical to create. In this episode, Kyle speaks with Mor Geva, lead author of the recent paper "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets", which explores some unintended consequences of the typical procedure followed for generating corpora. Source code for the paper is available here: https://github.com/mega002/annotator_bias

23 Nov 2019 · 25 min

NLP for Developers

While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team to discuss how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can focus more of their time on bigger-picture problems.

20 Nov 2019 · 29 min

Indigenous American Language Research

Manuel Mager joins us to discuss natural language processing for low- and under-resourced languages. We discuss current work in this area and the Naki Project, which aggregates research on NLP for native and indigenous languages of the Americas.

13 Nov 2019 · 22 min
