The Ghost in the MP3
Data Skeptic · 1 May 2015

Have you ever wondered what is lost when you compress a song into an MP3? This week's guest Ryan Maguire did more than wonder. He worked on software to isolate the sounds that are lost when you convert a lossless digital audio recording into a compressed MP3 file.

To complete his project, Ryan worked primarily in Python, using the pyo library as well as the Bregman Toolkit.
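The core idea of the project can be illustrated as a difference signal: decode the lossy version of a track and subtract it from the original, leaving only the material the codec discarded. The sketch below is a minimal stand-in using NumPy only; the `crude_lossy` function, which simply keeps the strongest spectral bins per frame, is a hypothetical toy substitute for a real MP3 encoder's psychoacoustic model, not the method Ryan's software actually uses:

```python
import numpy as np

def crude_lossy(signal, keep_fraction=0.2, frame=512):
    """Toy stand-in for lossy coding: per frame, keep only the
    strongest FFT bins and zero the rest (NOT a real MP3 model)."""
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        chunk = signal[start:start + frame]
        spec = np.fft.rfft(chunk)
        k = max(1, int(keep_fraction * len(spec)))
        weakest = np.argsort(np.abs(spec))[:-k]  # all but the top-k bins
        spec[weakest] = 0
        out[start:start + frame] = np.fft.irfft(spec, n=frame)
    return out

# Synthesize a test signal: a loud 440 Hz tone plus a faint 7 kHz overtone
sr = 16000
t = np.arange(sr) / sr
original = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 7000 * t)

compressed = crude_lossy(original)
ghost = original - compressed  # the "lost" material
```

With a real codec you would instead encode the original to MP3, decode it back, time-align the two signals (MP3 encoders introduce a delay), and then subtract.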

Ryan mentioned that humans have a hearing range of roughly 20 Hz to 20,000 Hz; if you'd like to hear those tones, check the previous link.
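If you'd rather generate those tones yourself, a logarithmic sine sweep covering the nominal hearing range is a few lines of NumPy. This is a minimal sketch (sample rate and duration are arbitrary choices, not anything specified in the episode):

```python
import numpy as np

sr = 44100        # CD-quality sample rate
duration = 10.0   # seconds

t = np.linspace(0, duration, int(sr * duration), endpoint=False)

# Logarithmic sweep from 20 Hz to 20,000 Hz: the phase integral of an
# exponentially rising instantaneous frequency f(t) = f0 * (f1/f0)**(t/T)
f0, f1 = 20.0, 20000.0
ratio = np.log(f1 / f0)
phase = 2 * np.pi * f0 * duration / ratio * (np.exp(t / duration * ratio) - 1)
sweep = 0.5 * np.sin(phase)  # half-amplitude to leave headroom
```

Written to a WAV file and played back, the sweep starts below what most speakers reproduce and ends above what most adults can hear.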

If you'd like to know more about our guest Ryan Maguire, you can find his website at the previous link. To follow The Ghost in the MP3 project, please check out its Facebook page or the site theghostinthemp3.com.

A PDF of Ryan's publication-quality write-up can be found at this link: The Ghost in the MP3. It is definitely worth the read if you'd like to know more of the technical details.


Episodes (601)

ML Ops

Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

27 Nov 2019 · 36 min

Annotator Bias

Modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom estimates used to be that around 100k documents were required f...

23 Nov 2019 · 25 min

NLP for Developers

While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team to talk about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NL...

20 Nov 2019 · 29 min

Indigenous American Language Research

Manuel Mager joins us to discuss natural language processing for low- and under-resourced languages. We discuss current work in this area and the Naki Project, which aggregates research on NLP for nati...

13 Nov 2019 · 22 min

Talking to GPT-2

GPT-2 is another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering re...

31 Oct 2019 · 29 min

Reproducing Deep Learning Models

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model. His results exposed some issues with the model. Kyle and Rajiv discuss the original paper and Rajiv's analysis.

23 Oct 2019 · 22 min

What BERT is Not

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically her exploration of the ways in which the popular natural language processing approach BERT has limitations.

14 Oct 2019 · 27 min

SpanBERT

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". https://arxiv.org/abs/1907.10529

8 Oct 2019 · 24 min
