[MINI] One Shot Learning
Data Skeptic · 22 Sep 2017

One Shot Learning is the class of machine learning procedures that focuses on learning from a small number of examples. This contrasts with "traditional" machine learning, which typically requires a very large training set to build a reasonable model.

In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large training data? We discuss some of the reasons why, along with approaches to One Shot Learning.
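One common approach to one-shot learning is to classify a new example by comparing it against a single stored "support" example per class in some embedding space, rather than training a classifier on many labeled instances. The sketch below illustrates this idea in minimal form; the embeddings and class labels are hypothetical placeholders, not from the episode.

```python
# Minimal sketch of one-shot classification by nearest support example,
# assuming each symbol class already has a numeric feature embedding.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_shot_classify(query, support):
    """Assign query to the class whose single support example is most similar."""
    return max(support, key=lambda label: cosine_similarity(query, support[label]))

# One example per symbol class (hypothetical embeddings).
support = {
    "symbol_a": [1.0, 0.1, 0.0],
    "symbol_b": [0.0, 1.0, 0.2],
}

print(one_shot_classify([0.9, 0.2, 0.1], support))
```

With only one labeled example per class, the quality of the embedding does all the work; deep one-shot approaches (e.g. siamese networks) learn that embedding so that same-class examples land close together.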

This episode is taken from an open RSS feed and is not published by Podme. It may contain advertising.

Episodes (601)

ML Ops

Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

27 Nov 2019 · 36min

Annotator Bias

The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on.  Folk wisdom estimates used to be around 100k documents were required f...

23 Nov 2019 · 25min

NLP for Developers

While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NL...

20 Nov 2019 · 29min

Indigenous American Language Research

Manuel Mager joins us to discuss natural language processing for low and under-resourced languages.  We discuss current work in this area and the Naki Project which aggregates research on NLP for nati...

13 Nov 2019 · 22min

Talking to GPT-2

GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering re...

31 Oct 2019 · 29min

Reproducing Deep Learning Models

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

23 Oct 2019 · 22min

What BERT is Not

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.

14 Oct 2019 · 27min

SpanBERT

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". https://arxiv.org/abs/1907.10529

8 Oct 2019 · 24min
