[MINI] One Shot Learning
Data Skeptic · 22 Sep 2017


One Shot Learning is the class of machine learning procedures that focuses on learning something from a small number of examples. This contrasts with "traditional" machine learning, which typically requires a very large training set to build a reasonable model.

In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely to be the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease, while most machine learning algorithms require large amounts of training data? We discuss some of the reasons why, along with approaches to One Shot Learning.
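One common flavor of one shot learning is nearest-neighbor classification in an embedding space: store a single labeled example per class, then assign a new item to the class whose example it most resembles. The sketch below illustrates that idea only; the character-bigram "embedding" is a toy stand-in (a real system would use features learned by a neural network), and all names here are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(symbol: str) -> Counter:
    # Toy "embedding": character-bigram counts. This stands in for a
    # learned feature extractor purely for illustration.
    return Counter(symbol[i:i + 2] for i in range(len(symbol) - 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def one_shot_classify(query: str, support: dict) -> str:
    # Exactly one labeled example per class ("one shot"):
    # pick the class whose lone example is nearest to the query.
    q = embed(query)
    return max(support, key=lambda label: cosine(q, embed(support[label])))

# One example each of two made-up glyph encodings.
support = {"glyph_A": "xxyyzz", "glyph_B": "aabbcc"}
print(one_shot_classify("xxyyz", support))  # nearest to glyph_A
```

With richer learned embeddings (e.g., from a siamese network), the same nearest-neighbor rule generalizes to genuinely novel symbols.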

This episode was added to Podme via an open RSS feed and is not Podme's own production. The episode may therefore contain advertising.

Episodes (601)

seq2seq

A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder. The encoder/decoder architecture has obvious promise...

1 Mar 2019 · 21 min
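The information flow the blurb describes — the encoder consumes the entire source sequence before the decoder emits anything — can be sketched without any learning at all. The toy below only illustrates that shape (in real models the encoder state is a learned fixed-length vector and both functions are trained networks; the reversal task and all names here are invented for illustration).

```python
# Structural sketch of a seq2seq model: encode everything, then decode.
# No learning happens here; "encode" and "decode" stand in for trained
# networks such as RNNs or Transformers.

def encode(tokens):
    # Fold the whole input sequence into a single summary state.
    state = tuple()
    for tok in tokens:
        state = state + (tok,)   # stand-in for a learned state update
    return state

def decode(state, max_len=10):
    # Emit output tokens one at a time, conditioned only on the state.
    out = []
    for tok in reversed(state):  # toy "translation": reverse the input
        out.append(tok.upper())
        if len(out) >= max_len:
            break
    return out

src = ["bonjour", "le", "monde"]
print(decode(encode(src)))  # ['MONDE', 'LE', 'BONJOUR']
```

The key property on display is the bottleneck: the decoder never sees the source directly, only the encoder's summary — the limitation that attention mechanisms were later introduced to relax.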

Text Mining in R

Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and profession...

22 Feb 2019 · 20 min

Recurrent Relational Networks

One of the most challenging NLP tasks is natural language understanding and reasoning. How can we construct algorithms that are able to achieve human level understanding of text and be able to answer ...

15 Feb 2019 · 19 min

Text World and Word Embedding Lower Bounds

In the first half of this episode, Kyle speaks with Marc-Alexandre Côté and Wendy Tay about Text World.  Text World is an engine that simulates text adventure games.  Developers are encouraged to try ...

8 Feb 2019 · 39 min

word2vec

Word2vec is an unsupervised machine learning model which is able to capture semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like...

1 Feb 2019 · 31 min
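Word2vec trains a neural network to produce dense vectors, but the intuition behind it is the distributional hypothesis: words that appear in similar contexts get similar vectors. The sketch below shows that underlying idea with raw co-occurrence counts instead of a trained model — the tiny corpus and window size are made up for illustration.

```python
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "stocks fell on the news .").split()

# For each word, count the words appearing within a +/-2 token window.
window = 2
contexts = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            contexts[word][corpus[j]] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" occur in near-identical contexts, so their vectors
# align; "stocks" does not, so its similarity to "cat" is much lower.
print(cosine(contexts["cat"], contexts["dog"]))
print(cosine(contexts["cat"], contexts["stocks"]))
```

Word2vec's skip-gram and CBOW objectives learn compressed versions of this same signal, which is what lets the resulting vectors capture semantic similarity.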

Authorship Attribution

In a recent paper, Leveraging Discourse Information Effectively for Authorship Attribution, authors Su Wang, Elisa Ferracane, and Raymond J. Mooney describe a deep learning methodology for predicting whi...

25 Jan 2019 · 50 min

Very Large Corpora and Zipf's Law

The earliest efforts to apply machine learning to natural language tended to convert every token (every word, more or less) into a unique feature. While techniques like stemming may have cut the numbe...

18 Jan 2019 · 24 min
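Zipf's law, which this episode's title refers to, predicts that a word's frequency is roughly proportional to 1/rank, so rank × frequency stays roughly constant across a corpus. The sketch below shows the rank-frequency computation one would run on a large corpus; the toy sentence here is invented and far too small to exhibit the law cleanly.

```python
from collections import Counter

text = ("the quick brown fox jumps over the lazy dog and the fox "
        "runs over the hill while the dog sleeps").lower().split()

counts = Counter(text)
ranked = counts.most_common()  # [(word, freq), ...] sorted by frequency

# Zipf's law predicts freq ~ C / rank, i.e. rank * freq roughly constant.
for rank, (word, freq) in enumerate(ranked[:5], start=1):
    print(rank, word, freq, rank * freq)
```

On a genuinely large corpus this table makes the episode's point concrete: a handful of tokens dominate, while the long tail of rare tokens is what explodes the feature space of one-feature-per-token models.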

Semantic search at Github

GitHub is many things besides source control. It's a social network, even though not everyone realizes it. It's a vast repository of code. It's a ticketing and project management system. And of course...

11 Jan 2019 · 34 min
