Building the HowTo100M Video Corpus
Data Skeptic · 19 August 2019

Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if noisy, corpus of videos that are "self-annotating," as hosts explain the actions they are taking on the screen.

This episode is a discussion of the HowTo100M dataset, a project which has assembled a video corpus of 136M video clips with captions covering 23k activities.

Related Links

The paper will be presented at ICCV 2019

@antoine77340

Antoine on GitHub

Antoine's homepage

Episodes (590)

[MINI] Dropout

Deep learning models can be prone to overfitting a given problem, which is especially frustrating given the time and computational resources often required for training to converge. One technique for fighting overfitting is dropout: randomly selecting some neurons in the network and setting their outputs to zero during each iteration of learning. The core idea is that no particular input to a given layer is always available, and therefore no single signal can be relied on too heavily.
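
As a rough sketch of the mechanism described above (not code from the episode), the snippet below applies "inverted" dropout to a layer's activations with NumPy: a random subset of units is zeroed and the survivors are rescaled so the expected activation stays the same. The function name and parameter values are illustrative.

```python
import numpy as np

def dropout(activations, drop_prob=0.5, training=True, rng=None):
    """Inverted dropout: zero a random subset of activations during training,
    then rescale the survivors so the expected activation is unchanged."""
    if not training or drop_prob == 0.0:
        return activations
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob  # True = keep this unit
    return activations * mask / keep_prob

# A layer's activations with roughly half the units dropped at random.
layer_output = np.array([0.2, 1.5, -0.7, 3.1, 0.4, 2.2])
print(dropout(layer_output, drop_prob=0.5, rng=np.random.default_rng(0)))
```

With training=False the function is a no-op, mirroring how dropout is typically disabled at inference time.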

13 January 2017 · 15 min

The Police Data and the Data Driven Justice Initiatives

In this episode I speak with Clarence Wardell and Kelly Jin about their service on the White House's Police Data Initiative and Data Driven Justice Initiative, respectively. The Police Data Initiative was organized to use open data to increase transparency and community trust, and to help police agencies use data for internal accountability. The PDI emerged from recommendations made by the Task Force on 21st Century Policing. The Data Driven Justice Initiative was organized to help city, county, and state governments use data-driven strategies to direct low-level offenders with mental illness to the right services rather than into the criminal justice system.

6 January 2017 · 49 min

The Library Problem

We close out 2016 with a discussion of a basic interview question that might get asked when applying for a data science job. Specifically, how a library might build a model to predict whether a book will be returned late.
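
As one possible sketch of such a model (the features and data below are entirely hypothetical, not from the episode), a candidate might fit a simple logistic regression on borrower history and loan attributes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per checkout: borrower's past late returns,
# loan length in days, and whether the title is a popular new release.
X = np.array([
    [0, 14, 0],
    [3, 21, 1],
    [1,  7, 0],
    [5, 14, 1],
    [0, 28, 0],
    [2, 14, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = returned late

model = LogisticRegression().fit(X, y)
# Estimated probability that a given checkout comes back late.
print(model.predict_proba([[2, 14, 1]])[0, 1])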

30 December 2016 · 35 min

2016 Holiday Special

Today's episode is a reading of Isaac Asimov's Franchise. As mentioned on the show, this is just a work of fiction to be enjoyed and not in any way some obfuscated political statement. Enjoy, and happy holidays!

23 December 2016 · 39 min

[MINI] Entropy

Classically, entropy is a measure of disorder in a system. From a statistical perspective, it is more useful to describe it as a measure of the unpredictability of the system. In this episode we discuss how information reduces the entropy involved in deciding whether or not Yoshi the parrot will like a new chew toy. A few other everyday examples help us examine why entropy is a useful metric for constructing a decision tree.
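
For reference, a minimal sketch of the Shannon entropy calculation the episode leans on; the trial outcomes for Yoshi below are made up purely for illustration.

```python
import math
from collections import Counter

def entropy(outcomes):
    """Shannon entropy (in bits) of a sequence of observed outcomes."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical trials of offering Yoshi a new chew toy.
no_information = ["likes", "ignores"] * 3        # 50/50: maximally unpredictable
with_information = ["likes"] * 5 + ["ignores"]   # extra information reduces surprise
print(entropy(no_information))    # 1.0 bit
print(entropy(with_information))  # ~0.65 bits
```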

16 December 2016 · 16 min

MS Connect Conference

Cloud services are now ubiquitous in data science, and more broadly in technology as well. This week, I speak to Mark Souza, Tobias Ternström, and Corey Sanders about various aspects of data at scale. We discuss the embedding of R into SQL Server, SQL Server on Linux, open source, and a few other cloud topics.

9 December 2016 · 42 min

Causal Impact

Today's episode is all about Causal Impact, a technique for estimating the impact of a particular event on a time series. We talk to William Martin about his research into the impact that releases have on apps, and we also chat with Karen Blakemore about a project she helped us build to explore the impact of a Saturday Night Live appearance on a musician's career. Martin's work culminated in the paper Causal Impact for App Store Analysis. A shorter summary version can be found here. His company, which helps app developers do this sort of analysis, can be found at crestweb.cs.ucl.ac.uk/appredict/.
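
A highly simplified sketch of the underlying idea only: fit a model on the pre-event period, forecast a counterfactual for the post-event period, and compare it with what was observed. This is not the Bayesian structural time-series model used in the actual Causal Impact work, and all series below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily downloads: a control series, and a treated series that
# jumps by ~50 after an event (say, an app release) on day 70.
control = 100 + rng.normal(0, 5, 100)
treated = 1.2 * control + rng.normal(0, 5, 100)
treated[70:] += 50

pre, post = slice(0, 70), slice(70, 100)

# Fit treated ~ control on the pre-event period only.
slope, intercept = np.polyfit(control[pre], treated[pre], 1)

# Counterfactual: what the treated series would have looked like without the event.
counterfactual = slope * control[post] + intercept
effect = treated[post] - counterfactual
print(f"estimated average effect: {effect.mean():.1f}")  # close to the injected +50
```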

2 December 2016 · 34 min

[MINI] The Bootstrap

The bootstrap is a method of resampling a dataset to estimate the accuracy of a statistic computed from it and to produce useful metrics about the result. The bootstrap is a useful statistical technique and is leveraged in bagging (bootstrap aggregation) algorithms such as Random Forest. We discuss this technique as it relates to polling and surveys.
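
As a minimal sketch of the technique in the polling setting (the poll data below are simulated, not from the episode), resampling the sample with replacement gives a distribution for the statistic of interest:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated poll: 1 = supports the candidate, 0 = does not (n = 500, ~52% support).
poll = rng.binomial(1, 0.52, size=500)

# Resample the poll with replacement many times, recording the support rate each time.
boot_means = np.array([
    rng.choice(poll, size=poll.size, replace=True).mean()
    for _ in range(10_000)
])

# A 95% bootstrap confidence interval for the support rate.
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"observed support: {poll.mean():.3f}, 95% CI: ({low:.3f}, {high:.3f})")
```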

25 November 2016 · 10 min
