Streetlight Outage and Crime Rate Analysis with Zach Seeskin
Data Skeptic · 18 July 2014


This episode features a discussion with statistics PhD student Zach Seeskin about a project he was involved in as part of the Eric and Wendy Schmidt Data Science for Social Good Summer Fellowship. The project involved exploring the relationship (if any) between streetlight outages and crime in the City of Chicago. We discuss how the data was accessed via the City of Chicago data portal, how the analysis was done, and what correlations were discovered in the data. Won't you listen and hear what was found?


Episodes (601)

Interpretability Practitioners


Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

26 June 2020 · 32 min

Facial Recognition Auditing


Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

19 June 2020 · 47 min

Robust Fit to Nature


Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.

12 June 2020 · 38 min

Black Boxes Are Not Required


Deep neural networks are undeniably effective. They rely on such a high number of parameters that they are appropriately described as "black boxes". While black boxes lack desirable properties like i...

5 June 2020 · 32 min

Robustness to Unforeseen Adversarial Attacks


Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

30 May 2020 · 21 min

Estimating the Size of Language Acquisition


Frank Mollica joins us to discuss the paper Humans Store About 1.5 Megabytes of Information During Language Acquisition.

22 May 2020 · 25 min

Interpretable AI in Healthcare


Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

15 May 2020 · 35 min

Understanding Neural Networks


What does it mean to understand a neural network? That's the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.

8 May 2020 · 34 min
