Robust Fit to Nature
Data Skeptic · 12 June 2020

Episodes (590)

[MINI] The Vanishing Gradient

This episode discusses the vanishing gradient - a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning virtually impossible without some clever trick or improved methodology to help earlier layers begin to learn.
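
The effect is easy to reproduce. Below is a minimal numpy sketch (ours, not from the episode) of a deep stack of sigmoid layers; the depth, width, and weight scale are arbitrary illustrative choices. Pushing an error signal backwards shows the gradient norm shrinking layer by layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A toy 20-layer network with sigmoid activations (arbitrary sizes).
depth, width = 20, 50
weights = [rng.normal(0, 0.5, (width, width)) for _ in range(depth)]

# Forward pass, keeping each layer's activation for backprop.
a = rng.normal(0, 1, width)
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: push an all-ones error signal back through the layers
# and watch the gradient norm shrink as it approaches the input.
grad = np.ones(width)
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1 - a))   # sigmoid'(z) = a * (1 - a)
    print(f"gradient norm: {np.linalg.norm(grad):.2e}")
```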

30 June 2017 · 15 min

Doctor AI

When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Networks, shares his thoughts. Edward presents his team's efforts in developing a temporal model that can learn from human doctors based on their collective knowledge, i.e. the large amount of Electronic Health Record (EHR) data.
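
Purely as a rough illustration of the general idea (not the paper's implementation), a hypothetical PyTorch sketch might encode each visit as a multi-hot vector over medical codes, run a GRU over the visit sequence, and score the codes of the next visit; all names and dimensions below are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical setup: each visit is a multi-hot vector over n_codes medical
# codes; the model scores which codes will appear in the next visit.
n_codes, emb_dim, hidden = 1000, 128, 256

class NextVisitRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(n_codes, emb_dim)  # multi-hot visit -> dense vector
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_codes)     # logits for next visit's codes

    def forward(self, visits):                    # visits: (batch, time, n_codes)
        h, _ = self.rnn(self.embed(visits))
        return self.out(h)                        # a prediction at every time step

model = NextVisitRNN()
fake_history = torch.rand(4, 10, n_codes).round()  # 4 patients, 10 visits each
logits = model(fake_history)                       # shape: (4, 10, n_codes)
```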

23 June 2017 · 41 min

[MINI] Activation Functions

In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation, which can only scale the data. Other transformations, like a step function, allow non-linear properties to be introduced. Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" on an area of interest in the data: extreme values are squashed close together, while values near the point of inflection change more quickly with respect to small changes in the input. These functions can also take any real number and map all of them to a finite range such as [0, 1], which can have many advantages for downstream calculation. In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.
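
A small numpy sketch (illustrative only) of the sigmoid's squashing behavior: every output lands in (0, 1), extreme inputs crowd together with near-zero slope, and inputs near the inflection point at zero respond most strongly to small changes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.array([-10.0, -2.0, -0.5, 0.0, 0.5, 2.0, 10.0])
ys = sigmoid(xs)
slopes = ys * (1 - ys)  # derivative of the sigmoid at each point

for x, y, s in zip(xs, ys, slopes):
    print(f"x={x:6.1f}  sigmoid={y:.4f}  slope={s:.4f}")
# Extreme inputs map close to 0 or 1 with tiny slopes; the slope peaks
# at the inflection point x = 0, where small input changes matter most.
```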

16 June 2017 · 14 min

MS Build 2017

This episode recaps the Microsoft Build Conference.  Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence.  The episode includes interviews with Rohan Kumar and David Carmona.

9 June 2017 · 27 min

[MINI] Max-pooling

Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
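
For intuition, here is a minimal numpy sketch of non-overlapping 2x2 max-pooling; the helper function and the example matrix are ours, not from the episode.

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max-pooling; halves each spatial dimension."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 8, 1, 5],
              [7, 2, 6, 6],
              [0, 1, 9, 3]])
print(max_pool_2x2(x))
# [[8 5]
#  [7 9]]  -> 16 inputs reduced to 4; only the strongest activation survives
```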

2 June 2017 · 12 min

Unsupervised Depth Perception

This episode is an interview with Tinghui Zhou.  In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos.  We discuss details of this project and its applications.

26 May 2017 · 23 min

[MINI] Convolutional Neural Networks

CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel.  In image recognition, this kernel is repeated over the entire image.  In this way, CNNs may achieve the property of translational invariance - once trained to recognize certain things, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it.  In this episode, we discuss a few high-level details of this important architecture.
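
A minimal numpy sketch of the sliding-kernel idea (illustrative, not production code): one small kernel is applied at every position of the image, so a pattern it detects is found wherever it appears.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide kernel over every valid position of image (cross-correlation,
    which is what deep-learning frameworks call 'convolution')."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The same vertical-edge kernel fires wherever the stripe appears,
# regardless of its position: the basis of translational invariance.
edge = np.array([[1.0, -1.0],
                 [1.0, -1.0]])
image = np.zeros((5, 5))
image[:, 2] = 1.0  # a vertical stripe
print(convolve2d(image, edge))
```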

19 May 2017 · 14 min

Multi-Agent Diverse Generative Adversarial Networks

Despite the success of GANs in imaging, one of their major drawbacks is the problem of 'mode collapse,' where the generator learns to produce samples with extremely low variety. To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity-enforcing term that assesses similarities between the samples produced by different generators. The second comprises modifying the discriminator's objective function, pushing generations corresponding to different generators towards different identifiable modes.
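
Purely as a hypothetical illustration of what a diversity-enforcing term could look like (not the authors' actual formulation), one could penalize cosine similarity between the outputs of different generators; the function name and all shapes below are our assumptions.

```python
import torch
import torch.nn.functional as F

def diversity_penalty(samples):
    """Hypothetical penalty: mean cosine similarity between the (averaged)
    outputs of different generators. samples: (n_generators, batch, features)"""
    summaries = F.normalize(samples.mean(dim=1), dim=-1)  # one vector per generator
    sim = summaries @ summaries.T                         # pairwise cosine similarity
    off_diag = sim - torch.diag(torch.diag(sim))          # ignore self-similarity
    return off_diag.sum() / (sim.numel() - sim.shape[0])

# Added to the generators' loss as lambda_div * diversity_penalty(samples),
# a term like this would push different generators toward dissimilar outputs.
samples = torch.randn(3, 16, 64)  # 3 generators, 16 samples each, 64 features
print(diversity_penalty(samples))
```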

12 May 2017 · 29 min
