[MINI] Activation Functions
In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation, which can only scale the data. Other transformations, like a step function, allow non-linear properties to be introduced. Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" on the area of interest in the data: extreme values are mapped close together, while values near the function's point of inflection change quickly with respect to small changes in the input. These functions can also take any real number and map it into a finite range such as [0, 1], which has many advantages for downstream calculation. In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.
16 June 2017, 14 min
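To make the "focusing" behaviour described above concrete, here is a minimal sketch (not from the episode) of the logistic sigmoid in Python with NumPy; the sample inputs are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: maps any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Extreme inputs saturate near 0 or 1, while inputs near the inflection point
# (x = 0) change quickly with small changes in the input.
print(sigmoid(np.array([-10.0, -1.0, 0.0, 1.0, 10.0])))
```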

MS Build 2017
This episode recaps the Microsoft Build Conference. Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence. The episode includes interviews with Rohan Kumar and David Carmona.
9 June 2017, 27 min
[MINI] Max-pooling
Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also help prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
2 June 2017, 12 min
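As a rough illustration of the reduction described above (not the episode's code), here is a 2x2 max-pooling sketch over a single-channel feature map using NumPy; it assumes the input dimensions divide evenly by the pool size.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Reduce each non-overlapping 2x2 block to its maximum value."""
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

x = np.arange(16).reshape(4, 4)
print(max_pool_2x2(x))   # a 4x4 input reduced to a 2x2 output
```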

Unsupervised Depth Perception
This episode is an interview with Tinghui Zhou. In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos. We discuss details of this project and its applications.
26 May 2017, 23 min
[MINI] Convolutional Neural Networks
CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel. In image recognition, this kernel is repeated over the entire image. In this way, CNNs may achieve the property of translational invariance: once the network is trained to recognize something, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it. In this episode, we discuss a few high-level details of this important architecture.
19 May 2017, 14 min
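For intuition about "repeating the kernel over the entire image", here is an illustrative sketch (not from the episode) of a single kernel slid across an image with stride 1 and no padding; the toy kernel and image are assumptions for the example.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid cross-correlation: apply one shared kernel at every position."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # The same weights are reused at every position, which is what
            # lets the detector respond wherever the pattern appears.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0]])        # a toy horizontal-edge detector
print(convolve2d(image, edge_kernel).shape)  # (8, 7)
```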

Multi-Agent Diverse Generative Adversarial Networks
Despite the success of GANs in imaging, one of their major drawbacks is the problem of 'mode collapse,' where the generator learns to produce samples with extremely low variety. To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity-enforcing term that assesses similarities between the samples generated by different generators. The second modifies the discriminator's objective function, pushing generations corresponding to different generators towards different identifiable modes.
12 May 2017, 29 min
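As a loose illustration only, and not the formulation from the paper discussed above, one way a diversity-enforcing term could be built is to penalize similarity between batches produced by different generators; everything in this sketch (the mean-pooling of batches, the cosine-similarity penalty) is an assumption for the example.

```python
import torch
import torch.nn.functional as F

def diversity_penalty(samples_per_generator):
    """samples_per_generator: list of tensors, one batch per generator."""
    means = torch.stack([s.mean(dim=0) for s in samples_per_generator])
    penalty = torch.zeros(())
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            # Penalize generators whose average outputs point in similar directions.
            penalty = penalty + F.cosine_similarity(means[i], means[j], dim=0)
    return penalty

gens = [torch.randn(32, 10) + k for k in range(3)]  # toy batches from 3 "generators"
print(diversity_penalty(gens))
```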
[MINI] Generative Adversarial Networks
GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system: it attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a minimax game, each trying to harm the performance of the other. In addition to being a useful way of training networks in the absence of a large body of labeled data, there are additional benefits. The discriminator may end up learning more about edge cases than it otherwise would from typical examples alone. Also, the generator's false images can be novel and interesting in their own right. The concept was first introduced in the paper "Generative Adversarial Networks".
5 May 2017, 9 min
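To show the alternating minimax training described above in code, here is a minimal sketch (not from the episode) of a GAN on toy 1-D Gaussian data using PyTorch; the network sizes, learning rates, and data distribution are illustrative assumptions.

```python
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" samples: N(2, 0.5)
noise = lambda n: torch.randn(n, 8)                    # generator input noise

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator update: label real samples 1 and generated samples 0.
    x_real, x_fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    x_fake = G(noise(64))
    loss_g = bce(D(x_fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```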

Opinion Polls for Presidential Elections
Recently, we've seen opinion polls come under some skepticism. But is that skepticism truly justified? The recent Brexit referendum and the 2016 US Presidential Election are examples where some claim the polls "got it wrong". This episode explores this idea.
28 April 2017, 52 min
