Algorithmic Fairness
Data Skeptic · 14 Jan 2020

This episode includes an interview with Aaron Roth, author of The Ethical Algorithm.

Episodes (589)

Doctor AI

When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Networks, shares his thoughts. Edward presents his team's efforts in developing a temporal model that can learn from human doctors through their collective knowledge, i.e., the large amount of Electronic Health Record (EHR) data.
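
As a rough illustration (a minimal sketch, not the paper's exact architecture), a temporal model of this kind can be framed as a recurrent network over embedded visit records; all layer sizes below are hypothetical:

import torch
import torch.nn as nn

class VisitPredictor(nn.Module):
    # Toy RNN over a patient's sequence of visits, each encoded as a
    # multi-hot vector of medical codes (sizes are illustrative).
    def __init__(self, n_codes=1000, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Linear(n_codes, emb_dim)           # visit -> dense vector
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_codes)              # logits for next visit's codes

    def forward(self, visits):                             # visits: (batch, time, n_codes)
        states, _ = self.gru(self.embed(visits))
        return self.out(states)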

23 Jun 2017 · 41 min

[MINI] Activation Functions

In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation, which can only scale the data. However, other transformations, like a step function, allow non-linear properties to be introduced. Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" the area of interest in the data: extreme values are placed close together, while values near the function's point of inflection change more quickly with respect to small changes in the input. Similarly, these functions can map any real number to a finite range such as [0, 1], which can have many advantages for downstream calculation. In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.
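
A few of these choices, sketched in NumPy (the functions themselves are standard; the script is just illustrative):

import numpy as np

def step(x):
    # Hard threshold: non-linear, but not differentiable at 0.
    return (x > 0).astype(float)

def sigmoid(x):
    # Maps any real number into (0, 1); steepest near its inflection
    # point at x = 0, nearly flat for extreme inputs.
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(sigmoid(x))   # extremes land close to 0 or 1; values near 0 spread out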

16 Jun 2017 · 14 min

MS Build 2017

This episode recaps the Microsoft Build Conference.  Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence.  The episode includes interviews with Rohan Kumar and David Carmona.

9 Jun 2017 · 27 min

[MINI] Max-pooling

Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
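
A minimal NumPy sketch of 2x2 max-pooling with stride 2 (assumes even input dimensions):

import numpy as np

def max_pool_2x2(feature_map):
    # Partition the map into non-overlapping 2x2 blocks and keep only
    # the maximum of each block, quartering the number of values.
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

fm = np.array([[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 1, 5, 6],
               [2, 2, 7, 8]])
print(max_pool_2x2(fm))   # [[4 2]
                          #  [2 8]]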

2 Jun 2017 · 12 min

Unsupervised Depth Perception

This episode is an interview with Tinghui Zhou.  In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos.  We discuss details of this project and its applications.

26 May 2017 · 23 min

[MINI] Convolutional Neural Networks

CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel.  In image recognition, this kernel is repeated over the entire image.  In this way, CNNs may achieve the property of translational invariance: once trained to recognize a certain thing, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it.  In this episode, we discuss a few high-level details of this important architecture.
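
A minimal sketch of this weight sharing, where one small kernel is slid over every position of an image (stride 1, no padding; written as cross-correlation, the convention in deep learning libraries):

import numpy as np

def apply_kernel(image, kernel):
    # The same kernel weights are reused at every spatial position,
    # so a learned pattern is detected wherever it appears.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge kernel, applied everywhere in the image
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])
print(apply_kernel(np.random.rand(8, 8), edge_kernel).shape)   # (6, 6)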

19 May 2017 · 14 min

Multi-Agent Diverse Generative Adversarial Networks

Despite the success of GANs in image generation, one of their major drawbacks is the problem of 'mode collapse,' where the generator learns to produce samples with extremely low variety. To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity-enforcing term that assesses similarities between the samples produced by different generators. The second comprises modifying the discriminator's objective function, pushing generations corresponding to different generators towards different identifiable modes.
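
As an illustration of the first idea only, here is a hedged sketch of one possible diversity-enforcing term: the mean pairwise cosine similarity between batches produced by different generators, which the generators' loss could penalize. The exact form used in the paper may differ; this is an assumption for illustration:

import torch
import torch.nn.functional as F

def diversity_penalty(samples):
    # samples: list of tensors, one batch per generator, each (batch, dim).
    # Higher similarity across generators -> larger penalty, so adding
    # this term to the generators' loss pushes them toward distinct modes.
    penalty, pairs = 0.0, 0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            a = F.normalize(samples[i], dim=1)
            b = F.normalize(samples[j], dim=1)
            penalty = penalty + (a * b).sum(dim=1).mean()
            pairs += 1
    return penalty / max(pairs, 1)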

12 May 2017 · 29 min

[MINI] Generative Adversarial Networks

GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system: it attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a minimax game, each trying to harm the performance of the other. In addition to being a useful way of training networks in the absence of a large body of labeled data, this setup has other benefits. The discriminator may end up learning more about edge cases than it otherwise would given only typical examples. Also, the generator's false images can be novel and interesting on their own. The concept was first introduced in the paper Generative Adversarial Networks.
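
A minimal sketch of this adversarial loop in PyTorch (all shapes, sizes, and data are hypothetical):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                               # real: (batch, 2) data samples
    batch = real.size(0)
    fake = G(torch.randn(batch, 16))
    # Discriminator: label real data 1, generated data 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()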

5 May 2017 · 9 min
