[MINI] Feed Forward Neural Networks
Data Skeptic, 24 March 2017

Feed Forward Neural Networks

In a feed forward neural network, the connections between neurons cannot form a cycle. In this episode, we explore how such a network can represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider a simple perceptron: two inputs x1 and x2, weights w1 and w2, a bias term b, and a step activation function that outputs 1 when its argument is positive and 0 otherwise. The output is therefore f(w1*x1 + w2*x2 + b).

Can this perceptron learn the AND function?

Sure. Let w1 = 1, w2 = 1, and b = -1.5. Only when both inputs are 1 does the sum (1 + 1 - 1.5 = 0.5) exceed zero.

What about OR?

Yup. Let w1 = 1, w2 = 1, and b = -0.5. Now any single input of 1 (1 - 0.5 = 0.5) is enough to clear the threshold.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
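To make this concrete, here is a minimal Python sketch of the perceptron just described, using a step activation. The specific weights and biases are my own illustrative picks (any of the infinitely many valid choices would do), not necessarily the exact values from the episode.

# A step-activation perceptron: output 1 if the weighted sum
# plus the bias is positive, otherwise 0.
def perceptron(x1, x2, w1, w2, b):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def AND(x1, x2):
    # Only 1 + 1 - 1.5 clears the threshold.
    return perceptron(x1, x2, w1=1, w2=1, b=-1.5)

def OR(x1, x2):
    # Any single 1 already gives 1 - 0.5 > 0.
    return perceptron(x1, x2, w1=1, w2=1, b=-0.5)

# Verify both functions against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))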

How about XOR?

No. It is not possible to represent XOR with a single layer, because XOR is not linearly separable: no single line can divide the inputs that map to 1 from those that map to 0. It requires two layers, as described below.

In this two-layer solution, the weights computed for the middle hidden node capture the essence of why this works. This node activates only when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If only a single input is 1, this node will not activate.
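Here is a sketch of that two-layer XOR network, built from the same step perceptron. The hidden AND node plays the "heavy penalty" role described above; the -2 weight into the output node is an assumed illustrative value, chosen so the penalty overwhelms the OR node's +1 contribution whenever both inputs are 1.

def step(z):
    # Threshold activation: 1 if the input is positive, else 0.
    return 1 if z > 0 else 0

def XOR(x1, x2):
    h_or = step(x1 + x2 - 0.5)    # hidden node: fires if either input is 1
    h_and = step(x1 + x2 - 1.5)   # hidden "penalty" node: fires only for (1, 1)
    # The penalty cancels the OR signal when both hidden nodes fire.
    return step(h_or - 2 * h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))    # prints 0, 1, 1, 0 -- the XOR truth table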

The universal approximation theorem tells us that any continuous function can be approximated arbitrarily closely by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for many applications. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax the no-cycles constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
