[MINI] Feed Forward Neural Networks
Data Skeptic · 24 Mar 2017


In a feed forward neural network, connections between neurons cannot form a cycle; information flows strictly from inputs to outputs. In this episode, we explore how such a network can represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table
Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below: two inputs x1 and x2 with weights w1 and w2, plus a bias term b, passed through a step activation function. The output is 1 when w1*x1 + w2*x2 + b > 0, and 0 otherwise.
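A minimal sketch of this computation in Python, assuming the step activation described above:

```python
def perceptron(x1, x2, w1, w2, b):
    """Two-input perceptron: output 1 when the weighted sum
    plus the bias is positive, and 0 otherwise."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```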

Can this perceptron learn the AND function?

Sure. Let w1 = 1, w2 = 1, and b = -1.5.

What about OR?

Yup. Let w1 = 1, w2 = 1, and b = -0.5.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented: we would need w1 ≤ 0 and w2 ≤ 0 to keep inputs (1, 0) and (0, 1) from firing, yet w1 + w2 > 0 for (1, 1) to fire, which is impossible.
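As a quick sanity check, these weights reproduce the truth tables above (reusing the perceptron sketched earlier):

```python
for x1 in (0, 1):
    for x2 in (0, 1):
        a = perceptron(x1, x2, w1=1, w2=1, b=-1.5)  # AND weights
        o = perceptron(x1, x2, w1=1, w2=1, b=-0.5)  # OR weights
        print(x1, x2, "AND:", a, "OR:", o)
```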

How about XOR?

No. It is not possible to represent XOR with a single layer, because XOR is not linearly separable: no single line can divide the inputs that should output 1 from those that should output 0. It requires two layers. The image below shows how it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If only a single input is 1, this node will not activate.
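A hedged sketch of such a two-layer network in Python (the exact weights in the original figure may differ; these are one valid choice):

```python
def xor_net(x1, x2):
    """Two-layer XOR network: a hidden AND-like node detects (1, 1)
    and feeds a heavy negative penalty into the output node."""
    hidden = 1 if x1 + x2 - 1.5 > 0 else 0               # fires only on (1, 1)
    return 1 if x1 + x2 - 2 * hidden - 0.5 > 0 else 0    # OR, minus the penalty

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "XOR:", xor_net(x1, x2))
```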

The universal approximation theorem tells us that any continuous function (on a compact domain) can be approximated arbitrarily well by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should, in principle, be adequate for just about any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.
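One standard statement of the theorem (Cybenko, 1989, for a sigmoidal activation σ): for every continuous f on the unit cube and every ε > 0, there exist a width N and parameters α_i, w_i, b_i such that

```latex
\left| f(x) \;-\; \sum_{i=1}^{N} \alpha_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
\quad \text{for all } x \in [0,1]^n .
```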

Other types of neural networks have less strict structural definitions. The various ways one might relax the acyclicity constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
