[MINI] Activation Functions
Data Skeptic · 16 Jun 2017


In a neural network, the output value of a neuron is almost always transformed in some way by a function. A trivial choice would be a linear transformation, which can only scale the data. Other transformations, such as a step function, allow non-linear properties to be introduced.
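A minimal sketch of the contrast (the function names and values here are illustrative, not from the episode): a linear "activation" only rescales its input, so stacking linear layers never leaves the linear regime, while a step function introduces a genuine non-linearity.

```python
import numpy as np

def linear(x, a=2.0):
    # A linear activation just scales; composing linear maps is still linear.
    return a * x

def step(x, threshold=0.0):
    # A step activation is non-linear: flat on each side of a jump.
    return np.where(x > threshold, 1.0, 0.0)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(linear(x))  # [-4. -1.  1.  4.] — merely rescaled
print(step(x))    # [0. 0. 1. 1.] — thresholded into two classes
```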

Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" on the region of interest in the data: extreme values are pushed close together, while values near the function's point of inflection change quickly with respect to small changes in the input. These functions can also take any real number and map it into a finite range such as [0, 1], which can have many advantages for downstream calculation.
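This squashing behavior is easy to see numerically. The sketch below (my own illustration, with arbitrarily chosen sample points) evaluates the standard sigmoid σ(x) = 1 / (1 + e⁻ˣ) at extreme and near-zero inputs: distant extremes land almost on top of each other near 0 or 1, while inputs near the inflection point at x = 0 remain well separated.

```python
import numpy as np

def sigmoid(x):
    # Maps any real number into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Extreme inputs one unit apart are squashed nearly together...
extremes = sigmoid(np.array([-10.0, -9.0, 9.0, 10.0]))

# ...while inputs one unit apart near the inflection point (x = 0)
# stay far apart, so small input changes produce large output changes.
near_zero = sigmoid(np.array([-0.5, 0.5]))

print(extremes)   # all very close to 0 or to 1
print(near_zero)  # clearly separated values around 0.5
```

The gap between sigmoid(9) and sigmoid(10) is tiny, while the gap between sigmoid(-0.5) and sigmoid(0.5) is large, even though both input pairs are one unit apart.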

In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.

