Understanding Collective Insect Communication with ML, w/ Orit Peleg - #590

Today we’re joined by Orit Peleg, an assistant professor at the University of Colorado Boulder. Orit’s work focuses on understanding the behavior of disordered living systems by merging tools from physics, biology, engineering, and computer science. In our conversation, we discuss how Orit found herself exploring problems of swarming behaviors and their relationship to distributed computing system architectures and spiking neurons. We look at two specific areas of research: the first focused on the patterns observed in firefly species, how the data is collected, and the types of algorithms used for optimization; the second on how Orit’s research with fireflies translates to a completely different insect, the honeybee. Finally, we discuss the next steps for investigating these and other insect families. The complete show notes for this episode can be found at twimlai.com/go/590.

Episodes (777)

Practical Deep Learning with Rachel Thomas - TWiML Talk #138

In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders, and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the Fast AI courses. We also cover Fast AI’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the Fast AI deep learning library as well, and how it was recently used to help their team achieve top results on a popular industry benchmark, improving training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138.

14 May 2018 · 44min

Kinds of Intelligence w/ Jose Hernandez-Orallo - TWiML Talk #137

In this episode, I'm joined by Jose Hernandez-Orallo, professor in the department of information systems and computing at Universitat Politècnica de València and fellow at the Leverhulme Centre for the Future of Intelligence, working on the Kinds of Intelligence Project. Jose and I caught up at NIPS last year after the Kinds of Intelligence Symposium that he helped organize there. In our conversation, we discuss the three main themes of the symposium: understanding and identifying the main types of intelligence, including non-human intelligence, developing better ways to test and measure these intelligences, and understanding how and where research efforts should focus to best benefit society. The notes for this show can be found at twimlai.com/talk/137.

10 May 2018 · 44min

Taming arXiv with Natural Language Processing w/ John Bohannon - TWiML Talk #136

In this episode, I'm joined by John Bohannon, Director of Science at AI startup Primer. As you all may know, a few weeks ago we released my interview with Google legend Jeff Dean, which, by the way, you should definitely check out if you haven’t already. Anyway, in that interview, Jeff mentions the recent explosion of machine learning papers on arXiv, which I responded to jokingly by asking whether Google had already developed the AI system to help them summarize and track all of them. While Jeff didn’t have anything specific to offer, a listener reached out and let me know that John was in fact already working on this problem. In our conversation, John and I discuss his work on Primer Science, a tool that harvests content uploaded to arXiv, sorts it into natural topics using unsupervised learning, then gives relevant summaries of the activity happening in different innovation areas. We spend a good amount of time on the inner workings of Primer Science, including their data pipeline and some of the tools they use, how they determine “ground truth” for training their models, and the use of heuristics to supplement NLP in their processing. The notes for this show can be found at twimlai.com/talk/136.

7 May 2018 · 54min

Epsilon Software for Private Machine Learning with Chang Liu - TWiML Talk #135

In this episode, the final episode in our Differential Privacy series, I speak with Chang Liu, applied research scientist at Georgian Partners, a venture capital firm that invests in growth-stage business software companies in the US and Canada. Chang joined me to discuss Georgian’s new offering, Epsilon, a software product that embodies the research, development, and lessons learned in helping their portfolio companies deliver differentially private machine learning solutions to their customers. In our conversation, Chang discusses some of the projects that led to the creation of Epsilon, including differentially private machine learning projects at Bluecore, WorkFusion, and Integrate.ai. We explore some of the unique challenges of productizing differentially private ML, including business, people, and technology issues. Finally, Chang provides some great pointers for those who’d like to further explore this field. The notes for this show can be found at twimlai.com/talk/135.

4 May 2018 · 46min

Scalable Differential Privacy for Deep Learning with Nicolas Papernot - TWiML Talk #134

In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the department of computer science at Penn State University. Nicolas and I continue this week’s look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning From Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles model proposed in this paper, and how it ensures differential privacy in a scalable manner that can be applied to Deep Neural Networks. We also explore one of the interesting side effects of applying differential privacy to machine learning, namely that it inherently resists overfitting, leading to more generalized models. The notes for this show can be found at twimlai.com/talk/134.

3 May 2018 · 59min

Differential Privacy at Bluecore with Zahi Karam - TWiML Talk #133

In this episode of our Differential Privacy series, I'm joined by Zahi Karam, Director of Data Science at Bluecore, whose retail marketing platform specializes in personalized email marketing. I sat down with Zahi at the Georgian Partners portfolio conference last year, where he gave me my initial exposure to the field of differential privacy, ultimately leading to this series. Zahi shared his insights into how differential privacy can be deployed in the real world and some of the technical and cultural challenges to doing so. We discuss the Bluecore use case in depth, including why and for whom they build differentially private machine learning models. The notes for this show can be found at twimlai.com/talk/133

1 May 2018 · 38min

Differential Privacy Theory & Practice with Aaron Roth - TWiML Talk #132

In the first episode of our Differential Privacy series, I'm joined by Aaron Roth, associate professor of computer science and information science at the University of Pennsylvania. Aaron is first and foremost a theoretician, and our conversation starts with him helping us understand the context and theory behind differential privacy, a research area he was fortunate to begin pursuing at its inception. We explore the application of differential privacy to machine learning systems, including the costs and challenges of doing so. Aaron also discusses quite a few examples of differential privacy in action, including work being done at Google, Apple, and the US Census Bureau, along with some of the major research directions currently being explored in the field. The notes for this show can be found at twimlai.com/talk/132.

30 Apr 2018 · 42min

Optimal Transport and Machine Learning with Marco Cuturi - TWiML Talk #131

In this episode, I’m joined by Marco Cuturi, professor of statistics at Université Paris-Saclay. Marco and I spent some time discussing his work on Optimal Transport Theory at NIPS last year. In our discussion, Marco explains Optimal Transport, which provides a way for us to compare probability measures. We look at ways Optimal Transport can be used across machine learning applications, including graphical, NLP, and image examples. We also touch on GANs, or generative adversarial networks, and some of the challenges they present to the research community. The notes for this show can be found at twimlai.com/talk/131.

26 Apr 2018 · 32min
