Bayesian oracles and safety bounds, Guaranteed Safe AI Seminars (Horizon Events), November 2024
SlidesLive-recorded talks by Yoshua Bengio
Towards Quantitative Safety Guarantees and Alignment (scroll down a bit to see the video). Keynote presentation at the Alignment Workshop 2023 in New Orleans, December 11th (pptx).
Towards AI Safety That Improves With More Compute rather than the Other Way Around (video). Invited talk at Harvard-MIT, 13 Nov. 2023 (pptx).
Explaining properties of subjective experience (consciousness). Keynote at ASSC 2023, the annual conference of the Association for the Scientific Study of Consciousness. (pptx). Corresponding paper. Related video.
GFlowNets as Extensions of RL that are Learnable General-Purpose Inference Machines, RL Sofa (Mila reading group) 57min video, Feb 2022
GFlowNets for generative active learning, Amazon ML Conference invited talk, 29m video, Oct 2021
Active learning: Learning to Explore in Molecule Space, Google BayesOpt invited talk, 1h05 video, July 2021
From Conscious Processing to System 2 Deep Learning, Rockwood Memorial Lecture @ Salk Institute / UCSD, 1h talk + discussion, July 2021
Introduction to Deep Learning (in French) at École Langlands du Centre de Recherches Mathématiques, August 23, 2021.
Neuromatch 2021 lecture on ‘Deep Learning for AI‘, to air in August 2021 (30 minutes).
Fireside Chat with Dawn Song and Yoshua Bengio at the Responsible Data Summit, July 28, 2020.
Priors for Semantic Variables presentation at the Institute for Advanced Study Seminar on Theoretical Machine Learning, July 23, 2020.
Deep Learning for System 2 Processing presentation at the AAAI-20 Turing Award winners 2018 special event, February 9, 2020.
Invited talk at the NeurIPS’2019 workshop on Context and Compositionality in Biological and Artificial Neural Networks, Vancouver, BC, Towards Compositional Understanding of the World by Agent-Based Deep Learning, December 14th, 2019.
Invited talk at the NeurIPS’2019 workshop on Fairness and Ethics, Vancouver, BC, On the Wisdom Race, December 13th, 2019.
Posner lecture at NeurIPS’2019, Vancouver, BC, From System 1 Deep Learning to System 2 Deep Learning, December 11th, 2019. Video with synchronized slides here.
Invited talk at the Climate Informatics conference, Paris, France, AI and the Climate Crisis, October 4th, 2019.
Turing Award Lecture, Heidelberg Laureate Forum, Heidelberg, Germany, Deep Learning for AI, September 23rd, 2019.
Invited talk at ISIT’2019, Paris, France, Information Theory for and by Deep Learning, July 11th, 2019.
Summer School Lectures at MINOA’2019, Ischia, Italy, Deep Learning for AI, June 25th, 2019.
Invited talk (in French) at the Colloque sur le logiciel libre de l’ADTE: IA, apprentissage profond, logiciel libre et bien commun (AI, deep learning, free software and the common good), June 4th, 2019, Québec. Video.
Neurodiscussions presentation, GRSNC, Université de Montréal, Inspiration from Brains for Deep Learning and Inspiration from Deep Learning for Brains, April 2nd, 2019.
Invited talk at WHY-19 AAAI Symposium, Stanford University, CA, Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms, March 26th, 2019.
Public Lecture at Amnesty International, London, UK, Beyond the Hype: How we can make AI work for humanity, January 31st, 2019.
Invited talk at the Beneficial AGI conference, Challenges towards AGI, January 6th, 2019.
Keynote speech at MSR Asia 20th anniversary and later at Tsinghua University, Beijing, Challenges for Deep Learning towards Human-Level AI, November 7th, 2018.
MIT Hans-Lukas Teuber Memorial Lecture, Towards bridging the gap between deep learning and brains, October 18th, 2018.
Lecture at the Deep Learning and Reinforcement Learning summer school DLRL’2018, in Toronto at the Vector Institute: Recurrent Nets and Attention for System 2 Processing, July 30th, 2018.
Lectures on July 19th, 2018 at ICDL’2018 (Advanced Course on Data Science and Machine Learning): Lecture 1 (Deep Learning and AI), Lecture 2 (Deep Learning, Recurrent Nets and Attention for System 2 Processing), Lecture 3 (Unsupervised Representation Learning and Generative Adversarial Networks).
Talk at the ICML’2018 Workshop on Deep Generative Models, Capturing Dependencies Implicitly, July 14th, 2018.
Talk at the ICML’2018 Workshop on Nonconvex Optimization, On stochastic gradient descent, flatness and generalization, July 14th, 2018.
Talk at the ICML’2018 Workshop on Learning with Limited Labels, Towards disentangling underlying explanatory factors, July 13th, 2018.
Tutorial at IJCAI’2018 on Deep Learning for AI, July 13th, 2018.
Talk at FAIR-New York on A Pot-Pourri of Recent Advances at MILA, July 6th, 2018.
Talk at NYU on GANs and Unsupervised Representation Learning, March 19th, 2018.
Talk at the NIPS’2017 BigNeuro Workshop, More Steps Towards Biologically Plausible Backprop, December 9th, 2017.
Talk at the University of Amsterdam, From Deep Learning of Disentangled Representations to Higher-Level Cognition, October 25th, 2017.
Heroes of Deep Learning: Andrew Ng interviews Yoshua Bengio, August 8th, 2017.
Talk at the Microsoft Research Faculty Summit, Deep Learning Research and the Future of AI, July 17, 2017.
Talk at the NIPS’2016 Brains and Bits Workshop, Towards biologically plausible deep learning, December 10th, 2016.
Talk at the NIPS’2016 AutoDiff Workshop, Credit Assignment: Beyond Backpropagation, December 10th, 2016.
Talk at OSDC West, Santa Clara, on Deep Learning, November 4th, 2016.
Talk at Stanford University on The Challenge of Sequential Modeling, September 26th, 2016.
Talk at the ICML’2016 Workshop on Optimization Methods for the Next Generation of Machine Learning, on From Curriculum Learning to Mollifying Networks, June 24, 2016, New York City.
Talk at the ICML’2016 Neural Nets Back to the Future Workshop, on Learning Long-Term Dependencies with Gradient Descent is Difficult, June 23, 2016, New York City.
Talk at the NIPS’2015 Workshop on Transfer and Multi-Task Learning on Learning Representations for Unsupervised and Transfer Learning, Dec. 12, 2015.
Talk at the NIPS’2015 RAM Workshop (on Reasoning, Attention and Memory) on From Attention to Memory and towards Longer-Term Dependencies, Dec. 12, 2015.
Talk at the NIPS’2015 Workshop on Deep Reinforcement Learning on RL for DL, Dec. 11, 2015.
Talk at the NIPS’2015 Workshop on Statistical Methods for Understanding Neural Systems on Small steps towards biologically plausible deep learning, Dec. 11, 2015.
Deep Learning Tutorial at NIPS’2015. Slides for all parts except vision (pdf); slides for the vision part (pdf).
Plenary talk at the Asian Conference on Pattern Recognition (ACPR’2015), on Deep Learning (November 6, 2015).
Keynote talk at the IBM Cognitive Colloquium (San Francisco), on Scaling Deep Learning towards AI (October 13, 2015). (video of the talk).
Plenary talk at the International Conference on Image Processing (ICIP’2015), on Deep Learning (September 28, 2015). A tutorial was also given on the previous day.
Lectures at the Montreal Deep Learning Summer School (DLSS’2015), first on Theoretical Motivations for Deep Learning (August 3, 2015) and then on Deep Generative Models (August 12, 2015).
Lectures at the Lisbon Machine Learning Summer School 2015 on Deep Learning.
CORS/INFORMS’2015 tutorial Deep Learning.
CVPR’2015 DeepVision Workshop Deep Learning: Progress in Theory and Attention Mechanisms.
New York University invited talk Towards Biologically Plausible Deep Learning, and the corresponding paper on arXiv.
Montreal Institute for Learning Algorithms (MILA) tea-talk, Towards Biologically Plausible Deep Learning. With videos of the talk: (mpg file 1) (mpg file 2) and the corresponding paper on arXiv.
MLSS’2015 Austin lecture slides: Deep Learning
NIPS’2014 MLINI workshop: Towards a biologically plausible replacement for back-prop: target-prop
NIPS’2014 workshop on transfer and multi-task learning: Learning disentangled representations that generalize across tasks, domains and time
NIPS’2014 workshop on optimization for ML (OPT’2014): Optimization challenges for deep learning
KDD’2014 invited tutorial on Scaling Up Deep Learning.
“Deep Learning for AI”, Keynote talk at the Artificial General Intelligence conference, AGI’14, Quebec City, August 2nd, 2014. Video of the talk.
“Deep learning, Brains, and the Evolution of Culture” + Bonus: “How the brain could do credit assignment without backprop”, COGSCI’2014, Quebec City, Workshop on Deep Learning and the Brain, 23 July 2014
“Deep learning”, Invited lectures, University of Siena, Italy, July 2014.
“Deep learning”, Invited lectures, Beijing Machine Learning Summer School, China, June 20, 2014.
“Hyper-Parameters for Deep Learning”, Invited talk, ICML’2014 AutoML Workshop, Beijing, China, June 26, 2014.
ICML’2014 workshop on deep learning for text mining, Learning to Represent Natural Language.
“Scaling Up Deep Learning”, Invited talk, ICML’2014 Workshop on Deep Learning Models for Emerging Big Data Applications, Beijing, China, June 26, 2014.
Canadian AI Conference’2014 Tutorial on Deep Learning of Representations
“Deep Learning Challenges”, Invited talk, Google Research, Mountain View, California, April 11, 2014.
“Deep learning”, Invited lectures, Machine Learning Summer School, Iceland, 2014. Videos
“Challenges for Deep Learning”, November 6th 2013, Daegu, South Korea, ICONIP’2013 Plenary Talk.
UCL Masterclass lectures Oct. 21-23 2013 (lecture 1 pdf) (lecture 2 pdf) (lecture 3 pdf) (videos)
Slides of my BTAS’2013 plenary talk on Deep Learning towards AI
SSTiC’2013 summer school 5.5-hour slides on Deep Learning of Representations
AAAI’2013 4-hour tutorial on Deep Learning of Representations
ICML’2012 Tutorial on Representation Learning
ACL’2012 Tutorial on Deep Learning for NLP (without Magic)
VIDEO of introductory talk at ICML’2012 Representation Learning workshop
My slides for the IPAM GSS’2012 Summer School on Representation Learning. See videos of all the talks there.
“Deep Learning of Representations”, November 22nd 2012, Google Montreal, Montreal, Google Montreal Tech Series, invited talk. (pdf slides)
“On the power of deep architectures”, October 5th 2011, Espoo, Finland, ALT/DS 2011 invited talk. (pdf slides)
“Unsupervised learning of deep representations / Apprentissage non-supervisé de représentations profondes”, (slides in English, talk in French) September 27-29th 2011, Porquerolles, France, ERMITES invited lectures. (pdf slides)
“Understanding and Improving Deep Learning Algorithms”, April 22nd 2010, CMU, Machine Learning Google Distinguished Lecture, Pittsburgh, Pennsylvania, USA. (pdf slides)
“Deep Learning for Speech and Language”, December 12th 2009, NIPS 2009 Workshop on Deep Learning for Speech Recognition and Related Applications, Whistler, British Columbia, Canada. (pdf slides)
“Learning Deep Hierarchies of Representations”, September 23rd 2009, Google Research, Mountain View, California, USA. (pdf slides) (mp4 video)
“Learning Deep Feature Hierarchies”, September 21st 2009, Stanford University, Stanford, California, USA. (pdf)
“Learning Deep Architectures”, August 6th, 2009, Toronto CIFAR Summer School 2009, Ontario, Canada. (pdf)
“On The Difficulty of Training Deep Architectures”, July 8th 2009, Deep Learning Workshop, Gatsby Unit, UCL, London, U.K. (pdf)
“Learning Deep Architectures”, July 7th 2009, Microsoft Research, Cambridge, U.K. (pdf)
“Learning Deep Architectures”, UAI’2009, June 19th 2009, Montreal, Qc, Canada. (pdf)
“Tutorial: Learning Deep Architectures”, ICML’2009 Workshop on Learning Feature Hierarchies, June 18th 2009, Montreal, Qc, Canada. (pdf)
“Learning Deep Architectures: a Stochastic Optimization Challenge”, May 9th, 2009, CIFAR NCAP Workshop, Millcroft Inn, Ontario, Canada. (pdf)
“Learning Deep Architectures: a Stochastic Optimization Challenge”, May 6th, 2009, Waterloo University, Waterloo, Ontario, Canada. (pdf)
“Bébé IA : de l’humain à la machine” (Baby AI: from human to machine), Cégep de Valleyfield, April 28th, 2009, Valleyfield, Qc, Canada. (pdf)
“Curriculum Learning”, Learning Workshop (selected for oral presentation), April 16th 2009, Clearwater, Florida, USA. (pdf)(similar talk given at ICML2009 pdf)
“Statistical Machine Learning for Target Classification from Acoustic Signature”, Army Research Lab, January 15th, 2009, Adelphi, Maryland, USA.
“Learning Algorithms for Deep Architectures”, NIPS’2008 Workshop on Machine Learning Meets Human Learning, December 12th, 2008, Whistler, B.C., Canada. (pdf)
“On the difficulty of training deep neural nets and the effect of unsupervised pre-training”, CIFAR NCAP Workshop, December 7th, 2008, Vancouver, B.C., Canada.
“Machine learning and the curse of highly-variable functions”, November 14th, 2008, Montreal Music and Machine Learning Workshop 2008, Montreal, Qc, Canada. (pdf)
“Bébé IA : de l’humain à la machine” (Baby AI: from human to machine), University of Montreal, October 7th, 2008, Montreal, Canada. (pdf)
“Learning Deep Representations”, DARPA Deep Learning Workshop, Sept 25th, 2008, Virginia, USA. (pdf)
“Learning deep architectures for AI”, University of Siena, May 29th, 2008, Siena, Italy. (pdf)
“Learning deep architectures for AI”, University of Florence, May 26th, 2008 Florence, Italy. (pdf)
“Optimizing Deep Architectures”, NIPS Satellite Meeting on Deep Learning, December 6th, 2007, Vancouver, BC.
“Deep architectures for Baby AI”, CIFAR Summer School, August 7th-11th, 2007, University of Toronto, Toronto, ON. (pdf)
“Learning long-term dependencies with recurrent networks”, CIFAR NCAP Workshop on sequential data, June 14th, 2007, Millcroft Inn, Alton, ON.
“Non-Local and Deep Representations for AI”, NIPS 2006 Workshop on Novel Applications of Dimensionality Reduction, December 9th, 2006, Whistler, BC.
“Facing Non-Convex Optimization to Scale Machine Learning to AI”, Workshop on Data Mining and Mathematical Programming, October 10th, 2006, Montreal, Qc.
“Manifold Learning, Semi-Supervised Learning, and the Curse of Dimensionality”, NPCDS/MITACS Spring School on Statistical and Machine Learning: Topics at the Interface, May 27th, 2006, Montreal, Qc.
“On the challenge of learning complex functions”, Computational Neuroscience Symposium, From Theory to Neurons and Back Again. May 9th, 2006, Montreal, Qc.
“Scaling Learning Algorithms towards AI”, NIPS 2005 Large-Scale Kernel Machines Workshop, December 9th, 2005, Whistler, BC.
“Kernel Portfolio Management”, NIPS 2005 Machine Learning in Finance Workshop, December 9th, 2005, Whistler, BC.
“Neural Networks, Convexity, Kernels and Curses”, August 26th, 2005, Montreal, MITACS Workshop on Statistical Learning of Complex Data from Complex Distributions.
“Reassuring and Troubling Views on Graph-Based Semi-Supervised Learning”, August 7th, 2005, Bonn, ICML’2005 Workshop on Partially Supervised Learning.
“Neural Networks, Convexity, Kernels and Curses”, August 5th, 2005, Hebrew University, Jerusalem, Israel.
“Curse of Dimensionality: Local versus Non-Local Learning”, April 25th, 2005, Montreal, CIAR Manifold Learning Workshop.
“The Curse of Dimensionality for Local Kernel Machines”, The Learning Workshop, April 7th, 2005, Snowbird, Utah.
“The Curse of Dimensionality for Local Learning”, Microsoft Research, May 25th, 2005, Redmond, Washington.
“Non-Local Learning of Geometric Invariants and Beyond”, CIAR Neural Computation Workshop, December 12th, 2004, Vancouver.
“Statistical Learning from High Dimensional and Complex Data: Not a Lost Cause”, Fields Institute Data Mining Workshop, October 28th, 2004.
“Local and Non-Local Manifold Learning”, University of Toronto, August 2004.
“Local and Non-Local Manifold Learning”, MIT, August 2004.
“Learning the Density Structure of High-Dimensional Data”, Joint Canada-France Meeting of the Mathematical Sciences, Toulouse 2004.
“Learning the Density Structure of High-Dimensional Data”, Computer Discovery Conference, Montreal, June 2004.
“Spectral Dimensionality Reduction via Learning Eigenfunctions”, NIPS’2003 post-conference workshops, British Columbia.
“Learning Internal Representations by Capturing the Principal Factors of Variations that Summarize Similarity between Objects”, Canadian Institute for Advanced Research, Vancouver, December 2003.
“Learning eigenfunctions to model high-dimensional data”, Statistical Society of Canada, McGill University, April 2003.
“No Unbiased Estimator of Variance for K-Fold Cross-Validation”, NIPS’2002 post-conference workshops, British Columbia.
“Statistical Learning from High-Dimensional Data”, MITACS Board Meeting, October 2002.