Summary
Yoshua Bengio OC FRS FRSC (born March 5, 1964) is a Canadian-French computer scientist, and a pioneer of artificial neural networks and deep learning. He is a professor at the Université de Montréal and scientific director of the AI institute MILA.
Bengio received the 2018 ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing”, together with Geoffrey Hinton and Yann LeCun, for their foundational work on deep learning. Bengio, Hinton, and LeCun are sometimes referred to as the “Godfathers of AI”. Bengio is the most-cited computer scientist globally (by both total citations and by h-index), and the most-cited living scientist across all fields (by total citations). In 2024, TIME Magazine included Bengio in its yearly list of the world’s 100 most influential people.
Source: Wikipedia
LunarTech – 10/02/2025 (01:31:00)
Join world-renowned AI pioneer Yoshua Bengio as he takes you on a deep dive into the fascinating world of machine learning and deep learning. In this insightful masterclass, Bengio explores the evolution of artificial intelligence, from its early days to the revolutionary breakthroughs in neural networks and deep learning.
Discover how machines learn from data, the distinction between supervised and unsupervised learning, and the crucial role of neuroscience-inspired algorithms in shaping the future of AI. Bengio also shares his personal journey—how he became immersed in AI research, the challenges he faced, and the exhilarating moments of discovery that drive scientific progress.
The conversation delves into the ethics of AI, the impact of deep learning on industries like computer vision, natural language processing, and robotics, and the increasing involvement of major tech companies in AI research. Bengio candidly discusses the risks and limitations of current AI models, addressing both public fears and misconceptions about artificial intelligence.
With a unique mix of technical depth and philosophical reflection, this masterclass offers a rare glimpse into the mind of one of AI’s leading thinkers. Whether you’re an AI researcher, a tech enthusiast, or simply curious about the future of intelligence, this session will leave you with a deeper understanding of how AI is transforming our world—and what lies ahead.
Topics Covered
The fundamentals of machine learning and deep learning
How AI models learn and generalize from data
The role of neuroscience in AI
The economic impact of deep learning breakthroughs
The challenges of unsupervised learning and reinforcement learning
The ethical concerns and societal implications of AI
The future of human-level intelligence in machines
Timestamps
00:00:00 – Introduction: Professor Yoshua Bengio and the AI Landscape
00:00:32 – Deep Learning Explained: Machine Learning Meets Neural Networks
00:01:30 – Early Research: The Connectionist Movement & Graduate Beginnings
00:02:17 – Navigating Research Paths: Balancing Clarity and Exploration
00:03:34 – Defining Intelligence: The Quest for Underlying Principles
00:05:26 – Overcoming Challenges: Credit Assignment & Deep Planning
00:07:07 – Revolutionary Applications: From Speech Recognition to Computer Vision
00:08:48 – The AI Boom: From Academic Pioneering to Industry Investment
00:12:16 – Core Concepts in AI: Thinking, Learning, and Generalization
00:14:02 – Learning in Action: Iterative Adaptation and Neural Flexibility
01:00:05 – Debunking AI Fears: Misconceptions, Autonomy, and Ethics
01:17:37 – The Creative Process: Daily Routines, Meditation, & Eureka Moments
OnAir Post: Yoshua Bengio
About
Bio
Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” shared with Geoffrey Hinton and Yann LeCun, and made him the most-cited computer scientist by both total citations and h-index.
He is Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program and acts as Scientific Director of IVADO.
He has received numerous awards, including the prestigious Killam Prize and the Herzberg Gold Medal in Canada, a CIFAR AI Chair, Spain’s Princess of Asturias Award, and the VinFuture Prize. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honour of France, an Officer of the Order of Canada, and a member of the UN’s Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology. In 2024, Yoshua Bengio was named one of TIME magazine’s 100 most influential people in the world.
Concerned about the social impact of AI, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence and currently chairs the International Scientific Report on the Safety of Advanced AI.
Source: Website
Web Links
Videos
AI Masterclass: The Future of Machine Learning & Deep Learning
(01:31:00)
By: LunarTech
How AI threatens humanity, with Yoshua Bengio
October 25, 2024 (30:00)
By: Dr Waku
While recent innovations in the field of AI are exciting, there is danger on the horizon. I interviewed Yoshua Bengio to discuss the risks posed by advanced AI, including misuse by humanity, misalignment, and loss of control. To address these issues, we need significant technical change as well as political change, which means the public needs to get informed about and involved in this issue.
These risks are amplified because of the intense speed at which AI development is happening. We could be just a matter of years away from an AGI system, artificial general intelligence, which means AI as smart as a human. From there, the technology could improve itself all the way to AI much smarter than a human: superintelligence.
One way we can control this runaway train of development is to create a democratic, decentralized coalition of AI development institutions. Any one lab could shut down development across the coalition if it detected something dangerous, which spreads out the power and allows teams to agree to specific rates of development. Even then, we would likely need heavy surveillance on the people within these organizations. But a better, safer future is possible.
Research
Interests
Source: Website
My long-term goal is to understand the mechanisms giving rise to intelligence; understanding the underlying principles would deliver artificial intelligence, and I believe that learning algorithms are essential in this quest.
Since 1986 I have worked on neural networks, in particular on deep learning in this century. What fascinates me is how an intelligent agent, animal, human or machine, can figure out how their environment works. Of course this can be used to make good decisions, but I feel like at the heart is the notion of understanding, and the crucial question is how to learn to understand.
In the past I worked on learning deep representations (either supervised or unsupervised); capturing sequential dependencies with recurrent networks and other autoregressive models; understanding credit assignment (including the quest for biologically plausible analogues of backprop, as well as end-to-end learning of complex modular information-processing assemblies); meta-learning (or learning to learn); attention mechanisms; deep generative models; curriculum learning; variations of stochastic gradient descent and why SGD works for neural nets; convolutional architectures; natural language processing (especially with word embeddings, language models and machine translation); and understanding why deep learning works so well and what its current limitations are. I worked on many applications of deep learning, including – but not limited to – healthcare (such as medical image analysis), standard AI tasks of computer vision, modeling speech and language and, more recently, robotics.
Looking forward, I am interested in
- how to go beyond the iid hypothesis (and, more generally, the assumption that the test cases come from the same distribution as the training set), so we can build more versatile AIs robust to changes in their environment; a hypothesis I explore is that an important ingredient to achieve this kind of out-of-distribution robustness is the kind of systematic generalization which system 2 (i.e., conscious) processing provides to humans;
- causal learning (i.e., figuring out what the causal variables are and how they are causally related), as this is a crucial part of understanding how the world works;
- modularizing knowledge so it can be factorized into pieces that can be re-used for fast transfer and adaptation;
- how agents can purposefully act to better understand their environment and seek knowledge, i.e., actively explore to learn;
- grounded language learning, as well as how neural networks could tackle system 2 cognitive tasks (such as reasoning, planning, imagination, etc.) and how that can help a learner figure out high-level representations on both the perception and action sides.
I believe that all of the above are different aspects of a common goal, going beyond the limitations of current deep learning and towards human-level AI. I am also very interested in AI for social good, in particular in healthcare and the environment (with a focus on climate change).
Notable Past Research
1989-1998 Convolutional and recurrent networks trained end-to-end with probabilistic alignment (HMMs) to model sequences, as the main contribution of my PhD thesis (1991); NIPS 1988, NIPS 1989, Eurospeech 1991, PAMI 1991, and IEEE Trans. Neural Nets 1992. These architectures were first applied to speech recognition in my PhD (and rediscovered after 2010) and then with Yann LeCun et al to handwriting recognition and document analysis (most cited paper is “Gradient-based learning applied to document recognition”, 1998, with over 15,000 citations in 2018), where we also introduce non-linear forms of conditional random fields (before they were a thing).
1991-1995 Learning to learn papers with Samy Bengio, starting with IJCNN 1991, “Learning a synaptic learning rule”. The idea of learning to learn (particularly by back-propagating through the whole process) has now become very popular, but we lacked the necessary computing power in the early 90’s.
1993-1995 Uncovering the fundamental difficulty of learning in recurrent nets and other machine learning models of temporal dependencies, associated with vanishing and exploding gradients: ICNN 1993, NIPS 1993, NIPS 1994, IEEE Transactions on Neural Nets 1994, and NIPS 1995. These papers have had a major impact and motivated later papers on architectures to aid with learning long-term dependencies and deal with vanishing or exploding gradients. An important but subtle contribution of the IEEE Transactions 1994 paper is to show that the condition required to store bits of information reliably over time also gives rise to vanishing gradients, using dynamical systems theory. The NIPS 1995 paper introduced the use of a hierarchy of time scales to combat the vanishing gradients issue.
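The vanishing-gradient effect described above can be seen numerically: back-propagating through a recurrence multiplies the gradient by the recurrent Jacobian once per time step, so a contractive weight matrix shrinks it exponentially in the number of steps. A minimal sketch (the 8-unit linear recurrence and the 0.5 spectral norm are illustrative choices, not taken from the papers):

```python
import numpy as np

# Back-propagation through a linear recurrence h_t = W h_{t-1}:
# each step back in time multiplies the gradient by W^T, so if the
# largest singular value of W is below 1 the gradient norm decays
# exponentially with the number of time steps.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
W = 0.5 * A / np.linalg.norm(A, 2)   # rescale so the largest singular value is 0.5

grad = np.ones(8)                    # gradient arriving at the last time step
norms = []
for t in range(50):                  # backprop through 50 time steps
    grad = W.T @ grad                # chain rule: one step back in time
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])           # the norm collapses toward zero
```

With the spectral norm above 1 the same loop explodes instead, which is the other half of the difficulty identified in these papers.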
1999-2014 Understanding how distributed representations can bypass the curse of dimensionality by providing generalization to an exponentially large set of regions from those comparatively few occupied by training examples. This series of papers also highlights how methods based on local generalization, like nearest-neighbor and Gaussian kernel SVMs, lack this kind of generalization ability. The NIPS 1999 paper introduced, for the first time, auto-regressive neural networks for density estimation (the ancestor of the NADE and PixelRNN/PixelCNN models). The NIPS 2004, NIPS 2005 and NIPS 2011 papers on this subject show how neural nets can learn a local metric, which can bring the power of generalization of distributed representations to kernel methods and manifold learning methods. Another NIPS 2005 paper shows the fundamental limitations of kernel methods due to a generalization of the curse of dimensionality (the curse of highly variable functions, which have many ups and downs). Finally, the ICLR 2014 paper demonstrates that, in the case of piecewise-linear networks (like those with ReLUs), the number of regions (linear pieces) distinguished by a one-hidden-layer network is exponential in the number of neurons (whereas the number of parameters is quadratic in the number of neurons, and a local kernel method would require an exponential number of examples to capture the same kind of function).
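The region-counting argument can be made concrete in the simplest case. A sketch under toy assumptions (1-D input, random weights, arbitrary sizes): restricted to a line, a one-hidden-layer ReLU network has at most one "kink" per hidden unit, hence at most n + 1 linear pieces; the exponential counts in the ICLR 2014 paper arise in higher input dimensions.

```python
import numpy as np

# A one-hidden-layer ReLU net on a 1-D input is piecewise linear: each
# hidden unit's activation flips at most once along the line, and a new
# linear piece can only start where some unit flips. Tracking the pattern
# of active units along a grid makes the pieces countable.

rng = np.random.default_rng(1)
n_hidden = 10
w1 = rng.standard_normal(n_hidden)   # input-to-hidden weights
b1 = rng.standard_normal(n_hidden)   # hidden biases (these place the kinks)

x = np.linspace(-5.0, 5.0, 10001)
active = (np.outer(x, w1) + b1) > 0                # which ReLUs are on at each x
changes = np.any(active[1:] != active[:-1], axis=1)
pieces = 1 + int(np.count_nonzero(changes))        # a new piece starts at each change

print(pieces)   # never more than n_hidden + 1 in this 1-D setting
```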
2000-2008 Word embeddings from neural networks and neural language models. The NIPS 2000 paper introduced, for the first time, the learning of word embeddings as part of a neural network which models language data. The JMLR 2003 journal version expands on this (the two papers together have around 3000 citations) and also introduces the idea of asynchronous SGD for distributed training of neural nets. Word embeddings have become one of the most common fixtures of deep learning when it comes to language data, and this has essentially created a new sub-field in the area of computational linguistics. I also introduced the use of importance sampling (AISTATS 2003, IEEE Trans. on Neural Nets, 2008) as well as of a probabilistic hierarchy (AISTATS 2005) to speed up computations and handle larger vocabularies.
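The core mechanism can be sketched in a few lines. This is illustrative only: the toy vocabulary, embedding size and single linear projection are stand-ins for the full architecture of the 2003 paper, which also includes a hidden tanh layer.

```python
import numpy as np

# Sketch of a neural language model with word embeddings: each word id
# maps to a learned d-dimensional vector, the vectors of a context window
# are concatenated, and a projection plus softmax gives a distribution
# over the next word. (Weights here are random, not trained.)

rng = np.random.default_rng(0)
vocab_size, d, context = 6, 4, 2                      # toy sizes, chosen arbitrarily

C = rng.standard_normal((vocab_size, d))              # embedding matrix, one row per word
W = rng.standard_normal((context * d, vocab_size))    # projection to vocabulary scores

def next_word_probs(context_ids):
    """P(next word | context) from concatenated context embeddings."""
    x = C[context_ids].reshape(-1)    # look up and concatenate embeddings
    scores = x @ W                    # one score per vocabulary word
    e = np.exp(scores - scores.max()) # numerically stable softmax
    return e / e.sum()

p = next_word_probs([2, 5])
print(p.sum())   # a proper distribution over the vocabulary
```

Because the same embedding matrix C is shared across all word positions, words that occur in similar contexts are pushed toward nearby embeddings during training, which is what lets the model generalize to unseen phrases.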
2006-2014 Showing the theoretical advantage of depth for generalization. The NIPS 2006 oral presentation experimentally demonstrated the advantage of depth and is one of the most cited papers in the field (over 2600 citations). The NIPS 2011 paper shows how deeper sum-product networks can represent functions which would otherwise require an exponentially larger model if the network is shallow. Finally, the NIPS 2014 paper on the number of linear regions of deep neural networks generalizes the ICLR 2014 paper mentioned above, showing that the number of linear pieces produced by a piecewise linear network grows exponentially in both width of layers and number of layers, i.e., depth, making the functions represented by such networks generally impossible to capture efficiently with kernel methods (short of using a trained neural net as the kernel).
2006-2014 Unsupervised deep learning based on auto-encoders (with the special case of GANs as decoder-only models, see below). The NIPS 2006 paper introduced greedy layer-wise pre-training, both in the supervised case and unsupervised case with auto-encoders. The ICML 2008 paper introduced denoising auto-encoders and the NIPS 2013, ICML 2014 and JMLR 2014 papers cast their theory and generalize them as proper probabilistic models, at the same time introducing alternatives to maximum likelihood as training principles.
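The denoising setup can be sketched with untrained weights, purely to show where the training signal comes from; the layer sizes and noise level below are arbitrary choices, not from the papers.

```python
import numpy as np

# Denoising auto-encoder objective: corrupt the input, reconstruct through
# an encoder/decoder, and compare against the *clean* input. To drive this
# loss down, a trained model must capture the structure of the data well
# enough to undo the corruption.

rng = np.random.default_rng(0)
d_in, d_hid = 16, 4
W_enc = 0.1 * rng.standard_normal((d_in, d_hid))   # encoder weights (untrained)
W_dec = 0.1 * rng.standard_normal((d_hid, d_in))   # decoder weights (untrained)

def reconstruct(x_corrupt):
    h = np.tanh(x_corrupt @ W_enc)   # encoder: corrupted input to code
    return h @ W_dec                 # decoder: code to reconstruction

x_clean = rng.standard_normal((32, d_in))                         # minibatch of clean inputs
x_corrupt = x_clean + 0.3 * rng.standard_normal(x_clean.shape)    # additive-noise corruption

loss = np.mean((reconstruct(x_corrupt) - x_clean) ** 2)   # denoising objective
print(loss)
```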
2014 Dispelling the local-minima myth regarding the optimization of neural networks, with the NIPS 2014 paper on saddle points, and demonstrating that it is the large number of parameters which makes it very unlikely that bad local minima exist.
2014 Introducing Generative Adversarial Networks (GANs) at NIPS 2014, which brought many innovations in training deep generative models outside of the maximum likelihood framework and even outside of the classical framework of a single objective function (instead entering the territory of multiple models trained in a game-theoretical way, each with its own objective). GANs are presently one of the hottest research areas in deep learning, with over 6000 citations, mostly from papers that introduce variants of GANs, which have been producing impressively realistic synthetic images one would not have imagined computers being able to generate just a few years ago.
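The two-player objective can be sketched directly. Here D and G are toy stand-in functions rather than trained networks (their forms are arbitrary), just to show how each player's minibatch objective is computed:

```python
import numpy as np

# GAN objectives on one minibatch: the discriminator D is trained to
# maximize log D(x) + log(1 - D(G(z))), i.e. to tell real samples from
# generated ones, while the generator G is trained to fool it.

rng = np.random.default_rng(0)

def G(z):                       # "generator": maps noise to samples (toy stand-in)
    return 2.0 * z + 1.0

def D(x):                       # "discriminator": probability that x is real (toy stand-in)
    return 1.0 / (1.0 + np.exp(-x))

real = rng.normal(loc=4.0, size=256)   # samples from the data distribution
z = rng.normal(size=256)               # noise fed to the generator
fake = G(z)

# Minibatch estimate of the minimax value V(D, G).
d_objective = np.mean(np.log(D(real)) + np.log(1.0 - D(fake)))
g_loss = -np.mean(np.log(D(fake)))     # the "non-saturating" generator loss
print(d_objective, g_loss)
```

In actual training, gradient steps alternate between the two players, which is what takes the procedure outside the single-objective framework described above.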
2014-2016 Introducing content-based soft attention and the breakthrough it brought to neural machine translation, mostly with Kyunghyun Cho and Dima Bahdanau. We first introduced the encoder-decoder (now called sequence-to-sequence) architecture (EMNLP 2014) and then achieved a big jump in BLEU scores with content-based soft attention (ICLR 2015). These ingredients are now the basis of most commercial machine translation systems; another entire sub-field was created using these techniques.
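The attention step itself is compact. A minimal sketch with toy shapes; dot-product scores stand in here for the small additive scoring network used in the ICLR 2015 paper:

```python
import numpy as np

# Content-based soft attention: compare the current decoder state with
# every encoder state, normalize the scores into weights with a softmax,
# and feed the weighted average (the context vector) to the decoder.

rng = np.random.default_rng(0)
T, d = 5, 8                                   # source length, hidden size (toy choices)

encoder_states = rng.standard_normal((T, d))  # one vector per source word
decoder_state = rng.standard_normal(d)        # current decoder hidden state

scores = encoder_states @ decoder_state       # content-based alignment scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                      # soft attention distribution over positions
context = weights @ encoder_states            # context vector fed to the decoder

print(weights.sum(), context.shape)
```

The weights form a soft alignment over source positions, recomputed at every output step, which is what produced the jump in translation quality described above.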
Awards
For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
Source: ACM
In 2000 he made a major contribution to natural language processing with the paper “A Neural Probabilistic Language Model.” Training networks to distinguish meaningful sentences from nonsense was difficult because there are so many different ways to express a single idea, with most combinations of words being meaningless. This causes what the paper calls the “curse of dimensionality,” demanding infeasibly large training sets and producing unworkably complex models. The paper introduced high-dimensional word embeddings as a representation of word meaning, letting networks recognize the similarity between new phrases and those included in their training sets, even when the specific words used are different. The approach has led to a major shift in machine translation and natural language understanding systems over the last decade.
Bengio’s group further improved the performance of machine translation systems by combining neural word embeddings with attention mechanisms. “Attention” is another term borrowed from human cognition. It helps networks to narrow their focus to only the relevant context at each stage of the translation in ways that reflect the context of words, including, for example, what a pronoun or article is referring to.
Together with Ian Goodfellow, one of his Ph.D. students, Bengio developed the concept of “generative adversarial networks.” Whereas most networks were designed to recognize patterns, a generative network learns to generate objects that are difficult to distinguish from those in the training set. The technique is “adversarial” because a network learning to generate plausible fakes can be trained against another network learning to identify fakes, allowing for a dynamic learning process inspired by game theory. The process is often used to facilitate unsupervised learning. It has been widely used to generate images, for example to automatically generate highly realistic photographs of non-existent people or objects for use in video games.
More Information
Wikipedia
Contents
Yoshua Bengio OC FRS FRSC (born March 5, 1964[3]) is a Canadian-French[4] computer scientist, and a pioneer of artificial neural networks and deep learning.[5][6][7] He is a professor at the Université de Montréal and scientific director of the AI institute MILA.[1]
Bengio received the 2018 ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing“, together with Geoffrey Hinton and Yann LeCun, for their foundational work on deep learning.[8] Bengio, Hinton, and LeCun are sometimes referred to as the “Godfathers of AI”.[9][10][11][12][13][14] Bengio is the most-cited computer scientist globally (by both total citations and by h-index[15]), and the most-cited living scientist across all fields (by total citations).[16] In 2024, TIME Magazine included Bengio in its yearly list of the world’s 100 most influential people.[17]
Early life and education
Bengio was born in France to a Jewish family who had emigrated to France from Morocco. The family then relocated to Canada.[4] He received his Bachelor of Science degree (electrical engineering), MSc (computer science) and PhD (computer science) from McGill University.[2][18]
Bengio is the brother of Samy Bengio,[4] also an influential computer scientist working with neural networks, who is currently Senior Director of AI and ML Research at Apple.[19]
The Bengio brothers lived in Morocco for a year during their father’s military service there.[4] His father, Carlo Bengio, was a pharmacist and a playwright; he ran a Sephardic theater company in Montreal that performed pieces in Judeo-Arabic.[20][21] His mother, Célia Moreno, was an actor in the 1970s in the Moroccan theater scene led by Tayeb Seddiki. She studied economics in Paris, and then in Montreal in 1980 she co-founded, with artist Paul St-Jean, l’Écran humain, a multimedia theater troupe.[22]
Career and research
After his PhD, Bengio was a postdoctoral fellow at MIT (supervised by Michael I. Jordan) and AT&T Bell Labs.[23] Bengio has been a faculty member at the Université de Montréal since 1993, heads the MILA (Montreal Institute for Learning Algorithms) and is co-director of the Learning in Machines & Brains program at the Canadian Institute for Advanced Research.[18][23]
Along with Geoffrey Hinton and Yann LeCun, Bengio is considered by journalist Cade Metz to be one of the three people most responsible for the advancement of deep learning during the 1990s and 2000s.[24] Among the computer scientists with an h-index of at least 100, Bengio was as of 2018 the one with the most recent citations per day, according to MILA.[25][26] As of August 2024, he has the highest Discipline H-index (D-index, a measure of the research citations a scientist has received) of any computer scientist.[27] Thanks to a 2019 article on a novel RNN architecture, Bengio has an Erdős number of 3.[28]
In October 2016, Bengio co-founded Element AI, a Montreal-based artificial intelligence incubator that turns AI research into real-world business applications.[24] The company sold its operations to ServiceNow in November 2020,[29] with Bengio remaining at ServiceNow as an advisor.[30][31]
Bengio currently serves as scientific and technical advisor for Recursion Pharmaceuticals[32] and scientific advisor for Valence Discovery.[33]
At the first AI Safety Summit in November 2023, British Prime Minister Rishi Sunak announced that Bengio would lead an international scientific report on the safety of advanced AI. An interim version of the report was delivered at the AI Seoul Summit in May 2024, and covered issues such as the potential for cyber attacks and ‘loss of control’ scenarios.[34][35][36] The full report was published in January 2025 as the International AI Safety Report.[37][38]
Views on AI
In March 2023, following concerns raised by AI experts about the existential risk from artificial general intelligence, Bengio signed an open letter from the Future of Life Institute calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4“. The letter has been signed by over 30,000 individuals, including AI researchers such as Stuart Russell and Gary Marcus.[39][40][41]
In May 2023, Bengio stated in an interview to BBC that he felt “lost” over his life’s work. He raised his concern about “bad actors” getting hold of AI, especially as it becomes more sophisticated and powerful. He called for better regulation, product registration, ethical training, and more involvement from governments in tracking and auditing AI products.[42][43]
Speaking with the Financial Times in May 2023, Bengio said that he supported the monitoring of access to AI systems such as ChatGPT so that potentially illegal or dangerous uses could be tracked.[44] In July 2023, he published a piece in The Economist arguing that “the risk of catastrophe is real enough that action is needed now.”[45]
Bengio co-authored a letter with Geoffrey Hinton and others in support of SB 1047, a California AI safety bill that would require companies training models which cost more than $100 million to perform risk assessments before deployment. They claimed the legislation was the “bare minimum for effective regulation of this technology.”[46][47]
Awards and honours
In 2017, Bengio was named an Officer of the Order of Canada.[48] The same year, he was nominated Fellow of the Royal Society of Canada and received the Marie-Victorin Quebec Prize.[49][50] Together with Geoffrey Hinton and Yann LeCun, Bengio won the 2018 Turing Award.[8]
In 2020, he was elected a Fellow of the Royal Society.[51] In 2022, he received the Princess of Asturias Award in the category “Scientific Research” with his peers Yann LeCun, Geoffrey Hinton and Demis Hassabis.[52] In 2023, Bengio was appointed Knight of the Legion of Honour, France’s highest order of merit.[53]
In August 2023, he was appointed to a United Nations scientific advisory council on technological advances.[54][55]
He was recognized as a 2023 ACM Fellow.[56]
In 2024, TIME Magazine included Bengio in its yearly list of the 100 most influential people globally.[57] In the same year, he was awarded VinFuture Prize‘s grand prize along with Geoffrey E. Hinton, Yann LeCun, Jen-Hsun Huang and Fei-Fei Li for pioneering advancements in neural networks and deep learning algorithms.[58]
In 2025 he was awarded the Queen Elizabeth Prize for Engineering jointly with Bill Dally, Geoffrey E. Hinton, John Hopfield, Yann LeCun, Jen-Hsun Huang and Fei-Fei Li.[59]
Publications
- Ian Goodfellow, Yoshua Bengio and Aaron Courville: Deep Learning (Adaptive Computation and Machine Learning), MIT Press, Cambridge (USA), 2016. ISBN 978-0262035613.
- Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio (2014). “Neural Machine Translation by Jointly Learning to Align and Translate”. arXiv:1409.0473 [cs.CL].
- Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, Yoshua Bengio, Yann LeCun: High Quality Document Image Compression with DjVu. In: Journal of Electronic Imaging, Vol. 7, 1998, pp. 410–425. doi:10.1117/1.482609
- Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I. and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS’22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009
- Y. Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, Zhouhan Lin: Towards Biologically Plausible Deep Learning, arXiv.org, 2016
- Bengio contributed one chapter to Architects of Intelligence: The Truth About AI from the People Building It, Packt Publishing, 2018, ISBN 978-1-78-913151-2, by the American futurist Martin Ford.[60]
References
- [1] Yoshua Bengio publications indexed by Google Scholar
- [2] Yoshua Bengio at the Mathematics Genealogy Project
- [3] “Yoshua Bengio – A.M. Turing Award Laureate”. amturing.acm.org. Archived from the original on November 27, 2020. Retrieved December 15, 2020.
- [4] “Interview: The Bengio Brothers”. Eye On AI. March 28, 2019. Archived from the original on April 10, 2021. Retrieved February 24, 2021.
- [5] Knight, Will (July 9, 2015). “IBM Pushes Deep Learning with a Watson Upgrade”. MIT Technology Review. Retrieved July 31, 2016.
- [6] Yann LeCun; Yoshua Bengio; Geoffrey Hinton (May 28, 2015). “Deep learning”. Nature. 521 (7553): 436–444. doi:10.1038/NATURE14539. ISSN 1476-4687. PMID 26017442. Wikidata Q28018765.
- [7] Bergen, Mark; Wagner, Kurt (July 15, 2015). “Welcome to the AI Conspiracy: The ‘Canadian Mafia’ Behind Tech’s Latest Craze”. Recode. Archived from the original on March 31, 2019. Retrieved July 31, 2016.
- [8] “Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award”. Association for Computing Machinery. New York. March 27, 2019. Archived from the original on March 27, 2019. Retrieved March 27, 2019.
- [9] “‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing”. March 27, 2019. Archived from the original on April 4, 2020. Retrieved December 9, 2019.
- [10] “Godfathers of AI Win This Year’s Turing Award and $1 Million”. March 29, 2019. Archived from the original on March 30, 2019. Retrieved December 9, 2019.
- [11] “Nobel prize of tech awarded to ‘godfathers of AI’”. The Telegraph. March 27, 2019. Archived from the original on April 14, 2020. Retrieved December 9, 2019.
- [12] “The 3 ‘Godfathers’ of AI Have Won the Prestigious $1M Turing Prize”. Forbes. Archived from the original on April 14, 2020. Retrieved December 9, 2019.
- [13] Ray, Tiernan. “Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws”. ZDNet. Archived from the original on March 3, 2020. Retrieved February 15, 2020.
- [14] “Turing Award Winners 2019 Recognized for Neural Network Research”. Bloomberg News. March 27, 2019. Archived from the original on April 10, 2020. Retrieved February 15, 2020.
- [15] “Best Computer Science Scientists”. research.com. Retrieved November 21, 2023.
- [16] “Highly Cited Researchers 2.393.028 Scientists Citation Rankings”. www.adscientificindex.com. Retrieved January 19, 2025.
- [17] “The 100 Most Influential People of 2024”. TIME. Retrieved August 28, 2024.
- [18] “Yoshua Bengio”. Profiles. Canadian Institute For Advanced Research. Archived from the original on August 15, 2016. Retrieved July 31, 2016.
- [19] “Apple targets Google staff to build artificial intelligence team”. Financial Times (ft.com). May 3, 2021. Retrieved September 13, 2024.
- [20] Levy, Elias (May 8, 2019). “À la mémoire de Carlo Bengio”. The Canadian Jewish News. Archived from the original on April 10, 2021. Retrieved February 24, 2021.
- [21] Tahiri, Lalla Nouzha (July 2017). Le théâtre juif marocain : une mémoire en exil : remémoration, représentation et transmission (doctoral thesis) (in French). Montréal (Québec, Canada): Université du Québec à Montréal. Archived from the original on April 10, 2021. Retrieved April 10, 2021.
- [22] “Célia Moréno, une marocaine au Québec”. Mazagan24 – Portail d’El Jadida (in French). November 14, 2020. Archived from the original on February 12, 2021. Retrieved February 24, 2021.
- [23] Bengio, Yoshua. “CV”. Département d’informatique et de recherche opérationnelle, Université de Montréal. Archived from the original on March 6, 2018. Retrieved July 31, 2016.
- [24] Metz, Cade (October 26, 2016). “AI Pioneer Yoshua Bengio Is Launching Element.AI, a Deep-Learning Incubator”. WIRED. Archived from the original on September 7, 2018. Retrieved September 7, 2018.
- [25] “Yoshua Bengio, the computer scientist with the most recent citations per day”. MILA. September 1, 2018. Archived from the original on October 1, 2018. Retrieved October 1, 2018.
- [26] “Computer science researchers with the highest rate of recent citations (Google Scholar) among those with the largest h-index”. University of Montreal. September 6, 2018. Archived from the original on October 13, 2018. Retrieved October 1, 2018.
- [27] “World’s Best Computer Science Scientists: H-Index Computer Science Ranking 2023”. Research.com. Retrieved May 20, 2023.
- [28] “Collaboration Distance – zbMATH Open”. zbmath.org. Retrieved May 20, 2023.
- [29] “ServiceNow to Acquire AI Pioneer Element AI”. Retrieved April 16, 2023.
- [30] “Element AI sold for $230-million as founders saw value mostly wiped out, document reveals”. Archived from the original on December 19, 2020. Retrieved December 19, 2020.
- [31] “Element AI hands out pink slips hours after announcement of sale to U.S.-based ServiceNow”. Archived from the original on December 14, 2020. Retrieved December 19, 2020.
- [32] “Yoshua Bengio – Recursion Pharmaceuticals”. Recursion Pharmaceuticals. Archived from the original on March 27, 2019. Retrieved March 27, 2019.
- [33] “Yoshua Bengio Joins Valence Discovery as Scientific Advisor”. Valence Discovery. Retrieved March 9, 2021.
- [34] Pillay, Tharin (September 5, 2024). “TIME100 AI 2024: Yoshua Bengio”. TIME. Retrieved September 23, 2024.
- [35] Hemmadi, Murad (November 3, 2023). “Bengio backs creation of Canadian AI safety institute, will deliver landmark report in six months”. The Logic. Retrieved September 23, 2024.
- ^ “International Scientific Report on the Safety of Advanced AI”. GOV.UK. Retrieved September 23, 2024.
- ^ Milmo, Dan (January 29, 2025). “What International AI Safety report says on jobs, climate, cyberwar and more”. The Guardian. ISSN 0261-3077. Retrieved February 3, 2025.
- ^ “International AI Safety Report 2025”. GOV.UK. Retrieved February 3, 2025.
- ^ Samuel, Sigal (March 29, 2023). “AI leaders (and Elon Musk) urge all labs to press pause on powerful AI”. Vox. Retrieved August 9, 2024.
- ^ Woollacott, Emma. “Tech Experts – And Elon Musk – Call For A ‘Pause’ In AI Training”. Forbes. Retrieved August 9, 2024.
- ^ “Pause Giant AI Experiments: An Open Letter”. Future of Life Institute. Retrieved August 9, 2024.
- ^ “One of the three ‘godfathers of A.I.’ feels ‘lost’ because of the direction the technology has taken”. Fortune. Retrieved June 15, 2023.
- ^ “AI ‘godfather’ Yoshua Bengio feels ‘lost’ over life’s work”. BBC News. May 30, 2023. Retrieved June 15, 2023.
- ^ Murgia, Madhumita (May 18, 2023). “AI pioneer Yoshua Bengio: Governments must move fast to ‘protect the public’”. Financial Times. Retrieved July 12, 2023.
- ^ “One of the “godfathers of AI” airs his concerns”. The Economist. ISSN 0013-0613. Retrieved December 22, 2023.
- ^ Pillay, Tharin; Booth, Harry (August 7, 2024). “Exclusive: Renowned Experts Pen Support for California’s Landmark AI Safety Bill”. TIME. Retrieved August 9, 2024.
- ^ “Letter from renowned AI experts”. SB 1047 – Safe & Secure AI Innovation. Retrieved August 9, 2024.
- ^ “Order of Canada honorees desire a better country”. The Globe and Mail. June 30, 2017. Archived from the original on April 28, 2019. Retrieved August 28, 2017.
- ^ “Royal Society of Canada”. December 16, 2017. Archived from the original on April 12, 2020. Retrieved December 16, 2017.
- ^ “Prix du Quebec”. December 16, 2017. Archived from the original on December 16, 2017. Retrieved December 16, 2017.
- ^ “Yoshua Bengio”. Royal Society. Archived from the original on October 27, 2020. Retrieved September 19, 2020.
- ^ “Geoffrey Hinton, Yann LeCun, Yoshua Bengio and Demis Hassabis – Laureates – Princess of Asturias Awards”. The Princess of Asturias Foundation. Retrieved May 20, 2023.
- ^ Guérard, Marc-Antoine (March 8, 2022). “Professor Yoshua Bengio appointed Knight of the Legion of Honour by France”. Mila. Retrieved July 30, 2023.
- ^ “University of Montreal professor to join new UN technology advisory board”. CJAD, Bell Media. The Canadian Press. Retrieved August 4, 2023.
- ^ “UN Secretary-General Creates Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology | UN Press”. press.un.org. August 3, 2023. Retrieved August 4, 2023.
- ^ “Yoshua Bengio”. awards.acm.org. Retrieved January 26, 2024.
- ^ “The 100 Most Influential People of 2024”. TIME. Retrieved August 28, 2024.
- ^ “The VinFuture 2024 Grand Prize honours 5 scientists for transformational contributions to the advancement of deep learning”. Việt Nam News. December 7, 2024.
- ^ “Queen Elizabeth Prize for Engineering 2025”. Queen Elizabeth Prize for Engineering.
- ^ Falcon, William (November 30, 2018). “This Is The Future Of AI According To 23 World-Leading AI Experts”. Forbes. Archived from the original on March 29, 2019. Retrieved March 20, 2019.