Lab 1 Questions Flashcards
In what year does Alan Turing propose the Turing Test?
1950
In what year is the term “artificial intelligence” first used?
1956
While at the Cornell Aeronautical Laboratory, Frank Rosenblatt develops the perceptron,
the first artificial neuron, which sets the stage for the development of networks of
artificial neurons, a.k.a. neural networks.
1957
Joseph Weizenbaum develops Eliza, the first chatbot. It is based on a set of re-write rules
written by hand.
1966
Marvin Minsky & Seymour Papert publish a book that shows the limits of perceptrons
and argues for more work in symbolic computation (a.k.a. Good Old-Fashioned AI, GOFAI).
The book is often cited as the main reason for the abandonment of research on neural networks.
1969
The Lighthill and ALPAC reports, which show little progress in AI, kill research
funding and lead to the first AI Winter.
early 1970s
The robot Shakey, programmed in LISP, results in the development of the A* search
algorithm.
1972
Alain Colmerauer, who was a professor at the University of Montreal for a few years,
develops Prolog, a programming language based on logic that is very popular in AI for
writing rule-based systems.
1972
Marvin Minsky develops Frames to reason with world knowledge. Years later, frames
turn out to be the basis of object-oriented programming.
1974
The expert system MYCIN is developed to recognise bacterial infections and recommend
antibiotics. Its recommendations are often better than those of human experts. It is
based on a knowledge base of ≈ 600 hand-written rules (written in Lisp) and developed
in collaboration with medical doctors.
1975
The METEO rule-based machine translation system, developed at the University of Montreal, is deployed at Environment Canada to translate weather forecasts from English to
French.
1975
Expert systems, such as MYCIN, and other types of systems made of hand-written rules
are considered too expensive to maintain and to adapt to new domains. The industry
drops research in such systems. It is the 2nd AI Winter.
early 1980s to early 1990s
Corinna Cortes and Vladimir Vapnik develop an approach to machine learning called soft
margin Support Vector Machines (SVM), which quickly becomes one of the most popular
machine learning algorithms.
1993
After finishing his PhD on handwriting recognition, Yann LeCun makes public the MNIST
dataset. The dataset contains 70,000 images of handwritten digits and becomes the
benchmark for evaluating machine learning.
1998
Google launches its Google Translate service based on Statistical Machine Translation.
Translation rules are found automatically based on a statistical analysis of parallel texts
in different languages.
2006
Netflix initiates a competition in machine learning to beat its own film recommendation
system. It provides a dataset of about 100 million movie ratings to learn recommendations automatically.
2006
The AlexNet system, developed at the University of Toronto by Alex Krizhevsky, a PhD
student of Geoffrey Hinton, wins the ImageNet Challenge and shows that deep learning
techniques can achieve significantly better results than classical machine learning techniques in image processing. This is considered to be a defining moment in the history of
AI.
2012
Google launches its Neural Machine Translation system based on recent advances in Deep
Neural Networks, and little by little drops the Statistical Machine Translation approach
of the 2000s.
2016
While having a beer at Les 3 Brasseurs at the corner of McGill College and Ste-Catherine,
Ian Goodfellow, a PhD student of Yoshua Bengio, comes up with an idea for a neural network that could generate realistic images. He calls them Generative Adversarial Networks,
a.k.a. GANs.
2014
AlphaGo beats the world champion at the game of Go. AlphaGo’s strategy is learned
automatically from playing a large number of games against itself.
2016
Canada invests massively in 3 AI research institutes: Amii in Alberta, Mila in Montréal,
and Vector Institute in Toronto.
2017
The Portrait of Edmond de Belamy, created by a Generative Adversarial Network (GAN), sells
for $432,500 at Christie’s auction in New York City.
2018
The Association for Computing Machinery (ACM) names Yoshua Bengio (from Mila,
Montreal), Geoffrey Hinton (from the Vector Institute, Toronto), and Yann LeCun (from
Facebook) recipients of the Turing Award for their contributions to the field of AI and
Deep Learning.
2018
OpenAI publishes its GPT-2 model, astounding both the public and experts with its text-generation capabilities, such as the unicorn story generated from a prompt.
2019