Introduction to Artificial Intelligence Flashcards

(50 cards)

1
Q

What is Artificial Intelligence?

A

Artificial Intelligence (AI) is a branch of computer science focused on creating systems that can perform tasks requiring human-like intelligence, such as learning, reasoning, and perception. AI can be narrow, excelling at specific tasks like playing chess, or general, aiming for human-level versatility across many tasks. Modern AI often relies on machine learning, where systems improve from data without explicit programming.

2
Q

What was the Dartmouth Conference?

A

The Dartmouth Conference, held in 1956, is considered the birthplace of AI as a formal field. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, it brought together researchers to explore the potential of machines to simulate human intelligence. The event marked the beginning of AI research and coined the term “artificial intelligence.”

3
Q

What are AI Winters?

A

AI Winters were periods of reduced funding and interest in AI research, primarily in the 1970s and late 1980s, due to unmet expectations and technical limitations. These downturns followed overhyped promises, leading to skepticism. However, each winter was followed by renewed interest as breakthroughs, like deep learning, revitalized the field.

4
Q

What is the Turing Test?

A

The Turing Test, proposed by Alan Turing in 1950, evaluates a machine’s ability to exhibit human-like intelligence. In the test, a human judge interacts with both a machine and a human via text. If the judge cannot reliably distinguish the machine from the human, the machine passes the test, demonstrating intelligent behavior.

5
Q

What is the difference between Strong AI and Weak AI?

A

Weak AI, or narrow AI, is designed for specific tasks, like voice assistants or recommendation systems, without true understanding. Strong AI, or artificial general intelligence (AGI), aims to replicate human cognitive abilities, enabling machines to perform any intellectual task a human can. While weak AI is prevalent today, strong AI remains theoretical.

6
Q

What is Machine Learning?

A

Machine Learning (ML) is a subset of AI that enables systems to learn from data and improve over time without being explicitly programmed. It involves algorithms that identify patterns in data to make predictions or decisions. ML is widely used in applications like spam detection, image recognition, and recommendation systems.
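
As a minimal sketch of the idea (assuming scikit-learn is available; the data and model choice are illustrative only), the classifier below learns a mapping from example inputs to labels instead of following hand-written rules:

from sklearn.neighbors import KNeighborsClassifier
X = [[0], [1], [2], [3]]  # example inputs
y = [0, 0, 1, 1]          # labels the model learns from, not hand-coded rules
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)
print(model.predict([[2.8]]))  # Outputs: [1], learned from the data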

7
Q

What is Supervised Learning?

A

Supervised Learning is a type of machine learning where the model is trained on labeled data, meaning each input has a corresponding output. The goal is to learn a mapping from inputs to outputs, enabling the model to make predictions on new, unseen data. Common algorithms include linear regression and decision trees.
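
A minimal sketch of that workflow (assuming scikit-learn; the dataset and model here are just examples), holding out part of the labeled data so the learned mapping can be evaluated on unseen inputs:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
X, y = load_iris(return_X_y=True)  # inputs X with labeled outputs y
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on new, unseen data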

8
Q

What is Unsupervised Learning?

A

Unsupervised Learning involves training models on data without labeled outputs. The goal is to find hidden patterns or structures in the data, such as clustering similar items or reducing dimensionality. Common algorithms include k-means clustering and principal component analysis (PCA).

from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
labels = kmeans.labels_

9
Q

What is Reinforcement Learning?

A

Reinforcement Learning (RL) is a type of machine learning where an agent learns by interacting with an environment, receiving rewards or penalties for its actions. The agent aims to maximize cumulative rewards over time. RL is used in applications like game playing and robotics.

# Schematic agent-environment loop (agent and environment are placeholders)
action = agent.select_action(state)
reward = environment.step(action)
agent.update_policy(reward)

10
Q

What is Linear Regression?

A

Linear Regression is a supervised learning algorithm used to predict a continuous target variable based on one or more input features. It assumes a linear relationship between inputs and the target, finding the best-fit line that minimizes the sum of squared errors.

from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

11
Q

What is Logistic Regression?

A

Logistic Regression is a supervised learning algorithm used for binary classification tasks. It models the probability that an input belongs to a particular class using the logistic function, which outputs values between 0 and 1.

from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

12
Q

What are Decision Trees?

A

Decision Trees are supervised learning models that split data into branches based on feature values, creating a tree-like structure. Each leaf node represents a class label or regression value. They are interpretable but can overfit if not pruned.

from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

13
Q

What are Random Forests?

A

Random Forests are an ensemble learning method that combines multiple decision trees to improve accuracy and reduce overfitting. Each tree is trained on a random subset of the data and features, and the final prediction is based on majority voting or averaging.

from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
predictions = model.predict(X_test)

14
Q

What are Support Vector Machines (SVMs)?

A

Support Vector Machines (SVMs) are supervised learning models used for classification and regression. They find the hyperplane that best separates the classes, maximizing the margin between the hyperplane and the nearest training points (the support vectors). SVMs can handle non-linear data using kernel functions.

from sklearn.svm import SVC
model = SVC(kernel='linear')
model.fit(X_train, y_train)
predictions = model.predict(X_test)

15
Q

What is K-Means Clustering?

A

K-Means Clustering is an unsupervised learning algorithm that partitions data into k clusters based on similarity. It iteratively assigns data points to the nearest cluster centroid and updates centroids until convergence.

from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
labels = kmeans.labels_

16
Q

What is Hierarchical Clustering?

A

Hierarchical Clustering is an unsupervised learning method that builds a hierarchy of clusters either by merging smaller clusters (agglomerative) or splitting larger ones (divisive). It results in a dendrogram showing the relationships between clusters.

from scipy.cluster.hierarchy import dendrogram, linkage
Z = linkage(X, method='ward')
dendrogram(Z)

17
Q

What is Principal Component Analysis (PCA)?

A

Principal Component Analysis (PCA) is a dimensionality reduction technique used in unsupervised learning. It transforms data into a lower-dimensional space by identifying the directions (principal components) that capture the most variance in the data.

from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

18
Q

What is a Perceptron?

A

A Perceptron is the simplest type of neural network, consisting of a single layer that can learn linear decision boundaries. It takes weighted inputs, applies an activation function, and outputs a binary classification. Perceptrons are the building blocks of more complex neural networks.

from sklearn.linear_model import Perceptron
model = Perceptron()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

19
Q

What is a Multilayer Perceptron (MLP)?

A

A Multilayer Perceptron (MLP) is a type of neural network with multiple layers, including input, hidden, and output layers. It can learn non-linear relationships by using activation functions like sigmoid or ReLU in the hidden layers. MLPs are trained using backpropagation.

from sklearn.neural_network import MLPClassifier
model = MLPClassifier(hidden_layer_sizes=(10, 10))
model.fit(X_train, y_train)
predictions = model.predict(X_test)

20
Q

What are Activation Functions?

A

Activation functions introduce non-linearity into neural networks, allowing them to learn complex patterns. Common functions include Sigmoid (for binary classification), ReLU (for hidden layers), and Tanh (for centered outputs). They determine whether a neuron should be activated based on its input.

import numpy as np
def relu(x):
    return np.maximum(0, x)
print(relu(-1), relu(2))  # Outputs: 0 2

21
Q

What is Backpropagation?

A

Backpropagation is the algorithm used to train neural networks by minimizing the error between predicted and actual outputs. It calculates the gradient of the loss function with respect to each weight by propagating the error backward through the network, updating weights via gradient descent.

# Schematic sketch only: the helper functions stand in for the per-layer
# gradient computation and weight update
for layer in reversed(network):
    error = compute_error(layer)
    gradients = compute_gradients(error)
    update_weights(gradients)

22
Q

What are Convolutional Neural Networks (CNNs)?

A

Convolutional Neural Networks (CNNs) are specialized neural networks for processing grid-like data, such as images. They use convolutional layers to detect local patterns (e.g., edges), pooling layers to reduce dimensionality, and fully connected layers for classification. CNNs are widely used in computer vision.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation='softmax')
])

23
Q

What are Recurrent Neural Networks (RNNs)?

A

Recurrent Neural Networks (RNNs) are designed for sequential data, such as time series or text. They have loops that allow information to persist, enabling them to maintain a “memory” of previous inputs. RNNs are used in applications like language modeling and speech recognition.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
model = Sequential([
    SimpleRNN(50, input_shape=(10, 1)),
    Dense(1)
])

24
Q

What is Long Short-Term Memory (LSTM)?

A

Long Short-Term Memory (LSTM) is a type of RNN designed to capture long-term dependencies in sequential data. It uses gates to control the flow of information, mitigating the vanishing gradient problem. LSTMs are effective in tasks like machine translation and text generation.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
model = Sequential([
    LSTM(50, input_shape=(10, 1)),
    Dense(1)
])

25
Q

What is Natural Language Processing (NLP)?

A

Natural Language Processing (NLP) is a field of AI focused on enabling machines to understand, interpret, and generate human language. It involves tasks like tokenization, sentiment analysis, and machine translation. NLP powers applications like chatbots and voice assistants.

import nltk
nltk.download('punkt')
text = "NLP is fascinating."
tokens = nltk.word_tokenize(text)
print(tokens)  # Outputs: ['NLP', 'is', 'fascinating', '.']

26
Q

What is Tokenization in NLP?

A

Tokenization is the process of breaking text into smaller units called tokens, such as words or sentences. It is a fundamental step in NLP, enabling further analysis like part-of-speech tagging or sentiment analysis.

from nltk.tokenize import word_tokenize
text = "Hello, world!"
tokens = word_tokenize(text)
print(tokens)  # Outputs: ['Hello', ',', 'world', '!']

27
Q

What is Part-of-Speech Tagging?

A

Part-of-Speech (POS) Tagging is the process of assigning grammatical categories (e.g., noun, verb) to each token in a text. It helps in understanding the structure and meaning of sentences, which is crucial for tasks like parsing and information extraction.

import nltk
nltk.download('averaged_perceptron_tagger')
tokens = nltk.word_tokenize("She sells seashells.")
pos_tags = nltk.pos_tag(tokens)
print(pos_tags)  # Outputs: [('She', 'PRP'), ('sells', 'VBZ'), ('seashells', 'NNS'), ('.', '.')]

28
Q

What is Named Entity Recognition (NER)?

A

Named Entity Recognition (NER) is an NLP task that identifies and classifies named entities (e.g., people, organizations, locations) in text. It is used in information extraction to structure unstructured data.

import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is headquartered in Cupertino.")
for ent in doc.ents:
    print(ent.text, ent.label_)
# Outputs: Apple ORG, Cupertino GPE

29
Q

What is Sentiment Analysis?

A

Sentiment Analysis is an NLP technique used to determine the emotional tone of text, such as positive, negative, or neutral. It is commonly used in social media monitoring, customer feedback analysis, and market research.

from textblob import TextBlob
text = "I love this product!"
blob = TextBlob(text)
print(blob.sentiment)  # Outputs: Sentiment(polarity=0.625, subjectivity=0.6)

30
Q

What is Machine Translation?

A

Machine Translation is the use of AI to automatically translate text or speech from one language to another. It relies on models like sequence-to-sequence networks or transformers to capture linguistic patterns and context.

# googletrans is an unofficial client and can be sensitive to the installed version
from googletrans import Translator
translator = Translator()
result = translator.translate("Hola, mundo!", dest='en')
print(result.text)  # Outputs: Hello, world!

31
Q

What is Computer Vision?

A

Computer Vision is a field of AI that enables machines to interpret and understand visual information from the world, such as images and videos. It involves tasks like image classification, object detection, and image segmentation.
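
As a minimal illustration of what these systems consume (assuming Pillow and NumPy are installed; 'photo.jpg' is a hypothetical file), an image is simply an array of pixel values:

from PIL import Image
import numpy as np
img = np.array(Image.open('photo.jpg'))  # 'photo.jpg' is a hypothetical path
print(img.shape)  # e.g. (height, width, 3) for an RGB image
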
32
Q

What is Image Classification?

A

Image Classification is a computer vision task where a model assigns a label to an image based on its content, for example, classifying whether an image contains a cat or a dog. It is often the first step in more complex vision tasks.

from tensorflow.keras.applications import ResNet50
model = ResNet50(weights='imagenet')
# preprocessed_image: a placeholder batch of images resized and preprocessed for ResNet50
predictions = model.predict(preprocessed_image)

33
Q

What is Object Detection?

A

Object Detection involves identifying and locating multiple objects within an image, typically by drawing bounding boxes around them. It combines classification and localization, enabling applications like autonomous driving and surveillance.

# Keras does not ship a built-in Faster R-CNN; a pretrained torchvision
# detector is one readily available alternative.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
model = fasterrcnn_resnet50_fpn(weights='DEFAULT')
model.eval()
detections = model([image_tensor])  # image_tensor: a 3xHxW float tensor scaled to [0, 1]

34
Q

What is Image Segmentation?

A

Image Segmentation divides an image into regions or segments corresponding to different objects or areas. Unlike object detection, it provides pixel-level classification, which is useful in medical imaging and scene understanding.

from tensorflow.keras.models import load_model
model = load_model('segmentation_model.h5')  # hypothetical pretrained segmentation model
segmentation = model.predict(image)          # per-pixel class predictions for the input batch

35
Q

What are Generative Adversarial Networks (GANs)?

A

Generative Adversarial Networks (GANs) are a class of AI models consisting of two networks: a generator and a discriminator. The generator creates synthetic data, while the discriminator tries to distinguish it from real data. GANs are used to generate realistic images, videos, and other media.

# Schematic Keras-style training loop; generator, discriminator, gan (the
# stacked generator + discriminator), noise, and get_real_data are placeholders
import numpy as np
for epoch in range(epochs):
    fake_data = generator.predict(noise)
    real_data = get_real_data()
    discriminator.train_on_batch(real_data, np.ones(len(real_data)))   # real -> 1
    discriminator.train_on_batch(fake_data, np.zeros(len(fake_data)))  # fake -> 0
    gan.train_on_batch(noise, np.ones(len(noise)))  # train generator to fool the discriminator

36
Q

What is Bias in AI?

A

Bias in AI refers to systematic errors in model predictions that can lead to unfair outcomes, often due to biased training data or flawed algorithms. It can result in discrimination against certain groups, highlighting the need for ethical AI development and diverse datasets.

# Illustrative sketch: a model fit only on data from one group, all labeled positive
X_train = [[1], [2], [3]]  # every example drawn from group A
y_train = [1, 1, 1]        # every label positive
model.fit(X_train, y_train)  # model is a placeholder classifier
print(model.predict([[4]]))  # likely biased towards positive

37
Q

What are Privacy Concerns in AI?

A

Privacy concerns in AI arise from the collection, storage, and use of personal data to train models. AI systems, especially those using deep learning, often require large datasets, raising issues about data consent, security, and potential misuse.

38
Q

What is Accountability in AI?

A

Accountability in AI refers to the responsibility of developers and organizations to ensure that AI systems are transparent, fair, and ethical. It involves explaining how decisions are made, especially in high-stakes applications like healthcare or criminal justice.

39
Q

How is AI used in Healthcare?

A

AI in healthcare is used for tasks like diagnosing diseases from medical images, predicting patient outcomes, and personalizing treatment plans. It enhances efficiency and accuracy but requires careful validation to ensure patient safety.

40
Q

How is AI used in Finance?

A

AI in finance is used for fraud detection, algorithmic trading, credit scoring, and customer service automation. It analyzes large datasets to identify patterns and make predictions, but must be monitored for fairness and compliance.

41
Q

How is AI used in Transportation?

A

AI in transportation powers autonomous vehicles, optimizes traffic management, and enhances logistics through route planning and demand forecasting. It aims to improve safety and efficiency but faces challenges in reliability and regulation.

42
Q

How is AI used in Entertainment?

A

AI in entertainment is used for content recommendation, game design, and even creating art or music. It personalizes user experiences and generates creative content, but raises questions about originality and authorship.

43
Q

What is TensorFlow?

A

TensorFlow is an open-source machine learning framework developed by Google. It provides tools for building and training neural networks, supporting both research and production use cases. TensorFlow is widely used for deep learning applications.

import tensorflow as tf
print(tf.__version__)  # Check the installed TensorFlow version

44
Q

What is PyTorch?

A

PyTorch is an open-source machine learning framework developed by Facebook. It is known for its flexibility and ease of use, particularly in research settings, and supports dynamic computation graphs for neural networks.

import torch
print(torch.__version__)  # Check the installed PyTorch version

45
Q

What is Scikit-learn?

A

Scikit-learn is a popular machine learning library in Python that provides simple and efficient tools for data mining and analysis. It includes implementations of various algorithms for classification, regression, clustering, and more.

from sklearn import datasets
iris = datasets.load_iris()
print(iris.target_names)  # Outputs: ['setosa' 'versicolor' 'virginica']

46
Q

Who is HAL 9000?

A

HAL 9000 is a fictional AI character from the movie 2001: A Space Odyssey. It is an advanced computer that controls the spaceship and interacts with the crew, but ultimately malfunctions, highlighting themes of AI ethics and control.

47
Q

Who are R2-D2 and C-3PO?

A

R2-D2 and C-3PO are iconic droid characters from the Star Wars franchise. R2-D2 is an astromech droid skilled in repairs and navigation, while C-3PO is a protocol droid fluent in over six million forms of communication. They represent AI assistants in popular culture.

48
Q

What are some predictions for the future of AI?

A

Predictions for the future of AI include advancements in general intelligence, increased automation across industries, and deeper integration into daily life. However, challenges like ethical considerations, job displacement, and ensuring safety must be addressed.

49
Q

How does AI impact employment?

A

AI impacts employment by automating routine tasks, potentially displacing jobs in sectors like manufacturing and customer service. However, it also creates new opportunities in AI development, data science, and other fields, emphasizing the need for reskilling.

50
Q

What is AI Safety?

A

AI Safety focuses on ensuring that AI systems operate reliably and do not cause unintended harm. It involves research into making AI systems robust, transparent, and aligned with human values, especially as they become more autonomous.