Oldie Flashcards
Shortcomings of Information Gain?
- IG tends to prefer highly branching attributes
- Subsets are more likely to be pure when an attribute has a large number of values
- May result in overfitting
Solve with Gain Ratio
What aspect of Information Gain is “Gain Ratio” intended to remedy? Explain with equation how to achieve this.
- Fixes IG’s preference for highly branching attributes (e.g. ID-like attributes split the data into many tiny, pure subsets, giving a spuriously high IG)
- GR reduces the bias for IG toward highly branching attributes by normalising relative to the Split Information (SI).
GR(Ra | R) = IG(Ra | R) / SI(Ra | R)
= [H(R) - SumOf P(xi)H(xi)] / [- SumOf P(xi)logP(xi)]
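The IG/SI/GR relationship above can be sketched in Python (the class labels and the ID-like attribute below are toy data, made up for illustration):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H = -sum p_i * log2(p_i) over the class distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(attr_values, labels):
    """Gain Ratio = Information Gain normalised by Split Information."""
    n = len(labels)
    groups = {}  # class labels grouped by attribute value
    for v, y in zip(attr_values, labels):
        groups.setdefault(v, []).append(y)
    ig = entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())
    si = -sum(len(g) / n * log2(len(g) / n) for g in groups.values())
    return ig / si

# An ID-like attribute (unique per instance) gets maximal IG = 1.0,
# but its huge Split Information (2.0) pulls the Gain Ratio down.
labels = ["yes", "yes", "no", "no"]
ids = [1, 2, 3, 4]
print(gain_ratio(ids, labels))  # 0.5
```

Note the normalisation penalises exactly the many-valued splits that IG rewards.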
What’s the difference between classification and regression?
Classification = predicting which class a data point belongs to
- The dependent variable is categorical
Regression = predicting continuous values
- The dependent variable is numerical
Outline the nature of “hill climbing” & provide example of hill climbing algorithm.
- Iteratively moves to a better neighbouring solution until no improvement is possible
- May get stuck in a local maximum rather than finding the global maximum (unless the objective is convex/concave, where a local optimum is also global)
e.g. the EM algorithm - each iteration is guaranteed never to decrease the objective (the log likelihood)
What basic approach does hill climbing contrast with?
Exhaustive (brute-force) search of the whole hypothesis space - guaranteed to find the global optimum but usually computationally infeasible, whereas hill climbing is cheap but only locally optimal.
Advantages of “bagging”
- Decrease variance
- Highly effective over noisy data
- Performance is generally better & never substantially worse
- Possibility to parallelise computation of individual base classifiers
What is sum of squared errors? Name one thing it’s applied to.
A measure of cluster cohesion: the sum of squared distances between each point and the centroid of its cluster.
Applied to K-Means (it is the objective K-Means minimises, and is used to evaluate cluster quality).
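A minimal Python sketch of SSE over a clustering (the two 2-D clusters are toy data):

```python
def sse(clusters):
    """Sum of squared Euclidean distances from each point to its
    cluster's centroid, summed over all clusters."""
    total = 0.0
    for points in clusters:
        # centroid = coordinate-wise mean of the cluster's points
        centroid = [sum(coords) / len(points) for coords in zip(*points)]
        total += sum(sum((x - c) ** 2 for x, c in zip(p, centroid))
                     for p in points)
    return total

# Two toy 2-D clusters; every point lies distance 1 from its centroid.
clusters = [[(0, 0), (2, 0)], [(5, 5), (7, 5)]]
print(sse(clusters))  # 4.0
```

Lower SSE means tighter clusters; K-Means' update steps monotonically reduce it.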
How does stratified cross validation contrast with non-stratified cross validation?
Stratification is generally better than regular.
- Both in terms of bias and variance
Achieves this by rearranging the data to ensure each fold is a good representation of the whole data set.
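That rearrangement can be sketched as follows (a simple round-robin scheme over a made-up label list; real libraries do this more carefully):

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Assign instance indices to k folds so that each fold roughly
    preserves the overall class distribution."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    # deal each class's instances out round-robin across the folds
    for indices in by_class.values():
        for j, i in enumerate(indices):
            folds[j % k].append(i)
    return folds

labels = ["pos"] * 6 + ["neg"] * 3  # 2:1 class ratio overall
for fold in stratified_folds(labels, 3):
    print([labels[i] for i in fold])  # each fold keeps the 2:1 ratio
```

Each of the three folds ends up with two "pos" and one "neg", mirroring the full data set.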
Outline the difference between supervised and unsupervised ML methods
Unsupervised:
- No knowledge of labelled data
- Needs to find clusters, patterns, etc by itself
Supervised:
- Has knowledge of labelled data
- Learning involves comparing its current predicted output with the correct outputs
- Able to know what the error is
Define “discretisation” with an example
Process of converting continuous attributes to nominal or ordinal attributes, e.g. binning a continuous "age" attribute into ordinal ranges such as 0-18, 19-40, and 41+.
Some learners, such as Decision Trees, generally work better with nominal attributes.
Briefly describe what “overfitting” is
Overfitting is when the model learns the errors and/or noise in the training data rather than the underlying pattern (often made worse by a lack of training data)
- High variance in classifier
- Causes the training accuracy and test accuracy to not be similar - big gap between test and training curves
- Occurs when the model is excessively complex
- Struggles to generalise
- Overreacts to minor fluctuations in the training data
Briefly outline the “inductive learning hypothesis”
Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other, unobserved examples.
Categorise each as either “parametric” or “non-parametric”
i) SVM
ii) Max Entropy Model
iii) C4.5-style DT
SVM - depends on the kernel (linear: parametric; e.g. RBF: non-parametric)
Max Entropy - parametric (learns a fixed set of feature weights)
DT - non-parametric (the model's size grows with the training data)
What is a “consistent” classifier?
A classifier which is able to flawlessly predict the class of all TRAINING instances
Outline the difference between “Lazy” and “eager” classifiers
Lazy (instance-based) -> KNN
- Stores data and waits until a query is made to do anything
Eager (decision tree, SVM)
- Constructs a classification model (generalise the training data) before receiving queries (new data to classify)
What is Zero-R?
Classifies all instances as the most frequent class in the training data (a majority-class baseline).
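Zero-R is small enough to sketch in full (the training labels are made up):

```python
from collections import Counter

def zero_r(train_labels):
    """Zero-R: ignore the attributes entirely and always predict the
    most frequent class seen in training."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda instance: majority

classify = zero_r(["spam", "ham", "spam", "spam"])
print(classify("any instance at all"))  # spam
```

Its accuracy equals the majority class's relative frequency, which is why it serves as a baseline.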
Briefly outline the nature of conditional independence assumption in the context of NB
NB assumes the attributes are conditionally independent of one another given the class: P(x1, ..., xn | c) = P(x1 | c) x P(x2 | c) x ... x P(xn | c). Rarely true in practice, but it makes the model's parameters easy to estimate and NB often still performs well.
What is “hidden” in a HMM?
The sequence of states is hidden - we only observe the outputs (emissions) that the states generate, and must infer the underlying state sequence from them.
What is the role of “log likelihood” in EM algorithm?
The log likelihood of the observed data is the objective EM hill-climbs: each E-step/M-step iteration is guaranteed not to decrease it, and convergence is declared when it stops improving (changes by less than some threshold).
“Smoothing” is considered important when dealing with probability
i) why?
ii) name one smoothing method - explain how it works
i) Maximum-likelihood estimates assign probability 0 to events unseen in training; a single zero wipes out the whole product of probabilities (e.g. in NB).
ii) Laplace (add-one) smoothing: add 1 to every count before normalising, i.e. P(xi) = (count(xi) + 1) / (N + V), where N is the total count and V the number of distinct values.
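Add-one (Laplace) smoothing, one standard method, can be sketched as follows (the observations and vocabulary are toy data):

```python
from collections import Counter

def laplace_probs(observations, vocabulary):
    """Add-one smoothing: add 1 to every value's count so values unseen
    in the observations get a small non-zero probability instead of 0."""
    counts = Counter(observations)
    n, v = len(observations), len(vocabulary)
    return {x: (counts[x] + 1) / (n + v) for x in vocabulary}

probs = laplace_probs(["a", "a", "b"], vocabulary=["a", "b", "c"])
print(probs["c"])  # 1/6, rather than 0, even though "c" was never observed
```

The smoothed probabilities still sum to 1 because the denominator grows by one per vocabulary entry.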
What are 2 parameters required to generate a Gaussian probability density function?
The mean (mu) and the variance (sigma^2) (equivalently, the standard deviation sigma).
How are missing values in test instances dealt with in NB?
If training -> exclude the missing value from the counts for that attribute
If test -> omit that attribute's factor from the product when classifying
i.e. simply ignore missing values rather than trying to impute them.
Briefly explain how to generate a training & test learning curve
Train the classifier on increasingly large subsets of the training data (e.g. 10%, 20%, ..., 100%). At each size, record accuracy on the training subset and on a held-out test set, then plot both accuracies against training-set size.
Explain with equations what PRECISION and RECALL are
Use Precision and Recall when we care about how well the positive class is identified, rather than overall accuracy.
Precision = positive predictive value - the proportion of positive predictions that are correct
= TP / (TP + FP)
Recall = sensitivity - accuracy with respect to the actual positive cases
= TP / (TP + FN)
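Both equations computed directly from gold and predicted labels (the two label lists are toy data):

```python
def precision_recall(gold, predicted, positive="pos"):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN),
    counted with respect to the given positive class."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    return tp / (tp + fp), tp / (tp + fn)

gold      = ["pos", "pos", "pos", "neg", "neg"]
predicted = ["pos", "pos", "neg", "pos", "neg"]
p, r = precision_recall(gold, predicted)
print(p, r)  # 2/3 precision (one FP), 2/3 recall (one FN)
```

Note neither formula uses TN, which is exactly why they focus on the positive class rather than overall accuracy.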