You have a total of 30 labelled examples from two classes. You want to use the leave-one-out cross-validation methodology to obtain a fair measure of the performance of your intended classifier. How many train/test sessions would you be running?
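As a quick sketch of the mechanics (plain Python, no ML library assumed): leave-one-out cross-validation holds out each example exactly once, training on the rest, so the number of train/test sessions equals the number of examples.

```python
# Leave-one-out cross-validation: with n labelled examples, each example
# serves as the test set exactly once, so there are n train/test sessions.
def loo_splits(n):
    """Yield (train_indices, test_index) pairs for leave-one-out CV."""
    for test_idx in range(n):
        train_idx = [i for i in range(n) if i != test_idx]
        yield train_idx, test_idx

sessions = list(loo_splits(30))
print(len(sessions))        # number of train/test sessions
print(len(sessions[0][0]))  # training-set size in each session
```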

You and your friend are working on a binary classification problem with 100 training examples, 80 from the positive class and 20 from the negative class. Your classifier achieves 82% accuracy by correctly classifying all of the positive examples and 2 of the negative examples. Your friend's classifier also achieves 82% accuracy, by correctly classifying 70 of the positive examples and 12 of the negative examples. Whose classifier is better?
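One way to reason about this (a sketch, not the only valid criterion): raw accuracy can be identical while per-class performance differs sharply on imbalanced data, so compare the per-class recalls, e.g. via balanced accuracy, computed here directly from the counts in the question.

```python
# Balanced accuracy (mean of per-class recalls) for the two classifiers
# described above: 80 positive and 20 negative examples in total.
def balanced_accuracy(tp, pos_total, tn, neg_total):
    """Mean of positive-class recall and negative-class recall."""
    return 0.5 * (tp / pos_total + tn / neg_total)

yours  = balanced_accuracy(80, 80,  2, 20)   # all positives, 2 negatives right
friend = balanced_accuracy(70, 80, 12, 20)   # 70 positives, 12 negatives right

print((80 + 2) / 100, (70 + 12) / 100)       # same plain accuracy: 0.82 each
print(yours)                                 # 0.55
print(friend)                                # 0.7375
```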

You are doing a binary classification problem with k-NN. Upon plotting the decision boundary, you notice that it is highly complex, with many disjointed regions. You want to make the boundary a little less complex. You will need to:
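A 1-D toy illustration of the effect (the data set is invented for this sketch): a mislabelled outlier creates extra decision regions under 1-NN, and increasing k smooths them away. Counting label changes along a fine grid is a crude proxy for boundary complexity.

```python
from collections import Counter

def knn_predict(train, x, k):
    """Majority label among the k nearest training points (1-D toy k-NN)."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Mostly class 0 on the left, class 1 on the right, with one mislabelled
# outlier at x = 2 (hypothetical data, chosen to show the effect).
train = [(0, 0), (1, 0), (2, 1), (3, 0), (4, 0),
         (5, 1), (6, 1), (7, 1), (8, 1), (9, 1)]

def boundary_crossings(k):
    """Count prediction changes along a fine grid over [0, 9]."""
    grid = [i / 10 for i in range(0, 91)]
    preds = [knn_predict(train, x, k) for x in grid]
    return sum(a != b for a, b in zip(preds, preds[1:]))

print(boundary_crossings(1))  # extra regions around the outlier
print(boundary_crossings(3))  # fewer crossings: a smoother boundary
```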

Is it possible to express the 1-NN decision boundary in closed form, i.e., via an equation?
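A relevant piece of geometry (a sketch of the simplest case, not a full answer): for two training points a and b, the 1-NN boundary is the perpendicular bisector, given by the linear equation 2(b - a) . x = ||b||^2 - ||a||^2, and the general boundary is made of such Voronoi edges.

```python
# For two training points a and b, the 1-NN decision boundary is
# ||x - a||^2 = ||x - b||^2, which simplifies to the linear equation
# 2 (b - a) . x = ||b||^2 - ||a||^2 (the perpendicular bisector).
def sq_dist(u, v):
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

def on_bisector(x, a, b):
    """Closed-form linear test for the 1-NN boundary between a and b."""
    lhs = 2 * sum((bi - ai) * xi for ai, bi, xi in zip(a, b, x))
    rhs = sum(bi * bi for bi in b) - sum(ai * ai for ai in a)
    return abs(lhs - rhs) < 1e-9

a, b = (0.0, 0.0), (2.0, 0.0)
x = (1.0, 5.0)                         # equidistant from a and b
print(sq_dist(x, a) == sq_dist(x, b))  # True
print(on_bisector(x, a, b))            # True
```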

You have 5-dimensional labelled data from 4 classes. You intend to use the FLD (Fisher linear discriminant) approach for dimensionality reduction. What is the lowest dimensionality of the data that is possible after applying this approach?
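A numerical sketch of the relevant constraint (synthetic, seeded data, invented for illustration): the between-class scatter matrix S_B is a sum of C rank-one terms whose mean deviations are linearly dependent, so its rank is at most C - 1, which caps the number of useful FLD directions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, classes, n_per_class = 5, 4, 20

# Synthetic 5-D data from 4 classes (illustrative only).
X = [rng.normal(loc=c, size=(n_per_class, d)) for c in range(classes)]

m = np.vstack(X).mean(axis=0)     # overall mean
S_B = np.zeros((d, d))            # between-class scatter matrix
for Xc in X:
    diff = (Xc.mean(axis=0) - m).reshape(-1, 1)
    S_B += n_per_class * (diff @ diff.T)

# FLD directions come from eigenvectors of S_W^{-1} S_B; at most
# rank(S_B) of them are useful, and rank(S_B) <= classes - 1.
print(np.linalg.matrix_rank(S_B))  # at most 3 here
```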

Decision tree classifiers are considered:

A decision tree for a 4-class problem has 12 terminal (leaf) nodes. What is the tree size in terms of its internal nodes?
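A counting sketch (assuming, as is typical, a binary tree; the growth procedure is illustrative): each split converts one leaf into an internal node and adds two new leaves, so the internal-node count trails the leaf count by exactly one, regardless of the number of classes.

```python
# In a binary tree, each split turns one leaf into an internal node and
# replaces it with two new leaves, so: internal nodes = leaves - 1.
def grow_counts(leaves_target):
    """Grow a binary tree split by split; return (internal, leaf) counts."""
    internal, leaves = 0, 1          # start from a single root leaf
    while leaves < leaves_target:
        internal += 1                # one leaf becomes an internal node
        leaves += 1                  # net gain of one leaf per split
    return internal, leaves

print(grow_counts(12))  # 12 terminal nodes pair with 11 internal nodes
```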

Suppose you have trained a three-layer feedforward network for a non-linearly separable binary classification problem. While deploying the network in actual use, you decide to make all activation functions linear. What would happen to the decision boundary?
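The key algebraic fact can be checked numerically (the weights below are random stand-ins, not a trained network): a composition of affine layers with linear activations collapses to a single affine map W x + b, whose zero level set is a hyperplane.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weights of a three-layer feedforward net with *linear*
# activations: y = W3 (W2 (W1 x + b1) + b2) + b3.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(1, 8)), rng.normal(size=1)

def net(x):
    return W3 @ (W2 @ (W1 @ x + b1) + b2) + b3

# The composition collapses to a single affine map W x + b:
W = W3 @ W2 @ W1
b = W3 @ (W2 @ b1 + b2) + b3

x = np.array([0.5, -1.0])
print(np.allclose(net(x), W @ x + b))  # True: the boundary W x + b = 0 is linear
```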

In a random forest classifier, every tree in the forest receives identical training input. True or false?
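A sketch of the mechanism in question (seeded, illustrative): each tree in a random forest trains on its own bootstrap sample, i.e., n draws with replacement from the n training examples, so different trees typically see different, overlapping subsets with duplicates.

```python
import random

# A bootstrap sample: n index draws with replacement from n examples.
# Different seeds stand in for different trees in the forest.
def bootstrap_sample(n, seed):
    rng = random.Random(seed)
    return [rng.randrange(n) for _ in range(n)]

tree_a = bootstrap_sample(100, seed=0)
tree_b = bootstrap_sample(100, seed=1)

print(tree_a == tree_b)        # False: the two trees see different inputs
print(len(set(tree_a)) < 100)  # True: duplicates appear, some examples are left out
```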