What types of learning, if any, best describe the following three scenarios:
i. A coin classification system is created for a vending machine. In order to do this,
the developers obtain exact coin specifications from the U.S. Mint and derive
a statistical model of the size, weight, and denomination, which the vending
machine then uses to classify its coins.
ii. Instead of calling the U.S. Mint to obtain coin information, an algorithm is
presented with a large set of labeled coins. The algorithm uses this data to
infer decision boundaries which the vending machine then uses to classify its
coins.
iii. A computer develops a strategy for playing Tic-Tac-Toe by playing repeatedly
and adjusting its strategy by penalizing moves that eventually lead to losing.
[a] (i) Supervised Learning,
(ii) Unsupervised Learning,
(iii) Reinforcement Learning
[b] (i) Supervised Learning,
(ii) Not learning,
(iii) Unsupervised Learning
[c] (i) Not learning,
(ii) Reinforcement Learning,
(iii) Supervised Learning
[d] (i) Not learning,
(ii) Supervised Learning,
(iii) Reinforcement Learning
[e] (i) Supervised Learning,
(ii) Reinforcement Learning,
(iii) Unsupervised Learning
Answer: [d]: (i) uses a model derived from exact specifications, not from data, so it is not learning; (ii) infers a rule from labeled examples, which is supervised learning; (iii) improves through reward/penalty feedback from repeated play, which is reinforcement learning.
Which of the following problems are best suited for a machine-learning approach?
(i) Classifying numbers into primes and non-primes.
(ii) Detecting potential fraud in credit card charges.
(iii) Determining the time it would take a falling object to hit the ground.
(iv) Determining the optimal cycle for traffic lights in a busy intersection.
[a] (ii) and (iv)
[b] (i) and (ii)
[c] (i), (ii), and (iii)
[d] (iii)
[e] (i) and (iii)
Answer: [a]: (ii) and (iv) have no known analytic solution but plenty of data to learn from, whereas (i) and (iii) are solved exactly (by a primality test and by Newtonian mechanics), so learning is unnecessary.
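To illustrate why (i) does not call for learning, primality already has an exact rule; a minimal sketch using trial division:

```python
def is_prime(n):
    # Exact rule: trial division up to sqrt(n); no data or training needed.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

When a problem is fully specified like this, a hand-coded rule beats any learned approximation.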
We have 2 opaque bags, each containing 2 balls. One bag has 2 black balls and
the other has a black ball and a white ball. You pick a bag at random and then pick
one of the balls in that bag at random. When you look at the ball, it is black. You
now pick the second ball from that same bag. What is the probability that this ball
is also black?
[a] 1 / 4
[b] 1 / 3
[c] 1 / 2
[d] 2 / 3
[e] 3 / 4
Equally likely outcomes (bag, first ball, second ball):
bag_bw -> b -> w
bag_bw -> w -> b
bag_bb -> b -> b   (either of the two black balls drawn first)
bag_bb -> b -> b
Since ball 1 = b, we are left with 3 equally likely outcomes;
only 2 of those give b -> b, so the answer is 2/3.
Answer: [d]
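The enumeration above can be checked with a quick Monte Carlo simulation; a sketch (the trial count and seed are arbitrary):

```python
import random

def draw_two():
    # Pick a bag at random, then draw its two balls in random order.
    bag = random.choice([['b', 'b'], ['b', 'w']])
    random.shuffle(bag)
    return bag

random.seed(0)
trials = 100_000
first_black = both_black = 0
for _ in range(trials):
    ball1, ball2 = draw_two()
    if ball1 == 'b':
        first_black += 1
        if ball2 == 'b':
            both_black += 1

# Conditional frequency of "second black" given "first black"
estimate = both_black / first_black
print(estimate)  # close to 2/3
```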
Consider a sample of 10 marbles drawn from a bin that has red and green marbles. The probability that any marble we draw is red is μ=0.55 (independently, with replacement). We address the probability of getting no red marbles (ν=0) in the following cases:
We draw only one such sample. Compute the probability that $\nu = 0$. The
closest answer is (closest is the answer minimizing |your answer - given option|):
[a] 7.331 * 10^-6
[b] 3.405 * 10^-4
[c] 0.289
[d] 0.450
[e] 0.550
p_green = 1 - 0.55
print(pow(p_green, 10))  # 0.0003405062891601559
Answer: [b]
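Equivalently, $\nu = 0$ is the zero-successes case of a Binomial(10, μ) distribution; a quick stdlib cross-check (a sketch, with the variable names chosen here for illustration):

```python
from math import comb

mu = 0.55  # probability a drawn marble is red
N = 10     # marbles per sample

# P(nu = 0) = C(N, 0) * mu^0 * (1 - mu)^N
p_none = comb(N, 0) * (1 - mu) ** N
print(p_none)  # 0.0003405..., i.e. option [b]
```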
We draw 1,000 independent samples. Compute the probability that (at least)
one of the samples has $\nu = 0$. The closest answer is:
[a] 7.331 * 10^-6
[b] 3.405 * 10^-4
[c] 0.289
[d] 0.450
[e] 0.550
p = pow(p_green, 10)
# p * 1000 = 0.3405 is only a union-bound (over)estimate; the exact
# probability that at least one of the 1000 samples has nu = 0 is:
r = 1 - (1 - p) ** 1000
print(r)                   # 0.2886...
print('c =', abs(r - 0.289))
print('d =', abs(r - 0.450))
print('e =', abs(r - 0.550))
Answer: [c]
Answer: [e]: all choices get the same score, 3*1 + 2*3 + 1*3 + 0*1 = 12
%pylab inline
import random as rnd
import numpy as np
import matplotlib.pyplot as plt
def generateLine():
    # Two random points in [-1, 1]^2 define the target line; return its
    # slope k and intercept m.
    (x1, y1), (x2, y2) = np.random.uniform(-1, 1, (2, 2))
    k = (y2 - y1) / (x2 - x1)
    m = y1 - k * x1
    return k, m

def generateData(line, N=10):
    # N points with a constant-1 bias coordinate in column 0; each point
    # is labeled by which side of the line it falls on.
    k, m = line
    x = np.ones((N, 3))
    x[:, 1:] = np.random.uniform(-1, 1, (N, 2))
    y = x[:, 2] - (x[:, 1] * k + m)
    return x, np.sign(y)

def showData(x, y, line, label):
    # Scatter the two classes and draw the given line across [-1, 1].
    pos = y > 0
    neg = y < 0
    plt.plot(x[:, 1][pos], x[:, 2][pos], 'gD')
    plt.plot(x[:, 1][neg], x[:, 2][neg], 'rs')
    k, m = line
    v = np.array([-1.0, 1.0])
    plt.plot(v, v * k + m, label=label)

def perceptron(x, y):
    # PLA: until no point is misclassified, update w with the correction
    # for a randomly chosen misclassified point. Returns (w, #updates).
    w = np.zeros(x.shape[1], dtype=float)
    t = 0
    while True:
        missed = []
        for i, r in enumerate(x):
            result = np.sign(np.dot(r, w))
            if result != y[i]:
                missed.append(r * (y[i] - result))
        if len(missed) == 0:
            return w, t
        w = w + rnd.choice(missed)
        t += 1

def disagreement(x, y, w):
    # Fraction of points where sign(x . w) disagrees with the label,
    # an estimate of the out-of-sample error.
    return np.mean(y != np.sign(np.dot(x, w)))

def runProblem(N, n=1000):
    # Average PLA convergence time and disagreement over n random runs,
    # each with a fresh target line and training set of size N.
    c = 0.0
    d = 0.0
    for i in range(n):
        line = generateLine()
        x, y = generateData(line, N)
        w, t = perceptron(x, y)
        testX, testY = generateData(line, n)
        d += disagreement(testX, testY, w)
        c += t
    return {'iterations': c / n, 'disagreement': d / n}
line = generateLine()
x,y = generateData(line, 10)
w,_ = perceptron(x,y)
plt.title("PLA training result")
showData(x,y, line, "original")
showData(x, y, (- w[1] / w[2], - w[0] / w[2]), "PLA")
plt.legend(loc='best')
runProblem(10)
{'disagreement': 0.10742300000000012, 'iterations': 10.338}
Answer 7: [b]
Answer 8: [c]
runProblem(100)
{'disagreement': 0.013583999999999969, 'iterations': 92.811}
Answer 9: [b]
Answer 10: [b]