# Machine Learning Overview

### Preliminaries

• Goal
• Top-level overview of machine learning
• Materials
• Mandatory
• this notebook
• the pre-recorded video lecture
• video recording of Q&A sessions
• Optional
• Study Bishop pp. 1-4

### What is Machine Learning?

• Machine Learning relates to building models from data and using these models in applications.
• Problem: Suppose we want to develop an algorithm for a complex process about which we have little knowledge (so hand-programming is not possible).
• Solution: Get the computer to develop the algorithm by itself by showing it examples of the behavior that we want.
• Practically, we choose a library of models, and write a program that picks a model and tunes it to fit the data.
• Performance criterion: a good model should generalize well to unseen data from the same process.
• This field is known under different names, with slight variations in emphasis, across scientific communities: machine learning, statistical inference, system identification, data mining, source coding, data compression, data science, etc.

### Machine Learning and the Scientific Inquiry Loop

• Machine learning technology uses the scientific inquiry loop to develop models and use these models in applications.

### Machine Learning is Difficult

• Modeling (Learning) Problems
• Is there any regularity in the data anyway?
• What is our prior knowledge and how to express it mathematically?
• How to pick the model library?
• How to tune the models to the data?
• How to measure the generalization performance?
• Quality of Observed Data
• Not enough data
• Too much data?
• Available data may be messy (measurement noise, missing data points, outliers)

### A Machine Learning Taxonomy

• Supervised Learning: Given examples of inputs and corresponding desired outputs, predict outputs on future inputs.
• Examples: classification, regression, time series prediction
• Unsupervised Learning (a.k.a. density estimation): Given only inputs, automatically discover representations, features, structure, etc.
• Examples: clustering, outlier detection, compression
• Trial Design (a.k.a. experimental design, active learning): Learn to take actions that optimize a performance criterion concerning the expected future.
• Examples: playing games like chess, self-driving cars, robotics.
• Two major approaches include reinforcement learning and active inference
• Reinforcement Learning: Given an observed sequence of input signals and (occasionally observed) rewards for those inputs, learn to select actions that maximize expected future rewards.
• Active inference: Given an observed sequence of input signals and a prior probability distribution about future observations, learn to select actions that minimize expected prediction errors (i.e., minimize actual minus predicted sensation).
• Other problem classes, such as preference learning, learning to rank, etc. (not covered in this course). Note that many machine learning problems can be (re-)formulated as special cases of a supervised, unsupervised, or trial design problem.
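To make the trial design idea concrete, here is a toy sketch (in Python, with an invented reward setup; the function names and numbers are illustrative, not part of the course material) of epsilon-greedy action selection on a multi-armed bandit, a minimal instance of the reinforcement learning problem: act to maximize expected future rewards while still exploring.

```python
import random

def run_bandit(true_means, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy action selection on a toy multi-armed bandit."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # times each arm was pulled
    estimates = [0.0] * n     # running estimate of each arm's mean reward
    for _ in range(steps):
        # explore with probability eps, otherwise exploit the best estimate
        if rng.random() < eps:
            a = rng.randrange(n)
        else:
            a = max(range(n), key=lambda i: estimates[i])
        reward = rng.gauss(true_means[a], 1.0)               # noisy reward
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean
    return estimates, counts

estimates, counts = run_bandit([0.1, 0.5, 0.9])
```

After enough trials the agent concentrates its actions on the arm with the highest (unknown) mean reward, illustrating how actions are selected to optimize expected future performance.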

### Supervised Learning

• Given observations of desired input-output behavior $D=\{(x_1,y_1),\dots,(x_N,y_N)\}$ (with $x_i$ inputs and $y_i$ outputs), the goal is to estimate the conditional distribution $p(y|x)$ (i.e., how does $y$ depend on $x$?).
##### Classification

• The target variable $y$ is a discrete-valued vector representing class labels
• The special case $y \in \{\text{true},\text{false}\}$ is called detection.
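As a toy illustration (a Python sketch with made-up 1-D data; this is one of many possible classifiers, not a method prescribed by the course), a nearest-mean classifier for a detection problem with $y \in \{\text{true},\text{false}\}$:

```python
def nearest_mean_classifier(train, labels):
    """Fit a minimal classifier: store the mean input per class label."""
    groups = {}
    for x, y in zip(train, labels):
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    def predict(x):
        # assign the label whose class mean is closest to x
        return min(centroids, key=lambda y: abs(x - centroids[y]))
    return predict

# hypothetical detection data: two 'false' inputs near 0, two 'true' inputs near 2
predict = nearest_mean_classifier([0.1, 0.3, 1.8, 2.2], [False, False, True, True])
```

Given a new input, `predict` returns the label of the nearest class mean, a crude stand-in for the conditional distribution $p(y|x)$.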
##### Regression

• Same problem statement as classification but now the target variable is a real-valued vector.
• Regression is sometimes called curve fitting.
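The simplest curve-fitting example is an ordinary least-squares straight line; a minimal Python sketch (the data below are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y ≈ a*x + b, in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope = sample covariance(x, y) / sample variance(x)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # data generated by y = 2x + 1
```

Here the 'model library' is the set of all straight lines, and tuning the model to the data amounts to choosing the slope and intercept that minimize the squared prediction errors.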

### Unsupervised Learning

Given data $D=\{x_1,\ldots,x_N\}$, model the (unconditional) probability distribution $p(x)$ (a.k.a. density estimation). The two primary applications are clustering and compression (a.k.a. dimensionality reduction).
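A minimal density-estimation sketch (in Python, on invented scalar data): fit a 1-D Gaussian to $D$ by maximum likelihood and evaluate the resulting $p(x)$.

```python
import math

def fit_gaussian(xs):
    """Maximum-likelihood estimate of a 1-D Gaussian density p(x)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n   # ML (biased) variance
    return mean, var

def density(x, mean, var):
    """Evaluate the fitted Gaussian pdf at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mean, var = fit_gaussian([1.0, 2.0, 3.0, 4.0, 5.0])
```

No outputs $y$ appear anywhere: the model describes only how the inputs themselves are distributed.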

##### Clustering

• Group data into clusters such that all data points in a cluster have similar properties.
• Clustering can be interpreted as "unsupervised classification".
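A standard clustering algorithm is k-means; a plain Python sketch on invented scalar data (the initialization is deliberately crude to keep the example short):

```python
def kmeans_1d(xs, k=2, iters=20):
    """Plain k-means on scalar data: alternate assignment and mean update."""
    centers = sorted(xs)[:k]          # crude initialization: k smallest points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            # assignment step: attach each point to its nearest center
            j = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[j].append(x)
        # update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = sorted(kmeans_1d([0.9, 1.0, 1.1, 4.9, 5.0, 5.1]))
```

The algorithm discovers the two groups on its own, without ever being told any class labels, which is exactly the sense in which clustering is "unsupervised classification".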
##### Compression / dimensionality reduction

• The output of the coder is much smaller than the original, but if the coded signal is further processed by a decoder, the result is very close to (or exactly equal to) the original.
• Usually, the compressed representation consists of continuous-valued variables. In that case, compression can be interpreted as "unsupervised regression".
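The coder/decoder round trip above can be sketched with a minimal lossy scheme (in Python; the quantization step size and signal are invented for illustration): the coder maps real values to small integers, the decoder maps them back, and the restored signal is close to, but not exactly equal to, the original.

```python
def encode(xs, step=0.5):
    """Lossy coder: quantize each value to the nearest multiple of `step`."""
    return [round(x / step) for x in xs]   # small integers, cheaper to store

def decode(codes, step=0.5):
    """Decoder: map the integer codes back to real values."""
    return [c * step for c in codes]

signal = [0.12, 0.49, 1.02, 1.51]
restored = decode(encode(signal))
```

The reconstruction error is bounded by half the step size, which is the usual trade-off between compression rate and fidelity.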

### Trial Design

Given the state of the world (obtained from sensory data), the agent must learn to produce actions that optimize a performance criterion concerning the expected future.

### Some Machine Learning Applications

• computer speech recognition, speaker recognition
• face recognition, iris identification
• printed and handwritten text parsing
• financial prediction, outlier detection (credit-card fraud)
• user preference modeling (amazon); modeling of human perception
• modeling of the web (google)
• machine translation
• medical expert systems for disease diagnosis (e.g., mammogram)
• strategic games (chess, go, backgammon), self-driving cars

• In summary, machine learning applies to almost any 'knowledge-poor' but 'data-rich' problem

In [1]:
open("../../styles/aipstyle.html") do f display("text/html", read(f,String)) end
