%matplotlib inline
import mir_eval, librosa, numpy, matplotlib.pyplot as plt
mir_eval
mir_eval (documentation, paper) is a Python library containing evaluation functions for a variety of common audio and music processing tasks. mir_eval was primarily created by Colin Raffel. This notebook was created by Brian McFee and edited by Steve Tjoa.
Why mir_eval?
Most tasks in MIR are complicated. Evaluation is also complicated! Any given task can be evaluated in many ways, and there is no one right way. There are many issues to consider when choosing an evaluation method.
mir_eval tasks and submodules
Each supported task has its own submodule, e.g. mir_eval.onset, mir_eval.beat, mir_eval.chord, and mir_eval.separation (which implements bss_eval, originally distributed in Matlab).
Installing mir_eval
pip install mir_eval
If that doesn't work:
pip install --no-deps mir_eval
y, sr = librosa.load('audio/simple_piano.wav')
# Estimate onsets.
est_onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
est_onsets
array([0.27863946, 0.510839 , 0.81269841, 1.021678 , 1.32353741, 1.50929705, 1.83437642, 2.02013605, 2.36843537, 2.53097506, 2.87927438, 3.0185941 , 3.36689342, 3.59909297])
# Load the reference annotation.
ref_onsets = numpy.array([0.1, 0.21, 0.3])
mir_eval.onset.evaluate(ref_onsets, est_onsets)
OrderedDict([('F-measure', 0.11764705882352941), ('Precision', 0.07142857142857142), ('Recall', 0.3333333333333333)])
To decide which estimated onsets match which reference onsets, mir_eval finds the largest feasible set of matches within a tolerance window (50 ms by default) using the Hopcroft-Karp algorithm.
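To build intuition for window-based matching, here is a simplified greedy sketch in plain numpy. It is illustrative only: `match_events` and the toy data below are not mir_eval's API, and mir_eval itself solves a maximum bipartite matching with Hopcroft-Karp, which can recover more matches when candidate windows overlap.

```python
import numpy as np

def match_events(ref, est, window=0.05):
    """Greedily match each reference event to the earliest unused
    estimated event within +/- window seconds."""
    ref, est = np.sort(ref), np.sort(est)
    matches, j = [], 0
    for r in ref:
        # Skip estimates that are already too early to match r.
        while j < len(est) and est[j] < r - window:
            j += 1
        if j < len(est) and abs(est[j] - r) <= window:
            matches.append((r, est[j]))
            j += 1
    return matches

ref = np.array([0.1, 0.21, 0.3])
est = np.array([0.27863946, 0.510839, 0.81269841])
m = match_events(ref, est)
precision = len(m) / len(est)
recall = len(m) / len(ref)
f = 2 * precision * recall / (precision + recall)
```

With these toy times, only the reference onset at 0.3 s finds a match (the estimate at 0.279 s), so precision and recall are both 1/3.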
est_tempo, est_beats = librosa.beat.beat_track(y=y, sr=sr)
est_beats = librosa.frames_to_time(est_beats, sr=sr)
est_beats
array([0.53405896, 1.021678 , 1.53251701, 2.04335601, 2.53097506])
# Load the reference annotation.
ref_beats = numpy.array([0.53, 1.02])
mir_eval.beat.evaluate(ref_beats, est_beats)
/Users/stjoa/anaconda3/lib/python3.6/site-packages/mir_eval/beat.py:91: UserWarning: Reference beats are empty.
  warnings.warn("Reference beats are empty.")
/Users/stjoa/anaconda3/lib/python3.6/site-packages/mir_eval/beat.py:93: UserWarning: Estimated beats are empty.
  warnings.warn("Estimated beats are empty.")
OrderedDict([('F-measure', 0.0), ('Cemgil', 0.0), ('Cemgil Best Metric Level', 0.0), ('Goto', 0.0), ('P-score', 0.0), ('Correct Metric Level Continuous', 0.0), ('Correct Metric Level Total', 0.0), ('Any Metric Level Continuous', 0.0), ('Any Metric Level Total', 0.0), ('Information gain', 0.0)])
mir_eval.chord.evaluate()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-35-16035b8d87a1> in <module>()
----> 1 mir_eval.chord.evaluate()

TypeError: evaluate() missing 4 required positional arguments: 'ref_intervals', 'ref_labels', 'est_intervals', and 'est_labels'
Hidden benefits
mir_eval has tools for display and sonification.
import librosa.display
import mir_eval.display
Common plots: events, labeled_intervals, pitch, multipitch, piano_roll, segments, hierarchy, separation.
# Compute a mel spectrogram so there is something to plot under the beat markers.
S = librosa.feature.melspectrogram(y=y, sr=sr)
librosa.display.specshow(librosa.power_to_db(S, ref=numpy.max), x_axis='time', y_axis='mel')
mir_eval.display.events(ref_beats, color='w', alpha=0.8, linewidth=3)
mir_eval.display.events(est_beats, color='c', alpha=0.8, linewidth=3, linestyle='--')
y_harm, y_perc = librosa.effects.hpss(y, margin=8)
plt.figure(figsize=(12, 4))
mir_eval.display.separation([y_perc, y_harm], sr, labels=['percussive', 'harmonic'])
plt.legend()
<matplotlib.legend.Legend at 0x117a2f048>
from IPython.display import Audio
Audio(data=numpy.vstack([y_perc, y_harm]), rate=sr)
mir_eval.sonify can synthesize annotations as audio, e.g. mir_eval.sonify.chords().