This notebook shows the difference between identifying $\mathbf{m}^*(\varphi)$ and $\mathbf{K}(\varphi)$ as opposed to identifying $\mathbf{m}^*(\varphi)$ alone from this control structure:
$$\mathbf{m}(t) = \mathbf{m}^*(\varphi) - \mathbf{K}(\varphi)\mathbf{s}(t)$$

import sys
sys.path.append('..')
import numpy as np
from gaitanalysis.controlid import SimpleControlSolver
from src import utils
from src.grf_landmark_settings import settings
%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(10, 8)
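As a concrete illustration of the control structure above, the commanded moments are the phase-dependent nominal moments minus scheduled feedback on the sensor deviations. The gains and signals below are made-up numbers for a hypothetical 2-control, 4-sensor system, not values from this data set:

```python
import numpy as np

m_star = np.array([10.0, -5.0])        # nominal moments m*(phi) at this gait phase
K = np.array([[1.0, 0.0, 0.5, 0.0],    # scheduled gain matrix K(phi)
              [0.0, 2.0, 0.0, 0.25]])
s = np.array([0.1, -0.2, 0.05, 0.4])   # sensor deviations s(t)

# m(t) = m*(phi) - K(phi) s(t)
m = m_star - K.dot(s)
print(m)  # -> [ 9.875 -4.7  ]
```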
trials_dir = utils.trial_data_dir()
Trials data directory is set to /home/moorepants/Data/human-gait/gait-control-identification
trial_number = '068'
event_data_frame, meta_data, event_data_path = utils.write_event_data_frame_to_disk(trial_number)
Trials data directory is set to /home/moorepants/Data/human-gait/gait-control-identification
Temporary data directory is set to ../data
Loading pre-cleaned data: ../data/cleaned-data-068-longitudinal-perturbation.h5
0.19 s
walking_data, walking_data_path = \
utils.write_inverse_dynamics_to_disk(event_data_frame, meta_data, event_data_path)
Loading pre-computed inverse dynamics from ../data/walking-data-068-longitudinal-perturbation.h5. 2.20 s
params = settings[trial_number]
steps, walking_data = utils.section_signals_into_steps(walking_data, walking_data_path,
filter_frequency=params[0],
threshold=params[1],
num_samples_lower_bound=params[2],
num_samples_upper_bound=params[3])
Loading pre-computed steps from ../data/walking-data-068-longitudinal-perturbation.h5. 0.103687
sensors, controls = utils.load_sensors_and_controls()
num_steps = steps.shape[0]
solver = SimpleControlSolver(steps.iloc[:num_steps * 3 / 4],
sensors,
controls,
validation_data=steps.iloc[num_steps * 3 / 4:])
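The solver trains on the first three quarters of the steps and validates on the remainder; under Python 2, `num_steps * 3 / 4` is integer division. A small sketch of the same split with a made-up step count:

```python
num_steps = 10  # hypothetical; the real count comes from steps.shape[0]
split = num_steps * 3 // 4  # floor division, matching Python 2's `/` on ints
train_idx = list(range(split))             # steps used for identification
valid_idx = list(range(split, num_steps))  # held-out steps used for validation
print(split)  # -> 7
```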
gain_omission_matrix = np.zeros((len(controls), len(sensors))).astype(bool)
result = solver.solve(gain_omission_matrix=gain_omission_matrix)
no_control_vafs = utils.variance_accounted_for(result[-1], solver.validation_data, controls)
fig, axes = utils.plot_validation(result[-1], walking_data.raw_data, no_control_vafs)
Generating validation plot. 0.83 s
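The variance accounted for is presumably the usual definition, one minus the ratio of residual variance to measured variance; a sketch under that assumption, with made-up arrays (`utils.variance_accounted_for` may differ in detail):

```python
import numpy as np

def vaf(measured, predicted):
    # VAF = 1 - var(residual) / var(measured); 1.0 means a perfect fit
    return 1.0 - np.var(measured - predicted) / np.var(measured)

measured = np.array([1.0, 2.0, 3.0, 4.0])
predicted = np.array([1.1, 1.9, 3.2, 3.8])
print(vaf(measured, predicted))  # -> 0.98
```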
for i, row in enumerate(gain_omission_matrix):
row[2 * i:2 * i + 2] = True
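The loop above turns on a 2-wide block in each control's row, so each joint moment only sees its own joint's pair of sensors (e.g. an angle and a rate). For a hypothetical 3-control, 6-sensor case the mask ends up block diagonal:

```python
import numpy as np

gain_omission_matrix = np.zeros((3, 6), dtype=bool)
for i, row in enumerate(gain_omission_matrix):
    row[2 * i:2 * i + 2] = True  # each control uses only its own sensor pair
print(gain_omission_matrix.astype(int))
# [[1 1 0 0 0 0]
#  [0 0 1 1 0 0]
#  [0 0 0 0 1 1]]
```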
result = solver.solve(gain_omission_matrix=gain_omission_matrix)
joint_isolated_control_vafs = utils.variance_accounted_for(result[-1], solver.validation_data, controls)
fig, axes = utils.plot_validation(result[-1], walking_data.raw_data, joint_isolated_control_vafs)
Generating validation plot. 1.09 s
Note that this solution will take 30 minutes to an hour if the `ignore_cov` flag is False, due to a speed bottleneck in `dtk.process.least_squares_variance`.
result = solver.solve(ignore_cov=True)
full_control_vafs = utils.variance_accounted_for(result[-1], solver.validation_data, controls)
fig, axes = utils.plot_validation(result[-1], walking_data.raw_data, full_control_vafs)
Generating validation plot. 1.60 s
vafs = no_control_vafs.copy()
for k, v in vafs.items():
vafs[k] = [v, joint_isolated_control_vafs[k], full_control_vafs[k]]
import pandas
vaf_df = pandas.DataFrame(vafs, index=['No Control', 'Joint Isolated Control', 'Full Control'])
vaf_df
| | Left.Ankle.PlantarFlexion.Moment | Left.Hip.Flexion.Moment | Left.Knee.Flexion.Moment | Right.Ankle.PlantarFlexion.Moment | Right.Hip.Flexion.Moment | Right.Knee.Flexion.Moment |
|---|---|---|---|---|---|---|
| No Control | 0.930887 | 0.914203 | 0.834340 | 0.942881 | 0.958586 | 0.803446 |
| Joint Isolated Control | 0.965913 | 0.926422 | 0.882971 | 0.971519 | 0.966458 | 0.849856 |
| Full Control | 0.974590 | 0.955228 | 0.928973 | 0.979329 | 0.977749 | 0.898630 |
mean_vaf = vaf_df.mean(axis=1)
mean_vaf
No Control                0.897390
Joint Isolated Control    0.927190
Full Control              0.952416
dtype: float64
The joint isolated gain matrix structure helps the base model account for ~3% more of the variance in the data.
100.0 * (mean_vaf.iloc[1] - mean_vaf.iloc[0])
2.9799602844254647
And the full gain matrix helps the base model account for ~5.5% more of the variance in the data.
100.0 * (mean_vaf.iloc[2] - mean_vaf.iloc[0])
5.5026142435791403
!git rev-parse HEAD
7ba68f0160c23a61204291ca107ad570ac6f6e5a
!git --git-dir=/home/moorepants/src/GaitAnalysisToolKit/.git --work-tree=/home/moorepants/src/GaitAnalysisToolKit rev-parse HEAD
a3732352747bc03ca839df9ff02ddcbd889e636d
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
Installed version_information.py. To use it, type: %load_ext version_information
%load_ext version_information
%version_information gaitanalysis, numpy, scipy, pandas, matplotlib, tables, oct2py
Software | Version |
---|---|
Python | 2.7.8 64bit [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] |
IPython | 2.3.0 |
OS | Linux 3.13.0-37-generic x86_64 with debian jessie sid |
gaitanalysis | 0.1.0dev |
numpy | 1.8.2 |
scipy | 0.14.0 |
pandas | 0.12.0 |
matplotlib | 1.4.0 |
tables | 3.1.1 |
oct2py | 2.4.0 |
Fri Oct 17 12:23:51 2014 EDT |
!pip freeze
DynamicistToolKit==0.3.5
-e git+git@github.com:csu-hmc/GaitAnalysisToolKit.git@a3732352747bc03ca839df9ff02ddcbd889e636d#egg=GaitAnalysisToolKit-origin/HEAD
Jinja2==2.7.2
MarkupSafe==0.18
PyYAML==3.11
backports.ssl-match-hostname==3.4.0.2
ipython==2.3.0
matplotlib==1.4.0
numexpr==2.3.1
numpy==1.8.2
oct2py==2.4.0
pandas==0.12.0
patsy==0.3.0
pyparsing==2.0.1
python-dateutil==1.5
pytz==2014.7
pyzmq==14.3.0
scipy==0.14.0
six==1.8.0
statsmodels==0.5.0
tables==3.1.1
tornado==3.2.1
uncertainties==2.4.6.1
wsgiref==0.1.2