CoSMo 2014

This page contains course materials for the CoSMo 2014 summer school.

[CoSMo logo]



Introduction - overview of sensory-motor control

Aug 4-5
Lecturers: Gunnar Blohm, Kurt Thoroughman, Paul Schrater


Thoroughman slides 1
Thoroughman slides 2
Thoroughman notes
Blohm slides part 1
Thoroughman notes
Thoroughman slides 1
Thoroughman slides 2
Thoroughman slides 3
Thoroughman slides 4
Blohm slides part 2
Thoroughman - Interpreting imaging studies
How to model


Afternoon tutorials
Do Iterations, Fibonacci Numbers and Matrices in Moler's Matlab tutorial
Reproduce figure 1 from Pouget & Snyder paper: an example Matlab code
Bayesian tutorial and related Data set
Light bulb tutorial and related data set. Feel free to also consult Dayan & Abbott chapter 3
A solution for the light bulb tutorial...


Additional documents
Poggio & Bizzi 2004 paper
Eve Marder's viewpoint on modelling
Jimenez, Heliot and Carmena 2009
Hsiao, Fettiplace and Darbandi 2011



DREAM database - Introduction to the data and model sharing initiative

Aug 4 (evening)

Lecturer: Gunnar Blohm

You can get the DREAM project from Gunnar on a USB drive. DREAM can also be downloaded piece-wise (data sets, models, tools, and documentation) from CRCNS: http://crcns.org/data-sets/movements/dream/downloading-dream. You will need to create an account on CRCNS to be able to download the project files. If you want "all" of DREAM (models, tools, and documentation), click here: AllDream.zip

If you're familiar with svn and would like info/credentials for code in the repository, contact Ben Walker


Here's the latest version of LoadDreamPaths.m. (This script should work for all OSes.)
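
If you just need the DREAM folders on your Matlab path and prefer not to use the script, a minimal alternative is the sketch below. It assumes the archive was unzipped into a folder named DREAM under the current directory (that folder name is an assumption), and the bundled script may do more than this.

  % Minimal alternative to LoadDreamPaths.m (assumes DREAM was unzipped into a
  % folder named 'DREAM' below the current directory -- adjust as needed).
  dreamRoot = fullfile(pwd, 'DREAM');
  addpath(genpath(dreamRoot));   % adds data sets, models and tools to the path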


Here is a description of the data sets currently in DREAM. DREAM is growing, but this list is accurate as of the time of the summer school (click on a link to access the related publication).

  • Burns -- reaching with head tilt and left/right visual perturbations
  • Corbett -- reach trajectory predictions based on EMG and gaze movements
  • Fernandes -- reaching with uncertain and rotated midpoint feedback
  • Flint -- decoding of reaching movements from local field potentials
  • Kording -- reaching with uncertain midpoint feedback
  • Mattar 07 -- generalization from one, two, or multiple training targets to another movement direction
  • Mattar 10 -- reaching to one distance (short/long) and generalizing to the other (long/short)
  • Ostry -- movement in a force field, with an estimate of where the hand is
  • Scott -- monkey (no spike data), center-out reaching with evenly and unevenly distributed targets, plus a forward/back condition
  • Stevenson -- center-out reaching in a monkey, with neural spike time stamps
  • Thoroughman -- reach adaptation to perturbations with different complexity
  • Vahdat -- movement in force field with FMRI scans pre/post learning
  • Wei 08 -- visual perturbations, cursor shown only at target
  • Wei 10 -- movement in differing force fields
  • Young -- movement time stayed the same, but distance changed; fast, medium, slow reaches.


Motor control & learning

Aug 6-7
Lecturers: Reza Shadmehr, Adrian Haith, Alaa Ahmed


Haith lecture
Ahmed lecture
Reza's slides cannot be posted due to copyright/embargo rules. If you would like a copy of them, please ask Gunnar.


Afternoon tutorial 1
Instructions
Saccade code


Afternoon tutorial 2
Kalman filter assignment



Sensory-motor transformations

Aug 8-9
Lecturers: Andrea Green and Paul Cisek


Green lecture
Cisek lecture


Afternoon tutorials 1
Tutorial files


Afternoon tutorials 2
Tutorial instructions and Matlab code



Prosthetics

Aug 11-12
Lecturers: Jon Sensinger and Levi Hargrove


Hargrove lecture 1
Hargrove lecture 2


Afternoon tutorial 1
Tutorial files


Afternoon 2 tutorials
Tutorial instructions and code
RIC data set: ask Levi Hargrove
Ninapro data



The Bayesian Brain

Aug 13-14
Lecturers: Adam Johnson, Paul Schrater

 Schrater slides 

Johnson slides

Afternoon tutorials 1
Tutorial files (instructions inside the *.m files)
There are 3 problems to work on today.
Afternoon tutorials 2
Learning tutorial instructions
Matlab file

"Problem 1" In the first, start with AttractProjectGoals.m. This file has two other files, face imanalysis.mat and faceimgui.m The project explores the question "Where do cues come from?" in cue combination. Normally there are clever guesses by experimenters, but in less studied domains little is known about the information subjects use to infer properties. In this project we treat a toy version of a real problem - what are the cues to facial attractiveness? Here I have taken a database of images together with 1-10 rating scale attractiveness ratings, and done an initial unsupervised dimensionality reduction of the images. Your job is to characterize the cues to attractiveness ratings given the low dimensional image representation via a simple data analysis. Each dimension is a potential cue to attractiveness. Your goal is to characterize P(cue_j|attractiveness). Use the faceimgui to interactively view the relationship between the cues and the face images. Load the face images use the faceimananalysis.mat file.

Then the challenge is to remove the cue-independence assumptions and do the analysis again. The second part will require you to estimate the joint probability of the cues, P(cue1, cue2, cue3, ... | attractiveness). One possibility is to assume a multivariate Gaussian.
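
For that second part, a minimal sketch under the multivariate-Gaussian assumption (same placeholder file and variable names as above; mvnpdf is in the Statistics Toolbox):

  % Joint model P(cue_1, ..., cue_K | attractiveness), fitted separately for
  % each rating level as a multivariate Gaussian.
  S = load('faceimanalysis.mat');      % placeholder file/field names, as above
  cues = S.cues;  ratings = S.ratings;

  muJoint = cell(1, 10);  SigmaJoint = cell(1, 10);
  for r = 1:10
      X = cues(ratings == r, :);       % all cues for faces rated r
      muJoint{r} = mean(X, 1);         % mean vector (1 x nCues)
      SigmaJoint{r} = cov(X);          % full covariance captures cue dependencies
  end
  % Density of a new cue vector x (1 x nCues) under rating level r:
  %   p = mvnpdf(x, muJoint{r}, SigmaJoint{r});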

"Problem 2" In the second problem, you will work through the explaining away model described in the class for image size and touch. The instructions are in the ExplainingAway.m file.

"Problem 3" In the third problem, we will explore simple Bayes by analyzing data from the Dream database, Kording 2004. Go to http://crcns.org/data-sets/movements/dream/data-sets/Kording_2004/ and download the paper and the dataset. In this tutorial, we will predict data from Koerding and Wolpert 2004, then fit the actual data.

In the experiment, subjects move a cursor to a target at 0 cm while a random lateral shift is imposed between the hand position and the cursor. The cursor is rendered with two levels of blur, or occluded entirely, which increases the uncertainty about the cursor position and therefore predicts an increased reliance on the prior.

On each trial there is a true shift, xtrue, given by the cursor offset, and a noisy sensed estimate of the cursor position, xsensed. From these, subjects form an estimate of the cursor position by combining the sensed data with prior knowledge.

Assuming that reach endpoints reflect the best estimate of the target location, we can compare predictions and reach endpoints.

x_hat = argmax_{xtrue} P(xtrue | xsensed), which for Gaussians is the mean of P(xtrue | xsensed)

By Bayes' rule, P(xtrue | xsensed) = P(xsensed | xtrue) P(xtrue) / P(xsensed), where P(xsensed) = Int_{xtrue} P(xsensed | xtrue) P(xtrue) dxtrue

We need to form the prior P(xtrue) and the likelihood P(xsensed | xtrue). Using the following parameters, your goal for today is to produce simulations that replicate the prediction graphs in the paper: mu_prior = 0.01; sd_prior = 0.5/100;

xtrue = [-0.015 -0.01 -0.005 0 0.005 0.01 0.015]; sd_smallblur = 0.1/100; sd_largeblur = 1/100;

Use Bayes' rule to derive the formula relating the estimated shift to the cursor perturbation and the prior. Then compute predictions for each of the four conditions.
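
A minimal sketch of those predictions, using the standard Gaussian posterior mean (a precision-weighted average of the sensed shift and the prior). Only the two blur levels given above plus the occluded case are shown, and plotting the bias of the predicted estimate is an illustrative choice rather than an exact reproduction of the paper's figure:

  % Predicted average estimate for each true shift, using the Gaussian posterior mean
  %   x_hat = w*xsensed + (1 - w)*mu_prior,  with  w = sd_prior^2 / (sd_prior^2 + sd_sensed^2).
  % For the average prediction, xsensed is replaced by its expected value xtrue.
  mu_prior = 0.01;   sd_prior = 0.5/100;
  xtrue = [-0.015 -0.01 -0.005 0 0.005 0.01 0.015];
  sds   = [0.1/100, 1/100];                       % small and large blur (other feedback
                                                  % conditions follow the same formula)
  figure; hold on;
  for k = 1:numel(sds)
      w = sd_prior^2 / (sd_prior^2 + sds(k)^2);   % weight on the sensed shift
      xhat = w * xtrue + (1 - w) * mu_prior;      % predicted estimate per true shift
      plot(xtrue, xhat - xtrue, 'o-');            % bias toward the prior mean
  end
  plot(xtrue, mu_prior - xtrue, 'k--');           % occluded: estimate falls back on the prior
  xlabel('true shift (m)');  ylabel('predicted bias (m)');
  legend('small blur', 'large blur', 'occluded');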

SIMULATE THE OBSERVER FOR ONE CONDITION

We will assume the observer has had 1010 trials of feedback to estimate the prior, so the subject's estimates of the prior's parameters are probably approximately correct: mu_prior_hat is approximately mu_prior, and sd_prior_hat is approximately sd_prior.


For 300 trials, draw the following random variables:
  • cue location cp: randomly drawn by the experimenter from a Gaussian with mean and std equal to the prior's
  • sensed position sp: Gaussian distributed with mean = cp and additional sensory noise std = 0.15/100
  • hand position estimate: the Bayes estimate combining the sensed position and the prior
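
A minimal sketch of this simulation for one condition, using the sensory noise value given above; the regression at the end is one simple way to analyze the simulated data in the spirit of the paper:

  % Simulate one observer for one condition (sensory noise std = 0.15/100, as above).
  nTrials = 300;
  mu_prior = 0.01;    sd_prior = 0.5/100;
  sd_sensed = 0.15/100;

  cp   = mu_prior + sd_prior * randn(nTrials, 1);    % true cursor shift drawn by the experimenter
  sp   = cp + sd_sensed * randn(nTrials, 1);         % noisy sensed shift
  w    = sd_prior^2 / (sd_prior^2 + sd_sensed^2);    % Bayesian weight on the sensed shift
  xhat = w * sp + (1 - w) * mu_prior;                % hand position estimate (the simulated "data")

  % Analysis: regress the estimate on the true shift. A slope of 1 means the
  % estimate tracks the shift fully; smaller slopes mean reliance on the prior.
  b = polyfit(cp, xhat, 1);
  fprintf('fitted slope = %.2f (Bayesian prediction w = %.2f)\n', b(1), w);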

Treat this as data, and analyze it as in the Kording paper for ONE condition.

Challenge: Relax the assumption that subjects have learned the prior accurately.
1) Simulate data as above.
2) Estimate the prior from feedback as a function of the number of learning trials.
3) How accurate do we expect the prior to be after 1010 trials? a) With perfect memory? b) With imperfect memory, where you forget? For part b, you will need a Kalman filter (one possible approach is sketched below).
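
For part 3b, one possible (assumed) way to model forgetting is to treat the believed prior mean as a slowly drifting hidden state and track it with a scalar Kalman filter across learning trials; the drift variance q below is a made-up value:

  % Scalar Kalman filter tracking the prior mean across learning trials, with
  % forgetting modelled as random-walk drift in the believed mean
  % (this particular forgetting model and the value of q are assumptions).
  nLearn = 1010;
  mu_prior = 0.01;   sd_prior = 0.5/100;
  q = (0.05/100)^2;                 % drift (forgetting) variance per trial -- assumed value
  r = sd_prior^2;                   % each feedback sample is one draw from the true prior

  muHat = 0;  P = 1;                % vague initial belief about the prior mean
  for t = 1:nLearn
      x = mu_prior + sd_prior * randn;    % feedback: observed shift on this trial
      P = P + q;                          % predict: forgetting inflates uncertainty
      K = P / (P + r);                    % Kalman gain
      muHat = muHat + K * (x - muHat);    % update the estimate of the prior mean
      P = (1 - K) * P;                    % update the uncertainty
  end
  fprintf('estimated prior mean after %d trials: %.4f (true %.4f), sd of estimate %.4f\n', ...
          nLearn, muHat, mu_prior, sqrt(P));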


Computational neuroscience in industry

Aug 14, 4pm
Lecturers: Siddharth Dani, Rahul Gupta, Jadin Jackson, Ashutosh Chaturvedi (Medtronic)


Computational Neuroimaging

Aug 15-16
Lecturers: Thilo Womelsdorf, Matthijs van der Meer

Womelsdorf_CosMo_Lecture_1_Prelude.pdf
Womelsdorf_CosMo_Lecture_2_Synchronization.pdf
Womelsdorf_CosMo_Lecture_3_ConnectivityMeasures.pdf
Womelsdorf_CosMo_Lecture_4_Fieldtrip_Tutorials.pdf
van der Meer lecture

Afternoon tutorials
Day 1 tutorial
Day 2 tutorial and Data
Log in using cosmo2014


Surveys for CoSMo 2014

CoSMo overall

Introduction Unit
Motor Control Unit
Sensory-motor Unit
Prosthetics Unit
Bayesian Brain Unit
Computational Neuroimaging Unit



Final Project Presentations - Aug 15, 4:30-5:30pm and 7-11pm

Presentation schedule

4:30pm - The Kalman touch
Predicting force with expectation: a haptic softness task
Group: Alexandra Lezkan, Elisabeth Rounis, Stefanie Mueller, Simone Toma, Ali Borji

5:00pm - Rolling heads
Multisensory integration in the perception of verticality
Group: Jorge Otero-Millan, Shany Grossman, Parisa Abedi, Anouk de Brouwer

5:30pm - 7:00pm - dinner break

7:00pm - Arousing Decisions
Group: Taraz, Windy, Dominic, Giuseppe, Giovanni

7:30pm - Triple Threat a.k.a. The SuperModel(ers) a.k.a. The CoSMo-nauts
Group: Reva Johnson, Amit Shah, Sean Barton, Chad Heley, Xing Chen

8:00pm - Tic Toc Pong
Adaptation to Delay while Playing Pong: Time or State Representation?
Group: Romy Bakker, Jemina Fasola, Guy Avraham, Raz Leib

8:30pm - Group X
The Effects of Motor Behaviors on the Temporal Ventriloquist Aftereffect
Group: Jacob Matthews, Brian Odegaard, Meytar Zemer

9:00pm - Bayesic Kalman Sense
Group: Emily Lawrence, General Lee, Gelsy Torres-Oviedo

9:30pm - Followers of Titipat
Group: Titipat A., Vynn H., Carly S., Pablo I.

10:00pm - On Fire
Two Balls: The Mapping of Arbitrary Multimodal Stimuli
Group: Chin-Hsuan Lin, Michael Olshansky, and Salvatore Fara

10:30pm - HuStLa’z
There and Back Again: Vestibular Adaptation in Outer Space
Group: Josh Cashaback, Ethan Oblak, Melodie Tian, Qianli Yang

Please rate project presentations HERE