Final research project
Various due dates
before 10/27 – Contact Brad for a consultation on your proposal topic
11/1 – Proposal due
11/4, 11/5, or 11/6 – Meet with Brad to discuss the proposal
11/27 – Meet with Brad (over Skype) for a check-in on your progress
12/13 – Oral presentations
12/18 – Final paper and supplementary materials due
Instructions here: Final project guidelines
COUHES Protocol Application for Final Project
Due: Wednesday, 11/6 before 4pm (to make sure COUHES gets it that day)
Submit a COUHES protocol application for your planned experiment. This will include the application itself, recruitment materials (e.g., what you’d email to recruit people), and a consent form. For some studies, including Mechanical Turk studies, you might be able to submit an Exempt application, which simplifies the process; most studies will need to use the Non-Exempt application.
I attempted to cover all of your experiments with a single COUHES application, but it was rejected for being too general. This application is in the course resources on our Piazza page. I recommend using it as a starting point. However, any specific answer in that document is not guaranteed to be appropriate or specific enough for your planned study. You are ultimately responsible for submitting an application that can be approved. Work with the COUHES office on ensuring that your application is adequate.
List Cynthia Breazeal as the PI and me as research personnel. Also include yourself and anyone you might collaborate with. Please ask questions about the application process on Piazza so others can see the answers. I’ll answer what I can and direct you to COUHES for the rest.
You will ultimately turn the applications in to Judy Adams at firstname.lastname@example.org by November 6. Attach each component as a separate file in your email (e.g., application, consent form, recruitment materials, etc.). CC Cynthia Breazeal (email@example.com) and me (firstname.lastname@example.org) when you email Judy. Make sure to let her know that you are submitting as a student of the Media Lab’s Interactive Machine Learning course and mention that the application is part of the group of applications that are being fast-tracked to be approved by December 1st.
Do not worry about signatures or turning in the physical forms. I will get signatures once your applications look like they are in their final form.
You will also need to complete training for behavioral research, if you haven’t already. Instructions are in a previous email.
As mentioned above, I have been told by COUHES that they can approve your study by December 1, giving you 2.5 weeks to collect data, analyze it, and put it into a paper. That is not much time. I strongly recommend having your system ready by Dec. 1 so that you can immediately recruit participants (even if it’s just a handful) and have them use it. Before approval, be sure to test the system informally to guide your design. (Side note: given the time we have, you might want to rely more on qualitative analysis, with quantitative analysis serving to suggest hypotheses for further evaluation, since you will be unlikely to run enough people to reach the statistical significance that would make your quantitative analysis especially meaningful. On the other hand, if you expect dramatic quantitative results, you might still be able to tell a convincing quantitative story with 10 or fewer subjects.)
Alternatively, if your final course project is part of a larger, existing study, you can try to use that study’s COUHES protocol to cover your work, in which case you do not need to submit an application.
Dissect an ML algorithm for potential forms of interactivity
Due: Friday, 10/25 before class
In writing, attempt to exhaustively explore potential inputs and outputs from/to a human. Choose a machine learning algorithm. (Note that a single machine learning algorithm can contain one or many machine learning algorithms as subroutines.) Post the algorithm name/description publicly to Piazza in the folder pra4-dissect_an_ml_algorithm. Your algorithm must be different from those already posted, so the sooner you post, the less likely your plan will be hijacked. If you’re not sure whether yours is different enough, just post anyway and I’ll let you know.
For the algorithm you choose, create a list of six ways that one could either provide human input to the algorithm or derive output for the human. Each of these six interface channels should plausibly be able to improve an interactive machine learning system that employs your algorithm. The following more obvious channels don’t count towards your six without a significant extension/twist: having the human select from a predefined set of features; having the human label samples; having the human choose from different learning algorithms; allowing the user to reset the learning algorithm, throwing out past samples; having the system predict labels; and communicating any standard machine-learning performance metric to the human (which shouldn’t be done anyway without good justification for its readability by the intended system users).
Your deliverable for this assignment is a written post on Piazza in the folder pra4-dissect_an_ml_algorithm. Devote one paragraph to each of these six input or output channels. Describe the specific form of input or output, both from the human’s perspective (e.g., a button marked “reset”) and from the machine learning algorithm’s perspective (e.g., the algorithm’s parameters are reset to their initial values). Describe (1) the intended functionality (e.g., when a person explores down a teaching path and wants to start over, they can press the reset button and do so), (2) HCI considerations, such as ease with which human users can understand the channel and level of cognitive load incurred or relieved by the channel, and (3) any challenges you foresee for the channel (ways it might cause failure or simply need further refinement).
Toolkit presentations
Once or twice during the semester, you will give a 20-25 minute overview of a publicly available toolkit that somehow facilitates interactive machine learning. Such toolkits include libraries of machine learning algorithms (e.g., Weka or Orange), packages for GUI design, visualization software (e.g., R), and sensing libraries (such as the Kinect SDK).
Your presentation should include the following:
- One day beforehand, email the class with any instructions about what the other students must install ahead of time.
- Overview of the toolkit – what it’s useful for, what it can do
- Demonstration(s) of some of the toolkit’s functions
- A guided activity for the rest of the students, helping them use the toolkit in at least one way
The ultimate goal is to give your fellow students comfort with the system and insight into how it might help them in their own work. The course instructor must agree on your choice of toolkit beforehand. Grades will be based on the instructor’s judgment of presentation quality (especially clarity of voice and content), adherence to these instructions, and value of the content communicated.
Develop a simple IML system
This assignment will be in three parts:
- Create an ML system with hard data input
- Add human input to the system (removing hard data if you prefer)
- Add system transparency or feedback to the user
1. Create an ML system with hard data input
Due: Wednesday, 9/25 before class
Choose a data set from Kaggle.com and apply 3 different machine-learning algorithms to it. Share your Kaggle team name on the course Piazza site. Use 3-fold cross-validation to get one prediction for each sample in the data, then share your F1 scores and accuracy on Piazza so you can compare your results with those of others. (If the data has two classes, share an F1 score for each class; otherwise, provide micro-averaged and macro-averaged F1 scores across all n > 2 classes.) Submit the predictions of your best model to Kaggle so that you appear on the leaderboard for the respective dataset.
Note: An ensemble of models of type X does count as a different algorithm from a single model of type X.
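As one concrete way to carry this out (a sketch only; Python and scikit-learn are assumptions here, not requirements of the assignment), the cross-validated predictions and metrics might be computed like this, with the iris dataset standing in for your Kaggle data:

```python
# Sketch: one 3-fold cross-validated prediction per sample, then
# accuracy and micro-/macro-averaged F1 for each of three algorithms.
# load_iris is a stand-in for loading your Kaggle data into X and y.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    # an ensemble of trees counts as a different algorithm than one tree
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    preds = cross_val_predict(model, X, y, cv=3)  # one prediction per sample
    acc = accuracy_score(y, preds)
    # iris has 3 classes, so report micro- and macro-averaged F1
    f1_micro = f1_score(y, preds, average="micro")
    f1_macro = f1_score(y, preds, average="macro")
    print(f"{name}: accuracy={acc:.3f} "
          f"F1(micro)={f1_micro:.3f} F1(macro)={f1_macro:.3f}")
```

For a binary-class dataset, `f1_score(y, preds, average=None)` would instead return one F1 score per class, matching what the assignment asks you to share.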
Grade (assuming instructions are followed):
A for completing the assignment and placing in the top 50% of students by leaderboard position
A- for completing the assignment
2. Add human input to the system
Due: Wednesday, 10/2 before class
Create an interface that receives teaching-oriented human input, which is taken into account by a machine learning algorithm. The input can be a sequence of labeled samples, suggestions of features to use, preferences, or anything else that the learning algorithm can incorporate. However, the input interface must be usable by someone without programming skills or machine-learning expertise.
Ideally, you will build upon your work for the first part of this three-part assignment, whether with the Kaggle data or without it. But you are not required to use your work from the first part. You can start from scratch if you wish.
Note that this system, at least before the third part of the assignment, may not produce learning models with good performance. That is okay. The point is to allow almost anyone to give input that they intend to affect the learning algorithm and that meaningfully does so.
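Under the hood, "input the learning algorithm can incorporate" can be as simple as feeding each human-labeled sample to an incrementally trainable model. Here is a minimal sketch assuming Python and scikit-learn (neither is required); the teach() helper, the two-feature samples, and the label meanings are all hypothetical, and a real interface would hide calls like these behind buttons or forms that need no programming:

```python
# Sketch of incorporating human teaching input incrementally.
# SGDClassifier supports partial_fit, so each labeled sample from the
# user can update the model immediately.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # e.g., two labeled buttons in the interface

def teach(features, label):
    """Incorporate one human-labeled sample into the model."""
    clf.partial_fit(np.asarray([features]), [label], classes=classes)

# In a real system these calls would be triggered by interface events;
# here we simulate a short teaching session with made-up samples.
teach([5.0, 1.0], 1)
teach([1.0, 4.5], 0)
teach([4.8, 0.9], 1)

print(clf.predict([[5.1, 1.1]]))  # the model now reflects the teaching
```

The design point is the separation: the human sees only the teaching interface, while the learning algorithm sees a stream of (features, label) updates.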
You will demonstrate your interfaces to each other in class on 10/4, giving and receiving feedback.
3. Add system transparency or feedback to the user
Due: Friday, 10/11 before class
Create a complete interactive machine learning system. Build upon what you created in the previous step, adding system transparency (e.g., a display of confidence level or the current hypothesis) or feedback to the user (e.g., predictions on test samples or performance on some evaluation metric). This added system output should affect subsequent human input, completing the interactive machine learning loop.
As before, you are not actually constrained to use what you created in the previous steps, though reuse is recommended.
This complete system should be able to learn something through the interaction, and you should be able to provide evidence of this learning.
Your deliverable for the system is a technical description, a video of the system in action, and a copy of the source code plus anything else needed to run the system that’s not publicly available, such as data. The description should be approximately one page in length (single spaced, 12 pt font) and cover the following:
- motivation for the system – Sell the potential impact of the system, once developed to maturity. If you do not see such an impact (don’t BS), that is fine, since the directions did not call for one; instead, describe your personal motivation for creating the system that you did.
- high-level description of the system – In one or two sentences, summarize what the system does.
- the inputs to the learning system and the outputs from it – Inputs will include human input but could also include a pre-existing dataset or any other input source. The outputs include both what is learned (e.g., a classifier) and what is communicated to the human. Give detail, such as descriptions of the features.
- the learning algorithm – What algorithm did you use? Why? Explain how it works in terms that the other students should understand. What parameters are used and what are their implications? When you describe the algorithm and its parameters, use general terms, not the specific terminology of your library. For instance, instead of only saying “I used the J48 algorithm in Weka”, also add “which is the C4.5 algorithm for learning decision-tree classifiers [Quinlan, 1993]”.
- interface – Describe the human-computer interface you built into this system.
- toolkits/libraries – Make sure that you mention somewhere what libraries you use and how you used them.
- description of how to run the system using your source code
The video should demonstrate usage of your system and be narrated to explain what is occurring. The video does not need to be polished. It will be evaluated on how clearly it communicates the experience of interacting with the system and the meaning of the interface components that are shown.
Email your description, video, and runnable source code—or a link to them—to Brad Knox.
Reading responses
Readings will be assigned to cover both background and specific research in IML. You will respond in writing to each reading, and the readings will usually be discussed during the class period after the responses are due.
By 1pm on the day before a class with a new reading assignment due, everyone must submit a written response to the reading(s) on the course Piazza website (https://piazza.com/mit/fall2013/mass62/home), tagged with the corresponding folder. Responses should be at least 400 words in length.
In some cases, specific questions may be posted along with the readings. But in general, it is free form. Credit will be based on evidence that you have done the readings carefully. Acceptable responses include (but are not limited to):
- Insightful questions;
- Clarification questions about ambiguities;
- Comments about the relation of the reading to previous readings;
- Solutions to problems or exercises posed in the readings;
- Thoughts on what you would like to learn about in more detail;
- Possible extensions or related studies;
- Thoughts on the paper’s importance; and
- Summaries of the most important things you learned.
There is one exception to this free-form nature. You should create a discussion point that you may be asked to bring up in class. Possible discussion topics (note overlap with previous list):
- Controversial questions/answers;
- Points of confusion;
- Ideas about extensions to the work;
- Insights on broader topics that are relevant to this work;
- And anything you believe would effectively incite discussion.
Explicitly demarcate the discussion point in your response by making it both bold and italicized.
Responses on Piazza can be posted anonymously if you prefer (though they may be de-anonymized unintentionally through the discussion). If you have questions about the readings that you would strongly prefer to be anonymous, I recommend that you post them anonymously to the Piazza Q & A page instead of in your reading response.