Most online study tools use rigid multiple-choice, matching, or short-answer quizzes to tell you which questions you answered “right” or “wrong”. But all these formats measure is your ability to recognize a correct answer, not your ability to actually produce one from memory.
What if your knowledge retention could be better measured using varying degrees of right and wrong for each concept? What if a study tool could let you determine your level of readiness for each exam concept simply by trusting you to answer one question: “How well did you know this?”
That's Brainscape. We do nuance. But first: here's the science.

What research tells us about student self-assessment
Self-assessment is the principle behind Brainscape’s application of the metacognitive concept known as Judgment of Learning (or JOL). Asking learners to rate their JOLs on an item-by-item basis is an extremely valuable instrument in education: it helps learners more accurately identify their strengths and weaknesses, while helping educators focus on the areas where their students are least comfortable. It is also a critical component of educational software algorithms that use confidence-based repetition.
Over the past few decades, researchers have used various experimental formats to ask learners to rate their JOLs: either a series of confidence options (e.g. 1-5), a sliding confidence scale (0-100%), or a simple inquiry into whether the learner wishes to study the question again in the future (yes/no).
Researchers have also differed greatly in how such JOLs are incorporated into a question list: asking for the user’s self-assessment before each answer is revealed, after each answer is revealed, or only after all answers in an entire list have been revealed. The important common thread is that each of these experimental designs engages the learner’s metacognitive processing.
How Brainscape uses self-assessment
Brainscape's JOLs actually replace, rather than supplement, computer-provided feedback.
For example, if Brainscape asks the learner “What is the capital of Canada?”, the user does not actually have to type or select an answer but simply reveals the back side of a virtual flashcard. The true student self-assessment comes when the user rates their own confidence on each flashcard, which Brainscape’s algorithm then uses to determine how soon the flashcard should be displayed again. (Low-confidence items are displayed more frequently until the user reports higher confidence.)
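To make the scheduling idea concrete, here is a minimal sketch of a confidence-based repetition loop in Python. Brainscape’s actual algorithm is more sophisticated (and its exact parameters aren’t public), so the 1–5 gap table, function names, and sample deck below are illustrative assumptions only:

```python
# Minimal sketch of a confidence-based repetition loop (illustrative only).
import heapq
import itertools

# Hypothetical gap table: lower confidence -> the card resurfaces sooner.
# Gaps are measured in "cards shown", not days; the values are made up.
GAPS = {1: 2, 2: 4, 3: 8, 4: 16, 5: 32}

def study(deck, rate_confidence, total_reps=30):
    """Show cards one at a time and reschedule each by the learner's rating.

    deck: list of (prompt, answer) pairs.
    rate_confidence: callback (prompt, answer) -> 1..5, i.e. the learner's
        self-assessed Judgment of Learning for that card.
    """
    tiebreak = itertools.count()  # stable ordering for cards with equal due times
    queue = [(0, next(tiebreak), card) for card in deck]
    heapq.heapify(queue)

    for step in range(total_reps):
        _due, _, card = heapq.heappop(queue)  # most overdue card first
        prompt, answer = card
        confidence = rate_confidence(prompt, answer)  # the learner's JOL
        # Low confidence -> short gap; high confidence -> long gap.
        heapq.heappush(queue, (step + GAPS[confidence], next(tiebreak), card))

if __name__ == "__main__":
    deck = [("What is the capital of Canada?", "Ottawa"),
            ("What is 7 x 8?", "56")]

    def simulated_learner(prompt, answer):
        print(f"{prompt} -> {answer}")
        return 2  # "still shaky": this card returns after 4 more cards

    study(deck, simulated_learner, total_reps=6)
```

The essential behavior is the one described above: a card rated 1 resurfaces within a couple of cards, while a card rated 5 is pushed far down the queue until the learner sees it again.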
This shift of the assessment burden from the computer to the learner prompts three important questions:
- How do we know that the learner will accurately assess their own knowledge?
- How do we know the learner won’t just start revealing answers without genuinely thinking about them beforehand?
- What happens if a learner incorrectly reports high confidence in an item they actually don’t know very well?
Is self-assessment accurate?
To begin with, adults are generally quite accurate in assessing their own JOLs. And because Brainscape is a personal study tool rather than an external assessment mechanism, its users have no motivation to “cheat” as they might on an actual test or graded assignment. Brainscape instead helps users manage their own knowledge and learn to become more honest with themselves.
Alternatives to the flashcard approach are suboptimal for time-pressed learners seeking a convenient and effective study experience. Multiple-choice and matching questions are inferior for learning because they merely test recognition rather than engaging active recall, while short-answer and fill-in-the-blank questions are poorly suited to quick study sessions because typing out answers is time-consuming.
Brainscape chooses to present questions as simple flip/reveal flashcards in order to maximize the number of repetitions that can be achieved in a given span of time. Research shows that the number of memory retrieval attempts, and the spacing between them, are among the most important determinants of how strongly a memory is encoded. The benefit of these increased exposures outweighs the cost of potential “zone-out”, especially when the learner is already motivated to concentrate on the topic at hand.
What about overconfidence?
This is a benefit!
Butterfield and Metcalfe (2006) show that people are actually more likely to remember a corrected wrong answer if they had previously expressed high confidence that their submitted answer was correct and subsequently learned that it was wrong.
According to this logic, if a Brainscape user fails to recall an item to which they had previously assigned a high confidence rating, they are likely to devote more mental energy to correcting the error. Bahrick and Hall (2005) show that such error corrections are even more beneficial when items are spaced out rather than massed together.
In fact, in a spaced or expanding study environment such as Brainscape’s, even a systematic display of overconfidence is unlikely to hinder the user’s progress. Meeter and Nelson (2003) demonstrate that a systematic confidence bias has no effect on the relative proportions of items in each JOL category, and Pashler and colleagues (2007) confirm that “the cost of overshooting the right spacing is consistently found to be much smaller than the cost of having very short spacing” (p. 5). Brainscape is therefore largely immune to users’ potentially poor self-assessment skills, and can actually improve their metacognitive skills over time.
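A toy sketch makes that asymmetry easier to picture (the starting gap and growth factor here are assumptions for illustration, not figures from Pashler et al.): an expanding schedule stretches each successive review interval, and an overconfident rating merely stretches it a bit further, which the research above suggests is a far cheaper mistake than compressing it.

```python
# Toy expanding-interval schedule; all numbers are illustrative assumptions.
def expanding_intervals(first_gap_days=1.0, growth=2.0, reps=5):
    """Return review gaps (in days) that grow after each confident recall."""
    gaps, gap = [], first_gap_days
    for _ in range(reps):
        gaps.append(gap)
        gap *= growth
    return gaps

print(expanding_intervals())            # well-calibrated: [1.0, 2.0, 4.0, 8.0, 16.0]
print(expanding_intervals(growth=3.0))  # overconfident overshoot: [1.0, 3.0, 9.0, 27.0, 81.0]
```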
Whatever software or technique you use to manage your studying, applying self-assessment practices will consistently improve your performance on subsequent tests of your knowledge. Remember to trust yourself, because you are the best judge of how well you know something!
[Trying to figure out how to study most effectively? Learn how to develop solid study habits to optimize your learning.]
Sources
Bahrick, H., & Hall, L. (2005). The importance of retrieval failures to long-term retention: A metacognitive explanation of the spacing effect. Journal of Memory and Language, 52(4), 566–577. https://doi.org/10.1016/j.jml.2005.01.012
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. London: King’s College.
Butterfield, B., & Metcalfe, J. (2006). The correction of errors committed with high confidence. Metacognition and Learning, 1(1), 69–84. https://doi.org/10.1007/s11409-006-6894-z
Dunlosky, J., & Nelson, T. O. (1994). Does the sensitivity of judgments of learning (JOLs) to the effects of various study activities depend on when the JOLs occur? Journal of Memory and Language, 33, 545–565. https://doi.org/10.1006/jmla.1994.1026
Janiszewski, C., Noel, H., & Sawyer, A. G. (2003). A meta-analysis of the spacing effect in verbal learning: Implications for research on advertising repetition and consumer memory. Journal of Consumer Research, 30, 138–149. https://doi.org/10.1086/374692
Lovelace, E. (1984). Metamemory: Monitoring future recallability during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(4), 756–766. https://psycnet.apa.org/doi/10.1037/0278-7393.10.4.756
Meeter, M., & Nelson, T. O. (2003). Multiple study trials and judgments of learning. Acta Psychologica, 113(2), 123–132. https://doi.org/10.1016/S0001-6918(03)00023-4
Nelson, T. O., & Leonesio, R. J. (1988). Allocation of self-paced study time and the “labor-in-vain effect.” Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 676–686. https://psycnet.apa.org/buy/1989-14397-001
Pashler, H., Bain, P., Bottge, B., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning. IES practice guide, Institute of Education Sciences, U.S. Department of Education. https://eric.ed.gov/?id=ED498555
Son, L. (2004). Spacing one’s study: Evidence for a metacognitive control strategy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(3), 601–604. https://psycnet.apa.org/doi/10.1037/0278-7393.30.3.601
Squire, L. R. (1992). Memory and the hippocampus: A synthesis from findings with rats, monkeys, and humans. Psychological Review, 99(2), 195–231. https://psycnet.apa.org/doi/10.1037/0033-295X.99.2.195