Spaced repetition is the process of studying new or difficult concepts more frequently than easier, better-known ones, thereby perpetually challenging you on your weakest areas of knowledge.
Cognitive science shows that spacing repetitions at optimal time intervals is among the most important factors in your ability to retain knowledge. It reflects how our brains are wired to encode memories.
It has historically been inconvenient to study in frequent, bite-sized sessions: physical books are tough to transport, while most flashcard apps are either "vocab-only", limited in their analytics, difficult to use, unrefined in their repetition algorithms, or inflexible in their content creation and study features.
Our web and mobile app finally makes spaced repetition more accessible by respecting and leveraging other key pillars of cognitive science, such as motivation, hyperbolic discounting, active recall, self-assessment, and metacognition.
Brainscape's spaced repetition algorithm places you into a virtuous cycle that builds upon your knowledge.
Research has shown that Brainscape's spaced repetition algorithm improves both knowledge retention and test scores when compared to other study methods, like books and traditional flashcards.
Brainscape is a web and mobile study application designed to improve the retention of declarative knowledge. It complements the well-known benefits of spaced repetition with additional features that improve learner motivation, reduce the burden of planning study sessions, and deepen the level of cognitive processing involved in each exposure.
This paper assesses Brainscape's application of the latest research into education and cognitive science. We conclude that using Brainscape is among the world's most scientifically valid ways to enhance both learning speed and retention for any subject matter that can be studied via text, imagery, and/or sound.
Keywords: spaced repetition, expanding effect, metacognition, proximal learning, judgment of learning, confidence-based repetition, scaffolding, feedback, motivation, Bloom's 2 Sigma Problem, active recall, retrieval practice, social learning
Educators throughout history have intuitively understood that personalized instruction is among the most effective methods of imparting knowledge and skills. From ancient craft apprenticeships to today's academic tutoring businesses, we have consistently found that personalized human instruction offers a level of adaptivity that cannot easily be attained with large class sizes.
Psychologist Benjamin Bloom famously illustrated the benefits of such personalized instruction in his influential 1984 study, which showed that students receiving 1-on-1 tutoring performed a full two standard deviations higher on achievement tests than students who did not receive personalized interventions (Bloom, 1984). Social justice advocates have accordingly recognized these inequities and have lobbied for smaller class sizes and against the expensive tutors enjoyed only by the elite.
Yet there may be a more practical and affordably scalable solution to the so-called "2 Sigma Problem", with the answer hiding in Bloom's original 1984 paper itself.
The key is to examine the variables impacting student achievement other than tutorial instruction. It turns out that four of Bloom's other top success factors, when measured individually, still have an impressive 1-sigma impact on learning outcomes. The opportunity unforeseen by Bloom and his researchers is that these factors could eventually be combined by educational software to create a potentially greater aggregate benefit than human tutorial instruction. (See Figure 1.)
"Student time on task", for example, could be enhanced by software features that both improve motivation, through the use of frequent feedback and variable rewards; and by features that help students with planning their study sessions, through the use of learning schedules and the provision of a mobile interface that a student could access easily on their smartphone.
Similarly, the benefits of breaking concepts into more digestible "cues & explanations" could be leveraged by flashcard-like Q&A software that engages users' active recall and metacognitive self-assessment faculties, thereby deepening the cognitive processing involved in each exposure. The program's spaced repetition algorithms can also automate the processes of "corrective feedback" and "reinforcement", and thus mitigate the forgetting that would otherwise occur with traditional study methods.
The confluence of these four major ideas is what has motivated the Brainscape team to create our learning software in the way that we have.
Brainscape is a web and mobile study application that builds upon the traditional flashcard concept by improving student motivation, alleviating the burdens of planning study sessions, and deepening the cognitive processing involved in each exposure, all in the service of a personalized spaced repetition algorithm that mitigates forgetting and enhances retention.
Millions of students, teachers, tutors, corporate trainers, and independent learners of all ages have used Brainscape to find, create, organize, and share flashcards for thousands of subjects, ranging from science, to law, to foreign languages. These flashcards can be studied in a web browser or on a smartphone, and come equipped with an engaging user experience that repeats concepts within a personalized interval of time, based on the user's self-rated confidence in each flashcard.
At its core, Brainscape is based upon the principles of cognitive science that most effectively, conveniently, and enjoyably improve learning outcomes, in a way that resembles, or improves upon, the benefits that a human tutor would otherwise offer. This paper explores the scientific underpinnings of Brainscape and identifies areas where additional research may be beneficial.
Long-term motivation is a primary prerequisite to any deliberate learning endeavor. Whether that motivation is driven by intrinsic interest in the subject, by professional ambition, by the desire to learn to communicate in a foreign language, or simply by the fear of parental punishment in the event of poor test scores, the prospect of an eventual "endgame" is a necessary spark that catalyzes any sustained educational pursuit.
The problem is that humans are not inherently skilled at conjuring the required short-term motivation that allows us to make progress toward those long-term goals in smaller steps. We suffer from a phenomenon known as hyperbolic discounting (or delay discounting), in which our instinctive brains subconsciously prefer smaller but more immediate rewards, even as our conscious brains still want the longer-term prize (Ainslie & Haslam, 1992).
Fitness and diet are often cited as typical examples of delay discounting. We may want to get into physical shape, but that motivation does not necessarily ensure that we will schedule adequate gym time today. We may want to lose weight, but that chocolate cake looks so delicious right now.
To address this well-known myopia, students with financial means have traditionally hired personal human tutors to enforce structured study sessions onto their calendars, in the same way that they might pay for a personal trainer to schedule them a series of workout appointments they can't afford to miss.
Such forced accountability confers an unfair advantage upon students who are able to hire a human study coach, versus those left to wrestle with their own hyperbolic discounting weaknesses. Brainscape aims to bridge these gaps by providing the rigor that is necessary for learners to maintain short-term motivation and to schedule appropriately spaced study sessions over time.
The first way Brainscape improves motivation to study is actually by doing the inverse: removing the biggest short-term de-motivators.
As most of us painfully know, traditional studying tends to involve activities like (i) remembering which books and notes to pack into our bags, (ii) finding a table in a quiet place where we can "spread out" our papers, and (iii) navigating to the point in our study materials where we last "left off". The collective burden of such activities can entail several minutes of preparation time, thereby imposing a daunting level of cognitive effort just to get started.
Brainscape reduces the barriers to initiating a study session in three major ways:
Orbell and Verplanken (2010) show that people are significantly more likely to establish a positive new habit if they reduce the number of steps and the amount of time necessary to perform each repetition of the habit. Just as keeping your blender on your counter makes you more likely to make your daily smoothie, Brainscape's one-click Continue Studying button makes learners more likely to study by eliminating all possible barriers to initiating a short study session.
The second key ingredient to improving study motivation is to help the learner establish a plan.
Human tutors typically provide this service by assessing the student's existing rate of learning and the amount of material remaining to be mastered, and by generating an estimate of the net amount of study time remaining. Simply knowing this concrete number increases the student's likelihood of remembering to study as they work toward their long-term mastery goal.
Unfortunately, students without access to a personal tutor often neglect to estimate their current rate of learning and therefore overwhelmingly tend to underestimate the time required to invest in their studies. Without a persistent reminder of the size of the remaining study burden, students end up "cramming" at the last minute, which dramatically reduces their longer-term memory retention per unit of time spent studying (Pashler et al., 2007).
Brainscape alleviates this knowledge gap by providing students with a continually revised estimate of the amount of study time needed to reach 100% mastery of a given topic. After each 10-flashcard "Round" is completed, the "Checkpoint" screen gives the student an updated projection of study time remaining (in addition to summarizing the progress made during the Round).
Figure 5 shows an example of Brainscape's Checkpoint screen, which includes an estimate of the student's remaining time needed to master the subject. After each Round, this estimate is updated to reflect her mean cumulative increase in self-rated "confidence points" per minute of time spent studying the subject thus far.
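A projection like this can be sketched in a few lines. The function name and the linear-rate assumption below are ours; Brainscape's actual formula is not public, so this is an illustrative model only.

```python
# Illustrative sketch (not Brainscape's actual formula): project study time
# remaining from the learner's historical rate of confidence-point gains.

def study_time_remaining(confidence_points, card_count, minutes_studied):
    """Project the minutes of study left to reach 100% mastery.

    confidence_points -- current sum of self-rated confidence (1-5) over all cards
    card_count        -- number of flashcards in the subject
    minutes_studied   -- total minutes already spent studying the subject
    """
    max_points = card_count * 5                  # mastery = every card rated 5
    if minutes_studied <= 0 or confidence_points <= 0:
        return None                              # no rate history to project from
    rate = confidence_points / minutes_studied   # mean points gained per minute
    return (max_points - confidence_points) / rate
```

For example, a learner who has accumulated 120 of 250 possible points in 40 minutes is gaining 3 points per minute, so roughly 43 minutes of study remain.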
This "Study Time Remaining" estimate is a vast improvement upon the uninformed guess students otherwise have to make about the study time required to master a given topic. It gives students a key tool to help them appropriately allocate their remaining study sessions over the ensuing days, weeks, or months studying the subject, which is made convenient through Brainscape’s availability on both the computer and mobile devices.
Even with an accurate projection of the amount of required study time remaining, the act of actually initiating subsequent study sessions can still be an exhausting and overwhelming experience. Deciding when to study can often be so burdensome that wealthier students may delegate the scheduling to their personal (human) tutor.
The continually evolving Ego Depletion Theory (Job et al., 2010) even suggests that the accumulation of such small daily decisions can "deplete" a person's finite mental reserves of focus and willpower, which can only be replenished with sleep or other recreational activities. In other words, each decision to begin a study session can drain mental energies that could have otherwise been dedicated to other activities, such as reflecting on the actual study material itself.
To circumvent this limitation, Brainscape takes its Study Time Remaining estimate to the next level by suggesting and providing a Study Reminder for the next study session. After each 10-flashcard Round, if the user chooses not to continue with another Round in the current session, Brainscape lets her set the time at which she will be sent an email or mobile push notification prompting her to continue studying.
Figure 6 shows Brainscape's interface for setting a study reminder. Note that the user can customize what she wants her reminder email to say, including any personal notes reminding her where she left off. With this small up-front investment in setting a reminder, the user can "outsource" the cognitive overhead of having to remember when to study, as she can trust the software to do it for her, just like a human tutor would.
Over time, Brainscape's algorithms will evolve to make the suggested study reminders even "smarter", by remembering the user's past preferences, or by suggesting reminders in intervals that are tailored to their personal circadian rhythms. For example, Payne (2012) shows that the most effective study time for many students is right before bedtime; Brainscape accordingly could "learn" to remind users to log a quick study session at around that time each night.
Brainscape's goal is to eliminate unnecessary cognitive overhead in planning study sessions or staying motivated, so that users can devote more cognitive resources to the study content at hand.
Once a student has finally overcome the logistical and planning demotivators and has initiated a study session, she then has to grapple with the next unfortunate reality: the process of studying itself can be a tedious and thankless task.
This is particularly true in modern times, when we have become so accustomed to frequent dopamine hits and self-validating reassurances. Not every student has a personal human tutor to selectively dole out positive affirmations or constructive criticisms at just the right adaptive cadence.
Brainscape mitigates study monotony by giving and requesting four types of constant feedback:
These various forms of feedback combine to maintain the user's attention and motivation for as long as possible, as she works toward the Checkpoint at the end of the 10-card Round.
A common limitation of positive habit formation is the repetitiveness of the reward. At a certain point, another scoop of the same vanilla ice cream ceases to motivate good behavior in the same way it did at the beginning.
Software developers have long seized upon this truism by implementing variable reward schedules that make their apps more addictive (Eyal, 2013). The so-called "Hooked Model" has been used in everything from games to social networks to shape user behavior. When you don't know exactly what "likes", "retweets", or "achievements" you might unlock, you feel increasingly compelled to keep posting, scrolling, or playing in search of that next unexpected reward, which triggers higher levels of dopamine (Zald et al., 2004).
Brainscape presents a modest manifestation of such variable reward on its Checkpoint screen. Even though the timing of this reward is fixed (every 10 flashcards), the content of the reward changes, in the form of newly presented information about the learner's progress:
By withholding both of these key pieces of information until the completion of the Round, the Brainscape user experiences a knowledge gap that helps push her toward that 10-card goal. Many users report that they often even create a mental game for themselves, where they guess what that new estimate will be, before it is revealed on the Checkpoint screen.
The combination of convenient access, short sessions, ease of planning, frequent feedback, and variable rewards all contribute to a Brainscape Round being among the most motivating and painless ways to study. Students find it difficult to resist clicking the button to study "10 more cards."
As a final driver toward improving the desire to study, we consider the influence of peers on student behavior, through the powers of camaraderie and competition.
Case studies from Hogg & Abrams (1993) show that the mere knowledge that other people are engaging in a behavior increases a person's desire to engage in that behavior themselves. Brainscape's subject-specific Learners screen leverages this bandwagon effect to provide the user with reassurance that she is not alone, and that other smart students have chosen to study using the same method.
Similarly, the competitive nature of the Learners list motivates users to out-study their peers, due to fear, shame, or superiority. Even though Brainscape only shows leaderboards based on participation (e.g. Cards Studied or Days Streak) rather than mastery (which is self-reported), the drive to be exceptional can be among the most compelling motivators of all (Atkinson, 1964).
Brainscape's fusion of both our intrinsic and extrinsic (social) drives has helped to create a truly global study community, where users feel they are not just making their studies less painful, but that they are joining a significant movement toward more efficient learning.
Traditional methods of studying typically involve some form of passively re-reading notes and textbook materials. While many students learn to improve this process through active annotation (e.g., highlighting, dog-earing pages, or writing notes in margins), this method is still a form of concept "recognition" rather than active retrieval.
Karpicke (2012) shows that passively reviewing existing materials is less than half as effective at improving memory retention one week later, compared to studying via actively prompted recall. (Consider how hard it would be to accurately draw the exact design of a $20 bill, despite having passively seen it thousands of times.)
The ineffectiveness of passive studying is one of the many reasons why human tutors are able to enhance student performance so much: A human tutor can simply ask students to recall information by giving a verbal prompt and listening for the student’s answer.
Many educational apps and games have thus attempted to substitute human tutors by employing tactics such as matching exercises, multiple-choice questions, and other tools that force the student to reflect on the material rather than re-reading it.
Yet these activities are still just a form of recognizing the correct answer from among the available choices, rather than a way to ensure that the user can retrieve the answer on her own when given a real-world prompt. We need a way to ensure that the user is actively seeking the answer from scratch, while encouraging her to reflect more deeply on the concept with the most efficient use of brainpower.
As alluded to above, most existing "quiz" apps are bound by the requirement for the software to assess the student's correctness for her, in order to generate a score. These apps' software designers are forced to use only question types that can easily assess a correct answer, such as multiple-choice or "matching" exercises.
Such recognition-based question types only prepare users to recognize, rather than generate, the correct answer. New doors open if the educational software designer simply trusts the student to attempt a genuine free retrieval of an answer when given a specific prompt, at which point she can self-assess her correctness and rate her own confidence in the concept, in finer detail than a binary right/wrong.
Son & Metcalfe (2005) show that students are actually quite fast and accurate in assessing the strength of their own knowledge, especially after they reach adolescence. And both Moreno & Saldaña (2004) and Kerly & Bull (2008) show that students can even be trained to improve their ability to assess their knowledge confidence over time. Educators could thus benefit by placing more trust in students' self-assessments of their own learning.
Brainscape's web and mobile flashcard application does exactly that. As with a paper flashcard, the motivated student is shown only the question (without a list of plausible answers) and is trusted to make a genuine attempt to mentally retrieve the correct response. She then manually reveals the answer and assesses her own correctness on a confidence scale of 1 (no confidence) through 5 (perfect confidence) in order to proceed to the next digital flashcard.
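This study flow can be sketched as a simple loop. The data shapes and function names below are hypothetical, since Brainscape's implementation is not public; the sketch only illustrates the prompt-retrieve-flip-rate sequence.

```python
def study_round(cards, get_rating, round_size=10):
    """Run one Round: show each prompt alone, let the learner mentally
    retrieve and then flip the card, and record her 1-5 self-rating.

    cards      -- list of dicts like {'q': ..., 'a': ..., 'confidence': 0}
    get_rating -- callback returning the learner's self-assessed confidence
    """
    studied = cards[:round_size]
    for card in studied:
        # The question is displayed with no answer choices; the learner
        # attempts a genuine free retrieval before revealing the answer.
        rating = get_rating(card['q'], card['a'])
        card['confidence'] = max(1, min(5, int(rating)))  # clamp to 1..5
    return studied
```

Note that the software never grades the answer itself; the learner's self-rating is the only score recorded, which is what frees the format from multiple-choice-style recognition prompts.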
Contrary to the instincts of many instructional designers, this process of having students simply mentally retrieve the answer (and manually "flip" the flashcard) can be equally or more effective than other active-recall forms of input such as typing in a short answer response.
The main reason is that typing an answer consumes valuable time (especially on a mobile phone) and accordingly decreases the number of repetitions achievable in a given span of time. Nelson and Leonesio (1988) show that when students are separated into groups graded on either speed or accuracy, the accuracy students make little or no gains in performance over the speed students, despite spending significantly more time on each item. In other words, the increase in learning per unit of time spent is much greater when proceeding more quickly through active mental retrievals.
Brainscape's application of Active Recall (i.e., Retrieval Practice) thus firmly addresses the limitations of the recognition-based quiz apps described above, while maximizing the depth of retrieval efforts per unit of study time spent. According to Brame & Biel (2015), actively recalling information (without looking at notes, a textbook, or plausible multiple-choice answers) results in stronger long-term retention of newly encountered facts and concepts. And in a study that compared elaboration techniques with retrieval practice, Karpicke & Blunt (2011) found that retrieval practice showed better results in promoting conceptual learning of science information.
Perhaps most importantly, Active Recall also adds an additional layer of metacognitive processing by forcing the student to reflect on the actual strength of their knowledge in that concept. Sadler (2006) shows that requiring a brief self-assessment elicits deeper memory encoding than if the program would have otherwise abruptly proceeded after simply displaying whether the learner’s answer was correct. We can remember things better by simply reflecting upon how well we really know them.
Another major limitation of traditional passive study methods is that the study material tends to be hard-coded into the order that it was initially presented. By simply re-reading a textbook or lecture notes "linearly" (rather than by reviewing concepts in a customized order of increasing difficulty), the learner is bound to experience a cognitive load that is either lower or higher than the optimum level.
Such rigidity can be addressed by using a different format to review the material than the format that was initially used to present or record that material. Using a digital flashcard platform like Brainscape to study concepts originally presented in a book, lecture, or video, but in a more adaptive pattern, can provide precisely such a flexible complement.
Specifically, educators can use Brainscape to create and organize students' flashcards in an order of increasing difficulty, and/or decreasing importance, independent of the order originally presented in the textbook or lecture notes. Vygotsky's theory of the Zone of Proximal Development suggests that students can indeed improve their learning efficiency by studying increasingly challenging material in increments that are just beyond their current level of understanding (McLeod, 2014).
As we will see later, the division of concepts into their "atomic" flashcard-like components allows spaced repetition software to progressively introduce new concepts at precisely the pace at which the student is ready for them, based on her confidence in each item. This helps prevent students from spending too much time reviewing concepts that are too easy, or becoming overwhelmed by too many difficult concepts at once, the latter of which could increase their Cognitive Load and decrease their motivation to continue (Weidman, 2015).
Botvinick & Braver (2015) compare the burden of Cognitive Load to the idea of limited computational power in their description of the Cognitive Energetics Theory. Just as a computer has limited RAM to manage several concurrent processes at once, our mental activity is limited to a finite number of concepts that we can maintain in short-term memory at one time. Miller (1956) famously estimated that the average number of items we can retain in working memory is "seven, plus or minus two".
Brainscape accordingly does not introduce any new flashcards if the number of low-confidence flashcards exceeds seven. The closer the number of low-confidence items ("1s") comes to seven, the more likely Brainscape is to repeat those items until the user has upgraded their confidence rating to a higher level. Brainscape perpetually keeps the user in the Zone of Proximal Development to maximize the efficiency of working memory.
As the Brainscape user continues to rate her confidence in successive flashcards, those flashcards repeat within an interval of time that is customized to her confidence rating in each card. Low-confidence flashcards repeat the most often until the user gradually upgrades them to higher levels.
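Taken together, the working-memory cap and the confidence-based intervals suggest a scheduler along the following lines. The interval values, the low-confidence threshold, and all names here are illustrative assumptions, not Brainscape's proprietary parameters; higher-confidence cards would also come due on their longer intervals, which this sketch omits for brevity.

```python
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # days until repeat (illustrative)
WM_LIMIT = 7  # "seven, plus or minus two" working-memory budget

def next_card(deck, today):
    """Pick the next card: drill due low-confidence cards first, and only
    introduce an unseen card (confidence 0) when fewer than WM_LIMIT
    low-confidence cards are already in play."""
    low = [c for c in deck if 1 <= c['confidence'] <= 2]
    due = [c for c in low if c['due'] <= today]
    if due:
        return min(due, key=lambda c: (c['confidence'], c['due']))
    if len(low) < WM_LIMIT:
        for c in deck:
            if c['confidence'] == 0:
                return c                      # room to introduce a new card
    return min(low, key=lambda c: c['due']) if low else None

def rate(card, rating, today):
    """Record a self-rating and push the card out by its confidence interval."""
    card['confidence'] = rating
    card['due'] = today + INTERVALS[rating]
```

The key behavior is that a full low-confidence "working set" suppresses new-card introduction entirely, keeping the learner drilling her weakest items rather than accumulating overload.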
In the course of this ongoing study process, there are times in which the user may overconfidently rate herself with a higher memory strength than she actually possesses. (e.g. Perhaps she rates a flashcard as a 4, when the card's rating really should have been a 2.) One might expect that this would irreparably lower the efficiency of the study algorithm.
However, it turns out that a learner's discovery of her previous overconfidence tends to deepen her memory trace more than if she had been accurate in her initial assessment. In other words, discovering that we had been overconfidently incorrect forces our brains to pay much more attention to the new information that caused the cognitive dissonance (Butterfield & Metcalfe, 2006), especially when Inter-Study Intervals are spaced rather than massed (Bahrick & Hall, 2004).
Brainscape makes this virtuous cycle of self-error correction as convenient as possible by prominently highlighting the user's previous confidence in each flashcard. Each time a flashcard is displayed on the screen, its border is shown in the color of its previous rating (1 = red, 2 = orange, 3 = yellow, 4 = green, 5 = blue). The user quickly learns these color associations and becomes more sensitive to the implications of her confidence assessments over time.
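The rating-to-color mapping described above is simple enough to encode directly. The function name is ours, and the neutral default for an unrated card is our assumption:

```python
CONFIDENCE_COLORS = {1: "red", 2: "orange", 3: "yellow", 4: "green", 5: "blue"}

def border_color(card):
    """Return the border color for a flashcard's previous self-rating,
    so the learner confronts her prior confidence before retrieving again."""
    return CONFIDENCE_COLORS.get(card.get("confidence"), "gray")  # gray = unrated (assumed)
```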
As the user continues to study with Brainscape, she thus experiences a self-corrective feedback loop that makes her ever-more aware of her errors and increasingly skilled at assessing her own learning progress. And even if the overconfident rating forced the spaced repetition algorithm to wait longer before repeating the flashcard, Pashler et al. (2007) further confirm that "the cost of overshooting the right spacing is consistently found to be much smaller than the cost of having very short spacing."
A human tutor might not have been able to provide the same granularity of error reflection, as he may have felt too much pity to be so blunt about his student's egregious incorrectness or overconfidence in each missed item. Brainscape's color-coded confidence indicators, by contrast, prompt self-reflection and error contemplation in the maximum number of instances.
As a final examination of Brainscape's deepening of students' cognitive processing, we should note that critics of premade, instructor-provided flashcards may lament that they involve nothing more than the most basic "Knowledge" level of Bloom's Taxonomy.
There may be some merit to those objections. Indeed, Clayton (2006) has shown that, when the end-goal is to understand the intricacies of a more complex process and/or the relationship between multiple concepts, tactics like student-driven concept-mapping exercises can be an effective complement to purely bite-sized, knowledge-focused study activities such as digital flashcards.
Yet digital flashcards can still be effectively used to improve students' holistic understanding of even the most complex and interrelated processes. The key is to involve students in the creation of the flashcards themselves.
When students analyze instructional materials (whether a textbook, lecture notes, or a concept map itself) and then condense those materials into their component prompt/target pairs at the most fundamental level, they are applying the higher Bloom's levels of Analysis and Synthesis.
Educators can accordingly use Brainscape to create a variety of collaborative student-driven flashcard-creation activities to advance their students’ grasps of difficult concepts, beyond just the basic knowledge regurgitation level of Bloom's Taxonomy.
In fact, the very act of transforming complex material into more fundamental questions and answers mimics what the student would be doing if she herself were the human tutor, or if she were designing study materials for other students. Kapler et al. (2015) show that attempting to teach material to others is among the most effective ways of solidifying tenuous memory traces in the brain.
Brainscape simplifies such digital flashcard creation by providing students with a convenient and social browser-based authoring platform. The app allows users to create, edit, and reorder flashcards in a shared collaborative environment, in the form of text, images, sound files, and/or short animations.
In addition, even for classes in which users do not explicitly have "Edit" permissions, Brainscape allows users to conveniently "suggest an edit" by clicking a single button on the flashcard, which initiates a message thread between the student and the class's "Administrator". This thoughtful dialog helps engage the higher Bloom's levels of Analysis and Evaluation in a way that traditional, single-user flashcard study techniques do not.
Several million students (and their teachers) have already taken advantage of Brainscape as a tool for collaboratively creating their own digital flashcards, thus making their own knowledge more holistic while providing a more effective practice utility for the other users who discover their flashcards.
Thus far, we've discussed how Brainscape applies cognitive science principles to help students stay motivated, plan their study sessions more effectively, and employ deeper mental processing to the flashcard material at hand. But all these tactics would be underutilized if we didn't also apply an adequate form of repetition to transfer newly learned concepts to long-term memory. This repetition system is the most important part of the core Brainscape study experience.
Truly understanding the benefits and mechanics of repetition requires that we first briefly analyze why we forget semantic declarative knowledge in the first place.
Modern scientific analysis of forgetting is typically thought to have begun when Hermann Ebbinghaus first introduced his famous "Forgetting Curve" in 1885. His general concept of the exponential decay of memory has since been replicated by thousands of researchers and students of memory, for nearly every type of knowledge and skill acquisition.
The basic Forgetting Curve has the following characteristics:
Figure 11 shows a simple version of the Forgetting Curve for remembering a new concept. For illustration, let's say it's the word for "meatball" in Spanish. (It's albóndiga.)
If you'd never heard that difficult word before, and you're not exposed to it again soon, you'd likely forget it within a few seconds of the initial exposure (especially if you were subsequently presented with other distracting information). This is why the left part of the curve drops off very steeply at first.
However, if you still remember the word albóndiga 9 weeks from today, then chances are you will also still remember it 10 weeks from today. This explains why the curve flattens out to approach an asymptote as time goes on.
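These two behaviors can be sketched with a simple exponential-decay model of retention, in the spirit of Ebbinghaus's curve. The function name and the stability parameter below are illustrative assumptions for exposition, not Brainscape's actual memory model:

```python
import math

def retention(t_days: float, stability: float) -> float:
    """Probability of recalling a concept t_days after study,
    per a simple exponential forgetting-curve model R = exp(-t / S).
    A larger stability S means a flatter curve (slower forgetting)."""
    return math.exp(-t_days / stability)

# A brand-new, difficult word (low stability) decays steeply...
print(round(retention(1, stability=1.0), 2))    # 0.37
# ...while a well-consolidated word (high stability) barely decays.
print(round(retention(1, stability=30.0), 2))   # 0.97
```

The same function also captures the asymptote: as `t_days` grows, retention approaches zero ever more slowly, so a concept still remembered after 9 weeks is very likely to survive a 10th.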
Learning a new subject is essentially an aggregation of these forgetting curves for all the concepts and relationships contained therein. For example, as you learn a new subject that consists of 100 smaller learning objectives, you technically have 100 individual forgetting curves decaying on you all at the same time.
The burden of such cumulative, exponential memory decay is precisely what has motivated educators to improve their methods of instruction throughout history. Creative instructors have used tactics like imagery, emotions, stories, intimidation, mnemonic devices, novel versus familiar stimuli, unimodal versus bimodal stimulus presentation, structural versus semantic cue relationships, sleep habit improvement, dietary habit improvement, and isolated versus context-embedded stimuli to make their instruction more salient and to thereby "flatten" the forgetting curve and foster long-term retention.
But as it turns out, none of these tactics improves long-term memory retention nearly as effectively as good old-fashioned repetition, particularly when those repetitions occur at increasingly longer Inter-Study Intervals (Janiszewski et al., 2003).
The reason that repetition is so effective is that it essentially "refreshes" the concept's forgetting curve with a new, flatter curve upon each exposure. Each act of mentally retrieving a concept shifts the curve outward with a slower subsequent memory decay schedule.
Figure 12 shows what happens each time a student mentally retrieves a concept that she had already been in the process of forgetting. Upon receiving feedback on her correctness and re-internalizing the concept, the student's forgetting curve accordingly shifts toward a new, flatter forgetting curve, as the hippocampus is able to build new neural connections within the neocortex (Schwindel, 2011).
This process of continual curve-refreshing repeats upon each retrieval attempt, until the forgetting curve becomes as flat as the learner wishes to make it (e.g. until the memory is as permanent as desired), at which point fewer ongoing retrievals are needed to maintain the memory trace.
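The curve-refreshing process described above can be sketched as a stability update: each successful retrieval replaces the old forgetting curve with a flatter one, modeled here as multiplying the stability parameter by a growth factor. The multiplicative form and the factor of 2.0 are illustrative assumptions, not empirical constants from Brainscape's algorithm:

```python
def refreshed_stability(stability: float, growth: float = 2.0) -> float:
    """After a successful retrieval (with feedback), the concept's
    forgetting curve is 'refreshed' to a flatter one. We model this
    as multiplying stability by an assumed growth factor."""
    return stability * growth

# Four successful retrievals, each flattening the curve further:
s = 1.0
for review in range(4):
    s = refreshed_stability(s)
print(s)  # 16.0 -- forgetting is now 16x slower than after first exposure
```

Under this toy model, a handful of well-timed retrievals can slow forgetting by an order of magnitude, which is why far fewer reviews are eventually needed to maintain the memory trace.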
In reality, most people obviously don't need to understand the formal forgetting curve to appreciate repetition's importance to the learning process. Anyone who has ever uttered the phrase "practice makes perfect" already knows that the truism is basically common sense. Even ancient Roman educators were fond of saying repetitio est mater studiorum, or "repetition is the mother of all studying."
What the Romans didn't have, however, was web and mobile software that optimized the actual pattern of repetition in order to maximally flatten the forgetting curve with the least amount of cumulative student effort per concept. Enter Brainscape yet again…
Brainscape's core study algorithm was designed to accelerate the flattening of the forgetting curve by setting each flashcard's Inter-Study Interval to the optimal length of time. The app does so by using precisely the same user-provided confidence assessment that we discussed in the Active Recall section earlier.
Consider again the case of studying the Spanish word albóndiga ("meatball"). If the student had rated her initial confidence as a "1", then Brainscape would repeat the flashcard very soon (perhaps even within the next four or five flashcards), since waiting too long to repeat the concept could have otherwise wasted the initial instruction, as the memory trace would have been too far decayed (Metcalfe and Kornell, 2003).
Conversely, if the user had rated an easy word (e.g. amigo) as a "5", Brainscape's study algorithm would not repeat the flashcard for a very long time: perhaps in a few days at first, then not again for weeks or even years if the user maintains her rating of "5" on those subsequent exposures.
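A scheduler following this pattern can be sketched as a mapping from the 1-5 confidence rating to the next Inter-Study Interval. The specific growth factors below are illustrative placeholders, not Brainscape's proprietary algorithm; they merely demonstrate the "repeat low-confidence cards soon, expand intervals for high-confidence cards" behavior:

```python
def next_interval(confidence: int, prior_interval_days: float) -> float:
    """Map a 1-5 self-rated confidence onto the next Inter-Study Interval.

    A "1" repeats almost immediately; higher ratings expand the
    interval multiplicatively. All growth factors here are assumed
    values for illustration only.
    """
    if confidence <= 1:
        return 0.0  # repeat within the current session (next few cards)
    growth = {2: 1.25, 3: 1.5, 4: 2.0, 5: 3.0}[confidence]
    return max(1.0, prior_interval_days * growth)

# A hard card rated "1" comes back right away...
print(next_interval(1, prior_interval_days=5.0))   # 0.0
# ...while an easy card rated "5" keeps expanding: days, then weeks.
print(next_interval(5, prior_interval_days=2.0))   # 6.0
```

Repeated high ratings compound the interval, which is what produces the "expanding practice" schedule discussed below.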
This method essentially automates the same technique that the "Leitner method" has taught for decades (in which a student makes piles of flashcards based on confidence and has to manually remember which pile to study based on its "staleness"). Brainscape's founders have spent over a decade refining our automated study algorithm so that Inter-Study Intervals appropriately adapt to each student's pace of learning, an approach supported by the latest cognitive science research.
For example, in a review of over 800 academic papers related to spaced repetition, Cepeda et al. (2006) show that over 96% of experiments yield statistically significant improvements in learning outcomes when students space their study versus massing it. And newer research continues to cite stronger benefits of spaced repetition across even more subject areas and environments, particularly for large concepts studied over a long period of time (K.S.H, 2016, and Roediger III, 2011).
The effectiveness of spacing repetitions has even led hundreds of educational software developers to implement such forms of adaptivity into their applications over the past several years (see Anki, Cerego, SuperMemo, etc.). The term "spaced repetition" has become part of the standard vernacular in the world of education.
Brainscape advances the field of spaced repetition by basing the optimization of the Inter-Study Interval upon the user's self-rated confidence in each item, while applying dozens of other features that enhance the student's motivation and depth of cognitive processing, as described throughout the rest of this paper. We call this collective process "Confidence-Based Repetition" (CBR).
CBR allows Brainscape to more effectively space flashcards over increasing Inter-Study Intervals after each repetition, which has been shown to be more important to memory retention than any other possible study tactic. Cull et al. (1996) confirm that "expanding practice" is superior to massed or equally distributed practice in 100% of cases, while both Bahrick & Phelps (1987) and Ebbinghaus himself (1913) propose that the best Inter-Study Interval is the longest one before which the item is forgotten.
Without an app like Brainscape to home in on that optimal interval, even the best human tutors could not match this efficiency in repeatedly assessing their students. Doing so would require them to maintain mental estimates of each student's confidence in hundreds or thousands of specific concepts, and to use those constantly evolving estimates to determine exactly when and how to assess each concept again.
There is also no way a human tutor could be perpetually at a student's side, available to ask questions at the right time, the way a convenient mobile app can be. Modern technology finally makes it practical to distribute brief study sessions throughout the day, allowing maximum intervening time for memory consolidation.
Based on both the scientific research and on the anecdotal evidence we've collected from our users over the past decade, we conclude that Brainscape's application of Confidence-Based Repetition is the most effective known way for students to optimize their use of study time. Millions of students continue to experience unprecedented learning efficiency from using Brainscape, whether they have simply crammed over one long session, or whether they have reviewed a massive topic over the course of days, weeks, months, or years.
Figure 17 summarizes the various burdens experienced with "traditional" studying (i.e. re-reading notes, highlighting textbooks, or even creating paper flashcards) that are alleviated by studying with Brainscape.
This summary makes it especially apparent that the benefits in the Brainscape column also happen to be many of the same benefits typically offered by human tutors. As we have discussed, the right software features can replicate private instructors' tactics with an efficiency that is equal to or better than a human.
In other words, Brainscape is making great strides toward finally resolving the injustice of Bloom's 2 Sigma Problem, particularly for subjects that rely heavily on large amounts of semantic declarative knowledge that can be conveyed by text, images, sounds, or animations.
Brainscape continues to build out our "Knowledge Genome" of such content through partnerships with educators, publishers, and other subject-matter experts who are interested in improving the way people learn. This Knowledge Genome presents a hierarchical map of all of the world's study-able knowledge, from top-level subjects (e.g. Science → Biology → …) all the way down to the individual flashcard level (e.g. → What is the energy-containing chemical produced by mitochondria?). Brainscape's goal is to eventually offer comprehensive content for every subject, in every country, in every foreign language.
While we have a long way to go to achieve that goal, our continued exponential growth raises the question of what other types of knowledge or skills could benefit from Confidence-Based Repetition, beyond just the subjects that are heavily reliant on semantic domain knowledge.
For example, Brainscape might be able to replace the flashcard interface with Math widgets, where the student's correctness, speed, and confidence are combined to determine how frequently various concepts are assessed.
Or perhaps we could support academic objectives not well addressed by educational software, such as complex problem-solving or creative writing, by having a remote human instructor virtually assess the student's correctness and determine which exercises to recommend to the student next.
Such a blend of human instructors and technology could allow a future version of Brainscape to support non-academic learning endeavors such as playing music or excelling at sports. For example, a live or virtual coach could watch the learner and input "confidence ratings" for the learner's various micro skills, thus automatically determining the optimal order of remedial exercises that will be suggested to the user during their ongoing independent practice.
All of these eventual features could be supported by a well-organized Knowledge Genome rubric that breaks all knowledge and skills into their most atomic assessable components, upon which the Confidence-Based Repetition algorithm can determine appropriate intervals of ongoing assessment.
In general, the duty of mapping all of the world's micro-learning objectives into their most optimization-friendly format is what drives the Brainscape team every day. We constantly iterate upon both our learning algorithms and our users’ experience in order to provide an increasingly effective learning tool, for an ever-increasing number of subjects, in a way that can affordably impact more learners at a greater scale than 1-to-1 human instruction ever could.
With the right continued applications of the latest cognitive science research, we can continue to create a smarter world by simplifying and accelerating the learning process itself.