Introduction

In December 1987, a twenty-seven-year-old biology student in Poznań, Poland, sat down at a borrowed computer and typed sixteen evenings' worth of code into a program that would change how the world studies. His name was Piotr Woźniak. His program was called SuperMemo. And the problem he was trying to solve was as old as education itself: why do we forget what we learn, and can a machine help us remember? [1]

That question had been asked before. A German psychologist named Hermann Ebbinghaus had measured forgetting with scientific precision a century earlier, in 1885. An Austrian journalist named Sebastian Leitner had turned the answer into a cardboard box system in 1972. But Woźniak did something neither of them could. He wrote an algorithm. And that algorithm, called SM-2, is still running inside flashcard apps used by tens of millions of students today [2].

The history of flashcard software is not a story about technology. It is a story about memory, obsession, and the strange journey from a single man memorizing English vocabulary on paper cards to machine-learning models trained on hundreds of millions of reviews that predict, for each individual learner, the exact moment a memory is about to disappear.


The Man Who Measured Forgetting

Before there could be flashcard software, someone had to prove that forgetting follows a pattern. That someone was Hermann Ebbinghaus.

In the late 1870s, Ebbinghaus was a young philosopher in Berlin with an unusual ambition. He wanted to measure memory the way physicists measured heat. Not describe it. Not philosophize about it. Measure it with numbers [3]. The problem was obvious: if you ask someone to memorize a poem, their prior knowledge of the language, their emotional reaction, their familiarity with the poet, all of it contaminates the measurement. Ebbinghaus needed material that carried zero meaning.

So he invented nonsense syllables. Roughly 2,300 consonant-vowel-consonant combinations like WID, ZOF, and BUP. He arranged them into lists of thirteen. And then, working entirely alone between 1879 and 1885, he memorized those lists, waited, and measured how much he had forgotten.

His method was simple but brutal. He would learn a list until he could recite it twice without error, then wait a specific interval and relearn it. The difference between the original learning time and the relearning time, expressed as a percentage, was what he called savings. More savings meant more memory remained. Less savings meant more had been lost.
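The savings metric is a simple ratio. Here is an illustrative Python version (the function name and example numbers are mine, not Ebbinghaus's):

```python
def savings(original_minutes: float, relearning_minutes: float) -> float:
    """Ebbinghaus's savings score: percent of the original learning effort retained."""
    return 100 * (original_minutes - relearning_minutes) / original_minutes

# A list that took 20 minutes to learn and 7 minutes to relearn a day later:
print(savings(20, 7))  # 65.0 -> 65 percent savings
```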

The intervals he tested were 19 minutes, 63 minutes, 525 minutes, one day, two days, six days, and thirty-one days. The result was a curve that dropped steeply at first, then gradually flattened. Within the first hour, roughly forty percent of the savings had vanished. Within a day, about sixty-six percent. After a month, roughly eighty percent was gone [3].

He published the results in 1885 as Über das Gedächtnis. The book was ignored by most of his contemporaries. It took decades before anyone realized what he had done: he had discovered that forgetting is not random. It follows a mathematical function. And if forgetting is predictable, then in theory, it is preventable.

In 2015, Jaap Murre and Joeri Dros at the University of Amsterdam replicated the entire Ebbinghaus program. One subject. Seventy hours of testing. The original curve held up almost perfectly, with one small addition: a slight upward bump at the twenty-four-hour mark, likely reflecting overnight sleep consolidation [3]. The forgetting curve was not a metaphor. It was biology.

But Ebbinghaus also noticed something else in his data, something he mentioned almost in passing. Distributing practice across days produced better retention than cramming the same amount of practice into one session. He had discovered the spacing effect, though he did not call it that. And this observation, buried in a nineteenth-century German monograph about nonsense syllables, would become the scientific foundation of every flashcard app ever built.

From Paper Cards to Cardboard Boxes

For eighty-seven years after Ebbinghaus, the spacing effect sat in the scientific literature doing essentially nothing useful. Researchers confirmed it. They replicated it. They wrote papers about it. But no one turned it into a practical study method that ordinary students could use.

There were attempts. C. A. Mace recommended "active rehearsal" at expanding intervals in his 1932 book The Psychology of Study. Herbert Spitzer tested over 3,600 Iowa schoolchildren in 1939 and showed that spaced quizzing dramatically improved retention. His paper was published, praised briefly, and forgotten [4]. Paul Pimsleur, a linguist at Ohio State, published "A Memory Schedule" in the Modern Language Journal in 1967, proposing graduated intervals: 5 seconds, 25 seconds, 2 minutes, 10 minutes, 1 hour, 5 hours, 1 day, 5 days, 25 days, 4 months, 2 years [5]. His intervals became the backbone of the Pimsleur audio courses. But none of these reached classrooms.

The person who finally did it was Sebastian Leitner.

Leitner's biography reads like a novel. Born in Salzburg in 1919. Detained by the Gestapo in 1938 for opposing the Anschluss. Conscripted into the Wehrmacht in 1942. Captured by the Soviets and held as a prisoner of war until the late 1940s [6]. After returning to Germany, he reinvented himself as a science journalist. His wife, Thea Leitner, was a well-known Austrian author. In 1972, at the age of fifty-three, he published a book called So lernt man lernen (How to Learn to Learn) through Herder Verlag in Freiburg im Breisgau. It became a German bestseller.

The book contained, almost as an afterthought, what is now universally called the Leitner system. The original design was a physical box divided into five compartments of 1, 2, 5, 8, and 14 centimeters [7]. Flashcards started in the first compartment. When you answered a card correctly, it moved to the next compartment. When you got it wrong, it went back to the first. You only reviewed a compartment when it was full.

The genius of this design was self-regulation. Cards you knew well migrated to the back of the box and rarely came up. Cards you struggled with stayed in the front and appeared constantly. The box automatically concentrated your effort where it was needed most. No calculator required. No scheduling. Just cardboard and common sense.

What Leitner probably did not realize was that his box was a crude physical implementation of the spacing effect. Correctly answered cards were reviewed at longer and longer intervals. Incorrectly answered cards were reviewed immediately. The intervals were not mathematically optimal, but they were directionally correct. And for millions of German, Austrian, and Swiss students who used Leitner boxes throughout the 1970s and 1980s, they worked well enough.
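The movement rule can be stated in a few lines. Below is a minimal Python sketch of the Leitner advance/reset rule (the names are mine; the compartment sizes and the review-only-when-full trigger are omitted):

```python
NUM_BOXES = 5  # Leitner's original box had five compartments

def review(box: int, correct: bool) -> int:
    """Apply Leitner's movement rule to a card currently in `box` (1-based)."""
    if correct:
        return min(box + 1, NUM_BOXES)  # mastered cards park in the last box
    return 1  # any mistake sends the card back to compartment one

# One card's journey: right, right, wrong, right
box = 1
for answer in (True, True, False, True):
    box = review(box, answer)
print(box)  # 2 -> the single lapse undid two promotions
```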

Timeline: milestones in flashcard software

1885: Ebbinghaus publishes the forgetting curve
1932: Mace recommends expanding rehearsal intervals
1939: Spitzer tests spaced quizzing on 3,600 students
1967: Pimsleur proposes graduated interval schedule
1972: Leitner publishes the cardboard box system
1987: Woźniak writes SuperMemo 1.0 in Turbo Pascal
1991: SuperMemo World incorporated in Poznań
2003: Mnemosyne launches as open-source alternative
2006: Damien Elmes creates Anki for learning Japanese
2007: Andrew Sutherland releases Quizlet publicly
2022: Jarrett Ye publishes FSRS at ACM KDD
2023: Anki integrates FSRS natively in version 23.10

Sixteen Evenings in Poznań

The leap from Leitner's cardboard to software happened in a student apartment in communist Poland. And like most revolutions, it began with frustration.

Piotr Woźniak entered Adam Mickiewicz University in Poznań in the early 1980s to study molecular biology. He was an obsessive learner. He wanted to remember everything he studied, not just long enough to pass an exam, but permanently. And he was failing at it. No matter how many notes he took, no matter how many hours he studied, the knowledge leaked out of his memory within weeks [1].

In 1982, Woźniak did something unusual. Instead of studying harder, he started studying his own studying. He began tracking what he could remember and what he had forgotten, looking for patterns in his own retention. By January 1985, he had formalized this into an experiment. He wrote English vocabulary words on paper flashcards and tested himself at carefully controlled intervals, recording every result. The experiment ran from 31 January 1985 to 2 August 1986. From this data, he derived what he later called Algorithm SM-0: a fixed schedule of expanding intervals (1, 7, 16, and 35 days, with subsequent intervals roughly doubling) that seemed to keep his retention above ninety percent [8].

But paper flashcards could not scale. With hundreds of cards at different intervals, the scheduling became impossible to manage by hand. Woźniak needed a computer.

In late 1987, now also enrolled in computer science at Poznań University of Technology, he borrowed an Amstrad PC 1512 and wrote the first version of SuperMemo in Borland Turbo Pascal 3.0. It took sixteen evenings. His first computerized review session was on 13 December 1987 [1].

The algorithm he implemented, SM-2, was elegantly simple. Every flashcard carried three numbers: a repetition count, a review interval in days, and an easiness factor (E-Factor) initialized at 2.5. After each review, the user graded their recall on a scale of 0 to 5. If the grade was 3 or higher, the card advanced: the first interval was 1 day, the second was 6 days, and every subsequent interval was the previous interval multiplied by the E-Factor. The E-Factor itself was adjusted by a small formula after each review, but it could never drop below 1.3. If the grade was below 3, the card reset to day one [1].

The entire algorithm fits in twenty lines of code. And it is still, nearly four decades later, the most widely used spaced-repetition algorithm in the world.
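Those twenty lines are easy to reconstruct from the description above. This is a minimal Python sketch of the published SM-2 rules (the function name is mine, and implementations vary on details such as whether the E-Factor still updates after a failed review):

```python
def sm2(quality: int, reps: int, interval: int, ef: float) -> tuple[int, int, float]:
    """One SM-2 review step. quality is the user's 0-5 self-grade.

    Returns the updated (repetition count, interval in days, E-Factor).
    """
    if quality < 3:
        return 0, 1, ef  # lapse: the card resets to day one
    if reps == 0:
        interval = 1     # first successful review
    elif reps == 1:
        interval = 6     # second successful review
    else:
        interval = round(interval * ef)  # thereafter, multiply by the E-Factor
    # Woźniak's E-Factor update, clamped so it never drops below 1.3
    ef += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    return reps + 1, interval, max(ef, 1.3)

# A card graded 4 ("correct after hesitation") three times in a row:
reps, interval, ef = 0, 0, 2.5
for _ in range(3):
    reps, interval, ef = sm2(4, reps, interval, ef)
print(interval)  # 15 -> the intervals ran 1 day, 6 days, then 6 * 2.5 days
```

Note how a grade of 4 leaves the E-Factor unchanged, a 5 raises it by 0.1, and a 3 lowers it by 0.14; this is the mechanism that can trap hard cards at the 1.3 floor.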

Woźniak's 1990 master's thesis at Poznań University of Technology, supervised by Professor Zbigniew Kierzkowski and titled Optimization of Learning: A New Approach and Computer Application, laid out the theoretical foundation across roughly 200 pages [9]. On 5 July 1991, Woźniak and his university classmate Krzysztof Biedalak incorporated SuperMemo World. Their entire capital was one Intel 80286 personal computer [10].


The Wired Article That Changed Everything

SuperMemo evolved steadily through the 1990s and 2000s. SM-4 added adaptive interval matrices. SM-5 introduced regression fitting. SM-8 refined stability calculations. SM-17, released around 2016, was the first built entirely on the two-component model of long-term memory, separating stability (how durable a memory is) from retrievability (how accessible it currently is) [11]. The latest iterations, SM-18 through SM-20, delegate parameter tuning to machine learning.

But SuperMemo had a problem. It ran only on Windows. Its interface was notoriously complex. And it was commercial software in an era when the open-source movement was gaining momentum. Outside Poland and a small community of power users, almost nobody had heard of it.

That changed on 21 April 2008, when Gary Wolf published a feature in Wired magazine titled "Want to Remember Everything You'll Ever Learn? Surrender to This Algorithm." Wolf traveled to the Baltic resort town of Kołobrzeg and profiled Woźniak jogging half-naked on a winter beach, living a monastic life organized entirely around his own software's review schedule [12]. The article painted a portrait of a man who had organized his entire life around memory and paid a social cost for it. It was strange, compelling, and widely read. For the first time, the Anglophone tech community learned that a Polish programmer had solved a fundamental problem of human learning thirty years earlier and almost nobody knew.

The article did not make SuperMemo mainstream. Its Windows-only interface and steep learning curve remained barriers. But it planted a seed. Within a few years, two open-source projects would carry Woźniak's algorithm to an audience he never reached.

The Open-Source Revolution

The first crack in SuperMemo's monopoly came from Belgium. Peter Bienstman, a physicist at Ghent University, launched Mnemosyne in 2003 [13]. Named for the Greek goddess of memory, Mnemosyne was built in Python, distributed under the GNU AGPLv3 license, and ran on Windows, macOS, and Linux. It used a modified version of SM-2. But Bienstman added something SuperMemo never had: a research component. Users who opted in could contribute their anonymized review data to a public dataset on long-term memory. Mnemosyne was not just a study tool. It was a scientific instrument [13].

Three years later, on the other side of the world, an Australian programmer named Damien Elmes was trying to learn Japanese. He had been using a basic flashcard program and was impressed by how well spaced repetition worked, but frustrated by the software's limitations. "I was absolutely blown away," he later said in an interview with the language blogger Benny Lewis. He started modifying the code, then rewriting it entirely. The oldest reference to his project that he could later find dated to 5 October 2006 [14]. He called it Anki, the Japanese word for memorization.

Anki was built on a modified SM-2 algorithm. Elmes had initially implemented SM-5 but found it produced erratic interval growth. He reverted to SM-2 with modifications: the failure penalty was softened (cards were not always reset to day one), and the easiness factor was adjusted more conservatively [15].

But the real innovations were architectural. Anki introduced three things that SuperMemo lacked. First, a sync server called AnkiWeb that kept a user's deck consistent across desktop and mobile devices. Second, an add-on ecosystem that eventually grew to more than 1,600 community-built plugins, covering everything from image occlusion to automatic Japanese furigana. Third, a content-agnostic note-and-template system where a single note could generate multiple card types [14].

Anki was free on desktop, free on Android (through the separately developed AnkiDroid project), and $24.99 on iOS. The iOS price was controversial, but it funded Elmes's full-time development of the platform. By the mid-2010s, Anki had become the de facto standard for serious self-directed learners, language students, and, increasingly, medical students.

Software | Year | Creator | Algorithm | License | Platform
SuperMemo | 1987 | Piotr Woźniak | SM-2 to SM-20 | Commercial | Windows only
Mnemosyne | 2003 | Peter Bienstman | Modified SM-2 | AGPLv3 | Cross-platform
Anki | 2006 | Damien Elmes | Modified SM-2, FSRS (2023+) | AGPLv3 | Cross-platform
Quizlet | 2007 | Andrew Sutherland | Proprietary | Commercial | Web and mobile
Brainscape | 2010 | Andrew Cohen | CBR | Commercial | Web and mobile
Memrise | 2010 | Ed Cooke and Greg Detre | Proprietary | Commercial | Web and mobile

A Fifteen-Year-Old and 111 French Animals

While Woźniak and Elmes were building tools for serious memorizers, a high school student in California was about to create the flashcard platform that would reach more users than all of them combined.

In 2005, Andrew Sutherland was a fifteen-year-old sophomore at Albany High School. His French III teacher had assigned 111 French animal vocabulary words to memorize overnight. Sutherland had been asking his father to quiz him from paper flashcards, and he realized he could build a website that did the same thing faster [16].

For the next 420 days, he designed, programmed, and debugged the entire platform alone. He released it to the public in January 2007 under the name Quizlet. By September 2007, it had 50,000 registered users. By 2010, thirteen million. By 2017, half of all U.S. high school students had reportedly used it [17].

Quizlet was not a spaced-repetition tool in the Woźniak tradition. It did not use SM-2. Its original scheduling was simple and gamified: terms and definitions, matching games, timed tests. What it had was ease of use and virality. Anyone could create a flashcard set in minutes and share it with a link. Students did not have to understand anything about memory science. They just studied.

Sutherland enrolled at MIT, left after three years to run Quizlet full-time, raised $12 million from Union Square Ventures in 2015, and then a $30 million Series C from General Atlantic in 2020 [18]. The platform reached over sixty million monthly active users. Sutherland stepped down as CTO in 2020 at age thirty [17].

The Quizlet story illustrates something important about the history of flashcard software. The scientifically superior tool does not always win the market. Anki's algorithm was better. SuperMemo's was better still. But Quizlet understood that most students do not care about optimal review intervals. They care about not failing tomorrow's quiz. And for that, a simple set of digital flashcards with a share button was enough.

In 2023 and 2024, Quizlet pivoted hard toward AI, launching GPT-powered features including Magic Notes (automatic card generation from text) and Q-Chat (a conversational tutor). It also paywalled several previously free features, including Learn mode, Test mode, and Write mode. The backlash from students was immediate and sustained [17].

The Medical School That Runs on Flashcards

No community adopted digital flashcards as completely as American medical students. And no tool became as dominant in that community as Anki.

The numbers are striking. A 2025 survey at the University of Central Florida College of Medicine found that 94 percent of first-year medical students used Anki. Among users, 97.6 percent relied on pre-made decks rather than creating their own cards. And 87.8 percent believed Anki significantly contributed to their success [19].

A separate study at the University of Minnesota (Wothe et al., 2023) found that 56 percent of medical students used Anki daily, and that each additional increment of roughly 1,700 unique cards reviewed correlated with a one-point increase on the USMLE Step 1 board exam [20]. A 2025 systematic review in Medical Science Educator confirmed a positive association between Anki use and academic performance across multiple institutions [21].

The mechanism behind this adoption was community-driven. Around 2017, an anonymous medical student created a deck called Zanki, containing roughly 25,000 cards built around the standard medical school resources: First Aid for the USMLE Step 1, Pathoma pathology videos, and Sketchy Medical mnemonics. Zanki was eventually superseded by the AnKing Step Deck, a community-maintained mega-deck of over 30,000 cards distributed through the AnkiHub platform. More than 100,000 medical students have used it.

What makes the medical school case interesting from a software history perspective is that it demonstrates something Woźniak predicted but could never prove with SuperMemo alone: at scale, with enough cards and enough consistency, spaced repetition does not just help with studying. It fundamentally restructures how knowledge is acquired.

A medical student reviewing 200 Anki cards per day for two years will have reviewed over 140,000 individual card encounters. Each encounter is a retrieval attempt. Each retrieval strengthens the memory trace. The student is not just memorizing facts. The student is systematically building a retrieval architecture in their brain that mirrors the structure of medical knowledge.

The Gamification Wave and Its Casualties

Between 2010 and 2020, the flashcard software market expanded rapidly, driven by smartphone adoption and venture capital. Several notable platforms emerged, peaked, and in some cases disappeared.

Brainscape was founded around 2010 by Andrew Cohen, who had built an early prototype as a Microsoft Excel macro while studying Spanish in Panama. Cohen developed what he called Confidence-Based Repetition (CBR): instead of the algorithm grading the card, the user rates their own confidence on a 1-to-5 scale, and the system uses that signal to determine when to show the card again [22]. The approach was simpler than SM-2 and arguably more user-friendly. Brainscape also introduced "certified" decks created by professional educators and publishers, building what Cohen called the Knowledge Genome.

Memrise, founded in London the same year by Ed Cooke (a Grand Master of Memory) and Princeton neuroscientist Greg Detre, took a different approach entirely [23]. Its signature feature was the "mem," a user-generated mnemonic image or wordplay attached to each vocabulary item. The idea was borrowed from competitive memory sports: association is more powerful than repetition alone. Memrise won Google Play's Best App award in 2017 and accumulated over seventy million registered users.

But the Memrise trajectory reveals a tension that runs through the entire history of flashcard software. User-generated content made the platform cheap to build and viral to spread. A decade later, it was the layer Memrise was most eager to remove. In September 2022, Memrise eliminated mems entirely. In November 2023, it announced further restrictions on community courses. By 2024, the company had pivoted to GPT-powered AI conversation partners for language learning [23]. The mems, the feature that made Memrise Memrise, were gone.

Other casualties were more dramatic. Tinycards, Duolingo's standalone flashcard app launched in 2016, was killed on 1 September 2020 to focus resources on the core Duolingo platform. StudyBlue, founded at the University of Wisconsin-Madison in 2006, raised over $20 million in funding, reached 15 million users, was acquired by Chegg for $20.8 million on 2 July 2018, and was shut down entirely at the end of 2020 [24].

The pattern is consistent. Flashcard platforms that depended on venture capital and user growth had to either find a sustainable business model or disappear. The ones that survived, Anki and Quizlet, did so through very different strategies: Anki through a $24.99 iOS app and volunteer development, Quizlet through advertising and eventually a premium subscription tier. SuperMemo survived on CD-ROM sales and later a subscription model. Brainscape and Memrise pursued freemium with mixed results.


An Undergraduate Rewrites the Algorithm

For thirty-five years, from 1987 to 2022, the flashcard software industry ran on variations of SM-2. Anki used it. Mnemosyne used it. Dozens of smaller apps used it. The algorithm worked. But it had known limitations. The easiness factor could spiral downward, trapping difficult cards in short intervals. The exponential growth of intervals was arbitrary, not fitted to actual forgetting data. And the reset-on-failure mechanic was punishing, sending a card that had been correctly reviewed for months back to day one after a single lapse [15].

The person who broke this thirty-five-year stalemate was not a professor. He was not at a research lab. He was an undergraduate.

Jarrett Ye was working part-time for MaiMemo, a Chinese vocabulary app, when he began experimenting with a new approach to scheduling. Instead of hand-tuned heuristics like SM-2, he treated the problem as what it mathematically is: a prediction task. Given a card's review history, when will the probability of recalling it drop below a target threshold? This is exactly the kind of question machine learning excels at answering [25].

His 2022 paper, co-authored with Su and Cao and published at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, formalized the approach. The algorithm, called FSRS (Free Spaced Repetition Scheduler), models each card with three variables: Difficulty (a number between 1 and 10), Stability (the time in days for recall probability to fall from 100 percent to 90 percent), and Retrievability (current probability of recall). Twenty-one parameters govern the relationships between these variables, and those parameters are fitted by gradient descent on each user's review history [26].

What happened next is a story that could only happen in the open-source era. Ye posted about FSRS on the Anki subreddit. A commenter challenged him to stop publishing papers and go implement the algorithm. He did. FSRS v3 appeared as an Anki add-on in October 2022. FSRS v4 followed in July 2023. And on 31 October 2023, Anki version 23.10 shipped with FSRS integrated directly into the core application [14].

The default parameters were trained on 738 million reviews from approximately 20,000 users. Benchmarks run by the open-spaced-repetition project across roughly 10,000 review collections showed FSRS outperforming SM-2 on prediction accuracy for 99.5 to 99.6 percent of users [27]. Simulations suggested 20 to 30 percent fewer reviews were needed to achieve the same retention level.

One technical detail matters for the history. SM-2 implicitly assumed that the forgetting curve is exponential: memory decays at a constant proportional rate. FSRS models the curve as a power function instead, a choice supported by both Ebbinghaus's original data and the 2015 Murre and Dros replication [3]. The distinction sounds arcane, but it changes how intervals scale for well-learned cards. Under an exponential model, intervals grow faster than they should. Under a power model, they grow more conservatively, which matches what learners actually experience.
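The divergence is easy to see numerically. Below is an illustrative Python comparison (function names are mine) using the power-law form published for FSRS v4, R = (1 + t/(9S))^-1, alongside an exponential baseline scaled so that both curves pass through 90 percent recall at t = S:

```python
def exponential_r(t: float, stability: float) -> float:
    """Exponential forgetting (SM-2's implicit shape), scaled so R(S) = 0.9."""
    return 0.9 ** (t / stability)

def power_r(t: float, stability: float) -> float:
    """Power-law forgetting as published for FSRS v4; also gives R(S) = 0.9."""
    return 1 / (1 + t / (9 * stability))

s = 10  # a card with ten days of stability
print(exponential_r(s, s), power_r(s, s))            # both ~0.9 at t = S
print(exponential_r(10 * s, s), power_r(10 * s, s))  # at t = 10S they diverge sharply
```

At ten times the stability, the exponential model predicts about 35 percent recall while the power model predicts about 47 percent: the power curve's heavier tail is what keeps long intervals from growing too aggressively.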

Chart: FSRS vs SM-2 prediction accuracy, showing the percentage of users for whom FSRS wins, plotted for FSRS v3 through v6 (all versions above 95 percent).

When Machines Start Making the Cards

By 2024, the bottleneck in flashcard-based learning had shifted. The scheduling problem was largely solved. FSRS could predict when a card would be forgotten with remarkable accuracy. SM-2, for all its simplicity, had been good enough for decades. The new problem was creation.

Making flashcards by hand is tedious. A medical student building a personal deck from a pathology textbook might spend more time typing cards than studying them. A language learner transcribing vocabulary from a lecture might give up before the deck is finished. The creation cost was the single biggest reason students abandoned spaced repetition before it could work [28].

Large language models solved this problem in roughly twelve months. Starting in 2023 and accelerating through 2024 and 2025, a wave of AI-powered tools appeared that could generate flashcard decks from text, PDFs, lecture slides, and YouTube videos. Quizlet launched Magic Notes in August 2023. Knowt, founded by college students, grew to over seven million users on a free tier that included AI generation from uploaded documents. Gizmo gamified the same workflow on mobile. Mindomax integrated AI card generation with spaced repetition scheduling [28].

The shift was fundamental. For thirty-six years, from SuperMemo 1.0 in 1987 to early 2023, flashcard software had been about when to show a card. Now it was also about what to put on the card in the first place. The scheduling engine was becoming a commodity. The differentiation was moving upstream, to content generation, and downstream, to the learning experience itself.

Market research firms began tracking this new category. SNS Insider estimated the global AI-generated personalized flashcard market at $1.98 billion in 2024, with a projection of $8.62 billion by 2032 at a compound annual growth rate of 20.25 percent [29]. The broader EdTech market, within which flashcard software is a small but cognitively outsized segment, sits in the range of $145 to $187 billion as of 2024-2025, depending on the research firm consulted [30].


The Science That Makes It All Work

The reason flashcard software works is not the software. It is the cognitive science underneath it. Three phenomena, each replicated hundreds of times across decades of research, form the scientific foundation.

The first is the testing effect. In 2006, Henry Roediger and Jeffrey Karpicke at Washington University in St. Louis published a study in Psychological Science that became one of the most cited papers in educational psychology [31]. Students who read a passage once and were tested three times remembered about 61 percent of the material a week later. Students who read the same passage three times and were tested once remembered only about 40 percent. Testing was not just an assessment tool. It was a learning tool. And every flashcard app is, structurally, a forced retrieval-practice machine.

Their 2008 follow-up in Science went further [32]. Once a foreign-language word pair had been recalled correctly, further studying it added nothing to long-term retention. But further testing it did. The act of pulling information out of memory, not putting information in, was what strengthened the trace.

The second phenomenon is the spacing effect. The definitive modern synthesis came from Cepeda, Pashler, Vul, Wixted, and Rohrer in 2006: a meta-analysis of 839 effect sizes from 317 experiments confirmed that distributed practice reliably outperforms massed practice [33]. Their 2008 follow-up tested over 1,350 subjects and found that the optimal study gap is roughly 10 to 20 percent of the desired retention interval [34]. Want to remember something for a year? Space your reviews about five to ten weeks apart. This quantitative target is what modern algorithms like FSRS now hit dynamically, personalized to each learner.
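That 10-to-20-percent rule is easy to apply directly; here is a tiny illustrative helper (the function is hypothetical, not from the paper):

```python
def optimal_gap_days(retention_days: float) -> tuple[float, float]:
    """Cepeda et al.'s rule of thumb: study gap of ~10-20% of the retention interval."""
    return 0.10 * retention_days, 0.20 * retention_days

low, high = optimal_gap_days(365)  # want to remember for a year
print(low, high)  # 36.5 73.0 -> roughly five to ten weeks between reviews
```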

The third is Robert Bjork's desirable difficulties framework [35]. Bjork argued that conditions which make learning feel harder in the moment, such as spacing, interleaving, and retrieval practice, actually produce stronger long-term retention. The framework explains a persistent paradox: students consistently prefer rereading their notes over testing themselves, even though testing produces better results. Rereading feels smooth. Testing feels effortful. The brain mistakes fluency for learning. Bjork called this the illusion of competence.

Together, these three phenomena explain why a simple flashcard app with a decent scheduling algorithm outperforms most other study methods. The app forces retrieval (testing effect), spaces reviews over time (spacing effect), and makes each review slightly difficult (desirable difficulty). No other study method combines all three as naturally.


What the Numbers Cannot Tell

There is a version of this history that reads like a triumph. Ebbinghaus discovered the curve. Leitner built the box. Woźniak wrote the code. Ye trained the model. Each generation improved on the last. Memory, once left to chance, became a choice.

But the numbers hide something important. As of 2026, the most widely used study method among university students worldwide is still rereading notes. Not spaced repetition. Not retrieval practice. Rereading. The same technique that Roediger and Karpicke proved inferior in 2006. The same technique that Ebbinghaus's data implicitly warned against in 1885.

The gap between what cognitive science knows and what students do remains enormous. Flashcard software has reached tens of millions of users, but it has not reached the hundreds of millions who could benefit from it. The tools exist. The science is settled. The barrier is not technology. It is behavior.

Perhaps that is the next chapter of this history. Not better algorithms. Not better AI. But better understanding of why humans resist the very techniques that would help them most.


Frequently Asked Questions

What was the first flashcard software ever created?

SuperMemo, written by Piotr Woźniak in December 1987 using Turbo Pascal 3.0 on a borrowed Amstrad PC 1512 in Poznań, Poland. It implemented the SM-2 algorithm, which scheduled reviews at expanding intervals based on a user's recall performance. SuperMemo remains the earliest known computer program specifically designed for spaced-repetition flashcard learning.

How does the SM-2 algorithm work?

SM-2 assigns each flashcard an easiness factor initialized at 2.5 and schedules reviews at intervals of 1 day, then 6 days, then each subsequent interval multiplied by the easiness factor. After each review, the user grades recall from 0 to 5. Grades below 3 reset the card. The easiness factor adjusts slightly after each review but never drops below 1.3.

What is FSRS and how is it different from SM-2?

FSRS (Free Spaced Repetition Scheduler) is a machine-learning-based algorithm created by Jarrett Ye and published at ACM KDD in 2022. Unlike SM-2, which uses fixed rules, FSRS models each card with three variables: difficulty, stability, and retrievability. Its 21 parameters are fitted by gradient descent on user data, producing personalized schedules that, in benchmarks, require 20 to 30 percent fewer reviews than SM-2.

Why do so many medical students use Anki?

Medical education requires memorizing vast amounts of factual knowledge across anatomy, pharmacology, pathology, and clinical reasoning. Anki's spaced-repetition algorithm keeps this material in long-term memory with minimal daily review time. Community-maintained decks like AnKing contain over 30,000 cards aligned with standard study resources, saving students hundreds of hours of card creation.

What is the difference between Anki and Quizlet?

Anki uses a spaced-repetition algorithm (SM-2 or FSRS) that schedules reviews based on individual card performance and is free on desktop and Android. Quizlet is a broader study platform offering flashcards, games, and AI-generated content with a freemium model. Anki prioritizes long-term retention through algorithmic scheduling. Quizlet prioritizes ease of use and social sharing.