The Expertise Reversal Effect (2003)

Instructional techniques that are highly effective with inexperienced learners can lose their effectiveness and even have negative consequences when used with more experienced learners.

In this edition, I summarize The Expertise Reversal Effect (2003) from Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J.

This is reposted from my site, with some edits for clarity. I’m wondering if I should write more paper summaries like this, so please comment or reply if you like it!

The paper walks through a number of pedagogical strategies based on Cognitive Load Theory and demonstrates that most of them have a marked ‘flip’ in results depending on the level of expertise of the learner.

The full paper (pdf) is only 8 pages, so it’s a quick read (if you’ve got the appropriate schemas 😉).

  1. Background: Cognitive Load Theory

  2. Instructional effectiveness and expertise

  3. Expertise Reversal and the Split Attention and Redundancy effects

  4. Text Processing and Expertise Reversal

  5. But what about modalities?

  6. Worked Examples and Expertise Reversal

  7. Interacting elements and expertise reversal

  8. Imagination effect and expertise reversal

  9. Conclusion

1. Cognitive Load Theory

To make sense of this paper, you need to know about Cognitive Load Theory. Here’s the Wikipedia entry on Cognitive Load if you want a quick primer. 

The paper describes lots of instructional techniques. The cognitive science explanations for why the techniques work all come down to limits on working memory.

Cognitive Load Theory says that there are only a few “slots” available for “chunks” of information to fit in working memory at a time. If you try to hold too many chunks in your head at once, you get reduced processing ability.

The fix is schemas. Learners build schemas that make concepts fit into less working memory.

Because of the limited capacity of working memory, the proper allocation of available cognitive resources is essential to learning. If a learner has to expend limited resources on activities not directly related to schema construction and automation, learning may be inhibited.

In Cognitive Load Theory, all learning is framed in light of working memory limits. Learners acquire new schemas, then through practice, make schema use automatic instead of effortful. Schema use reduces the working memory burden. Expertise means having lots of schemas, practiced to the level of automatic use. Experts can handle complex tasks because their schemas reduce working memory demands.

This all makes intuitive sense to me. When there are too many new facts and terms, I get overwhelmed and can’t really process the material. After spending time learning, I am comfortable with how the ideas fit together, so I can fit in new facts more easily without getting overwhelmed.

2. Instructional effectiveness and expertise

The goal of instruction is to scaffold the construction of schemas.

Novice learners have fewer schemas in place, and therefore, less ability to organize new information. Effective instruction can substitute for missing schemas by structuring new information, like “pre-chewing” the tough new knowledge to make it easier to digest. Instructors can also model the schemas for learners. Instructors show how they use the schemas, and their example helps learners build schemas faster. Without structuring from an instructor, learners are more prone to cognitive overload, which limits learning.

Expert learners already have some schemas in place to guide them in dealing with a new task. If instruction provides guidance that’s helpful for novices, it may be redundant for experts. Experts still need to process the redundant information. The added guidance still requires attention, i.e. it takes up working memory, so it might be distracting.

The overlap of schema-based guidance already in experts’ heads and guidance from instruction can lead to cognitive overload. That’s the Expertise Reversal effect in brief: guidance that is useful for novices can be negative for experts.

The rest of the paper explores situations where this effect shows up.

3. Expertise Reversal and the Split Attention and Redundancy effects

First, let’s define what these effects are.

The Split Attention Effect

Separating sources adds cognitive load, because learners have to search and match between representations.

If you have a diagram and explanatory text side-by-side, readers have to scan back and forth to match up concepts. The scanning back and forth adds cognitive load, limiting learning. If you integrate the text with the diagram, it reduces the load from the searching and matching.

The effect applies across time as well as space. If you have to recall a previous slide in a deck and compare it to the current slide in your mind, that adds cognitive load compared to showing both at once.

Spatially and chronologically integrated materials reduce cognitive load for new learners.

The Redundancy Effect

If multiple sources of info are necessary for learning a concept, integrating them is good. However, if they could stand on their own, eliminating the redundant one is better.

At a glance, this is surprising! Adding more information can hurt learning? On reflection, it makes sense. If many representations of an idea are all shown at once, that’s more to process, which makes it harder.

One source alone is better than redundant sources. I would naively expect that repetition and presenting information in multiple modalities (as text and as an image) would make something easier to learn. But, counter to that expectation, adding redundant sources of information will increase cognitive load and limit learning.

The way I understand this effect is to think about the level of cognitive load in a particular moment. Over time, seeing something repeated or presented in different ways might aid learning, but not if it leads to cognitive overload. In the space of a single slide, adding more information can be overwhelming, and therefore worse for learning. 

Expertise Reversal

Okay, acknowledging the Split Attention and Redundancy Effects, how do you design an individual slide? The Split Attention Effect says that you should present the necessary information together, to reduce the cognitive load from scanning and matching. But, the Redundancy Effect says to not present more information at once than is necessary.

This seeming contradiction is what the Expertise Reversal Effect attempts to resolve.

A source of information that is essential for a novice may be redundant for someone with more domain-specific knowledge.

Inexperienced trainees benefitted from textual explanations integrated into the diagrams (to reduce split attention). However, more experienced trainees performed significantly better with the diagram-only format. For these more knowledgeable learners, the textual information, rather than being essential and so best integrated with the diagram, was redundant and so best eliminated.

This is the Expertise Reversal Effect. Adding expertise as a dimension resolves the contradiction between the Split Attention and Redundancy effects.

4. Text Processing and Expertise Reversal

When reading about new concepts, verbose and detailed explanations can help inexperienced learners. Learners with more expertise get distracted by the additional explanatory text, and benefit from minimal text.

Less knowledgeable learners benefited from additional explanatory material, but more knowledgeable learners were better able to process the material without the additions.

Text that is minimally coherent for novices may well be fully coherent for experts. Providing additional text is redundant for experts and will have negative rather than positive effects.

Expertise Reversal again! Another domain where novice learners need something that would be detrimental to expert learners.

5. What about multiple modalities?

I mentioned above that presenting information in multiple modalities could help learners — in a way that seems to contradict the Redundancy Effect. The authors have a CLT-based explanation for why multi-modal learning works:

[The] capacity to process information is distributed over several partly independent subsystems.

Spreading the load across different systems means more total working memory available. The authors consider visual and auditory systems as having semi-independent working memory capacities:

Many studies have demonstrated that learners can integrate words and diagrams more easily when the words are presented in auditory form rather than visually.

Visual working memory is one ‘bucket’ that gets filled by an image or diagram, so doesn’t have capacity for a textual explanation of that diagram. But there’s another bucket — the auditory working memory bucket — that students can use for the words.

Seeing and hearing at the same time spreads the load across more available cognitive resources. Less chance of cognitive overload, so better learning.

What about the Redundancy Effect, and Expertise Reversal?

auditory explanations may also become redundant when presented to more experienced learners

Adding an auditory explanation can detract from learning, depending on the learner’s level of experience! Experts get distracted by additional material that would benefit novices.

For an instructor or experience designer, the takeaway is that adding explanatory material or extra modalities makes things better for novices, but might hurt more advanced learners.

6. Worked Examples and Expertise Reversal

Worked examples are problems presented along with their solution steps. Worked examples are often more effective than other problem-solving based teaching strategies. For instance, guided tutorials work better than unstructured exploration for introducing a concept to beginners.

However, for experts, worked examples add cognitive load. Advanced learners do better working through the problems on their own, without having the steps laid out for them.

The description of the experiment from the paper:

Inexperienced mechanical trade apprentices were presented with either a series of worked examples to study or problems to solve. On subsequent tests, inexperienced trainees benefited most from the worked examples condition. Trainees who studied worked examples performed better with lower ratings of mental load than similar trainees who solved problems, duplicating a conventional worked example effect. With more experience in the domain, the superiority of worked examples disappeared. Eventually, with sufficient experience, additional learning was facilitated more by problem solving than through studying worked examples. The worked examples became redundant and problem solving proved superior, demonstrating another expertise reversal effect.

Before you’ve seen someone do something, it’s overwhelming to be ‘thrown in the deep end’ and try to solve things on your own. However, after you’ve seen someone else demonstrate a skill, you benefit most from trying it on your own.

Many types of support reduce cognitive load for beginners, but add cognitive load for experts. The other implication is that when instructors give too much help, they miss an opportunity to allow more expert learners to practice their schemas.

Inexperienced learners benefited most from an instructional procedure that placed a heavy emphasis on guidance. Any additional instructional guidance (e.g., indicating a goal or subgoals associated with a task, suggesting a strategy to use, providing solution examples, etc.) should reduce cognitive load for inexperienced learners, especially in the case of structurally complex instructional materials. At the same time, additional instructional guidance might be redundant for more experienced learners and require additional working memory resources to integrate the instructional guidance.

Unsurprisingly, experience is a gradient. For all of these effects, we see a gradual fading out and then crossing over, not a sudden flip.

7. Interacting elements and expertise reversal

Systems with lots of interacting elements are hard to learn.

There’s a double bind — you need to keep all the pieces in your head in order to understand how the pieces work individually, but you become cognitively overloaded from trying to keep all the new things in your head at once.

It’s a chicken-and-egg problem for instructors: learners need the schemas to reduce cognitive load, but before they have the schemas, they have too much cognitive load to learn effectively.

For example, learning the syntax of a foreign language requires you to keep track of the relations between all the different parts of speech in your head. You need to hold ‘how nouns work’ and ‘how verbs work’ and ‘how modifiers work’ all at once, which is hard (and prone to cognitive overload). Conversely, learning a new language’s vocabulary, while time-intensive, only requires a few new items in working memory at a time. It might be boring, but it’s not confusing or overwhelming the way that learning grammar is.

The instructional solution for teaching concepts with interacting parts is to present a simplified (but false) model to help learners build a partial schema. That way, learners have some tools in place when they encounter the full system with all its interacting parts.

This matches my mental image of scaffolding. The instructor puts up a fake structure to hang on to, so that students can eventually handle the real, complex model. Instructors help students manage complexity by hand-waving, over-simplifying, and ignoring what they can, until students have built the necessary schemas.

Interestingly, the instructional strategy suggested here did not result in the full expertise reversal effect. Experienced learners showed no difference in effectiveness between the mixed approach (isolated elements followed by interacting elements) and the conventional method (interacting elements instruction during both stages).

Since there’s no added cognitive load for advanced learners, it's safe for teachers to explore false-but-suggestive isolated elements models with students of any level as a way of building up to the truer interacting elements model. Cool!

8. Imagination effect and expertise reversal

This effect has a great name, and seems like an underrated strategy in general, so I’m glad the authors bring it up. Here’s how the technique works:

Instead of giving students actual problems to work through, or having them study worked examples, the instructor prompts students to imagine the steps they would take to solve a problem. Imagining the steps encourages automation of schemas, which improves learning. Provided that students have enough experience with the concept, the technique is more effective than worked examples!

The imagination effect “turns on” as learners gain more experience. When students don’t have sufficient experience, worked examples are more effective. The explanation in the paper is in terms of working memory overload — imagining works if schemas are in place. Without the schemas, learners’ working memory gets overloaded, since they have to process too many individual components.

To me, it seems like instead of working memory overload, this is probably a recall issue. Students who don’t have enough experience aren’t overloaded by too much to process; rather, they don’t have enough in their heads to usefully imagine a procedure.

The working memory explanation seems like it’s probably just the authors interpreting everything through the working memory lens. It’s super hard to inspect the contents of learners’ minds as they are instructed to imagine a problem-solving procedure. We can’t tell whether they’re imagining but overloaded, or just squeezing their eyes closed and pretending to imagine.

9. Conclusion

Instructional design should account for cognitive load. But, if you knew about CLT before, you already knew that. So, what’s new?

A lot of it can be summed up as “know your student”. If your target student is a novice, add more support. If your target student has incorporated some schemas already, that same support could be distracting.

Materials should recognize the level of the learner in terms of 1) the schemas they have incorporated and 2) their level of automation with those schemas. One idea that struck me for instructional design practice was a ‘cognitive load audit’: count the new concepts learners have to hold in memory at any given time in a course, weigh that against their expected level of schema automaticity, and figure out where the course is prone to cognitive overload. The goal is to always present the material “just right” for the learner — not too much information, not too little.
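To make the audit idea concrete, here’s a rough sketch of what it could look like in code. It’s purely illustrative: the function, the course outline, and the “more than four new concepts at once” threshold are all made up, and a real audit would also need to account for how automatic each schema already is.

```python
# Hypothetical sketch of a 'cognitive load audit': flag lessons that introduce
# many new concepts at once. Lesson names, concepts, and the threshold are invented.

def audit_course(lessons, max_new_concepts=4):
    """lessons: list of (title, concepts) pairs, in teaching order."""
    seen = set()
    flagged = []
    for title, concepts in lessons:
        new = [c for c in concepts if c not in seen]
        if len(new) > max_new_concepts:
            flagged.append((title, new))
        seen.update(concepts)
    return flagged

course = [
    ("Intro to SQL", ["table", "row", "column", "SELECT"]),
    ("Filtering", ["WHERE", "comparison operators"]),
    ("Joins", ["primary key", "foreign key", "JOIN", "ON", "aliases", "NULL"]),
]

for title, new in audit_course(course):
    print(f"{title}: {len(new)} new concepts at once -> {new}")
```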

Since the paper demonstrates expertise reversal for so many different effects, it also serves as a survey of findings in the broader Cognitive Load Theory literature. The Split Attention, Redundancy, Text Processing, Worked Examples, Interacting Elements, and Imagination Effects are each worth digging into and applying on their own.

Now that you’ve got some of the schemas in place (and if you’re still curious), maybe read the paper itself: The Expertise Reversal Effect (2003). If you’re curious to dig into this literature more, the references section is a jumping off point to tons of other interesting papers.

———

Thanks for reading!

Let me know if paper summaries like this are interesting or helpful to you, and I’ll do more. If there are particular papers you think are interesting, send me a link!

Thanks to Compound and especially Sarah Ramsey for proofreading and edits.

The Billion Dollar Bootcamp

What would it take to build a billion-dollar-a-year school?

Today, we’re doing creative arithmetic. Math is fun!

Let’s play a game with bootcamp business models.

Here are the rules:

  1. The bootcamp has only one teacher.

  2. The cost to acquire a student is $1,000.

  3. The sum of the other costs for a student is $1,000.

  4. There are no other costs. There is no other revenue.

  5. We can only adjust three variables: program length, tuition, and student-to-teacher ratio.

We’re going to ignore everything else about the schools for now, and just focus on annual revenue.

Before we start, some notes and definitions.

CAC is the cost to acquire a customer. Basically, it’s money you spend on advertising, reviewing applications, conducting admissions interviews, giving campus tours - whatever it takes to get students to actually show up on day one.

For all of these hypothetical schools, CAC amounts to $1,000 for each student.

COGS is the cost of goods sold. For bootcamps, it usually includes line-items like teacher salaries, leases on classroom space, t-shirts and swag for students, software licenses, student community events, and (if they’re part of the program) career services. Everything you pay for in order to teach the program and support students.

For the game, we aren’t including teacher salaries, and we’re rounding everything else off to $1,000.

In my post about bootcamp business models, I slice and dice the contributing factors in greater depth. Go look there if you want all the gory details.

Fixing CAC and COGS at $1,000 each makes my game much easier. If you’re running an actual school, you can’t just decide how much things will cost. But... this is our house, we make the rules. 

This is a dumb game. It’s meant to illustrate how these variables interact, and to give a rough sense of the space of possibilities for schools, in pure financial terms. It ignores almost everything that matters about a school.

The three variables — program length, tuition, and student-to-teacher ratio — trade off against each other. In theory, a school could double the number of teachers and halve the program length without changing the annual revenue. Or, a school could double the tuition, and double the length of the program, and have the same result.

Those kinds of tweaks — turning one variable up and another down — don’t affect the outcome, so I’m only going to turn the knobs in one direction: up.

Let’s look at the first bootcamp, and then dig into the formula I’m using.

This first, tiny school brings in just $30,000 in annual revenue per teacher. By the end of the post, we’ll see what kind of numbers are needed to build a billion dollar bootcamp.

A tiny, but maybe feasible bootcamp

A long program, a low tuition (about half of what most bootcamps charge), and a low student-to-teacher ratio. This sounds like it’d be an awesome deal for students - if it existed.

Teachers generally have better employment opportunities, so unless they’re really not in it for the money, this school doesn’t make sense to run.

Still, we have to start somewhere. If this solo teacher were really committed, this is around what a minimum feasible school could be.

What would this program actually look like?

In six months, one teacher could cover a lot of material with only five students. That’s way more support and individualized attention than most bootcamps (or really any kind of school) deliver. It’s shorter than some of the longest bootcamps, but it’s also much cheaper than any US-based bootcamp of that length.

With a 5:1 ratio, a teacher would have over an hour to spend with each student each day one on one. All five students could work in pairs, trios, or even one big group, mob-debugging problems elbow-to-elbow on the same screen. The teacher could micromanage all student learning.

That sounds like it would be an intense experience. That level of attention might deliver something like Bloom’s Two Sigmas, a storied two-standard-deviation improvement in student learning with the guidance of a tutor.

At $5,000, most of the real cost to a student is the opportunity cost of their time, not the actual tuition — more on this later. Whether through a loan, an Income Share Agreement, or paid up front, the tuition isn’t the biggest barrier to this school.

How does the math work?

So glad you asked.

Here’s the formula I’m using for all of these hypothetical schools:

programs per year * students per teacher * (tuition - costs)

Applied to our first bootcamp: the six month program could run twice per year, with five students each paying $5,000 in tuition. In our model, every student costs one thousand dollars to sign up, and one thousand dollars to teach.

2 * 5 students * (5,000 tuition - 1,000 CAC - 1,000 COGS)

So, ten students per year, times three thousand in tuition per student:

10 * $3,000 = $30,000

Thirty grand total, from fifty grand in tuition collected.
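If you’d rather poke at the numbers yourself, here’s a minimal sketch of the same formula in Python. The function and parameter names are my own shorthand, not anything official:

```python
# The game's formula: programs per year * students per teacher * (tuition - costs).
def annual_revenue_per_teacher(programs_per_year, students_per_teacher, tuition,
                               cac=1_000, cogs=1_000):
    return programs_per_year * students_per_teacher * (tuition - cac - cogs)

# The tiny-but-maybe-feasible bootcamp: two six-month cohorts of five, at $5,000 each.
print(annual_revenue_per_teacher(2, 5, 5_000))  # 30000
```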

You may have noticed I’m using the word “revenue” the wrong way. If you are screaming in your head “that’s not what revenue means! 🤬🤬🤬”, then yes, you’re right.

The game is about answering this question:

“How much could a teacher make each year, if they ran a bootcamp solo?”

That’s probably closer to the definition of profit than revenue. But! Most big bootcamps treat teachers’ salaries as a cost in their model, so from that perspective, it doesn’t really make sense to call it profit either.

The right term for it might be something like “annual revenue contribution per teacher”, but that’s a mouthful. My game, my abuse of terminology.

If someone were running this tiny-but-maybe-feasible bootcamp for real, they would try to reduce the costs. They’d probably find ways to do it! But again — I’m skipping those kinds of cost optimizations for now.

Let’s ratchet everything up a notch.

The Cheap Boutique

Let’s run through the math, briefly.

A four month program means three programs per year (every month has four weeks! I’m rounding!). Ten students in each cohort each pay ten grand in tuition, cost one thousand dollars to market to and enroll, and one thousand dollars to teach. That leaves eight thousand per student, times thirty students. We arrive at $240,000 annual “revenue”.
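Or, as a one-liner with the same formula (the numbers are just the ones above):

```python
# The Cheap Boutique: 3 cohorts of 10, $10,000 tuition, $2,000 in per-student costs.
print(3 * 10 * (10_000 - 1_000 - 1_000))  # 240000
```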

What does this program look like?

Sixteen weeks is fast. Students have to put in long hours, even with the pretty high level of support that they get from instructors. With ten students, a teacher might add more direct instruction to the mix compared to the individual attention they could give to five students.

Mob debugging with all 10 students probably wouldn’t be the norm. Still, 10:1 is an enviable ratio! The teacher would spend a lot of time getting to know each student personally, and figuring out how best to support them.

Ten students is enough that there starts to be a real community dynamic. Students won’t always be at the exact same place in the curriculum, but everyone will get a chance to work with everyone else. There’ll be enough personalities in the room for closer friendships to emerge, though probably not cliques.

The tuition doubled from the tiny-but-maybe-feasible school, and it is now a significant investment for the student. The opportunity cost is still relatively large, but by my reckoning, this is about the price and program length where the tuition and opportunity cost are roughly balanced as factors in the overall cost.

My logic for this: US median household income is ~$60,000, so four months of foregone wages is about $20,000. Four months at the US federal minimum wage ($7.25 / hr) is about $5,000. I think it’s fair to assume that students’ opportunity costs are, on average, somewhere between minimum wage and median household income. $10,000 is a nice round number in that range. Like all napkin math, this is probably wrong in some way. It’s also worth remembering that individual students vary tremendously!

With loans and ISAs, borrowing and repaying $10,000 in tuition is doable, so long as grads get jobs, and non-job-getting grads don’t have to pay.

This bootcamp is pretty plausible to run sustainably as a solo teacher, or as a small team. If a single teacher were running it alone, $240,000 is competitive with tech jobs in overall compensation. Split with a co-owner or employee, these numbers can still work in lots of markets.

Ten students at a time is a sustainable teaching load, especially with a little break between programs (from rounding every month to February — see, rounding is good!). A teacher would have enough time to prepare for each day, give students individual attention, and avoid burnout.

As another positive check on the numbers, we’re in the range of real bootcamps! Take a glance at the market reports from Switchup, Career Karma, and Course Report to make your own guesses at which particular bootcamps are in this range. My guess is that the bootcamps that fit this revenue number are mostly online, plus a few that (in normal times) run in-person, outside of big tech cities.

Note: Tuition and program length are published and pretty easy to find for most schools. It’s super hard to tell student-to-teacher ratio from the outside — that’s where most of my uncertainty is in modeling any particular school.

Turn the dials up another notch, and we’ll find a whole lot more bootcamps.

The Universal Standard Bootcamp

The math, briefly: four cohorts of fifteen, times thirteen thousand dollars net per student, makes 60 * $13,000 = $780,000.

A lot of bootcamps are in this range.

The program I taught, Flatiron’s Software Engineering program, doesn’t perfectly fit, but it’s close.

Actualize, App Academy, Galvanize, General Assembly... a lot of the big players in the in-person bootcamp space look a lot like this.

The trend since the emergence of coding bootcamps around the turn of the decade has been toward a longer program at a higher price point, maybe with more students per teacher. But almost every program you could name still comes in near this mark.

Let’s tweak and tune the dials, keeping the same $780,000 “revenue” per teacher per year as a constant. Here are some of the possible schools we can generate:

A: 16 weeks (3 cohorts per year), $15,000 tuition, 20 students per teacher

B: 24 weeks (2 cohorts per year), $8,000 tuition, 65 students per teacher

C: 9 months (1.33 cohorts per year), $30,000 tuition, 21 students per teacher

D: 2 years (half of a cohort per year), $70,000 tuition, 23 students per teacher

E: 4 years (one quarter of a cohort per year), $70,000 tuition, 46 students per teacher

A is the neighborhood of Flatiron and other immersive in-person programs.

B is pretty similar to the swath of self-paced online programs, like Springboard.

C looks roughly like Lambda School or Kenzie Academy, though I don’t know their actual student-to-teacher ratios.

D is Make School (or their evil twin, Holberton School, if it actually had teachers and a two-year program).

E could stand in for a BS in Computer Science at most undergraduate programs. 😳
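If you want to check my dial-turning, here’s the same arithmetic run over all five variants. I’ve encoded each as (cohorts per year, tuition, students per teacher), which is my own shorthand; small rounding differences are expected.

```python
# Each variant should land near the same ~$780,000 per-teacher figure.
variants = {
    "A": (3, 15_000, 20),
    "B": (2, 8_000, 65),
    "C": (1.33, 30_000, 21),
    "D": (0.5, 70_000, 23),
    "E": (0.25, 70_000, 46),
}
for name, (cohorts, tuition, students) in variants.items():
    print(name, round(cohorts * students * (tuition - 2_000)))
# A 780000, B 780000, C 782040, D 782000, E 782000
```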

We found it! The Pareto frontier for coding schools! Or, at least my guess at it.

For those who aren’t familiar with the 19th century Italian engineer, sociologist, economist, and philosopher Vilfredo Pareto, his name is on a ton of the tools in the economics toolbox (viz. the Pareto Principle, the Pareto Distribution, the Pareto Index, the Pareto Chart, and Pareto efficiency).

This one is simple enough once you see it.

(Original Image Source: Wikipedia)

Cheap, low quality products at the top left. Expensive, high quality products at the bottom right. Lots of products end up along the cost-for-quality tradeoff frontier.

Turning our quality knobs (student-to-teacher ratio, program length) and our cost dial (tuition), we can fit a lot of schools on this curve.

There’s more to school quality than student-to-teacher ratio, but if a school is much above or below this curve, it’s a hint that they could be undercharging or overcharging.

So, what does all this mean? Well, for one, we’ve shown that multiplication works (and is ever so fun). The frontier also points to what innovation would mean — a higher quality bootcamp at a cheaper price has to beat this curve.

Let’s push beyond the frontier. For science! 🚀

A little faster, a little more expensive

One hundred students per year, in five cohorts of twenty, each netting $18,000, after costs.

This is pushing the envelope, but not pushing it to absurdity. (Just you wait!)

Twenty students is a lot for a single teacher, ten weeks is a very fast program, and $20,000 is a lot to pay in tuition. Still, I bet there are real programs with revenue per teacher numbers in this range.

Some programs that might look like this? Employers paying to upskill employees, short-but-expensive workshops, and programs that use a combination of curriculum, community, and technology to increase the number of students each teacher serves.

I won’t run through all the variants at this tier. Feel free to take a minute to sketch out what schools you think might fit, and let me know what you come up with!

Now we’re getting out of hand

Truly stretching our creative muscles.

What $40,000 program could you teach to 50 students in five weeks?

Framed that way, it seems far-fetched. Let’s run through some programs that come close.

AltMBA is only four weeks, but it’s also only $4,450. (It also advertises a 10:1 student-to-coach ratio, which throws a wrench in our imaginary-bootcamp-hypothesizing machine. Curses!)

What about massive open online courses (“MOOCS”), like Coursera and Udemy? They turn the student-to-teacher ratio dial wayyyy up, but the price wayyyy down. Program length... isn’t really a consideration for recorded lectures.

The top Udemy course I could find after a few minutes googling had 1,046,582 students enrolled. We can pretend that each enrollment paid the same $14.99 I saw listed, though Udemy changes the price frequently. That gets us close but not quite there - $15.7 million in per-teacher revenue.

Coursera offers most of its courses for free, but charges for “certificates”. Its most popular course, Andrew Ng’s Machine Learning, has enrolled 3.3 million students (🤯). It doesn’t list a price, since it only has a free enrollment option. To hit the $19 million target, it’d have to charge $5.75 per student, or more likely, get ten percent of those students to pay $57.50.
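Here’s the same back-of-the-envelope math in code, using the rough enrollment and price figures quoted above:

```python
# Udemy: ~1,046,582 enrollments at the $14.99 list price I saw.
print(round(1_046_582 * 14.99))           # ~15.7 million
# Coursera: price per student needed for 3.3 million enrollments to hit $19 million...
print(round(19_000_000 / 3_300_000, 2))   # ~5.76
# ...or the price if only ten percent of those students paid.
print(round(19_000_000 / 330_000, 2))     # ~57.58
```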

So, maybe big online courses could hit this bar, but only the top ones, and only if we make generous assumptions about how many students enroll and how much they pay.

The Billion Dollar Bootcamp

The moment you’ve been waiting for. It’s here.

Multiplication is a helluva drug.

Let’s try to imagine it.

* * *

200 students show up, each having written the big h*ckin check. They project calm confidence, but underneath their glances around the room is a quavering, aching fear. Will this really be worth it? Are they themselves ready for the program?

The teacher enters.

Hello, and welcome to the course. I’m sure you’re all wondering if what I have to say will justify taking out that second mortgage.

A pause, as the teacher’s eyes roam to meet the gaze of several students in turn. The room shudders invisibly under the weight.

It will.

The tension, impossibly, heightens further.

Let’s not waste any time. Today’s first topic?

Multiplication.

* * *

Revealing the rest of this riveting content would violate my NDA ( 😂), but rest assured that it’d have to be good stuff.

Um, so, seems pretty impossible, right? No one would pay $100,000 for a one-week course, certainly not if they’ll be just one of 200 students.
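Still, for completeness, here’s the arithmetic behind the scene above. The cohorts-per-year figure is my own rounding: a one-week program, run back-to-back for most of the year.

```python
# The billion dollar bootcamp: ~50 one-week cohorts, 200 students each, $100,000 tuition.
print(50 * 200 * (100_000 - 1_000 - 1_000))  # 980,000,000 (call it a billion, with rounding)
```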

Well, what about Y Combinator? (I did warn you that this game was dumb.)

The startup incubator actually pays founders to come to their “school,” but takes a piece of the company in exchange. If and when the company makes a big exit (like Cruise or Twitch), YC effectively charges a multi-million-dollar tuition.

To infinity...

Multiplication just keeps on going.

We can make it an infinite-dollar bootcamp three ways:

  1. Crank the tuition up toward infinity.

  2. Pack infinitely many students in front of the one teacher.

  3. Shrink the program toward zero length, so the cohorts per year climb toward infinity.

And beyond!

…well, maybe that’s enough multiplication for one sitting.

This game doesn’t have a prescriptive or normative moral. Mostly, I wanted an excuse to do arithmetic and explore a ton of simple bootcamp models. In that respect, we’re all winners.

If there’s a point to the game, it’s to push on our assumptions. If you can deliver a program where it makes sense for students to pay millions of dollars, or for millions of students to pay a few dollars each, you can make weird, model-breaking, curve-beating schools.

If you read this far, maybe you want to start your own bootcamp, or you think your bootcamp charges too much — or too little. If you want to sink your teeth into a meatier model, check out the bootcamp business model post. If you’re running a bootcamp, obviously do what makes sense for your circumstances — and let me know where this analysis misses the mark!

Whether or not you’re a bootcamp operator, reach out to me if you want to b.s. about this kind of thing. I’m always down for a conversation. Replying to the newsletter will reach me, or you can email hi [at] rob.co.bb, or find me on Twitter.

Thanks to early readers Keely Murphy, Sarah duRivage-Jacobs, Tom White, Nick deWilde, Matt Tower, Joshua Mitchell, Stew Fortier and Compound Writing, and my dad. Errors all mine!

The best online interactive learning experiences

And what makes them good

I help write a lot of coding curriculum. When I’m drawing up a learning plan or designing a lesson, I often find inspiration in some of the learning experiences I’ve seen.

Here’s my list of the best online interactive learning experiences. After the list, I try to pull out the patterns and underlying principles that make them great.

These learning experiences span a huge variety of topics. Some are focused on design, others are focused on coding. I recommend them to students when they’re interested in learning these topics, and I’ve talked about a lot of them with other folks who teach or make curriculum.

Games

  • Can’t Unsee - Game for building design intuition by choosing the better of two side-by-side designs.

  • CSS Diner - Learn CSS Selectors interactively, by using them to select food from plates.

  • CodePip - Lots of code learning games, including CSS Grid Garden and CSS Flexbox Froggy, which make difficult concepts easy with structured practice and visual feedback.

  • Method of Action - Five different interactive design games. My favorites are Type Method and Boolean Method, but maybe the Bezier game is more important if you’re actually a designer. I am not a designer, so I can’t tell!

  • Vim Adventures - Build muscle memory and intuition for how to get around in the Vim text editor by moving a character around in a game.

Tutorials

  • SQLBolt - Learn a significant portion of what might be the most important coding language in a few hours.

  • RegexOne - From the same creator as SQLBolt, a terrific guided explanation of Regular Expressions, with lots of opportunities for practice.

  • Git Immersion - Guided tutorial through the fundamentals of Git, on your own machine.

  • ExecuteProgram - Courses on Modern JavaScript and TypeScript, as well as Regular Expressions, SQL, and JavaScript Array Methods.

  • Introduction to A* - To me, this goes to show that games are still among the best domains for teaching or learning algorithms. A* search is really hard, but it feels almost obvious when you read it in this context and see the visual examples.

  • Web Design in 4 Minutes - A click-through explainer that uses the medium of the web page powerfully and satisfyingly. The follow-up JavaScript in 14 minutes is also recommended, though it doesn’t deliver the same deep and satisfying ‘win’ as you complete it.

  • Learn Layout - the interaction model is just looking and clicking, but it presents each idea in a bite-sized page, and it boils down a tricky set of connected concepts into a manageable learning path.

Playgrounds

These pair an editor and a visual output to demonstrate or explain something powerful. By and large, these are not structured learning experiences, but they allow for structured play in a way that supports learning.

  • Python Tutor - It isn’t much to look at, but visualizing code as it executes step by step is profound. The site also has free, anonymous live tutoring. The name is misleading - it supports Ruby, JavaScript, Java, and more (in addition to Python).

  • Become a Select Star Playground - Published alongside Julia Evans’ corresponding zine, and if you get the zine, you can follow the exercises, like this one on aggregations.

  • Regexr - My favorite of the many regex playgrounds. Has a built-in cheatsheet, examples, and excellent hover explanations for matches.

  • Loupe - Video and playground for exploring the JavaScript event loop.

  • AST Explorer - In-browser abstract syntax tree parser and visualizer. A great way of teaching or learning what an AST is, what a parser does, what a compiler or transpiler does, and what codemods are. Admittedly, LISPs have a way of making those ideas more natural, but since lots of folks start with JavaScript, this is really handy in practice.

  • ExplainShell - Visual and text explanation of shell commands. Paste a weird command into the tool, and it tells you how it works.

What makes these good?

Each of these tools takes a hard or intimidating concept and makes it learnable, fun, and empowering. If you have the right context and want to learn those topics, starting the learning experience is almost enough to make the learning inevitable.

That’s more-or-less a definition of a great learning experience. Anyone who’s making an online learning experience should aspire to that goal. Still, setting a goal of ‘learnable, fun, and empowering’ doesn’t automatically make something good. What can we mimic in these learning experiences to make others as great?

The people who made these put a lot of thought and intention into what makes a good learning experience. Some of them have written about the design principles and what they think about coding, teaching, and learning.

So, one trick to making great interactive experiences is to become an expert.

“You want to know how to paint a perfect painting? It's easy. Make yourself perfect and then just paint naturally.” - Robert Pirsig

“Think deeply and become wise”, while highly recommended, doesn’t actually convey any insights. It’s not a replicable path to follow. Reading some of those posts, however, does reveal concrete design decisions in common between these learning experiences.

The creators also iterate a lot, and (at least as far as I can tell) have built lots of learning materials. Building great things takes lots of creative output and feedback, as well as insight and aesthetic sense on top of the baseline understanding of the material and how to explain it.

Okay, what are the patterns?

  • Teach just one thing

  • Break the concept into small pieces

  • Sequence for speed without walls

  • Have an interaction for each new idea

  • Everything is designed around the core interaction

Why do these patterns work? In my book, it’s all about cognitive load.

Cognitive Load is how difficult it is to learn something, in a given context. If there’s lots of new stuff presented at once (or if there’s external stressors, or other sources of load), it’s harder to learn. Breaking a big concept down into bite-sized pieces reduces the load at any one moment. These games feel easy, despite the fact that the underlying concepts are hard.

Focusing on teaching just one thing means no distractions. Teaching one thing is hard enough. For the learner, it means less cognitive load, and so higher success rates on each interaction.

The next trick to making difficult things easy is breaking the concept into small pieces. When going through any of these experiences, it feels like you’re building up to the big picture from carefully chosen morsels. Despite learning 50 or 100 new facts, at no point are you overloaded.

Each of these experiences is also sequenced for speed without walls. They try to move through the concepts as fast as possible. Having extra practice seems like it would be nice, but getting to the end is more important. These learning experiences are lean. They don’t have fluff.

Walls, though, are worse than fluff. Getting stuck means rereading, leaving the experience to search for an answer, or quitting altogether. Each of these tools is careful to sequence the concepts so that they don’t depend on an idea before they’ve introduced it. They never present you with a challenge without the knowledge you need to solve it.

In order to make sure that the learner has that knowledge, many of these experiences have an interaction for each new concept. Many also block progress until the reader demonstrates comprehension. When they introduce more complex challenges, they can do so knowing that the learner has solved other challenges already.

Having any interaction at all makes these tools a step above most learning experiences, but these tools are their interactions. Everything is designed around the core interaction. In contrast to some learning tools, they aren’t blindly ‘gamifying’ something adjacent. They don’t layer on a points mechanism ‘just because’. The core interaction is about the content being learned, and the learner does the thing that they’re learning. It’s one thing to be active in your learning, but it’s another to directly manipulate the very symbols you’re learning.

We don’t learn sports by analogy. These learning tools make it possible to learn concepts like SQL, Regular Expressions, and CSS by direct manipulation, the way we learn to dribble.

You, careful reader, might have noticed that the ‘Playgrounds’ on the list don’t follow the rest of these patterns. They don’t break concepts down into small pieces, they don’t have a sequence, and since they don’t introduce concepts, they don’t have an interaction for each new idea.

All they are is the core interaction. Do something, see the result. While they aren’t as surefire as the step-by-step guided activities, they tend to be more powerful tools for exploring. They reveal something that would otherwise be hidden, and the core interaction is around a particular kind of insight.

Some other commonalities stood out to me, though I’m not sure whether or not they’re critical to the learning experience.

The lessons are online and accurate, which seem like obvious prerequisites. They’re also notable in that they don’t have ads, and, for the most part, aren’t trying to sell you something. When they are, they’re selling more of what they are delivering, not selling a different product.

These tools don’t do a lot to explain why you should learn what they offer. They expect that it’s already evident to the learner. No page space goes to justifying their own existence. That means there are some missing steps for the learner to connect those concepts to some other context, but it also means that these tools can be dropped into any context that provides those connections.

Honorable Mentions

I’ve seen a lot of online learning content, but not everything. There’s a lot I didn’t pick! The following didn’t make the top list for one reason or another, but I still think they’re great or contain some element of greatness.

The old CodeSchool TryRuby (webarchive version linked) was amazing, but since Pluralsight bought CodeSchool, they’ve shut down the interactive version.

Learn Git Branching is a visual tutorial for practicing with Git branches. It’s a topic area that tends to be particularly hard for new Git users. The tutorial has some sharp corners where the instructions aren’t clear, but it’s still pretty great.

Why’s Poignant Guide to Ruby, and Learn You a Haskell For Great Good are book-length and aren’t interactive, but they commit to storytelling and a whimsical aesthetic that I love.

Quantum Country and Explorable Explanations are pushing on how we can make better use of tools for learning, explaining, and (hopefully) thinking.

Typography in Ten Minutes doesn’t have a ton of interactions (just links). The content is great, and there’s a proof-in-the-pudding factor because the typography is stunning. The guide boils down hard ideas into clear, easy-to-follow recommendations. It has links to further justification and explanations (it’s a teaser for a whole book on typography).

It reminds me of Latacora’s Cryptographic Right Answers, which provides no interactivity, but makes it really easy to find the right thing to do. For some important hard decisions that ‘normal people’ will have to make infrequently, it’s more useful to have the rule than the derivation.

Codecademy, Khan Academy, Code.org’s Project Studio, Flatiron’s Precourse, General Assembly’s Dash, Grasshopper, and lots of others offer in-browser interactive courses. They’re okay. I haven’t seen any that spark joy the way the above recommendations do.

For code playgrounds and communities, Repl.it and Glitch are my favorites, but there’s a ton of other great ones like CodeSandbox, CodePen, JSFiddle, and JSBin. Encouraging play is good, and encouraging sharing and remixing is amazing.

There are also excellent language-specific playgrounds (for languages other than JavaScript). The two that come to mind are the Rust Playground and the Swift Playground, but it’s pretty common for a language to have one.

For younger kids, there’s WoofJS. It’s better than Processing as a next step after Scratch or Blockly. If you’re curious why, it’s worth reading Making JavaScript Learnable from WoofJS creator Steve Krouse, and for more background, Learnable Programming by Bret Victor. I don’t have a current LOGO implementation that I like, but Turtle Academy looks good at first glance.

When I polled other code teachers, I got recommendations for SQLZoo, LinuxZoo, and Mastery Games, which I haven’t used myself, but am excited to check out.

For coding exercises, Exercism and Project Euler were really good for me. There are a lot of great places to go for exercises, though, so I find that different people tend to enjoy different sites. CodeWars doesn’t have a great name, but the community-driven exercises are actually pretty great. CodeSignal, LeetCode, InterviewCake and HackerRank all have coding interview questions, but are centered around selling you interview training, or selling you as the product to companies hiring engineers. The problems are generally well curated.

And, if you just want a big list of tech education resources to scan, here’s one on github.

Wrapping up

Thanks for reading! I’d love to know what interactive learning experiences you love and what you think makes them great, as well as any feedback you have for me.

I’m also having lots of great conversations over the phone or video chat with folks who are interested in or working in higher ed and ed tech. If you want to chat, let’s find a time!

The Emotional Journey of Learning to Code

Today, we’re talking about our feelings. I’ll tell a story, map out the student emotional journey, and lay out some of the implications for folks teaching or building schools.

The emotional journey of learning to code is more important than any of the content knowledge.

First, a story about debugging.

A debugging story

I walk past the group of Week 2 students, working diligently on the day’s labs - today it’s basic SQL exercises. I notice that a student is visibly frustrated (staring hard at their screen, frowning, clicking and typing with force, sighing, breathing with sharp intakes, slouching). I ask what they’re working on, and they describe the bug they’ve run into.

“It’s not working. Nothing I try seems to help.”

They’ve tried the normal debugging go-tos: searching for their problem online, asking a neighbor, changing different things to see what works. They’re still stuck.

I squat next to them, and ask them to walk through their code, and tell me what it’s supposed to do. They show me the error, and some of the things that they’ve tried in order to fix it. Recounting all the different rabbit holes they’ve been down, they show more agitation, frustration, and anger.

We step away from the code, have a sip of water. After a minute or two, we start again, and read through the code they’ve written, line by line.

Upon rereading the code and explaining what each piece does, they notice a typo, fix it, and the code works.

(This is a composite, not a ‘true’ story, but it’s representative.)

My role wasn’t to notice the typo, or offer the solution, or to point the student towards a particular resource so they could understand what was happening. It wasn’t to ‘teach’.

Still, this has happened enough times that I doubt my presence is incidental to the typo getting noticed and fixed. So, what is going on here?

In order to notice the typo, the student had to be able to read through their code patiently. Solving this bug, for this student, meant changing how they felt. With a calming presence next to them, the student could manage their stress. With their stress under control, they could actually read their code, and solve their problem.

Debugging is a state of mind

Coaches and teachers see hundreds of versions of this story, with minor variations in the details. Sometimes, there’s a new concept that needs to be explained. Sometimes, there’s underlying misunderstanding that the bug reveals. Sometimes there’s bugs that are confusing, even to the teachers.

There are lots of ways that students get stuck and unstuck as they learn to code. In lots of ways, improvement in software skill looks like growing independence in getting unstuck.

Almost all bugs get solved in a patient, curious mindset - not a frantic one. Students’ progress is emotional maturation. After seeing and successfully resolving lots of bugs, programmers are not scared of new ones. Instead of panicking, they patiently apply the techniques they know.

There is a concept and skill story here too. Students start with a weak or missing mental model for how their code executes, what terms to use when searching for information, how to recognize what’s relevant, how to rule out hypotheses about the source of an error, and how to use their tools to better understand what’s going on. Coding has important facts that need some kind of transmission.

There’s also an education theory story here. Cognitive Load, the Zone of Proximal Development, Scaffolding. I’m not breaking any new ground, noticing that a teacher acts as a guide - emotionally as well as conceptually.

Still, especially in the bootcamp world, where many of the teachers come from a non-education background, and the students are adults, it’s easy to underweight the importance of emotional growth and development. This is even more true for folks further from students.

Attention, learning, memory, and emotion

We know that memory formation is negatively impacted by stress, and that emotional connection is key to successful retention. Stress can be a blocker, effectively reducing the number of working memory ‘slots’ available to work with new information. It can also be demotivating - you don’t want to spend time learning when you are stressed.

It’s not surprising that motivation and attention are key to learning. If you are too bored to pay attention you won’t learn. Your brain filters new facts by relevance, and won’t retain concepts that aren’t significant enough. Both in the micro and the macro, emotion is key to learning.

Particularly when dealing with difficult and overwhelming new topics, it’s easy to get caught in loops of distraction. Thrashing between topics, tutorials, and learning resources can be a vicious cycle, where students switch tasks before they can make progress. This feeds a feeling of discouragement, a lack of results, and, in turn, more thrashing.

The emotional growth answer to thrashing is to learn to notice when you get distracted from what you set out to accomplish. This frequently shows up as switching tabs in the middle of reading, or having several incomplete tutorials going at the same time. Once you learn to notice that as a learner, you can build a system for coping.

Escaping thrashing means having a system for noting interest in something, without letting it distract you from your current task. Completing tasks is key. That way, you clear the decks for taking on a new task - and you get the ‘win’ from completing that tutorial. (This set of ideas is similar to me to the notion of limiting work in progress, but with cognitive load and context switching costs instead of communication overhead as the bottlenecked resource.)

It’s important to note that interest, attention, and diligence aren’t fixed attributes that a person has. Different people grow and change, are differently attentive to particular topics, and learn different habits and coping skills.

Should we valorize frustration?

Since getting stuck is such a common experience, lots of people online will tell you that you need to spend time stuck to learn to be a ‘real programmer’. There are lots of forum posts suggesting that banging your head against the wall is a necessary step to programming enlightenment.

This is not true. ‘Stuck’ isn’t a place where you are learning new things. While going down rabbit holes may provide exposure to lots of concepts, you won’t retain much from it. Those concepts are removed from what you were trying to accomplish, and when debugging, you aren’t paying the right kind of attention to learn a new concept.

While ‘bang your head against the wall’ might be misguided advice, it’s a lie that may help prepare you for reality. You will get stuck.

Still, I think we can do more to help students avoid some of the darker, deeper feelings that often come up in the journey of learning to code. Spending hours (days, weeks!) on a bug, and making what feels like no progress? A coach or teacher can help students get unstuck and back to making real progress learning.

I’ve gotten (and given) the advice before, as a teacher, to make more mistakes in live-coding. Great coding teachers emphasize their practices for reading error messages and debugging, and will leave in deliberate mistakes in order to support this kind of learning.

The past newsletter on learning in public talks about how sharing stories about the learning journey can help learners manage the intimidation of the learning roadmap. At Flatiron, we had Feelings Friday, time explicitly dedicated to paying attention and validating how we felt.

Making mistakes and telling stories can help make students feel normal. That means sharing the full range of emotional experiences of coding, including the frustration, sadness, and worry as well as the excitement, curiosity, and joy. Knowing that others feel the same way can help students cope with their own feelings when they encounter confusing or frustrating situations.

Not all fear, anxiety, and doubt is impostor syndrome

Many schools tend to focus on combating impostor syndrome as their way of starting to talk more clearly and explicitly about emotions in tech. But, particularly for beginners, impostor syndrome doesn’t necessarily speak to the experience they have. Feeling overwhelmed when faced with a wall of jargon isn’t impostor syndrome per se, but it is a common and natural experience when getting started with something new.

The discussion around impostor syndrome does give us words and a space to talk about how we’re feeling - that’s way better than nothing! Impostor Syndrome proper will also happen to students, so it’s awesome that schools help students with tools and approaches for managing it.

Talking and writing about the stages of the learn-to-code journey in terms of emotions may help expand students’ and schools’ vocabulary.

Stages in the journey

In the bootcamp business model, I spelled out the stages of the student journey from the perspective of the bootcamp and its balance sheet. Today, let’s look at the journey through the lens of students’ emotions.

There are a lot of emotions and a lot of stages, so this is inevitably a shallow view that doesn’t capture everything. I also focus on coding bootcamps - that’s not everyone’s experience of learning to code.

Before the bootcamp

In the business model, this is ‘marketing and admissions’, and the goal is to get students enrolled in the program. What does it feel like for students?

  • Finding out about bootcamps (curiosity, excitement, confusion)

  • Trying out tutorials online (confusion, thrashing, discouragement, pride)

  • Applying to bootcamps (nervousness, fear, confidence, confusion)

  • Getting Rejected (sadness, dejection, pain)

  • Getting Accepted (validation, eagerness, anxiety)

  • Figuring out how to pay (stress, worry, confusion)

  • Prework (stress, confusion, excitement, anxiety, self-doubt)

Students have a mix of emotions (we contain multitudes). There’s confusion and doubt, stress, anxiety, and eagerness. “Can I do this? I hope so.” Applying to anything comes with a bundle of doubt, curiosity, excitement, and fear. Maybe even more so when the promises of bootcamp marketing are so profound and life-changing.

During the bootcamp

  • Orientation (exhilaration, pressure, overwhelm, newness)

  • Learning to code (confusion, frustration, stress, joy)

  • Assessments and Code Challenges (fear, anxiety, doubt, thrill)

  • Projects (pressure, fear, doubt, frustration, creativity, disappointment, pride)

There are so many things happening in a bootcamp that it’s easy to be overwhelmed. Classmates can be a source of comfort and commiseration, but meeting all these new people can also be intimidating and overwhelming. On top of that, sometimes students don’t get along!

Then, there’s the content! Hopefully, it’s exciting and motivating material, but it’s sometimes boring, frustrating, or confusing. Courses move really fast, which is intimidating and overwhelming. It’s thrilling too - there are so many moments of new understanding and joy.

Different bootcamps have different kinds of assessments and projects, but no matter the structure, they still feel like a judgement. Students, understandably, feel stress and anxiety!

Job Search

After graduation, students have to actually find a job! There’s a period of doubt and loneliness, especially since they might no longer be with their classmates.

There’s a lot more to say about the job search, but my experience with it is fuzzier - I’m not a career coach, so I’ve seen way less of it. Keep an eye out though, I may have more from actual career coaches about mapping this phase!

What do we do with all these feelings?

Okay, so we should all internalize this map of how students might feel across the different stages of the learning journey (and probably build user personas and tell more stories about students). But, ultimately, we have to put this knowledge and empathy into practice somehow.

Social Support

Teachers and staff should help build healthy community among students, set norms and expectations, and help students make friends. Leading by example, community activities, icebreakers, shared myths, explicit rules and expectation setting, and informal conversations all have a role to play.

There’s also tons of room to help students recognize that they need the support of their families and friends in order to learn. Students are embedded in a social context already.

Drills and Assessments can be confidence building

Drills should help students learn. Through practice, they can be assured that they’ll remember the new skill or knowledge. Of course, there are lots of drills that are boring, irrelevant, or bad. These lead to frustration, annoyance, or self-doubt, depending on the nature of the drill’s badness. Good drills feel relevant, real, and ‘in range’ for the student’s level of knowledge.

Assessments should give students confidence. Students and teachers should trust that assessments accurately measure the skill they wanted to learn. If that’s true, then the assessment gives them crucial feedback about their own knowledge, and they can move forward with confidence.

There’s an evil twin version of this, using “It’s going to be on the exam” as a hack to convince students to pay attention. Don’t do it.

Helping students get unstuck

Students shouldn’t spend too much time stuck. The goal should be to build independence, but that doesn’t mean never getting help. Setting norms for how long students should struggle before asking for help keeps everyone making consistent progress.

Wins are motivating, and motivation helps students stay invested in learning.

Project-based learning

Projects can be tremendously motivating and energizing. They can also be frustrating pits! Appropriately scaffolded and scoped projects can drive home the point of learning some content, and give concrete proof of learning and capability. It’s really hard to deny that you’ve learned a skill when you have made an artifact that you can point to.

Still, it’s also common for students to feel disappointed by their projects, or to feel even more anger and dejection when they get stuck, because they care more about the outcome. Teachers can help students make good design and architectural decisions at the outset and plan projects that they will be able to complete to their satisfaction.

It’s also critical to recognize that you can’t only do project based learning. To be effective, there has to be other pedagogy (especially, guided instruction and deliberate practice)! For more on this idea, see the section on the trap of intuitive best practice from Sean Dagony-Clark’s excellent review of Teachers vs. Tech.

Explicit emotional coaching

Talking about your feelings is good. It’s key to creating a space where learning can happen, and it’s often key to getting better.

If you haven’t seen the Feelings wheel, it can give you more language for talking about feelings. There’s also lots of reading and training on active listening, handling feelings, and being emotionally supportive.

Referring out to others

There are situations that schools aren’t equipped to handle. Teachers and administrators should talk about what those are, and have a plan for dealing with them.

Mental health is really important, and there’s a lot an education provider can do to design experiences that are empowering.

Links and updates

Maybe I was wrong about Holberton

In the past, I’ve been dismissive of the regulatory action against them, on the grounds that historically, most of the regulator’s actions haven’t amounted to much. This week, I watched a pretty compelling segment from the San Francisco local CBS station that paints the school pretty negatively. It features in-depth interviews with several students who are unhappy with the program, for what seem like legitimate reasons. Other programs are much cheaper (Holberton’s ISA cap is $85,000), and seem to have better outcomes, going by the surface numbers.

They also recently opened a school in Tulsa.

In case you weren’t aware, sexism is very real

Ali Spittel posted a bunch of messages and comments she has gotten on her work online. Worth browsing for the reminder, and the reality check, if you are in a place where you can handle the toxicity.

I don’t have much to say about this, except that this behavior is awful and shouldn’t be tolerated. The environment shapes how people learn, who teaches, and how. This is the environment, and it’s on us to change it.

Badass users

Joel Hooks of Egghead.io writes about a book that frames how he thinks about teaching, and what they’re building at Egghead. The notes are somewhat raw, but I really like how it puts the user at the center, and gives you as a teacher or content developer a mission - make the user a badass.

The post is here, and the book he’s reviewing is Badass: Making Users Awesome by Kathy Sierra [Note: I left Joel’s referral code on the book link].

Tools and communities for learning to code

General badass Jenn Schiffer of Glitch writes about code learning and how that experience connects to why she’s building Glitch. Seeing live examples of tools in use, “in their native environment” helps people understand why bother learning those tools, and what they can do. (Plus lots of other stuff, go read the post and check out Glitch if you haven’t!)

Speed of Learning

Learning things quickly is appealing to the learner — as well as to the bootcamp business model.

A half hour to learn Rust from Amos is an interesting attempt at pushing the limits of how much language syntax you can cover, and how quickly.

From the conclusion,

And with that, we hit the 30-minute estimated reading time mark, and you should be able to read most of the Rust code you find online.

Writing Rust is a very different experience from reading Rust. On one hand, you're not reading the solution to a problem, you're actually solving it. On the other hand, the Rust compiler helps out a lot.

I doubt that someone who reads this will retain even 50% of it, and as the author acknowledges, they’ll still be at the starting gate when it comes to writing Rust. I’m also skeptical of it as a standalone resource - it’s hard to stay engaged while reading unfamiliar material when there are no actions to take along the way.

Still - 30 minutes? That’s not that much time at all! It’s short enough that many of the earlier concepts are still close by in short-term memory, though the chance of cognitive overload and encoding errors is high. It’s also short enough that it might be well worth the read (if you want to read about Rust).

The text is mostly code snippets, with some prose providing guidance. I wonder if making it interactive, either with drills or exercises, or something like the Spaced Repetition System / mnemonic medium of Quantum Computing for the Very Curious could turn it into a tool where learners actually learned it all.

See also: learnxinyminutes.com, which has this style of document for lots of languages. It comes with all the caveats above, plus more about thrashing, but it’s close to the maximum content-knowledge density, if you’re down with Transmissionism-style learning.


Okay, that’s all for this week!

Let me know in the comments or with an email response if you have thoughts, links, or suggestions.

Thanks for reading!

Lambda School Twitter fight, ISAs, Incentive Alignment, and Outcomes Reporting

I started writing this section for yesterday’s newsletter, but as news kept coming out and I kept adding more thoughts, it got too long, so I pulled it into a post of its own.

There’s been a lot of Twitter back-and-forth, and that medium facilitates picking a side and aiming for dunks - playing to the folks who already agree with you. This post is my attempt to understand what’s going on, without trying to judge a winner or rebut anyone. I explicitly focus on areas of agreement between the two ‘sides’ that I map out.

I’ll summarize the recent Lambda School news and the various tweets, subtweets, and threads, then attempt to go full galaxy brain and try to understand why the fight is happening.

Note that I might be wrong, that there’s a lot at stake here for students, that neutrality isn’t the ‘correct’ mode or opinion, and that both critics and proponents have a ton of valid things to say. Acknowledging my own biases - I’ve been involved in bootcamps (but not Lambda in particular), and have a general pro-bootcamp sentiment.

Context and Terms

In case you’re just catching up on all this, Lambda School is a popular learn-to-code bootcamp that markets itself to career changing adults. It’s received a lot of favorable media coverage, raised close to $50 million in funding, and attracted lots of online students.

Part of its rise is due to the popularization of a new kind of student financing agreement called an Income Share Agreement, or ISA, where students pay for their tuition as a percentage of their income after they get a job after the program. Particularly in light of mounting student loan debt in the United States, ISAs have been touted as a more fair way for students to pay for their education. Lambda School’s founder Austen Allred has touted ISAs both for their student protections and for the incentive alignment that ISAs ought to create between the student’s interests (nominally, getting a high-paying job) and those of the school (making more money).

Lambda in the news

There were three big Lambda school articles, two in the Verge and one in NY Mag, and a few scattered reaction pieces.

The Verge kicked off with an article The High Cost of a Free Coding Bootcamp, which reported on students who were frustrated by different aspects of Lambda’s program, with particular focus on a UX cohort that had gone poorly.

The next day, they followed up with As Lambda students speak out, the school’s debt-swapping partnership disappears from the internet, detailing how Lambda had raised money from Edly backed by its ISAs - a financing scheme that the Verge claimed diluted the promise of the school’s advertised incentive alignment.

NY Mag’s Intelligencer followed with Lambda School’s Misleading Promises, which found an internal communication (an investment memo’s section on risks) with a lower figure for Lambda’s job placement than the 86% advertised.

Lambda’s CEO Austen Allred published details about the new ISA financing the same day as the NY Mag piece (a deal strikingly similar to Kenzie Academy’s debt raise last November).

The Internet reaction

There were scattered reactions on Twitter and across the web. Several big tech personalities got involved.

I am not going to link to the Tweets, but there are plenty of high-profile people on either side. I trust that you can find satisfying dunks and hot takes. There’s plenty of smart and compassionate commentary on both sides, along with the snark.

Two interesting pieces to call out.

This Twitter thread from Louis Gelinas (current Lambda student):

The thread is long, but worth reading. It’s notable because it recognizes complexity, and speaks to both the experience on the ground as well as the facts - and how they can be presented. It has a generally pro-Lambda position, but got positive responses from folks who have taken publicly skeptical or critical positions.

The other take that I think is worth sharing is this week’s newsletter from Ranjan Roy in The Margins (another Substack newsletter). Ranjan comes from the world of finance, and goes into depth about his change of heart from thinking of financing ISAs as unquestionably good and needed, to having more doubts.

Possibly the biggest lesson I took away was, the more distant a risk becomes, the more distorted all the related incentives become.

My thinking has followed a similar path. My initial reaction to the Verge piece on the financialization of ISAs was that it misrepresented the situation and ignored the reality of having to actually pay teacher salaries to run the bootcamp, and make the cash-in-the-bank work while students trickle through to the tuition-repayment phase of their journey.

After reading the Margins piece, I start to see why selling pieces of ISAs might actually change the on-the-ground incentives for Lambda. It means they can be (and are incentivized to be) more aggressive about recruiting students, since they can get some of the money up front. Otherwise, they’d have to focus on getting the current students graduated and paying before they could grow more.

Ultimately, that may be a better set of incentives. I think bootcamps should be bigger, and marketing and recruiting is an important way that education pathways other than 4-year degrees will become popular and accepted.

Still, the set of incentives created by raising debt based on ISAs makes a material difference to what their strategy will be, and it’s different from what Lambda’s advertised.

What’s all the fighting about?

I keep returning to the question - why is Lambda School in particular getting all this scrutiny?

They aren’t the biggest bootcamp — not by a long shot — I don’t think they crack the top five! They also aren’t the only new online bootcamp to raise a bunch of money - Kenzie Academy raised $100 million in debt in November. They aren’t the only bootcamp to feature ISAs - in fact, nearly all of the popular bootcamps offer an ISA-like financing option.

I think the answer here takes two steps. First, Lambda gets a lot of media attention because its founder is excellent at using Twitter for promotion. That builds name recognition among tech media folks, who then mention Lambda in coverage of student debt and the rise of ISAs, and Lambda manages to get several favorable profiles in tech media. That draws more eyes to Lambda, creating an ‘everyone-is-talking-about-it’ feedback loop that draws in still more eyes.

Second, of a piece with broad tech backlash, more critically-inclined people in the media are on the lookout for mistakes, failures, and complaints about Lambda. When criticism does emerge, it gets picked up by folks who are predisposed against the model, for their idiosyncratic reasons. Because Twitter facilitates fighting, dunking on people, and subtweets, we get a lot of that.

The reactions seemed to lump broadly around two poles. One is the pro-Lambda cluster, and seems to broadly track with venture capital, Y Combinator, pro-bootcamp, and pro-ISA accounts — a “building things is hard and takes courage and financing” camp. The other pole is characterized by skepticism of big tech (venture-capital-backed tech in particular) and is more critical of bootcamps, ISAs, and Lambda.

As I read the articles, tweets, and subtweets, I feel my body tense and my heart beat faster. When I see the back and forth on Twitter, the tone I hear in my head is angry, frustrated, or smug.

I’m going to take a piece from Anatol Rapoport’s rules for criticizing with kindness, as popularized by Daniel Dennett’s Intuition Pumps and Other Tools for Thinking - particularly, Rule 2:

List any points of agreement (especially if they are not matters of general or widespread agreement).

So, what beliefs are shared between all sides?

  • ISAs (with appropriate regulation) can protect students from predators and align incentives between students and for-profit schools

  • Bootcamps help lots of people break into the tech industry, taken broadly, and as many jobs in the industry are currently high paying, this can have a transformational impact on those students’ financial fortunes

  • Pathways into, and access to, spaces and industries that are traditionally hard to enter are a positive good

  • Some people don't succeed in bootcamps, whether they don’t get in, don’t graduate, or don’t find a job afterwards

  • It is a personal hardship when someone starts a bootcamp and ends up dropping out, or graduates from a bootcamp and doesn’t find a job

  • Prospective students should have accurate information about their chances of reaching their goals when deciding on an educational product

  • Usually, for bootcamps, this means the chance of graduating and finding a job

  • Good regulation can help protect students from scams and "certificate mills"

  • There are, and have been throughout history, scams and certificate mills that prey on students, particularly students from vulnerable populations, with military veterans as a prominent example

  • Some Lambda students have found success in the program, and their outcomes have been transformational

  • Some Lambda students have found the program deeply lacking, and have asked to be released from their enrollment agreements, and some have actually been released

Finding the disagreement

As far as I can tell, there’s room for nuance and complexity in this narrative - and there’s lots of places where people agree.

But, with all the fighting, there must be some critical claims at the heart, some actual points of contention.

Here’s my summary of the critical claims about Lambda School, and my understanding of how deep the contention runs.

Lambda hasn’t gotten the necessary approval from the California Regulator, which issued a scary-sounding action against them

My dive into the history of California bootcamp regulation found that this probably isn’t something that’s super concerning. Lambda’s public statements are pro-regulation, they’ve said that they’re complying with the regulator, and from the looks of the other bootcamps that have had similar action taken against them, it usually disappears after the bootcamps pay a fine.

This fits an existing media narrative about tech startups being willing to break the rules to get what they want, but it looks like this might just be par for the course for bootcamps in California. I don’t actually think that students suffer from this, nor do I expect the regulator to meaningfully impact the student experience.

Lambda students’ learning experience is poor

The core claims here seem to be that the curriculum is underdeveloped and that the teachers aren’t experienced - particularly the coaches, who Lambda hires from the student body, and who have the most interaction with students.

Much of Lambda’s curriculum is open source, so you can see it. Curriculum is only one piece of a learning experience, but from a cursory inspection, it doesn’t seem obviously bad. As a curriculum designer myself, I am sure there are lots of places it could improve - Lambda is probably aware of lots of those places, and is probably working on improving them.

Lots of schools, including most bootcamps, hire TAs and coaches from their student bodies. Having worked with a ton of amazing coaches drawn from the population of bootcamp grads, I think this is a good model. It promotes social learning and gives students a concrete picture of where they will be soon. Coaches are close to the student experience, and so can effectively bridge the knowledge and understanding gap that opens up between experts and beginners. Maybe Lambda should pay its coaches more. From student reports about expertise and coaching style, it sounds like there are some coaches who should get better training (or who shouldn’t be coaches), but that doesn’t invalidate the coaching model.

Financing ISAs meaningfully changes the school’s incentives

Financing ISAs means the school gets some of the money in advance, in exchange for some of the ISA payments in the future. In the bootcamp business model I shared yesterday, I highlighted why any new bootcamp needs something to fill this gap.

Schools need cash to pay for running the school long before graduates start paying back their ISAs. Whether this comes from a loan, from raising seed money from VCs, having students pay tuition up front, or financing ISAs, every school needs to solve this bootstrapping problem.

As noted above, financing ISAs does change the strategy that the school will follow. It can focus on growing faster, and it can last longer before a failure to place students would shut the school down.

Still, the school will make more money when students get paid more, and they will eventually fail as a business if the students don’t do well enough. I can’t say whether this invalidates the school’s claim that ‘we only make money when you do’. It seems to me that they do have cash in the bank before students get placed in their jobs, but also, the financial outcome of the business is still inextricably tied to that of the students.

Marketed placement rates don't match actual placement rates

Similar to the claim about financing ISAs, there is a false-advertising core to this claim. Prospective students should have a clear picture of their chances of graduating, and upon graduation, of getting a job - especially if that’s the core of the marketing message.

The job placement rate is at the heart of lots of bootcamp marketing - Flatiron School, where I worked, has the Annual Outcomes Report as a centerpiece of its marketing message.

Unfortunately for everyone, things are hard to count. Should a bootcamp be on the hook, outcomes-reporting-wise, for students who decide after the first week that the program isn’t for them? That might be a failure of the program, or it might just not have been interesting for that student. Lots of students who enroll do not finish the program. How should those students be counted?

Bootcamps also tend to exclude from their outcomes reporting students who start with structural barriers to employment where the bootcamp is located. In the US, that often means that folks who would need a work visa to get a job don’t get included as ‘eligible’ or ‘job-seeking’ students in bootcamp outcomes reports. Similarly, a lot of students stop responding to emails after they graduate - it’s hard to tell if they got a job, are still looking, or have given up. In the detailed reports that bootcamps publish, these details show up, but they get left out of the top-line number.

My post yesterday termed the product of the graduation and job placement rate the ‘prospective success rate’. In effect, that’s the expectation that a student should have, on average, of completing the program and getting a job - an all-in number for your odds of success. This seems like it would be the fairer number for students to have in mind when they’re thinking about whether they should attend some program.
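For concreteness, here’s a minimal sketch of that arithmetic in Python. The rates are illustrative (they happen to match the figures Lambda reported, which come up later in this post), and the sketch assumes the placement rate is measured among graduates, so the two rates can simply be multiplied:

```python
# A minimal sketch of the 'prospective success rate': the chance that an
# enrolling student both graduates and then finds a job.
# Assumption: the placement rate is measured among graduates, so the two
# rates can be multiplied directly. The numbers are illustrative.
graduation_rate = 0.72  # share of enrolled students who graduate
placement_rate = 0.78   # share of graduates who find a job

prospective_success_rate = graduation_rate * placement_rate
print(f"Prospective success rate: {prospective_success_rate:.0%}")  # -> 56%
```

Framed that way, a placement rate that sounds solid on its own can translate into odds not all that far from a coin flip for someone deciding whether to enroll.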

Lambda’s leadership’s response to criticism

Critics want Lambda’s leaders to hear the criticism, believe it, and take action to correct the deficiencies in their program. Often, students report feeling that their voices were unheard or brushed off.

These criticisms land for me. I think listening and responding to criticism is good, and that Lambda can probably do better at this.

Who are we comparing Lambda to?

If we’re comparing Lambda to other bootcamps or to traditional higher ed, how does it stack up? How does it compare to self-study, or taking free online courses? Or, are we comparing each decision the school has made against an idealized version of the school, where the decision was handled ‘better’?

Choosing the appropriate comparison group is challenging. It frames the rest of the discussion, but it’s also hard to explicitly mention that frame. When someone chooses a comparison group, someone else might choose another.

Compared to colleges

Lambda seems to be pretty different. It’s all online, it’s shorter, students pay later and are protected from the student loan debt of traditional colleges. It’s not accredited, it’s not as widely known or understood, and it doesn’t behave like college.

Colleges also generally don’t report their job placement numbers. Their graduation numbers vary, but don’t seem to stack up particularly well against Lambda. The six-year graduation rate for students starting a 4-year bachelor’s degree is 60%. Some schools - particularly elite colleges and universities - do much better. Other schools - particularly for-profit colleges - do much worse. The US national average six-year graduation rate at for-profit schools is 21%.

Compared to bootcamps

Lambda seems broadly similar to other bootcamps. It has a longer program than lots of bootcamps, but not the longest. It’s all-online, but there are other all-online bootcamps. Its graduation rate and job placement rate are neither the best nor the worst among bootcamps.

It’s got a lot of media attention, and it’s what people on Twitter are paying attention to.

Outcomes and Transparency

Everyone cares about the actual student outcomes numbers.

Bootcamps especially, because they have used their job placement rates as a cornerstone of their marketing. Perhaps rightly, they’ve had lots of criticism about the way they’ve reported (or not reported) those numbers.

For Lambda, there’s a 15-month lag between when a student starts the program and when they are 6 months past graduation, the cutoff for inclusion as a successfully placed graduate (roughly the program length plus the six-month job-search window). Since compiling an outcomes report that you can trust takes time (some bootcamps use an external auditor, a necessarily slow and careful process), bootcamps usually report on outcomes once or twice a year. That means they are usually reporting on students who started the program two years before!

This structure leads to a lot of calls for more transparency from Lambda School, with simultaneous claims from defenders that Lambda is better than most schools in terms of outcomes reporting and transparency.

On Monday, Austen shared the current numbers breakdown in a twitter thread, in advance of a more detailed (and audited) version:

So,

  • Reporting is hard, many caveats from above

  • 72% Graduation Rate

  • 78% Placement Rate

It doesn’t seem like advertising an 85% placement rate was deceptive, based on their previous outcomes.

From the list of points of agreement above,

Prospective students should have accurate information about their chances of reaching their goals when deciding on an educational product

I’m sure that when Lambda has audited their new outcomes report, they’ll update their marketing to reflect it.

What would a ‘good’ placement number be?

Some bootcamps have a high admissions bar, and manage to graduate and place most of their students. They sacrifice access, and sometimes explicitly market their exclusivity. Other bootcamps let all students start, and end up with many students who drop out or don’t get a job.

I don’t foresee any bootcamps publishing a ‘prospective success’ figure that includes acceptance rate, graduation rate, and job placement rate. Maybe someone like Career Karma or Course Report sits in the kind of position to build that kind of tool.

If a school could drive up all three numbers simultaneously, it would be a Pareto improvement over all the other schools. I don’t doubt that there are real differences in quality between schools, but I’m skeptical that focusing on the job placement number alone helps us understand what’s going on - or incentivizes the right behaviors from bootcamps.

What’s missing from the Lambda / ISA discourse?

All schools have some skin in the game

Reputational risk matters to schools that aren’t bootcamps. Every school’s ability to attract students and faculty and place their alumni in jobs depends on the impression they create.

Incentive Alignment isn’t sufficient

There is no invisible hand that teaches the students to code.

Schools with poor incentive alignment are many, and they are often predatory - see the previous post’s aside on cosmetology schools.

But for all the talk of Incentive Alignment, it all has to hit bedrock with differences in students’ experience. For the incentive alignment to mean anything, you have to actually innovate in teaching, in placements, in community, in curriculum, or in something else that is actually different for the students. The financial experience of students is real, and different under ISAs than under traditional loans - but it is only a part.

We don't really get to see into the inner workings of most schools, and few critics, students, or investors can get enough information to tell whether the students will actually learn differently.

So, what’s the galaxy-brain take on Lambda School?

I have been trying to figure out where I stand on the debate. I have been trying not to sign up for one ‘team’ or another, since my identity would get wrapped up in that side, and that might blind me to what’s actually going on.

I keep coming back to this quote about AI alignment believers and skeptics (with a slightly twisty origin):

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

We can probably formulate something similar here.

The skeptic position is that, although bootcamps and ISAs are important models to explore, we need to protect students from poor educational experiences and deceptive marketing - after all, education is supposed to be a pathway to a better life. Proponents’ position, on the other hand, is that education is supposed to be a pathway to a better life, and, although we need to protect students from poor educational experiences and deceptive marketing, Bootcamps and ISAs are important models to explore.


This is more than I wanted to write about Lambda. They get a lot of attention already, a lot of what I have to say has been said well by others, and following the controversy closely just invites stress into my life.

Still, I think this helped me see some of the threads of the argument on either side more clearly, and hopefully it did the same for you!

If you have thoughts or feedback (or if I got it all wrong), please let me know - you can just reply to the email, and I’ll get it!

Thanks for reading!
