Student-Created Rubrics: Preparing Designers for Real-World Quality Decisions

What’s a Rubric, and Why Should We Care?

A rubric is basically a scorecard for quality that educators use when we grade. It breaks down what you’re evaluating into categories, defines what excellence looks like in each category, and assigns weights to what matters most. Using a rubric makes you a less biased grader and keeps you focused on the student’s accomplishments. In academia, rubrics are usually handed down from on high: the professor tells you exactly how you’ll be graded, and you work to hit those targets.

But here’s the thing: that’s not how the real world works.

In industry, nobody hands you a rubric. You have to figure out what good looks like. Is this feature shippable? Does this design meet our goals? Will users actually want this? These are judgment calls that designers and product managers make every single day.

So I flipped the script. What if students created their own rubrics?

The Theory: Building Industry-Ready Judgment

My thinking was simple: if you’re going to work in tech, as most of my students will, you need to develop an internal compass for quality. You need to be able to look at a piece of work and say, “This is ready” or “This needs more time” based on criteria that actually matter.

When you’re building a product, you’re constantly asking: What makes this good? Is it the visual polish? The interaction design? How well it meets user needs? Whether it actually changes behavior? Does it need to be fun, or functional, or both?

These aren’t academic questions. They’re the daily bread of product development.

The Practice: How It Actually Works

I have students create their rubrics about two-thirds of the way through a project. This timing is deliberate—they know enough about their domain to make informed judgments, but still have time to use these insights to improve their work.

The process varies by class size:

In large classes (think 60+ students), we run structured discussions in small groups where teams share what they think matters. Once the small groups have a rubric they like, it all goes into a massive Google Doc that my CAs (course assistants) synthesize into an overall rubric. The CAs then define what an A, B, or C looks like in each category. This goes back to the class for comments and refinement.

In smaller classes (under 20), each team creates their own rubric. Then representatives from each team negotiate with each other about what the final rubric should include and how to weight different criteria. Again, the CAs define the grade boundaries, and the class reviews and comments.

The categories that emerge are fascinating. For a game project, students might identify:

  • Core mechanics (Is it fun? Does it work?)
  • Visual coherence (Does it feel like one thing?)
  • Technical stability (How many bugs can we tolerate?)
  • Player engagement (Do people want to keep playing?)
  • Learning curve (Can people figure it out?)

For a behavior change app, they might focus on:

  • Evidence of behavior change
  • User retention metrics
  • Ease of adoption
  • Emotional resonance
  • Sustainability of the intervention
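
If you’re curious what one of these rubrics looks like once the weights and grade boundaries are pinned down, here’s a rough sketch using the game-project categories above. The weights, cutoffs, and scoring function are mine, invented purely for illustration; they aren’t the numbers any class actually negotiated.

```python
# Illustrative sketch of a weighted rubric using the game-project categories above.
# Weights and grade boundaries are made up for this example, not a class's real numbers.

RUBRIC = {
    "Core mechanics":      0.30,
    "Visual coherence":    0.20,
    "Technical stability": 0.20,
    "Player engagement":   0.20,
    "Learning curve":      0.10,
}

# Cutoffs as a fraction of total weighted points, checked highest first.
GRADE_BOUNDARIES = [(0.90, "A"), (0.80, "B"), (0.70, "C")]

def score_project(ratings: dict[str, float]) -> tuple[float, str]:
    """ratings maps each category to a 0-1 score; returns the weighted total and a letter grade."""
    total = sum(RUBRIC[cat] * ratings[cat] for cat in RUBRIC)
    for cutoff, letter in GRADE_BOUNDARIES:
        if total >= cutoff:
            return total, letter
    return total, "below C"

# Example: a team that nails the mechanics but still ships some bugs.
print(score_project({
    "Core mechanics": 0.95,
    "Visual coherence": 0.85,
    "Technical stability": 0.70,
    "Player engagement": 0.90,
    "Learning curve": 0.80,
}))  # -> roughly (0.855, 'B')
```
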

The Surprising Dynamics

When I first announced the class would create its own rubric, students laughed. “What if we just set it up so we all get A’s?” they asked.

“Well, let’s see,” I said. What I thought was: good luck with that.

Here’s why it doesn’t work that way: it’s a project-based class, so everyone’s building something different. A team building a complex strategy game wants to be rewarded for sophisticated mechanics. A team creating a meditation app wants credit for simplicity and calm. A team working on a social platform cares about engagement and virality.

Try getting those three teams to agree on an “easy A” rubric. They can’t. They have to negotiate, to argue for what matters, to find a standard of quality that works across wildly different projects.

It also helps that these are Stanford students—they’re drawn to excellence like coffee junkies to a Philz. Even when given the chance to lower the bar, they can’t help but raise it.

The tension is productive. It’s exactly the kind of debate that happens in product reviews at tech companies—different teams advocating for different definitions of quality based on what they’re trying to achieve.

The Outcomes: Better Work, Deeper Thinking

Two things that make me happy:

First, the work quality goes up. When students define their own success criteria, they become more deliberate about their choices. They can’t just throw features at the wall—they have to think about whether each decision serves their stated goals.

Second, it creates psychological safety. Instead of trying to guess what I want, students focus on articulating and meeting their own standards of excellence. They own their criteria, which means they own their success.

The conversations become richer too. Instead of “Will this get an A?” I hear “Does this meet our engagement goals?” Instead of “Is this good enough?” they ask “Have we delivered on what we said mattered?”

The Real Lesson

By making students create rubrics, I’m not just teaching them to evaluate design work. I’m teaching them to think systematically about quality, to negotiate with stakeholders about what matters, and to hold themselves accountable to standards they believe in.

These are exactly the skills they’ll need when they’re sitting in a product review, arguing for why their feature is ready to ship. Or when they’re defining OKRs for their team. Or when they’re deciding whether to pivot or persevere with a struggling product.

I have to be honest: they don’t come up with a rubric that’s very different from what past classes have created, or even from what I would have written myself. The rubric isn’t the point. The thinking is the point. And when students learn to do that thinking for themselves, they stop being students and start being practitioners.

Christina
