This piece is part of a series exploring competency-based education (CBE):
- Introduction to Competency-Based Education
- Determining Competencies
- Annotated Bibliography: Competency-Based Education in Higher Education
The current piece focuses on how to design and implement high-quality assessments that generate meaningful evidence of student proficiency.
Introduction
How can we truly know when a learner is ready to succeed in the workplace or contribute meaningfully in their field? Traditional assessments often emphasize recall and seat time, but they don’t always capture what learners can actually do.
Competency-based education offers a compelling alternative. It redefines assessment by shifting the focus from time spent in class or performance on cumulative exams to demonstrated proficiency. In a CBE model, learners must repeatedly show what they can do through valid, authentic demonstrations—anchored in both academic content and real-world tasks.
High-quality assessments in CBE are intentionally aligned to clearly defined competencies, provide multiple opportunities for demonstration, and rely on transparent, equitable scoring practices that support learning over time.
This piece is designed for faculty, instructional designers, and academic leaders who want to strengthen assessment practices through a competency-based lens. It outlines the essential components of competency-based assessment, shares practical implementation strategies, and highlights examples from professional fields. Even outside formal CBE programs, these principles can improve assessment quality, promote equity, and better reflect learners’ true capabilities.
How Competency Assessment Differs From Traditional Testing
In CBE, assessment serves to gather valid and reliable evidence that a learner has achieved proficiency on a defined competency—a demonstrable ability to integrate knowledge, skills, and behaviors in context. This approach differs from traditional testing in several key ways:
- Performance over recall: Assessments are centered on application, not just retention. For example, instead of selecting correct answers from a list, students might conduct a stakeholder analysis or evaluate a policy draft.
- Context over content coverage: Tasks are designed to mirror professional scenarios, requiring learners to integrate knowledge, skills, and judgment.
- Proficiency over seat time: Student progress is evaluated against clearly defined standards, not time-based milestones. Some may demonstrate competency quickly; others may need additional support and opportunities.
These shifts require a corresponding change in assessment design. Evaluation must be authentic, embedded in the learning process, and iterative to capture meaningful evidence of proficiency (McClarty & Gaertner, 2015).
Key Features of High-Quality Assessments
High-quality competency assessments are defined by features that promote fairness, validity, and meaningful demonstration of learning. The Competency-Based Education Network (C-BEN, 2017) outlines a widely used quality framework, which includes the following essential attributes:
- Authenticity: Assessment tasks simulate real-world scenarios, allowing learners to apply their knowledge and skills in professional or practical contexts. This not only enhances relevance but also mirrors how competencies are used in the field (Gervais, 2016).
- Multiple demonstrations: Competency is not proven through a single test or product. Instead, it is demonstrated through varied assessment formats and over time to ensure validity and reliability of inferences (McClarty & Gaertner, 2015).
- Clear proficiency standards: Criteria for success are explicitly defined and consistently applied, fostering shared understanding among evaluators and inter-rater reliability (C-BEN, 2017).
- Embedded feedback: Formative assessments and feedback loops are built into the learning process, providing students with timely insights and revision opportunities (Johnstone & Soares, 2014).
- Transparent expectations: Assessment criteria and proficiency expectations are shared with learners clearly from the outset, empowering them to monitor their progress and make informed decisions about how to improve (Reddy & Andrade, 2010).
Designing Authentic Competency Assessments
Effective competency assessments are rooted in authentic, discipline-specific tasks. These tasks reflect not only what students need to know, but also what they need to do in real professional contexts. For example, nursing students might demonstrate clinical skills in a simulation, business students might present a strategic plan, and social work students might respond to a case study. Such performances typically integrate multiple competencies at once (Gervais, 2016).
To strengthen alignment, assessment designers should consult with professionals to understand how competence is judged during hiring, onboarding, performance reviews, and advancement. These insights ensure assessments reflect real evaluation practices—not just academic proxies. The Center for Skills by C-BEN offers tools, partnerships, and exemplars to support this alignment.
Assessment of competency should be iterative, multimodal, and sustained over time rather than a single high-stakes test. In real-world settings, it is rare to perform a skill just once or to succeed on a single attempt; most meaningful tasks involve iteration, feedback, and the chance to try again. Giving students multiple opportunities and methods to demonstrate what they can do therefore makes assessment both fairer and more authentic, and it allows instructors to triangulate evidence and distinguish consistent proficiency from isolated success (McClarty & Gaertner, 2015). To support valid, equitable judgments of learning, designers should intentionally vary assessment opportunities along several key dimensions:
- Format variation: A student demonstrating data interpretation skills might do so through a written case memo, an interactive dashboard, or a live presentation, depending on the context and the available tools. Each format offers a different lens on the same underlying competency.
- Temporal variation: Competence is often best judged over time. Revising a project, responding to formative feedback, or submitting milestone deliverables can provide a more accurate picture of sustained ability than a single, static performance.
- Feedback loops: Iterative assignments with opportunities for revision allow students to engage with feedback, refine their thinking, and strengthen their performance, mirroring real professional practice.
These strategies reflect the core principle that CBE is not about how quickly a student completes an assignment, but about whether they ultimately meet the standard. Allowing multiple, varied demonstrations of learning makes assessment evidence more valid and defensible—especially when high-stakes decisions like graduation or credentialing are on the line. This approach also supports equity. Learners from diverse backgrounds may need different supports or timeframes to show what they can do. Moving beyond one-shot assessments reduces structural barriers and creates more inclusive pathways to success.
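To make triangulation concrete, the following minimal sketch (in Python, with hypothetical competency names and an invented decision rule, since no specific algorithm is prescribed in the sources cited here) shows how repeated demonstrations in varied formats might be rolled up into a single proficiency judgment.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Demonstration:
    """One scored attempt at a competency, in any format."""
    competency: str   # hypothetical name, e.g., "data interpretation"
    format: str       # e.g., "case memo", "dashboard", "live presentation"
    proficient: bool  # did this attempt meet the proficiency standard?
    when: date

def judge(demos: list[Demonstration], required: int = 2) -> str:
    """Illustrative rule: 'Proficient' once the `required` most recent
    demonstrations all meet the standard; otherwise 'Not Yet', framing
    the gap as in progress rather than as failure."""
    recent = sorted(demos, key=lambda d: d.when, reverse=True)[:required]
    if len(recent) == required and all(d.proficient for d in recent):
        return "Proficient"
    return "Not Yet"

# Hypothetical evidence: an early miss followed by two varied successes.
evidence = [
    Demonstration("data interpretation", "case memo", False, date(2024, 9, 10)),
    Demonstration("data interpretation", "dashboard", True, date(2024, 10, 1)),
    Demonstration("data interpretation", "live presentation", True, date(2024, 11, 5)),
]
print(judge(evidence))  # Proficient: the two most recent attempts met the standard
```

The rule of requiring the two most recent demonstrations to meet the standard is only one possibility; a program might instead require consistency across formats or across a minimum span of time.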
Using Rubrics to Support Transparent, Equitable Evaluation
Rubrics are central to competency-based assessment because they translate broad competencies into observable performance criteria. When designed intentionally, rubrics support the following:
- Enhanced transparency, helping learners track progress and set goals
- Reduced grading bias, focusing evaluation on evidence rather than intuition or norms
- Formative guidance, allowing instructors to deliver actionable, targeted feedback aligned with each stage of learning
Envision’s Rubric Best Practices Guide recommends focusing each assessment on 3–5 “non-negotiable” criteria, the clearest indicators of the competency. It also suggests using even-point scales to avoid overuse of middle scores, and writing criteria in specific, observable terms rather than vague language. For instance, instead of the broad phrase “Writing is excellent,” a stronger criterion would be “Writing is organized, precise, and well-developed.” These practices make expectations clearer and judgments more defensible.
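To illustrate, here is a minimal sketch of a rubric encoded as data, assuming a hypothetical written-communication competency; the criterion names and descriptors are invented for this example, and the checks simply enforce the guide’s two structural recommendations.

```python
from dataclasses import dataclass

# An even-point scale (four levels here) leaves no middle score to default to.
SCALE = ("Beginning", "Developing", "Proficient", "Exemplary")

@dataclass
class Criterion:
    name: str
    descriptor: str  # written in specific, observable terms

def validate(rubric: list[Criterion], scale: tuple = SCALE) -> None:
    """Enforce the guide's two structural recommendations."""
    assert 3 <= len(rubric) <= 5, "focus on 3-5 non-negotiable criteria"
    assert len(scale) % 2 == 0, "use an even-point scale"

# Hypothetical written-communication rubric.
writing_rubric = [
    Criterion("Organization", "Ideas follow a logical sequence with clear transitions."),
    Criterion("Precision", "Claims are specific and supported by cited evidence."),
    Criterion("Development", "Each main point is elaborated with relevant detail."),
]
validate(writing_rubric)  # passes: 3 criteria, 4-point scale
```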
Inclusivity is also paramount in the design of a competency-based rubric. CBE aims to assess what learners can do, not how closely they conform to dominant academic or cultural norms. As emphasized in Rubrics as a Tool to Support Equity and Inclusion, rubrics should guide evaluators to focus on the quality of student work, not peripheral factors.
This involves the following:
- Avoiding penalties for language variation, stylistic differences, or communication that reflects diverse backgrounds, unless such elements are directly tied to the assessed competency
- Refraining from deducting points for lateness or formatting issues, unless timeliness or presentation is a stated learning outcome
Beyond evaluation, rubrics can also be used to support instructional alignment, learner agency, and peer engagement.
- Scaffolding peer reviews and draft feedback: Rubrics provide students with a shared language and clear criteria for offering and receiving feedback during collaborative activities and revision processes.
- Supporting self-assessment and reflection: When students use rubrics to evaluate their own work, they build metacognitive awareness and deepen their understanding of what it means to meet or exceed a standard.
- Calibrating grading among instructional teams: Rubrics serve as reference points for faculty collaboration, helping instructors apply consistent expectations and minimize subjective grading differences (a simple agreement check is sketched below)
When used this way, rubrics not only support equitable grading but also enhance learning itself (Reddy & Andrade, 2010).
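One simple, generic way for instructional teams to check calibration (an illustration of common practice, not a method drawn from Reddy and Andrade) is to compute an agreement statistic across raters. The sketch below calculates Cohen’s kappa, the chance-corrected rate at which two hypothetical instructors assign the same rubric level to the same submissions.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two raters scoring the same items."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance, from each rater's marginal score frequencies.
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical rubric levels assigned by two instructors to six submissions.
a = ["Proficient", "Developing", "Proficient", "Exemplary", "Developing", "Proficient"]
b = ["Proficient", "Developing", "Exemplary", "Exemplary", "Developing", "Proficient"]
print(round(cohens_kappa(a, b), 2))  # 0.75
```

A kappa near 1 suggests the team shares expectations; a value near 0 means agreement is no better than chance and signals the need for a norming session.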
Proficiency-Based Scoring Models
Traditional point-based grading often obscures what learners can actually do. In CBE, scoring models are designed instead to reflect demonstrated proficiency against clearly defined standards, prioritizing transparency and growth over the accumulation of points.
- Threshold-based ratings: Instead of subtracting points for errors, evaluators assess whether student work meets clearly defined performance standards, using terms such as “Meets Expectations” or “Needs Further Development.” This reinforces the idea that a competency judgment is binary (demonstrated or not yet) while promoting a growth mindset by framing unmet expectations as opportunities for continued learning rather than failure. A minimal data sketch follows this list.
- Proficiency tracking: Dashboards and digital learning records provide real-time visualizations of student progress across competencies. These systems support transparency and enable both learners and instructors to monitor growth over time.
- Portfolios and microcredentials: Competency evidence can be compiled into digital portfolios or represented through microcredentials such as badges aligned with rubrics and authentic tasks. For example, in a CBE-aligned Master of Social Work program, a student might earn badges in trauma-informed practice or community needs assessment by completing authentic tasks and demonstrating proficiency. These artifacts provide concrete, shareable proof of ability that can be presented to employers or professional networks (Young et al., 2019).
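As a minimal sketch of what threshold-based ratings and a proficiency dashboard might look like as data, the following uses the hypothetical social work competencies from the example above; the record structure is invented for illustration rather than drawn from any particular platform.

```python
from dataclasses import dataclass, field

RATINGS = ("Meets Expectations", "Needs Further Development")

@dataclass
class LearnerRecord:
    """Dashboard-style view of one learner's competency statuses.

    Threshold language replaces points, so unmet competencies read
    as 'not yet' rather than as failing grades."""
    learner: str
    statuses: dict[str, str] = field(default_factory=dict)

    def rate(self, competency: str, meets_standard: bool) -> None:
        self.statuses[competency] = RATINGS[0] if meets_standard else RATINGS[1]

    def summary(self) -> str:
        met = sum(s == RATINGS[0] for s in self.statuses.values())
        return f"{self.learner}: {met} of {len(self.statuses)} competencies demonstrated"

# Hypothetical MSW competencies from the example above.
record = LearnerRecord("J. Rivera")
record.rate("trauma-informed practice", True)
record.rate("community needs assessment", False)
print(record.summary())  # J. Rivera: 1 of 2 competencies demonstrated
```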
These models also support institutional goals such as credit for prior learning, strengthened employer connections, and transcript innovations that reflect demonstrated skills—not just course titles or grades (Cunningham et al., 2016; Dodge et al., 2018).
Conclusion
Assessment in CBE is not an endpoint; it is the evidence. High-quality competency assessments require intentional design, authentic performance tasks, inclusive rubrics, and scoring models that capture and support student growth over time. Whether implemented in a fully competency-based program or applied to a single capstone course, these practices help educators shift from measuring recalled knowledge to documenting real-world capability.
References
Competency-Based Education Network (C-BEN). (2017). Quality framework for competency-based education programs: A user’s guide.
Cunningham, J., Key, E., & Capron, R. (2016). An evaluation of competency-based education programs: A study of the development process of competency-based programs. The Journal of Competency-Based Education, 1(3), 130–139.
Dodge, L., Bushway, D. J., & Long, C. S. (2018). A leader’s guide to competency-based education: From inception to implementation. Routledge.
Gervais, J. (2016). The operational definition of competency-based education. The Journal of Competency-Based Education, 1(2), 98–106.
Johnstone, S. M., & Soares, L. (2014). Principles for developing competency-based education programs. Change: The Magazine of Higher Learning, 46(2), 12–19.
McClarty, K. L., & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. American Enterprise Institute.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.
Young, D., West, R. E., & Nylin, T. A. (2019). Value of open microcredentials to earners and issuers: A case study of National Instruments Open Badges. The International Review of Research in Open and Distributed Learning, 20(5), 104–121.