Student Learning Outcomes

Glossary of Terms

Here are definitions of common terms relating to Student Learning Outcomes.
Affective Domain
  • includes the manner in which we deal with things emotionally, such as feelings, values, appreciation, enthusiasms, motivations and attitudes (Bloom)
  • described under the categories of attitudes, interests, appreciations, and adjustments
  • based upon behavioral aspects and may be labeled as beliefs
  • three levels in the domain are awareness, distinction and integration
Affective Learning
Includes personal awareness and self-image, and involves values clarification, development of creative thinking skills, and acceptance of others' opinions. Affective areas are 1) self-worth, 2) relationships with others, 3) world awareness, 4) learning, and 5) spiritual life.
Assessment
A means of determining whether the learning or performance outcome has been achieved. The systematic collection of data and information focused on student learning and other outcomes and objectives.
Authentic Assessment
Requires students to perform a task, in a real-life or simulated context, rather than take a test. Authentic assessment is designed to actively demonstrate knowledge, skills, and abilities rather than rely on recognition or recall to answer questions.
Bundled
Two or more outcomes written into a single outcome indicated by use of a conjunction (and, or).
Cognitive Domain
  • Includes the recall or recognition of specific facts, procedural patterns, and concepts that serve in the development of intellectual abilities and skills (Bloom)
  • knowledge or mind based
  • includes three practical instructional levels: fact, understanding, and application
Cohort
A group of students. Examples include all first-time freshmen for a given fall semester, all students in the Bridge program, or all students graduating within a given academic year. Cohorts are often tracked via longitudinal studies.
College Goals
Allow a campus to focus on critical issues. At Mt. SAC, the college goals are articulated by PAC, and they guide all planning and assessment processes.
Competency
A combination of knowledge, skills, and abilities needed to perform a specific task at a specific criterion established by the evaluator.
Criterion-Referenced
Evaluation is based on proficiency against pre-determined standards rather than on subjective measures such as improvement over time or comparison to other students. See also norm-referenced.
Cross-Sectional Analysis
Research studies that look at a cross-section of the student body or clients at a given point in time. See also longitudinal study.
Department or Unit Goals
Allow an area to focus its priorities. At Mt. SAC, Department and Unit goals are prompted in part by college goals and generated by faculty/staff. They guide area planning and assessment.
Direct Assessment
Methods provide evidence in the form of student products or performances. Such evidence demonstrates that actual learning has occurred (related to a specific content or skill). Direct measures show WHAT students learn, but not how or why.
Embedded Assessment
Included as part of the regular instruction or service. For example, specific questions can be embedded in numerous classes via quizzes, tests, and homework to provide summative and formative evaluation of departmental, program, or institutional objectives.
Focus Groups
A qualitative means of assessment that relies on a series of facilitated discussions, usually with 6-10 respondents each, who are asked carefully constructed open-ended questions about their attitudes, beliefs, and experiences.
Formative Assessment
Gives information and feedback during the course of instruction or service that allows for improvement.
General Education
The content, skills, and learning outcomes expected of students who achieve a college degree (or certificate) regardless of program or major. General education outcomes often include knowledge, skills, and abilities in such areas as writing, critical thinking, problem solving, quantitative reasoning, and information competency.
Holistic Scoring
Assessment with a scoring process based on an overall rating of a finished product or performance. Holistic scoring provides only one score for the entire assessment.
Indirect Assessment
Methods reveal characteristics associated with learning that imply learning has occurred. Indirect measures include self-reports and observations from others on attitude, motivation, perception, satisfaction and some behaviors (such as time on task, study habits, engagement, etc.). Indirect measures answer HOW and WHY questions about student learning.
Inter-Rater Reliability
The degree to which raters independently evaluate similar performances with very similar scores. Norming includes an assessment that raters must pass before using the rubric to score student work. When raters disagree, open discussions between raters can clarify the scoring criteria and performance standards, while providing opportunities to practice applying the rubric to work of various levels.
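The expectation that raters independently produce very similar scores can be sketched as a simple exact-agreement check. The scores below are hypothetical, and real norming sessions may also use formal statistics such as Cohen's kappa:

```python
# Hypothetical rubric scores (1-4) given independently by two raters
# to the same ten student essays.
rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

# Exact agreement: the fraction of essays on which the two raters
# assigned the identical score.
agree = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Exact agreement: {agree:.0%}")  # 8 of 10 essays match -> 80%
```

A low agreement rate would signal that the rubric's criteria need the kind of open discussion described above before live scoring begins.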
Longitudinal Study
A.K.A. longitudinal cohort analysis. A research design that tracks a particular group of students over time. See also cross-sectional analysis.
Metacognition
  • what is known about one's own cognitive resources and regulation of those resources
  • ability to control and self-regulate learning
  • ability to reflect on one's own cognitive process and explain it to others
Measurable Objectives
Instructional expectations for a given course or class that establish curricular elements and standards. The measurable objectives for each course are listed on the Course Outline of Record.
Mission Statement
  • Institutional: an expanded statement of institutional purpose; a campus is unified through its demonstrated connection to the mission. At Mt. SAC the mission is driven by the California Master Plan for Higher Education, revised by PAC, and approved by the Board of Trustees. It informs all planning and assessment.
  • Unit: a description of both the services provided and for whom those services are provided.
Multiple Measures
Using more than one type of assessment to measure objectives. Whenever feasible, use both direct and indirect assessments.
Norm-Referenced
Assessment where student performances are compared to a larger group. Usually the larger group, or "norm group," is a national sample representing a wide and diverse cross-section of students. An individual's assessment is compared to that of other individuals (or to the same individual's improvement over time). Individuals are commonly ranked to determine a median or an average. Grading on a curve is an example of norm-referenced assessment.
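The "grading on a curve" idea can be illustrated with a small sketch, using hypothetical exam scores, that places each student relative to the group's median rather than against a fixed standard:

```python
import statistics

# Hypothetical exam scores for a class of eight students.
scores = [55, 62, 70, 74, 78, 83, 88, 95]

# The norm group's midpoint: half the class falls on either side.
median = statistics.median(scores)

# Norm-referenced view: each student is placed relative to the group,
# not measured against a pre-determined criterion.
for s in scores:
    standing = "above" if s > median else "at/below"
    print(f"{s}: {standing} the class median of {median}")
```

Contrast this with criterion-referenced evaluation, where each score would instead be compared to a fixed cut-off regardless of how classmates performed.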
Norming
Also called Rater Training. The process of educating raters to evaluate student or client performances and produce dependable scores. Typically, this process uses criterion-referenced standards and primary trait analysis rubrics. Raters need to participate in norming sessions on a regular basis. See also inter-rater reliability.
Performance-based Assessment
Items or tasks that require students to apply knowledge, skills, and abilities in real-world situations.
PIE (Planning for Institutional Effectiveness)
A campus-wide SLOs/AUOs-based planning process at Mt. SAC designed to foster innovation and change within departments/units in alignment with the institutional mission and goals.
Portfolio
A representative collection of an individual’s work, including some evidence that the individual has evaluated the quality of his or her work. The method for evaluating the work is important, as well as determining the reasons the individual chose each selection included in his or her portfolio.
Reliability
The data are reproducible. Repeated assessment yields the same data.
Relevant
The data answer important questions and are not generated simply because they are easy to measure.
Rigor
The degree to which research methods are meticulously carried out in order to recognize important influences occurring in the assessment.
Rubric
An assessment tool often shaped like a matrix, with criteria on one side and levels of achievement along the other. Primary Trait Analysis Rubrics are often used for measuring a complex outcome such as performances, projects or presentations. Rubrics require a "norming process."
Sampling
A population (of students or clients) commonly contains too many individuals to study conveniently, so research is often restricted to one or more samples drawn from it. A well-chosen sample will contain most of the information about a particular population, but the relation between the sample and the population must allow true inferences to be made about the population from that sample. Consequently, the first important attribute of a sample is that every individual has an equal chance of being included in the study. This is called random selection. The second important attribute of a sample is that it is representative; that is to say, the demographic features of the sample closely resemble those of the entire population.
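Random selection, in which every individual has an equal chance of being included, can be sketched with Python's standard library. The roster below is hypothetical:

```python
import random

# Hypothetical population: a roster of 500 student IDs.
population = [f"student_{i:03d}" for i in range(500)]

# Simple random sample of 50: every student has an equal chance
# of selection, and sampling is without replacement (no repeats).
random.seed(42)  # fixed seed so the draw is reproducible for this sketch
sample = random.sample(population, k=50)

print(len(sample))       # 50 students drawn
print(len(set(sample)))  # 50 distinct IDs -- no student appears twice
```

Checking that a drawn sample is also representative (the second attribute above) would require comparing its demographic breakdown to that of the full population.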
Scale
Values used to rate items in an assessment instrument. Scales can be numerical (1 to 10) or semantic (agree to disagree). Both can be adapted for quantitative and/or qualitative analysis.
Summative Assessment
Designed to provide a final evaluative summary or score. A student's grade is an example of summative assessment. Summative evaluation is a final determination of particular knowledge, skills, and abilities. This could be exemplified by exit or licensing exams, or any final assessment which is not created to provide feedback for improvement.
Validity
The extent to which an assessment instrument measures what it is supposed to measure, and the extent to which inferences made on the basis of test scores are appropriate and accurate. For example, if a student performs well on a reading test, how confident are we that the same student is a good reader? A valid standards-based assessment (1) is aligned with the standards intended to be measured, (2) provides an accurate and reliable estimate of students' performance relative to the standard(s), and (3) is fair. An assessment cannot be valid if it is not reliable.