Glossary

Accountability: The demand by a community (public officials, employers, and taxpayers) for school officials to prove that money invested in education has led to measurable learning. 

Accreditation: Certification by an external, recognized organization that an institution or program meets certain requirements, either overall or in a particular discipline.

Achievement Test: A standardized test designed to efficiently measure the amount of knowledge and/or skill a person has acquired, usually as a result of classroom instruction. Such testing produces a statistical profile used to evaluate student learning against a standard or norm.
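
One common way to express a score "in comparison with a standard or norm" is a standard (z) score against the norm group's mean and standard deviation. The sketch below is illustrative only, with made-up scores; it is not any particular test's scoring procedure.

```python
from statistics import mean, stdev

norm_scores = [61, 64, 70, 72, 75, 78, 81, 83]  # made-up norm-group scores
student_score = 80

# z expresses the student's score in norm-group standard-deviation units
z = (student_score - mean(norm_scores)) / stdev(norm_scores)
print(f"z = {z:.2f}")  # ~0.90: about nine-tenths of an SD above the norm mean
```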

Affective: Outcomes of education involving feelings more than cognitive understanding; likes, dislikes, frustrations, values, etc. 

Assessment: Any effort to gather, analyze, and interpret evidence that describes institutional, departmental, divisional, or agency effectiveness (Upcraft & Schuh, 1996).

Assessment of Student Learning: Both direct and indirect measures used to evaluate student learning in order to change, improve, and enhance student learning and the college experience. In doing so, the university provides accountability measures that serve its internal needs and meet the requirements of external agencies.

Benchmark: A criterion-referenced objective; "Performance data that are used for comparative purposes. A program can use its own data as a baseline benchmark against which to compare future performance. It can also use data from another program as a benchmark. In the latter case, the other program often is chosen because it is exemplary and its data are used as a target to strive for, rather than as a baseline" (Hatry, van Houten, Plantz, & Greenway, 1996, p. xv).
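
A toy illustration of the two uses described in the quotation, with made-up pass rates: a program comparing this year's figure against its own baseline and against an exemplary peer program's figure used as a target.

```python
baseline = 0.71     # the program's own prior-year pass rate (baseline benchmark)
peer_target = 0.85  # an exemplary program's pass rate, used as a target
current = 0.78      # this year's pass rate

print(f"change vs. own baseline: {current - baseline:+.2f}")   # +0.07 improvement
print(f"gap to exemplary target: {peer_target - current:.2f}")  # 0.07 remaining
```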

Bias: A situation that occurs in testing when items systematically measure differently for different ethnic, gender, or age groups. Test developers reduce bias by analyzing item data separately for each group, then identifying and discarding items that appear to be biased.
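The sketch below illustrates the idea of analyzing item data separately for each group, using made-up responses and a simple proportion-correct comparison. Real test developers use formal differential item functioning (DIF) methods; the function name and threshold here are hypothetical.

```python
def flag_biased_items(responses, groups, threshold=0.15):
    """responses: dict mapping item id -> list of 0/1 scores, one per respondent.
    groups: list of group labels (e.g., "A"/"B"), parallel to each score list."""
    flagged = []
    for item, scores in responses.items():
        by_group = {}
        for score, group in zip(scores, groups):
            by_group.setdefault(group, []).append(score)
        # proportion correct within each group
        props = {g: sum(s) / len(s) for g, s in by_group.items()}
        # flag items whose group proportions diverge beyond the threshold
        if max(props.values()) - min(props.values()) > threshold:
            flagged.append((item, props))
    return flagged

# Made-up data: "Q2" appears to measure differently for the two groups.
responses = {
    "Q1": [1, 1, 0, 1, 1, 0, 1, 1],
    "Q2": [1, 1, 1, 1, 0, 0, 0, 1],
}
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(flag_biased_items(responses, groups))  # flags Q2 only
```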

Cohort: A group whose progress is followed by means of measurements at different points in time. 

Competency: Level at which performance is acceptable. 

Core Element(s): The basic everyday functions that are essential components of individual departments. In previous years they have been known as critical processes, continuous objectives, and basic operations.

Direct Measurements: Standardized or non-standardized objective measures demonstrating competency in specific areas. 

Effectiveness (results of operations): How well an approach, a process, or a measure addresses its intended purpose.

Enhancement: Either the improvement of a core element (a process, service, or program that is essential to daily operation) or the creation of a new program (an innovation). Any improvement made to an operation is considered an enhancement.

Evaluation: Any effort to use assessment evidence to improve institutional, departmental, divisional, or unit effectiveness. 

Forced-choice: A question format in which the respondent must choose among given responses (e.g., very poor, poor, fair, good, very good).

Formative Assessment: Assessment intended to evaluate ongoing program/project activity and provide information for improving the project; feedback is short term in duration.

Indirect Measurements: Opinion surveys, interviews, and other subjective data. These may be combined with enrollment analyses, retention rates, graduation rates, employment data, transfer data, etc.

Institutional Assessment: Assessment of institutional mission and goal statements, including student services, financial stability, business and industry training, and adult education, as well as academic programs.

Institutional Review Board: A board charged with protecting the rights and welfare of human subjects involved in research.

Learning Outcomes: Changes or consequences that occur as a result of enrollment in a particular educational institution and involvement in its courses and programs; what a student knows and is able to demonstrate, analyze, and synthesize following course and program instruction.

Longitudinal Studies: Data collected from the same population at different points in time. 

Norm-referenced Test: A test designed to highlight achievement differences between and among students, producing a dependable rank order of students across a continuum of achievement.
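
A percentile rank is one simple way to produce such a rank order. The sketch below is illustrative, with a made-up norm group, rather than any specific test's norming procedure.

```python
def percentile_rank(score, norm_scores):
    """Percent of the norm group scoring below the given score."""
    below = sum(1 for s in norm_scores if s < score)
    return 100.0 * below / len(norm_scores)

norm_group = [48, 52, 55, 61, 63, 67, 70, 74, 78, 85]  # made-up norm data
for student_score in (55, 70, 85):
    print(student_score, percentile_rank(student_score, norm_group))
# 55 -> 20.0, 70 -> 60.0, 85 -> 90.0: a rank order across the continuum
```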

Observer Effect: The degree to which the assessment results are affected by the presence of an observer.

Open-ended: Assessment questions that are designed to permit unstructured responses. 

Program Assessment (Program Review): Assessment of program outcomes based on how the parts of a program interact, not on how each part performs individually. The knowledge, skills, and abilities that students achieve by the end of their programs are affected by how well courses and other experiences in the curriculum fit together and build on one another throughout the undergraduate years.

Program Objectives: Statements reflecting student learning outcomes and achievements related to the academic program as a unit rather than to an individual course.

Reliability: The extent to which an experiment, test, or any measuring procedure consistently yields the same result on repeated trials. 
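
One common estimate is test-retest reliability: administer the same instrument twice to the same people and correlate the two sets of scores. A minimal sketch with made-up scores (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation

# Scores from the same six students on two administrations of one instrument
trial_1 = [72, 85, 90, 65, 78, 88]
trial_2 = [70, 86, 92, 63, 80, 85]

# A high Pearson r (~0.98 here) suggests the measure is consistent on repeated trials
print(f"test-retest reliability r = {correlation(trial_1, trial_2):.2f}")
```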

Research: “...investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws” (Merriam-Webster, 2011).

Rubrics: A set of categories that define and describe the important components of the work being completed, critiqued, or assessed. Each category contains a gradation of levels of completion or competence with a score assigned to each level and a clear description of what criteria need to be met to attain the score at each level.
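
A rubric maps naturally onto a small data structure: categories, graded levels, a score per level, and the criteria for each. The sketch below is a hypothetical example, not a prescribed format.

```python
# Each category maps score levels to the criteria needed to attain them.
rubric = {
    "Organization": {
        3: "Clear thesis; ideas ordered logically throughout",
        2: "Thesis present; ordering lapses in places",
        1: "No clear thesis; ideas appear in no deliberate order",
    },
    "Evidence": {
        3: "Claims consistently supported with cited sources",
        2: "Most claims supported; citations incomplete",
        1: "Claims largely unsupported",
    },
}

def total_score(awarded_levels):
    """awarded_levels: dict mapping category -> level awarded by the rater."""
    return sum(awarded_levels[cat] for cat in rubric)

print(total_score({"Organization": 3, "Evidence": 2}))  # 5 of 6 possible
```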

Stakeholder: Anyone who has a vested interest in the outcome of the program/project. 

Summative Assessment: An assessment conducted at the conclusion of a course or some larger instructional period (e.g., at the end of the program). The purpose is to determine success or the extent to which the program/project/course met its goals.

Triangulation: The use of a combination of assessment methods in a study. An example of triangulation would be an assessment that incorporated surveys, interviews, and observations.

Utility: The usefulness of assessment results.

Validity: The degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. Validity has three components: relevance (the measure is direct), accuracy (the measurements are precise), and utility (the implications for improvement are clear).

Variable: An observable characteristic that varies among individual responses.
