Terminology Document

Assessment Terminology

Vision Statement

- A vision statement provides strategic direction for an institution and describes its high-level goals for the future: what it hopes to achieve if it successfully fulfills its organizational purpose or mission.

Albany State University will be a world-class comprehensive university and a powerful catalyst for the economic growth and development of Southwest Georgia. ASU will be recognized for its innovative and creative delivery of excellent educational programs, broad-based community engagement and public service, and creative scholarship and applied research, all of which enrich the lives of the diverse constituencies served by the University.

Mission Statement

- The mission statement describes an institution’s purposes and its vision of excellence. ... That educational vision should be deeply rooted in the institution's identity and practices, rather than being discarded when a president, dean, or inspired faculty leader moves on.

Albany State University, a proud member institution of the University System of Georgia, elevates its community and region by offering a broad array of graduate, baccalaureate, associate, and certificate programs at its main campuses in Albany as well as at strategically-placed branch sites and online. Committed to excellence in teaching and learning, the University prepares students to be effective contributors to a globally diverse society, where knowledge and technology create opportunities for personal and professional success. ASU respects and builds on the historical roots of its institutional predecessors with its commitment to access and a strong liberal arts heritage that respects diversity in all its forms and gives all students the foundation they need to succeed. Through creative scholarship, research, and public service, the University’s faculty, staff, students, and administrators form strategic alliances internally and externally to promote community and economic development, resulting in an improved quality of life for the citizens of southwest Georgia and beyond.

Program Performance Outcomes or Program Outcomes (PPO)

– Program outcomes are statements of what faculty expect graduates to be able to do after completing their programs of study. Like learning objectives, these statements should be written in specific, demonstrable (measurable), and student-centered terms.

For instance, a CMU Department of Physics learning outcome states that graduates should be able to solve complex and diverse problems by:

  • Recognizing universal physical laws relevant to the problem,
  • Applying the relevant laws to the problem,
  • Applying mathematical and computational techniques, using experimental, computational, and/or theoretical methods, and
  • Evaluating the limitations of their solutions.

Student Learning Outcomes or Objectives (SLO)

– Student learning outcomes or objectives are typically course-level statements describing what the students should be able to do (or demonstrate) by the end of the course. These statements describe student performance in a specific, demonstrable (measurable), and student-centered way.

Example: Students should be able to apply basic principles of energy, momentum and angular momentum conservation to solve real-world problems on the microscopic, macroscopic and astrophysical size scales.

Unit Performance Outcomes (UPO)

– Unit performance outcomes can be defined as statements that describe the desired quality (timeliness, accuracy, responsiveness, etc.) of key functions and services within the administrative unit. Operational outcomes define exactly what the services should promote (understanding, knowledge, awareness, appreciation, etc.). Outcomes also can be stated in terms of student learning outcomes. This is most appropriate for services that aim to increase students’ knowledge or understanding of specific concepts.

Complete College Georgia -

In August 2011, Governor Nathan Deal announced the launch of Complete College Georgia, a statewide effort to increase attainment of a high-quality certificate or degree. Since that announcement, the University System of Georgia and the Technical College System of Georgia have partnered on the strategizing, planning, and implementation efforts that drive the primary goal of Complete College Georgia: improving student access to and graduation from institutions of higher education. CCG has five major work areas:

  • College Readiness: Mending the P-12 pipeline to increase the number of high school students graduating and ready to begin higher education work.
  • Improving Access & Completion for Underserved Students: Identifying and removing common barriers for minority, part-time, adult, military, disabled, low-income, and first-generation students.
  • Shortening the Time to Degree: Improving current and developing new paths for students to earn a high quality degree in a timely manner.
  • Restructuring Instructional Delivery: Improving the quality of student learning through effective teaching, facilitation, and innovative modes of learning.
  • Transforming Remediation: Improving remedial education practices to remove barriers and increase success.

Examples: Guided Pathways, Go Back Move Ahead, Predictive Analysis, Transformed Remediation, New Models of Learning, and Credit Intensity.

Indirect Measures

– Indirect assessments use perceptions, reflections, or secondary evidence to make inferences about student learning. For example, surveys of employers, students’ self-assessments, and admissions to graduate schools are all indirect evidence of learning.

Indirect measures are best suited to program- or university-level assessment. They are commonly used in conjunction with direct measures of student learning.

Examples of Indirect Measures of Student Learning

  • Course grades provide only indirect information about student learning for several reasons: a) because they reflect performance or achievement within an individual class, grades do not indicate learning over a longer period than that class or across different courses within a program; b) grading systems vary from class to class; and c) a grading system in one class may be applied inconsistently from student to student.
  • Grades assigned to student work in one particular course also provide indirect information about student learning, for the reasons mentioned above. Moreover, graded student work in isolation, without an accompanying scoring rubric, offers little meaningful insight into overall student performance or achievement in a class or a program.
  • Comparison between admission and graduation rates
  • Number or rate of graduating students pursuing their education at the next level
  • Reputation of graduate or post-graduate programs accepting graduating students
  • Employment or placement rates of graduating students into appropriate career positions
  • Course evaluation items related to the overall course or curriculum quality, rather than instructor effectiveness
  • Number or rate of students involved in faculty research, collaborative publications and/or presentations, service learning, or extension of learning in the larger community
  • Surveys, questionnaires, open-ended self-reports, focus-group or individual interviews dealing with current students’ perception of their own learning
  • Surveys, questionnaires, focus-group or individual interviews dealing with alumni’s perception of their own learning or of their current career satisfaction (which relies on their effectiveness in the workplace, influenced by the knowledge, skills, and/or dispositions developed in school)
  • Surveys, questionnaires, focus-group or individual interviews dealing with the faculty and staff members’ perception of student learning as supported by the programs and services provided to students
  • Quantitative data, such as enrollment numbers
  • Honors, awards, scholarships, and other forms of public recognition earned by students and alumni

Direct Measures

– Direct assessment measures learning through student performance or work that demonstrates the learning itself. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or across courses or years (e.g., comparing writing scores from sophomore to senior year).

Examples of Direct Measures of Student Learning

  • Scores and pass rates on standardized tests (licensure/certification as well as other published tests determining key student learning outcomes)
  • Writing samples
  • Score gains indicating the “value added” to the students’ learning experiences by comparing entry and exit tests (either published or locally developed) as well as writing samples
  • Locally designed quizzes, tests, and inventories
  • Portfolio artifacts (these artifacts could be designed for introductory, working, or professional portfolios)
  • Capstone projects (these could include research papers, presentations, theses, dissertations, oral defenses, exhibitions, or performances)
  • Case studies
  • Team/group projects and presentations
  • Oral examination
  • Internships, clinical experiences, practica, student teaching, or other professional/content-related experiences engaging students in hands-on experiences in their respective fields of study (accompanied by ratings or evaluation forms from field/clinical supervisors)
  • Service-learning projects or experiences
  • Authentic and performance-based projects or experiences engaging students in opportunities to apply their knowledge to the larger community (accompanied by ratings, scoring rubrics or performance checklists from project/experience coordinator or supervisor)
  • Graduates’ skills in the workplace rated by employers
  • Online course asynchronous discussions analyzed by class instructors

Whenever appropriate, scoring keys help identify the knowledge, skills, and/or dispositions assessed by means of the particular assessment instrument, thus documenting student learning directly.

Formative

- Formative assessment refers to the gathering of information or data about student learning during a course or program that is used to guide improvements in teaching and learning. Formative assessment activities are usually low-stakes or no-stakes; they do not contribute substantially to the final evaluation or grade of the student or may not even be assessed at the individual student level.  For example, posing a question in class and asking for a show of hands in support of different response options would be a formative assessment at the class level.  Observing how many students responded incorrectly would be used to guide further teaching. Examples of formative assessments include asking students to:

  • draw a concept map in class to represent their understanding of a topic
  • submit one or two sentences identifying the main point of a lecture
  • turn in a research proposal for early feedback

Summative

- Summative assessment refers to the gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, it impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see whether certain areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines. Examples of summative assessments include:

  • a midterm exam
  • a final project
  • a paper
  • a senior recital

Rubric

- A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery.

Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.

VALUE rubrics

- VALUE (Valid Assessment of Learning in Undergraduate Education) is a campus-based assessment initiative sponsored by AAC&U as part of its Liberal Education and America’s Promise (LEAP) initiative. VALUE rubrics or scoring guides provide needed tools to assess students’ own authentic work, produced across their diverse learning progressions and institutions, to determine whether and how well students are meeting graduation level achievement in learning outcomes that both employers and faculty consider essential.

The VALUE rubrics include Inquiry and Analysis, Critical Thinking, Creative Thinking, Written Communication, Oral Communication, Quantitative Literacy, Information Literacy, Reading, Teamwork, Problem Solving, Civic Knowledge and Engagement – Local and Global, Intercultural Knowledge and Competence, Ethical Reasoning and Action, Global Learning, Foundations and Skills for Lifelong Learning, and Integrative Learning.

What is the difference between assessment and grading?

Assessment and grading are not the same. Generally, the goal of grading is to evaluate individual students’ learning and performance. Although grades are sometimes treated as a proxy for student learning, they are not always a reliable measure. Moreover, they may incorporate criteria – such as attendance, participation, and effort – that are not direct measures of learning.

The goal of assessment is to improve student learning. Although grading can play a role in assessment, assessment also involves many ungraded measures of student learning (such as concept maps and classroom assessment techniques). Moreover, assessment goes beyond grading by systematically examining patterns of student learning across courses and programs and using this information to improve educational practices.

Assessment for Improvement

- Assessment activities that are designed to feed results directly, and ideally immediately, back into revising the course, program, or institution with the goal of improving student learning. Both formative and summative assessment data can be used to guide improvements.

Curriculum Map

– Curriculum mapping is a process for collecting and recording curriculum-related data that identifies core skills and content taught, processes employed, and assessments used for each subject area.

In most cases, curriculum mapping refers to the alignment of learning standards and teaching—i.e., how well and to what extent a school or teacher has matched the content that students are actually taught with the academic expectations described in learning standards—but it may also refer to the mapping and alignment of all the many elements that are entailed in educating students, including assessments, textbooks, assignments, lessons, and instructional techniques.

For faculty, the process of curriculum mapping facilitates understanding of how courses are sequenced and “fit” together, and shifts the focus from “my course” to “our degree program.” Mapping also supports curriculum revision, as the activity often reveals curricular strengths, gaps, necessary and unnecessary overlaps, and needs (e.g., Do students have enough practice to achieve an outcome?).

For students, curriculum maps facilitate understanding of how (1) each course is intended to contribute to specific knowledge or skills of the program; (2) elective courses could be strategically selected to further strengthen knowledge or skills or to explore new areas of interest; and (3) work products could be selected for inclusion in portfolios or résumés.

For academic advisors, curriculum maps help to (1) focus discussions on students’ academic and professional goals and identify intersections of those goals with the curriculum; (2) identify opportunities for additional majors or minors that can be achieved without undue overload; and (3) determine where and how transfer credit might support students’ and curricular goals.

For deans, department or program heads, curriculum maps may (1) inform resource allocation (fiscal, human, technological, space); (2) identify potential growth or specialization areas; and (3) support Presidential Advisory Board, discipline-specific accreditation, or other reporting.

Comprehensive Program Review (CPR)

– The Comprehensive Program Review template was developed as a summative reporting vehicle for academic program review. This reporting vehicle is for use by University System of Georgia (USG) institutions and the system office in order to ensure adherence to Board of Regents Policy 3.6.3 Comprehensive Program Review and to enable consistency in executive level reporting to the Board of Regents, the system as a whole, and external constituents.

The goal of the reporting vehicle was to provide both standardization of reporting and institutional flexibility, with consideration of factors such as mission, program variability, level of degree and major, student and institutional inputs and outcomes, and academic unit composition.

Embedded Assessment

- A means of gathering information about student learning that is integrated into the teaching-learning process. Results can be used to assess individual student performance or they can be aggregated to provide information about the course or program. Assessment can be formative or summative, quantitative or qualitative.  Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).

Program Assessment

- Program assessment uses the department or program as the level of analysis. It can be quantitative or qualitative, formative or summative, standards-based or value-added, and used for improvement or for accountability. Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem? This might be assessed through a capstone project, by combining performance data from multiple senior-level courses, by collecting ratings from internship employers, etc. If a goal is to assess value added, some comparison of seniors’ performance to that of newly declared majors would be included.

Sources: