Report of the 2017 UF Faculty Focus Group Study

Read what faculty shared about their assessment practices and what UF can do to improve institutional assessment processes.


Abstract

This report summarizes the findings of the Academic Assessment Committee’s 2017 study of faculty engagement with assessment at the University of Florida, the third in a series of studies conducted by the Office of Institutional Assessment. Faculty focus groups were convened in each of the university’s 16 colleges (N = 146). Field notes and recordings were analyzed using NVivo 11. Three themes emerged: (a) UF faculty value the assessment of student learning and the information it provides, and want to learn from and share assessment work with their colleagues across colleges; (b) certain conditions influence the faculty’s assessment methodologies; and (c) there are misconceptions about data requirements for regional accreditation. The report closes with recommendations for Academic Assessment Committee actions to address the findings.

Introduction

As a comprehensive learning institution, the University of Florida has as its core mission to “enable students to lead and influence the next generation and beyond for economic, cultural, and societal benefit” (University of Florida, 2016). At the time of this report, over 5,000 UF faculty accomplish this mission through their teaching, research, and service in 496 undergraduate, graduate, professional, and certificate programs. How students learn, and how faculty determine what students have learned, are key contributing factors to student achievement in these programs. Faculty plan for student learning and program effectiveness by establishing program goals and student learning outcomes, and they provide evidence of their students’ achievement and program effectiveness annually as part of the University of Florida Assessment System (Brophy, 2017; University of Florida, 2017a).

The Academic Assessment Committee is a joint committee that provides faculty oversight of the academic assessment process at the University of Florida (University of Florida, 2017b). In the interest of improving institutional assessment processes, the Office of Institutional Assessment engages in internal research projects designed to gauge the operationalization of institutional assessment and effectiveness in all University of Florida units. To that end, UF Assessment staff, in collaboration with the Academic Assessment Committee, designed a series of three studies to determine the degree to which faculty and staff engage in the assessment of student learning in the university’s programs. The first two studies focused on the academic assessment and accreditation coordinators in each college. This report summarizes the background, methodology, and results of the third study with the faculty. The report closes with recommended actions.

Background

If you don’t determine they’re learning, why are we here? - UF Faculty member

Each of UF’s 16 colleges has one or two individuals assigned to serve as its SACSCOC coordinators. The coordinators oversee SACSCOC accreditation data reporting in their colleges and serve as the colleges’ direct contacts with the Office of Institutional Assessment. In the first study, Das and Gater (2015) distributed a survey to the SACSCOC accreditation coordinators. They followed up with a second study in which they interviewed each accreditation coordinator. In these studies, they collected data on college-level assessment processes, specifically how the colleges (a) use the assessment system as a tool for planning and data reporting, and (b) manage assessment and effectiveness operations at the college level.

The assessment of student learning in university academic programs has been a longstanding practice for UF faculty. The institutionalization of student learning assessment and student learning outcomes, though, is somewhat contested terrain. The success of student learning outcomes in higher education has been studied (Judd & Keith, 2012), challenged (Shireman, 2016), and acclaimed (Dwyer, Millett, & Payne, 2006; Wehlburg, 2017), while regional accreditors require them for continued reaffirmation (Middle States Commission on Higher Education, 2007; Southern Association of Colleges and Schools Commission on Colleges [SACSCOC], 2012). Because regional reaffirmation is tied to students’ eligibility for federal financial aid, all academic programs at the University of Florida engage in the “assessment cycle”: they develop program goals, student learning outcomes, and assessments and other appropriate measures for these goals and outcomes; conduct the assessments; analyze the data; and use the results to modify and improve their programs.

Das and Gater (2015) found that the colleges engaged in institutional assessment and effectiveness work in diverse ways. The coordinators revealed that defining and collecting data for program goals and student learning outcomes were highly beneficial to some programs, and all agreed that assessing program goals and student learning outcomes is an integral element of any college- or department-specific accreditation criteria. Several key themes and practices emerged, leading to a set of recommended best practices for success:

  • Hierarchical networks of support and communication are critical to facilitate assessment processes.
  • A clear system for distributing information is important so that faculty report data and close the assessment loop in a timely manner.
  • Flexible training and professional development that fits faculty schedules is needed, with a variety of options such as written guides, online guides, video tutorials, and in-person meetings.
  • Regular engagement with campus leaders is important to build relationships, lines of communication, support, and trust.
  • Messaging and marketing should emphasize the value of assessment efforts for improving academic programs.

To follow up on these findings, in spring 2017 the Academic Assessment Committee developed and implemented the third study in the series – a set of interviews with faculty focus groups, one in each of the university’s 16 colleges. Because faculty are the core actors in the assessment of student learning, these focus groups provided an opportunity to gather data on how faculty engage in the assessment process in their courses and programs.

Delimitations

While all tenured and tenure-track faculty in each college received an invitation to participate, this study was limited to the faculty who volunteered and were available during the scheduled meeting times. Questions were limited to assessment practices. This report focuses on common themes that emerged across all of the colleges.

Methodology

The results of the Das and Gater (2015) studies revealed the need to obtain baseline data on faculty engagement with assessment. The Academic Assessment Committee developed two research questions that guided the study. These questions were:

  1. How are UF faculty engaged in academic assessment processes at the University of Florida?
  2. In what ways could this information lead to the modification and improvement of institutional assessment processes?

The nature of these questions led the committee to choose focus groups as the data collection method. Focus groups are effective for exploring topics of interest and for generating impressions of a process under study. Focus group participants are selected purposively because they can provide the information the researcher seeks, and groups are often homogeneous to promote discussion (Johnson & Christensen, 2012; Stewart, Shamdasani, & Rook, 2009).

For this study, the participant pool consisted of the tenured and tenure-track faculty at the University of Florida. Using an email list provided by the Office of Institutional Planning and Research, the Director of Institutional Assessment issued an email invitation to the faculty in each college along with a Doodle poll that provided a set of available meeting times. Faculty responded to the poll and self-selected their participation based on availability and interest. The 16 focus groups, which took place between February and April 2017, ranged in size from 5 to 10 participants, and 146 faculty participated in total. The SACSCOC coordinators arranged the rooms, and each group met in its own college.

The Academic Assessment Committee members developed the interview protocol for the focus groups and submitted it to the UF IRB02, which determined the protocol to be exempt (Protocol ID# 16U0312). The protocol began with a standardized introduction (see Appendix A) so that all focus groups started identically. There were four sets of questions organized into the following categories: instructor assessments, perceived value of assessments, assessment at the department/program/major level, and closing questions (see Table 1). The moderator presented the questions in order, but the emergent nature of the discussions led to additional questions and dialogue as needed to explore topics as they arose.

Faculty participants were given name tents with letters from the alphabet to identify them as they entered the focus group location. Faculty participants remained anonymous throughout the discussion and referred to each other by their assigned letters. Members of the Academic Assessment Committee, the Director of Institutional Assessment, and a staff member served as moderators for the focus groups. There were one or two moderators for each group. The moderator(s) recorded the focus group discussions with the participants’ permission, and took field notes. The focus group recordings and field notes were the primary sources of data used for analysis.

Table 1. Focus Group Questions


Instructor Assessments

  • How do you determine if your students are learning material in your course? [For example, surveys show that exams, papers, and capstone projects are commonly used. From here on, we will use the term assessment to mean what you do to determine whether students are learning.]
  • After you have gathered the data, what do you do with it besides using it for grading students?

Perceived Value of Assessments

  • Is there value in determining whether your students are learning in your course?
  • Are there specific assessment methods that add value to your teaching?
  • What value do these assessments add to ongoing revisions to the courses you teach?
  • Is there evidence that you can collect that you do not collect right now but that you would like to collect?
  • What about this do you value?

Assessment at the Department/Program/Major Level

(Moderator: poll the group on how many have experience with or understand their program/major assessment.)

  • So far we have discussed your assessments in courses. We recognize that you are offering high-quality programs and majors. We are interested in the outcomes – quality measures, not counts or outputs – that you gather at the department level. What do you use to tell the story of the quality of your program/major?

Closing Questions

  • What haven’t we asked today that you would like to talk about?
  • Based on this conversation:
    • What recommendations do you have for the Academic Assessment Committee?
    • What resources or training would you like to have?

Results

The 34 sets of field notes and recordings were loaded into NVivo 11 for analysis. Eight response categories emerged from the data coding. Table 2 presents these categories and their descriptions.

Focus group participants provided substantial evidence of the extent to which faculty engage with assessment at the University of Florida. Three primary themes emerged from the focus groups: (a) the value of assessment at UF; (b) influential conditions that shape faculty assessment; and (c) misconceptions about SACSCOC assessment reporting. The themes are described here.

Table 2. Data categories and descriptions

  • Assessment methods and processes: various assessment types used for the assessment of student learning
  • Challenges: issues that impede assessment or make it challenging
  • Concerns: areas that cause concern or are barriers to assessment that faculty would like to do
  • Context: factors that influence assessment that faculty cannot control
  • Data gaps: information that faculty would like to collect but cannot or do not
  • Needs: what faculty would like to have to facilitate their assessment processes
  • Use of results: the ways that faculty use the results of their assessments
  • Value: what faculty value about assessment


Theme 1: Assessment is valued

There is value in seeing students succeed, and assessment provides information that is used to re-examine student knowledge. - UF Faculty member

For most faculty, the value of assessment is directly associated with the standards of their field. They design their course syllabi to contribute to student learning that prepares students for the next step in their programs or careers, or to meet professional standards. Some believe that because students are reward driven, assessments can provide these rewards. UF faculty value the assessment of student learning and the information it provides. In every college, faculty clearly described the ways they collect and use student learning information. As one participant stated, “we learn how effective we are as instructors,” and their student learning information helps them to “make adjustments to the course.” Some faculty participants take advantage of Canvas’s item analysis information to modify and improve item performance on their exams. However, most faculty participants were not aware of Canvas’s potential to facilitate the collection of student learning data (see Smith, 2017, for ways Canvas can be used for data collection).

Faculty use assessment results for a variety of reasons. The most commonly mentioned use was to inform ongoing instruction, as described by this participant: “I know what they have mastered and what they haven’t, and can adjust my instruction to accommodate their needs.” The assessment data/instructional modification loop was a prevalent process across all colleges. For faculty who teach in studio or one-on-one instructional situations, day-to-day engagement with the individual student is a primary source of student learning information. As one participant disclosed, “Open dialogue is important. In studios, what makes students better is not necessarily based on an assessment, but on day-to-day engagement.”

            Faculty also expressed an interest in sharing assessment work across colleges and programs. They would like to know how their colleagues are assessing student learning, using the information, and modifying and improving their programs.

Theme 2: Influential Conditions

The type of assessment I use depends on the size of the class. - UF Faculty member

Across the colleges, the faculty participants described two primary conditions that shape the contexts in which they operationalize the assessment of student learning in their programs. Descriptions of these conditions, with sample faculty comments that illustrate the findings, follow.

Condition 1: Class size

Faculty across the colleges consistently reported that their choice of assessment methodology depends on the number of students in the course being taught and the student learning approaches designed for that course. For professors who teach large courses delivered primarily through lectures and electronic means, the primary assessment methods are multiple choice exams, quizzes, and homework. For faculty who (a) teach studio classes (for example, design), (b) provide one-on-one instruction such as individual lessons (such as art and music), or (c) guide students through research projects, interdisciplinary programs, graduate theses, and dissertations, individualized approaches are common. This faculty member describes the influence of class size on the types of assessments used to assess student learning and development:

          There are a myriad of assessments that can be used. There might be the traditional exam, multiple choice, fill in the blank, short answer, depending upon what is appropriate to the actual topic being discussed. There are also opportunities for individual assessment by the instructor as an expert in the field. - UF faculty member

There is a tension between what faculty want to do to assess student learning and what they feel they have to do because of class size. Professors who teach large classes often expressed that class size constrains their choice of assessment methodology.  One participant remarked, “The best way to measure student learning is through free response exams, but I can’t do this with 1,000 students in the class.” Faculty reiterated this across several colleges.  Another participant concurred: “I’m not happy with the exams having to be multiple choice. Students need lots of support, which is hard to do because of manpower in large classes.”

Condition 2: Disciplinary accreditation

In colleges where programs are accredited by disciplinary organizations (such as the Accreditation Board for Engineering and Technology [ABET], the Council for the Accreditation of Educator Preparation [CAEP], the Commission on Dental Accreditation [CoDA], the Liaison Committee on Medical Education [LCME], and others), the assessment of student learning is often required as part of the accreditation process. Additionally, some professional associations have developed assessment standards for their members.

Our disciplinary accreditor requires a set of national exams that all of our students must take. Why can’t we use these as outcome measures? - UF faculty member

For programs whose disciplinary accreditors require national or state examinations for professional licensure or certification, a common question was why these exams cannot be used as student learning outcome measures. Student learning outcome measures must be developed, administered, and graded by the faculty in the program where the student is learning. When faculty yield a program outcome to an external measure, they lose control of it: they do not develop, administer, or grade it. Faculty may read more about this at the Institutional Assessment website, http://assessment.aa.ufl.edu. An important distinction here is that external measures are appropriate for measuring program goals, but not student learning outcomes.

Some faculty in these programs expressed frustration with reporting student learning outcomes for their disciplinary accreditors in addition to the regional accreditor, SACSCOC. As one participant made clear: “There is a lot of repetition with SACSCOC reporting.” However, some faculty adapt the student learning outcome reporting required for disciplinary accreditation to meet reporting requirements for additional accreditors. There are successful instances of this throughout the campus.

Theme 3: Misconceptions about SACSCOC reporting

Focus group participants consistently revealed some misconceptions regarding student learning outcome and program goal data reporting for UF’s regional accreditor, SACSCOC. Program faculty submit these reports annually as part of UF’s assessment process (University of Florida, 2017a). Three misconceptions emerged: (a) all student learning data must be quantified; (b) UF’s academic assessment planning process limits program assessment to specific categories and types; and (c) there is a sense that the data faculty submit to Institutional Assessment goes nowhere. Each is described here.

Misconception 1: All student learning data must be quantified

For some participants, the value of assessment was undermined by the misconception that student learning data must be quantified, as one participant described: “Our faculty are very engaged in gathering anecdotal evidence, but push back with quantification of student learning information.” Student learning data does not need to be quantified for reporting. As noted earlier, faculty who can collect only quantitative student learning information feel that this constrains their desired assessment practices. Focus group participants consistently reported that some of the most valuable information they collect is qualitative, gathered from student interactions in various ways.

The data revealed that this misconception arises largely from the requirement that all academic programs provide a rubric or other guide for measuring the degree to which students demonstrate the faculty-established criteria used to assess student learning outcome achievement. Rubrics allow faculty to assess levels of achievement of pre-established criteria on multidimensional assessments, and some interpret these achievement levels as quantitative information. The levels themselves are criterion-referenced data, and rubric achievement levels do not form an interval scale; the distance between levels is not equidistant because the levels are somewhat subjective. Level descriptors are labels (numbers, words, or short phrases) that describe a specific degree of learning demonstration. As another faculty participant confirmed: “Subjective data cannot be quantified.” This is true. What can be reported, though, are the percentages of students who meet the faculty’s criteria for the student learning outcome of interest. These percentages provide a quantitative summary of student achievement; they are not meant to capture the qualitative information that faculty collect related to outcome achievement.
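To make this distinction concrete, here is a minimal sketch of the percentage summary described above; the rubric levels, success criterion, and scores are hypothetical and not drawn from any UF program:

```python
# Hypothetical rubric levels recorded by faculty for one student
# learning outcome (1 = beginning ... 4 = exemplary), one per student.
scores = [4, 3, 2, 4, 3, 1, 3, 4, 2, 3]

# Faculty-established criterion for success: level 3 or higher.
criterion = 3

# Percentage of students meeting the criterion -- the quantitative
# summary that is reported. It does not quantify the qualitative
# evidence behind each rubric judgment.
pct_meeting = 100 * sum(s >= criterion for s in scores) / len(scores)
print(f"{pct_meeting:.0f}% of students met the criterion")  # -> 70%
```

The percentage summarizes criterion attainment; the rubric’s level descriptors remain the substantive, criterion-referenced evidence of what students can do.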

Misconception 2: UF’s academic assessment planning process limits program assessment to specific categories and types

The criteria for SACSCOC are limited; I feel like my hands are tied. - UF faculty member

There is also a perception that academic program assessment data reporting must be limited to specific types of measures, such as standardized or scalable assessments that yield quantitative data, described by one faculty member as “short assignments that can be easily quantified.” Performance measures, observations, simulations, role-playing, portfolios, student interactions, projects, papers, and the like are all multidimensional assessment methods that yield valuable program student learning data. In programs where individualized student learning predominates, descriptions of the data collection process are acceptable for reporting. The percentage of students who meet the faculty’s criteria for successful outcome achievement is the primary quantitative data needed. Because all programs must provide outcomes in specific categories, there is a misconception that this limits assessment to those outcome categories. This frustration, captured in the quotation above, was echoed across several colleges. Board of Governors Regulation 8.016 (State University System of Florida Board of Governors, 2012) requires that all undergraduate programs at State University System of Florida institutions develop student learning outcomes for content, critical thinking, and communication, and the UF Graduate School has established three student learning outcome categories of content, skills, and professional behavior. However, no restriction limits programs to these categories; program faculty may include additional categories of outcomes if they choose to do so.

Misconception 3: The data disappears

Several faculty participants questioned where the student learning data that UF collects goes; there is no easily accessible location. The common question was “what do you do with the data we provide?” The data is housed in the university’s third-party software program, Compliance Assist!, which is not accessible to all faculty. The data is used as evidence that UF programs are establishing student learning outcomes, developing and administering assessments, analyzing the collected data, and then modifying and improving their programs based on that analysis (SACSCOC, 2012, Standard 3.3.1.1). The faculty’s concern about the use of the data is substantiated: at the time of this report, there is no generally available means for faculty to access this information.

Conclusions and Discussion: Context Matters

Assessment is an important responsibility that faculty take seriously. The contexts within which the assessment of student learning takes place are as varied as the educational environments from which they arise. The evidence is clear: context matters in all facets of the assessment of student learning at the University of Florida.

We designed this study to build on the Das and Gater (2015) findings. There were three purposes: (a) obtain baseline data on faculty engagement in the academic assessment processes; (b) modify and improve our institutional assessment processes so that they align maximally with the processes in which faculty engage; and (c) make institutional processes more relevant, efficient, and meaningful for the faculty. This study provided the baseline data needed to begin modifying and improving UF’s academic assessment processes. The closing sections of this report summarize the findings from the focus groups and provide a set of recommendations for the Academic Assessment Committee to consider.

Summary of Findings

The following is a summary of the study findings.

  1. Value. UF faculty value and use assessment for many purposes, chiefly to modify and improve their teaching and maximize student learning. Faculty would like to learn more about how other faculty assess their programs.
  2. Influential conditions. Class size and disciplinary accreditation influence the faculty’s choice of assessment methodology. The desire to collect information beyond the content knowledge that dichotomous-response exams provide is constrained by large class sizes. As a result, there is a tension between what some faculty would like to do to assess student learning and what they feel constrained to do by these factors.
  3. Misconceptions. The UF assessment planning and data reporting process is designed to capture student learning achievement across the variety of assessment methodologies that faculty employ. The faculty revealed some misconceptions about assessment planning and data reporting, specifically regarding the quantification of student learning data and perceived limitations on assessment outcomes and measures.
  4. Data access. Faculty are not clear on how the data they report is used at the institutional level, nor do they have ready access to it.

Recommendations for the Academic Assessment Committee

Table 3 presents a set of recommendations for consideration by the Academic Assessment Committee during the 2017-18 academic year to address the findings.

Table 3. Findings and Recommendations for the Academic Assessment Committee

Finding 1: Value – faculty want to share assessment work across colleges.

  • Continue the UF Assessment Conference, and develop an online mechanism for faculty to share their assessment work with others.

Finding 2: Influential conditions – class size and disciplinary accreditation.

  • Develop faculty workshops in conjunction with the Office of Faculty Development and Teaching Excellence on using Canvas assessment tools to facilitate data collection for multiple assessment methods.
  • Work with specific disciplines to maximize the use of student learning data collected for disciplinary accreditors in regional accreditation reports.

Finding 3: Misconceptions – quantification of student learning data; limitations on assessment outcomes and measures.

  • Develop online tools to clarify what can be reported.
  • Develop online tools to clarify the assessment types that can be used.

Finding 4: Data access – faculty are not clear on how the data they report is used at the institutional level, nor do they have ready access to it.

  • Develop a faculty-access view of student learning data reports, perhaps through visualization using Tableau.

The Academic Assessment Committee meets on the second Tuesday of each month at 3:00 p.m. in the President’s Conference Room, 236 Tigert Hall. The meetings are open, and faculty suggestions are welcome.

Appendix A: Focus Group Opening Statement

[Introduce moderator and co-moderator and state your affiliation and membership on the Academic Assessment Committee. Ask participants to introduce themselves.]

Moderator: Thank you for agreeing to take part in this focus group. We appreciate your willingness to participate. As part of the Academic Assessment Committee, we are conducting this focus group to inform institution-wide academic assessment efforts. We need your input and want you to share your honest and open thoughts with us so that we can gather actionable data that helps us define the Culture of Engagement here at UF. The Academic Assessment Committee applied for IRB approval for this work in 2016, and the IRB determined this to be exempt. The Protocol ID is 16U0312.

This committee serves to meet both the University of Florida commitment and the State University System of Florida requirements regarding the achievement of student learning outcomes and program goals.  The AAC provides faculty oversight of student learning and program assessment at UF, and reviews and approves all Board of Governors required Academic Learning Compacts.  This focus group is conducted as a complement to the assessment cycle that includes the preparation and review of student learning outcomes for all units. 

Our purpose is to obtain baseline data on faculty engagement in the academic assessment processes. The Academic Assessment Committee will use the results of our focus group discussions to modify and improve our institutional assessment processes so that they better align with faculty assessment processes. Our goal is to streamline UF’s institutional processes to make them more relevant, efficient, and meaningful for you.

For this discussion today, we will define ‘assessment’ as the collection and evaluation of student-learning data obtained from diverse sources in order to ascertain the degree to which students have achieved faculty-established outcomes. The process culminates when assessment results are used to improve subsequent student learning or program effectiveness. 

I’ve given each of you a copy of this definition. There is space for you to take notes during our discussion for your own use if you would like; these will not be collected.

1. We want you to do the talking. We would like everyone to participate. I may call on you if I haven’t heard from you in a while.

2. There are no right or wrong answers. Every person’s experiences and opinions are important. Speak up whether you agree or disagree. We expect and want to hear a wide range of opinions and we do not anticipate consensus, just sharing.

3. We emphasize that what is said in this room should remain here. You should feel comfortable sharing even if sensitive issues come up. Please don’t disparage another participant’s remarks, and let’s have just one speaker at a time.

4. The discussion will last for about one hour. Please silence your mobile phones.  Please give everyone the chance to express his/her opinion during the conversation. You can address each other if you like. We are only here to assist in the discussion.

5. We will record this session as we want to capture everything you have to say. We don’t identify anyone by name in our findings. When you respond, be sure not to mention your name. You will remain anonymous. Audio recordings will be summarized and the recordings secured by the PI, Dr. Tim Brophy. We can provide summary details once the study is complete.

6. We will begin with questions about course level assessment, and then move to questions about program level assessment.

Are there any questions?

References

Brophy, T. S. (2017). Case study: The University of Florida Assessment System. In T. Cumming, & M. D. Miller (Eds.), Enhancing assessment in higher education: Putting psychometrics to work (pp. 186-204). Sterling, VA: Stylus.

Das, R., & Gater, C. (2015, December). Assessing assessment: Fostering new energy to maintain momentum for institutional effectiveness. Presentation for the 2015 Southern Association of Colleges and Schools Commission on Colleges annual conference. Houston, TX.

Dwyer, C. A., Millett, C. M., & Payne, D. G. (2006, June). A culture of evidence: Postsecondary assessment and learning outcomes. Retrieved from Educational Testing Service: http://www.ets.org/Media/Resources_For/Policy_Makers/pdf/cultureofevidence.pdf

Johnson, B., & Christensen, L. (2012). Educational research: Quantitative, qualitative, and mixed approaches (4th ed.). Thousand Oaks, CA: Sage.

Judd, T., & Keith, B. (2012). Student learning outcomes assessment at the program and institutional levels. In C. Secolsky, & D. B. Denison (Eds.), Handbook on measurement, assessment, and evaluation in higher education (pp. 31-46). New York, NY: Taylor & Francis.

Middle States Commission on Higher Education. (2007, June). Student learning assessment: Options and resources. Retrieved from Middle States Commission on Higher Education: https://www.msche.org/publications/SLA_Book_0808080728085320.pdf

Shireman, R. (2016, February 25). The real value of what students do in college. Retrieved from College Completion Series: Part One: https://s3-us-west-2.amazonaws.com/production.tcf.org/app/uploads/2016/02/19105347/TheRealValue_RobertShireman.pdf

Smith, J. K. (2017). Quick guide: Canvas outcomes. Retrieved from Academic Assessment: http://assessment.aa.ufl.edu/Data/Sites/22/media/2017assessmentconference/canvas-outcomes-quick-guide.pdf

Southern Association of Colleges and Schools Commission on Colleges. (2012). Principles of accreditation: Foundations for quality enhancement (5th ed.). Retrieved from Southern Association of Colleges and Schools Commission on Colleges: http://sacscoc.org/pdf/2012PrinciplesOfAcreditation.pdf

State University System of Florida Board of Governors. (2012, January 19). Regulation 8.016 - Student learning outcomes assessment. Retrieved from State University System of Florida Board of Governors: http://www.flbog.edu/documents_regulations/regulations/8_016_StudentLearningOutcomes_final.pdf

Stewart, D. W., Shamdasani, P. N., & Rook, D. W. (2009). Group depth interviews: Focus group research. In L. Bickman, & D. J. Rog (Eds.), The SAGE handbook of applied social research methods (pp. 589-616). Thousand Oaks, CA: Sage.

University of Florida. (2016). Mission statement. Retrieved from UF undergraduate catalog: https://catalog.ufl.edu/ugrad/current/uf-mission/Pages/home.aspx

University of Florida. (2017a). Academic Assessment. Retrieved from Institutional Assessment: http://assessment.aa.ufl.edu/academic-assessment

University of Florida. (2017b). Academic Assessment Committee. Retrieved from Fora - University of Florida: http://fora.aa.ufl.edu/University/JointCommittees/Academic-Assessment-Committee

Wehlburg, C. M. (2017, March 28). Research and models of learning assessment in higher education: Keynote video. Retrieved from Academic Assessment: https://mediasite.video.ufl.edu/Mediasite/Play/ce1fe5c9d5064e4583becb9cc668b11a1d?catalog=297f05a1-19c0-4a67-90b1-07bff508b558

Acknowledgements

2016-17 Academic Assessment Committee members:

  • William Bauer, College of the Arts
  • Gail Childs, College of Dentistry
  • Margaret Fields, College of Liberal Arts and Sciences
  • Dennis Kramer, College of Education
  • Suzanne Murphy, College of Health and Human Performance
  • Amy Simonne, College of Agricultural and Life Sciences
  • Laura Spears, George A. Smathers Libraries
  • Catherine Striley, College of Public Health and Health Professions

The following individuals participated as moderators for the focus groups:

  • Gail Childs, College of Dentistry
  • Rajeeb Das, Institutional Planning and Research
  • Margaret Fields, College of Liberal Arts and Sciences
  • Amy Simonne, College of Agricultural and Life Sciences
  • Laura Spears, Smathers Libraries
  • Catherine Striley, College of Public Health and Health Professions

The following individuals assisted with the transcription of the data:

  • Kimberly Bagley, Office of the Provost
  • Rajeeb Das, Institutional Planning and Research
  • Ann Greene, Office of the Provost
  • Rebecca Holt, Office of the Provost