Wednesday, September 28, 2011

New resource page: Document analysis & observation

We are happy to announce that we have added a new resource page to the FLPEP website on "document analysis and observation techniques in language program evaluation." The page contains a list of how-to books and articles, example studies, and online advice.


Document analysis is often conducted to understand contextual information, such as the program or instructional context, official/public policies and plans, program updates, curriculum and syllabus theory/design, and so on. Common types of documents gathered in language program evaluation include course syllabi; instructor and curriculum handbooks; statements of mission, goals, and outcomes; and student enrollment and achievement records.


While document analysis can reveal official or stated views of a program, observation techniques can reveal what actually happens in the program. Observation is useful when you are interested in understanding program context, implementation, processes, experiences, and interactions. In language program evaluation, observation is frequently applied to what learners and teachers do in the classroom, but other foci of observation might include language assessment practices, teacher development or induction procedures, learner counseling, and so on. Document analysis and observation can be complementary when identifying gaps between what actually happens in the program and what is formally stated. 



Saturday, August 20, 2011

What we've been working on this summer

Every year, summer goes by so fast. Here in Hawaii, our new academic year is starting this coming week! Dr. John Norris is back from his one-year sabbatical, and we've been keeping ourselves busy with outreach work and resource building this summer.

From July 29th to 31st, we had a group of enthusiastic Middle Eastern language program (MELP) educators from various universities (not only US-based, but from overseas, too!) gather at the University of Texas at Austin to attend our three-day workshop on program evaluation and outcomes assessment (see our previous blog post). We had very energetic discussions on issues surrounding program evaluation in college programs, including ways to transform evaluation culture, strategies to take advantage of the external evaluation "wave" (we kept a surfing theme running throughout, since the facilitators were all from Hawaii!), and so on. Presentation materials and discussion summaries are all posted on our new resource page, so check it out!

John Davis (one of our collaborating staff) created a very useful guide on "Using surveys for understanding and improving foreign language programs". An abbreviated version of the content was delivered at the MELP workshop, and you may find his presentation slides to be useful as well.

Contact us for any updates on evaluation work in your program! We'll be happy to showcase your work in our blog. 

Thursday, August 11, 2011

Patton's new book "Essentials of U-FE"

Essentials of Utilization-Focused Evaluation (by Michael Quinn Patton) will be released this month from Sage Publications! 

The new book on U-FE is a concise summary of his popular Utilization-Focused Evaluation (4th edition). Patton takes the reader through the 17 steps of utilization-focused evaluation, from situational analysis (Step 1) to meta-evaluation of evaluation use (Step 17). Each chapter includes case study examples showing how the evaluation approach is applied in various program contexts.

Wednesday, July 13, 2011

Program evaluation workshop for Middle Eastern language educators

This month, we are organizing a three-day program evaluation workshop (July 29th--31st) at the University of Texas at Austin for the Western Consortium of University Centers of the Middle East. Over 40 educators from college Middle Eastern language programs will gather for this event.

The event is a collaboration among the National Middle East Language Resource Center, the Center for Middle Eastern Studies at the University of Texas at Austin, and the University of Hawaii National Foreign Language Resource Center.

We will be offering a variety of practical tools, examples, and evaluation principles at work in college foreign language education contexts. There will also be panel discussions and a breakout discussion session.

Here is a sneak peek of the event:
  • A keynote speech "High-value evaluation strategies in foreign language education" by John Norris
  • A survey development workshop by John Davis
  • Four evaluation and outcomes assessment showcase presentations from diverse language program contexts
  • Round-table discussion sessions on (a) externally mandated program review and (b) data-gathering methods for various types of outcomes. 
For details (summaries and schedule), go to: http://www.nflrc.hawaii.edu/evaluation/R_MELP.htm
The presentations and workshops will be videotaped and made available via NMELRC's website.

Claremont's Program Evaluation Professional Development Workshops

Yes, it's that time of year again! Between August 19th and 22nd, Claremont Graduate University is offering Professional Development Workshops on program evaluation (online and onsite). This year, they feature Michael Scriven's work at the whole-day symposium ("Future of Evaluation") on August 20th.

Registration: http://www.cgu.edu/pages/4729.asp
Workshop titles and descriptions: http://www.cgu.edu/pages/465.asp
The symposium titles: http://www.cgu.edu/pages/465.asp

A one-day workshop costs $50 for students and $75 for faculty.

Sunday, May 22, 2011

Tips: Increasing survey response rates

Getting a high response rate is always a concern when administering a survey. A high response rate provides a more accurate picture of the target population and allows you to draw meaningful conclusions.

The response rate is calculated as the number of completed surveys returned (surveys that are at least 80% complete are often counted as completed) divided by the total number of people you contacted. So if you ask 20 graduating students to complete an exit survey and 15 complete it, the response rate is 75% (see the quick calculation sketch after the list below). There are several strategies to increase the response rate:
  • Communicate the survey purpose, value, and how the results will be used.
  • Give respondents sufficient time to complete the survey (for an online survey, 7-14 days).
  • Make sure the survey is short, clear, logical, and easy to follow. Pilot-test the survey so the items and instructions are understandable to the potential respondents.
  • Send out reminders (thank those who have responded, show how many have responded, include the survey link, and remind respondents of the deadline).
  • Use existing opportunities to gather the target respondents (e.g., administer the survey in class, at a staff meeting, etc.).
  • Offer an incentive (e.g., a gift certificate).
  • Consider the timing of your survey, so you are not administering it when respondents are busy.
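
For readers who want the arithmetic spelled out, here is a minimal sketch (in Python) of the response-rate calculation described above. The function name and the 80% completion threshold are illustrative assumptions, not part of any official guideline.

```python
# Minimal sketch: computing a survey response rate.
# Assumption: a returned survey counts as "complete" if the respondent
# answered at least 80% of its items (adjust the threshold to your own rule).

def response_rate(completion_proportions, num_contacted, threshold=0.8):
    """completion_proportions: one value per returned survey, from 0.0 to 1.0."""
    completed = sum(1 for p in completion_proportions if p >= threshold)
    return completed / num_contacted

# Example from this post: 20 graduating students contacted, 15 fully completed surveys.
print(f"{response_rate([1.0] * 15, num_contacted=20):.0%}")  # prints 75%
```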



Tuesday, May 10, 2011

Heiland & Rosenthal (Eds.). (2011). Literary Study, Measurement, and the Sublime: Disciplinary Assessment

The Teagle Foundation has announced a free online book on outcomes assessment in literary studies. This edited volume by Heiland and Rosenthal brings together literary scholars, foreign language and English department faculty, and assessment experts to provide disciplinary perspectives on outcomes assessment.

Those of you engaged in humanities and liberal arts programs will find the book enlightening and informative. The book, consisting of 19 chapters, responds to questions such as:
  • "How do we accurately depict and assess humanities outcomes that are often perceived as sublime and ineffable?" 
  • "How can we localize assessment within the discipline?" 
  • "How do we ensure ownership of assessment and make assessment a collaborative and useful process?"

In conjunction with the publication of the book, a "National Symposium on Assessment in the Humanities" was held at Miami University on February 23rd and 24th, 2011. The papers presented at the symposium are scheduled to be made available online soon (according to the website). Stay tuned!

Friday, February 25, 2011

Evaluation event in Japan

February was an exciting month!

One of the FLPEP staff (Yukiko Watanabe) was invited to International Christian University (ICU) in Tokyo, Japan as a visiting scholar for one week to deliver program evaluation workshops and presentations for language programs, the writing center, and graduate seminars. Interested in what she presented and what she learned? Read on!

-----------------------------------------
Many ICU faculty and graduate students were enthusiastic to learn about program evaluation and accreditation practices in the United States. The one-week visiting scholar program was a very rewarding experience!

Below is a lineup of the presentations and workshops I gave during my visiting scholar week.
  1. Program evaluation 101: Getting started workshop (2 language programs)  
  2. Developing and evaluating a writing center
  3. Graduate seminar presentations
    - College-driven initiatives on building evaluation culture
    - A nation-wide evaluation needs and capacity survey study
  4. One-day program evaluation event open to the public
    - A 3-hour workshop on utilization-focused program evaluation
    - Panel presentations 
    - A discussion session 
Since I have so much to say about each event, I will only blog about the evaluation workshop I did for the two language programs today.

The two ICU language programs I interacted with separately are both going through major program reform (one in the middle of it, and one at the beginning). I often see program innovation and evaluation go hand in hand. When there is agreement to innovate a program, faculty are likely to be open to program changes made on the basis of program evaluation. And most likely, if you innovate a program, you want to learn how effective the innovation is (or was)! Agreement on program change/innovation makes it easier to gain buy-in from program staff to take on an evaluation project, once they see that evaluation is use-driven and based on internal needs. So it was very timely that my visit coincided with their innovation efforts. Since one of the programs was at the beginning stage of program reform, integrating program evaluation from the get-go can provide an empirical basis for making decisions about what to change, what to keep, and how to reform the program.

The workshop for these two programs aimed at building an understanding of how to conduct situational analysis and how to focus evaluation uses and questions. Of course, they both had their own unique programmatic issues, but what was interesting was that improving student learning outcomes and faculty collaboration were the two major evaluation themes that emerged from the discussion. In higher education, student learning outcomes assessment is a common theme (especially in the United States), but evaluating organizational collaboration is rare and challenging.

Judging from the literature on organizational management, the sine qua non of organizational collaboration is a shared vision/purpose and opportunities for co-construction. Some of the evaluative questions on faculty collaboration that immediately come to mind are: Do faculty have a shared understanding of the program goals and student outcomes? Do faculty have sufficient time for fruitful discussion enabled by a constructive meaning-making process? Are there any missing voices in the decision-making process? How is knowledge shared and stored within and between teams?

Unfortunately, we didn't have time to prioritize evaluation questions during the workshop, but if they decide to evaluate faculty collaboration in the future, it will be very interesting.

One of the challenges for me in putting together the workshop materials was translating evaluation concepts from English into Japanese. Assessment and evaluation are both "hyouka" (評価、ひょうか) in Japanese; we don't distinguish the two terms. My solution? For student learning outcomes assessment, I translated "assessment" as "調査" (ちょうさ, "inquiry/study") in order to avoid the common conflation of assessment with testing.

Here is a question to multi-lingual readers of this blog. How do other languages handle the two terms, "assessment" and "evaluation," and how are they translated?

Tuesday, January 18, 2011

Evaluation quality

"Evaluation quality" was AEA 2010's conference theme. Whenever I engage in program evaluation, I "try to" meta-analyze evaluation quality to improve evaluation practices. Some of the questions I ask in my evaluation context are:
  • Was the evaluation design appropriate to the given situation and intended users' needs? 
  • Were the evaluation findings put to use as intended? 
  • Did the intended users find evaluation findings and processes useful and insightful (and transformative)? In other words, did any learning from evaluation findings and processes happen? 
  • Did stakeholders' program theory improve? 
  • Was the facilitation of evaluation done in an ethical and responsive manner? 
I also refer back to the professional standards from time to time throughout the evaluation process. The Program Evaluation Standards and Guiding Principles for Evaluators are my friends when I facilitate evaluation.

At the opening plenary at AEA 2010, Eleanor Chelimsky talked about different evaluation quality arguments in three types of evaluation: evaluation for accountability, improvement, and knowledge-building. For example, in the case of accountability-driven evaluation (such as in Requests For Proposals), we often see evaluation quality criteria described as "rigorous" (in RFPs, rigor often means a randomized controlled trial or quasi-experimental design) and "objective" (i.e., an external evaluator conducting the evaluation). In improvement-oriented evaluation, rigor and objectivity in these senses may not necessarily be the quality criteria used.

What happens, then, when evaluation serves competing purposes and interest groups? As evaluators, how do we balance and negotiate quality criteria to serve different interest groups with different definitions of evaluation quality (as well as our own standards of practice as professional evaluators)? Is that negotiable at all? Can we get different interest groups to understand each other?