Sunday, November 28, 2010

Evaluation 2010 Conference: What is the conference like?

This year was my first time attending and presenting at the American Evaluation Association's conference. Let me share my experience as a first-timer while my memory and excitement are still fresh.

Evaluation 2010, an annual conference hosted by the American Evaluation Association, was held in San Antonio, TX, between November 5th and 13th. The conference had over 2,500 attendees from various sectors (education, government, health, community programs, etc.). The first 2.5 days were capacity-building workshops, followed by a 3.5-day conference. What is unique about this conference is that it's not only academics presenting their work. Evaluators, academics, funders (people from foundations), and policy makers all attend. The result is an amazing synergy and dynamic.

There were over 35 concurrent sessions, so you can imagine how hard it was to choose which to attend. Moreover, there is a variety of session types you need to know about: demonstrations, panels, expert lectures, skill-building workshops, roundtables (presentation + discussion), think tanks (breakout discussions), and poster sessions. Be aware that some of the panel and multipaper sessions were put together by the conference organizers, and a few of them had an odd combination of papers in one session. There are no abstracts in the conference handbook, so you have to guess what the papers and sessions are about from their titles (AEA recommends searching the abstracts online before you attend the conference). I ended up following specific SIG/TIG strands, so I didn't need to go through 35 session titles during the breaks.

My general impression of the conference is that evaluators are good at relating to people and are good listeners. That's what they do, professionally, when they go into programs, right? They listen and try to understand the context. Naturally, evaluators are good at (or trained at) providing constructive comments, sharing their ideas, and creating a collegial network.

Observing how people interact in sessions made me wonder about evaluator traits, because the AEA conferencing culture seems slightly different from where I come from (Applied Linguistics/TESOL). One of the sessions I attended was, in fact, a discussion of evaluator competencies. What makes an evaluator a good evaluator? Social science research skills? Theoretical and practical knowledge and skills for conducting evaluation, gained through professional development or in graduate school? Some of the audience in the session agreed that interpersonal/communication skills and personality make a difference. I wondered what the results would be if we administered the Myers-Briggs to evaluators.

Conference highlight? The champagne toast with everyone in the auditorium at the closing remarks!!

Stay tuned for some highlights from the sessions I attended in the next blog post.
