Borderline cases in COPUS

The COPUS tool for collecting data on classroom practices has been around for a few years now, and is getting lots of use, which is great.  (For a cheat sheet on what COPUS is, check out the LS-CWSEI blog post about it.)

As with any tool, there are always a few question marks as to how best to use it in a given context.  In our work looking at a bunch of biology classes with a bunch of different observers, we had to make some judgment calls about a few codes, in the name of consistency.

I don’t think there’s a standard set of these judgment calls out there, since the use and value of the tool really depend on what you’re using it for. Instructor feedback about a specific activity or practice? Comparing classes against each other? Baseline data? Counting the number of students who ask questions, and where they sit? Etc. You need to use the right tool for the right job, and you can always modify a tool to best suit your needs.

It was certainly valuable for us to talk about these cases, come to consistency on them, and write them down. Maybe the T.E.A. people would be interested, or are already doing this?

In case anyone else is interested, here’s what we decided on for our project.

(We also collected data with BERI to capture student engagement… though we weren’t as focussed on that metric for this particular project.)

COPUS notes for Biology Class Observation Project (over two semesters):

Timeline for setting up this document (and our project):

  • COPUS-observe a few classes in pairs (not sitting together)
  • Compare notes, making any changes as required, and noting where the changes took place. Keep copies of both the original and the modified observation data.
  • Meet with all observers to discuss those notes, and make this document.
    • Make a list of decisions (this document)
    • Criteria: consider how the data will be analyzed, and which distinctions will or won't be valuable.
  • COPUS-observe a few more classes in pairs, using these decisions/definitions (not sitting together; try to have each person observe with different people so we can calculate overall inter-rater reliability)
  • Using the un-modified observations from the 2nd round, calculate inter-rater reliability (IRR); a sketch of this calculation follows below.
  • If it is over a set value (e.g. 90-95%), then agree that one observer is sufficient for data collection. If not, repeat the process until an acceptable IRR is achieved.
  • Only use data collected once good IRR is achieved.

(More recent note – a better metric for inter-rater reliability is Cohen's kappa, but we hadn't seen it at the time.)
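In case it helps, here's a minimal sketch of the comparison in Python. It assumes each observer's record is a dict mapping a 2-minute interval index to the set of codes checked in that interval, and the code list is an illustrative subset; this is not our actual analysis script.

```python
# Percent agreement and Cohen's kappa between two observers' COPUS records.
# Assumed format: {interval index -> set of codes checked in that interval}.
# CODES is an illustrative subset of the COPUS codes, not the full list.

CODES = ["Lec", "FUp", "PQ", "AnQ", "CQ", "Ind", "CG", "WC", "MG", "1o1", "D/V", "W"]

def flatten(observation, n_intervals):
    """One observer's record as a 0/1 vector with one entry per (interval, code)."""
    return [1 if code in observation.get(i, set()) else 0
            for i in range(n_intervals) for code in CODES]

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Binary kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    p_a1, p_b1 = sum(a) / n, sum(b) / n          # how often each observer checks a code
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # agreement expected by chance
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Toy example: two observers, three 2-minute intervals
obs_a = {0: {"Lec"}, 1: {"Lec", "CQ"}, 2: {"CG", "MG"}}
obs_b = {0: {"Lec"}, 1: {"Lec"},       2: {"CG", "MG"}}
va, vb = flatten(obs_a, 3), flatten(obs_b, 3)
print(f"agreement = {percent_agreement(va, vb):.2f}, kappa = {cohens_kappa(va, vb):.2f}")
```

Kappa is usually the better number to report: two observers who both check Lec in nearly every interval will get high percent agreement almost by accident, and kappa discounts that chance agreement.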

Notes on COPUS coding borderline cases – our decisions:

Clickers

  • Check Ind when students are either instructed to work alone, OR when there is no relevant discussion before the poll stops.
  • Otherwise: Check CG
  • CQ: check until the second poll closes. Make a note if it's a confusing case.

Do not check Lec with FUp: Lec is not appropriate for follow-up material, even if the feedback is planned.

PQ and AnQ can be checked with FUp.

Lec – only for new material.

PQ/AnQ transitions to 1o1 when the interaction excludes the rest of the class.

WC – check this only for largely student-student interaction involving the whole class or a large group; it may be facilitated by the instructor, but not significantly contributed to by the instructor.

Instructors: for worksheet or lecture-slide questions, check nothing for the question itself (this is captured in the student data); for the instructor you still need to check 1o1, MG, etc. as appropriate.

Extensive animations in PowerPoint count as D/V (demo/video).

Review at start of class: Lec (plus PQ/AnQ if appropriate)

MG: if the prof is interacting with, or at least moving around and listening to, student groups. Can become 1o1 if the prof gets stuck with one group; if the length of time starts to feel awkward, switch to 1o1.

Co-opted codes:

  • Engagement-L for students: use this for a ‘drone response’ or if the instructor is asking only for a show of hands. This covers both instructor and student, so do not also check PQ or AnQ.
  • Engagement-H for any time TAs, peer tutors, etc. are up and actively involved in the class.

Other data to record (one way to keep these together is sketched after this list):

  • # of people you counted for BERI
  • # of students in the room (clicker count, or eyeball estimate)
  • Where you are in the room
  • Course #
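If you're collecting electronically anyway, it can help to keep these items alongside the interval-by-interval data. Here's a minimal sketch of one way to do that; the field names and structure are mine, not part of COPUS or BERI.

```python
# One per-class record holding the extra data above next to the interval data.
# Field names, structure, and the example values are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class ClassObservation:
    course: str                    # course #
    observer_location: str         # where you are in the room
    students_in_room: int          # clicker count or eyeball estimate
    beri_group_size: int           # # of people you counted for BERI
    copus_codes: dict = field(default_factory=dict)   # interval index -> set of codes
    beri_engaged: dict = field(default_factory=dict)  # interval index -> # engaged

obs = ClassObservation(course="BIOL 200", observer_location="back left",
                       students_in_room=180, beri_group_size=10)
obs.copus_codes[0] = {"Lec", "CQ"}
obs.beri_engaged[0] = 9
```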

Notes on BERI

  • BERI: count the # of engaged students, report it as e.g. “9E”, and record somewhere how many students were observed for BERI (a sketch of turning these counts into engagement fractions follows this list).
  • Collect BERI as a snapshot once every 2 minutes, not continuously over the whole 2-minute slot.
  • Make sure you pick a good spot so you can try to see 10 people. If you can’t see 10, just observe and record for the number you can see.
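For the analysis, those "engaged out of observed" pairs convert directly into an engagement fraction per interval. A tiny sketch; the record format here is an assumption, not part of BERI:

```python
# Turn per-interval BERI snapshots ("9E out of 10 observed") into fractions.
# Assumed format: a list of (engaged, observed) pairs, one per 2-minute interval.

beri_snapshots = [(9, 10), (7, 10), (6, 8)]

fractions = [engaged / observed for engaged, observed in beri_snapshots]
mean_engagement = sum(fractions) / len(fractions)
print([f"{f:.0%}" for f in fractions], f"mean = {mean_engagement:.0%}")
```

Recording the number observed matters because 9E out of 10 and 9E out of 15 are very different levels of engagement.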

General comments:

  • To be able to average across observers, everyone needs to start at the same time: the official start of class time (e.g. :00 or :30), whether or not class actually starts then. (A sketch of lining up intervals this way follows this list.)
  • Recommend everyone uses the online version; it's easier for consistency.
  • Try your best to make sure students can't see your screen.
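To make the "same start time" point concrete: if your tool records timestamps, you can bin everything into 2-minute intervals counted from the official class start, so different observers' records line up interval by interval. A minimal sketch; the timestamp handling is an assumption, not part of COPUS:

```python
# Map a clock time to a 2-minute COPUS interval counted from the official start.
from datetime import datetime

def interval_index(timestamp, official_start, minutes=2):
    """Interval 0 begins at the official start time, even if class starts late."""
    elapsed = (timestamp - official_start).total_seconds() / 60
    return int(elapsed // minutes)

start = datetime(2014, 9, 15, 9, 0)                        # class officially starts at 9:00
print(interval_index(datetime(2014, 9, 15, 9, 3), start))  # -> 1, i.e. the 9:02-9:04 slot
```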

New notes since starting observations – add your own if necessary/helpful

W (waiting) for students: we'll decide how to use this in the analysis later, so every time we check it, we include notes. When we choose W because students are restless or clearly done with the activity, double-check with a BERI count and record it as a second BERI value.