Bait-and-Switch: The COMPASS Writing Skills Placement Test

February 2, 2014

The COMPASS Writing Skills Placement Test (WSPT), published by ACT, is widely used by community colleges to place incoming students into English/Writing courses. Part of ACT’s COMPASS suite of placement tests, the COMPASS WSPT is a multiple-choice, computer-adaptive test. COMPASS tests are used by 46% of community colleges, according to one source (p. 2). At some colleges, the COMPASS WSPT operates as the primary instrument to place English students, though many use it as one of “multiple measures.”

With the current spotlight on developmental education, placement mechanisms like the COMPASS WSPT are increasingly being scrutinized. When a student is misplaced into a class that’s too easy or too hard, they face a needless obstacle to completing college. ACT reference materials market the COMPASS tests as predictive of student success in college courses, but two recent reports from the Community College Research Center (CCRC) cast doubt on the power of such standardized multiple-choice tests to predict student success.

Rather than examining how well the content of placement tests aligns with the content of college courses, both CCRC reports focus on large-scale statistical analyses that compare student performance on placement tests with performance in college classes. In this post, I consider a related, yet narrower, set of issues: what skills exactly does the COMPASS WSPT assess? How well do the skills it assesses align with how ACT markets the test and with the skills students need to succeed in their writing courses?

My answer: ACT’s online documentation repeatedly suggests that the COMPASS WSPT assesses examinees’ rhetorical skills and mechanical skills in equal measure. This impression is misleading. Close-reading the sample test questions, I show that the COMPASS WSPT is almost entirely a multiple-choice test of sentence-level mechanics; it barely assesses the higher-order skills essential to success in most writing classes.

The Impression Created by the ACT Website

Let’s examine how ACT describes the content of the COMPASS WSPT on its website. I found four such places:

1. The COMPASS Guide to Effective Student Placement and Retention in the Language Arts, which is geared towards faculty and administrators, states:

COMPASS Writing Skills Placement Test

The COMPASS Writing Skills Placement Test is designed to help determine whether a student possesses the writing skills and knowledge needed to succeed in a typical entry-level college composition course. Examinees are presented with a passage on-screen and are asked to read it while looking for problems in grammar, usage, and style. Upon finding an error, students can replace the portion of text with one of five answer options presented.

Writing Skills Test passages are presented to the examinee as an unbroken whole, with no indication of where errors are located. To accommodate the task for computer-based testing, the test passages are divided into a series of segments. Because examinees can choose to edit any portion of the passage, every part of the text is included within these segments, and no part of the text is contained in more than one segment. There is a test item for each segment of the text so that an item with five answer options will appear no matter where an examinee chooses to revise the text. Of the five answer options, option “A” always reproduces the original text segment. If the segment selected by the examinee contains no error, then the correct alternative would be option “A.” Allowing students to select and correct any part of the passage broadens the task from simple recognition of the most plausible alternative to a more generative error-identification exercise.

In addition to the items that correspond to passage segments, the COMPASS Writing Skills Placement Test has one or two multiple-choice items that appear after the examinee is finished revising the passage. These items pose global questions related to the passage.

COMPASS Writing Skills Placement Test items are of two general categories: usage/mechanics and rhetorical skills. Each of these general categories is composed of the subcategories listed below.

Usage/Mechanic Items

Items in this category are directed at the surface-level characteristics of writing, as exemplified in three major subcategories: punctuation, basic grammar and usage, and sentence structure.

Rhetorical Skills Items

Items in this category deal with misplaced, omitted, or superfluous commas; colons; semicolons; dashes; parentheses; apostrophes; question marks; periods; and exclamation points. (p. 2)

Note the two bold headings at the bottom, which suggest that mechanical skills and rhetorical skills are assessed in equal measure.

2. A similar implication is made throughout the parts of ACT’s COMPASS WSPT website geared towards a broader audience. For instance, this page states:

This test asks students to find and correct errors in essays presented on the computer screen. The test items include the following content categories:


Usage/Mechanics

  • Punctuation
  • Basic grammar and usage
  • Sentence structure

Rhetorical Skills

  • Strategy
  • Organization
  • Style
3. Likewise, this page states:

Writing Skills Placement Test is a multiple-choice test that requires students to find and correct errors in essays in the areas of usage and mechanics, including basic grammar, punctuation and sentence structure, and rhetorical skills, including strategy, organization and style. (colors added to illustrate equal emphasis)

4. Finally, the introduction to the packet of sample questions states:

Items in the Writing Skills Placement Test assess basic knowledge and skills in usage and mechanics (e.g., punctuation, basic grammar and usage, and sentence structure) as well as more rhetorical skills such as writing strategy, organization, and style. (p. 1, colors added to illustrate equal emphasis)

At the end of the first excerpt above, note a bizarre sleight-of-hand: here, “rhetorical” means punctuation, the sort of thing that belongs under the heading of “Usage/Mechanic Items.” The only thing “rhetorical” about punctuation is the assertion that it is.

In the subsequent three excerpts, rhetorical skills are conceptualized more vaguely, as “strategy,” “organization,” and “style.” Arguably, these three might qualify as rhetorical skills. But we’re left wondering: Strategy of what? Organization of what? Style of what? These could refer to re-arranging a couple words in a sentence, or re-conceptualizing the entire thrust of the essay.

It helps to operationalize the distinction between mechanical skills and rhetorical skills in a way that’s both clear and generally accepted by writing teachers. I’d posit the following:

Mechanical skills require writers to operate at the sentence-level, and not far beyond. These skills enable writers to make ideas intelligible through language, and make that language conform to the usage conventions followed by respected writers in published writing.

Rhetorical skills require writers to think beyond the sentence level and often beyond the text itself. Writers must consider how the meaning of one sentence relates to the meaning of the whole text. Further, they must consider the social context in which they are writing and the needs of their audience—as well as logos, ethos, and pathos.

As with many binaries, the distinction between the two can be hazy: Laura Micciche, for instance, posits that grammatical choices have a rhetorical component. But using my distinction, the next section will analyze actual test questions, with the understanding that some questions could be categorized as both mechanical and rhetorical.

The Reality of the COMPASS WSPT

I examined sample test questions for a sense of what the COMPASS WSPT actually assesses. My data comes directly from ACT. For the COMPASS WSPT, ACT publishes a booklet of Sample Test Questions.

How well do these represent actual test questions? The booklet’s preface assures examinees that “the examples in this booklet are similar to the kinds of questions you are likely to see when you take the actual COMPASS test.”

Of the 68 sample questions, just 7 uncontroversially assess students’ rhetorical abilities, as defined above. That’s 10%. These are the 2 – 3 “global” questions that follow each of the three passages. Whereas the remaining questions ask examinees to find and correct “errors,” the “global” questions ask examinees to consider the broader meaning and the extent to which language meets a higher-order goal. Here’s one such representative question:

Suppose the writer wants to show that lending programs similar to the one administered by the Grameen Bank have been widely accepted. Which of the following phrases, if added to the last sentence of the essay, would best achieve that goal?

A. to make credit available

B. over the years

C. around the world

D. to encourage development

E. with some variations (p. 9)

In fairness, of the remaining 90% of the questions, another 10% or so could, under a charitable definition, be classified as primarily sentence-level in scope but having a rhetorical component. Three such questions assess examinees on what the packet of COMPASS WSPT Sample Test Questions calls “judging relevancy” (pp. 24–26). In these, examinees must decide whether certain words are superfluous or essential to the passage. Other marginally rhetorical questions assess whether examinees can choose the most fitting transitional expression between two sentences.

Now consider a representative test item that purely assesses sentence-level skills. In the following, examinees must choose which segment is correct:

A. If one member will fail to repay a loan, the entire group is unable to obtain credit

B. If one member fails to repay a loan, the entire group is unable to obtain credit

C. If one member do fail to repay a loan, the entire group is unable to obtain credit

D. If one member is fail to repay a loan, the entire group is unable to obtain credit

E. If one member failing to repay a loan, the entire group is unable to obtain credit (p. 7, emphasis is mine)

This question focuses exclusively on verb forms and tense.

The vast majority (80–90%) of COMPASS WSPT questions are crafted similarly: examinees must choose the answer with the correct preposition, with the right punctuation placed between the right words, or with transposed words restored to grammatical order with the right suffixes. No deep thinking is needed. For many, the right answer could be picked by Microsoft Word’s grammar-checker. Read through the Sample Test Questions and see for yourself.

Why the Bait-and-Switch?

This bait-and-switch is striking, but no accident; ACT’s online documentation is permeated by the sort of measured wording, consistent style, and politic hedging that evinces a lengthy process of committee vetting. What’s happening is that ACT wants to make the COMPASS WSPT look more appealing to its target audience. Who is the target audience? Primarily, those who decide whether to purchase the COMPASS product—faculty and administrators at community colleges.

Consider their situation and needs. Decades ago, this audience might have eagerly embraced a placement test that primarily assessed sentence-level mechanical skills. But the values of English faculty in particular have shifted. First, today’s writing curriculum—from the developmental level to upper-division—focuses much more on higher-order concepts: rhetorical decisions, critical thinking skills, the writing process, and the social context of writing. Placement tests, accordingly, need to assess students’ abilities on these dimensions. Second, a consensus is emerging that accurate placement requires students to complete a writing sample evaluated by human eyes.

Placement tests that meet both criteria are commonly found at universities. But at community colleges, they are rendered less practical by budgetary constraints. So community college faculty are left seeking a compromise: a more economical multiple-choice test that assesses, at least in part, students’ higher-order skills. That’s the niche purportedly filled by the COMPASS WSPT.

With their heavy workloads, community college faculty and administrators are vulnerable to ACT’s bait-and-switch. How many actually take these tests themselves or analyze the sample questions? In my experience, most faculty have only a vague understanding of the content of their placement tests, if any at all (unless the test is developed in-house). When we debate placement policy, the debate too often gravitates toward a higher level of abstraction, focusing on pedagogical theory (multiple-choice test versus holistic writing sample) and statistical outcomes (how predictive the tests are of success), rather than on what the test questions actually assess.

English teachers would learn much if they sat down and took the high-stakes tests that determine where their students are placed. In fact, I recently took the reading and writing placement tests where I teach. The process enlightened me, giving me a much more practical perspective on the science—or rather, the art—of student placement.