Bait-and-Switch: The COMPASS Writing Skills Placement Test

February 2, 2014

The COMPASS Writing Skills Placement Test (WSPT), published by ACT, is widely used by community colleges to place incoming students into English/Writing courses. Part of ACT’s COMPASS suite of placement tests, the COMPASS WSPT is a multiple-choice, computer-adaptive test. COMPASS tests are used by 46% of community colleges, according to one source (p. 2). At some colleges, the COMPASS WSPT operates as the primary instrument to place English students, though many use it as one of “multiple measures.”

With the current spotlight on developmental education, placement mechanisms like the COMPASS WSPT are increasingly being scrutinized. When a student is misplaced into a class that’s too easy or too hard, they face a needless obstacle to completing college. ACT reference materials market the COMPASS tests as predictive of student success in college courses, but two recent reports from the Community College Research Center (CCRC) cast doubt on the power of such standardized multiple-choice tests to predict student success.

Rather than examining how well the content of placement tests aligns with the content of college courses, both CCRC reports focus on large-scale statistical analyses that compare student performance on placement tests with performance in college classes. In this post, I consider a related, yet narrower, set of issues: what skills exactly does the COMPASS WSPT assess? How well do the skills it assesses align with how ACT markets the test and with the skills students need to succeed in their writing courses?

My answer: ACT’s online documentation repeatedly suggests that the COMPASS WSPT assesses examinees’ rhetorical skills and mechanical skills in equal measure. That impression is misleading. By close-reading the sample test questions, I show that the COMPASS WSPT is almost entirely a multiple-choice test of sentence-level mechanical skills; it barely assesses the higher-order skills essential to success in most writing classes.

The Impression Created by the ACT Website

Let’s examine how ACT describes the content of the COMPASS WSPT on its website. I found four such places:

1. The COMPASS Guide to Effective Student Placement and Retention in the Language Arts, which is geared towards faculty and administrators, states:

COMPASS Writing Skills Placement Test

The COMPASS Writing Skills Placement Test is designed to help determine whether a student possesses the writing skills and knowledge needed to succeed in a typical entry-level college composition course. Examinees are presented with a passage on-screen and are asked to read it while looking for problems in grammar, usage, and style. Upon finding an error, students can replace the portion of text with one of five answer options presented.

Writing Skills Test passages are presented to the examinee as an unbroken whole, with no indication of where errors are located. To accommodate the task for computer-based testing, the test passages are divided into a series of segments. Because examinees can choose to edit any portion of the passage, every part of the text is included within these segments, and no part of the text is contained in more than one segment. There is a test item for each segment of the text so that an item with five answer options will appear no matter where an examinee chooses to revise the text. Of the five answer options, option “A” always reproduces the original text segment. If the segment selected by the examinee contains no error, then the correct alternative would be option “A.” Allowing students to select and correct any part of the passage broadens the task from simple recognition of the most plausible alternative to a more generative error-identification exercise.

In addition to the items that correspond to passage segments, the COMPASS Writing Skills Placement Test has one or two multiple-choice items that appear after the examinee is finished revising the passage. These items pose global questions related to the passage.

COMPASS Writing Skills Placement Test items are of two general categories: usage/mechanics and rhetorical skills. Each of these general categories is composed of the subcategories listed below.

Usage/Mechanic Items
Items in this category are directed at the surface-level characteristics of writing, as exemplified in three major subcategories: punctuation, basic grammar and usage, and sentence structure.

Rhetorical Skills Items
Items in this category deal with misplaced, omitted, or superfluous commas; colons; semicolons; dashes; parentheses; apostrophes; question marks; periods; and exclamation points. (p. 2)

Note the two bold headings at the bottom, which suggest that mechanical skills and rhetorical skills are assessed in equal measure.

2. A similar implication runs throughout the parts of ACT’s COMPASS WSPT website geared toward a broader audience. For instance, this page states:

This test asks students to find and correct errors in essays presented on the computer screen. The test items include the following content categories:

Usage/Mechanics

  • Punctuation
  • Basic grammar and usage
  • Sentence structure
Rhetorical Skills

  • Strategy
  • Organization
  • Style
3. Likewise, this page states:

Writing Skills Placement Test is a multiple-choice test that requires students to find and correct errors in essays in the areas of usage and mechanics, including basic grammar, punctuation and sentence structure, and rhetorical skills, including strategy, organization and style. (colors added to illustrate equal emphasis)

4. Finally, the introduction to the packet of sample questions states:

Items in the Writing Skills Placement Test assess basic knowledge and skills in usage and mechanics (e.g., punctuation, basic grammar and usage, and sentence structure) as well as more rhetorical skills such as writing strategy, organization, and style. (p. 1, colors added to illustrate equal emphasis)

At the end of the first excerpt above, we witness a bizarre sleight-of-hand: here, “rhetorical” means punctuation, the sort of thing that belongs under the heading of “Usage/Mechanic Items.” The only thing “rhetorical” about punctuation is the assertion that it is.

In the subsequent three excerpts, rhetorical skills are conceptualized more vaguely, as “strategy,” “organization,” and “style.” Arguably, these three might qualify as rhetorical skills. But we’re left wondering: Strategy of what? Organization of what? Style of what? These could refer to re-arranging a couple words in a sentence, or re-conceptualizing the entire thrust of the essay.

It helps to operationalize the distinction between mechanical skills and rhetorical skills in a way that’s both clear and generally accepted by writing teachers. I’d posit the following:

Mechanical skills require writers to operate at the sentence-level, and not far beyond. These skills enable writers to make ideas intelligible through language, and make that language conform to the usage conventions followed by respected writers in published writing.

Rhetorical skills require writers to think beyond the sentence level and often beyond the text itself. Writers must consider how the meaning of one sentence relates to the meaning of the whole text. Further, they must consider the social context in which they are writing and the needs of their audience—as well as logos, ethos, and pathos.

As with many binaries, the distinction between the two can be hazy: Laura Micciche, for instance, posits that grammatical choices have a rhetorical component. But using my distinction, the next section will analyze actual test questions, with the understanding that some questions could simultaneously be categorized as mechanical and rhetorical.

The Reality of the COMPASS WSPT

I examined sample test questions for a sense of what the COMPASS WSPT actually assesses. My data comes directly from ACT. For the COMPASS WSPT, ACT publishes a booklet of Sample Test Questions.

How well do these represent actual test questions? The booklet’s preface assures examinees that “the examples in this booklet are similar to the kinds of questions you are likely to see when you take the actual COMPASS test.”

Of the 68 sample questions, just 7 uncontroversially assess students’ rhetorical abilities, as defined above. That’s 10%. These are the 2 – 3 “global” questions that follow each of the three passages. Whereas the remaining questions ask examinees to find and correct “errors,” the “global” questions ask examinees to consider the broader meaning and the extent to which language meets a higher-order goal. Here’s one such representative question:

Suppose the writer wants to show that lending programs similar to the one administered by the Grameen Bank have been widely accepted. Which of the following phrases, if added to the last sentence of the essay, would best achieve that goal?

A. to make credit available

B. over the years

C. around the world

D. to encourage development

E. with some variations (p. 9)

In fairness, of the remaining 90% of the questions, another 10% or so could be classified as primarily sentence-level in scope but having a rhetorical component, under a charitable definition. Three such questions assess examinees on what the packet of COMPASS WSPT Sample Test Questions calls “judging relevancy” (pp. 24–26). In these, examinees must decide whether certain words are superfluous or essential to the passage. Other marginally rhetorical questions assess whether examinees can choose the most fitting transitional expression between two sentences.

Now consider a representative test item that purely assesses sentence-level skills. In the following, examinees must choose which segment is correct:

A. If one member will fail to repay a loan, the entire group is unable to obtain credit

B. If one member fails to repay a loan, the entire group is unable to obtain credit

C. If one member do fail to repay a loan, the entire group is unable to obtain credit

D. If one member is fail to repay a loan, the entire group is unable to obtain credit

E. If one member failing to repay a loan, the entire group is unable to obtain credit (p. 7, emphasis is mine)

This question focuses exclusively on verb forms and tense.

The vast majority (80 – 90%) of COMPASS WSPT questions are crafted similarly: examinees must choose the answer with the correct preposition, with the right punctuation placed between the right words, with transposed words in grammatical order, or with the right suffixes. No deep thinking needed. For many, the right answer could be picked by Microsoft Word’s grammar-checker. Read through the Sample Test Questions and see for yourself.
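
Word’s checker isn’t easily scriptable, but as a rough stand-in, here’s a minimal sketch that runs the five answer options above through the open-source LanguageTool checker (my choice of tool, not anything ACT endorses). No guarantee it flags every distractor, but it makes the point concrete:

```python
# A rough stand-in for Word's grammar-checker: run each of the five
# answer options above through LanguageTool and report which come back
# clean. Requires: pip install language_tool_python (and Java for the
# local LanguageTool server).
import language_tool_python

options = {
    "A": "If one member will fail to repay a loan, the entire group is unable to obtain credit",
    "B": "If one member fails to repay a loan, the entire group is unable to obtain credit",
    "C": "If one member do fail to repay a loan, the entire group is unable to obtain credit",
    "D": "If one member is fail to repay a loan, the entire group is unable to obtain credit",
    "E": "If one member failing to repay a loan, the entire group is unable to obtain credit",
}

tool = language_tool_python.LanguageTool("en-US")
for letter, text in options.items():
    matches = tool.check(text)  # list of flagged issues, empty if clean
    verdict = "clean" if not matches else f"{len(matches)} issue(s) flagged"
    print(f"{letter}: {verdict}")
tool.close()
```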

Why the Bait-and-Switch?

This bait-and-switch is striking, but no accident; ACT’s online documentation is permeated by the sort of measured wording, consistent style, and politic hedging that evinces a lengthy process of committee vetting. What’s happening is that ACT wants to make the COMPASS WSPT look more appealing to its target audience. Who is the target audience? Primarily, those who decide whether to purchase the COMPASS product—faculty and administrators at community colleges.

Consider their situation and needs. Decades ago, this audience might have eagerly embraced a placement test that primarily assessed sentence-level mechanical skills. But the values of English faculty in particular have shifted. First, today’s writing curriculum—from the developmental level to upper-division—focuses much more on higher-order concepts: rhetorical decisions, critical thinking skills, the writing process, and the social context of writing. As such, placement tests need to assess students’ abilities on these dimensions. Second, a consensus is emerging that accurate placement requires students to complete a writing sample evaluated by human eyes.

Placement tests that meet both criteria are commonly found at universities. But at community colleges, they are rendered less practical by budgetary constraints. So community college faculty are left seeking a compromise: a more economical multiple-choice test that assesses, at least in part, students’ higher-order skills. That’s the niche purportedly filled by the COMPASS WSPT.

With their heavy workloads, community college faculty and administrators are vulnerable to ACT’s bait-and-switch. How many actually take these tests themselves or analyze the sample questions? In my experience, most faculty have only a vague understanding, or none at all, of the content of their placement tests (unless the test is developed in-house). When we debate placement policy, the debate too often gravitates toward a higher level of abstraction, focusing on pedagogical theory (multiple-choice test versus holistic writing sample) and statistical outcomes (how predictive the tests are of success), rather than the specific details of what the test questions actually assess.

English teachers would learn much if they sat down and took the high-stakes tests that determine where their students are placed. In fact, I recently took the reading and writing placement tests where I teach. The process enlightened me, giving me a much more practical perspective on the art of student placement.

Three Fun Videos on Grammar

In a prior post, I discussed how to keep your sentence-level instruction fresh and fun. You can also break up the usual classroom routine with some YouTube videos on grammar topics. As a bonus, videos appeal to students with varied learning styles.

Here are my three favorites:

Victor Borge’s Phonetic Punctuation

A student of mine showed me Borge’s video when we were discussing the differences between written and spoken English. I had been pointing out that writers who write how they talk tend to mix up different punctuation marks, since punctuation marks all sound the same—like silence.

Borge’s comedic routine leads us to a similar point much more cleverly. He starts from the premise of a spoken language where each punctuation mark is pronounced with its own distinct onomatopoetic flamboyance. From there, it just gets goofier.

The shtick had me laughing so hard that at first I overlooked Borge’s questionable implication that written language prevents miscommunication better than spoken language. Most writing teachers would take issue with this implication, especially after trudging through a particularly bewildering stack of student essays.

 

Schoolhouse Rock’s Conjunction Junction

This Schoolhouse Rock animation is a classic. In fact, I can hardly finish my lesson on conjunctions without some student singing the Conjunction Junction refrain.

The catchy, repetitive tune succinctly explains the function of conjunctions. By today’s standards, the animation is clunky, but students get a kick out of that too.

As a teacher, I appreciate that this video gives students another way to conceptualize how the pieces of sentences fit together—like boxcars in a train. As a linguist, I instinctively want to point out the inaccuracies of this metaphor for sentence structure, but by the time the video finishes, many of my students look like they’re ready to start dancing!

 

College Humor’s Grammar Nazis

On the topic of metaphors, this College Humor video extends the metaphor that people who self-righteously correct your grammar resemble Nazis. This parody of Quentin Tarantino’s Inglourious Basterds addresses the troublesome details of English usage, including the “dangling modifier” and the “double negative,” as well as the case marking of conjoined personal pronouns (“me and her” versus “she and I”).

This video can be used in a variety of ways. It offers a good jumping-off point for distinguishing important issues of usage from the distraction of prescriptive “rules.” It also raises the issue of why people have such Nazi-like zeal in their beliefs about issues of usage. Of course, logically we all know that a slip in linguistic usage differs fundamentally from a real atrocity like the Holocaust. But why do some get more irked by linguistic slips?

Since the dialogue unfolds quickly, it helps to transcribe key exchanges onto the board. From here, the usage issues can be examined and teachers can address the pseudo-logic that motivates many of the prescriptive “rules.”

Warning: the video ends with graphic violence that’s not appropriate for all classrooms, but that part can be skipped without loss.

Review of Joseph M. Williams’s “The Phenomenology of Error”

I recommend reading Joseph M. Williams’s The Phenomenology of Error as a companion piece to Patrick Hartwell’s Grammar, Grammars, and the Teaching of Grammar. Both readings form the foundation for thoughtful discussions of the specifics of sentence-level pedagogies. Just as Hartwell questions what “grammar” means, Williams questions what linguistic “error” means, and what epistemologies and methodologies underlie labeling an expression an error. Williams and Hartwell help us see that “error” and “grammar” are often used in a sloppy, untheorized way, which forces us to lump disparate items into the same broad category.

Joseph M. Williams, author of The Phenomenology of Error.

Williams begins by comparing linguistic errors with social errors. This comparison lets us view error not in the usual product-centric perspective, where error exists on the page of a text, but in a transactional perspective, where error is socially situated within a flawed transaction between writer and reader. At the same time, Williams points out a hole in the analogy between social error and linguistic error: social errors can cause big problems; linguistic errors largely cluster into the domain of the trivial.

Williams views the common methodologies of defining error (and the rules that demarcate error) with skepticism, for several reasons:

1. One common methodology has researchers survey people about whether a given expression contains an error. Such surveys, Williams believes, are flawed. The question itself is leading. It entices us to read more self-consciously, and self-conscious readers over-report perceived errors.

2. We report our own linguistic habits inaccurately. How we profess to use the language differs from how we actually do. As evidence, Williams cites several prominent handbook authors whose attested usage contradicts their own prescriptions for usage (sometimes in the same sentence!).

3. In determining error, we tend to appeal too trustingly to the authority of a handbook or a teacher.

4. Regardless of our methodology, no one can ever agree on what things constitute grammatical errors.

How do we address these issues? Williams begins by contrasting two ways of reading: how we read when we hunt for errors (the way many teachers read student essays), versus how we read when we read for content (the way we read experts’ writing). Williams believes that if we read the second way, we can develop a formal classification of at least four types of rules:

1. Those we notice if followed and if violated.

2. Those we notice if followed but not if violated.

3. Those we don’t notice if followed but do if violated.

4. Those we never notice, regardless of whether followed or violated.

This last sort strikes me as an interesting category—vacuous errors printed in some handbooks but with no psycholinguistic reality. Each person might categorize any given rule into different categories. That’s expected. Crucially, when we read for content, not every rule (or “rule”) will enter into our consciousness.

Williams’s categorization of rules could be further elaborated in the way suggested by Hartwell’s five definitions of “grammar.” For any rule posited, what forces are said to motivate it?

• Does the rule differentiate “standard” written English from less prestigious dialects?
• Does it help to enhance rhetorical style?
• Is it a core part of the grammar of all dialects?
• Does it distinguish native speakers from ESL learners?
• Is it some grammarian’s pet-peeve?

Williams prescribes a big change for teachers and language researchers. Regardless of what “experts” posit as error, teachers and researchers should focus their attention on the sorts of errors that rudely interpose themselves when we read for content, rather than defining error by appealing to outside authority. (At the end of the essay, Williams makes the much celebrated revelation that his own essay embodies this principle: he has inserted numerous errors into the article, errors which most readers will—on their first read—overlook.)

At the end, Williams concedes that his proposal might prove futile. Why? We get more satisfaction from hunting for errors and chastising supposed linguistic transgressors. Grammar Nazism and the “gotcha!” approach to language satisfy us more than merely noting what jumps out to us on a non-self-conscious reading.

Three decades after this article was first published, Williams’s proposal has carried more influence than he could have predicted. For one, linguists and psycholinguists have developed even more sophisticated methodologies for carefully assessing various shades of grammaticality. Linguists search corpora of actual speech and writing to see which usages are attested (Google enables anyone with an internet connection to do a crude version of this sort of research). Psycholinguists rely on furtive research techniques to gauge grammaticality, such as cameras that track readers’ eye movements, reaction-time tasks, and even brain imaging.
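
For a flavor of the crude, do-it-yourself version of corpus research, here is a minimal sketch using NLTK’s Brown corpus; the disputed pair “different from” versus “different than” is my example, not Williams’s:

```python
# Crude attested-usage research: count how often two disputed variants
# appear in the Brown corpus. Requires: pip install nltk
import nltk
nltk.download("brown", quiet=True)

from collections import Counter
from nltk.corpus import brown
from nltk.util import bigrams

# Build a frequency table of every two-word sequence in the corpus.
counts = Counter(f"{a} {b}".lower() for a, b in bigrams(brown.words()))

for variant in ("different from", "different than"):
    print(variant, "->", counts[variant], "occurrences")
```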

At the same time, the grammar Nazism of prior generations has faded somewhat from the collective consciousness. Consider three pieces of evidence:

1. The specific prescriptive rules that Williams discusses throughout his essay seem dated. In fact, when I recently assigned this essay to my advanced composition students, they were confused because they knew nothing about these rules.

2. Two of the most influential language commentators in the popular media—Geoffrey Nunberg and Grammar Girl—base their analysis and usage advice not on cocksure pronouncements of correctness or dogmatic appeals to authority, but on careful historical research and corpus research that takes into account an impressive range of linguistic subtleties.

3. Every writing teacher I’ve met who was trained in the past three decades takes a non-dogmatic approach to sentence-level rules and error.

But one force will always be working against Williams’s proposal: native-speakers’ over-confidence in what they know about their language. As native speakers, we are swimming in the English language. And we’ve been using it since we were toddling. So any native speaker can easily authorize themselves to wear the hat of the grammar Nazi, and inveigh against whatever “error” so happens to bug them.

Although I embrace Williams’s rejection of the handbook’s authority when it comes to my own writing, my principal critique of Williams’s article stems from this fact: Williams taught at The University of Chicago. His classrooms were composed of the country’s elite students. The sentence-level needs of his students share little in common with those of the under-prepared students that most of us teach. I’m guessing that Williams’s students wrote relatively clean, sophisticated sentences and found it easy to navigate the subtleties and ambiguities of English usage.

Under-prepared students don’t cope as well with these complexities. A legitimate argument can be made that they benefit from the authoritarian clarity and structure of the rigid prescriptive rules that characterize handbooks. In this view, handbook rules function as a necessary evil that serves a purpose at a certain stage in writers’ development, like the five-paragraph essay. Williams probably lacked the perspective to appreciate this.

Ten Ways to Keep Grammar Relaxed and Fun

February 19, 2013

To too many teachers and students, the term “grammar” is synonymous with “boredom.” Further, Patrick Hartwell has suggested that teachers use grammar instruction to assert power over students.[1]

But it doesn’t have to be so.

I was recently asked how I keep my sentence-level instruction relaxed and fun for students. Here are ten ways:

Did you know that ancient Greek manuscripts contained no punctuation? Be thankful English isn’t like that.

1. Don’t play the drill sergeant.  Teachers easily default into drill-sergeant mode when discussing grammar, trying to explain every detail with confident authority. I avoid this. For one, most rules (or “rules”) aren’t as clear-cut as is suggested by the cocksure writers of handbooks. Things change over time. When we look carefully, we see countless exceptions and countless areas of controversy in the language where attested usages disagree and where respected writers also disagree. Yes, you should generally avoid starting a sentence with “and,” but who cares if you do it every now and then?

2. Ask students to read a paragraph without any punctuation. You can take this to the extreme: no capitalization and no spacing between words! Not surprisingly, students struggle with the reading. But this struggle helps them understand that punctuation wasn’t invented for English teachers to torture their students; it serves a real purpose for readers. (I sometimes accompany this activity with a picture of an ancient Greek manuscript, which shows that the convention of no punctuation was once widely accepted.)

3. Discuss slang and neologisms. Recently, when a discussion of parts of speech moved to articles, I not only gave the standard examples (“a,” “an,” and “the”), but I also added “hella.” (It’s the way youth in northern California make “many” superlative.) When we arrived at verbs, I mentioned “chilax,” and asked a knowledgeable student to define it for the class. When we talked about verbing nouns, I mentioned the act of “Tebowing.” When students hear these examples, they light up.

4. Make fun of silly prescriptive “rules.” These “rules” were invented by 18th-century grammarians who worried that English was a degenerate version of Latin sullied by “false syntax.”[2] The classic examples include the “rule” against splitting an infinitive and the “rule” against ending a sentence in a preposition, both modeled on Latin grammar. Yes, in Latin and the Romance languages, you truly can’t end a sentence this way or split an infinitive (because it’s one word). It’s unattested. But English isn’t Latin. It’s not even a Romance language. So the “rule” against ending a sentence in a preposition makes as much sense as applying to English the patterns of Sanskrit or Swahili.

5. Contrast the conventions of school writing with texting. This is a subject where students have so much to say. Most students are keenly aware of the difference, especially when it comes to spelling and punctuation. I ask them about the impacts of texting on their writing. Students are shocked to find out that—contrary to what many assume—texting probably won’t destroy their language skills.

6. Question what we assume about people based on their linguistic habits. These assumptions relate to one’s morals, intelligence, and  manners—as pointed out by Patricia Dunn and Kenneth Lindblom. I ask students if these assumptions are based in logic, prejudice, or both. Again, students have tons to say about this rich topic for discussion, in part because many have themselves been judged based on their linguistic habits.

7. Make fun of the ridiculousness of language. Every language, when carefully examined, contains patterns that are the antithesis of intelligent design, as I’ve written in this post. For instance, we drive on a “parkway” and park on a “driveway.” Huh? Also, English very logically uses the same suffix to pluralize nouns as it does to make present-tense verbs agree with third-person singular subjects. Why? Because.

8. Use memorable or goofy example sentences. Long ago, my teachers used goofy examples to prove grammatical points that still stick in my head. These sentences featured death metal and violent zoo animals. Too often, we default to sentences about Dick and Jane. Yawn. The best examples are ones you’ve designed in advance, rather than generated on the spot. Quotes of politicians putting their feet in their mouths work well. So do sentences with pop-culture references. I’ve written about good example sentences in this post and also in this one. A good pair of example sentences often illustrates a point much better than a long-winded technical explanation.

9. Play the typo game. This game reverses the usual power dynamic: usually the teacher catches student errors. For the typo game, the students catch the teacher’s errors. Whenever the teacher makes a typo on the chalkboard or a handout, the first student to bring it to the teacher’s attention gets a point. At the end of the term, the top point-getters receive extra credit. The typo game helps students see that yes, even English teachers make mistakes, and it teaches them to shed their paranoia about the tiny mistakes we all make and instead focus on what’s important.

10. Admit what you don’t know. Just like in psychology, astrophysics, or medicine, the study of language contains many mysteries and idiosyncrasies that defy easy explanation. Some questions about grammar I truly don’t know how to answer without research. For instance, when students ask me whether certain compound words are written as two words, one word, or a hyphenated word, I often confess that I don’t know, that more than one way might be accepted, and that we could use Google to research what actual writers are doing.


[1] Patrick Hartwell. 1985. Grammar, Grammars, and the Teaching of Grammar. In Cross-Talk in Comp Theory: A Reader. Second Edition. 2003. Edited by Victor Villanueva. P. 228.

[2] Brock Haussamen. 1997. Revising the Rules: Traditional Grammar and Modern Linguistics. Pp. 14 – 19.

Is Reading Aloud the Secret to Proofreading?

November 7, 2012

Recently I began re-teaching myself the trumpet, after a brief break of twenty years. I fancy no ambitions of being the next Miles Davis, only the ambition of unwinding between grading essays, planning lessons, and responding to the usual student emails about their grandparents—dead and dying.

Trumpet, Mouthpiece, and Bucket Mute.

How quickly old skills come back! After a few days, I was lighting up the house with the brassy tones of Kumbaya and When Johnny Comes Marching Home. As is, the trumpet is a loud instrument. Our house’s hardwood acoustics amplified things to the point where my wife politely insisted I shove a practice mute in the horn.

At the local music shop, I spoke with a clerk whose main qualification was a Jim Morrison haircut. He couldn’t answer my questions about which mute to buy. Instead, he stood baffled by the store’s selection of mutes, contemplating them the way a stoned teenager contemplates the rainbow reflection off a DVD. Annoyed, I purchased a practice mute with a midrange price, and vowed never to return.

A mute works like a car muffler. This mute’s cork seal lodges into the bell of the horn. Air escapes only from two holes drilled into the mute, each the diameter of a BB. Even when I blew hard, only the faintest sound escaped, tinny and muffled. Against the noise from the street and the ambient hum of household electronics, I could barely hear myself play. Gone was the enjoyment.

Gone also was my feedback loop. I couldn’t hear which notes I was flubbing and which I was nailing. And this meant I couldn’t figure out what I needed to work on improving.

A similar feedback principle holds with writing. Typically, the writer’s feedback loop comes from re-reading what they’ve written, or getting the response of peers, tutors, or teachers. But how often are the words of a student’s essay actually read aloud?

I’ve begun to wonder whether we are asking students to do something akin to playing a horn plugged with a mute. What do students miss when their writing remains silent? When we read aloud the words we write, the language becomes not just textual but acoustic. We hear the music of language. And there’s something different, something richer and more moving, about how we process language we hear.

The crucial way that written and spoken language differ:

Most fundamentally, our facilities with spoken language are intuitive and inborn. In fact, the psychologist/linguist Steven Pinker has referred to our facility with spoken language—rightly—as The Language Instinct. We all have been speaking our native tongue from our earliest years, with no explicit instruction required, and we are geniuses at it before we reach ten. (If you don’t believe me, try teaching a dog or a chimp to understand or speak any sentence that comes out of a kindergartner’s mouth.)

Written language, though related to spoken language, is something else entirely. It’s against nature. It’s something that we produce only after many years of rigorous instruction in the classroom. Even then, many people never get very good at it.

Evidence also suggests humans’ facilities with language are linked to our facilities with music. For one, all cultures have spoken language and music, and just as with language, all normal individuals possess extraordinary talent with music. (If you don’t believe me, try teaching a kindergartner to hum a tune, and then try teaching a chimp or a dog to hum the same tune). Even people with severe intellectual disabilities or brain damage often have remarkable abilities with music and spoken language.

The link between language and music appears to run deeper. Neuroscientist Aniruddh D. Patel argues that both spoken language and music share a hierarchical structure and both “overlap in important ways in the brain” (674). Other psycholinguistic research from the Max Planck Institute shows that when listeners heard a musical sequence with a dissonant chord, brain imaging recorded neural activity in some of the same regions where we see neural activity if a person hears an ungrammatical sentence.

Confession: I myself struggle with proofreading.

When I’ve browsed back through prior blog posts, I’ve noticed them blighted by more than a few mechanical errors. (The irony of the writing teacher making grammatical errors on a blog about grammar teaching doesn’t amuse me as much as it should; I’m a perfectionist.) Before those posts went up, I had proofread them obsessively, and then had a colleague proofread them once more for good measure. What happened?

We both proofread silently.

After the incident with the trumpet mute, I switched to proofreading aloud. I sometimes feel like a lunatic talking to myself, but I catch three times as many errors in a single read-through.

Was I on to something that could help my students?

I had always instructed students on proofreading strategies, but the results too often left me disappointed. I covered the handbook’s advice on proofreading (the sort of conventional wisdom that’s become obligatory for handbook authors) and reminded them throughout the semester to budget ample time in their writing process for proofreading.

In my experience, many teachers take a product-centric approach to proofreading (“These are the errors to look for”), rather than the process-centric approach I favor (“These are the steps to go through, and this is how much time it takes”). Many students followed my advice; many others forgot to budget the necessary time. Still, too many essays—even ones where students put in the time—read like marvels of shoddy proofreading.

I recently suggested that my developmental students try proofreading aloud. When students turned in their first essays, I asked for a show of hands to see how many followed this advice and how many found it helpful. About half of the class proofread aloud and about a third of the class found it helpful. Several students vocally endorsed proofreading aloud. But the striking evidence emerged from the final product—essays much more mechanically polished than usual for this point in the semester.

Of course, this evidence falls short of scientific, but it suggests a future experiment: measure what happens when two otherwise similar classrooms of students are given different guidance about whether to proofread aloud or in silence.
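
For what it’s worth, here’s a minimal sketch of how the outcome data from such an experiment might be compared, assuming per-essay error counts were tallied for each class; the numbers and the two-sample t-test are my own illustration, not an actual study:

```python
# Hypothetical analysis of the proposed experiment: compare per-essay
# error counts from a class told to proofread aloud against a class
# told to proofread silently. All numbers are invented for illustration.
from scipy import stats

errors_aloud = [3, 5, 2, 4, 6, 3, 2, 5, 4, 3]    # read-aloud class
errors_silent = [7, 6, 8, 5, 9, 6, 7, 8, 5, 7]   # silent class

# Two-sample t-test: is the difference in mean error counts significant?
t_stat, p_value = stats.ttest_ind(errors_aloud, errors_silent)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```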

Proofreading aloud helps a lot, but I don’t expect it to be a panacea, for the following reasons:

1. It obviously won’t work with in-class writing.

2. English words are spelled in ways that correlate only loosely with the phonetic sequences that issue from our vocal tracts, so it won’t necessarily help with certain spelling errors (including homophones and anti-phonetic spellings).

3. Some students’ teachers have told them to put commas wherever a pause seems natural—a gross oversimplification. Many speakers randomly pause, only because they hesitate in thought. Also, many speakers briefly pause after they finish pronouncing the grammatical subject of a sentence, especially a long subject, even though it’s widely accepted as ungrammatical to separate the subject and verb with a single comma.

4. Punctuation errors are tough to catch reading aloud, especially errors where students are confounding commas and ending punctuation. Both of these sound the same—like silence. Apostrophes, likewise, are silent.

5. Some students, when they read aloud, only loosely scan the lines they read, glossing over written errors and pronouncing what they think they see, rather than focusing on the letters on the page. This problem is exacerbated when students proofread at the computer, where the optics encourage this “scanning.”

However, proofreading aloud does especially help in certain situations. It helps students notice sentences where the syntax muddles in circles, the sentence rambles out of control, or the different phrases don’t quite fit together as a complete sentence. It makes choppy transitions more salient. It also works better when students track the word they’re pronouncing with a pencil, in order to prevent “scanning” past errors.

Why do we catch more errors when proofreading aloud?

Jan Madraso suggests that when students proofread aloud, it relieves the burden on their short-term memory and it forces them to proceed more slowly (32 – 33). I would add, as mentioned earlier, that working with spoken language is easier, since it’s more natural, more intuitive, and hard-wired in the human mind.

Then we should consider the role of prosody (that complex pattern of stress, rhythm, and intonation that voice synthesizers struggle to replicate). When we read aloud, we are forced to map onto the words a prosody. A large body of linguistic research shows that the prosodic structure of a sentence depends heavily on its syntactic structure. So if the syntactic structure of a sentence is malformed, sprawling, or difficult to parse, when you try to pronounce it, your tongue gets tied.

Another body of linguistic research shows that prosody depends on a variety of semantic and pragmatic factors about the discourse. For instance, stress patterns change depending on whether words represent items that are new to the discourse or familiar to the discourse. Similarly, we use a special intonation when we contrast two elements with one another, convey sarcasm, read the items in a list, or transition to a new subject. Thus, when a reader can easily map a prosody onto words, it signals that the text is comprehensible.

Epilogue

In the end, I exchanged that practice mute for a “bucket mute” that muffled the sound much less. Practicing the trumpet grew more enjoyable.

The experience also taught me to appreciate the sound of writing. Written words, by comparison, seem more lifeless. I now ask students to read excerpts from texts aloud in class. It appeals to the auditory learners, and brings the classroom alive with the music of language.

Four Ways Texting Enhances Students’ Literacy, and One Way it Hurts it

September 16, 2012

In my prior post, I discussed why one study fails to convince that texting is hurting the grammar skills of middle schoolers, and I challenged the apocalyptic prediction that texting will destroy the English language.

In the current post, I take a position that runs even more contrary to conventional wisdom. I’ll point out four ways texting actually enhances students’ literacy, and one way it hurts it.

1. Texting Improves Audience Awareness

Andrea Lunsford conducted research that showed that emerging digital technologies have given college students a better awareness of their audience. Knowing your audience and how to meet their needs is one of the many keys to being a successful writer—whether you’re writing a memo for work or you’re a student trying to figure out what makes an A-paper in the eyes of a particular teacher.

With text messages, the sender is especially aware of the needs of their recipient. If they suspect that the recipient doesn’t know what “lmao” or “yolo” means, then most texters wisely choose something else. In this way, using textisms is no different than a professional choosing whether to use the jargon of their field.

2. Texting Teaches Students Concision

In all genres, writers must balance the need to express themselves economically (with as few words/letters as possible) against the need to express themselves with accuracy. These two constraints usually operate at cross purposes. Genres such as scholarly research writing favor accurate expression over concision, whereas others, such as haiku, place a premium on economy of expression. Since texting also heavily values economy of expression, students who text should be expected to learn the necessity and power of brevity—as Shakespeare put it, “the soul of wit.”

3. Using Textisms Improves Phonological Awareness

Some evidence shows, counter-intuitively, that students who regularly use textisms actually learn to spell and read better. A 2009 study by Beverly Plester et al. found that 10- to 12-year-old children who had a higher ratio of textisms to total words in their texts tended to do better with word reading, vocabulary, and phonological awareness. Similarly, a 2011 study by Clare Wood et al. found that students’ use of textisms at the start of the school year was able to predict their spelling performance at the end of the school year.

How can using non-standard spellings help students improve with standard spellings? Self-consciously manipulating standard spellings enhances their phonological awareness—their understanding of the ways in which written letters relate to the sounds of spoken language. Phonological awareness helps students not just spell with greater accuracy but also decode unfamiliar words in readings and comprehend what they read more fully.

4. Texting Provides More Reading/Writing Practice

Don’t forget that just a few decades ago, for most people writing was something that happened primarily when a teacher required it. A narrow segment of the population went into white-collar professions that required writing. Those outside of the workforce, or in blue-collar jobs, wrote infrequently, if at all, once they left school. Teachers know how rusty student writers get over a 12-week summer break; imagine the same rust accumulating over the course of one’s adult life.

As Andrea Lunsford puts it, we’ve never had a generation of youth like today’s, where authorship has spread to the masses. Youth today of all walks of life write constantly outside of school—email, social media, texting, etc. Don’t expect them to stop as they age. Even if it’s not formal school writing, such constant practice with writing has real benefits for their overall literacy skills. Summarizing the research, Beverly Plester et al. note that one factor “reliably associated with reading attainment is exposure to the printed word” (147).

The Real Danger: Texting as a Classroom Distraction

Most teachers have had that student—the one who sits towards the back of the classroom, eyes focused downward towards the smartphone buried in their lap. They text away, thinking the teacher doesn’t see the busy thumbs underneath their desk.

Smartphones can introduce a huge distraction into the classroom. When you’re fiddling with your phone, you can’t learn what’s being taught. One Wilkes University study found that 91% of college students admit to texting during class time. In a composition classroom with 20 to 30 students, students have to work harder to hide their texting than in a large lecture hall, but it still happens.

How should college teachers deal with this? It’s tempting to take the tone of the anti-texting fascist on day one, sternly warning students of the consequences of not turning off their electronics before they enter the classroom. And while these rules must be made clear, as the semester progresses, students will test these waters.

I think of cell-phones less as the cause and more as the symptom of a separate problem—students not being engaged by the teaching. In other words, if my students are pulling out their phones, I might need to find a better way to engage them.

The Wilkes University study points out the importance of how the student desks are configured in the classroom, and whether the teacher is focused on the blackboard or on interacting with students. In my experience, this is correct. I disallow students from sitting in the back rows of the classroom, where it’s easy to hide their texting. My students also spend much of their class time engaged in discussions or working in small groups, where it’s harder to text inconspicuously.

Yet I still catch the occasional student texting in class. When I see it, I’ll conspicuously stop what I’m doing and personally ask them if they have a question. I might say they looked a little puzzled. They usually get the message (pun intended).

Sorry, but I’m not Convinced Texting is Destroying English Grammar

September 14, 2012

Is texting hurting the grammar skills of middle-schoolers?

The Journal New Media and Society, which contains Cingel and Sundar’s study.

“Yes,” says a recent study by Drew P. Cingel and S. Shyam Sundar (hereafter: C & S) in the journal New Media and Society. C & S studied the texting habits of 6th, 7th, and 8th graders, and found that when students sent and received texts more frequently, it correlated significantly with poorer scores on a test of grammar. Further, the frequency with which students sent texts with nonstandard spellings correlated significantly, and to a greater degree, with poorer scores on the grammar test. Interestingly, the frequency with which students sent texts only with nonstandard capitalization or punctuation (independent of spelling) did not correlate to a statistically significant degree with how they did on the grammar test.

Of course, when scientific research like this gets into the hands of journalists, the results are depressingly predictable: news feeds overflow with insta-reporting that ignores the prior research on the topic, elides the researchers’ methodology, uncritically repeats the study’s results, and sexes up the most startling conclusions. Such reporting resonates most when the take-home message aligns with what the public already believes: “kids these days” are behaving slothfully, and English is decaying.

In fact, when we scrutinize C & S’s study, the negative link they draw between texting and grammar/literacy skills unravels. Here’s why:

Good Grammar doesn’t Equal Good Writing

Implicitly, C & S take a narrow view of what makes for good writing—good grammar. They could have measured students’ grades in their writing classes, the holistic quality of a sample of students’ writing, or any number of other measures. Instead they chose to use a multiple-choice grammar assessment (and not a well-conceived one, as I discuss below).

Good grammar is one of many components of effective writing, but not anything like the most important. I’ve taught plenty of students who can manufacture grammatically flawless prose that lacks any semblance of organization or meaningful thought.

Questionable Statistical Analysis

The linguist Mark Liberman points out a number of serious flaws in C & S’s methodology and their statistical analysis. I won’t repeat them all here, but I do want to focus on Liberman’s critique of C & S’s statistical analysis—which is damning (in the most literal sense). Most notably, Liberman finds that the effect of nonstandard texting on students’ performance on the grammar test was quite weak, less than the effect of a student’s grade level.

Age of the Students Studied

C & S acknowledge that texting is a different genre of writing than school writing, and propose that students should be taught to register-switch between the two (14)—an uncontroversial prescription. But C & S fail to consider the relationship between the age of the students studied and their ability to register-switch, or how this relationship may have influenced the results of their experiment.

Recall that according to Liberman, C & S’s results show that students’ grade level had a stronger effect on how they did on the grammar test than their texting behavior. We could interpret this to mean that the 8th graders did better on the exam because they have cumulatively received more writing instruction. Or consider a somewhat complementary explanation: perhaps the 8th graders also did better because, with age, they’ve gained skill at register-switching.

Register-switching is a skill that improves as students mature and become more socially and meta-linguistically aware. In my college writing classes, the older students demonstrate the greatest skill at switching between different dialects/registers of English, while the students fresh out of high school tend more to struggle with it. Whenever I receive an email in the wrong register that says something like “hey prof can u send me the hw i missed?”, it always comes from one of my younger students.

Perhaps texting’s (mild) impact on students’ performance on the grammar assessment disappears as students get older. What if C & S conducted a similar experiment on high school or college students? I hypothesize they’d find little to no correlation between how much a student texts and how they perform on a grammar test. In fact, a 2010 study by M.A. Drouin found that the frequency with which university students texted actually correlated positively with higher spelling and reading fluency. By high school or college age, most students should have grown acutely aware of the differences between the conventions of texting, standard written English, and the other varieties of English.

Poor Design of the Grammar Test

The students C & S studied completed a brief multiple-choice grammar test. Many aspects of the test design left me puzzled. Amongst other problems, the test evinces a fuzzy conception of what “grammar” means, and a bizarre conception of how the nonstandard linguistic features of textisms might influence students’ skills with English mechanics. C & S didn’t think through the test’s details carefully enough.

First, though, a couple of issues show a general sloppiness with details. I would argue—as a trained linguist and an English teacher—that questions #2, #13, and #15 plausibly permit more than one correct answer, and will thus needlessly confuse students and muddy the results. Next, C & S call it a 22-question test, but in the appendix, the test is only 20 questions long. (Was their proofreader distracted by their Twitter account?)

Anyways, here are the 20 questions:

Cingel & Sundar’s grammar assessment: 20 questions or 22?

Before we proceed, let’s think carefully about the linguistic features of text messages, and how they deviate from standard written English. In his book Txtng: The Gr8 Db8, the linguist David Crystal identifies the following features of textisms (37 – 52), which the toy sketch after this list makes concrete:

  • Pictograms and logograms: “be” –> “b” or “kisses” –> “xxx”
  • Initialisms: “laughing out loud” –> “lol”
  • Omitting Medial Letters: “difficult” –> “difclt”
  • Other Nonstandard Spellings: “sort of” –> “sorta”
  • Shortenings: “difference” –> “diff”
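
To see the categories in action, here is a toy Python lookup keyed by Crystal’s own examples; the table’s contents come straight from the list above, but the classifier itself is just my illustrative scaffolding:

```python
# Toy illustration of Crystal's textism categories, keyed by his own
# examples from the list above. A real classifier would need far more
# than a lookup table.
TEXTISMS = {
    "b":      ("be",                "pictogram/logogram"),
    "xxx":    ("kisses",            "pictogram/logogram"),
    "lol":    ("laughing out loud", "initialism"),
    "difclt": ("difficult",         "omitted medial letters"),
    "sorta":  ("sort of",           "other nonstandard spelling"),
    "diff":   ("difference",        "shortening"),
}

def classify(token):
    """Return (standard form, category) for a token, if known."""
    return TEXTISMS.get(token.lower(), (token, "standard/unknown"))

for word in "lol that was sorta difclt".split():
    standard, category = classify(word)
    print(f"{word!r} -> {standard!r} ({category})")
```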

How common are these features? In one Norwegian study Crystal cites, only 6% of text messages contained abbreviations (105). This figure strikes me as low, but certainly not all texters use these sorts of abbreviations. We should note also that text messages frequently omit punctuation marks and capitalizations.

Crucially, all of these differences fall in the domain of orthography—not syntax. I know of only one significant way in which the syntax of texting differs from standard written English: text messages can elide certain function words that speakers can infer:

“Do you want to go to the game?” –> “want to go to game?”

Similar to text messages, this sign elides certain function words and it contains non-standard punctuation and capitalization. Does anyone blame this sign for a decline in literacy?

But these sorts of elisions pre-date text messaging. They also occur in informal speech (how many people actually speak in complete sentences?), as well as in street signs, restaurant menus, and instructional manuals.

Compare the nonstandard linguistic features of textisms with the mechanical issues assessed by C & S’s test:

  • 9 questions test verb inflection issues, such as agreement and tense (#1, 3 – 6, & 9 – 12)
  • 8 questions test students on punctuation and/or capitalization (#13 – 20)
  • 2 questions test students on the spelling of homophones (#7 & 8)
  • 1 question tests students on pronoun choice (#2).

C & S do not explain why they’ve chosen to test these students on these mechanical issues as opposed to any others, except that they wanted to test students on grammar issues all the students had previously been taught in school. This rationale strikes me as strange. With language development, all students know infinitely more than their teachers have taught them explicitly.

C & S’s test doesn’t strictly test grammar (if you take “grammar” to mean syntax); it primarily tests punctuation, capitalization, spelling, and verb form. While one might reasonably expect students who use textisms to struggle with how to spell certain words or how to punctuate and capitalize correctly, C & S propose no theory of why textisms would interfere with students’ ability to properly inflect verbs or choose pronouns, even though about half the test assesses these abilities.

What does “grammar” even mean to C & S? They commit the mortal sin of literacy researchers—identified by Patrick Hartwell—of not specifying what exactly they take “grammar” to mean. They assume that “grammar” is some monolithic entity, and that all grammar errors are equal. Crucially, C & S don’t provide results that show whether errors on certain types of questions on the grammar test correlate with certain types of texting habits. They only measure students’ overall score on the test. In this way, their grammar test acts as a coarse-grained tool, one which isn’t founded on any particular theory of grammar, grammatical miscues, or the linguistic features of text messages.

In a more carefully designed study, researchers would differentiate more thoughtfully between types of grammatical errors students might commit, and how these relate to the conventions of text messaging.
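
As a sketch of what that finer-grained analysis might look like, here is a hypothetical per-category correlation, with invented subscores and texting frequencies standing in for real data:

```python
# Sketch of a finer-grained analysis: correlate each error subcategory
# with texting frequency separately, instead of using one overall score.
# All numbers are invented for illustration.
from scipy.stats import pearsonr

texts_per_day = [5, 40, 12, 80, 25, 60, 3, 95, 30, 50]
subscores = {
    "verb inflection":            [9, 8, 9, 7, 8, 8, 9, 7, 8, 8],
    "punctuation/capitalization": [8, 6, 7, 4, 6, 5, 8, 3, 6, 5],
    "homophone spelling":         [2, 2, 1, 2, 2, 1, 2, 1, 2, 2],
}

# One correlation per subcategory, rather than a single global score.
for category, scores in subscores.items():
    r, p = pearsonr(texts_per_day, scores)
    print(f"{category}: r = {r:.2f}, p = {p:.3f}")
```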

An Alarmist View of Emergent Technology

There’s a long tradition of people feeling threatened by emergent technologies, as Adam Gopnik points out in a 2011 New Yorker article. We constantly see them as the thing that’s going to make everything in the world fall apart. Gopnik lists many examples of people predicting the destructive impacts of everything from “horse-drawn carriages running by bright colored posters” to “color newspaper supplements.” Even Plato’s Socrates worried in the Phaedrus dialogue that the technology of—get this—writing would make people forgetful, and give the masses the illusion of possessing wisdom.

This alarmist view operates on steroids when we consider how technology will impact language and literacy. Few complaints have a longer history than the complaint that our language is decaying. English still hasn’t collapsed, yet the scapegoat keeps changing to fit the historical moment—young people, immigrants, pop culture, or slang. C & S have gone looking for the next scapegoat:

“[Routine use of textisms] by current and future generations of 13 – 17 year-olds may serve to create the impression that this is normal and accepted use of the language and rob this age group of a fundamental understanding of standard English grammar” (2).

Their negative attitude surfaces too in their loaded language. They begin their abstract by writing that

“the perpetual use of mobile devices by adolescents has fueled a culture of text messaging” (1). (emphasis mine)

And they later write that

“techspeak has crept into the classroom” (13). (emphasis mine)

Cue the sinister music!

Nowhere in the article do they even consider that texting might enhance students’ school literacy. They presuppose that texting technology is inherently detrimental, and then run an experiment to test that proposition. If one assumes that every new technology damages literacy, then everywhere we look, we’ll see the supposed damage.

In doing so, C & S overlook the advantages students gain from texting. As Mark Liberman points out, the authors fail to cite relevant research in the field that generally finds students’ texting has a positive relationship with their literacy skills. Further, the literature review in M.A. Drouin’s 2010 texting study summarizes that empirical studies show “mixed results” when analyzing the relationship between how frequently students text and how it impacts their literacy, and they show no significant negative relationship between using textisms and students’ literacy (69).

It remains to be seen how texting impacts literacy. Some of those changes may end up being negative, and some positive. But when evaluating emergent technologies, the interesting question is how potential negative impacts measure up against the positive.

Clearly, this area of inquiry remains in its infancy. And we should humble ourselves knowing how difficult it is to conduct solid quantitative research into issues of education and literacy. This sort of research is dogged by the same problems that dog most quantitative research in the social sciences, such as countless confounding variables that are impossible to control and disagreements over which outcomes to measure. On top of that, digital technologies are still evolving, and most empirical results are embedded in the historical and demographic contexts of the students studied, and are thus difficult to generalize from.

In my follow-up to this post, I’ll cover ground not covered by C & S: I’ll point out four real ways in which texting probably helps students grow as writers, and one way it can really hurt them.