
Archive for the ‘Reviews of Books and Articles’ Category

Morphological Analysis as a Vocabulary Strategy in Post-Secondary Reading: Literature Review and Annotated Bibliography

Just as a grammar governs the ordering of words in sentences and phrases, another sort of grammar governs the ways in which morphemes combine to form words.

How helpful is this morphological grammar to reading teachers? Developmental reading textbooks often teach students to infer word meaning from the parts of words, but what’s the pedagogical grounding for this? I recently wrote a literature review and annotated bibliography on exactly that topic:

Click here to see Morphological Analysis as a Vocabulary Strategy in Post-Secondary Reading: Literature Review and Annotated Bibliography (PDF Document)

Bait-and-Switch: The COMPASS Writing Skills Placement Test

February 2, 2014

The COMPASS Writing Skills Placement Test (WSPT), published by ACT, is widely used by community colleges to place incoming students into English/Writing courses. Part of ACT’s COMPASS suite of placement tests, the COMPASS WSPT is a multiple-choice, computer-adaptive test. According to one source, COMPASS tests are used by 46% of community colleges (p. 2). At some colleges, the COMPASS WSPT operates as the primary instrument for placing English students, though many use it as one of “multiple measures.”

With the current spotlight on developmental education, placement mechanisms like the COMPASS WSPT are increasingly being scrutinized. When a student is misplaced into a class that’s too easy or too hard, they face a needless obstacle to completing college. ACT reference materials market the COMPASS tests as predictive of student success in college courses, but two recent reports from the Community College Research Center (CCRC) cast doubt on the power of such standardized multiple-choice tests to predict student success.

Rather than examining how well the content of placement tests aligns with the content of college courses, both CCRC reports focus on large-scale statistical analyses that compare student performance on placement tests with performance in college classes. In this post, I consider a related but narrower set of issues: What skills, exactly, does the COMPASS WSPT assess? How well do the skills it assesses align with how ACT markets the test, and with the skills students need to succeed in their writing courses?

My answer: ACT’s online documentation repeatedly suggests that the COMPASS WSPT assesses examinees’ rhetorical skills and mechanical skills in equal measure. That impression is misleading. By close-reading sample test questions, I show that the COMPASS WSPT is almost entirely a multiple-choice test of sentence-level mechanics; it barely assesses the higher-order skills essential to success in most writing classes.

The Impression Created by the ACT Website

Let’s examine how ACT describes the content of the COMPASS WSPT on its website. I found four such places:

1. The COMPASS Guide to Effective Student Placement and Retention in the Language Arts, which is geared towards faculty and administrators, states:

COMPASS Writing Skills Placement Test The COMPASS Writing Skills Placement Test is designed to help determine whether a student possesses the writing skills and knowledge needed to succeed in a typical entry-level college composition course. Examinees are presented with a passage on-screen and are asked to read it while looking for problems in grammar, usage, and style. Upon finding an error, students can replace the portion of text with one of five answer options presented.

Writing Skills Test passages are presented to the examinee as an unbroken whole, with no indication of where errors are located. To accommodate the task for computer-based testing, the test passages are divided into a series of segments. Because examinees can choose to edit any portion of the passage, every part of the text is included within these segments, and no part of the text is contained in more than one segment. There is a test item for each segment of the text so that an item with five answer options will appear no matter where an examinee chooses to revise the text. Of the five answer options, option “A” always reproduces the original text segment. If the segment selected by the examinee contains no error, then the correct alternative would be option “A.” Allowing students to select and correct any part of the passage broadens the task from simple recognition of the most plausible alternative to a more generative error-identification exercise.

In addition to the items that correspond to passage segments, the COMPASS Writing Skills Placement Test has one or two multiple-choice items that appear after the examinee is finished revising the passage. These items pose global questions related to the passage.

COMPASS Writing Skills Placement Test items are of two general categories: usage/mechanics and rhetorical skills. Each of these general categories is composed of the subcategories listed below.

Usage/Mechanic Items Items in this category are directed at the surface-level characteristics of writing, as exemplified in three major subcategories: punctuation, basic grammar and usage, and sentence structure.

Rhetorical Skills Items Items in this category deal with misplaced, omitted, or superfluous commas; colons; semicolons; dashes; parentheses; apostrophes; question marks; periods; and exclamation points. (p. 2)

Note the two bold headings at the bottom, which suggest that mechanical skills and rhetorical skills are assessed in equal measure.

2. A similar implication runs through the parts of ACT’s COMPASS WSPT website geared toward a broader audience. For instance, this page states:

This test asks students to find and correct errors in essays presented on the computer screen. The test items include the following content categories:

Usage/Mechanics

  • Punctuation
  • Basic grammar and usage
  • Sentence structure

Rhetorical Skills

  • Strategy
  • Organization
  • Style

3. Likewise, this page states:

Writing Skills Placement Test is a multiple-choice test that requires students to find and correct errors in essays in the areas of usage and mechanics, including basic grammar, punctuation and sentence structure, and rhetorical skills, including strategy, organization and style. (colors added to illustrate equal emphasis)

4. Finally, the introduction to the packet of sample questions states:

Items in the Writing Skills Placement Test assess basic knowledge and skills in usage and mechanics (e.g., punctuation, basic grammar and usage, and sentence structure) as well as more rhetorical skills such as writing strategy, organization, and style. (p. 1, colors added to illustrate equal emphasis)

At the end of the first excerpt above, witness a bizarre sleight-of-hand: here, “rhetorical” means punctuation, the sort of thing that belongs under the heading of “Usage/Mechanic Items.” The only thing “rhetorical” about punctuation is the assertion that it is.

In the subsequent three excerpts, rhetorical skills are conceptualized more vaguely, as “strategy,” “organization,” and “style.” Arguably, these three might qualify as rhetorical skills. But we’re left wondering: Strategy of what? Organization of what? Style of what? These could refer to re-arranging a couple words in a sentence, or re-conceptualizing the entire thrust of the essay.

It helps to operationalize the distinction between mechanical skills and rhetorical skills in a way that’s both clear and generally accepted by writing teachers. I’d posit the following:

Mechanical skills require writers to operate at the sentence-level, and not far beyond. These skills enable writers to make ideas intelligible through language, and make that language conform to the usage conventions followed by respected writers in published writing.

Rhetorical skills require writers to think beyond the sentence level and often beyond the text itself. Writers must consider how the meaning of one sentence relates to the meaning of the whole text. Further, they must consider the social context in which they are writing and the needs of their audience—as well as logos, ethos, and pathos.

As with many binaries, the distinction between the two can be hazy: Laura Micciche, for instance, posits that grammatical choices have a rhetorical component. But using my distinction, the next section will analyze actual test questions, with the understanding that some questions could be categorized as simultaneously mechanical and rhetorical.

The Reality of the COMPASS WSPT

I examined sample test questions for a sense of what the COMPASS WSPT actually assesses. My data comes directly from ACT. For the COMPASS WSPT, ACT publishes a booklet of Sample Test Questions.

How well do these represent actual test questions? The booklet’s preface assures examinees that “the examples in this booklet are similar to the kinds of questions you are likely to see when you take the actual COMPASS test.”

Of the 68 sample questions, just 7 uncontroversially assess students’ rhetorical abilities, as defined above. That’s 10%. These are the 2 – 3 “global” questions that follow each of the three passages. Whereas the remaining questions ask examinees to find and correct “errors,” the “global” questions ask examinees to consider the broader meaning and the extent to which language meets a higher-order goal. Here’s one such representative question:

Suppose the writer wants to show that lending programs similar to the one administered by the Grameen Bank have been widely accepted. Which of the following phrases, if added to the last sentence of the essay, would best achieve that goal?

A. to make credit available

B. over the years

C. around the world

D. to encourage development

E. with some variations (p. 9)

In fairness, of the remaining 90% of the questions, another 10% or so could, under a charitable definition, be classified as primarily sentence-level in scope but having a rhetorical component. Three such questions assess examinees on what the packet of COMPASS WSPT Sample Test Questions calls “judging relevancy” (p. 24, 25, 26). In these, examinees must decide whether certain words are superfluous or essential to the passage. Other marginally rhetorical questions assess whether examinees can choose the most fitting transitional expression between two sentences.

Now consider a representative test item that purely assesses sentence-level skills. In the following, examinees must choose which segment is correct:

A. If one member will fail to repay a loan, the entire group is unable to obtain credit

B. If one member fails to repay a loan, the entire group is unable to obtain credit

C. If one member do fail to repay a loan, the entire group is unable to obtain credit

D. If one member is fail to repay a loan, the entire group is unable to obtain credit

E. If one member failing to repay a loan, the entire group is unable to obtain credit (p. 7, emphasis is mine)

This question focuses exclusively on verb forms and tense.

The vast majority (80 – 90%) of COMPASS WSPT questions are crafted similarly: examinees must choose the answer with the correct preposition, with the right punctuation placed between the right words, or with transposed words restored to grammatical order, with the right suffixes. No deep thinking needed. For many, the right answer could be picked by Microsoft Word’s grammar-checker, as the sketch below illustrates. Read through the Sample Test Questions and see for yourself.
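To make that last claim concrete, here is a minimal sketch of how an automated checker could “take” the sample item above. I’m substituting the open-source LanguageTool (via the language_tool_python package) for Word’s checker, and it’s my assumption, not ACT’s claim, that its rule set flags each flawed option:

```python
# Minimal sketch: let an automated grammar-checker "take" the sample item above.
# Uses the open-source LanguageTool via language_tool_python
# (pip install language-tool-python; it downloads LanguageTool on first run).
import language_tool_python

tool = language_tool_python.LanguageTool('en-US')

# The five answer options from the sample item quoted above.
options = {
    "A": "If one member will fail to repay a loan, the entire group is unable to obtain credit.",
    "B": "If one member fails to repay a loan, the entire group is unable to obtain credit.",
    "C": "If one member do fail to repay a loan, the entire group is unable to obtain credit.",
    "D": "If one member is fail to repay a loan, the entire group is unable to obtain credit.",
    "E": "If one member failing to repay a loan, the entire group is unable to obtain credit.",
}

# Count the problems LanguageTool flags in each option and pick the cleanest.
# Whether every flawed option actually gets flagged depends on the rule set.
flags = {label: len(tool.check(text)) for label, text in options.items()}
print(flags)
print("Answer:", min(flags, key=flags.get))  # ideally "B", the keyed answer
```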

Why the Bait-and-Switch?

This bait-and-switch is striking, but no accident; ACT’s online documentation is permeated by the sort of measured wording, consistent style, and politic hedging that evinces a lengthy process of committee vetting. What’s happening is that ACT wants to make the COMPASS WSPT look more appealing to its target audience. Who is the target audience? Primarily, those who decide whether to purchase the COMPASS product—faculty and administrators at community colleges.

Consider their situation and needs. Decades ago, this audience might have eagerly embraced a placement test that primarily assessed sentence-level mechanical skills. But the values of English faculty in particular have shifted. First, today’s writing curriculum—from the developmental level to upper-division—focuses much more on higher-order concepts: rhetorical decisions, critical thinking skills, the writing process, and the social context of writing. As such, placement tests need to assess students’ abilities along these dimensions. Second, a consensus is emerging that accurate placement requires students to complete a writing sample evaluated by human eyes.

Placement tests that meet both criteria are commonly found at universities. But at community colleges, they are rendered less practical by budgetary constraints. So community college faculty are left seeking a compromise: a more economical multiple-choice test that assesses, at least in part, students’ higher-order skills. That’s the niche purportedly filled by the COMPASS WSPT.

With their heavy workloads, community college faculty and administrators are vulnerable to ACT’s bait-and-switch. How many actually take these tests themselves or analyze the sample questions? In my experience, most faculty understand the content of their placement tests only vaguely, if at all (unless the test is developed in-house). When we debate placement policy, the debate too often gravitates toward a higher level of abstraction, focusing on pedagogical theory (multiple-choice test versus holistic writing sample) and statistical outcomes (how predictive the tests are of success), rather than on what the test questions actually assess.

English teachers would learn much if they sat down and took the high-stakes tests that determine where their students are placed. In fact, I recently sat down and took the reading and writing placement tests where I teach. The process enlightened me, giving me a much more practical perspective on the science, er, art of student placement.

Grammar Terminology and E.D. Hirsch’s Cultural Literacy

October 20, 2013

For literacy educators, E.D. Hirsch’s dubious yet influential Cultural Literacy: What Every American Needs to Know (1988) is a must-read—partly because Hirsch often gets caricatured in graduate seminars as the stand-in for what Paulo Freire calls the “banking method” of education. But Hirsch also indirectly helps explain a key problem teachers encounter when we teach about grammar and mechanics, the problem I will focus on in this post.


Yes, everything you’ll ever need to know, all in one book! And as a bonus: the cure to our benighted educational system!

Hirsch’s thesis holds that the K-12 educational system leaves students with a low level of literacy because it over-emphasizes teaching general skills (word decoding, summarizing, or making text-to-self connections) and under-emphasizes familiarizing students with the broad range of background knowledge shared by highly literate Americans. Johnny can’t read, according to Hirsch, because he doesn’t understand our cultural symbols.

It’s hardly controversial that background knowledge helps readers more deeply understand a text. What English teacher hasn’t seen a class of students overlook crucial textual allusions to things that most teenagers know nothing about—things like filibusters, Sodom and Gomorrah, or Jim Crow? That’s why reading teachers, as part of a venerable pre-reading activity, build and activate students’ schemata.

But Hirsch’s undoing stems from the way the book exemplifies a genre that today seems all too familiar. This genre:

1. starts from the premise that the K-12 education system is failing.

2. focuses not on what students can do, but on what they can’t.

3. takes a glib approach to examining competing pedagogies.

4. presents the author’s prescriptions as a panacea.

When it comes to our sentence-level pedagogies, how are Hirsch’s ideas relevant? He infamously ends his book with a 65-page list of items titled “What Literate Americans Know”—everything from “abominable snowman” (152) to “Zionism” (215). Introducing the list, Hirsch professes that he’s describing and not prescribing, but his list betrays his prejudices, since it includes “a rather full listing of grammatical and rhetorical terms because students and teachers will find them useful for discussing English grammar and style” (146). (Hirsch holds a Ph.D. in English.)

Here Hirsch implies that college writing teachers cannot assume all students enter college knowing common grammar terminology. He’s right. But this doesn’t mean our high schools are failing. It’s just an objective fact about our students.

A comparison: I once taught at a community college whose feeder high schools were dominated by whole-language instruction. In that context, many of my students sat mystified when I first mentioned terms like subject, verb, and preposition. Over time, I learned to presuppose that only one group of students was familiar with common grammatical terms—those with coursework in ESL. That was in California. Now I’m in Illinois, and things are different: my current writing students mostly tell me that they already learned grammar terminology in high school. Neither high school system is inherently better; each chooses to emphasize different things.

When students enter our classrooms without background knowledge in grammar terminology, it constrains how much we can teach when it comes to sentence-level pedagogies. Conversely, when students bring such knowledge into the classroom, we’re enabled to do more. (And even when they have that background knowledge, their prior teachers may have used differing definitions for the same term, a problem I discuss in another blog post.)

A practical example helps: how do you address subject-verb agreement if most students don’t know what a subject or a verb is—not to mention related terms like noun, suffix, tense, grammatical number, and auxiliary verb?

This obstacle certainly can be surmounted. I see two general approaches:

1. The traditional approach: you could start by explicitly teaching students the definitions of subject and verb. When we label the parts with jargon, the jargon makes the discussion easier to follow. But this approach has pitfalls: it can bore students (grammatical definitions lack inherent sexiness); it takes instructional time away from other topics; and you risk sliding down the slippery slope of having to define many more grammatical terms.

2. The inductive approach: the concept of agreement can be taught through intuitive example sentences, skirting the technical names of the sentence parts and thus saving time. I discuss the advantages of this approach here. The major disadvantage is that students do not acquire the grammatical jargon and definitions that they could use in the future.

Either way, it’s a trade-off.

My point is this: if you have an ambitious agenda when it comes to sentence-level pedagogies, think carefully about what your students already know. You might need to scale back your ambitions.

Review of Joseph M. Williams’s “The Phenomenology of Error”

I recommend reading Joseph M. Williams’s The Phenomenology of Error as a companion piece to Patrick Hartwell’s Grammar, Grammars, and the Teaching of Grammar. Both readings form the foundation for thoughtful discussions of the specifics of sentence-level pedagogies. Just as Hartwell questions what “grammar” means, Williams questions what linguistic “error” means, and what epistemologies and methodologies underlie labeling an expression an error. Williams and Hartwell help us see that “error” and “grammar” are often used in a sloppy, untheorized way, one that forces us to lump disparate items into the same broad category.

Joseph M. Williams, author of The Phenomenology of Error.

Williams begins by comparing linguistic errors with social errors. This comparison lets us view error not from the usual product-centric perspective, where error exists on the page of a text, but from a transactional perspective, where error is socially situated within a flawed transaction between writer and reader. At the same time, Williams points out a hole in the analogy between social error and linguistic error: social errors can cause big problems; linguistic errors largely fall into the domain of the trivial.

Williams views the common methodologies of defining error (and the rules that demarcate error) with skepticism, for several reasons:

1. One common methodology has researchers survey people about whether a given expression contains an error. Such surveys, Williams believes, are flawed. The question itself is leading. It entices us to read more self-consciously, and self-conscious readers over-report perceived errors.

2. We report our own linguistic habits inaccurately. How we profess to use the language differs from how we actually do. As evidence, Williams cites several prominent handbook authors whose attested usage contradicts their own prescriptions for usage (sometimes in the same sentence!).

3. In determining error, we tend to appeal too trustingly to the authority of a handbook or a teacher.

4. Regardless of our methodology, we can never fully agree on which things constitute grammatical errors.

How do we address these issues? Williams begins by contrasting two ways of reading: how we read when we hunt for errors (the way many teachers read student essays), versus how we read when we read for content (the way we read experts’ writing). Williams believes that if we read the second way, we can develop a formal classification of at least four types of rules:

1. Those we notice if followed and if violated.

2. Those we notice if followed but not if violated.

3. Those we don’t notice if followed but do if violated.

4. Those we never notice, regardless of whether followed or violated.

This last sort strikes me as an interesting category—vacuous errors printed in some handbooks but with no psycholinguistic reality. Different readers might sort any given rule into different categories. That’s expected. Crucially, when we read for content, not every rule (or “rule”) will enter our consciousness.

Williams’s categorization of rules could be further elaborated in the way suggested by Hartwell’s five definitions of “grammar.” For any rule posited, what forces are said to motivate it?

• Does the rule differentiate “standard” written English from less prestigious dialects?
• Does it help to enhance rhetorical style?
• Is it a core part of the grammar of all dialects?
• Does it distinguish native speakers from ESL learners?
• Is it some grammarian’s pet-peeve?

Williams prescribes a big change for teachers and language researchers. Regardless of what “experts” posit as error, teachers and researchers should focus their attention on the sorts of errors that rudely interpose themselves when we read for content, rather than defining error by appealing to outside authority. (At the end of the essay, Williams makes the much celebrated revelation that his own essay embodies this principle: he has inserted numerous errors into the article, errors which most readers will—on their first read—overlook.)

At the end, Williams concedes that his proposal might prove futile. Why? We get more satisfaction from hunting for errors and chastising supposed linguistic transgressors. Grammar Nazism and the “gotcha!” approach to language satisfy us more than merely noting what jumps out at us on a non-self-conscious reading.

Three decades after this article was first published, Williams’s proposal has carried more influence than he could have predicted. For one, linguists and psycholinguists have developed even more sophisticated methodologies for carefully assessing various shades of grammaticality. Linguists search corpora of actual speech and writing to see which usages are attested (Google enables anyone with an internet connection to do a crude version of this sort of research). Psycholinguists rely on unobtrusive research techniques to gauge grammaticality, such as cameras that track readers’ eye movements, reaction-time tasks, and even brain imaging.

At the same time, the grammar Nazism of prior generations has faded somewhat from the collective consciousness. Consider three pieces of evidence:

1. The specific prescriptive rules that Williams discusses throughout his essay seem dated. In fact, when I recently assigned this essay to my advanced composition students, they were confused because they knew nothing about these rules.

2. Two of the most influential language commentators in the popular media—Geoffrey Nunberg and Grammar Girl—base their analysis and usage advice not on cocksure pronouncements of correctness or dogmatic appeals to authority, but on careful historical research and corpus research that takes into account an impressive range of linguistic subtleties.

3. Every writing teacher I’ve met who was trained in the past three decades takes a non-dogmatic approach to sentence-level rules and error.

But one force will always work against Williams’s proposal: native speakers’ over-confidence in what they know about their language. As native speakers, we are swimming in the English language, and we’ve been using it since we were toddlers. So any native speaker can easily authorize themselves to wear the hat of the grammar Nazi and inveigh against whatever “error” happens to bug them.

Although I embrace Williams’s rejection of the handbook’s authority when it comes to my own writing, my principal critique of Williams’s article stems from this fact: Williams taught at The University of Chicago. His classrooms were composed of the country’s elite students. The sentence-level needs of his students have little in common with those of the under-prepared students most of us teach. I’m guessing that Williams’s students wrote relatively clean, sophisticated sentences and found it easy to navigate the subtleties and ambiguities of English usage.

Under-prepared students don’t cope as well with these complexities. A legitimate argument can be made that they benefit from the authoritarian clarity and structure of the rigid prescriptive rules that characterize handbooks. In this view, handbook rules function as a necessary evil that serves a purpose at a certain stage in writers’ development, like the five-paragraph essay. Williams probably lacked the perspective to appreciate this.

Sorry, but I’m not Convinced Texting is Destroying English Grammar

September 14, 2012

Is texting hurting the grammar skills of middle-schoolers?

The journal New Media and Society, which contains Cingel and Sundar’s study.

“Yes,” says a recent study by Drew P. Cingel and S. Shyam Sundar (hereafter: C & S) in the journal New Media and Society. C & S studied the texting habits of 6th, 7th, and 8th graders and found that sending and receiving texts more frequently correlated significantly with poorer scores on a test of grammar. Further, the frequency with which students sent texts containing nonstandard spellings correlated significantly, and to a greater degree, with poorer scores on the grammar test. Interestingly, the frequency with which students sent texts with only nonstandard capitalization or punctuation (independent of spelling) did not correlate to a statistically significant degree with how they did on the grammar test.

Of course, when scientific research like this gets into the hands of journalists, the results are depressingly predictable: news feeds overflow with insta-reporting that ignores the prior research on the topic, elides the researchers’ methodology, uncritically repeats the study’s results, and sexes up the most startling conclusions. Such reporting resonates most when the take-home message aligns with what the public already believes: “kids these days” are behaving slothfully, and English is decaying.

In fact, when we scrutinize C & S’s study, the negative link they draw between texting and grammar/literacy skills unravels. Here’s why:

Good Grammar doesn’t Equal Good Writing

Implicitly, C & S take a narrow view of what makes for good writing—good grammar. They could have measured students’ grades in their writing classes, the holistic quality of a sample of students’ writing, or any number of other measures. Instead they chose a multiple-choice grammar assessment (and not a well-conceived one, as I discuss below).

Good grammar is one of many components of effective writing, but hardly the most important. I’ve taught plenty of students who can manufacture grammatically flawless prose that lacks any semblance of organization or meaningful thought.

Questionable Statistical Analysis

The linguist Mark Liberman points out a number of serious flaws in C & S’s methodology and their statistical analysis. I won’t repeat them all here, but I do want to focus on Liberman’s critique of C & S’s statistical analysis—which is damning (in the most literal sense). Most notably, Liberman notes that the effect of nonstandard texting on students’ performance on the grammar test was quite weak, weaker than the effect of a student’s grade level.
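To see what “weaker effect” means operationally, here is a minimal sketch of the kind of comparison at issue: regress grammar scores on standardized predictors and compare the coefficient magnitudes. Everything in it (the variable names, the numbers) is an invented placeholder, not C & S’s data:

```python
# Sketch: comparing effect sizes by regressing a grammar score on standardized
# predictors. All data below are fabricated placeholders, not C & S's numbers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
grade = rng.integers(6, 9, n).astype(float)         # grade level: 6, 7, or 8
texting = rng.poisson(30, n).astype(float)          # texts per day (invented)
score = 0.8 * grade - 0.02 * texting + rng.normal(0, 2, n)  # toy outcome

def z(x):
    # Standardize so the fitted coefficients are directly comparable.
    return (x - x.mean()) / x.std()

X = sm.add_constant(np.column_stack([z(grade), z(texting)]))
fit = sm.OLS(z(score), X).fit()
print(fit.params)  # if |texting beta| << |grade beta|, texting's effect is the weaker one
```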

Age of the Students Studied

C & S acknowledge that texting is a different genre of writing than school writing, and propose that students should be taught to register-switch between the two (14)—an uncontroversial prescription. But C & S fail to consider the relationship between the age of the students studied and their ability to register-switch, or how this relationship may have influenced the results of their experiment.

Recall that, according to Liberman, C & S’s results show that students’ grade level had a stronger effect on their grammar-test performance than their texting behavior did. We could interpret this to mean that the 8th graders did better on the exam because they have cumulatively received more writing instruction. Or consider a somewhat complementary explanation: perhaps the 8th graders also did better because, with age, they’ve gained skill at register-switching.

Register-switching is a skill that improves as students mature and become more socially and meta-linguistically aware. In my college writing classes, the older students demonstrate the greatest skill at switching between different dialects/registers of English, while the students fresh out of high school tend more to struggle with it. Whenever I receive an email in the wrong register that says something like “hey prof can u send me the hw i missed?”, it always comes from one of my younger students.

Perhaps texting’s (mild) impact on students’ performance on the grammar assessment disappears as students get older. What if C & S conducted a similar experiment on high school or college students? I hypothesize they’d find little to no correlation between how much a student texts and how they perform on a grammar test. In fact, a 2010 study by M.A. Drouin found that the frequency with which university students texted actually correlated positively with spelling and reading fluency. By high school or college age, most students should have grown acutely aware of the differences between the conventions of texting, standard written English, and the other varieties of English.

Poor Design of the Grammar Test

The students C & S studied completed a brief multiple-choice grammar test. Many aspects of the test design left me puzzled. Amongst other problems, the test evinces a fuzzy conception of what “grammar” means, and a bizarre conception of how the nonstandard linguistic features of textisms might influence students’ skills with English mechanics. C & S didn’t think through the test’s details carefully enough.

First, though, a couple of issues show a general sloppiness with details. I would argue—as a trained linguist and an English teacher—that questions #2, #13, and #15 plausibly permit more than one correct answer, and will thus needlessly confuse students and muddy the results. Next, C & S call it a 22-question test, but in the appendix, the test is only 20 questions long. (Was their proofreader distracted by their Twitter account?)

Anyways, here are the 20 questions:

Cingel & Sundar’s grammar assessment: 20 questions or 22?

Before we proceed, let’s think carefully about the linguistic features of text messages, and how they deviate from standard written English. In his book Txtng: The Gr8 Db8, the linguist David Crystal identifies the following features of textisms (37 – 52):

  • Pictograms and logograms: “be” –> “b” or “kisses” –> “xxx”
  • Initialisms: “laughing out loud” –> “lol”
  • Omitting Medial Letters: “difficult” –> “difclt”
  • Other Nonstandard Spellings: “sort of” –> “sorta”
  • Shortenings: “difference” –> “diff”

How common are these features? In one Norwegian study Crystal cites, only 6% of text messages contained abbreviations (105). This figure strikes me as low, but certainly not all texters use these sorts of abbreviations. We should note also that text messages frequently omit punctuation marks and capitalizations.
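For a sense of how a researcher might count these features, here is a minimal sketch that tags tokens in a message with Crystal’s categories. The lookup table and regex are my own illustration, not Crystal’s instrument or the Norwegian study’s:

```python
# Sketch: tag tokens in a message with Crystal's textism categories using a
# small hand-built lookup. The table is illustrative, not a research instrument.
import re

LOOKUP = {
    "b": "pictogram/logogram",
    "xxx": "pictogram/logogram",
    "lol": "initialism",
    "difclt": "omitted medial letters",
    "sorta": "other nonstandard spelling",
    "diff": "shortening",
}

def tag_textisms(message):
    """Return (token, category) pairs for tokens matching known textisms."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return [(t, LOOKUP[t]) for t in tokens if t in LOOKUP]

print(tag_textisms("lol that class is difclt, sorta boring"))
# -> [('lol', 'initialism'), ('difclt', 'omitted medial letters'),
#     ('sorta', 'other nonstandard spelling')]
```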

Crucially, all of these differences fall in the domain of orthography—not syntax. I know of only one significant way in which the syntax of texting differs from standard written English: text messages can elide certain function words that speakers can infer:

“Do you want to go to the game?” –> “want to go to game?”

Similar to text messages, this sign elides certain function words and it contains non-standard punctuation and capitalization. Does anyone blame this sign for a decline in literacy?

But these sorts of elisions pre-date text messaging. They also occur in informal speech (how many people actually speak in complete sentences?), as well as in street signs, restaurant menus, and instructional manuals.

Compare the nonstandard linguistic features of textisms with the mechanical issues assessed by C & S’s test:

  • 9 questions test verb inflection issues, such as agreement and tense (#1, 3 – 6, & 9 – 12)
  • 8 questions test students on punctuation and/or capitalization (#13 – 20)
  • 2 questions test students on the spelling of homophones (#7 & 8)
  • 1 question tests students on pronoun choice (#2).

C & S do not explain why they’ve chosen to test these students on these particular mechanical issues as opposed to any others, except that they wanted to test grammar issues that all the students had previously been taught in school. This rationale strikes me as strange. When it comes to language development, all students know infinitely more than their teachers have explicitly taught them.

C & S’s test doesn’t strictly test grammar (if you take “grammar” to mean syntax); it primarily tests punctuation, capitalization, spelling, and verb form. While one might reasonably expect students who use textisms to struggle with how to spell certain words or how to punctuate and capitalize correctly, C & S propose no theory of why textisms would interfere with students’ ability to properly inflect verbs or choose pronouns, even though about half the test assesses these abilities.

What does “grammar” even mean to C & S? They commit the mortal sin of literacy researchers—identified by Patrick Hartwell—of not specifying what exactly they take “grammar” to mean. They assume that “grammar” is some monolithic entity, and that all grammar errors are equal. Crucially, C & S don’t provide results showing whether errors on certain types of questions on the grammar test correlate with certain types of texting habits. They measure only students’ overall score on the test. In this way, their grammar test acts as a coarse-grained tool, one not founded on any particular theory of grammar, grammatical miscues, or the linguistic features of text messages.

In a more carefully designed study, researchers would differentiate more thoughtfully between the types of grammatical errors students might commit and how these relate to the conventions of text messaging: for instance, by scoring each category of test items separately, as sketched below.
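As a sketch of what that could look like, one might compute a subscore for each category and correlate each subscore with texting frequency. The item numbers follow the breakdown above; the student responses are invented placeholders:

```python
# Sketch: per-category subscores correlated with texting frequency, instead of
# one coarse total. Item numbers follow the breakdown above; data are invented.
import numpy as np
from scipy.stats import pearsonr

categories = {
    "verb inflection": [1, 3, 4, 5, 6, 9, 10, 11, 12],
    "punctuation/caps": [13, 14, 15, 16, 17, 18, 19, 20],
    "homophone spelling": [7, 8],
    "pronoun choice": [2],
}

rng = np.random.default_rng(1)
n_students = 200
# Row = student; column index = question number (column 0 is unused padding).
correct = rng.integers(0, 2, size=(n_students, 21))
texts_per_day = rng.poisson(25, n_students)

for name, items in categories.items():
    subscore = correct[:, items].mean(axis=1)
    r, p = pearsonr(subscore, texts_per_day)
    print(f"{name:20s} r={r:+.2f}  p={p:.2f}")
```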

An Alarmist View of Emergent Technology

There’s a long tradition of people feeling threatened by emergent technologies, as Adam Gopnik points out in a 2011 New Yorker article. We constantly see the new technology as the thing that’s going to make everything in the world fall apart. Gopnik lists many examples of people predicting the destructive impact of everything from “horse-drawn carriages running by bright colored posters” to “color newspaper supplements.” Even Plato’s Socrates worried in the Phaedrus dialogue that the technology of—get this—writing would make people forgetful and give the masses the illusion of possessing wisdom.

This alarmist view operates on steroids when we consider how technology will impact language and literacy. Few complaints have a longer history than the complaint that our language is decaying. English still hasn’t collapsed, yet the scapegoat keeps changing to fit the historical moment—young people, immigrants, pop culture, or slang. C & S have gone looking for the next scapegoat:

“[Routine use of textisms] by current and future generations of 13 — 17 year-olds may serve to create the impression that this is normal and accepted use of the language and rob this age group of a fundamental understanding of standard English grammar” (2).

Their negative attitude surfaces, too, in their loaded language. They begin their abstract by writing that

“the perpetual use of mobile devices by adolescents has fueled a culture of text messaging” (1). (emphasis mine)

And they later write that

“techspeak has crept into the classroom” (13). (emphasis mine)

Cue the sinister music!

Nowhere in the article do they even consider that texting might enhance students’ school literacy. They presuppose that texting technology is inherently detrimental, and then run an experiment to test that proposition. If we assume that every new technology damages literacy, then everywhere we look, we’ll see the supposed damage.

In doing so, C & S overlook the advantages texting offers students. As Mark Liberman points out, the authors fail to cite relevant research in the field, which generally finds that students’ texting has a positive relationship with their literacy skills. Further, the literature review in M.A. Drouin’s 2010 texting study summarizes that empirical studies show “mixed results” when analyzing the relationship between how frequently students text and their literacy, and that they show no significant negative relationship between using textisms and students’ literacy (69).

It remains to be seen how texting impacts literacy. Some of the changes may prove negative, and some positive. But when evaluating emergent technologies, the interesting question is how the potential negative impacts measure up against the positive ones.

Clearly, this area of inquiry remains in its infancy. And we should humble ourselves knowing how difficult it is to conduct solid quantitative research into issues of education and literacy. This sort of research is dogged by the same problems that plague most quantitative research in the social sciences, such as countless confounding variables that are impossible to control and disagreements over which outcomes to measure. On top of that, digital technologies are still evolving, and most empirical results are embedded in the historical and demographic contexts of the students studied, and are thus difficult to generalize from.

In my follow-up to this post, I’ll cover ground not covered by C & S: I’ll point out four real ways in which texting probably helps students grow as writers, and one way it can really hurt them.

Review of “Grammar Rants” by Patricia A. Dunn and Ken Lindblom

Reviewed: Grammar Rants: How a Backstage Tour of Writing Complaints Can Help Students Make Informed, Savvy Choices About Their Writing, by Patricia A. Dunn and Ken Lindblom[1]

Dunn & Lindblom (hereafter “D & L”) present teachers with a novel approach to integrating grammar into the writing curriculum: a pedagogy focused on critiquing the critics who complain about other people’s bad grammar and mechanics. Such complaints form their own genre (grammar rants) and are cast from a common mold with a long history. For instance, D & L cite professors at Illinois State Normal University ranting about spelling and grammar mistakes back in the mid-19th century (3). The genre never dies. Today, the prototypical grammar ranter would be Lynne Truss, author of the supercilious Eats, Shoots & Leaves.

Grammar rants, D & L argue, derive their power from the commonplaces assumed to be shared between writer and reader (26). You know them too: that having good grammar is crucial, that modern English is deteriorating, that kids these days are abusing the language, that liberal teachers have gone soft, and that we must return to the rigorous standards of an idyllic yesteryear.

Dunn & Lindblom’s annotations illustrate how they tear apart a grammar rant.

D & L detail how these commonplaces can be scrutinized, how grammar rants can be close-read, and how a writing curriculum can be scaffolded to enable students to do the same. The grammar rants analyzed are drawn from a variety of publications—most commonly, small-town newspapers. D & L unpack the questionable assumptions made by the grammar ranters, such as the implicit link between correct grammar and good morals (chapter 1), or between correct grammar and high intelligence (chapter 2). Grammar rants get torn apart line by line, and this is where the book excels; D & L’s meticulous, insightful close-readings provide rich models from which to teach.

Grammar rants, in D & L’s view, have little to do with the actual error in question or the clarity of the writing; grammar functions as a proxy. Through it, the ranter asserts superiority over the transgressor. In this analysis, D & L draw on Joseph Williams’s work on “the phenomenology of error,” which holds that “error” is a slippery thing, in part a product of one’s expectations: we expect and notice errors in the writing of amateurs, while we overlook similar errors in the writing of educated professionals (xi).

The pedagogy of Grammar Rants integrates reading and writing. It’s based on the belief that when students deconstruct grammar rants, they not only learn close-reading skills but also become less anxious and blocked in their own writing. They learn to focus more on meaning and substance and to obsess less over those who’d wag their fingers at imperfections on the surface (xiv).

Some of D & L’s discussion touches on what grammar ranters imply about race and class (10, 26, & 27), but these topics demand deeper attention, especially since so many of us teach students from groups historically under-represented in higher education. Grammar rants easily slip into the realm of class and ethno-linguistic supremacy. It always pains me when my non-white and working-class students tell me they speak “incorrect” English, because it shows they have internalized the same value system as the grammar ranters.

Students will find the lessons on rants about spelling (chapter 3) especially engaging for their humorous content. Here D & L draw an analogy between spelling bees and reality TV shows: both focus on the spectacle of contestants’ very public failures (54 – 55). D & L also discuss lighthearted news articles about criminals who spell incorrectly, articles that imply the two traits are linked (58 – 62).

Chapter 4 is especially timely and relevant to today’s students. It deals with texting and emailing, and the common (yet questionable) complaint that they’re hurting the language of young people.

Chapter 5 deals with what D & L call “the grammar trap”—those perilous situations where a writer needs to make a grammatical choice, but every option will draw the ire of some grammar ranter. Instead of the teacher prescribing a correct option, D & L believe students should be given both a “close-up view and a bird’s eye view of language controversies” and all the possible options (96). With such a perspective, students are less intimidated by potential grammar ranters and more empowered to think through the implications of each possible choice.

Will it work? D & L argue such an approach won’t leave students confused or overwhelmed. Instead, they will enjoy the “human drama” of grammar rants and they will gain confidence (96). I’m skeptical. I’ve found that most students at the basic level and many at the developmental level lack the patience for complex digressions into the many nuances of usage options. In many situations, they demand clear maxims that simplify matters, separating the correct from the incorrect.

D & L’s scope is both theoretical and practical. Each chapter ends with grammar rants for students to analyze, classroom activities, worksheets, discussion questions, and ideas for writing assignments. D & L’s classroom materials thoughtfully guide students through complex issues and draw on students’ personal experiences with the English language.

I’ve wondered what level of writing class the curriculum is designed for. D & L state that “[t]hrough their imaginative use of our suggestions, instructors should be able to engage students at all levels of writing proficiency” (xv). Nonetheless, the difficulty of the readings, the complexity of the activities, and the knowledge assumed by the discussion questions are all most fitting for students at the first-year composition level, or perhaps one level below.

Grammar Rants has an abstract, impersonal quality to it, as if the pedagogy and curriculum were fleshed out in a graduate seminar but never tested in an undergraduate writing course. I doubt that’s true, but D & L never discuss their experiences using their pedagogy with specific students, or how they’ve tailored it to different student populations. Similarly, the answer keys for discussion questions provide D & L’s ideal answers rather than a discussion of how actual students have responded and where they tend to go astray. As I read, I wanted to know what living, breathing students in D & L’s classes have said. How have the lessons played out? And, of course, what unexpected issues came out of left field?


[1] Full Disclosure: I received a complimentary evaluation copy of this book from the publishers.

Response to “The Thunder is Playing Well”

The media criticism show On The Media recently aired a radio segment on grammar issues with certain NBA team names like The Heat and The Thunder. The show notes that unlike most other team names, which take the regular plural suffix (-s), these names are treated as mass nouns. Semantically speaking, all team names denote a plural group of players. But when it comes to subject-verb agreement, copyeditors and sportswriters are confused about whether The Heat and The Thunder should be treated as grammatically singular or grammatically plural.

In other words, which is correct?

1. The Heat are playing well.

2. The Heat is playing well.

As the show got me wondering how to resolve this usage issue, I couldn’t help thinking about how many times my students have asked me to give them the rule for all sorts of similar issues. I’ve always felt that no matter how I answer them, the answer isn’t satisfying.

Show guest and sportswriter Tom Scocca observes that in actual usage, publications vary, treating these names sometimes as grammatically singular and sometimes as plural. At the end of the piece, Scocca takes a fairly nuanced position: there’s no fixed rule about which way to treat these team names. Instead, you have to examine them in the context of the sentence in which they appear and judge which sounds better. (Here it strikes me that investigating usage disputes case by case, based on the judgments of native speakers, is the same methodology used by linguists.)

In my experience, this sort of answer leaves many writing students unsatisfied. Most want fixed rules and definitive answers. They want the safety of a bright red line that can be drawn across all contexts, clearly separating the correct from the incorrect. This is the worldview inculcated by an educational system overtaken by multiple-choice tests. And this is the certainty that many handbooks provide: always place a comma before coordinating conjunctions. Only use “whom” in the following situations.

If only language were so tidy. Although these maxims may work in prototypical contexts, when you examine the messiness of usage across many contexts, you’ll find countless irregularities and idiomatic exceptions, you’ll find that respected writers vary in their preferences, and you may even find that different usages correspond to subtle differences in meaning or style. For instance, #1 above sounds more British, while #2 sounds more American. In addition, Brock Haussamen takes the position that in cases where the verb morphology and the plural/singular morphology of the subject don’t agree in the normal way, the speaker may be conveying a slight meaning difference.[1] According to this view, the plural morphology on the verb in #1 entails that the playing described by the verb is being performed by separate entities, whereas in #2, the singular morphology on the verb entails that the playing is being performed by a single entity.
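For the curious, here is a crude sketch of the corpus check Scocca’s method implies: count plural versus singular agreement for a mass-noun team name in whatever text collection you have on hand (corpus.txt is a placeholder, not a real dataset):

```python
# Crude corpus check: count plural vs. singular agreement for "The Heat" in a
# local text file. "corpus.txt" is a placeholder for any corpus you have.
import re

with open("corpus.txt", encoding="utf-8") as f:
    text = f.read()

patterns = {
    "plural   (The Heat are)": r"\bthe heat are\b",
    "singular (The Heat is)":  r"\bthe heat is\b",
}

for label, pattern in patterns.items():
    count = len(re.findall(pattern, text, flags=re.IGNORECASE))
    print(f"{label}: {count}")
```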

It refreshes me to see that contemporary commentators on language and usage are growing more nuanced and enlightened. Scocca bases his opinions on corpus research into actual usage, and he avoids dogmatic prescriptivism. Yes, he does tease some writers for their “poncy” Anglicized usage, but it’s done in good humor. If this same topic had been covered a decade or two ago, I’m sure we’d have gotten a different take: the guest would have insisted that there’s one right way to do it, and that those who deviate betray their illiteracy and ignorance. Then the piece would have concluded with a clichéd rant about the sloppiness of writers nowadays, the failure of our schools, and the impending decline of civilization. That are what I call progress!