Posts Tagged ‘English’

Morphemic Analysis as a Vocabulary Approach: Science or Art?

On 10/3/2014, my colleague Phil Sloan and I had the honor of co-presenting at the 2014 TYCA Midwest Conference in Grand Rapids, MI. Our presentation was entitled Reconsidering Our Pedagogical Foundations: Sonic Literacy and Morphemic Analysis.

My half of the presentation was a cross-disciplinary work entitled Morphemic Analysis as a Vocabulary Approach: Science or Art? In the presentation, I drew on research from formal linguistics to argue that applying morphemic analysis to authentic texts is as much art as science. I also analyzed whether textbooks for developmental reading suggest to students that morphemic analysis should be applied artfully or with scientific precision.

I’ve posted the eight-page handout for that presentation here:

Handout for Morphemic Analysis as a Vocabulary Approach: Science or Art? (PDF Document)

If you want to know even more about using morphemic analysis in your reading classroom, see my literature review and annotated bibliography on the topic.

Morphological Analysis as a Vocabulary Strategy in Post-Secondary Reading: Literature Review and Annotated Bibliography

Just as a grammar governs the ordering of words in sentences and phrases, another sort of grammar governs the ways in which morphemes combine to form words.

How helpful is this morphological grammar to reading teachers? Developmental reading textbooks often teach students to infer word meaning from the parts of words, but what’s the pedagogical grounding for this? I recently wrote a literature review and annotated bibliography on exactly that topic:

Click here to see Morphological Analysis as a Vocabulary Strategy in Post-Secondary Reading: Literature Review and Annotated Bibliography (PDF Document)

Bait-and-Switch: The COMPASS Writing Skills Placement Test

February 2, 2014

The COMPASS Writing Skills Placement Test (WSPT), published by ACT, is widely used by community colleges to place incoming students into English/Writing courses. Part of ACT’s COMPASS suite of placement tests, the COMPASS WSPT is a multiple-choice, computer-adaptive test. COMPASS tests are used by 46% of community colleges, according to one source (p. 2). At some colleges, the COMPASS WSPT operates as the primary instrument to place English students, though many use it as one of “multiple measures.”

With the current spotlight on developmental education, placement mechanisms like the COMPASS WSPT are increasingly being scrutinized. When a student is misplaced into a class that’s too easy or too hard, they face a needless obstacle to completing college. ACT reference materials market the COMPASS tests as predictive of student success in college courses, but two recent reports from the Community College Research Center (CCRC) cast doubt on the power of such standardized multiple-choice tests to predict student success.

The CCRC reports, rather than examining how well the content of placement tests aligns with the content of college courses, both focus on large-scale statistical analyses that compare student performance on placement tests with performance in college classes. In this post, I consider a related, yet narrower, set of issues: what skills exactly does the COMPASS WSPT assess? How well do the skills it assesses align with how ACT markets the test and with the skills students need to succeed in their writing courses?

My answer: the ACT’s online documentation repeatedly suggests the COMPASS WSPT assesses examinees’ rhetorical skills and mechanical skills in equal measure. However, this impression misleads. By close-reading sample test questions, I show that the COMPASS WSPT is almost entirely a multiple-choice test of sentence-level mechanical skills; it barely assesses the higher-order skills essential to success in most writing classes.

The Impression Created by the ACT Website

Let’s examine how ACT describes the content of the COMPASS WSPT on its website. I found four such places:

1. The COMPASS Guide to Effective Student Placement and Retention in the Language Arts, which is geared towards faculty and administrators, states:

COMPASS Writing Skills Placement Test The COMPASS Writing Skills Placement Test is designed to help determine whether a student possesses the writing skills and knowledge needed to succeed in a typical entry-level college composition course. Examinees are presented with a passage on-screen and are asked to read it while looking for problems in grammar, usage, and style. Upon finding an error, students can replace the portion of text with one of five answer options presented.

Writing Skills Test passages are presented to the examinee as an unbroken whole, with no indication of where errors are located. To accommodate the task for computer-based testing, the test passages are divided into a series of segments. Because examinees can choose to edit any portion of the passage, every part of the text is included within these segments, and no part of the text is contained in more than one segment. There is a test item for each segment of the text so that an item with five answer options will appear no matter where an examinee chooses to revise the text. Of the five answer options, option “A” always reproduces the original text segment. If the segment selected by the examinee contains no error, then the correct alternative would be option “A.” Allowing students to select and correct any part of the passage broadens the task from simple recognition of the most plausible alternative to a more generative error-identification exercise.

In addition to the items that correspond to passage segments, the COMPASS Writing Skills Placement Test has one or two multiple-choice items that appear after the examinee is finished revising the passage. These items pose global questions related to the passage.

COMPASS Writing Skills Placement Test items are of two general categories: usage/mechanics and rhetorical skills. Each of these general categories is composed of the subcategories listed below.

Usage/Mechanic Items Items in this category are directed at the surface-level characteristics of writing, as exemplified in three major subcategories: punctuation, basic grammar and usage, and sentence structure.

Rhetorical Skills Items Items in this category deal with misplaced, omitted, or superfluous commas; colons; semicolons; dashes; parentheses; apostrophes; question marks; periods; and exclamation points. (p. 2)

Note the two bold headings at the bottom, which suggest that mechanical skills and rhetorical skills are assessed in equal measure.

2. A similar implication is made throughout parts of the ACT’s COMPASS WSPT website geared toward a broader audience. For instance, this page states:

This test asks students to find and correct errors in essays presented on the computer screen. The test items include the following content categories:

Usage/Mechanics

  • Punctuation
  • Basic grammar and usage
  • Sentence structure

Rhetorical Skills

  • Strategy
  • Organization
  • Style

3. Likewise, this page states:

Writing Skills Placement Test is a multiple-choice test that requires students to find and correct errors in essays in the areas of usage and mechanics, including basic grammar, punctuation and sentence structure, and rhetorical skills, including strategy, organization and style. (colors added to illustrate equal emphasis)
4. Finally, the introduction to the packet of sample questions states:

Items in the Writing Skills Placement Test assess basic knowledge and skills in usage and mechanics (e.g., punctuation, basic grammar and usage, and sentence structure) as well as more rhetorical skills such as writing strategy, organization, and style. (p. 1, colors added to illustrate equal emphasis)

At the end of the first excerpt above, witness a bizarre sleight-of-hand: here, “rhetorical” means punctuation, the sort of thing that belongs under the heading of “Usage/Mechanic Items.” The only thing “rhetorical” about punctuation is the assertion that it is.

In the subsequent three excerpts, rhetorical skills are conceptualized more vaguely, as “strategy,” “organization,” and “style.” Arguably, these three might qualify as rhetorical skills. But we’re left wondering: Strategy of what? Organization of what? Style of what? These could refer to re-arranging a couple of words in a sentence, or re-conceptualizing the entire thrust of the essay.

It helps to operationalize the distinction between mechanical skills and rhetorical skills in a way that’s both clear and generally accepted by writing teachers. I’d posit the following:

Mechanical skills require writers to operate at the sentence-level, and not far beyond. These skills enable writers to make ideas intelligible through language, and make that language conform to the usage conventions followed by respected writers in published writing.

Rhetorical skills require writers to think beyond the sentence level and often beyond the text itself. Writers must consider how the meaning of one sentence relates to the meaning of the whole text. Further, they must consider the social context in which they are writing and the needs of their audience—as well as logos, ethos, and pathos.

As with many binaries, the distinction between the two can be hazy: Laura Micciche, for instance, posits that grammatical choices have a rhetorical component. But using my distinction, the next section will analyze actual test questions, with the understanding that some questions could simultaneously be categorized as mechanical and rhetorical.

The Reality of the COMPASS WSPT

I examined sample test questions for a sense of what the COMPASS WSPT actually assesses. My data comes directly from ACT. For the COMPASS WSPT, ACT publishes a booklet of Sample Test Questions.

How well do these represent actual test questions? The booklet’s preface assures examinees that “the examples in this booklet are similar to the kinds of questions you are likely to see when you take the actual COMPASS test.”

Of the 68 sample questions, just 7 uncontroversially assess students’ rhetorical abilities, as defined above. That’s 10%. These are the two or three “global” questions that follow each of the three passages. Whereas the remaining questions ask examinees to find and correct “errors,” the “global” questions ask examinees to consider the broader meaning and the extent to which language meets a higher-order goal. Here’s one such representative question:

Suppose the writer wants to show that lending programs similar to the one administered by the Grameen Bank have been widely accepted. Which of the following phrases, if added to the last sentence of the essay, would best achieve that goal?

A. to make credit available

B. over the years

C. around the world

D. to encourage development

E. with some variations (p. 9)

In fairness, of the remaining 90% of the questions, another 10% or so could be classified as primarily sentence-level in scope but having a rhetorical component, under a charitable definition. Three such questions assess examinees on what the packet of COMPASS WSPT Sample Test Questions calls “judging relevancy” (pp. 24–26). In these, examinees must decide whether certain words are superfluous or essential to the passage. Other marginally rhetorical questions assess whether examinees can choose the most fitting transitional expression between two sentences.

Now consider a representative test item that purely assesses sentence-level skills. In the following, examinees must choose which segment is correct:

A. If one member will fail to repay a loan, the entire group is unable to obtain credit

B. If one member fails to repay a loan, the entire group is unable to obtain credit

C. If one member do fail to repay a loan, the entire group is unable to obtain credit

D. If one member is fail to repay a loan, the entire group is unable to obtain credit

E. If one member failing to repay a loan, the entire group is unable to obtain credit (p. 7, emphasis is mine)

This question focuses exclusively on verb forms and tense.

The vast majority (80–90%) of COMPASS WSPT questions are crafted similarly: examinees must choose the answer with the correct preposition, with the right punctuation placed between the right words, with transposed words restored to grammatical order, or with the right suffixes. No deep thinking needed. For many, the right answer could be picked by Microsoft Word’s grammar-checker. Read through the Sample Test Questions and see for yourself.

Why the Bait-and-Switch?

This bait-and-switch is striking, but no accident; ACT’s online documentation is permeated by the sort of measured wording, consistent style, and politic hedging that evinces a lengthy process of committee vetting. What’s happening is that ACT wants to make the COMPASS WSPT look more appealing to its target audience. Who is the target audience? Primarily, those who decide whether to purchase the COMPASS product—faculty and administrators at community colleges.

Consider their situation and needs. Decades ago, this audience might have eagerly embraced a placement test that primarily assessed sentence-level mechanical skills. But the values of English faculty in particular have shifted. First, today’s writing curriculum—from the developmental level to upper-division—focuses much more on higher-order concepts: rhetorical decisions, critical thinking skills, the writing process, and the social context of writing. As such, placement tests need to assess students’ abilities on these dimensions. Second, a consensus is emerging that accurate placement requires students to complete a writing sample evaluated by human eyes.

Placement tests that meet both criteria are commonly found at universities. But at community colleges, they are rendered less practical by budgetary constraints. So community college faculty are left seeking a compromise: a more economical multiple-choice test that assesses, at least in part, students’ higher-order skills. That’s the niche purportedly filled by the COMPASS WSPT.

With their heavy workload, community college faculty and administrators are vulnerable to the ACT’s bait-and-switch. How many actually take these tests themselves or analyze the sample questions? In my experience, most faculty have only a vague understanding of the content of their placement tests, if any (unless the test is developed in-house). When we debate placement policy, the debate too often gravitates toward a higher level of abstraction, focusing on pedagogical theory (multiple-choice test versus holistic writing sample) and statistical outcomes (how predictive the tests are of success), rather than the specific details of what the test questions actually assess.

English teachers would learn much if they sat down and took the high-stakes tests that determine where their students are placed. In fact, I recently sat down and took the reading and writing placement tests where I teach. The process enlightened me, giving me a much more practical perspective on the science and art of student placement.

Subjects and Predicates in Language and Logic

January 6, 2014

Grammatical terms mislead us; I’ve argued this case previously: Exhibit A and Exhibit B. Too often, non-linguists read too deeply into the names of terminology, drawing conclusions the terms cannot support. As Exhibit C, I present a case study with the terms “subject” and “predicate”:

Recently, I was teaching a Critical Thinking course with a unit on class logic and set theory. Things were going swell—all Venn diagrams and syllogistic reasoning—until my unfortunate students stumbled into this textbook passage:

Categorical propositions, and indeed all English sentences, can be broken down into two parts—the subject and the predicate. These terms are shared by both grammar and logic, and they mean the same thing in both disciplines. The subject is that part of the sentence about which something is being asserted, and the predicate includes everything being asserted about the subject. (Writing Logically, Thinking Critically, by Sheila Cooper and Rosemary Patton, 7th edition, p.161, emphasis mine)

Reviewing this passage with my students, I explained my nuanced position:


First, imagine students trying to get their brains around Cooper and Patton’s final sentence. Isn’t every part of every sentence a part about which something is being asserted? Is every part of a sentence a subject?

Where does Cooper and Patton’s claim originate from? I’ve got my hunch. The analysis that grammatical subjects/predicates are equivalent to the logical ones traces back to Aristotle. In an Aristotelian analysis, we see the sentence

1. Socrates is mortal.

(In this example sentence and others, the grammatical subject is colored blue, and the grammatical predicate, orange.)

analyzed such that “Socrates” is both the logical and grammatical subject, and something like “(is) mortal” is the logical and grammatical predicate. In set theory, this means that the individual “Socrates” belongs to the set of individuals that are mortal.
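This set-theoretic reading can be sketched in a few lines of Python; the contents of the set here are my own invention, purely for illustration:

```python
# Aristotle's analysis of "Socrates is mortal" in set-theoretic terms:
# the individual denoted by the subject belongs to the set denoted
# by the predicate.
mortals = {"Socrates", "Plato", "Aristotle"}  # the set of mortal individuals

# The sentence is true if and only if Socrates is a member of that set.
print("Socrates" in mortals)  # True
```

For a tidy sentence like #1, this membership test is the whole analysis, which is why the grammatical and logical terms appear to coincide.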

Aristotle’s analysis—one of the first recorded analyses of the semantics of human language—lags behind the state of the art by a couple of millennia. In fairness, though, if we’re only analyzing tidy sentences like #1, the logical terms and the linguistic terms line up nicely. But when we analyze more complex sentences, things get messy.

Cooper and Patton’s analysis suggests that a grammatically active sentence and its passive counterpart have different meanings:

2. John hit Mary. (active)

3. Mary was hit by John. (passive)

In fact, #2 and #3 are logically synonymous—each holds true (or false) in exactly the same situations as the other. #2 and #3 differ crucially in their pragmatics. #2 is a more natural answer to

4. What did John do?

while #3 is a more natural answer to

5. What happened to Mary?

Cooper and Patton’s analysis hits another problem with sentences where the grammatical subject does not refer to an entity:

6. There’s a problem.

7. It rained.

In #6, “there” acts as filler material, occupying the grammatical subject position of what linguists call an existential construction. (This assumes a reading where #6 posits the existence of a problem, as opposed to the location of a problem). In #7, “it” is similarly used to fill the grammatical subject position of the meteorological verb “rain.”

As Cooper and Patton would analyze #6 and #7, the entity “there” would probably belong to the set of entities that “is/are a problem,” while the entity “it” would belong to the set of entities that “rained.” But these “meanings” don’t compute.

Cooper and Patton’s analysis grows even more problematic when we examine certain expressions of quantification:

8. Not everyone slept.

If you believe Cooper and Patton’s analysis, this sentence would mean that the entity “not everyone” belongs to the set of individuals who slept. Probably not the right semantic analysis. The sentence is better translated into set theory as follows: at least one person does not belong to the set of individuals that slept.
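The difference between the two translations can be made concrete in Python; the toy domain of individuals below is mine, chosen only for illustration:

```python
# A toy domain of individuals, and the set of those who slept.
everyone = {"Ann", "Ben", "Cal"}
slept = {"Ann", "Ben"}

# The mistaken reading treats "not everyone" as a single entity that
# either is or isn't in the set of sleepers -- but no such entity exists.
# The correct reading: at least one individual is NOT in the set of sleepers.
not_everyone_slept = any(person not in slept for person in everyone)
print(not_everyone_slept)  # True, because Cal is not in the set of sleepers
```

Notice that the quantifier ranges over the whole domain; it cannot be smuggled inside the subject position the way Cooper and Patton’s analysis would require.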

So what exactly is the relationship between the grammatical subject/predicate and the logical ones? Actually, a couple of these terms have gone obsolete, and we should examine each separately:

The grammatical subject: To linguists, this is a purely syntactic position, largely independent of semantics. In English, the subject is identifiable by a number of syntactic and morphological features. Most notably, it’s a noun-phrase in a pre-verbal position. Typically, the subject and verb agree in number. A number of other tests can pinpoint the grammatical subject of a sentence, but the two above are most reliable.

The grammatical predicate: amongst linguists, this term has long disappeared from usage. It still lingers in English textbooks, where its definition tends to be muddled. Some textbooks define it in negative terms—it’s every part of the sentence other than the grammatical subject. In practice, such a definition approximates what linguists might call a “verb phrase.”

The logical subject: the term “subject” isn’t really used in logic or set theory. (I’ve seen it in literary theory, but that’s a separate usage.) Semanticists and logicians tend to speak instead about individuals or entities.

The logical predicate: this term defies easy definition, but it’s used in set theory and predicate calculus (a logical language). A predicate is a semantic relation that applies to one or more arguments. A one-place predicate would be “(be) green.” A two-place predicate takes two arguments. For example, the two-place predicate “hit” involves both a hitter and the entity being hit. Nouns, verbs, and adjectives all correspond to semantic predicates.
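In set-theoretic terms, a one-place predicate can be modeled as a set of individuals and a two-place predicate as a set of ordered pairs. Here is a minimal Python sketch with invented individuals, not anything drawn from predicate calculus proper:

```python
# One-place predicate "(be) green": modeled as a set of individuals.
green = {"frog", "leaf"}

# Two-place predicate "hit": modeled as a set of (hitter, hittee) pairs.
hit = {("John", "Mary")}

print("frog" in green)           # True: green(frog) holds
print(("John", "Mary") in hit)   # True: hit(John, Mary) holds
print(("Mary", "John") in hit)   # False: hit(Mary, John) does not hold

# The active "John hit Mary" and the passive "Mary was hit by John"
# both assert membership of the same ordered pair -- which is why
# sentences #2 and #3 above are logically synonymous.
```

Note that the order of the pair matters: the relation itself is asymmetric, even though active and passive word order can present either argument first.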

As teachers, we must remember that a human language like English differs fundamentally from a logical language. Human language is messy, littered with vagueness and ambiguity. With time, usage and meaning drift. Humans misunderstand and re-interpret. Logical languages are crafted to skirt these problems: their terms are defined carefully, and an expression of a logical language carries one unambiguous, unchanging meaning. Writing teachers will always be puzzling over the meanings within student essays, but a computer program will never puzzle over how to interpret a particularly complex line of code.

Grammar Terminology and E.D. Hirsch’s Cultural Literacy

October 20, 2013

For literacy educators, E.D. Hirsch’s dubious yet influential Cultural Literacy: What Every American Needs to Know (1988) is a must-read—partly because Hirsch often gets caricatured in graduate seminars as the stand-in for what Paulo Freire calls the “banking method” of education. But Hirsch also indirectly helps explain a key problem teachers encounter when we teach about grammar and mechanics, a problem I will focus on in this post.

Yes, everything you’ll ever need to know, all in one book! And as a bonus: the cure to our benighted educational system!

Hirsch’s thesis holds that the K-12 educational system leaves students with a low level of literacy because it over-emphasizes teaching general skills (word decoding, summarizing, or making text-to-self connections), and it under-emphasizes the importance of familiarizing students with the broad range of background knowledge shared by highly literate Americans; Johnny can’t read, according to Hirsch, because he doesn’t understand our cultural symbols.

It’s hardly controversial that background knowledge helps readers more deeply understand a text. What English teacher hasn’t seen a class of students overlook crucial textual allusions to things that most teenagers know nothing about—things like the filibuster, Sodom and Gomorrah, or Jim Crow? That’s why reading teachers, as part of a venerable pre-reading activity, build and activate students’ schemata.

But Hirsch’s undoing stems from the way the book exemplifies a genre that today seems all too familiar. This genre:

1. starts from the premise that the K-12 education system is failing.

2. focuses not on what students can do, but on what they can’t.

3. takes a glib approach to examining competing pedagogies.

4. presents the author’s prescriptions as a panacea.

When it comes to our sentence-level pedagogies, how are Hirsch’s ideas relevant? He infamously ends his book with a 65-page list of items titled “What Literate Americans Know”—everything from “abominable snowman” (152) to “Zionism” (215). Introducing the list, Hirsch professes that he’s describing and not prescribing, but his list betrays his prejudices, since it includes “a rather full listing of grammatical and rhetorical terms because students and teachers will find them useful for discussing English grammar and style” (146). (Hirsch holds a Ph.D. in English.)

Here Hirsch implies that college writing teachers cannot assume all students enter college knowing common grammar terminology. He’s right. But this doesn’t mean our high schools are failing. It’s just an objective fact about our students.

A comparison: I once taught in a community college that served high schools dominated by whole-language instruction. In such contexts, many of my students sat mystified when I first mentioned terms like subject, verb, and preposition. Over time, I learned to presuppose that only one group of students was familiar with common grammatical terms—those with coursework in ESL. That was in California. Now I’m in Illinois, and things are different: my current writing students mostly tell me that in high school they already learned grammar terminology. Neither high school system is inherently better; each chooses to emphasize different things.

When students enter our classrooms without background knowledge in grammar terminology, it constrains how much we can teach when it comes to sentence-level pedagogies. Conversely, when students bring such knowledge into the classroom, we’re enabled to do more. (And even when they have that background knowledge, their prior teachers may have used differing definitions for the same term, a problem I discuss in another blog post.)

A practical example helps: how do you address subject-verb agreement if most students don’t know what a subject or a verb is—not to mention related terms like noun, suffix, tense, grammatical number, and auxiliary verb?

This obstacle certainly can be surmounted. I see two general approaches:

1. The traditional approach: you could start by explicitly teaching students the definitions of subject and verb. When we label the parts with jargon, the jargon makes the discussion easier to follow. But this approach has pitfalls: it can bore students (grammatical definitions lack inherent sexiness); it takes instructional time away from other topics; and you risk walking down the slippery slope of having to define many more grammatical terms.

2. The inductive approach: the concept of agreement can be taught through intuitive example sentences, while skirting the technical names of the sentence parts and thus saving time. I discuss the advantages of this approach here. The major disadvantage is that students do not acquire knowledge and definitions of grammatical jargon that they can use in the future.

Either way, it’s a trade-off.

My point is this: if you have an ambitious agenda when it comes to sentence-level pedagogies, think carefully about what your students already know. You might need to scale back your ambitions.

Three Fun Videos on Grammar

In a prior post, I discussed how to keep your sentence-level instruction fresh and fun. In addition, you can also break up the usual classroom routine with some YouTube videos on grammar topics. As a bonus, videos appeal to students with varied styles of learning.

Here are my three favorites:

Victor Borge’s Phonetic Punctuation

A student of mine showed me Borge’s video when we were discussing the differences between written and spoken English. I had been pointing out that writers who write how they talk tend to mix up different punctuation marks, since punctuation marks all sound the same—like silence.

Borge’s comedic routine leads us to a similar point much more cleverly. He starts from the premise of a spoken language where each punctuation mark is pronounced with its own distinct onomatopoetic flamboyance. From there, it just gets goofier.

The shtick had me laughing so hard that at first I overlooked Borge’s questionable implication that written language prevents miscommunication better than spoken language. Most writing teachers would take issue with this implication, especially after trudging through a particularly bewildering stack of student essays.


Schoolhouse Rock’s Conjunction Junction

This Schoolhouse Rock animation is a classic. In fact, I can hardly finish my lesson on conjunctions without some student singing the Conjunction Junction refrain.

The catchy, repetitive tune succinctly explains the function of conjunctions. By today’s standards, the animation is clunky, but students get a kick out of that too.

As a teacher, I appreciate that this video gives students another way to conceptualize how the pieces of sentences fit together—like boxcars in a train. As a linguist, I instinctively want to point out the inaccuracies of this metaphor for sentence structure, but by the time the video finishes, many of my students look like they’re ready to start dancing!


College Humor’s Grammar Nazis

On the topic of metaphors, this College Humor video extends the metaphor that people who self-righteously correct your grammar resemble Nazis. This parody of Quentin Tarantino’s Inglourious Basterds addresses the troublesome details of English usage, including the “dangling modifier” and the “double negative,” as well as the case marking of conjoined personal pronouns (“me and her” versus “she and I”).

This video can be used in a variety of ways. It offers a good jumping-off point for distinguishing important issues of usage from the distraction of prescriptive “rules.” It also raises the issue of why people hold their beliefs about usage with such Nazi-like zeal. Of course, logically we all know that a slip in linguistic usage differs fundamentally from a real atrocity like the Holocaust. But why do some get more irked by linguistic slips?

Since the dialogue unfolds quickly, it helps to transcribe key exchanges onto the board. From here, the usage issues can be examined and teachers can address the pseudo-logic that motivates many of the prescriptive “rules.”

Warning: the video ends with graphic violence that’s not appropriate for all classrooms, but that part can be skipped without loss.

Where do Errors Come From?

Let’s consider some common explanations of the sources of linguistic error and disfluency, explanations I see reiterated over and over (both explicitly and implicitly) in handbooks and the professional literature. Even if errors are being blamed on text messaging or bleeding-heart teachers or whatever else, the source is reducible to one of the following:

1. The Standard Theory: students make errors because they don’t fully comprehend the grammatical patterns of English, as well as the idiomatic expressions and the conventions of usage. Students simply lack linguistic knowledge. This view has been around since the invention of writing, and most teachers still turn to it as the default.

2. The L1-Interference Theory: multilingual writers commit errors when they over-generalize the grammatical patterns of their native language (L1) to English. (Similarly, this theory suggests that students will transfer the grammatical patterns of native dialects of English into their school writing.) This theory comes to us courtesy of our colleagues who teach ESL and foreign languages.

3. The Speech-Based Theory: many students write in ways that are closely modeled on the way they talk or the way people around them talk.  Standard written English differs substantially from speech, which is fragmentary and halting, and which is aided by para-language and contextual cues. This theory is often connected with scholarship on “Generation 1.5” learners.

4. The “Competence versus Performance” theory: students commit many errors that they know how to identify and fix. Their performance in the writing task misrepresents their actual competence—their true knowledge of the language. Errors emerge when writers are tired or distracted, when they simply fail to invest enough time, or even when they don’t know how to use their word processing software effectively. Other times, our brain just hiccups. This theory is usually attributed to the early work of Noam Chomsky.

5. The Complex Ideas Theory: students who otherwise write grammatically clean prose make more errors and write more clumsily when they are asked to write about complex ideas or use academic registers that they can’t fully control. The complexity of the writing task overloads their ability to process syntax. Amongst others, this theory is articulated particularly well by David Bartholomae in Inventing the University.

Different theories are not mutually exclusive. In fact, the source of any given error can usually be explained by some combination. For instance, #4 and #5 both suggest that errors often belie our true abilities in ideal circumstances. Or a student may lack linguistic knowledge about how standard English enables two sentences to be joined (#1), and might thus fall back on the patterns of their native language (#2) or the patterns of their speech (#3). For any given error and any given student, discovering the source requires thoughtful inquiry.

Still, as teachers, we often lack the time to conduct such inquiry, so we make hidden assumptions about where errors come from. We should be aware of these assumptions, because each theory suggests something different about how to respond:

1. The Standard Theory suggests that we should teach students the grammatical patterns of correct English, and how these patterns are distinguished from the incorrect. Such pedagogies typically include teaching students “rules” or having them correct errors and/or demonstrate correct usage in workbook exercises. Alternatively, students could be encouraged to spend lots of time reading grammatically correct prose, so they can internalize and unconsciously imitate the patterns.

2. The L1-Interference Theory suggests a similar response to the standard theory, except that instruction should be tailored to the specific errors and disfluencies that characterize particular groups of ESL students. For instance, if students speak a native language that lacks the inflectional morphology on verbs that characterizes English and they tend to leave endings off of verbs, instruction should focus on the patterns of English inflectional morphology.

3. The Speech-Based Theory suggests that students need to be made more aware of the differences between the conventions of spoken English and written English. Since the two are essentially different dialects of the same language, the approach suggested is akin to #2 above. And since speech-based errors suggest students have been under-exposed to the written word, students should be encouraged to spend lots of time reading, similar to #1 above.

4. The “Competence versus Performance” Theory suggests that students need to learn the steps and strategies for effective proofreading. Further, they may need to learn academic success skills, such as how to budget ample time for their writing process or how to manage the mental exhaustion of academic work.

5. The Complexity Theory suggests—somewhat counter-intuitively—that errors and disfluencies often represent a necessary sign of linguistic development, rather than a cause for concern. When writing lacks errors, the assignment has failed to challenge. Many errors will resolve themselves without a teacher’s intervention as students grow more experienced and comfortable with writing tasks of greater complexity.

Again, no one way is “right.” In my own teaching, I integrate a little of each into my classroom instruction. Once I have a good understanding of a particular student or group of students, I tailor my instruction to their grammatical needs.