Use ‘to be’, or not?
Raymond LeClair
In a fraught email exchange with my adult son, I noticed something odd about our writing: a disparity in our use of the verb “to be”. My writing contained many instances of its various forms, “be”, “am”, “is”, “are”, “being”, “was”, “were”, and “been”, while my son’s writing contained far fewer, though not none. I felt as though I had read something about this usage before, but could not immediately remember where.
Now when I say fraught, I do not mean that any party to our interaction bore ill will. On the contrary, the tension in our conversations seems to stem more from generational differences in the importance and practice of noticing, responding to, and discussing emotions. I think the cultural norms around emotional intelligence, a term which, by the way, did not enter usage until well after the birth of my son, differed markedly between our respective formative years.
Anyway, in our conversation about this email, the idea of cognitive distortions came up, though not in reference to my observations. In a recent article in the European Journal of Psychology, the authors define cognitive distortions as …
“… negative biases in thinking that are theorized to represent vulnerability factors for depression and dysphoria.”
One such negative bias included in lists of cognitive distortions involves the tendency to use “should” statements about what we or others should have done, but didn’t. This usage stems from the negative bias that we or others failed to do what we could have done, when, in fact, other factors may have prevented us from doing so. Another negative bias often listed involves all-or-nothing thinking, for example, the tendency to evaluate things as “good” or “bad”. Such thinking may lead to a failure to see the shortcomings in the “good”, or the benefits in the “bad”, a negative bias in the sense of distorting the reality of a situation. In fact, a type of psychological treatment, Cognitive Behavior Therapy, focuses on recognizing and reevaluating these cognitive distortions, and, apparently, can effectively treat depression and anxiety.
Now cognitive distortions differ from the related idea of cognitive biases, or deviations from rationality, introduced by Amos Tversky and Daniel Kahneman. With cognitive biases, an individual’s mental model of reality, which likely differs from the objective reality of their situation, guides their behavior. Although distortions of reality in a sense, biases differ from distortions in an important respect: some biases may serve an adaptive purpose. For example, heuristics, which enable one to make decisions more rapidly, can provide an advantage in situations in which optimality of choice matters less than speed. Other biases, though, lead to behaviors that might seem irrational.
For example, people tend to overestimate their ability and skill to make decisions. A related and seemingly well-known bias, the Dunning-Kruger effect, holds that confidence starts high with little expertise, falls quickly as one gains knowledge, and then recovers once one acquires genuine expertise. See this recent post describing my experience with Dunning-Kruger. In the post I describe Springbok’s efforts to design a system using the Oxford Nanopore Technologies portable DNA sample preparation device, the VolTRAX, and portable DNA sequencing device, the MinION, to automate edge sequencing of wastewater samples for genomic surveillance of novel pathogens such as bacteria and viruses. The post recounts our ride on the Dunning-Kruger roller-coaster and our initial overconfidence.
Now both cognitive distortions and biases differ from fallacies, or reasoning based on incorrect logic. Many informal fallacies with evocative names exist: the continuum fallacy, the ecological fallacy, the historian’s fallacy, if-by-whiskey, and the nirvana fallacy. While informal fallacies seem to me to occur commonly in conversation, formal fallacies also exist, and I will use them to draw the distinction between fallacies and cognitive distortions and biases. Consider especially propositional fallacies, for example, affirming a disjunct: A or B; A; therefore, not B. A fallacy, since “A or B” does not exclude “A and B”. Even though this pattern obviously fails as logic, I wonder how often I fall into the trap of affirming a disjunct. Other propositional fallacies resemble this one, for example, affirming the consequent: if A, then B; B; therefore, A. A fallacy, since “if A, then B” does not exclude “if C, then B”. And denying the antecedent: if A, then B; not A; therefore, not B, a fallacy in much the same way.
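To make the flaw in affirming a disjunct concrete, the short sketch below, a Python illustration of my own devising rather than anything drawn from the sources above, enumerates the truth-value assignments under which both premises hold and reports any case in which the conclusion “not B” fails.

```python
from itertools import product

# Affirming a disjunct: from "A or B" and "A", conclude "not B".
# The argument form counts as valid only if the conclusion holds in
# every case in which both premises hold.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if (a or b) and a   # both premises hold...
    and b               # ...yet "not B" fails, because B holds
]

print(counterexamples)  # [(True, True)]: A and B together defeat the inference
```

The same enumeration, applied to affirming the consequent or to denying the antecedent, turns up analogous counterexamples.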
At about this point in my Wikipedia reading I remembered where I had seen issues around the use of the verb “to be”. E-Prime. A version of the English language that excludes all forms of the verb “to be”. Wikipedia provides this helpful summary of the different functions of “to be”:
- Identity: “The cat is Garfield.”
- Class Membership: “Garfield is a cat.”
- Class Inclusion: “A cat is an animal.”
- Predication: “The cat is furry.”
- Auxiliary: “The cat is being bitten by the dog.”
- Existence: “There is a cat.”
- Location: “The cat is on the mat.”
In 1965, D. David Bourland Jr. proposed E-Prime as an addition to the discipline of general semantics developed by Alfred Korzybski. As part of general semantics, Korzybski thought the identity and predication uses of the verb “to be” had a faulty structure. For example, the sentence “Elizabeth is a fool” would have a less faulty structure when restated as “Elizabeth has done something we regard as foolish”. In the former, Elizabeth becomes encompassed by the concept of fool, while in the latter, Elizabeth has done one particular thing that we regard as foolish, implicitly recognizing that others might not regard that particular thing as foolish.
Similarly, Bourland and E. W. Kellogg III, in an article entitled “Working With E-Prime: Some Practical Notes”, write:
“E-Prime allows users to minimize many “false to facts” linguistic patterns inherent in ordinary English, and to move beyond a two-value Aristotelian orientation that views the world through overly simplistic terms such as “true-or-false,” “black-or-white,” “all-or-none,” “right-or-wrong.””
And
“E-Prime automatically eliminates the “is-dependent,” overdefining of situations in which we confuse one aspect, or point of view, of an experience with a much more complex totality (…). This overdefining occurs chiefly in sentences using the “is of identity” (e.g., “John is a scientist”) and the “is of predication” (e.g., “The leaf is green”), two of the main stumbling blocks impeding a non-Aristotelian approach. E-Prime can also enhance creativity in problem solving, by transforming premature judgment statements such as “There is no solution to this problem” into more strictly accurate versions such as “I don’t see how to solve this problem (yet).””
The E-Prime idea that “is” possesses overdefining power, say, with respect to Elizabeth, who perhaps does not act foolishly much of the time, or to problems we face, which we often can and do solve, interests me most. Notice the similarity to the cognitive distortion of all-or-nothing thinking, and to the cognitive bias of overconfidence. And this similarity got me wondering: does my thinking cause me to choose “is” in this overdefining way? Or, when I use “is” in this overdefining way, do I find my cognition distorted and biased? So, as an experiment, I set about writing and speaking in E-Prime.
I have not found writing in E-Prime especially difficult, since I can read what I have written and edit to remove any rogue uses of “to be”. The practice does take time, though, and my writing has slowed down. Furthermore, since the Messages application on macOS does not support editing after sending, I find that sending text messages requires frequent post hoc repair. I have come to want all messaging applications to support editing, like that provided by Slack, in which I can edit a message using the up arrow. On the other hand, I have found speaking in E-Prime problematic. I find consistency especially difficult. For example, if I converse casually with someone who does not object to pauses, restarts, and restatements, then I can manage to speak mostly in E-Prime. But when engaged in conversation with a client, I find that the need to keep the conversation moving promptly overrides my ability to speak in E-Prime. And so some of the time I speak in E-Prime, and some of the time I don’t, which makes the habit of speaking consistently in E-Prime difficult to acquire. As it happens, Kellogg undertook this very experiment, which he describes in an article entitled “Speaking in E-Prime: An Experimental Method for Integrating General Semantics into Daily Life”, and my experience closely matches his.
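Readers who want to attempt a similar experiment might appreciate a mechanical aid. The short sketch below, my own illustration rather than any established E-Prime tool, flags lines of a draft that contain a form of “to be”; the word list simply reflects the base forms and a few common contractions, so treat it as a starting point rather than a definitive filter.

```python
import re

# Base forms of "to be" plus a few common contractions; contractions
# ending in "'s" can also stand for "has", so the check errs on the
# side of over-flagging.
TO_BE_PATTERN = re.compile(
    r"\b(?:be|am|is|are|was|were|been|being|"
    r"i'm|you're|we're|they're|he's|she's|it's|that's|there's)\b",
    re.IGNORECASE,
)

def flag_to_be(text: str) -> list[tuple[int, str]]:
    """Return (line number, line text) pairs that contain a form of 'to be'."""
    return [
        (number, line)
        for number, line in enumerate(text.splitlines(), start=1)
        if TO_BE_PATTERN.search(line)
    ]

if __name__ == "__main__":
    # A small demonstration draft with two offending lines.
    draft = "This is a draft.\nI wrote it quickly.\nIt was not easy to write."
    for number, line in flag_to_be(draft):
        print(f"line {number}: {line}")
```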
What did I learn from my experiment? Did I find my thinking liberated from the “true-or-false,” “black-or-white,” “all-or-none,” or “right-or-wrong” orientation? Well, not exactly. My experience seems more to involve developing an awareness of when my statements fall into these categories. That awareness does permit me to assess whether to let my thinking remain there or to take a more nuanced view. While the practice remains far from a settled habit, I do find it sufficiently useful to continue developing it. Oh, and when did you first notice that I wrote this post in E-Prime?