Textual Criticism - Lesson 2

How to Count Textual Variants

The wrong way to count textual variants is to take each unique variant and multiply it by the number of manuscripts that contain it, yielding millions of variants. The correct method is to count the same variant that occurs across many manuscripts as one variant, yielding not millions but hundreds of thousands of predominantly minor variants.

Daniel Wallace

How to Count Textual Variants


1. How Not to Count Variants

a. Norman Geisler

b. Several errors in his statement: the number of manuscripts, the number of variants, and how variants are counted

2. How to Count Variants: the number of wording differences, regardless of the number of manuscripts

3. What was the motive for the errant view?

4. What was the source for this evangelical miscalculation?

5. Proof of the miscalculation

6. Summary

  • Since the original autographs of the Bible no longer exist, the primary goal of biblical textual criticism is to determine, with as much accuracy as possible, the exact wording of the original inspired text as it left the author’s hands. As a secondary goal, we desire to trace changes to the text and gain a window into ancient Christianity.


  • Compared to other ancient literature, the field of biblical textual criticism possesses “an embarrassment of riches.” New Testament TC absolutely dwarfs the resources of other ancient literature, not only in the number of manuscripts and how soon after the originals they were produced, but also in confirming quotations from extra-biblical writings.

  • The vast majority of NT Variants are minor, easily explained scribal errors that don’t affect the meaning of the text. Among 400,000 textual variants of the NT, over 99% make no difference to the meaning, and less than 1% are both meaningful and viable.

  • Recent attempts to change the goals of NTTC such that critics no longer seek to obtain the original autographs in favor of understanding a writer’s historical contexts undermine the original goal of NTTC. However, faithful textual critics must not subscribe to the notion of a “multivalence” of the original text, but instead pursue the primary goal: to get as close as possible to the original autographs.

  • The original New Testament documents were probably recorded on scrolls, but the vast majority of all copies were made in codex format. This lends support to the theory that Christians used cutting-edge, easier-to-use media technologies to further their word-based faith.

  • Various materials were used in creating NT manuscripts. Wallace discusses papyrus, parchment, and paper, each with advantages and disadvantages for transmitting the text faithfully.

  • There are three fundamental issues that significantly affect the transmission of the NT Text: early copies and causes of corruption, the role of canon in shaping the text, and the emergence of localized text forms.

  • Because of the radical nature of Christianity, it took some time for OT-based Jews to accept the NT as canonical. But over time, coinciding with the progressive development of a certain “canon-consciousness,” scribes were compelled to modify texts in various ways, not for malicious reasons, but in efforts to clarify, preserve, and revere the sacred scriptures.

  • Although questioned by some critics, most TCs acknowledge four major localized forms of the NT text: Alexandrian, Western, Byzantine, and (questionably) Caesarean. These “cross-pollinated” text families arose from diverse historical, cultural, and socio-political factors, but all serve to strengthen, not weaken, the integrity of the NT text.

  • While it is undeniable that NT scribes made mistakes of various types in copying the inspired text, understanding the often simple reason for these mistakes renders much reward in understanding the sacred text. The fundamental principle of textual criticism is this: select the reading that best explains the rise of the other readings.

  • Contrary to popular belief, intentional scribal changes were not malicious in nature, but rather displayed pious intentions and a high view of scripture. Scribal corruptions, for the most part, did not reflect a desire to obfuscate the scripture, but to clarify it.

  • This lecture introduces papyri, critically important as the earliest witnesses of New Testament text. Papyri are some of the most important documents of NT MSS.

  • Since papyri are the earliest records of the NT text (containing about 50% of the NT), they are critical in revealing its original shape. Even Codex Sinaiticus and Codex Vaticanus, the two most important NT MSS in the world, are confirmed by the papyri.

  • This lecture describes the most important New Testament manuscripts: the majuscules, formerly known as uncials. These documents contain the full text of the NT many times over, written on parchment in all capital letters.

  • This lecture continues the discussion about the most important New Testament manuscripts: the Majuscules, formerly known as uncials. This lecture describes Codex Alexandrinus - A, Codex Ephraemi Rescriptus - C, Codex Sinaiticus (Aleph), and Codex Washingtonianus - W - 1906.

  • Since the field of TC is so small, obtaining resources is very expensive. However, the internet is still a great place to conduct free TC research. In this lecture, major internet resources for studying NT manuscripts are compared and contrasted.

  • Founded in 2002 by Daniel Wallace, the mission of the Center for the Study of New Testament Manuscripts (CSNTM) is to be a premier resource in the great and noble task of determining the wording of the autographa of the New Testament. This is facilitated through high-resolution digital photography of extant Greek New Testament manuscripts so that such images can be preserved, duplicated without deterioration, and accessed by scholars doing textual research.

  • The KJV has rightfully been called “the single greatest monument to the English language,” but this is from a literary rather than a translation standpoint. The Greek text behind the KJV is far inferior to that of modern translations in terms of textual basis, late manuscript dates, and a less-than-perfect process of creation.

  • The arguments used to position the Textus Receptus as the sole textual basis for the true word of God range from questionable to downright irrational. Proponents of this position rely on a view of the so-called “doctrine of preservation,” which illegitimately uses certain Bible texts to argue its dubious claims.

  • This lecture describes the major problems of TR-only people, who subscribe to an unbiblical Doctrine of Preservation, which as defined, effectively emerges as a Marcionite view of the Bible. Wallace claims that while there is no biblical, exegetical, or empirical basis to argue for the doctrine of preservation, God has overwhelmingly preserved Scripture in a way that is not true of any other ancient literature.

  • In this lecture, Daniel Wallace describes the discovery of Codex Sinaiticus and its importance to the field of textual criticism. He recounts fascinating details about his visits to St. Catherine’s, the oldest Christian monastery, at the base of Mount Sinai, Egypt.

  • This lecture summarizes the life of Constantine von Tischendorf [1815-1874], and his very important discovery of Codex Sinaiticus.

  • This lecture describes highlights of the history of NT TC since the TR. Describing the formation of the Textus Receptus, Wallace also characterizes major players in the process of arriving at the modern text.

  • This lecture describes Westcott and Hort, and how they dethroned the Textus Receptus by proving that the Textus Receptus was late, inferior, and secondary.

  • This lecture is 1 of 3 lectures on reasoned eclecticism. Eclecticism is the process of compiling a text from multiple sources, while reasoned eclecticism consists of rectifying the differences and evaluating variants based on both their attestation and intrinsic merit.

  • This lecture is 2 of 3 lectures on reasoned eclecticism.

  • This lecture illustrates the principles of reasoned eclecticism.

  • Was Jesus "moved with compassion" or "indignant" when he saw that his disciples could not heal the man with leprosy?

  • Why was the man waiting for so many years at the pool of Bethesda? Was there really an angel stirring up the waters and healing the first one in?

  • Do these two passages call Jesus “God”? Thankfully, the Bible affirms the divinity of Christ many other ways and in many other passages than these two.

  • This lecture presents some very technical arguments for why Daniel Wallace believes that the phrase “οὐδὲ ὁ υἱός” (nor the Son) is not an authentic part of Matthew 24:36.

  • This lesson teaches you to appreciate the rigorous historical research required in biblical studies and the importance of respecting dual authorship. It sharpens your understanding of external and internal textual evidence and their implications for a passage's authenticity.

  • The text of Mark 16:9-20 is most likely not part of the original inspired text of scripture, and verse 8 is Mark's intended ending.

  • This lecture evaluates popular translations of the Bible in terms of their textual basis. The bottom line is that while all translations are interpretations, The Spirit of God has ensured that the truth of the scriptures can be found in any one of them, and reading widely among different versions is good to promote understanding about different concerns of TC.

  • As time progresses in the field of textual criticism, we continue to get razor-thin close to the original text. The good news is that, with all the known variants, no essential doctrine of the Christian faith is jeopardized by any viable variant, so we can have great confidence that the text of our Bibles provides us all we need for life and godliness.

Dr. Daniel Wallace is one of the world's leading textual critics. His ministry, the Center for the Study of New Testament Manuscripts (CSNTM.org) is currently the most prolific organization for discovering, photographing, and cataloging ancient Greek manuscripts of the New Testament. In this class, he discusses the issues of textual variants, how ancient manuscripts were made, the types of errors that we can see in the manuscripts, the issue of the Textus Receptus and its role in the King James translation of the Bible, the historic work of Westcott and Hort, and ends with discussions of the most famous textual problems.




A. How to Count Textual Variants

There have been two different approaches amongst evangelicals to counting textual variants. I will go through this carefully so that you understand the issues. But first I want to point out how we should not count variants:

1. How Not to Count Variants

I need to use the sources that I have in order to demonstrate this. Some folks who are not textual scholars get their information from textual critics, yet they have not asked those critics how to define a variant. In the Baker Encyclopedia of Christian Apologetics, published in 1998, Norm Geisler writes that some have estimated that there are about two hundred thousand textual variants. First, he says, these are not errors but variant readings, the vast majority of which are strictly grammatical. Second, these readings are spread through more than fifty-three hundred manuscripts. So, according to Geisler, a variant spelling of one letter of one word in one verse in two thousand manuscripts is counted as two thousand variants.

Frankly, there are a lot of problems with what he has to say. The primary goal of New Testament textual criticism, as we said in lecture one, is to recover the wording of the autographs, the Ausgangstext: the text as it left the apostles’ hands. Any deviation from that wording is, by definition, an error. We do not mean that a variant reading creates a problem for inerrancy; the question is whether this wording, down to the very letters, is what the biblical author himself wrote. By that definition, any deviation from it is an error. It is not an error theologically, and not an error for inerrancy, but a deviation from what the author wrote. Textual critics, some of whom are the most obsessively detail-oriented people you could meet (and some of whom you would not want to meet), are so focused on details that they want to get every single letter right. Take spelling: the name John is spelled with either one ‘n’ or two, and every time the word John occurs in the New Testament, both spellings appear in the manuscripts.
If we hold to verbal plenary inspiration, as Norman Geisler does, we would say that the very words are inspired, and what we would most likely mean by that is inspiration right down to every jot and tittle. In other words, it is important for us to try to get back to those very words, the way the author wrote them.

We need to think about errors in different categories. One category is theological error, something that contradicts the truth of the Bible, which has to do with inerrancy. Another is error in the sense of a difference from what the author wrote, a reading that is not the exact wording the author used; in terms of inspiration, that becomes a separate issue. But even scholars who have zero bibliology, who do not believe the Bible is the Word of God, inspired, inerrant, or infallible, are concerned to get back to the exact original wording of the New Testament. Because it is a historical document, we want to know what it said. When we look at the Gettysburg Address, for example, we want to know what Lincoln said, even how he spelled a word. This is part of the task.

The number of manuscripts Geisler mentions, fifty-three hundred, is far too low, though he is not to be faulted too much for this; he wrote in 1998 and was using older sources. Today the number of Greek New Testament manuscripts known to exist is five thousand eight hundred and twenty-four. I am saying this in the year 2013, and that number is actually on its way up, because more manuscripts have recently been discovered that have not yet received an official catalogue number. So Geisler was off by some five hundred manuscripts. These are just Greek New Testament manuscripts, by the way, in the original language of the New Testament, not counting any of the ancient versions into which it was translated.

The next point is that the estimate of two hundred thousand textual variants is too low. In the last hundred years or so, scholars looked at these textual variants and estimated that there were about one hundred and fifty thousand; then, earlier in the 20th century, they came up with the number of about two hundred thousand. That number has increased as we have examined more and more manuscripts and done more comparisons. Frankly, we do not have an exact number of how many textual variants there are, but two books have really helped us understand this. The first is by Herman Hoskier, who in 1929 published Concerning the Text of the Apocalypse, a book that took him thirty years of work to compile. Hoskier went through every single manuscript of the Book of Revelation and collated them; that is, he wrote down every deviation in wording among them. He printed as his base text the Textus Receptus, the Greek text that stands behind the King James Version Bible, and wherever any manuscripts differed from it, he individually listed those manuscripts.
I dare say that if we are going to count textual variants in terms of how many manuscripts have a variant reading, then for the Book of Revelation alone we would have two hundred thousand. But that is not how we are supposed to count variants.

So Hoskier shows all these differences. This is what a collation is: you choose a base text, compare each manuscript to it, and list only the differences. In listing those differences you see how much each manuscript agrees with that base text and how much it does not. Revelation was the first book of the New Testament ever completely collated, with all the manuscripts examined, compared, and written out, and it took the scholar thirty years. Hoskier most likely chose Revelation because it was the easiest New Testament book to do: we have fewer manuscripts for Revelation than for any other book of the New Testament, only about three hundred and twenty-five. As I said, we have about fifty-eight hundred New Testament manuscripts, but that does not mean that every manuscript covers the whole of the New Testament. In fact, only sixty of them cover the whole New Testament. At the same time, the average Greek New Testament manuscript is more than four hundred and fifty pages long, so it is a lot of material to go through.

My Center for the Study of New Testament Manuscripts has a fundamental objective: once we digitize every Greek New Testament manuscript and feed the images through some very sophisticated OCR software now being developed, software that will be able to read these handwritten texts, we will be able to print out a complete apparatus of all the textual variants for all of the manuscripts. If we had all of these manuscripts digitized right now and one person were to go through and collate them all, it would take about four hundred years; with this software, it would take about five years for one person to do it. That is an enormous difference. What has really driven me is the recognition that this work has to be done, and to date only two New Testament books have been completely collated: the Book of Revelation by Hoskier, and Jude’s Epistle by Tommy Wasserman of Lund, Sweden. Wasserman did his doctoral dissertation on the text of Jude, and it took him six years to collate all the manuscripts for Jude. On the basis of those two works, scholars have been able to come up with a better estimate of how many textual variants we have. Today it is about four hundred thousand; some estimate as high as half a million, and some as low as three hundred thousand.
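The collation procedure described above can be sketched in miniature. This is only an illustrative sketch: it compares witnesses word by word against a base text (a real collation must also handle additions, omissions, and transpositions), the manuscript sigla are invented, and the sample uses simplified, unaccented Greek from John 4:1.

```python
# A miniature collation: record only the places where a witness's wording
# differs from the base text. Word-by-word comparison is a simplification;
# real collations also handle additions, omissions, and transpositions.
# Unaccented Greek from John 4:1; the manuscript sigla are invented.

def collate(base, witnesses):
    """Return {word_index: {variant_reading: [sigla]}} for every deviation."""
    apparatus = {}
    base_words = base.split()
    for siglum, text in witnesses.items():
        for i, (b, w) in enumerate(zip(base_words, text.split())):
            if w != b:
                apparatus.setdefault(i, {}).setdefault(w, []).append(siglum)
    return apparatus

base = "ως ουν εγνω ο ιησους"           # "when therefore Jesus knew"
witnesses = {
    "ms1": "ως ουν εγνω ο κυριος",      # reads "the Lord" for "Jesus"
    "ms2": "ως ουν εγνω ο ιησους",      # agrees with the base text
}
print(collate(base, witnesses))  # {4: {'κυριος': ['ms1']}}
```

Only the deviations are recorded, exactly as in Hoskier's apparatus: witnesses that agree with the base text never appear at that place.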

2. How to Count Variants

In regards to Geisler again, another mistake he made was to claim that textual variants are counted by the number of manuscripts that support them. But the number of manuscripts is almost irrelevant. Almost, because there has to be at least one: if you do not have one manuscript that disagrees with the others, then by definition you do not have a variant. Hypothetically, though, it does not matter whether one manuscript disagrees with the rest or a million do; that counts as one textual variant. It is not the number of manuscripts that counts; it is the change in wording that counts. So how do we count textual variants? This is not a matter of opinion; this is absolute fact, because apologists have been taking these numbers from textual scholars without letting the textual scholars define what they mean, and then guessing at it. A textual variant is counted by the number of wording differences found in the manuscripts, regardless of how many manuscripts have that wording. All that is necessary for a variant to count is one manuscript with that wording. So whether it is one manuscript or two thousand that have the same variant, it still counts as only one variant.

Examples: In John 4:1, there are two different variants. The Gospels, by the way, have more manuscripts than any other portion of the New Testament: as many as two thousand manuscripts contain the Gospels, where Revelation has about three hundred and twenty-five. I am only talking about Greek manuscripts now. The next largest section after the Gospels would be Paul’s letters, with somewhere around eight hundred and fifty manuscripts, and then Acts and the Catholic Epistles, the general letters, not written by Roman Catholics but written by someone besides Paul, with about six hundred and fifty. From two thousand down to three hundred and twenty-five, there are still hundreds of manuscripts for every portion of the New Testament, and some of these manuscripts go all the way back to the 2nd century. Let us say that we have roughly a thousand manuscripts on each side. Consider this: ‘When Jesus knew that the Pharisees had heard that Jesus was baptizing more disciples than John.’ Does it say ‘when Jesus knew’ or does it say ‘when the Lord knew’? This is a textual variant: some manuscripts have ‘when the Lord knew’ and others have ‘when Jesus knew.’ It depends on your base text how you count the variants, and there will be some mild differences. But if I start with a base text that says ‘when Jesus knew,’ which has fewer manuscripts, and maybe a thousand or twelve hundred manuscripts have ‘when the Lord knew,’ that counts as one variant. It does not matter if I start instead with ‘when the Lord knew’; it still counts as a single variant. So it does not matter how many manuscripts we have. That should be fairly clear, and it is important to keep it in mind.
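The counting rule in this John 4:1 example can be sketched in code. This is an illustrative sketch only: the manuscript sigla are invented, and the counts are the rough figures used in the lecture.

```python
# Illustrative sketch: counting the John 4:1 variant ("Jesus" vs. "the Lord").
# Manuscript sigla are invented; the counts are rough lecture figures.

def count_variants(base_reading, readings):
    """Correct method: one variant per wording difference from the base
    text, no matter how many manuscripts attest each reading."""
    return len(set(readings) - {base_reading})

def count_variants_errant(base_reading, witnesses):
    """Errant method: one 'variant' per manuscript that deviates."""
    return sum(1 for reading in witnesses.values() if reading != base_reading)

# Roughly a thousand manuscripts read "Jesus", twelve hundred "the Lord".
witnesses = {f"ms{i}": "Jesus" for i in range(1000)}
witnesses.update({f"ms{i + 1000}": "the Lord" for i in range(1200)})

print(count_variants("Jesus", witnesses.values()))     # 1
print(count_variants_errant("Jesus", witnesses))       # 1200
# Starting from the other base text changes nothing about the correct count:
print(count_variants("the Lord", witnesses.values()))  # still 1
```

Whichever base text you start from, the correct method yields one variant here, while the errant method inflates the count by the number of deviating manuscripts.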

3. What was the Motive for the Errant View?

So, an interesting question to ask is: what was the motive for this errant view about errors? I think this method of counting appears to reduce the actual number of differences in wording among the manuscripts to a few hundred. If you claim there are two hundred thousand textual variants, and you think there are fifty-three hundred manuscripts, then suppose a thousand manuscripts disagree at each place of variation: you would only need two hundred such places to reach two hundred thousand. It would be really nice if that is how few differences in wording we had; the reality, though, is that we have a whole lot more than that. This method seems to give Christians assurance about having the Word of God in their hands today. But if the assurance has a faulty basis, it is no assurance at all, and this definitely has a faulty basis. We will talk about the significance of those four hundred thousand textual variants later. The most important issue we are going to wrestle with is not the number of variants; that is largely irrelevant, even though it is a fairly high number. It is the nature of those variants, and how many of them are meaningful and viable. What do they actually affect? Is there any essential belief Christians hold that depends on a textual variant? You would probably like to know whether the resurrection of Jesus is found only in verses that are textually suspect, or whether it is absolutely secure. We want to talk about the assurance of our faith and the reality of the manuscripts, but we are not going to play games and deceive you; we are going to tell you what the facts are, think them through, and make whatever adjustments are needed. You want to find out what is really going on and get the full story.

4. What was the Source for this Evangelical Miscalculation?

Geisler isn’t alone in this, as many apologists have made and continue to make the same claims. The source seems to be a book published in 1963 called How We Got the Bible, by Neil Lightfoot (Grand Rapids: Baker). It has been reprinted and edited many times and has sold over one million copies, so it has had a huge influence. However, Neil Lightfoot wasn’t a textual critic. Here is what he had to say: “From one point of view it may be said that there are two hundred thousand scribal errors in the manuscripts, but it is wholly misleading and untrue to say that there are two hundred thousand errors in the text of the New Testament. This large number is gained by counting all the variations in all of the manuscripts (about 4,500). This means that if, for example, one word is misspelled in four thousand different manuscripts, it amounts to four thousand errors. Actually, in a case of this kind, only one slight error has been made, and it has been copied four thousand times. But this is the procedure which is followed in arriving at the large number of two hundred thousand errors.”

Note that the figure of two hundred thousand was known in 1963, and in 1998 Geisler uses the same number. Lightfoot says about forty-five hundred manuscripts were known in 1963; Geisler says about fifty-three hundred in 1998. So if you are counting textual variants by the number of manuscripts, then Geisler should have known that in the thirty-five years between these two books there would have been many more textual variants. Besides, the average New Testament manuscript is over four hundred and fifty pages long, but most manuscripts have some gaps in them, so a place where four thousand manuscripts all share the same reading essentially does not exist. As helpful as Neil Lightfoot’s book has been for many Christians, and it is a very encouraging work, the point he is making here is not correct.

5. Proof of the Miscalculation

The majority text disagrees with the standard critical Greek New Testament, known as the Nestle-Aland Novum Testamentum Graece, in more than six thousand five hundred places. On average, the majority text has a good five hundred manuscripts on its side for each variant it supports; when you look at the Gospels, Paul, the general letters, and Revelation individually, the figure is probably closer to eight hundred, but five hundred is the average. If Lightfoot’s method were correct, then these disagreements alone would amount to more than three million variants. And these six thousand five hundred differences are only a small fraction of all the variants there are. So what numbers would we actually have if we counted variants the way Lightfoot, Geisler, and other apologists have counted them? This would not give Christians a lot of comfort; instead, it would cause despair. The Nestle-Aland Novum Testamentum Graece lists approximately thirty thousand textual variants, where a textual variant is any place at which at least one manuscript deviates from a base text, and Nestle-Aland lists only a small fraction of all textual variants. If Lightfoot’s method of counting variants were correct, we would have tens of millions of variants.
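The arithmetic behind this proof can be checked directly. The figures below are the ones given in the lecture; the multiplication is precisely the move that Lightfoot’s method of counting implies.

```python
# Checking the arithmetic with the lecture's figures (illustrative only).
places_of_disagreement = 6_500   # majority text vs. Nestle-Aland, minimum
avg_mss_per_variant = 500        # average manuscript support per variant

# Lightfoot's method: multiply each difference by the manuscripts behind it.
errant_total = places_of_disagreement * avg_mss_per_variant
print(errant_total)  # 3250000 -> already "more than three million"
```

And since those 6,500 places are only a small fraction of all places of variation, the same multiplication applied to the full evidence quickly runs into the tens of millions.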

6. Summary: How to Count Textual Variants

No textual critic defines a textual variant the way Lightfoot has done. Yet the number of textual variants comes from textual critics; shouldn’t they be the ones to define what it means, since they are the ones doing the counting? To recap: a textual variant is not a difference in wording multiplied by the number of manuscripts supporting the difference. It is simply any place where the wording of at least one manuscript differs from a base text; no matter how many manuscripts share the same variant, it still counts as only one variant. And there are far more than two hundred thousand variants, even when properly counted.