In a discussion over at Diglotting, Kevin Brown wrote "The default position is that Nazareth in Mark 1.9 is authentic; one has to prove otherwise." When I asked him why this was the default position, he responded, "Really?! This shouldn’t be a controversial issue."
It's hard for me to see how presuming the authenticity of the text of Mark wouldn't be controversial. We lack any manuscript evidence for more than the first century of its existence. In the manuscripts we do have, the earlier ones show a higher rate of variants than the later ones, from which we can surmise that the highest rate of variants occurred during the period for which we lack evidence. There was no church authority overseeing the copying of the texts, but there were plenty of men who were willing to forge texts in the name of Peter or Paul or any other figure that the forger thought would lend authority to the writing.
In short, there is every reason to think that there were any number of opportunities for alterations by men with both the motivation and the willingness to make them, and every reason to think that there are any number of alterations that left no evidence in the manuscripts. How could we possibly justify authenticity as a default position?
Kevin argued for the presumption on the basis of "tenacity," the idea that the original reading of the text is always preserved somewhere in the manuscript tradition. Tenacity is lovingly embraced by conservative apologists, but my impression is that the consensus among textual critics is that some original readings were likely lost in the early transmission. The problem is that tenacity is inferred from manuscripts that were overwhelmingly produced by trained scribes after Christianity had become the official state religion of the Roman Empire, with established structure and doctrine. We cannot assume that it prevailed in the earlier period, when Christianity was a disfavored minority religion composed of competing sects with significant variations in belief.
The idea that something should be presumed until disproved is a familiar one to judges and lawyers. We all know that a man is presumed innocent until proven guilty. What we may overlook, however, is the fact that such presumptions are usually based on considerations of policy rather than probability. The presumption of innocence isn't based on the likelihood that the police have arrested the wrong man. In fact, we hope that the presumption will help assure that they usually get the right one. A husband is presumed to be the father of a child conceived by his wife not because matrimony proves paternity, but because we are more interested in protecting kids than cuckolds.
I believe that the presumption of authenticity is also based on policy rather than probability. As Kevin points out, "We could argue that every word or phrase in the NT was interpolated with such specious argumentation." So what? If in fact our evidence isn't sufficient to eliminate the possibility of tampering and alteration, why not simply have New Testament scholars qualify their conclusions to reflect the appropriate degree of uncertainty? Isn't that more intellectually honest than presuming authenticity? New Testament scholars want to talk about the original texts, but maybe the evidence doesn't justify it.
I think Kevin overstates the problem, though. The problem isn't that every word might be an interpolation but that any word might be. That's a problem historians are always faced with, but they deal with it through corroboration rather than presumption. If a passage in one of Paul's letters contains an idea that is found three times in other letters, then we have stronger reasons for thinking it genuine. If an idea appears only once, however, we are necessarily less certain about the passage containing it. And if positing that passage as an interpolation would radically change our understanding of Paul, then our original understanding is necessarily less secure than we might otherwise have thought it to be.