Between 1850 and 1910, most US censuses asked whether an individual was deaf, recording one of four categories formed by the combinations of deafness and dumbness. That seems straightforward enough. The problem is that hearing is not a discrete category; it's continuous. One's ability to hear can be perfect, middling, bad, or entirely gone. What constitutes the threshold for deafness? In practice, it was left to the discretion of the enumerator, and understandably, judgement varied a great deal from one enumerator to another. Many older people were categorized as deaf even though they had only partial hearing loss.
Another problem is that other conditions can resemble deafness in a brief encounter. Individuals who do not respond to sound may be recorded as deaf when they are merely unable to respond. Consider the many mental disabilities that result in low physical functionality, so much so that reasonable people could disagree about whether a person responded to a sound or merely glanced around at random.
And all of this assumes the enumerator actually found the person they were supposed to count. It is well known that urban enumerators counted their residents less accurately, missing people at a higher rate. They missed the homeless, people who lived where they worked, students, and other semi-stationary populations. Enumerators were also usually locals: a rural enumerator might already know of the one deaf person or family in the county, while an urban enumerator could easily fail to identify several deaf people living in close proximity. Given that the deaf were a more urban population than the hearing, it stands to reason that they were missed at greater rates.
Another source of error comes from stigma. Parents of very young deaf children may have held out hope that their child was merely slow to develop rather than deaf. Being slow means differing from others by degree; being deaf means being categorically different. Many young deaf children likely went unrecorded in the official statistics. As children aged, however, deafness became increasingly undeniable. Indeed, historical census reports suspected that some deaf children only received the designation once they finally attended a school for the deaf, often past the age of 10.
Stigma also affected how deaf individuals identified themselves. While a pro-deafness, anti-disability, pro-equality movement had existed among the deaf population since at least the 1880s, many deaf people remained wary of being enumerated as deaf for fear of what the hearing majority might decide to do with that information.
Most of us know Alexander Graham Bell as the inventor of the telephone, but he was also one of the leading researchers of the deaf population during his lifetime; he even wrote the special report on the blind and deaf for the 1900 census. At a time when eugenics and forced sterilization were popular ideas among the political and scientific elite, Bell often referenced the burden deaf people imposed and their tendency to beget deaf children, and he considered it imprudent for deaf people to marry one another and reproduce. By 1880, the fourth US census to record deafness for specific individuals, reports were discussing a consistent pattern: higher, and probably more dependable, deaf counts between the ages of 10 and 20, followed a decade later by drastically lower counts for the same cohort, then aged 20 to 30. The differences were far too large to be explained by reasonable rates of attrition. After leaving school, deaf adults chose to misreport.
The census reports of the time do offer one reason deaf counts could have been overestimated. Many schools for the deaf were 'residential schools' where students lived during the regular semesters. Since censuses have always been conducted during the summer, the responses could contain strange disjunctions. For example, an enumerator might record the deaf residents at a school, only for the same students to be counted again when they returned home in time for their household to be enumerated; there were documented cases of this happening. But the miscounting could just as easily produce an undercount, if the home was counted first and the school afterward, never recording the student who was traveling and absent from both enumerations.
Except for a solitary report, the historical census reports use all of the above reasoning to argue that the US deaf population was substantially undercounted. Even so, the magnitude of the undercount was unknown. Counts varied widely across states and from year to year. While the reports were most confident about adolescent counts, the counts for the deaf population as a whole were understood to be barely usable, if not outright nonsense.
There is light on the horizon, however. As greater proportions of the historical individual-level census data are linked across decades, we are increasingly able to track individuals at a scale that was previously impossible. For now, the prospect remains difficult: nowhere near all of the data is linked, and the deaf population had a small N. But as linkage improves, we will be able to say more precisely who changed their deaf designation, how many people did so, and why they may have done it.
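To make the idea concrete, here is a minimal sketch, in Python, of what tracking a changing deaf designation might look like once records are linked. Everything here is illustrative: the person IDs, the toy data, and the assumption that a prior record-linkage step has already assigned each individual a stable identifier across the two census decades.

```python
# Hypothetical example: each census decade maps a linked person_id to the
# deaf designation recorded that year. The IDs and values are invented for
# illustration; they do not come from any real linked-census dataset.
census_1880 = {101: True, 102: True, 103: False}
census_1890 = {101: True, 102: False, 104: False}

def designation_changes(earlier, later):
    """Return person_ids present in both decades whose designation changed."""
    linked = earlier.keys() & later.keys()  # only people linked across decades
    return sorted(pid for pid in linked if earlier[pid] != later[pid])

print(designation_changes(census_1880, census_1890))  # → [102]
```

Person 102 was recorded as deaf in 1880 but not in 1890, exactly the kind of post-school misreporting the historical reports suspected. In practice the hard part is the linkage itself, not this comparison, and the small N of the deaf population makes every unlinked record costly.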