Just the Facts Report Lists Verifiable Facts, Shows 2–5 Million Non-Citizens Registered to Vote in U.S.



Critics Fail to Debunk Explosive Study on Illegal Voting by Non-Citizens

By James D. Agresti
 

A new study by Just Facts Research & Educational Institute has gone viral with influential people, political analysis organizations, news outlets, and websites citing it. These include, for example, U.S. Senator Mike Lee, Congressional testimony by Heritage Foundation legal scholar Hans von Spakovsky, and a post on X with more than 24 million views due to a repost by Elon Musk. Based on 2022 survey data and an enhanced version of a stress-tested methodology from a scholarly journal, the study found that about 10% to 27% of non-citizen adults in the U.S. are illegally registered to vote.

Since more than 20 million of them now live in the U.S., this amounts to 2–5 million illegally registered voters, a number large enough to swing the results of close major elections.

Yet, some critics have alleged the study is false, including the New York Times, Snopes, the Cato Institute, the University of Washington’s Center for an Informed Public, and others. However, all of them failed to find any real flaw in the study and resorted to:

  • inventing blatant falsehoods.
  • using lies of omission.
  • misportraying discredited claims as if they were facts.
  • misportraying facts as if they were discredited claims.
  • misquoting sources.
  • cherry-picking evidence that is shattered by evidence they ignored.

In sum, they couldn’t even put a minor dent in the study despite considerable efforts to do so — even with help from a cadre of scholars. This ultimately reinforces the study because this swarm of ardent critics was unable to identify a single legitimate problem with it.

Contrary to those critiques, verifiable facts show that about 2–5 million non-citizens are registered to vote and reveal that Just Facts’ study:

  • Is transparent and thoroughly documented with credible primary sources.
  • Is based on a survey large enough to measure statistically significant nationwide results.
  • Is firmly grounded in new data that annihilates an old canard used to deny the facts of this matter.
  • Uses straightforward and empirically justified methodologies.
  • Is vastly more reliable than other studies on this issue that are plagued by unrealistic and counterfactual assumptions.

Beyond shedding light on the important issue of election integrity, this affair provides valuable insights about how activists, journalists, and scholars mislead the public.

Thoroughly Documented & Transparent

The first and most simple-minded criticism of Just Facts’ study comes from a top-viewed reply to the viral post on X and alleges “there’s no information or transparency on who conducted the research or their methods.”

That statement is the exact opposite of reality.

First, the full study can be accessed in less than 20 seconds with two clicks from the post on X that was the target of this attack. The post linked directly to an article on the news site Not the Bee, which linked to Just Facts’ study in the second sentence of the article.

Second, the study is located on Just Facts’ main website, which contains extensive details about the organization.

Third, even a brief look at the study reveals that it is thoroughly documented with hyperlinks, full methodological details, and a spreadsheet that contains all of the study’s sources, data, and calculations. Such rigor is standard practice for Just Facts and is the hallmark that distinguishes its publications from typical news reports and policy analyses.

While the other critiques aren’t as shallow as this one, such duplicity permeates them as well.

Accomplished & Credible

Another ruse used by critics of the study is to cast doubt on it by referring to Just Facts as a mere “website.” This tactic was employed by the Center for an Informed Public in an article written by five scholars with more than enough acumen to know it isn’t true.

In reality, Just Facts is a non-profit research and educational institute with 16 years of public tax filings and 17 years of academic accomplishments.

Among these are citations of Just Facts’ work by a wide variety of prominent and scholarly organizations, including:

  • Peer-Reviewed Journals, such as the Proceedings of the National Academy of Sciences, Critical Education, the International Journal of Sciences, and the Journal of Development Policy, Research and Practice.
  • Media Outlets, such as CBS, PBS, the Wall Street Journal, Investor’s Business Daily, Fox News, Yahoo News, CNBC, Forbes, Psychology Today, and Roll Call.
  • Educational Institutions, such as Rice University, Pepperdine University, Vanderbilt University, West Virginia University, the University of Texas, the University of Hawaii, and the University of Washington.
  • Academic Publishers, such as Gale Cengage Learning, Encyclopedia Britannica, Praeger, Routledge, Elsevier Health Sciences, and McGraw Hill Professional.
  • Government entities, such as the Oklahoma Department of Labor, the Detroit City Council, the Alabama Department of Education, the Utah State Board of Education, and the education ministry of Northern Ireland.
  • Think Tanks, such as the Hoover Institution, the National Tax Association, the Pacific Research Institute, the Heritage Foundation, the California Policy Center, and Instituto Liberdade (Brazil).
  • Associations & Corporations, such as the Association of American Medical Colleges, the Harvard Graduate Council, IBM Corporation, and the American Nurses Association.

Just Facts has also preempted multi-billion-dollar government health agencies – and virtually every major media outlet – on key facts related to Covid-19 almost a dozen times.

Furthermore, Just Facts President James D. Agresti, the author of the study and the present article, has an extensive record of scholarly achievements, and his work has been explicitly complimented by people with Ph.D.’s in economics, law, biostatistics, chemistry, mathematics, molecular biology, biochemistry, biochemical taxonomy, metrology, psychology, psychiatry, epidemiology, oceanography, horticulture, operations research, electrical engineering, biomedical engineering, political science, and computer science.

Statistically Significant Nationwide Results

Concerning the study’s methodology, the attack from Snopes was written by an “investigative journalist and science writer” named Alex Kasprak. Before publishing his article, Kasprak emailed Agresti and declared that the study is “wildly untenable” for two reasons.

The first, he said, is that the results stem “from a pool of at most 71 purported non-citizen respondents, and extrapolations based on these numbers” to “the entire U.S. population.”

Referring to his critique of a similar study by Just Facts in 2017, Kasprak wrote, “My criticism is no different now than it was in 2017.”

Kasprak, however, was oblivious to the fact that Just Facts published a thorough rebuttal to his 2017 piece, proving that his arguments were based on “mathematically illiterate notions instead of concrete, quantifiable facts.” Using 12 academic sources, such as the textbook Statistics for K–8 Educators and the textbook Mind on Statistics, the rebuttal documented that:

  • Just Facts’ study wasn’t an “extrapolation” but a “straightforward application of survey data.”
  • Snopes and “every major media outlet routinely cite similar figures” without a hint of skepticism and without calling them “extrapolations.”
  • the figures cited by Kasprak were numerators used in rate calculations, but the main driver of a survey’s statistical power is the denominator.
  • comparatively small numerators increase (not decrease) the statistical sampling confidence of surveys.
  • the survey was large enough to measure statistically significant nationwide results with at least 95% confidence.
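The statistical points in the bullets above can be illustrated with the standard textbook formula for a proportion’s margin of error. The sketch below is generic, with hypothetical sample sizes rather than the study’s own figures: it shows that the margin shrinks as the sample size (the denominator) grows, and that, for a fixed sample, a proportion far from 50% carries a smaller margin of error than one near 50%.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for a proportion p
    estimated from a simple random sample of size n,
    at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical numbers for illustration only.
# A larger sample (bigger denominator) tightens the estimate:
print(margin_of_error(0.15, 200))  # wider margin
print(margin_of_error(0.15, 800))  # narrower margin

# For a fixed sample size, a proportion far from 50% has a
# SMALLER margin of error than one near 50%:
print(margin_of_error(0.10, 500))
print(margin_of_error(0.50, 500))
```

In other words, whether 71 respondents or 7,000 answered “yes,” the precision of the estimated rate is governed chiefly by how many people were asked, not by how many answered one way.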

Just Facts previously notified Snopes of those errors back in 2017, and Just Facts’ new study contains a glaring reference to Kasprak’s “mathematically illiterate” fact check.

Mr. Kasprak somehow managed to overlook all of that.

So, Agresti replied to Kasprak’s email by spoon-feeding him these facts, and Kasprak abandoned this naïve argument in his new critique while failing to correct his old one. Nevertheless, Kasprak’s new critique links to his old one as proof that Just Facts’ study is “mathematically misguided.”

That is called projection – falsely accusing others of one’s own misdeeds.

This is an oft-used, insidious, and effective means of mass deception.

Non-Citizen “Citizens”

Kasprak’s second allegation about why Just Facts’ study is “wildly untenable” is that the “purported non-citizen respondents” in the survey who were registered to vote are “known to be at least in part” citizens who submitted “incorrect responses.”

This refers to an old claim that a consequential number of citizens who were registered to vote mistakenly identified themselves as non-citizens in this survey.

First off, that claim was refuted in 2014 by three scholars in a peer-reviewed study about this issue. The same scholars also detailed “multiple lines of evidence” that further debunked it in 2017. On top of this, Just Facts shredded the claim in 2016 and 2017.

Yet, Kasprak doesn’t even acknowledge, much less try to refute, any of the facts in these publications that obliterate this argument. The New York Times does the same in an article by reporter Minho Kim, as does the Cato Institute in a post by Walter Olson.

Even more importantly, Just Facts’ new study has a feature that renders that argument irrelevant. Indeed, the study explicitly debunked this predictable attack by documenting that the survey used “multiple citizenship questions” that “limit the possibility of honest mistakes by survey respondents.”

The hyperlink in that sentence leads to a 2023 report by Dr. Jesse Richman, the lead author of the 2014 peer-reviewed study on this matter. In this report, Richman provides the following facts about the annual survey that he and Just Facts used in their studies:

  • In 2019, “a more robust approach to measuring citizenship status was adopted.”
  • “This involved asking two questions about citizenship status,” the original “complex” one and a new “simpler” one.
  • Across 164,000 respondents from 2019 through 2022, “every individual in the survey dataset had a consistent pattern of answers across the two questions.”
  • “These results appear to indicate no errors at all: an error rate of 0.”

The New York Times article, the Cato post, and Kasprak’s email to Just Facts were clueless about all of this, even though these facts were right under their noses. So once again, Agresti spoon-fed the facts to Kasprak.

But instead of candidly reporting them in his article, Kasprak danced around them and alleged that Just Facts’ study isn’t firmly grounded in the new data.

This is a pattern for Kasprak, who has a history of flagrantly misquoting studies and people.

Contra Kasprak, the actual facts about Just Facts’ study are as follows:

  • 100% of the data on non-citizen voter registration is from the 2022 survey in which both citizenship questions were asked.
  • One of the study’s formulas uses older data to calculate a denominator, and if the figure were affected by the error that Kasprak imagines, it would actually decrease the non-citizen voter registration rate — the polar opposite of what Kasprak claims.

All of this can be easily ascertained just by reading the study and examining the documentation it provides, including a spreadsheet of the study’s data and calculations.

However, Kasprak ensured that almost none of his readers would do that by burying the hyperlink to Just Facts’ study more than 1,100 words into his article. Hence, only one person clicked on that link in the four weeks after Kasprak’s article was published, and that one person was Agresti.

The New York Times made it even harder to find the study by not mentioning Just Facts and by falsely reporting that “a witness at a House hearing last week on election integrity cited a faulty report from 2020 suggesting that around 15 percent of noncitizens routinely vote in federal elections.”

In reality, the witness cited Just Facts’ study from 2024 (not 2020), the rate is “about 5 to 13%” (not 15%), and nothing in the witness’s testimony matches the description given by the Times.

Discredited Claims Presented as Facts

In his article, Kasprak quotes a 2017 open letter from “over 200 political scientists” declaring that the “scholarly political science community has generally rejected the findings” of a peer-reviewed study that Agresti uses “as the primary academic foundation for his claims.”

First, this self-selected group of scholars is a small subset of 3,000+ political scientist faculty members at research universities and doesn’t speak for the “community” as they declare. Furthermore, their open letter is entirely based on the discredited argument that citizen voters misidentified themselves as non-citizens in the survey.

Again, that argument never had merit and is irrelevant to Just Facts’ new study. Nevertheless, Kasprak treats the falsely predicated conclusion of this cabal as if it were a salient fact. So do the five scholars at the Center for an Informed Public, whose analysis amounts to little more than “Snopes says so” and “the political science community says so.”

This bears a striking resemblance to the “Russian disinformation” hoax, where an open letter signed by 51 former intelligence officials was used to spread the lie that emails from Hunter Biden’s laptop incriminating Joe Biden were “Russian disinformation.”

From the start, the facts were clear this was untrue, but the media ignored the facts and widely reported this claim as if it were a fact.

Kasprak’s main source of discredited claims is Dr. Brian Schaffner, the lead signatory of the “non-citizens are citizens” letter and the co-principal investigator of the survey used in these studies. Like Kasprak, Schaffner either fails to conduct basic research or deliberately ignores published facts that contradict his arguments, thus lying by omission.

For example, Schaffner told the Huffington Post in 2017 that Just Facts, Richman, and Richman’s coauthors were guilty of “ignoring measurement error” on the citizenship question in the survey. In reality, these scholars and Just Facts had addressed this matter in no fewer than six publications. Yet, Schaffner ignored all of this.

Nor was that the first time. As Richman previously noted, Schaffner leveled the same charge in 2015 at his 2014 peer-reviewed paper “without ever addressing or acknowledging” an appendix of the paper that already tackled Schaffner’s argument.

Nationally Representative Results

Another bogus claim that Kasprak parrots from Schaffner is that the survey used by Just Facts “is fundamentally unsuited to answer any question about noncitizens,” including their voter registration rates. Using the survey for this purpose, says Schaffner, is like “trying to squeeze blood from a turnip.”

For background, the survey was conducted online, and online surveys can be highly inaccurate if the people sampled in the survey differ substantially from the people who aren’t sampled. In other words, online surveys rarely involve a random or representative sample of the population the survey is supposed to cover.

However, a scientific technique called “weighting” can be used to correct for this common problem. As explained in the academic book Designing and Conducting Survey Research: A Comprehensive Guide, weighting “is one of the most common approaches” that researchers use to “present results that are representative of the target population….”

In fact, Schaffner himself uses weighting and another technique called “matching” to make the results of the very same survey representative of U.S. citizens. Yet, he claims this cannot be done for non-citizens because “no amount of” weighting “is going to actually fix the issue” that the survey was “never intended to include a representative sample of noncitizens” — a problem called “noncoverage bias.”

As before, Kasprak just regurgitates Schaffner’s claim and fails to report any facts. Worse still, Kasprak conceals from his readers the fact that Richman told him the polar opposite of Schaffner. In an email exchange that Richman shared with Just Facts:

  • Kasprak asked Richman, “How do you know” that the non-citizens in the survey are “representative of that population as a whole?”
  • Richman replied that because the survey doesn’t provide “a true random sample” of the citizen or non-citizen population, he and his coauthors “confront this issue primarily by weighting the data.”
  • Kasprak then pushed back by writing, “While it may not be a ‘true’ random sample, surely you are not suggesting that both the full dataset and the noncitizen subset are equally flawed from a representative standpoint, are you?”
  • Richman then explained to Kasprak that “the entire” survey “dataset is in fact drawn from a set of opt-in panels” and “at a fundamental level it’s all something approximating a quota / convenience sample. Everything on top of that is about some tweaking of who gets into the final published dataset, and about the design of weights to aim to make the opt-in panel match as closely as possible the target population.”

In short, Kasprak asked for comments from Schaffner and Richman, reported one of their claims, and threw out the opposing one without disclosing it.

This is a clear lie of omission.

More importantly, three categories of facts suggest that Richman is correct and Schaffner is wrong.

First, Schaffner’s opt-in survey is loaded with openings for noncoverage bias among both citizens and non-citizens. As explained in the textbook Mind on Statistics, “Surveys that simply use those who respond voluntarily are sure to be biased in favor of those with strong opinions or with time on their hands.” Schaffner’s survey exemplifies this because it only involves people who:

  • have “an account on yougov.com” and “opt-in to being a YouGov panelist” or are recruited via “online advertisements” or “another survey provider.”
  • “are compensated by points for taking each survey” and “can exchange accumulated points with giftcards and other prizes.”
  • are willing to invest roughly 20 minutes to take the pre-election surveys, 10 minutes to take the post-election surveys, and 20 minutes to take the non-election-year surveys.

Second, scholarly publications are clear that weighting is regularly used to correct for noncoverage bias in surveys:

  • Journal of Official Statistics: “Weighting adjustments are commonly applied in surveys to compensate for nonresponse and noncoverage, and to make weighted sample estimates conform to external values.”
  • American Journal of Public Health: The “potential for noncoverage bias” can be “significantly” “reduced through weighting adjustments.”
  • Federal Reserve Small Business Survey: “To control” for “noncoverage bias,” the “sample data are weighted so that the weighted distribution of firms” in the survey “matches the distribution of the firm population in the United States.”
  • Pew Research: “Weighting is generally used in survey analysis to compensate for sample designs and patterns of non-response that might bias results.”
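The weighting adjustment these sources describe can be sketched in a few lines. This is a minimal post-stratification example with made-up shares, not the survey’s actual procedure (real schemes such as raking adjust many variables at once): each group’s responses are weighted by the ratio of its population share to its sample share.

```python
# Hypothetical shares for illustration: the sample over-represents
# group A relative to the target population.
population_share = {"A": 0.30, "B": 0.70}
sample_share     = {"A": 0.50, "B": 0.50}

# Post-stratification weight = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Suppose 20% of group A and 60% of group B answered "yes" in the sample.
sample_rate = {"A": 0.20, "B": 0.60}

unweighted = sum(sample_share[g] * sample_rate[g] for g in sample_share)
weighted   = sum(sample_share[g] * weights[g] * sample_rate[g] for g in sample_share)

print(unweighted)  # 0.40 -- biased toward the over-sampled group A
print(weighted)    # 0.48 -- matches the population composition
```

The weighted figure equals what a perfectly proportional sample would have produced (0.30 × 0.20 + 0.70 × 0.60 = 0.48), which is exactly the correction the quoted publications describe.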

Third, the accuracy of weighted non-citizen data from Schaffner’s survey was independently corroborated by a 2013 scientific bilingual survey of 800 Latinos. In this nationally representative random survey, 13% ± 6 percentage points of Latino non-citizens admitted they were registered to vote with at least 95% confidence. This is virtually the same result as the weighted data from Schaffner’s 2012 survey, which showed that 15% ± 5 percentage points of non-citizens admitted they were registered to vote with at least 95% confidence.
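As a quick consistency check, a reader can verify that the two confidence intervals quoted above overlap substantially, which is what “virtually the same result” means in statistical terms. The arithmetic below simply reproduces the two figures from the text.

```python
def interval(point, moe):
    """95% confidence interval (low, high) from a point estimate
    and margin of error, both expressed as fractions."""
    return (point - moe, point + moe)

# Figures quoted above: 13% +/- 6 points (2013 Latino survey) and
# 15% +/- 5 points (weighted data from the 2012 survey).
lo1, hi1 = interval(0.13, 0.06)   # roughly (0.07, 0.19)
lo2, hi2 = interval(0.15, 0.05)   # roughly (0.10, 0.20)

# The intervals share the range from about 10% to 19%, so the two
# independent surveys are statistically consistent with each other.
overlap_low, overlap_high = max(lo1, lo2), min(hi1, hi2)
print(overlap_low < overlap_high)  # True: the intervals overlap
```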

Both Just Facts and Richman make clear that weighting is far from foolproof. This is because survey participants may differ in ways that transcend the factors that are weighted. Thus, they repeatedly use terms like “about” and “roughly” to convey that the results of their studies are best estimates and not hard numbers.

Also of note is that Agresti’s latest study didn’t use weighted data because Richman hadn’t supplied it yet. Richman later gave the data to Kasprak, and it corroborates Just Facts’ research, which found that the unweighted data may actually understate the voter registration rate of non-citizens.

Documented Facts Presented as Discredited Claims

Last, but far from least, Snopes and the Center for an Informed Public engage in the farce of portraying documented facts as if they were discredited claims. They do this by deleting hyperlinks and other vital information from their sources.

Snopes, for a prime example, reports that Richman’s new estimates of non-citizen voter registration are much lower than his prior estimates and that Agresti falsely “dismissed” them as “lowball estimates.” In reality, Agresti didn’t merely dismiss them as lowball estimates but rigorously documented this is the case by:

  • Publishing an extensive appendix that details how each of these figures are based on narrow, bowdlerized, or futile measures that underestimate the rate of non-citizen voter registration.
  • Including a section in the study that highlights the “glaring disparity” between Richman’s old and new estimates and links to the appendix as proof that his new figures are “lowball estimates.”
  • Including in that same section a long and explicit statement from Richman in which he diplomatically admits that his new figures are lowball estimates.

No one who relies on Snopes would know any of this because Kasprak failed to report these pivotal facts and deleted Just Facts’ hyperlink from the words “lowball estimates,” thereby creating the illusion that Agresti made a totally unsupported claim, when in fact, he presented a thoroughly documented fact.

Hyperlinks are one of the primary ways in which people document facts in the age of the internet, and stripping them out, as Kasprak did, amounts to a fraud against his readers and a libel against Just Facts.

Underscoring the implications of Kasprak’s ruse, a 2011 paper in the University of Pennsylvania Journal of Constitutional Law explains that hyperlinks indicate “factual accuracy or support,” inform “readers that the supporting materials can be viewed with merely a click,” and are “critical to the provision of news generally.”

The Center for an Informed Public uses a similar ploy by reporting that Richman and one of his coauthors “accept” that the survey used in their 2014 study “does not provide a representative sample of non-citizens,” but “they argue that they conducted their analyses in an appropriate manner.”

In fact, they didn’t just argue that; they repeatedly documented in their paper that they dealt with the issue by weighting the data. The Center for an Informed Public, however, makes it appear as if this fact is nothing but an empty claim.

Conclusion

Remarkably, the 3,000+ words above don’t address all of the falsehoods that infest the critiques of Just Facts’ study – but they are more than enough to show that the critics are either lying or were extremely biased and negligent in their research of this issue. This includes people who should know better.

If any of them had real evidence that Just Facts’ study was unreliable, they’d have no need to launch false, desperate attacks on it.

The fact that they resorted to this strengthens the study’s findings, which show that:

  • Roughly 10% to 27% of non-citizens are registered to vote.
  • About 5% to 13% of them vote in presidential elections.
  • Roughly 1.0 million to 2.7 million non-citizens will illegally vote in 2024 unless stronger election integrity measures are implemented.

Every illegal vote cancels the vote of a U.S. citizen, thus usurping their Constitutional right to vote. But instead of informing citizens and legislators about this threat to election integrity, these “fact checkers,” journalists, and scholars are misleading the public to believe it doesn’t exist.

James D. Agresti is the President & Co-Founder of Just Facts, a research and educational institute dedicated to publishing facts about public policies and teaching research skills. With two decades of experience in public policy research, his work has been cited by a diverse range of organizations and individuals.




















