The economics of damned lies

Economists have become almost comically skeptical of estimated effects. A researcher estimating the effect of X on Y has always had to consider the bias and efficiency of their estimator. Bias is the result of unconsidered or unobserved forces pulling the estimated effect in one particular direction away from the truth (too positive or too negative); efficiency is the overall noisiness of the estimate, where a less efficient estimator yields too large a range of possible effect sizes.

Under the umbrella of efficiency were concerns about random measurement error – the basic and unavoidable difficulty of accurately recording the underlying “true” value. Filed under “everywhere and always”, measurement error is often simply the cost of doing business, while nonetheless limiting the precision with which the world can be known and, in turn, the precision with which decision making or policy can be calibrated.
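A toy simulation can make this concrete (my own illustrative numbers, nothing from real data): when a regressor is recorded with random error, the estimated effect is not only noisier but also dragged toward zero, the classic attenuation result.

```python
import numpy as np

# Classical measurement error, sketched with made-up numbers: we observe
# x_obs = x + noise instead of the true regressor x. Mean-zero error in x
# both adds noise and biases the OLS slope toward zero ("attenuation").
rng = np.random.default_rng(0)
n = 100_000
beta = 2.0

x = rng.normal(0, 1, n)             # true regressor, variance 1
y = beta * x + rng.normal(0, 1, n)  # outcome

def ols_slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Error with variance 1 makes the signal share of var(x_obs) equal to
# 1 / (1 + 1) = 0.5, so the expected slope shrinks by half.
x_obs = x + rng.normal(0, 1, n)

print(round(ols_slope(x, y), 2))      # close to the true 2.0
print(round(ols_slope(x_obs, y), 2))  # close to 1.0: attenuated toward zero
```

Random error in the outcome alone would leave the slope unbiased and merely widen its confidence interval, which is why it files neatly under efficiency; error in the regressor is where measurement trouble starts shading into bias.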

Coping with bias has been in many ways the story of empirical economics and the “credibility revolution” of the last 25 years. It’s why “identification strategy” is the fourth slide of almost any microeconomics presentation, why the econometrics of every great applied economics working paper is seemingly obsolete before it finds itself in print, and why there is a genuine possibility I will retire with a half dozen ulcers before I finish this blog post. Economists make themselves crazy thinking, strategizing, and internalizing criticism about the potential bias in their estimates. Selection bias, omitted variable bias, reverse causality, and even observer bias lurk in the shadows of our minds. To be an expert in causal inference is to anticipate and guard against myriad sources of bias in your empirical analysis. For many living economists, however, there is a new bogeyman.

Systemic measurement error.

Sounds banal enough. And if you’re a chemist, it is. The gauge is consistently measuring every temperature too high, mass too low, electromagnetic spectra too red. Something to test for every day. Vigilance and repetition, the solution. For economists, however, the answer is less simple.

What happens when the data is rigged to make the results too good? Unemployment too low. Wages too high. Expenditures too productive. <Redacted> too <redacted>. Economists have looked for cheaters as a research subject and rooted out fraud within scientific endeavor itself. But precious few have made it their job to sift through manipulated public data and carefully distill the true underlying numbers. And for good reason — as soon as you declare the data unreliable, you open the door to your own personal bias. Your politics, career ambitions, or even just your good-hearted desire to observe people being more decent than our own pessimism might otherwise allow for. To allow yourself to manipulate the potentially fraudulent data is to potentially make a bad situation worse.

Replicability and transparency of analysis were important before, but now we’re entering an even more tedious and slow landscape because critics aren’t just going to want to adjudicate your analysis, they’re going to want to adjudicate every observation in your data set. Or perhaps I am being too negative. There is a genuine upside. As people look to distill and correct for systemic measurement error, they’re going to create greater demand for 1) parallel analysis of similar questions using different techniques on the same data and 2) forensic analysis of data and the institutions that create it. Never forget that Sovietology was a genuine research career. More work to be done, but it can be done.

More work that has to be done. Sigh. My stomach hurts.

Bureau of Labor Statistics Under Siege

Thousands of keyboards were likely drenched four days ago as coffee spewed from thousands of nostrils upon reading the headlines that President Trump fired the head of the Bureau of Labor Statistics because he (the prez) didn’t like the July 2025 job numbers that were reported. Apparently, the job stats were not as great as we had been led to expect for the new regime of tariffs and deportations. (Someone should inform the politicians that businessmen need predictability for making any expansionary plans). So, shoot the messenger, that will fix it.

The First Ire was apparently kindled especially by the truly massive downward revisions to the May (-125,000) and June (-133,000) job figures, which reduced the combined employment gain for those months by 258,000. That made for three anemic employment months in a row, a different picture than had been earlier portrayed. For those unfamiliar with past BLS reports, that could seem like manipulation or gross incompetence. For instance, whitehouse.gov published an article titled, “BLS Has Lengthy History of Inaccuracies, Incompetence”, excoriating the “Biden-appointed”, now-fired Erika McEntarfer who “consistently published overly optimistic jobs numbers — only for those numbers to be quietly revised later.”

But massive overestimations of jobs creation, followed a month or two or three later by massive downward revisions, are pretty standard procedure for the BLS in recent years. Fellow blogger Jeremy Horpedahl has noted prior occurrences of this, e.g. here and here. There is no reason to suspect nefarious motives, though. The understaffed and overworked folks at BLS seem to be doing the best they can. It is just a fact that some key data simply is not available as early as other data. There are also rational adjustments, e.g. for seasonal trends, that must first be estimated and only later get revised.

Bloomberg explains some of the fine points of the recent revisions:

The downward revision to the prior two months was largely a result of seasonal adjustment for state and local government education, BLS said in earlier comments to Bloomberg. Those sectors substantially boosted June employment only to be largely revised away a month later.

But economists say the revisions also point to a more concerning, underlying issue of low response rates.

BLS surveys firms in the payrolls survey over the course of three months, gaining a more complete picture as more businesses respond. But a smaller share of firms are responding to the first poll. Initial collection rates have repeatedly slid below 60% in recent months — down from the roughly 70% or more that was the norm before the pandemic.

In addition to the rolling revisions to payrolls that BLS does, there’s also a larger annual revision that comes out each February to benchmark the figures to a more accurate, but less timely data source. BLS puts out a preliminary estimate of what that revision will be a few months in advance, and last year [2024], that projection was the largest since 2009.

Perhaps it would be wise for the BLS to hang a big “preliminary” label on any of the earlier results they publish, to minimize the howls when the big revisions hit later. Or perhaps some improvements could be made in pre-adjusting the adjustments, since revisions there do seem to swing things around outrageously. I expect forthcoming BLS reports to be the subject of derision from all sides. We all know which parties will scoff if the job report looks great or if it looks not great. Presumably the interim head of the Bureau, William Wiatrowski, is busy polishing his resume.

And POTUS should be careful what he wishes for – “great” job growth numbers would, ironically, strengthen the case for the Fed to delay the interest rate cuts he so desires.

The (attempted) return of Soviet economic statistics

From Warren Nutter’s “The structure and growth of Soviet industry: A comparison with the United States,” The Journal of Law and Economics 2 (1959): 147–174:

“Let us acknowledge at once that all statistics contain faults and errors. Let us also acknowledge that no government or other agency resists the temptation to stretch figures to its own account if it feels it can get away with it. Representative government, competitive scholarship, and free public discourse are the Western institutions that have counteracted error and misrepresentation in statistics, imperfectly to be sure, but at least to some degree.

The peculiar difficulties with Soviet statistics stem, in the first instance, from the system of authoritarian, centralized planning – from what has been called a “command economy.” Published statistics come from only one source: the state. There are no independent sources to restrain each other or to be used as checks against each other, except to the extent that related figures published by different state agencies might not be fully coordinated before publication. At the same time, the suppliers of data to the central authorities – the economic and administrative units – have a stake in the figures they report, since their performance is judged on the basis of them. The Soviet statistical authorities do not hide their concern over the misreporting that results from this feature of the economic system. A second set of difficulties stems from the crusading nature of Soviet communism. Statistics are grist for the propaganda mill. Knowing the ideological views of Soviet leaders, one cannot expect them to dispense facts in a passive and detached manner.”

As many of you likely know, the President fired the director of the Bureau of Labor Statistics because he didn’t like that the newest employment numbers painted an unflattering portrait of the US labor market. Fortunately, the US continues to benefit from alternatives to government statistics, but make no mistake, the BLS produces the absolute best labor market measurements the world has ever known.

It’s telling that while Soviet data seemed to often fool outsiders at the time (despite the occasional raising of doubts), there was no such delusion within the Soviet Union, where provincial leaders would consistently look to outside sources for accurate economic reports.

Nutter is credited with co-founding the “Virginia School of Political Economy” at the University of Virginia with future Nobel Laureate James Buchanan. The Virginia school is most associated with public choice economics, which by the 70s was often construed as an intellectual counterbalance to the modeling of government as an infallible corrective to market failures – a modeling particularly enticing to those favoring a socialist planned economy. That an administration and political coalition that loves to rail against omnipresent socialist threats is demanding that the US embrace a Soviet-style data apparatus is a reminder that history is never without irony.

The consequences of a “Papers, please” economy

While DOGE is advertising their new deregulation AI (HT MR) with promises of “trimming 100,000 of those rules“, the reality is that the administration is ushering in the most profound layer of government involvement in our lives since the introduction of the income tax.

It defies the opportunity cost of my time to try to recap the crappy-policy-via-executive-order blunderbuss that has been the last 6 months, but it is sufficient to focus on two dimensions: immigrant targeting and tariffs. ICE is pulling people off of the street and detaining them for hours “based on their physical appearance” in what can only be described as a dedicated effort to remove current immigrants, denaturalize past immigrants, and deter future immigrants. While these travesties play out one raid and immigration court ambush after another, tariffs are being introduced rapidly and haphazardly, always at the expense of the economy, and sometimes even in opposition to their stated goal of reshoring manufacturing. The prospect of (relatively) frictionless commerce across borders is quickly becoming unobtainable.

So what’s going to happen, now? Is America going to become a Whiter, autarkic island that steadily de-growths itself into a quieter state of nostalgic bliss, cheerfully accepting a shorter, sicker, less opulent life than before? Sure, the food will be worse and more expensive, our electronics obsolete and more expensive, our cars older and made from inferior materials (and more expensive), but that’s just the way things have to be, right? People will live and do as they’re told, right?

Have you met people?

People adapt. They find every workaround, every crack. Their lives will change, in many ways for the much worse, but they will work with what they have to make the best that they can. And in this case, the best way to adapt will be to become just a little more criminal. Not fully criminal, just a little more. More aspects of our lives will become akin to driving 10 mph over the speed limit because that’s just what everyone does.

Daily life will, slowly and at times imperceptibly, move underground. More jobs will pay in cash. Fewer exchanges will be made absent a personal relationship. More goods will arrive in suitcases at the luggage return. Friends will ask friends to pick up a phone/earbuds/tablets for them when visiting less economically restricted countries, while also reminding them to delete their messaging apps before heading through security. More goods will be altered from their true, optimal consumable form to qualify for a lower tariff. The advantage of physical over digital media will widen again. Where exchange exists outside of the law, trust needs to be found outside of contract. At the margin, business will become just a fraction of a percent more nepotistic. More employees will be found somewhere in the family tree. Everyone will just become a bit more crooked and, in doing so, expect everyone else to be just a little more crooked. The US is a shockingly high trust society because it pays to be trustworthy. This is how such things unravel.

More immigrants will live within arrangements that hide them not just from the authorities, but from observation in general. Curtains and blinds covering windows at all hours. Dinner will be taken at home rather than at the restaurant. Clubs and concert gatherings that appeal to immigrant crowds, or even just less White crowds, will advertise less, relying on word of mouth. Workers will move more often, rather than garner attention. The sick and injured will not be taken to the emergency room. The gaps in an already fractured society will become a little wider.

Employers will keep more people off the books. Off health insurance and workers’ compensation. Employees will, perversely, be grateful for the lower exposure. Insurance companies will find new ways to audit liability without exposing their clients’ personnel. Predatory human trafficking will find larger herds of underground populations to hide its practices within. Fewer people will trust and rely on the law. Fewer people will enjoy its protections.

What about compensating wage differentials? On the one hand, labor supply will be reduced as workers are pushed underground, reducing their numbers and their safely available hours. On the other hand, the necessity for employers to reduce the visibility of their workers while incurring the risk of legal punishment will reduce demand. The net effect on equilibrium wages is uncertain. However, those employers who manage to guarantee longer and safer tenured employment will capture greater rents from those they employ. Getting to work more consistently and going to sleep feeling safer is quite the fringe benefit, one that employers may find to be a more profitable form of compensation than simple wages.
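The ambiguity can be shown with a deliberately toy linear labor market (entirely my illustration, with made-up intercepts and slopes): push both the supply curve and the demand curve to the left, and the equilibrium wage can go either way.

```python
# Toy linear labor market: demand L = 100 - 2w + d, supply L = 20 + 2w - s,
# where d < 0 is a leftward demand shift (employers bear legal risk) and
# s > 0 is a leftward supply shift (workers hide, fewer safe hours).
def equilibrium_wage(d, s):
    """Solve 100 - 2w + d = 20 + 2w - s for the wage w."""
    return (80 + d + s) / 4

print(equilibrium_wage(0, 0))     # baseline wage: 20.0
print(equilibrium_wage(-10, 10))  # equal shifts cancel: 20.0
print(equilibrium_wage(-10, 20))  # supply shift dominates: 22.5, wage rises
print(equilibrium_wage(-20, 10))  # demand shift dominates: 17.5, wage falls
```

Whichever shift dominates decides the sign of the wage change; the quantity of underground labor falls in every case.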

I’d keep writing, but I already sound like a paranoid crank. I’m not sure I am comfortable, anonymous reader, with this level of intellectual vulnerability in such a public forum. Papers, please.

I’m Chair! 😬

As of July 1st of this year, I am the Chairman of the Department of Economics at my university. It’s one of those positions that includes more work and not much compensation. Depending on who I tell, I’m given both congratulations and condolences. Generally, at my university there is an expectation that department faculty ‘take turns’ being chair. So, we’re expected to serve whether the pay is good or not. There’s a lot of informal practice around this process.

In addition, Economics majors have become less popular at liberal arts institutions over the past several years. No one knows why, and there are probably multiple reasons. At my institution, our department has healthy enrollment in the peripheral majors. That is, the Economics BA and BS have lower enrollment, but the Business Economics and Global Affairs majors are more popular than ever.

All the same, I’d like to increase the number of students who have declared majors in our department and the number of Economics graduates. How do I do that?


Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies, by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent article in Newsweek by Gabriel Snyder who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language; it turns out that human language itself is a very constrained dataset.  Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox—the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easier to teach AI systems higher-order skills like playing chess or passing standardized tests than seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills derived from how a human body navigates the world are the product of billions of years of evolution and are so highly developed that they have become automatic for humans, while neocortical-based reasoning skills came much later and require much more conscious cognitive effort to master. However, the reverse is true of machines. Simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensionality, discrete space that is “basically a serialized version of our thoughts.”  

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…
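LeCun’s arithmetic is easy to reproduce (the optic-nerve bandwidth and waking hours are as quoted in the article; the assumed reading rate is my own, picked to show how his 400,000-year figure could arise):

```python
# Back-of-the-envelope check of LeCun's data comparison. The ~8 bytes/sec
# reading rate (roughly 100 words per minute of plain text) is an
# assumption, not a figure from the interview.
SECONDS_PER_HOUR = 3600

optic_nerve_bytes_per_sec = 1e6      # "about one megabyte per second"
hours_awake = 16_000                 # a 4-year-old's waking hours

visual_bytes = optic_nerve_bytes_per_sec * hours_awake * SECONDS_PER_HOUR
print(f"{visual_bytes:.1e}")         # 5.8e+13, i.e. on the order of 10^14

corpus_bytes = 1e14                  # an LLM's text training data
reading_bytes_per_sec = 8            # assumed careful-reading pace
years_to_read = corpus_bytes / reading_bytes_per_sec / (SECONDS_PER_HOUR * 24 * 365)
print(round(years_to_read, -3))      # roughly 400,000 years
```

So a toddler’s eyes take in as many bytes in four years as the corpus a human reader could not finish in hundreds of millennia, which is the whole point of the comparison.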

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)

Given all this, developing the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach of Joint Embedding Predictive Architecture (JEPA) that aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties, and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.

And so, LeCun is strikingly pessimistic on the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not to even bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach seems to be at variance with other firms, who continue to pour tens of billions of dollars into LLMs. Meta, however, seems focused on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Impossible Trinity of Macroeconomic Stability

Trump wants both low taxes and low interest rates. I hope that he doesn’t get both.

For the last ten days of my Principles of Macroeconomics course, I emphasize the aggregate supply and aggregate demand model coupled with monetary offset. What’s monetary offset? It says that, given some target and administrative insulation, the Federal Reserve can ‘offset’ the aggregate demand effects of government fiscal policy. It’s what gives us a relatively stable economy, despite big fiscal policy changes from administration to administration.

For example, if the Fed has a 2% inflation target, then they have an idea of how much total spending in the economy (NGDP) must change. If the federal government changes tax revenues or spending, then the Fed can increase or decrease the money supply in order to achieve the NGDP growth rate that will realize their target. For example, after the 2017 Tax Cuts and Jobs Act lowered taxes, the Federal Funds rate rose in 2018. The effect of the tax cuts on NGDP was *offset* by monetary policy tightening to keep inflation near 2%.

If the Fed doesn’t engage in monetary offset, then fiscal policy has a bigger impact on the business cycle, causing more erratic bouts of unemployment and inflation. The economy would be less stable. Importantly, monetary offset works in both directions. It prevents tight fiscal policy from driving us into a national depression, and loose fiscal policy from fueling inflation. That’s good, since politicians face an incentive/speed/knowledge/political problem.
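A stripped-down sketch of the offset rule (my illustration with invented numbers, not a claim about the Fed’s actual reaction function): treat NGDP growth as the sum of a fiscal impulse and a monetary impulse, and let the Fed choose the latter to hit its target.

```python
# Toy monetary offset: whatever fiscal policy adds to or subtracts from
# demand, the Fed's impulse is chosen so NGDP growth lands on target.
NGDP_TARGET = 0.05  # e.g. ~2% inflation plus ~3% real growth

def fed_offset(fiscal_impulse, target=NGDP_TARGET):
    """Monetary impulse that exactly offsets fiscal policy's demand effect."""
    return target - fiscal_impulse

for fiscal in (0.03, -0.02):  # a tax-cut stimulus, then austerity
    money = fed_offset(fiscal)
    print(f"fiscal {fiscal:+.2f} -> monetary {money:+.2f} "
          f"-> NGDP growth {fiscal + money:.2f}")
```

Under full offset, the fiscal impulse never shows up in NGDP at all, which is the sense in which monetary policy has the last move.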

Personally, I would love lower taxes and lower interest rates. I’d get to enjoy more of my income rather than sending it to Uncle Sam and, after refinancing, I’d pay less to service my debts. BUT, the same is true for everyone else too. All of that greater spending would result in higher prices and persistent inflation.

Right now, low taxes and high spending mean that the government is running persistent budget deficits – it’s borrowing money. That’s stimulative. If the Fed lowers interest rates, individuals will refinance and borrow more. That’s also stimulative. If both fiscal and monetary policy are stimulative as part of achieving the Fed’s target, then there is nothing wrong. But deviation from that policy goal brings economic turbulence.

This analysis implies an impossible trinity of macroeconomic stability (not the one from international trade):


The option to leave

The US, like every geopolitical entity to ever exist, has produced global public goods (i.e. international security, defeating the Nazis, etc) and global public bads (greenhouse gases, failed interference in other countries, etc).

I would like to posit something very simple: the greatest public good the United States has ever produced is the option to leave where you are and emigrate to the United States. If a country and its leadership are failing, non-trivial fractions of the population have the viable option to pack their bags and walk out the door. Perhaps unfairly, this is doubly true for their best, brightest, and most endowed with resources, making the threat all the more salient. It’s voting with your feet, i.e. Tiebout effects writ large.

If you are a failing nation, your options are to watch your population dissipate or to put up a wall blocking exit. Either that or, you know, actively take steps to improve your country so that fewer people wish to leave their home and start over elsewhere. The ramifications of stifled immigration to the United States will be felt for decades, and not just in the United States in the form of an enervated economy and a betrayal of our core civic values, but globally in weakened constraints on every failing regime.

Economic Impact of Agricultural Worker Deportations Leads to Administration Policy Reversals

Here is a chart of the evolution of the U.S. farm workforce between 1991 and 2022:

Source: USDA

A bit over 40% of current U.S. farm workers are illegal immigrants. In some regions and sectors, the percentage is much higher. The work is often uncomfortable and dangerous, and far from the cool urban centers. This is work that very few U.S.-born workers would consider doing unless the pay was very high, so it would be difficult to replace the immigrant labor on farms in the near term. I don’t know how much the need for manpower would change if cheap illegal workers were not available and farms instead supplemented labor with automation.

It apparently didn’t occur to some members of the administration that deporting a lot of these workers (and frightening the rest into hiding) would have a crippling effect on American agriculture. Sure enough, there have recently been reports in some areas of workers not showing up and crops going unharvested.

It is difficult for me as a non-expert to determine how severe and widespread the problems actually are so far. Anti-Trump sources naturally emphasize the genuine problems that do exist and predict apocalyptic melt-down, whereas other sources are more measured. I suspect that the largest agribusinesses have kept better abreast of the law, while smaller operations have cut legal corners and may have that catch up to them. For instance, a small meat packer in Omaha reported operating at only 30% capacity after ICE raids, whereas the CEO of giant Tyson Foods claimed that “every one who works at Tyson Foods is authorized to do so,” and that the company “is in complete compliance” with all the immigration regulations.

With at least some of these wholly predictable problems from mass deportations now becoming reality, the administration is undergoing internal debates and policy adjustments in response. On June 12, President Trump very candidly acknowledged the issue, writing on Truth Social, “Our great Farmers and people in the hotel and leisure business have been stating that our very aggressive policy on immigration is taking very good, long-time workers away from them, with those jobs being almost impossible to replace…. We must protect our Farmers, but get the CRIMINALS OUT OF THE USA. Changes are coming!” 

The next day, ICE official Tatum King wrote regional leaders to halt investigations of the agricultural industry, along with hotels and restaurants. That directive was apparently walked back a few days later, under pressure from outraged conservative supporters and from Deputy White House Chief of Staff Stephen Miller. Miller, an immigration hard-liner, wants to double the ICE deportation quota, up to 3,000 per day.

This issue could go in various ways from here. Hard-liners on the left and on the right have a way of pushing their agendas to unpalatable extremes. It can be argued that the Democrats could easily have won in 2024 had their policies been more moderate. Similarly, if immigration hard-liners get their way now, I predict that the result will be their worst nightmare: a public revulsion against enforcing immigration laws in general. If farmers and restaurateurs start going bust, and food shortages and price spikes appear in the supermarket, public support for the administration and its project of deporting illegal immigrants will reverse in a big way. Some right-wing pundits would not be bothered by an electoral debacle, since their style is to stay constantly outraged, and (as the liberal news outlets currently demonstrate), it is easier to project non-stop outrage when your party is out of power.

An optimist, however, might see in this controversy an opening for some sort of long-term, rational solution to the farm worker issue. Agriculture Secretary Brooke Rollins has proposed expanding the H-2A visa program, which allows temporary residency for agricultural workers to fill labor shortages. This is somewhat similar to the European guest worker programs, though with significant differences: H-2A requires the farmer to provide housing and take legal responsibility for his or her workers. H-2B visas allow for temporary non-agricultural workers, without as much employer responsibility. A bill to modernize the H-2A program was introduced into Congress with bipartisan support, so that legislative effort may have legs. Maybe there can be a (gasp!) compromise.

President Trump last week came out strongly in favor of this sort of solution, with a surprisingly positive take on the (illegal) workers who have worked diligently on a farm for years. By “put you in charge” he seems to refer to the responsibilities that H-2A employers undertake for their employees, and perhaps extending that to H-2B employers. He acknowledges that the far right will not be happy, but hopes “they’ll understand.” From Newsweek:

“We’re working on legislation right now where – farmers, look, they know better. They work with them for years. You had cases where…people have worked for a farm, on a farm for 14, 15 years and they get thrown out pretty viciously and we can’t do it. We gotta work with the farmers, and people that have hotels and leisure properties too,” he said at the Iowa State Fairgrounds in Des Moines on Thursday.

“We’re gonna work with them and we’re gonna work very strong and smart, and we’re gonna put you in charge. We’re gonna make you responsible and I think that that’s going to make a lot of people happy. Now, serious radical right people, who I also happen to like a lot, they may not be quite as happy but they’ll understand. Won’t they? Do you think so?”

We shall see.

It’s not AGI if it has a dial you can adjust to produce your preferred falsehoods

It’s not AGI, it’s barely even regular AI, when an LLM is this heavily directed. The episode appears to be very real over on Twitter. What’s most telling is the thinness of the prompts that yield very specific responses that, suffice it to say, Grok would not have provided even three months ago.

Musk has adjusted Grok’s algorithm so it’s now a neo-Nazi. Pretty cool that almost every progressive commentator, elected official and organization still uses Musk’s X algorithm to communicate with the public! Good job guys.

Max Berger (@maxberger.bsky.social) 2025-07-06T17:39:08.885Z

I’ll simply say this: no one has declined more in my estimation in my entire life than Elon Musk. I thought he was an engineering genius not even five years ago, perhaps awkward in some ways, but earnest. Now he is (or is working very diligently to project the identity of) a white supremacist desperate to play off of traditional racist and antisemitic fears to maintain his own status and influence. His ambition and resources have been combined with a monstrous agenda, and the world is much worse for it. It’s tragic in every way.

With regard to AI, there needs to be more discussion of the market for AIs, plural. I think a lot of people are operating off the assumption that AI will be like Google or VHS: a natural monopoly, one AI to rule them all and bind them. I’m not so sure. I think there is a very real chance that AIs will find niches, that different algorithms will create different families of bespoke AIs. It feels like the world is already siloed into echo chambers of entertainment- and identity-based news feeds. If AI allows us each to get bespoke answers, serving our own personal confirmation biases, to each and every question, is that better or worse? Counterintuitively, it could actually be better. You can’t get communities and cults of one. It might be better for the world if the news became something you couldn’t create effective propaganda out of.