WSJ: Nothing Important Happened in China, India, or AI This Year

I normally like the Wall Street Journal; it is the only news page I check directly on a regular basis, rather than just following links from social media. But their “Biggest News Stories of 2024” roundup makes me wonder if they are overly parochial. When I try to zoom out and think of the very biggest stories of the past five to ten years, three of the absolute top would be the rapid rise of China and India, together with the astonishing growth in artificial intelligence capabilities.

All three of those major stories continued to play out this year, along with all sorts of other things happening in the two most populous countries in the world, and all the ways existing AI capabilities are beginning to be integrated into our businesses, research, and lives. But the Wall Street Journal thinks that none of this is important enough to be mentioned in their 100+ “Biggest Stories”.

To be fair, China and AI do show up indirectly. AI is driving the 4 (!) stories on NVIDIA’s soaring stock price, and China shows up in stories about spying on the US, hacking the US, and the US potentially forcing a sale of TikTok. But there are zero stories regarding anything that happened within the borders of China, and zero that let you know that AI is good for anything besides NVIDIA’s stock price.

Plus, of course, zero stories that let you know that India (now the world's most populous country, where more than one out of every six people alive resides) even exists.

AI’s take on India’s Prime Minister using AI

This isn’t just an America-centric bias on WSJ’s part, since there is plenty of foreign coverage in their roundup; indeed, the Middle East probably gets more than its fair share thanks to “if it bleeds, it leads”. For some reason they just missed the biggest countries. They also seem to have a blind spot for science and technology; they don’t mention a single scientific discovery and include only two technology stories: SpaceX catching a rocket and performing the first private spacewalk.

The SpaceX stories, at least, are genuinely important: the sort of thing that might show up in a history book in 50+ years, along with some of the stories on U.S. politics and the Russia-Ukraine war, unlike most of the trivialities reported.

I welcome your pointers to better takes on what was important in 2024, or on what you consider to be the best news source today.

Humans are struggling to understand LLM Progress

Ajeya Cotra writes the following in “Language models surprised us” (recommended, with more details on benchmarks):

In 2021, most people were systematically and severely underestimating progress in language models. After a big leap forward in 2022, it looks like ML experts improved in their predictions of benchmarks like MMLU and MATH — but many still failed to anticipate the qualitative milestones achieved by ChatGPT and then GPT-4, especially in reasoning and programming.

Joy’s thoughts: A possible reason for underestimating the rate of progress is not just a misunderstanding of the technology but a missed estimate on how much money would get poured in. When Americans want to buy progress, they can (see also SpaceX).

I compare this to the Manhattan project. People said it couldn’t be done, not because it was physically impossible but because it would be too expensive.

After a briefing regarding the Manhattan Project, Nobel Laureate Niels Bohr said to physicist Edward Teller, “I told you it couldn’t be done without turning the whole country into a factory.” (https://www.energy.gov/lm/articles/ohio-and-manhattan-project)

We are doing it again. We are turning the country into a factory for AI. Without all that investment, the progress wouldn’t be so fast.

Joy on AI in Higher Education

I was interviewed for an article “Navigating AI in Christian Higher Education“. Here’s an excerpt:

Rosenberg: What impact do you foresee in your field due to the increasing sophistication of AI, and what kind of skills do you think your students will need to be successful?

Buchanan: AI will reshape economic analysis and modeling, making complex data processing and predictive analytics more accessible. This will lead to more sophisticated economic forecasting and policy design. Economists will become more productive, and expectations will rise accordingly. While some fields might resist change, economics will be at the forefront of AI integration.

For students aiming to succeed, it’s crucial to embrace AI tools without relying on them excessively during college. Strong fundamentals in economic theory and critical thinking remain essential, coupled with data science and programming skills.

Interdisciplinary knowledge, especially in tech and social sciences, will be valuable. Adaptability and lifelong learning are key in this evolving field. Human skills like creativity, communication, and ethical reasoning will remain crucial.

While AI will alter economics, it will also present opportunities for those who can adapt and effectively combine economic thinking with technological proficiency.

My Frozen Assets at BlockFi, Part 4: Full Recovery of My Funds

In March and April of this year, I moaned and groaned here in blogland, chronicling my attempts to recover my funds from an interest-bearing account at crypto firm BlockFi.

Back in 2021, interest rates had been so low for so long that it seemed to be the new normal. Yields on stable assets like money market funds were around 0.3% (essentially zero, and well below inflation), as I recall. As a yield addict, I scratched around for a way to earn higher interest while sticking with an asset whose dollar value (unlike bonds) would stay fairly stable.

It was an era of crypto flourishing, and so I latched onto the notion of decentralized finance (DeFi) lending. I found what seemed to be a reputable, honest company called BlockFi, where I could buy stablecoin (constant dollar value) crypto assets which would sit on their platform. They would lend them out into the crypto world, and pay me something like 9% interest. That was really, really good money back then, compared to 0.3%.
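To put those two rates in perspective, here is a quick back-of-the-envelope comparison (the $10,000 balance is purely illustrative, not my actual holding):

```python
# Hypothetical $10,000 balance: simple one-year interest at each rate.
principal = 10_000

blockfi_interest = principal * 0.09        # 9% DeFi-lending rate
money_market_interest = principal * 0.003  # 0.3% money-market rate

print(f"At 9%:   ${blockfi_interest:,.2f} per year")
print(f"At 0.3%: ${money_market_interest:,.2f} per year")
# Roughly a thirty-fold difference in annual income on the same balance.
```

A thirty-fold difference in yield is exactly the kind of gap that tempts a "yield addict" to overlook the risk side of the ledger.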

On this blog, I chronicled some of my steps in this journey. First, in signing up for BlockFi, I had to give the intermediary company Plaid complete access to my bank account. Seriously, I had to give them my username and password, so they could log in as me and not only withdraw all my funds, but also see all my banking transactions and history. That felt really violating, so I ended up setting up a small auxiliary bank account for Plaid to use and snoop to their heart’s content.

I did get up and running with BlockFi, and put in some funds and enjoyed the income, as I happily proclaimed (12/14/2021) on this blog, “Earning Steady 9% Interest in My New Crypto Account”.

BlockFi assured me that they only loaned my assets out to “Trusted institutional counterparties” with a generous margin of collateral. What could possibly go wrong??

What went wrong is that BlockFi as a company entered into a close relationship with Sam Bankman-Fried’s company, FTX. Back in 2021-2022, twenty-something billionaire Sam Bankman-Fried (“SBF”) was the whiz kid, the visionary genius, the white-knight savior of the crypto universe. In several cases, when some crypto enterprise was tottering, he would step in and invest funds to stabilize things. This reminded some of the role that J. P. Morgan had played in staving off the financial panics of 1893 and 1907. SBF was feted and lauded and quoted endlessly.

For reasons I never understood, BlockFi as a company was having a hard time turning a profit, so I think the plan was for FTX to acquire them. That process was partway along when the great exposé of SBF as a self-serving fraudster occurred at the end of 2022. FTX quickly declared bankruptcy, which forced BlockFi into bankruptcy as well. SBF was eventually locked up, but so were the funds I had put into BlockFi. The amount was not enough to threaten my lifestyle, but it was enough to be annoying.

BlockFi Assets Begin to Thaw

I got emails from BlockFi every few months, assuring customers that they would do what they could to return our assets. Their bankruptcy proceedings kept things locked, but eventually they started to return some money.

As I noted in a blog post, in April 2024 I was able to recover about 27% of my account. At the time, there was no clear prospect of getting the rest. Along the way, I clicked on a well-camouflaged scam email link, which gave me some heartburn, but fortunately no harm came of it.

And now, hooray, they have finally returned it all, following their successful claw-back of assets from SBF’s organization(s). This vindicates my sense that the BlockFi management was/is fundamentally honest and good-willed, and was just a victim of SBF’s machinations.

Some personal takeaways from all this:

  • Keep allocations smallish to outlier investments
  • Sell out at the first serious signs of trouble
  • Triple-check before clicking on any link in an email
  • Having been forced to engage in opening crypto wallets and transferring coins, I have a better feel for the world of crypto which had seemed like a black box. It does not draw me like it does some folks, but if circumstances ever require me to deal in crypto (relocate to Honduras?), I could do it.

Can researchers recruit human subjects online to take surveys anymore?

The experimental economics world is currently still doing data collection in traditional physical labs with human subjects who show up in person. This is still the gold standard, but it is expensive per observation. Many researchers, including myself, also do projects with subjects that are recruited online because the cost per observation is much lower.

As I remember it, the first platform that got widely used was Mechanical Turk. Even before 2022, the attitude toward MTurk had changed: it became known in the behavioral research community that MTurk had too many bots and bad actors. MTurk had not been designed for researchers, so maybe it’s not surprising that it did not serve our purposes.

The Prolific platform has had a good reputation for a few years. You have to pay to use Prolific but the cost per observation is still much lower than what it costs to use a traditional physical laboratory or to pay Americans to show up for an appointment. Prolific is especially attractive if the experiment is short and does not require a long span of attention from human subjects.

Here is a new paper on whether supposedly human subjects are going to be reliably human in the future: “Detecting the corruption of online questionnaires by artificial intelligence”.

Continue reading

Literature Review is a Difficult Intellectual Task

As I was reading through What is Real?, it occurred to me that I’d like a review of the literature on an issue. I thought, “Experimental physics is like experimental economics. You can sometimes predict what groups or ‘markets’ will do. However, it’s hard to predict exactly what an individual human will do.” I would like to know who has written a little article on this topic.

I decided to feed the following prompt into several LLMs: “What economist has written about the following issue: Economics is like physics in the sense that predictions about large groups are easier to make than predictions about the smallest, atomic if you will, components of the whole.”

First, ChatGPT (free version) (I think I’m at “GPT-4o mini (July 18, 2024)”):

I get the sense from my experience that ChatGPT often references Keynes. Based on my research, I think that’s because there are a lot of mentions of Keynes’s books in the model training data. (See “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics”.)

Next, I asked ChatGPT, “What is the best article for me to read to learn more?” It gave me 5 items. Item 2 was “Foundations of Economic Analysis” by Paul Samuelson, which likely would be helpful, but it’s from 1947. I’d like something more recent that addresses the rise of empirical and experimental economics.

Item 5 was: “‘Physics Envy in Economics’ (various authors): You can search for articles or papers on this topic, which often discuss the parallels between economic modeling and physics.” Interestingly, ChatGPT is telling me to Google my question. That’s not bad advice, but I find it funny given the new competition between LLMs and “classic” search engines.

When I pressed it further for a current article, ChatGPT gave me a link to an NBER paper that was not very relevant. I could have tried harder to refine my prompts, but I was not immediately impressed. It seems like ChatGPT had a heavy bias toward starting with famous books and papers as opposed to finding something for me to read that would answer my specific question.

I gave Claude (paid) a try. Claude recommended, “If you’re interested in exploring this idea further, you might want to look into Hayek’s works, particularly “The Use of Knowledge in Society” (1945) and “The Pretense of Knowledge” (1974), his Nobel Prize lecture.” Again, I might have been able to get a better response if I kept refining my prompt, but Claude also seemed to initially respond by tossing out famous old books.

Continue reading

Human Capital is Technologically Contingent

The seminal paper in the theory of human capital is by Paul Romer. In it, he recognizes different types of human capital, such as physical skills, educational skills, work experience, etc. Subsequent macro papers in the literature often just clumped together some measures of human capital as if it were a single substance. There were a lot of cross-country RGDP per capita comparison papers that included determinants like ‘years of schooling’, ‘IQ’, and the like.

But more recent papers have been more detailed. For example, the average biological difference between men and women in brawn has been shown to be a determinant of occupational choice. If comparative advantage holds, then occupational sorting by human capital is the theoretical outcome. That’s exactly what we see in the data.

Similarly, my own forthcoming paper on the 19th century US deaf population illustrates that people who had less sensitive or absent ability to hear engaged in fewer management and commercial occupations, or were less commonly in industries that required strong verbal skills (on average).

Clearly, there are different types of human capital and they matter differently for different jobs. Technology also changes what skills are necessary to boot. This post shares some thoughts about how to think about human capital and technology. The easiest way to illustrate the points is with a simplified example.
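One minimal sketch along these lines (all names and numbers here are invented for illustration, not taken from the forthcoming paper): suppose output in a job is a weighted sum of a worker's skill endowments, and a technology shift changes the weights.

```python
# Two types of human capital (brawn and brain) matter differently across
# jobs, and a technology shift changes which type each job rewards.
# All workers, jobs, and weights below are hypothetical.

def output(worker, job):
    """A worker's output in a job: skill endowments times job weights."""
    return sum(worker[skill] * job[skill] for skill in worker)

def best_job(worker, jobs):
    """Occupational choice: pick the job with the highest output."""
    return max(jobs, key=lambda j: output(worker, jobs[j]))

workers = {
    "Ann": {"brawn": 1.0, "brain": 3.0},
    "Bob": {"brawn": 3.0, "brain": 1.0},
}

# Before mechanization: farm work rewards brawn heavily.
jobs_old = {"farm":   {"brawn": 2.0, "brain": 0.5},
            "office": {"brawn": 0.2, "brain": 1.5}}

# After mechanization: machines substitute for brawn on the farm,
# and office technology raises the return to brain skills.
jobs_new = {"farm":   {"brawn": 0.5, "brain": 1.5},
            "office": {"brawn": 0.2, "brain": 2.5}}

for name, w in workers.items():
    print(f"{name}: {best_job(w, jobs_old)} -> {best_job(w, jobs_new)}")
```

With these made-up weights, the brawny worker sorts into farming under the old technology but into office work once machines substitute for brawn: the value of his human capital is contingent on the technology in use.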

Continue reading

Rote Education has a Purpose

A tweet that got over 2 million views and 2500 likes:

https://x.com/ianmcorbin1/status/1831353564246979017

“Why do our students (even the ones paying a jillion dollars!) *want* to skip their lessons?”

“You give us work fit for machines. You want rote answers.”

He asks why students want to cheat and what is wrong with education. Why did this tweet take off? The answer to his question is obvious.

I’m not of the opinion that education is entirely signaling (see Bryan Caplan). However, anyone can see that education is partly signaling. It’s difficult to get good grades, and good grades are a noisy signal of excellence. Students want to cheat so that they can obtain the good grades and signal to employers that they are excellent. There is nothing mysterious about that.

Part of a professor’s job is to make it hard to cheat and costly if you are caught.

Now we get to the “rote answers” part. How is a professor who has over 100 students every semester supposed to monitor the students’ performance and make it hard to cheat and be fair to every student? The “rote answers” part is a technology called the multiple-choice test with auto or semi-auto (e.g. Scantron machine) grading. Multiple choice tests serve an important role in our society, and they aren’t going anywhere.
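The core of that technology is almost trivially simple, which is exactly why it scales. A minimal sketch of Scantron-style auto-grading (the student IDs, answers, and key below are made up):

```python
# Scantron-style auto-grading: compare each student's bubbled answers
# to an answer key. All data here is hypothetical.

answer_key = ["B", "D", "A", "C", "B"]

submissions = {
    "student_001": ["B", "D", "A", "C", "B"],
    "student_002": ["B", "C", "A", "C", "D"],
}

def score(answers, key):
    """Return the fraction of answers that match the key."""
    correct = sum(a == k for a, k in zip(answers, key))
    return correct / len(key)

for student, answers in submissions.items():
    print(f"{student}: {score(answers, answer_key):.0%}")
```

The marginal cost of grading one more exam is essentially zero, which is what lets a single professor be consistent and fair across hundreds of students.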

A professor who has only 10 students per semester could give personalized assignments, grade oral exams, and serve as an Oxford-style tutor for the students’ hand-written essays or whatnot. However, that kind of education would be extremely expensive/exclusive and does not scale.

Readers are more scarce than writers. AIs can read now. The implications this will have for education and assessment have yet to be seen.

Charles Hugh Smith: Six Reasons the Global Economy Is Toast

If you are feeling OK about the world after a nice Labor Day weekend, I can fix that. How about six reasons why global economic growth will slow to a crawl, courtesy of perma-bear Charles Hugh Smith?

Smith is recognized as an earnest, good-willed alternative economic thinker. His OfTwoMinds blog and other publications bring out many valid facts and factors. He has been extrapolating from those factors to global financial collapse for well over fifteen years now, growing out of the imminent-peak-oil movement of circa 2007 vintage and the scary 2008-2009 financial crisis. Obviously, he has continually underestimated the resilience of the national and global systems, especially the skill of our finance and banking folks at keeping the debt plates spinning, and our ability to harness practical technology (e.g. fracking for oil production). Smith recommends preparing to become more self-reliant: we should learn more practical skills, and prepare to barter with local folks if the money system freezes up.

For now, I will let him speak for himself, and leave it to the readers here to ponder countervailing factors. From August 11, 2024, we have his article titled, These Six Drivers Are Gone, and That’s Why the Global Economy Is Toast:

The six one-offs that drove growth and pulled the global economy out of bubble-bust recessions for the past 30 years have all reversed or dissipated. Absent these one-off drivers, the global economy is stumbling off the cliff into a deep recession without any replacement drivers. Colloquially speaking, the global economy is toast.

Here are the six one-offs that won’t be coming back:

1) China’s industrialization.

2) Growth-positive demographics.

3) Low interest rates.

4) Low debt levels.

5) Low inflation.

6) Tech productivity boom.

( 1 ) Cutting to the chase, China bailed the world out of the last three recessions triggered by credit-asset bubbles popping: the Asian Contagion of 1997-98, the dot-com bubble and pop of 2000-02, and the Global Financial Crisis of 2008-09. In each case, China’s high growth and massive issuance of stimulus and credit (a.k.a. China’s Credit Impulse) acted as catalysts to restart global expansion.

The boost phase of picking low-hanging fruit via rapid industrialization boosting mercantilist exports and building tens of millions of housing units is over. Even in 2000 when I first visited China, there were signs of overproduction / demand saturation: TV production in China in 2000 had overwhelmed global and domestic demand: everyone in China already had a TV, so what to do with the millions of TVs still being churned out?

China’s model of economic development that worked so brilliantly in the boost phase, when all the low-hanging fruit could be so easily picked, no longer works at the top of the S-Curve. Having reached the saturation-decline phase of the S-Curve, these policies have led to an extreme concentration of household wealth in real estate. Those who favored investing in China’s stock market have suffered major losses.

( 2 ) Demographics

Where China’s workforce was growing during the boost phase, now the demographic picture has darkened: China’s workforce is shrinking, the population of elderly retirees is soaring, and so the cost burdens of supporting a burgeoning cohort of retirees will have to be funded by a shrinking workforce who will have less to spend / invest as a result.

This is a global phenomenon, and there are no quick and easy solutions. Skilled labor will become increasingly scarce and able to demand higher wages regardless of any other factors, and that will be a long-term source of inflation. Governments will have to borrow more–and probably raise taxes as well–to fund soaring pension and healthcare costs for retirees. This will bleed off other social spending and investment.

( 3 ) The era of zero-interest rates and unlimited government borrowing has ended. As Japan has shown, even at ludicrously low rates of 1%, interest payments on skyrocketing government debt eventually consume virtually all tax revenues. Higher rates will accelerate this dynamic, pushing government finances to the wall as interest on sovereign debt crowds out all other spending. As taxes rise, households are left with less disposable income to spend on consumption, leading to stagnation.

( 4 ) At the start of the cycle, global debt levels (government and private-sector) were low. Now they are high. The boost phase of debt expansion and debt-funded spending is over, and we’re in the stagnation-decline phase where adding debt generates diminishing returns.

( 5 ) The era of low inflation has also ended for multiple reasons. Exporting nations’ wages have risen sharply, pushing their costs higher, and as noted, skilled labor in developed economies can demand higher wages as this labor cannot be automated or offshored. Offshoring is reversing to onshoring, raising production costs and diverting investment from asset bubbles to the real world.

Higher costs of resource extraction, transport and refining will push inflation higher. So will rampant money-printing to “boost consumption.”

( 6 ) The tech productivity boom was also a one-off. Economists were puzzled in the early 1990s by the stagnation of productivity despite the tremendous investments made in personal and corporate computers, a boom launched in the mid-1980s with Apple’s Macintosh and desktop publishing, and Microsoft’s Mac-clone Windows operating system.

By the mid-1990s, productivity was finally rising and the emergence of the Internet as “the vital 4%” triggered the adoption of the 20% which then led to 80% getting online combined with distributed computing to generate a true revolution in sharing, connectivity and economic potential.

The buzz around AI holds that an equivalent boom is now starting that will generate a glorious “Roaring 20s” of trillions booked in new profits and skyrocketing productivity as white-collar work and jobs are automated into oblivion.

There are two problems with this story:

1) The projections are based more on wishful thinking than real-world dynamics.

2) If the projections come true and tens of millions of white-collar jobs disappear forever, there is no replacement sector to employ the tens of millions of unemployed workers.

In the previous cycles of industrialization and post-industrialization, agricultural workers shifted to factory work, and then factory workers shifted to services and office work. There is no equivalent place to shift tens of millions of unemployed office workers, as AI is a dragon that eats its own tail: AI can perform many programming tasks so it won’t need millions of human coders.

As for profits, as I explained in There’s Just One Problem: AI Isn’t Intelligent, and That’s a Systemic Risk, everyone will have the same AI tools and so whatever those tools generate will be overproduced and therefore of little value: there is no pricing power when the world is awash in AI-generated content, bots, etc., other than the pricing power offered by monopoly, addiction and fraud–all extreme negatives for humanity and the global economy.

Either way it goes–AI is a money-pit of grandiose expectations that will generate marginal returns, or it wipes out much of the middle class while generating little profit–AI will not be the miraculous source of millions of new high-paying jobs and astounding profits.

(End of Smith excerpt; emphases mainly his)

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Have a nice day…

Many Impressive AI Demos Were Fakes

I recently ran across an article on the Seeking Alpha investing site with the provocative title “AI: Fakes, False Promises And Frauds”, published by LRT Capital Management. Obviously, they think the new generative AI is being oversold. They cite a number of examples where demos of artificial general intelligence were apparently staged or faked. I followed up on a few of these examples, and it does seem like this article is accurate. I will quote some excerpts here to give the flavor of their remarks.

In 2023, Google found itself facing significant pressure to develop an impressive innovation in the AI race. In response, they released Google Gemini, their answer to OpenAI’s ChatGPT. The unveiling of Gemini in December 2023 was met with a video showcasing its capabilities, particularly impressive in its ability to handle interactions across multiple modalities. This included listening to people talk, responding to queries, and analyzing and describing images, demonstrating what is known as multimodal AI. This breakthrough was widely celebrated. However, it has since been revealed that the video was, in fact, staged and that it does not represent the real capabilities of Google’s Gemini.

… OpenAI, the company behind the groundbreaking ChatGPT, has a history marked by dubious demos and overhyped promises. Its latest release, Chat GPT-4-o, boasted claims that it could score in the 90th percentile on the Unified Bar Exam. However, when researchers delved into this assertion, they discovered that ChatGPT did not perform as well as advertised.[10] In fact, OpenAI had manipulated the study, and when the results were independently replicated, ChatGPT scored on the 15th percentile of the Unified Bar Exam.

… Amazon has also joined the fray. Some of you might recall Amazon Go, its AI-powered shopping initiative that promised to let you grab items from a store and simply walk out, with cameras, machine learning algorithms, and AI capable of detecting what items you placed in your bag and then charging your Amazon account. Unfortunately, we recently learned that Amazon Go was also a fraud. The so-called AI turned out to be nothing more than thousands of workers in India working remotely, observing what users were doing because the computer AI models were failing.

… Facebook introduced an assistant, M, which was touted as AI-powered. It was later discovered that 70% of the requests were actually fulfilled by remote human workers. The cost of maintaining this program was so high that the company had to discontinue its assistant.

… If the question asked doesn’t conform to a previously known example ChatGPT will still produce and confidently explain its answer – even a wrong one.

For instance, the answer to “how many rocks should I eat” was:

…Proponents of AI and large language models contend that while some of these demos may be fake, the overall quality of AI systems is continually improving. Unfortunately, I must share some disheartening news: the performance of large language models seems to be reaching a plateau. This is in stark contrast to the significant advancements made by OpenAI’s ChatGPT, between its second iteration (GPT-2), and the newer GPT-3 – that was a meaningful improvement. Today, larger, more complex, and more expensive models are being developed, yet the improvements they offer are minimal. Moreover, we are facing a significant challenge: the amount of data available for training these models is diminishing. The most advanced models are already being trained on all available internet data, necessitating an insatiable demand for even more data. There has been a proposal to generate synthetic data with AI models and use this data for training more robust models indefinitely. However, a recent study in Nature has revealed that such models trained on synthetic data often produce inaccurate and nonsensical responses, a phenomenon known as “Model Collapse.”

OK, enough of that. These authors have an interesting point of view, and the truth probably lies somewhere between their extreme skepticism and the breathless hype we have been hearing for the last two years. I would guess that the most practical near-term uses of AI may involve more specific, behind-the-scenes data mining for business applications, rather than exactly imitating the way a human would think.