Save $$$, Easily Change Your Car Cabin Air Filter Yourself

I have done various maintenance and repairs on my cars over the decades. Usually, the jobs turn out to be harder and more time-consuming than I expected. Changing the engine oil and oil filter has become genuinely harder since oil filters migrated deep up under the engine, where they are hard to reach without putting the car on a lift, and disposing of a milk jug of used oil has gotten more difficult. I used to be able to easily change out a headlight bulb, but on the last car that needed it, much of the front end had to come apart to reach the bulb. However, I recently found that changing the cabin air filters in my two vehicles (a van and a sedan) is so easy, I wish I had started doing it years ago.

Why Change the Cabin Air Filter?

The cabin air filter filters the air coming into the passenger section of the car. It knocks out road dust, pollen, and other debris that might get sucked into the air system as you go down the road. So, it protects your and your family’s lungs as well as the components of the air-handling system. Typical recommendations are to change the filter about once a year or every 15,000-20,000 miles.

The photo below shows the cabin air filter I just pulled out of my van after maybe 2 years and 25,000 miles, next to a relatively clean filter. Obviously, I let this one go a bit too long: it is grey with dust/dirt, and partly blocked with plant debris.

I have not been quick to change out these filters because garages or dealers often charge something like $80-$100 for this. And until recently, I never considered doing it myself, because for some reason I thought it was a hard job. I had read of people having to contort in unnatural positions with heads inserted under dashboards as they disassemble layers of car to get at the filter.

It Is (Often) Super Easy to Change a Cabin Air Filter

It all depends on where the filter is located. For most car models, you can find guidelines online, including YouTube videos. In some models you may indeed have to unscrew a cover plate somewhere below the dashboard to expose the filter. But in most cars, you remove the glove box to expose it. That may involve undoing some screws, a snap, or a strut, and squeezing the edges of the glove box inward. For my Hondas, all I had to do was empty the glove box, (authoritatively) squeeze in the edges, and the glove box pivoted down, and behold, there was the filter in its little holder. Then slide out the holder, pull out the old filter, put in the new filter (purchased at AutoZone for $20 each), slide the holder back in place, and finally tilt the glove box back up until it snaps in place.

Ten minutes max, easy-peasy. Obviously, this saved money, but it also felt empowering. I highly recommend trying it.

Video of Joy Buchanan on Tech Jobs and Who Will Program

Here are some show notes for a keynote lecture to a general audience in Indiana. This was recorded in April 2023.

Minute – Topic

2:00 – “SMET” vs STEM Education – Does Messaging Matter? (Previous blog post on SMET)

5:00 – Is Computer Programming a “Dirty Job”? Air conditioning, compensating differentials, and the nap pods of Silicon Valley (post on the 1958 BLS report)

7:50 – Wages and employment outlook for computer occupations

10:00 – Presenting my experimental research paper “Willingness to be Paid: Who Trains for Tech Jobs?” in 23 minutes

    Motivation and Background: 10:00 – 15:30
    Experimental Design: 15:30 – 22:00
    Results: 22:00 – 30:00
    Discussion: 30:00 – 33:30

33:50 – Drawbacks to tech jobs (see also my policy paper published by the CGO on tech jobs and employee satisfaction)

35:30 – The 2022 wave of layoffs in Big Tech and vibing TikTok Product Managers (I borrowed a graph on the Tech-cession from Joey Politano and a blog point from Matt Yglesias, and of course reference the BLS.)

39:00 – Should You Learn to Code? (and the new implications of ChatGPT) – Ethan Mollick brought this Nature article to my attention. Tweet credits to @karpathy and @emollick.

48:00 – Q&A with audience

China To Squeeze West by Restricting Export of Essential Rare Earths

Rare earths are a set of 17 metals with properties that make them essential to a swathe of high-tech products, including lasers, LEDs, catalysts, batteries, medical devices, sensors, and above all, magnets. Rare earth magnets are used in electric motors, generators, and vibration motors, making them essential to electric cars, wind turbines, cell phones/tablets/computers, airplanes, and all sorts of military devices.

China happens to have large amounts of rare earth oxide ores for mining, relatively lax environmental standards, and a large, compliant workforce. The Chinese government has harnessed these resources to make the nation by far the largest producer of rare earths. Their massive, relatively low-cost production has suppressed production in other countries. This has been a conscious policy, to achieve global control over a vital raw material.

The first time China used this effective monopoly as a political weapon was in a maritime dispute with Japan in 2010. China cut off exports of rare earth metals to Japan for two years, crimping the Japanese electronics industry. Other nations took note of this threat, and since then there have been a number of half-hearted (in my opinion) efforts in various Western nations to develop some domestic capacity and to redesign motors to reduce dependence on rare earth materials.

China’s share of rare earth ore mined is down to 60%, but they totally dominate processing the ore into metals and the subsequent fabrication of magnets from the metal. Nearly all of the ore mined in the U.S. is shipped over to China for processing, mainly because of environmental regulations here.

According to the Asia Times,

The PRC still dominates the entire vertical industry and can flood global markets with cheap material, as it has done before with steel and with solar panels. In 2022, it mined 58% of all rare earths elements, refined 89% of all raw ore, and manufactured 92% of rare earths-based components worldwide.

There is no other global industry so concentrated in the hands of the Chinese Communist Party, nor with such asymmetric downstream impact, as rare earths.

It seems the only way for the West to blunt the Chinese monopoly in rare earths is with large, long-term subsidies (since the Chinese can always undersell the rest of the world on a free market basis) and probably some pushing past environmental objections.

Alarmed by the rapid buildup of Chinese military forces (towards a possible invasion of Taiwan), the U.S. and its allies have begun restricting exports of the highest-power silicon chips to China. In retaliation, China has reportedly made plans to restrict exports of rare earths, starting in 2023. If they follow through, that move would crush fabrication of magnets and of magnet-dependent devices like motors and generators in other countries; the rest of the world would have to come crawling to China for all these items.

This move would in turn cause the rest of the world to accelerate its plans to produce rare earths outside China, but there would be several years of great disruption, and Chinese-made final devices like motors and generators would always have a huge price advantage, due to their cheaper raw material inputs.

I suspect there may be a high-stakes game of brinksmanship going on behind the scenes. The Chinese leadership presumably knows that they can only play this rare earth export ban card once, and the West does not really want to plow a lot of resources into producing large amounts of rare earths much more expensively than they can be bought from China. So maybe we will see some relaxation in chip export controls for China in exchange for them not pulling the final trigger on a rare earth export ban.

We live in interesting times.

Bitcoin’s Dramatic Comeback: Resurrection or Dead Cat Bounce?

In the past year, one cryptocurrency firm after another has gone bust, culminating in the grand implosion of the FTX exchange. The crypto vortex also contributed to some of the recent banking failures.

The prices of cryptocurrencies shot up in 2021, probably fueled by pandemic stimulus money sloshing around in the bank accounts of restless 20- and 30-somethings. All this came crashing back to earth in 2022, giving ample scope for skeptics to say, “I told you this was all foolishness.” Last rites were said, and crypto was left for dead.

But wait… in 2023, when no one was looking, the lid of the crypto coffin started to rattle, a bony hand reached out, and…crypto is back!!

Well, sort of. Here is a five-year chart of Bitcoin from Seeking Alpha, in U.S. dollars:

And here is the past six months:

We can see that Bitcoin took its final big leg down in November, 2022, with the FTX collapse. Its price stayed fairly plateaued down there (with heavy trading volume) until January. Since then, it has nearly doubled.

What has triggered this rise in 2023? Observers such as Michael Grothaus at Fast Company suggest four factors:

(a) A shift to “risk-on” sentiment, with the prospect of the Fed easing off on interest rate hikes this year.

(b) A flight to alternative assets in the wake of the turbulence in the banking sector. Also, since the total number of bitcoins is programmed to never exceed a fixed cap, Bitcoin should be a hedge against inflation. (Many observers believe that the Fed will live with 3-4% inflation indefinitely, to help inflate away the gigantic debt that the federal government incurred with pandemic relief.)

(c) Buying of Bitcoin by traders who were short and now need to cover their positions.

(d) The usual rise in Bitcoin’s value as a “halving” event comes onto the horizon. (About every four years, with the next one scheduled for May 2024, the reward for mining new bitcoins drops by 50%.)
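The capped supply behind point (b) follows directly from the halving schedule in point (d). Here is a minimal back-of-envelope sketch using the protocol’s published constants (a 50 BTC initial block reward, halving every 210,000 blocks); note the real protocol truncates rewards to whole satoshis, so the true cap sits slightly below 21 million:

```python
# Back-of-envelope: total Bitcoin issuance is a geometric series.
# The block reward starts at 50 BTC and halves every 210,000 blocks
# (roughly every four years), so the sum converges near 21 million.
def total_bitcoin_supply():
    reward = 50.0            # initial block reward in BTC
    blocks_per_halving = 210_000
    total = 0.0
    while reward >= 1e-8:    # 1 satoshi is the smallest unit
        total += reward * blocks_per_halving
        reward /= 2          # the "halving"
    return total

print(round(total_bitcoin_supply()))  # → 21000000
```

The series 210,000 × 50 × (1 + 1/2 + 1/4 + …) converges to 21,000,000, which is why no amount of future mining can inflate the supply past that cap.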

Will the rise in Bitcoin prices continue? Is this truly a resurrection from the dead, or just a “dead cat bounce”? [1] Nobody knows. But this latest, sustained rally seems to have helped it recover some luster of legitimacy as an asset class. Here is a list of some popular crypto exchanges that are still in operation.

My personal take: I hold a sliver of the Bitcoin fund GBTC, just to have some skin in the game. I have been too lazy to learn about and activate an actual crypto wallet. I think Bitcoin in particular is an intriguing entity. Many other cryptos at some level depend on some centralized administration, but Bitcoin embodies the ideal of a decentralized, power-to-the-people form of something like money.

[1] From Wikipedia: In finance, a dead cat bounce is a small, brief recovery in the price of a declining stock.  Derived from the idea that “even a dead cat will bounce if it falls from a great height”, the phrase is also popularly applied to any case where a subject experiences a brief resurgence during or following a severe decline. This may also be known as a “sucker rally”.

Comparing ChatGPT and Bing for a research literature review in April 2023

We wrote “ChatGPT Cites Economics Papers That Do Not Exist.”

I expect that problem to go away any day, so I gave it another try this week. For the record, they are currently calling it “ChatGPT Mar 23 Version” on the OpenAI website.

First, I asked ChatGPT for help with the following prompt:

ChatGPT is at it again. There is no such paper, as I will verify by showing John Duffy’s publications from that year: 

ChatGPT makes up lies (“hallucinations”). It is also great for some tasks, and smart people are already using it to become more productive. My post last week was on how impressive ChatGPT seemed in the Jonathan Swift impersonation. I didn’t take any time to fact-check it, and I would bet money that at least some made-up facts were in there.

I posed the same question to the Bing plug-in for the Edge browser (Microsoft). Yup, I have opened Edge for the first time in forever to use Bing.

Bing handles the prompt by linking to a useful relevant paper – so if you click the link you will get to a helpful and not misleading answer. Just being a smart search engine instead of hallucinating randomly is better, for my purposes.

The actual paper I wanted returned was this one, by the way:

Duffy, John. “Experimental macroeconomics.” Behavioural and Experimental Economics (2010): 113-119.

There is no reason that ChatGPT should be better than an expert in a subfield of a field of economics. But that’s the genius of a good search engine. You ask it “Can I repair a broken fiddlewhat?” The search engine does not claim to know but rather directs you to the blog of the world expert in fiddlewhats.

I can’t find the link to it, but I’m going to toss in one more thing here. Tyler Cowen did an interview this Spring on AI. There was a newspaper reporter who had a “creepy” interaction with an AI that made for the topic of a viral internet article. Tyler made a very contrarian point by saying that he interprets this as a case of AI alignment. The reporter wanted something sensational and he got what he wanted.

So, it will probably be true for a long time that if you want to find a failure of AI, you can get what you want. Still, I’m putting this on the record here because I wonder if this particular problem will get solved quickly.

GPT showing off on Conversations with Tyler

Since posting it in January of 2023, our post on a failure of ChatGPT has been one of the most popular articles of the year in terms of web traffic. Read ChatGPT Cites Economics Papers That Do Not Exist

LLMs will make up nonsense, but it can sound sophisticated and serious. As human editors, we are not used to detecting that kind of problem, because if something sounds serious, it has usually been written by a serious, thoughtful person.

That’s a problem, but ChatGPT is really good at some tasks.

Instead of pointing out the fact that ChatGPT is not perfect, Tyler Cowen wants people to see the potential. He has done an entire podcast interview with ChatGPT, asking it to take on the voice and role of deceased writer Jonathan Swift.

I’ll share part of the transcript:

COWEN: In 1709, you, Jonathan Swift, still were calling yourself a Whig. What exactly induced you to switch to the Tory side of the ledger?

SWIFT: My shift from the Whigs to the Tories was a gradual and complex process, influenced by several factors, including personal connections, political developments, and ideological disagreements.

During my early years in politics, I was indeed more aligned with the Whigs, who championed principles such as constitutional government, religious toleration, and the supremacy of Parliament over the monarchy. However, over time, I became increasingly disillusioned with the Whigs, particularly as their internal factions and power struggles intensified.

The Whigs’ support for the war with France, which began in 1702, was a point of contention for me. I believed that the war was costly…

The replies seem nuanced and insightful. Nothing made me suspect that they were factually inaccurate, although I do not know enough about Irish history to judge.

Is there any human who could have produced this script? I think so, although it would have required a lot of work. If one of these replies is better than anything a human Swift scholar would produce, how would we know?

GPT4 can write good summaries for the work of a prolific author like Swift, because the model can train on lots of examples.

GPT4 could probably write a good biography of a modern figure by pulling together all of the writing by them and about them. Maybe GPT4 could efficiently scrape up all mentions of this figure online and synthesize them faster than a human scholar. However, we observed GPT3 completely making up citations when we tried to get it to do economics summaries.

I’m concerned that humans will use GPT4 to write but not do the requisite fact-checking. That could introduce a new corpus of work that the next LLMs will train on, which might be full of lies. Humans might not admit to using GPT, and therefore we wouldn’t have a mechanism for using extra scrutiny on AI-generated writing from 2023. Humans can make mistakes too… so the ultimate solution could be an all-powerful AI that somehow does begin with a fairly accurate map of the world and goes around fact-checking everything faster than human editors ever could.

Self-Replicating Machines: A Practical Human Response

Currently, we have software that can write software. What about physical machines that can produce physical machines? Indeed, what about machines that can produce other machines without human direction?

First of all, machines-building machines (MBM) still require resources: energy, transportation, time, and other inputs. A well-programmed machine that self-replicates quickly can grow in number exponentially. But where would the machines get the resources that enable self-replication? They’d have to purchase them (or conquer the world sci-fi style). Where would a machine get the resources to make purchases of necessary inputs? The same place that everyone else gets them.
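The resource constraint on machine self-replication can be sketched with a toy model (entirely hypothetical numbers): each machine builds one copy per cycle, but every copy consumes inputs from a finite budget, so the doubling stops when the budget runs out.

```python
# Toy model: self-replication doubles the machine count each cycle,
# but each new machine consumes inputs from a finite resource budget,
# so exponential growth halts once resources are exhausted.
def replicate(machines=1, resources=1000, cost_per_machine=1):
    history = [machines]
    while True:
        new_machines = machines             # each machine builds one copy
        cost = new_machines * cost_per_machine
        if cost > resources:                # budget exhausted: growth stalls
            break
        resources -= cost
        machines += new_machines
        history.append(machines)
    return history

print(replicate())  # doubles each cycle (1, 2, 4, 8, ...) until the budget binds
```

With these made-up parameters the count doubles for a while and then simply stops, which is the point of the paragraph above: replication is exponential only until the machines must pay for their own inputs.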

Continue reading

Yes, it was SMET

Last week I posted about the transition from SMET to STEM at the National Science Foundation. I was repeating a story that can be found on several websites including an entry in Britannica.

Andrew Ruapp reached out to me about a possible error in my post. He presented some evidence that the term STEM had been used prior to 2001. Casually Googling the topic did not lead me to a reputable source for the claim I had made last week. “SMET” is comically bad. So, I did start to wonder whether it had ever been officially used at the NSF, or was just a funny story getting repeated online.

To solve this problem, I reached out directly to the person who was credited with making the transition. Dr. Judith Ramaley is currently President Emerita and Distinguished Professor of Public Service at Portland State University.

Having her permission to share, here is our email correspondence:

Encouraged by her reply, I looked online and found a public NSF document from 1998 that clearly uses SMET.

Lastly, I asked her several questions, in a mini email interview:

  1. Are you surprised by how widespread the STEM term has become?

Ramaley: I wasn’t surprised because once NSF adopted the new acronym, I expected it would catch on.

2. Do you feel that the “STEM” brand has been successful?

Ramaley: STEM isn’t really a brand. It is simply an acronym. It works better than SMET I think because engineering and technology are framed by science and mathematics rather than trailing along behind as if less important. I am fascinated by the growing pressure to add other elements to STEM, making it STEAM, for instance. 

3. My son in 2nd grade goes to a STEM activity class once a week. (They just call it “STEM.”) This week he tells me they are working on a pollination project. Would you recommend anything different than the current system for encouraging American students to pursue technology fields?

Ramaley: Your third question is a sweeping one. It would help to know what a STEM activity means each week in your son’s second grade class.  I am drawn to ways of learning STEM that encourage students to approach these issues in an inquiry-based way that lets them explore what it means to ask interesting questions and work out ways to try to answer them. Young people are very curious about how the world works. I doubt that I need to tell you that since I bet your son sometimes drives you nuts with WHY and HOW questions. Questions like that are beautiful questions. 

Online Reading, On Paper

We have six weekly contributors here at EWED and I try to read every single post. I don’t always read them the same day that they are published. Being subscribed is convenient because I can let my count of unread emails accumulate as a reminder of what I’ve yet to read.

Shortly after my fourth child was born over the summer, I understandably got quite behind in my reading. I think I had as many as twelve unread posts. I would try to catch up on the days that I stayed home with the children. After all, they don’t require constant monitoring and often go do their own thing. Then, without fail, every time I pulled out my phone to catch up on some choice econ content, the kids would get needy. They’d start whining, fighting, or otherwise suddenly accosting me for one thing or another, even if they were fine just moments before. It’s as if my phone was the signal that I clearly had nothing to do and should be interacting with them. Don’t get me wrong, I like interacting with my kids. But don’t they know that I’m a professional living in the 21st century? Don’t they know that there is a lot of good educational and intellectually stimulating content on my phone and that I am not merely zoning out and wasting my time?

No. They do not.

I began to realize that it didn’t matter what I was doing on my phone, the kids were not happy about it.

I have fond childhood memories of my dad smoking a pipe and reading the newspaper. I remember how he’d cross his legs and I remember how he’d lift me up and down with them. I less well remember my dad playing his Game Boy. That was entertaining for a while, but I remember feeling more socially disconnected from him at those times. Maybe my kids feel the same way. It doesn’t matter to them that I try to read news articles on my phone (the same content as a newspaper). They see me on a 1-player device.

So, one day I printed out about a dozen accumulated EWED blog posts as double-sided and stapled articles on real-life paper.

The kids were copacetic, going about their business. They were fed, watered, changed, and had toys and drawing accoutrement. I sat down with my stack of papers in a prominent rocking chair and started reading. You know what my kids did in response? Not a darn thing! I had found the secret. I couldn’t comment on the posts or share them digitally. But that’s a small price to pay for getting some peaceful reading time. My kids didn’t care that I wasn’t giving them attention. Reading is something they know about. They read or are read to every day. ‘Dad’s reading’ is a totally understandable and sympathetic activity. ‘Dad’s on his phone’ is not a sympathetic activity. After all, they don’t have phones.

They even had a role to play. As I’d finish reading each blog post, I’d toss the stapled pages across the room. It was their job to throw them away in the garbage can. It became a game: there were these sheets of paper that I cared about, then examined, and then discarded… like yesterday’s news. They’d even argue some over who got to run the next consumed story across the house to the garbage can (sorry, fellow bloggers).

If you’re waiting for the other shoe to drop, then I’ve got nothing for you. It turns out that this works for us. My working hypothesis is that kids often don’t want parents to give them attention in particular. Rather, they want to feel a sense of connection by being involved, or sharing experiences. Even if it’s not at the same time. Our kids want to do the things that we do. They love to mimic. My kids are almost never allowed to play games or do nearly anything on our phones. So, me being on my phone in their presence serves to create distance between us. Reading a book or some paper in their presence? That puts us on the same page.

ChatGPT Cites Economics Papers That Do Not Exist

This discovery and the examples provided are by graduate student Will Hickman.

Although many academic researchers don’t enjoy writing literature reviews and would like to have an AI system do the heavy lifting for them, we have found a glaring issue with using ChatGPT in this role. ChatGPT will cite papers that don’t exist. This isn’t an isolated phenomenon – we’ve asked ChatGPT different research questions, and it continually provides false and misleading references. To make matters worse, it will often provide correct references to papers that do exist and mix these in with incorrect references and references to nonexistent papers. In short, beware when using ChatGPT for research.

Below, we’ve shown some examples of the issues we’ve seen with ChatGPT. In the first example, we asked ChatGPT to explain the research in experimental economics on how to elicit attitudes towards risk. While the response itself sounds like a decent answer to our question, the references are nonsense. Kahneman, Knetsch, and Thaler (1990) is not about eliciting risk. “Risk Aversion in the Small and in the Large” was written by John Pratt and was published in 1964. “An Experimental Investigation of Competitive Market Behavior” presumably refers to Vernon Smith’s “An Experimental Study of Competitive Market Behavior”, which had nothing to do with eliciting attitudes towards risk and was not written by Charlie Plott. The reference to Busemeyer and Townsend (1993) appears to be relevant.

Although ChatGPT often cites non-existent and/or irrelevant work, it sometimes gets everything correct. For instance, as shown below, when we asked it to summarize the research in behavioral economics, it gave correct citations for Kahneman and Tversky’s “Prospect Theory” and Thaler and Sunstein’s “Nudge.” ChatGPT doesn’t always just make stuff up. The question is, when does it give good answers and when does it give garbage answers?

Strangely, when confronted, ChatGPT will admit that it cites non-existent papers but will not give a clear answer as to why it cites non-existent papers. Also, as shown below, it will admit that it previously cited non-existent papers, promise to cite real papers, and then cite more non-existent papers. 

We show the results from asking ChatGPT to summarize the research in experimental economics on the relationship between asset perishability and the occurrence of price bubbles. Although the answer it gives sounds coherent, a closer inspection reveals that the conclusions ChatGPT reaches do not align with theoretical predictions. More to our point, neither of the “papers” cited actually exist.  

Immediately after getting this nonsensical answer, we told ChatGPT that neither of the papers it cited exist and asked why it didn’t limit itself to discussing papers that exist. As shown below, it apologized, promised to provide a new summary of the research on asset perishability and price bubbles that only used existing papers, then proceeded to cite two more non-existent papers. 

Tyler has called these errors ChatGPT’s “hallucinations.” Hallucination might be whimsical in a more artistic pursuit, but we find this form of error concerning. Although there will always be room for improving language models, one thing is very clear: researchers, be careful. This is something to keep in mind, too, when serving as a referee or grading student work.