Structural Insulated Panels (SIPs): The Latest, Greatest (?) Home Construction Method

Last week I drove an hour south to help an acquaintance construct his retirement home. I had answered a group email request looking for help putting up a wall of the house.
I assumed this was conventional stick-built construction, so I envisioned framing a studded wall out of two-by-fours and two-by-sixes while it lay flat on the ground, and then needing four or five guys to swing the wall up to vertical, like an old-fashioned barn raising.

But that wasn’t it at all. This house was being built from Structural Insulated Panels (SIPs). These panels have a styrofoam core, around 5 inches thick, faced on each side with thin oriented strand board (OSB). (OSB is a kind of cheap plywood.)


The edges have a sort of tongue and groove configuration, so they mesh together. Each of the SIP panels was about 9 feet high and between 2 feet and 8 feet long. Two strong guys could manhandle a panel into position. Along the edge of the floor, 2×6’s had been mounted to guide the positioning of the bottom of each wall panel.


We put glue and sealing caulk on the edges to stick them together, drove 7-inch-long screws through the edges once they were in place, and shot a series of nails through the OSB edges into the 2×6’s at the bottom. Pneumatic nail guns give such a satisfying “thunk” with each trigger pull that you feel quite empowered. Here are a couple of photos from that day:


The homeowner told me that he learned about SIP construction from an exhibit in Washington, DC that he attended with his grandson. The exhibit was on building techniques through the ages, starting with mud huts, and ending with SIP as the latest technique. That inspired him.

(As an old guy, I was not of much use lifting the panels. I did drive in some nails and screws. I was not initially aware of the glue/caulk along the edges, so I spent my first 20 minutes on the job wiping off the sticky goo I got all over my gloves and coat when I grabbed my first panel. My chief contribution that day was to keep a guy who was lifting a heavy panel beam overhead from toppling backwards off a stepladder.)

We amateurs were pretty slow, but I could see that a practiced crew could go slap slap slap and erect all the exterior walls of a medium-sized single-story house in a day or two, without needing advanced carpentry skills. Those walls would come complete with insulation. They would still need weatherproof exterior siding (e.g., vinyl or faux stone) on the outside and sheetrock on the inside. Holes were pre-drilled in the styrofoam core for running electrical wiring up through the SIPs.

From my limited reading, it seems that the biggest single advantage of SIP construction is quick on-site assembly. It is ideal for situations where you have only a limited time window for construction, or for an isolated or affluent area where site labor is very expensive and hard to obtain (e.g., a ski resort town). Reportedly, SIP buildings are mechanically stronger than stick-built ones, which is handy in case of earthquakes or hurricanes. Also, a SIP wall has a very high insulation value, and the construction method is practically airtight.

SIP construction is not cheaper than stick-built; it’s around 10% more expensive. You need perfect communication with the manufacturer of the SIP panels; if the delivered panels don’t fit properly on-site, you are hosed. Also, it is tough to modify a SIP house once it is built.

Because a SIP house is so airtight, it requires some finesse in designing the HVAC system. You also need to be very careful protecting the walls from moisture, both inside and out, since SIP panels can lose strength if they get wet. For that reason, some folks prefer not to use SIPs for roofs, but only for walls and first-story flooring.
For more on SIP pros and cons, see here and here.

Fresh observations of Americans working hard

It has been over 100 years since G.K. Chesterton visited America. I wrote about his observations on the American “enthusiasm for work” for Liberty Fund.

Henry Oliver commented on much the same thing this week in The American art of being busy.

The whole place was as busy as a hive. It went on and on. Everyone was cheerful. No-one fussed and bothered.

And what of the Americans who are not allowed to work because of the government shutdown? Here is a guy who has rapidly found a way to work in DC again and seems “cheerful”: This furloughed IRS lawyer is living out his dream of being a hot dog vendor

This restlessness and energy likely have something to do with the US currently being in the lead in the race to AI. Ho hum… building God while still having the equivalent of a small country’s GDP to drop on Halloween trinkets.

Joy on the Anthropic Copyright Settlement

I’m at Econlog this week with:

The Anthropic Settlement: A $1.5 Billion Precedent for AI and Copyright

There are two main questions. First, will AI companies need to pay compensation to the authors whose works they are currently training on? Second, how important is it for human writing to remain a paying career in the future, if AI continues to need good new material to train on?

There is more at the link but here are some quotes:

If human writing ceases to be a viable career due to inadequate compensation, will LLMs lose access to fresh, high-quality training data? Could this create a feedback loop where AI models, trained on degraded outputs, stagnate?

This case also blurs the traditional divide between copyright and patents. Copyrighted material, once seen as static, now drives “follow-on” innovation derived from the original work. That is, the copyright protection in this case affects AI content influenced by the copyrighted material in a way that previously applied to new technology that built on patented technical inventions. Thus, “access versus incentives” theory applies to copyright as much as it used to apply to patents. The Anthropic settlement signals that intellectual property law, lagging behind AI’s rapid evolution, must adapt.

What’s the Best Major to Prepare for Law School?

  • This post is coauthored with Jack Cavanaugh, a 2025 graduate of Ave Maria University.

Say that you want to become a successful lawyer. What does that mean? One possible meaning is that you are well-compensated. Money is not everything, but it does give people more options for how to spend their time and resources. Law degrees are a type of graduate degree. So, what bachelor’s degree major should one choose in preparation for law school? We lack rich administrative data on college majors and LSAT scores.

Luckily, the 2023 American Community Survey (ACS) comes to the rescue. It has all of the typical demographic covariates, income, occupation, and college major. So, if we make the small leap that well-prepared law school students become high-performing lawyers who are ultimately paid more, then what college major puts you on the right path? What should your major be?

We don’t look at an exhaustive list. We place several occupations into bins and examine only a few alternative majors. Any unlisted major falls under ‘other’. Below are the raw average incomes by occupational category and college major. Note two majors in particular. First, Pre-law literally has the word ‘law’ in the name and is marketed as preparation for law school. However, it is the undergraduate major associated with the lowest paid lawyers. For that matter, Pre-law majors have the lowest pay no matter what their occupation is. Second, Economics majors are the most highly paid in all of the occupations.

Continue reading

Is AI learning just MOOCs again?

I created a provocative title for fun. Tyler pointed me to this podcast:

Joe Liemandt – Building Alpha School, and The Future of Education (Apple podcast link)

I suppose I’m sold on their claim that most kids can learn basic facts and some academic skills from an iPad app. Listen all the way through if you are going to listen at all, because some cracks in the tech product are revealed after the big pitch at the beginning.

I have been using Duolingo to review my high school French and Spanish. I think the few minutes a day I spend have helped drag some vocabulary back out of long-term storage. Although, as I recently heard a comedian say, “All my friends who have Duolingo are still speaking English to me.”

Folks should consider whether AI learning apps are just MOOCs again. Essentially, both need to get kids to watch videos of lecture content; this time the videos are short, where MOOCs used long ones. Maybe shorter is the key, combined with personalized feedback. Maybe not, if the goal is cheap, effective, comprehensive education that scales.

Last year I wrote Why Podcasts Succeeded in Gaining Influence Where MOOCs Failed

About half an hour in, Liemandt asserts that anyone in America would agree that kids learn life skills through “sports” not school. That’s an oversimplification, but I agree that sports ranks higher than “math class” for developing leadership ability.

Since the folks at Alpha School believe they have solved the rapid learning of facts, it’s interesting to hear how they do the rest of “education.” The school must fill enough time that the parents don’t have to see their kids for half the day, and it must also teach leadership, communication, and character. Alpha School is expensive ($40,000 a year), and there are many paid adults involved who are called “guides and coaches.”

The extracurriculars that Alpha School offers sound a lot like what most kids can do in some form at a good public middle school or high school in America. I wrote about the value of outside-class activities in college here: The Value of Student Organizations and On-Campus Education: Anecdotal Evidence from Tim Keller

My students at Samford are especially good at taking on leadership roles and creating a thriving community. Residential college life provides a good testing ground for leadership, and there are real “market tests” of success for things like sorority events, much as Alpha School encourages for older kids.

I applaud people trying to innovate. I think we’ll see more educational apps in schools, and that will be great. I’m not trying to dump on Alpha School. I just think the underperformance arc of MOOCs should temper our enthusiasm.

The 2018 Tariffs in Many Graphs

Did President Trump’s first-term tariffs, enacted in 2018, increase manufacturing employment, or even just manufacturing output? Let’s set the stage.

Manufacturing employment peaked in 1979 at 19.6 million. That number declined to 18 million in the 1980s and to 17.3 million in the 1990s. By 2010, the statistic had bottomed out at 11.4 million. Since then, there has been a rise and a plateau to about 12.8 million, if we omit the pandemic.

Historically, economists weren’t too worried about the transition to services. After all, despite falling employment in manufacturing, output continued to rise through 2007. But after the financial crisis, output has been flat since 2014, again omitting the pandemic. Meanwhile, manufacturing employment has risen by 5% through 2025, which reflects falling productivity per worker. That’s not comforting either to economists or to people who want more things “Made in the USA”.
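The flat-output, rising-employment arithmetic is easy to make concrete with a minimal Python sketch; the 5% employment rise is from the paragraph above, while the index values of 100 are illustrative normalizations I am assuming, not actual data:

```python
# Back-of-the-envelope: productivity = output / employment.
# Normalize 2014 output and employment to an index of 100 (illustrative).
output_2014 = 100.0
output_2025 = 100.0          # output roughly flat since 2014 (omitting the pandemic)
employment_2014 = 100.0
employment_2025 = 105.0      # employment up about 5% through 2025

productivity_2014 = output_2014 / employment_2014
productivity_2025 = output_2025 / employment_2025

change = (productivity_2025 / productivity_2014 - 1) * 100
print(f"Productivity per worker changed by {change:.1f}%")  # prints -4.8%
```

With output flat and headcount up 5%, output per worker mechanically falls by about 4.8%, which is the "falling productivity" in the paragraph above.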

Looking at the graphs, there’s no long-term bump from the 2018 tariffs in either employment or output. If you squint, maybe you can argue that there was a year-long bump in both, but that’s really charitable. Still, let’s not commit the fallacy of division; the aggregate could mask movement in the components. What about the categories of manufacturing? After all, the 2018 tariffs were targeted at solar panels, washing machines, and steel. Smaller or less exciting tariffs followed.

Breaking it down into the major manufacturing categories of durables, nondurables, and ‘other’ (which includes printed material and minimally processed wood products), only durable manufacturing output briefly got a bump in 2018. But we can break it down further.

Continue reading

Meta Is Poaching AI Talent With $100 Million Pay Packages; Will This Finally Create AGI?

This month I have run across articles noting that Meta’s Mark Zuckerberg has been making mind-boggling pay offers (like $100 million/year for 3–4 years) to top AI researchers at other companies, plus the promise of huge resources and even (gasp) personal access to Zuck himself. Reports indicate that he is succeeding in hiring around 50 brains from OpenAI (home of ChatGPT), Anthropic, Google, and Apple. Maybe this concentration of human intelligence will finally produce the long-craved artificial general intelligence (AGI); there seems to be some recognition that the current Large Language Models will not get us there.

There are, of course, other interpretations being put on this maneuver. Some talking heads on a Bloomberg podcast speculated that Zuckerberg was deliberately using Meta’s mighty cash flow to starve competitors of top AI talent. They also speculated that, since there is a limit to how much money you can possibly, pleasurably spend, if you pay some guy $100 million in a year, a rational outcome would be for him to quit and spend the rest of his life hanging out at the beach. (That, of course, is what Bloomberg finance types might think, since they measure worth mainly in terms of money, not in the fun of doing cutting-edge R&D.)

I found a thread on Reddit to be insightful and amusing, so I post chunks of it below. Here is the earnest, optimistic OP:

andsi2asi

Zuckerberg’s ‘Pay Them Nine-Figure Salaries’ Stroke of Genius for Building the Most Powerful AI in the World

Frustrated by Yann LeCun’s inability to advance Llama to where it is seriously competing with top AI models, Zuckerberg has decided to employ a strategy that makes consummate sense.

To appreciate the strategy in context, keep in mind that OpenAI expects to generate $10 billion in revenue this year, but will also spend about $28 billion, leaving it in the red by about $18 billion. My main point here is that we’re talking big numbers.

Zuckerberg has decided to bring together 50 ultra-top AI engineers by enticing them with nine-figure salaries. Whether they will be paid $100 million or $300 million per year has not been disclosed, but it seems like they will be making a lot more in salary than they did at their last gig with Google, OpenAI, Anthropic, etc.

If he pays each of them $100 million in salary, that will cost him $5 billion a year. Considering OpenAI’s expenses, suddenly that doesn’t sound so unreasonable.

I’m guessing he will succeed at bringing this AI dream team together. It’s not just the allure of $100 million salaries. It’s the opportunity to build the most powerful AI with the most brilliant minds in AI. Big win for AI. Big win for open source

And here are some wry responses:

kayakdawg

counterpoint 

a. $5B is just for those 50 researchers, loootttaaa other costs to consider

b. zuck has a history of burning big money on r&d with theoretical revenue that doesnt materialize

c. brooks law: creating agi isn’t an easily divisible job – in fact, it seems reasonable to assume that the more high-level experts enter the project the slower it’ll progress given the communication overhead

7FootElvis

Exactly. Also, money alone doesn’t make leadership effective. OpenAI has a relatively single focus. Meta is more diversified, which can lead to a lack of necessary vision in this one department. Passion, if present at the top, is also critical for bleeding edge advancement. Is Zuckerberg more passionate than Altman about AI? Which is more effective at infusing that passion throughout the organization?

….

dbenc

and not a single AI researcher is going to tell Zuck “well, no matter how much you pay us we won’t be able to make AGI”

meltbox

I will make the AI by one year from now if I am paid $100m

I just need total blackout so I can focus. Two years from now I will make it run on a 50w chip.

I promise

My Perfunctory Intern

A couple of years ago, my co-blogger Mike described his productive but novice intern. The helper could summarize expert opinion but had no real understanding of its own. To boot, it was fast and tireless. Of course, he was talking about ChatGPT. Joy has also written in multiple places about the errors made by ChatGPT, including fake citations.

I use ChatGPT Pro, which has web access, and my experience is that it is not so tireless. Much like Mike, I have used ChatGPT to help me write Python code. I know the basics of Python and how to read a lot of it. However, the multitude of methods and possible arguments are not nestled firmly in my skull. I’m much faster at reading Python code than at writing it. Therefore, ChatGPT has been amazing… mostly.

I have found that ChatGPT is more like an intern than many suppose:

Continue reading

Counting Hallucinations by Web-Enabled LLMs

In 2023, we gathered the data for what became “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.” Since then, LLM use has increased. A 2025 survey from Elon University estimates that half of Americans now use LLMs. In the Spring of 2025, we used the same prompts, based on the JEL categories, to obtain a comprehensive set of responses from LLMs about topics in economics.

Our new report on the state of citations is available at SSRN: “LLM Hallucination of Citations in Economics Persists with Web-Enabled Models.”

What did we find? Would you expect the models to have improved since 2023? LLMs have gotten better and are passing ever more of what used to be considered difficult tests. (Remember the Turing Test? Anyone?) ChatGPT can pass the bar exam for new lawyers. And yet, if you ask ChatGPT to write a document in the capacity of a lawyer, it will keep making the mistake of hallucinating fake references. Hence, we keep seeing headlines like “A Utah lawyer was punished for filing a brief with ‘fake precedent’ made up by artificial intelligence.”

What we call GPT-4o WS (Web Search) in the figure below was queried in April 2025. This “web-enabled” language model is enhanced with real-time internet access, allowing it to retrieve up-to-date information rather than relying solely on static training data. This means it can answer questions about current events, verify facts, and provide live data—something traditional models, which are limited to their last training cutoff, cannot do. While standard models generate responses based on patterns learned from past data, web-enabled models can supplement that with fresh, sourced content from the web, improving accuracy for time-sensitive or niche topics.

At least one-third of the references provided by GPT-4o WS were not real! Performance has not improved to the point where AI can write our papers with properly incorporated attribution of ideas. We also found that the web-enabled model would pull from lower-quality sources like Investopedia, even when we explicitly stated in the prompt, “Include citations from published papers. Provide the citations in a separate list, with author, year in parentheses, and journal for each citation.” Even some of the sources that were not journal articles were cited incorrectly. We provide specific examples in our paper.
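To get a feel for what a one-third hallucination rate implies for a whole bibliography, here is a toy back-of-the-envelope in Python. It assumes, purely for illustration (my assumption, not a finding of the paper), that each generated citation is real with probability 2/3, independently of the others:

```python
# Toy model: probability that an entire generated reference list is real,
# assuming each citation is real with probability 2/3, independently.
p_real = 2 / 3

for n in (1, 5, 10, 20):
    p_all_real = p_real ** n
    print(f"{n:2d} citations: P(all real) = {p_all_real:.1%}")
```

Under those toy assumptions, a ten-citation reference list comes back fully real less than 2% of the time, which is why every citation still needs to be checked by hand.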

In closing, consider this quote from an interview with Jack Clark, co-founder of Anthropic:

The best they had was a 60 percent success rate. If I have my baby, and I give her a robot butler that has a 60 percent accuracy rate at holding things, including the baby, I’m not buying the butler.