The Fermi Paradox: Where Are All Those Aliens?

Last week NASA’s independent study team released its highly anticipated report on UFOs. A few takeaways: First, the term “UFO” has been replaced in fed-speak by “UAP” (unidentified anomalous phenomena). Second, no hard evidence has emerged demonstrating an extraterrestrial origin for UAPs. But, third, much remains unexplained.

Believers in aliens are undeterred. Earlier this summer, former military intelligence officer David Grusch made sensational claims in a congressional hearing that the U.S. government is concealing its possession of a “non-human spacecraft.” NASA Administrator Bill Nelson himself holds that intelligent life likely exists in other corners of the universe, given the staggering number of stars that likely have planets with water and moderate temperatures.

A famous conversation took place in 1950 amongst a group of top scientists at Los Alamos (think: Manhattan Project) over lunch. They had been chatting about the recent UFO reports and the possibility of faster-than-light travel. Suddenly Enrico Fermi blurted out something like, “But where is everybody?”

His point was that if (as many scientists believe) there is a reasonable chance that technically advanced life-forms can evolve on other planets, then given the number of stars (~300 billion) in our Milky Way galaxy and the time it has existed, the galaxy should have been colonized many times over by now. Interstellar distances are large, but 13 billion years is a long time. Earth should have received multiple visits from aliens. Yet there is no evidence that this has occurred, not even one old alien probe circling the Sun. This apparent discrepancy is known as the Fermi paradox.
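To see why the timing matters, here is a back-of-envelope sketch in Python (every number is an illustrative assumption, not a measurement): even at 1% of light speed, with a thousand-year pause at every stop, a colonizing species spans the galaxy in tens of millions of years.

```python
# Back-of-envelope: how long to colonize the Milky Way?
# Every number below is an illustrative assumption, not a measurement.

GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter, light-years
SHIP_SPEED_C = 0.01            # assume ships travel at 1% of light speed
PAUSE_YEARS = 1_000            # assume settling time before launching again
HOP_LY = 5                     # assume ~5 light-years between neighboring stars

# Time for one hop: travel time plus the pause to build new ships
hop_time = HOP_LY / SHIP_SPEED_C + PAUSE_YEARS            # 1,500 years

# Hops needed to cross the galaxy, times time per hop
crossing_time = (GALAXY_DIAMETER_LY / HOP_LY) * hop_time  # 30 million years

print(f"Time to span the galaxy: {crossing_time / 1e6:.0f} million years")
print(f"Fraction of galactic age (~13 billion yr): {crossing_time / 13e9:.2%}")
```

Even with these sluggish assumptions, the crossing takes well under 1% of the galaxy’s age, which is the heart of Fermi’s question.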

A variety of explanations have been advanced. To keep this post short, I will just list a few of them, pulled from a Wikipedia article:

Extraterrestrial life is rare or non-existent

Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth.

It is possible that even if complex life is common, intelligence (and consequently civilizations) is not.

Periodic extinction by natural events [e.g., asteroid impacts or gamma ray bursts]

Intelligent alien species have not developed advanced technologies [e.g., if most planets which contain water are totally covered by water, many planets may harbor intelligent aquatic creatures like our dolphins and whales, but they would be unlikely to develop starship technology].

It is the nature of intelligent life to destroy itself [Sigh]

It is the nature of intelligent life to destroy other technically advanced species [A prudent strategy to minimize threats; the result being a reduction in the number of starship civilizations].

And there are many other proposed explanations, including the “zoo hypothesis”: alien life intentionally avoids communication with Earth to allow for natural evolution and sociocultural development and to avoid interplanetary contamination, much as people observe animals at a zoo.

As a chemical engineer and amateur reader of the literature on the origins of life, I’d put my money on the first factor. We have reasonable evidence for tracing the evolution of today’s complex life-forms back to the original cells, but I think the odds for spontaneous generation of those RNA/DNA-replicating cells are infinitesimally low. Hopeful biochemists wave their hands like windmills proposing pathways for life to arise from non-living chemicals, but I have not seen anything that passes the sniff test. It is a long way from a chemical soup to a self-replicating complex system. I would be surprised to find bacteria, much less star-travelling aliens, on many other planets in the galaxy.

Maybe that’s just me. But Joy Buchanan’s recent poll of authors on this blog suggests that we are collectively a skeptical lot.

Innovation as inspiration

Moments of inspiration can and do lead to innovation, almost by definition. Sometimes we forget that innovation is itself inspirational. I first read about “inverse vaccines” two days ago and it hasn’t left my mind since.

“Inverse vaccine” shows potential to treat multiple sclerosis and other autoimmune diseases

These are lab results, not clinical trials. This is not a new treatment coming any time soon. The logic, though, the idea, is absolutely brilliant. Traditional vaccines teach the immune system the blueprint for attacking a new enemy. These researchers realized that the liver has a method for marking molecules with N-acetylgalactosamine so that the immune system knows not to attack them. Autoimmune disorders, from common allergies to multiple sclerosis, are a product of the immune system attacking what it shouldn’t.

Why can’t we mark the cells of the body being tormented by an overzealous immune system with N-acetylgalactosamine?

It’s such a simple idea. Simple and completely brilliant. I am convinced we are living in a new golden age of vaccines. But this? This is inspired and inspiring, promising to take what is already a time of miracles and push it in an entirely new direction. There are new ideas sitting out there, waiting to be conceived. But sometimes what we need is for an act of innovation to inspire us to think in a new way. Or an old way. Or a reciprocal way.

Asking EWED if the Aliens Visited

All the chatter about aliens made me want to do something new: poll my excellent co-bloggers.

Overall, this group does not put the probability of intelligent alien life existing at 0%. It is not possible to prove that aliens are nowhere in a vast universe. A separate question is whether the recent unboxing event in Mexico or the sightings by US military pilots raise the probability that aliens have visited Earth. This group does not find that recent evidence very convincing.

Here are the group thoughts, separated by paragraphs but not indented as quotes:

I don’t think that I have enough information to put a probability on aliens existing. I am not compelled by recent evidence. 

I’m near 50% that they’re out there somewhere in the universe, less than 10% that they are visiting Earth, though some recent evidence (the US military videos, not the Peruvian mummies) is compelling enough to raise this slightly. The 50% is coming from the Fermi paradox, and what I find most compelling from the last few years isn’t any of these potential sightings on Earth, but rather the recent attempts to model the Fermi paradox differently.  Sandberg, Drexler and Ord (2018) argue that when you use probability distributions instead of point estimates in the Drake equation, it is actually reasonable to expect that we are alone in the universe. Robin Hanson has a different model where alien life is common in the universe, but we shouldn’t expect to see them yet.
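(To make the Sandberg, Drexler and Ord point concrete, here is a minimal Monte Carlo sketch. The log-uniform ranges below are made-up illustrations, not the carefully sourced distributions from their paper, but the punchline survives: the mean of the Drake equation can suggest thousands of civilizations while most individual draws imply an empty galaxy.)

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 100_000  # Monte Carlo draws

def log_uniform(low, high, size=N_SAMPLES):
    """Draw samples spread evenly across orders of magnitude."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

# Drake equation factors, each drawn over an ILLUSTRATIVE range
# (assumptions for this sketch, not the paper's distributions).
R_star = log_uniform(1, 100)       # star formation rate per year
f_p    = log_uniform(0.1, 1)       # fraction of stars with planets
n_e    = log_uniform(0.1, 10)      # habitable planets per such system
f_l    = log_uniform(1e-30, 1)     # life arises (enormous uncertainty)
f_i    = log_uniform(1e-3, 1)      # life becomes intelligent
f_c    = log_uniform(1e-2, 1)      # intelligence becomes detectable
L      = log_uniform(1e2, 1e8)     # detectable lifetime, years

n_civs = R_star * f_p * n_e * f_l * f_i * f_c * L

# A point estimate (the mean) looks optimistic even when most draws are ~0:
print(f"Mean number of civilizations:   {n_civs.mean():.2e}")
print(f"Median number of civilizations: {np.median(n_civs):.2e}")
print(f"P(we are alone, N < 1):         {(n_civs < 1).mean():.0%}")
```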

I put the probability of aliens existing between 0.01% and 10%, but I find none of the recent evidence compelling enough to have raised my probability.

Put the probability of aliens existing between 0.01% and 10%. I find some recent evidence compelling enough to have raised my probability.

I doubt that aliens exist, and I find all recent evidence uncompelling.

Joy again: Part of the reason for doing a poll is that I have not dug into this. I have not watched all of the videos, or even most of the most famous videos. I did skim “The UFO craze was created by government nepotism and incompetent journalism” and the part that makes the most sense to me is that UFO stories are great for clicks (clicks are web traffic -> money).

Christine Lagarde on Instability in 2023

Christine Lagarde, President of the European Central Bank, gave a speech called “Policymaking in an age of shifts and breaks” at Jackson Hole in August 2023.

She mentioned multiple factors that make the near future hard to predict, from the effect of A.I. on jobs to the war in Ukraine.

In the pre-pandemic world, we typically thought of the economy as advancing along a steadily expanding path of potential output, with fluctuations mainly being driven by swings in private demand. But this may no longer be an appropriate model.

For a start, we are likely to experience more shocks emanating from the supply side itself.

A line I found interesting, because of my paper on sticky wages:

Large-scale reallocations can also lead to rising prices in growing sectors that cannot be fully offset by falling prices in shrinking ones, owing to downwardly sticky nominal wages. So the task of central banks will be to keep inflation expectations firmly anchored at our target while these relative price changes play out.

And this challenge could become more complex in the future because of two changes in price- and wage-setting behaviour that we have been seeing since the pandemic.

First, faced with major demand-supply imbalances, firms have adjusted their pricing strategies. In the recent decades of low inflation, firms that faced relative price increases often feared to raise prices and lose market share. But this changed during the pandemic as firms faced large, common shocks, which acted as an implicit coordination mechanism vis-à-vis their competitors.

Under such conditions, we saw that firms are not only more likely to adjust prices, but also to do so substantially. That is an important reason why, in some sectors, the frequency of price changes has almost doubled in the euro area in the last two years compared with the period before 2022.

Once Covid changed our lives so much, things kept changing. Firms are raising prices because consumers got used to change.

At this Jackson Hole meeting, both J. Powell, the chair of the Federal Reserve, and Lagarde indicated that they are trying to get inflation under control and back to the 2% target. If you want to get this information via podcast, listen to “Joe Gagnon on Inflation Progress and the Path Ahead: Breaking Down Jerome Powell’s Jackson Hole Speech.”

After reading her interesting speech, I had to know more about C. Lagarde. On Wikipedia, I discovered:

After her baccalauréat in 1973, she went on an American Field Service scholarship to the Holton-Arms School in Bethesda, Maryland.[18][19] During her year in the United States, Lagarde worked as an intern at the U.S. Capitol as Representative William Cohen’s congressional assistant, helping him correspond with French-speaking constituents from his northern Maine district during the Watergate hearings.

Since my post about “awards for young talent” was found and shared on Twitter, I have continued thinking about it. According to Wiki, C. Lagarde has received several prestigious awards. Her progression through the “Most Powerful Woman in the World” ranking is something.

Imagine being that close to the top back in 2015 and getting beat out by American Melinda Gates. But today, Lagarde ranks above both Melinda French Gates and Kamala Harris; she was sitting at #2 when I checked the Forbes website. Will an economist climb to #1?

Dysfunctional Virtue: A Tale of No Profits

For-profit firms are well-oriented. The managers within firms may not make profit their only explicit priority, but it is a prerequisite to their other concerns. Without profits, firms eventually cease to exist. Non-profits are different. They might have revenue from sales and operate much like a for-profit firm, but often they run on donations and endowments. Because the success of non-profits is harder to measure, the signals of triumph and defeat do not orient employees as clearly. The result can be that there is a lot of ruin in a non-profit. Plenty of tasks are done inefficiently, poorly, or not at all.

Mission-driven non-profits are able to attract enthusiastic, dedicated employees given the pay that they offer. But supporting the mission of such an organization often acts as an implicit “belief test,” filtering out would-be applicants who self-select away from open positions for which they are otherwise qualified. Indeed, part of the purpose of mission statements is to filter for the kind of employees that the organization’s managers or donors desire. While the employees may be enthusiastic and dedicated to the mission, that is mostly separate from whether they have the technical skills to flourish in their positions and to effectively serve the organization.


Bond King Doesn’t Like Bonds

Bill Gross grew PIMCO into a trillion-dollar company by trading bonds, earning the epithet “Bond King.” But in an interview with Odd Lots this week, he disclaims both bonds and his title. He wasn’t the king:

My reputation as a bond king was first of all made by Fortune. They printed a four page article with me standing on my head doing yoga, and I was supposedly the bond king, and that was good because it sold tickets. But I never really believed it. The minute you start believing it, you’re cooked.

Who is the real bond king? The Fed:

The bond kings and queens now are at the Fed. They rule, they determine for the most part which way interest rates are going.

Who still isn’t the bond king? Any other trader, especially Jeff Gundlach:

To be a bond king or a queen, you need a kingdom, you need a kingdom. Okay, Pimco had two trillion dollars. Okay, DoubleLine’s got like fifty five billion. Come on, come on, that’s no kingdom. That’s like Latvia or Estonia, whatever. Okay, and then look at his record for the last five, six, seven years. How does sixtieth percentile smack of a bond king? It doesn’t.

Why he doesn’t believe in long-term bonds right now:

We have a deficit of close to two trillion. The outstanding treasury market is about 33 trillion… about thirty percent of the existing outstanding treasuries, so ten trillion, have to be rolled over in the next twelve months, plus the two trillion that’s new. So that’s twelve trillion dollars worth of treasuries that have to be financed over the next twelve months, and who’s going to buy them at these levels? Well, some people are buying them, but it just seems to be a lot of money. And when you add on to that, Powell is doing quantitative tightening, as you know, and that theoretically is a trillion dollars worth of added supply, I guess. And so it just seems like a very dangerous time based on supply, even if inflation does come down.

By revealed preference I agree with Gross, in that I don’t own any long-term bonds. Their yields are way up from 2 years ago, making them somewhat tempting, but I can get higher yields on short-term bonds, some savings accounts, and some stocks. So I see no reason to go long term, especially given the factors Gross highlights. If he’s right, better long-term yields will be here in a year or two. If he turns out to be wrong, I think it would be because of a severe recession here or in another major economy, but I don’t expect that. So what is Gross buying instead of bonds? He likes the idea of real estate:

All my buddies at the country club are in real estate, and they’ve never paid a tax in their life…. I’ve paid a lot of taxes.

He landed on Master Limited Partnerships, common in the energy sector, as an easier way to avoid taxes, and has 40% of his wealth there. Those are yielding more like 9% and have the tax benefits, though they are riskier than treasury bonds. The rest of his portfolio, he implies, is in stocks, describing some merger arbitrage opportunities. I am a bit tempted by bonds because they’ve done so badly recently (and so have gotten much cheaper), but like Gross I think we’re still not at the bottom.

Median Income Is Down Again. Are There Any Silver Linings in the Data?

This week the Census Bureau released their annual update on “Income, Poverty and Health Insurance Coverage in the United States.” This release is always exciting for researchers because it involves a massive release of data based on a fairly large (75,000-household) sample with detailed questions about income and related matters. For non-specialists, it also generates some of the most commonly used national data on income and poverty. Have you heard of the poverty rate? It’s from this data. How about median household income? Also from this data.

I’ll focus on income data in this post, though there is a lot you could say about poverty and health insurance too. The headline result on median income is, once again, a dismal one. Whether you look at median household income (very commonly reported, even though I don’t like this measure) or median family income (which I prefer), both are down from 2021 to 2022 when adjusted for inflation. Both are still down noticeably from the pre-pandemic high in 2019 (though both are also above 2018 — we aren’t quite back to the Great Depression or Dark Ages, folks!).

These headline results are bad. There is no way to sugarcoat or “on the other hand” those results. And these results are probably more robust and representative than other measures of average or median earnings, since they aren’t subject to “composition effects,” the bias that arises when workers with zero wages in one period drop out of the data. I will note that these results are for 2022, and we are highly likely to see a turnaround when we get the 2023 data in about a year (inflation slowed below wage growth in 2023).
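To see why composition effects matter, here is a toy example with made-up wages: when low-wage workers drop out of the data, the measured average rises even though no individual got a raise.

```python
import numpy as np

# Made-up wages for five workers (illustrative numbers only)
year1 = np.array([20_000, 30_000, 50_000, 80_000, 120_000])

# Year 2: nobody gets a raise, but the two lowest earners lose their
# jobs and vanish from the wage data entirely.
year2 = year1[2:]

print(f"Average wage, year 1: ${year1.mean():,.0f}")   # $60,000
print(f"Average wage, year 2: ${year2.mean():,.0f}")   # $83,333
print(f"Measured 'wage growth': {year2.mean() / year1.mean() - 1:.0%}")  # +39%
# Household and family income measures keep those zero-earners in the
# sample, so they don't show this spurious growth.
```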

But given that obviously bad headline result, was there any good data? As I mentioned above, a ton of data, sliced many different ways, is released with this report. Some of it also gives us consistent data back decades, in some cases to the 1940s. What else can we learn from this data release?

Median Income by Race

When we look at median income by race, there are a few silver linings. The headline data from Census tells us that only the drop in household income for White, Non-Hispanics was statistically significant. For other races and ethnicities, the changes were not statistically significant from 2021 to 2022 — and some of those changes were actually positive. We shouldn’t dismiss White, Non-Hispanics — they are the largest racial/ethnic group! — but it is useful to look at others.

Black households and families are the most interesting to look at in more detail, especially because they are the poorest large racial group in the US. Black household and family income both increased from 2021 to 2022, although the increases were small enough that we can’t say they are statistically significant (remember, this is a sample, not the universe of the decennial Census).

But what’s more important is that median Black household income is now at the highest level it has ever been (adjusted for inflation, as always). It is about $1,000, or around 2 percent, higher than in 2019, the peak year for overall median income. Two percent growth over 3 years is nothing to shout from the rooftops, but it is very different from White, Non-Hispanic households, which are down over 6 percent since 2019.

Median Black family income is roughly flat since 2019, but it is up about 1.5 percent in the past year: not quite as robust, but still better than the overall numbers.

Historical Income Data

The other silver lining I always like to mention is the long-run historical data. This data often gets overlooked in the obsessive focus on the most recent changes, so it’s useful to sit back and look at how far we have come. Let’s start where we just left off, with Black families. I wrote a post back in February about Black family income, which had data current through 2021, but it’s useful once again to look at the data with another year (plus they have updated the inflation adjustments for 2000 onward).

The chart shows the percent of Black families in three income groups, using total money income data, adjusted for inflation. The progress is dramatic. In 1967, the first year available, half of Black families had incomes under $35,000. By 2022 that number had been cut in half, to just one quarter of families (the 2022 figure is the lowest on record, even beating 2019). Twenty-five percent is still very high, especially when compared to White, Non-Hispanics (about 12 percent), but it represents massive progress. It’s even a 10-percentage-point drop from just 10 years ago. And Black families haven’t just moved up a little bit: the “middle class” group (between $35,000 and $100,000) has been pretty stable in the mid-40s percent, while the share of rich (over $100,000) Black families has grown dramatically, from just 5 percent to over 30 percent.

We saw earlier that progress for White, Non-Hispanics has stumbled in the past 3 years, but the long run data is much more optimistic (this data starts in 1972).

The progress here should be evident too, but let me highlight one thing for emphasis: as far back as 1999, the largest of these three groups was the “rich” (over $100,000) group. And since 2017, the upper income group has been the majority, with median White, Non-Hispanic family income surpassing $100,000 in 2017, up from $70,000 at the beginning of the series in the early 1970s (all inflation adjusted, of course).

The next question I often get with this historical data is: How much of this increase is due to the rise of two-income households? Well, this same data release allows us to look at that too! This final chart shows median family income for families with either one or two earners (there are families with zero earners or more than two, but these two categories make up the bulk of families). This data is pretty cool because it goes all the way back to 1947.

This chart doesn’t look so good for one-earner families. After growing along with two-earner families in the 1950s and 1960s, it basically stagnates from the early 1970s until the late 2010s. Then you get a little growth. Not good!

I think more investigation is needed here, but the share of families with two earners has grown dramatically, from 26 percent of families in 1947 to 42 percent in 2022. Single-earner families shrank from 59 percent to 31 percent, and dual-income families have been the most common family type since the late 1960s. There are some important compositional differences in what types of families have only one earner. If we imagine some alternate history where, by law, only one spouse was allowed to work, certainly the single-earner line would have risen more. And many of the single-earner families today are headed by single mothers, who for a variety of reasons have much lower earning potential than the fathers heading married couples in the 1950s and 1960s. So the numbers aren’t perfectly comparable.

Still, even for single earner families, real median income has more than doubled since 1947 — though most of that growth had happened by the early 1970s.

As we make our way through a challenging economic time following the pandemic and 2 years of unusually high inflation, hopefully we can look forward to a future of resuming the upward trajectory of incomes for all kinds of families.

Generative AI Nano-Tutorial

Everyone who has not been living under a rock this year has heard the buzz around ChatGPT and generative AI. However, not everyone may have clear definitions in mind, or an understanding of how this stuff works.

Artificial intelligence (AI) has been around in one form or another for decades. Computers have long been used to analyze information and come up with actionable answers. Classically, computer output has been in the form of numbers or graphical representation of numbers. Or perhaps in the form of chess moves, beating all human opponents since about 2000.

Generative AI is able to “generate” a variety of novel content, such as images, video, music, speech, text, software code and product designs, with quality which is difficult to distinguish from human-produced content. This mimicry of human content creation is enabled by having the AI programs analyze reams and reams of existing content (“training data”), using enormous computing power.

I wanted to excerpt here a fine article I just saw which is informative on this subject. Among other things, it lists some examples of gen-AI products and describes the “transformer” model that underpins many of these products. I skipped the section of the article that discusses the potential dangers of gen-AI (e.g., confident but false “hallucinations”), since that topic has been treated already in this blog.

Between this article and the Wikipedia article on Generative artificial intelligence , you should be able to hold your own, or at least ask intelligent questions, when the subject next comes up in your professional life (which it likely will, sooner or later).

One technical point for data nerds is the distinction between “generative” and “discriminative” approaches in modeling. This is not treated in the article below, but see here.
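To make that distinction concrete, here is a small scikit-learn sketch (using the standard iris toy dataset): a generative classifier like naive Bayes models how the features are produced for each class, P(x | y), and classifies via Bayes’ rule, while a discriminative one like logistic regression models P(y | x) directly.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Generative: learns P(x | y) and P(y), classifies via Bayes' rule.
generative = GaussianNB().fit(X_train, y_train)

# Discriminative: learns P(y | x) directly, no model of how x arises.
discriminative = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(f"Naive Bayes accuracy:         {generative.score(X_test, y_test):.2f}")
print(f"Logistic regression accuracy: {discriminative.score(X_test, y_test):.2f}")
```

Only the generative model can be run “in reverse” to sample plausible new feature vectors for a class; that sampling ability, scaled up enormously, is what the “generative” in generative AI refers to.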

All text below the line of asterisks is from Generative AI Defined: How it Works, Benefits and Dangers, by Owen Hughes, Aug 7, 2023.

*******************************************************

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.

How does generative AI work?

Generative AI models use a complex computing process known as deep learning to analyze common patterns and arrangements in large sets of data and then use this information to create new, convincing outputs. The models do this by incorporating machine learning techniques known as neural networks, which are loosely inspired by the way the human brain processes and interprets information and then learns from it over time.

To give an example, by feeding a generative AI model vast amounts of fiction writing, over time the model would be capable of identifying and reproducing the elements of a story, such as plot structure, characters, themes, narrative devices and so on.

……

Examples of generative AI

…There are a variety of generative AI tools out there, though text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding it a prompt that guides it towards producing a desired output, be it text, an image, a video or a piece of music, though this isn’t always the case.

Examples of generative AI models include:

  • ChatGPT: An AI language model developed by OpenAI that can answer questions and generate human-like responses from text prompts.
  • DALL-E 2: Another AI model by OpenAI that can create images and artwork from text prompts.
  • Google Bard: Google’s generative AI chatbot and rival to ChatGPT. It’s trained on the PaLM large language model and can answer questions and generate text from prompts.
  • Midjourney: Developed by San Francisco-based research lab Midjourney Inc., this gen AI model interprets text prompts to produce images and artwork, similar to DALL-E 2.
  • GitHub Copilot: An AI-powered coding tool that suggests code completions within the Visual Studio, Neovim and JetBrains development environments.
  • Llama 2: Meta’s open-source large language model can be used to create conversational AI models for chatbots and virtual assistants, similar to GPT-4.
  • xAI: After funding OpenAI, Elon Musk left the project in July 2023 and announced this new generative AI venture. Little is currently known about it.

Types of generative AI models

There are various types of generative AI models, each designed for specific challenges and tasks. These can broadly be categorized into the following types.

Transformer-based models

Transformer-based models are trained on large sets of data to understand the relationships between sequential information, such as words and sentences. Underpinned by deep learning, these AI models tend to be adept at NLP [natural language processing] and understanding the structure and context of language, making them well suited for text-generation tasks. ChatGPT-3 and Google Bard are examples of transformer-based generative AI models.

Generative adversarial networks

GANs are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator’s role is to generate convincing output such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at their respective roles, resulting in more convincing outputs. Both DALL-E and Midjourney are examples of GAN-based generative AI models…

Multimodal models

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 2 and OpenAI’s GPT-4 are examples of multimodal models.

What is ChatGPT?

ChatGPT is an AI chatbot developed by OpenAI. It’s a large language model that uses transformer architecture — specifically, the “generative pretrained transformer”, hence GPT — to understand and generate human-like text.

What is Google Bard?

Google Bard is another example of an LLM based on transformer architecture. Similar to ChatGPT, Bard is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI’s ChatGPT and Microsoft’s Copilot AI tool. In July 2023, Google Bard was launched in Europe and Brazil.

…….

Benefits of generative AI

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can enable enterprises to automate specific tasks and focus their time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and new insights into how well certain business processes are — or are not — performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing and potentially more. Again, the key proposed advantage is efficiency because generative AI tools can help users reduce the time they spend on certain tasks so they can invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important.

Use cases of generative AI

Generative AI has found a foothold in a number of industry sectors and is rapidly expanding throughout commercial and consumer markets. McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

In customer support, AI-driven chatbots and virtual assistants help businesses reduce response times and quickly deal with common customer queries, reducing the burden on staff. In software development, generative AI tools help developers code more cleanly and efficiently by reviewing code, highlighting bugs and suggesting potential fixes before they become bigger issues. Meanwhile, writers can use generative AI tools to plan, draft and review essays, articles and other written work — though often with mixed results.

The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

  • Healthcare: Generative AI is being explored as a tool for accelerating drug discovery, while tools such as AWS HealthScribe allow clinicians to transcribe patient consultations and upload important information into their electronic health record.
  • Digital marketing: Advertisers, salespeople and commerce teams can use generative AI to craft personalized campaigns and adapt content to consumers’ preferences, especially when combined with customer relationship management data.
  • Education: Some educational tools are beginning to incorporate generative AI to develop customized learning materials that cater to students’ individual learning styles.
  • Finance: Generative AI is one of the many tools within complex financial systems to analyze market patterns and anticipate stock market trends, and it’s used alongside other forecasting methods to assist financial analysts.
  • Environment: In environmental science, researchers use generative AI models to predict weather patterns and simulate the effects of climate change

….

Generative AI vs. machine learning

As described earlier, generative AI is a subfield of artificial intelligence. Generative AI models use machine learning techniques to process and generate data. Broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.

(Again, to make sure credit goes where it is due, the text below the line of asterisks above was excerpted from Generative AI Defined: How it Works, Benefits and Dangers, by Owen Hughes).
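If you want one more level of detail on the “transformer” idea than the article provides, here is a toy sketch of the attention computation at its core, in plain NumPy with random stand-in numbers (real models use learned weights and stack many such layers):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Toy sequence: 4 tokens, each an 8-dimensional vector. In a real model
# these come from learned embeddings; here they are random stand-ins.
tokens = rng.normal(size=(4, 8))

# Projection matrices (learned in a real model, random here) map each
# token to a query, a key, and a value vector.
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v

# Each token compares its query with every key; the scaled, softmaxed
# scores say how much it should draw on every other token's value.
weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
output = weights @ V   # context-aware representation of each token

print(weights.round(2))  # each row sums to 1
```

That weighted mixing across the whole sequence is what lets these models pick up “the relationships between sequential information, such as words and sentences,” as the article puts it.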

AI as matter compiler

I just learned that the de-aging used to produce scenes with a “younger” Harrison Ford in the most recent Indiana Jones movie was achieved with an AI process that matched Ford’s face in each newly filmed moment with a corresponding facial expression from archival footage of the actor.

The movie was fine, even if the third act was a little undercooked. I just want to point out two things. First, this is a natural extension of the LLM model of artificial intelligence: pulling from a library of information provided (i.e., the internet, or footage of Indiana Jones punching Nazis) and then reassembling that information to produce a new product. When we debate whether or not these pieces of software constitute actual “intelligence,” what we are really arguing about is whether the substrate is sufficiently simple, sufficiently inert, for the act of assembly to constitute an act of intelligence.

Case in point: nobody is arguing that de-aging Harrison Ford betrays true intelligence on the part of the software. The material being acted upon is already coherent; it’s just being reordered to convey a new message (i.e., scenes in a different movie). Similar things could be said about ChatGPT. It’s just searching through pre-existing text and ideas, sifting for relevance, and reordering to optimally assemble into an updated product.

Operating within this metaphor, what would constitute intelligence is a system that can sift through a primordial substrate of inorganic, individually incoherent components and assemble them into original pieces of coherent information; sparks of cognition. An example would be to take the ambient, ineffable sentiments implied by the collective set of questions being asked by ChatGPT users (hopes, fears, feelings, etc.) and produce not just answers to questions not yet answered by humans, but answers to questions not yet asked. To sift through information, break it into its smallest possible molecules of cognition, and contribute to the broader collective body of knowledge by assembling new thoughts.

This sort of process has occasionally been posited in science fiction in the form of a “matter compiler,” which is essentially a 3-D printer at the molecular level. That remains pretty far off, as best I know. I suspect the same will be true of true artificial general intelligence. We know an awful lot about how molecules are assembled; the problem with producing a matter compiler is largely one of cost. We know comparatively less about how neurons firing are assembled into acts of generative intelligence, of creativity. We will no doubt get there, but getting there usually happens well before we cross the chasm of cost, of material feasibility.

But yeah, the new Indiana Jones movie was okay.

Assorted Links on Women and Family

First, there have been many tweets about Sophie Turner as a young mom and human who is getting divorced. Here’s an article (Stylist UK).

Also so many tweets about the 29-year-old who made eggs on the weekend. Here’s an article about it by Mary Harrington.

Thirdly, Understanding the Baby Boom (Works in Progress)

Parenthood rapidly became much easier and safer between the 1930s and 1950s. The spread of labour-saving devices in the home such as washing machines and fridges made raising children easier; improvements in medicine made childbirth safer; and easier access to housing made it cheaper to house larger families.

Anvar Sarygulov & Phoebe Arslanagic-Wakefield

I hate to be the next person publicly talking about Joe Jonas and Sophie Turner. I wish them both the best, and this kind of attention is probably hard on their kids. Anyway… what interests me about this case is that parenting seems to have been hard on them, even though Joe Jonas is worth $50 million. They could have a washing machine on every floor of their huge house. So, do the Works in Progress authors really understand the Baby Boom?