META Stock Slides as Investors Question Payout for Huge AI Spend

How’s this for a “battleground” stock:

Meta stock dropped about 13% when its latest quarterly earnings were released, then continued to slide until today’s market exuberance over a potential end to the government shutdown. What is the problem?

Meta has already invested enormous sums in AI development, and has committed to invest even more in the future. It is currently plowing some 65% (!!) of its cash flow into AI, with no near-term prospect of making big profits there. CEO Mark Zuckerberg has a history of spending big on the Next Big Thing, which eventually fizzles. Meta’s earnings have historically been so high that he could throw away a few billion here and there and nobody cared. But now (up to $800 billion capex spend through 2028) we are talking real money.

Up till now Big Tech has been able to finance its investments entirely out of cash flow, but (like its peers) Meta has started issuing debt to pay for some of the AI spend. Leverage is a two-edged sword: if you can borrow a ton of money (up to $30 billion here) at say 5%, and invest it in something that returns 10%, that is glorious. Rah, capitalism! But if the payout is not there, you are hosed.
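The two-edged-sword arithmetic can be sketched in a few lines. The 5% borrowing cost and 10% return are the illustrative figures from above, not Meta’s actual numbers:

```python
def leverage_outcome(principal, borrow_rate, return_rate):
    """Annual profit (or loss) on borrowed money: what the
    investment earns minus the interest owed on the debt."""
    interest = principal * borrow_rate
    earnings = principal * return_rate
    return earnings - interest

# Borrow $30B at 5%, invest at a 10% return: glorious.
win = leverage_outcome(30e9, 0.05, 0.10)   # about +$1.5B per year

# Same debt, but the AI payout never materializes (0% return):
loss = leverage_outcome(30e9, 0.05, 0.00)  # about -$1.5B per year
```

The interest bill is owed regardless of whether the investment pays off, which is exactly why debt-funded capex is riskier than spending out of cash flow.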

Another ugly issue lurking in the shadows is Meta’s dependence on scam ads for some 10% of its ad revenues. Reuters released a horrifying report last week detailing how Meta deliberately slow-walks or ignores legitimate complaints about false advertising and even more nefarious misuses of Facebook. Chilling specific anecdotes abound, but they seem to be part of a pattern of Meta choosing not to aggressively curtail known fraud, because doing so would cut into its revenue. It focuses its enforcement efforts in regions where its hands are likely to be slapped hardest by regulators, while continuing to let advertisers defraud users wherever they can get away with it:

…Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document.

But those fines would be much smaller than Meta’s revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that “present higher legal risk,” the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds “the cost of any regulatory settlement involving scam ads.”

Rather than voluntarily agreeing to do more to vet advertisers, the same document states, the company’s leadership decided to act only in response to impending regulatory action.
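The cost-benefit logic in those documents is simple arithmetic, using the figures from the quoted reporting:

```python
# Figures from the Reuters-reported internal Meta documents.
scam_revenue_per_half_year = 3.5e9  # "higher legal risk" scam ads only
expected_fine_ceiling = 1.0e9       # anticipated regulatory penalty

annual_scam_revenue = 2 * scam_revenue_per_half_year  # $7B per year

# How long it takes that revenue stream to cover even a maximal fine:
fine_covered_in_months = 12 * expected_fine_ceiling / annual_scam_revenue
# roughly 1.7 months of high-risk scam-ad revenue pays the whole fine
```

With a worst-case fine recouped in under two months, the documents’ conclusion that fines would be “much smaller” than the revenue follows immediately.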

Thus, the seamy underside of capitalism. And this:

…The company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain – but still believes the advertiser is a likely scammer – Meta charges higher ad rates as a penalty, according to the documents. 

So…if Meta is 94% (but not 95%) sure that an ad is a fraud, it will still let it run, but just charge more for it. Sweet. Guess that sort of thinking is why Zuck is worth $250 billion, and I’m not.
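As described, the enforcement policy is a simple threshold rule. A minimal sketch follows; the 95% cutoff and the penalty-pricing behavior come from the Reuters reporting, but the 1.5x rate multiplier is a made-up number for illustration:

```python
def handle_advertiser(fraud_probability, base_ad_rate, penalty_multiplier=1.5):
    """Threshold policy described in the internal documents.

    At >= 95% predicted certainty of fraud, the advertiser is banned.
    Below that, a likely scammer keeps running ads at a penalty rate.
    The 1.5x multiplier is hypothetical; the documents don't give one.
    """
    if fraud_probability >= 0.95:
        return ("banned", None)
    # 94% sure it's a scam? The ad still runs -- at a premium price.
    return ("running", base_ad_rate * penalty_multiplier)
```

The perverse incentive is visible in the return values: just below the cutoff, a suspected scam becomes a higher-margin customer rather than an enforcement target.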

But never fear, Meta’s P/E is the lowest of the Mag 7 group, so maybe it is a buy after all:

[Chart: Meta’s P/E vs. the rest of the Mag 7] Source

As usual, nothing here should be considered advice to buy or sell any security.

Meta Is Poaching AI Talent With $100 Million Pay Packages; Will This Finally Create AGI?

This month I have run across articles noting that Meta’s Mark Zuckerberg has been making mind-boggling pay offers (like $100 million/year for 3-4 years) to top AI researchers at other companies, plus the promise of huge resources and even (gasp) personal access to Zuck himself. Reports indicate that he is succeeding in hiring around 50 brains from OpenAI (home of ChatGPT), Anthropic, Google, and Apple. Maybe this concentration of human intelligence will result in the long-craved artificial general intelligence (AGI) being realized; there seems to be some recognition that the current Large Language Models will not get us there.

There are, of course, other interpretations being put on this maneuver. Some talking heads on a Bloomberg podcast speculated that Zuckerberg was deliberately using Meta’s mighty cash flow to starve competitors of top AI talent. They also speculated that (since there is a limit to how much money you can possibly, pleasurably spend) if you pay some guy $100 million in a year, a rational outcome would be for him to quit and spend the rest of his life hanging out at the beach. (That, of course, is how Bloomberg finance types might think; they measure worth mainly in terms of money, not in the fun of doing cutting-edge R&D.)

I found a thread on reddit to be insightful and amusing, and so I post chunks of it below. Here is the earnest, optimistic OP:

andsi2asi

Zuckerberg’s ‘Pay Them Nine-Figure Salaries’ Stroke of Genius for Building the Most Powerful AI in the World

Frustrated by Yann LeCun’s inability to advance Llama to where it is seriously competing with top AI models, Zuckerberg has decided to employ a strategy that makes consummate sense.

To appreciate the strategy in context, keep in mind that OpenAI expects to generate $10 billion in revenue this year, but will also spend about $28 billion, leaving it in the red by about $18 billion. My main point here is that we’re talking big numbers.

Zuckerberg has decided to bring together 50 ultra-top AI engineers by enticing them with nine-figure salaries. Whether they will be paid $100 million or $300 million per year has not been disclosed, but it seems like they will be making a lot more in salary than they did at their last gig with Google, OpenAI, Anthropic, etc.

If he pays each of them $100 million in salary, that will cost him $5 billion a year. Considering OpenAI’s expenses, suddenly that doesn’t sound so unreasonable.

I’m guessing he will succeed at bringing this AI dream team together. It’s not just the allure of $100 million salaries. It’s the opportunity to build the most powerful AI with the most brilliant minds in AI. Big win for AI. Big win for open source

And here are some wry responses:

kayakdawg

counterpoint 

a. $5B is just for those 50 researchers, loootttaaa other costs to consider

b. zuck has a history of burning big money on r&d with theoretical revenue that doesnt materialize

c. brooks law: creating agi isn’t an easily divisible job – in fact, it seems reasonable to assume that the more high-level experts enter the project the slower it’ll progress given the communication overhead

7FootElvis

Exactly. Also, money alone doesn’t make leadership effective. OpenAI has a relatively single focus. Meta is more diversified, which can lead to a lack of necessary vision in this one department. Passion, if present at the top, is also critical for bleeding edge advancement. Is Zuckerberg more passionate than Altman about AI? Which is more effective at infusing that passion throughout the organization?

….

dbenc

and not a single AI researcher is going to tell Zuck “well, no matter how much you pay us we won’t be able to make AGI”

meltbox

I will make the AI by one year from now if I am paid $100m

I just need total blackout so I can focus. Two years from now I will make it run on a 50w chip.

I promise

Zuckerberg wants to solve general intelligence

Why does Mark Zuckerberg want to solve general intelligence? Well, for one thing, if he doesn’t, one of his competitors will have a better chatbot. Zuckerberg wants to be the best (and good for him). At his core, he wants to build the best stuff (even the world’s best cattle on his ranch).

If AGI is possible, it will get built. I’m not the first person to point out that this is a new space race. If America takes a pause, then someone else will get there first. However, I thought the Zuck interview was an interesting microcosm for why AGI, if possible, will get made.

… We started FAIR about 10 years ago. The idea was that, along the way to general intelligence or whatever you wanna call it, there are going to be all these different innovations and that’s going to just improve everything that we do. So we didn’t conceive of it as a product. It was more of a research group. Over the last 10 years it has created a lot of different things that have improved all of our products. …
There’s obviously a big change in the last few years with ChatGPT and the diffusion models around image creation coming out. This is some pretty wild stuff that is pretty clearly going to affect how people interact with every app that’s out there. At that point we started a second group, the gen AI group, with the goal of bringing that stuff into our products and building leading foundation models that would power all these different products.
… There’s also basic assistant functionality, whether it’s for our apps or the smart glasses or VR. So it wasn’t completely clear at first that you were going to need full AGI to be able to support those use cases. But in all these subtle ways, through working on them, I think it’s actually become clear that you do. …
Reasoning is another example. Maybe you want to chat with a creator or you’re a business and you’re trying to interact with a customer. That interaction is not just like “okay, the person sends you a message and you just reply.” It’s a multi-step interaction where you’re trying to think through “how do I accomplish the person’s goals?” A lot of times when a customer comes, they don’t necessarily know exactly what they’re looking for or how to ask their questions. So it’s not really the job of the AI to just respond to the question.
You need to kind of think about it more holistically. It really becomes a reasoning problem. So if someone else solves reasoning, or makes good advances on reasoning, and we’re sitting here with a basic chat bot, then our product is lame compared to what other people are building. At the end of the day, we basically realized we’ve got to solve general intelligence… (emphasis mine)

Credit to Dwarkesh Patel for this excellent interview. Credit to M.Z. for sharing his thoughts on topics that affect the world.

“We’ve got to solve general intelligence.” If a competitor solves AGI first, then you are left behind. No one would turn down general intelligence on their team, on the assumption that it can be controlled.

I would like the AGI to do my chores for me, please. Unfortunately, it’s more likely to be able to write my blog posts first.

Zuckerberg Wants to Suck You into His Metaverse

Facebook founder Mark Zuckerberg has been making a lot of noise in the past few months about the “metaverse”, and now has changed his company’s name from Facebook to “Meta Platforms” (MVRS on the NASDAQ). What, you may ask, is the metaverse?

The term itself has been around for a while. Wikipedia defines it as, ”The metaverse is an iteration of the Internet as part of shared virtual reality, often as a form of social media. The metaverse in a broader sense may not only refer to virtual worlds operated by social media companies but the entire spectrum of augmented reality.” In the near term, it will be embodied by people wearing headsets with Augmented Reality (AR) goggles (with little projector screens in front of your eyes) connected over the internet to other people wearing AR goggles. Instead of seeing people on flat screens (think Zoom calls), both you and they will seem to be in the same room, interacting with each other in 3-D. You and they will each be represented by digitally constructed avatars. Eventually your body would have various sensors attached to it to convey your position and motions, and your sense of touch for objects you are handling. For instance, this just in:

Together with scientists from Carnegie Mellon University, artificial intelligence researchers at Meta created a deformable plastic “skin” less than 3 millimeters thick….When the skin comes into contact with another surface, the magnetic field from the embedded particles changes. The sensor records the change in magnetic flux, before feeding the data to some AI software, which attempts to understand the force or touch that has been applied.

Zuckerberg gave a presentation on October 28 touting his company’s pivot. In his words:

The next platform and medium will be even more immersive, an embodied internet where you’re in the experience, not just looking at it, and we call this the metaverse….When you play a game with your friends, you’ll feel like you’re right there together in a different world, not just on your computer by yourself. And when you’re in a meeting in the metaverse, it’ll feel like you’re right in the room together, making eye contact, having a shared sense of space and not just looking at a grid of faces on a screen. That’s what we mean by an embodied internet. Instead of looking at a screen, you’re going to be in these experiences.  You’re going to really feel like you’re there with other people. You’ll see their face expressions. You’ll see their body language. Maybe figure out if they’re actually holding a winning hand…

Next, there are avatars, and that’s how we’re going to represent ourselves in the metaverse. Avatars will be as common as profile pictures today, but instead of a static image, they’re going to be living 3D representations of you, your expressions, your gestures that are going to make interactions much richer than anything that’s possible online today. You’ll probably have a photo realistic avatar for work, a stylized one for hanging out and maybe even a fantasy one for gaming. You’re going to have a wardrobe of virtual clothes for different occasions designed by different creators and from different apps and experiences.

Beyond avatars, there is your home space. You’re going to be able to design it to look the way you want, maybe put up your own pictures and videos and store your digital goods. You’re going to be able to invite people over, play games and hang out. You’ll also even have a home office where you can work…

We believe that neural interfaces are going to be an important part of how we interact with AR glasses, and more specifically EMG input from the muscles on your wrist combined with contextualized AI. It turns out that we all have unused neuromotor pathways, and with simple and perhaps even imperceptible gestures, sensors will one day be able to translate those neuromotor signals into digital commands that enable you to control your devices. It’s pretty wild.

The reactions to all this I have seen on the internet have not been particularly positive. Some suggest that this is largely a publicity stunt to deflect attention from recent revelations of hypocritical and harmful decisions by Facebook management. The Guardian scoffs:

First came the Facebook papers, a series of blockbuster reports in the Wall Street Journal based on a cache of internal documents leaked by Frances Haugen, a former employee turned whistleblower.

The dam broke wider last week after Haugen shared the documents with a wider consortium of news publications, which have published a slew of stories outlining how Facebook knew its products were stoking real-world violence and aggravating mental health problems, but refused to change them.

Now the regulatory sharks are circling. Haugen recently testified before US and UK lawmakers, heightening calls to hold the company to account.

Facebook, meanwhile, appeared to be living in another universe. Its rebrand to Meta this week has prompted ridicule and incredulity that a company charged with eroding the bedrock of global democracy would venture into a new dimension without apologizing for the havoc it wreaked on this one.

Ouch. Privacy advocates are concerned about the implications of identity theft taken into the 3D domain: imagine some malicious actor sending a realistic avatar of you around cyberspace doing things you would not do. Also, it is widely recognized that too much time on today’s (flat) screens is unhealthy; how would 3D glasses make that better?

Scott Rosenberg at Axios notes some more prosaic shortcomings of Zuck’s beatific vision:

The real you is just sitting in a chair wearing goggles…The video mock-ups of the metaverse Zuckerberg unveiled showed us what remote-presence wizardry might look like from within the 3D dimension. But they omitted the prosaic reality of most current VR… Right now, the metaverse isn’t “embodied” at all. It’s an out-of-body experience where your senses take you somewhere else and leave your body behind on a chair or couch or standing like a blindfolded prisoner…

Today’s headsets mostly block out the “real world” — and sometimes induce wooziness, headaches and even nausea. Why it matters: If you fear screen time atrophies your flesh and cramps your soul or find Zoom drains your energy, wait till you experience metaverse overload….

Virtual-world makers will feel the same incentives to boost engagement and hold onto users’ eyeballs in the metaverse that they have on today’s social platforms.

That could leave us all nostalgic for our current era of screen-blurred vision, misinformation-filled newsfeeds and privacy compromises.