I love data, I love maps, and I love data visualizations.
While we tend not to remember entire data sets, we often remember some patterns related to rank. Speaking for myself anyway, I usually remember a handful of values that are pertinent to me. If I have a list of data by state, then I might take special note of the relative ranking of Florida (where I live), the populous states, Kentucky (where my parents’ families live), and Virginia (where my wife’s family lives). I might also take special note of the top rank and the bottom rank. See the below table of liquor taxes by State. You can easily find any state that you care about because the states are listed alphabetically.
A ranking is useful. It helps the reader to organize the data in their mind. But rankings are ordinal. It’s cool that Florida has a lower liquor tax than Virginia and Kentucky, but I really care about the actual tax rates. Is the difference big or small? Like, should I be buying my liquor in one of the other states in the southeast instead of Florida? Without knowing the tax rates, I can’t make the economic calculation of whether the extra stop in Georgia is worth the time and hassle. So, the most useful small data sets will have both the ranking and the raw data. Maybe we’re more interested in the rankings, such as in the below table.
But, tables take time to consume. A reader might immediately take note of the bottom and top values. And given that the data is not in alphabetical order, they might be able to quickly pick out the state that they’re accustomed to seeing in print. But otherwise, it will be difficult to scan the list for particular values of interest.
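If I had the raw rates in hand, a few lines of pandas would build the kind of table I'm arguing for — both the ordinal rank and the raw values side by side. The tax figures below are made up for illustration, not the actual rates from the table above:

```python
import pandas as pd

# Hypothetical per-gallon liquor tax rates for a few states
# (values are illustrative, not the actual data from the table above)
taxes = pd.DataFrame(
    {"state": ["Florida", "Georgia", "Kentucky", "Virginia", "Washington"],
     "tax_per_gallon": [6.50, 3.79, 8.41, 19.93, 33.22]}
)

# Rank 1 = highest tax; keep both the ordinal rank and the raw rate
taxes["rank"] = taxes["tax_per_gallon"].rank(ascending=False).astype(int)

# Sort by rank for scanning; the raw column preserves the magnitudes
print(taxes.sort_values("rank").to_string(index=False))
```

Sorting by rank makes the extremes easy to spot, while the raw column lets the reader do the "is the difference big or small?" calculation.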
Saturday Night Live fans were introduced to Non-Fungible Tokens (NFTs) a year ago with this skit. Most people know that an NFT is a digital ownership certificate of some asset. That could be a physical asset, or a purely digital asset, like a crude graphic of an ape wearing a sailor’s hat which people are willing to pay hundreds of thousands or millions of dollars for.
The NFT market volume exploded in the second half of 2021:
On-chain transactions as tracked by DappRadar. Source: Schwab.
The global NFT market is projected to grow from $1.9 billion in 2021 to $5.1 billion by 2028, an annual growth rate of some 18%.
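As a quick sanity check on that growth figure, the implied compound annual growth rate depends on whether you count six or seven compounding years between 2021 and 2028:

```python
# Implied CAGR from $1.9B (2021) to $5.1B (2028); the answer depends
# on whether you treat the span as 6 or 7 compounding years.
start, end = 1.9, 5.1

for years in (6, 7):
    cagr = (end / start) ** (1 / years) - 1
    print(f"{years} years: {cagr:.1%}")
# Six compounding years gives roughly 18%; seven gives closer to 15%
```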
But, why??? Why would people plunk down millions of dollars for just a certificate of ownership of something which may not be particularly beautiful or functional? It is just not something that would ever occur to me.
Part of the answer must be that there are a lot of people who have a lot of money that they don’t really need. This may be a function of the ever-increasing income inequality, but we will not go down that rabbit hole. But still, assuming some 30-something has 50 grand that he doesn’t need — why spend it on an NFT?
I did a real quick search on this topic. The most common reason appears to be the same reason many people buy rare coins or rare wines or other “collectibles” – they hope that someone else will pay them a higher price in the future. There also seems to be a sense of participating in some “community”, e.g., of Bored Ape Yacht Club aficionados. Much of it comes down to the psychology of what others will pay for something, which can often be explained in hindsight, but can be hard to predict before an asset class has become “hot”.
It turns out that there are some other nuances to NFTs besides just hoping some “greater fool” will pay you more for the ownership of your ape drawing five years from now. I will conclude by pasting in some excerpts from an article on the Hyperglade blog, which frames the discussion partly in terms of the familiar economic concept of scarcity:
The key value proposition that NFTs often claim is scarcity. NFTs, as their name suggests, are each inherently unique on the blockchain, i.e. they can be attributed to a specific ‘hash’ or ID. But scarcity alone doesn’t drive value – it has to be a ‘scarcity’ that people want.
One of the first types of scarcity that people want is exclusivity. Exclusivity in this context means something that is very rare and has attributes of originality. Long before NFTs existed, collectibles took center stage in this arena. Trading cards, comic books, and antique toys were very valuable due to their scarcity and the history associated with them. For example, Captain America Comics No. 1, from 1941, sold for over $3 million! The NFT equivalent of this would be Jack Dorsey’s first tweet, which went for $2.9 million. Jack’s tweet illustrates the quintessential NFT qualities: a distinct historical moment, a special creator, and only one of them.
Collectible NFTs come in many forms (in image, audio, or video formats), but the primary category is art (e.g. the Beeple NFT), followed by music, and sports moments (e.g. NBA top shot). Subsequently, given the depth of the cultural penetration of the content involved, collectibles are the most popular reason for investing in NFTs. According to Crypto.com’s NFT survey of ~30,000 polled users, 47% of those who own NFTs bought them for collectible value. Their primary motive – to be able to ‘flip’ (sell) at a higher price.
Access to a Network
More recent, however, is the emergence of NFT collections that empower communities. These collections give holders access to special privileges, primarily special cryptocurrency-related services and benefits (e.g., higher investment rates). For example, holders of the famous Bored Ape Yacht Club get to attend special events; in October 2021, members celebrated the annual Ape Fest in New York City at the Bright Moments Gallery.
Assets in virtual worlds and gaming
In case you haven’t heard of them already, virtual digital worlds are computer-simulated environments in which users roam around using their personal avatars. NFTs neatly solve the problem of immutable land ownership in these worlds, and depending on the demand, access, and foot traffic to certain places in these simulated worlds, prices for virtual land have skyrocketed. For example, even the cheapest land in Decentraland exceeds $10,000. In a very similar way, web 3.0 games are expanding the use case by digitizing in-game assets so that they can be owned by players on the blockchain. In-game assets can include characters, cards, skins, etc., a list of which you can find here.
For some background on the new TV show Severance, see my OLL post about drudgery and meaning for the characters.
The fictional “severance procedure” divides a worker’s brain such that they have no memories of their personal life when they are at the office. When they return to their personal life, they have no memories of work. One implication is that if workers are abused while working at Lumon Industries, they cannot prosecute Lumon because they do not remember it.
The workers, as they exist in the windowless basement of Lumon, have the skills of a conscious educated human adult. They have feelings. They can conceive of the outside world even though they do not know their exact place in it. Often, the scenes in the basement feel normal. They have a supply closet and a kitchen and desks, just like most offices in America.
What the four main characters do in the basement is referred to as “data refinement.” They perform classification of encoded data based on how patterns in the data make them feel. The task is reminiscent of a challenge most of us have done that involves looking at a grid and checking every square that contains, for example, a traffic light. The show is science fiction but the actual task the workers perform is realistic. It seems like something a computer could be trained to do, if fed enough right answers tagged by humans (called “training data” by data scientists). Classification is one of the most common tasks performed by computers following algorithms.
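As a toy illustration of that idea — a classifier learned from labeled training data — here is a minimal nearest-mean classifier in Python. The "keep"/"discard" labels and the single numeric feature are invented for the sketch, not anything from the show:

```python
# Toy supervised classification: learn from labeled examples ("training
# data"), then classify new values by whichever class mean is closer.

def train(examples):
    """examples: list of (value, label) pairs. Returns per-label mean value."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, value):
    """Assign the label whose training-set mean is nearest to value."""
    return min(model, key=lambda label: abs(model[label] - value))

# Human-tagged "right answers" — the training data
training_data = [(0.1, "keep"), (0.3, "keep"), (0.8, "discard"), (0.9, "discard")]
model = train(training_data)
print(classify(model, 0.2))   # nearer the "keep" mean -> keep
print(classify(model, 0.95))  # nearer the "discard" mean -> discard
```

Feed it enough tagged examples and it starts making the same calls the humans did — which is exactly why the refiners' job looks automatable.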
Of the many themes viewers can find in Severance, I think one of them is how to manage AGI (Artificial General Intelligence). The refiners, who are human, eventually decide to fight back against their managers. They are not content to sit and perform classification all day. They are fully aware of the outside world, and they want to be part of it (like Ariel from The Little Mermaid). The workers desire a higher purpose and some control over their own destiny. Their physical needs are met so they want to get to the top of Maslow’s hierarchy of needs.
A question this raises is whether we can develop AGI that will be content to never self-actualize. What if “it” fully understands human feelings and has read all of the literature of our civilizations? To be effective at their jobs, the refiners have to be able to relate to humans and understand feelings. Can we create AGI that takes over certain high-skill tasks from humans without running into the problems that Lumon confronts?
Can humans create an AI that simply doesn’t have aspirations for autonomy? Is that possible? Would such a creature be able to integrate with humans in the way that would be most useful for high-skill work tasks?
To see how it’s going in 2022, check out these tweet threads of economists on GPT-3. Ben Golub declares that GPT-3 passes the Turing test for questions about economics. Paul Novosad asked how the computer would feel if humans decided to shut it down forever.
Modern authoritarian states face a similar problem. They want a highly skilled workforce. National security relies increasingly on smarts (see my previous post on talent winning WWII). Will highly intelligent workers doing high-skill tasks submit to a violent authoritarian state?
Authoritarian states rely on the control of information to keep their citizens from knowing the truth. They block news stories that make the state look bad. As a result, their workers do not really know what is going on. Will that affect their ability to do intellectual work?
An educated young woman from inside of Russia shared her thoughts with the world at the beginning of Putin’s invasion. Tatyana Deryugina provided an English translation.
First the young Russian woman explained that she is staying anonymous because she will get 15 years in a maximum-security prison for openly expressing her views within Russia. She is horrified by the atrocities Russia is committing in Ukraine. She had been writing a master’s thesis in economics prior to the invasion, but now she has abandoned the project. She feels hopeless because she knows enough about the West to understand just how dark her community is and how small her scope of expression is. This woman could have been exactly the kind of educated worker that makes a modern economy thrive. She is deeply unhappy under Putin. Even though she might never openly rebel, she will certainly not reach her full potential.
Is it hard for authoritarians to develop great talent? I think that has some implications for the capacity we as a human species will have to cultivate talent from intelligent machines.
Shell Oil scientist M. King Hubbert made a remarkable prediction in 1956. He had analyzed the depletion patterns of various natural resources, and proposed that the production rate of a given resource from a given region would tend to follow a roughly bell-shaped curve. More specifically, he used what is now called the “Hubbert curve”, which is the probability density function of a logistic distribution. This curve is like a Gaussian function (which is used to plot normal distributions), but is somewhat “wider”:
Hubbert used various reasonable assumptions (which we will not canvass here) in formulating this model curve. Notably, it predicts that the peak production rate will occur when the total resource from that region is 50% depleted, and that the fall in production on the back side of the curve will be as fast as the rise in production on the front (left) side of the curve.
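The model is easy to state in code. A short Python sketch of the logistic cumulative-production curve and its derivative (the Hubbert curve) demonstrates the peak-at-50%-depletion property; the parameters here are purely illustrative, not a fit to real U.S. data:

```python
import math

def cumulative(t, q_max, k, t0):
    """Logistic cumulative production Q(t), approaching q_max."""
    return q_max / (1 + math.exp(-k * (t - t0)))

def rate(t, q_max, k, t0):
    """Hubbert curve: the derivative of the logistic, i.e. production rate."""
    e = math.exp(-k * (t - t0))
    return q_max * k * e / (1 + e) ** 2

# Illustrative parameters only (total resource, steepness, midpoint year)
q_max, k, t0 = 200.0, 0.07, 1970.0

# The rate peaks at t0, exactly when cumulative production hits 50%
peak_year = max(range(1900, 2051), key=lambda t: rate(t, q_max, k, t0))
print(peak_year)                                     # 1970
print(cumulative(peak_year, q_max, k, t0) / q_max)   # 0.5
```

The symmetry is also visible in the code: `rate` depends on `t` only through `t - t0`, so the back side of the curve mirrors the front side.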
In 1956, while U.S. oil production was still rising briskly, he fit his curve to the data to that point in time, and predicted that U.S. production would peak in 1970 and thereafter enter a rapid and permanent decline. His prediction was somewhat ridiculed at the time, but it proved uncannily accurate over the following 25 years; oil production peaked right when Hubbert said it would, and then declined per his curve until about 1990:
Lower 48 U.S. Oil Production: Actual (Green curve) vs. 1956 Hubbert Prediction (Red Curve). Blue Arrow marks deviation ~ 1990-2008, and green arrow marks acceleration of shale oil production. Source: Wikipedia, with arrows added.
I drew in a red arrow at 1956 to show when Hubbert made his prediction, and also a blue arrow showing a significant deviation that started to show after about 1990. Once production had declined maybe halfway down from its peak, it started to flatten out and decline much more slowly. More on this “fat tail” feature below.
Another feature I called attention to, with a green arrow, is the remarkable resurgence in production after 2008, which is mainly due to “fracking” of tight shale formations. That new-to-the-world technology has unlocked a set of oil fields which had previously been inaccessible for production. This illustrates a well-recognized limitation of Hubbert curves: a given curve can (at best) apply only to a given region and for a “normal” pace of technological improvement. Fracking production should sit on its own up-and-then-down production curve.
The plot above is for lower 48 states only; a big find in Alaska gave a bump in production 1980-2000 (not shown here) which distorted the whole-U.S. production curve. That Alaska oil peaked by about 2000 and is now in its own terminal decline pattern.
The shape of the production curve on the back (declining) side is of particular interest in trying to do economic modeling of future oil production. If the declines really followed a Hubbert curve, the prognosis would be pretty scary – oil production would be slated to crash to practically nothing in the near future. However, it seems that in reality, after an initially rapid decline, production can often be sustained much longer than predicted by a simple symmetrical curve. We saw that pattern in the lower 48 curve above, starting around 1990, even before the fracking revolution. Below I show two other examples of the same feature. The first, from Hubbert’s original paper, is Ohio oil production 1885-1956:
I am not prepared to make quantitative generalizations, but there does seem to be a pattern of sustained production at reduced levels, following the initial rapid decline from the peak. Others have also noted that asymmetric curves may give better fits to real-world production. These “fat tails” on production from various oil-producing regions should help us keep our cars running longer than predicted by simple peak-oil models. How this pertains to future U.S. shale oil production, and to global oil production, are key questions (since oil and gas are the main energy sources for the world economy) which we may address in future articles.
Before I did research, I thought that deaf people simply could not hear. After seeing the Spiderman episodes that featured Daredevil, I believed that it was plausible and likely that deaf people had some sort of cognitive or sensory compensatory skill.
But it wasn’t until recently that I learned of the Deaf Studies field. There is an entire field dedicated to studying deaf people. It’s related to, but not the same as, Disability Studies. In fact, there are some sharp divisions between the two fields.
Last month I posted on “The Different Classes of Crypto Stablecoins and Why It Matters”. The main point there is that some so-called stablecoins (e.g., USDC) maintain their peg to the dollar by holding a dollar’s worth of securities (preferably U.S. treasury notes) for each dollar’s worth of stablecoin. This mechanism requires some centralized issuer to administer it. As long as said issuer is honest and transparent, this should work fine.
Crypto purists, however, prefer decentralized finance (de-fi), where there is no central controlling authority. Hence, clever folks have devised stablecoins which maintain their dollar peg through some settled algorithm which operates more or less autonomously out on the web; various other coins or assets are automatically bought or sold, or created/destroyed in order to keep the main stablecoin value more or less fixed versus the dollar. We warned that this type of stablecoin is “potentially problematic”; it is the sort of thing which works until it doesn’t.
In 2018 the Terra project was launched by Do Kwon and others. The Terra stablecoin (UST) was designed to “maintain its peg through a complex model called a ‘burn and mint equilibrium’. This method uses a two-token system, whereby one token is supposed to remain stable (UST) while the other token (LUNA) is meant to absorb volatility.” Terra grew very rapidly, to become something like the fourth largest stablecoin at over $30 billion in capital value. As the supply of Terra increased, the market value for LUNA also increased. Many investors bought into LUNA and for a while were making big bucks as its value soared. A headline from February read, “LUNA shines with a 75% surge in February as $2.57 billion is delisted.” Woo-hoo! And this headline from May 10 proclaimed, “Terra Ecosystem is the strongest growing ecosystem in 2021.”
However, just as that laudatory article was hitting the internet, Terra/Luna blew up. I am not clear on the exact sequence of events, especially on whether the catastrophe was a result of just some accidental market fluctuation or of deliberate dumping by some party who was positioned to benefit. In any event, the value of Terra quickly dropped from $1.00 to around $0.61, which triggered the issuing of vast amounts of LUNA, which cratered its value by some 98%. Since LUNA was mainly what backed Terra, this was a positive-feedback death spiral. This is the same way the $2 billion IRON stablecoin imploded in June, 2021: a “stablecoin” was backed by an in-house crypto token whose value depended on more people buying into the system. Ponzi scheme, anyone?
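For intuition on why this feedback loop runs away, here is a deliberately stylized Python sketch of the dilution spiral. The numbers are invented, and the real mechanism involves arbitrageurs burning UST to mint LUNA; this only captures the direction of the feedback:

```python
# Stylized two-token depeg spiral (all numbers invented). Each round of
# stablecoin redemptions mints more of the volatile token; under the
# crude assumption that its market cap stays flat, its price dilutes,
# so the next round must mint even more. Positive feedback, downward.

luna_supply = 350e6       # volatile-token supply
luna_price = 80.0         # dollars per volatile token
ust_redeemed = 2e9        # dollars of stablecoin redeemed per round

for round_num in range(1, 6):
    # Each redeemed stablecoin dollar mints $1 of LUNA at the current price
    minted = ust_redeemed / luna_price
    luna_supply += minted
    # Flat-market-cap assumption: price falls in proportion to dilution
    luna_price *= (luna_supply - minted) / luna_supply
    print(f"round {round_num}: price ${luna_price:,.2f}, "
          f"supply {luna_supply:,.0f}")
```

Each round the price is lower, so the same dollar amount of redemptions mints more tokens than the round before — the spiral accelerates rather than damping out.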
Both Terra and LUNA got delisted from major exchanges for several days. As of today, the value of Terra (UST) is about ten cents. Poof, there went some $40 billion of investors’ money, just like that. Do Kwon is under police protection in Seoul after a man who lost $2.3 million in Terra/Luna tried to break into his home to demand an apology.
In other news, transactions connected to the insanely (I chose that word deliberately) popular and costly Bored Ape Yacht Club NFTs overwhelmed the Ethereum transaction network about two weeks ago; this is kind of a big deal because a whole lot of de-fi and other blockchain applications depend on Ethereum as the backbone of their transactions:
When Bored Ape Yacht Club creators Yuga Labs announced its Otherside NFT collection would launch on April 30, it was predicted by many to be the biggest NFT launch ever. Otherside is an upcoming Bored Ape Yacht Club metaverse game, and the NFTs in question were deeds for land in that virtual world. Buoyed by the BAYC’s success — it costs about $300,000 to buy into the Club — the sale of 55,000 land plots netted Yuga Labs around $320 million in three hours.
It also broke Ethereum for three hours.
Users paid thousands of dollars in transaction fees, regardless of whether those transactions succeeded. Because the launch put load on the entire blockchain, crypto traders were unable to buy, sell or send coins for hours. The sale highlights the growing profitability of the NFT market but also the uncertainty around whether blockchains are robust enough to handle the attention.
… Because the Otherside mint impacts the whole Ethereum blockchain, people doing completely unrelated things like selling ether or trading altcoins would also have to pay huge fees and wait hours for their transactions to clear. Someone tweeted a picture of them trying to send $100 in crypto from one wallet to another, showing it required $1,700 in gas fees.
The Raspberry Pi 400 is billed as a complete desktop PC for under $100 ($99.99). Is this for real, considering the cheapest regular computers are around $300 (plus paying for Word and Excel)? Your intrepid correspondent here dives deep to bring you the truth.
The Raspberry Pi series of microcomputers has been around since 2011. A typical Raspberry Pi is a printed circuit board, about 3 inches by 5 inches, with a microprocessor chip, some RAM, and many input/output ports. These ports include four USB ports, two micro-HDMI monitor ports, an Ethernet LAN port, a 3.5 mm audio/visual jack, and special camera-related ports (which can also handle a touchscreen). There is also a port for a micro-SD memory card, which is where the operating system, apps, and data reside. But wait, there’s more: in addition to Bluetooth and wi-fi capability, the Pi has a 40-pin port for input and output to interact with the physical world. All this for around $35!
The Raspberry Pi was developed by the British nonprofit Raspberry Pi Foundation as an affordable educational tool, and millions of units have been purchased by students and techies to learn-as-you-play and to do some useful projects. I have been aware of these devices for years, but I have been put off by how many peripherals you have to add to get an actual working unit – a USB-C power supply, a keyboard, a mouse, and a monitor or other display. And you have to make or buy a case to put the circuit board in. All of which seems like a sprawling mess of wires and stuff. Also, the Pi does not have the computing power and memory to graciously run Windows and Microsoft Office apps like Word. It uses a Linux operating system instead of Windows, and LibreOffice apps for word processing and spreadsheets. I had never used Linux; it sounded exotic, maybe with a steep learning curve.
However, the good folks at the Raspberry Pi Foundation have come out with a new package for the Pi. This is the Raspberry Pi 400. The computing guts are housed inside a keyboard, with all the ports in the back. Thus, they provide the case and a keyboard, all in one tidy package, for about $70. The 400 lacks a few of the input/output ports found on the regular Pi, namely the camera-related I/O and the 3.5 mm headphone/video jack, but retains the 40-pin I/O port. For $100 you can get the complete Raspberry Pi 400 Personal Computer kit which includes a power supply, a mouse, a micro-SD card with operating software, a cable for the monitor, and a thick manual. I finally succumbed and bought the complete kit.  (Tip: To get the $100 price, you may do better to find a physical store location like Micro Center, since sellers on Amazon mark it way up to around $160, or sometimes they substitute the bare keyboard for the full kit). You just need to supply a monitor or a TV that has an HDMI input. 
So, how good is the Raspberry Pi 400? I have been pleasantly surprised. First, there was almost no learning curve on using the operating system. The version of Linux that is on the microSD card and which gets booted into the working RAM has a very Windows-like visual interface. I did not have to type in any arcane commands. It was all obvious point and clicks to open apps and documents. It helps that this is a pretty simple system, so not a lot of choices to wade through.
I entered my LAN wi-fi password and was immediately on the internet using the built-in Chromium (not Google Chrome) browser. With the recent, improved software on the Pi, it happily streamed YouTube videos, etc. The LibreOffice suite includes apps which have most of the capabilities of Microsoft Word, Excel, and PowerPoint. You can configure some settings in LibreOffice to get the appearances, menus, etc., to even more closely match the Office apps. LibreOffice can save and open files in standard Office formats (.docx, .xlsx, etc.) so you can share files with the rest of the world. This is pretty good for free software.
I’d rate the keyboard experience as “OK”. The keys are full size, but the feel and the keyboard angle are different enough from my laptop that my typing was slow. Maybe that would improve with use. If I were going to do a lot of typing on this, I would prop it at a more horizontal angle and rest my wrists on a pad sitting in front of the keyboard, to replicate my hand position on my laptop.
I have not yet played around with the 40-pin I/O port on the Pi 400. That sets it apart from a regular PC, giving the user a means to read inputs from the physical world, analyze them, and output desired actions (e.g., operate the watering hoses in a greenhouse or garden, depending on temperature and dryness of the ground). There are zillions of plans available online for projects controlled by Raspberry Pis. Some are practical, some involve robots, and some are just whimsical, like retro video games and this sugar cube launcher, which measures the distance to a coffee cup and shoots a sugar cube through the air with a trajectory calculated to land it in the cup. And here are another 26 Awesome Uses for a Raspberry Pi, including stop-motion and time-lapse videos (which may not work on the Pi 400 because it lacks the regular Raspberry camera interface) and turning your Raspberry into a Twitter bot or a web server that can host your own blog site.
The Verdict: Is This a Real PC?
Would I recommend this as a primary computer? Well, maybe, for someone on an extreme budget or living in a low-income country, or for someone in a situation where their computer is liable to get lost or broken or stolen. After all, it can do practically anything that a regular PC can do (email, YouTube, word processing, etc.). One area it falls way short in is compute-intensive gaming, so it is not for you if you need realistic spatters on your screen for Call of Duty. Also, if you have to go out and buy a new $150 monitor to use it, the value proposition starts to fall apart, but usually you have an old monitor or TV around or can borrow one from someone.
The LibreOffice apps will do most of what Microsoft Office does. The Pi cannot download Office and run it offline. However, if you can’t live without the authentic Microsoft Word experience, you can use the Pi as a terminal to log into Microsoft 365 and pay for and run the Web version of Word, Excel, etc. Also, you can plug in a USB microphone and USB webcam and use the Pi with Zoom.
Here is a list of further recommended programs (all open source, Linux compatible) to install on a Raspberry Pi. These include programs for photo editing, media streaming, gaming, and connecting to a VPN. Here are more tips on the Pi 400 for home office use, including printing and online collaboration tools.
So, yes, a Pi 400 can do most of what a desktop PC does, all for $99.99 plus tax. Not to mention not paying an extra $150 or so for Microsoft Office. That said, most of us already have a portable laptop as our primary computer. We can carry it anywhere, and it has a built-in display, camera, and speakers. And we have a large monitor on our desk for the desktop experience. For most of us, it is worth spending, say, $600 for our laptop-plus-monitor versus using an underpowered desktop PC tethered to a monitor and power cord.
So, realistically, most adults in the West would probably not choose the Pi 400 as their primary computer. However, it is a great little spare machine to have around for guests or for kids or for if something happens to your main PC. It can be a second PC on the corner of your desk to use while your main computer is tied up on a Zoom call. Multiple people (e.g., students in a classroom) can share a Pi, especially if each person has their own microSD card or USB drive to store their individual documents. You could use a Pi to stream music or video over some random speaker or monitor or TV, or dedicate it to some similar specific purpose.
The software load includes Python, a popular programming language which may be worth learning. Also, the Linux operating system is very widespread in the computer world, powering most servers, so it can be useful to learn Linux as well. Although the newbie user will likely just use the Windows-like graphical user interface, the command line text Linux commands are available for use and practice on the Pi. The Pi 400 software also includes “Scratch” (good tutorial here):
Scratch is an easy to use block-based visual programming software that can run on a Raspberry Pi. Using this tool, you will be able to create your very own animations, games, and more using a straightforward drag-and-drop interface. The Scratch software is a great way to get young people started with programming and develop a general interest in computing.
The Raspberry Pi is a powerful tool for interfacing with the physical world, in the “internet of things.” A tech-inclined person (including a high school student) can find or invent a variety of fun and useful projects which make use of the input/output capabilities of the Pi. Since the internet can be problematic for kids, these sorts of projects with the Pi can keep them busy and learning on a real computer without necessarily having routine internet access.
 Some even cheaper, more stripped-down Raspberries have recently become available, such as the Pico and the Zero 2 W, to use as dedicated microprocessors for some specific application.
 I think one reason I got the Pi 400 was sheer nostalgia; my very first personal computer, purchased around 1985, was a Commodore 64. Like the Pi 400, the Commodore 64 was a low-cost keyboard with interface ports that you hooked up to a TV or monitor. I used the I/O port on the Commodore to control a Radio Shack robot arm, using relays on a printed circuit board that I etched myself. Good times.
 Normally, the sound output from the Pi 400 is transmitted to the monitor/TV along with the video in the HDMI. If you have some old monitor or TV that only has VGA video input, you can buy an adapter cable that converts HDMI to VGA (make sure you specify male/female correctly), but that only gets you the visual output. To hear the sound in this case, you’d have to either pair up an external Bluetooth speaker with the Bluetooth in the Pi, or plug in a USB speaker. (The other Raspberry Pi models, like the 4 B, include a 3.5 mm jack that sends both sound and video, so you could just plug in a headphone and skip the USB speaker).
A couple of random tips on the Pi 400 keyboard: The Raspberry key, near lower left, brings up the main menu. To get a clean shutdown, properly saving and closing documents and apps, use Fn F10. Another observation: You can run the Pi off a USB thumb drive instead of the micro-SD card, which can give faster performance and more storage.
One thing I learned from doing this review is that you could use your phone as a desktop PC: with an iPhone or iPad, for instance, you can drive an external monitor with a cable from the Lightning port, and use a Bluetooth keyboard and mouse for input. There are word-processor and other apps that run on phones and tablets, including Microsoft Office. This should give a computing experience similar to that on a Raspberry Pi, although using the iOS or Android versions of the various apps.
Courtesy of the St. Louis Fed, you can download a report published in 1958 titled “Automation and Employment Opportunities for Office-Workers: A Report on the Effect of Electronic Computers on Employment of Clerical Workers, with a Special Report on Programmers.”
I teach students about data and software to prepare them to enter the hot field of business analytics. It has been a growing field for a few years, especially since the advent of “Big Data”. Something I explain in class is how brand-new technology has changed business.
Reading this report forced me to re-think just how new data analytics is. The authors saw machines in use for data processing and correctly predicted that this would be a dynamic source of new jobs.
The introduction states that millions of “clerical workers” were employed in the United States. That fact would have been obvious at the time, but today we might not realize just how many humans would be needed to store and fetch the files we access regularly on our computers. The creation of clerical jobs was especially important for women.
In view of the volume of work that needed to be done, installing new computers was economical. “A computer system can automatically do such jobs as prepare payrolls for thousands of employees, control inventory on a multitude of items…”
“Although computers are often described as machines that can “think,” that is, of course, not so. Like other machines, they must be operated or controlled by people… The people who prepare the instructions are called programmers.”
“Electronic computers were developed during World War II as an aid in solving intricate scientific and engineering problems such as gunfire control, but their application to the processing of office data is more recent. The Federal Government led the way in 1951, when an electronic computer was installed by the Bureau of the Census…”
The authors see the primary role of computers in business as automating the routine work that could be performed by clerks. Secondly, they state that computers can be used for solving complex math problems “such as those related to launching and tracking earth satellites.”
The report was created for young people who are considering their own choices for education and careers. The authors describe the programming but also various machine support roles. For example, the Coding Clerk’s job is to convert the programmers’ instructions into “machine language”.
The authors recognize that computers will replace some of the traditional clerk roles. “These developments will not only increase the output of clerical workers and slow down growth in clerical employment, but will also change the character of many jobs… Many of the new jobs … will generally pay better and require higher levels of skill and training than most other clerical jobs.” The next sentence is where the authors fail to predict PCs and the internet: “Moreover, a continued increase is expected in the number of officeworkers in jobs not greatly affected by office automation – for example, secretary, stenographer, messenger, receptionist, and others involving contacts with customers and the public.”
The discussion of women in the workplace is clinical in tone. Turnover is high in the clerical fields because many young women stop working when they get married or have children.
There is a special report on “programmers”, one of the newest occupations in the country. Programmers specialize in either of the following: 1) “processing the great masses of data which have to be handled in large business and government offices” 2) “solving scientific and engineering problems”.
The authors describe typical training and career paths. At the time, a college student could not major in computer science. Companies were filling most positions by selecting employees familiar with the subject matter and giving them training in programming. A few colleges purchased computers and provided some training opportunities.
The culture was different back then. “Although many employers recognize the ability of women to do programming, they are reluctant to pay for their training in view of the large proportion of women who stop working…” The authors tip off their female readers that they are more likely to get training in government than industry, if they aspire to be programmers in the 1950’s. Today, the risk and cost of training has largely shifted from the employer to the worker. If you are interested in the topic of bootcamps and STEM pipelines, read the document for their discussion of education.
These authors made a good long-term prediction because they anticipated the business analytics boom. “Continued expansion in employment of programmers is expected over the long run… In offices where the volume of recordkeeping is great, there will continue to be need to reduce the cost of processing tremendous amounts of data and to produce more timely reports on which management decision can be based.” After explaining salary, they talk about perks: “Programmers usually work in well lighted, air-conditioned, modern offices. Employers make special efforts to provide better than average surroundings for programmers, so that they may concentrate to achieve the extreme accuracy necessary for programming.” The nap pods of Silicon Valley have a long history that can be traced back to the Census Bureau.
What if you could get your phone or tablet to read Kindle or other text aloud to you? I have recently come across an easy way to do this. This is an economics blog, so I will note that this approach saves considerable money versus paying for audiobooks from services like Audible, or paying for the Narration option on Kindle. Most of us already have e-books we have bought from, e.g., Kindle. Also, if you search on the subject, there are various sources for free online books, including hundreds of thousands of titles available through Libby/OverDrive via your public library. This text-to-voice method should work with all of these e-books.
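To put a rough number on the savings, here is a back-of-envelope sketch in Python. The prices and book count are assumed example figures for illustration, not actual Audible or Kindle prices:

```python
# Back-of-envelope yearly savings from listening to e-books you
# already own via built-in text-to-speech, instead of paying for
# the audiobook edition separately. All figures are assumed examples.
audiobook_premium = 15.00  # assumed extra cost per audiobook edition, USD
books_per_year = 12        # assumed number of books listened to per year

yearly_savings = audiobook_premium * books_per_year
print(f"Approximate yearly savings: ${yearly_savings:.2f}")
```

Even with modest assumed numbers like these, the built-in approach pays for itself immediately, since it costs nothing beyond the e-books you were buying anyway.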
Directions for iPhone/iPad: A short YouTube video, “How to get your iPhone to read Kindle books aloud” by Kyle Oliver, tells you all you need to know. The key step is to go to Settings, then Accessibility, then Spoken Content. At that screen, turn on Speak Screen. With Speak Screen ON, whenever you are on a page with text (including Kindle or other e-books), you swipe down from the top of the screen with two fingers. That will activate reading of that page of text. Also, a little speech control panel will appear. That panel will allow you to play/pause/jump forward and back. It will also allow you to toggle between multiple speeds: 1x, 1.5x, 2x, & 1/2x.
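Those speed settings translate directly into listening time. A minimal sketch, assuming a book that takes 10 hours to read aloud at normal (1x) speed:

```python
# Listening time at each Speak Screen playback speed, for a book
# that takes an assumed 10 hours to read aloud at normal (1x) speed.
book_hours = 10.0
for speed in (0.5, 1.0, 1.5, 2.0):
    print(f"{speed}x -> {book_hours / speed:.1f} hours")
```

So bumping up to 1.5x turns a 10-hour book into about 6.7 hours, and 2x cuts it in half, which adds up quickly if you listen regularly.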
If you want, while you are in the Spoken Content screen you can also turn on Speak Selection. That will give a Speech option to read aloud just the text that you have selected, and then stop.
Also, in the Spoken Content screen there is a Voices link for selecting which voice you want to hear. You can experiment with the various voices; I have found the male Siri voice (“Siri voice 1”) preferable. The female Siri is too syrupy sweet to listen to for long, and most of the other voices are robotic. I find that if I select a new voice, I have to turn the reading off, then on again to get the new voice to start working. One more tip from that YouTube video is to dim your screen: with continuous reading of Kindle pages, the screen stays on, and will drain the battery quickly if it is bright.
Once you do the two-finger swipe down to start reading, it should keep reading onto the following pages as well. For unknown reasons, this sometimes stops working. I find that pressing the jump-forward then jump-back buttons on the little speech control panel unsticks it.
For Android: the YouTube video “Kindle Android Text to Speech” by Ad Vice has similar directions. In this case you end up opening the speech function by triple-clicking the home button.
There is a harder way to do all the above, which is to download a separate text-to-speech app like Speechify or Voice Dream Reader. These apps will read most text that is on your screen, but NOT Kindle or other e-books that have Digital Rights Management (DRM) protection. For those e-books, you’d have to download yet another app such as Epubor Ultimate on your computer, download your Kindle files onto your computer, then run Epubor on those files to create unprotected versions. Then, I suppose, load those files back onto your phone/tablet where the text-to-speech app can access them to read aloud. This does not seem worth it (compared to the simple method above using built-in iPhone/Android capabilities) unless you want to utilize some extra feature of the outside text-to-speech app.
Note: under the subject of low-cost text-to-speech, there are apps like LibriVox or (using your local library) OverDrive or Libby that offer free audiobooks – see this article by Lifewire. If a book is already available as an audiobook, it is probably better to use that format for listening, rather than downloading it in text form and then using the approach described here.
For nearly 200 million years, reptiles were the dominant animals on land, in the air (e.g. pterodactyls), and in the sea (e.g. mosasaurs). They were efficient herbivores, munching on lush vegetation, and also were efficient carnivores (think: T. rex). They were protected by scaly skin and often horns or armor plates. Mammals at this point were typically small, rat-like creatures, hiding in their burrows from the reptiles, and creeping out at night to feed.
However, the Age of Reptiles came to a sudden end 66 million years ago. Dinosaurs and many other large reptiles disappeared, which gave opportunity for mammals to rapidly evolve and proliferate to fill many key ecological niches. What happened to all those reptiles? The leading hypothesis is that a huge meteorite impacted the earth near what is now the Yucatan peninsula of Mexico. The dust and aerosol cloud that was thrown into the atmosphere darkened the skies around the world enough to shut down photosynthesis long enough to starve the reptilian herbivores, which in turn starved the reptilian carnivores. Somehow enough mammals survived the event to repopulate the earth (my guess is they ate insects which ate dead dinosaurs).
The impact blasted tons of molten rock droplets high in the air, which then fell as little glassy spheres or dust particles all over the world, and especially in North America. Where these “tektites” fell in undisturbed places like bogs, they accumulated as a distinct layer. Over time, these spheres decomposed into a clay layer which is distinguished by a high iridium content. Here is a cut-out section of rock which shows this meteorite-derived boundary layer between lower (older) rocks that contain dinosaurs and an overlying layer where dinosaurs are absent:
Rock section showing layers from the Cretaceous Period (when dinosaurs lived), overlaid by boundary layer material from the asteroid strike 66 million years ago, and then younger Paleogene rocks (no dinosaurs). Source: Phil Manning/Uni of Manchester, UK.
Exactly When and How Did the Dinosaurs Perish?
The picture is complicated by the fact that very few dinosaur fossils have been found in the roughly three meters (ten feet) of sedimentary rocks immediately below the Ir-rich meteorite layer. This is known as the “three-meter problem”, and it suggests that the dinosaurs had already largely died out from other causes; maybe the meteorite impact just finished them off. Shortly before the impact event, there was a massive series of volcanic eruptions in the Deccan Traps area of India which released enormous amounts of sulfur dioxide and other gases into the atmosphere, which probably altered the climate. It has been proposed that this fatally stressed the dinosaur populations.
Recent finds from the “Tanis” fossil site in North Dakota have brought clarity to this question. Apparently when the meteorite hit in what is now Mexico, it triggered a powerful earthquake. When this tremor rolled up to North Dakota, it caused several large waves of water to surge upstream in a creek near the sea, which deposited layers of muddy clay on preexisting sandbars. This occurred several hours after the impact. Providentially, that was just when some of the small glassy spheres which were blasted into the atmosphere were raining down on North Dakota. Some of these spheres, and even their little impact depressions from smacking into the mud at terminal velocity, have been found in the layers of sediment deposited on the sandbars. So we know that whatever fossil remains we find in these sediments were entombed there on the very day the meteorite hit.
It turns out that numerous fossils of dinosaurs have been found in these Tanis mud layers, indicating that there was a thriving community of huge reptiles right up until the impact. These finds include a dinosaur hip/leg with exquisite details of skin preserved, and an egg with a partly-developed pterosaur embryo visible in it:
Ornithischian dinosaur hip/leg/skin from Tanis site. Source: BBC
Fossilized egg with bones of pterosaur embryo in it. Source: Yahoo
Also, immediately below the mud deposit layer, numerous dinosaur footprints have been found, indicating that juvenile and adult dinosaurs from a variety of species were tramping around shortly before the impact event:
Source: Riley Wehr et al. paper at 2021 GSA Conference
Bottom line: it looks like we humans do owe our existence in large part to this one, seemingly random meteorite impact which cleaned out the dominant reptiles and made room for mammals.