Thoughts and Links I

AI’s impact on jobs, Germany’s problem with corporatism, and Berlin lawns.

Tags: AI, Economics, Germany

Author: Simon Grimm

Published: October 11, 2023

[1] The economist Michael Webb spoke about the economic impacts of artificial intelligence on the 80,000 Hours podcast. He lays out why AI might not cause the large-scale unemployment that is commonly feared, but might instead create more, higher-paying jobs:

So before ATMs, there were individual humans in the bank. You’d go up to them and show some ID and get your account details, and they would give you some cash. Bank tellers, I think they were called. And you would think, ATM comes along, that’s it for those people.

[…] the ATM did indeed reduce the number of people doing that specific task of handing out money. […]

What happened was the ATM meant there were fewer staff per bank branch, but enabled the opening of many more bank branches overall. And that actually offset the first impact. So fewer staff per bank branch, but so many more bank branches that the total number of people in bank branches actually went up.

In economic terms, Michael calls this “demand elasticity in the presence of complementarity”:

“Demand elasticity” means when you reduce the price of something, you actually want more of it. So automation generally brings the cost of things down. But what normally happens is, one doesn’t say, “Great, I’ll have the same amount of stuff.” They say, “No, I want more of that stuff now. Give me more, more, more.”

Then “in the presence of complementarity”: so “complementary” we think of, if humans are complementary to the automation, the technology, whatever it is, in some way, there’s still some humans involved — fewer than before, per unit of output, but still some. Then because people now want more and more of this stuff, each unit of the thing is more automated, but there’s still some humans involved. And therefore, you end up possibly having ever more humans totally in demand, doing slightly different things, but still roughly in the same ballpark.
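To make the mechanism concrete, here is a toy back-of-the-envelope sketch in Python. All the numbers are made up by me for illustration; they are not Michael’s figures or actual ATM-era data.

```python
# Toy illustration of "demand elasticity in the presence of complementarity".
# All numbers are invented for illustration; none come from the podcast.

# Before ATMs: each branch needs many tellers.
staff_per_branch_before = 10
branches_before = 100

# After ATMs: automation cuts the humans needed per branch (partial
# automation, i.e., complementarity), but cheaper branches mean banks
# open many more of them (elastic demand).
staff_per_branch_after = 4
branches_after = 300

total_before = staff_per_branch_before * branches_before  # 1,000
total_after = staff_per_branch_after * branches_after     # 1,200

print(f"Total branch staff before ATMs: {total_before}")
print(f"Total branch staff after ATMs:  {total_after}")
# Employment rises overall (1,000 -> 1,200) even though each branch
# employs 60% fewer people, because demand expanded more than
# automation displaced labor per unit.
```

The same arithmetic breaks down if demand is inelastic (branches don’t multiply) or if automation is complete (no humans per branch at all), which is exactly the crux of the next item.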

Also, Michael is hiring for his org Quantum Leap.

[2] One crux here is the extent to which humans can be entirely replaced (e.g., how much economic value are apes currently reaping?). In the last edition of Asterisk, Matt Clancy and Tamay Besiroglu debated this topic with regard to AI’s impact on growth rates. Tamay believes that automation will be more complete, and thus lead to far higher growth rates soon. There is far more at the link.

[3] The New York Times reports on Manifest.

[4] Germany’s health minister thinks that advances in large language models could increase the risks of bioterrorism.

[5] Jacob Edenhofer writes about the negative impact of Germany’s corporatist economic structure on its state capacity. More generally, understanding how Germany’s large labor unions and industrial groups cooperate in seeking rents is likely a significant explanatory factor for the low number of new large companies (after BioNTech, Germany’s youngest large company is SAP, founded in 1972). Anyway, here are the most important bits from Jacob:

Germany’s “informational capacity” is notoriously low, despite it being a relatively wealthy country with a strong Weberian administrative tradition. I wonder whether this is, at least partly, attributable to Germany’s corporatist structures. If the state had higher informational capacity, it would know (a lot) more about, inter alia, companies’ profitability, their costs of production, and, crucially, as the gas embargo debate illustrates, about their elasticity of substitution […].

It would then be more difficult for companies to extract informational rents, as they tried to do last winter. In effect, they tried to “scare” politicians by understating their true elasticity of substitution.

Finally, the value of asymmetric information and the associated information rent is much higher in corporatist systems, where businesses and politicians as well as labour regularly bargain with one another. When both business and labour stand to gain from information rents (as they did during last winter), they will try to prevent investments into better info. capacity to maintain or increase their relative bargaining power vis-a-vis the state. […]

[6] Matt Clifford explains why only 100 people will attend the UK’s AI Safety Summit. (Here is a thought of my own on the summit.)

[7] Long-acting preventative HIV medication could avert infections in resource-poor settings. Pre-exposure prophylaxis (PrEP) has already been effective at preventing new infections in richer countries, but existing regimens are hard to deliver in poor settings. Now, a jab providing two months of protection is already available, and a six-month jab is in the works.

[8] Regarding X/Twitter’s continued anti-competitive actions against Substack, this 2019 essay by Stratechery’s Ben Thompson is a worthwhile read. He explains the unique ways in which aggregators like Twitter can suppress competition:

Aggregators collect a critical mass of users and leverage access to those users to extract value from suppliers. The best example of an Aggregator is Google. Google offered a genuine technological breakthrough with Google Search that made the abundance of the Internet accessible to users; as more and more users began their Internet sessions with Google, suppliers — in this case websites — competed to make their results more attractive to and better suited to Google, the better to acquire end users from Google, which made Google that much better and more attractive to end users.

Twitter, in this case, is an aggregator of news and takes. Thompson then goes on to describe vertical foreclosure, where aggregators make it harder for third parties to function on their platform:

Aggregators can also ban 3rd-parties — Google can remove a site from search, or Facebook can remove links from the News Feed […]

He doesn’t advocate for government action against anti-competitive aggregators, though, since users can switch aggregators (Threads, BlueSky) or access third parties directly (e.g., Substack Notes).

For the same reason, though, Aggregators are less of a problem. Third parties can — and should! — go around Aggregators to connect to consumers directly; the presence of an Aggregator is just as likely to spur innovation on the part of a third party in an attempt to attract consumers without having to pay an Aggregator for access.

I still find X’s actions not cool.

[9] Miguel Urquiola lays out why it might be a bad idea to make US universities less selective or more uniform:

  1. Research shows students learn more when schools tailor instruction to their level, so limiting selectivity could hurt all students’ learning, just as forcing all restaurants to serve a random draw of customers could hurt average quality [e.g., Duflo, Dupas, and Kremer 2011].

  2. The U.S. historically let schools develop different strategies and strengths—it promoted diversity rather than legislated uniformity. This produced inequality, true, but also helped generate the best research universities in the world.

To see the effect of enforced uniformity, look at countries like Germany or Spain. They generally do not let universities differ much in resources or prestige. This keeps inequality down, but it also prevents the rise of a Stanford or an MIT. Again, a tradeoff.

There’s more at the link. Also, see my post on why German uniformity in higher education is bad for talent sorting.

[10] In light of the US’s strong performance at this year’s Nobels, here is a graph by the same Miguel Urquiola on the frequency with which Nobel winners’ biographies mention universities in different countries:

[Graph: countries’ share of total mentions of universities in Nobel winners’ biographies, per year.]