What to Read on the AIEO*
*Not a vowel exercise but the Artificial Intelligence Executive Order
It’s easier than ever to get published and harder than ever to get read, and A.I. may exacerbate the imbalance. So rather than write my own new take on President Biden’s executive order, here are some I think are especially well-informed and appropriately humble about our ability to anticipate the future.
Steven Sinofsky wrote a long, thoughtful post with this abstract: “The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation.” His essay is convincing and scary. Excerpt:
Instead, this document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law—literally what is it that can be done to throttle artificial intelligence legally without passing any new laws that might throttle artificial intelligence. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate out the document from the process and approach used to “govern” AI innovation. Govern is quoted because it is the word used in the EO. This is so much less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.…
Unfortunately, the history of executive orders is that of them becoming increasingly important as the legislature becomes less effective or writes bills that are increasingly vague. We saw President Trump reverse a bunch of President Obama executive orders on his first day. Many people cheered. Then we saw President Biden reverse a bunch of President Trump orders on his first day. Many different people cheered. This might be fun politically but when you consider innovation this is a disaster. Such an approach is akin to trying to build something amazing while working with a constant threat of a reorg or having resources pulled out from under the team. EOs are not a way to govern effectively in general, and they are specifically not a way to “govern” innovation….
We sit in a country surrounded by nuclear warheads, but we have few nuclear power plants equally capable of harnessing the power of the atom. The choice to deploy physics in this manner was entirely that of the government, not of people. We are here today in this spot because the government decided what was best. A lot of people today remain committed to generating power from the atom, but the government stands in the way of that innovation.
Had the government not backed down from deciding that encryption was not the equivalent of “justice, security, and opportunity for all” we might be in a world without end-to-end encryption or even secure online commerce.
I find it appalling how many people in DC see the light-handed regulatory approach taken to the internet as a mistake. No technology is without downsides, but strict regulation would have stifled an amazing source of knowledge, wealth, and consumer benefits—and wouldn’t have addressed some of the most prominent side effects people worry about now. I know because I was there. Nobody was thinking about political polarization or depressed teenage girls. They weren’t even thinking about photographs. (Steve Jobs didn’t mention that killer app when he introduced the iPhone!)
If you have any interest in AI or regulation, read the whole thing (or at least skim it).
Complementary (it quotes Sinofsky) and somewhat shorter, Ben Thompson’s take on the executive order is also excellent. I particularly appreciate his opening context highlighting the importance of remembering how little we know about the paths a new technology may take:
Embrace uncertainty and the fact one doesn’t know the future.
Understand that people are inventing things — and not just technologies, but also use cases — constantly.
Remember that the art comes in editing after the invention, not before.
Thompson sees corporate opportunism at work. I don’t know how much incumbent businesses are trying to use regulation to lock in their technological head starts and how much they’re sincerely concerned about AI dangers. But, whatever the motives, regulation tends to favor the existing big players, blocking new ventures and new ideas.
Adam Thierer, author of Permissionless Innovation, has a considerably less voluminous take here. Excerpt:
The EO represents a potential sea change in the nation’s approach to digital technology markets, as federal policymakers appear ready to shun the open innovation model that made American firms global leaders in almost every computing and digital technology sector. With the United States now facing fierce competition from global AI companies in China and other nations, the danger exists that the country could put algorithmic innovators in a regulatory cage, encumbering them with many layers of bureaucratic permission slips before any new product or service could launch. Biden’s new EO could accelerate the move to tie the hands of algorithmic entrepreneurs even if Congress does not pass any new legislation on this front….
Of greater concern is the EO’s green light for the Federal Trade Commission (FTC) to expand its focus on AI policy. While the agency does possess broad powers to police unfair and deceptive practices within all markets, the administration’s call for it to exercise greater regulatory authority over the AI ecosystem creates the potential for preemptive overreach. The FTC’s controversial Chair, Lina Khan, has radicalized the agency and pursued aggressive actions against digital technology companies since her tenure began. The FTC has made it clear that AI systems are in its sights, and the agency could be positioning itself to serve as America’s de facto AI regulator. Because the Biden administration’s new EO (in addition to its previous AI Bill of Rights) suggests that broad-based harms are omnipresent within algorithmic systems, it could serve as an open-ended invitation for the FTC to overzealously harass AI innovators and micromanage developing markets.
Adam invited me to join this panel on “The Case for Optimism in the Age of AI.”
On the panel I read from this short post by Steven Heller, a prominent graphic designer and author of many books on the subject. He reminds us that earlier digital tools, such as Photoshop, aroused many of the fears currently evoked by AI. Rather than regulation, he emphasizes the need for users to embrace new tools and apply them to enhance rather than erode human creativity:
When Photoshop was introduced, some smart yet short-sighted folks mourned the loss of brain and handwork that had defined the field for a hundred plus years, but quickly the software was trained, tamed and put to use as the designer’s most valuable tool. The same concerns about AI are being bandied about—not that there are not red flags—threats galore about the dangers of stolen IP, false truth and invented reality … but false has been true for ages. AI has dangers built into it, but we must be prepared for them and certain it cannot come a-trespassing onto our range, causing a-ruckus, or ravage and ruin. AI needs domesticating, and now’s the time to do it. And pronto!
Back when I wrote The Substance of Style I thought that aesthetic production was likely to be the killer app for new software. There were already signs of the transformative effects of widely available graphic software, from frequently changed restaurant menus to the typeset flyers with clip art that people looking for housecleaning jobs were leaving at my door. That was twenty years ago, when the trend was barely beginning. Now it has spread to video, yet graphic professionals are still in demand.
As software with built-in design capabilities has become widely available, the minimum acceptable level of design has gone up. But if you want more than merely acceptable, professional skills matter.