I Have Good News and Bad News

Here's the (sort of) good news from David French: Colleges Have Gone off the Deep End. There Is a Way Out. But let it not be said he's unaware of the inner rot:

University complicity in chaos isn’t unusual. In a case I worked on when I was president of the Foundation for Individual Rights and Expression, we discovered that administrators at Washington State University’s Pullman campus had actually helped plan a disruptive protest against a play put on by a student director, an intentionally provocative show that mocked virtually every group on campus.

University or faculty participation in unlawful protest isn’t confined to the cases I worked on. At Oberlin College, administrative facilitation of ugly and defamatory student protests outside a local business ultimately cost the school $36 million in damages. At Columbia, hundreds of sympathetic faculty members staged their own protest in support of the student encampment on the quad, and there are reports that other faculty members have attempted to block members of the media from access to the student encampment.

Boy, do I have an easy go-to for a faculty member at the University Near Here supporting unlawful protest.

So what's the good news? French suggests the "way out": institutional neutrality on "matters of public dispute"; don't "permit one side to break reasonable rules that protect education and safety on campus." Pretty easy. And I'm pretty happy that UNH seems to be getting this mostly right.

As promised, the bad news:

  • Fortunately, there's just one set of rules for speech, right? Ah, you wish. Abigail Shrier notes: There Are Two Sets of Rules for Speech. For example:

    In 2017, an anonymous jerk put flyers up around American University’s campus. The flyers displayed a Confederate flag, a stem of raw cotton, and read “Huzzah for Dixie” and the like.

    American University immediately launched into emergency response mode, treating the flyers as a criminal threat. It published CCTV video and solicited help from the public in identifying the man who posted the flyers. An all points bulletin called “CRIME ALERT” went out for the man’s arrest. The New York Times covered the incident; the words “free speech” do not appear once in the article. Instead, it approvingly noted that in a previous incident—when bananas were found hanging from nooses around campus—the FBI had been called to investigate.

    Nor could I find any evidence of any free speech organization rushing to defend the man who posted the flyers—nor the racist provocateurs in any of dozens of similar incidents. No prominent “free speech absolutists” appear to have considered the expressive value of “Huzzah for Dixie” worth defending. Nor did pundits claim that inviting law enforcement to investigate such acts of hate—i.e., “calling the police on your own students”—was in any sense inappropriate or disproportionate. In almost every single case—at schools like Stanford, Johns Hopkins, Michigan State, University of Florida, Duke, and American University—where a symbolic noose was discovered on a campus, it was treated as a criminal threat, never as speech.

    After the Huzzah for Dixie flyers were found, the president of American University quickly issued a statement: “I ask you to join me in standing together and show that we will not be intimidated. AU will respond strongly to attempts designed to harm and create fear,” she wrote. “When one of us is attacked, all of us are attacked.”

    Today, in the face of months of bloodthirsty cries aimed at Jewish students (“globalize the Intifada”), university presidents line up to assure the protesters of their right to free speech.

    An example from the Bad Old Days (2004) at the University Near Here: University of New Hampshire Evicts Student for Posting Flier

  • OK, but … hey, raising the tax cap would save Social Security, right? Wrong, says Brian Riedl: Raising the Tax Cap Cannot Save Social Security.

    The reason Social Security taxes are capped is that Social Security benefits are, too. Because the program is a social-insurance system, retirees can claim that they “earn” their benefits because the benefits are tied to their tax contributions. The Social Security tax reaches its ceiling at $168,600 in wages (adjusted annually for inflation) because any wages earned above that level no longer earn additional benefits. Raising the limit without adjusting benefits accordingly would delink the two, turning Social Security into more of a traditional welfare system.

    That said, even if we did do away with the tax ceiling—with no corresponding benefits provided—doing so would not come close to bringing long-term solvency to the program. The Congressional Budget Office projects that Social Security’s annual shortfall will level off at about 1.7 percent of GDP within 15 years. Yet, abolishing the cap would raise 0.9 percent of GDP, closing a little more than half of those shortfalls. In fact, Social Security actuaries calculate that the system would fall back into deficits within just five years.

    Riedl has a more detailed analysis at the Manhattan Institute: Don’t Bust the Cap: Problems with Eliminating the Social Security Tax Cap.
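    Riedl's arithmetic checks out, by the way. A quick back-of-the-envelope sketch, using only the CBO percentages quoted above:

    ```python
    # Back-of-the-envelope check of the figures quoted above (percentages of GDP).
    shortfall = 1.7    # CBO-projected long-run annual Social Security shortfall
    cap_repeal = 0.9   # revenue gained by abolishing the taxable-wage cap

    share_closed = cap_repeal / shortfall
    print(f"Cap repeal closes about {share_closed:.0%} of the gap")
    # prints: Cap repeal closes about 53% of the gap
    ```

    "A little more than half," just as Riedl says; the rest of the gap remains.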

  • Hey, we can solve the 'wicked problems' of central planning with AI, right? Sorry, pilgrim, Arnold Kling says nay: AI Can't Solve the 'Wicked Problems' of Central Planning.

    The term wicked problem has become a standard way for policy analysts to describe a social issue whose solution is inherently elusive. Wicked problems have many causal factors, complex interdependencies, and no ability to test all of the possible combinations of plausible interventions. Often, the problem itself cannot be articulated in a straightforward, agreed-upon way. Classic examples of wicked problems include climate change, substance abuse, international relations, health care systems, education systems, and economic performance. No matter how far computer science advances, some social problems will remain wicked.

    The latest developments in artificial intelligence represent an enormous advance in computer science. Could that technological advance give bureaucrats the tool they have been missing to allow them to plan a more efficient economy? Many advocates of central planning seem to think so. Their line of thinking appears to be:

    1. Chatbots have absorbed an enormous amount of data.
    2. Large amounts of data produce knowledge.
    3. Knowledge will enable computers to plan the economy.

    These assumptions are wrong. Chatbots have been trained to speak using large volumes of text, but they have not absorbed the knowledge contained in the text. Even if they had, there is knowledge that is critical for economic operations that is not available to a central planner or a computer.

    Anyone who's read The Moon Is a Harsh Mistress knows that even Mike, the self-aware AI on the moon, couldn't provide a free lunch.

  • But… we need to regulate AI to prevent it from running amok, right? Nay, friend: Ronald Bailey makes the plausible case that AI Regulators Are More Likely To Run Amok Than Is AI.

    Deploying the precautionary principle is a laser-focused way to kill off any new technology. As it happens, a new bill in the Hawaii Legislature explicitly applies the precautionary principle in regulating artificial intelligence (AI) technologies:

    In addressing the potential risks associated with artificial intelligence technologies, it is crucial that the State adhere to the precautionary principle, which requires the government to take preventive action in the face of uncertainty; shifts the burden of proof to those who want to undertake an innovation to show that it does not cause harm; and holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative. In the context of artificial intelligence and products, it is essential to strike a balance between fostering innovation and safeguarding the well-being of the State's residents by adopting and enforcing proactive and precautionary regulation to prevent potentially severe societal-scale risks and harms, require affirmative proof of safety by artificial intelligence developers, and prioritize public welfare over private gain.

    The Hawaii bill would establish an office of artificial intelligence and regulation wielding the precautionary principle that would decide when and if any new tools employing AI could be offered to consumers.

    As always, Virginia Postrel's The Future and Its Enemies saw all this coming.

Stay tuned. None of this stuff is going away, and neither am I.