I’ve begun to be skeptical of progress itself; the techno-utopianism of the ‘90s feels quaint and naive. The promises of the rich lords of technological advancement look more and more like Tesla’s bizarro Cybertruck: weird and unnecessary polyhedrons that you have to rent forever, created by people isolated from human need and from the desires of ordinary people. The derisive naming of all techno-skeptics as “Luddites”—in addition to erasing the scarcity and pain that led to that uprising—is an effective dismissal of legitimate criticism, a brush to tar those who point out all the broken people left behind by the “move fast and break things” ethos that has led us to this precipice.
I mean, obviously I’m not neutral on this; I write words for a living, generally words that are excruciatingly earnest or at least interestingly florid, and I would like to be paid for them and not have them exploited as abstracted, minute pieces of a “corpus” used to feed a machine that will eventually make money for grotesquely rich people. Living as a writer is increasingly precarious—with staff jobs for a vanishingly privileged few, the rest of us clawing at the margins—and the idea that these mega-conglomerates are eager to wrench even the few bucks from our hot little hands disgusts me.
In part as a consequence of the Israel-Hamas war, more journalists are posting news and analysis on Meta’s Threads platform. From QZ’s Ananya Bhattacharya:
Since its inception, Threads has decided to steer clear of handling hard news—and the Israel-Hamas war hasn’t altered its stance. Yet the platform is quickly becoming a home for reporting on the conflict.
When Meta launched its Twitter-killer app in July, Instagram boss Adam Mosseri said the Threads app is “not going to do anything to encourage” politics and “hard news.” He clarified that the platform won’t “discourage or down-rank” such posts but that the company won’t “court” them either. …
In a blog post today (Oct. 13), Meta said it set up a “special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation in real time.”
In the three days following Hamas’ attack on Israel on Oct. 7, Meta removed or marked as disturbing more than 795,000 pieces of regional-language content. On Instagram, a number of hashtags have been restricted, and the use of live-streaming is restricted for people who have previously violated certain policies. The company also is labeling messages forwarded by people who were not the original sender, so that recipients can tell the information came from a third party.
Given the density and frequency of Hamas-related content, Meta is currently taking down content “without strikes, meaning these content removals won’t cause accounts to be disabled.” It’s also sharing tools to let third-party fact checkers more easily find and flag content, and to let users filter out offensive messages and appeal erroneous content decisions.
The whole piece is very good.
I have replaced Twitter with Threads on my iPhone, though I have not started posting there in earnest yet. I will soon. It’s a better platform at this point in terms of tone, and there are far, far fewer trolls around.
Rick Beato’s recent interview with Björn Ulvaeus of ABBA focused as much on AI technology as on the creation of ABBA’s songs and records. The co-writer of “Waterloo,” “SOS,” and “Dancing Queen” was mostly sanguine – indeed, enthusiastic – about artificial intelligence’s likely effect on human creativity generally and on musical composition specifically. (Ulvaeus – who is president of the International Confederation of Societies of Authors and Composers – has some intelligent prescriptions for handling copyright and royalty protections in this new era, too.) It’s an edifying interview.
China put these measures in place “[i]n order to promote the healthy development and standardized application of generative artificial intelligence, safeguard national security and social public interests, and protect the legitimate rights and interests of citizens, legal persons, and other organizations.”
The regulations apply to the use of generative AI technology to provide services for generating text, pictures, audio, video and other content. … AI services must “[r]espect intellectual property rights” and “the legitimate rights and interests of others … and must not infringe on the portrait rights, reputation rights, honor rights, privacy rights, and personal information rights of others.” …
Under the newly adopted regulations, all generative AI providers must register their services and submit these services to a security review by the Cyberspace Administration of China, the state cyberspace and information department, prior to their public release. …
The regulations also require that all content created by generative AI be properly marked or labeled as such to prevent any generative AI material from being mistaken as human-authored content. …
I have added Ethan Mollick’s Substack blog, “One Useful Thing,” to our Resources list (above). A professor at the Wharton School of the University of Pennsylvania, Mollick writes that he’s “trying to understand what our new AI-haunted era means for work and education.” I have found his posts terrifically useful, in particular his recent discussion “How to Use AI to Do Stuff,” which I recommended to my third-year students recently, drawing attention to these caveats:
Some things to worry about: In a bid to respond to your answers, it is very easy for the AI to “hallucinate” and generate plausible facts. It can generate entirely false content that is utterly convincing. Let me emphasize that: AI lies continuously and well. Every fact or piece of information it tells you may be incorrect. You will need to check it all. Particularly dangerous is asking it for references, quotes, citations, and information from the internet (for the models that are not connected to the internet). …
It also can be used unethically to manipulate or cheat. You are responsible for the output of these tools. [emphasis mine]
Two key points that remain true about AI:
AI is a tool. It is not always the right tool. Consider carefully whether, given its weaknesses, it is right for the purpose to which you are planning to apply it.
There are many ethical concerns you need to be aware of. AI can be used to infringe on copyright, or to cheat, or to steal the work of others, or to manipulate. And how a particular AI model is built and who benefits from its use are often complex issues, and not particularly clear at this stage. Ultimately, you are responsible for using these tools in an ethical manner.