61% of pastors now use AI weekly to prepare sermons. A third of Christians trust AI spiritual advice as much as their pastor's. One AI-generated sermon included a fabricated Maimonides quote. Only 5% of churches have any policy on this. Also: scientists invented a fake disease and AI made it real, a folk singer had her voice stolen, and small towns across America are blocking data centres.
A survey of 594 pastors found that 61% now use AI weekly or daily, up from 43% just a year ago. A quarter use it every single day. ChatGPT is the most popular tool at 26%, followed by Grammarly and Microsoft Copilot. The primary use case is sermon preparation: researching scripture, generating outlines, polishing language. Most churches have no policy governing any of this. Only 5% have written AI guidelines. The congregation, in almost every case, has not been told.
The problems are already showing up. At least one AI-generated sermon included a fabricated quote attributed to Maimonides. Nobody in the pews caught it. Deepfake videos impersonating prominent pastors have been used to solicit donations from congregations. Father Mike Schmitz, a Catholic priest with over a million YouTube subscribers, had to publicly warn his followers about fake videos using his likeness to ask for money. And here's the number that should keep church leaders up at night: a Barna Group study found that 34% of practicing Christians now trust spiritual advice from AI as much as they trust advice from their pastor. Among Gen Z and millennials, it's 40%.
65% of pastors worry that AI could displace their spiritual guidance. 70% worry it could diminish their congregants' trust. But only 12% feel comfortable teaching their congregation about navigating the technology. There is something genuinely poignant about this. The people whose entire vocation is guiding others through difficult moral questions are themselves unsure how to handle the biggest moral question of the decade. And they're solving the problem the same way everyone else is: quietly using the tool and hoping nobody asks.
Murphy Campbell is an independent folk musician from North Carolina who sings traditional Appalachian songs. She discovered AI-generated covers of her music appearing on her own Spotify profile, uploaded by someone who scraped her YouTube performances and ran them through voice-cloning software. Then it got worse. A person using the name "Murphy Rider" uploaded the cloned tracks through Vydia, a Gamma-owned distribution platform, and filed copyright claims against Campbell's real videos via Content ID. The songs being claimed? "In the Pines" and "Darling Corey." Public domain pieces from the 1870s.
The system allowed all of this to happen automatically. Nobody at Spotify, YouTube, or Vydia flagged that a brand-new account was claiming ownership of 150-year-old folk songs using a cloned voice. Campbell said she assumed there were checks in place. There were not.
Thomas de Grivel submitted a full ext4 filesystem implementation for OpenBSD. It was generated entirely by ChatGPT and Claude Code with human review. It passes e2fsck, supports read and write, but has no journaling. The OpenBSD community's concern wasn't whether it works. It was whether the LLM essentially laundered GPL-licensed Linux kernel code into a BSD-licensed project.
Nobody can answer this question. If an LLM was trained on GPL code and then generates functionally equivalent code under a different licence, is that a derivative work? Copyright law doesn't have an answer yet. The OpenBSD project, which takes licensing more seriously than almost any community in open source, is now staring at a piece of working software that might be legally radioactive. The code is fine. The provenance is unknowable.
This isn't one story. It's a pattern that surfaced across five states this week, and none of it is making the tech press.
In Imperial County, California, residents chanting "Get out!" in Spanish followed a data centre developer through a parking lot. They've filed for a November ballot initiative to ban data centres entirely. In Port Washington, Wisconsin, 66% of voters approved a ballot measure requiring public approval for any large-scale tech project, directly rejecting a Trump-backed AI data centre. In Boulder City, Nevada, residents packed a public meeting to oppose an 88-acre facility in open desert, worried about electricity bills and water usage. In Colorado Springs, so many people showed up to confront a developer that the line stretched from the lobby to the parking lot. In Archbald, Pennsylvania, population 7,500, residents face eviction on April 15 as multiple data centre campuses move in.
Nearly half of all US data centres planned for 2026 have now been cancelled or delayed. The AI revolution needs physical buildings, and the physical buildings are landing in places that don't want them.
Alexis Martinez-Arizala, 22, created a deepfake video showing four men in suits breaking into a Seminole County deputy's patrol vehicle. Then he showed the video to the actual deputy. The deputy exited a store and approached his vehicle with his hand on his weapon. A former detective warned that any bystander standing near the patrol car could have been put on the ground at gunpoint. Martinez-Arizala was eventually arrested in Puerto Rico. A security researcher demonstrated he could create equally convincing face-swaps in seconds using publicly available tools.
Stephen Brigandi, a San Diego attorney, submitted court documents containing 23 fabricated legal citations and 8 false quotations, all generated by AI. On April 4, he was hit with $110,000 in sanctions, believed to be the largest penalty yet for AI hallucinations in a legal filing. The specificity of the numbers is what gets you. Twenty-three fake cases. Eight fake quotes. He didn't check a single one.
Separately, a court ruled in February that conversations with public AI chatbots are not protected by attorney-client privilege. HOA boards and property managers had been pasting their lawyers' legal advice into ChatGPT to "simplify" it for residents, accidentally exposing confidential legal strategy to discovery in lawsuits. Lawyers across the country are now scrambling to tell clients to stop.
Kyle Kingsbury, known in tech circles for Jepsen (the tool that finds bugs in distributed databases), published a long essay this week about AI reliability. The opening anecdote: a conference speaker cited a quote attributed to Kingsbury himself. The quote was fabricated by an LLM. It was presented as fact in a talk, in front of an audience, sourced to a person who never said it and was sitting in the room.
He goes on to describe ChatGPT spending 45 minutes failing to add white shoulder patches to an illustration of a blue shirt, and traders losing hundreds of thousands of dollars to LLM agents that couldn't do basic arithmetic. His core observation: AI has a "jagged competence frontier." It can handle multivariable calculus but fails at counting. And you cannot predict which you'll get on any given request. The essay is the most readable thing published this week on what AI actually can't do.
A group of researchers made up a condition called Bixonimania. They described it as a subtype of periorbital melanosis associated with blue light exposure. They wrote it up in the format of professional medical literature. Then they published the fake papers and waited. ChatGPT initially flagged it as made-up. A few days later, it started confidently describing Bixonimania as a real condition, offering symptoms, risk factors, and treatment options. Then actual medical journals, including titles under Springer Nature and Cureus, cited the bogus papers. Real researchers referenced a condition that does not exist, because the literature looked legitimate and nobody checked. The model treats form as a proxy for truth. So does, apparently, peer review.
Fortune reported this week that 80% of white-collar workers are outright refusing workplace AI adoption mandates. It's being called a "quiet rebellion" driven by fear of obsolescence. The word being used is FOBO: Fear Of Becoming Obsolete. Companies are mandating AI adoption. Employees are nodding in meetings and then not using the tools. It's the corporate version of agreeing to read the book for book club and then just reading the Wikipedia summary, except the book is your replacement.
A solo developer built a tool that spawns programs in a pseudo-terminal, reads the screen as text, and sends keystrokes, letting AI agents drive GDB sessions, Python REPLs, and anything else that expects a human at the keyboard. The "smart wait" feature, which debounces screen updates to detect when the display has settled and the program is ready for input, is a clever solution to a problem that's been blocking agent workflows since agents became a thing. AI can call APIs and run shell commands fine. The moment something waits for interactive input, it's stuck. This fixes that.
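The core pattern is simple enough to sketch with the standard library's pty and select modules. This is not the tool's actual code; the function names (spawn, wait_for_stable_screen, send_line) and the 300ms quiet window are my own illustrative choices:

```python
import os
import pty
import select
import subprocess

QUIET = 0.3  # debounce window: the screen counts as stable after this much silence


def spawn(cmd):
    """Start cmd on a pseudo-terminal so it behaves as if a human were attached."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(cmd, stdin=slave, stdout=slave, stderr=slave,
                            close_fds=True)
    os.close(slave)  # the parent only needs the master end
    return proc, master


def wait_for_stable_screen(fd):
    """Accumulate output until nothing new arrives for QUIET seconds."""
    screen = b""
    while True:
        ready, _, _ = select.select([fd], [], [], QUIET)
        if not ready:
            break  # quiet long enough: assume the program is waiting for input
        try:
            chunk = os.read(fd, 4096)
        except OSError:  # on Linux, the master raises EIO once the child exits
            break
        if not chunk:
            break
        screen += chunk
    return screen.decode(errors="replace")


def send_line(fd, text):
    """Type a line and press Enter; the pty echoes it back into the output."""
    os.write(fd, text.encode() + b"\n")


proc, fd = spawn(["python3", "-q"])        # drive a Python REPL interactively
print(wait_for_stable_screen(fd))          # wait for the >>> prompt
send_line(fd, "6 * 7")
print(wait_for_stable_screen(fd))          # echoed input, the result, a new prompt
send_line(fd, "exit()")
proc.wait()
```

The debounce is the whole trick: there is no API that tells you a REPL is ready, so "no new bytes for a while" stands in for "the program is listening."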
An AI tool that inverts the usual flow. Instead of "AI generates the design," you draw the design by hand and an AI agent writes the CSS and HTML. 155 points and 95 comments on Show HN, which is strong traction for a launch. The appeal is obvious: it respects the human's creative intent while eliminating the tedious translation step. Nobody became a designer because they love writing flex-direction: column.
An open-source project on GitHub reverse-engineers SynthID, Google's AI watermarking system used in Gemini outputs. It hit 162 points on Hacker News with 52 comments. The implications are straightforward: if the watermark can be reverse-engineered, it can be removed. And if it can be removed, the entire premise of detecting AI-generated content through watermarking starts to fall apart. Google has been betting on SynthID as part of its responsible AI strategy. This project suggests the bet may not hold.
github.com/aloshdenny/reverse-SynthID
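To see why a recoverable watermark is fatal, it helps to look at a toy scheme. SynthID itself works differently (Google describes a tournament-sampling design, and the repo's specifics are its own), but the premise generalises. Here is a minimal green-list watermark in the style of Kirchenbauer et al., with everything (VOCAB, KEY, the 0.9 bias) purely illustrative:

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]
KEY = b"secret"  # the watermarker's key; "reverse-engineering" = recovering this


def is_green(prev, tok, key=KEY):
    """Keyed hash of (prev, tok) assigns roughly half of all pairs to a green list."""
    h = hashlib.sha256(key + prev.encode() + tok.encode()).digest()
    return h[0] % 2 == 0


def generate(n, bias=0.9):
    """Stand-in 'model' that prefers green tokens; that preference is the watermark."""
    out = ["<s>"]
    for _ in range(n):
        greens = [t for t in VOCAB if is_green(out[-1], t)]
        reds = [t for t in VOCAB if not is_green(out[-1], t)]
        pool = greens if random.random() < bias else reds
        out.append(random.choice(pool))
    return out


def z_score(tokens):
    """Detector: deviation of the green fraction from the 50% chance level."""
    n = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)


def scrub(tokens):
    """Attacker who knows the key edits only the tokens that carry the signal."""
    out = [tokens[0]]
    for t in tokens[1:]:
        if is_green(out[-1], t):
            t = random.choice(VOCAB)  # replacement lands green only ~half the time
        out.append(t)
    return out


text = generate(300)
print(f"watermarked z = {z_score(text):6.1f}")         # ~14: unmistakably flagged
print(f"scrubbed    z = {z_score(scrub(text)):6.1f}")  # near or below zero
```

The detector's statistic is only meaningful while the keyed partition stays secret. Once an attacker can compute is_green, a handful of targeted edits pushes the score back to chance, and the downstream "is this AI-generated?" verdict evaporates.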
A high school student in Germantown, Maryland built Evion, an AI tool that uses cheap consumer drone footage to generate crop health maps for farmers. No expensive agricultural sensors needed. A venture capital firm offered him $300,000 to drop out and run it as a startup. He said no and kept it free. The anti-Silicon Valley AI story. Small, local, useful, and built by someone who doesn't want to be a founder.
Three things stood out today.
The infrastructure is the story now. Five different small towns in five different states pushed back against data centre construction this week. Nearly half of planned US facilities are cancelled or delayed. The AI industry has an assumption baked into every forecast: that compute will keep scaling. That assumption requires physical buildings in physical places, and the people who live in those places are saying no. This isn't a PR problem. It's a planning problem.
Form is eating truth. Scientists invented a fake disease and it became real because it looked like real science. A lawyer filed 23 fake citations because they looked like real law. A folk singer lost her own songs because cloned uploads looked like real releases. The pattern is the same everywhere: systems that rely on format as a signal of legitimacy are being exploited, and AI makes the exploiting trivially easy. The question isn't whether AI hallucinates. It's whether the institutions downstream of AI can tell the difference.
The backlash isn't where you'd expect it. Gen Z is the angriest generation about AI. Farmers are using it for taxes, not tractors. The largest programming community on Reddit banned AI discussion for a month. 80% of white-collar workers are refusing adoption mandates. The resistance isn't coming from Luddites or technophobes. It's coming from the people closest to the technology, which should worry AI companies far more than any regulation.