Guardrails don’t fit with scholarship
This is a fascinating summary of a big problem with some AI-enhanced library products: The AI powered Library Search That Refused to Search (Aaron Tay). The link to the ACRLog post is also worth a read: “We Couldn’t Generate an Answer for your Question”. In a nutshell, it appears that Microsoft Azure's OpenAI content filter,…
Keeping AI Closer to the Vest with Sovereignty and Privacy in Mind
Three news releases in quick succession made my antennae stand up, though they've actually been trickling out over the past month. In the order I saw them: Introducing Lumo, the AI where every conversation is confidential. A new privacy-focussed LLM chatbot from Proton. I've only poked at it a bit, but it seems solid, aside…
The monster behind the LLM
I just spent some time exploring the site at Systemic Misalignment: Exposing Key Failures of Surface-Level AI Alignment Methods, and it's a thought-provoking place. In the context of AI, "alignment is the process of encoding human values and goals into large language models to make them as helpful, safe, and reliable as possible.[1]" Researchers at…
I’m no longer concerned about energy use and ChatGPT
Last week, I tripped across a couple of posts from MIT that I thought included the clearest explanations I’d seen of the cost and *reasons* for the cost of training and using Generative AI: “Explained: Generative AI’s environmental impact” and “The multifaceted challenge of powering AI”. I added those to my list of bookmarks on the topic,…
Librarians and teachers amongst the heaviest users of AI – The 2025 AI Index Report
OK, that's a clickbait title, but only a little. They're actually amongst the heaviest users of Claude, according to Anthropic (PDF), via the 2025 AI Index Report from Stanford's Institute for Human-Centered AI. The report itself is a 456-page PDF, so do start with the key takeaways, but then either search for specific words of…
