ai

I know I said I wouldn't share an AI post for a while, but I've come across the best response to Thomas Ptacek's post, My AI Skeptic Friends Are All Nuts: a rebuttal by Nikhil Suresh. His response is thorough and articulates my own qualms with the article. I thought the rest of Ptacek's arguments were well written, but Nikhil Suresh makes a convincing case that they're not, so I had to share.

Read from link

This is my fourth AI link today and probably my last for a while. Tim Bray's post, AI Angst, is the one I most align with. I don't deny there are benefits to using generative AI for coding, but there are long-term questions that I'd like answers to. Tim Bray poses one such question here, as he comments on Thomas Ptacek's post, My AI Skeptic Friends Are All Nuts, which I just shared.

Suppose that, as Ptacek suggests, LLMs/agents allow us to automate the tedious low-intellectual-effort parts of our job. Should we be concerned about how junior developers learn to get past that “easy stuff” and on the way to senior skills?

Tim Bray also comments on the carbon footprint of training and using these tools, and on how seemingly useless generative AI is in professional communication. It's worth the read.

Read from link

Thomas Ptacek makes some good arguments for using generative AI in coding, particularly agents: using agents to orchestrate LLMs, fed with the right context from your codebase, to bang out feature requests and bug fixes.
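
For a sense of what that orchestration actually is, here's a minimal sketch of an agent loop: the model repeatedly picks a tool, the harness runs it and feeds the result back into the conversation. The `call_llm` function, the tool names, and the message shapes are placeholders of my own, not Ptacek's setup or any particular vendor's API.

```python
import pathlib
import subprocess

def read_file(path: str) -> str:
    """Tool: hand the model the contents of a file from the codebase."""
    return pathlib.Path(path).read_text()

def run_tests(_: str = "") -> str:
    """Tool: run the test suite and return its output."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.stdout + proc.stderr

TOOLS = {"read_file": read_file, "run_tests": run_tests}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real chat-completion call (OpenAI, Anthropic, ...).
    Assumed to return {"tool": name, "arg": value} or {"done": answer}."""
    raise NotImplementedError

def agent(task: str, max_steps: int = 10) -> str:
    """Loop until the model says it's done or we hit the step limit."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "done" in reply:
            return reply["done"]
        # Record the model's request, run the tool, feed the result back.
        messages.append({"role": "assistant", "content": str(reply)})
        result = TOOLS[reply["tool"]](reply.get("arg", ""))
        messages.append({"role": "tool", "content": result})
    return "step limit reached"
```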

There are many other arguments addressed in the article, including the perceived increase in efficiency that may reduce headcount in some businesses. In all honesty, I'm not entirely convinced by the author's take; he seems to have simply dismissed the concern.

We’re a field premised on automating other people’s jobs away. “Productivity gains,” say the economists. You get what that means, right? Fewer people doing the same stuff. Talked to a travel agent lately? Or a floor broker? Or a record store clerk? Or a darkroom tech?

The same goes for the argument on plagiarism.

Apart from those two points, I found the other rebuttals well written.

Read from link

In a recent post, I commented on how Simon Willison wouldn't have agreed with the definition of vibe coding shared by the author, so I thought I'd link to his post as well.

I’m concerned that the definition is already escaping its original intent. I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.

Here's a snippet of the original intent by Andrej Karpathy.

There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard.

Vibe coding was never meant as a means to push critical code to production. Simon Willison talks about its uses in low-stakes projects, and about considering the security of the code produced by LLMs.

Read from link

The term vibe coding and the act of vibe coding have been popping up all around me. Stefano Marinelli writes about how vibe coding will rob us of our freedom, and I'm inclined to agree.

Many developers are terrified of losing their jobs for this very reason: AIs sometimes program better than them. And, in my opinion, they are right to be afraid. But I'm more afraid of a world (and not just in IT) where code will depend exclusively on the companies that sell us AIs.

I don't think it's too hard to imagine a world where non-programmers vibe code their product or service and hire developers by the hour to squash bugs. The gig economy for developers? That is, if there are any developers around still capable of writing code. I would assume the required skill would shift from merely writing code to reviewing code, which is arguably harder to master.

Stefano Marinelli defines vibe coding as:

this methodology (if we can call it that) where developers, pressured by deadlines, are no longer trained on code structure, but on the "vibe" – that is, on giving the right prompts to AIs and testing only if the output seems to work.

I can see Simon Willison disagreeing with this definition.

Another thing that came to mind is that those in favour of using LLMs to generate large quantities of code frequently justify the technology by claiming how productive it makes them. However, every productivity boost only means the company needs one fewer full-time developer. Going back to the original post, the author wrote a follow-up to address some of the criticism, aptly named When We Become Cheerleaders for Our Own Demise.

Discovered via Andreas from 82MHz.

Read from link

Even without LLMs, it’s possible StackOverflow would have eventually faded into irrelevance – perhaps driven by moderation policy changes or something else that started in 2014. [...] May 2025: the number of monthly questions is as low as when Stack Overflow launched in 2009.

I can't say I didn't notice the decline in questions, since I recently decided to start actively answering questions on StackOverflow again; only now I have data to back it up. I just hope people don't simply gravitate towards LLMs and private chat platforms like Discord for help. Bring back forums!

Discovered via Stefan Judis' weekly newsletter.

Read from link

I've let the dust settle a bit before looking into Model Context Protocol (MCP), but I've been assimilating bits of information all over the web as it crosses my path. This has been the first article I've actually read on the protocol, which I found through Arne's weekly link roundup, and it confirms a lot of what I've already heard. Still worth the read to get additional context.

Right after reading this post, I followed through to the protocol's documentation to learn more.
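
From what I've read so far, MCP is JSON-RPC 2.0 under the hood: a client asks a server which tools it exposes, then invokes one by name. Here's a rough sketch of the message shapes as I understand them from the docs; the tool name and arguments are made up, and a real session also begins with an initialize handshake.

```python
import json

# Ask the server which tools it exposes.
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoke one of them by name. "get_weather" is a hypothetical tool.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Valletta"},
    },
}

print(json.dumps(call_tool, indent=2))
```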

Read from link

So I've just discovered a floating orb above the new message icon on WhatsApp. What is it? AI, of course!

There's no way to disable it as far as I can tell, and this Wired article agrees. However, you can clear the data it has on you from the AI's chat, if you use it. Also according to Wired, Meta AI was already enabled for users in the US and Canada, and it's now making its way to Europe.

Oh, and it also clogs up your search with silly suggestions for things I would never use WhatsApp for. Who looks up meditation tips on WhatsApp?

Read from link

Some threads on PhysicsForums have ChatGPT-generated answers dated in the past. The authors, David and Felipe, investigate how this was possible and why it was done. It seems the forum maintainer used the accounts of old users to reply to threads with LLM-generated responses. Was it consensual?

It’s hard to imagine that 110 existing users gave consent to be used as test accounts, for 115,000 posts, over four waves spanning almost a year. The idea that these are test accounts gone wrong, or a bot accidentally mislabeled, doesn’t seem to align with the facts.

Discovered via 82MHz.

Read from link

Seth Larson talks about the irony that AI-busting services, like Cloudflare's AI Labyrinth, use generative AI to combat the web crawlers used to create generative AI models.

Seth then goes on to talk about how heavily subsidized the AI industry is and draws an interesting comparison, one that most of us have lived through first-hand.

Today this subsidization is mostly done by venture capital who want to see the technology integrated into as many verticals as possible. The same strategy was used for Uber and WeWork where venture capital allowed those companies to undercut competition to have wider adoption and put competitors out of business.

Read from link

This is an interesting project by David de la Iglesia Castro from Mozilla.ai on mapping in OSM using computer vision. Using a combination of YOLOv11 for object detection and SAM2 for segmentation, they were able to map swimming pools from Mapbox satellite imagery.
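
For a sense of how such a detect-then-segment pipeline fits together, here's a minimal sketch using the Ultralytics API. This is my own illustration, not the project's actual code; the weights, the input tile, and the fine-tuning on pools are all assumed.

```python
from ultralytics import YOLO, SAM

detector = YOLO("yolo11n.pt")   # assumes a model fine-tuned to detect pools
segmenter = SAM("sam2_b.pt")

tile = "satellite_tile.png"     # hypothetical Mapbox tile saved to disk

# Step 1: YOLO proposes bounding boxes around candidate pools.
boxes = detector(tile)[0].boxes.xyxy.tolist()

# Step 2: each box becomes a prompt for SAM2, which returns masks whose
# polygon contours (masks.xy) could be converted into OSM geometries.
if boxes:
    masks = segmenter(tile, bboxes=boxes)[0].masks
    print(len(masks.xy), "candidate pool polygons")
```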

This isn't something completely new, as Meta's RapidEditor for OSM already provides AI assistance. I have experience using Microsoft's GlobalMLBuildingFootprints model through RapidEditor while completing HOT tasks; while impressive, I disable it every time. There are always alignment errors, un-squared corners, false positives, and overlapping polygons. Whenever a change is made by blindly accepting the AI recommendations, it's obvious even without looking at the tags. Sometimes it is more time-consuming to fix the changes than to simply start over.

The alignment and un-squared-corner issues still seem to be present in David's project, as seen in the screenshots. I haven't set up the project locally, and the live demo was taken down after some discussion in the Hacker News comments; it was pointed out that the tool made it too easy for contributors to submit AI-generated changes.

After all, OSM prides itself on quality submissions.

I don't want to take away from how impressive the work is. As someone who has dabbled with YOLO and some mapping, I find the source code approachable to tinker with.

Read from link

Paweł Grzybek celebrates 10 years of blogging, congratulations! In his post, he addresses why writing is still important in the current AI-hype cycle. This relates to a quote I shared from Michał's blog about writing only providing fuel for LLMs without much benefit to the writer. Paweł writes about the benefit provided to the writer, citing research funded by Microsoft.

At first glance, the situation does not look like the slow process of blogging is an idea worth pursuing. Precisely the opposite is true! Critical thinking required for writing (and other acts of creation) is the only thing that can save us from becoming idiots. Microsoft, the same one that made a pretty close partnership with OpenAI, funded the interesting research about “The Impact of Generative AI on Critical Thinking.”

The study mentioned surveyed 319 knowledge workers. On the topic of writing, the related works note that relying on LLMs to write can hinder self-development; however, using the tool to request feedback can help the writer improve. The study's overall conclusion says:

Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.

Read from link

Michael Miszczak writes about the effects Google Search has had on smaller bloggers and publishers over the years: first through snippets of information directly on the search page, and now through AI summaries and the down-ranking of smaller sites.

Google is often called a tech company, but that’s a misnomer. It might have been true a decade ago, but that label no longer applies to the Alphabet of today. What Google has actually become is the largest advertising company in the world. They feed you ads and make money that none of us can dream of making.

Discovered via Ana Rodrigues.

Read from link

Microsoft seems to be pushing Copilot to its Microsoft 365 users for an additional $3 per month unless they switch to the "classic" plan before the next billing cycle. Leonard French, in his YouTube video, comments on the implications this could have for confidentiality in healthcare, legal, and other similar industries. This was prompted by a thread Kathryn Tewson started on Bluesky after speaking to Microsoft support. She writes:

  1. It is impossible to disable Copilot in OneNote, Excel, PowerPoint, or Windows itself.
  2. It will not become possible to do so for another month AT THE EARLIEST.
  3. While they couldn't be sure, they think it's likely that Copilot ingests organizational data via the systems and applications it's embedded into even when not invoked.
  4. They were unable to determine if such ingested data would "bleed over" into files other than those it was sourced from
  5. They were very clear that organizational data would not be used to "train foundational models," but couldn't rule out the possibility that it could leave our organization in some way and pass beyond our custody and control.

Read from link