ai

I've let the dust settle a bit before looking into Model Context Protocol (MCP), but I've been assimilating bits of information from around the web as they cross my path. This is the first article I've actually read on the protocol, which I found through Arne's weekly link roundup, and it confirms a lot of what I've already heard. Still worth the read to get additional context.

Right after reading this post I followed through to the protocol's documentation to learn more.

Read from link

So I've just discovered a floating orb above the new message icon on WhatsApp. What is it? AI, of course!

There's no way to disable it as far as I can tell, and this Wired article agrees. However, you can clear the data it has on you from the AI's chat, if you use it. Also according to Wired, Meta AI was already enabled for users in the US and Canada, and it is now making its way to Europe.

Oh, and it also clogs up the search bar with silly suggestions for things I would never use WhatsApp for. Who looks up meditation tips on WhatsApp?

Read from link

Some threads on PhysicsForums contain ChatGPT-generated answers dated in the past, before the tool even existed. The authors, David and Felipe, investigate how this was possible and why it was done. It seems the forum maintainer used the accounts of old users to reply to threads with LLM-generated responses. Was it consensual?

It’s hard to imagine that 110 existing users gave consent to be used as test accounts, for 115,000 posts, over four waves spanning almost a year. The idea that these are test accounts gone wrong, or a bot accidentally mislabeled, doesn’t seem to align with the facts.

Discovered via 82MHz.

Read from link

Seth Larson talks about the irony that AI-busting services, like Cloudflare's AI Labyrinth, use generative AI to combat the web crawlers used to create generative AI models.

Seth then goes on to talk about how heavily subsidized the AI industry is and draws an interesting comparison, one that most of us have lived through first-hand.

Today this subsidization is mostly done by venture capital who want to see the technology integrated into as many verticals as possible. The same strategy was used for Uber and WeWork where venture capital allowed those companies to undercut competition to have wider adoption and put competitors out of business.

Read from link

This is an interesting project by David de la Iglesia Castro from Mozilla.ai on mapping in OSM using computer vision. Using a combination of YOLOv11 for object detection and SAM2 for segmentation, he was able to map swimming pools from Mapbox satellite imagery.

This isn't something completely new, as Meta's RapidEditor for OSM already exists to provide AI assistance. I have experience using Microsoft's GlobalMLFootprints model through RapidEditor while completing HOT tasks; while impressive, I disable it every time. There are always alignment errors, un-squared corners, false positives and overlapping polygons. Whenever a change is made by blindly accepting the AI recommendations it's obvious, even without looking at the tags. Sometimes it is more time-consuming to fix the changes than to simply start over.
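The overlapping-polygon problem in particular is easy to check for mechanically. As a toy illustration (not code from David's project or RapidEditor, and real validators work on actual polygon geometry rather than boxes), here is a minimal sketch that flags pairs of detections whose axis-aligned bounding boxes overlap beyond an IoU threshold:

```python
# Hypothetical QA sketch: flag overlapping AI-generated footprints by
# axis-aligned bounding-box IoU. Boxes stand in for footprint polygons
# to keep the illustration dependency-free.

def box_iou(a, b):
    """IoU of two boxes given as (min_x, min_y, max_x, max_y)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def flag_overlaps(boxes, threshold=0.3):
    """Return index pairs of boxes overlapping beyond the threshold."""
    return [
        (i, j)
        for i in range(len(boxes))
        for j in range(i + 1, len(boxes))
        if box_iou(boxes[i], boxes[j]) > threshold
    ]

# Two heavily overlapping detections and one clean one.
detections = [(0, 0, 10, 10), (1, 1, 11, 11), (30, 30, 40, 40)]
print(flag_overlaps(detections))  # → [(0, 1)]
```

A review tool could surface flagged pairs to the mapper instead of silently accepting both, which is roughly the kind of friction the blind-accept workflow skips.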

The alignment and un-squared corner issues still seem to be present in David's project, as seen in the screenshots. I haven't set up the project locally, and the live demo was taken down after some discussion in the Hacker News comments, where it was pointed out that the tool made it too easy for contributors to submit AI-generated changes.

After all, OSM prides itself on quality submissions.

I don't want to take away from how impressive this work is; as someone who has dabbled with YOLO and some mapping, the source code seems approachable to tinker with.

Read from link

Paweł Grzybek celebrates 10 years of blogging, congratulations! In his post, he addresses why writing is still important in the current AI-hype cycle. This relates to a quote I shared from Michał's blog about writing only providing fuel to LLMs without much benefit to the writer. Paweł writes about the benefit provided to the writer, using research supported by Microsoft.

At first glance, the situation does not look like the slow process of blogging is an idea worth pursuing. Precisely the opposite is true! Critical thinking required for writing (and other acts of creation) is the only thing that can save us from becoming idiots. Microsoft, the same one that made a pretty close partnership with OpenAI, funded the interesting research about “The Impact of Generative AI on Critical Thinking.”

The study surveyed 319 knowledge workers. On the topic of writing, its related work suggests that relying on LLMs to write can hinder self-development; however, using the tool to request feedback can help the writer improve. The study's overall conclusion says:

Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.

Read from link

Michael Miszczak writes about the effects Google Search has had on smaller bloggers and publishers over the years: first through snippets of information directly on the search page, and now through AI summaries and the down-ranking of smaller sites.

Google is often called a tech company, but that’s a misnomer. It might have been true a decade ago, but that label no longer applies to the Alphabet of today. What Google has actually become is the largest advertising company in the world. They feed you ads and make money that none of us can dream of making.

Discovered via Ana Rodrigues.

Read from link

Microsoft seems to be pushing Copilot to its Microsoft 365 users for an additional $3 per month unless they switch to the "classic" plan before the next billing cycle. Leonard French, in his YouTube video, comments on the implications this could have for confidentiality in healthcare, legal, and other similar industries. This was prompted by a thread Kathryn Tewson started on Bluesky after speaking to Microsoft support. She writes:

  1. It is impossible to disable Copilot in OneNote, Excel, PowerPoint, or Windows itself.
  2. It will not become possible to do so for another month AT THE EARLIEST.
  3. While they couldn't be sure, they think it's likely that Copilot ingests organizational data via the systems and applications it's embedded into even when not invoked.
  4. They were unable to determine if such ingested data would "bleed over" into files other than those it was sourced from
  5. They were very clear that organizational data would not be used to "train foundational models," but couldn't rule out the possibility that it could leave our organization in some way and pass beyond our custody and control.

Read from link