It seems the VS Code Marketplace has somewhat recently stopped offering download links for .vsix files, meaning you can only grab extensions through the Extensions view inside VS Code rather than through the browser. Fortunately, StackOverflow user twj has described a way to construct the download URL by hand using the extension's unique ID. Neat.
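For my own future reference, here's a minimal sketch of the construction. The extension ID and version are placeholders, and the URL shape is the commonly cited Marketplace pattern rather than something I've re-verified against the answer:

```python
# A sketch of constructing a Marketplace .vsix download URL from an
# extension's unique ID (publisher.name) and a version. The ID and version
# here are placeholders; the URL shape is the commonly cited pattern and
# may differ from the StackOverflow answer's exact form.
unique_id = "ms-python.python"   # hypothetical extension ID
version = "2024.4.1"             # hypothetical version

publisher, name = unique_id.split(".", 1)
url = (
    "https://marketplace.visualstudio.com/_apis/public/gallery/"
    f"publishers/{publisher}/vsextensions/{name}/{version}/vspackage"
)
print(url)  # hand this to curl/wget to grab the .vsix
```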
YouTube provided me with a solid recommendation: Cursed Controls is a YouTube channel run by an electrician. In their video series, Home PLC, they go about setting up a control cabinet to do home automation with... a PLC. I'm out of my depth to comment on the technicalities of the setup, but it's definitely interesting to watch, and I've picked up a fair bit of information throughout the series. So far there are four episodes.
Loops discovers YouTube testing a new anti-ad-blocking measure which adds a buffer to the start of a YouTube video that's 80% the length of the ad. Also included in the post is an interesting high-level overview of how YouTube videos are served.
Discovered via Arne's weekly newsletter.
The title says it all.
Google has created a way to simply list search results by including the &udm=14 parameter in the URL. I've long since switched away from Google, but it's nice to see.
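For example (the query here is just a placeholder), the parameter gets tacked onto an ordinary search URL:

```python
# Build a Google search URL with the "web results only" parameter appended.
from urllib.parse import urlencode

query = "silica gel"  # placeholder query
url = "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})
print(url)  # https://www.google.com/search?q=silica+gel&udm=14
```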
It’s essentially Google, minus the crap. No parsing of the information in the results. No surfacing metadata like address or link info. No knowledge panels, but also, no ads. [...] It’s worth understanding the tradeoffs, though. My headline aside, a simplified view does not replace the declining quality of Google’s results
From Tedium via Stefan Judis' weekly newsletter.
Remember, what big brother gives, big brother can take away.
Jeff Geerling talks about timing and synchronization for broadcasts while visiting NAB 2025. The video was sponsored by Meinberg, but the content is still solid.
For the viewers at home to have a good time, all the equipment and producers in the stadium have to have a good time.
The Slack DevEx team reduced the execution time of their end-to-end testing pipeline by reusing frontend assets when no changes were made. A seemingly simple change that's undoubtedly complex with a large monorepo worked on by hundreds, maybe thousands, of developers simultaneously and an ecosystem of integrated tools.
Identifying changes to frontend files was done efficiently with git diff, and finding the last frontend build was done using "straightforward S3 storage concepts". The latter was never elaborated on.
With hundreds of PRs merged into this repository daily, identifying a prebuilt version that was fresh enough required robust asset management. By using straightforward S3 storage concepts, we were able to balance recency, coherent file naming, and performance to manage our assets.
This leaves me to speculate.

- Is the latest build always stored with the same prefix?
- Is the prefix of the last frontend build recorded in a key-value store that can later be retrieved?
- Is the last frontend build information retrieved from git and the prefix pieced together on retrieval?
- Or is it something more complex? Probably.
Given the quote from the blog, I'm led to believe the prefix is made up of the date, with the commit hash in front for better prefix performance. It's all just a guess for now; a rough sketch of how such a lookup could work follows.
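To make the guess concrete, here's how the two pieces could fit together: check whether a branch touches frontend files with git diff, and if not, find the most recent prebuilt assets in S3. This is purely my speculation rather than Slack's implementation; the bucket name, key layout, and path filter are invented, and I've put the date first in the key so a lexicographic sort works:

```python
# Speculative sketch only: detect frontend changes, otherwise reuse the most
# recent prebuilt assets from S3. The bucket name, the
# frontend-builds/<date>-<commit>/ layout, and the frontend/ path filter are
# all invented for illustration.
import subprocess

import boto3


def frontend_changed(base: str = "origin/main") -> bool:
    """Return True if the current branch touches any frontend files."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD", "--", "frontend/"],
        capture_output=True, text=True, check=True,
    )
    return bool(out.stdout.strip())


def latest_prebuilt_assets(bucket: str = "example-frontend-builds") -> str | None:
    """Return the newest build prefix under frontend-builds/, if any."""
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix="frontend-builds/", Delimiter="/")
    prefixes = [p["Prefix"] for p in resp.get("CommonPrefixes", [])]
    # Prefixes like frontend-builds/2025-05-03-abc123f/ sort by date, so the
    # lexicographic maximum is the most recent build.
    return max(prefixes) if prefixes else None


if __name__ == "__main__":
    if not frontend_changed():
        print("No frontend changes; reuse prebuilt assets:", latest_prebuilt_assets())
```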
Microsoft used their fundamental advantage as an incumbent software leviathan to bypass the terms on which we wanted to fight them. With very few exceptions, they didn’t steal our customers. Our customers loved Slack. But they tapped the massive market they already had and greatly reduced our chance to ever reach them. A company that was already paying a steep premium for the essential tools of Office 365 got Teams “for free” and it seemed fine to them.
That sounds like 🎩 to me. Winning an antitrust case in the EU in 2024, filed back in 2020, seemed to be too little, too late.
Filed in 2020 and resolved in 2024, the suit alleged that Microsoft had used its monopoly power to bundle Teams into the Office suite and give it away for free, thereby undercutting competition and unfairly skewing the market in their favour. In response, Microsoft has agreed to unbundle and charge for Teams.
A look at what silica gel packets are, what they do, and why we use them so much.
[A] single gram of silica gel could have an internal surface area of eight hundred square meters—the size of almost two basketball courts.
Discovered via Tom Scott's Newsletter.
I've posed a similar question before, which is often met with hesitation or dismissal, but today I came across this post circulating online, and it's great to see some discussion around it.
This is my favourite line.
Ad companies are never going to regulate themselves—it's like hoping for heroin dealers to write drug laws.
Ploum talks about a component of enshittification: Androidification.
Androidification is not about degrading the user experience. It’s about closing doors, removing special use cases, being less and less transparent. It’s about taking open source software and frog boiling it to a full closed proprietary state while killing all the competition in the process.
Ready for a hit of crocheted nostalgia? Nicole Nikolich created large crochet pieces depicting Minesweeper, Neopets, Solitaire, and The Sims, which are currently on exhibit at The Delaware Contemporary until 25 May 2025.
Discovered via Ben Daubney.
When we choose to adopt any new dependency – whether it’s a framework, library, or any other tool – we are making a bet. We’re gambling that the velocity gain from this new tool will not be lost to its maintenance burden. If a shiny framework is overcomplicated, or poorly maintained for our needs, we’re gonna have a bad time.
Allen Pike goes on to highlight boring and well-maintained frameworks for several scenarios in the seemingly fast-paced world of JavaScript. For example, Fastify or NestJS for backend-based persistent applications, and React Router Framework or Next.js for progressive applications.
As the title suggests, Jeija demonstrates how WiFi signals travel through our surroundings by capturing them with an ESP32 antenna array. The video dives further into the physics behind the signals through signal captures, visuals, and easy-to-understand explanations.
Discovered via Tom Scott's Newsletter.
Clara and I have an AppleTV, a curious device made by a company that seemingly cares little for it. This is fantastic news from a large IT company in 2025, because it means it has mostly escaped from being stuffed with new “features” nobody wants.
Funny to think my favourite piece of tech is my old Kindle Paperwhite. The battery still works great, the power button hasn't jammed, I can load books from various stores, I can download books directly from the Kindle store if I choose, and it's not plagued with pop-ups or other obnoxious attention-grabbing features. The same can't be said for the Kindle store itself, but that may be a story for another time.
Sharing Adam Leventhal's drawing of what happens if that one dependency from that XKCD fails.
A look into a risk assessment done by the Hachyderm Infrastructure team as a precaution to limit exposure to entities that may be governed by US law. The post breaks down each site and component, giving an indication of the impact to the service if the component were disrupted and the effort it would take to migrate. There's a lot of interesting detail on what it takes to host a large distributed social network like Mastodon well.
This post from Candost reflects on traits of a good software engineer. It's very well thought out.
Joshua writes some good pointers on running your own public API, drawn from his experience running Pushover's API. I recommend reading his full post; I've distilled it down here with some of my notes, and a rough sketch pulling a few of the pointers together follows the list.
- Host the API on its own hostname (e.g. api.example.com) to allow for separate control of IPs, TLS settings, and so on. I've made this mistake before, figuring I could just put NGINX in front of everything on a single domain. Not entirely scalable.
- Account for users doing the bare minimum to get their application working, including inadvertently sending non-conforming requests. Supporting them now will mean supporting them for all eternity, unless you start working on a deprecation strategy. This reminded me of when I built a bot to send notifications of new posts on a subreddit. I created the URL for each post by inserting the subreddit name returned by the API. I included "/r/" in the URL and the API also returned "/r/" before the subreddit name, which meant the generated URL had "/r/" twice. This still worked for a while before Reddit decided to reject those requests and I had to adjust my code. Had to dig through the Discord message archives for that one. Joshua also gives an example of not being too stringent:
Pushover's API has a message size limitation of 1,024 characters. If the message parameter is larger than that, I could reject the request because it's not correct, but then the user's message is lost and they may not have any error handling. In this case, I truncate the message to 1,024 characters and process it anyway [...] The user still receives something, and if they care that it's truncated, they can properly implement continuation or smarter truncation.
- Use API tokens where possible for authentication, and make them easy to rotate.
- Include a unique ID with every request and ask for it when providing customer support.
- Assume humans will read your error messages and make them descriptive. Please, this.
- Keep on top of user failures: Pushover's API "short-circuits" the API logic after a set number of 4xx errors, responding with a 429 and a descriptive message for an hour. Use the API tokens to map the erroneous requests to a user and notify them via email.
- Prefix any tokens you create to help sort out random strings. This is one thing I've seen used a lot but never considered doing myself until reading this.
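To tie a few of these together, here's the rough sketch I mentioned above: a small Flask handler illustrating prefixed tokens, a unique request ID on every response, descriptive errors, lenient truncation, and a crude 4xx short-circuit. It's my own illustration of the advice rather than Pushover's code, and the route, limits, and token format are all invented:

```python
# An illustrative sketch of a few of the pointers above: prefixed API tokens,
# a unique request ID on every response, descriptive errors, lenient
# truncation of oversized messages, and a crude 4xx short-circuit.
# This is not Pushover's code; the route, limits, and token format are invented.
import secrets
import uuid
from collections import Counter

from flask import Flask, jsonify, request

app = Flask(__name__)

MESSAGE_LIMIT = 1024     # mirrors the limit quoted above
FAILURE_THRESHOLD = 50   # invented: 4xx count before short-circuiting a token
failures = Counter()     # in-memory only; a real service needs a store with expiry


def new_api_token() -> str:
    # The prefix makes tokens recognisable in logs, support tickets, and secret scanners.
    return "exapi_" + secrets.token_urlsafe(24)


@app.post("/1/messages.json")
def create_message():
    request_id = str(uuid.uuid4())
    token = request.form.get("token", "")

    def reply(payload: dict, status: int = 200):
        resp = jsonify(payload | {"request": request_id})
        resp.headers["X-Request-Id"] = request_id
        return resp, status

    # Short-circuit clients that keep sending bad requests.
    if failures[token] >= FAILURE_THRESHOLD:
        return reply({"errors": ["too many failing requests; check your integration"]}, 429)

    if not token.startswith("exapi_"):
        failures[token] += 1
        return reply({"errors": ["token is missing or malformed; tokens start with 'exapi_'"]}, 400)

    message = request.form.get("message", "")
    if not message:
        failures[token] += 1
        return reply({"errors": ["message is required"]}, 400)

    # Be lenient: truncate rather than reject, as in the quote above.
    message = message[:MESSAGE_LIMIT]

    # ... queue the (possibly truncated) message for delivery here ...
    return reply({"status": 1})


if __name__ == "__main__":
    app.run()
```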
I learnt about the localisation efforts of Ibrahima SARR's team to create the Fulah language pack for Firefox from Eric Bailey's blog post about localising content. It offers a new perspective on common technical idioms that may not translate directly into other languages. The example described was the word "crash", which was translated to hookii, meaning "a cow falling over but not dying".
This also reminded me of a web page by Rogério de Lemos on mapping technical terms in Portuguese to their English counterparts when describing dependability.
A comic on the annoying design of modern UIs, discovered via Tom Scott's Newsletter.
Multiple university professors and researchers from different institutions comment on how students from 2017 onwards store files in a flat structure and are unaware of, or unwilling to use, folders.
I came across this 2021 article via a comment left by Simon Willison on Hacker News about how the HTML for People book asks the reader to "create a folder".
One notion is that this mental shift could be attributed to moving away from sorting through physical filing cabinets, but also to the way content is consumed today, with curated feeds rather than having to store or locate a file within a folder. Another reason could be the emergence of good full-text (and image) search; I'm leaning towards this being a contributing factor.