I'm coming round to the view that optimum documentation is one web page per function.

This gives you space for prose, examples, related functions, version history and even user comments!

It's not the norm though, and requires substantial content writing.

I've noticed that data small enough to fit in your clipboard rarely gets saved to an individual file.

You get collections of data, but it would be odd to save a single URL to a file.

The clipboard is less persistent, but it's incredibly convenient.

Rust's dbg! macro (new in 1.32) is delightful. You write dbg!(my_var) and it prints a line like this to stderr:

[src/my_file.rs:123] my_var = "value of my_var"

It's a huge ergonomic help when debugging!

Find yourself regularly reviewing pull requests just to comment "please update the changelog"? Automate it! danger.systems/js/

Unlike a typical CI tool, this automates the PR commenting itself.
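A minimal Dangerfile sketch of that changelog rule, assuming a JS project with a CHANGELOG.md at the repo root (the file path and warning text are just my choices):

// dangerfile.js — a sketch, not the canonical rule
import { danger, warn } from "danger"

// Treat any created or modified CHANGELOG.md as "changelog updated".
const touched = danger.git.modified_files.concat(danger.git.created_files)

if (!touched.includes("CHANGELOG.md")) {
  warn("Please update the changelog.")
}

Danger runs this against the pull request in CI and posts the warning as a PR comment, so a human reviewer never has to.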

Would we take the time savings, or would we just build more complex, more featureful projects?

I'm suspicious the latter may have happened to some extent.

Food for thought: suppose better libraries/tools over the next 5 years doubled software productivity. A six month project today would take three months in 2024.

Would we even notice? How often do we compare like-for-like projects for improvements in development speed?

Is writing a perfectly space-efficient data format comparable to hand-writing assembly today? Today's generic compression algorithms are excellent.

Moving from SMS (charged per message) to data-based chat apps shows how billing impacts usage.

It's quite common for me to split text up into several messages for clarity. I didn't do that when I first used SMS!

Being able to choose your target browsers in Babel is a little like -march in a native code compiler!

Browser targeting has the nice property that you can collect popularity metrics to make quantitative decisions too.
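Concretely, a babel.config.js along these lines is the moral equivalent of -march, with the browserslist query standing in for the architecture flag (this particular query is only an illustration):

// babel.config.js — the browserslist query is just an example
module.exports = {
  presets: [
    ["@babel/preset-env", {
      // Only transpile syntax these browsers can't handle natively.
      targets: "> 0.5%, last 2 versions, not dead",
    }],
  ],
};

And that query is exactly where the popularity metrics can feed in.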

A reminder of how people from different programming language backgrounds will judge your language choices. (Everyone has biases and their own favourite stack too.)

lwn.net/Articles/775963/

Despite the library churn, JS is a very stable language underneath. All my old projects still work, although some of them depend on long abandoned libraries.

Does a "blog comments" style system make sense on a PL docs site? E.g. official PHP docs: secure.php.net/manual/en/funct or community Clojure docs: clojuredocs.org/clojure.core/c

You want to encourage contributions, but official docs are arguably better improved by patching them directly. Tradeoffs!

GitHub enables you to do a ton via the browser, but AFAICS there's no way to rebase a pull request you've opened against someone's repo.

A web-UI rebase button is really handy when upstream has fixed the tests. I've used one in some $JOB environments, and it saves a few precious clicks.

Building a bug bounty system that attracts talented researchers, how much they earn, and how many bugs they find:

blog.trailofbits.com/2019/01/1

"In world championship [chess] matches, [...] players were ensconced behind polarized glass walls to prevent anyone in the audience from passing computer advice through signals."

AI and human chess today: project-syndicate.org/commenta

I think this means there's a big potential market for computer interfaces that read brain waves. Devices exist today, but they're designed for medical use cases and require training.

It seems like the natural conclusion.

I suspect this is the primary appeal of fingerprint readers on phones. On laptops they were more of a security feature. On phones, they reduce the time to interaction, because the phone is already in your hand when you start using it!

Part of the success of smartwatches is that they remove this delay. You don't even have to reach into your pocket!

Voice activated ambient computing (Alexa etc) feels like the next step. The computer is always available.
