Since some people are talking about the “death of the URL” … again … it felt like the right time to re-post something I wrote two years ago on the topic, including a link to research that seems to indicate people DO know what URLs are.


Amazing how much more engagement I got on this here than on the Bird Site.

@torgo @bgcarlisle Is it possible companies want you searching for them and clicking through so the search engines think people are interested in their site and promote it?

@torgo This is just anecdotal evidence, but one thing I noticed in Japan is that all the subway ads showed a little search bar (with a magnifying glass beside it) and the search term inside. My hunch is that they did it because it's hard for non-Westerners to remember a bunch of Latin characters to type into a URL bar.

@aral not sure what you mean? The Pew research was about people.

@torgo I meant that corporations like Google are trying to kill the URL. There isn't a mass call from everyday people to do so. Was just being pedantic.

@aral @torgo a lot of people I know don't know how to type URLs. The first thing they do when they want to log in to a website is to go to a search engine, or dig through FB or their email to find a link to it. People can be *really* intimidated by digital tech, so they just learn sequences of actions that get things done, however inefficiently. For these people, the service that can provide the outcome they want in the shortest sequence of clicks wins, hands down.


To me the main point is that modern URLs are so overloaded with tracking junk they look like line noise.

@strypey @aral @torgo

@alanz @xxbc @aral @torgo certainly some websites produce more human-readable URLs than others. I remember we had a huge problem upgrading some #Indymedia sites to a new codebase because the URLs used article numbers instead of title info. It's a lot easier to successfully redirect URLs to each article's location in a new database, especially across multiple upgrades, if it's clear what they're actually meant to point at.
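The migration problem above can be sketched as a lookup table from old ID-based paths to new title-based ones. This is just an illustrative sketch; every path and slug below is made up, not taken from any real Indymedia site:

```python
# Hypothetical sketch: redirecting old numeric-ID article URLs to
# new human-readable ones during a codebase migration.
# All paths and slugs here are invented for illustration.

OLD_TO_NEW = {
    # old ID-based path -> new title-based path (a mapping like this
    # would be rebuilt from the article database during migration)
    "/article/1234": "/2004/05/anti-war-march-report",
    "/article/5678": "/2005/11/media-centre-opens",
}

def redirect(old_path):
    """Return the new location for an old URL path, or None if unknown."""
    return OLD_TO_NEW.get(old_path)

print(redirect("/article/1234"))  # -> /2004/05/anti-war-march-report
print(redirect("/article/9999"))  # -> None (no mapping survived)
```

The point of the thread stands: if the old URLs had carried title info, the mapping could often be computed instead of painstakingly reconstructed per article.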

@alanz @xxbc @aral @torgo after 3-4 codebase upgrades of the Aotearoa Indymedia site, on the last one the techies ended up just dumping the old database to a static archive, where it's languished ever since, unsearchable from the new site.


The URL as published may be clean.

But when it is disseminated via email, search, as a link on a site, etc., it gets all the junk added, including redirects, UTM params and the like.
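A minimal sketch of stripping that junk back out with Python's standard library; the list of tracking parameter prefixes is illustrative, not exhaustive:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query-parameter name prefixes commonly used for tracking.
# This list is an illustrative assumption, not a complete one.
TRACKING_PREFIXES = ("utm_", "fbclid", "gclid", "mc_eid")

def clean_url(url):
    """Return the URL with recognised tracking parameters removed."""
    parts = urlsplit(url)
    kept = [
        (k, v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if not k.startswith(TRACKING_PREFIXES)
    ]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(clean_url("https://example.com/post?id=42&utm_source=mail&gclid=xyz"))
# -> https://example.com/post?id=42
```

Redirect chains are harder to undo client-side, which is part of why the "as published" URL and the "as received" URL so often diverge.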

@xxbc @aral @torgo

@alanz @strypey @aral @torgo that is also a really valid, though somewhat secondary, concern

@torgo Asking people what "URL" stands for as a way of measuring if people know what URLs are seems a little bit like asking people what Solanum tuberosum is as a way of measuring if people are familiar with potatoes.

@invaderxan I don't disagree but I am struggling to find any more applicable research. It doesn't help that it's almost impossible to search for anything by the words URL or link. Any suggestions?

@torgo I'm not really sure you made a case for URLs, tbh. Just because you've got a slightly better than random chance of finding a person in the US that can expand the acronym correctly doesn't say much at all. The examples you gave weren't even URLs anyway, they are "relative references". So you've maybe got an argument that the DNS is handy for advertisements? But the centralisation and identity problems of URLs are basically caused by the DNS, so no win there anyway.

@mjog well this was a post about one particular use of URLs - out-of-band communication of a web address (could be advertising but could equally be any public information). DNS is distributed, but you're right that there is some centralisation (ICANN / root servers). I would argue that it's distributed "enough" considering how many DNS domains are out there.

@torgo The DNS infrastructure is certainly very distributed, but its management is very highly centralised. In any case it's a terrible stand-in for identity, and hence it is subject to all of the spoofing/phishing/typosquatting/etc/etc that we see today. There needs to be another layer on top of the DNS so that it (and URLs) can be treated like the implementation detail that they are.
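The typosquatting risk can be illustrated with a naive edit-distance check; the domains and the distance threshold below are illustrative assumptions, not a real defence (real attacks also use homoglyphs and lookalike TLDs):

```python
# Sketch of why bare DNS names are weak identity: a nearby lookalike
# domain is cheap to register and hard for a human to spot.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_squat(domain, trusted, threshold=2):
    """Flag a domain that is suspiciously close to a trusted one."""
    return domain != trusted and edit_distance(domain, trusted) <= threshold

print(looks_like_squat("examp1e.com", "example.com"))  # True
print(looks_like_squat("example.com", "example.com"))  # False
```

A proper identity layer would make such comparisons unnecessary, which is roughly the argument being made in this thread.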

@mjog I am not sure I share your conclusions. A new layer would introduce additional complexity and an additional point of failure. "Making URLs more usable" (i.e. improving what we have) would seem to me to be a better goal.

@torgo Yes I'm sure that's what people said when the DNS was first introduced, too. ;) Don't look at how many layers, how many protocols, and how many different software stacks a request for a URL that you type into a browser already has to traverse before the resource it represents is displayed – you may be horrified!
