Pinned toot

That's it everyone, my timeline is now tuned (chef kiss) perfectly

@Gargron @enkiv2 @kitkat @eq

Pinned toot

So uh hello everyone, meet my (literal) big brother @pdcull.

He's been working with at-risk teenagers in the favelas of Brazil for the last 20 years; is a certified CERT (Community Emergency Response Team) trainer and Emergency Manager; is studying for a Masters in Emergency Management with a special focus on empowering communities to develop resilience.

He has seen a bit of crap in his time (corrupt cops, drug dealers etc) so he can *probably* cope with you all.

Probably.

Be nice.

Pinned toot

Aw man the problem of deliberate pollution of search engines by web page authors has its own name: 'spamdexing'.

Spamdexing is what gave us the dominance of Google. (well, part of what did)

en.wikipedia.org/wiki/Spamdexi

Nate Cull boosted

like okay, even images? We might want to crop an image, etc?

Instead of taking a screenshot of our screen, bringing that into an imaging program, cropping, etc,

We should be able to select a crop of an image and then save a small record saying 'X image, with Y crop settings applied...'

Okay, that COULD backfire horribly for privacy, say you crop an image and whoops the whole original gets sent, so we would need to think through some of the issues there.
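The 'small record saying X image, with Y crop settings' idea can be sketched in a few lines. Everything here is hypothetical (the record shape, the field names, the URL); the point is just that a crop becomes a tiny link-plus-parameters instead of a fresh copy of the pixels:

```python
from dataclasses import dataclass

# Hypothetical: a post stores this tiny record instead of copied pixels.
@dataclass(frozen=True)
class CropRef:
    image_url: str   # link to the original image (never re-uploaded)
    x: int           # left edge of the crop, in pixels
    y: int           # top edge
    width: int
    height: int

def apply_crop(pixels, ref):
    """Resolve a CropRef against the fetched original (a 2D list of pixels)."""
    return [row[ref.x:ref.x + ref.width]
            for row in pixels[ref.y:ref.y + ref.height]]

original = [[(r, c) for c in range(4)] for r in range(4)]
crop = CropRef("https://example.com/cat.png", x=1, y=1, width=2, height=2)
print(apply_crop(original, crop))  # [[(1, 1), (1, 2)], [(2, 1), (2, 2)]]
```

And yes, the privacy problem is right there in the sketch: whoever resolves the ref fetches the *whole* original before cropping.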


But the fact that we have to resort to screenshots, that's telling us that something has gone horribly wrong.

Our data should be granular enough, and we should have the access rights and the indexing schemes and the UIs that expose that granularity, to be able to 'copy and link' rather than 'copy and paste' data at any scale we want. And have caching/transfer protocols sort out the rest of the busywork.


One simple argument that the HTTP/HTML-based web is not true Hypertext in the Ted Nelson sense:

That everyone, even here on Mastodon, has to use screenshots to share complex data.

We should be able to copy and link that data as a unit, somehow, in its underlying.... whatever.... source format.

But even though we have close to (not actually, but close to) a Universal Whatever Format in HTML... we can't transclude it into our posts. Because security, or... a whole bunch of valid reasons.

Nate Cull boosted

Boost this if you have no idea what you're doing.


and of course it will be your ISP's server you send that HTSP message to, and it's gonna say YES YES to everything you send and THEN redirect you to its corporate partner headquarters, so...

... and this was the situation literally even with the search engines before Google took over. The web servers were lying. The web PAGES were lying about their content, in their content headers. The search ENGINES were lying by putting in for-pay search results. Google came in and seemed to be honest.


It would be awesome if we could just trust that we send, eg, an imaginary HTSP (Hypertext Search Protocol) message to a server and ask it 'hey what have you got about DANCING CATS'

but you KNOW that every lying server out there, which will be ALL of them because they will all go to the same shady SEO seminars that tell them today to put 'join our mailing list popups' on their blogs... every server will answer 'YES YES WE SUPER HAVE THAT PAGE' to every query, whether they do or not.
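HTSP is imaginary, so here is an equally imaginary sketch of the failure mode: the server's answer is free to claim anything, so the only defence is to fetch the page and check the claim against the actual content (which is roughly what link-analysis search engines ended up doing, at much greater expense):

```python
# HTSP is imaginary; this sketch shows why self-reported answers
# can't be trusted: verify the claim against the fetched content.
def htsp_answer(server, query):
    """A 'lying server' says YES to every query, whatever it actually hosts."""
    return {"has_page_about": query, "url": server + "/page"}

def verify(claim, fetched_text):
    """Don't trust the answer; check the page really mentions the topic."""
    return claim["has_page_about"].lower() in fetched_text.lower()

answer = htsp_answer("https://shady.example", "DANCING CATS")
page = "Join our mailing list! Buy cheap widgets!"   # no cats anywhere
print(verify(answer, page))  # False: the server lied
```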


Anyway: solving these kinds of search reliability problems is HARD, so I think that sort of stuff needs to be 'user-code layer' somehow, ie, there needs to be a way to run arbitrary functions that do stuff on datasets; any complex algorithm, we can't rely on baking it into the firmware once and getting it right...

but relying on a big central server to do all the search? That was the big mistake. It allowed the centralisation to take place. Lots of big money needed that to happen, of course.


Like, you might think SEO is bad now, with Google running everything? but omg it was FAR FAR WORSE before Google, in the AltaVista era, when search engines naively relied on pages putting in keywords into their documents and then just STRAIGHT UP LYING AND FAKING EVERYTHING.

You have no idea, if you came onto the Web after 2000. Just how bad it was.

I guess a sensible search protocol would just burn all the lying pages with fire, and the servers they rode in on? Some kind of reputation system?


... and, yeah, I think the connections need to be two-way? Perhaps? I don't really know about that. I think the network would be more efficient if the links were two-way-ish (eg: publish-subscribe), but real networks have to deal with connections just dropping out, so the HTML approach of only one-way links was pragmatic, to a point.

I think the existence of Google is a failure. Search should have been part of the protocol from the beginning. But... there were so many pages that just lied.

Nate Cull boosted

Computers operational in the Homebrew Computer Club, February 1977.

Nate Cull boosted

But yeah: a soup of objects, connected by links into a graph, and those links may include arbitrary computations (so... maybe not a graph, or not a finite-sized graph). I think that's what the future looks like (or even the present, if you look at it through something like Powershell, where you can make an object out of anything).

I think those objects need to be not quite 'objects' but pure-functional, though, for safety; and I think their types need to be describable (and creatable) somehow.
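A minimal sketch of that 'soup of objects' idea, under my own assumptions (all names invented): nodes are immutable, and a link is either a plain id or an arbitrary zero-argument computation, so the 'graph' can contain neighbours that only exist when you ask for them:

```python
from dataclasses import dataclass

# Sketch: an immutable 'object soup'. Links are either plain ids or
# zero-argument functions (arbitrary computations) that yield more objects.
@dataclass(frozen=True)
class Node:
    value: object
    links: tuple = ()       # each link: an id (str) or a callable

soup = {
    "a": Node(1, links=("b",)),
    "b": Node(2, links=(lambda: Node(3),)),   # a computed, 'virtual' neighbour
}

def neighbours(soup, node):
    """Resolve each link: look up ids, run computations for the rest."""
    return [soup[l] if isinstance(l, str) else l() for l in node.links]

print([n.value for n in neighbours(soup, soup["a"])])  # [2]
print([n.value for n in neighbours(soup, soup["b"])])  # [3]
```

The frozen dataclass is doing the 'pure-functional, for safety' part: following a link can compute things, but it can't mutate the soup.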


There's a whole world of hurt, of course, in the details as soon as you bring 'typing' into the picture, since there are so many different type systems and they don't play nicely with each other!

That's kind of why I started looking at the problem of what typed objects are made OF, and ended up with 'Prolog terms, but as S-expressions' as my first cut at it.

Because if there's typed data out there we need to have a way of describing the types *themselves*.
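One way to picture 'Prolog terms, but as S-expressions': write a term as a nested tuple, functor first, and then types themselves are just terms you can pattern-match. This is my own toy encoding, not anyone's actual system; '?'-prefixed strings play the role of Prolog variables:

```python
# Sketch: a Prolog-style term written as an S-expression (nested tuples):
# the functor first, then the arguments. Strings starting with '?' are variables.
def match(pattern, term, bindings=None):
    """Structurally match a term against a pattern, binding '?vars'."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:                       # variable seen before:
            return bindings if bindings[pattern] == term else None
        bindings[pattern] = term                      # first use: bind it
        return bindings
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term):
            return None
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None      # atoms must be equal

# The type 'list(pair(int, string))' as a term:
ty = ("list", ("pair", "int", "string"))
print(match(("list", "?elem"), ty))  # {'?elem': ('pair', 'int', 'string')}
```

So a 'type description' and a 'query about a type' are the same kind of object, which is the part I want: the types are data too.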


Juan's getting part of it, though:

<<the ability to reference any node in the Graph. A sort of URL on steroids. URLs can point to arbitrary resources but are limited in that they cannot point to data inside or referenced by the resource....

In Membrane, you use “Refs” which are analogous to URLs but actually designed to work with programmable interfaces, for example, Refs are typed and arguments are explicit>>

Yeah, that's sorta what I want! 'Addresses' which are functions. Or methods?
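A rough sketch of 'addresses which are functions', under my own invented names (this is not Membrane's actual API): a Ref is a path into a graph plus explicit arguments, and dereferencing it means applying the function found at that path.

```python
from dataclasses import dataclass

# Hypothetical 'Ref': an address that is really a function call --
# a path into a graph plus explicit arguments carried in the address itself.
@dataclass(frozen=True)
class Ref:
    path: str               # which node/method in the graph
    args: tuple = ()        # explicit arguments, part of the address

graph = {
    "user.posts": lambda n: [f"post-{i}" for i in range(n)],
    "user.name": lambda: "nate",
}

def resolve(graph, ref):
    """Dereference: look up the function at the path and apply the args."""
    return graph[ref.path](*ref.args)

print(resolve(graph, Ref("user.posts", (2,))))  # ['post-0', 'post-1']
print(resolve(graph, Ref("user.name")))         # 'nate'
```

Unlike a URL, a Ref like this can point 'inside' a resource, because the arguments select the data; a URL on steroids, like the quote says.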
