This article is disingenuous on a number of levels.

The idea that they are somehow against Google and monopolists while being paid by them and having Google as the default search engine. The idea that statistical data about precisely how you use the browser is somehow not personal.


I feel angry just reading the title.

It's not just disingenuous.

They are lying to their users.

@Shamar It's not just lying. It's the assumption that the reader doesn't know where Mozilla gets its money. Trying to take advantage of people's ignorance.
@Shamar @alcinnz You can also get creative with it. Make some nice visuals about how Firefox gathers statistics. "My Firefox history". Give a CCC talk, or speak somewhere similar where people might pay attention. Turn it into graphs and artistry. Point out the relevant sections of code.

@Shamar @bob I have a few thoughts:

1. These days I like to encourage people to stick with their default browsers (except, of course, on elementary). That provides the most competition now.

2. It's actually quite easy to implement a functional browser: just embed WebKit, Chromium, or Gecko. What's difficult is adding all the features to make it good.

3. I wish it didn't take so much effort to make a browser engine, and would love for JavaScript to get replaced with something simpler and easier.

@Shamar @bob 4. I didn't come into this knowing anything more than any of you. Just a desire to try things out.

And a love for elementary OS.

@Shamar @bob The concept I've heard suggested and quite like is that all HTTP requests should only ever be made on page load or in response to informed and explicit user interaction. Basically AJAX should be nothing more than fancier links, rather than a full programming language.

And maybe there'd be a limited ability to move some of this processing client side. As long as that doesn't become remote code execution.

@Shamar @bob I don't think I explained that well, so you should just read "Let's Replace JavaScript With Something Better".

I'm saying I don't think we need Remote Code Execution as a Service. Fancier links can account for most Ajax. Fancier CSS can account for most interactive controls.

We should see how far that takes us, what else we need to preserve the best of the Web post-JS, and how we can design permission prompts to tie into human laziness.
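For a concrete example of "fancier HTML/CSS" already replacing a common JS pattern: the standard `<details>`/`<summary>` elements give you a collapsible disclosure widget with zero script.

```html
<!-- A disclosure widget with no JavaScript: <details> is standard
     HTML and the browser manages the open/closed state itself. -->
<details>
  <summary>Advanced settings</summary>
  <p>These controls only appear once the user opens the widget.</p>
</details>
```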

@alcinnz @Shamar @bob
I think one interesting direction to explore would be a predefined HTML element for infinite scrolling, that contains a template for the items it's gonna generate, and a link to a json or sth from which it should progressively download the data.
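A rough sketch of what such markup might look like. To be clear, the element name, attributes, and `{placeholder}` templating syntax below are all invented for illustration; none of this is a real standard:

```html
<!-- Hypothetical element: the browser fetches pages of JSON from
     `src` as the user nears the bottom of the list, instantiating
     the template once per item. Entirely made up as a sketch. -->
<infinite-scroll src="/api/posts?page={n}">
  <template>
    <article>
      <h2>{title}</h2>
      <p>{summary}</p>
    </article>
  </template>
</infinite-scroll>
```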

@alcinnz @Shamar @bob
moreover, I think that many other problems solved with JavaScript can be reduced to this (maybe slightly generalized) infinite scroll element.

IOW, we should think of the simplest tool which can be used to do most of the good stuff that requires JS right now, but is easier to implement, reason about, and sandbox, than JS.

@alcinnz @Shamar @bob

If you've ever played Minecraft, you may have noticed that many updates introduce only one new item, with fairly simple mechanics. However, that item, combined with all the items that existed before, has so many uses that it replaces dozens of items from different mods.

I think we need something like this.

@Shamar @bob @alcinnz

>all interaction between the user and the network should happen out of conscious decisions of the users.

Oh, I overlooked this one.
Yeah, sounds like an interesting axiom.

Which gives me another idea:
maybe we need a more axiomatic approach?
Maybe we should first define a good (small but sufficient) set of axioms that the Web should satisfy, with a strong and well-described justification for each of them.

@Wolf480pl @Shamar @bob As long as there's others interested in defining and experimenting with these ideas, I'd be more than happy to pitch in!

Just tell me where you'd want to type up these ideas and I'd join you!

@Shamar @alcinnz @bob @enkiv2

I'd like to take part in defining, but not sure how much time I'll have.

Regarding where:
Maybe some etherpad, eg. ?

@Shamar @Wolf480pl @bob @enkiv2 I've created a CryptPad document that we can use to agree on some principles for designing these specs, and what to do with them.

My thoughts are strongest at this high level, so I'll write them down now and seek your feedback and additions.

@Shamar @Wolf480pl @bob @enkiv2 We can link to a Mastodon thread at least, so I think I might paste it into that document?

@Shamar I incorporated some of your feedback, but I think I'll ask some more questions here to iron the rest of it. @Wolf480pl @bob @enkiv2

Don't get me wrong, I totally believe in your principle of locality. I just think it'd be good to iron out how it would relate to these standards. And I'm trying to figure out how it gels with avoiding clientside Turing-completeness.

@Shamar @Wolf480pl @bob @enkiv2 Makes sense.

Though when it comes to voice input, the principles I laid out there suggest that the web browser should be interpreting voice input with the page having no clue that's where its input comes from.

@Wolf480pl @Shamar @bob Certainly! For me, any thoughts that way mostly come from wanting to preserve sites like OpenStreetMap, Google Maps, etc.

The interactions on those sites loosely correspond to (truly infinite) scrolling, particularly on mobile.

@Wolf480pl @Shamar @bob As for my security principles, they suggest there'd need to be some sort of communication that this is an infinite scrolling element before any scrolling happens. This would be easier on some devices than others.

As for maintaining the useful geoposition feature for map sites (and a few others), I'd argue it's best to move the activation into the browser chrome. That way users can be confident in what it does and a separate confirmation is unnecessary.

@alcinnz @Shamar @bob HTML5+CSS3 is Turing complete, so even if you completely remove JavaScript you still have possibilities for remote code execution.

We'd need to dramatically reduce the API surface.

@Wolf480pl @Shamar @bob Oh yeah, I forgot about Servo for a moment! I believe it can be.

@alcinnz @Shamar @bob

This comment made me realize that such a new browser, one that adds support for another client-side language, could work as a bridge from the current “web-as-application-platform” architecture (which I think is a mistake) to something better.

Sorry for the lack of context here, this just might solve a large problem I’ve been noodling on for a long time...🤔

@Shamar @alcinnz @bob


What I have in mind is extending a browser this way as a first step *away* from that architecture.

I’m in pursuit of a path to get applications off the web :)


Very cool!

Exciting to see others working in this direction, thanks!

@alcinnz @bob

@jjg @Shamar @bob What I'd be interested in proving is that replacing JavaScript in a browser engine entirely leads to a faster browser.

I really think it would! Because
1) The standards require the HTML parser to be paused whilst each <script> element is being executed.
2) The DOM standards define potentially the slowest possible AST for HTML to construct.
3) The DOM doesn't work well with the JavaScript optimizations browsers have added, nor with how JS devs use it.
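To illustrate point 1: a plain `<script>` tag halts the HTML parser until the script has been fetched and executed, which is exactly why the `defer` and `async` attributes were later added.

```html
<!-- Without defer/async, the parser must stop here, fetch the
     script, and run it before reading another byte of HTML. -->
<script src="analytics.js"></script>

<!-- With defer, parsing continues; the script runs only after
     the document has been fully parsed. -->
<script src="analytics.js" defer></script>
```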


@jjg @Shamar @bob

4) Exposing layout information to JavaScript was a mistake, because the browser engine should be in full control of when that's computed. As is, it can be a huge slowdown when these APIs are used.

But I can't complain about JavaScript itself being slow. Browsers have put an excess of effort into that.

@Shamar @jjg @bob My complaint is that webpages should only be laid out once per frame, because layout is inherently computationally expensive, yet with JavaScript you can trigger additional layouts.

This is commonly used to build custom layouts atop the browser's own, which can be useful for communication. But to be efficient, any standard for that should be applied during or after CSS layout.
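The forced-reflow problem looks like this in practice. A browser-only sketch, where `items` is assumed to be some array of DOM elements:

```javascript
// Layout thrashing: interleaving writes and reads forces the
// browser to recompute layout on every offsetHeight read,
// because the preceding style write invalidated the layout.
const heights = [];
for (const el of items) {
  el.style.width = "50%";        // write: invalidates layout
  heights.push(el.offsetHeight); // read: forces synchronous layout
}

// Batching all reads before all writes lets the browser
// lay the page out just once per frame instead.
const measured = items.map(el => el.offsetHeight);  // reads first
items.forEach(el => { el.style.width = "50%"; });   // then writes
```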
