@dajbelshaw When it comes to the web, though, there is now no alternative. Google run the W3C and have recently announced they will start banning all but their own and Mozilla's engines from Google products; Mozilla are 99% dependent on Google for revenue and are forever playing catch-up. No other browser engine can keep up with deliberately fast-shifting standards. There's been a steady downward trend in practical browser-engine choice: only three remain. The World Wide Web is dead.

@dajbelshaw That's not to say "hope is lost"; it's to say "hope is elsewhere". The next "web", insofar as the spirit and dream of the web survive, is perhaps to be found in Gemini, though I have issues with the "get-only" design philosophy, and I think its designers need to anticipate that someone will introduce client-side scripting, like it or not, if it gets popular. But it has the look and feel of an early alternative.
Mozilla are captured in the dying WWW. I think they're done.

The web was a mistake, but Gemini is not solving many of its problems.

@federico3 @dajbelshaw No indeed, the impression is of a "reset" with some thoughtful changes to the foundations, but I agree with you that right now, if all other things went the same as for the web and Gemini "succeeded", we would see the re-emergence of HTTP/HTML/CSS/JS but with different dialects at every layer.
I think it would be prudent to start thinking about how to pre-empt these things and offer thoughtful alternatives that prevent the web's abuses.

@federico3 @dajbelshaw For example: people are already playing with "styling" via terminal escape codes, which of course only work in TUIs and are likely to break other interfaces unless handled deliberately by client devs. So the cruft is already building. Web client devs decided early on to try to accommodate "broken" pages, and it led to the pattern of site devs inventing new bad ideas and client devs following along. Maybe Gemini's culture should learn from this..
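To make the breakage concrete, here is a minimal Python sketch. The gemtext line and the stripping helper are my own illustration (Gemini's spec says nothing about escape codes): a TUI passes the raw bytes to the terminal and gets red text, while any other client sees control-character junk unless it strips the sequences itself.

```python
import re

# Hypothetical gemtext line "styled" with ANSI codes (ESC[31m = red, ESC[0m = reset).
line = "\x1b[31mWarning:\x1b[0m server offline"

# A defensive non-terminal client can strip CSI escape sequences before rendering.
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text: str) -> str:
    """Remove ANSI CSI escape sequences so non-terminal UIs show clean text."""
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi(line))  # -> Warning: server offline
```

The point is exactly the "following along" dynamic: once one client strips (or worse, honours) these codes, authors will rely on that behaviour.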

@federico3 @dajbelshaw ..or, when client-side scripting arrives, whoever implements it first will get to set the tone. Is client-side scripting a tool for out-of-band communication, which gets used for spying? Or for dynamic on-page content, giving rich but increasingly inscrutable interfaces? Or for client-side computation, which can enrich client-server interaction but also off-loads computation unnecessarily onto clients and enables fingerprinting?
How do we get the good bits without the bad?

@federico3 @dajbelshaw This could lead to design decisions that would be alien to current frontend devs, like:
* Scripts can only access page content from the document scope in which they are embedded, and descendant scopes, but nothing higher.
* Scripts can only use a finite number of computing steps before stopping.
* Scripts have local-only, temporary scopes and cannot communicate with other scripts.
* Scripts can only contact the origin domain, and only upon a human page interaction. Ever.
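The second rule, a finite step budget, can be sketched as a toy "fuel" interpreter. Everything here is hypothetical (Gemini has no scripting specification): each operation costs one unit of fuel, and the script is halted when the budget runs out, so a hostile page cannot spin the client's CPU forever.

```python
class FuelExhausted(Exception):
    """Raised when a script exceeds its step budget."""

class Sandbox:
    def __init__(self, fuel: int):
        self.fuel = fuel  # total computing steps this script may use

    def step(self):
        if self.fuel <= 0:
            raise FuelExhausted("script exceeded its step budget")
        self.fuel -= 1

    def run(self, ops):
        """Run a list of (fn, args) operations, charging one fuel unit each."""
        result = None
        for fn, args in ops:
            self.step()
            result = fn(*args)
        return result

sandbox = Sandbox(fuel=3)
program = [(int.__add__, (1, 2)), (int.__mul__, (3, 4))]
print(sandbox.run(program))  # -> 12
```

A real design would meter a bytecode loop rather than a list of operations, but the shape is the same: the client, not the page, owns the halting decision.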

If we keep the concept of a "page" around, we get another web. The actual web mashes together transport, page structure, styling, and interactivity because it started "simple".
