Show HN: An interactive guide to how browsers work

(howbrowserswork.com)

231 points | by krasun 19 hours ago

14 comments

  • domnodom 17 hours ago
    Not all browsers had or have a DOM, and some didn’t until later versions.

    Early browsers without DOMs (with initial release date): WorldWideWeb (Nexus) (Dec 1990), Erwise (Apr 1992), ViolaWWW (May 1992), Lynx (1992), NCSA Mosaic 1.0 (Apr 1993), Netscape 1.0 (Dec 1994), and IE 1.0 (Aug 1995).

    Note: Lynx remains a non-DOM browser by design.

    AOL 1.0–2.0 (1994–1995) used the AOLPress engine which was static with no programmable objects.

    The ability to interact with the DOM began with "Legacy DOM" (Level 0) in Netscape 2.0 (Sept 1995), IE 3.0 (Aug 1996), AOL 3.0 (1996, via integrated IE engine), and Opera 3.0 (1997). Then there was an intermediate phase in 1997 where Netscape 4.0 (document.layers) and IE 4.0 (document.all) each used their own model.

    The first universal standard was the W3C DOM Level 1 Recommendation (Oct 1998). Major browsers adopted this slowly: IE 5.0 (Mar 1999) offered partial support, while Konqueror 2.0 (Oct 2000) and Netscape 6.0 (Nov 2000) were the first W3C-compliant engines (KHTML and Gecko).

    Safari 1.0 (2003), Firefox 1.0 (2004), and Chrome 1.0 (2008) launched with native standard DOM support from version 1.0.

    Currently, most major browser engines follow the WHATWG DOM Living Standard, which is updated continuously rather than in versioned snapshots.

    • userbinator 12 hours ago
      The last time I checked, Dillo also has no DOM in any reasonable definition of the term; instead it directly interprets the textual HTML when rendering, which explains why it uses an extremely small amount of RAM.
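
      That is, it renders straight off the token stream and never materializes a tree. A toy sketch of the idea (Python's html.parser as the tokenizer; not Dillo's actual code):

          from html.parser import HTMLParser

          # Tree-less rendering: act on tokens as they stream past, keeping
          # only a sliver of state (here: are we inside <b>?).
          class StreamRenderer(HTMLParser):
              bold = 0
              def handle_starttag(self, tag, attrs):
                  if tag == "b": self.bold += 1
                  if tag in ("p", "br"): print()
              def handle_endtag(self, tag):
                  if tag == "b": self.bold -= 1
              def handle_data(self, data):
                  text = data.strip()
                  if text:
                      print(text.upper() if self.bold else text, end=" ")

          StreamRenderer().feed("<p>Hello <b>world</b></p>")

      Memory stays roughly constant however big the page is, since nothing resembling a node tree is ever kept around.
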
    • krasun 15 hours ago
      Thank you for the suggestion! Would writing something like "DOM in modern browsers" be more correct, then?
      • magicalist 13 hours ago
        > Would writing something like "DOM in modern browsers" be more correct, then?

        No, I don't think so. I don't know why the GP comment is at the top, beyond historical interest. If you continue with your plans mentioned elsewhere to cover things like layout, rendering, scripting, etc., then under this standard almost everything will have to have "in modern browsers" added to it.

        Part of the problem is the term "DOM" is overloaded. Fundamentally it's an API, so in that sense it only has meaning for a browser to "have a DOM" if it supports scripting that can use that API. And, in fact, all browsers that ever shipped with scripting have had some form of a DOM API (going back to the retroactively named DOM Level 0). That makes sense, because what's the point of scripting if it can't interact with page contents in some way?

        So, "Lynx remains a non-DOM browser by design" is true, but only in the sense that it's not scripted at all, so of course it doesn't have DOM APIs, the same way it remains a non-canvas browser and a non-webworker browser. There's no javascript to use those things (it's a non-cssanimation browser too).

        There's a looser sense of the "DOM", though, that refers to how HTML parsers turn an HTML text document into the tree structure that will then be interpreted for layout, rendering, etc.

        The HTML spec[1] uses this language ("User agents must use the parsing rules described in this section to generate the DOM trees from text/html resources"), but notes it's for parsing specification convenience to act as if you'll end up with a DOM tree at the end of parsing, even if you don't actually use it as a DOM tree ("Implementations that do not support scripting do not have to actually create a DOM Document object, but the DOM tree in such cases is still used as the model for the rest of the specification.")

        In that broader sense, all browsers, even non-modern ones (and Lynx) "have a DOM", since they're all parsing a text resource and turning it into some data structure that will be used for layout and rendering, even if it's the very simple layouts of the first browsers, or the subset of layout that browsers like Lynx support.
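
        To make that broader sense concrete, here's a toy sketch (Python's html.parser standing in for a real tree-construction stage, which would also insert implied tags, recover from errors, and so on):

            from html.parser import HTMLParser

            # Even with no scripting at all, parsing yields a tree -- "the DOM"
            # in the broader sense -- that layout and rendering then walk.
            class TreeBuilder(HTMLParser):
                def __init__(self):
                    super().__init__()
                    self.root = {"tag": "#document", "children": []}
                    self.stack = [self.root]
                def handle_starttag(self, tag, attrs):
                    node = {"tag": tag, "children": []}
                    self.stack[-1]["children"].append(node)
                    self.stack.append(node)
                def handle_endtag(self, tag):
                    if len(self.stack) > 1:
                        self.stack.pop()
                def handle_data(self, data):
                    if data.strip():
                        self.stack[-1]["children"].append({"tag": "#text", "text": data.strip()})

            builder = TreeBuilder()
            builder.feed("<p>Hello <b>world</b></p>")
            print(builder.root)  # the tree exists whether or not any script can reach it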

        [1] https://html.spec.whatwg.org/multipage/parsing.html

  • chrisweekly 18 hours ago
    Cool project, thanks for sharing. HN readers should also check out https://hpbn.co (High-Performance Browser Networking) and https://every-layout.dev (amazing CSS resource; the paid content is worth it, but the free parts are excellent on their own).
    • konaraddi 17 hours ago
      HPBN is really well written; chapter 4 helped me understand TLS well enough to debug a high-latency issue at a previous job. There was an issue where a partially received TLS frame, with no subsequent bytes arriving for it, led to a server waiting 30 minutes for the rest of the frame to arrive. HPBN was a huge help. I haven't finished reading it, but I remember there's a part that goes over the trade-offs of increasing vs. decreasing TLS frame sizes, which is a low-level knob I now know exists because of HPBN. Not sure if I'll ever use it, but it's fascinating.
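
      To illustrate the knob (a toy sketch of my own, not something from HPBN): with Python's ssl module, each send() typically becomes one TLS record, and the receiver can't authenticate or decrypt a record until all of it has arrived -- which is exactly why a truncated record can stall the peer.

          import socket, ssl

          ctx = ssl.create_default_context()
          with socket.create_connection(("example.com", 443)) as raw:
              with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
                  request = (b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                             b"Connection: close\r\n\r\n")
                  # Each send() produces its own TLS record. Small records lower
                  # time-to-first-byte (the peer can decrypt sooner); large ones
                  # amortize the per-record framing and MAC overhead.
                  for i in range(0, len(request), 16):
                      tls.send(request[i:i + 16])
                  print(tls.recv(4096).decode(errors="replace").split("\r\n")[0])
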
    • KomoD 16 hours ago
      HPBN is really interesting, thanks for linking it
  • utopiah 18 hours ago
    Neat, it's an exciting way to dive into https://browser.engineering without having to install anything.

    I'm wondering if examples with Browser/Server could benefit from a small visual, e.g. a desktop/laptop icon on one side and a server on the other.

    • krasun 18 hours ago
      I am planning to add more sections with more details. But I decided to collect some feedback first.

      Thank you! It is a good suggestion. Let me think about it.

  • schoen 7 hours ago
    I'd also like to suggest a little more work on the URL parsing (even though most users probably won't enter anything that will be misinterpreted). For example, if a protocol scheme other than https:// or http:// is used, the browser will probably still treat it specially somehow (even though browsers typically seem to support fewer of these than they used to!). It might be good to catch these other cases.

    https://en.wikipedia.org/wiki/List_of_URI_schemes
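
    For example, a first-pass branch on the scheme might look something like this (a sketch; the set of special-cased schemes is made up, not any browser's actual list):

        from urllib.parse import urlsplit

        # Hypothetical table of schemes the "browser" handles itself.
        SPECIAL = {"mailto", "tel", "file", "data", "about", "view-source"}

        def classify(url: str) -> str:
            scheme = urlsplit(url).scheme.lower()
            if scheme in ("http", "https"):
                return "fetch over the network"
            if scheme in SPECIAL:
                return "hand off to the built-in " + scheme + ": handler"
            if scheme:
                return "unknown scheme: offer an external app, or refuse"
            return "no scheme: guess (prepend https://) or treat as a search"

        for u in ["https://example.com", "mailto:a@b.c", "gopher://x", "d.csdfdsaf"]:
            print(u, "->", classify(u))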

  • arendtio 17 hours ago
    I like it very much --> bookmarked :-)

    The step I am missing is how other resources (images, style sheets, scripts) are loaded based on the HTML/DOM. I find that crucial for understanding why images sometimes go missing or why pages sometimes appear without styling.
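
    Roughly, as I understand it: the browser walks the parsed tree and issues fetches for src/href attributes, so a failed image fetch means a missing image, and a failed (or still-pending) stylesheet means an unstyled page. A sketch of the collection step (html.parser as a stand-in):

        from html.parser import HTMLParser

        # Collect the subresource URLs a page needs.
        SUBRESOURCES = {"img": "src", "script": "src", "link": "href"}

        class ResourceCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.urls = []
            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag == "link" and attrs.get("rel") != "stylesheet":
                    return  # only stylesheet <link>s matter for this sketch
                name = SUBRESOURCES.get(tag)
                if name and attrs.get(name):
                    self.urls.append((tag, attrs[name]))

        collector = ResourceCollector()
        collector.feed('<link rel="stylesheet" href="a.css">'
                       '<img src="b.png"><script src="c.js"></script>')
        print(collector.urls)  # [('link', 'a.css'), ('img', 'b.png'), ('script', 'c.js')]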

    • krasun 16 hours ago
      I thought about this, but I tried to keep it simple. Let me figure out how to add these blocks without over-complicating the guide.

      Thank you!

  • vivzkestrel 1 hour ago
    stupid question: what if we scrapped dns lookups completely and made all the computers actually work with human-readable domain names instead?
    • webdevver 1 hour ago
      i have an even stupider question, which is what if we scrapped ip addresses and just used ethernet addresses to route everything? just make the entire internet network be one big switch.

      i think the guy who created tailscale wrote about something like this...

  • philk10 18 hours ago
    For a narrow browser window (< 1170px) the contents section floats over the content, which is distracting
    • krasun 18 hours ago
      Thank you! Fixing it...
  • edwinjm 16 hours ago
    A bit unfortunate that more than half of the page is dedicated to network requests, when almost all of the work and complexity of a browser is in the parsing and rendering pipeline.
    • krasun 15 hours ago
      Will cover the rendering engine in more detail. I didn't know which sections to go deeper on, so I just stopped and published it to gather more feedback.

      Thank you!

    • LoganDark 15 hours ago
      And the DOM (though it can be argued that's part of the rendering pipeline).
  • GaryBluto 6 hours ago
    This doesn't account for all browsers, only Safari and Chrome. There isn't even a passing mention of separate search and address bars.
  • logicallee 17 hours ago
    This is pretty relevant to a project I'm working on - a new web browser not based on Chromium or Firefox.

    Web browsers are extremely complex, requiring millions of lines of code in order to deal with a huge variety of Internet standards (and not just the basic ones such as HTML, JavaScript and CSS).

    A while ago I wanted to see how much of this AI could get done autonomously (or with a human in the loop). You can see a ten-minute demo I posted a couple of days ago:

    https://www.youtube.com/watch?v=4xdIMmrLMLo&t=42s

    The source code for this is available here right now:

    http://taonexus.com/publicfiles/jan2026/160toy-browser.py.tx...

    It's only around 2,000 LOC so it doesn't have a lot of functionality, but it is able to make POST requests and can read some Wikipedia articles, for example. Try it out. It's very slow, unfortunately.

    Let me know if you have anything you'd like to improve about it. There's also a feature requests page here: https://pollunit.com/en/polls/ahysed74t8gaktvqno100g

    • CableNinja 16 hours ago
      Took a quick glance through the code, it's a pretty decent basic go at it.

      I can see a few reasons for slowness - you aren't using multiprocessing or threading, though you might have to rework your rendering for it. You will need to have the renderer running in a loop, re-rendering when the stack changes, with the multiprocessing/thread loop adjusting the stack as their requests finish.

      Second, I'd recommend taking a look at existing Python DOM-processing modules; this will let you use existing code and extend it to fit your browser, and you won't have to deal with finding all the ridiculous parsing edge cases. It may also speed things up a bit.

      I'd also recommend trying to render broken sites (save a copy, break it, see what your browser does), for the sake of completeness.
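
      Roughly the shape I mean (a sketch; fetch_resource, apply_to_stack, and render are placeholders for your own code):

          import threading, queue

          # fetch_resource / apply_to_stack / render below are placeholders
          # for the browser's own networking, page-state, and drawing code.
          events = queue.Queue()

          def fetch_worker(url):
              body = fetch_resource(url)      # blocking I/O, off the main thread
              events.put((url, body))

          def main_loop(urls):
              for url in urls:
                  threading.Thread(target=fetch_worker, args=(url,), daemon=True).start()
              pending = len(urls)
              while pending:
                  url, body = events.get()    # block until some fetch finishes
                  apply_to_stack(url, body)   # mutate the page state...
                  render()                    # ...then re-render with what we have
                  pending -= 1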

      • logicallee 16 hours ago
        Thank you for your quick code review and for the many helpful tips! I'll take a look at them and see what I can put into practice.

        EDIT: Unfortunately, it seems that the code is getting near the limit of Claude's context window, so I'm not able to add several of the suggested features with the present approach. I'll look into breaking it up into multiple smaller files and see if I can do any better.

    • GaryBluto 6 hours ago
      If you're interested in Python web browsers, may I suggest you take a look at Grail?

      https://grail.sourceforge.net/

  • amelius 17 hours ago
    When I was a kid I had an electronics book about how (CRT based) TVs work.

    Posts like this are the modern version of that.

  • LoganDark 15 hours ago
    Claims that browsers transform "d.csdfdsaf" -> https://d.csdfdsaf, but they don't. They only transform domains with valid TLDs, unless you manually add the URL scheme.
    • krasun 14 hours ago
      It is a good one to fix. Thank you!
      • myfonj 13 hours ago
        The "guesswork" done by browsers is actually pretty nuanced and not standardised in a slightest way. Some defaults are pretty common, and could be maybe considered de-facto standard, but I wouldn't want to draw the line where "most" browsers agree or should agree.

        Personally, I have my browser set up to "guess" as little as possible: never do a search from the URL bar unless explicitly told to via a dedicated search keyword (plus I still keep a separate, auto-collapsing search bar). I have disabled all TLD guessing and the auto-prepending of www. In short, when I enter "whatever" into my URL bar, my browser tries to load "http://whatever/", which could be my local domain, and I could get an answer -- it is a valid URL after all. On a related note, I strongly doubt that any browser does a web search for "localhost".

        The rabbit hole naturally goes even deeper: for example, most browsers still interpret top-level data: URIs. And it was not that long ago that browsers interpreted top-level `javascript:` URIs entered into the URL bar; these now survive in bookmarklets, but were taken away from all users for the sake of a pitiful "self-XSS prevention".

        So I would be really careful about saying what happens -- or, god forbid, should happen -- when someone types something into their URL bar. "whatever" could be:

        - a search keyword with a set meaning: it could be bound to an http URL (a bookmark), and that bookmark URL could contain a `%s` or `%S`, in which case the substitution would be performed,
        - a `javascript:…` bookmark ("bookmarklet"/"favelet"; yes, most browsers still let you do that, yet alas, mostly fail to handle CSP in a way that keeps it operational),
        - a local domain.

        The claim that, statistically, "most" browsers will do a web search using some default engine is probably correct, but it is an oversimplification that glosses over quite a lot of interesting possibilities.
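
        A rough sketch of the kind of dispatch I mean (the keyword table and heuristics here are made up, not any browser's actual logic):

            from urllib.parse import quote, urlsplit

            # Hypothetical keyword table: keyword -> URL, optionally with %s.
            KEYWORDS = {
                "w": "https://en.wikipedia.org/wiki/%s",
                "hn": "https://news.ycombinator.com/",
            }

            def dispatch(typed: str) -> str:
                if urlsplit(typed).scheme:   # explicit scheme: https:, data:, javascript:, ...
                    return typed
                head, _, rest = typed.partition(" ")
                if head in KEYWORDS:         # search/bookmark keyword, maybe with substitution
                    template = KEYWORDS[head]
                    return template.replace("%s", quote(rest)) if "%s" in template else template
                if " " not in typed:         # bare word: could be a local host name
                    return "http://" + typed + "/"
                return "https://search.example/?q=" + quote(typed)  # last resort: web search

            print(dispatch("whatever"))  # http://whatever/
            print(dispatch("w Lynx"))    # https://en.wikipedia.org/wiki/Lynx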

    • ranger_danger 13 hours ago
      Who or what gets to say what a valid TLD is? Especially when people run their own local resolvers, they can create anything at any time.
  • jeffbee 10 hours ago
    Perhaps worth editing the DNS section in light of RFC 9460 ... depending on the presence and contents of the HTTPS RR, a browser might not even use TCP. Here's a good blog post from a few years ago surveying the contents of HTTPS RRs in the wild: https://www.netmeister.org/blog/https-rrs.html
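
    A quick way to poke at it (a sketch assuming the third-party dnspython library, version 2.1+ for HTTPS RR support):

        import dns.resolver

        # RFC 9460: the HTTPS RR's SvcParams can advertise e.g. alpn="h3"
        # (letting a client go straight to QUIC, skipping TCP) plus
        # ipv4hint/ipv6hint addresses that can skip separate A/AAAA lookups.
        try:
            for rr in dns.resolver.resolve("cloudflare.com", "HTTPS"):
                print(rr)  # e.g.: 1 . alpn="h3,h2" ipv4hint=... ipv6hint=...
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print("no HTTPS RR; fall back to A/AAAA and TCP")
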
  • raghavankl 13 hours ago
    This is cool