Overview

Every time you hit a URL, your request passes through at least four protocol layers before it reaches the server. Each layer solves exactly one problem and trusts the layers below it to handle the rest. This is the core design philosophy of network protocols: layered abstraction. You don’t think about voltage on copper when you write an HTTP request, and that’s the whole point.

Note

There are two models people reference: OSI (7 layers, mostly a teaching tool) and TCP/IP (4 layers, what actually runs the internet). In practice, the OSI Presentation and Session layers get folded into the Application layer, so the TCP/IP model is what you’ll encounter in real systems.


The TCP/IP Stack

| Layer       | Responsibility                            | Key Protocols                 |
|-------------|-------------------------------------------|-------------------------------|
| Application | End-user data formats and semantics       | HTTP, DNS, SMTP, SSH          |
| Transport   | End-to-end delivery, reliability or speed | TCP, UDP                      |
| Internet    | Addressing and routing across networks    | IP (v4/v6), ICMP              |
| Link        | Hop-to-hop delivery on a single network   | Ethernet, Wi-Fi (802.11), ARP |

The key insight here is that each layer only talks to its peer on the other machine. Your browser’s HTTP layer talks to the server’s HTTP layer, even though the actual bits pass through every layer on both sides. This is what makes the whole thing composable.
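Encapsulation is the mechanical half of this: each layer prepends its own header to whatever the layer above handed it. A toy Python sketch of that wrapping (the header fields here are deliberately simplified placeholders, not the real TCP or IP wire formats):

```python
import struct

def app_layer() -> bytes:
    # Application layer: an HTTP-like request as raw bytes
    return b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: prepend source/destination ports (real TCP also
    # carries sequence numbers, flags, window size, checksum, ...)
    return struct.pack("!HH", src_port, dst_port) + payload

def internet_layer(segment: bytes, ttl: int = 64) -> bytes:
    # Internet layer: prepend a TTL and payload length (real IPv4 also
    # carries source/destination addresses, fragmentation info, ...)
    return struct.pack("!BH", ttl, len(segment)) + segment

# Each call wraps the output of the layer above it
packet = internet_layer(transport_layer(app_layer(), 49152, 443))
print(len(packet))
```

The nesting order is the point: the transport header never inspects the HTTP bytes, and the internet header never inspects the ports. Each layer's peer on the other machine strips exactly the header its counterpart added.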


TCP vs UDP

This is one of those comparisons that comes up constantly, and the tradeoff is straightforward once you see it: TCP gives you reliability at the cost of overhead, UDP gives you speed at the cost of “figure it out yourself.”

| Property                | TCP                                            | UDP                                       |
|-------------------------|------------------------------------------------|-------------------------------------------|
| Connection              | Connection-oriented (3-way handshake)          | Connectionless                            |
| Reliability             | Guaranteed delivery, in-order, retransmissions | Best-effort, no retransmission            |
| Flow/congestion control | Yes (sliding window, AIMD)                     | None built-in                             |
| Overhead                | Higher (20-byte header + state)                | Lower (8-byte header)                     |
| Use cases               | Web, email, file transfer, SSH                 | DNS queries, video streaming, gaming, VoIP |

TCP’s three-way handshake: SYN, SYN-ACK, ACK. After that, data flows as a reliable byte stream. The sender adapts its rate using congestion-control algorithms (Reno, Cubic, BBR).
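Both sides of the tradeoff are visible with stdlib sockets. A minimal loopback sketch: for TCP, the kernel performs the handshake inside `connect()`/`accept()`; for UDP, `sendto()` just fires a datagram with no setup (the ports here are arbitrary local choices):

```python
import socket
import threading

# TCP: SYN / SYN-ACK / ACK happen inside connect()/accept()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()      # completes the three-way handshake
    conn.sendall(conn.recv(1024))  # echo the byte stream back
    conn.close()

threading.Thread(target=serve, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))  # SYN, SYN-ACK, ACK
client.sendall(b"reliable byte stream")
data = client.recv(1024)           # arrives in order, or not at all
client.close()
print(data)

# UDP: no handshake, no delivery guarantee -- just a datagram
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best effort", ("127.0.0.1", 9))  # may vanish silently
udp.close()
```

Note that the UDP send "succeeds" even if nothing is listening; the application only finds out a datagram was lost if it builds its own acknowledgment scheme on top.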

Tip

A quick way to remember which to use: if losing a packet would break your application (file transfer, database queries), use TCP. If a dropped packet just means a slightly choppy frame (video call, game state), UDP is probably fine.

UDP shows up in DNS because queries are small, latency matters, and if you don’t get a response you just ask again. No need for a full connection setup for a single question-and-answer exchange.


DNS Resolution

DNS translates human-readable names (example.com) into IP addresses (93.184.216.34). The resolution process is hierarchical, which is something I didn’t fully appreciate until I traced through it:

  1. Client asks its recursive resolver (usually ISP or 8.8.8.8).
  2. Resolver queries a root nameserver (.), which points to the TLD nameserver (.com).
  3. TLD nameserver points to the authoritative nameserver for example.com.
  4. Authoritative server returns the A/AAAA record.
  5. Results are cached at each hop (TTL-bounded).
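The query traveling through those steps is a small binary message. A sketch of the RFC 1035 wire format a resolver sends, built by hand (the transaction ID 0x1234 is an arbitrary choice for illustration):

```python
import struct

def build_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    # Header: id, flags (0x0100 sets RD, "recursion desired"),
    # 1 question, 0 answer/authority/additional records
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN, internet)
    question = qname + struct.pack("!HH", qtype, 1)
    return header + question

query = build_query("example.com")
print(len(query), query.hex())
```

Sending these bytes in a single UDP datagram to port 53 of a resolver is the entirety of step 1, which is exactly why UDP fits: one small packet out, one small packet back.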

Warning

DNS caching means changes don’t propagate instantly. If you update a DNS record, old values can persist until the TTL expires at every cache in the chain. This catches people off guard during migrations.


HTTP

HTTP is a request-response protocol that originally ran over TCP and now increasingly runs over QUIC (HTTP/3). A minimal HTTP/1.1 exchange:

GET /index.html HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1256

<!DOCTYPE html>...
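The same exchange can be reproduced locally with Python's stdlib, with `http.server` standing in for the remote host (the body is a placeholder matching the truncated HTML above):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<!DOCTYPE html>..."
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
resp = conn.getresponse()
print(resp.status, resp.getheader("Content-Type"))
body = resp.read()
print(body)
conn.close()
server.shutdown()
```

Under the hood, `http.client` is writing exactly the request lines shown above onto a TCP socket and parsing the status line and headers off the reply.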

The evolution of HTTP tells you a lot about what bottlenecks mattered at each stage:

  • HTTP/1.1 added persistent connections and chunked transfer (stop opening a new TCP connection for every image on a page).
  • HTTP/2 introduced binary framing, multiplexed streams, and header compression (stop waiting for one resource before requesting the next).
  • HTTP/3 moved to QUIC (UDP-based, built-in TLS) to eliminate head-of-line blocking at the transport layer. TCP’s guarantee that bytes arrive in order actually hurts when you’re multiplexing independent streams, because one lost packet stalls everything behind it.

Putting It Together: What Happens When You Type a URL

This is the classic interview question, and walking through it connects all the layers:

  1. DNS: resolve example.com to an IP address (UDP port 53).
  2. TCP: open a connection to that IP on port 443 (three-way handshake).
  3. TLS: negotiate encryption (certificate exchange, key agreement).
  4. HTTP: send GET / over the encrypted channel.
  5. Response: server returns HTML; browser parses, discovers linked CSS/JS/images, and repeats steps 1-4 for each (often reusing the TCP connection).

At the IP layer, routers forward packets hop by hop using routing tables. At the link layer, each hop uses ARP (or NDP for IPv6) to map IP addresses to MAC addresses for local delivery.
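The five steps map almost line-for-line onto stdlib calls. A sketch, assuming network access and using example.com from the walkthrough (steps handled by the kernel, like IP routing and ARP, happen invisibly beneath these calls):

```python
import socket
import ssl

host = "example.com"

# 1. DNS: the stub resolver asks the recursive resolver (UDP port 53)
ip = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][0]

# 2. TCP: connect() performs the three-way handshake
raw = socket.create_connection((ip, 443), timeout=10)

# 3. TLS: certificate verification and key agreement
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(raw, server_hostname=host)  # SNI + hostname check

# 4. HTTP: send GET / over the encrypted channel
request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
tls.sendall(request.encode())

# 5. Response: read until the server closes the connection
response = b""
while chunk := tls.recv(4096):
    response += chunk
tls.close()
print(response.split(b"\r\n", 1)[0])  # status line, e.g. HTTP/1.1 200 OK
```

In practice a browser keeps `tls` open and pipelines further requests over it rather than repeating steps 2-3 per resource, which is the connection reuse discussed below.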

Note

The browser reusing TCP connections is a huge performance win. Without connection reuse, every resource on a page (and modern pages load dozens) would require a fresh three-way handshake plus TLS negotiation. HTTP/2 takes this further by multiplexing multiple requests over a single connection.