Why doesn't every computer have a 256-char domain name, along with a private key to prove it is the sole owner of that address?
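To make the premise concrete, here's a rough sketch of what I'm imagining (Python, using the third-party `cryptography` package): the address is just a hash of a public key, and ownership is proven by signing a challenge. The scheme is made up for illustration, though Tor's .onion addresses work roughly this way.

```python
# Hypothetical self-certifying address: the address is derived from a
# public key, so it can't (practically) collide and can't be claimed
# without the matching private key. Requires: pip install cryptography
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device generates a keypair once, say at manufacture time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The "address" is a hash of the public key.
pub_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
address = hashlib.sha256(pub_bytes).hexdigest()  # 64 hex chars

# Anyone can challenge the device to prove it owns the address.
challenge = b"nonce-12345"
signature = private_key.sign(challenge)
public_key.verify(signature, challenge)  # raises InvalidSignature if forged
print("address:", address)
```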
Edits: For those technically inclined: stuff like DHCP seems unnecessary if every device has a serial-number-based address that's known not to collide. It seems far simpler and faster than leasing dynamic addresses. On top of that, with VoIP I can get phone calls even without cell service, even behind a NAT (my rough mental model of how that works is sketched after the list below). Why is the network designed in such a way that that is possible, but I can't buy a static address that persists across network endpoint changes (e.g. a laptop connecting to a new, unconfigured Wi-Fi network), such that I can initiate a connection to my laptop while it is behind a NAT?
- Yes, it would be a privacy nightmare; I want to know why it didn't turn out that way
- When I say phone number, I mean including area/country code
- AFAIK IP addresses (even static public ones) are not equivalent to phone numbers. I don't get a new phone number every time I connect to a new cell tower. Even if a static IP is assigned to a device, my understanding is that connecting the device to a new, uncontrolled Wi-Fi network, especially behind a router with a NAT, means that people who try to connect to the static IP will simply fail.
- No, MAC addresses are not equivalent to phone numbers. 1. Phone numbers have one unique owner; MAC addresses can have many owners, because on most laptops they can be changed at any time, to anything. 2. A message can't be sent directly to a MAC address the way it can to a phone number
- Yes, IMEI is unique, but my laptop doesn't have one, and even if it did, it's not the same as an eSIM or SIM card. We can send a message to an activated SIM; we can't send a message to an IMEI or serial number
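For what it's worth, my rough mental model of how VoIP reaches me behind a NAT is something like the sketch below: the client dials *out* to a rendezvous server and keeps that connection alive, so "incoming" calls ride over an outbound flow the NAT already permits. The hostname and line protocol here are made-up stand-ins for a real SIP/push service:

```python
# Toy rendezvous client: register outbound, then wait for call invitations
# over the same socket. No inbound port is ever opened on the NAT.
import socket

RENDEZVOUS = ("rendezvous.example.net", 5060)  # hypothetical server

def register_and_wait():
    with socket.create_connection(RENDEZVOUS) as sock:
        # Outbound registration: the NAT creates a mapping for this flow,
        # and the server learns our public (NAT'd) address from it.
        sock.sendall(b"REGISTER alice\n")
        sock.settimeout(25)
        while True:
            try:
                data = sock.recv(4096)  # "incoming" calls arrive here
                if not data:
                    break
                print("incoming:", data.decode())
            except socket.timeout:
                # Periodic keepalive so the NAT doesn't expire the mapping.
                sock.sendall(b"KEEPALIVE\n")

# register_and_wait()  # would block, dialing out to the (fake) server
```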
Well, endpoints then were largely mainframe-type systems, long before PCs existed, let alone network-capable PCs and HTTP. So it was a different idea than what we have today.
Before the internet, you could connect two physically disparate systems using point-to-point, permanently switched connections (so a link always consumed a potential connection even when no data was being transmitted). If you had Point A connected to Point B, you needed a third connection to communicate with Point C. The idea was: if B already had a connection to C, why not share that bandwidth/connection so A only needed one connection? And then apply a data-switching concept (e.g. packet switching) instead of circuit switching.
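A toy illustration of that shift (my own sketch, not any real protocol): each packet carries a destination header, so one shared link can interleave many conversations instead of reserving a dedicated circuit per pair of endpoints.

```python
# Packet switching in miniature: messages are chopped into addressed
# packets, and a shared A<->B link carries traffic for B and for C alike.

def packetize(src, dst, message, size=4):
    """Split a message into (src, dst, seq, chunk) packets."""
    return [(src, dst, i, message[i:i + size])
            for i in range(0, len(message), size)]

# Two conversations share the same A<->B link; B forwards toward C.
link = packetize("A", "B", "hello B!") + packetize("A", "C", "hello C!")

# The header (not a dedicated wire) says where each packet is going.
for src, dst, seq, chunk in link:
    route = "deliver locally" if dst == "B" else "forward via B's link to C"
    print(f"{src}->{dst} seq={seq} {chunk!r}: {route}")
```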
We were still using P-to-P connections in the late '90s because internet capabilities weren't quite up to what some systems needed for latency, timing, and bandwidth.
At first, just getting two endpoint mainframes connected was a big deal, and individual user devices weren't much of a thought yet. Most stuff was still mainframe-based, so having those connected was sufficient for user communication/data sharing anyway. User connectivity wasn't the main concern; moving data from one system to another was. Say an entity has two locations and needs to sync the systems in those two locations. You either use a circuit-switched P-to-P link, with downtime for users while the sync is happening, or send physical tapes (magnetic or even punched paper tape) cross-country to move the data, with that data being out of sync in the meantime and requiring manual updates to re-sync.
Routing was necessary primarily for backbone transit, and secondarily for organizations with multiple systems, hence the classful IP approach.
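For the curious, a quick sketch of how classful addressing carved up the space: the leading bits of the first octet fixed the network/host split, long before CIDR.

```python
# Classful IPv4: the first octet alone determines the class.

def ipv4_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first < 128:
        return "A (8-bit network, 24-bit host)"
    if first < 192:
        return "B (16-bit network, 16-bit host)"
    if first < 224:
        return "C (24-bit network, 8-bit host)"
    if first < 240:
        return "D (multicast)"
    return "E (reserved)"

for ip in ["10.1.2.3", "172.16.9.9", "192.168.0.1", "224.0.0.1"]:
    print(ip, "-> class", ipv4_class(ip))
```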
DHCP is a local network requirement - ask any admin about hand-managed static IP addresses - that's a nightmare. I don't even like it at home with a handful of devices.
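A toy lease pool shows the core idea - the server hands out and reclaims addresses by itself, so no human tracks which device has which IP (this skips the real DHCP DISCOVER/OFFER/REQUEST/ACK exchange):

```python
# Minimal lease pool, illustration only - not a real DHCP server.
import time

class LeasePool:
    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)
        self.leases = {}  # mac -> (ip, expiry)
        self.lease_seconds = lease_seconds

    def request(self, mac):
        now = time.time()
        # Reclaim expired leases before allocating a fresh one.
        for m, (ip, expiry) in list(self.leases.items()):
            if expiry < now:
                del self.leases[m]
                self.free.append(ip)
        if mac in self.leases:           # renewal: keep the same address
            ip, _ = self.leases[mac]
        else:
            ip = self.free.pop(0)        # raises IndexError if exhausted
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip

pool = LeasePool([f"192.168.1.{n}" for n in range(100, 110)])
print(pool.request("aa:bb:cc:dd:ee:ff"))  # 192.168.1.100
```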
NAT is a result of the limited IP address space, not DHCP - there simply aren't enough addresses in 32 bits for every local device to have a public IP (nor would you want this), plus it lets multiple services sit behind one router using local addressing. Even with static local addresses, you'd still need NAT.
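A minimal sketch of the translation a NAT router does - many private addresses share one public one via a mapping table, which is also exactly why an unsolicited inbound connection to the OP's laptop gets dropped (addresses here are documentation/example values):

```python
# Toy NAT table keyed by (private ip, private port).

PUBLIC_IP = "203.0.113.7"  # stand-in for the router's WAN address

table = {}        # (private_ip, private_port) -> public_port
next_port = 40000

def outbound(private_ip, private_port):
    """Rewrite an outgoing flow to the shared public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in table:
        table[key] = next_port
        next_port += 1
    return PUBLIC_IP, table[key]

def inbound(public_port):
    """Only flows already in the table can come back in - which is why
    an unsolicited connection to a laptop behind NAT simply fails."""
    for key, port in table.items():
        if port == public_port:
            return key
    return None  # dropped: no mapping

print(outbound("192.168.1.5", 51000))  # ('203.0.113.7', 40000)
print(inbound(40000))                  # ('192.168.1.5', 51000)
print(inbound(40001))                  # None - unsolicited, dropped
```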
Also, look at the time: if you had a LAN in the late '80s, it was something like Banyan VINES or NetWare IPX (neither of which was routable over the internet), for local comms between local systems. Any internet/external network requirements were (again) for moving data between disparate locations. The idea that a workstation needed specific internet/non-local access to (what?) really didn't make sense. It would communicate with a local data source (a mainframe/IBM System/360, etc.), and that system would manage retrieving or syncing data from elsewhere. A workstation was largely a dumb terminal before PCs (other than actual "workstations," which are a different animal).