Google Search to redirect its country level TLDs to Google.com (searchengineland.com)
snowwrestler 81 days ago [-]
I wonder if this is related to the first-party cookie security model. That is supposedly why Google switched Maps from maps.google.com to www.google.com/maps. Running everything off a single subdomain of a single root domain should allow better pooling of data.
abxyz 81 days ago [-]
Subdomains were chosen historically because that was the sane way to run different infrastructure for each service. Nowadays, with the globally distributed frontends that Google Cloud offers, path routing and subdomain routing are mostly equivalent. Subdomains are archaic: they expose the architecture (separate services) to users out of necessity. I don't think cookies were the motivation, but it's probably a nice benefit.

https://cloud.google.com/load-balancing/docs/url-map-concept...
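As a concrete (entirely made-up) illustration, a URL map of the kind that doc describes routes both hosts and paths in one place; the host and backend service names below are placeholders, not Google's real configuration:

    # Sketch of a GCP URL map resource: one host, two backend services,
    # selected by path. All names are hypothetical.
    name: www-lb-url-map
    defaultService: global/backendServices/web-search
    hostRules:
      - hosts: ["www.google.com"]
        pathMatcher: by-path
    pathMatchers:
      - name: by-path
        defaultService: global/backendServices/web-search
        pathRules:
          - paths: ["/maps", "/maps/*"]
            service: global/backendServices/maps-frontend

Host-based rules live in the same structure (hostRules), which is the sense in which subdomain routing and path routing are interchangeable at this layer.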

ndriscoll 81 days ago [-]
L4 vs. L7 load balancing and the ability to send clients to specific addresses seem like major differences to me; I'm not seeing how subdomains are "archaic". It's also obviously desirable on the user side, in a world where everything uses TLS, to allow better network management without needing to install CAs on every device to MITM everything (e.g. blocking services like YouTube and ads on a child's device while still allowing Maps; or, if Reddit used subdomains, allowing minecraft.reddit.com without allowing everything else).
abxyz 81 days ago [-]
Subdomains are archaic in the context of high availability. Fifteen years ago, it was impractical to expect a single system (google.com) to reliably handle hundreds of millions of requests per second, so distributing services across subdomains was important because it distributed the traffic.

Today, hundreds of millions of requests per second can be handled by L4 systems like Google Cloud and Cloudflare, and traffic to a subdomain is almost certainly being routed through the same infrastructure anyway, so there is no benefit to using a subdomain. That's why I describe subdomains as archaic in the context of building a highly available system like Google's.

If you're Google in 2010, maps.google.com is a necessity. If you're Google in 2025, maps.google.com is a choice. Subdomains are great for many reasons but are no longer a necessity for high availability.

ndriscoll 81 days ago [-]
It has nothing to do with high availability. It's useful to separate out different traffic patterns. For example, if you're serving gazillions of small requests over short-lived connections, you want a hardware-accelerated device for TLS termination. You'd be wasting that device by also using it for something like video (large data transfers on a long-lived connection). An L7 load balancer (i.e. one that's looking at paths) needs to terminate TLS to make its decision.
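To make that concrete, here is a rough nginx-flavoured sketch (hostnames, ports, certificates and backend addresses are all placeholders, not anyone's real config). The SNI-based server forwards encrypted traffic untouched, while the path-based server has to hold the certificate and terminate TLS before it can even see /maps:

    # Minimal, self-contained sketch; the two servers listen on different
    # ports only so the file loads as a single config.
    events {}

    # L4-style: route on the SNI in the TLS ClientHello, no decryption.
    stream {
        upstream maps_pool  { server 10.0.0.10:443; }
        upstream video_pool { server 10.0.0.20:443; }

        map $ssl_preread_server_name $backend {
            maps.example.com   maps_pool;
            video.example.com  video_pool;
            default            maps_pool;
        }

        server {
            listen 443;
            ssl_preread on;        # read the ClientHello without terminating TLS
            proxy_pass $backend;
        }
    }

    # L7-style: route on the URL path, which requires terminating TLS here.
    http {
        upstream maps_http { server 10.0.1.10:8080; }
        upstream web_http  { server 10.0.1.20:8080; }

        server {
            listen 8443 ssl;
            ssl_certificate     /etc/nginx/placeholder.pem;
            ssl_certificate_key /etc/nginx/placeholder.key;

            location /maps/ { proxy_pass http://maps_http; }
            location /      { proxy_pass http://web_http; }
        }
    }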
abxyz 81 days ago [-]
You're making a different point. Of course there are use cases for subdomains; I'm talking specifically about the transition from maps.google.com to google.com/maps. google.com/maps always made sense but wasn't technically viable when Google Maps launched, and that's why they've transitioned to it now. I'm arguing that Google Maps being on a subdomain was an infrastructure choice, not a product choice.
msm_ 81 days ago [-]
I'm not trying to be argumentative, but by saying:

>Subdomains are archaic

you presented a somewhat different argument. Also, I disagree: maps.google.com is a fundamentally different service, so why should it share a domain with google.com? The only reason it's not googlemaps.com is that being a subdomain of google.com implies trust.

But I guess it's pretty subjective. Personally, I always try to separate services by domain because it makes sense to me, but maybe if the internet had gone down a different path I would swear path routing makes sense.

noinsight 81 days ago [-]
> allow better network management

Yeah, this change would definitely break that.

DNS-based (hostname) allowlisting is just starting to hit the market (see Microsoft's "Zero Trust DNS" [1]), and this would kill it. Even traditional proxy-based access control, whose nice property is that it works without TLS interception, is neutered by this.

If you're left with only path-based rules, you're back to TLS interception if you want to control network access.

[1] https://techcommunity.microsoft.com/blog/networkingblog/anno...
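For a sense of what hostname-level allowlisting looks like, here is a minimal dnsmasq-style sketch (the upstream resolver address is a placeholder, and this is not what ZTDNS does internally). Once Maps moves under google.com/maps, a rule like this can no longer separate it from the rest of Google:

    # Answer 0.0.0.0 for every name by default (effectively blocking it)...
    address=/#/0.0.0.0
    # ...except hosts explicitly allowed, forwarded to an upstream resolver.
    server=/maps.google.com/9.9.9.9
    # With path-based routing there is only one hostname left to allow or
    # deny, so this layer can no longer distinguish Maps from Search.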

snowwrestler 81 days ago [-]
Yes, it's easy to route paths; I've been using Fastly to do it for years.

But the vast majority of users don't care about URL structure. If a company goes through the effort of changing it, it's because the company expects to benefit somehow.

gardenhedge 81 days ago [-]
Subdomains can be on the same architecture
westurner 80 days ago [-]
Has anything changed about the risks of running everything with the same key, on the apex domain?

Why doesn't Google have DNSSEC?

tptacek 80 days ago [-]
To a first approximation, nobody has DNSSEC. It's not very good.
westurner 77 days ago [-]
DNSSEC is necessary in the way GPG signatures are necessary; though there are also DoH/DoT/DoQ and HTTPS.

Google doesn't have DNSSEC because they've chosen not to implement it, FWIU.

/? DNSSEC deployment statistics: https://www.google.com/search?q=dnssec+deployment+statistics...

If not DNSSEC, then they should push another standard for signing DNS records, so that records are signed at rest and encrypted in motion.

Do DS records or multiple TLDs and x.509 certs prevent load balancing?

Were there multiple keys for a reason?

tptacek 77 days ago [-]
So, not remotely necessary at all? Neither DNSSEC nor GPG has any meaningful penetration in any problem domain. GPG is used for package signing in some language ecosystems, and, notoriously, those signatures are all busted (Python is the best example). They're both examples of failed 1990s cryptosystems.
dfc 77 days ago [-]
Do you think the way Debian uses gpg signatures for package verification is also broken?
westurner 77 days ago [-]
Red Hat too.

Containers, pip, and conda packages have TUF, and now there are sigstore.dev and SLSA.dev. W3C Verifiable Credentials is the open web standard JSON-LD RDF spec for signatures/attestations.

IDK how many reinventions of GPG there are.

Do all of these systems differ only in key distribution and key authorization, ceteris paribus?

jsheard 81 days ago [-]
Google owns the .google TLD; could they theoretically use https://google, or is that not allowed?
equinoxnemesis 81 days ago [-]
Not allowed for gTLDs. Some ccTLDs do it; http://ai/ resolved as recently as a year ago, though I can't get it to resolve right now.
sph 81 days ago [-]
You need a dot at the end for it to resolve correctly

https://ai.

It’s unreachable anyway

weird-eye-issue 81 days ago [-]
That is cursed
sph 81 days ago [-]
It is not; the trailing dot is for your local resolver, so it can tell an absolute name from one that should be completed with a search domain (i.e. `foo` gets rewritten to `foo.mydomain.com` or `foo.local`).

man resolv.conf, read up on search domains and the ndots option
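For example, with a resolv.conf along these lines (the search domain is a placeholder), a bare `ai` gets the search domain appended first, while `ai.` is treated as already fully qualified:

    # /etc/resolv.conf
    nameserver 192.168.1.1
    search corp.example.com
    options ndots:1

    # "ai" has fewer dots than ndots, so the resolver tries
    # ai.corp.example.com before the bare TLD;
    # "ai." is absolute and is looked up as the TLD itself.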

Natfan 81 days ago [-]
You're currently browsing `news.ycombinator.com.`[0].

[0]: https://jvns.ca/blog/2022/09/12/why-do-domain-names-end-with...

banana_giraffe 81 days ago [-]
https://xn--l1acc./ and https://uz./ connect, though there's a cert issue in both cases.
jsheard 81 days ago [-]
Going by the Wayback Machine it looks like it used to redirect to http://www.ai, which still works, but only over HTTP.
Thorrez 81 days ago [-]
Well of course http://www.ai would work. That's no different from http://foo.ai .

http://uz./ serves a 500 error.

redserk 81 days ago [-]
Not sure if that's allowed, but if it were, it would sure feel like a throwback to AOL keywords, just at the DNS level.
phillipseamore 81 days ago [-]
This has been working the other way up until now, right?

At Google scale, redirecting requests to ccTLD versions uses up plenty of resources and bandwidth:

1. GET request to .com (like from URL-bar searches)

2. GeoIP lookup or cookie check

3. Redirect to the ccTLD

4. Much of this is then repeated on the ccTLD.

This change should decrease latency for users (no redirect, no extra DNS lookups, no extra TLS handshake) and enhance caching of resources.
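A rough sketch of that old flow, for illustration (the country table and the lookup are placeholders, not Google's actual logic):

    # What the pre-change redirect roughly amounted to: a request to .com,
    # a country decision, then a 302 that costs the client another DNS
    # lookup and TLS handshake against the ccTLD host.
    from urllib.parse import urlsplit, urlunsplit

    CCTLD = {"DE": "www.google.de", "JP": "www.google.co.jp", "BR": "www.google.com.br"}

    def old_flow(url: str, client_country: str) -> str:
        """Return the URL the client ultimately lands on under the old behaviour."""
        parts = urlsplit(url)
        target_host = CCTLD.get(client_country)
        if parts.hostname == "www.google.com" and target_host:
            # The redirect hop that the change removes.
            return urlunsplit(("https", target_host, parts.path, parts.query, ""))
        # New behaviour: every request is simply served from google.com.
        return url

    print(old_flow("https://www.google.com/search?q=test", "DE"))
    # -> https://www.google.de/search?q=test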

spiderfarmer 81 days ago [-]
This is so they can use the same tracking cookies across all their products.
franze 81 days ago [-]
Seems like a pretty big SPOF
beardyw 81 days ago [-]
I think they probably know what they are doing.