• ExoticPearTree 6 hours ago

    Oversubscription is the norm for residential customers. And it makes sense, because you don’t download content from the internet at max speed 24/7, and you don’t upload either. The usage tends to be: access an article/picture, then read/watch, rinse and repeat. So for those few seconds/minutes you don’t use the internet connection at all.

    When people watch a movie on Prime/Hulu/Netflix, sure, you have constant usage, but 4K streaming is less than 20 Mbps…

    Most people never use the full amount of even the lowest guaranteed available bandwidth per customer.
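
    A rough sketch of that duty-cycle argument in numbers (the per-customer rate, busy fraction, and split size below are made-up assumptions, not measurements):

        # Illustrative only: assumed peak rate and duty cycle per subscriber
        peak_rate_mbps = 20      # roughly one 4K stream
        duty_cycle = 0.05        # fraction of time the line is actually busy
        customers = 32           # subscribers sharing one segment

        avg_per_customer = peak_rate_mbps * duty_cycle      # ~1 Mbps
        expected_load = avg_per_customer * customers         # ~32 Mbps on the shared link
        print(f"{avg_per_customer:.1f} Mbps avg per customer, ~{expected_load:.0f} Mbps expected on the shared segment")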

    • paulryanrogers 3 hours ago

      Game downloads can be tens of GiB per day, and are only growing. Multiple 4K streams can add up. I wouldn't be surprised if folks are saturating links during prime time hours.

      • qmarchi 3 hours ago

        That is the risk of oversubscription. However, most game services tend to schedule their downloads in the middle of the night, and video streaming is very "bursty": the client downloads chunks of data every few seconds.

        Also, with people/services moving to AV1 encoding, we're already seeing improvements in bandwidth efficiency.

        Disclaimer: Former YouTube Serving Engineer

        • Fabricio20 3 hours ago

          > However, most game services tend to schedule their downloads in the middle of the night

          Except that usually doesn't matter because almost all game updates are released slightly before prime time (or on prime time) so players can enjoy it when they get home! (I guess this only applies to multiplayer games, but that's a MAJOR part of the industry nowadays).

          • martin_bech 2 hours ago

            I would guess about 10% of an ISP's customers are gamers. Maybe lower. Again, game updates are not that big compared to your connection, so most of the day your connection is idle.

        • jsnell 2 hours ago

          > Multiple 4K streams can add up.

          Not really at these scales. Back of the envelope calculation:

          A Netflix 4k video stream is 15 Mbps.

          The article states the most common Swiss deployment is 10Gbps shared between 32 customers. 10Gbps is about 600 of those Netflix 4k streams, 20x more than the number of customers sharing that bandwidth.
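
          A quick sanity check of that back-of-the-envelope, using only the figures already quoted above:

              uplink_gbps = 10          # shared XGS-PON capacity from the article
              customers = 32            # split per the article
              stream_mbps = 15          # Netflix 4K bitrate quoted above

              streams = uplink_gbps * 1000 / stream_mbps    # ~666 concurrent 4K streams
              print(f"~{streams:.0f} streams fit, ~{streams / customers:.0f} per subscriber on the split")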

          • martin_bech 2 hours ago

            Yeah, they are not. Even tens of GiB per day is not a lot compared to your connection's 1000 Mb/s. A Netflix 4K stream is not the norm, and it's still only about 20 Mb/s, so you would need to be running 50 4K streams, and even then, chances are your neighbours are out of the house or flipping through Instagram, so the bandwidth is available.

            Btw, Netflix did colocate with ISPs at one point; they might still do.

            • HumblyTossed 3 hours ago

              Do most people game? Or actually stream 4K?

              • paulryanrogers 3 hours ago

                If you look at households, I'd say a significant minority do both.

          • tomaskafka 5 hours ago

            I am more concerned with underutilization. I am specifically looking at Disney Plus delivering barely 720p to browsers no matter the available bandwidth (while easily streaming HDR 4K to an Apple TV on the same network).

            • ronsor 5 hours ago

              That's DRM for you. If you pay, you get a worse experience.

              • codetrotter 5 hours ago

                And if your girlfriend dies because a restaurant at Disney World served her food with allergens, they’re gonna claim that because you paid for Disney Plus you can’t sue them for killing your girlfriend.

                https://wdwnt.com/2024/08/disney-dismissal-wrongful-death-la...

                Fuck paying for Disney Plus.

                • dmoy 5 hours ago

                  Don't worry, as long as there's a news cycle they'll drop their motion to dismiss

                  https://apnews.com/article/disney-allergy-death-lawsuit-b66c...

                  • cute_boi 3 hours ago

                    It is still unfair tbh.

                    • dmoy an hour ago

                      Oh yea for sure, agreed. My reply was meant to be in cynical agreement, because they shouldn't be able to bring an insane motion like that in the first place

                  • sva_ 3 hours ago

                    It was a free trial account

                • benwaffle 3 hours ago

                  That can also be affected by the supported DRM level. Try Safari or Edge.

                  • throawayonthe 3 hours ago

                    [dead]

                  • jagrsw 7 hours ago

                    init7's post about overbooking is somewhat misleading. As their customer, I think it's more about how overbooking hurts them as a provider, not just us end-users.

                    I used to have a 10Gbps connection through Swisscom (Switzerland's de-facto main fiber provider) using XGS-PON. My internet service was from a third-party ISP, but the underlying infrastructure was Swisscom's. My speed tests consistently showed around 8Gbps up and down (10G minus XGS-PON overhead)

                    Init7's issue is that XGS-PON requires specific equipment and network access from Swisscom. Swisscom charges them for this, and it can also create other limitations (I remember problems with IPv6). With P2P fiber, init7 can connect directly to their own equipment in a local hub, giving them more control.

                     For most users, the occasional latency spikes or brief speed drops from 8Gbps to 1Gbps aren't a major concern - at least I never had serious problems with FPS games, where latency matters.

                    • Havoc 8 hours ago

                       I wish there were more of a concerted effort to utilize the idle upstream in residential connections. Something like BitTorrent.

                       The average consumer fibre install in a building needs to cater for weekend mass Netflix watching. There is no equivalent load on the upstream. It's just sitting idle.

                       Think we could make a decent dent in the number of datacenters needed if that were solved.

                      • jeanlucas 8 hours ago

                         As a civilization we really underutilize distributed models for solving problems.

                        • faggotbreath 7 hours ago

                          Respectfully, nah. I don’t want to troubleshoot compute nodes in varying states of abuse. I’d rather deal with 10,000 of them crammed into a dc for consistency purposes.

                          • Havoc 6 hours ago

                            Things like BitTorrent are quite resilient and cope well with unreliable nodes.

                            Shouldn’t be that hard to design something that is primarily distributed and falls back on straight download.

                             Think of something like Hugging Face serving petabytes of files that are huge and don't really change.

                             Compute is probably a bit harder to reliably stick onto residential connections.

                          • immibis 2 hours ago

                             In China they have something called PeerCDN, which does this and pays people money to use their upload. ISPs disconnect subscribers who are caught using it. To avoid suspicion from the up/down ratio checks some ISPs use, many PeerCDN subscribers also download lots of dummy data from random internet servers, increasing network load substantially.

                            • Intralexical 4 hours ago

                              There's Peer5, which got acquired by Microsoft and got rebranded "ECDN":

                              https://peer5.com/

                              https://news.ycombinator.com/item?id=13501581 Launch HN: Peer5 (YC W17) – Serverless CDN

                              https://www.ycombinator.com/companies/peer5 P2P delivery network that enables high quality video streaming

                              Also PeerCDN apparently, which got acquired by Yahoo?

                              https://web.archive.org/web/20150810065820/https://peercdn.c...

                              If you run a Web search for some combination of "P2P", "Peer", and "WebRTC", combined with "CDN", you can also find a smattering of other (vaguely spammish) company blogs and (small, personal) GitHub libraries that seem to talk about something similar:

                              https://www.w3.org/wiki/Networks/P2P_CDN

                              https://github.com/AgustinSRG/webrtc-cdn

                              https://mediafoundation.medium.com/the-rise-of-peer-assisted...

                              https://github.com/vardius/peer-cdn

                              https://www.cachefly.com/news/deciphering-p2p-the-basics-and...

                              https://castr.com/blog/ecdn-or-p2p-cdn/

                              https://github.com/Peer-to-Peer-CDN/P2P-CDN

                              https://blog.blazingcdn.com/en-us/next-generation-content-de...

                              So there appears to actually have been quite a bit of "concerted effort" in this direction, both open-source and with company funding. Either it's kept fizzling out for some reason, or became standard and is now taken for granted.

                              IMO the most prominent project in this direction is PeerTube, in that it's using idle upstream bandwidth as a big part of its pitch (It's in the name!):

                              https://en.wikipedia.org/wiki/WebTorrent

                              https://en.wikipedia.org/wiki/PeerTube

                              https://news.ycombinator.com/item?id=38443855#38444796 (V6 dropped WebTorrent but still does "P2P via WebRTC".)

                              But there are some privacy and security issues with this idea too. What happens if your website uses up all of a customer's data plan by uploading to peers when they leave it open in the background? And do you really want every webpage you visit to be NAT-hole-punching, announcing your presence, and uploading who-knows-what data to random strangers?

                              A surveillance op could construct a list of everybody who visits a given page by simply refreshing the page every few seconds and logging the peers it connects to. And a vulnerability anywhere in the WebRTC stack, from router to browser, could let you get `pwn`ed by a random peer even if the website you visit is clean.

                              • A4ET8a8uTh0 6 hours ago

                                 I am personally of two minds about it, because while I might be partial to the example mentioned, I can immediately tell how it would go in real life. It would be just like with the Simpsons[1].

                                "Oh, please. For a nickel a person tax increase... we could build a theater for shadow puppets. Balinese or Thai? Why not both? Then everybody's happy."

                                In other words, no one will be happy. Let individuals keep making individual decisions.

                              • martin_bech 2 hours ago

                                 I worked at a large ISP and telco in my youth, and of course this is how it's done; it's been this way forever. It's also basically how phone networks were and are built: you know that everyone is not going to call everyone at the same exact time, so you use an Erlang B calculation to figure out what you actually need.

                                 For reference, the ISP I worked for, which was pretty damn big, started out with 40 modems, and that lasted a looong time.

                                 Btw, the biggest issue with traffic when I worked there was the advent of P2P sharing; that required massive upgrades.
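
                                 For the curious, here's a minimal sketch of the Erlang B blocking formula in its recursive form (the traffic figure in the example is made up, not from the ISP above):

                                     def erlang_b(offered_erlangs: float, channels: int) -> float:
                                         """Probability that a new call is blocked on `channels` lines."""
                                         b = 1.0
                                         for k in range(1, channels + 1):
                                             b = offered_erlangs * b / (k + offered_erlangs * b)
                                         return b

                                     # Example: 30 Erlangs of offered traffic on 40 modems
                                     print(f"{erlang_b(30.0, 40):.3f}")   # ~1-2% of call attempts blocked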

                                • PaulRobinson 4 hours ago

                                  Overbooking used to be called the "contention ratio" back in my day when I was working with Nortel CVX 1800 modem racks (and before that too with older, crappier, less dense racks running off E1s or even ISDN).

                                  A full rack of 1000 modems at 56kbps means 7Mbps bandwidth needed to feed the box. We had ~20 racks, so we'd need (in theory), 140Mbps up to the internet to service them all at a 1:1 contention ratio. In truth, we had ~35Mbps - a very generous 4:1 contention ratio.

                                  Why does it work? Well, when you're browsing the web, you load a page (and use bandwidth), and then you read it and don't use bandwidth.

                                  For every second of download, you probably have 30+ seconds of idle bandwidth while you're reading those emails or web pages or whatever - and yes, one manager was pushing us to get to 30:1 contention, which was the industry norm at the time - so this actually works fine. One colleague who had worked at a major competitor told us they regularly pushed to 100:1 and no customer seemed to care. We actually benchmarked them (and a bunch of other ISPs), and yeah, it was a little slow at peak, but actually not awful most of the time.
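
                                  A quick sketch of the contention-ratio arithmetic (all figures below are made up purely for illustration):

                                      subscribers = 1000
                                      access_rate_mbps = 10     # what each customer's line can do
                                      backhaul_mbps = 330       # shared uplink actually provisioned

                                      contention = subscribers * access_rate_mbps / backhaul_mbps
                                      print(f"~{contention:.0f}:1 contention")   # ~30:1, the industry norm mentioned above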

                                  The problems started to come when people wanted to max out their connections more because of an evolution of applications, technology and expectations (in short, DSL, the rise of P2P services, and people "being online" more was a triple whammy), and so more customers wanted to use every bit of bandwidth every second, all day, every day. In the UK, even the BBC was being asked to do something because the launch of iPlayer absolutely crucified a lot of ISPs.

                                  I was out of the game by then, but approaches seemed to include citing breach of acceptable use policies (very unpopular), blocking certain services (one of the reasons why the major ISPs block TPB in the UK), or - in at least one case I heard about - traffic shaping those "greedy" customers until they went away. If you throttle someone who is paying for 10Mbps down to 1Mbps, because they've been taking all 10Mbps 24x7 for the last three months, they'll get frustrated and leave your service with the only real cost being a bad online review you can blame on backhaul network/bad copper/bad modem, and which is drowned out by other "normal" users claiming you're a great ISP. And yes, I do know of at least one ISP who did this regularly - a former staffer told me a few years after he left.

                                  • crote 3 hours ago

                                    I've always known that the internet is oversubscribed (it's the only way to do it, really), but I'm still surprised just how far they have gone.

                                    The 1:32 split of XGS-PON is already bad enough - especially with providers selling 8Gbit connections - but a 32Gbit uplink for 4000 customers?! If GunBlastGame 5000 releases an update, five kids from the same school all starting the download when they get home are already enough to saturate the entire neighborhood's uplink. I was expecting something more like 400Gbit for a city of 75,000 - less total capacity, but far harder to saturate at a local level.
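
                                    A quick check of that saturation point, using the plan speed and uplink size quoted above:

                                        uplink_gbps = 32      # shared uplink for the area
                                        plan_gbps = 8         # per-subscriber headline speed
                                        downloaders = 5       # kids grabbing the same update

                                        demand = downloaders * plan_gbps
                                        print(f"{demand} Gbps of demand vs {uplink_gbps} Gbps of uplink -> saturated: {demand >= uplink_gbps}")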

                                    I think one interesting topic missed in the article is how the GPON network is actually constructed. Some providers have the splitters placed close to the customers, while others have them placed close to the upstream connection. Having splitters close to the customer means you need far less fiber and can easily add an extra home, but the customer is always stuck with their neighbors - it's expensive to add extra capacity. Having a centralized splitter cabinet makes it way easier to patch a customer onto another splitter - or even give them a dedicated connection. On the other hand, you have to lay far more fiber as every home now needs its own connection to the cabinet, and adding a new home can become quite tricky if you haven't planned ahead.

                                    • matt-p 2 hours ago

                                      Quite popular in the UK is the concept of

                                      A) Splitter cabinets which you get a fibre back to and then get put on one of several splitters. These cabs are completely passive and come in 48 or 96 (maximum) ports (usually fed by 12f back to the exchange). You can then patch a user onto a different splitter or even a dedicated fibre back to the exchange for P2P ethernet.

                                      B) Split at 8:1 near to the end user and then split again at the exchange, sometimes up to 8:1 again, so 1:64 on XGS-PON. The advantage of this is that as a PON gets busy, it's a 10 second job to change it to 2x 4:1 splits in the exchange, and so on all the way down to having up to 8 subscribers on a port. If you have a business park you can make a central decision to not further split at the exchange so they get 8:1, and you don't have to change your physical plant at all, so the field engineers build to one standard regardless of whether it's domestic or business.

                                      • matt-p 3 hours ago

                                        4096 would be the theoretical maximum number of subscribers. Most providers will average about half of a 32-split PON actually having subscribers on it. So in this case the maths now looks like 16Mbps per customer in a failure scenario, 32Mbps per customer in a normal scenario. The OLT can also take 100G uplink cards, and it's only the cheap folks who are buying these 4x10G cards (which are about half the price).

                                        • bee_rider 2 hours ago

                                          I think it is only inside the household (and maybe some additional restrictions, not sure), so not very useful, but IIRC Steam actually has some ability for users to provide downloads to each-other, peer-to-peer style. Applying that at a neighborhood level could be really nice.

                                        • boingo 8 hours ago

                                          I didn't realize fiber could be overbooked similar to cable. I had a 3gbit plan but downgraded to 500mbit because barely any transfers would go over 300mbit during the day. I kept thinking the servers I download from were overloaded, but it makes more sense that my provider overbooks than that the entire internet is slow... time to put in some complaint calls during peak hours!

                                          • toast0 3 hours ago

                                            Many residential fiber deployments are PON with some amount of overbooking on the last mile. If the fiber is 2.5g down/1.25g up and split 4 ways, not everyone can have the full 1g/1g service.

                                            But even if you have point to point fiber, that probably has an upstream connection that's less than the aggregate bandwidth of the end user connections it manages. And so on until you get through your provider's network to internet transit/peering. And the same applies once you get through to the hosting network too.

                                            I ran hundreds of 1g/10g boxes at a facility with 80g aggregated to the world.

                                            Those boxes would be 40 or so to a rack, with 2x10g to the switch (800gbps aggregate) and probably 4x10g to the upstream switch. Maybe 4x40g to the upstream.
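
                                            A rough sketch of how the oversubscription stacks per layer in that setup (figures as quoted above; the rack count is an assumption derived from "hundreds of boxes"):

                                                boxes_per_rack = 40
                                                nic_gbps = 2 * 10              # 2x10G per box
                                                rack_uplink_gbps = 4 * 10      # rack switch to upstream switch
                                                facility_uplink_gbps = 80      # transit/peering to the world
                                                racks = 10                     # assumed: ~400 boxes at 40 per rack

                                                rack_ratio = boxes_per_rack * nic_gbps / rack_uplink_gbps
                                                edge_ratio = racks * rack_uplink_gbps / facility_uplink_gbps
                                                print(f"rack layer ~{rack_ratio:.0f}:1, facility edge ~{edge_ratio:.0f}:1")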

                                          • ofrzeta 8 hours ago

                                            I've always wanted to know this. The local provider keeps selling nominal 1GBit/s (to the DSLAM) but does the uplink hold up to this if every endpoint is saturating its connection?

                                            • marcus0x62 7 hours ago

                                              There is no provider network (or even non-trivial corporate network) without over-subscription somewhere. Avoiding it entirely is completely impractical. Consider a simple Ethernet network with 4 nodes, each with a 10gbps connection:

                                                  A
                                                B # C
                                                  D
                                              
                                              Assume the switch in the middle (the "#") is a non-blocking cross-bar switch, so every port has a dedicated internal 10gbps path to every other port. Even in this simple example, you already have over-subscription which cannot be resolved except by having every node directly connected to every other node: while each node can transmit (and receive) 10gbps independently, if, say, nodes A and B both want to transmit 10gbps towards node C, the egress buffer for port C on the switch will fill up and overflow.

                                              Ok, you say, I'm just worried about ISP networks, not some free-for-all hypothetical Ethernet network where the nodes want to talk to each other. All the subscriber nodes just want to talk to the uplink, not each other:

                                                ----+ 100 gbps uplink +----
                                                | DSLAM or similar device |
                                                ----+-+-+-+-+-+-+-+-+-+----
                                                    | | | | | | | | | |
                                                    A B C D E F G H I J ...
                                              
                                              Here we have a (practically speaking) non-blocking node. But, ISPs need to have way more than 100 subscribers to be practical. So we need a lot of these devices. And we've just moved our contention up a level or two:

                                                 [                      uplink switch                      ]
                                                     +         +         +         +         +         +
                                                 [ DSLAM ] [ DSLAM ] [ DSLAM ] [ DSLAM ] [ DSLAM ] [ DSLAM ] ...
                                              
                                              If we have 10 of these DSLAMs connected to an uplink (still quite small for an ISP in a moderate-sized town,) we need 1tbps of bandwidth going to our core router. If we have 100, 10tbps, and so on.

                                              From there, at a typical big ISP, you'd have connections going toward (directly, or via another intermediate location) two different exchange points. And there, the math, which is already completely implausible, becomes totally bonkers. The reason being that a typical ISP wants to do what's called settlement-free peering to exchange traffic with other providers, rather than buy connectivity from one or more of them.

                                              So, to keep our network "oversubscription free", whatever maximum uplink capacity we've arrived at (likely in the thousands of terabits per second range at this point, at the very least) has to be provisioned to every single provider our hypothetical ISP peers with. After all, we aren't doing any statistical traffic analysis. We're just making sure that there is no point of contention between any of our customers and any provider we peer with, anywhere in our network.

                                              Even if you take peering out of the equation, all you've done is move the problem around: a provider could buy, say, 10,000 tbps of bandwidth from another provider, I guess, but then that provider, which is peering with hundreds of other ISPs, has the same problem. Otherwise, it's just a shell game where provider A says "we don't oversubscribe" knowing full well their upstream provider does.

                                              This is all completely insane. Nobody does it. What good operators do, instead, is proactively monitor their network and add capacity as needed, preferably before anything gets saturated.
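
                                              To put numbers on why nobody does it, here's a toy version of the example above (the peer count is a made-up illustration):

                                                  subs_per_dslam = 100
                                                  sub_gbps = 1
                                                  dslams = 100          # "if we have 100" per the example above
                                                  peers = 50            # hypothetical settlement-free peers

                                                  core_tbps = dslams * subs_per_dslam * sub_gbps / 1000
                                                  print(f"core uplink: {core_tbps:.0f} Tbps")                          # 10 Tbps
                                                  print(f"non-blocking peering: {core_tbps * peers:.0f} Tbps total")   # 500 Tbps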

                                              • obscurette 5 hours ago

                                                There is no practical network without over-subscription. Power, water and waste grids, roads, etc. – none of these are designed to handle loads where every user uses everything their last-mile pipe can handle. It would be just impossible.

                                            • robertclaus 6 hours ago

                                              As a consumer I always assumed the network capacity scaled exponentially at each aggregation point. Sounds like it's much less than one would assume given the number of consumers...

                                              • philxor 3 hours ago

                                                 There is always an oversubscription ratio at pretty much every hop in the network, at least in a commercial service provider network. That doesn't mean packet loss is occurring all the time; for the most part, providers monitor and plan their networks so that doesn't happen, at least during steady-state, non-failure conditions.

                                                Utilization patterns can also vary quite a bit based on geography and the type of users being served.

                                                 US cable providers do keep track of users who consume their last mile bandwidth all the time. It's typically against their terms of use, so if you are nailing up 300Mbps 24/7, you will get a letter from them and may get your connection turned off.

                                              • XlA5vEKsMISoIln 9 hours ago

                                                    [data-aos="res-fadeIn"] { opacity: unset; }
                                                    body { font-weight: unset; }
                                                • mschuster91 4 hours ago

                                                  > PS: Most complaints about “slow Internet” are due to poor WiFi quality.

                                                  The exception to this is Deutsche Telekom. They demand atrocious pricing for peering and only have very limited peering at DE-CIX and BCIX - 20 Gbit/s at DE-CIX [1], compared to the 2x600 Gbit/s that, say, Vodafone has.

                                                  The result is massive complaints about peering with virtually anyone, even the big dog Cloudflare [2].

                                                   I recommend every fellow German subscribe with a regional provider - in Bavaria, for example, book an M-net connection. The last mile will be handled by Telekom, who are subcontracted by M-net or whomever else you choose, but there are government-regulated exchange points, so you bypass Telekom's shitty peering.

                                                  [1] https://www.golem.de/news/de-cix-nutzer-leiden-unter-geringe...

                                                  [2] https://telekomhilft.telekom.de/t5/Festnetz-Internet/Peering...

                                                  • jeanlucas 8 hours ago

                                                    Yeah, this was noticeable during lockdown when everyone was doing video calls while working from home.

                                                     This was one more reason why I was skeptical about VR; the infra wasn't there for the mass adoption it was being marketed for at the time.

                                                    • gjvc 4 days ago

                                                      they've oversold the flight!