• envoked 9 months ago

    It’s great to see that Mitmproxy is still being developed - it indirectly made my career.

    Back in 2011, I was using it to learn API development by intercepting mobile app requests when I discovered that Airbnb’s API was susceptible to Rails mass assignment (https://github.com/rails/rails/issues/5228). I then used it to modify some benign attributes, reached out to the company, and it landed me an interview. Rest is history.
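
    For context, mass assignment is when a handler copies request parameters onto a model wholesale, so a client can set attributes that were never exposed in the UI. A minimal Python sketch of the anti-pattern and the whitelist fix (hypothetical model and field names, not Airbnb's actual code):

```python
class User:
    """Hypothetical model with a field that should be server-controlled."""
    def __init__(self):
        self.name = ""
        self.admin = False  # never meant to be client-settable

def update_user_vulnerable(user, params):
    # Mass assignment: blindly copies every request parameter onto the
    # model, so {"name": "x", "admin": True} silently escalates privileges.
    for key, value in params.items():
        setattr(user, key, value)
    return user

def update_user_safe(user, params):
    # The fix (what Rails' strong parameters later enforced): whitelist.
    allowed = {"name"}
    for key in allowed & params.keys():
        setattr(user, key, params[key])
    return user
```

    Intercepting a request with mitmproxy and adding an unexpected field is exactly how this class of bug gets found.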

    • danmur 9 months ago

      It's absolutely insane how many core devs argued against the change there.

      • JeremyNT 9 months ago

        To this day it remains incredibly useful to me, and weirdly obscure to people who I would've thought should know better.

        Sometimes it's easier to use mitmproxy with an existing implementation than to read the documentation!

        • RamRodification 9 months ago

          > Rest is history

          ;)

        • febusravenga 9 months ago

          Only slightly related ...

          > Chrome does not trust user-added Certificate Authorities for QUIC.

          Interesting. In the linked issue the Chrome team says:

          > We explicitly disallow non-publicly-trusted certificates in QUIC to prevent the deployment of QUIC interception software/hardware, as that would harm the evolvability of the QUIC protocol long-term. Use-cases that rely on non-publicly-trusted certificates can use TLS+TCP instead of QUIC.

          I don't follow the evolution of these protocols, but I am not sure what disallowing custom certificates has to do with the "evolvability" of the protocol ...

          Does anyone know what those _reasons_ are?

          • filleokus 9 months ago

            If I were to guess, it's to allow Google freedom in experimenting with changes to QUIC, since they control both the client and large server endpoints (Google Search, Youtube etc).

            They can easily release a slightly tweaked QUIC version in Chrome and support it on e.g. Youtube, and then use metrics from that to inform proposed changes to the "real" standard (or just continue to run the special version for their own stuff).

            If they were to allow custom certificates, enterprises using something like Zscaler's ZIA to MITM employee network traffic would risk breakage whenever Google tweaks the protocol. If the data stream is completely encrypted and opaque to middleboxes, Google can more or less do whatever they want.

            Kinda related: https://en.wikipedia.org/wiki/Protocol_ossification

            • dstroot 9 months ago

              Companies that use something like Zscaler would be highly likely to block QUIC traffic to force it onto TCP.

              • josephg 9 months ago

                That’s exactly what Google is hoping will happen. If QUIC is blocked entirely, there’s no risk that small tweaks to the QUIC protocol will break Google’s websites for any companies using these tools.

                • tecleandor 9 months ago

                  Well, my company is doing it already. They split VPN traffic depending on the target domain (mostly for benign reasons), and they can't do that with QUIC, so they have to block QUIC traffic.

                  • account42 9 months ago

                    What benign reason could there possibly be that isn't better served by IP addresses rather than domains?

              • cpitman 9 months ago

                Middle boxes (https://en.m.wikipedia.org/wiki/Middlebox) are a well-known source of protocol stagnation. A protocol with extensibility usually needs the client and server to upgrade, but with middle boxes there are N other devices that potentially need updating as well. Where the user (client) and service provider (server) are motivated to adopt new feature sets, the owners of middle boxes might be far less so. The net effect is that protocols become hard to evolve.

                • sbinder 9 months ago

                  Perhaps they're referring to this famous objection of financial institutions to TLS 1.3, motivated by them not wanting to update their MitM software needed for compliance: https://mailarchive.ietf.org/arch/msg/tls/CzjJB1g0uFypY8UDdr...

                  • eptcyka 9 months ago

                    TLS 1.3 breaks MITM boxes because a client can establish a session key outside the middlebox’s network and continue using it afterwards inside that network.

                  • remus 9 months ago

                    > I don't follow evolution of those protocols, but i am not sure how disallowing custom certificates has anything with "evolvability" of protocol ...

                    One of the reasons for developing HTTP 2 and 3 was because it was so difficult to make changes to HTTP 1.1 because of middleware that relied heavily on implementation details, so it was hard to tweak things without inadvertently breaking people. They're trying to avoid a similar situation with newer versions.

                    • undefined 9 months ago
                      [deleted]
                    • intelVISA 9 months ago

                      QUIC exists to improve ad deliverability, to grant user freedom would counteract that goal.

                      • jsheard 9 months ago

                        How does QUIC improve ad deliverability?

                        • superkuh 9 months ago

                          The entire protocol puts corporate/institutional needs first and foremost to the detriment of human-person use cases. HTTP/3 makes all web things require CA TLS and means that if something in the TLS breaks (as it does every couple of years with root cert expirations, version obsolescence, ACME version obsolescence, etc) then the website is not accessible. Because there's no such thing as HTTP+HTTPS HTTP/3, self-signed HTTPS HTTP/3, or even, as in this case, custom CA TLS HTTP/3. It's designed entirely around corporate/institutional needs and is a terrible protocol for human people. HTTP+HTTPS websites can last decades without admin work. HTTP/3 websites can only last a few years at most.

                          • mardifoufs 9 months ago

                            If it were about institutional needs, surely it would make it easier for middleboxes to MITM? The biggest opposition to QUIC came from big corporations and other institutional players.

                            • persnickety 9 months ago

                              This doesn't have much to do with QUIC. If HTTP/3 was based upon another transport protocol, you'd have the exact same problems.

                              You can use QUIC with custom certs without any trouble.

                            • intelVISA 9 months ago

                              > We explicitly disallow non-publicly-trusted certificates in QUIC to prevent the deployment of QUIC interception software/hardware, as that would harm the evolvability of the QUIC protocol

                              For Chrome at least..!

                              • josephg 9 months ago

                                That has nothing to do with ad deliverability.

                          • ozim 9 months ago

                            There is the case of Kazakhstan installing certs to MITM citizens a couple of years ago, and a bunch of cases where bad actors socially engineered people into installing certs for them.

                            I think because of the KZ case, browsers, and Chrome especially, went for using only their own cert store instead of the operating system one.

                            • jeroenhd 9 months ago

                              Browsers responded by blacklisting the Kazakh certificate the same way they blacklist the certificates that came with pre-installed spyware on laptops from shit vendors like Lenovo. You don't need to block all user-added certificates to defend against a well-known bad one.

                            • toast0 9 months ago

                              If your company requires communications to be monitored, the typical enforcement is a custom company CA installed on company equipment. Then they intercept TLS and proxy it.

                              Those proxies tend to be strict in what they accept, and slow to learn new protocol extensions. If Google wants to use Chrome browsers to try out a new version of QUIC with its servers, proxies make that harder.

                              • globular-toast 9 months ago

                                It can seem confusing but it all makes sense when you realise Chrome is designed to work for Google, not for you. I remember people switching their Grandmas to Chrome 15 years ago when they could've chosen Firefox. Many of us knew this would happen, but convenience and branding are everything, sadly.

                                • le-mark 9 months ago

                                  > Chrome is designed to work for Google, not for you.

                                  Maybe more accurately “Chrome is designed to work for you insofar as that also works for Google”. I share the long-standing dismay that so many willingly surrendered their data and attention stream to an ad company.

                                  • fud101 9 months ago

                                    I don't really think Firefox cares about having users. The one killer feature Chrome has is being able to access all your state by logging into your Chrome account. Firefox refuses to provide this basic service which will allow you to seamlessly use your data on Firefox and then eventually stop using Chrome. I wish Firefox nothing but the worst.

                                    • mdaniel 9 months ago

                                      I may be feeding the trolls, but not only is there a sync mechanism, at least with Firefox you can self-host[1] such a thing, thereby doubly ensuring the data isn't used for something you disagree with

                                      If you're going to say that Firefox doesn't care about having users, point out its just stunningly stupid memory usage, blatantly stale developer tools (that one hurts me the worst because the Chrome dev-tooling is actually open source, so there's nothing stopping them from actually having Best In Class dev tooling other than no-fucks-given), or the har-de-har-har that comes up periodically of bugs that have been open longer than a lot of developers have been alive

                                      1: https://github.com/mozilla-services/syncstorage-rs#running-v...

                                      • fud101 9 months ago

                                        Don't care about self hosting. That's not a feature to me, it's a burden. I would rather some cloud provider do that for me; thankfully Google does it for free and the convenience is much appreciated. It's the same reason I'd put my personal code on GitHub rather than on some hard drive in the basement which may die at any time.

                                        • globular-toast 9 months ago

                                          I find Firefox's memory usage and dev tooling better than Chrome.

                                        • chgs 9 months ago

                                          My Firefox installs on my various computers have a shared profile, so what are you on about?

                                    • Onavo 9 months ago

                                      Do http/2 and http/3 offer any benefits if they are only supported by the reverse proxy but not the underlying web server? Most mainstream frameworks for JS/Python/Ruby don't support the newer http standards. Won't the web server be a bottleneck for the reverse proxied connection?

                                      • AgentME 9 months ago

                                        Yes, because http/2 or http/3 will improve the reliability of the connection between the client and the reverse proxy. The connection between the reverse proxy and the underlying web server is usually much faster and more reliable, so that part would benefit much less from being upgraded to http/2 or http/3.

                                        • markasoftware 9 months ago

                                          the transport between reverse proxy <-> backend is not always HTTP, e.g. Python with uWSGI and PHP with FastCGI.

                                          And even when it is HTTP, as other commenters said, the reverse proxy is able to handshake connections to the backend much more quickly than an actual remote client would, so it's still advantageous to use http/2 streams for the slower part of the connection.

                                          • account42 9 months ago

                                            > the transport between reverse proxy <-> backend is not always http, eg python w/ uwsgi and php w/ fastcgi.

                                            That's just called a web server and not a reverse proxy then. Both are just evolutions of CGI.

                                          • masspro 9 months ago

                                            Probably not, but mitmproxy is not a reverse proxy for any production purpose. It’s for running on your local machine and doing testing of either low-level protocol or web security stuff.

                                            • codetrotter 9 months ago

                                              > mitmproxy is not a reverse proxy for any production purpose

                                              At a startup I was working on a few years ago, I set up mitmproxy in dev and eventually if memory serves right I also sometimes enabled it in prod to debug things.

                                              That being said, we did not have a lot of users. We had in fact very very few users at the time.

                                              • hedora 9 months ago

                                                I’ve been patiently waiting for someone to write a howto that uses mitmproxy to transparently obtain acme certificates for any web servers that are behind it.

                                                I’d totally pay a cloud provider to just do this and forward requests to my port 80 or 443 with self signed certificates.

                                                HTTPS+ACME is already open to this attack vector, so why inconvenience myself by pretending it is not?

                                                • codetrotter 9 months ago

                                                  In our setup, TLS was already being terminated by Nginx or Caddy (I don’t remember which, but it was one of those two) sitting in front of another web server on the same host.

                                                  So inserting mitmproxy into the setup was just a case of putting it between the Nginx or Caddy that did TLS termination, and the web server that served the backend API. So to mitmproxy it was all plain HTTP traffic passing through it, locally on the same machine.

                                                  I bound the mitmweb web UI to the VPN interface so that us devs could connect to the dev server with VPN and then have access to the mitmweb web UI to inspect requests and responses.

                                                  • dandandan 9 months ago
                                              • nitely 9 months ago

                                                Something not mentioned: web browsers limit the number of connections per domain to 6. With HTTP/2 they will use a single connection for multiple concurrent requests.
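
                                                A back-of-the-envelope sketch of what that cap means for a page with many small assets (the numbers are illustrative assumptions, not measurements):

```python
import math

resources = 30        # assumed number of small assets on a page
per_domain_limit = 6  # classic per-domain cap for HTTP/1.1 connections

# HTTP/1.1: at most 6 requests in flight, so fetches happen in waves.
http1_waves = math.ceil(resources / per_domain_limit)

# HTTP/2: a single connection multiplexes all streams concurrently,
# so (bandwidth permitting) every request can be in flight at once.
http2_waves = 1
```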

                                                • lemagedurage 9 months ago

                                                  Yes. Besides other performance benefits, HTTP/3 saves a full round trip on connection setup by combining the transport and TLS handshakes.
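
                                                  A rough sketch of the round-trip arithmetic behind that saving (the 50 ms RTT is an illustrative assumption; 0-RTT resumption is ignored on both sides):

```python
rtt_ms = 50  # assumed client <-> server round-trip time

# TCP + TLS 1.3: one round trip for the TCP handshake, then one more
# for the TLS handshake, before the first HTTP request can go out.
tcp_tls_setup_ms = 2 * rtt_ms

# QUIC: transport and TLS handshakes share the same packets, so the
# connection is ready to carry requests after a single round trip.
quic_setup_ms = 1 * rtt_ms
```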

                                                  • connicpu 9 months ago

                                                    Depends. If they're running on the same box, the reverse proxy will be able to initiate tcp connections to the web server much more cheaply. Even if they're just in the same datacenter, the lower round trip latency will reduce the time for establishing TCP connections. Plus, the proxy might be load balancing across multiple instances of the backend.

                                                    • apitman 9 months ago

                                                      Also browsers limit the number of HTTP/1.1 requests you can have in flight to a specific domain

                                                      • ahoka 9 months ago

                                                        The limit is much higher for proxies, though.

                                                        • connicpu 9 months ago

                                                          With a reverse proxy the browser doesn't know it's talking to one

                                                    • mhils 9 months ago

                                                      One of the main promises of HTTP/3 is better performance under worse network conditions (e.g. no head-of-line blocking as in HTTP/2, connection migration, 0-RTT). For all of that HTTP/3 between client and proxy is really great. HTTP/3 between proxy and server is not required for that.

                                                      • jeltz 9 months ago

                                                        Yes, for HTTP/3, since it handles network issues better. HTTP/2 is of more doubtful value, since it can choke really badly on packet loss.

                                                        • Narhem 9 months ago

                                                          HTTP/3 seems like an excellent opportunity to optimize HTMX, or any of the libraries which leverage HTML fragments, like JSX. The obvious advantage of HTTP/3 is for gaming.

                                                          The servers which run the frameworks have to support HTTP/3. In most cases the advantages should be transparent to developers.

                                                          • deznu 9 months ago

                                                            I’m curious what about HTTP/3 is particularly advantageous with HTMX?

                                                            • Narhem 9 months ago

                                                              A common use case of HTMX is sending fragments when scrolling.

                                                              Since HTTP/3 uses UDP to send the fragments, duplicate packet information doesn’t have to be sent.

                                                              Kind of funny that the newer protocol effectively works in the opposite direction of GraphQL.

                                                              • greenavocado 9 months ago

                                                                Network congestion management is gonna be wild in the coming decade with the proliferation of udp based protocols

                                                                • undefined 9 months ago
                                                                  [deleted]
                                                          • rnhmjoj 9 months ago

                                                            Unfortunately there is still the issue[1] of fingerprinting. Until it can spoof the TLS handshake of a typical browser, you get these "Just a quick check..." or "Sorry, it looks like you're a bot" pages on about 80% of the web.

                                                            [1]: https://github.com/mitmproxy/mitmproxy/issues/4575

                                                            • account42 9 months ago

                                                              > Until it can spoof the TLS handshake of a typical browser, you get these "Just a quick check..." or "Sorry, it looks like you're a bot" pages on about 80% of the web.

                                                              Evidently Firefox is not a typical browser anymore.

                                                            • bluejekyll 9 months ago

                                                              Thanks for the shoutout to Hickory. It’s always fun to see what people build with it. Nice work!

                                                              • mhils 9 months ago

                                                                Thank you for your work on Hickory! It's super exciting to see how PyO3's Python <-> Rust interop enables us to use a production-grade DNS library with Hickory and also a really solid user-space networking stack with smoltcp. These things wouldn't be available in Python otherwise.

                                                              • nilslindemann 9 months ago

                                                                I wonder, can I use it like Privoxy/Proxomitron/Yarip? E.g. can I strip out script tags from specific sites, which I request with my browser (Ungoogled Chromium), using mitmproxy as a proxy? And how will this affect performance?

                                                                • jeroenhd 9 months ago

                                                                  In theory: yes. In practice: mitmproxy is written in Python, so there will be some delay because the language is not all that fast. When you're visiting web pages with hundreds of small delays, you'll notice.

                                                                  That said, for many people who care about this stuff, this could be an option. There's nothing preventing you from doing this technically speaking.

                                                                  There's a small risk of triggering subresource integrity checks when rewriting Javascript files, but you can probably rewrite the hashes to fix that problem if it comes up in practice.
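
                                                                  A sketch of the rewriting step as a pure function, keeping the mitmproxy wiring out of the way. In a real addon this would run inside the `response` hook (roughly `flow.response.text = strip_scripts(flow.response.text)`); the regex approach is a deliberate simplification and will miss edge cases that a proper HTML parser would handle:

```python
import re

# Matches <script ...>...</script> blocks as well as self-closing
# <script ... /> tags, case-insensitively, across line breaks.
SCRIPT_RE = re.compile(
    r"<script\b[^>]*>.*?</script\s*>|<script\b[^>]*/\s*>",
    re.IGNORECASE | re.DOTALL,
)

def strip_scripts(html: str) -> str:
    """Remove script tags from an HTML document."""
    return SCRIPT_RE.sub("", html)
```

                                                                  Subresource integrity pins apply to the fetched files themselves, so dropping whole tags like this sidesteps that particular check.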

                                                                • systems 9 months ago

                                                                  Is mitmproxy an alternative to Fiddler?

                                                                • 38 9 months ago
                                                                • undefined 9 months ago
                                                                  [deleted]