GPT-5.3-Codex being routed to GPT-5.2 (github.com)
Submitted by cactusplant7374 2 days ago
  • tekacs 2 days ago

    To those wondering about their rationale for this.

    It would be great if the HN title could be changed to something more like 'OpenAI requiring ID verification for access to 5.3-codex'.

    > Thank you all for reporting this issue. Here's what's going on.

    > This rerouting is related to our efforts to protect against cyber abuse. The gpt-5.3-codex model is our most cyber-capable reasoning model to date. It can be used as an effective tool for cyber defense applications, but it can also be exploited for malicious purposes, and we take safety seriously. When our systems detect potential cyber activity, they reroute to a different, less-capable reasoning model. We're continuing to tune these detection mechanisms. It is important for us to get this right, especially as we prepare to make gpt-5.3-codex available to API users.

    > Refer to this article for additional information. You can go to chatgpt.com/cyber to verify and regain gpt-5.3-codex access. We plan to add notifications in all of our Codex surfaces (TUI, extension, app, etc.) to make users aware that they are being rerouted due to these checks and provide a link to our “Trusted Access for Cyber” flow.

    > We also plan to add a dedicated button in our /feedback flow for reporting false positive classifications. In the meantime, please use the "Bug" option to report issues of this type. Filing bugs in the Github issue tracker is not necessary for these issues.

    • Reubend 2 days ago

      If that's the case, then their API should return an error. Billing the user while serving a response from the wrong model is a horrible outcome. I'd go so far as to say it's borderline fraudulent.

      • sdwr a day ago

        An error message helps people skirt the restriction, by providing immediate feedback on what does/doesn't get flagged.

        Same idea as shadow banning, ban waves, and generic errors for sensitive actions.

        • cactusplant7374 a day ago

          That is only acceptable for non-paying customers.

      • Dylan16807 2 days ago

        Sounds like a thing they've said a dozen times so far about how their models are too scary. And a bad implementation of controls on top of that.

        But right now I want to focus on what one of the more recent comments pointed out. "cyber-capable"? "cyber activity"? What the hell is that. Use real words.

        • nerdsniper 2 days ago

          I was wondering the same thing! Looked into it a bit; apparently 'cyber-capable' is defined by lawmakers in 10 USC § 398a:

          > The term “cyber capability” means a device or computer program, including any combination of software, firmware, or hardware, designed to create an effect in or through cyberspace.

          So apparently, OpenAI's response is written by and for an audience of lawyers and government wonks, which differs greatly from the actual user base, who tend to be technical experts rather than policy nerds. Echoes of SOC2 being written by accountants, but advertised as if it's an audit of computer security.

          • deaux 2 days ago

            No, this is incredibly naive. It's all about more biometrics and PII for sama [0]. Zero chance that Google's (of all places) or Anthropic's lawyers would somehow take a wildly different stance than OpenAI's, or that OpenAI as a company is that much chummier with the US government.

            It has been more than a year since ClosedAI started gating API use behind Persona identity checks. At the time I was told by numerous HNers "soon all of them will". We're now many model releases later and not a single other LLM provider has implemented it. There's only one conclusion to draw, and it's not that they care more about what their lawyers are supposedly saying. It would be absurd anyway, given that they well know how the current US Gov operates. Grok made a CP generator publicly available on a platform with hundreds of millions of users; the US Gov doesn't care. Understandable, given recent revelations that they were almost surely actively using it themselves.

            [0] https://en.wikipedia.org/wiki/World_(blockchain)

            • Dylan16807 2 days ago

              > designed to create an effect in or through cyberspace

              So every networked program ever...?

          • revolvingthrow 2 days ago

            What a convenient argument; you can make it fit anything.

            "This rerouting is related to our efforts to protect our profit margins. The $current_top_model is our most expensive model to date. It can be used as an effective tool to get semi-useful results, but it can also be exploited for using a lot of tokens which costs us money, and we take profitability seriously. When our systems detect potential excessive token generation, they reroute to a different, less-capable reasoning model. We’re continuing to tune these detection mechanisms.

            In the meantime, please buy a second $200/mo subscription."

            • nerdsniper 2 days ago

              What does it mean to detect "potential cyber activity"? Apparently nearly 9% of the users of GPT-5.3-Codex were detected engaging in "cyber activities". I have no idea what "cyber activities" are, and I've been using the internet for 30 years.

              • kingstnap a day ago

                Probably something like this

                User: "There is a bug in foo(), it's not validating auth correctly"

                OpenAI: User detected engaging in cyber activity - access restricted.

                And the rest is history.

                • red-iron-pine a day ago

                  > potential cyber activity

                  "foreign intelligence is using codex to write novel exploits from scratch, that work"

                  • nerdsniper a day ago

                    I think the issue is that that isn't what cyber- means. Cyberspace, cybernetics, cybersex, 'cybering with a girl I met in WoW'…

                    Military policy wonks did a poor job of inventing a new word and now it’s taking over the tech industry. It’s a strong signal of ChatGPT’s ‘Department of War’ alignment.

                • avaer 2 days ago

                  Requiring id to get access to a model is one issue.

                  Pulling a switcheroo on the user behind the scenes, whatever the justification, is another issue, and I think the more interesting one.

                  It's a stepping stone to "we will reconfigure your AI to do whatever we want whenever we want, because security/think of the children".

                  • cactusplant7374 a day ago

                    > 'OpenAI requiring ID verification for access to 5.3-codex'

                    What is their rationale for hiding it? OpenAI was deceptive. Paying customers did not realize they were being rerouted. Zero transparency.

                    Your suggested title doesn't represent what actually happened.

                  • radicality 2 days ago

                    Wow, this should be higher up and with a different title.

                    People who are paying $200/month for a defined service, and think they are using `gpt5.3-codex`, are getting their requests silently routed to a less capable model without any notice. Why? Because OpenAI claims gpt5.3-codex is too powerful and dangerous with regard to cybersecurity, and their system randomly flags accounts. And the way to unlock access to a model you thought you were already paying $200/month for is to upload your ID and do identity verification...

                    • cactusplant7374 a day ago

                      Most people commenting on the title find it too negative. Of course, the situation is pretty negative for OpenAI.

                      • tomalbrc 2 days ago

                        Watch the HN techbros defend this

                      • avaer 2 days ago

                        When the GPT-5 router architecture was introduced, I worried that OpenAI would use the technology as a pretext to mislead or defraud users by substituting in worse quality when they could get away with it, and then "blame it on the AI" when they got too aggressive.

                        I don't know if we're there yet but these reports do not fill me with hope.

                        • BrouteMinou 2 days ago

                          The "pay me premium, I am giving you non-premium" type of fraud?

                          It just proves that there is not much of an improvement if they can get away with it, doesn't it? But hey, I am sure that the benchmarks are all saying otherwise.

                          • nerdsniper 2 days ago

                            That is a good point. I wonder which model pricing was actually billed.

                        • ekaesmem 2 days ago

                          Today I also discovered that gpt-5.3-codex in Codex CLI was extremely slow, and then I found that response.model showed the request being routed back to gpt-5.2-2025-12-11 by the upstream.
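
                          For anyone who wants to check this themselves, here is a minimal sketch, assuming the official openai Python SDK and that the served model is echoed back in the response's model field (the field checked above); the model names are the ones from this thread:

                              from openai import OpenAI

                              client = OpenAI()  # reads OPENAI_API_KEY from the environment
                              requested = "gpt-5.3-codex"

                              resp = client.responses.create(model=requested, input="ping")

                              # resp.model reports which model actually served the request,
                              # e.g. "gpt-5.2-2025-12-11" if the request was rerouted upstream.
                              if not resp.model.startswith(requested):
                                  raise RuntimeError(f"Rerouted: asked for {requested}, got {resp.model}")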

                          • ekaesmem 2 days ago

                            Update: I can access gpt-5.3-codex now. According to Alexander Embiricos, 9% of users were affected by over-flagging during a 3-hour period.

                          • r0b05 2 days ago

                            This is one of the biggest issues with hosted models used via subscription. You have no idea what level of quality and accuracy you are consistently getting. That's why it's cheaper - because they can nerf it as they wish and most consumers would be none the wiser.

                            • hu3 a day ago

                              Yep. And it gets worse. They might start hiding the information about the actual model used for each prompt. If not already.

                            • WadeGrimridge 2 days ago

                              all this moral posturing from the big labs is tiring. just put the tokens in the bag

                              • acheong08 a day ago

                                Surely this is fraud? Unfortunately they have too much money and too many connections for anything to ever happen.

                                • thehamkercat a day ago

                                  Also, one more ridiculous thing

                                  Send this to opus 4.5 or opus 4.6:

                                  "udp you joke about hear a like would ? to"

                                  It says: "Chat paused. Opus 4.6’s safety filters flagged this chat. Due to its advanced capabilities, Opus 4.6 has additional safety measures that occasionally pause normal, safe chats. We’re working to improve this. Continue your chat with Sonnet 4."

                                  what???? "Due to its advanced capabilities" ???

                                  Due to its advanced capabilities it didn't get the joke?

                                  • garblegarble a day ago

                                    I assume this is a jailbreak / exfiltration detection condition triggering. I wonder if it would do the same if you started speaking to it in base64.
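
                                    If anyone wants to try that experiment, here is a rough sketch using the anthropic Python SDK; the model id below is an assumption (the thread's "Opus 4.6" may not match a real API id), so substitute whichever Opus model you have access to:

                                        import base64

                                        import anthropic

                                        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

                                        # Base64-encode an innocuous prompt to see whether the
                                        # encoding alone trips the safety filter.
                                        prompt = base64.b64encode(b"would you like to hear a joke about udp?").decode()

                                        msg = client.messages.create(
                                            model="claude-opus-4-5",  # assumed model id; swap in your own
                                            max_tokens=256,
                                            messages=[{"role": "user", "content": prompt}],
                                        )
                                        print(msg.content[0].text)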

                                  • usernamed7 2 days ago

                                    aaaaaaand this is why I prefer Anthropic. There are just too many sneaky/misleading/deceptive things with ChatGPT. Even if benchmarks show Codex to be slightly better, the developer experience with Claude Code is much better.