Judging from how the DoD currently buys software, lots of money will be spent, many headlines will be written, awards will be handed out, and zero software will make it on to user workstations. End users will continue to use Excel for everything.
> End users will continue to use Excel for everything.
Wait, I thought AI is killing all these jobs?!
$200 mil is chump change for them; if the prototype turns out to be good, great for them, and if it's not, they're not worried.
Not all software is made public and used on workstations, especially not in the military.
Would you mind elaborating a bit?
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...
If the physical disconnect between killing a person (e.g. UAVs) wasn't enough to make that task easier then further offloading the decision of who to target might help.
> If the physical disconnect between killing a person (e.g. UAVs) wasn't enough to make that task easier then further offloading the decision of who to target might help
The physical disconnect hypothesis isn't really borne out by the lack of concern for collateral damage in pre-firearm warfare, when killing was mostly done face to face, compared to today.
Physical connection means that the person making the decision to kill is scared for their life. Physical disconnect means they're only scared for a piece of equipment.
Guess which one of those is more trigger happy.
“Let’s take another whack at real-time object identification built into night vision goggles.”
(Made-up but plausible example)
just giving the whole DoD chatgpt that's deployed in their servers would be pretty useful i guess for them?
I heard one thing AI is very good at is declassifying documents.
Does anyone have any idea what the DoD could possibly want from OpenAI? Less accurate/more sycophantic missiles?
Some of the more popular models (NIPRGPT, the various DREN models) are "soft banned" and the DoD is in need of a unified solution. MSFT's GCC High and GovCloud implementations have been slow to materialize. But more to your point: everyone is using LLMs to pick up the slack from layoffs. I'm sitting in meetings and watching my gov customers generate documentation and proposals every day. Everything the commercial world uses AI for, the US gov is doing too. Can't directly speak to targeting, but you can bet your ass there are 100 different offensive projects trying to integrate AI into ISR work and the like.
Palantir has an older demo of their chat-like interface showcasing target selection, battle plans and formations, and other advice. Kind of creepy; I assume it's much more capable now.
Palantir is the poster child for a global panopticon
1. Secretary of Defense feels like bombing some place. Asks aide to write a report on justification, logistics, and consequences.
2. Aide tells subordinate to write report.
3. Subordinate uses ChatGPT to write the 100-page report. Sends it to aide.
4. Aide uses ChatGPT to summarize report. Sends summary to SecDef.
5. SecDef accidentally posts summary on publicly-accessible social media page, then forwards to President.
6. Bombs go boom.
Let's just hope the bomb doesn't answer: "I'm sorry Dave, I can't do that!"
Yeah, tons. SIGINT/HUMINT analysis. After-action report summaries. War gaming to optimize deterrence. Human-machine teaming. LLM-in-the-loop for warfighters. Rapid code gen in field deployments for units to spin up software solutions. The list is endless, imho.
LLM-in-the-loop for whatever a "warfighter" is, is basically the opposite of how fighting wars should go.
The DoD does plenty of things beyond putting boots on the ground. They’re the world’s largest employer. They have all the same boring problems that any employer has at gigantic scale.
Yep, pretty much.
Why? It could help them assess threats and civilians / avoid collateral damage. Like any weapon or technology, it depends on its use. "Warfighter" is the modern industry/academic term for "soldier."
"help" (botch the job)
Automatically generated, native sounding, propaganda at scale - capable of interacting in real time. This was always the MIC money endgame for LLMs. This is also probably why they are enlisting tech execs from Meta, OpenAI, etc.
I look forward to our senators "living" to 100+.
AI explosives with personalities feature in https://en.m.wikipedia.org/wiki/Dark_Star_(film)
Wow, it's been forever since I've seen that movie. Didn't realize it was meant as a comedy!
You would be surprised how much work at the DoD has nothing to do with weapons.
which also can be botched
ChatGPT, do you know where the General left his keys?
> “This contract, with a $200 million ceiling, will bring OpenAI’s industry-leading expertise to help the Defense Department identify and prototype how frontier AI can transform its administrative operations, from improving how service members and their families get health care, to streamlining how they look at program and acquisition data, to supporting proactive cyber defense,”
Translated - they'll hand out GPT access to a bunch of service members and administrators. Except the UI will have a big DoD logo and words like "SECURE" and "CLASSIFIED" will be displayed on it a few dozen times.
You realize that the DoD has a huge amount of normal business work like logistics, project management, people management, benefits management, etc., right?
The United States Military (Waterhouse has decided) is first and foremost an unfathomable network of typists and file clerks, secondarily a stupendous mechanism for moving stuff from one part of the world to another, and last and least a fighting organization. —Cryptonomicon
I suspect it's more than that.
“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said.
“National security challenges” is incredibly broad, providing the right size of boots to USCG rescue swimmers could be considered a national security challenge.
it says _critical_
Trenchfoot was a substantial source of casualties in WW1, and looking after your feet is a top priority for every military force in the field.
Ain’t nothing more critical than rescue!
Knowing the DoD, I bet it's not. I bet they just want their own secure servers or some sort of corporate data/encryption management, and they're willing to pay through the nose to not have to use AskSage or some terrible DoD-friendly clone.
An on-premises deployment?
I would guess it’s for mass surveillance. Even just the ability to extract names and entities from audio, video, and text on every piece of public media would be useful.
DOD doesn’t really do this
Maybe they’d like to start
Only because they currently contract it out to Palantir (at least the bits that NSA isn't handling)
NSA is a DOD organization.
> The National Security Agency (NSA) is an intelligence agency of the United States Department of Defense, under the authority of the director of national intelligence (DNI).
https://en.wikipedia.org/wiki/National_Security_Agency
> William J. Hartman is a United States Army lieutenant general who has served as the acting commander of United States Cyber Command, director of the National Security Agency,
https://en.wikipedia.org/wiki/William_J._Hartman
They’re staffed by military people (alongside civilians) and their commander is always military — because much of what they do (abroad) could be construed as acts of war.
Easy PT plans
One AI per person ...
Nice ad slogan!
One AI per person
One voice. One vision. One AI - for you.
Sycophantic missiles would be desirable
Let's hope that before they wire it directly to the controls "because speed," they've trained it on Stanislav Petrov up, down, and backwards...
I don’t understand but that sounds funny
https://en.wikipedia.org/wiki/Stanislav_Petrov
> On 26 September 1983, three weeks after the Soviet military had shot down Korean Air Lines Flight 007, Petrov was the duty officer at the command center for the Oko nuclear early-warning system when the system reported that a missile had been launched from the United States, followed by up to four more. Petrov judged the reports to be a false alarm.
So DoD will use OpenAI to write tweets bashing "the enemies of the empire"? They realise that Tucker Carlson and the likes are turning against forever wars, so they must deploy other tactics.
First Palantir used against US citizens. Now this.
Yes, teach the machines how to kill life, whatever could go wrong...
You guys have no idea how many DoD man-hours are spent on jobs like
"add up all the item counts in the inventory report and send a weekly email"
Yes, maybe OpenAI is developing killer drones, or maybe (imo more likely) it's licensing a FedRAMP-compliant AI for normal business work.
You don’t need AI to complain about FedRAMP
Technically I can still edit that post but now I think it's better this way.
So much for humanity’s greater good Sam.
Depending on your political views, it may be good if it helps the USA keep its military edge over China and prevents China from invading Taiwan.
There's invasions going on right now that aren't being prevented, no need for theoretical ones.
Said capabilities Hegseth is utterly gutting and undermining.
It's more likely China's next-gen aircraft one should be wary of than their AI (as previewed in the recent India-Pakistan air engagements).
I really see this so-called AI race as a bullet to be dodged; a bubble to be waited out. It has been relentlessly pushed from on top, and we always find really pushy FOMO as the main driver.
I'm not impressed by non-deterministic mechanisms that undo the zero-overhead advantages hard won by decades of automation. This is not a CAD tool amplifying and articulating human intentions, but a vague floppy jelly blob of "I wonder what will come out."
Why do you even care about Taiwan?
Isn't this part of the true definition of "AGI," and all for the benefit of humanity?
Or are we finally realizing that we are getting scammed again on these so-called promises and it was all a grift?
Maybe we should just wake up.
On the way to benefiting all humanity, MS helped Sam back then, and now MS will get to wake up to the real Sam :)
https://www.reuters.com/sustainability/boards-policy-regulat...
“OpenAI executives have considered accusing Microsoft, the company's major backer, of anticompetitive behavior in their partnership …
OpenAI's effort could involve seeking a federal regulatory review of the terms of its contract with Microsoft for potential violations of antitrust law, as well as a public campaign,…“
People are practically irrelevant infants at this point. We are about to repeat the Iraq war, point by point with universal agreement. The same people in charge are recycling the same propaganda, selling the same lies to in many cases quite literally the same people again and it's working, so I don't know why you are expecting anyone to ever "wake up".
Further context https://www.pymnts.com/cpi-posts/senator-warren-presses-pent...
This, this is why I have such an issue with the amount of taxes I pay
Not because I’m anti social programs the way people like to immediately assume, but because of dumb shit like this that I have no control over
Honestly, why do you think it is dumb?
I think it is pretty well established that LLMs can be a great time saver when used appropriately. Why wouldn’t you want that productivity gain at the government level?
Reading and writing reports when people's lives are on the line is arguably a hot topic, no?
One would imagine that a $200m contract would come with at least some minimal amount of guidance on best practices. The DoD is not a spring chicken when it comes to automation. They've been a perennial early adopter.
And LLMs are the opposite of automation, the opposite of a human-intention amplifier like CAD/CAM, or Chef/Puppet/Ansible/Terraform, whatever; aka non-deterministic.
This gives me a sick feeling of unease.
That's the rational response.
OpenAI was supposed to be open; after making it a private company, it will become governmental & defense-oriented.
Good luck to Elon Musk in his lawsuit over the open-source-ness of the organization.
That should shore up their financials given their... *checks notes* ...$12B in operational costs. /s
Hope it's worth it.
My view is that it isn't really entirely about economics anymore at least on a traditional cost/benefit analysis basis. It is seen as a way to disrupt industries. Think of it more like war with arms race dynamics (winner takes all), or consolidation of power to capital over labor. Even if it is a net negative you need to play to stay in the game even if it disrupts your own revenue (e.g. Google) else lose entirely.
I suspect the capital class would throw good money after bad to make AI viable especially since a lot of the costs are fixed in nature (i.e. in training runs, not per query).
$10B run rate now so they can just plug the gap with $2B in ads!?! Hot DoD singles near you! Would you like me to generate an image of their stealth package ;) ?
Directly hooking up the AI to the nuclear button is which chapter of the "don't build the torment nexus" book?
Isn’t that the Department of Energy that does that, not DoD?
DoD would be involved in actual deployment of nukes, I would expect.
The epilogue.
The last published draft of the epilogue.