It feels silly to say that AI is curing cancer. Normally a phrase like that would signal the apex of the hype cycle, but in this case it actually seems to have some meat to it. Using AI, more in the sense of statistical inference, to screen for medical conditions or predict treatment outcomes could be genuinely helpful. I remember Jeremy Howard from fast.ai did something similar with deep learning to detect abnormalities in medical images. Seems like a good thing for CERN to do, as long as it works.
I can’t grasp what this is about. It starts by saying it enables hospitals to store and process their data on-site (not really an innovation in any way), but then later says the hospital computers still contact a main server for analysis. So is this just on-premise data analysis (“AI”) but also cloud?
What is that overhead power and… other things… device in the image?
Looks like this: https://www.hohenloher.com/en/products/fly-one/ though in the CERN images it appears to be on a movable ceiling-mounted track.
This is what AI should be doing instead of ChatGPT stuff.
Why not both? I think Claude is probably already more useful and helpful than many doctors. It would be nice to one day be able to just chat with an AI bot from home and get prescriptions or test sheets delivered. You could add automatic safeguards for certain medications or for drug interactions.
Obviously we're not there yet but many people in the world have access to no medical care or only expensive and crappy medical care.
Seems a bit outside their wheelhouse, no?
Considering LLMs are getting larger each generation, CERN is a great place to investigate their applications.
Just one of their more notable experiments generates petabytes of data to capture, store, process, and evaluate.
They know a thing or two that directly applies.
model fitting and extrapolation from large data sets seems like it would be exactly their wheelhouse.
I always assumed ML was used in processing data from the LHC. I don't see any other way to deal with so much.
Things like boosted decision trees have been used in several instances, like this[1] or this[2]. Surely other methods as well.
[1]: https://indico.bnl.gov/event/10699/contributions/53933/attac...
It is and has been for a while, but most of the more flashy and exciting developments in ML and AI don't have very much applicability to LHC event processing. To be able to state any kind of finding about some aspect of physics based on the scattering of particles in the accelerator and their decays in the detector, you need to take the background of all events and make multivariate discriminants on the data in order to enrich your signal as much as possible while throwing as little as possible away. This requires you to have a rigorous and verifiable statistical "paper trail" from start to finish, so you can say with confidence intervals how much signal and background you ought to have, vs how much you measure in your data after processing it. An overly broad black box doesn't really work for this kind of introspection.
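To make the idea concrete, here is a toy sketch (not CERN's actual pipeline; the features and distributions are invented for illustration) of the kind of boosted-decision-tree discriminant described above: train on labeled signal and background events, then cut on the classifier output to enrich the selected sample in signal.

```python
# Toy signal/background discrimination with a boosted decision tree.
# All features and distributions below are made up for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical event features: an invariant-mass-like variable and a
# momentum-like variable. "Signal" clusters near a resonance; the
# "background" is broad.
sig = np.column_stack([rng.normal(125, 2, n), rng.exponential(40, n)])
bkg = np.column_stack([rng.normal(110, 20, n), rng.exponential(25, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_tr, y_tr)

# Cutting on the discriminant output enriches the sample in signal:
# the fraction of true signal among selected events (purity) rises
# well above the 50% you would get with no selection.
scores = bdt.predict_proba(X_te)[:, 1]
selected = y_te[scores > 0.8]
purity = selected.mean()
print(f"signal purity after cut: {purity:.2f}")
```

The point of the comment above is exactly why a sketch like this is only half the story: in a real analysis you also need the selection efficiency and its uncertainty tracked end to end, which is why interpretable discriminants are preferred over opaque black boxes.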
You mean like the web was also?
Good point!
Makes me think CERN has too much money
Nope, too little. I'd rather have 1000x this than another 1% optimization of ad-preference data collection by the tech giants.
CERN's side projects gave us WWW for example, so you could write your comment.
Projects get funding just like at any other publicly funded science institution: with a lot of scrutiny and advocacy. If you think this 153'000 CHF ($178'000):
https://knowledgetransfer.web.cern.ch/kt-fund/projects/novel...
...is too much money, then I'm glad: you probably don't have cancer!