Yeah, I've been using this trick to implement a hot reload library, to only update the specific functions that are changed without re-executing modules: https://github.com/breuleux/jurigged
I also use it in a multiple dispatch library (https://github.com/breuleux/ovld) to replace the entry point by specialized dispatch code in order to cut some overhead.
It's fun.
Also, why is every damn post these days somehow framed in an AI context? It's exhausting.
> Also, why is every damn post these days somehow framed in an AI context? It's exhausting.
Every 5-to-10-year segment of my life has somehow had one or two "this is the future" hypes running concurrently with it. Previously it was X, now it's Y. And most of the time, everything else somehow gets connected to the currently hyped subject, whether it's related or not.
The only thing I've found to be helpful is thinking about and changing my perspective and framing about it. I read some article like this which is just tangentially related to AI, but the meat is about something else. So mentally I just ignore the other parts, and frame it in some other way in my head.
People can suddenly write their articles with attachments to the hyped subject, but I don't mind; I'm reading for other purposes and I get other takeaways that are still helpful. A tiny Jedi mind-trick for avoiding that exhaustion :)
I also find it useful to keep in mind that (often more junior) people are learning new things and expressing their joy, which is a good thing. And most (junior) people learning things in tech right now are doing so in the context of AI, for better or worse.
(idk if this author is “junior” per se, mostly just agreeing the shift in perspective is helpful to not get burnt out by things like this)
AI, block chain, rust, go, serverless, nosql, ruby on rails..... The list goes on and on :-)
Some of it gets really annoying on the business side, because companies like Gartner jump on the trends, and they have enough influence that businesses have to pay attention. When serverless was a thing, every cloud provider effectively had to add serverless things even if it made zero sense and no customers were asking for it, simply to satisfy Gartner (and their ilk) and be seen as innovating and ahead of the curve. Same thing happened with block chain, and is currently happening with AI.
Wheels get reinvented again and again and again … this is quite unique to info tech … imagine if mathematicians did the same, the world would be in chaos …
Oh, that is really interesting. I was only aware of IPython's autoreload extension; I hadn't found your library. I'm also working on hot reload for Python, as part of a development environment that aims to give Python a development experience closer to Lisp: https://codeberg.org/sczi/swanky-python/
Some minor details: you currently aren't updating functions whose freevars have changed. You can actually do that by using the C API to update __closure__, which is a read-only attribute from Python:

    import ctypes
    ctypes.pythonapi.PyFunction_SetClosure.argtypes = [ctypes.py_object, ctypes.py_object]
    ctypes.pythonapi.PyFunction_SetClosure(old, new.__closure__)
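To make that concrete, here's a minimal self-contained sketch of the closure swap actually taking effect (the names `make`, `old`, and `new` are just for illustration):

```python
import ctypes

def make(x):
    def inner():
        return x
    return inner

old = make(1)
new = make(2)

# __closure__ is read-only from Python, but the C API can overwrite it,
# provided the replacement closure matches the code object's freevars
ctypes.pythonapi.PyFunction_SetClosure.argtypes = [ctypes.py_object, ctypes.py_object]
ctypes.pythonapi.PyFunction_SetClosure(old, new.__closure__)

print(old())  # 2 -- old now reads x from new's cell
```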
Also, I think you should update the __annotations__, __type_params__, __doc__, and __dict__ attributes on the function.

Rather than using gc.get_referrers, I just maintain a set for each function containing all the old versions (using weakrefs so they go away if an old version isn't still referenced by anything). Then when a function updates I don't need to find all references: all references point to some old version of the function, so I just update that set of old functions, and all references will be using the new code. I took this from IPython autoreload. I think it is both more efficient than gc.get_referrers and more complete, as it solves the issue of references "decorated or stashed in some data structure that Jurigged does not understand". The code for that is here: https://codeberg.org/sczi/swanky-python/src/commit/365702a6c...
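The scheme I'm describing can be sketched roughly like this (`register` and `hot_swap` are made-up names; real code would also handle mismatched free variables):

```python
import weakref

_versions = {}  # qualname -> WeakSet of every function object created for that def

def register(fn):
    _versions.setdefault(fn.__qualname__, weakref.WeakSet()).add(fn)
    return fn

def hot_swap(new_fn):
    # point every still-alive old version at the new code object;
    # any reference stashed anywhere then runs the new code automatically
    for old in _versions.get(new_fn.__qualname__, ()):
        old.__code__ = new_fn.__code__
    register(new_fn)

@register
def greet():
    return "hi"

stashed = greet  # a reference squirreled away somewhere

def greet_v2():
    return "hello"
greet_v2.__qualname__ = "greet"  # pretend it came from re-running the same def
hot_swap(greet_v2)

print(stashed())  # "hello" -- the stashed reference sees the new code
```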
Hot reload for Python is quite tricky to fully get right. I'm still missing plenty of parts that I know about and plan on implementing, and surely plenty more that I don't even know about. If you or anyone else who's worked on hot reload in Python wants to talk about it, I'm happy to; just reach out, my email is visible on codeberg if you're signed in.
Thanks for the tips, I'll try to look into these when I get some time! Didn't know you could modify the closure pointer.
I'm not sure what you mean by "maintaining a set of old versions". It's possible I missed something obvious, but the issue here is that I have the code objects (I can snag them from module evaluation using an import hook)... but I do not have the function objects. I never had them in the first place. Take this very silly and very horrible example:
    def adder(x):
        def inner(y):
            return x / y
        return inner

    adders = {}

    def add(x, y):
        adders.setdefault(x, adder(x))
        return adders[x](y)
The adders dictionary is dynamically updated with new closures. Each is a distinct function object with a __code__ field. When I update the inner function, I want all of these closures to be updated. Jurigged is able to do it -- it can root them out using get_referrers. I don't see how else to do it. I quickly tested in a Jupyter notebook, and it didn't work: new closures have the new code, but the old ones are not updated.

Oooh, now that is interesting. That's what I mean by stuff I don't even know that I don't know :)
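Roughly, the get_referrers approach looks like this (a simplified sketch with a made-up `patch_code` helper; the real implementation handles many more cases):

```python
import gc
from types import FunctionType

def patch_code(old_code, new_code):
    # every function object holds a reference to its code object, so
    # gc.get_referrers(old_code) finds all closures built from it
    for ref in gc.get_referrers(old_code):
        if isinstance(ref, FunctionType) and ref.__code__ is old_code:
            ref.__code__ = new_code  # allowed because the freevars match

def adder(x):
    def inner(y):
        return x / y
    return inner

adders = {3: adder(3)}
old_code = adders[3].__code__

def adder2(x):
    def inner(y):
        return x + y  # "the edit": / became +
    return inner

patch_code(old_code, adder2(0).__code__)
print(adders[3](9))  # 12 -- the previously stashed closure now adds
```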
Yes mine doesn't handle that, it is the same as jupyter there. Smalltalk is supposed to be best at interactive development, I wonder if it will update the old closures. I don't know it to try, but I do know Common Lisp which is also supposed to be quite good, and fwiw it behaves the same, new closures have the new code, but the old ones are not updated:
    (use-package :serapeum)

    (defun adder (x)
      (flet ((inner (y) (/ x y)))
        #'inner))

    (defparameter *adders* (dict))

    (defun add (x y)
      (ensure (@ *adders* x) (adder x))
      (funcall (@ *adders* x) y))
(add 3 6) ; => 1/2
(add 3 9) ; => 1/3
;; change / to + in inner
(add 4 9) ; => 13
(add 3 10) ; => 3/10
> Lisp and Smalltalk addressed this by not unwinding the stack on exceptions, dropping you into a debugger and allowing you to fix the error and resume execution without having to restart your program from the beginning. Eventually I'd like to patch CPython to support this
Yeah, I've been meaning to do this for a while as well...
I haven't started really looking into it yet, but I found this blog that looks like a good description of what exactly happens during stack unwinding in python and gets a large part of the way to resuming execution in pure python without even any native code: https://web.archive.org/web/20250322090310/https://code.lard...
Though the author says they wrote it as a joke and it probably isn't possible to do robustly in pure Python, I assume it can be done robustly as a patch to CPython, or possibly even as a native C extension that gets loaded without people needing a patched build of CPython. If you know any good resources or information about how to approach this, or start working on it yourself, let me know.
Jurigged is awesome. It works really well and saves me tons of time. Thank you for making it!
I do wish there were callbacks I could subscribe to that would notify me whenever my file changed or whenever any code changed, so I could re-run some init.
My other feature request would be a way to replace function implementations even when they are currently on the stack, as some other hot reload implementations can. But I certainly understand why this would be difficult.
> why is every damn post these days somehow framed in an AI context? It's exhausting.
It’s even in the real world now - most of my conversations with people in tech end up at AI eventually.
It kind of reminds me of the 2010s when non-tech people would ask me about crypto at social events.
In some respects that's even nicer to use than a typical editor-integrated live-coding REPL, because one doesn't have to think about what code needs (re)sending from the source to the REPL. Just save the file and it'll be figured out which parts meaningfully changed.
Jurigged is really cool - thanks for the tool!
I use jurigged in conjunction with cmd2 to make command line driven tools. The combination works well because I can issue a command, watch the output, make a change, hit up-return and see the change just like that.
Thank you a bazillion for making it. It works quietly in the background without fuss, and I'm grateful for it every time I use it.
jurigged is great, love using it for quick GUI prototyping with imgui!
And wouldn't it be nice if that Python code, instead of a string, was just more python? Then you could use your existing Python code to append, or transform sections of your code!
That's what Lisp is!
Once you see how cool that is, then you can begin to appreciate why Lisp was the defacto standard for AI programing all the way back in the 1960s!
Ah, so in Python, you have "normal code" then you have AST code. Imagine that they were exactly the same, and whenever you're writing "normal code", you're at the same time writing AST code and vice-versa.
So whenever you want, you can start using "normal code" for manipulating the "normal code" itself, and hopefully now we have yet another perspective on the same thing, for why Lisps are so awesome :)
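In Python the two layers stay separate: you manipulate "normal code" by round-tripping through the ast module rather than writing the tree notation directly. A rough sketch of code rewriting code (all names here are illustrative):

```python
import ast

src = "def double(n):\n    return n * 2\n"
tree = ast.parse(src)  # "normal code" -> AST

class MulToAdd(ast.NodeTransformer):
    # rewrite every multiplication into an addition, purely as a tree transform
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Mult):
            node.op = ast.Add()
        return node

new_tree = ast.fix_missing_locations(MulToAdd().visit(tree))
ns = {}
exec(compile(new_tree, "<ast>", "exec"), ns)  # AST -> runnable code again
print(ns["double"](5))  # 7, not 10
```

In a Lisp there is no parse/compile round trip: the tree and the source are written in the same notation.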
S-expressions are a nice gimmick, but they're not the fundamental reason why some Lisps support dynamic patching of code at runtime. Indeed, you can easily imagine a Lisp written in M-expressions that still supports that. Or you can imagine other dynamic languages with reflection gaining this capability with a bit of will, like… checks notes… Python.
I think there is potential to do something like that with template strings [1] (upcoming feature in Python 3.14). The choice of {} for interpolation isn't ideal because any code with dict and set literals becomes super awkward, but besides that, it could be super interesting for codegen.
I get that it works, and it's actually pretty cool that you can do this. But honestly, it feels like it would turn good, readable code into a tangled mess pretty fast.
There aren't two different kinds of code. That's like saying Lisp has normal code and list code.
Is there a good way to verify self-modifying code - in Lisp, or Combo (MOSES), or Python - at runtime against a trusted baseline at loader time?
Dynamic metaprogramming is flexible but dangerous. Python is also "dynamic", meaning that code can be changed at runtime instead of only being able to accidentally pass null function pointers.
Python's metaclasses function similarly to Lisp's macros but in a consistent way: most Python code uses the standard metaclasses so that the macros don't vary from codebase to codebase. The Django "magic removal" story was about eliminating surprise and non-normative metaclassery, for example.
Does this tool monkey patch all copies of a function or just the current reference? There are many existing monkey patching libraries with tests
Ok now do the part where lisp is actually used for any remotely useful project today...
Also check the history of monkey patching
I did this by accident in bash scripts many years ago when I was just getting into linux. I'd be running a script, and editing the script at the same time. It caused some REALLY weird issues before I figured out what was happening. For instance I'd change the text somewhere and it would change in the running program, or the program would get into states it should never be in. I didn't use it constructively, I just avoided editing running programs after that.
Other than Lisp, is it possible to do this in other languages?
Any language running on the JVM (Java, Kotlin, etc.) and CLR (C#, VB.NET, etc.). As long as you don't compile directly to native code and you aren't running in a locked-down environment where codegen is disabled, you can generate code at runtime and execute it alongside existing code.
For example, there's java.lang.reflect.Proxy [1] and System.Reflection.Emit [2].
[1]: https://docs.oracle.com/javase/8/docs/technotes/guides/refle...
[2]: https://learn.microsoft.com/en-us/dotnet/api/system.reflecti...
Several! You're correct that Lisp is the most famous, and there are also languages like Erlang that have this as core functionality. But it's also used in things like game engines for C/C++: you put your "updateAndRenderFrame()" function in a dynamic library and have it take a pointer to the full game state as an argument. When you want to reload, you recompile the dynamic library, the main loop swaps out the implementation, and the game keeps running with the new code. I don't see a reason why you couldn't do this in Rust, though I imagine it's trickier.
And don't forget smalltalk!
Greenspun's tenth rule, restated for Python instead of C/Fortran.
I used live patching of the function byte code to enforce type safety in python as an experiment. It was quite fun, took about a weekend or so :) not something for production though, due to the performance hit.
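For comparison, the same idea without any patching — a plain decorator that checks arguments against the annotations at call time — fits in a few lines (`enforce` is just an illustrative name; the bytecode version avoids this wrapper's per-call overhead structure but is far more involved):

```python
import functools
import inspect

def enforce(fn):
    sig = inspect.signature(fn)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            # only check plain classes; real code would handle typing generics
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError(f"{name} must be {ann.__name__}, got {type(value).__name__}")
        return fn(*args, **kwargs)
    return wrapper

@enforce
def scale(x: int, k: int) -> int:
    return x * k

scale(2, 3)      # fine
# scale(2, "3") -> TypeError: k must be int, got str
```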
One of the problem with dynamically loaded code like this is when you raise exceptions in those functions - the traceback will then end with something like `File "<magic>", line 8, in something`, which will at least annoy you when debugging.
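One mitigation is to register the generated source with linecache, which is what traceback formatting consults for source lines. A sketch (the `<hot-reload:...>` pseudo-filename is made up):

```python
import linecache

src = "def patched(y):\n    return y * 2\n"
filename = "<hot-reload:patched>"  # made-up pseudo-filename

# linecache cache entries are (size, mtime, lines, fullname);
# mtime=None tells checkcache to leave the entry alone
linecache.cache[filename] = (len(src), None, src.splitlines(True), filename)

ns = {}
exec(compile(src, filename, "exec"), ns)
patched = ns["patched"]
# a traceback through patched() can now show "return y * 2" instead of nothing
```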
I've contemplated patching Python's asyncio package so that the list of funcs isn't a weakref... but it's not that hard! This is way harder than it needs to be.
I'm not sure what the point of this blogpost was. As far as I can tell, the author discovered eval() but is making it more complicated for no reason? There also isn't any actual patching going on.
There is a reason similar approaches are called 'monkey patching'.
Just cause you can do something doesn't mean you should. I send thoughts and prayers for the people debugging programs where this is in place.
I really couldn't follow the use case. It looked like they have a chain of method calls, something like `mark_circle().encode().properties()`. OK, so if you want to make those methods do something different, you reach into OOP wisdom from the 1980s and write an appropriate "impl" used by whatever `alt.Chart()` is.
Someone explain to me, an old, aging programmer old enough to know UML, why this isn't some (we presume very young) person who has no idea how to write OOP coming up with a horribly convoluted way to do something routine?
> aging ... UML
Oh don't worry... they still cram that down our throats in CS undergrad in one of the courses.... Forget which one. I did my UG from 2020-24
;)
Different, but you may find interest in changing function signatures, too: https://github.com/dfee/forge
Ah, self-modifying code, the more things change the more they stay the same.
Wasn't SMC one of the LISP-associated AI fields a few decades ago? iirc it's been mostly abandoned due to security issues, but some of it survives in dynamic compilation.
this will be the next big breakthrough for agents lmao
Monkey patching python modules is waaaaay more straightforward than this.
setattr(mod, name, new_func)
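True, though the catch — and the reason for the __code__ dance — is that setattr only rebinds the module attribute; anything that already grabbed a reference keeps the old function. A quick sketch of the difference (the module name `m` is arbitrary):

```python
import types

mod = types.ModuleType("m")
exec("def f():\n    return 'old'", mod.__dict__)

saved = mod.f  # a reference taken before patching

def new_f():
    return "new"

# plain monkey patching: rebinds the module attribute only
setattr(mod, "f", new_f)
print(mod.f())   # 'new'
print(saved())   # still 'old' -- the stashed reference is untouched

# patching the code object instead updates every reference in place
saved.__code__ = new_f.__code__
print(saved())   # 'new'
```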
Another use case is applying small patches to a large function or method. Overriding would mean copy pasting largely similar code. It’s a bit ugly.
https://github.com/eidorb/ubank/blob/master/soft_webauthn_pa...
Aka I just learned how to modify python code whilst it's running.
Next step, here's how to load modules, resolve a dependency. Handle capabilities and dynamically inject more functionality 'live'.
Patching running machine code in memory for compiled objects is the same thing; you just need to work around the abstraction introduced by languages that try to make the whole stack human-parseable.
Concurrency?
Wait a sec... I thought you could already monkey patch Python code. Can you only do it using this technique?
Imagine if you used Lisp to make an IDE, in a closed-loop integration with an LLM that extends and tests this IDE to achieve the task at hand.
Would it be much different than a REPL?
Python being riddled with security anti-patterns -- or at least security-unfriendly ergonomics -- is one reason I tried to stop working with it.