The whole README is heavily AI-edited (the final output reads as entirely AI-written), and the worst part is that the image diagrams appear to be generated (likely with 4o), see for example
https://github.com/MaliosDark/wifi-3d-fusion/blob/main/docs/...
"Wayelet CSi tensas"
That makes me question the authenticity of the project.
The badges are ridiculous. There’s a YAML badge in there.
The interface and code smack of Claude. It's basically someone's AI pet project wrapping legitimate third-party tools.
This "wrapping 3rd party tools" thing is a weird kind of critisizm to me. Like name 1 project that could not be described that way?
Blender, Godot, Audacity, Firefox, Git, Linux, ... I could name 100 projects that could not be described that way. Most couldn't. There are only a few projects I can think of that really are just wrappers (even though they add a lot of value), e.g.:
* Handbrake, wraps ffmpeg (it does more stuff but that's the main thing most people use it for)
* Ollama, wraps llama.cpp
That repo has more GitHub badges than a North Korean general has medals on their uniform...
lol!
Here is a link to a video showing (approximately) what the output looks like.
We built this system at the UofT WIRLab back in 2018-19 https://youtu.be/lTOUBUhC0Cg
And link to paper https://arxiv.org/pdf/2001.05842
I do actually really want this, to integrate into Home Assistant. I don't want to have to put a bunch of mm-wave detectors around the house to see where people are; I want to use the emitters and receivers I've already got. The current alternatives aren't that great.
I'm dying to know though, what's the practical resolution like? Can it tell the difference between my cat and a bag I dropped, or is it more like "a blob moved over there"?
OK, let's say I'm making a robot spider in my garage, half the size of a Tesla and with as much horsepower. I'm using Nvidia's new Jetson as the brain. If I use enough of these, can I replace a lidar package for autonomous control?
On one hand, the potential privacy invasions enabled by this technology (e.g. Xfinity (of course Comcast) a few months ago[1]) are pretty scary.
On the other hand, the technology seems potentially extremely useful. I've had an interest in pose estimation for many years, but doing it with normal cameras seems tricky to do reliably because of the possibility for visual occlusion (both from the body itself and from other objects). I'm curious to see if I can use this for something like tracking my posture while I use my computer so I can avoid back pain later in life.
If you want good posture and want to prevent back problems and pain, just do resistance-based training (get a good coach and/or physical therapist to get started). There are a lot of exercises for strengthening the back and the neck in particular. It is never too late to start.
Posture (how you position your body) isn't the cause or prevention of back pain.
Your muscles need strengthening, strengthening comes from movement, movement comes from mobility.
But you are right in that it is an interesting hammer to find nails for.
I'm interested but am also incredibly dubious. Not because it seems impossible but the opposite. On one hand, an open source repo like this taking a hackable, extensible approach should be praised, but the "Why Built WiFi-3D-Fusion" section[0] gives me very, very bad vibes. Here are some excerpts I especially take issue with:
> "Why? Because there are places where cameras fail, dark rooms, burning buildings, collapsed tunnels, deep underground. And in those places, a system like this could mean the difference between life and death."
> "I refuse to accept 'impossible.'"
WiFi sensing is an established research domain that has long struggled with line of sight requirements, signal reflection, interference, etc. This repo has the guise of research, but it seems to omit the work of the field it resides in. It's one thing to detect motion or approximately track a connected device through space, but "burning buildings, collapsed tunnels, deep underground" are exactly the kind of non-standardized environments where WiFi sensing performs especially poorly.
I hate to judge so quickly based on a readme, but I'm not personally interested in digging deeper or spinning up an environment. Consider this before aligning with my sentiment.
[0] https://github.com/MaliosDark/wifi-3d-fusion/blob/main/READM...
What I want to know is whether you need multiple senders and receivers, or can you just run it on an ESP32 and it can visualize? Usually these setups need a sender and a receiver to make sense of it all?
I didn't see any reference to a sender or actively blasting RF from the same access point. I think the approach relies on other signal sources creating reflections to a passively monitoring access point and attempting to make sense of that.
Seems like it is based on this paper from CVPR 2024:
https://aiotgroup.github.io/Person-in-WiFi-3D/
Frankly I'm shocked it's possible to do this with that level of resolution.
5 GHz WiFi has a wavelength of ~6 cm and 2.4 GHz ~12.5 cm. Anything achieving finer resolution than that is the result of interferometry or a non-WiFi signal. Mentioning this might not add much substance to the conversation, but it felt worth adding.
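For anyone who wants to sanity-check those numbers, it's just the free-space wavelength formula λ = c / f. A quick back-of-the-envelope snippet (not from the repo, just plain physics):

    # free-space wavelength: lambda = c / f
    c = 299_792_458  # speed of light in m/s
    for f_ghz in (2.4, 5.0):
        wavelength_cm = c / (f_ghz * 1e9) * 100
        print(f"{f_ghz} GHz -> ~{wavelength_cm:.1f} cm")
    # 2.4 GHz -> ~12.5 cm
    # 5.0 GHz -> ~6.0 cm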
This resolution is probably enough, as they use human skeleton pose estimators and human movement pattern detectors too.
The US military has been using tech like this for years. Some public, some not. The stuff that isn't public is supposedly pretty good (bits and pieces of info have slipped out in various publications).
If you’re interested in this stuff, check out Lumineye.
I scrolled through two pages of badges and hit counters. I have to be honest, that makes me very scared to run the underlying code.
This is what 1998 felt like.
Github is the new geocities!
The UI looks like it was built by a Hollywood set designer