Show HN: AI Baby Monitor – local Video-LLM that beeps when safety rules break
Hi HN! I built AI Baby Monitor – a tiny stack (Redis + vLLM + Streamlit) that watches a video stream and a YAML list of safety rules. If the model spots a rule being broken, it plays a beep sound, so you can quickly glance over and check on your baby.
Why?
When we bought a crib for our daughter, the first thing she tried was climbing over the rail :/ I got a bit paranoid about constantly watching over her, so I thought of a helper that can *actively* watch the baby, while parents could stay *semi-actively* alert. It's meant to be an additional set of eyes, and *not* a replacement for the adult. Thus, just a beep sound and not phone notifications.
How it works
* *stream_to_redis.py* captures video stream frames → Redis streams
* *run_watcher.py* pulls the latest N frames, injects them + the rules into a prompt and hits a local *vLLM* server running *Qwen 2.5 VL*
* Model returns structured JSON (`should_alert`, `reasoning`, `awareness_level`)
* If `should_alert=True` → `playsound` beep
* Streamlit page displays both camera and LLM logs
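To make the loop concrete, here's a minimal sketch of what one watcher iteration could look like. This is not the actual run_watcher.py: the Redis stream key, the JPEG field name, the rules file, the prompt wording, and the model name/size are all assumptions, but the Redis, OpenAI-compatible vLLM, and playsound calls are standard.

```python
# Minimal sketch of one watcher step (assumed names: "frames" stream,
# b"jpeg" field, rules.yaml) -- not the project's actual code.
import base64
import json

import redis
import yaml
from openai import OpenAI
from playsound import playsound

r = redis.Redis()
llm = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local vLLM server
rules = yaml.safe_load(open("rules.yaml"))

def latest_frames(n=4):
    # Newest-first entries written to the stream by stream_to_redis.py
    entries = r.xrevrange("frames", count=n)
    return [base64.b64encode(fields[b"jpeg"]).decode() for _, fields in reversed(entries)]

def check_once():
    content = [{"type": "text",
                "text": "Safety rules:\n" + yaml.dump(rules) +
                        "\nReply with JSON: should_alert (bool), reasoning, awareness_level."}]
    content += [{"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
                for b64 in latest_frames()]
    resp = llm.chat.completions.create(
        model="Qwen/Qwen2.5-VL-7B-Instruct",   # model size is an assumption
        messages=[{"role": "user", "content": content}],
    )
    # Assumes the model follows the JSON instruction in the prompt
    result = json.loads(resp.choices[0].message.content)
    if result.get("should_alert"):
        playsound("beep.wav")
```

The real watcher also logs the model's reasoning so the Streamlit page can display it, but the shape of the loop is the same.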
I'm dumbfounded at the negative comments in here. Read the GitHub readme: it's a hobby project for OP, not some crazy commercial venture, and he makes it clear it's meant to help in the event one gets distracted for a few seconds. The only valid comment is the one about his daughter likely being too big for her crib, but sheesh, lay off OP's approach to actually trying something. PS: I am not OP's alt account.
Cheers!
Yes, I tried hard to stress that it's in no way a replacement for an adult, just an additional set of eyes as a safeguard.
The only negative comment that is perhaps overly negative is the one mentioning "killing babies" (but they make a good point still).
Everyone else's criticism is effectively entirely warranted, even if it's just a hobby project, if only because the criticisms are mostly pretty mild and reasonable things like "don't try to automate parenting with tech that still isn't ready".
Tools that can cause harm deserve scrutiny, regardless of whether OP makes money or not.
Also, who knows? With enough popularity, OP might launch this into a startup.
> Tools that can cause harm deserve scrutiny
How on earth would this project, by itself, cause harm?
Sure, inattentive parents can lead to children hurting themselves, but that can happen while browsing Gmail, or even when Frigate is set up. Does that mean Gmail and Frigate cause harm to children? Obviously not.
Gmail and Frigate aren't AI Baby Monitors. If you use Frigate as a baby monitor, it's absolutely worth scrutinizing your setup. The exact same thing can be said of a human nanny too. You wouldn't hire someone with no research.
No, but what about people planning terror attacks via Gmail? Does that count as Gmail being a "harmful" tool? You seem to have missed my point completely.
Yeah, which is why Google scans your messages for illegal content. Not sure if they scan for terrorism, but they certainly do for CSAM.
I had this same idea for monitoring my pool while I’m away. Watching for things in pool, low or high water levels, cloudy water, stray dogs, etc.
There are actually hundreds of applications for this basic idea. Common sense applied to a video feed.
try Watchman: https://news.ycombinator.com/item?id=44087499
I am the developer and happy to answer questions.
You can basically set up your own instructions and your own observation solutions... you can imagine everything from security to farm operations; the sky's the limit.
I want this for pedestrians and cyclists at intersections, and particularly HAWK beacons.
https://www.roundabout.tech/ tackles this problem
There’s an entire Curb Your Enthusiasm season about that.
(Just get a fence is the conclusion)
I've thought of the same for my lake shore, which is very much not a "just build a fence" situation.
We have 50 ft of lake shore with neighbors on either side. Assuming I fenced my 50 ft there is still a path around said fence on either side.
At the very most I could gate the dock, but again, there are about 8 other docks readily available.
Except it’ll fail and a kid will die. Please keep physically blocking off pools.
Eh, I don't think anyone is saying they are going to stop blocking off pools...
I had a friend who almost lost his kid in a pool with a self-closing gate: an object happened to get caught in the latch when someone was leaving the pool, and their 2-year-old capitalized on this and almost got themselves killed. They had to perform CPR and rush to the hospital.
A backup system that could alert is just another layer of the security onion.
Are we moving to a world where so much of our camera footage will be fed to AI for analysis?
Love this!
A long time ago I built a cat detector to keep my cat out of the baby play area, this was before modern AI systems and I'm sure it could be so much better now. https://www.twilio.com/en-us/blog/baby-proofing-raspberry-pi...
Reminds me of: https://www.youtube.com/watch?v=uIbkLjjlMV8
That would work great to keep babies from peeing on things, too!
Sorry, this is an interesting but terrible idea.
If your crib is unsafe or undersized, the solution is to make it safe by adding guardrails or buying a safer, correctly sized model. Or remove the hazard entirely by putting their bed on the floor.
Adding AI/tech to catch deficiencies is not the right way to go about it. You're willing to risk injury/death in case the AI is wrong, you don't hear the beep, or there are too many false positives and you end up ignoring it?
This is the same thing as self-driving. It seems good enough that it takes your attention away - until it doesn't. And it's a general-purpose AI, not a model specifically trained to catch the issue.
I usually replace all my fuses with plain wire, since I know not to plug broken things in in the first place.
Way too many people get hung up on the idea OP led with.
What OP has is a fully self-hosted, private video feed that can alert on more sophisticated events like:
- is anything happening that shouldn't be?
- are things not where they're supposed to be?
- has anything fallen? (plants, things into the pool)
Good job OP. I'm going to take a look at this in the next couple of months.
What a cool project with local processing! Will check it out, thanks!
What about hardware?
Absolutely. I want a "Cat on the Counter" detector, but a) the hardware needs to be cheap, and b) it can't take more than a few seconds to analyze a frame.
Totally doable! Raspberry pi and YOLO.
Especially doable since the owner can probably get lots of pictures of their cat in different poses and lighting conditions and really overfit on their cat instead of just any cat.
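For what it's worth, the fine-tune-on-your-own-cat part is pretty approachable nowadays. A rough sketch with the ultralytics package; the dataset YAML, file names, and training settings here are placeholders, not the only or recommended way to do it:

```python
# Rough sketch: fine-tune a small YOLO model on your own labeled cat photos.
# "my_cat.yaml" describes a hypothetical dataset (train/val folders, one class).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                              # small pretrained checkpoint
model.train(data="my_cat.yaml", epochs=50, imgsz=640)   # points at your annotated images
results = model("counter_cam.jpg")                      # detect on a test frame
print(results[0].boxes)                                 # boxes for anything it found
```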
How would you go about tuning a model using your own images? Any easy/clear methods (e.g. $ llama.cpp -tune ./mycatpicturesfolder )?
+1
Gemma 3n looks promising.
Btw, it doesn't really sound like this problem needs video as input to the LLM. Feels like sending an image is okay, which would make it less demanding(?)
How reliable is this, i.e., what is the failure rate? False positives / negatives?
It's a bit tricky; the false positive rate is not ideal, and it does wrongly beep from time to time. I haven't really had a serious false negative, but I did have some true positives :)
About the hard numbers, it's tough to test it quantitatively, because there's not a lot of data for babies in danger :D and I hope it stays that way
In general, I'm hoping that the open models will get better, there has been a lot of acceleration in video modality recently
> When we bought a crib for our daughter, the first thing she tried was climbing over the rail
Maybe get a proper crib then?!
IANAL, but I would be scared of getting sued. For example, if I try to give a perfectly good car seat to Goodwill, they refuse to take it for liability reasons. Baby safety is serious business.
Yeah, it's an interesting project, but it seems there are lower-stakes use cases that should be tried first.
> Baby safety is serious business.
Are "regular" baby monitors any more complicated than a dumbed-down, cheapest-you-can-build-it walkie-talkie? Society really needs to stop wanting other people to be responsible for their actions. The choice of what devices you use on your kids should first and foremost be on you, AI or no AI. Fear-mongering with a literal "someone think of the kids" is getting old, IMO.
I don't agree this is a "think of the children" issue. Nobody is saying "don't use this on your kids"; they're saying "understand that by sharing this you might be exposing yourself to potential financial consequences." Baby safety is serious business.
* Summer Infant Baby Monitor Overheating Settlement – $10 million after reports of overheating monitors leading to fire hazards.
* Angelcare Monitor Recall Lawsuit – $7 million settlement due to defective cord placement that led to strangulation risks.
* Levana Baby Monitor Overheating Lawsuit – $5.5 million awarded in cases of monitors causing burns to children.
* VTech Baby Monitor Battery Defect Settlement – $6.2 million after reports of exploding batteries causing fire risks.
* Motorola Baby Monitor Signal Failure Class Action – $4.8 million settlement after claims of poor signal reception leading to missed emergencies.
* Owlet Smart Sock Monitor Lawsuit – $6.5 million awarded due to inaccurate heart rate readings that caused false alarms and panic among parents.
* Graco Digital Monitor Lawsuit – $5 million settlement after a lawsuit citing defective monitors that stopped functioning during critical moments.
* Philips Avent Baby Monitor Lawsuit – $4.2 million after several reports of overheating and potential fire hazards.
* Samsung Baby Monitor Fire Hazard Settlement – $3.5 million awarded due to incidents of overheating leading to home fires.
* Infant Optics Monitor Class Action – $4 million settlement after claims of faulty batteries and wiring causing sudden shutdowns during use.
https://www.personalinjurysandiego.org/product-liability/saf...
I looked for primary sources on two randomly selected ones there, the VTech Baby Monitor Battery Defect Settlement and the Motorola one, and I didn't find anything. Only the linked site. I feel like GPT just invented those lawsuits.
Just checked Infant Optics Monitor Class Action and also didn't find anything.
Shoot, you're totally right. Horrifying. I should know better by now not to let my guard down like that.
He's not selling a product; he's sharing a library, which includes his terms and conditions.
This is such a fun project! I am curious, what hardware did you use? Or is the LLM hosted on a remote server?
Hey! It's fully local; I was trying to build it privacy-first. Everything related to kids is very sensitive, so I didn't want to send anything to the cloud.
But you can still run the inference remotely; changing that should be just a matter of pointing it at a different address.
This sounds interesting; I will give it a try this coming weekend.
Pretty cool - I might try this with my kiddo
Will it be able to process sounds, so it can beep when the baby is crying? I work with music in headphones, so that would be nice.
This could probably be achieved very simply by a device with a microphone and a script that just checks the noise level. Either checking for consistently high noise levels, or if you want to get fancy, maybe do some heuristic stuff to pick out crying specifically.
Might be a cool project to do with a cheap microphone and an SBC.
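Something like this could be nearly the whole script, assuming the sounddevice and numpy packages; the threshold and block length below are made-up numbers you'd tune for your microphone and room:

```python
# Crude "is it loud for a while?" detector -- thresholds are arbitrary placeholders.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000
BLOCK_SECONDS = 1.0
RMS_THRESHOLD = 0.05      # amplitude level; tune for your mic and room
SUSTAINED_BLOCKS = 3      # require a few consecutive loud seconds before alerting

loud = 0
while True:
    block = sd.rec(int(SAMPLE_RATE * BLOCK_SECONDS), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()                                   # block until the second is recorded
    rms = float(np.sqrt(np.mean(block ** 2)))
    loud = loud + 1 if rms > RMS_THRESHOLD else 0
    if loud >= SUSTAINED_BLOCKS:
        print("Sustained noise detected - possible crying")
        loud = 0
```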
This is how a lot of baby monitors do it. Mine goes off if someone with a loud muffler drives by too, which is convenient because my son will frequently be awakened by that, so I get a little extra time.
I remember when we were preparing to have our first kid 10 years ago, we bought a monitor with the pad under the mattress on the advice of other parents. As a former CPR instructor, I was dumbfounded when I realised that parents would spend hundreds on monitors, but not one person had bothered to learn infant CPR.
Any reliable places online to learn? Or is it a course that you should only do in person?
I do not know about infant CPR, but I would never trust myself to administer CPR to even an adult after only watching something online.
I have had regular first responder refresher courses at work every two years, and I have to say: I always relearned something, because not having done it (luckily no one needed my first aid) meant I had forgotten quite a bit in those two years.
Especially how it feels to administer CPR, and how to position the person.
So, not sure where you are located, but in Germany you can volunteer to become a first aid responder for your company. They love that, because from a certain size on they must have enough of us. And you get a certificate every time you retrain (you need to do this regularly, every two years, as said).
We even once had a baby puppet for training, but I was not able to test it that time (as it was not mandatory, I wanted the parents, or those planning for kids, to actually have the chance).
CPR is one of those things you hope you never have to be the best in the room at.
Thank you for volunteering.
The hospital we had our kids at had a baby CPR class and an expectant parent class. The CPR class didn’t give you a certification but you learned what to do for CPR and choking.
I did an in-person class. I highly recommend it.
Great idea! Is it possible to run this on the latest MacBook Pros? No GPU unfortunately :)
The latest MacBook Pros all have GPU cores built in.
Why Qwen 2.5 specifically?
If your daughter can climb out of the crib, then she is too big for the crib.
Also, if I accept your premise that a baby sleeping is inherently dangerous, then this is just an added layer of safety. It doesn’t remove safety.
But really, safety is a highly lame way of framing this. What you want is more hours asleep, so you'd have this thing try to hypnotize the baby with lights, sound, and vibration, and only alert the parent(s) if it fails. Eventually it could just straight up talk to your three-year-old, read it a second bedtime story, dispense a cup of water, convince it not to hang out with the bad kids at school, etc.
How about judging the software rather than speculating on what's in OP's mind?
A small pilot study for more efficient nanny state technology. Just add scale.
The state is several steps ahead of you. Have you wondered why the push for AI all of a sudden, everywhere, all at once, really quite far ahead of its actual capabilities to help most of us? Huge monies thrown at it? An effectively-instantaneous turnaround on decades of anti-nuclear sentiment just to power AI?
It's not because it's going to be good for us.
"Don't be snarky."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
p.s. If your comment was intended as a joke on the word 'nanny' then I take this back.
I don't mean it to be a dismissal or snark. It wouldn't be as terrifying if I could do that. On the contrary it is remarkable how directly this exact baby monitor model can be scaled up to an automated intelligent panopticon.
(Thank you, yes, baby monitor -> nanny state)
At least give them the benefit of the Doubtfire!
You hardly need AI at the state scale. They can just process footage at regular speed.
As a parent of several kids, I have no idea what the use case is for this. When does it ever happen that a kid too small to be safe by himself has to be left alone, with a camera to record him? My eyes are a good enough camera, and my brain holds a reasonable list of safety rules. What problem does this solve that isn't already near-perfectly solved by my existence?
You never ever sleep? Did your children have an awake adult watching them 24/7 all year round? I doubt most people can manage that, since it would require a team of people taking shifts to sit and watch the child.
Why would you need to have someone awake 24/7 while your child sleeps?
> When does it ever happen that a kid too small to be safe by himself has to be left alone, with a camera to record him?
Nap time? Overnight?
What about the AI playground monitor?:
https://news.ycombinator.com/item?id=44087499
I am the developer :)
That is a demo of course, but I think what sets LLM tools like this apart from what came before is that, implemented correctly, the user gets to decide what it is and can change what it should be looking for at any time.
That is, of course, if the solution is implemented correctly.
There is immense potential for these types of capabilities if they are done in a way that leaves the specific use-case implementation up to users.
The same set of problems that is solved by an audio or AV baby monitor.
Surely you didn’t have the baby in the same room as an awake adult 24 hours per day for 730 straight days, right?
Perfect, now we can put more well-meaning parents in jail for "neglect". For every good idea, a bad idea will be born.
Even if this fails only in 0.1% of cases, given enough users this will kill babies.
(Of course you should compare this to humans, but in any case, do the math! And get a good lawyer.)
This thing will not kill babies. That's like saying seatbelts kill people because they don't save everyone in a car accident. It is precisely this attitude that prevents good things from flourishing - the idea that if something is involved, however tangentially, in a safety-critical subsystem, it must have perfect results. No, we should not have this view. If something is net positive, we should promote it.
Tesla autopilot has some disclaimers saying you should always be prepared to grab the wheel, and we all read the stories in the news ...
In other words, if you want AI to take over tasks from humans, you either do it well or not at all.
It's not taking over tasks, it's helping to support those tasks. For the times when you might not be looking directly into the camera, it attempts to tell you about anything suspicious.
Tesla autopilot could steer you into a concrete wall, a baby monitor with some LLM attached to it does not directly harm the child.
This is stupid
When people decide that the nannycam works, they will rely on it. Then, when it fails, their inaction will kill babies.
It is amazing/horrifying to me how many people are intent on reinventing the reasons why we have UL, the FDA, the FCC, traffic laws, seatbelts, electrical codes, fire marshals and unions.
What inactions? Apart from creating safe conditions beforehand (but perhaps that is your point), once my kid is asleep there's not much more I can do? Most of that time I'm sleeping myself.
Exactly, there's not much more you can do when you're asleep. However, this system might just nudge you awake if something happens while you're asleep. Or it won't, and you'd be no worse off than if you didn't have it.
It's not going to solve all problems.
Parent 1: what's that sound? Should we check it?
Parent 2: Nah, the baby monitor would have warned us.
But I love babies!!!
(I just can't eat a whole one.)