this post was submitted on 07 Mar 2026
312 points (96.2% liked)

Selfhosted


My wife needed a cycle tracker. Everything out there was either Flo (which got sued twice for sharing health data) or an abandoned GitHub project. So I built Ovumcy. Single Go binary, SQLite, Docker-ready. No analytics, no third-party APIs, no cloud. Your data stays on your server. Features: period tracking, symptom logging, predictions (ovulation, fertile window), statistics, CSV/JSON export, dark mode, Russian and English. Just pushed v0.2.5. Looking for feedback from real users.

[–] rimu@piefed.social 39 points 12 hours ago (3 children)

I was going to recommend this to someone I know, but when I realised your readme.md is entirely AI-generated, I figured the whole project is probably vibe-coded. I can't in good conscience recommend that someone trust their health data to a vibe-coded app, because they tend to have security problems.

Also, all AI-generated code is public domain, so your AGPL license is kinda empty. Might as well use MIT.

[–] mortalic@lemmy.world 5 points 6 hours ago (1 children)

Thanks for doing this, I was debating doing the same. It needs to exist.

[–] rimu@piefed.social 2 points 5 hours ago (1 children)

F-Droid has Drip, Bluemoon and Periodical.

[–] terraincognita@lemmy.world 0 points 4 hours ago

Yes, I’m aware of those apps. They’re great local-first mobile trackers. Ovumcy explores a slightly different approach - a self-hosted web app that can run on infrastructure you control and be accessed from multiple devices.

[–] terraincognita@lemmy.world 37 points 11 hours ago (4 children)

I do use AI tools while developing this project, but I also have a BSc in Computer Science. AI is a productivity tool.

Security is something I take seriously, especially since the project deals with health data. All code has tests, and you're welcome to inspect the repository yourself or point out any specific security concerns you notice.

Regarding licensing: the AGPL license applies to the project as a whole regardless of the tools used to write parts of the code.

If you have concrete technical feedback or security issues, I’d genuinely appreciate it.

[–] IanTwenty@piefed.social 2 points 3 hours ago (1 children)

The danger being raised with the licensing is that you can't license something if you're not considered to be the author. There are growing examples of courts and lawmakers determining AI output to be public domain:

The US Supreme Court recently refused to reconsider Thaler v. Perlmutter, in which the plaintiff sought to overturn a lower court decision that he could not copyright an AI-generated image. This is an area of ongoing concern among the defenders of copyleft because many open source projects incorporate some level of AI assistance. It's unclear how much AI involvement in coding would dilute the human contribution to the extent that a court would disallow a copyright claim.

https://www.theregister.com/2026/03/06/ai_kills_software_licensing/

This is an evolving, global situation, and it's hard to know what to do right now. I think what you've got is fine, though: you've made it clear your intention is to license under the AGPL. It's just that, depending on the jurisdiction, it might be public domain instead.

This is another reason to be clear about the use of AI in the README so your users can make an informed decision.

[–] terraincognita@lemmy.world 1 point 3 hours ago

I agree, though there is a difference between the case you provided and mine: mine is a human-directed work. Thousands of libraries, Kubernetes included, still live on, and their licenses remain valid.

[–] militaryintelligence@lemmy.world 5 points 6 hours ago (2 children)

How does AI help with productivity? I've gotten so many false answers that I quit trusting it

[–] moriquende@lemmy.world 4 points 4 hours ago

Because it's able to write boilerplate faster than a human, and to perform refactorings that IDEs or regexes can't, since those tools lack awareness of code structure. Also, you can ask it to review your files, and it does find bugs that would otherwise be missed at first. There's a huge difference between vibe-coded slop and using the tools available to you effectively.

[–] prenatal_confusion@feddit.org 7 points 5 hours ago

Imagine you are on the ground under your car and need a different tool. You ask for it and somebody hands it to you. That person is young and inexperienced. It is up to you to check whether it's the right tool, and if not, to pass it back (and, in this example, tell the person about the error and help them correct it).

And sure, you can always crawl out and get the tool yourself; sometimes that is the only option, and in coding terms it is, in my opinion, best practice. But you can be faster with your helper. Use it appropriately and see how it affects your work. And that's the point: your work. Don't pass responsibility or thought off to the AI.

[–] sonofearth@lemmy.world 25 points 9 hours ago* (last edited 9 hours ago) (5 children)

You should add a disclaimer stating that you used an LLM. I did that for a tool I needed and built with an LLM, because I don't know jackshit about coding and I'm not gonna pretend I do.

[–] chicken@lemmy.dbzer0.com 5 points 3 hours ago* (last edited 3 hours ago)

because I don’t know jackshit about coding and I am not gonna pretend I do.

But if OP does know how to code and applies that knowledge to what they're doing, it's not the same thing, and the same disclaimer doesn't make sense.

[–] terraincognita@lemmy.world -4 points 9 hours ago (1 children)

You can see that I use some metrics, like test coverage and estimates, to show that this is a potentially serious project that will grow beyond a pet one.

[–] Tibi@discuss.tchncs.de 1 point 3 hours ago

Test coverage from AI-generated tests is close to worthless. "Tests are only as good as the person writing them."
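To illustrate with a made-up helper (nothing from Ovumcy): a generated test often only checks that a result looks plausible, which still passes when the logic is wrong, while a test written against the actual spec catches the bug:

```go
package main

import "fmt"

// cycleDay is a hypothetical helper that should return the 1-based day of
// the current cycle. It has an off-by-one bug: it returns a 0-based value.
func cycleDay(daysSinceStart, cycleLen int) int {
	return daysSinceStart % cycleLen // bug: should add 1
}

func main() {
	// Typical generated test: only checks the result is "in range", so the
	// bug slips through while coverage still looks complete.
	if d := cycleDay(0, 28); d >= 0 && d < 28 {
		fmt.Println("generated test: PASS")
	}

	// A test written against the spec: the first day of a cycle is day 1,
	// not day 0. This one catches the bug.
	if d := cycleDay(0, 28); d != 1 {
		fmt.Printf("spec test: FAIL (got %d, want 1)\n", d)
	}
}
```

Both tests execute the same line, so coverage is identical, but only the second one notices anything is wrong.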

Did you generate your tests?

[–] terraincognita@lemmy.world 0 points 9 hours ago

Partially agree, but I do know how to code and use it as a tool.

[–] dogs0n@sh.itjust.works -4 points 4 hours ago

I'm guessing you let the AI make the tests and everything, which wouldn't give me much reassurance that any of the code is good. Sadly, AI will jump through any hoop it can to get tests to pass if it can't get the code working.

I think people who let AI run wild to create a whole app should write the tests themselves, or at least only with line completion (judging by a quick look at the project files, I'm guessing an AI did everything).

Could be food for thought?

[–] CameronDev@programming.dev 7 points 11 hours ago (1 children)

Charitably, it could be an AI readme and hand-rolled code, but it's definitely a smell.

[–] rimu@piefed.social 12 points 11 hours ago (4 children)

Yeah there are other signs too. Look at those commit messages, all vague, all perfectly capitalized. All with a nice long description with bullet points.

No one does that in a project they're building for themselves.

[–] EdTheMessenger@lemmy.world 7 points 7 hours ago* (last edited 7 hours ago)

Judging code quality by the use of an LLM for documentation and commit messages is weird.

While I write all of my code myself and I'm against vibe coding etc., there are a few places where I let an LLM write for me: readmes, commit messages and Javadoc comments.

I know how to write code, but I'm shit at writing in my native language and even worse in English. So I let language models write natural-language text for me and just fix it when necessary. My documentation is clearer, grammatically correct and more detailed than in any of my previous projects, and I can focus on writing code.

And I wouldn't say "no one does that in a project they're building for themselves": I do it for projects that only I will ever see. And OP shared his project with others, so it's great that he included clear documentation.

[–] helix@feddit.org 1 points 5 hours ago

No one does that in a project they’re building for themselves.

Speak for yourself; I've always done that, and I find it easier with LLMs nowadays.

I hate most AI shite with a passion but when it helps my colleagues write commits which are more than "add stuff", "fix some things" I'm fine with it.

I rarely use AI to generate code, usually only when I need a starting point. It's much easier to unfuck AI code than to stare blankly at a screen for an hour. I'd never commit code I don't fully understand or haven't read to the last byte.

I hope OP is doing the same. LLMs fail at 90% of coding tasks for me, but for the other 10% (mostly writing tests, readmes and boilerplate) they're really OK for productivity.

Ethics of LLMs aside, if you use them for exactly what they're built for – being a supercharged glorified autocomplete – they're cool. As soon as you try to use them for something else like "autocompletion from zero" aka "creativity", they fail spectacularly.

[–] terraincognita@lemmy.world 0 points 11 hours ago

I answered earlier that I use AI, and this is just a commit skill for an agent.