Wild how many folks vibe code a thing and then claim to have created something that they ask us to plug into critical infrastructure with the ability to read, write, and execute.
We have definitely used AI, just like everyone else, but we are senior, with 4+ years of experience. Also, Gitmore doesn't have the ability to read, write, or execute your code. We only get data from webhooks, which is commit/PR info with no code. Thanks for your attention.
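To make the claim concrete: a GitHub push-event webhook delivers commit metadata, not file contents. This is a hypothetical sketch (the `summarize_push` helper and trimmed payload are illustrative, not Gitmore's actual code) of what such a consumer would see, using field names from GitHub's push event payload:

```python
def summarize_push(payload: dict) -> list[dict]:
    """Extract only the metadata fields a webhook consumer would store."""
    return [
        {
            "sha": c["id"],
            "message": c["message"],
            "author": c["author"]["name"],
        }
        for c in payload.get("commits", [])
    ]

# Example payload trimmed to the relevant fields. Note there is no
# source code anywhere in it, only references to what changed.
example = {
    "ref": "refs/heads/main",
    "commits": [
        {
            "id": "abc123",
            "message": "Fix login bug",
            "author": {"name": "Jane"},
            "modified": ["auth/login.py"],  # file paths only, not contents
        }
    ],
}

print(summarize_push(example))
# → [{'sha': 'abc123', 'message': 'Fix login bug', 'author': 'Jane'}]
```

Whether the stored tokens could be used to fetch code via the API is a separate question from what the webhook itself carries.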
"Just like everyone else" is a crazy take on AI, no part of AI has ever touched my production environment nor will it ever. 4 years is also not senior at anything in my experience.
Of course I haven't asked every dev that exists whether they use AI, but most companies/devs do. You choose not to use it; good luck with that.
You may be mistaken. There is a good (and growing) number of folks who have noticed that vibe-coding isn’t always the right fit. There have also been a number of instances where AI agents have destroyed production environments.
In your 4+ years of “experience,” have you acquired the experience necessary to protect anybody’s GitHub, Slack, or any other enterprise systems from the numerous security concerns that you’re just hand-waving away?
Not all “devs” use AI, and very few companies would trust a fully vibe-coded enterprise system plugin with no security team, no enterprise support, no GDPR documentation, and all fielded by a team with fewer than five years of experience.
That seems like the path to breaches, or to having an agent destroy sensitive systems, or both.
I didn't say the app is fully vibe-coded; I said we used AI. The app is not fully vibe-coded, but we have used AI assistance, and I am aware of the security concerns that come with GitHub/Slack integration. It's a question of how you use AI in your app. The system is fully designed by us, so we know exactly how it behaves and how the data and tokens are stored and exchanged.
You mention tokens, what else is in your threat model? Is your AI functionality a custom model?
I am concerned that you haven’t adequately explored and mitigated security and reliability risks involved here before asking folks to YOLO your app into their critical infrastructure.
Your privacy argument is valid, but it is true for all new startups. If your repository is on GitHub, then you are already giving your data to big corp. Why do you trust them?
Back up your allegation of vibe coding; I don't see any mention of vibe coding on the website.
Who said anything about privacy? Sure, privacy is a concern, but I’m more worried about a vibe-coded app produced by an inexperienced team without the assistance of a security team causing a breach, or an agent-caused outage.
It seems more likely that such a team would have poor security controls, insufficient staff training, and may themselves be threat actors.
For an enterprise tool like this, one which integrates with two or more other sensitive systems, I would expect a vendor to have some manner of independently audited security certification such as ISO-27001.
It's more wild that everyone's first reaction to seeing a new product is "probably vibe-coded AI slop". We held so little respect for the craft of software engineering that AI managed to kill it completely in about two years.
Pictures on websites were stolen from other sites a long time ago, then were stock photos from Unsplash or Pexels for a while, and now they're AI-generated, because that's just the easiest way to get images now. I don't think it's fair to developers to assume that the ones using AI-generated images are also vibe-coding the software, though. It's not like the ones writing hand-crafted artisanal code are also out with their cameras taking pictures for the website, after all.
FWIW I don't disagree with you. I also assume people are vibe-coding things. I just don't think it's fair to assume that means the devs are taking that code and firing it straight up to a production server. They're probably fixing the problems and making it better. I know I do that in my code (most of the time.)
No thanks.