Hacker News | apsurd's comments

That sounds really powerful, but also like the burden shifts to the people who will maintain all this stuff after you're done having your fun.

Tbh, I'm not exactly knocking it; it makes sense that leads are responsible for the architecture. I just worry that those leads having 100x influence is not by default a good thing.


My thought is that the markdown is the code and that Claude Code/Codex is the “compiler”.

The design was done by me. The modularity, etc.

I tested for scalability, I checked the IAM permissions for security, and I designed the locking mechanism and concurrency controls (which had a bug in them that was found by ChatGPT in thinking mode).


Can we help get your infra cost down to negligible? I'm thinking of things like pre-generated static pages and CDNs. I won't assume you hadn't thought of this before, but I'd like to understand more about where your non-trivial infra costs come from.

I would be tempted to try to optimise this as well. 100,000 hits on an empty domain and ~$200 worth of bot traffic sounds wild. Are they using JS-enabled browsers or sim farms that download and re-download images and videos as well?

> Hey, we'd boycott Instagram too if we could, but we need it to get this message to you.

Everything on the list has a justification.

You gotta believe me that I haven't had Prime or owned a car for 80% of my adult life. And I live in LA! So I want to believe in this message, but it loses its credibility and plausibility with the above statement.

These companies are not evil. Bezos didn't take power. We gave it to him for fast shipping of unlimited cheap plastic comforts that come to us while we're on the toilet.


The accounting SaaS dores presumably uses doesn't "automate spreadsheets" as its core value prop.

Related: I'm thinking these vibe-coded solutions are revealing to everyone how important and underappreciated good UX is when it comes to the implicit education of any given thing. Given some complex process, the UX is holding your hand while educating you through a workflow. This stuff is part of software engineering, yet it isn't "code".


Yeah, the boon of LLMs is how they give a masked incentive for every Jane and Joe to be an intentional communicator.

I've run into schema issues specifically with things like Supabase: the huge benefit of Supabase is really fast and effortless prototyping. But after that, actually maintaining your app becomes really hard and you pay the tax back over time.

In this regard, once past prototyping, I agree: I've never had issues with LLMs running into schema problems. Since they're doing a full feature, they're in line with how things need to change across the app.

LLMs do great at inspecting tables via Rails models and DB adapters that can run SQL commands to inspect the full schema.
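
In case it's useful to anyone reading along, here's a rough sketch of the kind of schema dump an agent can run before touching anything. This is TypeScript with node-postgres ("pg") rather than Rails, and the connection details are placeholders:

    // Sketch only: assumes Postgres and the node-postgres ("pg") client.
    import { Client } from "pg";

    async function dumpSchema(connectionString: string): Promise<string> {
      const client = new Client({ connectionString });
      await client.connect();
      try {
        // information_schema is part of the SQL standard (Postgres, MySQL, etc.).
        const { rows } = await client.query(`
          SELECT table_name, column_name, data_type, is_nullable
          FROM information_schema.columns
          WHERE table_schema = 'public'
          ORDER BY table_name, ordinal_position
        `);
        // Flatten into plain text that's easy to drop into the model's context.
        return rows
          .map((r) =>
            `${r.table_name}.${r.column_name}: ${r.data_type}` +
            (r.is_nullable === "YES" ? " (nullable)" : ""))
          .join("\n");
      } finally {
        await client.end();
      }
    }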


That makes sense, and we've been seeing the same. But in our case, instead of having the LLM inspect an external system, TypeScript is the source of truth for the schema and everything else. It's all code-defined, so it automatically catches type mismatches and also makes everything instantly readable to both developers and agents.
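
For anyone curious what that looks like in practice, here's a hypothetical sketch of the general pattern. This is not Modelence's actual API, just the "schema lives in TypeScript" idea, using zod for the runtime half:

    // Hypothetical shape, not Modelence's real API: the schema is plain TS,
    // so the compiler and the agent read the exact same definition.
    import { z } from "zod";

    export const Task = z.object({
      id: z.string(),
      title: z.string(),
      dueAt: z.date().optional(),
    });

    export type Task = z.infer<typeof Task>;

    // Anything that writes a Task is checked against the same source of truth
    // the agent sees in the repo - there is no external system to inspect.
    export function createTask(input: Task): Task {
      return Task.parse(input); // runtime validation from the same definition
    }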

I think this actually makes a lot of sense. How do you automate transitions? Do you generate code to map old data to the latest format, or do you actually migrate the data?

A lot of this is still work in progress, but the key idea is that the framework and the cloud environment have to work together for this, because what you have in the current version of the code has to be compared with what the environment has before the deployment.

We don't automatically generate migration code - there's a set of structured mappings / guardrails. For example, if you add a new field without marking it as optional, you get a warning/error when deploying to an environment with existing data that has old records without that field set.

Modelence also has built-in support for user-defined migration scripts for more complex cases, but in these simpler cases we will be adding easy mappings with existing patterns, for example "set the field to X as the default value for all existing data".

Our focus here is the guardrails rather than the migration itself - LLMs today (especially Opus) are smart enough to figure out how to do the migration, but the guardrails make sure they don't miss it under any circumstances.
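
To make the guardrail idea concrete, here's an illustrative sketch (my own simplification, not Modelence's implementation) of the kind of check that would fire when a newly required field is missing from existing records:

    // Illustrative only: compare fields the new code requires against a sample
    // of existing documents, and warn before deploying.
    type FieldSpec = { name: string; optional: boolean };

    function checkDeployGuardrail(
      newFields: FieldSpec[],
      existingDocs: Record<string, unknown>[],
    ): string[] {
      const warnings: string[] = [];
      for (const field of newFields) {
        if (field.optional) continue;
        const missingSomewhere = existingDocs.some((doc) => !(field.name in doc));
        if (missingSomewhere) {
          warnings.push(
            `Required field "${field.name}" is absent from existing records; ` +
            `add a default value or a migration before deploying.`,
          );
        }
      }
      return warnings;
    }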


Good point. A single source of truth for types is very appealing, and LLMs are so good with TS because of those type interfaces.

Only downside is the forced JS ecosystem x_X


Agreed. As for the JS/TS lock-in: we could have applied a similar approach to other languages too, but we intentionally chose to focus on a single stack to create a seamless end-to-end experience instead of providing a generic solution for multiple stacks, because a lot of the problems we're solving differ based on which stack you choose.

I value your weird rant. Yes it did go on as a thought stream, but there's sense in there.

I've been thinking a lot about a kind of smart-people paradox: very intellectual arguments all basically plotting a line toward some inevitable conclusion like superintelligence or consciousness. Everything is a raw compute problem.

While at the same time all scientific progress gives us more and more evidence that reality is non-computable, non-linear.


> While at the same time all scientific progress gives us more and more evidence that reality is non-computable, non-linear.

What scientific problems are non-computable?

ANNs are designed to handle non-linearities, BTW; that's the entire point of activation functions and multi-layer networks.


Non-computable, non-linear as in: given known input parameters, you can't determine the output parameters.

We can't do that for most complex physical systems, as would be the case for something like living organisms.


> Non-computable, non-linear as in: given known input parameters, you can't determine the output parameters.

These two terms do not mean the same thing.

Non-linear functions do not mean you cannot determine the output for a given input.

All non-linear means is that the conditions f(x+y) = f(x) + f(y) and f(kx) = kf(x) do not hold for arbitrary x, y, k.

For example, f(x) = x^2 is a non-linear function. Can you determine what f(x) is for arbitrary x?

Perhaps you meant what used to be called "chaotic systems": those that are highly sensitive to initial conditions. Yes, they are non-linear, but they are completely deterministic. A classic example would be the n-body problem in physics under most conditions.
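
A tiny TypeScript illustration of that distinction (my addition, just to make it concrete): the logistic map is non-linear and chaotic, yet every output is exactly computable from its input.

    // Non-linear and chaotic, but fully deterministic: given x0 and r,
    // every iterate is computable.
    function logisticMap(x0: number, r: number, steps: number): number {
      let x = x0;
      for (let i = 0; i < steps; i++) {
        x = r * x * (1 - x); // non-linear update: f(x) = r*x*(1-x)
      }
      return x;
    }

    // Sensitivity to initial conditions: two almost identical starting points
    // end up far apart after a few dozen steps.
    console.log(logisticMap(0.2000000, 3.9, 50));
    console.log(logisticMap(0.2000001, 3.9, 50));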

And I'm not sure you understand what non-computable means. It means that the computation will not halt in a finite amount of time for a general input. For a particular input, it may indeed halt in a finite amount of time.

Most real numbers are non-computable (sqrt(2) and Pi aren't actually examples - those are computable).

Practically speaking, for numbers like those we can get approximations as close as we want. In other cases, such as the Busy Beaver function, we can at least set bounds.


You're correct. I only have a very casual understanding of these things. For the non-linear thing, I just mean that for any advanced system there are, say, trillions of parameters (like cellular systems), and even if you mapped them all in, you couldn't be sure what the output would be.

    > And I'm not sure you understand what non-computable means. It means that the computation will not halt in a finite amount of time for a general input. For a particular input, it may indeed halt in a finite amount of time.
Sounds familiar: the "halting problem"? I suppose I'm too loosely tying concepts together. Particular vs. general input is the same as simple vs. complex input above: given a complex enough input, the compute involved approaches the boundless/infinite.

In practice, yes, as I understand it, modern science is all about stochastic approximations and for all intents and purposes it's quite reliable.

I probably should stop using "non-linear" terminology. I really just mean that it's not 1:1. You mention how systems can be deterministic, and I looked it up, and yes, wave function collapse specifically says:

    > The observable acts as a linear function on the states of the system
We can compute the possible states, but not the exact state. We can't predict the future.

Thanks for the reply; this is much more interesting to me as it approaches philosophy, so admittedly I throw words-that-mean-things around too loosely.


OT: Your visual on "stacked PRs" instantly made me understand what a stacked PR is. Thank you!

I had read about them before but for whatever reason it never clicked.

Turns out I already work like this, but I use commits as "PRs in the stack" and I constantly try to keep them up to date and ordered by rebasing, which is a pain.

Given my new insight from the way you displayed it, I had a chat with ChatGPT and feel good about giving it a try:

    1. 2-3 branches based on a main feature branch.
    2. You can rebase onto the base branch with the same frequency as before; just don't overdo it, and conflicts should stay isolated to the base.
    3. You're doing it wrong if conflicts cascade deeply and often.
    4. Yes, merge order matters, but tools can help, and generally the isolation is the important piece.


If you’re interested in exploring tooling around stacked PRs, I wrote git-spice (https://abhinav.github.io/git-spice/) a while ago. It’s free and open-source, no strings attached.


If you're rebasing a lot, definitely set up rerere (reuse recorded resolution) - it improves things enormously.

Do make sure you know how to reset the cache in case you record a bad conflict resolution, because it will keep biting you. Besides that caveat, it's a must.



After a quick read it seems like gitflow is intended to model a release cycle. It uses branches to coordinate and log releases.

Stacking is meant to make development of non-trivial features more manageable and to get them into main more safely and quickly.

It's specific to each developer's workflow and wouldn't necessarily leave artifacts once merged into main (whereas gitflow seems to intentionally take a stance on that).


Please don’t use git-flow. Every time I see it, it looks like an over-engineer’s wet dream.


Can you say more as to why? The concept is not complex and in our situation at least provides a lot of benefits.


I think the guy who created it has even stated he thinks it's a bad idea.


Literally the reason for git's existence is to make merging diverging histories less complicated. Adding the complexity back misses the point entirely.


Do LLMs arrive at these replies organically? Is it baked into the corpus and emerging naturally? Or are these artifacts of the companies' internal prompting?


Reinforcement learning.

People like being told they are right, and when a response contains that formulation, on average, given the choice, people will pick it more often than a response that doesn't, and the LLM will adapt.


We've lost the plot.

You can't compete with an AI on an AI performance benchmark?


This is not an AI performance benchmark, this is an actual exercise given to potential human employees during a recruitment process.

