
This type of argument comes up often here on HN, and I would guess many developers, myself included, agree that things are bloated and that bloat is not good. But the solution to this problem has to come in small actionable steps, not be built on assuming (or shaming) everyone into putting nice principles above the everyday toil of software development.

And it's a darn messy problem to solve even in small actionable steps, because I think most of the choices that lead to bloat are choices that make sense at that point in time. E.g. "use a dependency instead of coding it myself", "use AI instead of thinking through every angle", "write 2 unit tests instead of the 20 that would give full coverage", "don't write extra tools to measure the speed of everything, it seems fast enough". Taking the route that minimizes bloat can be very time-consuming and demanding for the average individual developer. And any solution we come up with will not give us hard guarantees - but any solution that moves the average one point in the right direction is still a good thing!

I sometimes find these types of questions fall into old tropes like "it's JavaScript's fault, better go back to C". But I think that is a fallacy. JavaScript in a browser can easily run millions of ops a second. The big time offenders come from elsewhere, most likely network operations and the way they are used.

Some things that I think would help (and yes, some of them exist, but not as mainstream tools for the common tech stacks):

Tools that analyze and handle dependency trees better. We need better insights than just "it's enormous and always changing". A tool that could tell me things like (a rough sketch of the first idea follows the list):

- "adding this package will add X kb of code"
- "the package you are adding has often had vulnerabilities"
- "the package you are adding changes very often"
- "this package and pinned version is trusted by many and will likely not need to change"
- "here are your dependencies ranked by how much you use them and how much bloat they contribute"
- "your limited usage of this package shows that you would be better off writing the code yourself"
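
For the first bullet, a minimal sketch of what I mean (assuming a Node.js project where the package is already installed; it only measures the package's own folder, not transitive dependencies, which a real tool would have to include):

  // size-of-dep.ts - rough estimate of what an installed package adds
  import { readdirSync, statSync } from "fs";
  import { join } from "path";

  function dirSizeBytes(dir: string): number {
    let total = 0;
    for (const entry of readdirSync(dir)) {
      const path = join(dir, entry);
      const stats = statSync(path);
      total += stats.isDirectory() ? dirSizeBytes(path) : stats.size;
    }
    return total;
  }

  const pkg = process.argv[2]; // e.g. "left-pad"
  const kb = dirSizeBytes(join("node_modules", pkg)) / 1024;
  console.log(`${pkg}: ~${kb.toFixed(1)} kB on disk (own folder only)`);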

Tools that help us understand performance better. Performance monitoring in production is a complicated task in itself (even with tools like Sentry) and is still poor at producing actionable insights. I would want tools that tell me things like (a tiny sketch of the second idea follows the list):

- This function you are writing is likely to be slow (due to exponential complexity, due to sequential slow/network operations, etc.)
- This function has this time distribution in production, as reported by your performance monitoring system
- There are faster versions of this code (e.g. referencing jsperf)
- This library / package / language feature has this performance characteristic
- Here are the outliers in the flamegraph generated by this function or line
- This code is X% slower than similar solutions
- Making developers load their apps at the average speed users access them at (e.g. throttling)
- Bots that can produce PRs to open source projects to pick common low-hanging fruit in reducing complexity or increasing performance
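
For the "time distribution in production" idea, a minimal sketch (the names are made up; a real setup would ship the numbers to a monitoring backend instead of logging):

  // timed.ts - wrap a function, record call durations, report percentiles
  function timed<T extends (...args: any[]) => any>(name: string, fn: T): T {
    const samples: number[] = [];
    return ((...args: any[]) => {
      const start = performance.now();
      try {
        return fn(...args);
      } finally {
        samples.push(performance.now() - start);
        if (samples.length % 1000 === 0) report(name, samples);
      }
    }) as T;
  }

  function report(name: string, samples: number[]): void {
    const sorted = [...samples].sort((a, b) => a - b);
    const pct = (p: number) => sorted[Math.floor((sorted.length - 1) * p)];
    console.log(`${name}: p50=${pct(0.5).toFixed(2)}ms p99=${pct(0.99).toFixed(2)}ms`);
  }

  // usage: const fetchUser = timed("fetchUser", rawFetchUser);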

Tools to evaluate complexity and tech debt over time: can a tool tell us what the lifetime cost of a solution is? How can a development organization make tradeoffs between what's fast to get out the door vs what it takes to maintain over time?


LOL, I'm reading Njal's saga right now and this was exactly what I needed.


It's a cool example, and I guess there are or will be very convenient apps that stream the last X minutes of screen recording and offer help with what you see.

But it just hurts my programmer soul that it is somehow more effective to record an app that first renders (semi-)structured text into pixels, then capture those millions of pixels into a video, send that over the network to a cloud, and run it through a neural network with billions of parameters, than it is to access the 1 kilobyte of text that's already loaded into memory and process it locally.

And yes, there are workflows to do that, as demonstrated by other comments, but it's a lot of effort that will be constantly thwarted by apps changing their data structures, obfuscating whatever structure they have, or just by software being so layered and complicated that it's hard to get to the data.


Not directly related to Dorkly, but we've implemented feature flags (with our own system) and found them not super-useful - we were hoping for more - though we may be doing it wrong. I can certainly see feature flags working well when activating e.g. new, mostly UI-related features, but when many services and APIs need to change in unison for a new feature, it seems a lot harder to use feature flags in practice. Then it goes beyond just putting new feature code behind conditionals, as you might need to load different dependencies, offer different versions of server APIs, run on different database schemas, etc. But maybe we are missing something?


You need to organise your features in a way that makes these a non-problem. Need different dependencies - load both. Need a different schema - write both versions, drop the old one later. Need new services/APIs - feature-flip only the user-visible one.

The flags are really useful for things like enabling a feature for just a fraction of traffic, or ensuring you can switch a feature off much more quickly than a full deploy would allow.
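
For the fraction-of-traffic case, the core mechanism can be as simple as bucketing on a stable user id (a sketch, not any particular flag provider's API; the flag name is made up):

  import { createHash } from "crypto";

  // Hash flag+user so each user lands in a stable bucket from 0-99;
  // the same user consistently stays in or out of the rollout.
  function isEnabled(flag: string, userId: string, rolloutPercent: number): boolean {
    const digest = createHash("sha256").update(`${flag}:${userId}`).digest();
    const bucket = digest.readUInt32BE(0) % 100;
    return bucket < rolloutPercent;
  }

  // e.g. enable a hypothetical "new-checkout" feature for 5% of users
  if (isEnabled("new-checkout", "user-123", 5)) {
    // new code path
  }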


Everyone laughs when I say this but pull up a thesaurus. When you change the semantics of a thing, you have to change names to have old+new live at the same time. Don’t trust your mastery of English (especially if it’s your second language). There’s a synonym out there that describes the new behavior as well or even better.


Exhibit A.


For systems with many services that need 99.9..% uptime, the way to make ANY change is through patterns like expand-contract.

In most such cases you have several instances of your backend running in parallel for scaling and redundancy, and when making a release, instances of several versions run concurrently. So you don't have an "atomic upgrade" available.

With multiple services, coordinating an upgrade is even harder.

Patterns like expand-contract help you manage this. E.g. first add the new endpoint to the server, then move clients over, then remove the old endpoint.
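
In code terms, the "expand" phase looks something like this (a sketch assuming an Express-style server; the endpoints are made up):

  import express from "express";

  const app = express();

  // Old shape: kept around until no clients call it anymore,
  // at which point the "contract" step deletes it
  app.get("/v1/orders", (_req, res) => {
    res.json({ orders: [] });
  });

  // New shape: added first ("expand"); clients migrate at their own pace
  app.get("/v2/orders", (_req, res) => {
    res.json({ orders: [], nextCursor: null });
  });

  app.listen(8000);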

So feature flags are just a way of stretching this process over a longer time period and rolling over a percentage of traffic. Instead of coupling changes to service releases, you roll over using config.

We've used them a lot in backend-to-backend, backend-to-DB, etc. scenarios; they have been hugely useful to us, and it would never work without them.

But of course, it depends on the context and what you are doing.


I'm also unrelated to Dorkly but I'm a big believer in feature flagging. It's key to moving fast as an engineering/product team.

I'd be happy to walk you through how we use it. Shoot me an email if you want to chat: wayne at inaccord.com


Last time I did this, we ended up routing our flags through a reloadable config system which used Consul for distribution.

We almost never shared flags across our fairly chunky services. We usually found a softer way to do it.

Even with Consul, you can still have enough skew that a few requests in the middle might see A and !A if your request rate is high enough. So whether that's acceptable depends on your business model and architecture.
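
One common mitigation is to evaluate the flag once at the edge and forward the decision with the request, so downstream services in the same call chain see a consistent value even while their local flag state is still converging (a sketch; isEnabled is a hypothetical local helper and the flag name is made up):

  // At the edge service: decide once, propagate the decision
  const enabled = isEnabled("new-pricing"); // local, possibly stale view

  const resp = await fetch("http://pricing-service/quote", {
    headers: { "x-flag-new-pricing": enabled ? "1" : "0" },
  });

  // Downstream services read the header instead of their own flag
  // state, so a single request chain never sees both A and !A.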


You aren’t. This doesn’t mean that feature flags are useless or not worth it, but there’s no silver bullet. It is exactly as hard an engineering problem as it sounds.


I wish there was a way to sync from Apple Notes to Obsidian (or maybe there is?). Apple Notes is just faster and more convenient for daily notes, but I want Obsidian to be my repository, so I want some mechanism for automatically syncing certain notes or importing them at intervals.


Either export your Apple Notes as Markdown:

https://apps.apple.com/us/app/exporter/id1099120373?mt=12

Or use the official Importer:

https://help.obsidian.md/import/apple-notes

Btw, Shortcuts can slam things into your Obsidian vault, because a vault is just text files. That can be as fast as or faster than Notes. You might need a Shortcuts helper like Toolbox Pro that can remember a path to your vault and write to it from your shortcuts:

https://apps.apple.com/us/app/toolbox-pro-for-shortcuts/id14...


This doesn't seem to be it, but I've always wondered if it would make sense to have an IDE extension that uses the aggregated data from Sentry to highlight lines of code that have caused errors or slowdowns.


Nice! JSON schemas are really useful; we use them a lot for code generation. Another library that does this for multiple languages is https://quicktype.io/. It's great, but not very actively developed.


I have the library demo linked from the homepage bookmarked: https://app.quicktype.io/ I use it every time I need to go from a pile of JSON to TypeScript types or zod declarations.


I use Nginx as a reverse proxy, and each service runs on the same internal port. There is a way to configure Nginx natively to dynamically route to the container with the same name. If I need multiple services up locally for development, I bring up Nginx there too, and each service is mapped to a domain ending in .test, which I have added to local DNS (in my case /etc/hosts). I find it's better anyway to run development behind a reverse proxy, to catch errors that would otherwise only appear in prod.

The main thing I want to improve is to not use one big compose file for all services; it would be cleaner to have one per service and just deploy them to the same network (a sketch of that layout below). But I haven't figured out the best way to auto-deploy each service's compose file to the server (as the current auto-deploy only updates container images).
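
The per-service layout would be something like this (a sketch; the shared network name is made up and has to be created once with "docker network create proxynet"):

  # service1/docker-compose.yml
  services:
    internal-service1:
      image: myorg/service1:latest
      networks:
        - proxynet

  networks:
    proxynet:
      external: true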


Can you please elaborate on how to "dynamically route to the container with the same name" with nginx?


Something like this:

  location ~ ^/([a-z0-9_-]+)/ {
    # Docker's embedded DNS; needed because proxy_pass uses a variable,
    # which makes Nginx resolve the container name at request time
    resolver 127.0.0.11;
    proxy_pass http://internal-$1:8000;
  }
We pick up the service name from the URL and use it to select where to proxy_pass to. So /service1 would route to the Docker container named internal-service1. We can reach it by name alone as long as Nginx is also running in Docker, on the same network (the resolver line points at Docker's internal DNS, which Nginx needs since the upstream name is a variable).


For validating the actual values and types in config files, I've been planning to use JSON Schema to define the configuration data and then validate files against it (a sketch below). But I haven't seen others talk about that a lot - is it something I'm missing?
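
A minimal sketch of what I mean, using the Ajv library (the schema and config shape are made up):

  import Ajv from "ajv";

  // Made-up schema: a config with a required port and an optional log level
  const schema = {
    type: "object",
    properties: {
      port: { type: "integer", minimum: 1, maximum: 65535 },
      logLevel: { enum: ["debug", "info", "warn", "error"] },
    },
    required: ["port"],
    additionalProperties: false,
  };

  const ajv = new Ajv();
  const validate = ajv.compile(schema);

  const config = JSON.parse('{"port": 8080, "logLevel": "info"}');
  if (!validate(config)) {
    console.error(validate.errors); // human-readable list of violations
    process.exit(1);
  }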


Should run it with bun instead of Node.js for that extra speed!


Give it a try and let me know what happens!

