
The missing feature is the ability to run on multiple servers. I want to be able to put it behind a load balancer. Kamal also has a huge issue where you can't set the "default host", which means load balancers can't work because the "/up" health check never succeeds. I created an issue on GitHub but the big honcho closed it without reading it, lol. I understand the idea behind a single server, but realistically you need at least two app servers so you can switch over when needed.


I know of a bunch of apps that deploy to multiple servers behind a load balancer using Kamal. Here's an article explaining the process https://guillaumebriday.fr/how-to-deploy-and-scale-your-rail...


The article only talks about a single app per server, without using hostnames. You can do that because the app will respond to the IP address. Currently you can't run multiple apps on each server behind a load balancer, because the health check won't succeed: there is no way to set the default host.


CRUD doesn't mean CRUD all the things. If your use case requires it, you can build a create-only environment where you never delete anything.


Bureaucracy is usually needed because there are too many people to manage and monitor.


It reminds me of the old lastminute.com (I think) button that would turn the whole front page into an Excel spreadsheet so when the manager walks by, they only see spreadsheets on your screen lol


Now that is a real use case!


How would anyone know that you can use the shift key? Closing the tab/page is just more natural, as it's something you do all the time.


It's advised in the implementation documentation to add a page explaining it. Shift is also used naturally when inputting information, with the visual feedback inside the button giving an opportunity for discoverability.


Those old books are still good though. There's only new syntax in the latest Ruby versions.


If you compare, say, C++03 and C++14, it's also technically true that "there's only new syntax", but in practice this often means that hacks that used to be idiomatic before are frowned upon now.


It's not anything like that. Newer Ruby versions have "better" shorthands, like {:test => 123} becoming {test: 123}.

Anyway, there have been updated versions of the books and content online if people are interested.


Ruby has evolved slowly language-wise compared to C++, or even Python.

Most changes have been in libraries and interpreter / VM implementation.

Updating your knowledge from Ruby 1.8 (mid 2000s) to 3.x (current) takes little effort.

But yes, sparse API documentation was always a problem because a big chunk of the community was Japanese.


You opted for a fancy website rather than actually explaining what it is.


It appears to be a product that requires the Cloudflare edge computing service.


This is not an issue. Technology moves forward. You don't adapt, you fall behind. There were other editors and IDEs before the one you use. New devs will use it.

Anyway, I don't use them either. I prefer to use ChatGPT and Claude directly.


Technology also moves into dead ends. Not every change is progress. You can only tell a posteriori which paths were fruitful and which were not.


Everything ends. Even things you used for a long time.


Almost every program I used 20 years ago is still available today. I think I switched from Eclipse to IDEA about 15 years ago, but Eclipse is still rocking. IT really froze in the 1990s. Operating systems didn't change at all; they just switch fancy colors and border radii every few years. Software is the same: they just add more annoying bugs and useless features, but nothing really changes. I'm still using the same Unix shell and Unix tools I used 20 years ago, still grepping and sedding files around.


Stone tablets and chisels are technically still available too.


Overall I agree with everything you’ve said and I also use ChatGPT and Claude directly. The issue is that:

Good at integrating AI into a text editor != Good at building an IDE.

I worry about the ability of some of these VSCode forks to actually maintain the fork, and again, I greatly prefer the power of IDEA. I’ll switch if it becomes necessary, but right now the lack of deep AI integration is not compelling enough to switch, since I still have ways of using AI directly (and I have Copilot).


I'm guessing AI will fundamentally change how IDEs even work. Maybe everything IDEs offer right now isn't needed when you have a copilot you can tell what to do.

I'm a long-term vim user. I find all the IDE stuff distracting and noisy, and AI makes it even noisier. I'm guessing the new generation will just be better at using it, similar to how we got good at "googling stuff".


My coworkers do just fine with vim.


"past performance is not indicative of future results"


Is it not though? It's not a guarantee but definitely an indication.


Not really. The only thing you can guarantee is that things change.


Let’s just throw away all past experience then?

It’s a mistake to assume that there will be 100% correlation between the past and future, but it’s probably as bad of a mistake to assume 0% correlation. (Obviously dependent on exactly what you are looking at).


0% maybe not. But it's the outliers and the didn't see that comings that kill ya. Sometimes literally.

So while the odds at the extremes are low, they cannot be ignored.

No one can predict the future. But those that assume tomorrow will be like today are - per history - going to be fatally wrong eventually.


So the choices are 100% or 0%?


That’s my point – they are not. Your previous comment implied to me a belief that any attempt to draw inference from past events was doomed to failure!

Each circumstance is different. Sometimes the past is a good guide to the future – even for the notoriously unpredictable British weather, apparently you can get a seventy percent success rate (by some measure) by predicting that tomorrow’s weather will be the same as today’s. Sometimes it is not – the history of an ideal roulette wheel should offer no insights into future numbers.

The key is of course to act in accordance with the probability, risk and reward.


I did not speak with certainty. Everything I said is guess and opinion.


vim is the "just put your money in an index fund" of text editors


This is exactly what OpenAI and others want you to believe. "OH NO, I need to use LLMs for coding otherwise I will fall behind." No, no. Most of what makes a good software engineer cannot be replaced by LLMs. A good software engineer has a deep understanding of the problem space, works on the right things, and elevates their team members by coaching, helping, etc. It's not about how fast you type your code.


There's still time to find out if what you say is true


I refuse to believe there were ever editors before vim.

Vim has been around since the Stone Age.

Jokes aside, I don’t really see why AI tools need new editors vs plugins EXCEPT that they don’t want to have to compete with Microsoft’s first-party AI offerings in vscode.

It’s just a strategy for lock-in.

An exception may be something like Zed, which provides a lot of features besides AI integration that require a new editor.


They probably said the same thing when someone created vim, or vi.


Sorry, I’m not understanding what you mean.

Vi and vim were never products sold for a profit.

Who was saying what? And what were they saying?

EDIT: ah I think I understand now.

The thing is, I don’t see any advantage to having AI built into the editor vs having a plug-in. Aider.vim is pretty great, for example.

The only reason to have a dedicated editor is a retention/lock-in tactic.


Every time there's a new editor, or anything else, people complain why we need another one. Sometimes that new thing is innovative.


Sure, I just don’t see what an AI first editor would have over vscode, vim, or whatever already exists + an extension.

The editor handles the human-to-text-file interface: handling key inputs, rendering, managing LSPs, providing hooks to plugins, etc. AI coding assistants kind of sit next to that; they just handle generating text.

It’s why many of these editors just fork vscode. All the hard work is already done; they just add lock-in as far as I can tell.

Again, Zed is an exception in this pack because of its CRDT and cooperative features. Those are not things you can easily add on to an existing editor.


Falling behind what?


If I knew the answer to that question I wouldn't be falling behind


That's cool

So you're just out here wasting my time

See you


It doesn't matter how advanced debugging gets. In the end we always use print. It's the only reliable thing to do.


Sometimes you can't even print and have to resort to toggling a GPIO pin...
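For anyone who hasn't had the pleasure: you bracket the suspect code path with a pin toggle and watch it on a scope or logic analyzer, which tells you both that the path ran and roughly how long it took. A minimal sketch, assuming a Raspberry Pi style board and the RPi.GPIO library (the pin number and the decorator are made up for illustration; on bare-metal firmware it would be a couple of register writes instead):

    # Toggle a spare GPIO pin around the code path under suspicion.
    import RPi.GPIO as GPIO

    DEBUG_PIN = 18                      # any free pin wired to the scope

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DEBUG_PIN, GPIO.OUT)

    def traced(fn):
        """Hold the pin high while fn runs, so the scope shows entry/exit."""
        def wrapper(*args, **kwargs):
            GPIO.output(DEBUG_PIN, GPIO.HIGH)     # entered the code path
            try:
                return fn(*args, **kwargs)
            finally:
                GPIO.output(DEBUG_PIN, GPIO.LOW)  # left the code path
        return wrapper

The pulse width on the scope doubles as a crude profiler.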


I've worked with systems that were so utterly and entirely broken that the only way I could confirm that a particular code path was followed was by inserting an infinite loop and observing that the system then hung instead of crashing.

Combine that with a build system that is so utterly and entirely broken that the only way to be sure is to do a fresh checkout each time, and with a hardware set-up that involves writing the binary onto flash memory and plugging it into a device that is located under somebody else's desk in another room and then perhaps you have the Debugging Cycle From Hell.


When I was programming at home on my Atari ST I thought debuggers were the greatest invention ever. It was wonderful to be able to step through assembler code line by line, instead of looking at BASIC print statement output and guessing what was going on and where. It made life so much easier.

Don't people believe in debuggers any more?


Once you get to a complex enough system, sometimes a debugger just isn't enough.

E.g. I have a multi-threaded and multi-process robot control system - I can't put breakpoints in the controller to debug why the robot misbehaves, because then the control loop timing is broken and the robot faults. Instead, you have to put the time into effective logging tools, so that you can capture the behavior of the running system and translate that into a simpler and smaller example that can be examined offline. Maybe those you run under a debugger, but you probably can express what values you want to examine and when more cleanly in code than in the debugger, with the significant advantage that it's easier to communicate "run this code and look at the output at step n" than "run this code with these breakpoints and these debugger scripts".

My view at this point is that the conditions I would normally examine in a debugger with breakpoint and stepping are usually so rare (e.g. a few in potentially thousands/millions of iterations) that I need to write logic to express what checks and where I want to make them, and I would rather write that logic in the context of the program itself than do so in the debugger.
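A minimal sketch of what that looks like in practice, assuming a simple single-value control loop (the names and the 0.5 threshold are made up for illustration); the check you would otherwise set as a conditional breakpoint lives in the code, and the loop never stops:

    # Keep a ring buffer of recent loop state; dump it when a rare condition fires.
    import json
    import time
    from collections import deque

    TRACE = deque(maxlen=1000)               # last ~1000 control cycles

    def record(step, **values):
        """Capture whatever you would have inspected in a debugger."""
        TRACE.append({"t": time.monotonic(), "step": step, **values})

    def dump_trace(path="trace.json"):
        """Write the captured history for offline inspection."""
        with open(path, "w") as f:
            json.dump(list(TRACE), f, indent=2)

    def control_step(step, setpoint, measured):
        error = setpoint - measured
        record(step, setpoint=setpoint, measured=measured, error=error)
        if abs(error) > 0.5:                  # the rare condition of interest
            dump_trace()                      # "look at the output at step n"
        return error

"Run this and look at trace.json" then becomes the whole bug report.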


Of course there are edge cases that don't work with a debugger. I have been there too. Timing-sensitive applications, like controlling a fat TV live at the scanline/pixel level, can't be debugged. Physical objects that move over a certain speed can't be live debugged because of physics. These are still edge cases.


> This is still edge cases.

Arguably, once you are working on a sufficiently complex and mature system with good-enough tooling and tests and development practices, these edge cases can come to dominate. I don't spend time debugging simple things on their own, because the simple things on their own are generally well tested, so the failures that do happen emerge from their combination and integration into complex systems.

That said, I do spend a fair bit of time using a debugger as my first-line response to an issue - but overwhelmingly it's to examine a crashdump of the failure rather than investigate a live process.


I guess that once you reach a certain level of coding, static verification, strong typing, and solid unit tests, you've only got timing and multi-threaded Heisenbugs left to find...


And even without that, when you're faced with a bug caused by a large input of some kind, it's often easier to dump a bunch of data and look for what doesn't fit.

I've had two Heisenbugs, although no threading involved:

1) Long ago: an interface library for going from protected mode to real mode, in what I believe was Borland Pascal (I can't recall for sure where this was relative to the name change). Step through at the assembly level and it worked. Anything else, it might work, but that was unlikely. The only outcomes were correct or a segment error; it never returned a wrong answer. Culprit: apparently nobody used the code. The whole file was riddled with real mode pointers declared as pointers. Oops: when asked to copy a pointer, the compiler emitted code that loaded it into a segment:offset pair and then stored it. If the segment part happened to be valid, fine. If it wasn't a valid segment, boom. The debugger apparently did not actually execute a single step, it emulated it--except for not failing on an invalid segment register value. In any other case the attempt to dereference it would have blown anyway, but this wasn't being dereferenced.

2) Pretty recently, C#. I had a bunch of lazy-initialized data structures--and one code path that checked for the presence of data without triggering the initialization. But the debugger evaluated everything for display purposes, triggering the initializer. There is a way to tell the debugger to keep its hands off, but I hadn't heard of it until I hit it.
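For readers who haven't hit this: the trap is that the "is there data?" check has to look at the raw field, while anything that evaluates the lazy property, including a debugger's variable display, triggers the build. A rough Python sketch of the shape of it (names are made up; the original was C#):

    # Lazily built structure plus a presence check that must not build it.
    class LazyIndex:
        def __init__(self):
            self._index = None               # built on first real use

        @property
        def index(self):
            if self._index is None:
                self._index = self._build()  # expensive, has side effects
            return self._index

        def has_data(self):
            # Look at the raw field, never the property, so nothing
            # gets initialized as a side effect of merely checking.
            return self._index is not None

        def _build(self):
            return {"example": 123}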


Some people just don't believe in tooling full stop. Kind of mind-blowing. They're essentially coding with a fancy notepad.exe.


What’s weird is that debuggers are so advanced now. rr and Pernosco are to regular debuggers like regular debuggers are to inserting an infinite loop into your code.


I used the debugger all the time when I was writing in Pascal (and later Delphi). It was great.

Then I switched to Haskell. No (useful) debugger there.

Now I write TypeScript, and... somehow I never figured out how to do debugging in JS properly. Always something broken. Breakpoints don’t break, VSCode can’t connect to Node, idk. Maybe I should try again.


Borland Pascal had a problem with too much debug data. By the time that program got retired, I could turn on symbols for only a few percent of the code, set a breakpoint, and examine the situation when it triggered, but not continue from that point.


This is a hacking technique too -- I've seen it used for extracting entire databases via SQL injection by putting delays in SQL statements and then measuring how long the web page hangs, when you can't force any output on the page. You put different delays in for different string matches and eventually you can get all the table and column names this way.


I sort of laugh when using ChatGPT/Claude to code anything: if you ever mention that something isn't quite working right, it'll pepper the entire code with printed debug statements rather than assisting you with any more advanced debugging methods.

even the bots do it (joke)



I worked for Sun Microsystems for a placement year. The first time I saw Sun Rays, I couldn't believe it wasn't already used everywhere. We had badges that let us go from desk to desk, from one building to another, and even to our home, without losing the session.


It was used widely, it was just called Citrix.


I did hear rumours that CIA/NSA used them. It meant no one could steal a computer and take data. Only thin clients everywhere. There were thin client "laptops" too.


Citrix was different remote desktop software. The way the Sun Rays worked was seamless: thin clients and servers. Alas, that's not needed anymore.


Citrix/Terminal Services was certainly worse in the design sense, but from the corporate buyer's perspective (i.e. the only buyer) it was significantly better: they could deploy hundreds or thousands or tens of thousands of cheap, disposable PCs as thin clients, while centrally managing everything that mattered. And unlike Sun Rays, they could natively run Windows applications.

There's a reason Citrix ended up worth $16.5B when they went private a couple of years ago, they were highly successful propagating the thin client vision that Sun championed but fumbled.

