
Either have Claude /compact or have it output things to a file it can read in on the next session. That file would be a summary of progress on a spec or something similar. It's also good to prime it again with the README or any other higher-level context.
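
For example, a handoff note might look something like this (the file name and headings are just illustrative, not anything Claude requires):

    # PROGRESS.md (hypothetical session handoff note)
    ## Done
    - Parser for section 2 of the spec implemented and unit-tested
    ## In progress
    - Wiring the parser output into the validator
    ## Next session
    - Re-read README.md and the spec first, then finish the validator tests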

Learning an entirely new editor is a barrier. Documentation or not, that's brand new muscle memory you have to develop alongside the actual task of coding.

I get that using vim typically goes hand in hand with an obsessive pursuit of efficiency, but some people just want to focus on coding in a way that's comfortable to them. Sometimes that means having a side panel.


>Sometimes that means having a side panel.

I do not even need that. Modal editing is enough to keep me away from all the VI clones. I hate it with a passion.

I have a fully customized Emacs that I use for anything Lispy and it's great for that purpose but everything else is just "ok".

I try to use Zed, but since it is a commercial offering it is just a matter of time until it gets enshittified too.

VS Code is/was really good, but it seems to be getting worse, and it's Microsoft... I'm running out of editors, it seems.


Honest question, what is it that you hate about modal editing?


> Honest question, what is it that you hate about modal editing?

It's annoying. I'm fine with being in one mode. I'm fine with selecting 7 lines and typing "whatever". I hate "7ddiwhatever<esc>".

And yes, it may be the case that you are faster in VI(m) than I am in (choose the editor) but that doesn't matter. For me speed of typing is never critical.


I tried Helix and Kakoune too. They all have the same problems.

First. It adds friction. Every damn time I need to write, I forget to enter insert mode. You have no idea how many times I've ended up with a strange buffer. Thankfully there is undo, but it gets boring fast. I need to write as soon as I enter the editor; I don't need to move the cursor just to read the text that's visible on the current page.

Second. How the hell am I supposed to make small movements when I'm in insert mode, with the arrow keys? Like moving the cursor to the left by 3 chars. Do I enter normal mode and press h 3 times? Or delete the whole word and rewrite it?

Third. Why are some movements symbols? Like, end of line is $ and beginning of line is 0. So much for home row movements.

Fourth. I could never remember whether f or t includes the char I'm looking for.

Fifth. How cumbersome it is to press Esc in the top left corner every damn time. Yes, there is Ctrl+[. But still. So much, again, for home row movement.

Not directly related to modal editing.

Sixth. I could not make copy/paste work reliably on a remote Linux server from a Windows machine via SSH. Hell, I could not make it work with WSL2.

Seventh. The debuggers suck. There is no comparison to the JetBrains debugger GUI. Not even VS Code comes close to it.

Sorry for the rant.


The rant is fine. I'll just provide some explanation.

First: Vim comes from vi, which is a visual mode for ex, which is a supercharged version of ed, which is (the standard editor) a line editor. With ed, you don't really write, you issue commands that do things to the file. Think of the file as a long roll of paper and the program as an assistant. So a command could be "replace your on line 14 with you're" or "delete lines 34 to 45". Ex added more commands, vi made it interactive, and Vim added even more features (syntax highlighting, scripting, location lists, ...). But the same mode of interaction remains: the cursor is what you control. It's not just an indicator of where the next character will appear or be deleted; it's the starting point of more powerful interactions.
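
For what it's worth, those two example commands map almost directly onto ex commands you can still type on Vim's command line today. Roughly:

    " replace 'your' on line 14 with 'you're'
    :14s/your/you're/
    " delete lines 34 through 45
    :34,45d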

Second. You're not supposed to move the cursor that much in insert mode. For 3 characters to the left, I just backspace and rewrite. For more, I go to normal mode and use F, f, T, or t, which will land me on the character I want. Then I can use something like x (delete character) or r (replace character) without having to enter insert mode. There's a lot of movement besides hjkl, and I rarely use h or l for anything further away than two characters.
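
A rough sketch of what that looks like in normal mode (the target characters are arbitrary examples):

    F(    jump back to the previous ( on the current line
    f,    jump forward to the next , on the current line
    x     delete the character under the cursor
    r;    replace the character under the cursor with ;
    3h    move three characters to the left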

Third. There are not a lot of keys on the keyboard. $ is end of line in most regex dialects. ^ is beginning of line, which would actually be the first character, but most people would assume it's the first non-whitespace character, so that's how they went. In C, curly braces mark blocks of code, so it's a small leap to use them for blocks of text, aka paragraphs.

Fourth. My mnemonics are f (find) and t (to). The latter does not include the character.

Fifth. A lot of people remap Caps Lock to either Ctrl or Esc.
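
On Linux under X11, for example, this is a one-liner (option names from xkeyboard-config):

    # make Caps Lock act as Esc
    setxkbmap -option caps:escape
    # or make it an extra Ctrl instead
    setxkbmap -option caps:ctrl_modifier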

Sixth. They're different computers, so there's no shared clipboard. Sharing the clipboard can be done using escape sequences, but I've never bothered to. I just maximize the current buffer so I can use the terminal selection. And if I wanted more than a screen (dmesg), I'd pipe the command's output to a file and then download that file with sftp.
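
The dump-and-download workflow looks roughly like this (host and path are just examples; recent Windows versions ship an OpenSSH sftp client):

    # on the remote machine
    dmesg > /tmp/dmesg.txt

    # from the Windows side
    sftp user@remote-host
    sftp> get /tmp/dmesg.txt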

Seventh. JetBrains only has debuggers for a handful of programming languages, while `printf` is universal. And there's no law forbidding installing an IDE alongside your editor.


I think if you google ADM-3A terminal keyboard you'll see half your issues explained :)

There's a ton of historical baggage there. Thankfully, a bit of it can be resolved by mapping Esc and/or Ctrl onto Caps Lock.


Agreed. I think I'm getting more fatigue from the AI slop callouts than the actual AI slop.


It's at a point where I just flag it. FWIW, the callouts are against the guidelines:

> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

Just pretend "this is AI slop" is also in the list. Don't complain about that; complain about the information being wrong or something else insightful. Discuss the conclusions that were included in the article. Assume (also per the guidelines) that the author reviewed the article before posting; they are signing off on its validity and therefore any callouts, AI-related or otherwise, should be calling that validity into question rather than simply saying that a particular tool was used for getting the words on the page.


Except my 1-sentence comment was not an AI callout that said: "Hey, this is AI slop". I validated the content of the article but expressed personal disappointment in the AI writing style.

I admit it is perhaps unkind of me to not provide the laundry list of AI-tells in their article, but I see their response to me as being a direct lie ("I haven't used AI to write this post, that's just my style") so what explanation do I owe them?


> Except my 1-sentence comment was not an AI callout that said: "Hey this is AI slop".

Not literally, but this was the thesis of your comment, regardless of your four words of validation.

> I admit it is perhaps unkind of me to not provide the laundry list of AI-tells in their article

Please don't do this. It doesn't matter.

> what explanation do I owe them?

What explanation do they owe you?


Uhh? I didn't ask anyone for an explanation.


I'm not seeing where they are coming up with RNT as a cause, other than a lot of theory. Wouldn't it be a symptom of cognitive decline instead? Dementia patients, particularly those with Alzheimer's, tend to become depressed because of confusion and memory loss. Wouldn't it be more likely that these depression symptoms are being caused by deteriorating brain function rather than the other way around?


Of course. It would be bizarre if there weren't a relationship between Lewy body dementia, Alzheimer's, or vascular dementia (which, in old people, means you've gone into heart failure) and repetitive negative thoughts. For one, you know you've got an incurable disease that will inevitably destroy your mind, and you've become one of the rare class of people for whom assisted suicide has almost no controversy; it's something you're putting down payments on. For two, you can't finish thoughts.

My father was just diagnosed with Parkinson's a few months ago, and he already has trouble following any conversation, and knows it. If that didn't lead to depression, that's what would be notable. And any insight that he reaches that gives him comfort might be gone an hour later.

It just seems like a silly study.


Wait down payments? Is that metaphorical? How much does that cost?


They don't claim it's a cause. In fact, they explicitly state more research is needed to determine the relationship.


The first thing that jumps out at me is the concept of perseveration (repeated fixed obsessions) that happens in dementia syndromes. It would be interesting to consider whether this is a chicken-or-egg scenario, whether individuals tended to ruminate in earlier life.


No one’s saying anything about it being a cause, though … association is not causation.


Several comments seem to assume that already, though.


I believe there have been other studies showing people with a history of depression develop dementia at higher rates. There are some that have shown the structural/signal changes that happen after long-term depression as well. These are things that occur years or decades before the dementia.


This is the problem with correlation being reported in the media: people read it as "causation found".

When really it's "we've found an interesting association, and we are going to explore it more to see if there's a causal effect that we can influence".


I think they need to use a different word than associated. That's what's causing the confusion?


Personally, I think that reporting of correlations should be dropped altogether. Reports on correlations are very prevalent in everyday news, and I think they're damaging because they imply, or outright claim, that some causative effect has been observed.

It's really clickbait territory sometimes (IMO)


> Personally I think that reporting of correlations should be dropped altogether…

It really should be a shared responsibility to report and understand the meaning of statistically significant correlation. Unfortunately, few journalists seem to have much interest in understanding it. And given that their readership likely has about average (i.e., poor) numeracy and an iffy understanding of probability, it's a bad combination. The widespread misunderstanding of the iterative way that science converges on truth also contributes to this problem.

That said, I would rather know about interesting findings such as this if for no other reason than to start digging for the original paper.


I mostly agree. Although for many years the only evidence for harm from smoking was correlational.


As far as I know, it still is. I'm happy to be shown otherwise.


A lot of modern medicine is based on just correlations and a more or less educated guess. I do not think ignoring them makes sense; it is just that the reporting needs to be clearer and less sensationalist.


That is really because we don't actually understand the human body, and how it reacts to various stimuli.

We used to think stress caused ulcers, based on a correlation. We now know the actual cause is a bacterium.


Yes, sometimes we are right, sometimes we are wrong.


Which is more damaging to society?

Having things that are wrong, but we don't know any better, or having things that are right, but we don't have the skills to prove it?

We often laugh how people in times before thought "crazy" things about health maintenance, but we're no better.


True, but it still might be a good proxy to estimate how well a client is doing.


Gwern on correlation and causation: https://gwern.net/correlation


CSV is data only. Excel handles way more than that. XLSX is the preferred file format because it's compressed XML that can hold all kinds of things.
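
You can see the "compressed XML" part for yourself: an .xlsx is just a ZIP archive of XML parts. A listing trimmed to the file names (the workbook name here is only an example) looks roughly like:

    $ unzip -l report.xlsx
      [Content_Types].xml
      _rels/.rels
      xl/workbook.xml
      xl/worksheets/sheet1.xml
      xl/styles.xml
      xl/sharedStrings.xml
      docProps/core.xml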

Also, CSVs seem to open just fine in my Excel. If it's not formatted with a standard delimiter or isn't handling quoted strings the proper way, sure, maybe the data wizard is needed.

Excel is terrible in a lot of aspects, but CSVs seem to be something it handles as well as anything else in my experience.


Not everything needs to be a novel idea. 99% of blogs and books wouldn't be written if that were the case. Sometimes repeating information means somebody learns something they weren't aware of, or sees it presented in a way that finally clicks for them. Meta-analysis is also useful. So is repeating experiments. Our entire world is driven by summaries and abstractions.


I wouldn't think there are many use cases for Windows, but I imagine supporting legacy .NET Framework apps would be a major one.


Is there any limitation in running older .NET Framework versions on current Windows? Back when I was using it, you could have multiple versions installed at the same time, I think.


You can, but there are companies that also want to deploy different kinds of Windows software into Kubernetes clusters and so on.

Some examples would be Sitecore XP/XM, SharePoint, Dynamics deployments.


While I don't disagree with this approach, it only works for some digital tasks. AI won't clean my house or exercise my body or engage in obligatory social interactions. In these cases, just getting it done by shutting off your brain is often the best way to get it going.

Also, it's not all or nothing. You can decide to engage more in the task as it's ongoing, which could contribute to higher quality output. The hard part is usually starting.


Yeah... for a while I tried exercising by walking/jogging on a treadmill with headphones, watching YouTube to keep my brain entertained through the drudgery... but yeah, one constantly drowns in the noise of being jostled.

For me completely shutting down the brain (when/if I'm even capable of doing so) is just a function that activates sleep. While I haven't tried this while exercising in particular, I have more than learned my lesson from trying it merely standing up and the result is very much falling down.


Humans crave novelty. That's all. I'm sure people were bored of hearing about the internet too.

I'm with you on the optimistic outlook, for the most part. But I think there will also be quite a bit of pain felt by a lot of people (job loss, bad code, bad info, etc.) until we can find ways to correct.


There has always been filler entertainment that caters to the lowest common denominator. The new medium types just allow for more of it. Maybe I'm in a very small minority, but I only consume HN and a heavily curated YouTube account.

I don't even see most of the viral content unless somebody else shows me. And since I don't have an account with most social media sites, they have to show me directly on their own devices. I also filter all email with the word 'unsubscribe' and route it appropriately.

The short of it is: do better at filtering with allow lists or aggressive block lists. Consume content you search for. Or accept the fact that an algorithm will spoon-feed you 99% filler.


I am still mad that I had to give up Facebook because of the garbage. It was and is a great way to see what my old school friends' kids are doing (we never were close, and I've moved a long way away so I wouldn't call, but I still go back for every class reunion, and seeing those pictures helps me have something to talk about). However, all the garbage has made the platform useless and I gave up. I spent a year trying to block it all, but I wasn't making progress.

