
I think that poorly optimised software is killing small VPS hosts... We used to have very usable systems with less than 1MB of RAM.

I've personally built a smart watch that connects to the internet and helps me with organisational duties, all whilst running (Micro)Python in far less RAM. To do the equivalent on a desktop would require some form of sandboxed browser engine, JavaScript and some weird client-server stack.

I recently upgraded my servers with RackNerd from 768MB to 1GB because it was near free to do so, but I didn't need that extra memory. Before anybody claims they don't do anything, these servers deal with thousands of users a day.

I'm cautiously hoping that we start to see a period of optimisation again. There is currently zero incentive to optimise anything any more, even when it negatively affects the end user.


> It is the latest symptom of a long preoccupation with these beautiful mathematical objects.

They're not just beautiful concepts; they have other uses too. It turns out some patterns can be pretty efficient for search if there is a spatial movement cost.

Some questions that interest me:

1. Can you get these search patterns as part of an emergent behaviour?

2. Can multiple searchers work together in order to produce these patterns efficiently?

3. Instead of searching linear 2D and 3D space, what if you want to search non-linear spaces?


> I don’t personally do things that require dynamic memory management in C often, so I don’t have many practices for it. I know that wellons & co. have been really liking the arena, and I’d probably like it too if I actually used the heap often. But I don’t, so I have nothing to say.

> If I find myself needing a bunch of dynamic memory allocations and lifetime management, I will simply start using another language–usually rust or C#.

I'm not sure what the modern standards are, but if you are writing in C, pre-allocate as much as possible. Any kind of garbage collection is just extra processing time, and ideally you don't want to discover you've run out of memory during an allocation mid-execution.

People may frown at C, but nothing beats getting your inner loops into CPU cache. If you can avoid extra fetches into RAM, you can really crank some processing power. Example projects have included computer vision, servers and a custom neural network - all of which had no business being as fast as they were.
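
The same habit carries over to other languages too. As a rough illustration of the principle (a Python sketch, not taken from any of the projects above), the working buffers below are allocated once up front, so the hot loop never allocates and there is nothing for a garbage collector to do mid-run:

    from array import array

    def moving_average(samples, window=8):
        # The ring buffer and output are pre-allocated once; the loop
        # below only overwrites existing slots.
        ring = array("d", [0.0] * window)
        out = array("d", [0.0] * len(samples))
        total = 0.0
        for i, x in enumerate(samples):
            slot = i % window
            total += x - ring[slot]  # swap the oldest sample for the newest
            ring[slot] = x
            out[i] = total / min(i + 1, window)
        return out

    print(list(moving_average(array("d", range(16)))))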


Maybe 20 years ago... When I was a student, the school had an email server that allowed rules to be set: you could have an email sent automatically as a result of another email.

IT were not stupid though, and set a series of rules:

1. You cannot have a rule trigger to email yourself.

2. You cannot reply to an email triggered by a rule.

3. You have ~50MB max of emails (which was a lot at the time).

Playing around one lunchtime, my friend had set up a "not in office" automated reply. I set up a rule to reply to any email within our domain with a "not in office" message, but put their name in TO, CC and BCC. It turned out that this caused rule #2 not to trigger. After setting up the same rule on my friend's email and sending a single message, the emails fired approximately once every 30 seconds.

A few hours later we returned to our inboxes to find thousands and thousands of emails. At some point we triggered rule #3, which in turn sent an "out of space" email with a small embedded school logo. Each one of these emails triggered our email rule, which in turn triggered a "could not send message" email, again with an embedded logo. We desperately tried to delete all of the emails, but it just made way for more. We eventually had to abandon our efforts and went to class.

About an hour later, the email server failed. Several hours later all domain logins failed. It turned out that logins were also run on the email server.

The events were then (from what I was told by IT):

* Students could not save their work to their network directory.

* New students could not login.

* Teachers could not login to take registers or use the SMART white boards.

* IT try to login to the server, failure.

* IT try to reboot the server, failure.

* IT take the server apart and attempt to mount the disk - for whatever reason, also failure.

* IT rebuild the entire server software.

* IT try to restore data from a previous backup, failure. Apparently the backup did not complete.

* IT are forced to recover from a working backup from two weeks previous.

All from one little email rule. I was banned from using all computers for 6 months. When I finally did get access, there was a screen in the IT office that would show my display at all times when logged in. Sometimes IT would wiggle my mouse to remind me that they were there, and sometimes I would open up Notepad and chat to them.

P.S. Something happened on the IT system a year later, and they saw I was logged in. They ran to my class, burst through the door, screamed my username and dragged me away from the keyboard. My teacher was in quite some shock, and then even more shocked to learn that I had caused the outage about a year earlier.


You were not the root cause of that outage.

> IT were not stupid

Everything else you described points to them being blundering morons. From an email forwarder that didn’t build loop detection into its header prepending, to fucking up a restore, to then malware’ing the student who exposed them into Kafkaesque technology remand, all I’m taking away here is third-degree weaponised incompetence.


Yes and no. This was a school's IT department, most likely low-paid college/university graduates trying to patch together a working system on a shoestring budget 20 years ago. Maybe they were fully aware of the issues and struggled to get the time to deal with them - try convincing uneducated management that you need to fix something that is currently working.

I remember IT were continuously fixing computers/laptops broken by students, fixing connectivity issues (maybe somebody had pushed crayons into the Ethernet ports), loading up software that teachers suddenly needed for the next day, etc. Maybe they also had to prevent external actors from accessing important information. All the while, somebody well above their pay grade was entering into software contracts without knowing anything about software.

Things are likely far more plug-and-play for IT infrastructure now; back then (XP, I think) it was more the Wild West. Only five years ago, I know a university login system was still sending username and password credentials in plaintext, because that's how the old protocols worked. The same university also gave me sudo to install/run programs, which provided sudo over all network drives.

You would probably be horrified to know how much infrastructure still runs on outdated stuff. Just five years ago, trains in China stopped working because Adobe disabled Flash [1]. I know of some important infrastructure that still uses floppy disks. Not so long ago, some electrical testing could not be conducted because the machine that performed it had a corrupted floppy disk.

[1] https://arstechnica.com/tech-policy/2021/01/deactivation-of-...


Ah well, having operated at all levels of institutional hierarchies, I include the hapless/indifferent management within the functional and operational scope of the term “IT”, and they are accountable in any case, however understanding you choose to be of the struggling folks at the pointy end. So there's your root cause.

Glad I wasn't the only person who did this.

Haskell aside, technically speaking everything could be expressed as a function. I thought about writing a type of shell in which everything is reflected as a function, to clean up the likes of bash.

Take the command `ls`, for example; it could be expressed as either:

    ls -la *.png # lazy
    ls(-la, *.png); # formal

For pipes, they could be expressed as either:

    ls -la *.png | grep cat # lazy
    ls(-la, *.png) | grep(cat)
    |(ls(-la, *.png), grep(cat)); # formal

I thought about writing this with something like MicroPython, which could have a very small memory footprint.
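
It never got past the idea stage, but a minimal CPython sketch of the "formal" style would look something like the following (MicroPython lacks subprocess, and the ls/grep wrappers here are just illustrative): each command becomes a callable, and | composes the pipeline lazily, so nothing runs until asked.

    import subprocess

    class Cmd:
        # A command expressed as a function call; `|` composes a pipeline
        # lazily, and nothing executes until .run() is called.
        def __init__(self, *argv):
            self.stages = [list(argv)] if argv else []

        def __or__(self, other):
            piped = Cmd()
            piped.stages = self.stages + other.stages
            return piped

        def run(self):
            procs, prev = [], None
            for argv in self.stages:
                p = subprocess.Popen(argv, stdin=prev, stdout=subprocess.PIPE)
                prev = p.stdout
                procs.append(p)
            out, _ = procs[-1].communicate()
            return out.decode()

    def ls(*args):   return Cmd("ls", *args)
    def grep(*args): return Cmd("grep", *args)

    # Roughly ls(-la) | grep(cat); glob expansion of *.png is left out here.
    print((ls("-la") | grep("cat")).run())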

> My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down. Right now Nvidia is flying high because datacenters are breaking ground everywhere but eventually that will come to an end as the supply of compute goes up.

Exactly, it is currently priced as though infinite GPUs are required indefinitely. Eventually most of the data centres and the gamers will have their GPUs, and demand will certainly decrease.

Before that, though, the data centres will likely fail to be built in full. Investors will eventually figure out that LLMs are still not profitable, no matter how many data centres you produce. People are only interested in the derivative products at a lower price than it costs to run them. The math ain't mathin'.

The longer it takes to get them all built, the more exposed they all are. Even if it turns out to be profitable, taking three years to build a data centre rather than one year is significant, as profit for these high-tech components falls off over time. And how many AI data centres do we really need?

I would go further and say that these long and complex supply chains are quite brittle. In 2019, a 13-minute power cut caused a loss of 10 weeks of memory stock [1]. Normally, shops and warehouses act as a capacitor and absorb small supply-chain ripples. But now that these components are being piped straight to data centres, they are far more sensitive to blips. What about a small issue in the silicon that means you damage large amounts of your stock by running it at full power, through something like electromigration [2]? Or a random war...?

> The counterargument to this is that the "economic lifespan" of an Nvidia GPU is 1-3 years depending on where it's used so there's a case to be made that Nvidia will always have customers coming back for the latest and greatest chips. The problem I have with this argument is that it's simply unsustainable to be spending that much every 2-3 years and we're already seeing this as Google and others are extending their depreciation of GPU's to something like 5-7 years.

Yep. Nothing about this adds up. Existing data centres with proper infrastructure are being forced to extend the use of previously uneconomical hardware because new data centres currently building out their infrastructure have run prices up so high. If Google really thought this new hardware was going to be so profitable, they would have bought it all up.

[1] https://blocksandfiles.com/2019/06/28/power-cut-flash-chip-p...

[2] https://www.pcworld.com/article/2415697/intels-crashing-13th...


> This approach still works, why do something else?

One issue is that the time provided to mark each piece of work continues to decrease. Sometimes you only get 15 minutes for 20 pages, and management believe that you can mark back-to-back from 9-5 with a half-hour lunch. The only thing keeping people sane is the students who fail to submit, or who submit something obviously sub-par. So where possible, even when designing exams, you try to limit free text altogether: multiple choice, drawing lines, a basic diagram, a calculation, etc.

Some students have terrible handwriting. I wouldn't be against the use of a dumb terminal in an exam room/hall. Maybe in the background it could be syncing the text and backing it up.

> Unless you're specifically testing a student's ability to Google, they don't need access to it.

I've been the person testing students, and I don't always remember everything. Sometimes it is good enough for students to demonstrate that they understand the topic well enough to know where to find the correct information, based on good intuition.


I want to echo this.

Your blue book is being graded by a stressed out and very underpaid grad student with many better things to do. They're looking for keywords to count up, that's it. The PI gave them the list of keywords, the rubric. Any flourishes, turns of phrase, novel takes, those don't matter to your grader at 11 pm after the 20th blue book that night.

Yeah sure, that's not your school, but that is the reality of ~50% of US undergrads.


Very effective multiple-choice tests can be given that require work to be done before selecting an answer, so they can be machine graded. Not ideal in every case, but a very high-quality test can be made multiple choice for hard-science subjects.


True! Good point!

But again, the test creator matters a lot here too. Making such an exam is quite the labor, especially as many/most PIs have other, better things to do. Their incentives are grant money, then papers, then in a distant third their grad students, and finally undergrad teaching. Many departments are explicit on this. Spending the limited time on a good undergrad multiple-choice exam is not in the PI's best interest.

Which is why, in this case of a good Scantron exam, they're likely to just farm it out to Claude. Cheap, easy, fast, good enough. A winner in all dimensions.

Also, as an aside to the above, an AI with OCR for your blue book would likely be the best realistic grader too. Needs less coffee, after all.


This is what my differential equations exams were like almost 20 years ago. Honestly, as a student I considered them brutal (10 questions, no partial credit available at all) even though I'd always been good at math. I scraped by but I think something like 30% of students had to retake the class.

Now that I haven't been a student in a long time and (maybe crucially?) that I am friends with professors and in a relationship with one, I get it. I don't think it would be appropriate for a higher level course, but for a weed-out class where there's one Prof and maybe 2 TAs for every 80-100 students it makes sense.


> Very effective multiple choice tests can be given, that require work to be done before selecting an answer, so it can be machine graded.

As someone who has been part of the production of quite a few high stakes MC tests, I agree with this.

That said, a professor would need to work with a professional test developer to make an MC test that is consistently good, valid, and reliable.

Some universities have test dev folks as support, but many/most/all of them are not particularly good at developing high quality MC tests imho.

So, for anyone in a spot to do this, start test dev very early, ideally create an item bank that is constantly growing and being refined, and ideally have some problem types that can be varied from year-to-year with heuristics for keys and distractors that will allow for items to be iterated on over the years while still maintaining their validity. Also, consider removing outliers from the scoring pool, but also make sure to tell students to focus on answering all questions rather than spinning their wheels on one so that naturally persistent examinees are less likely to be punished by poor item writing.


Pros and cons. Multiple choice can be frustrating for students because it's all or nothing: spend 10+ minutes on a question, make a small calculation error, and end up with a zero. It's not a great format for a lot of questions.


They're also susceptible to old-school cheating - sharing answers. When I was in college, multiple choice exams were almost extinct because students would form groups and collect/share answers over the years.

You can solve that but it's a combinatorial explosion.


A long time ago, when I handed out exams, I used to program each question into a generator that produced both not-entirely-identical questions for each student (typically, only the numeric values changed) and the matching answers for whoever was in charge of assessing.

That was a bit time-consuming, of course.
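
The comment above doesn't say how the generator was built, but the idea is easy to sketch in Python (the Ohm's-law question is purely an invented example): seed the RNG with the student ID, so each student gets different numbers while the grader can regenerate the exact answer key.

    import random

    def make_question(student_id):
        # Deterministic per-student randomness: same ID, same numbers.
        rng = random.Random(student_id)
        v = rng.randrange(6, 25)                # supply voltage (V)
        r = rng.choice([100, 220, 330, 470])    # resistance (ohms)
        question = f"A {v} V supply drives a {r} ohm resistor. What current flows, in mA?"
        answer = round(v / r * 1000, 2)         # answer key for the assessor
        return question, answer

    for sid in ("s1001", "s1002", "s1003"):
        q, a = make_question(sid)
        print(sid, "|", q, "| key:", a, "mA")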


For large classes, or test questions used over multiple years, you need to take care that the answers are not shared. It means having large question banks, which will slowly be collected by students anyway. A good question can take a while to design, and it can be leaked very easily.


Scantron and a #2 pencil.


Stanford started doing 15-minute exams with ~12 questions to combat LLM use. OTOH I got final project feedback from them that was clearly done by an LLM :shrug:


> I got final project feedback from them that was clearly done by an LLM

I've heard of this and have been offered "pre-prepared written feedback banks" for questions, but I write all of my feedback from scratch every time. I don't think students should have their work marked by an LLM or feedback given via an LLM.

An LLM could have a place in modern marking, though. A student submits a piece of work and you may have some high level questions:

1. Is this the work of an LLM?

2. Is this work replicated elsewhere?

3. Is there evidence of poor writing in this work?

4. Are there examples where the project is inconsistent or nonsensical?

And then the LLM could point to areas of interest for the marker to check. This wouldn't be to replace a full read, but would be the equivalent of passing a report to a colleague and saying "is there anything you think I missed here?".
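
A sketch of how that triage could look, with the actual LLM call left as a hypothetical `ask` callable (nothing here names a real API or model):

    CHECKS = [
        "Does any part of this read as LLM-generated? Quote the passages.",
        "Does any part appear replicated from elsewhere? Quote the passages.",
        "Where is the writing weakest? Quote the passages.",
        "Where is the work inconsistent or nonsensical? Quote the passages.",
    ]

    def triage(submission, ask):
        # `ask` is any callable that sends a prompt to an LLM and returns text.
        report = []
        for check in CHECKS:
            prompt = check + "\n\n---\n" + submission
            report.append((check, ask(prompt)))
        return report  # pointers for the human marker, not a grade

    # Dummy `ask` so the sketch runs standalone.
    print(triage("Example submission text...", lambda prompt: "(model output)"))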


> Some students have terrible handwriting.

Then they should have points deducted for that. Effective communication of answers is part of any exam.


> Then they should have points deducted for that. Effective communication of answers is part of any exam.

Agreed. Then let me type my answers out like any reasonable person would do.

For reference…

For my last written blue book exam (in grad school) in the 90s, the professor insisted on blue books and handwriting.

I asked if I could type my answers or hand write my answers in the blue books and later type them out for her (with the blue book being the original source).

I told her point blank that my “clean” handwriting was produced at about a third of the speed that I can type, and that my legible chicken scratch was at about 80% of my typing rate. I hadn’t handwritten anything longer than a short note in over 5 years. She insisted that she could read any handwriting, and she wasn’t tech savvy enough to monitor any potential cheating in real time (which I think was accurate and fair).

I ended up writing my last sentence as the time ran out. I got an A+ on the exam and a comment about one of my answers being one of the best and most original that she had read. She also said that I would be allowed to type out my handwritten blue book tests if I took her other class.

All of this is to say that I would have been egregiously misgraded if “clean handwriting” had been a requirement. There is absolutely no reason to put this burden on people, especially as handwriting has become even less relevant since that exam I took in the 90s.


I personally don't believe that terrible handwriting should be held against a computer science student.


Doctors (medicine) get away with it.


> Then they should have points deducted for that. Effective communication of answers is part of any exam.

...even when it's for a medical reason?


Copy the URL and manually paste it into a new tab, no referrer then.


I wrote something similar years ago, except it converted an image into a mesh of polygons. The idea was to have a small vector SVG that could be used as an image placeholder or background for web pages.

I think I lost the code. Initially it was a genetic algorithm that randomly placed overlapping polygons, but a later, improved method used connected polygons that shared points - which was far cheaper computationally.

Another method I explored was to compose a representative image via a two-colour binarised bitmap, which provided a pixelated version of the image as a placeholder.

The core idea is that you drop the image as a small Data URI straight into the page, and then fetch the high-detail version later. From the user's perspective, they are getting a very usable web page early on, even on poor connections.
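
The original code is lost, so here is just a rough sketch of the placeholder step in Python (assuming Pillow is available; the filename is illustrative): downscale the image hard and emit it as a data URI that can be inlined straight into the page, with the full-resolution image fetched later.

    import base64
    from io import BytesIO

    from PIL import Image  # assumes the Pillow library is installed

    def placeholder_data_uri(path, size=(16, 16)):
        # Heavily downscale the image and return it as an inline data URI.
        img = Image.open(path).convert("RGB")
        img.thumbnail(size)
        buf = BytesIO()
        img.save(buf, format="PNG")
        payload = base64.b64encode(buf.getvalue()).decode("ascii")
        return "data:image/png;base64," + payload

    # The page embeds this tiny version; the high-detail image loads afterwards.
    print(placeholder_data_uri("photo.jpg"))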


> How much time could you conceivably use this for?

I also thought the same. It's a nice feature to have, but typing out significant code on that keyboard is a burden compared to a full-sized keyboard. Plus, it's not small enough to carry on the daily.

I personally believe the correct form factor for such devices is a smart watch, where code is written off-device and deployed to it, and the results of the code can be enjoyed throughout the day.

For example, I've been developing a MicroPython-based smart watch [1] where code is deployed onto the watch. The idea is to be able to deploy apps and interact with them via a simple interface. Being able to interact with my code daily helps keep the device relevant and lets me make continual progress.

[1] https://coffeespace.org.uk/projects/smart-watch-v2.html
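
The page above doesn't spell out the app interface, so this is purely hypothetical: a sketch of what a deployable app could look like, with a stubbed text display so it runs under either CPython or MicroPython.

    import time

    class Display:
        # Stand-in for whatever draw API the watch actually exposes.
        def text(self, line, msg):
            print("[line", line, "]", msg)

    class UptimeApp:
        # A trivial "app": one file, deployed to the watch, polled by its loop.
        def __init__(self, display):
            self.display = display
            self.start = time.time()

        def tick(self):
            elapsed = int(time.time() - self.start)
            self.display.text(0, "uptime")
            self.display.text(1, str(elapsed) + " s")

    app = UptimeApp(Display())
    for _ in range(3):
        app.tick()
        time.sleep(1)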

