I guess it's hard to see oppression when it benefits you...



They made some beautiful computers. I'd really like to get an M20 eventually, or wait until 3D printers get good enough to print one. ;-)

It's probably faster.

> The wonderful animation is brought to you by <marquee>

I’m shocked it still works.


We need a TLA authority to help prevent collisions in the acronym space. It's bad enough that MCP is also the Burroughs/Unisys mainframe operating system; now TSO is also the Time Sharing Option on IBM mainframes.

TSO? Did you mean tso.architecture.cpu?

Either that or https://www.ibm.com/docs/en/zos-basic-skills?topic=interface...

I would prefer we didn’t have so many collisions in the TLA space.


The last company I worked at created TLAs for everything, or at least engineering always did. New company doesn't seem to have caught that bug yet though, thankfully.

Can you upload code to be executed on a stock 1541/1571? Would be fun to see the drive doing things like "read this file, but sorted on columns 3-10" or "add these two files line by line into a third file".

> Can you upload code to be executed on a stock 1541/1571?

Yes. There were disk duplicators that ran entirely on the drives.

You'd upload the program to a pair of daisy-chained drives, put the source disk in one and the destination disk in the other, and they'd go about their business.

You could then disconnect the computer and do other things with it while making all the disk copies you wanted.

I've always wanted a modern equivalent. I thought FireWire might make it happen, but it didn't. And it's my understanding that USB doesn't allow this kind of independent device linking.

The closest thing I've seen in modern times was a small box I got from B&H that would burn the contents of a CF card onto a DVD-RW.


Yes, you can. Back in the day this is how fast loaders worked: they uploaded an optimized serial protocol to the RAM on the drive and called into it.
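
For the curious, here's a minimal host-side sketch (in Python) of what such an upload amounts to. The "M-W" (memory write) and "M-E" (memory execute) opcodes come from the 1541's command channel as documented in its manual; the $0500 buffer address and the 34-byte payload limit are commonly cited values I'm treating as assumptions, and the actual IEC bus traffic is omitted entirely:

    def mem_write(addr: int, data: bytes) -> bytes:
        """Build an M-W command: "M-W", address lo/hi, byte count, payload."""
        assert len(data) <= 34  # commonly cited per-command payload limit
        return b"M-W" + bytes([addr & 0xFF, addr >> 8, len(data)]) + data

    def mem_execute(addr: int) -> bytes:
        """Build an M-E command: run 6502 code at the given drive address."""
        return b"M-E" + bytes([addr & 0xFF, addr >> 8])

    # Upload a trivial 6502 stub (LDA #$00 / RTS) into a drive RAM buffer
    # and jump into it; on real hardware each string goes out over channel 15.
    stub = bytes.fromhex("a90060")
    print(mem_write(0x0500, stub))
    print(mem_execute(0x0500))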

Absolutely. The EPROM from TFA was only needed for standalone usage. But 2K of RAM isn't much.

As the schematics were usually available in the manual, it was not too hard to add some additional static RAM. There were unused address lines available which could be used for chip select.
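
To sketch the decode idea: with only 2K of RAM, two VIAs, and ROM on the bus, most of the 6502's address space is free, so a previously unused address line can gate the new chip's select. This toy Python decoder is purely illustrative; the ranges and the choice of A13 are assumptions, not the 1541's actual memory map:

    def chip_select(addr: int) -> str:
        """Toy decoder; the ranges are illustrative, not the real 1541 map."""
        if addr & 0x8000:      # top half of the address space: DOS ROM
            return "ROM"
        if addr & 0x2000:      # a previously unused line (A13 here) gates new SRAM
            return "expansion SRAM"
        if addr & 0x1800:      # the two 6522 VIAs live in this region
            return "VIA"
        return "stock 2K RAM"  # everything else falls through to internal RAM

    for a in (0x0200, 0x1800, 0x2000, 0xC000):
        print(hex(a), "->", chip_select(a))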

That would work if the OS could load specific code into the drive's memory. It's a bit like how "channel programming" worked on mainframes. Not sure modern ones still get advantages from that.

That was because of the slow serial interface on the VIC and C64 side - IIRC, the required UART was removed from the 64 as a cost-cutting measure, and it shipped having to bit-bang data to the drive. Overall, this is a very solid design idea.

With a little extra smarts, the drive could deal with ISAM tables as well as files and do processing inside itself. Things like sorting and indexing tables in dBase II could happen in the drive while the computer was busy updating screens.

OTOH, on the Apple II, the drive was so deeply integrated into the computer that accelerator boards needed to slow the clock back down to 1 MHz when I/O operations were running. Even other versions of the 6502 would need to have the exact same timings if they wanted to be used by Apple.
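
To make the bit-banging point concrete: without a UART, the CPU toggles the port lines itself for every single bit, paying the cost in cycles. A toy sketch of the idea, not the real IEC handshake (which also involves ATN/EOI signalling and much tighter timing); the callbacks stand in for poking the actual port register:

    import time

    def send_byte(value, set_clock, set_data, bit_time_s=20e-6):
        """Shift one byte out LSB-first by toggling clock and data by hand."""
        for bit in range(8):
            set_data((value >> bit) & 1)  # present the next bit on the data line
            set_clock(0)                  # drop the clock so the receiver samples
            time.sleep(bit_time_s)
            set_clock(1)                  # raise it again for the next bit

    # Toy usage: print() stands in for writing the CIA port register.
    send_byte(0x41, lambda v: print("CLK ", v), lambda v: print("DATA", v))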


The designers planned on using a shift register in the 6522 VIA chips to implement fast serial loading, but an undocumented bug in that chip forced them to fall back to the slow bit-banging version that shipped.

I don't know how many of you have seen a 1541 floppy drive in person, but it is massive: heavier and possibly bigger than an actual Commodore 64, and pretty expensive at the time too.

It's fun seeing C64 people on the defensive about it, a nice change from getting lectures from them about how their graphics were the pinnacle of 8-bit computing.


Part of the size was the internal power supply. And that thing got hot, too. I used them at school, but at home only had the smaller 1541-II with an external power brick.

The Apple II disk drives, on the other hand, were not only cheap (Apple was different then!) and fast, but were powered by the ribbon cable connecting them to the computer.


Oh, it's MUCH better than that. Commodore did this because they had incompetent management. They shipped earlier products (VIC-20, 1540) with the hardware-defective 6522, but:

- The C64 shipped with the 6526, a fixed version of the 6522 without the shift register bug

- The C64 is incompatible with the 1540 anyway

They crippled the C64 and its floppies _for no reason_.


It was not for no reason. When adding a screw hole to the motherboard so it could be mounted in the case, they accidentally removed the high-speed wire, dooming the C64 to the same slow data speed as the VIC-20 with its faulty VIA.

The C64 data speed actually ended up being even slower than the VIC-20's. You can read the full details here: https://imrannazar.com/articles/commodore-1541


I have been in similar meetings a couple of times, when the client didn't really understand the long-term ramifications of their game plan - and we did. Of course, we weren't as fortunate as Microsoft, but in one instance I remember clearly - and where I made remarks privately as we left the building - the client was more or less forced to acquire, for an unreasonably high amount, the tiny vendor a couple of years down the line.

> not games, desktops, web/db servers, lightweight stuff like that.

Games, desktops, browsers, and the like were designed for computers with a handful of cores, but the core count will only go up on these devices - a very pedestrian desktop these days has more than 8 cores.

If you want to make software that'll run well enough 10 years from now, you'd better start using computers like those of 10 years from now. A 256-core chip might be just that.


why do you think lightweight uses will ever scale to lots of cores?

the standard consumer computer of today has only a few cores that race-to-sleep, because there simply isn't that much to do. where do you imagine the parallel work will come from? even for games, will work shift off the GPU onto the host processor? seems unlikely.

future-proofing isn't about inflating your use of threads, but being smart about memory and IO. those have been the bottleneck for decades now.


> why do you think lightweight uses will ever scale to lots of cores?

Because the cores will be there, regardless. At some point, machines will be able to do a lot of background activity and learn about what we are doing, so that local agentic models can act as better intelligent assistants. I don't know what the killer app for the kilocore desktop will be - nobody knows that - but when PARC made a workstation with bit-mapped graphics out of a semi-custom minicomputer that could easily have served a department of text terminals, we got things like GUIs, WYSIWYG, Smalltalk, and a lot of other fancy things nobody had imagined back then.

You can try to invent the future using current tech, or you might just try to see what's possible with tomorrow's tools and observe it firsthand.

