
It's a wonderful problem for optimizing code. Michael Abrash hosted a performance contest for word counting back in... 1991? (If my memory serves.) The article and code can be found here:

There Ain’t No Such Thing as the Fastest Code: Lessons Learned in the Pursuit of the Ultimate Word Counter

Article: https://www.phatcode.net/res/224/files/html/ch16/16-01.html

Code: https://www.phatcode.net/res/224/files/html/ch16/16-05.html
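
For anyone who wants to play along before reading the chapters, a baseline of the kind the contest improved upon is a simple in-word/out-of-word state machine. The "word character" set below is a guess on my part; the linked article has the contest's exact definition, and the winning entries replace all of this with lookup tables and hand-tuned assembly.

  #include <ctype.h>
  #include <stdio.h>

  long count_words(FILE *f)
  {
      long words = 0;
      int c, in_word = 0;

      while ((c = getc(f)) != EOF) {
          int is_word_char = isalnum(c) || c == '\'';
          if (is_word_char && !in_word)
              words++;            /* count each word at its first character */
          in_word = is_word_char;
      }
      return words;
  }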


[Disclaimer: Former Amazon employee and not involved with Go since 2016.]

I worked on the first iteration of Amazon Go in 2015/16 and can provide some context on the human oversight aspects.

The system incorporated human review in two primary capacities:

1. Low-confidence event resolution: A subset of customer interactions resulted in low-confidence classifications that were routed to human reviewers for verification. These events typically involved edge cases that were challenging for the automated systems to resolve definitively. The proportion of these events was expected to decrease over time as the models improved. This was my experience during my time with Go.

2. Training data generation: Human annotators played a significant role in labeling interactions for model training -- particularly when introducing new store fixtures or customer behaviors. For instance, when new equipment like coffee machines was added, the system would initially flag all related interactions for human annotation to build training datasets for those specific use cases. Of course, that results in a surge of humans needed for annotation while the data is collected.

Scaling from smaller grab-and-go formats to larger retail environments (Fresh, Whole Foods) would require expanded annotation efforts due to the increased complexity and variety of customer interactions in those settings.

This approach represents a fairly standard machine learning deployment pattern where human oversight serves both quality assurance and continuous improvement.
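
A minimal sketch of that generic pattern (the names, threshold, and structure are all illustrative, not Amazon's implementation):

  #include <stdio.h>

  typedef struct {
      double confidence;      /* classifier confidence for the interaction */
      int    is_new_fixture;  /* e.g. a just-introduced coffee machine */
  } Event;

  static void auto_resolve(const Event *e)       { (void)e; printf("auto-resolved\n"); }
  static void human_review(const Event *e)       { (void)e; printf("sent to a reviewer\n"); }
  static void queue_for_labeling(const Event *e) { (void)e; printf("queued as training data\n"); }

  void handle_event(const Event *e)
  {
      /* 2. New fixtures get flagged wholesale to build a training dataset. */
      if (e->is_new_fixture)
          queue_for_labeling(e);

      /* 1. Low-confidence classifications are routed to a human. */
      if (e->confidence < 0.90)   /* threshold is an assumption */
          human_review(e);
      else
          auto_resolve(e);
  }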

The news story is entertaining, but it implies there was no working tech behind Amazon Go, which just isn't true.


That's some fascinating background, thanks! It probably explains why they keep operating it in stadiums but not grocery stores. It works pretty well with a small handful of items, but doesn't scale up reliably to shopping carts full of stuff.


I wrote Qlife, a game of life "compiler" that generated X86 asm code, for one of Michael Abrash's optimization challenges in the early 90s.

Description: https://www.phatcode.net/res/224/files/html/ch18/18-01.html

Code: https://www.phatcode.net/res/224/files/html/ch18/18-03.html

Hashlife is far superior, so while Qlife was interesting it wasn't the best implementation.


I don't think you're missing anything. Years ago, I made the same observation, and ran a test to compare independent hash functions vs salting. It showed no difference. I would be interested if someone here has a different opinion and a rationale to explain what I might have missed.


You are processing more data if you use salts. I would think this is almost certainly slower than just using hard-coded hash functions with different constants. But maybe not, or maybe the difference isn't noticeable.
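
To make the comparison concrete, a sketch of the two approaches using FNV-1a as a stand-in (the constants are the standard 64-bit FNV values; note the salted variant hashes a few extra bytes per call, which is the cost in question):

  #include <stdint.h>
  #include <stddef.h>

  static uint64_t fnv1a(uint64_t h, const uint8_t *p, size_t n)
  {
      while (n--) { h ^= *p++; h *= 1099511628211ULL; }
      return h;
  }

  /* (1) k "independent" functions via distinct basis constants: no extra bytes */
  uint64_t hash_k(const uint8_t *key, size_t n, uint64_t basis_k)
  {
      return fnv1a(basis_k, key, n);
  }

  /* (2) salting: one function, per-function salt prefixed to the input */
  uint64_t hash_salted(const uint8_t *key, size_t n, uint64_t salt)
  {
      uint64_t h = fnv1a(14695981039346656037ULL, (const uint8_t *)&salt, sizeof salt);
      return fnv1a(h, key, n);
  }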


Seems legit. Digests from a good hash function should be as independent from each other across seeds as they are from the digests of other hash functions with the same seed.


This memory goes back almost thirty years. In 1993, most products still shipped on floppy disks, and costs were increasing as products grew in size. Shrinking a popular product by even one disk could save over a million dollars. Publishers were using data compression, but the tools were of the same class intended for regular users. That is, compression and decompression required roughly similar compute resources. I noted a few things:

1. Software products would be compressed just once but decompressed millions of times.

2. It should be possible to create an asymmetric codec that expended tremendous resources on compression but still kept decompression light.

3. Publishers didn't care how much compute compression would require as long as it was still tractable and would save them money. If they had to buy an expensive computer and let it run overnight it would be fine.

I didn't know anything about compression at the time but the premise seemed strong. I took unpaid time off to tackle this problem and in a few months (after a lot of trial-and-error learning) I began to have a workable product.

Borland, Novell, and Microsoft (for the .CAB files) were the first licensees. The compressor was called Quantum and was typically 20-30% better than PkZip. My sales technique was to take a product, recompress it with Quantum, and show the company how much money they could save. (As I recall, I was able to demonstrate reducing Windows for Workgroups by two disks.)
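
Quantum's internals aren't shown here, but a minimal sketch of the asymmetric premise using today's zlib would look like the below: burn extra CPU once at compression time searching for the smallest output, while decompression stays a single cheap call. (A real "expensive" compressor searches a vastly larger space than nine levels, of course.)

  #include <stdlib.h>
  #include <string.h>
  #include <zlib.h>

  /* dest_cap should be at least compressBound(src_len).
     Returns the best compressed size, or 0 on failure. */
  uLong compress_hard(Bytef *dest, uLong dest_cap, const Bytef *src, uLong src_len)
  {
      uLong best_len = 0;
      Bytef *tmp = malloc(dest_cap);
      if (!tmp) return 0;

      for (int level = 1; level <= 9; level++) {   /* pay this cost once... */
          uLongf tmp_len = dest_cap;
          if (compress2(tmp, &tmp_len, src, src_len, level) == Z_OK &&
              (best_len == 0 || tmp_len < best_len)) {
              best_len = tmp_len;
              memcpy(dest, tmp, tmp_len);
          }
      }
      free(tmp);
      return best_len;
  }

  /* ...and decompress millions of times, cheaply. */
  int decompress_light(Bytef *dest, uLongf *dest_len, const Bytef *src, uLong src_len)
  {
      return uncompress(dest, dest_len, src, src_len);
  }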


I was responsible for the Turbo C (later Borland C/C++) run-time library in the late 80s and early 90s.

Tanj Bennett created our original 8087 floating point emulator. It was real-mode, 16-bit code that kept an 80-bit float in five 16-bit CPU registers as a working value during its internal math calculations. If you ever coded for the PC you will appreciate just how precious register space was on the 8086/8088.

It's been a few decades and my memory is fuzzy, but I don't recall any critical places where the registers had to be saved and restored. He chose math algorithms and code paths specifically to keep all five live at all times. Tanj's code flowed with an economy of motion that brought to mind the artistic performance of a professional dance troupe. I did not have the math skill and could not have ever created it as he had. It brought me genuine pleasure just to read his code.
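
For readers who haven't seen it, the arithmetic behind "five registers" checks out: the x87 80-bit extended format is one 16-bit sign+exponent word plus a 64-bit significand with an explicit integer bit. A sketch of the layout (the register assignment is my own illustration, not necessarily Tanj's):

  #include <stdint.h>

  struct ext80 {
      uint16_t se;      /* sign bit + 15-bit biased exponent   (e.g. AX)      */
      uint16_t m[4];    /* 64-bit significand, explicit high 1 (BX,CX,DX,SI) */
  };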

Eventually, it came time to port it to 32 bits (real and protected modes). I wrote new exception handlers and replaced the instruction-decode logic with a nice table-driven scheme, but did not touch the core math library. It still ran in 16 bits under the hood, exactly as Tanj designed it.

Tanj, if you happen to see this, I have fond memories of your emulator. It was beautiful.


Hi David, thanks for the kind words. I think I still have the source for that; it was written around 1984, originally for a project of my own, then sold to Logitech, Zorland, and Borland (before I went to work for Borland). The Borland one was probably tweaked a bit -- like you, my memory does not include details.

There were 7 registers to play with: AX, BX, CX, DX, SI, DI, and occasionally BP. MS-DOS had no rules about the frame pointer, although later Borland preferred it be kept to a standard use so that exceptions could be handled.

It would have run faster in 32 bit. Many fewer cross products for multiply and faster convergence for division and sqrt, plus just fewer registers needed. But updating the entry and exit code may have been the largest win. By the mid 1990s the FPU was moving onto the CPU chips even at the low end so the main goal would have been stability and backwards compatibility.
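
The cross-product count is easy to see in a generic schoolbook multiply (not the emulator's code): a 64-bit significand split into 16-bit limbs needs 4 x 4 = 16 hardware MULs, where 32-bit limbs would need only 2 x 2 = 4.

  #include <stdint.h>
  #include <string.h>

  /* 64x64 -> 128-bit product; limbs are least-significant first. */
  void mul64_by_16bit_limbs(const uint16_t a[4], const uint16_t b[4], uint16_t r[8])
  {
      memset(r, 0, 8 * sizeof r[0]);
      for (int i = 0; i < 4; i++) {
          uint32_t carry = 0;
          for (int j = 0; j < 4; j++) {            /* 16 partial products */
              uint32_t t = (uint32_t)a[i] * b[j] + r[i + j] + carry;
              r[i + j] = (uint16_t)t;
              carry = t >> 16;
          }
          r[i + 4] = (uint16_t)carry;
      }
  }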

I wrote assembly code for many machines before and after that. In the early days such careful juggling of registers was routine. I generally wrote my algorithms in Pascal (later in Modula or Ada and by Borland days, in C++) and then hand compiled down to assembly. That separation of algorithm and coding stopped it getting too tangled.

Thanks for the shout out! These days I write System Verilog (algorithms modelled with C++) for hardware projects, and Julia for the pure software.


Tanj, these are all the most excellent reasons why we were proud to have you as part of our Borland team. Software craftsmanship.


I have shared this with Tanj. I had the privilege to work with him in Microsoft!


Awesome!!

Would it be possible to release this bit of the library, if not the whole thing (for any definition you like of "whole thing") à la Movie Maker (https://twitter.com/foone/status/1511808848729804803)?

More specifically there are two components to this question - it being OK for the code to be released, and the code itself being available to be released :). I'm specifically asking after the former; if no one knows where the code actually is right now, well, it can be dug out later.


Nice idea, but I honestly don't think it has much value for study. It was a solution to a problem which is no longer important, and what impressed David (and was fun for me) was implementing it under constraints (8086) that are no longer relevant. I would vote for some of the other stuff mentioned by others, like TeX as an example of mastery of both the application requirements with beautiful algorithms and inspirational documentation, PostgreSQL as a thriving large system with brilliant modularization that has enabled research, or LLVM as a pinnacle of code generation which has enabled a golden era of language and hardware design over the last 20 years.


Borland made some copies of Turbo C available for free. E.g. https://cc.embarcadero.com/item/26014

Some sites probably also have versions for download without the registration requirement.


You just have to love HN, thank you!


I would have loved to read that code!


This is a great interview question. I don't want to be 'that guy' by picking at what may be a detail, but there is another response to look for from a candidate.

We should think very carefully about adding a multiplication command because it introduces a failure mode that may be unanticipated by the client. Code that previously worked could begin to fail after this command goes into use.

Specifically, if the client needs to revert a series of operations on integers, and if the operations are transitive, there is no need for a mechanism to ensure they occur in any particular order (the usual caveat about working within the limits of precision applies.) This holds true for addition and for multiplication, in isolation, but is not true if they are combined. Change the order and the end result will change. Adding multiplication puts a burden on the client to understand this risk and be explicit in the ordering.
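
A tiny sketch of the hazard, with arbitrary numbers:

  #include <stdio.h>

  int main(void)
  {
      /* Two clients race on a key holding 1: one sends "add 1",
         the other "mult 10". Arrival order changes the result. */
      int a = (1 + 1) * 10;   /* add first, then mult -> 20 */
      int b = (1 * 10) + 1;   /* mult first, then add -> 11 */
      printf("%d vs %d\n", a, b);
      return 0;
  }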

Some people may argue that the client should already understand it and they have a point. We can't defend against every possible misunderstanding. I think there is a good discussion to be had on this question and if the candidate were to go there, it would be a favorable sign of experience on their part.


This is a REALLY SOLID response to the question, and I'd hope it would get some strong 'hire' points.

But, to be fair, there's already technically this issue in the memcached API as described in the post, in that it supports both append and add, and "append 0" (on a value that add could also act upon) is effectively the same thing as "mult 10". If a field contains "1" and successive "add 1" and "append 0" commands come in, depending on what order they arrive in the result could be 20 or 11.

So I think the interviewer would be justified in saying 'yeah, let's assume we've evaluated that risk and we plan on adding some really thorough documentation warning people about that risk, so can you just go ahead and try and implement it?'

But absolutely, this was the thought that came into my mind when reading the spec too. Not enough developers think about APIs in terms of compositional, algebraic terms, and being able to see that adding 'multiply' and 'add' together at the same precedence in an API might cause trouble is a really valuable skill.


This is a REALLY SOLID response to the question, and I'd hope it would get some strong 'hire' points.

I agree -- but unfortunately the interview question doesn't filter for this level of thinking.

If anything, it filters for the exact opposite: your ability and willingness to shove a random new feature into the codebase in 3 hours (or GTFO) -- stability and other consequences be damned.


You can’t expect every question to give you an opportunity to show off every skill that you have. If I were asked this question, I would certainly comment on whether it seemed like a smart thing to do, but I wouldn’t refuse to do it. I can demonstrate both my ability to do some software archeology, and do some coding, and figure out the design of memcached first, and then also mention to the interviewer that I had reservations about the soundness of the operation. The interviewer may even have more questions to ask along those lines, and might be disappointed in candidates who don’t bring it up.

But they would certainly be disappointed in candidates who don’t take the opportunity to show off the programming skills that they have put on their resume.


As long as we agree that, at the end of the day, it's basically a crapshoot as to what you can really tell about the other person based on whether they answer these kinds of questions on time and in as much detail as you like.

That is -- I'm not saying these questions don't tell you anything about the candidate. But at the end of the day ... I just don't think they tell you that much.

Certainly not in the divining-rod, "finally I found a question that will sniff out the true h@ck3rz from the wannabe drudges" sense that people seem to think questions of this sort are imbued with.


> “question that will sniff out the true h@ck3rz from the wannabe drudges”

No one ever said that it would. No single question can tell you everything you want to know about a candidate. This question is designed to weed out candidates who talk well but can’t actually do the work. It won’t tell you which of those passing candidates are great and which are merely good; that’s what the rest of the interview is for.

I agree that interviewing is, unfortunately, a “crapshoot” for the candidates. As a candidate you are going to interact with dozens or hundreds of companies, and most of them won’t do a good job of interviewing you. Most of them end up with more candidates than they can really handle, so they end up passing up plenty of good prospects. But I disagree that this question is a “crapshoot”; it gives you specific information that you really want about each candidate, and it does so without a lot of the irritating artificiality we often take for granted in interview situations.


At the cost of 3 hours of the candidate's time (on top of all the other time demands, and the to some extent unavoidable irritating artificiality, of the rest of the interview process).

Let's just hope the total compensation offered is in line with these demands, then.


A lot of commenters have gotten hung up on the three hours, which I think is pretty funny. As explained elsewhere, it was really just one hour and most candidates did the work in half that. I think that this question fits nicely into a standard interview process, where a candidate spends half a day on site (or in video conference) meeting the team and doing interviews with several people.


Because anyone who has a different point of view about things must be... just all hung up about something or other.

Bottom line is -- if you tell a candidate "this could take up to 3 hours of your time" -- then boom, right there, you've asked them to carve 3 hours out of their life (away from their spouse who may be chronically ill, or who knows what else they might have going), in addition to all the other hours they need to invest in your process before you can begin to take them seriously.

If that's your process, fine -- just be up front please, and own up to it.


I think you misunderstood. Regardless of what the blog post said, this was a one–hour exercise, not a three–hour project. https://news.ycombinator.com/item?id=31065845

I agree with you that interviews which incorporate a larger project that takes multiple hours are usually a waste of time. Usually the project turns out to be too unfocused and too subjectively judged to provide useful information about the candidates. I spent four hours on one once where the only feedback I got was that my solution “wasn’t object oriented enough.”


Fair enough - thanks for clarifying.


Having just gone through this question for fun, the codebase DECR command has this clipping code in it to avoid the value going negative:

       if(delta > value) {
           value = 0;
       } else {
           value -= delta;
       }
A caller reverting operations by sending the same values with the operations flipped around must already keep track of whether they asked to drop below 0, or they may not get back to the original value.
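
A worked example of that trap, mirroring the clamp above (illustrative, not memcached's actual code path):

  #include <stdio.h>

  static unsigned long decr_clamped(unsigned long value, unsigned long delta)
  {
      return (delta > value) ? 0 : value - delta;
  }

  int main(void)
  {
      unsigned long v = 3;
      v = decr_clamped(v, 5);   /* clamps to 0, not -2 */
      v += 5;                   /* "revert" by sending incr 5 */
      printf("%lu\n", v);       /* prints 5, not the original 3 */
      return 0;
  }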

Plus, the atomic update happens with a lock/release between each operation, so while you might get the same result at the end of your rearranged ordering, clients may see intermediate results and changing the order would change which values they see, which may or may not matter.


I agree it's a great interview question, but I would never ask anyone to spend three hours doing this for free. Someone comes to me and says they can program, and that's that. They have other employers and coworkers in their life, or just teachers and other students -- people I can ask what it's like to work with them if I want. But at the end of the day, I'll know if I've been lied to in the first 30 days, and neither of us wants that. So I think all I'm trying to find out at the interview stage is whether I want to spend 30 days with this person, and I don't need to study the outcome of three hours of them guessing at what I want to help me do that.

But I like to talk about programming, and this question just creates so many different kinds of ideas in my head too. I like how you immediately think about ordering of events and the implications of that. Maybe memcached needs a division operator too. I think that's fun to talk about. Maybe overflows are important; maybe the task is to add some new types to memcached to protect against that. Maybe we should add some stats to track the number of overflows (or otherwise give some estimate of the accuracy).

Now I am looking at the slide-rule on my desk and wondering what the required precision for the use-case is. That is, is it possible that, instead of modifying memcached and having to support your freaky version forever, you can simply instruct the application to increment by the log multiplied out by the desired precision, then reverse with division+exp on output?
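
A sketch of that slide-rule trick, with SCALE as an assumed precision knob: store round(log(x) * SCALE), let "multiply by m" become a plain increment, and recover an approximate value with exp() on the way out. Fine for stats-style counters; wrong for anything that must be exact.

  #include <math.h>
  #include <stdint.h>

  #define SCALE 1000000.0   /* fixed-point precision; values must stay > 0 */

  int64_t to_stored(double x)    { return llround(log(x) * SCALE); }
  int64_t mult_as_incr(double m) { return llround(log(m) * SCALE); } /* amount to ADD */
  double  from_stored(int64_t v) { return exp((double)v / SCALE); }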

And now I am thinking about supportability: Once we've decided what we want out of memcached, is it worth trying to get those changes into the upstream memcached so we don't have to worry so much about that feature disappearing (or becoming difficult in the future)? Talking to people about the social aspects of programming with open-source can be important too.

So yeah, lots of reasons to like this question. I'm probably going to use some variant of it myself, because wherever the candidate goes with it is going to be informative, but I'm extremely disappointed by the rest of the process (and all of part 2); it's definitely not for me.


I don't know the rest of the process, but 3 hours for an interview seems 100% fine. And practical tests are so much better than random abstract questions IMO. I think your solution is to watch and plan on firing many people within 30 days, which seems worse for everyone.


> I think your solution is to watch and plan on firing many people within 30 days, which seems worse for everyone.

Why did you add the word "many" there?

I didn't use it. I can count on one hand the number of people who flat-out lied to me about being able to program and whom I needed to fire for it, and I've been managing and developing software for around thirty years or so.

Do you think that's a lot? Or do you think people lying about what they do is common?


> Or do you think people lying about what they do is common?

Probably depends on the offered salary range.


How do you mean?


He/she means that they think people will be more likely to try to lie their way into a job if the salary is higher.


Higher? How much higher? Higher than what? I haven't said what salaries I hire for.

This doesn't sound very much like thinking to me.


Hiring easily with an open mind to firing quickly sounds like a good idea but civil rights law, at least in the US, makes it very difficult. Let's say that "fire quickly" is one of the few women or blacks you've hired. Even worse, let's say that the "fire quickly" contingent is disproportionately women or racial minorities, through no fault of your own. I hope you're ready to pay a lawyer and a decent settlement. It doesn't matter whether you've actually done anything wrong. It just comes down to the economics of the chance they draw a sympathetic judge, the chance a jury will buy their sob story, the amount of money it would cost you to litigate the case, and the amount of money they're willing to go away (i.e. settle) for. Hope no one in your workplace sent a stupid joke over email that will make you look awful in front of a jury. That makes the settlement number go up by a lot.

You can maybe get away with this practice with a small enough company, but even then, I'd advise extreme caution. If you can filter people out before offering them a job, you're in a much safer position legally.

Edit: I think you're in the UK. I don't know much about employment law over there, but I gather the situation is actually better.


> Edit: I think you're in the UK. I don't know much about employment law over there, but I gather the situation is actually better.

I'm in Portugal actually, but I've lived in the UK and the US (I've actually worked there for almost 20 years). In a past life I was hired as the country manager for a British company working out of New York, and I had to receive training on employment law in both the UK and US, because they actually take it very seriously in the UK, much more so than in the US; an employee (or former employee) can ask a tribunal to decide if their termination was fair, and they absolutely do look for race and gender selection. I can appreciate Americans might not get a very good education on workers' rights, so it is perhaps worth mentioning to an American that when I fire someone, I'm doing it after I've had that decision reviewed by counsel, and with that training.

Now, when I said they lie to me, perhaps you got some idea that was just my opinion or something. I didn't mean to imply any amount of whimsy and tried to avoid any words that might suggest it; I admitted elsewhere I've fired fewer than 5 people for lying to me, and that's with paper evidence of a lie. I've probably hired hundreds of people at this point in my life, so we're talking about a 1-2% problem where, even if we have to pay a six-month settlement, that's peanuts compared to what we save:

See, if I had to spend 3 hours on each candidate -- and to hire those hundreds of people I've maybe interviewed thousands -- we're talking literal years of my life. I can't realistically do anything else with that time. And not just my life: I have a fiduciary duty to the company I work for; I can't in good conscience spend the money in the budget for my salary to avoid a much smaller potential penalty that probably won't even happen.

But I think it's important to think about some of the things you're saying: if someone thinks they're doing what I'm doing, and finds themselves firing so many people that there is any racial or gender bias in the pool they're firing, then I don't think they're doing what I'm doing. I might even wonder if they're racist or sexist myself, because my point is this shouldn't happen often enough to worry about.


The problem is that there's often a gender and racial bias in the pool of qualified workers, but the courts are basically willfully blind to that. Deviation from the overall demographic breakdown is taken as evidence of discrimination, even if those demographic groups have very different breakdowns of qualifications.


I think if you're worried about what a judge and jury would think of something, you probably shouldn't do it.

I'm not worried about firing someone for lying on their CV.


Do you mean "associative" instead of "transitive"?


You are right! This is why I appreciate HN so much. I don't see how to edit my comment but thank you for the correction.


You can only edit your comments for a limited duration, like one hour. Otherwise open the link for that comment (on the date) and there should be an edit link next to the parent link.


This is what experience and skill look like: careful consideration and really trying to understand what you are doing. Too bad it's so difficult to market this in a superhero genius package that managers will buy.


There's another potential issue that may or may not matter: since the value is stored as text it probably can't overflow in memcached itself, but the multiplication COULD overflow internally in the C code. Addition could do this also -- I would wonder how it's implemented.
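
A sketch of guarding that internal overflow with a checked multiply (illustrative; this is not memcached's code):

  #include <stdbool.h>
  #include <stdint.h>

  static bool mul_check(uint64_t a, uint64_t b, uint64_t *out)
  {
      if (b != 0 && a > UINT64_MAX / b)
          return false;           /* would wrap -- reject the command */
      *out = a * b;
      return true;
  }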


David Stafford here. Microsoft has the rights to publish it with Windows. My former Cinematronics co-founders (Mike Sandige, Kevin Gliner) and I would be happy to help it return, gratis.


Off topic, but one of the things I love most about HN is when a person referenced in an article/post/comment happens to be viewing it and chimes in with their first hand experience


I am one of the original authors of Space Cadet pinball (along with Mike Sandige, lead programmer, and Kevin Gliner, producer and designer.) It is surprising and gratifying to see interest is still alive for our old game. And I can't help but be impressed by the ingenuity shown by both the decompiling effort and the playable web-based game.


I'm impressed as well! It's exciting to see folks so enthusiastic about the game.

I took a deeper look at the github project. It's been a long time since I worked on the Space Cadet code, but the decompiled github code is pretty familiar. It's formatted differently of course, but I think it's actually better than the original. And, nice! Check it out David and Kevin: k4zmu2a got the quotes in! https://github.com/k4zmu2a/SpaceCadetPinball/blob/master/Spa...

Now I'll never live down how long I spent working on the flippers.

Bonus points for anyone who knows why the classes in the source are prefixed with a 'T'!

-Mike Sandige (Lead programmer on Space Cadet)


From wikipedia: "...[Danny Thorpe] in 1994 while at Borland, he contracted with Santa Cruz startup Cinematronics (David Stafford and Mike Sandige) to build a component model and collision physics engine for a software pinball game. Cinematronics licensed an early version of the pinball engine to Microsoft" Maybe Borland and Delphi has something to do with the 'T' prefixes. Source: https://en.wikipedia.org/wiki/Danny_Thorpe


Yep, this is the reason. Danny's initial code used Delphi with this convention. I'm not actually sure why that convention is used there, though. I remember asking Danny why, and I think his answer was just that was the Delphi convention. But it's been a while, so perhaps I have forgotten the details of his answer. I later had to migrate to C++ to integrate with the Windows build. And I retained the naming convention, mostly because it was quicker than changing it and I didn't have a lot of time. But I was always uncomfortable with the T prefix on the ramp class.


I remember using lots of T classes back in the early 90s when I was building software using Turbo Vision, a text user-interface framework bundled with Borland C++ (https://en.wikipedia.org/wiki/Turbo_Vision)


Absolutely not. The "T" prefix stands for "type" and has its roots in case-insensitive Pascal:

  /* eg common C style notation involving type identifier and var identifier is illegal in Pascal */
  Rect rect;
  (* so Pascal requires some distinct identifiers eg *)
  rect: TRect;
The other common prefix is "P", which stands for a pointer type.


Borland's C++ frameworks use the same convention, to this day.


Thanks to both of you. It took me a while to realize the depth built in. I tried making a pinball game many moons ago but couldn't get the ball to "feel" right (flippers as well).

How was that process? Did you go play physical tables, go for a realistic approach, or tweak magic numbers until it "felt right"?


We played every table we could get our hands on. We also rented tables weekly to be brought into the office once we moved to Austin. I tried to dig deeply into the history of pinball too, to understand why tables had evolved the way they had.

Mike (Sandige) built a scripting system that allowed me to tweak the physics, materials, etc. of each component. But our constant exposure to real tables helped us form a "feels right" baseline to target. I also applied whatever I'd learned at that point about game design fundamentals (it was early in my career).

After we finished 3D Pinball and started on Full Tilt, I got put in touch with a seasoned designer of real pinball tables who had worked on some hits from the 70s. He took me to task for a bunch of mistakes I made in 3D Pinball, and some of those corrections found their way into the Full Tilt version of Space Cadet (and more so in the other tables in Full Tilt).

   Kevin Gliner (designer and producer for 3D Pinball, etc)


Wow, great info. Thanks!


Always interesting to see the things hidden in games. When I was in the biz I was aware of many easter eggs, a lot of which are still unknown to the public.

For instance, this sign in GTA:SA has the word "TFT" on the back which is a reference to a secret video game "Illuminati": https://gta-myths.fandom.com/wiki/Signs_(GTA_SA)?file=Egg_8....


> Bonus points for anyone who knows why the classes in the source are prefixed with a 'T'!

Was it written in, or ported from, Delphi?


Has to be ported, since Delphi doesn't generate PDB7's.

Or at least written by someone familiar with Delphi: Age of Empires utility classes are also prefixed with T, but the game was obviously written in Visual Studio -- some dev tools were written in Delphi, though, so somebody seems to have taken the naming scheme from there in that case.


Borland C++ frameworks used the same coding conventions as Turbo Pascal/Delphi.

They still use them.


Yep, this is it. The project started out in Delphi, but I had to port it to C to integrate with the Windows build. For some reason they didn't want to include Delphi tooling in the Windows build.


Great to see you commenting here! :D

Were those quotes some kind of cheat system?


These were just a fun little Easter egg that made us smile. Typing the right code displayed memorable quotes collected from various folks at Cinematronics in the hectic time we were developing Space Cadet. Maybe I can find some time to check with the folks that they are ok with it; if so, I can put together a commit that comments who said them. I don't recall who said all of them, but I'm sure David and Kevin can fill in my gaps. There was a more deeply hidden Easter egg that shows the credits as well. The code to trigger that is not in this repo, though.


Borland C++ OWL and Turbo Vision. :)


"She may already be a glue bottle"

What?


This quote is mine, and is related to a game we were working on for Microsoft before we pivoted to pinball. There was a bug - probably mine - where the game displayed the previous state for an object, but the object had changed state, so the resulting behavior was confusing. 'she' and 'glue bottle' were referring to imagery used for the placeholder prototype artwork. I was trying to explain what was going on, and it made sense, but only in that incredibly constrained context. Outside of that context it's nonsense, but that's what makes it memorable and humorous to us.


Definitely thought that was about a childhood horse


I can't help but wonder for old closed-source utilities and games like this: whatever happened to the original source code?

Is it still around, stashed in a vault a MSFT, or is it lost for ever?

Do you know?

And what would it take to convince a large corp. like MSFT to actually release the original under some open source license ... not like the thing has much value by now other than historical.


The rights belong to Electronic Arts today. They acquired Maxis who had acquired us (Cinematronics.) Microsoft may still have rights to continue publishing a version with the Windows operating system.

Microsoft, if you're reading this, we would be glad to provide assistance in getting 3D Pinball running again on the latest Windows OS.


Given that the reason it was removed was IA-64, which isn't a thing anymore... this should be plausible. But given where MS has gone with games as of late, I'd be surprised if they do. EA should just release all of Full Tilt! Pinball on Steam or GOG if they haven't already.


It was removed for Vista, because it couldn't transition to AMD64. This was after Microsoft had essentially completely ditched Itanium.

(I'd love to see a GOG or Steam or even Origin release of Full Tilt Pinball, though. The Full Tilt version of Space Cadet is better -- higher-resolution graphics plus some gameplay tweaks.)

[edit] Digging a little deeper, that might not be entirely true. The first consumer version of Windows it didn't appear in was Vista, but it sounds like the decision to drop Pinball was made during the XP era (even though it shipped with x86 Windows XP). So issues with the IA64 port (or with another architecture) might've been the reason it was dropped, then that decision was not revisited for Vista even if Pinball might've worked under AMD64 Windows.


Actually, 3D Pinball is available in the AMD64 Windows XP 64-bit versions. It was not available (sort of... it's there but not actually installed from the media) on IA-64 versions. There was a youtuber who dug into it (probably at the cost of far too much of their own sanity). So there is no "might have"; it works (albeit with some rendering issues).


Was it compiled as a 64-bit executable on AMD64 Windows XP?


Yes... https://www.youtube.com/watch?v=3EPTfOTC4Jw

Warning: contains a youtuber bashing their head firmly against the pain of Itanium to test this.


Wow, this game was a staple in my childhood! Just wanted to let you know that your work brought me lots of happiness as a kid, so thank you :)

Out of curiosity, have you ever written about the experience? Technical challenges, the development culture, etc?


Thanks!

There were a couple of interviews with gaming magazines years ago. They would be hard to find today and didn't cover the topics you mentioned. Mike, Kevin, and I really need to get together to tell the story some day.


Please do! There are loads of us who would love to hear it all direct from you guys.


Yes, that would be amazing! I absolutely adored playing this game on our first family PC back in the day.


I've been meaning to write up a history, but it's one of those things that's been on my to-do list for years... Maybe someday, soon.

  Kevin


Well, in this day and age of bazooka DMCA takedowns, it's refreshing to see a game author happy about his game being gutted and put online to play.


> it's refreshing to see a game author happy about his game being gutted, and put online to play.

The game author is typically not the copyright holder.


I played it before it was bundled with Windows; at the time I didn't think that much of it (I was more into Epic Pinball, spent so much time on the shareware version with the android-themed board <_<), but it was a game we'd fire up Windows for. We probably got it off one of those shareware CDs, but I don't recall. Might have been off a diskette?


Thank you sir, you brought many many hours of enjoyment to my childhood. I remember being excited to go to my Aunt's house because she had a computer with Windows XP (my family only had Windows 2000) and I could play your game.

And only on hacker news will I be replying to one of the authors.


If the game were available as open source and/or ran on Linux, that would be super... this was my first PC game when I was a little child.


You can easily compile it on Linux following the instructions in the repo and grab the game assets from here: https://www.reddit.com/r/vitahacks/comments/pro8x1/comment/h... Tested on Arch Linux, works flawlessly!


Check out the SDL port or the Emscripten one.


It runs perfectly on Void, as an example.


We played the hell out of it on our Windows 2000 machines during particularly boring IT labs in college, in the early 2000s.

The physics just seemed so spot on. I installed Win2K on a VM recently, just for this.

Start Menu, type in "pinball". Ah....


Was it available on Windows 2000? I know it was a default on Windows XP. I was way too young back then to remember


Definitely. It was available (apparently) even with a Win95 upgrade pack, so it's very old.

I was not aware of it, however. I am not sure it was as easy to launch before, by just typing in "pinball" in the Start menu.


I played this game for hours a day, for an entire summer.

And it was the best summer of my life.

I remember getting up early, making cereal, and then playing this.

You created one of my favorite childhood memories, thank you so much.


It was the most joyful game that shipped with Windows, and it was also fun! I would like to play it again :)


This game blew my mind! How did you get the physics to be so realistic? At the time I tried a number of different pinball implementations and unrealistic physics was a big problem.


Thanks! We modeled collision boundaries with high resolution, and tried to use physically appropriate material settings. But this had to run on relatively slow machines, so the physics model itself was pretty basic. Except for the flippers. I really wanted them to feel like physical flippers - predictable, controllable, and without computational problems that let the ball pass through them sometimes. :( I put lots of time into the flippers, and had some trouble balancing those requirements with the available CPU - it wouldn't do to have the game slow down to figure out how the ball should move when you hit the flippers.


I noted this in another reply, but Mike had written a scripting model that let me adjust the physics and materials for each component separately. That allowed me to iterate rapidly when tuning the feel of the game. A solid physics engine is a pre-requisite, but what you do with it from there is also critical (and the goal was to replicate how real world tables felt, not how they actually behaved).


Well, it is simply one of the best pinball games I've ever played on a computer. Interesting scoring system. Fair. Not gimmicky. Pure beauty.


Great to see you here David. Thanks for all the work over the years.


'twas a great little game. good job!


I remember this, Dave. I was confident I had the fastest possible algorithm and you proved me wrong. It was a humbling experience but it forced me to throw out my assumptions and start over. It taught me to assume there was always a faster or better way just waiting to be discovered. TANSTATFC

You should post your code when you get a chance.


Very awesome that both of you are on here!


Truly. That's the magic of Hacker News, the people behind it always show up and even give more context and insights.


Recently I had a similar experience: I was about to call it a day, then discovered that there was another 10%.


> but it forced me to throw out my assumptions and start over.

You already forgot what Chapter 3 was all about :)

