mrandish's comments (Hacker News)

Good post. For anyone wondering "why do we have these particular resolutions, sampling and frame rates, which seem quite random", allow me to expand and add some color to your post (pun intended). Similar to how modern railroad track widths can be traced back to the wheel widths of roman chariots, modern digital video standards still reverberate with echoes from 1930s black-and-white television standards.

BT.601 is from 1982 and was the first widely adopted digital component video standard (sampling analog video into 3 color components (YUV) at 13.5 MHz). Prior to BT.601, the main standard for digitizing video was SMPTE 244M created by the Society of Motion Picture and Television Engineers, a composite video standard which sampled analog video at 14.32 MHz. Of course, a higher sampling rate is, all else equal, generally better. The reason for BT.601 being lower (13.5 MHz) was a compromise - equal parts technical and political.

Analog television was created in the 1930s as a black-and-white composite standard and in 1953 color was added by a very clever hack which kept all broadcasts backward compatible with existing B&W TVs. Politicians mandated this because they feared nerfing all the B&W TVs owned by voters. But that hack came with some significant technical compromises which complicated and degraded analog video for over 50 years. The composite sampling rate (14.32 MHz) is exactly 4x the color subcarrier frequency added in 1953, while the component rate (13.5 MHz) was chosen as a common multiple of the line rates of both 525-line and 625-line systems. Those two frequencies directly dictated all the odd-seeming horizontal pixel resolutions we find in pre-HD digital video (352, 704, 360, 720 and 768) and even the original PC display resolutions (CGA, VGA, XGA, etc). To be clear, analog television signals were never pixels. Each horizontal scanline was only ever an oscillating electrical voltage, from the moment photons struck an analog tube in a TV camera to the home viewer's cathode ray tube (CRT). Early digital video resolutions were simply based on how many samples an analog-to-digital converter would need to fully recreate the original electrical voltage.

For example, 720 is tied to 13.5 MHz because sampling the active picture area of an analog video scanline at 13.5 MHz generates 1440 samples (double per-Nyquist). Similarly, 768 is tied to 14.32 MHz generating 1536 samples. VGA's horizontal resolution of 640 is simply from adjusting analog video's rectangular pixels to be square (704 × 0.909 ≈ 640). It's kind of fascinating that all these modern digital resolutions can be traced back to decisions made in the 1930s based on which affordable analog components were available, which competing commercial interests prevailed (RCA vs Philco) and the political sensitivities present at the time.


> For example, 720 is tied to 13.5 MHz because sampling the active picture area of an analog video scanline at 13.5 MHz generates 1440 samples (double per-Nyquist).

I don't think you need to be doubling here. Sampling at 13.5 MHz generates about 720 samples.

    13.5e6 Hz * 53.33...e-6 seconds = 720 samples
The sampling theorem just means that with that 13.5 MHz sampling rate (and 720 samples) signals up to 6.75 MHz can be represented without aliasing.
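To make the arithmetic concrete, here's the same calculation in Python (using the ~53.33 µs active-line duration from Rec. 601):

```python
# Rec. 601 luma sampling: 13.5 MHz across the ~53.33 microsecond active line.
sampling_rate_hz = 13.5e6
active_line_s = 53.333e-6  # active picture portion of each scanline

# Samples per active line -- no doubling needed.
samples_per_line = round(sampling_rate_hz * active_line_s)
print(samples_per_line)  # -> 720

# The sampling theorem limits alias-free signal content to half the rate.
nyquist_hz = sampling_rate_hz / 2
print(nyquist_hz / 1e6)  # -> 6.75 (MHz)
```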

There's some history on the standard here: https://tech.ebu.ch/docs/techreview/trev_304-rec601_wood.pdf


Non-square pixels come from the legacy of anamorphic film projection. This was developed from the need to capture wide aspect ratio images on standard 35mm film.

This allows the aspect ratio captured on film to stay fixed while various display aspect ratios can be produced from it.

https://en.wikipedia.org/wiki/Anamorphic_format


> Similar to how modern railroad track widths can be traced back to the wheel widths of roman chariots

This is repeated often and simply isn't true.


I based that on the BBC science TV series (and books) Connections by science historian James Burke. If that history has since been debunked, then I stand corrected. Regardless of the specific example, my point was that sometimes modern standards are linked to long-outdated historical precedents for no currently relevant reason.

> I find it funny that casual observers on the outside are so morally opposed. ... It's really going to help mid market and below, for films that don't have Disney budgets.

Agreed. I also have a few decades of experience in film and television production, mostly in creating and deploying new digital tooling paradigms, from 'desktop video' in the 90s to virtual sets to real-time 3D environments. New digital production tools have almost always had the biggest impact enabling low-end and mid-tier creatives, not big budget studio productions. In the early 90s the Amiga-based Video Toaster enabled upstart productions like Mystery Science Theater 3000 and Babylon 5. The Toaster also enabled about 95% more cable-access crap and bad porn, but the other 5% was fantastically creative new stuff which couldn't have existed on indie budgets. Dramatic new production paradigms tend to unleash both democratization and disruption. Most people welcome the democratization yet reflexively fear the disruption. Today, few recall the professional production industry's early-90s predictions that desktop video would bring economic and creative doom, even though those predictions were widespread and echoed across mainstream media.

While machine learning-based production tools aren't yet flexible or granular enough for more than limited experiments, there's no reason they won't become increasingly useful for real work. IMHO they'll likely have the same kind of democratizing impact as desktop video and the Toaster - 95% more regrettable crap, some of which we're already starting to see, but, eventually, also 5% more wonderfully creative stuff which wouldn't have existed without it. The crap will quickly fade away but the bold new stuff will remain, pushing creative frontiers and shaping tomorrow's classics.


Yeah, I have a feeling rebranding might be in their future. As one of the somewhat rare crossover people who are both technical but with some marketing chops, it never ceases to surprise me how branding can elude some people who are technically brilliant.

News agencies like AP have already come up with technical standards and guidelines to define 'acceptable' types and degrees of image processing applied to professional photojournalism.

You can look it up because it's published on the web but IIRC it's generally what you'd expect. It's okay to do whole-image processing where all pixels have the same algorithm applied, like the basic brightness, contrast, color, tint, gamma, levels, cropping, scaling, etc filters that have been standard for decades. The usual debayering and color space conversions are also fine. Selectively removing, adding or changing only some pixels or objects is generally not okay for journalistic purposes. Obviously, per-object AI enhancement of the type many mobile phones and social media apps apply by default doesn't meet such standards.
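A toy sketch of that distinction (illustrative only, not AP's actual rules or code): a whole-image adjustment applies one function uniformly to every pixel, while a selective edit changes only some of them:

```python
import numpy as np

img = np.array([[10.0, 200.0], [60.0, 120.0]])  # tiny stand-in for a grayscale photo

# Whole-image processing: the same gamma curve is applied to every pixel,
# so the edit is uniform and reproducible from the original.
def apply_gamma(image, gamma):
    return 255.0 * (image / 255.0) ** gamma

uniform = apply_gamma(img, 0.8)

# Selective editing: only chosen pixels change -- generally not OK for journalism.
selective = img.copy()
selective[0, 0] = 255.0  # one "retouched" pixel

print(np.allclose(apply_gamma(img, 0.8), uniform))  # -> True
print(int((selective != img).sum()))                # -> 1 pixel differs
```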


This article seems like a purposeful PR 'leak' attempting to deflect criticism of the deal structure. I'm skeptical about the approximate numbers claimed but even if they're ballpark accurate, I suspect the generally rosy impression the article gives is leaving out a lot of non-rosy parts.

Even if this particular deal concluded like the Grinch movie with Jensen showering uncharacteristically generous gifts on the residents of Whoville, ultimately this type of asset acquisition deal undermines the 'startup promise' the valley was built on.


Nvidia practically bought everything that makes Groq function without any of the baggage or even regulatory scrutiny, and is now pushing PR-speak to do damage control.

Investors with enough into the deal to fight it in court get enough to not fight it. Key employees needed by the 'not acquirer' get compensation sufficient to retain them, although increasingly much of this is under a deferred vesting arrangement to ensure they stay with the 'not acquirer'.

Non-essential employees and small investors without the incentive or pockets to fund a legal fight get offered as little as possible. This structure also provides lots of flexibility to the 'not acquirer' when it comes to paying off existing debts, leases, contracts, etc. Basically, this is the end of being an early employee or small angel investor potentially resulting in a lucrative payoff. You have to remain central and 'key' all the way through the 'not acquisition'. I expect smaller early stage investors will start demanding special terms to guarantee a certain level of payout in a 'not acquisition'. I also expect this to create some very unfortunate situations because an asset sale (as they used to be done), could be a useful and appropriate mechanism to preserve the products and some jobs of a failing (but not yet fully failed) company - which was better for customers and some employees than a complete smoking crater.


> "In the summer of 1973, during their last day working at Xerox, Ashawna Hailey, Kim Hailey, and Jay Kumar took detailed photos of an Intel 8080 pre-production sample"

I was interested in this and followed the links to the original interview at: https://web.archive.org/web/20131111155525/http://silicongen... which was interesting:

> "Xerox being more of a theoretical company than a practical one let us spend a whole year taking apart all of the different microprocessors on the market at that time and reverse engineering them back to schematic. And the final thing that I did as a project was to, we had gotten a pre-production sample of the Intel 8080 and this was just as Kim and I were leaving the company. On the last day I took the part in and shot ten rolls of color film on the Leica that was attached to the lights microscope and then they gave us the exit interview and we went on our way. And so that summer we got a big piece of cardboard from the, a refrigerator came in and made this mosaic of the 8080. It was about 300 or 400 pictures altogether and we pieced it together, traced out all the logic and the transistors and everything and then decided to go to, go up North to Silicon Valley and see if there was anybody up there that wanted to know about that kind of technology. And I went to AMI and they said oh, we're interested, you come on as a consultant, but nobody seemed to be able to take the project seriously. And then I went over to a little company called Advanced Micro Devices and they wanted to, they thought they'd like to get into it because they had just developed an N-channel process and this was '73. And I asked them if they wanted to get into the microprocessor business because I had schematics and logic diagrams to the Intel 8080 and they said yes."

From today's perspective, just shopping a design lifted directly from Intel CPU die shots around to valley semi companies sounds quite remarkable but it was a very different time then.


> This is a self-reported survey paper.

Thank you for this post! The headline claim struck me as one that would be difficult to evidence with any scientific rigor. Reading the abstract furthered this feeling but I couldn't be bothered to read the methodology, so thanks for doing it.

> Everything about it, from the self-reported survey format to the idea itself, looks like someone started with a highly specific idea (Super Mario reduces burnout) and wanted to p-hack their way to putting it in a paper.

Indeed. Even the idea that individuals can reliably self-diagnose "burn-out" in an objective way is highly dubious.


Yeah, even as a software engineering type I immediately thought the question was too broadly posed. I assume the OP must have had something narrower in mind.

> But it tends to happen for a short amount of time, mostly for events

I expect you're correct. While it's fantastic tech, it's also very expensive to keep highly-precise, carefully calibrated micro-machinery like this aligned and operating 12+ hours a day outdoors where temps vary from 50 to 110 degrees Fahrenheit. Disney thinks in total cost of operation per hour and per customer served.

While there's probably little that's more magical for a kid than coming across an expressively alive-seeming automaton operating in a free-form, uncontrolled environment, the cost is really high per audience member. Once there are 25 people crowded around, no new kid can see what all the commotion is about. That's why these kinds of high-operating-cost things tend to be found in stage and ride contexts, where the audience served per peak hour can be in the hundreds or thousands. For outdoor free-form environments, the reality is it's still more economically viable to put humans in costumes. Especially when every high-end animatronic needs to always be accompanied by several human minders anyway.
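As a back-of-envelope sketch (all numbers invented purely for illustration), the economics reduce to operating cost per hour divided by guests served per hour:

```python
# Hypothetical figures, purely to illustrate the throughput argument.
def cost_per_guest(hourly_operating_cost_usd, guests_served_per_hour):
    return hourly_operating_cost_usd / guests_served_per_hour

# Free-roaming animatronic plus human minders: high cost, ~25 onlookers at a time.
print(cost_per_guest(500, 25))     # -> 20.0 USD per guest

# The same animatronic built into a ride serving ~1800 riders per hour.
print(cost_per_guest(2000, 1800))  # -> ~1.11 USD per guest
```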


> the cost is really high per audience member.

Disney has problems with that. Their Galactic Starcruiser themed hotel experience [1] cost more to the customer than a cruise on a real cruise ship, and Disney was still losing money on it. The cost merely to visit their parks is now too high for most Americans.

It's really hard to make money in mass market location-based entertainment. There have been many attempts, from flight simulators to escape rooms. Throughput is just too low, so cost per customer is too high.

A little mobile robot connected to an LLM chatbot, though - that's not too hard today. Probably coming to a mall near you soon. Many stores already have inventory bots cruising around. They're mobile bases with a tall column of cameras which scan the shelves.[2] There's no reason they can't also answer questions about what's where in the store. They do know the inventory.

[1] https://en.wikipedia.org/wiki/Star_Wars:_Galactic_Starcruise...

[2] https://www.simberobotics.com/store-intelligence/tally


Similarly, I was talking with my then-wife, who is a Star Trek fan, about the Star Trek Experience in LV; she wasn't aware of it... we looked it up and discovered the next day was literally going to be its last day... so we got up at 4:30am and drove from Prescott, AZ to LV, spent the day there and drove back that night... I don't recommend doing this in a single day... It was definitely fun.

I'm not sure that a Disney experience needs to be much more/different than this... and even maybe having smaller experiences that are similar... 1-2 rides and a restaurant, exhibit and shop as a single instance... spreading the destinations around instead of all in a single large park. This would mean much lower operational costs per location, being able to negotiate deals at a smaller level with more cities, and testing locations/themes beyond a large theme park expense.

Just a thought. Of course, I did also go to a "Marvel Experience" that seemed to be a mobile experience closer to a carnival that set up and moved to different locations. That was kind of an over-priced garbage experience that I wouldn't have done had I known ahead of time what it was like.


> The cost merely to visit their parks is now too high for most Americans.

I always wonder why people say things like this. It’s as if we’re just regurgitating stuff that feels right. Humans and LLMs behave the same sometimes.

Disney World alone gets 50 million visits a year. Magic Kingdom tickets are like $150. That’s approximately the average American’s monthly cell phone bill.


I don't think it's incorrect to say it's too expensive for most Americans, even if there's still high traffic at the parks.

Disney has become significantly less accessible for the average family of 4. Aside from ticket costs, there's almost nothing free in the parks anymore... you have to pay for Lightning Lane passes for all the cool rides, there's minimal live entertainment, etc.

The demographics have significantly shifted. Only 1/3 of visitors now come from households with children under 18, and millennials and Gen Z have started taking frequent trips (friend groups, couples, etc).

So while they still get the same number of "attendance", the demographics have started to shift toward older, more affluent repeat visitors.

Source: https://www.businessinsider.com/why-disney-parks-top-destina...


The article you linked to indicates anything but how you’re portraying it.

First it talks about a young adult who goes there several times a year, sometimes with her parents, because it’s cheaper than traveling overseas.

Then it says childless people have more discretionary income than parents (duh).

The general population, also, has drifted toward older people without kids. 20 years ago nearly 50% of Americans had a child under 18. Now it’s under 40%. So this whole article just indicates that the population is shifting and Disney is adapting to it by making the parks more palatable to single adults.

“In the last year, 93% of respondents in a consumer survey agreed that the cost of a Disney World vacation had become untenable for ‘average families’”. And yet the statistics indicate that more than 7% of families actually likely did go to a Disney park. (Presumably even more could afford it but just went somewhere else.)

Which illustrates my point, this is a thing that feels correct but likely isn’t, and part of the reason it feels correct is that people regurgitate it factlessly.


> Magic Kingdom tickets are like $150.

What's the cost to travel there? To sleep? To eat? What's the actual experience like with that $150 ticket vs the options that are more expensive? Will you spend your entire day there waiting in line?


Those 50 million visits are the sum of daily visits across four parks, so it’s probably at most 30 million people. Even if they were all American (they aren’t), that’s like 9% of the population.

The average cell phone bill you cite is for more than one person.

I think it’s entirely fair to say that “most” Americans would find it too expensive to visit Disneyworld.


Estimates put the percent of Americans who actually HAVE been to Disney north of 75%. So it would seem unfair to say most find it too expensive, most have done it.

30 million uniques at one Disney location (there are two in the country, I think the other one increases that to at least 40 million, or roughly 12% of the entire population) per year is pretty high so that stat isn’t unbelievable. I’m sure not everybody can afford to go there every year.


The “average American” doesn’t have $600 for an emergency.

Also, your “cell phone bill” number is only good if you live within walking distance of Disney World, and pack your meals.

and go alone.


That’s also a drastic misstatement that illustrates what I’m talking about. A poll showed that the average person’s specifically designated “emergency savings fund” is $600. Many people have lots of money but don’t specifically refer to some of it as an emergency fund.

Also thanks to credit one does not need to have $600 to spend $600. That’s why we’ve got so many people with no savings.


You’re still missing the part of your comment where you convince us Americans have expendable cash.

Not everyone is you.

> Many people have lots of money

is a gross exaggeration.


Somewhere between 70 and 90 percent of Americans have actually been to a Disney park. Does the fact that the vast majority of people have done something not prove that most people can afford it?

I’m not sure why the burden of proof falls not on the original comment (most Americans can’t afford to go to Disney) but rather the person asking for proof, but here you have it anyway.


Doing something once in a lifetime is far different than being able to go regularly or even every few years. Also, $150 each is just for the ticket into the park... you still need quite a bit more for food and drinks for the day and souvenirs. That also doesn't cover travel and hotel arrangements... For a family of 4, I'd be surprised if it didn't cost closer to $2500 for a Disney trip. If your family only earns the average national family income, that's a significant expense after housing, car(s), food and other bills.

So a family might have gone once, but that doesn't mean they can do it anything resembling regularly. I went to Disneyland once as a kid (around 8yo)... the only time my family went growing up, and I haven't ever been back... My sister went as a young adult every year until she had kids; since then it's been every few years... but she and her husband are doing much better than the typical American family.


How many adults went to Disney in a wildly different economy doesn't prove the point you're looking to make.

We probably won’t authoritatively prove anything, here - we’re just comparing our own world views and anecdata.

Hopefully you’re okay with that:

https://fb.com/reel/1540171337151246


But that’s the point. I didn’t make an unprovable assertion, I called someone out for doing so. I haven’t made a single point based on my own experience or anecdotes either.

People say things that “feel right”. This is a left leaning community, when the right is in power everything is a dumpster fire. Over on the right wing communities, the opposite is true.

None of it means anything. Data is the guide post.

See the link you just sent me which is people at Disney World who cannot afford to be at Disney!


They talked about their (unaffordable, laughable) underwater car payments as well.

I think we might be agreeing with each other with different words.

People are still going to Disney.

Whether they can afford to or not has almost nothing to do with it.

