Bob Metcalfe, self-professed “conservative hippie,” inventor of Ethernet, founder of 3Com, and of Xerox PARC fame, has this to offer:
> One of the few things that government should do is finance research, because — I have learned from many years — the only companies that can afford to do research are monopolies. Real companies cannot afford to do research other than monopolies. And there are some famous ones. The telephone monopoly — Bell Labs. The computer monopoly — Watson Labs. The copier monopoly — Xerox PARC. And on it goes. In retrospect the monopolies aren't worth it for the research they do. It's nauseating how much we hear about how cool Bell Labs was, but other than the transistor and Unix and the princess telephone, what did we get for all that money? And then for years AT&T as a monopoly sat on innovation — and IBM after that, and Xerox after that — it's just not worth it, so let's kill those monopolies. And if we need research, have it done at research universities. And the other spin I would offer there: as a practitioner of technological innovation, I worry about technology transfer — how do you get technology transferred from the lab into the marketplace. And the best way to do that is with people; and it is the business of universities to graduate people. So let's do our research there, and I think the ARPANET is a great example where government financed the research.

Source: https://youtu.be/zKz07DdaKzw?t=3772
> other than the transistor and Unix and the princess telephone, what did we get for all that money?
Bell Labs may not have been able to commercialize all of its discoveries, but humanity certainly got a lot more than Unix and the transistor. Quote [1]:
"Researchers working at Bell Labs are credited with the development of radio astronomy, the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), information theory, the Unix operating system, and the programming languages B, C, C++, and S. Nine Nobel Prizes have been awarded for work completed at Bell Laboratories."
Bell Labs was also a training ground for a great many people who went on to make highly impactful discoveries [2], from Hamming to Bengio.
> Bell Labs may not have been able to commercialize all of its discoveries, but humanity certainly got a lot more than Unix and the transistor.
This point illustrates that human benefits and profit do not always align. If we want to focus more on making things that benefit humanity, we have to change our system that runs on profit.
It's true that human benefits and profit don't always align at all points of contact. They do align at some.
Hence I don't think it's true that for-profit systems, even when they are not focused on "making things that benefit humanity," fail to benefit humanity. Bell Labs clearly did, even though it had a concomitant commercial objective.
Conversely, it is also not true that not-for-profit systems necessarily produce human benefits -- sure, they can, but not always.
I worked in a university applied research setting for close to a decade before joining industry. I can tell you the incentives and pressures to produce something that "works in practice" and is "commercializable" (whether it is actually done or not) are much greater in industry. Not-for-profit environments are better for long-range explorations with high failure rates and uncertain outcomes, i.e. basic research. For-profit environments are better at the discipline of producing applied knowledge (whether it is actually applied or not).
That said, Germany has a hybrid model, the Fraunhofer model (the Fraunhofer Society's institutes do applied research and are about 70% funded by industry/government), which produced the MP3 standard. So there are alternatives. But I'm not convinced the industrial R&D model is fatally flawed.
I just finished reading Mitchell Waldrop's book The Dream Machine, which is largely about J.C.R. Licklider, and it has a lot of stories about the things Metcalfe is talking about during the '60s and '70s and into the '80s. It's a long, detailed book and I recommend it.
After finishing the book it made me feel like progress has slowed tremendously. Look at the changes from 1960 to 1970 or from 1970 to 1980 and compare that to 2010 to 2020. The last decade produced lots of incremental improvements but where are the giant technological shifts?
I was born in 1970, and if I look back over my life, the giant change was going from a non-networked world to a networked world. I think it's similar to how my grandparents could remember the world before and after commercial flight, or how my great-grandmother remembered the arrival of automobiles.
I wonder what it is my kids will point to 40 years from now. A colony on the moon or Mars maybe?
I wasn't around in those earlier decades so I don't have that perspective, but I do remember the transition from dial-up to TracFones to smartphones, and IMO the change hasn't slowed down; it's just become more distributed.
I recently designed a PCB for a custom-built 3D printer for a tenth of the cost (fab + assembly) of a board of similar complexity I made nearly a decade ago, and the final cost of the 3D printer (based on an open-source design) was a twentieth of the price of the Stratasys Dimension Elite I purchased, also a decade ago. One of my next projects is to make a DIY semiconductor fab using just TI DLP micromirrors and a microscope, because I want to experiment with DIY bio microfluidics - all based on technology and knowledge so common that it has trickled down to the hobbyist level. We've now got people building electron microscopes and genetically modifying organisms in their garages!
The robots I've worked with in clinical diagnostics are much more advanced now than they were a decade ago and are now cheap enough for the average lab. The metrology instruments and machining tools necessary to make high-precision parts have become commonplace at manufacturers, factory automation robotics like 6-DoF arms have trickled down to the researcher and hobbyist level in the last 5 years, and agriculture is currently going through its n-th revolution with software, remote sensing, and data science.
Every time I leave an industry for a few years and come back, the pace of change is staggering - it's just hidden behind the iron curtain.
I didn't think of CRISPR and related genomics work. It definitely has potential to change the world like commercial flight or the internet, but I don't think it has yet...
Definitely something I need to keep thinking about. Biology probably is the next area to see some giant advances like physics did 80 years ago.
I think the most recent big shift, on a society level, was taking that networked world from our offices and homes to mobile devices. It's not exactly between 2010 and 2020 but definitely between 2000 and 2020.
I'd say it started in a relatively primitive way around 2000 with mobile WAP, fairly widespread texting, etc. You then had Blackberries and Treos. It was around 2010 though that the iPhone (introduced 2007) was really taking off.
Genuine question (asking because I have a biased perspective): why don't you consider advances in AI to be a major shift? In the past decade, there has been a rapid increase in capabilities in computer vision, language understanding, robotics, etc.
The technology has advanced a lot but what major changes have these advances in AI really made to the average person's life? I can't think of any that are anywhere near as dramatic as the invention of cars/air travel/the internet/smartphones.
AI researchers have been over-promising and under-delivering for 70 years now.
It seems like AI has settled into making discriminators. Maybe intelligence really is just pattern matching and grouping? I don't think it is, but I could be wrong.
Many cognitive scientists would argue that it's also about building models, which includes statistical/Bayesian models but goes beyond them.
To your broader point, it's not hard to find some specific domains of machine learning where there have been huge advances. But zoom out and look at the broader picture and the results aren't so impressive.
It doesn't seem that AI and robotics over-promise. Sure, we're not at AGI, but if you look at the potential (i.e. startups) to replace jobs, there's a ton of it.
The same was said of expert systems in the 1980s. Yes, there's potential, but that potential hasn't been realized yet. At least not to the point where people compare the impact of AI to the impact of semiconductors and integrated circuits.
Maybe, given the many startups doing real work with machine learning (besides the many that just hype), it's reasonable to guess that the potential will be fulfilled.
Metcalfe offers a very interesting conservative perspective. The article IMHO has a more convincing conservative counterpoint:
> Labs, compared to university researchers, also maintain a constant link with delivering value and, ultimately, profitability. University incentive, prestige, and funding regimes suffer from the standard problems around non-profits: if you’re not trying to make profits, what are you trying to do? How do you know that what you’re doing is socially useful? Working without a profit signal can lead to deeply broken incentive systems and extremely wasteful admin burdens that weigh heavily on productivity, along with research that is in no way tied to improving human lives. Some estimates suggest that university scientists spend just a third of their time on active research. Historically, labs have seemed less prone to this problem.
My response has some holes in it, but my view there is that profit/value has a pretty good feedback mechanism that social good doesn't. If you are a normal business, your customers basically give you a mandate to continue, maybe even grow. It may be a substandard metric to control for with regard to society, but at least it is a control system. The mechanics of social benefit feel a lot more complicated; I don't really know how to sum them up in one way without just saying, "the government pays for things it finds beneficial".
The issue is that a valuable endeavor doesn't have to be good, e.g. maybe social media.
This will be a bit poorly thought out, but the response goes like this: why invest in corporate R&D with uncertain returns when you can invest in index funds with guaranteed return?
The stock market is supported by ample government intervention, the rest of society not at all. Just consider the major stock market indices and compare that with continuing unemployment claims.
Now you have a feedback mechanism that supports investment banking at the expense of everything else, and here we are.
Have you looked at what the Federal Reserve has been buying since March? Compare that to the extra 600 dollars in unemployment benefits that ran out last month, and also consider the complete fuckedupness that the public health response has been in large tracts of the nation.
There's "too big to fail" and network effects, can't let Bank of America or Boeing go bankrupt and wipe out shareholders, especially pension funds are major shareholders. There is so much government backstop going around that returns are certain.
Compare that to bringing a new pharmaceutical compound into the clinic. The time from discovery to approval is ten years, with a billion dollars spent over the lifetime of the project, and the success rate is ~30%; most compounds fail. For innovative projects this is compounded by the fact that you need to do target discovery and validation, something you can't patent but every competitor will make use of. No surprise the pace of innovation has slowed down.
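A rough back-of-the-envelope sketch of what those figures imply, using the numbers above plus an assumed discount rate (all illustrative, not actual pharma economics):

```python
# Back-of-the-envelope sketch using the figures quoted above
# (ten years, ~$1B per project, ~30% success rate). The discount
# rate is an assumption, not a real industry number.
cost_per_project = 1_000_000_000   # $ spent over the project's lifetime
success_rate = 0.30                # fraction of compounds that reach approval
years_to_approval = 10
discount_rate = 0.08               # assumed cost of capital

# Expected spend per approved drug: the failures must be paid for too.
expected_cost = cost_per_project / success_rate

# Revenues only start ~10 years out, so compare a dollar of revenue
# then with a dollar of R&D spend today.
present_value_factor = 1 / (1 + discount_rate) ** years_to_approval

print(f"expected cost per approved drug: ${expected_cost:,.0f}")
print(f"$1 of revenue in year {years_to_approval} is worth "
      f"${present_value_factor:.2f} today at {discount_rate:.0%}")
```

Paying for the failures roughly triples the effective cost of each approved compound, before even discounting the decade-long wait for revenue.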
Crumbs - I've never heard anyone say "returns are certain" before everyone gets stripped, ever, in any financial setting.
Huge chunks of the indexes are now made up of a small handful of stocks, and as those stocks (the FAANGs) go up, the indexes buy more, and guess what: they go up. And this will carry on, and on, until at some point a structural failure occurs. This may well be fraud, it may be government (not even the US government) intervention, it may be a supply chain failure or a market destruction. For example, in November there may be a civil war in the USA... At that point the stocks will fall, and if they fall disproportionately (which they will if the artificial pump that is floating them up turns off) then the indexes will unwind their positions... in fact they have to unwind. And this will glut the market, which will force more unwinding.
This is made worse by how the indexes actually track the market - you see, guess what! They (often) don't actually buy the stock! They trade synthetic securities over the movements of the stock with counterparties... and these counterparties never ever ever fail - they are solid players like investment banks, like Bear Stearns and Lehman Brothers, and Merrill Lynch and RBS. So - safe as houses.
So making a profit most of the time means that you have 100,000 customers whose lives are better off because of what you're doing.
Sometimes this breaks down because you make other people's lives worse in order to make the 100,000 better, i.e. externalities.
But continuing to get funding for doing a social good means you're convincing someone that you're making someone else's life better (i.e. nonprofits). The problem is that sometimes this convincing is entirely unrelated to actually making people's lives better.
Or you're convincing a bureaucrat that you'll make his life better. And hopefully the incentive structures are aligned such that this involves making someone else's life better. But this can be incredibly tricky with the layers of reporting up to some political appointee who reports to a politician who reports to the people. And any break in incentives, or more likely attention, along the way can misalign the work with social good.
It's pretty close. The idea that profits and social good are opposites was tested to destruction by communism. Profits mean that at least some people, somewhere, find your work more than worth the effort in a very real sense (they gave up some money to get it).
Social good is undefined, so it ends up meaning whatever the speaker wants it to mean. Lots of university professors think critical theory is socially good, and lots more people strongly disagree. There's no actual way to resolve that dispute because academia is disconnected from any kind of reality check.
The whole interview (multiple hours) is fascinating, like a good episode of "On the Metal". Heavy recommend for people interested in CS history and Innovation.
In the US at least (less so here in the EU) a ton of scientific research is funded and performed with funds earmarked for national security, health and defense (often just called the military budget). That money goes to way more than just pure military though.
I could not agree more with this. I'm very concerned because China has been ramping up its R&D spending while the US has been decreasing, at a time when we need to be increasing it.
But it really is like that. It depends on the groundwork that's there: technical capabilities and tooling. Once that's in place, it only takes a lucky guy to pluck the discovery out of the air.
Consider dual-space methods in crystallography. Herb Hauptman is credited with the discovery. My undergraduate advisor obliquely mentioned that he had done something similar in 1986 - obtain atom positions from direct methods, then do a few cycles of tangent refinement from the highest peaks. It reduces map noise considerably.
Computers were simply too slow at that time to use this as a general method of solving 1000-atom structures, so the approach remained an afterthought. But the possibilities were noted once the computing power was there.
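For readers who haven't met these methods, here is a deliberately toy 1-D sketch of the dual-space idea. This is not Hauptman's actual Shake-and-Bake algorithm, and the tangent formula is replaced by a simple re-phasing step; it just shows the alternation between an atom-like constraint in real space and the measured amplitudes in reciprocal space. Real programs score many random trials with a figure of merit.

```python
# Toy 1-D dual-space recycling: NOT real crystallographic software, just a
# sketch of the alternation described above.
import numpy as np

rng = np.random.default_rng(0)
n, n_atoms = 256, 6

# "True" structure: a few point-like atoms on a grid.
true_density = np.zeros(n)
true_density[rng.choice(n, size=n_atoms, replace=False)] = 1.0

# The experiment only gives Fourier amplitudes; the phases are lost.
amplitudes = np.abs(np.fft.fft(true_density))

# Start from random phases.
phases = rng.uniform(0.0, 2.0 * np.pi, n)
density = np.fft.ifft(amplitudes * np.exp(1j * phases)).real

for cycle in range(200):
    # Real-space step: keep only the strongest peaks (atomicity).
    cutoff = np.sort(density)[-n_atoms]
    model = np.where(density >= cutoff, density, 0.0)

    # Reciprocal-space step: take the model's phases (the slot the tangent
    # formula fills in practice) and re-impose the measured amplitudes.
    f_model = np.fft.fft(model)
    phases = np.angle(f_model)
    density = np.fft.ifft(amplitudes * np.exp(1j * phases)).real

# Crude agreement measure between the model and the measured amplitudes.
r_factor = np.sum(np.abs(np.abs(f_model) - amplitudes)) / np.sum(amplitudes)
print(f"R-factor after recycling: {r_factor:.3f}")
```

Each cycle is a pair of FFTs over the whole map, which is exactly the kind of work that was prohibitively slow for 1000-atom structures on 1980s hardware.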
The problem is, that lucky guy can take an indefinitely long time to show up.
I heard recently about the Scythe Project. Apparently, the scythe was never invented in some places, including India. That's a long time for the lucky guy to never show up.
Probably yes. But suppose the transistor had been invented 20 years later by a university. The world would look completely different today. Simply saying we could have invented it some other way disregards the continuing impact a research result makes.
> No one is quite sure why the lab model failed. It’s obvious that a scenario where Xerox is paying scientists to do research that ultimately mostly benefits other firms, potentially even competitors that help to put it out of business, could never survive.
I don't agree with this. The lab companies had gotten really good at attracting some of the smartest people (in academia and elsewhere), designing models to motivate them to invent while giving them tremendous latitude, and then profiting from their discoveries.
See people like Wallace Carothers[1], a chemist DuPont hired out of Harvard and let run a lab. He ended up inventing nylon, which was obviously tremendously profitable for DuPont. So the model did work.
I don't have data for this, but I think what actually happened was just a lot more short-termism as investors, companies, and executives focused more and more on short-term profits, so large recurring R&D expenses that might require longer horizons to recoup investments made less sense.
This reads a lot like this blog post [0] that was shared here a while ago, the discussion is quite interesting as well [1]. Although I appreciate the additional background and citations.
Time is money on different terms for different people.
So science costs money.
How much science would you like?
>the tension between managing scientists with their own pure research goals in such a way that they produce something commercially viable, while still leaving them enough latitude to make important leaps, seems huge. But these problems were always there in the model. What is harder to identify is an exogenous shock or set of shocks that changed the situation that existed from the 1930s until somewhere between the 1960s and the 1980s.
Not that hard.
During this period the US dollar had been tied to gold at $35 per ounce while Americans and their firms were prohibited from owning or speculating in gold.
The strength and stability of the dollar gave confidence in what it would purchase in the future. This is very important if long-term projects are to be considered. Especially if the projects themselves have uncertain outcomes. Which is actually supposed to be intentional for research.
It was pretty good knowing that 35 dollar bills in your wallet would get you an ounce's worth of gold, even if you were only allowed to own a limited amount of it, and only in jewelry form.
Then one day Nixon comes along in the early 1970's and destroys the currency by fiat.
No longer backed by gold, the dollar floats. Like the Titanic.
Gold can't float; it was the backing, realistically, since before biblical times.
As rapidly as possible, as many Americans as possible, and crucially other targeted dollar holders, found their dollars worth only about half of what they had been a year or two earlier. From the top corporations to the everyday citizen, it was devastating.
With wage and price freezes, the middle class was bumped right down to lower-middle, with downward pressure going forward.
The rich were still rich, but only half as rich. So their losses were actually very significant and added more pain to the economy as a whole. Even in some of the biggest corporations the cutbacks were huge. There was no way they were going to be able to afford research again until they could get as much of it for $35 as they did when it was really working.
There was no recovery from the Nixon Recession.
The malaise continued and by the 1980's even the most well-insulated from the initial shock had adjusted to the _new normal_.
Never again were Americans going to be able to fit enough dollar bills in one wallet to purchase an ounce of gold, not even close.
Never again would the dollar be trustworthy enough to accomplish as legendary a form of scientific progress.
It would have to be done in some other way on some other terms for labs that take longer to build than the dollar can be expected to remain stable, with no risk of further losing value.
Inflation doesn't change the value of an investment. It increases the price that the new products fetch as well. You need to look at rates when trying to explain this, and rates and inflation have been low for decades without a return to major industrial R&D.
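A minimal numeric check of that point, with made-up numbers: if inflation raises the prices future products fetch at the same rate it erodes the dollar, it cancels out of the project's present value, and only the real rate is left.

```python
# Minimal sketch: inflation inflates future revenues too, so it cancels
# out of the NPV and only the real rate matters. All numbers are made up.
real_cash_flow = 100.0      # today's-dollars payoff, received each year
years = 10
inflation = 0.05
real_rate = 0.03
nominal_rate = (1 + real_rate) * (1 + inflation) - 1   # Fisher relation

npv_nominal = sum(
    real_cash_flow * (1 + inflation) ** t / (1 + nominal_rate) ** t
    for t in range(1, years + 1)
)
npv_real = sum(
    real_cash_flow / (1 + real_rate) ** t
    for t in range(1, years + 1)
)

print(round(npv_nominal, 2), round(npv_real, 2))   # identical
```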
I remember watching as it died a painful death, not without dismemberment, and as we have seen, there have been no rates capable of bringing it back to life since.
Inflation occurred after people had been made too poor to even continue some projects, much less start new ones.
Everyone who really made the big science possible never got rich enough to do it again.
I think the big driver for a lot of these research labs consists of two parts: the company having a lot of money (many times from having a monopoly), and a desire to keep a bunch of smart people from going to any potential competitors. They really didn't need the smart people to make any money; they just needed to keep them locked away from competitors. The best way they could do that was to have these research labs that provided decent money and a lot of support and resources to pursue their intellectual curiosity.
In addition, given that capital funding was a lot harder to acquire back in the day, it was about as good a deal as a smart person could get.
Nowadays, a couple of things have changed. First, a corporate job is no longer expected to be lifelong. In addition, if you have a brilliant idea, it is a lot easier to get funding and do your own startup for that idea, and maybe get acquired by one of the big companies.
Having talked to a lot of old folks who worked in those labs, I gather the tax breaks were a huge incentive too. Lowering the corporate tax rate since the '60s has meant that corporations no longer need to invest in R&D; they just pay accountants instead.
Usually, taxes incentivize companies to spend their profit on making the company better - buy new machines, pay the workers more, invest in R&D. This way, they reduce their taxable profit by spending the money on themselves. The companies profit from getting better, and the state (or society) profits from growth and innovation.
Now, with reduced taxes, but even more importantly tax evasion and offshoring, this incentive is gone; companies hoard money and pay out to the shareholders.
And that is just lazy. Giving money to the shareholders is like telling your investor that their money is better spent elsewhere.
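A toy illustration of the old incentive, with made-up tax rates and ignoring credits, amortization rules, and so on: because R&D spending is deductible, the after-tax cost of a dollar of R&D shrinks as the corporate rate rises.

```python
# Toy illustration with made-up rates; real tax treatment of R&D
# (credits, amortization schedules, etc.) is more complicated.
def after_tax_cost(rd_spend, tax_rate):
    """Net cost of deductible R&D spending after the tax saving."""
    return rd_spend * (1 - tax_rate)

rd_budget = 1_000_000
for rate in (0.50, 0.35, 0.21):   # illustrative rates only
    print(f"tax rate {rate:.0%}: $1M of R&D effectively costs "
          f"${after_tax_cost(rd_budget, rate):,.0f}")
```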
Tax incentives are also responsible for the return of the open office plan since the 1960s, as depreciation rules regarding buildings and furniture changed.
In addition, you have people believing that Milton Friedman's biggest lie is law, namely that maximising shareholder value (which is often interpreted to include dividends) is a fiduciary duty of the corporation. (It's not.)
You are misrepresenting what is called the Friedman doctrine. He didn't claim it was the law of the land. As the Wikipedia article states:
>...The Friedman doctrine, also called shareholder theory or stockholder theory, is a normative theory of business ethics advanced by economist Milton Friedman which holds that a firm's main responsibility is to its shareholders.
> This shareholder primacy approach views shareholders as the economic engine of the organization and the only group to which the firm is socially responsible. As such, the goal of the firm is to maximize returns to shareholders.[1] Friedman argues that the shareholders can then decide for themselves what social initiatives to take part in, rather than have an executive whom the shareholders appointed explicitly for business purposes decide such matters for them.
Thanks for the clarification. When you wrote "Milton Friedman's biggest lie," the implication to the reader is that Friedman was deliberately saying something is true when it is not. As the Wikipedia article says, Friedman did not say that maximizing returns to shareholders is recognized as a fiduciary duty; he was simply saying he thought it should be. As a normative theory of business ethics, people can agree or disagree with the idea. (Just as, in a different context, if someone argues prisons should focus on rehabilitation and someone disagrees, that doesn't mean the first person lied.)
By making it so that furnishing a building is much cheaper than other aspects, the cost structure started to "disincentivize" making smaller offices versus plopping down big halls.
Also, R&D often needs a long time. I guess that's also a reason. At my company (~300 people) we have a project that has run for 10 years or so, and we almost, but still don't quite, have a product yet. You really have to want to make this investment in R&D, and your shareholders do too. In our case, we have no shareholders. I guess it would be pretty difficult to present such a project to shareholders for so many years.
I mean, giving money back to investors is the foundation on which the entire stock market is built. The whole original point of investing in a public company was to get dividends. After all, if you're not entitled to a share of the profits, what's the point?
Of course this has all changed, with many companies not paying dividends. But saying that a company shouldn't return money to the investors through either buybacks or dividends is kind of ridiculous.
> a desire to keep a bunch of smart people from going to any potential competitors...they just needed to keep them locked away from competitors
Companies still do this today, they just go about it differently. Google started Google X, Cisco has (had?) spin-ins, functionally everyone large enough has an unofficial rest and vest program.
Like another commenter said, it just seems like the financial incentives don't lend themselves to formal research orgs anymore.
I worked at a large industrial R&D lab around 2010 and it was awesome. It was not pure R&D, because the lab was funded by various business lines as “clients”, which sometimes came with strings attached, but as a model it worked pretty well.
By my estimation about half of the work was real fundamental research with no obvious immediate market or clearly guaranteed business value. There was some pressure to “sell” this work internally after the fact, but that usually ended up making it better.
The other half of the work was more directly commissioned by various business lines. They were responsible for product development, but would enlist us to look into 3rd wave innovation and technology (which we would be experts in because we spent the other half of our time on it).
The mixed model kept the lab grounded in reality, but also created enough freedom to do real R&D.
Well, I think a major factor is that with less friction for workers to move between countries / industries / companies, it no longer pays for companies to cultivate their talent from within.
Either the talent will leave, or it is easy to find talent by hiring from elsewhere -- where, by the way, the work to cultivate and support the talent has already been done for you. All you're doing is applying the filter / picking the top candidates to choose from.
Back when there was much more friction and lack of info for people to move about between good opportunities, you were tied to your employer and a better bet to invest R&D money in. Not now. Except for a few select industries where there are artificial barriers put in place to prevent this. (Certification, unions, national security, etc)
> Well, I think a major factor is that with less friction for workers to move between countries / industries / companies, it no longer pays for companies to cultivate their talent from within.
The fall of R&D labs also largely correlates with repeated corporate tax cuts: R&D labs were a way for corporations to lower their tax burdens while investing in themselves, which benefited society as a whole (either because the corp produced better value, or because others found avenues for that research). Today they're much better off paying accountants.
It's also a bit rich criticising workers when companies were the ones to break the "social contract" and start throwing employees out on the street with little to no warning or recourse.
This assumes that your employees will just leave even if you value them fairly. In truth it's the other way around: Companies don't care for their people anymore (as seen in payment, investment in their career / training, not firing on the spot etc.) and people respond to these incentives. I also think it has nothing to do with movement between countries. The same "problem" (more like a self-inflicted wound) exists within countries if you have competitors.
> with less friction for workers to move between countries
While I agree with regard to companies, I disagree with regard to countries. The equivalent for a nation is education, which would mean every nation would be happy to accept any immigrant who can speak the local language while also doing whatever they can to prevent people leaving. The higher the education, the more this matters, but even K-12 equivalents are ~13 years of state support. Potentially more if they were born somewhere where the state pays for early childhood and p/maternity.
I think most of the changes to corporate R&D in the past 30 years closely reflect the shift in focus toward short-term profits, thereby reducing stockholder will to invest in high-risk new things like quantum computing or single-cell genomics.
At the same time, the cultural rise of tech startups has encouraged corporate R&D heads to farm out much of their interest in cutting edge tech to underwrite those startups from afar. By investing in startups from funds separate from mainstream R&D, it's easier to lay blame for failures outside the internal R&D organization.
I've worked in big pharma R&D for 15 years and gov't/military R&D for 20 years before that. Increasingly I see the forms of R&D diversifying and extending outside the org's primary lab, just as 100% of DARPA projects are fielded external to the DoD labs. As external R&D investments and partnerships rise, it's little wonder that primary R&D labs shrink and their focus shifts to better serve short term RoI like improving / debugging internal processes and inefficiencies.
Buying off-the-shelf technology works if you want the cookie-cutter solution (which, don't get me wrong, is fine for a lot of things).
Now, newsflash for the MBAs: if you just sell the same thing as everybody else, you're just competing on price (and maybe marketing/sales channels). Otherwise you need to get your hands dirty. (Though apparently there's still money to be made in dropshipping, until people realize everything was already on DX/AliExpress.)
But oh, thanks to the wise masterminds of Business Administration, everything is on a shelf to buy somewhere, and what isn't doesn't exist. R&D? It's a cost center.
The argument being made here, that industrial research labs have disappeared because they did not produce enough return on investment, ignores competing sources of ROI. Today, the easiest way to make a lot of money is in finance, not discovering new things. Other stories on HN have bemoaned how Boeing has destroyed itself by shrinking engineering and expanding cost cutting. As is pointed out in another comment, if the Fed is subsidizing the stock market, finance is the place to be. Research not so much.
Corporations are basically unable to do R&D for a whole bunch of reasons. That's why we have all sorts of startups and angel investment instead. Once an idea is half proven, corporations can come in, finance it, run it effectively, market it, etc., while also paying off the people actually taking the risk. This is a much better, more efficient system for everyone.
How is that better for R&D scientists and engineers? They now have to come up with a possible technology progression, secure funding, build out the tech using some secondary business model, and then convince the corp to buy their company. It only seems more efficient for the corporation.
Also, I imagine that getting funding from a pool of VCs (looking to fund long shots, with some expertise in the field maybe) is easier than from a single CEO who happens to run the place and has to justify all this on quarterly earnings statements etc...
Part of the fall of the R&D lab is high-profile inventions like the silicon transistor, GUI, and digital camera that the corporation failed to capitalize on. Part of why you do all this research is to find the next big thing, but if you can't see the potential of these inventions and turn them into businesses, what's the point?
This is a good article but I'm not sure it completely nails it.
Firstly, it's not clear the industrial R&D lab has actually fallen. The article starts to engage with this at the end but doesn't really do so properly. It's easy to find examples of firms doing large scale expensive R&D:
- Many firms making big investments in AI
- Self driving cars
- SpaceX reusable rockets, plus all the other Musk firms
- Advanced database technologies (e.g. at Microsoft, Google, Amazon)
- Advanced compiler R&D (Oracle/GraalVM)
- The huge sums thrown around in pharma R&D
- Advanced graphics R&D is driven primarily by video game firms
- Intel/AMD/ARM/etc in CPU tech
- Tons of R&D in oil and gas
and those are just examples we're most familiar with. Are these firms running "labs"? No, not in the sense of Bell Labs. That's because this model is not the most efficient model available, and has thus mostly been phased out. Corporate R&D is much better integrated with actual product development than the standalone lab model captures. In the most successful cases we don't even really perceive it as an R&D lab because the entire company is doing R&D constantly, in a completely integrated manner.
Google is perhaps the clearest case of a company that has completely integrated R&D with its operations. There is an org called Google Research but IIRC it existed mostly for a trophy hire and isn't especially important: the bulk of actual research is/was done by product teams as an integrated part of their development process. There is no lab, yet at the same time, the lab is everywhere.
The second aspect is that it doesn't discuss the role of personality or personal visions of CEOs at all. Yet this is often critical to the decision to do R&D.
Why does Google research AI? It's not because it's an obvious, slam dunk move. Google has been doing ML research since the very start of the firm, but other search engine firms didn't. Why did Google research Google Glass? Or self driving cars? Google researches these things because Larry and Sergey thought it was cool. Page has even claimed in the past Google was founded specifically to do AI research (which isn't really right, but whatever).
What about SpaceX? Tesla? The Boring Company? Neuralink? These firms exist and do R&D because Elon Musk is a futurist. He doesn't do R&D for economic reasons, let alone for regulatory reasons. He does it because he wants to invent the future.
But this type of personality is rare. Our society is not kind to futurists. It doesn't really value them. They're seen as strange or dangerous; the story of inventing new technologies is usually a story of being attacked on all sides, by journalists or politicians or regulators or all of them at once. That's why most companies set up "innovation teams" - the exact opposite of the approach taken by the most innovative companies. They're little silos where people are taken and put in a corner, then told "go innovate". The non-technical CEO has no interest in the innovation team and likely never talks to them directly, let alone gives them direction or ideas. The team members know this and usually just mimic whatever trends they see around them (IoT, blockchain, AI, cloud, etc). Their projects are usually doomed from the start.
In the end, for research to be valued people with the power to deploy it must be excited by technology, not intimidated by it. And that isn't most corporate or government leaders today.
> Advanced graphics R&D is driven primarily by video game firms
Just going to speak from my experience here; this makes it very hard to get into computer graphics research. Any academic won't want you working on stuff that's "funded by industry" rather than funded by NSF and DOE. But you won't be able to get an industry research position unless you did graphics research during schooling. You can sometimes make a case with HPC and CUDA work, but I've found it to be a tricky dilemma.
Surely the issue there is academia? Some universities do publish papers in collaboration with industrial researchers, why do academics block that in graphics?
Industry blocks it. They want to keep all advances as IP and trade secrets, so they’d rather all research was done in house. Academics would love to collaborate, but they can’t if all the research is under NDA and not publishable.