Maybe I'm missing the point as well, but what did it do wrong?
It seemed like you wanted to see if a search tool was working.
It looked to see. It tried one search on data source KJ and found no matches. The next question: is the quote not in there, is the quote mis-remembered, or is there something wrong with the data source? It tries an easier-to-match quote and finds nothing, which it finds odd. So the next step in debugging is to adopt the hypothesis that the KJ Bible datasource is broken, corrupted, or incomplete (or not working for some other reason). So it searches for an easier quote using a different datasource.
The next bit is unclear because it looks like you may have interrupted it, but it seems it found the passage about Mary in the DR data source. So by elimination, it now knows the tool works (it can find things) and the DR data source works (it can also find things), so it's back to the last question of eliminating hypotheses: is the quote wrong for the KJ datasource, or is that datasource broken?
The next query (and maybe the last I would do, and what it chose) was to search for something guaranteed to be in the KJ version: the word 'Mary'. Then scan through the results to find the quote you want, then re-query using the exact quote you know is there. You get 3 options.
If it can't find "Mary" at all in the KJ dataset, then the datasource is likely broken. If it finds "Mary" but the results don't contain the phrase, then the datasource is incomplete. If they do contain the phrase, then search for it; if that search fails, you've narrowed the issue down to "phrase-based search seems to fail". If it does find it, and it's the exact quote it searched for originally, then you know search has an intermittent bug.
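The elimination steps above can be sketched roughly in code. This is a hypothetical illustration, not the actual tool: `search(source, query)` and `diagnose` are assumed names, and the real agent's behavior is of course more involved.

```python
def diagnose(search, target_quote):
    """Hypothetical sketch of the three-way elimination described above.

    `search(source, query)` is assumed to return a list of matching passages.
    """
    # Step 1: search for something guaranteed to exist in the KJ version.
    results = search("KJ", "Mary")
    if not results:
        return "datasource likely broken"      # can't even find 'Mary'

    # Step 2: scan the hits for the passage we actually wanted.
    hit = next((r for r in results if target_quote in r), None)
    if hit is None:
        return "datasource incomplete"         # 'Mary' hits lack the passage

    # Step 3: re-query with a quote we now know is present.
    if not search("KJ", hit):
        return "phrase-based search fails"     # known-present quote not found
    return "search bug is intermittent"        # the quote was findable after all
```

With a toy in-memory corpus, each branch of the elimination can be exercised by varying what the fake `search` returns.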
This seemed like perfect debugging to me - am I missing something here?
And it even summarized at the end how it could've debugged this process faster. Don't waste a few queries up front trying to pin down the exact quote. Search for "Mary" get a quote that is in there, then search for that quote.
This seems perfectly on target. It's possible I'm missing something though. What were you looking for it to do?
What I was expecting is that it would pull up the KJV using the results returned from the wiki_source_search tool, instead of going to a totally different translation and then doing a text match for a KJV quote.
What a terrible reply to an interesting and genuine comment.
> but this is such a perplexing comment.
There is nothing perplexing about the comment it's extremely straightforward.
> You... seem to be upset that it leaves out some subjects.
It doesn't leave out "some" subjects, it leaves out a ton of subjects, which OP rightly raises. Just about every subject on maintenance.
> without shaming the author for omitting some subjects of your choosing.
The book's title contains the phrase "Maintenance: Of Everything"! These aren't a few obscure specialty subjects that were left out. It left out just about everything, and OP lists some extremely notable omissions. It also calls out important topics for society that have previously been undervalued and appear to be undervalued here.
> How does one get upset that an author didn't include handwashing instructions in a book?
Do you not realize the importance that maintaining hygiene has played in shaping modern society? Posting such an insultingly dismissive reply, with a comment like "didn't include handwashing instructions", is absurd.
I'd genuinely want to understand why we have such a different understanding of that comment.
Surely the title can't be taken literally, otherwise the book would be the size of wikipedia, no?
I didn't say the topics left out were obscure, but arbitrarily chosen. Can some book titled "How the world works" that talks about economy be criticised for not talking about effective communication or table manners?
And re the undervaluing, I mentioned that myself, but surely we can't expect every book to include arbitrarily chosen topics that happen to be undervalued? Hawking's book doesn't mention wealth inequality for example.
Not wanting to argue, I just don't understand why I'd see the original comment as out of line while you see mine in the same way.
I didn't take the parent comment to be dismissive, or claiming false advertising, or that the parent commenter is even that upset about anything. It's just constructive criticism. The original comment says they will "probably read it"! I think we should all be more generous with each other's comments.
Of course the book can't talk about everything but it claims to be maintenance of everything, and in general, there is a tendency to overlook the role and impact of marginalised communities in the histories. It's fine that the author hasn't done it, it's their book, but it's important to mention here because it could help the author go deeper into their point. Do you not think exploring those topics would be interesting in this book given the blurb? I certainly think it's an interesting point.
> No mention that for millenia we were mending our clothes, cleaning our houses, maintaining our food systems.
The omissions the parent comment mentioned aren't arbitrary, by the very definition that we have been doing them for thousands of years.
> What a terrible reply to an interesting and genuine comment.
The "interesting and genuine [GP] comment" was hardly that: While it might not have been the GP commenter's intent, to me the comment came across as evidencing a faint sense of entitlement and tunnel vision — as in, "why hasn't the author of the book — which I haven't read — covered what I think should have been in this first volume of the series?"
I'm listening to the Audible version of the book. It's fascinating — especially the early chapter(s) about the approaches of Henry Royce of Rolls-Royce (costly, near-bespoke manufacturing, by highly-skilled engineers and mechanics, of splendid automobiles meant for the wealthy) versus that of Henry Ford (precision engineering of assembly-line machinery to enable mass production of workhorse cars that working people could afford).
(I hadn't known that in his youth, Stewart Brand was an Airborne-qualified U.S. Army infantry officer for two years after graduating from Stanford — this was back in the days of the draft. https://sb.longnow.org/SB_homepage/Bio.html)
The fact that people look to start a business by searching for a moat shows they don't believe in free markets. It also shows what's wrong with Silicon Valley.
> in the early 80's, a time when you basically had to invent everything from scratch, so certainly there is no mental block to having to do so, and I'm aware there is at least a generation of developers that grew up with stack overflow and have much more of a mindset of building stuff using cut an paste, and less having to sit down and write much complex/novel code themselves.
I think this is really underappreciated and was a big driver of how a lot of people felt about LLMs. I found it even more notable on a site named Hacker News. There is an older generation for whom computing was new, the 80s through 90s probably being the prime of that era (for people still in the industry). There was constantly a new platform, language, technology, or concept to learn. And nobody knew any best practices; nobody knew how anything "should work". Nobody knew what anything was capable of. It was all trying things and figuring them out. It was far more trailblazing, exploring new territory. The birth of the internet was one of the last examples of this from that era.
The past 10-15 years of software development have been the opposite. Just about everything was evolutionary, rarely revolutionary: optimizing things for scale, improving libraries, or porting successful ideas from one domain to another. A lot of shifting around of deck chairs on things that were fundamentally the same. Just about every new "advance" in front-end technology was this: something hailed as groundbreaking that really took little exploration, mostly solution-space optimization. There was almost always a clear path. Someone always had an answer on Stack Overflow - you were never "on your own". A generation+ grew up in that environment and it felt normal to them.
LLMs came along and completely broke that. People who remembered when tech was new and full of potential, when nobody knew how to use it, loved that. Here is a new alien technology, and I get to figure out what makes it tick, how it works, how to use it. And on the flip side, people who were used to there being a happy path, or a manual to tell you when you were doing it wrong, got really frustrated at there being no direction, feeling perpetually lost when it didn't work the way they wanted.
I found it especially ironic, being on Hacker News, how few people seemed to have a hacker mindset when it came to LLMs. So much was "I tried something, it didn't work, so I gave up", or "I just kept telling it to work and it didn't, so I gave up". Explore; pretend you're in a sci-fi movie. Does it work better on Wednesdays? Does it work better if you stand on your head? Does it work differently if you speak pig latin? Think sideways. What behavior can you find that makes you go "hmm, that's interesting..."?
Now I think there has been a shift very recently, with people getting more comfortable with the tech, but I was still surprised by how little of a hacker mindset I saw on Hacker News when it came to LLMs.
LLMs have reset the playing field from well manicured lawn, to an unexplored wilderness. Figure out the new territory.
To me, the "hacker" distinction is not about novelty, but understanding.
Bashing kludgy things together until they work was always part of the job, but that wasn't the motivational payoff. Even if the result was crappy, knowing why it was crappy and how it could've been better was key.
LLMs promise an unremitting drudgery of the "mess around until it works" part, facing problems that don't really have a cause (except in a stochastic sense) and which can't be reliably fixed and prevented going forward.
The social/managerial stuff that may emerge around "good enough" and velocity is a whole 'nother layer.
No, the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers. Case in point, you can’t really cleverly “hack” LLMs. It’s more a roll of the dice that you try to affect using hit-or-miss incantations.
>the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers
Louder for those turned deaf by LLM hype. Vibe coders want to turn a field of applied math into dice casting.
>I found it especially ironic being on hacker news how few people seemed to have a hacker mindset when it came to LLMs
You keep using the word "LLMs" as if Opus 4.x came out in 2022. The first iterations of transformers were awful. GPT-2 was more of a toy, and GPT-3 was an eyebrow-raising chatbot. It has taken years of innovation to reach the point of usable output without constant hallucinations. So don't fault devs for the flaws of early LLMs.
Why? A few times in this thread I hear people saying "they shouldn't have done this" or something similar but not given any reason why.
Listing features you like of another product isn't a reason they shouldn't have done it. It's absolutely not embarrassing, and if anything it's embarrassing they didn't catch and do it sooner.
Because the value proposition that has people pay Anthropic is that it's the best LLM-coding tool around. When you're competing on "we can ban you from using the model we use with the same rate limits we use" everyone knows you have failed to do so.
They might or might not currently have the best coding LLM - but they're admitting that whatever moat they thought they were building with claude code is worthless. The best LLM meanwhile seems to change every few months.
They're clearly within their rights to do this, but it's also clearly embarrassing and calls into question the future of their business.
Best coding tool is what makes users use something, a good model is just a component of that.
I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.
Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with. There's a plausible story that codeveloping the UI and model could result in a better model for that purpose (because it's fine tuned on the UIs interactions).
And independently "Claude Code" being the best coding tool around was great for brand recognition. "Open Code with the Opus 4.5 backend - no not the Claude subscription you can't use that - the API" won't be.
I think it's reasonable to state that at the moment Opus 4.5 is the best coding model. Definitely debatable, but at least I don't think it controversial to argue that, so we'll start there.
They offer the best* model at cost via an API (likely not actually at cost, but let's assume it is). They also will subsidize that cost for people who use their tool. What benefit do they get or why would a company want to subsidize the cost of people using another tool?
> I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.
I happen to agree - to me it seems tenuous having a business based solely on having the best model, but that's what the industry is trying to find out. Things change so quickly it's hard to predict 2 years out. Maybe they're first to reach XYZ tech that gives them a strong long-term position.
> Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with.
I agree, but it doesn't seem like that's their M.O. If anything it's the opposite: they aren't trying to get people locked into their tooling. They made MCP a standard so all agents could adopt it. I could be wrong, but I thought they also did something similar with /scripts or something else. If you wanted to lock people in, you'd have people build an ecosystem of useful tooling and make it incompatible with other agents, but they (to my eyes) have been continuously putting things into the community.
So my general view is that they feel they have a vision and business model that doesn't require locking people into their tooling ecosystem. But they're still a business, so they don't gain from subsidizing people to use other tools. If people want their models in other tools, there are the "at-cost" APIs - why would they subsidize you to use someone else's tool?
There's just not that much IP in a UI like that. Every day we get articles on here that you can make an agent in 200 LOCs, Yegge's gas town in 2 weeks, etc. Training the model is the hard part, and what justifies a large valuation (350B for anthropic, c.f. 7B for jetbrains).
It is embarrassing to restrict an open source tool that is (IMO) a strictly superior piece of software from using your model. It is not immoral, like I said, because it's clearly against the ToC; but it's not like OC is stealing anything from Anthropic by existing. It's the same subscription, same usage.
Obviously, I have no idea what's going on internally. But it appears to be an issue of vanity rather than financials or theft. I don't think Anthropic is suffering harm from OC's "login" method; the correct response is to figure out why this other tool is better than yours and create better software. Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.
> It is embarrassing to restrict an open source tool that is (IMO) a strictly and very superior piece of software from using your model.
> Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.
To rephrase it differently, as I feel my question didn't land: it's clear to me that you think it's embarrassing. And it's clear what you think is embarrassing. I'm trying to understand why you think it's embarrassing. I don't think it is at all.
Your statements above are simply saying "X is embarrassing because it's embarrassing". Yes I hear that you think it's embarrassing but I don't think it is at all. Do you have a reason you can give why you think it's embarrassing? I think it's very wise and pretty standard to not subsidize people who aren't using your tool.
I'm willing to consider arguments, but I'm not hearing one - other than "it just is because it is".
If your value proposition is: do X, and then you have to take action against an open source competitor for doing X better, that shows that you were beaten at the thing you tried very hard at, by people with way fewer resources.
The competitor is not "doing X better"; it's more complicated than that.
CC isn't just the TUI tool. It's also the LLM behind it. OC may have built a better TUI tool, but it's useless without an LLM behind it. Anthropic is certainly within their rights to tell people they can only integrate their models certain ways.
And as for why this isn't embarrassing, consider that OC can focus 100% of their efforts on their coding tool. Anthropic has a lot of other balls in the air, and must do so to remain relevant and competitive. They're just not comparable businesses.
> CC isn't just the TUI tool. It's also the LLM behind it.
No, Claude Code is literally the TUI tool. The LLMs behind are the models. You can use different models within the same TUI tool, even CC allows that, regardless of the restriction of only using their models (because they chose to do that).
> consider that OC can focus 100% of their efforts on their coding tool.
And they have billions of dollars to hire full teams of developers to focus on it. Yet they don't.
They want to give Claude Code an advantage because they don't want to invest as much in it and still "win", while they're in a position to do so. This is very similar to Apple forcing developers to use their apps because they can, not because it's better. With the caveat that Anthropic doesn't have a consolidated monopoly like Apple.
Can they do that? Yes.
Should they do that? It's a matter of opinion. I think it's a bad move.
Is it embarrassing? Yes. It shows they're admitting their solution is worse and changing the rules of the game to tilt it in their favor while offering an inferior product. They essentially don't want to compete, they want to force people to use their solution due to pricing, not the quality of their product.
Claude Code is more than the TUI, it's the prompts, the agentic loop, and tools, all made to cooperate well with the LLM powering it. If you use Claude Code over a longer period of time you'll notice Anthropic changing the tooling and prompts underneath it to make it work better. By now, the model is tuned to their prompts, tools etc.
Why do you like or dislike Diet Coke? At some point, saying what I think is embarrassing is equivalent to saying why.
But, to accept your good faith olive branch, one more go: AI is a space full of grift and real potential. Anthropic's pitch is that the potential is really real. So real, in fact, that it will alter what it means to write software.
It's a big claim. But a simple way to validate it would be to see if Anthropic themselves are producing more or higher quality software than the rest of the industry. If they aren't, something smells. The makers of the tool, and such a well funded and staffed company, should be the best at using it. And, well, Claude Code sucks. It's a buggy mess.
Opencode, on the other hand, is not a buggy mess. It is one of the finest pieces of software I've used in a long time, and I don't mean "for a TUI". And they started writing it after CC was launched. So, to finally answer your question: Opencode is a competitor in a way that calls into question Anthropic's very innermost claim, the transformative nature of AI. I find it embarrassing to answer this question-of-sorts by limply nicking the competitor, rather than using their existence as a call for self-improvement. And, Christ, OC is open. It's open source. Anthropic could, at any time, go read the code and do the engineering to make CC just as good. It is embarrassing to be beaten at your own game and then take away the ball.
(If that is what is happening. Of course, this could be a misunderstanding, or a careless push to production, or any number of benign things. But those are uninteresting, so let's assume for the sake of argument that it was intentional).
Thanks, while we in the end may not agree - I do feel I understand your thinking now. Also agreed, we've probably reached the fruitful end of this discussion and this will be my last reply on it. I'll explain my thoughts similarly as you.
To me it seems more akin to someone saying "I'm launching a restaurant. I'll give you a free meal if you come and give me feedback on the dish, the decor, service...". This happens for a bit, then after a while people start coming in taking the free plate and going and eating it at a different restaurant.
To me it seems pretty reasonable to say "If you're taking the free meal you have to eat it here and give feedback".
That said, I do acknowledge you see it very differently and given how you see it I understand why you feel it's embarrassing.
As a user it is, because I can no longer use the subscription with the greater tooling ecosystem.
As for Anthropic, they might not want to do this as they may lose users who decide to use another provider, since without the cost benefit of the subscription it doesn't make sense to stay with them and also be locked into their tooling.
No one said it was complicated, and you might be imagining that I care more than I do. However if you can't understand why having a feature of a paid product removed is dissatisfying, then I cannot help you understand any further.
I am surprised that anyone would think the "product" is the web interface and cli tool though, the product is very clearly the model. The difference in all options is merely how you access it.
Feature, attribute, loophole. I really doubt we fundamentally disagree on the situation here. You can use your empathy to understand why people are disappointed, and I will pretend such a detail oriented thread has made me feel content. Anthropic can do what they want, it's their service.
The Claude plans allow you to send a number of messages to Anthropic models in a specific interval without incurring any extra costs. From Anthropic's "About Claude's Max Plan Usage" page:
> The number of messages you can send per session will vary based on the length of your messages, including the size of files you attach, the length of current conversation, and the model or feature you use. Your session-based usage limit will reset every five hours. If your conversations are relatively short and use a less compute-intensive model, with the Max plan at 5x more usage, you can expect to send at least 225 messages every five hours, and with the Max plan at 20x more usage, at least 900 messages every five hours, often more depending on message length, conversation length, and Claude's current capacity.
So it's not a "Claude Code" subscription, it's a "Claude" subscription.
The only piece of information that might suggest that there are any restrictions to using your subscription to access the models is the part of the Pro plan description that says "Access Claude Code on the web and in your terminal" and the Max plan description that says "Everything in Pro".
It is embarrassing, because it means they're afraid of competition. If CC were as great as they sell it, they wouldn't need to do this.
> I just took delivery a few months ago of some handmade solid cherry bookshelves.
Was the seller of that furniture the number one furniture retailer in the world?
The majority of furniture purchased is flat-pack, which means the majority of furniture produced and sold is flat-pack MDF. Saying that you bought something not flat-pack doesn't really mean anything.
Yes, you very well could have bought a piece of furniture where the maker took the care to sand the wood with his tongue. But if you're looking to design furniture to be produced and sold, you should expect to be designing flat pack MDF.
If you are looking to become as huge and as profitable as possible with furniture, then sure, MDF may be the one true path. But J. Random Small Furniture Builder is unlikely to become that huge regardless of how much MDF they use. Exploring ways to distinguish oneself may be worthwhile.
Serving customers who want non-MDF options can be a perfectly sustainable and profitable — even if smaller — business. There are plenty of small furniture shops doing just that.
> (This is one of the reasons why only the economically illiterate would propose a tax on "unrealized capital gains": that means taxing people for income they have not actually received, but merely could theoretically receive. Which is both immoral and stupid.)
But your statement misses the important case where people use unrealized gains as collateral for a loan, which they then use (for example, to live off of). This in effect "realizes" them without paying the appropriate taxes. As long as gains are purely theoretical and not used for any transactions, they should remain untaxed. As soon as they become "active", by being sold or, for example, by unlocking additional assets as collateral for a loan, they should be taxed.
Same concept as a retirement account. You can sell within a retirement account and rightfully don't have to pay taxes, because you don't really have "access" to the cash; it's still "locked" within the account. Only when you withdraw, gaining access to it and making it active/real, do you pay taxes. But if you take out a large loan leveraging that retirement account (or an unrealized gain) as collateral, you are making it active and correctly should pay taxes.
If it's a loan, it must be repaid. At that point, the debtor is going to either sell some stocks, thereby actually acquiring real income (and paying capital gains taxes), or use some other source of cash and not touch the stocks. In the former case, he will pay tax on that income when he actually sells the stocks. In the latter case, he paid taxes on that cash when he received it as income, so it's already been taxed and shouldn't be taxed twice. (Which is why inheritance taxes are immoral, as double taxation, but that's a totally separate subject.)
As for retirement accounts, same principle applies. Collateral is collateral, it's a contingency. It might or might not ever be touched. If it's touched, then there will be real income involved. If it isn't touched, then there was no income.
But as for the idea of paying taxes on things used as collateral for a loan, that only makes sense if you consider money received as a loan as income. And if you do, you're going to get yourself in serious trouble. LOANS ARE NOT INCOME. They have to be repaid, and they actually cost you money in the long term because you also have to pay interest. If you treat loans as if they were income, you'll quickly find yourself neck-deep in credit card debt and in serious financial trouble. On the other side of the equation, if the IRS were to treat loans as if they were income and tax them (or the collateral used to secure the loan), they would do immense damage to the economy.
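To make the "loans are not income" point concrete, here is a toy calculation with made-up numbers (principal, rate, and term are all illustrative assumptions, and simple interest is used for brevity where real loans typically compound):

```python
# Illustrative only: over its life, a loan is a net cash outflow, not income.
principal = 100_000        # assumed amount borrowed against stock collateral
rate = 0.06                # assumed annual interest rate
years = 5                  # assumed loan term

# Simple interest for illustration; real loans typically compound.
total_repaid = principal * (1 + rate * years)   # 130,000.0
net_cash = principal - total_repaid             # -30,000.0

print(f"repaid {total_repaid:,.0f}, net cash {net_cash:,.0f}")
```

Under these assumptions the borrower ends the loan 30,000 poorer than if the cash had simply been income, which is the asymmetry the argument above turns on.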
That's what's so surprising to me - the data clearly shows the experiment had terrible results, and the write-up is nothing but the author declaring "glowing success!".
And they didn't even bother to test the most important thing: were the LLM evaluations even accurate? Have graders manually evaluate them and see whether the LLMs were close or wildly off.
This is clearly someone who had a conclusion to promote regardless of what the data was going to show.
> And they didn't even bother to test the most important thing: were the LLM evaluations even accurate?
This is not true; the professor and the TAs graded every student submission. See this paragraph from the article:
(Just in case you are wondering, I graded all exams myself and I asked the TA to also grade the exams; we mostly agreed with the LLM grades, and I aligned mostly with the softie Gemini. However, when examining the cases when my grades disagreed with the council, I found that the council was more consistent across students and I often thought that the council graded more strictly but more fairly.)
At the risk of perhaps stating the obvious, there appears to be a whiff of aggression from this article. The "fighting fire with fire" language, the "haha, we love old FakeFoster, going to have to see if we change that" response to complaints that the voice was intimidating ... if there wasn't a specific desire to punish the class for LLM use by subjecting them to a robotic NKVD interrogation then the authors should have been more careful to avoid leaving that impression.
Tried it in earnest. Definitely detect some aggression, and would feel stressed if this were an exam setting. I think it was pg who said that any stress you add in an interview situation is just noise, and dilutes the signal.
Also, given that there's so many ways for LLMs to go off the rails (it just gave me the student id I was supposed to say, for example), it feels a bit unprofessional to be using this to administer real exams.
Not that bad? I gave it a random name and random net ID and it basically screamed at me to HANG UP RIGHT NOW AND FIGURE OUT THE CORRECT NET ID. Hahaha
That does not resemble any good professor I've ever heard. It's very aggressive and stern, which is not generally how oral exams are conducted. Feels much more like I'm being cross examined in court.
Also tried it and it could have been a lot better. If I had any type of interview with that voice (press interview, mentor interview, job interview) I would think I was being scammed, sold something, or had entered the wrong room.
The belligerence about changing the voice is so weird. And it does sort of set a tone straight off. "We got feedback that the voice was frightening and intimidating. We're keeping it tho."
I've got a long-standing disagreement with an AI CEO who believes LLM convergence indicates greater accuracy. Explaining basic cause and effect in these AI use cases is a real challenge. The essential basic understanding of what an LLM is just isn't there, and that lack of comprehension is a civilization-wide issue.
I don't think they're terrible, but I'm grading on a curve because it's their first attempt and more of a trial run. It seems promising enough to fix the issues and try again.
It's incredible; there is something wrong with a group of people completely unable to see when someone is lying to them. And no matter how many times they are lied to, as long as the liar is rich enough, they believe him.
I don't know what to think anymore about this. He has continuously conned his way along and does it just long enough to jump to the next con.
Tesla is crashing, and somehow people thought giving him a huge pay package made sense. The Cybertruck is flopping, but now he's again living off government graft by having another company buy up the dead-weight supply. Tesla is only around because of government subsidy, and now that that's dead he's turned to another government spigot, while supposedly being politically opposed to what he's doing.
And time and time again people still make up excuses because they can't believe they were conned.
Probably the biggest sign AI is going to flop is him starting to talk about it being right around the corner.
Little technical skill, no forecasting ability, and we saw how badly his "efficiency management" philosophy flopped when done in public via DOGE (vs. behind the scenes in a private company). As long as he can keep spitting out BS, people keep falling for it.
On the other side of the coin, they really don't have a choice: either they attempt to provide leverage (and using unrealistic goals is an excellent way to avoid actually having to pay it out), or any major mishap at any of the other businesses that hold Tesla stock as collateral (directly or indirectly) would basically bankrupt the company. And the scenario where Elon attempts a fire sale on purpose, just to take revenge, isn't far-fetched either.
IMO the only way forward for them is to keep him happy for now, while attempting either damage control or a graceful exit.
I think you're thinking about it the wrong way. The obvious con is what hypes the fan base. They think they're in on it and that they'll fool the "NPCs", or whatever they call normal people.
I always thought the story ended with the emperor and his entourage being embarrassed after the child said he was naked... but no, it ends even closer to real human behavior. (Sorry for writing a clickbait sentence.)
"The market can remain irrational longer than you can stay solvent" applies here.
Luckily I don't bet. I would have taken a huge short position on Tesla years ago and lost a bunch of money, because they were already overvalued by any plausible revenue projection, and yet the stock went up and up.
But worth remembering, the South Sea Company was worth the equivalent of a few trillion dollars too.
The problem with the stock market is that even if you know with 100% certainty that EM is lying and Tesla is overvalued, you can only cash in on that knowledge if the stock price makes contact with reality.
In fact even if every single shareholder in Tesla knows that the price is unsustainable they can still hold out for a greater fool for years. To a large extent you are betting on what the crowd will do, not what the company will do.
For this to work every single shareholder has to be in on the game. I wonder if the only reason it has gone on this long is because TSLA has so many required institutional investors stabilizing the market.
Any serious shareholder with a significant investment in it is surely aware that it's an overvalued meme stock that will continue to print money as long as the reality distortion field is maintained.
They'd be utter idiots if they weren't. (And if they are utter idiots, you shouldn't expect them to behave rationally.)
This is exactly it: they're making a perfectly rational decision keeping Musk on the way he is, because the alternative, once he's out, is the stock crashing due to uncertainty and the fanboys bailing.
Why have less money when you actually don't care what happens if you have more money? So long as the stock retains its value, you can do things like borrow against your holdings, leverage that into other investments etc.
Beyond a certain point it becomes self-reinforcing. You will distort everything else about your world view to support that lie. You will surround yourself with other people who believe it and live in a completely internally consistent reality, surrounded by a vast conspiracy trying to bring you down.
The really killer part is, I can't even be 100% certain that it's not me. I'm quite sure, and justify it solidly, but then, I would.
Maybe the smart people are the ones who can intuitively feel the stupidity of the masses and take advantage of that, whereas the dumb are the ones who are too cautious about houses of cards and unstable Ponzi schemes...
If something is literally incredible, then it's prudent to stop and consider whether it should be believed or that you have made an incorrect assumption. In this case, you wrongly assume that Musk is somehow being rewarded for something that happened in the past, or for something that might not even happen. The reality is that the pay package will only have value if Elon manages to dig Tesla out of the hole.
Despite how much conning you believe Musk has done (I won't refute it), Tesla is a company that actually builds cars, and while the Cybertruck flopped and anyone could see that coming from a mile away, that doesn't really affect the Tesla bottom line. That Musk grifted the government into buying them doesn't really do anything besides saving Tesla some money.
I wouldn't buy Tesla shares, since I still can't justify their crazy valuation, but I would buy a Tesla car, as they are by all accounts awesome. If you disregard all the lying Musk has done, it's still an epic car with unrivaled self-driving capabilities.
Historically, him starting to talk about something has been a sign that some part of it is going to become reality. You can stand apart from the crazy people who worship the ground he walks on, and still appreciate that he accomplishes great things. Whether it's through conning and grifting, or hard work and keen insight, there are still an electric car company and a rocket company where there weren't any before.
Just stop reacting to people believing or shouting things or grotesque behaviors, and just look at the actual reality. It'll do you a lot better than just believing everything Musk says is BS.
This seems perfectly on target. It's possible I'm missing something though. What were you looking for it to do?