With HN recently devolving into a propaganda site for the lab leak conspiracy theory, recent UFO mania, and now long-debunked miracle cure truthers making a comeback, I guess this will be my last visit to this site. The chaff is just not worth the wheat anymore. Have fun everyone!
Thanks for submitting this interesting issue. Out of interest, what system are you on, and did jq come from a package manager? If you build it from source, it should work: https://news.ycombinator.com/item?id=27362060
So the basic bug is fixed; jq has included a bignum library for > 2 years. I don't know if Mint (and thus presumably Ubuntu, and thus possibly Debian) includes an older version of jq or sets nonstandard user-unfriendly flags on purpose, but I'm somewhat underwhelmed in either case.
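(For anyone checking their own install, here is a quick way to see which behavior a packaged build has. The transcript below is a sketch of what a pre-bignum build, such as the stock jq 1.6 package, produces; the exact rounded digits depend on how the platform formats doubles, and a recent source build should print the input back unchanged:

    $ jq --version
    jq-1.6
    $ echo '10000000000000000001' | jq '.'
    10000000000000000000

Anything beyond what an IEEE 754 double can represent exactly loses precision, because pre-bignum jq parses every number into a double before any filter runs.)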
> I’ve long thought that a sensible constitutional guarantee would be the right to leave the country at any time.
In totalitarian regimes, constitutions aren't worth the pixels they are printed on. Rights guaranteed by laws must be enforced by the judiciary and the executive. But that only happens if (1) cases actually reach a court, (2) that court is independent, and (3) the executive is willing to enforce the court's decisions.
Since jq does something completely different from a browser, it would be reasonable for it to try harder in some respects. A tool that is supposed to pass certain data through unchanged... should not change that data. Even if we can expect that data to be rounded by its eventual consumer at some later time.
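(Concretely, on a build without the bignum support even the identity filter rewrites the document. An illustrative transcript; a build with the fix passes the literal through:

    $ echo '[10000000000000000001, 0.1]' | jq -c '.'
    [10000000000000000000,0.1]

The '.' program is supposed to be a no-op, yet the output no longer round-trips with the input.)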
Every enterprise timesheet and expense system I've ever used was a slow, barely usable, confusing disaster. But I don't think that was because the VM failed to pre-allocate memory for some operations.
Without the superlatives and strawman attacks, and with an actual description of how things should be done instead, this would be an interesting read.
> I believe, computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description).
This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.
> Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).
Define "most". Do it in a way that makes real and complex analysis and topology (and probably many other branches) the smaller part of mathematics.
Most importantly though, my problem with this kind of discussion is that the question itself is meaningless. Not everything can be classified into neat "X is a Y" relationships. Not everything needs to be classified into such relationships. Even if the discussion reached a consensus, that consensus would be meaningless. Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.
> Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.
I assumed the implication here is that CS, like math, is considered by many to not be a science, but rather a field of construction based on logic. The obvious problem with calling computer science a science is that it isn’t fundamentally based on measuring empirical evidence of a natural process. Maybe that still lands in the ‘OK, but so what?’ category; on the other hand, this has been much discussed re: math, and it may be useful to clarify in what ways CS is not employing the scientific method.
> it isn’t fundamentally based on measuring empirical evidence of a natural process.
What leads you to say this? If computation is in some sense the construction of certain forms of mathematics, is computer science not then the empirical study of computers (the objects which instantiate the math) and computation (the process of instantiation)? Of course there is abstract theory as well, but that's just as true in physics.
Newell and Simon had some thoughts: "We build computers and programs for many reasons.
We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about... the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature."[0]
The fact that digital computation is a new process doesn't make it "unnatural", it might be argued; some also contend computation takes place not merely in digital computers but much more generally, in which case distinctions between computer science/cognitive science/physics blur.
Agree with your broader point, though. I'm not aware of any consensus on the epistemological or ontological status of computer science, or on its relation to the other sciences. It seems (to me) subject to many of the same philosophical questions that dog mathematicians, re: discovery vs. invention, the uncertain reality of various abstractions, generalizability, etc.
Likewise agree that consideration of the methods employed in computer science can be fruitful, in particular if the goal is not so much to establish once and for all which category CS falls most naturally into, but simply to stimulate critical thought about the fundamental questions.
I meant natural in the sense of originating from nature, specifically as opposed to something built by people. The fact that digital computation is a synthetic construction of humans is what makes it “unnatural”; that it’s new is just a byproduct of humans having invented it recently.
I’d agree there are ways that we can observe computation as a scientist and form hypotheses and perform experiments, especially if, for example, I write a program I don’t fully understand and don’t know how to predict the behavior of, or, maybe much more commonly, when I observe software written by other people.
Thinking about the analogy to telescopes, the implication is that computers are an instrument for measuring something. Telescopes measure things about planets and stars, physical things that occur in nature. But what exactly do computers measure if they’re to be considered a measuring device? It’s fun to think of a computer as a physical device that measures pure logic; we can physically observe something that doesn’t occur in nature.
On the other hand, I’m hesitant not to draw some kind of line between CS and the hard sciences like physics, chemistry, and biology, because there seem to be real differences between them. (I was going to point out examples, but realized it’s fundamentally tricky to nail down and I’d be setting a trap for myself. ;)) Yes, I agree the philosophy of where CS lands, and what CS really is, falls into the same ambiguous camp as mathematics (probably because CS and math both truly are in the same category of abstract logic, not directly tied to physical observations). Maybe more useful and abstract tools are more difficult to categorize precisely because they are used as part of all the sciences and arts...
> This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.
To me, that is not that surprising, although it's a good point.
My view has to do with the history of mathematics. People were fascinated with infinities long before they considered that large but finite systems could also be interesting. I think applications of mathematics, mainly geometry and physics, are responsible too.
The development of a more finitist taste in problems (somebody else mentioned constructivism, which I think is fitting) came with the practical need to do computations and develop algorithms.
So I am not that surprised that one of the early forays into the theory of computation was through the lens of the infinite rather than the finite.
> Most importantly though, my problem with this kind of discussion is that the question itself is meaningless.
Of course, I forewarned that it's just my view, and you're free to ignore it.
Look, part of why I mention it is that it seems rather surprising to me; I would consider infinite structures to be more complicated, in some sense, yet in the history of mathematics (which lately includes CS, as the study of the large but finite) these were studied first. There was nothing that would prevent the Ancient Greeks (or Euler) from discovering, say, the lambda calculus, or how to sort efficiently. Although it seems that in many fields we progress from more complicated to simpler methods, in some way, I think it's partly precisely because the finite is often considered uninteresting by mathematicians that it was overlooked. And that's the philosophical point I am trying to emphasize: different fields of math perceive (in the way they treat them) the same structures differently, and I gave the example of the natural numbers. Another example is the notion of set cardinality; in most areas of mathematics people only care about the countable/uncountable distinction.
> “Not everything can be classified into neat "X is a Y" relationships.”
I’m with you on skepticism of the x-is-y relationship; however, I read the comment as comparing math versus computing academics.
Then, the so-what answer would be informative for the neophyte or youth who is interested in computing but struggles with mathematics instruction. Right? That’s a real thing in education.
In fact, I find this to be the principal benefit of MOOCs. You can compare styles of instruction and pedagogy from major universities across the US (and internationally).
That might be true if there were no more natural mathematical representations of computation. But as I've pointed out in another comment (https://news.ycombinator.com/item?id=27334163), there is such a representation - the lambda calculus.
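(For concreteness, the standard textbook illustration, in LaTeX notation, of how the lambda calculus represents computation; a sketch added here, not part of the linked comment:

    % Church numerals: n is the function "apply f n times"
    0 \equiv \lambda f.\,\lambda x.\,x \qquad 1 \equiv \lambda f.\,\lambda x.\,f\,x \qquad 2 \equiv \lambda f.\,\lambda x.\,f\,(f\,x)
    % Successor, and one computation carried out purely by beta-reduction:
    \mathrm{succ} \equiv \lambda n.\,\lambda f.\,\lambda x.\,f\,(n\,f\,x)
    \mathrm{succ}\;1 \;\to_\beta\; \lambda f.\,\lambda x.\,f\,(1\,f\,x) \;\to_\beta\; \lambda f.\,\lambda x.\,f\,(f\,x) \;\equiv\; 2

Arithmetic falls out of pure function application and substitution, with no machine model in sight.)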
> If I'm reading conference papers, why would I worry about whether one of them is the product of review collusion?
Because the one you are reading may have crowded out a better one. Even if the current review system is essentially random, replacing it with something that is essentially a contest of well-connectedness is worse. Young researchers with good ideas but fewer connections, or people from less well-known institutions would have their ideas suppressed.
So you should be worrying about stagnation, and about not reading what might actually be new and exciting.
None of that reflects negatively on the paper. There is no additional caution warranted when reading papers. That's just a question of "are you happy with the state of the world?". You can think about that question any time.
I think it does. Good papers are written by people who care about doing good research, whose primary motivation is doing good research. Such people will not agree to collude or commit academic fraud to get their papers published, because they are idealistic and are in research because they want to do useful work. To such people, committing fraud is anathema, for personal reasons that have nothing to do with economic or other incentives.
There are such people in academia, but they too are crowded out, to borrow tom_mellior's turn of phrase, by others who don't hesitate to commit academic fraud to get published and who don't give two flying figs about the quality of their own work. This is obviously a concerning state of affairs that can only be detrimental to the overall quality of research.
So I'm sorry, but you're dismissing the issue out of hand without having thought through all the consequences. Academic fraud is like, I don't know, broken windows? It perverts everything around it and creates a black hole of bullshit that sucks everything into it. Good research cannot thrive in such conditions.
Actually, today an article was posted to HN (not by me) that points to Bastiat's parable of the broken window:
The parable of the broken window was introduced by French economist Frédéric Bastiat in his 1850 essay "Ce qu'on voit et ce qu'on ne voit pas" ("That Which We See and That Which We Do Not See") to illustrate why destruction, and the money spent to recover from destruction, is not actually a net benefit to society.
In microeconomic theory, opportunity cost is the loss of the benefit that could have been enjoyed if the best alternative choice had been chosen instead.[1]
"Diet = dental health" is bullshit. My partner was raised sugar-free and has horrible teeth with many cavities. So does their entire family. I was raised on a diet which included many forms of sweets, and I have never had a cavity. Same for my entire family. Genetics or other biological predisposition seems to be a major factor. Diet might help a bit, but don't expect any magical effects.