This sounds really good! If the country with the largest population, and probably the largest manufacturing base, in the world goes green, that is going to be really good for the climate.
As another comment mentioned, this website does look like Time Cube at first sight.
However, the explanations of the second law of thermodynamics on the second page are quite decent and written in a humorous way. Of course it is not fully accurate because it does not use any math but I think it does a good enough job of explaining it to the lay person.
The explanations about human life on the third page are analogies at best. The situations that the author describes are similar to the workings of the second law, but they are not a first-principles outcome of it.
I am surprised I have not seen LabView mentioned in this thread. It is arguably one of the most popular visual programming languages after Excel and I absolutely hate it.
It has all the downsides of visual programming that the author mentions. The visual aspect of it makes it very hard to understand the flow of control. There is no clear left-to-right or top-to-bottom way of reading a program chronologically.
LabView’s shining examples would be trivial Python scripts (aside from the GUI tweaking). However, its runtime interactive 2D graph/plot widgets are unequaled.
As soon as a “function” becomes slightly non trivial, the graphical nature makes it hard to follow.
Structured data with the “weak typedef” is a minefield.
A simple program to solve a quadratic equation becomes an absolute mess when laid out graphically. Textually, it would be a simple 5-6 line function that is easy to read.
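For comparison, here is roughly what that textual version could look like, a minimal Python sketch (the function name and the complex-safe square root are my choices for illustration, nothing LabView-specific):

    import cmath

    def solve_quadratic(a, b, c):
        # roots of a*x**2 + b*x + c = 0
        d = cmath.sqrt(b * b - 4 * a * c)  # discriminant, complex-safe
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

Laid out as wires and blocks, the same arithmetic sprawls across the diagram and the reading order disappears.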
Source control is also a mess. How does one “diff” a LabView program?
When I had some customers working with it a few years ago, they were trying to roll out a visual diff tool that would make source control possible.
I don't know if they ever really delivered anything or not. That system is such an abomination it drove me nuts dealing with it, and dealing with scientists who honestly believed it was the future of software engineering and all the rest of us were idiots for using C++.
The VIs are really nice; when you're connecting them up to a piece of measurement hardware to collect data, the system makes sense for that. Anything further and it's utter garbage.
Take a look at FME, another visual 'programming language'. They've done a lot of work with their git integration, including diffing and handling merge conflicts.
Python's equivalent of LabView would be Airflow. Both solve the same CS problem (even though the applications are very different).
Airflow is almost universally famous for being a confusing, hard-to-grasp framework, but nobody can actually point to anything better. Still, it's incomparably better than LabView; they're not even in the same race.
> Source control is also a mess. How does one “diff” a LabView program?
With LabVIEW, I'm not sure you can. But in general, there are two ways: either by comparing the underlying graphs of each function directly, or by working on stored textual representations of the topologically sorted graphs and comparing those. More broadly, since different versions of any code are themselves nodes in a graph, a visual versioning system makes sense.
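A minimal Python sketch of that second approach, with a made-up graph encoding (each program is just a dict mapping a node to its upstream nodes); a real tool would also need stable node names and canonical tie-breaking in the ordering:

    import difflib
    from graphlib import TopologicalSorter

    def serialize(graph):
        # emit one line per node, in topological order, listing its inputs
        order = TopologicalSorter(graph).static_order()
        return [f"{node} <- {sorted(graph[node])}" for node in order]

    old = {"read": [], "scale": ["read"], "plot": ["scale"]}
    new = {"read": [], "scale": ["read"], "filter": ["scale"], "plot": ["filter"]}

    print("\n".join(difflib.unified_diff(serialize(old), serialize(new), lineterm="")))

Diffing the serialized form at least gives you something reviewable, even if it loses the spatial layout that people actually care about in a block diagram.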
And Simulink. I lost years in grad school to Simulink, but it is very nice for complex state machine programming. It’s self documenting in that way. Just hope you don’t have to debug it because that’s a special hell.
I quite like Simulink because it's designed for simulating physical systems which are naturally quite visual and bidirectional. Like circuit diagrams, pneumatics, engines, etc. You aren't writing for loops.
Also it is actually visually decent unlike LabVIEW which looks like it was drawn by someone who discovered MS Paint EGA edition.
Simulink is based on the block diagram notation used in control theory for decades earlier, before personal computers and workstations. The notation is rigorous enough that you can pretty much pick up a book like the old Electro-Craft motor handbook (DC Motors Speed Controls Servo Systems), enter the diagrams into Simulink, and run them, with allowances analogous to entering an old schematic into a SPICE simulator.
LabView was significantly more sui generis and originated on Macintosh about a decade earlier. I don't hate it but it really predates a lot of more recent user experience conventions.
This is exactly why a visual representation of code can be useful for analyzing certain things, but will rarely be the best (or even preferred) way to write code.
I think a happy medium would be an environment where you could easily switch between "code" and "visual" view, and maybe even make changes within each, but I suspect developers will stick with "code" view most of the time.
Also, from the article:
> Developers say they want "visual programming"
I certainly don't. What I do want is an IDE which has a better view into my entire project, including all the files, images, DB, etc., so it can make much better informed suggestions. Kind of like JetBrains on steroids, but with better built-in error checking and autocomplete suggestions. I want the ability to move a chunk of code somewhere else, and have the IDE warn me (or even fix the problem) when the code I move now references out-of-scope variables. In short, I want the IDE to handle most of the grunt work, so I can concentrate on the bigger picture.
Most industrial automation programming happens in an environment similar to LabView, if not LabView itself: DeltaV, Siemens, Allen-Bradley, etc. Most industrial facilities are absolutely full of them, with text-based code likely being a small minority for anything higher level than the firmware of individual PLCs and such.
A lot of these environments inherit a visual presentation style (ladder logic) that comes from pre-computer-era electrical schematics, and it works extremely well for conveying asynchronous conditional behaviors to anyone, even people without much of a math background. There are also a lot of more advanced functions these days that you write in plain C code in a hierarchical block, mostly for things like motor control.
I like function block on the Schneider platform for process control with more analog values than Boolean ones. It visualizes the inputs, control loop, and output nicely.
Ladder, SFC and FBD are all graphical languages used to program PLCs. Ladder is directly based on electrical ladder schematics and is common in the USA. The idea was that electricians and plant technicians who understood ladder schematics could now program and troubleshoot industrial computers. SFC and FBD were more common in Europe, but nowadays you mostly see Structured Text, a Pascal dialect (usually with bolted-on vendor OOP lunacy).
I will admit that for some programs, ladder is fantastic. Of course, ladder can be turned into horrid spaghetti if the programmer doesn't split up the logic properly.
I think the whole flow concept is really only good for media pipelines and such.
In mathematics, everything exists at once, just like in real life.
In most programming languages, things happen in explicit discrete steps, which makes them a lot easier to follow, and most node-based systems don't have that property.
I greatly prefer block-based programming, where you're dragging rules and command blocks that work like traditional programming, but with higher-level functions, ease of use on mobile, and no need to memorize all the API call names just for a one-off task.
What would be useful is a dataflow representation of the call stack of a piece of code, generated from source and then brought back from the GUI into source.
Labview does have diff and merge tools. They feel kind of clunky in practice, kind of like diffing/merging MS Office files. In my experience people think of versions of LabView code as immutable snapshots along a linear timeline and don't really expect to have merge commits; code versions may as well be stored as separate folders with revision numbers. The mindset is more hardware-centric; e.g., when rewiring a physical data acquisition system, reverting a change just means doing the work over again differently. So LabView's deficiencies in version control don't stand out as much as they would in pure software development.
I used Labview as part of a course in my degree (EE), so I already knew it.
If you know other languages I would say it's very easy to pick up, probably the easiest out of any language out there. Instead of having to guess/learn the syntax, you just pick functionality from icons/lists and drag and drop.
As someone who used to use (and hate) LabVIEW, a lot of my hatred towards it was directed at the truly abysmal IDE. The actual language itself has a lot of neat features, especially for data visualization and highly parallel tasks.
If we define a computer in very broad terms, as a system used to emulate/simulate another system, could we call a wind tunnel a computer? It is a system used to infer what would happen high up in the atmosphere or on the race track. Taking it a step further, do animals used for drug testing count as computers? They are used to infer potential adverse effects in a human body.
Although quite specialized, I think these things would still qualify as computers.
I think it would make more sense to limit the analysis to technologies that let you build a Turing-complete machine, but indeed sometimes you find people counting your examples as computers, because they are computing a specific function.
That's assuming that wind tunnels are not Turing-complete. Terry Tao has this fantastic idea of proving that the Navier-Stokes equations blow up by proving the existence of a 'fluid computer'. https://terrytao.wordpress.com/2019/05/31/searching-for-sing...
This is so cool! "Dissecting" a processor like this could be a fun educational activity to do in schools similar to dissecting a frog, but without the animal rights issues.
Personally, I think everyone should try opening up a chip. It's easy (if the chip isn't in epoxy) and fun to look inside. You need a metallurgical microscope to examine the chip closely, but you can see interesting features even with the naked eye.
I didn't know there was such a thing as a metallurgical microscope. What makes them different from biological microscopes? And what is their primary purpose? I am assuming they don't make microscopes just for dissecting chips.
A regular biological microscope shines the light from below. This is good for looking at cells, but not so useful when looking at something opaque. A metallurgical microscope shines light from above, through the lens. They are used for examining metal samples, rocks, and other opaque things.
An external light works for something like an inspection microscope. But as you increase the magnification, you need something like a metallurgical microscope that focuses the light where you are looking. Otherwise, the image gets dimmer and dimmer as you zoom in.
In some places, you've shown the same part of the circuit both with and without the metal layers. How did you find the same location on the die after taking the die out of the microscope, removing the additional layers and putting it back?
I figured that I would want to study the standard-cell circuits, so I made a detailed panorama of one column of standard-cell circuits with the metal. Then after removing the metal, I made a second panorama of the same column. This made it easy to flip back and forth. (Of course, it would be nice to have a detailed panorama of the entire chip, but it would take way too long.)
Biological microscopes illuminate the sample from below, as the samples are typically transparent. Metallurgical microscopes illuminate reflective samples from above.
*"Below" meaning "on the opposite side from the objective" - you illuminate _through_ the sample.
Metallurgical microscopes illuminate the sample "from the top side". The actual implementation even goes as far as making sure the illumination happens on the optical axis of the objective (as if the light was emitted from your eyes/camera, reflected from the sample and then seen by your eyes/camera). They are also called reflected light or epi-illumination microscopes.
Biological microscopes, on the other hand, illuminate the sample from the back side (which doesn't work for fully opaque objects).
Discarded RFID cards and the like provide a practically free source of minimally-encapsulated ICs, also often made on an old large process that's amenable to microscope examination.
Having looked at a few RFID cards, there are a couple of problems. First, the dies are very, very small (the size of a grain of salt), so they are hard to manipulate and easy to lose. Second, the die is glued onto the antenna with gunk that obstructs most of the die. You can burn it off or dissolve it with sulfuric acid, but I haven't had success with more pleasant solvents.
Decapping a processor produces toxic waste, which has to be disposed of. Processors, properly handled, last a lot longer than frogs, and can be re-used again and again: to a first approximation, processors do not wear out. I would expect that manufacturing a new processor causes more suffering to more frogs than is caused by killing a frog for dissection.
That said: we have video players in our pockets. Sure, dissecting one frog might be a more educational experience than watching somebody else dissect a frog, but is it more educational than watching 20 well-narrated dissections? I suspect not. I don't think we need to do either.
This is not just a curiosity, though. Nuclear pulse propulsion like Orion is actually the most advanced form of space propulsion that is within technical reach and doesn't require science-fiction ingredients like large amounts of antimatter. When you read the article, you see that Orion was considered quite doable even decades ago. There actually is one system that would be even more efficient than Orion, the Medusa.
I enjoy reading posts like this. Very thorough description. I am wondering if someone has insights into how the publishing model for more traditional publishers like Wiley and Elsevier works. Those guys are selling books for more than $200. Does the author get more money from their sales or is it all absorbed by the publisher?
This article does a great job at explaining interval arithmetic. However, the introduction says
>Instead of treating each as exactly 7 feet, we can instead say that each is somewhere between a minimum of 6.9 feet and a maximum of 7.1. We can write this as an interval (6.9, 7.1).
Yes we can use an interval to express an uncertainty. However, uncertainties in physical measurements are a little bit more complicated.
When I measure something to be 7 plus or minus 0.1 feet, what I am saying is that the value of the measured variable is not known for sure. It can be represented by a bell curve centred on 7, with 95% of the area under the curve (a 95% probability) falling between 6.9 and 7.1. The value of the measured variable is much more likely to be 7 than 6.9, and there is also a small chance that the value lies outside of the 6.9 to 7.1 range.
In an interval, there is no probability distribution. It is more like an infinite list of numbers.
In practice, interval arithmetic is seldom used for uncertainty analysis for scientific experiments.
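As a minimal sketch of the distinction (assuming, purely for illustration, that the ±0.1 is a 95% bound on a normal error):

    import math

    lo, hi = 6.9, 7.1                  # interval view of one 7 +/- 0.1 ft beam
    sum_interval = (lo + lo, hi + hi)  # worst-case bounds on the sum of two beams

    sigma = 0.1 / 1.96                 # Gaussian view: 95% of the mass within +/- 0.1
    sigma_sum = math.sqrt(2) * sigma   # independent errors add in quadrature
    gauss_95 = (14 - 1.96 * sigma_sum, 14 + 1.96 * sigma_sum)

    print(sum_interval)  # (13.8, 14.2): guaranteed bounds, no probabilities attached
    print(gauss_95)      # ~(13.86, 14.14): narrower, but only holds with 95% probability

The interval result is a hard bound; the Gaussian result is tighter but probabilistic, which is why the two approaches answer different questions.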
To close the loop:
The connection is called an alpha-cut.
In the Gaussian case it would cut the normal distribution horizontally at a defined height. The height is defined by the sigma or confidence you want to reflect.
The length of the cut, i.e. the interval on the support where the density lies above that height, is how you connect probability and intervals.
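A back-of-the-envelope sketch of that idea (the normal-pdf inversion is standard algebra, but the function names are mine):

    import math

    def normal_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def alpha_cut(mu, sigma, k):
        # cut the pdf horizontally at its height k sigmas from the mean,
        # then solve pdf(x) >= height for x to get the interval on the support
        height = normal_pdf(mu + k * sigma, mu, sigma)
        half_width = sigma * math.sqrt(-2.0 * math.log(height * sigma * math.sqrt(2 * math.pi)))
        return (mu - half_width, mu + half_width)

    print(alpha_cut(7.0, 0.1 / 1.96, 1.96))  # roughly (6.9, 7.1)

For a Gaussian this just recovers [mu - k*sigma, mu + k*sigma], which is the point: the cut height and the confidence level are two descriptions of the same interval.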
In gvar everything is normally distributed by default, but you can add other distributions via add_distribution; log-normal is provided, for example. You can also specify the covariance matrix between a set of values, which will be correctly propagated.
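From memory (so treat the exact calls as approximate rather than gospel), basic usage looks something like this:

    import gvar as gv

    x = gv.gvar(7.0, 0.1)   # 7.0 +/- 0.1, Gaussian by default
    y = gv.gvar(7.0, 0.1)
    print(x + y)            # mean and standard deviation propagated automatically

    # a correlated pair built from a mean vector and a covariance matrix
    a, b = gv.gvar([1.0, 2.0], [[0.01, 0.005], [0.005, 0.04]])
    print(gv.evalcov([a + b, a - b]))  # correlations carried through the arithmetic

The nice part is that the covariance bookkeeping is invisible: you write ordinary arithmetic and the correlations come along for the ride.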
It's hard for me to understand the goal of this comment. Nothing in it is incorrect, but it's also not really a meaningful critique of or response to the article. The article did not attempt to describe "uncertainty analysis for scientific experiments". It plainly began by describing interval arithmetic and ended by justifying it as meaningful in two contexts: IEEE floating-point numbers and machining tolerances. Neither is an experimental domain, and both have a meaningful built-in notion of interval that would not be served by treating intervals as Gaussians.
Gaussian distributions are a horrible choice for representing measurement uncertainty. If the tool is properly calibrated, 100% of the probability mass will be within (6.9, 7.1). A normal distribution would have probability mass in negative numbers!
There's also no motivation for choosing a normal distribution here - why would we expect the error to be normal?
What I hear is that similar techniques should/could be used by explicitly modeling it not as an interval (6.9, 7.1) but as a Gaussian distribution of 7±0.1, and a computer can do the arithmetic to see what the final distribution is after a set of calculations.
You could use intervals to bound the outputs of a function, given that its domain is an interval, using the same arithmetic.
That would actually be useful in programming for proving what outputs a function can produce for known input ranges, rather than using unit tests with fixed numerical values (or random values).
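A toy sketch of that idea in Python (the Interval type and names are invented for illustration, and it ignores the classic dependency problem that makes naive interval bounds loose):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    def f(x):
        # the same code runs on plain floats or on Intervals
        return x * x + x

    print(f(Interval(-1.0, 2.0)))  # a guaranteed (though not always tight) bound on f over [-1, 2]

The true range of x*x + x on [-1, 2] is [-0.25, 6]; the computed Interval(-3, 6) contains it but over-approximates, which is the usual trade-off with this kind of analysis.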
There is no reason to assume a normal distribution. If you have a tool that measures to a precision of 2 decimal places, you have no information about what the distribution of the third decimal place might be.
This is correct, which is why intervals don't choose an interpretation of the region of uncertainty.
If you do have reason to interpret the uncertainty as normally distributed, you can use that interpretation to narrow operations on two intervals based on your acceptable probability of being wrong.
But if the interval might represent, for example, an unknown but systematic bias, then this would be a mistake. You'd want to use other methods to determine that bias if you can, and correct for it.
> There is no reason to assume a normal distribution.
There absolutely is with sane assumptions about how any useful measurement tool works. Gaussian distributions are going to approximate the actual distribution for any tool that's actually useful, with very few exceptions.
When fabricating, we'll often aim for the high end of a spec so you have material remaining to make adjustments. Most of our measurements actually follow double-tail or exponential distributions.
I'm sorry, but if I give you a measuring tape that goes to 2 decimal places and you measure a piece of wood at 7.23 cm, when you get a more precise tape you have no information at all about what the third decimal place will turn out to be. It could be anywhere between 7.225 and 7.235; there is no expectation that it should be nearer to the centre. All true lengths between those two points will return the same 7.23 measurement, and none is more likely than any other given what you know.
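A quick toy simulation of that point (not a metrology argument, just a sanity check of the rounding logic; the uniform draw is an assumption of the toy):

    import random
    from collections import Counter

    digits = Counter()
    for _ in range(200_000):
        true_length = random.uniform(7.0, 7.5)          # whatever the true length happens to be
        if round(true_length, 2) == 7.23:               # the tape reads 7.23
            digits[int(true_length * 1000) % 10] += 1   # third decimal place of the true length

    print(sorted(digits.items()))  # counts come out roughly flat across all ten digits

With no prior reason to prefer any value, every third digit shows up about equally often given the 7.23 reading: the measurement pins down an interval, not a distribution inside it.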
I wonder if open source software can play a role in this. Maybe we can have an open source algorithm for determining credit ratings and private companies only provide a secure database of ratings.
It would also offer the lay person insight into how exactly their credit rating is determined. They could see what is causing their rating to be lower than desired and take appropriate action, instead of watching a random YouTube video titled "5 ways to quickly improve your credit score".
Presumably the reason they have a lower score than desired is because they already failed to do this in one form or another.
> "5 ways to quickly improve your credit score".
Have no inquiries. Have no forced account closures or writeoffs. Have as much total open credit as you can without triggering the first two. Have at least one secured or unsecured installment loan open and then paid off every 5 years. Always pay your bills on time.
It's not quick, I suppose, but the recipe is already pretty well known.
Thinking of my last loan application, they ask you if you own or rent and how long you've been at that address. Also current job and income and how long there.
At least in the UK this is done as well as pulling your credit score; traditionally it's not a factor feeding into the score itself. Banks check both your history of paying off debts and your ability to continue doing so. It doesn't matter how good your score is; if you ask for an unsecured loan that's 20x your annual income over the next 5 years, it's going to get refused.
This is all somewhat complicated by recent products from credit agencies, which make use of the Open Banking standard to pull data direct from your bank accounts and use that data to feed into credit scoring as well.
Pretty much, yeah. A credit score is a descriptor of the risk of financial loss when lending the individual concerned some money.
So the only real way to grow and keep the score high is:
* Pay your credit card and loan statements when they are due (late payments imply you don't have money).
* Keep credit inquiries to the minimum necessary (an inquiry means you're asking for a loan, implying you don't have money).
* Don't max out your credit limits if possible (you're taking and maxing out lines of credit, implying you don't have money).
* Keep old credit cards open even if you don't use them, if it's practical (a longstanding open line of credit implies you have money).
* Keep doing all of the above for many years (a good credit score implies you have money and will pay back debts incurred).
There's no magic or mystery to it, it just takes a lot of time to grow and keep high because you're building and maintaining trust with banks. You know that old saying? Trust is built over years but destroyed in a second? Yeah.
More than half of the above don't imply that you don't have money. Lack of money is only one of the possible reasons for those situations.
* an inquiry means you're asking for a loan, implying you don't have money
Entities with tons of money seek loans all the time for liquidity and risk mitigation.
* you're taking and maxing out lines of credit, implying you don't have money
Nope, a lack of understanding of how CC scoring works (scoring designed to keep you in the credit mill) can lead to maxing out while being perfectly comfortable financially.
* Keep old credit cards open even if you don't use them, if it's practical (a longstanding open line of credit implies you have money).
What in tarnation.
This entire charade is a grotesque dance of mad clowns.
> can lead to maxing out while being perfectly comfortable financially.
Total credit usage can have an impact. So if all your lines of credit are at maximum, this is a negative signal. If one or more is, but your total utilization is 75% or less, it should have little to no impact.
This is why the installment loan part is useful. It starts at maximum balance and you immediately pay that down. It's not as strong of a positive signal until you hit payoff but it's a pretty massive one the day you do.
>Lack of money is only one of the possible reasons for those situations.
As far as a lender is concerned, if you don't pay back your debts you might as well not have money even if you actually do.
>Entities with tons of money seek loans all the time for liquidity and risk mitigation.
And each and every one of those inquiries will lower your credit score, because you're taking on more debt. Do you have money? Will you pay the debt back? The more inquiries there are (the more you ask for loans) in a given span of time, the less likely it is you have money and will pay debts back.
>Nope, lack of understanding how CC scoring works (scoring designed to keep you in the credit mill) can lead to maxing out while being perfectly comfortable financially.
Banks hate seeing lines of credit maxed out. Ask any banker worth his salt and they will all tell you the same.
If it wasn't obvious already, banks don't like lending money. That might sound strange, but for a bank (the lender) a loan is an investment and investments are risks. The more loans (debt) someone has, the more risk they are carrying and thus their credit score will reflect that.
>What in tarnation.
A line of credit in good standing that has been open for a long time means you've been making your payments properly, meaning the risk of lending money to you is lower than someone who does not have a line of credit as old. Thus, your credit score will be higher.
The age of your credit is usually determined by your oldest open line(s) of credit. Closing an old line of credit means it will eventually fall off your credit report and stop being reflected in your credit score, which will fall to reflect the new and younger age of your credit.
Again: Everything about credit score is solely about the risk you might pose to a lender. Anything that increases that risk will lower the score, and vice versa, even if it's just an implication.
Not quite. If you're not lending money someone else has deposited, you're not a bank. Banks have to lend money. Problems arise when they lend too much or lend badly.
If we want to spitball, we don't really need banks. In the age of computers, the central bank could take on their ledger function without breaking a sweat. It could then contract out the lending and deposit functions separately.
The deposit function is trivial. All deposit institutions would be 100% trustworthy. They'd just basically be ATMs for your account at the Fed.
The lending function is slightly hairier. The Fed would set risk parameters and performance-based revenue sharing. The lenders would have one client to please, and would be barred from many of the shenanigans they do today. However, they'd have a rent-seeking incentive.
Inquiries impact your credit score negatively not because you took on more credit, but because you (most likely) didn't.
The correlation is that if you get rejected by lender A, then try a new application at lender B, and again at lender C, you will have a lot more inquiries than someone who got credit extended on the first try. FICO doesn't know if you actually got rejected, or if you were just checking rates, nor does it know the reason for rejecting you (maybe they don't even serve your area but their funnel doesn't filter on that early enough); it just knows you were checked.
This particular one is a bit iffy; my bank's UI essentially tricked me into a credit check. Then again, all of them are quite iffy and based on the few data points that FICO has access to, which omit many of the things you'd look at during any kind of manual underwriting.
It's not about having/not having money, it's about propensity to pay. Obviously, people need money to pay, but someone can have all the money in the world and still default on loans by not paying.
This is an illogical edge case. Technically you can sell your Apple shares for $5/share, but no broker even has the functionality to let you voluntarily take a loss like that.
When someone with money defaults on a loan, it's usually because a.) they don't actually have money or b.) the loan is for their company, not them.
All that to say that "having money" is functionally equivalent to "propensity to pay".
> Keep credit inquiries to the minimum necessary (an inquiry means you're asking for a loan, implying you don't have money).
This one should be outright banned, as it's effectively anticompetitive - there are thousands of banks on the planet, and it should be anyone's right to make an inquiry at each and every single bank to make sure one gets the best rates.
Yes, but the reason they say to do all the inquiries around the same time is because many inquiries spread out over time are a negative signal. It can mean that banks are turning a borrower down for whatever reason, so the borrower is now looking for a bank that will take on the risk.
Not only do banks and credit agencies provide a "recipe" for improving your score, most do so free of charge (for existing customers).
For example, I know my score swings by +/-30 points/month. I'm fairly confident that is due to the balance on my CCs varying when the score is calculated (there is nothing else about my financial situation changing - same house for a decade, same car loan for 5 years, no new credit lines/loans, etc). But, I pay the cards off every month, and the score always rebounds.
> I'm fairly confident that is due to the balance on my CCs varying when the score is calculated
Yes. It feels wrong that the current balance of credit cards is considered debt. It should only be considered debt once (if) you start paying interest on it. So if you pay it off fully every month, it shouldn't be seen as debt.
But whatever, they consider it debt, so it can make the credit score swing up and down a lot. I see this every late summer when I pay my child's school bill for the upcoming year on a credit card. It is a very large amount, so suddenly my credit utilization goes up and my credit score drops around ~70 points. Then a month later I pay it off and the credit score goes back up the same ~70 points.
Yeah, I was surprised at how much it swings, but it's high enough it shouldn't matter (and easy enough to not use the cards for a month, let it rebound, then borrow whatever I need to borrow).
The report from my bank never says why. It does list factors that contribute to my score, but they're all "good" (low usage as a % of available credit, all payments on time, etc.), and they never change.
I'm signed up for all the credit bureaus free accounts so I can freeze/unfreeze my credit. They send out reports monthly, along with one of my CCs. All of them have the reason. And yeah, the score is ~800 so it doesn't really matter. Still interesting to see how it moves with relatively small balance changes.
Because changing the definition would require getting two diametrically opposed parties to cooperate, one of whom weaponizes the debt ceiling whenever it wants to do something inhumane, racist, or just plain stupid.
And the other is run by an oligarchy that makes its money shipping US$ off to the rest of the world for its own enrichment while pretending to care for the poor and the workers, and also engaging in racism by exporting abortion culture to the developing world so that there can be fewer brown people.
I hope India follows suit.