
> It may have to do with how regexes of any complexity look like machine code to me.

Regexes get a bad rap because programmers who write otherwise maintainable code throw code hygiene out the window when writing long regexes. Your host language is much more complicated than the regex language, but writing shitty unreadable regex is, for whatever reason, acceptable. Some unfortunate and unnecessary limitations of typical regex libraries make the problem worse.

1. Regexes are typically written as one-line strings instead of as structured code. When writing C/Java, programmers understand that they should put each element of a sequence on a separate line and use indentation to visually signal branching and loops. But for some reason, when writing character-processing programs that also have sequences, branching (|), and loops, programmers almost always golf it and put the whole complex program on one line.

If you're writing a regex longer than 5 or 10 atoms, place each sequence on a separate line and use newlines+indentation to visually offset disjunctive choices (|) and loops (star).

Rule of thumb: you should be able to roughly sketch the shape of the finite automaton by crossing your eyes and eyeballing the shape of the regex, just like you should be able to roughly sketch the control flow of a program by crossing your eyes and eyeballing the shape of the code.

2. Character group names are too terse (e.g., \s instead of \whitespace and \d instead of \digit), probably because of #1. Also, very few people give names to long subexpressions -- the regex equivalent of factoring code out into functions with well-chosen names.

3. Gotos/try...catch (i.e., backtracking) are not used judiciously and aren't well-documented/tested. I often see backtracking or confusing mixes of lazy and greedy matching instead of just writing out a slightly longer disjunction.

4. Regexes are often used for languages that aren't even almost regular. A bit of backtracking is OK (gotos are sometimes OK), but if there's a lot of backtracking then you need to use a different class of languages/machines.

5. There's no way to embed non-regular matchers into a regex.

Due to the combination of 1-5, matching an email address with an optional recipient name (so something like "asdf@asdf.com" or "John Smith <asdf@asdf.com>") requires an insane regex that many people implement with lots of backtracking, all stuffed onto a single line.

But something like this would work just fine and is much more readable:

    \emailAddress := ...
    # todo: need to support dashes in names.
    \name := (
        [a-zA-Z]*
        \whitespace?
        [a-zA-Z]*
    )
    # matches asdf@asdf.com or John Smith <asdf@asdf.com>
    (
        \emailAddress
    )
    OR
    (
        \name
        \whitespace? 
        <
        \emailAddress
        >
    )
where e.g. the implementation of \emailAddress could be written as a stand-alone parser in the host language. But even without digging into email addresses, you can already see how this is far more readable than:

    \emailAddress|[a-zA-Z]*\s?[a-zA-Z]*\s?<\emailAddress>
Writing readable regular expressions shouldn't be difficult -- just treat the regex like any other piece of code and allow inter-op with the host language. But few people/libraries put in the effort, and for whatever reason golfing regexes in production code is considered acceptable even in orgs where you'd be fired for code golfing in the host programming language.
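For what it's worth, Python's re module already gets part of the way there with re.VERBOSE, which ignores insignificant whitespace and allows comments, so you can lay a regex out like code and compose it from named pieces. A rough sketch of the name/address matcher above (the simplified EMAIL pattern here is a placeholder, not a real address grammar):

```python
import re

# Placeholder address pattern -- a real one would be much longer, or would
# delegate to a stand-alone parser as suggested above.
EMAIL = r"[\w.+-]+@[\w-]+\.[\w.-]+"

# re.VERBOSE ignores whitespace and '#' comments, so the alternation and
# loops are visible at a glance, and subexpressions get real names.
RECIPIENT = re.compile(rf"""
    (?P<email>{EMAIL})          # bare address: asdf@asdf.com
    |                           # OR
    (?P<name>
        [a-zA-Z]+
        (?:\s[a-zA-Z]+)*        # further words in the name
    )
    \s*
    <
    (?P<bracketed>{EMAIL})      # John Smith <asdf@asdf.com>
    >
""", re.VERBOSE)

m = RECIPIENT.fullmatch("John Smith <asdf@asdf.com>")
print(m.group("name"), m.group("bracketed"))
```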


Lua's Lpeg module (http://www.inf.puc-rio.br/~roberto/lpeg/) is probably what you are after:

     lower = lpeg.R("az")
     upper = lpeg.R("AZ")
     letter = lower + upper
Personally, though, I prefer the terseness of regular expressions.


Python has another somewhat reasonable solution. Either of them, combined with good programming practice, constitutes a reasonable approach.

> Personally I prefer though the terseness of regular expressions.

I think there are legitimate use-cases for both.

If you're quickly hacking out a small good-enough parser for something regular or "almost regular", terseness can be great.

However, if you're parsing a large regular language, terseness isn't really a benefit. Perl is on its deathbed for a reason, and overly terse regular expressions should die for a similar reason. Overly clever, write-only coding culture sucks.

But the terseness of regular expressions is basically terrible beyond maybe a few hundred characters. E.g., the following has no place in production code -- you might as well just include a binary:

    (?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
 )+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:
 \r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(
 ?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[
 \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\0
 31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\
 ](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+
 (?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:
 (?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
 |(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)
 ?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\
 r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[
 \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)
 ?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t]
 )*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[
 \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*
 )(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
 )+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)
 *:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+
 |\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r
 \n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:
 \r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t
 ]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031
 ]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](
 ?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?
 :(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?
 :\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?
 :(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?
 [ \t]))*"(?:(?:\r\n)?[ \t])*)*:(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\]
 \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|
 \\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>
 @,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"
 (?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t]
 )*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
 ".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?
 :[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[
 \]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-
 \031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(
 ?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;
 :\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([
 ^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\"
 .\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\
 ]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\
 [\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\
 r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\]
 \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]
 |\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \0
 00-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\
 .|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,
 ;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?
 :[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*
 (?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".
 \[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[
 ^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]
 ]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)(?:,\s*(
 ?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
 ".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(
 ?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[
 \["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t
 ])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t
 ])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?
 :\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|
 \Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:
 [^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\
 ]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)
 ?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["
 ()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)
 ?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>
 @,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[
 \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,
 ;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t]
 )*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
 ".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?
 (?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".
 \[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:
 \r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[
 "()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])
 *))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])
 +|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\
 .(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
 |(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(
 ?:\r\n)?[ \t])*))*)?;\s*)


Thanks for your post -- I learned a lot, and I'll be excited the next time I get a chance to use a regex and put this into practice.


> What is the purpose of having a law requiring approval for post-secondary education?

There are many.

Perhaps the most compelling is a long history of scams. California's regulations might be arduous, but they didn't come out of nowhere.

Another reason is that the customers, almost by definition, don't know how to assess quality.

> It just seems like most of our laws are passed in secret.

This law wasn't passed in secret.


I've always felt US mathematics education is a case study in what happens when "lies for children" goes terribly wrong because the teachers never actually encounter a version of the field that isn't a lie.

My experience volunteering in an AP CS class makes me wonder if something similar is happening in K12 CS at the moment. The students knew by heart the JavaDoc for String.format but struggled with loops. The teacher also struggled with simple nested for loops. Kind of made me want to throw away the damn laptops and just spend some time solving problems on pencil and paper...


The basic question is whether the reinsurance companies are properly capitalized given the homogeneity of risk taken on by the insurance companies they cover.

There can be a lot of modeling homogeneity without lots of unanticipated risk (e.g., natural disasters in disparate geographic regions).


Seriously?

That's the first/only question I ask about vacation policy because it's usually the only detail that isn't included in the first draft of the offer letter. Of course, I also ask before signing the contract.

PTO is a significant part of your total compensation. Not clarifying how accrual works is just irresponsible.

I honestly don't see how this is any different from asking about other time-based things, such as:

- whether your salary is paid every other week or at the end of the month.

- when your signing bonus is paid and how long you have to stay to keep it.

- when your stock units are granted and how long you have to stay to keep them.

- when the company's 401(k) contribution will hit your account.

All of which you should definitely be asking... and preferably before accepting the position.


" Of course, I also ask before signing the contract."

I can't stress enough how important that is. You better know what you're signing.


"How does it accrue?" might be a better way of asking than "Can we start taking it right now?"


It's a warehouse job, not the PR department. Maybe pay a little better if you expect employees with refined diplomatic skills.


I have absolutely no skin in this game.


The worst place I worked treated vacation as if it were granted up front (it expired at the end of the year) but actually accrued it to you over the course of the year.


Anecdotal, but I have a pretty good before/after view on a community that got an Amazon fulfillment center.

As far as I can tell, there hasn't been much impact on wages in either direction in that particular local labor market. Without strong unions, the labor market keeps warehouse wages at "just high enough that you can have a slightly better life than what you could get working in the service industry". There's always excess supply because warehouse work -- although physically demanding -- is relatively accessible.


Students at good programs will check all the technical boxes just by attending required classes. Those schools tend to also have enough project based courses and career prep coaching that the portfolio box is also checked.

The intersection of that and communication skills is rarer, but it happens often enough, especially for people who pick up a second major in the humanities or did a lot of public speaking prior to/during college. Again, not the average case, but not terribly uncommon.


Could you point me to the website of a school which makes you check all the boxes just by attending?


Mine[0] did the vast majority of them.

[0] https://sigarra.up.pt/feup/en/CUR_GERAL.CUR_PLANOS_ESTUDOS_V...


Cool! That looked like an extensive program. I studied at the Norwegian University of Science and Technology, and my partially self-chosen courses covered most of it too.

I think we may differ on what we mean by "should". I don't (really) doubt that it's possible and valuable to learn all of this in university (forgive my use of hyperbole in the original comment). But I don't think it's necessary for most people with a major/master's degree in CS to do so.

The ideal and the necessary should not be confused.


Oh I completely agree with you. Most are "nice-to-haves" but in no way necessary.


Really? They seem equally readable to me.

I can't tell why a sane person would care much one way or the other, other than the way sane people care about vim vs emacs or tabs vs spaces or Kirk vs Picard...


It is possible to write safe and secure software. It's expensive, but it's possible.

Digitizing the grid has enormous upside, to the tune of billions in savings and improved resiliency/response to weather-related outages. If we could do it safely and securely, it'd be a no-brainer.

We're just trading a devil we know for a (preventable) devil we don't.

BTW: digitized grids aren't even necessarily more vulnerable. In a complex system, the increased latency and miscommunication opportunities introduced by human operators are also a potential attack vector...


Digitizing the grid also opens up efficiency gains which we sorely need. Smart water heaters which run when we have daytime solar surplus, etc.


Why not send a price signal separate from the grid, or use AC frequency deviation as a price signal?


The neat thing is that this already basically exists, in the form of voltage.

If you've ever watched your voltage, you'll have noticed that it isn't a perfect 110 or 220. It is often higher or lower. When it is higher, there is a local surplus; when it is lower, there is a high load.

We could do this today. We might not have current pricing, but we do have load vs production information.


> When it is higher, there is a local surplus, when it is lower, there is a high load.

Or perhaps the voltage got too low, and an on-load tap changer in one of the transformers increased the output voltage. Voltage does not necessarily follow the load. AFAIK, the thing generators themselves use as the main feedback signal is not voltage, but frequency; but it's not a useful signal for consumers, given that generators are much stronger at keeping the frequency at its nominal value.


Frequency doesn't change with load.

Load causes the voltage to drop (that's what's happening when a "brown out" is triggered). Some loads cause the current to lead or lag the voltage wave (capacitive vs. inductive loads; most are inductive, particularly heavy-duty equipment). But that isn't changing the frequency, just the phase of the current. This is all tied up in a number referred to as the "power factor" (see https://en.wikipedia.org/wiki/Power_factor ). Essentially, the farther the current is shifted from the voltage, the more of the power plants' work goes into heating grid wires rather than doing something useful.
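To make the relationship concrete, here is the standard power-factor arithmetic for a hypothetical 230 V, 10 A inductive load whose current lags the voltage by 30 degrees (the numbers are illustrative, not from the thread):

```python
import math

V = 230.0                 # RMS voltage (volts)
I = 10.0                  # RMS current (amps)
phi = math.radians(30)    # current lags voltage by 30 degrees (inductive)

apparent_power = V * I                            # volt-amperes (VA)
real_power = apparent_power * math.cos(phi)       # watts doing useful work
reactive_power = apparent_power * math.sin(phi)   # VAR, circulates in the grid

power_factor = real_power / apparent_power        # equals cos(phi), ~0.87 here
```

The reactive component is the part that does no useful work but still heats the wires, which is why utilities add capacitor banks to pull the phases back into sync.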

So, power grids will do 2 things. First, they'll work to keep the current and voltage phase in sync. They do this by adding extra capacitors/inductors.

Second, they work to maintain the voltage of their tie-in to the grid.

Generally speaking, the type of power plant matters as well. Base load plants will simply dump onto the grid at a constant rate (without really caring about what the voltage is) while peaker and load following plants will attempt to vary output relative to their voltage to try and keep the grid voltages stable.

You are correct that the voltage variance can be misleading at the customer level if the transformer is actively adjusting its voltage ratio. I didn't consider that.


This is exactly how it's already done :)

https://en.m.wikipedia.org/wiki/Zellweger_off-peak


> It is possible to write safe and secure software. It's expensive, but it's possible.

And we've done it before (safe, anyway). Is our electric grid as critical as space shuttle software?


Maybe not the space shuttle, but:

NASA’s Mars Climate Orbiter: crashed, or is now inoperable and orbiting the sun, due to a bad numerical conversion.

ESA’s Ariane 5 Flight 501: manual self-destruct triggered after a 64-bit number being truncated to 16 bits caused faults to be thrown.

https://raygun.com/blog/costly-software-errors-history/#

I don’t think we should go back to manual operation as the default (though it should be overridable). Instead, we should be using stricter compilers, better unit and integration tests, and fuzz testing to cover as many edge conditions as possible.


Matt Parker's book Humble Pi is full of math/programming errors like this.


Not to be a pessimist, but just because a software error never caused a catastrophic space shuttle accident doesn't necessarily mean that the software actually was safe and secure.


...and university/company sometimes makes you wipe out the git commit log, so you can't always just use "HEAD at time of submission".

Pro tip: at paper-submission time, md5sum your code and also git tag it in your private repo. When you release the code, if you have to / want to release with a clean history, make the submission-time state your initial commit and then make the current state of the repo your second commit. I've never encountered an institution that won't allow that level of history in the code release, even places that are pretty hardcore about wiping pre-release history.
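The fingerprinting step can even be scripted. A hypothetical helper (snapshot_digest is my name for it, not a standard tool) that fingerprints a whole source tree in a stable order, so you can later prove the released code matches the submission:

```python
import hashlib
from pathlib import Path

def snapshot_digest(root: str) -> str:
    """Combined MD5 over every file under `root`, visited in sorted
    order so the same tree always yields the same digest.
    (In a real repo you'd want to skip .git/ and other untracked files.)"""
    h = hashlib.md5()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())  # file names matter too
            h.update(path.read_bytes())
    return h.hexdigest()
```

Record the resulting digest somewhere durable (your submission notes, an email to yourself) alongside the git tag.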

