
This is the exact reason why I keep telling myself to slow down all the time: use pen and paper, draft the problems and solutions before getting my hands dirty, write them down and write them bold. One tip I learned from my current CEO: whenever you feel you're in a heated situation, get yourself a glass of water or coffee, drink it slowly, enjoy the moment, and do nothing. That quiet moment helps you slow things down and recharge your mental energy to do better.


I don't think it's fair to separate the code from the software itself. I personally don't feel proud of a piece of software if the code is crappy, i.e. spaghetti code, (almost) unreadable, unmaintainable, ...

I think what the author wanted to emphasize is the importance of code clarity, as many of us developers don't actually care. Bad code builds up tech debt that somebody has to pay off at some point in the future.


Perhaps a bit off-topic, but a serious question: I don't understand what it means when someone says language A is next to bare metal. What does a language have to do with bare metal or performance? Isn't performance a property of the compiler/interpreter?


If you pry open your computer and look through a microscope, you won't see any closures or lambdas. You won't see any private class members, much less any kind of inheritance or dynamic types. You won't see any microchips dedicated to try/catch/throw, you certainly won't see an 'exec' circuit. You won't see any kind of DOM, shadow or not.


I don't think it has anything to do with the relationship between the language and its (compiler/interpreter) performance. Besides, a subset of a large number of programming languages could be considered bare metal, right? If so, why mention bare metal at all?


Most things you do in a language like C have direct counterparts in the physical machine. If you get into stuff like game hacking, you can literally "see" these things through whatever RAM-hacking tool you're using.

Dereference a pointer? That translates to a tiny fragment of machine code for "dereference pointer".

Create a lambda? That translates to a big blob of machine code for "set up this bunch of paperwork that will be used to painstakingly emulate a lambda".


And what does that have to do with C++ compared to any other language X? The bare-metal characteristics could be carved out as a subset of any non-safe, non-garbage-collected language.


You're right. I think someone else already pointed out asm.js as a good example.

Of course, it won't happen magically: someone has to go to the effort of defining the subset and building the compiler for it.

You can't expect a general-purpose compiler to automatically be as optimal as possible just because a program happens to use the subset. For example, the program might be compiled into something like a Java .class file meant to be imported by a less conservative calling program. That would be incompatible with baking those bare-metal optimizations into the .class file.


It's about how much the language intrinsically prevents the compiler from optimising the output. Think JavaScript versus asm.js: same language, one is just a subset of the other, yet completely different performance outcomes for the JIT compiler.


Languages like C and C++ compile directly down to machine code, and the resulting machine code then runs on the bare metal without any further translation.

Interpreted languages such as Python or Ruby (and languages that run on a VM, like Java) go through an interpretation step _every time_ they run, translating the code down to machine operations.

This gives compiled languages an inherent advantage in runtime performance: programs written in them don't pay the overhead of translating to machine code at runtime. This is typically why people describe "next to bare metal" languages as having better performance.
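
You can see that translation layer directly in CPython with the `dis` module (a sketch; exact opcode names vary a bit across CPython versions):

```python
import dis

def add(a, b):
    return a + b

# Even a one-line function is compiled to bytecode that the CPython VM
# dispatches instruction by instruction at runtime, rather than to
# machine code that runs on the bare metal:
dis.dis(add)

# Creating a function at runtime involves extra "paperwork" as well:
# the compiler emits a MAKE_FUNCTION step to build the function object.
code = compile("f = lambda x: x + 1", "<example>", "exec")
ops = [ins.opname for ins in dis.get_instructions(code)]
print("MAKE_FUNCTION" in ops)  # True on CPython 3.x
```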


That's technically true, but I think it has more to do with the programming model. 

In C and (sort of) C++ you can deal with actual bytes in actual memory locations. You can find the exact memory address of a data structure, and you can read or write directly to that location.

You can even prod the compiler to suggest that it uses real hardware registers for variables.

You can also control how memory is allocated and released.
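
For contrast, Python only exposes that "actual bytes at actual addresses" model through an escape hatch like `ctypes`; a small sketch:

```python
import ctypes

# In C, taking a variable's address and reading/writing through it is
# the default model. In Python you need ctypes to get at it:
x = ctypes.c_int(42)
addr = ctypes.addressof(x)               # a real memory address
alias = ctypes.c_int.from_address(addr)  # a view onto the same bytes

alias.value = 99   # write through the address...
print(x.value)     # 99 -- ...and the original variable sees it
```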

In most languages an array is just an array. You usually don't care where it is in memory.

In functional languages you concentrate more on designing symbolic operations, and the fact that there's real memory under there somewhere is almost irrelevant.

But C/C++ are only bare-metal-ish. In assembler you not only work directly with memory, you also control program flow directly: you can jump/branch to any address in memory, and sometimes you can even overwrite your own code.

C/C++ won't let you do that.

As a former assembler guy, I find that C++ is the worst of both worlds. You get the cognitive overhead of dealing with memory management, pointer arithmetic, and iterators, but the options for not dealing with them when you don't want to are limited - so it's hard to concentrate on the functional level, and you do a lot of the work the compiler/interpreter would usually do, but without the "do whatever you want" freedom of assembler.


That said, it's about the implementation, not the language itself. The phrase is technically incorrect, right? One could write a compiler that compiles Python or Ruby down to machine language without the interpretation overhead, and the result wouldn't necessarily be faster or slower than the original interpreters, or than C/C++ compilers.


If the language defers explicitness until runtime, then there's nothing a compiler or interpreter can do to get rid of the overhead.

A JIT can see it at runtime, but there's still overhead to handle the detection and the bail-out situations.
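
A toy illustration of that deferred explicitness in Python: even `+` is only resolved once the argument types are seen at runtime, so the same bytecode has to handle every case below.

```python
def add(a, b):
    # The meaning of '+' isn't fixed at compile time: the interpreter
    # checks the operand types on every call and dispatches accordingly.
    return a + b

print(add(1, 2))        # 3
print(add("ab", "cd"))  # abcd
print(add([1], [2]))    # [1, 2]
```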


I agree on the overhead. But there is information available only at run time that can be (and actually has been, in the case of interpreters like PicoLisp) used for optimizations unavailable to compiled code. So it isn't necessarily faster or slower.

Edited: corrected my bad grammar.


You might want to combine Geiser with Quack[1] for best results with Scheme. Also, I think the fairly new racket-mode[2] written by Greg[3] is worth a look. Geiser + Quack are rather old and target Scheme in general, while racket-mode focuses on and is tightly integrated with Racket.

[1] http://www.neilvandyke.org/quack/ [2] https://github.com/greghendershott/racket-mode [3] http://www.greghendershott.com/


racket-mode is good in that it at least indents Racket code properly. It implements its own integration with the REPL, IIRC, which makes working with Geiser at the same time not ideal - but still manageable, just a bit of tweaking needed. Disclaimer: I may be remembering wrong, or it may have changed since I last configured Racket support in Emacs, so don't get discouraged - go check racket-mode anyway :)


I've been following Hy for a while, and it's sad that macros are still the subject of ongoing discussion with no definite plan. It's one of the reasons I prefer Scheme-inspired Lisps to Common Lisp. Hy just looks Lispy, but it's nowhere near a Lisp without a full macro system.


Hy has macros, and they work fairly well. I'm using plenty of macros in Adderall; it would not be possible without them. There are a few rough edges with Hy macros, mostly due to the underlying Python, but I at least can live with that, and they don't hinder my work.


My bad, you're right. I didn't know the documentation section for macros was added recently. It looks pretty Common Lispy. Reader macros look nice too, though they're still at an early stage.


How do you ensure the correctness of the implementations? Shouldn't there be tests?


Precisely. This is something I have plans to implement, but obviously I can't spend 24/7 managing this repo, man. I'm doing other things right now (work); that's why I took so long to even answer the initial comments in this thread.


Tests and behavior should be defined before the code.

I recommend you comment out all the code, write the tests, and then un-comment the code line by line.

This not only tests the code but also tests the tests, and ensures you are covering the edge cases the code already contains.
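
As a sketch of what tests-first might look like here, assuming the repo has something like an `insertion_sort` (the name and signature are made up for illustration):

```python
import unittest

def insertion_sort(xs):
    """Return a new sorted list; the (hypothetical) code under test."""
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

class TestInsertionSort(unittest.TestCase):
    # Written before (or while un-commenting) the implementation,
    # these pin down the behavior and the edge cases:
    def test_empty(self):
        self.assertEqual(insertion_sort([]), [])

    def test_single(self):
        self.assertEqual(insertion_sort([1]), [1])

    def test_reverse(self):
        self.assertEqual(insertion_sort([3, 2, 1]), [1, 2, 3])

    def test_duplicates(self):
        self.assertEqual(insertion_sort([2, 1, 2]), [1, 2, 2])
```

Run with `python -m unittest <file>`; a test that fails while the code is still commented out is a test that actually exercises something.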


Update: Unit tests are now a convention.


The code is the test. If the code runs without compilation errors, then use it for healthcare.gov </sarcasm>


You forgot to open the sarcasm tag :)


That's part of the hidden DOM. Implicit. ;-)


Sorry, but explicit > implicit for markup.

Btw, how's that repo going? :)


Sorry, haven't started it yet ;-)


Perhaps it should be opt-out?


So how about something like "Sign in to ABC with Persona", or "Sign in to ABC with Persona using your email" (more specific)?


It's the same as "Sign in with Google", "Connect with Facebook", "Sign in with Twitter". It's implied.


Well, I personally don't think they do imply the same thing from a user's perspective.

* Google, Facebook, Twitter, ... have become such well-known brands compared to the new Persona. Normal users can quickly get a sense of what "Sign in with Google/Facebook/Twitter/..." means, but not what "Sign in with Persona" means.

* Persona is not a social network, while the other brands mentioned are, or at least provide a sense of one. A user might perceive "signing in with <your-favourite-social-network-provider>" as an act of making the site part of the network; with Persona it's totally different.


Vim counterpart, anyone? :-) http://vimcasts.org/

Being a fan of both, I think a lot of cool things from Vim could be brought to the Emacs workflow.


TRAMP & eshell help you achieve 100% of your local Emacs productivity on remote machines, with a 100% smaller footprint (you actually don't need to install anything on the remote machine) :-)


Emacs is there too, of course; ed is just for the guys who complain that they want vi :-)

