
Hiya. That's funny because it's exactly what caused us to write this article. Jeremy and I were working on an automatic differentiation tool and couldn't find any description of the appropriate matrix calculus that explained the steps. Everything just provides the solution without the intervening steps. We decided to write it down so we never have to figure out the notation again. haha


We originally had that generic ML target in mind but figured a DL bent would make it a wee bit more interesting.


I agree that the font should be bigger. I need to learn more CSS to switch between font sizes per platform. The body text is easy, but all of the images were generated from LaTeX at a specific font size. I need to scale the inline equation images as the font size bumps up.


The magic incantation here is probably media queries:

    @media (max-width: 768px) {
        p {
            font-size: 1rem;
        }
    }


We also have to adjust the image sizes for the inline equations. That's what I need to figure out :)


Terence here. Jeremy's role was critical in terms of direction and content for the article. Who better to describe the math needed for deep learning? :)


Glad to see you in this space, Terence! Been a long time since the traveling parser revival and beer tasting festival!


The unwritten corollary of course is that "almost nobody writes commercial compilers." :) Almost all of us do, however, write parsers for data, config files, languages, etc. all the time. I've personally used ANTLR, of course, for all my parsing needs beyond the trivial.


One possible angle for improvement of this technology: use a deep learning net to conjure up a different feature vector than the one I handcrafted from language/grammar expertise. I believe this was your idea. :) Glad to have you on board teaching and doing research!


I'm pretty sure this would work really well.

I've trained a CharCNN on log files, and it generates really good example files. To me that shows that even a comparatively simple model can capture syntax rules, so I'd imagine an LSTM would generate really good feature vectors.


First step to using CodeBuff would be getting an ANTLR grammar for C++ or at least a fuzzy version. It's still a prototype but should do pretty well.


Yep, that's it. Somebody has ported from Java to C# as well. Next step is really to convert to use a Random Forest classifier. I'm stuck elsewhere at the moment.


Heh, that's cool. I tried to do a random phrase generator at one point but it's hard!


If you're referring to human language phrases, I understand that Markov Chains are the usual approach.
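The Markov-chain approach mentioned above can be sketched in a few lines: build a table mapping each word to the words that follow it in a corpus, then random-walk that table to emit a phrase. This is a minimal illustration with a toy corpus, not any particular library's API.

```python
import random
from collections import defaultdict

def build_chain(words, order=1):
    """Map each length-`order` word tuple to the words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=8, seed=None):
    """Random-walk the chain to produce a phrase of at most `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus; a real generator would train on a much larger text.
corpus = "the cat sat on the mat and the cat ran".split()
chain = build_chain(corpus)
phrase = generate(chain, length=6, seed=42)
```

Higher `order` values make the output more grammatical but less novel, since longer contexts match fewer continuations.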


Thanks for the ptr. :) I just wish I had time to rewrite that book in ANTLR 4 (it's in ANTLR 3).


I also loved that book! It is in my list of best books on building DSLs (https://tomassetti.me/domain-specific-languages#books)

