Am I reading this right: there are interns and engineers running PHP in a debugger on production data? How else would he have been able to memorialize his own account and trigger the email?
That is a good point; how did no one else pick up on that?! It reminds me of engineers at a place I used to work, who would run a £0.01 transaction on their own credit card to check that changes to the charging system were still working...
I've seen talks where engineers say this is the case. I think the one in particular was from @Scale. They've had the occasional issue with an intern dropping a table in production, but they said they haven't really run into problems from granting access to production data.
Crawling a URL that responds with a 302 and a relative URI reference in the Location header fails. E.g. if http://www.example.com sends a 302 with "Location: /en/".
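If it helps, resolving the relative reference against the URL that returned the 302 before following it works around this. A minimal sketch in PHP (simplified: it ignores query strings, dot segments, and protocol-relative URLs, and resolveRedirect is just a name I made up):

    <?php
    // Resolve a relative Location header value against the request URL.
    // Simplified: no query strings, no "../" handling, no "//host" URLs.
    function resolveRedirect(string $base, string $location): string
    {
        if (parse_url($location, PHP_URL_SCHEME) !== null) {
            return $location; // already absolute
        }
        $p = parse_url($base);
        $origin = $p['scheme'] . '://' . $p['host']
                . (isset($p['port']) ? ':' . $p['port'] : '');
        if ($location !== '' && $location[0] === '/') {
            return $origin . $location; // absolute path reference
        }
        // Relative path: resolve against the base path's directory.
        $dir = rtrim(dirname($p['path'] ?? '/'), '/');
        return $origin . $dir . '/' . $location;
    }

    // http://www.example.com + "/en/" => "http://www.example.com/en/"
    echo resolveRedirect('http://www.example.com', '/en/');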
Probably due to the garbage SQL the framework generated for you, or the write-through cache it failed to provide.
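The classic case being the N+1 query pattern, where lazy loading looks innocent in the code but hammers the database (hypothetical ORM calls, just to show the shape):

    <?php
    // Hypothetical ORM API, shown only to illustrate the N+1 shape:
    // one query for the posts...
    $posts = $orm->fetchAll('SELECT * FROM posts LIMIT 50');
    foreach ($posts as $post) {
        // ...then one more round trip per post for its author.
        // 51 queries where a single JOIN would have done.
        $authors[] = $orm->fetchOne(
            'SELECT * FROM users WHERE id = ?', [$post['user_id']]
        );
    }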
Yes, it's true that a simple request router probably won't add a lot of overhead, whether it comes from a framework or is something you rolled yourself (there are only so many ways to parse a URL). The real overhead comes from the other stuff the framework does, and chances are the heavy lifting will be in an ORM and a couple of helper functions with badly written regular expressions.
Disclaimer: I haven't looked at the code for this framework, but that's pretty common among all of them.
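For a sense of scale, this is roughly all a bare-bones router has to do, and it's hard to see it dominating a request (a toy sketch, not any particular framework's API):

    <?php
    // Toy router: strip the query string, split the path, look up a
    // handler. This part is cheap; ORMs and helpers are not.
    $routes = [
        ''      => fn (array $args) => 'home',
        'users' => fn (array $args) => 'user: ' . ($args[0] ?? 'all'),
    ];

    $path  = parse_url($_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH);
    $parts = array_values(array_filter(explode('/', $path)));

    $handler = $routes[$parts[0] ?? ''] ?? fn (array $args) => '404';
    echo $handler(array_slice($parts, 1));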
Yeah, I don't really see the point for your average web app request. If anything is intensive enough for this framework to make a difference, it is likely better done in the background rather than as part of a user request.
Granted, it may well be good for common functions required in intensive background processes.
You are using the term bottleneck incorrectly, and it's a very common error.
A bottleneck is something you get when you have multiple concurrent actions and one of them is slowing down the actions of the others. No matter how fast the other actions get, the slowest one is the bottleneck.
Framework overhead is not a bottleneck at all, it's just overhead. If your framework takes 10ms to load, it doesn't matter whether your database takes 200ms or 2ms to return its result; you still pay the extra 10ms for the framework loading. This is because the 10ms is not concurrent with anything: you just pay it at the start regardless.
Now you can argue that the 10ms is not significant compared to the 200ms you're spending in your database, but that is an entirely different kind of argument.
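To put numbers on it, using the toy figures above:

    <?php
    // Fixed framework overhead is paid on every request, regardless of
    // how fast the database gets; it only shrinks as a share of the total.
    $frameworkMs = 10;
    foreach ([200, 2] as $dbMs) {
        $total = $frameworkMs + $dbMs; // sequential, not concurrent
        printf("db=%3dms  total=%3dms  framework share=%4.1f%%\n",
               $dbMs, $total, 100 * $frameworkMs / $total);
    }
    // db=200ms -> framework is ~4.8% of the request
    // db=  2ms -> framework is ~83.3% of the request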
No, you still have a problem in your database. Running a database smoothly with large datasets is the main performance problem for web applications. However fast you get with smart DB architecture, DB calls still take the bulk of the time compared to code execution time.
Sorry, but that simply isn't true for plenty of applications, particularly if they happen to be built on relatively slow platforms like PHP, Ruby, Python, etc. That's even before we get to the bulky frameworks built on top of those!
If your data store structure and queries (SQL or otherwise) are designed to perform and scale, you can get in and out very quickly (milliseconds or less), after which you'll spend most of your time turning the data into something useful.
(That's even before we talk about caching so you don't have to talk to the data store at all...)
Facebook didn't write HipHop (and reduce CPU utilisation by 50%) because their database queries were slow!
I think the point is that the problem isn't with the database but with the code not caching results. This isn't some new problem. The database shouldn't be the problem, because you should be caching results.
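Even something as simple as cache-aside in front of the hot queries goes a long way. A sketch assuming the APCu extension is available (Memcached/Redis work the same way; invalidation and stampedes left out):

    <?php
    // Cache-aside: check the cache first, fall back to the database,
    // then populate the cache for the next request.
    function cachedQuery(PDO $db, string $sql, int $ttl = 60): array
    {
        $key  = 'q:' . md5($sql);
        $rows = apcu_fetch($key, $hit);
        if ($hit) {
            return $rows; // served without touching the database
        }
        $rows = $db->query($sql)->fetchAll(PDO::FETCH_ASSOC);
        apcu_store($key, $rows, $ttl); // expires after $ttl seconds
        return $rows;
    }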
If you squint, every database problem is a caching problem. The only reason we store the relations in their up-to-date form is that it's too expensive to continually replay the transaction log from the beginning of time.
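You can see the idea in miniature: the "current" value is just a fold over the log, and materialising it is the optimisation (a toy example):

    <?php
    // The stored balance is a cache of this computation: a fold over
    // the transaction log from the beginning of time.
    $log = [
        ['op' => 'deposit',  'amount' => 100],
        ['op' => 'withdraw', 'amount' => 30],
        ['op' => 'deposit',  'amount' => 5],
    ];

    $balance = array_reduce(
        $log,
        fn (int $state, array $e): int => $e['op'] === 'deposit'
            ? $state + $e['amount']
            : $state - $e['amount'],
        0
    );

    echo $balance; // 75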