Until now I haven’t had to worry much about the speed of sites made using Haxe / Ufront – none of the sites or apps I’ve made have anywhere near the volume for it to be a problem, and the general performance was fast enough that no one asked questions. But I’m soon going to be part of building the new Haxe website, which will have significant volume.
So I ran some benchmarks using ab (Apache’s benchmarking tool), and wasn’t initially happy with the results. They were okay, but not significantly faster than your average PHP framework. Maybe I would have to look at mod_tora or NodeJS for deployment.
Then I remembered something: a single line of code you can add that vastly increases the speed: neko.Web.cacheModule(main).
Benchmarks
Here is some super dumb sample code:
class Server {
    static var staticInt = 0;

    static function main() {
        #if neko
        neko.Web.cacheModule(main); // comment out to test the difference
        #end
        var localInt = 0;
        trace( ++staticInt );
        trace( ++localInt );
    }
}
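If you want to reproduce this yourself, the compile and serve steps would look something like the following (a sketch – the output file name is my own choice, and I believe nekotools server listens on port 2000 by default, so the ab URL below would need adjusting for that row):

haxe -main Server -neko index.n    # compile the class above to a Neko module
nekotools server                   # serve the current directory for the NekoTools row

For the Apache rows you’d point mod_neko / mod_tora at the same index.n instead.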
And I am testing with this command:
ab -n 1000 -c 20 http://localhost/
Here are my results (in requests/second on my laptop):
- Apache/mod_php (no cache): 161.89
- NekoTools server: 687.49
- Apache/mod_neko (no cache): 1054.70
- Apache/mod_tora (no cache): 745.94
- Apache/mod_neko (cacheModule): 3516.04
- Apache/mod_tora (cacheModule): 2185.30
First up: I assume mod_tora has advantages on sites that use more memory, but for a dummy sample like this it adds more overhead than it’s worth.
Second, and related: I know these tests are almost worthless; we really need to be testing a real app, with file access, template processing and database calls.
Let’s do that, same command, same benchmark parameters:
- Apache/mod_php (no cache): 3.6 (ouch!)
- NekoTools server: 20.11
- Apache/mod_neko (no cache): 48.74
- Apache/mod_tora (no cache): 33.29
- Apache/mod_neko (cacheModule): 351.42
- Apache/mod_tora (cacheModule): 402.76
(Note: PHP has similar caching, using modules like PHP-APC. I’m not experienced in setting these up, however, and I’m happy with the Neko performance I’m seeing, so I won’t investigate further.)
Conclusions:
- The biggest speed-up (in my case) seems to come from cacheModule(), not mod_tora. I believe that once memory usage increases significantly, tora brings advantages in that arena, and so will be faster due to less garbage collection.
- This could be made faster; my app currently has very little optimisation:
- the template system uses Xml, which I assume isn’t very fast.
- a database connection is required for every request
- there is no caching (memcached, redis etc)
- I think I have some terribly inefficient database queries that I’m sure I could optimise
- Ufront targeting Haxe/PHP is not very fast out-of-the-box. I’m sure you could optimise it, but it’s not there yet.
- This is running on my laptop, not a fast server. Then again, my laptop may be faster than a low-end server; I’m not sure.
Usage
So, how does it work?
#if neko neko.Web.cacheModule( main ); #end
The conditional compilation (#if neko and #end) is just there so that you can still compile to other targets without getting errors. The cacheModule function has the following documentation:
Set the main entry point function used to handle requests.
Setting it back to null will disable code caching.
By “entry point”, it usually means the main() function that is called when your code first runs. So when the docs ask for a function to use as the entry point, I just pass main – meaning the static function main() that I am currently in.
I’m unsure of the impact of having multiple “.n” files or a different entry point.
The cache is reset whenever the file timestamp changes: so when you re-compile, or when you load a new “.n” file in place.
If you want to manually disable the cache for some reason, you can use cacheModule(null). I’m not sure what the use case is for this, though… why disable the cache?
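One guess: you might want caching on in production but off while developing. Here is a purely hypothetical sketch – APP_ENV is an environment variable you would define yourself, not something neko.Web provides:

class Server {
    static function main() {
        #if neko
        // Hypothetical: only cache when an environment variable you control says "production".
        if ( Sys.getEnv("APP_ENV") == "production" )
            neko.Web.cacheModule(main);
        else
            neko.Web.cacheModule(null); // explicitly leave caching off while developing
        #end
        // ... handle the request as usual ...
    }
}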
Gotchas (Static variable traps with cacheModule)
The biggest gotcha is that static variables persist in your module. They are initialized just once, which is a big part of the speed increase. Let’s look at the example code I posted before:
class Server {
    static var staticInt = 0;

    static function main() {
        #if neko
        neko.Web.cacheModule(main); // comment out to test the difference
        #end
        var localInt = 0;
        trace( ++staticInt );
        trace( ++localInt );
    }
}
With caching disabled, both trace statements will print “1” every time, because each request starts from a freshly initialized module. With caching enabled, the staticInt variable does not get reset – it initializes at 0 once, and then every single page load continues to increment it, so it goes up and up and up.
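To make that concrete, here is roughly what the two traces print over three consecutive requests (ignoring the file/line prefix that trace adds):

- Caching disabled: staticInt traces 1, 1, 1 and localInt traces 1, 1, 1.
- Caching enabled: staticInt traces 1, 2, 3 and localInt still traces 1, 1, 1.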
What does this mean practically:
- If you want to cache stuff, put it in a static variable (see the sketch after this list). For example:
- Database connections: store them in a static variable and the connection will persist.
- Templates: read them from disk once and store them in a static variable.
- App config, especially if you’re parsing JSON or Xml: put it in a static and it stays cached.
- Things which should be unique to a request, don’t store in a static variable. For example:
- Ufront has a class called NekoSession, which was entirely static methods and variables, and made the assumption that the statics would be reset between requests. Wrong! Having the session cached between different requests (by different users) was a disaster – every time you clicked a link you would find yourself logged in as a different user. Needless to say, we needed to refactor this to not use statics :) To approach it differently, you could use a static var sessions:StringMap<SessionID, SessionData> and actually have it work appropriately, as long as the cache stayed alive.
- Avoid singletons like Server.currentUser, or even User.current – these static variables are most likely going to be cached between requests, leading to unusual results.
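To tie those points together, here is a minimal Haxe sketch of the pattern, compiled for Neko only (so I’ve dropped the conditional compilation). The connection settings, template path and parameter names are all placeholders I’ve invented for illustration:

class CachedServer {
    // Long-lived resources: initialised once per cached module, reused across requests.
    static var db:sys.db.Connection;
    static var template:haxe.Template;

    static function main() {
        neko.Web.cacheModule(main);

        // Only connect / read from disk on the first request (or after a recompile resets the cache).
        if ( db == null )
            db = sys.db.Mysql.connect({
                host: "localhost", port: 3306, user: "app", pass: "secret",
                socket: null, database: "app_db"
            });
        if ( template == null )
            template = new haxe.Template( sys.io.File.getContent("views/page.mtt") );

        // Per-request state stays local (or keyed per session), never in a bare static.
        var params = neko.Web.getParams();
        Sys.print( template.execute({ name: params.get("name") }) );
    }
}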