This is the blog post version of a talk I gave at the Perth Web Accessibility Conference. I also repeated the talk at a “BrownBag” team lunch at Culture Amp, which you can watch here, or you can read the blog-post version below. I’ve got a live example (open source! try it yourself!) at the end of the post.
On the front-end team at Culture Amp, we’ve been working on documenting and demonstrating the way we think about design, with a design system – a style-guide and matching component library for our designers and developers to use to make our app more consistently good looking, and more consistently accessible.
But first, a story.
Here’s a photo of me, my older sister, and younger brother:
My brother and I are both red-green color blind. Most of the time color-blindness isn’t a big deal, and compared to other physical limitations, it doesn’t usually make life difficult in any significant way.
But growing up, my brother Aaron really wanted to be a pilot. Preferably an air-force pilot, like in Top Gun. But for a generation that grew up with every TV show telling us “you can be anything if you try hard enough”, there was a footnote: anything except a pilot. He couldn’t be a pilot, because he was red / green color blind. The air-force won’t even consider recruiting you for that track. They’ll write you off before you’re old enough to join the air cadets.
Why? Because the engineers who designed the cockpits half a century ago made it so that the only way you could tell if something changed from safe to dangerous was if an LED changed from green to red. So people with red-green color-blindness were out, and my brother was told he couldn’t be a pilot.
Now, becoming an air-force pilot is super-competitive, and he might not have made it anyway, but to have your dream crushed at the age of 10, because an engineer built a thing without thinking about the 8% of males who are red/green color blind, is pretty heartbreaking.
Luckily, as web professionals we’ve got a chance to create a digital world that is accessible to more people, and is a more pleasant experience, than much of the real world. We just have to make sure it’s something designers and developers are thinking about, and something they care about.
So, design systems
One of the big lessons we’ve learned in the web industry over the last few years is that if you want your site, product or service to leave a lasting impression, it’s not enough to do something new and shiny and different. What’s important to a lasting impression is consistency: consistency builds trust, and inconsistency leads your users into confusion and frustration and disappointment.
It’s true of your branding, it’s true of your language and tone, it’s true of your information architecture, and it’s especially true of your commitment to creating accessible products and services. For example if your landing page is screen-reader friendly but your product is not, you’re going to leave screen-reader users disappointed. Consistency matters.
But as a company grows, consistency gets harder. It’s easy to have a consistent design when you have a landing page and a contact form. It’s harder when you have a team of 100 people contributing to a complex product.
The Culture Amp team has experienced those growing pains – we’ve grown from 20 employees three years ago to over 200 today, almost half of them contributing to the product – and it’s easy to lose consistency as users navigate from page to page and product to product. The UI built by one team might feel different and act differently to the UI built by another team.
So we started looking into design systems.
Design systems are a great way to bring consistency. By documenting the way we make design decisions, and demonstrating how they work in practice, we can help our whole team come together and make a product that looks and feels consistent – and that consistency is the key to a great experience for our users.
As we codify our design thinking we are lifting the consistency of our app – not just of our branding and visual aesthetics, but of our approach to building an accessible product.
Culture Amp’s approach to color
So we’re a start-up, with three overlapping product offerings built across half a dozen teams. And we want to make that consistent.
One way to do that would be to have a design dictator who approves all decisions about color usage, making sure they’re on-brand and meet the WCAG contrast guidelines. But one of our company values is to “Trust people to make decisions”, and that means trusting the designers and front-end engineers in each team to make the right call when it comes to picking colors for the screens they are in.
How do we let them make the call, but still ensure consistency?
Well, as a group our designers worked together to define the palette they would agree to use. It consists of three primary colors (Coral, Paper and Ink) and six secondary colors (Seedling, Ocean, Lapis, Wisteria, Peach, Yuzu), as well as Stone for our standard background.
Every color on the page should be one of the colors on the palette.
But what about when you need it slightly lighter or slightly darker? When you need more contrast, or want just a slight variation? We allow designers to use a color that is derived from the original palette, but with a tint (white mixed in) or a shade (black mixed in).
We can actually figure these tints and shades out programmatically, using SASS or Javascript:
(The embed here demonstrating programmatically generating colour palettes no longer works, sorry)
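Since the embed is gone, here’s a minimal JavaScript sketch of the idea (the coral value is illustrative, not our actual brand HEX):

function mix(from, to, weight) {
  // Blend each RGB channel linearly from one color towards another.
  return from.map((channel, i) => Math.round(channel + (to[i] - channel) * weight));
}
const tint = (color, weight) => mix(color, [255, 255, 255], weight); // mix in white
const shade = (color, weight) => mix(color, [0, 0, 0], weight); // mix in black

const coral = [255, 107, 107]; // illustrative value only
console.log(tint(coral, 0.2)); // 20% tint: [255, 137, 137]
console.log(shade(coral, 0.2)); // 20% shade: [204, 86, 86]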
(Note: the SASS code is even easier. You can use the lighten() or darken() functions, or the mix() function if you’d prefer to tint or shade with a custom color. All three of these functions are built into SASS.)
So now we have three primary colors and six secondary colors, plus computationally generated tints and shades for each in 10% increments. That’s nine base colors, each with nine tints and nine shades (19 variations apiece), giving 171 color variations which all fit with our brand. Woo!
This range gives enough freedom and variety to meet individual teams’ needs on a page-by-page basis, while still bringing consistency. Designers are free to move within this system and make the right decision for their team.
So what about color contrast?
Currently Culture Amp has committed to complying with WCAG AA standard contrast ratios. This means the contrast ratio between the text color and the background color must be at least 4.5:1 for small text, and at least 3:1 for large text. (If we wanted to go for WCAG AAA standard contrast ratios, those values would be 7:1 and 4.5:1 respectively.)
How do we get the designers and developers on our team thinking about this from the very beginning of their designs? We could audit the designs after the fact, but this would be frustrating for designers who would have to revisit their design and re-do their work. Making people re-do their work is not a way to win friends and advocates for your color contrast cause.
<Note: I had an embed here, that demonstrated auto-generated colour palettes, but it no longer works>
So we can actually check whether our colors will be able to hold white or black text with a sufficient contrast ratio. And because we derive our color values programmatically, we can check if all 171 of our derived color values are accessible with large text or small text, black text or white text, and display all of that information at a glance:
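Here’s a sketch of the underlying check in JavaScript, following the WCAG 2.0 formulas for relative luminance and contrast ratio (the thresholds are the AA values from above):

// Relative luminance of an sRGB color, per the WCAG 2.0 definition.
function luminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map(v => {
    v /= 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA checks for one background color.
function aaCheck(bg) {
  return {
    whiteSmallText: contrastRatio(bg, [255, 255, 255]) >= 4.5,
    whiteLargeText: contrastRatio(bg, [255, 255, 255]) >= 3,
    blackSmallText: contrastRatio(bg, [0, 0, 0]) >= 4.5,
    blackLargeText: contrastRatio(bg, [0, 0, 0]) >= 3,
  };
}

Run aaCheck() over all 171 derived values and you have that at-a-glance table.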
Now our designers can come to this page, explore every color within our palette, and at a glance know which of these colors will be able to display text with sufficient contrast to be considered accessible.
For bonus points, we can also programmatically determine if a background color would be better suited to have text colored white or black:
<Note: I had an embed here, that demonstrated auto-generated colour palettes, but it no longer works>
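The logic is a one-liner on top of the contrast function in the earlier sketch: whichever of white or black has the higher contrast ratio against the background wins (the blue value here is illustrative):

// Reuses contrastRatio() from the sketch above.
function bestTextColor(bg) {
  return contrastRatio(bg, [255, 255, 255]) >= contrastRatio(bg, [0, 0, 0])
    ? "white"
    : "black";
}
console.log(bestTextColor([26, 84, 144])); // "white" for a dark blue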
If you build it, they probably won’t come
So we’ve made a great page where designers can explore our palette colors, and at a glance gain an understanding of which combinations will have sufficient contrast. But by this point everyone in the software industry should hopefully know that “if you build it, they will come” is simply not true. If you want people to engage with your design system – or with anything you’ve built – you need to offer something of value, you need to solve a real problem for them.
So how do we get the designers and developers across our different teams to care enough to come look at this page? We need to offer them some convenience or solve a problem they have.
What are the most common things our team needs help with when thinking about our brand colors? They usually want to explore the range of the palette, and then find a HEX code or a SASS variable to start using that color.
So we tried to make our design system colors page as helpful as possible, providing a way to explore the colors, see the shades and tints, see what colored text it best pairs with, and copy color values to your clipboard.
Next time someone needs to reference our brand colors, this feature set means they’ll come to our design system page first, because they know they can explore the colors, and get the correct codes in whatever format they need. We’re solving a problem for them, and, while we have their attention, using the opportunity to get them thinking about color contrast and accessibility.
What else?
So we’re just getting started on our journey of using design systems to improve the consistency of our design and our accessibility. But color contrast is a great place to start, and it’s already making me think about how we can use the design system project to put accessibility front-and-center in the design culture of our team.
The web’s most popular component library, Bootstrap, solves a problem for designers and developers by allowing fast prototyping of common website elements. But by offering components with accessibility baked in, and by encouraging good accessibility practice in their documentation, they’ve used their design system to lift the level of accessibility on millions of websites.
If you have other ideas on how design systems could be used to bake accessibility into your team culture and product design, I’d love to hear about it! It’s an exciting project to be part of, raising the design consistency and the accessibility consistency across the various products offered at Culture Amp.
If you’d like to join us, Culture Amp is hiring front end developers – either in Melbourne, or remote within Australia. It’s an amazing place to work, I’d encourage you to apply. See the Culture Amp Careers page for more info.
Bonus #1:
Our whole Color Showcase component is now available open source. You can view the Color System Showcase repo on Github, or even try embedding it online by entering JSON code on this page. (Sorry, this link no longer works).
Here’s a live iframe preview with Culture Amp colors inserted:
Bonus #2:
If you want to test how your site would look to a completely color-blind person, one approach is to type this into the browser console:
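// A full-page grayscale filter; set the filter back to "" to undo it.
document.body.style.filter = "grayscale(100%)";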
I’ve been using Haxe for a long while, including about 2-3 years of building web applications in Haxe full time, so I know how important managing your dependencies is, and I know how painful it was with Haxelib, especially if you had a lot of dependencies, a lot of projects, or needed to collaborate with people on different computers.
Haxelib is okay when you’re just installing one or two libraries, they’re libraries with stable releases where you don’t change versions often, and you don’t need to come back to your code after long gaps in time. Basically, haxelib is fine if you’re doing weekend hackathons or contests like Ludum Dare, where your projects probably aren’t too complex, you’re not collaborating with too many other people, you’re using existing frameworks, and you don’t have to worry about whether it will still work in 4 months’ time. Otherwise, it can be quite painful.
I tried to help with Haxelib at one point in time (I’m still in the top 4 contributors on Github, though most of that was back in 2013), but it proved pretty unruly – even skilled developers were afraid of changing too much or refactoring in a way that might break things for thousands of developers. And some changes were impossible to make without first changing the Haxe compiler. So it’s largely sat in the “too hard” basket and has not had many meaningful improvements since it first became its own project in 2013.
(No offense to anyone who has been working on it – you are a braver soul than I! But I think we all agree it’s not as good as it needs to be.)
Since mid 2016, I have been working in other jobs where I don’t use Haxe full time, instead spending more time with JS: using tools like NPM, Yarn, Webpack. And they’re certainly not perfect when it comes to dependency management, but there are a few things that they do right (Yarn especially).
Part 2: What the JS ecosystem gets right.
In Node JS land (and eventually normal JS land), there was a package manager called NPM – Node Package Manager. It had a registry of packages you could install. It would also let you install a package from Github or somewhere else. The basic things.
Here’s what I think it did right:
Used a standard format (package.json) to describe which packages a project uses (see the minimal example after this list).
Put all of the libraries in a standard location (node_modules/${my_cool_lib}/)
NodeJS didn’t care if you used NPM or not. As long as your stuff was in node_modules, it would be happy.
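For illustration, a minimal package.json might look like this (names and version ranges made up):

{
  "name": "my-cool-app",
  "version": "1.0.0",
  "dependencies": {
    "react": "^16.0.0",
    "lodash": "^4.17.4"
  }
}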
Why was this a good move? Because it allowed some talented people to build a competitor to NPM, called Yarn. By having simple expectations, you can have two competing package managers, and innovation can happen. Woo!
Yarn is what I use at work on a big project with 119 dependencies (and about 1000 sub-dependencies). Here’s what Yarn did right:
Reproducible builds. While package.json has information about which version I want (say, React 16.* or above), Yarn keeps information in a file called yarn.lock which says exactly which version I ended up using (say, React 16.0.1). This way, when my friend joins the project and installs everything, she won’t accidentally end up on a newer or older version than me – Yarn makes sure we’re all using exactly the same version, and that all of our dependencies and sub-dependencies are also exactly the same (see the sketch after this list).
A global cache. When Yarn came out, it was several times faster than NPM on our project because it kept a cache of dependencies and was able to resolve them quickly when switching between projects and branches. NPM has caught up now – but that’s the benefit of competition!
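For a flavour of what yarn.lock pins down, an entry looks roughly like this (simplified – real entries also carry the exact resolved tarball URL with a hash):

react@^16.0.0:
  version "16.0.1"
  resolved "https://registry.yarnpkg.com/react/-/react-16.0.1.tgz"

The package.json range says what I’ll accept; the lock entry says what everyone actually gets.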
Part 3: Introducing lix (and its friends: switchx and haxeshim)
In 2015 I remember chatting to my friend Juraj Kirchheim (also one of the key haxelib contributors, who just kind of gave up on it) about what an alternative might be, and he described something that sounded great: a futuristic, utopian alternative to haxelib.
2 years later, and it turns out, it’s been built! And it’s called “lix”.
(What’s with the name? I’m guessing it is short for “LIbraries in haXe”, a leftover from when every Haxe project needed an X in it for cool-ness, and Haxe was spelt as haXe. That, and the lix.pm domain name must have been available).
Lix also depends on two other projects: haxeshim and switchx. The names aren’t super obvious, so here is my understanding of how it all works:
Haxe Shim intercepts calls to Haxe and does some magic. The Haxe compiler on its own explicitly calls haxelib, so you literally can’t replace haxelib without intercepting all calls to the compiler and getting rid of -lib arguments. So haxeshim is a shim that intercepts Haxe calls and sorts out -lib arguments so that haxelib is never needed.
As a bonus, it also supports switching to the right version of Haxe for the current project. But for that, we also need “switchx”.
SwitchX lets you pick the Haxe version you need for your project, and automatically switches Haxe versions for whatever project you’re in. If you change between project A, on Haxe 3.4.3, and project B, in a different folder and running Haxe 4, it will always use the correct one.
How?
When you start a project you run switchx scope create. This makes a .haxerc file which says that this folder is a specific project, or “scope”, and should use the Haxe version defined in the .haxerc file.
How do you change the version?
You run switchx use latest or switchx use stable or switchx use nightly or switchx use 3.4.3 etc. It lets you instantly switch between different versions, and ensures the correct version is always used while you’re in your project folder.
Nice!
Lix is a package manager that you use to install packages. It is made to work with Haxe Shim, and creates a “haxe_libraries” folder, with a new hxml file for each dependency you install. It’s super fast because it uses a global cache (like yarn) and it makes sure you always have the correct version installed (like yarn). It supports installing dependencies from Haxelib, Github, Gitlab or HTTP (zip file). Anytime you update or change a dependency, one of the haxe_libraries/*.hxml files will be updated, you commit this change to Git, and it will update for all of your coworkers as well. Magic.
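For example, after installing a dependency, haxe_libraries/tink_web.hxml might look roughly like this (a sketch from memory – the exact version, cache variable and download URL will differ):

# @install: lix --silent download "haxelib:/tink_web#0.2.0" into tink_web/0.2.0/haxelib
-cp ${HAXESHIM_LIBCACHE}/tink_web/0.2.0/haxelib/src
-D tink_web=0.2.0

A dependency is just a classpath, a define, and a comment telling lix how to re-download that exact version into the cache.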
These tools are (for now) built on top of NodeJS, so you can install them with NPM or Yarn.
If you want to install each of these, you basically run these commands (warning: these will replace your current Haxe installation):
# Install all 3 tools and make their commands available.
yarn global add haxeshim switchx lix.pm
# Create a ".haxerc" in the current directory, informing haxeshim that
# this project should use a specific version of Haxe and specific
# `haxe_libraries` dependencies
switchx scope create
# Use the latest stable version of Haxe in this project.
switchx install stable
Part 4: What lix can do that haxelib cannot do (well).
With this setup, here’s what I can do that I couldn’t do before:
Be certain that I always have the exact right version installed, even if the project is being set up on someone else’s machine. Even if I pulled from a custom branch, using something like lix install github:haxetink/tink_web#pure (install the latest version of tink_web from the “pure” branch), when I run this on a different machine, it will use not only the same branch, but the exact same commit that it used on my machine, so we will be compiling the exact same code.
Easily get up and running on a machine that doesn’t even have Haxe installed. I tried this today – I took a project on Linux whose dependencies came from a combination of Haxelib, Github, Gitlab, and custom branches (a nightmare to set up with Haxelib), and set them up in Lix. I also added haxeshim, switchx and lix.pm as “devDependencies” so they would be installed locally when I ran yarn install. Then I opened a Windows machine that had Git installed, but not Haxe, cloned the repo, and ran yarn install. It installed all of the yarn dependencies, including haxeshim, switchx, and lix, and then running lix download installed all of the correct “haxe_libraries”, and then everything compiled. Amazing!
Know if I’ve changed a dependency. Today I was working on a change for haxe-react. In the past I would have used haxelib dev react /my/path/to/react-fork/. Now I edit haxe_libraries/react.hxml and change the class path to point to the folder my fork lives in. The great thing about doing this is that Git notices I’ve changed it. So when I go to commit the work on my project, Git lets me know I’ve got a change to “react.hxml”: I’ve changed that dependency. In this case, I knew what to do: push my fork to Github, and then run lix install gh:jasononeil/haxe-react#react16 to get Lix to properly register my fork in a way that will work with my project going forward. I then commit the change, and people who use my project will get the up-to-date fork.
Start a competing package manager. The great thing about all of this is that “lix” has some great features, but if I want to write better ones, I can. Because haxeshim just expects dependencies to have a “haxe_libraries/*.hxml” file, I could write my own package manager that does things in my own way, and just places the right hxml file in the right place, and I’m good to go. This makes it possible to have multiple, competing package managers. Or even multiple, co-operating package managers.
Part 5: Vote on the future
So, I think Lix has learnt from a lot of what has gone “right” in the NodeJS ecosystem, and built a great tool for the Haxe ecosystem. I love it, and will definitely be using it in my Haxe projects going forward.
The question is, do we really need “haxeshim” and “switchx” and other such tools just in order to have a competing package manager? For now, sadly, because of the way haxe and haxelib are tied at the hip, you do need a hack like this. But there’s a discussion to change that. (See here and here.)
If you care about Haxe projects having maintainable dependency management, you can help by voting up comments in a discussion that’s happening right now. There are several comments that I think will help Haxe support something like Lix, and more competing package managers, as first-class citizens going forward. Feel free to upvote them with a thumbs-up emoji.
Feel free to have a look and contribute to the discussion. For now though – if you don’t mind installing haxeshim and switchx, there is a very good solution for managing your haxelibs and dependencies in a reliable, consistent, but still flexible way. And it’s called Lix.
Update: I ended up getting a new job which came with a new laptop, so don’t have the XPS 9365 anymore. I hope this post is still helpful to people but I won’t be able to provide any more support. The official Fedora support page is over here: http://ask.fedoraproject.org/ Good luck everyone!
While having breakfast on Friday morning, my 5 year old laptop was going fine. Then Firefox froze. I pressed alt-tab, nope, everything is frozen except the mouse. Then the mouse was frozen. Then I reset the computer, and got this message “Operating System Not Found”. My hard drive had died.
Rather than spend a weekend fiddling to repair it, I decided to spend my tax-return money on a new laptop – a Dell XPS 13 9365 2-in-1. Fancy as! But whenever you buy a fairly new and fancy laptop, less than 12 months old, with the intent to install Linux, you should probably set aside some time, because you just know there are going to be issues.
One weekend later, I’m the happy owner of a XPS 13 2-in-1 running Fedora 26. Here’s all the tips and gotchas and cry-into-a-pillow moments that I had to get through to make it this far.
Trying Fedora instead of Ubuntu
Before I made the purchase, I was doing some Googling to see if Ubuntu would even load on an XPS 13 9365. The verdict seemed to be that it would load, though there was some difficulty getting suspend/resume to work. I decided to go ahead with the purchase. But in my reading, I came across this comment:
I was unable to uninstall Ubuntu on the XPS at all. And out of frustration I tried Fedora and I was simply BLOWN away by the polish. And today we have Fedora 26 that is even better. I am semi-validated by Ubuntu moving to Gnome as well. Ubuntu was simply too unpolished with Mir + Unity.
I decided to give Fedora a go. Now that most of my development work happens in Docker, I’m not too worried about which distro I have running on bare-metal – and I’m up for trying something new!
Verdict: I’ve enjoyed Fedora – the polish in Fedora 26 really is there compared to Ubuntu 16.04 (admittedly – it is 12 months newer so that is to be expected).
To get started with Fedora, download the “Fedora Media Writer” which will prepare a Live USB for you. See the Fedora installation guide for more info.
Shrinking a Windows Partition is beyond my pay-grade
At first I was interested in keeping Windows 10 installed and dual booting, because it might be nice to occasionally test how things work there. But part of the dual-boot process involves resizing the Windows partition to make space for Linux.
I had a 460GB Windows partition, with 30GB used. For the life of me I couldn’t shrink it smaller than 445GB – leaving only 15GB for Linux. I tried following different tips, tricks and tutorials for about 30 minutes, and then decided that I’ve lived without Windows for a decade, I can keep going without it now.
SATA mode has to be AHCI
By default the 9365 has its SATA hard drive configured in “RAID” mode rather than “AHCI”. To be able to install Fedora, I needed to change this to AHCI. Not sure why. Here’s a question / answer that prompted me to make the change.
It’s worth noting that if you intend to dual boot, changing from “RAID” to “AHCI” can cause serious problems for Windows unless you do some prep work first. You can change it and change back, but if you want to dual boot, you will need both to be on AHCI.
A painful firmware bug (that makes you think your laptop is dead forever)
This bug had me thinking my laptop was bricked and would need to be sent for warranty. It would literally sit on the DELL logo for what felt like forever, but turned out to be 5 to 10 minutes. I can’t explain how relieved I was to read a blog post where someone described the same symptoms:
When changing the SATA drive setting from RAID to AHCI, and disabling the “Secure boot” option in the BIOS (both actions are needed to install Ubuntu), the booting process gets stuck in the Dell logo for a long time, around 5 minutes, before it makes any progress. Even trying to enter the BIOS again to change those settings makes me have to wait that long.
Also, when booting with those settings on and entering the BIOS, the whole user interface of the BIOS menu, even just moving the mouse cursor around, is extremely slow. Clicking on a menu option in the BIOS makes the screen refresh to the next screen with a very slow transition of about 3 seconds.
I have upgraded to the latest BIOS firmware as of April 8, 2017 (Version 01.00.10, 3/9/2017). This bug is currently preventing me from setting up a dual-boot mode with Windows 10 + Ubuntu, which makes the system not usable for my specific use cases. I’d really appreciate if these issues could be resolved soon.
The fix:
You can’t have “SATA MODE = AHCI” and “SECURE BOOT = FALSE” at the same time.
Because “SATA MODE = AHCI” is required for a Fedora install, we need “SECURE BOOT” to be true. Turns out, this is actually okay.
One final thing to do in BIOS: configure it to boot from USB.
Because we’re using SecureBoot, this is not as straightforward as choosing an option from a boot menu.
Steps:
Ensure “Disable Legacy Boot ROMs” is ticked. It needs to be ticked before Secure Boot can be enabled.
Ensure “Secure Boot” is ticked. It’s on a different page in the settings.
Ensure “Boot mode” is “UEFI” not “Legacy”.
Go to the “Boot Sequence” section. This will show a list of boot options. The terrible GUI interface will require you to scroll down to find the “Add Boot Option” button. Click it.
Add a boot option named “Fedora” and click the “…” to open the file browser.
Find your USB drive in the list (mine was named “Anaconda” by the Fedora Media Writer).
Load the file “/EFI/BOOT/grubx64.efi”.
Save the new boot item. Use drag and drop to move it to the top, so it has the highest priority.
Save your settings, and restart – and hopefully – Fedora will load up and kick into Live CD mode.
Before you install
Before I hit install, I did a quick check:
Wifi works: yes
Sound works: yes
Touchscreen works: yes
Webcam works: yes
Suspend / Resume works: no. Bummer – but my research had suggested this was probably going to be an issue, so I continued anyway.
In the install options I deleted the Windows 10 partition, and got it to auto-partition from there. Then hit install. Woo!
Getting suspend and resume to work.
Update: It turns out I’m still having suspend/resume issues. I think what I need is to figure out how to install the 4.13 kernel while SecureBoot is enabled.
After the install, almost everything worked as expected, and the whole experience was really nice – it’s a beautiful laptop, and the new version of Fedora with Gnome 3 is quite pleasant. Until you close the lid and it suspends. Because then it won’t wake up again.
What would happen:
The screen would go dark, but the keyboard backlight would stay on.
Pressing the tiny power button on the side of the case does nothing at first.
If you keep holding the power button for like 10 seconds, the login screen lights up, and everything is still there, but the moment you let go, it suspends again.
If you hold it down long enough, it eventually turns off. You’ll need to do this to get out of the broken suspend, but it takes forever and feels like you’re pressing the little power button so hard you’ll break it.
The kernel version I was on at the time:
[jason@jasonxps enthraler]$ uname -a
Linux jasonxps 4.11.8-300.fc26.x86_64 #1 SMP .....
I read a bunch of Q&A suggestions on tips for getting this to work, but none helped that much – reading through the bug report above though convinced me that I needed to upgrade from 4.11 to 4.12 or 4.13.
Upgrading to the very latest kernel (4.13-rc4) seems easy, but as the wiki page notes, it won’t work with SecureBoot – so that turned out to be a dead end for me. (Signing the kernel for SecureBoot might be possible, but I couldn’t be bothered learning enough to understand the tutorials).
4.12 hadn’t reached Fedora’s stable updates yet, but it was supposed to be in testing. Unfortunately, enabling the “updates-testing” repository and running “dnf upgrade” didn’t install the new kernel. I’m not sure if it was supposed to.
Be careful here that you don’t override the kernel you’re currently using. You may need to add options to “dnf” if it suggests that it’s going to remove the package for the kernel you’re currently on.
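Explicitly installing the specific kernel packages from updates-testing is the kind of command that should do it, keeping the old kernel around as a fallback boot entry (the version string here is illustrative – use whatever is currently in testing):

sudo dnf --enablerepo=updates-testing install kernel-4.12.5-300.fc26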
After restarting, test that the new kernel is working:
[jason@jasonxps enthraler]$ uname -a
Linux jasonxps 4.12.5-300.fc26.x86_64 #1 SMP ...
And now, you can close the lid and expect it to suspend and resume. For me, I still have to hold the power button for like 6 seconds to get it to resume, but hey, at least it comes back. I’m hoping 4.13 will come out and fix that problem too.
Note – I also changed the setting in Gnome Power Settings for “When the Power Button is Pressed” from “Suspend” to “Nothing”. Reason: sometimes holding down the power button that long to resume would then trigger another suspend. So I set the button to do nothing. I can just close the lid to suspend.
Ubuntu Unity-like keyboard shortcuts
Overall I’ve really enjoyed Gnome 3 over Ubuntu Unity. One thing I missed though was being able to press “Win+1” to open my file manager, “Win+2” to open Firefox, “Win+3” to open Visual Studio Code, “Win+4” to open Chrome etc. Basically, my most common applications all sit in the dock on the left, and I can use a quick keyboard shortcut to switch to that app – and if it’s not open already, it will open it. Gnome doesn’t have this by default.
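One way to wire this up yourself is Gnome custom keybindings plus wmctrl’s raise-or-launch trick. A sketch, binding Firefox to Super+2 (repeat with custom1, custom2, etc. for your other apps):

# wmctrl can focus a window by its WM_CLASS; install it first
sudo dnf install wmctrl
# Register one custom keybinding slot...
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
# ...then bind Super+2 to "focus Firefox, or launch it if it isn't running"
KEY=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set $KEY name 'Firefox'
gsettings set $KEY command 'sh -c "wmctrl -xa firefox || firefox"'
gsettings set $KEY binding '<Super>2'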
Well, that was certainly not something I’d trust my Grandma to complete successfully. But hey – at least it works. If I learn any new tricks for getting Fedora to run nicely on a Dell XPS13 9365 2-in-1, I’ll post here.
If you have any questions, feel free to ask – no guarantees I’ll be able to help though :)
Lately I’ve been confused by the cross, and confused about why I’m confused. As I went through Easter this year I still was moved by the idea that God somehow loved us enough to die – but I couldn’t explain what it all means, and I couldn’t verbalise what it was about the traditional explanation that made me so uneasy.
I haven’t made it past the first chapter of Tom Wright’s book on the crucifixion yet (more on that below), but by setting apart time to think about this, I think I finally (subconsciously?) was able to piece together what I want to believe, in a way that draws contrast to my understanding growing up.
My understanding growing up:
God is love, and he loves you, but he’s also “just”, “righteous” or “perfect”, and can’t stomach your disobedience (sin), because that perfection just doesn’t mix with sinfulness, and a blood sacrifice was needed to make things right for some reason. Animals weren’t enough. Jesus died so it didn’t have to be you. That was enough for God the Father. Now when he looks at you he doesn’t see sin, he’s just full of love again.
What I don’t like:
If Jesus is the image of the invisible God, then we should get a good idea of what God is like by looking at what Jesus is like. A person with uncontrollable anger problems that needs satiating when people don’t do what they want… is not at all the type of person we see in Jesus.
Most (admittedly not all) of the anger I see the bible describe God as having is the same kind of thing you see Jesus get angry over: injustice, caused by some humans, that crushes other humans. This pattern seems to be well in place by the time you get to the prophets in the Hebrew Bible. The “lash out” kind of anger admittedly seems to be more present in earlier books like Joshua, and occasionally in later places like Ananias and Sapphira… but it seems the overall thrust is that God’s anger / anguish is about humans hurting each other, rather than about us offending his righteousness.
What I finally realised I want to believe:
Jesus was killed by humans. It wasn’t God’s anger that put him there, it was ours. We have the human condition, a tendency to lash out, to scapegoat, to viciously attack anything that exposes our frailty, futility or hollowness. Jesus exposed how hollow the power structures of the day were, and showed very clearly how the actions of the religious powers and the actions of the political powers were not the actions of God – these people did not represent God. He exposed the leaders, and like most humans, they lashed out.
But rather than fighting back and perpetuating the sinful violence of humanity, he took it and did not return it, in fact, while they were still killing him, expressed forgiveness and love. To apply MLK’s famous truth: Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
On a physical level, the power-structures of Rome and of 1st century religious leaders killed him, because he threatened them. On a sociological level, his non-violent response changed the game in a way where his followers continued to subvert the brutal power of Rome despite intense persecution.
What about theologically or spiritually? Is there any meaning to it?
If Jesus really is God-incarnate, God-as-a-regular-human-being, (which I should clarify I want to believe), then him suffering the same wrath of humans as the rest of us shows that that wrath is from humans, not from God. The angry and violent human condition that causes us to crush each other (which I feel sums up most of the concept of sin) is actually from us, not from an angry God.
In other words, what I want to believe: God was never angry that we’re not perfect. That anger was always ours, that violence was always ours. It took God himself suffering under that violence, as Jesus on the cross, for us to understand that the violence wasn’t coming from God. The anger was never his.
So in a bizarre way, the cross is a sign that God is not angry: it shows me that he always loved and was not the one who was angrily lashing out. And it is a sign that God is not leaving us to suffer alone: that God himself would suffer under our angry violence, it shows he knows our suffering and is not keeping his distance. And it is a sign that God is working at a rescue plan: that he overcame hatred with love, darkness with light. This is the butterfly effect – where one small act of love overcoming hatred is cascading and rippling outwards until hatred, violence and even death is overcome by love, and that the God of the universe is putting his full weight and power behind this plan.
And so, when I look at the cross, I do realise that God loves me and is not angry. And I do realise that he loves me enough to enter into that suffering. And I do realise that he has a plan for salvation, the rescue of the world – to transform this suffering through redemptive love.
Just because I want it doesn’t mean it’s true.
So that’s what I want to believe, now that I can finally articulate it.
But I’m looking forward to reading the rest of Tom Wright’s book, because admittedly: this is just the worldview that sits well with me, given what I’ve experienced and what I’ve learned and the cultural leanings that go with that.
Tom’s book looks like it will go through it in a more rigorous, systematic way: examining early evidence and early texts, examining the changing understanding of the crucifixion event by two millennia of theologians, and generally being a little more grounded than my “this is what I want to believe” write-up.
But, it’s good to be able to write down, as a product of my life and culture and upbringing and current understanding, what it is that I most want to believe about a loving God who had to die.
I’m looking forward to seeing where I land. If you’re wondering any of the same things, asking the same questions, or exploring the same topics – I would love to hear about it in the comments!
When I type the word “painstaking” in my phone, it automatically suggests “work” as the next word. Do you ever hear it paired with any other word?
Painstaking work is often necessary, even good. We know this and brace ourselves for it, muster our endurance and strength and willpower.
But painstaking love is something we don’t give much thought to. But maybe we should. Let me try this definition of love: improving the life of another person, regardless of the effect on your own life. When you define it like that, love is sometimes going to get painstaking: really challenging, pushing us to the edge of our capacity, more than most people are willing to undergo. Much like painstaking work, painstaking love achieves the strongest effect.
One key thing that’s changing: my view around why Jesus died. There’s a cognitive dissonance when you speak of a “God of love” who loves you so much that he will punish another to satisfy his own rage, or to satiate his sense of honour. We condemn honour killings, but it’s okay for God?
I still believe in God (though, what I mean by that statement, is also something that is changing). But if it’s Jesus that I’m attracted to, and it’s Jesus that showed us what the god behind the universe is really like as a person, then I don’t think God is the sort that wants to kill people to defend his sense of honour and justice. In fact, one of the stories I like most is of Jesus non-violently de-escalating a situation, saving a woman from being the victim of an honour killing.
So what did Jesus death on the cross mean? It’s something I want to learn more about. I want to read NT Wright and I want to hear about the “new perspective on Paul” that is actually decades old. But I read an article today that had good food for thought.
He became the lightning rod where the pent-up oppositional energy of the powers that be (the world) became focused. In bearing the hate, evil and animosity of the world, he exposed it and exhausted it, thus overcoming it.
We, too, are called, on behalf of the kingdom of God, on behalf of mercy and justice, on behalf of what is good, right, true and just, to be lightning rods, to bear the hate of the world without returning it, so that it might be exposed and so that forgiveness is given a chance.
On Sunday I finished reading “Making Ideas Happen” by Scott Belsky (founder of Behance).
I’m the sort of person that has ideas. So many ideas. Some are ideas for apps or products, side projects or start-ups, cool programming libraries or fun creative projects. When I look back at my work history, I am proud of the things that I have “shipped” and finished, or if not finished, at least gotten it “finished enough” that other people could start using it.
But there is definitely a graveyard of good ideas that I was very excited about at one point in time, that I even thought were game-changing, and maybe even started work on, but never managed to finish or get out there.
As I was wrapping up 2016 and planning for 2017, I realised that I have several ideas that I actually want to see become reality. Two ideas in particular (tentatively named Enthraler and Model School) I’ve wanted to do for at least 5 years, and have kept not doing.
When we finally launched Today We Learned I found out the pain of launching a good idea too late, and seeing someone else carry a very similar idea to success. If I wanted 2017 to be different, and if I wanted to be part of these 2 ideas becoming reality, I had to get better at executing on ideas, pushing them forward and getting them out there. And all of this while having a full-time job that I really love and am already feeling stretched in.
So when my sister Clare and her husband Zac gave me a book for Christmas that promised to help me “Overcome The Obstacles Between Vision And Reality”, I was eager to get stuck into it.
Disclaimer and caveat: It’s worth pointing out that I’m viewing this book as advice on creative projects, which I’m viewing as distinct to start-ups. Both of them involve ideas, innovation and execution, but there’s a crucial difference. If you want a start-up to succeed, you should probably not start with an idea. You should start with a problem that real people hate so much they’ll pay someone to solve it for them. Find a gnarly problem first, then let your ideas develop around that. Creative projects on the other hand, start with your idea. It’s a work of art, as if inspiration has visited and requested that you make an idea real, and it’s your job to make it. It’s as much about self-expression and creative fulfilment as it is about business development.
Overview
The first half of the book (“Organisation and Execution”) is all about getting it done. It’s full of practical tips to stay organised, stay focused, and keep pushing ideas forward until they’re ready. These have made a massive difference for me so far. The second half (“The Forces of Community” and “Leadership Capability”) focus on the relational aspects of ideas. Ideas feel their most exciting, and most pure, when they’re in your head. The moment you start sharing them, and inviting other people in to participate, things change. This might feel like your idea is losing potency as it gets diluted by others, but it’s only with their help (and with the new aspects they bring to the project), that your idea is going to be successful. How well you utilise these forces has a massive impact on your ability to consistently bring ideas into reality.
Organisation and Execution
The competitive advantage of organisation
The action method: work and life with a bias towards action
Prioritisation: managing your energy across life’s projects
Execution: always moving the ball forward
Mental loyalty: maintaining attention and resolve
I was really surprised by how prescriptive Scott is in this first section. His basic argument is that the key to creative success is actually finishing your projects and getting them out the door, and doing that as often as possible. The trouble is that ideas are exciting at the start, boring in the middle, a hard slog near the end, and then only get exciting again right before you launch.
So the key to all of Scott’s advice is to just keep moving forward, one small step at a time, to get you through the trough and out the other side where your idea is finally a reality. To do this, it takes dozens of small actions. Focusing on these actions is the key principle in “The Action Method” – Scott’s big idea on how to do this.
Most other productivity books I’ve read have talked about the motivation and less about the mechanics of getting it done, but Scott is quite prescriptive. For example:
Organise your whole life into “Projects”. Not just work projects. Not just side projects. Even mundane things like “grocery shopping” and “remember to call mum” go into a project. He isn’t prescriptive about where to store your projects, but I’ve used Trello. I have one Trello board for my whole life. The lists I use are “Home Admin”, “Personal and Social”, “myEd”, “Enthraler”, and “Model School”.
Each project has 3 types of things you can store:
Action Steps – literally, a specific action you can take to move the project forward. The first word should be a verb: “Create nice styling for multi-choice component and upload to Github”. Or “Read through things Cassie sent me and send through feedback”. My friend Stephen had an interesting take on this, going one step further and having a super specific first step in the action. “Open gmail, read through the docs Frank sent me…”. The trick here is to make it so obvious what the next step looks like, so whenever you have half an hour to work on something, you can easily pick an action step, start it, and push a project forward.
Back-burner Items – this is where you keep possible future action steps that you’re not ready to commit to yet. Maybe they are ideas for much later in a project, or maybe they need some more thought before you commit to starting them. Keeping them in a separate place, where you can come back to them, but where they’re not confusing you as to what the next steps should be, helps hugely. In my “Enthraler” project I have 8 ready-to-go Action Steps, and 29 items in the Back-burner. That goes to show that when I start a new project, I’m excited about the possibilities and I keep thinking them up. But I know if I want to keep moving the project forward, I just pick one of the Action Steps I’ve already committed to, and all of the exciting future ideas are in the “Backburner” list where they won’t distract me.
References – this is for things which you want to remember but they’re not actionable. An example is finding a colour scheme I really liked for a project. In the past, I may have taken a screenshot of the colour scheme and added it to a task. But it’s not really an action I can take! It’s just something I want to store so I can look back if I need to. Now I keep things like this in references. Also helpful: meeting notes, contact details, etc. Keeping all these things in one separate spot helps keep the clutter down so you can focus on your next action steps.
Have a regular “review” session, say once a fortnight or once a month, where you go over your projects, see if the actions are still relevant and clearly defined, check what’s in your back-burner and references, and update things as necessary. He recommends doing this some place nice, like your favourite coffee shop. This is a part I haven’t had much practice with yet!
The Forces of Community
Harnessing the forces around you
Pushing ideas out to your community
One of the headings in this section is “Seldom is anything accomplished alone”, and I find it a perfect reminder. Some ideas are small enough that they can be accomplished alone, but even something like writing a song will benefit from collaboration. Not to mention the help you would need in producing, mixing, mastering, distribution and release strategy. And every project is like this – if you want it to become more than a hobby, you are going to have to bring other people into it.
A key part of this is overcoming the fear of people judging and criticising your work. Overcome the fear, then get feedback early, and get it often. You don’t have to listen to what people say, if you want you can hold on to the exact vision you have. But if you overcome the fear, and get the feedback, you will probably hear things that will make your project better, so it’s worth putting yourself out there.
There was a beautiful story in here about a story-telling course Scott went to, where people practised sharing a story, but the audience were not allowed to give “constructive criticism”. Instead, they were only allowed to share what they really appreciated, what made the story come alive for them. He found that as people iterated on their stories, this approach helped the “alive” pieces of the story shine even more, and somehow, the weak parts of the story began self-correcting with each iteration, but without losing the strength of the parts that really shone. I loved that!
One of the things that challenged me the most in the section was to actively self-promote what you’re working on, and to build an audience of people who care about what you’re working on. If you’ve ever read case studies on a site like “Indie Hackers”, you realise that a common story in side-project success or start-up success is that the person starting had an audience who really cared about what they were working on, so when they launched a new project, they had someone to launch it to. So don’t be afraid to self-promote and build an audience, and show them what you’re working on regularly. The fear you have shouldn’t be that you’re inflating yourself in front of others, it should be that you’re not giving your idea any air, and it might die in your mind without ever becoming a reality.
The way Scott consistently reinforced this was a wake-up call for me.
Leadership Capability
The rewards overhaul
The chemistry of the creative team
Managing the creative team
Self leadership
I’d been reading the book thinking primarily about my side projects, which are solo-shows for now. Scott offered distilled wisdom and advice from his experience and from the research conversations he conducted, and while it’s not immediately applicable to the projects I’d been thinking about, I can definitely see how it ties into my work at myEd, and how it will be important to attract more contributors for any project I do going forward. It identified a weakness in my approach to executing so far – the tendency to do it all myself.
Much of the advice given in this section was focused on character. So not so much “how to run a meeting”, but “when in a meeting, let other people talk first and make sure you actually listen”. Things like that. I really do believe that humble leaders attract talented collaborators, but more importantly, their humility and strength of character breeds loyalty – which you just don’t see as clearly when it is obvious the leader isn’t paying attention to your input and your ideas.
The difference it has made for me
Reading this book came at the perfect time for me. After closing down Today We Learned last year I began working full-time at myEd, but found I was still swimming with ideas for outside projects. By the time the new year rolled around, I had multiple projects I really wanted to run with, but was just not convinced I could push them all forward while still being effective at work 5 days a week.
This book, especially the focus on the action method, has helped me dramatically – and people have noticed. Most significantly my wife Anna – who keeps commenting that she can’t believe the change in how I work and in my energy levels, and in my ability to keep things moving – not to mention staying on top of things at home more effectively.
On a practical level, it has meant that I’m staying focused and effective at work, not getting stressed out by home administration (keeping track of finances, bills, investments etc) because I know I have everything tracked, and if I sit down at work, or sit down to do some home admin, or to give time to a side project, the next actions are right there for me to continue with.
Because of the stage of life we’re in – we moved across the country to focus on our work and our creative pursuits, and we don’t have young kids to care for and hang out with – we have a fair amount of spare time. I try to give 1 hour each weeknight to push a side project forward, and then a few hours of blocked out time on either Saturday or Sunday. With this rhythm I’ve been able to push forward my two main side projects to the point where both are almost at MVP (minimum viable product) level. And this has been during one of the most incredibly busy and stressful periods at work – coming home and having a different project that I can get into has in many ways helped me maintain my energy and positivity during some of the more stressful moments at work.
The challenge to think about the community around me has also been incredibly helpful. As well as the two side-projects I’ve been pushing forward, there were another two ideas that I couldn’t shake, and that I wanted to help become reality – but you can only stretch yourself so thin before you stop being effective. By reaching out and sharing these ideas with friends, I’ve actually found other people working on similar things, and have been able to support them rather than carry things forward myself. I’ve found this incredibly rewarding, while at the same time satisfying the creative urge that demands I be part of making that particular idea happen.
It’s been an incredibly timely book to read (thanks Clare and Zac!) and has helped me hugely so far. To anyone who has ideas but can’t seem to get them off the ground, I recommend it highly. I’m excited to see how the rest of my year pans out as I continue pushing things forward one action step at a time.
“The old men used to say that we should each look upon our neighbour’s experiences as if they were our own. We should suffer with our neighbour in everything and weep with him, and should behave as if we were inside his body; and if any trouble befalls him, we should feel as much distress as we would for ourselves.”
But actually, Jesus said that if you live violently you’ll die violently. The best defence is probably nothing to do with fighting your adversary, but rather loving them. As a best case scenario, they’ll return the favour – and you both win. Worst case scenario, you love them, and they take advantage of you. Which sucks, true, but at least you’re behaving like God, and that is probably more important than winning anyway.
A freelancer sits down in a coffee shop in Portland to get some work done, and finds himself distracted by a senior citizen wanting to talk about computers. At this point I groaned, but (spoiler alert) the old man turns out to be the most remarkable person this freelancer has ever met. The old man was Russell Kirsch, whose team built the first internally programmable computer. He and his wife used to program the computer while standing inside it. This man invented computers as we know them today. (He also invented digital photographs and the idea of a pixel. What a boss!)
When Joel the freelancer realized just how impressive Russell is, this conversation took place:
“You know Russell, that’s really impressive.”
“I guess, I’ve always believed that nothing is withheld from us what we have conceived to do. Most people think the opposite – that all things are withheld from them which they have conceived to do and they end up doing nothing.”
“Wait”, I said, pausing at his last sentence. “What was that quote again?”
“Nothing is withheld from us what we have conceived to do.”
“That’s good, who said that?”
“God did.”
“What?”
“God said it and there were only two people who believed it, you know who?”
“Nope, who?”
“God and me, so I went out and did it.”
What a life changing exchange! Unbelievable. When I read the blog I wanted to know if he was imagining God talking in the spiritual, pentecostal, voice-in-the-head sense, or something else. It turns out he was quoting the story in Genesis 11 about the Tower of Babel.
The Lord said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them.”
In the creation story, whether literal or metaphorical, there had been a flood, Noah’s family survived, and God had commanded them to spread out and fill the earth. On the way they began developing culture and technology, songs and tools, and when they found a nice place, they stopped spreading. Instead they settled and built a city. In their pride they wanted to build a tower so tall that the world would always remember them.
The tower they were building was probably a Ziggurat, and the story of a giant tower in the area of Babylon (modern day Iraq) seems to be shared with other ancient cultures. To me the confusion of languages could easily happen in the modern-day world – a giant corporation or city gradually has multiple cultures rise up within it, they drift apart, can no longer work together, and so abandon the project to go their own ways. It doesn’t necessarily seem to me like a divine punishment where they spoke one language one minute and all spoke different languages the next.
Let’s look back at God’s words: “nothing they plan to do will be impossible for them”. The 400 year old King James translation swaps “plan to do” for “can imagine”: nothing they can imagine will be out of their reach. Nothing they aspire to and plan for and set their mind on is out of reach. God acknowledges the amazing potential of human creativity and ingenuity. They like to create new things, do things that have never been done before, and they’re good at it. In that way, they take after their creator.
I always used to read the story of Babel in a fairly negative light – God didn’t like the ingenuity, or saw it as a threat, and so shut it down. John Stott points out two things God may have been offended by: the disobedience of settling rather than spreading out to “fill the earth”, and the presumption that they could reach into Heaven and be like God – the same as the original sin from the Adam and Eve story. So there’s the failure to explore the earth and develop its potential. And there’s the pride and hubris: being concerned with our own fame, and wishing ourselves to be like God.
It wasn’t until reading Russell’s conversation that I began to read this statement differently. Maybe it wasn’t the ingenuity God objected to; maybe he isn’t threatened by us building amazing things. It is, after all, part of our nature as creative beings. But our resourcefulness can be twisted, and our inventions can leave the world worse off, not better. (Read the story of the Gatling gun or dynamite for examples.) And this is all the more likely if we’re acting in our own self-interest, for our own fame and power and comfort. But if we were instead to align our efforts with the command of God – to fill the earth and subdue it, meaning to manage it responsibly and for the benefit of all – then perhaps we could see truly astounding things accomplished.
It can go either way: jets for transport and jets for bombing, nuclear power or nuclear weapons, curing diseases or inventing new ones, programmable computers and systems of government. Humans have the creativity and the resolve to build incredible things. And that can work out really well or really horribly.
There are two questions to consider, then. First: do you, like Russell Kirsch, believe God – that what you can imagine and resolve to do, you can do? Because if you do, maybe you’ll go out there and do something that’s never been done.
The second question is, are you working towards your own fame, power and comfort, or towards the mission laid out by God: to responsibly manage and care for the earth we’ve inherited, and to care for the people we share it with?
Once you have more than one project you’re building in Haxe, you tend to run into situations where you use different versions of dependencies. Often you can get away with using the latest version on every project, but sometimes there are compatibility breaks, and you need different projects to use different versions.
There is a work-in-progress issue for Haxelib to get support for per-project repositories. Until that is finished, here is what I do:
cd ~/workspace/project1/
mkdir haxelibs
haxelib setup haxelibs
haxelib install all
And then when I switch between projects:
cd ~/workspace/project2/
haxelib setup haxelibs
What this does:
Switch to your current project.
Create a folder to store all of your haxelibs for this project.
Set haxelib to use that folder (when you switch to a different project, you’ll use a different local folder).
Install all the dependencies for this project.
Doing this means that each project can have its own dependencies, and upgrading a library in one project doesn’t break the compile on another project.
Hopefully that helps someone else, and hopefully the built in support comes soon!
So far so good. But what if we have a third user class, “Moderator”, whose constructor requires 3 arguments, not just the username and password?
createUser( Moderator, "bernadette", "mypass" );
This compiles okay, but will fail at runtime – it tries to call the constructor for Moderator with 2 arguments, but 3 are required.
My first thought was: can we use an interface and specify the constructor?
interface IUser {
	public function new( user:String, pass:String ):Void;
}
Sadly, in Haxe an interface cannot define the constructor. I’m pretty sure the reason for this is to prevent you from creating an object with no idea which implementation you are using. That would be fine for reflection, but it wouldn’t make sense for normal object-oriented programming:
function createUser( cls:Class<IUser>, u:String, p:String ) {
	var user:IUser = new cls(u, p); // What implementation does this use?
}
So it can’t be interfaces… what does work? Typedefs:
typedef ConstructableUser = {
	function new( u:String, p:String ):Void;
	function save():Void;
}
And then we can use it like so:
function createUser( cls:Class<ConstructableUser>, u:String, p:String ) {
	var user:ConstructableUser = Type.createInstance( cls, [u, p] );
	user.save();
}
createUser( StaffMember, "jack", "mypass" );
createUser( Subscriber, "aaron", "mypass" );
createUser( Moderator, "bernadette", "mypass" ); // ERROR – Moderator should be ConstructableUser
In all honesty, I was surprised that “Class<SomeTypedef>” worked, but I’m glad it does. It provides a good mix of compile-time safety and runtime reflection. Go Haxe!
I also wrote about `AcceptEither`, a way to accept either one type or another, without resorting to `Dynamic`, while still having the compiler type-check everything and make sure you correctly handle every situation.
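For reference, here’s a minimal sketch of the idea as an abstract over haxe.ds.Either – the names and exact shape are my reconstruction, not necessarily the post’s code:

import haxe.ds.Either;

abstract AcceptEither<A,B>( Either<A,B> ) {
	inline function new( e:Either<A,B> ) this = e;

	// Implicit casts: a value of either type is accepted at the call site.
	@:from inline static function fromA<A,B>( v:A ):AcceptEither<A,B> return new AcceptEither( Left(v) );
	@:from inline static function fromB<A,B>( v:B ):AcceptEither<A,B> return new AcceptEither( Right(v) );

	// Expose the underlying enum so callers must handle both cases.
	public var value(get,never):Either<A,B>;
	inline function get_value():Either<A,B> return this;
}

class Example {
	static function setWidth( w:AcceptEither<Int,String> ) {
		switch w.value {
			case Left(px): trace( "width: " + px + "px" );
			case Right(css): trace( "width: " + css );
		}
	}
	static function main() {
		setWidth( 100 );   // an Int is accepted…
		setWidth( "50%" ); // …and so is a String, all fully type-checked
	}
}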
Since I’m not a teacher or a parent (yet…), people sometimes ask why I decided to make my life’s work about using technology to improve education. I made that decision while in rural Cambodia in 2010. In a country still struggling to recover from the brutal genocide 40 years earlier, we were visiting a learning centre that ran afternoon classes and learning activities, complementing the local school’s morning-only classes.
The centre was run by Sonai, an incredibly entrepreneurial lady only a few years my senior. She was the first person in her village ever to graduate high school. (She jokes that she only did it because she couldn’t bear the thought of being a farmer the rest of her life. I don’t blame her!)
Together with a team of other young teachers and mentors, she was providing food, learning and leadership development to hundreds of students in that village. She is determined to lift her village out of subsistence living through her brilliant mix of education and entrepreneurship. It worked for her; it can work for these kids too.
When I got back to Australia, I began chatting with teachers, and my admiration for the profession grew and grew. These people were fiercely determined to provide their students with the best opportunities for a life worth living. Even in a country as wealthy as Australia, a good education often makes the difference between a life shaped by hope and opportunity, and a life that just scrapes by. And we aren’t without our own educational struggles: remote Indigenous education, catering to special needs, and grappling with new national standards and international competition.
I don’t have the personal make-up to be an effective classroom teacher, and I don’t pretend to know the best practices or solutions to all of these problems. What I can do is work with the most innovative teachers to craft solutions to the most difficult problems. They bring their teaching expertise; I bring the design, tech and startup know-how. (The idea for ClassHomie came out of one such meeting with Aaron Gregory, a teacher I have so much respect for. It has since been refined by input from dozens of teachers.)
I strongly believe that entrepreneurs and teachers can, and should, work together to solve the difficult problems of education. By improving learning, we improve lives. This work matters, and that’s why I’m building educational apps, starting with ClassHomie.
Preaching about a man who died and then began showing up again, evidently no longer dead, with a new evolution of the human body, and claiming this resurrection as validation of his claims to be king of his nation, saviour of humanity, and founder of a new world order where humans live empowered by a supernatural spirit to live an entirely different style of life which occasionally ignores the realities of physics or biology or politics… I feel it should carry a whole different level of energy, challenge, hope, discomfort and urgency than it normally does.
And yet if I was preaching, I have no idea what that would look like.
Maybe I need to explore the reality (or unreality) of this story in my own existence first.
(Disclaimer: My startup journey and my faith overlap in this post. For those of you who want to avoid the theology stop reading now!)
I’m almost at the end of PhDo, a 6 week startup night class that I have been LOVING. One of the key lessons has been about getting your idea started in the quickest, cheapest, fastest way possible.
The idea is that if you wait too long – working on your idea in secret, never putting it in front of customers – it stays just an idea. You don’t know if it’s something they’ll want or need or appreciate. Until you reach out and touch a human being, you might as well not have done anything.
Sam, the guy running the course, coined a word for it: analoguer. Rather than planning a digital masterpiece of an app that might take years to build, is there a simple, manual, analogue way you can solve the same problem for the same person, now? Not once you have all the resources and the app and the staff team and investment and… No. Can you help meet a need today? Start meeting a need now, help someone out, see if it’s well received, then worry about scaling the solution up for more people.
Get analoguer. Get dirty. Get doing – solve a problem today. Scale later.
I did this wrong with my School Management System app. I never met with the staff, and completely underestimated how complex the problem of tracking student attendance is. My solution was way too simplistic, and would never be adequate. The project blew out by 10 months and caused a lot of frustration as a result.
I tried to be a messiah and solve this problem, thinking it would be easy. But I never entered in and felt / understood the pain first. How can you offer help if you have not stood with people and felt their pain and understood the complexity of the problem first?
This is the difference in how Jesus chose to work. He could have tried to solve the world’s problems from Heaven, or sent some prophet to do the dirty work. Instead he chose to get dirty, get personal, and get in touch with those he was trying to help: to understand their pain and show his solidarity, to feel the full weight and complexity of the problem, and to show those facing it that he was eager to help in any way he could, even if he got dirty doing so. Even if it meant suffering with them. Even if it meant dying with them.
Jesus left the ivory tower of Heaven. He was God, but put his rights as God and abilities as God behind him, he became an ordinary human, wrapped in ordinary human flesh. He got analogue.
Today is the first day of Advent – the season where we anticipate the coming of Jesus to earth, culminating in Christmas.
We join Mary in expecting the birth of the baby Messiah, the God who gave it all up to come and understand our problems and join us in this often-painful world, and resolved to help us in any way he could, no matter the cost.
And we join with the people of faith around the world expecting the second time Jesus will come, having grasped the full complexity and pain of the human condition, and having felt it for himself, he’ll be back with a solution that scales.
Until then, let’s get analoguer and show love to people, not waiting for the perfect plan, strategy or opportunity, but starting right now in a way where you get as close to the problem as you can, and give what you can today, even if it’s not a full and perfect solution.
I’ve mentioned it a few times, but I’ve found Marcus Buckingham’s Standout profile and website really helpful. The profile for me was accurate and insightful, and the tips I get sent each week are great.
One of them encouraged you to log the things in a week that you loved (that left you energized, strengthened) and the things that you loathed (that left you drained, weakened). This wasn’t about what other people do to you (a police officer gave me a fine – I loathed it; my boss gave me a pay rise – I loved it). Rather, this is about the work you’re doing: what parts of your work energize you and what parts drain you.
Here’s my list.
Loved It
Working on new business strategy, or figuring out the brand/story that makes a new product have meaning.
Writing code for new APIs or frameworks that will be well designed, get used by many people, and speed up development. While I’m doing this I’m engaged, I’m learning and tweaking my skills, and I’m making something that benefits both me and others.
Delivering a feature that has immediate customer benefit
Meeting people, getting them on board with a new idea. This week it was a friend (and possibly future business partner), a learning support officer (who is an aspiring entrepreneur) and a friend who is wanting to get into graphic design.
Trying to consider how my faith world mixes with my business world. Particularly this week: how can I see people as people, not resources or assets. How can I help them find their particular spot in the world, and help them grow into it, rather than “how can I get you to do what I want”.
Loathed It
Maintaining old projects. Especially if it’s something I didn’t care for to begin with. (For me this week, that includes Koha, Canvas, and Moodle to a lesser degree.)
Being apart from Anna for too much of the week
Working on projects that feel like they will never end or progress. I need a sense of momentum and an expectation that one thing will finish so that new things can start.
Having to report to (and problem solve with) a group of people who don’t understand the technical nature of the problem.
Having to answer questions where the “correct” answer is “drop quality, deliver faster”
Avoiding answering / helping people, because other deadlines are too heavy
Having to pretend something is good/ready when it’s not. I’d rather be honest. Drop projects if they suck, or at least admit it.
There’s my list for this week. I’m in the middle of an extremely busy work season, so this is mostly focused on work and doesn’t touch much into my family life or faith life – both of which also have strengthening moments and weakening moments. Still, a great exercise.
I just posted a quick gist to the Haxe mailing list showing how one way that abstracts work.
They’re a great way to wrap a native API object (in this case, js.html.AnchorElement) without having to create a new wrapper object every single time. That means they’re great for performance, the end-result code looks clean, and thanks to some of the other abstract magic (implicit casts, operator overloading etc.) there are a lot of cool things you can do.
Have a look at the sample, read the Haxe manual, and let me know what you think or if you have questions :)
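In case the gist link ever dies, here’s a minimal sketch of the kind of wrapper I mean – hypothetical names, not the exact gist code:

import js.html.AnchorElement;

// The abstract is erased at runtime: no wrapper object is ever allocated,
// calls go straight through to the underlying AnchorElement.
abstract Anchor( AnchorElement ) from AnchorElement to AnchorElement {
	public var link(get,set):String;
	inline function get_link():String return this.href;
	inline function set_link( v:String ):String return this.href = v;

	public inline function follow():Void this.click();
}

// Usage: any AnchorElement from the DOM can be treated as an Anchor,
// with no conversion cost in the generated JS.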
I just sent an email abuse report to MailChimp, documenting the unsolicited emails I have been receiving from the Australian Labor Party. I understand that one of their members may have accidentally entered the wrong email address once, but it is unfair that they have subscribed me to 8 separate email lists so far, and I have to take the time to unsubscribe from each one – when I never subscribed in the first place. This is illegal under the Spam Act 2003.
Here is the email I sent them:
Hi
In Australia we have 2 main political parties, one of them being Labor. They’ve recently been voting for a new leader. I’m not a member of the party, but I’ve received an email from Labor people almost every week for the last month – all from different lists, and I have unsubscribed every single time, choosing the “I never subscribed” option.
5 of the 8 lists I received unsolicited email from were hosted by MailChimp, the remainder by NationBuilder (good luck competing! As a web developer I like your product better)
EMAIL 3
Subject: I want to make a difference
Sender: NationBuilder
EMAIL 4
Subject: Join Bill Shorten for a conversation with country members tonight
Sender: MailChimp
X-campaignid: mailchimp2a0336f66d93a2c444ff3d779.ed092fdc95
EMAIL 5
Subject: Make your vote count
Sender: NationBuilder
EMAIL 6
Subject: I’m supporting Bill
Sender: MailChimp
X-campaignid: mailchimpd1d04c8e8a8375877ba044bc4.1636a97be2
EMAIL 8
Subject: Thank you for your support
Sender: MailChimp
X-campaignid: mailchimp429f4375fe72549c8e09fe0fd.53fef0c08a
As you can see, the Australian Labor Party is creating many different lists, and is subscribing my email address to each of them without my permission. I am assuming another Jason O’Neil entered an incorrect email address (this happens often), but as it stands it looks like the Labor Party is importing a bulk list of email addresses into MailChimp, and while I can unsubscribe from each individual mailing list, they will just create new lists and re-import my address.
Unsolicited email is illegal in Australia under the Spam Act 2003. I have attempted to contact them but was unable to resolve the issue, and I have received multiple emails since then.
I guess what I am asking:
– Can you contact the owners of these lists and remind them of their responsibilities under law and under your terms and conditions?
– Is there a way to prevent my address from being imported into new lists during a bulk import?
When I see the human face behind a political issue, the emotions of apathy, indignation or anger grow weaker. Then empathy (understanding their suffering), sorrow (grieving with them), remorse (that collectively, the humans in my country did this to other humans) – these emotions grow stronger. Finally, there is hope – these people have courage, and see a better future. They want to fight injustice, and fight for the rights of those who follow. I hope, at the end of my life, I can say that I partnered with these guys, not the powers that locked them up and stole their childhood.
I struck up a conversation with him, and he casually mentioned that he was having trouble adjusting to Columbia, due to his “previous situation.” So I asked him to elaborate.
“I was born in Egypt,” he said. “I worked on a farm until 3rd grade with no education. I came to the US for one year, started 4th grade, but was pulled out because my father couldn’t find work and returned to Egypt for a year. The first time I went to an actual school was middle school, but the whole school was in one classroom, and I was working as a delivery boy to help the family. It was illegal for me to be working that young, but I did. When I finally got into high school, my house burned down. We moved into a Red Cross Shelter, and the only way we could live there is if we all worked as volunteers. I got through high school by watching every single video on Khan Academy, and teaching myself everything that I had missed during the last nine years. Eventually I got into Queens College. I went there for two years and I just now transferred to Columbia on a scholarship provided by the New York Housing Association for people who live in the projects. It’s intimidating, because everyone else who goes to Columbia went to the best schools, and have had the best education their entire lives.”
I’m not lapsed. I am a Catholic in waiting — waiting for my church to remember the Gospels, to be a justice and peace-seeking community, to be fully inclusive of women and to be welcoming to people who are not hetero-normative.
I want to be an Entrepreneur. So do a lot of people these days.
It’s been made famous by the likes of Steve Jobs, Bill Gates and Mark Zuckerberg, and more recently startups like AirBNB, DropBox, Instagram and Rovio (Angry Birds) have painted a vivid portrait of the glorious life of a founder: working on something you’re passionate about, creating wealth, pleasing users and making your mark on the world.
People like Paul Graham, whose startup school Y Combinator has funded over 800 startups, argue strongly that startups are the best model for business and innovation, and I tend to agree. They also point out how hard it all is.
While on holiday, I was thinking about what it means to live subversively – inside one world, with its values (being cool, making money, making a mark on the world), while belonging to another reality (living selflessly, trying to bring justice, hope and love into the world, trusting God to help you do things you couldn’t do on your own).
For most people, startups are about one of these things:
Making your mark on the world (“leaving a dent in the universe”, to quote Steve Jobs)
Making a lot of money very quickly
Getting paid to do something you love doing
The thrill of doing something risky. Not choosing a boring life.
I have no problem with any of these motivations, other than the possible vanity and self-absorption that could come with success. But I wanted to think about what the drive is to run a startup when you come at it from a Christian world-view.
I came up with this line:
“An Exploration of Grace”
(Note: this was an unpublished draft from September 2013. I had a few extra sentences and probably intended to write more, but I can’t remember where it was going. Publishing this in 2023, a decade later, I love this line and will choose to leave it there, and let the sentence speak for itself and evoke what it will).
According to Marcus Buckingham’s StandOut profile, this is how to get me doing my best work when I’m working with you:
I am resourceful and can fill the gaps quicker than most. If there’s a project to begin that lacks details or data, I can get it off to a good start.
Tell me that if I try to serve everyone I wind up serving no-one. I must make a choice about who to serve, and then serve them well. Know that I will be sensitive to any criticisms.
If you’d like to grab my attention, tell me I am not moving boldly enough. Tell me that you expect me to be the first person to challenge an existing way of doing things, the first person to spot, bump into, and report back on a new threat, or a new opportunity.
The overall StandOut profile was disturbingly accurate for me and a friend who took it with me, and this tip also resonates strongly. I would recommend the profile to anyone seeking to understand their work self better, and I’d recommend this advice to people trying to work better with me :)
A while ago I posted about neko.Web.cacheModule, a way to make your site run dramatically faster on mod_neko or mod_tora. After spending some time making ufront compatible with this mode of running, I was excited to deploy my app to our schools with the increased performance.
I deployed, ran some tests, everything seemed happy, so I went home and slept well. I woke up the next morning to a bunch of 501 errors.
The Error
The error message was coming from the MySQL connection: “Failed to send packet”.
At first I just disabled caching, got everyone back online, and then went about trying to isolate the issue. It took a while before I could reproduce it and pinpoint it. I figured it was to do with my DB connection staying open between requests, thanks to the new module caching.
A Google search showed only one Haxe-related result – someone on IRC who mentioned that when they sent too many SQL queries it sometimes bombed out with this error message. Perhaps leaving the connection open eventually overran some buffer and caused it to stop working? Turns out this was not the case: I used `ab` (the Apache Benchmark tool) to query a page 100,000 times and still didn’t see the error.
Eventually I realised it was to do with the MySQL server dropping the connection after a period of inactivity. The `wait_timeout` variable was set to 28800 by default: 8 hours of inactivity. That’s long enough that I didn’t notice it the night before, but short enough that the timeout occurred overnight while all the staff were at home or asleep. So the MySQL server dropped the connection, and my Haxe code did not know to reconnect. Whoops.
The Solution
I looked at Nicolas’s hxwiki source code, which runs the current Haxe site for some inspiration on the proper way to approach this for `neko.Web.cacheModule`. His solution: use http://api.haxe.org/sys/db/Transaction.html#main. By the looks of it, this will wrap your entire request in an SQL transaction, and should an error be thrown, it will rollback the transaction. Beyond that, it will close the DB connection after each request. So we have a unique DB connection for each request, and close it as soon as the request is done.
My source code looks like this:
class Server
{
	static var ufApp:UfrontApplication;

	static function main() {
		#if (neko && !debug) neko.Web.cacheModule(main); #end

		// Wrap all my execution inside a transaction
		sys.db.Transaction.main( Mysql.connect(Config.db), function() {
			init();
			ufApp.execute();
		});
	}

	static function init() {
		// If cacheModule is working, this will only run once
		if ( ufApp == null ) {
			UFAdminController.addModule( "db", "Database", new DBAdminController() );
			ufApp = new UfrontApplication({
				dispatchConfig: Dispatch.make( new Routes() ),
				remotingContext: Api,
				urlRewrite: true,
				logFile: "log/ufront.log"
			});
		}
	}
}
Here is my servant whom I have chosen
the one I love, in whom I delight;
I will put my Spirit on him,
and he will proclaim justice to the nations.
He will not quarrel or cry out; no one will hear his voice in the streets.
A bruised reed he will not break,
and a smoldering wick he will not snuff out, till he leads justice to victory. In his name the nations will put their hope.
Matthew 12:18-21
—
my servant
Jesus was a servant, and as his disciples, we are his. Have no illusions, we are here to serve, not to be served.
I have chosen
Each person chosen and assigned their role in ushering in God’s Kingdom, according to their unique God-given skills, strengths and gifts.
I love
Our strength and courage draws on this love God has for us.
in whom I delight
Our motivation is his delight. Not to earn it, but to revel in it and enjoy it and immerse ourselves in it.
my Spirit on him
This isn’t merely natural work and effort, this is work empowered and affirmed by God’s Holy Spirit.
proclaim justice
Equality, fairness, hope, safety, opportunity
will not quarrel or cry out
It’s not about the sport, spectacle or stardom of society’s idea of success.
no one will hear his voice
Less talk, more action. Less brand and perception and posturing, more life change.
bruised reed… smoldering wick
The hurt, oppressed, poor, hopeless and helpless, sick, overlooked.
till
Mercy is the strategy and the game plan. We hold to the strategy until the end.
leads justice to victory
Justice will win out, but it’s slow and requires action, leadership.
In his name the nations will put their hope
This is my life’s work:
to offer Jesus’ hope to all that you can,
to work as he did,
empowered as he was,
with the values he carried
and the strategy he adopted
to the same end he strived for:
the victory of justice,
the hope of the nations,
the delight of the Father.
On my token one day a week, I continued on my Node-Webkit project. This time I made externs for Kue (which appear to be working) and FFMpeg (not functional just yet). Still enjoying working with Node-Webkit, and with the Node-API library especially. Sad I didn’t get to make more progress on it this week.
Ufront:
Make tracing / logging work reliably between multiple requests. After enabling neko.Web.cacheModule(), I began to find areas where Ufront was not very multiple-request-friendly. These would have surfaced later with a port to client-side JS or Node JS, but it’s good to find them now. One problem was that our tracing and logging modules were behaving as if there was only one request at a time. This could result in a trace message for one request ending up being output to somebody else’s request, which is obviously bad.
The problem is a tricky one, as trace() always translates to haxe.Log.trace(), and with Ufront’s multiple-requests-at-a-time design, you can’t know which request is the current one from a static method. If I think of a clever way to do it, possibly involving cookies and sessions, then I might include a HttpContext.getCurrentContext() static method. This would probably have to be implemented separately for each supported platform.
The solution for now, however, was to keep track of log messages not in the TraceModule, but in the HttpContext. Then on the onLogRequest event, the trace modules get access to the log messages for the current context, and can output them to the browser, to a file, or wherever they choose.
The downside is that you have to use httpContext.ufTrace() rather than trace(). I added a shortcut for this in both ufront.web.Controller and ufront.remoting.RemotingApiClass, so that in your controllers or APIs you can call ufTrace() and it will be associated with the current request. There is also ufLog, ufWarn and ufError.
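To make that concrete, here is a quick sketch of the difference, using the names described above (illustrative, not exact library code):

// Inside a ufront.web.Controller or ufront.remoting.RemotingApiClass method,
// the shortcuts are tied to the current request:
ufTrace( "loading posts" );
ufLog( "post saved" );
ufWarn( "slow query" );
ufError( "database offline" );

// Anywhere else, go through the context explicitly:
httpContext.ufTrace( "loading posts" );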
I also made RemotingModule work similarly with tracing and logging – so logs go to both the log file and the remoting call to the browser.
Fix logging in ErrorModule. One of the things that made debugging the new ufront really hard was that when there was an Error, the ErrorModule displayed, but the trace messages did not get sent to the browser or the log file. I did a bit of code cleanup and got this working now.
Fixed File Sessions / EasyAuth. Once I was able to get my traces and logs working more consistently, I was able to debug and fix the remaining issues in FileSession, so now EasyAuth is working reliably, which is great.
Added Login / Logout for UF-Admin. With UF-Admin, I added a login screen and logout, that works with EasyAuth for now. I guess I will make it pluggable later… For now though it means you can set up a simple website, not worry about auth for the front end, but have the backend password protected. If you use EasyAuth for your website / app, the same session will work on the ufadmin page.
Created uf-content for app-generated files. I moved all app-generated files (sessions, logs, temp files etc.) into a folder called “uf-content”. Then I made this configurable, and relative to httpContext.request.scriptDirectory. You can configure it by changing the contentDirectory option in your UfrontConfiguration. This will make deployment easier: we can have instructions to make that single directory writeable but not accessible via the web, and then everything that requires FileSystem access can work reliably from there.
Pushed new versions of the libraries. Now that the basics are working, I pushed new versions of the libraries to Haxelib. They are marked as ufront-* with version 1.0.0-beta.1. From here it will be easy to update them individually and move towards a final release.
Demo Blog App. To demonstrate the basics of how it works, and a kind of “best practices” for project structure, I created a demo app, and thought I would start with a blog. I started, and the basic setup is there, including the config structure and each of the controller actions, and the “ufadmin” integration. But it’s not working just yet, needs more work.
Identified Hair website. I have a website for a friend’s small business that I’ve been procrastinating working on for a long time. On Saturday I finally got started on it, and set up the basic project and routes in Ufront. In about 4 hours I managed to get the project set up, all the controllers / routes working, all the content in place and a basic responsive design with CSS positioning working. All the data is either HTML, Markdown or Database Models (which get inserted into views). Once I’ve got their branding/graphics included, I’ll use ufront to provide a basic way to change data in their database. And then if they’re lucky, I might look at doing some Facebook integration to show their photo galleries on the site.
Imagine being married to someone who was absolutely intent on providing a nice home for you. And by “intent”, I mean obsessive. They clean every day – and not just tidy, but dust, mop, scrub and disinfect, and then put nice-smelling candles everywhere. They constantly decorate and set things up just the way you like it. They maintain the garden and keep the yard orderly. They put on an amazing dinner every day, starting it early so that it finishes right on time as you get home.
And then the moment you walk through the door, they head off to their own section of the house. No welcome, no eye contact, no hugs, no acknowledgement at all. For the rest of the night, you try to enjoy the beautifully pristine house on your own. Apparently they do it because they love you. But you can’t help but feel if they really loved you they’d at least want to see you, talk to you, spend time with you and touch you. The service is there, and it’s great – as good as you’d get in a hotel. But, much like the hotel, this is not love, it’s not a relationship, it’s just service.
I imagine this is how God feels about people who are too busy being religious to spend time getting to know him.
For I desire mercy, not sacrifice. And acknowledgement of God rather than burnt offerings.
Hosea 6:6
Hello! For a reason I can’t comprehend, this page is the most visited page on my blog. If you’re looking for information about logging in Haxe, the “Logging and Tracing” page in the manual is a good start.
If you can be bothered, leave a comment and let me know what you’re looking for or how you came to be here. I’d love to know!
Jason.
Every week as part of my work and as part of my free time I get to work on Haxe code, and a lot of that is contributing to libraries, code, blog posts etc. Yesterday was one of those frustrating days where I told someone I’d finish a demo ufront app and show them how it works, but I just ran into problem after problem and didn’t get it done, and was feeling pretty crap about it.
After chatting it out I looked back at my week and realised: I have done a lot. So I thought I should start keeping a log of what I’ve been working on – mostly for my own sake, so I can be encouraged by the progress I have made, even if I haven’t finished stuff yet. But also in case anything I’m working on sparks interest or discussion – it’s cool to have people know what I’m up to.
So I’d like to start a weekly log. It may be one of those things I do once and never again, or it may be something I make regular: but there’s no harm in doing it once.
So here we go, my first log. In this case, it’s not just this week, some of it requires me to go back further to include things I’ve been working on, so it’s a pretty massive list:
Node Webkit: On Mondays I work at Vose Seminary, a tertiary college, where I help them get their online / distance education going – editing videos, setting up online Learning Management Systems, etc. I have a bunch of command-line utils that make the video editing / exporting / transcoding / uploading process easier, but I want to give them a GUI so other staff can use them. Originally I was thinking of using OpenFL / StablexUI, but I’m far more comfortable with the JS / Browser API than the Flash API, so Node-Webkit looked appealing. On Monday I made my first Haxe-NodeJS project in over a year, using Clement’s new Node-API repo. It’s beautiful to work with, and within an hour and a half I had written some externs and had my first “hello world” Node-Webkit app. I’ll be working on it again this coming Monday.
neko.Web.cacheModule: I discovered a way to get a significant speed-up in your web-apps. I wrote a blog post about it.
Ufront: I’ve done a lot of work on Ufront this week. After my talk at WWX this year, I had some good chats with people and basically decided I was going to undertake a major refactor of ufront. I’m almost done! Things I’ve been working on this week (and the last several weeks, since it all ties in together):
Extending haxe.web.Dispatch (which itself required a pull request) so it can be subclassed, allowing you to 1) execute the ‘dispatch’ and ‘executeAction’ steps separately and 2) return a result, so that you can get the result of your dispatch methods. This works much more nicely with Ufront’s event-based processing, and allows for better unit testing / module integration etc. The next step is allowing dispatch to have asynchronous handlers (for Browser JS and Node JS). I began thinking through how to implement this also.
After discovering neko.Web.cacheModule, I realised that it had many implications for Ufront. Basically: you can use static properties for anything that is generic to the whole application, but you cannot use them for anything specific to a request. This led to several things breaking – but also the opportunity for a much better (and very optimised) design.
IHttpSessionState, FileSession: the first thing that broke was the FileSession module. The neko version was implemented entirely using static methods, which led to some pretty broken behaviour once caching between requests was introduced. In the end I re-worked the interface “IHttpSessionState” to be fairly minimal, extended by the “IHttpSessionStateSync” and “IHttpSessionStateAsync” interfaces, so that we can begin to cater for async platforms. I then wrote a fresh FileSession implementation that uses cookies and flat files, and should work across both PHP and Neko (and in future, Java/C#). The JS target would need a FileSessionAsync implementation.
IAuthHandler / EasyAuth: At the conference I talked about how I had an EasyAuth library that implemented a basic User – Group – Permission model. At the time, this also was implemented with static methods. Now I have created a generic interface (IAuthHandler) so that if someone comes up with an auth system other than EasyAuth, it can be compatible. I also reworked EasyAuth to work with a) different IHttpSessionState implementations and b) different IAuthAdapters – basically, an IAuthAdapter is an interface with a single method, `authenticate()`, which tells you if the user is logged in or not. EasyAuth by default uses EasyAuthDBAuthAdapter, which compares a username and password against those in the database. You could also implement something that uses OpenID, or a social media logon, or LDAP, or anything. All this work trying to make it generic enough that different implementations can co-exist will definitely pay off, I think, but for now it helps to have a well-thought-out API for EasyAuth :)
YesBoss: Sometimes you don’t want to worry about authentication. Ufront has the ability to create a “tasks.n” command-line file, which runs tasks through a Command Line Interface rather than over the web. When doing this, you kind of want to assume that if someone has access to run arbitrary shell commands, they’re allowed to do what they want with your app. So now that I have a generic interface for checking authentication, I created the “YesBossAuthHandler” – a simple class that can be used wherever an authentication system is needed, but whose every permission check lets you pass. You’re the boss, after all.
Dependency Injection: A while ago, I was having trouble understanding the need for Dependency Injection. Ufront has now helped me see the need for it. In the first app I started making with the “new” ufront, I wanted to write unit tests. I needed to be able to jump to a piece of code – say, a method on a controller – and test it as if it was in a real request, but using a fake request. Dependency injection was the answer, and so in that project I started using Minject. This week, realising I had to stop using statics and singletons in things like sessions and auth handling, I needed a way to get hold of the right objects, and dependency injection was the answer. I’ve now added it as standard in Ufront. There is an `appInjector`, which defines things that should be injected everywhere (modules, controllers, APIs etc). For example, injecting app configuration or a caching module or an analytics API. Then there is the dispatchInjector, which is used to inject things into controllers, and the remotingInjector, which is used to inject things into APIs during remoting calls. You can define things you want to make available at your app entry point (or your unit test entry point, or your standalone task runner entry point), and they will be available when you need them. (As a side note, I now also have some great tools for mocking requests and HttpContexts using Mockatoo).
Tracing: Ufront uses Trace Modules. By default it comes with two: TraceToBrowser and TraceToFile. Both are useful, but I hadn’t anticipated some problems with the way they were designed. In the ufront stack, modules exist at the HttpApplication level, not at the HttpRequest level. On PHP (or uncached neko), there is little difference. Once you introduce caching, or move to a platform like NodeJS – this becomes a dangerous assumption. Your traces could end up displaying on somebody else’s request. In light of this, I have implemented a way of keeping track of trace messages in the HttpContext. My idea was to then have the Controller and RemotingApiClass have a trace() method, which would use the HttpContext’s queue. Sadly, an instance `trace()` method currently does not override the global `haxe.Log.trace()`, so unless we can get that fixed (I’m chatting with Simon about it on IRC), it might be better to use a different name, like `uftrace()`. For now, I’ve also made a way for TraceToBrowser to try guess the current HttpContext, but if multiple requests are executing simultaneously this might break. I’m still not sure what the best solution is here.
Error Handling: I tried to improve the error handling in HttpApplication. It was quite confusing and sometimes resulted in recursive calls through the error stack. I also tried to improve the visual appearance of the error page.
Configuration / Defaults: The UfrontApplication constructor was getting absurd, with something like 6 optional parameters. I’ve moved instead to having a `UfrontConfiguration` typedef with all of the parameters; you can supply all, some or none of them, and fall-backs will be used if needed. This also improves the appearance of the code, from:
new UfrontApplication( true, "log.txt", Dispatch.make(new Routes()) );
to
new UfrontApplication({ urlRewrite: true, dispatchConf: Dispatch.make( new Routes() ), logFile: "log.txt" });
More ideas: last night I had trouble getting to sleep. Too many ideas. I sent myself 6 emails (yes 6) all containing new ideas for Ufront. I’ll put them on the Ufront Trello Board soon to keep track of them. The ideas were about Templating (similar abstractions and interfaces I have here, as well as ways of optimising them using caching / macros), an analytics module, a request caching module and setting up EasyAuth to work not only for global permissions (CanAccessAdminArea), but also for item-specific permissions: do you have permission to edit this blog post?
NodeJS / ClientJS: after using NodeJS earlier in the week, I had some email conversations with both Clement and Eric about using Ufront on NodeJS. After this week it’s becoming a lot more obvious how this would work, and I’m getting close. The main remaining task is to support asynchronous calls in these 3 things: Dispatch action execution calls, HttpRemotingConnection calls, and database calls – bringing some of the DB Macros magic to async connections. But it’s looking possible now, whereas it looked very difficult only 3 months ago.
CompileTime: I added a simple CompileTime.interpolateFile() macro. It basically reads the contents of the file at macro time, and inserts it directly into the code, but it inserts it using String Interpolation, as if you had used single quotes. This means you can insert basic variables or function calls, and they will all go in. It’s like a super-quick and super-basic poor-man’s templating system. I’m already using it for Ufront’s default error page and default controller page.
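For example – the file name and variables here are hypothetical, but this is the usage pattern described:

// error.html contains: <h1>Error $code</h1><p>$reason</p>
class Example {
	static function main() {
		var code = 404;
		var reason = "Page not found";
		// The file is read at compile time and inlined here as if it were
		// a single-quoted string, so $code and $reason are interpolated.
		var html = CompileTime.interpolateFile( "error.html" );
		trace( html ); // <h1>Error 404</h1><p>Page not found</p>
	}
}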
Detox: this one wasn’t this week, but a couple of weeks ago. I am working on refactoring my Detox (DOM / Xml manipulation) library to use abstracts. It will make for a much more consistent API, better performance, and some cool things, like auto-casting strings to DOM elements:
"div.content".find().append( "<h1>My Content</h1>" );
My Work Project: Over the last two weeks I’ve updated SMS (my School Management System project, the main app I’ve been working on) to use the new Ufront. This is the reason I’ve been finding so much that needs to be updated, especially trying to get my app to work with neko.Web.cacheModule.
Until now I haven’t had to worry much about the speed of sites made using Haxe / Ufront – none of the sites or apps I’ve made have anywhere near the volume for it to be a problem, and the general performance was fast enough that no one asked questions. But I’m soon going to be part of building the new Haxe website, which will have significant volume.
So I ran some benchmarks using ab (Apache’s benchmarking tool), and wasn’t initially happy with the results. They were okay, but not significantly faster than your average PHP framework. Maybe I would have to look at mod_tora or NodeJS for deployment.
Then I remembered something: a single line of code you can add that vastly increases the speed: neko.Web.cacheModule(main).
Benchmarks
Here is some super dumb sample code:
class Server {
	static var staticInt = 0;

	static function main() {
		#if neko
		neko.Web.cacheModule(main); // comment out to test the difference
		#end
		var localInt = 0;
		trace( ++staticInt );
		trace( ++localInt );
	}
}
And I am testing with this command:
ab -n 1000 -c 20 http://localhost/
Here are my results (in requests/second on my laptop):
Apache/mod_php (no cache): 161.89
NekoTools server: 687.49
Apache/mod_neko (no cache): 1054.70
Apache/mod_tora (no cache): 745.94
Apache/mod_neko (cacheModule): 3516.04
Apache/mod_tora (cacheModule): 2185.30
First up: I assume mod_tora has advantages on sites that use more memory, but for a dummy sample like this its overhead outweighs the benefit.
Second, and related: I know these tests are almost worthless – we really need to be testing a real app, with file access and template processing and database calls.
Let’s do that, same command, same benchmark parameters:
Apache/mod_php (no cache): 3.6 (ouch!)
NekoTools server: 20.11
Apache/mod_neko (no cache): 48.74
Apache/mod_tora (no cache): 33.29
Apache/mod_neko (cacheModule): 351.42
Apache/mod_tora (cacheModule): 402.76
(Note: PHP has similar caching, using modules like PHP-APC. I’m not experienced setting these up however, and am happy with the neko performances I’m seeing so I won’t investigate further)
Conclusions:
the biggest speed up (in my case) seems to come from cacheModule(), not mod_tora. I believe once memory usage increases significantly, tora brings advantages in that arena, and so will be faster due to less garbage collection.
This could be made faster – my app currently has very little optimisation:
the template system uses Xml, which I assume isn’t very fast.
a database connection is required for every request
there is no caching (memcached, redis etc)
I think I have some terribly inefficient database queries that I’m sure I could optimise.
Ufront targeting Haxe/PHP is not very fast out-of-the-box. I’m sure you could optimise it, but it’s not there yet.
This is running on my laptop, not a fast server. Then again, my laptop may be faster than a low end server, not sure.
Usage
So, how does it work?
#if neko neko.Web.cacheModule( main ); #end
The conditional compilation (#if neko and #end) is just there so that you can still compile to other targets without getting errors. The cacheModule function has the following documentation:
Set the main entry point function used to handle requests.
Setting it back to null will disable code caching.
The entry point is usually going to be the main() function that is called when your code first runs. So when the docs ask for a function to use as the entry point, I just use main – meaning the static function main() that I am currently in.
I’m unsure of the impact of having multiple “.n” files or a different entry point.
The cache is reset whenever the file timestamp changes: so when you re-compile, or when you load a new “.n” file in place.
If you wanted to manually disable the cache for some reason, you use cacheModule(null). I’m not sure what the use case is for this though… why disable the cache?
Gotchas (Static variable traps with cacheModule)
The biggest gotcha is that static variables persist in your module. They are initialized just once, which is a big part of the speed increase. Let’s look at the example code I posted before:
class Server {
	static var staticInt = 0;

	static function main() {
		#if neko
		neko.Web.cacheModule(main); // comment out to test the difference
		#end
		var localInt = 0;
		trace( ++staticInt );
		trace( ++localInt );
	}
}
With caching disabled, both trace statements print “1” on every request. With caching enabled, the staticInt variable does not get reset – it is initialised at 0 just once, so every page load continues to increment it: it prints 1, then 2, then 3, and keeps going up, while localInt still prints “1” every time.
What does this mean practically:
If you want to cache stuff, put it in a static variable (there’s a small sketch at the end of this post). For example:
Database connections: store them in a static variable and the connection will persist.
Templates: read them from disk once, store them in a static variable.
App config: especially if you’re parsing JSON or Xml, put the result in a static and it stays cached.
Things which should be unique to a request should not be stored in a static variable. For example:
Ufront has a class called NekoSession, which was entirely static methods and variables, and made the assumption that the statics would be reset between requests. Wrong! Having the session cached between different requests (by different users) was a disaster – every time you clicked a link you would find yourself logged in as a different user. Needless to say we needed to refactor this to not use statics :) To approach it differently, you could use a static var sessions:Map<String,SessionData> (keyed by session ID) and actually have it work appropriately, as long as the cache stayed alive.
Avoid singletons like Server.currentUser, or even User.current – these static variables are most likely going to be cached between requests, leading to unusual results.
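To make the two cases concrete, here’s a minimal sketch – the class names are hypothetical, reusing the Mysql.connect(Config.db) call from my earlier post:

class Db {
	// Safe to cache: set up once, then reused by every request while the module stays cached.
	// (Keep the wait_timeout gotcha from my earlier post in mind, though.)
	static var cnx:sys.db.Connection;

	public static function get():sys.db.Connection {
		if ( cnx == null ) cnx = sys.db.Mysql.connect( Config.db );
		return cnx;
	}
}

class BadIdea {
	// NOT safe to cache: this static survives between requests, so one
	// user’s login would leak into the next user’s page load.
	public static var currentUser:User; // User is a stand-in for your own model
}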