Here is a list of notable Web projects that I have worked on, either for work or as my own personal projects,
ordered from most recent to oldest, with the years in which the projects were in active development displayed to
the right of the project name. In the sub-heading of each site I have listed a few of the
significant technologies/languages/services used - CSS and JavaScript, for example, won't
be listed unless they played a large part in the project.
The project names are links that will take you to the site in question, a demonstration of the software, or a
gallery of screenshots from the project if the project isn't publicly available (whichever is applicable).
Over a total of approximately eight months spanning a three-year period, I maintained, fixed and improved the
NCRM's main website in my role as a consultant developer. Some of the larger tasks included applying a brand
new design to the whole site, and creating a new "portal" area of the site, complete with a secure registration
and authentication system that allowed users to choose between creating a new account for the NCRM site and
logging in with their existing institutional ID (e.g. a university account), the latter of which required
integrating with a third-party authentication system.
Designed to be used exclusively on four Apple iPad devices at NCRM's bi-annual Research Methods Festival, this
website/app asked a simple question and invited users to enter their own textual response. As we had full
control over the end devices, we were able to use some technologies that otherwise would have been impractical
or required too much development time. The result is a very animated, fluid interface that looks much like a
native app while requiring only fairly basic Web technologies to create.
And yes, someone answered the question with just one word: "Boobs".
I make heavy use of a distributed IRC client called Quassel, which stores all of its chat logs in a central MySQL
database on my main remote server. This system is very convenient for seamlessly jumping between many client
devices (desktop PC, laptop, smartphone, etc.), but it has the disadvantage of making it difficult to quickly
search through what can amount to many years' worth of chat logs.
In order to facilitate easy searching of these logs, a Web-based log search script was created by someone by the
name of m4yer. While functional, this script was
rather basic and the code seemed a little outdated. As such, I set out to rewrite it using up-to-date techniques
such as full OO programming and MVC, along with tools like Bootstrap, Composer, Bower and SCSS, while also adding
extra functionality.
Unfortunately this project is still in progress and frustratingly incomplete; however,
the code that I've pushed to GitHub
is, at least, a good indication of the direction in which I was going, as well as the style of code that I write
when I have few constraints placed upon me.
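To give a flavour of what sits at the core of the rewrite, here is a minimal sketch of the kind of search query involved. The table and column names (backlog, sender, buffer) follow Quassel's schema as I understand it and should be treated as assumptions rather than an excerpt from the real code:

```php
<?php
// Minimal sketch of the kind of query the log-search script runs.
// Table/column names (backlog, sender, buffer) are assumptions based on
// Quassel's schema, not a verbatim excerpt from the project.
$pdo = new PDO('mysql:host=localhost;dbname=quassel', 'quassel', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

function searchBacklog(PDO $pdo, string $term, int $limit = 100): array
{
    $sql = 'SELECT b.time, bf.buffername, s.sender, b.message
            FROM backlog AS b
            JOIN sender AS s  ON s.senderid = b.senderid
            JOIN buffer AS bf ON bf.bufferid = b.bufferid
            WHERE b.message LIKE :term
            ORDER BY b.time DESC
            LIMIT ' . $limit;

    $stmt = $pdo->prepare($sql);
    $stmt->execute([':term' => '%' . $term . '%']);

    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

foreach (searchBacklog($pdo, 'composer') as $row) {
    printf("[%s] <%s> %s\n", $row['time'], $row['sender'], $row['message']);
}
```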
Rather like the photo/image gallery listed below, this script aims to present a directory of video files to a visitor via a Web interface - again, with no/minimal configuration in each directory in which it's used. It operates on two directory levels, allowing for one level of categorisation of the videos.
Each video is analysed by the script (using an ffmpeg/libav library) which extracts details such as the video's dimensions, length, video and audio codecs, number of audio channels, and so on. These data are then cached in a hidden file to minimise subsequent load time and load on the server. The script also generates both static and animated thumbnail images (using ffmpeg/libav and ImageMagick) from the video source: the static image is displayed normally, being temporarily replaced with the animated image when the user hovers their mouse cursor over the video details. These thumbnail images are obviously also cached.
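As a rough illustration of that analyse-and-cache step, the sketch below assumes the ffprobe command-line tool is available (the real script uses an ffmpeg/libav library) and uses a hypothetical `.videometa.json` cache file:

```php
<?php
// Sketch of the analyse-and-cache step, assuming the ffprobe CLI is
// available; the real script uses an ffmpeg/libav library, and the
// ".videometa.json" cache filename here is purely illustrative.
function videoDetails(string $path): array
{
    $cacheFile = dirname($path) . '/.videometa.json';
    $cache = is_file($cacheFile) ? json_decode(file_get_contents($cacheFile), true) : [];
    $key = basename($path) . ':' . filemtime($path);

    if (!isset($cache[$key])) {
        $cmd = 'ffprobe -v quiet -print_format json -show_format -show_streams '
             . escapeshellarg($path);
        $info = json_decode(shell_exec($cmd), true);

        $video = $audio = null;
        foreach (($info['streams'] ?? []) as $stream) {
            if ($stream['codec_type'] === 'video' && $video === null) $video = $stream;
            if ($stream['codec_type'] === 'audio' && $audio === null) $audio = $stream;
        }

        $cache[$key] = [
            'width'    => $video['width']  ?? null,
            'height'   => $video['height'] ?? null,
            'duration' => (float) ($info['format']['duration'] ?? 0),
            'vcodec'   => $video['codec_name'] ?? null,
            'acodec'   => $audio['codec_name'] ?? null,
            'channels' => $audio['channels'] ?? null,
        ];
        file_put_contents($cacheFile, json_encode($cache));
    }

    return $cache[$key];
}
```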
There's no built-in way of displaying/embedding the videos within the browser window, both because this is extremely difficult to get to work without resorting to plugins, transcoding video, etc., and because it was simply not something I needed the script to do! Instead it's easy for viewers to just download the video files to their own computers and watch them there (or stream them, if their browser or other video player supports that).
This was developed for the Breakthrough Breast Cancer charity as a way of communicating difficult information to women regarding screening for breast cancer. It was designed to be very striking, "different" and a bit playful in order to lighten the mood of what can be a very dark and tough subject.
To that end we used a jQuery library that I had previously discovered called Scroll Path, which causes a Web browser to appear to scroll in a non-linear manner - instead of merely moving up and down, the standard ways of scrolling through a Web page cause the browser to follow a convoluted path through the content in many directions. The simple fact that most people had never seen a website like this before caused it to gain a lot of attention for the charity and no doubt played a large part in its subsequent winning of four separate awards (see below).
A common problem at MAXX was that clients who owned and retained control of their own domain names would need to set up the appropriate DNS records to point their domains at MAXX's Web servers. This only required two records, but for people who aren't already familiar with DNS it would often still prove too difficult, and matters were made worse by certain domain registrars/hosts having incomprehensible, functionally-limited or downright broken control panels - some even had staff who claimed that the required records were actually impossible to set.
Since I was the most technically-inclined person in the office, I was always the one who was asked (either by colleagues or clients) to check on the state of client domains to see if the domain had yet been set up correctly - this saved them the time of contacting the remotely-located sysadmin. This rapidly became a large waste of my time, having to frequently type and run a handful of `dig` commands. I therefore wrote, over the period of a few weeks, a script to automate all of the work for me, to the point that it was possible for me to supply a URL to colleagues, or even clients, which would check the domain, report exactly what was wrong and how to fix it. This could even be supplied to clients' domain hosts, though I don't think that ever happened.
Essentially this is just an example of me writing a PHP script to automate a common, time-consuming task (in this case it was a Web-based script, but many of my automation scripts are purely command line-based).
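The sketch below gives an idea of how such a check can be automated with PHP's built-in `dns_get_record()`; the expected IP address and CNAME target are placeholders rather than MAXX's real values, and the real script checked considerably more than this:

```php
<?php
// Rough sketch of the domain-checking logic, using PHP's built-in
// dns_get_record(). The expected IP address and CNAME target below are
// placeholders, not MAXX's real values.
const EXPECTED_A     = '203.0.113.10';       // Web server IP (placeholder)
const EXPECTED_CNAME = 'sites.example.net';  // www CNAME target (placeholder)

function checkDomain(string $domain): array
{
    $problems = [];

    $a = dns_get_record($domain, DNS_A) ?: [];
    $ips = array_column($a, 'ip');
    if (!in_array(EXPECTED_A, $ips, true)) {
        $problems[] = "The A record for $domain should point to " . EXPECTED_A
                    . ' (currently: ' . (implode(', ', $ips) ?: 'none') . ').';
    }

    $cname = dns_get_record('www.' . $domain, DNS_CNAME) ?: [];
    $targets = array_column($cname, 'target');
    if (!in_array(EXPECTED_CNAME, $targets, true)) {
        $problems[] = "www.$domain should be a CNAME pointing to " . EXPECTED_CNAME . '.';
    }

    return $problems;
}

$problems = checkDomain($_GET['domain'] ?? 'example.com');
echo $problems ? implode("\n", $problems) : 'This domain is set up correctly.';
```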
Britannia Pharmaceuticals asked MAXX to create for them a Web system that would enable the sharing of documents between themselves and their worldwide partners. This site was the result.
It's relatively basic and mostly consists of an authentication system, an upload form with a way of selecting which organisations should be allowed to view which documents, and a system for searching, browsing and viewing the uploaded documents.
The complex part is that each uploaded document has all of its textual content extracted so that the search engine is able to find matches inside the uploaded documents. The formats searched include Microsoft Word, Excel and PowerPoint (both the old pre-2007 .doc/.xls/.ppt formats and the new, XML-based .docx/.xlsx/.pptx formats) and PDF. The text extraction was primarily performed using third-party libraries, such as `antiword`, and the XML-based formats are easy for a PHP script to interpret; however, I was unable to find a reader for the old PowerPoint (.ppt) format and ended up reverse-engineering much of the PPT format myself, as the documentation was far too long and complex for the time I had available.
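As an illustration of the "easy" XML-based case, here is a minimal sketch of pulling plain text out of a .docx file (the older binary formats went through third-party tools such as `antiword` instead):

```php
<?php
// Sketch of the "easy" case mentioned above: pulling searchable text out
// of an XML-based .docx file. The older binary formats went through
// third-party tools such as antiword instead.
function extractDocxText(string $path): string
{
    $zip = new ZipArchive();
    if ($zip->open($path) !== true) {
        throw new RuntimeException("Unable to open $path");
    }

    // The main body of a .docx lives in word/document.xml inside the zip.
    $xml = $zip->getFromName('word/document.xml');
    $zip->close();

    if ($xml === false) {
        return '';
    }

    // Paragraph ends (</w:p>) become spaces so words don't run together,
    // then all remaining tags are stripped to leave plain text.
    $xml = str_replace('</w:p>', ' ', $xml);

    return trim(preg_replace('/\s+/', ' ', strip_tags($xml)));
}

echo extractDocxText('example.docx');
```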
This is the most recent iteration of my personal website. This time around it was created using the
Zend Framework in fully-MVC mode, with semantic HTML5 markup and utilising
some of the newer features of CSS3, such as shadows, gradients and rounded corners. The RSS
feed still exists, and it now has a dedicated printer-friendly stylesheet and a dedicated
mobile-device stylesheet, both of which are activated automatically when printing a page or
viewing the site on a small-screen mobile device (smartphones, not tablets), respectively.
As this is only my personal site, very little effort was put into making the design work fully in all browsers.
As such, older browser versions (especially of Internet Explorer) will have some style quirks, such as lacking
support for the gradient backgrounds, shadows or rounded corners. However, all browsers should at least be able
to display the site in a way that makes it readable and usable, if not necessarily pretty - even down to
IE6.
You may have heard of the Geek Code - essentially a way to describe one's
"geekiness" in the form of a string of characters - although, being originally written in 1993 and not receiving
an updated since 1996, it is greatly showing its age these days (with a heavy emphasis on UNIX flavours besides
Linux, discussion of Netscape as a major Web browser, Windows not existing beyond Windows 95, OS/2 mentions and
even VMS - oh, and apparently DOOM is the best game ever). Well, this is the same thing, but for
furries.
Given the complexity of these codes, many people have written encoder and decoder applications or websites for
the geek code, furry code and the other "codes"; however, due to the age of these codes (few, if any, received
any attention after the 1990s...), the interfaces are similarly dated. I decided that it was high time to bring
the furry code kicking and screaming (or, perhaps, clawing and meowing) into the 21st century.
This script, therefore, is a complex work of JavaScript, jQuery and
jQuery UI. PHP is involved in generating the HTML, but only to reduce the repetitiveness and
redundancy of the source code - if one were to save the generated HTML, it would run quite happily in any
modern Web browser.
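As a hypothetical illustration of the sort of repetition the PHP layer removes, a small helper like the one below can emit a whole group of inputs, each carrying its label in an HTML5 data attribute for the JavaScript to read; the trait name and options are made up rather than taken from the real furry code:

```php
<?php
// Illustrative example only: one small loop emits a group of radio
// buttons, each carrying its label in an HTML5 data attribute for the
// JavaScript to read. The trait name and options here are invented.
function radioGroup(string $name, array $options): string
{
    $html = '';
    foreach ($options as $value => $label) {
        $id = htmlspecialchars($name . '_' . $value);
        $html .= sprintf(
            '<input type="radio" id="%s" name="%s" value="%s" data-label="%s">',
            $id,
            htmlspecialchars($name),
            htmlspecialchars($value),
            htmlspecialchars($label)
        );
    }
    return $html;
}

echo radioGroup('species', [
    'f' => 'Feline',
    'c' => 'Canine',
    'v' => 'Vulpine',
]);
```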
This script makes extremely heavy use of jQuery UI for its interface, to the point that it can take a couple of
seconds simply to render the HTML and execute the JavaScript when the page loads, even with 2012's JavaScript
engines such as SpiderMonkey
(Firefox) and V8 (Chrome) - unfortunately, no
amount of JavaScript compilation will increase the speed of DOM manipulation, and that is what the vast majority
of the code in this project does (primarily it's the calls to jQuery UI's button() method to
convert checkboxes and radio buttons to more consistently- and attractively-styled buttons).
In fact, it can be enlightening to view the page with JavaScript disabled (and with one CSS tweak to make the
tables actually visible), so that
it becomes clear just how much
of the page is rendered dynamically. Of note is that all buttons revert to being either radio buttons
or checkboxes, and all without labels - their labels are read, by JavaScript, from HTML5 "data" attributes on
the buttons themselves. The "Jump To Section" box also stops functioning and becomes empty, as it is populated
solely based on the content of the page.
This script was designed to be a very simple, graphical replacement for the Apache Web server's
autoindex module - i.e. the script could
be placed inside a directory of image files and it would, when accessed, automatically display a list of the
images in that directory, along with thumbnails and the EXIF data
stored by digital cameras, if applicable.
This script is also designed to display conveniently on mobile devices, where it transforms into
a single column of images instead of its normal grid formation.
As part of its display of digital cameras' EXIF data, if there is any geographical information
stored in the image (rare with standalone digital cameras, but very common among smartphones' cameras), it will
use the Google Geocoding API to translate the stored latitude/longitude coordinates into a human-readable
address, as well as linking to the Google Map view of the exact location. This can be used to excellent effect
when the photo's subject is a large feature of the landscape, such as
this photograph of a recessed, circular
area in Basingstoke Common, where the
Google Maps satellite view
for the image's coordinates accurately shows the location from which the image was taken and offers an
alternate perspective of the same area of land.
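A condensed sketch of that EXIF-to-address step is shown below: read the GPS tags, convert the degree/minute/second rationals to decimal degrees, then ask the Google Geocoding API for a human-readable address (the API key is a placeholder):

```php
<?php
// Condensed sketch of the EXIF-GPS-to-address step. The API key is a
// placeholder.
function gpsToDecimal(array $dms, string $ref): float
{
    $dms = array_pad($dms, 3, '0/1');
    [$deg, $min, $sec] = array_map(function (string $r): float {
        [$num, $den] = array_pad(explode('/', $r), 2, 1);
        return $den == 0 ? 0.0 : $num / $den;
    }, $dms);

    $decimal = $deg + $min / 60 + $sec / 3600;
    return in_array($ref, ['S', 'W'], true) ? -$decimal : $decimal;
}

function photoAddress(string $imagePath, string $apiKey): ?string
{
    $exif = @exif_read_data($imagePath);
    if (empty($exif['GPSLatitude']) || empty($exif['GPSLongitude'])) {
        return null; // No geographical information stored in this image.
    }

    $lat = gpsToDecimal($exif['GPSLatitude'], $exif['GPSLatitudeRef'] ?? 'N');
    $lng = gpsToDecimal($exif['GPSLongitude'], $exif['GPSLongitudeRef'] ?? 'E');

    $url = 'https://maps.googleapis.com/maps/api/geocode/json?latlng='
         . $lat . ',' . $lng . '&key=' . urlencode($apiKey);
    $response = json_decode(file_get_contents($url), true);

    return $response['results'][0]['formatted_address'] ?? null;
}

echo photoAddress('photo.jpg', 'YOUR_API_KEY') ?? 'No location data';
```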
I created this Web app because I was unable to find a shopping list application for my phone that had the features
I required without also having too many features and, as a result, becoming bloated and confusing.
I therefore designed this application to do exactly what I need it to do and, as such, it has no real
configuration to speak of; the list of displayed products is stored as a PHP array in a hard-coded file on
the server, with no user interface for editing it.
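Purely as an illustration (the real file's structure isn't documented here), that hard-coded product list amounts to nothing more than something like this:

```php
<?php
// products.php - illustrative only; the idea is simply a hard-coded PHP
// array that the app reads, with no editing UI.
return [
    'Fruit & Veg' => ['Apples', 'Bananas', 'Carrots'],
    'Bakery'      => ['Bread', 'Bagels'],
    'Dairy'       => ['Milk', 'Cheese', 'Yoghurt'],
];
```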
This Web app utilises HTML5's "offline" support to allow it to be used even without an Internet
connection, which is a vital feature when using the app inside a supermarket, where the mobile signal is often
very poor or non-existent. It also uses custom mobile CSS to ensure that it is as user-friendly on mobile
devices as is possible.
Viper Cart, an
e-commerce/shopping cart platform, was the primary software product that I developed at my
previous job, at Craig Brass Systems Ltd.. It was created with
two goals in mind: to replace the cart software that was powering the existing
Its Elixir store, and then to sell the finished software product to
other businesses looking for a similar solution. The development of Viper Cart spanned several years, starting
while I was working for the company during the industrial placement year of my university course, and then
continuing after I graduated. For much of that time period I was working alone, which, while slowing down
development somewhat, did give me a great deal of freedom to implement functionality as I saw fit and removed
the inevitable complexities of working as a team. However, for a couple of years we had some additional
developers working for us, during which time my job also involved managing those developers,
reviewing their code and ensuring that it was of a suitable quality and wasn't buggy or insecure.
A major component of Viper Cart was the creation of a framework - a large collection of classes
that were designed to be portable and easily used in other projects in order to speed up
development by re-using common components and functionality. This became known as the Craig
Brass Systems Framework, or "CBS Framework", as I've referred to it elsewhere on this page. By
the end of the development process, the CBS Framework consisted of more than 45,000 lines of PHP code
spread over 150 classes, in addition to four third-party libraries
(SwiftMailer,
NuSOAP, MailChimp and
an email address validation class). The
framework allowed easy use of, amongst other things, our templating system, user-input
validation, database access (wrapping
PDO, which I consider to be overly verbose in normal
usage), error-handling/reporting, Ajax functions, caching, CAPTCHA
image-generation, an extensive date/time class, access to functions for formatting filesizes,
English sentences and similar, a class to easily and transparently handle IPv4 and IPv6
addresses including range calculations and database storage, pagination, currency
display and manipulation, and sessions. Several large JavaScript classes, totalling almost
7,000 lines, were also created as part of the framework, providing Ajax and
user-input validation functionality as well as graphical dialog boxes (replacing JavaScript's native
alert(), prompt() and confirm() functions, which Internet Explorer has
started being awkward with), amongst other features.
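This isn't the CBS Framework's actual API, but the sketch below illustrates the kind of thin wrapper that makes everyday PDO usage less verbose:

```php
<?php
// Not the CBS Framework's actual API - just a sketch of the kind of thin
// wrapper that makes everyday PDO usage less verbose.
class Database
{
    private $pdo;

    public function __construct(string $dsn, string $user, string $pass)
    {
        $this->pdo = new PDO($dsn, $user, $pass, [
            PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
        ]);
    }

    /** Run a query with bound parameters and return all rows. */
    public function fetchAll(string $sql, array $params = []): array
    {
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll();
    }

    /** Return a single row, or null if nothing matched. */
    public function fetchRow(string $sql, array $params = []): ?array
    {
        return $this->fetchAll($sql, $params)[0] ?? null;
    }
}

$db = new Database('mysql:host=localhost;dbname=shop', 'user', 'pass');
$product = $db->fetchRow('SELECT * FROM products WHERE id = ?', [42]);
```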
This framework and the JavaScript classes were created because, when the project was started in 2007, products
such as the Zend Framework and jQuery
were in their infancy and not well-known or widely-used. jQuery UI hadn't
even been released at that time. Later in the development process, we did start using jQuery
and jQuery UI to speed up development, and I wrote several plugins to ease the
use of certain functionality.
I also created
a PHP
script for exporting a MySQL database to a format that makes sense for version control systems (two files
for each table - a file to define the structure and a file containing the table data, with one record per line,
so that `diff`s between two versions would be readable and useful), as well as importing the same files back
into the database when another developer has updated them. I also wrote a 9,000-line installation
script that downloads, extracts and installs the entirety of Viper Cart, sets up the database and
configures the software as desired by the user; a 6,000-line upgrade script which, rather than
blindly replacing changed files, applies a "diff" of changes to the existing file (performing some initial
sanity checks first), so that, in theory, users' own modifications to the software can be
retained, which will hopefully result in administrators being more willing to upgrade/patch their
installations, knowing that there's a good chance that upgrading won't revert their modifications; and
a 4,500-line PHP script to generate both the install and upgrade scripts by exporting requested
revisions from version control and packaging up both the most recent version and the modifications between the
two specified revisions.
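A minimal sketch of that export idea is shown below: each table becomes two files, one for the structure and one for the data with one record per line, so that diffs stay readable. The file naming and record format here are illustrative rather than the script's real output:

```php
<?php
// Sketch of the version-control export idea: each table becomes two files,
// one holding the structure and one holding the data with one record per
// line, so a diff between two exports stays readable. File naming and the
// JSON record format are illustrative.
function exportTable(PDO $pdo, string $table, string $dir): void
{
    // Structure file: the CREATE TABLE statement as MySQL reports it.
    $create = $pdo->query("SHOW CREATE TABLE `$table`")->fetch(PDO::FETCH_ASSOC);
    file_put_contents("$dir/$table.structure.sql", $create['Create Table'] . ";\n");

    // Data file: one row per line, ordered so the output is stable.
    $out = fopen("$dir/$table.data.txt", 'w');
    foreach ($pdo->query("SELECT * FROM `$table` ORDER BY 1", PDO::FETCH_ASSOC) as $row) {
        fwrite($out, json_encode($row) . "\n");
    }
    fclose($out);
}
```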
Viper Cart integrated with many payment gateways, including
2CheckOut, Authorize.Net,
Direct One, HSBC, NoChex, PayPal, SagePay,
WorldPay and Google Checkout, and also integrated with Google's
Products system. These payment gateways were implemented as part of a plug-in system
that was designed to allow new gateways to be added with just the addition of an extra PHP class file, and no
changes to the existing code. It was designed from the ground-up to be search engine-friendly
and, to that end, each product, category, manufacturer, news article and information page had what we referred
to as a "SEO name" - an alphanumeric string that was used to reference that entity in the URL. This allowed
Viper Cart's URLs to look like "/manufacturer/example-manufacturer/index.html" instead of the more common, but
arguably cheating, SEO URL method of "/manufacturer/123-example-manufacturer.html", which keeps the entity's numeric ID
in the URL and has the system's page dispatcher ignore the name and read only the ID. I also created a
Sitemaps group of classes that produced the required XML for search
engines supporting the Sitemaps system.
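The sketch below illustrates the "drop in one class file" idea behind the gateway plug-in system; the interface and file layout are invented for the illustration rather than being Viper Cart's real code:

```php
<?php
// Illustration of the "drop in one class file" plug-in idea; the interface
// and file layout are invented for this sketch, not Viper Cart's real code.
interface PaymentGateway
{
    public function name(): string;

    /** Begin a payment and return the URL to redirect the customer to. */
    public function beginPayment(float $amount, string $currency, int $orderId): string;
}

function loadGateways(string $dir): array
{
    $gateways = [];
    foreach (glob($dir . '/*.php') as $file) {
        require_once $file;
        $class = basename($file, '.php');
        if (class_exists($class) && in_array(PaymentGateway::class, class_implements($class), true)) {
            $gateways[] = new $class();
        }
    }
    return $gateways; // A new gateway only needs a new file in $dir.
}
```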
Other notable features that I wrote for Viper Cart include giving it
full IPv6 support,
an OpenSearch widget,
a custom, advanced product search system,
a Web browser user-agent analyser
(including support for several mobile devices) for statistics collection, and
fine-grained access control lists
(ACLs) for the administration system. It also integrated with the
Postcode Anywhere API and a database of USA ZIP
codes and Canadian post codes in order to do post/zip code-to-address lookups for
customers in the US, UK and Canada, saving customers time when entering addresses and helping to ensure that
addresses are entered accurately.
Having Its Elixir using an early version of the software provided
invaluable real-world usage and testing data, as well as an extensive database of product and customer data that
greatly aided in development. The version of Viper Cart currently running on Its Elixir is a couple of years
old; however, most of the changes since then have been in the creation of the administrator control panel - the
customer side of the software is fully functional.
The Craig Brass Systems website is the corporate site for my former employer. It consists of various
informational pages, pulls data from both IP.Board and an OpenFire XMPP
messaging server, and has a login-protected customer area allowing customers to place orders, download purchased
software products and submit support tickets. This site also uses the CBS Framework.
As this site has an order process, it integrates with payment gateways to allow customers to
securely make payments on a third-party website (either PayPal or Worldpay) before being redirected back to this
site to download their purchased software.
This is the previous incarnation of this very website, my personal blog-and-such site. I created it in 2004,
using MySQL to store the blog posts, comments and gallery information, and with
Smarty as the templating engine. The main system behind this site didn't then change much until
2012 when the entire site was re-created (see its own entry above).
One major change that I did make more recently was the addition of Search Engine Optimisation
(SEO) techniques, primarily to the URLs used. This changed the URLs from being, for example,
http://www.lorddeath.net/?page=blog to being http://www.lorddeath.net/Blog/.
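As a rough illustration of how such URLs can be dispatched (assuming the Web server rewrites all requests to a single front controller), the first path segment simply takes over the job that the old ?page= parameter used to do; the page names below are illustrative:

```php
<?php
// Sketch of a front controller for the prettier URLs, assuming the Web
// server rewrites all requests to this script. Page names are illustrative.
$path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
$segment = strtolower(explode('/', $path)[0]) ?: 'home';

$pages = [
    'home'    => 'pages/home.php',
    'blog'    => 'pages/blog.php',
    'gallery' => 'pages/gallery.php',
];

require $pages[$segment] ?? 'pages/404.php';
```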
I created this website in a few weeks for my previous employer to be used as his personal site to host a blog and
a few information pages. It utilises the aforementioned CBS Framework, and pulls in data from
Twitter, Flickr and YouTube. The graphical design of this
site was created by a third-party designer, though I wrote most of the HTML for it.
My final-year project at university
consisted of the creation of an improved version of Amazon.com's product recommendation engine.
Amazon's existing product recommendation system does not (or, at least, did not in 2008 when I started the
project) helpfully handle gift purchases. This would cause your own product recommendations to be polluted by
suggestions of products that would appeal to the recipient of the gift you've bought, instead of ones that would
appeal to you.
Amazon offers/offered a checkbox for each product in your purchase history to exclude it from recommendations,
however I felt that this could be taken a step further by using that purchase to suggest products to the gift's
recipient, instead of simply ignoring the purchase entirely. I also included ideas for extending the system with
additional social networking features, such as reminding people of a friend's upcoming birthday
along with suggesting a suitable gift.
This project involved the use of MySQL stored procedures for performance reasons, and the PHP
code included in the appendices of the report
document is a reasonable guide to my personal code style and quality (whereas the PHP
projects on GitHub that are linked to at the side of this page are mostly personal projects with little need to
be nicely-structured or documented, so they generally aren't).
Web2Messenger was a joint project between me and a Dutch friend of mine, Frans-Willem Hardijzer, that allows
users to receive anonymous messages from their own Web sites directly to their .NET Messenger
Service client such as Windows Live Messenger, as well as utilising
dynamically-generated images detailing their online status. Due to time constraints and other reasons, we had
to shut down Web2Messenger in 2011, though its development had been stagnant since 2008 and it was seeing very
little use. By the time we shut it down, the service had delivered almost 70,000 messages to more than
16,000 users, making it by far my most-used website.
The server-side code is PHP utilising Smarty and MySQL, with the client-side
code being a mix of XHTML 1.0, CSS and JavaScript; an RSS feed is also available. The back-end "bot" processes
that connect to the Messenger Service were originally written by me in PHP; however, they were
subsequently re-written, by Frans-Willem, in C++ for performance reasons. Web2Messenger has also been
translated into Spanish, French and Portuguese and the language of the whole Web site can be changed instantly
and updated easily if necessary.
Towards the end of the useful lifetime of Web2Messenger, I created a Facebook application that
inserted the user's Messenger status on their Facebook profiles, together with a link back to that user's
Web2Messenger page, allowing profile visitors to send them instant messages.
Even though the service has been shut down, the website is still running for historical interest. I also have a
more in-depth article about Web2Messenger on the
Web2Messenger page of
this site.
A fan-site for an on-line Role-Playing Game (RPG) I used to play regularly, Neverwinter Nights, uDCX is primarily
a collection of guides and similar documents and, as such, doesn't use MySQL very much, but the standard PHP,
XHTML 1.0, CSS and JavaScript combination is still employed. The colour scheme of this site was chosen to fit
with the game itself, which has a very dark interface - I do not, generally, make a habit of designing sites
with such a heavy use of black as this one does. The code behind this website was mostly written by me, with a
friend and fellow gamer writing most of the content and playing a large part in the site's graphical design.
An extension to the above site is this map of the DC:X game-world which is powered by the Google Maps
API, including a completely custom "skin" (designed to replicate the in-game map interface) and
tileset (which was created by manually piecing together screenshots of the in-game map - I had
a lot more free time back then...). Creating the interactivity of the rest of the page
vastly improved my JavaScript knowledge, as well as teaching me how to utilise the Google Maps API
itself.