BY PAUL READ
Actually, let’s not. It’s only fair to warn you this piece is about servers, big computers, so you have the opportunity to disappear and wash your hair or something.
Few companies these days run their business without the use of computers and, once larger than a relatively modest size, the organisation needs to centralise its data to allow sharing and avoid duplication. To that end a need for servers becomes apparent. A server is simply the big brother of your typical desktop or laptop: much the same sort of machine, but more powerful.
The name gives the game away to a large extent: a server stores data and ‘serves’ it to multiple users in different forms. The Country Squire page you are reading was served by WordPress running on a web-server; the content (text, pictures, etc.) is held in a database-server. The data are not stored as pages but as units of information assembled by your browser, such as Firefox.
Getting your hands on a server is a process refined over many years: part with a relatively large sum and one or more boxes will be delivered. They don’t work straight out of the box, though, so the next stage involves a technician putting it all together and setting it up. This can be a time-consuming process.
Servers can have many roles and, to avoid unnecessary cost, these roles (such as web-server, file-server and database-server) are sometimes combined on the same physical computer. The disadvantage here is that it becomes a single point of failure: if the server fails, the business stops. On the other hand, servers are often under-worked. Hence the rise in use of virtual servers.
A physical server can be seen and touched; a virtual server can perform the same roles but exists only in software. Of course, a virtual server needs a physical server to run on, but a single physical server can support multiple virtual servers. You can, therefore, split the roles among, say, five virtual servers controlled by the hypervisor, which is the software that manages all the virtual servers. The hypervisor runs on the physical server.
To eliminate the single point of failure in the physical server we simply implement a second physical server running a hypervisor that monitors the ‘heartbeat’ of the virtual servers on the first physical server. The heartbeat is a signal each running virtual server sends out periodically; should it disappear for too long, the second hypervisor simply starts up a standby virtual server that is a replica of the failed system. All nice, tidy and fully automated. Now the building is the single point of failure.
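For the technically curious, the heartbeat-and-failover idea can be sketched in a few lines of Python. This is purely a toy model: the server names, the timeout value, and the “start the standby” step are all illustrative, not any real hypervisor’s API.

```python
import time

class FailoverMonitor:
    """Toy model of the standby hypervisor watching heartbeats."""

    def __init__(self, timeout=3.0):
        # Seconds without a heartbeat before we declare a server dead.
        self.timeout = timeout
        self.last_beat = {}          # virtual server name -> last heartbeat time
        self.standby_started = set() # replicas we have already brought up

    def heartbeat(self, server, now=None):
        """Record that a virtual server has just sent its periodic signal."""
        self.last_beat[server] = time.monotonic() if now is None else now

    def check(self, now=None):
        """Start a standby replica for any server whose heartbeat is overdue."""
        now = time.monotonic() if now is None else now
        for server, beat in self.last_beat.items():
            if now - beat > self.timeout and server not in self.standby_started:
                # In real life this would boot the replica virtual server.
                self.standby_started.add(server)
        return sorted(self.standby_started)
```

A real hypervisor does the same thing continuously and automatically; here `check()` would simply be called on a schedule, and any server that falls silent for longer than the timeout has its replica started exactly once.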
Of course, the larger organisation can simply duplicate this entire set-up in another location. Thus, if the headquarters fails, the servers in the alternate location can be used. ‘Fail’ here need not mean the building burned down; it may simply have become inaccessible.
The smaller organisation is likely to find this non-viable, whether for financial reasons or simply because it has only one location. For them the only alternative would be a reliance on backing their data up and hoping that, in the event a server proved irreparable, new hardware could be obtained rapidly. A risky prospect, but their only option until recently.
It is now possible to build an entire infrastructure in the Cloud (somewhere on the Internet), complete with virtual servers and the means to securely access them. Data can be protected by encryption, backed up and replicated to multiple locations. Virtual servers can be configured to fail over to a standby. This provides a highly available service accessible from anywhere in the world with an Internet connection. Pricing is monthly and usually based on usage for any given month rather than a fixed fee.
Since the Cloud spans the globe, your main virtual servers will be located relatively close to you for best performance, while the replicas can be on a different continent. It would take a huge disaster to disrupt such a widely distributed system and, should a disaster of that magnitude occur, it is unlikely you would be worrying terribly much about your data!
Wait, there’s more! It is now possible to ‘snapshot’ a virtual server (take an exact copy, that is) and physically relocate it. This is tremendously advantageous when considering upgrades, as they can be tested on the copy before being installed on production systems. Upgrades and new installations are high-risk activities, so testing on a copy is a huge benefit.
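The snapshot-then-test workflow can also be reduced to a toy sketch. Everything here is illustrative (a dictionary standing in for a server, an invented `apply_upgrade` step); the point is only the order of operations: copy first, test the copy, and only then touch production.

```python
from copy import deepcopy

# A dictionary stands in for a virtual server's state (illustrative only).
production = {"os": "Ubuntu 22.04", "app_version": "1.4", "healthy": True}

def apply_upgrade(server):
    """Stand-in for a risky upgrade that bumps the application version."""
    server["app_version"] = "2.0"
    return server

# 1. Snapshot: take an exact copy of the production server's state.
snapshot = deepcopy(production)

# 2. Test the upgrade on the copy; production is untouched throughout.
upgraded_copy = apply_upgrade(snapshot)
assert production["app_version"] == "1.4"

# 3. Only if the upgraded copy still works do we upgrade production itself.
if upgraded_copy["healthy"]:
    apply_upgrade(production)
```

Because the snapshot is a genuine independent copy, a failed upgrade wrecks only the copy, and production carries on serving customers while you work out what went wrong.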
Setting up systems in this manner requires only a basic computer and a reasonable Internet connection, but the best is yet to come. I have just delivered such a system; the client committed to a monthly payment (recurring income for me). Delivery can be rapid: no need to worry about manufacturing lead-times, and no-one needs to visit to put all the boxes together. I can even remotely configure the client’s computers to enable access.
The client can work from anywhere, the user experience is exactly the same wherever they are. The best part, however, is I can work from anywhere too.
No more motorway miles and dubious hotels for me!