Article by Guest Poster
This article was written by a guest contributor. The author's views below are entirely his or her own and may not reflect the views of WHSR.
Numbers on paper can appear to mean the world.
But how much can you really trust them, and how accurately do they reflect the performance of the device you are using? When it comes to having your own dedicated server, whatever you use it for, you want the best performance you can get, right? I don't think many people are going to spend the money on a server and say, "Hey, I don't mind if it underperforms", and if they do, maybe they should reconsider owning a dedicated server in the first place.
Let me begin by clarifying a few things. If you want server performance that outdoes the competition, you will need a dedicated server. If you simply want great performance relative to the money you are spending, then a virtual server is possibly a better option for your web hosting needs. If you honestly don't care about performance and just want an online presence, then some kind of free or cheap web hosting service is the way to go. I'm not going to get into the ins and outs of what your business specifically needs; I will just be going over how to make your server perform at its peak, which assumes it's a dedicated server that you own. This gives you the benefit of complete customisation of all hardware and software, and allows all resources in the machine to be dedicated to you and your needs.
If your server does not have a RAID setup, I highly recommend you get one. Not only can you reduce the chance of losing your data by increasing the fault tolerance level, but you can increase performance too. A system in RAID 0 increases performance by striping the information across two drives, theoretically doubling read and write speed: one piece of information is written to one disk while the next piece is simultaneously written to the second drive.
In the image above, if you imagine the string ABCDEF makes up a file, spreading the information over two drives doubles the read/write speed in a sense. 'A' goes to drive one, 'B' goes to drive two, 'C' goes to drive one and so on. RAID 1 is different in that it duplicates the information across two separate drives, so the time it takes to write the string ABCDEF is greater than that of drives in RAID 0, because each drive has to write the whole string. You can see a depiction of RAID 1 below.
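The striping versus mirroring idea can be sketched in a few lines of code. This is just an illustration of where the data lands, not of any real RAID controller; the function names are my own:

```python
# Illustrative sketch: where each chunk of the string "ABCDEF" ends up
# under RAID 0 (striping) versus RAID 1 (mirroring) across two drives.

def raid0_stripe(data, num_drives=2):
    """RAID 0: round-robin each chunk onto alternating drives."""
    drives = [[] for _ in range(num_drives)]
    for i, chunk in enumerate(data):
        drives[i % num_drives].append(chunk)
    return ["".join(d) for d in drives]

def raid1_mirror(data, num_drives=2):
    """RAID 1: write the complete data onto every drive."""
    return [data for _ in range(num_drives)]

print(raid0_stripe("ABCDEF"))  # ['ACE', 'BDF'] - each drive writes half
print(raid1_mirror("ABCDEF"))  # ['ABCDEF', 'ABCDEF'] - each writes it all
```

Under RAID 0 each drive only writes half the data, which is exactly why the throughput can theoretically double; under RAID 1 every drive writes everything, so you pay full write time on each disk in exchange for redundancy.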
This presents some issues, though: with RAID 0, the chance of data loss from hardware failure is doubled, because either of the two drives failing takes the whole array with it. You can increase the fault tolerance level by setting up in either RAID 01 or RAID 10, which combine striping with mirroring. Either way, performance can be increased with a system in RAID, assuming it's one of the many types of RAID that stripe the data.
IOPS (Input/Output Operations Per Second) are important. A lot of people pass this by and think only of read/write speeds. As important as reading and writing is, the number of times something can be read and written per second matters too. If your server is going to be handling many small read operations, e.g. reading small files over and over at an unrelenting pace, then you need to consider a storage medium with a relatively high IOPS rating.
A storage drive is rated for a certain number of IOPS, and with a 7200 RPM SATA HDD you are looking at roughly 100 IOPS. Compare this to a solid state drive at around 40,000 and you can clearly see one outperforms the other. These SSDs aren't even the limit: if you pick up an SSD that connects to the motherboard through PCIe, you can expect up to a whopping 120,000 IOPS with ridiculously high read/write speeds. The OCZ RevoDrive that offers this performance is already set up internally in RAID 0 to achieve this incredible speed; I have been told (but cannot confirm, as I have never seen it done myself) that you can even get two of these bad boys and RAID them together. A good middle ground, I find, is the OCZ Vertex 4. At around $1 per gigabyte with an IOPS rating averaging 90,000, it should handle both fast read/write needs and high IOPS needs without you spending thousands of dollars on PCIe storage systems.
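To see why the IOPS gap matters more than raw MB/s for small-file workloads, here is a back-of-envelope calculation using the rough figures quoted above (these are ballpark ratings, not benchmarks of any specific drive):

```python
# Back-of-envelope: how long would one million small random reads take
# at the rough IOPS figures quoted in the article? (Assumed numbers.)

OPERATIONS = 1_000_000

iops_ratings = {
    "7200 RPM SATA HDD": 100,
    "SATA SSD": 40_000,
    "PCIe SSD": 120_000,
}

for device, iops in iops_ratings.items():
    seconds = OPERATIONS / iops
    print(f"{device}: {seconds:,.1f} s ({seconds / 3600:.2f} h)")
```

At 100 IOPS the hard drive grinds through those million operations in nearly three hours, while the SATA SSD finishes in under half a minute. That is the difference a busy server full of tiny files will actually feel.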
Now that we have storage out of the way, we can move on to bottlenecking. In short, for those who don't know, bottlenecking is where one component in the system can perform at a high level, but another piece of hardware down the line restricts the entire system from performing as fast as the best piece of hardware. It's the same idea as the saying 'you can only walk as fast as the slowest person in the group'.
This is something a lot of people end up doing if they don't do their homework. To tech heads this stuff is second nature, but some providers that 'run on the cheap' use older-generation hardware, bottlenecking their systems with SATA2 devices or slower RAM (even dreaded DDR2, though that would be very rare), while pairing it with a fast CPU and advertising that as a selling point. If you want a system to perform at its peak, there needs to be minimal bottlenecking so everything can run smoothly. For example, if you are writing to an HDD with a speed of around 100 MB/s while using a CPU such as an Intel i7-3960X with 51.2 GB/s of memory bandwidth, you are severely bottlenecking the system compared to using an SSD. By taking bottlenecking into account, you can not only optimise a system, but also save money by working out how many resources you actually need and not buying hardware that is unnecessary for your system.
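The 'slowest walker' rule reduces to a one-liner: the effective data rate of a pipeline is the minimum over its components. The throughput figures below are illustrative round numbers, not measurements:

```python
# Sketch of the bottleneck idea: data can only flow through a chain of
# components as fast as the slowest link. Figures in MB/s, illustrative.

def effective_throughput(components):
    """Return the rate of the slowest component - the bottleneck."""
    return min(components.values())

hdd_system = {"CPU/RAM": 51_200, "SATA3 bus": 600, "7200 RPM HDD": 100}
ssd_system = {"CPU/RAM": 51_200, "SATA3 bus": 600, "SATA SSD": 500}

print(effective_throughput(hdd_system))  # 100 - the HDD throttles everything
print(effective_throughput(ssd_system))  # 500 - five times faster end to end
```

Notice that the expensive CPU contributes nothing extra in either case; swapping the 100 MB/s disk for a 500 MB/s SSD is what actually moves the bottom line, which is exactly the homework the paragraph above is asking you to do.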
This is something that is largely out of the hands of a dedicated server owner. The only real option here is to ensure the company you are hosting with has a sufficient bandwidth allowance for your needs. If you are serving 100 GB from your website per month and your hosting company provides only 50 GB per month, you will find yourself in a bit of trouble. This is general knowledge, though, and realistically doesn't need much explaining.
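If you want to sanity-check an allowance, the arithmetic is trivial. The figures here are the hypothetical ones from the example above:

```python
# Rough check (illustrative figures): does the plan's monthly transfer
# allowance cover the traffic, and what sustained bitrate does it imply?

monthly_transfer_gb = 100   # what the site actually serves per month
plan_allowance_gb = 50      # what the hosting plan includes

seconds_per_month = 30 * 24 * 3600
# 1 GB = 8,000 megabits (decimal units, as bandwidth is usually sold)
avg_mbps = monthly_transfer_gb * 8_000 / seconds_per_month

print(f"Average sustained rate: {avg_mbps:.2f} Mbit/s")
print("Plan covers it" if monthly_transfer_gb <= plan_allowance_gb
      else "Over allowance: expect overage fees or throttling")
```

Even 100 GB a month averages out to well under 1 Mbit/s sustained; the allowance cap, not the line speed, is what bites you here.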
Latency is another aspect to consider: you need to find out where your audience is and have servers located as close as possible to the majority of them. You can't control ping to a large extent; better switching can shave off a little latency, but the data still needs to be physically transferred around the world. The only real 'optimisation' you can do here is pick the right company to host the servers, or host them yourself if you have the resources and a close enough audience.
Basically, you need a dedicated server (not a VPS, as you don't get as much control there), a storage drive with high read/write speeds (preferably an SSD) and a good IOPS rating, a performance-increasing RAID setup, well-researched hardware to ensure there is minimal bottlenecking, and a server location close to the people who are going to be using it. Simple stuff, right?