Do you get tired of hearing about the cloud?
Me too.
You can blame Google CEO Eric Schmidt for cloud computing. He’s credited with introducing the term at a conference in August 2006.
While network computing had been around since the 1960s, Schmidt was the first to pitch the cloud as we know it today. You may be fielding a lot of questions from your boss, your customers, or your development team about why you have not moved to the cloud. After all, who would not want the scalability, redundancy, and on-demand services the cloud promises?
The cloud offers considerable potential, but few small businesses can actually leverage it. Inflexible operations, inexperience and basic business needs often mean a dedicated server is the better hosting solution.
If you are not convinced, here are seven reasons why you should still use a dedicated server.
Performance
We find that dedicated servers offer the best performance – especially on a per-dollar basis.
We have used Rackspace, SoftLayer and AWS. None can deliver the power of a properly configured dedicated server.
This is especially true when it comes to disk IO. With most cloud systems, the network and underlying storage are shared among customers. This can make disk IO unpredictable. If another customer starts sending a large volume of write requests to the storage array, you may see slowdowns. The upstream network is shared, so you can experience bottlenecks there too.
When we fix performance problems for customers using cloud or VPS, we typically find disk IO issues. These are often not fixable within the cloud framework.
Most cloud vendors give you more storage, not faster storage.
While you can easily scale up CPU and RAM with most cloud vendors, scaling disk IO is often not possible. Even though Amazon offers some high disk IO instances, many users still build RAID arrays out of EBS volumes to get the performance they need.
In short, if your operations are relatively simple, a single dedicated server with RAID 10 will usually outperform more costly and complex cloud offerings.
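If you want to see this for yourself, a crude sequential-write test will expose the variability described above. Below is a minimal Python sketch, not a rigorous benchmark; the file path, data size and pass count are arbitrary assumptions, and a purpose-built tool such as fio will give far more detail.

```python
import os
import time

# Crude sequential-write test: write a fixed amount of data several times
# and report throughput per pass.
TEST_FILE = "io_test.bin"      # hypothetical scratch file on the disk you want to test
BLOCK = b"\0" * (1024 * 1024)  # 1 MB block
TOTAL_MB = 512                 # data written per pass (assumption; adjust to taste)
PASSES = 5

for i in range(PASSES):
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())   # force data to disk so we measure the device, not the page cache
    elapsed = time.time() - start
    print(f"pass {i + 1}: {TOTAL_MB / elapsed:.1f} MB/s")

os.remove(TEST_FILE)
```

Run it on both platforms: if the throughput swings widely from pass to pass on the cloud instance, you are likely seeing the shared-storage effect described above.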
Transparency
When debugging performance issues, transparency is key. That is why we are fans of NewRelic: it allows you to peer inside your application and find the bottlenecks. The same visibility is essential for solving complicated performance and reliability issues.
Cloud services often obscure hardware and network problems.
As a shared service, the cloud suffers from two key issues that typically do not occur with dedicated servers:
- Other users directly impact your workloads.
- Underlying hardware errors cause outages that you cannot easily confirm.
With cloud, you share resources with others: disk, RAM, CPU and network. Cloud software attempts to fence in your neighbors, but the fence has holes. Whether due to inherent design or, more often, configuration choices, a single user can overwhelm a local compute node. This can result in temporary outages and performance issues that have nothing to do with your operations.
Unfortunately, most providers will never recognize or even catch this problem, leaving you to chase performance phantoms.
Hardware errors are another issue. With SoftLayer’s service, when we suspect hardware issues on a compute instance, there’s no way for us to confirm our suspicions. We simply migrate the instance to another physical node to see if the problem persists.
Cloud makes these migrations easy, but a dedicated server may make these migrations unnecessary. With a dedicated system, we can easily check the hardware and rule out issues. This allows us to focus diagnostic efforts on the right problems.
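On a dedicated server, ruling hardware in or out can be as simple as querying the drives directly. Here is a minimal Python sketch that shells out to smartctl (from the smartmontools package); the device names are assumptions and will differ on your hardware.

```python
import subprocess

# Quick SMART health check across a few drives. Requires smartmontools
# and root privileges; device names below are placeholders for illustration.
DEVICES = ["/dev/sda", "/dev/sdb"]

for dev in DEVICES:
    result = subprocess.run(
        ["smartctl", "-H", dev],        # -H prints the overall health assessment
        capture_output=True, text=True
    )
    status = "PASSED" if "PASSED" in result.stdout else "CHECK OUTPUT"
    print(f"{dev}: {status}")
```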
Redundancy
A common misconception about the cloud is that it is inherently redundant. This is often not the case.
A node in a cloud computing service is usually no more reliable than a single dedicated server.
With cloud computing, the compute node is usually just a commodity server minus the storage. If that node dies, so do your workloads. This is not much different than a CPU, RAM or power supply failure on a dedicated server.
Even with the cloud, you have to build redundancy into the system. Moving to AWS is not going to make your SMB’s hosting service more reliable or more redundant unless you make it that way.
Just take a look at this setup on AWS. It is very complex and requires significant time to set up, monitor and maintain.
Due to the added complexity and lack of transparency, you may find that single cloud instances are less reliable than their dedicated server counterparts.
Complexity
Complexity is not good.
As you can see in the example above, a truly redundant cloud operation can be very complex.
Cloud infrastructure, especially with AWS, adds layers of complexity that you may not need. With AWS, your IP addresses are not automatically bound to any specific EC2 instance. If you stop and start that instance, you must remember to re-associate your IP address. Similarly, EBS storage can be deleted by mistake. AWS offers various tools to help keep you from making mistakes, but you have to enable them.
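To give a concrete flavor of that housekeeping, here is a hedged boto3 sketch of the safeguards AWS leaves to you: re-attaching an Elastic IP after a stop/start and turning on termination protection. The region, instance ID, allocation ID and device name are placeholders; your account’s details will differ.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

INSTANCE_ID = "i-0123456789abcdef0"      # placeholder instance ID
EIP_ALLOCATION_ID = "eipalloc-0abc123"   # placeholder Elastic IP allocation ID

# Re-associate the Elastic IP after a stop/start cycle.
ec2.associate_address(InstanceId=INSTANCE_ID, AllocationId=EIP_ALLOCATION_ID)

# Enable termination protection so the instance cannot be terminated by mistake.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    DisableApiTermination={"Value": True},
)

# Keep the root EBS volume around even if the instance is terminated.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sda1", "Ebs": {"DeleteOnTermination": False}}
    ],
)
```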
With a dedicated server, these issues do not exist. Why pick a complex infrastructure when you will not actually utilize it?
I’ve managed infrastructure for over a decade, and one thing I have learned: simpler is better. This is why I continue to recommend dedicated servers for simple hosting operations.
Also, complexity adds cost – both in terms of hardware and getting expert help.
Costs
Cloud costs more.
This is true for many small businesses, especially web development and design firms.
Consider a web marketing firm that hosts its customers’ sites. Typically you will have common applications such as WordPress, Joomla, Drupal and other popular CMS programs. You also probably need a hosting control panel such as Plesk or cPanel.
When you examine the technical requirements needed to ensure reliable performance for your sites, you will often find that dedicated servers give you the best bang for the buck.
The main reason is disk performance. I often see VPS or cloud systems struggle with either a large number of sites or high concurrency. You can solve disk IO issues with cloud by building RAID arrays out of the storage units, but this drives up costs. By the time you add bandwidth, control panels and IP addresses, the cost savings start to evaporate.
|           | Dedicated Server | AWS M3.XLarge |
|-----------|------------------|---------------|
| CPU       | 8 Core           | 8 vCPU        |
| RAM       | 16 GB            | 15 GB         |
| Storage   | 500 GB RAID 1    | 500 GB EBS    |
| IPs       | 8                | 8             |
| Bandwidth | 10 TB            | 1 TB          |
| Panel     | cPanel           | None          |
| Cost      | $195/month       | $375/month    |
And the AWS fee does not even include cPanel – that adds another $20-40/mo.
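Using the figures in the table above, a quick back-of-the-envelope calculation shows how the gap compounds over a year. The cPanel line uses the midpoint of the $20-40 range quoted above; everything else comes straight from the table.

```python
# Rough annual cost comparison using the figures quoted above.
dedicated_monthly = 195          # dedicated server, cPanel included
aws_monthly = 375 + 30           # AWS M3.XLarge plus ~$30/mo for cPanel (midpoint of $20-40)

monthly_gap = aws_monthly - dedicated_monthly
print(f"Monthly difference: ${monthly_gap}")        # $210
print(f"Annual difference:  ${monthly_gap * 12}")   # $2,520
```

That gap also ignores bandwidth: the dedicated plan above includes 10 TB against AWS’s 1 TB, so heavy traffic would likely widen it further.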
Finding direct cost comparisons between cloud and dedicated is difficult. Sometimes you have to over-provision your cloud infrastructure just to solve performance issues. Even when you know what you need, costs are not fixed with many cloud services. Just take a look at AWS’s “simple” monthly calculator. There’s nothing simple there.
Lock-in
Don’t get locked-in.
As a kid, I attended an overnight lock-in at a skating rink. That was fun. Getting locked into a vendor’s platform is not. Migration can be painful and costly.
With many cloud vendors, if you begin to integrate more complex services, you may find you are locked into their solutions. This can be dangerous if their support, services or pricing change. Even if the vendor does not change, your business or technical requirements may. So evaluate your migration options before you select a cloud vendor.
While the compute portion of cloud services is generally similar between vendors, advanced services such as object storage, database abstraction layers and other technologies often have different APIs. If you build your app to use Amazon’s S3, you may have to re-engineer it to work with another vendor’s object storage model. This can make migrating challenging and expensive.
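One way to soften this particular lock-in is to keep vendor calls behind a thin wrapper of your own, so a migration means rewriting one class rather than the whole application. A minimal Python sketch, assuming boto3 and a hypothetical bucket name:

```python
import boto3

class ObjectStore:
    """Thin wrapper so application code never talks to S3 directly."""

    def __init__(self, bucket):
        self._bucket = bucket
        self._client = boto3.client("s3")

    def put(self, key, data: bytes):
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key) -> bytes:
        response = self._client.get_object(Bucket=self._bucket, Key=key)
        return response["Body"].read()

# Application code depends only on put()/get(); swapping vendors means
# re-implementing this one class, not hunting S3 calls across the codebase.
store = ObjectStore("example-customer-assets")   # hypothetical bucket name
store.put("reports/q1.pdf", b"...")
```

The same idea applies to database abstraction layers and the other vendor-specific services mentioned above.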
Often, I see companies use a cloud vendor’s advanced services when neither the business nor the technical needs require such a solution. This creates vendor lock-in where it could be avoided.
Dedicated servers are commodities. If you use a hosting control panel such as Plesk or cPanel, migration to another server or service provider is a simple, well-documented process.
So when a dedicated server can meet your business and technical needs, why risk vendor lock-in?
Scalability
You are not ready to scale.
One of the chief marketing points for cloud services is scalability. While you can scale your computing resources, your applications or operations may not be ready to scale.
If you use a hosting control panel, your scalability options are limited. You can increase your CPU/RAM or add a dedicated database, but you already have these options with dedicated servers. Cloud just makes it easier.
As I mentioned earlier, scaling disk IO is often not available or limited with cloud. In our performance optimization work, disk IO is often the main performance problem, especially with shared hosting operations.
Don’t be fooled by the advertising. You cannot simply plant your operations in a cloud vendor’s garden and expect it to grow magically.
You must build applications and manage them with scalability in mind. Trying to cram legacy applications into a modern, scalable, cloud framework often results in failure.
Also, ask yourself why you need to scale in the first place.
If your web sites are slow, perhaps optimizing your server’s configuration and fixing bottlenecks in your application will fix the problem. The cloud will not solve fundamental programming inefficiencies.
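Before you reach for more hardware, profile. Here is a minimal sketch using Python’s built-in cProfile; render_page is a hypothetical stand-in for whatever slow code path you are chasing.

```python
import cProfile
import pstats

def render_page():
    # Hypothetical stand-in for a slow request handler; in practice you
    # would profile your real view or template code.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
render_page()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```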
Go Dedicated
If you are a small business with relatively simple hosting operations, then don’t ignore dedicated servers. I know the pressure from customers to use the cloud is powerful, but that’s because they only see the marketing hype.
The reality is that a dedicated server, properly managed, will generally provide greater performance and reliability at lower costs than current cloud service options.