A large data center normally centers on an extensive open area, divided into racks and cages, that holds the servers themselves, along with the power and communication connections linking each individual server to the rest of the data center network. This network resides in a building engineered for rapid internal data communication and equally high-performing connections to the outside world.
The building itself is normally a large, highly secure edifice, constructed from reinforced materials so as to prevent physical compromise. It is often located on a campus that is itself physically guarded, with high fences and rigid gates.
DATA CENTER HARDWARE

The Servers Themselves: What Is In Your Data Center?

Inside the building (the data center) exists a complex cooling and ventilation system that keeps the heat-generating computing equipment from overheating. The campus is supported by redundant power systems that allow the network to keep running even if the main power grid experiences an interruption or shutdown. The inner workings of the data center are designed to prevent downtime, but the materials used in construction can vary. Consider a pencil made from wood versus a pencil made from plastic. Consider further a pencil manufactured from metal, built to protect a thin and fragile graphite core.

DATA CENTER STRATEGY

The ways in which end users can gain access to the resources in a data center vary, because cloud provisioning can occur in many layers.
Option A: Cloud Provider = Data Center
Sometimes the cloud provider is itself the data center. This is most often the case when you want to use server space from a data center, or wish to colocate your own hardware there. For instance, as a customer, you might procure new hardware and move it to one of US Signal’s data centers in a colocation arrangement. This lets you benefit from US Signal’s physical security, network redundancy, high-speed fiber network, and peering relationships, which together allow for a broad array of high-speed communications.
Option B: Cloud Provider = Data Center Management Firm
Sometimes the cloud provider is an organization that handles the allocation and management of cloud resources for you, serving as an intermediary between the end customer and the data center. For instance, EstesGroup partners with US Signal. We help customers choose the right server resources in support of the application deployment and management services that we provide for ERP (Enterprise Resource Planning) customers.
Moreover, not all data centers are created equal. Data centers differ in countless ways, including (but not limited to) availability, operating standards, physical security, network connectivity, data redundancy, and power grid resiliency. Most often, larger providers of cloud infrastructure actually provide a network of tightly interconnected data centers, such that you’re not just recruiting a soldier — you’re drafting an entire army.
As such, when choosing a cloud provider, understanding the underlying data centers in use is as important as understanding the service providers themselves. That said, what are some of the questions that you should ask your provider when selecting a data center?
Is the provider hosting out of a single data center or does the provider have data center redundancy?
Geo-diverse data centers are of great importance when it comes to overall risk of downtime. Diversely located data centers provide inherent redundancy, which is especially beneficial for backup and disaster recovery.
But what defines diverse? One important consideration relates to the locations of data centers relative to America’s national power grid infrastructure. Look for a provider that will store your primary site and disaster recovery site on separate power grids.
This protects you from the possibility of an outage at one of the individual grid locations. Think of the Continental Divide: on either side of the divide, water flows in one of two directions. National power grids work similarly, with support coming from different hubs. Look for a provider with redundant locations on the other side of the divide to protect you in the event of a major power outage.
Are they based in a proprietary data center, colocated, or leveraging the state-of-the-art technology of a leading data center?
A provider of hosting services may choose to store their data in one of many places. They may choose to leverage a world-class data center architecture like US Signal’s. Conversely, they may choose to colocate hardware that they already own in a data center. Or they may choose, like many managed services providers do, to leverage a proprietary data center, most often located in their home office.
Colocation is a common first step into the cloud. If you already own hardware and would like to leverage a world-class data center, colocation is a logical option. But for cloud providers, owning hardware becomes a losing war of attrition. Hardware doesn’t stay current, and unless it’s being procured in large quantities, it’s expensive. These costs often get passed along to the customer. Worse still, it encourages providers to skimp on redundancy, making their offerings less scalable and less robust in the event of a disaster.
Proprietary data centers add several layers of concern on top of the colocation option. In addition to the hardware ownership challenges, the provider takes on all the infrastructure responsibilities that come with data center administration, such as redundant power, cooling, physical security, and network connectivity.
Moreover, proprietary data centers often lack the geo-diversity that comes with a larger provider. Beyond infrastructure, security is a monumental responsibility for a data center provider, and many smaller providers struggle to keep up with evolving threats. In fact, Estes recently onboarded a customer who came to us after their Managed Service Provider’s proprietary data center was hacked and ransomed.
Is the cloud provider hosting out of a public cloud data center?
Public cloud environments operate in multi-tenant configurations where customers contend with one another for resources. Resource contention means that when one customer’s resource consumption spikes, the performance experienced by the other customers in the shared environment will likely suffer. Moreover, many multi-tenant environments lack the firewall isolation present in private cloud infrastructures, which increases security concerns. Isolated environments are generally safer environments.
Is the cloud provider proactively compliant?
Compliance is more than adherence to accounting standards: it verifies that your provider is performing the due diligence needed to ensure that its business practices do not create vulnerabilities that could compromise its security and reliability assertions. What compliance and auditing standards does your cloud provider adhere to?
Is your cloud provider compliant according to their own hardware vendor’s standards?
Hardware providers such as Cisco, for instance, offer auditing services to ensure their hardware is being reliably deployed. Ensure that your provider adheres to its vendors’ standards. How about penetration testing? Is your provider performing external penetration testing to ensure PCI security compliance? For industry-standard compliance frameworks, such as HIPAA, PCI DSS, and SOC 1 and SOC 2, ensure that your provider is being routinely audited. Leveraging industry standards through compliance best practices goes a long way toward making sure they are not letting their guard down.
What kind of campus connectivity is offered between your data centers and the outside world?
Low national latency is of utmost importance from a customer perspective. Efficient data transfer between the data centers themselves, and from a given data center to the outside world, is fundamental to a cloud customer. Transactional efficiency is achieved in multiple ways.
For a network to be efficient, data must take as few “hops” as possible from one network to another. This is best achieved through tight partnerships between the data center and the national and regional ISPs that service individual organizations.
Within the data center network, an efficient infrastructure is essential. US Signal, for instance, has a 14,000-mile fiber network backbone connecting its data centers to one another and to regional transfer stations. This allows US Signal to support 3 ms latency between its nine data centers and to physically connect with over 90 national ISPs, resulting in extremely low national latency.
What kinds of backup and disaster recovery solutions can be bundled with your cloud solutions?
Fundamental to a cloud deployment is the ability to provide redundancy in the event of a disaster. Disaster recovery is necessary to sustaining an environment, whether on premise or in the cloud. But a disaster recovery solution must adhere to rigorous standards of its own if it is to be effective. Physical separation between a primary and secondary site is one such baseline need. Additionally, the disaster recovery solution needs to be sufficiently air-gapped, in order to hit your desired RPO (Recovery Point Objective) and RTO (Recovery Time Objective) targets while avoiding cross-contamination between platforms in the event of hacking, viruses, or ransomware.
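The RPO target mentioned above can be sanity-checked with simple arithmetic: in the worst case, you lose everything written since the last good copy, so the replication or backup interval must not exceed the RPO. A minimal sketch, with purely illustrative intervals (not any provider's guarantee):

```python
# Worst-case data loss equals the interval between successful copies,
# so a backup/replication interval meets an RPO only if it is no longer
# than the RPO itself. Intervals below are illustrative assumptions.

def meets_rpo(backup_interval_min: float, rpo_min: float) -> bool:
    """True if the copy interval satisfies the Recovery Point Objective."""
    return backup_interval_min <= rpo_min

print(meets_rpo(backup_interval_min=15, rpo_min=60))   # 15-min copies vs 1-hour RPO
print(meets_rpo(backup_interval_min=240, rpo_min=60))  # 4-hour copies miss a 1-hour RPO
```

The same reasoning applies to RTO, except the quantity being bounded is the time to restore service rather than the age of the data.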
What kinds of uptime and reliability guarantees are offered by your data center?
All of the above aspects of a data center architecture should ultimately result in greater uptime for the cloud consumer. The major public data center providers are notorious for significant outages, and this has deleterious effects on customers of these services. Similarly, smaller providers may lack the infrastructure that can support rigorous uptime standards. When choosing a provider, make sure to understand the resiliency and reliable uptime of the supporting platform. EstesGroup can offer a 100% uptime SLA when hosted in our cloud with recovery times not achievable by the public cloud providers.
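Uptime percentages are easier to compare once converted into allowable downtime per year. The sketch below shows the conversion for a few common SLA tiers; the specific tiers are illustrative, and a real comparison should use the figures in your provider's actual SLA:

```python
# Convert an uptime SLA percentage into maximum downtime per year.
# SLA tiers listed below are illustrative examples, not any specific
# provider's published terms.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(uptime_pct: float) -> float:
    """Maximum downtime (minutes/year) permitted by an uptime SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.99, 100.0):
    print(f"{sla}% uptime -> {downtime_minutes_per_year(sla):.1f} min/year")
```

The gap between tiers is dramatic: 99% uptime still permits several full days of outage per year, while 99.99% permits under an hour, which is why the exact SLA number matters.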
Uptime has planned and unplanned components that must also be considered. Many larger cloud providers do not give advance warning when instances will be shut down for upgrades, which can be extremely disruptive for consumers and results in a loss of control that conflicts with daily business initiatives. Ensure that planned downtime is communicated and understood before it happens.
How scalable is the overall platform?
Scalability has to do with flexibility and speed: how flexibly can the resources of an individual virtual machine (VM) be adjusted, and how quickly can those changes be made? Ideally, your cloud provider offers dynamic resource pool provisioning, which allows computing resources to be allocated when and where they are needed.
Some provider environments support “auto-scaling,” which can dynamically create and terminate instances, but they may not allow for dynamic resource changes to an existing instance. In these cases, if a customer wishes to augment the resources of an instance, it must be terminated and rebuilt using the desired instance options. This can be problematic. Additionally, provisioning, whether to a new VM or an existing one, should be quick and should not require a long lead time to complete. Ensure that your cloud provider specifies the elapsed time required to provision and re-provision resources.
What are the data movement costs?
The costs associated with the movement of data can significantly impact your total cloud costs. These are normally applied as a toll fee that accumulates based on the amount of data moved over a given period, which makes these costs unpredictable. But what kinds of data movements occur?
- Data ingress: data moving into the storage location, as it is being uploaded.
- Data egress: data moving out of the storage location, as it is being downloaded.
Data centers rarely charge for ingress movement — they like the movement of data into their network. But many will charge for data egress. This means that if you want your data back, they may charge you for it.
Sometimes these fees even occur when data is moving within the provider’s network, between regions and instances. If you’re looking for a cloud provider, check the fine print to determine whether egress fees are applied, and estimate your data movement, to understand your total cost. EstesGroup gives you symmetrical internet data transfer with no egress charges, so your data movement does not result in additional charges. This means that your cloud costs are predictable.
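The fine-print check above can be turned into a rough estimate. A minimal sketch, where the per-GB rate and free tier are hypothetical placeholders rather than any specific provider's pricing:

```python
# Rough monthly egress cost under a simple per-GB toll model.
# The $0.09/GB rate and 100 GB free tier are hypothetical placeholders;
# substitute the figures from your provider's actual rate card.

def estimate_egress_cost(gb_out: float, rate_per_gb: float = 0.09,
                         free_gb: float = 100.0) -> float:
    """Cost of moving `gb_out` GB out of the provider's network in a month."""
    billable = max(0.0, gb_out - free_gb)
    return billable * rate_per_gb

# Example: a 2 TB monthly restore or migration test (1948 GB billable).
print(f"${estimate_egress_cost(2048):.2f}")
```

Running this kind of estimate against your expected monthly egress, including any inter-region transfers the provider also bills for, is the quickest way to see whether a low headline price hides a large data movement bill.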
Does the cloud provider offer robust support?
Downtime can come from any of several situations. Your ISP could experience an outage, requiring you to fail over to a secondary provider. You might encounter an email phishing scam resulting in a local malware attack. Or you may experience an outage due to a regional power grid issue. In these circumstances, you may find yourself needing to contact your cloud provider in a hurry.
As such, you’ll want a provider that offers robust pre-sales and post-sales support that is available 24/7/365. Many providers offer high-level support only if you subscribe to an additional support plan, which is an additional monthly cost. Wait times are also an issue — you may have a support plan, but the support may be slow and cumbersome. Look for a cloud provider that will guarantee an engineer in less than 60 seconds, 24/7/365.