SYSPRO Error Message: Operator Already Logged In

A warning message that most SYSPRO users will encounter at some point is the “Operator already logged in” prompt. Under normal circumstances, this message means exactly what it says! The operator is already signed in.

SYSPRO Error Message Operator Sign In

However, the error message can appear for other reasons that may be puzzling to the user. It is most typically associated with users not exiting SYSPRO through normal means (crashes, forced computer shutdowns, etc.). Both ERP administrators and end users should know what this error means and why it may appear even when the user is not actually signed in.

SYSPRO Error Message Operator Logged In

What does this “Operator already logged in” message mean?

SYSPRO’s database has a table called AdmOperator. Inside this table there is a column used to indicate whether a SYSPRO operator is currently signed in. The column value is set to “Y” when a user signs in and is cleared when SYSPRO is closed out normally by the user. The “Y” value can linger in the database if the user fails to close out of SYSPRO “gracefully”.

In that case, the “Operator already logged in” message will appear. The user has the option to proceed, which will clear any lingering operator entries. If the user is in fact already signed in, any previous session is terminated by the system.
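
For ERP administrators who want to confirm which operators the database still flags as signed in, a quick read-only check of AdmOperator can help. The sketch below uses Python with pyodbc; the column, server, and database names are assumptions (the exact schema varies by SYSPRO version), so verify them against your own system, and let SYSPRO itself clear the flag rather than editing it by hand.

```python
# Read-only diagnostic: list operators whose sign-in flag is still set.
# Column, server, and database names are assumptions -- confirm them against
# your SYSPRO version's AdmOperator schema before relying on this.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=your_syspro_db;"
    "Trusted_Connection=yes;"
)

cursor = conn.cursor()
cursor.execute("SELECT Operator FROM AdmOperator WHERE OperatorLoggedIn = 'Y'")
for (operator,) in cursor.fetchall():
    print(f"Operator still flagged as signed in: {operator}")
```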

What causes this “Operator already logged in” message?

Besides the intended circumstance of the operator already being signed in on another computer, the message also appears if a user fails to close out of SYSPRO “gracefully”. Examples of this could include:

  • The user shuts down their PC while SYSPRO is running. 
  • The user closes SYSPRO forcefully using the Windows Task Manager. This is common in the event of SYSPRO freezing or crashing. 
  • Network failures between the SYSPRO client and the application server cause communication errors. 
  • The user closes their web browser while using SYSPRO Avanti without using the application’s logout functionality.

Some of these events may also result in “Unknown Processes” lingering in SYSPRO. These will have to be closed out using administrative tools in SYSPRO. To learn about these processes, see our article on Handling Unknown Processes in SYSPRO.

So, what should you do about this message?

Clicking “OK” to proceed is all you need to do! If the warning appears because the user is in fact already signed in, that previous session will simply be terminated. If it appeared for any of the other reasons outlined above, the incorrect flags are cleared and the database fields are reset to their intended status. It is good to inform users that this message is harmless and that they can safely proceed if they do not believe their operator is signed in anywhere else.

How to Handle Unknown Processes in SYSPRO

Handling Unknown Processes in SYSPRO as an ERP Administrator

In SYSPRO, an “Unknown Process” is the result of a SYSPRO client having lost its connection to the host server prematurely. When an unknown process is detected, it means that a process is still running on the host server despite the client connection having been disconnected. Unknown processes can occur in the event of network disruption or a SYSPRO client shutting down unexpectedly.

Unknown Processes in SYSPRO ERP Admin

While SYSPRO generally catches common disconnects and clears these processes gracefully, in some cases a process may linger and be declared a “runaway” process. From an administrative point of view, it is important to stay on top of unknown processes, as they can hog valuable server resources and cause general instability if they are not terminated regularly. Additionally, unknown processes can even consume user licenses, which can affect other operators’ access to SYSPRO if your environment has limited user licensing.

To monitor and terminate any current unknown processes, you can use SYSPRO’s “Users” (IMPUSN) program. You can access the program by going to Main Menu > Administration > Logout Users. This program displays a list of all operators currently signed in to SYSPRO. On the left-hand side, there is a “Processes” pane that you can filter for “Unknown”. A list of unknown processes will be displayed in the pane once selected. If you have any unknown processes in your system, the “End All Unknown Processes…” button will be enabled. Clicking it will clear the hung processes on the application server, and the server resources they were holding will once again be available.

SYSPRO Unknown Processes

To monitor operators seeing frequent disconnects, you can use the built-in Client-Server Diagnostic program (IMPDG5). Note that this program can only be run from a client machine. You can also make use of the System Audit Query program (IMPJNS), where you can filter for various system-related events such as client-server disconnects. These tools will provide you with detailed information about any operators experiencing frequent disconnects or unexpected client shutdowns.

SYSPRO ERP System Audit Screen

Please be aware that terminating unknown processes is only a temporary fix; it does not address whatever underlying problem is causing them in the first place. Be sure to monitor the specific client machines or operators encountering frequent disconnects.

Here are some helpful tips to reduce the number of unknown processes seen in your SYSPRO environment:

  • Educate your users about the importance of exiting SYSPRO “gracefully”. Do not shut down Windows while SYSPRO is running, and do not kill SYSPRO through the Task Manager unless it is unresponsive. 
  • Set a “timeout” value against operators so that SYSPRO disconnects a user after a given period of inactivity. This can be done through the “Operators” program (IMPBOP). 
  • Schedule a task that logs out all SYSPRO users at a time when the system is not in use (generally overnight). 
  • Stay up to date with available SYSPRO hotfixes and the latest SYSPRO product releases to remain within SYSPRO’s product support. New hotfixes are usually only developed for the latest versions of SYSPRO.

Looking for help with your SYSPRO ERP environment?

As an ERP Administrator handling Unknown Processes in SYSPRO, you know it’s crucial to vigilantly manage and terminate these processes to prevent resource depletion and licensing issues. The “Users” and “Client-Server Diagnostic” programs offer valuable tools for monitoring and addressing disconnects and unexpected shutdowns. However, it’s essential to address the root causes by educating users on proper exit procedures, setting timeout values, scheduling logouts during system downtime, and staying updated with SYSPRO hotfixes and releases to maintain product support and stability. Proactive management ensures the efficient operation of your SYSPRO environment.

If you find managing SYSPRO ERP processes and maintaining system stability a challenging task, consider reaching out to our team at EstesGroup. With our expertise in SYSPRO ERP consulting and our comprehensive suite of managed cloud and IT services, we can provide the support you need to streamline your operations, optimize performance, and ensure the smooth functioning of your SYSPRO environment. Don’t hesitate to leverage our experience and solutions to enhance your ERP management and IT infrastructure. Trust us at EstesGroup to help you navigate the complexities of SYSPRO with confidence.

SYSPRO Data Integrity: A Guide to Balance Functions

How to Ensure Data Integrity Within SYSPRO – Balance Functions Explained

Ensuring data integrity is a top priority for any software product, especially for an ERP such as SYSPRO. As users go about performing their daily activities, various problems can arise, even with the most mature ERP systems. The most common issues seen within SYSPRO that can lead to data instability are users being disconnected, programs freezing up, or business objects unexpectedly stopping in the middle of processing. With tens (or potentially hundreds) of daily active users, it is imperative for your business that the data within SYSPRO stays consistent. So how does SYSPRO combat data integrity problems and maintain the overall stability of its data? The answer is SYSPRO balance functions!

SYSPRO Data Integrity Balance Functions

A balance function in SYSPRO is a detailed process used to correct and adjust database information if discrepancies are detected. Balance functions are built into SYSPRO’s Period-End programs and are intended to be run prior to posting GL entries or performing Month-End/Year-End tasks. They help “balance” a module by comparing user transaction data to its own control totals and correcting any discrepancies found. Some examples of discrepancies that a balance function can correct include:

  • GL journal entries that have not been properly completed or are still marked with “in-process” flags if they were abandoned. Users unexpectedly disconnecting from the system can be a cause of this.
  • Failed inventory transactions. Minor hiccups from bugs or networking issues during inventory transactions can result in inaccurate inventory counts. For example, a stock code may display as having available quantity on-hand but an attempt to issue or release the quantity results in errors.
  • Specific key documents that were locked by users for maintenance and never released once the work was complete. Again, a potential result of unexpected user disconnects or program errors. These are commonly encountered within sales/purchase order entry and customer/supplier setup programs.

Scheduling SYSPRO Data Integrity Tasks

While most SYSPRO environments generally only run these balance functions during their period-end tasks, it is strongly recommended to schedule balance functions to run regularly. Sites with heavy user activity (including custom business object activity) may want to run balance functions overnight several days a week. The result is an overall smoother SYSPRO experience for all users.

Balance functions do not have their own standalone program. Instead, they are usually part of, and located within, period-end programs. The naming convention of some of these programs is not always clear, so it is not easy to identify all of them. As such, here is a full list of the programs within SYSPRO that contain or can perform the functionality of a “balance function”:

  • AP Period End
  • AR Period End
  • AR Bank Deposit Slip
  • Cash Book Period End
  • Assets Period End
  • Inventory Period End
  • Sales Order Purge
  • Purchase Order Purge
  • GRN Purge
  • Sales Analysis Update

It is imperative to understand that some of these programs contain critical data-altering functionality within SYSPRO relating to period-end module closures or purging of data. Tread with caution when accessing these programs and ensure that only the “Balance” option is selected, not any unwanted options pertaining to period-end closure and/or data purge functionality!

Some of the above-listed programs may have an option called “Reset lowest unprocessed journal”. As it is not always checked by default, it is recommended to enable this option prior to executing a balance function. It performs an additional data-stability check intended to fix GL journal issues.

SYSPRO Data Integrity Balance Functions AR Period Example

SYSPRO environments that are not familiar with the power of balance functions can (and will) encounter unwanted issues and potentially unstable data problems. Knowing how to utilize, execute, and schedule balance functions is key to ensuring your SYSPRO environment’s data remains both stable and trouble-free.

Ready to discover how an EstesGroup ERP consultant tackles data integrity challenges & ensures your company’s success? Chat with us now & sign up for a free demo to see what your business would look like in EstesCloud!

RPA DNA – What is Robotic Process Automation?

Robotic Process Automation (RPA) is a new software technology that has the potential, in conjunction with AI technologies, to transform business processes, policies, and Enterprise Resource Planning (ERP) systems.

Robotic Process Automation RPA

RPA removes workers’ mundane, time-consuming tasks so that they can instead focus on innovation and creation. With RPA, software robots, rather than humans, quickly and efficiently perform data system tasks. Simple software robots can log in to data systems, locate and move files, insert and alter information in data systems, and assist in analytics and reporting.

More advanced software robots, especially if they have AI technology, can interpret, organize, and make decisions in a cognitive, human-like way. Businesses will discover that RPA technology is relatively inexpensive to implement, and it is business-ready and scalable.

A variety of different industries — in manufacturing, finance, and healthcare — can benefit from adopting RPA technology into their business operations and processes. The benefits of RPA technology are expansive, and these benefits carry over into ERP system implementations and ERP processes. Ultimately, with RPA, businesses can focus on improving their workplace atmospheres so that they are more efficient and productive.

What are the benefits of Robotic Process Automation (RPA)?

As businesses seek to automate their workflows to become more efficient and productive, Robotic Process Automation (RPA) will continue to transform workplace atmospheres and advance processes and operations while increasing production and profits. By implementing RPA technology, businesses will realize the following benefits:

  • RPA is initially inexpensive to implement and is ready for use, with minimal coding, by most data systems.
  • RPA eliminates some of the monotonous, arduous tasks that fatigue workers.
  • RPA reduces human error and improves the speed, efficiency, and accuracy of repetitive tasks.
  • RPA adapts to meet increased production needs and ultimately reduces costs and increases production.
  • RPA creates a happier working atmosphere in which employees can focus on customer relations and innovation rather than mundane tasks.
  • RPA encourages a strong increase in return on investment (ROI).
  • RPA promotes consistent compliance with industry and government standards.
  • RPA enhances security by reducing human interaction with sensitive, private information.
  • RPA can automatically generate reports and analytics that businesses can use to improve their processes and operations.

How does Robotic Process Automation (RPA) integrate with ERP systems?

Enterprise resource planning (ERP) systems are essential for businesses to tailor their workplace atmospheres, and by utilizing Robotic Process Automation (RPA), businesses can automate mundane tasks and reassign them to software robots rather than humans.

Users will be able to realize the full benefits of ERP systems and focus on more foundational tasks while RPA accomplishes lower-skilled, mundane tasks. ERP systems experience benefits similar to those businesses see when integrating Robotic Process Automation (RPA). Some areas where ERP operations can utilize and benefit from RPA include:

  • Accurate data capture and transfer
  • Assistance with and automation of data migration
  • Inventory and supply chain management
  • Real-time data sharing
  • Real-time analytics and reporting necessary for compliance

Why should businesses integrate their ERP systems with RPA technology?

Many companies are hesitant to implement Robotic Process Automation (RPA) or fear that RPA will eliminate workers. RPA is cost-effective and easy for businesses to implement and integrate into their ERP systems. RPA software increases speed, productivity, and efficiency of processes and operations while encouraging a happier workplace atmosphere.

RPA doesn’t replace humans. It certainly is more consistent and reliable, but there will always be a need for human interaction. Although RPA will eliminate many of the lower-skilled, mundane tasks that humans must perform, workers will still be responsible for higher-skilled, fundamental tasks. As RPA streamlines workplaces and ERP systems, humans will be able to focus on more complex, meaningful tasks that will help businesses grow and maximize profits.

Combining an ERP system with new cloud-based technology allows businesses to experience all the benefits of both while approaching the future with automation and efficiency. Businesses will see cost reductions and great increases in return on investment (ROI).

ERP systems with integrated RPA technology encourage streamlined workplace atmospheres, innovation, competitiveness, and ultimately, business growth. RPA lets workers enjoy their coffee, innovate, and communicate with customers while it does the grunt work.

Looking for answers to questions about how new technology can help your business? Meet with our team to learn how cloud-based solutions and services can help you achieve your goals!

Epicor Prophet 21 Performance – Real-World Issues

Recently, I met with an Epicor Prophet 21 customer on a discovery call to review the issues they were encountering in relation to some ongoing P21 web UI slowdowns. ERP system performance is a common challenge across the ERP community, and in the Prophet 21 community, the subject of P21 performance is similarly of great importance. Coming out of the call, I thought I’d collect a few of the talking points and add a few additional P21 system performance considerations that can impact the speed and responsiveness of your Prophet 21 web UI.

Epicor Prophet 21 Performance Distribution Industry

Epicor Prophet 21 system performance can be a maze to navigate.

We had originally characterized the issue as a problem with the P21 API loading, but we soon began looking more broadly. As you might know, Prophet 21 sits on top of Microsoft’s Internet Information Services web server platform, known colloquially as “IIS”. There are several things to consider if your P21 web server is slowing down throughout the day, and with an ERP system like P21, the issues actually affecting the performance of the Prophet 21 web interface may reside many layers below the P21 web server.

Background:

It might be helpful to initially review the composition and operation of websites. Websites consist of both static and dynamic pages. A static page is pre-defined on the web server and is ready to be served up. A dynamic page is generated at run time and may differ each time it is generated. In terms of the HTML pages that make up the P21 user interface, generally speaking, the P21 application pool can only respond to a certain number of requests at a time. If it is busy responding to requests for dynamic pages, then it may not have any threads left to serve the static pages. For this reason, a code problem on a dynamic page can create the illusion that the static pages are being served “slowly”. My point is, don’t rule out code or SQL. As an example, if you have 100 pages all hitting a database or API at the same time, and all 100 await a response, request 101 may be blocked until one of the first 100 requests completes.
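
To make that queueing effect concrete, here is a minimal, generic sketch (not P21- or IIS-specific) of a fixed-size worker pool: one hundred slow “dynamic” requests occupy every worker, so the cheap “static” request submitted afterwards has to wait for a free worker even though it could otherwise return instantly. The pool size and delays are arbitrary illustration values.

```python
# Minimal illustration of requests queueing behind a fixed-size worker pool.
# Pool size and sleep times are arbitrary; this is not P21 or IIS code.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_dynamic_page(i):
    time.sleep(2)                      # simulates a slow database/API call
    return f"dynamic page {i} done"

def static_page():
    return "static page done"          # would normally return instantly

with ThreadPoolExecutor(max_workers=100) as pool:
    # 100 slow requests occupy every worker in the pool...
    futures = [pool.submit(slow_dynamic_page, i) for i in range(100)]

    # ...so this cheap static request still waits ~2 seconds for a free worker.
    start = time.time()
    result = pool.submit(static_page).result()
    print(result, f"(waited {time.time() - start:.1f}s)")
```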

Diagnosing the Degradation:

Beyond explicit issues like request load, there are plenty of things that you can do to help you diagnose performance problems with your Prophet 21 web application:

Load Profiles: What does your load profile look like normally? This makes a big difference – it may be that you always have an issue, but you can’t see the impact until your site receives load. You could try to test this (in staging) with something like JMeter.

Reviewing your logs: Does your application have logs? If not, you should consider adding some logging. If you already have logs, what do they say? Are there exceptions being thrown by your application? Is there something that is consistently failing?

IIS Logs: Enable IIS logs if you haven’t already. Reviewing your P21 IIS logs can help you see which requests are taking the longest. You can use something like Microsoft’s Log Parser to run SQL-like queries against your logs. You may even want to dump your logs into a SQL database if that makes your P21 logs easier to review. Once you know which pages are taking the longest, you can focus some of your attention on them.
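
If you would rather script a first pass instead of (or before) using Log Parser, something like the short Python sketch below can surface the slowest URLs from W3C-format IIS logs. It assumes the time-taken field is enabled (logged in milliseconds) and that the #Fields header line is present; adjust the log path for your server.

```python
# Rough first-pass analysis of W3C-format IIS logs: average time-taken per URL.
# Assumes the time-taken field is enabled (milliseconds); adjust the path as needed.
import glob
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])   # url -> [total_ms, hit_count]
fields = []

for path in glob.glob(r"C:\inetpub\logs\LogFiles\W3SVC1\*.log"):
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # column names for data rows
                continue
            if line.startswith("#") or not fields:
                continue
            row = dict(zip(fields, line.split()))
            if "cs-uri-stem" in row and "time-taken" in row:
                totals[row["cs-uri-stem"]][0] += int(row["time-taken"])
                totals[row["cs-uri-stem"]][1] += 1

# Print the 20 slowest URLs by average response time.
slowest = sorted(totals.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for url, (ms, hits) in slowest[:20]:
    print(f"{url}: avg {ms / hits:.0f} ms over {hits} requests")
```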

Memory: How much memory is your application pool using? A memory leak is an obvious candidate but should be quite easy to see. Use Windows’ inbuilt Performance Monitor to track memory consumed by your application pool over the day and see if this increases as the day goes on.
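
Performance Monitor is the standard tool for this, but if you prefer scripting, a rough alternative is to sample worker-process memory yourself. The sketch below uses the third-party psutil package and assumes IIS worker processes named w3wp.exe; run it a few times across the day and compare the readings.

```python
# Snapshot of IIS worker-process (w3wp.exe) memory use; run periodically and
# compare readings to spot a pool whose memory keeps climbing through the day.
import datetime
import psutil   # third-party package: pip install psutil

for proc in psutil.process_iter(["pid", "name", "memory_info"]):
    if proc.info["name"] and proc.info["name"].lower() == "w3wp.exe":
        rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
        print(f"{datetime.datetime.now():%H:%M:%S} pid={proc.info['pid']} "
              f"working set ~{rss_mb:.0f} MB")
```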

SQL Performance: The performance of your P21 SQL database may be an underlying cause of poor Prophet 21 user interface performance. SQL Server provides a set of system views called Dynamic Management Views, or DMVs, that can provide details about server and database health and performance. These can be very helpful in diagnosing performance issues at this level. One common DMV, sys.dm_exec_requests, can help you understand query properties such as wait_type, wait_time, blocking_session_id, and total_elapsed_time.
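
As a starting point, the query below lists currently executing requests along with the wait and blocking columns mentioned above. It is wrapped in Python here only for consistency with the other sketches; the SQL itself can be pasted straight into SSMS. The connection details and database name are placeholders.

```python
# List currently executing requests with their waits and blockers via
# sys.dm_exec_requests. Connection details are placeholders; the same SQL
# can be run directly in SSMS.
import pyodbc

sql = """
SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.blocking_session_id, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50              -- skip most system sessions
ORDER BY r.total_elapsed_time DESC;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=your_p21_db;Trusted_Connection=yes;"
)
for row in conn.cursor().execute(sql):
    print(row.session_id, row.status, row.wait_type,
          row.wait_time, row.blocking_session_id, row.total_elapsed_time)
```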

P21 Application Pool Connections: Check how many connections your application pool has open – using something like Microsoft’s TCPView. Your application pool will try to re-use connections where possible, but you’ll probably see a lot of open connections to your application pool. One interesting thing you can see from this is how many connections you have open to your SQL database or any external APIs your application is using.
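
TCPView shows this interactively; if you want a scripted snapshot instead, the psutil sketch below groups established TCP connections by owning process name. Run it on the web server (elevated, so it can see other accounts’ processes) and look at how many connections w3wp.exe is holding to SQL Server or external APIs.

```python
# Count established TCP connections per owning process -- a scripted
# stand-in for eyeballing TCPView. Run elevated to see all processes.
from collections import Counter
import psutil   # third-party package: pip install psutil

counts = Counter()
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.pid:
        try:
            counts[psutil.Process(conn.pid).name()] += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

for name, n in counts.most_common(10):
    print(f"{name}: {n} established connections")
```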

Use an Application Performance and Monitoring Tool: Performance monitoring tools, like AppDynamics, will be able to help pinpoint slow performing parts of your code. Unfortunately, there’s a little bit of a learning curve to be able to use these tools effectively, but they can be very powerful in helping to diagnose problems with your applications.

SQL Server AutoGrowth Property: Review the property in your SQL database pertaining to AutoGrowth. You may encounter issues if the following are occurring:

1. The database is transactionally very busy.

2. AutoGrowth is enabled.

3. The AutoGrowth increment is set to a small default amount (in MB). This combination can cause random slowdowns on the database engine, which could impact the API application pool response time.

One thing to test would be to set the AutoGrowth increment (in MB) to a much larger value. That way, AutoGrowth events will only happen periodically.
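
The current growth settings are easy to inspect, and changing them is a single ALTER DATABASE statement. The sketch below is generic rather than P21-specific: the server, database, and logical file names are placeholders, and the 512 MB increment is just an example value to size against your own workload (test outside production first).

```python
# Inspect (and optionally adjust) data-file AutoGrowth settings.
# Server, database, and logical file names are placeholders; size the
# increment for your own workload and test outside production first.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=your_p21_db;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# growth is a count of 8 KB pages unless is_percent_growth = 1.
for name, growth, is_pct in cur.execute(
    "SELECT name, growth, is_percent_growth FROM sys.database_files"
):
    step = f"{growth}%" if is_pct else f"{growth * 8 // 1024} MB"
    print(f"{name}: grows by {step}")

# Example: grow the primary data file in larger 512 MB steps.
# cur.execute("ALTER DATABASE your_p21_db "
#             "MODIFY FILE (NAME = your_p21_data, FILEGROWTH = 512MB)")
```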

Look for Memory Leaks: Once I had a customer experiencing IIS performance degradation issues with a custom web application we had built using ASP.NET and Crystal runtime integration. Ultimately, the issues with IIS and the web app related to memory leaks that were not obvious at all until we started doing some deep-dive testing. Consider the possibility of internal memory leaks when building a support case around application performance issues that may or may not have been resolved in minor version changes. IIS also plays a part in this, in how it manages internal cleanup and recycling for application pools, so this may be an area that you need to explore as well.

As you can see, Epicor Prophet 21 system performance can be a maze to navigate. To find your way through the P21 performance maze, there are many potential paths to take, and depending on the ultimate source of the problem, many might be dead ends. But in understanding the directions one might take in navigating the many potential Prophet 21 performance issues, P21 users can hopefully find themselves at the maze’s end – and moving on to bigger and better things.

Data Center Strategy: How To Cloud Up For Uptime

A Cloud is a Data Center and a Data Center is a Cloud?

Cloud applications ultimately sit upon the foundation of a server stack. You can view a cloud-based server as someone else’s computer, and picture these servers housed in a data center, which is their most likely location.

A data center can be simply described as a specified space within a building designed to securely house computing resources.

Data center considerations: servers, power, and communication.

A large data center normally involves an extensive open area, which is divided into racks and cages, to hold the servers themselves, as well as the power and communication connections used to link each individual server with the rest of the data center network. This network would reside in a building with sufficient architecture to allow for rapid data communication, and similarly high-performing connections to the outside world.

The building itself is normally a large and highly secure edifice, constructed from reinforced building materials so as to prevent physical compromise. It is often located on a campus that is itself physically guarded with high fences and rigid gates.

The Servers Themselves: What Is In Your Data Center?

Inside the building (the data center) exists a complex cooling and ventilation system to prevent the heat-generating computing devices from overheating. The campus is supported by redundant power systems that allow the network to run even if the main power grid experiences an interruption or shutdown. The inner workings of the data center are designed to prevent downtime, but the materials used in construction can vary. Consider a pencil made from wood vs. a pencil made from plastic. Consider further a pencil manufactured from metal, built to protect a thin and fragile graphite fragment.

The ways in which end users gain access to the resources in a data center can vary, because cloud provisioning can occur in many layers.

Option A: Cloud Provider = Data Center

Sometimes the cloud provider is itself the data center. Most often this is the case when you want to use server space from a data center, or else wish to colocate your hardware in a data center. For instance, as a customer, you might procure new hardware and move it to one of US Signal’s data centers in a colocation arrangement. This allows you to benefit from US Signal’s physical security, network redundancy, high-speed fiber network, and peering relationships, to allow for a broad array of high-speed communications.

Option B: Cloud Provider = Data Center Management Firm

Sometimes the cloud provider is an organization that manages the allocation and management of cloud resources for you — they serve as an intermediary between the end customer and the data center. For instance, EstesGroup partners with US Signal. We help customers choose the right server resources in support of the application deployment and management services that we provide for ERP (Enterprise Resource Planning) customers.

Moreover, not all data centers are created equal. Data centers differ in countless ways, including (but not limited to) availability, operating standards, physical security, network connectivity, data redundancy, and power grid resiliency. Most often, larger providers of cloud infrastructure actually provide a network of tightly interconnected data centers, such that you’re not just recruiting a soldier — you’re drafting an entire army. 

As such, when choosing a cloud provider, understanding the underlying data centers in use is as important as understanding the service providers themselves. That said, what are some of the questions that you should ask your provider when selecting a data center? 

Is the provider hosting out of a single data center or does the provider have data center redundancy?

Geo-diverse data centers are of great importance when it comes to the overall risk of downtime. Diversely located data centers provide inherent redundancy, which is especially beneficial when it comes to backup and disaster recovery.

But what defines diverse? One important consideration relates to the locations of data centers relative to America’s national power grid infrastructure. Look for a provider that will store your primary site and disaster recovery site on separate power grids.

This will protect you from the possibility of an outage at one of the individual grid locations. Think of the continental divide. On separate sides of the divide, water flows in one of two directions. When it comes to national power grids, support comes from different hubs. Look for a provider who has redundant locations on the other side of the divide to protect you in the event of a major power outage.

Are they based on a proprietary data center, colocated, or leveraging the state-of-the-art technology of a leading data center?

A provider of hosting services may choose to store their data in one of many places. They may choose to leverage a world-class data center architecture like US Signal’s. Conversely, they may choose to colocate hardware that they already own in a data center. Or they may choose, like many managed services providers do, to leverage a proprietary data center, most often located in their home office.

Colocation is a common first step into the cloud. If you already own hardware and would like to leverage a world-class data center, colocation is a logical option. But for cloud providers, owning hardware becomes a losing war of attrition. Hardware doesn’t stay current, and unless it’s being procured in large quantities, it’s expensive. These costs often get passed along to the customer. Worse still, it encourages providers to skimp on redundancy, making their offerings less scalable and less robust in the event of a disaster.

Proprietary data centers add several layers of concern beyond the colocation option. In addition to the hardware ownership challenges, the provider is now responsible for all the infrastructure duties that come with data center administration, such as redundant power, cooling, physical security, and network connectivity.

Moreover, proprietary data centers often lack the geo-diversity that comes with a larger provider. Beyond infrastructure, security is a monumental responsibility for a data center provider, and many smaller providers struggle to keep up with evolving threats. In fact, Estes recently onboarded a customer who came to us after their Managed Service Provider’s proprietary data center was hacked and ransomed.

Is the cloud provider hosting out of a public cloud data center? 

Public cloud environments operate in multi-tenant configurations where customers contend with one another for resources. Resource contention means that when one customer’s resource consumption spikes, the performance experienced by the other customers in the shared tenant will likely suffer. Moreover, many multi-tenant environments lack the firewall isolation present in private cloud infrastructures, which increases security concerns. Isolated environments are generally safer environments. 

Is the cloud provider proactively compliant?

Compliance is more than adherence to accounting standards — it is a means of ensuring that your provider performs the due diligence needed to keep its business practices from creating vulnerabilities that could compromise its security and reliability assertions. What compliance and auditing standards does your cloud provider adhere to?

Is your cloud provider compliant according to their own hardware vendor’s standards?

Hardware providers, such as Cisco, offer auditing services to ensure their hardware is being reliably deployed. Ensure that your provider adheres to their vendor’s standards. How about penetration testing? Is your provider performing external penetration testing to ensure PCI security compliance? In terms of industry-standard compliance frameworks, such as HIPAA, PCI DSS, and SOC 1 and SOC 2, ensure that your provider is being routinely audited. Leveraging industry standards through compliance best practices can go a long way toward making sure they are not letting their guard down.

What kind of campus connectivity is offered between your data centers and the outside world?

Low national latency is of utmost importance from a customer perspective. Efficient data transfer between the data centers themselves and from a given data center to the outside world is fundamental to a cloud customer. Transactional efficiency is achieved in multiple ways.

For a network to be efficient, the data itself must take as few “hops” as possible from one network to another. This is best achieved through tight partnerships between the data center and both the national and regional ISPs that service individual organizations.

Within the data center network, an efficient infrastructure is helpful. US Signal, for instance, has a 14K mile network fiber backbone connecting its data centers and connecting them to regional transfer stations. This allows US Signal to support 3 ms latency between its 9 data centers, and to physically connect with over 90 national ISPs. This results in an extremely low national latency.

What kinds of backup and disaster recovery solutions can be bundled with your cloud solutions?

Fundamental to a cloud deployment is the ability to provide redundancy in the event of a disaster. Disaster recovery is necessary to sustain an environment, whether on premise or in the cloud. But a disaster recovery solution must adhere to rigorous standards of its own if it is to be effective. Physical separation between a primary and secondary site is one such baseline need. Additionally, the disaster recovery solution needs to be sufficiently air-gapped in order to hit your desired RPO and RTO (recovery point and recovery time objective) targets, while avoiding potential cross-contamination between platforms in the event of hacking, viruses, or ransomware.

What kinds of uptime and reliability guarantees are offered by your data center?

All of the above aspects of a data center architecture should ultimately result in greater uptime for the cloud consumer. The major public data center providers are notorious for significant outages, and this has deleterious effects on customers of these services. Similarly, smaller providers may lack the infrastructure that can support rigorous uptime standards. When choosing a provider, make sure to understand the resiliency and reliable uptime of the supporting platform. EstesGroup can offer a 100% uptime SLA when hosted in our cloud with recovery times not achievable by the public cloud providers.

Uptime has planned and unplanned components that must also be considered. Many larger cloud providers do not give advance warning when instances will be shut down for upgrades, which can be extremely disruptive for consumers and result in a loss of control that conflicts with daily business initiatives. Ensure that planned downtime is communicated and understood before it happens.

How scalable is the overall platform?

Scalability has to do with flexibility and speed: how flexibly can the resources of an individual virtual machine (VM) be adjusted, and how quickly can those changes be made? Ideally, your cloud provider offers dynamic resource pool provisioning, which allows computing resources to be allocated when and where they are needed.

Some provider environments support “auto-scaling,” which can dynamically create and terminate instances, but they may not allow dynamic resource changes to an existing instance. In these cases, if a customer wishes to augment the resources of an instance, it must be terminated and rebuilt using the instance options the provider offers. This can be problematic. Additionally, provisioning, whether to a new VM or an existing one, should be quick and not require a long lead time to complete. Ensure that your cloud provider specifies the turnaround time required to provision and re-provision resources.

What are the data movement costs?

The costs associated with the movement of data can significantly impact your total cloud costs. These are normally applied as a toll fee that accumulates based on the amount of data that moves over a given time. So these costs can be unpredictable. But what kinds of data movements occur?

  • Data ingress: data moving into the storage location, as it is being uploaded.
  • Data egress: data moving out of the storage location, as it is being downloaded. 

Data centers rarely charge for ingress movement — they like the movement of data into their network. But many will charge for data egress. This means that if you want your data back, they may charge you for it.

Sometimes these fees even occur when data is moving within the provider’s network, between regions and instances. If you’re looking for a cloud provider, check the fine print to determine whether egress fees are applied, and estimate your data movement, to understand your total cost. EstesGroup gives you symmetrical internet data transfer with no egress charges, so your data movement does not result in additional charges. This means that your cloud costs are predictable.
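
To make the “estimate your data movement” advice concrete, here is a toy calculator comparing a flat-rate plan against a metered plan with a per-GB egress fee. All of the rates and volumes are made-up placeholders; substitute your provider’s published pricing and your own traffic estimate.

```python
# Toy comparison of a flat-rate plan vs. a metered plan with per-GB egress fees.
# All rates and volumes are made-up placeholders -- use your provider's pricing.
def monthly_cost(base_fee, egress_gb, egress_rate_per_gb, free_egress_gb=0):
    billable = max(egress_gb - free_egress_gb, 0)
    return base_fee + billable * egress_rate_per_gb

egress_gb = 2_000   # estimated data downloaded/replicated out per month

flat    = monthly_cost(base_fee=1_500, egress_gb=egress_gb, egress_rate_per_gb=0.00)
metered = monthly_cost(base_fee=1_000, egress_gb=egress_gb, egress_rate_per_gb=0.09,
                       free_egress_gb=100)

print(f"Flat-rate plan: ${flat:,.2f}/month")
print(f"Metered plan:   ${metered:,.2f}/month")
```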

Does the cloud provider offer robust support?

Downtime can come from one of many situations. Your ISP could experience an outage, and you may need to fail over to your secondary provider. Or you may encounter an email phishing scam resulting in a local malware attack. Or you may experience an outage due to a regional power grid issue. In these extenuating circumstances, you may find yourself needing to contact your cloud provider in a hurry.

As such, you’ll want a provider that offers robust pre-sales and post-sales support that is available 24/7/365. Many providers offer high-level support only if you subscribe to an additional support plan, which is an additional monthly cost. Wait times are also an issue — you may have a support plan, but the support may be slow and cumbersome. Look for a cloud provider that will guarantee an engineer in less than 60 seconds, 24/7/365.

Are you ready for a tour of one of the best data centers in the world? Meet with the EstesCloud team to get the right cloud strategy for your business.