Data Center Strategy: How To Cloud Up For Uptime

A Cloud is a Data Center and a Data Center is a Cloud?

Cloud applications ultimately sit upon the foundation of a server stack. You can view a cloud-based server as someone else’s computer, and picture these servers housed in a data center, which is their most likely location.

A data center can be simply described as a specified space within a building designed to securely house computing resources.
Data Center Considerations

  • Servers
  • Power
  • Communication

A large data center normally consists of an extensive open area, divided into racks and cages, that holds the servers themselves, along with the power and communication connections used to link each individual server with the rest of the data center network. This network resides in a building engineered for rapid data communication, with similarly high-performing connections to the outside world.

The building itself is normally a large and highly secure edifice, constructed from reinforced building materials so as to prevent physical compromise. It is often located on a campus that is itself physically guarded with high fences and rigid gates.


The Servers Themselves: What Is In Your Data Center?

Inside the building (the data center) is a complex cooling and ventilation system that prevents the heat-generating computing devices from overheating. The campus is supported by redundant power systems that allow the network to run even if the main power grid experiences an interruption or shutdown. The inner workings of the data center are designed to prevent downtime, but the materials used in construction can vary. Consider a pencil made from wood vs. a pencil made from plastic. Consider further a pencil manufactured from metal, built to protect a thin and fragile graphite core.

The ways in which end users gain access to the resources in a data center vary, because cloud provisioning can occur at many layers.

Option A: Cloud Provider = Data Center

Sometimes the cloud provider is itself the data center. Most often this is the case when you want to use server space from a data center, or else wish to colocate your hardware in a data center. For instance, as a customer, you might procure new hardware and move it to one of US Signal's data centers in a colocation arrangement. This allows you to benefit from US Signal's physical security, network redundancy, high-speed fiber network, and peering relationships, which together support a broad array of high-speed communications.

Option B: Cloud Provider = Data Center Management Firm

Sometimes the cloud provider is an organization that handles the allocation and management of cloud resources for you — they serve as an intermediary between the end customer and the data center. For instance, EstesGroup partners with US Signal. We help customers choose the right server resources in support of the application deployment and management services that we provide for ERP (Enterprise Resource Planning) customers.

Moreover, not all data centers are created equal. Data centers differ in countless ways, including (but not limited to) availability, operating standards, physical security, network connectivity, data redundancy, and power grid resiliency. Most often, larger providers of cloud infrastructure actually provide a network of tightly interconnected data centers, such that you’re not just recruiting a soldier — you’re drafting an entire army. 

As such, when choosing a cloud provider, understanding the underlying data centers in use is as important as understanding the service providers themselves. That said, what are some of the questions that you should ask your provider when selecting a data center? 

Is the provider hosting out of a single data center or does the provider have data center redundancy?

Geo-diverse data centers are of great importance when it comes to the overall risk of downtime. Diversely located data centers provide inherent redundancy, which is especially beneficial when it comes to backup and disaster recovery.

But what defines diverse? One important consideration relates to the locations of data centers relative to America’s national power grid infrastructure. Look for a provider that will store your primary site and disaster recovery site on separate power grids.

This will buffer you against the possibility of an outage at one of the individual grid locations. Think of the Continental Divide: on either side of the divide, water flows in one of two directions. When it comes to national power grids, support likewise comes from different hubs. Look for a provider with redundant locations on the other side of the divide to protect you in the event of a major power outage.

Is the provider based in a proprietary data center, colocated, or leveraging the state-of-the-art technology of a leading data center?

A provider of hosting services may choose to store their data in one of many places. They may choose to leverage a world-class data center architecture like US Signal's. Conversely, they may choose to colocate hardware that they already own in a data center. Or they may choose, as many managed services providers do, to leverage a proprietary data center, most often located in their home office.

Colocation is not uncommon among first steps in the cloud. If you own hardware already, and would like to leverage a world-class data center, colocation is a logical option. But for cloud providers, owning hardware becomes a losing war of attrition. Hardware doesn't stay current, and unless it's being procured in large quantities, it's expensive. These costs often get passed along to the customer. Worse still, it encourages providers to skimp on redundancy, making their offerings less scalable and less robust in the event of a disaster.

Proprietary data centers add several layers of concern on top of the colocation option. In addition to the hardware ownership challenges, the provider is now responsible for all the infrastructure obligations that come with data center administration, such as redundant power, cooling, physical security, and network connectivity.

Moreover, proprietary data centers often lack the geo-diversity that comes with a larger provider. Beyond infrastructure, security is a monumental responsibility for a data center provider, and many smaller providers struggle to keep up with evolving threats. In fact, Estes recently onboarded a customer who came to us after their Managed Service Provider's proprietary data center was hacked and ransomed.

Is the cloud provider hosting out of a public cloud data center? 

Public cloud environments operate in multi-tenant configurations where customers contend with one another for resources. Resource contention means that when one customer's resource consumption spikes, the performance experienced by the other customers in the shared environment will likely suffer. Moreover, many multi-tenant environments lack the firewall isolation present in private cloud infrastructures, which increases security concerns. Isolated environments are generally safer environments.

Is the cloud provider proactively compliant?

Compliance is more than adherence to accounting standards — it is a means of guaranteeing that your provider is performing the due diligence necessary to ensure that its business practices do not create vulnerabilities that could compromise its security and reliability assertions. What compliance and auditing standards does your cloud provider adhere to?

Is your cloud provider compliant according to their own hardware vendor’s standards?

Hardware providers such as Cisco, for instance, offer auditing services to ensure their hardware is being reliably deployed. Ensure that your provider adheres to their vendor's standards. How about penetration testing? Is your provider performing external penetration testing to ensure PCI security compliance? In terms of industry-standard compliance frameworks, such as HIPAA, PCI DSS, and SOC 1 and SOC 2, ensure that your provider is being routinely audited. Leveraging industry standards through compliance best practices can go a long way toward making sure your provider is not letting its guard down.

What kind of campus connectivity is offered between your data centers and the outside world?

Low national latency is of utmost importance from a customer perspective. Efficient data transfer between the data centers themselves and from a given data center to the outside world is fundamental to a cloud customer. Transactional efficiency is achieved in multiple ways.

For a network to be efficient, the data itself must take as few "hops" as possible from one network to another. This is best achieved through tight partnerships between the data center and both the national and regional ISPs that service individual organizations.

Within the data center network, an efficient infrastructure is also essential. US Signal, for instance, has a 14,000-mile fiber network backbone connecting its data centers to one another and to regional transfer stations. This allows US Signal to support 3 ms latency between its 9 data centers, and to physically connect with over 90 national ISPs. The result is extremely low national latency.

What kinds of backup and disaster recovery solutions can be bundled with your cloud solutions?

Fundamental to a cloud deployment is the ability to provide redundancy in the event of a disaster. Disaster recovery is necessary to sustaining an environment, whether on premise or in the cloud. But a disaster recovery solution must adhere to rigorous standards of its own if it is to be effective. Physical separation between a primary and secondary site is one such baseline need. Additionally, the disaster recovery solution needs to be sufficiently air-gapped, in order to hit your desired RPO and RTO targets while avoiding potential cross-contamination between platforms in the event of hacking, viruses, or ransomware.
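To make RPO (Recovery Point Objective) and RTO (Recovery Time Objective) concrete, here is a minimal sketch of how the two metrics are measured after a failover; the targets and timestamps are purely hypothetical.

from datetime import datetime, timedelta

# Hypothetical targets for illustration only.
rpo_target = timedelta(hours=4)   # Recovery Point Objective: max tolerable data loss
rto_target = timedelta(hours=2)   # Recovery Time Objective: max tolerable downtime

# Hypothetical timeline of an outage.
last_replication = datetime(2021, 6, 1, 6, 0)    # last successful copy to the DR site
outage_began     = datetime(2021, 6, 1, 9, 30)   # primary site goes down
service_restored = datetime(2021, 6, 1, 11, 0)   # workloads running again at the DR site

data_loss_window = outage_began - last_replication   # work created since the last good copy
downtime         = service_restored - outage_began   # time the business was offline

print(f"Data loss window: {data_loss_window} (RPO met: {data_loss_window <= rpo_target})")
print(f"Downtime:         {downtime} (RTO met: {downtime <= rto_target})")

In this hypothetical timeline, the 3.5-hour data loss window fits inside the 4-hour RPO, and the 1.5-hour restoration meets the 2-hour RTO only because a physically separate secondary site was ready to take over.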

What kinds of uptime and reliability guarantees are offered by your data center?

All of the above aspects of a data center architecture should ultimately result in greater uptime for the cloud consumer. The major public data center providers are notorious for significant outages, and this has deleterious effects on customers of these services. Similarly, smaller providers may lack the infrastructure that can support rigorous uptime standards. When choosing a provider, make sure to understand the resiliency and reliable uptime of the supporting platform. EstesGroup can offer a 100% uptime SLA when hosted in our cloud with recovery times not achievable by the public cloud providers.

Uptime has a planned/unplanned component that must also be considered. Many larger cloud providers do not give advance warning when instances will be shut down for upgrades, which can be extremely disruptive for consumers and result in a loss of control that conflicts with daily business initiatives. Ensure that planned downtime is communicated and understood before it happens.

How scalable is the overall platform?

Scalability has to do with flexibility and speed. How flexibly can the resources of an individual virtual machine (VM) be adjusted, and how quickly can those changes be made? Ideally, your cloud provider offers dynamic resource pool provisioning — this allows for dynamic allocation of computing resources when and where they are needed.

Some provider environments support "auto-scaling," which can dynamically create and terminate instances, but they may not allow for dynamic resource changes to an existing instance. In these cases, if a customer wishes to augment the resources of an instance, it must be terminated and rebuilt using the desired instance options. This can be problematic. Additionally, provisioning, whether to a new VM or an existing one, should be quick, and should not require a long lead time to complete. Ensure that your cloud provider specifies the elapsed time required to provision and re-provision resources.

What are the data movement costs?

The costs associated with the movement of data can significantly impact your total cloud costs. These are normally applied as a toll that accumulates based on the amount of data that moves over a given period, so these costs can be unpredictable. But what kinds of data movements occur?

  • Data ingress: data moving into the storage location, as it is being uploaded.
  • Data egress: data moving out of the storage location, as it is being downloaded. 

Data centers rarely charge for ingress movement — they like the movement of data into their network. But many will charge for data egress. This means that if you want your data back, they may charge you for it.

Sometimes these fees even occur when data is moving within the provider’s network, between regions and instances. If you’re looking for a cloud provider, check the fine print to determine whether egress fees are applied, and estimate your data movement, to understand your total cost. EstesGroup gives you symmetrical internet data transfer with no egress charges, so your data movement does not result in additional charges. This means that your cloud costs are predictable.
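As a back-of-the-envelope illustration of why egress fees matter, the short sketch below estimates a monthly data movement bill. The per-GB rates and volumes are placeholders, not any provider's actual pricing.

# Hypothetical per-GB rates and monthly volumes -- placeholders, not real pricing.
ingress_rate_per_gb = 0.00        # ingress is typically free
egress_rate_per_gb = 0.09         # example egress toll
inter_region_rate_per_gb = 0.02   # example charge for traffic between regions

monthly_ingress_gb = 500          # backups and uploads into the cloud
monthly_egress_gb = 1200          # restores, downloads, and replication out
monthly_inter_region_gb = 300     # movement between the provider's regions

total = (monthly_ingress_gb * ingress_rate_per_gb
         + monthly_egress_gb * egress_rate_per_gb
         + monthly_inter_region_gb * inter_region_rate_per_gb)

print(f"Estimated monthly data movement cost: ${total:,.2f}")

With a provider that applies no egress or inter-region charges, the same movement costs nothing, which is what makes the monthly bill predictable.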

Does the cloud provider offer robust support?

Downtime can come from one of many situations. Your ISP could experience an outage, and you may need to fail over to your secondary provider. Or you may encounter an email phishing scam resulting in a local malware attack. Or you may experience an outage due to a regional power grid issue. In these circumstances, you may find yourself needing to contact your cloud provider in a hurry.

As such, you’ll want a provider that offers robust pre-sales and post-sales support that is available 24/7/365. Many providers offer high-level support only if you subscribe to an additional support plan, which is an additional monthly cost. Wait times are also an issue — you may have a support plan, but the support may be slow and cumbersome. Look for a cloud provider that will guarantee an engineer in less than 60 seconds, 24/7/365.

Are you ready for a tour of one of the best data centers in the world? Meet with the EstesCloud team to get the right cloud strategy for your business.

What is CMMC: Cybersecurity Maturity Model Certification?

CMMC: The Looming Cyber-Security Certification that Affects 60,000+ Companies

 

In 2019, the U.S. Department of Defense (DoD) announced a new security protocol program for contractors called Cybersecurity Maturity Model Certification (CMMC). CMMC is a DoD certification process that lays out a contractor's security requirements, and it is estimated that 60,000 to 70,000 companies will need to become CMMC compliant in the next 1-3 years.

 

CMMC essentially combines and extends existing regulations in 48 Code of Federal Regulations (CFR) 52.204-21 and the Defense Federal Acquisition Regulation Supplement (DFARS) 252.204-7012, and includes practices from National Institute of Standards and Technology (NIST) 800-171, the United Kingdom's Cyber Essentials, and Australia's Essential Eight requirements. International Traffic in Arms Regulations (ITAR) will remain a separate certification from CMMC – though companies that are ITAR compliant will need to adhere to CMMC as well.

 

CMMC Version 1.0 was released in late January 2020. To view the latest CMMC document, visit the CMMC DoD site.

 

CMMC Notables 

  • There are 5 levels of the security maturity process (basic is 1 and most stringent is 5). 
  • Any company that directly (or even some that indirectly) does business with the DoD will adhere to CMMC – and that means direct DoD contractors and high-level CMMC companies' supply chains must also adhere to, at minimum, base-level requirements. 
  • There is no self-assessment (unlike NIST), and companies need to get certified through a qualified auditing firm. 
  • DoD will publish all contractors' certification level requirements. 

Is My Business Affected by CMMC? 

 

This is easily answered with a two-part question: 1) Is your business a direct contractor to the DoD, or 2) does your business do business with a company that is a contractor to the DoD*? If you answered "yes" to question one, then your business will need to be CMMC compliant. If you answered "yes" to question two, then it is very probable that your company will need to be CMMC compliant.

What are the CMMC Levels? 

  • Level 1 – “Basic Cyber Hygiene”  
    • Antivirus 
    • Meet safeguard requirements of 48 CFR 52.204-21 
    • Companies might be required to provide Federal Contract Information (FCI) 
  • Level 2 – “Intermediate Cyber Hygiene” 
    • Risk Management 
    • Cybersecurity Continuity plan 
    • User awareness and training 
    • Standard Operating Procedures (SOP) documented 
    • Back-Up / Disaster Recovery (BDR) 
  • Level 3 – “Good Cyber Hygiene”
    • Systems Multi-factor Authentication 
    • Security Compliance with all NIST SP 800-171 Rev 1 Requirements 
    • Security to defend against Advanced Persistent Threats (APTs) 
    • Share incident reports if company subject to DFARS 252.204-7012 
  • Level 4 – “Proactive” 
    • Network Segmentation 
    • Detonation Chambers 
    • Mobile device inclusion 
    • Use of DLP Technologies 
    • Adapt security as needed to address changing tactics, techniques, and procedures (TTPs) in use by APTs 
    • Review & document effectiveness and report to high-level management 
    • Supply Chain Risk Consideration* 
  • Level 5 – “Advanced / Progressive” 
    • 24/7 Security Operations Center (SOC) Operation 
    • Device authentication 
    • Cyber maneuver operations 
    • Organization-wide standardized implementation of security protocols 
    • Real-time assets tracking 

One important thing to note about CMMC is that, unlike NIST and other current certifications, CMMC will require certification by an authorized third-party CMMC certification company. Currently, most companies can self-certify for DoD-related security requirements. EstesGroup is not a CMMC certification company, but we can help companies prepare and boost security to meet the new requirements.

For more specifics on CMMC, access the DoD's latest CMMC revision.

 

Learn more about CMMC with 5 Ways EstesGroup Helps with Your CMMC Compliance

 

Do you have questions about CMMC or about how EstesGroup can help your company with CMMC or other cybersecurity, compliance or data issues? Contact us or chat with us today.

12 Days of ECHO, Twelfth Day: My Admin Gave to Me, Ransomware 2020 the Good, Bad, and Ugly

Ransomware: The Hits Keep Coming Going into 2020

 

By now, we’ve all heard about someone affected by ransomware. If it wasn’t a friend’s business, or a company you do business with, or the town you live in, or the hospital you visit – all you have to do is look at the news to see major enterprises being attacked and ‘taken out’ by this nefarious deed.  As long as people pay, the bad guys will keep using it as a tool. After all, they’re just chasing the money. 

 

So why do I title this "the good, the bad and the ugly"?  Well, if you've been hit, you know the bad part.  It's expensive in both dollars and perception.  What good can come of ransomware? And besides the rising ransom amount, why is it about to get uglier? 

 

First, the good part.

 

It raises awareness that the bad guys are afoot. Wherever there’s profit, fame, political gain and more, there will be someone to play the villain (or hire them) to get the goods. Technology just made it easier. So, the good news is that you know about it!  

 

Second, the bad part.

 

Knowledge without action is a travesty. It would be even better if you acted on that knowledge and improved your defenses. Backups and disaster recovery plans are hopefully in place, but don't assume YOUR backups and DR plans are solid.  Test them occasionally to find the problem before you need a restore. I can't tell you how many businesses think their backups are solid, only to find out otherwise after an attack. 

 

Internet access should be a privilege, not a right. Virtually nobody should have unfettered access to any website they want. Users should get internet access based on their role in the company, not because they have a computer and a browser. ALL emails and internet access should be filtered, blocked, logged and if needed, analyzed. You need to be current on patches, antivirus, spam filtering, blah blah blah.  Sorry if I lost you there, but we’ve been beating that drum for years.  In fact, you might want to take away the internet from your users – let users surf only on their phones, on the guest wifi and NOT the corporate wifi.  Perhaps provide an internet kiosk that’s separate from the corporate network. 

 

Lastly, the ugly part.

 

The *really* ugly. Once you get ransomed, you can no longer assume that it'll just lock your files up. That data of yours (oh, customer files, payroll info, vendor lists, etc.) could just as easily have been copied to the attackers and then encrypted. So now, you don't have your customer spreadsheet, but the bad guys do!  Imagine the horror when they go to all your clients to tell them you've been hacked and they have all this data about YOUR customers! If you are under HIPAA, you might as well close up shop; the HIPAA fines alone will knock a small practice down and out. What customer will do business with a company that not only leaked their information, but had that same confidential information POSTED on Facebook? The depravity and damage can only be imagined at this time. 

 

So, if you got ransomed, and all you lost was a few (thousand) bucks, consider yourself lucky. It’s about to get a whole lot uglier.  The cities of Atlanta, Pensacola, and Baltimore will agree! 

 

Happy New Year to all and may 2020 be brighter, smarter and safer. 

If you liked reading the “Twelfth Day of ECHO” return to our main list to read all of the other “12 Days of ECHO” posts.

 

Do you have questions or need assistance with your ERP system or data security?  Please feel free to Contact Us and see if we can help get your bits and bytes in order.

12 Days of ECHO, Eleventh Day: My Admin Gave to Me, notes on Online Transaction Processing vs. Decision Support!

Enterprise Resource Planning (ERP): Online Transaction Processing vs. Decision Support

 

So, you've got your ERP system up and running, and before long, the management team wants reports, dashboards, and executive data out of the system. That makes perfect business sense, and most ERP systems (including Epicor) have a slew of built-in reports as well as a report designer – Epicor E10 uses SSRS, Microsoft's flagship product for writing reports. 

 

However, there's a potential problem. The activity of entering data, called "Online Transaction Processing" or OLTP, is fundamentally different from the activity of reporting and summarizing that data, called "Decision Support", or DS for short. Before we go further, let me also explain database locking. A lock is a basic database 'tool' that prevents other users from changing a piece of data that you are using. There are many types of locks, but for this discussion, a row (record) lock prevents others from editing that specific record – let's say an invoice.  A table lock prevents anyone from editing anything in that whole table. It is our sincere desire to keep all locks as short as possible, for the longer a lock is held, the more likely it is that someone else will want that locked data. 

 

Online Transaction Processing (OLTP) locks individual records to allow parts to be sold, inventory to be adjusted, and invoices entered. Decision Support (DS) locks whole database tables to run a report. When management wants to see an invoice report, nobody can be entering a new invoice while the report is being generated! While most locks are handled automatically, they cause delays and, in the rare case of a deadlock, data loss. 

 

I'm oversimplifying the issue, but the long and short of it is that Online Transaction Processing (OLTP) and Decision Support (DS) fight each other all day long.  In fact, locking contention is one of the main causes of database performance issues! There are several solutions, but a common one is to simply time the DS to occur after OLTP – that is, after the business closes. Many companies run their reports at night, not only because the system is more available, but because all those pesky users aren't entering data, locking records and causing issues! 

 

A more complex, but also common, solution is to copy the Online Transaction Processing (OLTP) database to an independent Decision Support (DS) database on a regular basis.  OLTP users get an optimized database for their activities, and the DS users can run reports all day long without locking the OLTP users out.  It's an ideal solution for a busy database, but it does have its downsides. You'll need twice the disk space and a method to move the data from OLTP to DS.  Our clients use backup & restore, SQL replication, mirroring and all kinds of technology to duplicate the database and prevent the dreaded locking contention. 
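As a sketch of what that segmentation can look like from the application side, the snippet below routes long-running report queries to a read-only copy of the database while short transactional writes stay on the primary. The server, database, and table names are hypothetical, and it assumes a SQL Server secondary that accepts read-intent connections (for example, an Availability Group readable secondary) plus the pyodbc driver.

import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

# Hypothetical connection strings -- substitute your own servers and credentials.
OLTP_CONN = ("DRIVER={ODBC Driver 17 for SQL Server};"
             "SERVER=erp-sql-primary;DATABASE=EpicorERP;Trusted_Connection=yes;")
DS_CONN = ("DRIVER={ODBC Driver 17 for SQL Server};"
           "SERVER=erp-sql-listener;DATABASE=EpicorERP;Trusted_Connection=yes;"
           "ApplicationIntent=ReadOnly;")  # steer the session to the readable secondary

def run_report(sql):
    """Long, table-scanning Decision Support queries go to the DS copy."""
    conn = pyodbc.connect(DS_CONN)
    try:
        return conn.cursor().execute(sql).fetchall()
    finally:
        conn.close()

def post_transaction(sql, params):
    """Short, record-level OLTP writes stay on the primary."""
    conn = pyodbc.connect(OLTP_CONN)
    try:
        conn.cursor().execute(sql, params)
        conn.commit()
    finally:
        conn.close()

# Hypothetical report query: it never takes locks on the OLTP database,
# so invoice entry keeps moving while the report runs.
# rows = run_report("SELECT CustNum, SUM(InvoiceAmt) FROM InvcHead GROUP BY CustNum")

The same routing idea applies however the copy is maintained — backup and restore, replication, or mirroring — as long as the report traffic lands on the DS side.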

 

Need help? Let us know and we’ll help you get your Online Transaction Processing and Decision Support properly segmented for best performance. 

If you liked reading the “Eleventh Day of ECHO” return to our main list to read all of the other “12 Days of ECHO” posts.

 

Do you have questions or need assistance with your Epicor system?  Please feel free to Contact Us and see if we can help get your bits and bytes in order.

12 Days of ECHO, Tenth Day: My Admin Gave to Me, Epicor Performance and Diagnostic Tool Checks!

The Performance and Diagnostics Tool

 

Epicor ERP 10 provides the Performance and Diagnostics Tool as part of your Epicor Administration Console.  While the tool is often installed when setting up an E10 solution, it's often forgotten about afterwards.  The full tool has lots of capabilities, but I'd like to highlight the "Config Check". 

 

When first run, you have to go to Options > Settings and define which E10 application you are going to check, along with the Epicor username/password used to access the data. 

 

Then, click the "Check Configuration" button and wait a few moments. The tool will look at several parameters and report which settings are Pass, Warning or Fail.  Depending on your environment, you'll want to qualify those warnings or failures, as they might not be as disastrous as they seem. 

 

Here's the output from one of our hosted Epicor servers running 10.2.500.  It looks like there might be something wrong with the SQL setup. 

Using the Config Check details shows the underlying issues; in this case, there are several!  Some of the red lines are problems (like I might not have enough space in the SQL MDF file), while others are not (SIMPLE recovery mode is not a problem for this application).

In any case, before I start tuning, tweaking and fixing, I always export the results to Excel so I have a record of what it looked like today.  After I fix these items, I'll re-run and re-export the check to show my client that the appropriate items were fixed.  Of course, if items are flagged but not fixed, I'll include an explanation of why.  For example, SIMPLE recovery mode on a SQL database means I don't have to worry about transaction log growth.  (See our prior post "SQL Transaction Log Maintenance".)
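If you keep those before-and-after exports, a quick script can show the client exactly which checks changed. Here's a minimal sketch using pandas; the file names and the "Check"/"Status" column layout are assumptions, so adjust them to match what the tool actually exports.

import pandas as pd  # pip install pandas openpyxl

# Hypothetical file names and column layout.
before = pd.read_excel("config_check_before.xlsx")
after = pd.read_excel("config_check_after.xlsx")

# Line up each check and keep only the rows whose status changed.
merged = before.merge(after, on="Check", suffixes=("_before", "_after"))
changed = merged[merged["Status_before"] != merged["Status_after"]]

print(changed[["Check", "Status_before", "Status_after"]].to_string(index=False))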

 

If you need details on how to correct each issue, you can drill into the ExternalLink provided. Warning – many corrections will require downtime, so while you can run the tool anytime, correcting things will likely need to happen during a maintenance window.

 

I recommend running this tool on a quarterly or annual basis just to help keep your Epicor E10 system running smoothly.

If you liked reading the “Tenth Day of ECHO” return to our main list to read all of the other “12 Days of ECHO” posts.

 

Do you have questions or need assistance with your Epicor system?  Please feel free to Contact Us and see if we can help get your bits and bytes in order.

12 Days of ECHO, Ninth Day: My Admin Gave to Me, Fixes so SSRS Won't Hog Epicor CPU!

SQL and the Reporting Engine

 

Epicor ERP 10 requires two primary SQL functions, the SQL Engine and the Reporting Engine, also known as SSRS – SQL Server Reporting Services.  Each requires a SQL license, but if you run both on the SAME operating system instance, you can pay for only one set of licenses. If you run the SQL Engine on one OS and SSRS on another, you must purchase TWO sets of licenses – an expensive choice.  Therefore, most clients choose to run SQL and SSRS on the same OS. 

 

This co-existence can be a problem when one or the other gets resource hungry and crowds the other out.  For example, SQL (the engine) will use ALL available RAM and starve any other application (sometimes even the Windows OS itself!).  One of the tuning options we set is to limit SQL RAM to approximately 80% of the total server RAM.  If SSRS is running on the same OS, then we also need to leave some room for it to do its job. 
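As an illustration of that 80% rule of thumb, here's a minimal sketch that computes a memory cap from the host's physical RAM and prints the corresponding T-SQL. The 80% ratio and the SSRS headroom are judgment calls, not fixed rules; 'max server memory (MB)' is the standard SQL Server sp_configure setting.

import psutil  # pip install psutil

SSRS_HEADROOM_MB = 4096  # example reservation if SSRS shares this OS; tune to your workload

total_mb = psutil.virtual_memory().total // (1024 * 1024)
sql_max_mb = int(total_mb * 0.80) - SSRS_HEADROOM_MB  # ~80% of RAM, minus room for SSRS

print(f"Total RAM: {total_mb} MB -> proposed 'max server memory': {sql_max_mb} MB")
print("-- T-SQL to apply the cap:")
print("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
print(f"EXEC sp_configure 'max server memory (MB)', {sql_max_mb}; RECONFIGURE;")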

 

Another resource grab is for the CPU. If a poorly written SSRS report is let loose, it can take too much time and CPU, starving the SQL Engine for CPU cycles and effectively driving CPU utilization to 100%. Before you know it, SQL is s-l-o-w, and Epicor responds in kind.  Users call up saying Epicor screens are frozen and reports are queueing up and not running; in general, the business comes to a full stop. 

 

Looking at Task Manager / Details and seeing the SSRS service "ReportingServicesService.exe" at 100% is a dead giveaway. The quick fix is to restart that service.  Resources are released, and the SQL engine gets the horsepower it needs to keep running.  Unfortunately, any currently running reports fail. 

 

Microsoft used to include a function called “Resource Manager”, but they phased that out a few versions ago.  

 

The best solution is a multi-pronged attack. Take your pick: 

  • Split SSRS off into its own server (don’t forget to buy the license!) 
  • Monitor CPU utilization on your SQL servers and alert someone if they get pegged at 100% 
  • Create a Performance Monitor Alert and Task to restart SSRS if it goes to 100% (a minimal watchdog sketch follows this list) 
  • Ensure any problem SSRS reports are debugged on a test server where they won’t impact production performance. 
  • Use a 3rd party tool called “Process Lasso Server Edition” from www.bitsum.com to force SSRS to behave. 
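For the monitoring and restart options above, here's a minimal scripted sketch of the same idea: watch the SSRS process and bounce the service when it stays pegged. The process and service names are assumptions (they vary by SSRS version; check Task Manager and services.msc on your server), and remember that restarting the service kills any in-flight reports.

import subprocess
import time

import psutil  # pip install psutil

PROCESS_NAME = "ReportingServicesService.exe"  # as seen in Task Manager (varies by SSRS version)
SERVICE_NAME = "ReportServer"                  # assumption: confirm the real service name in services.msc
CPU_THRESHOLD = 90.0                           # percent of total machine CPU
SUSTAINED_SAMPLES = 6                          # consecutive hot samples before acting
SAMPLE_SECONDS = 10

def ssrs_cpu_percent():
    """Return SSRS CPU usage as a share of the whole machine, or 0 if it isn't running."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == PROCESS_NAME:
            return proc.cpu_percent(interval=SAMPLE_SECONDS) / psutil.cpu_count()
    return 0.0

hot_samples = 0
while True:
    hot_samples = hot_samples + 1 if ssrs_cpu_percent() >= CPU_THRESHOLD else 0
    if hot_samples >= SUSTAINED_SAMPLES:
        # Bounce the service (requires admin rights); currently running reports will fail.
        subprocess.run(["net", "stop", SERVICE_NAME], check=False)
        subprocess.run(["net", "start", SERVICE_NAME], check=False)
        hot_samples = 0
    time.sleep(SAMPLE_SECONDS)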

 

In our EstesGroup Cloud Hosting ECHO hosting model, we use a combination of these solutions to ensure your Epicor ERP 10 system stays responsive. 

If you liked reading the “Ninth Day of ECHO” return to our main list to read all of the other “12 Days of ECHO” posts.

 

Do you have questions or need assistance with your Epicor system?  Please feel free to Contact Us and see if we can help get your bits and bytes in order.