IT Security Gone “WFH” – Now What?



Recent “Work From Home” (WFH) mandates have quickly pushed manufacturing and distribution employees out of the familiarity of their work offices and into a new realm of IT security needs.  Current statistics suggest that 70% of the workforce that can work from home is doing so, and that more than 40% will STAY at home after this crisis is over.  With this transition, IT security principles become part of a critical conversation, especially for companies with remote workers supporting on-site manufacturing or distribution activities.


What is your WFH IT security policy?


Many distributed businesses have responded to the telecommute directive without many changes, especially those companies with data residing in the cloud.  These companies have already established work-at-home policies and invested in the remote access/remote desktop technology to enable telecommuting with IT security in place.  Folks who invested fully in the Office 365 space are feeling little pain, but businesses with legacy on-premise servers, workstations and printers are probably still scrambling.


Don’t be fooled—the hackers have followed you home!  The increase in suspicious emails, bad websites, and malicious advertisements has skyrocketed, and the cybercrime community is just waiting for your users to click on something to ransom your hard-earned data away.


Without a written and agreed-upon IT security policy, you are at the mercy of your users’ good intentions.  Imagine a home PC with a saved password left on the VPN all day while the kids are stuck at home from school.  The amount of data that could be lost or compromised is staggering!  At a minimum, make sure you have a document that instructs your WFH users to lock the keyboard when they step away (or implement a screen saver with a password).  Ensure your users don’t download documents to their local hard drives or USB drives.  The list goes on, but the human element is the riskiest of all!


If a home user gets infected on the VPN, their malware is the company’s malware!  Let me write that again:  If a home user gets infected on the VPN, their malware is the company’s malware.


How do you connect securely to your enterprise data?


Many businesses have NOT invested in expensive VPN or Remote Desktop solutions, and now it might seem either too late or too expensive.  You need a low-cost, secure, and easy-to-deploy strategy to connect your home users with their corporate data:  desktops, servers, and printers at the office.  Many options exist, but without a budget and a vision, you’ll get lost in the storm.



Keeping your home PC safe!


Home computers are more vulnerable than corporate PCs.  Home PCs tend to fall behind on patches and updates.  Moreover, the computer might get repurposed for things like the kids’ Xbox.  Home firewalls never measure up to those provided by your IT department.  Most have no web filtering to speak of, and bad websites abound!  You’ll need that enterprise class security in a mobile-friendly package.





Another blog could certainly be written about home offices, with a good webcam and a quiet space, but that’s for another page.  People are people, and the distractions of working from home are numerous and easy to fall prey to.  We recommend easy-to-deploy software to ensure that your users arrive at their home office on time and ready to work (even if it’s in their PJ’s), ensuring that they are productive and not on YouTube or completing the latest Amazon order.




Looking to provide IT security for your remote workers?  Deploy the EstesCloud PC Security Stack on your home users’ PCs and rest easily, knowing that your WFH users are protected and productive!


Private Cloud Owners Regress with Egress Expense


Private cloud deployment is changing the way manufacturing and distribution companies install applications and store information.  While this is an exciting move for any business, the step from on-premise to cloud infrastructure can come with unexpected costs.  Many companies expect, and easily budget for, typical costs associated with the move to private cloud, but hidden expenses often blur into the fine print of the original pricing model.  Thus, it’s important for a manufacturing or distribution business to budget wisely when moving from on-premise to private cloud infrastructure.


Cloud costs vary according to several different factors, and data comes into play at all levels.  A company is its historical data applied to its future, or potential, data.  Private cloud protects the data of a business while also utilizing it in real-time, and this cloud data normally exists in one of three states:


  • Data moving in.  This is data as it moves into the storage location or as it is being uploaded.  This process is also known as data ingress.
  • Data moving out.  This is data as it moves out of the storage location or as it is being downloaded.  This is sometimes referred to as data egress.
  • Data “at rest.”  This is data residing statically in the storage location, not in transit on the network.



Data In, Data Out


Not surprisingly, costs are tailored around these types of data.  Storage budgets are related to the costs of data that is physically being held at a location.  Normally, the storage of “at rest” data receives the most attention, as cloud providers offer various pricing structures based on how much data is stored, where the data is located, how often it needs a backup, how often it tends to be accessed, and how quickly it needs to be retrieved.


Many cloud providers do not charge customers for data upload or ingress, and the reasoning is obvious:  the more data you upload, the more you get charged for “data at rest.”  But one of the most significant hidden costs of the cloud relates to data egress charges—the charges levied by your cloud provider for accessing your own data.


Think of your old phone bill before the cell phone revolution—each call outside the local area was billable, and the costs varied according to the duration of the call and the location to which the call was made.  Egress charges work similarly and are based primarily on the amount of data transferred.  Over time, this becomes a matter of dialing for dollars.  Should the data transfer increase, the charges will follow.
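To make the phone-bill analogy concrete, here is a rough Python sketch of how a usage-based egress bill accrues.  The per-GB rate and free-tier allowance below are invented for illustration only; real cloud providers publish their own egress pricing tiers.

```python
# Hypothetical illustration: estimating a monthly egress bill.
# The rate and free-tier values are invented for this example --
# they are not any particular provider's published pricing.

def estimate_egress_cost(gb_transferred, rate_per_gb=0.09, free_tier_gb=1):
    """Estimate an egress charge: a small free allowance,
    then a flat per-GB charge on everything above it."""
    billable = max(0.0, gb_transferred - free_tier_gb)
    return round(billable * rate_per_gb, 2)

# Like a long-distance phone bill, the charge scales with usage:
print(estimate_egress_cost(1))      # within the free tier -> 0.0
print(estimate_egress_cost(501))    # 500 billable GB -> 45.0
print(estimate_egress_cost(5001))   # 5,000 billable GB -> 450.0
```

The point of the sketch is the shape of the curve, not the numbers: should the data transfer increase, the charges follow in direct proportion.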


At its worst, this could become a situation of data rationing, where users are instructed to minimize their pulls from the data source, to minimize costs.  This is akin to a mother in the 1980s locking up her new push button phone, out of fear that her toddler, enamored with the button tones, might mistakenly dial Hawaii.


Data rationing is hardly the outcome that one would expect from a move to the cloud, yet egress pricing models put companies in a precarious position.  This poses a challenge for companies new to the cloud.  Customers accustomed to comprehensive local area networks do not always realize the amount of data that leaves one area of the network to be consumed by another, and thus may be unaware of their ultimate egress requirements.  Also, companies may have difficulty in predicting spikes in usage.  Without understanding when data use may increase, manufacturing and distribution companies will have trouble predicting expenses.



Data Grows on Trees


Companies using applications that operate in a client-server manner may be similarly challenged when they choose to host their server in the cloud.  The data requirements of private cloud can be as surprising as they are significant.  A client-server application like Epicor ERP, for instance, is a rather chatty application, as it frequently performs “get” calls to refresh data, in relation to other transactions.  In such a case, each “get” would entail a “give” in the form of cold hard cash.  For companies utilizing manufacturing execution systems in which users are routinely downloading work instructions and product schematics, in support of manufacturing operations, the costs would further compound.


The complexity involved in manufacturing and distribution requires the innovation of private cloud technology.  To transition from on-premise architecture, Epicor ERP customers looking to host their application in a private cloud need predictable costs and reliable budgets—a pricing model that does not involve surprise charges linked to the amount of data traveling into or out of the cloud hosting environment.  Egress can cause a budgetary mess, but you have the option to choose a pricing model that doesn’t watch your every download move.  Your company can have the reliability and innovation of private cloud without any of the hidden data egress costs that currently abound in the fine print of the cloud market.






Looking for help moving your business to the cloud?  Check out our private cloud environment:  EstesCloud Managed Hosting (ECHO).  We don’t have ingress or egress charges—your data is your data, and you are entitled to it!  

Epicor Multi-Site Migration:  A Moving Guide


Out of Epicor Multi-Site, Out of Mind


An Epicor multi-site implementation, whether moving from a single site to multiple sites, or vice versa, is a common phenomenon in the manufacturing and distribution world.  Companies and sites in Epicor are rarely a “set it and forget it” proposition.  As companies evolve, the need to further break out or combine business units can arise.  As your business grows, you might wonder if your satellite facility should be considered part of your main site or if it should be a site of its own.  Or, in a similar Epicor multi-site configuration puzzle, you might wonder how to map out the consolidation of a two-company environment into a single-company installation.


Assume, for example, that a company has four sites (A, B, C, D) split across two companies (1 and 2) in the following manner:

  • Company 1:  Sites A, B, C
  • Company 2:  Site D

And the company desires to combine operations under a single operating unit, such that the lone site under company 2 would be absorbed under the operations of company 1:

  • Company 1:  Site A, B, C, D

Such a change may seem like simply a matter of converting existing data—of pointing all records from the old company and site to the new company and sites.  Alas, systems are often less than accommodating in allowing for historical transactions to be modified in this manner.


Epicor Multi-Site Implementation Recommended


Epicor offers no toolset to “move” a site across companies.  Nothing currently exists that can change the company context of all records in a single instance, pointing them to another company.  Historical transactions don’t like having their keys modified; such revisionist history is fraught with potential challenges and thus not recommended.


The recommended best practice in such an Epicor multi-site implementation scenario is to reimplement site 2-D in Company 1.  This would involve the setup and configuration of the site and the transfer of setup, master file, and open transaction data into the new site, and the continuance of business operations from within this new site.  This would not include the loading of historical transactions—it would essentially be an act of starting fresh in the new company, as if the site were converted from a legacy system and not from Epicor.


To achieve Epicor site consolidation, as described above, a four-stage process can create the desired infrastructure:

  • Plan:  The planning stage finalizes scope and organization through assessment, planning, resource organization, scheduling, etc.  The resolution of conflicts in data, customization, or configuration will occur as part of the testing phase.  Scripts will be developed for the movement of data and customizations.
  • Architect:  The architect stage involves the setup and testing of an environment that would represent the future state deployment, and the elicitation and resolution of any gaps, issues, or conflicts that would come from the planned deployment.  Key activities are the development of the test environment, testing of new site deployment, and gap/issue resolution.
  • Validate:  The validate stage is used to verify that we can successfully execute a cutover and run the business as intended, as if we were in a live environment.
  • Stabilize:  This is the final, successful deployment of a new site at cutover.


To address the scenario described above, the following environment map would be recommended:

  • Test Environment:  The initial environment used to model the construction of the new site.
  • Pilot Environment:  The environment in which conference room pilot (CRP) validation occurs.
  • Live Environment:  The existing Live environment, which will be updated at the cutover of the new site.


In a traditional Epicor implementation, we go through a five-stage process:  Plan, Educate, Architect, Validate, Stabilize.  This conventional waterfall approach to implementing an ERP application does not adequately address a scenario where training is not required, and prototyping is centered less on the use of the system than it is on properly implementing and integrating an existing Epicor process into a different Epicor company.  As such, the methodology is somewhat simplified, considering the team is already well-versed in the application.


The environmental build, relative to the four-stage model described above, would be as follows:



Initial Live Environment Setup:

  • Extract Setup information from 2-D.
  • Create New Site D in company 1.
  • Update Site configuration in 1-D.
  • Update Company configuration in company 1, as needed.

Test Environment Setup:

  • Build Test environment from copy of production.
  • Extract master file records from Company 2 (Live) and load into Company 1 (Test), reconciling conflicts as needed.
  • Extract open transaction records from Company 2 (Live) and load into Company 1 (Test), reconciling conflicts as needed.
  • Extract open balances and open AR & AP from Company 2 (Live) and load into Company 1 (Test).
  • Perform user testing to verify functionality in Site 1-D and the effect of adding 1-D on sites 1-A, 1-B, and 1-C.


Pilot Environment:

  • Build Pilot environment from copy of production.
  • Extract master file records from Company 2 (Live) and load into Company 1 (Pilot).
  • Extract open transaction records from Company 2 (Live) and load into Company 1 (Pilot).
  • Extract open balances and open AR & AP from Company 2 (Live) and load into Company 1 (Pilot).
  • Perform conference room pilot (CRP) event to verify functionality in Site 1-D and the effect of adding 1-D on sites 1-A, 1-B, and 1-C.


Live Environment-Cutover:

  • Extract master file records from Company 2 (Live) and load into Company 1 (Live).
  • Extract open transaction records from Company 2 (Live) and load into Company 1 (Live).
  • Extract open balances and open AR & AP from Company 2 (Live) and load into Company 1 (Live).
  • Perform any ancillary cutover activities as required.
  • Resume operation in new environment.




As with any project, there are a number of considerations that should be clarified at the onset of an Epicor multi-site migration.  The following assumptions have been made:

  • There are always minor data transformations needed when loading data into a new Company and Site, such as the new Company and Site IDs.  You should give consideration to any transformations above and beyond these basic adjustments:
    • Are there data transformations that will need to occur beyond the necessary changes to allow the data to function correctly and without conflict in the new company?
    • Are there changes to setup tables that will need to be mapped when loading master files and open transactions?
  • There is a necessity to identify who will be building test case scripts and who will be performing the necessary testing and validation—and this validation is twofold:
    • Validate the functionality of the new site within the existing company.
    • Confirm that the addition of the new site does not introduce any processing issues with any of the existing sites.
  • It is critical to identify a process for resolving conflicts—the addition of a new site may introduce conflicts with the existing Epicor implementation, and a resolution strategy should be devised at the project’s onset.
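The minimal transformation noted in the first assumption above, rekeying extracted records to the new Company and Site IDs, can be sketched as follows.  This is an illustrative sketch only, not an Epicor utility; the record structure and field names are hypothetical stand-ins, not actual Epicor table columns.

```python
# Illustrative sketch of the basic rekeying step: repoint records
# extracted from the source company to the target company and site.
# The "Company"/"Site" keys are hypothetical, not real Epicor columns.

def rekey_records(records, old_company, new_company, site_map):
    """Return copies of extracted records with Company and Site IDs
    remapped for loading into the target company."""
    rekeyed = []
    for rec in records:
        if rec["Company"] != old_company:
            continue  # only records from the source company move
        new_rec = dict(rec)
        new_rec["Company"] = new_company
        new_rec["Site"] = site_map.get(rec["Site"], rec["Site"])
        rekeyed.append(new_rec)
    return rekeyed

# Example: move a Site D part record from Company 2 into Company 1.
parts = [{"Company": "2", "Site": "D", "PartNum": "100-A"}]
print(rekey_records(parts, old_company="2", new_company="1", site_map={"D": "D"}))
```

In practice, any transformations beyond this basic rekeying (mapped setup tables, conflicting IDs) would layer on top of a step like this, which is exactly why they should be scoped at the project’s onset.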


Conflicts and Conflict Resolution


Potential conflicts may exist between Company 1 and Company 2, and these may surface as a function of the “move” of Site 2-D into Company 1.  These may include the following:

  • Company configurations in Company 1 may not be in agreement with those of Company 2.
  • Security and menu settings in Company 1 may not be in agreement with those of Company 2.
  • Conflicts between customizations and/or reports may exist between Companies 1 and 2.
  • System Agent configuration conflicts may exist between Companies 1 and 2.
  • Setup Table conflicts may exist between Companies 1 and 2.
  • GL structure conflicts may exist between Companies 1 and 2.
  • Master File conflicts may exist between Companies 1 and 2.


Historical Data Access and Epicor Multi-Site Design


With any such migration, questions exist as to how historical data will be accessed.  As noted above, our recommended approach does not include the loading of historical transactional data.  In the current situation, the historical data from site D in Company 2 will continue to reside in the database, but the ability to modify the data will be removed at cutover.  Read-only access and report access could still be permitted, to provide access to historical data.  Additionally, the utilization of cross-company BAQs could be applied for reporting from Company 1 and consolidating datasets between the legacy site 2-D and the new site 1-D.


As companies expand, it’s common for new physical facilities to be spun up and incorporated into the existing Epicor application in some form or fashion.  In other cases, companies manage parallel business within the same space, and may decide they need to better break these out, as to keep their operating interests independent of one another.  In all possible scenarios, Epicor multi-site implementation can be tricky, demanding a thorough migration plan.



Looking to tweak your company/site configuration as part of an Epicor upgrade?  Learn more about multi-company and multi-site upgrades here.

Epicor ERP Upgrade Considerations for Data Dailies


The further away you get from the “new release” of a software product, the more challenging an upgrade becomes, and this is especially true for Epicor ERP upgrade customers.  If you’re coming from a Progress 4GL business logic-based version, which would include anything earlier than version 10, such as Epicor’s 905 and 803 platforms, you might feel a little lost in all your options.  The need to move beyond Epicor’s legacy platform is obvious—the analog film of the old version is deteriorating on the reels.  But the move to Epicor’s E10 platform is more of an epic picture than it is an opening trailer.  Thus, it is important to storyboard the flow of the narrative from the old to the new, before the cameras roll, knowing that your plot armor will only take you through the first act of your Epicor ERP upgrade.


Two Epicor ERP Upgrade Paths


The hero’s journey in any good film starts with a call to action, with a road diverged in a wood where the protagonist must choose one of two paths.  In general, you can consider any move from a legacy version of a software to an updated version to be an upgrade.  But there are two distinct paths for moving from your legacy version to E10, and these greatly affect the nature of the final implementation.


Path 1.  A straight, utility-driven upgrade from the legacy to the current version:  In a straight upgrade, a utility converts the data from the legacy version, generating an E10 database that adheres to the structure of version 10 schema.


Path 2.  A reimplementation:  This is an alternative method for moving to E10 from the legacy version, but this method does not actually upgrade the legacy data to make the E10 database.  Rather, the upgrading company reimplements the application in E10, normally using the legacy data—filtered, scrubbed and reworked—as the starting point.


Data as the Villain… or the Hero?


Regardless of your opening scene, one of the most important considerations in any Epicor ERP upgrade is what to do with your data.  Companies can have one of many data challenges as they anticipate an Epicor upgrade.  It is not uncommon for a company’s current Epicor implementation, along with all its legacy data, to be handed down from a previous administration.  A lot of staff changes over ten years, for example, can leave a company with a system that has a setup and configuration at odds with their current orientation.  In such cases, a given company may wish to have another shot at configuring the application—to undo some of the decisions of the past.


Some companies unfortunately inherit a legacy system with data that has been carelessly maintained and is thus dirty beyond recognition.  In these cases, customers may look to either clean up the data or cut it entirely.  In other cases, significant changes to the business may have led to an inordinate number of part, customer, or supplier records that are obsolete.  While these might have historical value, they may also unduly clutter the database and place the company in a no-win situation.  In extreme cases, the company may even have a company and site structure that has evolved to be radically different from what is depicted in the Epicor application.


The data of an ERP system can be broken up into a number of classes:

  • Setup data:  This refers to the foundational data that underlies the setup of subsequent master file records.  Examples of setup data might include part classes, product groups, buyers, sales persons, etc.
  • Master file data:  This normally refers to the three core master files in any ERP system—the part master, the supplier master, and the customer master.  Subsequent master files may also need to be set up that relate to the extension or relation of these master file records, such as supplier or customer part price lists.
  • Live transactional data:  This refers to the open transactions in your system, such as purchase orders, sales orders, jobs, and invoices.  The decision as to whether to reimplement or upgrade greatly affects how these are to be handled, given that in an Epicor ERP upgrade they come along for the ride, whereas in a reimplementation they would need to be loaded in the style of a new install.
  • Historical data:  Historical data refers to all of the transactions that have been processed in the past and are no longer active, and would include purchase orders, sales orders, jobs, and invoices that have long since been closed.

From Classic to New Release


In a straight Epicor upgrade, the data from the legacy version is updated and fine-tuned to fit into the version 10 schema.  In such a situation, limited changes can be made to setup and master file data—you get what you had in the legacy version, only now it’s in version 10.  Tweaks can be made—new setup files can be defined and new parts, customers, and suppliers can be defined that replace or supplement their legacy analogues.  But any data that has been transacted against remains in the database.  It can be inactivated, but it’s still there, and some customers have a problem with this much clutter.  Depending on the degree of the issues noted above, this may or may not be a big deal.  Also, one of the upsides of a straight ERP upgrade is the ability to retain historical data with minimal effort.


Given the potential issues with data that a company may face, reimplementation offers the ability to cleanly address these challenges.  It is common for businesses to transform significantly after their original implementations.  They may have a number of legacy companies, sites and warehouses that are no longer needed and may even burden the database and create confusion internally.  Customers may also wish to make radical changes to their company or site structures as a function of the upgrade.  In these cases, consideration should be made to performing a reimplementation in lieu of an E10 upgrade.


Typecasting your Data Forecasting


When you perform a straight Epicor upgrade, you are accepting that you will take all of your past performances with you, and this may create challenges that impede your future state aspirations.  Conversely, dumping your legacy database also involves leaving behind the historical data that amounts to the history of your business.  Epicor’s E10 ERP software is a big step up from its 905 and 803 antecedents, and with each point release, the gap between the new and legacy versions widens further.  Legacy Epicor customers looking to move their businesses forward will need to transition to a modern ERP platform, and Epicor’s E10 application provides a compelling case.  And when making this critical Epicor ERP upgrade rewrite, customers will do well to consider their data dailies.




Looking for a good storyline for your Epicor upgrade?  Learn more about E10 from our Epicor ERP team.  

Endpoint Security: A Powerful Endgame



You already know you need protection from the cybersecurity threats circulating the market, but you might not have the time to know the specifics—like what endpoint security is or why you need it.  If you have devices accessing a network, then you have an endpoint that needs protection.  This elusive endpoint is simply any device that interacts with your network—the touchpoint between your network’s perimeter and the outside world.  The bring-your-own-device (BYOD) movement that’s currently shaping the business world makes network security challenging because it creates a high demand for comprehensive endpoint security.  You need to protect your customers and your business by protecting your team, and this begins with endpoint security.




Bring Your Own Disaster


The BYOD movement introduces a number of specific challenges in securing networks.  The proliferation of devices interacting with a network, both in kind and in number, increases the number of endpoints and thus also increases the potential vulnerability of a network.  Each new endpoint is a potentially exploitable gateway.  The propagation of vulnerabilities demands a solution that can address this new circumstance.  The solution that companies are increasingly utilizing to address their evolving needs has come to be known as endpoint security.  Endpoint security helps ensure that all devices interacting with a network are compliant to the necessary security standards, protecting both the network and the devices themselves.


Endpoint security differs from traditional antivirus in the way that it detects and responds to threats.  Traditional antivirus operates by comparing a program’s signature to a database of known malicious programs.  Programs flagged as malicious would be stopped by the antivirus agent.  This method of threat prevention is, by design, a step behind the attackers.  Traditional antivirus can only detect malicious programs that have already been logged in the antivirus agent’s database.  This creates problems in detecting new threats—what are sometimes called zero-day attacks.  This also creates problems with newer “signatureless” attack methodologies that work to obscure their signatures, to work around the known signatures that antivirus looks for.
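As a rough sketch of the signature-matching approach described above, the toy Python below flags a file only if its hash already appears in a known-bad database.  This is an illustration of the concept, not a real antivirus engine; the sample bytes and database contents are invented.

```python
import hashlib

# Toy illustration of signature-based detection: a lookup against a
# database of known-bad hashes. A sample not yet in the database is
# invisible to it -- which is exactly the zero-day gap.

KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"known-malware-sample").hexdigest(),
}

def is_flagged(file_bytes):
    """Flag a file only if its hash is already in the signature database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SIGNATURES

print(is_flagged(b"known-malware-sample"))   # True: previously logged threat
print(is_flagged(b"zero-day-sample"))        # False: new threat slips through
```

Note also that changing a single byte of the malicious sample changes its hash entirely, which is why “signatureless” attacks that mutate or obscure themselves defeat this kind of lookup.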


The question here is one of prevention vs. detection:  antivirus focuses on preventing attacks.  While this sounds logical, the tools at its disposal, as we have seen, are limited.  Should a malware attack slip through, antivirus is ill-equipped to deal with it once it’s inside the network.  This brings in the need for more dynamic, behavioral-based detection methodologies that can leverage artificial intelligence and machine learning to detect suspicious application behaviors and react accordingly.


Leveling Up


Modern endpoint security platforms operate in a multi-level manner, protecting networks and network devices in multiple phases of vulnerability and response.

  • The pre-execution phase: This level is for threats as they enter the network.
  • The on-execution phase: This step is for threats that have entered the network and are in the process of acting out their program logic.
  • The post-execution phase: This involves the steps to mitigate threats that have executed.

Combining static prevention with dynamic detection, modern endpoint security platforms leverage machine learning to detect threats on execution.  This becomes beneficial, not only for signatureless attacks, but also for “file-less” attacks that are operating exclusively in memory.

As part of our EstesCloud security stack, we work with several vendors to provide broad and comprehensive endpoint detection and response.  AI, combined with our SOC (Security Operations Center), provides the level of endpoint security that cannot be addressed by traditional antivirus.  Our cybersecurity solution comes with a strong warranty—cyber threat protection provides you with financial support of $1,000 per endpoint, or up to $1 million per company, securing you against the financial implications of a ransomware attack if your company indeed suffers an attack and our team is unable to block or remediate the effects.




Is your company in need of a security assessment?  Learn more about how EstesGroup can protect your business.