Troubleshooting SYSPRO: General SYSPRO 8 Instabilities

Is Troubleshooting SYSPRO Troubling You?

General issues are to be expected when running a client-server ERP system the size of SYSPRO. Client machines that stop responding, crash, or get disconnected are not unusual. From an administrative perspective, it is important to know the various factors that can affect general system stability. In this article, we outline some of the factors that can cause these problems, as well as some tools to know, utilize, and review when diagnosing general SYSPRO instability.


Antivirus and Firewall Exceptions

One of the most common offenders behind SYSPRO issues is antivirus software. SYSPRO opens the required ports in the Windows Firewall when it is installed; however, most sites utilize additional antivirus and/or firewall software that will have to be configured separately. SYSPRO 8 uses ports 30250 and 3702 by default, but be sure to verify the ports used in your own environment.
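If you need to confirm that a client machine can actually reach the SYSPRO application server on these ports, a quick connectivity test can save guesswork. Below is a minimal sketch in Python; the server name is a placeholder for your own, and note that a plain TCP test like this won’t validate UDP traffic.

```python
# Minimal TCP reachability check from a SYSPRO client to the server.
# "sysprosvr" is a hypothetical host name; replace it with your own,
# and adjust the port list to whatever your environment actually uses.
import socket

SERVER = "sysprosvr"   # placeholder: your SYSPRO application server
PORTS = [30250, 3702]  # SYSPRO 8 defaults, per the paragraph above

for port in PORTS:
    try:
        with socket.create_connection((SERVER, port), timeout=5):
            print(f"Port {port}: reachable")
    except OSError as err:
        print(f"Port {port}: blocked or closed ({err})")
```

If a port reports as blocked from the client but is open locally on the server, the culprit is usually an antivirus or firewall layer somewhere in between.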

If you see installation errors relating to SYSPRO clients or client-side reporting tools, you can try disabling local antivirus during the installation. Make sure to turn it back on after the installation has been completed. 

SQL Health Dashboard

The SQL Health Dashboard (IMPDBS) is a built-in SYSPRO tool that analyzes your SQL Server and reports back with any potential issues. When you open the program, it scans your SYSPRO databases one by one and provides an overview of potential issues. Anything reported here is more likely to relate to system-wide problems than to issues on specific client machines. Seeing this program report “green” across the board is a good sign that your SQL Server is running at its best.
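If you want a rough manual cross-check between dashboard runs, you can run SQL Server’s own integrity check against each SYSPRO database. The sketch below is a hedged example using Python and pyodbc; the connection string and database names are placeholders, and DBCC CHECKDB is a general integrity check, not a replica of the dashboard’s specific tests.

```python
# Spot-check database integrity with DBCC CHECKDB via pyodbc.
# Server and database names below are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlsvr;"
    "Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

for db in ["SysproCompanyA", "SysproSystem"]:  # placeholder names
    try:
        cursor.execute(f"DBCC CHECKDB ([{db}]) WITH NO_INFOMSGS")
        print(f"{db}: no corruption reported")
    except pyodbc.Error as err:
        print(f"{db}: CHECKDB reported a problem: {err}")
```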

System-Audit Query

System-Audit Query (IMPJNS) is another built-in tool that you can use to diagnose problems in SYSPRO. This one is particularly helpful to determine if disconnects or errors are unique to a specific operator and/or program. You can use the program to filter for client-server disconnects and determine who is encountering them, when they are encountering them, and where they are encountering them. This program can’t fix the problem, but it can tell you if one exists!


Client Folder Permissions

By default, the client-side installation of SYSPRO is found at “C:\SYSPROClient”. This folder contains the local SYSPRO install, user settings, and program files. Depending on your domain’s default user settings, a user may have insufficient access to this folder. If a local client machine is having unusual errors or access problems, try granting full control of this folder to authenticated users. You can do so by right-clicking the “SYSPROClient” folder and going to “Properties”.
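If you prefer to script the permission change instead of clicking through Properties, Windows’ built-in icacls command can grant the same access. Below is a hedged sketch that drives icacls from Python so the change can be repeated across machines; run it from an elevated prompt, and treat it as an example rather than a SYSPRO-sanctioned procedure.

```python
# Grant Authenticated Users full control of the SYSPRO client folder.
# (OI)(CI)F = full control, inherited by subfolders and files; /T recurses.
import subprocess

result = subprocess.run(
    ["icacls", r"C:\SYSPROClient",
     "/grant", "Authenticated Users:(OI)(CI)F", "/T"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```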

Check Resource Utilization

SYSPRO utilizes significant system resources during certain business functions and report generation. To ensure that SYSPRO has sufficient resources to perform its tasks, be sure to review local client machine hardware as well as memory availability on the SYSPRO server itself. If the server is starved for resources, the result can be a system-wide slowdown for all client machines connected to SYSPRO.
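A quick way to eyeball headroom on the server is a short script. The sketch below uses the third-party psutil package (pip install psutil); the 90% thresholds are illustrative, not official SYSPRO sizing guidance.

```python
# Report memory and CPU headroom; warn when either is nearly exhausted.
import psutil

mem = psutil.virtual_memory()
cpu = psutil.cpu_percent(interval=1)  # 1-second sample

print(f"Memory: {mem.percent}% used "
      f"({mem.available / 2**30:.1f} GiB available)")
print(f"CPU: {cpu}% busy")

if mem.percent > 90 or cpu > 90:
    print("Warning: this machine is running close to its limits.")
```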

Troubleshooting SYSPRO with EstesGroup Experts

Troubleshooting general SYSPRO 8 instabilities can be a complex and time-consuming process. From managing antivirus and firewall exceptions to monitoring SQL health and system resource utilization, there are numerous factors to consider when diagnosing and resolving these issues.

However, by leveraging the power of private cloud hosting through EstesGroup, you can simplify your SYSPRO deployment and minimize the risk of encountering these instabilities altogether. With our expertise in managing SYSPRO environments and our state-of-the-art private cloud infrastructure, you can focus on running your business while we handle the technical complexities.

ChatGPT Security? Tell Me About Your Motherboard

ChatGPT security concerns reveal that business owners are hesitant to let AI replace humans.

In November 2022, OpenAI introduced ChatGPT, an artificially intelligent, natural-language chatbot. ChatGPT interacts with its users in uncannily human and intelligent ways.


ChatGPT (Chat Generative Pre-trained Transformer) is a new type of artificial intelligence technology that is being developed to improve the way people interact with machines. While it is intended to provide faster and more intuitive responses to queries, it also carries potential security risks, especially for business owners.

The main concern is that, due to its complex nature, it could result in the loss of private data at great cost to companies and their employees. Furthermore, the technology could lead to a lack of control over data and give hackers the power to manipulate user behavior. This could be particularly damaging to those who rely on personal data to make decisions, such as financial services.

Additionally, ChatGPT could potentially cause unintended consequences, such as decreased privacy, as well as a lack of transparency. Therefore, it is essential to understand the implications of this technology before it is put into use.

The capabilities of ChatGPT and other Artificially Intelligent (AI) platforms are truly astounding. Users can ask ChatGPT questions and expect meaningful, accurate answers. However, these advancements in AI and chatbot technology come with their own set of compliance, privacy, and cybersecurity concerns. 

For instance, as these AI platforms become more sophisticated, they may begin to store more personal data and analyze user behavior. This could lead to potential privacy violations and other security risks:

  • AI-powered chatbots are particularly vulnerable to malicious attacks, as hackers may attempt to exploit vulnerabilities in AI platforms in order to gain access to sensitive information, manipulate data, or disrupt operations.
  • Additionally, AI-powered chatbots may be vulnerable to social engineering attacks, wherein hackers may use techniques such as phishing, impersonation, and disinformation to gain access to systems or manipulate people.
  • Furthermore, AI-powered chatbots may be vulnerable to data poisoning attacks, wherein hackers may input malicious data into AI systems in order to corrupt their output.
  • Finally, AI-powered chatbots may be vulnerable to adversarial attacks, wherein hackers may use sophisticated methods to fool the AI system into producing incorrect results.

These attacks can be used to gain access to valuable data, disrupt operations, or even cause physical harm. As such, it is important for businesses to take the necessary steps to protect their AI platforms from potential cyber threats. 

The question-and-answer exchange of a chat-based AI tool encourages users to share information, and the personal data collected along the way makes it easier to target specific audiences with tailored content.

AI security issues surface greater challenges in company data management.

Sophisticated chatbots provide an efficient way to generate content quickly, allowing users to quickly respond to customer requests or create high-quality content. As AI systems collect data, threat actors can scavenge for personal data, such as payment information or an email address. Something immediately helpful in customer relationship management soon becomes a data management nightmare.

Aside from the entertainment and educational capabilities of this new AI technology, ChatGPT and its other rival AI platforms have the potential to revolutionize the internet and working atmospheres.

In the technology realm, IT workers can use ChatGPT to enhance their development by asking the tool to quickly write or revise code. Considering the capabilities of AI platforms, it’s no wonder why companies are investing in and implementing AI technology.

However, like many other technological advances in history, AI platforms have potential privacy and cybersecurity risks. Recently, Italy, Spain, and other European countries have raised concerns about the potential privacy violations that could arise from using ChatGPT, an artificial intelligence (AI) platform. As a result, these countries have sought to introduce new regulations to ensure that ChatGPT respects the privacy of its users.

In particular, these regulations would require the platform to limit the collection, use, and disclosure of users’ personal data, as well as to ensure that users are able to access, modify, or delete the personal data they have provided to the platform. The regulations would require ChatGPT to take appropriate steps to ensure that any personal data collected is adequately protected from unauthorized access, use, or disclosure. This includes implementing appropriate technical and organizational measures such as encryption, pseudonymization, and secure storage systems. 

ChatGPT would also be required to provide users with clear and detailed information about how their personal data is being used, such as the purposes for which it is being collected and processed, the categories of data being collected, how long it will be stored, and who it will be shared with. Furthermore, ChatGPT would need to ensure that users are aware of their rights in relation to their personal data, including their right to access and to request rectification or deletion of their data.

Several countries have moved to restrict or temporarily ban ChatGPT. Under the Biden administration, the United States will roll out a comprehensive national security strategy to address the growing threat of hacking and the malicious use of artificial intelligence (AI) platforms. This strategy will involve the coordination of multiple federal departments and agencies, including the Department of Defense, the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence. It will also require close coordination with international partners and allies, as well as the private sector and civil society organizations, to ensure that the strategy is effective and comprehensive in scope.

The strategy will include a focus on protecting critical infrastructure, strengthening deterrence and detection capabilities, improving information sharing and collaboration, and developing new technologies to protect against malicious cyber threats and malicious AI use. The strategy will also involve enhancing international cooperation and engagement to counter malicious cyber activities, as well as increasing public and private investments in cyber security research and development.

The Biden administration will also be seeking to build public-private partnerships to improve the security of both public and private sector networks and systems. AI platforms are increasingly becoming popular due to their innovative and highly capable nature. However, these platforms are not without their risks and need to be assessed by multiple parties.

Cybercriminals are constantly looking for ways to take advantage of these platforms, targeting them in order to steal confidential information, generate malicious software, or gain access to data systems. These types of cyber attacks can have serious implications for the security of the platform and its users, resulting in the loss of valuable data, financial information, and sensitive personal information. Therefore, it is essential that organizations take the necessary steps to protect their AI platforms against these types of malicious attacks. This includes implementing robust security measures and regularly monitoring the platform for any suspicious activities. Additionally, it is important to stay up to date with the latest cybersecurity trends and technologies in order to ensure that the AI platform remains secure and protected.

Although OpenAI has programmed ChatGPT with the appropriate rules to prevent abuse, hackers have already figured out how to “jailbreak” the platform. In as little as a minute, hackers can generate malicious code for criminal intent. Prior to utilizing ChatGPT, their efforts may have taken days or even weeks.

AI-generated malware and cybersecurity attacks have already occurred. For example, hackers recently used ChatGPT to generate apps that successfully hijacked Facebook users’ accounts.

Preventing cybersecurity attacks and data breaches is of utmost importance for companies that want to protect their sensitive data and minimize their costs. Now that hackers are using AI platforms to further their criminal activities, it is more imperative than ever for companies to seek the best security solutions.

EstesGroup offers EstesCloud services to protect companies’ private data and systems from cybercriminals who may use new AI platforms for malicious intent. EstesCloud protects companies in a changing society in which AI technology is accelerating and enhancing hackers’ criminal activities. ChatGPT security is included in the private cloud and hybrid cloud infrastructures that we create for our clients.

ChatGPT security isn’t an issue when your powerful, highly capable AI and ERP tools are protected in a reputable data center. EstesGroup is ready to protect companies from hackers who use ChatGPT and other AI platforms to attempt to breach their data systems. The new AI technology will inevitably advance in the future, and as companies embrace and implement AI platforms, security solutions, like EstesCloud, will be necessary to safeguard private data and protect data systems.

EstesGroup realizes that innovation requires responsibility and security solutions, and the Estes team of highly skilled and dedicated professionals is ready to assist companies that seek the best cloud protection. Only time will tell how AI platforms will transform company atmospheres, but companies can rest assured that EstesGroup is ready for an artificially intelligent future.

Data Center Strategy: How To Cloud Up For Uptime

A Cloud is a Data Center and a Data Center is a Cloud?

Cloud applications ultimately sit upon the foundation of a server stack. You can view a cloud-based server as someone else’s computer, and picture these servers housed in a data center, which is their most likely location.

A data center can be simply described as a specified space within a building designed to securely house computing resources.

Data Center Considerations

A data center must account for the servers themselves, the power that feeds them, and the communication links that connect them.

A large data center normally involves an extensive open area, which is divided into racks and cages, to hold the servers themselves, as well as the power and communication connections used to link each individual server with the rest of the data center network. This network would reside in a building with sufficient architecture to allow for rapid data communication, and similarly high-performing connections to the outside world.

The building itself is normally a large and highly secure edifice, constructed from reinforced building materials so as to prevent physical compromise. It is often located on a campus that is itself physically guarded with high fences and rigid gates.


The Servers Themselves: What Is In Your Data Center?

Inside the building (the data center) exists a complex cooling and ventilation system to keep the heat-generating computing devices from overheating. The campus is supported by redundant power systems that allow the network to run even if the main power grid experiences an interruption or shutdown. The inner workings of the data center are designed to prevent downtime, but the materials used in construction can vary. Consider a pencil made from wood vs. a pencil made from plastic. Consider further a pencil manufactured from metal, built to protect a thin and fragile graphite core.

The ways in which end users can access the resources in a data center vary, because cloud provisioning can occur at many layers.

Option A: Cloud Provider = Data Center

Sometimes the cloud provider is itself the data center. Most often this is the case when you want to use server space from a data center, or else wish to colocate your hardware in a data center. For instance, as a customer, you might procure new hardware and move it to one of US Signal’s data centers in a colocation arrangement. This allows you to benefit from US Signal’s physical security, network redundancy, high-speed fiber network, and peering relationships, allowing for a broad array of high-speed communications.

Option B: Cloud Provider = Data Center Management Firm

Sometimes the cloud provider is an organization that handles the allocation and management of cloud resources for you — it serves as an intermediary between the end customer and the data center. For instance, EstesGroup partners with US Signal. We help customers choose the right server resources in support of the application deployment and management services that we provide for ERP (Enterprise Resource Planning) customers.

Moreover, not all data centers are created equal. Data centers differ in countless ways, including (but not limited to) availability, operating standards, physical security, network connectivity, data redundancy, and power grid resiliency. Most often, larger providers of cloud infrastructure actually provide a network of tightly interconnected data centers, such that you’re not just recruiting a soldier — you’re drafting an entire army. 

As such, when choosing a cloud provider, understanding the underlying data centers in use is as important as understanding the service providers themselves. That said, what are some of the questions that you should ask your provider when selecting a data center? 

Is the provider hosting out of a single data center or does the provider have data center redundancy?

Geo-diverse data centers are of great importance when it comes to the overall risk of downtime. Diversely located data centers provide inherent redundancy, which is especially beneficial when it comes to backup and disaster recovery.

But what defines diverse? One important consideration relates to the locations of data centers relative to America’s national power grid infrastructure. Look for a provider that will store your primary site and disaster recovery site on separate power grids.

This will insulate you from the potential for an outage at one of the individual grid locations. Think of the continental divide: on either side of the divide, water flows in one of two directions. When it comes to national power grids, support likewise comes from different hubs. Look for a provider who has redundant locations on the other side of the divide to protect you in the event of a major power outage.

Are they based on a proprietary data center, colocated, or leveraging the state-of-the-art technology of a leading data center?

A provider of hosting services may choose to store their data in one of many places. They may choose to leverage a world-class data center architecture like US Signal’s. Conversely, they may choose to colocate hardware that they already own in a data center. Or they may choose, like many managed services providers do, to leverage a proprietary data center, most often located in their home office.

Colocation is a common first step into the cloud. If you already own hardware and would like to leverage a world-class data center, colocation is a logical option. But for cloud providers, owning hardware becomes a losing war of attrition. Hardware doesn’t stay current, and unless it’s being procured in large quantities, it’s expensive. These costs often get passed along to the customer. Worse still, it encourages providers to skimp on redundancy, making their offerings less scalable and less robust in the event of a disaster.

Proprietary data centers add several layers of concern to the colocation option. In addition to the hardware ownership challenges, the provider is now responsible for all the infrastructure responsibilities that come with data center administration, such as redundant power, cooling, physical security, and network connectivity.

Moreover, proprietary data centers often lack the geo-diversity that comes with a larger provider. Beyond infrastructure, security is a monumental responsibility for a data center provider, and many smaller providers struggle to keep up with evolving threats. In fact, Estes recently onboarded a customer who came to us after their managed service provider’s proprietary data center was hacked and ransomed.

Is the cloud provider hosting out of a public cloud data center? 

Public cloud environments operate in multi-tenant configurations where customers contend with one another for resources. Resource contention means that when one customer’s resource consumption spikes, the performance experienced by the other customers in the shared tenant will likely suffer. Moreover, many multi-tenant environments lack the firewall isolation present in private cloud infrastructures, which increases security concerns. Isolated environments are generally safer environments. 

Is the cloud provider proactively compliant?

Compliance is more than adherence to accounting standards — it is a means of verifying that your provider performs the due diligence needed to ensure that its business practices do not create vulnerabilities that could compromise its security and reliability assertions. What compliance and auditing standards does your cloud provider adhere to?

Is your cloud provider compliant according to their own hardware vendor’s standards?

Hardware providers such as Cisco offer auditing services to ensure their hardware is being reliably deployed. Ensure that your provider adheres to their vendor’s standards. How about penetration testing? Is your provider performing external penetration testing to ensure PCI security compliance? In terms of industry-standard compliance frameworks, such as HIPAA, PCI DSS, and SOC 1 and SOC 2, ensure that your provider is being routinely audited. Leveraging industry standards through compliance best practices can go a long way toward making sure they are not letting their guard down.

What kind of campus connectivity is offered between your data centers and the outside world?

Low national latency is of utmost importance from a customer perspective. Efficient data transfer, both between the data centers themselves and from a given data center to the outside world, is fundamental for a cloud customer. Transactional efficiency is achieved in multiple ways.

For a network to be efficient, the data itself must take as few “hops” as possible from one network to another. This is best achieved through tight partnerships between the data center and both the national and regional ISPs that service individual organizations.

Within the data center network, an efficient infrastructure is helpful. US Signal, for instance, has a 14K mile network fiber backbone connecting its data centers and connecting them to regional transfer stations. This allows US Signal to support 3 ms latency between its 9 data centers, and to physically connect with over 90 national ISPs. This results in an extremely low national latency.

What kinds of backup and disaster recovery solutions can be bundled with your cloud solutions?

Fundamental to a cloud deployment is the ability to provide redundancy in the event of a disaster. Disaster recovery is necessary for sustaining an environment, whether on premise or in the cloud. But a disaster recovery solution must adhere to rigorous standards of its own if it is to be effective. Physical separation between a primary and secondary site is one such baseline need. Additionally, the disaster recovery solution needs to be sufficiently air-gapped to hit your desired RPO (recovery point objective, the maximum data loss you can tolerate) and RTO (recovery time objective, the maximum time to restore) targets, while avoiding potential cross-contamination between platforms in the event of hacking, viruses, or ransomware.

What kinds of uptime and reliability guarantees are offered by your data center?

All of the above aspects of a data center architecture should ultimately result in greater uptime for the cloud consumer. The major public data center providers are notorious for significant outages, and this has deleterious effects on customers of these services. Similarly, smaller providers may lack the infrastructure that can support rigorous uptime standards. When choosing a provider, make sure to understand the resiliency and reliable uptime of the supporting platform. EstesGroup can offer a 100% uptime SLA when hosted in our cloud with recovery times not achievable by the public cloud providers.
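For context, it helps to translate uptime percentages into allowed downtime. The arithmetic below is illustrative only; how a given provider contractually defines “downtime” varies.

```python
# Convert an uptime SLA percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for sla in (99.0, 99.9, 99.99, 100.0):
    allowed = MINUTES_PER_YEAR * (1 - sla / 100)
    print(f"{sla:>6}% uptime -> {allowed:7.1f} minutes of downtime/year")
```

Even a 99.9% SLA still permits nearly nine hours of downtime per year, which is why the difference between SLA tiers matters more than it first appears.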

Uptime has planned and unplanned components that must also be considered. Many larger cloud providers do not give advance warning when instances will be shut down for upgrades, which can be extremely disruptive for consumers and result in a loss of control that conflicts with daily business initiatives. Ensure that planned downtime is communicated and understood before it happens.

How scalable is the overall platform?

Scalability has to do with flexibility and speed: how flexibly can the resources of an individual virtual machine (VM) be tweaked, and how quickly can those changes be made? Ideally, your cloud provider offers dynamic resource pool provisioning, which allows for dynamic allocation of computing resources when and where they are needed.

Some provider environments support “auto-scaling,” which can dynamically create and terminate instances, but they may not allow for dynamic resource changes to an existing instance. In these cases, if a customer wishes to augment the resources of an instance, it must be terminated and rebuilt using one of the instance options the provider offers. This can be problematic. Additionally, provisioning, whether to a new VM or an existing one, should be quick and not require a long lead time to complete. Ensure that your cloud provider specifies the elapsed time required to provision and re-provision resources.

What are the data movement costs?

The costs associated with the movement of data can significantly impact your total cloud costs. These are normally applied as a toll fee that accumulates based on the amount of data that moves over a given time, so these costs can be unpredictable. But what kinds of data movements occur?

  • Data ingress: data moving into the storage location, as it is being uploaded.
  • Data egress: data moving out of the storage location, as it is being downloaded.

Data centers rarely charge for ingress movement — they like the movement of data into their network. But many will charge for data egress. This means that if you want your data back, they may charge you for it.

Sometimes these fees even occur when data is moving within the provider’s network, between regions and instances. If you’re looking for a cloud provider, check the fine print to determine whether egress fees are applied, and estimate your data movement, to understand your total cost. EstesGroup gives you symmetrical internet data transfer with no egress charges, so your data movement does not result in additional charges. This means that your cloud costs are predictable.
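A back-of-the-envelope estimate makes the point. The per-GB rate below is a hypothetical figure, not any specific provider’s pricing; substitute your provider’s published rate and your own measured traffic.

```python
# Rough monthly/annual egress cost estimate with placeholder numbers.
EGRESS_RATE_PER_GB = 0.09    # hypothetical $/GB egress fee
monthly_egress_gb = 2_000    # e.g., reports, file syncs, restored backups

monthly_cost = monthly_egress_gb * EGRESS_RATE_PER_GB
print(f"Estimated egress cost: ${monthly_cost:,.2f}/month, "
      f"${monthly_cost * 12:,.2f}/year")
```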

Does the cloud provider offer robust support?

Downtime can come from one of many situations. Your ISP could experience an outage, and you may need to fail over to your secondary provider. Or you may encounter an email phishing scam resulting in a local malware attack. Or you may experience an outage due to a regional power grid issue. In these extenuating circumstances, you may find yourself needing to contact your cloud provider in a hurry.

As such, you’ll want a provider that offers robust pre-sales and post-sales support that is available 24/7/365. Many providers offer high-level support only if you subscribe to an additional support plan, which is an additional monthly cost. Wait times are also an issue — you may have a support plan, but the support may be slow and cumbersome. Look for a cloud provider that will guarantee an engineer in less than 60 seconds, 24/7/365.

Are you ready for a tour of one of the best data centers in the world? Meet with the EstesCloud team to get the right cloud strategy for your business.

What is CMMC: Cybersecurity Maturity Model Certification?

CMMC: The Looming Cyber-Security Certification that Affects 60,000+ Companies

In 2019, the U.S. Department of Defense (DoD) announced a new security protocol program for contractors called Cybersecurity Maturity Model Certification (CMMC). CMMC is a DoD certification process that lays out a contractor’s security requirements, and it is estimated that 60,000-70,000 companies will need to become CMMC compliant in the next 1-3 years.

CMMC is essentially a combination of and addition to existing regulations in 48 Code of Federal Regulations (CFR) 52.204-21 and the Defense Federal Acquisition Regulation Supplement (DFARS) 252.204-7012, and it includes practices from National Institute of Standards and Technology (NIST) SP 800-171, the United Kingdom’s Cyber Essentials, and Australia’s Essential Eight requirements. International Traffic in Arms Regulations (ITAR) will remain a separate certification from CMMC – though companies that are ITAR compliant will need to adhere to CMMC as well.

CMMC Version 1.0 was released in late January 2020. To view the latest CMMC document, visit the CMMC DoD site.

CMMC Notables 

  • There are 5 levels of the security maturity process (basic is 1 and most stringent is 5). 
  • Any company that directly (or even some that indirectly) does business with the DoD will adhere to CMMC – and that means direct DoD contractors and high-level CMMC companies’ supply chains must also adhere to, at minimum, base-level requirements. 
  • There is no self-assessment (unlike NIST), and companies need to get certified through a qualified auditing firm. 
  • The DoD will publish all contractors’ certification level requirements. 

Is My Business Affected by CMMC? 

This is easily answered with a two-part question: 1) Is your business a direct contractor to the DoD, or 2) does your business do business with a company that is a contractor to the DoD? If you answered “yes” to question 1, then your business will need to be CMMC compliant. If you answered “yes” to question 2, then it is very probable that your company will need to be CMMC compliant. 

What are the CMMC Levels? 

  • Level 1 – “Basic Cyber Hygiene”  
    • Antivirus 
    • Meet safeguard requirements of 48 CFR 52.204-21 
    • Companies might be required to provide Federal Contract Information (FCI) 
  • Level 2 – “Intermediate Cyber Hygiene” 
    • Risk Management 
    • Cybersecurity Continuity plan 
    • User awareness and training 
    • Standard Operating Procedures (SOP) documented 
    • Back-Up / Disaster Recovery (BDR) 
  • Level 3 – “Good Cyber Hygiene”
    • Systems Multi-factor Authentication 
    • Security Compliance with all NIST SP 800-171 Rev 1 Requirements 
    • Security to defend against Advanced Persistent Threats (APTs) 
    • Share incident reports if company subject to DFARS 252.204-7012 
  • Level 4 – “Proactive” 
    • Network Segmentation 
    • Detonation Chambers 
    • Mobile device inclusion 
    • Use of DLP Technologies 
    • Adapt security as needed to address changing tactics, techniques, and procedures (TTPs) in use by APTs 
    • Review & document effectiveness and report to high-level management 
    • Supply Chain Risk Consideration* 
  • Level 5 – “Advanced / Progressive” 
    • 24/7 Security Operations Center (SOC) Operation 
    • Device authentication 
    • Cyber maneuver operations 
    • Organization-wide standardized implementation of security protocols 
    • Real-time assets tracking 

One important thing to note about CMMC is that, unlike NIST and other current certifications, CMMC will require certification through an authorized third-party CMMC certification company. Currently, most companies can self-certify for DoD-related security requirements. EstesGroup is not a CMMC certification company, but we can help companies prepare and boost security to meet the new requirements.

For more specifics on CMMC, access the DoD’s latest CMMC revision.

Learn more about CMMC with 5 Ways EstesGroup Helps with Your CMMC Compliance.

Do you have questions about CMMC, or about how EstesGroup can help your company with CMMC or other cybersecurity, compliance, or data issues? Contact us or chat with us today.

12 Days of ECHO, Twelfth Day: My Admin Gave to Me, Ransomware 2020 the Good, Bad, and Ugly

Ransomware: the hits keep coming going into 2020

By now, we’ve all heard about someone affected by ransomware. If it wasn’t a friend’s business, or a company you do business with, or the town you live in, or the hospital you visit – all you have to do is look at the news to see major enterprises being attacked and ‘taken out’ by this nefarious deed. As long as people pay, the bad guys will keep using it as a tool. After all, they’re just chasing the money.

So why do I title this “the good, the bad and the ugly”? Well, if you’ve been hit, you know the bad part. It’s expensive in both dollars and perception. What good can come of ransomware? And besides the rising ransom amount, why is it about to get uglier?

First, the good part.

It raises awareness that the bad guys are afoot. Wherever there’s profit, fame, political gain, and more, there will be someone to play the villain (or hire one) to get the goods. Technology just made it easier. So, the good news is that you know about it!

Second, the bad part.

Knowledge without action is a travesty. It would be even better if you acted on that knowledge and improved your defenses. Backups and disaster recovery plans are hopefully in place, but don’t assume YOUR backups and DR plans are solid. Test them occasionally to find the problem before you need a restore. I can’t tell you how many businesses think their backups are solid, only to find out differently after the attack.

Internet access should be a privilege, not a right. Virtually nobody should have unfettered access to any website they want. Users should get internet access based on their role in the company, not because they have a computer and a browser. ALL email and internet access should be filtered, blocked, logged, and, if needed, analyzed. You need to be current on patches, antivirus, spam filtering, blah blah blah. Sorry if I lost you there, but we’ve been beating that drum for years. In fact, you might want to take away the internet from your users – let users surf only on their phones, on the guest wifi and NOT the corporate wifi. Perhaps provide an internet kiosk that’s separate from the corporate network.

Lastly, the ugly part.

The *really* ugly. Once you get ransomed, you can no longer assume that it’ll just lock your files up. That data of yours (oh, customer files, payroll info, vendor lists, etc.) could have just as easily been copied by the attackers and then encrypted. So now, you don’t have your customer spreadsheet, but the bad guys do! Imagine the horror when they go to all your clients to tell them you’ve been hacked and that they have all this data about YOUR customers! If you are under HIPAA, you might as well close up shop; the HIPAA fines alone will knock a small practice down and out. What customer will do business with a company that not only leaked their information, but let that same confidential information get POSTED on Facebook? The depravity and damage can only be imagined at this time.

So, if you got ransomed, and all you lost was a few (thousand) bucks, consider yourself lucky. It’s about to get a whole lot uglier. The cities of Atlanta, Pensacola, and Baltimore will agree!

Happy New Year to all, and may 2020 be brighter, smarter, and safer.

If you liked reading the “Twelfth Day of ECHO,” return to our main list to read all of the other “12 Days of ECHO” posts.

Do you have questions or need assistance with your ERP system or data security? Please feel free to Contact Us and see if we can help get your bits and bytes in order.

12 Days of ECHO, Eleventh Day: My Admin Gave to Me, notes on Online Transaction Processing vs. Decision Support!

Enterprise Resource Planning (ERP): Online Transaction Processing vs. Decision Support

So, you’ve got your ERP system up and running, and before long, the management team wants reports, dashboards, and executive data out of the system. That makes perfect business sense, and most ERP systems (including Epicor) have a slew of built-in reports as well as a report designer – Epicor E10 uses SSRS, Microsoft’s flagship product for writing reports.

However, there’s a potential problem. The activity of entering data, called “Online Transaction Processing” or OLTP, is fundamentally different from the activity of reporting on and summarizing that data, called “Decision Support,” or DS for short. Before we go further, let me also explain database locking. A lock is a basic database ‘tool’ that prevents other users from changing a piece of data that you are using. There are many types of locks, but for this discussion, a row (record) lock prevents others from editing that specific record – let’s say an invoice. A table lock prevents anyone from editing anything in that whole table. It is our sincere desire to keep all locks as short as possible, for the longer a lock is held, the more likely it is that someone else will want that locked data.

Online Transaction Processing (OLTP) locks individual records to allow parts to be sold, inventory to be adjusted, and invoices to be entered. Decision Support (DS) locks whole database tables to run a report. When management wants to see an invoice report, nobody can be entering a new invoice while the report is being generated! While most locks are handled automatically, they cause delays and, in the rare case of a deadlock, data loss.
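To see the contention for yourself, the sketch below uses SQLite from Python’s standard library. SQLite’s locking is coarser than SQL Server’s (it locks the whole database file rather than rows or tables), so this is only an illustration of the principle, not of Epicor’s actual locking behavior.

```python
# Demonstrate a writer's open transaction blocking a reporting reader.
import sqlite3

# isolation_level=None gives us manual BEGIN/COMMIT control.
oltp = sqlite3.connect("demo.db", timeout=1, isolation_level=None)
report = sqlite3.connect("demo.db", timeout=1, isolation_level=None)

oltp.execute("CREATE TABLE IF NOT EXISTS invoices (id INTEGER, total REAL)")

# The OLTP user opens a write transaction and holds the lock...
oltp.execute("BEGIN IMMEDIATE")
oltp.execute("INSERT INTO invoices VALUES (1, 99.50)")

# ...so the report, which also wants a write-stable snapshot, waits
# and then gives up after the 1-second timeout.
try:
    report.execute("BEGIN IMMEDIATE")
    report.execute("SELECT SUM(total) FROM invoices")
except sqlite3.OperationalError as err:
    print(f"Report blocked by OLTP activity: {err}")  # "database is locked"

oltp.execute("COMMIT")  # release the lock; the report can now run
```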

I’m oversimplifying the issue, but the long and short of it is that Online Transaction Processing (OLTP) and Decision Support (DS) fight each other all day long. In fact, locking contention is one of the main causes of database performance issues! There are several solutions, but a common one is to simply time the DS to occur after OLTP – that is, after the business closes. Many companies run their reports at night, not only because the system is more available, but because all those pesky users aren’t entering data, locking records, and causing issues!

A more complex, but also common, solution is to copy the Online Transaction Processing (OLTP) database to an independent Decision Support (DS) database on a regular basis. OLTP users get a database optimized for their activities, and the DS users can run reports all day long without locking the OLTP users out. It’s an ideal solution for a busy database, but it does have its downsides: you’ll need twice the disk space and a method to move the data from OLTP to DS. Our clients use backup & restore, SQL replication, mirroring, and all kinds of technology to duplicate the database and prevent the dreaded locking contention.
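One common way to script that nightly copy is plain backup & restore. The sketch below drives T-SQL from Python with pyodbc; the server, database, path, and logical file names are all placeholders (use RESTORE FILELISTONLY to find your real logical names), and you would schedule something like this after hours.

```python
# Refresh a reporting (DS) copy of the OLTP database via backup & restore.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlsvr;"
    "Trusted_Connection=yes;",
    autocommit=True,  # BACKUP/RESTORE cannot run inside a transaction
)
cur = conn.cursor()

# 1. Back up the live OLTP database.
cur.execute("BACKUP DATABASE [ERP10] TO DISK = N'D:\\dump\\erp10.bak' WITH INIT")
while cur.nextset():  # drain messages until the backup completes
    pass

# 2. Restore it over the reporting copy under a different name and files.
cur.execute("""
    RESTORE DATABASE [ERP10_Reports]
    FROM DISK = N'D:\\dump\\erp10.bak'
    WITH REPLACE,
         MOVE 'ERP10'     TO N'D:\\data\\ERP10_Reports.mdf',
         MOVE 'ERP10_log' TO N'D:\\data\\ERP10_Reports.ldf'
""")
while cur.nextset():
    pass
print("Reporting database refreshed.")
```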

Need help? Let us know and we’ll help you get your Online Transaction Processing and Decision Support properly segmented for best performance. 

If you liked reading the “Eleventh Day of ECHO,” return to our main list to read all of the other “12 Days of ECHO” posts.

Do you have questions or need assistance with your Epicor system? Please feel free to Contact Us and see if we can help get your bits and bytes in order.