SYSPRO Data Integrity: A Guide to Balance Functions


How to Ensure Data Integrity Within SYSPRO – Balance Functions Explained

Ensuring data integrity is a top priority for any software product, especially for an ERP such as SYSPRO. As users go about performing their daily activities, various problems can arise, even with the most mature ERP systems. The most common issues seen within SYSPRO that can lead to data instability are users being disconnected, programs freezing up, or business objects unexpectedly stopping in the middle of processing. With tens (or potentially hundreds) of daily active users, it is imperative for your business that the data within SYSPRO stays consistent. So how does SYSPRO combat data integrity problems and maintain the overall stability of its data? The answer is SYSPRO balance functions!

SYSPRO Data Integrity Balance Functions

A balance function in SYSPRO is a detailed process used to correct and adjust database information when discrepancies are detected. Balance functions are built into SYSPRO’s Period-End programs and should be run prior to posting GL entries or performing Month-End/Year-End tasks. A balance function can help “balance” a module by comparing user transaction data against the module’s own control totals and correcting any discrepancies it finds. Examples of the discrepancies it can correct include:

  • GL journal entries that have not been properly completed or are still marked with “in-process” flags if they were abandoned. Users unexpectedly disconnecting from the system can be a cause of this.
  • Failed inventory transactions. Minor hiccups from bugs or networking issues during inventory transactions can result in inaccurate inventory counts. For example, a stock code may display as having available quantity on-hand but an attempt to issue or release the quantity results in errors.
  • Key documents locked by users for maintenance that fail to be released once the work is complete. Again, a potential result of unexpected user disconnects or program errors. These are commonly encountered within sales/purchase order entry and customer/supplier setup programs.

Scheduling SYSPRO Data Integrity Tasks

While most SYSPRO environments only run these balance functions during their period-end tasks, it is strongly recommended to schedule them to run regularly. Sites with heavy user activity (including custom business object activity) may want to run balance functions overnight several days a week. The result is an improved, smoother SYSPRO experience for all users.

Balance functions are not found in their own standalone program. Instead, they are usually located within period-end programs, and because the naming conventions of those programs are not always clear, it is not easy to identify all of them. Here is a full list of the SYSPRO programs that contain or can perform the functionality of a “balance function”:

  • AP Period End
  • AR Period End
  • AR Bank Deposit Slip
  • Cash Book Period End
  • Assets Period End
  • Inventory Period End
  • Sales Order Purge
  • Purchase Order Purge
  • GRN Purge
  • Sales Analysis Update

It is imperative to understand that some of these programs contain critical data-altering functionality relating to period-end module closures or the purging of data. Tread with caution when accessing these programs and ensure that only the “Balance” option is selected, not any unwanted options pertaining to period-end processing or data purges.

Some of the programs listed above have an option called “Reset lowest unprocessed journal”. It is not always checked by default, so it is recommended to enable it prior to executing a balance function: it performs an additional data-stability step intended to fix GL journal issues.

[Screenshot: balance function options in the AR Period End program]

SYSPRO environments that are not familiar with the power of balance functions can (and will) encounter unwanted issues and potentially unstable data problems. Knowing how to utilize, execute, and schedule balance functions is key to ensuring your SYSPRO environment’s data remains both stable and trouble-free.

Ready to discover how an EstesGroup ERP consultant tackles data integrity challenges & ensures your company’s success? Chat with us now or sign up for one of our newsletters to get SYSPRO case studies, white papers, customer testimonials & more!

RPA DNA – What is Robotic Process Automation?


Robotic Process Automation (RPA) is a new software technology that has the potential, in conjunction with AI technologies, to transform business processes, policies, and Enterprise Resource Planning (ERP) systems.


RPA removes workers’ mundane, time-consuming tasks so that they can alternatively focus on innovation and creation. With RPA, software robots, rather than humans, quickly and efficiently perform data system tasks. Simple software robots can log in to data systems, locate and move files, insert and alter information in data systems, and assist in analytics and reporting.

More advanced software robots, especially if they have AI technology, can interpret, organize, and make decisions in a cognitive, human-like way. Businesses will discover that RPA technology is relatively inexpensive to implement, and it is business-ready and scalable.

A variety of industries, including manufacturing, finance, and healthcare, can benefit from adopting RPA technology into their business operations and processes. The benefits of RPA technology are expansive, and they carry over into ERP system implementations and ERP processes. Ultimately, with RPA, businesses can focus on making their workplace atmospheres more efficient and productive.

What are the benefits of Robotic Process Automation (RPA)?

As businesses seek to automate their workflows to become more efficient and productive, Robotic Process Automation (RPA) will continue to transform workplace atmospheres and advance processes and operations while increasing production and profits. By implementing RPA technology, businesses can realize the following benefits:

  • RPA is initially inexpensive to implement and is ready for use, with minimal coding, by most data systems.
  • RPA eliminates some of the monotonous, arduous tasks that fatigue workers.
  • RPA reduces human error and improves the speed, efficiency, and accuracy of repetitive tasks.
  • RPA adapts to meet increased production needs and ultimately reduces costs and increases production.
  • RPA creates a happier working atmosphere in which employees can focus on customer relations and innovation rather than mundane tasks.
  • RPA encourages a strong increase in return on investment (ROI).
  • RPA promotes consistent compliance with industry and government standards.
  • RPA enhances security by eliminating human interaction with sensitive, private information.
  • RPA can automatically generate reports and analytics that businesses can use to improve their processes and operations.

How does Robotic Process Automation (RPA) integrate with ERP systems?

Enterprise resource planning (ERP) systems are essential for businesses to tailor their workplace atmospheres, and by utilizing Robotic Process Automation (RPA), businesses can reassign mundane tasks from humans to software robots.

Users will be able to realize the full benefits of ERP systems and focus on more foundational tasks while RPA accomplishes the lower-skilled, mundane ones. ERP systems see benefits similar to those businesses gain when integrating Robotic Process Automation (RPA). Some areas where ERP operations can utilize and benefit from RPA include:

  • Accurate data capture and transfer
  • Assistance with and automation of data migration
  • Inventory and supply chain management
  • Real-time data sharing
  • Real-time analytics and reporting necessary for compliance

Why should businesses integrate their ERP systems with RPA technology?

Many companies are hesitant to implement Robotic Process Automation (RPA) or fear that RPA will eliminate workers. RPA is cost-effective and easy for businesses to implement and integrate into their ERP systems. RPA software increases speed, productivity, and efficiency of processes and operations while encouraging a happier workplace atmosphere.

RPA doesn’t replace humans. It certainly is more consistent and reliable, but there will always be a need for human interaction. Although RPA will eliminate many of the lower-skilled, mundane tasks that humans must perform, workers will still be responsible for higher-skilled, fundamental tasks. As RPA streamlines workplaces and ERP systems, humans will be able to focus on more complex, meaningful tasks that will help businesses grow and maximize profits.

Combining an ERP system with new cloud-based technology allows businesses to experience the benefits of both while approaching the future with automation and efficiency. Businesses will see cost reductions and great increases in return on investment (ROI).

ERP systems with integrated RPA technology encourage streamlined workplace atmospheres, innovation, competitiveness, and ultimately, business growth. RPA lets workers enjoy their coffee, innovate, and communicate with customers while it does the grunt work.

Looking for answers to questions about how new technology can help your business? Fill out the form to meet with our team to learn how cloud-based solutions and services can help you achieve your goals!

Epicor Prophet 21 Performance – Real-World Issues


Recently, I met with an Epicor Prophet 21 customer on a discovery call to review the issues they were encountering in relation to some ongoing P21 web UI slowdowns. ERP system performance is a common challenge across the ERP community, and in the Prophet 21 community, the subject of P21 performance is similarly of great importance. Coming out of the call, I thought I’d collect a few of the talking points and add a few additional P21 system performance considerations that can impact the speed and responsiveness of your Prophet 21 web UI.


Epicor Prophet 21 system performance can be a maze to navigate.

We had originally characterized the issue as a problem with the P21 API loading, and we began looking more broadly. As you might know, Prophet 21 sits on top of Microsoft’s Internet Information Services web server platform, known colloquially as “IIS”. There are several things to consider if your P21 web server is slowing down throughout the day, and with an ERP system like P21, the issues actually affecting the performance of the Prophet 21 web interface may reside many layers below the web server itself.

Background:

It might be helpful to first review the composition and operation of websites. Websites are composed of both static and dynamic pages. A static page is pre-defined on the web server and is ready to be served up. A dynamic page is generated at run time and may differ each time it is generated. In terms of the HTML pages that comprise the P21 user interface, generally speaking, the P21 application pool can only respond to a certain number of requests at a time. If it is busy responding to requests for dynamic pages, it may not have any threads left to serve the static pages. For this reason, a code problem on a dynamic page can create the illusion that the static pages are being served “slowly”. My point is, don’t rule out code or SQL. As an example, if you have 100 requests all hitting a database or API at the same time, and all 100 await a response, request 101 may be blocked until one of the first 100 completes.
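The arithmetic behind that saturation is worth making explicit. Assuming each dynamic request holds a worker thread for its full duration (the thread counts and response times below are illustrative, not measured P21 values), throughput caps out at threads divided by average response time:

```python
def max_throughput(worker_threads: int, avg_response_seconds: float) -> float:
    """Upper bound on requests/second before new requests start to queue.

    Assumes each request occupies one worker thread for its whole duration;
    the numbers used below are illustrative, not measured P21 figures.
    """
    return worker_threads / avg_response_seconds

# 100 worker threads with dynamic pages taking 2 seconds each:
# anything beyond 50 requests/second queues behind in-flight work,
# and even cheap static pages end up waiting with everything else.
```

The practical takeaway: one slow dynamic page can drag the apparent speed of the whole site down, because it ties up the shared thread pool.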

Diagnosing the Degradation:

Beyond explicit issues like request load, there are plenty of things that you can do to help you diagnose performance problems with your Prophet 21 web application:

Load Profiles: What does your load profile look like normally? This makes a big difference – it may be that you always have an issue, but you can’t see the impact until your site receives load. You could try to test this (in staging) with something like JMeter.

Reviewing your logs: Does your application have logs? If not, you should consider adding some logging. If you already have logs, what do they say? Are there exceptions being thrown by your application? Is there something that is consistently failing?

IIS Logs: Enable IIS logs if you haven’t already. Reviewing your P21 IIS logs can help you see which requests are taking the longest. You can use something like Microsoft’s Log Parser to run SQL-like queries against your logs. You may even want to dump your logs into a SQL database if that makes your P21 logs easier to review. Once you know which pages are taking the longest, you can focus some of your attention on them.
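As a hedged sketch of the kind of analysis Log Parser performs, the snippet below ranks requests in a W3C-format IIS log by the time-taken field (milliseconds). The field names follow the W3C extended log format that IIS writes; the sample lines in the test are hand-made, not real P21 logs.

```python
def slowest_requests(log_lines, top_n=3):
    """Rank requests in a W3C-format IIS log by the time-taken field (ms).

    Uses the '#Fields:' directive line to map columns to names, skips
    other comment lines, and returns (uri, time_taken) pairs, slowest first.
    """
    fields = []
    rows = []
    for line in log_lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names after the directive
        elif line and not line.startswith("#"):
            rows.append(dict(zip(fields, line.split())))
    rows.sort(key=lambda r: int(r["time-taken"]), reverse=True)
    return [(r["cs-uri-stem"], int(r["time-taken"])) for r in rows[:top_n]]
```

Once the slowest URIs surface, you know which pages (or API endpoints) deserve the deep dive.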

Memory: How much memory is your application pool using? A memory leak is an obvious candidate but should be quite easy to see. Use Windows’ inbuilt Performance Monitor to track memory consumed by your application pool over the day and see if this increases as the day goes on.
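A rough way to turn those Performance Monitor samples into a leak signal is to check whether the application pool’s memory only ever grows across the day. The thresholds here are arbitrary starting points for illustration, not IIS recommendations:

```python
def looks_like_leak(samples_mb, min_growth_mb=50):
    """Crude leak heuristic over periodic memory samples (in MB).

    Flags a possible leak only if memory never drops between samples AND
    the total growth exceeds a threshold. Healthy worker processes dip
    when garbage collection or app-pool recycling kicks in.
    """
    monotonic = all(b >= a for a, b in zip(samples_mb, samples_mb[1:]))
    return monotonic and (samples_mb[-1] - samples_mb[0]) >= min_growth_mb
```

Steady, one-directional growth across a full business day is the classic signature worth escalating.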

SQL Performance: The performance of your P21 SQL database may be an underlying cause of poor Prophet 21 user interface performance. SQL server provides a series of query structures called Dynamic Management Views, or DMVs, that can provide details about server and database health and performance. These can be very helpful in diagnosing performance issues at this level. One common DMV, sys.dm_exec_requests, can help you understand query properties such as wait_type, wait_time, blocking_session_id and the total_elapsed_time.
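A sketch of how you might use that DMV: the T-SQL below selects the columns mentioned above (running it naturally requires a live SQL Server connection), and the helper filters already-fetched rows down to sessions that are actively blocked. The 1000 ms wait threshold is an arbitrary starting point, not a recommendation.

```python
# sys.dm_exec_requests is a documented SQL Server DMV; in that view a
# blocking_session_id of 0 means the request is not blocked by anyone.
DM_EXEC_REQUESTS_QUERY = """
SELECT session_id, wait_type, wait_time, blocking_session_id, total_elapsed_time
FROM sys.dm_exec_requests
WHERE wait_time > 1000
ORDER BY total_elapsed_time DESC;
"""

def flag_blocked_sessions(rows):
    """Given rows fetched from the query above (as dicts), return the
    session ids that are being blocked by another session."""
    return [r["session_id"] for r in rows
            if r["blocking_session_id"] not in (0, None)]
```

Chasing the `blocking_session_id` chain back to its head blocker is often the fastest route to the query actually causing a UI slowdown.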

P21 Application Pool Connections: Check how many connections your application pool has open – using something like Microsoft’s TCPView. Your application pool will try to re-use connections where possible, but you’ll probably see a lot of open connections to your application pool. One interesting thing you can see from this is how many connections you have open to your SQL database or any external APIs your application is using.

Use an Application Performance and Monitoring Tool: Performance monitoring tools, like AppDynamics, will be able to help pinpoint slow performing parts of your code. Unfortunately, there’s a little bit of a learning curve to be able to use these tools effectively, but they can be very powerful in helping to diagnose problems with your applications.

SQL Server AutoGrowth Property: Review the AutoGrowth property on your SQL database. You may encounter issues if all of the following are true:

1. The database is transactionally very busy.

2. AutoGrowth is enabled.

3. The AutoGrowth increment is left at a small default MB amount. This can cause random slowdowns in the database engine, which in turn can impact the API application pool’s response time.

One thing to test is setting the AutoGrowth increment to a much larger MB value, so that growth events only happen periodically.
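The effect of the increment size is easy to quantify. Assuming a fixed amount of daily growth (the figures below are illustrative, not measured), the number of times the engine must pause to grow the file is just daily growth divided by the increment:

```python
import math

def growth_events_per_day(daily_growth_mb: float, increment_mb: float) -> int:
    """How many times the engine must pause to grow the file in a day.

    Each growth event briefly stalls writes, so fewer, larger growths
    generally beat many small ones (pre-sizing the file beats both).
    """
    return math.ceil(daily_growth_mb / increment_mb)

# Illustrative: a database adding 5 GB/day with a small 10 MB increment
# grows 512 times a day; with a 1 GB increment it grows only 5 times.
```

Enabling instant file initialization, where appropriate, further reduces the cost of each data-file growth event.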

Look for Memory Leaks: I once had a customer experiencing IIS performance degradation with a custom web application we had built using ASP.NET and the Crystal Reports runtime. Ultimately, the issues with IIS and the web app related to memory leaks that were not obvious at all until we started doing some deep-dive testing. Consider the possibility of internal memory leaks when building a support case against an application with performance issues that may or may not have been resolved in minor version changes. IIS also plays a part here in how it manages garbage collection within application pools, so this may be an area you need to explore as well.

As you can see, Epicor Prophet 21 system performance can be a maze to navigate. To find your way through the P21 performance maze, there are many potential paths to take, and depending on the ultimate source of the problem, many might be dead ends. But in understanding the directions one might take in navigating the many potential Prophet 21 performance issues, P21 users can hopefully find themselves at the maze’s end – and moving on to bigger and better things.

Data Center Strategy: How To Cloud Up For Uptime


A Cloud is a Data Center and a Data Center is a Cloud?

Cloud applications ultimately sit upon the foundation of a server stack. You can view a cloud-based server as someone else’s computer, and picture these servers housed in a data center, which is their most likely location.

A data center can be simply described as a specified space within a building designed to securely house computing resources.
Data Center Considerations

  • Servers
  • Power
  • Communication

A large data center normally involves an extensive open area, which is divided into racks and cages, to hold the servers themselves, as well as the power and communication connections used to link each individual server with the rest of the data center network. This network would reside in a building with sufficient architecture to allow for rapid data communication, and similarly high-performing connections to the outside world.

The building itself is normally a large and highly secure edifice, constructed from reinforced building materials, as to prevent physical compromise. It is often located on a campus that is itself physically guarded with high fences and rigid gates.


The Servers Themselves: What Is In Your Data Center?

Inside the building (the data center) exists a complex cooling and ventilation system, to prevent the heat-inducing computing devices from overheating. The campus is supported by redundant power systems, to allow the network to run, even if the main power grid experiences interruption or shutdown. The inner workings of the data center are designed to prevent downtime, but the materials used in construction can vary. Consider a pencil made from wood vs. a pencil made from plastic. Consider further a pencil manufactured from metal built to protect a thin and fragile graphite fragment. 

The ways in which end users can access the resources in a data center vary, because cloud provisioning can occur at many layers.

Option A: Cloud Provider = Data Center

Sometimes the cloud provider is itself the data center. This is most often the case when you want to use server space from a data center, or wish to colocate your hardware in one. For instance, as a customer, you might procure new hardware and move it to one of US Signal’s data centers in a colocation arrangement. This lets you benefit from US Signal’s physical security, network redundancy, high-speed fiber network, and peering relationships, allowing for a broad array of high-speed communications.

Option B: Cloud Provider = Data Center Management Firm

Sometimes the cloud provider is an organization that handles the allocation and management of cloud resources for you, serving as an intermediary between the end customer and the data center. For instance, EstesGroup partners with US Signal. We help customers choose the right server resources in support of the application deployment and management services that we provide for ERP (Enterprise Resource Planning) customers.

Moreover, not all data centers are created equal. Data centers differ in countless ways, including (but not limited to) availability, operating standards, physical security, network connectivity, data redundancy, and power grid resiliency. Most often, larger providers of cloud infrastructure actually provide a network of tightly interconnected data centers, such that you’re not just recruiting a soldier — you’re drafting an entire army. 

As such, when choosing a cloud provider, understanding the underlying data centers in use is as important as understanding the service providers themselves. That said, what are some of the questions that you should ask your provider when selecting a data center? 

Is the provider hosting out of a single data center or does the provider have data center redundancy?

Geo-diverse data centers are of great importance when it comes to the overall risk of downtime. Diversely located data centers provide inherent redundancy, which is especially beneficial for backup and disaster recovery.

But what defines diverse? One important consideration relates to the locations of data centers relative to America’s national power grid infrastructure. Look for a provider that will store your primary site and disaster recovery site on separate power grids.

This will buffer you from the potential of an outage at one of the individual grid locations. Think of the continental divide: on either side of the divide, water flows in one of two directions. When it comes to national power grids, support comes from different hubs. Look for a provider with redundant locations on the other side of the divide to protect you in the event of a major power outage.
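The value of independent grids can be put in numbers. Assuming site failures are truly independent, which is exactly what separate power grids are meant to buy you, the probability that every site is down at once is the product of the individual unavailabilities (the availability figures below are illustrative):

```python
def combined_unavailability(site_availabilities):
    """Probability that every site is down simultaneously.

    Assumes independent failures across sites; correlated outages
    (e.g. two sites on one power grid) break this assumption.
    """
    p_all_down = 1.0
    for a in site_availabilities:
        p_all_down *= (1.0 - a)
    return p_all_down

# One 99.9%-available site is down roughly 8.8 hours per year; two
# independent 99.9% sites are both down with probability ~0.000001.
```

That is why a primary/DR pair on separate grids is worth far more than two sites sharing the same grid hub.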

Are they based in a proprietary data center, colocated, or leveraging the state-of-the-art technology of a leading data center?

A provider of hosting services may choose to store their data in one of many places. They may choose to leverage a world-class data center architecture like US Signal’s. Conversely, they may choose to collocate hardware that they already own in a data center. Or they may choose, like many managed services providers do, to leverage a proprietary data center, most often located in their home office. 

Colocation is a common first step into the cloud. If you already own hardware and would like to leverage a world-class data center, colocation is a logical option. But for cloud providers, owning hardware becomes a losing war of attrition: hardware doesn’t stay current, and unless it’s being procured in large quantities, it’s expensive. These costs often get passed along to the customer. Worse still, it encourages providers to skimp on redundancy, making their offerings less scalable and less robust in the event of a disaster.

Proprietary data centers add several layers of concern on top of the colocation option. In addition to the hardware ownership challenges, the provider is now responsible for all of the infrastructure duties that come with data center administration, such as redundant power, cooling, physical security, and network connectivity.

Moreover, proprietary data centers often lack the geo-diversity that comes with a larger provider. Beyond infrastructure, security is a monumental responsibility for a data center provider, and many smaller providers struggle to keep up with evolving threats. In fact, Estes recently onboarded a customer who came to us after their Managed Service Provider’s proprietary data center was hacked and ransomed.

Is the cloud provider hosting out of a public cloud data center? 

Public cloud environments operate in multi-tenant configurations where customers contend with one another for resources. Resource contention means that when one customer’s resource consumption spikes, the performance experienced by the other customers in the shared tenant will likely suffer. Moreover, many multi-tenant environments lack the firewall isolation present in private cloud infrastructures, which increases security concerns. Isolated environments are generally safer environments. 

Is the cloud provider proactively compliant?

Compliance is more than adherence to accounting standards. It is a means of guaranteeing that your provider performs the due diligence needed to ensure its business practices do not create vulnerabilities that could compromise its security and reliability assertions. What compliance and auditing standards does your cloud provider adhere to?

Is your cloud provider compliant according to their own hardware vendor’s standards?

Hardware providers, such as Cisco, offer auditing services to ensure their hardware is being reliably deployed. Ensure that your provider adheres to its vendors’ standards. How about penetration testing? Is your provider performing external penetration testing to ensure PCI security compliance? In terms of industry-standard compliance frameworks, such as HIPAA, PCI DSS, and SOC 1 and SOC 2, ensure that your provider is being routinely audited. Leveraging industry standards through compliance best practices goes a long way toward ensuring your provider is not letting its guard down.

What kind of campus connectivity is offered between your data centers and the outside world?

Low national latency is of the utmost importance from a customer perspective. Efficient data transfer between the data centers themselves, and from a given data center to the outside world, is fundamental to a cloud customer. Transactional efficiency is achieved in multiple ways.

For a network to be efficient, data must take as few “hops” as possible from one network to another. This is best achieved through tight partnerships between the data center and the national and regional ISPs that service individual organizations.

Within the data center network, an efficient infrastructure is essential. US Signal, for instance, has a 14,000-mile fiber network backbone connecting its data centers and linking them to regional transfer stations. This allows US Signal to support 3 ms latency between its 9 data centers and to physically connect with over 90 national ISPs, resulting in extremely low national latency.

What kinds of backup and disaster recovery solutions can be bundled with your cloud solutions?

Fundamental to a cloud deployment is the ability to provide redundancy in the event of a disaster. Disaster recovery is necessary to sustain an environment, whether on premise or in the cloud. But a disaster recovery solution must adhere to rigorous standards of its own if it is to be effective. Physical separation between the primary and secondary site is one such baseline need. Additionally, the disaster recovery solution needs to be sufficiently air-gapped to hit your desired RPO and RTO targets while avoiding cross-contamination between platforms in the event of hacking, viruses, or ransomware.

What kinds of uptime and reliability guarantees are offered by your data center?

All of the above aspects of a data center architecture should ultimately result in greater uptime for the cloud consumer. The major public data center providers are notorious for significant outages, and this has deleterious effects on customers of these services. Similarly, smaller providers may lack the infrastructure that can support rigorous uptime standards. When choosing a provider, make sure to understand the resiliency and reliable uptime of the supporting platform. EstesGroup can offer a 100% uptime SLA when hosted in our cloud with recovery times not achievable by the public cloud providers.

Uptime has planned and unplanned components that must both be considered. Many larger cloud providers do not give advance warning when instances will be shut down for upgrades, which can be extremely disruptive for consumers and result in a loss of control that conflicts with daily business initiatives. Ensure that planned downtime is communicated and understood before it happens.

How scalable is the overall platform?

Scalability has to do with flexibility and speed: how flexibly can the resources of an individual virtual machine (VM) be tweaked, and how quickly can those changes be made? Ideally, your cloud provider offers dynamic resource pool provisioning, which allows computing resources to be allocated when and where they are needed.

Some provider environments support “auto-scaling,” which can dynamically create and terminate instances, but may not allow dynamic resource changes to an existing instance. In those cases, if a customer wishes to augment an instance’s resources, it must be terminated and rebuilt using one of the provider’s predefined instance options. This can be problematic. Additionally, provisioning, whether to a new VM or an existing one, should be quick and not require a long lead time. Ensure that your cloud provider specifies the elapsed time required to provision and re-provision resources.

What are the data movement costs?

The costs associated with the movement of data can significantly impact your total cloud costs. These are normally applied as a toll that accumulates based on the amount of data moved over a given period, so these costs can be unpredictable. But what kinds of data movements occur?

  • Data ingress: data moving into the storage location, as it is being uploaded.
  • Data egress: data moving out of the storage location, as it is being downloaded.

Data centers rarely charge for ingress movement — they like the movement of data into their network. But many will charge for data egress. This means that if you want your data back, they may charge you for it.

Sometimes these fees even occur when data is moving within the provider’s network, between regions and instances. If you’re looking for a cloud provider, check the fine print to determine whether egress fees are applied, and estimate your data movement, to understand your total cost. EstesGroup gives you symmetrical internet data transfer with no egress charges, so your data movement does not result in additional charges. This means that your cloud costs are predictable.
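A quick back-of-the-envelope sketch of how egress tolls add up. The $0.09/GB rate used in the example is a commonly cited public-cloud ballpark, not any specific provider’s published price:

```python
def egress_cost(egress_gb: float, rate_per_gb: float) -> float:
    """Toll for moving data out of a metered cloud over a billing period.

    The rate is whatever your provider's fine print specifies; some
    providers (per the text above) charge nothing for egress at all.
    """
    return egress_gb * rate_per_gb

# Illustrative: restoring a 2 TB backup out of a metered cloud at a
# hypothetical $0.09/GB costs 2048 * 0.09, i.e. roughly $184, just to
# get your own data back.
```

Running this against your actual monthly transfer volumes is the fastest way to compare a metered provider against a flat-rate one.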

Does the cloud provider offer robust support?

Downtime can come from many situations. Your ISP could experience an outage, and you may need to fail over to a secondary provider. You may encounter an email phishing scam resulting in a local malware attack, or an outage due to a regional power grid issue. In these extenuating circumstances, you may find yourself needing to contact your cloud provider in a hurry.

As such, you’ll want a provider that offers robust pre-sales and post-sales support that is available 24/7/365. Many providers offer high-level support only if you subscribe to an additional support plan, which is an additional monthly cost. Wait times are also an issue — you may have a support plan, but the support may be slow and cumbersome. Look for a cloud provider that will guarantee an engineer in less than 60 seconds, 24/7/365.

Are you ready for a tour of one of the best data centers in the world? Meet with the EstesCloud team to get the right cloud strategy for your business.

Doctor Who Regeneration for Digital Transformation


Regeneration as a Metaphor for Digital Transformation

As a Canadian living in the American diaspora, I’ve had fun, at times, playing with my adopted country’s misconceptions of my homeland. I once convinced a room full of Texans that I had a pet wolf back home, à la Jon Snow, and that I culled dinner from the nearby caribou herds with a hand spear. Easy pickings, they were — the Texans, not the caribou.

But as a Canadian, I’ve also fielded my share of awkward questions, most often in relation to my country of origin and its relationship with its ancestral United Kingdom. To summarize: no — we don’t send tax dollars to the queen anymore. And no — I couldn’t give a rip about Harry and Meghan. But when it comes to contextualizing Canada’s relationship with the UK, I often find myself quoting Robert Frost, who was himself quoting an Englishman, when he said, “Canada ripened off the tree — you fell off green!”

Digital Transformation ERP System Upgrades

Not to get too mired in post-modern discussions on post-colonialism, I will admit that I’ve long held onto my commonwealth membership card over the years, pulling it out whenever it was useful. One such case was the matter of Doctor Who. As part of my cultural inheritance, I was rather fond of that man of manners and madness. As a child, I remember wanting a characteristic Doctor Who scarf for my birthday almost as much as that red Michael Jackson leather jacket that was also popular at the time — the one with all the zippers… ah… the 80s…

ERP System Time Travelers

So when I heard that the latest rendition of “The Doctor” was on the precipice of a regeneration into a new incarnation, it seemed fitting that my mind would wander into the dimension of digital transformation, and pluck a few parallels where they hung out in front of me. For all you time-travelers out there, the EstesGroup has helped countless companies over the years transition ERP systems that were 40+ years old — systems that go back to the Tom Baker era, if anyone is keeping track. For such companies, the shift from a character-based system to a contemporary ERP is enough to tear a hole in a company’s fabric of time. But what does that mean for a company facing such a change?

System Regeneration

Digital transformation is like a regeneration in the Doctor Who series. A new ERP system is a new incarnation of the Doctor: it comes into being, replacing its predecessor. It goes on adventures, solves problems, and takes its companion company to unexpected places. And in so doing, it amasses monumental amounts of experience and ingenuity, ultimately encapsulating the worldview of its time in its rows and its columns.

The worldviews themselves amount to the business requirements of the organization as they relate to the system in question. Worldviews are not fixed in time; they evolve gradually as the system is further modified, fine-tuned, reconfigured, and integrated with other systems. While this worldview continually changes, the changes are rarely as abrupt as a new body fitting an old suit.

A migration to a new ERP system, on the other hand, amounts to a much more radical shift in worldviews. The challenges really have to do with the wisdom and knowledge that is bundled up inside the legacy system, and with finding a way to translate that information into the new ERP system without compromising the integrity of the new system.

Don’t blink — it’s no easy task. In this context, the question you must ask yourself relates to how you approach a regeneration, knowing that it must happen. This might be a good time to lean on the good Doctor for assistance. Fortunately, there are several of them from which to choose:

You might approach the needed changes in the spirit of the Tenth Doctor and simply exclaim “I don’t want to go!” That is, you can fight the new system and cling to the old, as it slips away, like breath on a mirror.

Or you might approach an impending regeneration in the spirit of the Eleventh Doctor, understanding that “times change and so must I.” That is, you can get ahead of the transition and maximize the time you have, to remember as much of the legacy system as possible, such that it is not forgotten in the new system. 

The truth is, regardless of your reaction, some form of digital transformation is inevitable. Any moment now, he’s a’ comin’.

I’ve had many customers migrate simply because the current state was no longer tenable: ancient hardware, out-of-date operating systems, applications lacking the faculties to keep up with the current needs of the business, much less lead it into the future.

I’ve also seen customers delay a regeneration until the 11th hour, or a minute before midnight, and thus be dragged into a transformation without preparation. When it comes to transformation, preparation is key. Good preparation allows you to understand the business requirements that underlie your legacy system. This gives you a better chance of incorporating those requirements into your new system, without trying to forcibly alter the new system to mirror the old.

In working with system implementations, one comes to understand that over the course of a company’s existence, systems change. And that’s OK; that’s good. You’ve got to keep moving, so long as your system remembers all the systems that it used to be.

We all wish that our digital transformations would have an orchestral accompaniment as the universe sings our legacy systems to their sleep. The truth is, you have to provide the soundtrack. And that soundtrack is a manifestation of the attitude you bring into your system’s story. The song of your legacy system is ending, but the story of your organization never ends — as long as time passes really slowly, in the right order, and the next season does not get cancelled.

Are you seeking an ERP system or technology update?

Talk to our consultants now to begin a conversation that will make your system sing. Get help with business processes, ERP implementation, digital transformation initiatives, and digital transformation strategy. Ready for digital transformation, ’22 style? Go cloud, and get ERP business consulting experts for time-consuming hardware and software upgrades. Create the ultimate user and customer experience with new cloud computing platforms, without losing historical data. Meet customer expectations by combining a new version of your ERP solution with cutting-edge technology and optimized control over both the data migration process and the migrated data. Hoping to use a newer version of your software to recover from the Covid-19 pandemic? Use cloud hosting technology to compete with the best of digital businesses, incorporating third-party integrations easily to maximize machine learning, artificial intelligence, and other cloud-based digital transformation services.

Putting Your Software Testing Strategy to the Test


Testing should consume more time than any other process in a software implementation. Why test? You selected this software, and of course it should process transactions, shouldn’t it? Start testing, and some surprises will be exposed.

Software Testing ERP Implementation

Testing basics, testing methods

To begin, you’ll need a testing team and a test suite. Form small teams of people from each discipline. The team leader will be from your implementation team and the remaining people will be on loan from the various functional groups. Select those people with care. They will become your “super user” core of trained people who will help others in their groups use the new software.

Pick any single-step transaction. Accounting might try a simple debit/credit journal entry. Customer service might enter a new sales order. Document the transaction: which general ledger account will you debit, which one gets the credit, and for how much? Which customer will place the order, what product will they buy, what is their purchase order number, and what is the order total?

Go to the transaction screen in the software and enter the transaction. Then enter the result in a log. If the transaction works as expected, record a green result. If the transaction completely fails, record a red result and note why it failed, or why you think it failed. Sometimes the result will be yellow: the transaction completed successfully, but you noticed something unexpected that should probably be corrected.
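The green/yellow/red log described above can be kept in a spreadsheet, but it can also be sketched as a small data structure. Here is a minimal Python illustration; the test IDs, account numbers, and notes are hypothetical examples, not real SYSPRO or ERP identifiers:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# The three outcomes mirror the colors described above.
GREEN, YELLOW, RED = "green", "yellow", "red"

@dataclass
class TestResult:
    test_id: str                 # e.g. "SO-001: enter new sales order" (hypothetical ID)
    status: str                  # green, yellow, or red
    notes: str = ""              # why it failed, or what caution was observed
    run_date: date = field(default_factory=date.today)

@dataclass
class TestLog:
    results: List[TestResult] = field(default_factory=list)

    def record(self, test_id: str, status: str, notes: str = "") -> None:
        self.results.append(TestResult(test_id, status, notes))

    def open_issues(self) -> List[TestResult]:
        # Anything not green still needs follow-up.
        return [r for r in self.results if r.status != GREEN]

log = TestLog()
log.record("SO-001: enter new sales order", GREEN)
log.record("GL-001: debit/credit journal entry", RED,
           "GL account missing from data load")
log.record("PO-001: receive purchase order", YELLOW,
           "received OK, but tax code defaulted unexpectedly")
```

A structure like this makes it easy to hand the data conversion team a filtered list of open red and yellow items after each testing round.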

Corrective actions

A failed test could point to a problem in the data loading. Maybe the general ledger account you wanted to debit was never loaded into your system. Try to figure out why, and ask the data conversion group to correct the situation. When they make the fix, run your test again; this time you might get a green result.

An unsuccessful test result could also come from a gap in your training. You thought you could enter that new sales order, but you need to read the instructions again.

There are many configuration settings in any system, and these will affect test results. Perhaps that sales order test failed because the customer you chose was limited to buying products in a certain line, and you chose a product that customer was not authorized to buy. The data team might have made an incorrect assumption, which can be corrected. Or their assumption might have been correct based on some other condition you were unaware of. Often more than one setting can be adjusted to yield the results your business needs. Keep the conversation going until a satisfactory result is found.

Test again and again

You performed a test today and gave it a green result. Tomorrow the same test might not be green. People from across your business are performing tests in their functional groups, and you will find that a change they requested to fix their test inadvertently affected yours. This is normal. Your business is complex, and the relationships within it are also complex. Work through these changes and find what works for your entire organization.

More complex testing

As the single transactions become successful, begin to expand the testing to a series of transactions. You received the purchase order; now, can you see the product added to your inventory, and can you then pay your supplier? Late-stage testing might run from receipt of a customer order through producing the order, shipping it, and collecting the payment.

Automated testing

Manual testing might not be the most cost-effective use of your technology staff’s time. Fortunately, AI-driven testing tools are now available at low cost. Software that can robotically reproduce tests is affordable and widely available. After the fifteenth time a group runs the same test, boredom sets in. The test robot never gets bored. You had nothing but green for those fifteen tests, but perhaps only on the 115th run does a failure appear, because someone made a change. The robot will keep testing day and night until you turn it off.
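The re-run-until-failure pattern is simple to sketch. This is not any particular vendor’s tool, just a minimal Python illustration of the idea, with a toy test that simulates a configuration change breaking the 115th run:

```python
def run_regression(test_fn, max_runs: int = 1000) -> dict:
    """Re-run the same test until it fails or max_runs is reached,
    reporting which run (if any) broke."""
    for run in range(1, max_runs + 1):
        try:
            test_fn()
        except AssertionError as exc:
            return {"passed": run - 1, "failed_on": run, "error": str(exc)}
    return {"passed": max_runs, "failed_on": None, "error": None}

# Toy test: passes 114 times, then a simulated config change breaks run 115.
calls = {"n": 0}

def flaky_config_test():
    calls["n"] += 1
    assert calls["n"] < 115, "a configuration change broke this test"

result = run_regression(flaky_config_test)
# result reports 114 passing runs before the failure on run 115.
```

Real test robots add screenshots, timing data, and scheduling on top of this loop, but the core pattern of tireless repetition and recording the first failing run is the same.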

Even setting up and monitoring automated testing tools can be time-consuming. Begin to formulate the best testing strategy for your business by fully assessing any system software in use.

There are many types of software performance assessments available to your business. EstesGroup’s IT experts are available for everything from basic operating system testing to full audits of your system. Our software testers and project managers can provide continuous testing services and external support when you need it: functional testing, exploratory testing, integration testing, unit testing, system testing, and more. Schedule a software assessment today to begin a conversation about how testing, checking, and retesting your software can help your business.

Ready to test your software in the cloud?

Attend an EstesGroup “Cloud Stories” webinar to learn about customer software journeys.

Click here (or on the video below if the presentation doesn’t automatically play) to watch a webinar on cloud options for ERP software.