
Protecting the integrity and security of data is central to any organization’s operational success. In a laboratory environment, improved data integrity and quality accelerate workflows, enhance analysis, and ensure that products and results are of the highest quality. It therefore makes sense for businesses to continually look for new ways to secure and improve the quality of their data.

The issue of data integrity in the lab isn’t new—but there is a general misconception that data integrity failures are always the result of fraud on the part of the scientist. In most cases, this assumption is wrong, and there are steps companies can take to reduce the risks of data manipulation or human error, as well as deliberate wrongdoing.

Below are some tips to help ensure your business is generating accurate, high-quality data that facilitates effective reporting, enabling you to gain deeper insights, save time, and get better products to market faster.

Move away from paper
With so many benefits to getting data right, and so many risks to getting data wrong, organizations should be doing everything within their power to ensure data is as good as it can possibly be—and yet, many are still using cumbersome, error-prone, paper-based recording techniques. It just doesn’t make sense, especially when you consider the sheer volume of data being created on a daily basis. So, what’s the solution?

An obvious, but effective, first step for any organization wanting to ensure the integrity and quality of their data should be to remove paper-based records from the research and development process. Mistakes are made. That’s never going to change. But an electronic recording system for your data means those mistakes are less likely to happen and, if they do occur, less likely to have the same kind of impact on the ultimate accuracy of your data.

Automate where possible
As already noted, moving away from paper will reduce some errors in your data, but it won’t remove them completely. To further reduce human error, organizations should automate as much as possible, especially tasks such as data entry and transferring data between systems.

Automation saves time, improves outcomes and increases throughput, all while removing opportunities for unnecessary faults to creep into your data. Electronic systems can also give users clear visual cues when a step has been missed or something is out of place or just plain wrong.
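
To make that concrete, here is a minimal Python sketch of that kind of hand-off, assuming instrument results land as CSV exports in a folder; the folder, column names (sample_id, measurement) and database schema are invented for illustration. Rather than a scientist re-keying values, a script moves them into the results store and flags anything it can’t parse for review:

```python
import csv
import sqlite3
from pathlib import Path

# Hypothetical locations: substitute your instrument's export folder
# and your own results database.
EXPORT_DIR = Path("instrument_exports")
DB_PATH = "lab_results.db"

def ingest(db: sqlite3.Connection, export_file: Path) -> int:
    """Copy instrument results into the database, flagging rather
    than silently skipping anything that fails to parse."""
    inserted = 0
    with export_file.open(newline="") as fh:
        for row in csv.DictReader(fh):
            try:
                value = float(row["measurement"])  # fails fast on typos
            except (KeyError, ValueError):
                print(f"Flagged for review: {row}")
                continue
            db.execute(
                "INSERT INTO results (sample_id, measurement) VALUES (?, ?)",
                (row["sample_id"], value),
            )
            inserted += 1
    db.commit()
    return inserted

if __name__ == "__main__":
    con = sqlite3.connect(DB_PATH)
    con.execute(
        "CREATE TABLE IF NOT EXISTS results (sample_id TEXT, measurement REAL)"
    )
    if EXPORT_DIR.exists():
        for f in sorted(EXPORT_DIR.glob("*.csv")):
            print(f"{f.name}: {ingest(con, f)} rows ingested")
```

The particular tools matter less than the behavior: a suspect value is flagged for a human, never silently re-typed.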

By using validated templates, businesses can provide secure, transparent links between raw data files and final calculations for ease of audit, and they can consistently apply pre-established business rules that have been set and validated within the system. And by ensuring your system supports quality-by-design and audit-by-exception guidelines, you can review and verify information in real time, so that your data can be accurately reconstructed months, or even years, later.
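
As a rough illustration of such a template, the Python sketch below applies one pre-established acceptance rule and records a checksum of the raw data file alongside the final calculation; the rule, file name and field names are placeholders, not a real system’s schema. An auditor can later re-hash the raw file and confirm it still matches the stored result:

```python
import hashlib
import json
import statistics
from datetime import datetime, timezone
from pathlib import Path

# Illustrative acceptance rule; real criteria would be set and
# validated within your own system.
MAX_RSD_PERCENT = 2.0

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def apply_template(raw_file: Path, replicates: list[float]) -> dict:
    """Compute the final result while keeping a verifiable link
    back to the raw data file via its checksum."""
    mean = statistics.mean(replicates)
    rsd = 100 * statistics.stdev(replicates) / mean  # relative std dev
    return {
        "raw_file": str(raw_file),
        "raw_sha256": sha256(raw_file),  # the audit link to raw data
        "mean": round(mean, 4),
        "rsd_percent": round(rsd, 2),
        "passes_rule": rsd <= MAX_RSD_PERCENT,
        "computed_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    demo = Path("run_001.csv")  # stand-in for a real raw export
    demo.write_text("9.98,10.02,10.01,9.99")
    values = [float(v) for v in demo.read_text().split(",")]
    print(json.dumps(apply_template(demo, values), indent=2))
```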

Track and audit everything
For organizations to be truly confident in the quality and integrity of their data, they need to use a system that records and tracks all information relevant to the data, to ensure both the detection and prevention of data manipulation.

A system that provides audit logs covering all data entry, calculations and reporting will help mitigate the risk of data manipulation. However, doing this only at the record level is not enough: any data management platform you use needs to monitor and track changes at the level of the individual data point.
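
Here is a minimal Python sketch of what point-level tracking means in practice, using an in-memory SQLite audit table (the record and field names are invented): every change to an individual value is appended to the trail with who, when and why, and nothing is ever overwritten:

```python
import sqlite3
from datetime import datetime, timezone

class AuditedStore:
    """Toy data store that never overwrites: every change to an
    individual data point is appended to an audit table."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            """CREATE TABLE IF NOT EXISTS audit (
                   record_id TEXT, field TEXT, old_value TEXT,
                   new_value TEXT, changed_by TEXT, reason TEXT,
                   changed_at TEXT)"""
        )
        self.current: dict[tuple[str, str], str] = {}

    def set(self, record_id, field, value, user, reason):
        old = self.current.get((record_id, field))
        self.current[(record_id, field)] = value
        self.db.execute(
            "INSERT INTO audit VALUES (?, ?, ?, ?, ?, ?, ?)",
            (record_id, field, old, value, user, reason,
             datetime.now(timezone.utc).isoformat()),
        )
        self.db.commit()

    def history(self, record_id, field):
        return self.db.execute(
            "SELECT old_value, new_value, changed_by, reason, changed_at "
            "FROM audit WHERE record_id = ? AND field = ? ORDER BY changed_at",
            (record_id, field),
        ).fetchall()

if __name__ == "__main__":
    store = AuditedStore()
    store.set("EXP-42", "ph", "7.2", "alice", "initial entry")
    store.set("EXP-42", "ph", "7.4", "alice", "corrected transcription error")
    for change in store.history("EXP-42", "ph"):
        print(change)
```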

Additionally, a well-structured security model will ensure that only those with the correct privileges are able to edit the data in any given experiment. Once this is in place, algorithms can be used to identify abnormal behavior and flag it to the system owner. Creating structured workflows and templates that capture all data correctly and automate calculations will also help protect electronic records. An electronic system can ensure that any data changes or updates are properly recorded, justified and electronically signed—something that can easily get lost in a traditional paper trail—placing a further barrier in the way of anyone wishing to manipulate data.
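
Here is a deliberately simple Python sketch of both ideas, assuming audit rows like those in the previous example; the roles, thresholds and “normal lab hours” are illustrative stand-ins for whatever your own security model and behavioral baseline would define:

```python
from collections import Counter
from datetime import datetime

# Illustrative privilege model; a real system would pull roles and
# policies from its own access-control layer.
ROLE_CAN_EDIT = {"analyst": True, "reviewer": False, "admin": True}

def can_edit(user_role: str) -> bool:
    return ROLE_CAN_EDIT.get(user_role, False)

def flag_abnormal_edits(audit_rows, burst_threshold=20):
    """Crude anomaly check over (user, ISO timestamp) audit rows:
    flag out-of-hours edits and unusually large daily edit counts."""
    flags = []
    edits_per_user_day = Counter()
    for user, timestamp in audit_rows:
        ts = datetime.fromisoformat(timestamp)
        edits_per_user_day[(user, ts.date())] += 1
        if ts.hour < 6 or ts.hour > 22:  # outside assumed lab hours
            flags.append((user, timestamp, "out-of-hours edit"))
    for (user, day), count in edits_per_user_day.items():
        if count > burst_threshold:
            flags.append((user, str(day), f"{count} edits in one day"))
    return flags

rows = [("alice", "2019-06-03T09:15:00"), ("bob", "2019-06-03T02:40:00")]
print(flag_abnormal_edits(rows))  # bob is flagged for the 02:40 edit
```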

Use a system that can be validated where required
Using the right system, procedures and behaviors can help firms prevent the manipulation of data. However, it is incredibly difficult to guarantee in all cases that data has not been manipulated. Organizations need to ensure that policies and procedures are in place at all levels for the storage and management of data. Any work completed outside a data management system will always be open to manipulation, so it’s important to ensure your system is structured and validated in the correct way.

Validation best practices are set out at government or local-authority level. Make sure any system or platform you use can streamline certification by regulatory bodies, providing a strong validation package and good audit trails so that audit questions can be answered and, more importantly, resolved promptly.

Implement a form of master data management
With the number of systems used in the modern laboratory, organizations should consider implementing some form of master data management (MDM). As the number and diversity of organizational departments, worker roles and computer applications have increased, so too has the need for MDM.

MDM can ensure that all the different siloed systems in use share clearly defined naming conventions and linked records, providing a common point of reference, removing ambiguity from the process and helping to strengthen the integrity of your data.
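
A toy Python example of the idea, with invented identifiers throughout: every alias used by a siloed system resolves to a single master record, so two departments can’t silently refer to the same material under different names:

```python
# Invented identifiers throughout. Every system-specific alias
# resolves to one canonical master record.
MASTER = {
    "MAT-0001": {"name": "Sodium chloride", "cas": "7647-14-5"},
}
ALIASES = {
    "sodium chloride": "MAT-0001",
    "nacl": "MAT-0001",
    "lims:na-cl-01": "MAT-0001",  # legacy code from an older system
}

def resolve(term: str) -> dict:
    key = ALIASES.get(term.strip().lower())
    if key is None:
        raise KeyError(f"'{term}' has no master record; register it first")
    return {"master_id": key, **MASTER[key]}

print(resolve("NaCl"))  # every spelling maps to MAT-0001
```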

Introduce a cloud-based system
In an increasingly global world, it’s common for teams in different locations and time zones to work together. Naturally, projects spanning countries and regions can cause all kinds of logistical problems—with various parties needing to coordinate activities and transfer data and reports. Combined with the rise of collaborations with external partners, such as contract research organizations (CROs), joint ventures, academic partnerships and consortiums, it’s easy to see how a mix of disparate data files and formats could ultimately compromise data quality.

If you’re using a traditional on-premises solution to manage your data, there’s likely to be someone within your organization tasked with processing the documents and ensuring they are distributed to each relevant corporate repository. This is a time-consuming process that introduces more opportunities for data loss and corruption. It’s also a lot of risk and pressure to place on one individual or team, no matter how talented they are.

By using the right cloud-based platform, collaborators—both internal and external—can enter their data directly into your system, standardizing the formats used, eliminating disparate data files and reducing the chances of errors creeping into the work. No more misunderstandings and no more ambiguity: all data can be recorded and viewed as originally intended.
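
As a sketch of what enforcing a standard format at the point of entry can look like, here is a small Python example; the record fields and required units are made up for illustration. A submission in the wrong shape or unit is rejected immediately rather than discovered later:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shared format: the quantities and required units are
# made up, but every collaborator submits the same structure.
REQUIRED_UNITS = {"concentration": "mg/mL", "temperature": "degC"}

@dataclass(frozen=True)
class AssayResult:
    sample_id: str
    quantity: str  # e.g. "concentration"
    value: float
    unit: str
    measured_on: date

    def __post_init__(self):
        expected = REQUIRED_UNITS.get(self.quantity)
        if expected is None:
            raise ValueError(f"unknown quantity '{self.quantity}'")
        if self.unit != expected:
            raise ValueError(
                f"{self.quantity} must be reported in {expected}, "
                f"got {self.unit}"
            )

# Accepted at entry; a submission in, say, g/L would raise instead
# of surfacing as a silent inconsistency months later.
print(AssayResult("S-17", "concentration", 4.2, "mg/mL", date(2019, 5, 3)))
```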

Maintain data quality
In the lab, data is at the heart of what we do. Every test, experiment or study relies on the quality and integrity of the data we record—so it’s one area in which laboratories should be investing attention, time and money.

With organizations creating more data than ever before—it’s estimated that humanity will produce 33 zettabytes of data this year alone, just nine zettabytes less than one estimate of the storage needed to record every word ever spoken in human history—it’s clear we need to take better care of our data and ensure the integrity of the data we create. To do that, organizations need to invest in forward-looking solutions.

It makes sense to consider implementing a modern technological solution to record your laboratory experiments. You’ll eliminate the concerns about lost IP, poor communication and data errors that come with traditional paper-based record keeping, and free research and development teams to focus on their true strengths and passions.

We all know the costs of getting data “wrong”: investigations, re-works, contract terminations and eye-watering numbers. Is it really worth the risk?
