Category Archives: EDC

Becoming a Data Scientist {EDC Developer + Statistical Expert + Data Manager}

At an early age, I was drawn to computers. I did well in math, I loved science, and I started enjoying programming when my stepfather gave me a small computer to program games on. That was my first real experience with programming. I think the programming language was BASIC. The computer had some built-in games and basic math problems, but you could also play around with BASIC code and create your own.

Then I went to a technical school and on to college, where I took basic classes in information systems/technology and courses in telecommunications management.  Most of the courses were around IP, PBX and network administration.  As part of that curriculum, I took a basic programming course and VB.NET. I really liked VB.NET since it has a visual interface (drag and drop to create the interface) and clicking a button creates an event, so I enjoyed the design aspect of it (I am known to be very creative). I then started to design for people (website design and development, small databases), which was a lot better than working in telecommunications. I thought VB was a great first language to learn. Later I took a Microsoft Access database development class where we learned (relational) database design, and I found out I was really good at that.

Before I graduated, I was already working for a well-known pharmaceutical company as a database analyst within their data management and biometrics team. They really liked what I did with their clinical operations data (investigator data – the kind we need CTMS systems for nowadays). So this was confirmation that databases were my passion. I love designing, managing and maintaining them.

During my early years in this industry, I spent a lot of time writing SQL code and SAS programs.  We pulled the messy data (back in those years we used the Clintrial Oracle back-end system), and the work was very problem-solving oriented: a business question was asked, and we would go into that messy database with either SQL or SAS and figure out the answer. I really enjoyed that.

In recent years, I take data from an {EDC} system, write scripts to summarize the data for reporting, and load it into a data warehouse. I then use a product called ‘IBM Cognos’, which points to the data warehouse, to build those reports, and I work with users across different departments (a lot of different audiences for the data) and a lot of interesting data. I have also spent time using APIs to extract data via Web Services (usually in ODM-XML format) and generate useful reports in SAS or Excel XML.
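As a rough illustration of that extraction step, here is a minimal Python sketch that pulls a CDISC ODM 1.3 export over a Web Services call and counts the records per form. The URL, study name and credentials are placeholders; the exact endpoint and authentication depend on the EDC vendor's API.

```python
# Minimal sketch: fetch an ODM-XML export and summarize records per form.
# The endpoint and credentials below are hypothetical placeholders.
import requests
import xml.etree.ElementTree as ET
from collections import Counter

ODM_NS = "{http://www.cdisc.org/ns/odm/v1.3}"  # CDISC ODM 1.3 namespace

def summarize_forms(url: str, user: str, password: str) -> Counter:
    """Count clinical data records per FormOID in an ODM export."""
    resp = requests.get(url, auth=(user, password), timeout=60)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    counts = Counter()
    for form in root.iter(ODM_NS + "FormData"):
        counts[form.get("FormOID")] += 1
    return counts

if __name__ == "__main__":
    forms = summarize_forms("https://edc.example.com/ws/odm/studies/DEMO",
                            "apiuser", "secret")
    for form_oid, n in forms.most_common():
        print(f"{form_oid}: {n} records")
```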

People think that being a data analyst is just sitting in front of a computer screen and crunching data. In fact, a lot of it is design-oriented, people-oriented, and problem-solving. So when people ask a question, I get to dive into the data and figure out the answer.

Next step is to get into predictive analytics and do more data mining and data forecasting.

Are you still excited about becoming a data scientist?

You can start by reading my blog about programming languages you should learn here!

Other tools and programming languages you should learn: Anaconda, R programming, Python, business intelligence software like Tableau, big data analytics with Hadoop, and creating new representations of the data using HTML and CSS (for example, when you use APIs and XML to extract data from third-party sources).

Anayansi, MPM, is an EDC Developer Consultant and clinical programmer for the pharmaceutical, biotech, and medical device industries with more than 18 years of experience.

Available for short-term contracts or ad-hoc requests.  See my contact page for more details or contact me.

Fair Use Notice: Images/logos/graphics on this page contain some copyrighted material whose use has not been authorized by the copyright owners. We believe that this not-for-profit, educational, and/or criticism or commentary use on the Web constitutes a fair use of the copyrighted material (as provided for in section 107 of the US Copyright Law).


CTCAE: Common Terminology Criteria for Adverse Events

The National Cancer Institute issued the Common Terminology Criteria for Adverse Events (CTCAE) version 5.0 on November 27, 2017.

So what is CTCAE and what is it used for?

NCI CTCAE is a descriptive terminology that can be used for the reporting of adverse events (AEs). A grading (severity) scale is provided for each term.

The CTCAE reference gives the oncology community a standard classification and severity grading scale for adverse events in cancer therapy clinical trials.

The SOC (System Organ Class, or Organ Class) is the highest level of the MedDRA dictionary hierarchy. It is identified by anatomical or physiological classification, etiology, or purpose (e.g. the SOC Investigations for laboratory results). The CTCAE terms are grouped according to the MedDRA primary SOCs. Within each SOC, the terms are listed and accompanied by a description of severity (grade).






An adverse event (AE) is any unfavourable and unintended sign, symptom or disease (including abnormal laboratory findings) temporally associated with the use of a treatment or procedure, whether or not considered related to that treatment or procedure. An AE is a unique term representing a specific event used for medical reporting and scientific analyses. Each CTCAE term is a MedDRA LLT (Lowest Level Term, the lowest level of the hierarchy).

Grades refer to the severity of AEs. The CTCAE defines grades 1 through 5, with a unique clinical description of severity for each term, based on the following general guidelines:

Grade 1: Mild; asymptomatic or mild symptoms; diagnosis on clinical examination only; intervention not indicated.

Grade 2: Moderate; minimal, local or non-invasive intervention indicated; limiting instrumental activities of daily living.

Grade 3: Severe or medically significant but not immediately life-threatening; hospitalization or prolongation of hospitalization indicated; disabling; limiting self-care activities of daily living.

Grade 4: Life-threatening consequences; urgent intervention indicated.

Grade 5: Death related to the AE. (Grade 5 is not appropriate for some AEs and is therefore not always an option.)
An example entry from the CTCAE v5.0 reference (columns: MedDRA code, CTCAE v5.0 Term, Grades 1–5, Definition):

MedDRA code: 10007515
CTCAE v5.0 Term: Cardiac arrest
Grade 4: Life-threatening consequences; urgent intervention indicated
Grade 5: Death
Definition: A disorder characterized by cessation of the pumping function of the heart.
(Grades 1–3 are not defined for this term.)
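For clinical programmers, the general grading scale above can be handy as a simple lookup when reviewing graded AE listings. A minimal sketch, with descriptions paraphrasing the general guidelines (a real check should use the per-term definitions from the CTCAE v5.0 file):

```python
# Minimal sketch: general CTCAE grade lookup (paraphrased descriptions, illustrative only).
CTCAE_GENERAL_GRADES = {
    1: "Mild; asymptomatic or mild symptoms; intervention not indicated",
    2: "Moderate; minimal, local or non-invasive intervention indicated",
    3: "Severe or medically significant but not immediately life-threatening",
    4: "Life-threatening consequences; urgent intervention indicated",
    5: "Death related to AE",
}

def describe_grade(grade: int) -> str:
    return CTCAE_GENERAL_GRADES.get(grade, "Unknown grade")

print(describe_grade(3))  # Severe or medically significant but not immediately life-threatening
```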

CTCAE remains the formal standard for AE reporting, with grading dependent upon the clinician's judgement of medical significance.

A copy is located here: CTCAE version 5.0.

Sources:

https://ctep.cancer.gov/protocolDevelopment/electronic_applications/docs/CTCAE_v5_Quick_Reference_8.5×11.pdf

Feature image: CTCAE-4 by Stefano Peruzzi (apple app)

Fair Use Notice: Images/logos/graphics on this page contain some copyrighted material whose use has not been authorized by the copyright owners. We believe that this not-for-profit, educational, and/or criticism or commentary use on the Web constitutes a fair use of the copyrighted material (as provided for in section 107 of the US Copyright Law).

Today is GDPR day! GDPR is in place! Are you ready for it?

According to the EU General Data Protection Regulation (GDPR), which comes into effect today, May 25th, 2018, most companies will need to inform you of their privacy policy for processing and protecting your personal information and your privacy.

The General Data Protection Regulation (GDPR) is already in place, but many companies are not yet ready - more precisely, only 45% of organizations said they had a structured plan to comply with it.

A recent survey also reveals that 54% of large organizations (with more than 5,000 employees) are better prepared to deal with GDPR; among smaller organizations, that figure drops to 37%. And only 24% of companies use external consulting to become compliant.

With this Regulation, individuals have the right to request that their personal data be erased or transferred to another organization. This raises questions as to what tools and processes companies will need to implement. For 48% of respondents, it is a challenge just to locate personal data in their own databases. In these cases, compliance with the GDPR rules will be an even more demanding task.

55% of organizations are not prepared for GDPR

For EU citizens and residents, this is a welcome law. But US citizens and residents will continue to suffer identity theft and data privacy violations at the hands of the same companies the EU is trying to fine and control under this law. The Googles, the Facebooks, the Twitters and most social media will be scrutinized heavily after this day.

Who does the GDPR affect?
The GDPR not only applies to organizations located within the EU but it will also apply to organizations located outside of the EU if they offer goods or services to, or monitor the behavior of, EU data subjects. It applies to all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location.

What are the penalties for non-compliance?
Organizations can be fined up to 4% of annual global turnover or €20 million (whichever is greater) for breaching GDPR. This is the maximum fine that can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core of Privacy by Design concepts. There is a tiered approach to fines: for example, a company can be fined 2% for not having its records in order (Article 28), not notifying the supervising authority and data subject about a breach, or not conducting an impact assessment. It is important to note that these rules apply to both controllers and processors, meaning ‘clouds’ will not be exempt from GDPR enforcement.

Source:

https://www.eugdpr.org/

https://ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en

Fair Use Notice: Images/logos/graphics on this page contain some copyrighted material whose use has not been authorized by the copyright owners. We believe that this not-for-profit, educational, and/or criticism or commentary use on the Web constitutes a fair use of the copyrighted material (as provided for in section 107 of the US Copyright Law).

Got Medrio? The Next Best EDC…

Medrio is a low-cost solution that offers easy mid-study changes and intuitive Phase I workflows.

Medrio

One of my favorite features of Medrio is the Skip logic functionality. So what is Skip logic?

Let’s demonstrate this feature by using the Demography form / Race field:

In many EDC systems that I am currently using or have used in the past, we have to create a separate field for each option and write a custom edit check to flag when data has been entered in the ‘Specify’ field. This scenario requests data in the ‘Specify’ field only when the OTHER race option is checked. With skip logic, if the user does not select OTHER (e.g. selects White, Black or Indian), the ‘Specify’ field is skipped and no data can be entered in it; when OTHER is selected, the required ‘Specify’ field is made visible and available (mandatory) for data entry.

Medrio

eCRF – DEMO – Medrio

DM form – Skip Logic

The screenshot above shows the query resulting from the skip logic configuration when ‘OTHER, specify’ is not completed. In other words, when a race other than ‘OTHER’ is checked, the specify field is skipped (not enterable). To make this work, and as a best practice, you will need to make the ‘OTHER’ field required during data entry.
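A minimal sketch of the behaviour described above, in Python rather than Medrio's actual configuration syntax (field and choice names are illustrative):

```python
# Minimal sketch: skip logic on the Demography Race field vs. a classic edit check.

def specify_field_state(race_checked: str) -> dict:
    """Skip logic: the 'Race, specify' field only opens up when OTHER is selected."""
    if race_checked == "OTHER":
        return {"visible": True, "enterable": True, "required": True}
    # Any other race value: the specify field is skipped (not enterable).
    return {"visible": False, "enterable": False, "required": False}

def classic_edit_check(race_checked: str, specify_value: str) -> list:
    """The older approach: allow entry anywhere and raise a query after the fact."""
    queries = []
    if race_checked == "OTHER" and not specify_value:
        queries.append("Race is OTHER but 'Specify' is blank. Please complete.")
    if race_checked != "OTHER" and specify_value:
        queries.append("'Specify' entered but OTHER not selected. Please verify.")
    return queries

print(specify_field_state("WHITE"))     # field skipped
print(specify_field_state("OTHER"))     # field required
print(classic_edit_check("OTHER", ""))  # query raised
```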

If you are looking for a study builder or clinical programmer to support your clinical trials and data management department, please use the contact form.

Source: medrio.com

Disclaimer: The EDC Developer blog is “one man’s opinion”. Anything that is said on this blog is either opinion, criticism, information or commentary. If you are making any type of investment or legal decision, it would be wise to contact or consult a professional before making that decision.

-FAIR USE-
“Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use.”

Hashgraph: Electronic Data Capture Future?

Hashgraph, a technology created by Leemon Baird, is probably going to replace blockchain technology.

Since the introduction of bitcoin, many thousands of blockchain-based cryptocurrencies have been created, with more appearing every single day. So what is different about hashgraph?

Well, it isn’t a blockchain. It is totally different; in fact, the way it works is a real mind-bender and a bit difficult to explain.

Instead of a block in a blockchain, hashgraph calls its packages of information “events”. Your computer takes a transaction you want to record – a payment, or anything else for that matter, such as an action in the eCRF form (e.g. SDV) – and puts it in an event. To transmit information quickly, hashgraph uses a technique that has been a gold standard in computer science for decades and is super fast: the ‘gossip protocol’. Your computer randomly tells another computer in the network about the event you have created, and that computer responds by telling your computer about any events it has heard about. Then that computer tells another computer about your event and the other events it has heard about, and the computer it is talking to responds with all the events it knows about. It is the most efficient way to spread information, and it spreads exponentially fast. The best part: each event also includes the time it was heard and who it was heard from, and the time they heard it and who they heard it from, and so on. This is called ‘gossip about gossip’, and it lets everyone know what everyone else knows, and exactly when they knew it, in just fractions of a second.

Another key feature is ‘virtual voting’. Voting is an old technology and a slow one, but with hashgraph there is no actual voting: because everyone already knows what everyone else knows, you can mathematically calculate with 100% certainty how they would vote, which allows hashgraph to come to consensus almost instantly. So instead of recording things in a block and adding it to the blockchain once every 10 minutes, hashgraph events are added to the system the moment they are created, so they do not carry 10 minutes’ worth of information. That means they are small and contain far less data, so they use very little bandwidth, are much easier to transmit, and use a minuscule amount of power, which makes hashgraph fast, fair and more secure than blockchain.
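To make the ‘gossip about gossip’ idea concrete, here is a minimal sketch (not Hedera's or anyone's actual implementation) of what a hashgraph-style event might carry: a payload, the hashes of two parent events, and a timestamp. All names are illustrative.

```python
# Minimal sketch of a hashgraph-style "event" carrying gossip about gossip.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class Event:
    creator: str                 # node that created the event
    payload: dict                # e.g. an eCRF action such as {"action": "SDV", "form": "DM"}
    self_parent: Optional[str]   # hash of this node's previous event
    other_parent: Optional[str]  # hash of the event just heard from the gossip partner
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        body = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

# Node A records an SDV action; node B's next event references it, so the graph itself
# records who heard what, from whom, and when.
a1 = Event("site-A", {"action": "SDV", "form": "DM", "subject": "1001"}, None, None)
b1 = Event("sponsor-B", {"action": "ack"}, None, a1.digest())
print(a1.digest()[:12], "->", b1.other_parent[:12])
```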

All events are time-stamped the moment they are woven into the system, so the record of whose event came first and whose came second is instant, and there is no such thing as soft forking or unconfirmed events.

It can also replace huge portions of the internet that are currently run by centralized servers by replacing them with the shared computing power of all of our own computers, iPads and cell phones.

It looks like hashgraph might have all the potential to fulfill the original hopes and dreams of a true Electronic Data Capture system (e.g. eCRF forms collected at the site, ePRO/eCOA data directly from subjects, external or local lab or ECG data from any lab, eSAEs, informed consents, and more). In other words, sites, sponsors, labs, regulators and all vendors working seamlessly with each other.

Imagine an investigator or research site completing an ‘event’ (e.g. enrolled or randomized) and the system automatically sending the payment to the site at the end of each event.
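A minimal sketch of that idea (purely hypothetical, not any real smart-contract platform or payment API): a rule that releases a site payment automatically whenever a qualifying milestone event is recorded. Milestone names and amounts are illustrative.

```python
# Minimal sketch: auto-pay a site when a milestone event is recorded on the ledger.
MILESTONE_FEES = {"ENROLLED": 500.00, "RANDOMIZED": 750.00}  # illustrative amounts

def on_event_recorded(event: dict, pay) -> None:
    """Call the supplied payment function when a qualifying milestone hits the ledger."""
    milestone = event.get("milestone")
    if milestone in MILESTONE_FEES:
        pay(site=event["site_id"],
            amount=MILESTONE_FEES[milestone],
            reference=f'{event["subject_id"]}/{milestone}')

# Example: a randomization event triggers an immediate payment instruction.
on_event_recorded(
    {"site_id": "105", "subject_id": "105-0007", "milestone": "RANDOMIZED"},
    pay=lambda site, amount, reference: print(f"Pay site {site}: ${amount:.2f} ({reference})"),
)
```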

With the power to decentralize and remove the middleman, and the speed at which technology is evolving, the future is looking bright.

One of the challenges of any new technology is explaining it to people and making a compelling case for it. One of the things that is most compelling about this technology is throughput: the speed.

You are probably thinking: but what about eSignatures? Or informed consents? Or regulations? There is another technology, ‘smart contracts’, that could lead to substantial improvements in compliance, cost-efficiency and accountability.

What is a smart contract? Smart contracts are contracts whose terms are recorded in a computer language instead of legal language. Smart contracts can be automatically executed by a computing system, such as a suitable distributed ledger system. The potential benefits of smart contracts include low contracting, enforcement, and compliance costs; consequently, it becomes economically viable to form contracts over numerous low-value transactions. The potential risks include a reliance on the computing system that executes the contract. [Distributed Ledger Technology: beyond block chain, UK Government Office for Science, 2016]

Smart contracts

Bitcoin technology uses a tremendous amount of energy to run the system, and as it scaled up it became slower and slower, to the point where it was no longer practical as a currency; it became basically a speculative vehicle in which you could make some gains in purchasing power. A transaction with bitcoin technology can take 4 hours for confirmation. Hashgraph, compared to Bitcoin, uses a negligible amount of power.

If Bitcoin were to replace the entire world monetary system and financial markets, it would use more power than the entire world produces. It’s completely unsustainable.

Hashgraph, smart contracts, distributed ledgers and similar technologies offer new ways to share information, reduce errors, and lower costs for all users. Perhaps a new Electronic Data Capture system for clinical research will emerge.

Source:

Hashgraph.com

The Crypto Revolution

-FAIR USE-
“Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use.”

Freelancer / Consultant / EDC Developer / Clinical Programmer

* Setting up a project in EDC (Oracle InForm, Medidata Rave, OpenClinica, OCRDC)
* Creation of electronic case report forms (eCRFs)
* Validation of programs, edit checks
* Write validation test scripts
* Execute validation test scripts
* Write custom functions
* Implement study build best practices
* Knowledge of the process of clinical trials and the CDISC data structure

 

Understanding Audit Trail Requirements in Electronic GxP Systems

Computerized systems are used throughout the life sciences industry to support various regulated activities, which in turn generate many types of electronic records.  These electronic records must be maintained according to regulatory requirements contained within FDA’s 21 CFR Part 11 for US jurisdictions and Eudralex Volume 4 Annex 11 for EU jurisdictions.  Therefore, we must ensure the GxP system which maintains the electronic record(s) is capable of meeting these regulatory requirements.

What to look for in Audit Trail?

  • Is the audit trail activated? Is there an SOP?
  • Is there a record of reviews? (Most companies trust the electronic system’s audit trail and generate an electronic/paper version of it without a full review.)
  • How do you prevent or detect any deletion or modification of audit trail data? Is staff trained?
  • Can the audit trail be filtered?

Can you prove data manipulation did not occur?

Persons must still comply with all applicable predicate rule requirements related to documentation of, for example, date (e.g. 58.130(e)), time, or sequencing of events, as well as any requirements for ensuring that changes to records do not obscure previous entries.

Consideration should be given, based on a risk assessment, to building into the system the creation of a record of all GMP-relevant changes and deletions (a system generated “audit trail”).

Audit trail content and why it is required:

  • Identification of the user making the entry – This is needed to ensure traceability.  This could be a user’s unique ID, however there should be a way of correlating this ID to the person.
  • Date and time stamp – This is a critical element in documenting a sequence of events and vital to establishing an electronic record’s trustworthiness and reliability.  It can also be an effective deterrent to records falsification.
  • Link to record – This is needed to ensure traceability.  This could be the record’s unique ID.
  • Original value – This is needed in order to have a complete history and to be able to reconstruct the sequence of events.
  • New value – This is needed in order to have a complete history and to be able to reconstruct the sequence of events.
  • Reason for change – This is only required if stipulated by the regulations pertaining to the audit-trailed record.  (See below.)
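As an illustration (not any vendor's actual schema), an audit trail entry carrying the content listed above might be modelled like this:

```python
# Minimal sketch: one audit trail entry written alongside every change to a data point.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AuditTrailEntry:
    user_id: str                   # identification of the user making the entry
    user_name: str                 # human-readable name correlated to the ID
    timestamp: datetime            # date and time stamp (UTC recommended)
    record_id: str                 # link to the record (e.g. subject/form/field key)
    original_value: Optional[str]  # value before the change (None on first entry)
    new_value: Optional[str]       # value after the change (None on deletion)
    reason_for_change: Optional[str] = None  # only where the regulations require it

entry = AuditTrailEntry(
    user_id="jdoe01", user_name="Jane Doe",
    timestamp=datetime.now(timezone.utc),
    record_id="1001/AE/AETERM/1",
    original_value="Headache", new_value="Migraine",
    reason_for_change="Source document correction",
)
print(entry)
```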

FDA / Regulators findings and complaints during Inspection of Audit Trail Data:

  • The audit user is sometimes hard to identify (e.g. “user123”); use, or map to, the full name of each user ID so that no additional mapping is required at inspection time (see the sketch after this list).
  • Field IDs or variable names are used instead of SAS labels or field labels; map field names to their respective field text (e.g. instead of displaying AETERM, use “Reported Term for the Adverse Event”).
  • Default values should be easily explained or meaningful (see the annotated CRF).
  • Limited access to audit trail files (many systems with different reporting or extraction tools; data is not fully integrated; too many files that cannot easily be combined).
  • No audit trail review process. Be prepared to update SOPs or current working practices to add review time for audit trails. It is expected that, at least every 90 days, qualified staff perform a review of the audit trail for their trials, with proper documentation, filing and signatures in place.
  • Avoid using Excel or CSV files. Auditors are now asking for SAS datasets of the audit trails, and they are being trained to generate their own output based on pre-defined parameters so they can summarize the data and produce graphs.
  • Formatting issues when exporting into Excel, for example: number and date fields are converted to text fields.
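A minimal sketch of the mapping idea from the first two findings above, turning a raw audit trail export into something inspection-friendly (the ID and label mappings are illustrative):

```python
# Minimal sketch: map user IDs to full names and variable names to CRF labels
# so the audit trail reads naturally during an inspection.
USER_MAP = {"user123": "Jane Doe", "user456": "John Smith"}
FIELD_LABELS = {"AETERM": "Reported Term for the Adverse Event",
                "AESTDTC": "Adverse Event Start Date"}

def humanize(audit_rows):
    """Yield audit rows with IDs replaced by readable names/labels."""
    for row in audit_rows:
        yield {**row,
               "user": USER_MAP.get(row["user"], row["user"]),
               "field": FIELD_LABELS.get(row["field"], row["field"])}

rows = [{"user": "user123", "field": "AETERM", "old": "Headache", "new": "Migraine"}]
print(list(humanize(rows)))
```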
Audit Trail Review

What data must be “audit trailed”?

When it comes to determining on which data the audit trail must be applied, the regulatory agencies (i.e. FDA and EMA) recommend following a risk based approach.

Following a “risk based approach”

In 2003, the FDA issued recommendations for compliance with 21 CFR Part 11 in the “Guidance for Industry – Part 11, Electronic Records; Electronic Signatures — Scope and Application” (see reference: Ref. [04]).  This guidance narrowed the scope of 21 CFR Part 11 and identified portions of the regulations where the agency would apply enforcement discretion, including audit trails. The agency recommends considering the following when deciding whether to apply audit trails:

  • Need to comply with predicate rule requirements
  • Justified and documented risk assessment to determine the potential effect on product quality, product safety and record integrity

With respect to predicate rule requirements, the agency states, “Persons must still comply with all applicable predicate rule requirements related to documentation of, for example, date (e.g., § 58.130(e)), time, or sequencing of events, as well as any requirements for ensuring that changes to records do not obscure previous entries.”  In the docket concerning the 21 CFR Part 11 Final Rule, the FDA states, “in general, the kinds of operator actions that need to be covered by an audit trail are those important enough to memorialize in the electronic record itself.” These are actions which would typically be recorded in corresponding paper records according to existing recordkeeping requirements.

The European regulatory agency also recommends following a risk based approach.  The Eudralex Annex 11 regulations state, “consideration should be given, based on a risk assessment, to building into the system the creation of a record of all GMP-relevant changes and deletions (a system generated “audit trail”).”

MHRA Audit

When does the Audit Trail begin?

The question of when to begin capturing audit trail information comes up quite often, as audit trail initiation requirements differ for data and document records.

For data records:

If the data is recorded directly to electronic storage by a person, the audit trail begins the instant the data hits the durable media.  It should be noted that the audit trail does not need to capture every keystroke that is made before the data is committed to permanent storage. This can be illustrated in the following example involving a system that manages information related to the manufacturing of active pharmaceutical ingredients.  If, during the process, an operator makes an error while typing the lot number of an ingredient, the audit trail does not need to record every time the operator pressed the backspace key or the subsequent keystrokes to correct the typing error prior to pressing the “return key” (where pressing the return key would cause the information to be saved to a disk file).  However, any subsequent “saved” corrections made after the data is committed to permanent storage must be part of the audit trail.

For document records:

If the document is subject to review and approval, the audit trail begins upon approval and issuance of the document.  A document record undergoing routine modifications must be version controlled and managed via a controlled change process. However, interim changes performed in a controlled manner, i.e. during drafting or collection of review comments, do not need to be audit trailed.  Once the new version of a document record is issued, it supersedes all previous versions.

Questions from Auditors: Got Answers?

When was data locked? Can you find this information easily on your audit trail files?

When was the database/system released for the trial? Again, how easily can you run a query and find this information?

When did data entry by investigator (site personnel) commence?

When was access given to site staff?
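If the audit trail can be exported to a flat table, these questions can usually be answered with a couple of filters. Here is a minimal sketch assuming a CSV export with columns such as user, action, site_id and timestamp; the column and action names are illustrative and will differ by system:

```python
# Minimal sketch: answering common auditor questions from an audit trail export.
import pandas as pd

audit = pd.read_csv("audit_trail_export.csv", parse_dates=["timestamp"])

data_lock   = audit.loc[audit["action"] == "DATABASE_LOCK", "timestamp"].max()
go_live     = audit.loc[audit["action"] == "STUDY_RELEASED", "timestamp"].min()
first_entry = audit.loc[audit["action"] == "DATA_ENTRY", "timestamp"].min()
site_access = (audit[audit["action"] == "ACCOUNT_ACTIVATED"]
               .groupby("site_id")["timestamp"].min())

print(f"Database locked:       {data_lock}")
print(f"System released:       {go_live}")
print(f"First site data entry: {first_entry}")
print(site_access.head())
```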

Source:

Part of this article was taken, with permission, from Montrium – Understanding Audit Trail Requirements in Electronic GXP Systems

Fair Use Notice: Images/logos/graphics on this page contain some copyrighted material whose use has not been authorized by the copyright owners. We believe that this not-for-profit, educational, and/or criticism or commentary use on the Web constitutes a fair use of the copyrighted material (as provided for in section 107 of the US Copyright Law).

How to Avoid Electronic Data Integrity Issues: 7 Techniques for your Next Validation Project

The idea for this article was taken (with permission from the original authors) from Montrium: how-to-avoid-electronic-data-integrity-issues-7-techniques-for-your-next-validation-project

Regulatory agencies around the globe are causing life science companies to be increasingly concerned with data integrity.  This comes as no surprise given that guidance documents for data integrity have been published by the MHRA, the FDA (draft), and the WHO (draft).  In fact, the recent rise in awareness of the topic has been so tremendous that, less than two years after the original publication, the MHRA released a new draft of its guidance whose scope has been broadened from GMP to all GxP data.

Is data integrity an issue of good documentation practices? You can read GCP information about this topic here.

Good Documentation Practices for SAS / EDC Developers

Are you practising GCP?

In computerised systems, failures in data integrity management can arise from poor or complete lack of system controls.  Human error or lack of awareness may also cause data integrity issues.  Deficiencies in data integrity management are crucial because they may lead to issues with product quality and/or patient safety and, ultimately may manifest themselves through patient injury or even death.

I was recently at a vendor qualification for a tool that uses a handheld device to read data while the physician or expert manually puts pressure on parts of someone’s body (e.g. pain related). I was not impressed. Even though it seems like a nice device with its own software, the entire process was manual and therefore of questionable data integrity. The measurements seemed to be all over the place, and you would need the right personnel at the clinical site to perform a more accurate reading since, again, it was all manual and dependent on how the device was used.

I also questioned the calibration of this device. The salesperson’s answer? “Well, it is reading 0 and therefore it is calibrated.” Really? You mean to tell me you have no way of proving when you performed calibration? Where is the paper trail proving your device is accurate? You mean to tell me I have to trust your word? Or your device’s screen that reads ‘0’? Well, I have news for you: tell that to the regulators when they audit the trial.

What is Data Integrity?

Data can be defined as any original and true copy of paper or electronic records.  In the broadest sense, data integrity refers to the extent to which data are complete, consistent and accurate.

To have integrity and to meet regulatory expectations, data must at least meet the ALCOA criteria (Attributable, Legible, Contemporaneous, Original and Accurate). Data that is ALCOA-plus is even better.

Alcoa

 

What is a Computerised System?

A computerised system is not only the set of hardware and software, but also includes the people and documentation (including user guides and operating procedures) that are used to accomplish a set of specific functions.  It is a regulatory expectation that computer hardware and software are qualified, while the complete computerised system is validated to demonstrate that it is fit for its intended use.

How can you demonstrate Electronic Data Integrity through Validation?

Here are some techniques to assist you in ensuring the reliability of GxP data generated and maintained in computerised systems.

Specifications

What to do: Outline your expectations for data integrity within a requirements specification.

For example:

  • Define requirements for the data review processes.
  • Define requirements for data retention (retention period and data format).

Why you should do this: Validation is meant to demonstrate a system’s fitness for intended use.  If you define requirements for data integrity, you will be more inclined to verify that both system and procedural controls for data integrity are in place.

What to do: Verify that the system has adequate technical controls to prevent unauthorised changes to the configuration settings.

For example:

  • Define the system configuration parameters within a configuration specification.
  • Verify that the system configuration is “locked” to end-users.  Only authorized administrators should have access to the areas of the system where configuration changes can be made.

Why you should do this: The inspection agencies expect you to be able to reconstruct any of the activities resulting in the generation of a given raw data set.  A static system configuration is key to being able to do this.

 

Verification of Procedural Controls

What to do: Confirm that procedures are in place to oversee the creation of user accounts.

For example:

  • Confirm that user accounts are uniquely tied to specific individuals.
  • Confirm that generic system administrator accounts have been disabled.
  • Confirm that user accounts can be disabled.

Why you should do this: Shared logins or generic user accounts should not be used since these would render data non-attributable to individuals.

System administrator privileges (allowing activities such as data deletion or system configuration changes) should be assigned to unique named accounts.  Individuals with administrator access should log in under their own named account so that audit trails can be attributed to that specific individual.

What to do: Confirm that procedures are in place to oversee user access management.

For example:

  • Verify that a security matrix is maintained, listing the individuals authorized to access the system and with what privileges.

Why you should do this: A security matrix is a visual tool for reviewing and evaluating whether appropriate permissions are assigned to an individual. The risk of tampering with data is reduced if users are restricted to areas of the system that solely allow them to perform their job functions.

What to do: Confirm that procedures are in place to oversee training.

For example:

  • Ensure that only qualified users are granted access to the system.

Why you should do this: People make up the part of the system that is most prone to error (intentional or not).  Untrained or unqualified users may use the system incorrectly, leading to the generation of inaccurate data or even rendering the system inoperable.

Procedures can be implemented to instruct people on the correct usage of the system.  If followed, procedures can minimize data integrity issues caused by human error. Individuals should also be sensitized to the consequences and potential harm that could arise from data integrity issues resulting from system misuse.

Logical security procedures may outline controls (such as password policies) and codes of conduct (such as prohibition of password sharing) that contribute to maintaining data integrity.

 

Testing of Technical Controls

What to do: Verify calculations performed on GxP data.

For example:

  • Devise a test scenario where input data is manipulated and double-check that the calculated output is exact.

Why you should do this: When calculations are part of the system’s intended use, they must be verified to ensure that they produce accurate results.

What to do: Verify the system is capable of generating audit trails for GxP records.

For example:

  • Devise a test scenario where data is created, modified, and deleted.  Verify each action is captured in a computer-generated audit trail (see the sketch after this section).
  • Verify the audit trail includes the identity of the user performing the action on the record.
  • Verify the audit trail includes a time stamp.
  • Verify the system time zone settings and synchronisation.

Why you should do this: With the intent of minimizing the falsification of data, GxP record-keeping practices prevent data from being lost or obscured.  Audit trails capture who, when and why a record was created, modified or deleted.  The record’s chronology allows for reconstruction of the course of events related to the record.

The content of the audit trails ensures that data is always attributable and contemporaneous.

For data and the corresponding audit trails to be contemporaneous, system time settings must be accurate.
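As a rough illustration of that audit trail test scenario, here is a minimal pytest-style sketch. The `edc` test harness, its methods and the action names are all hypothetical; a real validation script would drive the actual system under test and document expected versus observed results.

```python
# Minimal sketch: verify the audit trail captures create/update/delete with user and time stamp.
from datetime import datetime, timezone

def test_audit_trail_captures_lifecycle(edc):  # 'edc' is a hypothetical test fixture
    start = datetime.now(timezone.utc)
    rec = edc.create_record(form="VS", field="VSORRES", value="120", user="tester01")
    edc.update_record(rec, value="118", user="tester01")
    edc.delete_record(rec, user="tester01", reason="Entered in error")

    trail = edc.audit_trail(record=rec)
    assert [e.action for e in trail] == ["CREATE", "UPDATE", "DELETE"]
    assert all(e.user == "tester01" for e in trail)   # identity of the user captured
    assert all(e.timestamp >= start for e in trail)   # time stamp captured and plausible
    assert trail[1].original_value == "120" and trail[1].new_value == "118"
```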

 

 

 

Who can delete data?

The system must be adequately validated and have sufficient controls to prevent unauthorized access or changes to data.

Implement a data integrity lifecycle concept:

  • Activate audit trail and its backup
  • Backup and archiving processes
  • Disaster recovery plan
  • Verification of restoration of raw data (see the sketch after this list)
  • Security, user access and role privileges (Admin)
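For the restoration-verification item above, a minimal sketch: compare checksums of the original raw data files against the restored copies after a backup/restore exercise. The directory paths are illustrative.

```python
# Minimal sketch: verify restored raw data matches the original via SHA-256 checksums.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(original_dir: str, restored_dir: str) -> list:
    """Return files whose restored copy is missing or does not match the original."""
    mismatches = []
    for src in Path(original_dir).rglob("*"):
        if src.is_file():
            dst = Path(restored_dir) / src.relative_to(original_dir)
            if not dst.exists() or sha256(src) != sha256(dst):
                mismatches.append(str(src))
    return mismatches

print(verify_restore("/data/raw", "/restore_test/raw"))  # an empty list means success
```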

Warning Signs – Red Flags

  • Design and configuration of systems are poor
  • Data review limited to printed records – no review
    of e-source data
  • System administrators can delete data during QC (no proper documentation)
  • Shared Identity/Passwords
  • Lack of culture of quality
  • Poor documentation practices
  • Old computerized systems not complying with part 11 or Annex 11
  • Lack of audit trail and data reviews
  • Is QA oversight lacking? Symptom of weak QMS?
I love being audited

Perform Self Audits

  • Focus on raw data handling & data review/verification
  • Consider external support to avoid bias
  • Verify the expected sequence of activities: dates,
    times, quantities, identifiers (such as batch,
    sample or equipment numbers) and signatures
  • Constantly double check and cross reference
  • Verify signatures against a master signature list
  • Check source of materials received
  • Review batch record for inconsistencies
  • Interview staff, not the managers

FDA 483 observations

“…over-writing electronic raw data…..”

“…OOS not investigated as required by SOP….”

“….records are not completed contemporaneously”

“… back-dating….”

“… fabricating data…”

“…. No saving electronic or hard copy data…”

“…results failing specifications are retested until
acceptable results are obtained….”

  • No traceability of reported data to source documents

Conclusion:

Even though we try to comply with regulations (regulatory expectations from different agencies, e.g. EMA, MHRA, FDA, etc.), data integrity issues are not always easy to detect. It is important that staff working in a regulated environment be properly trained, with continuous refreshers provided throughout their careers (awareness training on new regulations and updates to existing regulations).

Companies should also integrate a self-audit program and develop a strong quality culture by implementing lessons learned from audits.

Sources:

You can read more about data integrity findings by searching the following topics:

MHRA GMP Data Integrity Definitions & Guidance for the Industry,
MHRA DI blogs: org behaviour, ALCOA principles
FDA Warning Letters and Import Alerts
EUDRA GMDP database noncompliance

"The Mind-Numbing Way FDA Uncovers Data Integrity Lapses", Gold Sheet, 30 January 2015

Data Integrity Pitfalls – Expectations and Experiences

Fair Use Notice: Images/logos/graphics on this page contain some copyrighted material whose use has not been authorized by the copyright owners. We believe that this not-for-profit, educational, and/or criticism or commentary use on the Web constitutes a fair use of the copyrighted material (as provided for in section 107 of the US Copyright Law).

Good Clinical Practice – The Bible

Good Clinical Practice (GCP) is an international ethical and scientific quality standard for
designing, conducting, recording and reporting trials that involve the participation of
human subjects. Compliance with this standard provides public assurance that the rights,
safety and well-being of trial subjects are protected, consistent with the principles that have
their origin in the Declaration of Helsinki, and that the clinical trial data are credible.

Below is a link to the most common terms used in clinical trials (for reference). Use it at your leisure, during work hours and in your day-to-day work as a clinical researcher.

Good Clinical Practice Bible – Terminologies

Top 3 Posts at (EDC Developer)

First, I would like to thank everyone who has read the articles posted at {EDC} Developer, especially my colleagues and friends from India. The highest readership and hits have come from people living in India.

New to the industry? Want to get in as a clinical data manager or clinical programmer? Looking for a particular topic or an answer to a question? Check the contact me section.

Here are the most-searched articles over the past few months:

1- Data Management: Queries in Clinical Trials

2- How to document the testing done on the edit checks?

3- Why use JReview for your Clinical Trials?

Others most read articles:

Role of Project Management and the Project Manager in Clinical Data Management

4 Programming Languages You Should Learn Right Now (eClinical Speaking)

Data Management Plan in Clinical Trials

Top search terms used to find {EDC} Developer:

1-types of edit checks in clinical data management

2-Rave programming

3- pharmaceutical terminology list

4-seeking rave training (better source is mdsol.com)

5- edc programmer

6-central design tips and tricks

Thank you for reading!