Steve Shevel, Vice President Operations, Clinical Development and Quality at Hawthorne Effect, Inc.

In the not-too-distant past, if you were running clinical trials, no one would have expected this question to be posed: “do we really need a Clinical Trial Management System (CTMS)?” You needed a system that would capture and report information related to the operational progress of your trials, without question. But these days, we hear that assumption challenged more and more often.

For many years the scope and functionality of widely available CTMS systems have spread and grown unwieldy. Software vendors looked to incorporate additional tools into their products, such as investigator databases, monitoring visit reports, safety letter distribution, and even Trial Master File and document compliance functions. These tools, while not directly relevant to the day-to-day operations of a trial, served as the hub of centralized clinical operations. Sponsors appreciated the idea of having one place to go for their information, and bombarded vendors with a steady stream of requests for additional functionality that supported their particular way of operating, as distinct from the way another sponsor might work or be organized.

Of course, not all requests could be fulfilled, and so the rollout of new functionality was predicated on variables such as common sponsor demand, vendor cost containment, and technological feasibility. But in recent years one key factor changed the CTMS landscape entirely: large-scale outsourcing. Unfortunately, traditional CTMS solutions do not adapt well to a heavily outsourced environment, especially when sponsors are using multiple CROs.

The primary barrier to using traditional CTMS systems in an outsourced model is the question of who is doing the actual work. Most CROs serve multiple clients, and a large part of the success of their business model relies on limiting fixed costs (non-billable resources). This means they need their workforce to be fluid and adaptable (while still performing to client satisfaction). To support this, they need their own internal management systems in which knowledge and process are transferable and fungible regardless of trial or sponsor. This is the only way CROs can realize economies of scale, but it means both parties (sponsor and CRO) end up invested in their own optimized CTMS, each with compelling reasons for that investment. Rarely, however, can these systems be shared, talk to each other, align their metadata, or even grant access from one party to the other. At best, each party can review reports or data extracts from the other, almost never in real time and rarely in their preferred format.

Here in 2018, with approximately 50% of every research dollar going to third-party providers, what are sponsors to do? Sponsors are still responsible for the trial, its subjects, and the data it produces. But in an outsourced environment, they are no longer involved in the day-to-day operational aspects of a trial; that is what they are paying CROs to do. And so the M (Management) in CTMS is no longer applicable to the sponsor. Likewise, all of the associated functionality in traditional CTMS systems that focused on the M of a trial is of limited value to sponsor users. CROs still require this functionality, since they are doing the operations, but they have their own CTMS systems configured around their unique (and consistent) processes.

Sponsors need to consider replacing the M in CTMS with an O, and pursue the development and implementation of a Clinical Trial Oversight System. In an outsourced environment, oversight is really what sponsors are supposed to be doing, and indeed this is the regulatory expectation. The complexity of changing the mindset at a sponsor from “doer” to “overseer” is perhaps a topic for another article, but clearly the overseer needs data summarized and consolidated for a trial to ensure they know what is going on, and more importantly, to be able to take action based on this information.

This is why it pains us to see sponsors who outsource most of their trials still spend large sums of money on, and devote enormous effort to, implementing a traditional CTMS system. Inevitably, in this situation, a significant portion of the functionality they spent months discussing and configuring is underutilized or, worse, forgotten by their user community. There are a number of vendors in the space that offer systems or software that can meet the oversight needs and requirements of a sponsor – they are not (and shouldn’t be called) CTMS systems. They are often simpler and faster to configure (less functionality) and less expensive. Furthermore, these solutions can integrate with CRO and other third-party systems, eliminating requests for CROs to work in sponsor systems while still allowing sponsors to see and analyze the relevant operational data. It is important to emphasize that oversight does not consist solely of looking at reports and metrics; it also includes communication and collaboration streams, issue management, and other important factors. This locus of functionality gets at the true heart of sponsors’ responsibilities and business needs.

Do you really need a CTMS system? The answer, as usual, is “it depends”. If you do not outsource the majority of your trials to CRO providers and do much of the operational and management work in house, then yes, you could benefit from a traditional CTMS solution. If you do outsource a large portion of your trials, then you would be better served by options that focus on supporting trial oversight. How you implement this oversight and the associated process change is as important as, or perhaps more important than, the technical solution you choose. So, before you commit millions of dollars to your next CTMS initiative, ask yourself two questions: 1) Do we manage trials or oversee them? 2) What do we need to support that work?

Dr. Lidiya Todorova, MD, PhB

Achieving CTMS implementation success might not seem easy, especially when using this kind of software for the first time. In that case, it is important to have a well-defined plan to help you with the initial steps of getting everything in place. The beginning is the toughest part, which is why we are sharing some ideas that you might find helpful.

Step 1: Involve the Team

After buying and integrating the CTMS, the first step is to make sure your team will actually use it. Fear of the new is always there, and what helps is ensuring that the team has been involved in almost every aspect of the CTMS selection and integration: from the search for an appropriate system, through testing trial demos, to providing proactive feedback. The people who will use the system the most are the ones who know best what the CTMS needs to accomplish.

Step 2: Roles & Responsibilities

CTMS implementation success lies in stratification. In this step you must determine the roles and responsibilities. Who will use the system? What CTMS features will they need to achieve success? Answering these two basic questions, and making sure that everyone involved is impeccably trained in the respective functions of the software, will account for almost 70 percent of your CTMS implementation success.

Step 3: Deadlines

After determining each person’s responsibilities, it is time to establish a timeline. Deadlines might be tedious, but they are an essential component of clinical research. With the end goal of CTMS adoption in mind, you can then set “mini goals” and “mini-deadlines” for the team involved in the project. For example, you can estimate the time your team will need to complete various activities and set deadlines accordingly. Another useful tip is to set realistic goals and avoid frightening staff members with expectations that seem unattainable.

Step 4: Priorities

Setting priorities is as important as setting deadlines. First, you might want to get your team involved in training. After they are familiar with the system, it is time to start the work process by setting up the studies and then making sure that all the steps related to each study are set and ordered by priority. It is not impossible, of course, to do everything at once, but doing so will cost you both implementation time and additional money.

Step 5: Acknowledge the Accomplishments

Success is achieved by completing little steps one by one. What seems to be a one-time achievement is almost certainly the result of a lot of work and successfully completed milestones. This is why you should acknowledge the accomplishments along the way and recognize when your team has reached important milestones in your implementation plan. Sharing the enjoyment of success with your team will not only lift everyone’s spirits but also motivate them to continue the good work. Even more: it will keep them excited to continue working with the new CTMS.

Our clinical trial management system, Clinicubes, offers integrated solutions for every single aspect and phase of clinical research. At its core, the software is systematized, well-built, and easy to use. Clinicubes delivers an easier way to collect, retain, and archive patient and scientific data. Clinical research professionals can also track deadlines, schedule visits, and monitor treatment progress. The system increases the productivity of the clinical research site and the number of successfully completed trials. It also streamlines the entire clinical trial process, making CTMS implementation success an easily attainable goal.

Cerdi Beltre, SVP, Institutional Services, WCG

Clinical research sites that are in the process of investigating clinical trial management system (CTMS) adoption are finding that the site CTMS space is in flux with mergers, discontinued products, and new start-ups looking to drive innovation. It can be challenging to focus on what’s important to each individual site as they evaluate the available options.

We listen to many research sites as they work through the process of evaluating CTMS features and benefits, driving their decisions toward or away from particular CTMS providers. The two questions we hear most often are:

  • What will happen with our existing data? and
  • Can you integrate with other applications used at our site?

We sat down with Harry Chahal, the vice president of professional services at WCG Velos, to address these questions. Since its founding in 1996, Velos has supported dozens of data migrations and completed more than 100 third-party system interfaces for its CTMS clients.

Cerdi Beltre: Velos has seen first-hand the changing landscape of site needs in the past two decades. What are the most pressing integration concerns research sites have now? How have they evolved?

Harry Chahal: In the past two decades, clinical trials have become far more complex. Sites have to collect more data points, and billing compliance is more challenging. Electronic medical records (EMRs) have changed over the years as well, with EMR interfaces and financial management gaining in priority. Also, off-site or cloud application hosting has become especially popular, as it is considerably more efficient; however, it adds new IT security concerns.

CTMS platforms used to be standalone solutions – twenty years ago most research sites didn’t have EMRs to interface with. Now, integration is essential, and extends to multiple systems beyond just EMR. While we are seeing these requests for integrations across all research site types, from small independents to larger site networks, this is especially true for large research institutions that have the clinical trial volume to fully justify the investment into integration.

Cerdi Beltre: When research sites switch from one CTMS to another, what are the typical reasons for making the switch?

Harry Chahal: In my experience, the drivers to switch CTMS systems are inadequate capabilities or configurability, and research billing compliance risk. Sites typically want to reduce the number of systems needed to get the desired results. They also want a system that is easy to navigate and reduces the research staff’s workload, which integration helps to achieve. For example, sites want to only enter data once into the CTMS and have it feed directly to the EMR and/or have IRB-related items interface with the CTMS.

Cerdi Beltre: What happens to existing data? How heavily should this factor into the CTMS choice?

Harry Chahal: In our experience, most clinical trial data can be migrated and it’s an important step to avoid any interruption in workflow. Some data may not need to be migrated. For example, it may not be worth the effort to migrate data for studies that are closing soon or are closed. It also depends on how complex their legacy system was and what functions the site was using. As such, the data migration needs of each site vary, however, most sites benefit from some level of data migration and it’s an important factor in the decision-making process. Sites must review what the new system offers compared to where their data was previously and evaluate whether anything from their old system can be discarded, or how they can revise their processes to best utilize the new CTMS.

Data migration should be quick, seamless, and accurate. Minimizing workflow disruption is paramount for sites that use the system on a daily basis. Once the data is migrated, either the site or the new CTMS team needs to verify the validity of the migrated data.

Cerdi Beltre: What challenges do you typically see when migrating data from one CTMS to another, and how do you handle these challenges?

Harry Chahal: We usually see one-to-one matching of fields and definitions across different CTMS platforms. The challenges tend to be around code list values, picklists, data formats, data integrity, and data validations such as mandatory data fields. These are semantic challenges.
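
To make these semantic challenges concrete, here is a minimal, hypothetical sketch in Python of the kind of code-list mapping and mandatory-field checks a migration team might apply before loading legacy records into a target system. The mapping table, field names, and required fields are illustrative assumptions, not any vendor's actual schema.

```python
# Hedged sketch: normalizing legacy code-list / picklist values into a
# target system's vocabulary. All names here are invented for illustration.
STATUS_MAP = {
    "ACTV": "Active",
    "CLSD": "Closed to Accrual",
    "SUSP": "Suspended",
}

REQUIRED_FIELDS = ("study_id", "status", "pi_name")


def transform(row):
    """Map one legacy record into the target vocabulary, flagging problems."""
    errors = []
    for field in REQUIRED_FIELDS:  # mandatory-field validation
        if not row.get(field):
            errors.append(f"missing {field}")
    status = STATUS_MAP.get(row.get("status", ""))
    if status is None:
        errors.append(f"unmapped status code {row.get('status')!r}")
    return {**row, "status": status}, errors


# Example usage with a made-up record:
print(transform({"study_id": "S-001", "status": "CLSD", "pi_name": "Dr. Smith"}))
```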

There are several different ways to handle the validation of the migration:

  1. Sample data validation: This involves randomly selecting records from the legacy system and comparing them with the target system. Sampling is not foolproof because the records are chosen at random, so we often use or recommend profiling techniques that yield better data coverage than purely random sampling.
  2. Subset of data validation: Instead of choosing random sample records for verification between the legacy and target systems, here we choose a subset of records based on row numbers, such as the first thousand or ten thousand records. The advantage of this approach is that selecting more records results in greater data coverage.
  3. Complete data set validation: This is the ideal validation method that we strive for in migration testing. Every record in the legacy system is compared against the target system, and vice versa. Using queries that highlight discrepancies, we can then precisely assess the accuracy of the migration.
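
As an illustration only, the following sketch shows the complete data set approach in miniature: it loads hypothetical CSV exports from the legacy and target systems and compares every record in both directions. The file names, the "study_id" key, and the compared columns are assumptions for the example, not any vendor's export format.

```python
# Hedged sketch of bi-directional "complete data set" validation between a
# legacy CTMS export and the target system's export.
import csv


def load_records(path, key="study_id"):
    """Load a CSV export into a dict keyed by record identifier."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}


def compare(legacy, target, fields):
    """Return per-record discrepancies, checking both directions."""
    issues = []
    for rec_id, old in legacy.items():
        new = target.get(rec_id)
        if new is None:
            issues.append((rec_id, "missing in target"))
            continue
        for field in fields:
            if old.get(field, "").strip() != new.get(field, "").strip():
                issues.append((rec_id, f"mismatch in {field!r}"))
    for rec_id in target:
        if rec_id not in legacy:
            issues.append((rec_id, "unexpected record in target"))
    return issues


if __name__ == "__main__":
    legacy = load_records("legacy_export.csv")    # hypothetical file names
    target = load_records("target_export.csv")
    for rec_id, problem in compare(legacy, target, ["status", "enroll_date", "pi_name"]):
        print(rec_id, problem)
```

In practice the same comparison is usually expressed as queries against the two databases rather than flat-file diffs, but the bi-directional logic is the same.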

Cerdi Beltre: What advice would you offer to a research site contemplating a change in CTMS?

Harry Chahal: Here are the five most important considerations I would recommend:

  1. Prioritize integrations that reduce double data entry. As you review your overall process, focus on what information you are duplicating in multiple systems, how frequently, and why. What efficiencies could you gain if that information were shared between systems?
  2. Get a commercially available, proven CTMS. You’ll want a CTMS that follows standards for integrations, has a proven track record for product delivery and service, and shows commitment to updating the CTMS with changes in industry trends, regulatory requirements, and data standards.
  3. Choose a system that will grow with you – one that meets your current needs but also can adapt as your needs grow or change in the future.
  4. Standardize key statuses/timeline points to provide transparency at the enterprise level. Will your CTMS allow you to easily pull a report of your historical studies and metrics to demonstrate the expertise of your site? For example, can you use the CTMS to demonstrate your site’s expertise to sponsors (a minimal reporting sketch follows after this list):
    • What therapeutic areas do you have experience with?
    • How many studies have you conducted?
    • How many participants have you enrolled?
    • How efficiently did you enroll in the studies?
    • What percentage of participants completed the study?
  5. Assess your current system and what functionalities your team is using most frequently.  Are you using the functionality in the system that you expected to use?  Is there any functionality that you intended to use but are not using?  If so, understand why.  If this functionality would be beneficial to your team, include it as a requirement in your search for a new vendor.
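
To illustrate the kind of enterprise-level reporting item 4 describes, here is a small, hypothetical Python sketch that aggregates a CTMS study export into the site-expertise metrics listed above. The column names and file name are assumptions for the example; a real CTMS would typically expose this through its own reporting module.

```python
# Hedged sketch: summarizing site experience from a hypothetical study export.
# Columns ("therapeutic_area", "enrolled", "target_enrollment", "completed")
# are illustrative, not any vendor's schema.
import csv
from collections import Counter


def site_profile(path):
    areas = Counter()
    studies = enrolled = target = completed = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            studies += 1
            areas[row["therapeutic_area"]] += 1
            enrolled += int(row["enrolled"])
            target += int(row["target_enrollment"])
            completed += int(row["completed"])
    return {
        "therapeutic_areas": dict(areas),
        "studies_conducted": studies,
        "participants_enrolled": enrolled,
        "enrollment_vs_target": round(enrolled / target, 2) if target else None,
        "completion_rate": round(completed / enrolled, 2) if enrolled else None,
    }


print(site_profile("study_export.csv"))  # hypothetical export file
```
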
Zach Walker, Defense Innovation Unit (DIU)

There is no greater maxim than that speed is decisive in war. However, cyber warfare today is a mostly manual process. Humans scour code to find vulnerabilities and fix problems with patches. Humans evaluate whether a patch will maintain overall system functionality and whether it is performant. Human attackers exploit unpatched systems or vulnerabilities that, in some cases, have been latent in systems for over a decade.

For example, the 2017 WannaCry attack was based on a vulnerability latent in every version of Microsoft Windows since 2001. It took 16 years for a latent vulnerability to become weaponized and wreak havoc across the world. In a sense, modern cyber warfare revolves around attackers taking advantage of low-hanging fruit and defenders hoping that Microsoft will release a patch to fix their systems before it’s too late.

In the future, we won’t have the luxury of waiting 16 years to patch a bug that leads to a zero-day exploit. Humans will augment attack and defense with machine scale and artificial intelligence – as DARPA has said, to take advantage of “zero-second” vulnerabilities. The first to master autonomous cyber warfare will be able to sow disruption, gain access to communications, persist, disrupt, and alter the course of battle. Those left behind will be at a tremendous disadvantage.

Artificially intelligent cyber warfare is already here. DARPA’s Cyber Grand Challenge (CGC) had the audacious goal of building autonomous systems capable of identifying, exploiting, and mitigating previously unknown vulnerabilities.

DARPA held the CGC in August 2016 with a machine-only Capture the Flag-style tournament at DEFCON 24. But is the tech ready for prime time? Congress seemed to think so. In the 2017 Senate Appropriations Committee Department of Defense Appropriations Bill, the Senate suggested that DoD explore “automated exploit generation and vulnerability identification… such as those exemplified in the Cyber Grand Challenge.” Last week, the 2019 NDAA Conference Report articulated the need for a Cyberspace Solarium Commission to give the nation a cyber warfare strategy in which zero-second attack and defense will be the norm.

Another remarkable aspect of the CGC was that it demonstrated the use of artificial intelligence for finding and remediating vulnerabilities. In its Perspectives on AI, DARPA describes three waves of AI. The first, Handcrafted Knowledge, entails reasoning over narrowly defined problems where the structure of the problem is defined by humans but the specifics are explored by machines. This is how the CGC played out: with virtually limitless ways to find and exploit vulnerabilities in the game, machines had to figure out which actions would be the most lucrative. It was truly artificially intelligent cyber warfare.

ForAllSecure, the winner of the CGC, came out of the competition with $2 million in prize money and a long line of companies and nation-states interested in their tech. What didn’t they leave CGC with? A contract to bring their tech into the Department of Defense (DoD). DARPA’s job is to prove the possible with their challenges, and that’s exactly what they did in the CGC. But the DoD wasn’t yet ready to accept this technology. Fortunately, the Defense Innovation Unit Experimental (DIUx) was. Leveraging Other Transaction Authority as defined in 10 U.S.C. 2371(b), DIUx launched a project called VOLTRON to find out if commercial “cyber reasoning” could be used to find and remediate previously unknown vulnerabilities in DoD weapon systems. Companies had until June 20, 2017 to respond to a single-sentence solicitation: “The Department of Defense is interested in systems to automatically find previously unreported vulnerabilities in software without source code and automatically generate patches to remediate vulnerabilities with minimal false positives.”

Sixteen companies responded to the solicitation, and twenty-six business days later, DIUx awarded a $5 million contract to prototype cyber reasoning in the DoD. One year later, DIUx has contracts with three more companies, and their tools are being prototyped in every military service. This effort has brought together some of the best vulnerability researchers in the nation, for the first time, to work from a unified platform and to share best practices.

DIUx has been charged to move at the speed of commercial innovation, and by prototyping commercialized DARPA tech back into the DoD less than one year after the conclusion of a Grand Challenge, we’re doing just that. In the sense of how Clay Christensen describes disruptive innovation, VOLTRON is disrupting DoD cybersecurity.

Coalesce Research Group invites participants from across the globe to attend the ‘International Conference on Big Data Analytics and Data Science’, scheduled for Nov 11-12, 2019 in Las Vegas, Nevada, USA.

Data Science 2019 offers an excellent opportunity to meet and make new contacts in the field of Big Data, providing collaboration spaces and break-out rooms, with tea and lunch for delegates between sessions and invaluable networking time. It allows delegates to have their questions on Big Data and Data Science addressed by recognized global experts who are up to date with the latest developments and can provide information on new techniques and technologies. The conference will feature world-renowned speakers, keynote speakers, plenary speeches, a young researchers’ forum, poster presentations, technical workshops, and career guidance sessions.

The Big Data Analytics and Data Science conference covers all aspects of Big Data, Data Science, and Data Mining, including algorithms, software and systems, and applications.

For more details: https://coalesceresearchgroup.com/datascience/

Find us here: https://www.coalesceresearchgroup.com/conferences/datascience/mediapartner

Facebook is preparing to launch a cryptocurrency that it hopes will “change the worldwide economy.” The currency, named Libra, is being created by Facebook, but the company intends to share control with a consortium of organizations, including investment firms, credit card companies, and other tech giants.

At launch, you’ll be able to send Libra within Facebook Messenger and WhatsApp, with it mostly serving as an intermediary for transferring traditional currencies. Eventually, Facebook hopes Libra will be accepted as a form of payment, and that other financial services will be built on top of its blockchain-based network.

Facebook is also leading a subsidiary company, Calibra, which will develop products and services based around Libra. Calibra is where Facebook intends to make money from the cryptocurrency, starting with the launch of its digital wallet. Calibra will also handle Libra integrations for Facebook’s other products.

In 2003, Tableau set out to pioneer self-service analytics with an intuitive analytics platform that would empower people of any skill level to work with data. Our customers grew with us to form the strongest analytics community in the world. And today, that mission to help people see and understand data grows stronger.

I’m excited to announce that Tableau has entered into an agreement to be acquired by Salesforce in an acquisition that combines the #1 CRM with the #1 analytics platform. By joining forces we will accelerate our ability to accomplish our mission. Together, Salesforce and Tableau share a deep commitment to empowering their respective communities and enabling people of every skill level to transform their businesses, their careers, and their lives through technology.

 

By: Adam Selipsky – CEO, Tableau Software

Google brings a new game to town with the recent announcement of its Anthos product hitting general availability. Anthos was conceived to help developers and IT administrators navigate the complex waters of distributed applications. While Microsoft was the first hyperscale cloud platform operator to make it possible to run its cloud environment in customers’ own data centers with Azure Stack, both Amazon and Google have now introduced products and services to do the same thing.

All three recognize the need to help customers modernize existing applications by taking advantage of the latest innovations like containers and Kubernetes. Making all these different applications work together across different platforms both on-premises and in the cloud is challenging. Google says it has a viable solution, or solutions, for this.

Aparna Sinha, Kubernetes group product manager at Google, describes the company’s take on modernizing legacy applications through three different approaches:

  1. GKE On-Prem to bring Google’s cloud services to a customer’s data center
  2. Service Mesh for moving applications to a microservices architecture
  3. Containerize legacy applications to make them portable and accessible

“We have seen a lot of customer interest in both hybrid and multi-cloud approaches to providing services that deliver consistent performance and the right levels of control,” Sinha told me.

Each of these offers a structured approach to moving legacy apps to a cloud-based architecture. While this doesn’t rule out keeping some portions of the application in-house, it does necessitate the use of containers and Kubernetes as the foundational pieces of a new application paradigm.

Google Kubernetes Engine (GKE) On-Prem

As the cornerstone of Google’s hybrid cloud offering, GKE On-Prem offers customers multiple options for modernizing legacy applications stuck on old hardware. Workload portability, or enabling applications to run anywhere, is the ultimate goal. GKE On-Prem makes it possible to build and run applications when you need to keep the data in-house or you don’t want to move large amounts of data to the cloud.

Google’s approach here is different from Amazon’s or Microsoft’s in that GKE On-Prem runs on top of VMware vSphere. Everything runs on customer hardware with support for all the mainstream VMware OEMs, including Cisco, Dell/EMC, HPE, and Lenovo. This approach caters to the large number of existing VMware customers and keeps the familiar management and operating environment already in place.

Service Mesh

Google sees the future of application integration built upon a wide range of microservices, all orchestrated and managed in the cloud. Google Cloud Service Mesh (GCSM) is the product offering that handles everything from communication and networking to monitoring and security. GCSM utilizes Istio, the open source service mesh project Google co-founded, to handle the heavy lifting required to make these new microservices reliable and secure.

Serverless computing is the concept of providing specific services on demand, running in a platform-independent manner. The bottom line is the ability to deliver a piece of functionality without being tied to any physical system. Google’s approach is to use Kubernetes and a new project called Knative, on top of Istio, to make it all work.

Containerize

Most corporations have monolithic applications that they will never rewrite. These might be packaged applications like a database or another application purchased in the past. Google’s approach here is to move these applications into a container-based platform to enable them to run and, more importantly, be managed and integrated with the Google Cloud Platform environment.

To make this process easier, Google has a migration center offering specific services both internally and through partners. A variety of approaches to the problem, from lift-and-shift to migrate-and-modernize, can be taken depending on the complexity and flexibility of the customer requirements. Google realizes that one size doesn’t fit all in this approach, and it’s enlisted a wide range of partners to make it happen.

Bottom Line

Google’s whole strategy in tackling the problem of complexity is to simplify. While that might seem trite, it really does work when you take their products out for a test drive. Developers can spin up a test system with just a few clicks and then develop against essentially the same core infrastructure as they would have in production.

Microsoft’s answer to the integrated on-premises and cloud development story is to go with an Azure Stack system. Similarly, the folks at Amazon want you to buy their hardware and run a clone of AWS in your data center. Google thinks you can get what you need by running on top of VMware vSphere on existing hardware at significantly lower cost than either AWS or Microsoft.

 

Source: Data Center Knowledge