We had a virtual meeting with Mr. Srinath Vallakirthy to understand how analytics is changing the game for patients, thanks to healthcare organizations’ innovative approaches.
Can you walk us through how data science and AI are revolutionizing the healthcare industry?
From improved diagnosis to timely detection of diseases, artificial intelligence has been making positive contributions to the betterment of the healthcare industry. Especially in areas such as pathology, pharma, and radiology, AI is already helping professionals improve the quality of care.
Clinical development: R&D centers and pharma clients are analyzing big data to test new drugs and investigational products, run simulations, and reduce the cost of trials.
Prevention of drug abuse: Developed countries are using Big Data to tackle the problem of overdose or misuse of opioids.
Real-time alerting: Healthcare providers are utilizing the capabilities of clinical decision support software and AI-focused networks to analyze patient health records.
Outbreak analytics: In the case of outbreaks such as dengue, H1N1, Zika, and other diseases, it becomes imperative to identify the origin of the disease for effective management. Analyzing huge amounts of clinical and demographic data and connecting multiple data points through link analysis holds the key.
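The link analysis idea above can be sketched very simply: connect each case to the locations it touches, and rank locations by how many distinct cases link to them. This is a minimal illustration with hypothetical case records, not a real epidemiological method or dataset.

```python
from collections import Counter

# Hypothetical case records: each maps a patient case to locations visited
# before symptom onset (illustrative data only).
cases = {
    "case_1": ["market_A", "clinic_X"],
    "case_2": ["market_A", "school_B"],
    "case_3": ["market_A"],
    "case_4": ["school_B", "clinic_X"],
}

def rank_candidate_origins(cases):
    """Rank locations by how many distinct cases link to them."""
    counts = Counter()
    for locations in cases.values():
        for loc in set(locations):  # count each location once per case
            counts[loc] += 1
    return counts.most_common()

ranking = rank_candidate_origins(cases)
print(ranking[0])  # the most-linked location is the leading candidate origin
```

In practice this would run over millions of records with far richer link types (contacts, travel, demographics), but the core of link analysis is exactly this: shared connections between data points surface the likely common source.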
What are the bottlenecks and challenges in AI/data science technology?
There are several challenges in the AI/data science industry. Companies are struggling with issues pertaining to data quality, availability, storage, access, integration, privacy, security, retention, and management, as well as the complexity of AI and blockchain tools and limited talent in these areas.
Electronic Health Records (EHR): High costs, functionality, and security are the major concerns when implementing EHR-based systems.
Real-time alerting: Lack of infrastructure and differing regulatory compliance requirements across countries are the roadblocks to implementing real-time alert systems.
Telemedicine: High costs and acceptance by society are the major hurdles in the implementation of telemedicine.
How would you describe the future of AI in the healthcare industry?
Innovation, without question. To expand on that, I think that several technology paradigm shifts are on the horizon for healthcare. Here’s how I’d summarize what’s coming up, current trends, and the future.
Continued adoption of telemedicine use for remote/home health
Increased wayfinding adoption to improve patient experiences
Finally, can you give us examples of why healthcare needs to leverage AI to save lives?
Here are some of them:
In healthcare, AI is used to analyze patient data and provide the healthcare team with accurate insights that can predict a possible health issue. So far, AI has successfully produced earlier warnings for health conditions like seizures and sepsis, two serious health issues with complex datasets.
I recently read an article about a team of scientists from Oxford and Stanford universities who leveraged AI to identify and analyze the most common symptoms of cancer among chemotherapy patients. More than 2,000 patients participated in the study, and AI was used to analyze over 44 common symptoms. The AI platform found that nausea was the key symptom and used this finding for further network analysis.
With the cost of testing and clinical trials running as high as $4 billion, more and more drug development companies are deciding to invest in AI. Using AI for research and analysis is a cost-effective and revolutionary solution in the long term.
For example, this year in January, two companies worked together and used an AI machine to create a Clovis drug in a year, reducing the creation time from four years to only one. The machine studied and analyzed millions of molecular combinations, something that’s impossible for any human to achieve. However, the human factor must be included in the entire process of drug development. The right team of scientists and researchers provides the direction and the tools, and ultimately uses these insights to arrive at valuable decisions.
Srinath Vallakirthy is a healthcare data analytics leader who has worked extensively building high-performing teams for various organizations and has provided consulting to many Fortune 500 clients. At Texas Health Resources, he focuses on helping his customers realize the value of data analytics to solve priority business problems, especially in patient care safety and improvement.
Artificial Intelligence has been called the new nervous system of the healthcare domain. The application of Artificial Intelligence in health information comprises training databases for health data, medical data exchange, clinical decision support systems, and the creation and use of knowledge. Medical data and healthcare statistics have now evolved into a separate domain called Health Informatics. Technology aids in achieving healthcare goals and improves the accessibility of healthcare information. This paper provides insights into the role of artificial intelligence and data science in developing intelligent health informatics systems, and into trends in advancing technologies such as machine learning and big data analytics.
The 21st century has witnessed a transformation in the field of medical science, and healthcare organizations have adopted evolving technologies. The emerging use of Artificial Intelligence in the healthcare domain can be understood as a collection of technologies that enable machines to perform human-like tasks. Artificial Intelligence has the potential to perform administrative functions and is being used for research and training purposes as well. The application of Artificial Intelligence in healthcare is categorized into several broad categories, i.e., descriptive, predictive, and prescriptive. The subcategories include support for physicians, automation of clinical documentation, image analysis, administrative workflow assistance, virtual observation, and patient outreach.
Artificial intelligence addresses the issue of information overload faced by healthcare professionals. Machine learning is employed to look into high volumes of healthcare data. The phenomenon in which information fails to be usefully analyzed is known as ‘filter failure,’ the primary cause of which is inadequate information retrieval systems that make it difficult to identify relevant information resources. Artificial intelligence helps prevent relapses by following up on cases and making recommendations. It can predict the impact of a patient’s genetic makeup on illness or reactions to medicine.
The availability of and access to high-quality data are essential factors in the development of artificial intelligence applications in healthcare. Artificial Intelligence algorithms for medical image analysis depend on high-quality training sets and perform at the level of medical capability captured in their training data. The main goal is to accelerate the discovery of novel disease correlations and to help match people to the best treatments based on their genetic profiles. Research databases should be created to aid the development of artificial intelligence applications. Such databases can support incentives for sharing health data, along with new paradigms for assessing artificial intelligence algorithms, data ownership, and labeled data at several levels. Strategies to fill essential data gaps also need to be identified.
Implementation of artificially intelligent tools and technologies requires several stakeholders, which include practitioners, investors, research and development bodies, and
government bodies. Each stakeholder has a distinct role to play in order to successfully
implement the tools and techniques in the field of health information management.
Artificial Intelligence supports the creation and use of clinical knowledge, reasoning with
clinical knowledge, diagnostic assistance, therapy critique and planning
and improves the efficiency of healthcare delivery processes. The primary goals of health informatics include the availability of software applications and medical information anywhere and anytime; these goals led to the introduction of the electronic healthcare computing concept.
Technology and informatics are aids to achieving healthcare goals. Informatics serves as the bridge between big data and healthcare delivery. Utilization of informatics and big data improves healthcare by promoting greater accessibility of healthcare information, which in turn results in clinical breakthroughs and facilitates progressively deeper insights, leading to proactive care, reduced future risk, and streamlined work processes. Artificial Intelligence plays a role in transformative changes in health and healthcare, both in and out of the clinical setting. It is shaping the future of public health, community health, and healthcare delivery from the personal level to the system level.
MarketsandMarkets World Pharma Logistics and Supply Chain Conference – VIRTUAL EVENT
Due to growing concerns about COVID-19, the MarketsandMarkets World Pharma Logistics and Supply Chain Conference is going fully virtual this year. We are disappointed that we are not able to come together in person in June for this event, but we are so thrilled to see you on an online platform. We are working together to create a virtual conference that will be beneficial and engaging for speakers, attendees and sponsors/exhibitors.
I. Introduction: What is Central Bank Digital Currency (CBDC) and Why Is It Important?
In this post we plan to spend some time and energy understanding Central Bank Digital Currency, or CBDC. While the notion of “Central Banks” implies a centralized control and governance structure, the term itself may run against the doctrine of decentralization that lays the foundation of the cryptocurrency framework. We will attempt to focus on a distributed or, one can say, quasi-decentralized model that aims to provide a structure that not only preserves the supervisory role of the Central Bank but also conforms to the various regulatory, compliance, and monetary frameworks of central banking as a system. The focus of this post is to devise a model that factors into a payment modernization agenda while advocating for the right fit in blockchain technology design, driven by industry acumen as well as business design and economic imperatives.
CBDC has gained traction and a lot of attention primarily due to the rise of cryptocurrency and alternative payment instruments, but CBDC is also an integral part of payment systems and payment infrastructure modernization agendas. In an already complex and fragmented payment ecosystem, the CBDC conversation adds to the payment modernization agenda its own set of complexity and its promise as an ultimate solution for the store of value, fungible unit, and settlement instrument.
While many CBDC conversations address a domestic payment agenda, the implications and design imperatives extend beyond just domestic payment systems. The solution design considerations include (but are not limited to) cross-border payments, high-value payments, securities settlements, and more. We also think it is important to compartmentalize the various payment infrastructures, and the impacts of CBDC on them, from the core conceptual elements of CBDC itself. CBDC in current parlance is broadly categorized into wholesale, i.e., real-time gross settlement (RTGS) systems primarily for interbank settlement, and retail, which addresses domestic payment systems reaching to the edges of the domestic and global economy with cash as an ultimate store of value and a fungible settlement instrument. While this short post may not cover all aspects of CBDC, we will attempt to highlight the salient features, the business and technology considerations, and the structure and design criteria. Let’s explore below.
CBDCs can be defined as digitalized instruments issued by the Central Bank for payments and settlements. They can be described simply as an electronic extension of a form of cash. It is different from money held in Central Bank accounts, as the public may be able to access the CBDC, which remains a liability on the Central Bank balance sheet. 2.
In my post on Forging ahead with Blockchain, I cited tokenized fiat as one of the essential pillars needed for the industry to realize the promise of blockchain in value transfer. Essentially, if blockchain is a network that facilitates value transfer without intermediaries and the resulting costs and friction, many of these value transfers will rely on the duality of a transaction, i.e., an exchange in some sort of crypto asset or cash, i.e., fiat. Now, we all understand the challenges of money movement domestically and cross-border. It is an industry laden with heavy regulatory oversight, a series of intermediaries, and internal systems and processes chipping away at value at every stage of transfer through excessive fees and the unrealized potential of locked capital. But cash or fiat provides ultimate fungibility to many products and services, be they natively digital (music, movies, games, etc.) or tokenized assets (real estate, gold, silver, etc.), which are essentially claims or IOUs of sorts. Cash or fiat also provides a definite unit of measurement of value for many products and services. Fiat or cash becomes essential to the movement of value as a fungible unit in a system like our present economic and financial system. Fiat or cash in any system is part of a Central Bank agenda, in that every country has a Central Bank that governs its monetary policy (money supply, interest rates, etc.), and it alone has the jurisdictional authority to issue fiat.
Many Central Banks are experimenting with blockchain as a technology platform to host applications such as crypto assets, digital fiat, or Central Bank (issued) Digital Currency (CBDC), both at wholesale (RTGS systems) and retail (distribution through retail banks). As these experiments mature, so do the Central Banks’ points of view. In the absence of a Central Bank issuing digital fiat, stablecoins fill the gap by addressing settlement finality with fungible units. So, while there are enormous business complexities, cost structures, and governance challenges (to be discussed later), stablecoins provide a key service to the digital transaction network promise of blockchain technology. I also view stablecoins as an interim step to retail CBDC: the network is future-proofing itself with stablecoins today and will be ready for the CBDC of tomorrow. There will be more on this topic as we dig deeper.
Source: BIS report on Central Bank Digital Currencies. Based on Bech and Garratt (2017).
II. Central Bank Digital Currency (CBDC): Business and Economic Imperative
Fueled by the emergence of cryptocurrencies and crypto assets such as stablecoins as alternative payment and settlement instruments, backed by the various collateralization models discussed in this post, CBDC has become a global priority for many Central Banks. Motivations range from gaining and retaining sovereign control of the money supply, including the volume and velocity of distribution, to addressing the payment system modernization agenda, whether for domestic, cross-border, or even regional influence.
Blockchain as a trusted transaction system has enabled a rethinking of how to connect globally disparate transaction systems while adhering to blockchain fundamentals of trade, trust, and ownership. This facilitates a systemic vehicle to transfer value, enabling a modernized system that not only addresses domestic transfers but also has the systemic ability to connect and converge. This notion of convergence has the potential to adapt to any standardized global compliance apparatus, both technological and regulatory, so as to move value. This core idea of moving value, which today bubbles up to Central Bank systems, would be systemic, seamless, and real-time. The various systemic considerations for CBDC system design may include the following; while not an exhaustive list, these systems represent the majority of payment systems that interface with Central Bank ledger systems:
a) Real-Time Gross Settlement or RTGS
b) Cross Border Payments
c) High Value Payments
d) Domestic Payment System and Infrastructure
e) Payment Modernization Agenda
The particular design of a CBDC–chiefly whether or not it bears interest–would determine its effectiveness as a monetary policy instrument and any consequential financial stability implications. Practically, the operation of a CBDC is likely to rely on some sort of public-private partnership. Central banks could outsource the distribution of the CBDC to private financial institutions, which could also be involved in the onboarding of users. Difficult questions of interoperability, regulatory demands, and cross-border use must also be answered. 1
III. Central Bank Digital Currency (CBDC): Technology consideration
As discussed earlier, the CBDC conversation adds to the payment modernization agenda its own set of complexity and its promise as an ultimate solution for the store of value, fungible unit, and settlement instrument. The technical design of such a system involves novelty and complexity, especially around integration with existing systems. The technology considerations can be fairly complex, depending on the design of CBDC: wholesale, retail, or a hybrid model that extends the wholesale (i.e., RTGS) systems to retail or to other adjunct systems such as cross-border and high-value transfer systems. We envision the linkage between CBDC wholesale design and CBDC retail design as intricate and inseparable. An open and modular technology design is not just an imperative but essential to holistically address the technology requirements needed to meet the business imperative and the payment modernization agenda. In this section we aim to address the features and core design elements of such an extensible system.
Features Enabled by Technology:
Real-time settlement and liquidity optimization
• Reduce time and uncertainty
• Better liquidity positioning and buffering
• Eliminate centralized dependencies
• Better predictability and traceability
Central Banks supervisory role
• Liquidity resolution
• Audit control
Core Technical Design Elements:
Payments Digitization and Systems Modernization
The core RTGS capabilities of Central Bank Digital Currency (CBDC) include systemic modernization with high resilience and reduced dependency on system maintenance, which today is a Central Bank system management function. At the same time, the technical design elements should include supervisory functions such as resolving liquidity gridlocks and injecting liquidity to member banks and institutions while preserving privacy and confidentiality. Therefore, the preservation of the supervisory function of Central Banks is imperative.
Resiliency Achieved Through Decentralized Processing and Systemic Overhaul
The system should extend resilient infrastructure with no single point of failure. This can be achieved via distributed (or in some cases decentralized) system design principles. The idea is to ensure the system designs are in line with the modern-day digital commerce systems aiming to address the real time payment requirement with increased velocity. The technology design needs to consider not only the various netting obligations, but must also preserve the bi-lateral and, in some cases, multilateral transactions between various banks and institutions. As discussed above, the technology design should preserve the supervisory function of Central Banks.
Efficient Handling of Payment Queue and Liquidity Optimization
As discussed before, technical design elements should include supervisory functions such as resolving liquidity gridlocks and injecting liquidity to member banks and institutions while preserving privacy and confidentiality. Some of these are implemented through efficient use of uniform queueing systems with prioritization, holding, and cancellation functions. Banks must be given greater control over payment prioritization and liquidity optimization, queuing certain payments while processing others to maintain a consistent payment policy. This would include any scrutiny that may be needed based on jurisdiction, payment size, and other workflow requirements.
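The queueing functions described above (prioritization, holding, cancellation, and liquidity-aware settlement) can be sketched with a standard priority queue. This is a minimal illustration; the class name, fields, and settlement rule are assumptions for the sketch, not any real RTGS or CBDC system’s API.

```python
import heapq
import itertools

class PaymentQueue:
    """Toy RTGS-style queue with prioritization, hold, and cancel."""

    def __init__(self):
        self._heap = []                  # entries: (priority, seq, payment_id)
        self._counter = itertools.count()  # FIFO tie-break within a priority
        self._status = {}                # payment_id -> queued/held/cancelled/settled
        self._payments = {}              # payment_id -> (priority, amount)

    def submit(self, payment_id, amount, priority):
        """Lower priority number = processed first (e.g. high-value payments)."""
        self._payments[payment_id] = (priority, amount)
        self._status[payment_id] = "queued"
        heapq.heappush(self._heap, (priority, next(self._counter), payment_id))

    def hold(self, payment_id):
        """Hold a payment for scrutiny; it is skipped until released."""
        self._status[payment_id] = "held"

    def release(self, payment_id):
        if self._status.get(payment_id) == "held":
            self._status[payment_id] = "queued"
            prio, _ = self._payments[payment_id]
            heapq.heappush(self._heap, (prio, next(self._counter), payment_id))

    def cancel(self, payment_id):
        self._status[payment_id] = "cancelled"

    def settle_next(self, available_liquidity):
        """Settle the highest-priority queued payment that fits liquidity."""
        skipped, settled = [], None
        while self._heap:
            prio, seq, pid = heapq.heappop(self._heap)
            if self._status.get(pid) != "queued":
                continue                 # held or cancelled: drop this entry
            _, amount = self._payments[pid]
            if amount <= available_liquidity:
                self._status[pid] = "settled"
                settled = pid
                break
            skipped.append((prio, seq, pid))  # insufficient liquidity: requeue
        for item in skipped:
            heapq.heappush(self._heap, item)
        return settled

q = PaymentQueue()
q.submit("P1", amount=500, priority=1)   # high-value, high priority
q.submit("P2", amount=100, priority=2)
q.hold("P1")                             # supervisor holds P1 for scrutiny
print(q.settle_next(available_liquidity=1000))  # prints P2, since P1 is held
```

A real system would add netting across queued payments and confidentiality controls over who can see the queue, but the hold/release/cancel mechanics shown here are the core of the prioritization function described above.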
Maintaining Transactions Privacy and Confidentiality
One of the most important technical design imperatives is to ensure that only the relevant parties in bilateral and/or multilateral transactions have visibility into transaction details. While Central Banks as supervisory entities may have visibility into all systemic transaction details, it is vital to protect liquidity positions and other transaction processing details, such as the payment queue and liquidity optimization tools employed by banks and financial institutions. Technology design considerations include (but are not limited to) visibility of the queuing flows during gridlock resolution and liquidity injection during deadlock resolution.
Finality of Settlement Using Digital Settlement Instruments
One of the most significant technical design considerations in introducing a Digital Asset as a settlement instrument is final and irrevocable settlement. The use of a Digital Asset (DA) or even a Digital Obligation (DO) should include payment instructions and settlement with deterministic finality. With the emergence of instruments like stablecoins**, usually a collateralized IOU, or Digital Assets storing value on a chain, the notion of settlement finality can be established. The use of a DA as a settlement instrument versus the use of a DO, which indicates settlement with off-chain/network flows, can often muddle the differentiation between messaging and settlement. The technology design should factor in this difference.
(**Note: Stablecoin is a Digital Asset that is collateralized by a price stable asset (such as Euro, USD) issued, collateralized and guaranteed by a regulated Financial Institution, for purposes of transaction processing and settlement on a blockchain powered business network).
Enhancing Systemic Liquidity Optimization with Central Bank Intervention
One important aspect of technology design which is driven by business requirements is to devise a model that systemically implements a netting engine and provides a facility to devise a contextual gridlock resolution algorithm to maximize liquidity efficiency and optimization. Since this is a CBDC supervisory function, appropriate and fractional visibility of the business function should be available for all member banks and institutions.
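One building block of such a netting engine is bilateral netting: offsetting the obligations each pair of banks owes one another so that only the net amounts need liquidity. The sketch below is a minimal illustration of that idea with made-up bank names and amounts; real gridlock resolution algorithms extend this to multilateral cycles.

```python
from collections import defaultdict

# Hypothetical gross obligations: (payer, payee, amount).
gross_obligations = [
    ("BankA", "BankB", 100),   # BankA owes BankB 100
    ("BankB", "BankA", 80),
    ("BankA", "BankC", 50),
    ("BankC", "BankA", 70),
]

def bilateral_net(obligations):
    """Offset obligations between each pair of banks, keeping only net flows."""
    pair_totals = defaultdict(int)
    for payer, payee, amount in obligations:
        pair_totals[(payer, payee)] += amount
    netted = {}
    for (payer, payee), amount in pair_totals.items():
        reverse = pair_totals.get((payee, payer), 0)
        if amount > reverse:           # keep only the net direction
            netted[(payer, payee)] = amount - reverse
    return netted

net = bilateral_net(gross_obligations)
print(net)  # {('BankA', 'BankB'): 20, ('BankC', 'BankA'): 20}
```

In this toy example the gross flows total 300 units but the netted flows total only 40, which is exactly the liquidity-efficiency gain the netting engine is meant to deliver systemically.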
Cybersecurity Risk and Infrastructure
Since current blockchain, CBDC, and payment market infrastructure depends on existing networking and Internet protocols (TCP/IP), cybersecurity risk poses a significant threat. Because distributed (DLT) or decentralized (blockchain) infrastructure is built on the same Internet protocol foundation, it inherits both the advantages (maturity and skills) and the vulnerabilities. In addition to the issues around digital assets and obligations used as settlement instruments, risk analysis and risk modeling take on a much higher profile here, given the criticality of market and payment infrastructure and the impact of vulnerability exploits and breaches. This topic warrants its own dedicated discussion.
Digital Identity Design
In plain speak, we cannot attach digital value (be it crypto assets or CBDC) to driver’s licenses or passports as proof of identity; rather, we need to rely on (and perhaps devise) a Digital Identity infrastructure. While a Digital Identity is essential to any blockchain network (be it institutional or individual), it is an essential requirement for CBDC in both wholesale (institutions) and retail (individuals). Almost all CBDC projects either have parallel Digital Identity projects or depend on a Digital Identity utility. Be it CBDC wholesale, i.e., interbank settlement, or CBDC retail, the technology design needs to consider the assignment and mapping of a digital asset to a digital identity, whether for an institution or an individual. We have discussed some elements of Digital Identity in this post.
Digital Wallets – Account Management
Digital Wallets not only represent identity (KYC/AML elements) but are also a vehicle to participate in a transaction network for purposes of transaction processing, account management, identity, and establishing the tenets of ownership, trade, and transfer. The technology considerations with respect to Digital Wallets should be in the context of account management and of a vehicle to enforce various compliance and governance controls. The notion of a Wallet derives from a holding account that assigns ownership of digital assets and digital obligations, while also mapping the Wallet(s) to Digital Identities, creating an abstraction between identities and the various classifications of accounts they represent. We have discussed some of the elements of AML and KYC in this post.
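The identity-to-wallet abstraction described above can be sketched as a simple data model: one verified digital identity owning several wallets of different account classes. All names and fields below are illustrative assumptions, not any standard or real CBDC schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DigitalIdentity:
    identity_id: str        # issued after KYC/AML verification
    legal_name: str

@dataclass
class Wallet:
    wallet_id: str
    owner: DigitalIdentity
    account_class: str      # e.g. "retail", "settlement", "escrow"
    balance: int = 0

class WalletRegistry:
    """Maps each identity to the wallets (accounts) it owns."""

    def __init__(self):
        self._by_identity = {}

    def register(self, wallet):
        self._by_identity.setdefault(wallet.owner.identity_id, []).append(wallet)

    def wallets_for(self, identity_id, account_class=None):
        wallets = self._by_identity.get(identity_id, [])
        if account_class is None:
            return wallets
        return [w for w in wallets if w.account_class == account_class]

alice = DigitalIdentity("id-001", "Alice Example")
registry = WalletRegistry()
registry.register(Wallet("w-1", alice, "retail", balance=250))
registry.register(Wallet("w-2", alice, "settlement"))
print(len(registry.wallets_for("id-001")))  # prints 2
```

The point of the abstraction is visible even in this sketch: compliance controls attach to the identity once, while any number of wallets of different account classes hang off it.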
IV. Central Bank Digital Currency (CBDC): Structure and Design
CBDC wholesale and CBDC retail present an interesting dichotomy of solution design, ranging from institutional participation to the edges of the economy, solving for the last mile, i.e., retail payments for basics such as food and utilities. While today the wholesale and retail systems are linked through intricate systems, CBDC has the potential not only to simplify the flow of value through the economic system but also to provide a natural integration by collapsing the two into a single system that facilitates movement of value seamlessly across the economy. Furthermore, by extending the seamless movement of value cross-border, which is not only a big deal but also an end goal, it should facilitate the connection of global payment systems to move value at scale, at lower cost, and with no friction.
A WHOLESALE Central Bank Digital Currency may lead to significant improvements in efficiency, speed and resilience, as well as lower the cost and complexity associated with existing payments systems. The current system is susceptible to technical faults and errors. The layering of different technologies on top of the real-time gross settlement system adds to this complexity. A system based on distributed ledger technology can reduce the number of steps in the process. 2
It is a central task of government to provide adequate payment systems as they are uniquely public goods. Means of payment in contemporary economies are based on trust, are fundamentally non-rivalrous and produce benefits enjoyed simultaneously by all citizens. Facilitating and securing the operation of payments systems is part of a central bank’s mandate, for good reason: a smoothly functioning payments system is critically important to the performance of an economy. Payments connect buyers and sellers, borrowers, and lenders. The ability to make payments securely and irrevocably is fundamental to sustaining confidence in the financial system. The nature and form of the methods consumers use to transact have changed significantly, requiring Central Banks to remain alert to shifts in payments habits.1
All modern organizations face some degree of cyber risk. Cyber attacks have been steadily increasing over the last several years, in both number and severity. Implementing new technology has become a common task for businesses small and large.
Only recently have small and mid-size businesses realized the importance of securing the implementation of these rapidly growing systems and infrastructure. Cyber threat management solutions are often complex and daunting to those not familiar with them.
This article aims to assist in the understanding of basic tools and guidelines that can properly assist both technical employees and management alike.
An event like no other
In March 2016, IT staff at Methodist Hospital in Henderson, Kentucky noticed something wrong with their internal network. They quickly assessed the situation and found the culprit to be the “Locky” strain of ransomware, a type of malware that usually spreads through email spam, often via malicious Microsoft Office attachments. Once on a victim’s network, it spreads itself and encrypts all accessible files. Ransomware is designed to lock the files it accesses with an encryption key or password known only to the individual or group spreading the malware.
The IT staff quickly posted a notice on their public website that some web services would be affected due to an “Internal State of Emergency.” They also notified the public that some electronic services and communication methods would not be available due to the malware attack. Hospital IT management and the Information Services Director quickly realized that they needed to shut down all systems on their network. After doing so, they brought the systems back online one by one, after conducting a full scan for the virus or any remnants of it.
Even with a very robust security response plan in place, the hospital still found itself unable to provide the services it valued to clients, customers, and patients. For five days, the hospital was unable to operate at full service capability due to complications from the cyber attack. IT staff were eventually able to shift system capacity to alternate servers and begin restoring data from backup systems unaffected by the attack.
Thanks to the diligence of IT staff and the management of the hospital's Information Services division, a disaster and backup recovery plan was in place before the attack, allowing the organization to recover fully without paying ransom to an unknown malicious party. According to official statements from the hospital, no patient data was lost or stolen.
Cyber risk management can be difficult to define and understand. Cyber-risk insurance can alleviate some of the pain associated with taking on any level of cyber-security risk; however, most insurers impose fairly strict reporting, auditing, and change-management requirements. Cyber liability insurance can offset some of the discomfort of securing a network, but it is certainly not a catch-all replacement for a dedicated security team with plans and policies in place at an organization. Both management-level and technical employees need to understand the impacts of these policies and procedures and implement them accordingly, down to the end-user level. This is no small task. For most organizations, getting the C-suite, CTO, and technical-level employees on board is only the first step. Disseminating these policies and ensuring automated reporting is in place at the node level is another hurdle entirely.
Once these policies and protections are designed, implemented, and properly automated, some organizations or mid-size companies may think that they can sit back and relax. This is not the case. Ensuring a security team is on top of reporting, analysis, and monitoring is an around-the-clock effort. With proper software and hardware appliances in place, most of these steps can be updated fairly easily with a robust security team internally.
Cloud computing has quickly become a relevant technology within the information technology field over the past few years. It is powerful and useful, but also a double-edged sword: security can quickly become an issue when data is placed on web-based storage servers. Information security is often broken down into the C.I.A. triad (confidentiality, integrity, and availability), and cloud computing on its own increases availability while threatening confidentiality and integrity. Much of the technology needed to secure the cloud is already in place, but the policies and procedures to fully utilize that technology are often non-existent. One example is the application of cryptography: many of the security exploits found in the last few years had very little technical impact because of PGP (Pretty Good Privacy) or AES (Advanced Encryption Standard) encryption. This paper will focus less on the technology in place and more on the policies and procedures we can use to make the most of it for a more secure cloud for everyone.
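The integrity leg of that C.I.A. triad is easy to illustrate with a short, standard-library-only sketch (the function names here are illustrative, not from any particular product): hash a file before it goes up to the cloud, then re-hash what comes back down to detect tampering.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a blob, recorded before upload."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """Re-hash data retrieved from cloud storage and compare digests."""
    return sha256_digest(data) == expected_digest

document = b"quarterly financials"
digest = sha256_digest(document)                 # kept locally at upload time
assert verify_integrity(document, digest)        # unmodified data verifies
assert not verify_integrity(b"tampered", digest) # any change is detected
```

A digest kept locally cannot tell you who changed the file, only that it changed; confidentiality requires encryption on top of this.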
Cloud computing is a great tool that also presents many security concerns for businesses, governments, and end users alike. It is a growing field in the IT world and an area of concern for many businesses. Some businesses push the envelope by not only putting data in the cloud but also using cloud services to enhance security. Meanwhile, others see it as a risk they are not willing to take and stick to local data storage solutions or private clouds. Many companies implement a hybrid cloud, keeping sensitive data and programs local while putting less sensitive files on a public or community cloud.
Cloud computing can trace its roots back to the 1960s, when servers provided applications (software as a service) to thin clients, then known as terminals. These thin clients had little local storage or processing power and relied heavily on the server. A great early example of cloud computing is Hotmail: stored online, accessible from any machine, and requiring only a web browser. Hotmail started in the 1990s and is still in use to this day. Cloud computing as we perceive it today started only seven years ago, in 2008, with OpenNebula, an open-source system for deploying private and hybrid clouds. Large organizations such as NASA, IBM, and Oracle would join the cloud revolution in the years to come. Cloud computing continues to grow, with both small and large businesses moving data storage, email accounts, and servers, via virtual machines, to the cloud. By April of 2013, over half of U.S. businesses used cloud computing to some extent.
The largest concern with cloud computing has been security. There is good reason for this: the cloud is a hacker's dream, second only to banks, because it stores large volumes of data. The prospect of stealing data from multiple companies by hacking a single server can send hackers swarming like bees to a hive. One example of a cloud computing hiccup occurred in 2008, when Google Docs suffered a software bug that resulted in corporate data being shared with users who should not have had permission to access it. Another exploit was found in a cloud environment in 2011, when researchers at Ruhr-University Bochum discovered a cryptography hole in Amazon's services that would have allowed a hacker to modify, create, and delete access rights and change login credentials. A final real-world attack occurred in July of 2012, when accounts at Dropbox, an online storage provider, were hijacked using credentials stolen from other sites, including an employee's account. Since then, Dropbox has deployed two-factor authentication as a security control. We will now look into cloud computing vulnerabilities and the steps providers and customers can take to add security.
Business Data Networks and Security by Raymond Panko and Julia Panko has a section dedicated to cloud computing that gives some background on the subject and its history. It sheds light on the technical aspects of cloud computing such as virtualization and hypervisors. They go over the advantages and disadvantages of cloud computing. The book also has a heavy focus on utility computing and software as a service (SaaS) which are two big components of cloud computing.
CompTIA Cloud Essentials from the ITpreneurs McGraw Hill series gives a general overview of all things relating to cloud computing. It does not get overly technical but serves as a good starting point for understanding what cloud computing is and why it is important from a business standpoint. It defines the types of cloud, such as private, hybrid, and public. The book brings up information on storage access speed and data replication, as well as other topics that relate to cloud effectiveness. It also covers migrating to the cloud and maintaining a cloud environment.
“Cloud Security: A Gathering Storm” was written by multiple authors, all in the Department of Computer Science at the University of British Columbia. The article gives an introduction to cloud computing technologies and security. One example they bring up is the added level of security required by the virtualization underlying cloud computing: if one virtual machine is hacked, then all of the others within the same network are at risk. The article is mainly focused on the software security of cloud computing. Software-based CFI (control-flow integrity) is discussed as a way to make sure that enforcement mechanisms are not disabled or tampered with. The last topic they bring up, which is not often talked about in cloud computing, is the lack of user knowledge: many people think that once they are on the cloud, there are no steps they must take themselves to maintain security.
“Design flaw in ‘secure’ cloud storage puts privacy at risk, JHU researchers say” was a web article released by Phil Sneiderman in April of 2014. The article highlights a flaw in which confidentiality of information could be lost while sharing information in the cloud. The key sharing used by many popular cloud storage companies had the business operating as a trusted third party. Through this the providers could access their customer’s information. This is a problem because it makes the providers a man-in-the-middle while users believe that the provider cannot see their information. To fix this problem they suggested an independent party serve the role of providing third party keys.
“Cloud Security: Reports slam data protection, national Internets, access myths” was an article written by Violet Blue of ZDNet. Violet reviews three papers provided by Google that attempt to cover cloud computing issues that enterprises face. The three biggest issues, according to Leviathan Security Group, who helped Google with this project, are availability, scarcity, and vulnerability. When it comes to availability, it is suggested that you take advantage of different providers and have your data stored in multiple regions, mainly because of natural disasters and other large-scale events. They also suggest that there is no latency difference between a local and a cloud database if both are located two thousand miles away. They are critical of cloud computing when it comes to security, but they also claim that protecting local data actually takes more tools, training, and people.
“Cloud Computing Security Case Studies and Research” was a paper written in 2013 on instances of cloud computing vulnerabilities being exposed. It revisits incidents such as the 2008 Google Docs sharing bug and the 2011 Amazon cryptography flaw described above, along with many similar examples from the early days of cloud computing that illustrate security flaws in the methodology.
Current Development & Solutions
More and more companies today are adopting cloud computing because it is cheaper and more effective than their current solutions, or more desirable than building additional infrastructure. This solution, like any other implemented in the IT world, has its ups and downs. One major downside is that the old idea of perimeter security is now dead: information is no longer tied simply to locations and objects. For this reason, among others, cloud computing is still growing and changing to cover its flaws and work more effectively for businesses and government. What people lose sight of is that the security of the cloud today depends every bit as much on policy and procedures as on the technology that runs it.
With the birth of the cloud came the security concern of users connecting to corporate data over public networks on personal devices. How do you secure data that is being sent through an insecure network on a device the company does not control? The weight of solving this issue fell upon the cloud providers themselves. They have improved, and continue to improve, their products so that data is secure in transit, much as HTTPS lets someone feel safer submitting an online form over open Wi-Fi at Starbucks. This endpoint protection can give companies some peace of mind and allow employees some freedom in accessing information where and when it is needed.
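As a small illustration of that in-transit protection, Python's standard `ssl` module shows what an HTTPS-style client gets by default: certificate verification and hostname checking. The connection snippet in the comment is a generic sketch, not tied to any particular provider.

```python
import ssl

# The default context enables certificate verification and hostname
# checking, the same guarantees HTTPS gives a browser user on open Wi-Fi.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# A client would then wrap its socket before sending any corporate data:
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(request_bytes)
```

Because verification is on by default, a man-in-the-middle presenting a forged certificate causes the handshake to fail rather than silently exposing the data.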
One great method that is already in place with companies such as Google and Dropbox is two-factor authentication. What Google currently has in place is an authentication method where you enter your password and you are then required to enter a six digit one-time password that was sent to your cell phone. One-time passwords and two-factor authentication add a whole new layer to the security model and at least one of these two methods should be considered by all cloud providers. It makes the life of a hacker harder because dictionary and brute force password attacks are now more cumbersome. This procedure alone could make securing the cloud a much more manageable task.
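Most six-digit codes of this kind follow the TOTP scheme of RFC 6238. Below is a minimal standard-library sketch of the algorithm, checked against the RFC's published test vector; it illustrates the scheme itself, not Google's or Dropbox's actual implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password: HMAC-SHA1 over a 30-second
    counter, dynamically truncated to the requested number of digits."""
    key = base64.b32decode(secret_b32)
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 gives
# 94287082 for eight digits with SHA-1.
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, for_time=59, digits=8) == "94287082"
```

The server and the phone share only the secret; because both derive the code from the current time, the code expires on its own, which is what blunts replay and brute-force attacks.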
Another issue that has come up with cloud security is the role of government. What should it be allowed to access? Under what circumstances can it read company or personal data off the cloud? In the EU (European Union), a solution has already been proposed whereby users are told when their information has been accessed, in an attempt to balance user security with a government's duty to protect its people.
An additional policy-related issue with the cloud that has come up, and will continue to impact the technology, is encryption. Relating to the previous concern, encryption gives companies and people a say in whether their data is accessed. If a cloud storage server is hacked, encryption also forces attackers to break a code before they can read the data they have stolen, buying forensics teams time to investigate. If the government used its authority to access a company's cloud storage, encryption would still give the company a say in the matter and a chance to defend its rights. As more companies become aware of how the cloud works, its pitfalls, and its upside, we will see more of them encrypting data before sending it up to the cloud.
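The encrypt-before-upload flow can be sketched as follows. To keep the example standard-library-only and runnable, this toy cipher derives a keystream from HMAC-SHA256; a real client would use a vetted scheme such as AES-GCM, not this construction, and the function names are invented for illustration.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative keystream from HMAC-SHA256 in counter mode.
    NOT production crypto; shown only to make the flow concrete."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_for_upload(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_after_download(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = os.urandom(32)                   # kept client-side, never uploaded
blob = encrypt_for_upload(key, b"payroll.csv contents")
assert decrypt_after_download(key, blob) == b"payroll.csv contents"
assert b"payroll" not in blob          # the provider sees only ciphertext
```

The point of the design is key custody: because the key never leaves the client, neither a breached provider nor a subpoenaed one can produce readable data on its own.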
Continuing on the policy end of things, IBM is pushing collaboration to improve security within the cloud environment. The idea is that attackers work together to hack into systems and carry out their agenda, so why would companies not also work together to enhance security? The company is putting out APIs (application programming interfaces) that allow different companies to contribute to shared threat information or harden their own systems. This information will be shared so that companies can learn from each other and enhance their security. IBM is taking measures to ensure that only genuine users have access to this cloud and hopes that it will speed the process of advancing cloud security.
A dark but possible future for the cloud, and the Internet in general, is localization forced on the web by governments. This would be very damaging to end users and businesses with international interests. Many countries are now known for not keeping to themselves on the Internet: Russia, China, nations within the EU (European Union), and even our own United States have all been found spying on other nations, and not only on their enemies but on their own allies as well. While it is true that nations spy on each other and the cloud will only extend this problem, laws do not deter the lawless. Many within the IT and business communities know the harm that localization could do and will fight it tooth and nail. This is a future that is hard to predict with cloud computing. If all-out cyber warfare ever becomes reality, many companies will convert to tight security models, and the old perimeter-network mentality could return very quickly.
Much of the technology to make the cloud grow is already moving at 100mph, but the policy and procedure side of things has grown slowly, and will be catching up to keep the cloud secure in the months and years ahead. When email was publicly available for the first time in the 1990s many more people fell victim to phishing because people and businesses had to adapt. The cloud can be a secure and effective tool when the policy and procedures behind it are secure and effective. Just like we’ve done with previous new technologies, we will have to adapt.
The cloud is a wonderful tool that should be taken advantage of by government, businesses, and home users, and there are steps they can take to protect their information when it is placed out in the digital open. Companies and government can, for example, work together to advance cloud security and learn from each other. Another step is to incorporate secure tunnel technology into cloud communications to defend against man-in-the-middle attacks. Finally, data placed in sensitive locations like the cloud should be encrypted. These are but a few steps to help improve an already growing and improving technology that is bound to shape the future of information technology.
Cyber security is a critical part of our national security plan in the 21st century. Critical infrastructure relies on our cyber capabilities now more than it ever has. Assets that depend on information systems include transportation, financial systems, healthcare, energy, communication, manufacturing, and several other areas. Threats such as breaches, malware, ransomware, social engineering, and denial of service (DoS) attacks are constantly carried out by rogue governments, terrorists, criminals, and showoffs. The government felt that the role of cyber was so important that in 2008 President George W. Bush introduced the Comprehensive National Cybersecurity Initiative (CNCI) to lay the groundwork for information security that would keep our nation safe. President Obama further expanded the initiative by determining that its activities should be part of a broader U.S. cyber security strategy. Another part of his new strategy was better transparency with the public, which is why the CNCI was declassified in 2010. The CNCI was part of the 54th National Security Presidential Directive (NSPD), on Cybersecurity Policy. This directive was meant to address updated security threats, anticipate future threats, and protect the confidentiality, integrity, and availability of both classified and unclassified networks. There are a total of twelve initiatives within the CNCI that affect multiple federal agencies, including the Department of Homeland Security (DHS), the Office of Management and Budget (OMB), and the National Security Agency (NSA).
While the CNCI is a government-wide initiative that affects just about every branch at the federal level, a few agencies played a larger role. From development to implementation, the DHS, NSA, or OMB was involved in carrying out each individual initiative, because of the unique role each of these agencies plays in the federal government's cyber security mission.
The DHS was originally the Office of Homeland Security, created in response to the September 11th attacks. In November of 2002 that office became a department with five key missions. Those missions were to prevent terrorism, secure our borders, enforce immigration laws, safeguard cyberspace, and improve national preparedness and resilience. Agencies under the DHS include U.S. Customs and Border Protection (CBP), the Transportation Security Administration (TSA), the Coast Guard, the Secret Service, and the Federal Emergency Management Agency (FEMA). An office within the DHS called the National Cyber security Center (NCSC) handles information on systems belonging to the NSA, FBI, DoD, and DHS. This office was created from NSPD 54, the same legislation that created CNCI.
On October 24, 1952, the NSA was born under great secrecy. Its two primary missions are information assurance and signals intelligence. Under the information assurance mission, the agency is responsible for protecting federal communications. It is the largest intelligence agency on the planet, believed to be capable of intercepting and storing 1.7 billion communications per day, and its headquarters in Ft. Meade uses as much power as the city of Annapolis. While there is much controversy over the agency's domestic signals collection, the information assurance mission remains unchallenged.
The GAO was formed in July of 1921 to audit, evaluate, and carry out investigations for the U.S. Congress. It is the premier auditing institution within the federal government. The GAO handles mostly financial audits to hold the government accountable for its spending, but it will also conduct security audits of federal cyber security systems. According to 2014 reports, 17 of 24 investigated federal agencies had inadequate information security controls.
The Twelve CNCI Initiatives
The Comprehensive National Cybersecurity Initiative (CNCI) is a vital part of NSPD 54. It laid out initiatives for the federal government with timeframes for multiple agencies to meet these goals. A few examples of these initiatives include, but are not limited to: having the OMB enhance the Einstein program and reduce external access points in coordination with the Secretary of Homeland Security; educating the existing cyber workforce of the federal government to guarantee capable individuals and shore up specialized skill sets; and having the Office of Science and Technology Policy develop plans to expand cyber research and sustain our technological superiority in cyberspace.
According to the White House website, there are twelve initiatives within the CNCI that impact our national cybersecurity, and the Cybersecurity Coordinator put forth a summary explanation of the CNCI for the public. The first initiative was directed toward Trusted Internet Connections (TIC). The OMB and the DHS took primary ownership of this goal to reduce the number of external access points and set up a security baseline. This change to a single federal enterprise network consolidated security efforts across agencies.
The second initiative was to install an enterprise wide intrusion detection system (IDS) that would be capable of finding out when and where unauthorized access was attempted and identifying malicious content. To make this happen a new technology called Einstein 2 would be implemented. The system is capable of providing alerts in real time and presenting information in a visual format. Resources were invested to acquire the manpower to utilize this system and now analysts have an improved awareness of what is going on within our networks and the vulnerabilities they have.
Yet another decision made was to roll out intrusion prevention systems (IPS) called Einstein 3. This third initiative is where the assistance of the NSA would come into play for the purpose of adapting cyber threat signatures to the latest threats. The NSA has also been involved in piloting and developing Einstein 3 to assist the DHS. Intrusion prevention works differently from intrusion detection in that rules can be set that automate network defense countermeasures during an attempted breach.
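The difference between detection (Einstein 2) and prevention (Einstein 3) can be sketched in a few lines. The signatures and function names below are invented for illustration; real Einstein signatures are classified threat indicators maintained by the NSA and DHS.

```python
# Hypothetical byte-pattern signatures, mapped to human-readable names.
SIGNATURES = {
    b"<script>alert(": "inline-script injection",
    b"\x4d\x5a\x90\x00": "PE executable header in payload",
}

def inspect(packet: bytes):
    """Return the name of the first matching signature, or None."""
    for pattern, name in SIGNATURES.items():
        if pattern in packet:
            return name
    return None

def ids_pass(packet: bytes, alerts: list) -> bytes:
    """Detection (IDS): log an alert, but always deliver the traffic."""
    hit = inspect(packet)
    if hit:
        alerts.append(hit)
    return packet

def ips_pass(packet: bytes, alerts: list):
    """Prevention (IPS): a matching rule triggers an automated
    countermeasure, dropping the packet with no human in the loop."""
    hit = inspect(packet)
    if hit:
        alerts.append(hit)
        return None
    return packet

alerts = []
assert ids_pass(b"GET /<script>alert(1)", alerts) is not None  # delivered
assert ips_pass(b"GET /<script>alert(1)", alerts) is None      # dropped
assert alerts == ["inline-script injection"] * 2
```

The structural difference is only the return path on a match, but operationally it is the line between analysts reacting to alerts and the network defending itself.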
Initiative four called for better coordination and redirection of our federal research and development (R&D) undertakings. The main goals of this effort were to set priorities, make sure that efforts were not being duplicated, and fill gaps in research. The desired outcome of this initiative was less wasteful spending of taxpayer dollars and better outcomes for R&D.
The fifth initiative challenged the federal cyber operation centers to improve their current situational awareness. To achieve this, they would have to ensure that agencies were sharing information and taking advantage of each agency's unique proficiencies to build the best national cyber defense cooperative conceivable. The improved collaboration would enhance federal capacity in all cyber mission areas. A new office within the DHS called the National Cyber security Center was created to oversee this objective and protect U.S. government communication networks. It would also share data between the FBI, NSA, Department of Defense (DoD), and the DHS.
Sixth on the initiative list would be to design and execute an enterprise-wide cyber intelligence strategy. The goal of this would be to deter and mitigate threats to both federal and private sector computer networks. The plan called for the expansion of our current cyber counterintelligence (CI) education and awareness programs. The cyber CI plan works hand in hand with the 2007 National Counterintelligence Strategy of the United States of America and supports other components of the CNCI plan.
The goal of the seventh initiative put forward would be to increase the security of our classified networks. Classified information is anything deemed top secret, secret, or confidential by the United States government. Examples of the type of information that can be found under these distinctions include information relating to war strategy, diplomatic relations, counterterrorism, law enforcement, and intelligence.
The eighth initiative was set forth to expand cyber education programs. The government realized that an information system is only as good as the people who run it. In response, it started a cyber-education upgrade comparable to the science and mathematics upgrades we saw in the 1950s. The NSA partners with educational programs such as the Security and Risk Analysis (SRA) major at Pennsylvania State University. These partnerships designate an educational system as a Center of Academic Excellence (CAE). The DHS and NSA also gave this designation to the University of Maryland's Cybersecurity Center (MC2).
Moving on to the ninth CNCI initiative, the goal was to define and develop enduring “leap-ahead” technology, strategies, and programs: thinking five to ten years in advance and preparing for serious cybersecurity threats. In this focus area, ‘out of the box’ thinking was encouraged in order to predict some of the grand challenges we would face. The government also made it a priority to communicate with the private sector in this effort, in hopes of seeking out the best mutual outcomes.
In order to secure cyberspace you need methods to deter your adversaries. This is the tenth initiative in the CNCI and an important one. It calls for senior policymakers to think beyond traditional approaches and think about long-range calculated alternatives. The proposed measures included ramping up warning capabilities, finding roles for the private sector and worldwide allies to play in the cybersecurity community, and implementing responses to actions from both state and non-state actors.
Eleventh on the CNCI initiative list requests an approach for global supply chain risk management. This initiative tackles the risks brought on by globalization of our commercial information supply chains. There must also be support for devices and services throughout their lifecycle. There is also a call for action to work with private sector industry leaders to manage and mitigate risks to the supply chain.
The twelfth and final initiative rolled out by the CNCI stated that the federal government would have to define its role in extending cybersecurity to our critical infrastructure. The government itself relies on many private sector resources that are susceptible to cyber-attack, and the American population also depends on Critical Infrastructure and Key Resources (CIKR). This initiative builds on preexisting cooperative efforts between the government and private sector vendors of critical infrastructure. To make this happen, the DHS and its private sector companions have set forth a series of milestones, with both long-term and short-term goals. Finally, the initiative puts a focus on sharing information on cyber threats between the public and private sectors so both sides can maintain the best awareness possible.
Impact and Use of CNCI
The level of secrecy in government and the amount of time needed to make these initiatives happen make it difficult to do a full spectrum analysis on how the government has implemented the CNCI. In spite of that there is evidence to be found that CNCI is making an impact from both the government and private sector organizations.
Einstein was a great physicist, a Nobel Prize winner, and a catalyst for the Manhattan Project in World War II. Einstein is also the name of a family of systems developed by the United States Computer Emergency Readiness Team (US-CERT) under the DHS for intrusion detection and prevention. Einstein 1 was initially developed to create “situational awareness” within civilian agencies. It was developed in 2003 and deployed a year later; one state government that deployed Einstein 1 was Michigan, for the collection and analysis of network security information. Einstein 1 monitors the IP addresses, ports, times, and protocols of all network communication for the purpose of reactive security.
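A toy sketch of that kind of flow-metadata monitoring looks like the following; the record layout and addresses are invented for illustration, and the key point is that only metadata (IPs, ports, times, protocol), never packet contents, is recorded and analyzed after the fact.

```python
from collections import Counter

# Each flow record: (source IP, destination IP, dest port, protocol, time).
flows = [
    ("203.0.113.9",  "192.0.2.10", 22,  "tcp", "2004-06-01T03:00Z"),
    ("203.0.113.9",  "192.0.2.11", 22,  "tcp", "2004-06-01T03:01Z"),
    ("198.51.100.4", "192.0.2.10", 443, "tcp", "2004-06-01T09:30Z"),
]

# Reactive analysis: which external source generated the most SSH flows?
ssh_sources = Counter(src for src, _dst, dport, _proto, _t in flows
                      if dport == 22)
top_source, count = ssh_sources.most_common(1)[0]
assert (top_source, count) == ("203.0.113.9", 2)
```

Analysis like this can only describe what already happened, which is exactly why the later Einstein versions added alerting and automated blocking on top of it.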
Einstein 2 was a much-needed update to the Einstein program, released in 2008. The big difference between Einstein 2 and its predecessor is that it can do more than passively observe network traffic: the new system could alert when malicious activity occurred on the network and offer insight into the nature of the threat. This is where reactive network security became active network security. In November of 2007, the OMB mandated that the rollout of any Trusted Internet Connection (TIC) would require the use of an Einstein 2 system. Intelligence collected from the Einstein 2 program reports over 5.4 million intrusion attempts on the federal government in a single year.
The most recent and advanced release to date is the Einstein 3 Accelerated (E3A) system, developed with the help of the NSA and going live in 2013. What makes the new system different is that it not only detects threats but acts upon them: instead of merely alerting that an enemy missile is on its way to your base, the system automatically shoots it down without human interaction. E3A is also capable of deep packet inspection and of using indicators based on recognized or suspected malicious behavior. Another major change in E3A is that it was designed to be provided as a managed security service through Internet Service Providers (ISPs). This was pushed by the DHS after the OPM cyber incidents of 2015 put the records of 4.2 million people at risk. As of late 2015, the massive private sector companies AT&T, Verizon, and CenturyLink had adopted E3A firewalls to filter traffic on government networks. After AT&T joined in November of 2015, its Vice President of Technology, Chris Smith, wrote the following in a blog post: “Today, information is currency, power and advantage. The combination of government threat information and commercial threat indicators boosts our ability to help the federal government and businesses in their ongoing fight against cyber threats.” (Boyd, Web.)
The Einstein program alone touched on many of the initiatives put forth by the CNCI. Trusted connections, intrusion detection, and intrusion prevention from the first three initiatives were settled by the Einstein program. Initiative seven, increasing the security of classified networks, received major help from the program as well, as did the fifth initiative's goal of enhancing situational awareness. Given that ISPs now use Einstein 3 themselves, an argument could be made that the government is protecting critical infrastructure by providing this technology to the companies that run the cyber backbone of the United States.
Another result of the CNCI is the Utah Data Center, officially titled the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center. Completed in May 2014 at an estimated cost of 1.2 billion dollars, it was the largest ongoing DoD project during its three years of construction. The facility was built to support the intelligence community's mission to keep America safe. The data center was made for NSA use and has sparked controversy as to why a facility that big is necessary. Leaked media documents have linked the facility to mass collection of communication data, but no official evidence has been released. The facility is believed to be capable of housing five thousand servers that could hold up to five zettabytes of data; to match that amount you would need five trillion gigabytes, or roughly 62 billion iPhones. Outsiders believe that the NSA is using a custom UNIX operating system to help protect its information. It is likely that the Utah Data Center was implemented not only to meet a need for more information storage but also to serve the sixth initiative of the CNCI, expanding our counterintelligence abilities.
While these two advances in infrastructure and technology certainly get the ball rolling, more needs to be done to ensure all twelve of these goals are met. The DHS and NSA lead the way in the Networking and Information Technology Research and Development (NITRD) program. Over twenty government agencies participate, including the DHS, Department of Justice (DOJ), National Aeronautics and Space Administration (NASA), and Department of Commerce (DOC). The fourth initiative, sharing and coordinating R&D efforts, is well on its way because of the NITRD program, which was originally started in 1991 under the title ‘High-Performance Computing Act’.
In 2008 another component of the CNCI, initiative nine, called for leap-ahead technology, again a task commonly carried out by the DHS and NSA. In 2009 the National Cyber Leap Year Summit brought forward ideas that could improve cyber defense. Among them were making security a priority in hardware design so that hardware itself can counter attacks, and a cyber “Interpol” that could enforce international cyber law and investigate cybercrimes that cross borders. The event drew 150 researchers from government, industry, and academia. The DHS also runs a Continuous Diagnostics and Mitigation (CDM) program to stay on the cutting edge of technology. The goal of CDM is to expand diagnostic capabilities through higher network sensor capacity and prioritized risk alerts.
Finally, in 2014 the DHS implemented a program to address the twelfth initiative of the CNCI: extending cyber security into the critical infrastructure of the private sector. The Critical Infrastructure Cyber Community (C3) voluntary program is a partnership with the private sector to support its efforts to use the National Institute of Standards and Technology (NIST) cyber security framework. The program focuses on handling cyber risks with an all-hazards approach at the enterprise level. Its services include the cyber resilience review (CRR), consultation on implementing the cyber security framework, and a central location to share knowledge. The government offers incentives for involvement, among them cyber security insurance, grants, and public recognition. The program is important in that it provides a central location and an easy starting point for private-sector entities seeking help in implementing the best cyber security possible.
Cyber security is a critical part of our national security plan in the 21st century. The Comprehensive National Cybersecurity Initiative (CNCI), enacted by the Bush administration and supported by the Obama administration, has helped the U.S. advance cyber defense by leaps and bounds. The Einstein program has advanced our firewall technology to automate network defense. The Utah Data Center, while controversial, will aid the government's future intelligence mission. Education programs in schools will strengthen a cyber workforce that understands and respects the complex threats we face moving forward. Investments of time and effort into leap-ahead technology will keep the government competitive with its adversaries. Finally, the Critical Infrastructure Cyber Community (C3) voluntary program opens the door to the private sector so we can secure critical infrastructure and foster cyber security cooperation in America. Moving forward, the CNCI will need to be updated and further pursued. It was a critical first step that has made the United States a safe place to operate in cyberspace, and it can remain so in the years ahead.
Conventional databases often achieve #resilience by storing data over multiple physical nodes, but all are controlled by a top node. By contrast, in many DLT-based systems, the ledger is jointly managed by all nodes. The communication needed to achieve #consensus between these nodes is the main reason why DLTs tend to have lower transaction throughput. As we argue in the Bank for International Settlements – BIS QR feature below, this implies that the infrastructure choice can only be made once the architecture of the CBDC and the associated operational role of the central bank have been decided upon. Specifically, existing DLT likely could not be used for the “Direct CBDC architecture” in larger jurisdictions. When it comes to vulnerabilities, neither a DLT-based system nor a conventional one has a clear-cut advantage. The key vulnerability of a conventional architecture is the failure of the top node; one vulnerability of DLT is the #consensusmechanism itself. Full paper: https://lnkd.in/dHkiE2h #digitalcurrencies #blockchain #token
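The throughput argument can be made concrete with a toy message-count model: a top-node database fans a write out to its replicas, while a naive all-to-all consensus round requires every node to exchange messages with every other. Real consensus protocols (PBFT, Nakamoto consensus, etc.) differ in their exact costs; this sketch only illustrates why coordination overhead grows with node count.

```python
# Toy model of communication cost, not any specific protocol.

def messages_centralized(n: int) -> int:
    # Top-node design: the coordinator fans one write out to n replicas.
    return n

def messages_all_to_all(n: int) -> int:
    # Naive DLT-style round: every node exchanges with every other node,
    # so message cost grows quadratically with the number of nodes.
    return n * (n - 1)

for n in (4, 16, 64):
    print(n, messages_centralized(n), messages_all_to_all(n))
```

At 64 nodes the all-to-all round already costs 4,032 messages against 64 for the centralized design, which is the intuition behind DLTs' lower transaction throughput.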
Join us in Las Vegas for Predictive Analytics World for Healthcare (formerly Mega-PAW), our largest event to date, with eight tracks of sessions covering the commercial deployment of predictive analytics for healthcare. Register to attend one or more of MLW’s five co-located conferences.
In the not-too-distant past, if you were running clinical trials, you would never have expected this question to be posed: “do we really need a Clinical Trial Management System (CTMS)?” You needed, without question, a system that would capture and report information on the operational progress of your trials. But these days we hear that assumption challenged more and more often.
For many years the scope and functionality of widely available CTMS systems have spread and grown unwieldy. Software vendors incorporated additional tools into their products, such as investigator databases, monitoring visit reports, safety letter distribution, and even Trial Master File and document compliance functions. These tools, while not directly relevant to the day-to-day operations of a trial, served as the hub of centralized clinical operations. Sponsors appreciated the idea of having one place to go for their information, and bombarded vendors with a steady stream of requests for additional functionality that supported their particular way of operating, as distinct from the way another sponsor might work or be organized.
Of course, not all requests could be fulfilled, so the rollout of new functionality was predicated on variables such as common sponsor demand, vendor cost containment, and technological feasibility. But in recent years one key factor changed the CTMS landscape entirely: large-scale outsourcing. Unfortunately, traditional CTMS solutions do not adapt well to a heavily outsourced environment, especially when sponsors are using multiple CROs.
The primary barrier to using traditional CTMS systems in an outsourced model is the question of who is doing the actual work. Most CROs serve multiple clients, and a large part of the success of their business model relies on limiting fixed costs (non-billable resources). This means they need their workforce to be fluid and adaptable (while still performing to client satisfaction). To support this, they need their own internal management systems, where knowledge and process are transferable and fungible regardless of trial or sponsor. This is the only way CROs can realize economies of scale, but both parties (sponsor and CRO) end up invested in their own optimized CTMS, each with compelling reasons for that investment. Rarely, however, can these systems be shared, talk to each other, align their metadata, or even allow access from one party to the other. At best, each party can review reports or data extracts from the other, almost never in real time, and rarely in its preferred format.
Here in 2018, with approximately 50% of every research dollar going to third-party providers, what are sponsors to do? Sponsors are still responsible for the trial, its subjects, and the data it produces. But in an outsourced environment, sponsors are really no longer involved in the day-to-day operational aspects of a trial; this is what they are paying CROs to do. And so the M (Management) in CTMS no longer applies to the sponsor. Likewise, all of the associated functionality in traditional CTMS systems that focused on the M of a trial is of limited value to sponsor users. CROs still require this functionality, since they are doing the operations, but they have their own CTMS systems configured around their unique (and consistent) processes.
Sponsors need to consider replacing the M in CTMS with an O, and pursue the development and implementation of a Clinical Trial Oversight System. In an outsourced environment, oversight is really what sponsors are supposed to be doing, and indeed this is the regulatory expectation. The complexity of changing the mindset at a sponsor from “doer” to “overseer” is perhaps a topic for another article, but clearly the overseer needs data summarized and consolidated for a trial to ensure they know what is going on, and more importantly, to be able to take action based on this information.
This is why it pains us to see sponsors who outsource most of their trials still spend large sums of money for, and devote enormous effort to, implementing a traditional CTMS system. Inevitably, in this situation, a significant portion of the functionality they spent months discussing and configuring is underutilized, or worse, forgotten by their user community. There are a number of vendors in the space that offer systems or software that can meet the oversight needs and requirements of a sponsor – they are not (and shouldn’t be called) CTMS systems. They are often simpler and faster to configure (less functionality) and less expensive. Furthermore, these solutions can integrate with CRO and other third-party systems that could eliminate the requests for CROs to work in sponsor systems, while still allowing sponsors to see and analyze the relevant operational data. It is important to emphasize that oversight does not consist solely of looking at reports and metrics, but includes many other important factors such as communication/collaboration streams and issue management. This locus of functionality gets at the true heart of sponsors’ responsibilities and business needs.
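The integration point made above, pulling operational data from multiple CRO systems into one consolidated sponsor view, can be sketched as a small normalization step. Every field name, site, and figure below is invented for illustration; real integrations would work against each CRO's actual extract format or API.

```python
# Hypothetical oversight-layer integration: two CROs deliver operational
# extracts with different field names, and the sponsor normalizes them
# into one consolidated view for oversight reporting.

cro_a_extract = [{"site": "101", "enrolled": 24, "screen_fail": 3}]
cro_b_extract = [{"site_id": "202", "subjects_randomized": 18, "sf_count": 5}]

def normalize_a(rows):
    # Map CRO A's schema onto the sponsor's common model.
    return [{"site": r["site"], "enrolled": r["enrolled"],
             "screen_failures": r["screen_fail"]} for r in rows]

def normalize_b(rows):
    # Map CRO B's schema onto the same common model.
    return [{"site": r["site_id"], "enrolled": r["subjects_randomized"],
             "screen_failures": r["sf_count"]} for r in rows]

consolidated = normalize_a(cro_a_extract) + normalize_b(cro_b_extract)
total_enrolled = sum(r["enrolled"] for r in consolidated)
print(total_enrolled)  # 42
```

The point of the design is that each CRO keeps working in its own system; the sponsor's oversight layer only aligns the metadata and aggregates, rather than forcing either party into the other's CTMS.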
Do you really need a CTMS system? The answer, as usual, is “it depends”. If you do not outsource the majority of your trials to CRO providers and do much of the operational and management work in house, then yes, you could benefit from a traditional CTMS solution. If you do outsource a large portion of your trials, then you would be better served by options that focus on supporting trial oversight. How you implement this oversight and the associated process change is as important as, or perhaps more important than, the technical solution you choose. So, before you commit millions of dollars to your next CTMS initiative, ask yourself two questions: 1) Do we manage trials or oversee them? 2) What do we need to support that work?