AI — Trust, Risk & Security

We trust artificial intelligence with both personal tasks and business functions, but how far does that trust go? When it comes to making billion-dollar business decisions or life-and-death medical care choices, is it okay to trust a computer completely? Concerns around trusting AI are many; they include bias, inaccuracy, design flaws, lack of transparency, and security.

Many organizations are currently wrestling with these AI trust issues, and this conversation was recently crystallized in a report from Gartner® that addressed notions of AI trust, risk, and security management (AI TRiSM). The report highlighted that AI TRiSM typically requires organizations to implement a best-of-breed tool portfolio approach, as most AI platforms will not provide all required functionality.

 

What Is an AI TRiSM Strategy?

Addressing AI TRiSM requires a multi-pronged strategy capable of managing risks and threats while promoting trust in the technology. Best practices for handling AI TRiSM establish the following core capabilities:

  • Explainability. An AI TRiSM strategy must include justifications or information that explains the purpose of the AI technology. The model should be described in terms of its purpose, strengths, weaknesses, likely behavior, and potential biases. This aspect of an AI TRiSM strategy ought to clarify how a specific AI model will provide accuracy, accountability, fairness, stability, and transparency with respect to decision-making.
  • ModelOps. Model operationalization (ModelOps) of an AI TRiSM strategy covers the lifecycle management and overall governance of all AI models, including both analytical models and models based on machine learning.
  • Data Anomaly Detection. Data monitoring tools are used to analyze weighted data drift or degradation of key features to prevent attacks, bias, and process mistakes. This aspect of AI TRiSM is meant to highlight data issues and anomalies before decisions are made based on information provided by a model. Data monitoring tools are also helpful when it comes to optimizing model performance.
  • Adversarial Attack Resistance. Adversarial attacks alter the results of machine learning algorithms to gain an advantage, creating different types of organizational loss and harm. This is done by providing adversarial inputs or malicious data to an AI model after it has been implemented. Adversarial attack resistance methods prevent models from accepting adversarial inputs throughout their entire life cycle: from development, through testing, and into implementation. For example, an attack resistance technique might be designed to help models tolerate a certain level of noise, as this noise may be adversarial input.
  • Data Protection. AI technology requires massive amounts of data, and protecting data is a main concern when it comes to implementation. As a component of AI TRiSM, data protection is particularly critical in highly regulated industries such as healthcare and finance. Regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the US and the General Data Protection Regulation (GDPR) must be followed or else organizations risk being found in non-compliance. Additionally, AI-specific regulations are currently a major focus for regulators, particularly when it comes to protecting privacy.
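The noise-tolerance idea mentioned under adversarial attack resistance can be made concrete with a small sketch. The classifier, weights, and perturbation size below are hypothetical toys, not a real attack-resistance product; the point is only to show how measuring prediction stability under small random perturbations can flag inputs that adversarial noise could flip.

```python
import random

random.seed(0)

def predict(x):
    """Toy linear classifier: returns 1 if w.x + b > 0 (hypothetical weights)."""
    w, b = [0.8, -0.5], 0.1
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def noise_stability(x, epsilon=0.05, trials=200):
    """Fraction of small random perturbations (uniform in +/-epsilon) that
    leave the prediction unchanged. Low values flag inputs where a tiny,
    possibly adversarial, nudge could flip the model's output."""
    base = predict(x)
    same = sum(
        predict([xi + random.uniform(-epsilon, epsilon) for xi in x]) == base
        for _ in range(trials)
    )
    return same / trials

# An input far from the decision boundary is stable under noise...
print(noise_stability([2.0, 0.0]))   # 1.0
# ...while one sitting on the boundary is easily flipped.
print(noise_stability([0.0, 0.2]))   # roughly 0.5
```

In practice this kind of stability check would run against a real model and a threat-appropriate noise model, but the shape of the test is the same.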

 

The State of AI TRiSM

Regulators and government agencies around the world have been issuing guidance and announcing potential new laws designed to enforce the fair and transparent use of AI.

The European Union has taken a multifaceted approach to this issue. In one aspect, the EU has looked to shape the use of AI technology through investments in research and public-private partnerships. The EU has also formed a multidisciplinary group of experts, the European AI High-Level Expert Group (HLEG), to develop ethical guidelines around bias, transparency, shared values, and explainability. The group will also recommend policies on AI-related infrastructure and funding. Additionally, the EU has proposed hefty fines for companies that do not comply with established AI guidelines.

In the United States, regulation of AI appears to be in a fairly early stage. Some government officials have called for rules to limit the use and design of AI, but the US government has not developed a comprehensive approach to the issue. In May 2021, the National Institute of Standards and Technology (NIST) released a draft publication designed to trigger discussion on trust in AI systems.

Currently, companies operating in regulated industries need transparency regarding the AI systems they use. These companies must be able to monitor the precision, performance, and potential bias of their technology. They must record this information at a level of detail suitable for accountability, compliance, and possibly customer service.

Therefore, regulated companies should have staff and structures in place to address the many variables related to risk and compliance — including handling the massive amounts of necessary documentation.

 

AI Security Risk Management

There is no one-size-fits-all platform when it comes to addressing every aspect of AI TRiSM. Companies must cobble together various internal and external policies and tools to cover the various distinct risk and threat categories, including bots, system faults, query attacks, and malicious inputs. Theft of property, asset damage, asset manipulation, model theft, and data corruption are all potential types of damage that can be caused by these threats.

Some of these threats are pre-existing, while others target the AI system after implementation. Therefore, organizations must have mitigation measures in place that cover both types of threats. Enterprise security and trustworthiness measures can be used to address pre-existing threats, while ModelOps measures can be used to mitigate post-implementation threats. Measures that address financial, brand-related, political, or other macro risks fall outside the scope of AI TRiSM. Furthermore, AI TRiSM measures do not specifically address data breaches, fraud, or theft. These measures are specifically focused on securing and protecting AI models.

 

Trust in AI

While there is some conceptual overlap between AI security and cyber security more generally, use of artificial intelligence requires not only trust in a system’s security features, but trust in an AI’s output, results, and potential implications. Organizations that invest heavily in keeping their technology secure should also invest in a comprehensive approach to sustaining trust in the technology. The goal is to have AI systems that are explainable, have minimal bias, and remain dependable.

An essential first step to sustaining trust in an organization’s AI technology is establishing a high degree of explainability. Unfortunately, this is not an easy goal. AI systems on the market, such as those used for medical benefits or online advertising, are often black-box solutions that rarely offer visibility into underlying data, logic, and processes. Furthermore, effective deep learning and other machine learning approaches are not transparent to end users and do not provide much insight into their inner workings. While some efforts are underway to increase transparency, companies can support explainability by establishing strong sets of goals, best practices, and organizational transparency around their AI systems.

Another key step toward establishing trust in AI is the detection and mitigation of bias. Once again, this is not as easy as it might seem. Issues around bias, fairness, and inclusivity can be subjective. For the business-oriented application of AI, measures related to eliminating bias should be calibrated in accordance with the context and application objectives. 

With proper calibration, bias-detection and mitigation algorithms can catch issues and address them before they become deeply embedded in a model. In addition to addressing moral and regulatory concerns, bias checks can simplify risk analysis and add value to system performance reports.
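As one minimal, illustrative example of such a calibrated bias check (the metric choice, sample decisions, and threshold here are hypothetical, not prescribed by any report), the demographic parity gap compares a model's selection rates across groups:

```python
def selection_rate(outcomes):
    """Share of positive (1) model decisions in one group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups.
    A gap near 0 suggests the model treats groups similarly on this
    (deliberately narrow) fairness metric."""
    rates = [selection_rate(v) for v in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions keyed by demographic group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
}
gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.375
```

Whether a 0.375 gap is acceptable depends entirely on the context and application objectives, which is exactly why the calibration step matters.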

To maintain trust, AI systems must also remain dependable. Model outputs can drift for conceptual or data-related reasons and therefore require constant monitoring to avoid generating adverse results. This facet of AI TRiSM is more straightforward than the subjective aspects above, and there are tools available that are well suited to monitoring the performance of AI models.
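One common statistic such monitoring tools use to quantify drift is the Population Stability Index (PSI), which compares the distribution of a baseline sample of model scores against a recent production sample. The sketch below is a minimal pure-Python version; the sample data, bin count, and the usual rule-of-thumb thresholds are illustrative assumptions rather than any specific vendor's method.

```python
import math

def psi(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index between a baseline sample and a recent
    sample of model scores in [lo, hi]. Common rules of thumb: < 0.1
    stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log term below is always defined.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 1000 for i in range(1000)]               # uniform scores
drifted = [min(1.0, x * 0.5 + 0.5) for x in baseline]    # shifted upward
print(round(psi(baseline, baseline), 4))  # 0.0: no drift
print(round(psi(baseline, drifted), 4))   # well above 0.25: significant drift
```

A monitoring pipeline would recompute this on a schedule and alert when the index crosses whatever threshold the team has calibrated.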

 

Market Direction for AI TRiSM

In September 2021, Gartner released a comprehensive report on the current state of the market for AI TRiSM solutions and projections for the near future of the market.

“We expect the market to quickly evolve, driven in part by increasing regulations and by increased capabilities to operationalize AI models,” the report said. “New generations of combined functionality will emerge over time, and we expect [end-to-end TRiSM systems to arrive] by 2026.”

According to the report, the AI TRiSM market will progress in five distinct phases:

  • Phase 1 — Fragmentation: The AI TRiSM market today is highly fragmented. AI vendors do not provide all the requisite functionality to effectively and continuously manage AI trust, risk and security. This leaves users selecting best-of-breed products from more than one provider across the primary AI TRiSM categories to fulfill these requirements.
  • Phase 2 — Feature Consolidation: AI TRiSM capabilities will be consolidated into just two buckets, down from the current five. ModelOps and Data Protection will be the primary two vendor categories required to address AI TRiSM issues. At the same time, AI vendors will expand their own packaged TRiSM functionality.
  • Phase 3 — Solution Integration: ModelOps alerts and remediations will be integrated into overarching and existing Enterprise Risk Management and Security Orchestration systems. Third-party models used by the enterprise will be incorporated into ModelOps platform management (beyond first party enterprise developed models). Alerts and remediations for adversarial attacks and malicious transactions will be integrated into existing Security Orchestration or SIEM systems.
  • Phase 4 — Market Consolidation: Most of the model- and platform-neutral ModelOps vendors in the market today will be acquired by broader AI platform vendors or Enterprise Risk Management vendors, leaving very few pureplay ModelOps vendors. These consolidated platforms will coexist with innovative solutions that extend capabilities to composite AI and generative AI. Data Protection for AI model data will continue to evolve from solutions used to protect data outside AI applications.
  • Phase 5 — Augmented-AI Managed TRiSM: New end-to-end fully managed enterprise AI TRiSM systems that themselves use AI will emerge so that AI systems can be self-correcting under human oversight.

 

Until the market reaches this final stage, enterprises must create a tapestry of tools and practices to address AI TRiSM. Fortunately, some platforms offer cross-functionality.

“Some, but not all, of today’s ModelOps platforms include explainability functions and also check for anomalous data patterns in incoming production data. However, they do not generally detect the source of the anomalies,” the report said. “Further, pre-existing data anomaly detection tools may already be used by an enterprise, for example in fraud detection operations, and should optimally be integrated into a comprehensive ModelOps operation.”

When it comes to selecting solutions, the report said, decision-makers should focus on addressing their industry-specific, use case needs: “For example, data protection may be the most important priority in the healthcare sector, while adversarial attack resistance may be top of the list for defense contractors.”

The report’s authors also said it is critical to assess the perspectives of all potential users.

“These stakeholders include data and analytics leaders, C-level executives responsible for AI along with data scientists, machine learning engineers, enterprise AI architects, legal and compliance teams, cloud operations, security, privacy and risk managers,” the report said. “In particular, explainability solutions must be able to satisfy the different requirements of these unique user personas so they can understand model risks particular to their domain.”

 

Simplifying AI TRiSM with TripleBlind

Organizational leaders looking to address AI TRiSM should consider the data protection afforded by TripleBlind’s innovative privacy-enhancing technology.

The massive amounts of data required to develop AI models are not always available. For systems designed to process personal information, like those in healthcare and financial services, privacy regulations can prevent significant access to siloed data.

Our innovative technology is specifically designed to facilitate AI development. When partnering with TripleBlind, model developers can use Blind Learning to access data from multiple parties in a way that maintains privacy and protects valuable algorithms. This technology combines and expands the efficiency of split learning with the data residency advantages of federated learning, resulting in a single and highly-efficient privacy-preserving solution.

Blind Learning allows data to be operationalized without ever revealing any personal or private information, helping data partners remain compliant with regulations like HIPAA and GDPR. This technology also allows data holders to retain their information, addressing issues related to data residency. Furthermore, model owners never have to ship their full model to another organization.

Our technology also allows for the protection of all data types. For example, healthcare researchers looking to develop AI models for X-ray imagery can easily share diagnostic images without revealing any personal information about patients. In this use case, source images are obfuscated and encrypted through privacy-enhancing computation.

With our technology, AI developers and users can tap into new sources of essential data and create valuable new data partnerships. If you would like to learn more about how our technology can be your data protection solution in pursuit of AI TRiSM, contact us today.

 

Gartner, Market Guide for AI Trust, Risk and Security Management, Avivah Litan, Farhan Choudhary, Jeremy D’Hoinne, 1 September 2022

GARTNER is the registered trademark of Gartner Inc., and/or its affiliates in the U.S. and/or internationally and has been used herein with permission. All rights reserved.

5 Uses of IoT Device Data

When you hear “IoT,” it’s easy to jump to smart fridges and fancy thermostats, but according to the International Data Corporation, there will be more than 55 billion devices connected to the Internet of Things (IoT) by 2025. These devices generate large amounts of useful data, so it’s no surprise that businesses are increasingly embracing the technology.

Before getting into use cases, it’s important to address Internet of Things security and data protection. Most business data is sensitive, so IoT data privacy and information security are essential. Companies need IoT cloud data security and other measures to ensure sensitive information remains secure.

With cybersecurity in mind, consider the following five uses of IoT device data.

1. Supply Chain

One of the most visible applications of IoT device data is in the supply chain. While we are all familiar with the capability to track shipments, business applications of the technology go far beyond tracking packages.

GPS tracking devices are also capable of producing other types of useful data beyond tracking data. Some logistics operations use these devices to determine how strictly drivers are following predefined routes. When this data is combined with the organization’s data on fuel costs, it can reveal if shipping vehicles are being used properly.
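A route-adherence check like the one described above can be sketched in a few lines: flag any GPS fix that is far from every planned waypoint. The waypoints, fixes, and tolerance below are hypothetical, and a production system would compare fixes against route segments rather than bare waypoints.

```python
import math

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

def off_route_fixes(route, fixes, tolerance_m=500.0):
    """GPS fixes whose distance to every route waypoint exceeds the
    tolerance -- a crude indicator the driver left the planned route."""
    return [f for f in fixes
            if min(haversine_m(f, w) for w in route) > tolerance_m]

# Hypothetical planned route and recorded fixes as (lat, lon) pairs.
route = [(39.0997, -94.5786), (39.1031, -94.5830)]
fixes = [(39.0999, -94.5790),   # tens of metres from the route
         (39.2000, -94.7000)]   # kilometres off the route
print(off_route_fixes(route, fixes))  # [(39.2, -94.7)]
```

Joining the flagged fixes with fuel-cost records is then an ordinary data-merge step on vehicle ID and timestamp.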

Supply chain operations are also increasingly using computer vision systems. According to a recent report from Gartner®, “edge CV yields new capabilities that allow vendors and early adopters to differentiate. It will also enable large-scale business process automation (particularly in manufacturing, logistics, and supply chain), drive process improvement through alerts and analytics (primarily in retail and healthcare), and support autonomous vehicle operation.” [1]

2. Manufacturing

The complex and quantitative nature of manufacturing makes the industry ripe for IoT use cases, and IoT devices are increasingly found as part of manufacturing operations. The connected capabilities of this equipment allow for the monitoring of product quality and equipment conditions.

In the typical manufacturing facility, quality control processes involve looking for defects in finished products. With connected production equipment, production line workers can monitor the quality of machine outputs in real time. This can resolve quality issues faster and significantly reduce the production of defective products.

According to a recent report from Gartner, IoT device data will increasingly be used to enable so-called Edge AI, where “the IoT endpoint (asset) runs AI models to interpret captured or external data and drives endpoint functions (automation and actuation). In this case, the AI model is trained (and updated) on a central system and deployed to the IoT endpoint.” [2]

Gartner also says the adoption of Edge AI will be especially pronounced in industrial settings: “In these use cases, data is captured at an IoT endpoint and transferred to an AI system hosted within an edge computer, gateway or other aggregation point. This edge AI model is used for many industrial enterprises in scenarios on a factory or plant floor, where sensor data from various assets is normalized and analyzed, and/or integrated within various business planning and logistics applications.” [2]

3. Agriculture

While electronic devices have long been used in agriculture to track the effects of weather and climate, IoT devices are being deployed in new areas of agriculture to provide more granular data.

In a commercial greenhouse, data from IoT sensors can be used for automated climate control. Based on the types of plants being grown, sensor data can tell an automated system to lower heat or increase humidity.
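That kind of control loop can be sketched as a simple setpoint check with a deadband. The plant profile, field names, and thresholds below are illustrative assumptions, not a real greenhouse API:

```python
def climate_actions(reading, targets):
    """Map one IoT sensor reading to actuator commands using simple
    per-plant-type setpoint ranges; values inside a range trigger
    nothing, which avoids rapid actuator toggling."""
    actions = []
    lo_t, hi_t = targets["temp_c"]
    lo_h, hi_h = targets["humidity_pct"]
    if reading["temp_c"] > hi_t:
        actions.append("lower_heat")
    elif reading["temp_c"] < lo_t:
        actions.append("raise_heat")
    if reading["humidity_pct"] < lo_h:
        actions.append("increase_humidity")
    elif reading["humidity_pct"] > hi_h:
        actions.append("decrease_humidity")
    return actions

# Hypothetical setpoints for a tomato house and one sensor reading.
tomato = {"temp_c": (18.0, 27.0), "humidity_pct": (60.0, 80.0)}
reading = {"temp_c": 29.5, "humidity_pct": 55.0}
print(climate_actions(reading, tomato))  # ['lower_heat', 'increase_humidity']
```

A real deployment would run this per plant zone on a scheduler and feed the commands to the greenhouse's actuator interface.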

On a dairy farm, IoT devices are used to track the location and health of individual cattle, making it easier for farmers to monitor the health of their animals. A Russian company called Mustang applies analytics to IoT device data to find connections between milk production and factors like temperature or humidity. IoT devices can even be used to track pregnancies and react quickly to events.

4. Retail

In retail, IoT device data is being used to perform increasingly targeted marketing.

Some stores are now equipped with electronic beacons that use Bluetooth connectivity to send push notifications to people within a certain radius. By sending targeted offers to certain passersby, these stores can attract more foot traffic and increase purchases.

While connected cameras are often associated with store security, they can now be augmented with machine vision technology to gain more information about people who enter a store, including the items they browse and how long they spend in the store. Data pertaining to a customer’s physical journey within the store can be used for more effective product positioning.

5. Service and Product Development

Many modern devices and appliances have internet connectivity to provide additional functionality to customers. These connected devices also often send information back to manufacturers indicating how the devices are being used.

Some companies use this information to provide a more customized experience. For instance, the manufacturer of a smart TV might apply artificial intelligence to an individual user’s data to recommend shows and movies.

This same usage data for a smart TV could also be passed on to content producers, which, in turn, use this information to shape future content.

Get More from Your IoT Data

If your employees have connected devices, or if there are sensors in your facilities, you have plenty of IoT data just waiting to be utilized. However, there are privacy and security challenges to consider.

First, some business IoT data could contain personal information, such as GPS tracking data or customer purchase histories. Second, IoT business data is often highly valuable and sharing that data with an analytics provider does pose risks of misuse and theft.

TripleBlind’s innovative privacy-enhancing technology can facilitate the safer use of IoT data in a way that is superior to other privacy measures like federated learning. Specifically, our technology allows for the private sharing of all data types. 

This feature is particularly relevant for the Internet of Things, as IoT data can come in many different forms. Furthermore, manufacturers of IoT devices currently do not have visibility into data flowing from their devices through service providers due to privacy concerns. TripleBlind technology allows IoT manufacturers to address privacy concerns, unlocking a wealth of insights.

If you would like to know more about the TripleBlind Solution, contact us today.

 

[1] Gartner, “Emerging Technologies Impact Radar: Edge AI”, Eric Goodness, Danielle Casey, October 26, 2021.

[2] Gartner, “Emerging Technologies and Trends Impact Radar: Internet of Things”, Matthew Flatley, Eric Goodness, September 30, 2021.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

94% of Surveyed CDOs State Optimized Data Privacy Technology Leads to Increased Revenues, New TripleBlind Survey Reveals

Nearly Half of Healthcare and Financial Services Executives Believe Enhanced PET Solutions Give Their Organizations a Competitive Advantage

KANSAS CITY, MO — August 17, 2022 — In a new survey released today by TripleBlind, 94% of CDOs from healthcare organizations and financial service firms stated deploying data privacy technology that enforces existing data privacy regulations would result in increased revenues for their organizations. 37% of respondents estimated improved collaboration would increase revenues by as much as 20%. In addition, 46% stated increased data collaboration would give their organization a competitive advantage over other organizations. TripleBlind is the creator of the most complete and scalable solution for privacy enhancing computation.

 

Additional key findings of the survey included:

  • 64% of respondents are concerned that employees at organizations with which they are collaborating will use data in a way not authorized in signed legal agreements.
  • 60% are concerned that people at organizations with which they collaborate will use data in a way that violates HIPAA and/or other data privacy regulations.
  • 60% are concerned that the privacy-enhancing technology (PET) solution deployed by data collaboration partners will modify the data and make the results of analyses inaccurate.

 

“There is strong agreement that optimizing effective data collaboration through advanced PET solutions will result in both increased revenues and enhanced competitive advantage,” said Riddhiman Das, TripleBlind’s Co-founder and CEO. “Today, advanced PET solutions exist that render legal agreements obsolete and prevent people at both the data user and data owner from using data in a way that violates HIPAA and other data privacy regulations or modifies data in a way that results in inaccurate analyses.”

 

Healthcare Organizations Especially Concerned about Regulatory Compliance and Accuracy

Healthcare organizations represented in the survey include hospitals, healthcare systems, pharmaceutical manufacturers and health insurance companies. In terms of the value of enhanced data privacy to these organizations, 43% of healthcare system respondents believe it would result in increased revenues of up to 20%, while an additional 48% believe revenues would increase by up to 10%. 52% of healthcare system respondents and 50% of hospital respondents stated expanded data collaboration would give their organizations a competitive advantage.

The level of concern about people at data user organizations using data in a way that violates HIPAA and/or other data privacy regulations varies among healthcare organizations. This was of greatest concern to health insurance carriers, with 86% citing it, followed by 71% of hospital respondents and 50% of healthcare system respondents. 75% of healthcare system respondents were also concerned that data user organizations had installed PET solutions that would make the results of analyses inaccurate; 73% of pharma manufacturers and 60% of hospital respondents shared this concern.

 

Financial Services Firms are Optimistic about the Potential of Improved Data Collaboration

Financial services firms included in the survey are banks, broker/dealers and credit card issuers. 60% of broker/dealer CDOs/senior data managers state improved data collaboration practices would increase revenues by up to 20%, while 59% of bank executives had the same response. Half of broker/dealer respondents, 47% of bank respondents, and 44% of credit card issuer respondents believe enhanced data sharing would create a competitive advantage for their organizations.

Regarding use of data, financial institutions are somewhat less concerned than their healthcare counterparts. 67% of credit card issuer and 63% of bank respondents are concerned that people at data user organizations will use data in a way that violates one or more data privacy regulations. 50% of broker/dealer respondents are concerned that people at data user organizations will deploy PET solutions that modify data and make analyses inaccurate, compared with just 27% of bank and 25% of credit card issuer respondents.

 

To receive the complete results of the survey, please visit https://tripleblind.com/cdo-data-privacy-report/

 

 

About the Survey

TripleBlind surveyed 150 chief data officers (CDOs) and other executives in charge of data management at healthcare and financial services organizations with annual revenues of at least $50 million and at least 250 employees. IntelliSurvey, which conducts approximately 5,000 online surveys annually, executed the survey.

 

About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Ensuring Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well understood principles, such as federated learning and multi-party compute. Our innovations radically improve the practical use of privacy preserving technologies, by adding true scalability and faster processing, with support for all data and algorithm types. We support all cloud platforms and unlock the intellectual property value of data, while preserving privacy and ensuring compliance with all known data privacy and data residency standards, such as HIPAA and GDPR. 

TripleBlind compares favorably with existing privacy preserving technologies, such as homomorphic encryption, synthetic data, and tokenization, and has documented use cases for more than two dozen mission-critical business problems.

 

For an overview, a live demo, or a one-hour hands-on workshop, email contact@tripleblind.com.

Contact

mediainquiries@tripleblind.com

New TripleBlind Survey Highlights CDOs’ Thoughts on Data Privacy

94% of CDOs State Enhanced Data Privacy Boosts Revenues, New TripleBlind Survey Finds 

We are excited to share with you the findings from a new survey we’ve just conducted with chief data officers (CDOs) and other senior executives in charge of data management at large healthcare and financial service organizations with annual revenues of at least $50 million and at least 250 employees. The survey results are significant and demonstrate the urgency of enabling data collaboration through advanced privacy enhancing technology (PET) solutions to increase both revenues and competitive advantage.

 

Key findings from the survey include:

  • 94% of CDOs stated deploying data privacy technology that enforces existing data privacy regulations would result in increased revenues for their organizations.
  • 37% of respondents stated they estimate improved collaboration would increase revenues by as much as 20%.
  • 46% stated increased data collaboration would give their organization a competitive advantage over other organizations.

 

CDOs at healthcare organizations had different priorities and levels of concern than those at financial service firms. Across different types of healthcare organizations, CDOs were most concerned about data user organizations using the data in a way that violates HIPAA and/or other data privacy regulations. The most concerned were CDOs at healthcare insurance providers, hospitals, and healthcare systems. However, despite their hesitations, respondents believe enhanced data privacy can increase revenues by 10% to 20%.  

Financial service firm CDOs were more optimistic than their healthcare counterparts, with most respondents stating that improved data collaboration practices would increase revenues by up to 20%. While slightly less concerned than healthcare CDOs, financial service firm CDOs are still concerned that people at data user organizations will use data in a way that violates privacy regulations. 

Both groups of CDOs had hesitations around people at data user organizations deploying PET solutions that modify data to make analyses inaccurate.

For CDOs looking for a PET to increase data collaboration, TripleBlind’s software-only solution is delivered via a simple API and is the most complete and scalable solution for privacy enhancing computation. It offers CDOs and their organizations an advanced PET solution that renders legal agreements obsolete, preserves compute performance, prevents organizations from using data in a way that violates data privacy regulations, and doesn’t modify data in a way that results in inaccurate analyses.

Be sure to stay up-to-date with our blog as we dive deeper into key findings from our CDO survey. You can review the complete survey report here or read the press release here.

 

For more information, reach out to us for more details about the survey or give us a call to chat about your data privacy concerns. 

Constellation Research Report: TripleBlind - A Lateral Approach to Privacy-Enhanced Data Sharing

TripleBlind Makes an “Elegant and Easily Verified” Data Privacy Promise, New Constellation Research Report Finds

Report Confirms MITRE Engenuity Evaluation of Solution Capabilities

KANSAS CITY, MO — August 9, 2022 — TripleBlind “makes an elegant and easily verified privacy promise … the architecture is simple and there is … no interference with any of the raw data, unlike in the case of homomorphic encryption or differential privacy,” notes a new report on the company from analyst firm Constellation Research. The new report, titled “TripleBlind: a Lateral Approach to Privacy-Enhanced Data Sharing,” is now available on the TripleBlind and Constellation Research websites. TripleBlind is the creator of the most complete and scalable solution for privacy enhancing computation.

“TripleBlind is a leader in the relatively new and fast-evolving category of privacy-enhanced computation (PEC),” notes Steve Wilson, Vice President and Principal Analyst at Constellation Research and the author of the report. “With TripleBlind APIs and user interface, customers have access to secure multiparty computation (SMPC) and other advanced privacy tools,” he notes. Wilson continues, “This includes TripleBlind’s own computationally superior Blind Learning (a patented solution for distributed, privacy-first, regulatory-compliance machine learning at scale), Advanced Encryption Standard (AES) inference, distributed inference, and distributed regression techniques … all data remains localized within the customers’ own networks.”

He goes on to call alternative PEC technologies such as homomorphic encryption, statistical perturbation or pseudonymization “complex and fragile,” while branding others that copy data to an intermediate clean room for hosted analysts as “ones that challenge data localization policies.”

 

Constellation Confirms Evaluation Completed by MITRE Engenuity

The Constellation report also confirms the findings of an independent analysis of TripleBlind and other cybertechnologies completed by MITRE Engenuity in February 2022. To evaluate these technologies uniformly, the MITRE Engenuity team delineated synthetic use cases that approximated the major types of studies performed in observational research or pragmatic clinical trials over the past year. MITRE then created a list of features its team believed necessary to evaluate each technology against each use case. The team determined that four features were mandatory and five were desirable. It also identified two business-related metrics that gauged technology readiness and cost of deployment.

Constellation confirmed MITRE Engenuity’s findings that TripleBlind met all four of the mandatory requirements, fully met three of the five desired requirements, and partially met the remaining two. In terms of technological readiness, TripleBlind scored as fully operational and low cost in terms of operations and maintenance.

The Constellation report concluded, “The TripleBlind code has undergone rigorous independent evaluation … proving both the fundamental mathematics and its high technology readiness. And the company has impressive reference implementations with prestigious institutions such as the Mayo Clinic.”

 


About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Ensuring Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well understood principles, such as federated learning and multi-party compute. Our innovations radically improve the practical use of privacy preserving technologies, by adding true scalability and faster processing, with support for all data and algorithm types. We support all cloud platforms and unlock the intellectual property value of data, while preserving privacy and ensuring compliance with all known data privacy and data residency standards, such as HIPAA and GDPR. 

TripleBlind compares favorably with existing methods of privacy preserving technology, such as homomorphic encryption, synthetic data and tokenization and has documented use cases for more than two dozen mission critical business problems.

For an overview, a live demo, or a one-hour hands-on workshop, email contact@tripleblind.com.

 

Contact

mediainquiries@tripleblind.com

Constellation Research Report: TripleBlind - A Lateral Approach to Privacy-Enhanced Data Sharing

TripleBlind Makes an “Elegant and Easily Verified” Data Privacy Promise

Constellation Research recently published a report analyzing TripleBlind’s solution, and validating an independent analysis of TripleBlind and other cybertechnologies completed by MITRE Engenuity in February 2022. “TripleBlind: a Lateral Approach to Privacy-Enhanced Data Sharing” is now available on the TripleBlind and Constellation Research websites.

Steve Wilson, Vice President and Principal Analyst at Constellation Research and the author of the report, finds that “TripleBlind is shifting the paradigm in privacy-enhancing technology for research data sharing” and “makes an elegant and easily verified privacy promise.”

The full report outlines the need for privacy-enhancing technologies (PET), explores the current state of PET, and identifies how TripleBlind compares to other methods of privacy-enhancing computation (PEC). In analyzing methods of PEC, Wilson notes:

  • “TripleBlind is a leader in the relatively new and fast-evolving category of privacy-enhanced computation (PEC).”
  • Alternative PEC technologies such as homomorphic encryption, statistical perturbation or pseudonymization can be “complex and fragile.”
  • While TripleBlind allows all data to remain localized within the customers’ own networks, other PEC technologies that copy data to an intermediate clean room for hosted analysts “challenge data localization policies.”
  • TripleBlind’s “architecture is simple and there is … no interference with any of the raw data, unlike in the case of homomorphic encryption or differential privacy.”
  • “Privacy-enhancing computation is still taking shape as a category within the broad field of privacy-enhancing technologies and data protection as a service. TripleBlind is poised to be a leader and could shift expectations of privacy in data analytics.”

 

Simply put, Wilson said, “TripleBlind’s award-winning research-driven solutions may help shift the paradigm in a highly risk-averse environment by enabling organizations to share analytic outcomes instead of sharing data.”

In February 2022, MITRE Engenuity analyzed TripleBlind and other cybertechnologies by delineating synthetic use cases that approximated the major types of studies performed in observational research or pragmatic clinical trials over the past year. MITRE then created a list of features its team believed necessary to evaluate each technology against each use case: four mandatory features and five desirable ones. It also identified two business-related metrics that gauged technology readiness and cost of deployment.

The Constellation report confirms MITRE Engenuity’s findings that TripleBlind met all four of the mandatory requirements, fully met three of the five desired requirements, and partially met the remaining two. In terms of technological readiness, TripleBlind scored as fully operational and low cost in terms of operations and maintenance.

The Constellation Research report is the third analyst report this year to feature TripleBlind’s solution, showing increased interest in PET and validating TripleBlind’s leadership in this space.

Check out these other analyst reports to learn more about how PEC can solve a number of business problems, and how TripleBlind fares in comparison to competing solutions:

 

To learn more or to schedule a free demo, contact us today!

Sharing Data And Obliterating Privacy Are Not Synonymous

“Technology can completely obliterate privacy. Coming up with laws and policies to stop it from doing so is a vital task for governments,” begins this article from MIT Technology Review. TripleBlind has a different and rosier prediction for the future of privacy. Read on! Privacy-enhancing technologies (PET) make it possible to collaborate with and analyze real data that has been de-identified, without the data ever leaving the owner’s firewall. Essentially, PET makes data collaboration possible without compromising individual privacy.

The MIT article discusses how some corporations are leveraging consumer data in new ways to target people for ads. Traditionally, marketers have focused on consumer behavior by household, sometimes down to the individual level within a household, and tailored ad and promotion strategies accordingly. Today, they are supplementing this strategy by targeting based on consumer behaviors. Frequent store purchase (FSP) data can tell marketers which consumers, for example, visit coffee shops regularly, so that ads and promotions can be targeted to them in a more accurate and personal way.

The MIT article highlights the complexity involved for governments writing privacy legislation. Legislators need to understand and address how data-driven business choices can harm society as a whole, especially with regard to consumer markets and decision making. For example, enterprises like Amazon might use consumer behavioral data to create a new line of products, such as shoes or camera bags. Initially, that undercuts other shoe and camera bag manufacturers, who are now sidelined by Amazon –– which makes sense, since Amazon holds a competitive advantage by aggregating and using consumer data. However, this long-term strategy ultimately harms individual consumers by removing product choices previously available in the open and equitable marketplace.

While a number of data privacy laws protect consumer data for some industries, states, and marginalized groups, there is still no comprehensive federal privacy law that broadly prevents corporations from using consumers’ data without their knowledge. Antidiscrimination laws protect some consumers of particular genders, ages, ethnicities, or sexual orientations from being targeted on the basis of those identities, but there is no regulation on algorithms sorting and targeting consumers based on other behaviors or identities. Our recent webinar discusses how biases happen in big data, how data biases can harm marginalized groups, and how to overcome big data biases.

Martin Tisne offers this analogy in MIT’s article: 

“People have a right to safe drinking water, but they aren’t urged to exercise that right by checking the quality of the water with a pipette every time they have a drink at the tap. Instead, regulatory agencies act on everyone’s behalf to ensure that all our water is safe. The same must be done for digital privacy: it isn’t something the average user is, or should be expected to be, personally competent to protect.”

While there is no question that the privacy of everyone’s personal data should be protected, legislation is just one of many mechanisms to protect consumer rights. Private-sector solutions ensure that privacy isn’t obliterated by offering a combination of techniques to guarantee sensitive data is never exposed, only used for authorized purposes, and never stored beyond its intended use. Enterprises can unlock the intellectual property value of data, while ensuring compliance with new and changing privacy regulations, by employing privacy-enhancing technology (PET) during collaboration. It’s a win-win for governments, corporations and consumers alike.

It’s important to note that not all PET solutions are alike. Some center around homomorphic encryption or federated learning, while others use a combination of privacy-enhancing techniques. The good news? There’s a straightforward set of requirements CDOs in most industries should expect from any PET solution:

  • The solution should include one-way algorithm encryption, so that any data or algorithm can never be decrypted by its user,
  • It should not degrade the organization’s hardware compute performance,
  • Data should always remain behind the data owner’s firewall to enforce HIPAA, GDPR and other data privacy and data residency standards,
  • Data users should only be able to perform operations specifically approved by the data owner,
  • Data should never be degraded or inaccurate,
  • The data user should only work with real data, not artificial datasets designed to de-identify the data,
  • The solution should be software only so data is not exposed to any potential hardware vulnerabilities.
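
As a rough illustration of the "operations specifically approved by the data owner" requirement above, here is a hypothetical sketch of a data owner gating what a data user may compute. The class and method names (`DataOwner`, `approve`, `run`) are ours for illustration only, not any vendor's actual API:

```python
# Hypothetical sketch: a data owner permits only pre-approved operations
# to run against its records, and only aggregate results leave its boundary.
from statistics import mean


class DataOwner:
    def __init__(self, records):
        self._records = records   # stays behind the owner's firewall
        self._approved = set()    # operations the owner has signed off on

    def approve(self, op_name):
        """Record the owner's explicit approval of a named operation."""
        self._approved.add(op_name)

    def run(self, op_name, fn):
        """Execute an approved operation locally; raw records never cross out."""
        if op_name not in self._approved:
            raise PermissionError(f"operation {op_name!r} not approved by data owner")
        return fn(self._records)


owner = DataOwner([42, 58, 61, 39])
owner.approve("mean_age")
print(owner.run("mean_age", mean))  # 50 -- only the aggregate leaves
```

A real PET solution enforces this kind of boundary cryptographically rather than with application code, but the contract is the same: the data user sees results, never the records.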

 

TripleBlind’s innovations radically improve the practical use of PET even further by adding true scalability and faster processing. Gartner, Omdia, MITRE Engenuity and Constellation Research all recently evaluated how the TripleBlind solution compares to other PETs. Gartner also recently named TripleBlind as a Cool Vendor.

To learn more about how businesses can leverage sensitive data for growth while protecting privacy and ensuring compliance with data regulations, contact us today.

European Data Protection

How TripleBlind Addresses the European Data Protection Board’s Schrems II Recommendations

When it comes to handling private data, there are many things organizations need to keep in mind, and one very important consideration is the European Union’s General Data Protection Regulation (GDPR).

The Schrems II court decision triggered a major rethink for many companies looking to adhere to the GDPR, particularly when it comes to cross-border privacy compliance. Subsequently, comprehensive guidance on GDPR was issued by the European Commission, the European Data Protection Supervisor (EDPS), and the European Data Protection Board (EDPB).

The EDPB guidance was fairly comprehensive on how companies should move forward with international data transfers outside the European Union, including data held in the cloud and inter-business transfers. In advice laid out by the EDPB, data sharing compliance was illustrated through several different use cases. Notably, multi-party computation — involving private data being divided and then processed in multiple jurisdictions — is permissible if organizations in each jurisdiction have controls in place that prevent the re-identification of individuals based on the shared data.
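
To make the multi-party computation use case concrete, here is a minimal sketch of additive secret sharing, a standard building block for dividing private data so that no single jurisdiction can recover it. The field size, party count, and values are illustrative assumptions, not a description of TripleBlind's implementation:

```python
import secrets

PRIME = 2**61 - 1  # all share arithmetic is done modulo this prime


def split_into_shares(secret: int, n: int) -> list[int]:
    """Split a value into n additive shares; any n-1 shares alone reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Only a holder of *all* shares can recover the original value."""
    return sum(shares) % PRIME


# Two parties in different jurisdictions each split a private value
# and distribute the shares among three compute nodes.
shares_a = split_into_shares(52_000, 3)
shares_b = split_into_shares(61_000, 3)

# Each node adds the pair of shares it holds locally; no node ever sees
# either original value, yet the reconstructed sum is exact.
summed = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(summed))  # 113000
```

Because each share is uniformly random on its own, an organization holding one share cannot re-identify anything about the underlying individual — the property the EDPB use case requires of each jurisdiction.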

 

Guidance Issued Following the Schrems II Case

To understand the most recent guidance, you need to know about Schrems II.

In the case, activist Maximilian Schrems argued that Facebook’s transfer of personal data from Ireland to the United States was in violation of the GDPR. Facebook had been transferring information to its U.S. operations legitimately, the company argued, through the use of Standard Contractual Clauses — data transfer contracts approved by the European Commission.

One of the decisions to come out of the case determined that companies must make sure the recipient entity can provide privacy protection that meets GDPR guidelines. Companies can no longer take the “sign and done” approach with SCCs, which had been very prevalent before Schrems II.

The court case also laid out new terms related to enforcement and compliance. Rather than issue fines and penalties to non-compliant entities, the court shifted the focus to issuing injunctions designed to stop the flow of private data. This shift means companies can no longer see non-compliance fines as a cost of doing business. Instead, they must do all they can to remain compliant or else risk a devastating stop of data flow.

Furthermore, the European court found policy and contractual measures are no longer enough to keep companies from running afoul of GDPR. These measures must be backed up with technical measures designed to protect privacy along the entire data cycle, including during storage, processing, and sharing.

The regulatory need for robust data protection measures comes as analytics and artificial intelligence systems need more data than ever. Multiparty computation and other tools designed for private data sharing are becoming more essential to support these advanced Big Data systems. Organizations driven by large volumes of data need to understand the sea change unleashed by Schrems II, or else struggle to remain compliant, keep data flowing, and remain competitive with their compliant competition.

Additionally, privacy-related collective legal action is becoming more frequent across Europe. Considering the potential problems related to class actions and compliance, data-heavy companies need to take deliberate steps and a proactive posture.

 

Compliance in the Post-Schrems II Era

Organizations looking to be proactive should be aware of two major emerging trends established by Schrems II.

Companies are increasingly taking technical steps to protect private data. According to the EDPB, data-sharing agreements on their own now carry little weight for compliance, and international data transfers should have protective measures in place at both ends of the arrangement.

With enforcement shifting toward injunctions and away from fines, companies that use large amounts of sensitive data must prioritize compliance to sustain operations. According to the EDPB, data in use and data at rest should both be actively protected by robust privacy technology, and processor-controller organizations found to be non-compliant can be devastated by a stoppage of data flow.

 

Using TripleBlind’s EDPB-Endorsed Approach to Compliant Data Sharing

By preserving privacy and ensuring compliance with GDPR, the TripleBlind Solution unlocks the intellectual property value of sensitive data. Our software-only solution solves a broad range of use cases, particularly those in the healthcare and financial services industries.

Specifically, our technology is based on multi-party compute, an approach endorsed by the EDPB. Companies that use our solution not only take a proactive approach to compliance, but also position themselves for the future by adding true scalability, with support for all data and algorithm types.

 

If you would like to learn more about how our technology can help your company get the most from its sensitive data while remaining compliant with GDPR and other regulations, contact us today.

Off We Go! TripleBlind Hits the Road in Second Half 2022

We can’t wait to meet and connect with you on privacy as a cornerstone of your data strategy! The TripleBlind team spent the first half of the year speaking and exhibiting at conferences around the world, meeting business leaders across healthcare and financial services to discuss the ability of privacy-enhancing computation (PEC) to unlock the value of data stored away within organizations. From August through December, we will continue traveling the world, and we look forward to sharing our approach to PEC, which combines well-understood principles, such as federated learning and multi-party compute, with a unique strategy that preserves privacy and ensures compliance with existing data privacy and data residency regulations. Check out where we’re visiting for the rest of the year and join us if you can!

On September 6, the TripleBlind team will fly to Switzerland for the Intelligent Health AI Global Summit. TripleBlind’s Senior Vice President of Healthcare, Dr. Suraj Kapa, will present “Accelerating Intelligent Health: The Role of Privacy-Enhancing Technology.” We’ll also have a booth, where you can ask what makes TripleBlind compare favorably to other existing PET methods or schedule a meeting or demo with us. Keep an eye out for us there!

We’ll then head to Singapore from October 12-13 for Big Data & AI World. TripleBlind will be speaking and we’ll also have a booth, so feel free to stop by!

We’re pleased to announce that our paper was accepted to two events. We’ll present it in Washington, D.C. at AMIA’s 2022 Annual Symposium, as well as at ACM CCS 2022, which takes place in Los Angeles from November 7-11.

A day after wrapping ACM CCS 2022, we’ll be back in Switzerland for BioData World Congress, where TripleBlind will be speaking. More details to follow. 

TripleBlind will be sponsoring HLTH, taking place in Las Vegas from November 13-16! We’ll also have a booth there, where you can ask us any questions or schedule a dedicated 1:1 meeting with one of our experts.

For more information or to schedule an in-person meeting at these events, please reach out to events@tripleblind.ai. Or follow us on LinkedIn and Twitter to find more details leading up to each event!

Stay tuned for more details!