Why Data Regulations Don’t Have To Hinder Health Care’s Digital Transformation

https://www.cdomagazine.tech/cdo_magazine/topics/opinion/why-data-regulations-don-t-have-to-hinder-health-care-s-digital-transformation/article_a43d9078-f3e2-11ec-88fa-7f9ef3fdf803.html

Mayo Clinic Partnership Aims to Further Healthcare Technology Growth

https://fortune.com/2022/05/24/u-s-policies-a-i-marketplace/


Privacy Enhancing Technologies for Data Analytics and ML

If your work includes machine learning or analytics, you’re likely facing serious data challenges. When we speak with C-suite leaders, compliance officers, data scientists, and even cloud architects, three major themes around data come up most often:

  • Data Access
  • Data Prep
  • Data Bias

On top of this, there’s a host of compliance requirements that they need to meet. 

But what if emerging Privacy-Enhancing Technologies (PETs) could reshape and catalyze your organization’s data-based innovations? PETs are designed to leverage the most valuable parts of your data — to unlock its full potential — without creating privacy or security risks. But conventional PETs are unfortunately not always enough.

This article will help you understand the major data challenges facing healthcare and finserv professionals in 2022, and familiarize you with the strengths and drawbacks of how each privacy approach addresses these challenges.

 

The 3 biggest challenges for healthcare and finserv professionals in 2022

Challenge #1: Data Access

Data access, which in healthcare translates to data interoperability, is the number one bottleneck stopping healthcare insurance providers, hospital systems, and regulatory agencies from delivering good analytics.

Typically, if you’re working in something like a hospital system, you’re going to have multiple EHR (electronic health record) systems. Each department or wing could be using a different one. Clinicians might be using something like Epic for their EHR, while a behavioral health wing might be using an entirely different tool to collect behavioral health questionnaires and related information.

This leads to organizations using a ton of different EHR systems that don’t talk to each other. For instance, the State of California needed a system to handle Medicare, Medicaid, and Tricare claims, and then aggregate all that information in a third-party cloud environment. This type of consolidation takes a lot of extra labor, and causes huge bottlenecks within the data access space for healthcare agencies. 

Maybe you’re not a giant conglomerate like the California government, or a major hospital insurance system. But you might have a local business or a local hospital where you have different cloud environments set up, or you might have different EHR systems set up, where data’s coming in from multiple sources.

You could try to create a central place to put all this data, but this is an expensive process; data movement often takes the lion’s share of the budget for any data initiative. But what if you didn’t have to move the data from where it is? More on that later.

 

Challenge #2: Data Prep

Data prep is a largely manual, resource-intensive task, so it requires a lot of skilled labor. You need somebody who knows data: how to wrangle it, how to “munge” it, and how to scale the dataset effectively.

Working with data can be really challenging because it has all kinds of different shapes and structures. As soon as you start adding external entities, like other businesses, it becomes a really complicated endeavor, and this kills the majority of projects before they even get off the ground.

 

Challenge #3: Data Bias

Data bias is a huge issue for anyone working in analytics, and it’s incredibly important to address it early on. Small problems can grow much bigger, and the costs of a biased data set can scale with your business.

This is something you’ll see when you’re working with various data sets that haven’t been vetted properly. If you are constrained to using only the data you have in-house, or if you’re using one of many commonly used example data sets for training your models (and the data scientist hasn’t excluded the problem columns), you’ll end up with inherent biases. These biases could cause problems that require increasingly more substantial fixes down the line.

The 3 major use cases for Privacy Enhancing Technology

Privacy Enhancing Technologies are a relatively new concept, and though many of these tools have been used within the space since the early days, their business applications are especially new. Gartner has identified 3 major use cases for PETs that the market is finding most important right now.

 

Use case #1: AI model training and sharing models with third parties

If you’re training an AI model, you want access to the best possible data out there, but you may not already have it, so you may need to look to third parties. You also want your data to be representative, because you don’t want the model to become biased. So you need to source from multiple locations.

Those locations may be subject to different data laws, residency restrictions, and competitive pressures. And business reasons may keep you from getting access to that data, so Privacy Enhancing Technologies are addressing this problem.

 

Use case #2: Usage of public cloud platforms amid data residency restrictions

As we think about pushing more data and more information to the cloud, lots of resources become shared. That means companies are trying to figure out how to get the best utility from sensitive data, without the risk and liability of exposing any sensitive information.

So some Privacy Enhancing Technologies (we’ll go through these all shortly) are addressing this problem: how do we store data in encrypted states and actually operate on it without decrypting it and without moving it? How do we add trust in a kind of “trustless” or “semi-trusted” environment?
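To make the “compute without decrypting” idea concrete, here is a toy sketch of the Paillier cryptosystem, one of the classic additively homomorphic schemes. This is an illustration only, with deliberately tiny, insecure parameters, and is not TripleBlind’s actual method:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny, insecure parameters -- for illustration only.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # private key

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption helper

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the hidden plaintexts -- a server
# can compute this sum on data it can never read.
a, b = encrypt(12), encrypt(30)
print(decrypt((a * b) % n2))  # 42
```

The key property is the last line: the party holding the ciphertexts can combine them without ever seeing 12 or 30.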

 

Use case #3: Internal, external, and business intelligence activities

This is a very broad category that basically covers sharing data, running analytics, and getting insights from various data sources. It touches on data access, prep, and bias (all three challenges we mentioned above).

MITRE, which operates 42 federally funded R&D centers (and provides a lot of expertise on data and security), says

 

“The most valuable insights come from applying highly valuable analytics to shared data across multiple organizations, which increases the risk of exposing private information or algorithms. This three-way bind – balancing the individual needs of privacy, the analyst’s needs of generating insight and the inventor’s needs of protecting analytics – has been hard to balance…”

 

So how do we better operationalize data within an organization? When do we identify opportunities to bring in third-party data or to leverage our first-party data with third parties for potential partnerships, additional revenue, or other opportunities?

From an analyst perspective, how do we generate insight, and what are the issues around that? This has to be balanced of course with the individual needs of privacy, while also protecting the algorithms. That’s where PET comes into play.

Note: MITRE has independently done an extremely thorough and exhaustive review of TripleBlind’s technology. You can get the public report for that on our website.

 

The 7 major types of Privacy Enhancing Technology

We have resources about the major forms of PET on our page about Competing Privacy Enhancing Technologies, which includes definitions and links to full explanations of each technology. If you’re unfamiliar with the strengths and drawbacks of each, we suggest reading that.

  1. Differential privacy
  2. Federated learning
  3. Homomorphic encryption
  4. Secure enclaves
  5. Secure Multiparty Computation (SMPC)
  6. Synthetic data
  7. Tokenization
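To give a flavor of how one of these works under the hood, Secure Multiparty Computation (#5 above) is often built on additive secret sharing. Here is a minimal Python sketch, illustrative only and not any vendor’s implementation:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties=3):
    """Split a value into n random shares that sum to the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two hospitals each split a patient count into shares. No single
# share reveals anything about either count, yet adding the shares
# pointwise yields shares of the SUM, which reconstructs exactly.
a_shares = share(120)
b_shares = share(57)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 177
```

Each party only ever sees meaningless random numbers; the answer appears only when the result shares are combined.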

 

How useful is each type of PET for common use cases?

Each of these PETs offers something a little different, so some of them are better than others at addressing the major use cases Gartner described. In the chart here, you can see how well each technology holds up in a comparison. 

 

The 11 major factors good privacy technology should address

Let’s take a deeper look at how all of these PETs stack up, compared across the 11 most important factors. The chart below measures the extent to which each PET meets key criteria, and below the chart you can get more context about what each criterion refers to.

As you can see in the chart above, TripleBlind has a lot of green checked boxes, but that’s because TripleBlind is designed specifically for these needs — to fill the gaps, and address the red circles that are left by other techniques. 

This is why we call ourselves the most complete and scalable solution for privacy enhancing technology: we aim to take a holistic view when solving these problems.

Let’s take a quick look at each of these factors so you understand how well each PET fits with your organization’s use cases:

  1. Degree of privacy: some of the considerations here are, “Is there a decryption key? Is the data being moved? How much raw data is seen by the end user?”
  2. Ability to operate at scale means “how easy is this to scale horizontally into different business problems, but also vertically to really scale within an organization?”
  3. Types of data: we want to work on more than just tabular data. This is really important. We want to work with image data, genomics, large files, voice data, everything. We want to be able to keep all that data private and compute on it.
  4. Speed is really important. Data expires really quickly in a lot of spaces, so the faster we can use it and the less burden we add to the process, the better. 
  5. Supporting training new AI and ML models: not every solution will offer this, as it’s pretty unique. And to be able to leverage data from multiple locations to train an AI model, that’s something we really wanted to provide.
  6. Digital rights is also a very unique aspect of our solution, because we’re able to allow our customers to permission how and why their data is used, and how often. 
  7. Algorithm encryption: increasingly, there is intellectual property wrapped up in some of the models and algorithms that people are developing. We want to actually protect algorithms and usage as well.
  8. Compliance with laws like GDPR and HIPAA is a baseline requirement for any PET.
  9. Eliminate masking, synthetic data, hashing, and accuracy reduction: basically, preserve the full fidelity of data and eliminate having to make the trade-off between utility and privacy. We aim to maximize both.
  10. Hardware dependencies really slow down data usage when everything is being virtualized. Why should we design a solution that requires specific hardware? 
  11. Interoperability with third parties. Like we said, we want this to scale both within an organization and externally. How easy is it for your data partners to get up and running?

 

The TripleBlind Solution 

At TripleBlind, we use the terms complete and scalable. What this means is we’re addressing these problems from multiple angles, and we’re providing the best privacy solution for any given scenario. We do that by leveraging some of the novel advancements that TripleBlind cryptographers and engineers have made on top of existing solutions.

Most of the time, companies will move data and run analytics. But this is costly, labor intensive, and is riddled with data access issues. Instead, we one-way encrypt the data and run the AI or analytics using resources within the firewalls of trusted parties only. That way, you don’t have to worry about piping the data anywhere, and you skip all that plumbing and the mess of cobbled systems. And that’s really the core value of what TripleBlind is doing.

In a nutshell, TripleBlind lets you collaborate with data privately behind your firewall, right where it’s generated. One of the advantages of the TripleBlind solution is that you can run studies on this data and train models on this data, without moving it from its source.


Another Clean Room Announcement That Feels More Like Hype Than Substance

You can’t swing a hazmat suit without hitting yet another announcement about a new Data Clean Room offering. I hope these solutions provide value for enterprise customers, but I personally remain skeptical. Beyond the buzzword bingo, I don’t see any evidence that these approaches are anything more than complex encryption schemes for multiple parties. Somewhere personal data may get exposed when an analysis is computed on temporarily decrypted data. And does the “data clean room” have carpet or tile flooring? 🤣

“This ground-breaking solution bridges the gaps faced by other clean room solutions by combining native identity data and ML-powered graph capabilities with extensive integrations across the media and marketing world including linear and connected TV providers and the walled-gardens. It enables multiple organizations and internal teams to bring data together for joint analytics, media activation, and marketing measurement.” – Neustar

But this is just my first impression, you can read the whole 14 June 2022 article in Martech Series here and draw your own conclusions.


Did You Know That Your Video Gaming Avatar is a Privacy-Preserving Technology?

Like a lot of other folks, I have been an intermittent video gamer over the years. Either with my kids or with long time friends. In lots of games like Minecraft and Fortnite you have tons of avatar options. Until just recently, I always thought of this merely in terms of the fun and entertainment value. But a recent news story by Jim McManus in The New Stack made me realize for the first time that my avatar – and everyone else’s avatar – is actually a privacy enhancing technology. Privacy technology is about more than tokenization, data hashing, data masking and homomorphic encryption!

McManus wrote: “I talked with jin (@dankvr), one of the first members of The Open Metaverse Interoperability Group (OMI), which is focused on ‘bridging virtual worlds by designing and promoting protocols for identity, social graphs, inventory, and more.’ Jin is pseudonymous, meaning that he hasn’t revealed his real identity online. This, it turns out, is an important characteristic of the open metaverse. ‘Pseudonymous work will be big in the virtual economy, avatars are inherently a privacy-preserving technology,’ said Jin.”

This is according to a 14 June 2022 article by Jim McManus in The New Stack; you can read the whole article here.


Transatlantic Pact to Put up Cash Prizes for the best PET Solution to Money Laundering

I’ve always felt that anti-money laundering (AML) has fantastic potential as a Privacy Enhancing Technology use case. But it never seemed to me quite easy to get all the right players in finserv to jump through all the right hoops to make it work. However, with the US and UK governments now jointly leading the charge, AML PET solutions could go from blue sky to the real world by next year.

“‘The U.K.’s National Data Strategy outlines the promise of PETs in enabling trustworthy data access. PETs have the potential to facilitate new forms of data collaboration to tackle the harms of money laundering, while protecting citizens’ privacy,’ said Julia Lopez, U.K. minister for media, data and digital infrastructure at the Department for Digital, Culture, Media and Sport.

For this reason, the U.S. and the U.K. are organizing prize challenges, where participants will develop ‘state-of-the-art privacy-preserving federated learning solutions’ that will help to tackle this problem while respecting privacy regulations.”

This is according to a 16 June 2022 article on PYMNTS.com; you can read the whole article here.

Myths and Misconceptions

Common Myths and Misconceptions about Privacy Enhancing Technologies

Managing data effectively is among the biggest concerns for CDOs, other C-suite leaders, compliance officers, and data scientists in the financial services and healthcare industries. Data access, data quality, preparation, bias, and compliance challenges affect every business that attempts to collaborate with other organizations to improve the quality of its data sets, and in turn the accuracy of its artificial intelligence, machine learning, analytics, and related projects.

Privacy Enhancing Technologies (PETs) comprise a cohort of solutions designed to facilitate collaboration on sensitive and protected data while remaining in compliance with the growing number of data privacy and residency regulations in place. However, there are a number of myths and misconceptions surrounding PETs that cause decision makers to hesitate to deploy these technologies and benefit from their potential. Here we debunk some of those myths and clarify those misconceptions.

 

Misconception #1

Privacy Enhancing Computation Requires High Compute Overhead

Homomorphic encryption was an early privacy enhancing technology that continues to show promise, but it also raises concerns: it has gained a reputation for being slow, requiring more than 42X the compute power and 20X the memory of alternative solutions. But not all PETs require excessive memory, bandwidth, or compute time. TripleBlind’s privacy-preserving operations protect sensitive data and enforce privacy regulations with little impact on compute performance.

Here are four examples of how TripleBlind can help organizations reduce concerns about performance. TripleBlind incorporates several techniques to guarantee privacy; four of these techniques, and the speed of their performance, are noted below.

 

Blind Query

A Blind Query allows a remote party to request the execution of predefined SQL-like queries. This approach enforces privacy by restricting the output to only what is approved by the data owner. In one test, a Blind Query was requested by one organization to produce summary statistics for a one-million-record database owned by a second organization. The total execution time was a mere 1.2 seconds.
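The mechanics of TripleBlind’s Blind Query are proprietary, but the “predefined, owner-approved queries” pattern it describes can be sketched in a few lines. All names here are hypothetical, for illustration only:

```python
import sqlite3

# Sketch of a predefined-query gate: the data owner approves a fixed
# set of aggregate queries, and anything else is rejected outright.
APPROVED_QUERIES = {
    "avg_age": "SELECT AVG(age) FROM patients",
    "row_count": "SELECT COUNT(*) FROM patients",
}

def blind_query(conn, query_name):
    sql = APPROVED_QUERIES.get(query_name)
    if sql is None:
        raise PermissionError(f"query '{query_name}' not approved by data owner")
    return conn.execute(sql).fetchone()[0]

# The data owner's side: a private table the requester never sees.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (age INTEGER)")
conn.executemany("INSERT INTO patients VALUES (?)", [(34,), (52,), (61,)])

print(blind_query(conn, "row_count"))  # 3
# blind_query(conn, "SELECT * FROM patients")  -> PermissionError
```

The requester only ever receives approved aggregates, never row-level data; the allow-list is the privacy boundary.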

 

Blind Join

A Blind Join facilitates identification of overlapping fields between two or more databases. Privacy is protected by completely obscuring the field being compared using SMPC. In one test, a Blind Join was performed between three independent organizations, each with one million records, and a 10% overlap was found. The total execution time was 90 seconds.
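A real SMPC-based Blind Join hides even the compared values from every party; as a rough illustration of the overlap-finding idea, here is a naive keyed-hash intersection, which is weaker than SMPC but shows the shape of the computation (all names hypothetical):

```python
import hashlib
import hmac

# Naive sketch of a private overlap count: parties exchange only keyed
# hashes of the join field, never the raw values. (SMPC, as used in a
# real Blind Join, gives stronger guarantees than this illustration.)
SHARED_KEY = b"agreed-out-of-band"

def blind_tokens(ids):
    return {hmac.new(SHARED_KEY, i.encode(), hashlib.sha256).hexdigest()
            for i in ids}

org_a = ["alice@x.com", "bob@y.com", "carol@z.com"]
org_b = ["carol@z.com", "dave@w.com"]

# Each org computes tokens locally; only tokens are compared.
overlap = blind_tokens(org_a) & blind_tokens(org_b)
print(len(overlap))  # 1 -- one shared record, identities still hidden
```

Each side learns only the size of the overlap, not which non-overlapping records the other side holds.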

 

Blind Learning

Machine learning has a reputation for high resource usage and long training cycles. TripleBlind supports the federated learning technique and offers our own patented Blind Learning technique. We compared these two approaches, using private data sets distributed over two to five independent client organizations (with and without GPUs), against a single system with direct access to a non-private full data set. The training data included 10,000 x-ray images with a median file size of 0.4 MB. As a result of the workload distribution and parallel operation made possible by distributed learning, machine learning training was up to 5X faster than training on an equivalent single machine.
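Blind Learning itself is patented and proprietary, but the federated learning baseline it is compared against can be sketched in plain Python: each client trains on its own data behind its own firewall, and only model weights travel to a coordinator for averaging. This is a minimal illustration, not a production implementation:

```python
import random

# Minimal federated averaging on a tiny linear model: raw records
# never leave each client; only weights are shared and averaged.
random.seed(0)
TRUE_W = [2.0, -1.0]

def make_local_data(n):
    X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
    y = [sum(w * x for w, x in zip(TRUE_W, row)) for row in X]
    return X, y

def local_train(w, X, y, lr=0.1, epochs=50):
    w = list(w)
    for _ in range(epochs):
        grad = [0.0, 0.0]
        for row, target in zip(X, y):
            err = sum(wi * xi for wi, xi in zip(w, row)) - target
            for j in range(2):
                grad[j] += err * row[j]
        w = [wi - lr * g / len(y) for wi, g in zip(w, grad)]
    return w

clients = [make_local_data(100) for _ in range(3)]  # 3 organizations
w_global = [0.0, 0.0]
for _ in range(5):  # communication rounds
    local_ws = [local_train(w_global, X, y) for X, y in clients]
    w_global = [sum(ws[j] for ws in local_ws) / len(local_ws)
                for j in range(2)]  # server averages the updates

print([round(wi, 2) for wi in w_global])  # converges near [2.0, -1.0]
```

The speedup claimed above comes from exactly this structure: the per-client training steps run in parallel across organizations.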

 

Blind Inference

Machine learning models are expensive to create and are subject to multiple types of reverse-engineering attacks when distributed to others for uncontrolled usage. TripleBlind’s SMPC-based Blind Inference protects both the source data and the trained model. Performing inference using an image owned by one organization against a neural network based on the LeNet-5 architecture owned by a second organization took 0.2 seconds, 15% – 2,500% faster than other privacy-preserving approaches such as SecureNN, Gazelle, or MiniONN.

 

Misconception #2

Privacy-Enhancing Technologies are Flawed 

Several PET solutions provide only partial data protection and/or can create inaccurate results. Secure enclaves isolate code and data from the operating system, using either hardware-based isolation or an isolated virtual machine; as a result, the approach is hardware dependent and silos data. Tokenization substitutes original sensitive data with non-sensitive placeholders referred to as tokens. Masking obscures, anonymizes, or suppresses data by replacing sensitive values with random characters or other non-sensitive data. Hashing transforms any given key or string of characters into another value. All three of these approaches reduce the accuracy of analytics.

Synthetic data is artificially created rather than being generated by actual events. It is often created with the help of algorithms and is used for a wide range of activities. Since it’s not real data, it can skew analytical outcomes. Differential privacy is a system for publicly sharing information about a dataset by describing the patterns of groups within it while withholding information about individuals; this approach can introduce errors into any analysis run on the data. Federated learning trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging them. It comes with high compute and communications costs, and lower accuracy.
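The error-introduction trade-off described for differential privacy is easy to see in a sketch: a Laplace-noised count is private precisely because it is no longer exact. Illustrative only:

```python
import random

# Sketch of the differential-privacy idea: release an aggregate with
# calibrated Laplace noise so any one individual's record is masked.
def laplace_noise(scale):
    # the difference of two iid exponentials is Laplace-distributed
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count, epsilon=1.0, sensitivity=1.0):
    # smaller epsilon -> more noise -> stronger privacy, less accuracy
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(42)
print(round(dp_count(1000, epsilon=0.5)))  # near 1000, but not exact
```

The privacy parameter epsilon makes the trade-off explicit: the released count is always a little wrong, by design.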

TripleBlind offers the most complete and scalable solution for privacy enhancing computation. It is a software-only solution delivered via a simple API that allows data users to compute on data as they normally would, without having to “see”, copy or store any data. Our solution allows data owners full Digital Rights Management (DRM) over how their data is used on a granular, per-use level. TripleBlind eliminates the expensive manual data anonymization step required when using other solutions, while enforcing regulatory compliance.

Privacy enhancing computation with TripleBlind demonstrates that PETs don’t have to be complex in order to safeguard your data and reduce privacy risks. In the case of a healthcare provider, it can ensure privacy on both sides – the provider never provides raw data and its partners never provide raw algorithms. All operations are available without the risks of working with raw patient data.

 

Misconception #3

Privacy-Enhancing Technologies Aren’t a Valid Approach to Data Privacy

Since PET is a relatively new data privacy solution, there is a myth that it is not a comprehensive solution and is insufficient for complying with today’s data privacy and data residency regulations.

While several PETs exhibit performance weaknesses, TripleBlind builds on well-understood principles, such as federated learning and multi-party compute. It radically improves the practical use of privacy-preserving technology by adding true scalability and faster processing, with support for all data and algorithm types. TripleBlind’s novel method of data de-identification via one-way encryption allows all attributes of the data to be used, even at an individual level, while eliminating any possibility of the data user learning anything about the individual.

At TripleBlind, we recognize the barriers organizations and data scientists face when trying to collaborate with data while adhering to regulatory standards such as HIPAA and GDPR. The TripleBlind solution compares favorably in its ability to allow organizations to collaborate around their most sensitive data and algorithms without compromising their privacy.


Experts from TripleBlind, Singularity and MITRE to Address Challenges of Bias in Big Data in June 22 Webinar

WHO:

Suraj Kapa, SVP Healthcare, TripleBlind
Daniel Kraft, Faculty Chair for Medicine at Singularity, Chair XPrize Pandemic Alliance Task Force
Brian Anderson, Chief Digital Health Physician, MITRE

 

WHAT:

How can enterprises enable cross-institutional data marketplaces for secure collaboration on data and genomic information while decreasing bias? How much data do we need, and is there a better approach for vetting this data?

On June 22, TripleBlind will host a webinar joined by healthcare experts from Singularity and MITRE.

 

WHY:

Common obstacles among statisticians, data scientists, and AI developers stem from three challenges: data access, data prep, and the hidden biases in big data. These obstacles have prevented collaboration between healthcare enterprises, which is also stymied by global data privacy regulations such as HIPAA. Learn how privacy enhancing technologies (PET) are enabling broader, more diverse data engagement in data prep and enabling secure cross-institutional data exchange.

 

WHEN:

Wednesday, June 22, 2022, “How to Overcome Bias in Big Data”
11 a.m. CT

 

WHERE:

Virtual, via Zoom
Participants can register here

 


About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Ensuring Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with a current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well understood principles, such as federated learning and multi-party compute. Our innovations radically improve the practical use of privacy preserving technology, by adding true scalability and faster processing, with support for all data and algorithm types. TripleBlind natively supports major cloud platforms, including availability for download and purchase via cloud marketplaces. TripleBlind unlocks the intellectual property value of data, while preserving privacy and ensuring compliance with HIPAA and GDPR. 

TripleBlind compares favorably with other privacy preserving technologies, such as homomorphic encryption, synthetic data, and tokenization and has documented use cases for more than two dozen mission critical business problems.

For an overview, a live demo, or a one-hour hands-on workshop, email contact@tripleblind.ai.

 

Contact

mediainquiries@tripleblind.ai


Recap from Privacy-Enhancing Technology North America Summit

TripleBlind co-founder and COO, Greg Storm, presented at Kisaco Research’s Privacy Enhancing Technology Summit in Boston! In case you missed it, here’s a quick recap:

 

Overview

TripleBlind believes that adoption of Privacy-Enhancing Technology (PET) by businesses will increase 10x over the next 18 months, but why and how?

PET has been around for a while, but the adoption rate has been slow for organizations with concerns over data privacy and industry regulations. In major sectors, like financial services and healthcare, there have been many barriers and flaws with technologies that have made it difficult for businesses to enable secure data processing, sharing, cross-border transfers and analytics, while remaining compliant with stringent regulations.

Gartner recently recognized PET – also known as Privacy-Enhancing Computation (PEC) – as a strategic technology trend that will have a broad impact on businesses in 2022. This lays the foundation for PETs to become a more widely known and established category. Why is this important?

 

Latent Demand Is Everywhere 

There’s an increasing number of regulations and growing concern around data collaboration and reducing the risk of data breaches. PETs, including TripleBlind’s innovation, have the ability to transform how businesses can securely share data while ensuring data privacy.

There’s no question the potential market is huge, but many businesses have yet to recognize how today’s technology can deliver real value by offering new ways of secure data sharing in the future.  

 

Enabling Technologies Abound

Secure enclaves, tokenization, differential privacy, homomorphic encryption, and multi-party compute are all viable privacy-enhancing technologies. But confusion exists as to where and when these strategies are best utilized. Many potential customers with the authority to purchase often struggle to evaluate the accuracy of compliance and efficiency claims made by PET vendors. Understanding the differences between the 7+ kinds of PETs is the first step in evaluating solutions.

 

Acceptable User Experience

Businesses might have vetted their options and chosen the best PET solution for their use case, but then what? PET doesn’t have to be a “killer app,” but it does have to solve a real business problem in a way that most people can understand. PETs can’t go mainstream until a “regular user” can use them to solve a real business problem in an elegant way.

 

Where Do We Go From Here?

The future is massively collaborative and private. For PETs to gain widespread adoption, we must define industry standards, emphasize why they are important for business growth, and educate customers and partners on how PETs address their data privacy problems so that informed decisions can be made about technology investments.

TripleBlind is largely being used by healthcare, financial services, government and other organizations, and has dozens of documented use cases. We offer a free demonstration of the TripleBlind solution to showcase why it checks off all the boxes and to prove why it’s needed for business growth.

If you’re interested in a demo, reach out to us. To keep up with the latest TripleBlind news, follow us on Twitter and LinkedIn!


How Improved Secure Data Collaboration Unlocks the Value of Data

The private and secure sharing of data may sound like a noble undertaking, and in many cases it is, but the primary motivation for organizations and government agencies to exchange sensitive information is to unlock valuable insights.

The Organisation for Economic Co-operation and Development (OECD) has estimated that data sharing and increased access are capable of producing value worth up to 1.5 percent of a country’s GDP for public-sector data, and between 1 and 2.5 percent of GDP for private-sector data.

The value created by secured data collaboration comes in the forms of greater efficiency and innovation. Medical researchers can share sensitive data to better treat diseases like tuberculosis. Carmakers developing self-driving vehicles can share data to make their automated systems safer. Tech companies can share GPS data with government agencies to help design better traffic flows.

Secure data collaborations are often used to make artificial intelligence systems less biased, more accurate, and more functional. An AI system is only as good as the data used to train it, so if the system only has access to a limited dataset, that severely limits its capabilities. Secure data collaborations can also break down data silos so that larger sets can be aggregated, leading to higher-quality AI models.

In data collaborations that involve sensitive data, maintaining privacy is a major concern. Healthcare organizations looking to share medical information need to ensure that the identities of patients are not revealed. Financial companies looking to create better products using customer data need to protect the financial information of individual customers.

Privacy-enhancing technologies have been developed to address this concern. Through methods like differential privacy and the TripleBlind Solution’s innovative technology, organizations can address privacy concerns while producing valuable insights.

Consider these examples of secure data collaboration unlocking valuable solutions for stakeholders.

 

Fighting Tuberculosis in India

According to the Centers for Disease Control and Prevention, 1.7 billion people around the world were infected with tuberculosis in 2018, equal to about 23 percent of the global population.

To battle this crisis in India, a secure data collaboration between mobile provider Airtel and the World Health Organization's "Be He@lthy, Be Mobile" initiative developed a proof-of-concept method for identifying areas at risk of increasing tuberculosis cases.

Since tuberculosis is spread through recurring proximity among individuals, location data from mobile phones can be used to assess the risk of increased spread of the disease. The data partnership between Airtel and the WHO used anonymized data from the company's mobile network to identify patterns of population movement, such as commuting patterns and other daily routines. The precision, scale, and immediacy of the mobile data provided by Airtel allowed researchers to identify at-risk areas. Analysis of the mobile data revealed that movement patterns are a more reliable indicator of tuberculosis incidence than proximity to tuberculosis hot spots.

Now that these patterns and insights have been unlocked, the Indian government can take better public health steps related to the prevention and treatment of tuberculosis. More broadly, this collaboration revealed how the GPS data of individuals can be used for public benefit while still maintaining privacy.

 

Enabling Automation

WorkFusion is a tech company that offers business tools capable of automating standard, time-consuming tasks — such as invoice processing. The company’s platform uses character recognition technology to read documents and extract relevant information from them. The tool was developed to process invoices, but it is capable of other similar applications that can be found in just about every industry.

One of the main challenges in opening up the system to broad appeal is the fact that different industries use different types of documents and document formats. An invoice from a supply chain company and an invoice from a medical provider might serve the same basic purpose but look very different from each other. For the WorkFusion platform to be capable of processing invoices from many different industries, it must be able to recognize key information regardless of document format.

To achieve this, the platform’s AI model must be trained using the types of documents it will be processing. Unfortunately, companies do not freely publish internal documents and so the WorkFusion AI model had to be trained on a per-customer basis. Lasting as long as six months, the necessary training period did not generate revenue for WorkFusion, and customers had to disclose months’ worth of internal documents before being able to realize any benefit.

WorkFusion addressed this “cold start” problem by combining deep learning models with privacy-enhancing technology. First, the company improved its existing machine learning capabilities with a new deep learning model. The company then developed the capability to extract key insights from a broad range of its customers through the generation of synthetic invoice data. Finally, the company used differential privacy to aggregate all of its customers’ data, including new customers, while guaranteeing that individual data contributions were kept private.
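The synthetic-data step above can be sketched in a few lines. This is a toy illustration of the general idea, not WorkFusion's actual pipeline: real systems would fit the field distributions to customer data, whereas here the vendors, layouts, and value ranges are hard-coded assumptions.

```python
import random

# Illustrative field distributions (hypothetical; a real pipeline would
# learn these from aggregated customer data under privacy guarantees).
VENDORS = ["Acme Supply", "Globex Medical", "Initech Logistics"]
LAYOUTS = ["two-column", "tabular", "letterhead"]

def synthetic_invoice(rng: random.Random) -> dict:
    """Generate one synthetic invoice record by sampling field values.

    No real customer document is reproduced; only the statistical shape
    of the fields is carried into the training corpus.
    """
    return {
        "vendor": rng.choice(VENDORS),
        "layout": rng.choice(LAYOUTS),
        "invoice_number": f"INV-{rng.randint(10000, 99999)}",
        "total": round(rng.uniform(50.0, 5000.0), 2),
    }

# A small synthetic training corpus for a document-extraction model.
corpus = [synthetic_invoice(random.Random(seed)) for seed in range(100)]
```

Training on such generated records is what lets a new customer start from a model that already generalizes across formats, rather than waiting months while the model sees only their own documents.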

The new and improved WorkFusion platform is now able to get new customers up and running after just a few days of AI training, reducing the delay on incoming revenue. One of the company’s financial services customers was so impressed with the technology that it significantly expanded its partnership with the company after one year, resulting in a major revenue increase.

 

How TripleBlind Can Unlock Value in Your Next Data Collaboration

From preventing credit card fraud in the financial services industry to facilitating genetic analysis in healthcare, TripleBlind can ensure the privacy of individuals while allowing data collaborators to unlock the intellectual property value of data.

Delivered via simple API, the TripleBlind Solution can address a wide range of use cases. It allows for true scalability and faster processing of sensitive data, improving the practical use of privacy-preserving technology. Our technology is able to handle all data types and natively supports all major cloud platforms. It compares favorably to other privacy technology like differential privacy and federated learning. Contact us today to learn more.