
How TripleBlind’s Data Privacy Solution Compares to Differential Privacy

Differential privacy is not a specific process like de-identification, but a property that a process can have. For example, it is possible to prove that a specific algorithm “satisfies” differential privacy. Informally, differential privacy makes the following guarantee to each individual who contributes data for analysis: the output of a differentially private analysis will be roughly the same whether or not that individual’s data is included.
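Formally (this is the standard definition, stated here for reference), a randomized mechanism M satisfies ε-differential privacy if, for every pair of datasets D and D′ that differ in a single individual’s record, and for every set of possible outputs S:

\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S]

The smaller the privacy parameter ε, the less any single person’s data can shift the output distribution; this is the “roughly the same” guarantee made precise.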

When computing on data via differential privacy, stochastic noise is added to each data element to mask the actual value. Stochastic refers to a variable process whose outcome involves some randomness and therefore some uncertainty. This noise can result in significant accuracy degradation, whereas TripleBlind’s one-way encryption algorithms don’t add any noise to the dataset that would impair results.
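To make the privacy/accuracy trade-off concrete, here is a minimal Python sketch of the classic Laplace mechanism, one standard way to satisfy differential privacy. This is an illustrative textbook construction, not TripleBlind’s method; the value range and epsilon values are assumptions.

import numpy as np

def private_mean(values, epsilon, lo, hi):
    # Clip each value to [lo, hi] so that changing any single record
    # can move the mean by at most (hi - lo) / n (the query's "sensitivity").
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(clipped)
    # Laplace noise with scale sensitivity / epsilon makes this
    # mean query epsilon-differentially private.
    noise = np.random.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

ages = np.random.randint(25, 55, size=1000)
print("true mean:", ages.mean())
for eps in (0.01, 0.1, 1.0):
    # Smaller epsilon means stronger privacy, more noise, less accuracy.
    print(f"epsilon={eps}: {private_mean(ages, eps, 0, 100):.2f}")

Running it shows the tension described above: at epsilon = 1.0 the noisy mean stays close to the true mean, while at epsilon = 0.01 the answer can be off by ten or more years.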

Differential privacy is suitable for situations with a higher tolerance for error. Apple’s keyboard suggestions are a good example: Apple doesn’t need to know exactly what you’re typing, but it does need to know in general what people are typing in order to offer reasonable suggestions.

Apple itself sets a strict limit on the number of contributions collected from a user in order to preserve their privacy. The reason is that the zero-mean noise used in differential privacy tends to average out over a large number of contributions, making it theoretically possible to infer information about a user’s activity from a large number of observations of that single user. It’s important to note that Apple doesn’t associate any identifiers with information collected using differential privacy.
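The averaging effect is easy to demonstrate. The sketch below is illustrative only (it uses simple Laplace noise with made-up parameters, not Apple’s actual mechanism): each individual report is heavily noised, but because the noise is zero-mean, the average of many reports from the same user converges on the true value.

import numpy as np

true_value = 42.0   # one user's sensitive value (hypothetical)
scale = 20.0        # Laplace noise scale applied to each individual report

for n in (1, 100, 100_000):
    # Any single noisy report reveals little, but the zero-mean noise
    # cancels out as reports from the same user accumulate.
    reports = true_value + np.random.laplace(0.0, scale, size=n)
    print(f"{n:>7} reports -> average {reports.mean():.1f}")

This is exactly why capping the number of contributions per user matters: the privacy guarantee weakens as observations of the same individual accumulate.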

Most of the other approaches to data collaboration we’ve covered, including homomorphic encryption, secure enclaves, and tokenization, only work well on tabular/columnar data. These approaches face severe challenges when it comes to producing high-performance, accurate models on complicated datasets such as X-ray images. TripleBlind solves this problem: the images remain one-way encrypted throughout analysis, which keeps the workflow compliant with HIPAA regulations.

 

[Diagram: diagnostic AI pipeline]

TripleBlind allows data from outside sources to be used in our private infrastructure to compute on and develop accurate diagnostic models. Our Blind AI Pipeline ensures that the original data cannot be reverse-engineered and is compliant with HIPAA regulations.

 

If you’re interested in learning more about how you can safely and efficiently share data, please email contact@tripleblind.ai for a free demo. Don’t forget to follow TripleBlind on Twitter and LinkedIn for our latest updates.

This is the final blog in our Competitor Blog Series, in which we compared TripleBlind’s technology to other approaches to data collaboration. If you missed the other blogs, you can check them out below!

 

Read the other blogs in this series:
Business Agreements
Homomorphic Encryption
Synthetic Data
Blockchain
Tokenization, Masking and Hashing
Federated Learning