Secure Computation: Today's Top Privacy-Enhancing Strategies

For companies that handle sensitive data, compliance with privacy regulations is only the bare minimum. Companies also have moral and business obligations to ensure that any private data they collect is protected from unauthorized access and use.

At the same time, sensitive data is a massive source of revolutionary insights. Privacy-enhancing strategies are designed to enable the operationalization of sensitive data while still maintaining the privacy of individuals and the protection of sensitive digital assets. Whether hardware- or software-based, these strategies use different approaches to protect data while allowing for more value to be extracted for scientific, social, and commercial benefit. Following are three of today’s top privacy-enhancing strategies: tokenization, synthetic data, and trusted execution environments.


Tokenization
In business applications, tokenization can be used to outsource responsibility for handling sensitive data. Companies can store sensitive information in a third-party database and not have to dedicate the resources needed to oversee and handle this data.

While this is an obvious benefit, tokenization doesn’t address many security risks. The most prominent issue with tokenization is being able to trust a third party with access to sensitive data. While business associate agreements can be used to hold a third party liable for misuse, an unethical actor seeing the massive commercial value of a sensitive dataset could consider violating any agreements a comparatively small price to pay. 

Furthermore, tokenization adds a layer of complexity to an organization’s infrastructure. In the example of financial transactions, a customer’s account information must be de-tokenized and re-tokenized for authentication to occur. In situations involving massive datasets, such as the training of machine learning algorithms, this added layer of complexity translates to enormous computational costs.
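To make the de-tokenize/re-tokenize round trip concrete, here is a minimal, hypothetical sketch of vault-style tokenization in Python. The class and field names are illustrative only, not any vendor’s API; real token vaults add access control, auditing, and persistence.

```python
import secrets

class TokenVault:
    """Toy vault-style tokenization: sensitive values are swapped for
    random tokens, and the mapping lives only inside the vault."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so the same account maps consistently.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = secrets.token_hex(8)  # random; carries no information about the value
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Needed for authentication: every lookup is a round trip to the vault.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
assert token != "4111-1111-1111-1111"                     # downstream systems see only the token
assert vault.detokenize(token) == "4111-1111-1111-1111"   # the vault recovers the original
```

Note that every authentication step in this sketch requires a call back into the vault, which is exactly the round-trip overhead described above.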

Also, tokenization may not address digital rights management and compliance issues, especially when a third-party provider is storing sensitive data in another jurisdiction or country. While this strategy may be popular and effective for financial transactions, it isn’t well-suited to processing datasets and engaging in international data partnerships.


Synthetic Data

Collecting massive amounts of data for analysis can be a regulatory and logistical nightmare. One popular privacy-enhancing framework developed to address these myriad data challenges is synthetic data.

Unlike standard data that is collected from original sources, synthetic data is generated from the statistical properties of real data and often serves to augment or replace that real data in mission-critical applications.

Because synthetic data holds the promise of generating new insights and enabling powerful artificial intelligence technologies, it has become a highly regarded tool in industries that deal with sensitive information — finance and healthcare in particular.

Although synthetic data can be very useful, it does have major limitations. Synthetic data systems are not particularly adept at generating outlier data, which means synthetic data often falls short of real-world data. An over-dependence on imprecise synthetic data could lead to false insights that are costly in business, and possibly deadly in healthcare situations.
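To illustrate why naive synthetic generation loses outliers, consider this toy Python sketch. A generator that reproduces only a dataset’s mean and standard deviation matches the aggregate statistics well but misses a rare outlier cluster entirely. All values here are fabricated for illustration; production synthetic data systems use far richer models.

```python
import random
import statistics

random.seed(0)  # deterministic for illustration

# "Real" data: mostly typical values plus a small cluster of rare outliers.
real = [random.gauss(100, 10) for _ in range(990)] + \
       [random.gauss(300, 5) for _ in range(10)]

# A naive synthetic generator that reproduces only aggregate statistics.
mu = statistics.mean(real)
sigma = statistics.stdev(real)
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The aggregate mean is preserved almost exactly...
assert abs(statistics.mean(synthetic) - mu) < 5

# ...but the rare outlier cluster near 300 vanishes: the synthetic
# maximum falls far short of the real one.
assert max(synthetic) < max(real)
```

A model trained only on the synthetic set would never see the extreme cases, which is precisely the failure mode that matters most in fraud detection or rare-disease diagnosis.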

Synthetic data is an effective strategy in use cases with a narrow focus. In situations involving a wide distribution of outcomes, however, synthetic data often proves to be quite limited in value. This is particularly problematic because many narrow use cases have already been defined or studied, and wider-scope studies are now of greater interest and importance.

The generation of “new data” from the statistical properties of sensitive data can also be fraught with challenges. If the original dataset is significantly biased, that bias will likely be passed into the synthetic dataset. Addressing potential bias requires specialized knowledge of the context around the data, and that requirement decreases synthetic data’s practicality as a privacy-enhancing framework.

Furthermore, it has been shown possible to identify real people based on information in a synthetic dataset, especially if the system used to generate the data set is flawed. Currently, this isn’t a widespread problem, but if synthetic data is more broadly adopted, reverse-engineering private data could become a more attractive option for wrongdoers.

In essence, synthetic data takes an imperfect approach to preserving privacy, resulting in limited actual utility.


Trusted Execution Environments

Trusted execution environments (TEEs) are physical hardware enclaves that house processing systems isolated from the main computer, allowing sensitive information to be stored and computed on in a protected space.

TEEs are designed to protect both the data and code running inside the environment. In data collaborations, TEEs can enable secure remote communications. They store, manage, and use encryption keys only within a secure environment, which limits the possibility of eavesdropping.

Unfortunately, there are a number of issues associated with TEEs. Because these systems are mostly proprietary hardware assets, they do not readily support platform interoperability. This type of privacy-enhancing strategy can also be cumbersome, and using it can be like having a private sandbox on Mars: It’s a secure environment, but it’s difficult to get there.

TEEs are also not impervious to attack. A number of studies have revealed how cryptographic keys can be stolen, and side-channel attacks can be used to expose security vulnerabilities.

Because they are hardware-based, TEEs are not easily patched or updated – new hardware is required. Software, on the other hand, can be updated instantly over the internet, enabling patches to security vulnerabilities, bug fixes, and new functionality to be added in real time.

Finally, TEEs require data and algorithms to be physically aggregated on one machine or server. This is often impossible under laws that keep data locked in place; using TEEs for cross-border data collaboration could violate GDPR or data residency requirements, resulting in steep fines and reputational damage.


A More Flexible and Practical Privacy-Enhancing Strategy

Many of the most popular privacy-enhancing strategies are effective for certain use cases. However, each one has significant limitations and vulnerabilities. The TripleBlind Solution is an elegant and flexible approach to privacy enhancement that can augment or even replace the top strategies in use today.

Available via a simple API as a software-based solution, our technology improves the practical use of privacy-enhancing technologies and addresses a wide range of use cases. Offering true scalability and faster processing than other options, our technology can unlock the intellectual property value of data while protecting privacy and supporting regulatory compliance.

Please contact us today to learn more about our superior privacy-enhancing solution.


How Privacy Enhancing Computation Can Increase Collaboration Amid Surge of Healthcare Data Privacy Breaches

The Department of Health and Human Services’ Office for Civil Rights’ breach portal reveals 2021 was the worst year ever for healthcare data privacy breaches. Nearly 45 million healthcare records containing patients’ protected health information (PHI) were exposed across 686 healthcare breaches. While the number of incidents that occurred increased only 2.4% in 2021, the number of patients affected increased 32%. As healthcare systems, insurance carriers, medical device manufacturers and others create, store and share more sensitive patient data, the amount of data exposed with each breach increases.

Similar to findings in the State of Financial Crime report, which show that financial services companies that generate and handle data are hypersensitive to cyberattacks and data privacy breaches, healthcare organizations that collaborate using data are experiencing the same vulnerabilities. As the presence of healthcare data proliferates across mobile devices and cloud networks to accommodate trends such as remote work and telehealth, healthcare data becomes vulnerable to privacy threats that IT departments may not even be aware of.

The number of attacks against healthcare third-party vendors and business partners increased by 18% compared to 2020. When looking at the top healthcare security breaches of 2021, it’s clear there is a need for healthcare enterprises to dramatically improve the quality of their data privacy practices when collaborating with other healthcare systems, vendors, partners and related entities. 


Privacy-Enhancing Computation Allows Secure Collaboration with Partners

Privacy-enhancing computation (PEC) is designed to allow healthcare institutions to collaborate and innovate without giving up proprietary data. PEC solves for a broad range of data challenges and allows institutions to glean insights from data that has historically been inaccessible due to healthcare privacy regulations.

Here are seven examples of how PEC can increase collaboration and innovation despite the increased risk of healthcare data breaches:

  1. COVID created a need for telemedicine to be more widely used for radiology, increasing the risk of reconstruction attacks that infer patient identity from X-ray images. Using X-ray source images from medical imaging centers where patient metadata has been obfuscated, diagnostic AI developers will have more quality data for training, making AI algorithm training on X-rays more secure, more cost-efficient and faster.
  2. By operating algorithms on de-identified data and without the risk of models being reverse engineered, hospitals and others who have developed highly-advanced diagnostics algorithms can license their algorithms for remote diagnostics, without exposing valuable IP.
  3. Because PEC-based operations enforce the appropriate privacy regulations (HIPAA, GDPR, CCPA, etc.), pharmaceutical companies and drug developers can use genomic data sequences to create life-changing drugs and vaccines.
  4. Because clinical trial participant data is protected by HIPAA, researchers typically are not able to analyze or interact with trials until after the trials are completed. Using de-identified, real-time data throughout clinical trials, healthcare enterprises can conduct early indication trial reporting without violating regulations for blind and double-blind studies.
  5. As biobanks store data that spans across different hospital systems and legal jurisdictions, it can be challenging for companies to compliantly access that data due to differing privacy regulations. With access to a larger amount of diverse patient data from biobanks, pharmaceutical developers can improve their modeling and analysis.
  6. Using prescription data and sales information from pharmacies with shared customers, hospitals can gain more accurate insight into the medications that patients are actually taking to incorporate in their treatment and wider research.
  7. Prior to PEC, when combining multiple data types for analysis – including image, text, voice, video and more – data scientists needed to create a machine learning model for each type of data and manually combine those outputs to analyze. PEC allows for collaboration using any type of data, allowing healthcare enterprises to better create and train predictive and generalizable AI models.
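As a rough illustration of the metadata obfuscation mentioned in the first example above, here is a generic de-identification sketch in Python. The field names are hypothetical and this is not how any specific PEC product works; real de-identification of imaging data (e.g., DICOM) involves many more fields and edge cases.

```python
# Direct-identifier fields to strip before data leaves the data holder.
# These names are illustrative, not a real imaging standard's tag list.
PHI_FIELDS = {"patient_name", "patient_id", "birth_date", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct-identifier fields removed."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

record = {
    "patient_name": "Jane Doe",
    "patient_id": "MRN-00123",
    "birth_date": "1980-04-02",
    "modality": "XR",       # imaging metadata still useful for model training
    "body_part": "CHEST",
}

shared = deidentify(record)
assert "patient_name" not in shared    # identifiers are gone
assert shared["modality"] == "XR"      # training-relevant metadata survives
```

The point is the asymmetry: identifiers are removed while the attributes that make the data useful for algorithm training are retained.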


TripleBlind has created the most complete and scalable solution for solving use cases and business problems that are ideal for Privacy Enhancing Computation. TripleBlind allows data users to compute on data as they normally would, without having to “see,” copy, or store any data. The TripleBlind solution is software-only, supports all cloud platforms and is delivered via a simple API. It unlocks the intellectual property value of data, while preserving privacy and ensuring compliance with HIPAA and GDPR.

Check out these recent blogs from the TripleBlind team to learn more about how privacy-enhancing computation can benefit the healthcare industry and increase data collaboration opportunities:

Contact us today to schedule a personalized demo of our innovative technology! To learn more from TripleBlind thought leaders, be sure to follow us on Twitter and LinkedIn.

How to Overcome Bias in Big Data

Privacy-Enhancing Technologies: Who Should Care and Why?

U.S. supremacy in A.I. may hinge on these proposed policies


The Benefits and Challenges of Data Collaboration in Finance

The data that financial institutions collect from their clients and partners is extremely valuable. However, it only offers a limited perspective. If banking and financial companies can collaborate with other organizations using their collected data, the resulting data collaboration can lead to powerful new insights and a wide range of business benefits.

But what is data collaboration exactly? A good data collaboration definition is pooling insights from data across various sources to unlock valuable insights for all participants. However, how individual organizations define data collaboration has some variance, including whether the collaborations are happening internally or externally. In the financial services industry, these insights could lead to the development of innovative products, better customer service, and privacy-preserving analytics that may increase information-sharing effectiveness within national and international financial fraud prevention and regulatory compliance.

It is essential to point out that this type of collaboration requires overcoming common problems with data access, data transformation, and data bias. Value-generating insights are increasingly uncovered through artificial intelligence, and this technology requires massive amounts of information from various data sources to be effective.

Simple Principles for Data Collaboration

A fundamental principle for extracting the most value out of data collaboration is the development of user-friendly workflows that incorporate data protection and governance.

Rather than having decision-makers sift through dozens of spreadsheets and workbooks to identify insights, top financial companies often rely on business intelligence dashboards and reports that display easily digestible information. These dashboards focus on tracking key metrics in the same way retail stock trading platforms provide information on an investor’s portfolio.

While these dashboards may appear simple, the technology behind them is not. Artificial intelligence processes large amounts of complex data to derive meaningful insights. Most business intelligence platforms now incorporate AI features that help business analysts quickly make more informed decisions. This can enable operational benefits like portfolio position reporting or faster resolution of customer service tickets.

Data used to drive business insights should be easy to understand and query. We may romanticize the idea of game-changing insights surfacing from extensive data analysis, but data-driven insights often simply confirm leaders’ suspicions based on professional experience. And when an analytics platform does produce a surprising result, it’s essential to know the data sources from which the insights came and to have the ability to audit the data.

Data aggregation and validation are simpler when companies work with good data sources. Companies should also create a culture that supports collaborative, constructive, and timely dialogue, so that valuable insights are not discarded and critical decisions are not delayed.

Benefits of Data Collaboration

  • Increased access to financial services
  • Better customer experience
  • Better financial products
  • Greater efficiency
  • Greater fraud protection
  • Efficient workforce distribution

According to a survey from Gartner, company leaders who promote data sharing and the dissolution of data silos often see a higher value return from their analytics teams. The research company also predicted that companies that share data will comprehensively outperform rivals that do not by 2023.

Data Collaboration and Increased Access To Financial Services

According to research from McKinsey, data collaboration can lead to economic value in several different ways, including increased access to financial services. When banking institutions understand what services customers need access to, when they need them, and how they prefer those services to work, they can deliver exactly what people need, when and how they need it.

Data Collaboration and Better Customer Experience

Likewise, data collaborations can yield a better customer experience, the same McKinsey research found. For example, identifying data patterns related to how long people wait to reach a representative on the phone, how long certain calls tend to last, and how often and at what point people tend to get tired of waiting on hold before hanging up can all inform staffing strategy to ensure that customers feel like they can reach a live person easily. 

Data Collaboration and Better Financial Products

Armed with data, banking institutions can engage in better decision-making about what kinds of financial products may do best with their existing and potential customers.

Data Collaboration and Greater Efficiency

Data collaboration can mean a more agile rollout of new products and services through greater efficiency. For example, when an investment bank partnered with tech company Altimetrik, it was able to leverage internal data to improve its application development teams’ productivity by lowering the back-end requirements of new applications. That means the banking organization can respond more quickly to changing customer demands with new applications and online services that can be developed and launched swiftly.

Data Collaboration and Greater Fraud Protection

Data collaborations can also address threats related to fraud and other criminal activity. When banking organizations have a more comprehensive picture of financial transactions across their clients’ internal and partner financial institutions, it helps them identify suspicious activity within an expanded ecosystem. Data collaborations can also improve credit risk modeling, ESG portfolio construction and detection of financial fraud based on alternate data sources. Privacy-protecting collaboration helps financial services companies protect data in use and in transit, and supports confidential processing of data in artificial intelligence and business intelligence applications within untrusted computing environments like the public cloud.
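As a toy illustration of why a pooled view helps, the sketch below flags transactions that deviate sharply from the combined norm across two hypothetical institutions. All figures are invented, and real fraud models are far more sophisticated; the point is simply that the outlier stands out against cross-institution context.

```python
import statistics

# Hypothetical transaction amounts pooled from two partner institutions.
bank_a = [120.0, 95.5, 130.2, 110.8, 102.3]
bank_b = [99.9, 125.4, 118.7, 5200.0, 107.1]  # one unusually large transfer

pooled = bank_a + bank_b
mu = statistics.mean(pooled)
sigma = statistics.stdev(pooled)

# Flag anything far from the pooled norm; with only one bank's view,
# the outlier would lack this cross-institution context.
suspicious = [amount for amount in pooled if abs(amount - mu) > 2 * sigma]
assert suspicious == [5200.0]
```

In a privacy-enhancing computation setting, a statistic like this could be computed jointly without either bank exposing its raw transaction records to the other.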

Data Collaboration and Efficient Workforce Distribution

Earlier, we used the example of how data collaboration can help banking institutions determine appropriate staffing levels for inbound calls so that customers are not stuck with long wait times to speak with a live person. However, data collaboration can help with all facets of efficient workplace distribution. That may be particularly important when it comes to companies with many branches or offices, where shifts in staffing may make sense depending on the times of the year or other external factors.

The Challenges of Data Collaboration

Data collaboration needs a lot of data to provide useful insights. Unfortunately, there are many security and privacy challenges associated with data collaboration efforts. Companies do not want their financial information to be shared without discretion. Even if they did, financial institutions have strong business motives for keeping critical information to themselves and protecting their intellectual property while complying with privacy regulations.

Risks are involved when a company enters into a data collaboration with another organization. The data could be intercepted or misused by other participants during the collaboration process. This could be detrimental to both the organization and its customers. In October 2020, hackers breached a Facebook data partner to run targeted ads based on Facebook data for a money-making scam.

Additionally, the sharing of financial information could violate privacy regulations. In the United States, the Gramm-Leach-Bliley Act (GLBA) and the CCPA outline several privacy guidelines for sharing an individual’s financial information.

These regulations cover the sharing of personally identifying information, such as date of birth and Social Security number. In Europe, the GDPR strictly outlines how organizations may use personal data.

Finally, there are gray areas around the use of data on an individual — actions that are not necessarily illegal or immoral but potentially bear a reputation risk for business.

We Provide Greater Data Privacy and Control for Data Collaboration

Financial institutions are looking to unlock insights to improve customer experience, increase market share, reduce risk, and drive innovative offerings through data collaboration using privacy-enhancing technologies. While approaches like masking, tokenization, differential privacy, and synthetic data can be helpful, the TripleBlind solution compares favorably with these and other privacy-enhancing technologies. Our innovations radically improve the practical use of privacy preserving technology by adding true scalability and faster processing with support for a majority of data formats and machine learning algorithms that can be deployed on cloud and on-premise platforms.

Our patented one-way encryption technology approach ensures that data and algorithms can never be decrypted and only permits authorized operations. Best of all, the TripleBlind Solution is available through a simple API, and we never take possession of any data, algorithms, or answers.

Book a demo today if you want to learn more about how our solution enables data collaboration.


TripleBlind hires CTO with coveted expertise

Kansas City-based tech startup TripleBlind hired a new chief technology officer who’s won coveted awards for his cryptography work.

CTO Craig Gentry said he’s reached the “moral activist stage of my life” and wants to make a difference through technology.

“I’ve reached the phase in my career where I’m putting aside theoretical research for a little while, and I’d like to build systems that actually get used in the real world,” he said. “I think TripleBlind is a good place to use that, and they’re really focusing on using the right tools for the right problems.”

TripleBlind’s technology allows enterprises, such as hospitals and financial institutions, to securely share regulated and private data, without decrypting it or introducing additional risk and liabilities. It also adheres to regulatory standards such as HIPAA and GDPR. An example is its work with the Mayo Clinic to help the nonprofit secure third-party EKG and genetic data for developing, validating and deploying algorithms.

Another aspect that drew the tech veteran to TripleBlind is the fact it’s a $100 million company with a solid team already in place, which allows him to immediately contribute to the technology side, he said. They’re also laser-focused on building worthwhile solutions for customers that solve real challenges. Eventually, Gentry wants to help the startup expand its technology to the consumer side.

Gentry’s tech experience allows him to speak the language of TripleBlind’s customers and understand the problems they face, said Chris Barnett, vice president of partnerships and marketing at TripleBlind.

“When somebody says, ‘How does this really work? What’s under the hood?’, he has the knowledge, credibility and background to go toe-to-toe with anybody on the customer side that wants to go for a deep dive,” Barnett said.

Gentry, who brings more than 20 years of experience in cryptography, data privacy and blockchain, previously was a research fellow at the Algorand Foundation and spent a decade in the Cryptography Research Group at the IBM Thomas J. Watson Research Center. He earned a bachelor’s in mathematics from Duke University and has a Ph.D. in computer science from Stanford University.

Gentry invented the first fully homomorphic encryption scheme as part of his dissertation, which won the Association for Computing Machinery (ACM) Doctoral Dissertation Award in 2009. The following year, for his encryption work, he won ACM’s Grace Murray Hopper Award, which is given to people under 35 who have made a single, significant technical or service contribution. Apple co-founder Steve Wozniak won the award in 1979.

Fully homomorphic encryption allows individuals to perform analytics on encrypted data without needing an encryption key, he said. During the process, the data remains encrypted, and only the data’s owner has the ability to decrypt the results from the analytics performed. A real-life example could be an H&R Block customer sending his encrypted financial information to the tax preparation firm. H&R Block would apply its tax form algorithms to the data and send the encrypted tax return to the customer. Only the customer has the key to decrypt the tax information, he said.
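Fully homomorphic schemes like Gentry’s are mathematically involved, but the underlying homomorphic principle can be glimpsed in a much simpler setting. Textbook RSA, shown below with toy parameters that are insecure and for demonstration only, is homomorphic under multiplication: multiplying two ciphertexts yields a ciphertext of the product, so a computation happens without ever decrypting the inputs.

```python
# Textbook RSA (toy parameters, insecure) is multiplicatively homomorphic:
# Enc(a) * Enc(b) mod n decrypts to a * b. Fully homomorphic schemes such
# as Gentry's support arbitrary computation and are far more complex.
p, q = 61, 53            # tiny demo primes
n = p * q                # modulus, 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n  # multiply ciphertexts only
assert dec(product_cipher) == a * b     # decrypts to 42; a and b never exposed
```

The tax-preparation example works the same way in spirit: the firm computes on ciphertexts, and only the customer’s key can reveal the result.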

For Gentry, technology’s allure is wrapped up in math. It’s like a puzzle that perfectly fits together, he said. After earning his bachelor’s in math, however, he went to law school and became an intellectual property lawyer for nearly two years.

“I thought if I go to math grad school, I’m just going to be some recluse in a basement somewhere working on math problems that no one in the real world cares about. … (But) I got sick of (being a lawyer) pretty quickly.”

He started applying to math and computer science jobs and heard back from one company: DoCoMo USA Labs, which gave him a list of research topics that he could choose from in his new role. He picked cryptography.

“So that’s how I fell into cryptography – as a disenchanted lawyer, just looking for something mathematical to do,” he said.


TripleBlind Experts to Highlight Optimal Privacy-Enhancing Technologies for Unlocking IP Value of Data in May 25 Webinar


Chris Barnett, VP of Partnerships & Marketing, TripleBlind
Tim Massey, VP of Product & Customer Success, TripleBlind
Chad Lagomarsino, Sales Engineer, TripleBlind



What if emerging privacy-enhancing technologies (PET) could reshape and accelerate an organization’s data-based innovation activities?

On May 25, TripleBlind will host a webinar to highlight how enterprises can select the optimal privacy-enhancing technologies (PET) to suit their specific business and collaboration needs. Three experts from the company will cover multiple techniques for privacy-enhancement, and offer guidance on how to evaluate and implement those techniques. 



Handling data effectively is among the biggest concerns for C-Suite leaders, compliance officers and data scientists in the healthcare and financial services industries. Issues surrounding data access, data prep, data bias challenges and compliance affect every business that leverages artificial intelligence (AI), machine learning, analytics or collaboration.

The emerging PET category represents a cohort of technological solutions that seek to ease the pains, pressures and risks involved in working with sensitive and protected data. 



Wednesday, May 25, 2022, 11 a.m. CT
“Privacy-Enhancing Technologies: Who Should Care and Why?”



Virtual, via Zoom
Participants can register here




About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Ensuring Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well understood principles, such as federated learning and multi-party compute. Our innovations radically improve the practical use of privacy preserving technology, by adding true scalability and faster processing, with support for all data and algorithm types. TripleBlind natively supports major cloud platforms, including availability for download and purchase via cloud marketplaces. TripleBlind unlocks the intellectual property value of data, while preserving privacy and ensuring compliance with HIPAA and GDPR. 

TripleBlind compares favorably with other privacy preserving technologies, such as homomorphic encryption, synthetic data, and tokenization, and has documented use cases for more than two dozen mission critical business problems.

For an overview, a live demo, or a one-hour hands-on workshop, contact us today.




TripleBlind Appoints Encryption, Privacy and Blockchain Expert Craig Gentry as Chief Technology Officer

KANSAS CITY, MO – May 18, 2022 – TripleBlind, creator of the most complete and scalable solution for privacy enhancing computation, announces Craig Gentry as the new Chief Technology Officer. Craig will lead TripleBlind’s technology vision for expanding the most comprehensive privacy preserving technology in the industry.

Craig joins TripleBlind with more than 20 years of experience in cryptography, data privacy and blockchain, and has received numerous accolades for his research and advancements. This includes:

  • 2009 – After inventing the first fully homomorphic encryption scheme as part of his Ph.D., the Association for Computing Machinery (ACM) awarded him the ACM Doctoral Dissertation Award. This award is presented annually to the author of the best doctoral dissertation in computer science and engineering.
  • 2010 – Won the Association for Computing Machinery Grace Murray Hopper Award, which goes to an individual who makes a single, significant technical or service contribution before age 35. Apple inventor and legend Steve Wozniak received the award in 1979.    
  • 2014 – Awarded a MacArthur Fellowship, unofficially but commonly known as the Genius Grant, as a future investment in his originality, insight and potential.


Before joining TripleBlind, Craig served for three years as a research fellow at the Algorand Foundation, an organization dedicated to fulfilling the global promise of the Algorand blockchain, designed to create a borderless global economy. Prior to that, he spent 10 years in the Cryptography Research Group at the IBM Thomas J. Watson Research Center, where he worked with colleagues to bring previously theoretical privacy enhancing technologies – such as homomorphic encryption and zero-knowledge proofs – toward practicality. Craig was introduced to cryptography as a researcher at DoCoMo USA Labs.

Craig holds a Ph.D. in Computer Science from Stanford University, a J.D. from Harvard Law School, and a B.S. in Mathematics from Duke University.

“TripleBlind is a leader in solving real business problems with Privacy Enhancing Computation. The addition of Craig Gentry to our leadership team will foster further innovation and accelerate development of groundbreaking technology,” said Riddhiman Das, CEO and co-founder of TripleBlind. “Craig is a luminary in this space, and I’m honored to have him lead and define the strategy for how the latest advancements in privacy enhancing technologies can deliver scalable solutions for enterprises in healthcare, financial services, and other industries globally.”


About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Ensuring Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well-understood principles, such as federated learning and multi-party computation. Our innovations radically improve the practical use of privacy preserving technology by adding true scalability and faster processing, with support for all data and algorithm types. TripleBlind natively supports major cloud platforms, including availability for download and purchase via cloud marketplaces. TripleBlind unlocks the intellectual property value of data while preserving privacy and ensuring compliance with HIPAA and GDPR.

TripleBlind compares favorably with other privacy preserving technologies, such as homomorphic encryption, synthetic data, and tokenization, and has documented use cases for more than two dozen mission-critical business problems.




3 Key Figures in the History of Privacy-Enhancing Technology

Privacy-enhancing technology may not appear in as many headlines as blockchain or cryptocurrency technologies, but behind the scenes, privacy-enhancing technology is enabling scientific breakthroughs and unprecedented business insights.

The privacy technologies widely used today are the result of developments made over the past half-century. Extremely gifted and innovative changemakers have made groundbreaking contributions spanning this period, from Andrew Yao developing essential principles in the early 1980s at the University of California, Berkeley, to Cynthia Dwork devising privacy-based research principles just a few years ago while at Microsoft Research. The developments made by a handful of key figures have been instrumental in advancing this area of technology, and we are enjoying the fruits of their labor today. While there are many people to highlight and thank for their contributions to the current state of privacy-enhancing technologies, below we feature three figures who have played fundamental roles.


Andrew Yao

In 1982, Andrew Yao was the sole author of a paper that would lead to a game-changing privacy technology called multi-party computation. In addition to developing the theoretical concept, Yao also developed several fundamental multi-party computation algorithms, on which the majority of today’s protocols are built.

In the seminal paper he presented at the 23rd Annual Symposium on Foundations of Computer Science, Yao used a simple riddle to introduce the problem he hoped to solve: Two secretive millionaires having lunch decide the richer person should pay the bill, but how can they do this if neither one wants to reveal what they are worth?

The solution to this riddle, Yao determined, is a two-party protocol that can compute the Boolean result of private input 1 ≤ private input 2. Called the Garbled Circuits Protocol, Yao’s approach involves a Boolean gate truth table that is ‘garbled’, that is, obfuscated using random strings called labels. This truth table is sent from the first party to the second party, who evaluates the garbled gate using a symmetric encryption key to produce a Boolean result.
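
To make the garbling step concrete, here is a minimal Python sketch of a single garbled AND gate. This is an illustration only, not a secure implementation: real garbled-circuit systems use authenticated encryption, optimizations such as point-and-permute, and oblivious transfer to deliver the evaluator’s input labels. The function and variable names are our own.

```python
import hashlib
import os
import random

def garble_and_gate():
    """Garbler's side: build a garbled truth table for one AND gate."""
    # One random 16-byte label per possible bit value (0/1) of each wire.
    labels = {w: {b: os.urandom(16) for b in (0, 1)} for w in ("a", "b", "out")}
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            # Derive a one-time pad from the pair of input labels, then
            # "encrypt" the correct output label by XORing it with the pad.
            pad = hashlib.sha256(labels["a"][bit_a] + labels["b"][bit_b]).digest()[:16]
            out_label = labels["out"][bit_a & bit_b]
            table.append(bytes(p ^ o for p, o in zip(pad, out_label)))
    random.shuffle(table)  # row order must not reveal the inputs
    return labels, table

def evaluate_gate(table, label_a, label_b, out_labels):
    """Evaluator's side: holds exactly one label per input wire and learns
    only the output bit (the garbler reveals the output-wire mapping)."""
    pad = hashlib.sha256(label_a + label_b).digest()[:16]
    for row in table:
        candidate = bytes(r ^ p for r, p in zip(row, pad))
        for bit, label in out_labels.items():
            if candidate == label:
                return bit
    raise ValueError("no row decrypted correctly")
```

Because the evaluator sees only random-looking labels, it learns the gate’s output without learning the other party’s input bit, which is exactly the property Yao’s millionaires need.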

An extension of this protocol to include more than two parties, multi-party computation is a system that allows multiple parties to compute a shared function using individual private inputs.

The practical application of this protocol was difficult to achieve until the 2000s, when more sophisticated algorithms, fast networks, and more powerful and accessible computing made it practical to develop multi-party computing systems. Yao’s work became even more relevant during the rise of Big Data and machine learning.

By enabling the privacy-preserving use of large datasets, multi-party computation has become a valuable tool in machine learning, where multiple parties want to collaborate on a model through the use of a combined dataset that does not expose the raw inputs of individual participants.

Thanks to the pioneering work of Andrew Yao, machine learning systems have access to a wider variety of sensitive data, enabling critical new breakthroughs and insights in fields such as precision medicine and diagnostic imaging.


Cynthia Dwork

Cynthia Dwork is a theoretical computer scientist at Harvard University specializing in cryptography, distributed computing, and privacy technologies with more than 100 academic papers and two dozen patents to her name.

In 2006, Dwork was the lead author of a groundbreaking paper presented at the Third Theory of Cryptography Conference that established principles for a new kind of privacy-enhancing methodology: differential privacy. Dwork has said conversations with philosopher Helen Nissenbaum inspired her to focus on ways to maintain privacy in the digital age.

Differential privacy describes a group of mathematical methods that lets researchers compute on large datasets containing personal information, including medical and financial information, while maintaining the privacy of individual contributors to the dataset. These methods support privacy by adding small amounts of statistical noise to either raw data or the output of computations on raw data.

Differential privacy methods are designed to ensure that the added noise doesn’t significantly dilute the value of data analysis. At the same time, these methods guarantee that the results of an analysis remain essentially the same whether any given individual opts in or out of the dataset. Thus, differential privacy prevents the release of individuals’ personal information through data analysis. This groundbreaking approach addresses many of the limitations associated with previous privacy techniques.
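
To make the noise-addition idea concrete, here is a minimal Python sketch of the Laplace mechanism, the canonical differential-privacy method for counting queries. The function names are illustrative rather than drawn from any particular library.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Epsilon-differentially-private count.

    A counting query has sensitivity 1: adding or removing one person
    changes the true count by at most 1. Laplace noise with scale
    1/epsilon therefore suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for record in records if predicate(record))
    return true_count + laplace_noise(1.0 / epsilon)
```

With epsilon = 1, a count over thousands of records is typically off by only a handful, so the aggregate statistic stays useful while any single individual’s presence in the dataset is statistically masked.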

In 2015, Dwork was the lead author of another key paper, “The reusable holdout: Preserving validity in adaptive data analysis,” which outlined how differential privacy could be used to further machine learning-based scientific research.

In scientific research, machine learning typically involves a training dataset and a testing, or ‘holdout’, dataset on which a trained machine learning system is evaluated. Once the holdout dataset has been analyzed, it is no longer an independent ‘fresh’ dataset. In the 2015 paper, Dwork and her colleagues proposed using differential privacy to preserve the independence of the holdout dataset.

According to Dwork, this application of differential privacy targets a future in which new data is hard to come by. Since machine learning requires massive amounts of data, and fresh data is a finite resource, this approach enables the same holdout dataset to be reused many times without invalidating the analysis.
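
The core of the reusable-holdout idea (called “Thresholdout” in the paper) can be sketched in a few lines of Python. This is an illustrative simplification, not the authors’ implementation: the paper uses Laplace noise and additional budget bookkeeping, while Gaussian noise and hypothetical parameter names are used here for brevity.

```python
import random

def thresholdout(train_est, holdout_est, threshold=0.04, sigma=0.01):
    """One query of a reusable-holdout ("Thresholdout") style mechanism.

    train_est / holdout_est: the analyst's statistic computed on the
    training set and the holdout set, respectively. If the two agree
    to within a noisy threshold, answer from the training set alone,
    so the holdout leaks nothing and stays "fresh"; otherwise answer
    with a noised holdout value.
    """
    if abs(train_est - holdout_est) < threshold + random.gauss(0, sigma):
        return train_est
    return holdout_est + random.gauss(0, sigma)
```

Because every answer is either computed without the holdout or released through noise, the analyst cannot gradually overfit to the holdout set through repeated adaptive queries.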


David Chaum

Having taught graduate-level business administration at New York University and computer science at the University of California, Berkeley, David Chaum laid the foundation for a number of business-focused privacy-enhancing computation techniques, including digital signatures, anonymous communications, and a trustworthy digital system for secret-ballot voting.

In a groundbreaking 1983 paper, Chaum established principles for blind signatures. This digital signature system enabled untraceable payments by allowing a payment receiver to sign for a payment without knowing its origin. The same paper also laid out principles for digital cash, a precursor to cryptocurrency, describing how people could obtain and spend digital currency in a way that could be untraceable.
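
The mechanics of blind signatures can be illustrated with textbook RSA, the setting of Chaum’s original construction. This is a toy sketch only: the key sizes below are far too small for real security, and practical schemes hash the message before signing.

```python
import random
from math import gcd

def blind_signature_demo(message):
    """Blind-sign `message` so the signer never sees it."""
    # Toy RSA keypair for the signer (illustrative sizes only).
    p, q = 1009, 1013
    n, phi = p * q, (p - 1) * (q - 1)
    e = 65537
    d = pow(e, -1, phi)  # private signing exponent

    # Payer: blind the message with a random factor r coprime to n.
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    blinded = (message * pow(r, e, n)) % n

    # Signer: signs the blinded value without ever learning `message`.
    blind_sig = pow(blinded, d, n)

    # Payer: strip the blinding factor to recover a valid signature,
    # since (m * r^e)^d = m^d * r (mod n).
    signature = (blind_sig * pow(r, -1, n)) % n

    # Anyone can verify against the signer's public key (n, e).
    assert pow(signature, e, n) == message % n
    return n, e, signature
```

The signer can later verify that a signature is genuinely its own, yet cannot link it back to any particular signing request, which is what makes the payment untraceable.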

Initially, Chaum found these politically and socially tinged concepts to be very unpopular in academic circles. Facing resistance, Chaum decided to strike out on his own to create DigiCash, a digital payments company. The DigiCash system was called eCash and its currency was called CyberBucks. The system was very similar to Bitcoin, but it was centralized, unlike Bitcoin’s decentralized network. Private-sector success helped the idea of privacy-enhanced payments catch on, and Chaum would go on to present his cornerstone concept of electronic cash at the first International World Wide Web Conference, held at CERN in Geneva, Switzerland, in 1994.

In 1989, Chaum and a colleague developed ‘undeniable signatures’, an interactive signature scheme that allows the signer to control who is able to verify the signature. In 1991, Chaum and another colleague developed a system for “group signatures” that allows one individual to anonymously sign on behalf of an entire group.

Over the years, Chaum has also developed a number of digital voting systems designed to preserve the secret ballot and protect the integrity of elections. One cryptographically verifiable system, called Scantegrity, was used by Takoma Park, Md., for an election in November 2009, the first time such a system was used in a public election.

While Chaum was able to develop an impressive array of privacy-enhancing techniques, he’s probably best known for devising the core principles behind something that gets a lot more headlines: blockchain technology.


We’re taking the next step in privacy-enhancing technology

The TripleBlind Solution expands on the data privacy-enhancing technologies developed by the pioneers in our industry.

Our technology allows easy access to the foundational multi-party computing approach established by Yao, as well as other privacy-enhancing technologies, in a seamless package. By leveraging our solution, researchers, financial institutions, and other organizations are able to focus on innovative collaborations while maintaining possession of their own proprietary assets.

Our solution also meets the highest privacy standards. In the same way differential privacy protects individuals, our privacy-enhancing software allows data owners to operationalize sensitive data while protecting the privacy of individuals.

If you would like to learn more about the latest in data privacy technology and tools, please contact us today.