TripleBlind team photo at Harvesters Volunteering

Q3 On-Site Recap: TripleBlind Collaborates In-Person in KC

Excited to see what our quickly-scaling startup is up to? At the start of Q3, TripleBlind’s team gathered at our Kansas City HQ for a few days of meaningful reflection, strategic planning, and community-centered volunteering!

This quarter, we’ve been proud to launch a full-scale customer support center, conduct a robust survey of Chief Data Officers’ perspectives on data privacy, and successfully train a privacy-preserving random forest model on a 300-million-row dataset. We’ve also been featured in Constellation Research’s and Omdia’s respective market research reports! We’ve been growing rapidly, so to create and uphold a strong company culture, we wanted to gather our globally distributed team members together at our home base in Kansas City.

Here’s a recap of our three-day on-site, as well as a handful of shoutouts to all who helped make this special event possible.

Day One

Arrival day, warm introductions, and an all-team dinner.


Our now 50-person team met (and reunited) at the end of July, many making fresh introductions at a company dinner graciously hosted by COO & Co-Founder Greg Storm and his wife. We shared jokes, reminisced on our progress, and set the stage for the coming days over a meal catered by Cupini’s!

Day Two

Community volunteering, an all-team show-and-tell session, and Boulevard Brewery’s “Happy Hour”!

We started the day by giving back, volunteering at Kansas City’s very own Harvesters –– a food network that “mobilizes the power of our community to create equitable access to nutritious food and address the root causes and impact of hunger.” Learn more here!


Following our volunteering session, each key department at TripleBlind shared insights, progress, and end-of-year goals in a “show-and-tell” presentation. Priorities for Q4 include:

  • Improving Customer Success toolsets and processes to effectively and efficiently engage with product users
  • Driving improvements in our onboarding process, SDK, and API across all supported platforms
  • Sharing information-rich content about privacy-enhancing computation to increase market awareness

We’re feeling prepared and motivated to tackle these challenges as we continue to grow and advance!

We finished the day with some local favorites: dinner from Joe’s Kansas City Barbecue, a happy hour at Boulevard Brewing Company, and a night of fun and arcade games at UpDown! Craig Gentry (CTO), Babak Poorebrahim Gilkalaye (Head Cryptographer), Gharib Gharibi (Applied Scientist), Mikulas Plesak (Implementation Engineer), and Brian Koziel (Senior Cryptographic Engineer) had a showdown over who would reign as Pac-Man champion. What can we say? A team that works hard plays hard!

Day Three

Interdepartmental collaboration and farewells.

TripleBlind team at breakfast

On our final day together, the TripleBlind team collaborated in interdepartmental meetings. We value transparency and open communication across all projects, so this was an excellent opportunity for team members to find common ground and devise creative solutions. We closed out our Q3 onsite with wholesome lunches, warm see-you-laters, and a renewed sense of vision and purpose for Q4!

We’d like to thank the employees of Boulevard Rec Deck for hosting us, the employees at Harvesters for teaching us, and the employees of Cupini’s in Westport, Zocalo on the Plaza, Town Topic in the Crossroads, and Joe’s Kansas City Barbecue for keeping us fed.

Thanks again, Kansas City!

Interested in joining our fast-paced and mission-driven team? We’re always looking for talent, both in Kansas City and remotely! Check out our Careers Page for open opportunities –– We’d love to see you at our next on-site!



Privacy By Design Should Be the New Normal

What is personal data really used for?

LinkedIn, Facebook, Instagram, or Twitter –– One scroll is met with countless advertisements for data-based products and services. In fact, digital marketing experts estimate that the average person is exposed to 4,000 to 10,000 ads per day, carefully curated according to their online activity, geographic location, and demographic information. This isn’t necessarily a bad thing, as innovative and personalized solutions can save users time and money, improve health and wellbeing, and generally ease burdens of life at the click of a button. Personal data is used in industries other than advertising as well –– healthcare, finance, insurance, and transportation companies all use vital consumer data to improve patient outcomes, revolutionize product offerings, and optimize cumbersome operations. But with growing concerns about how personal data is collected, stored, and used, how can enterprises utilize and collaborate with insight-rich information without compromising the privacy of individual consumers? The answer: Privacy by Design.

“81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits.” 

Pew Research Center, 2019

In this article, we’ll explore the Privacy by Design framework as a foundation for protecting individual privacy while unlocking the true intellectual value of sensitive data. No, this isn’t an oxymoron –– by implementing this framework and using privacy-enhancing technologies in your organization’s product or service development, you’ll be able to preserve privacy, reduce risk and liability, and increase consumer trust in a data-driven world. How is that for a new normal? 

What is “Privacy by Design?”

Privacy by design is “data protection through technology design.” In the context of systems engineering, Privacy by Design requires that privacy be accounted for throughout the entire engineering process –– allowing individuals and organizations to selectively share, or refrain from sharing, information about themselves or others. The framework was originally developed in 1995 by Ann Cavoukian and a team from the Information and Privacy Commissioner of Ontario, the Dutch Data Protection Authority, and the Netherlands Organization for Applied Scientific Research. It was then adopted by The International Assembly of Privacy Commissioners and Data Protection Authorities in 2010.

Initially critiqued as vague and inapplicable for real-world solutions, Privacy by Design has proven valuable in both legal systems and practical computer science applications. The European Union’s General Data Protection Regulation (GDPR) of 2018 incorporates the Privacy by Design framework, including principles such as “data protection by design” and “data protection by default.” The United States, Australia, Singapore, and additional countries have also adopted various elements of Privacy by Design in legislation protecting consumer and citizen data privacy. 

In the private sector, Privacy by Design strongly influences the development of new and innovative privacy-enhancing technologies (PETs). Privacy-enhancing technologies fall under the umbrella term “privacy-enhancing computation,” and include methods such as differential privacy, federated learning, homomorphic encryption, and more. These technologies allow internet users, product consumers, healthcare patients, and others to protect the privacy of their personally identifiable information (PII) when handled by outside organizations, services, and applications. 

“Privacy by Design” Principles

Privacy by Design encourages the proactive embedding of privacy into IT systems, network infrastructure, and business operations. The seven foundational principles of Privacy by Design include:

1. Proactive not Reactive; Preventative not Remedial

This principle anticipates and prevents privacy breaches before they happen, rather than after. By identifying and correcting for privacy gaps in a product, system, or service, organizations can prevent reactive and remedial actions in the event of a privacy breach.

2. Privacy as the Default Setting

With this principle in mind, individuals should retain privacy without needing to request the right from an IT system or business. No action should be required on the part of the individual, and instead, it should be built into a system by default.

3. Privacy Embedded into Design

This principle requires that privacy is an essential component of any system or service’s core functionality. Instead of tacking on privacy as an afterthought, systems engineers and developers should treat privacy as integral to a product or system.

4. Full Functionality –– Positive-Sum, not Zero-Sum

The Full Functionality principle suggests that false dichotomies between security and privacy should not exist –– and that in fact, it is possible to have both. Products, services, and systems designed under this principle should be private, secure, and functional.

5. End-to-End Security  –– Full Lifecycle Protection

Under this principle, strong security measures should be treated as essential to privacy from start to finish. When privacy is implemented into the system prior to any collection of information, all data is securely retained, and all data is securely destroyed at the end of a process, then providers and users benefit from end-to-end lifecycle management.

6. Visibility and Transparency –– Keep it Open

This principle encourages users, providers, and system/product designers to “trust, but verify.” To create trust, providers and businesses should openly state promises and objectives with data, subject to independent verification. Users should be able to easily access and verify this information.

7. Respect for User Privacy –– Keep it User-Centric

A user-centric approach requires the consideration of the individual’s interest when using a product or service that incorporates or handles data. This includes measures such as privacy defaults, appropriate and comprehensive notice, and user-friendly design.

What are the benefits of “Privacy by Design?”

When Privacy by Design is incorporated into systems by default, entire corporations and their consumers can see substantial benefits. Additionally, IT, risk management, and cybersecurity organizations can secure:

  • Regulatory and legal compliance with laws such as GDPR, CCPA, and HIPAA
  • Reputational and financial security
  • Proactive posture to legislative and cybersecurity changes
  • Cost-effective risk management

By implementing technical safeguards at each step of a system or product design process, organizations can benefit from technological advancements and insights-driven data without compromising individual privacy rights or security. Legal compliance through a combination of the Privacy by Design framework and innovative privacy-enhancing technologies can reduce risk, liabilities, and cost burdens in the event of a breach. Organizations will also be well-postured in the event of legislative changes, reducing resource-intensive product or system restructuring the moment a new privacy bill passes. It’s a win-win on all fronts, so long as privacy is treated as a proactive measure in each design process.

What challenges do organizations face in implementing “Privacy by Design?”

Personal data is at the heart of many critical industries’ business operations, including healthcare and finance. High privacy standards, though an ethical and legal requirement, can restrict the collection and use of data for future purposes, such as research, marketing, or business strategy. Other key challenges can include:

  • Inconsistent definitions of privacy — While some pieces of legislation, such as the GDPR and HIPAA, define key terms such as “Personally Identifiable Information” (PII) or “Protected Health Information” (PHI), these laws are often sector- or nation-specific. Organizations that operate globally must adhere to numerous privacy laws with varying definitions of privacy and protected information –– increasing resource and cost burdens for compliance. Proactive institutions will need to ensure compliance with these regulations, and/or create their own comprehensive definitions of privacy and protected information.
  • Buy-in at all organizational levels –– For correct implementation of the Privacy by Design framework, management and individual contributors must be aligned, forward-thinking, and adaptable to privacy stewardship and product development. Privacy by Design inherently requires centering privacy at each stage of the development process, from ideation to implementation.
  • Solidifying the framework –– By nature, Privacy by Design suggests a set of principles for centering privacy –– rather than a step-by-step, easy-to-implement process. Organizations will need well-defined strategies for implementation, product testing, and more to ensure each principle is successfully upheld.

How can my organization implement “Privacy by Design?”

Privacy by Design can easily guide data protection engineering processes, ensuring full lifecycle protection for sensitive data. Simple steps can include:

1. Devising and Sharing Transparent Privacy Policies 

If your organization collects, uses, or shares personal information, it’s important to explicitly share the nature and purpose of collecting user data. This can include volunteered personal data, such as the information a user inputs on a form, or automated personal data, such as data collected through cookies, tracking scripts, and more. Methods for transparency include sharing privacy policies through pop-up notifications, banner displays, and user agreements. Users should always have the option to opt out of sharing personal information.

2. Incorporating Privacy as a Default Setting

If your website, application, or other service requires explicit consent in the form of a checkbox –– avoid using a pre-ticked option. By granting the user full choice and autonomy to “check” for explicit consent, you allow the individual to actively participate in the collection, storage, or use of their data. If your organization requires consent to progress with data processing, you can always display a prompt or banner to let users know.
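As a minimal sketch of this idea (the category names here are invented for illustration), a consent record can default every category to off, so data sharing only ever happens after an explicit opt-in:

```python
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    """Privacy as the default: every category starts opted out."""
    analytics: bool = False
    marketing: bool = False
    third_party_sharing: bool = False

    def grant(self, category: str) -> None:
        # Consent is recorded only through an explicit user action.
        if not hasattr(self, category):
            raise ValueError(f"unknown consent category: {category}")
        setattr(self, category, True)

prefs = ConsentPreferences()   # a brand-new user shares nothing...
assert not prefs.marketing
prefs.grant("marketing")       # ...until they actively check the box
assert prefs.marketing
```

The design choice mirrors the principle: the safe state requires no action from the user, and only a deliberate call changes it.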

3. Reduce Data Collection

Supporting the principle of Full Lifecycle Protection requires collecting and processing the minimum amount of user data to achieve a specified purpose. In doing so, your organization can minimize liability and potential harm in the event of a data breach. This takes place by limiting the volume of data collected, selecting or excluding sections of user data collected, and only collecting critical data from users.
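One simple way to enforce this in code (field names here are hypothetical) is an allow-list applied before anything is stored, so data outside the stated purpose never enters your systems:

```python
# Hypothetical sketch: an allow-list keeps only the fields required for the
# stated purpose, so nothing extra is ever stored.
REQUIRED_FIELDS = {"email", "country"}

def minimize(record: dict) -> dict:
    """Drop everything outside the allow-list before storage."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

submitted = {
    "email": "user@example.com",
    "country": "US",
    "birth_date": "1990-01-01",   # not needed for the purpose -> never stored
    "phone": "555-0100",          # not needed for the purpose -> never stored
}
stored = minimize(submitted)
assert stored == {"email": "user@example.com", "country": "US"}
```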

4. Restrict Data Observability

By limiting data access or sharing to a need-to-know basis, your organization can more effectively protect user confidentiality. These access controls can include one-way encrypting datasets, terminating electronic sessions with data after a predetermined time of inactivity, and utilizing robust digital management systems for collection, storage, and use approval.
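To illustrate the one-way-encryption idea in a few lines (a sketch only, not a full access-control system), a direct identifier can be replaced with a salted one-way hash, so analysts can still join records across tables without ever seeing the raw value:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: replace a direct identifier with a salted one-way hash.
SALT = secrets.token_bytes(16)  # kept secret, stored separately from the data

def pseudonymize(identifier: str) -> str:
    return hmac.new(SALT, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("patient-12345")
assert token != "patient-12345"                # raw value never appears downstream
assert token == pseudonymize("patient-12345")  # still joinable across tables
```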

5. Implement a Combination of Privacy-Enhancing Technologies

If collaborating with sensitive data is a core element of your organization’s business development, strategy, or operations, consider implementing privacy-enhancing technologies into your product or service’s design process. Combinations of techniques like secure multi-party computation, tokenization and masking, and more can allow your data-intensive organization to thrive without compromising security or privacy. Note that while each individual privacy-enhancing technique can come with downsides, new innovations have radically improved the practical use of privacy-preserving technologies by adding true scalability and faster processing. 

What is the TripleBlind Solution?

The TripleBlind Solution is one such innovation. Built on well-understood principles of data protection, our software-only solution supports all data and algorithm types –– so your organization can unlock the intellectual property value of data, while preserving privacy and enforcing compliance with HIPAA and GDPR.

Here’s how the TripleBlind Solution is built with Privacy-by-Design in mind:

  1. Proactive not Reactive; Preventative not Remedial: TripleBlind never stores or handles any personal data. Our technology permanently and irreversibly de-identifies data through a combination of one-way encryption and distributed computing, which allows the algorithm to generate the same outputs without requiring an Algorithm Provider to process or use any personal data.
  2. Privacy as the Default Setting: TripleBlind was built with privacy and security in mind, allowing your data to remain behind your firewall while it is made discoverable and computable by third parties for analysis and ML training. The TripleBlind Solution has also obtained SOC 2 Type 1 certification for our commitment to establish and follow security policies and procedures. Learn more here.
  3. Privacy Embedded into Design: Since TripleBlind never stores, handles, or processes personal data, our technology helps minimize privacy risks and infringement on data subjects’ rights when working between vendors.
  4. Full Functionality –– Positive-Sum, not Zero-Sum: We believe that data privacy, security, and collaboration can exist simultaneously.  Since data never changes hands and remains behind each party’s firewall when using our API, the TripleBlind Solution reduces the risk of compliance failures with respect to data transfers –– allowing your organization to harness the full potential of information, without compromising on privacy or security.
  5. End-to-End Security  –– Full Lifecycle Protection: Our AI Tools remove common barriers to using high-quality data for artificial intelligence and deep learning, allowing AI professionals to solve their most pressing data access, prep, and bias challenges. These tools make it possible to train new models on remote data and run inference on existing models, while protecting the privacy and fidelity of data and intellectual property. 
  6. Visibility and Transparency –– Keep it Open: We believe in, “trust, but verify.” TripleBlind provides robust digital rights management (DRM). Each data operation must be explicitly approved by the appropriate administrator. Once approved, the dataset is one-way encrypted for one-time use. Once the operation is complete and the result is returned to the appropriate party, the one-way encrypted data is rendered useless. Permissions can be set as broadly or specifically as needed, to govern both internal and external use of an organization’s information assets.
  7. Respect for User Privacy –– Keep it User-Centric: We’re built with privacy in mind, and we’re also built with users in mind. Our software-only solution is easy to use, delivered by a simple API, and comes with comprehensive support from the TripleBlind team. All operations within the TripleBlind Solution are private by default, allowing your organization to decide how to collaborate with sensitive data.
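To make the one-time-use idea in the Visibility and Transparency point above concrete, here is a generic illustration (not TripleBlind’s actual API) of a grant that is consumed by the operation it approves:

```python
# Generic illustration of one-time-use permissioning: an approved operation
# consumes its grant, so the prepared asset cannot be reused.
class OneTimeGrant:
    def __init__(self, dataset_id: str, operation: str):
        self.dataset_id = dataset_id
        self.operation = operation
        self.used = False

    def consume(self) -> str:
        if self.used:
            raise PermissionError("grant already consumed")
        self.used = True
        return f"running {self.operation} on {self.dataset_id}"

grant = OneTimeGrant("trials-2022", "train-random-forest")
assert grant.consume() == "running train-random-forest on trials-2022"
try:
    grant.consume()           # a second use is refused
except PermissionError:
    pass
else:
    raise AssertionError("second use should have been refused")
```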

If you’re an organization looking to harness the intellectual property value of data, you can start by implementing privacy-enhancing technologies such as the TripleBlind Solution. Privacy by design, reduced risk, and increased consumer confidence can become your new normal.

Ready to see the most practical application of Privacy by Design? Schedule a demo with us or check out the following use cases of the TripleBlind Solution:

Early Indication Clinical Trial Reporting

Alternative Data in Financial Services

Bank Vendor Management

Insurance Claims Management Services



Big Data & AI World, October 12-13, Singapore


Big Data & AI World is an award-winning event focused on the intersection of technology and business. As attendees, we’re excited to learn about the data challenges business professionals face in 2022 and explore how the TripleBlind Solution can enable enterprises to collaborate across disparate datasets at scale.

COO Greg Storm will be speaking on The Impact of Privacy-Enhanced Computing on Big Data Availability; be sure to register for our slot or stop by booth T70 for a chat.

Click here to set up a meeting or demo with TripleBlind.


BioData World Congress, November 8-10, Basel, Switzerland


TripleBlind is honored to be speaking at BioData World Congress 2022. We can’t imagine a better forum to discuss the TripleBlind Solution than Europe’s largest event for big data in pharmaceuticals and healthcare. We’ll be speaking about how TripleBlind compares favorably with other privacy-preserving technologies, such as homomorphic encryption, synthetic data, and tokenization, and has documented use cases for more than two dozen mission-critical healthcare data problems.

Click here to set up a meeting or demo with TripleBlind.


HLTH, November 13-16, 2022, Las Vegas

The TripleBlind team will exhibit at HLTH 2022 in Las Vegas, so swing by our booth to say “hi”. Learn more about HLTH here, or click here to set up a meeting or demo with TripleBlind.


AI – Trust, Risk & Security: Responsible AI

Artificial intelligence brings vast opportunities to next-generation businesses, but great power comes with great responsibility.

AI has the capacity to directly impact billions of lives…positively and negatively. When AI is used to make hiring decisions, companies may benefit from an expedited or streamlined application process –– but qualified individuals may not receive a career-altering job offer due to inadvertent bias from an AI algorithm, limiting professional development and talent growth. On a societal level, industries like finance, transportation, and healthcare rely on accurate insights from data to provide essential services to the population. However, algorithmic bias threatens the efficacy of these industry decisions with severe implications: Some communities may not receive essential services because an algorithm determines that providing services to other communities is a more profitable business move. Take, for example, the widening social inequities in healthcare that stem from biases in data and algorithms. 

Growing socioeconomic inequality, job automation, privacy risks, and even weapons automation are known risks of irresponsible AI. These potential negative impacts around AI have (reasonably) raised considerable questions about trusting the technology, its ethical use, and its legal implications. In a recent report from Accenture, only 35% of global consumers said they trust how companies are using AI. 77% of respondents – an overwhelming majority – stated organizations should be held responsible for misusing AI.

Given the state of public sentiment, companies looking to adopt and increase their use of AI must take steps to use the technology in a responsible manner through an emerging area of business management called AI Trust, Risk, and Security Management (AI TRiSM).

See The Importance of AI TRiSM for more on the impact of responsibly implementing and using AI.

What is AI TRiSM and Responsible AI?

AI TRiSM involves a multifaceted strategy aimed at addressing the ethical, business, and legal concerns around AI.

While regulations around the world are slowly evolving to address concerns around AI, the creation of dependable standards is currently performed at the discretion of an organization’s data scientists and software developers. Organizations have a strong incentive to create dependable standards and there is a growing awareness around this issue. In a recent survey of risk managers, 58 percent of respondents said AI currently has the greatest potential for unintended consequences. Just 11 percent of surveyed risk managers said their organization is fully capable of analyzing the risks connected to AI adoption.

One emerging governance framework designed to document how an organization addresses the legal and ethical challenges surrounding AI is called Responsible AI. This framework and the surrounding initiatives are focused on resolving ambiguity in an effort to prevent negative unintended consequences.

The tenets of a Responsible AI framework address the design, development, and use of AI in ways that empower employees, provide value to customers, and fairly impact society. A successful Responsible AI framework allows companies to build trust around AI and scale up the technology in a responsible way.

The core principles of a Responsible AI framework deal with:

  • Comprehensiveness
  • Explainability
  • Ethical use
  • Efficiency

Comprehensiveness

Well-defined testing and governance metrics must be designed to protect algorithms from malicious actors.

Explainability

Users of AI should be able to describe the purpose for using it, the rationale for adopting it, and any associated decision-making process in a way that is easily understood by end users.

Ethical Use

Measures should be developed to identify and eliminate any systemic bias.

Efficiency

AI systems should be able to operate on a continual basis and react quickly to environmental changes.

While legal and ethical concerns are the primary drivers behind the adoption of Responsible AI, the use of this governance framework has also been shown to provide business benefits. According to research, companies that have put the proper governance frameworks in place, including Responsible AI, have seen almost three times the return on investments in AI compared to companies that have not put these frameworks in place.

The Difficulties of Implementing Responsible AI in an Organization

Organizations looking to implement Responsible AI must focus on translating ethical principles into quantifiable metrics that can be used in everyday operations. This calls for:

  • Technical considerations
  • Organizational considerations
  • Operational considerations
  • Reputational considerations

Technical Difficulties Of Implementing AI TRiSM 

The effectiveness of a Responsible AI framework can’t be measured using tried-and-true business metrics like website traffic or click-through rates. New technical metrics must be developed to monitor factors related to AI trust, AI risk, and AI security. Without good metrics and methods, organizations will find it difficult to effectively maintain their Responsible AI framework. They will also find it difficult to perform essential decision-making and build consensus around AI initiatives. There are promising signs, however, that counterfactual analyses and metrics like error rates are making it easier for organizations to implement a Responsible AI framework.
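As a hypothetical sketch of the kind of error-rate metric a Responsible AI dashboard might track (all numbers below are invented), a per-group comparison can surface disparities worth investigating:

```python
# Hypothetical sketch: a per-group error-rate metric for fairness monitoring.
def error_rate(predictions, labels):
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

group_a = ([1, 0, 1, 1], [1, 0, 0, 1])   # one mistake in four -> 0.25
group_b = ([0, 0, 1, 0], [1, 0, 1, 1])   # two mistakes in four -> 0.50
gap = abs(error_rate(*group_a) - error_rate(*group_b))
assert gap == 0.25   # a gap this large would flag the model for review
```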

Organizational Difficulties of Implementing AI TRiSM

Because the principles of Responsible AI come out of ethical concerns, it is critical for companies leveraging AI to have an organizational culture that encourages its people to raise concerns regarding AI systems. Too often, fears of decreased innovation or productivity prevent people from coming forward, and this can have a negative impact on risk mitigation. Along with establishing solid metrics related to Responsible AI, organizations need to provide training and incentives that empower employees to make the right decisions.

Operational Difficulties of Implementing AI TRiSM 

Organizations using AI should have governance structures in place to address accountability, conflict resolution, and competing incentives. These structures should be transparent and focused on addressing any misalignment, bureaucratic issues, and lack of clarity regarding AI-related operations.

Reputational Difficulties of Implementing AI TRiSM 

Ongoing and proactive approaches to Responsible AI can help prevent an organization from suffering reputational damage as a result of AI. This involves healthy skepticism from internal stakeholders, as ethical principles can shift due to changing opinions or recent events. Ongoing and well-intentioned scrutiny encourages the regular pressure testing of an organization’s Responsible AI framework.

How TripleBlind Supports Responsible AI

AI systems are built on vast amounts of data, and protecting that data is critical to addressing the trust and security risks of AI.

Innovative technology from TripleBlind is specifically designed to secure data for the development of AI technology. Specifically, our Blind Learning data tools allow AI developers to access vast amounts of sensitive data without ever exposing their proprietary algorithms. We believe in:

  • Protecting the model by distributing learning, never revealing an entire model to any data provider
  • Reducing the burden on data providers by unlocking more data partners, solving for communication overhead and easing collaboration through our simple-to-use API.
  • Dividing and conquering model training, optimizing for computational resources among partner organizations
  • Blind Decorrelation, guarding against membership inference attacks and preventing actors from predicting or uncovering training data

Are you ready to remove common barriers to using high-quality and ethical data for AI? We help solve for data access, prep, and bias challenges. Train new models on remote data and run inference on existing models –– all while protecting the privacy, fidelity, and intellectual property value of your data. Contact us today to learn more, or download our Whitepaper to deep-dive into our technology.


AI Security Risk Management: Industry Standards

Artificial intelligence has the power to make positive changes for both business and society, and as AI has become more influential, it has also become a target for malicious actors.

The enthusiasm and growth mindset around AI must be tempered with the need to secure these systems, and there is a recognition that current security measures are inadequate. A recent report from Gartner on AI Trust, Risk and Security Management (AI TRiSM) stated, “AI poses new trust, risk and security management requirements that conventional controls do not address.”

AI Security Risk Assessment Standards

Organizations that leverage AI must ensure their entire system is kept secure, including both algorithms and the data they use. The most common threat vectors against AI systems are adversarial attacks, data poisoning, and model extraction. The risk level and potential impact of attacks can vary based on the industry and type of organization. For example, some organizations may not have to worry about model extraction attacks but should be very concerned about issues related to data privacy.

According to a Gartner report, 30 percent of all AI cyberattacks through 2022 will leverage training data poisoning, adversarial samples, or model extraction to attack machine learning-powered systems.

Data Privacy Attacks

Data privacy attacks are focused on uncovering the data set used to train an AI model, which can potentially compromise sensitive data like bank account numbers or personal health information. By querying a model or assessing its parameters, a malicious actor could glean sensitive information from the original data set used to train the model in question.

There are two main types of data privacy attacks: membership inference and model inversion. A membership inference attack is focused on determining whether a particular record or collection of records is inside a training dataset. A model inversion attack is used to extract the training data that was directly used to train the AI model.
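A toy illustration of the membership-inference idea (the confidence scores below are invented): overfit models tend to report higher confidence on records they were trained on, and an attacker can exploit that gap.

```python
# Toy membership-inference sketch: the attacker guesses that records scored
# with unusually high confidence were part of the training set.
def guess_membership(confidence: float, threshold: float = 0.95) -> bool:
    return confidence >= threshold

scores = {"record_in_training_set": 0.99, "unseen_record": 0.71}
assert guess_membership(scores["record_in_training_set"]) is True
assert guess_membership(scores["unseen_record"]) is False
```

Real attacks are more sophisticated, but the underlying signal is the same: the model behaves measurably differently on data it has seen.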

Training Data Poisoning

A data poisoning attack involves the corruption of data used to train an AI system, adversely impacting its ability to learn or function. Data poisoning is often done to make an AI system flawed or to affect a retraining process.

There are two main types of data poisoning attacks: “label flipping” and “frog boil”. Intended to degrade an AI system, label flipping involves an attacker controlling the labels assigned to a section of training data. This attack has been shown to degrade the performance of an AI system’s classifier, leading to increases in classification errors.

In a frog boil attack, a malicious actor persistently disrupts an AI system while remaining below the system’s rejection threshold. The result of a frog boil attack is an unwanted shift in the model’s predictions.
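
To illustrate label flipping, the sketch below assumes an attacker who can rewrite labels on part of the training data; relabeling most class-1 records as class 0 pushes the learned boundary deep into class-1 territory and degrades test accuracy. The models, dataset, and 80% flip rate are illustrative choices, not a reference attack.

```python
# Sketch of a label-flipping poisoning attack on a binary classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=1)
X_train, y_train = X[:700], y[:700]
X_test, y_test = X[700:], y[700:]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker relabels 80% of the class-1 training records as class 0.
rng = np.random.default_rng(1)
ones = np.where(y_train == 1)[0]
flipped = rng.choice(ones, size=int(0.8 * len(ones)), replace=False)
poisoned_y = y_train.copy()
poisoned_y[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

clean_score = clean_model.score(X_test, y_test)
poisoned_score = poisoned_model.score(X_test, y_test)
print(clean_score, poisoned_score)  # accuracy drops after poisoning
```

Because the poisoned model still trains without errors, the attack is easy to miss without monitoring label distributions and held-out accuracy over time.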

Adversarial Inputs

When a computer system receives data, it sorts that data into categories using a model called a classifier. Adversarial inputs are designed to manipulate an AI system in small but meaningful ways by feeding specially engineered noise into the system’s classifier.

Small but well-designed adversarial inputs can have a profound impact on the output of an AI system. For example, minor manipulations of a debt-to-total credit ratio could significantly impact the output of a credit score system –– which leads to real-life implications for everyday consumers, such as rejections for auto loans, mortgages, credit cards, and more.
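
A minimal sketch shows how small the required manipulation can be for a linear classifier: nudging a record along the model's weight vector, just far enough to cross the decision boundary, flips the prediction. The credit-scoring framing above is the motivation; the model and data here are illustrative assumptions.

```python
# Sketch of an adversarial input against a linear classifier: move the
# record against the weight vector just past the decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a record the model currently places in class 1.
record = X[model.predict(X) == 1][0]

# Decision score is positive for class 1; step just far enough past zero.
w, b = model.coef_[0], model.intercept_[0]
margin = record @ w + b
adversarial = record - w * (margin + 1e-3) / (w @ w)

# The prediction flips from 1 to 0 after a tiny, targeted perturbation.
print(model.predict(record.reshape(1, -1))[0],
      model.predict(adversarial.reshape(1, -1))[0])
```

Real attacks on deep models use gradient-based variants of the same idea (e.g. FGSM-style steps), but the principle is identical: a small, well-aimed change crosses the classifier's boundary.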

Model Extraction

In a violation of intellectual property and privacy, a model extraction attack involves a malicious actor trying to steal an entire AI model by extracting conclusions from the model’s predictions, allowing them to reverse-engineer algorithms instead of independently developing ML models. This can be the most damaging type of security threat, because a stolen model could be leveraged to compromise proprietary information, misrepresent a company, tarnish brand image, or spread misinformation. Perhaps most alarmingly, successful model extraction attacks have been performed without the use of high-level technical sophistication, and at high speeds.
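
The extraction workflow can be sketched in a few lines: an attacker with only query access generates probe inputs, records the target's answers, and trains a surrogate on those answers. Every model and parameter below is an illustrative assumption, chosen only to show the shape of the attack.

```python
# Sketch of model extraction: train a surrogate purely on the target
# model's responses to attacker-generated probe queries.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, random_state=3)
target = RandomForestClassifier(random_state=3).fit(X[:300], y[:300])

# The attacker never sees the training data, only the target's outputs
# on probes it generates itself.
rng = np.random.default_rng(3)
probes = rng.normal(size=(2000, X.shape[1]))
stolen_labels = target.predict(probes)

surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)

# Measure how often the surrogate mimics the target on unseen records.
agreement = (surrogate.predict(X[300:]) == target.predict(X[300:])).mean()
print(agreement)
```

Note that the attacker needs no special sophistication here, which is why rate limiting and query monitoring are common first-line defenses against extraction.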

AI Risk Management Framework

While attacks against AI systems are becoming more prevalent, there are several ways to address potential risks using risk management frameworks that are customized to each particular organization. The four main elements of an industry-standard AI risk management framework are definitions, inventory, policies, and structures.

  • Definitions. Organizations using AI must clearly define the scope and uses of the technology. Defining an AI system establishes a foundation for the risk management framework, and it lays out the various building blocks.
  • Inventory. An AI inventory identifies and tracks the various systems an organization has deployed so that associated risks can be monitored. An inventory typically defines each system’s purposes, objectives, and any restrictions on its use. An inventory can also be used to list the main data elements for each AI system, including any federated models or data owners.
  • Policies. Companies should decide if they need to update existing risk management policies or create an entirely new set of policies for their AI systems. These policies should be developed to encourage the appropriate usage and scaling of the technology.
  • Structures. Companies looking to develop or refine structures should establish a collection of key stakeholders with different sets of expertise coming from various departments. This ‘coalition’ should consider topics like data quality, data ethics, compliance obligations, existing data partners, potential data partners, appropriate safeguards, and system oversight. For the large-scale adoption of risk management structures, the standard approach is to have a formal approval process from a central body staffed by subject matter experts like machine learning engineers and security architects.
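
One lightweight way to start the inventory element described above is a structured record per AI system. The field names and example values below are illustrative, not a standard schema.

```python
# Sketch of an AI inventory entry: purpose, objectives, restrictions,
# data elements, and data owners for each deployed system.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    objectives: list[str]
    restrictions: list[str]
    data_elements: list[str]
    data_owners: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="credit-risk-scorer",
        purpose="Score consumer loan applications",
        objectives=["Reduce default rate", "Automate application triage"],
        restrictions=["No use outside lending decisions"],
        data_elements=["debt-to-credit ratio", "payment history"],
        data_owners=["internal-lending-db"],
    ),
]

print(len(inventory), inventory[0].name)
```

Keeping such records in one place makes it straightforward for the review coalition to audit each system's scope and data dependencies.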

Leveling Up Industry Standards with TripleBlind

With an industry-standard framework in place, organizations can take their AI security risk management to the next level with innovative technology from TripleBlind. Our Blind Learning tools allow organizations to protect both their valuable AI models and the sensitive data used to train them. We believe in:


  • Protecting the model by distributing learning, never revealing an entire model to any data provider
  • Reducing the burden on data providers by unlocking more data partners, minimizing communication overhead, and easing collaboration through our simple-to-use API
  • Dividing and conquering model training, optimizing for computational resources among partner organizations
  • Blind Decorrelation, guarding against membership inference attacks and preventing actors from predicting or uncovering training data

Ready to find out how? 451 Research recently joined us to discuss how to maximize data confidentiality and utility, as well as how privacy-enhancing technology is an essential component of any risk management framework.

We’d love to schedule a personal call to determine how the TripleBlind Solution can optimize your AI security. In the meantime, check out our Whitepaper to learn more about how data is abundant and underutilized.

Top 10 Innovative Healthcare AI Startups

What if you could diagnose breast cancer in under ten minutes? Detect Alzheimer’s 20 years before the onset of symptoms? Dramatically reduce the cost of prescription medicines in Ghana? 

A handful of startups are disrupting a heavily-regulated healthcare industry through the power of artificial intelligence (AI). Everything from health outcomes to administrative processes can be improved using high-quality data and machine learning, allowing healthcare practitioners to focus on what matters most: the patients they treat.

Experts predict that by 2026, the Global Healthcare AI market will account for $19.25 billion. In 2017, the same market was valued at $0.95 billion –– highlighting the rapid growth of the industry and the scale of these time-saving, cost-reducing, and accuracy-boosting technologies.

What organizations are driving the future of healthcare artificial intelligence (AI)? 

These are the top 10 innovative healthcare AI startups from around the world:

North America

1. Insitro, United States

Insitro combines cutting-edge machine learning technologies to develop a fresh approach to drug development. By leveraging the tools of modern computational biology, Insitro “generates high-quality, large data sets optimized for machine learning.” Biologists and data scientists work hand-in-hand to reduce the costs of research, increase the success rate of drug development, and provide better medicines for patients in need. Insitro advances the pharmaceutical value chain by focusing on gathering insightful population-scale data, in-vitro disease modeling, and scalable machine learning methods. 

Year Founded: 2018
Location: San Francisco, United States
Employee Count: 193

2. CorVista Health, Canada

CorVista’s mission is to improve cardiovascular patient experiences and outcomes using its novel diagnostic platform. Through a simple hand-held digital device, physicians can quickly collect patient signals and use machine-learned algorithms to enable rapid diagnosis. CorVista’s next-generation and non-invasive system can evaluate cardiovascular anomalies without the need for radiation or patient-intensive stress testing, decreasing patient burdens and allowing for efficient predictive assessments on cardiovascular disease. Cardiovascular conditions that CorVista can diagnose include coronary artery disease, pulmonary hypertension, and decreased heart function. 

Year Founded: 2012
Location: Toronto, Canada
Employee Count: 51


3. Doctor Anywhere, Singapore

Doctor Anywhere is on a quest to “make healthcare simple, accessible, and efficient for everyone.” Much like Teladoc in the United States, Doctor Anywhere enables over 2.5 million users in the APAC region to access primary and specialized healthcare through an easy-to-use mobile application. Patients can schedule video consultations, access home care services, and craft preventative health plans at the click of a button. Doctor Anywhere seeks to become the largest tech-enabled omnichannel healthcare provider in Southeast Asia & holistically improve patient outcomes.

Year Founded: 2017
Location: Singapore, Central Region
Employee Count: 200+

4. FRONTEO, Japan

Based in Tokyo, FRONTEO is a pioneer in AI-assisted electronic discovery. While its applications span sectors from finance to government, FRONTEO’s most recent notable work is in healthcare. In 2018, FRONTEO developed a fall prediction system for elderly hospital patients in Japan. Why? The total number of fall-related deaths in Japan increased from 5,872 in 1997 to 8,030 in 2016, with 78.8% of cases involving people aged 65 and over –– and as Japan faces demographic challenges with a record-breaking aging population, it’s more important than ever to identify preventative care solutions for geriatric patients.

According to FRONTEO, “Prevention of fall incidents and resultant avoidance of extended hospital stays helps the inpatients have better prognosis and quality of life, [and] further, reduces the burden of medical professionals and social medical costs.” Consequently, “Coroban Care,” was released in July of 2022 in collaboration with Eisai Inc., a U.S. research-based human healthcare organization that discovers, develops, and markets products around the globe. This AI-powered fall prediction system improves upon previous risk assessment tools such as the Morse Fall Scale and STRATIFY, reducing the time and effort of healthcare providers and increasing quality of care for elderly patients in Japan.  

Year Founded: 2003 (“Coroban Care,” 2022)
Location: Tokyo, Japan
Employee Count: 387


5. Onera Health, Netherlands

With approximately one in five individuals struggling with sleep, Onera Health’s mission is to easily identify ailments such as insomnia, sleep apnea, narcolepsy, and more through groundbreaking diagnostic patch systems. 

The traditional sleep testing process often requires numerous consultations with physicians, overnight stays at hospitals, and costly (and invasive!) machinery for testing. Onera Health offers an alternative: non-invasive sensors that record physiological parameters related to cardiovascular, respiratory, and brain functions. The no-wire solution is attached to the forehead, upper chest, abdominal area, and lower leg –– allowing physicians to collect and analyze data from electroencephalograms (EEGs), electromyograms (EMGs), and electrocardiograms (ECGs). 

Onera Health recently received FDA 510(k) clearance for its Onera STS system, allowing a commercial launch in both Europe and the United States by 2023.

Year Founded: 2017
Location: Eindhoven, Netherlands
Employee Count: 49

6. Bottneuro, Switzerland

Bottneuro, a spin-off of the University of Basel, specializes in developing non-invasive home-based therapy systems for Alzheimer’s and dementia patients. Using personalized transcranial electrical stimulation of selected brain areas, Bottneuro is able to provide a rapid, reliable, and cost-effective diagnostic process for those affected by neurodegenerative disorders. 

Alzheimer’s is currently estimated to affect 9.8 million people in Europe and 50 million people globally. Bottneuro improves upon existing diagnostic methods by allowing for early detection of Alzheimer’s and dementia –– up to 10 to 20 years before the first symptoms of these diseases appear. Better yet: Bottneuro’s diagnostic headset can be 3D printed and data can be analyzed on a tablet, increasing accessibility for personalized care around the world.

Year Founded: 2021
Location: Basel, Switzerland
Employee Count: 10

Latin America

7. Eva Tech, Mexico

Named one of the Top 30 Most Promising Businesses of 2018 by Forbes Magazine, Eva strives to improve patient outcomes and increase efficiency for clinics by digitizing radiology processes. Co-founders Raymundo González Leal, José Antonio Torres, and Julian Rios Cantu initially devised high-tech wearables to help women self-diagnose breast cancer, with garments holding biosensors to identify potentially malignant lumps. By 2020, Eva evolved into a network of booths in Mexico to perform thermal-imaging tests that deliver results in under 10 minutes.

Now, more than 350 companies use Eva’s picture-archiving and communication system (PACS) to allow physicians remote access to radiology reports –– expanding medical access and care to individuals who might otherwise need to travel hundreds or thousands of miles to see a radiology specialist or oncologist. Eva’s services are also more cost-effective than physical imaging plates that are traditionally used for cancer identification, further reducing barriers to entry for quality medical care.

Year Founded: 2017
Location: Mexico City, Mexico
Employee Count: 48

8. Osana Salud, Argentina

Osana Salud is building an API-connected infrastructure to lead digital transformation in South America’s healthcare system. By connecting providers, payors, and the pharmaceutical industry in one streamlined application, Osana Salud creates a patient-centric healthcare experience for everything from telehealth consultations to online prescription management. The company’s digital platform aggregates provider information, scheduling, treatment plans and more –– offering “more convenience, better results, and lower costs” for all parties.

Year Founded: 2019
Location: Buenos Aires, Argentina
Employee Count: 96


9. Ilara Health, Kenya

Ilara Health’s mission is to provide affordable diagnostic tools for primary care settings across Kenya. Through artificial intelligence and tech-powered diagnostic equipment, Ilara Health allows doctors and healthcare systems to provide high-quality care, reduce turnaround times for testing, and increase revenue. Since 2019, Ilara Health has worked to solve critical challenges in Kenya’s healthcare system: high-cost barriers for legacy diagnostic tools, a shortage of specialist expertise to operate them, and a vastly distributed medical system across the nation. Its developments in point-of-care diagnostic technology and smart financing packages allow patients to receive community-based care for blood tests, ultrasounds, diabetic screening, and more.

Year Founded: 2019
Location: Nairobi, Kenya
Employee Count: 25–100

10. mPharma, Ghana

Ghana’s mPharma is “building a network of community pharmacies across Africa as it plans to be the primary go-to-primary healthcare service provider for millions of people residing in the region.” Originally founded in 2013, mPharma’s mission is to use the collective power of pharmacy networks to negotiate lower prices with the best manufacturers, eliminating inefficiencies and increasing access to high-quality prescription medication. African countries with small to medium-sized economies often pay up to 30x more for the same prescription medication as those in Western nations –– leading many to opt for low-grade alternatives or even counterfeit medications. mPharma harnesses AI technology to account for price fluctuations, supply-chain disruptions, and market inefficiencies, allowing pharmacies to effectively stock life-saving medicines.

Year Founded: 2013
Location: Accra, Ghana
Employee Count: 293

What is TripleBlind?

TripleBlind is a fast-growing startup working at the cutting edge of enabling advanced data science in healthcare through military-grade cryptography. Our innovations radically improve the practical use of privacy preserving technologies, allowing healthcare organizations to collaborate with sensitive data without violating privacy regulations such as HIPAA. 

Our previous blog posts discuss how leveraging data can fuel pharma innovation and reduce health disparities and improve health equality. The TripleBlind Solution includes Blind AI Tools that remove common barriers to using high-quality data for artificial intelligence and deep learning, so healthcare-focused AI professionals can solve their most pressing data access, prep, and bias challenges.

To learn more about how TripleBlind can help further data collaboration and analysis for healthcare enterprises, please contact us today.
