Perle Raises USD 9M to Launch Web3-Powered AI Data Training Platform

Image Source: Ahmed Rashad LinkedIn
  • Perle secures USD 9M in seed funding led by Framework Ventures to launch Perle Labs, its Web3-powered AI data training ecosystem.
  • Perle Labs will reward contributors with transparent payments, on-chain proof of work, and verifiable histories to improve AI model performance.
  • Founded by AI veterans, it offers tools for multimodal data collection, RLHF, and fine-tuning to tackle complex, nuanced AI challenges.

Perle

Perle, a Web3-powered AI training data platform, has closed a USD 9 million seed funding round led by Framework Ventures.

With the newly acquired capital, the company aims to launch Perle Labs, a crypto-native ecosystem designed to transform how human input powers AI development. The launch is intended to provide transparent payments, on-chain attribution, and verifiable work histories.

AI Models

Perle Labs aims to address the limitations of today's AI models by expanding access to high-quality, diverse, and verified datasets. According to the company, today's AI models struggle with complex, nuanced tasks because they are only as good as the data they are trained on. The platform aims to reward contributors, establish on-chain proof of work, and empower a global community to participate in shaping AI.

“As AI models grow more sophisticated, their success hinges on how well they handle the long tail of data inputs—those rare, ambiguous, or context-specific scenarios. By decentralizing this process, we can unlock global participation, reduce bias, and dramatically improve model performance,” said Ahmed Rashad, CEO of Perle.

The startup combines human expertise with adaptive workflows to help teams collect, annotate, and evaluate specialized training data faster and with higher accuracy. Its self-serve platform supports the full AI development lifecycle, including multimodal data collection (audio, image, video, etc.), reinforcement learning from human feedback (RLHF), and assistant fine-tuning.
