Exploring AI Licensing: Human Native AI’s Vision for an Ethical Data Marketplace

In the realm of AI development, the need for vast datasets is undeniable, yet sourcing that data ethically remains paramount. Recent licensing agreements between OpenAI and media outlets such as The Atlantic and Vox underscore the mutual interest in securing rights to AI training data.

Human Native AI, a London-based startup, is spearheading the establishment of a marketplace designed to facilitate AI training licensing deals. Their aim is to bridge the gap between companies embarking on Large Language Model (LLM) projects and content rights holders willing to license their data.

The core objective is twofold: to assist AI companies in finding suitable data for model training while ensuring rights holders are duly acknowledged and compensated. Under this model, rights holders can upload their content free of charge, connecting with AI companies to negotiate revenue-sharing or subscription deals. Human Native AI provides support in content preparation, pricing, and copyright monitoring, earning a commission from each successful deal while also charging AI companies for transaction and monitoring services.

James Smith, CEO and co-founder, drew inspiration from his experience at Google DeepMind, where data scarcity posed significant hurdles. Reflecting on the Napster era, Smith envisioned a marketplace that empowers creators while facilitating fair compensation, a concept he pitched to his friend Jack Galilee during a casual stroll in the park, as he had done with other potential startup ideas.

Since its April launch, Human Native AI has garnered promising interest from both content creators and AI companies, with several partnerships expected to be announced soon. A recent £2.8 million seed funding round led by British micro VCs LocalGlobe and Mercuri further validates their vision, with plans to expand the team and enhance platform capabilities.

While still in its infancy, Human Native AI fills a crucial void in the burgeoning AI industry. By providing a streamlined avenue for data acquisition, the platform aims to benefit both AI players and rights holders, fostering collaboration and equitable access.

Looking ahead, Smith envisions leveraging the platform’s data to provide insights for rights holders, aiding in effective pricing strategies. Moreover, Human Native AI’s launch coincides with evolving AI regulations, which increasingly stress the importance of ethically sourced data, a point Smith underscores in calling for responsible AI development that safeguards existing industries.

In conclusion, Human Native AI stands at the forefront of reshaping the AI landscape, championing ethical practices and human-centric progress. As the industry evolves, their platform serves as a testament to the potential of responsible innovation—one that prioritizes collaboration, transparency, and equitable participation.
