AI startup partners with AWS to enhance video searchability

At its re:Invent conference, Amazon Web Services (AWS) announced a collaboration with Twelve Labs, a startup that uses multimodal artificial intelligence to improve video content understanding. The partnership aims to make video as searchable as text by developing foundation models that map natural language to video content, including actions, objects, and sounds.

Twelve Labs' technology is available on AWS Marketplace and lets developers build applications for semantic video search and text generation across industries such as media, entertainment, gaming, and sports. For instance, sports leagues can streamline the cataloging of game footage, and coaches can analyze athletes' techniques to improve performance.
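
To make the semantic-search idea concrete, the sketch below shows what a text-to-video query against such a service might look like in Python. The endpoint URL, field names, and response shape here are illustrative assumptions for a generic video-search API, not Twelve Labs' documented interface.

    import requests

    # Hypothetical endpoint and key; real values would come from the
    # provider's documentation after subscribing via AWS Marketplace.
    API_URL = "https://api.example-video-search.com/v1/search"
    API_KEY = "your-api-key"

    def search_videos(index_id: str, query: str) -> list[dict]:
        """Return video moments whose visuals or audio match a text query."""
        response = requests.post(
            API_URL,
            headers={"x-api-key": API_KEY},
            json={
                "index_id": index_id,  # a previously indexed video library
                "query": query,        # natural-language description
                # Match against both what is seen and what is heard.
                "search_options": ["visual", "audio"],
            },
            timeout=30,
        )
        response.raise_for_status()
        # Assumed response shape: a list of scored clip matches.
        return response.json()["results"]

    # Example: find highlight-worthy moments in indexed game footage.
    for clip in search_videos("game-footage-index", "three-point buzzer beater"):
        print(clip)  # e.g. {"video_id": ..., "start": ..., "end": ..., "score": ...}
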

"Twelve Labs was founded on a vision to help developers build multimodal intelligence into their applications," said Jae Lee, co-founder and CEO of Twelve Labs. "Nearly 80% of the world’s data is in video, yet most of it is unsearchable."

AWS supports Twelve Labs with compute capacity through Amazon SageMaker HyperPod, allowing the startup to train its foundation models faster and at lower cost. Because the models interpret different data formats, such as visuals and audio, simultaneously, they can surface deeper insights from video.
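
As a rough illustration of provisioning that compute, a HyperPod cluster can be created through the AWS SDK for Python. The sketch below uses boto3's SageMaker create_cluster call; the cluster name, instance type and count, S3 lifecycle-script location, and IAM role are placeholder assumptions, not Twelve Labs' actual configuration.

    import boto3

    sagemaker = boto3.client("sagemaker", region_name="us-east-1")

    # All values below are placeholders for illustration only.
    response = sagemaker.create_cluster(
        ClusterName="video-fm-training",
        InstanceGroups=[
            {
                "InstanceGroupName": "gpu-workers",
                "InstanceType": "ml.p5.48xlarge",  # GPU instances for training
                "InstanceCount": 16,
                "LifeCycleConfig": {
                    # Bootstrap scripts uploaded to S3 beforehand.
                    "SourceS3Uri": "s3://my-bucket/hyperpod-lifecycle/",
                    "OnCreate": "on_create.sh",
                },
                "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",
            }
        ],
    )
    print(response["ClusterArn"])
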

Jon Jones of AWS said that "Twelve Labs is using cloud technology to turn vast volumes of multimedia data into accessible and useful content." Under a three-year Strategic Collaboration Agreement (SCA), the companies will expand the collaboration globally, strengthening Twelve Labs' model training capabilities and bringing its advanced video understanding models to new industries.
