Press Release: Mobius Labs Unveils Breakthrough Multimodal AI Video Search Empowering Organizations to “Chat” with Their Content Libraries

At IBC 2023 on booth 5.G04, Mobius Labs, a developer of next-generation AI-powered metadata technology, will unveil its latest Multimodal AI technology that represents a breakthrough in how organisations can “chat” with their content libraries in the same way the world has learned to chat with large language model (LLM) systems such as OpenAI’s ChatGPT and Google’s Bard. The difference with Mobius Labs is that this system has been designed specifically for the Media & Entertainment industry, and it is efficient enough to be hosted locally, in the cloud, or both.
 
When humans watch a piece of video, we use our vision, hearing and language capabilities to understand the content. Mobius Labs has trained foundation models based on computer vision, audio recognition and LLMs to interpret media in the same way.
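The release does not describe how this interpretation is implemented, so the following is only an illustrative sketch of the general pattern behind multimodal retrieval: per-modality encoders map what a scene shows, sounds like and says into a shared vector space, the vectors are fused into one scene representation, and a natural-language query is ranked against them. Every function name below is a hypothetical placeholder rather than Mobius Labs’ actual API, and the toy encoder exists only so the example runs end to end.

import numpy as np

def embed(text: str, dim: int = 512) -> np.ndarray:
    # Placeholder encoder: maps a string to a pseudo-random unit vector so the
    # sketch is executable. A real system would use pretrained vision, audio
    # and text models that project into a genuinely shared embedding space.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def fuse(frame_vec, audio_vec, transcript_vec):
    # Combine per-modality vectors into one scene representation (mean pooling).
    v = (frame_vec + audio_vec + transcript_vec) / 3.0
    return v / np.linalg.norm(v)

def search(query: str, index: dict, top_k: int = 3):
    # Rank scenes by cosine similarity between the query and each fused vector.
    q = embed(query)
    scores = {scene: float(vec @ q) for scene, vec in index.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Tiny example index: one fused vector per scene, built from what the model
# "sees" (frame description), "hears" (audio events) and "reads" (transcript).
index = {
    "ep01_scene07": fuse(embed("two people argue in a kitchen"),
                         embed("raised voices, a door slams"),
                         embed("I told you not to come back here")),
    "ep01_scene12": fuse(embed("car chase at night in the rain"),
                         embed("engine roar, police sirens"),
                         embed("we need to lose them before the bridge")),
}
print(search("heated domestic argument", index, top_k=1))

With the placeholder encoder the similarity scores are meaningless; the point of the sketch is the shape of the pipeline, in which “chatting” with a library reduces to embedding a prompt and comparing it against fused scene vectors.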
 
“Imagine having a private conversation with your content library about what is happening in a scene or episode, using natural language prompts,” said Appu Shaji, CEO and Chief Scientist at Mobius Labs. “Multimodal AI technology lets us combine what the AI sees, hears, and reads to create a more nuanced understanding of what is happening within the content. Once AI can summarise and understand what the content is, things like search and recommendation become infinitely more powerful.”
 
As an extension of Mobius Labs’ Visual DNA, the company’s AI-based metadata tagging solution, this new technology breaks new ground in how content can be described and indexed without any human involvement. In the past few years, AI solutions have begun to address search and recommendation challenges, but these solutions have required extensive development, customisation, and engineering effort. With the new multimodal solutions, the technology works ‘out of the box’ to cover a wide range of use cases.
 
Furthermore, to ensure that customers maintain full ownership of their data, these solutions offer a headless SDK that adheres to the principle of ‘bringing the code to the data, rather than bringing the data to the code’. This approach not only reduces expensive network communication but also incorporates privacy by design.
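The announcement does not publish the SDK’s interface, but the principle is simple to picture: the model is shipped to wherever the media already lives, and only compact metadata ever leaves that environment. The snippet below is a hypothetical sketch of that pattern under assumed names (index_library, tagger, catalogue.json, /mnt/archive); it is not the actual headless SDK.

from pathlib import Path
import json

def index_library(media_root: str, tagger, out_file: str = "catalogue.json") -> None:
    # Run a locally hosted tagging model over every video in place. The raw
    # footage never crosses the network; only the resulting metadata is saved.
    catalogue = {}
    for path in Path(media_root).rglob("*.mp4"):
        # `tagger` stands in for the locally deployed model: it takes a local
        # file path and returns tags/embeddings for that asset.
        catalogue[str(path)] = tagger(path)
    Path(out_file).write_text(json.dumps(catalogue, indent=2))

# Example call with an illustrative path and a stand-in tagger.
index_library("/mnt/archive", tagger=lambda p: ["placeholder-tag"])

Because only the catalogue file leaves the storage environment, network cost scales with the metadata rather than the footage, which is the practical upshot of the privacy-by-design approach described above.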
 
“Our R&D team are true pioneers as they bring the power of multimodal AI to the M&E industry,” said Jeremy Deaner, COO, Mobius Labs. As data volumes continue to grow exponentially, a key design tenet for the team is keeping the code efficient enough to bring the marginal cost of running Mobius Labs’ AI solution to near zero. “We have some fundamental R&D within the company which is able to make our models smaller and in some cases as much as 20 times more efficient than the competition,” said Shaji.
 
By enabling this new level of data awareness, Mobius Labs empowers media companies to go beyond content search and provide their customers with a new level of value. Curated recommendations can be made at scale, tailored to individual tastes and preferences, leading to significantly higher customer engagement. This is achieved without the need for a large data science team to build and iterate on these algorithms. Netflix’s and TikTok’s recommendation algorithms, for example, are among the greatest successes in the media industry; Mobius Labs’ solution can be easily integrated into existing systems to provide superior recommendation capabilities.
 
Visitors to IBC booth 5.G04 can get a demo of this breakthrough technology and see first-hand what it’s like to chat privately with their content library.
 
For additional information, please see the video demonstration here: https://drive.google.com/file/d/12XjwXVRY2ItK8kSUVDTAIRMUSTDzglKg/view
 
About Mobius Labs
Mobius Labs is a Berlin-based company that has developed a novel range of AI-powered computer vision technologies that are disrupting the norms across many different industries. Using the company’s software, end-users can generate new levels of customisation without the need for any prior AI knowledge, bringing true democratisation of advanced AI services. The software can be run on-premise, on edge devices or in the cloud, requiring minimal processing power and storage. Delivered as an SDK, the technology ensures that data is kept private at all times.
