Snapchat recently revealed more AI tools for its users. At its sixth annual Snap Partner Summit, the social media giant announced plans to bring an AI video tool to creator account holders.
The tool will let users create videos from text and image prompts. The company says it will watermark every AI-generated video so that other users can distinguish between real and artificially generated content.
The social media company described the new features in a press statement. Among the more intriguing tools revealed during the event is the AI Video tool. Snap AI Video will be exclusive to the platform's Creators; notably, users need a sizable audience, an active posting history on Stories and Spotlight, and a public profile to qualify as Creators.
The feature generates videos from text prompts and resembles a standard AI video generator. According to Snapchat, creators will soon also be able to produce videos from image prompts. The tool is currently in beta, available on the web to a limited number of creators.
The capability is powered by Snap's in-house foundational video models, a company spokesperson told TechCrunch. Once the technology is generally available, the company intends to use context cards and icons to tell users when a Snap was created with artificial intelligence. A distinct watermark will also remain visible when the content is downloaded or shared.
The spokesperson also told the publication that the video models underwent extensive testing and safety evaluations to ensure they do not produce harmful material.
Snapchat also unveiled a new AI Lens that makes users appear older than they actually are. For Snapchat+ subscribers, Snapchat Memories now supports AI captions and Lenses. Additionally, the company's native chatbot, My AI, is improving and can take on several new tasks.
Snapchat claims that, with the help of My AI, users can now interpret parking signs, translate menus into other languages, identify unique plants, and solve more complex problems. Lastly, the company is collaborating with OpenAI to give developers access to multimodal large language models (LLMs), enabling them to build additional Lenses that can recognize objects and offer more context.