Published on 2024-03-15
OpenAI has shared further details about Sora, its generative Artificial Intelligence (AI) tool capable of producing realistic video scenes from text instructions. The tool will be publicly launched later this year, though an exact date has not been specified.
The company introduced Sora a month ago, describing it as a model capable of creating "highly detailed" scenes with "complex" camera movement and multiple characters.
To create these videos, which can last up to 60 seconds, users only need to write a series of instructions detailing what the scene should include: the characters, the actions they perform, the setting, the weather, and the camera movements to be recreated.
At the time, OpenAI said the model was only available to the firm's red team, the group dedicated to probing the service in order to identify its flaws and assess its potential risks.
OpenAI's Chief Technology Officer, Mira Murati, has now revealed in an interview with The Wall Street Journal that OpenAI will not publicly launch Sora until later this year, as the development team is still working to detect vulnerabilities, biases, and other harmful outcomes.
Without going into detail on how the model is trained, she said it uses "publicly available data and licensed data," including content from Shutterstock, although she stated she did not know whether any of it comes from YouTube videos or from platforms such as Instagram and Facebook.
She also indicated that the tool follows Dall-E's path in that it does not allow the generation of images of public figures, and added that the company is working with artists to determine "barriers and limitations without hindering creativity".