OpenAI has officially launched Sora, its groundbreaking text-to-video AI model, marking a significant leap forward in generative AI technology.
Introduced as part of OpenAI’s 12-day “ship-mas” product release series, Sora is now available to ChatGPT subscribers in the U.S. and most other countries. The version now shipping is Sora Turbo, a significantly faster variant of the original model, cementing the company’s position at the forefront of AI-powered creativity.
Sora opens new doors for creators, offering tools that turn text into videos, animate static images, and remix existing footage into fresh, engaging content. The model caters to different user needs through tiered subscription plans. ChatGPT Plus users, for $20 per month, can generate up to 50 priority videos per month at resolutions up to 720p and durations capped at 5 seconds. For professionals seeking more, the $200 per month ChatGPT Pro plan significantly ups the ante: unlimited generations in a slower “relaxed” mode, 500 priority videos, 1080p resolution, and clips as long as 20 seconds. Pro subscribers also get watermark-free downloads and the ability to produce up to five videos simultaneously, making it a robust choice for serious content creators.
First teased in February, Sora has been eagerly anticipated by the tech and creative communities. Marques Brownlee, known as MKBHD, confirmed the model’s release earlier today, sharing his early impressions of its capabilities. OpenAI showcased several of Sora’s standout features during a live demonstration, including a new Explore Page that displays a feed of community-generated AI videos. The tool’s “storyboard” functionality enables users to create videos from a sequence of prompts, while a photo-to-video transformation feature breathes life into static images. Other highlights include the “remix” tool, which lets users refine or alter generated videos with additional text prompts, and a blending feature that seamlessly merges two scenes into one.
OpenAI has implemented robust safeguards to ensure Sora is used responsibly. All videos generated with the platform are visibly watermarked and embedded with C2PA metadata to indicate their AI origin. Before uploading media, users must agree to strict guidelines prohibiting content involving minors, explicit or violent material, or copyrighted works. OpenAI has made it clear that violations could lead to account suspension or bans. “We want to prevent illegal activity while encouraging creative expression,” said Rohan Sahai, Sora’s product lead, during the launch livestream. Acknowledging the challenges of balancing innovation and moderation, Sahai added, “We know this won’t be perfect from day one, but user feedback will help us improve.”
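Because the C2PA metadata travels with the file, anyone can inspect a downloaded clip’s provenance with standard tooling. The snippet below is a minimal sketch of such a check, assuming the open-source c2patool CLI is installed on the PATH and that invoking it with a file path prints the file’s manifest store as JSON (its documented default behavior); the file name sora_clip.mp4 is a hypothetical example, not an actual Sora download.

```python
import json
import subprocess

# Minimal sketch: inspect the C2PA content credentials embedded in a clip
# downloaded from Sora. Assumes the open-source `c2patool` CLI is installed
# and on the PATH; "sora_clip.mp4" is a hypothetical file name.
VIDEO_PATH = "sora_clip.mp4"

# By default, c2patool prints the file's C2PA manifest store as JSON.
result = subprocess.run(
    ["c2patool", VIDEO_PATH],
    capture_output=True,
    text=True,
    check=True,
)

# Parse and pretty-print the manifest store; the claim generator fields
# should identify the tool that signed the content credentials.
manifest_store = json.loads(result.stdout)
print(json.dumps(manifest_store, indent=2))
```

One caveat: metadata-based provenance can be stripped by re-encoding or screen capture, in which case a tool like this will simply report that no manifest was found, which is why OpenAI pairs the C2PA credentials with a visible watermark.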
While Sora is now available in many regions, OpenAI CEO Sam Altman noted that the rollout in Europe and the UK might take longer, citing regulatory hurdles. Non-subscribers can still explore Sora’s public feed to view AI-generated videos created by others, showcasing the platform’s potential even to casual users.
The launch, however, has not been without controversy. Just a week before Sora’s debut, a group of artists who claimed to have been part of the model’s alpha testing program accused OpenAI of exploiting their efforts for unpaid research and public relations. The protest has added a layer of tension to what would otherwise be a purely celebratory moment for the company.
Despite these challenges, Sora’s release represents a transformative moment in AI development. By turning simple text into dynamic video content, OpenAI is empowering creators with tools that make high-quality video production more accessible than ever. Sora’s powerful features, combined with its emphasis on responsible use, set a new standard for generative AI, paving the way for future innovations in the field.
Bijay Pokharel