OpenAI has unveiled the next generation of its image creation tool. Known as DALL-E 3, the new version is designed to better understand your text descriptions and create more precise, faithful images. On its new DALL-E 3 webpage, OpenAI didn't reveal much about the tool but did offer hints about how it aims to surpass its predecessor, DALL-E 2.
DALL-E 3 is designed to better grasp the nuances and details in your descriptions, thereby creating more accurate images, OpenAI said. Current AI-powered image generators sometimes ignore words in your descriptions, resulting in images that miss the mark. Based on the images displayed on the DALL-E 3 page, the new version seems capable of creating more accurate, detailed, and imaginative images.
Also: The best AI image generators of 2023
With the buzz around AI, image generators have become popular among individuals and businesses. Tools such as DALL-E 2, Microsoft's Bing Image Creator, Midjourney, Stable Diffusion, DreamStudio, and Craiyon all work more or less the same way: you describe the image you want in a prompt and optionally choose a style and other attributes. In response, the tool creates one or more images that hopefully match your request.
But like many of today’s AI bots, these image generators can be challenging to use. Typically, you have to phrase your prompt in just the right way. And even then, they don’t always interpret your requests correctly. Acknowledging that modern text-to-image systems force you to learn prompt engineering, OpenAI said that DALL-E 3 would be a leap forward in generating images that better adhere to your descriptions.
Built on ChatGPT, DALL-E 3 will be accessible through the ChatGPT platform. The benefit here is that you’ll be able to use ChatGPT to brainstorm your image ideas and prompts. You can then pose a request to create an image using a simple sentence or a more detailed paragraph.
Also: My two favorite ChatGPT Plus plugins and the remarkable things I can do with them
In the examples offered on the DALL-E 3 webpage, OpenAI showed how the new version would work.
One image was generated based on the description: “Tiny potato kings wearing majestic crowns, sitting on thrones, overseeing their vast potato kingdom filled with potato subjects and potato castles.”
A second was created from the description: “An illustration of an avocado sitting in a therapist’s chair, saying ‘I just feel so empty inside,’ with a pit-sized hole in its center. The therapist, a spoon, scribbles notes.”
And two images were generated from a description that read: “An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula.” One image used DALL-E 2, while the other used DALL-E 3.
OpenAI also stressed that it has limited DALL-E 3's ability to create violent, adult, or hateful content — as it has with previous versions. Safety improvements have been made in areas such as generating images of public figures and mitigating harmful biases. For example, the tool will decline prompts that ask for an image of a public figure by name.
Also: Who owns code, images, and narratives generated by AI?
AI-generated images can also pose a problem when used to depict a real person or event, misleading people into thinking that the image is real. To combat that issue, OpenAI said that it’s testing a new internal tool that can tell whether or not an image was created by DALL-E 3.
Currently in closed testing, DALL-E 3 is scheduled to roll out to ChatGPT Plus and Enterprise customers in early October.