openai.com/blog/image-gpt/

"We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting."
