RT @citnaj@twitter.com

So yeah I'm really happy about this new DeOldify model actually......

🐦🔗: twitter.com/citnaj/status/1275

RT @JanelleCShane@twitter.com

When an AI is trained on words, weird things can happen to the physical domain.

I asked the @OpenAI@twitter.com API about horses.
aiweirdness.com/post/621186154

🐦🔗: twitter.com/JanelleCShane/stat

arxiv.org/abs/1911.03584

"Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis."

openai.com/blog/image-gpt/

"We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting."

The Alternative Big O notation:

O(1) = O(yeah)
O(log n) = O(nice)
O(n) = O(k)
O(n²) = O(my)
O(2ⁿ) = O(no)
O(n!) = O(mg!)
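For scale, a quick sanity check of why the reactions escalate (my numbers, n = 20):

```python
# How fast each complexity class actually grows at n = 20.
import math

n = 20
growth = {
    "O(1)":     1,
    "O(log n)": math.log2(n),
    "O(n)":     n,
    "O(n^2)":   n ** 2,
    "O(2^n)":   2 ** n,
    "O(n!)":    math.factorial(n),
}
for name, value in growth.items():
    print(f"{name:10s} {value:,.0f}")
```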

sivv.io/article/5ecededf46cc9f

"After following study participants for six months after making their decision, Levitt found that those who had opted for the choice that involved making a change (as opposed to sticking with the status quo) were more satisfied with their decision and generally happier."

RT @OpenAI@twitter.com

Since 2012, the amount of compute for training to AlexNet-level performance on ImageNet has been decreasing exponentially — halving every 16 months, in total a 44x improvement.

By contrast, Moore's Law would only have yielded an 11x cost improvement: openai.com/blog/ai-and-efficie

🐦🔗: twitter.com/OpenAI/status/1257
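Both numbers are easy to sanity-check from the halving/doubling periods, assuming roughly seven years (84 months) between AlexNet in 2012 and the 2019 measurement point; the exact factor depends on the precise endpoints:

```python
# Back-of-the-envelope check of the figures in the post, assuming
# ~84 months between AlexNet (2012) and the 2019 measurement.
months = 84

# Algorithmic efficiency: required compute halves every 16 months,
# so the cost improvement is 2^(months / 16).
efficiency_gain = 2 ** (months / 16)   # ~38x, in the ballpark of the quoted 44x

# Moore's law: roughly a doubling every 24 months from hardware alone.
moores_law_gain = 2 ** (months / 24)   # ~11x, matching the post

print(f"{efficiency_gain:.0f}x vs {moores_law_gain:.0f}x")
```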
