youtube.com/watch?v=WVPE62Gk3E

"The quadratic resource requirements of the attention mechanism are the main roadblock in scaling up transformers to long sequences. This paper replaces the full quadratic attention mechanism by a combination of random attention, window attention, and global attention."

RT @ares_emu@twitter.com

"Imagine all the extraordinary things we'll be able to do once computers have literally hundreds of CPU cores!"
Developers:

🐦🔗: twitter.com/ares_emu/status/12

"Current prediction markets are so bad in so many different ways that it simply is not surprising for people to know better than them, and it often is not possible for people to make money from knowing better."

lesswrong.com/posts/c3iQryHA4t

RT @components_ai@twitter.com

Give GPT-3 a color scale and an emoji. Get back new scales based on color of the emoji. HOW DOES IT KNOW.

[The tweet shows two color scales as JavaScript arrays — `violet: [...]` and `🍑: [...]` — whose hex values rendered as color swatches and did not survive extraction.]

🐦🔗: twitter.com/components_ai/stat

RT @quocleix@twitter.com

A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.
arxiv.org/abs/2006.14536

🐦🔗: twitter.com/quocleix/status/12
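The intuition behind the result is easy to see numerically. A minimal sketch (my own toy functions, not the paper's setup): softplus tracks ReLU away from zero but, unlike ReLU, has no kink at the origin, which is what "smooth" buys you when adversarial training differentiates through the activation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softplus(x, beta=10.0):
    # smooth approximation of ReLU; approaches ReLU as beta grows,
    # but its derivative is continuous everywhere (no jump at 0)
    return np.log1p(np.exp(beta * x)) / beta

xs = np.linspace(-0.01, 0.01, 5)
# ReLU's gradient jumps from 0 to 1 at the origin; softplus
# transitions smoothly through the same region
print(relu(xs), softplus(xs))
```

(Note: for large positive inputs `np.exp(beta * x)` would overflow; a production implementation would branch, as `torch.nn.Softplus` does.)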

RT @citnaj@twitter.com

So yeah I'm really happy about this new DeOldify model actually......

🐦🔗: twitter.com/citnaj/status/1275

RT @JanelleCShane@twitter.com

When an AI is trained on words, weird things can happen to the physical domain.

I asked the @OpenAI@twitter.com API about horses.
aiweirdness.com/post/621186154

🐦🔗: twitter.com/JanelleCShane/stat

arxiv.org/abs/1911.03584

"Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis."

openai.com/blog/image-gpt/

"We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting."

The Alternative Big O notation:

O(1) = O(yeah)
O(log n) = O(nice)
O(n) = O(k)
O(n²) = O(my)
O(2ⁿ) = O(no)
O(n!) = O(mg!)

sivv.io/article/5ecededf46cc9f

"After following study participants for six months after making their decision, Levitt found that those who had opted for the choice that involved making a change (as opposed to sticking with the status quo) were more satisfied with their decision and generally happier."


Personal server of Lukáš Lánský