RT @OpenAI@twitter.com

We've used reinforcement learning from human feedback to train language models for summarization. The resulting models produce better summaries than 10x larger models trained only with supervised learning: openai.com/blog/learning-to-su

🐦🔗: twitter.com/OpenAI/status/1301
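
The reward-modelling half of that recipe boils down to a pairwise preference loss: score two candidate summaries and push the human-preferred one above the other. A minimal numpy sketch of that loss, with made-up scalar scores standing in for a real reward model's outputs:

import numpy as np

def preference_loss(r_preferred, r_other):
    # Pairwise preference loss for a reward model: -log sigmoid(r_preferred - r_other).
    # Minimising it pushes the human-preferred summary's score above the alternative's.
    return -np.log(1.0 / (1.0 + np.exp(-(r_preferred - r_other))))

# Toy scores for two summaries of the same post (hypothetical values).
r_chosen, r_rejected = 1.3, 0.4
print(preference_loss(r_chosen, r_rejected))   # small loss: the ranking is already right
print(preference_loss(r_rejected, r_chosen))   # larger loss: the ranking is wrong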

RT @tgraf__@twitter.com

4 years ago we started the @ciliumproject@twitter.com. Today, Google announced the availability of Cilium as the new GKE networking dataplane.

What a great honor for everyone who has contributed to the Cilium project and to eBPF overall.

The background story:
cilium.io/blog/2020/08/19/goog

🐦🔗: twitter.com/tgraf__/status/129

RT @drusepth@twitter.com

This is the best zero-shot prompt style I've found for generating short poetry snippets with GPT-3 (at default 0.7 temperature and 0/0 penalty rates).

You can also tweak the poem's tone by adjusting the student's adjectives (clever, depressed, etc.) and name (for mimicking style).

🐦🔗: twitter.com/drusepth/status/12
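
For reference, those settings map onto the completion endpoint roughly as below. This assumes the 2020-era openai Python package, and the prompt string is an invented stand-in for the one shown in the tweet's screenshot:

import openai  # pip install openai

openai.api_key = "sk-..."  # or rely on the OPENAI_API_KEY environment variable

# Hypothetical zero-shot prompt in the spirit of the tweet: a named student
# with a couple of adjectives, asked to write a short poem.
prompt = (
    "Emily, a clever and slightly melancholy student, wrote this short poem "
    "about autumn:\n\n"
)

response = openai.Completion.create(
    engine="davinci",        # the original GPT-3 base model
    prompt=prompt,
    max_tokens=64,
    temperature=0.7,         # the default temperature mentioned in the tweet
    frequency_penalty=0.0,   # "0/0 penalty rates"
    presence_penalty=0.0,
)
print(response.choices[0].text)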

youtube.com/watch?v=WVPE62Gk3E

"The quadratic resource requirements of the attention mechanism are the main roadblock in scaling up transformers to long sequences. This paper replaces the full quadratic attention mechanism by a combination of random attention, window attention, and global attention."

@fribbledom Automatic captioning is getting significantly better every year. In this context, the move makes total sense.

RT @ares_emu@twitter.com

"Imagine all the extraordinary things we'll be able to do once computers have literally hundreds of CPU cores!"
Developers:

🐦🔗: twitter.com/ares_emu/status/12

"Current prediction markets are so bad in so many different ways that it simply is not surprising for people to know better than them, and it often is not possible for people to make money from knowing better."

lesswrong.com/posts/c3iQryHA4t

RT @components_ai@twitter.com

Give GPT-3 a color scale and an emoji. Get back new scales based on color of the emoji. HOW DOES IT KNOW.

violet: [ … ],
🍑: [ …

🐦🔗: twitter.com/components_ai/stat
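
The trick is presumably plain few-shot prompting: show the model one complete named scale in a JSON-like format and leave the emoji's entry open for it to finish. A hypothetical Python sketch of such a prompt; the colour values below are invented, not the ones from the tweet:

# Hypothetical few-shot prompt: one complete scale, then the emoji whose
# scale GPT-3 should invent.  The hex values are made up for illustration.
prompt = """violet: [
  '#faf5ff', '#e9d8fd', '#d6bcfa', '#b794f4', '#9f7aea',
  '#805ad5', '#6b46c1', '#553c9a', '#44337a', '#322659'
],
🍑: ["""
print(prompt)  # feed this to a completion endpoint and let the model fill in the scale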

RT @quocleix@twitter.com

A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.
arxiv.org/abs/2006.14536

🐦🔗: twitter.com/quocleix/status/12
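
The intuition is about gradients: ReLU's derivative jumps from 0 to 1 at the origin, while a smooth replacement such as softplus (one of the activations the paper considers) has a continuous derivative everywhere, which gives adversarial training a better-behaved gradient. A quick numpy comparison, not taken from the paper:

import numpy as np

def relu_grad(x):
    # Derivative of ReLU: a hard step that jumps from 0 to 1 at x == 0.
    return (x > 0).astype(float)

def softplus_grad(x):
    # softplus(x) = log(1 + exp(x)); its derivative is the sigmoid, continuous everywhere.
    return 1.0 / (1.0 + np.exp(-x))

xs = np.array([-0.1, -0.01, 0.0, 0.01, 0.1])
print(relu_grad(xs))      # [0. 0. 0. 1. 1.]  -- abrupt change around 0
print(softplus_grad(xs))  # values near 0.5, varying smoothly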

RT @citnaj@twitter.com

So yeah I'm really happy about this new DeOldify model actually......

🐦🔗: twitter.com/citnaj/status/1275

RT @JanelleCShane@twitter.com

When an AI is trained on words, weird things can happen to the physical domain.

I asked the @OpenAI@twitter.com API about horses.
aiweirdness.com/post/621186154

🐦🔗: twitter.com/JanelleCShane/stat

arxiv.org/abs/1911.03584

"Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis."

openai.com/blog/image-gpt/

"We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting."

The Alternative Big O notation:

O(1) = O(yeah)
O(log n) = O(nice)
O(n) = O(k)
O(n²) = O(my)
O(2ⁿ) = O(no)
O(n!) = O(mg!)
