RT @chrismessina@twitter.com

This is insane. @nvidia@twitter.com just replaced video codecs with a neural network.

We'll all be controlling digital face puppets of ourselves on video calls in the future! 👹

/cc @lishali88@twitter.com @borthwick@twitter.com @MattHartman@twitter.com

🐦🔗: twitter.com/chrismessina/statu

RT @tgraf__@twitter.com

4 years ago we started the @ciliumproject@twitter.com. Today, Google announced the availability of Cilium as the new GKE networking dataplane.

What a great honor for everyone who has contributed to the Cilium project and to eBPF overall.

The background story:

🐦🔗: twitter.com/tgraf__/status/129

RT @drusepth@twitter.com

This is the best zero-shot prompt style I've found for generating short poetry snippets with GPT-3 (at default 0.7 temperature and 0/0 penalty rates).

You can also tweak the poem tone by adjusting the student's adjectives (clever, depressed, etc.) and name (for mimicking style).

🐦🔗: twitter.com/drusepth/status/12
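The sampling knobs the tweet mentions can be illustrated without the API itself. Below is a minimal sketch of how temperature and the two penalty parameters shape a token distribution; the penalty formula follows OpenAI's published description (`logit -= count * frequency_penalty + (count > 0) * presence_penalty`), and the specific logit values are made up for illustration.

```python
import math

def adjusted_probs(logits, counts, temperature=0.7,
                   frequency_penalty=0.0, presence_penalty=0.0):
    """Turn raw token logits into a sampling distribution.

    counts[i] is how often token i already appeared in the text;
    with both penalties at 0 (the tweet's "0/0 penalty rates"),
    repetition history has no effect and only temperature matters.
    """
    adjusted = [
        logit
        - count * frequency_penalty
        - (1.0 if count > 0 else 0.0) * presence_penalty
        for logit, count in zip(logits, counts)
    ]
    # Softmax with temperature: values below 1.0 sharpen the
    # distribution, values above 1.0 flatten it.
    exps = [math.exp(a / temperature) for a in adjusted]
    total = sum(exps)
    return [e / total for e in exps]
```

At temperature 0.7 the top token gets noticeably more probability mass than at 1.0, which is why the default feels "mostly coherent with some variety" for short poetry.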

RT @ares_emu@twitter.com

"Imagine all the extraordinary things we'll be able to do once computers have literally hundreds of CPU cores!"

🐦🔗: twitter.com/ares_emu/status/12

RT @components_ai@twitter.com

Give GPT-3 a color scale and an emoji. Get back new scales based on color of the emoji. HOW DOES IT KNOW.

violet: [
🍑: [

🐦🔗: twitter.com/components_ai/stat

RT @quocleix@twitter.com

A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.

🐦🔗: twitter.com/quocleix/status/12
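The tweet doesn't say which smooth functions were used; softplus is one standard smooth replacement for ReLU, so here is a sketch under that assumption of the property at stake: ReLU's gradient jumps at 0, while a smooth activation has a well-defined gradient everywhere, which matters for the gradient-based inner step of adversarial training.

```python
import math

def relu(x):
    return max(0.0, x)

def softplus(x):
    # log(1 + e^x): a smooth function that tracks relu away from 0.
    return math.log1p(math.exp(x))

def softplus_grad(x):
    # The derivative of softplus is the sigmoid: continuous everywhere,
    # unlike relu's gradient, which is 0 on one side of 0 and 1 on the other.
    return 1.0 / (1.0 + math.exp(-x))
```

Away from 0 the two functions are nearly identical, so the change mostly affects gradient quality rather than the forward pass.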

RT @citnaj@twitter.com

So yeah I'm really happy about this new DeOldify model actually......

🐦🔗: twitter.com/citnaj/status/1275

RT @JanelleCShane@twitter.com

When an AI is trained on words, weird things can happen to the physical domain.

I asked the @OpenAI@twitter.com API about horses.

🐦🔗: twitter.com/JanelleCShane/stat

RT @OpenAI@twitter.com

Since 2012, the amount of compute for training to AlexNet-level performance on ImageNet has been decreasing exponentially — halving every 16 months, in total a 44x improvement.

By contrast, Moore's Law would only have yielded an 11x cost improvement: openai.com/blog/ai-and-efficie

🐦🔗: twitter.com/OpenAI/status/1257
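The tweet's numbers can be sanity-checked with a couple of logarithms, assuming the roughly 7-year (84-month) window 2012–2019 that the linked blog post measures; the window length is an assumption here, not stated in the tweet.

```python
import math

# Assumed measurement window: 2012 to 2019, in months.
months = 84

# A 44x efficiency improvement over that window implies this halving period:
halving_period = months / math.log2(44)   # ~15.4 months, i.e. roughly 16

# An 11x improvement implies this doubling period:
doubling_period = months / math.log2(11)  # ~24.3 months, the classic Moore's-law pace
```

So "halving every 16 months" and the Moore's-law 11x baseline are mutually consistent over the same window.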

Personal server of Lukáš Lánský