RT @chrismessina@twitter.com
This is insane. @nvidia@twitter.com just replaced video codecs with a neural network.
We'll all be controlling digital face puppets of ourselves on video calls in the future! 👹
https://www.youtube.com/watch?v=NqmMnjJ6GEg
/cc @lishali88@twitter.com @borthwick@twitter.com @MattHartman@twitter.com #ThisDoesNotExist #AvatarLand #SyntheticMedia
🐦🔗: https://twitter.com/chrismessina/status/1313209403051442176
RT @tgraf__@twitter.com
4 years ago we started the @ciliumproject@twitter.com. Today, Google announced the availability of Cilium as the new GKE networking dataplane.
What a great honor for everyone who has contributed to the Cilium project and to eBPF overall.
The background story:
https://cilium.io/blog/2020/08/19/google-chooses-cilium-for-gke-networking
RT @drusepth@twitter.com
This is the best zero-shot prompt style I've found for generating short poetry snippets with GPT-3 (at default 0.7 temperature and 0/0 penalty rates).
You can also tweak the poem tone by adjusting the student's adjectives (clever, depressed, etc) and name (for mimicking style).
RT @EU_Eurostat@twitter.com
Euro area #RetailTrade +5.7% in June over May, +1.3% over June 2019 https://ec.europa.eu/eurostat/en/web/products-press-releases/-/4-05082020-AP
🐦🔗: https://twitter.com/EU_Eurostat/status/1290935579358760960
RT @ares_emu@twitter.com
"Imagine all the extraordinary things we'll be able to do once computers have literally hundreds of CPU cores!"
Developers:
RT @components_ai@twitter.com
Give GPT-3 a color scale and an emoji. Get back new scales based on color of the emoji. HOW DOES IT KNOW.
violet: [
'#2d1832',
'#502b5a',
'#753f83',
'#8e4c9e',
'#9f5bb0',
'#b683c3',
'#c9a2d2',
'#dbc1e1',
'#ebddee',
'#f7f1f8'
],
🍑: [
🐦🔗: https://twitter.com/components_ai/status/1282379087412174848
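The trick above is a few-shot completion prompt: show the model one named scale, then leave the next entry open. A minimal sketch of assembling such a prompt (the violet scale and the trailing `🍑: [` are from the tweet; the function name and formatting are assumed, and actually sending this to a model is left out):

```python
# Hedged sketch: build a few-shot prompt like the one in the tweet.
# The hex values come from the tweet's "violet" scale.
violet = ['#2d1832', '#502b5a', '#753f83', '#8e4c9e', '#9f5bb0',
          '#b683c3', '#c9a2d2', '#dbc1e1', '#ebddee', '#f7f1f8']

def scale_block(name, colors):
    """Format one named color scale the way the tweet shows it."""
    body = ',\n'.join(f"  '{c}'" for c in colors)
    return f"{name}: [\n{body}\n],"

# The model is asked to continue the text after the open bracket,
# inventing a scale that matches the emoji's color.
prompt = scale_block('violet', violet) + '\n\U0001F351: ['
```

The completion model then only has to continue the pattern, which is why a single example plus an emoji is enough context.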
RT @quocleix@twitter.com
A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.
http://arxiv.org/abs/2006.14536
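The intuition behind the result: adversarial training solves an inner maximization with gradients, and ReLU's gradient jumps at zero while a smooth activation's does not. A toy illustration in plain Python (softplus is used here as one smooth stand-in; the paper evaluates several smooth activations, and the function names below are my own):

```python
import math

def relu_grad(x):
    # ReLU's derivative jumps from 0 to 1 at x = 0 (not smooth).
    return 0.0 if x < 0 else 1.0

def softplus_grad(x):
    # softplus(x) = log(1 + e^x); its derivative is the sigmoid,
    # which varies continuously through x = 0.
    return 1.0 / (1.0 + math.exp(-x))

print(relu_grad(-1e-6), relu_grad(1e-6))   # 0.0 1.0  (discontinuous jump)
print(round(softplus_grad(-1e-6), 3),
      round(softplus_grad(1e-6), 3))       # 0.5 0.5  (smooth transition)
```

Near zero the ReLU gradient flips abruptly between 0 and 1, while the smooth activation's gradient changes continuously, which gives the attacker's (and hence the trainer's) gradient steps a better-behaved landscape.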
RT @citnaj@twitter.com
So yeah I'm really happy about this new DeOldify model actually......
RT @JanelleCShane@twitter.com
When an AI is trained on words, weird things can happen in the physical domain.
I asked the @OpenAI@twitter.com API about horses.
https://aiweirdness.com/post/621186154843324416/all-your-questions-answered
🐦🔗: https://twitter.com/JanelleCShane/status/1273296527662841856
RT @ak92501@twitter.com
Language Models are Few-Shot Learners
pdf: https://arxiv.org/pdf/2005.14165.pdf
abs: https://arxiv.org/abs/2005.14165
github: https://github.com/openai/gpt-3
RT @OpenAI@twitter.com
Since 2012, the amount of compute for training to AlexNet-level performance on ImageNet has been decreasing exponentially — halving every 16 months, in total a 44x improvement.
By contrast, Moore's Law would only have yielded an 11x cost improvement: https://openai.com/blog/ai-and-efficiency/
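The two figures in the tweet are easy to sanity-check with back-of-the-envelope arithmetic, assuming a span of roughly 2012 to 2019 (about 84 months) and the usual 24-month Moore's-law doubling period:

```python
import math

months = 84        # 2012 -> 2019, approximate span (assumption)
total_gain = 44.0  # claimed overall efficiency improvement

# Doubling time implied by a 44x gain over 84 months:
implied_doubling = months / math.log2(total_gain)

# Moore's law over the same span (density doubling every ~24 months):
moore_gain = 2 ** (months / 24)

print(round(implied_doubling, 1))  # 15.4 -> consistent with "halving every 16 months"
print(round(moore_gain, 1))        # 11.3 -> consistent with the quoted "11x"
```

Both numbers line up with the tweet's claims, which is the point of the comparison: algorithmic efficiency improved roughly four times faster than hardware alone would have.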
I'm a Prague-based software engineer. This is my personal account.