"SRE is seen as a high modernist project, intent on scientifically managing their systems, all techne and no metis; all SLOs and Kubernetes and no systems knowledge and craft. That view is not entirely wrong."
RT @terrajobst@twitter.com
Check out the networking improvements we made in .NET 5. https://devblogs.microsoft.com/dotnet/net-5-new-networking-improvements/
🐦🔗: https://twitter.com/terrajobst/status/1348693939004399616
RT @dotnet@twitter.com
Announcing .NET 5.0 https://devblogs.microsoft.com/dotnet/announcing-net-5-0/
https://moultano.wordpress.com/2020/10/18/why-deep-learning-works-even-though-it-shouldnt/
"In the dimensions we live in, we’re used to the idea that some things are closer together than other things, so we mentally think of concepts like “regions” and think about things like bad regions and good regions for parameters. But high dimensional spaces are extremely well connected. You can get to anywhere with a short jump from anywhere else. There are no bad places to start."
RT @chrismessina@twitter.com
This is insane. @nvidia@twitter.com just replaced video codecs with a neural network.
We'll all be controlling digital face puppets of ourselves on video calls in the future! 👹
https://www.youtube.com/watch?v=NqmMnjJ6GEg
/cc @lishali88@twitter.com @borthwick@twitter.com @MattHartman@twitter.com #ThisDoesNotExist #AvatarLand #SyntheticMedia
🐦🔗: https://twitter.com/chrismessina/status/1313209403051442176
RT @ctbeiser@twitter.com
thinking about how sqlite is written by a team of like three people who have been working on the same thing for 20 years, refuse outside contributions, and release all their code as public domain
it's like if there were a tiny monastery producing all the world's steel
RT @OpenAI@twitter.com
We've used reinforcement learning from human feedback to train language models for summarization. The resulting models produce better summaries than 10x larger models trained only with supervised learning: https://openai.com/blog/learning-to-summarize-with-human-feedback/
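One core ingredient of that approach is a reward model trained on human comparisons of summary pairs. A minimal numpy sketch of the pairwise preference loss (Bradley-Terry style) used for that step; the full pipeline then fine-tunes the policy with PPO against the learned reward:

import numpy as np

def reward_model_loss(r_chosen, r_rejected):
    # Push the reward of the human-preferred summary above the rejected one:
    # loss = -log sigmoid(r_chosen - r_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# Toy scalar scores from a hypothetical reward model for two candidate summaries:
print(reward_model_loss(1.2, 0.3))  # small loss: ranking already correct
print(reward_model_loss(0.3, 1.2))  # large loss: ranking inverted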
RT @tgraf__@twitter.com
4 years ago we started the @ciliumproject@twitter.com. Today, Google announced the availability of Cilium as the new GKE networking dataplane.
What a great honor for everyone who has contributed to the Cilium project and to eBPF overall.
The background story:
https://cilium.io/blog/2020/08/19/google-chooses-cilium-for-gke-networking
RT @drusepth@twitter.com
This is the best zero-shot prompt style I've found for generating short poetry snippets with GPT-3 (at default 0.7 temperature and 0/0 penalty rates).
You can also tweak the poem tone by adjusting the student's adjectives (clever, depressed, etc) and name (for mimicking style).
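Since the tweet's actual prompt lives in a screenshot, the wording below is a hypothetical reconstruction; only the sampling parameters (temperature 0.7, zero frequency/presence penalties) come from the tweet. A sketch against the 2020-era openai Completion API:

import openai  # v0.x-era client

openai.api_key = "sk-..."  # your API key

# Hypothetical prompt in the described style: a named student with
# tunable adjectives, writing a short poem.
prompt = (
    "The following is a short poem written by Emily, a clever, "
    "melancholy student, about the winter sea:\n\n"
)

resp = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.7,        # default temperature, as in the tweet
    frequency_penalty=0.0,  # "0/0 penalty rates"
    presence_penalty=0.0,
    stop=["\n\n"],
)
print(resp.choices[0].text.strip())

Swapping the adjectives or the student's name steers the tone and style, as the tweet notes.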
RT @EU_Eurostat@twitter.com
Euro area #RetailTrade +5.7% in June over May, +1.3% over June 2019 https://ec.europa.eu/eurostat/en/web/products-press-releases/-/4-05082020-AP
🐦🔗: https://twitter.com/EU_Eurostat/status/1290935579358760960
https://www.youtube.com/watch?v=WVPE62Gk3EM
"The quadratic resource requirements of the attention mechanism are the main roadblock in scaling up transformers to long sequences. This paper replaces the full quadratic attention mechanism by a combination of random attention, window attention, and global attention."
RT @ares_emu@twitter.com
"Imagine all the extraordinary things we'll be able to do once computers have literally hundreds of CPU cores!"
Developers:
"Current prediction markets are so bad in so many different ways that it simply is not surprising for people to know better than them, and it often is not possible for people to make money from knowing better."
https://www.lesswrong.com/posts/c3iQryHA4tnAvPZEv/limits-of-prediction-markets
RT @components_ai@twitter.com
Give GPT-3 a color scale and an emoji. Get back new scales based on color of the emoji. HOW DOES IT KNOW.
violet: [
'#2d1832',
'#502b5a',
'#753f83',
'#8e4c9e',
'#9f5bb0',
'#b683c3',
'#c9a2d2',
'#dbc1e1',
'#ebddee',
'#f7f1f8'
],
🍑: [
🐦🔗: https://twitter.com/components_ai/status/1282379087412174848
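The tweet's screenshots suggest a few-shot prompt: one named scale as context, then a new entry keyed by an emoji, left open for the model to complete. A hypothetical reconstruction of that prompt assembly (the prompt wording is my guess; the hex values are from the tweet):

violet_scale = [
    '#2d1832', '#502b5a', '#753f83', '#8e4c9e', '#9f5bb0',
    '#b683c3', '#c9a2d2', '#dbc1e1', '#ebddee', '#f7f1f8',
]
prompt = "violet: [\n"
prompt += "".join(f"  '{c}',\n" for c in violet_scale)
prompt += "],\n🍑: [\n"  # left open: GPT-3 continues with peach-toned hex values

# Send `prompt` to the completions endpoint as in the poetry sketch above.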
RT @quocleix@twitter.com
A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.
http://arxiv.org/abs/2006.14536
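The gist is that adversarial training differentiates through the inner attack step, where ReLU's kinked gradient hurts. A small numpy sketch contrasting ReLU's step gradient with the smooth gradient of SiLU/Swish, one of the smooth activations studied:

import numpy as np

x = np.linspace(-3, 3, 601)

relu_grad = (x > 0).astype(float)  # step function, discontinuous at x = 0

sigmoid = 1 / (1 + np.exp(-x))
silu_grad = sigmoid + x * sigmoid * (1 - sigmoid)  # d/dx of x*sigmoid(x)

print(relu_grad[299:302])  # jumps from 0 to 1 across x = 0
print(silu_grad[299:302])  # passes smoothly through ~0.5

A continuous gradient makes the inner maximization (crafting the adversarial example) better behaved, which is where the robustness gains come from.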
RT @citnaj@twitter.com
So yeah I'm really happy about this new DeOldify model actually......
RT @tylercowen@twitter.com
I am Scott Alexander.
🐦🔗: https://twitter.com/tylercowen/status/1275409192149450752
RT @JanelleCShane@twitter.com
When an AI is trained on words, weird things can happen to the physical domain.
I asked the @OpenAI@twitter.com API about horses.
https://aiweirdness.com/post/621186154843324416/all-your-questions-answered
🐦🔗: https://twitter.com/JanelleCShane/status/1273296527662841856
I'm a Prague-based software engineer. This is my personal account.