When an AI is trained only on words, weird things can happen when it reasons about the physical domain.
I asked the @OpenAI API about horses.
"Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis."
"We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting."
Variadic tuple types are coming to TypeScript 4.0. #TypeScript https://github.com/microsoft/TypeScript/pull/39094
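The core of that PR, sketched: spread elements in tuple types can now be generic, so a typed concat preserves each element's type and position. This mirrors the pattern shown in the PR; the names here are my own.

```ts
// With variadic tuple types, the result type [...T, ...U] is computed from
// the argument tuples, so nothing widens to a plain array.
function concat<T extends readonly unknown[], U extends readonly unknown[]>(
  arr1: T,
  arr2: U
): [...T, ...U] {
  return [...arr1, ...arr2];
}

const mixed = concat([1, "two"] as const, [true] as const);
// mixed: [1, "two", true] -- element types and order preserved at compile time
```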
The rise of embarrassingly parallel serverless compute
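"Embarrassingly parallel" means each work item is independent of the others, so fan-out is just a map. A sketch, assuming a hypothetical per-item HTTP-triggered serverless function at `endpoint`:

```ts
// Fan independent work items out to a serverless function and gather the
// results. No coordination between items is needed, which is what makes the
// workload embarrassingly parallel. The endpoint is hypothetical.
async function fanOut<T, R>(items: T[], endpoint: string): Promise<R[]> {
  return Promise.all(
    items.map(async (item) => {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(item),
      });
      return (await res.json()) as R;
    })
  );
}
```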
Announcing .NET 5.0 Preview 5 https://devblogs.microsoft.com/dotnet/announcing-net-5-0-preview-5/
"After following study participants for six months after making their decision, Levitt found that those who had opted for the choice that involved making a change (as opposed to sticking with the status quo) were more satisfied with their decision and generally happier."
Since 2012, the compute needed to train a model to AlexNet-level performance on ImageNet has been falling exponentially, halving roughly every 16 months, for a total 44x improvement.
By contrast, Moore's Law would only have yielded an 11x cost improvement: https://openai.com/blog/ai-and-efficiency/
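A quick sanity check of those numbers, assuming the 2012-2019 measurement window from the linked post:

```ts
// 44x less compute over ~7 years implies roughly one halving per 16 months;
// Moore's law (doubling every 2 years) over the same span gives ~11x.
const months = 7 * 12;           // 2012 -> 2019
const halvings = Math.log2(44);  // ~5.46 halvings for a 44x improvement
console.log(months / halvings);  // ~15.4 months per halving, i.e. ~16
console.log(2 ** (months / 24)); // ~11.3, the Moore's-law baseline
```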
Introducing #csharp Source Generators: try them out with the latest #dotnet 5 preview! https://devblogs.microsoft.com/dotnet/introducing-c-source-generators/