
RT @OpenAI@twitter.com

Since 2012, the amount of compute needed to train a model to AlexNet-level performance on ImageNet has been decreasing exponentially, halving every 16 months: a 44x improvement in total.

By contrast, Moore's Law would only have yielded an 11x cost improvement: openai.com/blog/ai-and-efficie

🐦🔗: twitter.com/OpenAI/status/1257
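For context, the two quoted figures reconcile with simple exponent arithmetic. Here is a minimal sketch, assuming a ~7-year span (mid-2012 AlexNet to 2019) and a 24-month Moore's Law doubling period; both assumptions are mine, not stated in the post:

```python
import math

span_months = 84      # ~7 years, mid-2012 to 2019 (assumed)
measured_gain = 44    # 44x less compute for AlexNet-level accuracy (quoted)

# Halving period implied by the measured gain: 44 = 2 ** (span / period)
halving_period = span_months / math.log2(measured_gain)
print(f"implied halving period: {halving_period:.1f} months")  # ~15.4, i.e. the quoted ~16

# Moore's Law baseline over the same span: compute per dollar doubles every ~24 months
moore_gain = 2 ** (span_months / 24)
print(f"Moore's Law baseline: {moore_gain:.1f}x")  # ~11.3x, matching the quoted 11x
```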

slatestarcodex.com/2020/01/22/

"My Bay Area friends treat people as naturally motivated, and assume that if someone acts unmotivated, it’s because they’ve spent so long being taught to suppress their own desires that they’ve lost touch with innate enthusiasm. Personified China treats people as naturally unmotivated, and assumes that if someone acts unmotivated, it’s because they haven’t been trained to pursue a goal determinedly without getting blown around by every passing whim."

florentcrivello.com/index.php/

"love of order is above all else about appearances. Streets arranged in grids, people waiting in clean lines, cars running at the same speed… But everything that looks good doesn’t necessarily work well. In fact, those two traits are opposed more often than not: efficiency tends to look messy, and good looks tend to be inefficient."

programming.lansky.name/covera

"The idea of this year’s experiment is simple: using per-test coverage data flowing from CI to run only that part of your test suite that might have been affected by changes you did."

