I finally made a coffee and decided to read Neural Ordinary Differential Equations (Chen et al. 2018), which reformulates neural networks as continuous-time differential equations that can be trained using any ODE solver. This results in better memory management, fewer parameters, and better model reconstruction under certain circumstances. What a refreshing point of view, with really interesting possibilities. https://arxiv.org/abs/1806.07366
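To make sure I understood the core idea, I scribbled a toy sketch (entirely mine, not the paper's code, and torch-free on purpose): treat the hidden state as h(t) with dynamics dh/dt = f(h, t, θ), so "depth" becomes the integration interval of an ODE solver, here a naive fixed-step Euler loop:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # toy parameters theta (made up)
b = np.zeros(4)

def f(h, t):
    """Dynamics dh/dt = f(h, t, theta); plays the role of a residual block."""
    return np.tanh(h @ W + b)

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler integration of dh/dt = f(h, t) from t0 to t1."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt)  # each step ~ one very thin "layer"
    return h

h0 = rng.normal(size=4)   # input features
h1 = odeint_euler(f, h0)  # output = hidden state at t = 1
print(h1)
```

The memory win in the paper comes from the adjoint sensitivity method (backprop by solving a second ODE backwards in time instead of storing activations), which my little sketch obviously skips.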
Renfe, the Spanish railway company, just released some files as open data at http://data.renfe.es/dataset . Not only are most of them barely useful (lat-lon coordinates of the stations), but there are also reasons to believe that they don't really know how this works: http://data.renfe.es/dataset/atendo-por-tipo-de-discapacidad/resource/4cd0cba4-b101-4e60-a0e5-0aae9c48bf16
Just saw Michael Atiyah's proof of the Riemann Hypothesis. It seems that it is not a definitive proof (and I've already seen some experts claiming that it does not hold), but it is *bold* just to try to prove it in barely three paragraphs.
This image represents the numbers up to 1M, arranged using Leland McInnes's UMAP dimensionality reduction algorithm "so that numbers with similar prime factorisations are closer together than those with dissimilar factorisations". Look at all those beautiful patterns!
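I couldn't resist guessing at the recipe. This is only my reconstruction (the indicator-vector encoding and the cosine metric are my assumptions, not the original author's published code): give each integer a sparse vector over its prime divisors and let umap-learn lay them out.

```python
import numpy as np
from scipy.sparse import lil_matrix
import umap  # pip install umap-learn

N = 10_000  # the real image goes up to 1M; kept small so it runs quickly

def smallest_prime_factor(n):
    """Sieve: spf[k] = smallest prime factor of k, for every k <= n."""
    spf = np.arange(n + 1)
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:  # p is prime
            spf[p * p::p] = np.minimum(spf[p * p::p], p)
    return spf

spf = smallest_prime_factor(N)
X = lil_matrix((N - 1, N + 1))  # row n-2 <-> integer n; columns index primes
for n in range(2, N + 1):
    m = n
    while m > 1:  # mark every distinct prime divisor of n
        X[n - 2, spf[m]] = 1
        m //= spf[m]

emb = umap.UMAP(metric="cosine").fit_transform(X.tocsr())
print(emb.shape)  # (N - 1, 2): one 2-D point per integer, ready to scatter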
Also relevant: have you seen me brag about my super awesome Magit stickers? 😁
I think that this Poorly Drawn Lines comic strip is my spirit animal. http://www.poorlydrawnlines.com/comic/an-idea/
Junior data scientist, music nerd, proficient in xkcd comics and other pop culture references.