@AlanZucconi@twitter.com It’s in a binder that I need to find, will do ASAP for you. Another gem in there was
@SwiftOnSecurity@twitter.com This is from a 1979 presentation. We are v slow learners, it seems.
S04E02: @firstname.lastname@example.org and the autonomous school bus.
Graph of the relations between computer scientists in France, based on PhD committee invitations (advisor–committee member links) and the associated "communities".
To zoom in and navigate it:
S09E21: Apu is dropped out.
S04E17: Homer visits Mr. Burns' GPT-3 implementation.
S04E19: Homer's RNN shenanigans.
S22E07: Neuron vs. Neuron.
Funnily enough, when you do this, you also don't need an SVM (à la InterFaceGAN) to estimate latent directions. Simple vector arithmetic suffices, i.e., compute the difference of the class centroids.
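A minimal sketch of the centroid-difference idea above, on synthetic data (the latent dimensionality, sample counts, and planted direction are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 512-d latent codes with a known binary attribute.
latents = rng.normal(size=(4000, 512))
attr = rng.integers(0, 2, size=4000)            # e.g. "smiling" yes/no
# Plant a ground-truth direction so the estimate has something to recover.
true_dir = rng.normal(size=512)
true_dir /= np.linalg.norm(true_dir)
latents[attr == 1] += 2.0 * true_dir

# Latent direction as the difference of class centroids (no SVM needed).
direction = latents[attr == 1].mean(axis=0) - latents[attr == 0].mean(axis=0)
direction /= np.linalg.norm(direction)

cosine = float(direction @ true_dir)
print(cosine)  # close to 1 when the classes are well separated
```

With enough samples the centroid difference aligns with the planted direction; an SVM normal would give a similar axis but needs a full training loop.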
The first preprint of our PhD student Perla Doubinsky, with M. Crucianu and H. Le Borgne.
We show that you don't need weird tricks to disentangle latent directions in GANs. If you know the attributes (or can estimate them), balancing the contingency matrix is enough.
Looking for semantic directions in your GAN latent space? Do you fear entanglement because your dataset is biased and your attributes correlated?
Well, don't: you just have to resample. 😉
⚖️Multi-Attribute Balanced Sampling for Disentangled GAN Controls
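A sketch of what "balancing the contingency matrix" means for two binary attributes: resample so each cell of the 2×2 table is equally represented, which decorrelates the attributes. The dataset and the 80% correlation level are made-up illustrations, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical biased dataset: attribute b correlates with attribute a.
a = rng.integers(0, 2, size=10000)
b = np.where(rng.random(10000) < 0.8, a, 1 - a)   # ~80% agreement

# Balance the 2x2 contingency table by sampling each cell equally.
cells = [np.flatnonzero((a == i) & (b == j)) for i in (0, 1) for j in (0, 1)]
n = min(len(c) for c in cells)
idx = np.concatenate([rng.choice(c, size=n, replace=False) for c in cells])

# After resampling, the attributes are (exactly) decorrelated.
corr = abs(np.corrcoef(a[idx], b[idx])[0, 1])
print(corr)  # ~0
```

Training on the resampled indices means a direction learned for one attribute no longer drags the correlated one along with it.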
Deep learner fitting a dataset, however the latent space is too small (2021).
Reminder: the deadline to submit your file for the qualification is today at 4 p.m.! #ESR https://twitter.com/alexandra_gros/status/1457992196444000261
S29E01: Moe and the RTX 3000 series that was scalped.
Code has been released for ROADMAP, the smoothly differentiable and decomposable AP approximation for all things deep and ranking.
Give it a try and send Elias your feedback! 😉
Do you like image retrieval? Do you wish there were better differentiable approximations of average precision for training your models?
Well, say hello to ROADMAP 👋: https://arxiv.org/abs/2110.01445
Accepted to NeurIPS 2021. 🎀
First paper of our grad student Elias Ramzi!
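To give a flavor of what a differentiable AP surrogate looks like, here is a minimal numpy sketch of the general sigmoid-relaxation idea (replacing the hard rank comparison with a sigmoid). This is a generic smooth-AP sketch under my own assumptions, not ROADMAP's actual loss; see the paper and code for the real thing:

```python
import numpy as np

def smooth_ap(scores, labels, tau=0.01):
    """Sigmoid-relaxed average precision.

    Replaces the non-differentiable indicator "item j outranks item i"
    with sigmoid((s_j - s_i) / tau), making ranks, and hence AP, smooth.
    """
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=bool)
    diff = s[None, :] - s[:, None]                # diff[i, j] = s_j - s_i
    relaxed = 1.0 / (1.0 + np.exp(-diff / tau))   # smooth "j outranks i"
    np.fill_diagonal(relaxed, 0.0)                # an item doesn't outrank itself
    rank_all = 1.0 + relaxed.sum(axis=1)          # relaxed rank of i overall
    rank_pos = 1.0 + relaxed[:, y].sum(axis=1)    # relaxed rank among positives
    return float((rank_pos[y] / rank_all[y]).mean())

# A perfect ranking (all positives scored above all negatives) gives AP ~1.
ap = smooth_ap([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
print(ap)
```

As tau shrinks, the sigmoid approaches the hard step and the value approaches exact AP, at the cost of vanishing gradients; handling that trade-off (and the decomposability over batches) is where methods like ROADMAP come in.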
S11E14: Lisa, in this house we obey the laws of c̶o̶n̶v̶o̶l̶u̶t̶i̶o̶n̶s̶ t̶r̶a̶n̶s̶f̶o̶r̶m̶e̶r̶s̶ patches!
I'm a neuron trainer. Assistant Prof. in Computer Science @cedric_lab @LeCnam. Deep learning, computer vision, data fusion, Python.