and a little interface for it. this tries to spell words from their phonetic information (using a sequence-to-sequence neural network). the temperature parameter controls how the output probabilities are distributed: at low temperatures, only the most likely characters are generated according to the model; at higher temperatures, any character might be generated
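(not the actual project code, but temperature sampling boils down to something like this numpy sketch; the function name and logits are illustrative:)

```python
import numpy as np

def sample_char(logits, temperature=1.0, rng=None):
    """Sample a character index from logits rescaled by temperature.
    low temperature -> distribution sharpens toward the most likely
    character; high temperature -> flattens toward uniform."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

dividing the logits by a small temperature before the softmax makes the biggest logit dominate; dividing by a large one squashes the differences, so rarer characters get picked more often.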
I need to stop playing with this, I have other stuff to do geez
still at work on this english nonsense word vae. here are some nonsense words sampled from the latent space of the latest trained model...
these are generated by feeding the decoder with normally-distributed random numbers. pretty happy with how they all seem like jabberwockian-yet-plausible english words
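(the sampling step itself is tiny; a sketch, assuming a 128-dim latent to match the (1, 1, 128) vectors mentioned later, with `decode` standing in for the trained decoder:)

```python
import numpy as np

LATENT_DIM = 128  # assumed; matches the (1, 1, 128) latent vectors
rng = np.random.default_rng(1234)

def sample_latents(n, dim=LATENT_DIM, rng=rng):
    """Draw n latent vectors from the standard normal prior, which is
    the distribution the VAE's KL term pushes the encoder toward."""
    return rng.normal(size=(n, dim))

# each row would then go through the trained decoder, e.g.:
# words = [decode(z) for z in sample_latents(10)]
```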
by contrast, results of feeding normally-distributed random numbers into the decoder of the RNN trained without the VAE:
not as good! which is encouraging, since it shows that the VAE model does actually have a "smoother" space than the non-VAE model.
(I have to admit that when I started this project I was like, "why do you even need a variational autoencoder? if just plugging random vectors into the decoder was good enough for jesus, it's good enough for me," but there really is something magical and satisfying about being able to get more-or-less plausible generated results for basically any randomly sampled point in the distribution)
progress: at 50 epochs, even w/KL annealing, 32 dims is not enough for the VAE latent vector to represent much of anything. this leads to reconstructions that are probably just the orthography model doing its best with next-to-noise, but sometimes amusing, e.g.
cart → puach
liotta → pinterajan
intellectually → aching
capella → pellaka
photometer → augh
sympathizer → disteghway
butrick → jorserich
botha's → szine
clayman → tsantiersche
sparkles → trenlew
calamity → muliss
thermoplastic → tphare
apparently the trick to training a VAE w/annealing is to *never* let the KL loss go below the reconstruction loss. otherwise you get beautifully distributed, wonderfully plausible reconstructions that have almost nothing to do with your training data, i.e., "allison" becomes
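(that heuristic is easy to sketch as a gated annealing schedule; this is my paraphrase, not the post's actual training code, and the increment/cap values are illustrative:)

```python
class GatedKLAnnealer:
    """Ramp up the KL weight each training step, but only while the KL
    loss is still above the reconstruction loss (the heuristic above).
    increment and max_weight are illustrative, not the post's values."""
    def __init__(self, increment=1e-4, max_weight=1.0):
        self.weight = 0.0
        self.increment = increment
        self.max_weight = max_weight

    def step(self, recon_loss, kl_loss):
        if kl_loss > recon_loss:  # safe to keep tightening toward the prior
            self.weight = min(self.max_weight, self.weight + self.increment)
        return self.weight

# the total loss each step would then be, roughly:
# loss = recon_loss + annealer.step(recon_loss, kl_loss) * kl_loss
```

the gate pauses the ramp whenever KL dips below reconstruction, which is exactly the failure mode described above: a collapsed posterior that samples beautifully but ignores the input.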
using the phonetic VAE to interpolate between US state names in a grid
visualizing the vector in the latent phonetic space while interpolating between "abacus" and "mastodon." (this is after inferring the latent vectors via orthography->phoneme features->VAE). I just arbitrarily reshaped the vectors from (1, 1, 128) to (8, 16), so the 2d patterns are arbitrary. still interesting to see what it's actually learning!
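(the interpolation and the reshape are both one-liners; a minimal numpy sketch, with `lerp_grid` as a made-up name, not the project's code:)

```python
import numpy as np

def lerp_grid(z_a, z_b, steps=8):
    """Linear interpolation between two latent vectors, one row per step;
    each row would be decoded back into a word."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_a + t * z_b for t in ts])

# the (1, 1, 128) -> (8, 16) reshape used for the visualization is
# arbitrary, just a way to view the vector as a little 2d image:
z = np.random.default_rng(0).normal(size=(1, 1, 128))
img = z.reshape(8, 16)
```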
generating random magic words by adding progressively more noise to "abracadabra"
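(a sketch of the noise-ramp idea, assuming "abracadabra" has already been encoded to a latent vector z; the function name and sigmas are illustrative:)

```python
import numpy as np

def noisy_latents(z, sigmas, rng=None):
    """Copies of latent vector z with progressively more gaussian noise,
    one copy per sigma; decoding each copy yields the 'magic words'."""
    rng = rng or np.random.default_rng()
    return [z + sigma * rng.normal(size=z.shape) for sigma in sigmas]
```

with sigma = 0 (or very small) the decoder should just give back something close to "abracadabra", which also explains why one of the generated rows can match the original word.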
@aparrish terich sounds like a nice place to live
@aparrish some of these rows and columns make good nonsense poetry
@kragen that's the idea!
I would not mind visiting califania
@aparrish brb, setting a fantasy novel in Oache-Gnang
@aparrish Have you tried doing t-sne or some other kind of clustering on the dimensions?
@mewo2 i haven't but i probably should! i've just been going by my gut and the kl loss of the vae model to tell me that it has a normal(ish) distribution but it would be nice to have some visual evidence
@aparrish The last is what conjures the nayan cat.
@aparrish aardvark b'daardvark
so THAT'S what my neighbour's yappy dog is doing
> generating random magic words by adding progressively more noise to "abracadabra"
You didn't *quite* end up with "Avada Kedavra", but pretty close!
until, finally, somehow, mysteriously
it turned out
that Fred Rogers was right all along:
Stop taking the above if you experience any adverse reactions...
it's the circle of life
and it moves us all
@aparrish Ablamabalabtab is so good
@aparrish the third one is just "abracadabra" though! Did the noise randomly all turn out close to zero on that one?
she's my baby
i don't mean maybe
@aparrish Are you looking for reasons to have States named after you? Parishonde. Parrison.
@aparrish tag your location I’m in Talifenia
in the middle is -- of course -- 'Marican
@aparrish I've always wanted to visit Pariss, and now I know where it is.
@aparrish 1) i love this 2) do you have any idea why there's exactly one fake state name with a hyphen in that chart? It seems like if it was an option, there should be more, but
@Satsuma hyphens are in the training data vocabulary but don't happen very frequently. I think it was just happenstance in this case that the spelling model picked that character in that position
@aparrish Is california missing its second 'i' due to something about the spelling model?
@JoshGrams the VAE doesn't perfectly encode phonetics, and the spelling model is doing its best guess with the results—plus the spelling model has a bit of randomness built into the inference process. so things like that are kind of an expected part of the process in this case!
@aparrish Ah, got it. I saw some other -nia words so I thought it might be *something* like that. Thanks!