
Allison Parrish

in a sense photography was just the culmination of a particular school of post-renaissance photo-realistic art in europe (which already used photography-like techniques like camera obscura etc), and the focus of "modern" art on materiality was a reaction to *that*, not a priori pioneering in non-representational art. because of course even a cursory glance at visual art throughout the world reveals a focus on materiality (from the beginning, even back to, like, lascaux)

it seems like the distinction between "representational" (or imitative) and "non-representational" art that arose after the invention of photography tended to lump all art from before this distinction arose into the "representational" category (with "new" art being distinguished, and seen as virtuous, as "non-representational"). but of course before this distinction was drawn there was no such taxonomy and artists wouldn't have conceptualized themselves as belonging to either category

printing the string of the entire syntactic subtree headed by each token in a sentence gives a lovely repetitive effect
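the effect described above can be sketched without a full NLP library, using a hand-built toy dependency tree (the sentence, head indices, and `subtree` helper here are all hypothetical illustrations — a real parser like spaCy would expose the same structure through its own API):

```python
# toy dependency parse: each token stores the index of its head
# (-1 marks the root), the way a dependency parser would annotate it
sentence = ["she", "ate", "the", "big", "apple"]
heads = [1, -1, 4, 4, 1]  # "ate" is the root; "apple" heads "the" and "big"

def subtree(i):
    """return the indices of the subtree headed by token i, in sentence order."""
    members = {i}
    changed = True
    while changed:  # keep absorbing tokens whose head is already in the set
        changed = False
        for j, h in enumerate(heads):
            if h in members and j not in members:
                members.add(j)
                changed = True
    return sorted(members)

# print the full string of the subtree headed by each token
for i in range(len(sentence)):
    print(" ".join(sentence[j] for j in subtree(i)))
```

each content word's phrase gets reprinted inside every larger phrase that contains it, which is where the repetition comes from.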

turning the kl weight all the way up helped: reconstructions now somewhat reliably have the general shape and some of the same letters as the original words (though not always in the right order). encouraging though!
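the "kl weight" is the coefficient on the KL-divergence term in a VAE-style objective like sketch-rnn's; a minimal sketch of how that weighting enters the loss, assuming a diagonal-Gaussian posterior against a standard-normal prior (function and argument names here are hypothetical):

```python
import math

def vae_loss(recon_nll, mu, logvar, kl_weight=1.0):
    """reconstruction NLL plus weighted KL(q(z|x) || N(0, I)).

    mu, logvar: per-dimension posterior parameters for the latent code.
    The closed-form KL for a diagonal Gaussian against N(0, I) is
    0.5 * sum(exp(logvar) + mu^2 - 1 - logvar).
    """
    kl = 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                   for m, lv in zip(mu, logvar))
    return recon_nll + kl_weight * kl
```

raising `kl_weight` pushes the encoder's codes toward the prior, which tends to trade sharper per-example reconstruction for a smoother, more sample-able latent space.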

birdsite

todo, find out if there's a way to make it so "X reacts to Y!" videos on youtube are never shown to me in lists or suggested to play next

and here's the youtube video he mentions which consists of three solid minutes of morphing orangutans set to sixpence none the richer's classic "kiss me" and it is indeed incredible

here's the bibliography for the talk which has like a dozen things that are extremely helpful to me at the moment

this is a pretty great talk by noah veltman on shape interpolation

(seems like it's learning the letterforms themselves okay but basically nothing about how they fit together, and right now the decoded vectors seem to have no relationship whatsoever to the encoded vectors you send in)

after a whole day of training on HPC here's how my sketch-rnn model is reconstructing the word "explorers" (at various temperatures)... probably time for me to actually read the paper
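sampling "at various temperatures" means rescaling the model's output distribution before drawing from it; a minimal sketch of temperature sampling over raw logits (the helper name is hypothetical — sketch-rnn applies the same idea to its mixture-density outputs):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """draw an index from softmax(logits / temperature).

    low temperature sharpens the distribution toward the argmax;
    high temperature flattens it toward uniform (more adventurous samples).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling from the categorical distribution
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

near temperature 0 the model just retraces its most confident stroke predictions; higher temperatures are where the wobblier, more "creative" reconstructions come from.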

When I'm walking on or off a plane and I see those HSBC ads I always wish they were for a socialist utopian project rather than a celebration of capitalists mortgaging our future.

more of this, now training on a gpu and on a much larger dataset (still very early in the training process though). was hoping to get actual, like, recognizable words out of this, but I might have been hoping for too much, we'll see

(the fact that it was extremely easy to perform this dissection is actually a pretty good argument for not having everything in the same package or repo)

it's not that big a deal, but I wish google's magenta stuff wasn't all in the same repository. to install the whole thing with pip you have to have already installed alsa and jack, but those are only necessary for the music stuff, which I'm not interested in (and which would have been a pain to install on NYU's HPC I think?). so I ended up having to just... dissect the project I was interested in out of the package as a whole

food, disability, accessibility, The Guardian blog link

it hasn't really learned anything yet because I only trained it on a small part of the corpus and I stopped it after like half an epoch but this is still pretty good

sampling from a sketch-rnn model trained with a few thousand english words drawn with the hershey font vectors