ran across this very good overview of text generation techniques with neural networks (blog.usejournal.com/generating), though it's notable mostly for this very odd/hilarious illustration of "meaning space"

having trouble figuring out what this distinction could even be, and if a meaningful distinction can be made, how it could possibly be important for an arts residency application

i hope someone has written an extensive fanfic that intrafictionally explains the terrible kerning in destiny 2 mastodon.social/media/cs0GjOCN

okay this is going to mean absolutely nothing to anyone but it's exciting to me, so. I'm using a seq2seq model with CMUdict data to predict phoneme features from the way words are spelled. each phoneme in the word is associated with a list of features (see github.com/aparrish/phonetic-s) and the network is trying to "translate" one-hot encoded letters to those features. after just a few epochs it's sounding out "fediverse" (not in the training set) as something like "fevezunz," which isn't bad?
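for the curious, a minimal Keras sketch of what that setup could look like. all the layer sizes, sequence lengths, and the binary-feature loss below are my guesses, not the actual model:

```python
# minimal sketch, not the actual model: a Keras encoder-decoder that
# reads one-hot letter sequences and writes per-phoneme feature vectors.
# all sizes below are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

MAX_LETTERS = 20    # assumed max word length
N_LETTERS = 28      # a-z plus assumed pad/end tokens
MAX_PHONEMES = 16   # assumed max phonemes per word
N_FEATURES = 24     # assumed size of each phoneme feature vector

model = keras.Sequential([
    # encode the spelling into a single summary vector
    layers.LSTM(128, input_shape=(MAX_LETTERS, N_LETTERS)),
    # repeat the summary once per output phoneme slot
    layers.RepeatVector(MAX_PHONEMES),
    # decode to a sequence of phoneme feature vectors
    layers.LSTM(128, return_sequences=True),
    # sigmoid, treating each feature as an independent binary attribute
    layers.TimeDistributed(layers.Dense(N_FEATURES, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```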

now this... this is an LSTM conditioned on *both* the GloVe 50d vector *and* the phonetic vector (as separate inputs to the model) for ~100k words, the idea being that you can keep the sound of the word constant while changing the meaning (or vice versa), or invent plausible soundalike neologisms. here I'm resampling arrays of phonetic and semantic representations of words in the same line into progressively smaller arrays and predicting from both for each word respectively
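roughly, the conditioning part might look like this in Keras (a sketch with assumed dimensions, not the real code): the semantic and phonetic vectors come in as separate inputs, get concatenated, and seed a character-level decoder.

```python
# sketch only, with assumed dimensions: condition a character decoder
# on a semantic (GloVe 50d) vector and a phonetic feature vector
from tensorflow import keras
from tensorflow.keras import layers

GLOVE_DIM = 50   # GloVe 50d, per the post
PHON_DIM = 24    # assumed phonetic feature vector size
MAX_LEN = 20     # assumed max word length
N_CHARS = 28     # a-z plus assumed pad/end tokens

sem_in = keras.Input(shape=(GLOVE_DIM,), name="semantics")
phon_in = keras.Input(shape=(PHON_DIM,), name="phonetics")
# fuse the two conditioning vectors...
cond = layers.Concatenate()([sem_in, phon_in])
# ...and feed the fused vector to the decoder at every timestep
seed = layers.RepeatVector(MAX_LEN)(cond)
h = layers.LSTM(256, return_sequences=True)(seed)
chars = layers.TimeDistributed(layers.Dense(N_CHARS, activation="softmax"))(h)
model = keras.Model([sem_in, phon_in], chars)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

holding phon_in fixed while swapping sem_in is what keeps the sound constant while the meaning drifts (and vice versa)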

text -> array of phonetic vectors for each word -> resample progressively larger -> generate word from RNN from vector at each index in resulting array (with thanks to alvin lucier)
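the resampling step is just linear interpolation along the word axis, something like this numpy sketch (dimensions assumed):

```python
# numpy sketch of the resampling step: linearly interpolate an
# (n_words, dim) array of vectors to a new number of rows
import numpy as np

def resample(vecs, new_len):
    old_idx = np.linspace(0.0, 1.0, num=len(vecs))
    new_idx = np.linspace(0.0, 1.0, num=new_len)
    return np.stack(
        [np.interp(new_idx, old_idx, vecs[:, d]) for d in range(vecs.shape[1])],
        axis=1,
    )

line = np.random.rand(7, 24)   # e.g. 7 words, assumed 24-d phonetic vectors
longer = resample(line, 12)    # a progressively larger array, lucier-style
```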

typography, mouths, selfie(?), eye contact (briefly)

ok it turns out de gruyter has the whole thing online and lets you download the pdfs (at least I could from inside the NYU network). so now I have 800 pages of PDFs of aubades and albas etc. here's a nice one from ancient Egypt

generating words from uniformly-distributed random phonetic vectors...
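(the sampling itself is just this, with the dimension assumed:)

```python
# sketch: uniform random vectors with the (assumed) dimensionality of
# the phonetic features; each row gets fed to the trained spelling
# model described in the next post as a conditioning vector for a
# word that doesn't exist
import numpy as np

PHON_DIM = 24  # assumed
random_vectors = np.random.uniform(0.0, 1.0, size=(10, PHON_DIM))
```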

anyway here's a neural network attempting to spell words based on their phonetic vectors (using the vector to condition an LSTM). caveats: I stopped the training after uh, 20k samples because I have to go home now, so it's only trained on a few hundred words or so? also I should have stripped the alternate pronunciation notation from CMUdict and used start/end tokens. still, a promising start!
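decoding from the conditioned LSTM can be as simple as reading the argmax letter off each timestep. a hypothetical helper, assuming a trained model that maps a phonetic vector to a per-step softmax over characters:

```python
# hypothetical greedy decoder for a model mapping a phonetic vector
# to a (MAX_LEN, N_CHARS) softmax over characters; names assumed
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"  # assumed; pad/end tokens sit after 'z'

def spell(model, phon_vec):
    probs = model.predict(phon_vec[np.newaxis, :])[0]  # (MAX_LEN, N_CHARS)
    letters = []
    for step in probs:
        idx = int(np.argmax(step))
        if idx >= len(ALPHABET):  # any index past 'z' is pad/end: stop
            break
        letters.append(ALPHABET[idx])
    return "".join(letters)
```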

i have a new piece in queer.archive.work v1.1, being sold by editor Paul Soulellis at the fair (queer.archive.work/), and i'm so happy with how it turned out (black riso on dark rich folded paper)

actually I like this better with hyphens:

In linguistics, a word is the smallest element that can be uttered in isolation with objective or practical meaning.

taken out of context, the film stills section in Michel Chion's _Words on Screen_ forms its own kind of concrete cutup poetry (the stills are referenced by number in the text, but all collected together in one section in the middle of the book)

okay somehow managed to rickroll myself while researching this, probably time to move on to something else for a while

can't believe I've poured thousands of dollars into the apple ecosystem over the course of my adult life but keynote won't let me nest slides deeper than 5x

reviewing the quantitative evaluation sections of various papers on computer-generated poetry
