17 hours of training later and still no convergence in sight.
It's almost as if I have never been away.
The third image here (cbc8d… jpg) is the best so far in the following regard:
I’ve noticed with GPT-2 that it’s really bad at typesetting basics, like smart quotes vs. straight quotes, or single vs. double quotes, etc. (Maybe that’s on me for not sorting that out in my corpus.) And some of the early ones you posted feel like they were rendered quite mismatched, more like a collage than the work of one hand. That’s how GPT-2’s text feels to me too, tbh.
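For what it’s worth, the corpus cleanup I mean would be something like this: fold all the “smart” punctuation variants down to one plain style before training, so the model only ever sees one convention. (A rough sketch; the function name and the exact mapping are just my choices.)

```python
# Rough sketch: normalize "smart" punctuation to plain ASCII so the
# training corpus uses one consistent quoting style throughout.
QUOTE_MAP = str.maketrans({
    "\u2018": "'",    # left single quote
    "\u2019": "'",    # right single quote / apostrophe
    "\u201C": '"',    # left double quote
    "\u201D": '"',    # right double quote
    "\u2026": "...",  # ellipsis
})

def normalize_quotes(text: str) -> str:
    """Return text with curly quotes etc. replaced by plain equivalents."""
    return text.translate(QUOTE_MAP)

print(normalize_quotes("\u201CIt\u2019s fine\u201D"))  # "It's fine"
```

Running every file through something like this before tokenizing would at least stop the model from mixing quote styles mid-sentence.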