I’ve noticed that certain terms or tags are causing rendering issues with the new model. The outputs are highly unstable and inconsistent—beyond what I would consider normal variation.
This doesn’t appear to be due to new interpretation logic or prompt strategy shifts. Instead, many of these generations look glitched, underprocessed, washed out, or as if rendering was prematurely stopped. The saturation is often low, and overall image quality degraded.
I suspect that some of these tags may be acting like “stop codons”, halting generation early—possibly similar in effect to setting guidance_scale = 1.
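For context on why guidance_scale = 1 produces flat output: in classifier-free guidance, the sampler blends an unconditional and a prompt-conditioned noise prediction at each denoising step. Here is a minimal toy sketch of that blend (not the model's actual code; the function name and numbers are made up for illustration):

```python
# Toy sketch of the classifier-free guidance (CFG) blend used by
# diffusion samplers (illustrative only, not this model's real code):
#   pred = uncond + guidance_scale * (cond - uncond)
def cfg_blend(uncond, cond, guidance_scale):
    """Blend unconditional and prompt-conditioned predictions element-wise."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.10, 0.20, 0.30]  # toy "empty prompt" prediction
cond   = [0.40, 0.10, 0.50]  # toy "with prompt" prediction

# A typical scale (e.g. 7.5) amplifies the difference the prompt makes:
print(cfg_blend(uncond, cond, 7.5))

# At guidance_scale = 1 the unconditional term cancels out mathematically,
# leaving just the raw conditional prediction with no amplification.
# In practice that tends to look low-contrast and washed out:
print(cfg_blend(uncond, cond, 1.0))
```

If a tag effectively neutralizes the conditioning (e.g. because it is barely trained), the result would resemble this unamplified case, which matches the desaturated, underprocessed look described above.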
From my testing, the problematic tags seem to fall into two groups:
Furry-related terms: furry, fursona, anthro, etc.
Illustration-related terms: drawing, line work, cel shading, etc.
It’s possible these tags are being masked or diluted when mixed with stronger or more stable tags, which may explain why some prompts still produce acceptable or mixed results. However, when several of these unstable tags are combined, generation almost always fails—suggesting a kind of cumulative destabilization effect.
By contrast, photography and painting-style tags remain mostly unaffected and render normally.
I’m willing to bet training for those tags simply hasn’t completed, or hasn’t started yet.
Quote from the notes you can find by clicking the little link that appears when you are generating an image:
MONTHS
Training takes a long time and a lot of the GPU power on the hardware this model runs on (part of the speed issue). It is getting better as more training data completes, but please be patient.
Yeah, that’s the correct and official answer. But I don’t want to just sit, watch, and wait rather than point it out. What if reporting this made the fix faster, when it would have taken months under the original plan? I did notice some progress 24 hours after I posted this. Coincidence? There had been nothing promising for a whole week before.
Of course it isn’t a coincidence. That is the entire point of the training! Training will literally improve the output quality as the model is fed more data on different styles, celebrities, artists, etc. This model does not include everything, and that was for a reason: the dev is attempting to run this model on very limited resources, so they loaded only the base model (not to mention, many popular/new models deliberately exclude data on celebs for fear of lawsuits). So it’s going to need training, and it needs to be trained very carefully in order to keep memory and GPU usage down.
So, no, you are not going to make the hardware work faster.
And no, if your theory amounts to selective training or just guesswork, you’ve also missed the last line of the dev log about the update:
Send feedback and report bugs using the feedback box below.
I feel like I’m doing my part, and I feel even better when things actually improve. Maybe it’s a coincidence, but I found a way to brighten my moody mind.
It’s much more meaningful than telling people things will get better in… how many months?
nice copypaste disclaimer :)
much wise
What do you mean? It’s useful information.
exactly. it wasn't sarcasm. good job
Ah, thanks. People have been so angry about the new model lately. I’m just trying to get this information out there.
I’ve read through it since day 0. And I’m impatient because, you know, I favor cel-shading, drawing, and furry.