• LwL@lemmy.world

It is more appropriate for LLMs, but not for diffusion models (imagegen). Those are more "throw shit at a wall and refine it a thousand times" (whereas LLMs just grab whatever looks similar to what they want). It's why generated images usually look normal at a glance but fall apart the moment you pay attention to details: the AI judges the whole image to be close enough to the training images matching the prompt, instead of having any intent behind individual parts.
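
To make the contrast concrete, here's a toy sketch of that "refine it a thousand times" loop. Everything in it is made up for illustration: `denoise_step` is a hypothetical stand-in for the trained network (a real one predicts the noise to subtract at each step), and the constants are arbitrary. The point is just that every step nudges the *whole* image at once, so no step ever reasons about individual parts.

```python
import numpy as np

def denoise_step(x, t, total_steps, rng):
    # Hypothetical stand-in for a trained denoising network.
    # It nudges the entire image toward a fixed target while the
    # injected noise shrinks over time -- global refinement only,
    # no notion of "hand", "eye", or any individual part.
    target = np.full_like(x, 0.5)        # pretend training-data mean
    blend = 1.0 / (total_steps - t + 1)  # refine a little more each step
    noise = rng.normal(0.0, 0.1 * (t / total_steps), size=x.shape)
    return x + blend * (target - x) + noise

def sample(shape=(8, 8), steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)           # start from pure noise
    for t in range(steps, 0, -1):        # refine it ~a thousand times
        x = denoise_step(x, t, steps, rng)
    return x

img = sample()
print(img.round(2))                      # converges toward the "target"
```

Run it and the noise field drifts toward something globally plausible, which is the whole trick: close enough as a whole, with nothing enforcing correctness of any one region.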