• t3rmit3@beehaw.org

    Products of a bigoted society go in, bigoted products come out.

    In that regard, developers and decision makers would benefit from centering users’ social identities in their process, and from acknowledging that these AI tools and their uses are highly context-dependent. They should also deepen their understanding of how these tools can be deployed in culturally responsive ways.

    You can’t correct for bias at the ass-end of a mathematical algorithm. Generative AI just caricatures our own society back to us; it’s a fun-house mirror that makes our biases jump out. If they want a model that doesn’t produce bigoted outputs, they’re going to have to fix their inputs.
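
    To make the “bias in, bias out” point concrete, here’s a minimal toy sketch. The corpus, profession names, and pronoun counts are all invented for illustration; a trivial count-based “model” is standing in for a real generative model, since both ultimately sample from statistics learned off their training data.

    ```python
    # Toy demo: a "model" that is nothing but the statistics of its corpus.
    # All data below is hypothetical, chosen to make the skew obvious.
    from collections import Counter
    import random

    # Hypothetical biased corpus: "doctor" co-occurs with "he" 90% of the
    # time, "nurse" with "she" 90% of the time.
    corpus = (
        [("doctor", "he")] * 90 + [("doctor", "she")] * 10
        + [("nurse", "she")] * 90 + [("nurse", "he")] * 10
    )

    # "Training" is just counting co-occurrences.
    counts: dict[str, Counter] = {}
    for noun, pronoun in corpus:
        counts.setdefault(noun, Counter())[pronoun] += 1

    def sample_pronoun(noun: str) -> str:
        # Sample a pronoun in proportion to the learned counts.
        c = counts[noun]
        return random.choices(list(c), weights=list(c.values()))[0]

    # The outputs mirror the 90/10 skew of the inputs almost exactly.
    draws = Counter(sample_pronoun("doctor") for _ in range(10_000))
    print(draws)  # roughly Counter({'he': 9000, 'she': 1000})
    ```

    Everything the toy model “knows” is the corpus frequencies, so any post-hoc patch is fighting the learned distribution rather than changing it; rebalancing the counts, i.e. the inputs, is the only fix at this level.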