• mozz@mander.xyz
    1 year ago

    Disclaimer: I have no real qualifications on this. But this whole technology seems pretty sensitive to the specific model being used and to the specific details of the pixels; the article is written as if there's some silver-bullet image alteration that can fool "machine vision" in general, but what it demonstrates is nothing like that.
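    To illustrate why that matters, here's a toy sketch (my own, nothing to do with the article's actual method): two made-up linear "models" with different weights, and a gradient-sign perturbation crafted against only one of them. The perturbation flips that model's answer but leaves the other model untouched, which is the kind of model-specificity I mean. Real vision models differ far more than this, so treat it as a cartoon.

    ```python
    import numpy as np

    # Two hypothetical linear classifiers with different weights.
    # These stand in for "the model the attack was tuned for" (A)
    # and "some other model" (B).
    w_a = np.array([1.0, -0.2])
    w_b = np.array([-0.2, 1.0])

    def predict(w, x):
        # Classify by the sign of the score w @ x.
        return 1 if w @ x > 0 else -1

    x = np.array([0.5, 0.5])  # clean input: both models output +1

    # Gradient-sign-style step against model A only: move the input
    # opposite the sign of A's weights (the direction that most
    # decreases A's score under an L-infinity budget eps).
    eps = 0.7
    x_adv = x - eps * np.sign(w_a)

    print(predict(w_a, x), predict(w_b, x))      # clean: both +1
    print(predict(w_a, x_adv), predict(w_b, x_adv))  # A flips, B doesn't
    ```

    The same perturbation aimed at A does nothing to B because B's weights point in a different direction, which is roughly why an alteration tuned for one classifier needn't fool another.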

    I asked Midjourney to describe the altered images (the ones machines are supposed to misidentify as a sheep or a cat or whatever), and it said:

    • A bouquet of flowers sitting on the table in a brown vase
    • Some bright colored flowers in a circular vase
    • An omelette and sandwiches on the table
    • An omelet with hash browns

    … which is what they are.

    The last two images were actually a little more interesting – they're distorted to the point that it's visually obvious they've been altered, and Midjourney picks up on that distortion and folds it into the style part of its description, while still mostly accurately describing what's in the image. These are its full descriptions:

    “a red bridge, traffic lights, and a fenced-in section of street, in the style of digital mixed media, thermal camera, american realism, found object sculpture, stipple, ricoh r1, xbox 360 graphics”

    “a pole with a traffic light and a van, in the style of distorted, fragmented images, manapunk, found objects, webcam photography, suburban ennui capturer, hyper-realistic bird studies, 19th century american art”