I asked Google Bard whether it thought Web Environment Integrity was a good or bad idea. Surprisingly, not only did it respond that it was a bad idea, it even went on to urge Google to drop the proposal.
Yes, because online discussions usually aren’t inherently subjective and are instead backed by sourceable knowledge. Sorry for the cynicism, but one can always find some source that underlines any given point, so everything should be taken with a grain of salt.
I’d personally argue that the way generative AI works lends itself to producing answers that fit the general consensus of the internet content relevant to the given prompt, because it calculates the most likely response based on the information available. Since most information relevant to “Google Web DRM” is critical of it (Google doesn’t call it DRM themselves), it makes sense that a prompt asking the AI for its opinion on Web DRM would result in a rather negative response, provided Google doesn’t tamper with it to their advantage.