• 0 Posts
  • 44 Comments
Joined 1 year ago
Cake day: August 2nd, 2023

  • Type in "Is Kamala Harris a good Democratic candidate"

    …and any good search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “good”.

    […] you might ask if she’s a “bad” Democratic candidate instead

    In that case, of course the search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “bad”.

    So the whole premise that “Fundamentally, that’s an identical question” is just bullshit when it comes to searching. Obviously, when you put in the keyword “good”, you’ll find articles containing “good”, and if you put in the keyword “bad”, you’ll find articles containing “bad” instead.

    Google will find things that match the keywords that you put in. So does DuckDuckGo, Qwant, Yahoo, whatever. That is what a good search engine is supposed to do.

    I can assure you, when search engines stop doing that, and instead try to give “balanced” results, according to whatever opaque criteria for “balanced” their company comes up with, that will be the real problem.

    I don’t like Google, and only use it when other search engines fail. But this article is BS.
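    The keyword-matching behavior described above can be sketched as a toy (hypothetical code, not any real engine’s ranking — real engines use inverted indexes, stemming, and many more signals), just to show why “good” and “bad” queries retrieve different results:

```python
# Toy keyword search: a document scores by how many query keywords it contains.
def tokenize(text):
    """Lowercase and split into a set of keywords."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def search(query, documents):
    """Return documents ranked by keyword overlap with the query."""
    keywords = tokenize(query)
    scored = []
    for doc in documents:
        overlap = len(keywords & tokenize(doc))
        if overlap:
            scored.append((overlap, doc))
    # Higher overlap first — so "good" queries surface "good" documents.
    return [doc for _, doc in sorted(scored, reverse=True)]

docs = [
    "Harris is a good candidate",
    "Harris is a bad candidate",
]
print(search("is Harris a good candidate", docs)[0])  # the "good" document
print(search("is Harris a bad candidate", docs)[0])   # the "bad" document
```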




  • It’s not an article about LLMs not using dialects. In fact, they have learned said dialects and will use them if asked.

    What they did was ask the LLM to suggest adjectives associated with sentences, and it would associate more aggressive or negative adjectives with the African American dialect.

    That seems to be not a bias of the AI models themselves, but rather a reflection of the source material.

    All (racial) bias in AI models is actually a reflection of the training data, not of the modelling.
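    To illustrate how that reflection works, here’s a deliberately crude, hypothetical “model” that suggests adjectives purely by counting co-occurrences in its training corpus — nothing like the study’s actual methodology, but it shows how any skew in the source material reappears directly in the model’s suggestions:

```python
from collections import Counter

ADJECTIVES = {"aggressive", "lazy", "brilliant", "friendly"}

def train(corpus):
    """corpus: list of (style_label, text) pairs.
    Count how often each adjective co-occurs with each style."""
    counts = {}
    for style, text in corpus:
        style_counts = counts.setdefault(style, Counter())
        for word in text.lower().split():
            if word in ADJECTIVES:
                style_counts[word] += 1
    return counts

def suggest_adjective(counts, style):
    """Return the adjective most associated with a style in the corpus."""
    return counts[style].most_common(1)[0][0]

# A skewed toy corpus: the skew is in the data, not the counting logic.
corpus = [
    ("style_a", "a brilliant friendly speaker"),
    ("style_a", "a brilliant argument"),
    ("style_b", "an aggressive tone"),
    ("style_b", "aggressive and lazy"),
]
model = train(corpus)
print(suggest_adjective(model, "style_a"))  # "brilliant"
print(suggest_adjective(model, "style_b"))  # "aggressive" — the corpus skew, reflected back
```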




  • I disagree with the “limitations” they ascribe to the Turing test - if anything, they’re implementation issues. For example:

    For instance, any of the games played during the test are imitation games designed to test whether or not a machine can imitate a human. The evaluators make decisions solely based on the language or tone of messages they receive.

    There’s absolutely no reason why the evaluators shouldn’t take the content of the messages into account, and use it to judge the reasoning ability of whoever they’re chatting with.


  • No, I want a communal, collaboratively managed platform to recommend things to me based on an open source algorithm whose behavior I can adjust the way I want. Alas, this just isn’t a thing.

    Among the available options, the closed algorithm optimized for engagement has so far been better at showing me interesting things than an unfiltered chronological feed.
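    What “adjust the way I want” could look like, as a hypothetical toy ranker (not any platform’s real algorithm): user-set weights slide the feed between engagement-optimized and purely chronological.

```python
import math

def rank_feed(posts, engagement_weight=1.0, recency_weight=1.0):
    """posts: dicts with 'id', 'predicted_engagement' (0..1), 'age_hours'.
    The user chooses the weights; engagement_weight=0 makes it chronological."""
    def score(post):
        recency = math.exp(-post["age_hours"] / 24)  # decays over a day
        return (engagement_weight * post["predicted_engagement"]
                + recency_weight * recency)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "viral", "predicted_engagement": 0.9, "age_hours": 48},
    {"id": "fresh", "predicted_engagement": 0.1, "age_hours": 1},
]
# Engagement-weighted: the two-day-old viral post wins.
print([p["id"] for p in rank_feed(posts, engagement_weight=1.0, recency_weight=0.5)])
# Zero out engagement and the feed becomes chronological: the fresh post wins.
print([p["id"] for p in rank_feed(posts, engagement_weight=0.0, recency_weight=1.0)])
```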