Mount Sinai has become a laboratory for AI, trying to shape the future of medicine. But some healthcare workers fear the technology comes at a cost.

WP gift article expires in 14 days.

https://archive.ph/xCcPd

  • RiikkaTheIcePrincess@kbin.social · 19 points · 1 year ago

    Interesting that the “AI” posts alternate between “Oh, AI will lead us into the shining, glorious future!” and “Wow, the new thing that people call ‘AI’ can’t reliably add 2+2, makes up lies that even supposed professionals fall for when asked for real info, and just generally mimics only the form of human communication with no idea whatsoever about the content, often resulting in hilarious claims, images, et cetera that a dog could recognize as wrong!”

    With bosses in charge… ugh. I assume soon if not already someone will be scheduled for heart surgery because WebMDGPT decided their cough was due to irritable bowel syndrome, which it will claim is a form of cancer because that’s the sort of crap these things do.

    “The ability to speak does not make you intelligent” somehow applies to Jar-Jar but everybody’s impressed with the chatbot that knows literally nothing.

    • ConsciousCode@beehaw.org · 9 points · 1 year ago

      It makes me really sad, because the techbros are a cargo cult with no understanding of the technology, and the anti-AI crowd is an overcorrection to the techbro hype train that overemphasizes the limitations without acknowledging that this is the first generation of general-purpose AI (distinct from AGI). Meanwhile I, someone who’s followed the AI field for 10 years waiting for this day, am overjoyed by the near-miracle that is a general-purpose model that can handle any task you throw at it, and simultaneously worried that this yet-another-culture-war will keep people screeching about utopia vs. Skynet while capitalists use the technology to lay everyone off and send us into a neo-technofeudal society where labor has no power, instead of the socialist utopia we deserve, where work is optional…

      • RiikkaTheIcePrincess@kbin.social · 5 points · 1 year ago

        I generally agree but struggle to see where there’s any proper “general-purpose AI” involved. The current “AI” seems to be a crop of, simply put, overgrown chat bots. They make things that kinda-sorta look like other things that humans have already made and are getting a lot of attention for getting things very wrong. Hands, mouths, maths, laws, wrong wrong wrong.

        From my perspective (as someone who loves novel tech, was thrilled to take a uni course on evolutionary computation, and grew up on PopSci, SciAm, Discover, and the like), people are blowing the hell up praising glorified chat bots as our lord and saviour, and it baffles me endlessly. Like… I was evolving solutions to notoriously hard problems as an undergrad roughly a decade ago. The power of evolution itself! Wow, right? No, no one cares any more. Interesting is interesting, but the hype train has decided these (I’m not going to stop calling them chat bots, because that’s what they are) represent a miracle of advanced, movie-style AGI. Unless my understanding of how they work is way off, they’re not really even a good starting point for AGI. I’d even go so far as to say they’re less technically interesting to me than Sierra’s AGI, but then I do have a deep, burning hatred of memes and excessive, blind popularity/hype, and a bit of a taste for old tech, so part of that’s just me. As for the “utopia vs. Skynet” stuff… sigh. No technology is gonna do more to heal or harm humanity than this batch of buttholes is already doing to itself, and ELIZA here isn’t going to change that.

        tl;dr: The current cultural idea of “AI” is (as always) a damn meme based on chat bots and exploitation, and not a miracle. Wake me when AI is capable of some interesting new kind of NLP or can create something entirely new or something beyond impressing fools (because I actually do like neat tech). Also yes, any big, moneyful/profitable tech-thing is 100% gonna serve money over all else because everything in this capitalist hell-world does. rant rant rant! … dozes off

          • RiikkaTheIcePrincess@kbin.social · 2 points · 1 year ago

            Okay, see, that smells smarter than “we’re gonna cram the entire Internet into a box full of neurons and shove a shitload of compute through it.” It is, therefore, more interesting. Maybe I’ll have a deeper peek into it… In a few years when it’s not associated with any hype 😅 Here, have some of my pizza 🫴🍕

        • ConsciousCode@beehaw.org · 1 point · 1 year ago

          First, I’d like to be a little pedantic and say LLMs are not chatbots. ChatGPT is a chatbot - LLMs are language models, which can be used to build chatbots. They are models (like a physics model) of language, describing the causal joint probability distribution of language. ChatGPT only acts like an agent because OpenAI spent a lot of time retraining a foundation model (which has no such agent-like behavior) to model “language” as expressed by an individual. Then they put it into a chatbot “cognitive architecture” which feeds it a truncated chat log. This is why the smaller models, when improperly constrained, may start typing as if they were you - they have no inherent distinction between the chatbot and yourself. LLMs are a lot more like Broca’s area than a person, or even a chatbot.
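To make the “truncated chat log” point concrete, here’s a minimal sketch (plain Python, no real model or API - the prompt format is invented for illustration) of how a chatbot wrapper flattens a conversation into the single text string the LLM actually sees:

```python
def build_prompt(history, max_chars=2000):
    """Flatten a chat log into one text blob, keeping only the most
    recent max_chars characters (oldest turns fall off the front)."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    prompt = "\n".join(lines) + "\nAssistant:"
    return prompt[-max_chars:]

history = [
    ("User", "What causes a cough?"),
    ("Assistant", "Many things - infections, allergies, reflux..."),
    ("User", "Should I worry?"),
]
prompt = build_prompt(history)
# The model only ever sees this flat string; the trailing "Assistant:"
# is the sole hint about whose turn it is, which is why a weakly-steered
# model may happily keep writing "User:" lines as well.
```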

          When I say they’re “general-purpose”, this is more or less an emergent feature of language, which encodes some abstract sense of problem solving and tool use. Take the library I wrote to create “semantic functions” from natural-language tasks - one of the examples I keep coming back to in order to demonstrate its usefulness is

          @semantic
          def list_people(text) -> list[str]:
              '''List the people mentioned in the given text.'''
          

          A year ago, this would’ve been literally impossible. I could approximate it with thousands of lines of code using SpaCy and other NLP libraries to do NER, maybe a massive dictionary of known names with fuzzy matching, plus some heuristics to rule out city names or more advanced sentence-structure parsing for false positives - but the result would be guaranteed to be worse, for significantly more effort. With LLMs, I just tell the AI to do it and it… does. Just like that. I can ask it to do anything and it will, within reason and with proper constraints.
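For contrast, here’s a toy version of the brittle hand-rolled approach described above - regex “NER” plus a tiny blocklist. The text and dictionary are invented for illustration, and real pipelines like SpaCy are far more sophisticated, but they share the same failure mode: surface-pattern rules can’t tell a name from any other capitalized word.

```python
import re

KNOWN_NON_NAMES = {"Paris", "London"}  # tiny stand-in for the "massive dictionary"

def list_people_heuristic(text):
    """Naive rule-based person finder: treat runs of capitalized words
    as candidate names, then filter out known non-names."""
    candidates = re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", text)
    return [c for c in candidates if c not in KNOWN_NON_NAMES]

print(list_people_heuristic("Alice Johnson met Bob in Paris last May."))
# → ['Alice Johnson', 'Bob', 'May'] - "May" slips through as a false positive
```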

          GPT-3 was the first generation of this technology, and it was already miraculous for someone like me who’s been following the AI field for 10+ years. If you try GPT-4, it’s at least 10x subjectively more intelligent than ChatGPT/GPT-3.5. It costs $20/mo, but it’s also been irreplaceable for me for a wide variety of tasks - Linux troubleshooting, bash commands, ducking coding, random questions too complex to google, “what was that thing called again”, sensitivity reading, interactively exploring options to achieve a task (e.g. note-taking, SMTP, self-hosting, SSI/clustered computing), teaching me the basics of a topic so I can do further research, etc. I essentially use it as an extra brain lobe that knows everything, as long as I remind it about what it knows.

          While LLMs are not people, or even “agents”, they are “inference engines” which can serve as building blocks to construct an “artificial person”, or some gradation thereof. In the near future I’m going to experiment with creating a cognitive architecture to start approaching that - long-term memory, associative memory, internal thoughts, dossier curation, tool use via endpoints, etc. - so that eventually I have what Alexa should’ve been, hosted locally. That possibility is probably what techbros are freaking out about; they’re just uninformed about the technology and think GPT-4 is already that, or that GPT-5 will be (it won’t). But please don’t buy into the anti-hype - it robs you of the opportunity to explore the technology, and could blindside you when it becomes more pervasive.
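As a rough illustration of that “cognitive architecture” idea - everything below is a made-up stub, including `fake_llm` and the keyword-overlap “associative memory” - the point is only that memory, retrieval, and the response loop live outside the model, which is just one inference step:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real language-model call.
    return "(completion for: " + prompt[:40] + "...)"

class Agent:
    def __init__(self):
        self.memory = []  # long-term memory as plain text records

    def recall(self, query, k=3):
        # Crude associative lookup by word overlap; a real system
        # might use embeddings instead.
        scored = sorted(self.memory,
                        key=lambda m: len(set(m.split()) & set(query.split())),
                        reverse=True)
        return scored[:k]

    def respond(self, user_msg):
        context = "\n".join(self.recall(user_msg))
        prompt = f"Memories:\n{context}\n\nUser: {user_msg}\nAssistant:"
        reply = fake_llm(prompt)
        self.memory.append(user_msg)   # remember the exchange
        self.memory.append(reply)
        return reply

agent = Agent()
agent.respond("I take my coffee black")
print(agent.respond("How do I take my coffee?"))
```

The model never “remembers” anything itself; each call only sees whatever the architecture chooses to stuff into the prompt.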

          What would AI have to do to qualify as “capable of some interesting new kind of NLP or can create something entirely new”? From where I stand, that’s exactly what generative AI is? And if it isn’t, I’m not sure what even could qualify unless you used necromancy to put a ghost in a machine…

    • keeb420@kbin.social · 1 point · 1 year ago

      That’s because everyone is trying to treat AI as if it’s what movies promised us. It’s not there yet, if it’ll ever get there at all. It can be a good tool in the tool chest, but it’s far from the only tool. AI might be able to speed up blood tests or screen for more things, but it’s not going to replace a good doctor yet. Or it could monitor everyone whose vitals are being tracked and flag things for doctors to follow up on - ones that aren’t life-threatening and could currently be missed.
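The “flag vitals for follow-up” idea is essentially threshold screening. A hedged sketch - the ranges below are invented placeholders, not clinical guidance, and a real system would be far more careful:

```python
# Illustrative only: flag out-of-range vitals for human review,
# never autonomous diagnosis. Ranges are made-up examples.
NORMAL_RANGES = {
    "heart_rate": (50, 110),   # beats per minute
    "spo2": (92, 100),         # % oxygen saturation
    "temp_c": (35.0, 38.0),    # body temperature, Celsius
}

def flag_vitals(reading):
    """Return the vitals outside their expected range, for a clinician to review."""
    flags = []
    for name, value in reading.items():
        lo, hi = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            flags.append((name, value))
    return flags

print(flag_vitals({"heart_rate": 128, "spo2": 96, "temp_c": 37.1}))
# → [('heart_rate', 128)]
```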