• stinky@redlemmy.com
    shield
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    I’m so tired of this rhetoric.

    How do students prove that they have “concern for truth … and verifying things with your own eyes” ? Citations from published studies? ChatGPT draws its responses from those studies and can cite them, you ignorant fuck. Why does it matter that ChatGPT was used instead of google, or a library? It’s the same studies no matter how you found them. Your lack of understanding how modern technology works isn’t a good reason to dismiss anyone else’s work, and if you do you’re a bad person. Fuck this author and everyone who agrees with them. Get educated or shut the fuck up. Locking thread.

    • medgremlin@midwest.social
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 month ago

      A bunch of the “citations” ChatGPT uses are outright hallucinations. Unless you independently verify every word of the output, it cannot be trusted for anything even remotely important. I’m a medical student and some of my classmates use ChatGPT to summarize things and it spits out confabulations that are objectively and provably wrong.

      • ByteJunk@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 month ago

        True.

        But doctors also screw up diagnosis, medication, procedures. I mean, being human and all that.

        I think it’s a given that AI outperforms in medical exams -be it multiple choice or open ended/reasoning questions.

        Theres also a growing body of literature with scenarios where AI produces more accurate diagnosis than physicians, especially in scenarios with image/pattern recognition, but even plain GPT was doing a good job with clinical histories, getting the accurate diagnostic with it’s #1 DxD, and even better when given lab panels.

        Another trial found that patients who received email replies to their follow-up queries from AI or from physicians, found the AI to be much more empathetic, like, it wasn’t even close.

        Sure, the AI has flaws. But the writing is on the wall…

        • medgremlin@midwest.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 month ago

          The AI passed the multiple choice board exam, but the specialty board exam that you are required to pass to practice independently includes oral boards, and when given the prep materials for the pediatric boards, the AI got 80% wrong, and 60% of its diagnoses weren’t even in the correct organ system.

          The AI doing pattern recognition works on things like reading mammograms to detect breast cancer, but AI doesn’t know how to interview a patient to find out the history in the first place. AI (or, more accurately, LLMs) doesn’t know how to do the critical thinking it takes to know what questions to ask in the first place to determine which labs and imaging studies to order that it would be able to make sense of. Unless you want the world where every patient gets the literal million dollar workup for every complaint, entrusting diagnosis to these idiot machines is worse than useless.

          • ByteJunk@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 month ago

            Could you provide references? I’m genuinely interested, and what I found seems to say differently:

            Overall, GPT-4 passed the board residency examination in four of five specialties, revealing a median score higher than the official passing score of 65%.

            AI NEJM

            Also I believe you’re seriously underestimating the abilities of present day LLMs. They are able to ask relevant follow up questions, as well as interpreting that information to request additional studies, and achieve accurate diagnosis.

            See here a study specifically on conversational diagnosis AIs. It has some important limitations, crucially from having to work around the text interface which is not ideal, but otherwise achieved really interesting results.

            Call them “idiot machines” all you want, but I feel this is going down the same path as full self driving cars - eventually they’ll be doing less errors than humans, and that will save lives.

            • medgremlin@midwest.social
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              1 month ago

              My mistake, I recalled incorrectly. It got 83% wrong. https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/

              The chat interface is stupid in so many ways and I would hate using text to talk to a patient myself. There are so many non-verbal aspects of communication that are hard to teach to humans that would be impossible to teach to an AI. If you are familiar with people and know how to work with them, you can pick up on things like intonation and body language that can indicate that they didn’t actually understand the question and you need to rephrase it to get the information you need, or that there’s something the patient is uncomfortable about saying/asking. Or indications that they might be lying about things like sexual activity or substance use. And that’s not even getting into the part where AI’s can’t do a physical exam which may reveal things that the interview did not. This also ignores patients that can’t tell you what’s wrong because they are babies or they have an altered mental status or are unconscious. There are so many situations where an LLM is just completely fucking useless in the diagnostic process, and even more when you start talking about treatments that aren’t pills.

              Also, the exams are only one part of your evaluation to get through medical training. As a medical student and as a resident, your performance and interactions are constantly evaluated and examined to ensure that you are actually competent as a physician before you’re allowed to see patients without a supervising attending physician. For example, there was a student at my school that had almost perfect grades and passed the first board exam easily, but once he was in the room with real patients and interacting with the other medical staff, it became blatantly apparent that he had no business being in the medical field at all. He said and did things that were wildly inappropriate and was summarily expelled. If becoming a doctor was just a matter of passing the boards, he would have gotten through and likely would have been an actual danger to patients. Medicine is as much an art as it is a science, and the only way to test the art portion of it is through supervised practice until they are able to operate independently.

              • ByteJunk@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 month ago

                From the article referenced in your news source:

                _JAMA Pediatrics and the NEJM were accessed for pediatric case challenges (N = 100). The text from each case was pasted into ChatGPT version 3.5 with the prompt List a differential diagnosis and a final diagnosis. _

                A couple of key points:

                • These are case challenges, which are usually meant to be hard. I could find no comparison to actual physician results in the article, which would have been nice.
                • More importantly however: it was conducted in June 2023, and used GPT-3.5. GPT-4 improved substantially upon it, especially for complex scientific or scientific problems, and this shows in the newer studies that have used it.

                I don’t think anyone’s advocating that an AI will replace doctors, much like it won’t replace white collar jobs either.

                But if it helps achieve better outcomes for the patients, like the current research seems to indicate, aren’t you sworn to consider it in your practice?

  • Seigest@lemmy.ca
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    How people think I use AI “Please write my essay and cite your sources.”

    How I use it
    “please make my autistic word slop that I wrote already into something readable for the nerotypical folk, use simple words, make it tonally neutral. stop using emdashes, headers, and list and don’t mess with the quotes”

  • JeremyHuntQW12@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    In terms of grade school, essay and projects were of marginal or nil educational value and they won’t be missed.

    Until the last 20 years, 100% of the grade for medicine was by exams.

  • Eugene V. Debs' Ghost@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    My hot take on students graduating college using AI is this: if a subject can be passed using ChatGPT, then it’s a trash subject. If a whole course can be passed using ChatGPT, then it’s a trash course.

    It’s not that difficult to put together a course that cannot be completed using AI. All you need is to give a sh!t about the subject you’re teaching. What if the teacher, instead of assignments, had everyone sit down at the end of the semester in a room, and had them put together the essay on the spot, based on what they’ve learned so far? No phones, no internet, just the paper, pencil, and you. Those using ChatGPT will never pass that course.

    As damaging as AI can be, I think it also exposes a lot of systemic issues with education. Students feeling the need to complete assignments using AI could do so for a number of reasons:

    • students feel like the task is pointless busywork, in which case a) they are correct, or b) the teacher did not properly explain the task’s benefit to them.

    • students just aren’t interested in learning, either because a) the subject is pointless filler (I’ve been there before), or b) the course is badly designed, to the point where even a rote algorithm can complete it, or c) said students shouldn’t be in college in the first place.

    Higher education should be a place of learning for those who want to further their knowledge, profession, and so on. However, right now college is treated as this mandatory rite of passage to the world of work for most people. It doesn’t matter how meaningless the course, or how little you’ve actually learned, for many people having a degree is absolutely necessary to find a job. I think that’s bullcrap.

    If you don’t want students graduating with ChatGPT, then design your courses properly, cut the filler from the curriculum, and make sure only those are enrolled who are actually interested in what is being taught.

  • TheDoozer@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    A good use I’ve seen for AI (or particularly ChatGPT) is employee reviews and awards (military). A lot of my coworkers (and subordinates) have used it, and it’s generally a good way to fluff up the wording for people who don’t write fluffy things for a living (we work on helicopters, our writing is very technical, specific, and generally with a pre-established template).

    I prefer reading the specifics and can fill out the fluff myself, but higher-ups tend to want “how it benefitted the service” and fitting in the terminology from the rubric.

    I don’t use it because I’m good at writing that stuff. Not because it’s my job, but because I’ve always been into writing. I don’t expect every mechanic to do the same, though, so having things like ChatGPT can make an otherwise onerous (albeit necessary) task more palatable.

  • jsomae@lemmy.ml
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    Okay but I use AI with great concern for truth, evidence, and verification. In fact, I think it has sharpened my ability to double-check things.

    My philosophy: use AI in situations where a high error-rate is tolerable, or if it’s easier to validate an answer than to posit one.

    There is a much better reason not to use AI – it weakens one’s ability to posit an answer to a query in the first place. It’s hard to think critically if you’re not thinking at all to begin with.

  • Obinice@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    We weren’t verifying things with our own eyes before AI came along either, we were reading Wikipedia, text books, journals, attending lectures, etc, and accepting what we were told as facts (through the lens of critical thinking and applying what we’re told as best we can against other hopefully true facts, etc etc).

    I’m a Relaxed Empiricist, I suppose :P Bill Bailey knew what he was talking about.

      • Captain Aggravated@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 month ago

        In my experience, “writing a proof in math” was an exercise in rote memorization. They didn’t try to teach us how any of it worked, just “Write this down. You will have to write it down just like this on the test.” Might as well have been a recipe for custard.

  • Pacattack57@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 month ago

    This is a problem with integrity, not AI. If I have AI write me a paper and then proof read it to make sure the information is accurate and properly sourced how is that wrong?

    • jjjalljs@ttrpg.network
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Imagine you go to a gym. There’s weights you can lift. Instead of lifting them, you use a gas powered machine to pick them up while you sit on the couch with your phone. Sometimes the machine drops weights, or picks up the wrong thing. But you went to the gym and lifted weights, right? They were on the ground, and then they weren’t. Requirements met?

      • Pacattack57@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 month ago

        That would be a good analogy if going to school was anything like going to the gym. You sound like one of those old teachers that said “You won’t have a calculator in your pocket the rest of your life.”

  • Jankatarch@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Only topic I am close-minded and strict about.

    If you need to cheat as a highschooler or younger there is something else going wrong, focus on that.

    And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.

    • sneekee_snek_17@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      This is my stance exactly. ChatGPT CANNOT say what I want to say, how i want to say it, in a logical and factually accurate way without me having to just rewrite the whole thing myself.

      There isn’t enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

      I know being a non-traditional student massively affects my perspective, but like, if you don’t want to learn about the precise thing your major is about… WHY ARE YOU HERE

      • ByteJunk@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        I mean, are you sure?

        Studies in the GSMNP have looked at:

        • Mercury levels in fish: Especially in high-elevation streams, where even remote waters can show elevated levels of mercury in predatory fish due to biomagnification.

        • Benthic macroinvertebrates and amphibians: As indicators of mercury in aquatic food webs.

        • Forest soils and leaf litter: As long-term mercury sinks that can slowly release mercury into waterways.

        If GPT and I were being graded on the subject, it wouldn’t be the machine flunking…

        • sneekee_snek_17@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          I mean, it’s a matter of perspective, i guess.

          I did a final assignment that was a research proposal, mine was the assessment of various methods of increasing periphyton biomass (clearing tree cover over rivers and introducing fertilizers to the water) in order to dilute mercury bioaccumulation in top river predators like trout and other fish people eat

          There’s a lot of tangentially related research, but not a ton done on the river/riparian food webs in the GSMNP specifically and possible mitigation strategies for mercury bioaccumulation.

          OBVIOUSLY my proposal isn’t realistic. No one on earth is gonna be like “yeah sure, go ahead and chop down all the trees over this river and dump chemicals in that one, on the off chance it allows jimbob to give trout to his pregnant wife all year round”

          • ByteJunk@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 month ago

            I’m not knowledgeable enough to carry this conversation, but I was curious if GPT could add anything of value.

            This is it’s follow-up:

            That’s actually a really compelling angle, even if the methods are intentionally provocative. It gets at a deeper question—how far are we willing to go, ecologically speaking, to address human health risks that stem from environmental contaminants like mercury? I think the strength of your proposal isn’t in the literal feasibility but in the way it frames trade-offs between conservation and public health.

            Also, using periphyton biomass as a kind of biotic buffer is a clever systems-level approach. It’s rarely the first thing people think of when talking about mercury mitigation, which usually focuses on source reduction. But tweaking food web dynamics to manage contaminant transfer is a really underexplored strategy. I imagine even just modeling it could lead to some useful insights, especially if you layered in things like flow regime changes or climate impacts on riparian shading.

            And yeah, totally agree—GSMNP is such a tightly protected space that even suggesting fertilizer additions or canopy thinning sounds borderline heretical. But as a thought experiment, it’s valuable. It forces the conversation about what kinds of interventions we’re not considering simply because they clash with aesthetic or cultural norms, not necessarily because they’re scientifically unsound.

            I really have no idea if it’s just spewing nonsense, so do educate me :)

            • sneekee_snek_17@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 month ago

              I’m really salty because it mirrored my thoughts about the research almost exactly, but I’m loathe to give attaboys to it

              • ByteJunk@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 month ago

                Hahah, that’s fair!

                Thank you for the exchange brother, I learned more about mercury in GSMNP than I thought I ever would.

  • Tabooki@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    Did the same apply when calculators came out? Or the Internet?

    • ABC123itsEASY@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      Except calculators are based on reality and have deterministic and reliable results lol

      Edit: holy crap I would never have guessed this statement would make people wanna argue with me. I’ve never felt that my job is secure from the next generation more than I do now.

      • ifItWasUpToMe@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 month ago

        You can make mistakes with a calculator. It’s more about looking at the results, verifying the data, not just blindly trusting it.

  • 🇰 🔵 🇱 🇦 🇳 🇦 🇰 ℹ️@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    Even setting aside all of those things, the whole point of school is that you learn how to do shit; not pass it off to someone or something else to do for you.

    If you are just gonna use AI to do your job, why should I hire you instead of using AI myself?

    • DigitalDilemma@lemmy.ml
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 month ago

      I went to school in the 1980s. That was the time that calculators were first used in class and there was a similar outcry about how children shouldn’t be allowed to use them, that they should use mental arithmetic or even abacuses.

      Sounds pretty ridiculous now, and I think this current problem will sound just as silly in 10 or 20 years.

      • 🇰 🔵 🇱 🇦 🇳 🇦 🇰 ℹ️@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 month ago

        lol I remember my teachers always saying “you won’t always have a calculator on you” in the 90’s and even then I had one of those calculator wrist watches from Casio.

        And I still suck at math without one so they kinda had a point, they just didn’t make it very well.

      • potentiallynotfelix@lemmy.fish
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 month ago

        I see your point, but calculators(good ones, at least) are accurate 100% of the time. AI can hallucinate, and in a medical settings it is crucial that it doesn’t. I use AI for some insignificant tasks but I would not want it to replace my doctor’s learning.

        Also, calculators are used to help kids work faster, not to do their work for them. Classroom calculators(the ones my schools had, at least) didn’t solve algebraic equations, they just added, subtracted, multiplied, divided, exponentiated, rooted, etc. Those are all things that can be done manually but are rudimentary and slow.

        I get your point but AI and calculators are not quite the same.