

It’s absolutely abhorrent that this would be an incredibly useful service.
That’s kind of what I was thinking. Lots of options for people with different education levels.
I saw the bit about how the 20 hours a week can be work or volunteering, and my first thought was to figure out how to start a charity that needs people to volunteer from home doing something fairly easy but still useful. I’ll have to ponder that some. It’ll be a project to start working on. (Suggestions are welcome!)
Minnesota: Honeycrisp apple hard cider and fried Ellsworth cheese curds.
I’m a medical student applying for residency this year, and the program at the top of my list is there partly because the residents have a union.
But the front/hood is much shorter. Also, people driving that type of van are much more likely to be doing so in a professional capacity and are significantly less likely to be asshole drivers fucking around with their phones while driving. Plenty of people are bad drivers at baseline, but if someone is on the job in a van used for commercial purposes, they’re more likely to at least be paying attention and not speeding everywhere.
Edit: I marked up your image to illustrate the point made much more eloquently in the video. Because of the length and height of the hood, the truck hides a much longer stretch of road from the driver’s view in front of it, and that’s with a standard truck that doesn’t have one of the very popular lift kits (and assuming the driver is relatively tall).
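
For a rough sense of scale, here’s a minimal similar-triangles sketch of the forward blind zone. All of the heights and lengths below are assumed example numbers, not measurements from the image or the video:

    # Rough forward blind-zone estimate using similar triangles.
    # All numbers here are assumed example values, not real measurements.

    def blind_zone_beyond_hood(eye_height_m: float,
                               hood_height_m: float,
                               hood_length_m: float) -> float:
        """Ground hidden past the hood's leading edge, in meters.

        The sight line runs from the driver's eye over the hood edge down
        to the road; by similar triangles the hidden stretch past the hood
        is hood_length * hood_height / (eye_height - hood_height).
        """
        return hood_length_m * hood_height_m / (eye_height_m - hood_height_m)

    # Hypothetical tall pickup: high, long hood.
    truck = blind_zone_beyond_hood(eye_height_m=1.7, hood_height_m=1.4, hood_length_m=2.0)

    # Hypothetical cab-over-style van: low, short nose.
    van = blind_zone_beyond_hood(eye_height_m=1.9, hood_height_m=1.0, hood_length_m=0.8)

    print(f"truck blind zone beyond hood: {truck:.1f} m")  # ~9.3 m
    print(f"van blind zone beyond hood:   {van:.1f} m")    # ~0.9 m

The denominator is the punchline: as hood height approaches the driver’s eye height, the hidden stretch of road grows very quickly, which is exactly why lift kits make it so much worse.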
Here’s a great video by Fort Nine that explains how and why the shape and size of these trucks are a threat to everyone outside the vehicle.
My mistake, I recalled incorrectly. It got 83% wrong. https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/
The chat interface is stupid in so many ways, and I would hate using text to talk to a patient myself. There are so many non-verbal aspects of communication that are hard to teach to humans and would be impossible to teach to an AI. If you know how to work with people, you can pick up on things like intonation and body language that indicate they didn’t actually understand the question and you need to rephrase it to get the information you need, or that there’s something the patient is uncomfortable saying/asking. Or indications that they might be lying about things like sexual activity or substance use.

And that’s not even getting into the fact that AIs can’t do a physical exam, which may reveal things the interview did not. It also ignores patients who can’t tell you what’s wrong because they’re babies, have an altered mental status, or are unconscious. There are so many situations where an LLM is just completely fucking useless in the diagnostic process, and even more when you start talking about treatments that aren’t pills.
Also, the exams are only one part of your evaluation in medical training. As a medical student and as a resident, your performance and interactions are constantly evaluated to ensure that you are actually competent as a physician before you’re allowed to see patients without a supervising attending physician. For example, there was a student at my school who had almost perfect grades and passed the first board exam easily, but once he was in the room with real patients and interacting with the other medical staff, it became blatantly apparent that he had no business being in the medical field at all. He said and did things that were wildly inappropriate and was summarily expelled. If becoming a doctor were just a matter of passing the boards, he would have gotten through and likely would have been an actual danger to patients. Medicine is as much an art as it is a science, and the only way to test the art is through supervised practice until the trainee can operate independently.
The AI passed the multiple-choice board exam, but the specialty board exam you’re required to pass to practice independently also includes oral boards. When given the prep materials for the pediatric boards, the AI got 80% wrong, and 60% of its diagnoses weren’t even in the correct organ system.
AI pattern recognition works for things like reading mammograms to detect breast cancer, but AI doesn’t know how to interview a patient to get the history in the first place. AI (or, more accurately, an LLM) can’t do the critical thinking it takes to know what questions to ask, and therefore which labs and imaging studies to order, before it has anything to make sense of. Unless you want a world where every patient gets the literal million-dollar workup for every complaint, entrusting diagnosis to these idiot machines is worse than useless.
A bunch of the “citations” ChatGPT uses are outright hallucinations. Unless you independently verify every word of the output, it cannot be trusted for anything even remotely important. I’m a medical student, and some of my classmates use ChatGPT to summarize things; it spits out confabulations that are objectively and provably wrong.
That is an option, but I would want to make sure that people with limited English fluency or education wouldn’t be excluded.