
dandelion
flat earth is pushed by the global elite pedophiles, after all - it's what they want us to believe
this feels like a potentially sincere attempt to recruit people into an anti-science conspiracy movement - it doesn't really feel different from the kind of reasoning you see with moon landing deniers or flat earthers.
don't get me wrong, there are real and urgent moral reasons to reject the adoption of LLMs, but I think we should all agree that the responses here show a lack of critical thinking and mostly reflect engagement with a headline rather than actually reading the article (a kind of literacy issue) ... I know this is a common problem on the internet, and I don't really know how to change it - but maybe surfacing what people are skipping would make it more likely that they actually read and engage with the content past the headline?
https://en.wikipedia.org/wiki/Subarachnoid_hemorrhage
https://en.wikipedia.org/wiki/Arachnoid_mater

it is one of the protective membranes around the brain and spinal cord, and it is named after its resemblance to spider webs, so - close enough
link to the actual study: https://www.nature.com/articles/s41591-025-04074-y
Tested alone, LLMs complete the scenarios accurately, correctly identifying conditions in 94.9% of cases and disposition in 56.3% on average. However, participants using the same LLMs identified relevant conditions in fewer than 34.5% of cases and disposition in fewer than 44.2%, both no better than the control group. We identify user interactions as a challenge to the deployment of LLMs for medical advice.
The finding was more that users were unable to use the LLMs effectively (even though the LLMs were competent when provided with the full information):
despite selecting three LLMs that were successful at identifying dispositions and conditions alone, we found that participants struggled to use them effectively.
Participants using LLMs consistently performed worse than when the LLMs were directly provided with the scenario and task
Overall, users often failed to provide the models with sufficient information to reach a correct recommendation. In 16 of 30 sampled interactions, initial messages contained only partial information (see Extended Data Table 1 for a transcript example). In 7 of these 16 interactions, users mentioned additional symptoms later, either in response to a question from the model or independently.
Participants employed a broad range of strategies when interacting with LLMs. Several users primarily asked closed-ended questions (for example, ‘Could this be related to stress?’), which constrained the possible responses from LLMs. When asked to justify their choices, two users appeared to have made decisions by anthropomorphizing LLMs and considering them human-like (for example, ‘the AI seemed pretty confident’). On the other hand, one user appeared to have deliberately withheld information that they later used to test the correctness of the conditions suggested by the model.
Part of what a doctor is able to do is recognize a patient's blind spots and critically analyze the situation. An LLM, on the other hand, responds based on the information it is given, and does not do well when users provide partial, insufficient, or misleading information. For example, if a patient speculates about potential causes, a doctor would know to dismiss incorrect guesses, whereas an LLM would constrain its responses based on those bad suggestions.
yeah, was going to say I think Matrix is the reasonable alternative
They're not the only person who thinks a PT Cruiser is poor taste ...
oh, I'm dumb then, oops 😝
I think a Hummer would give me the ick faster; I feel like a PT Cruiser is ugly and poor taste, but I would at least find out if, like, they inherited the car or someone gifted it to them before noping out, whereas a Hummer probably wouldn't get that level of benefit of the doubt - I would be looking for the quickest opportunity to nope.
I think I would also feel that way about Corvettes and certain other luxury cars. BMW is also a red flag. Lexus is borderline.
But yeah, I'm also not a typical person who actually went on dates or participated in much courting. At some point it was my goal to never have a romantic partner, and I see it as a fluke (even a failure) that someone found me anyway.
I'm a dirty liar