this post was submitted on 19 Mar 2024
299 points (97.8% liked)
Technology
This really just shines a light on a more significant underlying problem with scientific publication in general: there's simply way too much of it. "Publish or perish" creates enormous pressure to churn out papers whether they're good or not.
As an outsider with a PhD researcher in the family, I suspect it's an issue of how institutions measure success. How many papers? How many citations? Other metrics might work, but probably not as broadly. I assume they also care about the size of your staff, how much grant money you bring in, patents held, etc.
I suspect that, short of a Nobel Prize, it's difficult to objectively measure how someone is advancing scientific progress (without a PhD committee to evaluate it).
The saying "when a measure becomes a target it ceases to be a good measure" (Goodhart's Law) has been making the rounds online recently, this is a good example of that.
Ironically, this is a common problem faced when training AIs too.
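To make Goodhart's Law concrete, here's a minimal toy sketch (entirely hypothetical numbers, not real data): a researcher splits a fixed effort budget across some number of papers. "Total quality" is what we actually care about; "paper count" is the proxy metric institutions can measure. Maximizing the proxy pushes effort so thin that the quality produced collapses.

```python
# Toy model of Goodhart's Law: optimizing the measurable proxy
# (paper count) diverges from the real goal (total quality).
# All functions and constants here are illustrative assumptions.

EFFORT = 100.0  # fixed total effort budget per year

def quality(effort_per_paper):
    # Assumed quality curve: a paper needs a minimum investment (5 units)
    # before it contributes anything, with diminishing returns after that.
    return max(0.0, effort_per_paper - 5.0) ** 0.5

def evaluate(num_papers):
    # Split effort evenly and report (metric, true objective).
    per_paper = EFFORT / num_papers
    return num_papers, sum(quality(per_paper) for _ in range(num_papers))

for n in (2, 5, 20):
    papers, total_quality = evaluate(n)
    print(f"{papers:2d} papers -> total quality {total_quality:.1f}")
```

The metric says 20 papers beats 5, but the total quality at 20 rushed papers is zero: exactly the failure mode where a measure, once targeted, stops measuring anything useful. Reward hacking in AI training has the same shape, with the loss or reward function playing the role of the proxy.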