Seeing double posts is IMO not frequent enough to require mechanisms to fix it (and I can't even imagine a built-in mechanism against it).
c/greentext should be blocked because it's full of annoying fake stories, though.
It is quite literally named the “land of the blacks”; after all, that is what Egypt means.
Egypt is from Greek and definitely doesn't mean that. The Egyptian endonym was kmt (traditionally pronounced "kemet"), which is interpreted as "black land": km means "black" and -t is a nominal suffix, so it might be rendered as "black-ness", not at all "quite literally land of the blacks". It most likely refers to the fertile black soil around the Nile.

The "land of the blacks" reading should be suspicious on its face, because people would hardly name themselves after their most ordinary physical characteristic. The Egyptians might have called themselves black only if they were surrounded by non-black people and could view that as their own special characteristic, but they certainly neighboured and had contact with black peoples. And either way, one has to wonder whether ancient views of white and black skin were meaningfully comparable to modern Western ones. The fertile black soil, on the other hand, most certainly is a differentia specifica of the settled Egyptian land surrounded by desert.
More screenshots are here: https://xcancel.com/p9cker_girl/status/1844203626681794716
What I find odd is that the message they actually left on the site has nothing to do with Palestine; it's just a childish "lol btfo" sort of message. So I wouldn't be surprised if these guys aren't the ones who actually did it, and it's merely a false flag to make pro-Palestinian protesters look like idiotic assholes.
I don't get the impression you've ever made any substantial contributions to Wikipedia, and thus you have misguided ideas about what would actually be helpful to the editors and conducive to producing better articles. Your proposal about translations is especially telling, because machine-assisted translation (via built-in tools) existed on WP long before the recent explosion of LLMs.
In short, your proposals either: 1. already exist, 2. would still risk distortion, oversimplification, made-up bullshit and feedback loops, 3. are likely very complex and expensive to build, or 4. are straight up impossible.
Good WP articles are written by people who have actually read some scholarly articles on the subject, including those that aren't easily available online (so LLMs are massively stunted by default). Having an LLM re-write a "poorly worded" article would at best be like polishing a turd (poorly worded articles are usually written by people who don't know much about the subject in the first place, so there's not much material for the LLM to actually improve), and more likely it would introduce a ton of biases on its own (as well as the usual asinine writing style).
Thankfully, as far as I've seen the WP community is generally skeptical of AI tools, so I don't expect such nonsense to have much of an influence on the site.
As far as Wikipedia is concerned, there is pretty much no way to use LLMs correctly, because virtually every major model includes Wikipedia in its training dataset, and using WP to improve WP is... not a good idea. It shouldn't require an essay to explain why creating and mechanising a loop of bias in an encyclopedia is bad.
It has custom user-made themes, many of which are dark, so it probably has dozens of dark modes.
That might depend on where you live, but generally no, I think.
8 hours of work, 8 hours of sleep, 48 hours of Family Guy
FYI, there are multiple methods to download "digitally loaned" books off IA; guides exist on Reddit. The public domain stuff is safe, but works still under copyright yet unavailable by other means (Libgen/Anna's Archive, or even normal physical copies) should definitely be ripped and uploaded to LG.
The method I use, which yields the best images, is to "loan" the book, zoom in so the highest-resolution tiles load, and then leaf through the whole book, periodically extracting the full images from your browser cache (with e.g. MZCacheView). This should probably be automated, but I have yet to find a method other than writing e.g. an AutoHotkey script. Once everything is downloaded, the images can easily be post-processed (if the book has no coloured illustrations, it's ideal to convert all pages to 1-bit black-and-white PNG) and bundled into a PDF with a PDF editor (I use PDF-XChange Editor). I also like running OCR, adding bookmarks/outline, and adding special page numbering where needed; that can take a while and only makes the file easier to handle, so it's not strictly necessary. Then the book can be uploaded to proper pirate sites and hopefully live on freely forever. There are also other methods you can find online, on Reddit, etc.
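The post-processing step above (convert pages to 1-bit black-and-white, bundle into a PDF) can be sketched in a few lines of Python with Pillow. This is just one possible approach, not the exact workflow described; the folder name and file pattern are assumptions.

```python
# Hypothetical sketch: turn a folder of extracted page images into one PDF.
# Assumes Pillow is installed; "extracted_pages" is a made-up folder name.
from pathlib import Path
from PIL import Image

def pages_to_pdf(image_paths, out_pdf):
    """Convert each page to 1-bit black-and-white and save all as a single PDF."""
    pages = [Image.open(p).convert("1") for p in image_paths]
    # Pillow writes a multi-page PDF via save_all + append_images.
    pages[0].save(out_pdf, save_all=True, append_images=pages[1:])

if __name__ == "__main__":
    imgs = sorted(Path("extracted_pages").glob("*.jpg"))  # hypothetical location
    if imgs:
        pages_to_pdf(imgs, "book.pdf")
```

A dedicated tool like img2pdf or a GUI editor will give more control (compression, page size), but for plain scanned text this is often enough.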
Produce infinite copies of bread loaves, and then get arrested because the baker lobby doesn't like that.
You can (well, could) put any live URL in there and IA would take a snapshot of the current page at your request. They also actively crawl the web and take new snapshots on their own. All of that counts as 'writing' to the database.
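The on-request half of that works through the Wayback Machine's "Save Page Now" endpoint: fetching `https://web.archive.org/save/<url>` asks it to snapshot that page. A minimal sketch that only builds the request URL (no network call shown):

```python
# Sketch of the Wayback Machine's on-demand snapshot request.
# The /save/ endpoint is real; sending a GET to the resulting URL
# (while the service is accepting requests) triggers the snapshot.
def save_url(target: str) -> str:
    """Build the Save Page Now URL for a given page."""
    return "https://web.archive.org/save/" + target

print(save_url("https://example.com"))
```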