Elon Musk Dragged After His Own Chatbot Admits He's A 'Significant Spreader' Of Misinformation
(www.comicsands.com)
One of the reasons I love StarCoder, even for non-coding tasks. Being trained only on GitHub means none of the "instruction finetuning" bullshit ChatGPT-speak.
People still run, or even continue pretraining, llama2 for that reason, since its training data is pre-slop.
I really wish it were easier to fine-tune and run inference on GPT-J-6B as well... that was a gem of a base model for research purposes, and for a hot minute circa Dolly there were finally some signs it would become more feasible to run locally. But all the effort going into llama.cpp and GGUF kinda left GPT-J behind. GPT4All used to support it, I think, but last I checked the documentation had huge holes as to how exactly that's done.
Still perfectly runnable in kobold.cpp. There was a whole community built up around it with Pygmalion.
It is as dumb as dirt though. IMO that is going back too far.