I use TextgenWebui and sometimes Kobold. I can only run it with 4-bit quant enabled since I'm just short on VRAM to fully load the model.
Text gen runs a server you access through the web browser instead of a desktop app.
I haven't tried GPT4all.
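For anyone curious what 4-bit loading looks like outside the UI, here's a rough sketch using Hugging Face transformers with bitsandbytes. The model ID is just a placeholder for whatever you actually run, and Textgen's loaders handle all of this for you anyway.

```python
# Rough sketch of loading a model in 4-bit when it won't fit in VRAM at full precision.
# Assumes transformers + bitsandbytes are installed; the model ID is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "some-org/some-7b-model"  # placeholder, swap in your model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4-bit on load
    bnb_4bit_compute_dtype=torch.float16,  # do the math in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spill layers to CPU if the GPU still runs out
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```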
I've never had any luck with most of the extensions, let alone figuring out how to format a prompt for the API. I'm just making shit up as I go.
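For the API bit, in case anyone else is stuck: text-generation-webui exposes an OpenAI-compatible endpoint when you launch it with the --api flag. This is just a minimal sketch from memory, so the port and payload details may differ on your setup.

```python
# Minimal sketch of hitting text-generation-webui's OpenAI-compatible API.
# Assumes the server was started with --api and listens on the default port;
# adjust the URL and parameters to match your setup.
import requests

url = "http://127.0.0.1:5000/v1/chat/completions"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain 4-bit quantization in one sentence."},
    ],
    "max_tokens": 200,
    "temperature": 0.7,
}

response = requests.post(url, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```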
Check out the vladmandic fork of auto1111. It seems to be much quicker with new model support.
Been wanting to try voice cloning and totally not cobble together a DIY AI waifu.
I've been playing with the Mistral 7B models. They're about the most my hardware can reasonably run... so far. Would love to add vision and voice, but I'm just happy it can run.
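If your hardware is in the same boat, a GGUF quant with llama-cpp-python is one way to run a 7B with only partial GPU offload. Sketch only; the file path and layer count are placeholders for whatever fits your card.

```python
# Rough sketch of running a quantized Mistral 7B GGUF with llama-cpp-python,
# offloading only as many layers to the GPU as fit. Path and layer count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=20,  # lower if you run out of VRAM, raise if you have headroom
    n_ctx=4096,       # context window
)

out = llm(
    "[INST] Give me one fun fact about GPUs. [/INST]",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```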
I can't wait till someone does this, but open source and running on non-billionaire hardware.
I can't wait for my GPU to not be at 70°C running Firefox.
Perfect. I can't tell a difference.
I just expect it to insult the user while not answering the question.
They just pay up and do it again. It's a business expense, not a punishment.
From what I've seen, I think I could win a fight against the guy just by tipping him over.
There's a little bit; I think it's mostly Sony fans.
No problem. It's fairly easy to figure out in a few minutes if you know Auto1111. Getting your model to actually load may need some tweaking, but I managed it with trial and error.