RonSijm

joined 1 year ago
[–] RonSijm@programming.dev 1 points 9 months ago (4 children)

Well, I have Copilot Pro, but I was mainly talking about GitHub Copilot. I don't think having Copilot Pro really affects Copilot performance.

I mainly use AI for programming (both for my own work and inside an AI-powered product I'm building) - so I don't know what you intend to use AI for, but outside the context of programming I can't really speak to their performance.

And I think Copilot Pro just gives you Copilot inside Office, right? And more image generations per day? I can't really say I've used that. For image generation I'm either using the OpenAI API again (DALL-E 3), or I'm using Replicate (mostly SDXL).

[–] RonSijm@programming.dev 10 points 9 months ago (3 children)

This model is being released under a non-commercial license that permits non-commercial use only.

Hmm, I wonder whether this means that the model can't be run under replicate.com or mage.space.

Is it commercial use if you have to pay for credits/monthly for the machines that the models are running on?

Like, is "selling the models as a service" commercial use, or does it mean the output of the models can't be used commercially?

[–] RonSijm@programming.dev 5 points 9 months ago* (last edited 9 months ago) (8 children)

I use Copilot, but dislike it for coding. The "place a comment and Copilot will fill it in" feature barely works and is mostly annoying. It works for common stuff like "// write a function to invert a string" that you'd see in demos - just common functions you'd otherwise copy-paste from StackOverflow. But otherwise it doesn't really understand when you want to modify something, so I've already turned that feature off.

The chat is semi-decent, but the "it understands the entire file you have open" concept also only works about half the time - the other half it responds with something irrelevant, because it didn't receive the code / method your question was based on.

I opted to just use the OpenAI API directly, and created a Slack bot I can chat with (a Slack thread works the same as a "ChatGPT context window"; new messages in the main channel start new chat contexts) - so far that still works best for me.
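The thread-to-context mapping can be sketched roughly like this - a minimal, hypothetical version where `ThreadContexts` and `build_messages` are my own names, and the real bot would send the returned payload to the OpenAI chat API:

```python
# Sketch: map Slack threads to chat contexts. Each thread_ts keys its own
# message history; a new top-level message would start a fresh context.
# (Names and the example thread_ts are illustrative, not from a real bot.)

SYSTEM_PROMPT = "You are a helpful programming assistant."

class ThreadContexts:
    def __init__(self):
        self._histories = {}  # thread_ts -> list of chat messages

    def build_messages(self, thread_ts, user_text):
        """Append the user's message and return the full OpenAI-style payload."""
        history = self._histories.setdefault(
            thread_ts, [{"role": "system", "content": SYSTEM_PROMPT}]
        )
        history.append({"role": "user", "content": user_text})
        return list(history)

    def record_reply(self, thread_ts, assistant_text):
        """Store the model's answer so follow-ups in the thread see it."""
        self._histories[thread_ts].append(
            {"role": "assistant", "content": assistant_text}
        )

contexts = ThreadContexts()
# First message in a thread starts a context...
payload = contexts.build_messages("1700000000.0001", "How do I invert a string in C#?")
contexts.record_reply("1700000000.0001", "Use new string(s.Reverse().ToArray()).")
# ...and a follow-up in the same thread carries the history along.
payload = contexts.build_messages("1700000000.0001", "And without LINQ?")
```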

You can create specific slash-commands that preface questions if you like - "/askcsharp" in Slack would preface the question with something like "You are an assistant that provides C# based answers. Use var for variables, xunit and fluentassertions for tests".
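The prefacing itself is just a lookup from command to system prompt - a sketch, where the command table and prompt wording are only examples:

```python
# Sketch: per-command system prompts, as described above.
# Command names and prompt text are illustrative examples.
COMMAND_PREFACES = {
    "/askcsharp": ("You are an assistant that provides C# based answers. "
                   "Use var for variables, xunit and fluentassertions for tests."),
    "/asksql": "You are an assistant that answers with PostgreSQL-flavored SQL.",
}

def preface_question(command, question, default="You are a helpful assistant."):
    """Turn a slash command + question into an OpenAI-style message list."""
    system = COMMAND_PREFACES.get(command, default)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = preface_question("/askcsharp", "How do I reverse a string?")
```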

If you want to be really fancy you can even just vectorize your codebase, store it in Pinecone or PGVector, and have an "entire codebase aware AI"
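The shape of that is: embed each file, store the vectors, and at question time retrieve the most similar snippets to prepend to the prompt. A toy sketch - in a real setup you'd use an actual embedding model and Pinecone or PGVector, whereas here a bag-of-words "embedding" and an in-memory list stand in for both:

```python
# Sketch of "codebase-aware" retrieval. The embed() function is a toy
# stand-in for a real embedding model; the index list stands in for
# Pinecone / PGVector. File names and snippets are made up.
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" the codebase: (path, snippet, vector) triples.
codebase = {
    "StringUtils.cs": "public static string Invert(string input) reverse string helper",
    "UserService.cs": "public class UserService handles user login and registration",
}
index = [(path, text, embed(text)) for path, text in codebase.items()]

def top_snippets(question, k=1):
    """Return the k snippets most similar to the question, to prepend to the prompt."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[2]), reverse=True)
    return [(path, text) for path, text, _ in ranked[:k]]

hits = top_snippets("how does user login work")
```

The retrieved snippets then get stuffed into the system or user message before the actual question, which is what makes the bot feel "codebase aware".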

It takes a bit of time to custom-build something, but these AIs are basically tools, and a custom-built tool for your specific purpose is probably going to outperform a generic version.

[–] RonSijm@programming.dev 4 points 10 months ago

There's a user-made OpenAPI spec: https://github.com/MV-GH/lemmy_openapi_spec - you probably mean that one.

I've had similar issues to the ones you mentioned, which the dev did fix - but yeah, TypeScript has less precision than Rust (the source) or the OpenAPI spec. And the TypeScript client is built for Lemmy-JS, not built as an example for client libraries in other languages...

Though OpenAPI documents in C# and Java are generated via reflection over the source itself, and Rust doesn't have runtime reflection like that... So it's probably difficult for them to add without manually maintaining the OpenAPI spec.
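To illustrate why reflection makes this easy: a generator just walks a type's metadata at runtime and emits a schema. Python has similar introspection, so a deliberately tiny sketch (the `Comment` model and `TYPE_MAP` are hypothetical, not Lemmy's) looks like this - something Rust can't do at runtime, which is why its specs end up hand-maintained or macro-generated:

```python
# Illustration of reflection-driven schema generation: walk a type's
# field metadata and emit an OpenAPI-style object schema.
# (The Comment model and the type mapping are made-up examples.)
from dataclasses import dataclass, fields

TYPE_MAP = {int: "integer", str: "string", bool: "boolean"}

@dataclass
class Comment:
    id: int
    content: str
    deleted: bool

def to_openapi_schema(cls):
    """Reflect over a dataclass and emit an OpenAPI-style object schema."""
    return {
        "type": "object",
        "properties": {
            f.name: {"type": TYPE_MAP[f.type]} for f in fields(cls)
        },
    }

schema = to_openapi_schema(Comment)
```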
