Inference costs are very, very low. You can run Mistral Small 24B finetunes that are better than GPT-4o and actually quite usable on your own local machine.
As for training costs, Meta’s Llama team offsets its emissions through environmental programs, which is greener than 99.9% of the companies making the products you use.
TL;DR: don’t use ClosedAI; use Mistral or other FOSS projects.
Self-hosting models is probably the best alternative to ChatGPT.
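If you want to try it yourself, here's a minimal sketch using llama-cpp-python with a quantized GGUF build. The model filename is just a placeholder for whatever Mistral Small 24B finetune you download; quantization is what keeps a 24B model usable on consumer hardware.

```python
# Minimal local inference sketch (pip install llama-cpp-python).
# Assumes you've already downloaded a quantized GGUF of a Mistral Small 24B finetune;
# the model path below is a placeholder, not a file that ships with the library.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-small-24b-finetune-q4_k_m.gguf",  # placeholder path
    n_ctx=8192,        # context window; lower this if you run out of RAM
    n_gpu_layers=-1,   # offload all layers to GPU if you have one; set 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize why local inference is cheap."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```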