  • Thinking a bit outside the box: if your phone is capable of it, you could run a small local LLM directly on the phone. It might even be doable in Termux.
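
    A minimal sketch of what the on-device route could look like, assuming the llama-cpp-python package (it compiles llama.cpp from source, so in Termux you’d likely need `pkg install python clang cmake` first) and a small GGUF model you’ve already downloaded; the model filename is just a placeholder:

    ```python
    from llama_cpp import Llama

    # Placeholder path: any small instruction-tuned GGUF model should work.
    llm = Llama(model_path="qwen2-0.5b-instruct-q4_k_m.gguf", n_ctx=2048)

    out = llm.create_chat_completion(
        messages=[
            {"role": "system",
             "content": "Translate the user's text to German. Output only the translation."},
            {"role": "user", "content": "Where is the nearest pharmacy?"},
        ],
        max_tokens=128,  # keep responses short to save battery and time
    )
    print(out["choices"][0]["message"]["content"])
    ```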

    If that’s not an option, and/or you need a bigger, more capable model, you could host a local Ollama instance and connect to it from the Ollama app (IzzyOnDroid) or GPTMobile (F-Droid). That way you only ever connect to your own machine instead of some third-party translation or LLM provider.
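
    As a rough sketch of what that setup does under the hood, here is a call to Ollama’s REST API (the same one those apps talk to); the LAN address and model name are placeholders for your own server and whatever you’ve pulled with `ollama pull`:

    ```python
    import requests

    # Placeholder LAN address of the machine running `ollama serve`.
    OLLAMA_URL = "http://192.168.1.10:11434/api/generate"

    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3.2",  # placeholder: any model you have pulled locally
        "prompt": "Translate to German: Where is the train station?",
        "stream": False,      # return one JSON object instead of a token stream
    }, timeout=120)
    resp.raise_for_status()
    print(resp.json()["response"])
    ```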

    I think that, with a well-written system prompt, you could make it more efficient: concisely instruct it to expect your text input plus a target language (or bake a fixed language into the system prompt permanently), and to output only the translated version of your input. That keeps the input+output token count low, saving inference time. You can also get creative and instruct it to output multiple variations, change the style/tone/formatting, provide an example sentence containing a single translated word, etc…
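
    To make that concrete, here is one way such a system prompt could look, again against a local Ollama instance via its chat endpoint; the prompt wording, address, and model are all just examples, not a recipe:

    ```python
    import requests

    # Example system prompt: fix the behavior once, so each request only
    # carries the target language and the text itself.
    SYSTEM_PROMPT = (
        "You are a translation engine. The user sends a target language on "
        "the first line and the text to translate below it. Reply with ONLY "
        "the translation: no explanations, no quotes, no extra formatting."
    )

    resp = requests.post("http://192.168.1.10:11434/api/chat", json={
        "model": "llama3.2",
        "stream": False,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "German\nI'm running ten minutes late."},
        ],
    }, timeout=120)
    print(resp.json()["message"]["content"])
    ```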