QuentinCallaghan@sopuli.xyz to Political Memes@lemmy.ca, English · 1 day ago
Meanwhile at DeepSeek (sopuli.xyz) · 79 comments
474D@lemmy.world · 1 day ago
You can do it in LM Studio in like 5 clicks, I'm currently using it.
AtHeartEngineer@lemmy.world · 1 day ago
Running an uncensored DeepSeek model that doesn't perform significantly worse than the regular DeepSeek models? I know how to download and run models; I just haven't seen an uncensored DeepSeek model that performs as well as the baseline one.
474D@lemmy.world · 12 hours ago
I mean, obviously you need to run a lower-parameter model locally. That's not a fault of the model, it just doesn't have the same computational power behind it.
AtHeartEngineer@lemmy.world · 9 hours ago
In both cases I was talking about local models: DeepSeek-R1 32B parameters vs. an equivalent uncensored model from Hugging Face.
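For anyone following along with the LM Studio route mentioned above: once a model is loaded, LM Studio can expose a local OpenAI-compatible server (by default on port 1234), so you can script against whichever local model you picked. A minimal stdlib-only sketch, assuming the default port and using an illustrative model name (substitute whatever identifier LM Studio shows for your loaded model):

```python
import json
import urllib.request

def build_chat_request(model, prompt, base_url="http://localhost:1234/v1"):
    """Build an OpenAI-style chat-completion request aimed at a local LM Studio server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Model name below is illustrative, not a guaranteed identifier;
# use the name LM Studio displays for your locally loaded model.
req = build_chat_request("deepseek-r1-distill-qwen-32b", "Hello")
# urllib.request.urlopen(req) would send it once the local server is running.
```

This only builds the request; actually sending it requires LM Studio's server to be enabled, which is a toggle in its UI.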