I’ve been playing around with the DeepSeek R1 distills, the Qwen 14B and 32B ones specifically.

So far it’s very cool to see models really going after the current CoT meta by mimicking internal thinking monologues. Seeing a model go “but wait…”, “Hold on, let me check again…”, “Aha! So…” kind of makes its eventual conclusions feel more natural.

I don’t like how it can get caught in looping thought processes, and I’m not sure how much all the extra tokens spent really go toward a “better” answer/solution.

What really needs to be ironed out is the reading comprehension, which seems lower than average: it misses small details in tricky questions and makes assumptions about what you’re trying to ask. For example, ask it for a coconut oil cookie recipe and it only sees “coconut”, so it gives you a coconut cookie recipe made with regular butter.

It’s exciting to see models operate in a new kind of way.

  • GenderNeutralBro@lemmy.sdf.org · 6 days ago

    I’m not entirely sure how to use these models effectively, I guess. I tried some basic coding prompts, and the results were very bad, using R1 Distill Qwen 32B at a 4-bit quant.

    The first answer had incorrect, non-runnable syntax. I was able to get it to fix that after multiple follow-up prompts, but I was NOT able to get it to fix the bugs. It took several minutes of thinking time for each prompt, and it gave me worse answers than the stock Qwen model.

    For comparison, GPT-4o and Claude 3.5 Sonnet gave me code that would at least run on the first shot. 4o’s was even functional in one shot (Sonnet’s was close but had bugs). And that took just a few seconds instead of 10+ minutes.

    Looking over its chain of thought, I see it getting caught in circles, just restating the same points again and again.

    Not sure exactly what the use case is for this. For coding, it seems worse than useless.

    • ikt@aussie.zone · 6 days ago

      That’s interesting. In GPT4All they have the Qwen reasoner v1, and it will run the code in a sandbox (for JavaScript, anyway); if it errors, it will fix itself.

        • ikt@aussie.zone · 6 days ago

          https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute

          This release introduces the GPT4All Javascript Sandbox, a secure and isolated environment for executing code tool calls. When using Reasoning models equipped with Code Interpreter capabilities, all code runs safely in this sandbox, ensuring user security and multi-platform compatibility.


          I use LM Studio as well, but between this and LM Studio’s bug where LLMs larger than 8B won’t load, I’ve gone back to GPT4All.
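
          The loop it describes is basically “generate → run → feed the error back”. Here’s a minimal Python sketch of that idea, not GPT4All’s actual implementation: ask_model is a hypothetical stand-in for whatever LLM call you have, and plain node stands in for the real isolated sandbox.

              import subprocess
              import tempfile

              def run_js(code: str) -> tuple[bool, str]:
                  # Run the generated JavaScript with node. Plain node is just a
                  # stand-in; the real GPT4All sandbox is isolated, this is not.
                  with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
                      f.write(code)
                      path = f.name
                  result = subprocess.run(["node", path], capture_output=True,
                                          text=True, timeout=10)
                  ok = result.returncode == 0
                  return ok, (result.stdout if ok else result.stderr)

              def generate_with_retries(ask_model, task: str, max_attempts: int = 3):
                  # ask_model(prompt) -> str is a hypothetical LLM call you supply.
                  prompt = f"Write JavaScript that {task}. Reply with code only."
                  for _ in range(max_attempts):
                      code = ask_model(prompt)
                      ok, output = run_js(code)
                      if ok:
                          return code
                      # Feed the error back so the model can fix its own mistake.
                      prompt = (f"This code failed:\n{code}\n\nError:\n{output}\n"
                                "Fix it. Reply with code only.")
                  return None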

    • Smokeydope@lemmy.world (OP) · edited · 6 days ago

      The lower-quant coding models in the 14b-32b range I’ve tried just can’t cook functioning code easily in general. Maybe if they distilled the coder version of Qwen 14B it might be a little better, but I doubt it. I think a really high-quant 70B model is more in range of cooking functioning code off the bat. It’s not really fair to compare a low-quant local model to o1 or Claude in the cloud; they are much bigger.

      Some people are into this not to have a productive tool but just because they think neural networks are rad. The study of how brains process and organize information is a cool thing to think about.

      So I ‘use’ it by asking questions that poke around at its domain knowledge. I try to find holes, see how it handles not knowing things, and see how it reasons about what information might imply in an open-ended question or how it relates to something else. If I feel it’s strong enough with general knowledge and real-world reasoning problems, I consider trusting it as a rubber duck to bounce ideas off of and request suggestions from.

      DeepSeek feels to me like it’s aimed at being a general experimental model that peels back how LLMs ‘think’. It examines how altering or extending an LLM’s ‘thought process’ changes its ability to work through logic problems and similar comparative reasoning tasks.

      I’ve gotten good tests out of asking a very domain-specific question and a follow-up:

      • How are strange attractors and Julia sets related?

      • Are they the same underlying process occurring in the two different mediums of physical reality and logical abstraction?

      • What is the Collatz conjecture?

      • How does it apply to negative numbers? (There’s a quick sketch of this one after the list.)

      • How do determinists’ arguments against human free will, based on the predictability of human neurons firing, relate to AI statements about lacking the ability to generate consciousness or have experiences?

      • What is Barnsley’s collage theorem? Explain it in an easy-to-understand way.

      • Does it imply every fractal structure has a related IFS equation?

      • What is Gödel’s incompleteness theorem?

      • What does it imply about scientific theories of everything?

      • Can fractal structures contain other fractals? Is the universe structured as a super fractal that contains all other fractals?
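
      The negative-number follow-up is a nice one because the answer is cheap to check yourself: under the same 3n+1 rule, negative integers fall into short cycles that never show up on the positive side (as far as anyone has verified), e.g. -5 → -14 → -7 → -20 → -10 → -5. A minimal Python sketch for verifying whatever the model claims:

          def collatz_step(n: int) -> int:
              # One step of the 3n+1 map; the rule extends to negatives as-is.
              return n // 2 if n % 2 == 0 else 3 * n + 1

          def orbit(n: int, max_steps: int = 1000) -> list[int]:
              # Iterate until a value repeats (we entered a cycle) or give up.
              seen, seq = set(), []
              while n not in seen and len(seq) < max_steps:
                  seen.add(n)
                  seq.append(n)
                  n = collatz_step(n)
              seq.append(n)  # the repeated value, closing the cycle
              return seq

          print(orbit(-5))  # [-5, -14, -7, -20, -10, -5]: a cycle never seen on positives
          print(orbit(7))   # ends in the familiar 4 -> 2 -> 1 loop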

      These kinds of questions really grill an LLM’s exact knowledge level of scientific, mathematical, and philosophical concepts, as well as its ability to piece those concepts together into a coherent context. Do human-like monologues and interjections of doubt actually add something to its ability to piece together coherent simulations of understanding, or are they just extra fluff? That’s an interesting question worth exploring.