I've been playing around with the DeepSeek R1 distills, the Qwen 14B and 32B ones specifically.

So far it's very cool to see models really going after the current CoT meta by mimicking an internal thinking monologue. Seeing a model go "but wait…", "Hold on, let me check again…", "Aha! So…" kind of makes it feel more natural in its eventual conclusions.

I don't like how it can get caught in looping thought processes, and I'm not sure how much all the extra tokens spent really go toward a "better" answer/solution.

What really needs to be ironed out is the reading comprehension, which seems lower than average: it misses small details in tricky questions and makes assumptions about what you're trying to ask. For example, ask for a coconut oil cookie recipe and it only registers "coconut", giving you a coconut cookie recipe made with regular butter.

It's exciting to see models operate in a kind of new way.

  • Smokeydope@lemmy.world (OP) · 6 days ago
    The lower-quant coding models in the 14B-32B range I've tried just can't cook up functioning code easily in general. Maybe if they distilled the coder version of Qwen 14B it might be a little better, but I doubt it. I think a really high-quant 70B model is more in the range of producing functioning code off the bat. It's not really fair to compare a low-quant local model to o1 or Claude in the cloud; those models are much bigger.

    Some people are into this not to have a productive tool but just because they think neural networks are rad. The study of how brains process and organize information is a cool thing to think about.

    So I 'use' it by asking questions to poke around at its domain knowledge. I try to find holes, see how it handles not knowing things, and watch how it reasons about what some piece of information might imply in an open-ended question or how it relates to something else. If it feels strong enough on general knowledge and real-world reasoning problems, I consider trusting it as a rubber duck to bounce ideas off and request suggestions from.
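    For what it's worth, this kind of probing is easy to script instead of typing each question by hand. Below is a minimal sketch under some assumptions about the local setup: an OpenAI-compatible chat endpoint on localhost (the kind llama.cpp's server or Ollama expose), a hypothetical model name, and the `<think>…</think>` wrapping the R1 distills use for their monologue. Adjust URL and model name to whatever your server actually serves.

    ```python
    import requests

    # Probe questions to throw at a locally served R1 distill.
    PROBES = [
        "How are strange attractors and Julia sets related?",
        "What is the Collatz conjecture, and how does it apply to negative numbers?",
    ]

    for question in PROBES:
        # Assumed OpenAI-compatible endpoint and model name; change to match your setup.
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={
                "model": "deepseek-r1-distill-qwen-14b",
                "messages": [{"role": "user", "content": question}],
                "temperature": 0.6,
            },
            timeout=600,
        )
        answer = resp.json()["choices"][0]["message"]["content"]

        # R1 distills usually emit their monologue inside <think>...</think>;
        # split it off so the final answer can be read on its own.
        _thoughts, sep, final = answer.partition("</think>")
        print("Q:", question)
        print("A:", (final if sep else answer).strip())
        print("---")
    ```

    Reading the monologue and the final answer separately makes it easier to judge whether the "but wait…" detours actually contributed anything.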

    DeepSeek feels to me like it's aimed as a general experimental model that peels back how LLMs 'think'. It examines how altering or extending an LLM's 'thought process' changes its ability to work through logic problems and similar comparative reasoning tasks.

    I've gotten good tests out of asking a very domain-specific question and a follow-up:

    • How are strange attractors and Julia sets related?

    • Are they the same underlying process occurring in the two different mediums of physical reality and logical abstraction?

    • What is the Collatz conjecture?

    • How does it apply to negative numbers?

    • How do determinists' arguments against human free will, based on the predictability of human neurons firing, relate to AI statements about lacking the ability to generate consciousness or have experiences?

    • What is Barnsley's collage conjecture? Explain it in an easy-to-understand way.

    • Does it imply every fractal structure has a related IFS equation?

    • What is Gödel’s incompleteness theorem?

    • What does it imply about scientific theories of everything?

    • Can fractal structures contain other fractals? Is the universe structured as a super-fractal that contains all other fractals?

    These kinds of questions really grill an LLM's exact knowledge level of scientific, mathematical, and philosophical concepts, as well as its ability to piece those concepts together into coherent context. Do human-like monologues and interjections of doubt actually add something to its ability to piece together coherent simulations of understanding, or are they just extra fluff? That's an interesting question worth exploring.