• 4 Posts
  • 6 Comments
Joined 2 years ago
Cake day: July 1st, 2023



  • I hear that OpenAI and ChatGPT have been smoked by the competition for a while now. I wouldn’t know; I only run open models locally on an old desktop computer.

    Here’s what the local DeepSeek distill of Qwen2.5-14B has to say about your question. It’s been a while since I had to do volume equations, but its answer seems about right; 7.89 cm sounds closer to the right answer than 7 meters anyway lol.

    Question: how high should I fill a pot that’s 22cm in diameter to get 3 liters?

    Answer: "Alright, let’s tackle this problem. The user wants to know how high they should fill a pot that’s 22 cm in diameter to get 3 liters of water.

    First, I need to understand what exactly is being asked here. It seems like they’re trying to determine the height of water when poured into a pot with a specific diameter, such that the volume of water is 3 liters.

    Okay, so the pot can be assumed to be cylindrical since pots are typically round and symmetrical. That makes sense because the diameter is given, and we can model it as a cylinder.

    Now, the formula for the volume of a cylinder is:

    V = πr²h

    where V is the volume, r is the radius, and h is the height.

    The user provided the diameter, which is 22 cm. Since the radius is half of the diameter, the radius r would be 11 cm.

    The volume they want is 3 liters. I know that 1 liter is equal to 1000 cubic centimeters (cm³), so 3 liters would be 3000 cm³.

    So, plugging in the values we have:

    3000 = π × 11² × h

    Calculating 11²:

    11² = 121

    So now the equation becomes:

    3000 = π × 121 × h

    To solve for h, we’ll divide both sides by π × 121:

    h = 3000 / (π × 121)

    Calculating the denominator first:

    π ≈ 3.1416

    3.1416 × 121 ≈ 380.13

    Now, dividing 3000 by 380.13:

    h = 3000 / 380.13 ≈ 7.89 cm

    So, the height of the water should be approximately 7.89 cm to get 3 liters.

    Since it’s not practical to measure 0.89 cm precisely, rounding up to 8 cm would ensure there’s enough water, but if precision is important, sticking with 7.89 cm is better.

    Therefore, the pot should be filled to about 7.89 cm high to hold exactly 3 liters."
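
    If you want to double-check that kind of answer without trusting the model, the same cylinder formula rearranged to h = V / (πr²) is only a few lines of Python (assuming a straight-sided cylindrical pot):

    ```python
    import math

    def fill_height_cm(volume_liters: float, diameter_cm: float) -> float:
        """Height to fill a cylindrical pot to reach the given volume."""
        volume_cm3 = volume_liters * 1000        # 1 L = 1000 cm³
        radius_cm = diameter_cm / 2
        return volume_cm3 / (math.pi * radius_cm ** 2)

    print(round(fill_height_cm(3, 22), 2))       # 7.89
    ```

    Same ≈7.89 cm the model landed on.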


  • They will never be as efficient or as certain as calculators, absolutely. Even with function calling, it’s better to just hand calculation off to an actual calculator (rough sketch of what I mean at the end of this comment). Just like they will never replace a definitive source for factual lookups, like WolframAlpha.

    It is surprising that despite not having built-in functions for calculation, they still seem to understand algebra concepts, try to work with numbers, and are getting more accurate over time just through training. A calculator can’t reason about what Gödel’s incompleteness theorem implies about the scientific method, but an LLM can try to simulate reasoning about high-level math concepts to generate potential relationships between pieces of information.

    They are surprisingly good at math operations requiring accuracy, considering how much worse they were at it a few years ago. Not sure if you’ve tried the newest models, but DeepSeek can really keep notation consistent across operations, which is a big win in itself.
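
    By “function calling” I just mean the model emits a request for a tool and your code actually runs it, so the arithmetic never depends on the model’s token-by-token math. A stripped-down, hypothetical sketch (the tool name and dispatch loop here are made up for illustration, not any particular framework’s API):

    ```python
    import math

    # Hypothetical tool table -- in a real setup these would be advertised
    # to the model through its tool/function schema.
    TOOLS = {
        "evaluate": lambda expr: eval(expr, {"__builtins__": {}}, vars(math)),
    }

    def handle_tool_call(call: dict) -> float:
        """Run a tool request the model emitted and return the exact result."""
        return TOOLS[call["name"]](call["argument"])

    # Pretend the model answered the pot question by emitting this call
    # instead of doing the division itself:
    model_call = {"name": "evaluate", "argument": "3000 / (pi * 11**2)"}
    print(handle_tool_call(model_call))   # ~7.892, computed by Python, not the LLM
    ```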



  • The lower-quant coding models in the 14B-32B range I’ve tried just can’t cook functioning code easily in general. Maybe if they distilled the coder version of Qwen 14B it might be a little better, but I doubt it. I think a really high-quant 70B model is more in range of cooking functioning code off the bat. It’s not really fair to compare a low-quant local model to o1 or Claude in the cloud; they are much bigger.

    Some people are into this not to have a productive tool but just because they think neural networks are rad. The study of how brains process and organize information is a cool thing to think about.

    So I ‘use’ it by asking questions to poke around at its domain knowledge: try to find holes, see how it handles not knowing things, and see how it reasons about what information might imply in an open-ended question or how it relates to something else. If I feel it’s strong enough with general knowledge and real-world reasoning problems, I consider trusting it as a rubber duck to bounce ideas off and request suggestions.

    DeepSeek feels to me like it’s aimed as a general experimental model that peels back how LLMs ‘think’. It examines how altering or extending an LLM’s ‘thought process’ changes its ability to figure out logic problems and make similar comparative examinations.

    I’ve gotten good tests out of asking a very domain-specific question and a follow-up:

    • How are strange attractors and Julia sets related?

    • Are they the same underlying process occurring in two different mediums, physical reality and logical abstraction?

    • What is the Collatz conjecture?

    • How does it apply to negative numbers?

    • How do determinist arguments against human free will, based on the predictability of human neurons firing, relate to AI statements about lacking the ability to generate consciousness or have experiences?

    • What is Barnsley’s collage conjecture? Explain it in an easy-to-understand way.

    • Does it imply every fractal structure has a related IFS equation?

    • What is Gödel’s incompleteness theorem?

    • What does it imply about scientific theories of everything?

    • Can fractal structures contain other fractals? Is the universe structured as a super fractal that contains all other fractals?

    These kinds of questions really grill an LLM’s exact knowledge level of scientific, mathematical, and philosophical concepts, as well as its ability to piece those concepts together into coherent context. Do human-like monologues and interjections of doubt actually add something to its ability to piece together coherent simulations of understanding, or are they just extra fluff? That’s an interesting question worth exploring.
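
    As an aside, the negative-numbers Collatz follow-up is one you can verify outside the model. A minimal sketch of the standard 3n+1 map, which is known to fall into small repeating cycles (like -1 → -2 → -1) when started from negative integers instead of ever reaching 1:

    ```python
    def collatz_cycle(n: int) -> list[int]:
        """Iterate the 3n+1 map from n until a value repeats; return the cycle."""
        seen, order = set(), []
        while n not in seen:
            seen.add(n)
            order.append(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return order[order.index(n):]          # the repeating part

    print(collatz_cycle(7))    # [4, 2, 1] -- the familiar positive cycle
    print(collatz_cycle(-7))   # [-7, -20, -10, -5, -14] -- one of the negative cycles
    ```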




  • Smokeydope@lemmy.world to Science Memes@mander.xyz · Percentages · edited 2 months ago

    Very weird fun fact about arrows/darts and statistics: there’s a 0% chance of hitting an exact bullseye. You can still hit it; it’s entirely possible to throw a perfect bullseye. It just has a probability of zero when analyzed mathematically, because the exact center is an infinitesimally small point. Sound like I’m making shit up? Here’s the sauce

    How can an outcome both be entirely possible and have 0% probability?
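
    A sketch of why, in one dimension (the linked source does it properly): for any continuous hit density f and exact target point x₀,

    ```latex
    P(X = x_0) \;=\; \lim_{\varepsilon \to 0} \int_{x_0 - \varepsilon}^{x_0 + \varepsilon} f(x)\, dx \;=\; 0
    ```

    The integral over a single point (a set of measure zero) vanishes, even though x₀ is a perfectly legitimate outcome.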

    Q.E.D.