I’m sick and tired of the Capitalist religion and all its fanatical believers. The Western right-wing population is the most propagandized and harmful on Earth.

  • 1 Post
  • 15 Comments
Joined 2 years ago
Cake day: June 8th, 2023

  • As others say, it can be done. If you want more normal oomph, you’ll need to mount parts of the filesystem on your SSD. You can mount /home or / on the SSD, have an overlay filesystem backed by a file on an SSD/HDD, use bcachefs with writeback caching toward the USB device, or similar fancy setups.

    So you’ll boot the Linux kernel from the USB stick, but most disk activity will land on your SSD. Fun project, but not super easy/practical if it isn’t automated.
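    A minimal sketch of the /home-on-SSD variant, assuming two ext4 partitions; the UUIDs are placeholders for your own values (from `blkid`):

    ```
    # /etc/fstab on the USB root - UUIDs below are placeholders
    UUID=<usb-root-uuid>  /      ext4  defaults,noatime  0 1   # the USB stick
    UUID=<ssd-home-uuid>  /home  ext4  defaults,noatime  0 2   # the internal SSD
    ```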

    My old HP MicroServer is ‘made’ to boot from a USB stick inserted on the motherboard.

    Anyway, perhaps an AI can suggest a script to do what you want?





  • It can be a bit difficult with these ‘what if we remove this fundamental force’ questions, because the forces are so fundamental that removing one screws up further reasoning about the situation, but:

    Assuming bonds in a body just ‘disappeared’ by magic: instant decompression would happen at the molecular level.

    There would be no puddle or even visible dust. All molecules in our body would instantly be deconstructed into individual atoms - mostly hydrogen and oxygen - in effect turning into a compressed gas, and I’d guess the lighter elements would ‘boil off’ so fast that our whole body of compressed gas would explode rather violently.
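    A rough back-of-envelope supports that, assuming the body is ~70 kg of water at body temperature and the freed atoms behave as an ideal gas confined to the original body volume (all numbers here are coarse assumptions):

    ```python
    # Back-of-envelope: pressure if ~70 kg of body water dissociates
    # into free H and O atoms still confined to the body's volume.
    R = 8.314                    # J/(mol*K), gas constant
    mol_h2o = 70_000 / 18.0      # ~3,900 mol of H2O in a 70 kg body
    mol_atoms = mol_h2o * 3      # each H2O -> 2 H + 1 O atoms
    T = 310.0                    # K, body temperature
    V = 0.07                     # m^3, rough human body volume

    p = mol_atoms * R * T / V    # ideal gas law, p = nRT/V
    print(f"{p:.1e} Pa (~{p / 101_325:,.0f} atm)")  # ~4.3e8 Pa, ~4,200 atm
    ```

    Thousands of atmospheres of sudden gas pressure, so ‘rather violently’ seems fair.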


  • Yes, you are leaking data, but don’t panic. First of all, your mental health here and now is important - without it you won’t have energy for other things. Next, it takes a lot of energy to de-Google or de-corp, and while you don’t want to ‘leak’ now, in 6 months you’ll have your own private/FOSS talking AI assistant, and it will help you cut the ties to the last corporation then.

    So, soon you’ll be more ‘invisible’ to the corps, and maybe you can live with the spying/manipulation for a moment longer? Not sure how long it takes for their AI to find you anyway, but at least they have to work for it…

    Alternatively, get a free account at Groq (they also host ‘whisper’ STT) or SambaNova, and install/use open-webui for talking. These new hardware corps don’t train AI on free user interactions, and they probably don’t sell your information - yet. There are other methods for p2p sharing of AI resources, but they may not provide high enough quality or all modalities.
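    For the Groq route, a minimal sketch (assumes the `openai` Python package; the model id is an example and may have been rotated out, so check Groq’s model list):

    ```python
    # Talk to a free Groq account via its OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible API
        api_key="YOUR_GROQ_API_KEY",                # from the free account
    )

    reply = client.chat.completions.create(
        model="llama-3.1-8b-instant",               # example model id
        messages=[{"role": "user", "content": "Hello, de-googled world!"}],
    )
    print(reply.choices[0].message.content)
    ```

    open-webui can be pointed at the same base URL, so you get a chat/voice UI on top.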



  • Same here without VPN (re: “Is freetube broken again?” on Privacy@lemmy.ml). Getting by using the embedded YT player, but it’s not optimal. It seems FreeTube users could win if we had a common cache like IPFS where watched videos are stored: if one user gets the whole video, everyone has access to it via IPFS. There would still be trouble with rare videos/first views, but YT would probably not block an IP if most YT videos were loaded from IPFS instead?

    Just a quick thought.
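    A hand-wavy sketch of the idea, assuming a local IPFS daemon, the `ipfshttpclient` package, and a shared index mapping YT video ids to IPFS CIDs - that index is the hypothetical/unsolved part, and `download_from_youtube` is a placeholder:

    ```python
    # Hypothetical shared FreeTube cache on IPFS.
    import ipfshttpclient

    def fetch_video(video_id: str, index: dict[str, str]) -> bytes:
        client = ipfshttpclient.connect()           # local IPFS daemon
        cid = index.get(video_id)
        if cid:                                     # someone already cached it
            return client.cat(cid)                  # fetch from the swarm, not YT
        data = download_from_youtube(video_id)      # placeholder: first viewer hits YT
        index[video_id] = client.add_bytes(data)    # publish for everyone else
        return data
    ```

    The real work would be keeping that index honest and distributed (IPNS/DHT or similar).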


  • Didn’t know what uBlue was, so here: https://universal-blue.org/

    "The Universal Blue project builds a diverse set of continuously delivered operating system images using bootc. That’s nerdspeak for the ultimate Linux client: the reliability of a Chromebook, but with the flexibility and power of a traditional Linux desktop.

    These images represent what’s possible when a community focuses on sharing best practices via automation and collaboration. One common language between dev and ops, and it’s finally come to the desktop.

    We also provide tools for users to build their own image using our templates and processes, which can be used to ship custom configurations to all of your machines, or finally make the Linux distribution you’ve long wished for, but never had the tools to create.

    At long last, we’ve ascended."


  • Been enjoying Linux for ~25 years, but I’ve never been happy with how it handles low-memory situations. Swapping has always killed the system, though it has improved a little. It’s been a while since I’ve messed with it - I’ve buckled up and am using more RAM now - but AFAIR, you can play with:

    (0. reduce the number of running programs, and optimize the rest for less memory use, yada yada)

    1. use a better OOM (out-of-memory) manager that activates sooner and more gracefully, e.g. earlyoom or systemd-oomd. Search your OS’s repository for one.
    2. use zram as a more intelligent buffer and to deduplicate identical (zero-filled) pages. It can lightly compress lesser-used memory pages and use a partition as a writeback backend for storing incompressible pages. You spend a little CPU to minimize swap, and when swapping is needed, only what can’t be compressed goes to disk.
    3. play with the sysctl vm settings like swappiness and such, but be aware that there’s SO much misinformation out there, so seek out the official kernel docs. For instance, you can adapt the system to swap more often but in much smaller chunks, so you avoid spending 5 minutes to hours regaining control - the system may get ‘sluggish’, but you keep control (see the sketch below).
    4. use cgroups to divide your resources, so firefox/chrome (or compilers/memory-hogs) can only use X amount before their memory has to swap out (if they don’t adapt to low-memory conditions automatically). That leaves a system that can still react to your input (while ff/chrome would freeze). Not perfect, tho - also sketched below.
    5. when gaming, activate a low-resource mode where unnecessary services etc. are disabled. I think there’s a library/command that helps with that (and raises priority etc.) - ‘gamemode’ rings a bell.

    EDIT: 6. when NOT gaming, add some of your VRAM as swap space. It’s much faster than your SSD. Search GitHub or your repository for ‘vram cache’ or something like that (the FUSE-based ‘vramfs’ project may be the one). It works via OpenCL, so everyone with dedicated VRAM can use it as a super-fast cache. Perhaps others can remember the name/link?

    Something like that anyway, others will know more about each point.
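    A minimal sketch of points 3 and 4, assuming cgroup v2 mounted at /sys/fs/cgroup and root privileges; the group name, cap, and PID are made up:

    ```python
    # Cap a memory hog with cgroup v2 and lower swappiness (needs root).
    from pathlib import Path

    def set_swappiness(value: int) -> None:
        # Same effect as `sysctl vm.swappiness=<value>` (point 3).
        Path("/proc/sys/vm/swappiness").write_text(str(value))

    def cap_process(pid: int, limit: str = "4G", group: str = "hogs") -> None:
        # Point 4: give one process its own cgroup with a hard memory cap,
        # so it swaps/dies alone instead of freezing the whole machine.
        cg = Path("/sys/fs/cgroup") / group
        cg.mkdir(exist_ok=True)
        (cg / "memory.max").write_text(limit)       # hard cap ("max" = unlimited)
        (cg / "cgroup.procs").write_text(str(pid))  # move the process in

    set_swappiness(10)   # reclaim cache before swapping anonymous pages
    cap_process(12345)   # e.g. your browser's PID
    ```

    On a systemd system, `systemd-run` scopes do the same thing with less plumbing.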

    Also, perhaps ask an AI to create a small interface for you to fiddle with vm settings and cgroups in an automated/permanent way? Just a quick thought. Good luck.



  • Agree. I also shift between them. As a bare minimum, I use a thinking model to ‘open up’ the conversation, and then often continue with a normal model, but it certainly depends on the topic.

    Long ago we got ‘RouteLLM’, I think, which routed a request depending on its content, but the concept never got traction for some reason. Now it seems that closedai and other big names are putting some attention on it. Great to see DeepHermes and other open players at the front of the pack.

    I don’t think it will take long before the agentic frameworks activate different ‘modes’ of thinking depending on content/context, goals, etc. It would be great if a model could be triggered into its different modes in a standard way.
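    A toy version of that routing idea, in the RouteLLM spirit (the model names and keyword list are made-up placeholders):

    ```python
    # Toy content-based router: a cheap heuristic picks a "thinking"
    # model for hard prompts and a fast model for the rest.
    REASONING_HINTS = ("prove", "derive", "debug", "step by step", "why")

    def pick_model(prompt: str) -> str:
        hard = len(prompt) > 500 or any(h in prompt.lower() for h in REASONING_HINTS)
        return "deephermes-thinking" if hard else "small-fast-model"

    print(pick_model("Why does my kernel OOM under load? Explain step by step."))
    # -> deephermes-thinking
    ```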


  • You can argue that the 4090 is more of a ‘flagship’ model on the consumer market, but that could just be a typing error, and by fixating on it you miss the point and the knowledge you could have learned:

    “Their system, FlightVGM, recorded a 30 per cent performance boost and had an energy efficiency that was 4½ times greater than Nvidia’s flagship RTX 3090 GPU – all while running on the widely available V80 FPGA chip from Advanced Micro Devices (AMD), another leading US semiconductor firm.”

    So they have found a way to use an ‘off-the-shelf’ FPGA for video inference, and to me it looks like it could match a 4090(?), but who cares: with this upgrade, these standard FPGAs are cheaper (running 24/7) and better than any consumer Nvidia GPU up to at least the 3090/4090.

    And here from the paper:

    "[problem] …sparse VGMs [video generating models] cannot fully exploit the effective throughput (i.e., TOPS) of GPUs. FPGAs are good candidates for accelerating sparse deep learning models. However, existing FPGA accelerators still face low throughput ( < 2TOPS) on VGMs due to the significant gap in peak computing performance (PCP) with GPUs ( > 21× ).

    [solution] …we propose FlightVGM, the first FPGA accelerator for efficient VGM inference with activation sparsification and hybrid precision. […] Implemented on the AMD V80 FPGA, FlightVGM surpasses NVIDIA 3090 GPU by 1.30× in performance and 4.49× in energy efficiency on various sparse VGM workloads."

    You’ll have to look up what that means yourself, but expect a throng of bitcrap miner cards to be converted into VLM accelerators, and maybe a new life for older/smaller/cheaper FPGAs?
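    A quick sanity check on the two headline ratios from the abstract (just arithmetic on the quoted numbers):

    ```python
    # 1.30x performance at 4.49x energy efficiency implies power draw:
    perf = 1.30          # FlightVGM throughput vs RTX 3090 (from the paper)
    eff = 4.49           # work-per-joule vs RTX 3090 (from the paper)

    power_ratio = perf / eff       # power = performance / efficiency
    print(f"{power_ratio:.2f}")    # ~0.29: roughly 29% of the 3090's power
                                   # while doing 30% more work
    ```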