• 0 Posts
  • 125 Comments
Joined 8 months ago
Cake day: December 25th, 2023

  • I’d try ChatGPT for that! :)

    But to give you a very brief rundown: if you have no experience in any of these aspects and are self-learning, you should expect a long ramp-up phase! Perhaps there is an easier route, but if there is, I’m not familiar with it.

    First, familiarize yourself with server setups. If you only want to host this, you won’t have to go into the network details, but they could become a source of errors at some point, so be warned! The usual tip here is to get familiar enough with Docker that you can read and understand Docker Compose files. The de facto standard for self-hosting is a Linux machine, but I have read of people who used macOS and even Windows successfully.

    One aspect quite unique to the model landscape is the hardware requirements. As much as it hurts my Nvidia-despising heart, at this point in time they are the de facto monopolist. Get yourself a card with 12 GB of VRAM or more (everything below that will be painful, if you get things running at all; I’ve tried and pulled smaller models on an 8 GB card but experienced a lot of waiting time and crashes). Read a bit about CUDA on your chosen OS and what the drivers need.
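
    A quick sanity check that the driver stack works, before you fight with containers (the CUDA image tag is just an example, pick one matching your driver):

        nvidia-smi      # should list your GPU and its VRAM

        # if you want models inside Docker, the NVIDIA Container Toolkit
        # must work too; this prints the same table from inside a container:
        docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi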

    Make sure you understand the whole business of ports, containers, path mappings and environment variables.
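
    To make that concrete, here is a hypothetical minimal Compose file (Ollama is only a common example of a model server; the paths and the OLLAMA_HOST variable are illustrative, check whatever project you pick for the real values):

        # docker-compose.yml (hypothetical example)
        services:
          ollama:
            image: ollama/ollama
            ports:
              - "11434:11434"            # host port : container port
            volumes:
              - ./models:/root/.ollama   # path mapping: models live on the host
            environment:
              - OLLAMA_HOST=0.0.0.0      # env var passed into the container
            # note: GPU access needs extra compose config (see Docker's GPU docs)

        # then, from the same directory:
        docker compose up -d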

    Then it’s a matter of going to the GitHub page linked, following their guide and starting a container. Loading models is actually the easier part once you have the infrastructure running.


  • No offense intended, and it’s possible that I misread your experience level:

    I hear a user asking developer questions. Either you go the route of using the publicly available services (DALL-E and co.) or you start digging into hosting the models yourself. The page you linked hosts trained models to use in your own contexts, not as a “click a button and it works” experience.

    As a starting point for image generation self hosting I suggest https://github.com/AUTOMATIC1111/stable-diffusion-webui.
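
    Roughly, the documented Linux route for that webui looks like this (check the repo’s README, the details change between releases):

        git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
        cd stable-diffusion-webui
        ./webui.sh      # first run creates a venv and pulls the dependencies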

    For the training part, I’ll be very blunt: if you don’t intend to spend five- to six-figure sums on hardware or processing power, forget it. And even then you’d need the raw training data to pull it off.

    Perhaps what you want to do is fine-tune a pretrained model; that’s something I only have a bit of experience with for LLMs though (and even there I don’t have the hardware to get beyond a personal proof of concept).


  • Uhm, I don’t know your cultural background, but at least around where I am, the “own limitations” part is a crucial element of the therapy aspect: accept your own limits and work with your strengths.

    Managing and accepting restrictions is what is taught here for therapists (at least in the fields I’m in closer contact with).

    These “widely knowing” people are at least not scientists, as the last meta-study I am aware of basically says “not enough data”: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7265021/

    That said: there is a high risk that we’re discussing local variations of various therapy approaches, and I’d even guess it’s highly likely that you’re absolutely correct for your medical-cultural background while my lens is highly distorted (from your PoV) by my own.


  • Edit: I missed some complexity, as suspected! I’m not sure how this process would handle hard links and symlinks. I’d add an experiment for that before going on to the /nix and root folders (it shouldn’t harm /var/log at all).

    Original text: Perhaps I’m missing some complexity in your setup, but from my understanding it’s really straightforward:

    The main caveat is that you need twice the space of your largest future subvolume. A nix-collect-garbage -d and any manual cleanup can help you there. I’d test the approach with /var/log first and, when that works, move over to the more critical systems.
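
    Concretely, the cleanup and the space check would be:

        nix-collect-garbage -d      # delete old generations, then collect garbage
        df -h /                     # how much came back, at the VFS level
        btrfs filesystem usage /    # btrfs-level view of the allocation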

    • Create the subvolumes within the partition you want to keep.
    • Mount them at a temp location and copy the files over.
    • Alter your hardware.nix, or wherever you’ve set your mount points, to use the subvolume.
    • Rebuild switch and reboot (see the sketch after this list).
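
    A rough command-level sketch for the /var/log case (the disk label and the @log subvolume name are made-up examples, adjust them to your layout):

        mkdir -p /mnt/btrfs-top
        mount -o subvolid=5 /dev/disk/by-label/nixos /mnt/btrfs-top   # top-level volume
        btrfs subvolume create /mnt/btrfs-top/@log
        cp -a --reflink=always /var/log/. /mnt/btrfs-top/@log/        # cheap copy on btrfs

        # then point the mount at the subvolume, e.g. in hardware-configuration.nix:
        #   fileSystems."/var/log" = {
        #     device  = "/dev/disk/by-label/nixos";
        #     fsType  = "btrfs";
        #     options = [ "subvol=@log" ];
        #   };

        nixos-rebuild switch   # and reboot to pick up the new mount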

    If everything is working as expected, write a runbook for every step (i.e. have every step written up) and repeat with /home. Home is the second least critical folder for this.

    Once you have your runbook, repeat the process, and when you run out of space, resize as needed (e.g. https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html#man-filesystem-resize).
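
    The resize itself is a one-liner on a mounted filesystem (the mount point is a placeholder):

        btrfs filesystem resize +10G /mnt/point   # grow by 10 GiB
        btrfs filesystem resize max /mnt/point    # or grow to fill the whole device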

    That said: as you aim for the fully ephemeral root, I personally would actually go the reinstall route and write up everything I needed to do by hand. But that needs even more spare space (I’d prefer even a second disk for something like that, to have a fallback).

    Good luck!