I get the idea behind it for sure, but why use our available RAM for this? I thought whatever init functionality there is would just wipe /tmp clean at boot.

Right now what I’m looking at is that if a system has 16 GB of RAM, KDE Neon sizes /tmp at half of it.

The thing is, applications can write to /tmp for a plethora of reasons and could easily max that out. Whether you’re a content creator or processing data of some sort that leaves trails in /tmp, the last thing I want is my RAM being used for this.

Basically, if you drop a 10 GB file into /tmp right now (if your setup has /tmp on tmpfs) you will see 10 GB of usage in htop. Example: https://imgur.com/a/S9JIz9p
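
You can reproduce this with a smaller file if you want to see it for yourself. Roughly something like the following (the sizes are just for illustration):

    # write a 1 GiB file into /tmp (backed by tmpfs on these setups)
    dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024

    # memory usage goes up by about 1 GiB (htop and free both show it)
    free -h
    df -h /tmp

    # removing the file gives the memory back
    rm /tmp/bigfile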

I’m not here to pick a fight, but as a new KDE Neon user I’m scratching my head over the why, after years on Arch Linux.

    • @lumirellOP · 3 points · 7 months ago

      I… don’t think I have ever seen it do that automatically, unless I missed some steps in the installation guide…? Most of the time I just created the partitions I needed. I did a quick Ctrl+F for tmpfs or tmp but I’m not seeing anything…

      Anyhow, I don’t see it on my desktop, which still has Arch Linux installed. I do want to move that one to KDE Neon as well, but I’m extremely lazy when there’s an immense backup to do first…

      • Max-P · 6 points · 7 months ago

        It’s been the default since systemd, afaik. I think systemd-tmpfiles manages this. It’s never been a problem for me; it pretty much stays fairly empty most of the time. Most things like sockets live in /run, which is also tmpfs.
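
        If you want to check what your own system does, something like this should show it (on a systemd setup /tmp is typically handled by the tmp.mount unit):

            # list every tmpfs mount (/tmp, /run, /dev/shm, ...)
            findmnt -t tmpfs

            # show the unit that mounts /tmp on systemd systems
            systemctl cat tmp.mount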

      • Strit · 3 points · 7 months ago

        It has been the default in Arch for a long time.

        What is the output of df -h | grep tmpfs on your system? It should list several tmpfs mounts, with /tmp being one of them.
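
        On a typical install it looks roughly like this (sizes and mount points will differ on your machine; this is only meant to show the shape of the output):

            $ df -h | grep tmpfs
            tmpfs           7.8G  1.8M  7.8G   1% /dev/shm
            tmpfs           3.2G  1.6M  3.2G   1% /run
            tmpfs           7.8G   40K  7.8G   1% /tmp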

  • mox · 14 points · 7 months ago (edited)

    Why would you drop a 10 GB file in /tmp and leave it there?

    Every decent app I’ve used that processes large files also moves them to a final location when finished, in which case it makes sense not to use /tmp for them, because doing so would turn that final move into a copy (unless /tmp happens to be on the same filesystem as the target location). That’s why such applications usually let you configure the directory they use for large temp files, or else create temp files in the target directory to begin with.
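
    For example, many (though not all) command-line tools honor the TMPDIR convention, and you can always create the scratch file next to its destination yourself. Roughly (the program name and paths below are just placeholders):

        # point a program's scratch space at a disk-backed directory instead of /tmp
        TMPDIR=/var/tmp some-batch-job        # hypothetical program name

        # or create the working file in the target directory, so the final
        # "move" is a cheap same-filesystem rename rather than a copy
        mktemp --tmpdir="$HOME/videos" render.XXXXXX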

    For what it’s worth, I changed my /tmp to a tmpfs years ago, even on a 16 GB system, for performance and to minimize SSD wear. I think it was only ever restrictive once or twice, and nothing terrible happened; I just had to clear some space or choose a different dir for whatever I was doing.
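
    If anyone wants to try the same, the usual approach is a one-line fstab entry; note that size= is a ceiling, not a reservation, so it doesn’t claim RAM up front (the values here are just examples):

        # /etc/fstab -- mount /tmp as tmpfs, capped at 2 GiB
        tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,size=2G,mode=1777  0  0

        # resize a live tmpfs without rebooting
        sudo mount -o remount,size=4G /tmp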

    It’s worth reviewing the tmpfs docs to make sure you understand how that memory is actually managed. It’s not like a simple RAM disk.
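
    The short version, as I understand it: tmpfs only consumes memory for what’s actually stored, releases it when files are deleted, and its pages can be pushed out to swap under memory pressure. The accounting is visible under Shmem:

        # tmpfs usage is counted as Shmem rather than as a fixed reservation,
        # and it can spill to swap under memory pressure
        grep -E 'Shmem:|SwapTotal|SwapFree' /proc/meminfo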

    • @lumirellOP · 3 points · 7 months ago

      Why would the reason for dropping a file of X size matter? The point is that not all applications are “decent”, and some will undoubtedly use /tmp because “it might be the most logical thing” to any developer who’s not really up to date.

      I don’t see how reviewing the tmpfs docs helps in this scenario, if at all… we are talking about end users, your common Joe/Jane running day-to-day applications, whatever they may be. I don’t and will never expect developers to adhere to anything; they’ll just put out whatever.

      • mox · 6 points · 7 months ago

        Why would the reason for dropping a file of X size matter? The point is that not all applications are “decent”, and some will undoubtedly use /tmp because “it might be the most logical thing” to any developer who’s not really up to date.

        It matters because it’s the difference between a real-world situation, and a fabricated scenario that you expect to be problematic but doesn’t generally happen.

        All filesystems have limits, and /tmp in particular has traditionally been sized much smaller than the root or home filesystems, regardless of what backing store is used. This has been true for as long as unix has existed, because making it large by default was (and is) usually a waste of disk space. Making it a tmpfs doesn’t change much.

        The point is that not all applications are “decent”, and some will undoubtedly use /tmp because “it might be the most logical thing” to any developer who’s not really up to date.

        In my experience, the developers of such applications discover their mistake pretty quickly after their apps start seeing wide use, when their users complain about /tmp filling up and causing failures. The devs then fix their code. That’s why we don’t see it often in practice.

        I don’t see how reviewing the tmpfs docs helps in this scenario, if at all…

        I mentioned it in case it helps you to understand that the memory is used more efficiently than you might think. Perhaps that could relieve some of your concern about using it on a 16GB system. Perhaps not. Just trying to help.

        we are talking about end users, your common Joe/Jane running day-to-day applications, whatever they may be.

        We are? I don’t see them echoing your concerns. Perhaps that’s because this is seldom a problem.

        • @lumirellOP · 1 point · 7 months ago

          In my experience, the developers of such applications discover their mistake pretty quickly after their apps start seeing wide use, when their users complain about /tmp filling up and causing failures. The devs then fix their code. That’s why we don’t see it often in practice.

          I humbly disagree. We don’t live in that utopia.

          I don’t see them echoing your concerns.

          I guess for a scenario to be real, everyone has to know exactly what’s happening? As if they will all know what caused it and how to properly report it, even though I don’t expect a lot of people to know their system, especially your average Joe/Jane, nor do I expect them to even troubleshoot the issue if something were to happen. It doesn’t really invalidate the scenario at all.

          A fabricated scenario is itself pretty redundant. :)

    • @ulterno · 1 point · 7 months ago

      Nice. I’m going to go set up tmpfs rn

  • Strit · 5 points · 7 months ago

    Two good reasons to have your /tmp on a tmpfs filesystem:

    1. It’s faster. RAM is faster than your drives, even SSDs (see the quick test at the end of this comment).
    2. You are certain that all the files are removed on reboot, because RAM always gets cleared when it loses power.

    And one reason not to have your /tmp on tmpfs:

    1. It can quickly fill up your RAM if your applications (or you, manually) drop huge files in /tmp and don’t move them out afterwards.

    In my mind, the Pros outweigh the Cons.
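
    If you want to see point 1 for yourself, a crude test is enough. Numbers will vary a lot between machines; conv=fdatasync just keeps the disk run honest:

        # write 1 GiB to tmpfs, then to a disk-backed directory, and compare the reported speeds
        dd if=/dev/zero of=/tmp/speedtest bs=1M count=1024 conv=fdatasync
        dd if=/dev/zero of="$HOME/speedtest" bs=1M count=1024 conv=fdatasync
        rm /tmp/speedtest "$HOME/speedtest"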

  • @CameronDev@programming.dev · 3 points · 7 months ago

    I think, and I’m open to alternative theories, that using RAM instead of disk is safer when the tmp directory fills up.

    If /tmp is a regular directory on your root drive and you fill the disk with tmp files, other processes won’t be able to save files to disk, resulting in lost data.

    If you have it on a RAM disk, when the tmpfs fills up too much, the OOM killer can reclaim space elsewhere (unsure if the OOM killer can wipe tmpfs itself, but that would probably be ideal?).

    Neither is good, and both can result in data loss, but tmpfs may be safer?

      • @CameronDev@programming.dev · 2 points · 7 months ago

        Are you thinking of rowhammer? My understanding is limited, but doesn’t rowhammer require being able to write to memory at a consistent address, co-located with the data being attacked? I’m not sure that’s doable with tmpfs, but it’s probably worth an investigation by someone more knowledgeable than me :)