

Thanks for pointing that out. It was a case of conflating the two G’s in “GNU General Public License”.


And what would that goalpost be?
This would be really exciting if Canonical weren’t using this in part because it helps them de-GPL their Linux distro.
I pointed out that A LOT of core dependencies installed in your system right now are not GNU (the GNU in GNU GPL), and never have been. You thought I was talking about GNU the project, not realizing I was actually talking about the license, which proved my point from months ago that people who talk like you are completely clueless about the licenses used by packages in their systems.
The supposition that the GPL dependence ratio is both high and getting significantly lowered is doubly wrong (both parts).
The claim that these moves are de-GPLing ones is also wrong, as trivially proven by the fact that the pattern doesn’t even hold (Ubuntu moved to GPL chrony not long ago).
The “rug pull” theory, already invalidated by the falsity of the above suppositions, is independently incoherent, as explained in my previous comment from both a technical and a business/commercial/cost POV.
There are countless angles where an “I’m feeling smart corpos bad” wouldn’t be invalid. This is not one of them.


I’m very aware of the great work Chimera Linux is doing. But still, there are GNUisms hanging around, binary dependence in particular is hard to shake off, and replacing a system libc can be very complicated, if only because distros need to support a smooth upgrade path between versions*.
* I always had the idea of a hybrid “static core/dynamic world” distro packaging model in part to ease such complications.


That’s another fictional aspect: that a distro will simply subsume a random third-party upstream for one non-GNU package (or 5, or 10), change the whole distro model, and go proprietary.
I will let you in on a secret: the “stable” distro model itself is largely a lie. So-called “stable” distros, even well-funded ones, can barely do the minimum in that regard. The only exception is maybe Red Hat, because they employ people who do a lot of upstream development. But even in that case, that only covers a small fraction of what they package.
Distros need good upstreams to avoid responsibility, especially when it comes to security updates, not because they want to subsume all of that responsibility at some unspecified point for some unspecified reason.
The fact that this gets brought up whenever one more non-GNU-licensed Rust package (or 3, or 5) is getting adopted, when literally thousands of non-Rust ones are already there, including many core dependencies, is what gives this FUD-like argumentation disingenuous vibes (assuming originality and non-ignorance).
Even arguing that “it’s a clear pattern” wouldn’t work, as that also wouldn’t survive fact-checking scrutiny. For example, Ubuntu switched from the multi-licensed systemd to the GPL-only chrony for NTP purposes not that long ago. Where was that supposed “pattern” then?!
EDIT: btw, all “non GNU” mentions in my original comment are about the license. All use non copyleft ones (with the exception of MPL for a couple of packages).


The notion that a modern Linux desktop is GNU is pure fiction.
You posted this from Firefox or a Chromium/Blink based browser! => not GNU
You use X11 libs or libwayland => not GNU
mesa => not GNU
openssl or nss => not GNU (check your system libcurl for me, does your distro build it against gnutls?)
openssh => not GNU (obviously)
fontconfig, freetype, harfbuzz => not GNU (freetype is dual-licensed)
zlib, bzip2, brotli, zstd => not GNU (gzip is, zstd is dual-licensed)
libjpeg, libpng, libvpx, libaom => not GNU
(neo)vim, tmux => not GNU (who still uses screen?!)
and I could go on and on and on
Even when it comes to ntp implementations, OpenNTPD and NTPsec are not GNU. gpsd, one of the three projects mentioned by Canonical, is not GNU (the other two are).
(all software mentioned above sans browsers is written in C btw)
Even GCC is almost fully replaceable now. The only strong holdout is glibc (musl is no match, and doesn’t pretend to be anyway). And surprise surprise, it is not going to be replaced, not anytime soon anyway.


Let’s take Lemmy UIs as an example. In a world where this “RCE” is removed, all API calls and returned data would have to go through a “server client” first. I hope it won’t take you long to ponder whether that’s an improvement or not 😉
The web is indeed shit. But a dumber web means more “clouding”. Or, if it’s not “clouding”, and to borrow from your reductionist fatalism: a dumber web replaces a potential RCE with a definite MITM.


Good move, removing some incentive for the security theater industry to exaggerate, or even manufacture, problems and then “solve” them, gaining some free ad space and “credibility” in the process. That’s something I already pondered in a previous thread that had a bad smell.


I didn’t. And I was specifically referring to the published “analysis”.
How do we know the supposedly malicious content (which hasn’t provably affected a single person) a security company finds, didn’t originate from that same company?
It all sounds like a joke, and a lazily written one at that (Edit for fairness: the ctor part was a nice touch tbf).
And this is not limited to this analysis, or this company, or the Rust ecosystem. The era of CVE logos and all that theater can become rather tiring, and AI slop took the silliness to a whole other level. Or as our friend Daniel puts it, it’s a “Death by a thousand slops”.


Maybe it’s a bug, but my false flag alarm bells are ringing loudly here. Although to be fair, they always do that whenever they get a whiff of anything from the modern security theater industry.
Or maybe my mind is wrongly biased towards applying a “Problem - Reaction - Solution” reading to many “commercial” moves.


Super-human claims require evidence. And asking for that evidence is not an insult.


I think it’s time for this instance to consider introducing a filter where users have to choose a language they know (any language), and then have to answer easy questions about it (in a specific way), before being able to post here.
It can be limited to specific posts, to limit the false-negative filtering of genuine discourse.
This should help with bots, or worse, actual humans who accepted being shaped into acting like ones. The line separating the two has become very thin anyway, given the prevalence of LLM use, both automatic AND manual.


Can you point to relevant non-trivial public work of yours that has zero CVEs?
The more you learn and know, the more you refrain from making such statements. This is universally applicable, and not limited to C or programming. And that’s what makes your “story” suspect.
Or maybe it’s a reading comprehension issue.
I used to write non-trivial C code myself btw.


It is guaranteed that those who talk about this have ZERO clue about the licenses of the software they directly use, or that has always been installed on their systems.


Does waypipe support input (keyboard+mouse)? Because if it doesn’t, it’s kind of useless; you might as well just use ffmpeg with kmsgrab (provided that the pixel format the compositor uses is supported). I have no intention of switching to Wayland, but I did try wayvnc a couple of times. The first time it was very buggy. The second time it seemed to have improved. But I see now that it isn’t actively developed anymore!


Rust has features that are not directly related to memory safety, but introduce paradigmatic and ergonomic improvements that help writing correct logic more often. Features like sum types (powerful enums) and type classes (traits, how generics are implemented) quickly come to mind. Hygienic macros and procedural macros are also very powerful features.
Sometimes the two aspects (language features and memory safety) come together. For example, the Send and Sync traits are the part of the type system that contributes to thread safety.
So it’s not all just about (im)mutability, lifetimes, and the borrow checker, the features most directly relevant to memory safety.
Also, the tooling and the ecosystem are factors whose value cannot be overstated.
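To make the sum types and traits point concrete, here is a minimal, hypothetical sketch (the names are made up for illustration, not from any real project): the compiler rejects any `match` that doesn’t cover every variant, and the trait implementation attaches shared behavior to the type.

```rust
use std::fmt;

// A "powerful enum" (sum type): each variant carries its own data.
enum ParseResult {
    Number(i64),
    Word(String),
    Empty,
}

// A trait (type class): shared behavior implemented per type.
impl fmt::Display for ParseResult {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // The compiler rejects this match if a variant is missing.
            ParseResult::Number(n) => write!(f, "number {n}"),
            ParseResult::Word(w) => write!(f, "word {w}"),
            ParseResult::Empty => write!(f, "empty"),
        }
    }
}

fn classify(input: &str) -> ParseResult {
    if input.is_empty() {
        ParseResult::Empty
    } else if let Ok(n) = input.parse::<i64>() {
        ParseResult::Number(n)
    } else {
        ParseResult::Word(input.to_string())
    }
}

fn main() {
    println!("{}", classify("42")); // prints "number 42"
}
```

The same mechanism is what makes error handling with `Result` and `Option` hard to ignore silently.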


Nice(!) to see so many people who don’t know anything about programming get successfully propagandized into going against something they know nothing about.
Below is a list of CVEs published against original sudo, all within the last 5 years. You may not have heard of them, because CVEs against non-Rust projects are not news 🫣
sudo CVEs from the last 5 years (severity scores are not always available/assigned)
Sudo before 1.9.5p2 contains an off-by-one error that can result in a heap-based buffer overflow, which allows privilege escalation to root via “sudoedit -s” and a command-line argument that ends with a single backslash character.
The sudoedit personality of Sudo before 1.9.5 may allow a local unprivileged user to perform arbitrary directory-existence tests by winning a sudo_edit.c race condition in replacing a user-controlled directory by a symlink to an arbitrary path.
selinux_edit_copy_tfiles in sudoedit in Sudo before 1.9.5 allows a local unprivileged user to gain file ownership and escalate privileges by replacing a temporary file with a symlink to an arbitrary file target. This affects SELinux RBAC support in permissive mode. Machines without SELinux are not vulnerable.
Sudo 1.8.0 through 1.9.12, with the crypt() password backend, contains a plugins/sudoers/auth/passwd.c array-out-of-bounds error that can result in a heap-based buffer over-read.
A flaw was found in sudo in the handling of ipa_hostname, where ipa_hostname from /etc/sssd/sssd.conf was not propagated in sudo. Therefore, it leads to privilege mismanagement vulnerability in applications, where client hosts retain privileges even after retracting them.
In Sudo before 1.9.12p2, the sudoedit (aka -e) feature mishandles extra arguments passed in the user-provided environment variables (SUDO_EDITOR, VISUAL, and EDITOR), allowing a local attacker to append arbitrary entries to the list of files to process. This can lead to privilege escalation.
Sudo before 1.9.13p2 has a double free in the per-command chroot feature.
Sudo before 1.9.13 does not escape control characters in log messages.
Sudo before 1.9.13 does not escape control characters in sudoreplay output.
Sudo before 1.9.15 might allow row hammer attacks (for authentication bypass or privilege escalation) because application logic sometimes is based on not equaling an error value (instead of equaling a success value), and because the values do not resist flips of a single bit.
Sudo before 1.9.17p1, when used with a sudoers file that specifies a host that is neither the current host nor ALL, allows listed users to execute commands on unintended machines.
Sudo before 1.9.17p1 allows local users to obtain root access because /etc/nsswitch.conf from a user-controlled directory is used with the --chroot option.
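On the row hammer entry above: the flaw described is logic of the form “proceed unless status equals the error value”, paired with status constants that a single bit flip can transmute. A hedged sketch of the hardened pattern (hypothetical constants and names, not sudo’s actual code), checking for the exact success value instead:

```rust
// Hypothetical status constants chosen many bits apart, so no single
// bit flip can turn one valid status into the other.
const AUTH_SUCCESS: u32 = 0x5255_AA52;
const AUTH_FAILURE: u32 = 0xAAAA_5555;

fn authenticate(password_ok: bool) -> u32 {
    if password_ok { AUTH_SUCCESS } else { AUTH_FAILURE }
}

// Fragile: treats "anything but the failure value" as success, so a
// single flipped bit in a failure result bypasses authentication.
fn fragile_check(status: u32) -> bool {
    status != AUTH_FAILURE
}

// Hardened: only the exact success value passes; a flipped bit in
// either status fails closed.
fn hardened_check(status: u32) -> bool {
    status == AUTH_SUCCESS
}

fn main() {
    let flipped = authenticate(false) ^ 1; // simulate a single-bit flip
    assert!(fragile_check(flipped)); // fragile logic is bypassed
    assert!(!hardened_check(flipped)); // hardened logic fails closed
    println!("ok");
}
```

This only illustrates the comparison-direction point from the CVE text; real hardening also depends on where the value lives and how it is consumed.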
The special comment from @MTK@lemmy.world in this thread deserves some focus:
The Rust hype is funny because it is completely based on the fact that a leading cause of security vulnerabilities for all of these mature and secure projects is memory bugs, which is very true, but it completely fails to see that this is the leading cause because these are really mature projects that have highly skilled developers fixing so much shit.
So you get these new Rust projects that are sometimes made by people that don’t have the same experience as these C/C++ devs, and they are so confident in the memory safety that they forget about the much simpler security issues.
This has all the classics from the collectively manic discourse that has been spreading lately:
mature projects
highly skilled developers
Rust projects that are sometimes made by people that don’t have the same experience as these C/C++ devs
C/C++ devs (deserves a separate entry)
they forget about the much simpler security issues.
The only classic missing is “battle tested” which is a crowd favorite these days.
But of course the internet gantry’s knowledge about CVEs reported against non-Rust projects is as good as their understanding of the Rust language itself.
Someone bothering to be minimally informed, even when lacking the technical knowledge to maximize their understanding of the information, would have known that the original “mature” sudo has CVEs published against it all the time. A CRITICAL one was rather recent, even. And as it just happens, the ones not (directly) related to memory safety recently outnumbered the ones that were (over a 5-year span). Which ones had higher severity is left as homework for the internet gantry.
The discourse centered around memory safety itself lacks the knowledge to realize that the overall value proposition of Rust is much bigger than this single aspect, although the breadth of memory-safety sub-aspects Rust offers is itself also under-grasped.
The internet gantry’s susceptibility to propaganda and good old FUD spread by ignorant, drama-mongering “influencers” and “e-celebs” would have been almost concerning, that is, if their transient feelings mattered in any way in the grand scheme of things.
Needless to say, this comment is not meant to be disparaging towards Todd C. Miller or any other sudo developer/maintainer. He has a good relationship with the sudo-rs developers anyway, not that the internet gantry would know.


I used to run their closed cli client years ago, but only when connecting to grab wireguard configs, then I closed it and connected with that config without it, which worked well*.
I also remember strace showing it reading a bunch of stuff including /etc/os-release. So they at least knew what distro you were using 😉
It was okay for me because I knew how to deal with it, although I’m with a provider that provides configs directly so you don’t need to use any service-specific clients.
Nord was never, or should have never been, a “privacy” choice, unless you are the kind of person that falls for paid reviewers and comparison sites, or marketing bullshit like all the X eyes talk.
*you can do that with any client that connects through wireguard since you can run wg showconf on the connected wireguard device. Although you would have to do some scripting yourself to replicate other steps like DNS and routing. I don’t think I was the only one doing this.


A long time ago, there was this misconception that “linux” was terminal-only. You know, like the interface sysadmins and Hollywood hackers use.
A small long-defunct non-tech forum I used to be a member of had a tech sub-forum, and in that sub-forum there was a new post one day introducing “linux” and covering some basics. It was full of DE screenshots (GNOME 2 and KDE 3) specifically to dispel the “terminal-only” misconception.
That was ~20 years ago. And the rest is history. I never liked Windows or M$ anyway, for both technical and non-technical reasons. So it wasn’t that hard to convince me.
I almost exclusively use the terminal for everything except web browsing now, and don’t use a DE. So you could say that I myself ironically became a perpetuator of the misconception 😉


Or to avoid ad hominem accusations:
No code. Don’t Care.
And no benchmarks either. That intro about stack vs. heap also reads like it was written by someone who never went past sophomore-level knowledge, or someone explaining things to kids.
Did you ask an LLM to write a comment full of cliches?