Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 134 Posts
  • 7.42K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • I would say that the Web in 2026 is much less isolated in terms of person-to-person interaction than it was in the late ’90s, as a lot of websites in the late 1990s were static or mostly-static and the major rise of social media hadn’t yet happened. Much social interaction happened on non-Web platforms like IRC, Usenet, or mailing lists.

    https://en.wikipedia.org/wiki/Web_2.0

    The term “Web 2.0” was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article “Fragmented Future”:[3][20]

    “The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will […] appear on your computer screen, […] on your TV set […] your car dashboard […] your cell phone […] hand-held game machines […] maybe even your microwave oven.”

    Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site’s content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, they encourage users to rely more on their browser for user interface, application software (“apps”) and file storage facilities. This has been called “network as platform” computing.[5] Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress’ easy-to-use blog and website creation tools), “tagging” (which enables users to label websites, videos or photos in some fashion), “like” buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.

    Users can provide the data and exercise some control over what they share on a Web 2.0 site.[5][28] These sites may have an “architecture of participation” that encourages users to add value to the application as they use it.[4][5] Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects.[29] Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.[30]

    I don’t know offhand when the first Web forum software packages appeared, but they took a while to spread; they were much less common early on, and at first forums were custom implementations built on a per-site basis.

    phpBB’s been around for a while, and was popular in the “standalone Web forum” era.

    goes to look at when it started out

    https://en.wikipedia.org/wiki/PhpBB

    phpBB was founded by James Atkinson as a simple UBB-like forum for his own website on June 17, 2000.

    It looks like UBB is a commercial package that dates to 1997, though it certainly never saw phpBB’s level of use, and I don’t know if I’ve ever used a UBB-based forum.


  • I don’t know the specific situation there, but traditionally if you have a military conflict going on, battle damage assessment is part of a military’s job.

    Battle damage assessment (BDA), sometimes referred to as bomb damage assessment, is the process of evaluating the physical and functional damage inflicted on a target as a result of military operations. It is a core component of combat assessment and is used to inform judgments about mission effectiveness and potential follow-on actions, including reattack recommendations.[1]

    Information on battle damage is highly valuable to the enemy and military intelligence and censors will endeavor to conceal, exaggerate or underplay the extent of damage depending on the circumstances.

    With long-range weapons — which is what Iran is using against UAE targets — it can be hard to know whether or not you’re actually hitting something. You need some sort of reconnaissance platform or a physical person to go out and take a look. So in general, a defending military would rather not permit an attacking military to know what has actually been hit. If an attack missed, the defender doesn’t want the attacker to know, lest they fire another weapon at the target. If there are accuracy issues or jamming or other problems going on, the defender doesn’t want the attacker to know about those either; likewise if the attacker is defeating jamming efforts or has resolved accuracy issues. The defender wants to keep the attacker as blind as it can, to deny them a useful battle damage assessment.

    In one extreme case of this, the UK in World War II had Nazi Germany firing V-2 rockets, early ballistic missiles, at it. Guidance systems at the time were primitive, limiting accuracy, and the British conducted an extensive disinformation effort, misreporting where rockets were hitting and working to deny Germany access to accurate information. This led Germany to consistently aim V-2s at the wrong place, because it was trusting that bad information for its battle damage assessment.

    https://en.wikipedia.org/wiki/V-2_rocket#Direct_attack_and_disinformation

    The only effective defences against the V-2 campaign were to destroy the launch infrastructure—expensive in terms of bomber resources and casualties—or to cause the Germans to aim at the wrong place by disinformation. The British were able to convince the Germans to direct V-1s and V-2s aimed at London to less populated areas east of the city. This was done by sending deceptive reports on the sites hit and damage caused via the German espionage network in Britain, which was secretly controlled by the British (the Double-Cross System).[79]

    EDIT: Another WW2 example that comes to mind: for some time, Japanese warships had been trying to depth-charge American submarines, but using an incorrect depth. A congressman released information to the public about this fact. That information then made its way to Japan, at which point the Japanese military corrected their weapon use.

    https://en.wikipedia.org/wiki/Andrew_J._May

    May was responsible for the release of highly classified military information during World War II known as the May Incident.[6] U.S. submarines had been conducting a successful undersea war against Japanese shipping during World War II, frequently escaping their anti-submarine depth charge attacks.[6][7] May revealed the deficiencies of Japanese depth-charge tactics in a press conference held in June 1943 on his return from a war zone junket.[6][7] At this press conference, he revealed the highly sensitive fact that American submarines had a high survival rate because Japanese depth charges were exploding at too shallow a depth.[6][7] Various press associations sent this leaked news story over their wires and many newspapers published it, including one in Honolulu, Hawaii.[6][7]

    After the news became public, Japanese naval antisubmarine forces began adjusting their depth charges to explode at a greater depth.[6][7] Vice Admiral Charles A. Lockwood, commander of the U.S. submarine fleet in the Pacific, estimated that May’s security breach cost the United States Navy as many as 10 submarines and 800 crewmen killed in action.[6][7] He said, “I hear Congressman May said the Jap depth charges are not set deep enough. He would be pleased to know that the Japs set them deeper now.”[6][7]


  • One minor thing that I am not super enthusiastic about when it comes to emojis is that they are typically colored. This has two drawbacks:

    • In a number of environments, it’s possible to set text color. That’s only really practical because most characters are not intrinsically colored, so the foreground color can be varied. If we start introducing colored characters in general, that stops working (see the sketch after this list). It also has at least the potential to create issues for colorblind users (though we could potentially also create workarounds).

    • It means that onscreen text may not be practical to present well in a monochrome environment, like a monochrome e-ink display or printed on paper. Traditionally, if you can see text onscreen, you can print it and it’s still legible on a monochrome printer. But, for example, there’s U+1FA75, LIGHT BLUE HEART: 🩵, and U+1FA77, PINK HEART: 🩷. Most non-sight-impaired users can probably distinguish between the two on a color display, but I suspect that if one were using them to encode meaning in text (maybe blue to indicate male and pink to indicate female, or something like that), they wouldn’t be easy to distinguish after being printed on a monochrome printer.
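
    As a sketch of the first point, here’s a minimal Python example (assuming a terminal with ANSI color support and a color-emoji font): the escape code recolors ordinary glyphs, but the emoji keeps its built-in colors.

    ```python
    # ANSI escape codes set the foreground color of ordinary glyphs, but
    # color-emoji glyphs carry their own colors, which most terminals
    # leave untouched, so the heart below stays light blue regardless.
    RED = "\033[31m"
    RESET = "\033[0m"
    print(f"{RED}this text renders red, but not the heart: \U0001FA75{RESET}")
    ```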

    Both of these are kind of minor complaints. In practice, I just don’t see a whole lot of emoji use, and haven’t run into practical issues. But I do think that if we wanted to adopt a writing system that incorporated color, I’d probably favor a more-considered approach than just throwing whatever someone happens to propose in.

    One other minor issue is that some emojis carry political or social weight that gets people upset. For example, you have U+1F52B, PISTOL.

    Some people felt that users shouldn’t be able to portray an actual pistol, so vendors changed the glyph to a water pistol. I personally think that the whole debate is kind of absurd, because one can just write “pistol”, but it clearly has been a topic of political infighting.

    https://en.wikipedia.org/wiki/Pistol_emoji

    The pistol emoji (U+1F52B 🔫 PISTOL) is an emoji defined by the Unicode Consortium as depicting a “handgun” or “revolver”.[1]

    It was historically displayed as a handgun on most computers (although Google once used a blunderbuss);[2] as early as 2013, Microsoft chose to replace the glyph with a ray gun,[3] and in 2016 Apple replaced their glyph with a water pistol.[4] Since then, its rendering has been inconsistent across vendors. Microsoft changed its glyph back to an icon of a revolver during 2016 and 2017, before switching it to a (differently-styled) ray gun; in 2018, Google and Samsung changed their devices’ rendering of the emoji to a water pistol,[2] as well as the websites Facebook and Twitter. In 2024, Twitter (by then known as “X”) chose to restore the glyph of a handgun, although instead of a revolver it used a semi-automatic M1911.[5]

    Based on the above, it looks like Elon Musk moved things back to being a classic American handgun.

    But, point is, you have this political spat and platform inconsistency going on (where the meaning imparted by someone’s text might reasonably change based on how the Unicode characters are rendered), and it’s not at all clear to me that anyone ever had a particular desire to embed a pistol in text in the first place, be it a water gun or semiautomatic pistol or revolver or whatever.

    I’ve seen people arguing about the skin color of characters in various emojis. In text, I can just say “sad person” without attaching additional information, but if I have a visual representation, then I have to choose things like skin color.

    It just seems like room for friction that doesn’t really need to exist.

    Oh, and another point — one of the things that initially seemed to me like a great application for Unicode emojis is flags, because in theory, those are designed to let one identify a country at a distance, and often people look at lists of countries. But…there are actually a lot of flags that look really similar to each other or are even identical, like the flag of Romania (U+1F1F7, U+1F1F4: 🇷🇴) and the flag of Chad (U+1F1F9, U+1F1E9: 🇹🇩). I remember some Romanians a while back poking fun at a Romanian politician who had inadvertently used the Chad flag in some important post on social media. I’d imagine that it’s more-obnoxious if someone does it in, say, a country-selection menu. In most cases, I think that it’s probably preferable to use ISO country codes rather than flag emojis if you really need a short form, or to just write out the name of the country fully.
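
    Incidentally, flag emojis aren’t single codepoints: they’re pairs of Regional Indicator Symbols derived directly from the two-letter ISO 3166-1 country code, so the flag carries exactly the same information as the code. A quick Python sketch:

    ```python
    def flag(iso_code: str) -> str:
        """Map a two-letter ISO 3166-1 country code to its flag emoji.

        Each ASCII letter is shifted into the Regional Indicator Symbol
        range (U+1F1E6..U+1F1FF); renderers draw the pair as a flag."""
        return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in iso_code.upper())

    print(flag("RO"))  # 🇷🇴 Romania
    print(flag("TD"))  # 🇹🇩 Chad, a nearly identical tricolor
    ```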


  • It’s 2026, and I still have a hard time seeing major gains from emojis.

    They are maybe useful for something like Twitter, where people had artificially-constrained message lengths, and wanted to pack as much information into as few characters as possible, but that seems like a pretty niche use.

    I get that conversational text has reasons to want to add information that normally comes out-of-band, via tone or expression, but it’s not clear to me that that requires emojis. I’ll use an emoticon for that. There’s a pretty small number of emotions that one really needs to indicate, so even if one wants to use an emoji, there just aren’t all that many that one needs.

    I think that there’s a case for a heart emoji. I certainly have seen people embedding hearts in handwritten text, so people do want to do that. I don’t, but I think that providing a way to do the same thing in typed text as one does in handwritten text is certainly reasonable.

    But…the overwhelming majority of emojis just don’t have an analog in handwritten text.

    On phones using on-screen keyboards, where text entry is slower, it might be faster to pick out an emoji than to write an associated word…but if that’s the goal, present-day phone on-screen keyboards also typically do predictive text, which is a more-general solution to the problem.

    I think that having Unicode include characters for various languages is nice; it lets one embed quotes from various languages together. Line-drawing characters are convenient for monospace-font text stuff. I like having some typographic characters, like printer’s quotes or em- and en-dashes, as well as superscript and subscript characters.

    But I’ve just never really benefited much from emojis. They don’t really hurt much, but I don’t feel that they’ve provided much of a benefit, either.


  • I care less about speakerphone use near me than I do about Bluetooth headset or regular handset use.

    The speakerphone makes more noise!

    Yes, but people already have conversations with each other in public where we can hear both sides. We train ourselves to tune those out. A speakerphone is analogous to that case of another human talking.

    What I find most disruptive about phone conversations near me, versus listening to two other people talking (which I can tune out), is that the speech pattern of a phone user is to say something and then pause. The problem is that that is exactly the signal that someone has said something to you and that your attention is required. I have a harder time ignoring those one-sided conversations than tuning out a conversation where I can hear both sides, because they’re basically constantly giving my head the “you just missed something and need to respond” signal. It’s like when someone says something to you, waits for a few seconds, and then your attention gets triggered and you look up and say “what?”

    Now, the article does also reference someone turning a speakerphone way up, and that I can get, if you’re playing it louder than a human would speak. But that’s also kind of a special case.

    I think that in general, the best practice is to text, and I think that most would agree that that’s uncontroversially the best approach in public. But after that, I’d personally rather have speakerphone use than headset or regular phone use.

    EDIT: One interesting approach — I mean, smartphone vendors would always like to have new reasons to sell more hardware, so if they can figure out how to make it work, they might jump on it — might be phones capable of picking up subvocalization.

    https://en.wikipedia.org/wiki/Subvocalization

    Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read.[1][2] This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load.[3]

    This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading.[3]

    You’d probably also need some sort of speech synthesizer rig capable of converting that into speech.

    A conversation where someone’s using headphones/earbuds and a subvocalization-pickup phone would avoid some of the limitations of texting (like text input speed on an on-screen keyboard, or having to look at the display), provide for more privacy for phone users, and not add to the sound pollution affecting other people in the environment.

    EDIT2: Other possibilities for the speaker side:

    Bone conduction

    This has actually been done, but has some limitations on the sound it can produce, and you need to have a device in contact with your head.

    https://en.wikipedia.org/wiki/Bone_conduction

    Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content even if the ear canal is blocked. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to the sound being conveyed through the air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing—as with bone-conduction headphones—or as a treatment option for certain types of hearing impairment. Bones are generally more effective at transmitting lower-frequency sounds compared to higher-frequency sounds.

    The Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user’s ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.[47]

    Phased-array speakers to produce directional sound

    Here, you need to have the device track its position and orientation relative to a given user’s ears, then have a phased array of speakers that each play the sound at just the right phase offset to produce constructive interference in the direction of the user’s ears — it’s beamforming with sound. Other users will have a hard time hearing the sound, which will be garbled and quieter for them, because of destructive interference in their direction.

    https://en.wikipedia.org/wiki/Beamforming

    Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception.[1] This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.

    We more-frequently use this for reception than for transmission, with microphone arrays, but you can make use of it for transmission too. You’ll need a minimum number of speakers in the array to be able to steer beams of sound, with constructive interference, toward a given number of listeners.
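
    To make the idea concrete, here’s a minimal delay-and-sum transmit beamforming sketch in Python (the speaker positions and listener location are made-up values): each speaker is delayed so that all emitted wavefronts arrive at the listener at the same moment, adding constructively there and arriving incoherently elsewhere.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

    def steering_delays(speaker_positions, listener):
        """Per-speaker firing delays (seconds) for delay-and-sum transmit
        beamforming: delay each speaker so that all emitted wavefronts
        reach `listener` simultaneously and add constructively there."""
        positions = np.asarray(speaker_positions, dtype=float)
        dists = np.linalg.norm(positions - np.asarray(listener, dtype=float), axis=1)
        # The farthest speaker fires first (zero delay); nearer ones wait.
        return (dists.max() - dists) / SPEED_OF_SOUND

    # Hypothetical 10 cm four-speaker line array, listener about 1 m away, off-axis:
    speakers = [(x, 0.0, 0.0) for x in (0.0, 0.033, 0.066, 0.10)]
    print(steering_delays(speakers, (0.5, 1.0, 0.0)))
    ```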




  • You mean for the Linux kernel specifically? Linux distributions?

    For software in general — not Linux-specific — updates fix bugs (some of which might be security-related) and add features.

    That may be too general to be useful, but the question doesn’t have much by way of specifics.

    I feel like maybe more context would make for better answers. Like, if what you’re asking is “I have a limited network connection, and I’d like to reduce or eliminate downloading of updates” or “I have a system that I don’t want to reboot; do I need to apply updates”, that might affect the answer.

    EDIT: Okay, you updated your post, and it sounds like it’s the Ubuntu distribution and the new release frequency that’s an issue.

    Well, if you want fewer updates and are otherwise fine with Ubuntu, you could try using Ubuntu LTS.

    https://ubuntu.com/about/release-cycle

    LTS releases

    LTS are released every two years and receive 5 years of standard security maintenance.

    LTS releases are the go-to choice for users who value stability and extended support. These versions are security maintained for 5 years with CVE patches for packages in the Main repository. They are recommended for production environments, enterprises, and long-term projects.

    You’ll still get security updates, but you won’t see new releases on a six-month basis.

    It can be nice to have a relatively-new kernel, as it means support for the latest hardware (like, say you have a desktop with a new video card), but if you have some system that’s working and you don’t especially want it to change, a lower frequency might be preferable for you.

    I use Debian myself, and Debian stable tends to have less-frequent new releases. You’ll normally get a new stable release every two years, with inter-release updates generally just being bugfixes; new features mostly arrive with each new stable release.

    https://www.debian.org/releases/

    Debian announces its new stable release on a regular basis. The Debian release life cycle encompasses five years: the first three years of full support followed by two years of Long Term Support (LTS).

    EDIT2: If you already have Ubuntu on your system, it looks like this is how one chooses between being notified of new LTS releases only or of all new releases.

    https://ubuntu.com/tutorials/upgrading-ubuntu-desktop#5-optional-upgrading-to-interim-releases

    Navigate to the ‘Updates’ tab and change the menu option titled ‘Notify me of a new Ubuntu version’ to For any new version.

    EDIT3: If you aren’t currently using LTS, I’d wait until an LTS release to switch, so that you don’t wind up on a release that isn’t getting updates. Looking at that Ubuntu release page, it looks like 26.04 is an LTS release. The Ubuntu versioning scheme refers to the year and month (26.04 being the fourth month of 2026). It’s the third month of 2026 right now, so the next release will be LTS, which makes now a good time to switch over to LTS notifications. You’ll get a release update notification next month; do that update, and you’ll be on LTS and won’t receive another notification for the next two years.
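
    For what it’s worth, the same choice can also be made from a terminal; if I recall correctly, it lives in /etc/update-manager/release-upgrades, where the Prompt value controls which new releases the update manager offers:

    ```
    # /etc/update-manager/release-upgrades
    [DEFAULT]
    # Prompt=lts    -> only offer upgrades to LTS releases
    # Prompt=normal -> offer each six-month release
    # Prompt=never  -> never offer a release upgrade
    Prompt=lts
    ```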


  • Setting aside mass transit use, I’d imagine that the relative impact of higher oil prices will probably be greater in the US than in somewhere like Europe. Europe already has relatively high pump prices, because the places I’ve looked at levy hefty fuel taxes, whereas the US has relatively low fuel taxation. That makes a given change in the cost of crude translate into a larger relative change in the price at the pump in the US.

    https://moneyweek.com/economy/uk-economy/budget/604621/what-makes-up-the-price-of-a-litre-of-petrol

    This has fuel duty in the UK (a consumption tax) at 39% of the price of petrol, plus VAT at 17%. Right there, that’s over half the price at the pump: 56%.

    The cost of the gasoline itself — and the crude required is only one input of that — is only 29% of the price at the pump.

    https://www.eia.gov/tools/faqs/faq.php?id=10&t=10

    For mid-2024, this has federal tax of 18.4 cents per gallon of gasoline, and average state taxes — sales and consumption tax in the US varies by state and municipality — of 32.61 cents per gallon of gasoline.

    https://fred.stlouisfed.org/series/APU000074714

    Average fuel price in February 2026 is $3.065/gallon.

    So taxation makes up about 17% of the price of fuel in the US.
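
    Working that out from the figures above (a quick Python check, using the EIA tax figures and the FRED average price quoted earlier):

    ```python
    # US federal and average state gasoline taxes (mid-2024, EIA), USD/gallon:
    federal_tax = 0.184
    state_tax = 0.3261
    # Average US gasoline price, February 2026 (FRED), USD/gallon:
    pump_price = 3.065

    tax_share = (federal_tax + state_tax) / pump_price
    print(f"{tax_share:.1%}")  # -> 16.6%, versus the roughly 56% UK figure above
    ```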

    EDIT: That being said, the US is also, these days, a net oil exporter. So there will be winners in the US, like oil extraction companies — but it won’t be vehicle operators.

    EDIT2: Actually, it’s probably slightly lower than 17% in the US, because it’s convention in the US to exclude sales tax in listing prices, so the $3.065 won’t actually be the post-tax pump price. It will include state consumption tax, though, so I don’t have a way to directly compute it from just those figures.



  • I don’t presently need to use any service that requires use of a smartphone. I’ve never had a smartphone tied to a Google/Apple account. I don’t even think that I currently have any apps from the Google Play Store on my phone — just open-source F-Droid stuff.

    It’s true that, hypothetically, you could depend on a service that requires you to use an Android or iOS app. There are services out there that do. Lyft, for example, looks like it requires use of an app, though Uber doesn’t appear to. I can’t speak to your specific situation, but at least where I am, in the US, I’ve never needed to use an Android or iOS app to make use of some class of service.

    But I will say that services track what people use, and if people keep using interfaces other than smartphone apps, that makes it more likely that services will continue to provide them.

    I can’t promise that nobody, somewhere in the world or in some particular country or city, will ever be required to use an Android or iOS app without an alternative, if not now then down the line. But someone in that position can, at least, limit their use to that app, rather than using smartphone apps more-broadly. I don’t make zero use of my smartphone software now — like, when I’m driving, I’ll use the open-source OsmAnd to navigate. I sometimes check for Lemmy updates when waiting in line or similar. I don’t normally listen to music while just walking around, but if I did, I’d use a music player on the phone rather than a laptop for it. But I try to shift my usage to the laptop as much as is practical.



  • I don’t intend to get rid of my smartphone, but I do carry a larger device with me, and try to use the phone increasingly as just a dumbphone and cell modem for that device to tether to.

    That may not be viable for everyone — it’s not a great solution to “I’m standing in line and want to use a small device one-handed”. iOS/Android smartphones are heavily optimized to use very little power, and any additional device means more power draw. It probably means carrying a larger case/bag/backpack of some sort with you. And most phone software is designed to be aware of cell network constraints, like acting differently based on whether your data connection is a cell network or a WiFi network.

    However, it doesn’t require shifting to a new phone ecosystem. It also makes any such future transition easier — if I have a lot of experience tied up in Android/iOS smartphone software, then there’s a fair bit of lock-in, since shifting to another platform means throwing out a lot of experience in that phone software. If my phone is just a dumbphone and a cell modem, then it’s pretty easy to switch.

    And it’s got some other pleasant perks. Phone OSes tend to be relatively-limited environments: fine for content consumption, like watching YouTube or something, but considerably less-capable in a wide range of software areas than desktop OSes. Compared to a smartphone, a laptop:

    • Deals with heat much better; a smartphone has very limited cooling.

    • Normally has a variety of external connectors: several USB ports, a headphones jack, maybe a wired Ethernet connector, maybe an external display jack. Due to very limited physical space, a smartphone usually gives you only a single USB-C connector and no on-phone headphones jack, so you’re probably looking at a USB hub or adapters and rigging up pass-through power if you want anything else.

    • Tends to have a larger battery, so it’s reasonable to use it to power external devices like trackballs, larger trackpads, keyboards, etc.

    • Gives you a larger display, so you don’t have to deal with the workarounds that smartphones use to make their small screens as usable as possible, and you don’t have the space constraints that make a touchscreen necessary, with your fingers in front of whatever you’re looking at (though you can get larger devices that do have touchscreens, if you want).

    • Offers far more hardware choices, and that hardware is more-customizable (in part because the hardware likely isn’t an SoC, though you can get an SoC-based laptop if you want).

    • Isn’t stuck with smartphone-style software support: “N years, tied to the phone hardware vendor, at which point you either use insecure software or throw the phone out and buy a new one”.






  • Note that this is excluding things like home equity, which is a pretty important exclusion.

    I wouldn’t call home equity the best place to stick assets — you can probably find better investments — but real estate also represents a substantial chunk of a typical American’s assets.

    This is talking about IRAs and 401(k)s and stuff like that, saying that Americans have an average of less than $1k in those.

    By and large, Americans probably should spend less on housing and should make more use of things like those — that’s a real takeaway — but it’s important to note that the article isn’t saying that Americans have an average of less than $1k in assets to live on in retirement.

    EDIT: That does seem low, though.

    Fidelity says that the average 401(k) has $144k in it.

    It’s technically possible to pull the average below $1k if enough people have no 401(k) or other retirement savings plan at all and you include them, but I have a hard time believing that that’s actually the case. The median might well be below $1k, but normally “average” means “mean”.

    https://www.fidelity.com/learning-center/smart-money/average-net-worth-by-age

    Average retirement account balance by age

    The average 401(k) retirement balance across all age groups is $144,400, according to Fidelity Investments’ Building Financial Futures Q3 2025 report.[3] Here is the average 401(k) account balance for different generations.

    Average 401(k) retirement account balance by generation

    Generation | Average 401(k) balance
    Baby boomers (born 1946–1964) | $267,900
    Gen X (born 1965–1980) | $217,500
    Millennials (born 1981–1996) | $80,700
    Gen Z (born 1997–2012) | $17,000

    Keep in mind that 401(k) account balances are just one chunk of someone’s net worth—and might even be just one part of their retirement savings. An investor could have long-term money saved in other types of retirement accounts or a brokerage account.

    That generational increase represents how normally, one accrues assets over one’s working life — it starts small and then grows.

    goes looking at the article

    https://www.nirsonline.org/wp-content/uploads/2026/02/NIRS_2026-Retirement-in-America-FINAL.pdf

    Page 18:

    The goal of contributing to a DC savings plan and aiming for a savings target is to accumulate assets for retirement, i.e., retirement wealth. The sample for this analysis is restricted to respondents ages 21-64 who have positive personal income, likely from a job, but possibly from other sources. Further restricting the sample to those for whom DC retirement wealth is positive and then examining the median values shows that the median amount of DC retirement wealth was $40,000 in December 2022 (Figure 17). This finding is only for those with at least one dollar saved in a DC plan. Examining all respondents ages 21-64, even if they have nothing saved for retirement, indicates that the median amount of DC retirement wealth is a meager $955.

    Okay, so that’s what’s going on. Basically, a lot of workers never set up a retirement savings plan, so they have $0 in one. The original report uses “median”, and somewhere along the chain of quotes that got converted to “average”. And then the title uses “retirement savings” rather than “retirement savings plans” or something that clearly indicates that it’s specifically talking about a class of savings plan.
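
    A toy illustration of how a median can collapse that way (the 60/40 split below is made up, not from the report): with enough zero-balance workers, the all-worker median drops to zero even while the median among savers, and the mean, stay high.

    ```python
    import statistics

    # Made-up split: suppose 60% of workers have $0 in a DC plan and the
    # other 40% hold $40,000 (the report's median among savers).
    balances = [0] * 60 + [40_000] * 40

    print(statistics.median(balances))                        # 0.0: median across everyone
    print(statistics.median([b for b in balances if b > 0]))  # 40000: median among savers
    print(statistics.mean(balances))                          # 16000: the mean stays high
    ```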

    EDIT2: The report is apparently making the case that more people should use 401(k) or similar plans, and more employers should offer them.

    Normally, when you’re getting hired at a new job, if it has a 401(k) plan, they’ll ask what you want to contribute. You likely want to max out your contribution if you can afford it. If they offer an employer match, it’s an even better idea.

    https://www.fidelity.com/learning-center/smart-money/average-401k-match

    How does a 401(k) match work?

    It’s like free money you don’t want to miss out on.

    A 401(k) match is when an employer puts money in an employee’s retirement account based on what the employee contributes. Match formulas vary, but a common setup is for employers to contribute $1 for every $1 an employee contributes up to 3% of their salary, then 50 cents on the dollar for the next 2% of an employee’s salary. Ideally, workers should aim to save 15% of their pre-tax income each year, including any match.

    More than 85% of 401(k) plans for which Fidelity is the service provider offer some type of employer contribution, according to Mike Shamrell, vice president of Thought Leadership at Fidelity. “As the largest service provider in the country with around 25,000 plans as of March 2025, our numbers are viewed as a good indicator of what’s going on across the retirement landscape,” he says.
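
    To make the quoted match formula concrete, here’s a small Python sketch (the salary and contribution rate are made-up example numbers):

    ```python
    def employer_match(salary: float, contribution_rate: float) -> float:
        """Common 401(k) match formula from the quote above: dollar-for-dollar
        on the first 3% of salary contributed, then 50 cents on the dollar
        for the next 2%."""
        match = salary * min(contribution_rate, 0.03)
        if contribution_rate > 0.03:
            match += 0.5 * salary * min(contribution_rate - 0.03, 0.02)
        return match

    # Contributing 5% of a $60,000 salary ($3,000) captures the full match:
    print(employer_match(60_000, 0.05))  # -> 2400.0
    ```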