Dependency cooldowns turn you into a free-rider - Comments

chanux

I would argue the blind copy pasting, cargo cult orgs are less likely to be helpful anyway.

But I get the point: it's a numbers game, so any and all usage can help catch issues.

antonvs

Mature professionals and organizations have always waited to install updated dependencies in production, with exceptions for severe security issues such as zero day attacks.

"Free riding" is not the right term here. It's more a case of being the angels in the saying "fools rush in where angels fear to tread".

If the industry as a whole were mature (in the sense of responsibility, not age), upgrades would be tested in offline environments and rolled out once they pass that process.

Of course, not everyone has the resources for that, so there's always going to be some "free riding" in that sense.

That dilutes the term, though. Different organizations have different tolerance for risk, different requirements for running the latest stuff, different resources. There's always going to be asymmetry there. This isn't free riding.

usefulcat

> Frankly, dependency cooldowns work by free-riding on the pain and suffering of others.

I suspect there are some reasonable points to be made here, but frankly, I pretty much stopped reading after that. Way too simple-minded.

asdfasgasdgasdg

I think the appeal to the categorical imperative is very interesting, though. Someone needs to try it. If everyone were wise, as you term it, then it's essentially a stalemate while you wait for someone else to blink first and update.

Then again, there are other areas where I feel that Kantian ethics also fail on collective action problems. The use of index funds, for example, can be argued against along the same lines as we argue against waiting to update. (That is, if literally everyone uses index funds, then price discovery stops working.) I wonder if this argument fails because it ignores that there is a diversity of preferences. Some organizations might be more risk-averse, some less so. Maybe that's the only observation that needs to be made to defeat the argument.

8note

It'd be better for the title to be about upload queues and distribution, rather than free-loading.

I don't know if one of the touted benefits is really real - sometimes you need to be able to jump changes to the front of the queue and get them out ASAP.

Hacked credentials will definitely be using that path. It gives you another risk signal, sure, but the power sticks around.

twotwotwo

The topic of cooldowns just shifting the problem around got some discussion on an earlier post about them -- what I said there is at https://lobste.rs/s/rygog1/we_should_all_be_using_dependency... and here's something similar:

- One idea is for projects not to update each dep just X hours after release, but on their own cycles, every N weeks or such. Someone still gets bit first, of course, but not everyone at once, and for those doing it, any upgrade-related testing or other work also ends up conveniently batched.

- Developers legitimately vary in how much they value getting the newest and greatest vs. minimizing risk. Similar logic to some people taking beta versions of software. A brand new or hobby project might take the latest version of something; a big project might upgrade occasionally and apply a strict cooldown. For users' sake, there is value in any projects that get bit not being the widely-used ones!

- Time (independent of usage) does catch some problems. A developer realizes they were phished and reports, for example, or the issue is caught by someone looking at a repo or commit stream.

As I lamented in the other post, it's unfortunate that merely using an upgraded package for a test run often exposes a bunch of a project's keys and so on. There are more angles to attack this from than solely when to upgrade packages.

ArcHound

The core point is of course solid. By not updating on day 0, maybe somebody else spends the effort to discover the issue and you don't have to. But there are plenty of other benefits to not running the newest and greatest versions.

I'd argue for intentional dependency updates. It just so happens that it's identified in one sprint and planned for the next one, giving the team a delay.

First of all, sometimes you can reject the dependency update. Maybe there is no benefit in updating. Maybe there are no important security fixes brought by an update. Maybe it breaks the app in one way or another (and yes, even minor versions do that).

After you know why you want to update the dependency, you can start testing. In an ideal world, somebody would look at the diff before applying this to production. I know how this works in the real world, don't worry. But you have the option of catching this. If you automatically update to the newest, you don't have this option.

And again, all these rituals give you time - maybe someone will identify attacks faster. If you perform these rituals, maybe that someone will be you. Of course, it is better for the business to skip this effort because it saves time and money.

bnjemian

Okay sure, but what happens when a high CVE is discovered that requires immediate patching – does that get around the Upload Queue? If so, it's possible one could opportunistically co-author the patch and shuttle in a vulnerability, circumventing the Upload Queue.

If you instead decide that the Upload Queue can't be circumvented, now you're increasing the duration a patch for a CVE is visible. Even if the CVE disclosure is not made public, the patch sitting in the Upload Queue makes it far more discoverable.

Best as I can tell, neither of these fairly obvious issues is covered in this blog post, but they clearly need to be addressed for Upload Queues to be a good alternative.

--

Separately, at least with NPM, you can define a cooldown in your global .npmrc, so the argument that cooldowns need to be implemented per project is, for at least one (very) common package manager, patently untrue.

# Wait 7 days before installing
npm config set min-release-age 7

vlovich123

This literal example is actually addressed by the Debian example - the security team has powers to shuttle critical CVEs through but it’s a manual review process.

There’s a bunch of other improvements they call out like automated scanners before distribution and exactly what changed between two distributed versions.

The only oversight in the proposal, I think, is staggered distribution: projects would declare a UUID and the distribution queue would progressively make a release available rather than all-or-nothing.

xg15

> Okay sure, but what happens when a high CVE is discovered that requires immediate patching

I'm pretty sure, once cooldowns are widely implemented, the first priority of attackers will become to convince people to make an exception for their update because "this is really really urgent" etc.

onionisafruit

The people who will benefit from a cooldown weren’t reviewing updates anyway. Without the cooldown they would just be one more malware victim. If you don’t review code before you update, it just makes sense to wait until others have. Despite what the article says, the only people who benefit from a rush to update are the malware spreaders.

whoamii

Cooldown is merely a type of flighting. Specifically, picking a flight beyond canary.

skybrian

It's open source. Free riding is expected and normal. We all benefit from the work of others.

If you're not doing the work yourself, it makes sense to give the people who review and test their dependencies some time to do their work.

ryanjshaw

This doesn’t solve the problem either, which is that of the Confused Deputy [1]. An arbitrary piece of code I’m downloading shouldn’t be able to run as Ryan by default with access to everything Ryan has.

We need to revitalize research into capabilities-based security on consumer OSs, which AFAIK is the only thing that solves this problem. (Web browsers - literally user “agents” - solve this problem with capabilities too: webapps get explicit access to resources, no ambient authority to files, etc.)

Solving this problem will only become more pressing as we have more agents acting on our behalf.

[1] https://en.wikipedia.org/wiki/Confused_deputy_problem

_3u10

I’ve never seen code that is downloaded run itself. Why not be the change you want to see in the world and run sudo or spawn your browser in a jail. Or download as another user.

2001zhaozhao

> Dependency cooldowns turn you into a free-rider

Avg tech company: "that's perfect, we love to be free riders."

8cvor6j844qw_d6

Not everyone has the same update cycle. That's not free-riding. The framing around not being on the latest version as irresponsible doesn't hold up.

pamcake

Right.

Not to mention the (apparently not obvious?) option of detaching review- and release versions. We still look at the diff of latest versions of dependencies before they reach our codebase. That seems like the most responsible.

Besides, why stop there? Everyone installing packaged builds from NPM are already freeriding from those installing sources straight from Github releases. smh

suzzer99

Yeah this. If I don't buy the new iPhone XX.0 but instead wait for XX.1, which could include software and hardware fixes, does that make me a free rider?

dominicq

> Fundamental in the dependency cooldown plan is the hope that other people - those who weren't smart enough to configure a cooldown - serve as unpaid, inadvertent beta testers for newly released packages.

This is wrong to an extent.

This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.

Their incentive is to be the first to publish a blog post about a cool new attack that they discovered and that their solution can prevent.

riknos314

Sure, but the alternative the author proposes not only allows for time for those scanners to run but explicitly models that time as a formal part of the release process.

Status quo (at least in most languages' package managers) + cooldowns basically means that running those checks happens in parallel with the new version becoming the implicit default version shipped to the public. Isn't it better to run the safety and security checks before making it the default?

renewiltord

Exactly. In fact, we as a society pay them the same way we should pay artists: exposure.

absynth

Security people should love a delay in distribution as packages wait in the queue. Then they have an opportunity to report before anyone else.

arianvanp

I feel like this is false. These companies mostly seem to monitor social media and security mailing lists with an army of LLMs, then republish someone else's free labor as LLM-slop summaries as fast as possible, whilst using dodgy SEO practices to get picked up quickly.

They do do original work sometimes. But most of it feels like reposted material from the open source community, or even from other vendors.

_kulang

I just feel like this problem is something where unfettered capitalism does not work. What we are discussing here is a public utility, and should be managed as such

p0w3n3d

It keeps me thinking that every company loves "those guys" who create OpenSource but won't give them a broken penny, nor support them in any other way

Servants! Just do your open source magic, We're impatient! Ah and thanks for all the code, our hungry hungry LLMs were starving.

joeframbach

> Python has multiple package managers at this point (how many now? 8?). All must implement dependency cooldowns.

No, nobody _has to_ implement it, and if only one did, then users who wanted cooldowns can migrate to that package manager.

charcircuit

One thing not addressed is the incentive for large software packages to make their own repositories that bypass this queue in order to have instant updates.

kartika36363

This is like guilting me about carbon offsets while there are mountains of burning tires in Kuwait.

renewiltord

Sure, in the way that people who only use Debian stable are free-riding, or Rust stable users are free-riding on nightly users.

qsera

One thing I don't understand about cooldowns: if everybody uses cooldowns, then there is no effective cooldown. Then you'll have to keep increasing the cooldown period to get the advantage...

nikanj

The admins of the hacked project are likely to notice the hack in a day or two. Malicious actors are a separate concern, but hacks can be mitigated with cooldowns even if everyone was using them

JoshTriplett

The primary benefit of cooldowns isn't other people upgrading first, it's vulnerability scanning tools and similar getting a chance to see the package before you do.

fendy3002

There are parties that don't want that cooldown: library and software authors. The XZ Utils backdoor was found by Microsoft and PostgreSQL developer Andres Freund due to high CPU usage (or latency? CMIIW) during SSH tests; those are the people who will keep the same workflow.

Terr_

That can sometimes be true, but the reverse is also problematic: Uniform automatic updates can turn some users who were happy with the status-quo into unwitting guinea pigs for unexpected features and changes, without informed consent.

All else being equal, I'd rather the people who desire the new features be the earlier-adopters, because they're more likely to be the ones pushing for changes and because they're more likely to be watching what happens.

gleenn

Those tools aren't floating in the ether: someone has to download and run them, automated or otherwise. I think the suggestion is to make that a step before publication, as the post suggests.

BrenBarn

Or you could just, like, not update things immediately just because you can. It's wild that we now refer to it as a "cooldown" to not immediately update something. The sane way would be each user upgrades when they feel they need to, and then updates would naturally be staggered. The security risks of vulnerabilities are magnified by everyone rushing to upgrade constantly.

moron4hire

Frankly, this reads as someone going way too far to be contrary. Yeah, sure, act utilitarianism is different from rule utilitarianism. News at 11. But most developers don't get the luxury of fighting for the greater good. Most are fighting to keep their paycheck flowing so they can eat. What I'm saying is, insecure software comes from organizational dysfunction, not "bad developers adopting software too quickly." It's a corporate political problem that you're attempting to solve with technical management.

unethical_ban

Hoo boy.

Anyone on the IT ops side of things knows the adage that you don't run ".0" software. You wait a while to let the kinks get worked out by those who can afford the risk of downtime, and for the vendors to find and work out bugs in new software on their own.

Are conservative, uptime-oriented organizations "free-riders" for waiting to install new software on critical systems? Is that a sin, as this implies?

The answer is no. It's certainly a quandary - someone has to run it first. But a little time to let it bake in labs and low-risk environments is worth it.

BlackFly

I think what you actually want is audit sharing as the cooldown gate. No audit shared with the community yet? The package is still in cooldown. Or you can risk it and run unaudited dependencies, or audit it yourself and potentially share that audit.

It seems to me that many organizations are relying on other companies to do their auditing in any case, why not just admit that and explicitly rely on that? Choose who you trust, accept their audits. Organizations can perform or even outsource their own auditing and publish that.

https://mozilla.github.io/cargo-vet/
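For reference, cargo-vet (linked above) records shared audits in a supply-chain/audits.toml file that other organizations can import; a minimal illustrative entry (the crate name and auditor here are invented):

```toml
# supply-chain/audits.toml
[[audits.some-crate]]
who = "Jane Auditor <jane@example.com>"
criteria = "safe-to-deploy"
version = "1.2.3"
```

Projects that trust an organization can then pull in its published audits instead of re-reviewing every dependency themselves.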

vasco

If lawmakers understood even an iota of technology they'd be trying to legislate using your ID card to upload npm dependencies with more than 10k downloads instead of for watching porn.

But alas.

Dumbledumb

Would staying at an LTS version instead of running my production workloads on the bleeding edge also be free-riding, because I am depriving the community of my testing?

Dedime

The brilliance of the implementation of cooldowns: For someone to go download and run it, automated or otherwise, they simply follow the standard installation process.

Users who want to take the extra precaution of waiting an additional period of time must manually configure this in their tooling.

This practice has been a thing in the sysadmin community for years and years - most sysadmins know that you never install Windows updates on the day they release.

Having a step before publication means that it's essentially opt-in pre-release software, and that comes with baggage - I have zero doubt that many entities who download packages to scan for malware explicitly exclude pre-release software, or don't discover it at all until it's released through normal channels.

burnto

Yes the publish-distribute delay pattern looks like a reasonable design.

But you’re not a “free-rider” if you intentionally let others leap before you. You’re just being cautious, which is rational behavior and should be baked into assumptions about how any ecosystem actually works.

usernametaken29

I think this article is largely theoretical in nature. I have almost never updated a dependency in a commercial product in a timely fashion, unless it was explicitly a vulnerability fix. I believe very few companies will do that. Upgrades cause friction, so people do as few of them as possible anyway. I was confused about the terminology to begin with, because in a decade of software development I never had to advocate for slowing down dependency updates... that sounds like absolute wishful thinking. Maybe we can pay money to audit new releases of software we depend on, sure, but that is an entirely different issue.

jcalvinowens

One thing people miss is that bugs in open source are much much easier to fix when you catch them right away. You find more bugs when you test aggressively, but the effort per bug is usually significantly lower.

I think the key is to differentiate testing from deployment: you don't need to run bleeding edge everywhere to find bugs and contribute. Even running nightly releases on one production instance will surface real problems.

dingdongditchme

Having skimmed the article, I understand the title. While I agree on some level, I wholly disagree on another: to me, "dependency cooldown" is a way to automate something as old as time: the late adopter, the laggard. Although I am a tech nerd and like the latest stuff, I have almost always let other people try it out first. I've missed out on some things because of it, but if you are more conservative in your actions it just happens naturally. I think it is OK to have a dependency cooldown; in fact, not everybody should update to the newest stuff right away. It's good to have cascaded updates. See the CrowdStrike incident in 2024. If some people want to be later in the chain, so be it. They will also miss important security updates by their cooldown time. I'd advocate for the feature despite never having used it. So "collectively rational" in my mind.

swiftcoder

The problem is making it a default (or even popular). If everyone tries to move themselves later in the chain, you just moved detection later in the chain as well

bob1029

You can do this everywhere, not just with libraries. I take great pleasure in using the old 2022 LTS builds of Unity. The stability of these products is incredible compared to the latest versions. In Unity 6 I simply have to ignore console errors; in 2022 they are much more meaningful.

Think about how much cumulative human suffering must be experienced to bring you stable and effective products like this. Why hit the reset button right when things start getting good every time?

internet_points

Then I sincerely hope my bank and doctor and government offices are all free-riders.

Dependency cooldowns, like staged update rollouts, mean less brittleness / more robustness in that not every part of society is hit at once. And the fact that cooldowns are not evenly distributed is a good thing. Early adopters and vibe coders take more chances, banks should take less.

But yeah, upload queues also make sense. We should have both!

leni536

I am surprised I don't hear about vim/neovim/vscode plugin supply chain attacks. They feel like a similarly lucrative target to language package managers.

UqWBcuFx6NV4r

I genuinely don’t know why this warranted a blog post at all, let alone such an accusatory one, let alone now, when everyone has already talked this to death.

gcupc

Free riding is a well established industry best practice.

yossarian

I wrote the original (?) cooldown post that’s linked in this response.

I think this post is directionally accurate (cooldowns are a form of free-riding, which is the goal for mostly unpaid open source maintainers), but misses a key part of the original argument: you’re not free-riding on other maintainers, but instead on a number of “supply chain security” companies that are financially incentivized to find malware as quickly as possible.

The recent wave of open source malware demonstrates this (as originally speculated in my post): Trivy, LiteLLM, etc. were detected by scanning parties, not by users being victimized; victimization also happened, but wasn’t actually necessary for timely discovery at all. That’s the core premise behind cooldowns.

I agree with the points about configurability and variance, however. It’s not clear to me that different tools within an ecosystem (much less different ecosystems) will ever align on cooldowns beyond the high-level idea. I’m also not sure it’s a good use of anyone’s time to fight that battle; I’ve mostly thought of cooldowns as a layer atop lockfiles, so the “goal” is to lock the cooled-down dependencies once and practice discipline when updating them. Easier said than done!

Edit: I should also say, I essentially agree with the idea that an upload queue is more correct than a client side cooldown. But it’s also a harder political problem: it requires indices to become more active participants in overriding the queue for vulnerability releases, for example. This is, in my experience, a hard sell for the maintainers of these indices (insofar as it’s more work).

k749gtnc9l3w

I guess centralising the decision to circumvent the delay for a security update is the main benefit over everyone tracking the security news and trying to work around their own cooldowns (then buying fake vulnerability news and installing the attacker's release anyway).

Variance itself might not even be that bad. It is certainly more convenient for an attacker to know in advance whether their backdoor goes unnoticed and gets widely deployed than to do per-target investigation of cooldown policies.

Also, I wonder if some of the package managers decide that grabbing for-review access from the upload queue is a feature, and then variability is back…

calpaterson

I think I failed to explain that the most important (and undesirable) free-riding, in my view, is of, e.g., commercial users of, say, the litellm package waiting 2 weeks to adopt it when personal users do not. In that case, as you say, the victimisation is happening and serves pretty much no purpose. We just need to wait longer for scanners to run pre-distribution. Hence queues instead.

One of the examples I give in the article is Debian, which is effectively an upload queue for broken and buggy FOSS projects. Debian is old and runs on a much smaller budget than I bet NPM ever has.

I concede there would be cost in switching to the Debian kind of orientation for them, but I think most of the work is in the switchover, not in doing it once you've switched. Package indexes are necessarily headed for a future of social co-ordination ("yanking releases, maintaining embargoes, dealing with typosquatting and coordinating 0days"). I think managing a slight delay to package distribution is a small but very worthwhile addition to those responsibilities.

It's kind of unfortunate that all the language package managers have been "instant distribution" for so long as that is a mechanism which is so specifically vulnerable to supply chain attacks. It's worth at least some pause to think about why that didn't happen with linux distributions, but did with PyPI.

reidrac

What I don't understand is how delaying dependency updates by default is a good policy. What happens if you have a dependency that has a security vulnerability or a bug affecting reliability? Would you delay installing the fix two weeks because of the cooldown?

Managing dependencies is hard and the industry has been ignoring the problem for a long time. The "cooldown" sounds a bit like one of those "life hacks" more than an actual strategy.

nicoco

You can override the cooldown for a specific package if you need.

I work in academia. In my lab, everybody does conda install xxx or pip install xxx where xxx in an obscure package with 18,149 transitive dependencies every day. It is hard to quantify, but I am pretty sure a one-week minimal package publishing date policy would accomplish a lot already. Definitely not a silver bullet. Definitely not addressing the OP concerns here. But it would have prevented the recent llm-something package takeover.
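A one-week minimum-age policy is easy to express as a filter over release upload times; a minimal sketch (the package versions and timestamps below are hypothetical, not real PyPI data):

```python
from datetime import datetime, timedelta, timezone

def versions_past_cooldown(upload_times, cooldown_days=7, now=None):
    """Return versions whose upload time is at least `cooldown_days` old.

    upload_times: dict mapping version string -> upload datetime (UTC).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=cooldown_days)
    return sorted(v for v, t in upload_times.items() if t <= cutoff)

# Hypothetical upload times for an imaginary package
now = datetime(2025, 1, 15, tzinfo=timezone.utc)
uploads = {
    "1.0.0": datetime(2024, 12, 1, tzinfo=timezone.utc),
    "1.1.0": datetime(2025, 1, 5, tzinfo=timezone.utc),
    "1.2.0": datetime(2025, 1, 14, tzinfo=timezone.utc),  # one day old: blocked
}
print(versions_past_cooldown(uploads, cooldown_days=7, now=now))  # → ['1.0.0', '1.1.0']
```

In practice the upload times would come from the index's metadata (e.g. PyPI's JSON API), and the resolver would simply ignore versions that are too young.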

kryptiskt

It's not a problem. You can just update it explicitly.

In every thread about vulnerabilities in language package repositories, there is always someone claiming that we should go back to getting all our packages from distros. There is truth in that being more secure, but the reason it is more secure is not the vetting. Distro maintainers can't vet packages more than cursorily (if you don't believe me, ask a maintainer); there is no added security there. But they usually have an extreme cooldown period (even rolling-release distros delay packages a little), and that helps them avoid a lot of issues with freshly baked software.

k749gtnc9l3w

Well, first of all there are many deep-deps that are nontrivial to exploit from network (through all the layers that might fail and cancel the operation on funny input…) but can access network if actively and overtly malicious code is injected into them. You'd probably prefer to keep those somewhat buggy and vulnerable to exotic attacks rather than a bit less buggy most of the time but sometimes fully taken over.

calpaterson

> What I don't understand is how delaying dependency updates by default is a good policy. What happens if you have a dependency that has a security vulnerability or a bug affecting reliability? Would you delay installing the fix two weeks because of the cooldown?

It's a good default because allowing a little time prior to adopting a package allows people (and automated scanners, who are improving quite quickly) to notice security issues.

And yes, I agree: there will certainly be overrides that you want for that. Deliberately delaying the adoption, for example, of OpenSSL 1.0.1g (which fixed heartbleed) would be counter-productive.

Part of the question here, imo, is how and where you achieve those two goals. Dependency cooldowns are an anti-social way to achieve the first goal and are totally counter-productive to the second.

squarism

This reminds me of cron jobs that aren't really about time. Unless I have a genuinely date-based thing, say a birthday, I don't want time in it. I know I'm over-fitting here, but I'm working on a tool that replaces these cron-fallback situations, so it's been a theme (likely not original or newly discovered) for me.

I mean, 2 days of cooldown is trying to proxy for "it's probably reviewed, probably vetted"? I don't know what the real signal is or could be, but I don't think it is the 2 days part. So my hunch, and I understand that this is not easy, is that there's something else hidden in there. When the article talks about a queue and, essentially, promotion, that's closer to what I mean. Then it's not about time or the cooldown; it's about a logical event: publication -> distribution.

kryptiskt

Of course I'm a free rider, I can't conceivably be engaged with every package I use. Such is life. What I don't see is how waiting and seeing before using releases makes me any more of a free rider.

Package cooldowns approximate a gradual rollout, which is used by many apps and web services to limit the blast radius when releases have problems. I don't see it as a bug that users get to choose which group they're in, it's a nice feature that you can set your own risk profile. And the best part is that no coordination is needed at all, it requires no cooperation from package registries or other users. Now that the idea is out there I doubt that this genie can be put back into the bottle. No matter how many epithets for it you can come up with.

calpaterson

> I don't see it as a bug that users get to choose which group they're in

People who were unlucky enough to run pip install litellm at an unfortunate moment were not consciously selecting a higher risk profile for themselves. They were just naive and unlucky. I think adopting a security posture for the whole ecosystem that relies on such a person biting into the cherry before you is anti-social in the extreme

talideon

The idea of having an upload queue is an interesting one, though I think applying some kind of jitter to when you take up a dependency is probably still a good thing. Consumers probably don't want to shove their noses in the trough the moment a dependency update becomes available, regardless of the presence of an upload queue. Now, the question is whether you apply that jitter randomly when the update is detected or use some kind of deterministic mechanism based on some project metadata, a secret of some kind, and dependency metadata.
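A deterministic mechanism of that sort could be as simple as keying an HMAC on a per-project secret plus the dependency's name and version; a sketch under that assumption (all names here are invented for illustration):

```python
import hmac
import hashlib

def cooldown_jitter_days(project_secret: bytes, package: str, version: str,
                         base_days: int = 7, spread_days: int = 7) -> int:
    """Deterministic per-project cooldown: base_days plus a stable,
    secret-dependent offset in [0, spread_days). Different projects land
    on different days, so not everyone adopts (or gets bitten) at once."""
    digest = hmac.new(project_secret, f"{package}=={version}".encode(),
                      hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % spread_days
    return base_days + offset

# Same inputs always give the same delay; different secrets spread projects out.
d1 = cooldown_jitter_days(b"project-a-secret", "leftpad", "1.2.0")
d2 = cooldown_jitter_days(b"project-b-secret", "leftpad", "1.2.0")
print(d1, d2)  # each stable per project, somewhere in [7, 14)
```

Because the offset is derived from a secret, an attacker can't predict which projects adopt a poisoned release first, yet each project's schedule is stable and reproducible.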

k749gtnc9l3w

In terms of «free-riding» or not, if I am installing new releases quickly anyway, I might actually prefer that other people delay: economics of attacking everyone is better for the attackers than getting some installs on VPSes hosting personal blogs, so for the same amount of carelessness I should be attacked less not more. If everyone installs at once, any auto-updating setup is not getting any protection from the others being victimised!

orib

I don't think dependency cooldowns are it. I don't have a great source, but my gut says that compromises take longer to find than most people are willing to cool down for. There are some sites [1][2] which say that the average supply chain attack takes 267 days to find, so if you cooled down for that long you'd skip on the order of half of the attacks (yeah, yeah, average isn't median), but I don't trust them -- they feel a bit sloppy and don't cite properly.

What's the 95th percentile? Do you need to stick to dependencies older than a year?

[1] https://deepstrike.io/blog/supply-chain-attack-statistics-2025
[2] https://www.breachsense.com/blog/supply-chain-attack-examples/

Also, anyone actually have numbers I can trust?

k749gtnc9l3w

I think another issue with such reports is that they classify threats from a paying customer's point of view, which is always a setup large enough to be worth targeting/exploiting one by one. The 4 million dollars median damages in one of the links seems a revealing scale-setting number. Cooldowns are relevant rather in the case of lower-effort, wide-spectrum attacks.

travisgriggs

Surfing the bleeding edge or free-riding, I just wish these “thought leaders” would advocate for FEWER dependencies.

calpaterson

Are you able to appoint me thought leader? If so: call your mum more often


8note

It'd be better for the title to be about upload queues and distribution, rather than free-loading.

I don't know if one of the touted benefits is really real - you need to be able to jump changes to the front of the queue and get them out ASAP sometimes.

Hacked credentials will definitely be using that path. It gives you another risk signal, sure, but the power sticks around.

twotwotwo

The topic of cooldowns just shifting the problem around got some discussion on an earlier post about them -- what I said there is at https://lobste.rs/s/rygog1/we_should_all_be_using_dependency... and here's something similar:

- One idea is for projects not to update each dep just X hours after release, but on their own cycles, every N weeks or such. Someone still gets bit first, of course, but not everyone at once, and for those doing it, any upgrade-related testing or other work also ends up conveniently batched.

- Developers legitimately vary in how much they value getting the newest and greatest vs. minimizing risk. Similar logic to some people taking beta versions of software. A brand new or hobby project might take the latest version of something; a big project might upgrade occasionally and apply a strict cooldown. For users' sake, there is value in any projects that get bit not being the widely-used ones!

- Time (independent of usage) does catch some problems. A developer realizes they were phished and reports, for example, or the issue is caught by someone looking at a repo or commit stream.

As I lamented in the other post, it's unfortunate that merely using an upgraded package for a test run often exposes a bunch of a project's keys and so on. There are more angles to attack this from than solely when to upgrade packages.

ArcHound

The core point is of course solid. By not updating on day 0, maybe somebody else spends the effort to discover the problem and you don't have to. But there are plenty of other benefits to not rolling with the newest and greatest versions.

I'd argue for intentional dependency updates. It just so happens that it's identified in one sprint and planned for the next one, giving the team a delay.

First of all, sometimes you can reject the dependency update. Maybe there is no benefit in updating. Maybe there are no important security fixes brought by an update. Maybe it breaks the app in one way or another (and yes, even minor versions do that).

After you know why you want to update the dependency, you can start testing. In an ideal world, somebody would look at the diff before applying this to production. I know how this works in the real world, don't worry. But you have the option of catching this. If you automatically update to the newest, you don't have this option.

And again, all these rituals give you time - maybe someone will identify attacks faster. If you perform these rituals, maybe that someone will be you. Of course, it is better for the business to skip this effort because it saves time and money.

bnjemian

Okay sure, but what happens when a high-severity CVE is discovered that requires immediate patching – does that get around the Upload Queue? If so, it's possible one could opportunistically co-author the patch and smuggle in a vulnerability, circumventing the Upload Queue.

If you instead decide that the Upload Queue can't be circumvented, now you're increasing the duration a patch for a CVE is visible. Even if the CVE disclosure is not made public, the patch sitting in the Upload Queue makes it far more discoverable.

Best as I can tell, neither of these fairly obvious issues is covered in this blog post, but they clearly need to be addressed for Upload Queues to be a good alternative.

--

Separately, at least with NPM, you can define a cooldown in your global .npmrc, so the argument that cooldowns need to be implemented per project is, for at least one (very) common package manager, patently untrue.

# Wait 7 days before installing
npm config set min-release-age 7

vlovich123

This exact scenario is actually addressed by the Debian example: the security team has the power to shuttle critical CVE fixes through, but it's a manual review process.

There’s a bunch of other improvements they call out like automated scanners before distribution and exactly what changed between two distributed versions.

The only oversight in the proposal, I think, is staggered distribution: projects declare a UUID, and the distribution queue progressively makes releases available rather than all or nothing.
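That staggered distribution could look like an ordinary percentage rollout (a hypothetical sketch of the idea, not any registry's actual mechanism): hash the consumer's declared UUID together with the release to get a stable bucket, then raise the visible percentage over time.

```python
import hashlib

def release_visible(consumer_uuid: str, package: str, version: str,
                    rollout_percent: int) -> bool:
    """Each consumer lands in a stable bucket 0-99 per release; the release
    becomes visible to more buckets as the registry raises rollout_percent."""
    h = hashlib.sha256(f"{consumer_uuid}:{package}:{version}".encode()).digest()
    bucket = int.from_bytes(h[:4], "big") % 100
    return bucket < rollout_percent

# At 10% only ~10% of consumers resolve the new version; at 100%, everyone does.
```

Because the bucket is per-release, no single consumer is always first in line, which spreads the guinea-pig role around instead of concentrating it.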

xg15

> Okay sure, but what happens when a high CVE is discovered that requires immediate patching

I'm pretty sure, once cooldowns are widely implemented, the first priority of attackers will become to convince people to make an exception for their update because "this is really really urgent" etc.

onionisafruit

The people who will benefit from a cooldown weren’t reviewing updates anyway. Without the cooldown they would just be one more malware victim. If you don’t review code before you update, it just makes sense to wait until others have. Despite what the article says, the only people who benefit from a rush to update are the malware spreaders.

whoamii

Cooldown is merely a type of flighting. Specifically, picking a flight beyond canary.

skybrian

It's open source. Free riding is expected and normal. We all benefit from the work of others.

If you're not doing the work yourself, it makes sense to give the people who review and test their dependencies some time to do their work.

ryanjshaw

This doesn’t solve the problem either, which is that of the Confused Deputy [1]. An arbitrary piece of code I’m downloading shouldn’t be able to run as Ryan by default with access to everything Ryan has.

We need to revitalize research into capabilities-based security on consumer OSs, which AFAIK is the only thing that solves this problem. (Web browsers - literally user “agents” - solve this problem with capabilities too: webapps get explicit access to resources, no ambient authority to files, etc.)

Solving this problem will only become more pressing as we have more agents acting on our behalf.

[1] https://en.wikipedia.org/wiki/Confused_deputy_problem

_3u10

I’ve never seen code that is downloaded run itself. Why not be the change you want to see in the world and run sudo or spawn your browser in a jail? Or download as another user.

2001zhaozhao

> Dependency cooldowns turn you into a free-rider

Avg tech company: "that's perfect, we love to be free riders."

8cvor6j844qw_d6

Not everyone has the same update cycle. That's not free-riding. The framing of not being on the latest version as irresponsible doesn't hold up.

pamcake

Right.

Not to mention the (apparently not obvious?) option of decoupling review versions from release versions. We still look at the diff of the latest versions of dependencies before they reach our codebase. That seems like the most responsible approach.

Besides, why stop there? Everyone installing packaged builds from NPM is already freeriding on those installing sources straight from GitHub releases. smh

suzzer99

Yeah this. If I don't buy the new iPhone XX.0 but instead wait for XX.1, which could include software and hardware fixes, does that make me a free rider?

dominicq

> Fundamental in the dependency cooldown plan is the hope that other people - those who weren't smart enough to configure a cooldown - serve as unpaid, inadvertent beta testers for newly released packages.

This is wrong to an extent.

This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.

Their incentive is to be the first to publish a blog post about a cool new attack that they discovered and that their solution can prevent.

riknos314

Sure, but the alternative the author proposes not only allows for time for those scanners to run but explicitly models that time as a formal part of the release process.

Status quo (at least in most languages' package managers) + cooldowns basically means that running those checks happens in parallel with the new version becoming the implicit default shipped to the public. Isn't it better to run the safety and security checks before making it the default?

renewiltord

Exactly. In fact, we as a society pay them the same way we should pay artists: exposure.

absynth

Security people should love a delay in distribution as packages wait in the queue. Then they have an opportunity to report before anyone else.

arianvanp

I feel like this is false. These companies mostly seem to monitor social media and security mailing lists with an army of LLMs and then republish someone else's free labor as an LLM slop summary as fast as possible whilst using dodgy SEO practices to get picked up quickly.

They do do original work sometimes. But most of it feels like reposted stuff from the open source community or even other vendors

_kulang

I just feel like this problem is something where unfettered capitalism does not work. What we are discussing here is a public utility, and should be managed as such

p0w3n3d

It keeps me thinking that every company loves "those guys" who create open source but won't give them a single penny, nor support them in any other way.

Servants! Just do your open source magic, we're impatient! Ah, and thanks for all the code, our hungry hungry LLMs were starving.

joeframbach

> Python has multiple package managers at this point (how many now? 8?). All must implement dependency cooldowns.

No, nobody _has to_ implement it, and if only one did, then users who wanted cooldowns can migrate to that package manager.

charcircuit

One thing not addressed is the incentive for large software packages to make their own repositories that bypass this queue in order to have instant updates.

kartika36363

This is like guilting me about carbon offsets when there are mountains of burning tires in Kuwait.

renewiltord

Sure, in the way that people who only use Debian stable are free riding, or people using stable Rust are free riding on nightly users.

qsera

One thing I don't understand about cooldowns: if everybody uses them, then there is no effective cooldown. Then you'll have to keep increasing the cooldown period to get the advantage...

nikanj

The admins of the hacked project are likely to notice the hack in a day or two. Malicious actors are a separate concern, but hacks can be mitigated with cooldowns even if everyone was using them.

JoshTriplett

The primary benefit of cooldowns isn't other people upgrading first, it's vulnerability scanning tools and similar getting a chance to see the package before you do.

fendy3002

There are parties that don't want that cooldown: library and software authors. The XZ Utils backdoor was found by Andres Freund, a Microsoft engineer and PostgreSQL developer, due to high CPU usage (or latency? CMIIW) during SSH tests; those are the people who will keep the same workflow.

Terr_

That can sometimes be true, but the reverse is also problematic: Uniform automatic updates can turn some users who were happy with the status-quo into unwitting guinea pigs for unexpected features and changes, without informed consent.

All else being equal, I'd rather the people who desire the new features be the earlier-adopters, because they're more likely to be the ones pushing for changes and because they're more likely to be watching what happens.

gleenn

Those tools aren't floating in the ether: someone has to go download it and run it in some way, automated or otherwise. I think the suggestion is to make that a step before publication as the post suggests.

BrenBarn

Or you could just, like, not update things immediately just because you can. It's wild that we now refer to it as a "cooldown" to not immediately update something. The sane way would be each user upgrades when they feel they need to, and then updates would naturally be staggered. The security risks of vulnerabilities are magnified by everyone rushing to upgrade constantly.

moron4hire

Frankly, this reads as someone going way too far to be contrary. Yeah, sure, Act Utilitarianism is different than Rule Utilitarianism. News at 11. But most developers don't get the luxury of fighting for the greater good. Most are fighting to keep their paycheck flowing so they can eat. What I'm saying is, insecure software comes from organizational dysfunction, not "bad developers adopting software too quickly." It's a corporate political problem that you're attempting to solve with technical management.

unethical_ban

Hoo boy.

Anyone on the IT Ops side of things knows the adage that you don't run ".0" software. You wait a while to let the kinks get worked out by those who can afford the risk of downtime, and to let the vendors find and work out bugs in the new software on their own.

Are conservative, uptime-oriented organizations "free-riders" for waiting to install new software on critical systems? Is that a sin, as this implies?

The answer is no. It's certainly a quandary - someone has to run it first. But a little time to let it bake in labs and low-risk environments is worth it.

BlackFly

I think what you actually want is audit sharing as the cooldown condition. No audit shared with the community yet? The package is still in cooldown. Or you can risk it and run unaudited dependencies, or audit them yourself and potentially share that audit.

It seems to me that many organizations are relying on other companies to do their auditing in any case, why not just admit that and explicitly rely on that? Choose who you trust, accept their audits. Organizations can perform or even outsource their own auditing and publish that.

https://mozilla.github.io/cargo-vet/
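That policy is easy to state as code (a sketch of the idea only; the auditor names and the 14-day fallback are placeholders, and cargo-vet's real on-disk format differs): a release leaves cooldown as soon as an auditor you trust has vouched for it, with a hard time limit as the fallback for never-audited releases.

```python
from datetime import datetime, timedelta, timezone

# Illustrative trust list; in practice this would come from configuration.
TRUSTED_AUDITORS = {"mozilla", "our-own-security-team"}

def release_allowed(published_at: datetime, shared_audits: set,
                    max_cooldown: timedelta = timedelta(days=14)) -> bool:
    """Audit-gated cooldown: a trusted audit lifts the cooldown immediately;
    otherwise the release must simply age past max_cooldown."""
    if shared_audits & TRUSTED_AUDITORS:
        return True
    return datetime.now(timezone.utc) - published_at >= max_cooldown
```

The appeal is that the cooldown stops being a fixed guess about how long detection takes and becomes an explicit statement about whose review you are waiting for.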

vasco

If lawmakers understood even an iota of technology they'd be trying to legislate using your ID card to upload npm dependencies with more than 10k downloads instead of for watching porn.

But alas.

Dumbledumb

Would staying at an LTS version instead of running my production workloads on the bleeding edge also be free-riding, because I am depriving the community of my testing?

Dedime

The brilliance of the implementation of cooldowns: For someone to go download and run it, automated or otherwise, they simply follow the standard installation process.

Users who want to take the extra precaution of waiting an additional period of time must manually configure this with their tooling.

This practice has been a thing in the sysadmin community for years and years - most sysadmins know that you never install Windows updates on the day they release.

Having a step before publication means that it's essentially opt-in pre-release software, and that comes with baggage - I have zero doubt that many entities who download packages to scan for malware explicitly exclude pre-release software, or don't discover it at all until it's released through normal channels.

burnto

Yes the publish-distribute delay pattern looks like a reasonable design.

But you’re not a “free-rider” if you intentionally let others leap before you. You’re just being cautious, which is rational behavior and should be baked into assumptions about how any ecosystem actually works.

usernametaken29

I thought that this article is largely theoretical in nature. I have almost never updated a dependency in a commercial product in a timely fashion, unless it was explicitly a vulnerability fix. I believe very few companies will do that. Upgrades cause friction, so people do as few of them as possible anyway. I was confused about the terminology to begin with, because in a decade of software development I never had to advocate for slowing down dependency updates … that sounds like absolutely wishful thinking. Maybe we can pay money to audit new releases of software we depend on, sure, but that is an entirely different issue.

jcalvinowens

One thing people miss is that bugs in open source are much much easier to fix when you catch them right away. You find more bugs when you test aggressively, but the effort per bug is usually significantly lower.

I think the key is to differentiate testing from deployment: you don't need to run bleeding edge everywhere to find bugs and contribute. Even running nightly releases on one production instance will surface real problems.

dingdongditchme

Having skimmed the article, I understand the title. While I agree on some level, I wholly disagree on another: to me, "dependency cooldown" is a way to automate something as old as time: the late adopter, the laggard. Although I am a tech nerd and like the latest stuff, I have almost always let other people try it out first. I've missed out on some things because of it, but if you are more conservative in your actions it just happens naturally.

I think it is OK to have a dependency cooldown; in fact, not everybody should update to the newest stuff right away. It's good to have cascaded updates - see the CrowdStrike incident in 2024. If some people want to be later in the chain, so be it. They will also miss out on important security updates by their cooldown time. I'd advocate for the feature despite never having used it. So "collectively rational" in my mind.

swiftcoder

The problem is making it a default (or even popular). If everyone tries to move themselves later in the chain, you just moved detection later in the chain as well

bob1029

You can do this everywhere, not just with libraries. I take great pleasure in using the old 2022 LTS builds of Unity. The stability of these products is incredible compared to the latest versions. I simply have to ignore console errors in Unity 6; in 2022 LTS they are much more meaningful.

Think about how much cumulative human suffering must be experienced to bring you stable and effective products like this. Why hit the reset button right when things start getting good every time?

internet_points

Then I sincerely hope my bank and doctor and government offices are all free-riders.

Dependency cooldowns, like staged update rollouts, mean less brittleness / more robustness in that not every part of society is hit at once. And the fact that cooldowns are not evenly distributed is a good thing. Early adopters and vibe coders take more chances, banks should take less.

But yeah, upload queues also make sense. We should have both!

leni536

I am surprised I don't hear about vim/neovim/vscode plugin supply chain attacks. Feels like a similarly lucrative target to language package managers.

UqWBcuFx6NV4r

I genuinely don’t know why this warranted a blog post at all, let alone such an accusatory one, and especially not now, when everyone has already talked this to death.

gcupc

Free riding is a well established industry best practice.

yossarian

I wrote the original (?) cooldown post that’s linked in this response.

I think this post is directionally accurate (cooldowns are a form of free-riding, which is the goal for mostly unpaid open source maintainers), but misses a key part of the original argument: you’re not free-riding on other maintainers, but instead on a number of “supply chain security” companies that are financially incentivized to find malware as quickly as possible.

The recent wave of open source malware demonstrates this (as originally speculated in my post): Trivy, LiteLLM, etc. were detected by scanning parties, not by users being victimized; victimization also happened, but wasn’t actually necessary for timely discovery at all. That’s the core premise behind cooldowns.

I agree with the points about configurability and variance, however. It’s not clear to me that different tools within an ecosystem (much less different ecosystems) will ever align on cooldowns besides the high-level idea. I’m also not sure it’s a good use of anyone’s time to fight that battle; I’ve mostly thought of cooldowns as a layer atop lockfiles, so the “goal” is to lock the cooled-down dependencies once and practice discipline when updating them. Easier said than done!

Edit: I should also say, I essentially agree with the idea that an upload queue is more correct than a client side cooldown. But it’s also a harder political problem: it requires indices to become more active participants in overriding the queue for vulnerability releases, for example. This is, in my experience, a hard sell for the maintainers of these indices (insofar as it’s more work).

k749gtnc9l3w

I guess centralising the decision to circumvent the delay for a security update is the main benefit over everyone tracking the security news and trying to work around their own cooldowns (then buying fake vulnerability news and installing the attacker's release anyway).

Variance itself might not even be that bad. It is certainly more convenient for an attacker to know in advance whether the backdoor will go unnoticed and be widely deployed than to do per-target investigation of cooldown policies.

Also, I wonder if some of the package managers decide that grabbing for-review access to the upload queue is a feature, and then the variability is back…

calpaterson

I think I failed to explain that the most important (and undesirable) free riding, in my view, is that of, e.g., commercial users of the litellm package waiting 2 weeks to adopt it while personal users do not. In that case, as you say, the victimisation is happening and serves pretty much no purpose. We just need to wait longer for scanners to run pre-distribution. Hence queues instead.

One of the examples I give in the article is Debian, who are effectively an upload queue for broken and buggy FOSS projects. Debian is old and runs on a much smaller budget than I bet NPM has ever done.

I concede there would be cost in switching to the debian-kind of orientation for them, but I think most of the work is in the switchover and not in doing it once you've switched. Package indexes are necessarily headed for a future of social co-ordination ("yanking releases, maintaining embargoes, dealing with typosquatting and coordinating 0days"). I think managing a slight delay to package distribution is a small but very worthwhile addition to those responsibilities.

It's kind of unfortunate that all the language package managers have been "instant distribution" for so long, as that is a mechanism which is so specifically vulnerable to supply chain attacks. It's worth at least some pause to think about why that didn't happen with Linux distributions, but did with PyPI.

reidrac

What I don't understand is how delaying dependency updates by default is a good policy. What happens if you have a dependency that has a security vulnerability or a bug affecting reliability? Would you delay installing the fix two weeks because of the cooldown?

Managing dependencies is hard, and the industry has been ignoring the problem for a long time. The "cooldown" sounds more like one of those "life hacks" than an actual strategy.

nicoco

You can override the cooldown for a specific package if you need.

I work in academia. In my lab, everybody does conda install xxx or pip install xxx, where xxx is an obscure package with 18,149 transitive dependencies, every day. It is hard to quantify, but I am pretty sure a one-week minimum package age policy would accomplish a lot already. Definitely not a silver bullet. Definitely not addressing the OP's concerns here. But it would have prevented the recent llm-something package takeover.
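A client-side sketch of that one-week policy (assuming PyPI's per-project JSON endpoint, https://pypi.org/pypi/NAME/json, whose file entries carry an upload_time_iso_8601 field): pick the newest release that has already survived a week of public scrutiny.

```python
import json
from datetime import datetime, timedelta, timezone
from urllib.request import urlopen

def newest_allowed(releases: dict, min_age: timedelta = timedelta(days=7)) -> str:
    """Given a PyPI-style {version: [file, ...]} mapping, return the newest
    version whose first file upload is at least min_age in the past."""
    cutoff = datetime.now(timezone.utc) - min_age
    candidates = []
    for version, files in releases.items():
        if not files:  # versions can exist with no uploaded files
            continue
        first_upload = min(datetime.fromisoformat(f["upload_time_iso_8601"])
                           for f in files)
        if first_upload <= cutoff:
            candidates.append((first_upload, version))
    return max(candidates)[1]  # newest qualifying release by upload time

def fetch_releases(name: str) -> dict:
    # PyPI's JSON API exposes the release/file metadata consumed above.
    with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        return json.load(resp)["releases"]
```

Note that `datetime.fromisoformat` only accepts the trailing "Z" that PyPI emits on Python 3.11+; older interpreters would need it normalised to "+00:00" first.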

kryptiskt

It's not a problem. You can just update it explicitly.

In every thread about vulnerabilities in language package repositories, there is always someone claiming that we should go back to getting all the packages from distros. There is truth in the claim that it is more secure, but the reason it is more secure is not the vetting. Distro maintainers can't vet packages more than cursorily (if you don't believe me, ask a maintainer), so there is no added security there. But they usually have an extreme cooldown period (and even rolling-release distros delay packages for a little while), and that helps them avoid a lot of issues with freshly baked software.

k749gtnc9l3w

Well, first of all there are many deep-deps that are nontrivial to exploit from network (through all the layers that might fail and cancel the operation on funny input…) but can access network if actively and overtly malicious code is injected into them. You'd probably prefer to keep those somewhat buggy and vulnerable to exotic attacks rather than a bit less buggy most of the time but sometimes fully taken over.

calpaterson

> What I don't understand is how delaying dependency updates by default is a good policy. What happens if you have a dependency that has a security vulnerability or a bug affecting reliability? Would you delay installing the fix two weeks because of the cooldown?

It's a good default because a little time prior to adopting a package allows people (and automated scanners, which are improving quite quickly) to notice security issues.

And yes, I agree: there will certainly be overrides that you want for that. Deliberately delaying the adoption, for example, of OpenSSL 1.0.1g (which fixed heartbleed) would be counter-productive.

Part of the question here, imo, is how and where you achieve those two goals. Dependency cooldowns are an anti-social way to achieve the first goal and are totally counter-productive to the second.

squarism

This reminds me of cron jobs that aren't really about time. Unless I have a date based thing, let's say a birthday, then I don't want time in it. I know I'm over-fitting here but I'm working on a tool that replaces these cron fallback situations so it's kind of been a theme (likely not original or discovered) for me.

I mean, 2 days of cooldown is trying to proxy for "it's probably reviewed, probably vetted"? I don't know what the real signal is or could be but I don't think it is the 2 days part. So my hunch, and I understand that this is not easy, is that there's something else hidden in there. When the article talks about a queue and basically promotion, that's closer to what I mean. Then it's not about time or the cooldown, it's about a logical event: "publication -> distribution"

kryptiskt

Of course I'm a free rider, I can't conceivably be engaged with every package I use. Such is life. What I don't see is how waiting and seeing before using releases makes me any more of a free rider.

Package cooldowns approximate a gradual rollout, which is used by many apps and web services to limit the blast radius when releases have problems. I don't see it as a bug that users get to choose which group they're in, it's a nice feature that you can set your own risk profile. And the best part is that no coordination is needed at all, it requires no cooperation from package registries or other users. Now that the idea is out there I doubt that this genie can be put back into the bottle. No matter how many epithets for it you can come up with.

calpaterson

> I don't see it as a bug that users get to choose which group they're in

People who were unlucky enough to run pip install litellm at an unfortunate moment were not consciously selecting a higher risk profile for themselves. They were just naive and unlucky. I think adopting a security posture for the whole ecosystem that relies on such a person biting into the cherry before you is anti-social in the extreme
