All of the repositories I own on GitHub -- public and private -- have Dependabot configured to update repository dependencies. Since almost every repo runs at least MegaLinter whenever commits are added to a pull request, there's always something that needs watching. My default template repo has seven workflows, none of which I want to review manually every day, especially across hundreds of repositories.
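The Dependabot setup itself is small. A minimal `.github/dependabot.yml` along these lines (the ecosystems and schedules here are illustrative, not my actual configuration) is enough to generate that steady stream of pull requests:

```yaml
# .github/dependabot.yml -- a minimal sketch; the ecosystems and
# schedules are illustrative examples, not a real repo's config.
version: 2
updates:
  - package-ecosystem: "github-actions"   # keep workflow actions current
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"              # repository package dependencies
    directory: "/"
    schedule:
      interval: "daily"
```

Each entry produces its own pull requests on its own cadence, which is exactly how one template repo's worth of automation fans out into daily noise.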
I have very little problem putting source code out on GitHub that's intended for public consumption, even if I'm the only one who ever looks at that code. That said, I have a certain discomfort with storing Infrastructure as Code (IaC) in GitHub, even in private repositories.
Where It Hurts
Modern repositories multiply quietly. One service becomes three. Three become twelve. Before long, you are maintaining dozens or hundreds of repositories, each with its own workflows, linters, scanners, test runners, and release logic. Each repository may carry five, six, or seven GitHub Actions or Jenkins pipelines. Every dependency bump becomes a pull request. Every pull request triggers validation. The noise compounds.
Now multiply that by time.
A single dependency update rarely touches only one repository. Shared libraries drift. Container base images age. Transitive dependencies surface CVEs. Without automation, you are left manually scanning changelogs, running npm update, go get -u, or pip install --upgrade, committing changes, opening pull requests, and waiting for pipelines to pass. Each action may be individually small. The aggregate burden is not.
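Done by hand, that loop looks something like the sketch below. The branch name, commit message, and choice of update command are all illustrative; the point is how many steps each "small" update actually takes:

```shell
#!/usr/bin/env sh
# A sketch of the manual dependency-update loop described above.
# Branch name and commit message are hypothetical examples.
update_deps() {
  git switch -c chore/update-deps
  npm update             # or: go get -u ./...  or: pip install --upgrade -r requirements.txt
  git commit -am "chore: bump dependencies"
  git push -u origin chore/update-deps
  gh pr create --fill    # then wait for the pipelines to pass
}
```

Run once, it's a few minutes. Run across dozens or hundreds of repositories every time a shared library or base image moves, and it becomes a job in itself -- which is the work Dependabot is taking off the table.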