Running Renovate Locally in Jenkins
All of the repositories I own on GitHub -- public and private -- have Dependabot configured to update repository dependencies. Since almost all repos have at least MegaLinter configured to run when commits are added to a pull request, there's always something that needs to be watched. My default template repo has seven workflows, none of which I want to manually review daily, especially when there are hundreds of repositories.
I have very little problem putting source code out on GitHub that's intended for public consumption, even if I'm the only one who ever looks at it. That said, I have a certain discomfort with storing Infrastructure as Code (IaC) in GitHub, even in private repositories.
Where it Hurts
Modern repositories multiply quietly. One service becomes three. Three become twelve. Before long, you are maintaining dozens or hundreds of repositories, each with its own workflows, linters, scanners, test runners, and release logic. Each repository may carry five, six, or seven GitHub Actions or Jenkins pipelines. Every dependency bump becomes a pull request. Every pull request triggers validation. The noise compounds.
Now multiply that by time.
A single dependency update rarely touches only one repository. Shared libraries drift. Container base images age. Transitive dependencies surface CVEs. Without automation, you are left manually scanning changelogs, running npm update, go get -u, or pip install --upgrade, committing changes, opening pull requests, and waiting for pipelines to pass. Each action may be individually small. The aggregate burden is not.
Infrastructure as Code (IaC) and Source Code Management (SCM)
However, IaC belongs in some kind of Source Code Management (SCM) tool; it has to go somewhere. So, I run a local instance of Gitea on my local cluster that isn't reachable from the public Internet. To access it, one must be on my local network (or connected to it via a Virtual Private Network (VPN)). This way, I can keep my IaC in a git repository without having to worry about who can see what. This doesn't negate good repository hygiene or established best practices (e.g., never put secrets, credentials, tokens, hostnames, or System Configuration Management Information (SCMI) into an SCM tool!), but it does give me an additional layer of protection.
In cases where my IaC needs to include tokens and such, I store them in a separate .env file which can be ingested at runtime and excluded from being added to commits. For example, if I have a docker-compose.yml file, I'll use an env_file directive to pull in an env file that does include the variables I don't want stored in a repo.
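A minimal sketch of that pattern, where the service name, image, and variable names are hypothetical:

```yaml
# docker-compose.yml -- the service reads its secrets from a local
# .env file that .gitignore keeps out of version control
services:
  app:
    image: registry.example.com/myapp:1.2.3   # placeholder image
    env_file:
      - .env        # e.g. API_TOKEN=..., DB_PASSWORD=...; never committed
    ports:
      - "8080:8080"
```

The repository can still ship a `sample.env` documenting which variables are expected, without their values.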
Keeping .env content out of the repo
In addition to a .gitignore that precludes those files from being added to commits, I have an additional layer in my .pre-commit-config.yaml file so that pre-commit will refuse to allow commits with files that are named like .env:
```yaml
# fail if a commit includes a file named '.env'
# BAD:
#   .env
#   foo/.env
#
# GOOD:
#   sample.env
#   env.sample
#   share/examples/sample.env
- repo: local
  hooks:
    - id: no-dotenv-files
      name: ".env files are not allowed."
      entry: "Files may not be named .env, .env.local, .env.development, etc."
      language: fail
      files: "^(.*[/])?[.]env([.](local|production|development)){0,2}$"
```
Running Renovate
Getting Renovate to run under Jenkins is quite straightforward, especially if you're using runners in a Linux environment that allows you to run Docker containers.
Renovate makes a Docker image available that can be pulled and used either on the command line or in a CI/CD system like Jenkins.
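For a command-line run against a self-hosted Gitea, Renovate can be configured entirely through environment variables. A minimal sketch, where the endpoint hostname and the token variable are placeholders for your own values:

```shell
# run Renovate from the official image against a self-hosted Gitea
docker run --rm \
  -e RENOVATE_PLATFORM=gitea \
  -e RENOVATE_ENDPOINT=https://gitea.local/api/v1 \
  -e RENOVATE_TOKEN="${GITEA_PAT}" \
  -e RENOVATE_AUTODISCOVER=true \
  renovate/renovate:latest
```

With autodiscovery enabled, Renovate scans every repository the token's service account can access; without it, repositories can be listed explicitly as arguments.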
The general concepts when using Gitea or GitHub are pretty similar, but there are nuances and small differences between the two that make one Jenkinsfile that works on both GitHub and Gitea a challenge, particularly when building Pull Requests (PRs). Since GitHub supports Dependabot natively, I'm not going to get into running Renovate on GitHub here. There's also an application one can use to simplify running Mend Renovate on GitHub as an app or as an action so I'm going to focus specifically on Gitea with Jenkins.
The Model
CI/CD: Jenkins
In my lab, I have Jenkins running in a FreeBSD jail on a TrueNAS CORE system, with runners on several different Linux servers where Docker may be used to run containerized workloads. That is, Jenkins interacts with agents on other systems to run containerized workloads. An HAProxy instance sits in front of Jenkins to terminate SSL/TLS connections. Neither Jenkins nor the HAProxy instance is accessible from the Internet.
SCM: Gitea
My lab has a containerized Gitea instance that's accessible locally. The file storage for Gitea is hosted on the NAS and the volumes are mounted using NFS. There's a central MySQL database to provide RDBMS services. MySQL and NFS can be problematic together, particularly due to file locking and fsync semantics, so I avoid placing the database data directory on NFS.
Caching Proxy: Nexus RM (optional)
I use Nexus Repository Manager as a caching proxy in front of the major Docker registries (DockerHub, GHCR, Quay, etc.) so that I can reference images through the local caching proxy. This gives me resilience in the event that the Internet connection is down, improved pull performance, and many fewer API calls.
So, instead of `renovate/renovate:latest`, I would use something like `dockerhub.local/renovate/renovate:latest` or, more likely, store the reference in a docker-compose.yml file that Renovate updates such that the tag (`:latest`) is replaced with a specific pinned hash.
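For example — the proxy hostname here matches the one above, but the tag and digest are placeholders:

```yaml
# docker-compose.yml -- image pulled through the local Nexus proxy,
# pinned to a tag plus digest that Renovate keeps current via PRs
services:
  renovate:
    # placeholder tag and truncated placeholder digest
    image: dockerhub.local/renovate/renovate:41.0.0@sha256:0123abcd...
```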
Workflow
Here's how it works:
- each repository on the local Gitea system must invite a service account to collaborate on that repository. Having autodiscover configured is helpful, but I also want to be careful about privilege scoping for the Gitea PATs (Personal Access Tokens)
- once a night (`H H * * *`, which is Jenkins-specific cron syntax), a pipeline runs on the Jenkins system
- the Jenkins system pulls the Jenkinsfile from its local repository hosted on Gitea
- the Jenkinsfile runs on a runner where it instantiates a Renovate container
- the Renovate container pulls configured repositories it can access and scans for dependencies to update.
- if any outdated dependencies are found, a Pull Request is submitted to the repository on the Gitea system with the updated dependencies. These dependencies are pinned to specific hashes, commits, etc. The PRs use the Renovate service account's PAT to authenticate when pushing the new commit(s) to the local Gitea server
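The workflow above can be sketched as a single declarative pipeline. This is a minimal sketch, not my exact setup: the agent label, the credential ID `gitea-renovate-pat`, and the `gitea.local` endpoint are all placeholders.

```groovy
// Jenkinsfile -- nightly Renovate run on a Docker-capable agent
pipeline {
    agent { label 'docker' }
    triggers { cron('H H * * *') }   // once a night, load-spread by Jenkins
    stages {
        stage('Renovate') {
            steps {
                // expose the service account's Gitea PAT without echoing it
                withCredentials([string(credentialsId: 'gitea-renovate-pat',
                                        variable: 'RENOVATE_TOKEN')]) {
                    sh '''
                      docker run --rm \
                        -e RENOVATE_PLATFORM=gitea \
                        -e RENOVATE_ENDPOINT=https://gitea.local/api/v1 \
                        -e RENOVATE_TOKEN \
                        -e RENOVATE_AUTODISCOVER=true \
                        dockerhub.local/renovate/renovate:latest
                    '''
                }
            }
        }
    }
}
```

Passing `-e RENOVATE_TOKEN` with no value forwards the variable from the shell environment into the container, so the PAT never appears in the pipeline log.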
Bonus Extras
- send along a `RENOVATE_GITHUB_COM_TOKEN` so that Renovate may pull release notes from GitHub for any updates pulled from GitHub. This is optional, but it's also common to quickly exhaust public, non-authenticated API calls to GitHub's APIs (e.g., 60 unauthenticated calls per hour vs 5,000 authenticated calls)
- if using a caching proxy for the image registry where the Renovate image is stored, be careful not to create systemic dependencies that become failure points in the event of an outage; there are mechanisms to make this work for DockerHub directly using mirror configuration for the Docker daemon, but this does not work with other registries (e.g., GHCR)
- the Folder Properties plugin for Jenkins can be very useful in setting environment variables that can be referenced in your Jenkins pipelines. This can be helpful when using multiple pipelines, when you don't want to include URLs or credential names in the Jenkinsfiles, etc.
- each repository may include a `renovate.json` file that will manage how dependencies are updated. I recommend grouping patch and minor updates into single PRs while each major update is in a separate PR.
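A minimal `renovate.json` implementing that grouping might look like the following; it uses Renovate's `packageRules` with `matchUpdateTypes` and `groupName`, and major updates stay in their own PRs by default:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "non-major dependencies"
    }
  ]
}
```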