We set up a GitHub Actions workflow with two separate jobs that run on every push to the main branch and on pull requests. Here's what happens automatically:
Lint Job: This job checks our code quality using ESLint. It runs on the latest Ubuntu environment, installs our dependencies with npm ci, and then runs our linting rules. If any code doesn't meet our style standards, the build fails and we know we need to fix it before merging.
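To make that concrete, here's a minimal sketch of what a lint job like this can look like. The action versions, Node version, and the lint script name are assumptions for illustration, not the exact contents of our workflow file:

```yaml
# .github/workflows/ci.yml (sketch) -- assumes a "lint" script in package.json
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci          # clean, reproducible install from package-lock.json
      - run: npm run lint    # fail the job if ESLint reports any errors
```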
Unit Tests Job: This job runs all our tests using Jest. We're using the experimental ES modules support in Node.js, which required adding some special flags. The tests make sure our code actually works as expected.
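For reference, this is roughly how a job like that can pass those flags. Jest's native ESM support relies on Node's experimental VM modules, and one common way to enable it is through the NODE_OPTIONS environment variable; the exact script name and flag placement in our real workflow may differ:

```yaml
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Jest's native ESM support needs Node's experimental VM modules flag
      - run: npm test
        env:
          NODE_OPTIONS: --experimental-vm-modules
```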
My partner's repo-snapshot project uses a different testing framework, Vitest, instead of Jest. He has coverage configured in his vitest.config.ts file, so code coverage is tracked automatically whenever tests run. His setup looks clean and modern since Vitest is built specifically for modern JavaScript projects. He also has a GitHub Actions CI workflow, but it's structured differently from mine. While I split our workflow into two separate jobs (lint and unit-tests) that run in parallel, he combines everything into a single build job. His workflow uses pnpm instead of npm, which is a faster alternative package manager. In his single job, he runs format checking, linting, and tests one after another. The downside is that if the format check fails, the remaining checks don't run until it's fixed.
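Here's a hedged sketch of what that single build job might look like. The pnpm version and the script names (format:check, lint, test) are my guesses for illustration rather than his actual workflow file:

```yaml
# Single-job pnpm workflow (sketch; script names and versions are assumptions)
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      # Steps run sequentially: if the format check fails, lint and tests never run
      - run: pnpm run format:check
      - run: pnpm run lint
      - run: pnpm test
```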
When I helped write tests, I focused on testing the parseTomlConfig function, which reads and parses TOML configuration files. I wrote test cases covering different scenarios: when the config file doesn't exist (should return an empty object), when it contains valid TOML content, when the TOML is malformed (should throw an error), and when using a custom config file path. One thing that made testing easier in his project was that Vitest has really nice mocking capabilities. I could easily mock the file system functions like existsSync and readFileSync so tests wouldn't depend on actual files existing on disk. The beforeEach hooks let me clean up mocks between tests, which kept everything isolated and predictable.
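A simplified version of those tests looks something like this. The module path, the return shapes, and the TOML samples are placeholders I'm using for illustration; the real tests in repo-snapshot are more thorough:

```ts
// parseTomlConfig.test.ts (sketch) -- module path and return shapes are assumptions
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { existsSync, readFileSync } from 'node:fs';
import { parseTomlConfig } from './parseTomlConfig';

// Replace the real fs functions with mocks so no files need to exist on disk
vi.mock('node:fs', () => ({
  existsSync: vi.fn(),
  readFileSync: vi.fn(),
}));

describe('parseTomlConfig', () => {
  beforeEach(() => {
    // Keep each test isolated from mock state left over by the previous one
    vi.resetAllMocks();
  });

  it('returns an empty object when the config file does not exist', () => {
    vi.mocked(existsSync).mockReturnValue(false);
    expect(parseTomlConfig()).toEqual({});
  });

  it('parses valid TOML content', () => {
    vi.mocked(existsSync).mockReturnValue(true);
    vi.mocked(readFileSync).mockReturnValue('title = "repo-snapshot"');
    expect(parseTomlConfig()).toMatchObject({ title: 'repo-snapshot' });
  });

  it('throws on malformed TOML', () => {
    vi.mocked(existsSync).mockReturnValue(true);
    vi.mocked(readFileSync).mockReturnValue('title = ');
    expect(() => parseTomlConfig()).toThrow();
  });
});
```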
Every time I push code, I know within a few minutes whether everything still works. I don't have to remember to run tests manually or worry that I forgot to check something. If someone else contributes to the project, the CI workflow automatically checks their code. This is huge because it means I don't have to manually review every detail - the automated checks catch obvious problems. The workflow file itself documents how to run tests and linting, which helps new contributors understand the project setup. Having that green checkmark on pull requests just feels good. It shows the project is well-maintained and takes quality seriously.
I did two optional challenges. First, I added a linter to the CI workflow. This was actually pretty straightforward once we understood the YAML syntax. We created a separate job specifically for linting that runs ESLint on all our JavaScript files. The key was making sure to install dependencies with npm ci before running the linter. This catches style issues and potential bugs before they make it into the codebase, and it has already saved us from some embarrassing typos and inconsistent formatting.

Second, I set up a Dev Container. I created a devcontainer.json file that defines a complete development environment using Node.js 20 on Debian Bookworm. The most important addition was the postCreateCommand that runs npm install and npm link. This ensures that whenever someone opens the project in a dev container, all dependencies are installed automatically and the repomaster command is available globally. Without npm link, the CLI tool wouldn't work properly inside the container.
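For anyone curious, the dev container setup boils down to a small config file along these lines. The image tag and name are assumptions based on the Node.js 20 / Debian Bookworm combination described above:

```jsonc
// .devcontainer/devcontainer.json (sketch)
{
  "name": "repomaster",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20-bookworm",
  // Install dependencies and link the CLI so the repomaster command works globally
  "postCreateCommand": "npm install && npm link"
}
```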
Building a CI pipeline isn't just about running tests automatically; it's about creating a development workflow that makes it easy to maintain quality. The linter keeps our code clean, the tests verify functionality, and the dev container ensures everyone has a consistent environment. Together, these tools make collaboration easier and give us confidence that our code works as expected.