Gitlab CI / CD pipeline integration with TestCollab

See how GitLab CI/CD can be used to run automated tests and update the results back in TestCollab

Written by Vishal Vaswani

This article walks through a simple example that uses GitLab Continuous Integration / Continuous Deployment (Delivery) and updates the test results back in TestCollab.

The Basics

This section is for those who do not know much about GitLab CI/CD. If you are already familiar with these concepts, you can skip to the next section.

  • Why GitLab? GitLab is one of the most widely known names when it comes to CI/CD.

  • How does CI/CD help? The term gained popularity with the adoption of agile methodologies by software teams.

    Development teams grew and application updates had to be delivered more frequently, so teams came to rely more on automation for testing, and for getting the application delivered and deployed to the production server automatically, keeping it available to end users while reducing human intervention.

    Technically speaking, CI/CD is largely a bridge between the development and operations teams, hence the term DevOps was coined. Dive into the details here

Pipelines, Runners, Jobs and Stages

A pipeline can be understood as the series of steps that need to be performed across the various stages, such as build, test, and deploy (and/or deliver).

In GitLab, .gitlab-ci.yml is the key file in the project that tells the runner how the various jobs are to be performed and which stage each job belongs to.

Jobs in the same stage can run in parallel (unless configured otherwise), while the stages run sequentially. By default, the failure of a job in a stage aborts the entire pipeline, but this behaviour can be changed. More details
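
To make this concrete, here is a minimal, generic .gitlab-ci.yml sketch, not the pipeline used in this article, with placeholder job names and commands; it only illustrates how jobs map to stages:

stages:
  - test
  - report

unit-tests:
  stage: test
  script:
    - echo "run the test suite here"

publish-report:
  stage: report
  script:
    - echo "publish or upload the results here"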

Back to our task

The pipeline we are going to use has two jobs, each with a stage of the same name: run-tests and update-results.

  • run-tests runs the Cypress tests and saves the mochawesome report as an artifact.

  • update-results creates a test plan in TestCollab and updates the test results in it.

The stages

stages:
  - run-tests
  - update-results

The stages run one after the other, and update-results is executed even if the run-tests job fails, so that the test cases in TestCollab are updated in either case.

The run-tests job

run-tests:
  stage: run-tests
  image: cypress/browsers:node18.12.0-chrome106-ff106
  artifacts:
    when: always
    paths:
      - mochawesome-report/
  allow_failure: true
  script:
    - npm install mochawesome
    - npx cypress run --browser firefox

Since the run-tests job needs Cypress to run the tests, the image used is "cypress/browsers:node18.12.0-chrome106-ff106".

The artifacts are uploaded in all cases, even if the job fails (when: always), and the path collected for them is the "mochawesome-report" folder in the repository.

allow_failure: true tells the runner not to abort the pipeline if the job fails.

The script section first installs the necessary mochawesome library and then runs the Cypress tests in the specified browser.

Note: the Cypress test script is already present in the repository being used.
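
For reference, the update-results job later reads mochawesome-report/mochawesome.json, so the Cypress run is expected to write its results there in JSON format. If your own repository does not already configure this, one common way (an illustration, not taken from the article's repository) is to pass the mochawesome reporter options on the command line:

script:
  - npm install mochawesome
  - npx cypress run --browser firefox --reporter mochawesome --reporter-options reportDir=mochawesome-report,overwrite=false,html=false,json=true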

The update-results job

update-results:
  stage: update-results
  image: node:18.14.2
  dependencies:
    - run-tests
  artifacts:
    when: always
    paths:
      - testplanId.txt
  script:
    - |
      npm install axios
      npm install moment
      npm install fs
      rm -f testplanId.txt
      node testPlanManagement.js --APIToken=$TCAPIToken --projectId=$TCProjectId --tagId=$TCTagId --assigneeId=$TCAssigneeId
      TPId=$(cat testplanId.txt)
      if [[ $TPId =~ ^[0-9]+$ ]]
      then
        export NODE_ENV=production
        npm install -g testcollab-cypress-plugin
        uploadTCRunResult --apiToken=$TCAPIToken --projectId=$TCProjectId --companyId=$TCCompanyId --testPlanId=$TPId --mochaJsonResult=/builds/vishal_giga/NewCI/mochawesome-report/mochawesome.json
      else
        echo "$TPId is not a number."
      fi

A Node.js (v18.14.2) image is used for this job.

The run-tests job is listed under dependencies so that its artifacts (the mochawesome report) are available to this job.

The script section

What the script section does:

It first installs the Node packages required by the script we are going to use, and removes any previously generated testplanId.txt file.

The nodejs script explained

The Node.js script (testPlanManagement.js) is run from a file in the repository and uses the TestCollab API to add a new test plan to the TestCollab project.

Arguments

The Node script is passed the following TestCollab-related values (a sketch of supplying them as CI/CD variables follows this list):

  • The API token of a user who has the right to create test plans in the project; here are the steps to generate an API token if you have not done so yet

  • The id of the project where the test plan needs to be created

  • The id of the tag that will be used to filter the test cases to be added to the test plan

  • The id of the user who will be assigned these test cases in the test plan
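
These values are read from the CI/CD variables $TCAPIToken, $TCProjectId, $TCTagId and $TCAssigneeId used in the update-results job. Here is a minimal sketch of supplying them, with placeholder values; in a real project the API token should be stored as a masked CI/CD variable under Settings > CI/CD > Variables rather than committed to the repository:

variables:
  TCProjectId: "123"      # placeholder project id
  TCTagId: "45"           # placeholder tag id
  TCAssigneeId: "6789"    # placeholder user id
  # TCAPIToken and TCCompanyId are best defined as protected/masked
  # CI/CD variables in the project settings, not hard-coded here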

How the value is returned

The Node script saves the id of the created test plan in a file named testplanId.txt.

If a test plan has been created successfully, its id is read from the file and stored in the shell variable TPId.

Using the plugin to update test results in TestCollab

This section focuses on the use of the plugin that TestCollab offers to automatically update the test results.

The next step sets the environment variable NODE_ENV to production.

The testcollab-cypress-plugin is installed globally; this makes a new command, uploadTCRunResult, available for use.

Finally, the uploadTCRunResult command performs the task of updating the results in TestCollab. It requires the following arguments (a combined example follows the list):

  • apiToken - the same token we used in the previous section

  • companyId - the id of the company (registered in TestCollab)

  • projectId - the id of the project where the test cases and the test plan are (see the previous section)

  • testPlanId - the id of the test plan that we created in the previous step

  • mochaJsonResult - the path of the JSON results file generated by the run-tests job under the mochawesome-report folder (see the artifacts section of the previous job)
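
Putting it together, the call made by the update-results job looks like this (the values come from the CI/CD variables, the TPId shell variable and the run-tests artifact; $CI_PROJECT_DIR is GitLab's predefined variable for the repository checkout path, which is what the hard-coded /builds/... path in the job resolves to):

uploadTCRunResult \
  --apiToken=$TCAPIToken \
  --projectId=$TCProjectId \
  --companyId=$TCCompanyId \
  --testPlanId=$TPId \
  --mochaJsonResult=$CI_PROJECT_DIR/mochawesome-report/mochawesome.json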

Based on all the details supplied, the status of the executed test cases is updated in TestCollab, and the newly generated logs, including error details in case of failure, are appended as a comment on the relevant test case.

The nodejs script

// testPlanManagement.js: creates a test plan in TestCollab, bulk-adds the test cases
// filtered by tag, assigns them to a user, and writes the new test plan id to testplanId.txt

const axios = require('axios');
const moment = require('moment');
const fs = require('fs');
const { stringify } = require('querystring');

let testPlanId = null;
let jsonString = [];
let APIToken = "";
let projectId = 0;
let tagId = 0;
let assigneeId = 0;

if (process.argv.length === 2) {
  console.error("No arguments passed");
  return process.exit(1);
}

// Parse and validate the command line arguments
process.argv.forEach(function (val, index, array) {
  try {
    if (val.toLowerCase().startsWith("--apitoken")) {
      APIToken = val.substring(val.indexOf("=") + 1);
      if (APIToken.length != 16) {
        console.error("Invalid value for APIToken")
        return process.exit(1);
      }
    }
    if (val.toLowerCase().startsWith("--projectid")) {
      projectId = val.substring(val.indexOf("=") + 1)
      if (projectId.length == 0 || projectId * 1 != projectId) {
        console.error("Invalid value for projectId");
        return process.exit(1);
      }
    }
    if (val.toLowerCase().startsWith("--tagid")) {
      tagId = val.substring(val.indexOf("=") + 1)
      if (tagId.length == 0 || tagId * 1 != tagId) {
        console.error("Invalid value for tagId");
        return process.exit(1);
      }
    }
    if (val.toLowerCase().startsWith("--assigneeid")) {
      assigneeId = val.substring(val.indexOf("=") + 1)
      if (assigneeId.length == 0 || assigneeId * 1 != assigneeId) {
        console.error("Invalid value for assigneeId");
        return process.exit(1);
      }
    }
  }
  catch (err) {
    console.error("Error while parsing command line arguments");
    return process.exit(1);
  }
});

addTestPlan(APIToken, projectId, tagId, assigneeId);

async function addTestPlan(APIToken, projectId, tagId, assigneeId) {
  try {
    // Step 1: create a new test plan in the given project
    await axios.post("https://api.testcollab.io/testplans?token=" + APIToken, {
      "archived": false,
      "title": "Test plan using API " + moment().format('YYYY-MM-DD:hh:mm:ss'),
      "priority": "1",
      "status": 0,
      "test_plan_folder": null,
      "description": "",
      "start_date": null,
      "end_date": null,
      "project": projectId,
      "custom_fields": [
        {
          "name": "0",
          "id": 0,
          "value": "0"
        }
      ]
    }).then(async (postTPAddResponse) => {
      try {
        jsonString = JSON.stringify(postTPAddResponse.data);
        const TPData = JSON.parse(jsonString);
        if (TPData != null && TPData.id != null) {
          testPlanId = TPData.id;
          // Step 2: bulk-add the test cases carrying the given tag to the new test plan
          await axios.post("https://api.testcollab.io/testplantestcases/bulkAdd?token=" + APIToken, {
            "testplan": testPlanId,
            "testCaseCollection": {
              "testCases": [],
              "selector": [
                {
                  "field": "advancedFilters",
                  "operator": "jsonstring_2",
                  "value": "{\"filterType\":\"text\",\"type\":\"contains\",\"filter\":\"{\\\"sqlQuery\\\":\\\"tags = '" + tagId + "'\\\",\\\"jsonTree\\\":{\\\"id\\\":\\\"8b89b899-0123-4456-b89a-b18a73669bca\\\",\\\"type\\\":\\\"group\\\",\\\"children1\\\":{\\\"88aba9a9-cdef-4012-b456-718a7366dd7f\\\":{\\\"type\\\":\\\"rule\\\",\\\"properties\\\":{\\\"field\\\":\\\"tags\\\",\\\"operator\\\":\\\"multiselect_equals\\\",\\\"value\\\":[[" + tagId + "]],\\\"valueSrc\\\":[\\\"value\\\"],\\\"valueType\\\":[\\\"multiselect\\\"]}}}},\\\"simpleFilters\\\":{\\\"tags\\\":{\\\"filter\\\":[[" + tagId + "]],\\\"type\\\":\\\"equals\\\",\\\"filterType\\\":\\\"number\\\"}}}\"}"
                },
                {
                  "field": "tags",
                  "operator": "jsonstring_2",
                  "value": "{\"filter\":[[" + tagId + "]],\"type\":\"equals\",\"filterType\":\"number\"}"
                }
              ]
            }
          }).then(async (postBulkAddResponse) => {
            try {
              jsonString = JSON.stringify(postBulkAddResponse.data);
              const bulkAddData = JSON.parse(jsonString);
              if (bulkAddData.status != null && bulkAddData.status == true) {
                // Step 3: assign the added test cases to the given user
                await axios.post("https://api.testcollab.io/testplans/assign?project=" + projectId + "&token=" + APIToken, {
                  "assignment_criteria": "testCase",
                  "assignment_method": "automatic",
                  "assignment": {
                    "user": [
                      assigneeId
                    ],
                    "testCases": {
                      "testCases": [],
                      "selector": []
                    },
                    "configuration": null
                  },
                  "testplan": testPlanId
                }).then(async (postAssignTPResponse) => {
                  try {
                    jsonString = JSON.stringify(postAssignTPResponse.data);
                    const assignTPData = JSON.parse(jsonString);
                    if (assignTPData.status != null && assignTPData.status == true) {
                      // Persist the new test plan id for the next pipeline step
                      fs.writeFileSync('testplanId.txt', testPlanId.toString())
                      return process.exit(0);
                    }
                    else {
                      console.error("Error with status of assign");
                      return process.exit(1);
                    }
                  } catch (err) {
                    console.error('Error while processing assign response: ', err);
                    return process.exit(1);
                  }
                }).catch((err) => {
                  console.error("Error while making assign API call: " + err);
                  return process.exit(1);
                });
              }
              else {
                console.error("Error with status of bulk add");
                return process.exit(1);
              }
            }
            catch (err) {
              console.error('Error while processing bulk add response: ', err);
              return process.exit(1);
            }
          }).catch((err) => {
            console.error("Error while making bulk add API call " + err);
            return process.exit(1);
          });
        }
        else {
          console.error("Test plan id could not be fetched");
          return process.exit(1);
        }
      } catch (err) {
        console.error('Error after processing add test plan response: ', err);
        return process.exit(1);
      }
    }).catch((err) => {
      console.error("Error while making add test plan API call " + err);
      return process.exit(1);
    });
  }
  catch (err) {
    console.error("Error while adding test plan " + err);
    return process.exit(1);
  }
}
