CI/CD Using Jenkins, Docker, and Bitbucket

Updated: Mar 13

Author: Anuj Bhasin


Overview

CI/CD is a crucial stage in any project today. Continuous integration and continuous deployment help teams deliver applications by introducing automation into every stage of development. This is achieved using DevOps methodologies: practices for delivering applications at high velocity with little or no downtime. Many free DevOps tools are available in the market. DevOps is not limited to web applications and software; it is also needed in data engineering, where DevOps for data is called DataOps.



Prerequisites
  • Bitbucket account and repository for committing Snowflake scripts.

  • Jenkins account.

  • Snowflake account with a warehouse plus development and production databases.

  • Docker Desktop installed on the local system.

  • IDE with Git integration (VS Code is recommended for easy Git integration).


Flow

First, the developer commits code to Bitbucket. Then, in the Jenkins instance running on Docker Desktop, the developer builds and runs the Jenkins pipeline, supplying environment-specific parameters. If the SQL scripts committed to the Bitbucket repository are syntactically and logically correct, the changes are deployed to the Snowflake environment. The same flow works for production as well; only the input parameters need to change to match the target environment.


Flow Diagram

Implementation
  • Install Git. While installing, remember to add Git to the environment PATH so you can set up Git directly from the Source Control tab on the left in VS Code.

  • Clone your Bitbucket repository to local storage. This can be done directly from the VS Code Source Control tab by pasting the URL of the repository.


Alternatively, go to your Bitbucket repository, click the Clone button, copy the URL, and paste the command into a terminal opened in your local project folder.
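For example, with a hypothetical workspace and repository name, the clone command looks like this:


git clone https://bitbucket.org/<your-workspace>/<your-repo>.git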



  • Once the repository is cloned, open it in your IDE and create a folder named migrations. In that folder, create a script named V1.1.1__initial_objects.sql (make sure there are two underscores after the version number; the name must contain a version, and all future scripts must be versioned properly, as the pipeline depends on it).


Use the code below (this is only an example implementation):

CREATE SCHEMA DEMO_SCHEMA;

CREATE TABLE Demo_Table
(
    F_NAME VARCHAR,
    L_NAME VARCHAR
);
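
Over time, the migrations folder accumulates versioned scripts. As a purely hypothetical illustration of the naming convention (only V1.1.1 exists at this point):


migrations/
    V1.1.1__initial_objects.sql
    V1.1.2__add_orders_table.sql
    V1.1.3__add_address_column.sql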


  • Commit and push your code using the commands below.


git add --all
git commit -m "your message"
git push -u

  • Now Jenkins needs to be deployed, so we will create a Docker image and run it locally. For this, create a file named Dockerfile (without any extension) in the root of the Bitbucket repository with the contents below.

FROM jenkins/jenkins:lts

# Install required applications
USER root
RUN apt-get update
RUN apt-get install -y docker.io

# Drop back to the regular jenkins user
USER jenkins


  • Save the changes and commit them to the Bitbucket repository.

  • Now run the Docker Desktop application.

  • To create the custom Docker image, run the commands below from cmd or directly from the IDE terminal.


docker build -t jenkins .

docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock --name jenkins jenkins

(The trailing dot in the build command is intentional; it tells Docker to use the current directory as the build context.)
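
From another terminal, you can confirm the container is up before moving on by listing running containers filtered by name:


docker ps --filter name=jenkins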


  • After running the above commands, you will see a password in the terminal output; just copy and save it somewhere.
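
If the password scrolls past in the terminal, it can also be retrieved from inside the container; the Jenkins image writes the initial admin password to a known file:


docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword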

  • Now, in a new terminal window, use the command below to give Jenkins access to the Docker socket.


docker exec -it -u root jenkins bash -c 'chmod 666 /var/run/docker.sock'


  • To start the Jenkins container again later, use the command below in the terminal.


docker start jenkins


  • Now we have to configure Jenkins.

  • To access the Jenkins UI, open localhost:8080 in a new browser tab.

  • Enter the password that you copied and saved in the previous steps.

  • In the next step, you will see the Customize Jenkins screen; select the Install suggested plugins option.



  • After the plugins are installed, you will be prompted to create the first administrator user; enter your details.



  • Now save and restart; then log in to the Jenkins UI using your credentials and select Manage Jenkins in the left navigation bar, under System Configuration.



  • In the Plugin Manager, click on the “Available” tab and enter “docker pipeline” in the search box. You should see one result; check the box under the “Install” column next to the “Docker Pipeline” plugin and install it.



  • Now create the Jenkins pipeline definition: create a file named Jenkinsfile (without any extension) in the root of the Bitbucket repository with the code below.


pipeline {
    agent {
        docker {
            image "python:3.8"
            args '--user 0:0'
        }
    }
    stages {
        stage('Run schemachange') {
            steps {
                sh "pip install schemachange --upgrade"
                sh "schemachange -f migrations -a ${SF_ACCOUNT} -u ${SF_USERNAME} -r ${SF_ROLE} -w ${SF_WAREHOUSE} -d ${SF_DATABASE} -c ${SF_DATABASE}.Schemachange.CHANGE_HISTORY --create-change-history-table"
            }
        }
    }
}
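
As an optional sanity check, schemachange can also be run locally against the same migrations folder before wiring it into Jenkins. A minimal sketch, assuming Python is available locally and using placeholder connection values (schemachange reads the password from the SNOWFLAKE_PASSWORD environment variable):


pip install schemachange --upgrade
export SNOWFLAKE_PASSWORD='<your-password>'
schemachange -f migrations -a <account> -u <user> -r <role> -w <warehouse> -d <database> --dry-run


The --dry-run flag makes schemachange report what it would deploy without actually running the scripts.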

  • Save the file and commit the changes to Bitbucket.



  • Now create the actual Jenkins pipeline: from the main Jenkins dashboard, click New Item in the left navigation bar, enter a name for the pipeline, and click the Pipeline item to select it.

  • You should now be on the configuration page. Click the Pipeline tab and change the "Definition" parameter from the default to "Pipeline script from SCM". Under the SCM section, select Git as the repo type, then paste the Bitbucket repository URL that you saved above into the Repository URL field. Click the Advanced button (which will reveal some additional parameters) and enter these values for the parameters below:


  1. Field - Name: origin

  2. Field - Refspec: +refs/pull/*:refs/remotes/origin/pr/*

  3. Field - Branches to build: leave blank

  4. Field - Repository browser: (Auto)

  5. Field - Additional Behaviours: Wipe out repository & force clone

  6. Field - Script Path: Jenkinsfile

  7. Checkbox - Uncheck "Lightweight checkout."

Make sure you don't miss step 5: you have to click the "Add" button under "Additional Behaviours" and then select "Wipe out repository & force clone." This is an important step, as it ensures the Jenkins pipeline job always works with the latest version of your repository.

  • If you are using a private Bitbucket repository, you must also add your credentials.

  • Now we will add pipeline parameters. Pipeline parameters allow Jenkins to store values and variables that can be used in CI/CD.

  • Open the pipeline job you created and click Configure in the left navigation bar. Under the General tab, check "This project is parameterized" and add all parameters as required; take the parameters below as an example.

And here are the parameters to use; the names must match the variables referenced in the Jenkinsfile (please adjust the values as appropriate):


  1. Parameter - SF_ACCOUNT: your Snowflake account identifier

  2. Parameter - SF_USERNAME: the Snowflake user name

  3. Parameter - SF_ROLE: the Snowflake role to use

  4. Parameter - SF_WAREHOUSE: the Snowflake warehouse name

  5. Parameter - SF_DATABASE: the target database (the development database to start with)

Note that schemachange reads the connection password from a SNOWFLAKE_PASSWORD environment variable, so add that as a password parameter as well.


  • Click the save button to save these parameters.

  • Running and building the pipeline: click on the Build with Parameters option in the navigation bar.



  • Verify that all the parameter values look correct, and click the blue Build button to start the pipeline. If all goes well, you should see output indicating the build was successful.

  • To view the output logs, click the logs icon on the stage or open the specific build number.



  • Confirm the changes in your Snowflake account. If the pipeline runs successfully, all scripts have been deployed to the Snowflake environment automatically.

  • For more scripts, the developer needs to commit SQL scripts with proper incremental versioning (e.g., if the last script version is 1.1, the next should be 1.2, and so on) and then just run the pipeline again to deploy the changes, as in the example below.
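
For example, committing a hypothetical follow-up script would look like this:


git add migrations/V1.1.2__add_orders_table.sql
git commit -m "Add V1.1.2 migration"
git push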

  • The same pipeline can be used to deploy all scripts to the production environment; only the build parameters need to be changed to match production.

Conclusion

Using this Dockerized pipeline, we can run SQL scripts by committing them to the migrations folder of the Bitbucket repository. These scripts run in the Snowflake environment whenever the pipeline is built and run.

Usage

Deployment of Snowflake scripts to the production environment in one go can be done using the same pipeline used for the development environment. We just need to change the username, password, and account name parameters to match the production environment.

