Key questions to consider about pipeline as code
Pipeline as code entails expressing in code the processes a development team uses to build and deploy software to production. To adopt the approach properly, study up on CI/CD and more.
Development teams often have many questions when they consider pipeline as code. Some of those questions include:
- What tools do we need?
- What if my team still practices Waterfall?
- How does infrastructure as code relate to the technique?
Developers should also ask themselves some additional questions, such as:
- How frequently do we want to deploy code?
- Will this change add value to our IT organization?
To explore these questions, SearchSoftwareQuality's executive editor Phil Sweeney and assistant editor Ryan Black talked to Mohamed Labouardy, co-founder of Crew.work and author of the book Pipeline as Code from Manning.
Give a listen to the episode and read through the transcript below for Labouardy's insights.
Editor's note: The transcript has been lightly edited for clarity and brevity.
Black: How do you see continuous delivery and continuous deployment as different? What is each process's respective use case as far as you see it?
Mohamed Labouardy: Yeah, so the difference is the human intervention. Basically, when we're talking about continuous deployment, it means you are deploying everything that goes to your remote repository straight through to your staging and production environments. However, when you are talking about continuous delivery, you have some business validation or some human interaction before the deployment goes on to production. In terms of the pipeline, it's the same -- the same stages will be executed. It's just that for continuous deployment, you don't need any human intervention or business validation. But for continuous delivery, you will need someone to validate the delivery before it goes through to a production or staging environment.
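For illustration, the difference might look like this in a declarative Jenkinsfile: the stages are the same, and only a manual approval step separates continuous delivery from continuous deployment. The stage names and shell commands below are placeholders, not examples from Labouardy's book.

```groovy
// Declarative Jenkinsfile sketch: identical stages for both models.
// With the 'Approval' stage, this is continuous delivery; drop it (or
// auto-approve) and the same pipeline becomes continuous deployment.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }            // placeholder build command
        }
        stage('Test') {
            steps { sh './run-tests.sh' }        // placeholder test command
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging' }   // placeholder deploy script
        }
        stage('Approval') {
            steps {
                // Human/business validation gate before production.
                input message: 'Promote this release to production?'
            }
        }
        stage('Deploy to production') {
            steps { sh './deploy.sh production' }
        }
    }
}
```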
Black: And what are the circumstances where a team might want to have that human intervention? Are we talking about situations where security is a higher concern?
Labouardy: There are a lot of considerations when you want to adopt continuous delivery. One of them is basically having [the] quality assurance team, for instance, ensure that all releases are validated in terms of acceptance criteria and that all the tests are successful [i.e., pass]. You need to have this ensured before the deployment goes to production. And it depends also on the way you are working within the company. For instance, a lot of teams [are] using sprints of one week, so when you have a lot of users on your product, sometimes you don't want to deploy a release in the middle of the week. You want to wait until you have a release with a lot of features before doing that.
I think it's a good thing to start with continuous delivery before moving to continuous deployment. I have worked in multiple [companies] before -- I have worked at early-stage startups and in big firms. And what people need to remember is to keep it small and keep it simple in the beginning, before heading into continuous delivery or continuous deployment. You can start with just a remote repository and a CI server like Jenkins or others that will simply check out the source code and build the packages, before thinking about doing the deployment.
Sweeney: Right. And Mohamed, you write in your book that continuous deployment in particular is great, but that sometimes users and customers would prefer maybe not to see those automatic continuous deployments quite so frequently. Is it too much sometimes?
Labouardy: Yeah, it depends on the target user. I think if you are working for a big firm -- and you are working on some product which is quite specific, which requires a lot of interaction with the clients -- you need to set up the documentation, you need to update the changelogs. There's a lot of operational work and also a lot of logistics behind the deployments. Sometimes it's a good idea not to do frequent releases within the week and to do some kind of patches instead. However, you need to have some staging environments prepared for when you want to deploy early. You need to have multiple environments, so you can deploy your changes to production when you are ready to do it.
Black: In your book, you outline three categories of CI tools. And I was wondering if you can just kind of lay those out for us here.
Labouardy: Basically, the most famous ones are open source tools; you have things like Jenkins. You also have SaaS platforms like CircleCI, Travis CI, etc. And you have cloud-oriented tools like Google Cloud Build, AWS CodeBuild and AWS CodePipeline, etc. There is some kind of overlap in terms of features. And it depends also on what you are using at the moment.
If you are already using something like AWS or GCP, maybe you will go with the service that is provided by these cloud providers. If you are using something like Jira or Confluence, maybe you will go with Bamboo because they have a better integration with Jira, etc. If you are still [in the] early stages, maybe you will use something like CircleCI or Travis CI, because they are well integrated with GitHub. If you are using GitLab, maybe you will go with GitLab CI.
For me, you need to pick the tool that works best for you. For this book, I went with Jenkins, just to illustrate the concepts of pipeline as code. But everything else that I have covered in this book can be applied easily to other CI servers.
Black: You're actually getting at a point I was going to ask about: how mutually exclusive are these categories, and to whom is each category the most relevant and the most usable?
Labouardy: [To] give a real-world example, I have a lot of experience with Jenkins. Right now, I have started a new company, and [we] are using CircleCI, because it doesn't require a lot of operational work; it doesn't require maintenance and upgrades. It integrates very well with GitHub. But once we start to scale, and once we start to build some complex pipelines, maybe we will migrate to something more advanced, like Jenkins. It depends on the phase and also on the project you are working on.
I think beginners can simply go with GitHub Actions or something like CircleCI. It gets the job done. People that have advanced use cases and a lot of projects to maintain, and might have a dedicated DevOps team to manage the infrastructure, maybe they will go with something like Jenkins or a cloud-provided service like AWS CodePipeline or Google Cloud Build.
Sweeney: And Mohamed, with the CI tools from the big cloud vendors -- I can think of Azure Pipelines, AWS CodePipeline -- what do you think of those? Are there good use cases where those might be better than an open source option, or better suited to a particular audience?
Labouardy: They are great if you are already using -- if we take AWS as an example -- if you're already using AWS services like Lambda, SQS, SNS, etc., they are integrated out of the box with those services. You can execute all the deployments and use the AWS commands within the CI service. However, if AWS is not your main infrastructure -- if you have diverse projects; for instance, a web application, an iOS application and so on -- it might not be a good choice, because it requires a lot of scripts and a lot of workarounds.
And those services are quite early on; they are not mature yet. They were just released, I think, two or three years ago. So, there are a lot of things that are not working perfectly. It's not like Jenkins, which [has] existed for more than 10 years now. And also, if you are using those services, you will have a problem with what we call vendor lock-in; you will be locked into the services, so you may have some trouble moving to another cloud provider. Or if you want to try the services locally, you will have this problem, because, for instance, with Jenkins, you can just download the Docker image for Jenkins and run it locally. However, with services that run in the cloud, you will have a lot of trouble if you want to test locally the stuff that's happening on AWS or others.
Sweeney: Right, and you mentioned Jenkins quite a bit; that's the example that you use in your book. You find it obviously pretty compelling. What do you think it is about Jenkins that works so efficiently for this?
Labouardy: I think with Jenkins, it's more about the community. You have a lot of plugins that exist today on the market for all the platforms that you might use during your CI/CD pipeline. You have integrations for SonarQube, for AWS, for GCP, for Selenium, etc. However, it's not really a perfect solution.
A lot of people are frustrated by the UI and UX when they are dealing with Jenkins. So, the people behind Jenkins have put in a lot of effort over the last couple of months, and they have released what they call Blue Ocean, which gives Jenkins a modernized UI. It's not perfect yet, but they are doing a lot of work to improve the user experience.
I prefer Jenkins just because it's quite easy to use, aside from the operational overhead if you want to maintain it and deploy it yourself. It's really easy to use because you will find a lot of resources and a lot of material on the internet on how you can get started, and you have tons of plugins on the marketplace that you can download for free. You can also create your own custom plugins, because basically, if you have already done Java or Groovy, you can write your own plugin for a custom integration and publish it alongside the others easily. So, yeah, it's more about the community and the resources that are available online. That's why, for me, Jenkins is a complete solution.
However, I have tried, as I said, other solutions that [also] work perfectly well. I think what the reader or the listener needs to remember is that you need to completely understand how CI/CD works. At the end of the day, Jenkins is just a tool that you can use. What matters is how you can implement and achieve continuous delivery and continuous deployment. And this is what I was trying to do [in] my book: illustrate the concepts, not the tools.
Black: Oftentimes when we write about CI/CD -- in terms of the people doing it -- it's often bigger tech companies: your companies in Silicon Valley, your Microsofts. But I think it's important to keep in mind that a lot of IT shops do Waterfall-style releases, big releases … they're working with a legacy application, in the sense that they just haven't gotten around to adopting CI/CD. I'd be curious: Do you see in these legacy shops interest in maybe starting to think about switching to CI/CD? Are there some CI/CD-like practices they could adopt and apply to their Waterfall shop?
Labouardy: Basically, this is what I have covered in the first chapter, because the book itself is more geared toward cloud-native applications -- for people that are already using Kubernetes or serverless applications running on Lambda functions, etc. But I think those concepts can also be applied when you are dealing with monolithic applications and with people that [are] still using Waterfall cycles during development.
For me, the key is to keep it simple in the beginning. They don't need to automate everything. We know, for instance, that automating unit tests or integration tests requires a lot of time, and this is one of the things that becomes a roadblock for people when they want to implement CI/CD. First, they hear about CI/CD and they say, "That's great, we can implement this within the company." But once they get started, they find out that they have a lot of legacy, and that they need to modernize their infrastructure first. It's also a mindset shift in terms of the organization, the culture, etc., because DevOps is not just tools. It's practices, and it's a way of thinking. And when, for instance, they choose a CI server, it requires a lot of maintenance and a lot of operational overhead. At the end of the day, they will drop the solution, and they will keep working with their current workflow.
My advice for people is just to keep it simple. What they can do first is start by having a centralized repository in which they have all their source code. They might go further and have some kind of Gitflow model or branching strategy to structure the source code. That way, they have something simple to track the different releases and the different versions of the source code. And once they have this structured in a remote repository, whether on GitHub or GitLab or even an SVN repository, then they can start with a build server.
My advice for people is to go with the one that doesn't require a lot of maintenance and a lot of setup, especially for early-stage startups. When the person setting up all this stuff is the CTO, he has a lot of work to do, and if he adds the maintenance and the hosting of something like Jenkins, it will be a lot of stuff to deal with. Once they have chosen a CI server, they can, for instance, start with a pipeline that will just check out the source code once you push something to the remote repository, and build the package or compile the source code. This is the most basic pipeline they can start with. And then, along the way, they can start, for instance, writing unit tests and adding a coverage report before going into integration tests. Because doing integration tests or end-to-end tests requires a lot of work and a lot of time -- UI tests change frequently. So, if you start implementing them first, you will have trouble staying in sync and up to date with all the changes made frequently to the UI, and you will end up dropping or not using the pipeline, and you will have something that is outdated. And, at the end of the day, we want the pipeline to bring value to the team. We don't want it to be a roadblock or something people will be frustrated with.
I think, for people that are still using Waterfall cycles or legacy applications, my advice for them is to just start with the simplest pipeline and iterate over it. They need to treat it the same way they treat their application. What I do, for instance, in my current company is try to improve this pipeline in each sprint. I add an extra block, or update the pipeline … or add, for instance, a Slack notification, etc. That way, in the long run, you will end up with something which is efficient and something that will bring a lot of value to the developers and to the whole organization that sets up this pipeline.
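As a sketch of the kind of starting point Labouardy describes -- check out the code, build the package, then grow the pipeline sprint by sprint -- a minimal declarative Jenkinsfile might look like this. The build command and Slack notification are illustrative assumptions (the slackSend step requires Jenkins' Slack Notification plugin), not examples from the book.

```groovy
// Minimal declarative Jenkinsfile: check out the source and build the package.
// Later sprints can add stages (unit tests, coverage, integration tests) and
// notifications. Commands are placeholders; slackSend assumes the Slack
// Notification plugin is installed and configured.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }        // pull the source from the remote repository
        }
        stage('Build') {
            steps { sh 'mvn -B package' } // or npm, Gradle, go build, etc.
        }
        // Next iterations: a 'Unit tests' stage, a coverage report,
        // then integration/UI tests once the basics are stable.
    }
    post {
        failure {
            // Example of an incremental improvement: notify the team on failure.
            slackSend channel: '#ci', message: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
```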
Sweeney: And, Mohamed, with pipeline as code, dev teams can be quicker and have more consistent releases. But there's also this idea that pipeline as code can reduce development costs. Do you think that's valid? And how would that play out, if so?
Labouardy: Yeah, I think it's going to reduce development costs. And it will also help a lot of people [get] into DevOps practices. Because once you start writing your pipeline as code, you will end up with some kind of libraries or templates that, [in the] long run, you will start reusing when you have a new project or a new feature to integrate. So, you save a lot of time and a lot of energy and resources. Instead of DevOps doing the pipeline on the CI server, you can have developers doing that, because they just need to have this template. They will just need to override some variables, like the name of the project, the GitHub repository, etc. And from there, they can just create a job based on this pipeline, and that's it. You will gain a lot of value.
First, you will have the entire organization starting to use the pipeline. In terms of responsibility, it will be shared between developers and DevOps, so everyone will be able to create and play with CI/CD pipelines. Also, you will save a lot of time and energy, and you can put your DevOps team on something that will bring a lot of value to the company, instead of them spending time doing support and creating jobs and maintaining them. You will have everyone on board within the organization. For me, pipeline as code is something which is very useful. It's the same thing when we talk about infrastructure as code: It's something that will add a lot of value to the organization. And it can also serve as some sort of documentation, because the whole pipeline will be versioned, the same way as your code. You can identify anomalies and bugs ahead of deploying the pipeline. For me, it brings a lot of value to people that are using this approach.
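As an illustration of the template idea, a Jenkins shared library can define the pipeline once so that each project only overrides a few variables. The library name, step name, repository URL and build script below are hypothetical.

```groovy
// vars/standardPipeline.groovy, in a hypothetical shared-library repository.
// The DevOps team maintains this template once; each project reuses it and
// overrides only a few variables.
def call(Map config = [:]) {
    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps { git url: config.repoUrl }                // the project's Git repository
            }
            stage('Build') {
                steps { sh "./build.sh ${config.projectName}" }  // placeholder build script
            }
        }
    }
}
```

A project's Jenkinsfile then shrinks to a call that overrides the variables (names and URL are hypothetical):

```groovy
// Jenkinsfile in the project repository
@Library('ci-templates') _

standardPipeline(
    projectName: 'billing-api',
    repoUrl: 'https://github.com/example/billing-api.git'
)
```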
Sweeney: And pipeline as code -- you also talk about it alongside infrastructure as code. You see those as similar, I assume?
Labouardy: Basically, it's the same approach. With infrastructure as code, you are just describing your entire infrastructure in template files. And from there, you are using a tool that will use the provider's API to convert the JSON structure that you have written in those template files into commands. With pipeline as code, it's quite similar. You are just describing your CI/CD pipeline stages or steps in template files. And then you use a CI server that will read these steps sequentially and execute the stages. At the end of the day, either with infrastructure as code or pipeline as code, the template files will be stored in a remote repository, and you will treat them the same way that you treat your source code. You can do pull requests, you can do reviews, and you can also have some tests that will be executed each time you add a stage or remove a stage from the pipeline, to ensure that everything will work as expected and you won't break the pipeline.
Sweeney: Even though pipeline as code automates a lot of things, you're still going to need some sort of operations and infrastructure expertise. Could you talk about that skill set and where it would come in with a good pipeline-as-code operation?
Labouardy: I think even though pipeline as code -- as you said -- makes it easier for people to create jobs and build CI/CD pipelines, it still requires [some new] knowledge. For instance, if we talk about infrastructure as code, even if you are using a tool like Terraform or CloudFormation, you will need some basic knowledge of AWS services and their APIs and documentation in order to write the resources using the infrastructure-as-code approach. And it's the same for the pipeline-as-code approach. If we are talking about Jenkins, for instance, you will need some knowledge of how you can interact with the Jenkins plugins, how you can run unit tests, etc.
But I think there are a lot of resources today, either for Jenkins or other CI servers, that allow you to start quickly with pipeline as code. You just need to get familiar with the basics and with the overall structure of a pipeline as code. For instance, if you are using Jenkins, you need to be familiar with how you can write a Jenkinsfile, how you can catch errors with try/catch blocks, etc. Once you get familiar with [these] concepts, I think it's really easy to write a CI/CD pipeline from there.
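For instance, error handling in a scripted Jenkinsfile is commonly done with an ordinary Groovy try/catch block. Here is a minimal sketch, with placeholder stage names and commands:

```groovy
// Scripted Jenkinsfile sketch: basic error handling with try/catch.
node {
    try {
        stage('Checkout') {
            checkout scm               // pull the source from the remote repository
        }
        stage('Unit tests') {
            sh './run-tests.sh'        // placeholder test command
        }
    } catch (err) {
        currentBuild.result = 'FAILURE'
        echo "Pipeline failed: ${err}" // surface the error in the build log
        throw err                      // rethrow so Jenkins records the failure
    } finally {
        echo 'Pipeline finished.'      // runs whether the build passed or failed
    }
}
```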
Black: Well, thank you for joining us today, Mohamed. Where and when can people find your upcoming book, Pipeline as Code?
Labouardy: The book is available on Manning's website. For now, two parts have already been released, and the third part will be released in the upcoming weeks.
Editor's note: TechTarget readers get a 35% discount with code nltechtarget21.