Question

The company I work for doesn't implement continuous delivery yet. We still deploy the project manually to the server, file by file. Which is best practice: to manually deploy one project artifact per deployment, or to keep doing the file-by-file deployment?

Solution

Which is best practice: to manually deploy one project artifact per deployment, or to keep doing the file-by-file deployment?

Neither.

Best Practice is to automate your deployment, completely and exclusively. That means nobody gets to put anything onto a server manually.

"To summarize the summary of the summary: People are a Problem." (Douglas Adams)

People make mistakes. If one of the files you forget to copy across is a shared "library" that has been extensively changed, you can bring the whole production site crashing down.

OTHER TIPS

Manual steps take a lot of effort and are risky: you might forget a necessary file, and maybe not everyone on your team knows which files need to be copied. All of these issues make deployments big, daunting, and rare, completely unnecessarily. Automation addresses all of them.

Even the simplest automation step can have big benefits, because deployments become trivial. A script that copies the files or artifacts via (S)FTP or Rsync or another technology is a great first step. You can later expand that script to perform pre-deployment and post-deployment steps on the server automatically, like restarting services.
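
As a rough illustration, here is a minimal sketch of such a script in Python, wrapping rsync and a post-deployment service restart. The host, paths, and service name are placeholders you would replace with your own:

```python
#!/usr/bin/env python3
"""Minimal one-command deploy sketch: rsync the build output to the
server, then restart the service (a post-deployment step)."""
import subprocess
import sys

HOST = "deploy@example.com"      # placeholder server
LOCAL_DIR = "build/"             # local directory holding the artifacts
REMOTE_DIR = "/var/www/myapp/"   # deployment target on the server
SERVICE = "myapp"                # service to restart after the copy

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # abort the deploy on any failure

def main():
    # Copy everything in one step; --delete removes stale files so the
    # server never ends up with a half-old, half-new mixture.
    run(["rsync", "-az", "--delete", LOCAL_DIR, f"{HOST}:{REMOTE_DIR}"])
    # Post-deployment step: restart the service over SSH.
    run(["ssh", HOST, f"sudo systemctl restart {SERVICE}"])
    print("Deployed.")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"Deploy failed: {exc}")
```

Because the whole deployment is one command, "forgetting a file" stops being a possible failure mode.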

Best practice would be to implement an automated process of some sort.

Be careful to check that there isn't a special reason for the 'file-by-file' approach that you would have to take into account.

With Continuous Delivery (or Deployment, actually) and moving each file by hand, you're looking at the two extremes. It's perfectly understandable that you can't/don't want to create a fully automated pipeline (yet). However, you should consider automating parts of the process.

Moving each file by hand is quite risky. You can mitigate that risk by, for example, tagging your code repository, checking out that tag on your machine, building your artifacts, and uploading them to your server. Each of these steps can be automated so that they're executed with a few mouse clicks, which greatly reduces the risk of forgetting a file or accidentally pushing extra files to production.
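
For instance, that tag/checkout/build/upload sequence could be driven by one small script. This is only a sketch; the repository URL, the build command, and the upload target (which is assumed to already exist on the server) are hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of the tag -> checkout -> build -> upload sequence as a single
command, so no step can be forgotten or done out of order."""
import subprocess
import sys
import tempfile

REPO = "git@example.com:team/myapp.git"                # placeholder repo
TARGET = "deploy@example.com:/var/www/myapp/releases"  # assumed to exist

def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def deploy(tag):
    with tempfile.TemporaryDirectory() as workdir:
        # Check out exactly the tagged revision -- never a working copy,
        # which may contain uncommitted or extra files.
        run(["git", "clone", "--depth", "1", "--branch", tag, REPO, workdir])
        # Build the artifacts (placeholder build command).
        run(["make", "dist"], cwd=workdir)
        # Upload only the built artifacts, into a tag-named directory.
        run(["scp", "-r", f"{workdir}/dist", f"{TARGET}/{tag}"])

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: deploy_tag.py <tag>")
    deploy(sys.argv[1])
```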

Automate what you can, one step at a time. The fact that you can't afford a fully automated CD pipeline shouldn't discourage you from automating some parts.

Best practice would be to do a cost/benefit analysis for your particular deployment for your particular company.

The general answer is "don't do things manually, automate." This is generally the right answer for general sorts of companies. The uniformity of the answers you are receiving should be some indication of just how strongly the community finds this to be a best practice. If your company feels that automation is not the right tool, they should have some understanding of what makes them unique. That uniqueness should be factored into your decision-making process. There is no "best practice" when the sample size is 1.

Questions such as "how many files" and "how often are things updated" and "what are the consequences of breaking things" and "how quickly can you roll back a bad change" are important questions to answer. If you automate, many of these questions become unimportant, but they are essential for properly assigning the costs and benefits for a manual update process.

There are plenty of shades of grey in between manual file-by-file copy and continuous delivery.

Start by reducing the complexity of the deployment process, for example by using a zip file, RPM-style packaging, an infrastructure-as-code management tool (such as Puppet or Chef), or even just a simple script that copies the files for you from a staging area on the FTP server.
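
As a small example of the zip-file option, a single versioned archive can replace the file-by-file copy entirely. This is a sketch, with the staging directory and artifact name as placeholders:

```python
#!/usr/bin/env python3
"""Bundle the staging directory into one timestamped zip archive, so the
deployment ships as a single file rather than many individual copies."""
import shutil
from datetime import datetime, timezone

STAGING_DIR = "staging"   # directory holding exactly the files to ship

def build_artifact():
    # Timestamp the archive so every deployment is identifiable and old
    # artifacts can be kept around for rollback.
    version = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    # Produces e.g. myapp-20240101-120000.zip from the staging directory.
    return shutil.make_archive(f"myapp-{version}", "zip", STAGING_DIR)

if __name__ == "__main__":
    print("Built:", build_artifact())
```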

Deployment processes with more manual steps are more likely to have errors, and thereby to fail. As others have said, take the human element out of it.

You don't need to implement full continuous delivery (which is costly, and takes effort, investment, and innovation over time). Start simple, make it work, demonstrate the benefits, and go from there.

It depends on the software technology (or stack) you are using (interpreted language, compiled language, desktop app, mobile, etc.), your software development department's policies, whether you have the tools to automate it, how critical your app is, and, importantly, your software architecture (how your app was designed). This is why you see different answers here. As a rule of thumb, the best approach is to reduce human intervention in deployment tasks as much as possible, to avoid mistakes. A good practice is to test everything on a QA server (consider using a virtual server if budget is an issue) before deployment, and to have a reverse procedure to restore the previous version in case of disaster (ALWAYS have a backup).
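
One common way to get that "restore the previous version" procedure, sketched below under assumed paths, is to keep each release in its own directory and point a `current` symlink at the live one, so a rollback is a single atomic symlink swap:

```python
#!/usr/bin/env python3
"""Rollback-friendly layout sketch: releases/<version> directories plus
a 'current' symlink that the web server actually serves."""
import os
from pathlib import Path

BASE = Path("/var/www/myapp")   # placeholder application root
RELEASES = BASE / "releases"
CURRENT = BASE / "current"      # the path the web server serves

def activate(version):
    target = RELEASES / version
    if not target.is_dir():
        raise FileNotFoundError(target)
    # Build the new symlink beside the old one, then swap atomically,
    # so the site is never caught between two versions.
    tmp = BASE / "current.tmp"
    tmp.unlink(missing_ok=True)
    tmp.symlink_to(target)
    os.replace(tmp, CURRENT)

def rollback():
    # Assumes version names (e.g. timestamps) sort in deployment order.
    versions = sorted(p.name for p in RELEASES.iterdir() if p.is_dir())
    if len(versions) < 2:
        raise RuntimeError("no previous release to roll back to")
    activate(versions[-2])
```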

Licensed under: CC-BY-SA with attribution