Question

I'm developing a Node.js web application whose server and client source code are built together. In other words, the app is isomorphic: editing the client-side source eventually affects the server-side code as well.

Once I've developed the application enough (fixed bugs, etc.), I build it, upload it to my own physical web server, and restart the application so that the newly built code takes effect.

This is where my problem comes in:

How do I automate all the publishing tasks (building, uploading, restarting, etc.)?

For now, I deploy new code in the following steps:

1. Find a bug or identify a required feature
2. Implement the feature or fix the problem
3. Build the server or client source code
4. Upload the build to the physical web server
5. Restart the web server application running on the physical server

This is ridiculously tedious when I only have tiny problems to fix in the source code (like typos).

I've googled 'CI' and 'CD' and found some information about Jenkins, but it didn't answer my question, because most guides use Git as (I suppose) the middleware for publishing code, and I don't want to publish any part of my source code publicly.

I could write some Bash scripts to automate the publishing, but I want to do it the way it's done in a company-level environment. However, I have absolutely no experience with how big companies handle their distribution process, so I don't know how they would approach my problem.

Additionally, in a Node.js environment I must restart the web server for newly uploaded code to take effect. This prevents users from accessing my web app while the server is restarting, which sounds really critical.

I'd be really thankful if someone could tell me whether there's a clever way to apply new code without restarting the web server application in a Node.js environment.


Solution

A Bash script is exactly how it happens in many companies. Avoid steps that you have to perform manually. Automate them so that you only have to kick off the process and can then sit back. Being able to do a deployment with a single command enables “Continuous Deployment” workflows: if a deployment is super easy and repeatable then you can do them often, for small changes, and the risk of deployment is small.
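
For example, a single-command deployment script might look like the following sketch; the SSH target, paths, and service name are placeholders for your own setup:

    #!/usr/bin/env bash
    # deploy.sh -- single-command deployment (sketch; the SSH target,
    # install path, and service name below are placeholders).
    set -euo pipefail

    SERVER="deploy@example.com"   # hypothetical SSH target
    APP_DIR="/srv/my-web-app"     # hypothetical install path

    # 1. Build the server and client bundles locally.
    npm ci
    npm run build

    # 2. Upload the build output and dependency manifests.
    rsync -az --delete build/ package.json package-lock.json "$SERVER:$APP_DIR/"

    # 3. Install production dependencies and restart the app.
    ssh "$SERVER" "cd '$APP_DIR' && npm ci --omit=dev && sudo systemctl restart my-web-app"

Running ./deploy.sh after every change then replaces the manual build–upload–restart steps in one go.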

There are often additional considerations (sketched in the script fragment after this list):

  • the exact version that was deployed should be recorded in version control
    • and the state of your version control should always be deployable
  • before deployment, an automated test suite should be run
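
A sketch of how these checks could be bolted onto the front of the same deployment script (the tag format and test command are assumptions about your project):

    # Refuse to deploy uncommitted changes, so the deployed version
    # always corresponds to a commit in version control.
    if ! git diff-index --quiet HEAD --; then
        echo "working tree is dirty; commit first" >&2
        exit 1
    fi

    # Run the automated test suite before anything is uploaded.
    npm test

    # Record the exact deployed commit with a tag (format is an example).
    git tag "deploy-$(date +%Y%m%d-%H%M%S)"
    git push --tags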

This gives rise to the idea of performing deployment through a build server (CI server) such as Jenkins: whenever you commit a new version, the build server fetches the code, builds and tests it, and then (optionally) deploys that code automatically. This allows you to move quickly but requires that you have a reasonable automated test suite to prevent accidental regressions.
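
What the build server executes on each commit is, conceptually, just another script. A sketch, with the repository URL as a placeholder:

    #!/usr/bin/env bash
    # Conceptual CI job: fetch, build, test, then (optionally) deploy.
    # The repository URL is a placeholder.
    set -euo pipefail

    git clone --depth 1 ssh://git@your-git-server/my-web-app.git
    cd my-web-app

    npm ci           # reproducible dependency install
    npm run build    # build server and client bundles
    npm test         # a failing test aborts the pipeline here

    ./deploy.sh      # e.g. the deployment script sketched above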

Not all of these deployment pipelines consist of Bash scripts. A specific CI tool may have its own conventions or plugins that you can use rather than writing a Bash script that runs some SSH commands. Many cloud providers also offer deployment APIs. In a sense, a Dockerfile is also just a glorified Bash script. But a script that ties everything together is not wrong.
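
With Docker, for instance, the upload and restart steps become standard CLI invocations (the image name, registry, and port below are placeholders):

    # Build and publish a container image, then replace the running
    # container on the server (names and ports are placeholders).
    docker build -t registry.example.com/my-web-app:1.2.3 .
    docker push registry.example.com/my-web-app:1.2.3

    ssh deploy@example.com '
      docker pull registry.example.com/my-web-app:1.2.3 &&
      docker rm -f my-web-app &&
      docker run -d --name my-web-app -p 3000:3000 \
        registry.example.com/my-web-app:1.2.3
    '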

The “deployment needs a server restart” problem is not generally avoidable, but it is not necessarily much of a problem. The usual approach is to run your web app behind a proxy or gateway. During deployment you start a second server with the new version, then let the gateway shift traffic to the new server until the old server can be shut down – and if there is a problem, the deployment can be quickly reverted by shifting traffic back to the old version. This pattern is also known as green–blue deployment. Here the server might be a process, container, or virtual machine. Whether you need to deploy without downtime depends on your traffic. For small sites a few seconds of downtime is usually unnoticeable.
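
A heavily simplified sketch of such a switch, with two Node processes behind a reverse proxy; the ports, health-check URL, and proxy-reload mechanism are all assumptions that depend on your setup:

    # Green-blue switch (sketch). Assumes the old version runs on port
    # 3000 with its PID in old.pid, an idle slot on 3001, and an nginx
    # upstream defined via an include file -- all placeholders.
    set -euo pipefail

    # 1. Start the new version alongside the old one.
    PORT=3001 node build/server.js &
    echo $! > new.pid

    # 2. Wait until the new version answers health checks.
    until curl -fsS http://localhost:3001/health > /dev/null; do
        sleep 1
    done

    # 3. Shift traffic: point the proxy at the new port and reload.
    #    (This file is include'd inside an `upstream` block -- a
    #    hypothetical mechanism; real proxy configs vary.)
    echo "server localhost:3001;" > /etc/nginx/app-upstream.inc
    nginx -s reload

    # 4. Stop the old version once no traffic reaches it anymore.
    kill "$(cat old.pid)"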

At a really large scale with a redundant microservice architecture that runs on a cluster (e.g. managed through Kubernetes), deployment involves replacing cluster nodes one by one and monitoring error rates and other performance metrics during deployment. Because everything is redundant, any deployment problems will only affect very few users (or possibly none: one testing strategy is to send real requests to a new version but ignore the responses, or to compare the responses to the old version).
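
Kubernetes, for example, makes this rolling replacement a built-in primitive (deployment and image names are placeholders):

    # Roll out a new image; Kubernetes replaces pods one by one.
    kubectl set image deployment/my-web-app app=registry.example.com/my-web-app:1.2.4
    kubectl rollout status deployment/my-web-app   # watch the gradual rollout

    # If error rates spike during the rollout, revert with one command.
    kubectl rollout undo deployment/my-web-app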

But if you don't operate at that scale, you don't need this high-end approach, and you don't need the cost of the necessary infrastructure either. A simple Bash script already goes a long way.


Further reading:

  • The twelve-factor app. A short, basic, and important guide to best practices for web app architecture and deployment. It might have a slight bias towards Heroku-oriented practices, but is quite sensible and easily generalizable.

  • How to deploy software by Zach Holman. A lengthy but good read on high level deployment concepts. Assumes an organizational maturity that is not applicable for your circumstances, but provides background and context.

  • GitHub Flow: a short marketing piece by GitHub that presents a simple team-oriented workflow in the context of continuous deployment. (The name should not be confused with the unrelated Git Flow.) Note that GitHub, and Git in general, never requires you to publish your code publicly. However, build servers and similar tools do need access to a version control repository, and the easiest way to get that is to pay another company to host your repository (you can also host it on a server of your own). Consider also competitors such as GitLab or Bitbucket, which directly integrate some CI functionality.

Licensed under: CC-BY-SA with attribution