Question

I work in a small programming team supporting a larger organisation. This year our manager has decided we are going to use Oracle Apex technologies to handle the vast majority of our company data.

This would be ok, except we only have one Apex server. Our manager has decreed that everything happens in that one instance. Our team is developing apps while our manager demos them and our internal clients use them, which for obvious reasons is already causing problems!

I can only expect this to get worse as we become more heavily invested in Apex, the apps get more complex and the number of users grows. I've heard that best practice is to have separate development, testing and production environments but why is this the case?

The question: Why should we have separate development, testing, and production environments?


Solution

Why should we have separate development, testing, and production environments?

You have several activities going on concurrently:

  • development - where developers commit code, make mistakes, experiment, etc.
  • testing - where tests are run, manually or automated, which due to their complexity can consume a lot of resources.
  • production - where value is created for customers and/or the business

Do you want all of these happening in the same environment? Do you want the business to grind to a halt because a new test has pushed your servers into swapping to disk and is consuming every core of the processor? Do you want your tests to grind to a halt because a developer made a convoluted fork-bomb out of a scaling experiment? Do you want code that only appeared to work because of a developer's twine-and-duct-tape rigging in the test environment to run in production? Do you want developers working with potentially sensitive production data? (I know this isn't a concern in all businesses, but it is in a lot of them.)

What prevents these issues from happening?

Separate environments.

So what do you need?

You need separate environments.

To put it formally

You need separate environments for the following reasons:

  • to reduce risks of blocking business and software development.
  • to reduce risks of putting code into production that passed tests due to developers' ad-hoc rigging.
  • to reduce risks of production data getting into the wrong hands (very important when organizations deal with sensitive data, like ID numbers and financial and health information) or getting intermingled with test data or destroyed.

For your context, a new technology platform

Maybe this isn't truly production yet (since it's a relatively new platform), but you'll get your separate environments once the business starts to rely on it and is either wise enough to foresee the risk or learns it the hard way.

OTHER TIPS

I've heard that best practice is to have separate development, testing and production environments but why is this the case?

It isn't that clear cut these days.

Many places don't do manual testing anymore, so they don't have test data per se. Many more operate at such a scale that they can't reproduce their production environment due to cost. And especially with the explosive growth of microservices, it becomes challenging to keep rapidly shifting environments in sync enough to ensure that testing in a QA environment is accurate enough to catch the bugs that would show up in production.

Why should we have separate development, testing, and production environments?

  • If having your test data seen by users would be very bad.
  • If having your prod data seen by devs/testers would be very bad.
  • If you can't trust your developers not to break stuff badly, and you can't fix that situation quickly.
  • If you have automated CI in place such that promoting code is quick and easy.

Essentially, if the cost of having the environments is less than the cost of not having the environments.

The main (and most apparent) reason is that you never want to mix testing and production data. This gets incredibly confusing very quickly, for users of the system as well as for developers. When you are running quality assurance and unit tests (which you should be doing), you need to make sure they run in a totally segregated environment. If something explodes in your development environment or in QA, you absolutely do not want it to affect production, and with it live users and their important data!

This extends to the normal services that you should be using in your day-to-day work, such as version control. You can't possibly utilize version control properly if the code you're controlling is running in a live environment! Your users will be going insane: what if you need to revert or roll back? What if you make a drastic mistake fifteen commits deep? How will you handle branching?

At the very least, you should separate your environment into several virtual instances, but you really need to do exactly what you said and have completely separate instances for each environment: ideally development, QA, staging, and production.

All of this ultimately amounts to a huge detriment, not just to your front-facing application (and thus your reputation with your users), but also to your team's efficiency.

Having only one Oracle instance available is not the same as having no separation between dev, test, and production environments!

You wrote in a comment

We currently use different schemas for different projects

Ok, so by dedicating some projects only to development and some to testing, you can separate your environments to some degree using different schemas. I guess you already did this, since it is the only sensible approach I know of when no instance separation is planned. I cannot imagine your manager being so insane that he wants you to mix dev data, test data, and customer data arbitrarily in one schema. Most probably he just wants to save money by not buying a second server or paying for the licence for a second instance.

Therefore, the real question you need to ask is:

Do we need to use different instances and/or servers for separating development, testing, and production environments, or is schema separation sufficient?

That makes the answer less clear-cut than the other answers here suggest. Different schemas allow different access rights, so you can get some degree of isolation inside a single Oracle instance. However, your developers will most probably need some administrative rights inside "their" schema, so it will be harder to make sure they don't have access to production data if you use just one instance.
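
To make the schema-separation option concrete, here is a minimal sketch of what it might look like inside a single Oracle instance. The account names, quotas, privileges, and the app_prod.orders table are purely illustrative assumptions, not anything taken from your setup.

    -- Illustrative only: separate accounts/schemas for dev and test work
    -- inside one Oracle instance (names and quotas are made up).
    CREATE USER app_dev  IDENTIFIED BY "ChangeMe_Dev"  QUOTA 500M ON users;
    CREATE USER app_test IDENTIFIED BY "ChangeMe_Test" QUOTA 500M ON users;

    -- Each team account may create objects only in its own schema.
    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO app_dev;
    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO app_test;

    -- Production objects live in a separate schema (app_prod, hypothetical);
    -- only the application's runtime account is granted access to them, so
    -- dev and test accounts never see production data.
    GRANT SELECT, INSERT, UPDATE, DELETE ON app_prod.orders TO app_runtime;

Logically the accounts are isolated, but as noted below they still share the same CPUs, storage, and administration, which is exactly the trade-off you have to weigh.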

Moreover, one instance/one server also means shared resources between dev, test, and production: shared user/schema administration, shared disk space, shared CPUs, shared network bandwidth. Combine this with the "law of leaky abstractions", and it becomes clear that using just one instance carries a real risk of unwanted side effects between the dev, test, and prod environments.

In the end, you have to decide for yourself: can you work effectively with the drawbacks of this approach? Is your application resource-light enough, and your production data non-sensitive enough, that a lower level of separation between dev, test, and production is tolerable compared with what a "three instances/three servers" approach would give you? If you cannot work effectively that way, or not without a high risk of interfering with production to the point of losing customers, you have all the arguments you need to convince your manager to buy at least a second server.

You need multiple environment types and maybe even multiple servers in each environment.

Developers can update code in development. The code might not even work - perhaps the application will not even start.

That does not impact QA, who are testing the latest stable build in their own environment.

As both development and QA are updating their environments, production is on the latest release build from six months ago and is not impacted by the changes in other environments.

The changes being rolled out in the various environments can be code or data. Perhaps QA needs to test a database script designed to fix some bad data in production. Maybe the script makes the problem worse; restore a database backup and try again. If that happened in production, you could have a very serious financial problem.
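
As a hedged illustration of that rehearsal, the sketch below shows the kind of safety net you would want around a data-fix script in a QA schema before it ever reaches production; the orders table, its status column, and the values are hypothetical.

    -- Hypothetical data-fix rehearsal in a QA schema; table and column
    -- names are examples only.

    -- Keep a restorable copy before changing anything.
    CREATE TABLE orders_backup AS SELECT * FROM orders;

    -- The proposed fix: repair a mistyped status value.
    UPDATE orders
       SET status = 'SHIPPED'
     WHERE status = 'SHIPED';

    -- Check the outcome before committing. If it is wrong, ROLLBACK
    -- (or restore from orders_backup) and revise the script; in QA,
    -- a failed attempt costs nothing but time.
    SELECT status, COUNT(*) AS cnt
      FROM orders
     GROUP BY status;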

What happens if you have multiple releases? Maybe you are developing version 2.0, but still need bug fix releases on the 1.0 maintenance branch. Now you need multiple development and QA environments to ensure you can always develop and test either branch when a critical bug is discovered and needs to be fixed yesterday.

You've already noticed the problems that not having separate environments causes. Right there you've got the fundamental reason for separate environments: to eliminate the problems caused by the conflicts that inevitably arise when you try to do development, testing, and production operations in a single environment. The same reasoning applies to giving developers individual sandboxes to work in: it keeps one developer's mistakes, or even just incompatible changes, from crippling the entire development team.

This is also the best argument you can make to management for moving away from the single environment: point to the problems the single environment is already causing, show the trend line, and argue that if things continue as they are, sooner or later there will be a problem that costs far more to clean up (both in direct effort and in lost customer confidence in your company's ability to deliver services) than reconfiguring things into separate environments would cost.

There are many opposing forces and dynamics here: there is the cost of having many servers and the cost of having just one. I think this question may relate to more than just the database. I may be misunderstanding, but it does touch on a systemic misunderstanding that is out there regarding the costs of the tangible versus the abstract.

Typically the obvious costs are easy to understand.

Abstract costs are harder to quantify and thus harder to understand. Technical debt, the cost of errors, the cost of stress, the burden on developers, the effects of change, regression testing, the impact of downtime, and so on are harder to explain.

Different environments

Environments are usually separated by data and/or purpose. Each environment has a different function. The rate of change of a system (how often it will be updated, what sort of changes are made, and what effects they have) is all taken into consideration.

We use different environments to trivialise change.

We use different environments so that the environment that has not changed remains robust and predictable.

We use environments to consider the effects of a change.

We use environments to reduce the costs involved with change.

It costs a lot to test and stabilize a system

You create environments to protect the investment that was made in the stable environment.

You are never too small a team to adhere to pragmatic, cost-saving, diligent, and proven patterns of process.

Using one environment for everything is like storing all your photos on one hard drive: you can do it, but you are going to regret it.

Some people need proof

I have been in many situations dealing with clients or others who don't appreciate the costs of ensuring robustness and following best practices. I would suggest you put together some examples of real cases where the costs are clearly defined.

Licensed under: CC-BY-SA with attribution