Avoiding vendor lock-in with containers

One of the common concerns around using public cloud is supplier lock-in: the worry that, once you move your services to the cloud, you will be trapped with that vendor because of the time and money invested during the migration, which would be lost if you had to re-architect and migrate your infrastructure onto a new platform.
However, if you decide to use containers, cloud-hosted data and applications can become significantly more portable, helping to alleviate some of these concerns.

What are containers?
A container is a way of packaging up code and all its dependencies so that an application runs quickly and reliably from one computing environment to another. In short, a container allows an application to be packaged and isolated from the IT environment it is stored in.

A good way to picture this is by analogy to physical shipping containers: the items inside a container are isolated from where they are stored (the ship) and from the items in other containers, and the container itself follows a set of standardised sizes, enabling it to be carried by any ship, train or lorry across the world.

Container platforms are provided by third-party companies that are agnostic about cloud platforms, so your developers will need to become familiar with deploying to your chosen container platform.

Key benefits of containers
1. Portability of applications
The major benefit of using containers is the portability they enable. Since the application in the container is isolated from the environment it is stored in, you are able to move the container to other locations knowing that your applications will work in the same way without modification. In effect, this helps mitigate the worry of supplier lock-in, giving you the option to switch cloud providers without losing all the work done to build and migrate your IT infrastructure.
Let us say you deploy your applications using containers on a public cloud platform, such as AWS, and you decide to make the switch to Azure. The only work needed from you is moving the containers, as there will be no need to reconfigure what is inside them.

2. A new approach to building applications
A second benefit is how containers encourage microservice architectures. When hosting monolithic applications, the approach in the past has been to run the whole application on one or two larger VMs. With a microservice approach, these big applications are unbundled into component pieces, which can then be deployed individually as containers, allowing the different pieces to talk to each other, typically over HTTP.
This approach allows you to be more agile, because you can update each component separately. It also allows you to get much more reuse of individual components for other services. For example, a component of your revenue and benefits application can be reused as part of your social care management platform. This reuse can mean paying less and doing less development.
One thing to bear in mind is that this microservice approach works well when you are building and developing your own applications, as you can make the choice to use this microservice architecture. However, where you are buying pre-built applications from a third-party vendor, it will depend on whether they have adopted a container approach for that application.
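The component-to-component HTTP communication described above can be sketched in a few lines of Python. This is a minimal, illustrative example: the "address lookup" service and its field names are made up for the sketch, not taken from any real application.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical "address lookup" component, exposed over HTTP so that
# other services (revenue and benefits, social care) could reuse it.
class AddressLookupHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The requested path stands in for a postcode query.
        body = json.dumps({"postcode": self.path.strip("/"), "city": "Leeds"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_component() -> HTTPServer:
    """Start the component on an ephemeral local port, in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), AddressLookupHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# A second component acting as an HTTP client of the first.
def lookup(port: int, postcode: str) -> dict:
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/{postcode}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = start_component()
    port = server.server_address[1]
    print(lookup(port, "LS1"))
    server.shutdown()
```

In a containerised deployment, each of these components would live in its own container and the hard-coded localhost address would be replaced by service discovery, but the shape of the interaction is the same.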

What to do next
As you can see, containers offer a wide variety of benefits that are more than likely to be relevant to your organisation. Where you are developing applications in-house using your own development team, it is worth considering the container approach going forward.

You might also consider taking the following actions:

  • Look where your application vendors are in terms of supporting containers
  • Review what skills you have in-house to use and deploy to containers and begin to upskill in your chosen container platform
  • Find out where different cloud providers are with their container platforms
There are tools starting to appear that look at your legacy estate and attempt to convert them to a container approach. It is still fairly early days for these tools, but they are worth keeping an eye on.

Posted by Andy Powell.
Andy has over 30 years' IT experience in a wide range of roles including networking, system administration, software development, website/digital delivery, IT strategy, solutions architecture and national and international policy advice. He is a strong technical writer and experienced communicator and has spoken at conferences and events all over the world.

Andy is CTO at Jisc.

Moving away from ‘lift and shift’

Lift ‘n Shift, Hybrid or In-flight Transformation, which cloud migration is correct?

When working across the public sector, with organisations broadly in the same ‘business’, it’s hard not to think that it would be here, if anywhere, that a ‘one size fits all’ approach would apply to technology. Surely, they are providing similar services, within the same framework of legislation and policy, and therefore they would have the same needs?

To some extent this is true, and there are technology vendors that have capitalised on that commonality: revenue and benefits software, for example, is in use across the local authority sector. Equally, central government’s expectation that the public sector be cloud native suggests that those familiar with the sector understand there is a benefit to be felt by all.

However, even if providing broadly the same services, our experience shows us that no two organisations are the same. They will have invested significantly in different applications and infrastructure over the years and will have an array of different contractual arrangements to honour. Perhaps most importantly, there will be cultural and skills differences between one organisation and the next. A transformation project will therefore always start from a different point.

The roadmap that any CIO builds needs to be underpinned by an understanding of all of these nuances:

1. Know what you have.
This can be the hardest part, but I cannot emphasise enough how important it is. A solid discovery of applications, infrastructure, skills and culture will lay the foundations for well-considered change. You can’t know how to get to your goal if you don’t know where you are starting from.

2. Understand upfront which responsibilities are yours, take ownership of them, and know which belong to your suppliers and partners.
For example, will they conduct a discovery exercise for you or do you need to provide a list of applications and where they are currently housed? A lack of clarity in this area can slow projects to a snail’s pace and waste time and money.

3. Know where you want to get to and by when, with key milestones along the way.
For a detailed plan to be built, the discovery piece will need to come first; however, you should have a broad vision of the goals of change, with senior stakeholder buy-in, from the get-go.

4. Think about how you are going to make existing assets work, without overly compromising the end vision.
You’ll need a plan that brings existing assets to end of life at a good pace, whilst ‘sweating’ their value. It’s also important to realise from the start that not all applications will be able to be cloud hosted, and therefore there may need to be a longer roadmap for those assets.

5. Find quick wins.
However smooth the path, transformation will cause disruption. You will break things, and you need to take the workforce with you on that journey. Quick wins, such as an Office 365 implementation, that are broad-reaching and deliver easily visible benefits will help to win buy-in for further change. This can be seen as a ‘gateway application’ to the cloud and will instil confidence in your organisation that you’re on the right path.

In the current technology and broader socio-political landscape, it seems that there is nothing more constant than change. Public sector organisations, just like their private sector counterparts, need to be always looking to the next challenge and finding the right tools to enable better outcomes.

Our research and experience shows us that the public sector is lagging behind in technology adoption. This is a concern because PSOs have an even greater need to do more with less and wring the most out of every pound spent. That’s why we exist – to help close that gap.

Colm Blake

Colm is a Solutions Consultant with many years’ wide-ranging experience in the IT industry. While working on a large-scale public sector project he worked on a cloud deployment, was instantly convinced that this was the future of our industry, and promptly changed career path. At Jisc he works closely with our members and local authorities to understand their needs clearly and ensure that the platform produced will provide a cost-effective and resilient service.

RCUK Cloud for Research Workshop – January 2018



The RCUK Cloud Working Group are hosting their 3rd annual workshop at the Francis Crick Institute in London on January 8th 2018.

This event will bring together researchers and technical specialists to share expertise in the application of cloud computing technology for the research community.

The meeting will include presentations from a range of research domains including particle physics, astronomy, the environmental sciences, medical research and bioinformatics.


To register for this free event, please visit: http://bit.ly/rcuk-cloud-workshop2018-reg

The working group also welcomes submissions for talks, posters or proposals for breakout sessions.

Key themes

This workshop will focus on key areas to address in order for the potential of cloud computing for research to be fully realised:

  • Tackling technical challenges around the use of cloud: for example, porting legacy workloads, scenarios for hybrid cloud, moving large data volumes, use of object storage vs. POSIX file systems.
  • Cloud as enabler for new and novel applications: e.g. use of public cloud toolkits and services around Machine Learning, AI, use of FPGAs and GPU based systems, applications related to Internet of Things and Edge Computing
  • Perspectives from European and international collaborations and research programmes
  • Policy, legal, regulatory and ethical issues, models for funding – case studies for managing sensitive or personal data in the cloud
  • Addressing the skills gap: how to educate researchers in how to best take advantage of cloud; DevOps and ResOps


Microsoft Education and Jisc GÉANT Launch Event


Microsoft and Jisc would like to invite you to an event at the Microsoft campus in Reading on 6th December 2017. This event will focus on The Journey to Digital Transformation, and celebrate the availability of Azure cloud services under a pan-European purchasing framework negotiated by GÉANT.

As a response to disruptive sector changes and increased competition, many UK universities are embarking on a quest to digitally transform their institution. This event is designed to help those who are already in progress or are just commencing the journey. During the event, customers, Jisc, partners and the Microsoft Education Team will share their vision and aspects of the process.

Sessions will address questions such as:

  • How can more time and budget be focussed on innovation and creativity?
  • What will be the impact of Artificial Intelligence on teaching and learning, student analytics and research?
  • How will institutions manage a multi-faceted environment from on premise to Public Cloud?
  • How can an institution differentiate in an increasingly competitive environment?
  • Can central IT Services provide an agile, scalable and protected environment for researchers?

To find out more, and express your interest, click here.

The Cloud, do the benefits outweigh the challenges?


“Cloud technology and services create some challenges, but certainly present great opportunities in education.” John Cartwright – UCISA

There is no doubt that education is moving onto the cloud, and the benefits are huge, but you need to recognise that one size does not fit all. You need to know about the potential disadvantages so that you can mitigate them and develop your plans around them.

So what are the known benefits?

  • Costs: get huge computing power without any upfront costs
  • Flexibility: instantly scale resources up or down as needed
  • Capacity: access virtually unlimited compute resources
  • Resiliency: support high availability and disaster recovery with a more durable infrastructure
  • Innovation: shift internal IT resources from maintenance to innovation
  • Collaboration: provide anytime, anywhere access to shared information and learning resources

What should you be worrying about and planning around?

  • Data security: nearly 70% of UK HE IT leaders say it is their biggest challenge. There is no doubt that there are concerns, but a carefully structured security strategy and working closely with the major suppliers will ensure that data is safe
  • Existing investments: you will have invested heavily in on-premise infrastructure and you will need to make sure that you squeeze every last penny of ROI out of that hardware. Again, a clear plan for migration and product retirement will help
  • Supplier risk: the cloud marketplace is maturing but some niche vendors may not be around forever. Make sure that you have read every last sentence of any contract you sign and be sure that you are clear about off-loading data and managing any novation required

It’s well worth the effort. A sound and well-implemented cloud strategy will help you retain students and support ground-breaking research projects.

Student experience

Students are now paying customers with expectations to match. They assume that anytime, anywhere access to learning is the least your institution will offer. On top of that, they want the ability to collaborate with fellow students and staff, trouble-free streaming of video and audio content and the ability to submit that crucial essay at the last second from wherever they may be, in the library or in bed.

No institution has limitless resources. The cloud is the obvious place to maximise what you offer your students without overburdening the IT department. With clever use of public, private and hybrid cloud you can offer online access to course materials and a well-stocked, up-to-date library; 80% of learning institutions have now moved student email to the cloud as well.

Competition for students has never been more fierce and you are now competing on a global stage. Being on the cloud allows you to throw off the shackles of a campus. You can now deliver capabilities, course materials and data at scale. The rising popularity of MOOCs shows the impact cloud delivery is having on education.

With the rise and growing sophistication of student analytics, the cloud allows your institution to harness the power of data. This data allows you to recognise and manage risk factors, meaning that more students achieve better outcomes, making your institution an attractive choice.


Research

Research is a key revenue stream for your institution, with the right research projects bringing in direct revenue and creating a halo effect, making the institution attractive to potential students and investors.

The way that research is conducted today makes the cloud an ideal choice. Once research was conducted behind closed doors. It’s now community-driven with academics from different disciplines, institutions, private corporations and countries working together on research projects. They need to be comfortable using large data sets securely in real time. Cloud technology provides an effective, affordable platform for the raw computing power and the required collaboration that is so vital.

With more and more valuable research projects, IT departments were being swamped with provisioning requests. The cloud has made this easier to manage, and it is now sustainable.

The four pillars of success

The move to the cloud has been described as a journey but, in reality, there is no final destination. As long as technology keeps evolving your strategies will need to evolve too. The journey never ends but there are four pillars that will support a successful cloud strategy.


  1. Establish what you want to achieve – don’t start from the capabilities of the technology; start from what you actually want and need and then work out how technology can help. You might want to answer questions like these:
    • How can we attract, serve and retain more students?
    • What do students say we need to improve?
    • What are our key research capabilities and is that enough?
    • How will we develop peer networks with other institutions?
    • What types of private enterprises might be attracted to work with us?
    • What capabilities do we need to further our institution’s mission?
  2. Make data central to your strategy – it’s all about the data, what it is, where it comes from, how and where you use it. Data needs to be the starting point from which everything else flows. Make sure you can answer the following questions:
    • How will we make data accessible without putting it at risk?
    • Have we established a proper chain of custody for data at every point in its journey?
    • How will we develop a data protection compliance framework with our partners, other institutions and private companies?
    • Are there interoperability issues with any of our potential partners?
  3. Take the long-term approach – If you are going to push workloads onto the cloud you need to understand how you are going to get them back again if and when you need to. Technology is moving fast and today’s latest-big-thing could be tomorrow’s has-been. Don’t forget to consider how you will manage your cloud environment as it grows. Make sure that you have the right roles and skills and have a training plan to ensure your strategy is sustainable. Ask yourself:
    • How will we manage thousands rather than dozens of cloud-based VMs?
    • How will staff roles change to manage cloud integration rather than on-site maintenance?
    • What additional business processes will we need when we start offering cloud services?
  4. Work with the right partners – the right partner or partners will give you an informed view of what to migrate and where to hold it, based on experience from other institutions in a similar position. They will undoubtedly help, but make sure that you avoid these key pitfalls:
    • Has your proposed partner got a strong record of working with educational institutions? Ask them for references and follow them up. Moving providers and unpicking a mistake can be a long, expensive and complicated job.
    • Is your partner giving you impartial, expert advice on where to place different workloads? Remember that they may be selling more than advising.

If you do it right, cloud technology will open up a world of opportunities helping you attract, retain and graduate more students and provide a solid platform for research projects that make a real difference.



What does it cost to migrate apps to the cloud?


You might see moving apps to the public cloud as a way to cut costs. A carefully planned migration will, of course, save you money, but beware! You might find that the move is an expensive experience which leads to increased costs down the line. Make sure that you identify which applications will provide the biggest bang for their buck after a move to the cloud.

Three warning signs that you shouldn’t migrate apps to the cloud

In general, there are three types of application or application requirements that suggest that your app could cost more to run in the public cloud than on premises.

  1. Poorly built and designed applications – the app may be using too many resources. The best way to understand this is to go back to the code and see what it tells you. Good developers should have understood how to use resources effectively, but you mustn’t rely on it. You can use code analysers to understand when and where inefficiency exists. If you don’t have access to the source code, you can use an application profiler, which monitors application behaviour and reports issues, such as too many I/O requests. When an application consumes an excessive amount of resources, the only path to success is to refactor or rewrite the application to make the most out of the native cloud platform. That, however, adds risk and costs money.
  2. Apps that are spread too far from the data – remember where your data is: on premises or in the cloud. If you are running your applications in the cloud, think about network latency and the financial and productivity costs of poor performance.
  3. Apps that have very strict security and compliance requirements – you may find that moving some types of workloads or data to a public cloud will need creative security solutions. You could end up spending considerable amounts of time and money on a specific and fiddly change required to mitigate the move.
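The code-analysis approach in point 1 can be sketched with Python's built-in cProfile module. The wasteful function here is a hypothetical stand-in for a poorly built application; in practice you would profile real workloads and look for the functions dominating the report.

```python
import cProfile
import io
import pstats

# A deliberately inefficient function standing in for a poorly built app:
# repeated string concatenation in a loop does far more work than needed.
def wasteful_report(rows):
    out = ""
    for row in rows:
        out = out + str(row) + "\n"   # O(n^2) behaviour as `out` grows
    return out

def profile(func, *args):
    """Run func under cProfile and return the top of the stats report as text."""
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    buffer = io.StringIO()
    pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
    return buffer.getvalue()

if __name__ == "__main__":
    # The hot spots appear at the top of the listing; a function that
    # dominates the report is a refactoring candidate before migration.
    print(profile(wasteful_report, list(range(10000))))
```

A profile like this, run against representative workloads before migration, gives you evidence for the refactor-or-rehost decision rather than a guess.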

So before you decide to migrate any apps, take a deep breath, have a realistic look at what your new platform can provide, and select only the workloads that will offer the most value. If you do that, the cost savings are there; if you don't, you face higher costs and confused executives.


Thank you TechTarget.com


Why does a Chapel stand firm at the centre of the Northern Powerhouse?


Salem Chapel

Salem Chapel in Leeds was established in 1791 as a Dissenting chapel in opposition to the Church of England and has been a vital part of the city’s community for more than two centuries. The 19th century saw Leeds become an industrial powerhouse, and in 1822 Joshua Tetley built a brewery metres from Salem Chapel’s front door; the chapel did not waver. More surprisingly, but for some more importantly, it was the place where Leeds United football club was founded in 1919. The Salem Chapel has stood firm in a world of change. It finds itself in the midst of change again in its new role as home to one of aql®‘s datacentres.

‘Improving digital infrastructure will help equip businesses and universities of the Northern Powerhouse with the building blocks they need to grow and compete effectively in the global market.’  Northern Powerhouse Minister Andrew Percy

The Northern Powerhouse is far less a Whitehall programme than an initiative driven by the North for the North. It is bringing new investment to key Northern cities like Leeds. On the 13th October 2016, the north’s major universities signed a deal to ensure that 21st-century digital infrastructure is available to education and medical research.

When large data sets need to be shared data centres come into their own. When you visit aql®’s secure, carrier-neutral data centres with their direct access to the Janet network you recognise a place that will support the UK academic community’s need for high-performance IT infrastructure. aql® already hosts the main high-capacity northern access point into Jisc’s Janet network, giving national and international access to the academic community. This network also has a direct connection into IXLeeds – the Northern Internet Exchange – which provides an opportunity for high-capacity access between the Janet network and other commercial networks and key healthcare data stakeholders such as EMIS, making it ideal for supporting public-private big data research projects.



Looking at institutions’ computers gently blinking and humming in their brand new racks, you can only imagine the activity going on to support research, critical back-office systems and IP telephony. It is immediately clear that the space is designed with high performance in mind. If you jump up and down, the highly reinforced floors are solid, and the high-capacity cooling systems and impressive power capability are apparent. You can be confident that the equipment you spent time and money moving and installing will work to its maximum capacity 24 hours a day, 365 days a year. And when you walk outside and see the 20-foot-high electric fence surrounding the facility, you know that aql® has the expertise to keep your equipment and data secure. Safe migration onto the cloud is becoming increasingly necessary for educational institutions, and this fundamental change is supported by datacentres you can trust.

We now live in a world where ‘big data’ is the norm, and being able to support these huge processing needs opens the door to significant benefits. Jisc’s position working with universities and commercial suppliers means that the highest quality, most cost-effective solutions can be developed and shared.

‘We are very pleased to be able to … pass on the cost savings by centrally procuring this service on institutions’ behalf. The northern data centre is one of two shared datacentres Jisc facilitate for UK HEIs and the scalability of service they provide means they are as cost-effective as they are efficient.’  Jeremy Sharp, Director of Strategic Technologies at Jisc

Jisc is working closely with aql® to support the academic community’s key role in the Northern Powerhouse. We know that The Salem Chapel will be seeing more changes in the months and years to come and are confident that it is up to the challenge facing it.


Top 10 facts you need to know about cloud economics


Cloud’s economic model is unique. Whether you are a cloud sceptic following a cloud-first mandate or are completely bought in to the promise of agile infrastructures, you must understand the sometimes counterintuitive economics of the cloud. Whatever you are concentrating on, you will almost certainly have to justify the increase in cloud spending to your CFO. While the better speed and agility will prove their own value, how can you be sure that your internal users are employing responsible practices to keep costs efficient?

Here are ten facts about cloud economics that will help you identify how your organisation can optimise cloud usage.


Cloud isn’t the only solution but it does provide you with more sourcing options. Public cloud leverages many familiar concepts like standardisation and automation but the speed and variable pricing model are new and you are no longer locked into multi-year contracts or large server purchases. But beware, not every application benefits from this model. Variability isn’t always beneficial when you compare it with the discount you get with a longer commitment. Part of your cloud due-diligence will be properly understanding the incentive system of any model you select.


Low costs per virtual machine aren’t what make cloud cheaper. Cloud infrastructure saves you money only when you aren’t using it. The best-fit workloads are those with transient or dynamic properties. Buying new servers to accommodate a short burst in usage isn’t cost effective, but using public cloud lets you scale as you need and pay per minute or hour. But remember to analyse properly how long your ‘short burst’ is actually going to be; cloud may not be the most cost-effective route.
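Analysing how long the ‘short burst’ really is comes down to simple arithmetic. Here is a rough sketch; every figure in it is an illustrative assumption, not a real price.

```python
# Back-of-the-envelope comparison of buying a server versus renting
# cloud capacity for a short burst. All figures are made up for the
# sketch: adjust them to your own quotes before drawing conclusions.
SERVER_PURCHASE_COST = 4000.0          # up-front hardware cost (GBP)
CLOUD_RATE_PER_HOUR = 0.50             # on-demand VM of similar size (GBP/hour)

def on_prem_cost(burst_hours: float) -> float:
    # Buying hardware means paying the full purchase cost however few
    # hours the burst actually uses it (ignoring power, space, staff).
    return SERVER_PURCHASE_COST

def cloud_cost(burst_hours: float) -> float:
    # Pay-per-hour pricing scales with actual usage.
    return CLOUD_RATE_PER_HOUR * burst_hours

def break_even_hours() -> float:
    # Beyond this many billed hours, buying becomes cheaper than renting.
    return SERVER_PURCHASE_COST / CLOUD_RATE_PER_HOUR

if __name__ == "__main__":
    burst = 200  # say, a two-week event needing extra capacity
    print(f"cloud: £{cloud_cost(burst):.2f} vs buy: £{on_prem_cost(burst):.2f}")
    print(f"break-even at {break_even_hours():.0f} billed hours")
```

With these assumed numbers, a 200-hour burst costs far less in the cloud, but a workload billed continuously for years crosses the break-even point, which is exactly the trap the paragraph above warns about.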


If you build a private cloud you become a cloud provider and the economics change. If you want to see savings you will need to build into your model: the high cost of software, high-end infrastructure, supporting performance expectations, maintaining excess capacity and meeting developer expectations. Follow in the footsteps of public cloud providers, focus on net-new services and build with standardised commodity components. Failing to do this leads to over-spending, documentation shortages and falling short on service level agreements.


Moving some or all of your business to the public cloud does mean someone else will be running the infrastructure, but you are still responsible for managing, securing, monitoring and backing up cloud deployments. Facility management and hardware support will diminish, but new governance and integration responsibilities take their place. Some cloud providers will offer you some help, but remember that it might not be free, and you still have to protect all your IT assets and ensure their performance and availability. If your public cloud ROI is dependent on headcount reduction, you may well be disappointed.


Unless your data centre contract is coming to an end or your tech support is entirely consultant-based, your tech management costs are only going to go up with cloud. Cloud enables a long list of things that simply weren’t possible before its existence but now need tech support and implementation, such as genomic processing and supplying resources for two-week marketing events.

The real ROI is the increased speed and agility, which translate into faster, better customer engagement. Better cloud management helps to optimise cloud usage and spending, meaning that the customer experience can be the priority. Remember that developers want speed and agility, and the cloud makes it easier for them to circumvent your infrastructure to get it. You can deliver the autonomy they want via self-service portals and application programming interfaces, and protect your healthy infrastructure policies and consistency with templates that abstract the details.


Public cloud providers increase their margins by pushing average sustained utilisation rates as high as possible. Providers do this by moving around customer workloads to minimise the number of physical machines running.

A jar full of rocks always has room for the sand you pour into the remaining space. By encouraging customers to buy lots of small VMs cloud providers can improve their utilisation and so their margins. They will financially reward you for breaking apart your large apps into smaller components. Don’t be wooed by the price, many of your applications won’t adjust well to this seemingly minor change and you might find yourself spending money to save money.
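The rocks-and-sand effect can be illustrated with a toy first-fit packing of VM sizes onto physical hosts. The host capacity and VM sizes below are purely illustrative; real providers use far more sophisticated placement, but the utilisation effect is the same.

```python
# First-fit-decreasing packing of VM memory requests (GB) onto hosts,
# a toy version of how providers raise utilisation by fitting small
# VMs ("sand") into the gaps left by large ones ("rocks").
HOST_CAPACITY = 64  # GB per physical host (illustrative)

def pack(vm_sizes):
    """Return a list of hosts, each a list of VM sizes, packed first-fit-decreasing."""
    hosts = []
    for size in sorted(vm_sizes, reverse=True):
        for host in hosts:
            if sum(host) + size <= HOST_CAPACITY:
                host.append(size)   # fits into an existing gap
                break
        else:
            hosts.append([size])    # needs a fresh host
    return hosts

def utilisation(hosts):
    """Fraction of total host capacity actually allocated to VMs."""
    return sum(sum(h) for h in hosts) / (HOST_CAPACITY * len(hosts))

if __name__ == "__main__":
    few_large = [48, 48, 48, 48]                       # rocks only
    mixed = [48, 48, 48, 48, 8, 8, 8, 8, 8, 8, 8, 8]   # rocks plus sand
    print(f"large VMs only: {utilisation(pack(few_large)):.0%}")
    print(f"mixed sizes:    {utilisation(pack(mixed)):.0%}")
```

With only 48 GB VMs on 64 GB hosts, a quarter of every host sits idle; adding 8 GB VMs fills those gaps, which is why the pricing model nudges you towards many small VMs even when your application is not built for them.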


This is the classic pets-versus-cattle analogy. Cloud providers use commodity infrastructure knowing that they will have a higher VM failure rate. When this happens they terminate it and start another. This places the onus on the customer to design for application resiliency rather than infrastructure resiliency. You may be happy with this change but it might present significant issues for your existing applications. If an application is not built for the cloud (loosely coupled and highly scalable) an abundance of small VMs that frequently fail will mean poor performance. This is not a weakness of the providers it is a fundamentally different approach. If you choose to host workload in a public cloud you will need to calculate the cost of any work required to make this seamless.


You might be planning to move existing systems-of-record applications onto public cloud. This may well mean redesigning or rewriting your application, and that’s not simple. Breaking applications into smaller components will allow for more granular control over the scalability of each element. Components that need more capacity will include a set design template, automating the creation of new machines that can be run independently. You may need some duplication of controller information to remove bottlenecks and dependencies on one machine. The work may be onerous but it will be cost-effective: independently scaling, self-sustaining mini-components will help manage costs during peak usage and increase resiliency. Loss of a VM will now mean reduced capacity, not a system-wide failure.


You may well be tempted when you see that Amazon’s many additional services can eliminate thousands of labour hours from your app teams and yield a self-sustaining service with minimal maintenance. Be careful: the more you acclimatise to using higher-order features and services, which seem so cheap individually, the harder it will be to transition to another provider who does not offer the same capabilities. You have to weigh the value of the services, with the lock-in and potential migration costs, against the long-term value of the cloud provider you are using. Determine the level of abstraction and the amount of choice required by your customers, but keep in mind that every add-on service you adopt correlates with how locked in you are. Of course, lock-in may not be bad if your provider is giving you great value, but don’t restrict your options: revenue benefits may erode while switching costs rise.


It’s a cliché, but there are always costs you didn’t expect, and it is worthwhile making sure you have explored as many of them as you can. Here is a short list you might benefit from considering: software licences; data-out charges; mitigating latency; direct connections; onboarding error rates; migration charges; employee time (not just IT; remember finance, HR and legal); backup and business continuity. Ask the cloud providers about them and see how it might alter their offer.

The cost benefits of moving to the cloud are real and worthwhile, but make sure that you manage your expectations and do the work needed to get the most from your spend.

Thank you Forrester – https://go.forrester.com/



RCUK Cloud Working Group Workshop


Share your expertise in the application of cloud computing technology for the research community with other researchers and technical specialists.

The RCUK Cloud Working Group and The Cloud for Research Special Interest Group exist to help researchers and technical specialists using cloud computing technologies and services to share knowledge and expertise. The Working Group is planning an innovative workshop focusing on how the potential for cloud computing in research can be fully realised:

  • technical integration: addressing the challenges in moving and running research workloads on public and private cloud
  • equipping the research community with the skills they need to exploit the cloud
  • tackling legal and regulatory issues around the use of public cloud


The workshop will consist of a series of presentations from invited speakers along with the opportunity to meet and network with other members of the research community. The programme will be finalised over the coming weeks but will include talks from representatives from research organisations, public cloud providers and the OpenStack community.

Proposal for plugfest / Interactive Session

We don’t want the whole day to be presentations and talks; we would like people to demonstrate some Real Work™. Ideally, the focus for these interactive sessions should be on interoperation and/or the use of open standards, particularly for building or using hybrid clouds for research (hybrid could also mean HPC/cloud, etc.). Relevant standards include those for cloud APIs, container technologies, bulk data movement, and access control and single sign-on. This session is still to be confirmed, but if you would like to be involved, please submit your interest and ideas with your registration.

Find out more and register for the workshop here; we look forward to talking with you.


7 ways to implement a cloud disaster recovery strategy



There’s a lot resting on a CIO’s shoulders when it comes to disaster recovery (DR) plans. Data is now a core asset, so disaster recovery is no longer just about system recovery but also about data recovery. You may be surprised to hear that around 40% of organisations don’t have a disaster recovery plan of any sort; where plans do exist, they are often not maintained to reflect ever-changing infrastructure and, worst of all, are not tested.

There are tried and tested best practices that will help you put together a robust disaster recovery strategy; below we suggest seven that make an invaluable place to start.


Don’t leave your DR planning to a few IT people who have a bit of time on their hands. Make DR planning a strategic and business imperative, and make sure that all your business colleagues are proactively informed. Encourage them to give feedback while making clear that you are the lead on this programme.


Risks run from man-made to natural disasters and come in all shapes and sizes, from simple human error to tsunamis. Assign each one a likelihood of occurrence, being neither too confident nor too pessimistic. Your plan should include a strategy for prioritising systems, categorising them by criticality. Be aware of scenarios where any downtime would be critical, and of those where it might be some time before major issues occur.
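Likelihood and criticality together can be turned into a simple recovery-priority ranking. A minimal sketch; the system names and scores are entirely hypothetical:

```python
# (system, criticality 1-5, likelihood of disruptive failure 1-5)
systems = [
    ("payroll",        5, 2),
    ("email",          4, 3),
    ("intranet wiki",  2, 3),
    ("research store", 5, 4),
]

# Rank by criticality x likelihood: recover the top of this list first.
ranked = sorted(systems, key=lambda s: s[1] * s[2], reverse=True)
for name, crit, likelihood in ranked:
    print(f"{name}: priority score {crit * likelihood}")
```

Even a crude score like this forces the conversation about which systems must come back first, which is the point of the exercise.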


IT teams have rock-solid, secure and stable infrastructures in place, and people are unwilling to mess with them; if asked to, they often play the ‘security’ card. You need to counter this by reminding your team of the trust your institution already places in external suppliers across HR, legal and finance, and that IT is no different. This argument might be easier if you are already using SaaS applications such as email, Office and ERP tools, so use that success to bolster your case.


Disaster recovery should appear high on the list of budgetary priorities for any IT team; it rarely does. So you might piggyback DR costs for planning, solution selection, deployment and testing onto some other IT effort, and virtualisation is one of the most appropriate. Virtualisation gives you portability of applications, and the pay-as-you-go cloud economic model gives you an affordable off-site option for any DR strategy. Don’t forget that you will need a robust recovery option which ensures that applications and data are recoverable without threatening business continuity.


Mobility is becoming one of the top concerns for any IT team, and with Gartner predicting that by 2017 50% of employers will require staff to bring their own devices into the workplace, the risk of data loss from personal devices is suddenly a major issue. It is essential that you work with your institution to develop an acceptable use policy (AUP). This will provide a framework for what the enterprise can and can’t do with an employee-owned device and how much access any employee can have to institutional data. Your DR plans will need to revolve around this policy.


Don’t let fear of the unknown disrupt the smooth running of your team. Set sensible expectations, and put in place regular checkpoints so people feel confident that their work is heading off disaster. Over the long term, build a culture where DR testing is no different from testing an application before deployment; don’t let it become stigmatised.


There are numerous risks and contingencies which you will need to account for in any DR plan. Be savvy and use the cloud and virtualisation to meet your DR requirements more easily within your budget. If you use real-world examples, preferably from within your own institution, and show how you will manage any crisis without damaging activity or security, you are halfway towards making your DR plan part of the fabric of running your business.

If you can’t stand up in front of your senior management team, tell them you have a comprehensive DR plan and demonstrate how risks are mitigated and continuity assured, then you need to go back to the top of this list and start again.

Look at Commvault for more information on this.