RCUK Cloud for Research Workshop – January 2018



The RCUK Cloud Working Group are hosting their 3rd annual workshop at the Francis Crick Institute in London on January 8th 2018.

This event will bring together researchers and technical specialists to share expertise in the application of cloud computing technology for the research community.

The meeting will include presentations from a range of research domains including particle physics, astronomy, the environmental sciences, medical research and bioinformatics.


To register for this free event, please visit: http://bit.ly/rcuk-cloud-workshop2018-reg

The working group also welcomes submissions for talks, posters or proposals for breakout sessions.

Key themes

This workshop will focus on key areas to address in order for the potential of cloud computing for research to be fully realised:

  • Tackling technical challenges around the use of cloud: for example, porting legacy workloads, scenarios for hybrid cloud, moving large data volumes, use of object storage vs. POSIX file systems.
  • Cloud as an enabler for new and novel applications: e.g. public cloud toolkits and services for Machine Learning and AI, use of FPGAs and GPU-based systems, and applications related to the Internet of Things and Edge Computing
  • Perspectives from European and international collaborations and research programmes
  • Policy, legal, regulatory and ethical issues, models for funding – case studies for managing sensitive or personal data in the cloud
  • Addressing the skills gap: how to educate researchers to make the best use of cloud; DevOps and ResOps


Microsoft Education and Jisc GÉANT Launch Event


Microsoft and Jisc would like to invite you to an event at the Microsoft campus in Reading on 6th December 2017. This event will focus on The Journey to Digital Transformation, and celebrate the availability of Azure cloud services under a pan-European purchasing framework negotiated by GÉANT.

As a response to disruptive sector changes and increased competition, many UK universities are embarking on a quest to digitally transform their institutions. This event is designed to help those who are already in progress or are just commencing the journey. During the event, customers, Jisc, partners and the Microsoft Education Team will share their vision and experience of the process.

Sessions will address questions such as:

  • How can more time and budget be focussed on innovation and creativity?
  • What will be the impact of Artificial Intelligence on teaching and learning, student analytics and research?
  • How will institutions manage a multi-faceted environment from on-premises to public cloud?
  • How can an institution differentiate in an increasingly competitive environment?
  • Can central IT Services provide an agile, scalable and protected environment for researchers?

To find out more, and express your interest, click here.

The Cloud: do the benefits outweigh the challenges?


“Cloud technology and services create some challenges, but certainly present great opportunities in education.” John Cartwright – UCISA

There is no doubt that education is moving to the cloud and the benefits are huge, but you need to recognise that one size does not fit all. You need to know about the potential disadvantages so that you can mitigate them and develop your plans around them.

So what are the known benefits?

  • Costs: get huge computing power without any upfront costs
  • Flexibility: instantly scale resources up or down as needed
  • Capacity: access virtually unlimited compute resources
  • Resiliency: support high availability and disaster recovery with a more durable infrastructure
  • Innovation: shift internal IT resources from maintenance to innovation
  • Collaboration: provide anytime, anywhere access to shared information and learning resources

What should you be worrying about and planning around?

  • Data security: nearly 70% of UK HE IT leaders say it is their biggest challenge. There is no doubt that there are concerns, but a carefully structured security strategy and working closely with the major suppliers will ensure that data is safe
  • Existing investments: you will have invested heavily in on-premises infrastructure and you will need to make sure that you squeeze every last penny of ROI out of that hardware. Again, a clear plan for migration and product retirement will help
  • Supplier risk: the cloud marketplace is maturing but some niche vendors may not be around forever. Make sure that you have read every last sentence of any contract you sign and be sure that you are clear about off-loading data and managing any novation required

It’s well worth the effort. A sound and well-implemented cloud strategy will help you retain students and support ground-breaking research projects.

Student experience

Students are now paying customers with expectations to match. They assume that anytime, anywhere access to learning is the least your institution will offer. On top of that, they want the ability to collaborate with fellow students and staff, trouble-free streaming of video and audio content and the ability to submit that crucial essay at the last second from wherever they may be, in the library or in bed.

No institution has limitless resources. The cloud is the obvious place to maximise what you offer your students without overburdening the IT department. With clever use of public, private and hybrid cloud you can offer online access to course materials and a well-stocked, up-to-date library; 80% of learning institutions have now moved student email to the cloud as well.

Competition for students has never been more fierce and you are now competing on a global stage. Being on the cloud allows you to throw off the shackles of a campus. You can now deliver capabilities, course materials and data at scale. The rising popularity of MOOCs shows the impact cloud delivery is having on education.

With the rise and growing sophistication of student analytics, the cloud allows your institution to harness the power of data. This data allows you to recognise and manage risk factors, so that more students achieve their potential, making your institution an attractive choice.


Research

Research is a key revenue stream for your institution, with the right research projects bringing in direct revenue and creating a halo effect, making the institution attractive to potential students and investors.

The way that research is conducted today makes the cloud an ideal choice. Once research was conducted behind closed doors. It’s now community-driven with academics from different disciplines, institutions, private corporations and countries working together on research projects. They need to be comfortable using large data sets securely in real time. Cloud technology provides an effective, affordable platform for the raw computing power and the required collaboration that is so vital.

With more and more valuable research projects, IT departments were being swamped with provisioning requests. The cloud has made this easier to manage and it is now sustainable.

The four pillars of success

The move to the cloud has been described as a journey but, in reality, there is no final destination. As long as technology keeps evolving, your strategies will need to evolve too. The journey never ends, but there are four pillars that will support a successful cloud strategy.


  1. Establish what you want to achieve – don’t start from the capabilities of the technology; start from what you actually want and need and then work out how technology can help. You might want to answer questions like these:
    • How can we attract, serve and retain more students?
    • What do students say we need to improve?
    • What are our key research capabilities and is that enough?
    • How will we develop peer networks with other institutions?
    • What types of private enterprises might be attracted to work with us?
    • What capabilities do we need to further our institution’s mission?
  2. Make data central to your strategy – it’s all about the data, what it is, where it comes from, how and where you use it. Data needs to be the starting point from which everything else flows. Make sure you can answer the following questions:
    • How will we make data accessible without putting it at risk?
    • Have we established a proper chain of custody for data at every point in its journey?
    • How will we develop a data protection compliance framework with our partners, other institutions and private companies?
    • Are there interoperability issues with any of our potential partners?
  3. Take the long-term approach – if you are going to push workloads onto the cloud you need to understand how you are going to get them back again if and when you need to. Technology is moving fast and today’s latest-big-thing could be tomorrow’s has-been. Don’t forget to consider how you will manage your cloud environment as it grows. Make sure that you have the right roles and skills and have a training plan to ensure your strategy is sustainable. Ask yourself:
    • How will we manage thousands rather than dozens of cloud-based VMs?
    • How will staff roles change to manage cloud integration rather than on-site maintenance?
    • What additional business processes will we need when we start offering cloud services?
  4. Work with the right partners – the right partner or partners will give you an informed view of what to migrate and where to hold it, based on experience from other institutions in a similar position. They will undoubtedly help, but make sure that you watch out for these key pitfalls:
    • Has your proposed partner got a strong record of working with educational institutions? Ask them for references and follow them up. It can be a long, expensive and complicated job to unpick a mistake and move again.
    • Is your partner giving you impartial, expert advice on where to place different workloads? Remember that they may be selling more than advising.

If you do it right, cloud technology will open up a world of opportunities helping you attract, retain and graduate more students and provide a solid platform for research projects that make a real difference.



What does it cost to migrate apps to the cloud?


You might see moving apps to the public cloud as a way to cut costs. A carefully planned migration will, of course, save you money, but beware! You might find that the move is an expensive experience which leads to increased costs down the line. Make sure you identify which applications will provide the biggest bang for their buck after a move to the cloud.

Three warning signs that you shouldn’t migrate apps to the cloud

In general, there are three types of application or application requirements that suggest that your app could cost more to run in the public cloud than on premises.

  1. Poorly built and designed applications – the app may be using too many resources. The best way to understand this is to go back to the code and see what it tells you. Good developers should have understood how to use resources effectively, but you mustn’t rely on it. You can use code analysers to understand when and where inefficiency exists. If you don’t have access to the source code, you can use an application profiler, which monitors application behaviour and reports issues such as too many I/O requests (a minimal profiling sketch follows this list). When an application consumes an excessive amount of resources, the only path to success is to refactor or rewrite the application to make the most of the native cloud platform. That, however, adds risk and costs money.
  2. Apps that are spread too far from the data – remember where your data is, on-premises or in the cloud. If you are leveraging your applications in the cloud, think about network latency and the financial and productivity costs of poor performance.
  3. Apps that have very strict security and compliance requirements – you may find that moving some types of workloads or data to a public cloud will need creative security solutions. You could end up spending considerable amounts of time and money on a specific and fiddly change required to mitigate the move.
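
As a minimal illustration of the profiling idea in point 1 above, the sketch below uses Python’s standard cProfile and pstats modules to report where a deliberately chatty, hypothetical load_records function spends its time. The function, file name and output threshold are invented for the example; a real assessment would profile your own application under a realistic workload.

    import cProfile
    import io
    import pstats

    def load_records(path="records.txt"):
        # Hypothetical, deliberately chatty routine: lots of small reads.
        records = []
        try:
            with open(path) as handle:
                for line in handle:
                    records.append(line.strip())
        except FileNotFoundError:
            pass  # the sample file may not exist; the profiling mechanics are the point
        return records

    profiler = cProfile.Profile()
    profiler.enable()
    load_records()
    profiler.disable()

    # Print the ten most expensive calls by cumulative time; excessive call counts
    # against I/O routines are the kind of signal that argues for refactoring first.
    buffer = io.StringIO()
    pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(10)
    print(buffer.getvalue())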

So before you decide to migrate any apps, take a deep breath, have a realistic look at what your new platform can provide and select only the workloads that will offer the most value. If you do that, the cost savings are there; if you don’t, you face higher costs and confused executives.


Thank you TechTarget.com


Why does a Chapel stand firm at the centre of the Northern Powerhouse?


Salem Chapel

Salem Chapel in Leeds was established in 1791 as a Dissenting chapel in opposition to the Church of England and has been a vital part of the city’s community for more than two centuries. The 19th century saw Leeds become an industrial powerhouse, and in 1822 Joshua Tetley built a brewery metres from Salem Chapel’s front door; the chapel did not waver. More surprisingly, but for some more importantly, it was the place where Leeds United football club was founded in 1919. Salem Chapel has stood firm in a world of change. It finds itself in the midst of change again in its new role as home to one of aql®’s datacentres.

‘Improving digital infrastructure will help equip businesses and universities of the Northern Powerhouse with the building blocks they need to grow and compete effectively in the global market.’  Northern Powerhouse Minister Andrew Percy

The Northern Powerhouse is far less a Whitehall programme than an initiative driven by the North for the North. It is bringing new investment to key Northern cities like Leeds. On the 13th October 2016, the north’s major universities signed a deal to ensure that 21st-century digital infrastructure is available to education and medical research.

When large data sets need to be shared, data centres come into their own. When you visit aql®’s secure, carrier-neutral data centres, with their direct access to the Janet network, you recognise a place that will support the UK academic community’s need for high-performance IT infrastructure. aql® already hosts the main high-capacity northern access point into Jisc’s Janet network, giving national and international access to the academic community. This network also has a direct connection into IXLeeds – the Northern Internet Exchange – which provides high-capacity access between the Janet network, other commercial networks and key healthcare data stakeholders such as EMIS, making it ideal for supporting public-private big data research projects.

aql datacentre leeds


Looking at institutions’ computers gently blinking and humming in their brand new racks, you can only imagine the activity going on to support research, critical back-office systems and IP telephony. It is immediately clear that the space is designed with high performance in mind. If you jump up and down, the highly reinforced floors are solid, and the cooling systems and impressive power capability are apparent. You can be confident that the equipment you spent time and money moving and installing will work to its maximum capacity 24 hours a day, 365 days a year. And when you walk outside and see the 20-foot-high electric fence surrounding the facility, you know that aql® has the expertise to keep your equipment and data secure. Safe migration to the cloud is becoming increasingly necessary for educational institutions and this fundamental change is supported by datacentres you can trust.

We now live in a world where ‘big data’ is the norm, and being able to support these huge processing needs opens the door to significant benefits. Jisc’s position, working with universities and commercial suppliers, means that the highest quality, most cost-effective solutions can be developed and shared.

‘We are very pleased to be able to … pass on the cost savings by centrally procuring this service on institutions’ behalf. The northern data centre is one of two shared datacentres Jisc facilitate for UK HEIs and the scalability of service they provide means they are as cost-effective as they are efficient.’  Jeremy Sharp, Director of Strategic Technologies at Jisc

Jisc is working closely with aql® to support the academic community’s key role in the Northern Powerhouse. We know that Salem Chapel will see more changes in the months and years to come and are confident that it is up to the challenge.

Salem aql

Top 10 facts you need to know about cloud economics


Cloud’s economic model is unique. Whether you are a cloud sceptic following a cloud-first mandate or completely bought in to the promise of agile infrastructures, you must understand the sometimes counterintuitive economics of cloud. Whatever you are concentrating on, you will almost certainly have to justify the increase in cloud spending to your CFO. While the better speed and agility will prove their own value, how can you be sure that your internal users are employing responsible practices to keep costs efficient?

Here are ten facts about cloud economics that will help you identify how your organisation can optimise cloud usage.


Cloud isn’t the only solution, but it does provide you with more sourcing options. Public cloud leverages many familiar concepts like standardisation and automation, but the speed and variable pricing model are new, and you are no longer locked into multi-year contracts or large server purchases. But beware: not every application benefits from this model. Variability isn’t always beneficial when you compare it with the discount you get with a longer commitment. Part of your cloud due diligence will be properly understanding the incentive system of any model you select.


Low costs per virtual machine aren’t what make cloud cheaper. Cloud infrastructure saves you money only when you aren’t using it. The best-fit workloads are those with transient or dynamic properties. Buying new servers to accommodate a short burst in usage isn’t cost-effective, while public cloud lets you scale as you need and pay per minute or hour. But remember to analyse properly how long your ‘short burst’ is actually going to be; cloud may not be the most cost-effective route.
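
To make that analysis concrete, here is a rough, purely illustrative comparison; every figure is a made-up placeholder, not a real price from any provider.

    # Illustrative burst-cost comparison. All figures are hypothetical placeholders;
    # substitute real quotes before drawing conclusions.
    CLOUD_RATE_PER_VM_HOUR = 0.50   # assumed on-demand price per VM hour
    VMS_NEEDED = 20                 # size of the temporary burst
    ON_PREM_SERVER_COST = 8000.00   # assumed purchase price of one comparable server
    SERVERS_NEEDED = 2              # servers you would have to buy to cover the burst

    on_prem_cost = ON_PREM_SERVER_COST * SERVERS_NEEDED

    for burst_hours in (72, 720, 4380):  # a long weekend, a month, half a year
        cloud_cost = CLOUD_RATE_PER_VM_HOUR * VMS_NEEDED * burst_hours
        cheaper = "cloud" if cloud_cost < on_prem_cost else "on-premises"
        print(f"{burst_hours:>5}h burst: cloud £{cloud_cost:,.0f} vs on-prem £{on_prem_cost:,.0f} -> {cheaper}")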


If you build a private cloud, you become a cloud provider and the economics change. If you want to see savings, you will need to build into your model the high cost of software, high-end infrastructure, supporting performance expectations, maintaining excess capacity and meeting developer expectations. Follow in the footsteps of public cloud providers: focus on net-new services and build with standardised commodity components. Failing to do this leads to over-spending, documentation shortages and falling short on service level agreements.


Moving some or all of your business to the public cloud does mean someone else will be running the infrastructure, but you are still responsible for managing, securing, monitoring and backing up cloud deployments. Facility management and hardware support will diminish, but new governance and integration responsibilities take their place. Some cloud providers will offer you help, but remember that it might not be free and you still have to protect all your IT assets and ensure their performance and availability. If your public cloud ROI is dependent on headcount reduction, you may well be disappointed.


Unless your data centre contract is coming to an end or your tech support is entirely consultant based, your tech management costs are only going to go up with cloud. Cloud enables a long list of things that simply weren’t possible before it existed but now need tech support and implementation, such as genomic processing and supplying resources for two-week marketing events.

The real ROI is the increased speed and agility, which translate into faster, better customer engagement. Better cloud management helps to optimise cloud usage and spending, meaning that the customer experience can be the priority. Remember that developers want speed and agility, and the cloud makes it easier for them to circumvent your infrastructure to get it. You can deliver the autonomy they want via self-service portals and application programming interfaces, and protect your healthy infrastructure policies and consistency with templates that abstract the details.
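
As a hedged sketch of that idea (the template names, sizes and policy values below are invented for illustration, not any provider’s actual API), a thin self-service layer can expose only pre-approved templates while the policy details stay baked in:

    # Hypothetical self-service sketch: developers pick from approved templates,
    # and the templates hide (and enforce) the infrastructure policy details.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VmTemplate:
        cpus: int
        memory_gb: int
        encrypted_disk: bool = True      # policy baked in, not chosen by the requester
        backup_schedule: str = "daily"   # likewise

    APPROVED_TEMPLATES = {
        "small-web": VmTemplate(cpus=2, memory_gb=4),
        "batch-worker": VmTemplate(cpus=8, memory_gb=32),
    }

    def request_vm(template_name, requested_by):
        # Self-service entry point: fast for developers, consistent for IT.
        if template_name not in APPROVED_TEMPLATES:
            raise ValueError(f"{template_name!r} is not an approved template")
        template = APPROVED_TEMPLATES[template_name]
        print(f"{requested_by} provisioned '{template_name}': {template}")
        return template

    request_vm("batch-worker", requested_by="research-team")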


Public cloud providers increase their margins by pushing average sustained utilisation rates as high as possible. Providers do this by moving around customer workloads to minimise the number of physical machines running.

A jar full of rocks always has room for the sand you pour into the remaining space. By encouraging customers to buy lots of small VMs, cloud providers can improve their utilisation and so their margins. They will financially reward you for breaking apart your large apps into smaller components. Don’t be wooed by the price: many of your applications won’t adjust well to this seemingly minor change, and you might find yourself spending money to save money.


This is the classic pets-versus-cattle analogy. Cloud providers use commodity infrastructure knowing that they will have a higher VM failure rate. When a VM fails, they terminate it and start another. This places the onus on the customer to design for application resiliency rather than infrastructure resiliency. You may be happy with this change, but it might present significant issues for your existing applications. If an application is not built for the cloud (loosely coupled and highly scalable), an abundance of small VMs that frequently fail will mean poor performance. This is not a weakness of the providers; it is a fundamentally different approach. If you choose to host workloads in a public cloud, you will need to calculate the cost of any work required to make this seamless.
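
A deliberately simple sketch of what designing for application resiliency can look like is shown below: the work is broken into small, idempotent units and a lost instance just means the unit is retried. The simulated failure and function names are invented for the example.

    # Toy application-resiliency sketch: retry small idempotent work units rather
    # than relying on any single VM staying alive. The simulated termination of
    # the first instance for even-numbered chunks stands in for real VM churn.
    attempts_seen = {}

    class InstanceTerminated(Exception):
        """Stand-in for a cloud VM being reclaimed mid-task."""

    def process_chunk(chunk_id):
        attempts_seen[chunk_id] = attempts_seen.get(chunk_id, 0) + 1
        if attempts_seen[chunk_id] == 1 and chunk_id % 2 == 0:
            raise InstanceTerminated(f"instance running chunk {chunk_id} was terminated")
        return f"chunk {chunk_id} done"

    def run_with_retries(chunk_id, max_attempts=5):
        for attempt in range(1, max_attempts + 1):
            try:
                return process_chunk(chunk_id)
            except InstanceTerminated as err:
                print(f"attempt {attempt}: {err}; rescheduling on another instance")
        raise RuntimeError(f"chunk {chunk_id} failed after {max_attempts} attempts")

    print([run_with_retries(chunk) for chunk in range(5)])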


You might be planning to move existing systems-of-record applications onto public cloud. This may well mean redesigning or rewriting your application, and that’s not simple. Breaking applications into smaller components will allow for more granular control over the scalability of each element. Components that need more capacity will include a set design template, automating the creation of new machines that can be run independently. You may need some duplication of controller information to remove bottlenecks and dependencies on one machine. The work may be onerous but it will be cost-effective; independently scaling, self-sustaining mini-components will help manage costs during peak usage and increase resiliency. Loss of a VM will now mean reduced capacity, not a system-wide failure.


You may well be tempted when you see that Amazon’s many additional services can eliminate thousands of labour hours from your app teams and yield a self-sustaining service with minimal maintenance. Be careful: the more you acclimatise to using high-order features and services, which seem so cheap individually, the harder it will be to transition to another provider who does not offer the same capabilities. You have to weigh the value of the services, with their lock-in and potential migration costs, against the long-term value of the cloud provider you are using. Determine the level of abstraction and the amount of choice required by your customers, but keep in mind that every add-on service you adopt increases the degree to which you are locked in. Of course, lock-in may not be bad if your provider is giving you great value, but don’t restrict your options. Revenue benefits may erode while switching costs rise.


It’s a cliché, but there are always costs you didn’t expect, and it is worthwhile making sure you have explored as many of them as you can. Here is a short list you might benefit from considering: software licences; data-out charges; mitigating latency; direct connections; onboarding error rates; migration charges; employee time (not just IT – remember finance, HR and legal); backup and business continuity. Ask the cloud providers about them and see how it might alter their offer.
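
If it helps to make those line items concrete, a back-of-the-envelope tally along these lines keeps the comparison honest when quotes are put side by side; every figure below is a made-up placeholder.

    # Back-of-the-envelope tally of commonly forgotten first-year costs.
    # Every number is a made-up placeholder; replace with real quotes.
    headline_quote = 60000  # the figure on the provider's first slide (placeholder)

    hidden_costs = {
        "software licences": 12000,
        "data-out (egress) charges": 4500,
        "direct connection / dedicated link": 6000,
        "migration and onboarding effort": 15000,
        "staff time (IT, finance, HR, legal)": 20000,
        "backup and business continuity": 8000,
    }

    total = headline_quote + sum(hidden_costs.values())
    print(f"Headline quote:       £{headline_quote:,}")
    for item, cost in hidden_costs.items():
        print(f"  + {item:<38} £{cost:,}")
    print(f"Realistic first year: £{total:,}")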

The cost benefits of moving to the cloud are real and worthwhile but make sure that you manage your expectations and do the work that’s needed to maximise your spend.

Thank you Forrester – https://go.forrester.com/



RCUK Cloud Working Group Workshop


Share your expertise in the application of cloud computing technology for the research community with other researchers and technical specialists.

The RCUK Cloud Working Group and the Cloud for Research Special Interest Group exist to help researchers and technical specialists using cloud computing technologies and services to share knowledge and expertise. The Working Group is planning an innovative workshop focusing on what is needed for the potential of cloud computing in research to be fully realised:

  • technical integration: addressing the challenges in moving and running research workloads on public and private cloud
  • equipping the research community with the skills they need to exploit cloud
  • tackling legal and regulatory issues around the use of public cloud


The workshop will consist of a series of presentations from invited speakers along with the opportunity to meet and network with other members of the research community. The programme will be finalised over the coming weeks but will include talks from representatives from research organisations, public cloud providers and the OpenStack community.

Proposal for plugfest / Interactive Session

We don’t want the whole day to be presentations and talks; we would like people to demonstrate some Real Work™. Ideally, the focus for these interactive sessions should be on interoperation and/or the use of open standards, particularly for building or using hybrid clouds for research (but hybrid could also be HPC/cloud, etc.). Standards can be any of relevance to cloud APIs, use of container technologies, technologies for bulk data movement, and access control and single sign-on. This session is still to be confirmed, but if you would like to be involved please submit your interest and ideas with your registration.

Find out more and register for the workshop here; we look forward to talking with you.


7 ways to implement a cloud disaster recovery strategy



There’s a lot resting on a CIO’s shoulders when it comes to disaster recovery (DR) plans. Data is now a core asset, so disaster recovery is no longer just about system recovery but also about data recovery. You may be surprised to hear that about 40% of organisations don’t have a disaster recovery plan of any sort; even where plans do exist, they may well not be maintained to reflect ever-changing infrastructure and, worst of all, they are not tested.

There are tried and tested best practices which will help you put together a robust disaster recovery strategy; below we suggest seven that you will find an invaluable place to start.


1. Don’t leave your DR planning to a few IT people who have a bit of time on their hands. Make DR planning a strategic and business imperative and make sure that all your business colleagues are proactively informed. Encourage them to give feedback while being aware that you are the lead on this programme.


2. Risks run from manmade to natural disasters and come in all shapes and sizes, from idiotic mistakes to tsunamis. Assign each one a likelihood of occurrence, being neither too confident nor too pessimistic. Your plan should include a system-prioritisation strategy, categorising your systems by criticality. Be aware of scenarios where any downtime might be critical and those where it might be some time before major issues occur.
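
One simple way to keep that prioritisation honest is a likelihood-times-impact score per system, as in the sketch below; the systems, likelihoods and impact ratings are invented examples rather than a recommended register.

    # Hedged sketch of a likelihood x criticality ranking for DR planning.
    # Systems and scores are invented examples; use your own risk register.
    systems = [
        # (system, likelihood of a disruptive incident per year, impact 1-5)
        ("student records", 0.15, 5),
        ("research storage", 0.20, 4),
        ("campus wifi portal", 0.30, 2),
        ("digital signage", 0.50, 1),
    ]

    ranked = sorted(systems, key=lambda s: s[1] * s[2], reverse=True)
    for name, likelihood, impact in ranked:
        print(f"{name:<20} risk score {likelihood * impact:.2f}")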


3. IT teams have rock-solid, secure and stable infrastructures in place and people are unwilling to mess with them. If they are asked to, they often play the ‘security’ card. You need to counter this by reminding your team of the trust your institution already places in external suppliers – HR, legal and financial – and that IT is no different. This argument might be easier if you are already using SaaS applications like email, Office and ERP tools, so use this success to leverage your case.


4. Disaster recovery should appear high on the list of budgetary priorities for any IT team; it rarely does. So you might piggyback DR costs for planning, solution selection, deployment and testing on some other IT effort, and virtualisation is one of the most appropriate. Virtualisation gives you portability of applications, and the pay-as-you-go cloud economic model gives you an affordable off-site option for any DR strategy. Don’t forget that you will need a robust recovery option which ensures that applications and data are recoverable without threatening business continuity.


5. Mobility is becoming one of the top concerns for any IT team and, with Gartner predicting that by 2017 50% of employers will require staff to bring their own devices into the workplace, the risk of data loss from personal devices is suddenly a major issue. It is essential that you work with your institution to develop an AUP (acceptable use policy). This will provide a framework for what the enterprise can and can’t do with an employee-owned device and how much access any employee can have to institutional data. Your DR plans will need to revolve around this policy.


6. Don’t let fear of the unknown affect the smooth running of your team. Set sensible expectations for your team and put in place regular check-points so they feel confident that their work is heading off disaster. Over the long term you need to build a culture where DR testing is no different from testing an application before deployment; don’t let it become stigmatised.


7. There are numerous risks and contingencies which you will need to account for in any DR plan. Be savvy and use the cloud and virtualisation to meet the DR requirements within your budget more easily. If you use real-world examples, preferably from within your own institution, and show how you will manage any crisis without damaging activity or security, you are halfway towards making your DR plan part of the fabric of running your business.

If you can’t stand up in front of your senior management team, tell them you have a comprehensive DR plan and demonstrate how risks are mitigated and continuity is assured, then you need to go back to the top of this list and start again.

Look at Commvault for more information on this.




Immerse yourself in AWS


The Cloud is becoming ever more central to all parts of higher and further education, from central IT to research, teaching and learning. Amazon recognises that help is needed to find your most appropriate route through what can be a maze.

AWS immersion days will show you best practices for deploying applications, optimising performance, monitoring cloud resources, driving efficiencies, reducing costs and more. And you will meet AWS staff so you can start building beneficial relationships to help you as and when your institution changes and you need advice or help.

You can see more details and register here. If you can’t go to this one keep your eye on the AWS website for the next announcement.


10 Mistakes to avoid when choosing a cloud


The benefits of moving to the cloud are compelling, but with a new cloud infrastructure comes a new set of challenges and risks that require a new way of thinking. From the very start, you will need an in-depth understanding of the new mindset that cloud technologies demand. If you don’t learn quickly you may well fall foul of one of these 10 mistakes.

  1. Leaping before you look – cloud is a means to an end: you want happy users using applications that meet their needs and your security and compliance needs. Make sure that this aim does not get lost in a complex, technical strategy.
  2. Assuming all clouds offer the same service – private, public or hybrid, the right cloud mix depends on your specific requirements as well as the applications and infrastructure you have already invested in. Don’t forget that your requirements will almost certainly change. So when you choose your cloud supplier, don’t think you have ‘found the one’. Choosing a cloud supplier and structure is not a marriage and you may well need to change suppliers; avoid being locked in.
  3. Becoming too stressed over varying performance levels – different suppliers will provide varying service levels for a given application. They will vary across regions and for different set-ups. It is up to you to plan for specific performance levels and be ready to tweak them until you reach your goals.
  4. Expecting any application to run on any cloud infrastructure – cloud providers are not OS agnostic. If your infrastructure is heavily dependent on Windows, Google may not be an option for you. Some legacy systems aren’t supported by any cloud provider. Do your homework in detail before committing to a provider.
  5. Forecasting the same cost distribution for all your resources across all suppliers – the more specialised your application the more likely your cost is to vary considerably across different suppliers.
  6. Assuming all suppliers work to the same SLA – every supplier will have their own SLA and these might even vary from service to service. Remember to read the small print very carefully before you sign up and pay particular attention to novation clauses.
  7. Not aligning 3rd party support across different cloud providers – applications based on specific virtual appliances are unlikely to be supported to the same level by every cloud provider. Don’t leave it until an advanced implementation stage to discover this; check the small print early on.
  8. Designing an application without considering your cloud provider’s unique characteristics – if you ignore these unique variables you are setting yourself up for an unpredictable project as far as cost, performance and later maintenance are concerned.
  9. Not leveraging the full potential of the cloud – choosing a supplier doesn’t mean you have to migrate all your applications in one fell swoop. Don’t forget that for each service and application you might choose a different approach; you just need to beware of creating an overly complex infrastructure.
  10. Ignoring disaster recovery and automated migration requirements – application downtime is always going to be a challenge. It is up to you to ensure all your applications remain available through cloud outages and that it is easy to migrate to your chosen cloud provider from your existing set-up and legacy systems. It is up to you to plan for Recovery Point Objective, Recovery Time Objective and automated migration requirements for the long term; some suppliers may help, but not all, so check before you sign (a simple RPO/RTO sanity check is sketched after this list).
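
As a final, deliberately simple illustration of point 10 (the objectives and timings below are invented examples), checking a backup schedule against your Recovery Point and Recovery Time Objectives can be as plain as:

    # Minimal RPO/RTO sanity check. All numbers are invented examples;
    # substitute your own objectives and measured recovery timings.
    rpo_minutes = 60               # maximum tolerable data loss
    rto_minutes = 240              # maximum tolerable downtime

    backup_interval_minutes = 30   # how often backups actually run
    measured_restore_minutes = 300 # last rehearsed full-restore time

    rpo_ok = backup_interval_minutes <= rpo_minutes
    rto_ok = measured_restore_minutes <= rto_minutes

    print(f"RPO met: {rpo_ok} (backups every {backup_interval_minutes} min vs {rpo_minutes} min objective)")
    print(f"RTO met: {rto_ok} (restore takes {measured_restore_minutes} min vs {rto_minutes} min objective)")
    if not (rpo_ok and rto_ok):
        print("Revisit the plan before relying on it.")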

Whatever route you take, keep at the forefront of your mind that what you need and get today WON’T be the only thing you need tomorrow; remain agile and flexible to make the most of being on the cloud.

Share this with your colleagues and don’t forget that there is more advice and support on this blog.

Thanks to CloudEndure.com