Why does a Chapel stand firm at the centre of the Northern Powerhouse?


Salem Chapel

Salem Chapel in Leeds was established in 1791 as a Dissenting chapel in opposition to the Church of England and has been a vital part of the city’s community for more than two centuries. The 19th century saw Leeds become an industrial powerhouse, and in 1822 Joshua Tetley built a brewery metres from Salem Chapel’s front door; the chapel did not waver. More surprisingly, but for some more importantly, it was the place where Leeds United Football Club was founded in 1919. Salem Chapel has stood firm in a world of change. It finds itself in the midst of change again in its new role as home to one of aql®’s data centres.

‘Improving digital infrastructure will help equip businesses and universities of the Northern Powerhouse with the building blocks they need to grow and compete effectively in the global market.’  Northern Powerhouse Minister Andrew Percy

The Northern Powerhouse is far less a Whitehall programme than an initiative driven by the North for the North. It is bringing new investment to key Northern cities like Leeds. On 13 October 2016, the north’s major universities signed a deal to ensure that 21st-century digital infrastructure is available for education and medical research.

When large data sets need to be shared, data centres come into their own. When you visit aql®’s secure, carrier-neutral data centres, with their direct access to the Janet network, you recognise a place that will support the UK academic community’s need for high-performance IT infrastructure. aql® already hosts the main high-capacity northern access point into Jisc’s Janet network, giving the academic community national and international access. The network also has a direct connection into IXLeeds – the Northern Internet Exchange – which provides high-capacity access between the Janet network, other commercial networks and key healthcare data stakeholders such as EMIS, making it ideal for supporting public-private big data research projects.

aql datacentre leeds


Looking at institutions’ computers gently blinking and humming in their brand-new racks, you can only imagine the activity going on to support research, critical back-office systems and IP telephony. It is immediately clear that the space is designed with high performance in mind. If you jump up and down, the highly reinforced floors are solid, and the cooling systems and impressive power capability are apparent. You can be confident that the equipment you spent time and money moving and installing will work to its maximum capacity 24 hours a day, 365 days a year. And when you walk outside and see the 20-foot-high electric fence surrounding the facility, you know that aql® has the expertise to keep your equipment and data secure. Safe migration onto the cloud is becoming increasingly necessary for educational institutions, and this fundamental change is supported by data centres you can trust.

We now live in a world where ‘big data’ is the norm, and being able to support these huge processing needs opens the door to significant benefits. Jisc’s position, working with both universities and commercial suppliers, means that the highest-quality, most cost-effective solutions can be developed and shared.

‘We are very pleased to be able to … pass on the cost savings by centrally procuring this service on institutions’ behalf. The northern data centre is one of two shared datacentres Jisc facilitate for UK HEIs and the scalability of service they provide means they are as cost-effective as they are efficient.’  Jeremy Sharp, Director of Strategic Technologies at Jisc

Jisc is working closely with aql® to support the academic community’s key role in the Northern Powerhouse. We know that Salem Chapel will see more changes in the months and years to come and are confident that it is up to the challenge.

Salem aql

Top 10 facts you need to know about cloud economics

Cloud Economics

Cloud’s economic model is unique. Whether you are a cloud sceptic following a cloud-first mandate or are completely bought in to the promise of agile infrastructure, you must understand the sometimes counterintuitive economics of cloud. Whatever you are concentrating on, you will almost certainly have to justify the increase in cloud spending to your CFO. While better speed and agility will prove their own value, how can you be sure that your internal users are employing responsible practices to keep costs efficient?

Here are ten facts about cloud economics that will help you identify how your organisation can optimise cloud usage.


Cloud isn’t the only solution, but it does provide you with more sourcing options. Public cloud leverages many familiar concepts, such as standardisation and automation, but the speed and variable pricing model are new, and you are no longer locked into multi-year contracts or large server purchases. But beware: not every application benefits from this model. Variability isn’t always beneficial when you compare it with the discount you get for a longer commitment. Part of your cloud due diligence will be properly understanding the incentive system of any model you select.


Low costs per virtual machine aren’t what make cloud cheaper. Cloud infrastructure saves you money only when you aren’t using it. Best-fit workloads are those with transient or dynamic properties. Buying new servers to accommodate a short burst in usage isn’t cost-effective, but public cloud lets you scale as you need and pay per minute or hour. Remember, though, to analyse properly how long your ‘short burst’ is actually going to last; if it is longer than you think, cloud may not be the most cost-effective route.
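
As a rough illustration of that analysis, here is a minimal break-even sketch between buying a server and paying per hour. All the figures are made up for illustration, not any provider’s real price list:

```python
def breakeven_hours(server_capex: float, server_monthly_opex: float,
                    cloud_rate_per_hour: float, months: int = 36) -> float:
    """Hours of cloud usage over the server's life at which cloud
    stops being cheaper than buying a server outright.
    All figures are illustrative, not real price-list values."""
    on_prem_total = server_capex + server_monthly_opex * months
    return on_prem_total / cloud_rate_per_hour

# A £6,000 server with £50/month running costs over 3 years,
# versus an on-demand VM at £0.40/hour:
hours = breakeven_hours(6000, 50, 0.40)
print(f"cloud is cheaper below {hours:,.0f} hours of use")  # 19,500 hours
```

Three years is roughly 26,000 hours, so on these (invented) numbers a burst that runs for most of the contract would be cheaper on-premises; a genuinely short burst is far cheaper in cloud.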


If you build a private cloud you become a cloud provider and the economics change. If you want to see savings you will need to build into your model: the high cost of software, high-end infrastructure, supporting performance expectations, maintaining excess capacity and meeting developer expectations. Follow in the footsteps of public cloud providers, focus on net-new services and build with standardised commodity components. Failing to do this leads to over-spending, documentation shortages and falling short on service level agreements.


Moving some or all of your business to the public cloud does mean someone else will be running the infrastructure, but you are still responsible for managing, securing, monitoring and backing up cloud deployments. Facility management and hardware support will diminish, but new governance and integration responsibilities take their place. Some cloud providers will offer you help, but remember that it might not be free, and you still have to protect all your IT assets and ensure their performance and availability. If your public cloud ROI is dependent on headcount reduction, you may well be disappointed.


Unless your data centre contract is coming to an end or your tech support is entirely consultant-based, your tech management costs are only going to go up with cloud. Cloud enables a long list of things that simply weren’t possible before its existence but that now need tech support and implementation, such as genomic processing and supplying resources for two-week marketing events.

The real ROI is the increased speed and agility, which translates into faster, better customer engagement. Better cloud management helps optimise cloud usage and spending, meaning that the customer experience can be the priority. Remember that developers want speed and agility, and the cloud makes it easier for them to circumvent your infrastructure to get it. You can deliver the autonomy they want via self-service portals and application programming interfaces, and protect your healthy infrastructure policies and consistency with templates that abstract the details.


Public cloud providers increase their margins by pushing average sustained utilisation rates as high as possible. Providers do this by moving around customer workloads to minimise the number of physical machines running.

A jar full of rocks always has room for the sand you pour into the remaining space. By encouraging customers to buy lots of small VMs, cloud providers can improve their utilisation and so their margins. They will financially reward you for breaking apart your large apps into smaller components. Don’t be wooed by the price; many of your applications won’t adjust well to this seemingly minor change, and you might find yourself spending money to save money.
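
The rocks-and-sand effect can be shown with a toy first-fit packing sketch. The vCPU sizes and host capacity are hypothetical; real provider schedulers are far more sophisticated:

```python
def hosts_needed(vm_sizes, host_capacity=16):
    """First-fit bin packing: place each VM (vCPU count) on the first
    host with enough free capacity, opening a new host when none fits."""
    hosts = []  # free capacity remaining on each open host
    for vm in sorted(vm_sizes, reverse=True):
        for i, free in enumerate(hosts):
            if free >= vm:
                hosts[i] -= vm
                break
        else:
            hosts.append(host_capacity - vm)
    return len(hosts)

# The same 60 vCPUs of demand, sliced two ways:
print(hosts_needed([6] * 10))   # 5 hosts: two 6-vCPU VMs per 16-vCPU host wastes 4
print(hosts_needed([2] * 30))   # 4 hosts: eight 2-vCPU VMs fill a host exactly
```

Smaller VMs let the provider fill the gaps, which is exactly why the pricing nudges you towards them.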


This is the classic pets-versus-cattle analogy. Cloud providers use commodity infrastructure knowing that they will have a higher VM failure rate. When a VM fails, they terminate it and start another. This places the onus on the customer to design for application resiliency rather than infrastructure resiliency. You may be happy with this change, but it might present significant issues for your existing applications. If an application is not built for the cloud (loosely coupled and highly scalable), an abundance of small VMs that frequently fail will mean poor performance. This is not a weakness of the providers; it is a fundamentally different approach. If you choose to host a workload in a public cloud, you will need to calculate the cost of any work required to make this seamless.


You might be planning to move existing systems-of-record applications onto public cloud. This may well mean redesigning or rewriting your application, and that’s not simple. Breaking applications into smaller components allows more granular control over the scalability of each element. Components that need more capacity will include a set design template, automating the creation of new machines that can run independently. You may need some duplication of controller information to remove bottlenecks and dependencies on a single machine. The work may be onerous, but it will be cost-effective; independently scaling, self-sustaining mini-components will help manage costs during peak usage and increase resiliency. Loss of a VM will then mean reduced capacity, not a system-wide failure.


You may well be tempted when you see that Amazon’s many additional services can eliminate thousands of labour hours from your app teams and yield a self-sustaining service with minimal maintenance. Be careful: the more you acclimatise to using higher-order features and services, which seem so cheap individually, the harder it will be to transition to another provider that does not offer the same capabilities. You have to weigh the value of the services, the lock-in and the potential migration costs against the long-term value of the cloud provider you are using. Determine the level of abstraction and the amount of choice your customers require, but keep in mind that every add-on service you adopt increases the degree to which you are locked in. Of course, lock-in may not be bad if your provider is giving you great value, but don’t restrict your options. Revenue benefits may erode while switching costs rise.


It’s a cliché, but there are always costs you didn’t expect, and it is worthwhile making sure you have explored as many of them as you can. Here is a short list you might benefit from considering: software licences; data-out charges; mitigating latency; direct connections; onboarding error rates; migration charges; employee time (not just IT – remember finance, HR and legal); backup and business continuity. Ask the cloud providers about them and see how it might alter their offer.

The cost benefits of moving to the cloud are real and worthwhile but make sure that you manage your expectations and do the work that’s needed to maximise your spend.

Thank you Forrester – https://go.forrester.com/



RCUK Cloud Working Group Workshop

Cloud Workshop

Share your expertise in the application of cloud computing technology for the research community with other researchers and technical specialists.

The RCUK Cloud Working Group and the Cloud for Research Special Interest Group exist to help researchers and technical specialists using cloud computing technologies and services to share knowledge and expertise. The Working Group is planning an innovative workshop focusing on how the potential for cloud computing in research can be fully realised:

  • technical integration: addressing the challenges in moving and running research workloads on public and private cloud
  • equipping the research community with the skills they need to exploit cloud
  • tackling legal and regulatory issues around the use of public cloud


The workshop will consist of a series of presentations from invited speakers along with the opportunity to meet and network with other members of the research community.   The programme will be finalised over the coming weeks but will include talks from representatives from research organisations, public cloud providers and the OpenStack community.

Proposal for plugfest / Interactive Session

We don’t want the whole day to be presentations and talks; we would like people to demonstrate some Real Work™. Ideally, the focus for these interactive sessions should be on interoperation and/or the use of open standards, particularly for building or using hybrid clouds for research (though hybrid could also mean HPC/cloud, etc.). Relevant standards include those for cloud APIs, container technologies, bulk data movement, access control and single sign-on. This session is still to be confirmed, but if you would like to be involved please submit your interest and ideas with your registration.

Find out more and register for the workshop here, we look forward to talking with you.


7 ways to implement a cloud disaster recovery strategy

Cloud disaster recovery


There’s a lot resting on a CIO’s shoulders when it comes to disaster recovery (DR) plans. Data is now a core asset, so disaster recovery is no longer just about system recovery but also about data recovery. You may be surprised to hear that about 40% of organisations don’t have a disaster recovery plan of any sort; even where plans do exist, they may well not be maintained to reflect ever-changing infrastructure and, worst of all, they are not tested.

There are tried and tested best practices which will help you put together a robust disaster recovery strategy; below we suggest seven that make an invaluable starting point.


Don’t leave your DR planning to a few IT people who have a bit of time on their hands. Make DR planning a strategic and business imperative, and make sure that all your business colleagues are proactively informed. Encourage them to give feedback while being aware that you are the lead on this programme.


Risks run from man-made to natural disasters and come in all shapes and sizes, from idiotic mistakes to tsunamis. Assign each one a likelihood of occurrence, being neither too confident nor too pessimistic. Your plan should include a systems-prioritising strategy, categorising your systems by criticality. Be aware of scenarios where any downtime would be critical, and those where it might be some time before major issues occur.
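
A minimal sketch of that likelihood-times-impact prioritisation, using an invented risk register purely for illustration:

```python
# Illustrative risk register: (risk, likelihood 1-5, impact 1-5).
risks = [
    ("accidental deletion", 4, 3),
    ("power outage",        2, 4),
    ("ransomware",          3, 5),
    ("flood",               1, 5),
]

def prioritise(register):
    """Rank risks by likelihood times impact, highest score first."""
    return sorted(register, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritise(risks):
    print(f"{name}: score {likelihood * impact}")
```

Even a crude score like this forces the conversation about which systems get recovered first.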


IT teams have rock-solid, secure and stable infrastructures in place, and people are unwilling to mess with them. If they are asked to, they often play the ‘security’ card. You need to counter this by reminding your team of the trust your institution already places in external suppliers across HR, legal and finance, and that IT is no different. This argument might be easier if you are already using SaaS applications such as email, Office and ERP tools, so use that success to leverage your case.


Disaster recovery should appear high on the list of budgetary priorities for any IT team; it rarely does. So you might piggyback DR costs for planning, solution selection, deployment and testing on some other IT effort, and virtualisation is one of the most appropriate. Virtualisation gives you portability of applications, and the pay-as-you-go cloud economic model gives you an affordable off-site option for any DR strategy. Don’t forget that you will need a robust recovery option which ensures that applications and data are recoverable without threatening business continuity.


Mobility is becoming one of the top concerns for any IT team, and with Gartner predicting that by 2017 50% of employers will require staff to bring their own devices into the workplace, the risk of data loss from personal devices is suddenly a major issue. It is essential that you work with your institution to develop an acceptable use policy (AUP). This will provide a framework for what the enterprise can and can’t do with an employee-owned device and how much access any employee can have to institutional data. Your DR plans will need to revolve around this policy.


Don’t let fear of the unknowable impact the smooth running of your team. Set sensible expectations for your team and put in place regular checkpoints to make them feel confident that their work is heading off disaster. Over the long term you need to build a culture where DR testing is no different from testing an application before deployment; don’t let it become stigmatised.


There are numerous risks and contingencies which you will need to account for in any DR plan. Be savvy and use the cloud and virtualisation to more easily meet the DR requirements within your budget. If you use real-world examples, preferably from within your own institutions, and show how you will manage any crisis without damaging activity or security you are half way towards making your DR plan part of the fabric of running your business.

If you can’t stand up in front of your senior management team and tell them you have a comprehensive DR plan, demonstrating how risks are mitigated and continuity assured, you need to go back to the top of this list and start again.

Look at Commvault for more information on this.




Immerse yourself in AWS


The Cloud is becoming ever more central to all parts of higher and further education from central IT to research, teaching and learning. Amazon recognises that help is needed to find your most appropriate route through what can be a maze.

AWS immersion days will show you best practices for deploying applications, optimising performance, monitoring cloud resources, driving efficiencies, reducing costs and more. And you will meet AWS staff so you can start building beneficial relationships to help you as and when your institution changes and you need advice or help.

You can see more details and register here. If you can’t go to this one keep your eye on the AWS website for the next announcement.


10 Mistakes to avoid when choosing a cloud

cloud mistakes

The benefits of moving to the cloud are compelling, but with a new cloud infrastructure comes a new set of challenges and risks that require a new way of thinking. From the very start, you will need an in-depth understanding of the new mindset that cloud technologies demand. If you don’t learn quickly you may well fall foul of one of these 10 mistakes.

  1. Leaping before you look – cloud is a means to an end: you want happy users using applications that meet their needs and your security and compliance needs. Make sure that this aim does not get lost in a complex, technical strategy.
  2. Assuming all clouds offer the same service – private, public or hybrid, the right cloud mix depends on your specific requirements as well as the applications and infrastructure you have already invested in. Don’t forget that your requirements will almost certainly change, so when you choose your cloud supplier don’t think you have ‘found the one’. Choosing a cloud supplier and structure is not a marriage; you may well need to change suppliers, so avoid being locked in.
  3. Becoming too stressed over varying performance levels – different suppliers will provide varying services levels for a given application. They will vary in different regions and for various set-ups. It is up to you to plan for specific performance levels and be ready to tweak them until you reach your goals.
  4. Expecting any application to run on any cloud infrastructure – cloud providers are not OS-agnostic. If your infrastructure is heavily dependent on Windows, Google will not be an option for you. Some legacy systems aren’t supported by any cloud provider. Do your homework in detail before committing to a provider.
  5. Forecasting the same cost distribution for all your resources across all suppliers – the more specialised your application the more likely your cost is to vary considerably across different suppliers.
  6. Assuming all suppliers work to the same SLA – every supplier will have their own SLA and these might even vary from service to service. Remember to read the small print very carefully before you sign up and pay particular attention to novation clauses.
  7. Not aligning 3rd party support across different cloud providers  – applications based on specific virtual appliances are unlikely to be supported to the same level by every cloud provider. Don’t leave it until an advanced implementation stage before you discover this, check the small print early on.
  8. Designing an application without considering your cloud provider’s unique characteristics – if you ignore these unique variables you are setting yourself up for an unpredictable project as far as cost, performance and later maintenance are concerned.
  9. Not leveraging the full potential of the cloud – choosing a supplier doesn’t mean you have to migrate all your applications in one fell swoop. Don’t forget that for each service and application you might choose a different approach, you just need to beware of creating an overly complex infrastructure.
  10. Ignoring disaster recovery and automated migration requirements – application downtime is always going to be a challenge. It is up to you to ensure all your applications remain available through cloud outages and that it is easy to migrate to your chosen cloud provider from your existing set-up and legacy systems. Plan for your Recovery Point Objective, Recovery Time Objective and automated migration requirements for the long term; some suppliers may help, but not all, so check before you sign.
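
The Recovery Point Objective part of that planning reduces to a rule of thumb worth making explicit (a sketch of the rule itself, not any supplier’s tooling): worst-case data loss is the gap between backups, so the backup interval must not exceed the RPO.

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the interval between backups,
    so that interval must not exceed the Recovery Point Objective."""
    return backup_interval_hours <= rpo_hours

# Nightly backups cannot satisfy a 4-hour RPO; 2-hourly snapshots can.
print(meets_rpo(24, 4))  # False
print(meets_rpo(2, 4))   # True
```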

Whatever route you take keep at the forefront of your mind that what you need and get today WON’T be the only thing you need tomorrow, remain agile and flexible to make the most of being on the cloud.

Share this with your colleagues and don’t forget that there is more advice and support on this blog.

Thanks to CloudEndure.com 

What does data egress mean for higher education?

Researchers are beginning to rely on cloud computing to drive through breakthrough science projects, and they need to feel confident about how much this is going to cost. Amazon recognises this, and last March it offered qualified researchers a discount when they are downloading or sharing data. Amazon has written a useful post on its blog which makes the situation clear for researchers everywhere.

Amazon database

Setting up an Amazon Web Services account is straightforward if you use the Jisc Amazon Web Services Portal, and don’t forget that you will benefit from the speed and security of the Janet network.

We know the importance of your research and know that the Jisc / Amazon / Arcus partnership will keep you at the cutting edge of what you need.


Have you got a hybrid cloud road-map?

Hybrid Cloud in hand

Building a hybrid cloud road map

Cloud is a hot commodity in the IT community just now and we are all thinking that we should be migrating all or some of our IT infrastructure. The environment is rapidly changing and hybrid cloud is all the rage. But it can be hard to put a long-term hybrid cloud strategy in place.

The hybrid cloud model allows you to deploy a combination of private and public cloud services. You can move workloads between clouds, running each where it is the best fit. Cloud bursting allows you to use public clouds if and when capacity demands spike.
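
Cloud bursting can be sketched as a simple placement rule (illustrative only; real schedulers are far more sophisticated): fill private capacity first, and send the overflow to public cloud.

```python
def place_workload(demand_units: int, private_capacity: int):
    """Fill the private cloud first; any overflow 'bursts' to public cloud.
    Returns (units on private cloud, units on public cloud)."""
    private = min(demand_units, private_capacity)
    public = demand_units - private
    return private, public

# 80 units of demand against 100 units of private capacity: no burst.
print(place_workload(80, 100))   # (80, 0)
# A spike to 140 units sends the overflow to public cloud.
print(place_workload(140, 100))  # (100, 40)
```

The attraction is that the public portion is paid for only while the spike lasts.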

Hybrid clouds are dynamic IT entities that can be challenging to deploy and manage. Having a robust road map will help you mitigate oversights or unexpected industry changes that could cripple your hybrid cloud. It’s important to consider all the trade-offs and not lose sight of some of the less visible benefits.

What MUST be part of the map?


You may well be expected to expand the benefits of your cloud infrastructure. Don’t forget:

  • Public cloud may not absorb all growth, budget monthly OPEX to handle extra IT burdens
  • Data stores may remain in private cloud and CAPEX will be needed for on-premises expansion

Public cloud providers are business partners, you do not control their behaviour and the relationship may be finite. Have contingency plans to cover:

  • The provider going out of business – how are you going to get your data back and where are you going to store it in the short term?
  • The provider failing to uphold its SLA – how are you going to move your business elsewhere? You should have an up-to-date list of alternative suppliers and have had initial conversations to start building a relationship.

We all know that there is a raft of official compliance to be managed and cloud is no exception. Perfectly acceptable cloud deployment today may not be compliant when regulations are introduced or updated. Make sure that you have a tried and tested process for introducing change and managing a compliance audit.


Three hybrid benefits you might be missing


Moving from one cloud platform to another can be complicated and expensive. Containers make it easier to move workloads between various cloud types. Containers encapsulate application workloads thus providing portability.

Moreover, you can use clustering services such as Kubernetes and Docker Swarm. Container clusters are easy to manage and work well with hybrid cloud computing, where you might be using different types of cloud for different purposes.


If you have multiple cloud platforms for various applications and services you may well be dealing with native interfaces and that can be complex and disorderly. With a Cloud Management Platform (CMP) resources can be managed from a single domain with good automation and controls.

Users will get one simple access route into public and private clouds even when there are different interfaces. Institutions can benefit from other automation services to manage usage through policy-based approaches that work with many back-end cloud technologies in a single unified system.


You may have already bought rafts of hardware for part of your cloud strategy. If the strategy changes and you are under pressure to move entirely to public cloud remember to point out that you can’t recover those hardware costs. In many cases maintaining a private cloud as part of a hybrid cloud strategy will be more cost-effective. The hardware investment has already been made and a hybrid approach can help recover that value.

Cloud strategy is always built on moving sands and you will need to be agile to stay upright; having a proper understanding of the environment and some reliable, basic building blocks will help.

What benefit do I get from using Jisc procured services? 

What is procurement?

It is vital that when you are planning any new capital expenditure you properly plan the procurement, the benefits of doing so are clear:

  • All stakeholders will be included in the planning process and share the decision making
  • It allows you to properly manage expectations on timescale and outcomes
  • It will allow you to recognise the range of support you need to fulfil the ideal solution

However, procurement is not always a straightforward process, especially if you are required to follow the EU procurement directives. These apply if you are spending in excess of £164,176 (this value covers the entire life of a contract – not just a single year’s worth of spend).
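
The lifetime-of-contract point is worth making concrete, because an annual figure can look safely below the threshold while the contract total is not. A sketch using the £164,176 figure above (thresholds change periodically, so always confirm the current one):

```python
THRESHOLD_GBP = 164_176  # services threshold cited above

def requires_eu_procedure(annual_spend: float, contract_years: float) -> bool:
    """The threshold applies to total spend over the contract's life,
    not a single year's worth."""
    return annual_spend * contract_years > THRESHOLD_GBP

# £45,000/year looks modest, but a 4-year term totals £180,000:
print(requires_eu_procedure(45_000, 4))  # True
print(requires_eu_procedure(45_000, 3))  # False (£135,000 total)
```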

Procurement Process

As well as reaching the right decision on the technology or services you are purchasing you will need to:

  • Establish the correct procurement procedure to be followed, you may need external support for this.
  • Draft Official Journal of the European Union (OJEU) or other advertisement, after Brexit this will continue to be the case until new legislation is passed and to-date there is no schedule on this.
  • You will need to draft procurement / contract documentation e.g. Pre-qualification Questionnaire (PQQ), Issue Invitation to Tender (ITT), draft terms and conditions of any final contract to all interested suppliers.
  • Manage and evaluate suppliers’ tenders, as well as feeding back to all unsuccessful tenderers.
  • Comply with your institution’s financial regulations or, in the case of major purchases, comply with EU regulations, after Brexit this will continue to be the case until new legislation is passed.

Procurement can be bureaucratic and time consuming and it is advisable to seek advice and support from a procurement specialist who can ensure that you select the most appropriate route.

How does pre-procurement help? 

If you choose a pre-procured service, such as Amazon Web Services, Google Apps for Education or Microsoft 365, you can be confident that we have been through a rigorous procurement process and contracted the very best terms and conditions for the sector.

You will:

  • NOT be tied down by complex and lengthy processes
  • NOT have to spend money and time on expensive legal support
  • NOT have to source procurement support if you do not have it in-house
  • NOT have to justify your approach to senior management; Jisc is a trusted procurement partner.
  • Benefit from the sector’s buying power (economies of scale). Jisc is more than likely to achieve better value for money than you would procuring the same services on your own.
  • Have in place a robust audit trail of procuring services in line with legislation and gaining value for money.

Rather than regretting the day you ever started a complex procurement procedure with anonymous multinational companies, if you choose pre-procured services you can rapidly be signed up and benefitting from improved efficiency and value for money.

Can Jisc help if you do choose to do a procurement?

If you do need to do a procurement yourself we have a useful guide to help you with the journey.



On-premises vs cloud: what’s more cost-effective for your apps?

Cloud and Apps

It is very easy to be lured onto the cloud by the concept of only paying for resources when you use them; moving your budget spend from CAPEX to OPEX is an attractive way to manage stretched budgets.

Do a triage first

But … it’s not as straightforward as you might like, and you need to be careful. Some applications aren’t suited to running in public cloud, for either technological or financial reasons. You will need to do a careful triage: properly understand your application portfolio and analyse where each application best sits. Ultimately, much of the cloud versus on-premises decision comes down to whether the application is designed to run in the cloud.
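
One way to make that triage concrete is a crude first-pass rule over the portfolio. The criteria below (statelessness, horizontal scalability, legacy OS) are illustrative assumptions, not a definitive rubric:

```python
def triage(app):
    """Crude three-way triage under assumed criteria: cloud-native
    apps go first, portable ones are rehost candidates, and the
    rest stay on-premises for now."""
    if app["stateless"] and app["horizontally_scalable"]:
        return "migrate first (cloud-native fit)"
    if not app["legacy_os"]:
        return "candidate to rehost, then re-check costs"
    return "keep on-premises for now"

apps = [  # invented example portfolio
    {"name": "web portal", "stateless": True,  "horizontally_scalable": True,  "legacy_os": False},
    {"name": "HR system",  "stateless": False, "horizontally_scalable": False, "legacy_os": False},
    {"name": "old ERP",    "stateless": False, "horizontally_scalable": False, "legacy_os": True},
]
for a in apps:
    print(a["name"], "->", triage(a))
```

A real triage would weigh many more factors (licensing, data gravity, compliance), but even a pass like this stops a blanket lift-and-shift.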

It’s not just about cost

It is important to dig deeper and look beyond the cost savings: why else are you moving to the cloud, and is this the best thing to do? There are some useful cloud cost analysis tools, such as CloudCheckr and Cloudability, that will help you avoid any surprises.

Remember that a change in platform will almost certainly require a change in culture. You will need to factor into your implementation plans resistance from colleagues as well as technical hiccups. Granularly tracking resource use may throw up some unexpected budgetary considerations that will need careful management across your institution.

What’s the next level?

It’s going to take a while, but over time people throughout your institution will better understand cloud services and gain the expertise to use them at scale, resulting in cost optimisations you didn’t expect to be possible. And you never know – one day the apps themselves may seek out the most efficient platform.

You can read more about this at TechTarget and finesse your cloud usage.