Protecting your workloads behind AWS CloudFront

If you run a website serving static data and need a caching solution, AWS CloudFront is the go-to service. It works by providing multiple ‘edge’ locations – data centres located in geographical hot-spots around the world – so that data is served from the data centre nearest the user. For example, when a user in Australia accesses your London-hosted website through CloudFront, they are directed to their closest edge location. If a cached copy of the requested content already exists there, CloudFront delivers it without having to fetch it from the origin in London. Not only does this improve performance for your end-users, it also reduces load on the origin servers.

CloudFront also offers additional security. Among its many features, it only allows layer 7 traffic through to the origin, which protects origins from layer 4 attack vectors. Additionally, a Web Application Firewall (WAF) can be associated with the distribution, offering further layer 7 protection.

A layer 4 Distributed Denial of Service (DDoS) attack operates at the transport layer of the TCP/IP stack. A common method is to flood the site from multiple locations with SYN packets – the first part of the three-way SYN (client) -> SYN ACK (server) -> ACK (client) handshake – until all of its resources are consumed waiting for ACK packets from clients that never arrive. As CloudFront never passes layer 4 traffic to the origin, the site is protected. (AWS Shield also goes some way towards detecting and mitigating such attacks outside of CloudFront.)

A layer 7 DoS attack operates at the level of a specific application protocol, such as HTTP. An HTTP attack may involve sending a large number of GET requests to the site, which would eventually become overwhelmed and unresponsive. CloudFront can potentially mitigate such attacks naturally by simply returning cached versions of the requested objects. It can also block malformed HTTP requests. Adding a WAF to CloudFront provides additional protection against various other attacks, such as SQL injection attacks that aim to steal or corrupt data.

The distributed nature of CloudFront means that every edge location will have a different IP address range. These ranges frequently change as new edge locations are constantly added to the mix.

This means that origins need to permit access from a wide range of locations, which would require them to be open to all IP addresses at Security Group (SG) and Network Access Control List (NACL) levels. From a security point of view, this poses an issue. While origin details are never revealed via CloudFront, it is still possible to find them by randomly scanning IP addresses. And if CloudFront is bypassed, its security and performance benefits are lost too.

There are a number of ways to allow only CloudFront into your origins. For S3, it is possible to enable an Origin Access Identity (OAI) at distribution level, which allows only CloudFront traffic to reach the S3 buckets. This can be configured when creating your distribution or distribution behaviour, as sketched below.
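As an illustration, an OAI can also be set up programmatically. The minimal Python sketch below (the bucket name and caller reference are hypothetical) creates an OAI and grants its canonical user read access to a bucket; the OAI would then be referenced in the distribution’s S3 origin configuration:

```python
import json
import boto3

cf = boto3.client("cloudfront")
s3 = boto3.client("s3")

# Create the Origin Access Identity (caller reference is any unique string).
resp = cf.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "example-oai-ref",  # hypothetical
        "Comment": "OAI for example-bucket",
    }
)
canonical_user = resp["CloudFrontOriginAccessIdentity"]["S3CanonicalUserId"]

# Grant the OAI (and only the OAI) read access to the bucket's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"CanonicalUser": canonical_user},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```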

For load-balanced traffic using ELBs, the usual AWS WAF and AWS Shield combination can be applied (Shield Standard is enabled automatically on all services), and Shield Advanced offers additional protection – at a price. However, none of these solutions denies access to traffic arriving from outside of CloudFront.

A possible solution is to apply custom headers at CloudFront level using Lambda@Edge and filter traffic to the ELB through a WAF that looks for those headers. However, headers can be spoofed, and it adds the overhead (however small) of having to tag each request with the header.
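For illustration, a minimal Lambda@Edge handler attached to the origin-request event could stamp each request with a shared-secret header, with a WAF rule on the ELB side allowing only requests that carry it. The header name and value here are hypothetical; in practice the secret would be stored and rotated out of band:

```python
# Minimal Lambda@Edge origin-request handler that adds a shared-secret
# header to every request CloudFront forwards to the origin.
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    # Hypothetical header name and value; a WAF rule at the ELB would
    # block any request that does not carry this exact header.
    request['headers']['x-origin-verify'] = [
        {'key': 'X-Origin-Verify', 'value': 'REPLACE_WITH_SHARED_SECRET'}
    ]
    return request
```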

The other common solution is to restrict traffic via an SG. However, keeping the SG updated would traditionally be an ongoing manual task, and the maintainer would need some way of tracking when CloudFront’s IP address ranges change.

At Jisc we have come up with an automated solution to this problem. Luckily, AWS supply a ready-made SNS topic that publishes a notification whenever those IP addresses change. AWS also publish a list of all IP address ranges used by CloudFront in a JSON-formatted file.

The SNS topic, as well as triggering an email, can trigger a Lambda function. We have taken advantage of this and created a Lambda function which downloads the updated JSON file, compares its list to the IP addresses in an SG and automatically updates the SG if necessary. One of the challenges we faced is that a single SG, by default, has a limit of 50 rules. At the time of writing, there are 68 CloudFront IP ranges, which wouldn’t fit into a single group. One way to mitigate this is to have the limit raised by AWS; however, there is always a chance of the new limit being breached as more CloudFront locations come online, and the cycle would repeat.
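A minimal sketch of the core of such a function is shown below. The published ip-ranges.json URL is real; the port choice and group ID are assumptions for illustration. It fetches the current CloudFront ranges, diffs them against the rules in one SG, and applies the difference:

```python
import json
import urllib.request

import boto3

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
ec2 = boto3.client("ec2")

def cloudfront_ranges():
    """Fetch the published AWS ranges and keep only CloudFront prefixes."""
    with urllib.request.urlopen(IP_RANGES_URL) as resp:
        data = json.load(resp)
    return {p["ip_prefix"] for p in data["prefixes"] if p["service"] == "CLOUDFRONT"}

def sync_group(group_id, wanted, port=443):
    """Add missing CloudFront ranges to the SG and revoke stale ones.

    Assumes the SG is dedicated to CloudFront rules on a single port.
    """
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    current = {
        r["CidrIp"]
        for perm in group["IpPermissions"]
        for r in perm.get("IpRanges", [])
    }

    def perms(cidrs):
        return [{
            "IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": c} for c in sorted(cidrs)],
        }]

    if wanted - current:
        ec2.authorize_security_group_ingress(GroupId=group_id,
                                             IpPermissions=perms(wanted - current))
    if current - wanted:
        ec2.revoke_security_group_ingress(GroupId=group_id,
                                          IpPermissions=perms(current - wanted))

# e.g. sync_group("sg-0123456789abcdef0", cloudfront_ranges())
```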

A longer-term solution is to split the rules across two or more SGs and maintain the list on each through the Lambda function. Each SG carries a tag identifying it as a CloudFront SG, as well as its sequence/list number. This ensures that rules are split evenly across the SGs. By using tags, it is not necessary to specify which SG needs to be updated with which rules, as the Lambda can identify them automatically. Our single Lambda function can update several SGs attached to several ELBs across different VPCs in a single run.
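Building on the helper functions in the sketch above, the tag-driven split might look something like this. The tag names are hypothetical; the pattern is to discover the tagged groups, sort them by sequence number, and deal the ranges out evenly:

```python
import boto3

ec2 = boto3.client("ec2")
# cloudfront_ranges() and sync_group() are defined in the previous sketch.

def tagged_cloudfront_groups():
    """Find SGs tagged as CloudFront groups, ordered by their sequence tag."""
    resp = ec2.describe_security_groups(
        Filters=[{"Name": "tag:CloudFrontManaged", "Values": ["true"]}]  # hypothetical tag
    )

    def seq(group):
        tags = {t["Key"]: t["Value"] for t in group.get("Tags", [])}
        return int(tags.get("CloudFrontSequence", 0))  # hypothetical tag

    return sorted(resp["SecurityGroups"], key=seq)

def sync_all(port=443):
    # Assumes at least one tagged group exists and that each group's
    # share of the ranges stays within its rule limit.
    ranges = sorted(cloudfront_ranges())
    groups = tagged_cloudfront_groups()
    share = -(-len(ranges) // len(groups))  # ceiling division: even split
    for i, group in enumerate(groups):
        chunk = set(ranges[i * share:(i + 1) * share])
        sync_group(group["GroupId"], chunk, port)
```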

So, there you have it. By utilising the SNS topic and a Lambda function, we have a totally automated way to keep your origins behind CloudFront secure.

Improved reporting for our Managed Azure customers using automation

One of the many things we offer as part of our Managed Azure service is a monthly report with advice on cost savings, security and cloud best practice. Fortunately, the native tool, Azure Advisor, provides personalised information on exactly these categories. However, we had a problem. Although Azure Advisor allows manual export of its reports from the Azure portal, it was time-consuming for our Service Desk team to rework these into the format we want to provide to our customers. This re-formatting includes customisation for each customer, adding a notes field so we can give more detailed explanations, the ability to track remediation progress, and excluding certain categories or priorities of recommendations.

We knew some automation was needed!

Since PowerShell is now fully supported by Azure Functions and has a lot of built-in Azure functionality in the Az module, we decided this was the language to use. Practising what we preach, we leveraged Azure Platform as a Service resources to host a truly cloud-native solution. The script needed to be able to connect to the customer’s Azure Advisor, pull out any recommendations and export these to a customised Word document.

This is what we did – a fairly smart solution to our requirements.

To make the connection we used a Service Principal, which is a security identity used by apps, services and automation tools to access specific Azure resources. To ensure we connected to the correct Azure Advisor instance for each customer, we needed to know the customer’s name, the ID of their Azure Active Directory tenant, the ID of their subscription and the ID of the Service Principal itself. Finally, we required the Service Principal client secret, which is the equivalent of its password. The values of these parameters are stored in Table storage, except for the client secret, which is kept in a Key Vault.

The script initially connects to Azure in a user context and pulls the previously described parameters for all the customers into an array. It then disconnects and re-authenticates to Azure using the Service Principal, which holds the Reader role in the customer’s subscription. Once authenticated, it calls the Azure Advisor API to refresh and fetch the latest recommendations. Once it has filtered, sorted and exported them to an appropriately customised Word document, it loops round to start the next customer. Finally, the Word documents are emailed to our Service Desk team via a free SendGrid account and also archived to cool-tier blob storage in a Storage Account for future reference.
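Our production script is written in PowerShell, but the core API flow is easy to sketch. The Python outline below is purely illustrative (the tenant, subscription and credential values are placeholders): it authenticates as a Service Principal, triggers an Advisor refresh and then pulls the recommendations ready for filtering:

```python
import requests
from azure.identity import ClientSecretCredential  # pip install azure-identity

TENANT_ID = "<tenant-guid>"                  # placeholder
CLIENT_ID = "<service-principal-app-id>"     # placeholder
CLIENT_SECRET = "<secret-from-key-vault>"    # placeholder
SUBSCRIPTION_ID = "<subscription-guid>"      # placeholder
API = "2017-04-19"

credential = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
token = credential.get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        "/providers/Microsoft.Advisor")

# Ask Advisor to regenerate recommendations (returns 202 Accepted; a
# production script would poll the Location header until complete).
requests.post(f"{base}/generateRecommendations?api-version={API}", headers=headers)

# Fetch the current recommendations, ready for filtering and export.
recs = requests.get(f"{base}/recommendations?api-version={API}", headers=headers).json()
for rec in recs.get("value", []):
    props = rec["properties"]
    print(props["category"], props["impact"], props["shortDescription"]["problem"])
```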

As mentioned, our original intent was to run the script from an Azure Function with a timer trigger. This would mean we would pay only while the script was running, and there would be no need for patching or other maintenance. Unfortunately, we found during testing that the PowerShell module used to create the Word document did not work properly in a Function due to a .NET dependency in a piece of middleware. To overcome this hurdle without redeveloping the whole module, we currently run the script from a virtual machine using the Windows Task Scheduler. The virtual machine is, in turn, connected to the Azure Automation On-Off solution, which powers it on for a few hours each month to allow the script to run and patching to take place. This means that the Pay-As-You-Go virtual machine only costs us a few pounds per month whilst it’s powered on.

We also contacted the developer of the module to make him aware of the issue and we hope in a future release it will be resolved so that we can move to the much neater solution using an Azure Function.

The final part of the automation was a separate script for on-boarding new customers. This pulls the name, tenant and subscription information from the customer’s Low Level Design document to create the Service Principal and client secret with the appropriate Azure role, and then saves this information in the Table storage and Key Vault ready for the next time the reports are generated.

The end result is a fully customised report delivered to our Service Desk team, ready for them to check and annotate before passing on to our customers, which helps us align with our ISO accreditations.

End-to-end, this was about three days of work to put together, including writing the script and some infrastructure as code around the Azure resources. We’d estimate this saves around 12 days a year for our Service Desk manager, sparing him the task of manually exporting, filtering and formatting. It also means that our reporting is completely consistent across all our Azure customers and much less susceptible to human error.

Are you well-architected?

If you currently work in a UK university or college, then the chances are that somebody, somewhere in your institution is already a customer of AWS or Microsoft Azure. Maybe both? In most cases, that usage of AWS or Azure will be known about and managed by the central IT department; in some cases, possibly not.

Maybe you have one or two public cloud proof-of-concept projects on the go. Or maybe you’ve gone further and have some production workloads running on AWS or Azure?

If so, how confident are you about the way you are using public cloud? Is security locked down? Do you have the right mechanisms in place to control and manage your spend? How resilient is your cloud-hosted service to component failures? How will it respond to spikes in demand?

We can offer you an independent, external and impartial review of your existing cloud usage in the form of a Cloud Architectural Review. The review assesses your current public cloud estate and associated operational processes against your business objectives from the perspectives of:

  • operations
  • security
  • reliability
  • performance
  • cost optimisation.

We can then make architectural and operational recommendations with respect to cost-benefit, best practices, processes, implementation approaches and timescales. All our Solutions Architects are certified in line with Azure and AWS best practices and all have experience of deploying a wide range of services to the cloud.

If that sounds of interest, please speak to your Account Manager or get in touch with Jisc Cloud Solutions via the usual channels: cloud@jisc.ac.uk.

We also understand that not all universities and colleges have yet started their move into public cloud. If you fall into this group then you may be interested in a new service that we are developing, tentatively called Public Cloud Landing Zone.

In this context, a ‘landing zone’ is a well-architected public cloud tenancy, or group of tenancies, into which a university or college can start deploying services – well-architected both in a technical sense and in a policies-and-processes sense. A Public Cloud Landing Zone engagement will involve workshop-type activity (to understand requirements and current practice), technical deployments using infrastructure as code, and documentation. Although we have done much of this kind of activity many times in the past (when deploying a particular service to the cloud), we haven’t wrapped the component parts together into a single service and we haven’t really done it at the scale of a whole university or college – hence the reason we are treating it as a new service.

The creation of a well-architected ‘landing zone’ allows technical, operational and governance stakeholders in the institution to develop the skills, experience and confidence needed to use public cloud technologies. In turn, this allows IT staff, researchers and academics to experiment and explore capabilities within the confines of secure, compliant and well-governed platforms, providing reassurance against unexpected costs and security risks.

Again, if this sounds of interest, please get in touch.

AWS silly season – here we go

The AWS re:Invent annual conference in Las Vegas kicks off next week, which means we are about to be snowed under by hundreds of new service announcements, product updates and the like. This year, AWS have started this process slightly early, so as not to overwhelm people during the week of the conference itself. There have been lots of announcements already.

Here are a few that I’ve spotted slipping past in my inbox that I think will be of interest to our members and customers. There are probably plenty of things that I’ve missed, so I suggest you keep an eye on the AWS blog yourself. There will be so much coming out of AWS over the next week or so that keeping up will be more or less a full-time job.

SES account-level suppression lists – For those of our customers that send large amounts of outbound email from their AWS-hosted services using SES (Simple Email Service), keeping up with bounces and complaints is a challenge. If they fail to stop sending mail to email addresses that have previously bounced, they run the risk of being blocked by AWS. (AWS have to do this to preserve the integrity of the SES service as a whole.) AWS has now announced the availability of account-level suppression lists, which customers can use to protect their sender reputations and improve overall delivery rates for messages.
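As a rough illustration of how this looks with the SESv2 API (the email address below is a placeholder), the account-level list can be switched on and individual addresses suppressed like so:

```python
import boto3

sesv2 = boto3.client("sesv2")

# Enable the account-level suppression list for both bounce and
# complaint events.
sesv2.put_account_suppression_attributes(
    SuppressedReasons=["BOUNCE", "COMPLAINT"]
)

# Manually add a known-bad address (placeholder) to the list.
sesv2.put_suppressed_destination(
    EmailAddress="bounced-recipient@example.com",
    Reason="BOUNCE",
)
```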

AWS managed rules for AWS WAF – AWS WAF is a web application firewall. It lets you define rules that give you control over which traffic to allow or deny to your application. You can use AWS WAF to help block common threats like SQL injections or cross-site scripting attacks. You can use AWS WAF with Amazon API Gateway, Amazon CloudFront, and Application Load Balancer. For most of our customers, we define and manage a set of rules in collaboration with them. AWS managed rules give us a way of piggy-backing on the knowledge within AWS, choosing sets of rules that are maintained by AWS staff.
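As a hedged sketch (the ACL and metric names are made up), attaching the AWS-maintained ‘common rule set’ managed rule group to a CloudFront-scoped web ACL via the WAFv2 API looks something like this:

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="example-cloudfront-acl",  # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "exampleCloudFrontAcl",
    },
    Rules=[{
        "Name": "aws-common-rules",
        "Priority": 0,
        # Reference the AWS-managed rule group rather than writing
        # individual rules ourselves.
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet",
            }
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "awsCommonRules",
        },
    }],
)
```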

Least outstanding requests algorithm for load balancing requests – This sounds like a minor announcement, but I suspect it will actually be very useful. You can now use a ‘least outstanding requests’ algorithm, as well as plain old round-robin, to determine how Application Load Balancers share load across their target resources.
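For illustration (the target group ARN is a placeholder), switching an existing target group over is a one-call change:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Switch a target group from the default round-robin algorithm to
# least outstanding requests.
elbv2.modify_target_group_attributes(
    TargetGroupArn=("arn:aws:elasticloadbalancing:eu-west-2:111122223333:"
                    "targetgroup/example/0123456789abcdef"),  # placeholder
    Attributes=[{
        "Key": "load_balancing.algorithm.type",
        "Value": "least_outstanding_requests",
    }],
)
```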

AWS Cost Categories – You can use AWS Cost Categories to define custom rules that map costs to your internal business structures. After you define categorisation rules, the system organises your costs from the beginning of the month. Customers can visualise and monitor spend by viewing these categories in AWS Cost Explorer and AWS Budgets. We will look at the options here, particularly with regard to how we utilise this in our forthcoming Billing Portal.
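A hedged sketch of defining a category via the Cost Explorer API (the category name, values and account IDs are all invented for illustration):

```python
import boto3

ce = boto3.client("ce")

# Define a 'Team' cost category that buckets spend by linked account.
ce.create_cost_category_definition(
    Name="Team",  # hypothetical category name
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {
            "Value": "Research",
            "Rule": {"Dimensions": {
                "Key": "LINKED_ACCOUNT",
                "Values": ["111111111111"],  # placeholder account ID
            }},
        },
        {
            "Value": "Teaching",
            "Rule": {"Dimensions": {
                "Key": "LINKED_ACCOUNT",
                "Values": ["222222222222"],  # placeholder account ID
            }},
        },
    ],
)
```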

Use employee attributes from your corporate directory for access control – You can now use your employees’ existing identity attributes, such as cost centre and department, from your directory to create fine-grained permissions in AWS. Use these to implement attribute-based access control to AWS resources and simplify permissions management.

As I say above, these are just a few of the many announcements that AWS have made over the last couple of days. I’ll be keeping an eye on future announcements and summarising the ones that I think are most relevant to our members and customers here.

The Capex to Opex Shift

Despite all the benefits of cloud, we often hear concerns about it. These generally fall into six broad categories, each of which can be addressed with skills, knowledge or processes – whether that means creating new ones or updating existing ones.

One business concern is the capex to opex shift. Whilst on-premise kit is treated as capex, because it consists of owned assets, consuming cloud services is opex, which is treated differently.

I have created a little video of less than 9 minutes – an Accounting 101 demonstrating the capex to opex shift and giving some tips on how to understand it, accept it and move on – there is no magic wand!

A key takeaway is that only assets you own can be treated as capex and therefore depreciated. Prepaying for future services can go to the balance sheet as a prepayment and hit the P&L in the month you benefit from the service, but it will hit the P&L as the type of cost it is, e.g. IT costs, not depreciation. Why is this important? Because IT costs impact the ‘operating profit’ line, whereas depreciation is taken into account after ‘operating profit’. Why is ‘operating profit’ important? It is deemed to be a key metric of an organisation’s financial health: the ongoing profitability from day-to-day operational trading. In many industries, remuneration schemes use this figure.
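A simple worked example may help (the figures are invented for illustration). Buying a £36,000 server with a three-year useful life is capex, so the P&L sees it as depreciation, below the operating profit line:

```latex
\text{Annual depreciation} = \frac{\text{asset cost}}{\text{useful life}}
                           = \frac{\pounds 36{,}000}{3\ \text{years}}
                           = \pounds 12{,}000\ \text{per year}
```

Spending the same £12,000 a year on equivalent cloud services is opex: the cash flow is similar, but it lands in the P&L as IT costs, above the operating profit line.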

Whilst there isn’t a magic wand to make cloud capex, it is important to understand that moving to cloud is much more than a cost conversation. If you haven’t read it already, check out my previous blog on Digital Economics, which highlights that cost is just one aspect of moving to cloud, the real value is in growing the revenue as a result!


AWS Savings Plans

AWS have announced a new pricing feature called Savings Plans, offering a way of saving up to 72% on your compute (EC2 and Fargate) spend. Even though I suspect that in most cases the realised savings will be lower than this headline figure, there is no doubt that they will be substantial in many cases. This is a pretty big innovation in how customers can buy AWS resources.

Full details are available on the AWS Savings Plans web page.

Savings Plans is a new flexible pricing model that allows you to save up to 72% on Amazon EC2 and AWS Fargate in exchange for a commitment to a consistent amount of compute usage (e.g. $10/hour) for a 1 or 3 year term. Savings Plans offers significant savings over On Demand usage, just like Reserved Instances, but automatically reduces your bills on compute usage across any AWS region, even as usage changes.

For members and customers who buy their AWS through us, we will be assessing your usage and making recommendations on how best to take advantage of this new facility. For anyone else, I strongly suggest doing this analysis yourselves, even if you already make use of Reserved Instances (RIs).

Savings Plans look to give much greater flexibility than RIs in the way they can be applied, particularly from the perspective of moving workloads between EC2 and Fargate.

Working with the Warwick Employment Group

The Cloud Solutions team in Jisc works with members and customers from across education and the wider public and third sectors on a wide variety of projects and activities. For many, our primary focus is to help them with the strategic planning for their IT infrastructure, particularly as it relates to cloud adoption (obviously!). What are the pros and cons of moving to the cloud? How does the TCO compare to on-prem? How ready are they to move? Where are they with their digital transformation? What does their infrastructure roadmap look like? That kind of thing.

For others, the strategic decisions have already been made. What they need is practical help in the form of professional services and/or managed services, typically focusing on architecting new services in the cloud, re-architecting existing applications to take advantage of the new functionality offered by the cloud, or, in a few cases, simply migrating services to the cloud pretty much as they are.

Over the next few months, we’ll share some of the work we have been doing with members and customers, just to give a flavour of the kinds of areas we can help with.

One such customer is the Warwick Employment Group (part of Warwick University Services Limited), who are responsible for Jobs.ac.uk, the leading international job board for careers in academia, research, science and related professions. The Jobs.ac.uk team had been a customer of Eduserv for a long time – since well before the public cloud as we know it today became available, and well before the merger between Jisc and Eduserv was first mooted. Back in early 2017 they came to us wanting to gain greater agility in the way their service was delivered, better resilience against server failures and the ability to think about taking their services to a much wider audience.

As far as I recall, they already had Amazon Web Services (AWS) in mind. We talked to them about the benefits they would gain from re-architecting their services on AWS and did some analysis of what their likely costs would be. A migration project was agreed. I doubt that we told them at the time but they were the second AWS customer that we did any significant re-architecting for (after Bristol City Council for whom, at the time, we had just completed a migration of their website to AWS).

As with all our cloud projects, we adopted an infrastructure as code approach from the ground up, using CloudFormation to capture the deployments and designing an AWS account and Virtual Private Cloud (VPC) structure in line with UK Government OFFICIAL guidance and AWS best practice. We took their database layer into the Amazon Relational Database Service (RDS) and used multiple Availability Zones to provide much greater resilience than had previously been possible in the Eduserv data centre.

One of the features of the Jobs.ac.uk service is the large number of email messages that get sent out – that is their primary job-alerting mechanism. The volume of emails required the use of the Amazon Simple Email Service (SES) – our first experience with that service. As the service is well known and public facing, we have also had to work hard to keep it secure.

I’m pleased to say that we continue to work closely with the Jobs.ac.uk team, now as Jisc Cloud Solutions rather than Eduserv, providing them with a mix of ongoing managed service (patching, backups, etc.) as well as professional services and advice where they need it.

Digital Economics

This week I presented at the UCISA IG19 conference on ‘Quantifying the value and cost of cloud’, a session to support your digital strategy, increase your financial knowledge and understand the value-based business case. I also introduced ‘digital economics’: seeing the bigger picture of how technology can enable businesses to transform – not just cutting costs but growing new revenues, increasing customer excellence and creating new products and services. The term digital economics is fairly new. Gartner are using it; others are calling it the value proposition. So maybe you heard it here first! Read on to find out more.

Technology is about solving business problems and enabling business transformation; it isn’t technology for technology’s sake. Digital transformation is leading the adoption of cloud; cloud alone isn’t digital transformation. I got into conversations about cloud a few years ago by learning about cloud economics to support customers with the business case for moving to cloud, explaining the capex to opex shift and the total cost of ownership (TCO) model. Whilst these are still important factors when changing IT models, the fact remains that using cloud compared with running an on-premise datacentre is not like for like. There are limitations in the TCO model, and migration is just the first step: next you optimise through right-sizing, reserved instances, storage optimisation and so on, and go serverless to really reduce costs.

The conversation then moves up to the value proposition, which involves better resilience, greater efficiency and being able to focus on value-add instead of keeping the lights on. Importantly, by supporting the technology with the right culture, an organisation gains the ability to deploy quickly, enabling innovation, agility and pace to market, to name just a few benefits. But the conversation doesn’t stop there.

Now the conversation has gone further, to strategic outcomes. Cloud enables future technologies like AI and ML, which open up new opportunities from data, creating business intelligence that can help companies get closer to their customers, understand trends and respond more quickly to the market. To remain relevant and be a market leader, this is key. Having an innovative culture means you can release new products and services more quickly and ultimately grow revenues. So we are no longer just talking about changing cost models; we are talking about business transformation, customer experience and growing revenues. This is digital economics.

This is shown in the following chart, an upside-down triangle illustrating that the strategic outcomes outweigh the cost.

How should you approach this? Always start with why: address the purpose, and ensure the business strategy and objectives lead, with the technology strategy aligned. Think big, with a digital strategy aligned to the business strategy and driven by business outcomes; with exec sponsorship, this clear vision will underpin decision making across the organisation. Start small to get users on board with the change, and look to do ‘lighthouse’ projects. Learn fast, underpinned by a culture that embraces experimentation and accepts failure, so you can fail fast, learn fast and ultimately increase innovation. Then iterate: transformation is continuous, so deploy lots of small changes frequently, as this keeps up the pace and limits the impact of any one change.

Is your organisation on this journey? Many are embracing digital transformation and business transformation; others aren’t, and are failing. Higher Education has somewhat different opportunities and challenges from the private sector, but many of the principles still apply. Getting closer to your students, having insightful data and understanding their needs could result in your institution being their chosen provider of lifelong learning, not just an initial degree. This increases the value of that initial relationship.

Education 4.0 is all about embracing technology in the future of learning, an area changing as a result of technology and student expectations. For the sector, the challenge is to future-proof itself, as it isn’t immune to disruption. In the USA, Udacity has entered the online education market with no campus, is gaining thousands of students and already boasts 80,000+ graduates. With student fees so high, students will be looking for alternatives. The opportunity is to gain lifetime access to students and become their provider of lifelong learning, thereby increasing their overall value.

What does success look like for your organisation? Suggestions would be collaboration, speed to market, doing things differently, great customer and user experience, business intelligence from data insight, business transformation, continuous iteration and being future ready. And don’t forget to mention digital economics – you heard it here first!

HE top tips from our experts: improving student experience and optimising service delivery with Cloud

Our Jisc Cloud Solutions consultants, Lyn Rees and Paul Ross, have gathered their top recommendations for Higher Education institutions that want to get the best out of the student experience and shape more effective service delivery by using cloud technologies.

Improving Student Experience

The digital landscape continually offers new technologies to improve our lives, and the HE student arena is no different. Students are becoming more and more demanding with regard to universities’ digital capabilities and services as they become ever more connected.

  • The ever-evolving expectations of digitally native students – from mobility to high-resolution streaming content – mean that public cloud services are well positioned to serve their needs.
  • When it comes to security, students have high expectations of their institutions. Capable anti-phishing protection is increasingly expected, particularly for more vulnerable students, such as those for whom English is not their first language.
  • Student demand for ubiquitous digital services is on the rise, with an expectation of 24/7 access on the go from their mobile devices – demands easily met by cloud solutions. They value unified calendaring and timetabling capabilities and one-stop access via mobile apps.
  • Whilst the students of today may be considered digitally native, there is still a great deal of value in providing training and guidance to enable them to make the most of digital tools and services, fostering efficient, safe and collaborative practices.
  • Paper driven processes belong in the last century!
  • There’s an ever-greater demand for lecture capture and blended learning capabilities from students across the sector. Cloud SaaS based offerings have lowered the barriers of entry to these platforms, enabling institutions to quickly establish cost-effective services which can scale with demand.

Optimising Service Delivery

HE leaders working in the digital space are not only looking outward in terms of student experience, but also inward, especially at how they can make internal processes faster and more seamless. From culture change to lowering costs, here are our thoughts on how you can tackle some of your institution’s digitisation challenges.

Know Your Cloud Environment

  • Track return on investment by incorporating feedback loops to measure the success of new applications and services as they are consumed by users.
  • Commit to this approach and incorporate it into your business case.
  • Leverage the capabilities that cloud technology offers to analyse operational metrics and visualise your services through dashboards and rich periodic reporting.
  • Make data-driven decisions to optimise your cloud infrastructure and services.

Improving Productivity

  • As your team’s experience and confidence in managing cloud technology grow, consider leveraging automation to make deployments more robust, speed up new projects, and reduce the chance of human error by cutting down repetitive manual tasks.
  • Embrace commodity cloud services where possible and focus resources where they can deliver the most value.
  • Make the most of the software, systems and licensing you already own, e.g. Office 365 or Google G Suite.

Culture

  • Is cloud fully understood in terms of use cases, productivity and digital innovation? Make time for experimentation and learning, and ensure investment in staff skills.
  • Develop a culture that recognises cloud as the primary vehicle for building digital capabilities. However, this approach needs to be pragmatic; it must acknowledge the realities of the existing environment.
  • Everybody wants to be more agile, but not every organisation can operate like a cloud-first lean start-up. Even if you’re not developing apps and services from the ground up, you can still adopt agile approaches – take a ‘fail fast’ approach by measuring user feedback, and always look to deliver incremental and valuable gains.

Lowering Costs

  • Only build where you are confident you can deliver a return on investment. Careful thought is required here; you don’t want to start building up technical debt.
  • Design for the cloud instead of ‘lift and shift’. If you are doing the latter, make sure you’re aware of the risks and come up with a plan to optimise.
  • Public cloud gives smaller colleges and institutions an upper hand when resourcing or budgets are limited.

You can find more of these recommendations and insights in our ‘Digital leadership in HE’ report, which we published with ucisa earlier this year. If you are interested in learning more about new technologies for improving student experience and making your business operations more effective, watch our webinar ‘What will the campus of the future look like?’.

Jisc Cloud Solutions NEWS : new G-Cloud services

G-Cloud is a lightweight procurement option for the public sector, initially created in 2012 by the Government Digital Service (GDS) but now owned and managed by the Crown Commercial Service (CCS) and entering its 11th iteration. Its original intention remains: to provide an agile and easy-to-use procurement route for organisations in the wider UK public sector that want to buy cloud services in line with the Government Cloud First policy.

Although primarily targeted at central and local government, G-Cloud can also be used by other ‘public’ bodies including those in the third sector and in education. As a result, it is increasingly being recognised as an easy way to buy cloud services by universities and colleges.

At G-Cloud 10, Jisc listed a single service – GovRoam. The merger with Eduserv brought another 16 G-Cloud services into the Jisc fold, and we have now submitted our new combined set of services to G-Cloud 11 – the latest iteration of G-Cloud, which went live on July 2.

Jisc is a trusted technology advisor and ally of the education, public and third sectors. We provide best-in-class technology advice, engineering and support and work as part of your team to transfer knowledge at every step. As a not-for-profit, we can be an allied technology partner and reinvest any profits back into the communities we earn them in.

We see public cloud technology as a key enabler of a digital revolution in the sectors we serve. Our consultants, architects, engineers, developers and support staff are the best at what they do and dedicated to delivering the best service possible whilst also transferring their knowledge and skills to our customers.

Together our services provide a full suite to support your use of cloud services from start to finish. They can be taken in sequence to support your entire cloud journey, or selected as needed to enhance just those parts of your programme where you need support.

Below is a brief overview of the services we offer on G-Cloud 11:
Advise
• Cloud Architectural Review – advice on optimisation, cost control, performance enhancements, security improvements and service resilience
• Cloud Strategy & Roadmap – assess your IT estate and operating model before setting out a strategy for public cloud adoption

Design
• Cloud Design & Deployment – develop high-level and low-level designs for your use of public cloud

Deliver
• Cloud Migration – technical and project management expertise to move your services to public cloud
• Office 365 Migration – consultancy and implementation expertise to support application migration from an on-premise model to a SaaS model

Support
• Managed AWS – a highly reliable, scalable, low-cost infrastructure platform in the cloud
• Managed Azure – a highly reliable, scalable, low-cost infrastructure platform in the cloud
• Managed Database – the day-to-day running, maintenance and backup of your databases
• Managed Office 365 – management, support and advice to drive and optimise your use
• Managed Website Protection – DDoS mitigation and Web Application Firewall protection for your public-facing websites
• Disaster Recovery as a Service – a managed service offering monitoring and management of your disaster recovery environment.

At every step of every engagement we aim to transfer our knowledge and skills to you because, by doing so, we will have a greater impact on society and become trusted and long-term allies. Our ultimate intention with all our services is to empower our members, public and third sector organisations to become digitally independent.
Our services can be found on the Digital Marketplace.
For more information, please speak to your account manager or email cloud@jisc.ac.uk.