Cloud Communications is the future, but not with bad design

This past week has seen one of the UK’s largest (and certainly best-known) hosted telephony platforms experience a major service outage – leaving a huge number of businesses and sub-contractors without service for a significant length of time. As is the norm, many took to social media to air their ever-growing frustrations, but most did that very British thing: said nothing and took it on the chin.
Anyway, what’s another service outage? They happen all the time these days.
In some sense, this seems par for the course, where customer expectations once again do not remotely align with the promises they believe they were given when they signed their service-provider contracts.
All of the above could be filed under ‘nothing new to see here’ if it wasn’t for the repeated comments by those affected about why their Call Divert capability/service hadn’t kicked in as they were told before they signed up.
But wait, wasn’t one of the big selling points of moving to cloud-based solutions their reliability? Leveraging those economies of scale. Where your cloud service provider had made all the big investments to ensure seamless fail-over of hardware and software systems, power supplies and other eye-watering overheads, so you didn’t have to? And you believed them!
The cynic in me would say that if this were analogised as a board game, there would appear to be far more snakes than ladders. Ironically, and sadly for the hosted telephony provider, this unfortunate outage happened just a day before the release of a new research report highlighting that businesses were four times more likely to leave a brand because of bad customer experience.
Don’t get me wrong, I’m not blaming the customer support team here. If anything, the support team were updating and responding very regularly to their clients. No, the customer experience error was committed way before the outage occurred – by promising something that couldn’t be delivered. That said, this of course could have been a totally unforeseen set of unique circumstances. Normally, where systems would be crashing about their ears, you could at least take some comfort in knowing that through the robust design and carefully curated resilient system architecture, you, the end user, would never know anything was amiss. As they say, Ignorance is bliss, just not this time.
To me, this smacks of buying into the sales pitch without digging into the contractual detail. Our long-standing and ever-patient Customer Services Director, Trevor Mockett spoke of these and many other matters back in May 2017 with two articles. Most notable of all his comments must be the following observation:
“Cold, hard proof.
Assessing these issues appears to require a leap of faith. You don’t know if the provider’s claims on any of these fronts are honest until you’ve actually experienced their service. By that time, of course, you’ve signed the contract and you’re past the point of no return.”
Caveat emptor.
There are many who might see this outage as yet another reason why the ‘cloud’ is still an immature choice for a service as critical as communications. An on-premises, internally managed legacy system is far more dependable – or at least a known quantity.
But cloud-based telephony and unified communications are enjoying an enormous upswing in popularity these days. At a time when most business communication platform sales are either flat-lining or experiencing downward cycles, hosted comms is the only clear rising star of the industry. In comparison to its legacy alternatives, it promises low-cost, decentralised, omni-present, always up-to-date and reliable performance. And I would assert that the technology is not what’s at fault here. After all, your average hosted solution is really just a very big version of what was typically running on a customer site.
We inevitably end back at the ‘how you design, implement and maintain it’ part of the service provider discussion. Dare I mention that horrible cliché ‘failing to plan is planning to fail’!
I’ve written before about service reliability, particularly about the claims made by those promising 100% and 99.999% availability. As a quick reminder, the figures below show how much downtime per year each availability level actually permits:
99.9% = 31,557 seconds = 8 hours 45 minutes 57 seconds
99.99% = 3,156 seconds = 52 minutes 36 seconds
99.999% = 315 seconds = 5 minutes 15 seconds
99.9999% = 32 seconds
100% = have a guess?
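The arithmetic behind these figures is simple enough to sketch. The snippet below is a minimal illustration (using a 365.25-day year, which matches the numbers above) that computes the downtime each availability level permits:

```python
# Downtime permitted per year at each availability level.
# Uses a 365.25-day year (31,557,600 seconds) to match the figures above.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # 31,557,600

def downtime_seconds(availability_pct: float) -> float:
    """Seconds of allowable downtime per year for a given availability %."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999, 99.9999, 100.0):
    secs = downtime_seconds(pct)
    hours, rem = divmod(int(secs), 3600)
    mins, s = divmod(rem, 60)
    print(f"{pct}% -> {secs:,.0f} s ({hours}h {mins}m {s}s)")
```

Run it and the answer to the 100% line becomes obvious: zero seconds, ever.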
These are very sobering numbers. Think about the last time you had a service outage. How long did it last? Does your current SLA match your current service experience?
Our commitment to 100% availability underpins our data centre and managed service portfolio; providing our customers with peace of mind that they will always have access to their data and applications. It’s what we call Business Assured.
Since we first opened our data centre in 2011, we have never had a service-affecting outage. Ever. We put this down to our unique approach to data centre infrastructure management; optimising performance, power and cooling to ensure 100% availability.
To know more about how we achieve and maintain this 100% uptime performance record or how we get businesses collaborating better, contact us.
Thinking smart: The role of business intelligence in a cloud-first strategy

Is moving critical IT infrastructure to the cloud something of an inevitability? Digital transformation needs to be carried out at the right time; meeting business needs whilst maximising return on legacy on-premises infrastructure. This places an onus on organisations to develop a ‘cloud-first’ strategy.
According to LogicMonitor’s Cloud Vision 2020 study, 83% of workloads will run from the cloud by 2020. Compare this to today’s 68% and you begin to realise that the growing appetite for cloud models shows no signs of slowing.
By blending off- and on-premises infrastructure, organisations are benefiting from the availability, flexibility and cost-efficiency that this environment provides; indeed 72% of cloud users employ this hybrid strategy. However, you can’t just ‘lift and shift’ your infrastructure because of the service disruption this would cause, let alone budget impact.
To combat this, cloud-ready organisations need to adopt a staged approach. The question is, how do you identify and prioritise the order in which you migrate your technology? The answer lies in business intelligence (BI).
THE POWER OF BI
Business intelligence utilises the Big Data that’s gathered from your network and turns this information into actionable intelligence, which organisations can use to improve decision making and mitigate risk.
Using network device reporting and data analytics tools allows you to gather credible business intelligence that you can use to build your technology roadmap and inform your journey to a hybrid environment. By collecting, collating and interpreting key information on the devices that make up your network, you can plan your phased migration according to criticality and lifecycle.
THE RESULTS
Deploying BI tools gives you a snapshot of the health of your network and the lifecycle of your equipment. Tools such as Double Red will also generate reports that grade your devices according to business importance, highlight areas of concern and recommend remedial actions. This allows you to prioritise your phased move in terms of:
- Elements that are ‘at risk’ (those that are vulnerable, end-of-life or out of support)
- Devices that are nearing the end of their lifecycle
- Equipment graded as ‘safe’
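As a purely hypothetical sketch of the grading logic described above (the field names and thresholds are illustrative assumptions, not Double Red’s actual schema), the prioritisation might look something like this:

```python
# Hypothetical device-grading sketch for a phased cloud migration.
# Field names and the 12-month threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    end_of_life: bool   # vendor has declared the product end-of-life
    supported: bool     # still under a support contract
    months_to_eol: int  # months remaining in its lifecycle

def grade(device: Device) -> str:
    """Assign a migration-priority grade to a device."""
    if device.end_of_life or not device.supported:
        return "at risk"               # migrate first
    if device.months_to_eol <= 12:
        return "nearing end-of-life"   # migrate next
    return "safe"                      # migrate last, or sweat the asset

# Example estate, sorted into migration order by grade:
estate = [
    Device("core-switch-01", end_of_life=True, supported=False, months_to_eol=0),
    Device("branch-router-07", end_of_life=False, supported=True, months_to_eol=6),
    Device("edge-fw-02", end_of_life=False, supported=True, months_to_eol=36),
]
order = ["at risk", "nearing end-of-life", "safe"]
for d in sorted(estate, key=lambda d: order.index(grade(d))):
    print(f"{d.name}: {grade(d)}")
```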
You can then use this information as part of your wider digital transformation strategy and timeline; determining which applications can be migrated to public or private cloud, moved to colocation, upgraded on-premises or sweated in their current state.
How the Cloud is Helping to Solve Law Enforcement Challenges

“Bad boys, bad boys, whatcha gonna do?” In 1989, the TV show COPS made its debut with a unique concept: have a camera crew follow police officers as they take down thieves, drug dealers, and other criminals. Fast-forward nearly 30 years, and today approximately 95% of large police departments are using body-worn cameras (BWCs) or have committed to using them soon to record police officers’ day-to-day activities. While these innovative devices are improving police and community relations, even resulting in a 90% decrease in citizen “use of force” complaints, they’ve also created a mountain of seemingly unmanageable surveillance footage. Now, the question facing law enforcement agencies is, how is body camera footage stored?
Police Body-Worn Camera Usage Soars
Today, 34 states and the District of Columbia have created police camera laws, and they continue to be a focus of state lawmakers who are increasing funding through state and federal grants. That’s not all. Lawmakers now want recordings to be on high-definition video to enhance clarity, and protect officers from false accusations of misconduct. They also want to implement minimum retention time for BWC, dash cam, and static surveillance video (in Texas, for example, police camera video must be retained for at least 90 days). That’s a lot of video, requiring a lot of storage space. Think about it: with dash cams alone, police were dealing with terabytes of data; add BWC footage into the mix, and now they’re forced to manage petabytes.
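A rough back-of-envelope calculation shows how quickly this adds up. All the figures below are illustrative assumptions (per-hour footage size, recorded hours per shift, force size), not data from any agency; only the 90-day retention period comes from the Texas example above:

```python
# Back-of-envelope estimate of body-worn-camera storage demand.
# GB_PER_HOUR, HOURS_PER_SHIFT and OFFICERS are illustrative assumptions.

GB_PER_HOUR = 2.5     # rough size of an hour of HD H.264 footage (assumption)
HOURS_PER_SHIFT = 4   # recorded hours per officer per shift (assumption)
OFFICERS = 1000       # size of a large metropolitan force (assumption)
RETENTION_DAYS = 90   # minimum retention, as in the Texas example

gb_per_day = GB_PER_HOUR * HOURS_PER_SHIFT * OFFICERS
tb_retained = gb_per_day * RETENTION_DAYS / 1000

print(f"{gb_per_day:,.0f} GB of new footage per day")
print(f"{tb_retained:,.0f} TB held at any time under a {RETENTION_DAYS}-day policy")
```

Even with these conservative assumptions, a single large force is holding close to a petabyte of BWC video at any moment, before dash-cam and static surveillance footage is counted.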
Cloud Computing in Law Enforcement
Along with the influx of new video footage, agencies also need to store police reports, photographs, crime mapping, analytics, fingerprints, and other classified and sensitive information. To manage all this data, law enforcement agencies are increasingly turning to cloud computing. Most clouds are highly scalable, and able to increase storage capacity with the flip of a switch to accommodate increasing data needs. But when moving to the cloud, organizations need to keep in mind security and compliance laws and regulations that they are bound to.
Cloud Computing Laws and Regulations
The International Association of Chiefs of Police (IACP) has set up some Guiding Principles on Cloud Computing in Law Enforcement. Think of them as a CJIS checklist; most are pretty straightforward, and we’ve simplified many below (you can view the IACP’s more in-depth guidelines here).
1. FBI CJIS cloud compliance must be met.
Cloud providers must comply with the requirements of the FBI’s Criminal Justice Information Services (CJIS) Security Policy and acknowledge the restrictions and limitations it places on the access, use, storage, and dissemination of CJI.
2. All data storage systems must meet the highest common denominator of security.
With the increase of locally-collected data such as body-worn cameras, law enforcement agencies should store all collected data at the highest level of security (often the FBI CJIS standard).
3. Data ownership and data mining.
Almost all cloud service providers specify that the client owns the data, but the IACP requires it in writing—along with the procedure for migrating data to another service, or back to in-house servers (this is known as cloud repatriation). The IACP also advises agencies to make it clear that data is off limits for any data mining or ancillary operations of that cloud provider.
4. Auditing.
Cloud service providers must allow law enforcement agencies to conduct audits of performance, use, access, and compliance.
5. Integrity.
Providers must maintain physical or logical integrity of CJI by separating law enforcement agency storage and services from other customers.
6. Availability, Reliability, and Performance.
The degree to which the cloud service provider is required to ensure availability and the performance of data and services is dependent on the criticality of the service provided. For some services, such as the retrieval of archived data or email, lower levels of availability may be acceptable, but for more critical services like Computer-Aided Dispatch, levels of 99.9% or greater are required.
Security and CJIS Compliance on the Cloud
The cloud offers a whole new way for law enforcement agencies to securely store valuable footage and files while remaining CJIS compliant and following IACP guidelines. Thrive works with state and local organizations and can help you make a seamless move to the cloud. Our Cloud service is a virtual private cloud solution designed for national, state, regional, and local government agencies. We ensure strict security protocols, 99.99%+ uptime, and a complete compliance package; meeting the requirements of CJIS, HIPAA, PCI, SOC, and SSAE16. Contact Thrive today to learn more about our Cloud services.
The case for Software-Defined Wide-Area Networking (SD-WAN)

Enterprise networks do not have it easy. They are facing an unprecedented level of demand; driven by the combined pressures of digital disruption, operational complexity and cyber security.
The continued growth of mobility, the IoT and big data applications is adding to what is already a lack of insight into IT operations. Legacy, frequently siloed systems see many IT departments spending three times as much on network operations as they do on the network itself.
Add to this the ever-changing cyber security landscape and it’s easy to see why the industry is ready for a change. Business demand for SD-WAN infrastructure and services is forecast to grow at a compound annual rate of over 69% over the next 3-5 years (IDC). By the end of 2021, Cisco predicts that 25% of all WAN traffic will be software-defined.
What is SD-WAN?
SD-WAN is the application of software-defined networking technologies to wide area, enterprise networks. It is used to secure WAN connections between branch offices, remote workers and data centre facilities that are geographically dispersed.
Effectively a network overlay, SD-WAN is carrier-agnostic and transport-layer independent. It promises reduced operational costs, greater control over network applications and simplified management.
Who needs it?
It might be easier to say who doesn’t need it. Any organisation that relies on public or private networks to operate their business should be considering SD-WAN. More specifically, if you are contemplating any of these initiatives, SD-WAN should be front of mind:
- Use of video or bandwidth intensive applications
- Deploying hybrid WAN topologies at remote locations
- Planning to review/optimise existing branch routers
- Migrating away from MPLS
- Increasing bandwidth/network resilience
Managed SD-WAN
Whilst SD-WAN promises greater simplicity and visibility, management of the network and its component elements is required to ensure your WAN infrastructure continues to be a business enabler, rather than an inhibitor.
Many businesses will seek to employ SD-WAN as a managed service from a trusted technology partner to ensure they make the most of the benefits available. Improvements in business agility, reduced capital expenditure, ease of management, reduced maintenance costs, even greater resilience can be realised.
Thrive has established best-practice processes and resources for managing the implementation of software-defined networks. Our network monitoring and management solutions are backed by leading SLAs and our customers benefit from the transparency of a single provider for CPE and the underlying connectivity.
Moving your critical infrastructure to the cloud isn’t as hard as you think

Cloud adoption rates have continued to rise through 2018. Having been the driving force behind digital transformation for small and medium sized businesses, it is predicted that this year will see a tipping point for enterprise cloud adoption.
According to Forrester, this year will see over 50% of enterprise workloads moved to the cloud. This prediction is supported by the findings of a LogicMonitor survey published earlier this year, in which respondents foresee 83% of workloads will be in the cloud by 2020. The survey suggests 41% will be running on public cloud platforms, with 20% using private cloud and a further 22% adopting a hybrid approach.
The naysayers that predicted cloud would have limited appeal to medium-large enterprises have had to admit that the cloud “bubble” is not going to burst. Adoption rates have continued to grow year-on-year as organisations of all sizes seek to take advantage of the reliability, flexibility and cost-effectiveness that cloud brings.
While these benefits are widely accepted, some organisations still feel a degree of reticence around making the jump to a cloud first strategy. This may be because of a perceived gap between what ‘good’ looks like (i.e. the ability to move their desired apps to the cloud) and the realities of budget, skills and resource constraints. For some, this makes a cloud-first strategy something that organisations pay lip service to, rather than committing to digital transformation.
There are two other reasons that organisations often cite when asked about their hesitance to move: Firstly, the belief that their critical applications are not cloud ready, and secondly a desire to maximise ROI on legacy technology – adopting the cloud for new tech only.
One or more of these reasons may ring true for your organisation, but we’d like to point out that you can have the best of all worlds – maximise the returns on your legacy investment, migrate your business-critical apps to the cloud and realise the cost, scale and availability benefits of cloud infrastructure.
‘Traditional’ vs. ‘cloud built’ applications
The business applications market has been experiencing its own revolution in recent years. Larger organisations are effectively re-writing traditional apps in a native cloud environment; creating cloud-scale, container-based applications, rather than an on-premises solution based within a single OS.
Cloud-scale, container-based apps (like Netflix or Salesforce) can run in hyper-scaler environments on public cloud infrastructure. The sheer volume of containers used (thousands) means these applications can be rapidly scaled up and down. It also means they are incredibly resilient, even if running on low-level SLA hardware, as they can tolerate a lot of the underlying hardware going offline without affecting performance.
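The resilience claim can be made concrete with a simple probability model. Assuming (as an idealisation) that instances fail independently, a service backed by n replicas is unavailable only when all n are down at once:

```python
# Simple redundancy model: a service with n independent replicas is down
# only if every replica is down. Independence is an idealisation; real
# failures (power, network) are often correlated.

def service_availability(instance_availability: float, replicas: int) -> float:
    """Probability that at least one of `replicas` independent instances is up."""
    return 1 - (1 - instance_availability) ** replicas

# Even on modest 99%-available hardware, a handful of replicas yields many nines:
for n in (1, 2, 3, 5):
    print(f"{n} replica(s) -> {service_availability(0.99, n):.10f}")
```

With thousands of containers, the combined availability is so close to 1 that losing large amounts of underlying hardware barely registers, which is exactly the behaviour described above.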
In comparison, most small-medium sized organisations don’t have access to the budget or development resource required to transition their legacy apps into cloud-scale apps. As a result, they are more likely to rely upon traditional software applications.
However, this does not mean your apps aren’t cloud ready. For traditional business-critical applications that require 100% uptime, private cloud infrastructure or colocation of your hardware in a third-party data centre can deliver the same benefits of scalability and availability.
A hybrid approach
With applications at different stages in their lifecycle, your cloud adoption strategy is likely to feature a phased migration. Between their current and desired state, organisations will migrate some systems early to take advantage of improvements in availability, security and scalability.
But there’s no need to stop there. Legacy applications are seen to hold companies back, but this shouldn’t be the case. Moving these apps now to a Private Cloud (eliminating the need to re-build them) provides a true cloud first strategy through this blended, hybrid approach.
Many of your applications can be moved sooner than you think, and Thrive’s team is ready to advise you on just this. Speak to us to find out more.
Four vital criteria when choosing a virtual meetings solution

While email remains the most ubiquitous form of communication, with worldwide traffic set to hit 281.1 billion emails per day by the end of 2018, it isn’t the ‘be all and end all’ of office communiqué. Information can get lost, messages misinterpreted and deadlines missed. This doesn’t exactly make for a collaborative environment.
As workforces are becoming widely dispersed, with team members working together in the office, from home and across borders, virtual meeting solutions are recognised as an excellent way for people to connect and collaborate irrespective of where they work.
You may think that deploying a solution out-of-the-box is a quick and easy way to achieve greater collaboration within your organisation, but without the correct planning and management it’s a goal you’ll struggle to achieve.
To help you on your way, we’ve put together a list of what you need to watch out for when choosing a virtual meetings solution for your business.
1. Don’t deploy a solution that doesn’t fit
Checking that your organisation would benefit from a virtual meetings solution may sound like an obvious first step, yet many decision makers will fall into the trap of assuming that a problem needs fixing without talking to their colleagues first. Inevitably, this leads to them implementing something that is poorly received and may not solve anything.
Asking your teams how they communicate on an every-day basis, where they struggle and what they think can be improved, ensures that the solution complements your own bespoke needs. It also helps you build your business case by demonstrating the value it can bring to staff.
2. Keep users top of mind – now and in the future
Making your users’ lives easy must be the top priority of your project; after all, they will use the technology every day. Providing them with a solution that can be used across all their devices, permits them to share files, thoughts and ideas securely and ties into other systems they use in their everyday working lives, will help drive adoption as well as increase productivity and satisfaction.
As this is a solution you’ll look to keep for years to come, you should make sure that it will suit the needs of your future workforce, which will be made up of tech-savvy millennials to whom flexibility and workplace satisfaction are key drivers.
3. Take time to form your strategy
Without a uniform strategy, there’s always a danger that your staff will go away and choose their own tools. This problem can be amplified when different teams all pick different tools from each other; sharing knowledge across these teams becomes even more difficult as individuals will not be willing to use a different tool for every project they work on.
Forming a strategy that promotes collaboration, using a defined set of tools, will minimise the use of ‘shadow IT’ and encourage much greater user adoption.
4. Think beyond collaboration tools
While the tools you implement form an important part of your strategy, you also need to look at your culture and working environments.
Promoting a culture that encourages collaborative working may be something you already do, in which case it’s a matter of ‘tweaking’ it to encourage the use of your new tools. However, if this cultural shift is new, you should involve your staff throughout the process to make sure they’re aware of how the solution will change the way they work for the better.
Alongside this, it’s also key that your workspaces are kitted out to encourage collaborative working. Think about quiet areas and huddle spaces in the office, coupled with the right applications for remote and mobile workers.
Need some help deciding on a collaboration tool for your business? Let us help.