Image: Gorodenkoff/Adobe Stock
Data is only as viable as the physical systems supporting and powering the servers that store it. Threats — from cybersecurity exploits to climate change — have major implications for those systems, and with emerging technologies only increasing demand for power and creating new vulnerabilities, maintaining infrastructure is as much about resiliency as keeping the juice flowing, the air conditioners running and the security cameras pointed at the door.
Image: Joseph Vijay, CEO of Intelli-Systems
Joseph Vijay, CEO of Intelli-Systems, a Melbourne, Australia-based critical systems infrastructure company that supports mines, hospitals, data centers and more, said buyers of critical infrastructure need to focus more on how their purchasing decisions will influence both their bottom line and the environment. They also need to become more discerning about their service providers, he added, making sure they have specialists on board with the skills to identify and isolate early warning signs before a failure occurs.
He spoke to TechRepublic about the role of the critical infrastructure industry in adapting to these changes and how it is evolving to meet current and future challenges.
KG: Where does Intelli-Systems fit into the infrastructure ecosystem?
JV: We’re critical environment specialists, which, in a broader sense, involves securing highly controlled processes that, if they fail, will cause great disruption to business, destroy asset value and, in a worst case scenario, harm human life. So from that perspective, “critical environment” can have a very broad application. In data center applications, customers want us to support the IT workload, but we also support industrial workloads and commercial spaces — which might involve protecting physical infrastructure: making sure lifts, boom gates, escalators are safe. We also support hospitals, aged care centers and research labs.
SEE: Salesforce announces Einstein GPT for field service (TechRepublic)
KG: These are very divergent settings and systems. What are the commonalities between them that make it something that one company can do?
Five elements of critical infrastructure
JV: So, if we just talk about the applications, the technology supporting them really comprises five elements. They are critical power — which you need to operate technology — and precision cooling …
KG: Definitely for data centers.
JV: Exactly right. Then fire prevention and fire suppression, and you need to make sure that these environments only allow access to authorized personnel, so access control. And when people are operating in your space, you want to make sure that it’s properly surveyed. So you have surveillance. So five elements: power, cooling, fire suppression, access control and surveillance. This is the solution portfolio that we support.
KG: And is that unusual for one company to offer a turnkey solution across all of these areas? HVAC (heating, ventilation and air conditioning) and security seem far apart from a technical and engineering point of view.
JV: Yes, but it is also what the market needs, so it is our strategy to continually invest and expand across this full solution portfolio. As of today this full portfolio is only available for select niche customers, but in time we plan to make this more generally available to all our customers. It’s not easy for them to find one company that can provide this diverse portfolio.
Single-vendor approach to critical infrastructure
KG: How do companies handle these complex requirements then?
JV: Any organization that needs a combination of these technologies supporting those very critical environments typically has to rely on five or even six organizations collaborating. That can be a very complex operation, one that relies on a large prime contractor to take on overall responsibility, which isn’t always viable or accessible for mid-size businesses. This often means they end up becoming reactive, which introduces a great deal of risk to their business operations.
IT and infrastructure: inside expertise with outside support
KG: Is this also typical of data centers? I think there are over 8,000 data centers globally, all of which have critical demands in power, security, cooling, obviously, and everything else. It kind of reminds me of how IT works; there are a lot of vendors out there with specific areas of expertise. How do data center engineering teams handle that?
SEE: Space … the final frontier for data storage? (TechRepublic)
JV: Data centers rely on large facility management firms to oversee and manage a maintenance contract and hire specialists to provide targeted maintenance for key components. It’s not so dissimilar to the IT landscape, where you’ve got companies like Deloitte, Accenture, Capgemini and Ernst & Young who may, in some cases, have in-house specialists to provide contract management, governance, risk mitigation and so on and so forth. And then they would hire specialists. But in some cases data center organizations have a systems integrator and in-house expertise to perform general maintenance for these systems, and a good procurement team. What they want are specialists to give them point solutions to get them out of a jam.
KG: For those enterprises that don’t need a large organization to handle complex integrations, though, do they have the expertise in-house to handle power, cooling, etc.?
JV: Or they already have the capabilities to manage multiple contractors or multiple suppliers. But trying to work with multiple organizations for one environment can be challenging. You have an HVAC specialist who comes in and says, well, I’m only thinking about HVAC. And then a power specialist, so you have a lot of siloed views of the situation. What we want to be for those organizations is the expert across these various technologies, because we understand how these various elements interact with each other. So when we talk to you about a critical environment, it’s not just about one element, it’s about all five, so we can give you a holistic solution.
Operational Technology: remote monitoring, AR and on-premises support
KG: How much of what you do involves on-premises software and hardware? Do you have proprietary cloud services?
JV: We do. In Operational Technology, which is where we live, there are functions we can triage through remote monitoring, through software-as-a-service and proprietary data collection and monitoring tools. This lets us ping a remote device for status and system health, for instance, and make data-based evaluations on whether a system’s condition is critical or whether action can be deferred before it needs to be addressed. However, in a lot of cases we have to attend in person to resolve the issue or perform any maintenance, so we focus on ways to minimize our time to triage and cost to serve to provide greater value for our customers.
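To illustrate the kind of remote triage Vijay describes, here is a minimal sketch of a status poll that classifies a device’s health against simple thresholds. The endpoint, field names and thresholds are hypothetical illustrations, not Intelli-Systems’ tooling.

```python
# Hypothetical sketch of a remote health-check triage step.
# Endpoint, field names and thresholds are illustrative only.
import requests

CRITICAL_TEMP_C = 35.0  # assumed inlet temperature threshold
WARNING_TEMP_C = 30.0

def triage(device_url: str) -> str:
    """Poll a device's status endpoint and classify its condition."""
    status = requests.get(f"{device_url}/status", timeout=5).json()
    temp = status.get("inlet_temp_c", 0.0)
    on_battery = status.get("on_battery", False)

    if on_battery or temp >= CRITICAL_TEMP_C:
        return "critical: intervene now, remotely or on site"
    if temp >= WARNING_TEMP_C:
        return "warning: schedule maintenance"
    return "healthy: no action needed"

# Example call against a hypothetical endpoint:
# print(triage("https://ups-01.example.internal"))
```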
KG: What are some examples of this and does it include a UX for monitoring critical systems?
JV: One example: We have invested in augmented reality lenses that connect to our teams remotely, which is particularly important for the work we do in mining, as well as hospitals and medical facilities, where systems need to be monitored remotely and support needs to be provided across several distributed locations. For us to get to a site on time can be challenging and expensive for us and the customer. Augmented reality lets us connect to someone local wearing these AR lenses, allowing us to see what they see and connect back to our platform.
KG: How does this work in practice?
JV: Our specialists sitting in our operations center can see exactly what their local peer is seeing, remotely. Then we get to work out — assess — is this something that we need to turn up on site and deliver, or can we provide instruction over the wire and have that local person or our local partner solve the challenge? We do things like that to overcome some of the gaps in software that currently exist with Operational Technology. What is proprietary is our knowledge base which we use to be efficient and stay safe. So one of the challenges that we have to contend with day to day is that sometimes technology that we interface with can literally kill us if we’re not careful.
Data centers operate well below capacity
KG: I wanted to ask you about power consumption. Data centers, I think I read, use as much power cooling servers as they do running them because of heat generation.
JV: One of the challenges data centers have is they’re monolithic. You can have modular data centers, of course, but even the modular design has limitations in how easily it can scale. Typically, a data center designer never wants to be in a situation where they’re under-provisioned — data centers also want to grow. So they have this issue: They typically operate at about 60% to 70% of their design load, because they always keep headroom to grow into. Now, in one sense, you can think about that headroom as necessary, because you want to be able to scale easily. But the issue we’re facing today is data centers that were built 30-odd years ago and have never reached capacity. This is why, when you talk about things like power usage effectiveness, it’s never a very good story.
KG: So they are inefficient because they aren’t using all of their potential assets?
JV: Yes. People advertise their PUE (power usage effectiveness) from the design point of view; it is designed to achieve a certain efficiency level, but if you apply that to operating standards or how they are typically operating, very seldom would they achieve that because you always need headroom you can scale into. So you are not efficient because you are over-provisioned.
KG: I don’t understand that part of it.
JV: Well, if you think about it, if I’m provisioning for 100%, okay, I’ve got all these power feeds coming in. I’ve got all this cooling technology and equipment sitting around that is able to support that 100% workload, right? But I’m only operating at 70%.
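To see why that gap hurts efficiency, consider a toy power usage effectiveness (PUE) calculation. PUE is total facility power divided by IT power; the assumption below is that part of the overhead (cooling and distribution sized for the design load) does not shrink when the IT load does, so PUE worsens at partial load. All figures are illustrative, not from the interview.

```python
# Toy PUE model: PUE = total facility power / IT power.
# Overheads are split into a fixed part (sized for design load) and a
# part that scales with actual IT load. All numbers are illustrative.
def pue(it_load_kw: float, design_it_kw: float,
        fixed_overhead_frac: float = 0.2,
        variable_overhead_frac: float = 0.3) -> float:
    fixed_kw = fixed_overhead_frac * design_it_kw      # does not shrink with load
    variable_kw = variable_overhead_frac * it_load_kw  # tracks actual load
    return (it_load_kw + fixed_kw + variable_kw) / it_load_kw

design_kw = 1000.0
print(f"PUE at 100% of design load: {pue(1000.0, design_kw):.2f}")  # ~1.50
print(f"PUE at  70% of design load: {pue(700.0, design_kw):.2f}")   # ~1.59
```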
Stranded capacity: The grid delivers whether you use it or not
KG: So you’re pulling far more power than you need because one day you might need it? So you have a bunch of empty racks.
JV: Well, empty racks, and a whole bunch of power that you’re sourcing from the grid. When you take power from the grid, the grid needs to know how much power you intend to use, and it will be provisioned to deliver that power to you consistently. Now, whether you use that power or not, they still need to provision it and make it available should you need it, because that’s the commitment they make. If you are in your house and you’ve got a certain power load you need to use, you’d never want to be in a situation where you can’t use certain appliances just because the grid is unable to supply that power to you. This is stranded capacity — capacity that has to be made available purely because of your design estimate, and if you’re operating under your design estimate, all of that is just sitting around. It is stranded. So the question one needs to ask is, am I truly ever going to use that capacity? And if I can’t see a need for it, can I safely downsize without impacting my business operations?
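Putting illustrative numbers to stranded capacity: whatever the grid has committed above your actual peak draw is provisioned but never used. The figures below are assumptions for the sake of the example.

```python
# Illustrative stranded-capacity calculation (numbers are assumptions).
provisioned_kw = 5000.0  # grid connection sized to the design estimate
actual_peak_kw = 3500.0  # highest draw observed in operation

stranded_kw = provisioned_kw - actual_peak_kw
print(f"Stranded capacity: {stranded_kw:.0f} kW "
      f"({stranded_kw / provisioned_kw:.0%} of the grid commitment)")
```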
Difficulty in downsizing
KG: Why is it difficult to downsize power capacity at a data center?
JV: Because it’s not simple to unravel these Operational Technologies. Imagine you’ve set up a UPS (uninterruptible power supply) designed to a certain power envelope: You want a certain runtime in the event your main power feed fails, so you’ve built a battery solution that supports that. Aside from the technical complications of downsizing, you also have to consider the commercial implications, because you look at it as: I’ve built this, I’ve invested in this asset, I want a certain life cycle from it, I want to depreciate it over a certain period of time, and if I depreciate it any sooner then I’m not getting the best bang for my buck. Finance people are usually not so happy when technology managers turn around and say, “Whoops, we’re oversized, and now we’d like to get rid of some of this [equipment and capacity].”
KG: What are the technological implications of this? And what are the solutions to making this possible?
JV: So there are various elements that need to integrate and work together to support this critical environment. It’s hard to downsize: You’ve got this cooling equipment sitting there and you’re going, well, hang on, the cooling infrastructure was designed to support the original power envelope. If I reduce the power infrastructure, I then need to make changes to the cooling and related critical infrastructure. You typically have to contend with a lot of risk when you do this. This is where we can be very effective in picking this apart, looking at every aspect of a critical environment, not just one siloed element. Because we are more informed: If we make changes in one place, we know we will have to make changes over here. Being holistic allows for a much broader plan, the ability to map it out and simulate the implications of what a change would look like.
KG: Because in many cases you’re managing several systems at once?
JV: It’s not that different from a managed services provider in the IT landscape. You don’t have a managed service provider that just deals with computers, or just deals with security. They deal with the network, storage and backup and compute and so on. So there again, you just can’t make a change to your network capacity without thinking about the implications of everything else.
Growing focus on efficiency
KG: Are you seeing customers saying they want to change how they adapt to needs and use less power?
JV: Yes, customers want to know how to rightsize their power usage. We believe it has two elements to it: first, knowing how much you need. If we give you enough sensors, monitoring, et cetera, and show you what your power profile is, you have a better chance of being clear about what you actually need. The other element is improving your efficiency or eliminating waste. This is where we can be very direct in helping companies achieve their sustainability goals. Everything that you receive from your mains power should be equal to what’s available to be used on the other side. But usually there is an inefficiency, a certain lag factor that creates overheads.
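As a sketch of the “knowing how much you need” step, assuming interval meter readings are already available: size provisioning from a high percentile of the measured profile plus a margin, rather than from the original design estimate. The readings and the 20% margin below are assumptions, not a prescribed method.

```python
# Hypothetical rightsizing sketch from metered power samples (kW).
# The readings and the 20% safety margin are illustrative assumptions.
import statistics

readings_kw = [310, 295, 340, 360, 330, 315, 355, 345, 370, 325]  # e.g. interval meter data

p95_kw = statistics.quantiles(readings_kw, n=20)[18]  # 95th percentile of the profile
rightsized_kw = p95_kw * 1.2                          # add headroom for growth

print(f"95th-percentile draw: {p95_kw:.0f} kW")
print(f"Suggested provisioning with 20% margin: {rightsized_kw:.0f} kW")
```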
KG: How do you quantify that? What kinds of inefficiencies are you seeing?
JV: UPSes maybe 15 or 20 years ago were operating at a power factor of 0.8 or 0.9, so 20% inefficiency, if you like. But today, modern technology is getting very, very close to unity and can therefore achieve near-100% efficiency. Having said that, it isn’t always the case that just because you’re using a UPS that is 15 years old you should obviously replace it with something brand new. It’s about looking at what you’re actually using. And then, if that makes sense, well, let’s replace it for you, safely.
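For a rough sense of what that efficiency difference means in practice, the comparison below estimates annual conversion losses for an older unit versus a modern one. The load and efficiency figures are assumptions for illustration, not vendor data.

```python
# Illustrative UPS conversion-loss comparison; figures are assumptions.
def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy drawn from the mains that never reaches the IT load."""
    hours_per_year = 8760
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours_per_year

load = 500.0  # kW of IT load carried by the UPS
print(f"Older unit (~85% efficient): {annual_loss_kwh(load, 0.85):,.0f} kWh/yr")
print(f"Modern unit (~97% efficient): {annual_loss_kwh(load, 0.97):,.0f} kWh/yr")
```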
KG: And all of this, of course, is ideally scalable in both directions? You’re able to gain efficiency based on your usage, but then if you grow and you demand more, you shouldn’t have to spend a lot of money … ?
JV: Correct. And the vendors are doing a great job in making their technology more modular. So where systems were previously very monolithic, today you can upgrade in step, so you don’t have to worry about building for 10 years into the future. You can build for what you need now knowing that you can daisy chain and add as you scale and grow.
Prospects for achieving net zero
KG: What is your role in helping companies do this, and perhaps even achieve net zero?
JV: It’s twofold. First, being a specialist provider that knows exactly what we’re doing, so we’re not wasting time or resources. This means we’re able to assess your situation very quickly and perform changes, or give you options for change. Second, we’re able to execute on that without disruption, making sure that we’re safe while we’re doing it. We also actively partner with leading vendors in our space, and we’re able to leverage our position as key partners to benefit our customers.
KG: Given where things stand now, do you think data centers will achieve net zero within 30 years?
JV: I am optimistic that data centers will evolve to become a whole lot more efficient, especially if the industry can become more focused on specialization and we can more rapidly embrace technology, including AGI, so we can be more targeted about how we improve our design and consumption. For now, the call to action is to make continuous, incremental change, and we are committed to supporting our customers on this journey.