Welcome to MidWeek Brief 3: The Briefening.
Almost every Wednesday, we’ll do a rapid-fire run-through of the big stories of the week so far, as well as preview emerging stories we’re keeping an eye on.
Defining Edge Computing
A broad group of edge computing stakeholders is bringing clarity and research to the most-hyped term since cloud computing. Best of all, the project has been approached in a completely open and neutral fashion.
These sorts of initiatives fail to catch on when vendors start pork-barreling their self-interests into them. There’s none of that here; the primary concern is bringing clarity to edge.
Marketing teams love “edge” – they abuse “edge”. Everything is being called “edge computing”. Seriously: there’s even marketing out there that translates to “center edge”. You know who you are. That’s not how it works.
Vapor IO, Packet, Ericsson UDN, Arm, and Rafay Systems teamed up with Structure Research and Edge Research Group to release a three-punch combo:
State of the Edge Report: I can’t tell you how much it pleases me that this group was the one to write it. Structure Research and Edge Research Group have my full endorsement (if they want it). Note: the report is free, but you do need to fill out contact info.
An Open Glossary of Terms: no other revolution has needed a proper lexicon as badly as edge computing.
A Collaborative Market Map: a living, breathing map of edge vendors and how they fit together.
We’ll take a deeper look in a future article, but we felt it was important to get the word out.
Microsoft Azure Suffers 11-Hour Outage in Europe
Microsoft Azure suffered a major outage in Northern Europe last week affecting cloud storage and networking systems.
A subset of customers using a laundry list of services experienced connection failures when trying to access resources hosted in the region.
Engineers identified an underlying temperature issue in one of the region’s data centers. It triggered an infrastructure alert, which in turn caused a “structured shutdown” (cascading failure?) of storage and network devices at the location to protect hardware and data integrity. Basically, they shut down the data center to save it.
I get that cloud and colo are best frenemies, but outages like this are marketing fuel for colo providers. Unless it’s their data center. These outages are very public and costly to customers, making them ripe for fear, uncertainty, and doubt.
I had written a glowing blurb about Microsoft, set to publish in last Wednesday’s (cancelled) MidWeek Brief. I praised Microsoft’s four-pronged approach to efficiency, ironically focusing on software efficiency gains in particular. In a bid for transparency, I’ve decided to publish it as-is below, with post-outage commentary added in italics.
Microsoft Using AI to Improve Operational Efficiency
Microsoft is working toward better operational efficiency through software and AI. Its cloud facilities have been 100% carbon neutral since 2012 (thanks in part to renewable energy credits), and the company is committed to using more renewable energy sources.
Technically, they achieved better than 100% carbon neutrality during the outage, if you factor in the renewable energy credits and the whole turning-off-the-data-center thing. (-ed.)
Renewable energy is only part of the story. Microsoft is working on several fronts to deliver IT services with a smaller environmental footprint, focusing on four efficiency buckets: IT operational efficiency, IT equipment efficiency (more efficient chips and designs shared through Open Compute), data center infrastructure efficiency, and renewable electricity.
Software and AI can help reduce power consumption. Microsoft has always touted its know-how when it comes to things like smart software failover.
You’re killing me here, Microsoft. (-ed.)
The ability to spin up and down smartly and on the fly has let Microsoft experiment with facility design, and with alternative power sources like cow waste.
Seriously: if I were stranded on an island with a Microsoft engineer, I’d have complete confidence they’d be able to MacGyver us out of there.
This is literally what I wrote hours before the outage occurred. I’m holding the MacGyver award until a future date. (-ed.)
Azure, Exchange, and SharePoint account for about half the energy consumed in Microsoft data centers. Cloud-based programs to reduce resource consumption have contributed to a 20% global energy reduction across its facilities. Microsoft is on track for a 75% reduction in carbon emissions by 2030, using 2013 as the base year.
Microsoft has renewable energy projects on three continents totaling 1.2 gigawatts of power. Amazon Web Services and Google Cloud both have solar and wind farms. Amazon is getting 40% of its power from these farms and has vowed to get that up to 100%.
In the United States alone, data centers consume about 70 billion kilowatt-hours (kWh) of electricity each year, roughly 1.8 percent of the total electricity consumed in the country. By 2020, energy consumption is expected to hit 73 billion kilowatt-hours, enough to power 6 million homes for a year. Keep in mind that the number would be higher if it weren’t for efficiency improvements already realized.
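As a quick back-of-the-envelope check (my arithmetic, not the report’s), those figures hang together:

```python
# Sanity check of the figures above; numbers are from the text, arithmetic is mine.
projected_kwh_2020 = 73e9     # projected US data center consumption by 2020, kWh/year
homes_powered = 6_000_000     # "enough to power 6 million homes for a year"

kwh_per_home = projected_kwh_2020 / homes_powered
print(f"Implied usage per home: {kwh_per_home:,.0f} kWh/year")
# ~12,167 kWh/year, roughly in line with an average US household (~10,000-11,000 kWh/year)

current_kwh = 70e9            # current US data center consumption, kWh/year
share_of_total = 0.018        # stated share of all US electricity
print(f"Implied total US consumption: {current_kwh / share_of_total / 1e9:,.0f} billion kWh/year")
# ~3,889 billion kWh/year, in the ballpark of actual annual US electricity consumption
```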
AI Changing the Face of Data Center Management
The operational promise of artificial intelligence is massive. We looked at energy efficiency gains above; now we turn to another problem: the seemingly eternal divide between IT and facilities. Infrastructure management that leverages AI and predictive capabilities might be what finally gets IT and facilities on the same page. A holistic approach to data center management is needed.
Data Center Infrastructure Management (DCIM) is evolving to capture the entire picture. The problem is that DCIM is largely purpose-built to monitor a single, fixed facility. As infrastructure grows increasingly distributed, management tooling needs to capture the whole environment and offer predictive capabilities across it.
Rhonda Ascierto of the Uptime Institute identified an emerging software category called DMaaS (data center management as a service). The key feature of DMaaS is its ability to aggregate and analyze large sets of anonymized customer data.
However, DMaaS operates over a wide area network (WAN) in order to capture that holistic picture, which means potential latency problems or service interruptions. Ascierto notes that traditional DCIM still plays an important role, providing on-prem, real-time alarm monitoring for mission-critical infrastructure.
Many data center operators build their own data center management capabilities, largely by cobbling together different pieces. The operational division between IT and facilities means these systems are often built around a single team’s priorities and responsibilities: facility ops focuses on facility metrics, while IT focuses on applications. This setup perpetuates the divide.
Data center management’s most promising feature, predictive capability, will mature alongside artificial intelligence and machine learning. In addition to health snapshots, the ability to predict how changes impact the bigger picture is clutch.
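To make the “predictive” part concrete, here’s a minimal, hypothetical sketch of the kind of trend-based check a DMaaS-style service might run over aggregated telemetry; it doesn’t reflect any particular vendor’s product. It fits a simple linear trend to recent temperature readings and estimates when an alarm threshold would be crossed.

```python
from statistics import mean

def predict_threshold_crossing(readings, interval_min, threshold_c):
    """Fit a simple linear trend to evenly spaced temperature readings (degrees C,
    interval_min minutes apart) and estimate minutes until threshold_c is crossed.
    Returns None if the trend is flat or cooling."""
    n = len(readings)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(readings)
    # Ordinary least-squares slope, in degrees C per reading interval
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, readings)) / \
            sum((x - x_bar) ** 2 for x in xs)
    if slope <= 0:
        return None
    intervals_to_go = (threshold_c - readings[-1]) / slope
    return intervals_to_go * interval_min

# Hypothetical example: inlet temperature sampled every 5 minutes, alarm threshold at 27 C
temps = [22.1, 22.4, 22.9, 23.3, 23.8, 24.4]
minutes = predict_threshold_crossing(temps, interval_min=5, threshold_c=27.0)
print(f"Projected threshold crossing in ~{minutes:.0f} minutes" if minutes else "No warming trend")
```

Real DMaaS offerings lean on far larger anonymized datasets and machine learning models, but the idea is the same: flag the problem before the alarm fires.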
IBM is planning to launch 18 new cloud availability zones
The zones will be across IBM-owned data centers and colocation facilities.
More availability zones mean customers can place isolated cloud deployments across a wider geography. That brings more fault tolerance, better performance within a region, and the ability to keep data within a given country’s borders. That last point is becoming increasingly important in light of the data privacy discussion; the GDPR is driving infrastructure investment strategies in Europe.
IBM has invested over $3B in its cloud business, in keeping with the $1B commitments it has made in the past.
Digital Realty opens third Toronto data center
When providers repurpose buildings, I can’t help but think of the symbolism and smile. Digital Realty acquired the former Toronto Star printing plant and turned it into its third Toronto data center. From print media springs a new industry housing mission-critical infrastructure. Rather than strip out the plant’s history, the new data center honors the past through its aesthetics.
The data center will comprise 23 computer rooms at full build, each ranging from 8,600 to 13,000 square feet and powered by 1-3 megawatts per room. The facility is adjacent to a utility substation, with direct access to low-cost power (“hydro,” in Canadian parlance).
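For a rough sense of scale (my arithmetic from the room figures above, not Digital Realty’s stated totals):

```python
# Rough full-build envelope implied by the room counts above (my arithmetic, not stated totals).
rooms = 23
sqft_per_room = (8_600, 13_000)   # square feet per computer room
mw_per_room = (1, 3)              # megawatts per room

total_sqft = tuple(rooms * s for s in sqft_per_room)
total_mw = tuple(rooms * m for m in mw_per_room)
print(f"Total computer-room area: {total_sqft[0]:,} to {total_sqft[1]:,} sq ft")  # 197,800 to 299,000
print(f"Total critical power: {total_mw[0]} to {total_mw[1]} MW")                 # 23 to 69 MW
```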
DuPont Fabros acquired the building and land in 2016, prior to being acquired itself by Digital Realty.
The Toronto market is home to more than 15,000 technology companies along the Toronto-Waterloo corridor.
There are several instances of industrial properties with good bones being converted into data centers. QTS converted the former Chicago Sun-Times printing plant in 2016. Canadian data center operator eStruxture acquired the Montreal Gazette’s former printing facility for conversion into a 30 MW data center.
Former printing plants and similar industrial properties offer attractive economics. They’re also built to hold and power heavy machinery, providing the foundation a data center needs.
Rackspace Said It’s Now Offering Colocation
DCT saw an interesting story floating out there a couple of weeks ago. Rackspace, which made its name as a high-touch managed hosting provider, announced it was now offering colocation.
Basically, the company acquired Datapipe in 2017, Datapipe had a reseller agreement in place, and Rackspace will continue to resell colo. As far as I can tell, there’s not much more to it.
I thought the announcement was at least strategically interesting: Rackspace is embracing the one-stop IT shop approach.
After a bit of back and forth, I was ghosted by Rackspace. I don’t blame them. I was likely going to bring up a keynote speech Rackspace gave several years ago, the key point of which was: if you see a restaurant that offers Chinese, Mexican, and Greek food, do you think it will be great at any of them?
I’m fascinated by Rackspace, particularly since it went private. The company was something of a trailblazer, embracing cloud early on. Being a public company at the time, it also had to bear the brunt of the “cloud is cannibalizing the core hosting business” discussion.