
Dealing with Really Big Data – Hardware or software?

9th January 2017

Rodrigo Parreira looks at the model that data behemoths like Google, Facebook and Amazon adopted to cope with a requirement to process and store huge quantities of big data – a model that is on the verge of going mainstream – and asks “how do they do it?”

In the early 2000s, IT giants like Google, Facebook, Amazon and Twitter started to churn through gargantuan amounts of data, a trend that has only intensified with time. Google's search engine alone maintains a continuously updated copy of the public web on its own servers, replicated many times over for redundancy.

Their ability to process and store vast quantities of information raises the question – how on earth do they do it? What sort of infrastructure does an organisation of that stature need for such heavy lifting? How enormous do their data centres need to be? How much eye-watering capital expenditure are we talking about?

Changing the name of the big data game

The answer is surprisingly modest – thanks to a shift to services. These behemoths develop their applications in-house to run on generic, low-cost servers. Virtualised servers, firewalls and networked storage are implemented in software on white boxes – cheap, generic and highly commoditised hardware.
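To make the white-box idea concrete: with an open-source hypervisor such as KVM, a server that once required dedicated hardware can be defined entirely in software. Below is a minimal sketch of a libvirt/KVM guest definition; the guest name, disk path and bridge name are illustrative assumptions, not details from the article.

```xml
<!-- Hypothetical libvirt domain definition: one virtual server
     running on a commodity x86 "white box".
     The name, disk path and bridge name are placeholders. -->
<domain type='kvm'>
  <name>whitebox-guest01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- Virtual disk backed by an ordinary file on the host -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- Virtual NIC attached to a host bridge -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

Defined this way, the "server" is just a file: it can be copied, versioned and redeployed across any number of identical low-cost machines, which is precisely what lets the model scale.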

For well over a decade this approach has been the preserve of the internet giants, but maybe not for much longer.

Topping the adoption bell curve

The model has been so successful that it's on the verge of going mainstream. Large corporations – banks, manufacturers, telecoms operators and the like – have begun to ask themselves: if it works for the internet giants, why can't it work for our big data?

Their thinking has been buoyed by the growing maturity of the open-source market, manifest in numerous initiatives over the past 10 years, such as KVM, OpenStack, OpenNebula and OpenDaylight, and in the thousands of developers worldwide who have mobilised in virtual communities.

This is the story of technology today: the shift to services. The ubiquitous move to software-as-a-service, infrastructure-as-a-service – indeed, any technology delivered as a service – has redefined the way we use hardware and software.

It’s easy to see why. As the lines between software and hardware have blurred, services have essentially replaced physical infrastructure with a far more flexible, on-demand virtual environment – and set businesses on the greatest transformative journey of their lifetime.

Today, then, the question for some may still be "Hardware or software?" For the forward thinkers, however, the answer is "Services."

Rodrigo Parreira

About Rodrigo Parreira

After gaining a B.A. in Physics and a Ph.D. in Mathematical Physics at the University of São Paulo, Rodrigo Parreira began his career as a researcher and university professor at Princeton University in the United States, one of the eight Ivy League universities and among the most prestigious in the world.

In the corporate sector, he worked in the telecommunications practice at McKinsey & Co. before joining Cluster Consulting, a specialised strategy consultancy based in Barcelona, Spain. In 2000, Parreira joined Promon at Promon IP and, over a nine-year period, served as Business Development Director of Promon Engenharia, CTO of Promon Technologia, and Executive Director of PromonLogicalis. Between 2007 and 2008 he was also a member of the Executive Board of the Promon Group.

In March 2009, Parreira was appointed Chief Operating Officer (COO) of Logicalis for the Southern Cone (Argentina, Uruguay, Paraguay, Chile, Peru, Ecuador and Bolivia), before assuming the position of CEO of Logicalis for the Southern Cone region in 2010. Today, he is CEO of Logicalis Latin America, with responsibility for corporate growth and regional integration across Latin America.
