Planned downtime: is it a thing of the past?

    By Matt Hurford, national systems engineering manager, NetApp Australia and New Zealand

    As with many new technologies, few people in virtualization’s early days could see how important it would become. In retrospect, it is abundantly clear that virtualization was an incredibly significant development in enterprise IT, going on to completely transform how we architect, provision and manage our application platforms.

    IT managers, CIOs and even CEOs are constantly striving to ‘do more with less’, and this mantra was the central premise behind the virtualization revolution. However, while it’s easy to focus solely on virtual servers when we think about virtualization, ‘do more with less’ also applies to other areas of IT.

    We tend to associate virtualization exclusively with the ‘compute’ function, but at NetApp we are interested in learning from server virtualization and applying the motto of ‘do more with less’ to the world of storage and data management.

    When it comes to storage, infrastructure architects have three key demands:

    • Greater availability or ‘uptime’
    • Better agility to move various workloads around their hybrid cloud, and
    • Increased return on investment, by being able to do more with their existing hardware.

    The answer to these challenges lies in the storage operating system, not necessarily in the hardware itself. This is why NetApp has long been committed to its investments in software, and why new ‘software-defined storage’ technologies now make it possible to deploy virtual storage, providing secure multi-tenant environments that run on existing physical storage hardware.

    This does not mean there is a software-only answer; hardware is still important and relevant, and increasing focus must be applied to ensuring businesses use hardware that is optimised for the task of data management in the cloud. Just as virtualization did not eliminate the need for high-performance servers, ‘software-defined storage’ has not rendered hardware irrelevant either.

    In a practical sense, software-defined storage means a business can easily separate data held by its various functions, from finance to HR, into multiple secure environments, without losing the ability to easily and non-disruptively move that data around. Or, for a cloud service provider, it means being able to store various customers’ data in different, secure environments within the same shared hardware. This approach drives substantially more value from the investment already made in hardware.
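    To make the idea concrete, the separation and mobility described above can be sketched in a few lines of code. This is a minimal illustration only; the class and method names below are hypothetical and do not correspond to any real NetApp product or API.

```python
# Illustrative sketch of multi-tenant logical separation on shared storage.
# All names here (SharedPool, provision, migrate, ...) are hypothetical,
# not a real storage vendor API.

class SharedPool:
    """One physical pool, carved into isolated per-tenant volumes."""

    def __init__(self):
        self._volumes = {}  # tenant name -> that tenant's private key/value volume

    def provision(self, tenant):
        # Each tenant gets its own namespace on the same underlying hardware.
        self._volumes.setdefault(tenant, {})

    def write(self, tenant, key, value):
        self._volumes[tenant][key] = value

    def read(self, tenant, key):
        # A tenant can only ever address its own volume, never a neighbour's.
        return self._volumes[tenant][key]

    def migrate(self, tenant, target):
        # Move one tenant's volume to another pool; other tenants are untouched.
        target._volumes[tenant] = self._volumes.pop(tenant)


# Two departments share one pool but cannot see each other's data.
pool_a = SharedPool()
pool_b = SharedPool()
for dept in ("finance", "hr"):
    pool_a.provision(dept)

pool_a.write("finance", "ledger", "Q3 figures")
pool_a.write("hr", "roster", "staff list")

# Workload mobility: finance moves to another pool; hr carries on unaffected.
pool_a.migrate("finance", pool_b)
```

    The point of the sketch is the shape of the abstraction: isolation comes from the logical layer, while mobility is a metadata operation rather than a disruption to every tenant on the hardware.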

    At NetApp, we see this trend going a step further, with storage operating systems running as a Virtual Storage Appliance. This will mean businesses can reap the benefits of the ‘smarts’ behind the storage regardless of which vendor’s hardware they are running, taking us another step closer to truly software-defined storage.

    For businesses on the cusp of this important IT transformation, it is crucial to understand that software-defined storage is a strategy, not a product, and to be successful it must be deeply rooted in the organisation’s overall IT approach. With one unified OS and cohesive management tools, true software-defined storage will not serve as yet another management layer on top of complex storage infrastructure, but will simplify and streamline data management to enable IT to really ‘do more with less’.