Data is sexy, data storage isn’t. Does anyone even care what network storage is these days?
These days most media coverage focuses on data itself – what it is, how much is being created, the effort spent protecting (and hacking) it, and how regulation struggles to find a middle ground between data protection and the freedom to share on social media.
There is still interest in the evolution of storage media – from magnetized bits to quantum technology.
What has dropped off the radar, though, is network storage. You knew it as direct-attached storage, network-attached storage, and storage area networks. These days, about the closest thing to network storage is cloud storage – and even then, I can get into technical arguments with storage specialists, so I’d rather not.

Instead, I had an interesting Q&A with Chad Chiang, senior product manager at Synology. Let’s face it, network storage is still important because without it, there is no Netflix, Spotify, Facebook, Microsoft Azure, Google or AWS.
Do you foresee a blurring of lines between enterprise storage and consumer storage solutions – and where does the line between SAN and NAS stop?
The line between enterprise and consumer solutions is blurring. As internet technologies like TCP/IP and Ethernet proliferate, some Storage Area Network (SAN) products are making the transition from Fibre Channel to the same IP-based approach that Network-Attached Storage (NAS) uses. Similarly, some NAS solutions can now be connected to Fibre Channel infrastructure.
Organisations are always on the lookout for affordable and accessible storage and backup solutions, and options that integrate technologies and features that were previously seen as costly and unattainable are very attractive to business leaders.
Will businesses move on from Big Data to using ML or deep learning to gain better insights into their data? How automated does Synology see analytics becoming?
While the use of machine learning (ML) to derive insights from data is becoming a strong trend among businesses, our priority as a data management company is to protect our users’ data. That’s why we are very cautious and prudent about using machine learning or deep learning to study and analyse data. That said, we use deep learning to predict and prevent system anomalies.
Have approaches to backup changed? In a multi-cloud environment, what is the best way to keep your data safe?
Many businesses now encrypt data locally on NAS solutions before it is uploaded or transferred to the cloud, so no one can read the data except the permitted NAS user(s).
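The workflow described here can be sketched in a few lines. The toy keystream cipher below (SHA-256 in counter mode) is purely a stand-in for the AES-256 encryption a real NAS would apply, and the function names are illustrative – the point is that the key stays local and only ciphertext ever reaches the cloud provider.

```python
# Sketch of client-side "encrypt before upload": data is encrypted on the
# NAS or local machine, and only ciphertext is sent to the cloud.
# WARNING: this toy cipher is for illustration only, not real cryptography.
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + nonce + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce per file
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = os.urandom(32)            # stays on the NAS, never uploaded
document = b"quarterly financials"
blob = encrypt(key, document)   # this blob is what goes to the cloud
assert decrypt(key, blob) == document
```

Because the key never leaves the local device, the cloud provider (or anyone who breaches it) holds only unreadable ciphertext.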
Regardless of whether it is a multi-cloud environment, the best way to keep data safe is to follow the Backup 3-2-1 rule, which is keeping at least three (3) copies of your data and storing two (2) backup copies on different storage media, with one (1) of them located offsite.
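The 3-2-1 rule is mechanical enough to check in code. A minimal sketch, assuming a hypothetical inventory of backup copies tagged by media type and location:

```python
# 3-2-1 backup check: at least three copies of the data, on at least two
# kinds of storage media, with at least one copy located offsite.
copies = [
    {"location": "onsite",  "media": "nas"},      # primary copy on the NAS
    {"location": "onsite",  "media": "usb_hdd"},  # second copy, different media
    {"location": "offsite", "media": "cloud"},    # third copy, offsite
]

def satisfies_321(copies: list) -> bool:
    enough_copies = len(copies) >= 3
    two_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["location"] == "offsite" for c in copies)
    return enough_copies and two_media and one_offsite

assert satisfies_321(copies)
assert not satisfies_321(copies[:2])  # only two copies fails the rule
```

The field names and media labels here are made up for the sketch; the three boolean checks map directly onto the "3", "2", and "1" of the rule.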
Amid today’s unrelenting cyberattacks, hypergrowth of data, and evolving compliance requirements, what best practices should enterprises observe?
When it comes to data storage and management, the best practice is to create backups regularly, and to be careful not to let this practice lapse through negligence or complacency. Ransomware remains a prevalent threat to individuals and organisations around the world, yet when IT systems administrators are constrained by a limited budget, data backup may not be among their top priorities.
One of the other challenges organisations face is providing effective collaboration tools and file access for employees operating from different offices and locations. Remotely accessing a traditional Windows file server is typically only possible over a VPN connection.
For many organisations, file sharing is possible only via email or FTP. That’s why some organisations have deployed public cloud storage and collaboration tools alongside their Windows file server, although business leaders still need to look to third-party solutions to overcome these issues. With the right data management solutions, organisations can enjoy the convenience of cloud accessibility while maintaining strong data governance and privacy.
We’ve seen the deployment of new storage solutions like all-flash arrays and hyperconverged infrastructure – do you foresee these becoming more popular in the future?
All-flash arrays and hyperconverged infrastructure are up-and-coming storage solutions. As SSD prices have dropped dramatically, we think the deployment of SSD-based storage no longer needs to be limited to enterprises on cost grounds. We believe more SMBs can now afford all-flash storage solutions as well.
How should IT architecture be designed to take into consideration the varying benefits of different technologies, for example cloud and edge computing, without unnecessary additional spend?
Edge computing allows data storage and computing to happen closer to the data source, or even offline. This drastically improves response times and reduces bandwidth usage, making it a very suitable way to process the Big Data that has become a business priority for many organisations.
However, not all data needs to be processed and analysed immediately. Businesses first need to examine their IT deployment and data storage requirements – the types of data to be stored, the proportion of cold to hot data, and the IOPS and latency they can tolerate – before determining whether their existing IT infrastructure needs to be replaced or revamped.
Existing IT infrastructure does not need to be replaced or revamped if it is not running into any of the issues mentioned above. Edge computing is still a relatively new trend, and not all organisations would necessarily see a strong ROI from the investment; rather than jumping on hot new technological trends, organisations should assess their business needs and priorities first.
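The assessment described above can be reduced to a rough rule of thumb. The function and thresholds below are entirely hypothetical – a sketch of how latency budget and data temperature might feed a placement decision, not anything Synology prescribes:

```python
# Hypothetical rule-of-thumb helper for the edge-vs-cloud assessment:
# given a workload's latency budget and the share of "hot" (frequently
# accessed) data, suggest where processing should live.
# All thresholds are illustrative only.
def suggest_placement(latency_budget_ms: float, hot_data_ratio: float) -> str:
    if latency_budget_ms < 10 and hot_data_ratio > 0.5:
        return "edge"                     # latency-critical, mostly hot data
    if hot_data_ratio < 0.2:
        return "cloud-archive"            # mostly cold data; optimise for cost
    return "existing-infrastructure"      # no pressing reason to revamp

assert suggest_placement(5, 0.8) == "edge"
assert suggest_placement(200, 0.1) == "cloud-archive"
assert suggest_placement(50, 0.4) == "existing-infrastructure"
```

The fall-through default mirrors the advice in the answer: if the current setup hits none of the pain points, leave it alone.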