Humanity’s digital storage needs are constantly increasing. While only 2 zettabytes of data were produced in 2010 (1 zettabyte = one trillion gigabytes), that figure was multiplied by more than 32 in 10 years, reaching 64.2 zettabytes in 2020. And it’s far from over: it could reach 180 zettabytes in 2025, an increase of around 180% in 5 years.
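As a quick back-of-the-envelope check of these growth figures:

```python
# Sanity check on the growth figures cited above (in zettabytes).
data_2010, data_2020, data_2025 = 2, 64.2, 180

growth_2010_2020 = data_2020 / data_2010                 # ~32x over 10 years
increase_2020_2025 = (data_2025 / data_2020 - 1) * 100   # % increase over 5 years

print(f"2010 to 2020: x{growth_2010_2020:.1f}")          # x32.1
print(f"2020 to 2025: +{increase_2020_2025:.0f}%")       # +180%
```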
Storage media have therefore naturally had to evolve, from the magnetic tape of the 1930s to the SSD, becoming ever smaller and more efficient. Until their latest incarnation: DNA, a medium billions of years old.
According to numerous studies, the deoxyribonucleic acid molecule, which carries our genetic information and that of all living organisms on Earth, could be the perfect solution for storing cold data: data that is rarely accessed but considered highly valuable, such as archives.
The principle behind DNA storage is simple: binary digital data (0s and 1s) are converted into nucleotides (the 4 building blocks of DNA: A, C, G & T). The DNA is then synthesized by dedicated machines and stored in an aqueous solution.
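One common scheme maps each pair of bits to one nucleotide. A minimal sketch (the 2-bit mapping shown is illustrative; real encoding schemes add error-correcting codes and avoid problematic sequences such as long runs of identical bases):

```python
# Illustrative 2-bit-per-nucleotide mapping; real encoders add error
# correction and avoid long homopolymer runs.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a strand of A/C/G/T, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)           # CAGACGGC
print(decode(strand))   # b'Hi'
```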
The benefits of DNA over current storage methods are quite compelling:
- Solidity and durability: DNA can withstand extreme environmental conditions, whereas current physical media are much more fragile. When stored in the proper environment, DNA can still be read even after millions of years.
- Energy efficiency: data centers today consume 2% of global electricity, whereas DNA, once synthesized, needs no power to preserve its contents.
- Small size: this is one of the most fascinating properties of DNA. Data centers occupy an ever-increasing footprint of 167 km² worldwide. By contrast, DNA has an exceptional capacity to densify information: while the nucleus of a cell in our body measures less than 10 micrometers, the DNA it contains would stretch to almost 2 meters. Storage has evolved to the point where a single SD card holds more than 100 DVDs and a USB drive holds 2 years of music, but with DNA you could fit all the world’s data in a space the size of a shoe box.
The idea of using DNA as a storage medium is not entirely new: Richard Feynman, Nobel Prize winner in Physics, formulated it as early as 1959. But it was not until 2012 that the first technical tests were carried out by teams at Harvard. Today, initiatives to make this technology feasible are flourishing: from start-ups to large companies to university research groups, many organizations are working on it. Major technological leaps have been made, widening the field of possibilities.
But many obstacles remain before this storage method can be industrialized: production costs and processing times are still too high for industrial use. We will probably have to wait until 2030 to see the impact of this promising technology on our lives…
As DaaS (Desktop as a Service) is set to become widespread, the tech giants are already thinking about the workspace of tomorrow.
“Imagine. You are sitting in Starbucks sipping your coffee, and with a snap of your fingers, your work screen appears in front of your eyes, just as you left it at home. Rather than calling a colleague when you need some help, the colleague teleports to you, sees your documents, stands by your side to help you, then disappears in an instant. This is what we call the infinite office.”
This is what Mark Zuckerberg said last July in a podcast with “The Verge”, presenting his new flagship concept: the Metaverse.
A few months later, building on the success of its Teams videoconferencing application, Microsoft announced new features to be added to it via Mesh, its 3D collaboration platform: avatars, integration of participants into a shared virtual setting, greater attention to non-verbal interactions, easier creative brainstorming-style teamwork, automation, etc. Microsoft itself calls this tool a “Metaverse platform.” And that’s not all: Apple, Alibaba & Tencent have also invested in this very popular technology.
Is the Metaverse going to be the workplace of the future, much more than just an ultra-sophisticated game world? For the moment, of course, it is only about virtual work meetings, which is different from the virtual desktop concept behind Virtual Desktop Infrastructure (VDI). In the first case, it is the place where meetings happen (i.e. the office or the meeting room); in the second, it is a virtual computer desktop backed by cloud and remote resources.
However, we are very close to the time when VDI will let you connect to your professional metaverse: a virtual desktop giving access to applications that simulate physical workplaces in augmented reality…
The ability to teleport into a virtual world and meet your colleagues from your living room is certainly puzzling. But the potential long-term consequences for work habits are even more so. Below are a few questions that arise once you understand the Metaverse:
- Will telework be 100% generalized for everyone?
- Is business real estate doomed to disappear, with physical offices becoming almost useless?
- Will we be able to create CEO-like virtual offices with a 360° view of the Manhattan skyline, from a small apartment in the suburbs of Paris?
- Will we go years without shaking hands with colleagues, knowing only their avatars?
- Will we be subjected to ultra-sophisticated surveillance systems controlling keystrokes on the keyboard and mouse movements?
- What will become of break rooms and coffee machines, where people usually mingle, chat, and share secrets and ideas?
A fascinating concept that goes well beyond corporate IT and raises profound societal questions. Blog to be continued…
What are the key performance indicators for a multi-cloud environment?
“Private cloud or public cloud?” we wondered a few years ago. Now obsolete, this question has been replaced by “How do we manage a multitude of different clouds?”
First of all, a basic distinction: multi-cloud combines different public cloud providers: Amazon Web Services (AWS), Microsoft Azure, IBM/Red Hat, Google Cloud Platform… Hybrid cloud combines private cloud (i.e. private hosting, on the company’s own site) with public cloud, through one or more providers.
In fact, multi-cloud and hybrid environments have become, if not the norm, at least an unavoidable trend for the next few years. Companies increasingly host their data on different platforms: according to Gartner, three-quarters of medium and large companies will have adopted this type of strategy by the end of the year, while IDC expects 90% of companies around the world to use multiple platforms and hosting providers by 2022.
Knowing how to manage these hybrid environments, and especially how to measure their performance, therefore becomes central for CIOs. Financial indicators obviously come into play, but technical parameters must also be tracked, in order to have a unified view of these sometimes very heterogeneous environments.
Watch out for hidden costs here! In addition to subscription and license costs, you should also include expenses related to maintenance, support, data storage, networking, and training and change management within teams.
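To make hidden costs visible, it can help to aggregate every category per provider rather than tracking subscriptions alone. A minimal sketch, with entirely hypothetical figures and provider names:

```python
# Hypothetical monthly cost breakdown (EUR) per cloud provider; the
# categories mirror those listed above, and the figures are illustrative only.
costs = {
    "provider_a": {"subscriptions": 12000, "licenses": 3000, "support": 1500,
                   "storage": 4200, "network": 900, "training": 600},
    "provider_b": {"subscriptions": 8000, "licenses": 2500, "support": 1000,
                   "storage": 2100, "network": 700, "training": 400},
}

for provider, breakdown in costs.items():
    total = sum(breakdown.values())
    # "Hidden" = everything beyond the obvious subscription and license lines.
    hidden = total - breakdown["subscriptions"] - breakdown["licenses"]
    print(f"{provider}: total {total} EUR, of which {hidden} EUR "
          f"({hidden / total:.0%}) beyond subscriptions and licenses")
```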
Security and network indicators
Measuring security incidents per team and per month, as well as the resources mobilized to handle them, provides an overview of the security state of the infrastructure. Measuring latency and packet loss likewise provides insight into the condition of the network, as do response time, bandwidth, and throughput measurements.
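Latency and packet-loss indicators can be derived from simple periodic probes. A minimal sketch, assuming round-trip-time samples have already been collected (the figures below are made up):

```python
import statistics

# Hypothetical round-trip-time probes in milliseconds; None marks a lost packet.
samples = [12.1, 11.8, None, 13.4, 12.6, None, 11.9, 12.2]

received = [s for s in samples if s is not None]
packet_loss = 1 - len(received) / len(samples)

print(f"packet loss: {packet_loss:.0%}")                   # 25%
print(f"latency avg: {statistics.mean(received):.1f} ms")  # 12.3 ms
print(f"latency max: {max(received):.1f} ms")              # 13.4 ms
```

In practice these numbers would feed a monitoring dashboard per provider and per link, so that degradations in one cloud are visible next to the others.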
Many companies already measure application performance and the quality of the user experience. But it is also crucial to consider the infrastructure the application runs on, so as to have a complete map of the application infrastructure. For large application catalogs, Application Performance Management (APM) solutions are essential.
Assessing the security and cloud standards imposed by industry, institutions, or customers (ISO international standards, European regulations such as the GDPR, ANSSI guidelines at the French national level, etc.) is sometimes a tedious exercise, but a necessary one to judge your compliance with industry standards.
If you wish to be supported in the implementation of your multi-cloud or hybrid cloud project, contact us.
When undertaking a new VDI project, it is extremely important to understand your environment, your users, your budget, and the politics surrounding your organization. There are a few things that must be done which can have a drastic impact on the success of any VDI project. These include:
#1 Understanding your users
You may think you know what your users are using, but you would be surprised. When audits are run, companies often discover apps in use for which users don’t have enough resources, as well as apps the company is paying to license that users never touch. It is also important to understand your users from a Windows user profile perspective and truly understand the desktop experience that needs to be matched, if not improved, through VDI.
#2 Make sure that users get the resources they need
When moving to VDI, you want to be sure that users will have a smooth experience. Just like on a physical PC, each user’s VM will need an allocation of CPU, memory, storage, and possibly GPU. This requires careful server and storage choices. Many companies have moved towards hyperconverged infrastructure for optimal performance and the ability to scale with ease. All-flash storage is highly recommended.
#3 Graphics-intensive apps
Some apps such as AutoCAD or Adobe Premiere cannot run without a GPU on the server. This is usually a significant investment in a VDI solution and must be taken into consideration. There have been great advancements in this area, making VDI viable for applications that used to be excluded from it.
#4 Ensure proper connectivity
If you have users in remote locations, or users with a poor internet connection, the result may be a poor experience. You will need to test from those locations to ensure the experience is satisfactory; otherwise you will have very upset users on your hands.
#5 Understand all of the peripherals
From printers to scanners to card readers, you have to ensure that everything will work with your VDI solution. Certain peripherals may not play well with certain VDI solutions, or may require a specific local operating system due to driver availability.
#6 Choose easy-to-manage, secure endpoints
Linux-based thin or zero clients tend to be a great option to get away from the complicated day-to-day management of Windows endpoints while gaining security. Keep Windows only on the VMs and opt for a more secure and lightweight OS on the endpoint. Some thin clients offer customization options, for example if certain applications need to run on the local desktop only.
#7 Printing and Scanning
One of the areas where many VDI projects stumble is the handling of printing and scanning. These consume bandwidth and can adversely impact the user experience. It is also not uncommon to run into driver problems, or difficulty locating the specific printers users need. Solutions on the market such as ZeePrint address these challenges, improve the user experience, and make management much simpler.
#8 Ensure proper profiles and folder redirection
With VDI you will need to opt for roaming or mandatory profiles, each of which has its pros and cons. You will need to choose the right approach and stay organized so as not to lose track of the policies in place. VDI vendors provide tools to help with this, and supplementary products on the market further simplify management of the overall user desktop environment.
#9 Use a free sizing calculator
There are several VDI calculators out there to help. These tools of course only offer a limited approach that overlooks many factors, but they can give you a first estimate, to be refined later. Industry veteran André Lebovici has created a very useful one.
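The core of such a calculator is simple arithmetic over per-user resource needs. A minimal sketch, where every per-user and per-host figure is an illustrative assumption to be replaced by numbers from your own audit:

```python
import math

# Back-of-the-envelope VDI sizing; all figures below are illustrative
# assumptions, not recommendations.
users = 500
per_user = {"vcpu": 2, "ram_gb": 8, "storage_gb": 60}
host = {"pcpu_threads": 128, "ram_gb": 768, "cpu_overcommit": 4}  # 4:1 vCPU:pCPU

hosts_for_cpu = math.ceil(users * per_user["vcpu"]
                          / (host["pcpu_threads"] * host["cpu_overcommit"]))
hosts_for_ram = math.ceil(users * per_user["ram_gb"] / host["ram_gb"])
hosts = max(hosts_for_cpu, hosts_for_ram) + 1  # +1 host for N+1 failover

print(f"hosts needed (incl. N+1): {hosts}")                            # 7
print(f"total storage: {users * per_user['storage_gb'] / 1000:.1f} TB")  # 30.0 TB
```

Note that RAM, not CPU, is the binding constraint in this example, which is a common outcome in VDI sizing; a real exercise would also factor in IOPS, GPU, and boot-storm headroom.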
Courtesy of ZeeTim.
Drastic changes in strategy are challenging: either it is a complete success or a complete failure. One of those crazy bets was the creation of Amazon Web Services by e-commerce giant Amazon. Take a look back at some of the most interesting anecdotes about the birth of a cloud giant.
In 1999, Sun Microsystems was the benchmark for storage solutions. Since data storage was critical for a business like Amazon, the company had not skimped on costs, investing a fortune in Sun servers.
However, after 1999 came the year 2000, and the bursting of the dot-com bubble. Many start-ups ruined by the crash put their Sun servers up for sale on eBay, often for a pittance.
Faced with the sudden and unexpected depreciation of Sun servers, Amazon could simply have renegotiated the operating costs of its storage solutions, but its leaders made a far more unexpected and radical choice, a complete paradigm shift: investing in HP servers running Linux.
It sounds obvious today, but at the time it was a daring choice, as Amazon was still a small company. Linux had been born only in 1991 and was not yet widely trusted.
Integrating this new system forced Amazon to halt all development on its e-commerce platform for nearly a year, hurting revenues in the middle of a recession. With the added shockwave of 9/11, Amazon came close to bankruptcy, as its director of business development at the time, Dan Rose, has admitted.
But once the transition was complete, Linux reduced infrastructure operating costs by 80% while stabilizing the website beyond all expectations. And that’s not all: the Linux infrastructure also allowed great scalability. Jeff Bezos had the intuition to decouple and partition storage, so that different teams could work independently of each other, each using their own dedicated space…
This idea of pooling resources was to see another development. Like most retail activities, Amazon’s business is highly seasonal: its servers were heavily used in November and December but sat largely idle for the rest of the year. Why not lease this excess server capacity to other companies in need? Amazon Web Services was born…
The creation of Amazon Web Services marked a break in the history of computing, making storage a simple resource instead of a heavy investment, thus allowing the explosion of startups and the emergence of an entire ecosystem marked by risk taking and innovation.
Today, Amazon Web Services generates the majority of Amazon’s operating income and is the group’s locomotive in terms of innovation: AI services, machine learning, the new ARM Graviton processor for its servers, serverless instances, etc. Its biggest competitor, however, remains Microsoft Azure, which particularly stood out during the pandemic. In new technologies as anywhere else, being the first to have the right idea does not guarantee keeping your leadership in the long term…
The issue of a sovereign European cloud is not new; we remember the inglorious fates of CloudWatt and Numergy a few years ago. So the creation of the Gaia-X project, carried by the Franco-German couple, was scrutinized from the start.
With the ambition of establishing itself as a European alternative to Azure, AWS & Google Cloud, this consortium of some 300 companies held its first plenary session on January 22, 2021.
While on paper the initiative looked great, the project began to arouse misunderstanding and criticism when it started admitting American and Asian players: Microsoft, Google, Alibaba, Huawei…
Gaia-X spokespersons, however, seek to reassure, recalling the principles that will apply to all members, including non-European ones: compliance with European governance rules, a board of directors made up solely of companies headquartered in Europe…
Yet by opening the door to the very players it was meant to counter, hasn’t the Gaia-X project let the wolf into the fold? Google makes no secret of its opposition to European data protection principles, the Chinese government has reaffirmed its grip on Alibaba’s management, and Palantir’s data processing raises many questions.
While the roadmap announced on January 22, 2021 is very ambitious, it remains to be seen whether, in an already mature digital world, Europe can create its own path and offer a real alternative to the American and Asian giants.