We are experiencing a data explosion. Today, we generate more data in an hour than we did in the entire year 2000, and more data will be created in the next three years than in the past 30. In 2020, we saw the signs of this growing curve as researchers, pharmaceutical companies, governments and health institutions directed resources toward developing vaccines, new treatments and other ways to help the world stay healthy during the pandemic. These efforts required generating and processing enormous amounts of data. Whether in healthcare or any other field, the only realistic way to deal with all the information we are producing is to use processing and aggregation tools along with machine learning (ML) models that help us make sense of it.
Historically, ML has been a computationally heavy workload that could run only on the most powerful hardware. As software and silicon improve, that is starting to change. And as computing moves closer to the edge, what we will see next year is an acceleration in the adoption of ML models across industries and governments. In manufacturing, ML will be built into production lines to detect anomalies in real time; in agriculture, models will help manage valuable resources, such as soil and water, more intelligently.
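To make the real-time anomaly detection mentioned above concrete, here is a minimal sketch using a rolling z-score over a stream of sensor readings. All names (`detect_anomalies`, the temperature data) are hypothetical illustrations, not a specific product's API; production systems would typically use trained ML models rather than this simple statistical rule.

```python
# Hypothetical sketch: flag production-line sensor readings that deviate
# sharply from the recent window, using a rolling z-score.
from collections import deque

def detect_anomalies(readings, window=10, threshold=3.0):
    """Return a list of (index, value) pairs flagged as anomalous."""
    recent = deque(maxlen=window)  # sliding window of recent readings
    anomalies = []
    for i, x in enumerate(readings):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((r - mean) ** 2 for r in recent) / len(recent)
            std = var ** 0.5
            # Flag readings more than `threshold` standard deviations away
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append((i, x))
        recent.append(x)
    return anomalies

# Steady temperature readings with one obvious spike at index 12
data = [20.0, 20.1, 19.9, 20.2, 20.0, 19.8, 20.1, 20.0, 19.9,
        20.1, 20.0, 20.2, 35.0, 20.1]
print(detect_anomalies(data))  # → [(12, 35.0)]
```

The same pattern scales down well to edge devices, since it needs only a small fixed-size buffer per sensor.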
We will also see an explosion in machine-to-machine (M2M) connections. In 2018, according to Cisco's Annual Internet Report, only 33% of connections on the internet were M2M. If you have a voice assistant, rely on any smart home device, or are following the rapid evolution of cars and trucks, you can imagine what lies ahead: a proliferation of sensors and devices connected to the cloud and to each other. By 2021, M2M connections are expected to account for 50% of all connections.