The Invisible Power of Big Tech


Today, reports from GSMA, DataReportal, and the ITU indicate that approximately 5 billion people own a smartphone connected to the internet, and roughly the same number use at least one social network. That represents nearly 60% of the world’s population. According to Soax Research and DataReportal, the average daily time spent on social media is around 2 hours and 30 minutes.

Most people assume that social media is free, paid for by advertising. A closer analysis, however, quickly reveals that social platforms use people far more aggressively than people use them.

At first, the data left behind by users in searches was used solely to improve the service — a reinvestment cycle centered around the user experience. But under pressure for exponential profits, Google discovered behavioral surplus: data that exceeded what was necessary to improve the service and could instead be processed by “machine intelligence” to create prediction products.

Today, an enormous amount of data is collected during the use of these platforms, including:

  • accessed links
  • watch time
  • posts
  • private chats
  • likes
  • pauses in videos
  • recurring topics
  • login schedules
  • locations
  • biometric data

All of this is collected, stored, and processed to build mathematical profiles capable of predicting each user’s behavior.
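
As a purely illustrative sketch (every event name, field, and number below is invented), the mechanism can be shown in miniature: raw interaction events are aggregated into a per-user behavioral profile, from which even a crude "dominant interest" can already be read off. Real systems operate on billions of events and far richer features, but the principle is the same.

```python
from collections import Counter, defaultdict

# Hypothetical event log: each entry is one interaction captured by the platform.
events = [
    {"user": "u1", "type": "watch", "topic": "politics", "seconds": 240},
    {"user": "u1", "type": "like",  "topic": "fitness",  "seconds": 0},
    {"user": "u1", "type": "pause", "topic": "politics", "seconds": 12},
    {"user": "u1", "type": "watch", "topic": "politics", "seconds": 600},
]

def build_profiles(events):
    """Aggregate raw interaction events into per-user behavioral profiles."""
    profiles = defaultdict(lambda: {"topic_time": Counter(), "actions": Counter()})
    for e in events:
        p = profiles[e["user"]]
        p["topic_time"][e["topic"]] += e["seconds"]  # attention spent per topic
        p["actions"][e["type"]] += 1                 # mix of interaction types
    return profiles

profiles = build_profiles(events)
# The dominant topic serves as a crude stand-in for a "predicted interest".
print(profiles["u1"]["topic_time"].most_common(1))  # [('politics', 852)]
```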

These datasets and behavioral models are used in countless ways and for purposes that go far beyond simply pushing advertisements and increasing corporate profits.


“If You’re Not Paying, You’re the Product”

While people use social media for entertainment, communication, and information, the platforms use people by collecting massive amounts of behavioral data for hours every single day. That data is processed into analytical profiles capable of mapping:

  • political interests
  • emotional vulnerabilities
  • consumption patterns
  • addictions
  • personal relationships
  • social circles
  • frequently visited places
  • access schedules and screen time
  • personal problems
  • attention span
  • health-related issues
  • and much more

For all of this to work, users do not need to explicitly reveal these details. The system identifies patterns and compares them against billions of existing profiles for which it already possesses extensive behavioral data.
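
A minimal, purely hypothetical sketch of that comparison, assuming profiles have been reduced to topic-attention vectors (every label and number is invented): the system never needs a user to declare an attribute; it infers it from whichever known profiles the user most resembles.

```python
import math

# Hypothetical behavioral vectors: share of attention per topic.
# The labels ("runner", "new_parent") come from *other* users, not from u_new.
reference_profiles = {
    "runner":     {"fitness": 0.7, "nutrition": 0.2, "news": 0.1},
    "new_parent": {"parenting": 0.6, "health": 0.3, "news": 0.1},
}
u_new = {"fitness": 0.6, "nutrition": 0.3, "news": 0.1}  # never stated they run

def cosine(a, b):
    """Cosine similarity between two sparse topic-attention vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The most similar labeled profile becomes an inferred attribute of u_new.
best = max(reference_profiles, key=lambda label: cosine(u_new, reference_profiles[label]))
print(best)  # "runner", inferred rather than disclosed
```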

Once these models become sufficiently populated and refined, they create an asset that previously barely existed: guaranteed outcomes for clients.

When platforms can predict the behavior of entire groups with a high degree of accuracy, it becomes relatively easy to push a product, amplify an idea, massively increase sales, or even influence the course of real-world events with just a few clicks. The algorithm knows who should see something, when they should see it, how it should be presented, and for how long in order to produce the desired outcome.

At their core, these platforms pursue three objectives:

  • maximize engagement
  • maximize platform growth
  • maximize advertising revenue

The algorithm has already learned that, to achieve those goals, it does not need to remain attached to truth: studies of social networks have found that false news can spread up to six times faster than factual information. Nor does it need balance, since extremism and polarization generate significantly more engagement.

What the algorithm truly needs is to deliver the right dose of dopamine in the right format, ensuring that users remain inside the platform and keep the machinery running.

Human beings have been transformed into raw material, much like oil, livestock, or minerals. Every human action becomes data. Every piece of data helps build a personal model. Every model becomes increasingly capable of predicting the behavior of individuals and the groups to which they belong.


Personalized Worlds

One of the most critical aspects of the relationship between individuals and social networks is that, in practice, every user inhabits a personalized reality assembled specifically for them, based on what they are most likely to consume, even if they are not consciously aware of it.

Algorithms use the massive amount of collected data to understand users’ preferences and aversions, then construct content flows designed to keep them increasingly attached to the platform.

The news shown to a conservative is not the same as the news shown to a socialist. And even when the same subject appears, it is delivered differently, through distinct sources and carefully adapted approaches designed to capture attention based on previous content that successfully achieved the same result.

This is how “filter bubbles” emerge: virtual fragments of reality optimized to maximize user retention.
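
A toy ranking function (topics, weights, and scores invented for illustration) is enough to show the feedback loop behind such bubbles: when candidate items are ordered purely by predicted engagement, whatever the user already prefers keeps winning, and every new interaction feeds that preference back into the next ranking.

```python
# Toy illustration of why retention-optimized ranking narrows what a user sees.
# All items, affinities, and scores are invented; real rankers use learned models.
candidate_items = [
    {"id": 1, "topic": "politics_left",  "quality": 0.9},
    {"id": 2, "topic": "politics_right", "quality": 0.9},
    {"id": 3, "topic": "cooking",        "quality": 0.8},
]

# Learned affinity: how often this user engaged with each topic in the past.
user_affinity = {"politics_left": 0.85, "politics_right": 0.05, "cooking": 0.40}

def predicted_engagement(item, affinity):
    """Score items by expected engagement, not by balance or accuracy."""
    return affinity.get(item["topic"], 0.0) * item["quality"]

feed = sorted(candidate_items,
              key=lambda it: predicted_engagement(it, user_affinity),
              reverse=True)
print([it["topic"] for it in feed])
# ['politics_left', 'cooking', 'politics_right']: the feed keeps reinforcing
# whatever the user already engages with, and each click sharpens the affinity.
```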

The result is an erosion of shared reality on a massive scale, since each person increasingly “lives” inside a world whose informational foundations and behavioral patterns differ from everyone else’s to varying degrees.



A Power Beyond Economics

Even setting aside the economic factor — which alone is already extraordinary, considering that Big Tech companies move trillions of dollars annually and rank among the wealthiest institutions in human history — the true power being examined here lies elsewhere:

  • deep knowledge of the human mind, including its nuances, desires, fears, addictions, weaknesses, and strengths
  • the ability to predict individual and collective movements with only a few clicks
  • the capacity to alter perceived reality, to varying degrees, for billions of people
  • the offering of a unique asset to clients: the promise of successful outcomes
  • the ability to position themselves several steps ahead in critical corporate decisions

These companies control:

  • the flow of information
  • the distribution of news
  • public visibility
  • political reach
  • collective opinion
  • social perception

It is not uncommon to hear someone say something like: “I was just thinking about something, and when I opened my phone, there it was.” Certainly, some of these situations are merely coincidence or probability. But to some extent, this phenomenon is real: models built from years of behavioral data become capable of predicting patterns that the human mind itself struggles to comprehend.

At times, it can feel as though these platforms are reading thoughts before they are even fully formed.

This represents a level of power that even the most brutal tyrants in history could scarcely have imagined, because it was once inconceivable. Tyrants sought to dominate bodies and, at most, pressure minds toward specific outcomes. Modern technology, however, has created a vast imbalance of power between ordinary individuals and corporations whose scale is, in practical terms, almost immeasurable.


Instrumentarian Power and the “Big Other”

In The Age of Surveillance Capitalism, Shoshana Zuboff argues that these corporations possess a new form of power: instrumentarian power — a system designed to automate society itself.

Unlike totalitarianism, which relied on violence, instrumentarianism uses the “Big Other” — an omnipresent digital apparatus — to monitor and shape human behavior in subtle and often imperceptible ways.

Its ultimate objective is to deliver guaranteed outcomes for clients — advertisers, insurers, political actors, and major institutions — by reducing human uncertainty as much as possible.

Mass surveillance and large-scale data collection


The Consequences of This Business Model

Many of the very people who helped build these systems have expressed serious concern about the effects these practices are generating — and may continue to generate in the future.

Aza Raskin, creator of the infinite scroll; Tristan Harris, former Google design ethicist; Tim Kendall, former president of Pinterest; Justin Rosenstein, an engineer who worked at Facebook and Google; Jeff Seibert, former Twitter executive; Sandy Parakilas, former Facebook operations manager; Chamath Palihapitiya, former Facebook vice president for user growth; and Guillaume Chaslot, former YouTube engineer, among others, frequently speak about:

  • persuasive design
  • psychological manipulation
  • addiction mechanisms
  • infinite scrolling
  • collective behavior
  • engagement metrics
  • aggressive growth strategies
  • harmful effects recognized by the companies’ own executives
  • instant psychological rewards
  • mass data collection
  • mathematical behavioral modeling
  • predictive behavior analysis
  • the commodification of human experience
  • the dynamics of polarization

These are not merely outside critics.

They are former executives, designers, and engineers from within the system itself.


Conclusion

Big Tech companies are not merely large technology corporations offering global services and selling advertising.

They have become structures capable of mapping human behavior at both the social and individual level — capable of predicting, with increasing efficiency, the decisions made by people and groups across countless areas of life.

The power of Big Tech is not only technological. It also represents a new economic order, one that treats human experience itself as free raw material for hidden commercial practices of extraction, prediction, and the sale of behavioral predictions.

Whether consciously or not, these corporations have evolved into a form of power that, not long ago, was not simply impossible — it was unimaginable. The technologies they developed expanded humanity’s understanding of what large-scale social control could become.

When combined with the immense economic weight these corporations already possess, it becomes difficult to comprehend the true limits of this capability — and even more difficult to understand the position of ordinary individuals standing before this Leviathan.

The deepest consequence of this system is the erosion of individual autonomy. Modern life has made the internet essential, but the cost of participating in it is gradually surrendering privacy and control over oneself.



References