The Dark Forest Hypothesis
Liu Cixin's Postulate
There are many ideas about why we find ourselves alone in our stellar neighborhood (and, indeed, may find ourselves alone in the galaxy). The phenomenon itself is called the Fermi Paradox, which, like all physical paradoxes, is a bit of a misnomer. It's more of an oddity, really: given what little we know about the probabilities involved, we'd intuitively expect the cosmos to look different than it does.
So: where are the intelligent aliens? Maybe they don't exist. Maybe we can't detect them. Or maybe, Liu Cixin argues, they're being sensible and are hiding.
As far as the imaginable range of technological capabilities goes (even without resorting to yet-undiscovered technology), we are probably not a very advanced civilization. Our culture is still young and always changing; the vast majority of our technology is less than a hundred years old. Assuming there is a lot that could still be discovered and improved over, say, the next thousand years, or even the next million, it seems reasonable to equate our developmental stage to that of a galactic fetus.
Civilizations with the relative capabilities of a fetus, should they find themselves in a dark and dangerous forest, would be well-advised not to make a ruckus - lest they attract the attention of predators. One may even argue that any civilization should stay quietly in hiding, because the relative might of galactic predators is likely to overwhelm anyone regardless of their knowledge or capabilities. It is, after all, much easier to destroy than it is to build and preserve.
Any technological society that tries to contact other civilizations needs to weigh the benefits against the risks. In this kind of analysis, weighing unknown benefits against unknown risks, it is relatively easy to paint the risks as infinitely large. The main reason for this is that we do not currently perceive ourselves to be in trouble. It seems unreasonable for a wealthy person to bet their entire fortune on a coin toss in exchange for an unknown reward, whereas a poor person could be seen as rational for taking such a gamble. Earth, it seems, is not in enough trouble to roll the dice.
The Dark Forest argument is compelling on several fronts, some rational, some emotional, some ideological. It is an explanation for the emptiness in our skies: everyone is being reasonable and is hiding. It also fits into the current political climate, because it is an appeal to isolationism, and it paints the foreign as hostile by default. It further implies that, as humans, we must all fall in line without exception, or perish. On the cosmic stage, there is no room for exploration or whimsy, no friendship, and even an exchange of culture and ideas is worthless at best (and likely much worse than worthless).
One Coherent Force
Let's examine how such a dark forest would have to work. It is easy to imagine having one or two bad neighbors, but that is not enough in this case. If the dark forest hypothesis applies, it applies on a truly enormous scale - since we do not see any technological signatures even beyond our galaxy. The dark forest would have to extend for billions of light years around us. Without introducing new physics, this seems to rule out one single adversary, because no coherent force can be expected to threaten that much territory over these distances. Even a single large machine empire would not be enough to control such a vast expanse, because over the huge time frames involved, we would certainly expect defects and goal drift to result in local decoherences. Such local "unity failures" would then be detectable.
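To put the coordination problem in rough numbers, here is a minimal back-of-envelope sketch (the radii below are illustrative choices, not measurements) of how long a single coherent actor would need just to exchange one light-speed message across the volume it is supposed to police:

```python
# Back-of-envelope sketch: round-trip signal time for a single "coherent
# force" policing a sphere of the given radius. Radii are illustrative.
AGE_OF_UNIVERSE_YEARS = 1.38e10  # ~13.8 billion years

for radius_ly in (1e5, 1e8, 1e9):  # one galaxy, a supercluster scale, a billion light years
    round_trip_years = 2 * radius_ly  # light-speed round trip, in years
    fraction = round_trip_years / AGE_OF_UNIVERSE_YEARS
    print(f"radius {radius_ly:.0e} ly: round trip {round_trip_years:.0e} yr "
          f"({fraction:.1%} of the age of the universe)")
```

Even without invoking any new physics, a command-and-feedback loop spanning a billion light years eats more than a tenth of cosmic history per exchange; defects and goal drift have ample time to accumulate locally long before the center could ever correct them.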
A Universal Psychological Principle
If one coherent force is not the source of danger in the dark forest, it will have to come down to many independent civilizations projecting the threat. Again, this would have to be a unified effect across the observable universe. Maybe given X civilizations in a volume of space, there are always Y% predators who all work and think the same way: exterminate everyone they see. Technically speaking, the risk of extermination does not even have to be 100%; it just has to be high enough that nobody risks sticking their neck out.
However, this scenario also has statistical problems. Even granting, for the moment, the assertion that the prevalence of such predators is a universal psychological law resulting in a perfectly uniform threat projection across the entire universe, it still doesn't quite work. We would still expect there to be local conflict. We would expect to occasionally see civilizations getting wiped out, no matter how careful they are, because staying hidden as a technologically advanced society is thermodynamically improbable over long periods of time. Yet we see no interstellar warfare. At the very least, the industrial support structures needed for these kinds of clashes should be detectable - maybe not in every case, but so far we see zero interstellar war machines out there.
Even if every single civilization is a predator, and the predators themselves are always careful to avoid detection, we would still expect these attempts at stealth to go wrong occasionally. And when they do, the resulting fireworks should be visible.
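A toy calculation makes the statistical problem concrete (the numbers below are placeholders, not estimates): if there are N independent civilizations and each has even a tiny probability p of a visible slip per epoch - a botched stealth attempt, a skirmish that flares into view - the chance that we observe nothing at all is (1 - p)^N, and that quantity collapses quickly as N grows.

```python
# Toy model, not a claim about real numbers: the probability that *no*
# visible slip-up (failed stealth, open conflict) occurs anywhere, assuming
# independent civilizations with a small per-epoch slip probability each.

def p_total_silence(n_civilizations: int, p_slip: float) -> float:
    """Probability that every single civilization stays perfectly quiet."""
    return (1.0 - p_slip) ** n_civilizations

for n in (1_000, 100_000, 10_000_000):
    for p in (1e-4, 1e-6):
        print(f"N={n:>10,}  p_slip={p:.0e}  P(total silence)={p_total_silence(n, p):.3g}")
```

Once the population of hidden players is large, "nobody ever slips, anywhere, over billions of years" requires a per-civilization failure rate so small that it amounts to yet another improbably uniform universal constant.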
A Forest Filled Exclusively with Small Rodents
If there are no interstellar wars, and we can detect no infrastructure in place for such wars, maybe the problem is not that all civilizations are predators; maybe the issue is that all civilizations see themselves as victims. Nobody ever gets hurt, but everyone is unwilling to risk exposure, because there might be predators. This scenario, too, suffers from improbability problems. We'd still expect to detect the occasional stealth civilization, precisely because it is very hard to have a lot of technology without giving off at least a telltale heat signature. A bigger improbability still: this model again requires every single civilization to be extremely disciplined, with absolutely no room for exceptions or accidents.
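The heat-signature point can be made quantitative with a standard Stefan-Boltzmann estimate. The power level and shell radius below are assumptions chosen for illustration, not a model of any real civilization; the underlying constraint is simply that whatever energy a society uses must eventually be re-radiated as waste heat.

```python
# Illustrative thermodynamics sketch: waste heat cannot be hidden, only
# re-radiated. The power level and shell radius are assumptions.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11      # astronomical unit, metres

def shell_temperature(power_watts: float, radius_m: float) -> float:
    """Equilibrium temperature of a shell re-radiating the given power."""
    area = 4 * math.pi * radius_m ** 2
    return (power_watts / (area * SIGMA)) ** 0.25

# A civilization using ~1e26 W (roughly a quarter of a Sun-like star's output)
# and dumping the waste heat from a 1 AU shell sits near ~280 K, squarely in
# the mid-infrared - an anomaly rather than an invisible whisper.
print(f"{shell_temperature(1e26, AU):.0f} K")
```

Radio discipline is easy; hiding an infrared excess of that size is not.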
Death Rays
For the dark forest to work, every galaxy in our vicinity would have to feature at least one civilization-killing device. One might imagine a couple of ways to build such a device. The issue is that anything we can come up with leaves a detectable trace. For example, imagine a gigantic hemisphere of mirrors in orbit around a star. A technologically advanced civilization might then direct a focused beam of sunlight anywhere in the galaxy to sterilize any planet.
To make sure complete sterilization is achieved, the attacker would want to send multiple beams from multiple angles for an extended period of time. Such mirror structures would be detectable, because even if they had a non-reflective mode, they would give off heat and occlude their host stars in particular ways.
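The energetics, at least, is not the obstacle. A rough budget (every figure below is an order-of-magnitude assumption) suggests that boiling off an Earth-like ocean takes somewhere around 10^27 J, which a star-harnessing attacker can deliver in hours to weeks even if only a tiny fraction of the stellar output reaches the target.

```python
# Rough energy-budget sketch; all figures are order-of-magnitude assumptions.
SOLAR_LUMINOSITY = 3.8e26  # W, Sun-like star
OCEAN_MASS = 1.4e21        # kg, roughly Earth's oceans
HEAT_TO_BOIL = 2.6e6       # J/kg, crude heating-plus-vaporization figure

energy_needed = OCEAN_MASS * HEAT_TO_BOIL  # ~3.6e27 J
for delivered_fraction in (1e-3, 1e-6):    # fraction of stellar output on target
    beam_power = delivered_fraction * SOLAR_LUMINOSITY
    hours = energy_needed / beam_power / 3600
    print(f"{delivered_fraction:.0e} of stellar output on target: ~{hours:,.0f} hours of firing")
```

The energy is cheap for anyone who can wrap a star in mirrors; the hardware needed to do so is exactly what makes the scheme conspicuous.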
This kind of device would certainly work to sterilize Earth, but its operators would have to know specifically which planet to target. The method is also decidedly less useful against civilizations that are themselves spacefaring: the more spread out a civilization is, the harder it is to kill. These downsides render the death ray useless, at least by itself.
Relativistic Bullets
In order to supplement their phalanx of death ray emitters, a hostile civilization would need to employ high-velocity guided missiles that can strike smaller targets and colonies. Those missiles would need to find all outposts and spaceships, and then correct their course autonomously, both to evade countermeasures and to make sure they eradicate every single place that hosts life - including technological life with arbitrarily small heat signatures, such as dormant artificial intelligences. If this targeted eradication process went wrong even once, retaliation would be inevitable. This, again, would result in a detectable war. According to the data we have, none of this has happened anywhere.
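For a sense of scale - the mass and speed here are purely illustrative - the appeal of such weapons is that kinetic energy alone does the job: KE = (γ - 1)mc², so even a modest slug at relativistic speed arrives with the punch of a large nuclear arsenal and essentially no warning.

```python
# Worked example with illustrative numbers: kinetic energy of a relativistic
# projectile, KE = (gamma - 1) * m * c^2.
import math

C = 2.998e8             # speed of light, m/s
MEGATON_TNT = 4.184e15  # joules per megaton of TNT

def kinetic_energy(mass_kg: float, v_over_c: float) -> float:
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

ke = kinetic_energy(1000.0, 0.9)  # a one-tonne slug at 0.9c
print(f"{ke:.2e} J  (~{ke / MEGATON_TNT:,.0f} megatons of TNT)")
```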
Sleeping Drones
There could be a sleeping drone in every star system, in every galaxy. It lies waiting on the outskirts, carefully monitoring every planet for signs of technology. Once it discovers a tech signature, it slams a giant asteroid into the planet and goes back to sleep. The problem with this is not that the number of drones required is very high, because a self-replicating drone would easily find enough material to procreate pretty much everywhere in the universe, except maybe in the most metal-poor regions, where lifeforms are unlikely to arise anyway.
However, every single one of these drones would have to adhere to the program. We expect such drones to occasionally malfunction, or even to mutate and become harmless, and they would likely pass these traits on to their offspring. If there were an inter-drone immune system prompting the original drones to destroy the mutated ones, soon there would be more warfare among the death drones than against the burgeoning species they are supposed to control. Furthermore, if such a drone were waiting in our solar system, it would likely have acted by now, since we may already be close to developing defensive options beyond its killing capabilities. Once we venture beyond Earth, it will probably be too late to kill us with sufficient certainty.
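A toy replication model (all rates are invented for illustration) shows how little room the scheme has for error: even a tiny per-copy probability of mutation or goal drift compounds over generations of self-replication, so a galaxy-spanning drone population accumulates off-program members as a matter of course.

```python
# Toy model, purely illustrative: the fraction of a self-replicating drone
# population still running its original program after a number of copy
# generations, given a small per-copy probability of mutation or goal drift.

def faithful_fraction(per_copy_mutation_rate: float, generations: int) -> float:
    return (1.0 - per_copy_mutation_rate) ** generations

for rate in (1e-6, 1e-9):
    for generations in (10_000, 1_000_000):
        f = faithful_fraction(rate, generations)
        print(f"mutation rate {rate:.0e}, {generations:>9,} generations: "
              f"{f:.1%} still on-program")
```

Over billions of years and countless copy events, "every drone adheres to the program" is another way of demanding a near-zero error rate across the entire observable universe.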
Final Analysis
A dark forest scenario remains an intriguing solution for immediate stellar neighborhoods, maybe even for larger parts of galaxies. It is easy to imagine one or a few bad actors suppressing technological civilizations within their reach. In such a small volume of space, the tech signatures of everyone involved are easier to hide, and probabilistic effects based on mutation and goal drift are surmountable. We'd also expect the time span between detectable events to be large enough to explain why we have never observed any activity, and it's easier to imagine how a small number of potential victim civilizations would keep quiet over long periods of time without anyone changing their minds about it.
However, as a large-scale solution to the Fermi Paradox, the dark forest falls down, and hard. Judged as a solution for the entirety of the observable universe, the dark forest is an exceptionally weak proposition that depends on several key factors having highly improbable and spatially uniform values. I'm tempted to chalk this up to the kind of intellectual handwaving that always occurs when a model is ideologically convenient. To be fair, I don't believe anyone is completely immune to such biases.
All things considered, the hypothesis still has a place as a piece of the puzzle. It seems reasonable that there is no single solution to the Fermi "Paradox"; rather, it is a combination of many factors, almost all of which drive the probability of encountering another civilization downwards.