Programming anti-patterns
There are many books, blogs, academic papers and tutorials which claim that you should follow certain software engineering principles or practices when developing software. These practices are often referred to as "patterns", while supposedly poor practices are referred to as "anti-patterns". Unfortunately, a cursory pass over such materials with a critical eye is sufficient to reveal the truth: these so-called patterns do not actually solve any problems and should themselves be considered anti-patterns.
Object-oriented programming
It doesn't matter whether you consider the term object-oriented to mean a system where objects send messages or one where their methods are called. The core criterion for determining whether a system is "truly" object-oriented is the presence of encapsulation, i.e. the binding of functions to the data structures they operate on. Encapsulation is fundamentally an anti-pattern because data structures are intrinsically declarative, while functions transform inputs (data structures) into outputs (other data structures). Introducing tight coupling for the sake of assigning "responsibilities" to a data structure can only lead to less reuse and poor handling of cross-cutting concerns. This can be seen clearly in programming trends: inheritance gave way to composition, which in turn led to inversion of control (dependency injection). None of these techniques fully resolves the issue, because each preserves the underlying tension between loose coupling and the binding of functions to data.
Object-oriented programming is sold on the idea that it produces elegant abstractions, usually by way of examples that model some aspect of the real world. Occasionally this transfers in a reasonable way, as in GUI programming, but most of the time it does not. Object-oriented programs are embarrassingly amusing to read and incredibly painful to write, which is why many newer languages quietly discourage the paradigm.
Resource Acquisition Is Initialisation
Memory management is one of the most important practical aspects of software engineering, yet it is frequently done poorly. Manually allocating and freeing memory for individual objects is tedious and error-prone; as a result many languages have employed various strategies to ease the burden on the programmer. The most common strategy is garbage collection, and while this may be suitable for a large number of use cases (especially on modern hardware), the fact that pause times are unbounded is often unacceptable for programs which need to perform real-time operations, especially as the heap grows.
Another strategy, which originated in C++, is known as "Resource Acquisition Is Initialisation" (RAII). It was introduced to paper over the complexity of exception handling in a language with manual memory management: a resource such as heap memory is tied to the lifetime of a stack object, and that object's destructor releases the resource when the object goes out of scope, including during stack unwinding. One of the issues with throwing an exception is that you don't know where it will be handled; without RAII, the C++ programmer would have to ensure along every unwinding path that all heap allocations which are no longer reachable are freed, a task which quickly becomes onerous. In languages which emphasise garbage collection, unwinding generally does not need to run destructors, which makes the average runtime cost of exception handling significantly cheaper than under RAII.
Unfortunately, there is another problem – malloc() and free() are themselves unbounded! Obtaining memory from, and returning it to, the operating system is not free, and although a clever malloc implementation can amortise this cost, a single large allocation will always be cheaper than many small ones. The only way to avoid this is to allocate all of the memory you require for a set of real-time operations upfront – and since many garbage collectors only run during allocation, preallocating in a garbage-collected language achieves the same bounded behaviour, so you may not have gained anything by opting for manual memory management.
RAII also requires additional semantics to work reliably, a concept known as "ownership". The ownership model is about as useful as object-oriented programming, i.e. not at all: you either suffer the consequences of not being able to copy a pointer anywhere, or you fall back on reference counting, which has its own performance drawbacks. When you consider that most allocation patterns are bursty, the implication is that the objects in a burst naturally share a lifetime, which in turn implies that you can allocate and free all of the memory they require as a single contiguous block, avoiding the penalties associated with RAII and eliminating (or at least reducing) the risk of double-free and use-after-free bugs. RAII cannot see the wood for the trees, and that is why it is an anti-pattern.