The Faster You Unlearn OOP, The Better For You And Your Software

Published December 09, 2018 by Dawid Ciężarkiewicz, posted by GameDev.net
Quote

Object-oriented programming is an exceptionally bad idea which could only have originated in California.
  - Edsger W. Dijkstra

 

 

Maybe it's just my experience, but Object-Oriented Programming seems to be the default, most common paradigm of software engineering. It is the one typically taught to students, featured in online material and, for some reason, spontaneously applied even by people who never intended to use it.

I know how seductive it is, and how great an idea it seems on the surface. It took me years to break its spell and understand clearly how horrible it is and why. Because of this perspective, I strongly believe it's important that people understand what is wrong with OOP, and what they should do instead.

Many people have discussed the problems with OOP before, and I will provide a list of my favorite articles and videos at the end of this post. Before that, I'd like to give it my own take.

 

Data is more important than code

At its core, all software is about manipulating data to achieve a certain goal. The goal determines how the data should be structured, and the structure of the data determines what code is necessary.

This part is very important, so I will repeat it:

Quote

goal -> data architecture -> code

One must never change the order here! When designing a piece of software, always start by figuring out what you want to achieve, then at least roughly think through the data architecture: the data structures and infrastructure you need to achieve it efficiently. Only then write the code to work within that architecture. If the goal changes over time, alter the architecture first, then change your code.

In my experience, the biggest problem with OOP is that it encourages ignoring the data model architecture and applying a mindless pattern of storing everything in objects, with the promise of some vague benefits. If it looks like a candidate for a class, it goes into a class. Do I have a Customer? It goes into class Customer. Do I have a rendering context? It goes into class RenderingContext.

Instead of building a good data architecture, the developer's attention shifts to inventing "good" classes, relations between them, taxonomies, inheritance hierarchies and so on. Not only is this a useless effort; it's actually deeply harmful.

 

Encouraging complexity

When explicitly designing a data architecture, the result is typically a minimum viable set of data structures that supports the goal of our software. When thinking in terms of abstract classes and objects, there is no upper bound to how grandiose and complex our abstractions can be. Just look at FizzBuzz Enterprise Edition – the reason such a simple problem can be implemented in so many lines of code is that in OOP there's always room for more abstractions.

OOP apologists will respond that it's a matter of developer skill to keep abstractions in check. Maybe. But in practice, OOP programs tend to only grow and never shrink, because OOP encourages it.

 

Graphs everywhere

Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires either passing long lists of arguments everywhere or holding references to related objects directly as a shortcut.

Your class Customer has a reference to class Order and vice versa. class OrderManager holds references to all Orders, and thus indirectly to Customers. Everything tends to point to everything else, because as time passes there are more and more places in the code that need to refer to a related object.

Quote

Instead of a well-designed data store, OOP projects tend to look like a huge spaghetti graph of objects pointing at each other and methods taking long argument lists. When you start designing Context objects just to cut down on the number of arguments passed around, you know you're writing real OOP Enterprise-level software.
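
To make the contrast concrete, here is a minimal sketch (in Rust, with names I made up purely for illustration): instead of Customer and Order holding references to each other, each record refers to the other by a plain ID, and a flat store owns all the data.

```rust
use std::collections::HashMap;

type CustomerId = u64;
type OrderId = u64;

struct Customer {
    name: String,
    orders: Vec<OrderId>, // IDs, not pointers to Order objects
}

struct Order {
    customer: CustomerId, // the back-reference is just a number
    total_cents: u64,
}

struct Store {
    customers: HashMap<CustomerId, Customer>,
    orders: HashMap<OrderId, Order>,
}

fn main() {
    let mut store = Store { customers: HashMap::new(), orders: HashMap::new() };
    store.customers.insert(1, Customer { name: "Ada".into(), orders: vec![10] });
    store.orders.insert(10, Order { customer: 1, total_cents: 2500 });

    // Any code holding the store can resolve the relation,
    // without Customer and Order objects pointing at each other.
    let order = &store.orders[&10];
    let customer = &store.customers[&order.customer];
    println!("{} owes {} cents", customer.name, order.total_cents);
}
```

No chain of back-references is needed; the store is the single place where relations are resolved.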

 

Cross-cutting concerns

The vast majority of essential code is not operating on just one object – it is actually implementing cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify the data? The Monster's hp has to decrease by the Player's attackPower, and the Player's xp increases by the Monster's level if the Monster got killed. Does this happen in Player.hits(Monster m) or in Monster.isHitBy(Player p)? What if there's a class Weapon involved? Do we pass it as an argument to isHitBy, or does Player have a currentWeapon() getter?

This oversimplified example with just three interacting classes is already becoming a typical OOP nightmare. A simple data transformation turns into a bunch of awkward, intertwined methods that call each other for no reason other than the OOP dogma of encapsulation. Add a bit of inheritance to the mix and we get a nice example of what stereotypical "Enterprise" software is about.
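
As a hedged sketch of the "simple data transformation" I mean, the hit can be written as one plain function over plain data, so the question of whether it "belongs" to Player or Monster never comes up. The field and function names below are my own illustration.

```rust
struct Player { xp: u32, attack_power: i32 }
struct Monster { hp: i32, level: u32 }

/// Applies one attack; all the cross-cutting state changes live in one place.
fn resolve_hit(player: &mut Player, monster: &mut Monster) {
    monster.hp -= player.attack_power;
    if monster.hp <= 0 {
        player.xp += monster.level; // award XP only on a kill
    }
}

fn main() {
    let mut player = Player { xp: 0, attack_power: 7 };
    let mut monster = Monster { hp: 5, level: 3 };
    resolve_hit(&mut player, &mut monster);
    assert_eq!(player.xp, 3);
    println!("monster hp: {}, player xp: {}", monster.hp, player.xp);
}
```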

 

Object encapsulation is schizophrenic

Let's look at the definition of Encapsulation:

Quote

Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.

The sentiment is good, but in practice, encapsulation at the granularity of an object or a class often leads to code trying to separate everything from everything else (including from itself). It generates tons of boilerplate: getters, setters, multiple constructors, odd methods, all trying to protect against mistakes that are unlikely to happen, on a scale too small to matter. The metaphor I use is putting a padlock on your left pocket to make sure your right hand can't take anything from it.

Don't get me wrong – enforcing constraints, especially on ADTs, is usually a great idea. But in OOP, with all the inter-referencing of objects, encapsulation often doesn't achieve anything useful, and it's hard to express constraints spanning many classes.

In my opinion, classes and objects are just too granular; the right place to focus on isolation, APIs and so on is at the boundaries of "modules"/"components"/"libraries". And in my experience, OOP (Java/Scala) codebases are usually the ones in which no modules/libraries are employed. Developers focus on putting boundaries around each class, without much thought about which groups of classes together form a standalone, reusable, consistent logical unit.
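
A minimal sketch of that idea, assuming Rust-style modules (the names are mine): the records stay plain data with public fields, and the one constraint worth enforcing lives at the module boundary instead of behind per-field getters and setters.

```rust
mod orders {
    pub struct Order {
        pub prices_cents: Vec<u64>, // plain data, no getters or setters
    }

    /// The constraint worth enforcing lives at the module (API) boundary.
    pub fn add_item(order: &mut Order, price_cents: u64) -> Result<(), &'static str> {
        if price_cents == 0 {
            return Err("an item must have a non-zero price");
        }
        order.prices_cents.push(price_cents);
        Ok(())
    }

    pub fn total_cents(order: &Order) -> u64 {
        order.prices_cents.iter().sum()
    }
}

fn main() {
    let mut order = orders::Order { prices_cents: Vec::new() };
    orders::add_item(&mut order, 999).unwrap();
    println!("total: {} cents", orders::total_cents(&order));
}
```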

 

There are multiple ways to look at the same data

OOP forces an inflexible data organization: the data is split into many logical objects, which defines the data architecture as a graph of objects with associated behavior (methods). However, it's often useful to have multiple ways of logically expressing data manipulations.

If program data is stored, for example, in a tabular, data-oriented form, it's possible to have two or more modules each operating on the same data structure but in a different way. If the data is split into objects with methods, that's no longer possible.

That's also the main reason for the Object-relational impedance mismatch. While a relational data architecture might not always be the best one, it is typically flexible enough to allow operating on the data in many different ways, using different paradigms. The rigidity of OOP data organization, however, causes incompatibility with any other data architecture.
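
A small sketch of the "one table, many views" point, with illustrative names of my own: two unrelated pieces of code read the same flat rows in different ways, which is exactly what becomes awkward once the data is locked inside objects with methods.

```rust
struct SaleRow { customer_id: u64, amount_cents: u64 }

// A "reporting" view of the data.
fn total_revenue(rows: &[SaleRow]) -> u64 {
    rows.iter().map(|r| r.amount_cents).sum()
}

// A completely different "per-customer" view of the very same rows.
fn spend_of(rows: &[SaleRow], customer_id: u64) -> u64 {
    rows.iter()
        .filter(|r| r.customer_id == customer_id)
        .map(|r| r.amount_cents)
        .sum()
}

fn main() {
    let rows = vec![
        SaleRow { customer_id: 1, amount_cents: 500 },
        SaleRow { customer_id: 2, amount_cents: 300 },
        SaleRow { customer_id: 1, amount_cents: 200 },
    ];
    println!("revenue: {}", total_revenue(&rows));
    println!("customer 1 spent: {}", spend_of(&rows, 1));
}
```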

 

Bad performance

The combination of data scattered between many small objects, heavy use of indirection and pointers, and the lack of a proper data architecture in the first place leads to poor runtime performance. 'Nuff said.

 

What to do instead?

I don't think there's a silver bullet, so I'm going to just describe how it tends to work in my code nowadays.

First, data considerations come first. I analyze what the inputs and outputs are going to be: their format and volume, how the data should be stored at runtime and how it should be persisted, what operations will have to be supported, how fast (throughput, latencies), and so on.

For any data with significant volume, the design is typically something close to a database. That is: there will be some object like a DataStore with an API exposing all the necessary operations for querying and storing the data. The data itself will be in the form of ADT/PoD structures, and any references between data records will take the form of an ID (a number, UUID, or a deterministic hash). Under the hood, it typically closely resembles, or actually is backed by, a relational database: Vectors or HashMaps storing the bulk of the data by index or ID, with some others acting as the "indices" required for fast lookup, and so on. Other data structures like LRU caches are also placed there.
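
Here is a rough sketch of such a DataStore, under my own naming and simplified to the bare minimum (not my actual production code): a Vec holds the bulk of the records addressed by ID, and a HashMap acts as a secondary "index" for fast lookup.

```rust
use std::collections::HashMap;

type CustomerId = usize;

#[derive(Debug)]
struct CustomerRow {
    name: String,
    balance_cents: i64,
}

#[derive(Default)]
struct DataStore {
    customers: Vec<CustomerRow>,          // bulk data, ID == index
    by_name: HashMap<String, CustomerId>, // secondary "index" for fast lookup
}

impl DataStore {
    fn insert_customer(&mut self, name: &str) -> CustomerId {
        let id = self.customers.len();
        self.customers.push(CustomerRow { name: name.to_string(), balance_cents: 0 });
        self.by_name.insert(name.to_string(), id);
        id
    }

    fn customer_by_name(&self, name: &str) -> Option<&CustomerRow> {
        self.by_name.get(name).map(|&id| &self.customers[id])
    }
}

fn main() {
    let mut store = DataStore::default();
    let id = store.insert_customer("Ada");
    store.customers[id].balance_cents += 1000; // "business logic" edits the row directly
    println!("{:?}", store.customer_by_name("Ada"));
}
```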

The bulk of the actual program logic takes a reference to such DataStores and performs the necessary operations on them. For concurrency and multi-threading, I typically glue different logical components together via message passing, actor-style. Examples of actors: stdin reader, input data processor, trust manager, game state, etc. Such "actors" can be implemented as thread pools, elements of pipelines and so on. When required, they can have their own DataStore or share one with other "actors".
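
A minimal sketch of that actor-style glue, using standard-library channels and a thread; the message type and the "balances" state are invented for illustration. The actor owns its data and reacts only to messages, so nothing shares mutable state directly.

```rust
use std::sync::mpsc;
use std::thread;

enum Msg {
    Deposit { customer_id: usize, cents: i64 },
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Msg>();

    // One "actor": owns its own state and processes messages sequentially.
    let worker = thread::spawn(move || {
        let mut balances = vec![0i64; 4];
        for msg in rx {
            match msg {
                Msg::Deposit { customer_id, cents } => balances[customer_id] += cents,
                Msg::Shutdown => break,
            }
        }
        balances
    });

    tx.send(Msg::Deposit { customer_id: 1, cents: 500 }).unwrap();
    tx.send(Msg::Shutdown).unwrap();
    println!("final balances: {:?}", worker.join().unwrap());
}
```

Testing such an actor is just a matter of feeding it a scripted sequence of messages and inspecting the state it returns.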

Such an architecture gives me nice testing points: DataStores can have multiple implementations via polymorphism, and actors communicating via messages can be instantiated separately and driven through a test sequence of messages.

The main point is: just because my software operates in a domain with concepts of e.g. Customers and Orders doesn't mean there is any Customer class with methods associated with it. Quite the opposite: the Customer concept is just a bunch of data in tabular form in one or more DataStores, and the "business logic" code manipulates that data directly.

 

Follow-up read

As with many things in software engineering, a critique of OOP is not a simple matter. I might have failed at clearly articulating my views and/or convincing you. If you're still interested, here are some links for you:

 

Feedback

I've been receiving comments and more links, so I'm putting them here:

 

Note: This article was originally published on the author's blog, and is republished here with kind permission.


Comments

GameDev.net

Related reading: industry professional and GameDev.net moderator Brooke @Hodgman recently published a piece outlining his counter-arguments to typical objections to OOP:

 

December 09, 2018 09:32 AM
Aceticon

It is genuinely interesting that people who don't know how to use OO for the reasons OO exists (reduce the likelihood of bugs, reduce the amount of information that must be communicated between developers and control complexity by reducing cross-dependencies so that very large projects can be done efficiently, to pick just a few examples) put up their own deeply flawed pseudo-OO strawman as an example of "OO" and then proceed to argue that their imaginary construct shows how shit OO is and why people should stop doing it.

Even more funny is that this is basically a back-to-spaghetti-code movement that reverses what happened 25 years ago, when people figured out that making everything have access to everything and be able to change everything was spectacularly bad from the point of view of making code that has few bugs and can be maintained and extended.

It seems to be a sadly common thing in the Game Development branch of IT that people who have very little knowledge of how to architect large scale solutions and how to make effective and efficient software development processes like to, from their peak certainty (and ignorance) spot in the Dunning-Kruger curve, opine about software architecture concerns without even a mention of things like development process efficiency in aggregate (not just coding speed, something which is the least important part of it), inter- and intra-team dependencies, flexibility for maintainability and extensibility, bug reduction and bug discovery and removal efficiency.

Maybe it's something to do with so many developers in the Industry not having to maintain their own code (game shipped = crap code and design problems solved) and being on average more junior than the rest of the industry, and so less likely to have seen enough projects in enough different situations to have grown beyond being just coders and to awareness of technical design and architectural concerns in the software development process?

I'm a little sad and a little angry that people who have not demonstrated much in the way of wisdom in terms of software development processes are trying to undo decades of hard-learned lessons without even understanding why those things are there, a bit like saying "I never had a car accident and don't like wearing a seatbelt, so I want to convince everybody else not to wear seatbelts".

December 09, 2018 10:58 PM
AMDphreak

@Aceticon OO does not reduce the likelihood of bugs. It increases them because the programmer has to memorize a giant map of connections that are documented in a non-hierarchical manner. Tracking the state of variables (especially hidden state within objects) across the run-time of an application is the biggest source of bugs. Coders forget what is set to what, and when they pass that problem down to the next coder (a client, a new hire, etc) then that responsibility to know what is hidden adds to the likelihood of bugs. Bugs also increase based on LOC (Lines of Code) due to the fact that code is written and therefore has to be digested slowly, sequentially. Code makes little use of visual reasoning. Reading the code is the biggest bottleneck to understanding it. Just because you think reading is fun doesn't mean the rest of the world of competent engineers does.

It amazes me that people like you take the initiative to spread their objectively trash opinions about OO. OO always was a dumpster fire because they tried to take a paradigm that had limited use cases and force the rest of the programming paradigms to fit within its limitations.

Imperative Procedural code is the gold standard for understanding how a computer accomplishes what it does. Functional language is a sophisticated and simplified interface to creating procedural code. OO is neither. OO is not a natural evolution of procedural code. Combining functions and variables into a logical unit is and will remain a totally incompetent idea except in the limited sense that it can model an actual physical object. When you attempt to force people to think of a process as an object----which is what Java does, in particular----you're violating the need for code to represent what it literally does.

Lisp should have become the mainstay of systems development. Lisp-machines should never have died out. It is unfortunate that clueless twits like you influenced the programming industry into the wrong direction because you were ignorant of the functional paradigm.

By default, a program is a process. When you do something in the world, you don't start an object….you start a process. You only say ‘start the car’ but in reality you are starting the process of driving. The object SERVES the goal. Programmers need to have a general goal in mind when they start their program, and therefore they must fit all other concepts and paradigms within the overarching paradigm of a process. When a program launches on the computer the OS launches---say it with me--a PROCESS.

When you are diagnosing a bug with software, you have to identify WHEN the bug happens and under what conditions it happens. You have to be able to simulate the process and state of variables in your mind to find the problem. OO intentionally makes this difficult and therefore causes bugapalooza. In order to diagnose the problem, your brain must convert the OO code in your brain into an AST (abstract syntax tree) and traverse the tree, selecting branches that reflect the decisions made by the computer as well as the changes in state of the variables. This is why OO is inferior for debugging. It forces you to sit and make guesses about where the damn bug is. Lisp, by contrast, inherently aids you in debugging your code, because its structure mimics the AST, so all you have to do is follow the code from start to finish to figure out where the bug happened.

Not only is the logical analysis about this solid, these observations are substantiated by evolutionary biology and child psychology. 1. The human prefrontal cortex (the planning area) evolved out of the motor cortex. Thought is abstract action. Source: Dr. Jordan Peterson's lectures free on YouTube. and 2. Children learn to execute actions before they learn how to simulate the state of objects in their environment, which is called Object Permanence. OO forces the user to constantly interrupt their attention from what they are doing to retrieve status information about categories of objects and the unknown state of those objects. It kicks you out of your execution and working-memory loop and side-tracks you with a long-term semantic memory recall task. In other words, it causes a biological “hard-fault”. You wouldn't use a knife to cut chicken if you couldn't see what the shape of that knife was. For all you know, some idiot labeled their dinner knife a kitchen knife. Likewise, in code you should never trust another programmer's interpretation of reality and what they believe is a reasonable thing to hide from you. You want to know the exact shape of that knife and how you can use it. OO classes obfuscate details behind category labels. And, if you still can't understand why Objects are so clearly unintuitive, just look at the rest of the animal kingdom: every living organism does things, but not all of them organize their thoughts into objects. The ‘recognition’ of any object they interact with is predicated on the environment's signals (in the form of neurotransmitters and chemical reactions). Action is the lowest level of existence and the most overarching mode of existence for all organisms. All other forms of cognition are sub-servient to that one.

It is incredible that people like you who have such commital opinions about the benefits of OO are allowed to work in the industry and repeatedly pollute it with your shitty code habits and bad influence.

Programming paradigms should be taught in this order:

Imperative → Procedural → Functional → Actor → Object Oriented

Propositional Logic (Prolog) should be added into that list somewhere as well, but I'm not sure where. Probably after Object Oriented, as Prolog requires reasoning about objects.

November 24, 2023 03:04 AM
Hodgman
7 hours ago, Aceticon said:

It is genuinely interesting that people who don't know how to use OO for the reasons OO exists (reduce the likelihood of bugs, reduce the amount of information that must be communicated between developers and control complexity by reducing cross-dependencies so that very large projects can be done efficiently, to pick just a few examples) put up their own deeply flawed pseudo-OO strawman as an example of "OO" and then proceed to argue that their imaginary construct shows how shit OO is and why people should stop doing it.

20 hours ago, GameDev.net said:

 @Hodgman recently published a piece outlining his counter-arguments to typical objections to OOP:

Yeah, you can almost re-frame this article as a checklist of signs that you're doing OOP wrong :D 

My quick feedback / comments on it:

  • Data is more important than code - yep, people often write stupid class structures without considering the data first. That's a flaw in their execution, not the tools they have at hand. Once the data model is well designed, OO tools can be used to ensure that the program invariants are kept in check and that the code is maintainable at scale.
  • Encouraging complexity - yep, "enterprise software" written by 100 interns is shitty. KISS is life. One of the strengths of OO if done right is managing complexity and allowing software to continue to be maintainable. The typical "enterprise software" crap is simply failing at using the theory.
  • Bad performance - As above, if you structure your data first, and then use OO to do the things it's meant to do (enforce data model invariants, decouple the large-scale architecture, etc)... then this just isn't true. If you make everything an object, just because, and write crap with no structure, then yes, you get bad performance. You often see Pitfalls of OOP cited in this area, but IMHO it's actually a great tutorial on how you should be implementing your badly written OO code :D 
  • Graphs everywhere - this has nothing to do with OO. You can have the same object relations in OO, procedural or relational data models. The actual optimal data model is probably exactly the same in all three paradigms... While we're here though, handles are the better pointers, and that applies to OO coders too.
  • Cross-cutting concerns - if the data was designed properly, then cross-cutting concerns aren't an issue. Also, the argument about where a function should be placed is more valid in languages like Java or C# which force everything into an object, but not in C++ where the use of free-functions is actually considered best practice (even in OO designs). OO is an extension of procedural programming after all, so there's no conflict with continuing to use procedures that don't belong to a single class of object.
  • Object encapsulation is schizophrenic - this whole thing smacks of incorrect usage. Getters and setters are a code smell -- they exist when there's encapsulation but zero abstraction. There's no conflict in using plain-old-data structures with public, primitive-type members in an OO program -- it's actually a common solution when employing OO's DIP rule. A simple data structure can be a common interface between modules! If you're creating encapsulation at the wrong level, then just don't create encapsulation at that level... This section is honestly an argument against enterprise zombies who dogmatically apply the methods their school taught them without any original thought of their own.
  • There are multiple ways to look at the same data - IMHO it's common for an underlying data model to be tabular as in the relational style, with multiple different OO 'views' of that data, exposing it to different modules, with different restrictions, for different purposes, with zero copies/overhead. So, this section is false in my experience.
  • What to do instead? - Learn what OO is actually meant to solve / is actually good at, and use it sparingly for those purposes only :)

 

December 10, 2018 05:00 AM
eugene2k

It's strange to see the author saying "OOP is bad and you should unlearn it" and then in the "what should you do instead" section of the article encounter words like "object" and "polymorphism".

December 10, 2018 06:39 AM
Guy Fleegman

People's brains work in different ways, even when they're solving the same problem. The most important thing is that one's code is logical, clear, consistent and well documented.

Programming is like creating art. When you are comfortable, confident and efficient with your technique, it becomes an expression of yourself.

December 10, 2018 07:43 AM
jbadams
1 hour ago, eugene2k said:

It's strange to see the author saying "OOP is bad and you should unlearn it" and then in the "what should you do instead" section of the article encounter words like "object" and "polymorphism".

To be fair, you can use those things and not actually be following OOP principles.  Likewise, you can still write OO code in languages (such as C) which do not offer those facilities.

December 10, 2018 08:12 AM
0r0d

"Data is more important than code"

Here we have the core of most of the anti-OOP nonsense that seems to be the popular thing these days.   Looks like someone watched a YouTube video about data-oriented programming and now they know the truth that everyone else is clearly missing, so they must go out and spread the good word.

Sorry, but, that's a load of b.s.   In software there are many aspects that come together.  The programmer, the user, the code, the data, the development tools, the target hardware, etc.   None of those things are objectively the most "important" thing, and certainly not so for each and every piece of software to ever be written.  

OOP is just a tool, and shockingly one that can be used with other tools.  You can use that hammer AND a screwdriver, you don't have to pick one over the other.  OOP has its strengths and benefits, which is why it has become one of the most popular programming paradigms in history.  It helps programmers think about solutions to problems in natural ways that are easy to think about.  It helps them to write maintainable code.  And the list goes on.  Now, can you abuse it and write terrible OOP code?  Yeah, sure.  Can you also write terrible data-oriented code?  Oh...yeah.

What you need to do is stop thinking in dogmatic ways, and just use the tools that best suit you and the problem you're trying to solve.  There is no "right" or "wrong" way to solve a problem in software engineering.  The best way is the one that works for you.  Of course that doesn't mean that all solutions are equally good.  But you can't figure out what's going to be that good or better solution by just making blanket statements about this thing being the most important, or that thing being it.  Use your brain, look at the problem, decide what's the best approach to solve it, the one that makes most sense to you.  That approach might be the best fit for you, and the wrong one for someone else.  There's no contradiction there.

December 10, 2018 10:01 AM
lawnjelly

There is a saying, which I feel is very appropriate here, in both directions:

Quote

 

December 10, 2018 10:04 AM
MGB
2 hours ago, Guy Fleegman said:

People's brains work in different ways, even when they're solving the same problem. The most important thing is that one's code is logical, clear, consistent and well documented.

Programming is like creating art. When you are comfortable, confident and efficient with your technique, it becomes an expression of yourself.

That's fine when you work alone.  Collaborating is a different story though.

December 10, 2018 10:23 AM
MGB
11 hours ago, Aceticon said:

salient points

Amen.  Didn't see one mention of the main cost of development: maintainability.

December 10, 2018 10:29 AM
eugene2k
2 hours ago, jbadams said:

To be fair, you can use those things and not actually be following OOP principles.  Likewise, you can still write OO code in languages (such as C) which do not offer those facilities.

You can also use those things, not follow the actual OOP principles and then complain that OOP is bad ;) Or you could use the actual OOP principles but still engineer the system in a way that doesn't actually mirror its use and then complain that OOP doesn't work. I think that's what happened to the author.

December 10, 2018 10:30 AM
Aceticon
3 hours ago, Guy Fleegman said:

People's brains work in different ways, even when they're solving the same problem. The most important thing is that one's code is logical, clear, consistent and well documented.

Programming is like creating art. When you are comfortable, confident and efficient with your technique, it becomes an expression of yourself.

I've worked about 14 years as a freelancer (contractor) Software Developer in a couple of Industries and far too often I was called in to fix code bases which had grown to be near unmaintainable.

Maybe the single biggest, nastiest problem I used to find (possibly the most frequent also) was when the work of 3 or 4 "coding artists" over a couple of years had piled up into a bug-ridden unmaintainable mess of mismatched coding techniques and software design approaches.

It didn't really matter if one, two or even all of those developers were truly gifted - once the next one started doing their "art" their own way on top of a different style (and then the next and the next and the next) the thing quickly became unreadable due to different naming conventions, full of weird bugs due to mismatched assumptions (say, things like one person returning NULL arrays as meaning NOT_FOUND but another doing it by returning zero-size arrays) and vastly harder to grasp and maintain due to the huge number of traps when trying to make use of code from different authors with different rules.

We're not Artists, at best we're a mix of Craftsman and Engineer - yes, there's room for flair, as long as one still considers the people one works with, those who will pick up our code later or even ourselves in 6 months or 1 year's time when we pick up our own code and go "Oh, shit, I forgot how I did this!".

Unsurprisingly, it has been my experience that as soon as one moves beyond little projects and into anything major, a team of average but truly cooperating developers will outdeliver a team of prima-donnas every day of the week in productivity and quality.

(And I say this as having been one such "artist" and "prima-donna" earlier in my career)

December 10, 2018 11:07 AM
Bregma

OOP. You keep using that term.  I don't think it means what you think it means.

December 10, 2018 11:32 AM
RandNR

Isn't the core idea behind OOP that humans can understand and remember object-based stuff way better than anything else, like symbols for instance? (In general.)

I'd say it's fine unless you are stacking similar instances en masse without any logical reason; without well-crafted procedures and transformations it's pretty obvious that at some point it will get too complex to handle.

December 10, 2018 12:03 PM
DavinCreed
Damn, good luck trying to take down OOP. There are plenty of bad examples of everything, but the tough thing to do is to take the principles as they are meant to be followed and address those directly. In this way you're attacking not just the best examples, you are addressing the ideal. Addressing only the bad examples or if we're being generous, what you find to be the common examples, is like playing your ideal team against the bench warmers and injured players of the other. Which is what it looks like you're doing here.

If you follow SOLID, GRASP and the very common KISS principle, then none of the problems you listed as inherent to OOP (I think you are incorrect in doing so), are problems for OOP. I recommend following principles to about 90-95% because that's about the peak balance between development time and the benefits of the principle.

December 10, 2018 02:43 PM
JWColeman

So, as a hobbyist, with minimal formal training in C++ or object oriented programming, how relevant is this stuff to me? Feels like just a lot of discussion waaaaay above my head rn :).

December 10, 2018 03:06 PM
DavinCreed
10 minutes ago, JWColeman said:

So, as a hobbyist, with minimal formal training in C++ or object oriented programming, how relevant is this stuff to me? Feels like just a lot of discussion waaaaay above my head rn :).

It's all relevant if you're programming. These principles aren't some way to brow beat people into line or for gate keeping (though there are people that do do things like that with them), the OOP principles really do help you to efficiently write code that is easier to maintain and plays well with others.

No one starts off with this stuff already, it takes time and experience to get things going well. As a hobbyist, read up on it, take it in a little at a time. Every once in a while (like every year or two), review the principles again. Each time you do you will learn a little more because you'll be a little higher up the mountain.

December 10, 2018 03:26 PM
Krohm

Dangerous article. Of course everyone is free to state his/her opinions, but I feel like the cost of code maintenance is not taken into consideration at all. As far as I am concerned encapsulation is awesome... and I'm using Verilog these days!

Proper code engineering is difficult. I've seen more than one company dominate its competitors thanks to a well-engineered codebase, and more than one company biting the dust under the weight of unmaintainable code bases.

December 10, 2018 04:32 PM
MobilityWins

In Archer Voice: You Want to have Spaghetti Code? Cuz that's how you get spaghetti code

December 10, 2018 05:42 PM
Aceticon
4 hours ago, JWColeman said:

So, as a hobbyist, with minimal formal training in C++ or object oriented programming, how relevant is this stuff to me? Feels like just a lot of discussion waaaaay above my head rn :).

I started learning OO development ages ago from a Pascal and C background, still in Uni, because I felt my code was disorganized and hard to maintain and there must be a better way to do it. I was but a hobbyist back then, doing small projects (tiny even, if compared to many of the things I did later), but there was already enough frustration that I felt compelled to learn a different style of programming.

Even for a one-person hobbyist it's still worth it, because it really cuts down on the complexity of the whole codebase, allowing a single person to tackle far more complex projects, and it significantly reduces the things a coder needs to remember (or look up later if forgotten), thus reducing forgetfulness bugs as well as time wasted having to rediscover, from the code, shit one did some months before.

I strongly recommend you get the original "Design Patterns" book from the gang of four (not the most recent fashion-following me-too patterns books) and read the first 70 pages (you can ignore the actual patterns if you want). It is quite the intro to the Object Oriented Design Principles and, already 20 years ago, it addressed things like balancing the use of polymorphism with that of delegation.

December 10, 2018 05:58 PM
Guy Fleegman
6 hours ago, Aph3x said:

That's fine when you work alone.  Collaborating is a different story though.

 

5 hours ago, Aceticon said:

Unsurprisingly, it has been my experience that as soon as one moves beyond little projects and into anything major, a team of average but truly cooperating developers will outdeliver a team of prima-donnas every day of the week in productivity and quality.

(And I say this as having been one such "artist" and "prima-donna" earlier in my career)

 

I agree, with both sentiments. When working in a group or working on a project that has to be maintainable for the unforeseeable future, the whole project should follow a consistent methodology.

Bad code is bad code though. Trying to go through poorly written OO code is just as difficult as going through a different poorly written methodology. It all boils down to trying to read the mind of another programmer. That's why documentation is so important, even in OO coding. I'd take a logically structured, consistent, well documented code base that doesn't even meet my ideal conventions any day over a mediocre OO code base.

Maybe the reason why some programmers are less inclined towards OOP is that they lean towards a more free-flowing mind and a less structured mind (while others are the reverse)? I've been sent to places to fix someone else's software too so I do understand where the resentment and frustration comes from, but I haven't seen enough examples to say that OOP is clearly better than any other methodology. I've seen chaotic, bloated OO code and equally confusing non-OO code. Bad code is bad code... and good code is good code, no matter what methodology it employs.

I suppose when the software industry matures to the point of, say, the housing industry, we'd have "building code" requirements. It might happen one day, but there are so many different languages and coding environments that we're still in the wild west of software development, I feel.

Anyway, I'm not an opponent of OOP. I think it's a great methodology. It's easy to build bloat and overly complex dependencies, but when followed rigorously and thoughtfully, it's beautiful... I mean, it's well engineered and maintainable! Sorry, "beautiful" is one of those artsy-fartsy words that "you know who" tend to use. ?

 

December 10, 2018 06:20 PM
Anri

I've spent the last two years programming in Assembly and C for retro computers such as the ZX Spectrum and Megadrive, and OOP just did not make sense there at all. One quickly learns - the hard way - that memory is a rare commodity and processing power is almost limited to addition and subtraction, with multiplication and division coming in at a very high premium...

Your program and its functions (or in an object's case, "methods") need to be split into data preparation and processing, in a top-to-bottom fashion.  Calling even a single function can sometimes bring the program to its knees in terms of performance - local variables and passing data are not free-of-charge where memory is concerned.  And code...boy it takes far more code to do even the simplest of things...

Returning to OOP is like Marty McFly returning to 1985 - we have an abundance of processing power and memory, and boy are you glad to have kickass sound and graphics hardware to match! We missed you 3D soooo much!  But because we visited the past, we now understand how sloppy we have been with OOP in the present - treating objects like primitives, and creating them on the fly during method calls that are being called during a game loop...

I can say this with 100% confidence: if you spend time learning a structured language (C for argument's sake) on limited hardware, alongside your OOP, then you will not go wrong with OOP.

For giggles, I was interrogated as to why I was using C instead of Assembly for ZX Spectrum programs. It was very much like how this thread has played out! 

December 10, 2018 07:59 PM
Tape_Worm

This article, and the citations it presents, terrifies me. 

I dread the potential flood of things like: "why are we using that crappy ol' OOP!?", "Duh, data oriented is SO much better and OOP is garbage, I read it on Breitbart once!", etc...

I'm not even referring to the potential commentary on this site.  In my own professional world, I can see things like this coming up and having to spend energy (that I could be using for other things) on explaining why articles like this need to be ignored and why it'll be a very cold day in hell before I allow the rewrite of our code. Especially to placate those people who've bought into the software development equivalent of fake news.

As someone said before, OOP is a tool.  Like everything else, it has a time and a place and needs to be wielded correctly to avoid cutting your (or someone else's) fingers off.  Even principles like SOLID, which I endorse wholeheartedly, are nothing more than a set of guidelines/best practices and sometimes that can run counter to the solution of the problem you're trying to solve, but it doesn't mean you shouldn't try to follow them.

This kind of thing is basically the equivalent of saying, "hey, my phone has thumbprints on the screen, clearly thumbs are terrible and should be removed entirely and replaced with using my big toe!"

Personally, I'm of the mind that the only thing that's absolute, the "one true thing", is that you need to employ critical thinking skills when using your methodology/toolset/clothing style/haircut/etc... of choice. Otherwise you will end up making a straight up disaster.

December 11, 2018 05:42 PM
bdubreuil

I'm currently almost done reading the book Exceptional C++ by Herb Sutter. He says that people often model and implement object-oriented code badly, which may result in bad performance. OOP does not equal inheritance, by the way. 

 

After briefly reading your article, it seems you are biased. You just seem personally against OOP, but that's just your opinion.

 

Quote

The sentiment is good, but in practice, encapsulation at the granularity of an object or a class often leads to code trying to separate everything from everything else (including from itself). It generates tons of boilerplate: getters, setters, multiple constructors, odd methods, all trying to protect against mistakes that are unlikely to happen, on a scale too small to matter.

I agree with you on that for Java and C#, as it is unfortunately some sort of habit in those programming languages. However, for C++, a good programmer will ensure that data is encapsulated when necessary and only in its intended scope. The point is to keep other programmers from fiddling with data whose sole purpose is to be used in an orderly and specified manner within its scope. You must remember that any piece of code will probably be edited by another programmer. Besides, we could say that exposing all data would also generate noise for the programmers.

December 13, 2018 02:58 PM
Leonid Lazaryev

Pff... Is it the OOP paradigm you are blaming? Bad architectural decisions and someone's inability to write clean code are what you should blame instead.

December 14, 2018 07:41 PM
SNaidamast

If one were to read the history of the OOP development concepts starting with Simula-67 (the original OOP language; Norway, 1967), it would be found that the designers' original intention was to develop a way to achieve superior organization of the code of an application.  The result brought several other benefits such as data encapsulation, inheritance, polymorphism, and code re-use.

Problems arose when, like everything else in the Information Technology field, OOP was introduced to modern development with the release of Turbo Pascal 5.0.  Subsequently many developers began to promote the concepts of code re-use, data encapsulation, polymorphism, and inheritance without really understanding these concepts' limitations.  What happened then was the extreme hyping of OOP, just like we now have with several current paradigms (i.e. Agile), which in turn produced horribly designed applications.  This was the result of market reinforcement for the use of all these concepts without remembering the simplest one of all: code organization.

The first major project in New York City that incorporated OOP was a banking system, which was written up in one of the city's dailies.  The developers created a nightmare scenario with their inheritance design, believing that they could simply create inheritance hierarchies with infinite levels.  The reality of the matter is that inheritance should never really go beyond around 3 hierarchical levels, while avoiding the use of the "protected" attribute for methods.

Most failures with inheritance then were a result of misunderstanding the intents of the concept in the first place; like trying to apply it to any type of business requirement where it really wasn't needed.

The same holds true for all development paradigms.

As a result, I have to completely disagree with the author's contentions about the use of OOP.  And if he has never developed a large, monolithic, mainframe application using standardized, procedural techniques (prior to OOP), then he would not understand the inherent advantages of using OOP simply to better organize one's code.

No one has ever stated that to use OOP properly, one must use all of its concepts.  They are there for when they do make sense but it is the inherent capability to organize one's code that makes OOP development a superior paradigm to procedural based development endeavors.

I believe the author should take a second look at OOP before writing such disparaging remarks about it...

December 17, 2018 04:52 PM
locoflop

Essentially the classic way of programming is procedural, just like C: you can consider a file (or a cluster of files linked together) as a module. This module might have a public API that exposes global variables and functions. Other functions that the programmer wishes to hide, because they are very technical, are marked as static so they are accessible only within the file where they are declared.

On the other hand, in OOP imagine having such a module as described above, now called a class, and having the ability to do all sorts of things with it. This is the so-called flexibility or portability. Instead of having your module stuck in place like a tool shelf in your warehouse, you would be able to treat it like a toolbox and move it around and take it to different places with you.

 

December 17, 2018 06:19 PM
locoflop

However, all of the meaning of OOP emerges only when it is applied in a specific context, only when used within a design pattern. If your software entities form some sort of high-level structures, you want to organize them in better terms and have the ability to control them dynamically.

December 17, 2018 06:24 PM
Finalspace

OOP and myself, yeah. Its a like and dislike kind of scenario.
In 1998 (the saddest year i had so far in my life) i started out programming in borland delphi and was thrown into OOP from the very beginning. I had no one which teached me fundamentals and internet was still too expensive. The Delphi-Helpfile was the only thing which i learned from in the early days. But for some reason, i understood it from the very beginning, classes, inheritance, interfaces, static vs non-static, polymorphism, etc. So i was liking it from the very beginning. For decades i was coding in Delphi, also mixing in other languages like C++/Java, etc.

But since i started doing and seeing more and more professional work in the non-game development field, i started to see problems of over-using OOP.
There are so many people/experts out there, which abuses OOP to write the worst kind of software you can imagine -> barely working, exceptions everywhere, slow like hell, untestable, impossible to understand or to follow:
- Classes which are not classes
- Abstractions just for the sake of it
- Extendability without a reason
- Hiding everything just for the sake of it
- Using delegates/callbacks everywhere
- Using virtual functions for no reason
- Overuse of inheritance
- Misuse of polymorphism

If they would write it with less OOP´ness, the software would still be garbage - but i could at least understand it.
Unfortunatly this kind of shit, you will find all over the place - especially in expensive business software or in the java world.
But the main problem is, that those "experts" teach other people. This results in more people writing poor code, which makes me very sad :-( Another problem i often see, is that third party libraries or frameworks may forces you to write bad OOP code, due to its bad api design.

I am always surprised, how customers happiely use such software in production environments. Its like a miracle that those things work.

 

But what makes me so angry, that you can actually write good software when you use the proper tools at the right time, but people somehow have forgotten that or simply doesent care.

 

So the conclusion for me is:

OOP is totally fine, when well and not over-used.
If you easiely can follow the control flow of any kind of source, the chance are much higher that its well written - neitherless of its coding style.

December 18, 2018 10:19 AM
SillyCow
On 12/10/2018 at 12:07 PM, Aceticon said:

Maybe the single biggest nastiest problem I used to find (possibly the most frequent also) was when the work of 3 or 4 "coding artists" over a couple of years had pilled up into a bug-ridden unmaintainable mess of mismatched coding techniques and software design approaches.

 It didn't really matter if the one, two or even all of those developers was trully gifted - once the next one started doing their "art" their own way on top of a different style (and then the next and the next and the next) the thing quickly became unreadable due to different naming conventions, full of weird bugs due to mismatched assumptions (say, things like one person returning NULL arrays as meaning NOT_FOUND but another doing it by returning zero-size arrays) and vastly harder to grasp and mantain due to the huge number of traps when trying make use of code from different authors with different rules.

 

The "new" fashion in non-games architecture is "micro-services". In this approach you abstract everything. Even the compiler and the operating system. You get complete freedom of choice over you "art" style.

The assumption is: You should never re-use code across teams. A certain programmer/team can re-use their own code. However when something goes wrong and someone has to fix it: You just throw everything away, and let the new programmer start from scratch.

You do this by making sure that every little piece of code is completely encapsulated in its own server. (It even gets compiled separately.)

This has performance costs (because the APIs are usually needlessly network based).

It has boilerplate development costs (because the APIs are usually needlessly network based).

However... The joy of being able to fix a problem by ripping out someone else's code, and then using your favourite framework to solve the problem, is really enticing.

After having worked in this style for the past several years, I don't know if I like it or not. However it is a very interesting philosophy when you work on a very large project. Also, I think that the recent improvement in Docker containers makes it very manageable if you do it right. That said, the performance costs probably make it unsustainable for game dev.

 

December 18, 2018 01:41 PM
Aceticon
1 hour ago, SillyCow said:

The "new" fashion in non-games architecture is "micro-services". In this approach you abstract everything. Even the compiler and the operating system. You get complete freedom of choice over you "art" style.

The assumption is: You should never re-use code across teams. A certain programmer/team can re-use their own code. However when something goes wrong and someone has to fix it: You just throw everything away, and let the new programmer start from scratch.

You do this by making sure that every little piece of code is completely encapsulated in its own server. (It even gets compiled separately.)

This has performance costs (because the APIs are usually needlessly network based).

It has boilerplate development costs (because the APIs are usually needlessly network based).

However... The joy of being able to fix a problem by ripping out someone else's code, and then using your favourite framework to solve the problem, is really enticing.

After having worked in this style for the past several years, I don't know if I like it or not. However it is a very interesting philosophy when you work on a very large project. Also, I think that the recent improvement in Docker containers makes it very manageable if you do it right. That said, the performance costs probably make it unsustainable for game dev.

 

Whilst I have not worked in this style, I have designed systems architectures which made heavy use of segregating things into separate services (mostly to facilitate redundancy and scalability) and in my experience there is a significant cost associated with defining proper communications interfaces between services (aka APIs) and - maybe more importantly - changing them when changes of requirements result in changes to multiple "services".

In fact, the part of the secret in designing high performance distributed systems was to find a good balance between decoupling and performance (both program performance and software development process performance) and always be aware of fake decoupling (i.e. when things look like decoupled, but they only work as long as certain assumptions - such as, say, no more than X elements are sent - are the same inside the code on all sides).

The whole thing as you described it sounds like OO encapsulation but wrapped with a heavy layer that adds quite a lot of performance overhead and a whole new class of problems around things such as failure of request execution and API version mismatch (or even worse problems, if people decide to use networking between "services"), all the while seemingly not delivering anything of value (catering to programmer fashionism and prima-donna behaviours is not value, IMHO).

Both in the literature and my experience, the best level to have service APIs at is as self-contained, consistent business operations (i.e. ops which must be wholly executed or not executed at all), and I can only imagine how "interesting" things start getting with such high levels of service granularity as you seem to describe when dealing with things such as Database Transactions.

 

December 18, 2018 02:45 PM
Aceticon
4 hours ago, Finalspace said:

OOP and myself, yeah. Its a like and dislike kind of scenario.
In 1998 (the saddest year i had so far in my life) i started out programming in borland delphi and was thrown into OOP from the very beginning. I had no one which teached me fundamentals and internet was still too expensive. The Delphi-Helpfile was the only thing which i learned from in the early days. But for some reason, i understood it from the very beginning, classes, inheritance, interfaces, static vs non-static, polymorphism, etc. So i was liking it from the very beginning. For decades i was coding in Delphi, also mixing in other languages like C++/Java, etc.

But since i started doing and seeing more and more professional work in the non-game development field, i started to see problems of over-using OOP.
There are so many people/experts out there, which abuses OOP to write the worst kind of software you can imagine -> barely working, exceptions everywhere, slow like hell, untestable, impossible to understand or to follow:
- Classes which are not classes
- Abstractions just for the sake of it
- Extendability without a reason
- Hiding everything just for the sake of it
- Using delegates/callbacks everywhere
- Using virtual functions for no reason
- Overuse of inheritance
- Misuse of polymorphism

If they would write it with less OOP´ness, the software would still be garbage - but i could at least understand it.
Unfortunatly this kind of shit, you will find all over the place - especially in expensive business software or in the java world.
But the main problem is, that those "experts" teach other people. This results in more people writing poor code, which makes me very sad :-( Another problem i often see, is that third party libraries or frameworks may forces you to write bad OOP code, due to its bad api design.

I am always surprised, how customers happiely use such software in production environments. Its like a miracle that those things work.

 

But what makes me so angry, that you can actually write good software when you use the proper tools at the right time, but people somehow have forgotten that or simply doesent care.

 

So the conclusion for me is:

OOP is totally fine, when well and not over-used.
If you easiely can follow the control flow of any kind of source, the chance are much higher that its well written - neitherless of its coding style.

It was my personal experience, whilst going through a similar learning process myself in similar conditions (at about the same time, though luckily I jumped into Java and discovered the Design Patterns book early), that there is a stage when one has learned some Software Design and starts overengineering everything, resulting in such a heavy mass of things that "seem like a good idea" and "just in case it's needed" that it effectively defeats the purpose of the whole OO philosophy.

Eventually one starts doing things the KISS way, refactoring code when the conditions that defined a design decision change, and designing software driven by "what does this choice deliver and what does it cost, now and later", thus producing much more maintainable deliverable functionality (what code delivers matters vastly more than The Code), and faster.

Looking back, I would say this transition only properly happened to me about 10 years after I started working as a Software Developer.

I reckon this is the point when one transitions from Junior Software Designer to Experienced Software Designer. It takes time to get there and I suspect not that many make this transition.

There is a similar thing around Software Development Processes, which can be observed in, for example, how so many groups use things like Agile in a recipe-like fashion (and often ditching the most important non-programming bits) rather than elements of it selected based on the tradeoffs of what they deliver to the process versus what they cost (not just money cost, but also things like time, bug rates, flexibility, learning rates, etc) in the context of a specific environment (i.e. the subset of Agile for use in a big non-IT corporation is not at all the same as that for use in an Indie Game House).

 

PS. I think the bit you mentioned about the ignorant spreading their ignorance (and damn, I look at online tutorials and that shit is all over) is basically Dunning-Kruger in action, just like we see all over society and the Internet at the moment: people who have learned just enough to think they know a lot but not yet enough to understand just how much, much more they need to learn, are still very low in terms of knowledge in an area but at the peak of their self-confidence in terms of what they think they know, so they spread their ignorance around as if it's Wisdom, and do so with all the confidence of the truly ignorant.

The original post in this article is a stunning example of just that, which is probably why all the old sea-dog programmers around here seem to have jumped on it.

December 18, 2018 03:13 PM
Ivan Zherdev

OOP is all about interfaces, and interfaces are all about proper order.
If you write your code in the functional paradigm, you impose order, but only once... Any change to a data structure or an interface breaks everything... Big programs are very hard and expensive to write in C.

December 26, 2018 07:16 AM
_Silence_

And to give my two cents also.

If you look at the C++ standard library (and I believe no one here could call it a bad design, or bad OOP, since it has been designed by many C++ masters), we can see the following:

  • use of many classes
  • use of many 'global' functions
  • most classes use inheritance, even though one should not inherit from many of them
  • use of templates almost everywhere
  • use of namespaces (a few)
  • if we look for the virtual keyword in the public headers, there are some, but not that many, which could be explained by the third point: about 547 virtual functions across about 60 classes. A quick (and not really reliable) count of the class keyword gives about 6,000 classes, so roughly one class in ten is polymorphic (certainly more in reality, since the count is inflated by forward declarations, templates, comments...)
  • if one reads Scott Meyers' books (and I also believe he is no C++ duffer), he advises, for example, creating a class for non-copyable objects and having every class that must not be copyable inherit from it (therefore all base classes of polymorphic hierarchies). This certainly adds a lot of inheritance and makes hierarchies more complex; see the sketch after this list
  • if you have a look at Gtkmm, the C++ wrapper around the Gtk GUI toolkit, it makes heavy use of classes, inheritance, polymorphism, templates and a mix of all of that, and I don't believe Gtkmm does so merely to exercise features of C++ that supposedly nobody should use
  • and the list could go on
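
To make the point about the non-copyable base class concrete, here is a minimal sketch of the idiom (my own illustration, assuming C++11 or later; the class names are made up and this is not code from the standard library or from Meyers' books):

// Minimal sketch of the "non-copyable base class" idiom mentioned above.
class NonCopyable {
protected:
    NonCopyable() = default;
    ~NonCopyable() = default;
public:
    NonCopyable(const NonCopyable&) = delete;             // forbid copy construction
    NonCopyable& operator=(const NonCopyable&) = delete;  // forbid copy assignment
};

// Every polymorphic base that must not be copied now picks up an extra parent...
class Renderer : private NonCopyable {
public:
    virtual ~Renderer() = default;
    virtual void draw() = 0;
};

// ...and every concrete class sits one level deeper in the hierarchy.
class GlRenderer : public Renderer {
public:
    void draw() override { /* ... */ }
};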

Teachers are responsible for the over-use of (bad) OOP. But can they do otherwise? When you learn OOP, you learn that a dog is an animal and a cat is an animal, and that all of them should form a hierarchy. You learn that all animals can move (except some), so you need polymorphism. And you have only a few years of school to learn programming, OOP, C++, C#, Java, algorithms, C, functional programming... and many other things. After those 3 or 5 years you have to tell companies you are an expert, so you need to have covered all these aspects. We should also not forget that schools now target a job market where 90% of CS graduates will write 'business software', in which the people who decide only believe in what big companies offer them. So they will write classes in Java, then redo the same classes in Java with whatever so-called new technology gets built on top of Java a few years later. They will build windows and buttons, manage databases, and send and receive packets over TCP/IP with Java's means. All of those 90% of students have to be ready for this. And all of those 90% will never have to manage memory or do pointer arithmetic. All of those 90% of people will have to follow what Java tells them to do and follow the design pattern that Java believes is the one to follow. They will not have to think about other aspects, since Java will do it for them in its black box.

Also, I believe we can write bad OOP just as we can write bad imperative code. See for example the Win32 API: it does not prefix its functions and variables, which makes the library so intrusive that you cannot name your own function CreateWindow, for example. If you look at many other C libraries, they always use prefixes to avoid name clashes. A good alternative to the native Win32 library is the well-known Qt. It is C++ and uses OOP (inheritance, polymorphism...). Since namespaces did not exist when Qt was created, it uses prefixes (and still keeps them...). But Qt obliges you to put some weird declarations in all your classes, which is also very intrusive.
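
As a small illustration of the naming point, here is a hypothetical sketch (the mylib names are made up; this is not actual Win32 or Qt code) contrasting the C-style prefix convention with a C++ namespace:

#include <cstdio>

// C-style: the prefix "mylib_" keeps the global namespace clean,
// so an application can still define its own create_window().
struct mylib_window { const char* title; };
static mylib_window mylib_create_window(const char* title) { return { title }; }

// C++-style: a namespace does the same job without decorating every name.
namespace mylib {
    struct Window {
        const char* title;
        explicit Window(const char* t) : title(t) {}
    };
}

int main() {
    mylib_window a = mylib_create_window("hello");   // prefix convention
    mylib::Window b("hello");                        // namespace convention
    std::printf("%s %s\n", a.title, b.title);
}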

What I was trying to say in the previous paragraph is that bad programming is everywhere. Nothing is perfect. Even for a simple function, it is often very difficult to make it completely robust and reliable for every use (see many Stack Overflow topics, for example). Sometimes you use something else that is poorly designed or poorly implemented because of company policy. Sometimes it is because the project is old and, at the time of its creation, nothing better was known or practicable. What we did in game programming 30 years ago is not what was done 15 years ago, and is not what we do now. And for sure it will be different in a dozen years.

And to avoid bad programming we now have newer design methods such as data-oriented programming or ECS, and design patterns. Most of them make the programmer focus on what is important for the job they are meant for, but no more. They are not absolute salvation, and new problems will arise as soon as we reach another area of programming (e.g. networking, parallelism, user interfaces, cloud computing, AI, cryptography...). But at least now we know that, for this kind of interactive and graphical programming, we have the means to avoid doing things badly.

December 26, 2018 11:30 AM
sirius707

The way I see it, it's very easy to write horrible code in OOP. In fact, if you don't know what you're doing (which is usually the case nowadays), you will write horrible OOP code by default. So yes, its disadvantages greatly outweigh its advantages.

January 03, 2019 07:25 PM
DavinCreed
27 minutes ago, sirius707 said:

The way I see it, it's very easy to write horrible code in OOP. In fact, if you don't know what you're doing (which is usually the case nowadays), you will write horrible OOP code by default. So yes, its disadvantages greatly outweigh its advantages.

It's easy to write horrible code. It's actually difficult to write horrible code that follows OOP. It's difficult to follow OOP, though; it takes a lot of reading and experience. Here's the good thing: all that work learning about OOP and how to follow its principles makes you a better developer. OOP principles don't exist for no reason; they make development easier and more efficient, not just for solo devs but also when working on a team. I can't tell you how many times I've had to clean up code at work from a fellow developer who doesn't follow OOP principles, because it's easier and faster for me to refactor the code following OOP than to hack into the mess and try to add the required change without causing bugs and/or bad performance.

So its advantages far outweigh the disadvantages.

January 03, 2019 07:52 PM
Brain

Someone else here mentioned the thrill of gutting the previous dev's code and solving the problems with your own framework, basically a rewrite.

I haven't worked for a game development company in a professional capacity; however, I have worked, and still work, as software lead for several large commercial projects.

In any large, established commercial software project, a complete rewrite is commercial and professional suicide. By rewriting, you throw away years or even decades of development, bug fixes, enhancements and tweaks, and it's simply a fallacy to think you can just come in, rewrite it from scratch and retain those many decades of development. You're basically back to square one.

Here is a good example, one of many on Google: https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

It's pretty much an open-source thing: "I can do much better, throw that old crap away, ooh, new and shiny!"

Every now and again some rep drops by saying he can replace our bespoke ERP, with its several decades of development, with solution X; we immediately know to show him the door without delay.

Hope this helps someone else before they make such a fundamental mistake. (Note: don't confuse refactoring with rewriting; they're completely different beasts, and refactoring is often a great idea...)

January 03, 2019 10:16 PM
Xest

 

On 1/3/2019 at 10:16 PM, Brain said:

In any large, established commercial software project, a complete rewrite is commercial and professional suicide. By rewriting, you throw away years or even decades of development, bug fixes, enhancements and tweaks, and it's simply a fallacy to think you can just come in, rewrite it from scratch and retain those many decades of development. You're basically back to square one.

Sorry Brain, but I fundamentally disagree. I have done exactly this, and it has been nothing but successful, both for the project and for my career (I also don't work in game development; I work for a large multinational financial corporation). It's not easy, and it's not something that should be done on a whim, but to suggest it should never be done and will always end badly is plain stupid, and is precisely why so many users are stuck unhappily using crap software in so many circumstances.

I have a hard time believing that many projects that have been around for decades have had nothing but positive enhancements and have consistently avoided technical debt, given that the very concept of technical debt, and the will to tackle it, has only really come to the fore over the last decade. The fact that legacy projects have decades of cruft in them is precisely why you often replace them: the high levels of technical debt creating high maintenance costs, performance bottlenecks, security issues and so forth are typically rife in legacy software precisely because it was built before we knew how to make better software. If nothing else, I've yet to see a proprietary project over 15 years old that isn't completely and utterly awful, and I've seen many.

It's only really now, with the growth of DevOps and the commonplace automation of builds, testing and quality gates through tools such as Sonar and Fortify, that we're really beginning to build software we can be sure stays high quality. Sometimes square one is exactly where you want to be: on a new, fair board, when the game is snakes and ladders and the old board was rigged with snakes on every square from 2 to 100. Sometimes you just need to take a step back and ask your users what they're actually trying to do and what they actually want, rather than what they've been forced to do inefficiently through a variety of hacks and workarounds in legacy software they've come to assume is the only option, because no one ever showed them they could have something better.

As an example, over the last year we had a load of clients (including tier-1 banks) using legacy versions of our software, and of our competitors', approach us because they needed to achieve GDPR compliance. The reality is we could never have done so with the existing software; the lack of functionality around auditing, security, and tracking of data through the system meant that embedding that functionality into the existing version would have taken about 1.5x as long as starting from scratch did. Sure, starting from scratch meant we lost some obscure functionality that no one understood, but that ended up being a good thing, because on examination that functionality only existed as a fudge to get around deficiencies in the original software; thanks to rewriting it and doing it properly, we could solve their actual problem rather than have them rely on a shitty undocumented hack.

I'm not saying rewrites are always the right thing, or that they always turn out better (god only knows I sympathise with your comment about people who go "Ooh, shiny!"), but if you have talented staff who know how to build good software, if you can produce a sensible high-level plan with staged releases that shows continuous progress and a clear statement of what you're trying to achieve by rewriting from scratch, and if you have management buy-in because you've sold it on its tangible benefits, then why wouldn't you do it? What I am saying is that claiming rewrites are never the right thing is as utterly stupid as claiming rewrites are always the right thing.

Your first pass at a piece of software will never be your best pass; you'll always do it better the second time around. The same is true of rewriting someone else's software if you've worked on it sufficiently. If you haven't got talented devs, I understand where you're coming from, but any tech lead should be able to shepherd a team through a successful rewrite of a piece of software they're responsible for, and if they can't then they have no business being a tech lead. It's part and parcel of the job to find the cheapest options, both short and long term, for looking after a piece of software, and if the long-term answer is to replace it, which sometimes it will be, then they should be able to do that.

The idea that those who came before are beings of legend whom no one could ever best is nonsense; in fact, all too often those who came before didn't even understand OOP, because it was still young, and so they churned out low-quality, unmaintainable dross instead, much like the author of this article, in fact.

January 07, 2019 01:08 PM
Brain
9 hours ago, Xest said:

I have a hard time believing that many projects that have been around for decades have had nothing but positive enhancements and have consistently avoided technical debt, given that the very concept of technical debt, and the will to tackle it, has only really come to the fore over the last decade. The fact that legacy projects have decades of cruft in them is precisely why you often replace them: the high levels of technical debt creating high maintenance costs, performance bottlenecks, security issues and so forth are typically rife in legacy software precisely because it was built before we knew how to make better software. If nothing else, I've yet to see a proprietary project over 15 years old that isn't completely and utterly awful, and I've seen many.

I maintain one right now that is over ten years old and is still maintainable, neat and tidy.

The thing to aim for isn't to completely throw away and rewrite from scratch, as in the example I posted, but to treat development like painting the Forth Rail Bridge: rewrite a subsection at a time by careful refactoring, with each refactor considered, ensuring that each component is properly isolated and uses proper object-oriented design.

Methodologies such as Agile can and do encourage such refactoring, but only if time is set aside for it; the default in Agile is to accept only user stories and bug fixes, so features just pile in, repeatedly, along with their bugs.

By the time you reach the last component and have signed it off, and everything is "rewritten" (read: nicely refactored), you can start again.

There are even ways to completely change the paradigm of the program, for example switching from a really simple program where design and layout are badly merged with the business logic to one that uses, say, an MVC design.

I can't confirm that games do this, as they're generally more 'disposable'; however, it's plain to see in the source code of today's commercial engines (engines having more longevity than the games created in them), such as Unreal Engine, which has been refactored in this way all the way from UE3 to the current UE4 without any complete rewrite. You can even run diffs against the code and still find some remaining ancient code, the adage being "if it ain't broke, don't fix it".

 

January 07, 2019 10:46 PM
Aceticon
16 hours ago, Brain said:
On 1/3/2019 at 10:16 PM, Brain said:

In any large, established commercial software project, a complete rewrite is commercial and professional suicide. By rewriting, you throw away years or even decades of development, bug fixes, enhancements and tweaks, and it's simply a fallacy to think you can just come in, rewrite it from scratch and retain those many decades of development. You're basically back to square one.

---

It's pretty much an open source thing, "I can do much better, throw that old crap away, ooh new and shiny!"

The thing to aim for isn't to completely throw away and rewrite from scratch, as in the example I posted, but to treat development like painting the Forth Rail Bridge: rewrite a subsection at a time by careful refactoring, with each refactor considered, ensuring that each component is properly isolated and uses proper object-oriented design.

I've worked as a contractor for almost two decades in a couple of industries, with maybe 15-20 different companies, and have seen both situations where a complete rewrite was the chosen solution and others where continuous refactoring was.

I've also been brought in far too many times as the external (expensive) senior guy to fix the mess the software has turned into.

It really depends on the situation: continuous refactoring is the best option, in my opinion, if it is done from early on and with few interruptions, though it requires at least one or two people who know what they're doing rather than just the typical mid-level coders.

However, once a couple of significant requirement changes come through and are hacked into the code base, and/or a couple of people have been responsible for the software, each thinking they know best and writing code their own way, mismatched with the conventions already used in the code, the technical debt becomes so large that any additional requirement takes ages to implement. When that happens, the software has often reached a point where a full rewrite is a more viable solution than trying to live with it while refactoring it into something maintainable. This is even more the case if the software is frequently updated with new requirements.

My gut feeling is that where the balance lies depends on whether the business environment in which that software is used generates frequent requirement changes or not. In environments where there is a near-constant stream of new requirements, it's pretty much impossible to refactor large, important blocks, since any urgent new requirement that comes in is likely to impact that code and to have time constraints that are incompatible with the refactoring (you can't really refactor and write new code at the same time in the same area of the code).

That said, maybe half the full rewrites I worked on or saw done turned out to be very messy affairs all around, mostly because good business analysts and technical analysts are as rare as hen's teeth, so the software that ended up being made didn't actually implement the same user requirements as the old software.

January 08, 2019 02:46 PM
thaler.jonathan

I'm not sure your approach is really that much better, especially in terms of maintainability and robustness. When you talk about data-centric programming, (pure) functional programming immediately comes to mind, most prominently represented by Haskell. Interestingly, there is actual high-quality research on using Haskell for game programming; you might check out the following links:

https://dl.acm.org/citation.cfm?id=871897
https://dl.acm.org/citation.cfm?id=2643160
https://dl.acm.org/citation.cfm?id=3110246
https://dl.acm.org/citation.cfm?id=3122944
https://dl.acm.org/citation.cfm?id=3122957
https://dl.acm.org/citation.cfm?id=2976010
A quake3 clone in Haskell: https://wiki.haskell.org/Frag

Oh and I think you might be very interested in Ted Kaminski's blog: https://www.tedinski.com/

 

January 08, 2019 03:53 PM
thaler.jonathan

Also, you might have a look at Tim Sweeney's talk on programming languages in game programming (The Next Mainstream Programming Language): https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

January 10, 2019 12:29 PM
UnshavenBastard
On 12/10/2018 at 6:00 AM, Hodgman said:
  • Cross-cutting concerns - if the data was designed properly, then cross-cutting concerns aren't an issue. Also, the argument about where a function should be placed is more valid in languages like Java or C# which force everything into an object, but not in C++ where the use of free-functions is actually considered best practice (even in OO designs)

Nitpick about C#: you are not forced to have everything in an object. You can have static classes and write "using static" if you like, so from a usage standpoint there's hardly a difference between that and putting free functions in namespaces.

January 14, 2019 01:16 PM
SapphireSpire

I understand both sides of the discussion. OOP is good at keeping code organized and maintainable; however, it does introduce a heap of complexity that makes it difficult for anyone but the authors to understand it well enough to make good use of it. But the same applies with or without OOP. The real problem is the text: you can't simply glance at source code and see the overall structure of anything but a "Hello world!" project. You have to examine the files in detail and memorize a whole bunch of long, complicated names, which could take you the rest of your life. The core principles of OOP would work better in a general-purpose VPL.

January 29, 2019 05:00 AM
Draika the Dragon

As a learning programmer, still in uni, I am also starting to realise how difficult OOP is to deal with. Java was taught to me as a first-year coding language and I have been playing around with it a lot. I'm grateful for how readable Java developers make their code; it makes learning other people's code quite digestible. But the way code has to be structured into a deep hierarchy makes it really difficult to use. Last week I ran into a situation where, in order to implement a feature, I had to use reflection just to expose a variable in an API that was set to private simply because of the whole black-box ideology, along with a hacky subclass to adjust the behaviour of a method that used that variable. This doesn't feel right at all, but it was the only solution. I submitted it as an issue on the API's GitHub and the author went ahead and implemented the feature, as I didn't feel like forking the project just to make one thing public.

A lot of Java blogs tell you to hide instance variables behind getter and setter methods for encapsulation, but it just feels so cumbersome. Having a lot of getters and setters tends to hint that the variable really belongs in another class... or not. It really depends on what the variable is for, right? If it's basically a database class then it feels natural, but if it's an object that does things, it tends to feel a bit unnatural in use. Also, don't setters break the concept of immutability and give the class less control over its own behaviour? But without them some things just wouldn't work; how would you make a clock class without a setTime() somewhere? I know these arguments have been made many times before, but there should really be some concrete definition of at what point, or at what level, these features belong.

I had a few different classes in my game, and it got more and more difficult to add new features when I didn't know where a variable should go. I could put it in its proper, sensible class, but then I had to telescope its reference along a chain of other classes, and it ended up very coupled with other classes. This meant that whenever I wanted to add a feature I had to refactor the whole codebase, which got really annoying at one point because for one new feature I kept breaking 20 other features. At some point I ended up mimicking the MVC pattern. It feels more natural and sensible to have one class handle data that's serialisable (with basically public fields) and another class monitor that data and present view-level representations of it. For example, if I have a grid of game entities, I can store the entities in an array in one class, make modifications and send events in an update method, and let my view-level classes determine what the player sees based on what happened. It just feels so much less messy that way than coupling the behaviour to the visual representation of the events.
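
The commenter describes this in Java/libGDX terms; as a rough sketch of the same separation (plain serialisable data plus events in one class, a view class that only reads them), here is roughly what it might look like in C++, with all names made up for illustration:

#include <cstdio>
#include <vector>

// Plain, serialisable game data: public fields, no behaviour beyond updates.
struct Entity { int x = 0, y = 0, hp = 10; };

struct World {
    std::vector<Entity> entities;        // the grid/array of entities
    std::vector<int> damagedThisFrame;   // "events" produced by the update

    void update() {
        damagedThisFrame.clear();
        for (int i = 0; i < static_cast<int>(entities.size()); ++i)
            if (entities[i].hp < 5) damagedThisFrame.push_back(i);
    }
};

// View layer: reads the data and the events, decides what the player sees.
struct WorldView {
    void render(const World& w) {
        for (int index : w.damagedThisFrame)
            std::printf("entity %d flashes red (hp=%d)\n", index, w.entities[index].hp);
    }
};

int main() {
    World world;
    world.entities = { {0, 0, 3}, {1, 1, 9} };
    world.update();
    WorldView view;
    view.render(world);   // only entity 0 is reported as damaged
}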

To be specific, I'm using libGDX as my game development library. libGDX has a library called Scene2D, which is a 2D actor-graph library. When poking around forums and Discord, I noticed people really dislike it for anything except building UIs with its UI sub-library, despite its vast amount of features and its usefulness in other areas as a general 2D actor-graph library. One of its biggest disadvantages is that serialisation is not naturally supported, so saving a game's state to a file becomes really confusing. However, I feel it's more manageable when I have a backing class that stores all my entities as data structs, and I create actors from the data and handle behaviour dynamically.

Please let me know if my thoughts make sense; I am not the most experienced developer in the world, for sure, as I'm still learning things (I tend to be really ambitious and don't like making clones... good thing I didn't start off with an MMORPG, though).

February 07, 2019 10:51 AM
Dawoodoz

Coders without a computer science degree will often start with a class hierarchy without considering the problem, or whether they need to store any data to begin with. They hit a wall from isolating their code too much before they know what to encapsulate and for what purpose; they introduce a huge generic graph system that floods the instruction cache, fragments the data cache, stalls the memory bus, interrupts the execution window, trashes the heap with tiny fixed-size allocations, leaks memory through cycles, and crashes with null-pointer exceptions. Then I point out that all they need for all of that is a tiny global function using a traditional algorithm and pre-existing data structures, often implementable with a few SIMD assembly intrinsics and multi-threading for a 200x performance boost.
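
As a minimal sketch of the "tiny global function over pre-existing data structures" alternative being described (my own illustration, not the commenter's actual code; explicit SIMD intrinsics and threading are left out for brevity):

#include <cstdio>
#include <vector>

// Instead of a graph of small heap-allocated objects, keep the data in one
// contiguous array and apply a tiny free function to it. The compiler can
// auto-vectorise this loop; splitting the range across threads, or using
// SIMD intrinsics explicitly, would be the further step mentioned above.
void scale_positions(std::vector<float>& xs, float factor) {
    for (float& x : xs) x *= factor;
}

int main() {
    std::vector<float> positions = { 1.0f, 2.0f, 3.0f, 4.0f };
    scale_positions(positions, 2.0f);
    for (float x : positions) std::printf("%g ", x);
    std::printf("\n");   // prints: 2 4 6 8
}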

February 27, 2019 11:47 AM
VoxycDev

Rule of thumb: initially, write all your classes as unrelated, even if they share a common function or two. Only introduce OOP relationships where and when they are clearly necessary; for example, if 4 out of 10 functions in two classes are the same, that warrants the creation of a common ancestor.

April 25, 2019 10:32 PM
Rich Brighton

I think it might be helpful to distinguish between Java / C# and C++. Objects in Java / C# mean smart objects, which is an oxymoron. In Java, object-oriented programming would be better called subject-oriented programming.

I played around a bit with C++ while trying to learn DirectX 11 at the same time; it was a bit of a steep learning curve. Anyway, I abandoned that and got into C#. I soon started to fall in love with strongly typed code. It was amazing: 50% of the time, when my code compiled, it actually did what I wanted it to. My experience of C and C++ (I did some C and Pascal way back) was that getting the code to compile was only the beginning of a long, painful journey. With C# I could rip my code apart, restructure and rewire it (in other words, refactor), and quickly get it working again. I would even occasionally get the weird bug where the code was doing what I wanted it to do but I didn't know why. It's quite fun to try to track down why your code is working when it shouldn't be.

But I was also frustrated by the lack of multiple inheritance and other shortcomings. C# was designed to be better than Java, but not so much better that it would become a threat to the native C++ Windows platform. So as soon as I came across Scala, with its quasi multiple inheritance, it was good riddance to Microsoft: goodbye, and thanks for all the fish.

So from what I can make out, with very limited knowledge, the problem with C++ was Bjarne's "Not one CPU cycle left behind!" (relative to C). This was a great marketing slogan but a complete disaster in high-level language design. You don't need 100% runtime efficiency; 80% is good enough and allows for huge beneficial trade-offs in compile-time safety, run-time safety, compile times, reduced compiler bugs, the quality of tooling, ease of learning, etc. And so the problem with smart objects in C++ is that they are not very smart and can't even copy themselves properly.

So I see it as a choice, or rather a balance, between smart objects and dumb data. Java's "everything is a (smart) object" is dumb. Unfortunately Scala somewhat doubled down on this, but it is now sensibly looking to backtrack. Silent boxing leads to criminal and unnecessary inefficiency. An Int is dumb data. An integer doesn't know how to produce a string or convert itself to a double; an Int doesn't even know how to add itself to another Int. It requires operations to be applied to it. It has no methods. It has no identity. So to avoid boxing we must know the narrow type at compile time. Syntactically, however, we can still write these operations as if they were methods:

5.toString

myInt.toString

So in Scala there is usually a choice between trait/class-based inheritance and type classes: between a smart object that carries its methods around with it, to be dynamically dispatched at run time, and dumb data whose operations must be resolved at compile time. But the good thing is that you can still use type classes with smart objects (objects that inherit from AnyRef), and the type-class instances that perform the operations on the different data types can themselves inherit.

April 26, 2019 12:49 PM
Boo_the_space_hamster

I've been working as a developer in a Java shop for quite some time now, and yes, I do feel the author's pain. I must admit that there have been times when I dreamt of hiring a Terminator to kill James Gosling before he could invent Java, or of smashing up all the windows of the Oracle head office.

Maintaining a huge pile of Java (and we do Java EE, which adds an entire dimension of madness) can be a nightmare. I often feel like Miss Marple trying to chase a variable through layers of functions that just forward it to another function in another class, then hitting an interface and having to figure out which of the several implementations is used in this situation, then losing track of it completely, only to discover that it has been magically injected somewhere else through a deployment descriptor (a silently read config file). Often, full-text search through everything has been my best friend.

 

So yes, it is easy to create an object-oriented jungle, especially with the style that Java very much encourages: dividing everything into tiny objects that all jealously hide their data, lots of layering, hiding complexity in implicitly read config files, and dragging in frameworks for everything (and not maintaining them, because that is not a user story). But I am old enough to know that creating a C nightmare is at least as easy, especially if people try to be a bit too clever. Often the thing that keeps C programmers in line is that the whole thing will likely not compile, or will core dump quickly, if they go too far. I much prefer C++, if only because it doesn't try to bully you into some particular coding style, though in a corporate environment that advantage will probably disappear fast. I still think that OO has many very valuable aspects and should not be written off so easily. Used properly, it is a life-saver. Too bad it is so often used improperly.

 

Having said that, my current project uses an ECS approach and I have to say it works very nicely. It does have a specific use case, however. I think ECS works best with RPG or Civilization-style games, or a certain kind of physical simulation, because those have entities with loads of data and behaviour, and most of the behaviour touches only a relatively small part of the data. In a pure OO-like approach, those classes easily grow out of control, or they degenerate into a bag of pointers to sub-objects, which in a way converges towards some kind of ECS, where all the sub-objects take the role of components. My experience is that as long as you keep the systems small and simple, the concept is very modular and easy to maintain and expand. Often adding more features is no more than adding a component or two plus a system; it's easy to start small and scale up, whereas in a pure OO approach this requires much more planning. But I wouldn't like to do ECS without OO. ECS in pure C sounds like a real challenge to me.
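
A very small sketch of the "adding a feature is a component plus a system" idea, assuming a toy registry rather than any real ECS library (all names are made up for illustration; C++17):

#include <cstdio>
#include <unordered_map>

// Toy ECS sketch: entities are just IDs, components are plain structs stored
// per entity, and a "system" is a function that walks one component table.
using EntityId = int;

struct Position { float x = 0, y = 0; };
struct Velocity { float dx = 0, dy = 0; };

struct Registry {
    std::unordered_map<EntityId, Position> positions;
    std::unordered_map<EntityId, Velocity> velocities;
};

// Movement system: touches only the entities that have both components.
void movementSystem(Registry& r, float dt) {
    for (auto& [id, vel] : r.velocities) {
        auto it = r.positions.find(id);
        if (it != r.positions.end()) {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}

int main() {
    Registry world;
    world.positions[1] = {0, 0};
    world.velocities[1] = {1, 2};
    movementSystem(world, 0.5f);
    std::printf("entity 1 at (%g, %g)\n", world.positions[1].x, world.positions[1].y);
}

Adding another feature would then mean adding another component struct and another system function, without touching the existing ones.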

July 19, 2019 10:14 AM
bagger2

This article makes some bold statements but lacks proper argumentation. I think it's smart to always consider possible alternatives, but simply stating that OO is a bad idea is unsubstantiated.

September 08, 2020 10:09 AM
BuffaloRoundOut

(I tried many times to post this but gamedev.net was having some kind of severe nginx issue. I've lost all my italicization but that's not necessarily a bad thing!)

Response (favourable and 100% in agreement) to 'The Faster You Unlearn OOP, The Better For You And Your Software', December 10, 2018, by Dawid Ciężarkiewicz.

These articles are good, and beyond timely; I'm typing from 2022. OOP as some kind of "general solution" fell a couple of decades ago, but bright-eyed wonderlings still keep rediscovering it and reinventing the same wheel.plugin(mistake)s as always. Zed Shaw's excellent talk "The Web Will Die When OOP Dies" covers a lot of the same ground, argued just as well.

It's about now (and I see the dev world has not disappointed and has been true to form) that we see all the "Aww, you just don't understand it!" comments roll in.

Communism: it's fine, you're just not doing it right!

Lassez Faire Capitalism: it's fine, you're just not doing it right!

Gun Ownership: it's fine, you're just not doing it right!

Agile/DevOps: it's fine, you're just not doing it right!

and at the top of the pile, the original, the O.G. :

Object Orientation: it's fine, you're just not doing it right!

OOD is a paradigm of only limited applicability. It's highly domain-specific, and so are the language features that support it.

It's quite useful in some places, but not many. When you do have a sea of objects, and it matters that their interactions don't change their behaviour much, then yes, it's fine. But even computer games sometimes get into trouble with that approach, and they have forever been seen as OO's major consumer.

Just as an example, OO has NO place on the Web. The Web is batch; it's a 1960s paradigm: data in, munge, data out. Building giant graphs of OO structures on a page call is one of the stupidest approaches ever thought up, and a major reason many Web apps are so horribly inefficient and bug-prone. The Web is purely procedural. And no, ORM is not a valid thing; an RDBMS has nothing whatsoever to do with OO, and even the way one views the data is completely different in the two (they can coexist, however; e.g. an RDBMS can output transaction records for OO graph updates, resulting in a usefully searchable transaction log, but I digress).

And when you get to the gleefully over-applied, under-thought-out concept of plugins, you're really in a world of pain. SOLID, and other doctrines pulled out of somebody's rectum, insist that you can do everything inside classes, keep all classes atomic, and solve all problems with the magic fairy dust of inheritance.

You can't. It doesn't work that way. The real world isn't that neat and tidy, and neither is computer science. You often find that it's easier, and results in better code, to create a handler class, or even to skip the class altogether and just write a big, procedural case statement, for the various actions that can happen to your class population, rather than trying to inherit those handlers: 'if thing.is_this_class', not 'descendant.run_handler()'.
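
A tiny sketch of the "big procedural case statement" approach being described, using std::variant as one possible way to express it in C++ (the Door/Switch types are made up for illustration; C++17):

#include <cstdio>
#include <variant>
#include <vector>

// The "handler outside the class" approach: the things are plain data,
// and one procedural dispatch decides what happens to each of them,
// instead of every class inheriting a run_handler() override.
struct Door   { bool open = false; };
struct Switch { bool on = false; };
using Thing = std::variant<Door, Switch>;

void handle(Thing& t) {
    if (auto* d = std::get_if<Door>(&t)) {
        d->open = !d->open;                       // "if thing is a door..."
        std::printf("door %s\n", d->open ? "opened" : "closed");
    } else if (auto* s = std::get_if<Switch>(&t)) {
        s->on = !s->on;                           // "...else if it is a switch"
        std::printf("switch %s\n", s->on ? "on" : "off");
    }
}

int main() {
    std::vector<Thing> things = { Door{}, Switch{} };
    for (Thing& t : things) handle(t);
}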

This gets far worse when you have objects acting as handlers for other objects, i.e. 'plugins', as the author mentions above. I have seen far too many nightmare scenarios, and tried to debug them, wasting days (days!) of work, where you try, with all the tools available, to follow the execution path from one method, through another class, up its inheritance tree, out to its specific method call... only to find that it references another class, another method, and so on.

This isn't "You're just not doing it right!" It's trying to be "object-oriented" across the board. You can't. You have to keep OO in its domain-specific kingdom, and go procedural, functional, ladder-logic (yes, I am that old!), or whatever else, where it's required.

OO is just a 'thing'. It's not, as I started out by saying, some kind of magic, general "solution". And it can go very wrong, through no fault of the developer; sometimes (often) OO is just in no way suited to the shape of the problem, no matter HOW you graph it out.

Oh, and the work of pure fiction entitled "Design Patterns: Elements of Reusable Object-Oriented Software" (thankfully this book is fading from common memory) should be abandoned, banished, shunned and expunged from development history. Gang of Four, go sell your contrived, concocted, Smalltalk-based crazy somewhere else. The rest of us are interested in maintainability, efficiency and clarity, and through those, reliability and security.

February 23, 2022 04:30 AM