I have a question about the general case of implementing functionality regarding generated code/classes from Protocol buffers. General in the sense that, the problem that I am having is in Java right now, but the question is applicable to object-oriented programming as a whole...
Basically, the way serialization and deserialization normally work in my project is that I write custom JSON serializers and deserializers for each language I'm working in; these convert JSON messages into classes whose fields come from the JSON, plus custom methods that I write.
Using protocol buffers gives me a number of advantages, and an important one, in my mind, is that the automatically generated code means I don't have to write serializers and deserializers by hand. However, it is unclear what the proper way is to implement logic that involves this deserialized data. The official tutorial (https://developers.google.com/protocol-buffers/docs/javatutorial#parsing-and-serialization) warns against using inheritance for a situation like this:
Protocol buffer classes are basically data holders (like structs in C) that don't provide additional functionality; they don't make good first class citizens in an object model. If you want to add richer behavior to a generated class, the best way to do this is to wrap the generated protocol buffer class in an application-specific class. Wrapping protocol buffers is also a good idea if you don't have control over the design of the .proto file (if, say, you're reusing one from another project). In that case, you can use the wrapper class to craft an interface better suited to the unique environment of your application: hiding some data and methods, exposing convenience functions, etc. You should never add behavior to the generated classes by inheriting from them. This will break internal mechanisms and is not good object-oriented practice anyway.
But I'm not sure what is supposed to be done instead...
My options appear to be:
- Ignoring this warning and creating a class hierarchy anyway. This means either modifying the automatically generated protobuf code, which I don't want to do, or, as in this post (https://blog.teamdev.com/protobuf-serialization-and-beyond-part-3-value-objects-e3dc7b935ac), using a protoc plugin to add the inheritance structure to the generated types automatically. A closed GitHub issue on protobuf (https://github.com/protocolbuffers/protobuf/issues/4481) discusses problems with this solution as well: although technically supported, plugins are not well documented or supported, and the approach still goes against the warning from the tutorial. There may be some kind of casting wizardry I can do to make this work without modifying the generated protobuf code, but I'm not sure. This method seems to make the system easiest to change, since it responds automatically to changes in the underlying schema: I would write only the methods that serve a real purpose (not getters and setters), and those would produce compile-time errors if their underlying fields no longer exist, ensuring that a schema change requires changes only to application code (exactly what I want).
- Creating a wrapper class, as the warning suggests, that encapsulates the generated code. This follows the delegation design pattern, which to my understanding is not easily implemented in every language. In Java, for example, the delegation must be written by hand and updated every time the protobuf schema changes, which is obviously not ideal, and I'm not sure how I would implement the Decorator/Adapter patterns I'm looking for without either inheritance (see the first point) or code duplication. This also gets pretty unwieldy as the schema grows, and without an easy way to implement delegation I think it is unsustainable. (This Stack Overflow answer and the comment below it capture my concern with this method: https://stackoverflow.com/a/16360532/17885960.) There is Lombok's experimental @Delegate annotation, and Scala, for example, supports delegation natively, but I'm not sure this is the best way forward...
- Not using protobuf-generated classes in domain logic at all: manually creating classes with custom logic and mapping the protobuf messages / generated classes to these custom objects as data enters and leaves the application. This is basically what happens currently with JSON in the application, except that instead of mapping JSON directly into my classes, I would first parse the messages into protobuf-generated classes and then convert those into the custom classes. I might be able to use something like Gson in Java to automate the conversion between the two types (protobuf-generated class -> JSON -> custom class and vice versa), but that adds overhead to the system that never existed before (instead of just parsing JSON into a class, I'm parsing JSON into a class and then converting that class into another class), and I still wouldn't benefit from the methods the protobuf code generator provides. This method is the most serialization-format-agnostic, since the same classes that support JSON can support protobufs, but it also seems to need the most boilerplate of all the options: I would be duplicating all the getters and setters of the protobuf-generated classes, as well as writing the domain logic, as well as writing converter classes/methods. This method is suggested in the following threads: https://softwareengineering.stackexchange.com/a/170822, https://stackoverflow.com/a/16360532/17885960, https://groups.google.com/g/protobuf/c/pABidPhYqIo.
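To make the first option concrete: as I understand the plugin approach, it amounts to making the generated class implement an application-defined interface (rather than extend a base class), with behavior living in default methods that call only the generated getters. A minimal sketch of that idea, where `ItemProto` and `Priced` are hypothetical names and `ItemProto` is a hand-written stand-in for a class that protoc would really generate (with the `implements` clause injected by the plugin):

```java
// Application-defined interface; the default method adds behavior purely
// in terms of the generated getter, so no generated code is duplicated.
interface Priced {
    long getPriceCents(); // must match the generated getter's signature

    default String formattedPrice() {
        return String.format("$%d.%02d", getPriceCents() / 100, getPriceCents() % 100);
    }
}

// Hand-written stand-in for a protobuf-generated message class; with the
// plugin approach, protoc would emit "implements Priced" on the real class.
final class ItemProto implements Priced {
    private final long priceCents;

    ItemProto(long priceCents) { this.priceCents = priceCents; }

    @Override
    public long getPriceCents() { return priceCents; }
}
```

If the schema field backing `getPriceCents` were removed, the generated class would no longer satisfy the interface and compilation would fail, which is the schema-change safety described above.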
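For the wrapper option, here is a minimal Java sketch of what the tutorial recommends. `OrderProto` is a hand-written stand-in for a protobuf-generated message (a real one is immutable and constructed via a builder), and all names here are hypothetical:

```java
// Hand-written stand-in for a protobuf-generated message class.
final class OrderProto {
    private final String id;
    private final long totalCents;

    OrderProto(String id, long totalCents) {
        this.id = id;
        this.totalCents = totalCents;
    }

    String getId() { return id; }
    long getTotalCents() { return totalCents; }
}

// Application-specific wrapper: composition instead of inheritance.
// It exposes only what the domain needs and adds behavior; the
// delegating accessor (id()) is the part that must be kept in sync
// with the schema by hand, which is the maintenance cost described above.
final class Order {
    private final OrderProto proto;

    Order(OrderProto proto) { this.proto = proto; }

    String id() { return proto.getId(); } // hand-written delegation

    boolean isFree() { return proto.getTotalCents() == 0; } // domain logic
}
```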
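And for the third option, a sketch of mapping at the boundary: the domain class has no protobuf dependency at all, and a mapper is the only code that knows both types. `UserProto` is again a hand-written stand-in for a generated class, and the names are hypothetical:

```java
// Hand-written stand-in for a protobuf-generated message class.
final class UserProto {
    private final String name;
    private final int age;

    UserProto(String name, int age) { this.name = name; this.age = age; }

    String getName() { return name; }
    int getAge() { return age; }
}

// Plain domain object: no protobuf types anywhere, so it works
// unchanged whether the wire format is protobuf or JSON.
final class User {
    final String name;
    final int age;

    User(String name, int age) { this.name = name; this.age = age; }

    boolean isAdult() { return age >= 18; }
}

// Boundary mapper: the converter boilerplate this option requires
// is concentrated here, at the edge of the application.
final class UserMapper {
    static User fromProto(UserProto p) { return new User(p.getName(), p.getAge()); }
    static UserProto toProto(User u) { return new UserProto(u.name, u.age); }
}
```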
The threads I find on this topic are old and sparse - is this a rare problem to have, is it too niche, or am I just overthinking things?
What is the best way forward, and how can I better understand how to think about and solve this kind of problem in the future?