We are about to develop a Python wrapper for a REST API in our platform. The platform currently exposes 12 different APIs, which are unrelated in function, do not necessarily share the same base URL, and use different authentication methods. They are related only in the sense that they are all part of the same platform. At the moment we are developing the client for a single API (out of the twelve), but it is very likely that the remaining APIs will get Python wrappers in the future.
To avoid headaches down the road, we have been discussing the structure of the package and cannot fully agree on it.
One point of view is creating (ultimately) 12 different packages, one for each API. That is (names are non-Pythonic for the sake of clarity):
$ pip install platform_apiclient_foo
$ pip install platform_apiclient_bar
$ pip install platform_apiclient_spam
...
and use them in code like:
from platform_apiclient_foo import FooClient
from platform_apiclient_bar import BarClient
The argument for this is that the APIs do not share any functionality and thus their clients should be separated.
The second point of view is creating one package encompassing all current and possible future APIs, which would require installing a single package:
$ pip install platform_apiclient
For each API, there would be a separate module, leading to this usage:
from platform_apiclient.spam import SpamClient
from platform_apiclient.foo import FooClient
I am definitely a fan of the second approach, as it seems easier to maintain (from the developers' perspective), easier to use (from the user's perspective), and might benefit from at least some code reuse. An argument against the second approach is how to account for different versions of the individual REST APIs (although this might be solvable by an argument when instantiating the client class).
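To make the code-reuse and versioning points concrete, here is a minimal sketch of what the shared package could look like. Everything in it is hypothetical: the BaseClient class, the api_version and auth parameters, the list_widgets method, and the example URL are illustrations, not an existing API.

# Hypothetical platform_apiclient/base.py
import requests

class BaseClient:
    """Shared plumbing for all platform API clients: session, auth, versioning."""

    def __init__(self, base_url, auth=None, api_version="v1"):
        self.base_url = base_url.rstrip("/")
        self.api_version = api_version
        self.session = requests.Session()
        if auth is not None:
            # requests accepts a (user, password) tuple or an AuthBase instance,
            # so each API can plug in its own authentication method
            self.session.auth = auth

    def _get(self, path, **params):
        url = f"{self.base_url}/{self.api_version}/{path.lstrip('/')}"
        response = self.session.get(url, params=params)
        response.raise_for_status()
        return response.json()

# Hypothetical platform_apiclient/foo.py
# (in the real package this would do `from .base import BaseClient`)
class FooClient(BaseClient):
    def __init__(self, auth=None, api_version="v1"):
        super().__init__("https://foo.example.com/api",
                         auth=auth, api_version=api_version)

    def list_widgets(self):
        return self._get("widgets")

# The REST API version is then chosen per client instance:
client = FooClient(api_version="v2")
widgets = client.list_widgets()

With this layout, the version concern above is handled by a constructor argument, and differences in base URL and authentication stay local to each API's subclass while the HTTP plumbing is reused.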
Which of the two designs seems more appropriate? Are there any other ways to solve this? Am I missing any arguments for or against either of the two?