I don't understand why numeric filters are included. The library is written in Python, so shouldn't a lambda-based filter be roughly as fast but much easier and clearer to write?
I'm not the author, but this implementation has the benefit of being a JSON-compatible DSL that you can serialize. Maybe that's intentional, maybe not.
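For instance, the query is plain data, so it can round-trip through JSON, which a lambda can't. Here's a minimal sketch assuming the Q usage shown in the README example below; everything else is standard library:

    import json
    from leopards import Q

    query = {"name__contains": "k", "age__lt": 20}
    wire = json.dumps(query)       # the whole filter travels as plain JSON
    restored = json.loads(wire)    # e.g. read back from a config file or an API request

    data = [{"name": "John", "age": "16"}, {"name": "Mike", "age": "19"}]
    print(list(Q(data, restored)))  # [{'name': 'Mike', 'age': '19'}]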
It does look like Python's comprehensions would be a better choice if you're writing them by hand anyway.
Yeah, in my opinion Python's list comprehensions are more readable and easier to check.
Here's the usage example from the README:
    from leopards import Q

    l = [{"name": "John", "age": "16"}, {"name": "Mike", "age": "19"}, {"name": "Sarah", "age": "21"}]
    filtered = Q(l, {"name__contains": "k", "age__lt": 20})
    print(list(filtered))
Versus:
    [x for x in l if 'k' in x['name'] and int(x['age']) < 20]
Outputs:
    [{'name': 'Mike', 'age': '19'}]
Also from the README:
> Even though age was str in the dict, because the value in the query dict was int, Leopards converted the value in the dict automatically to match the query data type. This behaviour can be stopped by passing False to the convert_types parameter.
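That conversion matters more than it looks: comparing numeric strings directly is lexicographic, which silently gives wrong answers. A small plain-Python illustration (not Leopards itself):

    ages = [{"age": "9"}, {"age": "19"}, {"age": "21"}]

    # String comparison is lexicographic: "9" < "20" is False, since "9" > "2"
    print([x for x in ages if x["age"] < "20"])     # [{'age': '19'}]

    # Converting first gives the numeric answer
    print([x for x in ages if int(x["age"]) < 20])  # [{'age': '9'}, {'age': '19'}]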
If you are concerned that your Python is making single-digit extra function calls, then you should be using a different language. (Or you might want a C extension that can be called from Python.)
That said, it's trivial to apply multiple filter lambdas in one pass -- the most natural way is a comprehension.
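For example, here's a sketch that folds any number of predicates into a single pass (the predicate list is invented for illustration, reusing the README's data):

    conds = [
        lambda x: "k" in x["name"],
        lambda x: int(x["age"]) < 20,
    ]
    l = [{"name": "John", "age": "16"}, {"name": "Mike", "age": "19"}, {"name": "Sarah", "age": "21"}]
    print([x for x in l if all(c(x) for c in conds)])  # [{'name': 'Mike', 'age': '19'}]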
Still, you might be surprised by how fast filter(cond_1, filter(cond_2, data)) actually is; in Python 3, filter is lazy, so nesting it is still a single pass over the data. The OP didn't present that performance comparison, and I don't see any reason they gave for avoiding comprehensions.
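If anyone wants to check, a rough timeit sketch along these lines would do it (the data and sizes are made up; numbers will vary by machine):

    import timeit

    data = [{"name": "John", "age": "16"}, {"name": "Mike", "age": "19"}] * 500_000

    def nested_filter():
        return list(filter(lambda x: int(x["age"]) < 20,
                           filter(lambda x: "k" in x["name"], data)))

    def comprehension():
        return [x for x in data if "k" in x["name"] and int(x["age"]) < 20]

    print(timeit.timeit(nested_filter, number=3))   # seconds for 3 runs
    print(timeit.timeit(comprehension, number=3))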
Why would you assume single-digit extra calls? The overhead is per element: if the list has N million items, you pay that constant multiple of N million extra function calls. That's non-trivial overhead in production applications.
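To put a rough number on the per-call cost, here is a micro-benchmark sketch comparing an inline condition against the same logic behind lambda calls (the predicates are invented for illustration; absolute numbers depend on the machine):

    import timeit

    data = list(range(1_000_000))
    conds = [lambda x: x % 2 == 0, lambda x: x % 3 == 0]

    inline    = "[x for x in data if x % 2 == 0 and x % 3 == 0]"
    via_calls = "[x for x in data if all(c(x) for c in conds)]"

    print(timeit.timeit(inline, number=5, globals={"data": data}))
    print(timeit.timeit(via_calls, number=5, globals={"data": data, "conds": conds}))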