Memory usage is still fairly high and noticeably hurts performance on an ordinary computer; please look into whether it can be reduced. Would it be possible to support remote invocation (e.g. letting us deploy the model on our own server and receive the recognized text via a callback)? Alternatively, could optional, more lightweight models be provided?
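To make the request concrete, here is a minimal sketch of the remote-invocation flow being asked for, in Python. Every name here (`recognize`, `submit`) is hypothetical; in a real deployment the client would send the audio over HTTP to the self-hosted server instead of calling a local function.

```python
from typing import Callable

def recognize(audio: bytes) -> str:
    """Stand-in for the server-side model; a real deployment would run
    the (ideally lightweight) recognition model here."""
    return f"<transcript of {len(audio)} bytes>"

def submit(audio: bytes, on_result: Callable[[str], None]) -> None:
    """Client-facing entry point: rather than running the model locally,
    forward the audio to the server and deliver the text via callback."""
    text = recognize(audio)  # would be a network request in practice
    on_result(text)

# Usage: the caller only supplies a callback and never loads the model.
results = []
submit(b"\x00" * 16, results.append)
print(results[0])
```

The point of the callback shape is that the heavy model never has to reside on the user's machine; the client only holds the audio and a function to receive the text.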
Completed
Feature Request
4 months ago

Teng Fu