In the last blog post I explained the answering mechanism in corobo. Now it's time to deploy it. Deployment is rarely as easy as working locally, but Docker makes it easier.
The answering module uses spaCy, which requires a trained model to work. That model is about 1 GB of data. Baking it into the corobo Dockerfile didn't quite make sense, because it would increase the size of the image by 1 GB.
Since we want corobo to be adoptable by other orgs, and since the answering plugin is coala-specific (for now), we didn't include the answering module in the corobo Docker image.
Instead, we created a microservice to expose the answering infrastructure. It has two endpoints:

- `/answer`, which grabs the question from the `question` query parameter and returns an array of answers with scores.
- `/summarize`, which grabs the text to be summarized from the `text` query parameter and returns a JSON response containing the summarized text as the value.
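To make the request/response shapes concrete, here is a minimal sketch of the two endpoints using only the standard library. Everything inside the handlers is a placeholder: the real service runs spaCy, and the `summarized_text` JSON key and the sample answers are assumptions, not the actual API.

```python
import json
from urllib.parse import urlparse, parse_qs

def get_answers(question):
    # Hypothetical stand-in: the real service queries the spaCy-backed
    # answering infrastructure. Only the shape matters here: an array
    # of answers with scores.
    return [{"answer": "Install coala with pip.", "score": 0.92}]

def summarize(text):
    # Placeholder summarizer: the real service condenses the text.
    return text.split(".")[0] + "."

def handle(url):
    """Dispatch a request URL to the /answer or /summarize endpoint."""
    parts = urlparse(url)
    params = parse_qs(parts.query)
    if parts.path == "/answer":
        # /answer reads the `question` query parameter.
        return json.dumps(get_answers(params["question"][0]))
    if parts.path == "/summarize":
        # /summarize reads the `text` query parameter; the JSON key
        # name here is assumed, not taken from the real service.
        return json.dumps({"summarized_text": summarize(params["text"][0])})
    return json.dumps({"error": "unknown endpoint"})
```

A real deployment would put these handlers behind an HTTP server, but the endpoint paths and query parameters match the description above.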
Now, the answer for a question is retrieved by making a call to the `/answer` endpoint, and the result is then summarized by calling the `/summarize` endpoint.
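This two-step flow can be sketched as below. The base URL and the `fetch` callable are assumptions for illustration; in corobo the service address comes from configuration and the requests go over HTTP.

```python
from urllib.parse import urlencode

# Hypothetical base URL; the real deployment address is configured elsewhere.
ANSWER_SERVICE = "http://answers:8000"

def answer_url(question):
    """Build the /answer request URL with the `question` query parameter."""
    return f"{ANSWER_SERVICE}/answer?{urlencode({'question': question})}"

def summarize_url(text):
    """Build the /summarize request URL with the `text` query parameter."""
    return f"{ANSWER_SERVICE}/summarize?{urlencode({'text': text})}"

def answer_and_summarize(question, fetch):
    """Two-step flow: fetch scored answers, then summarize the best one.

    `fetch` is any callable that GETs a URL and returns parsed JSON,
    so the network layer stays pluggable (and testable).
    """
    answers = fetch(answer_url(question))        # array of {"answer", "score"}
    best = max(answers, key=lambda a: a["score"])
    return fetch(summarize_url(best["answer"]))  # JSON with the summary
```

Keeping the HTTP layer behind a `fetch` parameter is just a sketching convenience; it lets the flow be exercised without a running service.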
The microservice is deployed in its own Docker container, with its own Docker image and, of course, its own Dockerfile. This decouples the answering dependencies from corobo: other orgs' corobo instances aren't affected, and the corobo Docker image doesn't grow.
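As a rough picture of that split, a docker-compose sketch might look like the following. The service names, image tags, and port are placeholders, not the actual coala deployment config.

```yaml
# Hypothetical docker-compose sketch of the two-container setup.
version: "3"
services:
  corobo:
    image: coala/corobo     # stays small: no spaCy model inside
    depends_on:
      - answers
  answers:
    image: coala/answers    # bundles the ~1 GB spaCy model
    ports:
      - "8000:8000"
```

Only the `answers` image carries the heavy model, so orgs that don't need the coala-specific answering plugin simply don't run that container.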