A predictive model is a simple way to set up AI-based predictions and recommendations in Workflows based on your data. Save time by automating decisions with machine-learning-driven insights.
A predictive model takes an input, like all the data in a table, then estimates the probability of an event based on that information.
If you provide the input and the event to predict, the model takes care of the guessing.
Setting up a predictive model in Catalytic is simple, and you can use our Predictive Model Qualifier to figure out if you have a use case that’s ready for AI.
Predictive models can be used to automate and predict common cognitive actions, like categorizing or classifying an email. For example, you can use predictive models to:
- Determine if a company expense should be automatically approved
- Predict the likelihood that a sales opportunity will close within the next quarter
- Categorize and route an email based on text in the subject and body
- Recommend a compensation bonus amount based on employee performance data
- Suggest the best leader for a project based on project criteria
Each predictive model is based on a data table, so you will first need to have your data in a data table before following these steps to create a predictive model:
- From the Workflow Settings page, open the Data Tables section, then select Add a Predictive Model.
- On the next screen, select Create Predictive Model.
- Name the model. For What would you like to analyze?, select the data table you want to base the predictions on.
- For Which field would you like to target for prediction?, select the field to target for the prediction.
- After you pick a target, all other fields in the table are included in What fields should be included in the analysis?. Remove fields that should not be included in the analysis.
- Select Create Model.
Depending on the amount of data in the model, the predictive model may take over an hour to train initially. You will receive an email when training completes.
Once you have trained a predictive model, use the Field: Make a prediction action to make predictions. The action predicts the value of a target field, which can be a Single Choice, Integer, Decimal, or True/False type field.
When first using the predictive model, we recommend a simple guided implementation that leaves decisions up to people while the model increases in accuracy.
The accuracy of predictive models increases with more data and more predictions. A minimum of 300 rows is a good starting point.
When adding the predictive model to a Workflow, start with the guided implementation:
- Use the Field: Make a prediction action to rate the confidence of a prediction.
- Include the prediction confidence and prediction input in the task instructions, and let a person make the decision.
As each decision is made, prediction accuracy will increase under the guidance of a person. Once accuracy is high, you can remove the manual task and set conditions on the confidence level to act on predictions automatically.
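The guided-to-automated transition can be sketched as a simple confidence threshold. This is an illustrative sketch, not Catalytic's API: the threshold value, function, and return strings are invented for the example.

```python
# Hypothetical sketch: route low-confidence predictions to a person,
# and act automatically once confidence clears a threshold.
# The threshold and helper are illustrative, not part of Catalytic.

CONFIDENCE_THRESHOLD = 0.90  # raise or lower as model accuracy improves

def route_prediction(prediction: str, confidence: float) -> str:
    """Decide whether to act on a prediction or ask a person."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {prediction}"   # act on the model's answer
    return "manual review"             # surface prediction + confidence in a task

print(route_prediction("approve", 0.97))  # high confidence: automated
print(route_prediction("approve", 0.60))  # low confidence: a person decides
```

Starting with a high threshold keeps people in the loop for all but the most certain predictions, which matches the guided approach above.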
The predictive model is based on a logistic regression machine learning algorithm, an example of supervised learning that takes a number of variables as input, and predicts the most likely value of a single output variable based on trends that it has learned from historical examples. It uses Amazon’s machine learning service.
Binary Classification, Multi-Class Classification, Regression, and Natural Language Processing are all supported within the predictive model.
Our natural language classification is based on a bag-of-words approach using a logistic regression model. The algorithm automatically determines the set of terms that are most predictive of a given category, and calculates weights to apply to each term. Natural language features are considered along with all other input fields that are provided to the predictive model.
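As a rough illustration of the bag-of-words approach described above, here is a minimal binary text classifier in plain Python. The example documents, vocabulary handling, and training loop are simplified stand-ins, not Catalytic's implementation.

```python
import math
from collections import Counter

# Tiny labeled dataset: 1 = "billing" emails, 0 = "technical" emails.
# Purely illustrative data for the sketch.
docs = [
    ("refund request for my order", 1),
    ("invoice payment overdue", 1),
    ("server is down again", 0),
    ("cannot log in to the app", 0),
]

# Bag of words: one feature per term in the vocabulary.
vocab = sorted({w for text, _ in docs for w in text.split()})

def featurize(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained with plain stochastic gradient descent;
# each term ends up with a learned weight, as described above.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for _ in range(200):
    for text, label in docs:
        x = featurize(text)
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - label
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

# Terms like "refund" and "invoice" acquire positive weights, so a new
# message containing them scores a high "billing" probability.
p = sigmoid(sum(w * xi for w, xi in zip(weights, featurize("refund for invoice"))) + bias)
print(round(p, 2))  # probability well above 0.5
```

In practice the predictive model combines these text features with any other input fields you include, as noted above.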
The Accuracy for the model is communicated as “Low”, “Moderate”, “High”, or “Very High” in Catalytic. To assess the accuracy, we set aside a percentage of the training dataset for evaluation, and run the trained model against that evaluation dataset. We then compute an F1 score over the results to determine accuracy.
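The evaluation step can be sketched as follows. The F1 computation is standard, but the cutoffs mapping scores to "Low" through "Very High" are invented for illustration; Catalytic's actual thresholds aren't stated here.

```python
# Sketch of the evaluation step: score predictions made on a held-out
# slice of the training data with an F1 score, then map the score to a
# coarse label. The label thresholds are hypothetical.

def f1_score(actual, predicted, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(a == p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def accuracy_label(f1):
    """Map an F1 score to a coarse label (cutoffs are illustrative)."""
    if f1 >= 0.9:
        return "Very High"
    if f1 >= 0.75:
        return "High"
    if f1 >= 0.5:
        return "Moderate"
    return "Low"

actual    = [1, 1, 0, 1, 0, 0, 1, 0]   # held-out ground truth
predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # model output on the holdout set
score = f1_score(actual, predicted)
print(round(score, 2), accuracy_label(score))  # 0.75 High
```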
Every time the model performs a prediction, it provides a confidence level, which is essentially the probability that it thinks it is right. We scale the probability by the F1 accuracy score so that a model with low accuracy doesn’t produce predictions with high confidence.
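Assuming the scaling is a simple multiplication (the exact formula isn't documented here, so treat this as an assumption about its shape), the idea looks like:

```python
# Hypothetical sketch of confidence scaling: multiply the raw model
# probability by the model's F1 accuracy score, so a weak model cannot
# report near-certain predictions.

def scaled_confidence(raw_probability: float, f1_accuracy: float) -> float:
    return raw_probability * f1_accuracy

# The same raw 0.95 probability yields very different confidence
# depending on how accurate the model has proven to be:
print(scaled_confidence(0.95, 0.55))  # low-accuracy model: modest confidence
print(scaled_confidence(0.95, 0.98))  # high-accuracy model: high confidence
```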