Note: The CentrlGPT feature is not available by default. Please reach out to your Account Manager to learn more about this feature.
CentrlGPT leverages the power of Large Language Models to act as a risk copilot for diligence teams, dramatically enhancing the efficiency and effectiveness of risk processes. CentrlGPT is designed as a secure enterprise solution to be trained with your custom risk models and implemented as a seamless extension of your current CENTRL platform.
CentrlGPT powers Smart Evaluation to simplify reviewing hundreds, and sometimes thousands, of client assessments. Today, diligence analysts take hours to review each assessment, limiting their reach. Going beyond what a human can do, Smart Evaluation reads hundreds of pages of answers and attachments in minutes, bringing unparalleled speed and consistency to evaluation: it identifies risk, measures answer coverage, and flags potential high-risk exceptions.
This article covers frequently asked questions and answers about our CentrlGPT feature for users. Please see our other CentrlGPT help articles for more information.
Do you use OpenAI, and is our data kept private?
Yes, we use a version of OpenAI's service developed for enterprise users with complete security and data privacy. Our license is configured so that data from our requests is not stored. We do not use the free ChatGPT service. For further information about general data privacy, please see: API Data Privacy.
Will you use other models, such as open-source models?
The AI market is rapidly changing, and we are evaluating whether to use an open-source model in the future.
How does the underlying model work?
The Large Language Model is pre-trained on tens of gigabytes of data. We leverage this existing model to ask fundamental questions about provided answers, such as, "Does this answer mention X?"
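As a simplified illustration of this approach, a single criterion can be wrapped into a closed-ended prompt for a pre-trained model. This is a hypothetical sketch: the function name, prompt template, and example text are ours, not CENTRL's actual implementation.

```python
def build_yes_no_prompt(criterion: str, answer_text: str) -> str:
    """Wrap an answer and one yes/no criterion into a closed-ended prompt.

    Hypothetical sketch: the real prompt template used by CentrlGPT
    is internal to CENTRL.
    """
    return (
        "You are reviewing a diligence questionnaire answer.\n\n"
        f"Answer:\n{answer_text}\n\n"
        f"Question: {criterion}\n"
        "Respond with exactly 'yes' or 'no'."
    )

# Example usage with made-up answer text and a made-up criterion.
prompt = build_yes_no_prompt(
    "Does this answer mention encryption at rest?",
    "All customer data is encrypted at rest using AES-256.",
)
```

The resulting string would then be sent to the model, which only has to produce a one-word verdict rather than free-form text.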
What is the difference between a generalized and a domain-specific LLM?
Often in business, a domain expert must explain terminology and provide extra context when working with generalist teams. Like human teams, a generalized LLM can handle domain-specific topics when given some context from that domain. Domain-specific LLMs build that domain knowledge in, producing quicker and more accurate results than a generalized model.
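To make the "extra context" idea concrete, here is a minimal, hypothetical sketch of prepending a domain glossary to a prompt before it reaches a generalized model. The helper name and glossary content are illustrative assumptions, not CENTRL's actual code.

```python
def add_domain_context(prompt: str, glossary: dict[str, str]) -> str:
    """Prepend domain definitions so a generalized model can interpret
    domain-specific terminology. Hypothetical helper for illustration.
    """
    if not glossary:
        return prompt
    context = "\n".join(
        f"- {term}: {definition}" for term, definition in glossary.items()
    )
    return f"Domain context:\n{context}\n\n{prompt}"

# Example: a made-up glossary entry a diligence team might supply.
base_prompt = (
    "Question: Does this answer mention a SOC 2 report?\n"
    "Respond 'yes' or 'no'."
)
contextual_prompt = add_domain_context(
    base_prompt,
    {"SOC 2": "An audit framework covering a service organization's security controls."},
)
```

A domain-specific model effectively bakes this context in, which is why it can answer the same question faster and more reliably.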
How accurate are the results? We have heard that these generative AI models are prone to hallucinations. Have you done any analysis?
Our results are highly accurate, and we've intentionally taken a path that avoids hallucinations. Hallucinations often come from writing open-ended prompts or asking the model to generate large amounts of text.
We like to use an analogy to painting: an analyst would struggle to paint a Monet replica and might mistakenly include elements from other artists. This difficulty mirrors the challenge of asking an LLM to generate lengthy responses or answer vague questions. The same analyst would have a much easier time answering a checklist about familiar aspects of a Monet: this is a painting, the scene is outdoors, there is a focus on nature, the image is soft or blurry, the scene contains water. We use a checklist of criteria to ask basic yes-or-no questions, which models, like humans, are highly accurate at answering.
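The checklist approach can be sketched as follows. This is a self-contained illustration, not CENTRL's implementation: the `ask` callable stands in for the actual LLM call, and the keyword stub is a toy substitute so the example runs on its own.

```python
from typing import Callable

def evaluate_checklist(
    answer: str,
    criteria: list[str],
    ask: Callable[[str, str], bool],
) -> dict:
    """Apply each yes/no criterion to an answer and summarize coverage.

    `ask` stands in for the model call; it is pluggable so this
    hypothetical sketch stays self-contained.
    """
    results = {criterion: ask(answer, criterion) for criterion in criteria}
    coverage = sum(results.values()) / len(criteria)
    return {"results": results, "coverage": coverage}

def keyword_stub(answer: str, criterion: str) -> bool:
    """Toy stand-in for the model: checks the quoted keyword directly."""
    keyword = criterion.split("'")[1]
    return keyword.lower() in answer.lower()

# Two criteria from the Monet analogy, phrased as yes/no questions.
report = evaluate_checklist(
    "A soft, blurry painting of an outdoor garden scene.",
    [
        "Does the description mention 'painting'?",
        "Does the description mention 'water'?",
    ],
    keyword_stub,
)
```

Because each criterion is a narrow yes/no question, the per-item answers are easy to verify, and the aggregate coverage gives a quick signal of how completely an answer addresses the checklist.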
Do you use our data to train the model?
We use your data to generate a prompt to ask the Large Language Model. We do not train the model.
How do we get started?
Getting started is as easy as selecting a questionnaire and working with our team to enter answer criteria.
What is on the roadmap for CentrlGPT?
We're excited to continue to improve CentrlGPT to make it incredibly powerful. In our short-term roadmap, we're focused on bringing more self-service capabilities so users can build out their example data and criteria as they respond to existing assessments. We're also looking at ways to automatically recommend best practice evaluations to make CentrlGPT valuable without any setup. Separate from self-service, our team is looking at expanding CentrlGPT to flag risk on any response type and even generate scores.