This article provides a brief overview and answers common questions about CentrlGPT. For a more in-depth look at CentrlGPT, sign in to the CENTRL platform and read our full guides.
CentrlGPT Overview
CentrlGPT is a first-of-its-kind generative AI solution that revolutionizes risk and diligence processes. In our public release, CentrlGPT focuses on the challenge of evaluation. Today, organizations issue dozens to hundreds (sometimes thousands) of client assessments. Clients must provide dozens of responses within each assessment and attach numerous documents. As a result, it takes analysts hours to read through completed assessments and identify issues or gaps.
Intelligently Flagging Responses
CentrlGPT reads through responses and documents and automatically raises flags to focus analysts on the items that demand their attention. When an analyst starts their day, they can immediately focus on flagged items and raise escalations earlier in the review cycle.
Smart Evaluation of Client Answers
Inside assessments, CentrlGPT not only highlights items but also generates a robust rationale explaining why it decided to raise (or not raise) a flag. To build this rationale, CentrlGPT reviews key evaluation criteria against both the answer and any attachments. With these details, analysts can quickly filter flagged items and understand what's missing from a client response.
Robust Document Analysis
Analysts can define evaluation criteria for each document when requesting specific documents such as audits, policies, or standard reports. On review, CentrlGPT takes over the heavy lifting and reads through each client document to measure coverage. To make analysts' work easier, CentrlGPT reports where each criterion was met, saving analysts from searching through pages of text. A simplified sketch of this kind of criteria check appears below.
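For readers curious about the mechanics, the sketch below shows one way a single evaluation criterion could be checked against a document using the OpenAI API. It is an illustration only, not CENTRL's implementation; the function name, model choice, and prompt wording are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def check_criterion(document_text: str, criterion: str) -> str:
    """Illustrative only: ask the model whether one evaluation criterion is met
    and where in the document the supporting evidence appears."""
    prompt = (
        "You are reviewing a client document against an evaluation criterion.\n"
        f"Criterion: {criterion}\n"
        "Answer MET or NOT MET, then quote the passage that supports your answer.\n\n"
        f"Document:\n{document_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic, narrowly scoped request
    )
    return response.choices[0].message.content
```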
CentrlGPT contains more features, like tailoring evaluations with your feedback. To read about CentrlGPT in more depth, log in to the platform and access the user-only guide here: CentrlGPT: Viewing Smart Evaluation.
Common Questions about CentrlGPT
What AI tool are you using?
- We use models from OpenAI, the most prominent provider of Large Language Models.
Is this secure? What about data privacy?
- Yes, we use a version of OpenAI's service designed for enterprise users, with complete security and data privacy. Our license is configured not to store data from our requests. We do not use the free ChatGPT service. For further information about general data privacy, please click here: API Data Privacy.
Do you plan to add or use other open-source models?
- The AI market is rapidly changing, and we are evaluating whether to use an open-source model in the future.
How are you training the model?
- The Large Language Model is pre-trained on tens of gigabytes of data; we do not train it further. Learn more about tailoring evaluations with your own data in our user guide.
When you say "domain-specific LLM," what does that mean?
- Often in business, a domain expert must explain terminology and provide extra context when working with generalized teams. Like those teams, a generalized LLM can handle domain-specific topics when it is given some context from that domain. A domain-specific LLM builds in that domain knowledge, so it produces quicker and more accurate results than a generalized model.
How accurate are the results? We have heard that these generative AI models are prone to hallucinations. Have you done any analysis?
- Our results are highly accurate, and we've intentionally taken a path that avoids hallucinations. Hallucinations often come from open-ended prompts or from asking the model to generate large amounts of text; CentrlGPT instead issues narrowly scoped, criteria-based evaluations, which limits that risk.
How is my data being used? Does CENTRL take data from my environment to train?
- We use your data to construct the prompt that we send to the Large Language Model at request time. We do not use your data to train the model. A simplified illustration appears below.
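To make the distinction concrete, here is a minimal sketch of how assessment data might be embedded in a one-off prompt. It is hypothetical (the function name and fields are not CENTRL's); the point is that the data only travels inside the request and no training or fine-tuning step is involved.

```python
def build_evaluation_prompt(question: str, client_answer: str, criteria: list[str]) -> str:
    """Hypothetical example: assemble a one-off evaluation prompt from assessment data.
    The client's data is embedded only in this request; no training step ever runs."""
    criteria_text = "\n".join(f"- {c}" for c in criteria)
    return (
        "Evaluate the client's answer against the criteria below. "
        "Flag the answer if any criterion is not addressed.\n\n"
        f"Question: {question}\n"
        f"Client answer: {client_answer}\n"
        f"Criteria:\n{criteria_text}"
    )
```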
How much data and time do we need to get started with CentrlGPT?
- Getting started is as easy as providing feedback as you evaluate.
What is the roadmap for CentrlGPT?
- Sign in to read more about our roadmap: CentrlGPT: FAQs.