DataGPT will work with any dataset, including complex and messy data, and integrate with any cloud or customer data warehouse.
Configure your data schema using DataGPT’s automated setup tools. You simply define your dimensions and metrics, and DataGPT takes care of the rest.
In a few clicks, you can create and customize your data navigator with the key metrics you want to analyze, and the results are displayed instantly.
DataGPT’s algorithm analyzes all your data to surface the important insights, reducing the time users spend digging for answers. Additionally, our custom-developed data cache cuts the time and cost of data queries by multiple orders of magnitude. What used to take days can now be done in seconds.
Is your data and analytics team already at capacity? No problem! DataGPT will take care of setting up your customized data navigator and training all users, and each week our team will summarize the analysis and present key insights in a weekly business review so your team maximizes the power of DataGPT.
How is this different from using ChatGPT’s Code Interpreter?
When it comes to performing data analysis, Code Interpreter has significant limitations.
DataGPT solves these limitations.
The result: users don’t need technical expertise or SQL experience to ask any type of question in natural language and have their data analyzed using DataGPT’s advanced analysis algorithms.
Can you help me understand how it works behind the scenes?
Customers use our schema builder to define which tables, dimensions, and metrics to focus on. After these definitions are set, we automatically generate an ETL to pull the relevant data. The data is then stored in our “lightning cache” format, which our team of data scientists has designed and optimized over several years. This enables DataGPT to process millions of data points in seconds while simultaneously cutting costs: what would cost tens of thousands of dollars a month is now just a few dollars.
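To make the schema step concrete, here is a minimal sketch of what a table/dimension/metric definition might look like. The class and field names are purely illustrative assumptions, not DataGPT’s actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a schema definition: one table, its dimensions,
# and metrics expressed as aggregation expressions. Illustrative only.
@dataclass
class SchemaConfig:
    table: str
    dimensions: list[str] = field(default_factory=list)
    metrics: dict[str, str] = field(default_factory=dict)  # name -> aggregation

schema = SchemaConfig(
    table="events",
    dimensions=["country", "device", "campaign"],
    metrics={"revenue": "SUM(amount)", "orders": "COUNT(order_id)"},
)

# A generated ETL would read exactly these columns and pre-aggregate
# the listed metrics into the cache.
print(schema.table)        # events
print(len(schema.metrics)) # 2
```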
How long does it take to set up?
Typically, the one-time schema setup takes only a day to complete. However, depending on your data maturity, some customers may need a few iterations on the schema to get the most value and capability out of DataGPT. An assigned data consultant will provide that support and guidance during onboarding. The long-term benefit of that effort is streamlined data management and consistent definitions across your entire organization.
How much maintenance is involved?
None! Once the schema is defined, it’s rare for a customer to need to make adjustments.
How does it handle metrics or dimensions that are not well defined in our existing tables?
If you have definitions in your database, we look at those definitions directly. If you don’t, when setting up the schema you can add SQL to fine-tune dimensions or create new ones. This also allows you to define the appropriate way to handle legacy data or missing entries. For example, if you wanted to track Daily Active Users, you can instruct the schema to look at the login event and count the unique users.
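The Daily Active Users example above amounts to deduplicating users per day on the login event. A minimal sketch of that logic, with hypothetical event rows and field names:

```python
from collections import defaultdict

# Hypothetical event rows: (date, event_name, user_id).
events = [
    ("2024-01-01", "login", "u1"),
    ("2024-01-01", "login", "u1"),     # same user logging in twice
    ("2024-01-01", "login", "u2"),
    ("2024-01-02", "login", "u2"),
    ("2024-01-02", "purchase", "u3"),  # not a login event, ignored
]

def daily_active_users(rows):
    """Count unique users with at least one login event per day."""
    users_by_day = defaultdict(set)
    for date, event, user in rows:
        if event == "login":
            users_by_day[date].add(user)
    return {date: len(users) for date, users in users_by_day.items()}

print(daily_active_users(events))  # {'2024-01-01': 2, '2024-01-02': 1}
```

In practice this would be the SQL you add during schema setup (a `COUNT(DISTINCT user_id)` filtered to login events); the Python above just shows the semantics.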
Is the analysis happening in real-time?
Yes. As users ask questions, DataGPT analyzes your data in real time, so you always see the latest results and get an answer right away. Note, however, that the data DataGPT analyzes is refreshed once the customer’s data is fully updated in their data platform. Data collection is usually completed at a set time each day, so we define that time in the schema and run our ETL then, ensuring the analysis is accurate.
How can I trust the analysis?
We utilize an in-memory C++ database that has been finely tuned for data analysis over several years. This enables us to handle the most complex queries you might have, from auto-segmentation to the examination of literally every possible variable combination, to determine which factors are relevant, trustworthy, or outliers that should be ignored. This means each analysis is not just accurate but also highly transparent—each result can be independently verified or replicated with your own queries.
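The “every possible variable combination” idea above can be illustrated with a toy segment-impact enumeration: aggregate a metric’s change over every combination of dimension values and rank segments by contribution. The data and logic are hypothetical assumptions for illustration, not the engine’s actual C++ implementation:

```python
from itertools import combinations

# Toy rows: dimension values plus each row's metric change vs. last week.
rows = [
    {"country": "US", "device": "mobile",  "delta": -120},
    {"country": "US", "device": "desktop", "delta":   10},
    {"country": "DE", "device": "mobile",  "delta":   -5},
    {"country": "DE", "device": "desktop", "delta":   15},
]
dims = ["country", "device"]

def segment_impacts(rows, dims):
    """Total metric change for every combination of dimension values."""
    impacts = {}
    for r in range(1, len(dims) + 1):
        for combo in combinations(dims, r):
            for row in rows:
                key = tuple((d, row[d]) for d in combo)
                impacts[key] = impacts.get(key, 0) + row["delta"]
    return impacts

impacts = segment_impacts(rows, dims)
# The segment with the largest absolute contribution to the change:
top = max(impacts, key=lambda k: abs(impacts[k]))
print(top, impacts[top])  # (('device', 'mobile'),) -125
```

A real engine would prune this combinatorial space and apply significance tests to separate relevant factors from noise; the sketch only shows the exhaustive enumeration itself.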
Can we train the AI models?
The analysis itself is performed by our core analytics engine, and the AI translates that analysis into everyday language in DataGPT’s chat interface. Therefore, there is no need to train the analysis itself, but the algorithms used can be customized or fine-tuned as required. This fine-tuning is completed during the schema setup process. Additionally, you can provide feedback on answers provided by DataGPT, which allows us to continually fine-tune how the results are presented.
What kind of questions can I ask? And what kind of analysis can DataGPT perform?
DataGPT supports key metric analysis, root-cause analysis, segment impact analysis, historical comparative analysis, and trend analysis. It can answer simple questions like “Which customers contributed the most to revenue this week?” or “How did my campaign perform across each source?” and complex questions like “Why did Conversions drop this week?”
It does not yet support forecasting or simulation (what-if scenarios), but that is a key part of the next iteration of the product pending customer feedback and requests.
How are the visualizations generated?
Each DataGPT response includes a matching visualization that supports the answer. The visualization is generated from the data used to produce the response and is displayed as a time series, stacked chart, 100% stacked chart, bar chart, or line chart.
Tell me about your data security.