Cost Savings And Business Benefits Enabled By Monte Carlo Data + AI Observability Platform
A Forrester Total Economic Impact™ Study Commissioned By Monte Carlo, April 2025
Reacting to data quality issues only as they arise is no longer sufficient. Augmented data quality solutions empower organizations to take proactive measures and prevent data quality problems at the point of ingestion. Real-time data profiling, data validation, and data monitoring capabilities detect anomalies before they impact operations. These capabilities ensure data conforms to predefined quality and compliance standards while providing continuous monitoring of mission-critical data. Data observability takes proactive data quality to the next level by helping organizations understand data health and performance. It enables organizations to gain confidence in the integrity, accuracy, completeness, and reliability of their data; identify areas for improvement; refine their strategies; and drive operational excellence.1
Monte Carlo allows organizations to detect and remediate data and reliability issues faster and more efficiently. It saves data personnel time and reduces the likelihood that a data quality issue will go far enough to affect customers or key decisions, avoiding costs associated with suboptimal decision-making and lost revenue. AI/ML models and agents powered by Monte Carlo-monitored data deliver higher quality results, and improved trust in data encourages more collaboration and participation in data across business units. Monte Carlo also gives organizations visibility into compute and storage cost-saving opportunities in their data estates.
Monte Carlo commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study and examine the potential return on investment (ROI) enterprises may realize by deploying Monte Carlo’s data + AI observability platform.2 The purpose of this study is to provide readers with a framework to evaluate the potential financial impact of Monte Carlo on their organizations.
To better understand the benefits, costs, and risks associated with this investment, Forrester interviewed eight decision-makers at six organizations with experience using Monte Carlo. For the purposes of this study, Forrester aggregated the interviewees’ experiences and combined the results into a single composite organization that is an industry-agnostic organization with 15,000 employees and revenue of $6 billion per year.
After the investment in Monte Carlo, the interviewees collectively explained that their organizations could detect and remediate data quality issues faster and more efficiently than before, saving valuable data personnel time and reducing the likelihood that a data quality issue goes far enough to affect customers or key decisions, thereby avoiding costs associated with suboptimal decision-making or lost revenue. AI/ML models and agents powered by Monte Carlo-monitored data are more likely to deliver better results, and improved trust in data encourages more collaboration and participation in data across the business. Monte Carlo also gave organizations visibility into opportunities to save on compute and storage costs in their data estates.
Quantified benefits. Three-year, risk-adjusted present value (PV) quantified benefits for the composite organization include:
Reclaimed more than 6,500 data personnel hours annually. With Monte Carlo, the composite organization detects data quality issues earlier, and the issues are less resource-intensive for data analysts and engineers to detect and resolve.
Avoided more than $1.5 million in lost revenue due to data downtime. Deploying Monte Carlo reduces the likelihood and duration of customer-facing data downtime incidents, allowing the composite organization to avoid lost revenue stemming from these issues.
Improved efficacy of AI and ML models. Deploying Monte Carlo to monitor the data fueling AI models for key operations functions enables the composite organization to save money from the higher quality outputs of these models.
Reduced redundant data product creation and validation efforts by 65%. With Monte Carlo monitoring the composite organization’s data estates, there is an organizationwide improvement in data trust that results in more frequent collaboration across teams, less rework or redundant effort, and better decision-making using mutually agreed-upon data.
Saved 2.5% on cloud data warehouse compute and storage costs. Monte Carlo provides the composite organization with views into where and how data is being used in addition to views into query and pipeline performance. This visibility allows it to prioritize important data pipelines and data assets while avoiding storage and compute costs associated with less valuable data assets or inefficient queries.
Unquantified benefits. Benefits that provide value for the composite organization but are not quantified for this study include:
Improved customer experience and reputational benefit. With a Monte Carlo partnership, bad data affects the composite organization’s end customers less often, resulting in less customer-facing data downtime and contributing to a better customer experience overall.
New product and revenue opportunities. The composite’s customers perceive its partnership with Monte Carlo as a data and AI product value-add, potentially leading to new opportunities and additional revenue.
Increased data confidence. Employees of the composite perceive data monitored with Monte Carlo as more trustworthy, leading to a cross-team willingness to use data or data products from other teams or sources.
Improved data governance. Monte Carlo ensures data and AI meet quality governance standards, helping the composite stay compliant with industry regulations and avoid regulatory fines and reputational damage.
Costs. Three-year, risk-adjusted PV costs for the composite organization include:
Estimated Monte Carlo consumption costs. Organizations pay Monte Carlo on a consumption model based on the number and types of monitors deployed. Other factors influencing Monte Carlo-attributable spend include hosting specifics and additional compute or data storage resources.
Personnel effort for Monte Carlo deployment and ongoing management. The composite must dedicate internal resources for the initial Monte Carlo deployment, ongoing identification of additional data assets and domains for monitoring, and additional monitoring setup or adjustment.
The representative interviews and financial analysis found that a composite organization experiences benefits of $3.07 million over three years versus costs of $670,000, adding up to a net present value (NPV) of $2.40 million and an ROI of 358%.
Return on investment (ROI): 358%
Benefits PV: $3.07 million
Net present value (NPV): $2.40 million
Payback: <6 months
Role(s) | Industry | Employees | Annual Revenue |
---|---|---|---|
Director of data engineering; Manager of data products | Airline | 24,000 | $9.6 billion |
Director of engineering | Cybersecurity | 2,000 | $900 million |
Senior manager, data analytics and architecture | Food processing | 34,000 | $20 billion |
VP, business intelligence (BI) analytics and strategy | Media | 10,000 | $14 billion |
Product owner, data quality and observability; Product line lead, data platforms | Pharmaceutical | 100,000 | $70 billion |
Associate vice president (AVP) of product strategy | Stock exchange | 8,500 | $6 billion |
The interviewees noted how their organizations struggled with common challenges, including:
Issues with data quality. Interviewees told Forrester that before implementing Monte Carlo, data quality issues were often missed until they manifested in disruptive ways, interrupting reporting, decision-making, or the end customer. Data analysts lacked the tools to identify issues properly before they became larger issues, and data analysts and data engineers spent an inordinate amount of time fixing broken data. The senior manager of data analytics and architecture at a food processing organization summarized: “[Fixing bad data] was taking a lot of time from our engineering and analytics teams. Analytics teams should focus more on generating insight from data than bookkeeping the data.”
Reactive and challenging remediation efforts. The longer data quality issues went undetected, the more difficult they became to remediate, especially since the issues were rarely localized. Interviewees recognized the significant effort required of their organizations’ data engineers for remediation. Additionally, issues were frequently found late or by end consumers.
Low organizational trust in data. As data quality issues surfaced, interviewees noted that it was impossible for their organizations to foster a shared trust in data. Teams would create reports using their own data or spend extra time validating data from other teams.
Negative impacts to the business or customers. In worst-case examples, interviewees said that unchecked data quality issues emerged in situations that either impacted business operations (resulting in additional costs) or directly affected customers (resulting in negative customer experiences, reputational damage, or lost revenue).
Limited visibility into expanding cloud data storage and compute costs. As organizations’ data warehouses expanded, they found that limited visibility into pipeline or query efficiency, insufficient understanding of data being stored, and issues resulting in data backfills contributed to excessive compute and storage costs.
The interviewees’ organizations searched for a solution that:
Was platform agnostic.
Could integrate with other data (and non-data) tools.
Would allow more users to interact with the organization’s data.
Could prepare the organization’s data architecture to take advantage of analytics such as generative AI (genAI) and AI agents.
Based on the interviews, Forrester constructed a TEI framework, a composite company, and an ROI analysis that illustrates the areas financially affected. The composite organization is representative of the interviewees’ organizations, and it is used to present the aggregate financial analysis in the next section. The composite organization has the following characteristics:
Description of composite. The composite organization is a global, industry-agnostic organization with $6 billion in annual revenue and 15,000 employees.
Deployment characteristics. The composite organization has a data warehouse or lakehouse spend of about $3 million annually. It initially deploys Monte Carlo across three domains and 1,000 monitored assets (tables), expanding to 3,000 observability assets by Year 3. The platform may ingest tens of thousands to hundreds of thousands of assets for visibility or lineage purposes at no additional cost. The composite opts for the client-hosted deployment scenario, hosting the Monte Carlo agent and the data store within its own environment. Across the organization, data analysts have historically been responsible for data quality issue detection and data engineers have been tasked with remediation.
$6 billion annual revenue
15,000 employees
Client-hosted deployment scenario
$3 million overall warehouse/lakehouse spend
3,000 observability assets (tables) actively monitored by Year 3 (tens of thousands processed for visibility)
Ref. | Benefit | Year 1 | Year 2 | Year 3 | Total | Present Value |
---|---|---|---|---|---|---|
Atr | Reclaimed time for data personnel | $241,678 | $261,079 | $280,986 | $783,743 | $646,584 |
Btr | Avoided losses due to data and AI downtime | $473,425 | $503,014 | $532,603 | $1,509,042 | $1,246,253 |
Ctr | Improved efficacy of internal decision-making AI models with better data quality | $95,625 | $255,000 | $382,500 | $733,125 | $585,054 |
Dtr | Internal collaboration benefit from improved data trust | $164,093 | $164,093 | $164,093 | $492,278 | $408,074 |
Etr | Avoided cloud data storage and compute costs | $63,750 | $74,375 | $85,000 | $223,125 | $183,283 |
Total benefits (risk-adjusted) | $1,038,570 | $1,257,561 | $1,445,181 | $3,741,312 | $3,069,248 |
Evidence and data. Deploying Monte Carlo allowed interviewees’ organizations to detect data quality issues earlier via health monitoring — an efficiency that resulted in productivity improvements for personnel across their data estates. Data analysts used the platform’s AI-driven anomaly detection and recommendations for early issue detection so data engineers could remediate issues before they became larger and less localized. Organizations’ data analysts and data consumers were less prone to data and AI downtime, which allowed them to remain on task, contributed to faster decision-making, and saved time.
The AVP of product strategy at the stock exchange noted that their data quality issues often originated at the beginning of a month and were not detected until later, which put a major burden on the organization’s data engineers to remediate. The interviewee said: “[Our engineers] would have to go to the trading system teams and have them rerun trade data transformations. It is an involved process, so that was a big pain point. There's a lot of manual labor to do all that.” The same interviewee also emphasized the benefit of detecting data quality issues earlier with Monte Carlo, as fewer personnel resources were required for remediation and resolution time was cut to a fraction. The interviewee explained: “We’ve caught issues that would [previously] take the team three to five days to resolve. Now we catch them on day one and resolve them pretty much in place.” Since deploying Monte Carlo, their organization frequently catches and resolves errors on the same day they occur, which previously required tens of hours of data engineer or analyst time, avoiding an estimated 90% of the previous remediation effort.
The same interviewee highlighted additional observability efficiencies for pipeline development, noting: “We spent a lot of time making large, backlog-like changes that we pushed out once every six months versus being much more agile and getting feedback sooner. So our development processes were just much longer, which was not ideal from both an analyst point of view and from an engineering point of view.”
The senior manager of data analytics and architecture at a food processing organization explained that their data analysts and data engineers could focus more on their core responsibilities given the features within Monte Carlo. They estimated a 50% to 60% efficiency gain for their organization’s data personnel since their analysts could set data quality rules in Monte Carlo, and engineers now had data lineage tracking capabilities to approach data quality issue resolutions from a more informed position.
Interviewees at the airline noted that since onboarding Monte Carlo, data quality issue escalations that required the effort of data personnel dropped dramatically. Employees catch issues sooner with fewer resources required to resolve them. Data personnel now spend an estimated 10% to 15% of their time monitoring and acting upon data quality issues, whereas the interviewees estimated this effort to be significantly more before Monte Carlo.
Interviewees also highlighted that although data personnel reaped a significant share of the efficiency benefits of Monte Carlo, data and AI downtime impacted other personnel as well. The AVP of product strategy at the stock exchange noted that finance and accounting personnel were frequently subject to delays in their monthly closes resulting from data reconciliation efforts at the end of the month.
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
Six data quality issues per month require manual intervention.
Eight data engineers are typically responsible for data quality issue remediation. Each data engineer spends an average of 10 hours on remediation, normalized across all issue severities.
Data quality issues are caught sooner and resolved faster with Monte Carlo, improving resolution time (and reducing personnel effort) by 80% to 90% from Year 1 to Year 3.
The fully burdened hourly rate for a data engineer is between $66 and $68 from Year 1 to Year 3.
There is a 75% productivity recapture for data engineers, as not all time reclaimed will be repurposed toward value-added work.
Twelve data analysts are responsible for detecting quality issues along with their other responsibilities.
Each downtime incident costs a data analyst 3 hours in downtime.
The fully burdened hourly rate for a data analyst is between $39 and $41 from Year 1 to Year 3.
There is a 50% productivity recapture for data analysts, as not all time reclaimed will be repurposed toward value-added work.
Risks. This benefit will vary among organizations based on:
The baseline number of data quality issues within an organization relative to the potential for improvement with Monte Carlo.
An organization’s data estate relative to the severity and traceability of undetected issues.
The skill and capacity of an organization’s data engineers and data analysts.
Results. To account for these variances, Forrester adjusted this benefit downward by 10%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $647,000.
80% to 90% data quality issue resolution improvement
Ref. | Metric | Source | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|
A1 | Data quality issues requiring intervention | 6 per month/average for the composite | 72 | 72 | 72 | |
A2 | Data engineer FTEs responsible for resolving data quality issues | Composite | 8 | 8 | 8 | |
A3 | Hours for data engineering team to resolve data quality issues (pre-Monte Carlo, normalized across all data quality issue severities) | Interviews | 10 | 10 | 10 | |
A4 | Improved resolution speed with Monte Carlo | Interviews | 80% | 85% | 90% | |
A5 | Data engineer hours saved | A1*A2*A3*A4 | 4,608 | 4,896 | 5,184 | |
A6 | Fully burdened hourly rate for a data engineer (rounded) | Assumption | $66 | $67 | $68 | |
A7 | Productivity recapture for data engineers | Assumption | 75% | 75% | 75% | |
A8 | Subtotal: Reclaimed data engineer productivity | A5*A6*A7 | $228,096 | $246,024 | $264,384 | |
A9 | Data analyst FTEs responsible for quality issue detection and other tasks | Composite | 12 | 12 | 12 | |
A10 | Data analyst downtime per issue (hours) | Interviews | 3 | 3 | 3 | |
A11 | Data analyst downtime avoided with improved data quality incident resolution speed on Monte Carlo (hours) | A1*A4*A9*A10 | 2,073.6 | 2,203.2 | 2,332.8 | |
A12 | Fully burdened hourly rate for a data analyst (rounded) | Assumption | $39 | $40 | $41 | |
A13 | Productivity recapture for data analysts | Assumption | 50% | 50% | 50% | |
A14 | Subtotal: Avoided data analyst downtime | A11*A12*A13 | $40,435 | $44,064 | $47,822 | |
At | Reclaimed time for data personnel | A8+A14 | $268,531 | $290,088 | $312,206 | |
Risk adjustment | ↓10% | |||||
Atr | Reclaimed time for data personnel (risk-adjusted) | $241,678 | $261,079 | $280,986 | ||
Three-year total: $783,743 | Three-year present value: $646,584 |
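For readers who want to trace the arithmetic, the following is a minimal Python sketch that reproduces the Benefit A table above from the composite-organization assumptions; the variable names are illustrative and all figures are the study's own.

```python
# Sketch of the Benefit A calculation (reclaimed time for data personnel).

RISK_ADJUSTMENT = 0.90   # Forrester adjusts this benefit downward by 10%
DISCOUNT_RATE = 0.10     # annual discount rate used throughout the study

issues_per_year = 6 * 12                        # A1: data quality issues requiring intervention
engineers = 8                                   # A2
engineer_hours_per_issue = 10                   # A3
resolution_improvement = [0.80, 0.85, 0.90]     # A4, Years 1-3
engineer_rate = [66, 67, 68]                    # A6: fully burdened hourly rate
engineer_recapture = 0.75                       # A7
analysts = 12                                   # A9
analyst_hours_per_issue = 3                     # A10
analyst_rate = [39, 40, 41]                     # A12
analyst_recapture = 0.50                        # A13

pv_total = 0.0
for year in range(3):
    engineer_hours = issues_per_year * engineers * engineer_hours_per_issue * resolution_improvement[year]   # A5
    engineer_value = engineer_hours * engineer_rate[year] * engineer_recapture                               # A8
    analyst_hours = issues_per_year * resolution_improvement[year] * analysts * analyst_hours_per_issue      # A11
    analyst_value = analyst_hours * analyst_rate[year] * analyst_recapture                                    # A14
    benefit = (engineer_value + analyst_value) * RISK_ADJUSTMENT                                              # Atr
    pv_total += benefit / (1 + DISCOUNT_RATE) ** (year + 1)
    print(f"Year {year + 1}: ${benefit:,.0f}")

print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $646,584
```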
Evidence and data. Interviewees provided examples where unchecked data quality issues resulted in excess costs or lost revenue. Deploying Monte Carlo reduced the likelihood of small data quality issues becoming larger over time and having similar outcomes.
The VP of BI analytics and strategy at a media organization explained that data and AI downtime resulting in broken or incorrect dashboards for their business users had dramatic effects on decision-making. As a media organization where ad revenue is approximately 30% of annual revenue, decisions about advertising inventory are made by several teams every day. Before deploying Monte Carlo, data quality issues that affected these dashboards resulted in suboptimal decision-making with millions of dollars in implications. With Monte Carlo, it is inherently less likely that data quality issues persist long enough to affect these dashboards and decisions as frequently as before.
Despite efforts to validate data quality before sending reports to internal and external clients, the AVP of product strategy at the stock exchange noted that customers sometimes detected data quality issues. They summarized: “We did do basic record checking, but we didn't do a whole lot of row level validation or anything like that. What would end up happening relatively frequently is we would send a report, and they would come back to us and say, ‘Hey this doesn’t look right.’ So then we would have to go investigate what was going on from there.” The interviewee also added that Monte Carlo helped their staff detect issues that would have resulted in regulatory fines if left unchecked and unresolved, including an issue where customers in one market would have been overbilled by $1.5 million. The interviewee concluded: “When we're tracking our revenue internally, we’re potentially misrepresenting it. But with our customers, if we overbill them even once, then the trust dissolves.”
The same interviewee noted that some data quality issues resulted in customers receiving rebates that they were not entitled to, leaving revenue opportunities on the table. The interviewee concluded: “The more trust the business has in the data, the more likely they are to make more money. They may try things like pricing changes to optimize how we think about collecting revenue.”
The senior manager of data analytics and architecture at a food processing organization said that their team quickly caught a significant data quality issue about product availability due to Monte Carlo. They explained that the issue nearly caused incorrect product availability information to populate for their retail customers, potentially jeopardizing large orders due to the appearance of stockouts. While quality issues that affect the bottom line occur infrequently, the interviewee acknowledged that the consequences of just one issue getting through would be dramatic.
The manager of data products at an airline recalled an example where a small deployment to make a change to a marquee database object (a table used by analysts to understand flight schedules and operations) used across hundreds of dashboards resulted in an accidental truncation of the table. Due to Monte Carlo alerts, the issue was resolved before it could result in major disruptions.
The same interviewee noted that their airline’s data team used Monte Carlo to further support the business by setting up custom alerts. The alerts are triggered by certain data points within a set that correlate to a particular business condition or outcome. The interviewee explained: “We’re not just setting up monitors to check whether the data is trustworthy, but we’re setting up monitors to indicate whether a particular event has occurred so someone can go and act. I think this is an untraditional way of using Monte Carlo, but valuable in enabling those business workflows.”
The director of engineering at a cybersecurity organization noted that it provides reporting for customers based on data queries from its cloud data warehouse. If a query is broken or takes too long to complete, it will time out and the customer will not have their report. The interviewee explained that Monte Carlo alerts on-call staff to these issues or timeouts so they can proactively remediate them.
With Monte Carlo providing trust and consistency, the product line lead for data platforms at a pharmaceutical organization indicated that the business was less prone to data and AI downtime. Business units that rely on this data, such as manufacturing, were also less likely to experience downtime that could affect operations or revenue. They explained, “Monte Carlo is behind the scenes enforcing that trust through our data mesh dashboard where there’s visibility into whether the data products meet the fitness metrics.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
Fifteen percent of the organization’s total data quality issues are customer facing to some degree.
During customer-facing data and AI downtime, 5% of the organization's hourly revenue is impacted.
With Monte Carlo, data quality issues are resolved 80% to 90% faster from Year 1 to Year 3, minimizing the lost revenue of customer-facing data and AI downtime incidents.
While the value of minimizing customer-facing data and AI downtime incidents may be different across organizations, avoided revenue and profit loss has been quantified for the composite organization based on the interviews.
Risks. This benefit will vary among organizations based on:
An organization’s industry or business specifics relative to the impact or severity of customer-facing data and AI downtime.
The baseline number of data quality issues within an organization relative to the potential for improvement with Monte Carlo.
Results. To account for these variances, Forrester adjusted this benefit downward by 20%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $1.3 million.
Ref. | Metric | Source | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|
B1 | Percentage of total data quality issues resulting in customer-facing data and AI downtime | Composite | 15% | 15% | 15% | |
B2 | Average duration of customer-facing downtime issue (hours) | Composite | 2 | 2 | 2 | |
B3 | Customer-facing data and AI downtime hours | A1*B1*B2 | 21.6 | 21.6 | 21.6 | |
B4 | Hourly revenue for composite (rounded) | $6 billion/8,760 hours | $684,931.51 | $684,931.51 | $684,931.51 | 
B5 | Hourly revenue affected by customer-facing data and AI downtime | Composite | 5% | 5% | 5% | |
B6 | Average revenue impact per hour (rounded) | B4*B5 | $34,246.59 | $34,246.59 | $34,246.59 | |
B7 | Improved speed of resolution with Monte Carlo | Interviews | 80% | 85% | 90% | |
Bt | Avoided losses due to data and AI downtime | B3*B6*B7 | $591,781 | $628,767 | $665,753 | |
Risk adjustment | ↓20% | |||||
Btr | Avoided losses due to data and AI downtime (risk-adjusted) | $473,425 | $503,014 | $532,603 | ||
Three-year total: $1,509,041 | Three-year present value: $1,246,252 |
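The Benefit B arithmetic can be sketched the same way; this illustrative Python snippet uses the composite assumptions above, and small differences from the table come from rounding the hourly revenue figure.

```python
# Sketch of the Benefit B calculation (avoided losses due to data and AI downtime).

RISK_ADJUSTMENT = 0.80   # benefit adjusted downward by 20%
DISCOUNT_RATE = 0.10

issues_per_year = 72                            # A1, carried over from Benefit A
customer_facing_share = 0.15                    # B1
hours_per_incident = 2                          # B2
hourly_revenue = 6_000_000_000 / 8_760          # B4
revenue_share_affected = 0.05                   # B5
resolution_improvement = [0.80, 0.85, 0.90]     # B7

downtime_hours = issues_per_year * customer_facing_share * hours_per_incident   # B3 = 21.6
revenue_at_risk_per_hour = hourly_revenue * revenue_share_affected              # B6

pv_total = 0.0
for year in range(3):
    avoided = downtime_hours * revenue_at_risk_per_hour * resolution_improvement[year] * RISK_ADJUSTMENT   # Btr
    pv_total += avoided / (1 + DISCOUNT_RATE) ** (year + 1)
    print(f"Year {year + 1}: ${avoided:,.0f}")

print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $1.25 million
```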
Evidence and data. Interviewees explained that deploying Monte Carlo to monitor their organizations’ data estates allowed their AI models, which supported key business decisions, to leverage higher quality data and facilitate more efficient outcomes.
The AVP of product strategy at the stock exchange noted that trust in data with Monte Carlo allowed them to apply AI modeling to strike optimization in the options market and determine which of an estimated 245 million strikes were most likely to trade and make money, a job previously required of humans. The interviewee continued: “Our AI model looks at volume metrics, trends, and things like that and determines which strikes are going to be the most likely to trade, and trades are how we make money. We’re putting trust into a machine for something that humans used to do, and having Monte Carlo lends that trust to the business that AI is going to do the right thing, at least based on the data. When we first launched strike optimization, we got a 5% increase in revenue. The person [who managed strike] could go off and do something else. So it's an operational efficiency, but we're picking the right strikes, getting more trades, and increasing revenue.”
The senior manager of data analytics and architecture at a food processing organization said that with Monte Carlo, there was a level of trust in the data where key decisions could be made through AI models across the organization. The interviewee noted that AI models fueled by Monte Carlo-monitored data were being used in most functional areas, including supply chain (routing, warehouse management) and sales (price optimization). The interviewee explained: “[Monte Carlo] gives us trust in the data for the models we run behind the scenes. By making optimized routing decisions for loading our trailers for instance, we save significant money.”
Interviewees at an airline shared that their organization trusted Monte Carlo-monitored data to power AI and ML models for decision-making. Models supporting dynamic pricing decisions for in-flight products, as well as models supporting to-the-minute flight logistics and operations, were inherently more trusted with Monte Carlo-monitored data. The director of data engineering at the airline summarized: “We haven’t had any issues where [bad data quality] has affected these models, but if it were to happen, we’ve got the technology in place with Monte Carlo that would flag it for us.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
Between 0.75% and 1.25% of the composite organization’s total revenue is influenced by AI/ML models, representing a journey from a medium to a high AI/ML maturity level.
Monte Carlo helps the composite organization improve its supply chain operation AI models with better quality data, leading to efficiencies between 0.25% and 0.75% annually.
While the composite organization achieves revenue improvement via operations cost savings, this example was selected to illustrate the value that better data quality with Monte Carlo can have on AI/ML use cases. Organizations in different industries with different AI/ML use cases may see value manifest in other ways, as highlighted by the interviews.
Risks. This benefit will vary among organizations based on:
The specific AI/ML models or use cases that Monte Carlo-monitored data feeds at an organization.
The number of AI/ML models or use cases at an organization.
The baseline data quality at an organization relative to the potential for improvement with Monte Carlo.
Results. To account for these variances, Forrester adjusted this benefit downward by 15%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $585,000.
Ref. | Metric | Source | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|
C1 | Total revenue | Composite | $6,000,000,000 | $6,000,000,000 | $6,000,000,000 | |
C2 | AI/ML maturity (percentage of revenue affected by AI/ML models or applications) | Assumption | 0.75% | 1.00% | 1.00% | |
C3 | Total revenue impacted by AI/ML models or applications | C1*C2 | $45,000,000 | $60,000,000 | $60,000,000 | 
C4 | Revenue improvement from improved model efficacy with Monte Carlo-monitored data | Interviews | 0.25% | 0.50% | 0.75% | 
Ct | Improved efficacy of internal decision-making AI models with better data quality | C3*C4 | $112,500 | $300,000 | $450,000 | 
Risk adjustment | ↓15% | |||||
Ctr | Improved efficacy of internal decision-making AI models with better data quality (risk-adjusted) | $95,625 | $255,000 | $382,500 | ||
Three-year total: $733,125 | Three-year present value: $585,054 |
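A short illustrative sketch of the Benefit C arithmetic follows, using only the maturity and efficacy percentages stated in the assumptions above.

```python
# Sketch of the Benefit C calculation (improved AI/ML model efficacy with better data quality).

RISK_ADJUSTMENT = 0.85   # benefit adjusted downward by 15%
DISCOUNT_RATE = 0.10

total_revenue = 6_000_000_000                      # C1
ai_ml_maturity = [0.0075, 0.0100, 0.0100]          # C2: share of revenue touched by AI/ML
efficacy_improvement = [0.0025, 0.0050, 0.0075]    # C4: improvement from better data quality

pv_total = 0.0
for year in range(3):
    benefit = total_revenue * ai_ml_maturity[year] * efficacy_improvement[year] * RISK_ADJUSTMENT   # Ctr
    pv_total += benefit / (1 + DISCOUNT_RATE) ** (year + 1)
    print(f"Year {year + 1}: ${benefit:,.0f}")

print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $585,054
```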
Evidence and data. Before deploying Monte Carlo, interviewees noted that distrust in data across teams in their organizations was common. Teams would recreate reporting based on their own data or spend extra time validating data from other teams. With Monte Carlo monitoring the organizations’ data estates, interviewees spoke of improved data trust across their organizations that resulted in more frequent collaboration across teams, less rework or redundant efforts, and more decision-making using mutually agreed-upon data.
The VP of BI analytics and strategy at a media organization explained that before deploying Monte Carlo, each division of the company managed its own data function and teams rarely shared data or reporting. Implementing Monte Carlo as part of its greater decentralized data strategy provided business users with trust in the data that allowed for a self-service data model. Business users across every division could self-service the data they needed for decision-making, and data projects that used to be division-specific were now enterprisewide for the same cost. The interviewee summarized, “Monte Carlo has helped us bridge the gap between the business users across the organization and our data team.”
The same interviewee noted that a centralized, nine-member data team now supported the entire organization despite the data estate growing three times larger. Before deploying Monte Carlo as part of their data strategy overhaul, more than 30 people supported the organization’s business users across different divisions.
The senior manager of data analytics and architecture at a food processing organization cited increased collaboration within their business and technical communities as a major benefit of Monte Carlo. Data traveled across multiple projects owned by multiple people and every change could potentially impact another project, model, or team. The interviewee summarized: “Observability gives visibility to our technical community too. Our models are much more stable, and that trust and collaboration becomes much stronger because we have a tool that is observing all of the data interactions.” This interviewee noted that a previous lack of data trust led to the creation of redundant data assets by data scientists across the company.
The interviewees at a pharmaceutical organization noted that Monte Carlo enabled trust in the data in their business community, which was especially important with the decentralized self-service data mesh approach their data team had worked to foster.
At the airline, organizational trust in the data had encouraged more users to make decisions based on data. The director of data engineering noted: “We used to only have about 30 to 50 people that would actually go into the data warehouse, write queries, and build reports to show to senior leaders, which is a pretty small number given our company size. We have grown that number tremendously. We have over 1,000 users in [our data warehouse] now, and I think that growth can only really happen when you have people that trust the data you're putting in their hands.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
Six teams staffed with an average of four FTEs each create and consume data products.
With Monte Carlo, shared trust in data across the organization fosters more centralized efforts to create and share data products. The composite avoids 65% of the effort previously spent creating team-specific data products as it breaks down silos across the organization.
The average fully burdened annual salary for an employee who creates data products is $110,000.
The composite achieves a 75% productivity recapture on avoided effort, as not all reclaimed time will be repurposed toward value-added work.
Risks. This benefit will vary among organizations based on:
The number of teams or employees creating and consuming data products within an organization.
The degree to which data assets are shared or trusted within an organization.
The skill and capacity of personnel tasked with creating data products within their respective teams.
Results. To account for these variances, Forrester adjusted this benefit downward by 15%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $400,000.
Ref. | Metric | Source | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|
D1 | Internal teams creating and consuming data products | Composite | 6 | 6 | 6 | |
D2 | Average data FTEs per team working on team-specific data products | Composite | 4 | 4 | 4 | |
D3 | Percentage of data personnel time spent on data product creation and validation | Interviews | 15% | 15% | 15% | |
D4 | Fully burdened annual salary for a data FTE (rounded) | Assumption | $110,000 | $110,000 | $110,000 | |
D5 | Annual cost of data product development and validation across all teams per year | D1*D2*D3*D4 | $396,000 | $396,000 | $396,000 | 
D6 | Avoidable data product development and validation with Monte Carlo | Interviews | 65% | 65% | 65% | |
D7 | Productivity recapture | Assumption | 75% | 75% | 75% | |
Dt | Internal collaboration benefit from improved data trust | D5*D6*D7 | $193,050 | $193,050 | $193,050 | |
Risk adjustment | ↓15% | |||||
Dtr | Internal collaboration benefit from improved data trust (risk-adjusted) | $164,093 | $164,093 | $164,093 | ||
Three-year total: $492,278 | Three-year present value: $408,074 |
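The Benefit D arithmetic is constant across years, so it reduces to a single annual figure discounted over three years; the following sketch (with illustrative variable names) shows how the table's values are reached.

```python
# Sketch of the Benefit D calculation (internal collaboration benefit from improved data trust).

RISK_ADJUSTMENT = 0.85   # benefit adjusted downward by 15%
DISCOUNT_RATE = 0.10

teams = 6                          # D1
ftes_per_team = 4                  # D2
time_on_data_products = 0.15       # D3
salary = 110_000                   # D4: fully burdened annual salary
avoidable_effort = 0.65            # D6
productivity_recapture = 0.75      # D7

annual_cost = teams * ftes_per_team * time_on_data_products * salary                          # D5 = $396,000
annual_benefit = annual_cost * avoidable_effort * productivity_recapture * RISK_ADJUSTMENT    # Dtr ≈ $164,093

pv_total = sum(annual_benefit / (1 + DISCOUNT_RATE) ** year for year in (1, 2, 3))
print(f"Annual risk-adjusted benefit: ${annual_benefit:,.0f}")
print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $408,074 (minor rounding differences)
```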
Evidence and data. Expanding data estates at interviewees’ organizations caused significant cost increases due to limited visibility into query or pipeline efficiency, the types of data going into storage, and issues resulting in data backfills. Monte Carlo provided the organizations with views into where and how data was being used. The platform also allowed them to prioritize data assets while avoiding storage and compute costs associated with poorly performing queries or unused/less valuable data assets.
The director of data engineering at an airline told Forrester that Monte Carlo enabled them to better understand opportunities for optimizing data engineering workload queries or self-service analyst queries, allowing the queries to run faster and saving on compute costs with the cloud data warehouse provider. Alongside other custom dashboards and monitors built by the data operations team, the organization saved more than 40% in data warehouse compute costs over the past year, some of which was attributable to understanding gleaned from Monte Carlo.
The director of engineering at a cybersecurity organization explained that they used Monte Carlo to investigate and save money on their most expensive queries. The interviewee explained, “With the [Monte Carlo] performance dashboard, we can actually see what our most expensive queries are so we can bring down the costs.” They continued: “We spend close to eight figures with [our cloud data warehouse], so Monte Carlo can help us save over 1% on that bill, and sometimes a poor-performing query can be that or more. And that’s being conservative.”
The product line lead for data platforms at a pharmaceutical organization explained an issue that was caught early with Monte Carlo before there was any downstream impact to their data (or excessive costs), noting: “We started ingesting thousands and thousands more records than what those tables were supposed to bring in normally. Once they get into the warehouse and start transformation and processing all the data that's all junk, costs will go up. If you detect an anomaly and you stop the pipeline up front, it can save money.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
The composite organization spends between $3 million and $4 million annually from Years 1 to 3 with its cloud data storage provider.
It avoids 2.5% of this spend with Monte Carlo, as insights, monitoring, and alerts contribute to better data hygiene and data warehouse allocation. Performance dashboards allow the composite organization to identify and improve its worst-performing and most expensive pipelines and queries.
Risks. This benefit will vary among organizations based on:
An organization’s total spend with a cloud data storage provider.
The baseline quantity and performance of an organization’s queries and ability to act upon insights from Monte Carlo.
Results. To account for these variances, Forrester adjusted this benefit downward by 15%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $180,000.
Ref. | Metric | Source | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|
E1 | Total spend with cloud data storage provider | Composite | $3,000,000 | $3,500,000 | $4,000,000 | |
E2 | Avoidable costs with Monte Carlo efficiency gains | Interviews | 2.5% | 2.5% | 2.5% | |
Et | Avoided cloud data storage and compute costs | E1*E2 | $75,000 | $87,500 | $100,000 | |
Risk adjustment | ↓15% | |||||
Etr | Avoided cloud data storage and compute costs (risk-adjusted) | $63,750 | $74,375 | $85,000 | ||
Three-year total: $223,125 | Three-year present value: $183,283 |
2.5% decreased spend with cloud data storage provider
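Benefit E follows the same pattern; this minimal sketch applies the 2.5% avoidable-spend assumption to the composite's growing warehouse spend.

```python
# Sketch of the Benefit E calculation (avoided cloud data storage and compute costs).

RISK_ADJUSTMENT = 0.85   # benefit adjusted downward by 15%
DISCOUNT_RATE = 0.10

warehouse_spend = [3_000_000, 3_500_000, 4_000_000]   # E1, Years 1-3
avoidable_share = 0.025                               # E2: 2.5% of spend avoided

pv_total = 0.0
for year, spend in enumerate(warehouse_spend, start=1):
    benefit = spend * avoidable_share * RISK_ADJUSTMENT   # Etr
    pv_total += benefit / (1 + DISCOUNT_RATE) ** year
    print(f"Year {year}: ${benefit:,.0f}")

print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $183,283
```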
Interviewees mentioned the following additional benefits that their organizations experienced but were not able to quantify:
Improves customer experience and provides reputational benefit. With a Monte Carlo partnership, interviewees noted that bad data and AI affected their end customers less often. This outcome resulted in less customer-facing data and AI downtime (as quantified in benefit B) and contributed to a better customer experience overall.
Uncovers new product and revenue opportunities. Some interviewees cited their Monte Carlo partnership as a data product value-add for their end customers that contributed to additional product opportunities and customer retention. The AVP of product strategy at the stock exchange noted their ability to win significantly more exchange-traded funds (ETFs) after building trusted data pipelines with Monte Carlo. They explained: “We could launch a product to help with our ETF business because we could build trusted pipelines with Monte Carlo. Now our winning in the ETF space is up 75%.”
Empowers the organization with data. Interviewees told Forrester that data monitored with Monte Carlo was inherently perceived as more trustworthy and “agreed upon” across their organizations, leading to cross-team willingness to use data or data products from other teams or sources. Multiple interviewees noted that the Monte Carlo deployment enabled a decentralized self-service data strategy and allowed higher quality data to be used by more employees more frequently, contributing to better results across the business.
The value of flexibility is unique to each customer. There are multiple scenarios in which a customer might implement Monte Carlo and later realize additional uses and business opportunities, including:
The downstream impact of decision-making on better quality data. Interviewees expressed optimism that monitoring their data with Monte Carlo would contribute to better quality data and contribute to better decision-making overall, especially as AI/ML decisioning became more prevalent within their organizations. The AVP of product strategy at the stock exchange noted that in addition to the AI/ML use cases already in practice, more were on the horizon and included trading behavior A/B testing historically managed by employees, which would save personnel time and potentially lead to better business results.
Flexibility would also be quantified when evaluated as part of a specific project (described in more detail in Total Economic Impact Approach).
Ref. | Cost | Initial | Year 1 | Year 2 | Year 3 | Total | Present Value |
---|---|---|---|---|---|---|---|
Ftr | Estimated Monte Carlo consumption costs | $0 | $162,225 | $243,338 | $324,450 | $730,013 | $592,347 |
Gtr | Personnel effort for Monte Carlo deployment and continued development | $21,050 | $22,770 | $23,115 | $23,460 | $90,395 | $78,479 |
Total costs (risk-adjusted) | $21,050 | $184,995 | $266,453 | $347,910 | $820,407 | $670,826 |
Organizations pay Monte Carlo on a consumption model based on the number of assets (under monitoring) and the type of monitor. Other factors influencing Monte Carlo-attributable spend include hosting specifics and additional compute or data storage resources required. Depending on these factors, costs will vary. Pricing has been estimated for the composite organization based on the factors below.
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
It monitors 1,000 to 3,000 data assets annually with Monte Carlo from Year 1 to Year 3. Pricing with Monte Carlo increases each year as the number of monitored data assets (specifically number and type of monitors on those assets) increases.
The composite organization hosts the Monte Carlo agent and the data store within its own environment.
A typical customer may spend the equivalent of 5% to 10% of its total data warehouse or data lakehouse fee on Monte Carlo consumption. Based on this, the composite organization spends an estimated $150,000 to $300,000 with Monte Carlo from Year 1 to Year 3.
The composite spends an additional 3% of the Monte Carlo consumption cost annually on the additional compute resources and hosting required for its deployment scenario. This figure is typically between 1% and 5% for most organizations.
Please contact Monte Carlo for an overview of hosting options and pricing for your organization.
Risks. This cost will vary among organizations based on:
The number of data and AI observability assets under management with Monte Carlo.
The specific Monte Carlo hosting scenario an organization selects.
Results. To account for these variances, Forrester adjusted this cost upward by 5%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $592,000.
Ref. | Metric | Source | Initial | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|---|
F1 | Estimated Monte Carlo consumption costs | Composite | $150,000 | $225,000 | $300,000 | ||
F2 | Additional compute resources for Monte Carlo | F1*3% | $4,500 | $6,750 | $9,000 | ||
Ft | Estimated Monte Carlo consumption costs | F1+F2 | $0 | $154,500 | $231,750 | $309,000 | |
Risk adjustment | ↑5% | ||||||
Ftr | Estimated Monte Carlo consumption costs (risk-adjusted) | $0 | $162,225 | $243,338 | $324,450 | ||
Three-year total: $730,013 | Three-year present value: $592,347 |
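As an illustration only (actual pricing depends on monitors deployed and hosting choices), the Cost F arithmetic in the table above can be reproduced as follows.

```python
# Sketch of the Cost F calculation (estimated Monte Carlo consumption costs).

RISK_ADJUSTMENT = 1.05   # cost adjusted upward by 5%
DISCOUNT_RATE = 0.10

consumption = [150_000, 225_000, 300_000]   # F1, Years 1-3
extra_compute_rate = 0.03                   # F2: additional compute/hosting, 3% of consumption

pv_total = 0.0
for year, fee in enumerate(consumption, start=1):
    cost = fee * (1 + extra_compute_rate) * RISK_ADJUSTMENT   # Ftr
    pv_total += cost / (1 + DISCOUNT_RATE) ** year
    print(f"Year {year}: ${cost:,.0f}")

print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $592,347
```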
Evidence and data. Interviewees characterized their Monte Carlo deployments as efficient and relatively brief, requiring few internal resources. Several interviewees highlighted the value of nontechnical users being able to self-service on Monte Carlo. Once deployed, interviewees noted that their organizations spent additional effort on:
Identifying additional data assets and domains for monitoring.
Setting up or tweaking existing or additional monitoring.
Working on integrations.
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
Four FTEs spend 20% of their working hours over a two-month period for the initial Monte Carlo deployment.
Once deployed, employees spend 25 hours per month on Monte Carlo-related activities as detailed above.
The average fully burdened annual salary for personnel working on the Monte Carlo deployment is $137,280.
The average fully burdened hourly rate for personnel working on Monte Carlo-related activities post-deployment is $66 to $68 per hour from Year 1 to Year 3.
Risks. This cost will vary among organizations based on:
The scope and complexity of an organization’s Monte Carlo deployment as determined by the initial number of data and AI observability assets under management, as well as the hosting scenario.
The degree to which Monte Carlo is expanded to additional observability assets or domains relative to ongoing personnel hours spent on Monte Carlo-related activities.
The skill and capacity of personnel working on Monte Carlo-related deployment and ongoing activities.
Results. To account for these risks, Forrester adjusted this cost upward by 15%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $79,000.
Ref. | Metric | Source | Initial | Year 1 | Year 2 | Year 3 | |
---|---|---|---|---|---|---|---|
G1 | Monte Carlo deployment length (months) | Composite | 2 | ||||
G2 | Personnel working to implement Monte Carlo | Composite | 4 | ||||
G3 | Personnel time on task | Interviews | 20% | ||||
G4 | Fully burdened annual salary for implementing personnel | $66/hour*2,080 | $137,280 | ||||
G5 | Monte Carlo implementation personnel costs | G1*G2*G3*(G4/12) | $18,304 | 
G6 | FTE hours spent on monitoring, monitor development, and integrations per month | Interviews | 25 | 25 | 25 | ||
G7 | Fully burdened hourly rate for an FTE | Assumption | $66 | $67 | $68 | ||
G8 | Ongoing Monte Carlo personnel costs | G6*G7*12 months | $19,800 | $20,100 | $20,400 | ||
Gt | Personnel effort for Monte Carlo deployment and continued development | G5+G8 | $18,304 | $19,800 | $20,100 | $20,400 | |
Risk adjustment | ↑15% | ||||||
Gtr | Personnel effort for Monte Carlo deployment and continued development (risk-adjusted) | $21,050 | $22,770 | $23,115 | $23,460 | ||
Three-year total: $90,395 | Three-year present value: $78,479 |
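The Cost G arithmetic can be sketched as below; note that the annual salary in G4 is converted to a monthly figure for the two-month deployment, which is how the $18,304 initial cost is reached.

```python
# Sketch of the Cost G calculation (personnel effort for deployment and ongoing management).

RISK_ADJUSTMENT = 1.15   # cost adjusted upward by 15%
DISCOUNT_RATE = 0.10

# Initial deployment: four FTEs at 20% time on task for two months
deployment_months = 2                         # G1
deployment_ftes = 4                           # G2
time_on_task = 0.20                           # G3
annual_salary = 66 * 2_080                    # G4 = $137,280
initial_cost = deployment_months * deployment_ftes * time_on_task * (annual_salary / 12) * RISK_ADJUSTMENT

# Ongoing: 25 FTE hours per month on monitoring, monitor development, and integrations
monthly_hours = 25                            # G6
hourly_rate = [66, 67, 68]                    # G7

pv_total = initial_cost                       # initial costs are not discounted
print(f"Initial: ${initial_cost:,.0f}")       # ≈ $21,050
for year, rate in enumerate(hourly_rate, start=1):
    ongoing = monthly_hours * rate * 12 * RISK_ADJUSTMENT   # Gtr
    pv_total += ongoing / (1 + DISCOUNT_RATE) ** year
    print(f"Year {year}: ${ongoing:,.0f}")

print(f"Three-year present value: ${pv_total:,.0f}")   # ≈ $78,479
```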
Initial | Year 1 | Year 2 | Year 3 | Total | Present Value | |
---|---|---|---|---|---|---|
Total costs | ($21,050) | ($184,995) | ($266,453) | ($347,910) | ($820,407) | ($670,826) |
Total benefits | $0 | $1,038,570 | $1,257,561 | $1,445,181 | $3,741,312 | $3,069,248 |
Net benefits | ($21,050) | $853,575 | $991,108 | $1,097,271 | $2,920,905 | $2,398,422 |
ROI | 358% | |||||
Payback | <6 months |
The financial results calculated in the Benefits and Costs sections can be used to determine the ROI, NPV, and payback period for the composite organization’s investment. Forrester assumes a yearly discount rate of 10% for this analysis.
These risk-adjusted ROI, NPV, and payback period values are determined by applying risk-adjustment factors to the unadjusted results in each Benefit and Cost section.
The initial investment column contains costs incurred at “time 0” or at the beginning of Year 1 that are not discounted. All other cash flows are discounted using the discount rate at the end of each year. PVs are calculated for each total cost and benefit estimate. NPV calculations in the summary tables are the sum of the initial investment and the discounted cash flows in each year. Sums and present value calculations of the Total Benefits, Total Costs, and Cash Flow tables may not exactly add up, as some rounding may occur.
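As a worked example of that methodology, the following sketch derives the summary metrics from the risk-adjusted cash flows in the table above; small differences from the published figures are due to rounding.

```python
# Sketch of how the ROI, NPV, and PV figures follow from the risk-adjusted cash flows.

DISCOUNT_RATE = 0.10

costs = [21_050, 184_995, 266_453, 347_910]          # initial, Year 1-3 (outflows)
benefits = [0, 1_038_570, 1_257_561, 1_445_181]      # initial, Year 1-3 (inflows)

def present_value(cash_flows):
    # Initial ("time 0") amounts are not discounted; Years 1-3 are discounted at year end.
    return sum(cf / (1 + DISCOUNT_RATE) ** year for year, cf in enumerate(cash_flows))

pv_costs = present_value(costs)
pv_benefits = present_value(benefits)
npv = pv_benefits - pv_costs
roi = npv / pv_costs

print(f"Benefits PV: ${pv_benefits:,.0f}")   # ≈ $3.07 million
print(f"Costs PV:    ${pv_costs:,.0f}")      # ≈ $670,826
print(f"NPV:         ${npv:,.0f}")           # ≈ $2.40 million
print(f"ROI:         {roi:.0%}")             # ≈ 358%
```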
From the information provided in the interviews, Forrester constructed a Total Economic Impact™ framework for those organizations considering an investment in Monte Carlo.
The objective of the framework is to identify the cost, benefit, flexibility, and risk factors that affect the investment decision. Forrester took a multistep approach to evaluate the impact that Monte Carlo can have on an organization.
Interviewed Monte Carlo stakeholders and Forrester analysts to gather data relative to Monte Carlo.
Interviewed eight decision-makers at six organizations using Monte Carlo to obtain data about costs, benefits, and risks.
Designed a composite organization based on characteristics of the interviewees’ organizations.
Constructed a financial model representative of the interviews using the TEI methodology and risk-adjusted the financial model based on issues and concerns of the interviewees.
Employed four fundamental elements of TEI in modeling the investment impact: benefits, costs, flexibility, and risks. Given the increasing sophistication of ROI analyses related to IT investments, Forrester’s TEI methodology provides a complete picture of the total economic impact of purchase decisions. Please see Appendix A for additional information on the TEI methodology.
Benefits represent the value the solution delivers to the business. The TEI methodology places equal weight on the measure of benefits and costs, allowing for a full examination of the solution’s effect on the entire organization.
Costs comprise all expenses necessary to deliver the proposed value, or benefits, of the solution. The methodology captures implementation and ongoing costs associated with the solution.
Flexibility represents the strategic value that can be obtained for some future additional investment building on top of the initial investment already made. The ability to capture that benefit has a PV that can be estimated.
Risks measure the uncertainty of benefit and cost estimates given: 1) the likelihood that estimates will meet original projections and 2) the likelihood that estimates will be tracked over time. TEI risk factors are based on “triangular distribution.”
Present value (PV): The present or current value of (discounted) cost and benefit estimates given at an interest rate (the discount rate). The PV of costs and benefits feed into the total NPV of cash flows.
Net present value (NPV): The present or current value of (discounted) future net cash flows given an interest rate (the discount rate). A positive project NPV normally indicates that the investment should be made unless other projects have higher NPVs.
Return on investment (ROI): A project’s expected return in percentage terms. ROI is calculated by dividing net benefits (benefits less costs) by costs.
Discount rate: The interest rate used in cash flow analysis to take into account the time value of money. Organizations typically use discount rates between 8% and 16%.
Payback period: The breakeven point for an investment. This is the point in time at which net benefits (benefits minus costs) equal initial investment or cost.
Total Economic Impact is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists solution providers in communicating their value proposition to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of business and technology initiatives to both senior management and other key stakeholders.
1 Source: The Data Quality Solutions Landscape, Q4 2023, Forrester Research, Inc., November 7, 2023.
2 Total Economic Impact is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists solution providers in communicating their value proposition to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of business and technology initiatives to both senior management and other key stakeholders.
Readers should be aware of the following:
This study is commissioned by Monte Carlo and delivered by Forrester Consulting. It is not meant to be used as a competitive analysis.
Forrester makes no assumptions as to the potential ROI that other organizations will receive. Forrester strongly advises that readers use their own estimates within the framework provided in the study to determine the appropriateness of an investment in Monte Carlo. For any interactive functionality, the intent is for the questions to solicit inputs specific to a prospect's business. Forrester believes that this analysis is representative of what companies may achieve with Monte Carlo based on the inputs provided and any assumptions made. Forrester does not endorse Monte Carlo or its offerings. Although great care has been taken to ensure the accuracy and completeness of this model, Monte Carlo and Forrester Research are unable to accept any legal responsibility for any actions taken on the basis of the information contained herein. The interactive tool is provided ‘AS IS,’ and Forrester and Monte Carlo make no warranties of any kind.
Monte Carlo reviewed and provided feedback to Forrester, but Forrester maintains editorial control over the study and its findings and does not accept changes to the study that contradict Forrester’s findings or obscure the meaning of the study.
Monte Carlo provided the customer names for the interviews but did not participate in the interviews.
Richard Cavallaro
April 2025