Business Intelligence Blog from arcplan
22 Aug 2012

The Pitfalls of Budgeting & Planning Deployments: Part I


The financial success of any organization begins with a well-structured budget for each department. Just having this budget/plan in place does not guarantee success, but not having a plan guarantees failure.

But with only half of organizations’ plans proving accurate and two-thirds of organizations unable to investigate the details of their budgets in real time, many companies – painfully aware of these shortcomings – are planning major modifications to their budgeting and planning processes.[1]

The implementation of a comprehensive budgeting, planning and forecasting (BP&F) tool should not be taken lightly. Though BP&F solution deployments can take as little as 6 weeks or as long as 4 months depending on complexity, these projects require significant forethought, resources, and strategy. With this in mind, here are some pitfalls to avoid if at all possible. These of course apply to all business intelligence projects, not simply BP&F:

1) Unrealistic scope
An unrealistic scope is the #1 project impediment we see at arcplan. Because our solution, arcplan Edge, is fully customizable, we enable our customers not to compromise on their requirements. However, not every company has the funds to deploy every item on its wish list in Phase I of the project. So what can you do to ensure project success?

Continue reading this post >>

10 Aug 2012

Invest in Good Data Before Big Data


Big data is without a doubt one of the top 5 BI trends of 2012. The hype around big data has driven many companies to hoard massive amounts of structured and unstructured information in the hope of unearthing useful insights that will help them gain a competitive advantage. Admittedly, there is significant value to be extracted from your company’s growing vault of data; however, it is data quality – not necessarily quantity – that is your company’s biggest asset. So here are 3 reasons why you should devote more of your IT budget to data quality:

1) Because good data quality sets the stage for sound business decisions.
Sensible business decisions should be based on accurate, timely information coupled with the necessary analysis. Decision-makers need to be equipped with facts in order to plan strategically and stay ahead of the competition – and facts are entirely based on having correct data. Though it’s not as “sexy” as big data, mobile BI, or cloud, data quality should be the foundation of all of these other initiatives.

Admittedly, achieving data quality is tough. Gartner analyst Bill Hostmann says, “Regardless of big data, old data, new data, little data, probably the biggest challenge in BI is data quality.” It crosses department lines (both IT and business users must take responsibility), and processes that have multiple levels of responsibility often suffer from the “everyone and no one is responsible” conundrum. It’s also a complex process that requires laying out common definitions (what is a customer, what are our conventions for company names – Inc. or no Inc. – for example), performing an initial data cleanse, and then keeping things tidy through ongoing data monitoring, ETL, and other technologies.
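To make the "initial data cleanse" step more concrete, here is a minimal sketch using pandas that applies a common definition for company names (dropping “Inc.”-style suffixes) and coerces dates into one canonical type. The column names and the no-“Inc.” convention are hypothetical examples for illustration, not arcplan specifics.

import re
import pandas as pd

def standardize_company(name):
    # Apply a shared definition: trim whitespace and drop "Inc."-style suffixes.
    cleaned = name.strip()
    return re.sub(r"[,\s]*(inc\.?|incorporated)\s*$", "", cleaned, flags=re.IGNORECASE)

def cleanse(df):
    df = df.copy()
    df["company"] = df["company"].fillna("").map(standardize_company)
    # Coerce dates into one canonical type; unparseable values become NaT for review.
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    return df

records = pd.DataFrame({
    "company": ["Acme, Inc.", "acme inc", "Globex Incorporated"],
    "signup_date": ["2012-03-01", "2012-03-15", "not a date"],
})
print(cleanse(records))

Rules like these are only as good as the common definitions behind them, which is why the business side has to be in the room when they are written.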

But ensuring that your data is timely, accurate, consistent, and complete means users will trust the data – and ultimately, that trust is the goal of the entire exercise. Trusting the data means being able to trust the decisions that are based on it. Clean up the data you have in place; then you can move on to a strategy that incorporates additional sources of big data.

2) Because you have to.

Continue reading this post >>

5 Jul 2012

Evaluating Data Quality Improvement Technologies – Part II (ETL)


If you’re ready to deploy a business intelligence reporting and analytics solution, then data quality is probably on your mind. Last week, we demystified master data management (MDM) and how it combines technology with consensus-building to improve and maintain data quality. Today let’s review another technology option for data quality initiatives: ETL.

Extract, transform, and load (ETL) tools are widely used to improve data quality. ETL tools first combine data from typically heterogeneous sources into a single repository (the extract phase), then transform the data by cleansing and organizing it to optimize reporting and analysis, and finally load the cleansed data into a data warehouse or other system, where end users can rely on vetted information. When just starting out, many organizations simply use ETL to cleanse their data – for example, resetting dates or adding default values to empty fields. More advanced data cleansing can mean “deduping” duplicate customer records or parsing fields like “Full Name” into separate “First Name” and “Last Name” fields.
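To show what that kind of transform-phase logic boils down to, here is a minimal sketch in plain Python that splits a “Full Name” field and deduplicates contact records. Real ETL tools express this declaratively or visually; the field names and the crude dedupe key here are hypothetical, not drawn from any particular product.

# Illustrative transform-step logic only; field names are hypothetical.
rows = [
    {"Full Name": "Jane Smith",   "Email": "jane@example.com"},
    {"Full Name": "Jane Smith",   "Email": "jane@example.com"},   # duplicate record
    {"Full Name": "Robert Jones", "Email": "rjones@example.com"},
]

def transform(row):
    # Parse "Full Name" into separate fields and normalize the email address.
    first, _, last = row["Full Name"].partition(" ")
    return {"First Name": first, "Last Name": last, "Email": row["Email"].lower()}

seen = set()
cleansed = []
for row in rows:
    out = transform(row)
    key = (out["First Name"], out["Last Name"], out["Email"])  # crude dedupe key
    if key not in seen:
        seen.add(key)
        cleansed.append(out)

print(cleansed)  # two unique contact records, names split into separate fields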

Enterprise ETL tools are expensive, with Oracle’s Data Integrator costing $23,000 per processor, for example. There are open source and low-cost ETL tools out there, but they’re not generally suitable for enterprise-scale data quality initiatives. ETLTools.net has compiled a list of them as well as their limitations.

The advantages of ETL tools vary. Obviously SAP’s ETL tools offer tighter integration with SAP products; the same goes for Oracle’s. Some tools are faster than others to implement and learn, and some offer better support and documentation than others. Beyond advanced data cleansing support, it’s important to carefully consider the following items before purchasing any ETL tool:

Continue reading this post >>

27 Jun 2012

Evaluating Data Quality Improvement Technologies – Part I (MDM)


The big data phenomenon has driven many organizations not only to increase analytics initiatives, but also to focus on improving data quality in order to make reliable decisions. After all, what good is a lot of bad data? Aberdeen says that when data is “unhealthy,” it’s “one of the root causes of inefficient, time-consuming business processes [and] inaccurate decisions.”[i]

So what exactly have companies been doing to manage their data and improve data quality? Some have implemented enterprise-scale master data management (MDM) systems to centralize information, and others have implemented extract, transform, and load (ETL) tools to profile and cleanse their data. The size of your company (and your IT budget) may dictate the options available to you, but there’s always a way to ensure data quality. Let’s evaluate some of the options out there. Today we’ll tackle MDM.

As the name suggests, master data management is a huge initiative. Its goal is to create a single, unified view of an organization by integrating data from a number of sources – a centralized master file that contains a single version of the truth. According to Aberdeen’s 2012 report on The State of Master Data Management, most organizations aren’t doing anything crazy when it comes to MDM. In fact, 45% of companies (mostly SMBs under $500 million in revenue) are simply using spreadsheets or individual databases to manage and store their master data. Others (21%) are using internal talent to develop home-grown MDM systems.
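To illustrate the “single version of the truth” idea, here is a toy Python sketch that consolidates a customer record from two hypothetical source systems using a simple survivorship rule: the most recently updated non-empty value wins. Real MDM platforms do far more – matching, governance, stewardship workflow – but the core consolidation logic looks something like this. The system names, keys, and fields are assumptions for the example.

from datetime import date

# Two hypothetical source systems holding the same customer under one shared key.
crm = {"C-1001": {"name": "Acme",       "phone": "555-0100", "updated": date(2012, 5, 1)}}
erp = {"C-1001": {"name": "Acme Corp.", "phone": None,       "updated": date(2012, 6, 15)}}

def consolidate(*sources):
    master = {}
    for source in sources:
        for key, record in source.items():
            current = master.get(key)
            if current is None:
                master[key] = dict(record)
                continue
            # Survivorship rule: a non-empty value from the more recently
            # updated record wins; otherwise only fill in missing fields.
            newer = record["updated"] > current["updated"]
            for field, value in record.items():
                if value is not None and (current.get(field) is None or newer):
                    current[field] = value
    return master

print(consolidate(crm, erp))
# {'C-1001': {'name': 'Acme Corp.', 'phone': '555-0100', 'updated': datetime.date(2012, 6, 15)}}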

However, MDM is not just a technology-focused effort…

Continue reading this post >>

29 May 2012

Poor Data Quality – Part 2: Who Should Be Held Responsible?


We’re almost halfway through the year, and by now more than half of us have forgotten or become complacent about our New Year’s resolutions. But it’s never too late to get back on track! The same is true of data quality management: it’s never too late to restore order to your company’s data and combat the consequences of poor data quality. Data quality management should be an ongoing process, since bad data affects business intelligence systems and ultimately the decisions based on BI. It’s a big job and someone has to take responsibility for it. Who should that be?

“Data quality is not solely an IT issue…success depends mostly on involvement from the business side…Business professionals must ‘own’ the data they use.”

–  Gleanster Deep Dive: How Top Performers Improve Data Quality for Better Business Intelligence, January 2011

The knee-jerk reaction to the question of who should be held accountable for maintaining data quality is “the data steward,” “the data quality manager,” or any variation of that role. But who is the data steward? I believe that each organization should have several data stewards and that they should be the content owners or really, the people who most care about data quality. Here are a few examples:

The marketing director who scrubs the CRM system to ensure that lead information is correct often wears the hat of data quality manager. Data quality is important to marketers because good data (email addresses, mailing addresses, and other segmentation fields like revenue and industry) is necessary to avoid fail points in communication and to ensure that the target audience receives your message. With a 2011 Experian QAS research report revealing that 90% of organizations believe as much as 25% of their departmental budgets were wasted during the last year as a result of inaccurate contact data, you can bet that your marketing team has a CRM data clean-up project in the works. Sometimes that means using an appending service to fix bad email addresses and sometimes that means manual research and data entry, but there is true ROI for marketing data quality initiatives.
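On the manual-cleanup side of that effort, a sensible first pass is simply flagging contact records whose email addresses are obviously malformed so they can be researched or handed off to an appending service. Here is a minimal, illustrative Python sketch – the regex and field names are assumptions for the example, not a full validation of email syntax.

import re

# Deliberately loose pattern: catch blanks and obviously malformed addresses,
# not every edge case of valid email syntax.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}$")

leads = [
    {"name": "Dana Lee",  "email": "dana.lee@example.com"},
    {"name": "Sam Roy",   "email": "sam.roy@example"},   # missing top-level domain
    {"name": "Pat Quinn", "email": ""},                  # blank field
]

needs_review = [lead for lead in leads if not EMAIL_RE.match(lead["email"])]
for lead in needs_review:
    print(f"Flag for cleanup: {lead['name']!r} -> {lead['email']!r}")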

The account manager who oversees a territory and enters sales and account information in the CRM system is also responsible for data quality…

Continue reading this post >>