Jun 27, 2014

Microsoft Excel tip #3: Flash Fill

Every once in a while, Excel can surprise even us hardened pros. We discovered "Flash Fill" today (totally by accident) while working on a client project, and it made our afternoon. Bear in mind we are using Excel 2013, so this might be a new feature. Check out the video below.

Have a great weekend!

Flash Fill in Excel from Shooju on Vimeo.

Jun 12, 2014

The Efficient Analyst (part three): No more waiting

Nothing kills analyst productivity like the waiting game. In a world with big demands and tight deadlines, hitting roadblocks to progress can result in huge frustrations and last-minute scrambling. Where does this waiting time come from? Quite simply, analysts must wait for data if they can't get it on their own when they need it. In part three of this series, we'll review the causes of analyst wait time and offer suggestions on how to fix it.

The drivers of wait time

Analyst wait time comes from two sources: humans and technology. On the human side, data bureaucracies or process bottlenecks can make analysts dependent on the availability of other data-capable colleagues, which can burden specific individuals and create internal strife. On the technology side, bottlenecks are caused by a lack of accessibility, such as permission restrictions, hard-to-use interfaces, or complex ETL processes that do not run frequently enough to meet the needs of data users.

The human side

Process bottlenecks arise out of labor specialization. Perhaps there is a member of your team (let's call him or her "Data Person") who has been entrusted with managing all or some of your organization's data. Data Person might have the best data skills, or the volume of data might seem to require a point person to maintain it.  

It's important to note that Data Person is not JUST a data geek, IT professional or programmer. They are THE person (or persons) on whom the organization depends to get, use, filter or generally utilize data. There are some obvious problems with this. First, if Data Person is sick, on vacation or leaves the company, analysts don't have access to the data they need (or will waste time trying to figure it out on their own). Second, if Data Person is swamped by requests and new data updates, bottlenecks can cause analyst work to be delayed or create stress among the team (including and especially Data Person).

Good data persons try to document processes so that their absence or busyness does not slow things down. While documentation is nice, it does not solve the greater issue: the skill gap between Data Person and the average data user. 

The technology side

Challenges arise when technology creates a barrier to data access. This can be by design (access is restricted on purpose) or due to structural complexity (getting the data is simply too hard for the average data user). Either way, lack of access almost guarantees data bottlenecks. Even in cases where there are no permission restrictions on data access, hard-to-use data structures still mean that Data Person must query or extract the data for the user.

Reduce waiting time by closing the gap

Closing the gap that causes wait time must be approached from both angles. Technology must be used to create a system where data is easy to access and meets ever-changing analyst needs. This can be done by creating simpler interfaces and less complex data structures, or by empowering analysts to execute ETL jobs so that they have greater control over the data they need in real time.
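As a toy sketch of the self-serve idea (the dataset name, figures and helper functions here are all made up for illustration), an ETL pass that an analyst can trigger directly, without waiting on Data Person, might look like:

```python
# Hypothetical self-serve refresh: the analyst runs the ETL pass
# directly instead of filing a request with Data Person.
def refresh(dataset, extract, transform, load):
    """Run one extract-transform-load pass; returns rows loaded."""
    rows = [transform(row) for row in extract(dataset)]
    load(dataset, rows)
    return len(rows)

store = {}
n = refresh(
    "gdp",
    # Illustrative stand-ins for a real source feed and warehouse:
    extract=lambda name: [("2013", "16,700"), ("2014", "17,400")],
    transform=lambda row: (row[0], float(row[1].replace(",", ""))),
    load=lambda name, rows: store.update({name: dict(rows)}),
)
print(n, store["gdp"]["2014"])  # 2 17400.0
```

The point of the sketch is the shape, not the code: when extract, transform and load are packaged behind one simple call, refreshing data stops being a specialist task.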

On the other side, data users must be well trained and comfortable using data so that they are empowered to get what they need when they need it. Data training usually requires substantial effort to enhance the capabilities of the entire team, and it can take months to get everyone up to speed. This is not a quick fix - it's an investment in the future productivity of your organization.

But what about Data Person?

We are not advocating for an elimination of the Data Person role but rather a re-allocation of that person's skills. Rather than positioning him or her as reactive to the individual data needs of the organization, Data Person must proactively implement easy-to-use technical solutions while training analysts to use them. Oh, and do all of this while also doing their regular job during the transition. Easy? Not so much. But give us a shout - we might be able to help.

Data persons or data users - do you have any thoughts on this? What's wrong with our approach? Are we being realistic about the capabilities of organizations to close "the waiting gap"?

Jun 6, 2014

Transparency Camp 2014: But how do I GET the data?

On Friday, May 30th, I was lucky enough to give a talk at Transparency Camp 2014 on the challenges of data collection. I've made the slides available below. Many thanks to all who attended!


Jun 3, 2014

The Efficient Analyst (part two): Store only the data you need

The Indiana Jones Warehouse

The more stuff you own, the more time you spend managing it. 

While this insight from my mother was brought to life when referring to clothing and home furnishings, it is hugely applicable to the world of data storage and analysis.

Whenever we start a data project with a client, we are always shocked by how much data they store that is simply not used - and not data that they generate themselves, either. One of our clients had an SQL database with 1,800 economic indicators stored for 210 countries going back several decades. After talking with their analysts directly, we discovered three HUGE problems with this approach:

The first problem was that most of the data pulls included only two core indicators: GDP and population. The second was that most of the data pulls were for six unique countries (USA, Japan and the BRICs). And the final problem was that various parts of the data were updated monthly, which meant that at least one member of the analyst team was responsible for refreshing this data twelve times a year, taking him two to three full days to complete each update.

So, let's review:

The database in question had (1,800 indicators x 210 countries) 378,000 unique data series.

The most commonly used ones? Only twelve.

The company was spending 200-300 man-hours a year to update one data set - of which 99.9968% was almost never being used. The associated cost of updating this data doesn't even include direct maintenance, storage requirements or troubleshooting.
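The back-of-the-envelope math above is easy to check:

```python
total_series = 1_800 * 210   # indicators x countries
used = 2 * 6                 # GDP and population for 6 countries
unused_pct = 100 * (1 - used / total_series)
print(total_series, f"{unused_pct:.4f}%")  # 378000 99.9968%
```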

But why?

When asked why they were holding on to this much unused data, the client gave the response that all hoarders give:

"Because we might need it"

Obviously this is not a good reason to store gigabytes of superfluous data. A successful database structure must be fluid enough that it can adapt to changing data requirements. Otherwise not only will you waste time storing and maintaining it, but your database will collapse under its own weight when changes arise.

Cure inventory waste

Here are four ways to prevent huge amounts of inventory waste in analytical databases.

1. Flexibility is everything. Flexibility allows you to store only the data you need, adding and subtracting data as your needs change. There are two ways to practice flexibility. The first is the manner in which the data is structured and stored: SQL databases can be too rigid for some applications, so look at NoSQL options like MongoDB or CouchDB, which can be better suited to storing price strips or economic indicators. Second, inserting and retrieving data must be easy. Simply put, if it is hard to put new data in or pull data out, then adjusting to new data requirements becomes nearly impossible.

2. Think about the use cases first. This applies to reducing many different kinds of data waste, but database architects and analysts must come together on needs and requirements before engaging in a project. Conversations between the two must be forward looking, thinking thoroughly about how the database will adapt to rapidly changing requirements.

3. Keep the data updated. This seems obvious, but data that is out of date helps no one. Keep it fresh through manual updating or an automated ETL process, and it will be used regularly. Otherwise, analysts will find different solutions (like going directly to the source) which will drive down the value of your database enormously. 

4. Use analytics to track usage. IT must understand what data their analysts are using, how frequently and for what purpose. By using services like Splunk or Kibana, or the analytics in SQL Server, you can detect which data is being used and by whom, keeping your database lean and useful.
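To make point 4 concrete (the access log below is entirely made up - real numbers would come from a tool like Splunk or your database's query log), even a few lines over a usage log reveal which series are actually being pulled:

```python
from collections import Counter

# Hypothetical access log: one entry per (country, indicator) pull.
access_log = [
    ("USA", "GDP"), ("USA", "population"), ("Japan", "GDP"),
    ("USA", "GDP"), ("Brazil", "population"), ("USA", "GDP"),
]

usage = Counter(access_log)
# Series pulled more than once are "hot"; everything else is a
# candidate for pruning from the database.
hot = [series for series, count in usage.most_common() if count > 1]
print(hot)  # [('USA', 'GDP')]
```

Run against our client's database, an analysis like this would have flagged the twelve hot series and put the other 377,988 on the chopping block.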

Readers: What did we miss? How do you and your team make sure that the data you maintain is being used effectively?