Free PowerBI Template to analyze Employee Attrition! Sujeet Pillai November 26, 2020

 

Did you know that it costs employers, on average, 33% of a worker’s annual salary to hire a replacement? That is what the research states, along with the fact that 75% of the causes behind employee attrition are preventable. Employee attrition analysis, as described in one of our previous articles, focuses on identifying these preventable reasons behind employee turnover and the times of the year when you should expect maximum attrition.

Losing a customer-facing employee is especially concerning, as clients are more comfortable speaking with people with whom they have an established rapport. Moreover, it erodes your organization’s collective knowledge base and relationships, and it has a direct effect on revenue. Customer issues and escalations are more likely to increase when you have new folks on the job.

Hence it’s no surprise that employee engagement and employee retention are hot topics in the industry today (especially in sales organizations). The initial focus usually revolves around preventing employee attrition and identifying methods to improve employee engagement.

There are several qualitative and quantitative metrics organizations can use to improve retention. The first step, however, is to build an attrition model that helps you identify the WHO and the WHEN of attrition. Once you have identified these, it becomes easier to focus your attention on the key aspects from a retention perspective.

In our conversations, we noticed that industry leaders are well aware of the actionable insights that can be drawn from employee information. However, everyone faced the same challenge: the absence of an out-of-the-box tool. The ones available in the market are custom-made or developed on demand and require investments of both time and money. They are standalone tools with separate licensing and integration requirements, along with onboarding and learning needs from a usage standpoint. Power BI helps you overcome these challenges and does not need any technical know-how or onboarding. Considering that it comes bundled with the Microsoft suite, we have put together a free template that should meet your needs.

 

All you need to do is open this template with Power BI and connect it with employee information to see insights, as seen in the first screenshot above. These graphs have been created using the standard attrition rate formula and display attrition rate by year. You can also drill down within a year and find out the attrition rate by month, as seen in the second screenshot above.
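For reference, the attrition rate behind these graphs is conventionally computed as:

attrition rate (%) = (number of employees who left during the period / average headcount during the period) × 100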

Claim your free tool here.

The above tool focuses on the trend of attrition within a year. We will soon follow up with a behavioural analysis of attrition and its reasons. Stay tuned, and drop us a note at Incentius if you have any specific needs above and beyond the free tool we’ve provided. We would be happy to get on a quick consultation call to address them.

 

Wow! 6 years of Incentius! Sujeet Pillai September 11, 2019

 

It is a clear indication of how busy we were that we missed acknowledging our 5th anniversary.
Nevertheless, here we are, completing our 6th year. I couldn’t be prouder to announce that
since the first year, we have continually grown and evolved as a company. At this moment, when we are all on
cloud nine, I would like to share our little story of how it all started.

Background:

It all started during the time when Amit and I used to hang out for a couple of drinks every month at one
of our favourite joints and talk about all sorts of lemons life was throwing at us.
Dubeyji (Yeah! That’s what we call our Director of Finance – Mohit Dubey), who had recently shifted to Noida to take care of his
family business, used to join us on occasional Skype chats. Amit & I have known each other since 1999 from our time as batchmates
at IIT Bombay. We met Dubeyji in 2004 in our early professional days as we all started our consulting careers in the same company.
All of us were among the first few employees to join the recently started India operations for the company, which gave us a lot of
initial exposure to project ownership, people management and client management. Reminiscing about our time in consulting and the
challenges associated with different kinds of situations at the start of our careers was our favourite pastime.

Ideas abound, 2006-2012:

We were a trio that always had an entrepreneurial itch. In those days, we met on a monthly
basis to talk about the wacky business ideas we had and figure out how we could take them to market.
Looking back, our early ideas may have had potential; however, they were not well-formed. More importantly,
it’s now evident that we lacked complementary skills and real-world experience in running a company.

Fast forward a few years: Jaitex Exports (Dubeyji’s family business) was doing pretty well; Amit was
close to making Associate Principal at ZS, and I had moved to a more core technology role and had managed
a couple of early-stage start-ups. Before we knew it, it was 2013, and instinctively, we knew that said entrepreneurial
itch had to be scratched sooner rather than later. For your first strike, you should always pick the move
you know best. Healthcare, incentive compensation, sales force analytics and technology had to be the
core competencies of our first venture.

The intervening 5-6 years had given us the required complementary skills and experience, which were key to pursuing
this dream of ours. Amit had deepened his knowledge of the pharma SFE domain, Dubeyji was ready with valuable seed
capital and real-world accounting and finance knowledge, and I finally had the confidence to run the operations of
a company independently. Our vision centred on talking to clients day in and day out, understanding the practical
problems of the industry, and closing the gaps between their expectations and reality.

Incorporation, 2013:

On the 9th of September 2013, Incentius – a new-age technology-focused company, was incorporated, and we were ecstatic.
Our initial goal was to use our strengths and become implementation partners of several ICM players and SPM players in
the healthcare space and also provide them with other analytics and operations support. We started talking to several clients
but received constant feedback on the challenges they were facing with the existing players in the space. The primary concern
that was shared with us was around the lack of intuitiveness of some of the existing solutions and also the significant focus
on an expensive outsourcing model. Hence our vision evolved towards the capabilities and services we provide today:
delivering high-end analytical solutions, developing advanced technology solutions for complex business process management
systems, and creating in-depth analytical models along with data-driven reports for enterprises.
Our strategy was to be a service organization that provided technical support to various established consulting
companies trying to expand their portfolios. In mid-2014, Amit joined us, and we took on more advanced projects that
helped clients solve complex problems and realize their technology vision in complex business processes. Incentius focused on
engagement models and helped various small consulting firms deliver their projects with a unique rapid-prototyping-first strategy.

Today, 2019:

I take pride in the fact that today Incentius has close to 25 clients and has delivered more than 100 projects over the years.
The clients that we started with on day one of Incentius are still with us. I am particularly thankful to all my clients who
showed their faith in us and believed that we would push, build and deploy every requirement of theirs. You are our best critics
and have helped us grow an inch every day. A company is nothing without brilliant people, and I’d like to thank every one of our
current and former employees, who contributed immensely to bringing us to where we are today. They are the true assets of Incentius.
Lastly, I would like to thank my Partners – Amit and Dubeyji, for their constant support and for sticking together through our great
and sometimes truly scary times to make our little dream into reality.

Of course, none of this would have been possible without the support and love of my wonderful family – My amazing wife and my
extremely handsome sons.

Cheers!

 

A low-risk approach to design & develop enterprise web apps Sujeet Pillai January 27, 2017

 
             
Enterprises are constantly commissioning, creating and deploying web applications to achieve various objectives across their business operations. The standard operating procedure for such deployments is a proposal stage, an extensive discovery and functional requirements documentation stage, implementation, testing and rollout. This traditional approach turns out to be a very expensive and time-consuming affair, and as a result, most ideas are nixed early. To mitigate this situation and boost innovation in enterprises, Incentius proposes an alternative strategy for such deployments. We call this the thick prototype strategy internally.

Thick Prototype Strategy

The core aspect of this strategy is to enable the enterprise business owner to commission a ‘thick’ prototype for new ideas at a relatively lower cost and quicker turnaround time. This is essentially a prototype web frontend that encompasses almost all front-end functionality. It lays out the frontend design and workflow in detail and implements (or mimics) most functionality in full. The thick prototype also mocks up sample reports, visualizations and dashboards at this stage. The focus is on mimicking the front-end look and feel and the workflow functionality.

A new discovery process

The primary change in such a strategy is that the discovery process is now embedded in the prototype creation stage. The thick prototype is iterated based on client inputs; hence, requirements and functionality are captured in live visual form rather than in functional requirements documentation. If the client can provide a consistent set of mock data to populate the prototype, this stage becomes even more effective, since it lets the thick prototype mimic backend functionality to a certain extent as well. Broken down to the bare bones, a large number of enterprise apps are essentially workflows with a reporting/visualization layer. This is why the thick prototype is so effective as a discovery process: workflow interaction points are explained far better through a visual prototype than through extensive textual documentation and screenshots. The dynamic prototype also captures the exact workflow functionality better than static screenshots or lengthy textual requirements. Business users/owners can understand information flow, visual element positioning and logical operations early in the app development stage, whereas they usually don’t get a view into how the application looks and feels until the UAT (User Acceptance Testing) stage. The visual approach to discovery also facilitates better user experience-based design.

Better funding and reduced risk

The thick prototype concept also enables quicker buy-in from senior stakeholders and allows for rapid deployment of resources and funding for such enterprise application projects. Business leaders are generally more willing to fund projects that they can visualize. Thick prototypes can also create end-user excitement and hence advocacy for the application. Gaining end-user buy-in early, along with incorporating their feedback, also works as a conduit to better funding for such enterprise apps. In addition to better funding, the thick prototype concept reduces the risk in such applications. The thick prototype is inexpensive to build compared to even a full discovery stage in the traditional approach; it typically runs between 15% and 25% of the total estimated project cost. The prototype is also sufficient to demo to relevant stakeholders and gain buy-in from them. A GO/NO-GO decision point is included after the thick prototype is delivered, so if stakeholders do not agree with the proposed solution, the total sunk cost of the project is fractional rather than substantial. Since the thick prototype discovery process is iterative and open to suggestions and changes early, it also substantially reduces the risk of rework during the build phase, reducing the risk of scope creep and allowing for better control over project costs.

Scaling during build

If the right approach and technology are used in building the thick prototype, it can be scaled during the build phase into the full application. Such an approach reduces design anxiety amongst project owners, since the final system will look and work almost exactly as the prototype did. The project owners, end-users and sponsors relate far better to the final application than in the traditional approach. A supplementary benefit is that training documentation/videos/resources can be built reliably off the thick prototype, which is available early and can be previewed to the user base before the actual launch of the full application. This creates awareness about the solution up-front through different support avenues and allows the user base to hit the ground running when the application launches.

Spurring innovation in enterprise apps

The low financial risk involved in the thick prototype concept can spur innovation in the enterprise application space. A larger number of business owners can try out their ideas without extensive budget allocations. Such innovation can lead to the next big idea in enterprise management, creating greater value for shareholders, efficiency amongst employees and transparency for upper management.

Case Study

A client recently approached us to create a dashboard that gives their field reps the ability to track sales performance. They were very particular that the dashboard be based on modern design concepts and be intuitive and easy for the field reps to follow. Our suggestion was to adopt the thick prototype paradigm for the project. Incentius created a prototype frontend for the dashboard, with UI/UX elements mimicking how it would perform in production with mock data. The client could visualize the front end and even present it to some key opinion leaders in the field for feedback before the project was fully commissioned. This also helped the field leaders provide a list of changes they would like to see. Incentius iterated on the prototype until all stakeholders were satisfied and then scaled the same prototype into the full application. This approach gave the client complete control over exactly the product they were receiving and made the launch of the dashboard a success. It was so successful that the client requested an extension of the platform to add new feature sets and wants to see a thick prototype of these additional requests before moving to the full build. The paradigm has become the standard operating procedure for this engagement.

Incentius pioneered this concept through trial and error to get the technology, build approach and project management just right. We have successfully created several such thick prototypes and scaled them to full applications as well. Drop us a note if you would like your business owners to spur game-changing innovation in your organization while being exposed to lower risk.

What do you think of our Thick Prototype paradigm? Please feel free to let us know in the comments.

 

A free tool to visualize costs associated with salesforce attrition Sujeet Pillai July 21, 2015

 

Voluntary attrition is costly. This fact is well known, and it becomes even more prominent in industries where
salespeople play a much larger role in driving the top line. It is desirable to track the salesforce’s
voluntary attrition rate on an ongoing basis and analyze attrition trends and patterns. You can read more about this here.

The important question, however, is “how do we quantify the effect of attrition on a salesforce?”. Is it
possible to estimate the financial impact of losing one salesperson? Can we create a model to understand the
impact and implications of various factors?

The key aspect to consider when calculating the financial impact of salesforce attrition on an organization is the
loss of productivity. While fixed transition costs like hiring, termination, and interview costs are important, the
loss of productivity during the transition period between losing one salesperson and replacing him/her with
another is the single largest source of financial loss that attrition imposes on an organization as a whole.
This is especially pronounced when you have a recurring revenue model. Loss of productivity happens during the
ramp-down period (the reduced-productivity period before a rep quits), the ramp-up period (the reduced-productivity period
just after a new rep joins), and the vacancy gap (the zero-productivity period when no rep is in the seat).
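As a rough illustration of how these three periods combine, here is a back-of-envelope sketch in Python. Every parameter value below is a made-up assumption for illustration, not a figure from our model:

monthly_revenue_per_rep = 50000        # assumed revenue of a fully ramped rep
ramp_down_months = 2                   # assumed length of the ramp-down period
ramp_down_productivity = 0.6           # assumed productivity during ramp-down
vacancy_months = 3                     # assumed length of the vacancy gap
ramp_up_months = 4                     # assumed length of the ramp-up period
ramp_up_productivity = 0.5             # assumed average productivity during ramp-up

# Lost revenue = forgone share of a fully ramped rep's output in each period.
lost_revenue = monthly_revenue_per_rep * (
    ramp_down_months * (1 - ramp_down_productivity)   # ramp-down loss
    + vacancy_months                                  # total loss while vacant
    + ramp_up_months * (1 - ramp_up_productivity)     # ramp-up loss
)
print("Estimated lost revenue per departure: $%d" % lost_revenue)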

Incentius has created a standard cost of attrition model that you can tweak to your needs by using the
parameters you’ve estimated for your salesforce. Use it to estimate the budgetary impact of attrition within your
salesforce. Also, estimate the savings you’ll create by reducing attrition by a certain level. This helps you
allocate a budget to HR activities that help reduce attrition.

Access the model here.

Read more posts about other HR analytics here – http://www.incentius.com/blog-posts/analysis-of-internal-vs-external-hiring-decisions/

Would you like to get a sample salesforce attrition model for your company? Please email us at
info@incentius.com

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right
side.

 

Gauge IPL teams’ overall performance consistency using analytics (Cricket 2008 – 2015) Sujeet Pillai May 4, 2015

 

IPL Special Blog: Analyzing team performance

IPL season is upon us, and the madness is in full flow. Along with the IPL comes the inevitable discussions on
which team is better, who is more consistent, etc. Here at Incentius, we want to take an analytical stab at this
question. How can we compare teams across the matches that they’ve played? Note that we want to look at overall
consistency and not just win performance. If a team loses a close match, it should still be rated higher than a
team that lost a match by a big difference. Similarly, winning teams should get more credit if they have a big
victory as opposed to those who have won close victories.

Normalizing scores

The core question here is one of normalizing scores in a game. We’ll do it on a scale of 0-1. For example, if team
A defeats team B in a match, team A would get a ‘performance metric’ of, say, 0.60 and team B would receive a
performance metric of 0.40 (hence adding up to 1). The greater the extent of the victory, the higher team A’s metric
would be, and the lower team B’s metric would be. Normalizing to 1.0 ensures that every game is equally
weighted. This is the approach we chose to take. If we wanted to weigh each game differently, say by how long
ago that game was played, we could do that as well. Or we could weigh each game by the total performance of both teams,
which would give high-scoring matches a higher weightage than low-scoring matches.

Measuring the extent of performance

The second question is how we define the extent of the victory. Obviously, the number of runs scored by a team
should be part of it. Similarly, the total number of wickets lost in an innings should be part of it. So a team
that scores 150 in an innings while losing 5 wickets in 12 overs is better than a team that scores 150 in an innings
while losing 7 wickets in 12 overs.

Strategy for measurement

So what strategy should we use for such measurement? Let’s use the Duckworth-Lewis method in a reverse fashion.
The Duckworth-Lewis method is a statistical method that estimates the number of runs an average cricket team is
expected to make based on the resources it has. In cricket, the ‘resources’ are the number
of overs left to be bowled and the number of wickets in hand. Essentially, Duckworth-Lewis is a matrix with the number
of wickets in hand on one axis and the number of overs left to be bowled on the other. The values at the
intersecting points tell you how many runs the team is expected to make. Read more about the Duckworth-Lewis
method and its application on its Wikipedia page.

The Duckworth-Lewis method is primarily used for 50-over matches. Luckily for us, however, Rianka Bhattacharya,
Paramjit S. Gill and Tim B. Swartz have calculated the corresponding T20 table in their paper, available here. Here is their table:

Overs/Wickets 0 1 2 3 4 5 6 7 8 9
20 1 0.969 0.93 0.879 0.813 0.722 0.599 0.448 0.297 0.176
19 0.956 0.909 0.877 0.83 0.769 0.683 0.565 0.42 0.272 0.153
18 0.917 0.867 0.829 0.787 0.732 0.654 0.542 0.402 0.257 0.139
17 0.877 0.823 0.789 0.738 0.697 0.628 0.522 0.387 0.246 0.128
16 0.835 0.782 0.753 0.705 0.664 0.602 0.503 0.374 0.235 0.12
15 0.792 0.743 0.709 0.669 0.626 0.574 0.484 0.362 0.227 0.112
14 0.751 0.707 0.673 0.637 0.593 0.546 0.464 0.35 0.218 0.105
13 0.715 0.674 0.636 0.603 0.562 0.515 0.443 0.338 0.21 0.098
12 0.683 0.637 0.602 0.568 0.529 0.475 0.419 0.326 0.202 0.091
11 0.65 0.599 0.566 0.533 0.497 0.439 0.393 0.313 0.194 0.085
10 0.613 0.56 0.526 0.501 0.46 0.408 0.361 0.3 0.186 0.079
9 0.579 0.523 0.479 0.461 0.425 0.378 0.331 0.283 0.177 0.072
8 0.54 0.483 0.443 0.417 0.389 0.349 0.302 0.261 0.167 0.066
7 0.493 0.442 0.402 0.374 0.354 0.321 0.272 0.234 0.157 0.059
6 0.417 0.385 0.357 0.33 0.317 0.29 0.242 0.2 0.145 0.052
5 0.362 0.334 0.31 0.286 0.273 0.255 0.215 0.17 0.122 0.044
4 0.308 0.28 0.261 0.241 0.224 0.207 0.183 0.142 0.1 0.035
3 0.254 0.228 0.211 0.194 0.177 0.165 0.144 0.116 0.079 0.025
2 0.197 0.172 0.155 0.141 0.127 0.119 0.106 0.093 0.062 0.016
1 0.137 0.113 0.097 0.085 0.073 0.067 0.06 0.052 0.042 0.009

Calculating performance

Now, how do we use the Duckworth-Lewis table to calculate performance for a match? We’re going to
calculate an ‘effective score’ for each team, i.e., if a team finished their innings without using up all of their
wickets and all of their overs, we argue that they could have used those remaining overs and wickets
to score extra runs. The D/L table defines the number of extra runs they could have made.

Hence, if in an innings a team is all out or has finished all 20 overs:

effective_score = actual_score

If, however, a team still has wickets in hand and overs left to play:

effective_score = actual_score + extra runs(Wickets in hand/Overs left -> looked up on D/L Table)

Once the effective_score of each team is calculated, we calculate the performance metric by using:

performance_metric_team_a = team_a_effective_score/(team_a_effective_score + team_b_effective_score)
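To make this concrete, here is a minimal Python sketch of the calculation. The dictionary lookup and the conversion of remaining resources into extra runs (scaled here by an assumed nominal full-innings total) are our own implementation assumptions; the prose above does not spell that conversion out:

# dl_resources holds the T20 Duckworth-Lewis table above, keyed by
# (overs_left, wickets_lost) -> fraction of batting resources remaining.
dl_resources = {
    (4, 7): 0.142,   # e.g. 4 overs left with 7 wickets down
    # ... remaining cells of the table above ...
}

NOMINAL_FULL_INNINGS = 160.0   # assumed average T20 total used to turn
                               # leftover resources into extra runs

def effective_score(actual_score, overs_left, wickets_lost):
    # Innings fully used: effective score equals the actual score.
    if overs_left == 0 or wickets_lost == 10:
        return float(actual_score)
    # Otherwise add the runs the unused overs/wickets were worth.
    extra = dl_resources[(overs_left, wickets_lost)] * NOMINAL_FULL_INNINGS
    return actual_score + extra

def performance_metric(effective_a, effective_b):
    # Each team's share of the combined effective score; the two sum to 1.
    total = float(effective_a + effective_b)
    return effective_a / total, effective_b / total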

Analyzing performance

Now that we’ve calculated the performance metric of every team for every match, let’s ask the data some questions.

Consistency

Which teams have been most consistent across 2008-2014?

Chennai is the highest, no surprises there. Mumbai comes in second. KKR, despite their two title wins, are 5th, explained
by their dismal performance in 2009. Kings XI haven’t won a single IPL but come in third.

* Note that Sunrisers Hyderabad and Deccan Chargers have been merged together.

Biggest Win

Which match had the greatest win in IPL history in terms of relative effective score? Funnily enough, it seems the first
ever match of the IPL was the biggest victory. Remember Brendon
McCullum murdering the RCB attack?

Kolkata Knight Riders 222/3 (20/20 ov); Royal Challengers Bangalore 82 (15.1/20 ov)
Performance Metric – KKR = 0.7302, RCB = 0.2697

Then comes the second match of IPL 2009:

Royal Challengers Bangalore 133/8 (20/20 ov); Rajasthan Royals 58 (15.1/20 ov)
Performance Metric – RCB = 0.6963, RR = 0.3037

In third is the 1st semi-final of IPL 2008:

Rajasthan Royals 192/9 (20/20 ov); Delhi Daredevils 87 (16.1/20 ov)
Performance Metric – RR = 0.6882, DD = 0.3118

Head-to-head performance

How do teams usually perform when pitted against each other?

It seems Chennai vs Delhi is the strongest performance in favour of Chennai. Next comes Chennai vs Hyderabad, also
in favour of Chennai. Third is Mumbai vs Kolkata in favour of Mumbai.

Hope you had as much fun reading this blog as we had writing it. Check out some of our other blogs at http://www.incentius.com/blog

Drop us a mail at info@incentius.com to get in touch. Also, subscribe to this blog on the right to get future blog posts like this one.

 

How to analyze employee attrition – HR analytics Sujeet Pillai October 8, 2014

 

An issue that every company deals with is attrition. Sales being an especially high-attrition function makes this analysis paramount. Sales attrition is the result of several components, including unoptimized sales compensation, unrealistic quotas, ineffective mentoring, career-path ambiguity, training inefficacy or just bad recruiting. Hence, the ability to slice and dice sales attrition in many ways to understand trends and their root causes can seriously help sales leadership make the changes required to build a healthier, higher-performing sales force.

Numerically analyzing attrition is a bit tricky. This stems from the fact that the base of employees is continually in flux. Every month new hires join the salesforce, some employees are involuntarily terminated, some voluntarily leave the company, and some others go inactive without leaving the company, such as when they take a long-term leave of absence. Additionally, the quality of attrition is important. Let’s say two companies of about the same size lose about 25 salespeople a month. Are they experiencing the same problem? What if one company is losing more experienced salespeople whereas the other is losing mostly salespeople only 2-3 months into the job? These companies have wildly different problems. The first one may have an issue like its sales compensation program not rewarding top performers enough, whereas the other may have a recruiting issue, since new hires are probably finding that the job doesn’t match what they were told during recruiting.

There are myriad ways in which we can slice attrition. In this blog, we’ll list a few methods:

Attrition rate

The attrition rate, or the inverse of the retention rate, is the most commonly used metric when trying to analyze attrition. The attrition rate is typically calculated as the number of employees lost every year over the employee base. This employee base can be tricky, however. Most firms just use a start-of-year employee count as the base. Some firms calculate it on a rolling 12-month basis to get a full-year impact. This ratio becomes harder to use if your firm is growing its employee base. For example, let’s say on Jan 1st of this year, there were 1000 employees in the firm. Over the next 12 months, we lost 100 employees. Is it as straightforward as a 10% attrition rate? Where it gets fuzzy is how many of those 100 employees were in the seat on Jan 1st. Were all 100 existing employees as of Jan 1st, or were some of them hired during the year and gone before it ended? Hence the attrition rate must be looked at from several views.
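As a minimal sketch of the “who was in the seat on Jan 1st” view, assume a simple roster table with hire and termination dates (the data below is made up for illustration):

import pandas as pd

# Hypothetical roster: one row per employee.
roster = pd.DataFrame({
    "emp_id":    [1, 2, 3, 4, 5],
    "hire_date": pd.to_datetime(["2012-03-01", "2012-11-15", "2013-02-01",
                                 "2013-05-10", "2012-06-20"]),
    "term_date": pd.to_datetime(["2013-06-30", None, "2013-04-15",
                                 None, None]),
})

year_start = pd.Timestamp("2013-01-01")
year_end = pd.Timestamp("2013-12-31")

# Base: employees in the seat on Jan 1st.
in_seat = (roster.hire_date < year_start) & (
    roster.term_date.isna() | (roster.term_date >= year_start))
base = roster[in_seat]

# Of that base, how many left during the year.
left = base[(base.term_date >= year_start) & (base.term_date <= year_end)]

print("Attrition rate of the Jan 1st base: %.0f%%" % (100.0 * len(left) / len(base)))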

Existing employee attrition

This type of view asks the question, “How many of my employees who worked here a year ago today have left?” This fixes the set of employees you’re looking at to just those that were employed 12 months ago. The figure below plots how many of those employees are still in their seats over the following 12 months. The plot is always a strictly decreasing curve. Here is a sample graph from a firm.

 

Such a view is important to look at because it tends to show where slightly more tenured employees leave. It leaves out those that joined and quit within 3 months, which is more of a localized recruiting issue than a systemic issue in the company.

Total employee attrition

This type of view displays total attrition by month. It does not discriminate as to which employees quit, whether they were new hires or they were 3-year tenured employees. The plot below shows how many employees quit each month over the last 12 months.

 

Employee Tenure

Closely related to employee attrition is total employee tenure. The more employee attrition, the lower the average tenure of employees in the company. Let’s look at a few ways we can track employee tenure over time. First, let’s define what a tenured employee means in the context of your company. Some folks call this the break-even period: the time it takes for an employee to mature. This may vary from 6 months to 1.5 years based on the complexity of the sales process, the tools involved and the product sales lifecycle.

Tenured employee proportion

First, let’s look at the proportion of employees who are tenured vs those who are new. The higher the tenured proportion, the better the job the company is doing at retention. This also directly impacts sales performance, because tenured employees tend to do better than new ones.

 

Tenured employee actual

In a growing company, however, just looking at the proportion of tenured employees to new ones is not enough. As new hires come in to grow the employee base, the tenured employee proportion automatically declines. Hence we should also look at the actual number of employees who are tenured. Below is a graph of actual tenured vs new hire employee counts for a firm.

 

Notice in recent months that both tenured and new hire counts have been increasing. This means that while the company is hiring more to increase the new hire counts, it’s doing a better job at retention as well since more tenured employees are staying on.

Batch-wise churn analysis

Is every batch of new hires that joins your company the same? Do they perform identically? Such a view can be very useful for studying the ‘employee lifecycle’ to see how batches start off and how they perform as they mature. This information can be critical when cross-referenced with hiring sources and may also be used to gauge recruitment performance.

Below is a view of employees that were hired in a particular month and how that batch churned over their lifetime at the company. This is typically possible to do if we have at least 2-3 years of company performance history.

 

Conclusion

HR analytics is an up-and-coming area that can make HR departments in companies highly data-driven and improve their efficiency manifold. Incentius can help build such HR analytics for firms to revolutionize their HR operations.

As a bonus, here is a free interactive tool to estimate the financial impact of losing a salesperson. Would you like a sample employee attrition analysis for your company? Feel free to download the tool and try it out, and do write to us at info@incentius.com if you need a detailed analysis of several other metrics.

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right side.

 

Uncover the secrets of top salespeople through analytics Sujeet Pillai September 4, 2014

 

What do top salespeople do differently? Can we analyse the behaviour of top performers using sales performance data to identify traits that can be used for coaching lower performers? There is a lot of literature on how to improve salespeople’s efficiency by focusing on the softer aspects of sales coaching. What if there’s behaviour that top performers indulge in without knowing it themselves? Maybe the data can talk to us.

Let’s take the example of a software firm. It sells three different types of products, with the option of attaching services to each of those products. These products are non-competing, so multiple products can be sold to the same customer. Let’s say these products are an anti-virus solution, a firewall solution and an anti-malware solution; in short, AV, FW and AM. Each of these product lines also has different flavours, which are progressively priced. For example, the anti-virus solution may have home, SMB and enterprise editions.

Define Performance

First, let’s define performance. Without defining performance appropriately, how do we know who the top performers are? All the salespeople have assigned quotas, and hence we’re going to simply use revenue achievement against quota as our performance measure. People with high revenue achievement are top performers. Let’s quintile all the reps on the basis of revenue achievement and place them in the 5 quintile buckets, as sketched below.
Having identified our top performers, let us now look at different types of analyses on the performance data of the top two quintiles to understand behaviours that set them apart from the rest of the salesforce.
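A quick sketch of that quintiling step, assuming a rep-level table of revenue achievement against quota (the numbers are purely illustrative):

import pandas as pd

# Hypothetical rep-level achievement-to-quota figures.
reps = pd.DataFrame({
    "rep_id": range(1, 11),
    "achievement": [1.40, 0.62, 1.10, 0.90, 1.80,
                    0.70, 1.00, 1.30, 0.50, 1.60],
})

# Quintile 5 = highest revenue achievement.
reps["quintile"] = pd.qcut(reps["achievement"], 5, labels=[1, 2, 3, 4, 5])

# The top two quintiles are the group analyzed in the views below.
top_performers = reps[reps["quintile"].isin([4, 5])]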

Portfolio split

First, let’s analyze the split of product sales by quintile, both in terms of units sold as well as revenue dollars.

Product split by quintile (Revenue)

Product split by quintile (# of units)

There is some variation in the split of product sales by quintile. The absolute units and revenue are much higher in the top quintile (which is obviously expected). However, there doesn’t seem to be a distinct trend to suggest that the top performers favour one product over the other. Hence there doesn’t seem to be any significant behavioural trend on the product portfolio split.

Cross-selling

Do top performers sell more products on average to the same customer? This is commonly known as cross-selling. Let’s take a look at the split of one product deals, two product deals and three product deals sold by performance quintile.

Multi-product deal split by quintile

Now there’s a trend! It seems the top performers tend to sell multiple products to every second customer. It’s distinctly clear that the lower performers sell many more single-product deals than the top performers; there may be additional opportunities within their converted prospects that they are leaving on the table. Hence, if we encourage, incentivize and train our lower performers to do more cross-selling and to ask their prospects the right questions to uncover extra opportunities, we can improve our overall sales performance. The company’s marketing department can probably also offer customer discounts on second and third products in the same order to encourage this behaviour further.

Upselling

Are the top performers selling higher revenue product flavours as compared to the lower performers? Let’s take a look at the average revenue per product line per customer. That should give us a good view of this behaviour.

Average revenue per product line per customer by quintile

Hmm. It seems that the top performers are selling higher-revenue product flavours compared to the lower performers. Note that, due to pricing differences, comparing average revenue levels between products is not meaningful. The difference is not as significant as the multi-product split, but a trend exists to suggest better up-selling by the top quintiles. Hence there is some advantage for the company to gain by improving coaching and incentivizing up-selling. Certain types of discretionary, manager-approved customer offers can also help this trend along.

Multi-location deals

Are top performers selling to larger corporate groups? This is a more pronounced possibility in SMB sales, especially in franchise businesses: many times the owner of one McDonald’s actually owns a set of franchises. This offers the possibility of making one sales pitch and capturing several orders at once. Let’s look at the average number of orders per customer by quintile. Note that this may not be straightforward to analyze if your sales systems define an order as unique to one customer. It may require some special data cleansing/mining to identify customer name patterns and recognize them as the same customer, as sketched below.
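One crude but serviceable sketch of that cleansing step: normalize customer names into a comparison key before counting orders per customer. Real matching would likely need fuzzier logic; this is only an illustration:

import re

def customer_key(name):
    # Collapse case and punctuation so variants of the same owner's name match.
    key = re.sub(r"[^a-z0-9 ]", "", name.lower())  # drop punctuation
    return re.sub(r"\s+", " ", key).strip()        # collapse whitespace

# Both variants map to the same key: "mcdonalds 42 elm st"
assert customer_key("McDonald's #42, Elm St.") == customer_key("MCDONALDS 42 ELM ST")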

Average orders per customer

So it seems about 1 in 10 customers that the top quintile sells to is a multi-location deal. The company can exploit this trend by training salespeople to ask their prospects about other businesses they may own. Again, marketing can play a role here by incentivizing customers to purchase company products for all their businesses, offering discounts or free services on multi-location deals.

Additional thoughts

We have only skimmed the surface with the analysis that we’ve done. For larger salesforces, there may exist clusters of top performers within the top quintile that behave differently. For example, one cluster may go for volume and maximize their deal counts, whereas another cluster may spend more time with the customer to sell higher revenue flavours. Their revenue achievement to goal performance may be the same, but their path to get there may be different. Identifying such clusters can help in further understanding top-performer selling behaviour and help in driving coaching requirements. Additionally, for larger salesforces, it may be beneficial to decile performance rather than use quintiles. This helps isolate the top performers into a smaller sample and enhances the visualization of trends.

Incentius can analyze your sales data and identify such behaviours that can help your company push sales performance.

Would you like to get a sample analysis of your sales data for your company? Please email us at info@incentius.com

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right side at Incentius.

 

Analysis of internal vs external recruitment decisions Sujeet Pillai July 18, 2014

 

Leaders often face the dilemma: is it better to promote internally or hire externally? During the early growth phase,
this question becomes even more important because of the inevitable conflict between maintaining internal culture and
hiring someone from outside to fulfil certain skill requirements. The decision is ultimately driven by the ability
to mentor newly promoted employees with the right training programs and sufficient lead time. It also
depends on the organizational situation and whether there is enough room for failure. Once the decision is taken,
the immediate next question is whether it will help the company in the long run.

There are obvious pros and cons associated with both approaches. Internal movers have long experience within
the firm, are more likely to be ambassadors of the firm culture, and have already acquired important
firm-specific skills that new hires will lack. New hires, on the other hand, bring a desired skillset from prior
experience, plus fresh perspective and insights from other companies/industries. In general,
internal promotions send a better signal to employees about abundant growth opportunities in the company,
but they require a strong training process to help promoted employees acquire the specific skills
required to be successful in the new role.

According to Wharton management professor Matthew Bidwell, in his research paper titled Paying More to Get Less:
The Effects of External Hiring versus Internal Mobility (pdf), “external hires” get significantly lower
performance evaluations for their first two years on the job than do internal workers who are promoted into
similar jobs. They also have higher exit rates, and they are paid “substantially more”: about 18% to 20% more.
On the plus side, if these external hires stay beyond two years, they get promoted faster than those who are
promoted internally. This behaviour can, to some extent, be explained using human psychology. Internal hires
generally have better knowledge of existing processes and, in general, a better rapport with leaders. External
hires, on the other hand, take some time to learn new processes, prove themselves and eventually build rapport
with leaders. Exit rates are also higher for external hires because there is generally less acceptance of failure
where rapport is lacking.

Overall, external hiring has grown much more frequent since the early 1980s, especially for experienced high-level
positions and especially in larger organizations. It used to be that smaller organizations preferred external
hires due to a lack of internal talent, while big ones focused more on internal mobility. Now, however, the pendulum has
shifted towards external hiring and away from internal mobility for large organizations as well.

What is the specific situation in your company? Let’s look specifically at sales positions. Analysis of the performance
of salespeople based on actual sales can be of immense help in making better decisions in the future. For example,
let’s look at the sales performance for one particular role, split into internally promoted reps vs external hires.
Let’s assume that the incentive for a sales rep is based on their monthly revenue quota achievement. Performance can
be differentiated/analysed using various metrics and visualizations.

Comparison of performance over months in a new role

Fig 1: Average quota achievement across months in new role

The above graph helps us understand the initial performance level of internal hires vs external hires in the new
role. From the graph, we can observe that the initial quota achievement % of the internal hires is better than that
of the external hires, with the external hires starting to catch up around nine months into the new role. When doing
such an analysis, it is better to consider at least 2 years of data so that you have a decent sample size to gauge the trend.
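A minimal sketch of how a view like Fig 1 can be produced, assuming monthly achievement records tagged with hire source (the data is made up):

import pandas as pd

# Hypothetical monthly quota-achievement records.
perf = pd.DataFrame({
    "source": ["internal", "external"] * 6,
    "month_in_role": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "quota_achievement": [0.85, 0.60, 0.90, 0.70, 0.92, 0.78,
                          0.94, 0.85, 0.95, 0.90, 0.96, 0.95],
})

# Average achievement by months in role, one column per group (as in Fig 1).
curve = (perf.groupby(["month_in_role", "source"])["quota_achievement"]
             .mean().unstack("source"))
print(curve)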

Retention Analysis

Fig 2: Retention rate in the new role across months

As is visible from the above graph, the retention rate of external hires is, on average, higher in the first year in
the role than that of internal hires. This stands to reason, as newly hired employees are reluctant to quit within the
first year of a new job. From around 18 months, however, external hires probably start leaving faster for various
reasons, such as work culture differences, inability to meet quotas, general differences with leaders, etc.

Percentage of promotions from the group over time

Fig 3: % of promotions from internal hire and external hire groups over time

The above graph analyzes promotion behaviour across the 2 groups. From the visual, we can
observe that internal hires continue to fare better until about 2 years. After this, the external hire group tends to
do better.

By comparing retention rate and promotion analysis together, we can understand if the money spent on hiring
externally is justified by the number of leaders produced from external hires.

This blog post illustrates the approach using sales performance analyses rather than performance evaluations,
which may carry some element of human bias (mostly in favour of the internal hires). Various other
metrics, such as historical performance data, can be used over time to understand the impact of hiring decisions.

 

Analyze sales compensation payout sensitivity using Python (Jupyter) Sujeet Pillai June 20, 2014

 

Introduction

A critical component of sales compensation plan design is testing the plan against various scenarios of individual salespeople’s sales performance. Typically, a newly designed sales compensation plan is tested against historical sales performance to gauge how the new plan would have paid out had it been implemented in the previous plan period. While using historical performance helps, it’s also important to test the plan against a randomized sales performance set to check the sensitivity of your payouts to sales variations. When your sales compensation plan rolls out, it typically affects sales rep behavior, and we must ensure that such changes don’t result in any budgetary surprises. This is also a good analysis to run periodically to estimate the risk to your sales compensation budgets from unforeseen market events.

Python and IPython

Such sensitivity analysis requires a large number of repeated sales compensation calculations and the ability to quickly introduce randomness and variation to generate data scenarios. Traditionally, Excel has been used for similar analyses; however, with this volume of calculations and Excel’s looping constraints, the entire process becomes very cumbersome. Python with the numpy library is perfect for such data manipulation, and matplotlib, used in combination with IPython, is our visualization tool of choice.

IPython is a powerful interactive python shell with support for rich data visualization. Specifically, the IPython notebook is a browser-based IPython shell with support for in-browser data visualizations and embedding rich media. We will use the IPython notebook for this analysis to take advantage of the in-browser visualizations that are offered.

Here is some help on how to set up IPython on your computer to try out.

IPython setup

First, let’s set up IPython for the needs of our analysis.

# start the IPython notebook server from your shell
ipython notebook

This should open up a list of notebooks in the current working directory. Create a new notebook by clicking “New Notebook”.

Let’s use IPython magic commands to set up our IPython environment such that all graphs are plotted in the browser.

#Use in browser graphs
%matplotlib inline

Also, let’s import all the python libraries we’ll be using for this exercise.

import numpy       # numerical arrays and random sampling
import matplotlib  # plotting
import CompUtils   # in-house helper for payout-table lookups (sketched below)

The Plan

Let’s assume that the compensation plan setup is a payout curve of revenue achievement to a payout percentage. The related table is given below:

Revenue Achievement    Payout Percentage
0%                     0%
50%                    35%
80%                    75%
100%                   100%
120%                   150%
300%                   500%

The points in between consecutive revenue achievement points are interpolated linearly between the payout percentages.

Let’s initialize the plan in our IPython notebook

 # Initialize Rate Table
 payout_table = CompUtils.Rate_Table()
 payout_table.add_row(0,0)
 payout_table.add_row(0.5,0.35)
 payout_table.add_row(0.8,0.75)
 payout_table.add_row(1.0,1.0)
 payout_table.add_row(1.2,1.5)
 payout_table.add_row(3.0,5.0)
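CompUtils is an in-house helper whose source isn’t shown in this post. As a rough sketch, the two pieces used here could be implemented along these lines, assuming linear interpolation between tiers as described (numpy.interp also clamps achievements outside the table to the end tiers, which is our assumption, not necessarily the real helper’s behaviour):

import numpy

class Rate_Table(object):
    # Payout curve stored as (achievement, payout) tiers.
    def __init__(self):
        self.rows = []

    def add_row(self, start, base):
        self.rows.append((start, base))

def calculate_lookup_attainment(achievements, rows):
    # Linearly interpolate each achievement on the payout curve.
    starts = [r[0] for r in rows]
    bases = [r[1] for r in rows]
    return numpy.interp(achievements, starts, bases)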

Inputs

Next, let’s set up some salesforce attributes. Let’s assume that the salesforce has 1000 sales reps, each with a Target Incentive of $5,000 per month. Let’s also assume that their revenue goal is $10,000 per month.

number_of_reps = 1000
target_incentive = 5000
revenue_goal = 10000

Now we’re ready to set up the simulation. For revenue, we’re going to assume that these 1000 sales reps perform at a mean sales revenue of $11,000 with a $4,000 standard deviation.

mean_sales_revenue_of_population = 11000
std_dev_sales_revenue = 4000

Simulation

The next step is to run the actual simulation. We’ll set it up so that these steps are followed:

- the sales revenue for 1000 reps will be randomized around the mean and standard deviation,
- their achievement of goal will be calculated,
- their payout % will be looked up on the curve,
- their final payout will be calculated as the product of payout % and target incentive.

sales_revenue = numpy.random.normal(mean_sales_revenue_of_population,std_dev_sales_revenue,
number_of_reps) # Randomized set of sales revenue
revenue_achievement = sales_revenue/revenue_goal # Revenue achievement set of population
payout_pct = CompUtils.calculate_lookup_attainment(revenue_achievement,payout_table.rows)
rep_payout = payout_pct * target_incentive
total_salesforce_payout = numpy.sum(rep_payout)
print 'Total Salesforce Payout = %d' % total_salesforce_payout

This would print something like:

Total Salesforce Payout = 6675583

Ok, so we’ve run the simulation using a random set of sales revenue inputs and calculated the total salesforce payout. However, that Total salesforce payout number would keep changing as the random sales revenue input set changes. Hence let’s run this simulation repeatedly, say 100 times, and then summarize the results from those 100 runs. How do we do that? Let’s refactor the above code as follows:

iterations_array = []
for i in range(100):
    # Randomized set of sales revenue
    sales_revenue = numpy.random.normal(mean_sales_revenue_of_population,std_dev_sales_revenue,number_of_reps)
    revenue_achievement = sales_revenue/revenue_goal # Revenue achievement set of population
    payout_pct = CompUtils.calculate_lookup_attainment(revenue_achievement,payout_table.rows)
    rep_payout = payout_pct * target_incentive
    total_salesforce_payout = numpy.sum(rep_payout)
    iterations_array.append(total_salesforce_payout)

That was easy. Basically, we created an array to store the result from each iteration and pushed the total_salesforce_payout into that array. Let’s now visualize the results and derive some useful statistics out of these iterations:

matplotlib.pyplot.hist(iterations_array,bins=20,color='green')
print 'Mean Total Salesforce Payout = %d' % numpy.mean(iterations_array)
print 'Median Total Salesforce Payout = %d' % numpy.median(iterations_array)
print 'Standard Deviation Total Salesforce Payout = %d' % numpy.std(iterations_array)
print 'Max Total Salesforce Payout = %d' % numpy.max(iterations_array)
print 'Min Total Salesforce Payout = %d' % numpy.min(iterations_array)

This should produce an output as follows:

 

Mean Total Salesforce Payout = 6657248
Median Total Salesforce Payout = 6645881
Standard Deviation Total Salesforce Payout = 117827
Max Total Salesforce Payout = 6963018
Min Total Salesforce Payout = 6385194

Excellent, so for that run, we’ve identified the range of possible payouts, the mean, the median, and the deviation potential of the total salesforce payouts. The distribution graph shows where the total payouts of the 100 iterations we performed landed.

Sensitivity

Our simulation now shows the variation possible in the payouts as a result of random sales revenue performance inputs. We can tweak this simulation by increasing the standard deviation on the sales revenue inputs to analyze the effect on the total payout. Similarly, we can also gauge the change in the total payout with a variation in the mean of the sales revenue randomized input. That will display our plan payout’s sensitivity to variations in sales performance.

Let’s refactor our code to introduce a sales increase factor and then run our simulation again. We’ll basically multiply the mean of the sales revenue input by this factor before calculating the randomized input. Effectively we’re going to test the sensitivity of plan payout to 2%, 5%, 7%, 12%, 15%, 18%, 20%, and 25% increases in mean sales.

results_array = []
sales_increase_factor_list = [0.02, 0.05, 0.07, 0.12, 0.15, 0.18, 0.2, 0.25]

for sales_increase_factor in sales_increase_factor_list:
    iterations_array = []
    for i in range(100):
        # Randomized set of sales revenue
        sales_revenue = numpy.random.normal(mean_sales_revenue_of_population*(1+sales_increase_factor),std_dev_sales_revenue,number_of_reps)
        revenue_achievement = sales_revenue/revenue_goal # Revenue achievement set of population
        payout_pct = CompUtils.calculate_lookup_attainment(revenue_achievement,payout_table.rows)
        rep_payout = payout_pct * target_incentive
        total_salesforce_payout = numpy.sum(rep_payout)
        iterations_array.append(total_salesforce_payout)
    results_array.append(dict(sales_increase_factor=sales_increase_factor,
                                mean=numpy.mean(iterations_array),
                                median=numpy.median(iterations_array),
                                std=numpy.std(iterations_array),
                                max=numpy.max(iterations_array),
                                min=numpy.min(iterations_array)))

Ok, so we’ve run the simulation for each value in the sales increase factor list. Now let’s plot and display our results.

# Set up matplotlib options

matplotlib.pyplot.figure(figsize=(9,6))
font = {'family' : 'Arial',
        'weight' : 'normal',
        'size'   : 14}

matplotlib.rc('font', **font)

# Plot lines
matplotlib.pyplot.plot(map(lambda x:x['sales_increase_factor'],results_array),
                       map(lambda x:x['mean'],results_array),label='mean',lw=2)
matplotlib.pyplot.plot(map(lambda x:x['sales_increase_factor'],results_array),
                       map(lambda x:x['min'],results_array),label='min',ls='-.')
matplotlib.pyplot.plot(map(lambda x:x['sales_increase_factor'],results_array),
                       map(lambda x:x['max'],results_array),label='max',ls='-.')

# Set up plot options
matplotlib.pyplot.legend(loc=3)
# matplotlib.pyplot.axis(ymin=0)
matplotlib.pyplot.ticklabel_format(style='sci', axis='y', scilimits=(0,999999999))
matplotlib.pyplot.xlabel('Sales Increase Factor')
matplotlib.pyplot.ylabel('Total Salesforce Payout')

# Display in table form
from IPython.display import HTML
html='''<table>
            <thead>
                <th>Sales Increase Factor</th>
                <th>Mean Total Payout</th>
                <th>Max Total Payout</th>
                <th>Min Total Payout</th>
                <th>Increase in Payout</th>
            </thead>
'''

formatted_numbers = map(lambda x:['{:.0%}'.format(x['sales_increase_factor']),'${:,.0f}'.format(x['mean']),'${:,.0f}'.format(x['max']),'${:,.0f}'.format(x['min']),'{:.0%}'.format((x['mean']/results_array[0]['mean'])-1)],results_array)
# print formatted_numbers
html = html+(''.join(map(lambda x: '<tr><td>'+('</td><td style="text-align:right">'.join(x))+'</td></tr>',formatted_numbers)))+'</table>'
HTML(html)

This should generate an output that looks like this:

Sales Increase Factor   Mean Total Payout   Max Total Payout   Min Total Payout   Increase in Payout
0%                      $6,661,012          $6,936,747         $6,355,451         0%
2%                      $6,855,002          $7,128,312         $6,661,255         3%
5%                      $7,123,175          $7,344,958         $6,799,092         7%
7%                      $7,346,115          $7,599,005         $7,072,258         10%
12%                     $7,859,148          $8,091,215         $7,560,604         18%
15%                     $8,161,052          $8,491,135         $7,831,553         23%
18%                     $8,461,816          $8,754,745         $8,233,400         27%
20%                     $8,657,557          $8,924,484         $8,347,140         30%
25%                     $9,188,707          $9,437,714         $8,940,529         38%

Let’s look at the results. It shows that at an $11k sales revenue mean with a $4k standard deviation, your comp plan is likely to pay a mean payout of about $6.6M with a max of $6.9M and a min of $6.3M. As your sales performance improves this number obviously goes up. For a 12% increase in sales, your plan is expected to pay about $7.85M on average (max of $8.09M and min of $7.56M).

That represents an 18% increase in payout for a 12% increase in sales performance. That is the estimate of your compensation plan’s sensitivity.

Our sensitivity scenarios were quite straightforward: we basically multiplied by a straight factor to improve sales performance. However, these scenarios can be as complex as the questions you’d like to ask of the data. For example:

What if, in response to my new compensation plan, the top 20% of my salesforce did not change their performance, but the next quintile improved their sales performance by 10% and the third quintile by 35%? What would be the effect on my total payout?

Such questions are easy to analyze once you have a basic sensitivity model like the one we have set up. The advantage of using Python in such a case is that it’s easy to swap scenario types without much rework.

You can download all the code here.

To learn more about Incentius capabilities, visit our data analytics page

Do you need additional help with sales compensation plan design/testing for your company? Please email us at info@incentius.com

And if you got this far, we think you’d like our future blog content, too. Please subscribe on the right side.

 

Sales compensation modeling and analytics in Python Sujeet Pillai January 9, 2014

 

Introduction

Over the past few years, Python has grown into a serious scientific computing language. This owes itself to several mathematical packages like NumPy, SciPy, and pandas maturing and earning the trust of the scientific community. These packages, and the inherent simplicity of Python, have had a democratizing effect on the ability to perform complex mathematical modeling without expensive proprietary software.

In this blog post, I want to take techniques that are commonplace in the scientific community and apply them to the sales compensation/analytics domain. I’m going to attempt to model a basic sales compensation plan using NumPy and Python. The intent is to keep the model simple and to get it up and running fast. Future blog posts will introduce more complex elements and more advanced modeling techniques. The hope is that this basic model will demonstrate the power of these Python-based modeling techniques and their simplicity.

The Interface

Firstly, let’s settle on an input/output interface. I believe business users are still some way off from setting parameters/data in formats like JSON/XML/YAML, which eliminates those possibilities. As most of us are used to Excel (more than we like to admit), I’d like to keep the input/output interface in Excel: the model should read its inputs and parameters from Excel and write the final results into an output sheet in Excel. Luckily, Python has an excellent package called openpyxl that can interface with Excel files (Excel 2007 onwards only). We’ll use this package to read our inputs and parameters and to write the outputs of our model. DataNitro and PyXLL are two other Python packages you can use for interfacing with Excel; however, they run as add-ins to Excel rather than as independent Python packages.

The Plan

I’ve chosen a very simple Achievement vs. Attainment payout table for the purposes of this model. I’ve created a named range “PayoutTable” in my Excel file (named “plan_modeling.xlsx”) containing this payout table.

I’m going to assume a simple payout table with attainment linearly interpolated between subsequent performance levels. Below is a graphical look at my payout curve. The slope of the line above the highest performance level defined in the table is determined by the “Infinity Rate” parameter.

[Figure: Payout curve]

The Target Incentive Dollars for the given class of reps is defined using the “TIC” parameter.

Earnings for the plan are calculated as the attainment (looked up from the payout table based on achievement) times the Target Incentive.
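To make that lookup concrete, here is a minimal sketch of applying such a payout table with numpy.interp, with the infinity rate extending the curve past the top level. The levels, attainments, and TIC here are illustrative; this is not the CompUtils implementation used later:

#!python
import numpy

levels      = numpy.array([0.00, 0.50, 1.00, 1.25])  # hypothetical achievement levels
attainments = numpy.array([0.00, 0.25, 1.00, 1.50])  # hypothetical attainment at each level
infinity_rate = 3.0                                  # slope above the highest level

def lookup_attainment(achievement):
    # numpy.interp interpolates linearly and clamps at the table edges
    base = numpy.interp(achievement, levels, attainments)
    # Beyond the top level, extend the curve linearly at the infinity rate
    over = numpy.maximum(achievement - levels[-1], 0)
    return base + over * infinity_rate

earnings = lookup_attainment(numpy.array([0.80, 1.00, 1.40])) * 50000  # assumed TIC of $50k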

Simulation Parameters

To simulate the performance of a salesforce on this plan, I’m going to use a normal distribution of achievements around some mean and standard deviation. These parameters are defined in my Excel file as “AverageAchievement” and “StdDevAchievement”. The sample size of the population of sales reps that I use to model this plan is defined by the “SampleSize” parameter. Overall, the plan and simulation parameters section looks like the following.

[Screenshot: Plan and simulation parameters]

Model Setup

Alright! Time to write some code!!

First, let’s import the required packages: NumPy and openpyxl.

#!python
import numpy
import openpyxl
# Note: openpyxl.style is the pre-2.0 module path current when this post was
# written; newer openpyxl versions expose these formats under openpyxl.styles.
from openpyxl.style import NumberFormat

I’ve created two helper packages. openpyxl_helpers has methods for frequently used interactions with the Excel file via the openpyxl library. CompUtils sets up a Rate_Table class to maintain a payout table and has a method to apply the rate table to an achievement distribution.

#!python
import CompUtils
from openpyxl_helpers import *

Open the workbook using openpyxl

#!python
# Open workbook
wb = openpyxl.load_workbook('plan_modeling.xlsx')

Pull in all the plan and simulation parameters from the excel file using their named ranges.

#!python
# Read plan parameters
payout_table_range = read_table_from_named_range(wb,'PayoutTable')
infinity_rate = read_value_from_named_range(wb,'InfinityRate')
TIC = read_value_from_named_range(wb,'TIC')

# Read simulation parameters
average_achievement = read_value_from_named_range(wb,'AverageAchievement')
std_achievement = read_value_from_named_range(wb,'StdDevAchievement')
sample_size = read_value_from_named_range(wb,'SampleSize')
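The read_* helpers themselves live in the repo linked at the end of this post. As a rough idea of what read_value_from_named_range does, here is a hypothetical sketch using the current openpyxl API (the actual helper was written against the older 2014-era API):

#!python
def read_value_from_named_range_sketch(wb, name):
    # Resolve the defined name to its worksheet/cell destination(s)
    defined = wb.defined_names[name]
    for sheet_name, coord in defined.destinations:
        return wb[sheet_name][coord].value  # return the first cell's value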

Next, we’ll initialize the Rate_Table class from our helper package CompUtils. We’ll add all rows from the payout_table_range that we read from the Excel file and then set the infinity rate parameter. The Rate_Table class expects the lookup field to be called ‘start’ and the value field to be called ‘base’. Since we’ve used ‘achievement’ and ‘attainment’ instead, we’ll override those names by passing startKey and baseKey.

#!python
# Initialize Rate Table
payout_table = CompUtils.Rate_Table()
payout_table.add_row_list(payout_table_range,startKey='achievement',baseKey='attainment')
payout_table.set_infinity_rate(infinity_rate)

Finally, all our parameters have been retrieved and our rate table has been set up. Now we’re ready to do some modeling. Let’s first generate the achievement sample using our simulation parameters, with the numpy.random.normal function.

#!python
# Populate the achievement sample
achievement_distribution = numpy.random.normal(average_achievement,std_achievement,sample_size)

Now calculate the attainment for this achievement sample by applying the rate table to it.

#!python
# Calculate Attainment based on the Payout Table
attainment = CompUtils.calculate_lookup_attainment(achievement_distribution,payout_table.rows)

Calculate earnings by multiplying the attainment values by TIC. Note that attainment is a numpy.ndarray being multiplied by a scalar TIC; NumPy broadcasts the scalar across the array, so each rep’s attainment is scaled in a single expression.

#!python
# Calculate Earnings as Attainment times Target Incentive
earnings = attainment * TIC

And that’s it. We’ve already calculated earnings for all reps in the sample, and it took all of three lines of code for the full model! Now let’s write some of our results out to Excel.

#!python
# Create an output sheet for us to write to
output_ws = wb.create_sheet(title='Output')

# Write Achievement, Attainment and Earnings into columns
write_list_of_values(output_ws,'A1',list(achievement_distribution),'Achievement',number_format=NumberFormat.FORMAT_PERCENTAGE_00)
write_list_of_values(output_ws,'B1',list(attainment),'Attainment',number_format=NumberFormat.FORMAT_PERCENTAGE_00)
write_list_of_values(output_ws,'C1',list(earnings),'Earnings',number_format=NumberFormat.FORMAT_CURRENCY_USD_SIMPLE)

That gave us the raw values of the achievement, attainment, and earnings of the sample. Now let’s create some more useful analytical tables. The openpyxl_helpers package has a couple of predefined methods to help create a quick distribution table and a stats table. Let’s add those to our output sheet now.

#!python
# Create a Distribution Table
create_distribution_table(output_ws,'E1',earnings,number_format=NumberFormat.FORMAT_CURRENCY_USD_SIMPLE)

# Create a Stats Table
create_stats_table(output_ws,'I1',earnings,number_format=NumberFormat.FORMAT_CURRENCY_USD_SIMPLE)
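For reference, the kind of distribution table this helper writes can be derived with numpy.histogram. A hypothetical sketch, not the openpyxl_helpers implementation:

#!python
import numpy

# Bucket earnings into 10 equal-width bins and count the reps per bucket
counts, edges = numpy.histogram(earnings, bins=10)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print('${:,.0f} - ${:,.0f}: {} reps'.format(lo, hi, n))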

Finally, let’s write the output to a new file.

#!python
# Write the output to a new file
wb.save(filename='/home/sujeet/Desktop/plan_modeling_simulated.xlsx')

Our newly generated output file should have a new sheet called “Output” with data like the screenshot below. The stats table gives us some useful metrics, and the distribution table can be used to plot a bell curve of the reps’ earnings.

[Screenshot: Output sheet]

The tip of the iceberg

This is a very basic model. My intent was only to show how painless it is to use Python, NumPy, etc., to create a basic sales compensation/analytics model. This code can be adapted to analyze a far more complex model just as easily. Imagine the potential in quota-setting models, territory alignments, call planning, etc.

You can get the code for this whole model, including the helper packages, at sujeetpillai/simple_python_comp_model.

Incentius can help create, maintain, and tweak such models to achieve your business goals. This is a zone that we’re very excited about, and we can’t wait to help you realize the potential for such techniques!

How did you like this blog post? Let us know in the comments below or on our social media pages.

If you got this far, we think you’d like our future blog content, too. Please subscribe on the right side.