Biggest Weakness Flashcards

1
Q

What is your biggest weakness? How have you been working on it?

A

I tend to search for the perfect solution, and that can sometimes be paralyzing.

When I build a feature, I spend significant time continuously reviewing the details to limit bugs and ensure the design is flawless.
This is in direct conflict with my ability to move fast and deliver value quickly.

I have been working on improving this. I now use a prioritization framework to evaluate my activities:

  1. What brings me the most value in my day to day? What should I prioritize?
  2. Timebox my effort - commit to a timeframe to achieve something
  3. Teach myself to be content with continuous improvement, rather than immediate perfection
    1. In my personal life, I leaned into books that discuss entrepreneurship
    2. The Lean Startup
    3. Everything Is Figureoutable

I am challenging myself to adopt a test-and-learn mindset.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Tell me about a time when you made a mistake. How would you open?

A

Let me tell you about a time when I made a mistake when setting up data for a presentation.
Demand Proof of Concept

3
Q

For making a mistake, Demand Proof of Concept, what was the situation?

A

Situation: Macy’s needed to buy a new Demand Forecasting system.
I started a proof of concept to evaluate two 3rd party systems against each other.

Task: My job was to work with these two 3rd party system providers and set up a simulation in which we could test both systems against each other, and against our current Demand Forecast system, to see who is more accurate at predicting customer demand.

Think of this like an A / B / C test, but we’re using historical data:

  • A - our control - how our own DF system predicted customer demand
  • B - Vendor 1
  • C - Vendor 2

A Demand Forecast needs historical sales to predict the future, and promotional data to understand how demand will fluctuate.

**My job** was to do the following:

  • Pass historical sales data for August - October of 2019 to the two vendors
  • Pass promotional data for the same timeframe - was this product on sale, and if so, what was the price?

Output - Both vendors will use their demand forecast engines to generate a Demand Forecast for every single week within the timeframe.

We’ll compare forecast accuracy across the two new systems and the current system.
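As an illustration of the A / B / C backtest above, here is a minimal Python sketch that scores each system with a MAPE-style forecast accuracy (1 minus mean absolute percentage error). The system names and weekly numbers are invented for illustration, and the actual POC may have used a different accuracy definition.

```python
def forecast_accuracy(actuals, forecasts):
    """MAPE-style accuracy: 1 - mean(|forecast - actual| / actual)."""
    errors = [abs(f - a) / a for a, f in zip(actuals, forecasts)]
    return 1 - sum(errors) / len(errors)

# Illustrative weekly actual sales and each system's weekly forecasts
actuals = [120, 95, 110]
systems = {
    "A_current_system": [100, 90, 130],  # A - our control
    "B_vendor_1": [115, 100, 105],       # B - Vendor 1
    "C_vendor_2": [140, 80, 150],        # C - Vendor 2
}

# Score every system against the same actuals, just like the A/B/C test
for name, forecasts in systems.items():
    print(f"{name}: {forecast_accuracy(actuals, forecasts):.1%}")
```

The same comparison runs over every week in the test window, so all three systems are judged against identical historical actuals.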

4
Q

For making a mistake, Demand Forecast POC, how did you make the mistake?

A
  • We tested about 100 items’ forecast accuracy across all three systems.
  • I presented the results to our business stakeholders, but there was an error in the data for 2 of the items - Baby Boys Coveralls, which come in 4 sizes.
  • The data reflected a forecast for 40 sizes, not 4, and this inflated the projected Demand Forecast for the 2 new systems we were testing.
5
Q

Can you explain how the mistake occurred?

A
  • It was an oversight in how we measured forecast accuracy.
  • We wanted to test how the two systems would predict customer demand for 100 products.
  • We sent both vendors historical sales data for all 100 products, down to the size level. For example, a Baby Boy Coverall comes in 4 sizes, so we sent both vendors historical sales data for all 4 sizes.
  • For two Baby Boys Coveralls, we sent sales data for 40 sizes instead of 4, and this inflated the Demand Forecast and hurt forecast accuracy for the Baby Boys department, showing up as an outlier.
6
Q

How did you end up sending sales data for 40 sizes instead of 4 sizes?

A

Summary: There are old versions of the same sizes still selling, and we sent data for ALL sizes to both vendors.

  • This gets into a bit of master data management and retail hierarchy.
    Most items are made up of a style (think Baby Boy Coverall) and sizes (think 0-3 months old, 3-6 months old, 6-9 months old, etc.). Each style has a number, like 12345, and each size has a unique identifier called a size code - think A476.

Most items have this simple hierarchy.

In rare cases, an item can have duplicates of the same size attached to it. Think Baby Boy Coveralls with multiple versions of 0-3 months - 2x, 3x, even 4x.

How can that happen?
* Sometimes our manufacturers recycle sizes of our Baby Boy Coveralls and replace the size codes when they recreate them, even though they represent the same size description.

During our data scrub we did not look for this scenario, and thankfully it only affected two of our items.
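The size-code recycling above is essentially a duplicate-detection problem: more than one size code mapping to the same size description under a single style. A data scrub could flag it with a check like this minimal Python sketch, where the style numbers, size codes, and descriptions are invented for illustration:

```python
from collections import defaultdict

# Hypothetical (style, size_code, size_description) rows
size_rows = [
    ("12345", "A476", "0-3 months"),
    ("12345", "B881", "0-3 months"),  # recycled code, same size description
    ("12345", "A477", "3-6 months"),
    ("12345", "A478", "6-9 months"),
    ("67890", "C101", "S"),
    ("67890", "C102", "M"),
]

def find_duplicate_sizes(rows):
    """Flag styles where multiple size codes map to the same size description."""
    codes_by_size = defaultdict(list)
    for style, code, description in rows:
        codes_by_size[(style, description)].append(code)
    return {key: codes for key, codes in codes_by_size.items() if len(codes) > 1}

# Flags style 12345's "0-3 months" size, which carries two size codes
print(find_duplicate_sizes(size_rows))
```

Running a check like this before sending data to the vendors would have caught the two affected items up front.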

7
Q

How did you address the issue? What were your actions?

A
  1. Take accountability and provide transparency.
  2. Correct the issue as quickly as possible - provide a new output of the data.
  3. Take measures to make sure it doesn’t happen again - show the business that all other items and sizes match.
  4. Bring stakeholders along on the journey with me - provide updates and an overview of how we did the exercise.

How can I improve trust?

  • Bring stakeholders along the journey with me, so they can ask questions and give insights.
8
Q

How did you take accountability and restore transparency?

A
  • I took full accountability and provided transparency.
  • I met with key stakeholders to explain the issue face to face, and explained how it was skewing results in the data for the Baby Boys Department.
  • I explained the process we would use to verify that the size count and composition for each item matched what was originally forecasted back in October of 2019.

Originally, we were passing sales data, but not checking it against the size count by style that was originally forecasted between August 2019 and October 2019.

9
Q

How did you fix the data? How did the stakeholder react?

A
  • I provided another cleaned output of the data in less than a week.
  • Showed our stakeholders the corrected output, with all sizes matching for all 100 items -> to start to rebuild confidence.
  • Showed our stakeholders the overall impact to forecast accuracy after the data correction - with forecast accuracy largely unchanged overall, but improving greatly for the Baby Boys department.
  • They were of course discouraged and made hesitant by the data error, but appreciated the transparency.
10
Q

How did you repair the broken trust? What did you learn?

A
  • Bring stakeholders on a journey with you.
  • This whole situation was the impetus for me re-thinking how I engage with stakeholders, especially with data-science-heavy projects.
  • I changed my structure to be based on “bringing my key stakeholders on a journey with me”:
  • Worst thing I can do - go off on my own, work on a data science project, and present results several weeks later.
  • Best thing I can do - bring my stakeholders on the step-by-step journey with me and keep them informed of the project. They could have helped provide key insight into some of the data outliers with the Baby Boy Coveralls - they could have called out this outlier earlier, and I could have fixed it prior to the conversation.
11
Q

What were the metrics? What was Baby Boys department demand vs. others?

A

Baby Boys - 20% Forecast Accuracy - customer demand was inflated by at least 40%
* There was little selling in the older sizes, but enough to inflate demand

Overall Forecast Accuracy was 65% and 66% for the two vendors.
Original Forecast Accuracy was 65%.
