Targeted Resume Questions Flashcards

1
Q

Briefly describe your job at Comcast.

A

Writing and refactoring automated tests and the test framework for an iOS application (75-80% of my time).

Manual regression testing involving 3rd party smart devices and API integration with mobile clients.

End-to-end automated and manual tests for a new platform.

Worked across multiple teams with competing deadlines and changing processes, using Agile and Scrum principles.

Increased precision in test case management and writing.

Increased automation visibility by getting automated tests out early in the sprints and running them for iOS regression.

Increased QA visibility with developers: adding changes directly to the iOS application, working with multiple departments on account creation and maintenance scripts, and working with front-end developers on responsiveness testing and automation.

2
Q

What kind of automation work did you do with iOS there?

A

When I came in, they had some old automation code that wasn’t being run. I was given the task of “figuring out how to fix it”.

Task 1: Break it down into clearly definable goals. Big Goal 1 was “Reliable Automation” (i.e., build confidence in our automated test runs).

Created an EPIC to “Review / Refactor existing automated test cases”. We started with 326 defined scenarios, ~60% of which were passing.

Task 1.5: Broke each feature into its own separate Epic Task, with the goals: (1) Reflect on data (which scenarios are breaking and why); (2) Fix scenarios or defer (low hanging fruit first - the goal is reliable automation); (3) Prune outdated scenarios.

Task 2: Recurring automation runs. Once we have some test confidence, we should run them in CI or some other daily scheduled run.

Task 3: Add test cases. Once we have confidence and scheduled runs, we can start expanding the test cases. This also involved an intermediary task of building out the framework.

Other tasks: discussing automation in our weekly iOS QA status reports and addressing concerns about it (only two people on the team wrote and ran automated tests). Getting automation into our CI was a big step towards increasing involvement.

3
Q

What kind of manual testing did you do?

A

Exploratory/session-based testing, functional testing, and 3rd party integration testing.

Mostly using the Charles Proxy tool (rewrite rules to define API parameters, etc.), Splunk event logging via a Rails API (a lot of this testing was done by the Ops and Adapters teams), WebSocket I/O testing (Wireshark), and accessibility testing.

Builds were distributed on Box/Hockey.

Encouraged “dogfood” testing.

Critical bugs found: WWXH showing 3rd party connections multiple times, faulty alert texts, login looping on a failed password (maxAttempts wasn’t returned due to an API failure), and table view issues. Also covered: network testing (WiFi/3G/4G/LTE and faulty-network simulation), destructive/edge-case testing, device coverage strategy, and sanity and smoke tests.

4
Q

How did you improve the iOS automation framework?

A

Goal: parity between our Regression Plan and our scenarios (i.e., make the scenario language match the language used in our test plan). It forced us to use clear language when writing new test cases, and to write cukes that clearly matched our test plan. This is, after all, the entire point of Cucumber.

Give our scenarios unique “Test Case No” identifiers! This was something I had to explain to the rest of the team.

Better error handling (move the sleep functionality into its own class in the framework).

Remove logic from the tests. The test language should be a pseudo-DSL that just asks the framework to perform tasks on the AUT; it doesn’t tell the framework HOW to do them. Examples: KMSI box checked/unchecked, type of account to sign in with, hiding/showing the keyboard, etc.
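
A minimal sketch of that pseudo-DSL style in Cucumber/Ruby step definitions. The step text, the LoginScreen/HomeScreen classes, and the account_for helper are all invented for illustration; this is not the actual Comcast framework.

# The step only *asks* the framework to do something; the "how" lives in a
# screen class inside the framework (names here are hypothetical).
Given(/^I am on the login screen$/) do
  @login = LoginScreen.new
  @login.await
end

When(/^I sign in with a "([^"]*)" account with KMSI (checked|unchecked)$/) do |account_type, kmsi|
  @login.set_kmsi(kmsi == "checked")
  @login.sign_in(account_for(account_type))  # account_for: hypothetical test-data helper
end

Then(/^I should see the home screen$/) do
  raise "Home screen not shown" unless HomeScreen.new.displayed?
end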

Enforce code standards (set good examples). There were a lot of case statements that looked for query strings and then ran the queries. I refactored this by moving the “query methods” into their own place in the class and dispatching with a simple send on the base page class. Separating the query from the logic makes it easier to fix.
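
Roughly, the refactor looked like this (a sketch with made-up names, not the original code): the big case statement becomes one small query method per element, and the base page dispatches by name with send.

class BasePage
  # Dispatch "sign in button" -> sign_in_button_query without a case statement.
  def element_query(name)
    method_name = "#{name.downcase.gsub(' ', '_')}_query"
    raise "No query defined for '#{name}'" unless respond_to?(method_name)
    send(method_name)
  end
end

class LoginPage < BasePage
  def username_field_query
    "textField marked:'username'"
  end

  def sign_in_button_query
    "button marked:'Sign In'"
  end
end

# LoginPage.new.element_query("sign in button")  #=> "button marked:'Sign In'"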

For metrics, we had the statement: during regression, we run our extensive regression suite (~750 test cases), and then run targeted tests, sanity and smoke tests for each new build. Sanity was a bit more extensive than smoke, with a few more devices.
“We run automation on the devices that we’ve done smoke testing on. Smoke testing typically takes about 30 minutes.”

“So the plan is to run automation on the Sanity suite. This will take parts of full regression and replace smoke and sanity. At this point we have some of smoke automated; we need to get sanity automated. We want to shorten our regression cycle to at most 5 days (it’s currently over 10).”

We automated 16% of our regression tasks on our last FTR.

One thing I did was to refactor the Screen Appearance tests. We had a Screen Appearance scenario for every single screen (about 12). Each screen had a dozen or so elements to verify. Using scenario outlines, this meant that there were ~144 tests for screen appearance alone, and they were brittle. I wrote a ScreenAppearance class that used the ImageMagick gem, and for each of the screens it would just take a screenshot and save it as a file, storing the device/orientation/screen name. At the end of the smoke test run we could open this folder and quickly verify all the screen appearances manually.
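
A stripped-down sketch of that idea (the real class used the ImageMagick gem; here the screenshot call is passed in, since the exact Calabash helper and paths are assumptions):

require 'fileutils'

# Collect one labeled screenshot per device/orientation/screen for manual
# review at the end of the run, instead of asserting every element.
class ScreenAppearance
  def initialize(output_dir: 'screen_appearance', take_screenshot:)
    @output_dir = output_dir
    @take_screenshot = take_screenshot  # e.g. a wrapper around Calabash's screenshot helper
    FileUtils.mkdir_p(@output_dir)
  end

  def capture(device:, orientation:, screen:)
    filename = "#{device}_#{orientation}_#{screen}.png"
    @take_screenshot.call(File.join(@output_dir, filename))
  end
end

# appearance = ScreenAppearance.new(take_screenshot: ->(path) { screenshot(name: path) })
# appearance.capture(device: 'iPhone5s', orientation: 'portrait', screen: 'Login')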

Useful commenting. Wrote the test cases from our regression smoke/sanity tests as comments in the feature file, then wrote a bash script that used ack/awk to count them. That way we could easily see our current scope.

Documentation. Updated the README to include the minimum steps required to run automation on a local machine. Also added some quick “sanity debugging” notes for common problems with Calabash/environment quirks (like setting environment variables, network connection, etc.).

5
Q

How did you improve process at Comcast?

A

Our Regression Test Plan was ~750 “test cases” (each row could represent multiple test cases, probably more like 1,700) all in one big table on Google Docs.

Improving the framework and getting automation integrated.

Enforcing the standards above.

Increasing developer collaboration.

Bringing my own particular skill set to the job.

6
Q

How did you report your findings or discuss the “release readiness” of the app prior to or after release?

A

The idea of “metrics” as guide-posts or markers to measure progress. Given the task to “fix automation”, one of the necessary corollary tasks was to SHOW how I was fixing it. The metrics were easy to read and interpret.

“Major Features Introduced in 7.10” (broken down into Area - Story).

“Noteworthy Leaked Defects” (ordered by severity)

“Testing Assessment” (description of the kind of testing we did, device matrix, networks and bandwidth, environments, automation, etc.)

“Leaked Defect Summary Count” (broken down by component (3rd party, log in, security, etc.) in the rows and severity in the columns, with total bug counts for each component).

Bar chart showing the last 6 (minor) releases and how many release blockers, critical, major, and minor bugs were found in each. Easily compare how many bugs we’re introducing and their severity.

“Test Case Numbers” - how many test cases we currently have, including automation. Eventually will include “hours saved” metrics to show the effect of our automation efforts on smoke/sanity testing.

“Test Execution” (device matrix and OS combinations).

“Testing Tools” - what we used for Story management, defect reporting, automation, build distribution, version control, CI, testing.

“Appendix” with a more detailed description of the bugs, including their JIRA ticket numbers and metadata.

7
Q

What was it like working on multiple high-level projects with competing interests and deadlines?

A

Involved a lot of back and forth with the different managers to determine priorities.

On the one hand, this project was “a big deal” and needed a lot of work to build up the testing effort.

On the other hand, we were releasing iOS product every month, which needed my attention for regression.

8
Q

How did you work with Product Owners?

A

Essentially a context-based approach. The tests we were writing depended very heavily on the work actually being done in the sprint.
The team was self-organizing. We were made up of individuals from every team, and we used each other’s experience and input to develop our strategies, often on a day-to-day basis.

This was all new ground. We were building a platform, not a feature, that would later have to be brought in to the other teams. A lot of the initial sprint work was just “getting something to showcase” and building out the backend later.

It involved mobile teams, web app teams, hardware departments (for the gateways and account creation), etc. A lot of leg work.
I started looking into using the Galen framework for responsiveness testing. Galen uses YAML templating and runs on Selenium Grid/SauceLabs, etc. It tests element location relative to other elements.
A benefit of working directly with the POs was hearing things like “Those types of accounts are going away” - high-level business decisions coming directly down the pipeline. This helped me know where to spend my time automating.

We used Trello to monitor Big Picture chunks that tied in all the platforms, and JIRA to manage individual team work.

Working with PMs and customers themselves is a great way to define “patterns of usage”, which can help when evaluating our testing efforts.

9
Q

How did you work with developers to design ‘testable code’?

A

Automation writers depended on developers to add new Accessibility Labels to the codebase in order to accurately locate elements. I had some personal experience with Xcode and Objective-C, so I knew this was a really simple process. After talking with the lead developer, we agreed that automators could just cut a new branch, add the accessibility labels we needed, and submit a PR. Since the Calabash build was separate from the release build (calabash.framework had to be part of the build target), it wasn’t a big concern.

Also had to work with the lead developer to set up a private build server for our Calabash builds. For some reason it didn’t play well with AnthillPro.

For the new platform, this involved adding sensible class and ID names at the beginning. This was already being done.

It was very important to get the automation tests up and running early in the sprint. This was essentially a context-driven approach. We were building something to showcase, with the assumption that the underlying backend logic would be built out later. But that wouldn’t change our automation tests — so we had a reliable mechanism for ensuring the initial product requirements didn’t break as we implemented the backend.

10
Q

What kind of end-to-end testing did you do at Comcast?

A

A project I worked on was designing a new platform for a new type of user.

One of the criteria we decided on during planning was that automation was required for a user story to be “Done”.

Calabash was good because it allowed me to write one set of Cucumber scenarios and have them run on Android and iOS.

The Selenium framework was a bit of a mess, we had a very limited time frame, and the lead developer had doubts about Selenium playing well with Web Components and Polymer, so I added the automation using Ghost Inspector (similar to Builder). This allowed me to quickly get automation up and running in the first sprint. A lot of what we were doing was “building something to showcase”, with the assumption that the actual backend code would get done incrementally.

“If they had a particular gateway device, show X screen on First Time User Login.”
Used Zephyr Enterprise to track our test cases and integrate with JIRA.

This also helped bring visibility to the rest of the teams. This platform project involved both mobile teams (iOS and Android), and I heard things like “man I don’t know if we can even do iOS automation.” iOS automation is NOT specific to the iOS team.

11
Q

Tell me about your role as Test Automation Engineer at The Neat Company.

A

Writing automated test cases and building out the framework for a web application.

Increasing QA visibility by getting automated smoke tests integrated with the CI build, running them on scheduled AMIs, integrating with the TestRail API, and displaying results on a large HD TV.
Increasing QA participation in writing automated tests.

Improving process by writing scripts to expose and consume performance metrics.
Improving process by providing system testing guidelines and tools (neat-cli, environment tools, data collection in mongo for 3rd party machine learning).

Improving process by involving automation with regression, allowing for more targeted regression and exploratory testing.

Working directly with customers via customer support representatives and usability sessions, and helping build out an admin console for customer issue handling.

Performance testing: targeted (high-profile partnership, feature-based, e.g. caching), general benchmarking using New Relic and direct customer interaction, and recommendations (using targeted tools like Siege).

Increasing QA participation by integrating multiple off-shore teams and teaching them general Scrum practice and guidelines (bug report templates, testing strategies, testing plans).

12
Q

Describe a bit about the automation work you did there and how you integrated it with the team.

A

Wrote documentation on how to run automation on a local machine, how to contribute, style guides and best practices, tools for the job, etc.

Worked with the DevOps engineers to schedule EC2 instance creation that would download the GitHub repo containing all the automation code, run all the tests as a cron job, and output the report.

Also wrote a script which communicated directly with the TestRail API to report our automation test runs in real time. We displayed this on an “information radiator” (giant HD TV) located in the middle of the development team.
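
As a rough idea of how such a script can talk to TestRail, here is a minimal sketch against TestRail’s standard add_result_for_case endpoint (API v2). The host, credentials, status mapping, and run/case IDs are placeholders; this is not the original script.

require 'net/http'
require 'json'
require 'uri'

TESTRAIL_URL = 'https://example.testrail.io'   # placeholder
USER         = 'automation@example.com'        # placeholder
API_KEY      = 'api-key-here'                  # placeholder

# Push a single automation result to TestRail in (near) real time.
def report_result(run_id, case_id, passed, comment)
  uri = URI("#{TESTRAIL_URL}/index.php?/api/v2/add_result_for_case/#{run_id}/#{case_id}")
  request = Net::HTTP::Post.new(uri)
  request.basic_auth(USER, API_KEY)
  request['Content-Type'] = 'application/json'
  request.body = { status_id: passed ? 1 : 5, comment: comment }.to_json  # 1 = passed, 5 = failed

  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
end

# e.g. from a Cucumber After hook:
# report_result(42, 1337, !scenario.failed?, "Automated run #{Time.now}")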

Testing APIs using command line tools like curl or HTTPie, GUI clients like Postman and CocoaRestClient, or Charles rewrite rules, to make sure calls are being routed through the right proxy servers and that the APIs are returning the right codes.

Testing APIs at their creation is essential so that we’re not surprised by the errors we might find at the functional level.

13
Q

Tell me about the Webapp Automation Framework that you wrote.

A

Watir, Selenium (2.x), TestRail.
Used vanilla WebDriver for a lot of the framework code itself.

Encapsulated webdriver code within the framework to isolate it from tests.

The framework was essentially an application to test an application.

Automation framework acts as a buffer between SUT and tests. That is, the tests don’t operate directly against the web application (instead, they call methods in the framework).

Started by writing the tests themselves. Assuming the framework exists, how would I want the tests to look?
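
A sketch of that “write the test first” approach with a Watir-backed page object underneath. The NeatSession/LoginPage names, URL, and element ids are invented for illustration.

require 'watir'  # in the Selenium 2.x era this would have been watir-webdriver

# The test I'd want to be able to write -- no WebDriver calls, only framework methods:
#
#   session = NeatSession.new
#   session.login_as(:standard_user)
#   session.upload_receipt('fixtures/receipt.png')
#   session.inbox.include?('receipt.png') or raise 'upload missing'
#
# The framework then hides the WebDriver details behind page objects:
class LoginPage
  URL = 'https://app.example.com/login'   # placeholder

  def initialize(browser)
    @browser = browser
  end

  def open
    @browser.goto(URL)
    self
  end

  def sign_in(email, password)
    @browser.text_field(id: 'email').set(email)      # element ids are assumptions
    @browser.text_field(id: 'password').set(password)
    @browser.button(id: 'sign-in').click
  end
end

# browser = Watir::Browser.new(:chrome)
# LoginPage.new(browser).open.sign_in('user@example.com', 'secret')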

14
Q

Was automation received well in the team? How did you introduce it?

A

This brought us closer to the “Agile goal” of daily code confidence. Prior to this we would prioritize certain areas for our regression cycle due to time constraints.
Regression times before releases went down, allowing for exploratory and performance testing.

See above for CI/build integration. We initially ran the automation on every build in Travis-CI. Turned this off in favor of the EC2 scheduled instances. WHY? Because it was slowing down the builds. WHAT DID I LEARN? I should have kept the basic smoke tests running, to encourage automation enthusiasm.

15
Q

How did you get other testers to contribute to automation?

A

Using tools such as IDE/Builder, and holding ‘automation sessions’ where I’d discuss automation basics and do code reviews.

Also, building out the framework itself so the tests could be written in a pseudo-DSL.

16
Q

Tell me about this “lightweight Rails” app that you built to help automation.

A

The PM/Director wanted the whole team on board with automation, with off-shore handling manual regression testing. I offered to help work with the team and get them up to speed (assuming zero coding experience).

The early compromise was record-and-playback tools. I suggested IDE/Builder since it ties into Selenium, produces easy-to-parse JSON, is extensible, and runs on Sauce Labs.

As I tried to get the team to move towards Ruby/WebDriver automation, my early solution was to start building out the framework for more elaborate automated testing. I created a Rails app that used simple forms and UI elements to drive backend operations (through our CLI tool, API calls, Rails’ mailer, etc.).

A lot of this work eventually made its way into the framework.

17
Q

What is this API that you wrote for performance metrics?

A

Part of narrowing down the performance metrics for the OCR/parsing process.

Worked with developers to ensure our model workflow_items_in_progress contained information about OCR processing times (received_by_SPI, begin_processing, end_processing).

Previously I’d have to open a Rails console on an actual EC2 instance and gather this information myself. I was doing it through a rails runner one-off script.

I turned this into a simple API for basic GET retrieval of processing information based on date/time parameters.
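
A minimal sketch of what that endpoint could look like as a Rails controller. The class name WorkflowItemInProgress mirrors the model mentioned above, and the columns follow received_by_SPI / begin_processing / end_processing, but the route, class name, and response shape are guesses.

# GET /processing_metrics?from=2015-01-01T00:00:00Z&to=2015-01-02T00:00:00Z
class ProcessingMetricsController < ApplicationController
  def index
    from = Time.parse(params.fetch(:from))
    to   = Time.parse(params.fetch(:to))

    items = WorkflowItemInProgress.where(received_by_SPI: from..to)

    render json: items.map { |item|
      {
        id:           item.id,
        received_at:  item.received_by_SPI,
        queue_time:   item.begin_processing - item.received_by_SPI,
        process_time: item.end_processing - item.begin_processing
      }
    }
  end
end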

18
Q

Tell me about this command line interface that helped automate workflows?

A

Command line tool basically consumes the API exposed by our web app backend.

Item upload (modifying headers to simulate different clients), mail-in (SendMail API).

Useful for HP performance testing: wrote a Bash script to upload 500-1000 items, then call the API to get times and calculate averages (previously this meant exporting a script to run on a Rails instance).

19
Q

What is this “Jam Session” project that you worked on?

A

Stemmed from production customer support constantly having to get developer help for running queries (items in a db, OCR’d items, etc.), performing actions on customer records (extending expiration dates, tier changes, etc.) outside the payment system, etc.

The idea was pitched and they wanted someone to represent the customer interest. As QA, I met at least once a month with a customer support manager to discuss issues, etc. I also participated in customer usability sessions (unboxing to usage).

This was my first taste of Rails, and of Ruby beyond a scripting language: building an application, how routing works, and MVC design patterns.

Had a simple authentication system (different tiered customer support representatives could perform different actions).

It continued to evolve, encouraging our Rails developers to build more thorough APIs. By the time I left it was something customer support couldn’t imagine life without.

20
Q

Tell me about some Rails scripts that you wrote.

A

Data querying and manipulation directly through ActiveRecord (creating users, removing relationships, upgrading user plans, destroying data that meets particular AR criteria, etc.).

Querying AR for account models to make sure they contain particular attributes (e.g., post-deploy testing after migrations): processing limits, expiration dates, etc.
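
Illustrative examples of those one-off scripts, run via rails runner (model and attribute names are placeholders):

# Post-migration sanity check: every account should have its limits set.
missing = Account.where("processing_limit IS NULL OR expiration_date IS NULL")
puts "Accounts missing attributes: #{missing.count}"
missing.limit(20).each { |a| puts "  #{a.id} #{a.email}" }

# Data manipulation: extend expiration for a set of test accounts.
Account.where("email LIKE ?", "%@qa.example.com").find_each do |account|
  account.update!(expiration_date: 1.year.from_now)
end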

21
Q

What about these JS scripts that you wrote to run on Mongo?

A

Data-collection scripts that would run on Mongo instances and return JSON output, used in HP performance testing.

For example: collecting particular types of data (all invoices from a production instance in a particular month), output to a shell script that I could run to download/collect the actual invoice images from S3 using AWS-cli. Then we’d package this all up and send it off to a third party that handled machine learning training for our parsing accuracy.

// Emit aws-cli copy commands for invoice images in the given _id range
// (objIdMin/objIdMax are ObjectId bounds for the target date range).
db.entities.find({
  type: "invoice",
  "_id": { "$gte": objIdMin, "$lt": objIdMax }
}, {
  "_id": 1,
  "account_id": 1
}).limit(5000).forEach(function(invoice) {
  print("aws s3 cp s3://neat-images-prod/" + invoice.account_id + "/" + invoice._id + ".pdf " +
        invoice.account_id + invoice._id + ".pdf");
});

run via: mongo --quiet neat_prod >> /tmp/get-november-2014-invoices_v3.sh

22
Q

What kind of performance testing did you do at The Neat Company?

A

Used JMeter for benchmarking and performance testing realistic user scenarios.

Also used tools like Siege and Postman.
Exercised various API endpoints at different levels of concurrency and load.

Required working with DevOps and systems engineers for environment setup and configuration (e.g., scaling up EC2 instances to mimic production, load configuration - how much load before a new instance is fired up). Sometimes rolling back changes, running performance scripts to test before/after deployment of new releases.

We used Chef for configuration management and standing up our AMIs through AWS.

Used to determine whether or not it’d be worth it to release with particular features like caching.

Used JMeter to output the data to CSV, then wrote a graphing script in R to analyze the data and pretty-print it. This was used in a company-wide presentation to show web app navigation response times before/after implementing Postgres caching of data (instead of querying the database with each call).

Measured averages of throughput, latency, deviations.
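
The analysis script itself was written in R; as a rough illustration of the same calculation, here is a Ruby sketch that reads a JMeter CSV results file (assuming the default column headers) and prints per-label averages and standard deviation.

require 'csv'

results = Hash.new { |h, k| h[k] = [] }

# 'label' and 'elapsed' are standard JMeter CSV columns; the filename is a placeholder.
CSV.foreach('results.csv', headers: true) do |row|
  results[row['label']] << row['elapsed'].to_f
end

results.each do |label, times|
  mean   = times.sum / times.size
  stddev = Math.sqrt(times.sum { |t| (t - mean)**2 } / times.size)
  puts format('%-40s n=%d avg=%.1fms stddev=%.1fms', label, times.size, mean, stddev)
end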

Familiar with database tuning: replica sets, indexing.

Familiar with system tuning: load balancing, caching at the application level, etc.

Most of my performance testing was at the application/HTTP level, hitting API endpoints and testing database interaction through AR queries.

23
Q

Have you collaborated with any off-shore teams?

A

Teams in Hyderabad, Pune, Ukraine, Brazil, Colorado, NYC, third party uTest/Applause, and our data entry team for machine learning.

Our OCR/parsing engineering team didn’t have knowledge outside their particular domain, and the Rails and desktop developers were too busy implementing features to work closely with them. As QA, I could bring my wide area of knowledge to their aid. I could talk directly to them and ask “what exactly do you need?” I also set up the little learning tool they wrote and played around with it myself. Then I could bring those requirements to the web app team and collect the data they needed.

Worked with the data engineers to set expectations on parsing accuracy and OCR performance.

Worked with devops engineers (Brazil, onsite) for environment set up and configuration, rolling back changes.

Outside specific needs, worked with EVERY team for information gathering and expectation setting.

24
Q

So were you an automation engineer your entire time at The Neat Company?

A

No, I moved over into that role as I started collecting more and more responsibilities for the automation of the web app.

Leading the Mac Desktop team as domain expert as we implemented the Cloud sync functionality.

Reporting directly to the VP and trusted with business-level decisions, using bug modeling and predictions, raising flags early and often, and performing backlog reviews and ‘bug scrubs’ (categorizing and severity tracking).

Increased developer-QA communication by participating in investigations, root-cause analyses, providing detailed steps to reproduce, stack traces and logs, and recommending fixes.

Enhanced process by:
mentoring a QA intern,
documenting testing strategies and tools (environment setup, neat-cli),
managing a 3rd party mobile testing resource (uTest/Applause),
leading the testing effort for a high-profile partnership (performance testing, manual system integration testing, and weekly showcasing to stakeholders).

Increased interaction with customers via weekly meetings with customer support representatives, usability sessions, maintaining bug workflows with customer origins.

25
Q

Who did you report to as QA Engineer at The Neat Company?

A

I’ve worked both independently and under immediate supervision.

Our VP was a bit of a hands-off manager.
Weekly ‘status report’ meetings. She expected us as team leads to provide her with domain knowledge and feedback and to raise concerns early and often so that she could represent QA interests directly to the CEO/CTO.

Analyzed my own workload and assigned bugs in terms of the larger “burn down” of the sprint, and informed my manager when I thought we had unrealistic expectations (i.e., lots of bugs open, bugs taking longer than expected, developers not fixing as fast as we thought, etc.).

26
Q

What business level decisions were entrusted to you?

A

Since I reported directly to the VP who reported directly to the CEO, I was trusted with business-level decisions when she was unavailable.

Basically, signing off on releases and production migrations.

27
Q

You were lead of the Mac QA team? What were your responsibilities?

A

Grew from intern to SME in the Mac desktop domain.

Our “teams of one” slowly grew as we acquired offshore and 3rd party resources.
Managed Hyderabad and Ukraine resources: created documentation for testing, best practices, and bug report templates and expectations, and wrote scripts to aid environment setup.

Helped new hires completely new to Agile/Scrum, by giving them bug templates, giving them guidelines and holding them accountable.

Encouraged developer collaboration, but not developer-run testing. Basically tried to give them the confidence to perform more technical testing tasks (like API testing) so that they wouldn’t rely on developers dictating what/how to test.

28
Q

How did you improve process as a QA engineer at The Neat Company?

A

I’m a firm believer in cross-team communication, and at the very least ‘best guess’ root cause analyses.

Detailed steps to reproduce (the tester should try them for themselves on clean environments), environment details, stack traces/logs/screenshots (‘tools for the job’), and any relevant information.

One of the biggest issues I had in working with 3rd party vendors and offshore resources was the communication barrier, in terms of language, willingness to voice concerns, and time differences.

Another big problem with root cause analysis is time constraints. I found that sometimes it is best left for the developer who is more comfortable with the particular domain, even though I always want to ‘learn more’.

I would never go to a developer saying “this doesn’t work” without having done my due diligence. I encouraged my team members to do the same by not allowing them to do that to me.

29
Q

How did you set quality goals at The Neat Company?

A

Believer in “log everything”, but I also believe that you can’t release perfect software. A big bug backlog is a good way to hammer out quality expectations and goals for the whole team. Understanding what we hope to accomplish and what we expect from our releases.

Doing this enough can give a stable idea of what our quality goals are for each release. What do we want to have fixed by alpha? beta? release?

By categorizing bugs (performance, functionality, workarounds, etc.) and being forced to rank them by severity, every team member advocates for themselves and owns quality.

30
Q

What were some ways you helped QA process at The Neat Company?

A

Created a QA home page on Confluence wiki, where I could organize regression plans, general testing procedures (how to test Neat Scan, how to use AWS-cli, how to use neat-cli, how to run automation on local machine, etc.).

Was constantly available (via face-to-face, Slack, email) for questions. More than happy to answer any, but not more than happy to do someone’s work for them.

I also mentored a QA intern.

31
Q

What is this about ‘managing a third party resource’ in mobile testing?

A

Renovated our test plans to be understood by people with no knowledge of our product. Isolated the “need-to-know” functionality and didn’t give them more information than they needed.

Two possible problems of communication: not enough information and too much information. Both lead to a corruption of intention/expectation.

Worked with the onsite developers to provide uTest with binaries.

Reviewed all bug reports and provided feedback.

32
Q

What experience do you have using UI measurement tools?

A

Tracking real-time usage via API calls (New Relic) and functionality concentration for triaging performance testing. Who is using the app, which version are they using, and what are they doing the most?

Rollbar for continuous crash reporting, which could provide more information (such as user environment details and times of occurrence). It took away the “oh, I think I saw that crash before” aspect.

Crittercism for our mobile apps. Crash reports sent right to my email that could be categorized and tracked in JIRA.

33
Q

What do you mean by “go to” tester for technical issues? Can you give me an example?

A

When I was an intern, expectations for me were low or non-existent. Developers outside my immediate circle didn’t always “know what I could do”. As a result, some cards were officially marked for QA but unofficially tested by the developers.
Over time, I gained a reputation not just for technical abilities, but for my motivation to learn new things. The question was no longer “do you know how to test that?”; instead, it was assumed that if I didn’t know, I’d figure it out.

Before long, when a card requiring technical sophistication or containing some technical jargon came to QA, my manager would just say “give it to Thom.”

Problem: a user calls in who has a lot of items in their account. When customer support used the admin tool on that account, it caused locking issues on the Mongo primary, which affected all users. The tool called an AccountMetrics model that used Mongo count queries.

The fix: use an ElasticSearch wrapper that indexes the counts and provides them to the AccountMetrics model.

To test: (1) fetch current item count status from mongo, (2) run Chef command to stop ElasticSearch indexers (which store documents and make them searchable), (3) upload some items, (4) verify no change in mongo or account admin page, (5) restart indexers, (6) verify count + 1.

34
Q

Describe how you “led the testing effort” during this HP partnership?

A

The HP project was essentially an HP scanner that could scan directly to a Neat cloud account, where the scanned images would be OCR’d, parsed, organized, and made searchable.

Expectations and ownerships were made clear by the PM on both sides.
Tested processing time and item field parsing accuracy.

Processing:

Chose a data set that was agreed on by both parties (top vendors, generally “expected to be good” receipts, and business cards).

Pulled a cross section of raw image data from an S3 bucket that our image processing application used.

Scaled up the environment to closely match Production (with the caveat that Production has minimum EC2 instance counts and scales more rapidly than Staging).

Uploaded 1000 images and waited until they were done processing.

Ran Rails script to query all items within that date/time and gather their image processing data, exported as JSON.

Ran a second script to parse the JSON, calculate average times for each of the item types, and dump the output to CSV (a rough sketch follows below).

Used the CSV to generate graphs and present them to HP stakeholders.
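
A rough sketch of that second script (the averaging step), assuming the export is a JSON array and that each record carries an item type plus begin/end processing timestamps; the field names here are assumptions, not the production schema.

require 'json'
require 'csv'
require 'time'

items = JSON.parse(File.read('processing_export.json'))

CSV.open('processing_averages.csv', 'w') do |csv|
  csv << ['item_type', 'count', 'avg_processing_seconds']
  items.group_by { |i| i['item_type'] }.each do |type, rows|
    durations = rows.map { |r| Time.parse(r['end_processing']) - Time.parse(r['begin_processing']) }
    csv << [type, rows.size, (durations.sum / durations.size).round(2)]
  end
end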

Accuracy:

Chose a data set agreed on by both parties (top vendor receipts, generally ‘happy path’ data).

SSH’d into HP device and scp’d the raw image (i.e., before processing). Did the same with our flagship device.

Same as before, uploaded the items (5 receipts, 5 contacts, 5 documents, each 300 times).

Ran a mongo script to collect the parsed attributes and output them to JSON.
Ran a second script to parse the JSON and check against expected values (imported from a CSV data file). Calculated differentials, etc.

Presented findings to HP stakeholders.

This was all designed from scratch by me.

35
Q

What kind of scripting did you do?

A

Desktop environment setup was a pain point for users outside development (customer support, PMs, marketing reps at Macworld, etc.). We had a detailed help page in our customer support wiki that outlined the necessary steps. I provided a neat-clean script written in Python which automated those steps.

Developer environments for automation testers required a lot of tools they didn’t already have (Sublime, RESTclient, command line utilities). I wrote a Bash script to set everything up using Homebrew.

Ruby command line scripts using HTTParty for basic CRUD, creating users, etc.
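
An illustrative HTTParty snippet of that kind of helper; the base URI, endpoints, and payload fields are placeholders, not the real Neat API.

require 'httparty'

class NeatClient
  include HTTParty
  base_uri 'https://api.example.com'   # placeholder

  def initialize(token)
    @headers = { 'Authorization' => "Bearer #{token}", 'Content-Type' => 'application/json' }
  end

  def create_user(email, plan)
    self.class.post('/users', headers: @headers, body: { email: email, plan: plan }.to_json)
  end

  def get_user(id)
    self.class.get("/users/#{id}", headers: @headers)
  end

  def delete_user(id)
    self.class.delete("/users/#{id}", headers: @headers)
  end
end

# client = NeatClient.new(ENV['API_TOKEN'])
# puts client.create_user('qa+test@example.com', 'premium').code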

36
Q

Did you collaborate with customers at The Neat Company?

A

Weekly meetings with customer support manager to discuss bugs found in the wild, recurrent usability issues, etc.

Customer service reps would triage bugs to make sure they were reproducible, then bring QA into the picture.

Brought these back to the developers.
Helped to set quality goals and expectations by classifying bugs (performance, cosmetic, database migrations after upgrades, crashes) and ranking them: “What kinds of bugs are most common for our users?” “What are the biggest pain points among our users?” “What renders our app non-functional?” “What are some workarounds?”

Helped users fix specific issues, or communicated workarounds.

37
Q

So you started as an intern at The Neat Company. Describe how that was.

A

Formed the foundation of the Scrum process I would continue to use throughout my career, including:

Feature/story/work estimation through Iteration Planning Meetings, where I was outspoken with regard to time constraints, testing effort, and knowledge spikes.

Developer-QA-Business collaboration through daily standup

Outspoken participation during Retrospectives

Using communication skills for clear and detailed bug reports, test cases and plans.
Issue tracking and bug management through JIRA.

Working with multiple offshore teams.

Learned how to effectively manage my time and resources within tight deadlines and business constraints.

38
Q

What kind of testing did you do as an intern?

A

Story testing, exploratory and scenario based testing, regression testing, post-release testing and production issue testing.

Had an amazing mentor who is now finishing his PhD in applied mathematics.

Got into the practice of understanding the SUT before thinking about how to test it.

Got into the practice of taking notes while testing. Organizing those notes into broad topics and flow charts. When I took over as lead, these notes and flow charts functioned as high level documentation and best practices for the rest of the team.

39
Q

What was a typical stand up session like at The Neat Company?

A

Benefited from having a really good BA who strictly adhered to Scrum practices.

Literally had to stand up, lengthy conversations were taken offline, bugs were expected to be referred to by number and brief description, etc.

Showed me the value of Scrum practices in SDLC.

40
Q

What was your participation like as QA in Iteration Planning Meetings?

A

Made me realize my added value early in my career. Gave me a good expectation of story requirements before they came to me in the form of JIRA cards.