Targeted Resume Questions Flashcards
Briefly describe your job at Comcast.
Writing and refactoring automated tests and the automation framework for the iOS application; this took 75-80% of my time.
Manual regression testing involving 3rd party smart devices and API integration with mobile clients.
End-to-End automated and manual tests for new platform.
Multiple teams with competing deadlines and changing processes, using Agile and Scrum principles.
Increased precision in test case management and writing.
Increased automation visibility by getting automated tests out early in the sprints and running them for iOS regression.
Increased QA visibility with developers: adding changes directly to the iOS application, working with multiple departments on account creation and maintenance scripts, and working with front-end developers on responsiveness testing and automation.
What kind of automation work did you do with iOS there?
When I came in, they had some old automation code that wasn’t being run. I was given the task of “figuring out how to fix it”.
Task 1: Break it down into clearly definable goals. Big Goal 1 was “Reliable Automation” (i.e., build confidence in our automated test runs).
Created an EPIC to “Review / Refactor existing automated test cases”. We started with 326 defined scenarios, ~60% of which were passing.
Task 1.5: Broke each feature into its own separate Epic Task, with the goals: (1) Reflect on data (which scenarios are breaking and why); (2) Fix scenarios or defer (low hanging fruit first - the goal is reliable automation); (3) Prune outdated scenarios.
Task 2: Recurring automation runs. Once we have some test confidence, we should run them in CI or some other daily scheduled run.
Task 3: Add test cases. Once we have confidence and scheduled runs, we can start expanding the test cases. This also involved an intermediary task of building out the framework.
Other tasks: discussing automation in our weekly iOS QA status reports and addressing concerns about automation (only two people on the team wrote and ran automated tests). Getting the automation in as part of our CI was a big step toward increasing involvement.
What kind of manual testing did you do?
Exploratory/session-based testing, functional testing, and 3rd-party integration testing.
Mostly using Charles Proxy (rewrite rules to define API parameters, etc.), Splunk event logging via Rails-API (a lot of that testing was done by the Ops and Adapters teams), WebSocket I/O testing with Wireshark, and accessibility testing.
Builds were distributed on Box/Hockey.
Encouraged “dogfood” testing.
Critical bugs found: WWXH showing 3rd-party connections multiple times, faulty alert texts, login looping on a failed password (the maxAttempts error never appeared due to an API failure), and table view issues. Other manual coverage: network testing (WiFi/3G/4G/LTE and faulty-network simulation), destructive/edge-case testing, a device coverage strategy, and sanity and smoke tests.
How did you improve the iOS automation framework?
Goal: parity between our Regression Plan and our scenarios (i.e., make the scenario language match the language used in our test plan). It forced us to use clear language when writing new test cases and to write cukes that clearly matched our test plan, which is, after all, the entire point of Cucumber.
Give our scenarios unique test case numbers! This was something I had to explain to the rest of the team.
Better error handling (move the sleep functionality into its own class in the framework).
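A minimal sketch of that idea, assuming nothing about the real code beyond what is described above (the Waiter name and the defaults are mine):

```ruby
# One polling helper the framework calls instead of scattering raw sleep()
# calls through page objects and step definitions.
class Waiter
  DEFAULT_TIMEOUT  = 15   # seconds; illustrative defaults
  DEFAULT_INTERVAL = 0.5

  # Polls the block until it returns a truthy value or the timeout expires.
  def self.wait_for(timeout: DEFAULT_TIMEOUT, interval: DEFAULT_INTERVAL, message: "condition not met")
    deadline = Time.now + timeout
    loop do
      result = yield
      return result if result
      raise message if Time.now > deadline
      sleep interval
    end
  end
end

# Usage inside a page-object method:
#   Waiter.wait_for(message: "Login button never appeared") { login_button_visible? }
```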
Remove logic from the tests. The test language should be a pseudo-DSL that just asks the framework to perform tasks on the AUT; it doesn't tell the framework HOW to do them. Examples: the KMSI box checked/unchecked, the type of account to sign in with, hiding/showing the keyboard, etc.
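A step definition in that style might look like the following sketch (the step wording and LoginPage are hypothetical, not the project's actual code):

```ruby
# The step states WHAT to do; the framework decides HOW.
Given(/^I sign in with a "([^"]*)" account and KMSI (checked|unchecked)$/) do |account_type, kmsi|
  LoginPage.new.sign_in(account_type: account_type,
                        keep_me_signed_in: kmsi == "checked")
end
# Element queries, keyboard handling, and waits all live inside LoginPage#sign_in;
# the scenario text never mentions them.
```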
Enforce code standards (set good examples). There were a lot of case statements that matched on query strings and then ran the queries. I refactored this by pulling the "query methods" out as separate methods in the class and dispatching to them with a simple send on the base page class. Separating the query from the logic made things easier to fix.
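An illustrative before/after of that refactor (the element names are made up; query is Calabash's element query helper):

```ruby
# Before: one case statement that both maps strings and runs queries.
def element_query(name)
  case name
  when "sign in button" then query("button marked:'Sign In'")
  when "kmsi checkbox"  then query("* marked:'Keep me signed in'")
  # ...dozens more branches...
  end
end

# After: one small query method per element on the base page class,
# dispatched with send. Fixing an element means touching one method.
def sign_in_button_query
  query("button marked:'Sign In'")
end

def kmsi_checkbox_query
  query("* marked:'Keep me signed in'")
end

def element_query(name)
  send("#{name.tr(' ', '_')}_query")
end
```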
For metrics, we had this standing statement: during regression we run our extensive regression suite (~750 test cases), then run targeted tests plus sanity and smoke tests for each new build. Sanity was a bit more extensive than smoke, with a few more devices.
"We run automation on the same devices we've done smoke testing on. Smoke testing typically takes about 30 minutes."
"The goal is to run the automation as the Sanity Suite. This will take parts of full regression and replace smoke and sanity. At this point we have some of smoke automated; we need to get sanity automated. We want to shorten our regression cycle to 5 days (currently over 10)."
We automated 16% of our regression tasks on our last FTR.
One thing I did was refactor the Screen Appearance tests. We had a Screen Appearance scenario for every single screen (about 12), and each screen had a dozen or so elements to verify. With scenario outlines, that meant ~144 tests for screen appearance alone, and they were brittle. I wrote a ScreenAppearance class that used the ImageMagick gem: for each screen it simply took a screenshot and saved it to a file named with the device, orientation, and screen name. At the end of the smoke test run we could open that folder and quickly verify all the screen appearances manually.
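A condensed sketch of that helper (the directory, env vars, and step wording are illustrative, and the ImageMagick post-processing is omitted). It assumes Calabash's screenshot helper, which saves a PNG and returns its path:

```ruby
require "fileutils"

module ScreenAppearance
  OUTPUT_DIR = "screen_appearance"

  def record_screen(screen_name)
    FileUtils.mkdir_p(OUTPUT_DIR)
    device      = ENV["DEVICE_NAME"] || "unknown-device"
    orientation = ENV["ORIENTATION"] || "portrait"
    # The file name encodes device/orientation/screen for quick manual review later.
    screenshot(prefix: "#{OUTPUT_DIR}/", name: "#{device}_#{orientation}_#{screen_name}")
  end
end

World(ScreenAppearance)

# Called from a step such as:
#   Then(/^I record the "(.*)" screen appearance$/) { |name| record_screen(name) }
```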
Useful commenting. I wrote the test cases from our regression smoke/sanity plans as comments in the feature files, then wrote a bash script to ack/awk the files and take a count, so we could easily see our current scope.
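A rough Ruby equivalent of that count, assuming a hypothetical "# TC:" comment convention (the real script used ack/awk in bash):

```ruby
# Tally test-case comments across all feature files.
count = Dir.glob("features/**/*.feature").sum do |file|
  File.readlines(file).count { |line| line.strip.start_with?("# TC:") }
end
puts "Regression test cases currently represented in the feature files: #{count}"
```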
Documentation. Updated the README to include the minimum steps required to run automation on a local machine, plus some quick "sanity debugging" for common Calabash/environment quirks (setting environment variables, network connection, etc.).
How did you improve process at Comcast?
Our Regression Test Plan was ~750 "test cases" (each row could represent multiple test cases, so probably more like 1,700), all in one big table in Google Docs.
Improving the framework and getting automation integrated.
Enforcing above standards.
Increasing developer collaboration.
Bringing my own particular skill set to the job.
How did you report your findings or discuss the "release readiness" of the app before and after release?
The idea of "metrics" as guideposts or markers, a way to measure progress. Given the task to "fix automation", one of the necessary corollary tasks was to SHOW how I was fixing it. The metrics were easy to read and interpret.
“Major Features Introduced in 7.10” (broken down into Area - Story).
“Noteworthy Leaked Defects” (ordered by severity)
“Testing Assessment” (description of the kind of testing we did, device matrix, networks and bandwidth, environments, automation, etc.)
"Leaked Defect Summary Count" (broken down by component (3rd party, log in, security, etc.) in the rows and severity in the columns, with total bug counts for each component).
Bar chart showing the last six (minor) releases and how many release-blocker, critical, major, and minor bugs were found in each, making it easy to compare how many bugs we were introducing and their severity.
“Test Case Numbers” - how many test cases we currently have, including automation. Eventually will include “hours saved” metrics to show the effect of our automation efforts on smoke/sanity testing.
"Test Execution" - device matrix and OS combinations.
“Testing Tools” - what we used for Story management, defect reporting, automation, build distribution, version control, CI, testing.
“Appendix” with a more detailed description of the bugs, including their JIRA ticket numbers and metadata.
What was it like working on multiple high-level projects with competing interests and deadlines?
Involved a lot of back and forth with the different managers to determine priorities.
On the one hand, this project was "a big deal" and needed a lot of work to build up its testing effort.
On the other hand, we were releasing the iOS product every month, which needed my attention for regression.
How did you work with Product Owners?
Essentially a context-based approach. The tests we were writing depended very heavily on the work actually being done in the sprint.
The team was self-organizing. We were made up of individuals from every team, and we used each other's experience and input to develop our strategies, often on a day-to-day basis.
This was all new ground. We were building a platform, not a feature, that would later have to be brought into the other teams. A lot of the initial sprint work was just "getting something to showcase" and building out the backend later.
It involved mobile teams, web app teams, hardware departments (for the gateways and account creation), etc. A lot of leg work.
I started looking into the Galen framework for responsiveness testing. Galen uses declarative layout specs and runs on Selenium Grid, Sauce Labs, etc.; it tests element location relative to other elements.
A benefit of working directly with the POs was that I heard things like "those types of accounts are going away" - high-level business decisions coming directly down the pipeline. This was helpful in knowing where to spend my automation time.
We used Trello to monitor Big Picture chunks that tied in all the platforms, and JIRA to manage individual team work.
Working with PMs and customers themselves is a great way to define “patterns of usage”, which can help when evaluating our testing efforts.
How did you work with developers to design ‘testable code’?
Automation writers depended on developers to add new accessibility labels to the codebase so we could accurately locate elements. I had some personal experience with Xcode and Objective-C, so I knew this was a really simple process. After talking with the lead developer, we agreed that automators could just cut a new branch, add the accessibility labels we needed, and submit a PR. Since the Calabash build was separate from the release build (calabash.framework had to be part of the build target), it wasn't a big concern.
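For context, this is roughly how Calabash queries use those labels (the "Sign In" label is a hypothetical example):

```ruby
# Accessibility labels surface as "marked" values in Calabash queries.
wait_for_element_exists("button marked:'Sign In'", timeout: 10)
touch("button marked:'Sign In'")
```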
Also had to work with the lead developer to set up a private build server for our Calabash builds; for some reason they didn't play well with AnthillPro.
For the new platform, this involved adding sensible class and ID names at the beginning. This was already being done.
It was very important to get the automation tests up and running early in the sprint. This was essentially a context-driven approach: we were building something to showcase, with the assumption that the underlying backend logic would be built out later. That wouldn't change our automation tests, so we had a reliable mechanism for ensuring the initial product requirements didn't break as the backend was implemented.
What kind of end-to-end testing did you do at Comcast?
A project I worked on was designing a new platform for a new type of user.
One of the criteria we decided on during planning was that automation was required for a user story to be "Done".
Calabash was good because it allowed me to write one set of Cucumber scenarios and have them run on Android and iOS.
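One way that plays out in step definitions (a sketch; the env var, helper, and page classes are all hypothetical): the scenario text stays identical, and only the page object behind it changes per platform.

```ruby
# Resolve a platform-specific page object behind a shared step.
def home_screen
  ENV["PLATFORM"] == "ios" ? HomeScreenIOS.new : HomeScreenAndroid.new
end

Then(/^I should see the device dashboard$/) do
  home_screen.assert_dashboard_visible
end
```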
The Selenium framework was a bit of a mess, we had a very limited time frame, and the lead developer had doubts about Selenium playing well with Web Components and Polymer, so I added the automation using Ghost Inspector (similar to Builder). This let me get automation up and running in the first sprint. A lot of what we were doing was "building something to showcase", with the assumption that the actual backend code would get done incrementally.
“If they had a particular gateway device, show X screen on First Time User Login.”
Used Zephyr Enterprise to track our test cases and integrate with JIRA.
This also helped bring visibility to the rest of the teams. The platform project involved both mobile teams (iOS and Android), and I heard things like "man, I don't know if we can even do iOS automation." iOS automation is NOT specific to the iOS team.
Tell me about your role as Test Automation Engineer at The Neat Company.
Writing automated test cases and building out the framework for the web application.
Increasing QA visibility by integrating automated smoke tests with the CI build and scheduled AMIs, integrating with the TestRail API, and displaying results on a large HD TV.
Increasing QA participation in writing automated tests.
Improving process by writing scripts to expose and consume performance metrics.
Improving process by providing system testing guidelines and tools (neat-cli, environment tools, data collection in mongo for 3rd party machine learning).
Improving process by involving automation with regression, allowing for more targeted regression and exploratory testing.
Working directly with customers via customer support representatives and usability sessions, and helping build out an admin console for customer issue handling.
Performance testing: targeted (a high-profile partnership; feature-based caching), general benchmarking using New Relic and direct customer interaction, and recommendations (using targeted tools like siege).
Increasing QA participation by integrating multiple off-shore teams and teaching them general Scrum practices and guidelines (bug report templates, testing strategies, test plans).
Describe a bit about the automation work you did there and how you integrated it with the team.
Wrote documentation on how to run the suite on a local machine, how to contribute, style guides and best practices, tools for the job, etc.
Worked with the devops engineers to schedule EC2 instance creation, download the GitHub repo containing all the automation code, run all the tests as a cron job, and output the report.
Also wrote a script that communicated directly with the TestRail API to report our automation test runs in real time. We displayed this on an "information radiator" (a giant HD TV) located in the middle of the development team.
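A condensed sketch of that reporting call (add_result_for_case is TestRail's standard v2 endpoint; the host, credentials, and status mapping here are placeholders):

```ruby
require "net/http"
require "json"
require "uri"

# Push a single test result to a TestRail run as it happens.
def report_result(run_id:, case_id:, passed:, comment: "")
  uri = URI("https://example.testrail.io/index.php?/api/v2/add_result_for_case/#{run_id}/#{case_id}")
  request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  request.basic_auth(ENV["TESTRAIL_USER"], ENV["TESTRAIL_API_KEY"])
  request.body = { status_id: passed ? 1 : 5, comment: comment }.to_json # 1 = passed, 5 = failed
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
end
```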
Testing APIs using command-line tools like curl or HTTPie, GUI clients like Postman and CocoaRestClient, or Charles rewrite rules, to make sure calls are routed through the right proxy servers and that the APIs return the right codes.
Testing APIs at their creation is essential so that we’re not surprised by the errors we might find at the functional level.
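A tiny sketch of that kind of check done in Ruby rather than curl/HTTPie (the endpoint and expected code are placeholders):

```ruby
require "net/http"
require "uri"

# Fail fast if the endpoint stops returning the expected status code.
response = Net::HTTP.get_response(URI("https://api.example.com/v1/items"))
raise "Expected 200, got #{response.code}" unless response.code == "200"
```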
Tell me about the Webapp Automation Framework that you wrote.
Watir, Selenium (2.x), TestRail.
Used vanilla WebDriver for a lot of the framework code itself.
Encapsulated webdriver code within the framework to isolate it from tests.
The framework was essentially an application to test an application.
The automation framework acts as a buffer between the SUT and the tests. That is, the tests don't operate directly against the web application; instead, they call methods in the framework.
Started by writing the tests themselves: assuming the framework already existed, how would I want the tests to look?
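A sketch of the resulting shape with Watir (the URL, locators, and class names are illustrative): the test only calls framework methods, and only the framework touches Watir/WebDriver.

```ruby
require "watir"

class LoginPage
  URL = "https://app.example.com/login"   # placeholder URL

  def initialize(browser)
    @browser = browser
  end

  # The test asks for a sign-in; the page object owns the element interactions.
  def sign_in(email, password)
    @browser.goto(URL)
    @browser.text_field(id: "email").set(email)
    @browser.text_field(id: "password").set(password)
    @browser.button(id: "sign-in").click
  end

  def signed_in?
    @browser.div(id: "dashboard").present?
  end
end

# The test reads the way I wanted it to look before the framework existed:
#   browser = Watir::Browser.new :chrome
#   LoginPage.new(browser).sign_in("user@example.com", "secret")
```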
Was automation received well in the team? How did you introduce it?
This brought us closer to the “Agile goal” of daily code confidence. Prior to this we would prioritize certain areas for our regression cycle due to time constraints.
Regression times before releases went down, allowing for exploratory and performance testing.
See above for CI/build integration. We initially ran the automation on every build in Travis CI, then turned this off in favor of the scheduled EC2 instances. WHY? Because it was slowing down the builds. WHAT DID I LEARN? I should have kept the basic smoke tests running to keep up automation enthusiasm.
How did you get other testers to contribute to automation?
Using tools such as IDE/Builder, and holding "automation sessions" where I'd discuss automation basics and do code reviews.
Also, building out the framework itself so that tests could be written in a pseudo-DSL.
Tell me about this “lightweight Rails” app that you built to help automation.
The PM/Director wanted the whole team on board with automation, with the off-shore team handling manual regression testing. I offered to work with the team and get them up to speed (assuming zero coding experience).
The early compromise was record-and-playback tools. I suggested IDE/Builder since it ties into Selenium, produces easy-to-parse JSON, is extensible, and runs on Sauce Labs.
As I tried to move the team toward Ruby/WebDriver automation, my early solution was to start building out the framework for more elaborate automated testing. I created a Rails app that used simple forms and UI elements to drive backend operations (through our CLI tool, API calls, Rails' mailer, etc.); a rough sketch follows below.
A lot of this work eventually made its way into the framework.
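A rough sketch of the shape of that Rails glue (the controller, route helper, and neat-cli flags are hypothetical): a simple form posts here, and the controller shells out to the internal CLI so non-coders never have to touch a terminal.

```ruby
require "shellwords"

class AccountToolsController < ApplicationController
  def create
    account_type = params.require(:account_type)
    # Run the same backend operation the CLI exposes and surface its output.
    output = `neat-cli accounts create --type #{account_type.shellescape}`
    flash[:notice] = output.presence || "Account creation requested."
    redirect_to account_tools_path
  end
end
```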