And So It Begins

Well, I finally bought a URL to further identify my name. I had been debating the purchase because it meant a few things. First, it means that I am out an entire $48. Second, it means that I should probably post regularly so that the identification is worth it, and continue making the name known. Lastly, it could add some credibility or validity to what I’m saying, though I’m not sure it always should.

The Journey


If you’ve read my original post when I began to write, you might remember that I have wanted to write a blog for a few years. You may also remember where my QA Blog journey officially began earlier this year. It started as a goal for myself, as part of my career development. I wanted to be more engaged with the QA community, and I also wanted to be a teacher/mentor of the QA role, training people to be more analytical, to think outside the box, and to test like they’ve never tested before.

These desires led me to start writing a blog online. I had no expectations of followers, or active readers, or really anything like that, but to fulfill a career goal, I needed to be somewhat active with my writing. Once I got a decent number of pieces written to make it worth mentioning my blog to people, I figured it was time to make it real by purchasing a URL.

The Choices

I came up with a few different ones and tossed them around in my head while also presenting them to some friends and co-workers. I wanted to be sure that whatever I bought, I would be happy with for an extended period of time, which right now is just year to year. A few of my ideas were:

  • (although taken, there is no real site)

I heard back from some people, and the consensus was nearly unanimous: they all liked it. This was originally my second favorite option, right after qaingtheworld, but I figured that URL was a lot harder to read and say to people, so qaingtheworld quickly became a secondary option.

The Questions

I have had a few people ask me some questions about what is next, what do I want this to look like over the next few months or years, and my answer right now is simply, I don’t know. I have dreams of certain things happening in my career, but right now I’m just going to continue expanding my knowledge of the QA life, try to become even more analytical, and take that where I can in my future.

Mobile Testing Emulators

It’s time to write about the research I’ve been doing to find a solution to the limitations of mobile testing on a desktop computer. I knew that not all of the issues could be solved, but I wanted to see what we could accomplish with online software.

I think it’s important to first define the difference between Simulators and Emulators. While these two are nearly synonymous, there are slight differences that could assist in your determination to look into one product or another. Simulators imitate the appearance, design, or the basic features of a device, whereas an emulator will reproduce the features and actions of the device.

Lastly, my research needed to find something that could resolve URLs locally, because we want to test our websites in lower testing environments before we release to production, so being able to test our sites that way is important. That being said, let’s get started.

BrowserStack

I’ve got to start with my favorite product that I tested with, and say the most about it. BrowserStack has it together. They offer a laundry list of devices to choose from, with different browsers to test on for each device. They have both emulators and simulators, so you can test on real devices, and also by simulation if the device is a bit older. You are also able to resolve URLs locally, so we could access our sites in house. It allows automation testing using Selenium and Appium on actual mobile devices; I didn’t look into this feature too much, as I focused more on functional testing for my efforts. Lastly, it has a built-in code debugger and the ability to inspect and read the console, which is really nice for testing iOS.

A few notable items are that the trial is 30 minutes of functional testing, but this can be overcome by signing up with different email addresses once that expires. The trial only allows you to tap into a couple of iOS devices and a couple of older Android devices. The trial is a little laggy and doesn’t always display the design without minor glitches, but it worked pretty well for my needs. On the plus side, they show the entire phone, so it really feels like you’re testing on a device and you get the full experience.

BrowserStack Home

The cost of the full version can add up quickly, but they do have a variety of offerings that let you customize to your needs. The other downside of this product (and most of the others) was that the OS versions are not the most up to date; instead, each device uses the OS version that was released with it (for example, the iPhone 5 uses iOS 6.0). The last downside is that their screenshot tool cannot resolve URLs behind a login screen, but that’s only an issue for our type of testing.

Overall BrowserStack gets my vote.

CrossBrowserTesting

A product of SmartBear, CrossBrowserTesting has a lot of really great features to test with. Their trial is 100 minutes of manual testing, which allows you to get a lot done to see if you want to pursue the product further; however, it only allows 5 minutes at a time, so you have to be fast. Like the others, their trial only allows a few Apple and Android devices, but paying unlocks a ton more options. They also have a lot of devices to choose from, and many of their devices are available for testing as both real devices and simulators. They even offer real devices as far back as the iPhone 3GS and Galaxy S3, if that is a need for you.

Their trial is extremely laggy and pretty slow to load up (which cuts into your 5 minutes), but I’m unsure whether that is just the trial or whether the paid version behaves the same. Another minor downside is that their viewable area is just the screen, not the full device. That part didn’t bother me too much; it just doesn’t give you the entire feel of using the mobile device.

They offer a great screenshot tool that lets you quickly compare up to 25 different browsers, and they’ll even point out design differences that they find. This tool can even go behind a login screen by passing in a username and password (though I wasn’t able to get this to work on their mobile devices, only the desktop versions).

Overall, this product was great. Other than the very slow connection and the 5-minute trial window, it obviously has a lot of great features.

GenyMotion

GenyMotion is a pretty cool product that installs locally on your computer, which in theory provides some speed increases and means it doesn’t fully rely on a solid web connection. This product also allows you to test things like interruptions, battery usage, network connectivity changes, and more. It actually seems to do a lot of really cool things that were not available in the products above.

However, a few things to note: it is only for testing Android devices; there is no iOS or other device support. Also, because it’s locally installed, you need to have the disk space and memory available to run it. It also requires some additional software, and running it against local URLs requires a bit of server configuration.

SauceLabs

This was a fun product to try, because SauceLabs offers every device, and every OS version available for those devices. You simply select a device and an available OS and get started. However, it takes a very long time to spin up the testing simulator. It appears that you can test local sites, but not without a lot of server/proxy work, which I was not going to attempt in my short research period.

The coolest thing about this was all the available devices and OS versions, allowing you to test different scenarios and configurations.

The Others

There was another handful of products I tried that didn’t give me what I needed or were old and outdated. These products may help in some areas, but for testing locally, or for testing on the latest devices, they didn’t step up to the plate. Others required money up front or further work just to try them out, which I also wasn’t interested in during my research period.

  • iPadian
    • Requires $20 to download.
  • Air iPhone
    • Old and outdated. Not even worth the time.
  • Xamarin Testflight
    • Requires in depth knowledge of Visual Studio and runs locally.
    • Old and outdated, does not work like I needed it to work.
  • Safari browser
    • Doesn’t work great on Windows and doesn’t do what I needed.
  • MobiOne
    • More of an App builder and tester, not for testing our sites.
  • Smartface
    • I just could not get it to install, and then was contacted by their sales team a bunch.
  • Sigos AppExperience
    • Free trial requires meeting with a sales rep. I didn’t do this, but the app looks cool.

The Winner

If you find yourself needing this type of testing software, I recommend BrowserStack. It costs some money, but it gives you a lot of really great features. I was even able to find a handful of bugs during my trial period, which made the product’s value clear before looking further into it.

Determining Mobile Coverage of a Desktop Application

Over the past few weeks at Zywave, I’ve been tasked with researching and defining a mobile readiness coverage scale to attach to the miscellaneous products we develop. We have a wide range of products, and their usability on mobile devices varies between them, so we wanted something that makes it easy to see which products are ready for mobile and which aren’t, and then what amount of overall functionality is offered.

The Grading Scale

I began by researching the basic key phrases I could think of in Google:

mobile coverage grading scale, product functionality available on mobile, how to rate my product’s ability on a mobile device, how to determine what features to remove from mobile, what to tell a customer who asks about my mobile software, compare mobile coverage to desktop, etc.

Unfortunately, I found that I wasn’t able to pinpoint a widely used list or set of guidelines from the mobile development community. Perhaps that was because I wasn’t able to figure out the best Google phrase, or because I don’t know the right websites to find a list like the one I was looking for. I’m hoping those aren’t the reasons, and I can just blame it on the fact that one doesn’t really exist. However, I did learn quite a bit about the limitations of testing mobile on a desktop computer. More on that later.

Since I was overall unsuccessful in my endeavors, I set out to use the little bits I found and to create a list that would work for Zywave’s products. I found a few different ideas for informing customers of how much of your application’s functionality is available on mobile, such as a percentage-based system, a yes/no for mobile availability, a number scale (which seemed somewhat like a percentage), a list of features available or missing, and a grading scale. I chose to try out the grading scale for our use.

This is what I came up with. It’s continuously under review, so a handful of things could change over time, or we might scrap it altogether and start over with a different idea (the joys of agile).

Grade A

Application is mobile ready, approved by business for use, and complete. This signifies that all or nearly all application functionality that is available on a desktop browser is also available on a mobile device (perhaps with a small amount of business-decided features removed from mobile). This also signifies that the features available on a mobile device are without bugs, and a mobile device can be used by the consumer as a desktop alternative, when preferred. This grade also refers to active and immediate development support and a high level of external advertising for use on mobile devices.

Grade B

Application is mobile ready, approved by business for use, and complete. However, this signifies that about half of the feature set of the product is available on mobile, and the mobile device can only be used for those features. This typically would refer to a business decision to only release a subset of features, so the consumer can utilize these features when on the go, but the preferred method of product use is via a desktop browser. This might also refer to a product with more than half of its functionality available to the user on a mobile device, but with a large amount of known compatibility issues across mobile platforms. This grade also refers to a fair amount of mobile development support, and some external advertising for use on mobile devices.

Grade C

Application is mobile ready and approved by business for use, however, a very limited subset (about 25% or less) of functionality is available to the user, regardless of business decision or code limitations. This could also refer to a product in active mobile development for production, attempting to reach a better grade, but testing is incomplete or there are many known compatibility issues. This grade also refers to a low amount of mobile development support, and little external advertising for use on mobile devices.

Grade D

Application could be mobile ready, but business decisions are made to ignore this product. Or, product is available on mobile and can be logged into, but no functionality is available to the user or functionality works incorrectly. This grade also refers to no mobile development support, and no external advertising for use on mobile devices.

Grade F

Application cannot and will not be mobile ready due to business decisions, code limitations, or time restraints. This product has no mobile functionality, it cannot or should not be logged into, and no mobile development support exists for this product. There should be no mention of this product’s functionality for mobile use.
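To make the boundaries between the grades concrete, here is a minimal sketch of the scale as a lookup function. The thresholds and flags are my own reading of the descriptions above, not an official rule set; the names `coverage`, `approved`, and `accessible` are invented for illustration.

```python
# A minimal sketch of the grading scale above as a lookup function.
# Thresholds are my interpretation of the grade descriptions, not official rules.

def mobile_grade(coverage, approved, accessible):
    """Return a letter grade for a product's mobile readiness.

    coverage   -- fraction of desktop functionality available on mobile (0.0-1.0)
    approved   -- business has approved the product for mobile use
    accessible -- product can at least be reached/logged into on a mobile device
    """
    if not accessible:
        return "F"  # no mobile functionality at all; should not be logged into
    if not approved or coverage == 0:
        return "D"  # reachable, but unusable, broken, or ignored by business
    if coverage <= 0.25:
        return "C"  # very limited subset of functionality
    if coverage < 0.9:
        return "B"  # roughly half the feature set; desktop still preferred
    return "A"      # full (or nearly full) desktop parity

print(mobile_grade(0.95, True, True))   # A
print(mobile_grade(0.50, True, True))   # B
print(mobile_grade(0.20, True, True))   # C
print(mobile_grade(0.00, True, True))   # D
print(mobile_grade(0.70, False, False)) # F
```

One nice side effect of writing it down this way is that the "when does a product change grades?" question below becomes mechanical: re-run the function whenever the inputs change.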

Why The Grading Scale

After spending some time thinking about which option would work best for Zywave, I landed on the Grading Scale option because it seemed to categorize our products into groups that could easily be defined by a paragraph or two. However, it’s possible this could group two products into the same grade that don’t belong together; or we might need to further evaluate spacing out the grades a bit more so that D and F don’t feel too close together.

I thought about the simple Yes/No option, but this didn’t account for the list of functionality coverage of the product, nor did it inform the end-user of anything beyond: “Yes, you can access our product from a mobile device.” Should I be able to do anything? Should I expect bugs? How about using my favorite feature in the product, can I do that?

The Percentage Based System would have worked great to define the list of functionality coverage of the product, but it could have easily gotten outdated with the addition of one new feature set. Additionally, I didn’t want to assume I knew 100% of every product at Zywave and all of its functionality, then look into how much of that is available on mobile, and then do the simple calculation:


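The calculation itself amounts to dividing the mobile feature count by the total feature count. A quick sketch, with invented feature counts:

```python
# Hedged sketch of the "simple calculation" described above.
# The feature counts are made up for illustration.
total_features = 40    # everything the desktop product can do
mobile_features = 30   # the subset also available on a mobile device

coverage = mobile_features / total_features * 100
print(f"{coverage:.0f}% mobile coverage")  # 75% mobile coverage
```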
I also found that the percentage could sit at 50% indefinitely, even though your product is releasing more and more mobile functionality. Here’s an example:

Say your desktop product does four things total, and two of those things are available for use on a mobile device (the product is older, and there isn’t time or support for the two missing features). The business decides to release two more features, but only one of them will be available on mobile; the other is strategically left uncovered. Technically you still have 50% coverage, even though this could promote your product to the next letter grade.
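Working through that scenario in numbers makes the problem obvious: the percentage never moves, even though the mobile offering grew.

```python
# The scenario above in numbers: coverage stays at 50% even though
# the product gained a new mobile feature. Counts come from the example.
before = 2 / 4 * 100   # 2 of 4 desktop features available on mobile
after = 3 / 6 * 100    # release adds 2 features, only 1 of them on mobile

print(before, after)   # 50.0 50.0
```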

The Number System could be used in two different ways. It could nearly mimic the Percentage Based System, perhaps with some rounding for feature expansion, but at some point, it’s still inaccurate, it just doesn’t require quite as much updating or tracking. However, it could also be used similar to my Grading Scale, in that a higher number (or lower if you’re into golfing) represents a paragraph of defined coverage, availability, business plans, and more. This does allow more options (1-10) and therefore better definitions between numbers. The problem I encountered was that the numbers naturally locked your brain into a percentage-like system, and I wanted to steer away from that.

My overall dilemma, however, is how to recognize when a product has gone from one grade to another. Perhaps a big list of features was just released to production, and the business has decided not to build and support mobile coverage for it (which I actually have built into Grade A still, but for argument’s sake, we’ll pretend the grade is going down). I was previously at a Grade A, but now that puts me down to a Grade B. Is it that easy? Perhaps it is as easy as updating our spreadsheet of products and their mobile readiness. But if not, maybe that would be an opportunity to redefine our grades.

All Encompassing

How do you develop one list that can work for all your applications? We develop a vast number of products at Zywave, with entirely different capabilities on mobile devices. Writing one list that defines it all can be tough, but I attempted to do so. Maybe I should have just written a paragraph about each product individually and been done with it, but where’s the fun in that?

Does anyone have experience with this? Is there already a list out there that does what I’m looking for? Comment below with your thoughts, suggestions, personal recommendations, or anything!