Mobile Testing Emulators

It’s time to write about the research I’ve been doing into the limitations of testing mobile websites on a desktop computer. I knew that not all of the issues could be solved, but I wanted to see what we could accomplish with online software.

I think it’s important to first define the difference between simulators and emulators. While the two are nearly synonymous, there are slight differences that could help you decide which product to look into. Simulators imitate the appearance, design, or basic features of a device, whereas an emulator reproduces the features and actions of the device.

Lastly, my research focused on finding something that could resolve URLs locally, because we want to test our websites in lower environments before we release to production, so it’s important to be able to test our sites this way. That being said, let’s get started.

BrowserStack

I’ve got to start with my favorite product that I tested, and say the most about it. BrowserStack has it together. They offer a laundry list of devices to choose from, with different browsers to test on for each device. They offer real devices as well as emulators and simulators, so you can test on actual hardware, or by simulation if the device is a bit older. You are also able to resolve URLs locally, so we were able to access our sites in house. It allows automated testing using Selenium and Appium against actual mobile devices; I didn’t look into this feature too much, as I focused on functional testing for my efforts. Lastly, it has a built-in code debugger and the ability to inspect and read the console, which is really nice for testing iOS.
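For the curious, BrowserStack’s Selenium integration is driven through a standard Remote WebDriver session pointed at their cloud hub. The sketch below only assembles a capability payload; the capability names, hub URL, and credential placeholders are from memory and should be verified against BrowserStack’s current documentation:

```python
# Sketch of a BrowserStack-style capability payload for one device/OS combo.
# Capability names ("device", "os_version", "realMobile") and the hub URL in
# the comment below are assumptions -- check BrowserStack's docs before use.

def browserstack_caps(device, os_version, real_mobile=True):
    """Build a capabilities dict for a single mobile device and OS version."""
    return {
        "device": device,            # e.g. "iPhone 8"
        "os_version": os_version,    # e.g. "11"
        "realMobile": real_mobile,   # True = real device, False = emulator/simulator
    }

caps = browserstack_caps("iPhone 8", "11")

# With real credentials, the connection would look roughly like:
# from selenium import webdriver
# driver = webdriver.Remote(
#     command_executor="https://USERNAME:ACCESS_KEY@hub-cloud.browserstack.com/wd/hub",
#     desired_capabilities=caps,
# )
```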

A few notable items: the trial is 30 minutes of functional testing, but this can be overcome by signing up with different email addresses once it expires. The trial only lets you tap into a couple of iOS devices and a couple of older Android devices. The trial is a little laggy and doesn’t always display the design without minor glitches, but it worked pretty well for my needs. On the plus side, they show the entire phone, so it really feels like you’re testing on a device, and you get the full experience.

BrowserStack Home

The cost of the full version can add up quickly, but they do have a variety of offerings that allow you to customize to your needs. The other downside of this product (and most of the others) is that the OS versions are not the most up to date; instead, each phone uses the OS version it was released with (for example, the iPhone 5 uses iOS 6.0). The last downside is that their screenshot tool cannot resolve URLs behind a login screen, but that’s only an issue for our type of testing.

Overall BrowserStack gets my vote.

CrossBrowserTesting

A product of SmartBear, CrossBrowserTesting has a lot of really great features to test with. Their trial is 100 minutes of manual testing, which lets you get a lot done to see if you want to pursue the product further; however, it only allows 5 minutes at a time, so you have to be fast. Like the others, their trial only allows a few Apple and Android devices, but paying unlocks a ton more options. They also have a lot of devices to choose from, and many of their devices are available for testing both as real devices and as simulators. They even offer real devices as far back as the iPhone 3GS and Galaxy S3, if that is a need for you.

Their trial is extremely laggy and pretty slow to load up (which cuts into your 5 minutes), but I’m unsure if that is just the trial or if the paid version would be the same. Another minor downside is that their viewable area is just the screen, not the full device; that part didn’t bother me too much, it just doesn’t give you the entire feel of using the mobile device.

They offer a great screenshot tool that lets you quickly compare up to 25 different browsers, and they’ll even flag design differences that they find. This tool even lets you go behind a login screen by passing in a username and password (though I wasn’t able to get this to work on their mobile devices, only the desktop versions).

Overall this product was great. Other than the very slow connection and the 5-minute trial window, it obviously has a lot of great features.

GenyMotion

GenyMotion is a pretty cool product that installs locally on your computer, which in theory provides some speed increases and doesn’t fully rely on a solid web connection. This product also allows you to test things like interruptions, battery usage, network connectivity changes, and more. It actually seems to do a lot of really cool things that were not available in the products above.

However, a few things to note: it is only for testing Android devices; there is no iOS or anything else available. Also, because it’s locally installed, you are required to have the disk space and memory available. It also requires some additional software, and running against local URLs requires a bit of server configuration.

SauceLabs

This was a fun product to try, because SauceLabs offers every device, and every OS version for each of those devices. You simply select a device and an available OS and get started. However, it takes a very long time to spin up the testing simulator. It appears that you can test local sites, but not without a lot of server/proxy work, which I was not going to attempt in my short research period.

The coolest thing about this was all the available devices and OS versions, allowing you to test different scenarios and configurations.

The Others

There was another handful of products I tried that didn’t give me what I needed, or were old and outdated. These products may help in some areas, but for testing locally, or testing on the latest devices, they didn’t step up to the plate. Some also required money up front or further work just to try them out, which I wasn’t interested in during my research period.

  • iPadian
    • Requires $20 to download.
  • Air iPhone
    • Old and outdated. Not even worth the time.
  • Xamarin Testflight
    • Requires in-depth knowledge of Visual Studio and runs locally.
    • Old and outdated, does not work like I needed it to work.
  • Safari browser
    • Doesn’t work great on Windows and doesn’t do what I needed.
  • MobiOne
    • More of an App builder and tester, not for testing our sites.
  • Smartface
    • I just could not get it to install, and then was contacted by their sales team a bunch.
  • Sigos AppExperience
    • Free trial requires meeting with a sales rep. I didn’t do this, but the app looks cool.

The Winner

If you find yourself needing this type of testing software, I recommend BrowserStack. It costs some money, but gives you a lot of really great features. I was even able to find a handful of bugs during my trial period, which made the product’s value clear before looking into it further.

Mobile Testing Limitations on a Desktop Computer

When developing and testing mobile websites, there are a handful of things to keep in mind: things that won’t be testable from a desktop computer. Although responsive design and basic functionality are testable, there is a longer list of things that must be tested another way.

I have been doing some research lately into mobile simulators and emulators, so that mobile testing can be accomplished without purchasing every device to keep in house. That being said, let’s get into some of the testing limitations for mobile websites.

Design Limitations

There are a handful of things that cannot be tested by resizing your screen, even if using responsive design or a mobile website. Some of these might be intended, but most just need to be considered when developing or testing.

Hover effect

The hover effect is often used when designing desktop applications or websites, yet is almost completely unavailable when designing a mobile app or website. Keep this in mind and do not allow crucial information to be hidden behind a hover effect.

A couple of examples of a hover effect would be a button or link, as well as popups or tooltips. Images also sometimes have hover effects, such as zoom features or informational text about the image. Be sure to test alternative options for these things if they contain information that the user needs.

Hidden pages

Sometimes there is a need for pages to be hidden in a mobile view. This can be because the functionality is too difficult or time consuming to develop for mobile, or because it’s not an on-the-go page that will be visited. Always be aware of these pages, however, and whether there is a difference between hiding the page from the user’s view and the page being inaccessible to the user. If the latter, be sure to test entering the URL directly, and ensure the page doesn’t load or that some type of message displays.

I’ve been asked why we should worry about users entering URLs directly, since this requires them to know extensions beyond the basic .com. I usually give the basic answers: an email contains the link, or people may have favorites set up and shared between devices (more prevalent with Chrome and Android devices). But really, the reason is more about business requirements and expectations, or simply going above and beyond our responsibility as testers.

Business decided features

Similar to the above approach, sometimes individual features should be hidden from the mobile user, due to restrictions on functionality, security reasons, a product manager’s decision, and so on. This functionality is usually pretty easy to test responsively by resizing your screen, and the only details to note are the breakpoints: at what pixel widths should these features be hidden?
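To make the breakpoint question concrete, the expected show/hide behavior can be encoded as a simple predicate and checked at the widths you test. The 768px value below is a hypothetical example breakpoint, not one taken from any real product:

```python
MOBILE_BREAKPOINT_PX = 768  # hypothetical example breakpoint, not a standard

def feature_visible(viewport_width_px, breakpoint_px=MOBILE_BREAKPOINT_PX):
    """A business-restricted feature shows only at or above the breakpoint."""
    return viewport_width_px >= breakpoint_px

# Checks you might run while resizing the browser window:
assert feature_visible(1024)      # desktop width: feature shown
assert not feature_visible(375)   # phone width: feature hidden
```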

However, some companies like to design their mobile websites with device detection followed by rerouting to a mobile site. If this is the case, you might be able to test the mobile site on a desktop computer by navigating to its URL directly, but be sure to test the rerouting itself on a device, as that cannot be tested otherwise.
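As a rough illustration of that detection step, the sketch below routes requests based on User-Agent substrings. The substrings, the `m.example.com` host, and the function name are all hypothetical; a real site should lean on a maintained device-detection library:

```python
# Naive server-side device detection sketch. The UA substrings and the
# m.example.com mobile host are illustrative assumptions only.
MOBILE_UA_HINTS = ("iphone", "ipad", "android", "mobile")

def mobile_redirect(user_agent, path, mobile_host="m.example.com"):
    """Return the mobile URL to reroute to, or None to stay on the desktop site."""
    ua = user_agent.lower()
    if any(hint in ua for hint in MOBILE_UA_HINTS):
        return "https://{}{}".format(mobile_host, path)
    return None
```

Whatever the detection looks like, the point stands: the reroute itself still needs to be exercised from a real device.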

Just be sure that if there are features that are not supposed to show on a mobile device, that they don’t.

Pinch zooming

Mobile pinch zooming can be partially tested on a desktop computer, simply by zooming in and out of the page in the browser. However, testing this way may not be sufficient, as this is not how mobile devices render their screens on a pinch zoom.

Think of pinch zoom like the Microsoft Magnifier tool: it simply magnifies the screen instead of reflowing the content. So when testing zoom, be sure your site doesn’t realign objects, or if it does, that it’s purposeful.

Fingertip size

An important thing to test is how objects appear on a mobile device, and whether they are too small to use. This is especially important for buttons, or anything that requires a tap, as the standard fingertip is a quarter-inch or more across. No need to cause frustration from fat-fingering; just be sure the buttons are big enough to tap.
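As a back-of-the-napkin check, the quarter-inch figure can be converted into CSS pixels from a screen’s pixels-per-inch and device pixel ratio. The 326 ppi / 2x numbers below are just an iPhone-like example, not a universal rule:

```python
def min_tap_target_css_px(finger_inches=0.25, ppi=326, device_pixel_ratio=2):
    """CSS pixels covered by a quarter-inch fingertip on a given screen."""
    physical_px = finger_inches * ppi        # 0.25 in * 326 ppi = 81.5 physical px
    return physical_px / device_pixel_ratio  # /2 on a 2x screen = 40.75 CSS px

print(min_tap_target_css_px())  # -> 40.75, close to the common ~44px guideline
```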

Device Limitations

There are also some things that are much more difficult to test without using an actual physical device. Though some might be somewhat testable with an emulator, there is nothing like testing on the device itself.

Slide outs or dragging

A lot of sites have side navs or other features that are accessible by sliding out from the side, or by dragging objects on the screen. Trello, for example, does a great job with drag and drop on its cards, where dragging toward the edge causes the page to scroll. Be sure to test this with respect to the device’s ownership of certain dragging motions, such as dragging down from the top or up from the bottom showing the phone’s menus, or, on an iPhone, the swipe in from the left edge that often triggers the system back gesture.

Excessive battery use

Something that is nearly impossible to test on a desktop computer is how much battery your app or website will use. Depending on how your site is used, this may not matter, but don’t discount what a battery hog a site can be, especially when doing things like reporting or building documents.

Interruptions

Interruptions happen all the time when working, and it is no different when using your mobile device. Some interruptions aren’t as bad, such as a text that can be ignored. But other notifications take over the whole screen, like a phone call, which, whether accepted or ignored, should return you to your site. However, based on the security of the site you’re testing, you may consider any interruption a reason to log the user out. Always consider interruptions when testing, such as locking the device, answering a call or text, or merely switching applications to check a date on your calendar.

Network traffic changes

It’s happened to all of us: we are working on something over WiFi, then we walk outside with our phone and it switches from WiFi to LTE. How does the device handle this switch, and how do we even test it? If this is a concern for your site or app, then be sure to walk around while testing. Go to the elevator, the concrete stairwell, or even for a walk outside. Be sure your site handles the switch as expected and no errors are thrown on network changes.

There are a handful of other limitations to testing mobile sites on a desktop computer, so be sure to think through these when testing, and go above and beyond the testing effort next time. It’s sure to pay off for your end users.

Determining Mobile Coverage of a Desktop Application

Over the past few weeks at Zywave, I’ve been tasked with researching and defining a mobile readiness coverage scale to attach to the miscellaneous products we develop. We have a wide range of products, and their usability on mobile devices varies between them, so we wanted to have something to easily see which products are ready for mobile and which products aren’t, and then what amount of overall functionality is offered.

The Grading Scale

I began by researching the basic key phrases I could think of in Google:

mobile coverage grading scale, product functionality available on mobile, how to rate my product’s ability on a mobile device, how to determine what features to remove from mobile, what to tell a customer who asks about my mobile software, compare mobile coverage to desktop, etc.

Unfortunately, I wasn’t able to pinpoint a widely used list or set of guidelines from the mobile development community. Perhaps this was because I couldn’t figure out the best Google phrase, or because I don’t know the right websites to find such a list. I’m hoping those aren’t the reasons, and I can just blame the fact that one doesn’t really exist. However, I did learn quite a bit about the limitations of mobile testing on a desktop computer. More on that later.

Since I was overall unsuccessful in my endeavors, I set out to use the little bits I found and create a list that would work for Zywave’s products. I found a few different ideas for informing customers how much of your application’s functionality is available on mobile: a percentage-based system, a yes/no for mobile availability, a number scale (which seemed somewhat like a percentage), a list of features available or missing, and a grading scale. I chose to try out the grading scale for our use.

This is what I came up with. It is continuously under review, so a handful of things could change over time, or we might scrap it altogether and start over with a different idea (the joys of agile).

Grade A

Application is mobile ready, approved by business for use, and complete. This signifies that all or nearly all application functionality that is available on a desktop browser is also available on a mobile device (perhaps with a small amount of business-decided features removed from mobile). This also signifies that the features available on a mobile device are without bugs, and a mobile device can be used by the consumer as a desktop alternative, when preferred. This grade also refers to active and immediate development support and a high level of external advertising for use on mobile devices.

Grade B

Application is mobile ready, approved by business for use, and complete. However, this signifies that about half of the product’s feature set is available on mobile, and the mobile device can only be used for those features. This typically reflects a business decision to only release a subset of features, so the consumer can utilize them on the go, but the preferred method of product use is via a desktop browser. This might also refer to a product with more than half of its functionality available to the user on a mobile device, but with a large number of known compatibility issues across mobile platforms. This grade also refers to a fair amount of mobile development support, and some external advertising for use on mobile devices.

Grade C

Application is mobile ready and approved by business for use, however, a very limited subset (about 25% or less) of functionality is available to the user, regardless of business decision or code limitations. This could also refer to a product in active mobile development for production, attempting to reach a better grade, but testing is incomplete or there are many known compatibility issues. This grade also refers to a low amount of mobile development support, and little external advertising for use on mobile devices.

Grade D

Application could be mobile ready, but business decisions are made to ignore this product. Or, product is available on mobile and can be logged into, but no functionality is available to the user or functionality works incorrectly. This grade also refers to no mobile development support, and no external advertising for use on mobile devices.

Grade F

Application cannot and will not be mobile ready due to business decisions, code limitations, or time constraints. This product has no mobile functionality, it cannot or should not be logged into, and no mobile development support exists for this product. There should be no mention of this product’s functionality for mobile use.

Why The Grading Scale

After spending some time thinking about which option would work best for Zywave, I landed on the grading scale because it seemed to categorize our products into groups that could easily be defined by a paragraph or two. However, it’s possible this could group two products into the same grade that shouldn’t belong together, or we might need to evaluate spacing the grades out a bit more so that D and F don’t feel too close together.

I thought about the simple yes/no option, but this didn’t account for the product’s functionality coverage, nor did it inform the end user of anything beyond: “Yes, you can access our product from a mobile device.” Should I be able to do anything? Should I expect bugs? How about using my favorite feature in the product, can I do that?

The percentage-based system would have worked great to define the product’s functionality coverage, but it could easily have become outdated with the addition of one new feature set. Additionally, I didn’t want to assume I knew 100% about every product at Zywave and all of its functionality, then look into how much of that is available on mobile, and then do the simple calculation: (mobile features ÷ total features) × 100.


I also found that the percentage could sit at 50% indefinitely, even though your product is releasing more and more mobile functionality. Here’s an example:

Say your desktop product does four things total, and two of those are available on a mobile device (the product is older, and there isn’t time or support for the two missing features). Business decides to release two more features, but only one of them will be available on mobile; the other is strategically decided not to be covered. Technically you still have 50% coverage, even though this could promote your product to the next letter grade.
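The stagnant-percentage example above works out like this (toy numbers, not any real product’s):

```python
def mobile_coverage_pct(mobile_features, total_features):
    """Simple percentage-based coverage: mobile features over total features."""
    return 100 * mobile_features / total_features

before = mobile_coverage_pct(2, 4)  # 2 of 4 features on mobile
after = mobile_coverage_pct(3, 6)   # two features added, only one on mobile
assert before == after == 50.0      # the percentage never moves
```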

The number system could be used in two different ways. It could nearly mimic the percentage-based system, perhaps with some rounding for feature expansion, but at some point it’s still inaccurate; it just doesn’t require quite as much updating or tracking. However, it could also be used like my grading scale, in that a higher number (or lower, if you’re into golfing) represents a paragraph of defined coverage, availability, business plans, and more. This does allow more options (1-10) and therefore better distinctions between numbers. The problem I encountered was that the numbers naturally lock your brain into a percentage-like system, and I wanted to steer away from that.

My overall dilemma, however, is how to recognize when a product has moved from one grade to another. Perhaps a big list of features was just released to production, and the business has decided not to build and support mobile coverage for it (which I actually have built into Grade A, but for argument’s sake, we’ll pretend the grade is going down). I was previously at a Grade A, but now that puts me down to a Grade B. Is it that easy? Perhaps it is as easy as updating our spreadsheet of products and their mobile readiness. But if not, maybe that would be an opportunity to redefine our grades.

All Encompassing

How do you develop one list that can work for all your applications? We develop a vast number of products at Zywave, with entirely different capabilities on mobile devices. Writing one list that defines them all can be tough, but I attempted to do so. Maybe I should have just written a paragraph about each product individually and been done with it, but where’s the fun in that?

Does anyone have experience with this? Is there already a list out there that does what I’m looking for? Comment below with your thoughts, suggestions, personal recommendations, or anything!