About
SauceCon brings together the global community of Sauce Labs users and automated testing experts. Join us next March 1-2 in San Francisco, where teams from around the world will come together to learn from each other and level up their automated testing and continuous delivery skills.
Check back for updates!
Senior Automation Engineer, Net-A-Porter
Slow, Flaky, and Legacy Tests? FTFY – Our New Testing Strategy at Net-A-Porter
As part of a recent replatforming exercise, Net-A-Porter has worked hard not only to refresh their technology, but to create a great testing culture. As a result, they have come a long way from “throwing tests over the wall.” Testing concerns are now part of their NFRs and technical decisions. Developers are responsible for unit and functional tests, working alongside test specialists who are part of every delivery team for guidance.
In this session, Adela Mosincat and James Collins will walk attendees through how Sauce Labs has helped in this process. They will review how the “test-runner” was born to facilitate the consistent and seamless running of their tests on Sauce Labs across teams. The test-runner is a Docker image whose purpose is to handle the running of the tests both locally and in their CI pipeline. It parallelizes the tests as much as possible, waits intelligently for VMs, retries flaky tests, supports test tagging and quarantining, and handles reporting to both Sauce Labs and Jenkins.
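The retry behavior described above can be sketched generically. The following is a hypothetical illustration of the pattern (not Net-A-Porter's actual test-runner code), assuming a test is simply a callable that raises AssertionError on failure:

```python
import time

def run_with_retries(test_fn, max_attempts=3, delay_seconds=0):
    """Run a test function, retrying on failure up to max_attempts times.

    Returns (passed, attempts) so a reporter can flag tests that only
    passed after a retry as flaky, rather than silently hiding them.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            test_fn()
            return True, attempt
        except AssertionError:
            if attempt == max_attempts:
                return False, attempt
            time.sleep(delay_seconds)

# Demo: a test that fails once, then passes (simulating flakiness).
calls = {"count": 0}
def flaky_test():
    calls["count"] += 1
    if calls["count"] < 2:
        raise AssertionError("transient failure")

passed, attempts = run_with_retries(flaky_test)
```

Reporting the attempt count, rather than only pass/fail, is what makes quarantining possible: a test that repeatedly needs retries can be tagged and pulled out of the gating suite.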
In addition to the test-runner, Adela and Jim will review how Net-A-Porter improved their dev/test culture and CI pipeline, and share what they learned along the way.
Adela is a Senior Automation Engineer at Net-A-Porter, working on developing automation frameworks and advising several teams on automation testing strategies. She is a quality advocate, promoting quality from the beginning of the life cycle. When not developing tests, she enjoys hiking, skiing and DIY projects.
VP, Engineering, Life360
How Life360 Went From a 6 Week to 2 Week Mobile Release Cycle With Automation
Changing a development process is hard. Everyone wants to wait to ship products until they feel just right. But as counterintuitive as it sounds, if you release faster, your quality goes up! In this presentation, Amol Kher will describe the “agile” process that Life360 used in 2014, when they shipped every 5-6 weeks. During this time, the mobile app was slow and crashed often, and they released when they felt things were good enough.
They knew they had to get better, but how did they chart a course to a fast release process? Slowly and carefully, of course. They moved gradually, over nine months, to a fixed-release-date, flexible-scope approach and were able to release every two weeks consistently. Users loved getting frequent updates and engineers loved not having frequent fire drills. How did they do it?
1. Trunk Based Development
2. Quarterly metrics on quality
3. Feature flagging and experiment flags
4. Investment in unit testing
5. Investment in automation (Sauce Labs)
6. Using alpha and beta channels
7. Champions for monitoring quality
During the session, Amol will discuss their progress and show attendees how this all played out in charts.
Amol Kher leads a team of mobile and platform engineers at Life360, a company that builds software for families. Early on in his career he worked at Microsoft, Google and Netflix. Amol is passionate about building high quality products and building high performance teams. Outside of work, he enjoys reading, chess and working with his child on Lego Robotics or fencing.
Senior Automation Engineer, Twitter
Which Tests Should We Automate?
More and more teams are coming to the realization that automating every single test may not be the best approach. However, it’s often difficult to determine which tests should be automated and which are not worth the effort.
When asked “which tests should we automate?”, Angie Jones’ answer is always “it depends.” Several factors should be considered when deciding on which tests to automate and many times that decision is contextual.
Join in on this session, where Angie will explore features and their associated tests, then discuss whether each test should be automated, considering the factors and context provided.
Attendees to this session will take away:
- Identification of the key factors to consider when deciding which tests to automate
- How to gather the data needed to make these decisions
- A formula that can be applied to any test to determine if it should be automated or not
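As a rough illustration of what such a decision formula might look like, here is a hypothetical weighted-scoring sketch. The factors, weights, and threshold below are assumptions for the example, not Angie Jones' actual formula:

```python
def automation_score(frequency, risk, manual_effort, automation_cost, stability):
    """Score a test's automation value from contextual factors.

    Each input is rated 1-5. Frequency of execution and business risk
    are weighted double; the cost to automate subtracts from the score.
    These weights are illustrative assumptions, not a published formula.
    """
    value = (2 * frequency) + (2 * risk) + manual_effort + stability
    return value - automation_cost

def should_automate(score, threshold=10):
    """Automate when the score clears a team-chosen threshold."""
    return score >= threshold

# A stable regression check run every build: strong automation candidate.
regression = automation_score(frequency=5, risk=4, manual_effort=4,
                              automation_cost=2, stability=5)

# A rarely-run, low-risk check on a volatile UI: probably not worth it.
one_off = automation_score(frequency=1, risk=1, manual_effort=1,
                           automation_cost=5, stability=1)
```

The point of any such formula is less the arithmetic than forcing the contextual conversation: which factors matter in your product, and how heavily.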
Angie Jones is a Senior Software Engineer in Test at Twitter who has developed automation strategies and frameworks for countless software products. As a Master Inventor, she is known for her innovative and out-of-the-box thinking style, which has resulted in more than 20 patented inventions in the US and China. Angie is also an adjunct college professor who teaches Java programming and is a strong advocate for diversity in technology. She volunteers with organizations that champion this cause, such as TechGirlz and Black Girls Code.
VP, Expert Engineer, Global Technology, JPMorgan Chase
The Tale of The Sauce and The Superhero: Engineering a Secured Enterprise-scale Test Automation Platform
In this talk, Ashish will show you how to build a secured, enterprise-scale test automation platform using Sauce Labs. When adopting the public cloud, security poses significant challenges to the way you engineer your infrastructure and solutions. Come learn how to secure your infrastructure and applications, along with best practices for adopting the public cloud in your organization. He calls it extreme engineering, and yes, it takes you from being a human to a superhero!
Ashish (aka Mr. Doomsbuster) is an Expert Engineer at JPMorgan Chase. It is his job to make 28,000 developers more productive. He is the architect for the firm’s first enterprise-scale test automation platform, Nautilus. Ashish co-owns and runs the firm’s internal Open Source initiative. He is passionate about the web, Linux, dev tools and developer productivity. He loves programming and wants to build his own version of Jarvis, which he calls Bageera. He is a superhero movie fanatic and loves writing with fountain pens. He dreams of ending his purpose on earth by taking a spaceflight to Mars (probably in his self-created Iron Man-like suit)!
Automation Architect, Scripps Networks Interactive
End to End UI Testing: Stop The Madness!
When it comes to automating web-based user interface tests with Selenium, reliability is a common challenge. Unreliable tests lead to a loss of trust with developers, which leads to tests becoming ignored and eventually disregarded. And high reliability is a must when the UI tests are an integral part of the deployment pipeline, determining if a build can automatically be promoted down the pipeline to a higher environment.
There are many factors that impact the reliability of UI tests, and most of them get a fair amount of coverage. But the side effects of performing end-to-end tests, namely the problems of test scope and test dependencies, are often overlooked, mysterious, and hard to understand. How many external systems or services does your test touch that are not part of the build you are testing? Does your test suite fail when there are glitches in those systems? And how do you remove these external dependencies and still get full coverage for your build?
It is possible, and in this session Brian Saylor will delve into the means to do just that, using several methods and techniques his team has used successfully with complex websites such as HGTV.com and FoodNetwork.com. Attendees will learn techniques for removing some of the less obvious issues contributing to the occasional mysterious test failure in a complex web application, and will also get a glimpse of the issues that come with performing automated tests on such an application.
During his software development career, Brian Saylor has played many roles including senior software engineer, software development manager, and subject matter expert for web technologies and content delivery networks among other roles with both large companies and small startups. He is recognized for his ability to solve complex technical challenges in a practical manner. Most recently he is the automation architect for Scripps Networks Interactive where he oversees plans and strategies for test, release, environment, and data integrity automation for the Scripps Networks family of websites. A few of the websites he works with include HGTV.com, FoodNetwork.com, and TravelChannel.com.
Product Quality Architect, Blackboard
Transform Culture Using DevOps Principles
At the heart of DevOps is the idea that organizations break down silos and teams work together to innovate faster, reducing the length of feedback loops and delivering value faster. In this presentation, Ashley will describe how Blackboard is using DevOps principles—collaborative practices, iterative improvements, incremental testing, and more—to transform their development culture so everyone owns quality.
Join Ashley as she lays the groundwork for iterative and continuous improvement through a defined mission and specific goals. She also explores how cross-team collaboration broke down silos and helped align team members with the necessary skills to meet their quality goals. One key to their success was recognizing that they cannot test everything—even if they want to. Instead of a huge, unmanageable test suite, they implemented an incremental testing approach which teams can own and successfully maintain. Ashley shares examples of how to implement a continuous delivery pipeline, illustrating the reduced feedback loops that led to better, faster software delivery. Ashley reports, “Now that we’ve seen the way of DevOps, we don’t want to go back.”
Ashley is a Product Quality Architect at Blackboard, Inc., a leading provider of educational technology, where she helps establish and drive testing practices throughout the organization. She’s an international speaker who has shared her experiences at industry events including Selenium Conference, Software Test Professionals Conference, and soon at TISQA, SauceCon and Better Software Conference/DevOps West. She also enjoys sharing her experiences through writing as a guest blogger for Sauce Labs. A proponent of open source, Ashley believes in giving back to the software community and serves as a member of the Selenium Project’s Steering Committee and co-chair of the Selenium Conference, with a focus and passion for diversity and inclusion throughout the industry.
CEO, Sauce Labs
Welcome and Introductory Remarks
Charles Ramsey has been the Chief Executive Officer of Sauce Labs Inc. since April 2015 and served as its Chief Revenue Officer from February 2015 to April 2015. Charles has 25 years of industry experience. He was a Venture Partner at JMI Equity. Prior to joining JMI Equity in 2005, Charles held a number of roles at Quest Software, Inc., including Vice President of Marketing and Sales. He served as Vice President of Sales at Computer Intelligence, and was employed in sales at IBM. Charles has served as a Director of ServiceNow, Inc. (formerly Service-now.com, Inc.), and now serves as a Director of Configuresoft, Inc. Charles has a Bachelor of Arts from the University of California, San Diego and a Master of International Management from the American Graduate School of International Management.
Senior QA Automation Consultant, NTT Data
The Waiting Game – How To Design Reliable Selenium Tests
The biggest issue when it comes to Selenium tests is attempting to interact with the page elements when they are not ready. Trying to click an element before it is available, trying to select a value from a dropdown when it is not yet populated, checking an element attribute before it is available – these are some of the most common reasons for failures when it comes to Selenium tests. Such issues can be avoided by using the WebDriverWait Java class to redesign the way you interact with your page. In this talk, Corina Pip will show you how to get from click to click and wait, from select a dropdown value to wait and select, and so on. She will show you how to replace your standard Selenium commands, like click, with customized waits that you can write to adapt to your test environment conditions. Corina will show you how to rethink your page interactions from a waiting perspective. And, as an added bonus, how you can replace some of the assertions you write with corresponding wait methods.
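Under the hood, WebDriverWait is a polling loop. A minimal sketch of that pattern, shown in Python rather than Java and without a real browser so it stays self-contained, might look like this:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors the core loop behind Selenium's WebDriverWait.until:
    repeatedly evaluate a condition, sleeping between polls, and raise
    once the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll_interval)

# Against a real page, a "click when clickable" helper would wrap this, e.g.:
#   wait_until(lambda: element.is_displayed() and element.is_enabled())
#   element.click()

# Self-contained demo: the condition becomes true on the third poll.
state = {"polls": 0}
def ready():
    state["polls"] += 1
    return state["polls"] >= 3

value = wait_until(ready, timeout=2.0, poll_interval=0.01)
```

The "click and wait" redesign Corina describes amounts to routing every page interaction through a wait like this instead of calling the raw Selenium command directly.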
Corina Pip is a senior test automation consultant focusing on testing by means of Java, Selenium, TestNG, Spring, Maven, and other cool frameworks and tools. Her previous endeavours include working on navigation devices and in the online gaming industry. She loves automation and always tries to learn something new, improving her automation skills and spreading the knowledge to her peers. Apart from work, Corina is a testing blogger, a traveler, an amateur photographer and a GitHub contributor. She also tweets at @imalittletester.
Senior Automation Engineer, Vanguard
The Power of Polymorphism: Testing Android and iOS From a Single Test Suite
Maintaining end-to-end test suites is always a challenge. Mobile typically multiplies that challenge by forcing companies to maintain multiple suites across different mobile OSs. However, we can utilize Object-Oriented design patterns to enable polymorphism and allow us to run the same test cases on multiple different platforms. In this presentation, Craig Schwarzwald will show you how. The strategy is as follows:
- Create interface classes for any/all Appium Page Objects (borrowing a Selenium term; in this case each Page Object represents a mobile screen).
- Create common implementations of those interfaces for any/all locators that are the same between the different mobile platforms.
- Create platform specific implementations of those interfaces for ONLY the locators that are different across platforms.
- Create tests that reference the interfaces (not the implementations).
By following the above rules, we can generate a single test suite that can easily run all our tests across multiple different mobile OS platforms! Come prepared to view code examples of how we can put these Object-Oriented design patterns in place so you can go back to your organization armed with the specifics of how to implement these changes at your company.
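The four rules above can be sketched roughly as follows. The class names and locators are hypothetical, and Python stands in for whatever language your suite uses:

```python
from abc import ABC, abstractmethod

class LoginPage(ABC):
    """Interface for the login screen; tests depend only on this.

    Locators shared across platforms live in the common base class.
    """
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")

    @abstractmethod
    def submit_locator(self):
        """Only the locators that differ are platform-specific."""

class AndroidLoginPage(LoginPage):
    def submit_locator(self):
        return ("id", "com.example:id/submit")  # hypothetical Android id

class IosLoginPage(LoginPage):
    def submit_locator(self):
        return ("accessibility id", "Submit")   # hypothetical iOS locator

def login_test(page: LoginPage):
    """A single test body that runs unchanged on either platform.

    A real test would drive Appium with these locators; here we just
    return them to show that the test code never names a platform.
    """
    strategy, value = page.submit_locator()
    return strategy, value, page.USERNAME

android = login_test(AndroidLoginPage())
ios = login_test(IosLoginPage())
```

Because `login_test` references only the `LoginPage` interface, the suite picks a platform by choosing which concrete page class to construct, typically from a single configuration flag.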
Craig Schwarzwald has more than a decade of professional scripting and automation experience. For the past seven years he has focused on creating and maintaining the Selenium framework used by hundreds of testers, developers, and automation engineers at a large financial organization. Widely regarded as his company’s Selenium expert, Craig holds weekly “office hours” sessions to supply solutions to teams’ most difficult Selenium-based challenges. While on a mobile development team, Craig switched gears to Appium, where he was able to take the same Selenium skills and develop a mobile framework capable of running the same test cases across both Android and iOS. In his spare time, Craig enjoys bowling, playing softball, and having passionate discussions about Selenium, test automation, and any other Shift Left related topics. Follow Craig on Twitter @AutomationCraig.
Creator of Appium
Appium: How to Write a Symphony
In the last couple of years, Appium has added a few additional platforms to its tool belt and has become a tool of choice outside the iOS/Android “mobile apps” domain in which it rose to popularity. When it started, Appium was a tool, but it has now become a platform through which many people contribute implementations to automate many different things. Appium is becoming the language by which devices are scripted. This presents many exciting new challenges for Appium and automation developers in the years ahead. This talk will give a brief overview of the universe of Appium-supported platforms and devices, what the Appium development community has learned from extending Appium to support this growth, and open questions about how Appium and automation developers might evolve to even better accommodate many more platforms and devices.
Dan Cuellar is the creator of the open source mobile automation framework Appium, and an Engineering Manager at Apple. Prior to joining Apple, Dan served as Principal Development Manager at FOODit in London. Previously, he headed the test organization at Shazam in London and Zoosk in San Francisco, and worked as a software engineer on Microsoft Outlook for Mac and other products in the Microsoft Office suite. He is an advocate of open source technologies and technical software testing. He earned a Bachelor’s degree in Computer Science, with a minor in Music Technology, from the world-renowned School of Computer Science at Carnegie Mellon University in Pittsburgh.
QA Manager, Thomas Cook
Mobile Testing of Web Apps: Emulators vs. Real Devices
Mobile testing is a challenge. It raises hundreds of questions straight off the bat. How do you minimize the long list of business-critical test configurations with no risk for quality? Where to test new functionality and where to run regression? How often to revisit configuration priorities? Which tools to use for automation? And while you’re racking your brain to answer those, you start realizing one more crucial aspect which changes your world forever. It’s provisioning pain. Purchasing, maintenance, decommissioning, OS upgrades, distribution between team members (which may be also distributed). Even trivial charging often becomes a real pain point. Yes, it’s not just yet another browser to download and install.
In this session, attendees will find out how to build an effective mobile testing strategy for a web app, taking into account business priorities, technical limitations, and project and process peculiarities. Darya Alymova will cover:
- Mobile testing of Web App: is it only about GUI?
- Emulation: to be or not to be dilemma
- Real devices: physical or virtual?
- Our cloud experience
- Our recipe for harmonious mobile testing
Darya Alymova has more than eight years of experience in QA. During this time she’s been actively gathering both hands-on and managerial experience within various domains and dealt with web, mobile web and native applications, hardware, firmware, API and Mobile SDK. Over the last four years, Darya’s professional challenges have included building test strategies, definition and implementation of test processes, both manual and automation, for different solutions, most of which included mobile.
Selenium Ninja, Author of Elemental Selenium
The Death of Flaky Tests
Dave Haeffner is here to bury flaky tests, not to praise them.
For far too long we have allowed flaky tests to live in our world, creeping in and eroding the trust in our Selenium tests. They have wrought unreliable test results, noise, and ultimately an infinite amount of time wasted throughout our industry. Some blame Selenium for this. Others point a finger at the browser vendors. Some call out either instabilities in their Application Under Test or poor design practices in their test code.
Flaky Tests are often talked about in whispers, and ultimately accepted as a part of reality that we cannot control.
Dave says it’s time for a revolution! Join him as he shows you how to take the power back from Flaky Tests and end their reign once and for all.
Dave Haeffner is the writer of Elemental Selenium — a free, once weekly Selenium tip newsletter that’s read by thousands of testing professionals. He’s also the creator and maintainer of the-internet (an open-source web app that’s perfect for writing automated tests against), and author of The Selenium Guidebook. He’s helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto (to name just a few). He’s also an active member of the Selenium project and has spoken at numerous conferences and meetups around the world about how to use Selenium successfully.
Senior Engineering Manager, Uber
Optimizing the Software Development Lifecycle: Key Principles for Technology and Staffing
In this session, Denali will discuss time as a key principle for software development. Beyond servers, virtual machines and containers, she will cover how to push the boundary of the optimized continuous integration system and provide a comparison between unikernels and function-as-a-platform components in the software development lifecycle. Denali will discuss what developer time optimization means for productivity, how to compound this as an investment in people, as well as training and teaching as a first-class business competency and profit stream.
Denali Lumma is a senior technology and people focused leader with an outstanding track record of engineering and team excellence at startups, midsize and global companies. Currently, Denali is at Uber working across security, engineering and people analytics. Previously she worked at Salesforce.com, in charge of continuous integration for desktop, native mobile, and mobile web applications. In her spare time, Denali is the non-executive director for Savage Jazz Dance Company, a local non-profit distinguished by its disciplined dancers and instructors and its celebration of jazz music’s wide range of expression.
Engineering Manager – QA and R&E, Criteo
Growing Up the Right Way – An Example of How to Scale When You Aren’t a Start-up Anymore
The current environment we live in demands the ability to scale at the drop of a hat. In this session, we will cover the questions to ask when you are faced with the scaling vs. cost dilemma, things like “I have a limited budget, where should I spend it?”, “How do I take advantage of my team’s diverse, yet non-correlating, coding experience?”, and “Where should I start to get the biggest bang for the buck?” We’ll cover the options and tools you have to make sure you’re scaling without forsaking quality, and why it is OK to fail (as long as you do it fast). Derek will share real-world examples of his company being forced to grow up, and the pains and glorious successes that went right along with it. Plus, there will be treats, and who doesn’t like treats?
Derek has an MBA with an emphasis in e-commerce from DePaul University. He has been in the QA field for the past 20 years in every role imaginable, from QA Analyst to Test Automation Consultant, and has led QA teams for the past 15 years. Derek has worked for Fortune 500 companies and start-ups, and has built and managed teams of 1-45 testers. When he’s not at work, he enjoys running Ironman triathlons (not training for them) and spending time with his amazing wife and two daughters.
Diego Fernando Molina
Software Engineer in Test, Zalando SE
The Holy Trinity of UI Testing
The first step before testing is defining what we want to test. This may sound trivial, but in reality it is often not done properly. We tend to overlook the obvious, and we start testing without knowing what we want to accomplish. Do we want to validate the user behavior? Do we need to check that the page design is responsive on different devices? Knowing what is important and what needs to be validated helps us enormously to have a clear purpose.
When we know the purpose of our test, we can start planning, coding, executing and improving our tests. But overall, we will know what approach we can use to develop the test. Functional, layout and visual testing are the three pillars of the UI testing trinity. These are three approaches we can use to develop focused tests, tests that are asserting a specific aspect of a web application.
But how can we identify which approach to use? When should we combine them? This session will help attendees define what they want to test and which approach to use when developing the test. It will go deep into scenarios and code examples that show how to create tests with strong assertions and a clear purpose, tests that give value to the team. It will also discuss scenarios where a functional test is not enough, or where a visual test is better than a layout test. This talk’s main goal is to offer a different perspective on testing a web application through the UI testing trinity.
Diego Fernando Molina is a Software Engineer in Test at Zalando, specialized in testing tool development and in advising feature teams on how to test better. He is one of the maintainers of the official docker-selenium project (https://goo.gl/F4k5Lz), and he is the creator of Zalenium, a dynamic Selenium Grid. He spends most of his time working with different teams and finding ways to do UI testing in a simpler way. You can often find him in the IRC/Slack channel for Selenium.
Mobile Visual Testing: Uphill Battle Of Mobile Visual Regression
There are many types of testing companies need to perform in order to have confidence in their product: security testing, integration testing, system testing, performance testing, and more. Often, mobile developers focus on ensuring that the main end-to-end flows of their applications work by relying on frameworks like Appium or Robotium. However, in the mobile domain, visual testing is essential, as mobile devices differ drastically in capabilities, display dimensions and even operating systems. Visual regression testing targets specific areas of visual concern, like layouts, responsive design, graphics, and CSS. Because modern mobile applications are built as hybrid and native applications, there is no way to scale this sort of testing with manual resources; hence, visual test automation should be a crucial piece of the testing stack. In this talk, the audience will learn about major visual testing frameworks targeting both responsive web applications and native mobile applications. As part of this presentation, a few open source and paid solutions will be demoed, such as Applitools, Galen and Percy.io.
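The core idea behind all of these tools is comparing a candidate screenshot against an approved baseline. A toy sketch of that comparison follows; real tools such as Applitools, Galen and Percy.io use far more sophisticated perceptual and layout-aware diffing, so this only illustrates the principle:

```python
def visual_diff(baseline, candidate, tolerance=0):
    """Compare two screenshots, modeled as 2D grids of pixel values.

    Returns the fraction of pixels that differ by more than `tolerance`.
    A CI gate would fail the build when this ratio exceeds some budget.
    """
    if len(baseline) != len(candidate) or len(baseline[0]) != len(candidate[0]):
        return 1.0  # a dimension change counts as a full-page difference

    total = diffs = 0
    for row_a, row_b in zip(baseline, candidate):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > tolerance:
                diffs += 1
    return diffs / total

# Demo: a 2x2 "screenshot" where one of four pixels has changed.
base = [[0, 0], [0, 0]]
moved = [[0, 255], [0, 0]]
ratio = visual_diff(base, moved)
```

The `tolerance` parameter hints at why naive pixel diffing breaks down on real devices: anti-aliasing and rendering differences across display dimensions produce small per-pixel deltas that are not genuine regressions.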
Dmitry Vinnik is a senior software engineer at Salesforce and has been passionate about software quality since the very beginning of his career. He started out as a quality engineer, and was able to bring test expertise into his current software engineering role to ensure delivery of a high quality product. Dmitry is also a Scrum Master focused on making his team more efficient and productive. His background involves studying medicine and bioinformatics in addition to software and quality engineering.
Senior Architect & Evangelist, Applitools
Not Only Cars: “AI, Please Test My App”
Autonomous cars were a sci-fi dream just 10 years ago. A computer driving a car? No way. But it did happen, and is happening. And if scientists can do it for a task as complicated as driving, can they do it for automated regression testing? In this talk, Gil Tayar explores what is being done in the field today, but also speculates about the future. He’ll introduce the six levels of autonomous testing (which correspond to the levels of autonomous driving), and try to figure out what kind of help current AI techniques can bring to automated testing.
30 years of experience have not dulled the fascination Gil Tayar has with software development. From the olden days of DOS, to the contemporary world of Software Testing, Gil was, is, and always will be, a software developer. He has in the past co-founded WebCollage, survived the bubble collapse of 2000, and worked on various big cloudy projects at Wix. His current passion is figuring out how to test software, a passion which he has turned into his main job as Evangelist and Senior Architect at Applitools. He has religiously tested all his software, from the early days as a junior software developer to the current days at Applitools, where he develops tests for software that tests software, which is almost one meta layer too many for him.
Test Automation Architect, Gannett | USA Today
Achieving Continuous Integration (CI) Excellence through Test Automation
Test Automation roles continue to evolve and will be entirely different in the future. At Gannett | USA Today Network, the change has started by blurring the lines between Test Automation and DevOps daily tasks with Test Automation owning continuous integration (CI), defining CI best practices, building the CI pipeline, and being the quality gatekeeper of product releases. In this presentation, Greg Sypolt will discuss and cover:
- Setting expectations for CI
- CI ownership as a community activity, not an individual one
- Defining a continuous testing strategy
- Designing repeatable and disposable CI architecture
- Setting CI standards
- The Test Automation Engineer’s role and responsibilities
Greg Sypolt (@gregsypolt) is Test Automation Architect at Gannett | USA Today Network, a Fixate IO contributor, and co-founder of Quality Element. He is responsible for test automation solutions, test coverage (from unit to end-to-end), and continuous integration across all Gannett | USA Today Network products. In the last three years, he has helped change the testing approach from manual to automated testing across several products at Gannett | USA Today Network. To identify improvements and testing gaps, he conducted a face-to-face interview survey to understand all of the product development and deployment processes, testing strategies, tooling, and interactive in-house training programs.
Principal Software Engineer, Buzzfeed
Testing Without Assertions – Using Sauce Labs As a Real User for Continuous Analytics Testing
At BuzzFeed, analytics is extremely important as a feedback loop to the content creators. They use a client-side JS library, which is well tested, to send these analytics events; however, in early 2016 a series of data outages prompted the team to revisit their testing strategy. Testing network requests to downstream systems can be tough. There’s only so far that Selenium tests can reach, and you can often end up with tests that offer false confidence in the complete system. In this presentation, Ian Feather will describe how they approached this problem and ultimately the solution they chose: using Sauce Labs to run continuous cross-platform tests against their production systems. Ian will share a real-life case study on how BuzzFeed knows their analytics are working correctly, a general technique for testing network requests to downstream systems, and more.
Ian Feather has been working at BuzzFeed since March 2016 on front-end infrastructure, in a role that encompasses testing, automation, performance and resilience. He is particularly fond of working on problems relating to scale, both of sites and of teams. Ian’s background is in front-end web development, and he has recently stepped away from feature development to focus on the Front End Ops side. He is a big fan of continuous delivery and of creating a safe environment to push code quickly and easily. Prior to BuzzFeed, Ian held positions at Schibsted Media, Lonely Planet and Burberry.
Sr. Manager, Software, Oath (Yahoo+AOL)
Conquering The Wild West of End-to-End Automation
In this presentation, attendees will learn about the technical challenges, best practices, and the principles that Jenny Hung and her team at Yahoo and AOL followed to build a reliable and scalable test automation infrastructure across desktop, mobile app, and mobile web platforms on Sauce Labs, running end-to-end tests to detect bugs and regression.
The audience will learn tips and best practices for running desktop and mobile automation on Sauce Labs. Jenny will cover the following:
- Challenges of UI automation on desktop vs. mobile apps
- Solutions/approaches and best practices
- Design of their end-to-end test automation framework
- Code snippets and demo of desktop and mobile app automation on Sauce Labs
Jenny Hung manages the end-to-end (E2E) integration and automation team at Oath for the Yahoo Gemini ad platform. The team runs desktop and mobile automation tests on Sauce Labs. Jenny holds CS degrees from Stanford and UC Berkeley.
Creator of Selenium, Co-founder Sauce Labs, Founder Tapster Robotics
Check back for details!
Jason is a co-founder of Sauce Labs and the founder of Tapster Robotics. He started the Selenium project in 2004 at ThoughtWorks. He later joined Google to work on large-scale web testing for Gmail, Google Maps, and other teams. He left Google to co-found Sauce Labs as CTO to create a cloud-based Selenium service. In late 2013, Jason took leave from Sauce to help with the HealthCare.gov turnaround. He is also the creator of Tapster, a mobile app testing robot that’s been featured in Popular Science, Wired, TechCrunch, and the MIT Technology Review.
Director of Open Source, Sauce Labs
Drivers of Change: Appium’s Hidden Engines
Appium is primarily known as a mobile automation framework, the equivalent of Selenium for mobile. This is true. But Appium’s vision goes beyond mobile. Appium wants to take the WebDriver protocol into all areas of automation. Appium is for apps—not just mobile apps. How does this work? Like Selenium before it, Appium is organized around the concept of a ‘driver’–a bit of code that turns the WebDriver protocol into automation behaviors for a specific platform.
In this talk, Jonathan Lipps will give an overview of Appium’s various drivers and how they work. There’s no magic in Appium, but there is some sleight of code, which will be revealed! (Don’t worry, it’s all there on GitHub already.) The discussion will answer questions you may have had about Appium, such as: how does Appium actually work? Why are there differences between Appium for iOS or Appium for Android? Why do new mobile platform releases sometimes mean Appium works differently?
Finally, Jonathan will explore some of Appium’s plans for the expansion of its drivers into new territories, for mobile and beyond. Who knows, maybe you’ll have an idea for an Appium driver yourself! After this talk, you’ll be in a position to know exactly how the Appium project gives its community the tools to make their own drivers in addition to the supported ones.
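The driver concept at the heart of the talk can be illustrated with a toy sketch. This is not Appium’s actual implementation (which is on GitHub, in JavaScript); it is an assumed, simplified model of the idea: a new-session request arrives with capabilities, and the server maps `platformName`/`automationName` to the driver that turns WebDriver protocol commands into platform-specific automation.

```python
# Toy illustration (not Appium's real code): a server routes a new-session
# request to a platform driver based on the session's capabilities.
class XCUITestDriver:
    """Would translate WebDriver commands into XCUITest calls on iOS."""

class UiAutomator2Driver:
    """Would translate WebDriver commands into UiAutomator2 calls on Android."""

DRIVERS = {
    ("iOS", "XCUITest"): XCUITestDriver,
    ("Android", "UiAutomator2"): UiAutomator2Driver,
}

def driver_for(capabilities):
    """Pick the driver class matching the requested platform and engine."""
    key = (capabilities["platformName"], capabilities["automationName"])
    return DRIVERS[key]()

session_driver = driver_for({"platformName": "iOS", "automationName": "XCUITest"})
```

Because each platform’s underlying automation stack is different, each driver is different too, which is one reason new mobile OS releases can change how Appium behaves on that platform.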
Jonathan Lipps has been making things out of code as long as he can remember. Jonathan is currently the director of ecosystem and integrations at Sauce Labs, where he leads a team of open source developers to improve the web and mobile testing ecosystem. Jonathan is the architect and project lead for Appium, the open source, cross-platform mobile automation framework. He has worked as a programmer in the startup world on and off for over a decade but is also passionate about academic discussion. Jonathan holds master’s degrees in philosophy and linguistics, from Stanford and Oxford respectively. A San Franciscan, Jonathan is an avid rock climber, yogi, musician, and writer on topics he considers vital, like the relationship of technology to what it means to be human.
Director, Commercetest Ltd.
Using Mobile Analytics To Improve Both Our Testing and Our Apps
There is a sea of data available to help improve mobile app testing once we know what to look for and how to apply data that’s relevant. In this talk, Julian Harty combines practical work with cutting-edge research on ways we can improve our approach to testing by incorporating various sources of data and feedback, including reviews and mobile analytics.
Some users volunteer information: bug reports, reviews, tweets, in-app feedback, etc. We may also be able to enable in-app analytics to collect data while an app’s being used, which increases our data sources from a relatively small vocal minority to the vast majority of users.
We can use data gathered from the app, app stores, and other sources. In addition we have the opportunity to shape and guide the information collection to augment existing data, while streamlining what’s collected (and what’s not).
Julian will cover practical and logistical aspects of how to use data to improve testing tools and processes and, in turn, the apps we test.
Julian Harty has been actively involved in many aspects of testing and development of mobile apps globally since 2006. This includes roles at Google, eBay, etc., as well as contributions to open source apps (such as Kiwix for Wikipedia) and test automation, including Selenium and several test automation frameworks for mobile apps. He’s contributed to the highly successful Mobile Developer’s Guide to the Galaxy, co-authored the Mobile Analytics Playbook, and wrote perhaps the first book on test automation for mobile apps. Currently he’s studying part-time for a PhD to find ways to improve the testing and development of mobile apps using mobile analytics, in addition to the other bits and pieces.
Quality Engineer, Github
Maintaining Quality In Open Source Projects
With software testing, it’s a good idea to keep the end user in mind, but with open source software this is absolutely necessary. Open source projects rely largely on community engagement and involvement, whether that’s asking questions, reporting issues, or submitting pull requests.
Testing an open source project must account for working with the community in these ways. There must be processes in place to support community members in their efforts to contribute and to leverage the feedback they provide in order to impact the product’s quality positively.
In this session, Meaghan will share strategies to maintain high quality in open source projects. She will draw on her experience testing open source projects at GitHub and discuss the ways collaborating with the community has allowed her to test more creatively while keeping the users in mind.
Meaghan Lewis is a Quality Engineer at GitHub. She is skilled in automation for both web and mobile applications, and an advocate for embedding quality throughout software delivery practices. Meaghan has worked with companies ranging in size from 50 – 50,000 employees, and across numerous industries. She enjoys learning, applying, and sharing testing practices with her peers.
Automation Architect, Optum
Testing Humans. Machines are easy, Humans are not.
Living in a world with such incredibly advanced technology, we are becoming profoundly more efficient with the many machines we use on a daily basis. Computers, tablets, cell phones, etc. have, in many ways, enabled us to perform tasks that were simply unimaginable in the recent past. But do these advancements also come with the price of us losing our soft skills, our ability to effectively communicate with and work with others as a team? In an environment in which Mike Millgate lives by the mantra “Quality is a Team Effort”, is our ability to navigate and manipulate these machines also taking away from our abilities to communicate in person with those team members in the way that is necessary to succeed in our chosen fields?
Join Mike in understanding exactly why these soft skills are absolutely imperative in creating a successful workplace environment, and also in exploring ways we can preserve and strengthen these skills in the workplace.
Mike Millgate is an Automation Architect at Optum with more than 15 years’ experience. He is a self-proclaimed troublemaker with a penchant for testing, automating, and finding mistakes. His tagline is See. Try. Ask. Learn. Share. Mike leads by example, offering up his fun-loving leadership knowledge and troubleshooting skill set to build successful, self-sustaining teams. An energetic, driven perfectionist and DevOps extraordinaire, he promotes the mantra “Quality is a team effort.”
Senior Automation Engineer, IPC Systems
Testing Beyond the Network Boundaries with WebRTC
IPC’s Unigy 360 is a cloud-based software-as-a-service (SaaS) solution engineered to provide reliable and secure access to global financial market participants. Because it is an “anytime, anywhere, any device” application, we required a Chrome-based container (WebRTC), running as a desktop and iOS app, to serve as a “soft client” for our cloud solution.
In addition, within our Continuous Integration (CI) environment, we needed to test voice calls in different network scenarios, such as NAT/VPN, in a scaled, reliable setup. In this talk, we will show how we used Selenium to simulate browser endpoints, running scripts from Jenkins to invoke Maven/TestNG-based automation frameworks, to test these calls. Using Sauce Labs, we were able to scale our workflows to run 70+ parallel client login test cases, and in doing so identified 50% of our customer issues in our development environment across different network scenarios (NAT/open internet).
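As a rough sketch of what such a scaled run configures, each parallel client is just one more remote browser session pointed at the Sauce Labs hub. The endpoint below is the classic Sauce OnDemand URL; the capability values and naming scheme are placeholders for illustration, not IPC’s actual setup:

```python
# Hypothetical configuration for the 70+ parallel client login sessions.
# Real credentials would come from the environment, never from source code.
SAUCE_HUB = "https://{username}:{access_key}@ondemand.saucelabs.com:443/wd/hub"

def client_caps(case_id, scenario):
    """Capabilities for one simulated WebRTC browser endpoint."""
    return {
        "browserName": "chrome",
        "platform": "Windows 10",
        "name": "client-login-%03d" % case_id,  # label shown in the Sauce dashboard
        "tags": ["webrtc", scenario],           # e.g. "nat", "vpn", "open-internet"
    }

all_clients = [client_caps(i, "nat") for i in range(70)]
```

Each capabilities dict would be handed to a RemoteWebDriver session against the hub URL, with Jenkins fanning the cases out through Maven/TestNG parallel execution.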
He has 10+ years of experience in automation on networking products (routers, switches and optical devices), telecommunications (call managers) and traffic generators (Ixia, Agilent and SmartBits), and broad experience applying automation to sanity, smoke, scale, performance and conformance testing with both open source and proprietary third-party tools. He is interested in cloud technology delivered through web services, and in the evolution of browser and mobile apps that let users communicate and collaborate via WebRTC. He has continuous integration experience with Jenkins, Puppet, Maven, TestNG and Java. His current work explores how to widen automation to handle network scenarios such as NAT, firewalls, VPN and latency, and how to automate customer scenarios with WebRTC using open source test tools such as Selenium.
Principal Automation Architect, Magenic
Well, THAT’s Random – Automated Fuzzy Browser Clicking
Roughly speaking, ‘fuzzing’ is testing without an oracle; e.g., testing without knowing what a specific outcome should be. When fuzzing, we don’t necessarily know what should happen, but we have a good idea of some things that shouldn’t happen, such as 404 errors and server or application crashes. We generally apply fuzzing to produce these kinds of errors when we’re testing text boxes, but why should text boxes have all the fun? Websites created today are highly interconnected, multi-server applications that include connections to out-of-network servers that are not under our applications’ control. This situation makes it difficult to both enumerate and control all the possible combinations of paths through our system. Even if we could identify all the possible paths, most organizations would not have the time to test all of these scenarios, regardless of whether or not they apply automation to help with that testing. During this session, Paul Grizzaffi explores how expanding our automation approach by using randomization can help mitigate the risks associated with hard-to-enumerate application scenarios. By using random clicking, we can provide testers with additional information via exploring paths through the application which are not intuitive, but which are still valid. We’ll discuss why creating a random clicker doesn’t have to take a lot of effort, how this approach is rooted in the facets of High Volume Automated Testing (HiVAT), and some considerations of which to be mindful when using randomization.
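A random clicker along these lines needs surprisingly little code. The sketch below is hypothetical (a real version would enumerate candidates with something like Selenium’s `find_elements` and click the chosen one); the key detail, in keeping with HiVAT practice, is logging the seed so any failing walk can be replayed exactly:

```python
import random

def random_click_walk(find_clickables, click, steps=100, seed=None):
    """Click up to `steps` randomly chosen elements, re-enumerating the
    page's clickable elements after every click. Returns the seed and the
    path taken so a failing walk is reproducible."""
    if seed is None:
        seed = random.randrange(2**32)  # log this seed with the test run
    rng = random.Random(seed)
    path = []
    for _ in range(steps):
        candidates = find_clickables()
        if not candidates:  # dead end: nothing left to click
            break
        target = rng.choice(candidates)
        path.append(target)
        click(target)
    return seed, path
```

The oracle stays deliberately weak: after each click you assert only “no 404, no server or application crash,” never a specific expected page.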
Paul Grizzaffi is a Principal Automation Architect at Magenic. His career has focused on creating and deploying automated test strategies, frameworks, tools, and platforms. He holds a Master of Science in Computer Science and is a Certified ScrumMaster from Scrum Alliance. Paul has created automation platforms and tool frameworks based on proprietary, open source and vendor-supplied tool chains in diverse product environments (telecom, stock trading, E-commerce, and healthcare). He is an accomplished speaker who has presented at both local and national meetings and conferences. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas. Paul looks forward to sharing his experiences and expanding his automation and testing knowledge of other product environments.
Lead Committer, Selenium Project & Creator of WebDriver
Lessons From a Decade in Selenium
The first commits to Selenium were made late in 2004. The first commits of the WebDriver APIs were in 2007. The W3C WebDriver spec brewed for over six years. In this time, we’ve seen an explosion in popularity and usage. Selenium now has the backing of all the major browser vendors, and an increasingly large ecosystem of tool providers. The protocol has extended to cover not only desktop browsers, but mobile browsers, native applications, and even on to the desktop, scaling from a single user on a single machine, to massive grids and cloud providers.
What have we learnt over the years from this growth? What lessons can we take from the accumulated experience of our industry? Would we do anything differently? Are we going to do something different as we move forward? Was the W3C spec a good idea? In this keynote, I’ll try and answer these questions, and more, as we cover the history of the project.
Simon Stewart is the creator of WebDriver, the open source web application testing tool, as well as a core Selenium developer. WebDriver remains a hot topic as it is currently going through a W3C (World Wide Web Consortium) specification process, which Simon is a co-editor of.
He describes himself as “undeniably hairy”, and holds a BSc in computer science from Nottingham University.
Lead Developer in Test, Gamesys LTD
Be The Player. Test Key Customer Flows As Part Of Your CI Pipeline
Expanding your automated test coverage into the last stage of delivery can be tricky. People start to lose confidence when they see the value of their tests decrease as the environments they run on become less and less stable and tests fail intermittently. That translates into pipelines becoming red more often and, in the worst-case scenario, being switched off or bypassed in order to release.
This talk will cover how to architect lean test scenarios using component objects (an extension of the page object pattern) and related actions to drive key customer user journeys.
Tomasz always liked to break stuff. Since he was young he’s never needed any manuals to make things work, or not work… This passion changed into a desire to enhance product quality in his professional life. He sharpened his development skills and widened his software horizons while studying automation and robotics. He currently works in London at Gamesys, where he enjoys discovering and implementing new tools and solutions to support the continuous delivery pipeline with automated tests and infrastructure. He strives to iteratively improve the feedback loop and highlight risk by using quality testware. Tomasz has implemented various test strategies and architected automated test suites for mobile, web and hybrid applications.
QAT Practice Lead, Magenic
Sauce.Net – Making .Net automation awesome
Let’s face it: those of us working in the .Net stack have been playing catch-up for the last couple of years. Our testing tools, automation frameworks and DevOps integrations have been lacking. That has changed with the rise of tools like Selenium, the radical improvements made to VSTS, and premium services like Sauce Labs. We can now have first-class test automation in the .Net world. In this session, Troy Walsh will walk you through how to leverage Sauce Labs to create .Net automation. He will cover the basics, starting with how to connect to the service. Then he will dive into how to configure, categorize and annotate your tests so you get the best experience. Finally, he will pull it all together and show you how it can be used to add Continuous Testing to your CI/CD pipeline.
Troy Walsh is the national practice lead for Magenic Technologies’ quality assurance and testing division. He started his career in the DevOps space as a Release developer, creating custom build, deployment and installation solutions for Epic Systems. Following Epic, he transitioned to consulting where he developed a love for test automation. Troy is a frequent blogger (Magenic.com and CIO.com) and speaker (Twin Cities Test Automation Group, The Executive Leadership Institute, STPCon and more) with a passion for test automation and DevOps.
Sergio Neves Barros
QA Technical Architect, Gamesys LTD
Transitioning From Selenium to Appium, How Hard Can It Be?
Testers have all heard of Selenium and have used it to test web sites. It’s the de facto standard in writing automated tests and most browsers have either incorporated drivers into their builds or provided separate drivers that allow Selenium to interact with the browser. But what about Appium? It’s using the Selenium JSON wire protocol, so users should just be able to point their tests at an Appium server, right? During this talk, Sergio Neves Barros will discuss the (historical) challenges of mobile web testing, platform differences between Appium and its drivers and Selenium, some of the additional endpoints/features Appium provides, the common features between Appium and Selenium, and discuss the future of mobile testing.
A QA Technical Architect with more than 10 years of experience in the field of automated testing, Sergio Neves Barros has worked on automating technologies such as HTML5, canvas, Flash, native apps and REST APIs. Sergio has contributed to the Appium open source project to expand its capabilities for testing with Safari on physical iOS devices. Most recently, he has been focusing on performance testing (e.g. using JMeter) and security testing (ZAP proxy) to expand his arsenal of test tools/drivers.
Managing Director, Omni Sourcing
Shifting Left Using Sauce Analytics
As companies begin to Shift Left with their software development, data analytics has become a pivotal force in how business leaders make strategic and tactical decisions. With the emergence of real-time customer feedback, it is imperative to deliver applications quickly and accurately. Omni will explore the use of Sauce Analytics to enable today’s software methodologies to drive value and customer experience for companies. Attendees will take away a new way of looking at defects and metrics to help drive overall value for their organizations.
William Harrison is an innovative technology leader with more than 20 years of experience enabling IT Governance, Test Management and Automated Solutions. William has developed and implemented solutions for companies in the Banking, Financial Services, Manufacturing, Public Services, Retail and Telecom industries. As a Managing Director at Omni Sourcing, William is responsible for delivery and strategic alliances with software companies to provide solutions to companies to enable quality and software delivery.
Getting Started With Appium
Learn what Appium is and how it works, as well as the components of an Appium script, how to construct them, and how to run your scripts locally and on Sauce Labs.
Getting Started With Appium (Feb 28, 8:00a – 10:00a)
In this session you will learn what Appium is and how it works, as well as the components of an Appium script and how to construct them. You will then create and execute an Appium script in a local environment and run your test on Sauce Labs.
- Manual testers who want to get started with automated mobile application testing
- Automated testers and developers who have some knowledge of automated mobile testing but want a more comprehensive overview
- Some understanding of Java or other programming languages
- Some experience with writing a simple script
Primary Learning Objectives
- Understand what Appium is and how it works
- Understand what the components of an Appium script are, and how to construct them
- Run an Appium script in a local environment
- Run an Appium Script on Sauce Labs
A Guide To Getting Started With Sauce Labs
Learn about Sauce Labs features, including managing accounts and teams, how to configure existing tests to run on Sauce Labs, the Sauce Connect Proxy and the Sauce REST API.
Getting Started With Sauce Labs (Feb 28, 8:00a – 10:00a)
This workshop will provide an overview of Sauce Labs features, and help you understand the architecture behind Sauce Labs and how the service works. You will learn how to set up and manage accounts and teams, how to configure existing tests to run on Sauce, and gain an understanding of what Sauce Connect Proxy is and when to use it. You will also learn how to access and view the test details page, and how to use the Sauce REST API to provide information about tests.
- Individuals who need an introduction to Sauce Labs and to set up their initial accounts and tests.
- Attendees should already be familiar with Appium/Selenium and automated testing
Primary Learning Objectives
- Understand the architecture behind Sauce Labs and how the service works
- Understand how to set up and manage accounts and teams
- Understand how to configure existing tests to run on Sauce
- Understand what Sauce Connect Proxy is and when to use it
- Understand how to access and view the test details page, and how to use the Sauce REST API to provide information about tests
Developing and Testing Applications on Real Devices
Learn about the differences between testing on real devices, emulators and simulators, and how to run manual and automated mobile tests on Sauce Labs.
Developing and Testing Applications on Real Devices (Feb 28, 10:15a – 11:45a)
Learn how to develop and test mobile applications on real devices. The objective of this workshop will be to help you gain an understanding of the differences between testing on Real Devices vs. emulators and simulators, and how to run manual and automated mobile tests on the Sauce Labs real device cloud.
- Developers and automated test engineers who have experience testing on emulators and simulators but have an interest in using real devices.
- Experience with automating mobile application tests with Appium
Primary Learning Objectives
- Understanding the differences between testing on Real Devices v. emulators and simulators
- Understanding how to run a manual mobile test with the Sauce real device cloud
- Understanding how to run an automated mobile test with the Sauce real device cloud
Troubleshooting with Analytics
Learn how to troubleshoot and improve the efficiency of your tests and builds using the Trends and Insights analytics.
Troubleshooting with Analytics (Feb 28, 11:45a – 1:00p)
Learn how to troubleshoot and improve the efficiency of your tests and builds using Trends and Insights analytics. This course will cover using Trends analytics to drill down into your builds to pinpoint problem tests, identify common errors, and get statistics that will help you improve the efficiency of your builds. You’ll also learn how to use the Insights analytics to help you achieve the maximum concurrency for your account.
- Developers and Testing Managers who want to understand how to use Analytics to improve their test coverage, efficiency, and success rate
- You must have an Enterprise account with Sauce Labs to access the Analytics features
Primary Learning Objectives
- Learn how to use filters to create and interpret Trend analytics
- Learn how to use Insights analytics to improve test and build efficiency
- Learn how to troubleshoot common use cases using Analytics
Getting Started With Selenium
Learn what Selenium is and how it works, how to set up a Selenium environment on your local machine, and how to run your tests locally and on Sauce Labs
Getting Started With Selenium (Feb 28, 8:00a – 10:00a)
In this session you will learn what Selenium is and how it works, as well as set up a Selenium environment on your local machine. You will then create and execute a Selenium script in a local environment and then run your test on Sauce Labs.
- Manual Testers who need to learn basics of automated testing
- Developers who already have some familiarity with automated testing but need to know more about the nuts and bolts of Selenium and Selenium script writing
- Familiarity with Java or other programming language in order to understand the basics of Selenium script code
Primary Learning Objectives
- Understand what Selenium is and how it works
- Setting up a Selenium environment on a local machine
- Creating and executing a script in a local environment
- Executing a script on Sauce Labs
Continuous Testing For Mobile Applications and Websites
Learn how to improve the efficiency of the test/build process through parallel testing, and about advanced testing techniques and features such as assertions and test reporting.
Continuous Testing For Mobile Applications and Websites (Feb 28, 10:15a – 1:00p)
Building on a basic understanding of how to create Appium and Selenium test scripts, this course introduces the use of testing frameworks, with the example of TestNG for Java, to improve the efficiency of the test/build process through parallel testing, and to provide advanced testing techniques and features such as assertions and test reporting. This course also covers methods for designing tests and test suites to take advantage of parallel testing capabilities, such as abstraction and the use of PageObjects, as well as considerations for application and website design that improve testability.
- Developers and automated testers with a basic knowledge of Selenium and Appium who want to learn about frameworks, advanced test and test suite design, and application and website design to improve their testing practices and the efficiency of their testing.
- Familiarity with Appium and/or Selenium and ability to write a basic test script
- Familiarity with Java or the ability to understand Java code
Primary Learning Objectives
- Understand how to use testing frameworks to run tests in parallel and for advanced testing features
- Understand how to design applications for testing
- Understand how to design tests and test suites to optimize test and build efficiency
- Understand how to incorporate continuous integration into the development cycle
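The PageObject abstraction this workshop covers can be sketched briefly. The page and locators below are hypothetical; the point is that tests call intent-level methods while only the page object knows the locators, which is part of what keeps parallel suites maintainable:

```python
class LoginPage:
    """Page object for a hypothetical login screen. Tests never touch
    locators directly; they call log_in() and get back the next page."""
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver  # any WebDriver-like object

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return HomePage(self.driver)  # navigation returns the next page object

class HomePage:
    def __init__(self, driver):
        self.driver = driver
```

A test then reads as a user journey, e.g. `home = LoginPage(driver).log_in("alice", "s3cret")`, and a UI change touches one class rather than every test.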
Creating a Continuous Delivery Pipeline with Automated Testing
Learn how to set up a CD pipeline to incorporate testing in isolation/branch testing, as well as how to design efficient tests and test suites for testing in isolation
Creating a CD Pipeline with Automated Testing (Feb 28, 2:00p – 4:00p)
In this workshop you will learn about the importance of automated testing in a continuous development/deployment process. You will also learn how to set up a continuous deployment pipeline to incorporate testing in isolation/branch testing, as well as how to design efficient tests and test suites for testing in isolation.
- Developers and test engineers who want to develop or improve their continuous delivery process through the use of automated testing
- Attendees should have some background in software development processes and basic tools such as version control systems, IDEs, etc.
- Attendees should have some experience with automated testing and creating automated test scripts, as well as using testing frameworks for parallel test execution
Primary Learning Objectives
- Understand the importance of automated testing in a continuous development/deployment process
- Understand how to set up a continuous deployment pipeline to incorporate testing in isolation/branch testing
- Understand how to design efficient tests and test suites for testing in isolation
Sauce Labs Troubleshooting
Learn best practices for running your tests on Sauce, and how to deal with errors and other problems that can come up during your tests.
Sauce Labs Troubleshooting (Feb 28, 2:00p – 4:00p)
Learn best practices for running your tests on Sauce, and how to deal with errors and other problems that can come up during your testing process.
- Enterprise-level developers and QA testers working for organizations that have recently purchased Sauce
- Self-serve customers who want to improve the performance of their tests and builds running on Sauce
- Learners should already have a Sauce Labs account and be familiar with automated testing concepts and procedures
Primary Learning Objectives
- Understand the differences between running tests locally or on a local grid v. running tests on Sauce
- Understand common user errors
- Understand common errors generated by the Sauce Labs infrastructure
- Understand how to resolve both user and Sauce errors
What SauceCon 2017 Attendees Had To Say:
Speaking and attending several conferences over the last three years, this conference ranks #1 in my books.
I thought it was best conference I've been to in years - not just the content, but the seamless flow, the venue, the organization, the interaction and discussions in the corridors - I thought it was brilliant.
Brought back good code examples and strategies that I have created best practices around.
Super-relevant content and a lot of smart attendees who are knowledgeable and advanced in their automation.
Parc 55 Hotel | 55 Cyril Magnin Street, San Francisco, California, USA, 94102
Parc 55 Hotel
Applitools uses sophisticated AI-powered image processing technology to ensure that an app appears correctly and functions properly across all mobile devices, browsers, operating systems and screen sizes. By automating visual testing (including content, layout and appearance), Applitools helps companies dramatically shorten QA time while avoiding more software bugs than ever before, ensuring flawless UI across platforms. And in addition to increasing coverage, Applitools also substantially reduces maintenance efforts, due to its unique ability to automatically propagate changes across execution environments.
Founded in 2013, Applitools has more than 300 customers from a range of verticals, including Fortune 100 companies in banking, software, online retail, insurance, pharmaceuticals, and more.
Applitools is based in San Mateo, California and Tel Aviv, Israel. For more information and a free trial, please visit: applitools.com.
Do you want to be part of an awesome global team that is building a world-class platform for super fans of digital entertainment products and services? We’re seeking technology professionals and developers who want to build amazing experiences that affect the lives of millions of PlayStation users worldwide.
Our company, Sony Interactive Entertainment (PlayStation), unifies and integrates the strengths of PlayStation across hardware, software, content, and network services operations. At PlayStation, we are driven and passionate about delivering ground-breaking entertainment experiences and inspiring the imagination of consumers around the world. Come join us, Greatness Awaits You!
QASymphony is a leading provider of enterprise test case management, test analytics and exploratory testing solutions for agile development and QA teams. Our solutions help companies create better software by improving speed, efficiency and collaboration during the testing process. QASymphony has 500+ customers, including Salesforce, Barclays, Samsung, Office Depot and Dell. Sign up for a free trial at qasymphony.com