As Big Kev used to say… “I’m excited!”
Why? Because this is the first of our ‘fast five’ series of posts. You may recall me mentioning it here. In each post I’ll interview some dead set awesome people from our industry, and what a way to start! I can’t thank Scott Barber enough for taking time out of his crazy schedule to answer my questions.
A prominent thought-leader in the area of software system performance and testing software systems in general, Scott Barber, founder and Chief Technologist of PerfTestPlus & Director of the Computer Measurement Group, makes his living writing, speaking, consulting, and coaching with the goal of advancing the understanding and practice of software testing. Scott has contributed to four books (co-author of Performance Testing Guidance for Web Applications, Microsoft Press, 2007; contributing author of Beautiful Testing, O’Reilly Media, 2009; contributing author of How to Reduce the Cost of Testing, CRC Press, 2011; and author of Web Load Testing for Dummies, John Wiley & Sons, Inc., 2011), composed over 100 articles and papers, delivered keynote addresses on five continents, served the testing community as the Executive Director of the Association for Software Testing, and co-founded the Workshop on Performance and Reliability.
Check out his about.me profile for even more on this amazing tester!
So anyway, I hope you enjoy…
1. Starting at the beginning (always a good place), can you tell me how you found yourself in software testing?
Like many software testers, I fell into it. Out of High School, I accepted an Army ROTC Scholarship to pursue a degree in Civil Engineering at Virginia Tech (GO HOKIES!), which I completed before being swept off to repay the scholarship on active duty.
After about 4 years of good reviews as an Officer (and, shall we say, recognizing that I wasn’t really cut out for a career in the military), I had the good fortune to get recruited into a civilian company that was evaluating the replacement for an antiquated military system that I’d just so happened to complain about in front of the right person. Officially, that job was called “Business Process Re-engineering”, but I mostly did business analysis, technology evaluation, and data modelling.
It lasted until I couldn’t stomach billing clients, who were paying with taxpayer dollars, while working out in the office gym and playing StarCraft for 6 of 8 hours a day, all while receiving “efficiency” bonus checks and “earning” corporate Rookie of the Year honours.
By that time, I was partway through a Masters in IT and thought I was going to be an Oracle DBA. So I took a job with a little company that thought I had potential and was going to train me to be their next “Super DBA Dude”… until one of their clients asked them to send someone to backfill a Configuration Management position while they hired a replacement, because they’d just fired a guy for… let’s call it “HR Violations”. I have to say that I’m no more cut out to be a configuration manager than I am to be a career Army officer.
Then one Sunday evening I got a call from my manager…
“Scott, be at the Marriott Hotel Conference Room at 8am tomorrow. Our CEO has an announcement to make.”
Being a good employee, I did as I was told and went to the meeting. As it turns out, the company was “merging” with a huge media company. It was to become their “development branch”. And somehow, that meant that we were all supposed to be happy about our base pay being reduced, giving up our bonuses and overtime… because we were going to get stock options?!?
I was less than enthusiastic about this. I called my friend of 10 years before I even left the parking lot of the hotel. I told him the circumstances. He said:
“Dude, that sucks. Send me your resume. We need performance testers. You’re a perfect fit.”
I said “Performance tester? What’s that?!?”
He replied “Don’t worry, you’ll like it.”
He was right. I love it & I’ve been doing it ever since (with a brief stint as a System Test Engineering Manager for a start-up while my youngest son was an infant).
2. You’re known as an ‘industry leader’ to many of us in the software testing community, especially in the world of performance testing. Do you think there are certain reasons for this?
The simplest answer is that I started sharing my thoughts publicly, at conferences, on forums, and in articles, very shortly after Alberto Savoia moved from being the ‘industry leader’ in performance testing to becoming a VP at Google. Since so few people were sharing information on the topic, I very quickly became a very big celebrity among a very small group of people. Apparently, I’ve not proven myself to be too terribly stupid, since folks keep asking me to write, speak, collaborate, consult, etc.
I know that for me, when I realized that there was a community looking to me as a leader, I felt driven to not let them down — at least not due to a lack of trying. So from there, I guess it kind of became a self-fulfilling prophecy of sorts. The more people turned to me as a leader, the harder I worked to earn the position retroactively… which led people to look to me for leadership on topics beyond performance testing, etc. etc.
Writing really has been a large part of it though. In addition to well over 100 articles, I’ve authored or contributed to several honest to goodness (like, my parents have signed hard copies) books. No one, and I say this with no arrogance or fear of contradiction, has composed more publicly available, original content specific to performance testing than I have. In some folks’ eyes, that makes me “the best”. Which I find flattering, but I’ll never forget something my father was rather fond of reminding me of when I was a kid and was being cocky about something or other… “Just remember, being the best at something doesn’t actually make you any good at it.”
3. A two pronged question… where do you see performance testing heading in the future considering the change in the landscape of IT, i.e. mobile, cloud, etc. and what challenges do you see being in front of performance testers in these areas?
DevOps is where it’s at. The division between code & maintain shrinks every day. The days of the ubiquitous, monolithic, corporate owned and maintained data center are numbered (it’s still a reasonably large number, but the “count-down timer” has unquestionably started). The lines between Architect, Performance Tester, and Capacity Planner are getting increasingly blurry. Performance Testing in Production has become a responsible reality (at least when implemented, well, responsibly). The UX (User Experience) community is finally catching on that performance is at least as important as “pretty & intuitive”. The pendulum is swinging back from “process everything server-side” to “push everything we can to the client”. Before long, everything… like, including your shoes and your toaster… will be connected to a network of some sort.
And most of my clients are still trying to figure out how to reliably use their load generation tools to simulate humans using applications that make use of AJAX, Flash, or .NET Viewstate technologies. I don’t know about you, but this sounds like a problem to me.
Sooner or later, we’re going to stop getting lucky. Eventually, we’re not going to be saved by the next generation of hardware coming out “just in time” to prevent application disasters. One of these days, users are going to stop complaining about poor performance, and actually stop using/paying for services that perform poorly. The question is, will the majority of companies get out in front of this, or will the majority need to experience application disasters before getting proactive about performance?
That sounds all “doom & gloom”, but it’s really not that hard to avoid. While it’s true that consistently & deliberately delivering world-class performance, for a large, geographically diverse, user base spanning a wide variety of devices is a “world-class challenge”, it’s also true that *most* organizations can dramatically improve application performance with a very small investment – mostly in terms of individual & collective, continual, accountability & responsibility for performance from conception to headstone.
The easiest way to avoid performance disasters is by implementing things like unit-level performance testing, resource allocation budgets for objects/components/processes, and periodically generating some load (not production-simulation-grade load, just “some load”) against parts of the application as they achieve “working… at least mostly” status. If that sounds complicated, culturally it may be, but once implemented, I’m talking about a total of less than an hour per week per technical team member. That small investment will save *weeks* of pain when it’s finally time to execute production simulations, because those simulations will be immediately focused on tuning instead of first having to resolve a cascading set of “oops” and “oh, hold on a sec” type issues.
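To make the “just some load” idea concrete, here’s a minimal sketch in Python (an editor’s illustration, not Scott’s tooling; the URL and worker counts are placeholders you’d point at a dev instance). It fires a handful of concurrent requests and reports crude latency numbers, which is all you need to surface the obvious problems early:

```python
# Minimal "some load" sketch: fire a batch of concurrent requests at an
# endpoint and report per-request latency. Not a production simulation --
# just enough load to shake out obvious concurrency problems early.
import concurrent.futures
import time
import urllib.request


def hit(url: str) -> float:
    """Return elapsed seconds for one request (raises on HTTP errors)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def some_load(url: str, users: int = 10, requests_per_user: int = 5) -> dict:
    """Run users * requests_per_user requests with `users` concurrent workers."""
    total = users * requests_per_user
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        timings = sorted(pool.map(hit, [url] * total))
    return {
        "count": len(timings),
        "median_s": timings[len(timings) // 2],
        "worst_s": timings[-1],
    }
```

Running `some_load("http://your-dev-box/…")` weekly and eyeballing the median and worst numbers is well within the “less than an hour per week” budget Scott describes.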
Personally, I’m working on ways to get that message out at the manager and higher level, ways to educate people about the value of a cultural shift from “Don’t forget to do some performance testing” to making performance “just part of how systems are developed”, ways to make it easy for organizations to achieve at least user acceptable performance. I’m working on building bridges between the performance champions in the Design/Architecture, Developer, Tester, and Ops worlds. There are plenty of other individuals, groups, and corporations working on the techy “how to make X perform better” parts, and I’m glad for that, but I worry that no amount of techy “how to make X perform better” will keep up with the increasing demand for ever-better performance if organizations don’t realize that simply sticking a 500 hp engine and a sports suspension on their push lawnmower isn’t going to get the lawn mowed any faster.
4. Could you share with me a performance testing war story? Perhaps a time where performance testing wasn’t undertaken and should have been? Or a time when it wasn’t done properly? Happy for all confidential content to be taken out of course.
Once upon a time there was a company with a pretty new application. It functioned wonderfully, and was going to be the first of its kind on the market. All was progressing according to plan until one day someone said “It seems to get slow whenever we’re all testing at the same time!” So, the company did what you’d expect. They hired someone to come do performance testing.
At first, the performance testing revealed some issues that got resolved in fairly short order with performance improving marginally with each resolution, until…
One Monday morning, after a seemingly minor change was promoted over the weekend, or so the story was told to me, the performance tests showed amazing improvement! To be sure, they ran the tests again with the same results! The company was thrilled. They immediately started re-configuring firewalls and name servers so the great new application could be seen by the whole world. They sent out their press releases, and generally created a big buzz over the official launch of their new application the following Monday.
Well, the following Monday came and disaster struck. No, the application wasn’t slow, but most people who surfed to the grand opening were met with a pretty launch page and then a pretty frame with a not very pretty error message that included the phrase “service not available”.
“But how could this be?!? We did the right things, and our tests said all was well!” the executives all said “We’ll hire Scott Barber, he’ll save the day!”
Ok, so I took a little poetic license there at the end. I mean, we all want to feel like a superhero once in a while right? Besides, my sons already think my work is kinda dorky, so maybe if I tell my stories that way… Nah, you’re right, then they’ll think I’m kinda dorky too. :)
Anyway, want to know the punch line? I was the next performance testing consultant. It literally took longer for them to tell me the story than it took me to figure out what happened, and not much longer than that to resolve the issue and confirm that performance was back to good-enough (though admittedly, not as speedy as on that one Monday morning).
It turns out that the “minor change” that was promoted over the weekend had included an update to the Application Server Software, which (for some reason that is completely beyond me) required a re-entry of the licence key to enable it to allow more than 5 connections at a time. This wouldn’t have been a big deal, except the performance scripts had been developed to check items in the “pretty frame” (which were generated at a tier of the application prior to the Application Server) to validate that the correct page was being presented. Thus, when the “pretty frame” was displaying correctly, the scripts reported no errors, even though the *real* content was only being delivered to the first 5 simulated users to trigger connections to the Application Server. To make matters more embarrassing, the tool even had the ability to see what any one simulated user was seeing while the script was running, but apparently the performance tester before me only thought to watch what the first simulated user was seeing.
The moral of the story? Never trust performance test results unless real humans are using the system during the test and those real humans’ experiences are congruent with what the results are telling you.
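To make that moral concrete in script form, here’s a small sketch (my illustration, not from the interview) of validating the *real* content of a response rather than just the wrapper frame. The marker strings are placeholders; the point is that `content_marker` should be text only the application tier behind the frame can produce:

```python
# Validate the real content of a response body, not just the wrapper frame.
# frame_marker:   text emitted by the front tier (the "pretty frame")
# content_marker: text that can ONLY come from the application tier behind it

def validate_body(body: str, frame_marker: str, content_marker: str) -> list:
    """Return a list of validation failures (an empty list means the page is good)."""
    problems = []
    if frame_marker not in body:
        problems.append("page frame missing -- request failed outright")
    if "service not available" in body.lower():
        problems.append("frame rendered, but the app tier returned an error")
    if content_marker not in body:
        problems.append("frame rendered, but the real content is missing")
    return problems
```

Had the scripts in the story asserted on a `content_marker` instead of the frame, every simulated user past the fifth would have reported a failure instead of a record-breaking response time.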
5. Last but not least, a scenario for you… my PM has advised that performance testing is too expensive due to the tool requirement and specialist resource need. He/she is happy to take a risk and tune after implementation if need be. Do you have any pointers for me in how to approach this ‘oh so common’ conversation?
1) See my answer to question 4 above.
2) Roll your eyes & tell him/her that with all of the Open Source tools, free-trial periods, and new/small vendors *itching* for the opportunity to trade free services for case studies & testimonials, that “too expensive” is either a naive assumption, or a stupid excuse, for not generating at least *some* load prior to production.
3) Tuning after implementation is fine… in fact, there are many situations in which I recommend that as the most cost effective and responsible option. Generating *some* load against a designed-to-be-multi-user system isn’t about performance testing anyway. It’s about finding things like the uninstalled license key in question 4. It’s about finding out that the database locks both reads and writes to *entire tables* every time it processes a query. It’s about making sure that when 2 different users log in from the same IP address their personal data doesn’t get all jumbled up.
4) Even if you can’t convince him/her to let you use an open source, free-trial period, or services-for-case-study tool, I bet you can convince a bunch of your friends in the office to spend a lunch hour doing one of the most valuable multi-user tests ever by simply sending the following to them in an email at about 11:20am: “Try to accomplish as many of the following tasks as you can on the system between now and when the pizza I just ordered for all of you arrives. I’ll be standing in front of the pizza… I’ll trade you a plate for your notes.” It might initially cost you $100 out of your pocket… and get you a healthy scolding if he/she finds out about it before you have a chance to consolidate your findings, but I’ve taken that risk several times and have yet to end up not getting reimbursed for the pizza.
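The session-isolation concern from point 3 can even be automated as a tiny concurrent check. The sketch below is my illustration (not Scott’s), and `login`/`get_profile` are stand-ins for whatever client calls your application actually exposes; the shape of the test is what matters: several users act at once, and each asserts the data coming back is their own.

```python
# Concurrent session-isolation check: each user logs in and verifies the
# profile returned belongs to them, not to another concurrent user.
# `client` is a placeholder object exposing login() and get_profile().
import concurrent.futures


def check_isolation(client, users: list) -> list:
    """users: list of (username, password) pairs.
    Returns the usernames whose session came back with someone else's data."""
    def one_user(creds):
        username, password = creds
        session = client.login(username, password)
        profile = client.get_profile(session)
        return username if profile["username"] != username else None

    with concurrent.futures.ThreadPoolExecutor(max_workers=len(users)) as pool:
        results = pool.map(one_user, users)
    return [u for u in results if u is not None]
```

An empty return value doesn’t prove the system is safe, but a non-empty one is exactly the “jumbled up personal data” defect worth finding before real users do.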
The reality is that a very small portion of the time in my career that I’ve spent under the task of “performance testing” has had very much to do with testing or tuning performance. It’s mostly had to do with detecting and helping folks resolve actual defects that just so happen to present themselves more readily under load conditions. So, all sarcasm and attitude aside, that’s the message I’d recommend starting with in that scenario.
A HUGE thank you once again to Scott for answering my questions. I hope you enjoyed the read! Make sure you keep your eyes on Scott, as he will be working on some very interesting topics in 2012. Take a look at this post for a bit of a hint. ;0)
If you have certain topics that you’d like to see covered in the ‘fast five’ series, please get in touch. I’ll try and work my magic!