KVIV 2013 - Behavior Driven Testing with Cucumber demystified

by admin

I presented an updated version of my talk on Behavior Driven Development (BDD) and Behavior Driven Testing (BDT) using Cucumber for the 'Royal Flemish Society of Engineers'.

The slide deck in PDF format can be found on my employer's website. You can download the file by clicking the 'Behavior Driven Testing with Cucumber demystified' link on that page.

The updates were done based on feedback I got from the Belgian Testing Days audience and my new co-workers.

This slide deck starts with a focus on the problems Behavior Driven Testing tries to solve, which should make it easier for people who are not yet up to speed with BDD. I also went deeper into detail on the outside-in aspect, since many people did not grasp that concept from the definition of BDD.

At the end I did a demo using Cucumber, Capybara and Selenium. I think I downloaded that from this guy's GitHub and then changed it somewhat to make it more understandable.

 

I changed the 'googleDemo' feature file to match the content of my talk and used only that feature file for the demo:

Feature: Search Google for cucumber info

  In order to learn about cucumber
  As an internet user
  I want to find the cukes.info website

  Scenario: Search Google
    When I search Google for "cucumber"
    Then there should be a result for "cukes2.info/"
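
For the demo, each of these steps is backed by a step definition written in Ruby, with Capybara driving Selenium. A minimal sketch of what such step definitions could look like, assuming the file name, the Google search field name 'q' and the RSpec 'should' matcher (none of these are taken from the original demo repository):

  # features/step_definitions/google_steps.rb - illustrative sketch only
  require 'capybara/cucumber'
  require 'selenium-webdriver'

  Capybara.default_driver = :selenium   # drive a real browser through Selenium

  When(/^I search Google for "([^"]*)"$/) do |term|
    visit 'http://www.google.com'
    fill_in 'q', with: term                     # 'q' is assumed to be the search box name
    find_field('q').native.send_keys(:return)   # press Enter to submit the search
  end

  Then(/^there should be a result for "([^"]*)"$/) do |url|
    page.should have_content(url)               # rspec-expectations matcher on the results page
  end

Running 'cucumber' from the project root would then execute the feature file against a real browser.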

 

The demo uses Ruby 1.9.3 since the Cucumber - Capybara - Selenium stack does not yet work in Ruby 2.0.

These are the installation steps:

1. download Ruby 1.9.x (NOT 2.x)

2. install Ruby

3. download the DevKit for the 1.9.x version

4. install devkit (https://github.com/oneclick/rubyinstaller/wiki/Development-Kit)

5. install cucumber gem (gem install cucumber)

6. install rspec gem (gem install rspec)

7. install capybara gem (gem install capybara)
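
If you prefer Bundler, the same gems can be captured in a Gemfile; a minimal sketch, with Bundler and the selenium-webdriver gem as extra assumptions on top of the steps above (versions left unpinned):

  # Gemfile - illustrative sketch; run 'bundle install' using Ruby 1.9.x
  source 'https://rubygems.org'

  gem 'cucumber'
  gem 'rspec'
  gem 'capybara'
  gem 'selenium-webdriver'   # assumed: lets Capybara drive a browser through Selenium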

Belgian Testing Days 2013 - Behavior Driven Testing with Cucumber demystified

by admin

This is the talk I gave at the Belgian Testing Days. I have added the slide deck as well as some extra information about things that were discussed during the talk and the Q&A afterwards.


Abstract

Behavior Driven Testing (BDT) is the lesser known companion of Behavior Driven Development (BDD). However, BDT can be used without BDD. When looking at the V-model, BDT can be used at the requirement definition level and at the functional testing level. In Agile, BDT is often used in the form of user stories written in the 'given-when-then' format.

These can be used not only to define the behavior of an application but also as input for automated test tools using the Cucumber framework.

In order to create and maintain these user stories in a structured way, a Domain Specific Language (DSL) must be defined. There are some pitfalls when creating and maintaining a DSL both for requirement definition and as input for the automated test process.

These pitfalls will be listed and several solutions will be shown during the presentation. Steven will also highlight the effort required to start with a combination of BDT and an automated test tool, as well as the knowledge expected.

With some examples the return on investment can easily be explained; these examples can be translated to other companies without too much difficulty.


Slide deck

The slide deck can be seen here.

 

Should

I cannot stress enough the importance of the 'should' in the Then steps. This little word makes sure the author of the user story (be it a requirement writer or a tester) challenges the premise of the test while writing it. The quality improvement gained by doing this is often underestimated. The quality of the user stories will be reflected in the quality of the automated tests that follow, and will impact the quality of the software product.

 

DSL definition and story robustness

The same goes for the DSL: a healthy amount of nitpicking should be done when creating sentences for the user stories. They should be descriptive without being too long, and they should be as technology independent as possible.

Example for typing text in a textbox on a website:


When I type "xyz" in the "address_field" textbox


This is a typical sentence one would end up with when creating steps for the first time.

It is too technical because the textbox could change into a text area; in that case the step would not be accurate anymore. We can improve this to:


When I type "xyz" in "address_field"


This is still too dependent on technical knowledge, since address_field is the name of the textbox as programmed in the HTML. This name is subject to change, and then all your stories would need updating. We can improve this by making the address field abstract in the story and putting the correct HTML identifier for the field in our step.rb file, where all the steps are defined (i.e. programmed):


When I type "xyz" in the address field


Still not perfect: the mention of the address as a field is still a little too technical, and the user whose behavior we are describing is not interested in that. If an end user is asked what he is doing, he would likely say:


When I type "xyz" as the address


This is where the nitpicking can start: after all, there is no guarantee the end user is typing anything; perhaps he is pasting text using the mouse buttons. It is better to make the description of the behavior even more abstract by writing the step as:


When I enter "xyz" as the address


This last step describes the behavior perfectly. It also makes your story more robust: if the address field becomes a text area the step is still accurate, if the field changes name it is still valid, and so on.
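
To make this concrete, here is a hedged sketch of how that final, abstract step could be implemented so that the HTML identifier only lives in the step definition; the id 'address_field' and the file name are assumptions for illustration:

  # features/step_definitions/address_steps.rb - illustrative sketch only.
  # If the HTML identifier changes, only this file needs updating; the user
  # stories themselves stay untouched.
  When(/^I enter "([^"]*)" as the address$/) do |address|
    fill_in 'address_field', with: address   # Capybara matches by id, name or label
  end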


Writing testable code

The TDD and BDD practices of course lean on the testability of the code created. In practice this means there must be an easy way to identify the controls the end user will be using. Giving your GUI controls some kind of identifier that does not change (meaning it is only used for testing and only to identify the control) will make your step definition code a LOT more robust. A good example can be found when testing HTML using Selenium WebDriver from Cucumber: you need to add a tag ID for each control that is needed for testing. The same is true when programming in .NET and creating a desktop application using WPF: you will need to make sure the automation ID is set for each control.

 

 

OWASP Belgium Chapter Meetings 2013 #1

by admin

This chapter meeting on the 5th of March was co-organised with SecAppDev. For the first time there was a need for the larger lecture hall; there were a lot of people present. One can only hope that this is going to be a trend :-)


25 Years of Vulnerabilities (by Yves Younan)

Yves presented a large number of slides with figures on vulnerabilities. The data came from the National Vulnerability Database, the Common Vulnerabilities and Exposures Database and vendors like Microsoft. Some of the numbers had to go through manual processing to make them usable and the effort that went into this research was high.

It was clear after the presentation that the numbers could not easily be used to compare, for example, the different browsers or operating systems. The manner of reporting (or not reporting, for that matter) by the vendors and researchers differs so much from product to product and from vendor to vendor that the numbers cannot be used to compare them. Also, the numbers only count vulnerabilities and do not always correctly show how much impact a vulnerability has (e.g. the Chrome web browser uses sandboxing techniques that make it hard to exploit any of the vulnerabilities found).

So, all in all, a nice presentation, but without any conclusions. The full report can be found here (registration required).


Banking Security: Attacks and Defences (by Steven Murdoch)

Steven talked about the security of banking applications, both when using your debit/credit card at a point of sale terminal and when using online banking. Since Steven works in the UK, his first examples were of course from UK banks. When it comes to online banking there seem to be a lot of different mechanisms in use today; in the Belgian market (i.e. Belgian banks) this variety is not really visible, and most banks use more or less the same methods and tools to authenticate users.

Steven showed a movie that was also aired on British television where the copying and/or using of bank cards without knowing the PIN was shown. The reaction of the banking sector was hilarious :-)

This was a very technical talk and a little bit too far from my web application security knowledge to understand all the implications of the differences between the systems.

 

So, again, an interesting OWASP evening session with topics to broaden our knowledge and understanding of security principles in general. Hopefully next time there will be something about web application security :-)

Belgian Testing Days 2013 - Tool selection: a successful approach by Bernd Beersma

by admin

Introduction

This is an article in two parts. First I will do a recap of Bernd Beersma's talk about tool selection at the Belgian Testing Days. As an add-on, I will show how this process can be adapted to select open source tools as often used in security testing. I went to the talk just to see how other companies handle this process, but I learned a lot and got a heap of good tips. So next time a company asks me to help them choose a tool, I will certainly propose this process to them :-)

 

The talk

The slide deck can be found here.

Bernd showed quite convincingly that there is a process that can be followed that will deliver a high chance of success when selecting a new tool. The process is divided into several phases (thus far business as usual), and after each phase there is a 'go/no go' decision. This is the big difference between this process and the ones I saw being used in most companies. If after a phase there is not enough confidence that the tool(s) selected for the next round will be able to deliver sufficient quality, the process either goes back to the previous phase or stops altogether (it would then start all over again with a clean sheet).


A poor selection of a tool can be caused or influenced by:

  • Poor business case
  • Lack of commitment
  • Poor automation
  • Cost


The four phases are a long list, a short list, a proof of concept (POC) and a pilot. It is important to note that Bernd tried to introduce some efficiency into the last two phases: the POC and pilot should be done on an actual environment so the time spent there is not lost if the tool is selected; in fact this will guarantee a 'running start' for the selected tool.

Below are the actions needed in each specific phase.


The long list:

  • Identify stakeholders
  • Define goals
  • Create business case
  • Set up project team
  • Define general requirements (e.g. mind map)
  • Gather general information
  • Create long list (approximately 10 tools, possibly fewer)


The short list:

  • Define detailed requirements
  • Define priorities & weight for each requirement
  • Request for information (RFI)
  • Determine score (e.g. use a spreadsheet with the priorities and weights and score each line for each tool; see the sketch after this list)
  • Create short list (e.g. top 3 tools)
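
As an illustration of the scoring step, here is a small sketch of the weighted calculation that would normally live in a spreadsheet; the requirements, weights and scores are made-up values:

  # Weighted scoring sketch - requirements, weights and scores are illustrative only.
  requirements = {
    'record & playback' => { weight: 3, scores: { 'Tool A' => 4, 'Tool B' => 2 } },
    'CI integration'    => { weight: 5, scores: { 'Tool A' => 3, 'Tool B' => 5 } },
    'licence cost'      => { weight: 2, scores: { 'Tool A' => 2, 'Tool B' => 4 } },
  }

  totals = Hash.new(0)
  requirements.each_value do |req|
    req[:scores].each { |tool, score| totals[tool] += score * req[:weight] }
  end

  # Highest weighted total first; the top entries form the short list.
  totals.sort_by { |_, total| -total }.each { |tool, total| puts "#{tool}: #{total}" }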


The proof of concept:

  • Draft POC requirements (i.e. an objective measurement of the POC)
  • Invite vendors for POC (shared involvement of vendor)
  • Execute & evaluate
  • Invite for a request for proposal (RFP)
  • Select the best suited tool for our purpose


The pilot:

  • Basically the same as the POC but with just one tool/vendor
  • Select project
  • Define requirements
  • Invite vendor
  • Execute & evaluate project


The adaptation

When choosing tools for penetration testing we are often confronted with a myriad of tools, many of them small and single purpose. Often there is more than one small tool doing the same thing, and choosing between them is difficult. Applying the procedure above for all the tools needed for a penetration test would take too long. Also, open source tools do not have vendors as such, so an RFI and RFP are not possible. The RFI can be replaced by searching the internet and trying to get an informed view; the RFP could be replaced by some kind of business case that estimates things like the learning curve, installation cost, maintenance cost, etc. A great deal of objectivity is needed when estimating these costs, since they can greatly influence the process. In most cases, for a small tool, the RFP can be dropped completely.

Most penetration testers use their own personalized system, many using open source tools or relatively cheap tools (e.g. Burp Suite Pro); this means the stakeholder, project group and vendor-related items can be dropped.

This leads to a slimmed down list of phases. I even put some estimates next to each phase.


The long list (1 day):

  • Define goals (since most of these tools are single purpose, the general requirements cover this aspect as well)
  • Estimate learning curve/effort (this replaces the business case)
  • Define general requirements (e.g. mind map)
  • Gather general information
  • Create long list (approximately 5 tools, possibly fewer)


The short list (1 day):

  • Define detailed requirements
  • Define priorities & weight for each requirement
  • Determine score (e.g. use a spreadsheet with the priorities and weights and score each line for each tool)
  • Create short list (e.g. top 3 tools)


The proof of concept, pilot and implementation (merged, since most penetration test engagements do not last more than a couple of weeks):

  • Execute & evaluate in the next penetration test
  • Select the best suited tool for our purpose, possibly replacing another tool previously selected


Using this shortened method, the total time spent on tool selection should be significantly shorter, while still keeping enough of the process to show other people how you came to your conclusion.

 

 

SANS evening session: "Patching your employee's brain" (by Pieter Danhieux)

by admin Email

Introduction

I wanted to see both talks during this SANS community night, but due to traffic I missed half of Daan Raman's talk. Even though the part I did see showed that Daan's research was sound and elaborate, I do not feel confident enough to correctly represent his work, so I decided not to write a wrap-up. Sorry Daan, I hope I get there in time next time. Daan's slide deck is available online for those interested in his research on Android malware.

The second talk that evening was given by Pieter Danhieux and covered the education of people in the workplace.


"Patching your employee's brain" (by Pieter Danhieux)

The slide deck can be found here. This was a largely non-technical talk about the different ways companies can instil a degree of security awareness in their people; it is about making non-security people more resilient against different types of attacks (such as choosing a good password, or recognising a phishing email or call). Pieter also included some good resources for finding items like posters that are free to use.

The first part of the talk showed how easy it is to craft phishing emails and malware that pass through junk and anti-virus filters. Pieter also showed with some clear examples that humans are not good at evaluating risks; in fact, the more uncommon risks are always perceived to be a lot more common than they are. Considering that most messages a normal user gets from the anti-virus, firewall or browser are rather cryptic to them, it is not surprising that they make the incorrect decision when these are presented; after all, they could be missing out on a funny picture of a cat ;-)

During the second part of the talk it was made clear we need a roadmap for security awareness in most companies. This roadmap should detail a security awareness program. Such a program is iterative in nature; it keeps looping through the same phases:

  • Deliver key message
  • Reinforce that message
  • Measure the effectiveness

It iterates on two levels: the process is repeated for each key message, and as soon as the metrics show that a message from the past is deteriorating, that message needs to be iterated again.

The last part of the talk was about the pitfalls and common mistakes. The most important one for me personally: you need active support and backing from the entire management if you want a security awareness program to succeed.

A very nice and informative talk indeed :-)

Comments disabled

by admin

Over the last couple of months the amount of spam I receive through the commenting system has grown out of proportion. Each day I need to clean out about 50 spam messages.

I have decided to disable the commenting system for now, since not that many comments were logged. All the old comments remain visible, and with my next upgrade I will add an email address where comments can be sent; I will publish those manually.

I hope this leaves me more time to actually write articles, something clearly lacking the past couple of months :-)

OWASP BeNeLux days 2012 - training day - Building a Software Security Program On Open Source Tools

by admin

This was a two-day training condensed into one day for the OWASP BeNeLux days 2012; the instructor was Dan Cornell. Dan is a fast-paced talker but still easy to understand. Because there was so much material to go through, we did not play around with the tools as much as I would have liked. In this recap I will just highlight some of the things that were most interesting to me.

 

The first quarter of the talk covered the methodology used to formulate, implement and measure the quality of the security effort of an organization. The title of the training mentions the SDLC (Software Development Lifecycle) and this is taken in its broadest sense: it is not just about writing software, but also about getting it to market and maintaining it. The methodology used is OpenSAMM.

 

The first order of business for a company that needs/wants to implement more/better security in its products is to do a threat assessment. Next up is the design of a secure architecture; to do this you need to do threat modeling. You cannot defend against any and all threats at the same time, and some applications are not as critical to protect as others. Threat modeling allows a company to allocate resources correctly, since they are always a scarce commodity.

 

Dan's company has developed a tool called ThreadFix. It looks like a very good tool with loads of functionality, such as data collection and aggregation from multiple scan engines, reporting over time on the security issues found, integration with bug tracking systems, comparison of different scans, automatic generation of WAF rules for issues found and polling of those rules from the WAF, and reading of WAF logs to link issues found to actual attempts to exploit them on the WAF. ThreadFix also exposes a RESTful API and a command line interface, so you can script it to serve your needs. In fact, this tool looks to have so much nice functionality that I will write an article on how to use it later.

 

The well-known WebGoat is used to train developers and make them aware of security problems; this tool is not only useful for penetration testers.

 

The web app scanner evaluation project tests the accuracy of scanners and can help to differentiate between open source and commercial scanners. It is also a good starting point for an organization that needs to select a toolset for performing security tests. Some of these tools can be integrated in the continuous build cycle, whereas others will be used by the QA department to further check for security related issues.

 

When security related issues are found they must be rated in order to provide management with an indication of the severity. As an external pentester I use DREAD to rate vulnerabilities since it is simple to calculate and explain to developers as well as management. I let them worry about internal priorities of the different applications since you need to be in the company to judge that correctly. The most important thing when reporting vulnerabilities is to include steps for remediation.
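
As a hedged illustration of how simple the calculation is: a DREAD rating is commonly taken as the average of the five factor scores. The vulnerability and the scores below are made up:

  # DREAD sketch - each factor scored 1-10 for a hypothetical SQL injection finding.
  dread = {
    damage_potential: 8,
    reproducibility:  9,
    exploitability:   7,
    affected_users:   6,
    discoverability:  8,
  }
  rating = dread.values.inject(:+) / dread.size.to_f
  puts "DREAD rating: #{rating.round(1)} / 10"   # => 7.6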

 

According to Dan, based on his extensive experience, all security testing takes up (on average) about 30% of the development time. This gives a good indication of the amount of time needed if you have to make a very quick estimate.


Code reviews should be done using both tools and manual inspection. In an agile environment the practice of code reviews is already in use; it is just a matter of training the developers to also look at code from a security perspective and not just a code quality perspective (although one might argue that security is an aspect of high quality code). The automated tools that do code reviews are called static analysis tools. They have the benefit of being run early in development, but they tend to be rather expensive and they are notoriously bad at finding logic problems. All of them need to be configured, and the effort to set them up needs to be considered. It is clear that manual code reviews are a perfect complement to these tools.

Some examples of static analysis tools are:

  • FindBugs: for Java code; also available as an Eclipse plugin
  • CAT.NET: from Microsoft; does dataflow analysis; future plans not clear
  • Brakeman: for Ruby on Rails; installs as a Ruby gem; maintained by Twitter developers
  • Agnitio: for manual code reviews; includes a set of checklists and some grep-like search capabilities

 

For companies using a lot of Microsoft products, Microsoft has released the MBSA, or Microsoft Baseline Security Analyzer. This tool scans computers/servers and returns recommendations to improve security for products like Internet Explorer, IIS, MS SQL Server, etc.

 

Finally Dan talked about mod_security. This WAF is now also available to protect IIS and Nginx.

Book Review: iWoz - Computer Geek to Cult Icon

by admin

Title

The title of the book is iWoz and the subtitle is Computer Geek to Cult Icon: How I Invented the Personal Computer, Co-Founded Apple, and Had Fun Doing It. This is a very good summary of the content of the book :-)

Content

The book is an autobiography by Steve Wozniak and was written in part to set some 'facts' straight. It covers Steve's life from his childhood until recent years. Some of the things Steve mentions in the book contradict other books about the history of Apple; in this case I tend to believe what Steve has written, after all he was there first hand and most of the other writers were not.

I especially like that Steve explains what his reasons were (and are) for doing certain things. For example, the concerts he organized, and on which he lost a load of money, were very valuable to him simply because he was able to do something that other people enjoyed.

His great engineering skills are of course the center of the book. When he talks about designs and engineering you feel his passion and you are automatically drawn into the story.

Personal thoughts

I think this book will be liked by anybody that has an interest in engineering. At certain points Steve details how he was first to do something (e.g. add a keyboard and screen to a computer) and to some people this might come off as bragging. I think Steve just meant to note that indeed he was (to his knowledge) the first person to think about such things and put them into practice.

His style of writing is very simple; this is not a book meant to win prizes in literature, it just wants to get across Steve's life story. I liked this a lot, since you can feel Steve spent a lot of time writing (or co-writing) this book.

I liked the book a lot and would recommend it to everyone who likes technology and needs a bit of inspiration to get off the sofa and start doing something.

OWASP Belgium Chapter Meeting #4

by admin

This meeting was hosted on the first evening of the BruCon conference, on the 26th of September 2012, in cooperation with the ISSA organization. The slide decks for the presentations can be found here: https://www.owasp.org/index.php/Belgium

 

First talk: Introducing the Smartphone Penetration Testing Framework

A talk by Georgia Weidman about the framework she created for smartphone penetration testing. The framework works on both Android and iOS phones. Its menu structure is similar to that of the SET (Social Engineering Toolkit). The demos included several ways to attack smartphones and then leverage control of the smartphone to gain entry into an organization.


Second talk: Why your security products suck...


A talk by Joe McCray on the workings of web application firewalls and different ways of circumventing the WAF. The WAF blacklists are a bunch (actually a lot) of rules that stop certain parameters from being entered (e.g. "1 = 1"). These rules often work using regular expressions and can often be defeated using the following techniques (a small sketch of the first one follows the list):

  • encode the parameter multiple times in different encodings
  • create a variant of the parameter that falls outside of the regular expression (e.g. "3 > 4")
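
A small sketch of the multiple-encoding idea: a blacklist rule that only decodes a parameter once can miss a payload that was URL-encoded twice. The payload is just an illustration:

  require 'cgi'

  payload = "1=1"                  # the kind of string a blacklist rule looks for
  once    = CGI.escape(payload)    # => "1%3D1"
  twice   = CGI.escape(once)       # => "1%253D1" - after a single decode this still looks harmless
  puts once, twice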

Joe brought some security products to use in his demo and the audience had the chance to play with them also.

 

Discussion: Pentesting, legal aspects

A discussion session moderated by me in order to start a project where companies in Belgium could get templates of contracts for penetration tests. With these templates some explanation could be delivered so both parties are aware of their duties and rights.

The result of this discussion will hopefully be the start of getting such documents; further info will follow.

 

OWASP Belgium Chapter Meeting #3

by admin

This was the third chapter meeting this year and took place on the 12th of September. The slide decks for the presentations can be found here: https://www.owasp.org/index.php/Belgium

 

First talk: You are what you include: remote JavaScript inclusions

 A talk by Steven Van Acker about the dangers of including JavaScript from remote hosts in web pages.

There are three parties involved:

  •  The user browsing the website who potentially gets served malware
  •  The website owner who is responsible for including the offensive scripts
  •  The third party that is hosting the offensive script


All three parties can take actions to prevent misuse; for this I refer to the presentation itself, or to the excellent paper co-authored by Steven Van Acker.


The attack is very interesting; it works as follows:

  • Find a company that hosts JavaScript for other websites (e.g. Google Analytics) and that is used by many and/or high-traffic websites
  • Compromise the server and add malware to the JavaScript; make sure the functionality remains so no suspicion is raised
  • Watch the explosion in compromised systems from unsuspecting surfers


This talk just confirmed my resolve in blocking as many scripts as possible in my browser using the excellent NoScript plugin.


Second talk: Modern information gathering

A talk by Dave van Stein.

The main focus of the talk was techniques for information gathering that involve no contact with the target system/server. A lot of tools were discussed; some of them aggregate the results of different other tools and find relations between them.

Check out the presentation to see the tools used, many are well known but some were new to me and looked very promising.
