Update on URL Crazy

by admin

My article on URL Crazy and email spying was updated with a little more content and published in Auditing and Standards Magazine. The magazine is not free, but if you would like a discount on the subscription cost, send me a message ...


A teaser for this magazine can be downloaded here.

Anatomy and mitigation of different DOS attacks

by admin

This is a reprint of part of an article I wrote for PenTest Magazine. I will publish only one chapter here as a teaser; if you want to read the full article, go and buy the magazine ;-)

I'm going to discuss some examples of different DOS attacks here. In the magazine you will additionally find a generic description of how DOS and DDOS attacks work, how IP spoofing can and cannot be used, how to mitigate DOS attacks, and some interesting links for further reading.


Teaser chapter: Examples of DOS attacks

Here are some examples of DOS attacks, each showing a different way of attacking a service/server.


An attacker sends UDP requests to a server (typically a game server running pirated software such as a Quake III server; these usually run older, unpatched versions of the server software). The attacker spoofs the source IP address, using the IP of the server he wants to attack, so the game server sends its answer to the server under attack. A small request, such as asking for the server's game status, results in a reply with a larger amount of data, and since many of these game servers exist around the world, the attacker can start filling up the incoming network bandwidth of the attacked web service.
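The bandwidth multiplication described above can be sketched as back-of-the-envelope arithmetic. All numbers below are illustrative assumptions, not measurements of any real game server:

```python
# Back-of-the-envelope estimate of a UDP reflection attack's amplification.
# Request/reply sizes and reflector counts are assumed, illustrative values.

def amplified_bandwidth(request_bytes, reply_bytes, reflectors, requests_per_sec):
    """Return (amplification_factor, bytes_per_second arriving at the victim)."""
    factor = reply_bytes / request_bytes
    victim_traffic = reflectors * requests_per_sec * reply_bytes
    return factor, victim_traffic

# e.g. a 50-byte status query that triggers a 600-byte reply,
# bounced off 1000 game servers at 10 queries per second each:
factor, traffic = amplified_bandwidth(50, 600, 1000, 10)
print(f"amplification factor: {factor:.0f}x")        # 12x
print(f"traffic at victim: {traffic / 1_000_000:.0f} MB/s")  # 6 MB/s
```

The point of the sketch is that the attacker only spends the small request bandwidth; the reflectors pay for the large replies.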


An attacker can bring down a mail server by misusing the 'undeliverable' functionality of misconfigured servers. Some mail servers send a message for each person in the 'to' field of an email for which the email is undeliverable. Email addresses are also easily spoofed. This attack works by finding several misconfigured mail servers and sending them an email whose 'to' field contains hundreds of email addresses which the attacker knows do not exist on the misconfigured server. By putting an existing email address in the 'from' field, the mailbox of this person will receive hundreds of undeliverable messages for each email the attacker sends out. The multiplication of data is so large that with a very small number of emails an attacker can bring down a mail server.
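A quick sketch of the multiplication at work. The function and numbers below are invented for illustration, not measured values:

```python
# Sketch of the bounce-message multiplication in the 'undeliverable' attack.
# Counts and the per-bounce size are assumed, illustrative values.

def bounce_volume(emails_sent, bad_addresses_per_email, bounce_bytes):
    """Return (number_of_bounces, total_bytes) landing in the spoofed 'from' mailbox."""
    bounces = emails_sent * bad_addresses_per_email
    return bounces, bounces * bounce_bytes

# 100 emails, each addressed to 500 nonexistent users, ~2 KB per bounce:
bounces, total = bounce_volume(100, 500, 2048)
print(bounces, total)   # 50000 bounces, ~100 MB of bounce traffic
```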


An attacker creates an HTML page that requests the largest object (usually a picture) from the website he wants to attack; he then distributes this page to either a botnet or an army of volunteers. The page contains JavaScript that refreshes it every second, so each second the picture is requested by thousands of clients, eventually filling the outgoing network bandwidth of the server under attack.
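To see how quickly this fills an outgoing pipe, a small calculation helps. The client count and object size are illustrative assumptions:

```python
# Illustrative numbers (assumptions) for the image-refresh attack:
# each client re-requests the largest object on the site once per second.

def victim_outgoing_bandwidth(clients, object_bytes, refreshes_per_sec=1):
    """Bytes per second the attacked server must serve."""
    return clients * object_bytes * refreshes_per_sec

# 5000 clients refreshing a 2 MB picture every second:
bps = victim_outgoing_bandwidth(5000, 2 * 1024 * 1024)
print(f"{bps / 1024 ** 3:.1f} GB/s of outgoing traffic")
```

Even a modest crowd of refreshing clients demands far more outgoing bandwidth than a typical web server has.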


The so-called Smurf attack works by sending an ICMP echo request to the directed broadcast address of a network. In many cases all nodes in the network will return an answer (an ICMP echo reply). By spoofing the sender IP address to the address of the server he wants to attack, the attacker can have all these answers sent back to the server he wants to bring down. An attacker needs to find a network with sufficient nodes (i.e. enough hosts that answer) in order to bring down a server. The Smurf attack was widely used in the '90s, but since mitigation is easy it is not common these days. The first fix came in 1999; since then the default behavior of a router is no longer to forward ICMP echo requests sent to broadcast addresses. To further mitigate the attack, a network administrator can configure hosts and routers not to respond to ICMP echo requests.


A variation on the Smurf attack is the so-called Fraggle attack. It works along the same lines, but instead of sending an ICMP echo request to the directed broadcast address, it sends UDP packets. The same mitigations apply, and it is also no longer considered a common attack.


A SYN flood attack exploits the TCP three-way handshake. Normally, when two computers set up a TCP connection, host A sends a packet with the SYN flag on, host B replies with a packet with the SYN and ACK flags on, and host A completes the setup by replying with a packet with the ACK flag on. The attack works by sending the server a packet with the SYN flag on from a spoofed IP address that will not reply. The server under attack sends a SYN/ACK packet to that IP address, waits a short time, notices that no ACK came back, and resends the SYN/ACK. It keeps doing this until a set 'time out' period has elapsed. All this time (on a Windows NT machine this could be up to three minutes!) the resources for the connection remain reserved for the spoofed IP address. If an attacker sends enough packets, using a different spoofed IP each time, the attacked server will have no resources free to set up any real connections.
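The resource exhaustion can be shown with a toy model. This is not real networking: the table size, timeout, and attack rate below are all assumed values chosen to make the effect visible:

```python
import itertools

# Toy model of a SYN flood: the server reserves one slot per half-open
# connection and frees it only after `timeout` ticks, so spoofed SYNs
# (whose ACK never arrives) exhaust the table. All parameters are assumptions.

class SynQueue:
    def __init__(self, max_half_open, timeout):
        self.max_half_open = max_half_open
        self.timeout = timeout
        self.half_open = {}   # spoofed client ip -> tick at which the slot expires
        self.tick = 0

    def step(self):
        """Advance time one tick and expire old half-open slots."""
        self.tick += 1
        self.half_open = {ip: t for ip, t in self.half_open.items() if t > self.tick}

    def syn(self, ip):
        """Handle an incoming SYN; return False when the table is full."""
        if len(self.half_open) >= self.max_half_open:
            return False      # further clients, legitimate or not, get dropped
        self.half_open[ip] = self.tick + self.timeout
        return True

q = SynQueue(max_half_open=128, timeout=180)   # e.g. a 3-minute timeout in seconds
spoofed = (f"10.0.{i // 256}.{i % 256}" for i in itertools.count())

dropped = 0
for _ in range(30):            # attacker sends 10 spoofed SYNs per tick
    for _ in range(10):
        if not q.syn(next(spoofed)):
            dropped += 1
    q.step()

print("half-open slots used:", len(q.half_open))
print("connections dropped:", dropped)
```

Because the timeout (180 ticks) is far longer than the attack window, no slot is ever freed: the table fills and every later SYN, including a legitimate one, is refused.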

Securing a Linux desktop part 1: removing unwanted services

by admin


There are several approaches to securing a software platform such as a desktop computer. I like to use this sequence when I try to get better security in place:

  1. reduce the attack surface

  2. have more than one mitigation in place against a certain attack

  3. patch and upgrade

  4. try to add defenses when new types of threats arise

Reduce the attack surface

This is actually a very simple principle: if your desktop does not use wireless connections, then it cannot be attacked through a wireless network. In practice this becomes a little more difficult; there are many programs you want or need on your desktop, and each of these is a possible target for an attacker. On a Linux system a lot of services run in the background (the same goes for Windows and Mac OS), and some of these you do not really need. By removing them you reduce the potential targets.

Some of these services are started when you boot your computer. You may need one of these services, but not all the time (e.g. a wireless network connection is only needed when you do not have access to a network cable). Such services will not be removed, but they can be stopped and started manually when they are needed. This reduces the time an attacker has to attack these services.

Additional benefits

By removing services from your system you gain some benefits other than reducing the attack surface, such as a faster boot time and a slightly faster machine. Every service that gets removed no longer needs to be started at boot time, and while your desktop is running that service is no longer using CPU and RAM. The benefits might not be noticeable on a modern machine though.


Since I'm going to fiddle with the services and daemons running on my system, I run the risk of ruining it, certainly since I do not know what each and every running service does. By installing the same setup my desktop has in a virtual environment, I can safely play with the services without much risk. It is important to have the same software packages running in the virtual environment, since some software packages make use of services or have their own services running.

Getting the list of services started at boot time

In Linux this is quite easy: open a terminal and type 'systemd-analyze' to see how long your system needed to boot. Running the same command with the parameter 'blame' gives a list of all services started and their start-up times. Since we are not interested in reducing boot time (that is just a nice side effect of increasing our security), I ordered the list alphabetically.

Removal procedure

Now we need to decide what services we are going to remove. In order to make this decision we need to determine exactly what the service does and why it would be needed in our system.

The procedure goes as follows:

  1. get the first service from the list

  2. find out what the service does

  3. do you need this service at all?

  4. do you need it started at boot time or can you start it when actually needed?

  5. if the service is removed or changed to start manually reboot and check everything is still working

  6. check the service off the list and go back to step 1

To find out what a service does you can simply search the internet or use the command 'systemctl status service', where service is the name of the service (including the .service suffix).

I'll illustrate this with an example. The first service in my list is abrt-ccpp.service. According to the internet this is part of the Automated Bug Reporting Tool. I do not need this service; if I find a bug in Fedora I can always report it manually. This service can be removed.

Removing this service is easy: in a terminal, use the command 'sudo chkconfig abrt-ccpp off'. Note that you have to drop the '.service' suffix or the command will fail.

In the case of the abrt-ccpp service, my research also showed that there are two other services related to the Automated Bug Reporting Tool (abrt-oops and abrtd), so I cheated the procedure a bit and removed them as well in the same iteration.

After a reboot I ran systemd-analyze again and my boot time was one second faster than before. I also checked the blame list to make sure the services were actually removed from the boot sequence. I then made a quick check to see that all my programs were still in working order.

My list of services

After going through this procedure until all services were checked, I ended up with the list shown at the bottom of this article. I added a column describing the purpose of each service, as well as the result of the investigation into whether I needed it or not.


A special note on the ip6tables service: by removing this service I removed the firewall from all IPv6 connections on my system. The reason I did this is that I completely disabled the IPv6 stack on my system; I do not need IPv6 at my workplace nor at home. Since IPv6 differs so much from IPv4, and since I'm not an expert in IPv6 security, I opted not to use it at all. This is a good practice should you ever visit a conference or use free wifi in public places.

A good article on removing ipv6 from your system and some of the security concerns can be found here.

There were some services I disabled using the following commands:

ln -s /dev/null /etc/systemd/system/udev-settle.service

ln -s /dev/null /etc/systemd/system/fedora-wait-storage.service

ln -s /dev/null /etc/systemd/system/fedora-storage-init.service

ln -s /dev/null /etc/systemd/system/fedora-storage-init-late.service


Some final thoughts


I have removed no fewer than 21 services from my boot sequence and made my machine a little bit more secure. It was not without risks; removing services can crash your machine if you are not careful, but it is certainly worth the effort.


Services list


Name Purpose Result
abrt-ccpp Automated Bug Reporting Tool removed
abrt-oops Automated Bug Reporting Tool removed
abrtd Automated Bug Reporting Tool removed
accounts-daemon Accounts service start at boot
auditd Logs to a separate log file, if removed logs go to syslog removed
avahi-daemon mDNS/DNS-SD daemon implementing Apple's ZeroConf architecture removed
boot.mount Loads the /boot and is needed start at boot
console-kit-daemon Console manager start at boot
console-kit-log-system-start Console manager startup logging start at boot
cpuspeed Throttles your CPU runtime frequency to save power. start at boot
cups Network printing services start when needed
dbus Software communication protocol start at boot
fedora-autoswap Enables swap partitions start at boot
fedora-readonly Configures read-only root support start at boot
fedora-storage-init-late I don't use RAID or LVM so I do not need this removed
fedora-storage-init I don't use RAID or LVM so I do not need this removed
fedora-wait-storage I don't use RAID or LVM so I do not need this removed
hwclock-load System clock UTC offset start at boot
ip6tables Firewall removed
iptables Firewall start at boot
irqbalance Needed for multicore CPUs start at boot
iscsi I don't have iscsi removed
iscsid I don't have iscsi removed
livesys-late live CD left over removed
livesys live CD left over removed
lldpad Needed for fiber channel over ethernet, I don't have that removed
lvm2-monitor I don't use RAID or LVM so I do not need this removed
mcelog Log machine check, memory and CPU hardware errors start at boot
mdmonitor Software RAID removed
netfs Mount network file systems, I need this but others might not ... start at boot
NetworkManager Networking start at boot
portreserve I only had cups in here and since I removed that I can remove this removed
rc-local Needed in boot process and shutdown process start at boot
rsyslog System logging start at boot
rtkit-daemon Realtime Policy and Watchdog Daemon start at boot
sandbox Used by SELinux start at boot
sendmail I use thunderbird so I do not need this removed
smolt Monthly information sent to Fedora to assist developers removed
systemd-readahead-collect Faster boot start at boot
systemd-tmpfiles-setup Prepare /tmp start at boot
udev-settle I don't use RAID or LVM so I do not need this removed
udev-trigger Device management start at boot
udev Device management start at boot

An update on disclosure

by admin



Today Bruce Schneier wrote a short article on full disclosure; it is a good starting point to learn more after reading my article on responsible disclosure.

On web surfing and privacy

by admin

Is there any privacy when surfing?


Basically, no.

When you connect to the internet (legally) you will use an ISP (internet service provider). That ISP has data on you in order to send you bills. When you start surfing the web, all your requests go through this ISP, and as such this company knows everything you do (i.e. they know what websites you visit, what you download, etc.).


How to improve your privacy from your ISP


First, you could use a proxy server for surfing. If your country is blocking a certain website (for example because it hosts pirated software and movies), then changing the way you connect to the internet will circumvent this block. A nice list of proxy servers can be found here. There are several other good lists; a simple Google search can point you in the right direction.


The main advantage of these servers is that your personal data (for billing) is separated from your surfing behavior. Your ISP will only see you going to the proxy server's website, and although it is possible to log what you send to that website, an ISP will not do that unless a court order is issued (simply because the amount of data that would have to be stored is impractical). This is an important step in having a little more privacy.


Any website that allows you to encrypt the communication channel (this means any website whose address starts with https) stops the ISP from listening in on the communication, because the communication is secured (hence the s in https) by encryption. In practice this means your ISP can watch you navigate to your Gmail account, but they cannot see your password, since that is sent through an encrypted channel; everything you do while in Gmail is encrypted as well.


How to fool tracking through advertisements


On many websites you can see advertisements, and many of these are hosted through one company. Whenever you visit a website, any advertisement on it can send your data back to the company that hosts the advertisement. The data captured could include your IP address, the details of your browser (and all plug-ins installed), the referring website (i.e. where you were before visiting this website), etc. With enough data, such as installed plug-ins and tracking cookies, an advertisement company can create a pretty good profile of you.


So you want to block advertisements and tracking cookies. In Firefox there is an option to tell websites you do not want to be tracked. I have this enabled, but I do not put a whole lot of confidence in it. I also block advertisements using a plugin (AdBlock Plus).

As a second layer of security I also use the Ghostery plugin to make sure tracking is very difficult by blocking as many known trackers as possible.


I'm not claiming every advertisement company does this, but there is no way for us to check it, so better safe than sorry.


How to improve privacy by disabling statistics gathering on websites


There are software packages for webmasters that allow them to gather statistics on the usage of their website. Some of these packages have the ability to share data with third parties or the software vendor; Google Analytics is a good example. You can share the data with third parties, and you as a webmaster are supposed to make sure the data is anonymous. I'm not sure what the EULA says about sharing with Google itself; it is not explicitly mentioned that the data must then also be anonymised.


I'm not claiming that every webmaster misuses the data gathered during a visit to their website, but again, since we cannot check this, I prefer not to give them my data.


I installed the NoScript plugin, which blocks a lot of these statistics-gathering utilities. As an added benefit it also blocks any malicious script by default; I need to allow the scripts I want to run. This does change your surfing experience, since any Flash website will be blocked by default and you will need to manually (temporarily!) allow the website in order to see its content.


What about Tor?


Tor is the next thing you might want to install in your browser. It will protect your data while en route on the internet (similar to having https). However, the entry and exit points of the Tor network are not protected, and it will slow down your surfing experience a lot. If you are not doing anything politically dangerous or illegal, then in my opinion the benefits of Tor do not outweigh the drawbacks.


I do use the HTTPS Everywhere plugin to ensure that for every site where there is a possibility to use https, I actually do so.


What about surfing using another account?


There are many ways to achieve this.


The easiest is going to an internet cafe and surfing from there. This might add a little privacy: the only thing that changes is that it is now the internet cafe that has your personal data (often some form of ID is needed) instead of the ISP. The internet cafe will probably keep your data and the logs of your internet usage for a while; there is probably a law somewhere that dictates what they are supposed to keep and for how long. It will be a lot more difficult for companies to get this information, unless the internet cafe is actually selling the data (after removing personal information).


The same can be said for libraries; in libraries too you will need some form of ID. Since libraries are often linked to the government, it is actually even worse than surfing from an internet cafe; you might as well just send all your data immediately to Big Brother ;-)


Another way is to use another person's (wireless) internet access. This would add privacy, but it is also illegal if done without permission.


by admin

As some of you might have noticed the blog was taken down by my hosting company yesterday.

The problem was that I was consuming too many resources.

I'm looking into this issue, and in the meantime my hosting company was so nice as to bring the blog back online.

This also means that if the issue reoccurs, the website will be taken down again, so there might be more downtime while I fix the issue(s).

I apologize for this inconvenience.

On ethical hacking, colored hats and hacktivism

by admin

What is ethical hacking?

Ethics are a somewhat subjective set of rules that people follow. When it comes to ethical hacking, the 'ethical' part usually means not doing anything illegal and following the responsible disclosure rules (as discussed in a previous article).


There is also something called the hacker ethic; it is a completely different subject, discussing what a 'hack' is, how the title of hacker can be earned, etc. More information can be found in this wikipedia article.


What are the different colored hats and what does it mean?

This is actually quite easy: a white hat keeps everything completely legal, a black hat is a criminal, and a grey hat is a person who tries to keep things legal but might break a law 'in good will', or because he has been given approval by the owner of a system to attack it with means that are considered illegal (e.g. a DOS attack or social engineering).


Social engineering is a good example of an illegal attack that is often used in penetration testing (at least in Belgium it is illegal to impersonate someone else, and recent changes in the law also cover things like setting up a Facebook account under another name than your own). The tester might use these techniques, which are actually considered illegal, even though the owner of the system he is attacking has given him approval to do so. I do not know of any legal cases that have occurred as a result of this; since the owner is not going to press charges there will probably never be a court case, and for one to happen a third party would need to file a complaint and somehow prove that the penetration test damaged him. I do know that there are legal cases where the owner of a system accuses the penetration tester of going outside the rules they had agreed on, but that is of course a different matter.


What is hacktivism?

The term hacktivism refers to a person or group of people who use hacking as a means of protest. A hacker who wants to support the anti-whaling movement might, for example, deface the web pages of whaling companies with disturbing pictures of whales being cut up.


Hacktivism by itself is a form of protest; it can be legal or illegal depending on the means used. Creating a website to show pictures of a whaling fleet and the names of companies that support it is not illegal; defacing websites not owned by the hacktivist of course is.


It is possible for people and groups to take things a step further and go from hacktivism to being vigilantes. There are people out there, for example, who break into websites serving child porn, steal data on the users that surf to these websites, and then publicize this data to show who is looking at this child porn. Although this is strictly speaking illegal, there is wide support for these vigilantes from the 'outside world'.


There are of course also groups, such as Anonymous, whose actions fall between hacktivism and vigilantism, often using illegal means of protest. Just as in real life, there are many shades of grey here.


by admin

I'm going to update my blog software as well as my statistics software. This will cause a small outage on the website this morning.

I expect the blog to be back and fully operational in two hours.

I need to install new versions because several security issues have been identified and solved by both the blog software supplier and the statistics software supplier.


Many thanks for your patience.


And we are already back and operational, the updates were a lot faster than planned :-)

On responsible disclosure

by admin

What is responsible disclosure?

This is part of the ethical hacking mind set.

This is my opinion: when any person finds a vulnerability in software, that person should report it to the software supplier (this could be a vendor like Microsoft or a single person maintaining a small open source tool). After the initial report, that person should give the supplier sufficient time to fix the issue and release a patch or new version before releasing the information to the public. It does not really matter whether the issue is security related or just a layout problem in a GUI, because the issue might be combined with another feature and/or issue to cause more serious problems. I also think that many people who discover issues cannot predict whether an issue has far-reaching consequences or not, since a future discovery might elevate a previously found issue to a whole new level. The fact that the reporter gave the supplier sufficient time to fix the issue after reporting it is of course the 'responsible' part of the responsible disclosure process; the 'disclosure' means the reporter will eventually make all information concerning the issue public.

By making public not only the issue but also how it was found, the reporter can help other people look for similar issues in other pieces of software. If we follow this procedure, we as a software-using community help suppliers make better software.


Responsibilities of the reporter

The reporter should provide the supplier with as much information as possible; this ensures that the supplier can pinpoint the issue and create a good solution for it.


The reporter should wait long enough before disclosing the issue. How much time is 'sufficient time'? That of course depends on a lot of different things, such as the severity of the issue, the resources available to solve issues for that software product, etc. As the reporter of the bug you might indicate a date on which you wish to disclose the information; the supplier can always ask you to delay that date for a good reason.


Responsibilities of the supplier

There are a number of people who search for issues in software and make a living out of it (such as software or security researchers), and there are companies that pay people who find bugs in their products (such as Google and Facebook). Because of this it is important to be able to show that you were the (first) person to find a certain issue, since it might bring you something that helps you in life (money, reputation ...). And since you cannot disclose the issue before a patch has been created and released, and you have given the people using the software time to install the patch, another person might claim he found the issue (that person might not feel the need for responsible disclosure). This is where the supplier plays an important role: the supplier can easily show who was the first to discover the issue, and who discovered the same issue before it was publicly known (thus also being a 'first' to discover it, but perhaps not receiving the money for it).


What companies should never do is sue people who stick to responsible disclosure for disclosing issues. If a company fails to react to the report of an issue, or if the time they ask for to fix it is deemed too long by the reporter, the reporter might feel obliged to make the information public without waiting for a patch. They do this to force the supplier to solve the issue. There are even companies out there that threaten to sue you for making an issue public.

Why your security should be like Shrek

by admin

What is the relation between security and Shrek?

Shrek is an ogre, ogres are like onions, onions have layers, thus Shrek has layers. Security should also have layers, and thus security should be like Shrek (in the sense that it has layers, not that it is comical, and certainly not that it smells).


Now that I have explained the title, we can go on with the real stuff: why does security need to be layered?


Reason 1: Because you want to spread risk

Security is all about risks and risk management: you are constantly weighing the chance of something happening (e.g. an SQL injection attack), the chance that it is successful, and the amount of damage that could be done. The more measures you have in place to foil a certain attack, the better, since there is always a risk that one of your measures has a weakness.


An example should clarify this. Suppose we are taking measures against an SQL injection attack (SQLI). Our first line of defense could be the framework our application is using (e.g. parameterized queries as used in Hibernate). We could stop here and hope that there will never be a way to exploit the framework for SQLI. However, someone could decide that this is not secure enough, since a successful SQLI would be a marketing nightmare even if no useful data could be stolen. So we add another layer of defense in the form of a Web Application Firewall (WAF). At this point we can keep adding more defenses if desired, as long as they do not use the same techniques to stop the attack (because then they would simply be exploitable in the same way). Yet another layer of defense could be a more secure way of coding (e.g. using the OWASP Enterprise Security API (OWASP ESAPI)). This gives us three layers of defense using three different techniques: parameterized queries in the Hibernate framework, white- and blacklisting of strings in the WAF, and input sanitization in the ESAPI.
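As a minimal sketch of the first layer named above, parameterized queries, here is the same idea in Python's sqlite3 (standing in for Hibernate; the table, data, and payload are invented for illustration):

```python
import sqlite3

# Demonstration of why parameterized queries stop SQL injection.
# The table, row, and payload below are made up for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"   # classic SQLI payload

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: the driver sends the payload as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()

print(vulnerable)   # [('s3cret',)] -- the injection succeeded
print(safe)         # []            -- the payload matched nothing
```

The WAF and ESAPI layers would then catch cases where a query somewhere in the application was not parameterized, which is exactly the point of stacking technically different defenses.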


In a perfect world you will have at least two (technically different) defenses against every possible attack.


If you want to protect against the OWASP top 10 threats, that would mean 20 defenses. Luckily some solutions work against multiple attacks (e.g. OWASP ESAPI will help defend against SQLI and XSS and a number of other attacks).


Reason 2: Because of the OSI model

The OSI model is valid for every application that uses some form of communication. Since the model is all about the communication to and from your application (i.e. it is network related), and we want this communication to be secure, we should protect each of its 7 layers with some form of security.


One form of protection immediately comes to mind: the firewall. A firewall can protect communications from layer three and up. However, you will probably need more than one type of firewall, since I do not know of a single firewall that protects all layers from three to seven.


The example I gave in reason 1 does NOT protect any of the layers in the OSI model, since it protects against an attack on the application itself, not against an attack on the communication to and from the application.


There is a very good paper from SANS on the protection of data through all layers of the OSI model. It handles everything discussed here in great detail, and better than I could explain it, so I will not go deeper into this subject in this article.


Reason 3: Because there are many types of attacks possible

This should be a no-brainer! It is only logical that no single defense can protect against any and all possible attacks that exist or will be discovered in the future. If you ever see such a claim (e.g. by a vendor of a security product), you should stay away from anything that vendor does, because clearly they are not afraid to lie to their (potential) customers.


I would advise you to have some kind of defense in place that can monitor and report suspicious behavior, and perhaps automatically block it. A system like that could actually protect you from an unknown threat. A good example of such a tool is the OWASP AppSensor project. It does require you to program hooks into your application so the AppSensor can monitor behavior, but I've seen some very impressive implementations of it, so I believe it is worth the investment.



I think I made my point as to why you want security to be like Shrek: it should have layers like an onion (but not smell like one), and it should also scare potential attackers like an ogre would.
