Open Innovation on SMEs for Cyber Security Awareness

This post is based on my minor thesis at TU Berlin for my Masters. As the topic suggests, it is about open innovation on SMEs for cyber security awareness. Below is the introduction from my thesis. You can find the whole work in the attached PDF. Hope you enjoy the read.


Security and privacy are inevitable factors for every enterprise that deals with user data [26]. Hackers with malicious intentions always find enterprises, irrespective of the size of the organization, profitable targets for their cyber attacks. A recent study focused on the security of small enterprises in America [1] shows a clear increase in the number of security incidents in recent years. The FBI report published last summer highlights that as of late 2014, more than 7,000 US firms were victims of cyber attacks through phishing email scams that resulted in $740M of losses collectively [1]. A large number of retail shops, leisure activity businesses, hotels, health clinics and universities are being targeted by cyber criminals. A study conducted by Symantec reports that “almost half of cyber-attacks worldwide, 43%, last year were against small businesses with less than 250 workers” [1]. The new European regulations aimed at protecting customer data highlight the need for security regulations on SMEs [2][3].

There are plenty of quality defense techniques, such as antivirus software, firewalls and network monitoring tools, that make it extremely difficult for an attacker to cause damage to a network. Nevertheless, large scale cyber attacks are still witnessed in the industry. Even though security is a basic necessity, it is not always treated with enough importance. Most of the high quality security or protection mechanisms are proprietary and require a license. From a business standpoint, it is observed that startups and small and medium enterprises (SMEs) consider security an extra feature rather than a fundamental requirement, due to three main reasons. The first and foremost reason is the lack of proper awareness regarding the importance of cyber security. Sarah Green, a cyber security expert and business manager for Cyber Security at Training 2000, says that one of the most dangerous phrases used by small businesses is: “It’ll never happen to us.” [2]. The second reason is the high cost of security related services and products, which makes them hard for SMEs to afford. The third reason is the changing nature of cyber threats driven by the new technologies in use. Even large companies fall prey to new cyber attacks until their security analysts formulate and deploy protection mechanisms. Such attacks, never seen before and discovered for the first time, are called zero-days, and they are very hard to detect immediately [5]. However, the focus of the thesis is on SMEs and their challenges in relation to cyber threats. Moreover, it also concentrates on open innovation among SMEs and how this can help in facing cyber challenges collectively.


Secure session management

Session management is as important as authentication. In a stateless protocol like HTTP, the user/client is remembered by the server with the help of session cookies. A cookie is characterized by four attributes: name, length, entropy, and content.

In HTTP, there are different kinds of cookies. Session – one that lives as long as the browser is active. Persistent – a cookie that lives until a given date or time; the expiry is usually set using the ‘Max-Age’ or ‘Expires’ attributes. Secure – this ensures that the cookie is used only over an encrypted connection and is never sent as plain text. HttpOnly – this cookie is used only in HTTP headers and can’t be read or processed by client-side scripts like JavaScript, VBScript, etc.; the main idea is to prevent cross-site scripting. Then there are further variants like SameSite cookies, third-party cookies and supercookies.

Following are some important points to remember in session handling:

  • Cookies should be transmitted only as an HTTP header.
  • Never allow cookies in GET/POST parameters. This can lead to session fixation or CSRF attacks.
  • There should be different cookies after a privilege escalation (e.g. before and after login).
  • Cookies should be refreshed at regular intervals to avoid replay attacks.
  • A session cookie with higher privilege (like an authentication cookie) should always be sent with the Secure flag via HTTPS.
  • Use HttpOnly to prevent client-side scripts from reading the cookie.
  • Set the Path and Domain attributes properly. Otherwise, a cookie may be valid in unexpected domains, opening the door to XSS attacks.
  • Perform proper testing on session timeouts to make sure that a timed-out cookie is invalid. Otherwise this could lead to replay attacks.
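
As a quick illustration, here is a minimal sketch in Python (standard library only) of emitting a session cookie that follows most of the checklist above; the cookie name, domain and lifetime are placeholders of my own, not values from the original post:

```python
# Sketch: emit a hardened Set-Cookie header (names/values are placeholders)
from http import cookies
import secrets

jar = cookies.SimpleCookie()
jar["session_id"] = secrets.token_urlsafe(32)  # high-entropy random token
morsel = jar["session_id"]
morsel["secure"] = True           # only ever sent over HTTPS
morsel["httponly"] = True         # invisible to client-side scripts
morsel["samesite"] = "Strict"     # mitigates CSRF
morsel["path"] = "/"
morsel["domain"] = "example.org"  # hypothetical domain
morsel["max-age"] = 1800          # 30-minute lifetime; rotate server-side

print(jar.output())
# Set-Cookie: session_id=...; Domain=example.org; HttpOnly; Max-Age=1800;
#             Path=/; SameSite=Strict; Secure
```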

Burp Suite is an interesting tool for testing session management. Burp Proxy lets you intercept every request, look at the headers manually and inspect them. Burp Intruder can be used for testing session timeouts and termination: set it to send requests with increasing delays and then check for timeouts in the request engine. Burp Sequencer is another interesting tool. It gives a graphical representation of a statistical analysis of the bit-level as well as character-level strength of session tokens, based on the entropy and randomness of the bits within a collected sample of around 20,000 cookies.

You can find more reading here: session.

SSL/TLS interesting facts

The world of security is vast, but it never fails to amaze me. I am excited about this journey of getting to know security in more detail. I recently gave a presentation about SSL at ERNW as part of my training. Even though I had learned the protocol before, I realised there is much more to it than I knew. I feel SSL/TLS is beyond a mere protocol; it has become a standard that a huge part of the internet relies on. I really liked the book “Bulletproof SSL and TLS” by Ivan Ristic from SSL Labs. I got the book signed by the author during the last Black Hat EU, but I had never found enough time to read through the details. It was a great experience reading the book.

SSL/TLS sits between the application layer and the transport layer in the network stack. It usually acts as an encapsulation of an application layer protocol like HTTP. These days it is used in almost every field involving the internet: web browsing, email, instant messaging, VoIP, internet faxing and so on. The major idea behind SSL/TLS is to prevent man-in-the-middle (MITM) attacks. There are numerous possibilities for how and why an MITM attack occurs. It could be individual hackers trying to steal bank PINs, or even government agencies trying to spy on their own people and the people of other nations. MITM attacks can be categorized as active or passive. Active attacks mainly aim to trick authentication and impersonate another party. Passive attacks are based on capturing network traffic and analysing the packets, which can lead to information disclosure. Sometimes even encrypted traffic is analysed to find weaknesses in the encryption used. SSL was designed to prevent these different types of man-in-the-middle attacks. It provides confidentiality, integrity, authentication and non-repudiation.
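
To make the layering concrete, here is a minimal sketch in Python: the standard library’s ssl module wraps an ordinary TCP socket, and everything the application sends afterwards is protected by TLS. The host name is a placeholder of mine, not from the original post:

```python
# Sketch: wrap a TCP connection in TLS; certificate and hostname are verified
import socket
import ssl

hostname = "example.org"                 # placeholder host
context = ssl.create_default_context()   # sane defaults, cert validation on

with socket.create_connection((hostname, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version())  # e.g. 'TLSv1.3'
        print(tls_sock.cipher())   # negotiated cipher suite
```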

The protocol itself is very interesting to learn. There are a bunch of attacks on SSL/TLS, and it is also interesting to look at the defense mechanisms and the evolution of TLS from 1996 until 2016: 20 years of growth. You can check the presentation slides for more details.



Authentication

Every application that needs to identify its users requires a security mechanism to keep track of logins and perform access control. The world of the web is not safe, and an authentication mechanism is a necessity for every application you might want to build or use.

There are mainly three kinds of authentication:

  • Knowledge based: something you know, e.g. a password, the TAN sent to you via phone, a passphrase, etc.
  • Ownership based: something you have, e.g. smart cards, YubiKeys, certificates.
  • Biometric: something you are, e.g. fingerprint, iris.

In a system that requires strong security, it is highly recommended to implement means from at least two of the above categories. This is generally called multi-factor authentication. On the other hand, there are 2-step verification and similar approaches that use two authentication mechanisms which both fall into the same category. Google's 2-step verification is an example of this.
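
As an illustration of a common second factor, here is a hedged sketch of TOTP (the time-based one-time password scheme behind many authenticator apps, RFC 6238) using only the Python standard library; the secret is the well-known Base32 test value used in many demos, not anything from the original post:

```python
# Sketch: RFC 6238 time-based one-time password (TOTP)
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # time step counter
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder test secret
```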

Web applications make use of various authentication protocols, mainly basic access authentication, digest access authentication, form based authentication and OAuth. You can read about the protocols in detail here.



Heartbleed

I recently got an opportunity to present on Heartbleed at my new workplace, ERNW. I took some time to do a detailed study of the vulnerability, and I am quite amazed by its simplicity compared to its huge impact. I know there are plenty of posts about Heartbleed; it is one of the most talked-about attacks of recent times. Here is a brief summary of the key points.

Heartbleed is an implementation vulnerability in Heartbeat, a subprotocol extension that adds “keep-alive” functionality to TLS. This functionality checks whether the other party in the communication is still available. It is mainly aimed at DTLS, which runs on top of protocols like UDP that, unlike TCP, have no flow management. An attacker can receive up to 64 KB of leaked data with one single heartbeat request, and since the heartbeat request is considered genuine, no traces are left in any logs. The heartbeat extension can be advertised by both the client and the server, which means the vulnerability can be used to attack either side.

What exactly is the vulnerability?

The Heartbeat extension mentioned above allows requests with a variable payload size, and the sender itself declares the length of the payload it sends. A vulnerable server never checks this declared length against the number of bytes actually received: it simply echoes back as many bytes as the request claims. If the claimed size is greater than the real payload, the server ends up leaking adjacent content from its memory, which may contain sensitive information.
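
For illustration, here is a hedged sketch in Python of what the malformed heartbeat message looks like on the wire. The message layout follows RFC 6520; the payload and claimed length are arbitrary examples of mine:

```python
# Sketch: a heartbeat message whose declared length exceeds its real payload.
# Layout per RFC 6520: type (1 byte), payload length (2 bytes), payload.
# (RFC 6520 also requires at least 16 bytes of random padding, omitted here.)
import struct

HB_REQUEST = 0x01        # heartbeat_request message type
actual_payload = b"ping" # only 4 bytes are really sent...
claimed_length = 0x4000  # ...but we claim 16 KB

heartbeat = struct.pack(">BH", HB_REQUEST, claimed_length) + actual_payload

# An RFC-conformant implementation silently discards this message because
# the declared length does not match the bytes received; vulnerable OpenSSL
# versions echoed back claimed_length bytes straight from process memory.
```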

Why is Heartbleed an implementation error?

The Heartbleed vulnerability is found only in OpenSSL versions 1.0.1 through 1.0.1f; the rest of the versions are secure. As per the TLS RFC, a heartbeat request may be sent only after both the client and the server have completed the handshake. Unfortunately, the vulnerable versions of OpenSSL allowed heartbeat requests right after the Client and Server Hello. This means that even before encryption begins, a rogue client or server can send a malicious heartbeat request that results in a sensitive data leak.

How can we fix the issue?

Upgrade OpenSSL to version 1.0.1g or above. In case of a compromise, make sure that the certificates are revoked and new keys are issued, because there is a good chance that your private key was leaked. Similarly, user credentials should be changed to ensure an attacker cannot reuse stolen ones. You could also recompile OpenSSL with the heartbeat extension disabled as a temporary fix.
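
A quick convenience check (a sketch, not a full vulnerability scan) is to see which OpenSSL build your Python runtime links against:

```python
# Sketch: print the linked OpenSSL version; 1.0.1 through 1.0.1f are affected
import ssl

print(ssl.OPENSSL_VERSION)       # e.g. 'OpenSSL 1.0.1f ...' would be vulnerable
print(ssl.OPENSSL_VERSION_INFO)  # version tuple for programmatic checks
```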

If you would like to read more, please find the PDF: Heartbleed.

Getting started with network security monitoring

Network Security Monitoring (NSM) can be broadly classified into three areas: collection, detection and analysis [11]. There are plenty of tools designed exclusively for each area. Tools like tcpdump [14], Wireshark [13], tshark [16], DNSstats [15], Bro [17] and Chaosreader [18] come under collection tools. Their main job is to collect data from an interface and save it as a dump; they are also called sensor tools. The next category is detection, which consists of two subcategories: signature based and behaviour based detection. These tools are used for detecting malicious behaviours or signatures in pcaps. Snort [19] and Suricata [20] are the main signature based detection tools available. They work mainly with signatures developed by analysing malware samples. The main inefficiency of this approach is that only ‘known’ malware is detected; zero-days and new versions of previous malware can easily evade such detection. Behaviour based detection is discussed in detail in the following sections. The third category, analysis, comes into play after an incident (security breach) is reported. These tools allow deeper packet analysis and wide inspection of packet data, with support for marking, easy traversal and so on. Sguil [21], Squert [22], Wireshark and ELSA [23] are examples in this category. In total, an NSM tool should contain a collector, a detector and an analysis interface for inspecting the data. The research problems associated with each category are different; my analysis was mainly on collection and detection.


The two main questions to ask here are: 1) what data needs to be collected (indexing, filters used), and 2) what line rate is supported. Some of the important libraries to consider in this area are libpcap [24] (WinPcap on Windows), used in tcpdump and tcpdump-netmap; n2disk10g [25] from ntop; and FloSIS [2]. FloSIS [2] and n2disk10g [25] follow a different approach compared to libpcap: they are designed to keep up with multi-gigabit speeds (10 Gbps) on commodity hardware. Their designs make efficient use of the parallelism capabilities of CPUs, memory and NICs, and they maintain two-stage real-time indexes (Bloom filters) to speed up query responses. Libpcap, the most widely used library, is not efficient on high speed networks (it works well at rates around 400-500 Mbps), but it is the best library available for free and ready to use. N2disk10g [25] requires a license, and FloSIS is not open source and is not available.
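
As a small taste of the collection side, here is a minimal sketch using scapy, a Python library built on top of libpcap. Scapy is my choice for illustration (it is not one of the tools cited above), and the interface name and filter are placeholders:

```python
# Sketch: capture 100 HTTP packets from an interface and store them as a pcap
from scapy.all import sniff, wrpcap

packets = sniff(iface="eth0", filter="tcp port 80", count=100)  # BPF filter
wrpcap("http_capture.pcap", packets)  # dump for later detection/analysis
```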


There is plenty of research based on statistical data and session data. Below I discuss only those areas that are relevant to HTTP protocol analysis and the corresponding detection mechanisms. This relates to HTTP header and payload analysis without deep packet inspection.

There are correlation techniques where the length of the URL, the request type (GET/POST), the average amount of data in the payload, hosts that rely on the same web-based application (which could be the C2 server), etc. are used for clustering HTTP-based malware. TAMD [5] (traffic aggregation for malware detection) describes a technique of hashing blocks of payload and performing matches to identify correlations, as well as capturing syntactic similarities in the payload; however, this is applied to spam emails and not to application data bytes. PAYL [6] describes a payload inspection technique based on profiling normal payloads. It calculates the deviation of a test payload from the normal profile using a simplified Mahalanobis distance, and a payload is considered anomalous if this distance exceeds a predetermined threshold. A very similar approach is SLADE (Statistical PayLoad Anomaly Detection Engine), as used in BotHunter [7]. It uses n-byte sequence information from the payload, which helps in constructing a more precise model of normal traffic than the single-byte model used in PAYL [6]. In another approach, the payload of each HTTP request is segmented into a certain number of contiguous blocks, which are subsequently quantized according to a previously trained scalar codebook [3].
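
To make the PAYL idea [6] concrete, here is a hedged sketch of the scoring step: profile normal payloads by their byte-frequency distribution, then flag payloads whose simplified Mahalanobis distance from that profile exceeds a threshold. The training data and example payloads are toy placeholders of mine:

```python
# Sketch: PAYL-style payload anomaly scoring with a simplified
# Mahalanobis distance over byte-frequency profiles.
import numpy as np

def byte_freq(payload: bytes) -> np.ndarray:
    """Relative frequency of each of the 256 byte values in a payload."""
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    return counts / max(len(payload), 1)

def train_profile(normal_payloads):
    """Mean and standard deviation of byte frequencies over normal traffic."""
    freqs = np.stack([byte_freq(p) for p in normal_payloads])
    return freqs.mean(axis=0), freqs.std(axis=0)

def score(payload: bytes, mean, std, alpha=0.001) -> float:
    """Simplified Mahalanobis distance: sum |x_i - mean_i| / (std_i + alpha)."""
    return float(np.sum(np.abs(byte_freq(payload) - mean) / (std + alpha)))

# Toy usage: train on two 'normal' requests, score a shellcode-like payload
mean, std = train_profile([b"GET /index.html HTTP/1.1",
                           b"GET /about.html HTTP/1.1"])
print(score(b"GET /index.html HTTP/1.1", mean, std))  # low score: normal
print(score(b"\x90" * 64 + b"/bin/sh", mean, std))    # high score: anomalous
```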

I hope you enjoyed the read. More on network security monitoring later.


[1] Behavioral Clustering of HTTP-Based Malware and Signature Generation Using Malicious Network Traces

[2] FloSIS: A Highly Scalable Network Flow Capture System for Fast Retrieval and Storage Efficiency

[3] Measuring normality in HTTP traffic for anomaly-based intrusion detection


[5] Classifying HTTP Traffic in the New Age

[6] K. Wang and S. Stolfo. Anomalous payload-based network intrusion detection. In Proceedings of the International Symposium on Recent Advances in Intrusion Detection (RAID), 2004.

[7] BotHunter: Detecting Malware Infection Through IDS-Driven Dialog Correlation

[8] Know your digital enemy

[9] Comparison on dump tools:


[11] Sanders, Chris and Smith, Jason. Applied Network Security Monitoring: Collection, Detection, and Analysis.

[12] Bejtlich, Richard. The Practice of Network Security Monitoring.

[13] Wireshark:

[14] Tcpdump:

[15] Dnstats:

[16] Tshark:

[17] Bro:

[18] Chaosreader:

[19] Snort:

[20] Suricata:

[21] Sguil:

[22] Squert:

[23] ELSA:

[24] Libpcap:

[25] n2disk:

[26] TRE:

[27] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Comput. Surv., 31(3):264–323, 1999

Study of GhostNet

I wrote a report about GhostNet for one of my courses. GhostNet is an advanced persistent threat (APT) originating from China. I thought it might be an interesting read for those who would like to learn about advanced persistent threats in general, with GhostNet as a case study.

Abstract: The report discusses the history, motivation, operation and detection of an advanced persistent threat (APT) called GhostNet. GhostNet originates from China, with Tibet as one of its main targets. The history and political background of the conflict between Tibet and China over more than 50 years is a strong motivation for the emergence of the threat. Even though GhostNet began with Tibetan organisations among its main targets, the attack spread to about 103 countries, infecting machines in governmental organisations and NGOs. The attack makes use of the typical phishing email technique: the email contains contextually relevant information that makes it easy for an unsuspecting user to fall into the trap. The attack also includes the use of a command and control server. The report concludes with relevant detection techniques and countermeasures for protecting a system against GhostNet and similar advanced persistent threats.

PDF: GhostNet.

Enjoy reading!🙂