Securing Websites with Threat Intelligence API

Many researchers have been monitoring the malicious activities of cybercriminals who compromise vulnerable e-commerce websites in order to steal the payment card information provided by their customers. These attackers often inject a keylogger function directly into the site: a JavaScript, PHP or other keylogger installed as part of the website that captures the payment details entered by users. For example, the Magento Commerce and OpenCart platforms were affected in 2016.

It is therefore crucial to secure e-commerce websites and prevent such injection attempts. One way to minimise the chance of being hacked is to use blacklists that block connections from compromised sources before they can steal sensitive information.

To secure a website, you can use our Threat Intelligence API. The RST Cloud team updates the IP reputation lists every hour, so your site will always be protected with current data. To integrate a blacklist, you need to create a download job, for instance by putting the following command in the cron scheduler:

curl -k https://[custom-url]:[port]/api/[customer-id] > /etc/nginx/blockips.conf
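As a minimal sketch, assuming an hourly schedule and that nginx should reload its configuration after each download (the reload step and paths may differ in your environment), the cron entry could look like this:

0 * * * * curl -sk https://[custom-url]:[port]/api/[customer-id] > /etc/nginx/blockips.conf && nginx -s reload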

Once the file has been downloaded, add the following line to the nginx.conf configuration file:

include blockips.conf;
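The include directive belongs in the http context. A minimal sketch of nginx.conf (assuming the default /etc/nginx/ prefix, so the relative path resolves to /etc/nginx/blockips.conf, and a hypothetical example.com site) could look like this:

events {}

http {
    # Deny rules downloaded from the Threat Intelligence API
    include blockips.conf;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
    }
}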

The file contains a simple list of deny rules for compromised IP addresses:

deny 1.1.1.1;
deny 2.2.2.2;

RST Cloud also provides a score for each IP, so you can define a threshold and choose which IPs to block and which, for example, to redirect to a dedicated page with an additional CAPTCHA-based check.
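
As an illustration only, assuming the scored feed can be rendered into an nginx geo block with scores on a 0-100 scale (the thresholds, variable names and /captcha location below are hypothetical), the http context of the configuration could be sketched like this:

# Hypothetical geo block generated from the scored API feed
geo $rst_score {
    default 0;
    1.1.1.1 95;
    2.2.2.2 55;
}

# Map the score to an action: block high scores, challenge medium ones
map $rst_score $rst_action {
    default          "";
    ~^(100|[89]\d)$  "block";
    ~^[4-7]\d$       "captcha";
}

server {
    listen 80;
    server_name example.com;

    if ($rst_action = "block")   { return 403; }
    if ($rst_action = "captcha") { return 302 /captcha; }
}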

To illustrate the feature, we want to share a real-world example.

[Image: IP reputation real-world example effect]

Another option for securing the site is to use custom Threat Intelligence data generated from the activity of your own website's users. RST Cloud can detect suspicious and abnormal activity from crawlers that try to masquerade as legitimate visitors or that are not yet included in the IP reputation lists. We use machine learning algorithms that allow us to distinguish a person browsing a catalogue from a robot scanning it. This data is provided as a JSON structure in which every entry has IP address, user agent, abuse reason and score fields.

To block or verify such clients, an additional URL is used:

http://[custom-url]:[port]/crawlers/[customer-id]
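
As a minimal sketch only, assuming the endpoint returns a JSON array whose entries carry ip, useragent, reason and score fields (the exact field names, score scale and threshold are assumptions) and that jq is installed, the feed could be turned into nginx deny rules like this:

#!/bin/sh
# Hypothetical example entry: {"ip": "3.3.3.3", "useragent": "BadBot/1.0", "reason": "content scraping", "score": 92}
# Block every client whose score exceeds the chosen threshold of 80.
curl -s http://[custom-url]:[port]/crawlers/[customer-id] \
  | jq -r '.[] | select(.score > 80) | "deny \(.ip);"' \
  > /etc/nginx/blockcrawlers.conf
nginx -s reload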

This option is included only in the custom pricing plan but is very effective against unknown intruders.



Posted on January 04, 2017 by Yury Sergeev