Keeping Up with Change – Information Handling in the Infosec World

The information security (infosec) sector moves fast – new information appears on an almost daily basis. Staying up to date therefore represents a constant challenge for people working in this area.

In this post, I want to briefly introduce the way I handle and process new information in the infosec area, together with the tools that support me in this effort.


When it comes to infosec, there is plenty of information out there on the internet – so filtering the right information out of this huge mass becomes more and more important. Furthermore, in the infosec world you want current information, since news from two years/months/weeks ago may already be outdated.

Sources of Information

The great news is: a huge amount of free, open sources for infosec information is available out there (OSINT). For instance: blogs, twitter, other social media, standard webpages etc.

Information normally reaches me in the form of articles that cover a specific topic of interest, which means it is bundled in plain-text form. There are some exceptions, such as videos and podcasts, which I process differently – but the majority consists of text.

Since most of the information published today is free to use and easy to access, my key challenges were:

  • Which sources are good and reliable?
  • From the picked sources, which information is current and solid?
  • How can I access all information in as centralized a way as possible?

These questions still represent a constant challenge, since I haven’t found the perfect solution yet. I still need to collect information from different sources in order to review it further. The review process itself has become faster over the years, but it remains a mainly manual one.

Today I use the following information sources to get the latest news from the infosec world: Twitter (infosec folks), blogs via RSS, and others (closed groups, sometimes Reddit or other social media).

The image below shows a brief overview of the information flow. This setup works quite well for me so far.

Information flow to stay up to date with the latest news in the infosec area.

Central Repository – Pocket App

After you have defined the sources of the information you intend to consume, the following question comes up: Where do you store all the interesting stuff that you need to review?

I chose the Pocket app as my central repository – why?

  • It exists for multiple devices and is easy to use
  • The free version is totally sufficient
  • There is a strong API you can use for automating some stuff
  • The information I store there is not confidential – it’s public information anyway
  • Offline reading is possible

Pocket offers tagging for articles, which I normally don’t use. All articles in the “My List” category basically represent my to-read list. Once they are read, I push them into the archive, where they remain. By doing so, I have a repository that goes back years and helps me better understand what I was reading, when, and how intensively.

Statistics for Pocket

Via the Pocket API and with the support of a simple Python script, I can easily extract information in order to calculate my own statistics. This helps me understand how many articles I read in the last months, how many are still to read, etc. I can also extract the URLs again if needed – even offline, thanks to a local sqlite database. Pocket does not offer these features by default, which is unfortunate, but not a problem due to the API. A statistics feature for Pocket was discussed on Twitter a few years ago – see here. Obviously nothing has changed here yet – but perhaps this will come in the future.

Generally, my Python program structure looks like this:
First, I extract the archive and the “My List” list via the API that Pocket offers. Then I store all content in a local sqlite database so that offline data processing is possible. Finally, I extract the fields needed for statistics and visualize them with seaborn and matplotlib.
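The storage and statistics steps can be condensed into a sketch like the one below. The schema and helper names are my own simplification for illustration; the field names `resolved_title`, `resolved_url` and `time_read` follow the Pocket /v3/get response format.

```python
# Store fetched Pocket items in sqlite and compute read counts per month.
import sqlite3
from datetime import datetime, timezone

def store_items(conn, items):
    """Persist Pocket items (a dict keyed by item_id) for offline processing."""
    conn.execute("""CREATE TABLE IF NOT EXISTS articles (
                        item_id   TEXT PRIMARY KEY,
                        title     TEXT,
                        url       TEXT,
                        time_read INTEGER)""")
    conn.executemany(
        "INSERT OR REPLACE INTO articles VALUES (?, ?, ?, ?)",
        [(item_id, item.get("resolved_title"), item.get("resolved_url"),
          int(item.get("time_read", 0)))
         for item_id, item in items.items()])
    conn.commit()

def reads_per_month(conn):
    """Return {'YYYY-MM': count} over all read (archived) articles."""
    counts = {}
    for (ts,) in conn.execute(
            "SELECT time_read FROM articles WHERE time_read > 0"):
        month = datetime.fromtimestamp(ts, timezone.utc).strftime("%Y-%m")
        counts[month] = counts.get(month, 0) + 1
    # counts can then be handed to seaborn/matplotlib for plotting, e.g.
    # sns.barplot(x=list(counts), y=list(counts.values()))
    return counts
```

Because everything ends up in a local sqlite file, the statistics can be recomputed at any time without touching the API again.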

Request Token based on Pocket Consumer Key
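This step can be sketched with only the standard library; the endpoint and the `X-Accept` header follow Pocket’s documented v3 OAuth flow, while the consumer key and redirect URI are placeholders for your own values.

```python
# Step 1 of Pocket's OAuth flow: exchange the consumer key for a
# request token.
import json
import urllib.request

POCKET_OAUTH_REQUEST_URL = "https://getpocket.com/v3/oauth/request"

def build_token_request(consumer_key, redirect_uri="https://example.com/callback"):
    """Build the POST request that asks Pocket for a request token."""
    payload = json.dumps({"consumer_key": consumer_key,
                          "redirect_uri": redirect_uri}).encode("utf-8")
    return urllib.request.Request(
        POCKET_OAUTH_REQUEST_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "X-Accept": "application/json"})  # ask for JSON, not form data

# Actually sending it requires a valid consumer key:
# with urllib.request.urlopen(build_token_request("YOUR-CONSUMER-KEY")) as resp:
#     request_token = json.load(resp)["code"]
```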

Request Archive from Pocket
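Fetching the archive is a single call against the /v3/get endpoint once you hold an access token from the OAuth flow. A minimal sketch (credential values are placeholders):

```python
# Build a request for all archived items via Pocket's /v3/get endpoint.
import json
import urllib.request

POCKET_GET_URL = "https://getpocket.com/v3/get"

def build_archive_request(consumer_key, access_token, since=None):
    """Build a POST request for all archived items (state="archive")."""
    params = {"consumer_key": consumer_key,
              "access_token": access_token,
              "state": "archive",       # "unread" would return the to-read list
              "detailType": "simple"}
    if since is not None:
        params["since"] = since         # only items changed after this UNIX time
    return urllib.request.Request(
        POCKET_GET_URL,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "X-Accept": "application/json"})

# Usage (network call, needs real credentials):
# with urllib.request.urlopen(build_archive_request(KEY, TOKEN)) as resp:
#     items = json.load(resp)["list"]   # dict keyed by item_id
```

The `since` parameter makes incremental updates possible, so subsequent runs only need to fetch items that changed after the last sync.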

On GitHub you can find various projects that try to solve this statistics topic as well – here and here. Below, you can find some of my self-made statistics from the last years.

My Pocket Statistics for 2017

My Pocket Statistics for 2018

My Pocket Statistics for 2019

The last step of my information process is publishing my monthly reading list. In this monthly overview, I include all articles that I consider interesting and useful in some way. A general overview of all the monthly summaries can be found here.


Information, and the way it is managed, is key for infosec people. I expect myself to at least try to stay up to date with what is happening in the scene – not an easy job, to be honest. However, the setup described above helps me do so.

There are multiple ways to extend the statistics – for example, analyzing what kind of articles you read so that you understand which areas you are not focusing on. But this is perhaps something for the future.
