Category Archives: Featured

Advanced RAM Analysis course – 4th to 7th December 2017

The Advanced RAM Analysis course will be held in Bristol in the UK from the 4th to the 7th of December 2017.  This is a rare chance to benefit from this course in the UK; spaces are limited, so book early.

This course focuses on RAM analysis for the digital investigator and what can be obtained in addition to any disk analysis.

You can check out the syllabus in the Advanced Live Forensics flyer. (We have now added TrueCrypt and BitLocker decryption!)

The cost will be £1650 + VAT, or £1999 + VAT residential.

To book, fill out the form below and we will get in contact with you as soon as possible:

Your Name (required)

Your Email (required)

Number of places (required)

Your Message


The King and the Apple

As the Court case between the FBI and Apple has gone away (for now), I offer a cautionary tale. To back door or not to back door – that is the question.


The King and the Apple

Once upon a time there was a Kingdom in the west ruled by a powerful King.  The people were affluent and happy and ate a lot of apples.  In the eastern part of the Kingdom was a man who made locks.  His locks were so good that most people in the land used them, and they had made him a very wealthy man.

Although it was a generally good Kingdom there were, as everywhere, those who would undermine the security of the Land, who would break into houses and steal the possessions of others.  There were even those who hated the Kingdom and would stop at nothing to terrorise it and its people.

One day the King visited the lock maker.  The King asked that the lock maker make him a special key that would let him enter the houses of those that would cause him trouble.  He promised that the key would only be used to open the doors of the worst criminals. This, he said, would make the Land more secure. The lock maker loved the King and wanted to do all he could to help.  He wanted everyone to be secure, so he made him the special key.

For a while the King used the key to open the houses of just the worst criminals, but it worked so well that the King realised that, with more keys, it could be used to open the houses of anyone he didn’t like.  The lock maker wasn’t happy, but the King had said that it would make the Land more secure, so he made lots of keys for all his enforcement militia.

Although most of the militia were loyal to the King there was one man serving who was also giving information to the criminals for money.  He gave them a key.  Now the criminals could open all the doors.  The people heard about it and didn’t feel more secure, quite the opposite.

Meanwhile in a Kingdom far away in the east they had also heard about the lock maker’s amazing locks and had been buying them for many years.  However, the Emperor heard about the special key that could open any of the locks.  Knowing that it could be done, he tasked all his wisest subjects to find a way to make a special key.  It didn’t take long.  Now the Emperor could open all the locks, including the locks in the western land.

The King in the west heard that the Emperor could open all the locks; he didn’t feel so secure anymore.

Later, a young lock maker’s apprentice heard from the girlfriend of the friend of a person in the militia that you could make a key to open all the locks.  The young man was very intelligent and in no time at all had worked out how to make the key.  He believed that everyone should know how to do it, so he wrote to every newspaper in every town in the Kingdom so that anyone could make such a key.

Now, no one felt secure, not the people, not the criminals, not the terrorists, not the militia, not even the King.  No one bought the wonderful locks anymore and the wealthy lock maker went out of business.

However, in the land in the East a new clever young lock maker made an incredible lock that no one could open.  Everyone bought his locks and felt secure again.

A few months later the Emperor of the eastern land stood before the young lock maker.  The Emperor explained that if he could make him a key to open the new locks, that it would make everyone more secure.

The King in the West learned that he could no longer open locks, but the Emperor could, and tragically choked on the apple he was eating and expired.

Finding your external IP address

As I carry out a significant amount of OSINT work I often bump into the problem of needing to enumerate IP addresses, including knowing what my own external IP address is.  Simply running ifconfig (or ipconfig in Windows) will provide my internal addresses but not the internet-facing address on the router.  This is especially important when trying to ensure that you are hidden from a target.  It could be that I connect to a VPN or proxy elsewhere in the world, but how can I be sure that my IP address is hidden?

A student on my recent Advanced OSI course related a story of a colleague researching a very dangerous group who suddenly realised that their VPN software had crashed and that their Police IP address was now visible in their target’s logs.  Not good!

There are loads of tools, especially Firefox plugins, that will report your IP and the IP of the site you are on; WorldIP is a favourite.  However, I wanted to write a small program that would monitor my IP and report if it changes.  I also wanted to be able to write a tool to do batch look-ups of domains and IPs and extract their geolocation information.

I stumbled across freegeoip.net.  It is a simple IP look-up site, but with an API.  It allows 10,000 look-ups per day for free, which is more than enough (for most days!).

To use it, just type the following into your browser –

freegeoip.net/csv

and it will return information about your own external IP address into a CSV file.  Lovely!  The results look like this…

217.42.***.***,GB,United Kingdom,ENG,England,Bristol,BS3,Europe/London,51.43,-2.61,0

You can also specify /xml, /json and /jsonp.

Adding a URL or IP address to the query will return the information about that address…

freegeoip.net/csv/ibm.com

…and it returns…

129.42.38.1,US,United States,NY,New York,Somers,10589,America/New_York,41.33,-73.70,501

or if you specify /xml…

<Response><IP>129.42.38.1</IP><CountryCode>US</CountryCode><CountryName>United States</CountryName><RegionCode>NY</RegionCode><RegionName>New York</RegionName><City>Somers</City><ZipCode>10589</ZipCode><TimeZone>America/New_York</TimeZone><Latitude>41.325</Latitude><Longitude>-73.698</Longitude><MetroCode>501</MetroCode></Response>

To do this programmatically, perhaps from a shell script, I can just use wget:


wget freegeoip.net/csv/ibm.com

Using this I can write a simple background tool that monitors my IP address and notifies me of any change.  It will also be easy to have a tool which can be pointed at a text file of IPs or domains and returns all the information to me.  That will save loads of time.
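As a taster, here is a minimal sketch of the monitoring idea (my own rough version rather than the finished tool; it assumes the freegeoip.net/csv endpoint shown above and polls every five minutes):

#!/usr/bin/env python3
# rough sketch: poll freegeoip.net and report when the external IP changes
import time
import urllib.request

URL = 'http://freegeoip.net/csv'    # returns a CSV row about our own IP
POLL_SECONDS = 300                  # check every five minutes

last_ip = None
while True:
    row = urllib.request.urlopen(URL).read().decode().strip()
    ip = row.split(',')[0]          # the first CSV field is the IP address
    if last_ip and ip != last_ip:
        print('IP CHANGED: %s -> %s' % (last_ip, ip))
    last_ip = ip
    time.sleep(POLL_SECONDS)

The batch version is then just a loop over the lines of a text file, fetching /csv/<address> for each entry and appending the results to a single output file.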

I’ll post the tools when I’ve done them.

Recreating files from the Volatility MFT parser

I was teaching RAM analysis at the Swedish Police Academy this week, which included a segment on parsing the MFT out of memory.  This is an extraordinary capability that gives an investigator a view of the disk which they may not otherwise have: perhaps the RAM was captured but the plug was pulled on an encrypted disk, or covert imaging considerations ruled out taking the disk itself.

Each MFT entry is 1024 bytes, taken up by the file name, the accessed, modified and created dates, and so on.  However, if this housekeeping data plus the file data comes to less than 1024 bytes, then the raw file data, the hex, is written into the MFT entry itself (a ‘resident’ file) rather than out onto the disk somewhere.

What file is going to be less than 1K, you may ask, and the answer is: quite a few.  Think JavaScript files, small HTML pages, PGP signatures, small graphics, text files, printer drivers; the list goes on.

The MFT parser is simple to run once you have an instance of Volatility running (see https://code.google.com/p/volatility/)

python vol.py mftparser -f <path to RAM image> >> <path to text file>

When you view the rather verbose output from the Volatility MFT parser you stumble across entries that look like Fig 1:-

Fig 1

Here we can see a GIF image named AU_bg_TopMiddle[1].gif, created way back in May 2005.  The text output contains three columns: the virtual file offset addresses, the raw hex, and the ASCII interpretation of the hex.

One of the students said it would be cool if we could ‘carve’ the original file out of the MFT result.  Of course you could simply use Foremost, PhotoRec or a host of other data carvers on the RAM dump itself and the image would be found, but it would have no file name, no metadata, and be completely unattributable.

So, this seemed like a good idea to try.

I copied the $DATA chunk out of the text file but, being a text file, it was completely unfriendly, dragging the other two columns with it.  See Fig 2.

Fig 2

Next I manually deleted all the addresses, then all the interpreted ASCII to leave myself with the raw data. (Fig 3)

Fig 3

Then I fired up the awesome WinHex and attempted to import the data as ASCII-hex, but it was rather unhappy with the carriage returns and spaces.  After about 20 minutes of faffing about I eventually managed to get the raw data sat in the WinHex window, and it saved as a tiny, pointless GIF.  But it worked!

That evening at the hotel I decided that a Python script was in order and a few hours later I had finished MFT2File.py.  You can download it here.

Life is now much easier.  Copy the chunk of data out of the MFT output, from below the $DATA line to the end of the interpreted ASCII, into a text editor like Notepad++ and save it in the same folder as MFT2File.py (to make life easier).  Also, make a note of the original file name.

Next, open a command shell, ‘cd’ into the folder with the .py and text files in it, and run:-

python mft2file.py

First it will ask for the name of the original file.  Make sure you at least get the extension right.

Second, it will ask you for the filename of the text file you made (and the path, if you didn’t put it in the same folder like I suggested). Fig 4

Fig 4

That’s it.  The file will be magically recreated in the same folder as the .py file.
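For the curious, the guts of the job are just a matter of stripping the dump back to raw bytes.  Here is a minimal sketch of the idea (my reconstruction rather than the script itself; the regex assumes the offset / hex / ASCII column layout shown in Fig 1):

#!/usr/bin/env python3
# sketch: rebuild a file from the hex column of an mftparser text dump
import re
import sys

# an offset column (with or without 0x or :), then up to 16 hex byte pairs
LINE = re.compile(r'^\s*(?:0x)?[0-9a-fA-F]+:?\s+((?:[0-9a-fA-F]{2}\s+){1,16})')

def rebuild(dump_path, out_name):
    data = bytearray()
    with open(dump_path) as f:
        for line in f:
            m = LINE.match(line)
            if m:
                # drop the spaces and decode the hex pairs to raw bytes
                data += bytearray.fromhex(''.join(m.group(1).split()))
    with open(out_name, 'wb') as out:
        out.write(data)

if __name__ == '__main__':
    rebuild(sys.argv[1], sys.argv[2])   # e.g. chunk.txt AU_bg_TopMiddle[1].gif

Run against the saved chunk, it spits the original bytes out under whatever name you give it.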

These files are small, and some are text anyway, like cookies and HTML files.  However, it is great to see them as the original file, and it was a fun project.

P.S. Michael Ligh from the Volatility dev team just let me know that Volatility 2.4 will have a dump option to achieve this, which is superb!

Mapping Corporate infrastructure with Open Source data

Whilst teaching my recent OSI course we spent a good deal of time mapping the online infrastructure of a company using Maltego.  The footprinting ‘machines’ are really superb and if you haven’t played with the tool, go get it now!

Later in the day we were extracting company employee data from resources such as Data.com and LinkedIn, and one of the students tried mapping the data with the Import option in Maltego.  He had mapped employee name to office location, and the map provided an immediate view of the approximate physical infrastructure, with large numbers of employees naturally oriented to HQs and small numbers to sub-offices.  It was interesting to see.  We did some standard research and the ‘map’ had been correct in identifying the primary HQ and sub-offices.

Of course, the output is only as good as the data, but this is where a tool called Jigsaw comes in (http://www.pentestgeek.com/tools/).  Jigsaw was a business-style social network where, if you uploaded your contacts database, you gained access to the huge online repository.  It became so good, especially in the US, that it was bought by Salesforce and re-branded Data.com.  The Jigsaw tool was incredibly good, as you could extract vast amounts of information on company employees from the Jigsaw database, so Salesforce obfuscated the data to make it fairly useless to the researcher.  However, it still provides a partial name, job role, office location and other useful data if we are purely looking for sets of information.

The Jigsaw tool is no longer available to the public but can be found on Kali (www.kali.org).  I won’t talk you through running it, it’s pretty self-explanatory, but you start by simply running a search on the company of choice:-

jigsaw -s BankofAmerica

Searching for Bank of America provided over 9000 employee records, which I duly downloaded to a CSV file.  Next, do a ‘data’ import into Excel, comma delimited, and save as an .xls file.


Next, use the Import tool in Maltego and map the Employee field to a Person entity, Department to the Shop entity and the City to the Location entity.  When I tried to import the entire 9000 records, Maltego tried to generate over 28000 nodes and edges and simply fell over; however, I re-imported selecting every 3rd record, which worked fine.
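If you would rather not thin the file by hand, the every-3rd-record trick is a one-minute script.  A quick sketch (the filenames are just placeholders):

import csv

# keep the header, then write only every 3rd record so Maltego's import copes
with open('boa_jigsaw.csv', newline='') as src, \
     open('boa_jigsaw_thinned.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))   # keep the header row
    for i, row in enumerate(reader):
        if i % 3 == 0:              # every 3rd record
            writer.writerow(row)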

During the import process you are asked to map columns to each other.  Map Person to Location and Person to Shop.

Once imported, select the bubble view and interactive organic mode.  This has the effect of clustering related data together.  What is interesting is that employees are naturally ‘drawn’ to their City, and departments primarily located in those Cities are also attracted.

The Bank of America physical infrastructure

We can see straight away that the largest yellow node (Location), bang in the middle of the cluster map, is Charlotte; essentially, most employees in the database say they work there.


A quick check online shows that Charlotte is indeed the BoA HQ.  The next ones are New York and the surprising locations of San Francisco, Miami, Plano and Wilmington.  This helps us to identify the primary locations at a glance.

Next, the grey nodes are Departments.  Again the map shows that most work, unsurprisingly, in Finance and Administration, followed by IT & IS, Support, Marketing and Operations.  This can really help us to visually map out the organisation, giving us an idea of the comparative sizes of departments.


I am going to do a little more work on clustering Department to Location to help us know where primary departments are located.  I’m not suggesting that this leaks anything particularly bad or dangerous, but it is an interesting view from which a social engineering attack could begin.  It could be that a company hides (or at least doesn’t actively publish) its Research department locations, but this approach could identify them.

——————–

OK, I’ve spent another hour playing around and there is some interesting data you can get from this view.  I mapped purely Location to Department and it was immediately apparent that I could quickly see where departments were and, more importantly, were not.


For example, we can see that IT & IS is in virtually every office; however, Human Resources departments only appear to be represented in about 8 primary locations.  This would be vital information for a social engineer, who could otherwise make a simple error, such as claiming in a phishing phone call to be calling from HR in Atlanta when there is no (or at least no large) HR department in Atlanta.

Interesting stuff, have a play and let me know your findings.

Extracting recent contacts from OSX Mail

Having spent the best part of the last decade working on live forensic techniques, I’ve begun to turn my attention to OSX.  I’m an unashamed MacHead but have not spent much time thinking about ways to extract data from a live machine.


Knowing who a suspect speaks to or emails can be very useful in an investigation, and so I’ve started looking at the email system in OSX.  The built-in email app, Mail, is very widely used and connects to the OSX Address Book for the management of contact data.  However, tucked away in a SQLite table is a large list of ‘Recent Contacts’, which contains the name and email address of recently contacted people who may or may not be in your standard contacts.

You can see this list by opening OSX Mail and browsing to Window – Previous Recipients.  This opens a box with all the recent contacts but, apart from being able to add a contact to your main contacts, there is no way to export them.

I’ve written a small shell script to extract the names and emails from the SQLite table and pop them in a CSV file for you.

The code is very simple, just 2 lines:-

echo 'First Name,Surname,Email Address' > ~/Desktop/recentcontacts.csv

This simply writes the column headings to a CSV file on your Desktop.

sqlite3 -csv ~/Library/Application\ Support/AddressBook/MailRecents-v4.abcdmr 'select ZFIRSTNAME, ZLASTNAME, ZEMAIL from ZABCDMAILRECENT;' >> ~/Desktop/recentcontacts.csv

This opens the MailRecents SQLite database and pulls out the first name, last name and email address, writing them to the CSV file on your Desktop.

Easy!

For ease just drop the file somewhere, ‘cd’ to it and run – ./recentexport.sh

If it doesn’t run you might have a permissions issue so just type – chmod +x recentexport.sh

You can download the tool here.
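If you prefer Python, a minimal equivalent sketch using the standard sqlite3 module looks like this (same table and columns; the path assumes the default Mail recents database the shell script reads):

import csv
import os
import sqlite3

db = os.path.expanduser(
    '~/Library/Application Support/AddressBook/MailRecents-v4.abcdmr')
out = os.path.expanduser('~/Desktop/recentcontacts.csv')

conn = sqlite3.connect(db)
rows = conn.execute(
    'select ZFIRSTNAME, ZLASTNAME, ZEMAIL from ZABCDMAILRECENT;')

with open(out, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['First Name', 'Surname', 'Email Address'])
    writer.writerows(rows)
conn.close()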

Hope it’s useful to you.

iPhone Video Metadata

It’s an iPhone!

First question: if you start a sentence with the word iPhone, should you capitalise the ‘I’?  Answers on a postcard please.

The second question came from a law firm that I often assist with digital forensics cases.  When an iPhone is used to take a video which is then distributed, does the video contain any device ID information that can be used to trace it back to the original phone?

The answer, somewhat surprisingly knowing Apple, appears to be no.  I cannot find any reference to the serial number, IMEI or ICCID numbers within the file, although it is possible that the data is there but obfuscated in some way.

Whether it is there or not, looking at iPhone movie data is very interesting.  We are all used to the vast amount of metadata embedded within a photo, but movies are a bit more of a dark area with not much written about them.  The movies are based around the QuickTime file format, which is well documented by Apple here – http://developer.apple.com/library/mac/documentation/quicktime/qtff/qtff.pdf

The file type is awash with metadata, some of which is used by default in the iPhone and much of which is not.  Although there does not appear to be anything to specifically identify the iPhone which shot the video, there are some useful bits of data which could help.  I have focused on a video shot by an iPhone 5 and then emailed out of the device.

The QuickTime structure is based around atoms and keys.  Atoms are small 4-character tags such as ‘prfl’ for profile, ‘tkhd’ for the track header and many, many more.  There are also keys that are of specific interest to us, as they contain the primary metadata that we may want.  The keys are identified by the ‘mdta’ atom and take the form of ‘com.apple.quicktime.author’, for example.

At offset 0x04 you come across the ‘ftyp’ atom which identifies the type of video to follow.  The iPhone uses QuickTime and so the tag which follows is ‘qt’.
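If you want to eyeball the atom layout without a hex editor, here is a rough sketch of a top-level atom walker (each atom starts with a 4-byte big-endian size, which includes the header, followed by the 4-character type):

#!/usr/bin/env python3
# rough sketch: walk the top-level atoms of a QuickTime file
import struct
import sys

with open(sys.argv[1], 'rb') as f:
    offset = 0
    while True:
        header = f.read(8)
        if len(header) < 8:
            break
        size, kind = struct.unpack('>I4s', header)
        if size == 1:
            # a 64-bit extended size follows the type field
            size = struct.unpack('>Q', f.read(8))[0]
        print('0x%08x  %s  %d bytes' % (offset, kind.decode('latin-1'), size))
        if size == 0:
            break                # the atom runs to the end of the file
        offset += size
        f.seek(offset)           # jump straight to the next atom

On an iPhone movie this should list ‘ftyp’, the big ‘mdat’ and then ‘moov’.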


Next is the ‘mdat’ atom which I guess stands for movie data and contains the data related to the movie itself.


Next is the ‘moov’ atom, which partly indicates that the movie came from a Mac platform, i.e. the iPhone.  The ‘moov’ atom has a number of sub-atoms, which brings us to the area we are interested in.

Once we pass all the obvious movie data we pick up a ‘keys’ atom, which is then followed by the metadata itself, identified by the atom ‘mdta’.


There are several interesting tags here.

©mak«Apple – This identifies that the movie came from an Apple-manufactured device.  Although this might sound obvious, we might have a series of videos from a suspect’s computer that we think he may have taken.  However, if he is an Android and PC user then this would reduce the likelihood that he created them.

©swr«6.1.4 – This is rather useful as it tells us the iOS software version that was installed at the time the video was taken.  Again, a scenario could be that a suspect accuses his co-defendant of shooting a video, but we note that the co-defendant’s iPhone is running an earlier iOS version, making it extremely unlikely that it was him.

©day«2013-05-27T21:38:21+0100 – This provides us with the time and date that the video was shot.  Helpfully, this date does NOT change when the file is moved, emailed or uploaded.  This provides a solid line in the sand as to when the video was made.  The time is also adjusted from UTC, so we see the real-world time it was created.

©xyz«+52.5461-002.6371+115.546 – The ‘©xyz’ tag provides GPS location data from the GPS chip in the phone.  Although not delimited, we can divide it up to provide:-

x – +52.5461 (the latitude)

y – -002.6371 (the longitude)

z – +115.546 – Given the ISO 6709 key used below, this third value is most likely the altitude in metres rather than a compass direction.

This data depends on location data being turned on for Photos in the Privacy tab in Settings.
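Splitting the string up programmatically is trivial; a tiny sketch:

import re

def parse_xyz(s):
    # each component is a sign followed by digits and a decimal point
    parts = [float(p) for p in re.findall(r'[+-][0-9.]+', s)]
    return parts[0], parts[1], (parts[2] if len(parts) > 2 else None)

print(parse_xyz('+52.5461-002.6371+115.546'))
# -> (52.5461, -2.6371, 115.546)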

©mod«iPhone 5 – This is great; it doesn’t just tag the device as an iPhone but as an iPhone 5.  Again, this may help us to identify the phone in a case that shot a video.  So we know the video was taken by an Apple iPhone 5 with firmware 6.1.4 on 27/5/13 at 21:38:21 at a specific location.  That’s not bad information.

All the information is then repeated using different tags as follows:-

mdtacom.apple.quicktime.make

mdtacom.apple.quicktime.creationdate

mdtacom.apple.quicktime.location.ISO6709

mdtacom.apple.quicktime.software

mdtacom.apple.quicktime.model

So can we identify a specific device that shot a video?  Not definitively, no.  However, we may have a case where a number of phones are seized, perhaps a couple of Androids, an iPhone 3 and an iPhone 5.  They may all have the same video on their phones showing illegal activity and be accusing one another of shooting it.  In this case we may have sufficient metadata to pinpoint the culprit.

When I first started looking at this I assumed that it was a purely academic exercise, as our normal forensic tools probably report this data, but it seems not.  A quick look in FTK with my test video only showed the operating system dates, created, modified etc., and not the embedded video creation date.  There was also no extraction of ANY of the metadata we have discussed: no model, firmware, GPS data, anything!  Obviously you can manually work through the hex to find the tags, but it could easily be missed if we don’t know it’s there.
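To save working through the hex by hand, here is a rough scanner sketch (hedged: it is not a proper QuickTime parser, it simply prints the first printable run of text after each tag):

#!/usr/bin/env python3
# rough sketch: find the Apple (c) tags in a .mov and print the text after them
import string
import sys

TAGS = [b'\xa9mak', b'\xa9swr', b'\xa9day', b'\xa9xyz', b'\xa9mod']
PRINTABLE = set(string.printable.encode()) - set(b'\r\n\t\x0b\x0c')

data = open(sys.argv[1], 'rb').read()

for tag in TAGS:
    idx = data.find(tag)
    if idx == -1:
        continue
    # skip past the structure bytes after the tag, then collect the
    # first run of printable characters that follows
    value = bytearray()
    for b in data[idx + 4: idx + 64]:
        if b in PRINTABLE:
            value.append(b)
        elif value:
            break
    print('(c)%s  %s' % (tag[1:].decode(), value.decode()))

Against my test video this should pull out the make, software version, date, location and model in one pass.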

Hope that’s helpful to you.

Welcome to the new CSITech web site

It’s the ‘Home’ button from an iPhone. Trying to be imaginative!

After many years in the planning and execution (otherwise to be interpreted as a lackadaisical attitude to getting it done) we have at last launched a new web site.  It’s all based around WordPress, to make it easier to edit and publish, and I have been very impressed at the ease with which you can build and work on the site.  The downside is that there is a very good chance that a new WordPress vulnerability will be found in the next year, which will ensure that I end up as part of a vast pharmaceutical botnet.  Every silver lining has a cloud!

The site is rather simple and highlights the things we do to earn a living.  It also, rather vitally, lists all the upcoming courses we teach, along with a downloadable syllabus, costs and dates.  Each course page also allows you to request a booking of places.  In the next few months I hope to launch a calendar page with online payments too.

My blog will also move here from nickfurneaux.blogspot.com so bookmark the page for updates.  Each blog post will also have a Tweet notification so please remember to follow @nickfx.

Thanks to Zane Clements for helping me put the site together.

I hope that the following 2 minutes that you spend perusing the site will be a good use of your life!

Nick Furneaux

Maltego Machines and Other Stuff

Once again it has been several lifetimes of certain moths since I wrote a blog post.  I have been trying to write the text for my new web site whilst also writing a book.  That’s right, loyal follower, I am writing a book!  The working title is Weaponizing Open Source Intelligence; obviously, for those of you in the UK, it will be Weaponising!  It should be pretty interesting, covering not only advanced open source techniques but also how to understand how the data can be ‘weaponised’ into an attack against you or your organisation.  Should be good!

Anyway, 2 weeks back I taught the first Advanced Open Source course to international acclaim and applause; well, all the students thought it was epic and enjoyed it.  The highlight seemed to be the real-world exercises, where you do everything from hunting down bad guys to planning an attack on a company.  Loads of fun.

A good chunk of the course is focused on the Maltego tools, CaseFile and primarily Radium, which, frankly, rocks.  If you haven’t seen the tool before, take a look at Paterva’s YouTube channel at http://www.youtube.com/user/PatervaMaltego.  It is essentially a graphing tool to assist with ‘automated’ open source intel gathering.

One of the interesting things about Radium is the ability to write your own Transforms (searches), but also to code up your own Machines, essentially daisy-chaining commands together so that they run automatically.

During the course we had a segment given online by social engineering guru Chris Hadnagy, where we discussed the identification of key people within an organisation as targets for phishing attacks and the like.  It can also be useful to identify people who may know each other for the same purpose.  Obviously we are not teaching this to enable an actual attack, but rather to identify vectors that could be used by an attacker against us.

I thought it would be interesting to create a Radium Machine that would accept the input of a Domain, extract 50 or so documents and then rip out the metadata in those documents, hopefully giving us real names, email addresses and the like.  Then we can remove any data that only appears once, working on the principle that we would like to identify people who have authored many documents.  I had a good go at writing it, and thanks to Andrew at Paterva, who tidied it up and made sure it worked properly.

If you have a version of Radium simply click the Machines tab, then Manage Machines, then New Machine.  You can type any old rubbish into the dialogue as it will be overwritten by this code anyway.  The code looks like this; simply cut and paste it into the code window and press the ‘tick’ button to compile:-

————————————–

machine(
    "MetadataMachine",
    displayName: "Metadata Machine",
    author: "Nick Furneaux (thanks to Andrew)",
    description: "Finds documents and their metadata for a domain and then deletes any documents where the metadata is not found in more than one document"
)
{
    start {

        /* Find all documents and then their metadata */

        // Get documents
        status("Searching for Documents")
        log("Finding Documents....", showEntities:false)
        run("paterva.v2.DomainToDocument_SE", slider:100)

        // Get metadata from documents
        status("Extracting metadata")
        log("Extracting metadata", showEntities:false)
        run("paterva.v2.DocumentToPersonEmail_Meta")

        /* Remove all entities that have fewer than 2 incoming links */

        // now we select any people, phrases and email addresses
        type("maltego.Person", scope:"global")
        incoming(lessThan:2)
        delete()

        type("maltego.Phrase", scope:"global")
        incoming(lessThan:2)
        delete()

        type("maltego.EmailAddress", scope:"global")
        incoming(lessThan:2)
        delete()

        /* Remove any remaining documents that no longer have children */

        type("maltego.Document", scope:"global")
        outgoing(0)
        delete()

        /* Ask if you would like more work to be done on any extracted email addresses */

        type("maltego.EmailAddress", scope:"global")
        userFilter(title:"Choose Email Addresses", heading:"Email", description:"Please select the email addresses you want to do more research on.", proceedButtonText:"Next>")
        run("paterva.v2.EmailAddressToPerson_SamePGP")
    }
}

——————————-

The first command that runs, looks at the Domain you have supplied and goes looking for Office or PDF documents posted to that Domain.

run("paterva.v2.DomainToDocument_SE", slider:100)

Next these documents have their metadata extracted.

run("paterva.v2.DocumentToPersonEmail_Meta")

Then we remove any metadata entity that has fewer than 2 links to it.

// now we select any people, phrases and email addresses
type("maltego.Person", scope:"global")
incoming(lessThan:2)
delete()

type("maltego.Phrase", scope:"global")
incoming(lessThan:2)
delete()

type("maltego.EmailAddress", scope:"global")
incoming(lessThan:2)
delete()

Lastly, we display any email addresses and ask if you want more work done.  At the moment it just looks at a PGP key server and tries to extract the registered name for that email address, which could be useful.  We could do a web search for sites containing that address too.

userFilter(title:"Choose Email Addresses", heading:"Email", description:"Please select the email addresses you want to do more research on.", proceedButtonText:"Next>")
run("paterva.v2.EmailAddressToPerson_SamePGP")

As code goes this is pretty simple, and it can help to automate tasks that you run regularly.  Interestingly, the code also enables you to set timers to run the script every minute, hour or whenever, which could be very useful for monitoring a specific Domain for new activity.

That’s all for now.  If you want to learn more about the Advanced Open Source Intelligence course you can download a syllabus here – www.csitech.co.uk/Advanced_OSI_Syllabus.pdf.