The tutorial will be presented at the 2017 International Joint Conference on Biometrics (IJCB) in Denver, Colorado, on October 1, from 9:30 am to approximately 5 pm. In this tutorial I will present the Biometrics Framework of Bob that I developed during my stay at Idiap. The Biometrics Framework is designed to run biometric recognition algorithms in a comparable and reproducible manner on several evaluation datasets, including their evaluation protocols. Furthermore, the framework is easy to extend with other datasets, algorithms, or even biometric modalities.
In this tutorial I will introduce the core concepts of biometric recognition and how they are implemented in the Biometrics Framework. In three hands-on examples of face recognition, I will demonstrate these concepts in practice.
To actively participate in the hands-on sections of the tutorial, you will need to bring your own laptop or team up with another attendee who has one. For attendees from outside the US, power adapters from US outlets to other plug systems will be provided. A multi-core system is preferred, so that you can make use of the parallelization.
Due to limitations of Bob, we natively support only Linux-based and macOS-based systems. For all other systems, including Windows, we provide a Virtual Machine with all required software and data pre-installed.
Warning
The Virtual Machine download is 4.6 GB, and unpacking it requires at least the same amount of free disk space.
Though the Biometrics Framework is written entirely in Python, little Python programming experience is required. Most of the hands-on exercises can be completed by simply changing parameters and running scripts on the command line.
First of all, to be able to follow the tutorial, you might want to download the latest version of the Tutorial Slides. These will guide you through the process and allow you to look up instructions that are not currently shown on the video projector.
On Unix and Linux operating systems, we will be using conda to install Bob and other required software. Please install conda for your operating system, or make sure that you have the latest version of conda installed. We have provided a conda environment file, which will create a new conda environment called bob by simply calling:
$ conda env create -f bob.yml
$ source activate bob                            # activate the environment
$ python -c "import caffe; import bob.bio.base"  # test the installation
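For orientation, such an environment file is a plain YAML description of the environment. A minimal sketch might look like the following; the channel and package names here are illustrative assumptions, so please use the bob.yml provided with the tutorial:

```yaml
# Illustrative sketch only: use the official bob.yml shipped with the tutorial.
name: bob
channels:
  - https://www.idiap.ch/software/bob/conda   # Bob's conda channel
  - defaults
dependencies:
  - python
  - bob.bio.base     # the Biometrics Framework core
  - bob.bio.face     # face recognition algorithms
  - caffe            # needed for the third hands-on experiment
```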
Warning
On macOS, Caffe is not available in conda. Please remove it from the downloaded conda environment file.
macOS users who want to participate in the third hands-on experiment will need to install Caffe from the original website. For our small test, no GPU acceleration is required (there is no need to install CUDA or cuDNN; you can simply skip those steps). Make sure that the Caffe build/installation directory is in your PYTHONPATH:
$ export PYTHONPATH=/path/to/your/caffe/install/python
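The export above works because Python prepends PYTHONPATH entries to its module search path, which is how the interpreter finds the caffe package. A quick self-contained check of that mechanism (using the placeholder path from above; substitute your actual Caffe directory):

```python
import os
import subprocess
import sys

# Placeholder path, standing in for your actual Caffe installation.
caffe_python_dir = "/path/to/your/caffe/install/python"

# Launch a fresh interpreter with PYTHONPATH set, and check that the
# directory shows up on its module search path (sys.path).
env = dict(os.environ, PYTHONPATH=caffe_python_dir)
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('/path/to/your/caffe/install/python' in sys.path)"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # prints: True
```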
After installing Caffe, please run the test command line above again to verify your installation. If you have trouble installing Caffe, we can help you out during the lunch break.
In case any of the above steps fails, please consider using the Virtual Machine as described in the next section.
All experiments will be run on a small face recognition dataset, the AT&T Database of Faces (referred to here as ATNT), formerly known as the ORL database. Please download and extract the database, e.g., using:
$ wget http://www.cl.cam.ac.uk/Research/DTG/attarchive/pub/data/att_faces.zip
$ unzip att_faces.zip
Please register the directory of the database inside ~/.bob_bio_databases.txt (where ~ refers to your home directory), as described in the Database Installation Documentation of the Biometrics Framework. In our case, a single line is sufficient (please create the file if it does not exist):
[YOUR_ATNT_DIRECTORY] = /path/to/your/atnt/directory/
For example, in the Virtual Machine, the exact content of /home/bob/.bob_bio_databases.txt is:
[YOUR_ATNT_DIRECTORY] = /home/bob/Desktop/Tutorial/atnt
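The file format is a simple key-to-path mapping, one entry per line. The Biometrics Framework ships its own reader for this file; the sketch below is only an illustration of the format, not the framework's actual parser:

```python
def parse_bob_bio_databases(text):
    """Parse the simple '[KEY] = value' lines of ~/.bob_bio_databases.txt.

    Illustration only: bob.bio.base provides its own reader for this file.
    """
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        entries[key.strip().strip("[]")] = value.strip()
    return entries

# The Virtual Machine's single entry parses to a one-element mapping:
print(parse_bob_bio_databases(
    "[YOUR_ATNT_DIRECTORY] = /home/bob/Desktop/Tutorial/atnt"
))
# prints: {'YOUR_ATNT_DIRECTORY': '/home/bob/Desktop/Tutorial/atnt'}
```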
Next, you need to download the pre-trained VGG network that will be used in the third hands-on experiment, using the following commands:
$ wget http://www.robots.ox.ac.uk/~vgg/software/vgg_face/src/vgg_face_caffe.tar.gz
$ tar -xzf vgg_face_caffe.tar.gz
The default VGG network requires some changes to work as expected. Please either manually remove the last 4 layers from vgg_face_caffe/VGG_FACE_deploy.prototxt, or download the pre-adjusted network prototxt and copy it into the vgg_face_caffe directory. Also, please download the wrapper script that will be used to hook the VGG network into the Biometrics Framework of Bob.
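If you are curious what "removing the last layers" amounts to, a prototxt is plain text in which each layer is a top-level block. The following sketch trims trailing layer blocks by counting braces; it assumes top-level "layer { ... }" blocks and is for illustration only, so the safe route remains editing the file by hand or using the pre-adjusted prototxt:

```python
import re

def drop_trailing_layers(prototxt, count):
    """Drop the last `count` top-level layer blocks from prototxt text.

    Sketch only: assumes each layer is a top-level 'layer { ... }' (or
    'layers { ... }') block and matches braces to find block boundaries.
    """
    if count <= 0:
        return prototxt
    spans = []  # (start, end) offsets of each top-level layer block
    i = 0
    while True:
        match = re.search(r"\blayers?\s*\{", prototxt[i:])
        if match is None:
            break
        start = i + match.start()
        j = i + match.end() - 1  # position of the opening '{'
        depth = 0
        while j < len(prototxt):  # scan to the matching closing brace
            if prototxt[j] == "{":
                depth += 1
            elif prototxt[j] == "}":
                depth -= 1
                if depth == 0:
                    break
            j += 1
        spans.append((start, j + 1))
        i = j + 1
    # Cut the text just before the first of the trailing blocks.
    return prototxt[: spans[-count][0]].rstrip() + "\n"

# Tiny synthetic example (not the real VGG deploy file):
deploy = """name: "VGG_FACE"
input: "data"
layer { name: "conv1_1" type: "Convolution" }
layer { name: "fc8" type: "InnerProduct" inner_product_param { num_output: 2622 } }
layer { name: "prob" type: "Softmax" }
"""
trimmed = drop_trailing_layers(deploy, 2)
print(trimmed)  # keeps only the conv1_1 layer
```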
The easiest solution for Windows systems would be to install the freely available VirtualBox binaries for your system. If you already have VirtualBox installed, please make sure to update it to the latest version.
After installation, download the Virtual Machine. Double-clicking the downloaded file should add a new virtual machine to VirtualBox. Before booting it, check that the Settings fit your machine, e.g., that the assigned memory does not exceed the memory of your host system. After booting the virtual machine, you are automatically logged in with the following credentials:
Username: bob
Password: bob
This account has sudo rights, so you can install any additional software packages of your choice. Since the Internet bandwidth at IJCB might be limited, it would be great if you could install such packages before the tutorial starts.
Inside the Virtual Machine, all required packages are already installed and all required data is downloaded to the Desktop/Tutorial directory, except for the Tutorial Slides.
If neither of the two alternatives seems to work on your machine, we will be able to help you out before the tutorial starts or in the first break. In case you have any questions or suggestions regarding the installation instructions, feel free to contact me.
I hope you will enjoy the tutorial and provide feedback by citing one of our papers:
@inproceedings{anjos2017continuously,
  title     = {Continuously Reproducing Toolchains in Pattern Recognition and Machine Learning Experiments},
  author    = {Anjos, Andr{\'{e}} and G{\"{u}}nther, Manuel and de Freitas Pereira, Tiago and Korshunov, Pavel and Mohammadi, Amir and Marcel, S{\'{e}}bastien},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2017},
  pdf       = {http://publications.idiap.ch/downloads/papers/2017/Anjos_ICML2017-2_2017.pdf}
}

@inbook{gunther2016face,
  title     = {Face Recognition in Challenging Environments: An Experimental and Reproducible Research Survey},
  author    = {G{\"{u}}nther, Manuel and El Shafey, Laurent and Marcel, S{\'{e}}bastien},
  editor    = {Bourlai, Thirimachos},
  booktitle = {Face Recognition Across the Imaging Spectrum},
  edition   = {1},
  year      = {2016},
  publisher = {Springer},
  pdf       = {http://publications.idiap.ch/downloads/papers/2016/Gunther_SPRINGER_2016.pdf}
}

@inproceedings{gunther2012open,
  title     = {An Open Source Framework for Standardized Comparisons of Face Recognition Algorithms},
  author    = {G{\"{u}}nther, Manuel and Wallace, Roy and Marcel, S{\'{e}}bastien},
  editor    = {Fusiello, Andrea and Murino, Vittorio and Cucchiara, Rita},
  booktitle = {Computer Vision - ECCV 2012. Workshops and Demonstrations},
  series    = {Lecture Notes in Computer Science},
  volume    = {7585},
  year      = {2012},
  pages     = {547-556},
  publisher = {Springer Berlin},
  pdf       = {http://publications.idiap.ch/downloads/papers/2012/Gunther_BEFIT2012_2012.pdf}
}

@inproceedings{anjos2012bob,
  title     = {Bob: a free signal processing and machine learning toolbox for researchers},
  author    = {Anjos, Andr{\'{e}} and El Shafey, Laurent and Wallace, Roy and G{\"{u}}nther, Manuel and McCool, Chris and Marcel, S{\'{e}}bastien},
  booktitle = {Proceedings of the ACM Multimedia Conference},
  year      = {2012},
  pdf       = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf}
}