Open postdoc position at LIMSI combining machine learning, NLP, speech processing, and computer vision.
pyannote-video: a toolkit for face detection, tracking, and clustering in videos
Create a new conda environment:
$ conda create -n pyannote python=3.6 anaconda
$ source activate pyannote

Install pyannote-video and its dependencies:
$ pip install pyannote-video

Download the dlib models:
$ git clone https://github.com/pyannote/pyannote-data.git
$ git clone https://github.com/davisking/dlib-models.git
$ bunzip2 dlib-models/dlib_face_recognition_resnet_model_v1.dat.bz2
$ bunzip2 dlib-models/shape_predictor_68_face_landmarks.dat.bz2

To execute the "Getting started" notebook locally, download the example video and the pyannote.video source code:
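Before going further, it can help to check that both decompressed model files are actually in place. A minimal sketch (the `missing_models` helper is this example's own invention, not part of pyannote-video; it assumes the `dlib-models` clone sits in the current directory):

```python
from pathlib import Path

# Model files expected after the two bunzip2 steps above.
DLIB_MODELS = [
    "dlib_face_recognition_resnet_model_v1.dat",
    "shape_predictor_68_face_landmarks.dat",
]

def missing_models(models_dir):
    """Return the expected dlib model files missing from models_dir."""
    root = Path(models_dir)
    return [name for name in DLIB_MODELS if not (root / name).is_file()]

# Example: report any model still missing from the cloned repository.
for name in missing_models("dlib-models"):
    print(f"missing: {name}")
```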
$ git clone https://github.com/pyannote/pyannote-data.git
$ git clone https://github.com/pyannote/pyannote-video.git
$ pip install jupyter
$ jupyter notebook --notebook-dir="pyannote-video/doc"

There is no proper documentation for the time being...
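Until documentation exists, the "Getting started" notebook is the best reference. As a rough illustration of the clustering step named in the title, here is a minimal sketch: it is not pyannote-video's actual algorithm, only a toy greedy grouping of dlib-style 128-d face descriptors, using dlib's commonly suggested 0.6 Euclidean-distance threshold for "same person":

```python
import numpy as np

def cluster_faces(descriptors, threshold=0.6):
    """Toy face clustering: link any two faces whose 128-d descriptors
    lie within `threshold` Euclidean distance, then return one label
    per face (labels numbered in order of first appearance)."""
    n = len(descriptors)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(descriptors[i] - descriptors[j]) < threshold:
                parent[find(j)] = find(i)

    labels = [find(i) for i in range(n)]
    remap = {}
    return [remap.setdefault(label, len(remap)) for label in labels]
```

For example, two nearly identical descriptors end up in cluster 0 while a distant one gets cluster 1.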