https://github.com/thatbrguy/Pedestrian-Detection

Pedestrian-Detection

Pedestrian Detection using the TensorFlow Object Detection API and Nanonets.


Pedestrian Detector in action

This repo provides complementary material to this blog post, which compares the performance of four object detectors on a pedestrian detection task. It also shows how to run inference on multiple GPUs in parallel using Python's multiprocessing package. The count accuracy and FPS of each model (using 1, 2, 4, or 8 GPUs in parallel) were measured and plotted.

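The repo's multi-GPU code is not reproduced here, but the general pattern is simple: pin each worker process to one GPU and split the frames between workers. Below is a minimal sketch of that pattern; load_detector and run_inference are hypothetical stand-ins for the repo's actual model-loading and per-frame detection code.

# Minimal sketch of multi-GPU inference with the multiprocessing package.
# load_detector() and run_inference() are hypothetical placeholders for the
# repo's actual model-loading and detection functions.
import os
from multiprocessing import Pool

NUM_GPUS = 4  # 1, 2, 4, or 8, as in the experiments above

def worker(args):
    gpu_id, frame_paths = args
    # Restrict this process to a single GPU before the framework initializes.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
    detector = load_detector()  # hypothetical: builds the model once per worker
    return [run_inference(detector, path) for path in frame_paths]

def detect_parallel(frame_paths):
    # Deal frames out to the GPUs round-robin, one worker per GPU.
    chunks = [(i, frame_paths[i::NUM_GPUS]) for i in range(NUM_GPUS)]
    with Pool(NUM_GPUS) as pool:
        per_gpu = pool.map(worker, chunks)
    return [det for chunk in per_gpu for det in chunk]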

Dataset

The TownCentre dataset is used to train the pedestrian detector. The commands below download the dataset, automatically extract the frames from the video, and create XML files from the CSV ground truth. The image dimensions are downscaled by a factor of 2 to reduce processing overhead.

(A note on "ground truth": in supervised learning, data comes as labeled pairs (x, t), where x is the input and t is the annotation. A correct annotation t is the ground truth, while an incorrect one is not; some people simply call all labeled data ground truth.)

wget http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/Datasets/TownCentreXVID.avi
wget http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/Datasets/TownCentre-groundtruth.top
python extract_towncentre.py
python extract_GT.py
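For reference, the frame-extraction half of extract_towncentre.py amounts to something like the sketch below (assuming OpenCV; the file naming and output folder are illustrative, not the script's exact ones). extract_GT.py then converts the CSV ground truth into per-frame XML annotations.

# Sketch of frame extraction with a 2x downscale, assuming OpenCV (cv2).
import os
import cv2

os.makedirs('frames', exist_ok=True)
cap = cv2.VideoCapture('TownCentreXVID.avi')
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    frame = cv2.resize(frame, (w // 2, h // 2))  # downscale by a factor of 2
    cv2.imwrite(os.path.join('frames', 'frame_%05d.jpg' % idx), frame)
    idx += 1
cap.release()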

Setup

1. For TensorFlow Object Detection API

Refer to the instructions in this blog post.

2. For Nanonets

Step 1: Clone the repo

git clone https://github.com/NanoNets/object-detection-sample-python.git
cd object-detection-sample-python
sudo pip install requests

Step 2: Get your free API Key

Get your free API Key from http://app.nanonets.com/user/api_key

Step 3: Set the API key as an Environment Variable

export NANONETS_API_KEY=YOUR_API_KEY_GOES_HERE
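The sample scripts presumably read this variable through os.environ; a quick sanity check in the same vein:

# Fail fast if the key was not exported in the current shell.
import os

api_key = os.environ.get('NANONETS_API_KEY')
if not api_key:
    raise RuntimeError('NANONETS_API_KEY is not set')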

Step 4: Create a New Model

python ./code/create-model.py

Note: the previous step creates your model and returns a model ID, which the following steps use through the NANONETS_MODEL_ID environment variable.
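For orientation, create-model.py makes a request along these lines; the endpoint URL, payload, and response shape below are assumptions based on the Nanonets v2 object-detection API, so defer to the script itself.

# Assumed Nanonets v2 endpoint and payload; see code/create-model.py for
# the authoritative version.
import os
import requests

response = requests.post(
    'https://app.nanonets.com/api/v2/ObjectDetection/Model/',
    auth=requests.auth.HTTPBasicAuth(os.environ['NANONETS_API_KEY'], ''),
    data={'categories': ['pedestrian']},  # assumed single-class setup
)
print(response.json())  # the returned model ID goes into NANONETS_MODEL_ID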

Step 5: Upload the Training Data

Place the training images in a folder named images and the annotations in annotations/json, then run:

python ./code/upload-training.py
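Before uploading, it can help to verify that every image has a matching annotation file; a small hypothetical check, using the folder names from this step:

# Hypothetical pre-upload check: each image in images/ should have a
# same-named .json file in annotations/json/.
import os

def stems(folder):
    return {os.path.splitext(name)[0] for name in os.listdir(folder)}

missing = stems('images') - stems('annotations/json')
if missing:
    print('Images without annotations:', sorted(missing))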

Step 6: Train the Model

python ./code/train-model.py

Step 7: Get Model State

The model takes ~2 hours to train. You will get an email once training completes; in the meantime, you can check the state of the model:

python ./code/model-state.py
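If you would rather poll than wait for the email, here is a sketch of the kind of request model-state.py makes (endpoint assumed, as above):

# Assumed endpoint; see code/model-state.py for the authoritative request.
import os
import time
import requests

url = ('https://app.nanonets.com/api/v2/ObjectDetection/Model/'
       + os.environ['NANONETS_MODEL_ID'])
auth = requests.auth.HTTPBasicAuth(os.environ['NANONETS_API_KEY'], '')

for _ in range(12):                 # poll every 10 minutes for ~2 hours
    print(requests.get(url, auth=auth).json())
    time.sleep(600)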

Step 8: Make Predictions

Create a folder named test_images inside the nanonets folder, place the input images in it, and then run:

python ./code/prediction.py
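prediction.py loops over the test images and sends each one to the model; a sketch of that loop, with the endpoint again an assumption:

# Assumed prediction endpoint; see code/prediction.py for the real one.
import os
import requests

url = ('https://app.nanonets.com/api/v2/ObjectDetection/Model/'
       + os.environ['NANONETS_MODEL_ID'] + '/LabelFile/')
auth = requests.auth.HTTPBasicAuth(os.environ['NANONETS_API_KEY'], '')

for name in sorted(os.listdir('test_images')):
    with open(os.path.join('test_images', name), 'rb') as f:
        result = requests.post(url, auth=auth, files={'file': f})
    print(name, result.json())  # predicted boxes and scores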

FPS vs GPUs

For more stats, refer to the blog post.
The performance of each model (on the test set) was compiled into a video, which you can see here.

In light of GDPR and the weak accountability of deep learning systems, it is imperative to consider the legal and ethical issues around automating surveillance. This blog/code is for educational purposes only and uses a publicly available dataset. It is your responsibility to ensure that your automated system complies with the law in your region.
