
[Face Attendance Project] Five Ways to Do Face Detection


Article contents:

1) The five methods of face detection

        1. Haar cascade + opencv

        2. HOG + Dlib

        3. CNN + Dlib

        4. SSD

        5. MTCNN

I. Implementing the five face detection methods

 1. Haar cascade + opencv

        Haar-like features are designed specifically to detect edge-like patterns: a feature value is simply the pixel sum of one rectangular region minus the pixel sum of a neighbouring region, and the OpenCV cascade classifier evaluates many such features. A rough sketch of the feature computation is shown below, followed by the basic step-by-step workflow.
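A minimal, self-contained sketch of a two-rectangle Haar feature computed from an integral image. The toy patch, the rect_sum helper and the 6x6 size are illustrative assumptions, not part of the project code:

import numpy as np

# Toy 6x6 "image": bright left half, dark right half (a vertical edge).
patch = np.zeros((6, 6), dtype=np.float32)
patch[:, :3] = 255

# Integral image: ii[y, x] = sum of all pixels above and to the left (inclusive).
ii = patch.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum of the rectangle at (x, y) with size (w, h), using at most 4 lookups.
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

# Two-rectangle Haar feature: left half minus right half.
feature = rect_sum(ii, 0, 0, 3, 6) - rect_sum(ii, 3, 0, 3, 6)
print(feature)  # a large positive value signals a strong vertical edge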

Step 1: read the image

img = cv2.imread('./images/faces1.jpg')

Step 2: convert the image to grayscale, because the Haar detector works on grayscale images

img_gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

Step 3: build the Haar detector

face_detector = cv2.CascadeClassifier('./cascades/haarcascade_frontalface_default.xml')

Step 4: run the detector to find faces

detections = face_detector.detectMultiScale(img_gray)

Step 5: parse the detection results

for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 5)

Step 6: display the result

plt.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB))

Step 7: parameter tuning

-- scaleFactor

        scaleFactor controls the range of face sizes the detector searches over: the image is scanned at multiple scales, shrunk by this factor on each pass, so the value must be greater than 1. A face far from the camera appears small and a face close to the camera appears large, so the choice of scaleFactor affects detection accuracy to some extent.
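For instance, one might sweep a few scaleFactor values and watch how the number of raw candidate boxes changes. A minimal sketch, assuming img_gray and face_detector from the steps above:

# Compare how many raw candidate boxes each scaleFactor produces.
for sf in (1.05, 1.1, 1.3):
    boxes = face_detector.detectMultiScale(img_gray, scaleFactor=sf)
    print(f"scaleFactor={sf}: {len(boxes)} candidate boxes")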

        Sometimes, however, no matter how scaleFactor is tuned, spurious or overlapping face boxes still appear; in that case minNeighbors is used to control the number of candidate boxes required per face.

 --minNeighbors

        minNeighbors is the minimum number of candidate boxes a region must collect before it is accepted as a face. While searching, the algorithm may respond at several nearby positions on the same person, so it ends up with multiple candidate boxes that might all be the same face. minNeighbors then filters these results: if many candidate boxes are concentrated in one region, that region is much more likely to actually contain a face, and the number of boxes concentrated in a region is exactly the candidate count that minNeighbors thresholds. A larger minNeighbors therefore gives cleaner results, but setting it too high causes missed detections.

 --minSize

        minSize is the smallest face size to detect and maxSize is the largest; both parameters bound the size of the faces that will be returned, for example:

detections = face_detector.detectMultiScale(img_gray, scaleFactor=1.2, minNeighbors=7, minSize=(1, 1))

2. HOG + Dlib

Step 1: read the image

img = cv2.imread('./images/faces2.jpg')
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

Step 2: build the HOG detector; the Dlib package needs to be installed (conda install -c conda-forge dlib)

import dlib
hog_face_detector = dlib.get_frontal_face_detector()

Step 3: detect faces

detections = hog_face_detector(img, 1)  # second argument: number of times to upsample the image before detecting (helps find smaller faces, similar in spirit to scaleFactor)

Step 4: parse the detections

for face in detections:
    x = face.left()
    y = face.top()
    r = face.right()
    b = face.bottom()
    cv2.rectangle(img, (x, y), (r, b), (0, 255, 0), 5)

Step 5: display the result

plt.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB))
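If a score per detection is also wanted, dlib's HOG detector exposes a run() method that returns scores alongside the rectangles. A minimal sketch, reusing img and hog_face_detector from above:

# run(image, upsample_num_times, adjust_threshold) -> (rectangles, scores, detector indices)
rects, scores, _ = hog_face_detector.run(img, 1, 0)
for rect, score in zip(rects, scores):
    print(rect, round(score, 3))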

3. CNN + Dlib

import cv2
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200

img = cv2.imread('./images/faces2.jpg')

import dlib
# Load Dlib's CNN (MMOD) face detection model
cnn_face_detector = dlib.cnn_face_detection_model_v1('./weights/mmod_human_face_detector.dat')
detections = cnn_face_detector(img, 1)
for face in detections:
    x = face.rect.left()
    y = face.rect.top()
    r = face.rect.right()
    b = face.rect.bottom()
    c = face.confidence  # detection confidence score
    cv2.rectangle(img, (x, y), (r, b), (0, 255, 0), 5)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
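The confidence read into c above is not used further; a small sketch of how it could filter out weak detections (the 0.8 threshold is an arbitrary illustrative value):

# Keep only high-confidence CNN detections.
confident = [face for face in detections if face.confidence > 0.8]
print(f"kept {len(confident)} of {len(detections)} detections")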

 4. SSD

import cv2
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200

img = cv2.imread('./images/faces2.jpg')

# Load the ResNet-10 SSD face detector (Caffe format)
face_detector = cv2.dnn.readNetFromCaffe('./weights/deploy.prototxt.txt',
                                         './weights/res10_300x300_ssd_iter_140000.caffemodel')
img_height = img.shape[0]
img_width = img.shape[1]

# Resize and convert to a blob (with mean subtraction)
img_resize = cv2.resize(img, (500, 300))
img_blob = cv2.dnn.blobFromImage(img_resize, 1.0, (500, 300), (104.0, 177.0, 123.0))

face_detector.setInput(img_blob)
detections = face_detector.forward()

num_of_detections = detections.shape[2]
img_copy = img.copy()
for index in range(num_of_detections):
    detection_confidence = detections[0, 0, index, 2]
    # Keep only detections above the confidence threshold
    if detection_confidence > 0.15:
        # The network outputs relative coordinates; scale back to pixels
        locations = detections[0, 0, index, 3:7] * np.array([img_width, img_height, img_width, img_height])
        lx, ly, rx, ry = locations.astype('int')
        cv2.rectangle(img_copy, (lx, ly), (rx, ry), (0, 255, 0), 5)
plt.imshow(cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB))
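To reuse the SSD model elsewhere (for example in a video loop like the one at the end of this article), the forward pass can be wrapped in a small helper. A minimal sketch, using the model's nominal 300x300 input size rather than the 500x300 used above:

# Sketch: wrap the SSD forward pass in a helper that returns pixel-coordinate boxes.
def detect_ssd(net, image, conf_threshold=0.5):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()
    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            # Outputs are relative coordinates; scale back to pixels.
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            boxes.append(box.astype('int'))
    return boxes

# Example usage with the net loaded above:
# boxes = detect_ssd(face_detector, img)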

5. MTCNN

import cv2
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200

img = cv2.imread('./images/faces2.jpg')
# MTCNN expects RGB input
img_cvt = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

from mtcnn.mtcnn import MTCNN
face_detetor = MTCNN()
detections = face_detetor.detect_faces(img_cvt)
for face in detections:
    (x, y, w, h) = face['box']
    cv2.rectangle(img_cvt, (x, y), (x + w, y + h), (0, 255, 0), 5)
plt.imshow(img_cvt)

The same pipeline applied to a different test image:

import cv2
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200

img = cv2.imread('./images/test.jpg')
img_cvt = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

from mtcnn.mtcnn import MTCNN
face_detetor = MTCNN()
detections = face_detetor.detect_faces(img_cvt)
for face in detections:
    (x, y, w, h) = face['box']
    cv2.rectangle(img_cvt, (x, y), (x + w, y + h), (0, 255, 0), 5)
plt.imshow(img_cvt)
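Each MTCNN detection also carries a confidence score and five facial keypoints (eyes, nose and mouth corners). A small sketch that prints the scores and draws the keypoints on the same img_cvt:

# MTCNN returns a confidence and five landmarks per face as well.
for face in detections:
    print(face['confidence'])
    for name, (px, py) in face['keypoints'].items():
        cv2.circle(img_cvt, (int(px), int(py)), 5, (255, 0, 0), -1)
plt.imshow(img_cvt)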

 Comparison of the five face detection methods
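One concrete way to compare the five methods is to time them on the same image. Below is a minimal sketch; because the code above reuses the name face_detector for both the Haar and SSD detectors (and names the MTCNN object face_detetor), the sketch assumes they have been rebound to haar_face_detector, ssd_net and mtcnn_detector, and it reuses the detect_ssd helper sketched in the SSD section. Absolute numbers vary widely with image size and hardware:

import time

def timeit(name, fn):
    # Rough single-run timing; enough for a coarse comparison.
    start = time.time()
    fn()
    print(f"{name}: {time.time() - start:.3f}s")

timeit("Haar + OpenCV", lambda: haar_face_detector.detectMultiScale(img_gray))
timeit("HOG + Dlib",    lambda: hog_face_detector(img, 1))
timeit("CNN + Dlib",    lambda: cnn_face_detector(img, 1))
timeit("SSD",           lambda: detect_ssd(ssd_net, img))
timeit("MTCNN",         lambda: mtcnn_detector.detect_faces(img_cvt))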

Face detection on a video stream:

        1. Build the Haar face detector

        2. Open the video stream

        3. Detect faces in every frame

        4. Draw the face boxes and display the result

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
haar_face_detector = cv2.CascadeClassifier('./cascades/haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Mirror the frame so the preview behaves like a selfie camera
    frame = cv2.flip(frame, 1)
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = haar_face_detector.detectMultiScale(frame_gray, minNeighbors=5)
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 5)
    cv2.imshow('Demo', frame)
    # Press 'q' to quit
    if cv2.waitKey(10) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
