
# Project: Finding Lane Lines on the Road

Date: 2017-12-24 17:38

## Installation

Download the Docker image

```
docker pull udacity/carnd-term1-starter-kit
```

```
git clone https://github.com/udacity/CarND-Term1-Starter-Kit-Test.git
cd CarND-Term1-Starter-Kit-Test
```

Run Docker

```
docker run -it --rm -p 8888:8888 -v `pwd`:/src udacity/carnd-term1-starter-kit test.ipynb
```

Attach to the Docker container

```
docker exec -it containerid /bin/bash
```

## Finding Lines of Color

Pixels below a given threshold are turned black. This syntax was new to me: it is how numpy handles an ndarray, using multidimensional indexing and slicing.

```
thresholds = (image[:,:,0] < rgb_threshold[0]) \
           | (image[:,:,1] < rgb_threshold[1]) \
           | (image[:,:,2] < rgb_threshold[2])

color_select[thresholds] = [0, 0, 0]
```
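
For context, a minimal self-contained sketch of this color-selection step might look like the following; the file name `test.jpg` and the threshold value 200 are placeholders, not values from the lesson.

```
import matplotlib.image as mpimg
import numpy as np

# Placeholder input image and thresholds (assumed values)
image = mpimg.imread('test.jpg')
color_select = np.copy(image)

rgb_threshold = [200, 200, 200]   # red, green, blue thresholds

# True wherever any channel falls below its threshold
thresholds = (image[:, :, 0] < rgb_threshold[0]) \
           | (image[:, :, 1] < rgb_threshold[1]) \
           | (image[:, :, 2] < rgb_threshold[2])

color_select[thresholds] = [0, 0, 0]   # blacken everything that is not bright enough
```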

ndarray indexing and slicing

```
b = np.arange(9*9*3)
c = b.reshape(9,9,3)

c[0,0,0]  # red component of the first pixel -> 0
```

```
c[:,0,0]  # red components of the leftmost column
# array([  0,  27,  54,  81, 108, 135, 162, 189, 216])
```

```
c[:,:,0]  # red components of every pixel
# array([[  0,   3,   6,   9,  12,  15,  18,  21,  24],
#        [ 27,  30,  33,  36,  39,  42,  45,  48,  51],
#        [ 54,  57,  60,  63,  66,  69,  72,  75,  78],
#        [ 81,  84,  87,  90,  93,  96,  99, 102, 105],
#        [108, 111, 114, 117, 120, 123, 126, 129, 132],
#        [135, 138, 141, 144, 147, 150, 153, 156, 159],
#        [162, 165, 168, 171, 174, 177, 180, 183, 186],
#        [189, 192, 195, 198, 201, 204, 207, 210, 213],
#        [216, 219, 222, 225, 228, 231, 234, 237, 240]])
```

`thresholds` was used as a selector above. In the code below, `x` is used as a selector (filter) over the elements of `y`. See also `numpy.where`.

```
x = np.array([True, False, True, False])
y = np.arange(4)   # [0 1 2 3]
y[x]               # array([0, 2])
```
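
A short sketch of how this boolean-mask indexing relates to `numpy.where`, using the same arrays as above:

```
import numpy as np

x = np.array([True, False, True, False])
y = np.arange(4)

y[x]             # array([0, 2])   -- boolean-mask indexing
np.where(x)      # (array([0, 2]),) -- indices where the mask is True
y[np.where(x)]   # array([0, 2])   -- same selection via those indices
```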

## Canny Edges

Edge detection is done in the order Gray → GaussianBlur → Canny.

Gray

```
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
```

GaussianBlur

```
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)
```

Why apply GaussianBlur? To remove noise. The kernel is an N x N matrix that is convolved over the image.

Canny

```
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
```

Gradients below `low_threshold` are ignored, gradients above `high_threshold` are accepted as edges, and values in between are kept only if they connect to a strong edge.
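
Putting the three steps together, a minimal sketch of the whole Gray → GaussianBlur → Canny pipeline could look like this; the file name and parameter values are only assumptions (they match the quiz code further below).

```
import cv2
import matplotlib.image as mpimg

image = mpimg.imread('exit-ramp.jpg')

# 1. Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# 2. Gaussian smoothing to suppress noise (kernel size must be odd)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

# 3. Canny edge detection with hysteresis thresholds
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
```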

## Hough Transform

Duda and Hart, 1972. An algorithm for detecting lines in an image; it can also detect curves. Tune the parameters of HoughLinesP: adjusting `max_line_gap` and `min_line_length` appropriately is enough. I used 300 and 1.

Canny → Hough Transform → highlight the resulting lines.

```
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)
```

- `rho` : 0 ~ 1
- `theta` : 0 ~ 180
- `threshold` : number of intersecting votes required. A lower threshold detects more lines, but accuracy drops.
- `min_line_length` : minimum length of a detected line. A smaller value detects more lines.
- `max_line_gap` : allowed gap between line segments.

Drawing the lines

```
for line in lines:
    for x1,y1,x2,y2 in line:
        cv2.line(line_image, (x1,y1), (x2,y2), (255,0,0), 10)
```

Reference: https://alyssaq.github.io/2014/understanding-hough-transform/

Hough Transform Quiz

```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2

# Read in and grayscale the image
image = mpimg.imread('exit-ramp.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Define a kernel size and apply Gaussian smoothing
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)

# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255

# This time we are defining a four sided polygon to mask
imshape = image.shape
vertices = np.array([[(0, imshape[0]), (0, 0), (imshape[1], 0), (imshape[1], imshape[0])]],
                    dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_edges = cv2.bitwise_and(edges, mask)

# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1                # distance resolution in pixels of the Hough grid
theta = np.pi/180      # angular resolution in radians of the Hough grid
threshold = 1          # minimum number of votes (intersections in Hough grid cell)
min_line_length = 300  # minimum number of pixels making up a line
max_line_gap = 10      # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0  # creating a blank to draw lines on

# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)

# Iterate over the output "lines" and draw lines on a blank image
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)

# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))

# Draw the lines on the edge image
lines_edges = cv2.addWeighted(color_edges, 0.2, line_image, 1, 0)
plt.imshow(lines_edges)
```

## CarND-LaneLines-P1

The project:

```
git clone https://github.com/udacity/CarND-LaneLines-P1
cd CarND-LaneLines-P1
docker run -it --rm -p 8888:8888 -v `pwd`:/src udacity/carnd-term1-starter-kit P1.ipynb
```

### Code I wondered about while doing the project

How do you get the angle between two points?

![](https://i.imgur.com/rgiaTbr.png)

```
import math
angle = math.atan2(y2 - y1, x2 - x1) * 180 / math.pi
```
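
A quick check with made-up points shows that `atan2` also keeps the sign of the slope:

```
import math

# Illustrative points only: one segment rising to the right, one falling
print(math.atan2(1 - 0, 1 - 0) * 180 / math.pi)    # 45.0
print(math.atan2(-1 - 0, 1 - 0) * 180 / math.pi)   # -45.0
```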

Extending the line that passes through two points

```
slope = (y2 - y1) / (x2 - x1)
b = y1 - slope * x1            # y-intercept of y = slope*x + b

y = slope * 540 + b            # y when x is 540
y = slope * 0 + b              # y when x is 0

x1_extend = (330 - b) / slope  # x when y is 330
x2_extend = (540 - b) / slope  # x when y is 540
```
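
The same arithmetic can be wrapped in a small helper; this function and its name are my own sketch, not code from the project:

```
def x_at_y(x1, y1, x2, y2, y):
    """Return the x coordinate where the line through (x1, y1), (x2, y2) reaches the given y."""
    slope = (y2 - y1) / (x2 - x1)   # assumes the segment is not vertical
    b = y1 - slope * x1             # y-intercept of y = slope*x + b
    return (y - b) / slope

# e.g. extend a detected segment down to the bottom edge of a 540-pixel-high image
x_bottom = x_at_y(520, 330, 560, 400, 540)
```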

What do these functions do? `cv2.fillPoly` fills the polygon defined by `vertices` into `mask` with `ignore_mask_color`, and `cv2.bitwise_and` keeps only the pixels of `img` where the mask is nonzero, so together they crop the image to a region of interest.

```
cv2.fillPoly(mask, vertices, ignore_mask_color)
cv2.bitwise_and(img, mask)
```
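
A minimal sketch of how the two calls work together as a region-of-interest mask; the image size and polygon vertices here are made-up values:

```
import numpy as np
import cv2

edges = np.full((540, 960), 255, dtype=np.uint8)   # stand-in for an edge image

mask = np.zeros_like(edges)
vertices = np.array([[(100, 540), (480, 300), (860, 540)]], dtype=np.int32)

cv2.fillPoly(mask, vertices, 255)          # paint the triangle white in the mask
masked = cv2.bitwise_and(edges, mask)      # zero out everything outside the triangle
```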
