Sensors & Transducers, 2014 by IFSA Publishing, S. L. http://www.sensorsportal.com

Fast Adaptive Coding Unit Depth Range Selection Algorithm for High Efficiency Video Coding

Fang Shuqing, Yu Mei, Chen Fen, Xu Shengyang, Jiang Gangyi
Faculty of Information Science and Engineering, Ningbo University, Ningbo, 315211, China
Tel.: 86-574-87600411, E-mail: jianggangyi@126.com

Received: 23 September 2014 / Accepted: 28 November 2014 / Published: 31 December 2014

Abstract: The emerging High Efficiency Video Coding standard employs a new coding structure characterized by the coding unit, prediction unit and transform unit. It improves coding efficiency significantly, but it also introduces great computational complexity in deciding the optimal coding unit, prediction unit and transform unit sizes. To reduce the encoding complexity, a fast adaptive coding unit depth range selection algorithm is proposed. In the proposed scheme, first of all, the average depth errors between adjacent largest coding units and their co-located largest coding units are utilized to determine the depth range of the current largest coding unit. Then, depth scaling factors in the previous and back frames are obtained to shrink the depth range. Furthermore, we also propose a depth range correction algorithm to reduce misjudgment caused by scene changes in larger sequences. Experimental results show that the former algorithm saves about 10 % more encoding time than Shen's algorithm, with a BD-bitrate loss of 0.81 % and a BD-PSNR loss of 0.026 dB. The correction algorithm saves the same encoding time as Shen's algorithm while lowering BD-bitrate by 0.76 % and improving BD-PSNR by 0.028 dB. Copyright 2014 IFSA Publishing, S. L.

Keywords: HEVC, Depth range, Average depth error, Depth scaling factor, Temporal-spatial similarity.

1. Introduction

Currently, the growth of high-definition (HD) content and HD broadcasting services is offering users high quality and high resolution.
In the future, ultra-high-definition (UHD) content and UHD broadcasting services [1-4] will also be offered to users, following the demand for even higher quality and resolution. However, MPEG-2, MPEG-4 and H.264 have difficulty meeting these requirements. Therefore, in January 2010, the Video Coding Experts Group (VCEG) and ISO/IEC MPEG founded a Joint Collaborative Team on Video Coding to develop the next-generation video coding standard, namely High Efficiency Video Coding (HEVC) [5], which aims to halve the bit rate at the same reconstructed video quality compared with H.264 [6]. HEVC adopts a recursive quad-tree structured [7] coding unit (CU), which makes HEVC coding more efficient but also several times more complex. There has been extensive research on reducing HEVC computational complexity. Li et al. [8] employed spatial similarity to predict the depth range of the current largest coding unit (LCU), but its time saving was limited because at least three depths still had to be traversed. Shen et al. [9] utilized temporal-spatial similarity to predict the depth range of the current LCU by assigning weights to
http://www.sensorsportal.com/html/digest/p_2571.htm
adjacent LCUs and the co-located LCU in the previous frame. Although this method could reduce the candidate CU depths, it needed improvement because it disregarded differences between video sequences: fixed weights are not suitable for all sequences. In addition, Kim et al. [10] employed a threshold obtained by Bayesian training to terminate the CU splitting process, and Yu et al. [11] also terminated the CU splitting process based on mean squared error. This paper proposes a fast adaptive CU depth range selection algorithm that improves Shen's method. We assign weights adaptively according to the average depth error between adjacent LCUs and their co-located LCUs in order to predict the depth range of the current LCU accurately. Then, depth scaling factors in the previous and back frames are utilized to shrink the depth range (DR), reducing computational complexity further. Finally, we put forward a correction algorithm (CA) based on linear weighting of depth scaling factors. The CA achieves higher performance than Shen's method. The paper is organized as follows. In the next section, the complexity problem in the CU splitting process is analyzed. In Section 3, the proposed method is described in detail. Section 4 shows the corresponding experimental results. Finally, we conclude this paper in Section 5.

2. Computational Complexity Problem

In HEVC, every CU can be split into four equal sub-CUs recursively. The size of a sub-CU is 32×32, 16×16 or 8×8. The final segmentation result is determined by a rate-distortion function [12]. Fig. 1 shows the segmentation process. The rate-distortion function is defined [13] as follows:

J = SSE_luma + w_chroma · SSE_chroma + λ · B, (1)

where B represents the bits needed to code the current CU after prediction, SSE_luma and SSE_chroma are the sums of squared errors between the original and reconstructed blocks for luminance and chroma respectively, w_chroma is the chroma weight, and λ is the Lagrange multiplier.

Fig. 1. CU splitting process in HM (depths 0-3 correspond to CU sizes 64×64, 32×32, 16×16 and 8×8; leaf CUs may be partitioned into PUs, including the asymmetric 2N×nU, 2N×nD, nL×2N and nR×2N shapes).
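As a concrete illustration of Eq. (1), the RD cost of one CU candidate can be sketched as follows; the function and variable names are illustrative assumptions, not taken from the HM code base:

```python
def rd_cost(orig_luma, rec_luma, orig_chroma, rec_chroma, bits, w_chroma, lam):
    """Rate-distortion cost J = SSE_luma + w_chroma*SSE_chroma + lam*bits, Eq. (1)."""
    def sse(a, b):
        # sum of squared errors between original and reconstructed samples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sse(orig_luma, rec_luma) + w_chroma * sse(orig_chroma, rec_chroma) + lam * bits
```

The encoder evaluates this cost for every candidate CU size and keeps the partition with the smallest J.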
In HM, a CU has four possible sizes: 64×64, 32×32, 16×16 and 8×8. Each LCU can be split into four CUs recursively up to the maximum allowable hierarchical depth. The CU splitting process for an LCU in HM is shown in Fig. 1. First, HM codes the current 64×64 LCU and calculates its rate-distortion cost. Then, it splits the LCU into four sub-CUs at depth one and calculates the rate-distortion cost of each sub-CU. The same process is applied at depths two and three, until the maximum depth of three is reached. Finally, HM compares costs bottom-up from depth 3 to depth 0: for instance, if the summed rate-distortion cost of four 8×8 sub-CUs is less than the rate-distortion cost of the enclosing 16×16 CU, HM chooses the 8×8 CUs; otherwise it chooses the 16×16 CU. At the same time, leaf-node CUs can be further split into prediction units (PUs). The PU is the basic unit for prediction, and it allows multiple different shapes to encode irregular image patterns, as shown in Fig. 1. The transform unit (TU) is defined for residual transform and quantization. From the leaf
node of a CU, the TU can be split into four sub-TUs recursively down to the minimal allowable TU size, signaled by a flag. As analyzed above, HEVC performs a full search over all possible CU sizes, modes and TU sizes by evaluating the rate-distortion (RD) cost, which results in substantial computational complexity for HEVC inter-frame coding.

3. Proposed Method

In this section, a fast CU size decision algorithm based on temporal-spatial similarity and the quad-tree coding structure is described, including an adaptive CU depth range selection algorithm and a correction algorithm. We start with a statistical analysis of temporal-spatial similarity, which provides useful guidelines for assigning weights. We utilize the average depth error (ADE), defined between the previous frame and the current frame, to assign weights to the co-located LCU and adjacent LCUs in order to predict the depth range. Then, the depth range is shrunk further according to depth scaling factors in the previous and back frames. Finally, to reduce misjudgment caused by scene changes in larger sequences, a correction algorithm is proposed.

3.1. Statistical Analysis of Temporal-Spatial Similarity

HEVC uses the quad-tree structure and CU depth to decide segmentation after traversing every depth in [0, 3]. However, computational complexity can be reduced by predicting the DR in advance and thus skipping unnecessary depths. This paper utilizes temporal-spatial similarity to predict the DR type of the current LCU. We define the ADE according to Fig. 2 so as to exploit temporal-spatial similarity accurately. The ADE is defined as follows:

ADE = (LADE + TADE) / 2, (2)

where LADE is the average depth error between the Left LCU and the CLeft LCU (its co-located LCU in the previous frame), and TADE is the average depth error between the Top LCU and the CTop LCU.

Fig. 2. Temporal-spatial similarity of LCUs.

As we know, the DR type of the current LCU is similar to those of the adjacent LCUs and the co-located LCU in the reference frame. We define a temporal similarity factor (TSF) and a spatial similarity factor (SSF) in order to obtain the relationship between ADE and temporal-spatial similarity.
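A minimal sketch of the ADE of Eq. (2), representing each LCU as a flat list of per-4×4-block depths; the names and data layout are illustrative assumptions:

```python
def mean_abs_depth_diff(lcu_a, lcu_b):
    """Average absolute depth difference between two co-positioned LCUs."""
    return sum(abs(a - b) for a, b in zip(lcu_a, lcu_b)) / len(lcu_a)

def ade(left, co_left, top, co_top):
    """Eq. (2): ADE = (LADE + TADE) / 2."""
    lade = mean_abs_depth_diff(left, co_left)  # Left LCU vs. its co-located LCU
    tade = mean_abs_depth_diff(top, co_top)    # Top LCU vs. its co-located LCU
    return (lade + tade) / 2
```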
The TSF and SSF are defined as follows:

TSF_i = num_i / N_i, (3)

SSF_i = n_i / N_i, (4)

where i indexes the interval of ADE, with i = 0 standing for [0, 1], 1 for [1, 2] and 2 for [2, 3]; num_i is the number of 4×4 blocks with equal depth in interval i; n_i is the number of 4×4 blocks with equal average depth in interval i; and N_i is the total number of 4×4 blocks in interval i. The temporal-spatial similarity results are given in Table 1.

Table 1. Temporal-spatial similarity results of LCUs.

                         CoL-LCU                Left-LCU               Top-LCU
Sequence             [0,1] [1,2] [2,3]     [0,1] [1,2] [2,3]     [0,1] [1,2] [2,3]
BasketballDrill      0.576 0.307 0.039     0.334 0.345 0.791     0.376 0.384 0.801
BasketballDrillText  0.571 0.303 0.038     0.338 0.340 0.779     0.376 0.385 0.759
BasketballPass       0.775 0.453 0.344     0.201 0.325 0.333     0.598 0.599 0.667
BlowingBubbles       0.491 0.269 0.065     0.181 0.284 0.609     0.251 0.323 0.435
PartyScene           0.517 0.225 0.016     0.247 0.315 0.782     0.310 0.371 0.800
PeopleOnStreet       0.510 0.320 0.101     0.187 0.332 0.667     0.171 0.286 0.660
RaceHorsesC          0.413 0.279 0.158     0.102 0.217 0.250     0.132 0.250 0.278
Average              0.550 0.308 0.109     0.227 0.308 0.602     0.316 0.371 0.629

From Table 1, as ADE increases, temporal similarity gradually decreases while spatial similarity improves. Accordingly, we can predict the DR type of the current LCU by deriving adaptive weights from the ADE.

3.2. Adaptive Depth Range Selection Scheme

A video sequence has strong temporal-spatial similarity, especially in flat areas. The DR type of the current LCU is similar to those of the adjacent LCUs. At the same time, the co-located LCU in the reference frame can be
a referential basis for the current LCU because of the high correlation within a video sequence. Thus, we can utilize the adjacent LCUs and the co-located LCU to predict the DR type of the current LCU, skipping unnecessary CU depths and PU prediction processes. We use the average depths of the co-located LCU, left LCU and top LCU to define Depth_pred, which determines the DR type of the current LCU. Depth_pred is defined as follows:

Depth_pred = Σ_{i=0}^{N} w_i · aveDepth_i, (5)

where N = 2 and i is the index of the reference LCU, with i = 0 standing for the co-located LCU, 1 for the left LCU and 2 for the top LCU; aveDepth_i is the average depth of the corresponding LCU; and w_i is the weight of the corresponding LCU, with the weights summing to 1. According to the analysis results in Table 1, the weights of the co-located LCU, left LCU and top LCU are set to -0.1·ADE + 0.5, 0.05·ADE + 0.25 and 0.05·ADE + 0.25, respectively. We can then predict the DR type of the current LCU from Depth_pred. The relationship between Depth_pred and DR type is given in Table 2.

Table 2. Relationship between predicted depth and DR type.

Depth_pred    Candidate depths    DR
0             0                   [0,0]
(0, 0.5]      0,1                 [0,1]
(0.5, 1.5]    0,1,2               [0,2]
(1.5, 2.5]    1,2,3               [1,3]
(2.5, 3]      2,3                 [2,3]

From Table 2, unnecessary CU depths can be skipped according to the DR type obtained from Depth_pred, reducing computational complexity. To verify the accuracy of the DR types, we collected statistics on whether each DR type contains the depth chosen by the original HM algorithm. The accuracy is given in Table 3. From Table 3, the misjudgment rates of the five DR types are all less than 8 %, and for the first three DR types only 1.82 %-3.08 %. The proposed ACUDR algorithm therefore achieves high reliability.

3.3. Shrinking the DR with the Depth Scaling Factor

Little computational complexity is saved in the intervals [0, 2] and [1, 3], because three depths still need to be traversed. As we know, neighboring frames have similar CU depths because of the high correlation within a video sequence. Therefore, we define a depth scaling factor (DSF) over the previous and back frames in order to reduce the DR and cut computational complexity further.
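Before the DSF is introduced, the Depth_pred computation of Eq. (5) and the Table 2 mapping can be sketched as follows; this is a sketch under the ADE-linear weight assignments above, with illustrative names:

```python
def predict_depth_range(ade, ave_depths):
    """ave_depths = average depths of the (co-located, left, top) LCUs."""
    # ADE-adaptive weights from Section 3.2; they always sum to 1
    w = (-0.1 * ade + 0.5, 0.05 * ade + 0.25, 0.05 * ade + 0.25)
    depth_pred = sum(wi * di for wi, di in zip(w, ave_depths))
    # Table 2: map Depth_pred to a candidate depth range (DR)
    if depth_pred == 0:
        return (0, 0)
    if depth_pred <= 0.5:
        return (0, 1)
    if depth_pred <= 1.5:
        return (0, 2)
    if depth_pred <= 2.5:
        return (1, 3)
    return (2, 3)
```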
The DSF is defined as follows:

DSF_ij = count_ij / N, (6)

where i = 0, 1 represents the co-located LCU in the previous and back frame respectively, j represents the CU depth, count_ij is the number of 4×4 blocks with depth j in the co-located LCU, and N = 256 is the number of 4×4 blocks in an LCU. For DR = [0, 2]: if DSF_00 and DSF_10 are equal to 0, depth 0 is eliminated from the candidate depths; if DSF_02 and DSF_12 are less than 0.125, only the interval [0, 1] is traversed. For DR = [1, 3]: if DSF_01 and DSF_11 are equal to 0, depth 1 is eliminated from the candidate depths; if DSF_03 and DSF_13 are less than 0.125, only the interval [1, 2] is traversed. For DR = [2, 3]: if DSF_02 and DSF_12 are less than 0.125, depth 2 is eliminated from the candidate depths; if DSF_03 and DSF_13 are less than 0.125, only the interval [2, 2] is traversed.

3.4. Depth Range Correction Algorithm

The algorithm devised in Section 3.2 can mispredict in sequences with sharp scene changes. Therefore, a correction algorithm, CA, was devised to deal with this drawback. As shown in Fig. 3, the co-located LCUs in the two adjacent frames, together with the left and top LCUs of the current LCU, are adopted to determine the depth range to traverse. The weighted formula is defined as follows:

Table 3. Accuracy of the DR types.

Sequence             [0,0]    [0,1]    [0,2]    [1,3]    [2,3]
BasketballDrill      94.32 %  96.41 %  96.34 %  91.95 %  95.83 %
Traffic              98.79 %  98.54 %  98.18 %  86.60 %  93.57 %
BasketballDrillText  94.80 %  97.11 %  96.10 %  92.54 %  89.58 %
BlowingBubbles       100 %    100 %    97.51 %  92.17 %  97.92 %
Cactus               98.55 %  98.63 %  98.44 %  92.04 %  94.57 %
PartyScene           98.10 %  97.59 %  96.80 %  92.21 %  93.64 %
PeopleOnStreet       97.02 %  98.05 %  96.49 %  96.58 %  92.13 %
ChinaSpeed           93.76 %  99.12 %  97.08 %  94.18 %  88.75 %
Average              96.92 %  98.18 %  97.12 %  92.28 %  93.25 %

Fig. 3. Temporal-spatial similarity of LCUs.
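The DSF-based shrinking rules of Section 3.3 can be sketched as follows; the data layout and names are illustrative assumptions:

```python
def shrink_depth_range(dr, dsf, thr=0.125):
    """Shrink a Table 2 depth range using the DSFs of Eq. (6).

    dr  -- (low, high) predicted depth range
    dsf -- dsf[i][j] = count_ij / 256, the fraction of 4x4 blocks at depth j
           in the co-located LCU of the previous (i = 0) and back (i = 1) frame
    """
    lo, hi = dr
    if dr == (0, 2):
        if dsf[0][0] == 0 and dsf[1][0] == 0:
            lo = 1                              # depth 0 unused nearby: drop it
        if dsf[0][2] < thr and dsf[1][2] < thr:
            hi = 1                              # depth 2 rare: traverse [lo, 1]
    elif dr == (1, 3):
        if dsf[0][1] == 0 and dsf[1][1] == 0:
            lo = 2
        if dsf[0][3] < thr and dsf[1][3] < thr:
            hi = 2
    elif dr == (2, 3):
        if dsf[0][2] < thr and dsf[1][2] < thr:
            lo = 3                              # depth 2 eliminated
        if dsf[0][3] < thr and dsf[1][3] < thr:
            hi = 2                              # traverse only [2, 2]
    return lo, hi
```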
CU_i = Σ_{j=0}^{3} w_j · count_ij / N, (7)

where i represents the CU depth and CU_i stands for the percentage of CU depth i; j is the index of the reference LCU, with j = 0 standing for the co-located LCU in the previous frame, 1 for the co-located LCU in the back frame, and 2 and 3 for the left and top LCUs of the current LCU, respectively; count_ij is the number of 4×4 blocks with depth i in reference LCU j; w_j is the weight of LCU j, with w_0 and w_1 set to 0.3 and w_2 and w_3 set to 0.2; and N = 256 is the number of 4×4 blocks in one LCU. Once CU_0, CU_1, CU_2 and CU_3 are known, the depths whose percentage coefficients are less than 0.1 are removed from the candidates; only the remaining CU depths are traversed, lowering coding complexity.

4. Experimental Results and Discussion

To test the effectiveness of the proposed algorithms, HM9.0 [14] was selected as the test model. The experiments were conducted on a PC with the following configuration: Intel Core i5-2500 CPU at 3.30 GHz, 8 GB RAM, Windows 7, and Microsoft Visual Studio 2008 as the development tool. For each sequence, 100 frames were tested in random access mode [15] with QP values of 22, 27, 32 and 37. Experimental results are presented in terms of BD-PSNR (Bjøntegaard delta peak signal-to-noise ratio), BDBR (Bjøntegaard delta bit rate) [16] and ΔT. BD-PSNR indicates the PSNR difference at the same bitrate, while BDBR shows the bitrate variation at the same PSNR level. ΔT (%) is defined as follows:

ΔT = (T_P − T_HM) / T_HM × 100 %, (8)

where T_P and T_HM represent the coding times of the proposed algorithm and the original HM algorithm, respectively. Twelve sequences were tested under common test conditions. Meanwhile, Shen's method was implemented and compared with this paper's ACUDR, ACUDR+DSF and CA algorithms. Table 4 shows the comparison among the ACUDR, CA and Shen's algorithms; Table 5 gives the difference between ACUDR+DSF and Shen's method.

Table 4. Comparison results among Shen's method and the ACUDR and CA algorithms.
Entries are BD-PSNR (dB) / BDBR (%) / ΔT (%) for Shen's method and the proposed ACUDR and CA, respectively.

Class A (2560×1600):
  Traffic:             Shen -0.031/1.01/-37.57;  ACUDR -0.025/0.82/-36.42;   CA -0.024/0.80/-32.42
  PeopleOnStreet:      Shen -0.025/0.63/-22.15;  ACUDR -0.019/0.47/-22.54;   CA -0.028/0.70/-28.13
Class B (1920×1080):
  BasketballDrive:     Shen -0.022/1.23/-33.85;  ACUDR -0.018/0.89/-32.73;   CA -0.012/0.64/-30.73
  Cactus:              Shen -0.012/0.60/-32.22;  ACUDR -0.009/0.45/-31.69;   CA -0.012/0.57/-29.53
Class C (832×480):
  BasketballDrill:     Shen -0.061/1.68/-22.96;  ACUDR -0.046/1.27/-22.86;   CA -0.025/0.69/-20.61
  BQMall:              Shen -0.078/2.01/-22.56;  ACUDR -0.047/1.22/-22.13;   CA -0.021/0.55/-20.33
Class D (416×240):
  BasketballPass:      Shen -0.050/1.15/-14.68;  ACUDR -0.003/0.094/-11.10;  CA 0.003/-0.076/-13.53
  BlowingBubbles:      Shen -0.009/0.25/-10.66;  ACUDR -0.033/0.74/-14.65;   CA -0.004/0.11/-10.66
Class E (1280×720):
  KristenAndSara:      Shen -0.029/1.07/-47.95;  ACUDR -0.014/0.53/-46.31;   CA -0.014/0.55/-37.36
  Vidyo4:              Shen -0.038/1.48/-43.78;  ACUDR -0.029/1.16/-43.14;   CA -0.004/0.15/-40.10
Class F:
  BasketballDrillText: Shen -0.07/1.80/-22.73;   ACUDR -0.051/1.31/-22.28;   CA -0.021/0.55/-20.75
  ChinaSpeed:          Shen -0.09/1.67/-26.67;   ACUDR -0.055/1.03/-26.46;   CA -0.017/0.32/-28.19
Average:               Shen -0.043/1.22/-28.15;  ACUDR -0.029/0.83/-27.69;   CA -0.015/0.46/-26.03

Table 5. Comparison results between ACUDR+DSF and Shen's algorithm. Entries are BD-PSNR (dB) / BDBR (%) / ΔT (%).

Class A (2560×1600):
  Traffic:             Shen -0.031/1.01/-37.57;  ACUDR+DSF -0.11/3.49/-41.96
  PeopleOnStreet:      Shen -0.025/0.63/-22.15;  ACUDR+DSF -0.047/1.18/-36.25
Class B (1920×1080):
  Kimono:              Shen -0.010/0.33/-32.59;  ACUDR+DSF -0.037/1.27/-43.70
  Cactus:              Shen -0.012/0.60/-32.22;  ACUDR+DSF -0.034/1.70/-39.31
Class C (832×480):
  PartyScene:          Shen -0.020/0.45/-20.79;  ACUDR+DSF -0.048/1.13/-31.96
  RaceHorsesC:         Shen -0.018/0.52/-15.86;  ACUDR+DSF -0.041/1.16/-27.34
Class D (416×240):
  RaceHorses:          Shen -0.002/0.04/-11.63;  ACUDR+DSF -0.031/0.67/-21.07
  BlowingBubbles:      Shen -0.009/0.25/-10.66;  ACUDR+DSF -0.022/0.62/-19.53
Class E (1280×720):
  Vidyo3:              Shen -0.017/0.56/-42.31;  ACUDR+DSF -0.049/1.71/-46.89
  FourPeople:          Shen -0.021/0.64/-42.23;  ACUDR+DSF -0.059/1.76/-45.89
Class F:
  BasketballDrillText: Shen -0.07/1.80/-22.73;   ACUDR+DSF -0.08/2.06/-32.80
  ChinaSpeed:          Shen -0.09/1.67/-26.67;   ACUDR+DSF -0.078/1.46/-38.02
Average:               Shen -0.027/0.71/-26.45;  ACUDR+DSF -0.053/1.52/-35.39
From Table 4, compared with HM9.0, Shen's method saved 28.15 % coding time while reducing PSNR by 0.043 dB and increasing BDBR by 1.22 %. The proposed ACUDR algorithm took nearly the same coding time while BD-PSNR increased by 0.014 dB and BDBR decreased by 0.39 % compared with Shen's method. The CA algorithm was employed to deal with sequences containing sharp scene changes. In comparison with Shen's method, it cost comparable coding time while BD-PSNR increased by 0.028 dB and BDBR decreased by 0.76 %. CA is especially effective for BasketballDrill, BQMall and the Class F sequences, for which it predicted LCU depth ranges more accurately. According to Table 5, the combination ACUDR+DSF saved 35.39 % time compared with the original HM9.0 algorithm, and it occupied 10 % less time than Shen's method without an obvious increase in BDBR or decrease in BD-PSNR. Fig. 4 shows the rate-distortion performance of the sequences PeopleOnStreet, Kimono, PartyScene and BlowingBubbles tested with each algorithm (HM9.0, Shen's, ACUDR, CA and ACUDR+DSF). From Fig. 4, it is clear that all the algorithms possess comparable rate-distortion performance, indicating the reliability of the proposed algorithms. Fig. 5 shows the LCU partitions of each scheme, where the red parts mark mismatches with the original partitions. From Fig. 5, the CA partition result is closer to the original one than Shen's. Only a few mismatches exist, and they have CU depths similar to the original ones, so the global rate-distortion performance stays stable.

5. Conclusions

This paper proposes an adaptive CU depth selection algorithm aiming to improve Shen's method. First, the average depth error is adopted to decide the weights of the adjacent LCUs and the co-located LCU, which are then used to determine the LCU DR type. Second, depth scaling factors of the adjacent frames are used to shrink the DR. Finally, for sequences with sharp scene changes, a depth range correction algorithm was devised that takes advantage of the neighboring frames' depth scaling factors.
Results showed that the proposed ACUDR+DSF took 10 % less time than Shen's method, while BDBR increased by about 0.81 % and BD-PSNR decreased by 0.026 dB. The CA cost nearly equivalent time, with 0.76 % lower BDBR and a 0.028 dB increase in BD-PSNR. Further research will address inter prediction modes to avoid unnecessary mode traversal and thus lower coding complexity further.

Fig. 4. Rate-distortion performance of the sequences: (a) PeopleOnStreet; (b) Kimono; (c) PartyScene; (d) BlowingBubbles.
Fig. 5. Quad-tree partitioning results for the BasketballPass sequence when QP is equal to 32: (a) HM9.0; (b) Shen; (c) CA; (d) ACUDR+DSF.

Acknowledgements

This research work was supported by the National Natural Science Foundation of China (Grants No. U1301257, 61311140262, 61271270, 61171163) and the Zhejiang Scientific and Technical Key Innovation Team (Grant No. 2010R50009).

References

[1]. Masoumi Saeid, Tabatabaei Raziyeh, Feizi-Derakhshi Mohammad-Reza, Tabatabaei Khatereh, A New Parallel Algorithm for Frequent Pattern Mining, Journal of Computational Intelligence and Electronic Systems, Vol. 2, Issue 1, 2013, pp. 55-59.
[2]. N. K. Mittal, Ahmed Mohd, Naaz Farha, Efficient palmprint recognition system implementation using 2D Gabor filters, Journal of Computational Intelligence and Electronic Systems, Vol. 1, Issue 2, 2012, pp. 154-160.
[3]. Y. X. Zhang, Y. Lu, Z. Luan, et al., Design and implementation of a video image system, Liquid Crystal and Display, Vol. 28, Issue 3, 2013, pp. 424-428.
[4]. Y. H. Wu, L. X. Jin, K. Zhang, et al., Fast P frame mode selection algorithm based on monotonicity of J-mode value, Liquid Crystal and Display, Vol. 28, Issue 2, 2013, pp. 266-272.
[5]. T. Wiegand, J. R. Ohm, G. J. Sullivan, et al., Special section on the joint call for proposals on high efficiency video coding standardization, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20, Issue 12, 2010, pp. 1661-1666.
[6]. T. Wiegand, G. J. Sullivan, G. Bjontegaard, et al., Overview of the H.264/AVC video coding standard, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, Issue 7, 2003, pp. 560-576.
[7]. G. J. Sullivan, J. R. Ohm, W. J. Han, et al., Overview of the High Efficiency Video Coding (HEVC) standard, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 22, Issue 12, 2012, pp. 1649-1668.
[8]. X. Li, J. C. An, X. Guo, et al., Adaptive CU depth range, in Proceedings of the 5th JCT-VC Meeting, Geneva, Switzerland, 16-23 March 2011, Document JCTVC-E090.
[9]. L. Q. Shen, Z. Liu, X. P. Zhang, et al., An effective CU size decision method for HEVC encoders, IEEE Transactions on Multimedia, Vol. 15, Issue 2, 2013, pp. 465-470.
[10]. J. Kim, Y. Choe and Y. G. Kim, Fast coding unit size decision algorithm for intra coding in HEVC, in Proceedings of the IEEE International Conference on Consumer Electronics (ICCE 2013), Las Vegas, USA, 11-14 January 2013, pp. 637-638.
[11]. Q. Yu, X. F. Zhang, S. Q. Wang, S. W. Ma, Early termination of coding unit splitting for HEVC, in Proceedings of the Asia-Pacific Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), Hollywood, USA, 3-6 December 2012, pp. 1-4.
[12]. T. L. da Silva, L. V. Agostini, L. A. da Silva, Fast HEVC intra prediction mode decision based on edge direction information, in Proceedings of the European Signal Processing Conference, Bucharest, Romania, 27-31 August 2012, pp. 1214-1218.
[13]. B. Bross, W. J. Han, J. R. Ohm, et al., High Efficiency Video Coding text specification draft 10 (for FDIS & Consent), in Proceedings of the 12th JCT-VC Meeting, Geneva, Switzerland, 14-23 January 2013, Document JCTVC-L1003.
[14]. Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, HM 9.0 reference software (https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/tags/HM-9.0).
[15]. F. Bossen, Common test conditions and software reference configurations, in Proceedings of the 12th JCT-VC Meeting, Geneva, Switzerland, 14-23 January 2013, Document JCTVC-L1100.
[16]. G. Bjontegaard, Calculation of average PSNR differences between RD-curves, in Proceedings of the 13th Video Coding Experts Group (VCEG) Meeting, Austin, USA, 2-4 April 2001, pp. 1-4.

2014 Copyright, International Frequency Sensor Association (IFSA) Publishing, S. L. All rights reserved. (http://www.sensorsportal.com)