
    Nonlinear Prediction with Deep Recurrent Neural Networks for Non-Blind Audio Bandwidth Extension

China Communications, 2018, No. 1

Lin Jiang, Ruimin Hu*, Xiaochen Wang, Weiping Tu, Maosheng Zhang
National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China; Institute of Big Data and Internet Innovation, Hunan University of Commerce, Changsha, China; Software College, East China University of Technology, Nanchang, China; Collaborative Innovation Center for Economics Crime Investigation and Prevention Technology, Jiangxi Province, Nanchang, China; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China; Collaborative Innovation Center of Geospatial Technology, Wuhan, China

    I. INTRODUCTION

In modern telecommunications, audio coding has become an essential technology that attracts considerable attention. In particular, for mobile applications, packing the data into a small space with efficient methods is beneficial. The coding algorithms must be fairly simple, because mobile processors are relatively weak and less processing means less battery use. Audio bandwidth extension (BWE) is a standard technique within contemporary audio codecs to efficiently code audio signals at low bitrates [1].

In these audio codecs, the signals are split into low frequency (LF) and high frequency (HF) parts, which are encoded by a core codec and by BWE, respectively. The approach is based on the properties of human hearing. The hearing threshold for high frequencies is higher than for lower frequencies (except very low frequencies), so high frequency tones are not heard as loud as tones of the same amplitude at lower frequencies [2]. Also, the frequency resolution of hearing is better at lower frequencies. Therefore, the coding bitrate for HF can be far lower than for LF.

    Another useful feature of many types of audio samples is that the level of the higher frequencies is usually lower than the level of lower frequencies. And finally, the sound of many musical instruments is harmonic, which means that some properties of the frequency spectrum are very similar in lower and higher frequencies [3]. The similarity of frequency spectrum is also called the correlation between LF and HF.

According to the above-mentioned properties and features, on the decoder side the HF signals are usually generated from a duplication of the corresponding decoded LF signals and a priori knowledge of HF. Depending on whether parameters are transmitted, BWE methods fall into two categories: blind BWE and non-blind BWE. In non-blind BWE, a few parameters of HF are transmitted to the decoder side for reconstructing the high frequency signals. In this paper, we only discuss non-blind BWE. For the sake of concise narrative, the term non-blind BWE will be replaced with the abbreviation BWE in the following sections.

In audio coding standards, BWE is a necessary module for coding the high frequency signal. For example, MPEG Advanced Audio Coding (AAC) uses the spectral band replication method (SBR) [4], AMR-WB+ uses an LPC-based BWE [5], ITU-T G.729.1 uses a hierarchical BWE [6], China AVS-M uses an LPC-based BWE in the FFT domain [7,8], MPEG Unified Speech and Audio Coding (USAC) uses an enhanced SBR (eSBR) [9], and 3GPP EVS uses a multi-mode BWE method, including TBE [10], FD-BWE [41] and IGF [42]. There are two main categories of BWE methods: time domain BWE and frequency domain BWE.

Time domain BWE performs adaptive signal processing according to the well-known time-varying source-filter model of speech production [1]. This approach is based on the Linear Predictive Coding (LPC) paradigm (abbr. LPC-based BWE), in which the speech signal is generated by sending an excitation signal through an all-pole synthesis filter. The excitation signal is directly derived from a duplication of the decoded LF signals. The all-pole synthesis filter models the spectral envelope and shapes the fine pitch structure of the excitation signal when generating the HF signal. A small number of HF parameters, as parametric representations of the spectral envelope, are transmitted to the decoder side, such as Linear Prediction Cepstral Coefficients (LPCCs), Cepstral Coefficients (CEPs), Mel Frequency Cepstral Coefficients (MFCCs) [11,12], and Line Spectral Frequencies (LSFs) [13]. In order to improve the perceptual quality of coding, a codebook mapping technique has also been introduced to achieve more accurate representations of the HF envelope [14,15].

As the basic principle of LPC-based BWE is the speech production model, this approach is widely used in speech coding [5-7,16]. However, because the lower frequencies of voiced speech signals generally exhibit a stronger harmonic structure than the higher frequencies, duplicating the LF excitation can introduce overly harmonic components into the generated HF excitation signal, which brings out objectionable, 'buzzy'-sounding artifacts [16].

Frequency domain BWE recreates the HF spectral band signals in the frequency domain. The basic principle of this BWE approach is derived from the human auditory system, in which hearing is mainly based on a short-term spectral analysis of the audio signal. Spectral band replication (SBR) [4] is the most widely used frequency domain BWE method. SBR uses a Pseudo-Quadrature Mirror Filter (PQMF) description of the signal and improves the compression efficiency of perceptual audio codecs. This is achieved by simply copying the LF bands to the HF bands within the filter bank, followed by post processing (including inverse filtering, adaptive noise addition, sinusoidal regeneration, and shaping of the spectral envelope). However, if the correlation between low and high frequency becomes weak, the method will produce artifacts because the harmonic structure of the HF signal is not preserved. To remedy this, some methods were developed to maintain the harmonic structure: the phase vocoder driven Harmonic Bandwidth Extension (HBE) [17], the Continuously Modulated Bandwidth Extension (CM-BWE) using single sideband modulation [18], QMF-based harmonic spectral band replication [19], and the MDCT-based harmonic spectral bandwidth extension method [20]. These methods significantly improved the perceptual quality of coding. However, coding artifacts still inevitably exist because replication from LF to HF requires a strong correlation between LF and HF [21].

The above-mentioned BWE methods take two steps to generate the HF signal. First, they rebuild the coarse HF signal by copying LF to HF at the current time frame. Second, they generate the final HF signal by envelope adjustment using the transmitted HF envelope data. In the first step, the similarity between the coarse HF and the original HF directly affects the perceptual quality of coding. Consequently, a weak correlation between HF and LF results in degraded perceptual quality. Our investigation found that correlation exists in the LF signal of context dependent frames in addition to the current frame. In this paper, our main goal is to achieve a more accurate coarse HF signal to improve the perceptual quality of coding. We propose a novel method to predict the coarse HF signal with a deep recurrent neural network using the context dependent LF signal. We then replace the conventional replication method with our method in the reference codecs. Moreover, in order to confirm the motivation of our method, we also propose a method to quantitatively analyse the correlation between the LF and HF signals.

The paper is organized as follows. Section 2 describes the motivation of this paper. In section 3, the prediction method for the coarse HF signal is given, while the performance of the proposed method and comparisons with others are shown in section 4; finally, section 5 presents the conclusion of this paper.

    Fig. 1. Generic scheme of BWE.

    II. MOTIVATION

    2.1 Overview of BWE scheme

The generic scheme of BWE is shown in figure 1. In the generic BWE scheme, according to the perceptive difference of the human auditory system for HF and LF, the full band input signal S_full is split into the HF signal S_hf and the LF signal S_lf. The LF signal is coded using a core codec, such as algebraic code excited linear prediction (ACELP) [5,7,8,10], Transform Coded Excitation (TCX) [5,7,8], or MDCT-based coding [6,9,10,36]. The HF signal is usually coded without an explicit waveform coding; only a small number of HF parameters P_hf are extracted and transmitted to the decoder side. On the decoder side, the final HF signal S'_hf is recreated using the coarse HF signal C_hf and the decoded HF parameters P'_hf. The coarse HF signal is usually generated from the decoded LF signal S'_lf. To produce a pleasant sounding HF signal, an intuitive approach is to increase the number of HF parameters. However, this conflicts with the requirement of a low bitrate. Some approaches have been developed to enhance the similarity between the coarse HF and the original HF.

In time domain BWE, the coarse HF signal is usually derived from the decoded LF excitation signal. To preserve the harmonic structure of the HF excitation signal, a nonlinear function is used [16,22]. In [23], the HF excitation signal is generated by upsampling a low band fixed codebook vector and a low band adaptive codebook vector to a predetermined sampling frequency. In frequency domain BWE, the coarse HF signal is usually derived from a duplication of the decoded LF subband signal in the frequency domain. To preserve the harmonic structure of the original HF, some post processing is usually introduced, such as inverse filtering, adaptive noise addition, sinusoidal regeneration, shaping of the spectral envelope [20], single sideband modulation [18], and a phase vocoder [17].

The above approaches help improve the similarity of the coarse HF to the original HF. However, the improvement is limited when the correlation between the HF and LF signals becomes weak. More importantly, we found that only the current frame decoded LF signal is used to generate the coarse HF in existing methods. According to the physical properties of the audio signal, we consider that correlation also exists in the LF signal of context dependent frames.

    2.2 Correlation analysis between HF and LF

The motivation for all bandwidth extension methods is the fact that the spectral envelopes of the lower and higher frequency bands of the audio signal are dependent, i.e., the low band part of the audio spectrum provides information about the spectral shape of the high band part. The level of dependency affects the accuracy of the reconstructed HF signal. In existing BWE methods, only the current frame LF signal is used to recreate the coarse HF. The utilization of the current frame is due to the short-term correlation of the audio signal. However, there is also a long-term correlation when the fundamental frequency of voice changes slowly [24]. To reveal the long-term correlation for recreating the HF signal, we quantitatively analyse the correlation using the mutual information between HF and LF.

Taking into account the uncertainty and nonlinearity of audio signals, mutual information is an appropriate measure of correlation [25,26]. The mutual information (MI) between two continuous variables X and Y is given by [27]:

I(X;Y) = h(Y) - h(Y|X)

where h(Y) is the differential entropy of Y, defined by an integration over the value space \Omega_Y of Y:

h(Y) = -\int_{\Omega_Y} f_Y(y) \log f_Y(y) \, dy

where f_Y(y) denotes the probability density function (pdf) of Y. The conditional differential entropy h(Y|X) of Y given X is defined as:

h(Y|X) = -\int_{\Omega_X} \int_{\Omega_Y} f_{Y,X}(y,x) \log f_{Y|X}(y|x) \, dy \, dx

where \Omega_X is the value space of X and f_{Y,X}(y,x) is the joint pdf of X and Y. Throughout our correlation analysis, X is a frequency spectral amplitude vector A_L representing the LF band and Y is a frequency spectral amplitude vector A_H representing the HF band. The mutual information is defined in the discrete form:

I(A_L; A_H) = \sum_{a_L} \sum_{a_H} p(a_L, a_H) \log \frac{p(a_L, a_H)}{p(a_L)\, p(a_H)}

where p(a_L, a_H) is the joint probability of LF and HF, and p(a_L) and p(a_H) denote the prior probabilities of LF and HF, respectively.
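As a concrete illustration of the discrete form above, the following minimal Python sketch estimates I(A_L; A_H) in bits from paired per-frame features (e.g. a quantized sub-band amplitude per frame). The histogram-based estimator and all names here are illustrative assumptions; the paper does not specify how the joint probabilities were estimated.

    import numpy as np

    def mutual_information(a_l, a_h, bins=64):
        # Joint histogram approximates p(a_L, a_H).
        joint, _, _ = np.histogram2d(a_l, a_h, bins=bins)
        p_joint = joint / joint.sum()
        p_l = p_joint.sum(axis=1, keepdims=True)   # marginal p(a_L)
        p_h = p_joint.sum(axis=0, keepdims=True)   # marginal p(a_H)
        nz = p_joint > 0                           # avoid log(0)
        return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_l @ p_h)[nz])))

    def mi_at_shift(lf_feat, hf_feat, t, bins=64):
        # MI between the (i-t)-th frame LF feature and the i-th frame HF feature.
        if t == 0:
            return mutual_information(lf_feat, hf_feat, bins)
        return mutual_information(lf_feat[:-t], hf_feat[t:], bins)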

In order to quantitatively analyse the correlation for various types of sound, we calculate the MI value I(A_L^{i-t}; A_H^i) under different frame shifts, where A_H^i denotes the i-th frame HF, A_L^{i-t} denotes the (i-t)-th frame LF, and t is the frame shift. In figure 2, when t = 0 the correlation is the greatest (the greater the MI value, the higher the correlation). This is easy to understand, because the HF and LF come from the same frame. Just as hypothesized in section 2.1, correlation also exists between the i-th frame HF and the (i-1)-th, (i-2)-th, (i-3)-th, … frame LF signals. Moreover, we also give the average MI values of various types of sound (e.g. speech, music and instruments) for evaluating the correlation (see figure 3). In figure 3, we also find that the HF signal is associated not only with the LF signal of the current frame, but also with the LF signal of the preceding frames. All of this shows that the HF reconstruction can be derived from the LF signal of context dependent frames besides the current frame.

Fig. 2. The MI (bits) of a bagpipe sound under different frame shifts.

Fig. 3. The average MI (bits) of various types of sound under different frame shifts.

Fig. 4. Conceptual comparison between the conventional method and our method for generating the coarse HF signal, shown on spectrograms.

    2.3 Selection of prediction method

The purpose of this paper is to predict the coarse HF signal from the LF. In particular, we establish a nonlinear mapping model from LF to HF to achieve a more accurate coarse HF signal. In blind bandwidth extension, a nonlinear mapping model is a generic method developed for expanding the wideband speech signal. In these methods, a neural network is a usual choice due to its strong modelling capacity [28-30]. As the mapping from LF to HF is extremely complicated, the modelling ability of earlier shallow networks is inadequate. In our previous work, we used a deep auto-encoder neural network to predict the coarse HF signal [31]. That method significantly improved the perceptual quality of coding. However, since it only uses the current frame LF signal, the improvement is limited when the correlation between LF and HF becomes weak.

According to the above correlation analysis, correlation exists in the LF signal of context dependent frames besides the current frame. The selected mapping method is therefore required to model time-series signals. Deep recurrent neural networks have recently shown excellent performance in large scale acoustic modelling [32]. Consequently, we select them as the modelling tool for predicting the coarse HF signal.

    III. THE PREDICTION METHOD OF COARSE HF SIGNAL

    3.1 Problem statement

In previous work, the coarse HF signal is usually generated by a duplication of the corresponding current frame LF signal. According to the correlation analysis in section 2, we establish a nonlinear mapping model to predict the coarse HF signal using the context dependent LF signal. The conceptual comparison between the conventional method and ours is shown in figure 4. The prediction task can be formulated mathematically as a generative model problem. This reformulation allows applying a wide range of well-established methods.

Let D = {(l_i, h_i)} be the training set, where l_i and h_i are the i-th frame decoded LF signal and the original coarse HF signal, respectively. We divide the dataset into training and validation sets with sizes N_l and N_v, respectively. Further, we introduce the set of prediction functions F in which we want to find the best model. Assuming a neural network with a fixed architecture, it is possible to associate the set of functions F with the network weight space W, and thus a function f and a vector of weights w are interchangeable.

The next step is to introduce the loss function. As in almost all generative models, here we are interested in an accuracy error measure. In particular, we wish to find a "perfect" prediction function to generate the coarse HF signal with minimal error. We define Loss(h, y) as the loss function, where h is the original coarse HF signal and y = f(l) is the prediction of the model from the decoded LF signal l. Assuming that there is a "perfect" prediction f* ∈ F in the prediction function set F, our task is to find f* in the best possible way. According to statistical learning theory [43], the risk associated with a prediction f(l) is then defined as the expectation of the loss function:

R(f) = \int Loss(h, f(l)) \, dP(h, l)

where P(h, l) is a joint probability distribution over the coarse HF signal training set H and the decoded LF training set L. Our ultimate goal is to find a prediction function f* among a fixed class of functions F for which R(f) is minimal:

f^* = \arg\min_{f \in F} R(f)

In general, the risk R(f) cannot be computed directly because the distribution P(h, l) is unknown to the learning algorithm. However, we can approximate f* by minimizing the empirical risk [43], obtained by averaging the loss function on the training set:

R_{emp}(f) = \frac{1}{N_l} \sum_{i=1}^{N_l} Loss(h_i, f(l_i))

The empirical risk minimization principle [43] states that the learning algorithm should choose a prediction f̂ which minimizes the empirical risk:

\hat{f} = \arg\min_{f \in F} \frac{1}{N_l} \sum_{i=1}^{N_l} \sum_{j=1}^{M} \left( h_{i,j} - f(l_i; w, b)_j \right)^2

where i denotes the frame index, j is the frequency spectrum coefficient index, M is the frame length, and w and b are the network weights and bias terms, respectively. A common approach to reduce overfitting is to check the validation error from time to time during the optimization process and to stop when it starts growing. Because the validation error goes up and down over short spans, the criterion of "starts growing" is evaluated over consecutive checks, e.g. 5 frames; if the validation error rises steadily, training is stopped.
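A minimal sketch of this stopping rule follows, assuming the validation error is appended to a list at each periodic check and training stops once it has risen on, e.g., 5 consecutive checks (names and the check interval are assumptions, not the paper's implementation):

    def should_stop(val_errors, patience=5):
        # Stop only when the last `patience` transitions are all increases,
        # so short up-and-down fluctuations do not trigger early stopping.
        if len(val_errors) <= patience:
            return False
        recent = val_errors[-(patience + 1):]
        return all(later > earlier for earlier, later in zip(recent, recent[1:]))

    # Inside the training loop (hypothetical evaluate()):
    # val_errors.append(evaluate(model, validation_set))
    # if should_stop(val_errors):
    #     break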

    3.2 Prediction method

Recurrent neural networks (RNNs) were put forward to deal with time-series data. Motivated by their superior performance in many tasks, we propose a nonlinear mapping model that predicts the coarse HF signal using deep long short-term memory recurrent neural networks.

    3.2.1 RNN

Recurrent neural networks allow cyclical connections in a feed-forward neural network [33]. Different from feed-forward networks, RNNs are able to incorporate contextual information from previous input vectors, which allows them to remember past inputs and persist them in the network's internal state. This property makes them an attractive choice for sequence-to-sequence learning. For a given input vector sequence x = (x_1, x_2, …, x_T), the forward pass of an RNN is as follows:

h_t = H(W_{xh} x_t + W_{hh} h_{t-1} + b_h)
y_t = W_{hy} h_t + b_y

where t = 1, …, T and T is the length of the sequence; h = (h_1, h_2, …, h_T) is the hidden state vector sequence computed from x; y = (y_1, y_2, …, y_T) is the output vector sequence; W_{xh}, W_{hh} and W_{hy} are the input-hidden, hidden-hidden and hidden-output weight matrices, respectively; b_h and b_y are the hidden and output bias vectors, respectively; and H denotes the nonlinear activation function for the hidden nodes.
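This forward pass can be written directly in NumPy; the following sketch is for illustration only (shapes and names are assumptions, not the paper's implementation):

    import numpy as np

    def rnn_forward(x, W_xh, W_hh, W_hy, b_h, b_y, H=np.tanh):
        # x: (T, D) input sequence; W_xh: (N, D), W_hh: (N, N), W_hy: (K, N).
        T, N = x.shape[0], W_hh.shape[0]
        h = np.zeros((T, N))
        y = np.zeros((T, W_hy.shape[0]))
        h_prev = np.zeros(N)
        for t in range(T):
            h[t] = H(W_xh @ x[t] + W_hh @ h_prev + b_h)  # h_t = H(W_xh x_t + W_hh h_{t-1} + b_h)
            y[t] = W_hy @ h[t] + b_y                     # y_t = W_hy h_t + b_y
            h_prev = h[t]
        return h, y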

For our prediction system, because of the context dependent correlation phenomenon, we desire the model to have access to both past and future context. However, conventional RNNs can only access the past context and ignore the future context, so bidirectional recurrent neural networks (BRNNs) are used to relieve this problem. BRNNs compute both a forward state sequence \overrightarrow{h} and a backward state sequence \overleftarrow{h}, as formulated below:

\overrightarrow{h}_t = H(W_{x\overrightarrow{h}} x_t + W_{\overrightarrow{h}\overrightarrow{h}} \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}})
\overleftarrow{h}_t = H(W_{x\overleftarrow{h}} x_t + W_{\overleftarrow{h}\overleftarrow{h}} \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}})
y_t = W_{\overrightarrow{h}y} \overrightarrow{h}_t + W_{\overleftarrow{h}y} \overleftarrow{h}_t + b_y

    Fig. 5. Long short-term memory (LSTM) [34].

    3.2.2 LSTM-RNN

Conventional RNNs can access only a limited range of context because of the vanishing gradient problem. Long short-term memory (LSTM) uses purpose-built memory cells, as shown in figure 5 [34], to store information, and is designed to overcome this limitation [34]. In sequence-to-sequence mapping tasks, LSTM has been shown capable of bridging very long time lags between input and output sequences by enforcing constant error flow. For LSTM, the recurrent hidden layer function H is implemented as follows:

i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)
f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)
a_t = \tau(W_{xa} x_t + W_{ha} h_{t-1} + b_a)
c_t = f_t \odot c_{t-1} + i_t \odot a_t
o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)
h_t = o_t \odot \theta(c_t)

where \sigma is the sigmoid function; i, f, o, a and c are the input gate, forget gate, output gate, cell input activation and cell memory, respectively; and \tau and \theta are the cell input and output nonlinear activation functions, for which tanh is generally chosen. The multiplicative gates allow LSTM memory cells to store and access information over long periods of time, thereby avoiding the vanishing gradient problem.
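For illustration, one timestep of the gate equations above can be sketched as follows (a plain, non-optimized rendering under assumed parameter naming; a real system would use a framework implementation):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b, tau=np.tanh, theta=np.tanh):
        # W[g] = (W_xg, W_hg) and b[g] hold the parameters of gate g.
        i = sigmoid(W['i'][0] @ x_t + W['i'][1] @ h_prev + b['i'])  # input gate
        f = sigmoid(W['f'][0] @ x_t + W['f'][1] @ h_prev + b['f'])  # forget gate
        a = tau(W['a'][0] @ x_t + W['a'][1] @ h_prev + b['a'])      # cell input activation
        c = f * c_prev + i * a                                      # cell memory update
        o = sigmoid(W['o'][0] @ x_t + W['o'][1] @ h_prev + b['o'])  # output gate
        h = o * theta(c)                                            # hidden output
        return h, c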

    3.2.3 DBLSTM-RNNs-based Prediction Method

In order to accurately predict the coarse HF signal using the context dependent decoded LF signal, we design the DBLSTM-RNNs with dilated LSTMs, as shown in figure 6 [35]. The dilated LSTM ensures that the predicted coarse HF signal H(h_t | l_t, l_{t-1}, l_{t-2}, …, l_1) emitted by the model at timestep t can depend on any of the previous decoded LF signals at timesteps t, t-1, t-2, …, 1. A dilated LSTM is an LSTM that is applied over an area larger than its length by skipping input values with a certain step.

Stacked dilated LSTMs efficiently enable very large receptive fields with just a few layers, while preserving the input resolution throughout the network. In this paper, the dilation is doubled for every layer up to a certain point and then repeated, e.g.

    1,2,4,…,512, 1,2,4,…,512, 1,2,4,…,512.

The intuition behind this configuration is two-fold. First, exponentially increasing the dilation factor results in exponential receptive field growth with depth. For example, each 1, 2, 4, …, 512 block has a receptive field of size 1024. Second, stacking these blocks further increases the model capacity and the receptive field size.
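The receptive field arithmetic can be checked with a few lines of Python. The sketch assumes each layer with dilation d extends the receptive field by d timesteps, as in WaveNet-style stacks [35]:

    def receptive_field(dilations):
        # One input step plus d extra steps of look-back per layer.
        return 1 + sum(dilations)

    block = [2 ** k for k in range(10)]   # 1, 2, 4, ..., 512
    print(receptive_field(block))         # 1024 for one block
    print(receptive_field(block * 3))     # three stacked blocks widen it further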

Learning DBLSTM-RNNs can be regarded as optimizing a differentiable error function:

E(w) = \sum_{n=1}^{M_{train}} \sum_{t} \left\| h_t^{(n)} - f(l^{(n)}; w)_t \right\|^2

where M_train represents the number of sequences in the training data and w denotes the network inter-node weights. In our prediction system, the training criterion is to minimize the sum of squared errors (SSE) between the predicted values and the original coarse HF signal. We use the back-propagation through time (BPTT) algorithm to train the network. In the BLSTM hidden layers, BPTT is applied to both forward and backward hidden nodes and back-propagates layer by layer. After training the network, the weight vectors w and bias vectors b are determined, and we can use the network to predict the coarse HF signal from the decoded LF signal, as formulated below:

\hat{h}_i = f(l_k, l_{k+1}, \ldots, l_i; w, b)

where k = i - m + 1, m is the timestep, m = 2^d, and d denotes the depth of the DBLSTM-RNNs.
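At prediction time this amounts to sliding a window of m decoded LF frames over the signal. A sketch follows, where model.predict() stands in for the trained DBLSTM network (a hypothetical interface) and zero-padding the first frames is an assumption:

    import numpy as np

    def predict_coarse_hf(model, lf_frames, m):
        # Coarse HF of frame i is predicted from frames l_{i-m+1}, ..., l_i.
        n, dim = lf_frames.shape
        padded = np.vstack([np.zeros((m - 1, dim)), lf_frames])
        return np.array([model.predict(padded[i:i + m]) for i in range(n)])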


    IV. THE EXPERIMENT AND EVALUATION

In order to verify the validity of the proposed method, we used the DBLSTM-RNNs instead of the conventional replication method to generate the coarse HF signal in the reference codecs. To test the general applicability of our method, we selected 6 representative reference codecs as evaluation objects.

In this section, we first describe the reference codecs used for evaluating the performance of the proposed method. Then we train the DBLSTM-RNNs architecture on the different reference codecs. Finally, we show the experimental results of the subjective listening test, the objective test and a comparison of computational complexity.

    4.1 Test reference codecs

(2) WB+: 3GPP AMR-WB+ is an extended AMR-WB codec that provides unique performance at very low bitrates, from below 10.4 kbps up to 24 kbps [5]. Its HF signal is encoded by a typical time domain BWE method (LPC-based BWE), and the coarse HF signal is obtained by copying the decoded LF excitation signals. The bitrate is set to 16.4 kbps in our experiments.

(3) AVS: Audio and Video coding Standard for Mobile (AVS-M, submitted as AVS Part 10) is a low bitrate audio coding standard proposed for the next generation mobile communication system [7,8]. It is also the first mobile audio coding standard in China. Its BWE is similar to WB+, and the coarse HF signal is derived from a duplication of the decoded LF excitation signal. As with WB+, the bitrate is set to 16.4 kbps for testing.

(5) EVS: The codec for Enhanced Voice Services, standardized by 3GPP in September 2014, provides a wide range of new functionalities and improvements enabling unprecedented versatility and efficiency in mobile communication [10]. For upper band coding, EVS uses different BWE methods based on the selected core codec. In LP-based coding mode, TBE and a multi-mode FD-BWE method are employed. In MDCT-based TCX coding mode, an Intelligent Gap Filling (IGF) tool is employed, which is an enhanced noise filling technique to fill gaps (regions of zero values) in spectra.

(6) DAE: This is an improved version of AVS P10 from our previous work [31]. The coarse HF signal is predicted from the LF signal of the current frame by a deep auto-encoder. This method is selected as a reference codec because it is representative of prediction-based approaches.

More details of the test reference codecs are listed in Table 1.

    Table I. The details of test reference codecs.

    4.2 Experiment setup

All networks are trained on an approximately 50-hour dataset consisting of TIMIT speech, Chinese speech, natural sounds and music. We randomly divided the database into 2 disjoint parts: 80% for training and 20% for validation. Due to the different input signals of the six test reference codecs, the training process is carried out separately for each codec. The inputs of the networks are the decoded LF signals extracted from each reference codec, respectively. For the supervision data, the original coarse HF signals are extracted on the encoder side of each reference codec, respectively. The frequency ranges are listed in Table 1.

Our goal is to predict the coarse spectrum, so the transmitted HF parameters remain untouched. For AAC+ and USAC (the SBR and eSBR techniques), the QMF coefficients of the decoded LF serve as the input signal; due to the complex form of the QMF, the real and imaginary coefficients are input separately, and the coarse HF spectrum is also predicted separately. For WB+ and AVS, the excitation of the decoded LF serves as the input signal, and the HF excitation signal is predicted. For DAE, the MDCT coefficients of the decoded LF serve as the input, and the HF MDCT coefficients are predicted. For EVS, our method is implemented only on TBE, and the proposed model replaces the nonlinear function module of TBE; the excitations of the decoded LF and HF serve as the input and output of the model, respectively. For all reference codecs, a smoothing process is applied in the time domain to the generated final HF; we use an energy suppression method between frames to reduce the noise.

According to the correlation analysis in section 2.2, correlation exists in the previous consecutive frames. In our implementation, we generally use the previous 5 frames of the decoded LF signal to predict the current frame's coarse HF spectrum. However, weak correlation (e.g. in transient and other non-stationary frames) may result in strong distortion. To remedy this, before predicting we perform transient detection on the decoded LF; if a frame is a transient signal, we do not use it for prediction. If the number of transient frames exceeds 2, we use only the current frame for prediction.
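This frame-selection rule can be sketched as follows; is_transient would come from a transient detector, which the paper does not specify, and the exact fallback behaviour is one reading of the rule:

    def select_context(lf_frames, is_transient, i, context=5, max_transient=2):
        # Candidate context: the `context` frames preceding frame i.
        idx = list(range(max(0, i - context), i))
        num_transient = sum(1 for j in idx if is_transient[j])
        if num_transient > max_transient:
            return [lf_frames[i]]                  # fall back to the current frame only
        kept = [lf_frames[j] for j in idx if not is_transient[j]]
        return kept + [lf_frames[i]]               # non-transient context plus current frame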

The training of the network architectures is implemented on a CPU-GPU cluster, a high performance computing system of Wuhan University [37]. We use the asynchronous stochastic gradient descent (ASGD) optimization technique: the update of the parameters with the gradients is done asynchronously from multiple threads on a multicore machine. The number of hidden layers is set based on the observation of the Spectral Distortion (SD) between the model outputs and the original coarse HF signal (see figure 7). The results show that the SD value drops as the network depth increases, and the change levels off at depth 10. Taking into account the computational complexity, we set the network depth to 10, so the prediction timestep is m = 2^10 = 1024.

Fig. 7. The Spectral Distortion (SD) values under different network depths.

    4.3 Subjective evaluation

To evaluate the perceptual quality of coding, a subjective listening test was conducted using the comparison category rating (CCR) test [38]. 12 expert listeners participated; in a CCR test they compare pairs of audio samples and rate the processed sample (using the proposed method to predict the coarse HF signal) against the reference in each comparison on a discrete 7-point scale that ranges from much worse (-3) to much better (3). The resulting average score is known as the comparison mean opinion score (CMOS). For the CMOS value, an increase of 0.1 indicates a significant improvement. Note that this threshold is not a standard criterion; we use it because it is a customary rule of the China AVS Workgroup, which usually accepts a new technical proposal by this criterion [44,45]. The MPEG audio test files are used as test material (see table 2); they are well-known test files for evaluating the perceptual quality of audio codecs. The results of the subjective listening test are shown in figure 8.
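For reference, the CMOS and the standard error shown as error bars in figure 8 are simple statistics of the CCR ratings; a small sketch follows (the listener scores here are illustrative, not the paper's data):

    import numpy as np

    def cmos(ratings):
        # Mean CCR rating on the 7-point scale from -3 (much worse) to +3 (much
        # better), plus the standard error of the mean for the error bars.
        r = np.asarray(ratings, dtype=float)
        return r.mean(), r.std(ddof=1) / np.sqrt(r.size)

    mean, sem = cmos([1, 0, 2, 1, 0, 1, -1, 2, 1, 0, 1, 1])  # e.g. 12 listeners, one pair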

    Table II. List of test material in our experiments.

For the selected reference codecs, the CMOS of our method is more than 0.15 except for USAC, which indicates a significant improvement from using DBLSTM-RNNs instead of the conventional replication method. We also find that the CMOS is higher for codecs with low bitrates. The average CMOS reaches 0.29 on the WB+ codec (only 0.8 kbps for BWE), which demonstrates that the accuracy of the coarse HF signal is important for the perceptual quality of coding. For USAC, the potential improvement is limited, with less than 0.1 average CMOS; in USAC, a strategy of increasing the bitrate for BWE is used to remedy the flaws of spectral band replication. DAE is selected as a reference codec to verify the contribution of the context dependent LF signal compared with the current frame LF. In figure 8, its score reaches 0.18, showing a significant improvement, which illustrates that the correlation indeed exists in the preceding frames besides the present time frame.

    Fig. 8. Comparison mean opinion scores (CMOS) of quality comparisons between different reference codecs in the CCR test. The scores for various audio types are shown separately. Error bars indicate the standard error of the mean.

    Fig. 9. Objective difference grade (ODG) of quality comparisons between different reference codecs and our method in the PEAQ test. The scores for various audio types are shown separately.

For the various audio types (speech, music, instruments), the CMOS of our method shows an obvious difference. For speech test samples, the CMOS is the lowest, while the highest CMOS appears on instrument test samples. This phenomenon can be explained by the frequency components of the signals. For speech and instrument signals, the richness of the harmonics differs in the HF bands, and instruments are richer than speech. Richer harmonics bring about a stronger correlation between LF and HF. Therefore, the performance of the DBLSTM-RNNs is better for rich harmonic signals.

    4.4 Objective evaluation

In order to further evaluate the performance of the proposed method, we also implement an objective test using the perceptual evaluation of audio quality method (PEAQ) [39]. PEAQ, as ITU-R Recommendation BS.1387, is a standardized algorithm for objectively measuring perceived audio quality. The PQevalAudio test software tool [40] is used to evaluate the objective difference grade (ODG) between the reference sample and the test sample. ODG values range from 0 to -4, where 0 corresponds to an imperceptible impairment and -4 to an impairment judged as very annoying. In order to match the subjective test, we used the same test material (see Table 2).

The objective test results are shown in figure 9; as expected, the ODGs are approximately consistent with the CMOS of the subjective listening test. The average ODG increased by 15.39%, 22.76%, 17.05%, 7.45%, 11.84% and 13.55% on AAC+, WB+, AVS, USAC, EVS and DAE, respectively, and the total average ODG increased by 14.67%. The objective test results further verify the better performance of the proposed method compared with the reference codecs.

    4.5 Computational complexity

In order to assess the computational complexity, a simple codec runtime test is executed. A 368-second wave file is selected as the test item, and the runtime environment is the same for the different codecs. Taking into account where our method is implemented in the codecs, the test is carried out on the decoder side. We used the GetTickCount function of Visual C++ (windows.h) to capture the runtimes, covering the whole decoder of each reference codec with our method as well as the single DBLSTM-RNNs module. In order to reduce the runtimes, the parameters of the network are stored in memory instead of in a file. All test programs were run on an Intel(R) Core(TM) i3-2370M CPU @ 2.40 GHz with 4 GB memory under Windows 7.0. The runtimes of each codec are listed in Table 3.

The runtimes of our method inevitably increase because of the complex architecture of the network. The average decoding runtime increased by 42.64% when using our method to predict the coarse HF signal, and the runtime of the RNNs module accounts for 40.25% of the total decoding procedure. Despite its high computational complexity, our method is still acceptable in some non-real-time application scenarios.

    V. CONCLUSION

A method for the prediction of the coarse HF signal in non-blind bandwidth extension was described in this paper. The method was found to outperform the reference methods for bandwidth extension in both subjective and objective comparisons. According to the test results, the performance was excellent for low bitrate BWE, and the outstanding prediction capacity was evident for rich harmonic signals, such as instruments. In addition to improving the perceptual quality of coding, we also found that the context dependent LF signal was vital for generating a more accurate HF signal.

Though the proposed method has superior performance, its expensive computational complexity will limit its application, e.g. in real-time application scenarios. Consequently, reducing the computational complexity is still required in future work. Moreover, while the perceptual quality of coding on the USAC codec was satisfactory, the bitrate is still high (3.5 kbps) for BWE. Reducing the redundant parameters of HF is also future work.

Table III. Runtime comparison (unit: seconds).

    ACKNOWLEDGEMENT

We gratefully acknowledge the anonymous reviewers who read drafts and made many helpful suggestions. This work is supported by the National Natural Science Foundation of China under Grant Nos. 61762005, 61231015, 61671335, 61702472, 61701194, 61761044 and 61471271; the National High Technology Research and Development Program of China (863 Program) under Grant No. 2015AA016306; the Hubei Province Technological Innovation Major Project under Grant No. 2016AAA015; the Science Project of the Education Department of Jiangxi Province under No. GJJ150585; and the Opening Project of the Collaborative Innovation Center for Economics Crime Investigation and Prevention Technology, Jiangxi Province, under Grant No. JXJZXTCX-025.

REFERENCES

[1] Larsen, Erik R., and R. M. Aarts. Audio Bandwidth Extension: Application of Psychoacoustics, Signal Processing and Loudspeaker Design. John Wiley & Sons, 2004.

    [2] T. D. Rossing, F. R. Moore, and P. A. Wheeler.The science of sound. Addison Wesley, 3rd edition,2001.

    [3] Arttu Laaksonen. “Bandwidth extension in high-quality audio coding”.Helsinki University of Technology, 2005.

[4] M. Dietz, L. Liljeryd, K. Kjörling, O. Kunz. "Spectral Band Replication, a novel approach in audio coding". Proc. 112th AES, 2002, pp. 1-8.

[5] J. Makinen, B. Bessette, S. Bruhn, P. Ojala. "AMR-WB+: a new audio coding standard for 3rd generation mobile audio services". Proc. ICASSP, 2005, pp. 1109-1112.

[6] Geiser B., Jax P., Vary P., et al. "Bandwidth Extension for Hierarchical Speech and Audio Coding in ITU-T Rec. G.729.1". IEEE Transactions on Audio, Speech & Language Processing, vol. 15, no. 8, 2007, pp. 2496-2509.

    [7] Zhang T, Liu C T, Quan H J. “AVS-M Audio: Algorithm and Implementation”.EURASIP Journal on Advances in Signal Processing, vol.1, no.1, 2011,pp. 1-16.

    [8] GB/T 20090.10-2013.Information technology advanced audio and video coding Part 10: mobile speech and audio. 2014 (in Chinese).

    [9] Quackenbush S. “MPEG Unified Speech and Audio Coding”.IEEE Multimedia, vol. 20, no. 2,2013, pp. 72-78.

[10] Bruhn S., et al. "Standardization of the new 3GPP EVS codec". Proc. ICASSP, 2015, pp. 19-24.

[11] A. H. Nour-Eldin and P. Kabal. "Mel-frequency cepstral coefficient-based bandwidth extension of narrowband speech". Proc. INTERSPEECH, 2008, pp. 53-56.

    [12] Seltzer, Michael L., Alex Acero, and Jasha Droppo. “Robust bandwidth extension of noise-corrupted narrowband speech”.Proc. INTERSPEECH,2005, pp. 1509-1512.

    [13] Chennoukh, S., et al. “Speech enhancement via frequency bandwidth extension using line spectral frequencies”.Proc. ICASSP, 2001, pp. 665-668.

    [14] Hang B, Hu R M, Li X, et al. “A Low Bit Rate Audio Bandwidth Extension Method for Mobile Communication”.Proc. PCM, 2008, pp. 778-781.

    [15] Wang Y, Zhao S, Mohammed K, et al. “Superwideband extension for AMR-WB using conditional codebooks”.Proc. ICASSP,2014, pp.3695-3698.

    [16] V Atti,V Krishnan,D Dewasurendra,V Chebiyyam, et al. “Super-wideband bandwidth extension for speech in the 3GPP EVS codec”.Proc. ICASSP, 2015, pp. 5927-5931.

    [17] Nagel F, Disch S. “A harmonic bandwidth extension method for audio codecs”.Proc. ICASSP,2009, pp. 145-148.

    [18] Nagel F, Disch S, Wilde S. “A continuous modulated single sideband bandwidth extension”.Proc. ICASSP, 2010, pp. 357 -360.

    [19] Zhong H, Villemoes L, Ekstrand P, et al. “QMF Based Harmonic Spectral Band Replication”.Proc. 131st AES, 2011, pp. 1-10.

    [20] Neukam C, Nagel F, Schuller G, et al. “A MDCT based harmonic spectral bandwidth extension method”.Proc. ICASSP, 2013, pp. 566-570.

    [21] Liu C M, Hsu H W, Lee W C. “Compression Artifacts in Perceptual Audio Coding”.IEEE Transactions on Audio Speech & Language Processing,vol. 16, no. 4, 2008, pp. 681-695.

    [22] Krishnan V, Rajendran V, Kandhadai A, et al.“EVRC-Wideband: The New 3GPP2 Wideband Vocoder Standard”.Proc. ICASSP,2007, pp. 333-336.

    [23] Sverrisson S, Bruhn S, Grancharov V. “Excitation signal bandwidth extension”, USA, US8856011,2014.

[24] Zölzer U. Digital Audio Signal Processing (Second Edition). Wiley, 2008.

[25] Nour-Eldin A. H., Shabestary T. Z., Kabal P. "The Effect of Memory Inclusion on Mutual Information Between Speech Frequency Bands". Proc. ICASSP, 2006, pp. 53-56.

    [26] Mattias Nilsson and Bastiaan Kleijn, “Mutual Information and the Speech Signal”.Proc. INTERSPEECH, 2007, pp. 502-505.

    [27] T. M. Cover and J. A. Thomas, “Elements of Information Theory”. Wiley, 1991.

    [28] Liu H J, Bao C C, Liu X. “Spectral envelope estimation used for audio bandwidth extension based on RBF neural network”.Proc. ICASSP,2013, pp. 543-547.

    [29] Liu X, Bao C. “Audio bandwidth extension based on ensemble echo state networks with temporal evolution”.IEEE/ACM Transactions on Audio Speech & Language Processing, vol. 24, no. 3,2016, pp. 594-607.

    [30] WANG Yingxue, ZHAO Shenghui, YU Yingying,KUANG Jingming. “Speech Bandwidth Extension Based on Restricted Boltzmann Machines”.Journal of Electronics & Information Technology,vol. 38, no. 7, 2016, pp. 1717-1723.

    [31] Jiang L, Hu R, Wang X, et al. “Low Bitrates Audio Bandwidth Extension Using a Deep Auto-Encoder”.Proc. PCM, 2015, pp. 528-537.

[32] H. Sak, A. Senior, and F. Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modelling". Proc. INTERSPEECH, 2014, pp. 338-342.

    [33] Williams RJ, Zipser D. “A learning algorithm for continually running fully recurrent neural networks”.Neural Computation, vol. 1, no. 2, 1989,pp. 270–280.

    [34] Hochreiter S, Schmidhuber J. “Long short-term memory“.Neural Computation, vol. 9, no. 8,1997, pp. 1735–1780.

[35] Oord A. v. d., Dieleman S., Zen H., et al. "WaveNet: A Generative Model for Raw Audio". 2016. URL https://arxiv.org/abs/1609.03499.

[36] Herre J., Dietz M. "MPEG-4 high-efficiency AAC coding". IEEE Signal Processing Magazine, vol. 25, no. 3, pp. 137-142.

    [37] “High performance computing system of Wuhan University”. http://csgrid.whu.edu.cn/ (in Chinese).

    [38] “ITU-T: Methods for Subjective Determination of Transmission Quality. Rec. P.800”. International Telecommunication, 1996.

    [39] Thiede T., Treurniet W. C., Bitto R. et al. “PEAQ---The ITU Standard for Objective Measurement of Perceived Audio Quality”.Journal of the Audio Engineering Society, vol. 48, no. 1, 2000, pp.3-29.

    [40] McGill University, “Perceptual Evaluation of Audio Quality”. http://www.mmsp.ece.mcgill.ca/Documents/Software

    [41] Miao L, Liu Z, Zhang X, et al. “A novel frequency domain BWE with relaxed synchronization and associated BWE switching”,Proc. GlobalSIP,2015, pp.642-646.

    [42] Helmrich C R, Niedermeier A, Disch S, et al.“Spectral envelope reconstruction via IGF for audio transform coding”.Proc. ICASSP, 2015, pp.389-393.

    [43] Vapnik, Vladimir N. The Nature of Statistical Learning Theory. Springer, 1995.

[44] LI Hong-rui, BAO Chang-chun, LIU Xin, et al. "Blind Bandwidth Extension of Audio Based on Fractal Theory". Journal of Signal Processing, vol. 29, no. 9, 2013, pp. 1127-1133. (in Chinese)

    [45] Chinese AVS Workgroup, M1628: The specification of subjective listening test for AVS audio technology proposal. AVS Audio group special sessions. August 15, 2005, Wuhan China. (in Chinese)
