Spawning Threads
Overview
Last time we added audio support by taking advantage of SDL's audio functions. SDL started a thread that made callbacks to a function we defined every time it needed audio. Now we're going to do the same sort of thing with the video display. This makes the code more modular and easier to work with - especially when we want to add syncing. So where do we start?
First we notice that our main function is handling an awful lot: it's running through the event loop, reading in packets, and decoding the video. So what we're going to do is split all those apart: we're going to have a thread that will be responsible for decoding the packets; these packets will then be added to the queue and read by the corresponding audio and video threads. The audio thread we have already set up the way we want it; the video thread will be a little more complicated since we have to display the video ourselves. We will add the actual display code to the main loop. But instead of just displaying video every time we loop, we will integrate the video display into the event loop. The idea is to decode the video, save the resulting frame in another queue, then create a custom event (FF_REFRESH_EVENT) that we add to the event system, then when our event loop sees this event, it will display the next frame in the queue. Here's a handy ASCII art illustration of what is going on:
 ________ audio  _______      _____
|        | pkts |       |    |     | to spkr
| DECODE |----->| AUDIO |--->| SDL |-->
|________|      |_______|    |_____|
    |  video     _______
    |   pkts    |       |
    +---------->| VIDEO |
 ________       |_______|   _______
|       |          |       |       |
| EVENT |          +------>| VIDEO | to mon.
| LOOP  |----------------->| DISP. |-->
|_______|<---FF_REFRESH----|_______|
The main purpose of controlling the video display via the event loop is that, using an SDL_Delay thread, we can control exactly when the next video frame shows up on the screen. When we finally sync the video in the next tutorial, it will be a simple matter to add the code that schedules the next video refresh so the right picture is shown on the screen at the right time.
Simplifying Code
We're also going to clean up the code a bit. We have all this audio and video codec information, and we're going to be adding queues and buffers and who knows what else. All this stuff is for one logical unit, viz. the movie. So we're going to make a large struct called VideoState that will hold all of that information.
typedef struct VideoState {
AVFormatContext *pFormatCtx;
int videoStream, audioStream;
AVStream *audio_st;
PacketQueue audioq;
uint8_t audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
unsigned int audio_buf_size;
unsigned int audio_buf_index;
AVPacket audio_pkt;
uint8_t *audio_pkt_data;
int audio_pkt_size;
AVStream *video_st;
PacketQueue videoq;
VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE];
int pictq_size, pictq_rindex, pictq_windex;
SDL_mutex *pictq_mutex;
SDL_cond *pictq_cond;
SDL_Thread *parse_tid;
SDL_Thread *video_tid;
char filename[1024];
int quit;
} VideoState;
Here we see a glimpse of what we're going to get to. First we see the basic information: the format context, the indices of the audio and video streams, and the corresponding AVStream objects. Then we can see that we've moved some of those audio buffers into this structure. These (audio_buf, audio_buf_size, etc.) hold the information about audio that was still lying around (or the lack thereof). We've added another queue for the video, and a buffer (which will be used as a queue; we don't need any fancy queueing for this) for the decoded frames (saved as overlays). The VideoPicture struct is of our own creation (we'll see what's in it when we come to it). We also notice that we've allocated pointers for the two extra threads we will create, plus the quit flag and the filename of the movie.
So now we take it all the way back to the main function to see how this changes our program. Let's set up our VideoState struct:
int main(int argc, char *argv[]) {
SDL_Event event;
VideoState *is;
is = av_mallocz(sizeof(VideoState));
av_mallocz() is a nice function that will allocate memory for us and zero it out.
Then we'll initialize our locks for the display buffer (pictq). The event loop calls our display function, and that display function, remember, will be pulling pre-decoded frames from pictq. At the same time, our video decoder will be putting information into it, and we don't know who will get there first. Hopefully you recognize this as a classic race condition. So we allocate the locks now, before we start any threads. Let's also copy the filename of our movie into our VideoState.
pstrcpy(is->filename, sizeof(is->filename), argv[1]);
is->pictq_mutex = SDL_CreateMutex();
is->pictq_cond = SDL_CreateCond();
pstrcpy is a function from ffmpeg that does some extra bounds checking beyond strncpy.
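(If your FFmpeg no longer ships pstrcpy, the libavutil function av_strlcpy does the same bounded, NUL-terminated copy. A minimal sketch, assuming a reasonably recent libavutil with the avstring.h header:)
#include <libavutil/avstring.h>
/* Bounded copy with guaranteed NUL-termination, like pstrcpy */
av_strlcpy(is->filename, argv[1], sizeof(is->filename));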
Our First Thread
Now let's finally launch our threads and get the real work done:
schedule_refresh(is, 40);
is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
av_free(is);
return -1;
}
schedule_refresh is a function we will define later. What it basically does is tell the system to push a FF_REFRESH_EVENT after the specified number of milliseconds. This will in turn call the video refresh function when we see it in the event queue. But for now, let's look at SDL_CreateThread().
SDL_CreateThread() does just that - it spawns a new thread that has complete access to all the memory of the original process, and starts the thread running on the function we give it. It will also pass that function user-defined data. In this case, we're calling decode_thread() with our VideoState struct attached as that data. The first half of the function has nothing new; it simply does the work of opening the file and finding the indices of the audio and video streams. The only thing we do differently is save the format context in our big struct. After we've found our stream indices, we call another function that we will define, stream_component_open(). This is a pretty natural way to split things up, and since we do a lot of similar things to set up the video and audio codec, we reuse some code by making this a function.
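For reference, here's a rough sketch of what that first half of decode_thread() might look like, using the same (now long-deprecated) API calls as the rest of this tutorial; the variable names video_index and audio_index are just for illustration:
int decode_thread(void *arg) {
  VideoState *is = (VideoState *)arg;
  AVFormatContext *pFormatCtx;
  int i, video_index = -1, audio_index = -1;
  is->videoStream = -1;
  is->audioStream = -1;
  // Open the movie file and read the stream information
  if(av_open_input_file(&pFormatCtx, is->filename, NULL, 0, NULL) != 0)
    return -1;
  is->pFormatCtx = pFormatCtx; // save the format context in our big struct
  if(av_find_stream_info(pFormatCtx) < 0)
    return -1;
  // Find the first video and audio streams
  for(i = 0; i < pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO &&
       video_index < 0)
      video_index = i;
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_AUDIO &&
       audio_index < 0)
      audio_index = i;
  }
  if(audio_index >= 0)
    stream_component_open(is, audio_index);
  if(video_index >= 0)
    stream_component_open(is, video_index);
  /* ... the packet-reading loop and quit handling shown below complete the function ... */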
The stream_component_open() function is where we will find our codec decoder, set up our audio options, save important information to our big struct, and launch our audio and video threads. This is where we would also insert other options, such as forcing the codec instead of autodetecting it and so forth. Here it is:
int stream_component_open(VideoState *is, int stream_index) {
AVFormatContext *pFormatCtx = is->pFormatCtx;
AVCodecContext *codecCtx;
AVCodec *codec;
SDL_AudioSpec wanted_spec, spec;
if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
return -1;
}
// Get a pointer to the codec context for the video stream
codecCtx = pFormatCtx->streams[stream_index]->codec;
if(codecCtx->codec_type == CODEC_TYPE_AUDIO) {
// Set audio settings from codec info
wanted_spec.freq = codecCtx->sample_rate;
/* .... */
wanted_spec.callback = audio_callback;
wanted_spec.userdata = is;
if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
return -1;
}
}
codec = avcodec_find_decoder(codecCtx->codec_id);
if(!codec || (avcodec_open(codecCtx, codec) < 0)) {
fprintf(stderr, "Unsupported codec!\n");
return -1;
}
switch(codecCtx->codec_type) {
case CODEC_TYPE_AUDIO:
is->audioStream = stream_index;
is->audio_st = pFormatCtx->streams[stream_index];
is->audio_buf_size = 0;
is->audio_buf_index = 0;
memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
packet_queue_init(&is->audioq);
SDL_PauseAudio(0);
break;
case CODEC_TYPE_VIDEO:
is->videoStream = stream_index;
is->video_st = pFormatCtx->streams[stream_index];
packet_queue_init(&is->videoq);
is->video_tid = SDL_CreateThread(video_thread, is);
break;
default:
break;
}
  return 0;
}
This is pretty much the same as the code we had before, except now it's generalized for audio and video. Notice that instead of aCodecCtx, we've set up our big struct as the userdata for our audio callback. We've also saved the streams themselves as audio_st and video_st, and we've added our video queue and set it up in the same way we set up our audio queue. Most of the point is to launch the video and audio threads. These bits do it:
SDL_PauseAudio(0);
break;
/* ...... */
is->video_tid = SDL_CreateThread(video_thread, is);
We remember SDL_PauseAudio() from last time, and SDL_CreateThread() is used in exactly the same way as before. We'll get back to our video_thread() function in a moment.
Before that, let's go back to the second half of our decode_thread() function. It's basically just a for loop that will read in a packet and put it on the right queue:
for(;;) {
if(is->quit) {
break;
}
// seek stuff goes here
if(is->audioq.size > MAX_AUDIOQ_SIZE ||
is->videoq.size > MAX_VIDEOQ_SIZE) {
SDL_Delay(10);
continue;
}
if(av_read_frame(is->pFormatCtx, packet) < 0) {
if(url_ferror(&pFormatCtx->pb) == 0) {
SDL_Delay(100); /* no error; wait for user input */
continue;
} else {
break;
}
}
// Is this a packet from the video stream?
if(packet->stream_index == is->videoStream) {
packet_queue_put(&is->videoq, packet);
} else if(packet->stream_index == is->audioStream) {
packet_queue_put(&is->audioq, packet);
} else {
av_free_packet(packet);
}
}
Nothing really new here, except that we've given our audio and video queues a maximum size and we've added a check for read errors. The format context has a struct of type ByteIOContext inside it called pb. ByteIOContext is the structure that keeps all the low-level file information. The url_ferror function checks that structure to see whether there was some kind of error reading from our movie file.
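The queue-size limits and the picture-queue size are plain #defines. A sketch of what they might look like (the exact values are a matter of taste; these are typical for this tutorial series, not mandated by anything):
#define MAX_AUDIOQ_SIZE (5 * 16 * 1024)
#define MAX_VIDEOQ_SIZE (5 * 256 * 1024)
#define VIDEO_PICTURE_QUEUE_SIZE 1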
After our for loop, we have all the code for waiting for the rest of the program to end and notifying it that we've finished. This code is instructive because it shows how we push events - something we'll have to do later in order to display the video.
while(!is->quit) {
SDL_Delay(100);
}
fail:
if(1){
SDL_Event event;
event.type = FF_QUIT_EVENT;
event.user.data1 = is;
SDL_PushEvent(&event);
}
return 0;
We get values for user events using the SDL constant SDL_USEREVENT. The first user event should be assigned the value SDL_USEREVENT, the next SDL_USEREVENT + 1, and so on. FF_QUIT_EVENT is defined in our program as SDL_USEREVENT + 2. We can also pass user data along if we like, and here we pass our pointer to the big struct. Finally we call SDL_PushEvent(). In our event loop switch we just put this next to the SDL_QUIT_EVENT section we had before. We'll see the event loop in more detail later; for now, just be assured that when we push FF_QUIT_EVENT, we'll catch it and set our quit flag.
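Putting the pieces from this tutorial together, the custom event values and the quit handling look roughly like this (a sketch based on the definitions given in the text; the exact cleanup you do in the quit case is up to you):
#define FF_ALLOC_EVENT   (SDL_USEREVENT)
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)
#define FF_QUIT_EVENT    (SDL_USEREVENT + 2)
/* ... in the main event loop ... */
case FF_QUIT_EVENT:
case SDL_QUIT:
  is->quit = 1;   // tell the other threads to stop
  SDL_Quit();
  return 0;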
Getting the Frame: video_thread
When we have our decoder ready, we start the video thread. This thread reads packets off the video queue, decodes the video into frames, and then calls a queue_picture function to put the processed frame onto a picture queue:
int video_thread(void *arg) {
VideoState *is = (VideoState *)arg;
AVPacket pkt1, *packet = &pkt1;
int len1, frameFinished;
AVFrame *pFrame;
pFrame = avcodec_alloc_frame();
for(;;) {
if(packet_queue_get(&is->videoq, packet, 1) < 0) {
// means we quit getting packets
break;
}
// Decode video frame
len1 = avcodec_decode_video(is->video_st->codec, pFrame, &frameFinished,
packet->data, packet->size);
// Did we get a video frame?
if(frameFinished) {
if(queue_picture(is, pFrame) < 0) {
break;
}
}
av_free_packet(packet);
}
av_free(pFrame);
return 0;
}
Most of the functions here should be familiar by now. We've moved our avcodec_decode_video call over here and just swapped out some of the arguments; for example, the AVStream is now stored in our big struct, so we get the codec information from there. We just keep getting packets off our video queue until someone tells us to quit or we hit an error.
Queueing the Frame
Let's look at the function that stores our decoded frame, pFrame, on our picture queue. Since our picture queue is a collection of SDL overlays (presumably so the video display function has as little calculation to do as possible), we need to convert our frame into that format. The data we store on the picture queue is a struct of our own making:
typedef struct VideoPicture {
SDL_Overlay *bmp;
int width, height;
int allocated;
} VideoPicture;
Our big struct has a buffer of these in which we can store them. However, we need to allocate the SDL_Overlay ourselves (notice the allocated flag, which indicates whether we have done so yet or not).
To use this queue, we have two indices - a writing index and a reading index. We also keep track of how many actual pictures are in the buffer. To write to the queue, we first wait for the buffer to clear out so we have space to store our VideoPicture. Then we check whether we have already allocated the overlay at our writing index. If not, we have to allocate some space. We also have to reallocate the buffer if the size of the window has changed! However, instead of allocating it here, we have the main thread do it to avoid locking problems (I'm still not entirely sure why; I believe it's to avoid calling the SDL overlay functions from other threads).
int queue_picture(VideoState *is, AVFrame *pFrame) {
VideoPicture *vp;
int dst_pix_fmt;
AVPicture pict;
SDL_LockMutex(is->pictq_mutex);
while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
!is->quit) {
SDL_CondWait(is->pictq_cond, is->pictq_mutex);
}
SDL_UnlockMutex(is->pictq_mutex);
if(is->quit)
return -1;
// windex is set to 0 initially
vp = &is->pictq[is->pictq_windex];
if(!vp->bmp ||
vp->width != is->video_st->codec->width ||
vp->height != is->video_st->codec->height) {
SDL_Event event;
vp->allocated = 0;
event.type = FF_ALLOC_EVENT;
event.user.data1 = is;
SDL_PushEvent(&event);
SDL_LockMutex(is->pictq_mutex);
while(!vp->allocated && !is->quit) {
SDL_CondWait(is->pictq_cond, is->pictq_mutex);
}
SDL_UnlockMutex(is->pictq_mutex);
if(is->quit) {
return -1;
}
}
The event mechanism here is the same one we saw earlier when we wanted to quit. We've defined FF_ALLOC_EVENT as SDL_USEREVENT. We push the event onto the event queue and then wait on the condition variable for the allocation function to signal that it has done its work.
Let's look at how we modify our event loop:
for(;;) {
SDL_WaitEvent(&event);
switch(event.type) {
case FF_ALLOC_EVENT:
alloc_picture(event.user.data1);
break;
Remember that event.user.data1 is our big struct. That's all there is to it. Let's look at the alloc_picture() function:
void alloc_picture(void *userdata) {
VideoState *is = (VideoState *)userdata;
VideoPicture *vp;
vp = &is->pictq[is->pictq_windex];
if(vp->bmp) {
// we already have one make another, bigger/smaller
SDL_FreeYUVOverlay(vp->bmp);
}
// Allocate a place to put our YUV image on that screen
vp->bmp = SDL_CreateYUVOverlay(is->video_st->codec->width,
is->video_st->codec->height,
SDL_YV12_OVERLAY,
screen);
vp->width = is->video_st->codec->width;
vp->height = is->video_st->codec->height;
SDL_LockMutex(is->pictq_mutex);
vp->allocated = 1;
SDL_CondSignal(is->pictq_cond);
SDL_UnlockMutex(is->pictq_mutex);
}
You can see that we've moved the SDL_CreateYUVOverlay call from the main loop to here. This code should be pretty self-explanatory by now. Remember that we save the width and height in the VideoPicture structure because we need to make sure that our video size doesn't change for some reason.
Okay, we're all settled: we have our YUV overlay allocated and are ready to receive a picture. Let's go back to queue_picture and look at the code that copies the frame into the overlay. You should recognize parts of it:
int queue_picture(VideoState *is, AVFrame *pFrame) {
if(vp->bmp) {
SDL_LockYUVOverlay(vp->bmp);
dst_pix_fmt = PIX_FMT_YUV420P;
pict.data[0] = vp->bmp->pixels[0];
pict.data[1] = vp->bmp->pixels[2];
pict.data[2] = vp->bmp->pixels[1];
pict.linesize[0] = vp->bmp->pitches[0];
pict.linesize[1] = vp->bmp->pitches[2];
pict.linesize[2] = vp->bmp->pitches[1];
// Convert the image into YUV format that SDL uses
img_convert(&pict, dst_pix_fmt,
(AVPicture *)pFrame, is->video_st->codec->pix_fmt,
is->video_st->codec->width, is->video_st->codec->height);
SDL_UnlockYUVOverlay(vp->bmp);
if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
is->pictq_windex = 0;
}
SDL_LockMutex(is->pictq_mutex);
is->pictq_size++;
SDL_UnlockMutex(is->pictq_mutex);
}
return 0;
}
The bulk of this part is simply the code we used earlier to fill the YUV overlay with our frame. The last bit just "adds" our entry onto the queue. The queue works by adding to it until it is full, and reading from it as long as there is something on it. Therefore everything depends on the is->pictq_size value, which requires us to lock it. So what we do here is increment the write index (rolling it over when necessary), then lock the queue and increase its size. Now our reader function will know there is more information on the queue, and if this makes the queue full, our writer function will know about it.
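One caveat: img_convert was deprecated and eventually removed from FFmpeg. If your build no longer has it, the libswscale equivalent is sws_scale. A minimal sketch, assuming you create a SwsContext once (sws_ctx below is an extra variable of your own, not part of this tutorial's VideoState):
#include <libswscale/swscale.h>
// One-time setup, e.g. in stream_component_open()
struct SwsContext *sws_ctx =
  sws_getContext(is->video_st->codec->width, is->video_st->codec->height,
                 is->video_st->codec->pix_fmt,
                 is->video_st->codec->width, is->video_st->codec->height,
                 PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
// In queue_picture(), in place of the img_convert() call above
sws_scale(sws_ctx,
          (const uint8_t * const *)pFrame->data, pFrame->linesize,
          0, is->video_st->codec->height,
          pict.data, pict.linesize);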
Displaying the Video
So that's our video thread. Now we've wrapped up all the loose threads except for one - remember that we called the schedule_refresh() function way back? Let's see what it actually does:
static void schedule_refresh(VideoState *is, int delay) {
SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}
SDL_AddTimer() is an SDL function that simply makes a callback to a user-defined function after the specified number of milliseconds (optionally carrying some user data). We're going to use this function to schedule video refreshes - every time we call it, it sets a timer which will trigger an event, which in turn has our main loop pull a frame from the picture queue and display it on the screen.
But first things first: let's trigger that event.
static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
SDL_Event event;
event.type = FF_REFRESH_EVENT;
event.user.data1 = opaque;
SDL_PushEvent(&event);
return 0;
}
Here is the now-familiar event push. FF_REFRESH_EVENT is defined here as SDL_USEREVENT + 1. One thing to notice is that when we return 0, SDL stops the timer, so the callback is not made again.
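As an aside, an SDL 1.2 timer callback's return value is the interval (in milliseconds) until the next callback, so returning a non-zero value would re-arm the timer automatically. A purely illustrative sketch of a repeating timer; we deliberately don't do this here, because we want to pick a fresh delay for every frame:
static Uint32 periodic_cb(Uint32 interval, void *opaque) {
  /* push an event here, as in sdl_refresh_timer_cb() ... */
  return interval;  /* returning the interval re-arms the timer */
}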
Now that we've pushed an FF_REFRESH_EVENT, we need to handle it in the event loop:
for(;;) {
SDL_WaitEvent(&event);
switch(event.type) {
case FF_REFRESH_EVENT:
video_refresh_timer(event.user.data1);
break;
which sends us to this function, which actually pulls the data from the picture queue:
void video_refresh_timer(void *userdata) {
VideoState *is = (VideoState *)userdata;
VideoPicture *vp;
if(is->video_st) {
if(is->pictq_size == 0) {
schedule_refresh(is, 1);
} else {
vp = &is->pictq[is->pictq_rindex];
schedule_refresh(is, 80);
video_display(is);
if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
is->pictq_rindex = 0;
}
SDL_LockMutex(is->pictq_mutex);
is->pictq_size--;
SDL_CondSignal(is->pictq_cond);
SDL_UnlockMutex(is->pictq_mutex);
}
} else {
schedule_refresh(is, 100);
}
}
For now, this is a pretty simple function: it pulls from the queue when there's something there, sets the timer for when the next video frame should be shown, calls video_display to actually put the picture on the screen, then increments the read index on the queue (rolling it over when necessary) and decreases the queue's size. You may notice that we don't actually do anything with vp in this function, and here's why: we will, later. We're going to use it to access timing information once we start syncing the video to the audio. That's what the "timing code here" spot is for: there, we'll figure out when the next video frame should be shown, and pass that value to schedule_refresh(). For now we're just putting in a dummy value of 80. Technically, you could guess and check this value and recompile the program for every movie you watch, but 1) it would drift after a while and 2) it's quite silly. We'll come back to this later.
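To see why 80 is only a placeholder: a fixed 80 ms delay between refreshes corresponds to 1000 / 80 = 12.5 frames per second, so a typical 24 fps movie would play at roughly half speed. Even a hand-tuned constant (for 24 fps, about 1000 / 24 ≈ 42 ms) would slowly drift, since decoding and display take a variable amount of time.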
We're almost done; we just have one last thing to do: display the video! Here is that video_display function:
void video_display(VideoState *is) {
SDL_Rect rect;
VideoPicture *vp;
AVPicture pict;
float aspect_ratio;
int w, h, x, y;
int i;
vp = &is->pictq[is->pictq_rindex];
if(vp->bmp) {
if(is->video_st->codec->sample_aspect_ratio.num == 0) {
aspect_ratio = 0;
} else {
aspect_ratio = av_q2d(is->video_st->codec->sample_aspect_ratio) *
is->video_st->codec->width / is->video_st->codec->height;
}
if(aspect_ratio <= 0.0) {
aspect_ratio = (float)is->video_st->codec->width /
(float)is->video_st->codec->height;
}
h = screen->h;
w = ((int)rint(h * aspect_ratio)) & -3;
if(w > screen->w) {
w = screen->w;
h = ((int)rint(w / aspect_ratio)) & -3;
}
x = (screen->w - w) / 2;
y = (screen->h - h) / 2;
rect.x = x;
rect.y = y;
rect.w = w;
rect.h = h;
SDL_DisplayYUVOverlay(vp->bmp, &rect);
}
}
Since our screen can be of any size (we set ours to 640x480, and there are ways to let the user resize it), we need to dynamically figure out how big a rectangle the movie should be displayed in. So first we need to figure out our movie's aspect ratio, which is just the width divided by the height. Some codecs will have an odd sample aspect ratio, which is simply the width/height ratio of a single pixel (or sample). Since the width and height values in our codec context are measured in pixels, the actual display aspect ratio is the width/height ratio multiplied by the sample aspect ratio. Some codecs report an aspect ratio of 0, which indicates that each pixel is simply 1x1. We then scale the movie to fit as large as possible on our screen; the & -3 bit-twiddling there is the code's way of keeping the computed size roughly 4-aligned. Finally we center the movie and call SDL_DisplayYUVOverlay().
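As a worked example (the numbers are hypothetical): a 1440x1080 stream with a sample aspect ratio of 4:3 has a display aspect ratio of (1440 / 1080) x (4 / 3) = 16:9 ≈ 1.78. On a 640x480 screen, starting from h = 480 gives w = rint(480 x 1.78) = 853, which is wider than the screen, so we clamp to w = 640 and get h = rint(640 / 1.78) = 360. The final rectangle is 640x360, centered at x = 0, y = (480 - 360) / 2 = 60.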
So what's the result? Are we done? Well, we still have to rewrite the audio code to use the new VideoState struct, but those are trivial changes, and you can look at them in the sample code. The last thing we have to do is change the callback for ffmpeg's internal "quit" function to our own quit callback:
VideoState *global_video_state;
int decode_interrupt_cb(void) {
return (global_video_state && global_video_state->quit);
}
We set global_video_state to the big struct in main().
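For completeness, in the old FFmpeg this tutorial targets, that callback is registered with url_set_interrupt_cb(); a one-line sketch of what goes in main() (note that modern FFmpeg replaced this with the per-context AVFormatContext interrupt_callback field):
url_set_interrupt_cb(decode_interrupt_cb);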
And that's it! Let's compile it:
gcc -o tutorial04 tutorial04.c -lavutil -lavformat -lavcodec -lz -lm \
`sdl-config --cflags --libs`
Enjoy your unsynced movie! Next time we'll finally build a video player that actually works.
Original post: http://blog.csdn.net/jinhaijian/article/details/5831335