
# 曾喜煌
## Education
- National Taichung University of Education, Department of Computer Science, B.S. (2019/09 ~ 2023/06)
- National Taichung University of Education, Department of Computer Science, M.S. (2023/09 ~ 2025/07)
## Skills
C, C++, Linux, Shell, Makefile, gdb, Golang, Kubernetes, Docker, Dockerfile
## Professional Skills
### Linux
- I installed Linux on my laptop in my freshman year and have used it continuously ever since, so I am fluent with the Linux command line
- I have used distributions including Ubuntu, Debian, CentOS, and Arch
- tmux + vim is my usual, well-worn development environment
### C Language and System Software
### Cloud Computing
- The focus of both my undergraduate project and my graduate research
- Undergraduate project: "Applying Game Theory to Resource Allocation: A Case Study of Apache Yunikorn"
- Master's thesis: "A Discretized Particle Swarm Optimization Algorithm for Microservice Deployment in Cloud-Edge Computing"
- Contributed code and documentation to the open-source project Apache Yunikorn
## Portfolio and Experience
### xv6 Study Notes
Series link: xv6 Study Notes

### International Collegiate Programming Contest (ICPC)
I was very fond of competitive programming in college; practicing for and taking part in contests sharpened my implementation skills.
#### The 2021 Taiwan Online Programming Contest
Result: Honorable Mention (the best result from our school)

#### The 2021 ICPC Asia Taipei Regional Programming Contest
Result: Bronze medal (Taiwan-region rank/teams: 46/103)

### LeetCode Problem Solving
Beyond simply implementing what each problem asks for, I also try to write down the line of reasoning behind each solution.
Series link: LeetCode Problem Solving
### 2021 iThome Ironman Contest (Topic: C Language Notes)
Series link: C Language Notes

### The Making of This Website
- Series link: The Making of This Website
- Git repository: yamatrail-site
- Skills learned:
  - Creating and managing virtual machines on Google Cloud Platform
  - Registering a domain and issuing certificates with Cloudflare
  - Setting up an HTTPS server with Nginx
  - Building a blog site with Hugo
### Apache Yunikorn Contributor
Apache Yunikorn is an open-source Kubernetes scheduler project. Under the guidance of a senior lab member, I joined the open-source community as a contributor. This mainly taught me:
- The CI/CD workflow of a large project
- How to trace code through a large codebase

The figure below shows the issues I contributed to.

### Undergraduate Project: "Applying Game Theory to Resource Allocation: A Case Study of Apache Yunikorn"

### Master's Thesis: "A Discretized Particle Swarm Optimization Algorithm for Microservice Deployment in Cloud-Edge Computing"
My graduate research focused on performance optimization for microservice architectures; my thesis is titled "A Discretized Particle Swarm Optimization Algorithm for Microservice Deployment in Cloud-Edge Computing".
- The hardware environment is a heterogeneous cloud-edge computing architecture
- The software environment is Online Boutique, a benchmark provided by Google Cloud Platform: a shopping platform implemented as microservices
- The algorithm is a discretized particle swarm optimization (PSO) that optimizes performance (response time)

In kernel/kalloc.c, the data structure `kmem` keeps track of the usable pages; these pages are recorded with a linked list:

```c
struct run {
  struct run *next;
};

struct {
  struct spinlock lock;
  struct run *freelist;
} kmem;
```

Each node of this linked list, `struct run`, holds just a single value: the address of the next usable page.
kalloc() and kfree():

```c
// Allocate one 4096-byte page of physical memory.
// Returns a pointer that the kernel can use.
// Returns 0 if the memory cannot be allocated.
void *
kalloc(void)
{
  struct run *r;

  acquire(&kmem.lock);
  r = kmem.freelist;
  if(r)
    kmem.freelist = r->next;
  release(&kmem.lock);

  if(r)
    memset((char*)r, 5, PGSIZE); // fill with junk
  return (void*)r;
}
```

In `kalloc()`, the head of `freelist` is returned, and `freelist->next` becomes the new head.

"Returns 0 if the memory cannot be allocated." This holds because, at the very beginning, `struct run *freelist;` is zero-initialized to null (0). That single pre-`kinit()` null value ends up as the tail of the linked list, which is why `kalloc()` returns 0 once memory is exhausted.

```c
// Free the page of physical memory pointed at by pa,
// which normally should have been returned by a
// call to kalloc().  (The exception is when
// initializing the allocator; see kinit above.)
void
kfree(void *pa)
{
  struct run *r;

  if(((uint64)pa % PGSIZE) != 0 || (char*)pa < end || (uint64)pa >= PHYSTOP)
    panic("kfree");

  // Fill with junk to catch dangling refs.
  memset(pa, 1, PGSIZE);

  r = (struct run*)pa;

  acquire(&kmem.lock);
  r->next = kmem.freelist;
  kmem.freelist = r;
  release(&kmem.lock);
}
```

`pa` must point to the beginning of a page. `kfree()` puts the no-longer-needed page `pa` at the head of `freelist`.

How `freelist` is initialized, starting from kernel/main.c:

```c
void
main()
{
  if(cpuid() == 0){
    consoleinit();
    printfinit();
    printf("\n");
    printf("xv6 kernel is booting\n");
    printf("\n");
    kinit();         // physical page allocator <- enters kalloc.c here
    kvminit();       // create kernel page table
    kvminithart();   // turn on paging
    procinit();      // process table
    trapinit();      // trap vectors
    trapinithart();  // install kernel trap vector
    plicinit();      // set up interrupt controller
    plicinithart();  // ask PLIC for device interrupts
    binit();         // buffer cache
    iinit();         // inode table
    fileinit();      // file table
    virtio_disk_init(); // emulated hard disk
    userinit();      // first user process
    __sync_synchronize();
    started = 1;
  } else {
    while(started == 0)
      ;
    __sync_synchronize();
    printf("hart %d starting\n", cpuid());
    kvminithart();    // turn on paging
    trapinithart();   // install kernel trap vector
    plicinithart();   // ask PLIC for device interrupts
  }

  scheduler();
}
```

kernel/kalloc.c: kinit()

```c
void
kinit()
{
  initlock(&kmem.lock, "kmem");
  freerange(end, (void*)PHYSTOP);
}
```

`kinit()` initializes `kmem.lock`, then uses `freerange()` to put every usable page onto `freelist`.

kernel/kalloc.c: freerange()

```c
void
freerange(void *pa_start, void *pa_end)
{
  char *p;
  p = (char*)PGROUNDUP((uint64)pa_start);
  for(; p + PGSIZE <= (char*)pa_end; p += PGSIZE)
    kfree(p);
}
```

Here the `kfree()` described above is used to put the usable pages, one at a time, onto the front of `freelist`.
This post is mainly preparation for the "Buffer cache" exercise in lab locks, which expects us to first understand some of the principles and uses of `bcache`. The main references are xv6 book sections 8.1 ~ 8.3 and the code related to `bcache`.
kernel/main.c:

```c
void
main()
{
  if(cpuid() == 0){
    consoleinit();
    printfinit();
    printf("\n");
    printf("xv6 kernel is booting\n");
    printf("\n");
    kinit();         // physical page allocator
    kvminit();       // create kernel page table
    kvminithart();   // turn on paging
    procinit();      // process table
    trapinit();      // trap vectors
    trapinithart();  // install kernel trap vector
    plicinit();      // set up interrupt controller
    plicinithart();  // ask PLIC for device interrupts
    binit();         // buffer cache <- bcache initialization starts here
    iinit();         // inode table
    fileinit();      // file table
    virtio_disk_init(); // emulated hard disk
    userinit();      // first user process
    __sync_synchronize();
    started = 1;
  } else {
    while(started == 0)
      ;
    __sync_synchronize();
    printf("hart %d starting\n", cpuid());
    kvminithart();    // turn on paging
    trapinithart();   // install kernel trap vector
    plicinithart();   // ask PLIC for device interrupts
  }

  scheduler();
}
```

kernel/bio.c: binit()

```c
struct {
  struct spinlock lock;
  struct buf buf[NBUF];

  // Linked list of all buffers, through prev/next.
  // Sorted by how recently the buffer was used.
  // head.next is most recent, head.prev is least.
  struct buf head;
} bcache;

void
binit(void)
{
  struct buf *b;

  initlock(&bcache.lock, "bcache");

  // Create linked list of buffers
  bcache.head.prev = &bcache.head;
  bcache.head.next = &bcache.head;
  for(b = bcache.buf; b < bcache.buf+NBUF; b++){
    b->next = bcache.head.next;
    b->prev = &bcache.head;
    initsleeplock(&b->lock, "buffer");
    bcache.head.next->prev = b;
    bcache.head.next = b;
  }
}
```

After `binit()`, the `bcache.buf` array is no longer used directly; instead, the circular linked list starting at `head` is used.

bread() and bwrite():

```c
// Return a locked buf with the contents of the indicated block.
struct buf*
bread(uint dev, uint blockno)
{
  struct buf *b;

  b = bget(dev, blockno);
  if(!b->valid) {
    virtio_disk_rw(b, 0);
    b->valid = 1;
  }
  return b;
}
```

If the buf that `bget()` returns already holds valid data, it is returned directly; if not, `virtio_disk_rw()` is used to read the block from disk first.
...
Lab link: Lab: locks
Memory allocator (moderate)

Your job is to implement per-CPU freelists, and stealing when a CPU’s free list is empty. You must give all of your locks names that start with “kmem”. That is, you should call initlock for each of your locks, and pass a name that starts with “kmem”. Run kalloctest to see if your implementation has reduced lock contention. To check that it can still allocate all of memory, run usertests sbrkmuch. Your output will look similar to that shown below, with much-reduced contention in total on kmem locks, although the specific numbers will differ. Make sure all tests in usertests -q pass. make grade should say that the kalloctests pass.
...
```c
// Init fs
void
fsinit(int dev) {
  readsb(dev, &sb);
  if(sb.magic != FSMAGIC)
    panic("invalid file system");
  initlog(dev, &sb);
}

// Zero a block.
static void
bzero(int dev, int bno)
{
  struct buf *bp;

  bp = bread(dev, bno);
  memset(bp->data, 0, BSIZE);
  log_write(bp);
  brelse(bp);
}
```

balloc() and bfree():

```c
// Blocks.

// Allocate a zeroed disk block.
// returns 0 if out of disk space.
static uint
balloc(uint dev)
{
  int b, bi, m;
  struct buf *bp;

  bp = 0;
  // scan the bitmap blocks, using the layout recorded in the super block
  for(b = 0; b < sb.size; b += BPB){
    bp = bread(dev, BBLOCK(b, sb));
    for(bi = 0; bi < BPB && b + bi < sb.size; bi++){
      m = 1 << (bi % 8);
      if((bp->data[bi/8] & m) == 0){  // Is block free?
        bp->data[bi/8] |= m;  // Mark block in use.
        log_write(bp);
        brelse(bp);
        bzero(dev, b + bi);
        return b + bi;
      }
    }
    brelse(bp);
  }
  printf("balloc: out of blocks\n");
  return 0;
}
```

`balloc()` stands for "block allocate".
...
Course link: 6.S081 Fall 2020 Lecture 14: File Systems
Lab link: Lab: file system
Large files (moderate)

Modify bmap() so that it implements a doubly-indirect block, in addition to direct blocks and a singly-indirect block. You’ll have to have only 11 direct blocks, rather than 12, to make room for your new doubly-indirect block; you’re not allowed to change the size of an on-disk inode. The first 11 elements of ip->addrs[] should be direct blocks; the 12th should be a singly-indirect block (just like the current one); the 13th should be your new doubly-indirect block. You are done with this exercise when bigfile writes 65803 blocks and usertests runs successfully:
...
Course link: 6.S081 Lecture 17: Virtual Memory for Applications
Lab link: Lab: mmap
Lab: mmap (hard)

The mmap and munmap system calls allow UNIX programs to exert detailed control over their address spaces. They can be used to share memory among processes, to map files into process address spaces, and as part of user-level page fault schemes such as the garbage-collection algorithms discussed in lecture. In this lab you’ll add mmap and munmap to xv6, focusing on memory-mapped files.
My current understanding is that mmap maps a range of memory addresses onto a file, which brings the benefit that multiple processes can share the same data.
...
Course link: 6.S081 Fall 2020 Lecture 21: Networking

- Protocol nesting: the `type` field in each header determines how the bytes that follow should be interpreted
- The OS network stack
- Reading `tcpdump` output
- Use of queues: the lecture notes that, at every layer of the stack, queues are very common
- There may be a buffer allocator, `mbuf`
- E1000 NIC: the NIC used in the networking lab; the lecture compares it with modern NICs
- Modern NICs perform some checks themselves
- Livelock problem: the conclusion at the end is to solve it with polling