Custom Threads Revisited
One key idea of the Pothouse project is to use application-specific threads to program high-concurrency systems, especially server programs. We believe this approach improves both scalability and flexibility.
This is by no means a new idea.
Because user-level threads are much more lightweight than the native processes or threads provided by the OS, dozens of user-space multithreading implementations exist, with a history dating back more than two decades.
However, Pothouse still finds a place in this crowded space by providing a unique threading abstraction called the pothread. The pothread runtime improves on existing user-space threading systems in the following ways; several Pothouse components are introduced along the way, and more will be developed as the project progresses.
- It is built on top of mcoro.
A group of pothreads is dispatched independently by a separate process (or OS-level thread), so as to exploit multiprocessors in a portable, simple way.
- The pothread scheduler uses 'staged cohort scheduling', a scheduling policy designed specifically for network servers that aims to improve program locality.
- It provides rich interface functions, especially for I/O events, timing operations, and memory allocation.
- Thread creation time is greatly reduced, thanks to XFMalloc.
- It works with several handy components that can be used independently, e.g. PACE.
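The dispatch and scheduling ideas above can be sketched with ordinary generators standing in for coroutines. This is an illustrative toy in Python, not Pothouse's actual interface; the names `request` and `run_cohorts`, and the three stages, are invented for the example:

```python
from collections import deque

# Toy model of staged cohort scheduling: each request is a coroutine (a
# generator here) that yields the name of the stage it wants to enter next.
# The scheduler keeps one run queue per stage and drains a whole queue
# before moving on, so requests at the same stage run back-to-back -- the
# "cohort" -- keeping that stage's code and data hot in the cache.

def request(rid, log):
    log.append(("read", rid))    # stage 1: read the request
    yield "parse"
    log.append(("parse", rid))   # stage 2: parse it
    yield "reply"
    log.append(("reply", rid))   # stage 3: send the reply

def run_cohorts(rids):
    log = []
    order = ["read", "parse", "reply"]
    queues = {stage: deque() for stage in order}
    for rid in rids:
        queues["read"].append(request(rid, log))
    for stage in order:                        # one cohort at a time
        while queues[stage]:
            gen = queues[stage].popleft()
            try:
                queues[next(gen)].append(gen)  # park it at its next stage
            except StopIteration:
                pass                           # request finished
    return log
```

Running `run_cohorts([1, 2, 3])` performs all three reads, then all three parses, then all three replies, instead of interleaving the stages of different requests.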
- PACE: Pollable Asynchronous Call Engine
This simple library makes any function call complete asynchronously and, moreover, exposes a file descriptor that can be monitored by polling primitives such as epoll or select.
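A common way to build such a pollable asynchronous call is to run the function on a helper thread and signal completion through a pipe. The sketch below shows that general technique in Python; the `AsyncCall` class and its methods are invented for illustration and are not the real PACE API:

```python
import os
import select
import threading

class AsyncCall:
    """Run fn(*args) on a helper thread; completion is signalled through a
    pipe, so the read end can sit in an epoll/select set with other fds."""

    def __init__(self, fn, *args):
        self.result = None
        self._rd, self._wr = os.pipe()
        self._thread = threading.Thread(target=self._run, args=(fn, args))
        self._thread.start()

    def _run(self, fn, args):
        self.result = fn(*args)
        os.write(self._wr, b"\x01")   # completion becomes poll-able

    def fileno(self):
        return self._rd               # lets select() monitor this object

    def wait(self):
        select.select([self._rd], [], [])  # or register self._rd with epoll
        os.read(self._rd, 1)
        self._thread.join()
        return self.result
```

For example, `AsyncCall(sum, range(10)).wait()` returns once the helper thread has written to the pipe; in a real event loop the descriptor from `fileno()` would be polled alongside the server's sockets instead.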
- XFMalloc: A fast memory allocator
It avoids lock contention and recycles freed blocks for speed. Moreover, it protects the heap-management information and allows for runtime detection of heap-based overflows.
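Two of these ideas can be illustrated with a toy allocator. This is a sketch under simplifying assumptions (fixed-size blocks, invented names, not XFMalloc's real interface): a thread-local free list removes lock contention and recycles freed blocks, and a canary placed just past the usable area lets the free routine detect a heap-based overflow:

```python
import threading

BLOCK_SIZE = 64                     # toy assumption: one fixed block size
CANARY = b"\xde\xad\xbe\xef"        # guard word stored past the usable area
_local = threading.local()          # each thread gets its own free list

def _free_list():
    if not hasattr(_local, "blocks"):
        _local.blocks = []
    return _local.blocks

def xf_alloc():
    """Hand out a recycled block if one is available, else a fresh one."""
    blocks = _free_list()
    buf = blocks.pop() if blocks else bytearray(BLOCK_SIZE + len(CANARY))
    buf[BLOCK_SIZE:] = CANARY       # (re)arm the guard word
    return buf

def xf_free(buf):
    """Check the guard word, then recycle the block on this thread's list."""
    if bytes(buf[BLOCK_SIZE:]) != CANARY:
        raise RuntimeError("heap-based overflow detected at free")
    _free_list().append(buf)        # no lock needed: the list is thread-local
```

Because each thread frees onto and allocates from its own list, the fast path takes no lock at all; a write that runs past `BLOCK_SIZE` bytes clobbers the canary and is caught at free time.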
- Research papers (in preparation)
These will describe Pothouse's principles and its most important design details,
and report various performance results.
References and Related Work
Pothouse project at Sourceforge