libuv is a networking library: on Linux it is built on top of libev, and on Windows it wraps IOCP behind the same libev-flavored interface, which makes it far more convenient to use.
For writing a TCP server it greatly simplifies the code, even compared with using libev directly. Detailed examples live in libuv's test directory; take blackhole-server as an example, lightly modified here into a standalone program:
#define assert(expr) \
  do { \
    if (!(expr)) { \
      fprintf(stderr, \
              "assertion failed in %s on line %d: %s\n", \
              __FILE__, \
              __LINE__, \
              #expr); \
      abort(); \
    } \
  } while (0)
#define container_of(ptr, type, member) \
  ((type *) ((char *) (ptr) - offsetof(type, member)))

typedef struct {
  uv_tcp_t handle;
  uv_shutdown_t shutdown_req;
} conn_rec;
static uv_tcp_t tcp_server;
static void connection_cb(uv_stream_t* stream, int status);
static uv_buf_t alloc_cb(uv_handle_t* handle, size_t suggested_size);
static void read_cb(uv_stream_t* stream, ssize_t nread, uv_buf_t buf);
static void shutdown_cb(uv_shutdown_t* req, int status);
static void close_cb(uv_handle_t* handle);
static void connection_cb(uv_stream_t* stream, int status) {
  conn_rec* conn;
  int r;

  assert(status == 0);
  assert(stream == (uv_stream_t*)&tcp_server);

  conn = (conn_rec*)malloc(sizeof *conn);
  assert(conn != NULL);

  r = uv_tcp_init(stream->loop, &conn->handle);
  assert(r == 0);

  r = uv_accept(stream, (uv_stream_t*)&conn->handle);
  assert(r == 0);

  r = uv_read_start((uv_stream_t*)&conn->handle, alloc_cb, read_cb);
  assert(r == 0);
}
static uv_buf_t alloc_cb(uv_handle_t* handle, size_t suggested_size) {
  static char buf[65536];
  return uv_buf_init(buf, sizeof buf);
}
static void read_cb(uv_stream_t* stream, ssize_t nread, uv_buf_t buf) {
  conn_rec* conn;
  int r;

  if (nread >= 0)
    return;

  assert(uv_last_error(stream->loop).code == UV_EOF);

  conn = container_of(stream, conn_rec, handle);
  r = uv_shutdown(&conn->shutdown_req, stream, shutdown_cb);
  assert(r == 0);
}
static void shutdown_cb(uv_shutdown_t* req, int status) {
  conn_rec* conn = container_of(req, conn_rec, shutdown_req);
  uv_close((uv_handle_t*)&conn->handle, close_cb);
}

static void close_cb(uv_handle_t* handle) {
  conn_rec* conn = container_of(handle, conn_rec, handle);
  free(conn);
}
int main(void) {
  struct sockaddr_in addr;
  uv_loop_t* loop;
  int r;

  loop = uv_default_loop();
  addr = uv_ip4_addr("127.0.0.1", 1234);

  r = uv_tcp_init(loop, &tcp_server);
  assert(r == 0);

  r = uv_tcp_bind(&tcp_server, addr);
  assert(r == 0);

  r = uv_listen((uv_stream_t*)&tcp_server, 128, connection_cb);
  assert(r == 0);

  r = uv_run(loop);
  assert(0 && "blackhole server dropped out of event loop.");

  return 0;
}
A quick walkthrough:
When a client connects, the connection_cb callback fires; the accept happens there.
uv_read_start then registers read_cb as the read-event callback.
When the socket becomes readable, read_cb is triggered and the data is read inside it.
To close a socket, first call uv_shutdown with the shutdown callback shutdown_cb, and in that callback register the close callback close_cb. Each step only starts after the previous one completes, which avoids a while(1) loop polling the socket state.
The one possibly tricky bit in the code is container_of, but if you have seen the kernel's list_entry it is easy to grasp. It fully shows off the power of pointers; they are practically Lu Xiaofeng's Lingxi finger or Li Tanhua's flying dagger. Two minutes of silence for the mainstream languages that lack Little Li's flying dagger, and a round of applause for C/C++.
This demo does not show how to send data, so here is a snippet copied from another example:

for (i = 0; i < write_sockets; i++) {
  do_write(type == TCP ? (uv_stream_t*)&tcp_write_handles[i]
                       : (uv_stream_t*)&pipe_write_handles[i]);
}
Pick a suitable spot, say a timer callback, to send some message to every connected client socket. The send routine looks like this:

static void do_write(uv_stream_t* stream) {
  uv_write_t* req;
  uv_buf_t buf;
  int r;

  buf.base = (char*) &write_buffer;
  buf.len = sizeof write_buffer;

  while (stream->write_queue_size == 0) {
    req = (uv_write_t*) req_alloc();
    r = uv_write(req, stream, &buf, 1, write_cb);
    assert(r == 0);
  }
}
static void write_cb(uv_write_t* req, int status) {
  uv_stream_t* stream = (uv_stream_t*) req->handle;  /* save before freeing req */

  assert(status == 0);
  req_free((uv_req_t*) req);

  nsent += sizeof write_buffer;
  nsent_total += sizeof write_buffer;

  do_write(stream);
}
As usual it is callback-driven: the send routine calls uv_write, libuv does the actual sending, then write_cb fires; check there whether data remains and, if so, call do_write again. The traditional approach, in pseudocode, is roughly:

while (1) {
  len = send();
  sendlen += len;
  if (sendlen == total)
    break;
}
As you can see, libuv really is convenient to use. TCP developers can breathe a sigh of relief; even libev and libevent can be replaced. As for Boost.Asio and ACE: if you are only writing a network module and rolling your own thread pool and memory pool, there is little reason to consider them. And memory pools are not that hard. For example, with char *p pointing at a pre-allocated block, construct an object on it with placement new:

obj* pobj = new (p) obj;

Just note that on release you must call obj's destructor explicitly.
And a thread pool?

for (i = 0; i < nthreads; i++) {
  create_worker(worker_libevent, &threads[i]);
}

Then just coordinate the workers.
Don't assume thread pools and memory pools are techniques ordinary programmers cannot implement. Study them seriously and you can get a solid grasp, then keep refining and optimizing them in real work. There is no need to deify ACE and Boost; their code is very well written and worth learning from, but you can live without them.