Unified Account Management in Practice

This post describes our current company-wide account management system.

Our overall approach to account management is divide and conquer. Accounts fall into three categories:
1. Office network accounts, i.e. the familiar domain accounts. Every user in the company gets exactly one, built on OpenLDAP with some custom development. This is the front door into the company: every new employee is assigned one, and whether you connect to Wi-Fi in the office or to the AnyConnect VPN from home, accessing basic office services such as Confluence/Jira requires authenticating with it.
2. Production network accounts, used mainly to access online and offline machine resources. Engineers logging in to production or test machines, or to the offline self-service Raspberry Pi kiosks, all authenticate with this account. User/group assignment, HBAC/sudo control, and password/public-key management are all implemented on freeIPA/IDM; freeIPA, also known as IDM, is an integrated security information management solution backed by Red Hat.
3. Database accounts, a niche area that will only be touched on briefly.
The first two categories are described in detail below.

Office Network Accounts

This account concerns everyone. From your first day to your last day, you log in with it to access office resources, including the usual Wi-Fi/Gitlab/Jira/Confluence/Jenkins/Cisco AnyConnect VPN/Zabbix/Grafana/intermediate hosts (jump servers) and the various systems we developed in-house.

First, a brief description of our multi-network separation architecture. The networks are: the office network (including international links), the production network, the test network, the leased-line network (to hospital information systems such as HIS/LIS), and the OOB (out-of-band management) network. These five networks are in a state of "partial" separation. For example, the office network is completely separated from the production/test/leased-line/OOB networks, unless you go through an intermediate host (referred to as "ih" from here on); the production and test networks are also almost completely separated, apart from a handful of shared services. Take logging in from the office network to the production or test network as an example: before the user-authentication step is even reached, a one-to-one IP <-> MAC <-> user-identity binding is enforced, and most requests are first filtered by basic IP:PORT ACLs. So by the time you can open Git in a browser or pull/push a Git repository over SSH, you have already passed both the TCP/IP-level checks and the user-level authentication. The implementation details will be covered in a later post.

Out of the box, the schemas OpenLDAP supports are quite limited; on top of that, every application integrates in its own way, so each has to be worked through individually.

Take the integration between LDAP and AnyConnect: the schema Cisco provides is only barely usable and far from production-ready, supporting by default just one IP/netmask pair and a single IP-level ACL. We therefore maintain our own extended schema file, cisco.ldif, which adds another IP/netmask pair and a number of ACLs to match our needs. Below is an example of the data file for a normal user:
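The original entry is shown as a screenshot; as a rough sketch of what such an entry looks like when exported (the DN, IDs, and values below are illustrative placeholders, not our real data):

$ slapcat -a "(uid=jaseywang)"
dn: uid=jaseywang,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: sambaSamAccount
uid: jaseywang
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jaseywang
loginShell: /bin/bash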

From the fields exported by slapcat, you can see that to share the Wi-Fi account with OpenLDAP we added the samba-related schema; see this file for the details.

Another example: AnyConnect/ih authentication requires a two-step verification mechanism, so we introduced OTP (privacyIDEA) with RADIUS as the bridge to OpenLDAP. The interaction diagram can be seen here:
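To sanity-check the OpenLDAP -> RADIUS -> privacyIDEA chain end to end, something like radtest from the FreeRADIUS utilities works (host, NAS port, shared secret, and credentials below are placeholders; the password field is the LDAP password with the current OTP appended):

$ radtest jaseywang 'LdapPassw0rd755224' radius.example.com 10 testing123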

The net result is that all user CRUD is controlled at the OpenLDAP layer. As for two-step verification, the figure below shows that privacyIDEA supports a dozen or so token types, including HOTP/mOTP/SMS/email/YubiKey and more:

privacyIDEA supports more than the ldapresolver we use; the sql/passwd/scim resolvers are also well supported. We currently use Google Authenticator with event-based HOTP (RFC 4226):
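As a quick illustration of event-based HOTP, oathtool from the OATH Toolkit reproduces the codes such a token generates; the secret below is the well-known RFC 4226 test key, not one of ours:

# hex-encoded test secret "12345678901234567890", counter 0
$ oathtool --hotp --counter=0 3132333435363738393031323334353637383930
755224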

There are quite a few tools for managing OpenLDAP, including the classic phpldapadmin we use; fusiondirectory and web2ldap are also solid choices. To support self-service password changes and resets plus periodic password rotation, we introduced LTB, a wonderfully practical LDAP toolbox; for now we only support resetting a password through a link sent by email:

Besides the self-service functionality, LTB also ships a number of very practical scripts for monitoring LDAP performance and user activity.
A foundational service this important obviously needs guaranteed availability: the syncprov module provides dual-master high availability for writes, multiple slaves replicate from the masters, and HAProxy in front provides high availability for reads:
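A minimal sketch of the read-path HAProxy configuration (host names are placeholders; option ldap-check makes HAProxy probe each slave with a basic LDAPv3 health check):

$ cat <<'EOF' >> /etc/haproxy/haproxy.cfg
listen ldap-ro
    bind 0.0.0.0:389
    mode tcp
    balance roundrobin
    option ldap-check
    server slave1 ldap-slave1.example.com:389 check
    server slave2 ldap-slave2.example.com:389 check
EOF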

To users, this is what it looks like:

Mastering OpenLDAP from Packt is an accessible book; after a single read-through you should be able to handle everything above with ease.

Production Network Accounts

The production network here broadly covers all of our *nix systems, including online production and test servers and the offline self-service Raspberry Pis. Anyone who wants to log in to these machines must first be granted this account; host access is then gated by HBAC, with finer-grained control over what each user may execute.

The whole system is built on freeIPA/IDM. By default, every machine has a dedicated worker user for running processes (all our JVM processes run on VMs; we never mix workloads on a host). All the related bin/, log/, lib/ directories and files are packaged by Jenkins under the worker home directory, isolating them from system files. The permissions on /home/worker/ are as follows:
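The original listing is a screenshot; roughly, the layout looks like this (the group bm_be_sre follows the text, the mode bits and the rest are illustrative):

$ ls -ld /home/worker /home/worker/bin
drwxr-x--- 8 worker bm_be_sre 4096 Jan  1 00:00 /home/worker
drwxr-x--- 2 worker bm_be_sre 4096 Jan  1 00:00 /home/worker/bin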

Every directory and file under worker/ carries the permissions shown above. As a result, any user in the bm_be_sre group has read access to worker/, which is to say to everything related to the JVM services:

As the figure below shows, the appCenter service runs under worker:

Similarly, to allow writes under worker/, such as executing the scripts in bin/, just add the user to group 30002 (sre); this can be customized to each team's needs. All sudo/su permissions are configured through IPA policies. For example, the policy below grants users in the sre group the two individually dangerous commands /bin/su - worker and /bin/su worker, plus the basic commands in the sre_allow command group (/bin/chmod, /bin/chown, /bin/cp, and so on):
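A sketch of how such a policy can be built with the ipa CLI (the rule name is made up; the group, commands, and command group follow the text):

$ ipa sudocmd-add '/bin/su - worker'
$ ipa sudocmd-add '/bin/su worker'
$ ipa sudorule-add sre_su_worker
$ ipa sudorule-add-user sre_su_worker --groups=sre
$ ipa sudorule-add-allow-command sre_su_worker --sudocmds='/bin/su - worker' --sudocmds='/bin/su worker'
$ ipa sudorule-add-allow-command sre_su_worker --sudocmdgroups=sre_allow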

A single 8G/4-core IPA server currently supports 1000+ online production and test machines, physical and virtual, plus 100+ offline Raspberry Pis, with headroom to spare: roughly 1% average CPU utilization and 10% average disk I/O.

From the user's perspective: after passing the office network/VPN IP ACL filtering, you log in to any online intermediate host (ih) with your office network account plus OTP, then hop from the ih to the target machine. As with office network accounts, passwords must be rotated periodically. Users with the appropriate permissions see something like this:

These two account systems essentially replicate the core account infrastructure from my Alibaba days, with an arguably better end-user experience; of course we cannot, and for now have no need to, build something like Alilang, which monitors every operation on every device that joins the network.

Note that because of sssd caching, a user or sudo rule newly added in IPA does not take effect immediately. Running sss_cache or deleting the cache files makes the rules effective right away, and this can be triggered hook-style, automatically, every time user permissions are modified.
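For example, on the affected host (a minimal sketch; the rm variant is the heavy-handed fallback and needs sssd stopped first):

# invalidate all cached users and groups
$ sss_cache -U -G
# or wipe the cache entirely
$ service sssd stop && rm -f /var/lib/sss/db/* && service sssd start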

The trailing A in freeIPA stands for audit, but in reality the audit feature is not implemented yet, so we cover it with snoopy plus rsyslog to collect and audit command activity.
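A sketch of the collection side (the collector host and the library path are placeholders; on RHEL-family systems snoopy's execve() records typically land in the authpriv facility, i.e. /var/log/secure):

# load snoopy into every process
$ echo /usr/lib64/libsnoopy.so >> /etc/ld.so.preload
# forward the execve records to the central collector
$ cat <<'EOF' > /etc/rsyslog.d/snoopy.conf
authpriv.info @@logcenter.example.com:514
EOF
$ service rsyslog restart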

For office network accounts, we have built several layers of self-service and service integration with python-ldap; the equivalent work for freeIPA has not been done yet. The end goal is something like Alibaba's internal portal: one portal providing self-service to different kinds of users.
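Underneath any such portal, the self-service primitive is just an LDAP password modify; as a sketch (the DN and host are placeholders):

# a user changes their own password over StartTLS
$ ldappasswd -ZZ -H ldap://ldap.example.com -D 'uid=jaseywang,ou=people,dc=example,dc=com' -W -S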

Database Accounts

The third and decidedly niche category is database accounts, which mainly covers partitioning, managing, and auditing the accounts for MySQL and NoSQL (Redis/ElasticSearch).

We manage MySQL accounts with saltstack's mysql_user module. We run saltstack 2015.8.7, whose MySQL 5.7 support has quite a few bugs; the 2016.x releases fix them, so either upgrade salt outright or diff and backport the patches onto 2015.8.7. The root cause is that before 5.7 the password column in the user table was named password, while from 5.7 onward it is authentication_string.
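The rename is easy to confirm by hand, and the salt primitive involved looks roughly like this (minion ID, account, host pattern, and password are placeholders):

# MySQL >= 5.7: the old password column is gone
mysql> SELECT user, host, authentication_string FROM mysql.user;
# manage the account through salt's mysql_user execution module
$ salt 'db01' mysql_user.user_create 'app' host='10.%' password='s3cret'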

All MySQL account names are currently managed in git and the passwords in salt pillar; for auditing we use Percona's Audit Log Plugin, shipping the logs via rsyslog to a central server for collection and analysis.

As for Redis/ES, the data stored there is not sensitive, so access is controlled entirely with IP-level ACLs.

OpenVPN Connectivity Issues over the Public Network

We established an OpenVPN tunnel between 2 IDCs in September 2014 and have monitored the availability and performance of both ends for a long time. The geographical distance between the two IDCs is quite short, but they are on different telecom carriers; mtr shows about 7 hops from one end to the other. The screenshot below shows the standard ping loss.
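For reference, numbers like these come straight from ping and an mtr report (the peer address is a placeholder):

# 100-cycle report, wide output, numeric hops
$ mtr -rwnc 100 10.0.1.1
$ ping -c 1000 10.0.1.1 | tail -2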


The result is quite interesting. At first we used UDP and often ran into disconnections; after switching to TCP things improved a lot. As the diagram shows, the average packet loss is 1.11%.
Why 1.11%? The best explanation I have is that the tunnel is often fully saturated during peak hours, which we can't fix at the moment, so some packet loss will exist no matter the protocol. Another possible factor is the complexity of the public network itself, which I can't quantify.
The current setup works under the current conditions, but comes with no "how many 9s" guarantee. If we want more stable connectivity, a DLL (dedicated leased line) is the better choice.
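The UDP-to-TCP switch itself is a one-line change on each end of the tunnel; a minimal sketch of the relevant directives (addresses are placeholders):

# server side, /etc/openvpn/server.conf
proto tcp-server
# client side, /etc/openvpn/client.conf
proto tcp-client
remote vpn.example.com 1194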

Router Matters

We are transferring PBs of our HDFS data from one data center to another via a router; we never thought a router's performance would become the bottleneck until we found the statistics below:

#show interfaces Gi0/1
GigabitEthernet0/1 is up, line protocol is up
  Hardware is iGbE, address is 7c0e.cece.dc01 (bia 7c0e.cece.dc01)
  Description: Connect-Shanghai-MSTP
  Internet address is 10.143.67.18/30
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 250/255, rxload 3/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full Duplex, 1Gbps, media type is ZX
  output flow-control is unsupported, input flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:06, output 00:00:00, output hang never
  Last clearing of "show interface" counters 1d22h
  Input queue: 0/75/0/6 (size/max/drops/flushes); Total output drops: 22559915
  Queueing strategy: fifo
  Output queue: 39/40 (size/max)

The output queue is full, and the txload (250/255) is correspondingly high.

How awful. At the beginning, we noticed many failures and retransmissions during transfers between the two data centers. After adding some metrics, everything became clear: the latency between the two data centers was quite unstable, sometimes around 30ms, sometimes 100ms or more, which is unacceptable for latency-sensitive applications. We then ssh'ed into the router and found the output above.

After that, we dropped it and replaced it with a more advanced model. Everything is back to normal now: latency is around 30ms and packet loss is below 1%.

How Many Non-Persistent Connections Can Nginx/Tengine Support Concurrently

Recently I took over a product line with horrible performance issues that our customers complained about loudly. The architecture is quite simple: clients, SDKs installed in our customers' handsets, send POST requests through a cluster of IPVS load balancers to a cluster of Tengine servers; Tengine is a heavily customized Nginx that comes with tons of handy features. The Tengine proxies forward the requests to the upstream servers which, after some computation, write the results to MongoDB.

When using curl to send a POST request like this:
$ curl -X POST -H "Content-Type: application/json" -d '{"name": "jaseywang","sex": "1","birthday": "19990101"}' http://api.jaseywang.me/person -v

Out of every 10 tries, you would probably get 8 or more failures back with "connection timeout".

After some basic debugging, I found Nginx in a bad state with its TCP accept queue completely full, which explains the unwanted responses the clients got. CPU and memory utilization were acceptable, but the network was going wild: the NICs were receiving a tremendous packet rate, about 300kpps on average and 500kpps at peak, and with that many packets the interrupt rate was correspondingly high. Fortunately these Nginx servers all ship with 10G NICs in mode-4 bonding, with link-layer and lower-level pre-optimization in place (TSO/GSO/GRO/LRO, ring buffer sizes, etc.); I haven't seen any packet drops/overruns in ifconfig or similar tools.
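The accept-queue state is easy to confirm; for a listener on port 80 (a sketch, standard tools only):

# Recv-Q vs Send-Q on a LISTEN socket = current vs maximum accept queue
$ ss -lnt 'sport = :80'
# kernel counters for overflowed accept queues
$ netstat -s | egrep -i 'listen'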

Some packet capturing revealed more terrifying facts (a capture sketch follows the list below): almost all of the incoming packets are smaller than 75 bytes. Most of these connections are non-persistent: they complete the 3-way handshake, send one HTTP POST request (or a few, usually no more than 2 when TCP segmentation is needed), and exit with the 4-way handshake. Besides that, the clients usually resend the same request every 10 minutes or longer; the interval is set by the app's developers and is beyond our control. That means:

1. More than 80% of the traffic is purely small TCP packets, which has a significant impact on the NICs and CPU. The image below gives the overall idea of the proportion; it is actually about 88%.

2. Since the clients can't keep their connections persistent (just a TCP 3-way handshake, one or more POSTs, then the TCP 4-way handshake), there is no way to reuse connections. That's fine for the NICs and CPU, but it's a nightmare for Nginx: even after I enlarged the backlog, the TCP accept queue filled up again quickly after reloading Nginx. The two images below show a client packet's lifetime: the first is a capture between the IPVS load balancer and Nginx, the second the communication between Nginx and the upstream server.

3. Setting up a TCP connection is quite expensive, especially when Nginx runs out of resources. During that period the network traffic was quite large, yet the new connections accepted per second were quite low compared to normal. HTTP/1.0 requires the client to send Connection: Keep-Alive in the request header to enable persistent connections, while HTTP/1.1 enables them by default; turning keepalive off made little difference here.
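The small-packet ratio mentioned above can be reproduced with an ordinary capture filter (the interface name is a placeholder; "less 75" matches frames of at most 75 bytes):

$ tcpdump -i bond0 -c 100000 -nn 'tcp and less 75' 2>/dev/null | wc -l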

There was now plenty of evidence that our 3 Nginx servers (yes, 3; we never thought Nginx would become the bottleneck one day) were half broken in such a harsh environment. How many connections can one keep, and how many new connections can it accept? I needed to run some real-traffic benchmarks to get accurate numbers.

Since recovering the product was the top priority, I added another 6 Nginx servers with the same configuration to IPVS. With 9 servers handling the online traffic, it behaves normally now: each Nginx takes about 10K qps, with 300ms response time, zero TCP queue, and 10% CPU utilization.

The benchmark process is not complex: remove one Nginx (real server) from IPVS at a time, and watch its metrics, qps, response time, TCP queues, CPU/memory/network/disk utilization. When qps or a similar metric stops rising and turns around, that is usually the maximum the server can sustain.
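Draining one real server out of IPVS is straightforward with ipvsadm (VIP and real-server addresses are placeholders); zeroing the weight first lets in-flight sessions finish:

# stop scheduling new connections to this real server
$ ipvsadm -e -t 10.0.0.100:80 -r 192.168.1.11:80 -w 0
# then remove it from the virtual service
$ ipvsadm -d -t 10.0.0.100:80 -r 192.168.1.11:80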

Before kicking off, make sure the key parameters and directives are set correctly, including Nginx worker_processes/worker_connections, CPU affinity for spreading interrupts, and the kernel parameters (tcp_max_syn_backlog/file-max/netdev_max_backlog/somaxconn, etc.). Keep an eye on rmem_max/wmem_max as well; during my benchmarks I saw quite different results with different values.
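Those knobs are all plain sysctls; a sketch of the kind of baseline involved (the values are illustrative, not tuned recommendations):

$ sysctl -w net.ipv4.tcp_max_syn_backlog=65536
$ sysctl -w net.core.somaxconn=65536
$ sysctl -w net.core.netdev_max_backlog=262144
$ sysctl -w fs.file-max=2097152
$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.core.wmem_max=16777216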

Here are the results:
The best performance for a single server is 25K qps, though at that level it is not very stable: I observed an almost-full TCP accept queue and some connection failures while requesting the URI; apart from that, everything looked normal. A conservative figure is about 23K qps. That comes with 300K established TCP connections, 200Kpps, 500K interrupts, and 40% CPU utilization.
During that time, the total consumption seen from the IPVS side was 900K concurrent connections, 600Kpps, 800Mbps, and 100K qps.
The benchmark ran from 10:30PM to 11:30PM; peak time usually falls between 10:00PM and 10:30PM.

Turning off the access log to cut down I/O, and disabling TCP timestamps in the stack, may squeeze out better performance; I haven't tested either.
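For completeness, the two switches in question (the nginx directive and the kernel parameter are both standard):

# in nginx.conf
access_log off;
# disable TCP timestamps
$ sysctl -w net.ipv4.tcp_timestamps=0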

Don't confuse TCP keepalive with HTTP keepalive; they are totally different concepts. One thing to keep in mind: the client-LB-Nginx-upstream topology usually has an LB TCP session timeout of 90s by default. That means that when a client sends a request to Nginx and Nginx does not respond to the client within 90s, the LB disconnects both ends by sending RST packets, to save its own resources and sometimes for security reasons. In that case, you can decrease the TCP keepalive parameters as a workaround.

tcp_keepalive_time and the RST Flag in NAT Environments

Here I'm not going to explain in detail what TCP keepalive is or what the three related parameters tcp_keepalive_time, tcp_keepalive_intvl, and tcp_keepalive_probes mean.
What you do need to know are their defaults: net.ipv4.tcp_keepalive_time = 7200, net.ipv4.tcp_keepalive_intvl = 75, net.ipv4.tcp_keepalive_probes = 9. The first one especially, tcp_keepalive_time, can and usually will cause nasty behavior in environments such as NAT.

Say your client connects to a server through one or more NATs along the way, whether SNAT or DNAT. The NAT box, whether a router or a Linux device with ip_forward enabled, has to maintain a TCP connection mapping table to track every incoming and outgoing connection. Since its resources are limited, the table can't grow without bound, so it drops connections that have been idle for some time with no data exchanged between client and server.

This phenomenon is quite common, not only in consumer-oriented, low-end, or poorly implemented routers, but also in data center NAT servers. Most of these NAT boxes use an idle session timeout of around 90s, so if your client's tcp_keepalive_time is larger than that and no data flows for more than 90s, the NAT server will send a TCP packet with the RST flag set to tear down both ends.

In this situation, if you can't control the NAT server, one workaround is to lower the keepalive time below that threshold, say to 30s.
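On Linux that means three sysctls (values illustrative: start probing after 30s idle, probe every 10s, give up after 5 failed probes). Note they only apply to sockets that enable SO_KEEPALIVE:

$ sysctl -w net.ipv4.tcp_keepalive_time=30
$ sysctl -w net.ipv4.tcp_keepalive_intvl=10
$ sysctl -w net.ipv4.tcp_keepalive_probes=5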

Recently, while syncing data from a MongoDB master/slave setup (note: this architecture is a bit dated, not a standard replica set), we often observed a large number of errors indicating synchronization failures during the index-creation phase, after the MongoDB data itself had finished syncing. The log below is from our slave server:

Fri Nov 14 12:52:00.356 [replslave] Socket recv() errno:110 Connection timed out 100.80.1.26:27018
Fri Nov 14 12:52:00.356 [replslave] SocketException: remote: 100.80.1.26:27018 error: 9001 socket exception [RECV_ERROR] server [100.80.1.26:27018]
Fri Nov 14 12:52:00.356 [replslave] DBClientCursor::init call() failed
Fri Nov 14 12:52:00.356 [replslave] repl: AssertionException socket error for mapping query
Fri Nov 14 12:52:00.356 [replslave] Socket flush send() errno:104 Connection reset by peer 100.80.1.26:27018
Fri Nov 14 12:52:00.356 [replslave]   caught exception (socket exception [SEND_ERROR] for 100.80.1.26:27018) in destructor (~PiggyBackData)
Fri Nov 14 12:52:00.356 [replslave] repl: sleep 2 sec before next pass
Fri Nov 14 12:52:02.356 [replslave] repl: syncing from host:100.80.1.26:27018
Fri Nov 14 12:52:03.180 [replslave] An earlier initial clone of 'sso_production' did not complete, now resyncing.
Fri Nov 14 12:52:03.180 [replslave] resync: dropping database sso_production
Fri Nov 14 12:52:04.272 [replslave] resync: cloning database sso_production to get an initial copy

100.80.1.26 is the DNAT IP address of the MongoDB master. As you can see, the slave receives a reset from 100.80.1.26.

After some packet capture and analysis, we concluded that the root cause was tcp_keepalive_time. If you are attentive, "reset by peer" is the kind of keyword that should make you suspect TCP-stack-related issues at first glance.

Be careful: the same issue also shows up in some IMAP proxies, IPVS environments, and so on. After many years of hands-on operations experience, I have to say that DNS and NAT are two standing sources of all kinds of issues, even in so-called 100%-uptime environments.

Benchmarking BCM 5719 Small-Packet Performance with tcpcopy (PF_RING)

I have written before about how much room tcpcopy has to improve in documentation and community involvement. In the end, by reading the code ourselves and pairing it with PF_RING, we pushed the whole workflow through and ran several comparative benchmarks against our BCM 5719 NICs; the conclusions are at the end.
When benchmarking with tcpcopy, make absolutely sure your tcpcopy syntax is correct, even though most documents on the internet, the official ones included, are vague on this.
For example, we once wrote the filter -F "tcp and dst port 80" as -F "tcp and src port 80". The result was a pile of bizarre numbers and inexplicable behavior built on a wrong premise: traffic to the target server was tiny and extremely unstable, and the measured pps swung wildly, as high as 400kpps, as low as 20k or even 0.

Here is the configuration used in our test environment.
All machines run RedHat 6.2 (2.6.32-279.el6.x86_64), with tcpcopy 1.0, PF_RING 6.0.1, BCM 5719 NICs, and dual-NIC bonding. Note that each of our production machines carries fairly heavy traffic, the vast majority of it packets under 100B, so the demands on the system are rather high.
To avoid confusion later, a few terms:
1. online server (OS): a machine in our online production environment
2. target server (TS): the machine we want to benchmark, to find out how much load it can sustain
3. assistant server (AS): the machine running intercept
4. mirror server (MS): the machine that receives the OS traffic copied by tcpcopy's mirror tool

We ran three major groups of benchmarks, each containing several smaller ones (an invocation sketch follows the list):
1. tcpcopy + intercept
2. tcpcopy + intercept + tcpcopy mirror; the mirror offloads work from the OS, since tcpcopy then runs on the MS
3. tcpcopy + intercept + switch mirror, a more direct way of copying production traffic that guarantees zero interference with the OS and a 100% full copy of its traffic
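For the first setup, the invocation looks roughly like this (addresses are placeholders and the exact flags may differ across tcpcopy builds; -x maps the OS's port 80 onto the TS, and -s points tcpcopy at the AS running intercept):

# on the assistant server (AS): capture responses coming back from the TS
$ ./intercept -i eth0 -F 'tcp and src port 80' -d
# on the online server (OS): copy live port-80 traffic to the TS
$ ./tcpcopy -x 80-10.0.0.20:80 -s 10.0.0.30 -F 'tcp and dst port 80' -d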
