I. Getting Started with Salt
1. The three ways to run SaltStack:
local
Master/Minion
Salt SSH
2. The three main capabilities of SaltStack:
Remote execution
Configuration management
Cloud management
3. Basic configuration and how master and minion communicate
Edit the minion config file /etc/salt/minion and point it at the master:
master: 192.168.74.20
- schedule:
-   highstate:
-     function: state.highstate
-     seconds: 30
With this schedule, the minion syncs state with the master every 30 seconds;
Note: hostname resolution must be in place
- [root@linux-node1 master]# cat /etc/hosts
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
-
- 192.168.74.20 linux-node1.example.com linux-node1
- 192.168.74.21 linux-node2.example.com linux-node2
After installing Salt, the minion needs some initial setup: set the master address and the minion id (the id is optional and defaults to the minion's hostname);
- [root@linux-node1 minion]# pwd
- /etc/salt/pki/minion
- [root@linux-node1 minion]# ls    #minion.pub is the minion's public key; it is copied to the master and must be accepted there before the two can communicate
- minion.pem minion.pub
- [root@linux-node1 master]# pwd
- /etc/salt/pki/master
- [root@linux-node1 master]# tree
- .
- ├── master.pem
- ├── master.pub
- ├── minions
- ├── minions_autosign
- ├── minions_denied
- ├── minions_pre
- │   ├── linux-node1.example.com
- │   └── linux-node2.example.com
- └── minions_rejected
-
- 5 directories, 4 files
- [root@linux-node1 master]# salt-key
- Accepted Keys:
- Denied Keys:
- Unaccepted Keys:
- linux-node1.example.com
- linux-node2.example.com
- Rejected Keys:
Next, accept the public keys:
- [root@linux-node1 master]# salt-key -a linux*    #wildcards are supported
- The following keys are going to be accepted:
- Unaccepted Keys:
- linux-node1.example.com
- linux-node2.example.com
- Proceed? [n/Y] y
- Key for minion linux-node1.example.com accepted.
- Key for minion linux-node2.example.com accepted.
- [root@linux-node1 master]# tree
- .
- ├── master.pem
- ├── master.pub
- ├── minions
- │   ├── linux-node1.example.com    #this is the minion's public key
- │   └── linux-node2.example.com
- ├── minions_autosign
- ├── minions_denied
- ├── minions_pre
- └── minions_rejected
- [root@linux-node1 minions]# ls
- linux-node1.example.com linux-node2.example.com
- [root@linux-node1 minions]# file linux-node1.example.com
- linux-node1.example.com: ASCII text
- [root@linux-node1 minions]# cat linux-node1.example.com
- -----BEGIN PUBLIC KEY-----
- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvrmgglfYeMpUay3Tx8GG
- Pre5gzFe//2jwT8S6tZBA8nyzhOa0ONJSF5aGojdDCb6J6pKZvZOnj3cPdee4oeX
- Z4E0bpH7aW2ZpR4yTFaXbJ8xj3TCspVF2of7HMr+eA/CKDojg2NhRGvsUkH+LRry
- fCBsHTj/SSUp/k7jH+Yf2gYhT7cRSbKbiEC2V4EsQfIxX4ER7kYgjMUZPdvkRgTY
- kgds3Ol4eeL9ZjZguRQen1qI7DQWU9JlhEDerlsnoTreH8XPHBDJ9JvC0BuK4YQm
- oyIJUkDY4JFWcCjkecRgVGh9AYHkmofgBaEmf2TKgLrK5lvOK5miViKjc+hVN4zP
- 2QIDAQAB
- -----END PUBLIC KEY-----
On the minion side, the master's public key has also been fetched:
- [root@linux-node2 ~]# cd /etc/salt/pki/
- [root@linux-node2 pki]# ls
- minion
- [root@linux-node2 pki]# cd minion/
- [root@linux-node2 minion]# ls
- minion_master.pub minion.pem minion.pub
- [root@linux-node2 minion]# cat minion_master.pub
- -----BEGIN PUBLIC KEY-----
- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0+xX8cUQXZ9eRVMS6J+P
- BxgEbB9o2nr7SGhu781RJoYQ2IOT64CsICrx0W/ACbCEqjaRV809VPkGndPfGgu1
- IVZ+OP6eVub12ZpOXUpnJRKaVC2v2li6h+OqgCuESitlNpPGtYnbBqcROKv/mtYa
- FwLZvIa5aVHUj3UReKky8WAtGCuHHx3TdQQCVJfgkmUG97Y+62wPBbBeop4L0c5d
- iO8rICFlty0uMt/0FRmnKj+vN/a/B3DabHWUbf4GUDVevFJCZFZyF24Yhgx4jlu7
- +QBRGLjQkKsv+SIR14diIy1Wex3i8f67KtQnlQLlWb2nJmSXiXShAWhccZagXGPF
- xwIDAQAB
- -----END PUBLIC KEY-----
Even after this configuration, the master and minion still cannot communicate: much like OpenSSH, a trust relationship has to be established first, using the salt-key command. -a accepts a specific host's key, -A accepts all pending keys, -d deletes a specific host's key, and -D deletes them all;
Once salt-key accepts a minion, the minion's public key is stored on the master under /etc/salt/pki/master/minions, and the master's own public key is copied to the minion (as minion_master.pub);
II. Configuration Management
Edit /etc/salt/master and define the file_roots directory:
- file_roots:
-   base:
-     - /srv/salt
Create the /srv/salt directory, then restart salt-master;
The following installs Apache and starts it:
- [root@22-57 salt]# cat apache.sls    # in /srv/salt
- apache-install:        #the state ID
-   pkg.installed:       #pkg is the module, installed is the function
-     - names:           #names is an argument
-       - httpd
-       - httpd-devel
-
- apache-service:
-   service.running:
-     - name: httpd
-     - enable: True
-     - reload: True
Use :set list in vim to reveal non-printing characters:
- apache-install:$
-   pkg.installed:$
-     - names:$
-       - httpd$
-       - httpd-devel$
- $
- $
- apache-service:$
-   service.running:$
-     - name: httpd$
-     - enable: True$
-     - reload: True$
Then running salt '*' state.sls apache applies the file above: state is the module, sls the function
You can also reference the sls file in top.sls:
- [root@22-57 salt]# cat top.sls
- base:
-   '*':
-     - apache
Then salt '*' state.highstate will apply apache.sls!
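The top-file dispatch just described can be modelled in a few lines of Python (a simplified sketch, not Salt's real implementation; `top_file` and `states_for` are illustrative names): each target pattern is glob-matched against the minion id, and every matching entry contributes its state list.

```python
import fnmatch

# A simplified model of the base environment's top file above:
# the glob target '*' assigns the apache state to every minion.
top_file = {
    "base": {
        "*": ["apache"],
    },
}

def states_for(minion_id, env="base"):
    """Collect the states a minion would run on state.highstate."""
    states = []
    for target, sls_list in top_file[env].items():
        if fnmatch.fnmatch(minion_id, target):
            states.extend(sls_list)
    return states

print(states_for("linux-node1.example.com"))  # ['apache']
```

This is why state.highstate needs no sls name on the command line: the top file alone decides which states each minion receives.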
III. Data Systems: Grains
Grains hold system information gathered when the minion starts; the data is static afterwards and is only collected again when the minion restarts;
Grains serve three purposes: collecting minion data, matching minions during remote execution, and matching minions in the top file;
1. Collecting data
- [root@linux-node1 ~]# salt 'linux-node1*' grains.ls    #list all grains keys
- linux-node1.example.com:
- - SSDs
- - biosreleasedate
- - biosversion
- - cpu_flags
- - cpu_model
- - cpuarch
- - domain
- - fqdn
- - fqdn_ip4
- - fqdn_ip6
- - gpus
- - host
- - hwaddr_interfaces
- - id
- - init
- - ip4_interfaces
- - ip6_interfaces
- - ip_interfaces
- - ipv4
- - ipv6
- - kernel
- - kernelrelease
- - locale_info
- - localhost
- - lsb_distrib_id
- - machine_id
- - manufacturer
- - master
- - mdadm
- - mem_total
- - nodename
- - num_cpus
- - num_gpus
- - os
- - os_family
- - osarch
- - oscodename
- - osfinger
- - osfullname
- - osmajorrelease
- - osrelease
- - osrelease_info
- - path
- - productname
- - ps
- - pythonexecutable
- - pythonpath
- - pythonversion
- - saltpath
- - saltversion
- - saltversioninfo
- - selinux
- - serialnumber
- - server_id
- - shell
- - systemd
- - virtual
- - zmqversion
- [root@linux-node1 ~]# salt 'linux-node1*' grains.items    #show all grains with their values
- linux-node1.example.com:
- ----------
- SSDs:
- biosreleasedate:
- 06/02/2011
- biosversion:
- 6.00
- cpu_flags:
- - fpu
- - vme
- - de
- - pse
- - tsc
- - msr
- - pae
- - mce
- - cx8
- - apic
- - sep
- - mtrr
- - pge
- - mca
- - cmov
- - pat
- - pse36
- - clflush
- - dts
- - acpi
- - mmx
- - fxsr
- - sse
- - sse2
- - ss
- - syscall
- - nx
- - rdtscp
- - lm
- - constant_tsc
- - arch_perfmon
- - pebs
- - bts
- - nopl
- - xtopology
- - tsc_reliable
- - nonstop_tsc
- - aperfmperf
- - eagerfpu
- - pni
- - pclmulqdq
- - ssse3
- - fma
- - cx16
- - sse4_1
- - sse4_2
- - movbe
- - popcnt
- - aes
- - xsave
- - avx
- - hypervisor
- - lahf_lm
- - ida
- - arat
- - epb
- - pln
- - pts
- - dtherm
- - hwp
- - hwp_notify
- - hwp_act_window
- - hwp_epp
- - xsaveopt
- - xsavec
- - xgetbv1
- - xsaves
- cpu_model:
- Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
- cpuarch:
- x86_64
- domain:
- example.com
- fqdn:
- linux-node1.example.com
- fqdn_ip4:
- - 192.168.74.20
- fqdn_ip6:
- gpus:
- |_
- ----------
- model:
- SVGA II Adapter
- vendor:
- unknown
- host:
- linux-node1
- hwaddr_interfaces:
- ----------
- ens33:
- 00:0c:29:89:6a:8f
- lo:
- 00:00:00:00:00:00
- virbr0:
- 00:00:00:00:00:00
- virbr0-nic:
- 52:54:00:95:4d:38
- id:
- linux-node1.example.com
- init:
- systemd
- ip4_interfaces:
- ----------
- ens33:
- - 192.168.74.20
- lo:
- - 127.0.0.1
- virbr0:
- - 192.168.122.1
- virbr0-nic:
- ip6_interfaces:
- ----------
- ens33:
- - fe80::20c:29ff:fe89:6a8f
- lo:
- - ::1
- virbr0:
- virbr0-nic:
- ip_interfaces:
- ----------
- ens33:
- - 192.168.74.20
- - fe80::20c:29ff:fe89:6a8f
- lo:
- - 127.0.0.1
- - ::1
- virbr0:
- - 192.168.122.1
- virbr0-nic:
- ipv4:
- - 127.0.0.1
- - 192.168.122.1
- - 192.168.74.20
- ipv6:
- - ::1
- - fe80::20c:29ff:fe89:6a8f
- kernel:
- Linux
- kernelrelease:
- 3.10.0-327.el7.x86_64
- locale_info:
- ----------
- defaultencoding:
- UTF-8
- defaultlanguage:
- en_US
- detectedencoding:
- UTF-8
- localhost:
- linux-node1
- lsb_distrib_id:
- CentOS Linux
- machine_id:
- ae71ba43e74c41a7b705e17fff4a03fb
- manufacturer:
- VMware, Inc.
- master:
- 192.168.74.20
- mdadm:
- mem_total:
- 1836
- nodename:
- linux-node1
- num_cpus:
- 1
- num_gpus:
- 1
- os:
- CentOS
- os_family:
- RedHat
- osarch:
- x86_64
- oscodename:
- Core
- osfinger:
- CentOS Linux-7
- osfullname:
- CentOS Linux
- osmajorrelease:
- 7
- osrelease:
- 7.2.1511
- osrelease_info:
- - 7
- - 2
- - 1511
- path:
- /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
- productname:
- VMware Virtual Platform
- ps:
- ps -efH
- pythonexecutable:
- /usr/bin/python
- pythonpath:
- - /usr/bin
- - /usr/lib/python2.7/site-packages/aliyun_python_sdk_rds-2.0.0-py2.7.egg
- - /usr/lib/python2.7/site-packages/aliyun_python_sdk_core-2.0.36-py2.7.egg
- - /usr/lib64/python27.zip
- - /usr/lib64/python2.7
- - /usr/lib64/python2.7/plat-linux2
- - /usr/lib64/python2.7/lib-tk
- - /usr/lib64/python2.7/lib-old
- - /usr/lib64/python2.7/lib-dynload
- - /usr/lib64/python2.7/site-packages
- - /usr/lib64/python2.7/site-packages/Twisted-16.6.0-py2.7-linux-x86_64.egg
- - /usr/lib64/python2.7/site-packages/constantly-15.1.0-py2.7.egg
- - /usr/lib64/python2.7/site-packages/zope.interface-4.3.3-py2.7-linux-x86_64.egg
- - /root/Twisted-16.6.0/.eggs/incremental-16.10.1-py2.7.egg
- - /usr/lib64/python2.7/site-packages/gtk-2.0
- - /usr/lib/python2.7/site-packages
- pythonversion:
- - 2
- - 7
- - 5
- - final
- - 0
- saltpath:
- /usr/lib/python2.7/site-packages/salt
- saltversion:
- 2015.5.10
- saltversioninfo:
- - 2015
- - 5
- - 10
- - 0
- selinux:
- ----------
- enabled:
- True
- enforced:
- Enforcing
- serialnumber:
- VMware-56 4d bd b4 2a f9 c1 e4-53 27 19 67 df 89 6a 8f
- server_id:    #the express service code on a physical machine
- 1981947194
- shell:
- /bin/sh
- systemd:
- ----------
- features:
- +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN
- version:
- 219
- virtual:
- VMware
- zmqversion:
- 3.2.5
- [root@linux-node1 ~]# salt 'linux-node1*' grains.item fqdn    #fetch a single grain
- linux-node1.example.com:
- ----------
- fqdn:
- linux-node1.example.com
- [root@linux-node1 ~]# salt 'linux-node1*' grains.get fqdn
- linux-node1.example.com:
- linux-node1.example.com
-
- [root@linux-node1 ~]# salt 'linux-node1*' grains.get ip_interfaces:ens33
- linux-node1.example.com:
- - 192.168.74.20
- - fe80::20c:29ff:fe89:6a8f
The default grains are generated by salt/grains/core.py in the Salt source tree, worth a read if you are curious;
2. Matching minions
- [root@linux-node1 ~]# salt 'linux-node1*' grains.get os
- linux-node1.example.com:
- CentOS
-
- [root@linux-node1 ~]# salt -G os:Centos cmd.run 'w'    #-G matches on grains
- linux-node1.example.com:
- 21:17:27 up 5:36, 1 user, load average: 0.16, 0.05, 0.06
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 20:30 7.00s 0.48s 0.36s /usr/bin/python /usr/bin/salt -G os:Centos cmd.run w
- linux-node2.example.com:
- 21:17:26 up 5:36, 2 users, load average: 0.02, 0.05, 0.05
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 13:17 3:48m 0.38s 0.38s -bash
- root pts/1 192.168.74.1 20:30 46:14 0.05s 0.05s -bash
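A simplified model of how `-G` matching works (a sketch assuming the value part is treated as a glob against the grain value; `match_grain` is an illustrative helper, not a Salt API, and nested lookups like `ip_interfaces:ens33` are left out):

```python
import fnmatch

# Grains as collected on a minion (values are illustrative).
grains = {"os": "CentOS", "os_family": "RedHat", "num_cpus": 1}

def match_grain(expr, grains):
    """Simplified model of `salt -G 'key:value'`:
    split on the first colon, then glob-match the grain value."""
    key, _, pattern = expr.partition(":")
    return fnmatch.fnmatch(str(grains.get(key, "")), pattern)

print(match_grain("os:CentOS", grains))  # True
print(match_grain("os:Cent*", grains))   # True
print(match_grain("os:Debian", grains))  # False
```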
3. Custom grains
If the built-in grains are not enough, you can define your own, in one of two ways: write them on the master and push them out to the minions, or edit them directly on the minion;
Defining them on the master:
- #set the file_roots base directory and create a _grains directory under it
- file_roots:
-   base:
-     - /srv/salt/base
- [root@linux-node1 _grains]# pwd
- /srv/salt/base/_grains
-
- #define a grain that captures the client's `ulimit -n` value
- [root@linux-node1 _grains]# cat file.py
- import os
-
- def file():
-     grains = {}
-     # `ulimit -n` is a shell builtin; os.popen runs it via /bin/sh
-     grains['file'] = os.popen('ulimit -n').read().strip()  # strip the trailing newline
-     return grains
-
- #push the grains out to the minions
- salt '*' saltutil.sync_all
-
- #read the grain value
- [root@linux-node1 _grains]# salt '*' grains.get file
- linux-node2-computer:
- 1024
- linux-node1.oldboyedu.com:
- 8192
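Before syncing a custom grain module, its logic can be sanity-checked with plain Python. The sketch below mirrors file.py above (the function is renamed `file_grain` here only to avoid shadowing the Python 2 builtin `file`; the logic is the same):

```python
import os

def file_grain():
    """Same logic as the file() grain above, runnable outside Salt."""
    grains = {}
    # `ulimit -n` is a shell builtin; os.popen runs it via /bin/sh
    grains['file'] = os.popen('ulimit -n').read().strip()
    return grains

print(file_grain())  # e.g. {'file': '1024'}
```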
Defining them on the minion:
Uncomment the following in the minion config file /etc/salt/minion:
- grains:
-   roles:
-     - webserver
-     - memcache
Then restart the minion and test:
- [root@linux-node1 ~]# salt -G 'roles:memcache' cmd.run 'echo hehe'
- linux-node1.example.com:
- hehe
Usually, though, custom grains are not put in the minion config file but in /etc/salt/grains:
- [root@linux-node1 ~]# cat /etc/salt/grains
- web: nginx    #this key ('web') must not collide with an existing grains key, or things will break
Test it:
- [root@linux-node1 ~]# systemctl restart salt-minion    #after every grains change, restart the minion, or it won't take effect
- [root@linux-node1 ~]# salt '*' grains.item roles
- linux-node1.example.com:
- ----------
- roles:
- - webserver
- - memcache
- linux-node2.example.com:
- ----------
- roles:
- [root@linux-node1 ~]# salt '*' grains.item web
- linux-node2.example.com:
- ----------
- web:
- linux-node1.example.com:
- ----------
- web:
- nginx
- [root@linux-node1 ~]#
- [root@linux-node1 ~]# salt -G 'web:nginx' cmd.run 'w'
- linux-node1.example.com:
- 21:42:33 up 6:02, 1 user, load average: 0.25, 0.19, 0.11
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 20:30 1.00s 0.76s 0.34s /usr/bin/python /usr/bin/salt -G web:nginx cmd.run w
Minion-side grains can also live under default_include: minion.d/*.conf:
- [root@linux-node2 minion.d]# cat charles.conf
- grains:
-   FD: 2
-   charles: 5
-   cpis:
-     - a
-     - b
-   charles.net: |
-     ++++++++++++++++++++++++++++
-     +++++++++++++++++++=========
-     ***************************
-
- #the result
- [root@linux-node1 _grains]# salt 'linux-node2-computer' grains.items
- linux-node2-computer:
- ----------
- FD:
- 2
- SSDs:
- biosreleasedate:
- 06/02/2011
- biosversion:
- 6.00
- charles:
- 5
- charles.net:
- ++++++++++++++++++++++++++++
- +++++++++++++++++++=========
- ***************************
- cpis:
- - a
- - b
4. Using grains in the top file
Match the machines whose grain web equals nginx and apply apache.sls:
- [root@linux-node1 salt]# pwd
- /srv/salt
- [root@linux-node1 salt]# cat top.sls
- base:
-   'web:nginx':
-     - match: grain
-     - apache
Running salt '*' state.highstate gives:
- [root@linux-node1 salt]# salt '*' state.highstate
- linux-node2.example.com:
- ----------
- ID: states
- Function: no.None
- Result: False
- Comment: No Top file or external nodes data matches found.
- Started:
- Duration:
- Changes:
-
- Summary
- ------------
- Succeeded: 0
- Failed: 1
- ------------
- Total states run: 1
- linux-node1.example.com:
- ----------
- ID: apache-install
- Function: pkg.installed
- Name: httpd
- Result: True
- Comment: Package httpd is already installed.
- Started: 21:51:23.629420
- Duration: 1910.371 ms
- Changes:
- ----------
- ID: apache-install
- Function: pkg.installed
- Name: httpd-devel
- Result: True
- Comment: Package httpd-devel is already installed.
- Started: 21:51:25.539976
- Duration: 0.369 ms
- Changes:
- ----------
- ID: apache-service
- Function: service.running
- Name: httpd
- Result: True
- Comment: Service httpd is already enabled, and is in the desired state
- Started: 21:51:25.540793
- Duration: 647.157 ms
- Changes:
-
- Summary
- ------------
- Succeeded: 3
- Failed: 0
- ------------
- Total states run: 3
- ERROR: Minions returned with non-zero exit code
- [root@linux-node1 salt]#
IV. Data Systems: Pillar (data the master assigns to specific minions)
- Grains: live on the minion and hold static data collected when the minion starts (refreshable with saltutil.sync_grains); typical uses: storing basic minion facts, matching minions, and feeding asset management;
-
- Pillar: lives on the master and holds dynamic data, defined on the master and assigned to particular minions (refreshable with saltutil.refresh_pillar); typical uses: data that only the designated minions may see, such as sensitive values;
1. Pillar is empty by default; set pillar_opts: True in /etc/salt/master and restart the master to expose the master's own configuration as pillar data
There is a lot of it and it is rarely useful in practice, so it is normally turned back off:
- linux-node2.example.com:
- ----------
- master:
- ----------
- __role:
- master
- auth_mode:
- 1
- auto_accept:
- False
- cache_sreqs:
- True
- cachedir:
- /var/cache/salt/master
- cli_summary:
- False
- client_acl:
- ----------
- client_acl_blacklist:
- ----------
- cluster_masters:
- cluster_mode:
- paranoid
- con_cache:
- False
- conf_file:
- /etc/salt/master
- config_dir:
- /etc/salt
- cython_enable:
- False
- daemon:
- False
- default_include:
- master.d/*.conf
- enable_gpu_grains:
- False
- enforce_mine_cache:
- False
- enumerate_proxy_minions:
- False
- environment:
- None
- event_return:
- event_return_blacklist:
- event_return_queue:
- 0
- event_return_whitelist:
- ext_job_cache:
- ext_pillar:
- extension_modules:
- /var/cache/salt/extmods
- external_auth:
- ----------
- failhard:
- False
- file_buffer_size:
- 1048576
- file_client:
- local
- file_ignore_glob:
- None
- file_ignore_regex:
- None
- file_recv:
- False
- file_recv_max_size:
- 100
- file_roots:
- ----------
- base:
- - /srv/salt
- fileserver_backend:
- - roots
- fileserver_followsymlinks:
- True
- fileserver_ignoresymlinks:
- False
- fileserver_limit_traversal:
- False
- gather_job_timeout:
- 10
- gitfs_base:
- master
- gitfs_env_blacklist:
- gitfs_env_whitelist:
- gitfs_insecure_auth:
- False
- gitfs_mountpoint:
- gitfs_passphrase:
- gitfs_password:
- gitfs_privkey:
- gitfs_pubkey:
- gitfs_remotes:
- gitfs_root:
- gitfs_user:
- hash_type:
- md5
- hgfs_base:
- default
- hgfs_branch_method:
- branches
- hgfs_env_blacklist:
- hgfs_env_whitelist:
- hgfs_mountpoint:
- hgfs_remotes:
- hgfs_root:
- id:
- linux-node2.example.com
- interface:
- 0.0.0.0
- ioflo_console_logdir:
- ioflo_period:
- 0.01
- ioflo_realtime:
- True
- ioflo_verbose:
- 0
- ipv6:
- False
- jinja_lstrip_blocks:
- False
- jinja_trim_blocks:
- False
- job_cache:
- True
- keep_jobs:
- 24
- key_logfile:
- /var/log/salt/key
- keysize:
- 2048
- log_datefmt:
- %H:%M:%S
- log_datefmt_logfile:
- %Y-%m-%d %H:%M:%S
- log_file:
- /var/log/salt/master
- log_fmt_console:
- [%(levelname)-8s] %(message)s
- log_fmt_logfile:
- %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s][%(process)d] %(message)s
- log_granular_levels:
- ----------
- log_level:
- warning
- loop_interval:
- 60
- maintenance_floscript:
- /usr/lib/python2.7/site-packages/salt/daemons/flo/maint.flo
- master_floscript:
- /usr/lib/python2.7/site-packages/salt/daemons/flo/master.flo
- master_job_cache:
- local_cache
- master_pubkey_signature:
- master_pubkey_signature
- master_roots:
- ----------
- base:
- - /srv/salt-master
- master_sign_key_name:
- master_sign
- master_sign_pubkey:
- False
- master_tops:
- ----------
- master_use_pubkey_signature:
- False
- max_event_size:
- 1048576
- max_minions:
- 0
- max_open_files:
- 100000
- minion_data_cache:
- True
- minionfs_blacklist:
- minionfs_env:
- base
- minionfs_mountpoint:
- minionfs_whitelist:
- nodegroups:
- ----------
- open_mode:
- False
- order_masters:
- False
- outputter_dirs:
- peer:
- ----------
- permissive_pki_access:
- False
- pidfile:
- /var/run/salt-master.pid
- pillar_opts:
- True
- pillar_roots:
- ----------
- base:
- - /srv/pillar
- pillar_safe_render_error:
- True
- pillar_source_merging_strategy:
- smart
- pillar_version:
- 2
- pillarenv:
- None
- ping_on_rotate:
- False
- pki_dir:
- /etc/salt/pki/master
- preserve_minion_cache:
- False
- pub_hwm:
- 1000
- publish_port:
- 4505
- publish_session:
- 86400
- queue_dirs:
- raet_alt_port:
- 4511
- raet_clear_remotes:
- False
- raet_main:
- True
- raet_mutable:
- False
- raet_port:
- 4506
- range_server:
- range:80
- reactor:
- reactor_refresh_interval:
- 60
- reactor_worker_hwm:
- 10000
- reactor_worker_threads:
- 10
- renderer:
- yaml_jinja
- ret_port:
- 4506
- root_dir:
- /
- rotate_aes_key:
- True
- runner_dirs:
- saltversion:
- 2015.5.10
- search:
- search_index_interval:
- 3600
- serial:
- msgpack
- show_jid:
- False
- show_timeout:
- True
- sign_pub_messages:
- False
- sock_dir:
- /var/run/salt/master
- sqlite_queue_dir:
- /var/cache/salt/master/queues
- ssh_passwd:
- ssh_port:
- 22
- ssh_scan_ports:
- 22
- ssh_scan_timeout:
- 0.01
- ssh_sudo:
- False
- ssh_timeout:
- 60
- ssh_user:
- root
- state_aggregate:
- False
- state_auto_order:
- True
- state_events:
- False
- state_output:
- full
- state_top:
- salt://top.sls
- state_top_saltenv:
- None
- state_verbose:
- True
- sudo_acl:
- False
- svnfs_branches:
- branches
- svnfs_env_blacklist:
- svnfs_env_whitelist:
- svnfs_mountpoint:
- svnfs_remotes:
- svnfs_root:
- svnfs_tags:
- tags
- svnfs_trunk:
- trunk
- syndic_dir:
- /var/cache/salt/master/syndics
- syndic_event_forward_timeout:
- 0.5
- syndic_jid_forward_cache_hwm:
- 100
- syndic_master:
- syndic_max_event_process_time:
- 0.5
- syndic_wait:
- 5
- timeout:
- 5
- token_dir:
- /var/cache/salt/master/tokens
- token_expire:
- 43200
- transport:
- zeromq
- user:
- root
- verify_env:
- True
- win_gitrepos:
- - https://github.com/saltstack/salt-winrepo.git
- win_repo:
- /srv/salt/win/repo
- win_repo_mastercachefile:
- /srv/salt/win/repo/winrepo.p
- worker_floscript:
- /usr/lib/python2.7/site-packages/salt/daemons/flo/worker.flo
- worker_threads:
- 5
- zmq_filtering:
- False
Pillar's uses: (a) sensitive data, for example encrypted values that only designated minions may see; (b) handling per-minion differences through variables;
2. Using custom pillar
To enable pillar sls files, uncomment the following in the master config:
- vim /etc/salt/master
- pillar_roots:
-   base:
-     - /srv/pillar
Create the pillar directory and top file:
- [root@linux-node1 base]# pwd
- /srv/pillar/base
- [root@linux-node1 base]# ls
- sc.sls top.sls zabbix
- [root@linux-node1 base]# pwd
- /srv/pillar/base
- [root@linux-node1 base]# cat top.sls
- base:
-   '*':
-     - zabbix.agent          #agent.sls under the zabbix directory
-   'linux-node2-computer':
-     - sc                    #sc.sls
-
-
- #sc.sls
- [root@linux-node1 base]# cat sc.sls
- cange: 1
- charles.net: 2
- DF: 22222
- 12_1T: cache_dir coss /data/cache1/coss 6000 max    #a two-line value; the blank line between the halves is required
-
-   cache_dir aufs /data/cach1 20000 128 128 min
-
-
- #test
- [root@linux-node1 base]# salt '*' pillar.data
- linux-node2-computer:
- ----------
- 12_1T:
- cache_dir coss /data/cache1/coss 6000 max
- cache_dir aufs /data/cach1 20000 128 128 min
- DF:
- 22222
- cange:
- 1
- charles.net:
- 2
- zabbix-agent:
- ----------
- Zabbix_Server:
- 192.168.74.20
- linux-node1.oldboyedu.com:
- ----------
- zabbix-agent:
- ----------
- Zabbix_Server:
- 192.168.74.20
Create apache.sls under the pillar base directory, this time using a Jinja template:
- [root@22-57 pillar]# pwd
- /srv/pillar
- [root@22-57 pillar]# cat apache.sls
- {% if grains['os'] == 'CentOS' %}
- apache: httpd
- {% elif grains['os'] == 'Debian' %}
- apache: apache2
- {% endif %}
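What the conditional renders to can be sketched in Python (an illustrative model of the Jinja logic above, not Salt's actual renderer):

```python
def render_apache_pillar(grains):
    """Mirror the Jinja if/elif above: choose the package name by OS."""
    if grains["os"] == "CentOS":
        return {"apache": "httpd"}
    elif grains["os"] == "Debian":
        return {"apache": "apache2"}
    return {}

print(render_apache_pillar({"os": "CentOS"}))  # {'apache': 'httpd'}
print(render_apache_pillar({"os": "Debian"}))  # {'apache': 'apache2'}
```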
Specify in the top file which minions receive this pillar data:
- [root@linux-node1 ~]# cd /srv/pillar/
- [root@linux-node1 pillar]# cat top.sls
- base:
-   '*':
-     - apache
The result:
- [root@22-57 pillar]# salt '*' pillar.items
- 172.16.22.35:
- ----------
- apache:
- httpd
- 172.16.22.57:
- ----------
- apache:
- httpd
Pillar can also be used to match minions, after a refresh:
- Refresh pillar, then target on the pillar data;
-
- [root@22-57 ~]# salt '*' saltutil.refresh_pillar
- 172.16.22.57:
- True
- 172.16.22.35:
- True
- [root@22-57 ~]# salt -I 'apache:httpd' test.ping    #-I matches on pillar
- 172.16.22.35:
- True
- 172.16.22.57:
- True
3. Using grains and pillar together
- #top file for the base environment
- [root@linux-node1 base]# pwd
- /srv/salt/base
- [root@linux-node1 base]# cat top.sls
- base:
-   'linux-node2-computer':
-     - test.test          #run test.sls under the test directory
-
- #test.sls
- [root@linux-node1 test]# cat test.sls
- /tmp/squid.conf:
-   file.managed:
-     - source: salt://test/squid.conf.jinjia    #pull in the jinja template
-     - template: jinja
-
-
- #the jinja template
- visible_host {{ grains['fqdn'] }}
- {{ pillar['12_1T'] }}
- {{ pillar['DF'] }}
-
- test {{ grains['charles'] }}
-
-
- #run it
- [root@linux-node1 test]# salt '*' state.highstate
- linux-node1.oldboyedu.com:
- ----------
- ID: states
- Function: no.None
- Result: False
- Comment: No Top file or external nodes data matches found.
- Changes:
-
- Summary for linux-node1.oldboyedu.com
- ------------
- Succeeded: 0
- Failed: 1
- ------------
- Total states run: 1
- Total run time: 0.000 ms
- linux-node2-computer:
- ----------
- ID: /tmp/squid.conf
- Function: file.managed
- Result: True
- Comment: File /tmp/squid.conf is in the correct state
- Started: 11:58:26.578232
- Duration: 38.022 ms
- Changes:
-
- Summary for linux-node2-computer
- ------------
- Succeeded: 1
- Failed: 0
- ------------
- Total states run: 1
- Total run time: 38.022 ms
-
-
- #the rendered result
- [root@linux-node2 salt]# cat /tmp/squid.conf
- visible_host linux-node2.openstack.com
- cache_dir coss /data/cache1/coss 6000 max
- cache_dir aufs /data/cach1 20000 128 128 min
- 22222
-
- test 5
V. Remote Execution in Depth
1. Targets
a. Wildcards (globs)
- [root@linux-node1 ~]# salt 'linux-node?.example.com' cmd.run 'w'
- linux-node1.example.com:
- 21:22:28 up 5:27, 2 users, load average: 0.15, 0.17, 0.11
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 14:17 3:49m 0.13s 0.13s -bash
- root pts/1 192.168.74.1 21:18 4.00s 0.54s 0.35s /usr/bin/python /usr/bin/salt linux-node?.example.com cmd.run w
- linux-node2.example.com:
- 21:22:28 up 5:27, 2 users, load average: 0.02, 0.04, 0.05
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 14:17 5:51m 0.22s 0.22s -bash
- root pts/1 192.168.74.1 21:18 28.00s 0.06s 0.06s -bash
- [root@linux-node1 ~]# salt 'linux-node[1,2].example.com' cmd.run 'w'
- linux-node2.example.com:
- 21:24:32 up 5:29, 2 users, load average: 0.07, 0.06, 0.05
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 14:17 5:53m 0.22s 0.22s -bash
- root pts/1 192.168.74.1 21:18 2:32 0.06s 0.06s -bash
- linux-node1.example.com:
- 21:24:32 up 5:29, 2 users, load average: 0.09, 0.14, 0.11
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 14:17 3:51m 0.13s 0.13s -bash
- root pts/1 192.168.74.1 21:18 0.00s 0.56s 0.36s /usr/bin/python /usr/bin/salt linux-node[1,2].example.com cmd.run w
- [root@linux-node1 ~]# salt 'linux-node[1-2].example.com' cmd.run 'w'
- linux-node1.example.com:
- 21:24:39 up 5:29, 2 users, load average: 0.09, 0.14, 0.11
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 14:17 3:51m 0.13s 0.13s -bash
- root pts/1 192.168.74.1 21:18 7.00s 0.55s 0.34s /usr/bin/python /usr/bin/salt linux-node[1-2].example.com cmd.run w
- linux-node2.example.com:
- 21:24:39 up 5:29, 2 users, load average: 0.07, 0.06, 0.05
- USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
- root pts/0 192.168.74.1 14:17 5:53m 0.22s 0.22s -bash
- root pts/1 192.168.74.1 21:18 2:39 0.06s 0.06s -bash
b. Regular expressions
- [root@linux-node1 ~]# salt -E 'linux-node(1|2).example.com' test.ping    #-E for regex targeting
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
- [root@linux-node1 ~]# salt -L 'linux-node1.example.com,linux-node2.example.com' test.ping    #-L for an explicit list
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
c. IP addresses
- [root@linux-node1 ~]# salt -S '192.168.74.20' test.ping
- linux-node1.example.com:
- True
- [root@linux-node1 ~]# salt -S '192.168.74.0/24' test.ping
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
Compound matchers combine several conditions:
- [root@linux-node1 ~]# salt -C 'S@192.168.74.21 or G@web:nginx' test.ping
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
Note: choose minion ids carefully; all targeting keys off them
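The targeting styles above map naturally onto Python's standard library; a sketch (the `minions` dict is illustrative) modelling glob, regex, and subnet matching:

```python
import fnmatch
import ipaddress
import re

# Illustrative minion ids and their addresses (from the examples above).
minions = {
    "linux-node1.example.com": "192.168.74.20",
    "linux-node2.example.com": "192.168.74.21",
}

# Glob targeting (the default): '?' matches one character, '*' any run.
globbed = [m for m in minions if fnmatch.fnmatch(m, "linux-node?.example.com")]

# -E targeting: an anchored regular expression against the minion id.
regexed = [m for m in minions if re.match(r"linux-node(1|2)\.example\.com", m)]

# -S targeting: subnet membership, modelled with the ipaddress module.
net = ipaddress.ip_network("192.168.74.0/24")
in_subnet = [m for m, ip in minions.items() if ipaddress.ip_address(ip) in net]

print(sorted(globbed))
print(sorted(globbed) == sorted(regexed) == sorted(in_subnet))  # True
```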
2. Modules
a. The service module
- [root@linux-node1 ~]# salt '*' service.available sshd    #check whether the sshd service exists on the minion (not whether it is running)
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
salt '*' service.get_all    #list all services
- [root@linux-node1 ~]# salt '*' service.missing sshd
- linux-node2.example.com:
- False
- linux-node1.example.com:
- False
Reloading and restarting services:
- [root@linux-node1 ~]# salt '*' service.reload httpd
- linux-node2.example.com:
- True
- linux-node1.example.com:
- True
- [root@linux-node1 ~]# salt '*' service.status httpd
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
- [root@linux-node1 ~]# salt '*' service.stop httpd
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
- [root@linux-node1 ~]# salt '*' service.start httpd
- linux-node1.example.com:
- True
- linux-node2.example.com:
- True
b. The network module
- [root@linux-node1 ~]# salt '*' network.interface ens33
- linux-node1.example.com:
- |_
- ----------
- address:
- 192.168.74.20
- broadcast:
- 192.168.74.255
- label:
- ens33
- netmask:
- 255.255.255.0
- linux-node2.example.com:
- |_
- ----------
- address:
- 192.168.74.21
- broadcast:
- 192.168.74.255
- label:
- ens33
- netmask:
- 255.255.255.0
Here is how module ACLs work:
- client_acl:
-   oldboy:
-     - test.ping        #oldboy may only run test.ping and the network module
-     - network.*
-   user01:
-     - linux-node1*:    #user01 may run test.ping against node1 only
-       - test.ping
Ordinary users also need permission on Salt's working files:
chmod 755 /var/cache/salt/ /var/cache/salt/master/ /var/cache/salt/master/jobs/ /var/run/salt /var/run/salt/master/
Test:
- [user01@linux-node1 ~]$ salt 'linux-node1*' test.ping
- linux-node1.example.com:
- True
- [user01@linux-node1 ~]$ salt '*' test.ping
- Failed to authenticate! This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage).
Salt can also blacklist users and modules:
- #
- #client_acl_blacklist:
- # users:
- # - root
- # - '^(?!sudo_).*$' # all non sudo users
- # modules:
- # - cmd
3. Returners
Create the salt database and its tables:
- mysql> CREATE DATABASE `salt`
- -> DEFAULT CHARACTER SET utf8
- -> DEFAULT COLLATE utf8_general_ci;
- Query OK, 1 row affected (0.03 sec)
-
- mysql> use salt;
- Database changed
-
- DROP TABLE IF EXISTS `jids`;
- CREATE TABLE `jids` (
- `jid` varchar(255) NOT NULL,
- `load` mediumtext NOT NULL,
- UNIQUE KEY `jid` (`jid`)
- ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
- CREATE INDEX jid ON jids(jid) USING BTREE;
-
- --
- -- Table structure for table `salt_returns`
- --
-
- DROP TABLE IF EXISTS `salt_returns`;
- CREATE TABLE `salt_returns` (
- `fun` varchar(50) NOT NULL,
- `jid` varchar(255) NOT NULL,
- `return` mediumtext NOT NULL,
- `id` varchar(255) NOT NULL,
- `success` varchar(10) NOT NULL,
- `full_ret` mediumtext NOT NULL,
- `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
- KEY `id` (`id`),
- KEY `jid` (`jid`),
- KEY `fun` (`fun`)
- ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
- --
- -- Table structure for table `salt_events`
- --
-
- DROP TABLE IF EXISTS `salt_events`;
- CREATE TABLE `salt_events` (
- `id` BIGINT NOT NULL AUTO_INCREMENT,
- `tag` varchar(255) NOT NULL,
- `data` mediumtext NOT NULL,
- `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
- `master_id` varchar(255) NOT NULL,
- PRIMARY KEY (`id`),
- KEY `tag` (`tag`)
- ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
- mysql> show tables;
- +----------------+
- | Tables_in_salt |
- +----------------+
- | jids |
- | salt_events |
- | salt_returns |
- +----------------+
- 3 rows in set (0.00 sec)
-
- mysql>
- mysql> grant all on salt.* to salt@'192.168.74.0/255.255.255.0' identified by 'salt';
- Query OK, 0 rows affected (0.00 sec)
The master and every minion must have the MySQL-python package installed;
Add the following to both the master and the minion config files:
- return: mysql    #without this setting, you must pass --return mysql on the command line
-
- mysql.host: '192.168.74.20'
- mysql.user: 'salt'
- mysql.pass: 'salt'
- mysql.db: 'salt'
- mysql.port: 3306
After restarting the services:
- [root@linux-node1 ~]# salt '*' cmd.run 'uptime'
- linux-node2.example.com:
- 22:09:15 up 10:21, 1 user, load average: 0.07, 0.07, 0.07
- linux-node1.example.com:
- 22:09:15 up 10:21, 2 users, load average: 0.16, 0.15, 0.11
- [root@linux-node1 ~]# salt '*' cmd.run 'df -h' --return mysql
- linux-node1.example.com:
- Filesystem Size Used Avail Use% Mounted on
- /dev/mapper/centos-root 18G 12G 5.6G 69% /
- devtmpfs 485M 0 485M 0% /dev
- tmpfs 495M 16K 495M 1% /dev/shm
- tmpfs 495M 14M 482M 3% /run
- tmpfs 495M 0 495M 0% /sys/fs/cgroup
- /dev/sda1 497M 125M 373M 26% /boot
- tmpfs 99M 0 99M 0% /run/user/0
- /dev/sr0 4.1G 4.1G 0 100% /mnt
- linux-node2.example.com:
- Filesystem Size Used Avail Use% Mounted on
- /dev/mapper/centos-root 18G 12G 5.6G 68% /
- devtmpfs 215M 0 215M 0% /dev
- tmpfs 225M 16K 225M 1% /dev/shm
- tmpfs 225M 13M 212M 6% /run
- tmpfs 225M 0 225M 0% /sys/fs/cgroup
- /dev/sda1 497M 125M 373M 26% /boot
- tmpfs 45M 0 45M 0% /run/user/0
- /dev/sr0 4.1G 4.1G 0 100% /mnt
The returned data is stored in the salt_returns table
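What the returner writes can be exercised locally with SQLite standing in for MySQL (a sketch: the column set follows the salt_returns definition above, and the row values are illustrative):

```python
import json
import sqlite3

# In-memory stand-in for the MySQL salt_returns table defined above
# ("return" is quoted so it is treated as a plain identifier).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE salt_returns (
        fun      TEXT NOT NULL,
        jid      TEXT NOT NULL,
        "return" TEXT NOT NULL,
        id       TEXT NOT NULL,
        success  TEXT NOT NULL,
        full_ret TEXT NOT NULL
    )
""")

# One minion's answer to one job (values illustrative).
ret = {
    "fun": "cmd.run",
    "jid": "20240101120000000000",
    "return": "22:09:15 up 10:21, 1 user, load average: 0.07, 0.07, 0.07",
    "id": "linux-node2.example.com",
    "success": True,
}
conn.execute(
    'INSERT INTO salt_returns (fun, jid, "return", id, success, full_ret) '
    "VALUES (?, ?, ?, ?, ?, ?)",
    (ret["fun"], ret["jid"], ret["return"], ret["id"],
     str(ret["success"]), json.dumps(ret)),
)

row = conn.execute("SELECT id, fun, success FROM salt_returns").fetchone()
print(row)  # ('linux-node2.example.com', 'cmd.run', 'True')
```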
Alternatively, use the master job cache so the master itself writes every result it receives to MySQL:
- master_job_cache: mysql
-
- mysql.host: '192.168.74.20'
- mysql.user: 'salt'
- mysql.pass: 'salt'
- mysql.db: 'salt'
- mysql.port: 3306
VI. Salt Configuration Management
Salt's configuration management is built on top of remote execution
Point the master config at the environment directories that hold the top files:
- file_roots:
-   base:
-     - /srv/salt/base
-   test:
-     - /srv/salt/test
-   prod:
-     - /srv/salt/prod
- [root@22-57 ~]# mkdir /srv/salt/{base,test,prod}
- [root@22-57 ~]# cd /srv/salt/
- [root@22-57 salt]# mv top.sls apache.sls base/
- [root@22-57 salt]# ll
- total 0
- drwxr-xr-x 2 root root 37 Dec 12 23:08 base
- drwxr-xr-x 2 root root  6 Dec 12 23:07 prod
- drwxr-xr-x 2 root root  6 Dec 12 23:07 test
- [root@22-57 salt]# pwd
- /srv/salt
Exercise: managing a file with Salt
- [root@22-57 base]# cat dns.sls
- /var/log/secure:
-   file.managed:
-     - source: salt://files/secure
-     - user: root
-     - group: root
-     - mode: 777
There are two ways to apply it:
salt '*' state.sls dns
or define a top file:
- [root@22-57 base]# cat top.sls
- base:
-   '*':
-     - dns
and run salt '*' state.highstate
VII. Salt Configuration Management: YAML and Jinja
Configuration management has three layers: 1. system initialization; 2. functional modules; 3. business modules.
1. YAML syntax rules
Rule 1 (indentation): each level of nesting is two spaces.
Rule 2 (colons): every colon is followed by a space, except when the colon ends the line or appears inside a path.
Rule 3 (hyphens): a list item is a hyphen followed by a space.
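Rule 2 can even be linted mechanically; a minimal sketch (it only covers the colon-needs-a-space rule, crudely exempts salt:// URLs, and ignores other colon-in-path cases):

```python
import re

def bad_colon_lines(text):
    """Flag lines where a colon is directly followed by a non-space
    character (YAML rule 2 above)."""
    bad = []
    for n, line in enumerate(text.splitlines(), 1):
        candidate = line.replace("salt://", "")  # crude URL exemption
        if re.search(r":\S", candidate):
            bad.append(n)
    return bad

good = "apache-install:\n  pkg.installed:\n    - names:\n      - httpd\n"
print(bad_colon_lines(good))        # []
print(bad_colon_lines("mode:644"))  # [1]
```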
2. YAML and Jinja
The workflow is the same three steps (system initialization, then functional modules, then business modules), and every sls file follows the YAML rules above. Jinja templates are easiest to show by example, again using the DNS configuration:
- [root@linux-node1 base]# cat dns.sls
- /etc/resolv.conf:
- file.managed:
- - source: salt://files/resolv.conf
- - user: root
- - group: root
- - mode: 644
- - template: jinja #使用jinj模板
- - defaults:
- DNS_SERVER: 8.8.8.8 #define a template variable
- [root@linux-node1 base]# cat files/resolv.conf
- #dns server
- nameserver {{ DNS_SERVER }} #reference the template variable
-
- [root@linux-node1 base]# cat files/resolv.conf
- #dns server
- # {{ grains['fqdn_ip4']}} #grains can also be used inside Jinja templates
- nameserver {{ DNS_SERVER }}
-
- #execution modules and pillar data can also be used inside Jinja templates
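To make the template/defaults pairing concrete without requiring jinja2 to be installed, here is a minimal stand-in that substitutes `{{ VAR }}` placeholders from a defaults dict. This is only an illustration of the substitution step; real Jinja also supports conditionals, loops, and filters:

```python
import re

def render(template, context):
    """Replace {{ NAME }} placeholders with values from context (a tiny subset of Jinja)."""
    def repl(match):
        return str(context[match.group(1)])
    return re.sub(r'\{\{\s*(\w+)\s*\}\}', repl, template)

resolv_conf = "#dns server\nnameserver {{ DNS_SERVER }}\n"
# The 'defaults' block of the file.managed state plays the role of this dict.
print(render(resolv_conf, {'DNS_SERVER': '8.8.8.8'}))
# #dns server
# nameserver 8.8.8.8
```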
3、System initialization
Put all system-initialization SLS files under /srv/salt/base/init and call them from top.sls:
- [root@22-57 base]# tree
- .
- ├── apache.sls
- ├── init
- │ ├── audit.sls
- │ ├── dns.sls
- │ ├── env_init.sls
- │ ├── files
- │ │ ├── resolv.conf
- │ │ └── secure
- │ ├── history.sls
- │ └── sysctl.sls
- └── top.sls
- [root@22-57 base]# cat top.sls
- base:
- '*':
- - init.env_init
-
- [root@22-57 init]# cat env_init.sls
- include:
- - init.dns
- - init.history
- - init.audit
- - init.sysctl
All initialization files are written under the init directory.
- Add an environment variable so that the history file records timestamps:
- [root@22-57 init]# cat history.sls
- /etc/profile:
- file.append:
- - text:
- - export HISTTIMEFORMAT="%F %T `whoami` "
-
-
- Audit logging: record every executed command in the messages file:
- [root@22-57 init]# cat audit.sls
- /etc/bashrc:
- file.append:
- - text:
- - export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y;});logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'
-
-
- Tune kernel parameters: avoid using swap, re-plan the local port range, and set the number of files that can be opened:
- [root@22-57 init]# cat sysctl.sls
- vm.swappiness:
- sysctl.present:
- - value: 0
-
-
- net.ipv4.ip_local_port_range:
- sysctl.present:
- - value: 10000 65000
-
- fs.file-max:
- sysctl.present:
- - value: 100000
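Each sysctl.present state above ensures a kernel parameter holds a value. Conceptually (this is a hypothetical helper for illustration, not Salt's actual implementation), the states boil down to commands like these:

```python
def sysctl_commands(params):
    """Build the `sysctl -w` commands that would enforce the given parameters."""
    return ['sysctl -w {0}="{1}"'.format(key, value)
            for key, value in sorted(params.items())]

params = {
    'vm.swappiness': '0',                           # do not use swap
    'net.ipv4.ip_local_port_range': '10000 65000',  # local port range
    'fs.file-max': '100000',                        # max open files
}
for cmd in sysctl_commands(params):
    print(cmd)
```

sysctl.present additionally persists the value (e.g. in /etc/sysctl.conf), which a one-shot `sysctl -w` does not.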
The base environment now contains these SLS files:
- [root@linux-node1 init]# ls -l
- total 20
- -rw-r--r--. 1 root root 168 Jan 26 18:19 audit.sls
- -rw-r--r--. 1 root root 194 Jan 25 22:27 dns.sls
- drwxr-xr-x. 2 root root 24 Jan 25 22:27 files
- -rw-r--r--. 1 root root 88 Jan 26 18:09 history.sls
- -rw-r--r--. 1 root root 175 Jan 26 18:35 sysctl.sls
If all of these SLS files were listed in the top file directly, it would become very long, so define one SLS that includes the rest:
- [root@linux-node1 init]# cat env_init.sls #the init. prefix is the path; all SLS files are resolved starting from the base directory
- include:
- - init.dns
- - init.history
- - init.audit
- - init.sysctl
Finally, reference it:
- [root@linux-node1 base]# cat top.sls #referenced in the top file
- base:
- '*':
- - init.env_init
-
- [root@linux-node1 ~]# salt '*' state.highstate test=True #dry run
- [root@linux-node1 ~]# salt '*' state.highstate
PS: related system knowledge
- export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y;});logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}' #append operation logs to the messages file
-
-
- cat /proc/sys/net/ipv4/ip_local_port_range #port range
-
-
- cat /proc/sys/fs/file-max #max open file descriptors
-
-
-
- salt '*' state.highstate test=True #dry run an SLS to see which operations would be executed
4、Business module: haproxy
- [root@22-57 cluster]# cat haproxy-outside.sls
- include:
- - haproxy.install
-
- haproxy-service:
- file.managed:
- - name: /etc/haproxy/haproxy.cfg
- - source: salt://cluster/files/haproxy-outside.cfg
- - user: root
- - group: root
- - mode: 644
- service.running:
- - name: haproxy
- - enable: True
- - reload: True
- - require:
- - cmd: haproxy-init
- - watch: #reload automatically when the config file changes
- - file: haproxy-service
- [root@22-57 cluster]# pwd
- /srv/salt/prod/cluste
- [root@22-57 base]# cat top.sls
- base:
- '*':
- - init.env_init
-
- prod:
- '172.16.22.57':
- - cluster.haproxy-outside
- '172.16.22.35':
- - cluster.haproxy-outside
- [root@22-57 base]# pwd
- /srv/salt/bas
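The top file maps environments and target patterns to lists of states. A small sketch of how a minion's matches could be resolved — simplified, since real Salt also supports grain, pillar, and compound matchers, not just globs:

```python
import fnmatch

def resolve_states(top, minion_id):
    """Return {environment: [states]} for every target glob matching minion_id."""
    matched = {}
    for env, targets in top.items():
        for pattern, states in targets.items():
            if fnmatch.fnmatch(minion_id, pattern):
                matched.setdefault(env, []).extend(states)
    return matched

# The top.sls above, expressed as a Python dict:
top = {
    'base': {'*': ['init.env_init']},
    'prod': {
        '172.16.22.57': ['cluster.haproxy-outside'],
        '172.16.22.35': ['cluster.haproxy-outside'],
    },
}
print(resolve_states(top, '172.16.22.57'))
# {'base': ['init.env_init'], 'prod': ['cluster.haproxy-outside']}
```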
The haproxy configuration file is as follows:
- [root@22-57 files]# cat haproxy-outside.cfg
- global
- maxconn 100000
- chroot /usr/local/haproxy
- uid 99
- gid 99
- daemon
- nbproc 1
- pidfile /usr/local/haproxy/logs/haproxy.pid
- log 127.0.0.1 local3 info
-
- defaults
- option http-keep-alive
- maxconn 100000
- mode http
- timeout connect 5000ms
- timeout client 50000ms
- timeout server 50000ms
-
- listen stats
- mode http
- bind 0.0.0.0:8888
- stats enable
- stats uri /haproxy-status
- stats auth haproxy:saltstack
-
- frontend frontend_www_example_com
- bind 172.16.22.50:80
- mode http
- option httplog
- log global
- default_backend backend_www_example_com
-
- backend backend_www_example_com
- option forwardfor header X-REAL-IP
- option httpchk HEAD / HTTP/1.0
- balance source
- server web-node1 172.16.22.57:8080 check inter 2000 rise 30 fall 15
- server web-node2 172.16.22.35:8080 check inter 2000 rise 30 fall 15
http://192.168.74.20:8888/haproxy-status
Fixing the haproxy 403: edit the content of /var/www/html/index.html
http://192.168.74.21:8080/
The required Salt directory layout:
- [root@linux-node1 prod]# pwd
- /srv/salt/prod
- [root@linux-node1 prod]# ls -l
- total 0
- drwxr-xr-x. 3 root root 81 Jan 26 22:40 cluster #business modules
- drwxr-xr-x. 3 root root 36 Jan 26 23:01 haproxy #haproxy function module
- drwxr-xr-x. 2 root root 25 Jan 26 22:52 pkg #installation of all packages
5、Managing keepalived
keepalived is an implementation of the VRRP protocol.
- [root@linux-node1 keepalived]# cat install.sls #installation
- keepalived-install:
- file.managed:
- - name: /usr/local/src/keepalived-1.2.17.tar.gz
- - source: salt://keepalived/files/keepalived-1.2.17.tar.gz
- - mode: 755
- - user: root
- - group: root
- cmd.run:
- - name: cd /usr/local/src && tar zxf keepalived-1.2.17.tar.gz && cd keepalived-1.2.17 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
- - unless: test -d /usr/local/keepalived
- - require:
- - file: keepalived-install
-
- /etc/sysconfig/keepalived:
- file.managed:
- - source: salt://keepalived/files/keepalived.sysconfig
- - mode: 644
- - user: root
- - group: root
-
- /etc/init.d/keepalived:
- file.managed:
- - source: salt://keepalived/files/keepalived.init
- - mode: 755
- - user: root
- - group: root
-
- keepalived-init:
- cmd.run:
- - name: chkconfig --add keepalived
- - unless: chkconfig --list | grep keepalived
- - require:
- - file: /etc/init.d/keepalived
-
- /etc/keepalived:
- file.directory:
- - user: root
- - group: root
-
-
- [root@linux-node1 ~]# salt '*' state.sls keepalived.install env=prod #apply
6、Managing zabbix-agent with Pillar
- [root@22-57 init]# cat zabbix_agent.sls
- zabbix-agent-install:
- pkg.installed:
- - name: zabbix-agent
-
- file.managed:
- - name: /etc/zabbix/zabbix_agent.d.conf
- - source: salt://init/files/zabbix_agentd.conf
- - template: jinja
- - defaults:
- Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }} #use the custom pillar
- - require:
- - pkg: zabbix-agent-install
-
- service.running:
- - name: zabbix-agent
- - enable: True
- - watch:
- - pkg: zabbix-agent-install
- - file: zabbix-agent-install
- [root@22-57 init]# pwd
- /srv/salt/base/init
- [root@linux-node1 zabbix]# cat agent.sls
- zabbix-agent:
- Zabbix_Server: 192.168.74.20
- [root@linux-node1 zabbix]# pwd
- /srv/pillar/base/zabbix
- [root@linux-node1 base]# cat top.sls
- base:
- '*':
- - zabbix.agent
- [root@linux-node1 base]# pwd
- /srv/pillar/base
- [root@linux-node1 base]# salt '*' pillar.items
- linux-node2.example.com:
- ----------
- zabbix-agent:
- ----------
- Zabbix_Server:
- 192.168.74.20
- linux-node1.example.com:
- ----------
- zabbix-agent:
- ----------
- Zabbix_Server:
- 192.168.74.20
Note that in the master configuration file, pillar_roots must point the base environment at /srv/pillar/base.
Load zabbix-agent as part of the base initialization:
- [root@22-57 init]# cat env_init.sls
- include:
- - init.dns
- - init.history
- - init.audit
- - init.sysctl
- - init.zabbix_agent
For the complete code, see:
https://github.com/unixhot/saltbook-code
Zabbix plugin for monitoring MySQL:
https://www.percona.com/software/database-tools/percona-monitoring-and-management
七、SaltStack architecture extensions
Do not use the root user for production deployments;
online/offline:
best practice is not to delete files, but to mv them into an offline directory;
Homework: automate a MySQL installation;
After salt-minion restarts, wait a moment before using it;
Shared configuration files are best kept under version control;
1、
Installing python-setproctitle makes the process names of Python programs visible;
after restarting salt-master you can see the master's process names!
If Salt reports failed returns caused by network latency, raise the timeout (set in the master configuration file);
SaltStack also supports running without a master, i.e. executing Salt commands locally on the minion: first set file_client to local in the minion configuration file and restart the minion,
- [root@22-57 ~]# salt-call --local test.ping
- local:
- True
Commands can now be executed via salt-call!
This is mainly used to install a master when none exists yet: the master's configuration files (sources) can be placed on the public network and fetched over HTTP (file sources support http), including the directory structure;
A minion can of course connect to multiple masters, but this is not recommended;
2、syndic
A syndic must run on a master, and connects to another, higher-level master.
There are two machines: 172.16.22.35 and 172.16.22.57.
First install salt-master and salt-syndic on 172.16.22.35, and salt-master on 172.16.22.57;
in the master configuration file on 172.16.22.35 (the syndic node), point at the higher-level master: syndic_master: 172.16.22.57;
stop the minions on both machines and empty /etc/salt/pki/minion; if a minion_id file exists, delete it as well (the minion must be stopped before the key files are deleted);
restart the master on 172.16.22.35, start salt-syndic, and set the master address in the minion configuration files to 172.16.22.35;
start the minions;
Now salt-key on 172.16.22.57 shows 172.16.22.35 - that is a master, not a minion; trust it! Meanwhile 172.16.22.35 sees both the .35 and .57 machines; trust them too. Salt commands can now be run from the top-level master - distributed execution!
- [root@22-57 pki]# salt-key
- Accepted Keys:
- 172.16.22.35
- Denied Keys:
- Unaccepted Keys:
- Rejected Keys:
- [root@22-57 pki]#
- [root@22-57 pki]#
- [root@22-57 pki]#
- [root@22-57 pki]# salt '*' test.ping
- 172.16.22.57:
- True
- 172.16.22.35:
- True
Advantages: multi-level architectures can be built: minion <--> syndic <--> master
Disadvantages: the syndic's file_roots must match the master's; the master does not know how many syndics exist, nor which syndic manages which minion;
On the way down, the syndic re-publishes jobs to its minions; on the way up, the syndic forwards the received results to the master;
八、SaltStack secondary development
1、Custom grains
First, under the base directory of the master's file_roots, create a directory for custom grains and add an example grains module:
- [root@22-57 _grains]# pwd
- /srv/salt/base/_grains
- [root@22-57 _grains]# cat my_grains.py
- #!/usr/bin/env python
- def my_grains():
- '''
- My Custom Grains
- '''
- grains= {'hehe1':'haha1','hehe2':'haha2'}
- return grains
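A custom grains module is just a Python function returning a dict, which Salt merges into the minion's grains. The merge can be sketched locally like this (the `os` grain here is only a stand-in for the real built-in grains):

```python
def my_grains():
    '''
    My Custom Grains
    '''
    return {'hehe1': 'haha1', 'hehe2': 'haha2'}

# Salt calls every public function in the module and merges each returned
# dict into the main grains dict (later values overwrite earlier ones).
grains = {'os': 'CentOS'}  # pretend these are the built-in grains
grains.update(my_grains())
print(grains['hehe1'])
# haha1
```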
Run salt '*' saltutil.sync_grains to push the custom grains into the minion cache, located at:
- [root@qiangqiang grains]# pwd
- /var/cache/salt/minion/extmods/grains
- [root@qiangqiang grains]# ls
- my_grains.py my_grains.pyc
The custom grains can now be used:
- [root@22-57 _grains]# salt '*' grains.item hehe1
- 172.16.22.57:
- ----------
- hehe1:
- haha1
- 172.16.22.35:
- ----------
- hehe1:
- haha1
- [root@22-57 _grains]# salt '*' grains.item hehe2
- 172.16.22.35:
- ----------
- hehe2:
- haha2
- 172.16.22.57:
- ----------
- hehe2:
- haha2
2、Custom modules
Create a _modules directory under the base directory and add a custom module:
- [root@22-57 _modules]# pwd
- /srv/salt/base/_modules
- [root@22-57 _modules]# ls
- my_disk.py
Salt's own modules live under /usr/lib/python2.6/site-packages/salt/; a Salt module may call other Salt modules:
- [root@22-57 _modules]# cat my_disk.py
- def list():
- cmd= 'df -h'
- ret = __salt__['cmd.run'](cmd)
- return ret
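Inside a custom module, `__salt__` is a mapping of callables that Salt injects when it loads the module. A local sketch, wiring a fake `__salt__` by hand to show how my_disk.list resolves `cmd.run` (the function is renamed here to avoid shadowing the `list` builtin, and the command is a harmless `echo` rather than `df -h`):

```python
import subprocess

# Salt injects this mapping at module load time; here we fake it.
__salt__ = {
    'cmd.run': lambda cmd: subprocess.check_output(cmd, shell=True).decode().strip(),
}

def list_disk():
    # The real module runs 'df -h'; an echo keeps this sketch deterministic.
    cmd = 'echo "df -h output would appear here"'
    return __salt__['cmd.run'](cmd)

print(list_disk())
```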
salt '*' saltutil.sync_modules pushes the custom module to the minions;
salt '*' my_disk.list calls the custom module;
九、Operations automation
1、The three stages of operations automation
Stage 1: standardization and tooling;
standardized operations, tool-driven execution, workflow-driven changes - standardized ops;
Stage 2: web-based: weaken the workflow so the operator only clicks through to the next step, reducing human error caused by execution order;
web-based operations, access control, statistics and analysis, unified scheduling - web-based ops;
Stage 3: as-a-service, API-driven;
DNS service, load-balancing service, monitoring service, distributed cache service, distributed storage service, CMDB;
2、Automatic scaling plus etcd deployment;
jobs are added through the web;
adding a job requires submitting a request;
Zabbix monitoring --> Action --> create a VM/Docker container --> deploy the service --> deploy the code --> check status --> join the cluster --> add to monitoring --> notify
About etcd: used for shared configuration and service discovery
a. simple: the API is reachable with curl (HTTP + JSON);
b. uses Raft to guarantee data consistency;
Idea: store the haproxy backend configuration in etcd; Salt's etcd ext_pillar then fetches it dynamically as pillar data, and a Jinja for loop renders it into the config;
Install etcd:
- Put the etcd release tarball under /usr/local/src and unpack it;
- copy the binaries to /usr/local/bin:
- [root@22-57 ~]# cd /usr/local/src/etcd-v2.2.1-linux-amd64
- [root@22-57 etcd-v2.2.1-linux-amd64]# ls
- Documentation etcd etcdctl README-etcdctl.md README.md
- [root@22-57 etcd-v2.2.1-linux-amd64]# cp etcd etcdctl /usr/local/bin/
-
- [root@22-57 ~]# etcd --version #check the version
- etcd Version: 2.2.1
- Git SHA: 75f8282
- Go Version: go1.5.1
- Go OS/Arch: linux/amd6
Start etcd:
[root@22-57 ~]# nohup etcd --name auto_scale --data-dir /data/etcd/ --listen-peer-urls 'http://172.16.22.57:2380,http://172.16.22.57:7001' --listen-client-urls 'http://172.16.22.57:2379,http://172.16.22.57:4001' --advertise-client-urls 'http://172.16.22.57:2379,http://172.16.22.57:4001' &
- [root@22-57 ~]# netstat -ntlp|grep etcd
- tcp 0 0 172.16.22.57:4001 0.0.0.0:* LISTEN 53941/etcd
- tcp 0 0 172.16.22.57:2379 0.0.0.0:* LISTEN 53941/etcd
- tcp 0 0 172.16.22.57:2380 0.0.0.0:* LISTEN 53941/etcd
- tcp 0 0 172.16.22.57:7001 0.0.0.0:* LISTEN 53941/etcd
PUT data with curl:
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/message -XPUT -d value="Hello world" |python -m json.tool #-d carries the value being PUT
- {
- "action": "set",
- "node": {
- "createdIndex": 6,
- "key": "/message",
- "modifiedIndex": 6,
- "value": "Hello world"
- },
- "prevNode": {
- "createdIndex": 5,
- "key": "/message",
- "modifiedIndex": 5,
- "value": "Hello world"
- }
- }
Read the value back:
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/message |python -m json.tool
- {
- "action": "get",
- "node": {
- "createdIndex": 6,
- "key": "/message",
- "modifiedIndex": 6,
- "value": "Hello world"
- }
- }
Delete the value:
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/message -XDELETE |python -m json.tool
- {
- "action": "delete",
- "node": {
- "createdIndex": 6,
- "key": "/message",
- "modifiedIndex": 7
- },
- "prevNode": {
- "createdIndex": 6,
- "key": "/message",
- "modifiedIndex": 6,
- "value": "Hello world"
- }
- }
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/message |python -m json.tool
- {
- "cause": "/message",
- "errorCode": 100,
- "index": 7,
- "message": "Key not found"
- }
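The curl calls above map onto plain HTTP verbs. A hypothetical helper that builds (but does not send) the etcd v2 keys API requests, so the URL and body shapes are explicit:

```python
from urllib.parse import urlencode
from urllib.request import Request

ETCD = 'http://172.16.22.57:2379'  # address from the examples above

def etcd_request(key, value=None, ttl=None, method=None):
    """Build an urllib Request for the etcd v2 keys API without sending it."""
    url = '{0}/v2/keys{1}'.format(ETCD, key)
    data = None
    if value is not None:
        fields = {'value': value}
        if ttl is not None:
            fields['ttl'] = ttl   # seconds until automatic deletion
        data = urlencode(fields).encode()
        method = method or 'PUT'
    return Request(url, data=data, method=method or 'GET')

req = etcd_request('/message', value='Hello world')
print(req.get_method(), req.full_url)
# PUT http://172.16.22.57:2379/v2/keys/message
```

Sending the request (e.g. with urllib.request.urlopen) would require a reachable etcd; the builder alone documents the wire format.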
PUT an entry that is automatically deleted after 5 seconds:
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/ttl_use -XPUT -d value="Hello world 1" -d ttl=5 |python -m json.tool
- {
- "action": "set",
- "node": {
- "createdIndex": 10,
- "expiration": "2016-12-18T02:57:52.420004181Z",
- "key": "/ttl_use",
- "modifiedIndex": 10,
- "ttl": 5,
- "value": "Hello world 1"
- }
- }
-
-
- #automatically deleted
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/ttl_use |python -m json.tool
- {
- "action": "get",
- "node": {
- "createdIndex": 10,
- "expiration": "2016-12-18T02:57:52.420004181Z",
- "key": "/ttl_use",
- "modifiedIndex": 10,
- "ttl": 1, #剩余时间
- "value": "Hello world 1"
- }
- }
- [root@22-57 ~]# curl -s http://172.16.22.57:2379/v2/keys/ttl_use |python -m json.tool
- {
- "cause": "/ttl_use",
- "errorCode": 100,
- "index": 11,
- "message": "Key not found"
- }
This way, entries added to the haproxy configuration are also removed when they expire;
3、Auto-scaling based on etcd and SaltStack
First edit the salt-master configuration file and restart it; salt-master can then talk to etcd, and salt '*' pillar.items shows the configured pillar:
- etcd_pillar_config:
- etcd.host: 172.16.22.57
- etcd.port: 4001
-
- ext_pillar:
- - etcd: etcd_pillar_config root=/salt/haproxy/
Then restart salt-master.
Install python-etcd with pip.
Add key/value data to etcd; the python-etcd module must be installed before the pillar values can be retrieved;
- [root@linux-node1 ~]# curl -s http://192.168.74.20:2379/v2/keys/salt/haproxy/backend_www_oldboyedu_com/web-node1 -XPUT -d value="192.168.74.20:8080" |python -m json.tool
- {
- "action": "set",
- "node": {
- "createdIndex": 20,
- "key": "/salt/haproxy/backend_www_oldboyedu_com/web-node1",
- "modifiedIndex": 20,
- "value": "192.168.74.20:8080"
- }
- } #a key named web-node1 was created under /salt/haproxy/backend_www_oldboyedu_com/ with the value 192.168.74.20:8080
- [root@linux-node1 ~]# salt '*' pillar.items
- linux-node1.example.com:
- ----------
- backend_www_oldboyedu_com:
- ----------
- web-node1:
- 192.168.74.20:8080
- bankend_www_oldboyedu_com:
- ----------
- web-node1:
- 192.168.74.20:8080
- zabbix-agent:
- ----------
- Zabbix_Server:
- 192.168.74.20
- linux-node2.example.com:
- ----------
- backend_www_oldboyedu_com:
- ----------
- web-node1:
- 192.168.74.20:8080
- bankend_www_oldboyedu_com:
- ----------
- web-node1:
- 192.168.74.20:8080
- zabbix-agent:
- ----------
- Zabbix_Server:
- 192.168.74.20
Then, in the haproxy configuration file, replace the trailing server entries with Jinja-templated data:
- [root@linux-node1 files]# pwd
- /srv/salt/prod/cluster/files
- [root@linux-node1 files]# cat haproxy-outside.cfg
- global
- maxconn 100000
- chroot /usr/local/haproxy
- uid 99
- gid 99
- daemon
- nbproc 1
- pidfile /usr/local/haproxy/logs/haproxy.pid
- log 127.0.0.1 local3 info
-
- defaults
- option http-keep-alive
- maxconn 100000
- mode http
- timeout connect 5000ms
- timeout client 50000ms
- timeout server 50000ms
-
- listen stats
- mode http
- bind 0.0.0.0:8888
- stats enable
- stats uri /haproxy-status
- stats auth haproxy:saltstack
-
- frontend frontend_www_example_com
- bind 192.168.74.22:80
- mode http
- option httplog
- log global
- default_backend backend_www_example_com
-
- backend backend_www_example_com
- option forwardfor header X-REAL-IP
- option httpchk HEAD / HTTP/1.0
- balance roundrobin
-
- {% for web,web_ip in pillar.backend_www_oldboyedu_com.iteritems() %}
- server {{ web }} {{ web_ip }} check inter 2000 rise 30 fall 15
- {% endfor %}
-
-
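The Jinja for loop above expands each pillar key/value pair into a server line. The same expansion can be sketched in plain Python, with the pillar data hard-coded for illustration:

```python
# Stand-in for pillar.backend_www_oldboyedu_com as fetched from etcd.
backend = {
    'web-node1': '192.168.74.20:8080',
    'web-node2': '192.168.74.21:8080',
}

# Equivalent of:
# {% for web,web_ip in pillar.backend_www_oldboyedu_com.iteritems() %}
# server {{ web }} {{ web_ip }} check inter 2000 rise 30 fall 15
# {% endfor %}
lines = ['    server {0} {1} check inter 2000 rise 30 fall 15'.format(web, web_ip)
         for web, web_ip in backend.items()]
print('\n'.join(lines))
```

Adding a key to etcd grows `backend` by one entry, so the rendered config gains one server line.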
- [root@linux-node1 cluster]# cat haproxy-outside-keepalived.sls #the SLS must also declare that Jinja templating is used
- include:
- - keepalived.install
- keepalived-server:
- file.managed:
- - name: /etc/keepalived/keepalived.conf
- - source: salt://cluster/files/haproxy-outside-keepalived.conf
- - mode: 644
- - user: root
- - group: root
- - template: jinja
- {% if grains['fqdn'] == 'linux-node1.example.com' %}
- - ROUTEID: haproxy_ha
- - STATEID: MASTER
- - PRIORITYID: 150
- {% elif grains['fqdn'] == 'linux-node2.example.com' %}
- - ROUTEID: haproxy_ha
- - STATEID: BACKUP
- - PRIORITYID: 100
- {% endif %}
- service.running:
- - name: keepalived
- - enable: True
- - watch:
- - file: keepalived-server
-
- [root@22-57 cluster]# cat haproxy-outside.sls
- include:
- - haproxy.install
-
- haproxy-service:
- file.managed:
- - name: /etc/haproxy/haproxy.cfg
- - source: salt://cluster/files/haproxy-outside.cfg
- - user: root
- - group: root
- - mode: 644
- - template: jinja #the template engine must be specified
- service.running:
- - name: haproxy
- - enable: True
- - reload: True
- - require:
- - cmd: haproxy-init
- - watch:
- - file: haproxy-service
- [root@22-57 cluster]# pwd
- /srv/salt/prod/cluster
Now haproxy nodes can be added simply by writing new data into etcd:
- curl -s http://192.168.74.20:2379/v2/keys/salt/haproxy/backend_www_oldboyedu_com/web-node2 -XPUT -d value="192.168.74.20:8080" |python -m json.tool
- curl -s http://192.168.74.20:2379/v2/keys/salt/haproxy/backend_www_oldboyedu_com/web-node3 -XPUT -d value="192.168.74.20:8080" |python -m json.tool
- curl -s http://192.168.74.20:2379/v2/keys/salt/haproxy/backend_www_oldboyedu_com/web-node4 -XPUT -d value="192.168.74.20:8080" |python -m json.tool
-
- Finally: salt '*' state.highstate
Implemented as a script:
- [root@22-57 ~]# cat auth.sh
- #!/bin/sh
-
- create_host(){
- echo "create host"
- }
-
- deploy_service(){
- salt '172.16.22.35' state.sls nginx.install env=prod
- ADD_HOST="172.16.22.35"
- ADD_HOST_PORT="8080"
- }
-
- deploy_code(){
- echo "deploy code ok"
- }
-
- service_check(){
- STATUS=$(curl -s --head http://"$ADD_HOST":"$ADD_HOST_PORT"/ |grep '200 OK')
- if [ -n "$STATUS" ]; then
- echo "ok"
- else
- echo "not ok"
- exit
- fi
-
- }
-
- etcd_key(){
- curl "http://172.16.22.57:2379/v2/keys/salt/haproxy/backend_www_oldboyedu_com/web-node1 -XPUT -d value="${ADD_HOT}:${ADD_HOST_PORT}""
-
- }
-
- sync_state(){
- salt '172.16.22.57' state.sls cluster.haproxy-outside env=prod
- }
-
- main(){
- create_host;
- deploy_service;
- deploy_code;
- etcd_key;
- sync_state;
- }
-
- main
# -*- coding: utf-8 -*-
'''
The static grains, these are the core, or built in grains.
When grains are loaded they are not loaded in the same way that modules are
loaded, grain functions are detected and executed, the functions MUST
return a dict which will be applied to the main grains dict. This module
will always be executed first, so that any grains loaded here in the core
module can be overwritten just by returning dict keys with the same value
as those returned here
'''
# Import python libs
from __future__ import absolute_import
import os
import json
import socket
import sys
import re
import platform
import logging
import locale
import uuid
from errno import EACCES, EPERM
__proxyenabled__ = ['*']
__FQDN__ = None
# Extend the default list of supported distros. This will be used for the
# /etc/DISTRO-release checking that is part of linux_distribution()
from platform import _supported_dists
_supported_dists += ('arch', 'mageia', 'meego', 'vmware', 'bluewhite64',
'slamd64', 'ovs', 'system', 'mint', 'oracle', 'void')
# linux_distribution deprecated in py3.7
try:
from platform import linux_distribution
except ImportError:
from distro import linux_distribution
# Import salt libs
import salt.exceptions
import salt.log
import salt.utils
import salt.utils.network
import salt.utils.dns
import salt.ext.six as six
from salt.ext.six.moves import range
if salt.utils.is_windows():
import salt.utils.win_osinfo
# Solve the Chicken and egg problem where grains need to run before any
# of the modules are loaded and are generally available for any usage.
import salt.modules.cmdmod
import salt.modules.smbios
__salt__ = {
'cmd.run': salt.modules.cmdmod._run_quiet,
'cmd.retcode': salt.modules.cmdmod._retcode_quiet,
'cmd.run_all': salt.modules.cmdmod._run_all_quiet,
'smbios.records': salt.modules.smbios.records,
'smbios.get': salt.modules.smbios.get,
}
log = logging.getLogger(__name__)
HAS_WMI = False
if salt.utils.is_windows():
# attempt to import the python wmi module
# the Windows minion uses WMI for some of its grains
try:
import wmi # pylint: disable=import-error
import salt.utils.winapi
import win32api
import salt.modules.reg
HAS_WMI = True
__salt__['reg.read_value'] = salt.modules.reg.read_value
except ImportError:
log.exception(
'Unable to import Python wmi module, some core grains '
'will be missing'
)
_INTERFACES = {}
def _windows_cpudata():
'''
Return some CPU information on Windows minions
'''
# Provides:
# num_cpus
# cpu_model
grains = {}
if 'NUMBER_OF_PROCESSORS' in os.environ:
# Cast to int so that the logic isn't broken when used as a
# conditional in templating. Also follows _linux_cpudata()
try:
grains['num_cpus'] = int(os.environ['NUMBER_OF_PROCESSORS'])
except ValueError:
grains['num_cpus'] = 1
grains['cpu_model'] = __salt__['reg.read_value'](
"HKEY_LOCAL_MACHINE",
"HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0",
"ProcessorNameString").get('vdata')
return grains
def _linux_cpudata():
'''
Return some CPU information for Linux minions
'''
# Provides:
# num_cpus
# cpu_model
# cpu_flags
grains = {}
cpuinfo = '/proc/cpuinfo'
# Parse over the cpuinfo file
if os.path.isfile(cpuinfo):
with salt.utils.fopen(cpuinfo, 'r') as _fp:
for line in _fp:
comps = line.split(':')
if not len(comps) > 1:
continue
key = comps[0].strip()
val = comps[1].strip()
if key == 'processor':
grains['num_cpus'] = int(val) + 1
elif key == 'model name':
grains['cpu_model'] = val
elif key == 'flags':
grains['cpu_flags'] = val.split()
elif key == 'Features':
grains['cpu_flags'] = val.split()
# ARM support - /proc/cpuinfo
#
# Processor : ARMv6-compatible processor rev 7 (v6l)
# BogoMIPS : 697.95
# Features : swp half thumb fastmult vfp edsp java tls
# CPU implementer : 0x41
# CPU architecture: 7
# CPU variant : 0x0
# CPU part : 0xb76
# CPU revision : 7
#
# Hardware : BCM2708
# Revision : 0002
# Serial : 00000000
elif key == 'Processor':
grains['cpu_model'] = val.split('-')[0]
grains['num_cpus'] = 1
if 'num_cpus' not in grains:
grains['num_cpus'] = 0
if 'cpu_model' not in grains:
grains['cpu_model'] = 'Unknown'
if 'cpu_flags' not in grains:
grains['cpu_flags'] = []
return grains
def _linux_gpu_data():
'''
num_gpus: int
gpus:
- vendor: nvidia|amd|ati|...
model: string
'''
if __opts__.get('enable_lspci', True) is False:
return {}
if __opts__.get('enable_gpu_grains', True) is False:
return {}
lspci = salt.utils.which('lspci')
if not lspci:
log.debug(
'The `lspci` binary is not available on the system. GPU grains '
'will not be available.'
)
return {}
# dominant gpu vendors to search for (MUST be lowercase for matching below)
known_vendors = ['nvidia', 'amd', 'ati', 'intel']
gpu_classes = ('vga compatible controller', '3d controller')
devs = []
try:
lspci_out = __salt__['cmd.run']('{0} -vmm'.format(lspci))
cur_dev = {}
error = False
# Add a blank element to the lspci_out.splitlines() list,
# otherwise the last device is not evaluated as a cur_dev and ignored.
lspci_list = lspci_out.splitlines()
lspci_list.append('')
for line in lspci_list:
# check for record-separating empty lines
if line == '':
if cur_dev.get('Class', '').lower() in gpu_classes:
devs.append(cur_dev)
cur_dev = {}
continue
if re.match(r'^\w+:\s+.*', line):
key, val = line.split(':', 1)
cur_dev[key.strip()] = val.strip()
else:
error = True
log.debug('Unexpected lspci output: \'{0}\''.format(line))
if error:
log.warning(
'Error loading grains, unexpected linux_gpu_data output, '
'check that you have a valid shell configured and '
'permissions to run lspci command'
)
except OSError:
pass
gpus = []
for gpu in devs:
vendor_strings = gpu['Vendor'].lower().split()
# default vendor to 'unknown', overwrite if we match a known one
vendor = 'unknown'
for name in known_vendors:
# search for an 'expected' vendor name in the list of strings
if name in vendor_strings:
vendor = name
break
gpus.append({'vendor': vendor, 'model': gpu['Device']})
grains = {}
grains['num_gpus'] = len(gpus)
grains['gpus'] = gpus
return grains
def _netbsd_gpu_data():
'''
num_gpus: int
gpus:
- vendor: nvidia|amd|ati|...
model: string
'''
known_vendors = ['nvidia', 'amd', 'ati', 'intel', 'cirrus logic', 'vmware']
gpus = []
try:
pcictl_out = __salt__['cmd.run']('pcictl pci0 list')
for line in pcictl_out.splitlines():
for vendor in known_vendors:
vendor_match = re.match(
r'[0-9:]+ ({0}) (.+) \(VGA .+\)'.format(vendor),
line,
re.IGNORECASE
)
if vendor_match:
gpus.append({'vendor': vendor_match.group(1), 'model': vendor_match.group(2)})
except OSError:
pass
grains = {}
grains['num_gpus'] = len(gpus)
grains['gpus'] = gpus
return grains
def _osx_gpudata():
'''
num_gpus: int
gpus:
- vendor: nvidia|amd|ati|...
model: string
'''
gpus = []
try:
pcictl_out = __salt__['cmd.run']('system_profiler SPDisplaysDataType')
for line in pcictl_out.splitlines():
fieldname, _, fieldval = line.partition(': ')
if fieldname.strip() == "Chipset Model":
vendor, _, model = fieldval.partition(' ')
vendor = vendor.lower()
gpus.append({'vendor': vendor, 'model': model})
except OSError:
pass
grains = {}
grains['num_gpus'] = len(gpus)
grains['gpus'] = gpus
return grains
def _bsd_cpudata(osdata):
'''
Return CPU information for BSD-like systems
'''
# Provides:
# cpuarch
# num_cpus
# cpu_model
# cpu_flags
sysctl = salt.utils.which('sysctl')
arch = salt.utils.which('arch')
cmds = {}
if sysctl:
cmds.update({
'num_cpus': '{0} -n hw.ncpu'.format(sysctl),
'cpuarch': '{0} -n hw.machine'.format(sysctl),
'cpu_model': '{0} -n hw.model'.format(sysctl),
})
if arch and osdata['kernel'] == 'OpenBSD':
cmds['cpuarch'] = '{0} -s'.format(arch)
if osdata['kernel'] == 'Darwin':
cmds['cpu_model'] = '{0} -n machdep.cpu.brand_string'.format(sysctl)
cmds['cpu_flags'] = '{0} -n machdep.cpu.features'.format(sysctl)
grains = dict([(k, __salt__['cmd.run'](v)) for k, v in six.iteritems(cmds)])
if 'cpu_flags' in grains and isinstance(grains['cpu_flags'], six.string_types):
grains['cpu_flags'] = grains['cpu_flags'].split(' ')
if osdata['kernel'] == 'NetBSD':
grains['cpu_flags'] = []
for line in __salt__['cmd.run']('cpuctl identify 0').splitlines():
cpu_match = re.match(r'cpu[0-9]:\ features[0-9]?\ .+<(.+)>', line)
if cpu_match:
flag = cpu_match.group(1).split(',')
grains['cpu_flags'].extend(flag)
if osdata['kernel'] == 'FreeBSD' and os.path.isfile('/var/run/dmesg.boot'):
grains['cpu_flags'] = []
# TODO: at least it needs to be tested for BSD other then FreeBSD
with salt.utils.fopen('/var/run/dmesg.boot', 'r') as _fp:
cpu_here = False
for line in _fp:
if line.startswith('CPU: '):
cpu_here = True # starts CPU descr
continue
if cpu_here:
if not line.startswith(' '):
break # game over
if 'Features' in line:
start = line.find('<')
end = line.find('>')
if start > 0 and end > 0:
flag = line[start + 1:end].split(',')
grains['cpu_flags'].extend(flag)
try:
grains['num_cpus'] = int(grains['num_cpus'])
except ValueError:
grains['num_cpus'] = 1
return grains
def _sunos_cpudata():
'''
Return the CPU information for Solaris-like systems
'''
# Provides:
# cpuarch
# num_cpus
# cpu_model
# cpu_flags
grains = {}
grains['cpu_flags'] = []
grains['cpuarch'] = __salt__['cmd.run']('isainfo -k')
psrinfo = '/usr/sbin/psrinfo 2>/dev/null'
grains['num_cpus'] = len(__salt__['cmd.run'](psrinfo, python_shell=True).splitlines())
kstat_info = 'kstat -p cpu_info:*:*:brand'
for line in __salt__['cmd.run'](kstat_info).splitlines():
match = re.match(r'(\w+:\d+:\w+\d+:\w+)\s+(.+)', line)
if match:
grains['cpu_model'] = match.group(2)
isainfo = 'isainfo -n -v'
for line in __salt__['cmd.run'](isainfo).splitlines():
match = re.match(r'^\s+(.+)', line)
if match:
cpu_flags = match.group(1).split()
grains['cpu_flags'].extend(cpu_flags)
return grains
def _memdata(osdata):
'''
Gather information about the system memory
'''
# Provides:
# mem_total
grains = {'mem_total': 0}
if osdata['kernel'] == 'Linux':
meminfo = '/proc/meminfo'
if os.path.isfile(meminfo):
with salt.utils.fopen(meminfo, 'r') as ifile:
for line in ifile:
comps = line.rstrip('\n').split(':')
if not len(comps) > 1:
continue
if comps[0].strip() == 'MemTotal':
# Use floor division to force output to be an integer
grains['mem_total'] = int(comps[1].split()[0]) // 1024
elif osdata['kernel'] in ('FreeBSD', 'OpenBSD', 'NetBSD', 'Darwin'):
sysctl = salt.utils.which('sysctl')
if sysctl:
if osdata['kernel'] == 'Darwin':
mem = __salt__['cmd.run']('{0} -n hw.memsize'.format(sysctl))
else:
mem = __salt__['cmd.run']('{0} -n hw.physmem'.format(sysctl))
if osdata['kernel'] == 'NetBSD' and mem.startswith('-'):
mem = __salt__['cmd.run']('{0} -n hw.physmem64'.format(sysctl))
grains['mem_total'] = int(mem) / 1024 / 1024
elif osdata['kernel'] == 'SunOS':
prtconf = '/usr/sbin/prtconf 2>/dev/null'
for line in __salt__['cmd.run'](prtconf, python_shell=True).splitlines():
comps = line.split(' ')
if comps[0].strip() == 'Memory' and comps[1].strip() == 'size:':
grains['mem_total'] = int(comps[2].strip())
elif osdata['kernel'] == 'Windows' and HAS_WMI:
# get the Total Physical memory as reported by msinfo32
tot_bytes = win32api.GlobalMemoryStatusEx()['TotalPhys']
# return memory info in gigabytes
grains['mem_total'] = int(tot_bytes / (1024 ** 2))
return grains
def _windows_virtual(osdata):
'''
Returns what type of virtual hardware is under the hood, kvm or physical
'''
# Provides:
# virtual
# virtual_subtype
grains = dict()
if osdata['kernel'] != 'Windows':
return grains
# It is possible that the 'manufacturer' and/or 'productname' grains
# exist but have a value of None.
manufacturer = osdata.get('manufacturer', '')
if manufacturer is None:
manufacturer = ''
productname = osdata.get('productname', '')
if productname is None:
productname = ''
if 'QEMU' in manufacturer:
# FIXME: Make this detect between kvm or qemu
grains['virtual'] = 'kvm'
if 'Bochs' in manufacturer:
grains['virtual'] = 'kvm'
# Product Name: (oVirt) www.ovirt.org
# Red Hat Community virtualization Project based on kvm
elif 'oVirt' in productname:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'oVirt'
# Red Hat Enterprise Virtualization
elif 'RHEV Hypervisor' in productname:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'rhev'
# Product Name: VirtualBox
elif 'VirtualBox' in productname:
grains['virtual'] = 'VirtualBox'
# Product Name: VMware Virtual Platform
elif 'VMware Virtual Platform' in productname:
grains['virtual'] = 'VMware'
# Manufacturer: Microsoft Corporation
# Product Name: Virtual Machine
elif 'Microsoft' in manufacturer and \
'Virtual Machine' in productname:
grains['virtual'] = 'VirtualPC'
# Manufacturer: Parallels Software International Inc.
elif 'Parallels Software' in manufacturer:
grains['virtual'] = 'Parallels'
# Apache CloudStack
elif 'CloudStack KVM Hypervisor' in productname:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'cloudstack'
return grains
def _virtual(osdata):
'''
Returns what type of virtual hardware is under the hood, kvm or physical
'''
# This is going to be a monster, if you are running a vm you can test this
# grain with please submit patches!
# Provides:
# virtual
# virtual_subtype
grains = {'virtual': 'physical'}
# Skip the below loop on platforms which have none of the desired cmds
# This is a temporary measure until we can write proper virtual hardware
# detection.
skip_cmds = ('AIX',)
# list of commands to be executed to determine the 'virtual' grain
_cmds = ['systemd-detect-virt', 'virt-what', 'dmidecode']
# test first for virt-what, which covers most of the desired functionality
# on most platforms
if not salt.utils.is_windows() and osdata['kernel'] not in skip_cmds:
if salt.utils.which('virt-what'):
_cmds = ['virt-what']
else:
log.debug(
'Please install \'virt-what\' to improve results of the '
'\'virtual\' grain.'
)
# Check if enable_lspci is True or False
if __opts__.get('enable_lspci', True) is False:
# /proc/bus/pci does not exists, lspci will fail
if os.path.exists('/proc/bus/pci'):
_cmds += ['lspci']
# Add additional last resort commands
if osdata['kernel'] in skip_cmds:
_cmds = ()
# Quick backout for BrandZ (Solaris LX Branded zones)
# Don't waste time trying other commands to detect the virtual grain
if osdata['kernel'] == 'Linux' and 'BrandZ virtual linux' in os.uname():
grains['virtual'] = 'zone'
return grains
failed_commands = set()
for command in _cmds:
args = []
if osdata['kernel'] == 'Darwin':
command = 'system_profiler'
args = ['SPDisplaysDataType']
elif osdata['kernel'] == 'SunOS':
command = 'prtdiag'
args = []
cmd = salt.utils.which(command)
if not cmd:
continue
cmd = '{0} {1}'.format(cmd, ' '.join(args))
try:
ret = __salt__['cmd.run_all'](cmd)
if ret['retcode'] > 0:
if salt.log.is_logging_configured():
# systemd-detect-virt always returns > 0 on non-virtualized
# systems
# prtdiag only works in the global zone, skip if it fails
if salt.utils.is_windows() or 'systemd-detect-virt' in cmd or 'prtdiag' in cmd:
continue
failed_commands.add(command)
continue
except salt.exceptions.CommandExecutionError:
if salt.log.is_logging_configured():
if salt.utils.is_windows():
continue
failed_commands.add(command)
continue
output = ret['stdout']
if command == "system_profiler":
macoutput = output.lower()
if '0x1ab8' in macoutput:
grains['virtual'] = 'Parallels'
if 'parallels' in macoutput:
grains['virtual'] = 'Parallels'
if 'vmware' in macoutput:
grains['virtual'] = 'VMware'
if '0x15ad' in macoutput:
grains['virtual'] = 'VMware'
if 'virtualbox' in macoutput:
grains['virtual'] = 'VirtualBox'
# Break out of the loop so the next log message is not issued
break
elif command == 'systemd-detect-virt':
if output in ('qemu', 'kvm', 'oracle', 'xen', 'bochs', 'chroot', 'uml', 'systemd-nspawn'):
grains['virtual'] = output
break
elif 'vmware' in output:
grains['virtual'] = 'VMware'
break
elif 'microsoft' in output:
grains['virtual'] = 'VirtualPC'
break
elif 'lxc' in output:
grains['virtual'] = 'LXC'
break
elif command == 'virt-what':
if output in ('kvm', 'qemu', 'uml', 'xen', 'lxc'):
grains['virtual'] = output
break
elif 'vmware' in output:
grains['virtual'] = 'VMware'
break
elif 'parallels' in output:
grains['virtual'] = 'Parallels'
break
            elif 'hyperv' in output:
                grains['virtual'] = 'HyperV'
                break
            elif output:
                # virt-what prints nothing on physical hardware; any other
                # non-empty output names the detected platform
                grains['virtual'] = output.lower()
                break
elif command == 'dmidecode':
            # Vendor: QEMU
if 'Vendor: QEMU' in output:
# FIXME: Make this detect between kvm or qemu
grains['virtual'] = 'kvm'
if 'Manufacturer: QEMU' in output:
grains['virtual'] = 'kvm'
if 'Vendor: Bochs' in output:
grains['virtual'] = 'kvm'
if 'Manufacturer: Bochs' in output:
grains['virtual'] = 'kvm'
if 'BHYVE' in output:
grains['virtual'] = 'bhyve'
# Product Name: (oVirt) www.ovirt.org
# Red Hat Community virtualization Project based on kvm
elif 'Manufacturer: oVirt' in output:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'ovirt'
# Red Hat Enterprise Virtualization
elif 'Product Name: RHEV Hypervisor' in output:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'rhev'
elif 'VirtualBox' in output:
grains['virtual'] = 'VirtualBox'
# Product Name: VMware Virtual Platform
elif 'VMware' in output:
grains['virtual'] = 'VMware'
# Manufacturer: Microsoft Corporation
# Product Name: Virtual Machine
elif ': Microsoft' in output and 'Virtual Machine' in output:
grains['virtual'] = 'VirtualPC'
# Manufacturer: Parallels Software International Inc.
elif 'Parallels Software' in output:
grains['virtual'] = 'Parallels'
elif 'Manufacturer: Google' in output:
grains['virtual'] = 'kvm'
# Proxmox KVM
elif 'Vendor: SeaBIOS' in output:
grains['virtual'] = 'kvm'
# Break out of the loop, lspci parsing is not necessary
break
elif command == 'lspci':
# dmidecode not available or the user does not have the necessary
# permissions
model = output.lower()
if 'vmware' in model:
grains['virtual'] = 'VMware'
# 00:04.0 System peripheral: InnoTek Systemberatung GmbH
# VirtualBox Guest Service
elif 'virtualbox' in model:
grains['virtual'] = 'VirtualBox'
elif 'qemu' in model:
grains['virtual'] = 'kvm'
elif 'virtio' in model:
grains['virtual'] = 'kvm'
# Break out of the loop so the next log message is not issued
break
elif command == 'prtdiag':
model = output.lower().split("\n")[0]
if 'vmware' in model:
grains['virtual'] = 'VMware'
elif 'virtualbox' in model:
grains['virtual'] = 'VirtualBox'
elif 'qemu' in model:
grains['virtual'] = 'kvm'
elif 'joyent smartdc hvm' in model:
grains['virtual'] = 'kvm'
break
else:
if osdata['kernel'] not in skip_cmds:
log.debug(
'All tools for virtual hardware identification failed to '
'execute because they do not exist on the system running this '
'instance or the user does not have the necessary permissions '
'to execute them. Grains output might not be accurate.'
)
choices = ('Linux', 'HP-UX')
isdir = os.path.isdir
sysctl = salt.utils.which('sysctl')
if osdata['kernel'] in choices:
if os.path.isdir('/proc'):
try:
self_root = os.stat('/')
init_root = os.stat('/proc/1/root/.')
if self_root != init_root:
grains['virtual_subtype'] = 'chroot'
except (IOError, OSError):
pass
if os.path.isfile('/proc/1/cgroup'):
try:
with salt.utils.fopen('/proc/1/cgroup', 'r') as fhr:
if ':/lxc/' in fhr.read():
grains['virtual_subtype'] = 'LXC'
with salt.utils.fopen('/proc/1/cgroup', 'r') as fhr:
fhr_contents = fhr.read()
if ':/docker/' in fhr_contents or ':/system.slice/docker' in fhr_contents:
grains['virtual_subtype'] = 'Docker'
except IOError:
pass
if isdir('/proc/vz'):
if os.path.isfile('/proc/vz/version'):
grains['virtual'] = 'openvzhn'
elif os.path.isfile('/proc/vz/veinfo'):
grains['virtual'] = 'openvzve'
# a posteriori, it's expected for these to have failed:
failed_commands.discard('lspci')
failed_commands.discard('dmidecode')
# Provide additional detection for OpenVZ
if os.path.isfile('/proc/self/status'):
with salt.utils.fopen('/proc/self/status') as status_file:
vz_re = re.compile(r'^envID:\s+(\d+)$')
for line in status_file:
vz_match = vz_re.match(line.rstrip('\n'))
if vz_match and int(vz_match.groups()[0]) != 0:
grains['virtual'] = 'openvzve'
elif vz_match and int(vz_match.groups()[0]) == 0:
grains['virtual'] = 'openvzhn'
if isdir('/proc/sys/xen') or \
isdir('/sys/bus/xen') or isdir('/proc/xen'):
if os.path.isfile('/proc/xen/xsd_kva'):
# Tested on CentOS 5.3 / 2.6.18-194.26.1.el5xen
# Tested on CentOS 5.4 / 2.6.18-164.15.1.el5xen
grains['virtual_subtype'] = 'Xen Dom0'
else:
if grains.get('productname', '') == 'HVM domU':
# Requires dmidecode!
grains['virtual_subtype'] = 'Xen HVM DomU'
elif os.path.isfile('/proc/xen/capabilities') and \
os.access('/proc/xen/capabilities', os.R_OK):
with salt.utils.fopen('/proc/xen/capabilities') as fhr:
if 'control_d' not in fhr.read():
# Tested on CentOS 5.5 / 2.6.18-194.3.1.el5xen
grains['virtual_subtype'] = 'Xen PV DomU'
else:
# Shouldn't get to this, but just in case
grains['virtual_subtype'] = 'Xen Dom0'
# Tested on Fedora 10 / 2.6.27.30-170.2.82 with xen
# Tested on Fedora 15 / 2.6.41.4-1 without running xen
elif isdir('/sys/bus/xen'):
if 'xen:' in __salt__['cmd.run']('dmesg').lower():
grains['virtual_subtype'] = 'Xen PV DomU'
elif os.listdir('/sys/bus/xen/drivers'):
# An actual DomU will have several drivers
# whereas a paravirt ops kernel will not.
grains['virtual_subtype'] = 'Xen PV DomU'
# If a Dom0 or DomU was detected, obviously this is xen
if 'dom' in grains.get('virtual_subtype', '').lower():
grains['virtual'] = 'xen'
if os.path.isfile('/proc/cpuinfo'):
with salt.utils.fopen('/proc/cpuinfo', 'r') as fhr:
if 'QEMU Virtual CPU' in fhr.read():
grains['virtual'] = 'kvm'
if os.path.isfile('/sys/devices/virtual/dmi/id/product_name'):
try:
with salt.utils.fopen('/sys/devices/virtual/dmi/id/product_name', 'r') as fhr:
output = fhr.read()
if 'VirtualBox' in output:
grains['virtual'] = 'VirtualBox'
elif 'RHEV Hypervisor' in output:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'rhev'
elif 'oVirt Node' in output:
grains['virtual'] = 'kvm'
grains['virtual_subtype'] = 'ovirt'
elif 'Google' in output:
grains['virtual'] = 'gce'
except IOError:
pass
elif osdata['kernel'] == 'FreeBSD':
kenv = salt.utils.which('kenv')
if kenv:
product = __salt__['cmd.run'](
'{0} smbios.system.product'.format(kenv)
)
maker = __salt__['cmd.run'](
'{0} smbios.system.maker'.format(kenv)
)
if product.startswith('VMware'):
grains['virtual'] = 'VMware'
if product.startswith('VirtualBox'):
grains['virtual'] = 'VirtualBox'
if maker.startswith('Xen'):
grains['virtual_subtype'] = '{0} {1}'.format(maker, product)
grains['virtual'] = 'xen'
if maker.startswith('Microsoft') and product.startswith('Virtual'):
grains['virtual'] = 'VirtualPC'
if maker.startswith('OpenStack'):
grains['virtual'] = 'OpenStack'
if maker.startswith('Bochs'):
grains['virtual'] = 'kvm'
if sysctl:
hv_vendor = __salt__['cmd.run']('{0} hw.hv_vendor'.format(sysctl))
model = __salt__['cmd.run']('{0} hw.model'.format(sysctl))
jail = __salt__['cmd.run'](
'{0} -n security.jail.jailed'.format(sysctl)
)
if 'bhyve' in hv_vendor:
grains['virtual'] = 'bhyve'
if jail == '1':
grains['virtual_subtype'] = 'jail'
if 'QEMU Virtual CPU' in model:
grains['virtual'] = 'kvm'
elif osdata['kernel'] == 'OpenBSD':
if osdata['manufacturer'] == 'QEMU':
grains['virtual'] = 'kvm'
elif osdata['kernel'] == 'SunOS':
# Check if it's a "regular" zone. (i.e. Solaris 10/11 zone)
zonename = salt.utils.which('zonename')
if zonename:
zone = __salt__['cmd.run']('{0}'.format(zonename))
if zone != 'global':
grains['virtual'] = 'zone'
if salt.utils.is_smartos_zone():
grains.update(_smartos_zone_data())
# Check if it's a branded zone (i.e. Solaris 8/9 zone)
if isdir('/.SUNWnative'):
grains['virtual'] = 'zone'
elif osdata['kernel'] == 'NetBSD':
if sysctl:
if 'QEMU Virtual CPU' in __salt__['cmd.run'](
'{0} -n machdep.cpu_brand'.format(sysctl)):
grains['virtual'] = 'kvm'
elif 'invalid' not in __salt__['cmd.run'](
'{0} -n machdep.xen.suspend'.format(sysctl)):
grains['virtual'] = 'Xen PV DomU'
elif 'VMware' in __salt__['cmd.run'](
'{0} -n machdep.dmi.system-vendor'.format(sysctl)):
grains['virtual'] = 'VMware'
# NetBSD has Xen dom0 support
elif __salt__['cmd.run'](
'{0} -n machdep.idle-mechanism'.format(sysctl)) == 'xen':
if os.path.isfile('/var/run/xenconsoled.pid'):
grains['virtual_subtype'] = 'Xen Dom0'
for command in failed_commands:
log.info(
"Although '{0}' was found in path, the current user "
'cannot execute it. Grains output might not be '
'accurate.'.format(command)
)
return grains
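Stripped to its essence, the dmidecode branch of `_virtual()` is a table of marker substrings mapped to virtual types, checked in order. A minimal standalone sketch of that idea (the `detect_virtual` helper and its trimmed marker table are illustrative, not part of Salt):

```python
# Marker-substring detection, mirroring the dmidecode branch above.
# The first matching marker wins, so more specific markers come first.
DMI_MARKERS = [
    ('Vendor: QEMU', 'kvm'),
    ('Manufacturer: QEMU', 'kvm'),
    ('Manufacturer: oVirt', 'kvm'),
    ('Product Name: RHEV Hypervisor', 'kvm'),
    ('VirtualBox', 'VirtualBox'),
    ('VMware', 'VMware'),
    ('Parallels Software', 'Parallels'),
]


def detect_virtual(dmi_output):
    '''Return the first matching virtual type, or "physical".'''
    for marker, virt in DMI_MARKERS:
        if marker in dmi_output:
            return virt
    return 'physical'
```

Ordering matters: a QEMU vendor string must be checked before the broader `'VMware'`-style substrings, which is why the real function chains `if`/`elif` in a fixed order.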
def _ps(osdata):
'''
Return the ps grain
'''
grains = {}
bsd_choices = ('FreeBSD', 'NetBSD', 'OpenBSD', 'MacOS')
if osdata['os'] in bsd_choices:
grains['ps'] = 'ps auxwww'
elif osdata['os_family'] == 'Solaris':
grains['ps'] = '/usr/ucb/ps auxwww'
elif osdata['os'] == 'Windows':
grains['ps'] = 'tasklist.exe'
elif osdata.get('virtual', '') == 'openvzhn':
grains['ps'] = (
'ps -fH -p $(grep -l \"^envID:[[:space:]]*0\\$\" '
'/proc/[0-9]*/status | sed -e \"s=/proc/\\([0-9]*\\)/.*=\\1=\") '
'| awk \'{ $7=\"\"; print }\''
)
elif osdata['os_family'] == 'AIX':
grains['ps'] = '/usr/bin/ps auxww'
else:
grains['ps'] = 'ps -efHww'
    return grains


def _clean_value(key, val):
'''
Clean out well-known bogus values.
If it isn't clean (for example has value 'None'), return None.
Otherwise, return the original value.
NOTE: This logic also exists in the smbios module. This function is
for use when not using smbios to retrieve the value.
'''
if (val is None or
not len(val) or
re.match('none', val, flags=re.IGNORECASE)):
return None
elif 'uuid' in key:
        # Try each version (1-5) of RFC4122 to check if it's actually a UUID
        for uuidver in range(1, 6):
try:
uuid.UUID(val, version=uuidver)
return val
except ValueError:
continue
log.trace('HW {0} value {1} is an invalid UUID'.format(key, val.replace('\n', ' ')))
return None
elif re.search('serial|part|version', key):
        # 'To be filled by O.E.M.'
# 'Not applicable' etc.
# 'Not specified' etc.
# 0000000, 1234567 etc.
# begone!
if (re.match(r'^[0]+$', val) or
re.match(r'[0]?1234567[8]?[9]?[0]?', val) or
re.search(r'sernum|part[_-]?number|specified|filled|applicable', val, flags=re.IGNORECASE)):
return None
elif re.search('asset|manufacturer', key):
# AssetTag0. Manufacturer04. Begone.
if re.search(r'manufacturer|to be filled|available|asset|^no(ne|t)', val, flags=re.IGNORECASE):
return None
else:
# map unspecified, undefined, unknown & whatever to None
if (re.search(r'to be filled', val, flags=re.IGNORECASE) or
re.search(r'un(known|specified)|no(t|ne)? (asset|provided|defined|available|present|specified)',
val, flags=re.IGNORECASE)):
return None
return val
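The UUID branch above relies on `uuid.UUID` raising `ValueError` for malformed input. A self-contained sketch of the same check (`looks_like_uuid` is illustrative, not Salt's API); note that passing `version=` makes `uuid.UUID` normalize the version bits, so the loop effectively validates the 8-4-4-4-12 hex format rather than any particular RFC 4122 version:

```python
import uuid


def looks_like_uuid(val):
    '''Return True if val parses as a UUID for any RFC 4122 version.'''
    for ver in range(1, 6):  # versions 1-5
        try:
            # version= overrides the version bits, so this mostly checks
            # that the string is well-formed hexadecimal in UUID layout
            uuid.UUID(val, version=ver)
            return True
        except ValueError:
            continue
    return False
```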
def _windows_platform_data():
'''
Use the platform module for as much as we can.
'''
# Provides:
# kernelrelease
# kernelversion
# osversion
# osrelease
# osservicepack
# osmanufacturer
# manufacturer
# productname
# biosversion
# serialnumber
# osfullname
# timezone
# windowsdomain
# motherboard.productname
# motherboard.serialnumber
# virtual
if not HAS_WMI:
return {}
with salt.utils.winapi.Com():
wmi_c = wmi.WMI()
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa394102%28v=vs.85%29.aspx
systeminfo = wmi_c.Win32_ComputerSystem()[0]
# https://msdn.microsoft.com/en-us/library/aa394239(v=vs.85).aspx
osinfo = wmi_c.Win32_OperatingSystem()[0]
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa394077(v=vs.85).aspx
biosinfo = wmi_c.Win32_BIOS()[0]
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa394498(v=vs.85).aspx
timeinfo = wmi_c.Win32_TimeZone()[0]
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa394072(v=vs.85).aspx
motherboard = {'product': None,
'serial': None}
try:
motherboardinfo = wmi_c.Win32_BaseBoard()[0]
motherboard['product'] = motherboardinfo.Product
motherboard['serial'] = motherboardinfo.SerialNumber
except IndexError:
log.debug('Motherboard info not available on this system')
os_release = platform.release()
kernel_version = platform.version()
info = salt.utils.win_osinfo.get_os_version_info()
# Starting with Python 2.7.12 and 3.5.2 the `platform.uname()` function
# started reporting the Desktop version instead of the Server version on
# Server versions of Windows, so we need to look those up
# Check for Python >=2.7.12 or >=3.5.2
ver = pythonversion()['pythonversion']
if ((six.PY2 and
salt.utils.compare_versions(ver, '>=', [2, 7, 12, 'final', 0]))
or
(six.PY3 and
salt.utils.compare_versions(ver, '>=', [3, 5, 2, 'final', 0]))):
# (Product Type 1 is Desktop, Everything else is Server)
if info['ProductType'] > 1:
server = {'Vista': '2008Server',
'7': '2008ServerR2',
'8': '2012Server',
'8.1': '2012ServerR2',
'10': '2016Server'}
os_release = server.get(os_release,
'Grain not found. Update lookup table '
'in the `_windows_platform_data` '
'function in `grains\\core.py`')
service_pack = None
if info['ServicePackMajor'] > 0:
service_pack = ''.join(['SP', str(info['ServicePackMajor'])])
grains = {
'kernelrelease': _clean_value('kernelrelease', osinfo.Version),
'kernelversion': _clean_value('kernelversion', kernel_version),
'osversion': _clean_value('osversion', osinfo.Version),
'osrelease': _clean_value('osrelease', os_release),
'osservicepack': _clean_value('osservicepack', service_pack),
'osmanufacturer': _clean_value('osmanufacturer', osinfo.Manufacturer),
'manufacturer': _clean_value('manufacturer', systeminfo.Manufacturer),
'productname': _clean_value('productname', systeminfo.Model),
# bios name had a bunch of whitespace appended to it in my testing
# 'PhoenixBIOS 4.0 Release 6.0 '
'biosversion': _clean_value('biosversion', biosinfo.Name.strip()),
'serialnumber': _clean_value('serialnumber', biosinfo.SerialNumber),
'osfullname': _clean_value('osfullname', osinfo.Caption),
'timezone': _clean_value('timezone', timeinfo.Description),
'windowsdomain': _clean_value('windowsdomain', systeminfo.Domain),
'motherboard': {
'productname': _clean_value('motherboard.productname', motherboard['product']),
'serialnumber': _clean_value('motherboard.serialnumber', motherboard['serial']),
}
}
# test for virtualized environments
# I only had VMware available so the rest are unvalidated
if 'VRTUAL' in biosinfo.Version: # (not a typo)
grains['virtual'] = 'HyperV'
elif 'A M I' in biosinfo.Version:
grains['virtual'] = 'VirtualPC'
elif 'VMware' in systeminfo.Model:
grains['virtual'] = 'VMware'
elif 'VirtualBox' in systeminfo.Model:
grains['virtual'] = 'VirtualBox'
elif 'Xen' in biosinfo.Version:
grains['virtual'] = 'Xen'
if 'HVM domU' in systeminfo.Model:
grains['virtual_subtype'] = 'HVM domU'
elif 'OpenStack' in systeminfo.Model:
grains['virtual'] = 'OpenStack'
    return grains


def _osx_platform_data():
'''
Additional data for macOS systems
Returns: A dictionary containing values for the following:
- model_name
- boot_rom_version
- smc_version
- system_serialnumber
'''
cmd = 'system_profiler SPHardwareDataType'
hardware = __salt__['cmd.run'](cmd)
grains = {}
for line in hardware.splitlines():
field_name, _, field_val = line.partition(': ')
if field_name.strip() == "Model Name":
key = 'model_name'
grains[key] = _clean_value(key, field_val)
if field_name.strip() == "Boot ROM Version":
key = 'boot_rom_version'
grains[key] = _clean_value(key, field_val)
if field_name.strip() == "SMC Version (system)":
key = 'smc_version'
grains[key] = _clean_value(key, field_val)
if field_name.strip() == "Serial Number (system)":
key = 'system_serialnumber'
grains[key] = _clean_value(key, field_val)
return grains
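The parser above leans on `str.partition(': ')` splitting each `system_profiler` line at the first colon-space. A runnable sketch of the same approach against canned output (the sample text and the `parse_hardware` helper are made up for illustration):

```python
# Abbreviated, made-up system_profiler SPHardwareDataType output
SAMPLE = '''\
Hardware:

    Hardware Overview:

      Model Name: MacBook Pro
      Boot ROM Version: 256.0.0.0.0
      SMC Version (system): 2.45f0
      Serial Number (system): C02XXXXXXXXX
'''

# field label as printed -> grain key, as in _osx_platform_data()
WANTED = {
    'Model Name': 'model_name',
    'Boot ROM Version': 'boot_rom_version',
    'SMC Version (system)': 'smc_version',
    'Serial Number (system)': 'system_serialnumber',
}


def parse_hardware(text):
    '''Collect the wanted "Label: value" fields into a dict.'''
    grains = {}
    for line in text.splitlines():
        # partition at the first ': '; lines without it yield empty values
        field_name, _, field_val = line.partition(': ')
        key = WANTED.get(field_name.strip())
        if key:
            grains[key] = field_val.strip()
    return grains
```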
def id_():
'''
Return the id
'''
    return {'id': __opts__.get('id', '')}


_REPLACE_LINUX_RE = re.compile(r'\W(?:gnu/)?linux', re.IGNORECASE)
# This maps (at most) the first ten characters (no spaces, lowercased) of
# 'osfullname' to the 'os' grain that Salt traditionally uses.
# Please see os_data() and _supported_dists.
# If your system is not detecting properly it likely needs an entry here.
_OS_NAME_MAP = {
'redhatente': 'RedHat',
'gentoobase': 'Gentoo',
'archarm': 'Arch ARM',
'arch': 'Arch',
'debian': 'Debian',
'raspbian': 'Raspbian',
'fedoraremi': 'Fedora',
'chapeau': 'Chapeau',
'korora': 'Korora',
'amazonami': 'Amazon',
'alt': 'ALT',
'enterprise': 'OEL',
'oracleserv': 'OEL',
'cloudserve': 'CloudLinux',
'cloudlinux': 'CloudLinux',
'pidora': 'Fedora',
'scientific': 'ScientificLinux',
'synology': 'Synology',
'nilrt': 'NILinuxRT',
'nilrt-xfce': 'NILinuxRT-XFCE',
'manjaro': 'Manjaro',
'antergos': 'Antergos',
'sles': 'SUSE',
'slesexpand': 'RES',
'void': 'Void',
'linuxmint': 'Mint',
'neon': 'KDE neon',
}

# Map the 'os' grain to the 'os_family' grain
# These should always be capitalized entries as the lookup comes
# post-_OS_NAME_MAP. If your system is having trouble with detection, please
# make sure that the 'os' grain is capitalized and working correctly first.
_OS_FAMILY_MAP = {
'Ubuntu': 'Debian',
'Fedora': 'RedHat',
'Chapeau': 'RedHat',
'Korora': 'RedHat',
'FedBerry': 'RedHat',
'CentOS': 'RedHat',
'GoOSe': 'RedHat',
'Scientific': 'RedHat',
'Amazon': 'RedHat',
'CloudLinux': 'RedHat',
'OVS': 'RedHat',
'OEL': 'RedHat',
'XCP': 'RedHat',
'XenServer': 'RedHat',
'RES': 'RedHat',
'Sangoma': 'RedHat',
'Mandrake': 'Mandriva',
'ESXi': 'VMware',
'Mint': 'Debian',
'VMwareESX': 'VMware',
'Bluewhite64': 'Bluewhite',
'Slamd64': 'Slackware',
'SLES': 'Suse',
    'SUSE Enterprise Server': 'Suse',
'SLED': 'Suse',
'openSUSE': 'Suse',
'SUSE': 'Suse',
'openSUSE Leap': 'Suse',
'openSUSE Tumbleweed': 'Suse',
'SLES_SAP': 'Suse',
'Solaris': 'Solaris',
'SmartOS': 'Solaris',
'OmniOS': 'Solaris',
'OpenIndiana Development': 'Solaris',
'OpenIndiana': 'Solaris',
'OpenSolaris Development': 'Solaris',
'OpenSolaris': 'Solaris',
'Oracle Solaris': 'Solaris',
'Arch ARM': 'Arch',
'Manjaro': 'Arch',
'Antergos': 'Arch',
'ALT': 'RedHat',
'Trisquel': 'Debian',
'GCEL': 'Debian',
'Linaro': 'Debian',
'elementary OS': 'Debian',
'ScientificLinux': 'RedHat',
'Raspbian': 'Debian',
'Devuan': 'Debian',
'antiX': 'Debian',
'NILinuxRT': 'NILinuxRT',
'NILinuxRT-XFCE': 'NILinuxRT',
'KDE neon': 'Debian',
'Void': 'Void',
}
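Putting the two tables together: `os_data()` strips "Linux" from the distro full name, derives a short name (first ten characters, spaces removed, lowercased), maps it through `_OS_NAME_MAP`, and then maps the resulting `os` grain through `_OS_FAMILY_MAP`. A condensed sketch with trimmed, illustrative copies of the tables:

```python
import re

# Trimmed copies of the two lookup tables above, for illustration only
OS_NAME_MAP = {'redhatente': 'RedHat', 'scientific': 'ScientificLinux'}
OS_FAMILY_MAP = {'ScientificLinux': 'RedHat'}

REPLACE_LINUX_RE = re.compile(r'\W(?:gnu/)?linux', re.IGNORECASE)


def os_grains(osfullname):
    '''Derive ('os', 'os_family') from a distro full name.'''
    distroname = REPLACE_LINUX_RE.sub('', osfullname).strip()
    # first ten characters, spaces removed, lowercased
    shortname = distroname.replace(' ', '').lower()[:10]
    os_ = OS_NAME_MAP.get(shortname, distroname)
    # unmapped names fall back to the 'os' value itself
    return os_, OS_FAMILY_MAP.get(os_, os_)
```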
def _linux_bin_exists(binary):
'''
Does a binary exist in linux (depends on which, type, or whereis)
'''
for search_cmd in ('which', 'type -ap'):
try:
return __salt__['cmd.retcode'](
'{0} {1}'.format(search_cmd, binary)
) == 0
except salt.exceptions.CommandExecutionError:
pass
try:
return len(__salt__['cmd.run_all'](
'whereis -b {0}'.format(binary)
)['stdout'].split()) > 1
except salt.exceptions.CommandExecutionError:
return False
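On Python 3 (or Python 2 with a backport), the same existence check can be done without spawning `which`/`type`/`whereis` at all. A minimal alternative sketch for comparison, not what Salt does above:

```python
import shutil


def bin_exists(binary):
    # shutil.which resolves the name against PATH in-process,
    # avoiding one subprocess per lookup
    return shutil.which(binary) is not None
```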
def _get_interfaces():
'''
Provide a dict of the connected interfaces and their ip addresses
'''
global _INTERFACES
if not _INTERFACES:
_INTERFACES = salt.utils.network.interfaces()
    return _INTERFACES


def _parse_os_release():
'''
Parse /etc/os-release and return a parameter dictionary
See http://www.freedesktop.org/software/systemd/man/os-release.html
for specification of the file format.
'''
filename = '/etc/os-release'
if not os.path.isfile(filename):
filename = '/usr/lib/os-release'
data = dict()
with salt.utils.fopen(filename) as ifile:
regex = re.compile('^([\\w]+)=(?:\'|")?(.*?)(?:\'|")?$')
for line in ifile:
match = regex.match(line.strip())
if match:
# Shell special characters ("$", quotes, backslash, backtick)
# are escaped with backslashes
data[match.group(1)] = re.sub(r'\\([$"\'\\`])', r'\1', match.group(2))
return data
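The same regex and backslash-unescaping logic can be exercised against an in-memory string. A runnable sketch (the sample os-release content is illustrative):

```python
import re

SAMPLE = '''\
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
'''


def parse_os_release_text(text):
    '''Parse os-release style KEY=value lines from a string.'''
    regex = re.compile(r'^([\w]+)=(?:\'|")?(.*?)(?:\'|")?$')
    data = {}
    for line in text.splitlines():
        match = regex.match(line.strip())
        if match:
            # undo backslash-escaping of shell special characters
            data[match.group(1)] = re.sub(
                r'\\([$"\'\\`])', r'\1', match.group(2))
    return data
```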
def os_data():
'''
Return grains pertaining to the operating system
'''
grains = {
'num_gpus': 0,
'gpus': [],
}
# Windows Server 2008 64-bit
# ('Windows', 'MINIONNAME', '2008ServerR2', '6.1.7601', 'AMD64',
    # 'Intel64 Family 6 Model 23 Stepping 6, GenuineIntel')
# Ubuntu 10.04
# ('Linux', 'MINIONNAME', '2.6.32-38-server',
# '#83-Ubuntu SMP Wed Jan 4 11:26:59 UTC 2012', 'x86_64', '')
# pylint: disable=unpacking-non-sequence
(grains['kernel'], grains['nodename'],
grains['kernelrelease'], grains['kernelversion'], grains['cpuarch'], _) = platform.uname()
# pylint: enable=unpacking-non-sequence
if salt.utils.is_proxy():
grains['kernel'] = 'proxy'
grains['kernelrelease'] = 'proxy'
grains['kernelversion'] = 'proxy'
grains['osrelease'] = 'proxy'
grains['os'] = 'proxy'
grains['os_family'] = 'proxy'
grains['osfullname'] = 'proxy'
elif salt.utils.is_windows():
grains['os'] = 'Windows'
grains['os_family'] = 'Windows'
grains.update(_memdata(grains))
grains.update(_windows_platform_data())
grains.update(_windows_cpudata())
grains.update(_windows_virtual(grains))
grains.update(_ps(grains))
if 'Server' in grains['osrelease']:
osrelease_info = grains['osrelease'].split('Server', 1)
osrelease_info[1] = osrelease_info[1].lstrip('R')
else:
osrelease_info = grains['osrelease'].split('.')
for idx, value in enumerate(osrelease_info):
if not value.isdigit():
continue
osrelease_info[idx] = int(value)
grains['osrelease_info'] = tuple(osrelease_info)
grains['osfinger'] = '{os}-{ver}'.format(
os=grains['os'],
ver=grains['osrelease'])
grains['init'] = 'Windows'
return grains
elif salt.utils.is_linux():
# Add SELinux grain, if you have it
if _linux_bin_exists('selinuxenabled'):
grains['selinux'] = {}
grains['selinux']['enabled'] = __salt__['cmd.retcode'](
'selinuxenabled'
) == 0
if _linux_bin_exists('getenforce'):
grains['selinux']['enforced'] = __salt__['cmd.run'](
'getenforce'
).strip()
# Add systemd grain, if you have it
if _linux_bin_exists('systemctl') and _linux_bin_exists('localectl'):
grains['systemd'] = {}
systemd_info = __salt__['cmd.run'](
'systemctl --version'
).splitlines()
grains['systemd']['version'] = systemd_info[0].split()[1]
grains['systemd']['features'] = systemd_info[1]
# Add init grain
grains['init'] = 'unknown'
try:
os.stat('/run/systemd/system')
grains['init'] = 'systemd'
except (OSError, IOError):
if os.path.exists('/proc/1/cmdline'):
with salt.utils.fopen('/proc/1/cmdline') as fhr:
init_cmdline = fhr.read().replace('\x00', ' ').split()
try:
init_bin = salt.utils.which(init_cmdline[0])
except IndexError:
                    # Empty init_cmdline
init_bin = None
log.warning(
"Unable to fetch data from /proc/1/cmdline"
)
if init_bin is not None and init_bin.endswith('bin/init'):
supported_inits = (six.b('upstart'), six.b('sysvinit'), six.b('systemd'))
edge_len = max(len(x) for x in supported_inits) - 1
try:
buf_size = __opts__['file_buffer_size']
except KeyError:
# Default to the value of file_buffer_size for the minion
buf_size = 262144
try:
with salt.utils.fopen(init_bin, 'rb') as fp_:
                        edge = six.b('')
                        buf = fp_.read(buf_size).lower()
while buf:
buf = edge + buf
for item in supported_inits:
if item in buf:
if six.PY3:
item = item.decode('utf-8')
grains['init'] = item
buf = six.b('')
break
edge = buf[-edge_len:]
buf = fp_.read(buf_size).lower()
except (IOError, OSError) as exc:
log.error(
'Unable to read from init_bin ({0}): {1}'
.format(init_bin, exc)
)
elif salt.utils.which('supervisord') in init_cmdline:
grains['init'] = 'supervisord'
elif init_cmdline == ['runit']:
grains['init'] = 'runit'
else:
log.info(
'Could not determine init system from command line: ({0})'
.format(' '.join(init_cmdline))
)
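The `init_bin` scan above reads the binary in `buf_size` chunks and carries the last `edge_len` bytes of each chunk into the next, so a marker string split across two reads is still matched on the seam. The same technique in standalone form (with a tiny buffer so the seam case is actually exercised; `find_marker` is illustrative, not Salt's API):

```python
import io


def find_marker(stream, markers, buf_size=16):
    '''Scan a byte stream for any of the (lowercase) markers.'''
    # Carry max(len) - 1 trailing bytes into the next read so a marker
    # straddling two buffers is still found
    edge_len = max(len(m) for m in markers) - 1
    edge = b''
    buf = stream.read(buf_size).lower()
    while buf:
        buf = edge + buf
        for marker in markers:
            if marker in buf:
                return marker
        edge = buf[-edge_len:]
        buf = stream.read(buf_size).lower()
    return None
```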
# Add lsb grains on any distro with lsb-release
try:
import lsb_release # pylint: disable=import-error
release = lsb_release.get_distro_information()
for key, value in six.iteritems(release):
key = key.lower()
lsb_param = 'lsb_{0}{1}'.format(
'' if key.startswith('distrib_') else 'distrib_',
key
)
grains[lsb_param] = value
# Catch a NameError to workaround possible breakage in lsb_release
# See https://github.com/saltstack/salt/issues/37867
except (ImportError, NameError):
# if the python library isn't available, default to regex
if os.path.isfile('/etc/lsb-release'):
# Matches any possible format:
# DISTRIB_ID="Ubuntu"
# DISTRIB_ID='Mageia'
# DISTRIB_ID=Fedora
# DISTRIB_RELEASE='10.10'
# DISTRIB_CODENAME='squeeze'
# DISTRIB_DESCRIPTION='Ubuntu 10.10'
regex = re.compile((
'^(DISTRIB_(?:ID|RELEASE|CODENAME|DESCRIPTION))=(?:\'|")?'
'([\\w\\s\\.\\-_]+)(?:\'|")?'
))
with salt.utils.fopen('/etc/lsb-release') as ifile:
for line in ifile:
match = regex.match(line.rstrip('\n'))
if match:
# Adds:
# lsb_distrib_{id,release,codename,description}
grains[
'lsb_{0}'.format(match.groups()[0].lower())
] = match.groups()[1].rstrip()
if grains.get('lsb_distrib_description', '').lower().startswith('antergos'):
# Antergos incorrectly configures their /etc/lsb-release,
# setting the DISTRIB_ID to "Arch". This causes the "os" grain
# to be incorrectly set to "Arch".
grains['osfullname'] = 'Antergos Linux'
elif 'lsb_distrib_id' not in grains:
if os.path.isfile('/etc/os-release') or os.path.isfile('/usr/lib/os-release'):
os_release = _parse_os_release()
if 'NAME' in os_release:
grains['lsb_distrib_id'] = os_release['NAME'].strip()
if 'VERSION_ID' in os_release:
grains['lsb_distrib_release'] = os_release['VERSION_ID']
if 'PRETTY_NAME' in os_release:
grains['lsb_distrib_codename'] = os_release['PRETTY_NAME']
if 'CPE_NAME' in os_release:
if ":suse:" in os_release['CPE_NAME'] or ":opensuse:" in os_release['CPE_NAME']:
grains['os'] = "SUSE"
# openSUSE `osfullname` grain normalization
if os_release.get("NAME") == "openSUSE Leap":
grains['osfullname'] = "Leap"
elif os_release.get("VERSION") == "Tumbleweed":
grains['osfullname'] = os_release["VERSION"]
elif os.path.isfile('/etc/SuSE-release'):
grains['lsb_distrib_id'] = 'SUSE'
version = ''
patch = ''
with salt.utils.fopen('/etc/SuSE-release') as fhr:
for line in fhr:
if 'enterprise' in line.lower():
grains['lsb_distrib_id'] = 'SLES'
grains['lsb_distrib_codename'] = re.sub(r'\(.+\)', '', line).strip()
elif 'version' in line.lower():
version = re.sub(r'[^0-9]', '', line)
elif 'patchlevel' in line.lower():
patch = re.sub(r'[^0-9]', '', line)
grains['lsb_distrib_release'] = version
if patch:
grains['lsb_distrib_release'] += '.' + patch
patchstr = 'SP' + patch
                        if grains.get('lsb_distrib_codename') and patchstr not in grains['lsb_distrib_codename']:
grains['lsb_distrib_codename'] += ' ' + patchstr
if not grains.get('lsb_distrib_codename'):
grains['lsb_distrib_codename'] = 'n.a'
elif os.path.isfile('/etc/altlinux-release'):
# ALT Linux
grains['lsb_distrib_id'] = 'altlinux'
with salt.utils.fopen('/etc/altlinux-release') as ifile:
# This file is symlinked to from:
# /etc/fedora-release
# /etc/redhat-release
# /etc/system-release
for line in ifile:
# ALT Linux Sisyphus (unstable)
comps = line.split()
if comps[0] == 'ALT':
grains['lsb_distrib_release'] = comps[2]
grains['lsb_distrib_codename'] = \
comps[3].replace('(', '').replace(')', '')
elif os.path.isfile('/etc/centos-release'):
# CentOS Linux
grains['lsb_distrib_id'] = 'CentOS'
with salt.utils.fopen('/etc/centos-release') as ifile:
for line in ifile:
# Need to pull out the version and codename
# in the case of custom content in /etc/centos-release
find_release = re.compile(r'\d+\.\d+')
find_codename = re.compile(r'(?<=\()(.*?)(?=\))')
release = find_release.search(line)
codename = find_codename.search(line)
if release is not None:
grains['lsb_distrib_release'] = release.group()
if codename is not None:
grains['lsb_distrib_codename'] = codename.group()
elif os.path.isfile('/etc.defaults/VERSION') \
and os.path.isfile('/etc.defaults/synoinfo.conf'):
grains['osfullname'] = 'Synology'
with salt.utils.fopen('/etc.defaults/VERSION', 'r') as fp_:
synoinfo = {}
for line in fp_:
try:
key, val = line.rstrip('\n').split('=')
except ValueError:
continue
if key in ('majorversion', 'minorversion',
'buildnumber'):
synoinfo[key] = val.strip('"')
if len(synoinfo) != 3:
log.warning(
'Unable to determine Synology version info. '
'Please report this, as it is likely a bug.'
)
else:
grains['osrelease'] = (
'{majorversion}.{minorversion}-{buildnumber}'
.format(**synoinfo)
)
# Use the already intelligent platform module to get distro info
# (though apparently it's not intelligent enough to strip quotes)
(osname, osrelease, oscodename) = \
[x.strip('"').strip("'") for x in
linux_distribution(supported_dists=_supported_dists)]
# Try to assign these three names based on the lsb info, they tend to
# be more accurate than what python gets from /etc/DISTRO-release.
# It's worth noting that Ubuntu has patched their Python distribution
# so that linux_distribution() does the /etc/lsb-release parsing, but
        # we do it anyway here for the sake of full portability.
if 'osfullname' not in grains:
grains['osfullname'] = \
grains.get('lsb_distrib_id', osname).strip()
if 'osrelease' not in grains:
# NOTE: This is a workaround for CentOS 7 os-release bug
# https://bugs.centos.org/view.php?id=8359
# /etc/os-release contains no minor distro release number so we fall back to parse
# /etc/centos-release file instead.
            # Commit introducing this comment should be reverted after the upstream fix is released.
if 'CentOS Linux 7' in grains.get('lsb_distrib_codename', ''):
grains.pop('lsb_distrib_release', None)
grains['osrelease'] = \
grains.get('lsb_distrib_release', osrelease).strip()
grains['oscodename'] = grains.get('lsb_distrib_codename', '').strip() or oscodename
if 'Red Hat' in grains['oscodename']:
grains['oscodename'] = oscodename
distroname = _REPLACE_LINUX_RE.sub('', grains['osfullname']).strip()
# return the first ten characters with no spaces, lowercased
shortname = distroname.replace(' ', '').lower()[:10]
# this maps the long names from the /etc/DISTRO-release files to the
# traditional short names that Salt has used.
if 'os' not in grains:
grains['os'] = _OS_NAME_MAP.get(shortname, distroname)
grains.update(_linux_cpudata())
grains.update(_linux_gpu_data())
elif grains['kernel'] == 'SunOS':
if salt.utils.is_smartos():
# See https://github.com/joyent/smartos-live/issues/224
uname_v = os.uname()[3] # format: joyent_20161101T004406Z
uname_v = uname_v[uname_v.index('_')+1:]
grains['os'] = grains['osfullname'] = 'SmartOS'
# store a parsed version of YYYY.MM.DD as osrelease
grains['osrelease'] = ".".join([
uname_v.split('T')[0][0:4],
uname_v.split('T')[0][4:6],
uname_v.split('T')[0][6:8],
])
# store a untouched copy of the timestamp in osrelease_stamp
grains['osrelease_stamp'] = uname_v
if salt.utils.is_smartos_globalzone():
grains.update(_smartos_computenode_data())
elif os.path.isfile('/etc/release'):
with salt.utils.fopen('/etc/release', 'r') as fp_:
rel_data = fp_.read()
try:
release_re = re.compile(
r'((?:Open|Oracle )?Solaris|OpenIndiana|OmniOS) (Development)?'
r'\s*(\d+\.?\d*|v\d+)\s?[A-Z]*\s?(r\d+|\d+\/\d+|oi_\S+|snv_\S+)?'
)
osname, development, osmajorrelease, osminorrelease = \
release_re.search(rel_data).groups()
except AttributeError:
# Set a blank osrelease grain and fallback to 'Solaris'
# as the 'os' grain.
grains['os'] = grains['osfullname'] = 'Solaris'
grains['osrelease'] = ''
else:
if development is not None:
osname = ' '.join((osname, development))
uname_v = os.uname()[3]
grains['os'] = grains['osfullname'] = osname
if osname in ['Oracle Solaris'] and uname_v.startswith(osmajorrelease):
                        # Oracle Solaris 11 and later include the minor version in uname
grains['osrelease'] = uname_v
elif osname in ['OmniOS']:
# OmniOS
osrelease = []
osrelease.append(osmajorrelease[1:])
osrelease.append(osminorrelease[1:])
grains['osrelease'] = ".".join(osrelease)
grains['osrelease_stamp'] = uname_v
else:
# Sun Solaris 10 and earlier/comparable
osrelease = []
osrelease.append(osmajorrelease)
if osminorrelease:
osrelease.append(osminorrelease)
grains['osrelease'] = ".".join(osrelease)
grains['osrelease_stamp'] = uname_v
grains.update(_sunos_cpudata())
elif grains['kernel'] == 'VMkernel':
grains['os'] = 'ESXi'
elif grains['kernel'] == 'Darwin':
osrelease = __salt__['cmd.run']('sw_vers -productVersion')
osname = __salt__['cmd.run']('sw_vers -productName')
osbuild = __salt__['cmd.run']('sw_vers -buildVersion')
grains['os'] = 'MacOS'
grains['os_family'] = 'MacOS'
grains['osfullname'] = "{0} {1}".format(osname, osrelease)
grains['osrelease'] = osrelease
grains['osbuild'] = osbuild
grains['init'] = 'launchd'
grains.update(_bsd_cpudata(grains))
grains.update(_osx_gpudata())
grains.update(_osx_platform_data())
else:
grains['os'] = grains['kernel']
if grains['kernel'] == 'FreeBSD':
try:
grains['osrelease'] = __salt__['cmd.run']('freebsd-version -u').split('-')[0]
except salt.exceptions.CommandExecutionError:
# freebsd-version was introduced in 10.0.
# derive osrelease from kernelversion prior to that
grains['osrelease'] = grains['kernelrelease'].split('-')[0]
grains.update(_bsd_cpudata(grains))
if grains['kernel'] in ('OpenBSD', 'NetBSD'):
grains.update(_bsd_cpudata(grains))
grains['osrelease'] = grains['kernelrelease'].split('-')[0]
if grains['kernel'] == 'NetBSD':
grains.update(_netbsd_gpu_data())
if not grains['os']:
grains['os'] = 'Unknown {0}'.format(grains['kernel'])
grains['os_family'] = 'Unknown'
else:
# this assigns family names based on the os name
# family defaults to the os name if not found
grains['os_family'] = _OS_FAMILY_MAP.get(grains['os'],
grains['os'])
# Build the osarch grain. This grain will be used for platform-specific
# considerations such as package management. Fall back to the CPU
# architecture.
if grains.get('os_family') == 'Debian':
osarch = __salt__['cmd.run']('dpkg --print-architecture').strip()
elif grains.get('os_family') == 'RedHat':
osarch = __salt__['cmd.run']('rpm --eval %{_host_cpu}').strip()
elif grains.get('os_family') == 'NILinuxRT':
archinfo = {}
for line in __salt__['cmd.run']('opkg print-architecture').splitlines():
if line.startswith('arch'):
_, arch, priority = line.split()
archinfo[arch.strip()] = int(priority.strip())
# Return osarch in priority order (higher to lower)
osarch = sorted(archinfo, key=archinfo.get, reverse=True)
else:
osarch = grains['cpuarch']
grains['osarch'] = osarch
grains.update(_memdata(grains))
# Get the hardware and bios data
grains.update(_hw_data(grains))
# Get zpool data
grains.update(_zpool_data(grains))
# Load the virtual machine info
grains.update(_virtual(grains))
grains.update(_ps(grains))
if grains.get('osrelease', ''):
osrelease_info = grains['osrelease'].split('.')
for idx, value in enumerate(osrelease_info):
if not value.isdigit():
continue
osrelease_info[idx] = int(value)
grains['osrelease_info'] = tuple(osrelease_info)
try:
grains['osmajorrelease'] = int(grains['osrelease_info'][0])
except (IndexError, TypeError, ValueError):
log.debug(
'Unable to derive osmajorrelease from osrelease_info \'%s\'. '
'The osmajorrelease grain will not be set.',
grains['osrelease_info']
)
os_name = grains['os' if grains.get('os') in (
'FreeBSD', 'OpenBSD', 'NetBSD', 'Mac', 'Raspbian') else 'osfullname']
grains['osfinger'] = '{0}-{1}'.format(
os_name, grains['osrelease'] if os_name in ('Ubuntu',) else grains['osrelease_info'][0])
return grains
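A minimal, standalone sketch (a hypothetical helper, not part of Salt) of how the `osrelease_info` grain above is derived from an `osrelease` string: dotted components that are purely numeric become ints, and anything else is kept as a string.

```python
def osrelease_info(osrelease):
    # Mirrors the loop above: split on '.', convert digit-only parts
    # to int, keep the rest as strings, return as a tuple.
    parts = osrelease.split('.')
    return tuple(int(p) if p.isdigit() else p for p in parts)
```

This is why integer comparisons such as `grains['osrelease_info'] >= (7, 4)` work in states, while non-numeric components (e.g. SunOS `snv_151a`) survive unchanged.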
def locale_info():
'''
Provides
defaultlanguage
defaultencoding
'''
grains = {}
grains['locale_info'] = {}
if salt.utils.is_proxy():
return grains
try:
(
grains['locale_info']['defaultlanguage'],
grains['locale_info']['defaultencoding']
) = locale.getdefaultlocale()
except Exception:
        # locale.getdefaultlocale() can raise ValueError. Catch anything
        # else it might raise as well, per #2205.
grains['locale_info']['defaultlanguage'] = 'unknown'
grains['locale_info']['defaultencoding'] = 'unknown'
grains['locale_info']['detectedencoding'] = __salt_system_encoding__
return grains
def hostname():
'''
Return fqdn, hostname, domainname
'''
# This is going to need some work
# Provides:
# fqdn
# host
# localhost
# domain
global __FQDN__
grains = {}
if salt.utils.is_proxy():
return grains
grains['localhost'] = socket.gethostname()
if __FQDN__ is None:
__FQDN__ = salt.utils.network.get_fqhostname()
# On some distros (notably FreeBSD) if there is no hostname set
# salt.utils.network.get_fqhostname() will return None.
# In this case we punt and log a message at error level, but force the
# hostname and domain to be localhost.localdomain
# Otherwise we would stacktrace below
if __FQDN__ is None: # still!
log.error('Having trouble getting a hostname. Does this machine have its hostname and domain set properly?')
__FQDN__ = 'localhost.localdomain'
grains['fqdn'] = __FQDN__
(grains['host'], grains['domain']) = grains['fqdn'].partition('.')[::2]
return grains
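A quick sketch of the host/domain split above: `str.partition('.')` returns a 3-tuple of `(head, separator, tail)`, and the `[::2]` slice keeps only the head and tail. The FQDN below is an example value, not read from the system.

```python
fqdn = 'linux-node1.example.com'  # example value (hypothetical host)
# partition('.') -> ('linux-node1', '.', 'example.com'); [::2] drops the dot
host, domain = fqdn.partition('.')[::2]
```

When the hostname has no dot, `partition` leaves the tail empty, so the `domain` grain is simply `''` rather than raising.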
def append_domain():
'''
Return append_domain if set
'''
grain = {}
if salt.utils.is_proxy():
return grain
if 'append_domain' in __opts__:
grain['append_domain'] = __opts__['append_domain']
return grain
def ip_fqdn():
'''
Return ip address and FQDN grains
'''
if salt.utils.is_proxy():
return {}
ret = {}
ret['ipv4'] = salt.utils.network.ip_addrs(include_loopback=True)
ret['ipv6'] = salt.utils.network.ip_addrs6(include_loopback=True)
_fqdn = hostname()['fqdn']
for socket_type, ipv_num in ((socket.AF_INET, '4'), (socket.AF_INET6, '6')):
key = 'fqdn_ip' + ipv_num
if not ret['ipv' + ipv_num]:
ret[key] = []
else:
try:
info = socket.getaddrinfo(_fqdn, None, socket_type)
ret[key] = list(set(item[4][0] for item in info))
except socket.error:
if __opts__['__role'] == 'master':
log.warning('Unable to find IPv{0} record for "{1}" causing a 10 second timeout when rendering grains. '
                                'Set up DNS or /etc/hosts for IPv{0} to clear this.'.format(ipv_num, _fqdn))
ret[key] = []
return ret
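A sketch of the `fqdn_ip4`/`fqdn_ip6` extraction above, using fabricated `socket.getaddrinfo()`-style 5-tuples of `(family, type, proto, canonname, sockaddr)`: the address lives in `sockaddr[0]`, and the `set()` collapses the duplicate entries that `getaddrinfo` returns for different socket types.

```python
import socket

# Fabricated getaddrinfo() results for one hypothetical A record:
# the same address comes back once per socket type.
info = [
    (socket.AF_INET, socket.SOCK_STREAM, 6, '', ('192.168.74.20', 0)),
    (socket.AF_INET, socket.SOCK_DGRAM, 17, '', ('192.168.74.20', 0)),
]
# item[4] is the sockaddr tuple; item[4][0] is the address string
addrs = list(set(item[4][0] for item in info))
```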
def ip_interfaces():
'''
Provide a dict of the connected interfaces and their ip addresses
The addresses will be passed as a list for each interface
'''
# Provides:
# ip_interfaces
if salt.utils.is_proxy():
return {}
ret = {}
ifaces = _get_interfaces()
for face in ifaces:
iface_ips = []
for inet in ifaces[face].get('inet', []):
if 'address' in inet:
iface_ips.append(inet['address'])
for inet in ifaces[face].get('inet6', []):
if 'address' in inet:
iface_ips.append(inet['address'])
for secondary in ifaces[face].get('secondary', []):
if 'address' in secondary:
iface_ips.append(secondary['address'])
ret[face] = iface_ips
return {'ip_interfaces': ret}
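The flattening above (and in the `ip4_interfaces`/`ip6_interfaces` variants below) can be seen on a fabricated `_get_interfaces()`-style structure; the interface name and addresses here are made up for illustration.

```python
# Hypothetical interface data in the shape _get_interfaces() produces
ifaces = {
    'eth0': {
        'inet': [{'address': '192.168.74.20'}],
        'inet6': [{'address': 'fe80::1'}],
    },
}
ret = {}
for face in ifaces:
    iface_ips = []
    # inet entries first, then inet6, as in ip_interfaces() above
    for inet in ifaces[face].get('inet', []):
        if 'address' in inet:
            iface_ips.append(inet['address'])
    for inet in ifaces[face].get('inet6', []):
        if 'address' in inet:
            iface_ips.append(inet['address'])
    ret[face] = iface_ips
```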
def ip4_interfaces():
'''
Provide a dict of the connected interfaces and their ip4 addresses
The addresses will be passed as a list for each interface
'''
# Provides:
    # ip4_interfaces
if salt.utils.is_proxy():
return {}
ret = {}
ifaces = _get_interfaces()
for face in ifaces:
iface_ips = []
for inet in ifaces[face].get('inet', []):
if 'address' in inet:
iface_ips.append(inet['address'])
for secondary in ifaces[face].get('secondary', []):
if 'address' in secondary:
iface_ips.append(secondary['address'])
ret[face] = iface_ips
return {'ip4_interfaces': ret}
def ip6_interfaces():
'''
Provide a dict of the connected interfaces and their ip6 addresses
The addresses will be passed as a list for each interface
'''
# Provides:
    # ip6_interfaces
if salt.utils.is_proxy():
return {}
ret = {}
ifaces = _get_interfaces()
for face in ifaces:
iface_ips = []
for inet in ifaces[face].get('inet6', []):
if 'address' in inet:
iface_ips.append(inet['address'])
for secondary in ifaces[face].get('secondary', []):
if 'address' in secondary:
iface_ips.append(secondary['address'])
ret[face] = iface_ips
return {'ip6_interfaces': ret}
def hwaddr_interfaces():
'''
Provide a dict of the connected interfaces and their
hw addresses (Mac Address)
'''
# Provides:
# hwaddr_interfaces
ret = {}
ifaces = _get_interfaces()
for face in ifaces:
if 'hwaddr' in ifaces[face]:
ret[face] = ifaces[face]['hwaddr']
return {'hwaddr_interfaces': ret}
def dns():
'''
Parse the resolver configuration file
.. versionadded:: 2016.3.0
'''
# Provides:
# dns
if salt.utils.is_windows() or 'proxyminion' in __opts__:
return {}
resolv = salt.utils.dns.parse_resolv()
for key in ('nameservers', 'ip4_nameservers', 'ip6_nameservers',
'sortlist'):
if key in resolv:
resolv[key] = [str(i) for i in resolv[key]]
return {'dns': resolv} if resolv else {}
def get_machine_id():
'''
Provide the machine-id
'''
# Provides:
# machine-id
locations = ['/etc/machine-id', '/var/lib/dbus/machine-id']
existing_locations = [loc for loc in locations if os.path.exists(loc)]
if not existing_locations:
return {}
else:
with salt.utils.fopen(existing_locations[0]) as machineid:
return {'machine_id': machineid.read().strip()}
def path():
'''
Return the path
'''
# Provides:
# path
return {'path': os.environ.get('PATH', '').strip()}
def pythonversion():
'''
Return the Python version
'''
# Provides:
# pythonversion
return {'pythonversion': list(sys.version_info)}
def pythonpath():
'''
Return the Python path
'''
# Provides:
# pythonpath
return {'pythonpath': sys.path}
def pythonexecutable():
'''
Return the python executable in use
'''
# Provides:
# pythonexecutable
return {'pythonexecutable': sys.executable}
def saltpath():
'''
Return the path of the salt module
'''
# Provides:
# saltpath
salt_path = os.path.abspath(os.path.join(__file__, os.path.pardir))
return {'saltpath': os.path.dirname(salt_path)}
def saltversion():
'''
Return the version of salt
'''
# Provides:
# saltversion
from salt.version import __version__
return {'saltversion': __version__}
def zmqversion():
'''
Return the zeromq version
'''
# Provides:
# zmqversion
try:
import zmq
return {'zmqversion': zmq.zmq_version()} # pylint: disable=no-member
except ImportError:
return {}
def saltversioninfo():
'''
Return the version_info of salt
.. versionadded:: 0.17.0
'''
# Provides:
# saltversioninfo
from salt.version import __version_info__
return {'saltversioninfo': list(__version_info__)}
def _hw_data(osdata):
'''
Get system specific hardware data from dmidecode
Provides
biosversion
productname
manufacturer
serialnumber
biosreleasedate
uuid
.. versionadded:: 0.9.5
'''
if salt.utils.is_proxy():
return {}
grains = {}
if osdata['kernel'] == 'Linux' and os.path.exists('/sys/class/dmi/id'):
# On many Linux distributions basic firmware information is available via sysfs
# requires CONFIG_DMIID to be enabled in the Linux kernel configuration
sysfs_firmware_info = {
'biosversion': 'bios_version',
'productname': 'product_name',
'manufacturer': 'sys_vendor',
'biosreleasedate': 'bios_date',
'uuid': 'product_uuid',
'serialnumber': 'product_serial'
}
for key, fw_file in sysfs_firmware_info.items():
contents_file = os.path.join('/sys/class/dmi/id', fw_file)
if os.path.exists(contents_file):
try:
with salt.utils.fopen(contents_file, 'r') as ifile:
grains[key] = ifile.read()
if key == 'uuid':
grains['uuid'] = grains['uuid'].lower()
except (IOError, OSError) as err:
                    # PermissionError is new to Python 3, but corresponds to the EACCES and
                    # EPERM error numbers. Use those instead here for PY2 compatibility.
if err.errno == EACCES or err.errno == EPERM:
# Skip the grain if non-root user has no access to the file.
pass
elif salt.utils.which_bin(['dmidecode', 'smbios']) is not None and not (
salt.utils.is_smartos() or
( # SunOS on SPARC - 'smbios: failed to load SMBIOS: System does not export an SMBIOS table'
osdata['kernel'] == 'SunOS' and
osdata['cpuarch'].startswith('sparc')
)):
# On SmartOS (possibly SunOS also) smbios only works in the global zone
# smbios is also not compatible with linux's smbios (smbios -s = print summarized)
grains = {
'biosversion': __salt__['smbios.get']('bios-version'),
'productname': __salt__['smbios.get']('system-product-name'),
'manufacturer': __salt__['smbios.get']('system-manufacturer'),
'biosreleasedate': __salt__['smbios.get']('bios-release-date'),
'uuid': __salt__['smbios.get']('system-uuid')
}
grains = dict([(key, val) for key, val in grains.items() if val is not None])
uuid = __salt__['smbios.get']('system-uuid')
if uuid is not None:
grains['uuid'] = uuid.lower()
for serial in ('system-serial-number', 'chassis-serial-number', 'baseboard-serial-number'):
serial = __salt__['smbios.get'](serial)
if serial is not None:
grains['serialnumber'] = serial
break
elif salt.utils.which_bin(['fw_printenv']) is not None:
# ARM Linux devices expose UBOOT env variables via fw_printenv
hwdata = {
'manufacturer': 'manufacturer',
'serialnumber': 'serial#',
}
for grain_name, cmd_key in six.iteritems(hwdata):
result = __salt__['cmd.run_all']('fw_printenv {0}'.format(cmd_key))
if result['retcode'] == 0:
uboot_keyval = result['stdout'].split('=')
grains[grain_name] = _clean_value(grain_name, uboot_keyval[1])
elif osdata['kernel'] == 'FreeBSD':
# On FreeBSD /bin/kenv (already in base system)
# can be used instead of dmidecode
kenv = salt.utils.which('kenv')
if kenv:
# In theory, it will be easier to add new fields to this later
fbsd_hwdata = {
'biosversion': 'smbios.bios.version',
'manufacturer': 'smbios.system.maker',
'serialnumber': 'smbios.system.serial',
'productname': 'smbios.system.product',
'biosreleasedate': 'smbios.bios.reldate',
'uuid': 'smbios.system.uuid',
}
for key, val in six.iteritems(fbsd_hwdata):
value = __salt__['cmd.run']('{0} {1}'.format(kenv, val))
grains[key] = _clean_value(key, value)
elif osdata['kernel'] == 'OpenBSD':
sysctl = salt.utils.which('sysctl')
hwdata = {'biosversion': 'hw.version',
'manufacturer': 'hw.vendor',
'productname': 'hw.product',
'serialnumber': 'hw.serialno',
'uuid': 'hw.uuid'}
for key, oid in six.iteritems(hwdata):
value = __salt__['cmd.run']('{0} -n {1}'.format(sysctl, oid))
if not value.endswith(' value is not available'):
grains[key] = _clean_value(key, value)
elif osdata['kernel'] == 'NetBSD':
sysctl = salt.utils.which('sysctl')
nbsd_hwdata = {
'biosversion': 'machdep.dmi.board-version',
'manufacturer': 'machdep.dmi.system-vendor',
'serialnumber': 'machdep.dmi.system-serial',
'productname': 'machdep.dmi.system-product',
'biosreleasedate': 'machdep.dmi.bios-date',
'uuid': 'machdep.dmi.system-uuid',
}
for key, oid in six.iteritems(nbsd_hwdata):
result = __salt__['cmd.run_all']('{0} -n {1}'.format(sysctl, oid))
if result['retcode'] == 0:
grains[key] = _clean_value(key, result['stdout'])
elif osdata['kernel'] == 'Darwin':
grains['manufacturer'] = 'Apple Inc.'
sysctl = salt.utils.which('sysctl')
hwdata = {'productname': 'hw.model'}
for key, oid in hwdata.items():
value = __salt__['cmd.run']('{0} -b {1}'.format(sysctl, oid))
if not value.endswith(' is invalid'):
grains[key] = _clean_value(key, value)
elif osdata['kernel'] == 'SunOS' and osdata['cpuarch'].startswith('sparc'):
# Depending on the hardware model, commands can report different bits
# of information. With that said, consolidate the output from various
# commands and attempt various lookups.
data = ""
for (cmd, args) in (('/usr/sbin/prtdiag', '-v'), ('/usr/sbin/prtconf', '-vp'), ('/usr/sbin/virtinfo', '-a')):
if salt.utils.which(cmd): # Also verifies that cmd is executable
data += __salt__['cmd.run']('{0} {1}'.format(cmd, args))
data += '\n'
sn_regexes = [
re.compile(r) for r in [
r'(?im)^\s*Chassis\s+Serial\s+Number\n-+\n(\S+)', # prtdiag
r'(?im)^\s*chassis-sn:\s*(\S+)', # prtconf
r'(?im)^\s*Chassis\s+Serial#:\s*(\S+)', # virtinfo
]
]
obp_regexes = [
re.compile(r) for r in [
r'(?im)^\s*System\s+PROM\s+revisions.*\nVersion\n-+\nOBP\s+(\S+)\s+(\S+)', # prtdiag
r'(?im)^\s*version:\s*\'OBP\s+(\S+)\s+(\S+)', # prtconf
]
]
fw_regexes = [
re.compile(r) for r in [
r'(?im)^\s*Sun\s+System\s+Firmware\s+(\S+)\s+(\S+)', # prtdiag
]
]
uuid_regexes = [
re.compile(r) for r in [
r'(?im)^\s*Domain\s+UUID:\s*(\S+)', # virtinfo
]
]
manufacture_regexes = [
re.compile(r) for r in [
r'(?im)^\s*System\s+Configuration:\s*(.*)(?=sun)', # prtdiag
]
]
product_regexes = [
re.compile(r) for r in [
r'(?im)^\s*System\s+Configuration:\s*.*?sun\d\S+\s(.*)', # prtdiag
r'(?im)^\s*banner-name:\s*(.*)', # prtconf
r'(?im)^\s*product-name:\s*(.*)', # prtconf
]
]
for regex in sn_regexes:
res = regex.search(data)
if res and len(res.groups()) >= 1:
grains['serialnumber'] = res.group(1).strip().replace("'", "")
break
for regex in obp_regexes:
res = regex.search(data)
if res and len(res.groups()) >= 1:
obp_rev, obp_date = res.groups()[0:2] # Limit the number in case we found the data in multiple places
grains['biosversion'] = obp_rev.strip().replace("'", "")
grains['biosreleasedate'] = obp_date.strip().replace("'", "")
for regex in fw_regexes:
res = regex.search(data)
if res and len(res.groups()) >= 1:
fw_rev, fw_date = res.groups()[0:2]
grains['systemfirmware'] = fw_rev.strip().replace("'", "")
grains['systemfirmwaredate'] = fw_date.strip().replace("'", "")
break
for regex in uuid_regexes:
res = regex.search(data)
if res and len(res.groups()) >= 1:
grains['uuid'] = res.group(1).strip().replace("'", "")
break
for regex in manufacture_regexes:
res = regex.search(data)
if res and len(res.groups()) >= 1:
grains['manufacture'] = res.group(1).strip().replace("'", "")
break
for regex in product_regexes:
res = regex.search(data)
if res and len(res.groups()) >= 1:
grains['product'] = res.group(1).strip().replace("'", "")
break
return grains
def _smartos_computenode_data():
'''
Return useful information from a SmartOS compute node
'''
# Provides:
# vms_total
# vms_running
# vms_stopped
# sdc_version
# vm_capable
# vm_hw_virt
if salt.utils.is_proxy():
return {}
grains = {}
# *_vms grains
grains['computenode_vms_total'] = len(__salt__['cmd.run']('vmadm list -p').split("\n"))
grains['computenode_vms_running'] = len(__salt__['cmd.run']('vmadm list -p state=running').split("\n"))
grains['computenode_vms_stopped'] = len(__salt__['cmd.run']('vmadm list -p state=stopped').split("\n"))
# sysinfo derived grains
sysinfo = json.loads(__salt__['cmd.run']('sysinfo'))
grains['computenode_sdc_version'] = sysinfo['SDC Version']
grains['computenode_vm_capable'] = sysinfo['VM Capable']
if sysinfo['VM Capable']:
grains['computenode_vm_hw_virt'] = sysinfo['CPU Virtualization']
# sysinfo derived smbios grains
grains['manufacturer'] = sysinfo['Manufacturer']
grains['productname'] = sysinfo['Product']
grains['uuid'] = sysinfo['UUID']
return grains
def _smartos_zone_data():
'''
Return useful information from a SmartOS zone
'''
# Provides:
# pkgsrcversion
# imageversion
# pkgsrcpath
# zonename
# zoneid
# hypervisor_uuid
# datacenter
if salt.utils.is_proxy():
return {}
grains = {}
pkgsrcversion = re.compile('^release:\\s(.+)')
imageversion = re.compile('Image:\\s(.+)')
pkgsrcpath = re.compile('PKG_PATH=(.+)')
if os.path.isfile('/etc/pkgsrc_version'):
with salt.utils.fopen('/etc/pkgsrc_version', 'r') as fp_:
for line in fp_:
match = pkgsrcversion.match(line)
if match:
grains['pkgsrcversion'] = match.group(1)
if os.path.isfile('/etc/product'):
with salt.utils.fopen('/etc/product', 'r') as fp_:
for line in fp_:
match = imageversion.match(line)
if match:
grains['imageversion'] = match.group(1)
if os.path.isfile('/opt/local/etc/pkg_install.conf'):
with salt.utils.fopen('/opt/local/etc/pkg_install.conf', 'r') as fp_:
for line in fp_:
match = pkgsrcpath.match(line)
if match:
grains['pkgsrcpath'] = match.group(1)
if 'pkgsrcversion' not in grains:
grains['pkgsrcversion'] = 'Unknown'
if 'imageversion' not in grains:
grains['imageversion'] = 'Unknown'
if 'pkgsrcpath' not in grains:
grains['pkgsrcpath'] = 'Unknown'
grains['zonename'] = __salt__['cmd.run']('zonename')
grains['zoneid'] = __salt__['cmd.run']('zoneadm list -p | awk -F: \'{ print $1 }\'', python_shell=True)
return grains
def _zpool_data(grains):
'''
Provide grains about zpools
'''
# quickly return if windows or proxy
if salt.utils.is_windows() or 'proxyminion' in __opts__:
return {}
# quickly return if no zpool and zfs command
if not salt.utils.which('zpool'):
return {}
# collect zpool data
zpool_grains = {}
for zpool in __salt__['cmd.run']('zpool list -H -o name,size').splitlines():
zpool = zpool.split()
zpool_grains[zpool[0]] = zpool[1]
# return grain data
if len(zpool_grains.keys()) < 1:
return {}
return {'zpool': zpool_grains}
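A sketch of the `zpool list -H -o name,size` parsing above, run on fabricated command output (`-H` emits tab-separated fields with no header line); the pool names and sizes are invented.

```python
# Hypothetical `zpool list -H -o name,size` output
output = 'rpool\t39.8G\ndata\t1.81T'
zpool_grains = {}
for line in output.splitlines():
    # split() handles the tab separator; field 0 is name, field 1 is size
    name, size = line.split()
    zpool_grains[name] = size
```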
def get_server_id():
'''
Provides an integer based on the FQDN of a machine.
Useful as server-id in MySQL replication or anywhere else you'll need an ID
like this.
'''
# Provides:
# server_id
if salt.utils.is_proxy():
return {}
return {'server_id': abs(hash(__opts__.get('id', '')) % (2 ** 31))}
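The `server_id` computation above reduces to a hash folded into the signed 32-bit range MySQL's `server-id` expects. Note that on Python 3, `str` hashes are randomized per process (`PYTHONHASHSEED`), so the value is only stable within one interpreter run; the minion id below is an example.

```python
minion_id = 'linux-node1.example.com'  # example minion id
# abs(... % 2**31) guarantees a non-negative value below 2**31
server_id = abs(hash(minion_id) % (2 ** 31))
```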
def get_master():
'''
Provides the minion with the name of its master.
This is useful in states to target other services running on the master.
'''
# Provides:
# master
return {'master': __opts__.get('master', '')}
# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4