MySQL Binlog Parsing Tool Maxwell Explained
Official documentation: Reference - Maxwell's Daemon
Introduction to Maxwell
Maxwell is an application that reads the MySQL binary log (binlog) in real time and produces JSON-formatted messages, which it sends as a producer to Kafka, Kinesis, RabbitMQ, Redis, Google Cloud Pub/Sub, files, or other platforms. Common use cases include ETL, cache maintenance, collecting table-level DML metrics, incremental feeds into search engines, data migration between partitions, and binlog-based rollback schemes when splitting databases.
Maxwell mainly provides the following capabilities:
Besides Maxwell, the MySQL binlog parsers in common use today are Alibaba's canal and mysql_streamer. The three tools compare as follows:
canal is developed in Java and split into a server and a client. It has many derivative applications, stable performance, and rich functionality, but you have to write your own client to consume the data canal parses.
Maxwell's advantage over canal is simplicity: it emits data changes directly as JSON strings, so there is no client to write.
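For reference, a single-row INSERT produces a message shaped roughly like the one below; the database, table, and column values are purely illustrative, but the field names (database, table, type, ts, xid, commit, data) are Maxwell's standard output fields:

{"database":"test_db","table":"orders","type":"insert","ts":1639445993,"xid":8924,"commit":true,"data":{"id":1,"status":"paid"}}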
For Maxwell installation and deployment, see the CSDN blog post "Maxwell简介、部署、原理和使用介绍".
Production environment configuration:
Note: use the --daemon flag to run Maxwell in the background, as shown below.
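Assuming Maxwell is unpacked under /opt/maxwell and the configuration below is saved as /opt/maxwell/config.properties (both paths are illustrative), it can be started in the background with:

/opt/maxwell/bin/maxwell --config /opt/maxwell/config.properties --daemon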
- ## tl;dr config: use the info log level in production
- log_level=info
-
- ## mysql login info; the MySQL user must be able to read the binlog and to create databases and tables
- # MySQL filter settings
- filter= exclude: *.*, include: woc_school.*, exclude: *.*.create_time = *
- # mysql login info
- #host=127.0.0.1
- #user=root
- #password=root
- #output_nulls=true    # whether to include fields whose value is NULL; default true
-
- ## jdbc options [fixes timestamps read from the binlog being 8 hours behind Beijing time]
- # options to pass into the jdbc connection, given as opt=val&opt2=val2
- jdbc_options=characterEncoding=utf8mb4&autoReconnect=true&allowMultiQueries=true&useSSL=false&serverTimezone=Asia/Shanghai
- # enable this if replication_host is used
- #replication_jdbc_options=characterEncoding=utf8mb4&autoReconnect=true&allowMultiQueries=true&useSSL=false&serverTimezone=Asia/Shanghai
- # enable this if schema_host is used
- #schema_jdbc_options=characterEncoding=utf8mb4&autoReconnect=true&allowMultiQueries=true&useSSL=false&serverTimezone=Asia/Shanghai
-
- # rabbitmq producer settings
- producer=rabbitmq
- rabbitmq_host=1600123129785135.mq-amqp.cn-hangzhou-a.aliyuncs.com
- rabbitmq_user=MjoxNjAwMTIzMTI5Nzg1MTM1OkxUQUk1dEhObnVBN2NxbTJHeEUyMXBXSw==
- rabbitmq_pass=OUJBNTc3NzY2OUE1MUEyOUUzODQzN0Q2NjhCNTU5QzY2RjlDNkZDMToxNjM5NDQ1OTkzMTQ0
- rabbitmq_port=5672
- rabbitmq_virtual_host=canal-vhost
- rabbitmq_exchange=canal-exchange-routing-key
- rabbitmq_exchange_type=fanout
- rabbitmq_exchange_durable=true
- rabbitmq_exchange_autodelete=false
- ## set the routing_key in db.tbl format so that, when declaring queues, rows from different tables can be routed to different queues, improving parallel consumption without breaking per-table ordering
- #rabbitmq_routing_key_template=%db%.%table%
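- # note: routing keys only affect delivery on direct/topic exchanges; with the fanout
- # exchange configured above, every bound queue receives every message regardless of key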
-
-
- # *** general ***
- # choose where to produce data to. stdout|file|kafka|kinesis|pubsub|sqs|rabbitmq|redis
- #producer=kafka
-
- # set the log level. note that you can configure things further in log4j2.xml
- #log_level=DEBUG # [DEBUG, INFO, WARN, ERROR]
-
- # if set, maxwell will look up the scoped environment variables, strip off the prefix and inject the configs
- #env_config_prefix=MAXWELL_
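- # example (illustrative): with env_config_prefix=MAXWELL_, an environment variable such as
- # MAXWELL_HOST=db1.example.com would be injected as the config option host=db1.example.com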
-
- # *** mysql ***
-
- # mysql host to connect to
- #host=hostname
-
- # mysql port to connect to
- #port=3306
-
- # mysql user to connect as. This user must have REPLICATION SLAVE permissions,
- # as well as full access to the `maxwell` (or schema_database) database
- #user=maxwell
-
- # mysql password
- #password=maxwell
-
- # options to pass into the jdbc connection, given as opt=val&opt2=val2
- #jdbc_options=opt1=100&opt2=hello
-
- # name of the mysql database where maxwell keeps its own state
- #schema_database=maxwell
-
- # whether to use GTID or not for positioning
- #gtid_mode=true
-
- # maxwell will capture an initial "base" schema containing all table and column information,
- # and then keep delta-updates on top of that schema. If you have an inordinate amount of DDL changes,
- # the table containing delta changes will grow unbounded (and possibly too large) over time. If you
- # enable this option Maxwell will periodically compact its tables.
- #max_schemas=10000
-
- # SSL/TLS options
- # To use VERIFY_CA or VERIFY_IDENTITY, you must set the trust store with Java opts:
- # -Djavax.net.ssl.trustStore=<truststore> -Djavax.net.ssl.trustStorePassword=<password>
- # or import the MySQL cert into the global Java cacerts.
- # MODE must be one of DISABLED, PREFERRED, REQUIRED, VERIFY_CA, or VERIFY_IDENTITY
- #
- # turns on ssl for the maxwell-store connection, other connections inherit this setting unless specified
- #ssl=DISABLED
- # for binlog-connector
- #replication_ssl=DISABLED
- # for the schema-capture connection, if used
- #schema_ssl=DISABLED
-
- # maxwell can optionally replicate from a different server than where it stores
- # schema and binlog position info. Specify that different server here:
-
- #replication_host=other
- #replication_user=username
- #replication_password=password
- #replication_port=3306
-
- # This may be useful when using MaxScale's binlog mirroring host.
- # Specifies that Maxwell should capture schema from a different server than
- # it replicates from:
-
- #schema_host=other
- #schema_user=username
- #schema_password=password
- #schema_port=3306
-
-
- # *** output format ***
-
- # records include binlog position (default false)
- #output_binlog_position=true
-
- # records include a gtid string (default false)
- #output_gtid_position=true
-
- # records include fields with null values (default true). If this is false,
- # fields where the value is null will be omitted entirely from output.
- #output_nulls=true
-
- # records include server_id (default false)
- #output_server_id=true
-
- # records include thread_id (default false)
- #output_thread_id=true
-
- # records include schema_id (default false)
- #output_schema_id=true
-
- # records include the row query; the binlog option "binlog_rows_query_log_events" must be enabled (default false)
- #output_row_query=true
-
- # DML records include list of values that make up a row's primary key (default false)
- #output_primary_keys=true
-
- # DML records include list of columns that make up a row's primary key (default false)
- #output_primary_key_columns=true
-
- # records include commit and xid (default true)
- #output_commit_info=true
-
- # This controls whether maxwell will output JSON information containing
- # DDL (ALTER/CREATE TABLE/etc.) information. (default: false)
- # See also: ddl_kafka_topic
- #output_ddl=true
-
- # turns underscore naming style of fields to camel case style in JSON output
- # default is none, which means the field name in JSON is the exact column name in the MySQL table
- #output_naming_strategy=underscore_to_camelcase
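- # example (illustrative): with underscore_to_camelcase enabled, a column named user_name
- # would appear in the JSON output as userName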
-
- # *** kafka ***
-
- # list of kafka brokers
- #kafka.bootstrap.servers=hosta:9092,hostb:9092
-
- # kafka topic to write to
- # this can be static, e.g. 'maxwell', or dynamic, e.g. namespace_%{database}_%{table}
- # in the latter case 'database' and 'table' will be replaced with the values for the row being processed
- #kafka_topic=maxwell
-
- # alternative kafka topic to write DDL (alter/create/drop) to. Defaults to kafka_topic
- #ddl_kafka_topic=maxwell_ddl
-
- # hash function to use. "default" is just the JVM's 'hashCode' function.
- #kafka_partition_hash=default # [default, murmur3]
-
- # how maxwell writes its kafka key.
- #
- # 'hash' looks like:
- # {"database":"test","table":"tickets","pk.id":10001}
- #
- # 'array' looks like:
- # ["test","tickets",[{"id":10001}]]
- #
- # default: "hash"
- #kafka_key_format=hash # [hash, array]
-
- # extra kafka options. Anything prefixed "kafka." will get
- # passed directly into the kafka-producer's config.
-
- # a few defaults.
- # These are 0.11-specific. They may or may not work with other versions.
- kafka.compression.type=snappy
- kafka.retries=0
- kafka.acks=1
- #kafka.batch.size=16384
-
-
- # kafka+SSL example
- # kafka.security.protocol=SSL
- # kafka.ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
- # kafka.ssl.truststore.password=test1234
- # kafka.ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
- # kafka.ssl.keystore.password=test1234
- # kafka.ssl.key.password=test1234
-
- # controls a heuristic check that maxwell may use to detect messages that
- # we never heard back from. The heuristic check looks for "stuck" messages, and
- # will timeout maxwell after this many milliseconds.
- #
- # See https://github.com/zendesk/maxwell/blob/master/src/main/java/com/zendesk/maxwell/producer/InflightMessageList.java
- # if you really want to get into it.
- #producer_ack_timeout=120000 # default 0
-
-
- # *** partitioning ***
-
- # What part of the data do we partition by?
- #producer_partition_by=database # [database, table, primary_key, transaction_id, thread_id, column]
-
- # specify what fields to partition by when using producer_partition_by=column
- # comma-separated list.
- #producer_partition_columns=id,foo,bar
-
- # when using producer_partition_by=column, partition by this when
- # the specified column(s) don't exist.
- #producer_partition_by_fallback=database
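- # example (illustrative): partitioning by table keeps all events for a given table in a
- # single partition, preserving their relative order while spreading tables across partitions
- #producer_partition_by=table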
-
- # *** kinesis ***
-
- #kinesis_stream=maxwell
-
- # AWS places a 256 unicode character limit on the max key length of a record
- # http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html
- #
- # Setting this option to true enables hashing the key with the md5 algorithm
- # before we send it to kinesis so all the keys work within the key size limit.
- # Values: true, false
- # Default: false
- #kinesis_md5_keys=true
-
- # *** sqs ***
-
- #sqs_queue_uri=aws_sqs_queue_uri
-
- # The sqs producer needs AWS credentials configured in the default
- # location and file format. See the link below for how to do this.
- # http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
-
- # *** pub/sub ***
-
- #pubsub_project_id=maxwell
- #pubsub_topic=maxwell
- #ddl_pubsub_topic=maxwell_ddl
-
- # *** rabbit-mq ***
-
- #rabbitmq_host=rabbitmq_hostname
- #rabbitmq_port=5672
- #rabbitmq_user=guest
- #rabbitmq_pass=guest
- #rabbitmq_virtual_host=/
- #rabbitmq_exchange=maxwell
- #rabbitmq_exchange_type=fanout
- #rabbitmq_exchange_durable=false
- #rabbitmq_exchange_autodelete=false
- #rabbitmq_routing_key_template=%db%.%table%
- #rabbitmq_message_persistent=false
- #rabbitmq_declare_exchange=true
-
- # *** redis ***
-
- #redis_host=redis_host
- #redis_port=6379
- #redis_auth=redis_auth
- #redis_database=0
-
- # name of pubsub/list/whatever key to publish to
- #redis_key=maxwell
-
- # this can be static, e.g. 'maxwell', or dynamic, e.g. namespace_%{database}_%{table}
- #redis_pub_channel=maxwell
- # this can be static, e.g. 'maxwell', or dynamic, e.g. namespace_%{database}_%{table}
- #redis_list_key=maxwell
- # this can be static, e.g. 'maxwell', or dynamic, e.g. namespace_%{database}_%{table}
- # Valid values for redis_type = pubsub|lpush. Defaults to pubsub
-
- #redis_type=pubsub
-
- # *** custom producer ***
-
- # the fully qualified class name for custom ProducerFactory
- # see the following link for more details.
- # http://maxwells-daemon.io/producers/#custom-producer
- #custom_producer.factory=
-
- # custom producer properties can be configured using the custom_producer.* property namespace
- #custom_producer.custom_prop=foo
-
- # *** filtering ***
-
- # filter rows out of Maxwell's output. Comma-separated list of filter-rules, evaluated in sequence.
- # A filter rule is:
- # <type> ":" <db> "." <tbl> [ "." <col> "=" <col_val> ]
- # type ::= [ "include" | "exclude" | "blacklist" ]
- # db ::= [ "/regexp/" | "string" | "`string`" | "*" ]
- # tbl ::= [ "/regexp/" | "string" | "`string`" | "*" ]
- # col_val ::= "column_name"
- #
- # See http://maxwells-daemon.io/filtering for more details
- #
- #filter= exclude: *.*, include: foo.*, include: bar.baz, include: foo.bar.col_eg = "value_to_match"
-
- # javascript filter
- # maxwell can run a bit of javascript for each row if you need very custom filtering/data munging.
- # See http://maxwells-daemon.io/filtering/#javascript_filters for more details
- #
- #javascript=/path/to/javascript_filter_file
-
- # *** encryption ***
-
- # Encryption mode. Possible values are none, data, and all. (default none)
- #encrypt=none
-
- # Specify the secret key to be used
- #secret_key=RandomInitVector
-
- # *** monitoring ***
-
- # Maxwell collects metrics via dropwizard. These can be exposed through the
- # base logging mechanism (slf4j), JMX, HTTP or pushed to Datadog.
- # Options: [jmx, slf4j, http, datadog]
- # Supplying multiple is allowed.
- #metrics_type=jmx,slf4j
-
- # The prefix maxwell will apply to all metrics
- #metrics_prefix=MaxwellMetrics # default MaxwellMetrics
-
- # Enable (dropwizard) JVM metrics, default false
- #metrics_jvm=true
-
- # When metrics_type includes slf4j this is the frequency metrics are emitted to the log, in seconds
- #metrics_slf4j_interval=60
-
- # When metrics_type includes http or diagnostic is enabled, this is the port the server will bind to.
- #http_port=8080
-
- # When metrics_type includes http or diagnostic is enabled, this is the http path prefix, default /.
- #http_path_prefix=/some/path/
-
- # ** The following are Datadog specific. **
- # When metrics_type includes datadog this is the way metrics will be reported.
- # Options: [udp, http]
- # Supplying multiple is not allowed.
- #metrics_datadog_type=udp
-
- # datadog tags that should be supplied
- #metrics_datadog_tags=tag1:value1,tag2:value2
-
- # The frequency metrics are pushed to datadog, in seconds
- #metrics_datadog_interval=60
-
- # required if metrics_datadog_type = http
- #metrics_datadog_apikey=API_KEY
-
- # required if metrics_datadog_type = udp
- #metrics_datadog_host=localhost # default localhost
- #metrics_datadog_port=8125 # default 8125
-
- # Maxwell exposes an http diagnostic endpoint that checks the following in parallel:
- # 1. binlog replication lag
- # 2. producer (currently kafka) lag
-
- # To enable Maxwell diagnostic
- #http_diagnostic=true # default false
-
- # Diagnostic check timeout in milliseconds, required if diagnostic = true
- #http_diagnostic_timeout=10000 # default 10000
-
- # *** misc ***
-
- # maxwell's bootstrapping functionality has a couple of modes.
- #
- # In "async" mode, maxwell will output the replication stream while it
- # simultaneously outputs the database to the topic. Note that it won't
- # output replication data for any tables it is currently bootstrapping -- this
- # data will be buffered and output after the bootstrap is complete.
- #
- # In "sync" mode, maxwell stops the replication stream while it
- # outputs bootstrap data.
- #
- # async mode keeps ops live while bootstrapping, but carries the possibility of
- # data loss (due to buffering transactions). sync mode is safer but you
- # have to stop replication.
- #bootstrapper=async [sync, async, none]
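- # a bootstrap of an existing table can be requested with the bundled script, e.g.
- #   bin/maxwell-bootstrap --database some_db --table some_table
- # (database and table names here are illustrative); see http://maxwells-daemon.io/bootstrapping
- # for the full options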
-
- # output filename when using the "file" producer
- #output_file=/path/to/file
