First Experience with the MariaDB Spider Sharding Engine
2022-07-22 19:44:00 【Yabingshi】
1. Spider Overview
The Spider storage engine is a storage engine with built-in sharding. It supports partitioning and XA transactions, and lets tables on different MariaDB instances be handled as if they were on the same instance.
When you create a table with the Spider storage engine, the table is linked to a table on a remote server, and you operate on it as if it were a local table. The remote table can use any storage engine. The table link is implemented as a connection from the local MariaDB server to the remote MariaDB server. The engine is completely transparent to the application.
Server variables to set when using Spider:
Starting with MariaDB 10.3.4, if Spider is used together with replication, you can extend the list of transaction errors that are retried by setting slave_transaction_retry_errors to the values below, so that transient network problems do not stop replication:
- 1158: Got an error reading communication packets
- 1159: Got timeout reading communication packets
- 1160: Got an error writing communication packets
- 1161: Got timeout writing communication packets
- 1429: Unable to connect to foreign data source
- 2013: Lost connection to MySQL server during query
- 12701: Remote MySQL server has gone away
Modify my.cnf and add:
slave_transaction_retry_errors="1158,1159,1160,1161,1429,2013,12701"
In MariaDB 10.4.5 and later, these errors are included in the default value.
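After restarting the server, you can confirm that the setting took effect (a quick check; the output is abbreviated here):

```sql
SHOW GLOBAL VARIABLES LIKE 'slave_transaction_retry_errors';
```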
Let's start with two scenarios that we DBAs frequently encounter:
Scenario 1: Two tables live on different instances and you want to join them on a column for a statistic, or you want to pull tables scattered across instances into one instance for some queries.
Scenario 2: Because of a capacity or access-performance bottleneck, a large database, a large table, or a heavily accessed table is split up and distributed across different instances.
These two scenarios cover the vertical and horizontal splitting that DBAs commonly deal with, and they usually bring two difficulties:
1. Accessing these tables requires extra routing rules, which is complicated.
2. Summarizing or aggregating the data is cumbersome.
Advantages of Spider:
a. Completely transparent to the application; no application changes are needed.
b. Easy to scale out; it resolves single-MySQL performance and storage bottlenecks.
c. No restrictions on the back-end storage engine.
d. Indirectly provides both vertical and horizontal splitting.
The connection between Spider and the back-end databases can be a standalone table link or be based on partitioned tables; partitioned tables support hash, range, list, and other partitioning algorithms.
e. Fully compatible with the MySQL protocol.
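For instance, the same pattern shown later with hash partitioning also works with range partitioning. A hypothetical sketch, assuming two servers named backend1 and backend2 have already been defined with CREATE SERVER:

```sql
-- Hypothetical example: route rows by id range instead of by hash.
-- Server names and boundaries are placeholders.
CREATE TABLE orders_by_range (
    id INT NOT NULL,
    PRIMARY KEY (id)
) ENGINE=SPIDER COMMENT='wrapper "mysql", table "orders"'
PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (1000000) COMMENT = 'srv "backend1"',
    PARTITION p1 VALUES LESS THAN MAXVALUE  COMMENT = 'srv "backend2"'
);
```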
Because of MySQL's pluggable storage engine architecture, the server layer is responsible for SQL parsing, SQL optimization, and database object management (views, stored procedures, and so on), while the storage engine layer is responsible for data storage, index support, transactions, buffers, etc.; the two layers interact through the handler interface. Since SQL parsing, optimization, and execution are all handled by the server layer, almost any type of SQL access is supported.
Disadvantages of Spider:
a. Spider tables themselves do not support the query cache or full-text indexes; however, you can add full-text indexes on the back-end databases.
b. A physical backup of the Spider node cannot back up the data, because the data itself is stored on the back ends; instead, take a physical backup of each back-end MySQL instance.
c. Spider itself is a single point of failure; you need to build your own failover, for example with a VIP.
d. The extra network hop costs some performance; cross-partition and cross-table queries in particular are slower.
2. Installing Spider
There are two ways to install it:
Method 1:
MariaDB [(none)]> INSTALL SONAME "ha_spider";
Query OK, 0 rows affected (0.044 sec)
Method 2:
mysql -u root -p < /usr/share/mysql/install_spider.sql
# Verify that the SPIDER engine is now available
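Either of the following shows whether the engine was registered:

```sql
SHOW ENGINES;  -- SPIDER should appear with Support = YES
-- or, more selectively:
SELECT ENGINE, SUPPORT FROM information_schema.ENGINES WHERE ENGINE = 'SPIDER';
```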
3. Spider Usage Examples
3.1 One Spider node with multiple back-end nodes
Experimental environment:
IP | Role | Remarks | OS / configuration
192.168.144.249 | Spider node + sysbench | Connects to the two back-end nodes | CentOS 7.6, 1 GB memory
192.168.144.250 | Backend1 | Back-end node 1 |
192.168.144.251 | Backend2 | Back-end node 2 |
3.1.1 Create the table on the back-end nodes
# Create the same table on both back-end nodes
CREATE DATABASE backend;
CREATE TABLE backend.sbtest1 (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
3.1.2 Create the table on the Spider node
3.1.2.1 Define the servers
CREATE SERVER backend1
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.144.250',
DATABASE 'backend',
USER 'root',
PASSWORD '123456',
PORT 3306
);
CREATE SERVER backend2
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.144.251',
DATABASE 'backend',
USER 'root',
PASSWORD '123456',
PORT 3306
);
select * from mysql.servers;
3.1.2.2 Create the Spider table
CREATE DATABASE IF NOT EXISTS backend;
CREATE TABLE backend.sbtest1
(
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", table "sbtest1"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1"',
PARTITION pt2 COMMENT = 'srv "backend2"'
) ;
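To check which back end a given id will be routed to, you can ask the optimizer which partition a query touches (a sketch; the partition names match pt1 and pt2 above, i.e. backend1 and backend2):

```sql
-- The `partitions` column of the output names the partition(s) scanned.
EXPLAIN PARTITIONS SELECT * FROM backend.sbtest1 WHERE id = 1;
```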
3.1.3 Verify the sharding
# Insert data on the Spider node
MariaDB [(none)]> use backend;
MariaDB [backend]> insert into sbtest1(id,k,c,pad) values(1,1,'a','a');
Query OK, 1 row affected (0.006 sec)
MariaDB [backend]> insert into sbtest1(id,k,c,pad) values(2,2,'a','a');
Query OK, 1 row affected (0.007 sec)
select * from backend.sbtest1;
# Query on the back-end nodes: the two rows are found distributed across the two back-end nodes
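To see the placement yourself, run the same query on each back end; with the two rows above, each node holds one row (which row lands where is decided by the KEY hash):

```sql
-- Run on 192.168.144.250 (Backend1) and on 192.168.144.251 (Backend2):
SELECT * FROM backend.sbtest1;
-- Each back end returns only its own share of the rows;
-- the Spider node returns the union of both.
```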
/*
During testing I found that Spider in version 10.3.18 was unstable: the results of SELECT * and COUNT(*) were not accurate. Sometimes they reflected the aggregated data, sometimes only the first node's data, and sometimes only the second node's data; I could not figure out why. This behavior is very strange.
I did not run into this problem when testing on 10.5.7.
*/
Reference: Spider - MariaDB Knowledge Base