shardingsphere-jdbc Horizontal Table Sharding Learning Notes (Part 2)

The configuration splits into logical datasource definitions and physical (real) datasource definitions. For each logical table, you define its sharding rules, and if a distributed key needs to be generated, you define the key generation algorithm as well. These correspond to the `spring.shardingsphere.datasource.` prefix and the `spring.shardingsphere.rules.sharding` prefix respectively.
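A minimal `application.yml` sketch of the two prefixes, assuming ShardingSphere 5.x with the Spring Boot starter (the datasource, table, and algorithm names `ds0`/`ds1`/`t_order`/`t-order-inline` are hypothetical):

```yaml
spring:
  shardingsphere:
    datasource:                           # physical datasources
      names: ds0,ds1
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/demo_ds_0
        username: root
        password: root
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/demo_ds_1
        username: root
        password: root
    rules:
      sharding:
        tables:
          t_order:                        # logical table
            actual-data-nodes: ds$->{0..1}.t_order_$->{0..1}
            table-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: t-order-inline
            key-generate-strategy:        # distributed key generation
              column: order_id
              key-generator-name: snowflake
        sharding-algorithms:
          t-order-inline:
            type: INLINE
            props:
              algorithm-expression: t_order_$->{order_id % 2}
        key-generators:
          snowflake:
            type: SNOWFLAKE
```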
Note that with SNOWFLAKE the database column type must be BIGINT; INT is too small to hold the generated IDs.
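Snowflake IDs are 64-bit long values, so the column has to be 64 bits wide. A DDL sketch (table and column names hypothetical):

```sql
-- Snowflake IDs are 64-bit longs; a 32-bit INT overflows, so use BIGINT.
CREATE TABLE t_order_0 (
    order_id BIGINT NOT NULL PRIMARY KEY,
    user_id  INT    NOT NULL,
    status   VARCHAR(32)
);
```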
Startup error:

```
***************************
APPLICATION FAILED TO START
***************************

Description:

An attempt was made to call a method that does not exist. The attempt was made from the following location:

    org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1.<init>(ShardingSphereYamlConstructor.java:44)

The following method did not exist:

    'void org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1.setCodePointLimit(int)'

The calling method's class, org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1, was loaded from the following location:

    jar:file:/.m2/repository/org/apache/shardingsphere/shardingsphere-infra-util/5.2.1/shardingsphere-infra-util-5.2.1.jar!/org/apache/shardingsphere/infra/util/yaml/constructor/ShardingSphereYamlConstructor$1.class

The called method's class, org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1, is available from the following locations:

    jar:file:/.m2/repository/org/apache/shardingsphere/shardingsphere-infra-util/5.2.1/shardingsphere-infra-util-5.2.1.jar!/org/apache/shardingsphere/infra/util/yaml/constructor/ShardingSphereYamlConstructor$1.class

The called method's class hierarchy was loaded from the following locations:

    null: file:/.m2/repository/org/apache/shardingsphere/shardingsphere-infra-util/5.2.1/shardingsphere-infra-util-5.2.1.jar
    org.yaml.snakeyaml.LoaderOptions: file:/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar

Action:

Correct the classpath of your application so that it contains a single, compatible version of org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1
```

This is clearly a dependency conflict, centered on this piece of code:
```java
public ShardingSphereYamlConstructor(final Class<?> rootClass) {
    super(rootClass, new LoaderOptions() {
        {
            setCodePointLimit(Integer.MAX_VALUE);
        }
    });
    ShardingSphereYamlConstructFactory.getInstances().forEach(each -> typeConstructs.put(each.getType(), each));
    ShardingSphereYamlShortcutsFactory.getAllYamlShortcuts().forEach((key, value) -> addTypeDescription(new TypeDescription(value, key)));
    this.rootClass = rootClass;
}
```

It is a snakeyaml version conflict: the `LoaderOptions` in the version actually on the classpath has no `setCodePointLimit` method. Spring Boot's managed dependency resolves to snakeyaml 1.30; explicitly depending on 1.33 fixes it:
```xml
<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>1.33</version>
</dependency>
```

Errors caused by misconfiguration come in many varieties, for example:

  • DataNodesMissedWithShardingTableException
  • ShardingRuleNotFoundException
  • InconsistentShardingTableMetaDataException
and so on. Startup fails outright, because the configuration is read and parsed at startup and parsing throws.
For these, compare the specific error against the corresponding configuration.
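For illustration, one typical kind of mismatch that fails at rule-parsing time (names hypothetical; the exact exception thrown depends on the mismatch): the `actual-data-nodes` expression references a datasource that was never declared under `datasource.names`.

```yaml
# Hypothetical broken fragment: ds2 is referenced but never defined,
# so ShardingSphere cannot resolve the data nodes for t_order at startup.
spring:
  shardingsphere:
    datasource:
      names: ds0,ds1
    rules:
      sharding:
        tables:
          t_order:
            actual-data-nodes: ds2.t_order_$->{0..1}   # ds2 does not exist
```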
What is a bit odd is that some errors are printed without a detailed message. For example:
```
Caused by: org.apache.shardingsphere.sharding.exception.metadata.DataNodesMissedWithShardingTableException: null
    at org.apache.shardingsphere.sharding.rule.TableRule.lambda$checkRule$4(TableRule.java:246) ~[shardingsphere-sharding-core-5.2.1.jar:5.2.1]
    at org.apache.shardingsphere.infra.util.exception.ShardingSpherePreconditions.checkState(ShardingSpherePreconditions.java:41) ~[shardingsphere-infra-util-5.2.1.jar:5.2.1]
    at org.apache.shardingsphere.sharding.rule.TableRule.checkRule(TableRule.java:245) ~[shardingsphere-sharding-core-5.2.1.jar:5.2.1]
```

Looking at the source, the base exception class never calls `super(reason)`, so the message is null. This has already been fixed on the master branch:
```java
public ShardingSphereSQLException(final SQLState sqlState, final int typeOffset, final int errorCode, final String reason, final Object... messageArguments) {
    this(sqlState.getValue(), typeOffset, errorCode, reason, messageArguments);
}

public ShardingSphereSQLException(final String sqlState, final int typeOffset, final int errorCode, final String reason, final Object... messageArguments) {
    this.sqlState = sqlState;
    vendorCode = typeOffset * 10000 + errorCode;
    this.reason = null == reason ? null : String.format(reason, messageArguments);
    // missing super(reason) here
}
```

A database-generated auto-increment key cannot be used as the route (sharding) key, but a distributed generated key can. This is covered in the FAQ; I hit this error because I misconfigured the distributed key at first.
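The null-message effect is easy to reproduce with a minimal sketch (hypothetical class, not ShardingSphere code): an exception that stores its reason in a field but never passes it to `super` ends up with a null `getMessage()`, which is exactly why the log prints `...Exception: null`.

```java
// Hypothetical reproduction: storing the reason in a field without calling
// super(reason) leaves Throwable's detail message null.
public class ReasonlessException extends RuntimeException {

    private final String reason;

    public ReasonlessException(final String reason, final Object... messageArguments) {
        // bug: no super(...) call, so Throwable's message stays null
        this.reason = null == reason ? null : String.format(reason, messageArguments);
    }

    public String getReason() {
        return reason;
    }

    public static void main(String[] args) {
        ReasonlessException ex = new ReasonlessException("Cannot find data nodes of table '%s'", "t_order");
        System.out.println(ex.getMessage()); // prints "null": the reason never reached Throwable
        System.out.println(ex.getReason());  // prints "Cannot find data nodes of table 't_order'"
    }
}
```

Adding `super(this.reason)`-style propagation in the constructor is all the fix amounts to.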
The original FAQ entry (translated):

[Sharding] Besides its built-in distributed auto-increment primary key, can ShardingSphere also support a native database auto-increment primary key?

Answer: Yes, it can. But there is a restriction on native auto-increment keys: one cannot simultaneously be used as the sharding key. Since ShardingSphere does not know the table schema, and a native auto-increment key is not contained in the original SQL, it cannot parse that column as a sharding column. If the auto-increment key is not the sharding key, there is nothing to worry about and it is returned normally; if it is also used as the sharding key, ShardingSphere cannot resolve its sharding value, the SQL gets routed to multiple tables, and application correctness suffers. The precondition for returning a native auto-increment key is that the INSERT SQL ultimately routes to a single table; for an INSERT SQL that routes to multiple tables, the auto-increment key returns zero.
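In practice, if the ID column is also the sharding column, let ShardingSphere generate it instead of the database. A sketch of the relevant rule fragment (table, datasource, and algorithm names hypothetical):

```yaml
# Let ShardingSphere generate order_id via SNOWFLAKE so the same column can
# safely serve as the sharding column; a DB auto-increment value is absent
# from the INSERT SQL and so could not be used for routing.
spring:
  shardingsphere:
    rules:
      sharding:
        tables:
          t_order:
            actual-data-nodes: ds0.t_order_$->{0..1}
            table-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: t-order-inline
            key-generate-strategy:
              column: order_id
              key-generator-name: snowflake
        sharding-algorithms:
          t-order-inline:
            type: INLINE
            props:
              algorithm-expression: t_order_$->{order_id % 2}
        key-generators:
          snowflake:
            type: SNOWFLAKE
```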
