As is well known, Redis executed commands on a single thread before version 6.0. In a project with heavy read and write traffic, a single Redis instance can be monopolized by one workload for long stretches, which introduces latency for other business operations. To mitigate this, some projects run several Redis instances, each dedicated to a different business domain or usage scenario; for example, the data written by an IoT gateway can live on its own Redis instance while the rest of the business shares another. Multiple instances also help with capacity: Redis is an in-memory store, so memory is its bottleneck. When the data volume grows, a single instance may no longer hold everything and performance degrades, whereas spreading the data across several instances improves overall throughput.
The consequence is that, in some scenarios, a single application needs to access the data of two (or more) of these Redis instances at the same time. That is exactly the problem this article solves.
This article addresses it by writing a Redis multi-datasource starter component that supports multiple Redis data sources and can be configured for sentinel mode, Cluster mode, or standalone mode. If you only need sentinel mode for a single instance, see my earlier post 《SpringBoot Redis 使用Lettuce和Jedis配置哨兵模式》.
The dependencies are listed below; some may be unnecessary for your project, so trim them as needed. You also need to use the Spring Boot parent POM.
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-json</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-configuration-processor</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>transmittable-thread-local</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-pool2</artifactId>
    </dependency>
</dependencies>
The next setting is critical: it designates which instance acts as the primary data source.
# Example (must match the property key read by CustomRedisConfig)
customer.primary.redis.key=user
The following settings are generic and apply to every connection mode. If they are omitted, the Spring Boot defaults are used.
spring.redis.xxx.timeout = 3000
spring.redis.xxx.maxTotal = 50
spring.redis.xxx.maxIdle = 50
spring.redis.xxx.minIdle = 2
spring.redis.xxx.maxWaitMillis = 10000
spring.redis.xxx.testOnBorrow = false
# Redis instance 1: used for the user domain, identifier "user"
spring.redis.user.host = 127.0.0.1
spring.redis.user.port = 6380
spring.redis.user.password = <your-password>
spring.redis.user.database = 0
# Redis instance 2: used for the IoT domain
spring.redis.iot.host = 127.0.0.1
spring.redis.iot.port = 6390
spring.redis.iot.password = <your-password>
spring.redis.iot.database = 0
# Redis instance 3: used for xxx
spring.redis.xxx.host = 127.0.0.1
spring.redis.xxx.port = 6390
spring.redis.xxx.password = <your-password>
spring.redis.xxx.database = 0
For multiple Redis instances, repeat the block below once per instance.
spring.redis.xxx1.sentinel.master=mymaster1
spring.redis.xxx1.sentinel.nodes=ip:port,ip:port
spring.redis.xxx1.password = bD945aAfeb422E22AbAdFb9D2a22bEDd
spring.redis.xxx1.database = 0
spring.redis.xxx1.timeout = 3000
# Second instance
spring.redis.xxx2.sentinel.master=mymaster2
spring.redis.xxx2.sentinel.nodes=ip:port,ip:port
spring.redis.xxx2.password = bD945aAfeb422E22AbAdFb9D2a22bEDd
spring.redis.xxx2.database = 0
spring.redis.xxx2.timeout = 3000
spring.redis.xxx1.cluster.nodes=ip1:port,ip2:port,ip3:port,ip4:port,ip5:port,ip6:port
spring.redis.xxx1.cluster.max-redirects=5
spring.redis.xxx1.password = <your-password>
spring.redis.xxx1.timeout = 3000
Based on the configuration entries above, create a RedisTemplate for each Redis data source.
The main idea is as follows.
1. Define a static Map named redis that stores the Redis configuration parameters of every instance (identifier -> property map):

// Static Map variable redis, used to store the Redis configuration parameters
protected static Map<String, Map<String, Object>> redis = new HashMap<>();
2. For each instance, build the connection configuration that matches its mode, i.e. standalone, sentinel, or cluster (a sketch of these builders follows below):

private RedisStandaloneConfiguration buildStandaloneConfig(Map<String, Object> param) {
    // ... omitted
}

private RedisSentinelConfiguration buildSentinelConfig(Map<String, Object> param) {
    // ... omitted
}

private RedisClusterConfiguration buildClusterConfig(Map<String, Object> param) {
    // ... omitted
}
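The bodies of these three builders are omitted in the original post. Below is a minimal sketch of what they might look like, intended to sit inside CustomRedisConfig (whose imports already cover these types). It assumes the flat property keys defined in CustomRedisConfigConstant further down and that all values arrive as plain strings; the real implementation, which also uses an AddressUtils helper, may differ.

// Sketch only: builds the three Spring Data Redis configuration types from the bound property map.
private RedisStandaloneConfiguration buildStandaloneConfig(Map<String, Object> param) {
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
    config.setHostName(String.valueOf(param.get(CustomRedisConfigConstant.REDIS_HOST)));
    config.setPort(Integer.parseInt(String.valueOf(param.get(CustomRedisConfigConstant.REDIS_PORT))));
    config.setDatabase(Integer.parseInt(String.valueOf(param.getOrDefault(CustomRedisConfigConstant.REDIS_DATABASE, "0"))));
    config.setPassword(RedisPassword.of(String.valueOf(param.get(CustomRedisConfigConstant.REDIS_PASSWORD))));
    return config;
}

private RedisSentinelConfiguration buildSentinelConfig(Map<String, Object> param) {
    String master = String.valueOf(param.get(CustomRedisConfigConstant.REDIS_SENTINEL_MASTER));
    // "ip:port,ip:port" -> set of sentinel nodes
    Set<String> nodes = new HashSet<>(Arrays.asList(
            String.valueOf(param.get(CustomRedisConfigConstant.REDIS_SENTINEL_NODES)).split(",")));
    RedisSentinelConfiguration config = new RedisSentinelConfiguration(master, nodes);
    config.setDatabase(Integer.parseInt(String.valueOf(param.getOrDefault(CustomRedisConfigConstant.REDIS_DATABASE, "0"))));
    config.setPassword(RedisPassword.of(String.valueOf(param.get(CustomRedisConfigConstant.REDIS_PASSWORD))));
    return config;
}

private RedisClusterConfiguration buildClusterConfig(Map<String, Object> param) {
    List<String> nodes = Arrays.asList(
            String.valueOf(param.get(CustomRedisConfigConstant.REDIS_CLUSTER_NODES)).split(","));
    RedisClusterConfiguration config = new RedisClusterConfiguration(nodes);
    config.setMaxRedirects(Integer.parseInt(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_CLUSTER_MAX_REDIRECTS, "5"))));
    config.setPassword(RedisPassword.of(String.valueOf(param.get(CustomRedisConfigConstant.REDIS_PASSWORD))));
    return config;
}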
3. Build a Lettuce connection factory for each instance from its connection configuration and pool settings:

public RedisConnectionFactory buildLettuceConnectionFactory(String redisKey, Map<String, Object> param, GenericObjectPoolConfig genericObjectPoolConfig) {
    // ...
}
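The factory method is likewise elided. A hedged sketch follows, again meant to live inside CustomRedisConfig. It assumes the timeout key from the constants class and that the connection mode is detected from which keys are present; the real source may decide differently.

// Sketch only: picks the configuration type based on which keys are present,
// wraps it in a pooled Lettuce client configuration, and returns the factory.
public RedisConnectionFactory buildLettuceConnectionFactory(String redisKey,
                                                            Map<String, Object> param,
                                                            GenericObjectPoolConfig genericObjectPoolConfig) {
    long timeout = Long.parseLong(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_TIMEOUT, "3000")));

    LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
            .poolConfig(genericObjectPoolConfig)
            .commandTimeout(Duration.ofMillis(timeout))
            .build();

    LettuceConnectionFactory factory;
    if (param.containsKey(CustomRedisConfigConstant.REDIS_SENTINEL_NODES)) {
        factory = new LettuceConnectionFactory(buildSentinelConfig(param), clientConfig);
    } else if (param.containsKey(CustomRedisConfigConstant.REDIS_CLUSTER_NODES)) {
        factory = new LettuceConnectionFactory(buildClusterConfig(param), clientConfig);
    } else {
        factory = new LettuceConnectionFactory(buildStandaloneConfig(param), clientConfig);
    }
    // The factory is created outside the regular bean lifecycle, so initialize it explicitly.
    factory.afterPropertiesSet();
    return factory;
}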
4. Finally, iterate over the configured entries and call buildCustomRedisService(k, redisTemplate, stringRedisTemplate) to register the RedisTemplate beans that were created into the Spring container.
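The original post does not show buildCustomRedisService itself. A rough sketch of the registration step is given below, assuming the bean-name suffix from the constants class ("userRedis", "iotRedis", ...), a DefaultListableBeanFactory obtained via an extra import of org.springframework.context.ConfigurableApplicationContext, and that the per-instance templates are exposed under the qualifier names used later in the usage section; the real source may register the beans differently.

// Sketch only: registers a CustomRedisService bean named "<identifier>Redis"
// plus the instance's two templates as "<identifier>RedisTemplate" / "<identifier>StringRedisTemplate".
private void buildCustomRedisService(String redisKey,
                                     RedisTemplate redisTemplate,
                                     StringRedisTemplate stringRedisTemplate) {
    DefaultListableBeanFactory beanFactory =
            (DefaultListableBeanFactory) ((ConfigurableApplicationContext) applicationContext).getBeanFactory();

    // Expose the templates themselves so they can be @Autowired with a @Qualifier.
    beanFactory.registerSingleton(redisKey + "RedisTemplate", redisTemplate);
    beanFactory.registerSingleton(redisKey + "StringRedisTemplate", stringRedisTemplate);

    // Register the wrapper service with its two templates as constructor arguments.
    GenericBeanDefinition definition = new GenericBeanDefinition();
    definition.setBeanClass(CustomRedisService.class);
    ConstructorArgumentValues args = new ConstructorArgumentValues();
    args.addIndexedArgumentValue(0, stringRedisTemplate);
    args.addIndexedArgumentValue(1, redisTemplate);
    definition.setConstructorArgumentValues(args);

    beanFactory.registerBeanDefinition(
            redisKey + CustomRedisConfigConstant.BEAN_NAME_SUFFIX, definition);
}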
The Spring Boot internals involved in this source code are not elaborated here; if you need the background, see my 《SpringBoot 源码解析系列》 (Spring Boot source code analysis series).
The configuration class relies on the InitializingBean, ApplicationContextAware, and BeanPostProcessor extension points:
package com.iceicepip.project.common.redis;

import com.iceicepip.project.common.redis.util.AddressUtils;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.beans.MutablePropertyValues;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.beans.factory.config.ConstructorArgumentValues;
import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.beans.factory.support.GenericBeanDefinition;
import org.springframework.boot.autoconfigure.AutoConfiguration;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.annotation.Bean;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.StandardEnvironment;
import org.springframework.data.redis.connection.*;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;
import java.util.*;

@AutoConfiguration
@ConfigurationProperties(prefix = "spring")
public class CustomRedisConfig implements InitializingBean, ApplicationContextAware, BeanPostProcessor {

    // Static Map variable redis, used to store the Redis configuration parameters
    protected static Map<String, Map<String, Object>> redis = new HashMap<>();

    // Identifier of the Redis instance used as the primary data source in code
    @Value("${customer.primary.redis.key}")
    private String primaryKey;

    // InitializingBean callback: builds the connection factories and templates once properties are injected
    @Override
    public void afterPropertiesSet() {
        redis.forEach((k, v) -> {
            // If the current identifier equals the configured primary key, push its parameters into the environment
            if (Objects.equals(k, primaryKey)) {
                Map<String, Object> paramMap = new HashMap<>(4);
                v.forEach((k1, v1) -> paramMap.put("spring.redis." + k1, v1));
                MapPropertySource mapPropertySource = new MapPropertySource("redisAutoConfigProperty", paramMap);
                ((StandardEnvironment) applicationContext.getEnvironment()).getPropertySources().addLast(mapPropertySource);
            }
            // Create the connection pool configuration and the connection factory
            GenericObjectPoolConfig genericObjectPoolConfig = buildGenericObjectPoolConfig(k, v);
            RedisConnectionFactory lettuceConnectionFactory = buildLettuceConnectionFactory(k, v, genericObjectPoolConfig);
            // Create the RedisTemplate and StringRedisTemplate, then register the custom Redis service
            RedisTemplate redisTemplate = buildRedisTemplate(k, lettuceConnectionFactory);
            StringRedisTemplate stringRedisTemplate = buildStringRedisTemplate(k, lettuceConnectionFactory);
            buildCustomRedisService(k, redisTemplate, stringRedisTemplate);
        });
    }

    // Create the RedisTemplate bean for the primary Redis data source
    @Bean
    public RedisTemplate
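buildGenericObjectPoolConfig is called above but falls outside the quoted snippet. A minimal sketch, assuming the pool keys defined in CustomRedisConfigConstant below and the defaults shown in the generic configuration section; the real method may differ.

// Sketch only: maps the pool-related properties onto a commons-pool2 configuration.
private GenericObjectPoolConfig buildGenericObjectPoolConfig(String redisKey, Map<String, Object> param) {
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxTotal(Integer.parseInt(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_MAXTOTAL, "50"))));
    poolConfig.setMaxIdle(Integer.parseInt(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_MAXIDLE, "50"))));
    poolConfig.setMinIdle(Integer.parseInt(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_MINIDLE, "2"))));
    poolConfig.setMaxWaitMillis(Long.parseLong(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_MAXWAITMILLIS, "10000"))));
    poolConfig.setTestOnBorrow(Boolean.parseBoolean(String.valueOf(
            param.getOrDefault(CustomRedisConfigConstant.REDIS_TESTONBORROW, "false"))));
    return poolConfig;
}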
Define the key names of the commonly used configuration items:
package com.iceicepip.project.common.redis;

public class CustomRedisConfigConstant {

    private CustomRedisConfigConstant() {
    }

    public static final String REDIS_HOST = "host";
    public static final String REDIS_PORT = "port";
    public static final String REDIS_TIMEOUT = "timeout";
    public static final String REDIS_DATABASE = "database";
    public static final String REDIS_PASSWORD = "password";
    public static final String REDIS_MAXWAITMILLIS = "maxWaitMillis";
    public static final String REDIS_MAXIDLE = "maxIdle";
    public static final String REDIS_MINIDLE = "minIdle";
    public static final String REDIS_MAXTOTAL = "maxTotal";
    public static final String REDIS_TESTONBORROW = "testOnBorrow";
    public static final String REDIS_SENTINEL_MASTER = "sentinel.master";
    public static final String REDIS_SENTINEL_NODES = "sentinel.nodes";
    public static final String REDIS_CLUSTER_NODES = "cluster.nodes";
    public static final String REDIS_CLUSTER_MAX_REDIRECTS = "cluster.max-redirects";
    public static final String BEAN_NAME_SUFFIX = "Redis";
    public static final String INIT_METHOD_NAME = "getInit";
}
package com.iceicepip.project.common.redis;

import com.alibaba.ttl.TransmittableThreadLocal;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.aop.framework.Advised;
import org.springframework.aop.support.AopUtils;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.ApplicationContext;
import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisStringCommands;
import org.springframework.data.redis.connection.ReturnType;
import org.springframework.data.redis.connection.StringRedisConnection;
import org.springframework.data.redis.core.*;
import org.springframework.data.redis.core.types.Expiration;

import javax.annotation.Resource;
import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.function.Supplier;

public class CustomRedisService {

    private static final Logger logger = LoggerFactory.getLogger(CustomRedisService.class);

    private StringRedisTemplate stringRedisTemplate;

    private RedisTemplate redisTemplate;

    @Value("${distribute.lock.MaxSeconds:100}")
    private Integer lockMaxSeconds;

    private static Long LOCK_WAIT_MAX_TIME = 120000L;

    @Resource
    private ApplicationContext applicationContext;

    /**
     * Holds the value of the lock for reentrancy checks
     */
    private TransmittableThreadLocal<String> redisLockReentrant = new TransmittableThreadLocal<>();

    /**
     * Lua script used to release the lock
     */
    private static final String RELEASE_LOCK_SCRIPT = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";

    /**
     * Fixed prefix for Redis lock keys
     */
    private static final String REDIS_LOCK_KEY_PREFIX = "xxx:redisLock";

    /**
     * Redis namespace separator
     */
    private static final String REDIS_NAMESPACE_PREFIX = ":";

    @Value("${spring.application.name}")
    private String appName;

    public CustomRedisService() {
    }

    public CustomRedisService(StringRedisTemplate stringRedisTemplate, RedisTemplate redisTemplate) {
        this.stringRedisTemplate = stringRedisTemplate;
        this.redisTemplate = redisTemplate;
    }

    public StringRedisTemplate getStringRedisTemplate() {
        return stringRedisTemplate;
    }

    public RedisTemplate getRedisTemplate() {
        return redisTemplate;
    }

    // The methods below wrap common Redis operations

    public void saveOrUpdate(HashMap<String, String> values) throws Exception {
        ValueOperations<String, String> valueOps = stringRedisTemplate.opsForValue();
        valueOps.multiSet(values);
    }

    public void saveOrUpdate(String key, String value) throws Exception {
        ValueOperations<String, String> valueOps = stringRedisTemplate.opsForValue();
        valueOps.set(key, value);
    }

    public String getValue(String key) throws Exception {
        ValueOperations<String, String> valueOps = stringRedisTemplate.opsForValue();
        return valueOps.get(key);
    }

    public void setValue(String key, String value) throws Exception {
        ValueOperations<String, String> valueOps = stringRedisTemplate.opsForValue();
        valueOps.set(key, value);
    }

    public void setValue(String key, String value, long timeout, TimeUnit unit) throws Exception {
        ValueOperations<String, String> valueOps = stringRedisTemplate.opsForValue();
        valueOps.set(key, value, timeout, unit);
    }

    public List<String> getValues(Collection<String> keys) throws Exception {
        ValueOperations<String, String> valueOps = stringRedisTemplate.opsForValue();
        return valueOps.multiGet(keys);
    }

    public void delete(String key) throws Exception {
        stringRedisTemplate.delete(key);
    }

    public void delete(Collection<String> keys) throws Exception {
        stringRedisTemplate.delete(keys);
    }

    public void addSetValues(String key, String... values) throws Exception {
        SetOperations<String, String> setOps = stringRedisTemplate.opsForSet();
        setOps.add(key, values);
    }

    public Set<String> getSetValues(String key) throws Exception {
        SetOperations<String, String> setOps = stringRedisTemplate.opsForSet();
        return setOps.members(key);
    }

    public String getSetRandomMember(String key) throws Exception {
        SetOperations<String, String> setOps = stringRedisTemplate.opsForSet();
        return setOps.randomMember(key);
    }

    public void delSetValues(String key, Object... values) throws Exception {
        SetOperations<String, String> setOps = stringRedisTemplate.opsForSet();
        setOps.remove(key, values);
    }

    public Long getZsetValuesCount(String key) throws Exception {
        return stringRedisTemplate.opsForSet().size(key);
    }

    public void addHashSet(String key, HashMap<String, String> args) throws Exception {
        HashOperations<String, String, String> hashsetOps = stringRedisTemplate.opsForHash();
        hashsetOps.putAll(key, args);
    }

    public Map<String, String> getHashSet(String key) throws Exception {
        HashOperations<String, String, String> hashsetOps = stringRedisTemplate.opsForHash();
        return hashsetOps.entries(key);
    }

    public Map<byte[], byte[]> getHashByteSet(String key) throws Exception {
        RedisConnection connection = null;
        try {
            connection = redisTemplate.getConnectionFactory().getConnection();
            return connection.hGetAll(key.getBytes());
        } catch (Exception e) {
            throw new Exception(e);
        } finally {
            if (Objects.nonNull(connection) && !connection.isClosed()) {
                connection.close();
            }
        }
    }

    public List<byte[]> getHashMSet(byte[] key, byte[][] fields) throws Exception {
        return stringRedisTemplate.getConnectionFactory().getConnection().hMGet(key, fields);
    }

    /**
     * Set a field in a hash.
     *
     * @param key
     * @param field
     * @param value
     * @return
     * @throws Exception
     */
    public Boolean setHashMSet(byte[] key, byte[] field, byte[] value) throws Exception {
        return stringRedisTemplate.getConnectionFactory().getConnection().hSet(key, field, value);
    }

    /**
     * Fetch data for multiple keys using a pipeline.
     *
     * @param keys   array of keys
     * @param fields second-level keys of the hash objects
     * @return result list; each Object is a List and may be null, so check before use
     * @throws Exception
     */
    public List<Object> getHashMSet(byte[][] keys, byte[][] fields) throws Exception {
        if (keys == null || keys.length == 0 || fields == null || fields.length == 0) {
            return null;
        }
        RedisConnection connection = stringRedisTemplate.getConnectionFactory().getConnection();
        try {
            connection.openPipeline();
            for (byte[] key : keys) {
                connection.hMGet(key, fields);
            }
            return connection.closePipeline();
        } finally {
            if (!connection.isClosed()) {
                connection.close();
            }
        }
    }

    /**
     * Fetch data for multiple keys using a pipeline.
     *
     * @param keys  array of keys
     * @param field second-level key of the hash objects
     * @return result list; each Object is a List and may be null, so check before use
     * @throws Exception
     */
    public List<Object> getHashMSet(byte[][] keys, byte[] field) throws Exception {
        if (keys == null || keys.length == 0 || field == null) {
            return null;
        }
        RedisConnection connection = stringRedisTemplate.getConnectionFactory().getConnection();
        try {
            connection.openPipeline();
            for (byte[] key : keys) {
                connection.hGet(key, field);
            }
            return connection.closePipeline();
        } finally {
            if (!connection.isClosed()) {
                connection.close();
            }
        }
    }

    /**
     * Fetch data for multiple keys using a pipeline.
     *
     * @param keys array of keys
     * @return result list; each Object is a List and may be null, so check before use
     * @throws Exception
     */
    public List<Object> getHashMSet(byte[][] keys) throws Exception {
        if (keys == null || keys.length == 0) {
            return null;
        }
        RedisConnection connection = stringRedisTemplate.getConnectionFactory().getConnection();
        try {
            connection.openPipeline();
            for (byte[] key : keys) {
                connection.hGetAll(key);
            }
            return connection.closePipeline();
        } finally {
            if (!connection.isClosed()) {
                connection.close();
            }
        }
    }

    /**
     * Delete multiple string keys using a pipeline.
     *
     * @param keys array of keys
     */
    public void deleteAllStringValues(byte[][] keys) {
        if (keys == null || keys.length == 0) {
            return;
        }
        RedisConnection connection = stringRedisTemplate.getConnectionFactory().getConnection();
        try {
            connection.openPipeline();
            for (byte[] key : keys) {
                connection.del(key);
            }
            connection.closePipeline();
        } finally {
            if (!connection.isClosed()) {
                connection.close();
            }
        }
    }

    public List<String> getHashMSet(String key, List<String> fields) throws Exception {
        HashOperations<String, String, String> hashsetOps = stringRedisTemplate.opsForHash();
        return hashsetOps.multiGet(key, fields);
    }

    public List<byte[]> getHashByteMSet(String key, byte[]... fields) throws Exception {
        // HashOperations hashsetOps = stringRedisTemplate.opsForHash();
        // return hashsetOps.multiGet(key, fields);
        RedisConnection connection = null;
        try {
            connection = redisTemplate.getConnectionFactory().getConnection();
            return connection.hMGet(key.getBytes(), fields);
        } catch (Exception e) {
            throw new Exception(e);
        } finally {
            if (Objects.nonNull(connection) && !connection.isClosed()) {
                connection.close();
            }
        }
    }

    public void delHashSetValues(String key, Object... values) throws Exception {
        HashOperations<String, String, String> hashsetOps = stringRedisTemplate.opsForHash();
        hashsetOps.delete(key, values);
    }

    public void addZset(String key, String value, double score) throws Exception {
        ZSetOperations<String, String> zSetOps = stringRedisTemplate.opsForZSet();
        zSetOps.add(key, value, score);
    }

    public Set<String> getZsetValues(String key) throws Exception {
        return null;
    }

    public void delZsetValues(String key, Object... values) throws Exception {
        ZSetOperations<String, String> zSetOps = stringRedisTemplate.opsForZSet();
        zSetOps.remove(key, values);
    }

    public String getHashByKey(String redisKey, String mapKey) throws Exception {
        HashOperations<String, String, String> hashsetOps = stringRedisTemplate.opsForHash();
        return hashsetOps.get(redisKey, mapKey);
    }

    public byte[] getHashByteByKey(String redisKey, String mapKey) throws Exception {
        RedisConnection connection = null;
        try {
            connection = redisTemplate.getConnectionFactory().getConnection();
            return connection.hGet(redisKey.getBytes(), mapKey.getBytes());
        } catch (Exception e) {
            throw new Exception(e);
        } finally {
            if (Objects.nonNull(connection) && !connection.isClosed()) {
                connection.close();
            }
        }
        // HashOperations hashsetOps = stringRedisTemplate.opsForHash();
        // return hashsetOps.get(redisKey, mapKey);
    }

    public Map<byte[], byte[]> getHashByte(String redisKey) throws Exception {
        RedisConnection connection = null;
        try {
            connection = redisTemplate.getConnectionFactory().getConnection();
            return connection.hGetAll(redisKey.getBytes());
        } catch (Exception e) {
            throw new Exception(e);
        } finally {
            if (Objects.nonNull(connection) && !connection.isClosed()) {
                connection.close();
            }
        }
    }

    public void addHashSet(String redisKey, String mapKey, String mapValue) throws Exception {
        stringRedisTemplate.opsForHash().put(redisKey, mapKey, mapValue);
    }

    public Set<String> getSet(String key) throws Exception {
        SetOperations<String, String> setOperations = stringRedisTemplate.opsForSet();
        return setOperations.members(key);
    }

    public void addSetValuesPipelined(final String[] keys, final String value) throws Exception {
        stringRedisTemplate.executePipelined(new RedisCallback<Object>() {
            @Override
            public Object doInRedis(RedisConnection connection) {
                StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
                for (int i = 0; i < keys.length; i++) {
                    stringRedisConn.sAdd(keys[i], value);
                }
                // must return null
                return null;
            }
        });
    }

    public void delSetValuesPipelined(final String[] keys, final String value) throws Exception {
        stringRedisTemplate.executePipelined(new RedisCallback<Object>() {
            @Override
            public Object doInRedis(RedisConnection connection) {
                StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
                for (int i = 0; i < keys.length; i++) {
                    stringRedisConn.sRem(keys[i], value);
                }
                // must return null
                return null;
            }
        });
    }

    public void delHashByKey(String redisKey, String mapKey) throws Exception {
        HashOperations<String, String, String> hashMapOps = stringRedisTemplate.opsForHash();
        hashMapOps.delete(redisKey, mapKey);
    }

    public Boolean hasKey(String key) throws Exception {
        return stringRedisTemplate.hasKey(key);
    }

    /**
     * Cache an additional field for a user under a hash key.
     *
     * @param key
     * @param field   field of the hash structure
     * @param data    data to store
     * @param timeOut expiration time
     * @param unit    time unit
     */
    public void setHashOther(String key, String field, String data, long timeOut, TimeUnit unit) {
        stringRedisTemplate.opsForHash().put(key, field, data);
        stringRedisTemplate.expire(key, timeOut, unit);
    }

    /**
     * Return the additional cached field.
     *
     * @param key
     * @param field field of the hash structure
     * @return String
     * @throws Exception
     */
    public String getHashOther(String key, String field) throws Exception {
        return this.getHashByKey(key, field);
    }

    /**
     * 2019-2-20 changyandong: added incr method with expiration support.
     *
     * @param key
     * @param delta
     * @param timeout
     * @param unit
     * @return
     */
    public Long increment(final String key, final int delta, final long timeout, final TimeUnit unit) {
        if (timeout <= 0 || unit == null) {
            return stringRedisTemplate.opsForValue().increment(key, delta);
        }
        List<Object> result = stringRedisTemplate.executePipelined(new SessionCallback<Object>() {
            @Override
            public <K, V> Object execute(RedisOperations<K, V> operations) throws DataAccessException {
                ValueOperations<K, V> ops = operations.opsForValue();
                ops.increment((K) key, delta);
                operations.expire((K) key, timeout, unit);
                return null;
            }
        });
        return (Long) result.get(0);
    }

    /**
     * Write multiple hash structures using a pipeline.
     */
    public void addHashValuesPipelined(Map<String, Map<String, String>> keys) {
        stringRedisTemplate.executePipelined((RedisCallback<Object>) connection -> {
            StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
            keys.forEach(stringRedisConn::hMSet);
            // must return null
            return null;
        });
    }

    /**
     * Write multiple hash structures using a pipeline, deleting the old hashes first.
     */
    public void addHashValuesPipelinedRemoveOldHash(Map<String, Map<String, String>> keys) {
        stringRedisTemplate.executePipelined((RedisCallback<Object>) connection -> {
            StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
            stringRedisConn.del(keys.keySet().toArray(new String[0]));
            keys.forEach(stringRedisConn::hMSet);
            // must return null
            return null;
        });
    }

    /**
     * Distributed-lock template method.
     *
     * @param businessKey      business key
     * @param callbackFunction callback to execute while holding the lock
     * @param s                argument passed to the callback
     * @param <S>              callback argument type
     * @param <T>              callback return type
     * @return callback return value
     */
    public <S, T> T redisLockCallback(String businessKey, Function<S, T> callbackFunction, S s) {
        try {
            redisLock(businessKey);
            return callbackFunction.apply(s);
        } finally {
            redisUnLock(businessKey);
        }
    }

    public <T> T redisLockSupplier(String businessKey, Supplier<T> supplier) {
        return redisLockSupplier(businessKey, supplier, lockMaxSeconds, LOCK_WAIT_MAX_TIME, TimeUnit.SECONDS);
    }

    public <T> T redisLockSupplier(String businessKey, Supplier<T> supplier, long lockMaxTime, long tryTimeout, TimeUnit timeUnit) {
        try {
            redisLock(businessKey, lockMaxTime, tryTimeout, timeUnit);
            return supplier.get();
        } finally {
            redisUnLock(businessKey);
        }
    }

    /**
     * Try to acquire the lock without waiting; returns immediately with whether it was acquired.
     *
     * @param businessKey business key
     * @return whether the lock was acquired
     */
    public boolean redisLockSuspend(String businessKey) {
        return redisLockSuspend(businessKey, lockMaxSeconds, TimeUnit.SECONDS);
    }

    /**
     * Try to acquire the lock without waiting; returns immediately with whether it was acquired.
     *
     * @param businessKey business key
     * @param lockMaxTime how long the lock may be held
     * @param timeUnit    time unit
     * @return whether the lock was acquired
     */
    public boolean redisLockSuspend(String businessKey, long lockMaxTime, TimeUnit timeUnit) {
        String lockKey = generateLockKey(businessKey);
        long finalLockMaxTime = timeUnit.toMillis(lockMaxTime);
        // reentrant-lock check
        if (isReentrantLock(lockKey)) {
            return Boolean.TRUE;
        }
        RedisCallback<Boolean> callback = (connection) -> connection.set(
                lockKey.getBytes(StandardCharsets.UTF_8),
                businessKey.getBytes(StandardCharsets.UTF_8),
                Expiration.milliseconds(finalLockMaxTime),
                RedisStringCommands.SetOption.SET_IF_ABSENT);
        return stringRedisTemplate.execute(callback);
    }

    /**
     * @param keyPrefix  lock key prefix
     * @param key        key
     * @param tryTimeout timeout for acquiring the lock
     * @param timeUnit   time unit
     * @return whether the lock was acquired
     */
    @Deprecated
    public boolean redisLock(String keyPrefix, String key, long lockMaxTime, long tryTimeout, TimeUnit timeUnit) {
        String businessKey = getLockKey(keyPrefix, key);
        return redisLock(businessKey, lockMaxTime, tryTimeout, timeUnit);
    }

    public boolean redisLock(String businessKey, long lockMaxTime, long tryTimeout, TimeUnit timeUnit) {
        tryTimeout = System.currentTimeMillis() + timeUnit.toMillis(tryTimeout);
        lockMaxTime = timeUnit.toMillis(lockMaxTime);
        return redisLock(businessKey, lockMaxTime, tryTimeout);
    }

    /**
     * Acquire the Redis distributed lock (default timeout).
     *
     * @param keyPrefix lock key prefix
     * @param key       key
     * @return whether the lock was acquired
     */
    @Deprecated
    public boolean redisLock(String keyPrefix, String key) {
        String businessKey = getLockKey(keyPrefix, key);
        return redisLock(businessKey);
    }

    public boolean redisLock(String businessKey) {
        long endTime = System.currentTimeMillis() + LOCK_WAIT_MAX_TIME;
        long lockMaxTime = TimeUnit.SECONDS.toMillis(this.lockMaxSeconds);
        return redisLock(businessKey, lockMaxTime, endTime);
    }

    /**
     * Acquire the Redis distributed lock.
     *
     * @param businessKey business key
     * @param lockMaxTime how long the lock may be held (ms)
     * @param endTime     deadline for acquiring the lock
     * @return whether the lock was acquired
     */
    private boolean redisLock(String businessKey, long lockMaxTime, long endTime) {
        String lockKey = generateLockKey(businessKey);
        logger.debug("redisLock businessKey:{}, lockKey:{}, lockMaxTime:{}, endTime:{}", businessKey, lockKey, lockMaxTime, endTime);
        // reentrant-lock check
        if (isReentrantLock(lockKey)) {
            logger.debug("redisLock lockKey:{}, threadName:{}, isReentrantLock true", lockKey, Thread.currentThread().getName());
            return Boolean.TRUE;
        }
        RedisCallback<Boolean> callback = (connection) -> connection.set(
                lockKey.getBytes(StandardCharsets.UTF_8),
                businessKey.getBytes(StandardCharsets.UTF_8),
                Expiration.milliseconds(lockMaxTime),
                RedisStringCommands.SetOption.SET_IF_ABSENT);
        // keep trying until the deadline; if the lock is still not acquired by then, fail
        while (System.currentTimeMillis() < endTime) {
            if (stringRedisTemplate.execute(callback)) {
                redisLockReentrant.set(lockKey);
                logger.debug("redisLock getKey lockKey:{}, ", lockKey);
                return true;
            }
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                logger.error("error while acquiring redis distributed lock", e);
                Thread.currentThread().interrupt();
            }
        }
        logger.debug("redisLock lock not acquired lockKey:{}, ", lockKey);
        return false;
    }

    /**
     * Release the distributed lock.
     *
     * @param keyPrefix lock key prefix
     * @param key       key
     */
    @Deprecated
    public Boolean redisUnLock(String keyPrefix, String key) {
        String lockKey = getLockKey(keyPrefix, key);
        return redisUnLock(lockKey);
    }

    public Boolean redisUnLock(String businessKey) {
        String lockKey = generateLockKey(businessKey);
        RedisCallback<Boolean> callback = (connection) -> connection.eval(
                RELEASE_LOCK_SCRIPT.getBytes(), ReturnType.BOOLEAN, 1,
                lockKey.getBytes(StandardCharsets.UTF_8),
                businessKey.getBytes(StandardCharsets.UTF_8));
        // clear the ThreadLocal
        redisLockReentrant.remove();
        Boolean execute = stringRedisTemplate.execute(callback);
        logger.debug("redisUnLock execute lockKey:{}, ", lockKey);
        return execute;
    }

    private String getLockKey(String keyPrefix, String key) {
        return keyPrefix + "-" + key;
    }

    /**
     * Whether this acquisition is reentrant.
     */
    private boolean isReentrantLock(String lockKey) {
        String originValue = redisLockReentrant.get();
        String redisValue = stringRedisTemplate.opsForValue().get(lockKey);
        return StringUtils.isNotBlank(originValue) && originValue.equals(redisValue);
    }

    /**
     * Generate a key that follows the convention
     * xxx:redisLock:${appName}:${classSimpleName}:${methodName}:${businessKey}
     *
     * @param businessKey business key
     * @return key
     */
    private String generateLockKey(String businessKey) {
        StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
        String classSimpleName = StringUtils.EMPTY;
        String methodName = StringUtils.EMPTY;
        for (StackTraceElement traceElement : stackTrace) {
            String itemClassName = traceElement.getClassName();
            // if it is the current class or the stack-trace class, continue
            if (itemClassName.equals(this.getClass().getName()) || itemClassName.equals(StackTraceElement.class.getName())) {
                continue;
            }
            char[] cs = itemClassName.substring(itemClassName.lastIndexOf(".") + 1).toCharArray();
            cs[0] += 32;
            // keep looking until a Spring-managed class is found
            Object target;
            try {
                target = applicationContext.getBean(String.valueOf(cs));
            } catch (NoSuchBeanDefinitionException e) {
                continue;
            }
            // if it is a proxy, unwrap it to the actual target
            if (AopUtils.isAopProxy(target) && target instanceof Advised) {
                Advised advised = (Advised) target;
                try {
                    target = advised.getTargetSource().getTarget();
                } catch (Exception e) {
                    continue;
                }
            }
            if (Objects.nonNull(target)) {
                classSimpleName = target.getClass().getSimpleName();
                methodName = traceElement.getMethodName();
                break;
            }
        }
        return REDIS_LOCK_KEY_PREFIX.concat(REDIS_NAMESPACE_PREFIX).concat(appName.toLowerCase())
                .concat(REDIS_NAMESPACE_PREFIX).concat(classSimpleName)
                .concat(REDIS_NAMESPACE_PREFIX).concat(methodName)
                .concat(REDIS_NAMESPACE_PREFIX).concat(businessKey);
    }
}
In the project, create the directory common-redis-lettuce/src/main/resources/META-INF/spring and, inside it, a file named org.springframework.boot.autoconfigure.AutoConfiguration.imports. This file tells Spring Boot which additional configuration classes to import during auto-configuration: each line is the fully qualified name of a configuration class. For our starter it contains a single entry:

com.iceicepip.project.common.redis.CustomRedisConfig

With this in place, Spring Boot loads CustomRedisConfig at startup and merges it into the auto-configuration, so the multi-datasource beans become part of the application context.
Here xxx is the data-source identifier configured in the Spring Boot configuration file, e.g. 'user' or 'iot'.
@Autowired @Qualifier("xxxRedis") private CustomRedisService xxxRedisService; @Autowired @Qualifier("userRedis") private CustomRedisService userRedisService;
Alternatively, use the RedisTemplate beans directly.
@Autowired @Qualifier("userRedisTemplate") private RedisTemplate userRedisTemplate; @Autowired @Qualifier("xxxStringRedisTemplate") private StringRedisTemplate xxxStringRedisTemplate; @Autowired @Qualifier("xxxRedisTemplate") private RedisTemplate xxxRedisTemplate;
https://github.com/wangshuai67/Redis-Tutorial-2023
Hi everyone, I'm 冰点. That is everything for today's Redis practice article on Spring Boot multi-datasource Redis integration with sentinel and Cluster mode support. If you have questions or thoughts, leave them in the comments.