Background

Project Structure

The project is built on the Spring Boot + Dubbo + MyBatis stack and is roughly divided into two layers: web and service. The web layer exposes the API to the SDK client, and the service layer handles MySQL database operations and provides Dubbo services to the web layer.

Business Scenario

Consider the following diagram:

As the figure shows, the rough flow is: the SDK sends in an appkey (the unique identifier of an app), the web layer calls the service layer remotely via Dubbo, and the service layer must route to a different database depending on the appkey. The databases identified by the different appkeys all share the same table structure.

Here is the problem: the appkey is a non-business identifier (it appears only in the database name, never in a table). How can we execute SQL against the database corresponding to each reported appkey without changing the original DAO method signatures (no appkey parameter), the service method signatures (no appkey parameter), or the mapper XML files (no appkey placeholder)?

Solution

To summarize the requirement above in one sentence: how to route to a database by key, non-intrusively?

This "non-intrusiveness" involves two difficulties:

  1. How can the web layer carry the appkey along to the service layer when calling it, without adding an appkey parameter?
  2. Assuming the first point is solved, how can the service side execute SQL against the specified database without explicitly switching the data source?

Let's consider the second difficulty first.

Idea One

There are many examples of configuring multiple data sources with MyBatis, but none of them fit this scenario. They require all data sources to be configured ("hard-coded") up front, and the data source to use must be explicitly specified on each DAO method. That may suit scenarios such as read-write splitting, but it cannot handle the "dynamic" data sources needed here.

Idea Two

If data sources cannot be selected dynamically, can they be created dynamically? Yes, but that makes the architecture more complicated: SQL under each appkey would need its own data source, and each data source generally maintains its own connection pool, so with many appkeys this becomes very resource-intensive. Dynamically creating data sources is therefore not feasible either.

A Feasible Solution

In fact, SQL itself gives us the answer:

mysql> create database test;
Query OK, 1 row affected (0.01 sec)

mysql> create database test_1;
Query OK, 1 row affected (0.00 sec)

mysql> create table test.person(id integer, name varchar(100));
Query OK, 0 rows affected (0.27 sec)

mysql> create table test_1.person(id integer, name varchar(100));
Query OK, 0 rows affected (0.27 sec)

mysql> insert into test.person values(1, 'zhou');
Query OK, 1 row affected (0.21 sec)

mysql> insert into test_1.person values(1, 'zhou');
Query OK, 1 row affected (0.20 sec)

mysql> select * from test.person;
+------+------+
| id   | name |
+------+------+
|    1 | zhou |
+------+------+
1 row in set (0.00 sec)

mysql> select * from test_1.person;
+------+------+
| id   | name |
+------+------+
|    1 | zhou |
+------+------+
1 row in set (0.00 sec)

As the session above shows, no command such as `use test` was ever issued to switch databases. Every statement ran over a single connection (logged in as root, on the default mysql database): a fully qualified table name (`database.table`) makes the target database explicit in the SQL itself.

With that, things become clear. The remaining goal is: rewrite the SQL so that each table name is prefixed with the target database name.
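Before looking at the full interceptor, the core rewrite can be sketched in a few lines of plain Java (the table and database names here are just illustrative):

```java
public class SqlRewriteSketch {

    // Prefix the first occurrence of the table name with the per-appkey database name.
    public static String rewrite(String sql, String tableName, String dbName) {
        return sql.replaceFirst(tableName, dbName + "." + tableName);
    }

    public static void main(String[] args) {
        String sql = "select id, name from person where id = ?";
        // e.g. appkey "test_1" maps to database "test_1"
        System.out.println(rewrite(sql, "person", "test_1"));
        // prints: select id, name from test_1.person where id = ?
    }
}
```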

MyBatis happens to support exactly this kind of operation through its interceptor mechanism. Sample code (verified) follows:

@Component
@Intercepts({
        @Signature(type = Executor.class, method = "update", args = {
                MappedStatement.class, Object.class}),
        @Signature(type = Executor.class, method = "query", args = {
                MappedStatement.class, Object.class, RowBounds.class,
                ResultHandler.class})})
public class MybatisInterceptor implements Interceptor {
    private static Log logger = LogFactory.getLog(MybatisInterceptor.class);

    // These indices correspond to the args declared in @Signature above.
    static int MAPPED_STATEMENT_INDEX = 0;
    static int PARAMETER_INDEX = 1;
    static int ROWBOUNDS_INDEX = 2;
    static int RESULT_HANDLER_INDEX = 3;

    @Override
    public Object intercept(Invocation invocation) throws Throwable {

        // If there is an appkey in the Dubbo context, rewrite the SQL to target the specified database.
        String appkey = RpcContext.getContext().getAttachment(Constants.APPKEY);
        if (StringUtils.isNotEmpty(appkey)) {

            final Object[] queryArgs = invocation.getArgs();
            final MappedStatement mappedStatement = (MappedStatement) queryArgs[MAPPED_STATEMENT_INDEX];
            final Object parameter = queryArgs[PARAMETER_INDEX];
            final BoundSql boundSql = mappedStatement.getBoundSql(parameter);

            String sql = boundSql.getSql();

            String tableName = getTableName(sql);

            logger.info("old sql is: " + sql);

            String dbname = Constants.DB_PREFIX + appkey;
            if (StringUtils.isNotEmpty(tableName)) {
                // Prefix the table name with the target database name.
                sql = sql.replaceFirst(tableName, dbname + "." + tableName);

                logger.info("new sql is: " + sql);

                // Build a new BoundSql around the rewritten statement...
                BoundSql newBoundSql = new BoundSql(mappedStatement.getConfiguration(), sql, boundSql.getParameterMappings(), boundSql.getParameterObject());
                // ...and wrap it in a copy of the original MappedStatement.
                MappedStatement newMs = copyFromMappedStatement(mappedStatement, new BoundSqlSqlSource(newBoundSql));
                for (ParameterMapping mapping : boundSql.getParameterMappings()) {
                    String prop = mapping.getProperty();
                    if (boundSql.hasAdditionalParameter(prop)) {
                        newBoundSql.setAdditionalParameter(prop, boundSql.getAdditionalParameter(prop));
                    }
                }
                queryArgs[MAPPED_STATEMENT_INDEX] = newMs;
            }
        }

        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {
    }

    private MappedStatement copyFromMappedStatement(MappedStatement ms, SqlSource newSqlSource) {
        MappedStatement.Builder builder = new MappedStatement.Builder(ms.getConfiguration(), ms.getId(), newSqlSource, ms.getSqlCommandType());
        builder.resource(ms.getResource());
        builder.fetchSize(ms.getFetchSize());
        builder.statementType(ms.getStatementType());
        builder.keyGenerator(ms.getKeyGenerator());
        if (ms.getKeyProperties() != null && ms.getKeyProperties().length > 0) {
            builder.keyProperty(ms.getKeyProperties()[0]);
        }
        builder.timeout(ms.getTimeout());
        builder.parameterMap(ms.getParameterMap());
        builder.resultMaps(ms.getResultMaps());
        builder.resultSetType(ms.getResultSetType());
        builder.cache(ms.getCache());
        builder.flushCacheRequired(ms.isFlushCacheRequired());
        builder.useCache(ms.isUseCache());
        return builder.build();
    }

    public static class BoundSqlSqlSource implements SqlSource {
        private BoundSql boundSql;

        public BoundSqlSqlSource(BoundSql boundSql) {
            this.boundSql = boundSql;
        }

        public BoundSql getBoundSql(Object parameterObject) {
            return boundSql;
        }
    }

    // Extract the table name from the SQL (assumes lowercase keywords and simple single-table statements).
    private String getTableName(String sql) {
        String[] sqls = sql.split("\\s+");
        switch (sqls[0]) {
            case "select": {
                // select aa, bb, cc from tableName
                for (int i = 0; i < sqls.length; i++) {
                    if (sqls[i].equals("from")) {
                        return sqls[i + 1];
                    }
                }
                break;
            }
            case "update": {
                // update tableName
                return sqls[1];
            }
            case "insert": {
                // insert into tableName
                return sqls[2];
            }
            case "delete": {
                // delete from tableName
                return sqls[2];
            }
        }
        return null;
    }
}
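Note that `getTableName` above assumes lowercase keywords and simple single-table statements. A slightly more defensive variant (a sketch, still not a real SQL parser — joins, subqueries, and aliases would need a proper parser) could use a case-insensitive regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TableNameExtractor {

    // Case-insensitive: matches the token after FROM, UPDATE, INSERT INTO, or DELETE FROM.
    private static final Pattern TABLE_PATTERN = Pattern.compile(
            "(?i)(?:from|update|insert\\s+into|delete\\s+from)\\s+([`\\w.]+)");

    public static String getTableName(String sql) {
        Matcher m = TABLE_PATTERN.matcher(sql);
        // Strip backquotes so the caller gets the bare table name.
        return m.find() ? m.group(1).replace("`", "") : null;
    }

    public static void main(String[] args) {
        System.out.println(getTableName("SELECT id FROM person WHERE id = 1")); // person
        System.out.println(getTableName("delete from person where id = 1"));   // person
    }
}
```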

A careful reader will notice this line in the interceptor:

    String appkey = RpcContext.getContext().getAttachment(Constants.APPKEY);

At this point, the first of the two difficulties is essentially solved as well. The RpcContext object is a utility Dubbo provides that lets you pass context parameters along with a Dubbo service call; under the hood it is based on ThreadLocal.
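The ThreadLocal principle can be illustrated with a JDK-only toy context (this mimics the idea, it is not Dubbo's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

// A toy "RpcContext": each thread sees only its own attachments.
public class MiniRpcContext {

    private static final ThreadLocal<Map<String, String>> LOCAL =
            ThreadLocal.withInitial(HashMap::new);

    public static void setAttachment(String key, String value) {
        LOCAL.get().put(key, value);
    }

    public static String getAttachment(String key) {
        return LOCAL.get().get(key);
    }

    public static void main(String[] args) throws InterruptedException {
        setAttachment("appkey", "test_1");
        // A different thread has its own map and does not see the main thread's value.
        Thread t = new Thread(() ->
                System.out.println("other thread sees: " + getAttachment("appkey")));
        t.start();
        t.join();
        System.out.println("main thread sees: " + getAttachment("appkey"));
    }
}
```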

The service layer problem is now largely solved, and the web layer part is comparatively simple: write a filter that intercepts the appkey globally and puts it into the Dubbo context. Every controller will then carry the appkey whenever it calls a Dubbo service. For example:

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        logger.info("Filter is executing...");
        // read the appkey from the request header
        String appkey = ((HttpServletRequest) request).getHeader("key");

        logger.info("the requested appkey is: " + appkey);

        // put it into the Dubbo context, then pass the request along the filter chain
        RpcContext.getContext().setAttachment(APPKEY, appkey);

        chain.doFilter(request, response);
    }

Not Quite Done

The process above seems perfect, but due to a design decision in Dubbo, things do not go as smoothly as imagined. For illustration, consider the following code:

/**
 * Test interface
 */
@Controller
@RequestMapping(path = "/test")
public class TestController {

    private static final Logger LOGGER = LoggerFactory.getLogger(TestController.class);

    @Reference
    private TestService testService;

    @PostMapping(path = {"/test"})
    @ResponseBody
    public String hello() {

        // The key arrives in the request header; the filter puts it into the Dubbo RpcContext.
        // Suppose the current default data source is the test database, and key = test_1.

        boolean ret1 = testService.method1(); // executes on the test_1 database (appkey present)

        boolean ret2 = testService.method1(); // executes on the test database (appkey is gone)

        return "hello";
    }
}

Look closely at the comments in the code above; they record what I actually observed when I tried it.

It is common to call a Dubbo service several times within one controller method, but the data put into RpcContext is destroyed after the first call to testService.method1(), so the second call cannot see it. And the filter only intercepts the request once, before the controller runs.

As mentioned earlier, RpcContext is built on Java's native ThreadLocal, which is bound to the current thread, and the controller's hello method is still running on that same thread after the first testService.method1() call returns. Since the data cannot vanish into thin air, it must be explicitly removed somewhere in the Dubbo source code.
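The symptom can be reproduced with a tiny JDK-only simulation (this mimics Dubbo's consumer-side clearing behavior; it is not Dubbo's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class ContextClearingDemo {

    // Per-thread attachment map, analogous to RpcContext.
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void setAttachment(String key, String value) {
        CTX.get().put(key, value);
    }

    // Simulates one Dubbo consumer-side invocation: the attachment is visible
    // during the call, but the context is cleared before the call returns.
    public static String fakeDubboCall() {
        String appkey = CTX.get().get("appkey");
        CTX.get().clear();
        return appkey;
    }

    public static void main(String[] args) {
        setAttachment("appkey", "test_1");
        System.out.println(fakeDubboCall()); // test_1 -- first call sees the appkey
        System.out.println(fakeDubboCall()); // null   -- context was already cleared
    }
}
```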

After some searching plus breakpoint debugging (steps omitted for space), I finally found the two places in the Dubbo source where the data is destroyed:

Commenting out the code in the red boxes above removes the clearing; anyone who needs this behavior can modify the Dubbo source accordingly and publish the build to a private repository.

With that, a non-intrusive, dynamically routed database solution under microservices is complete!

Summary

  1. MyBatis interceptors are very practical: rewriting SQL, measuring execution time, and the like are all easy to implement with them.
  2. Dubbo was designed from the start to carry context parameters across remote calls; this can be used for other, similarly lightweight designs.
Last modified: March 29, 2019
