Migrating the Camunda Framework Database to DM, KingbaseES, or Vastbase

The migration can be done either by modifying the JAR packages the project depends on, or by modifying the Camunda source code and repackaging it as a JAR. This guide uses the Jar Editor plugin to modify the JAR packages directly.

At deployment time, the Camunda framework scans the database to check whether the required tables exist; if they do not, it runs the SQL bundled inside the JAR to create them. Because the JAR contains no SQL for the DM database, it is recommended to complete the database migration first and only then migrate this project.

If the project must be deployed while the tables do not yet exist, set the database type to oracle for the first deployment so that the tables are created from the bundled Oracle SQL, then change the database type to dm or kingbase8 for subsequent deployments, as sketched below.
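
A minimal configuration sketch of this two-phase trick, assuming Camunda runs inside a Spring Boot project as in the configuration steps later in this guide:

# first deployment only: create the tables from the bundled Oracle DDL
camunda:
  bpm:
    database:
      type: oracle

# every later deployment: switch to the real database type
camunda:
  bpm:
    database:
      type: dm        # or kingbase8 for KingbaseES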

Prerequisites

Adapting Camunda to the DM database

  1. Open the dependency org.camunda.bpm:camunda-engine in IntelliJ IDEA and modify the JAR directly with the Jar Editor plugin.
  2. Modify the file “org/camunda/bpm/engine/impl/cfg/ProcessEngineConfigurationImpl” inside the JAR.

    In the getDefaultDatabaseTypeMappings method, add the mapping marked with the // added comment:
    protected static Properties getDefaultDatabaseTypeMappings() {
        Properties databaseTypeMappings = new Properties();
        databaseTypeMappings.setProperty("H2", "h2");
        databaseTypeMappings.setProperty(MY_SQL_PRODUCT_NAME, "mysql");
        databaseTypeMappings.setProperty(MARIA_DB_PRODUCT_NAME, "mariadb");
        databaseTypeMappings.setProperty("Oracle", "oracle");
        databaseTypeMappings.setProperty(POSTGRES_DB_PRODUCT_NAME, "postgres");
        databaseTypeMappings.setProperty("Microsoft SQL Server", "mssql");
        databaseTypeMappings.setProperty("DB2", "db2");
        databaseTypeMappings.setProperty("DB2", "db2");
        databaseTypeMappings.setProperty("DB2/NT", "db2");
        databaseTypeMappings.setProperty("DB2/NT64", "db2");
        databaseTypeMappings.setProperty("DB2 UDP", "db2");
        databaseTypeMappings.setProperty("DB2/LINUX", "db2");
        databaseTypeMappings.setProperty("DB2/LINUX390", "db2");
        databaseTypeMappings.setProperty("DB2/LINUXX8664", "db2");
        databaseTypeMappings.setProperty("DB2/LINUXZ64", "db2");
        databaseTypeMappings.setProperty("DB2/400 SQL", "db2");
        databaseTypeMappings.setProperty("DB2/6000", "db2");
        databaseTypeMappings.setProperty("DB2 UDB iSeries", "db2");
        databaseTypeMappings.setProperty("DB2/AIX64", "db2");
        databaseTypeMappings.setProperty("DB2/HPUX", "db2");
        databaseTypeMappings.setProperty("DB2/HP64", "db2");
        databaseTypeMappings.setProperty("DB2/SUN", "db2");
        databaseTypeMappings.setProperty("DB2/SUN64", "db2");
        databaseTypeMappings.setProperty("DB2/PTX", "db2");
        databaseTypeMappings.setProperty("DB2/2", "db2");
        databaseTypeMappings.setProperty("DMDBMS", "dm");	
        return databaseTypeMappings;
      }
    Save and compile with Jar Editor, then rebuild the JAR. Make sure the JDK version matches the JDK version Camunda is built for (JDK 11).
    Compilation errors can occur in some cases, typically because:
    • the versions of the dependencies used by the Camunda framework and by the source code are incompatible, or
    • the wrong JDK version is being used.
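
    Camunda resolves the database type by looking up the product name reported by the JDBC driver (DatabaseMetaData.getDatabaseProductName()) in this mapping table, so the key added above must match what the driver actually reports. A minimal standalone sketch to print that name (the URL and credentials are placeholders; the same check also works later for KingbaseES):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ProductNameCheck {
        public static void main(String[] args) throws Exception {
            // placeholders: replace the URL, user, and password with real values
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:dm://127.0.0.1:5236/DMSERVER", "<user>", "<password>")) {
                // must print a key present in getDefaultDatabaseTypeMappings,
                // e.g. "DMDBMS" for the DM database
                System.out.println(conn.getMetaData().getDatabaseProductName());
            }
        }
    }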

  3. Modify the file “org/camunda/bpm/engine/impl/db/sql/DbSqlSessionFactory” inside the JAR.

    1. Add the following class constants (marked // added) to DbSqlSessionFactory:
      public class DbSqlSessionFactory implements SessionFactory {
      
        public static final String MSSQL = "mssql";
        public static final String DB2 = "db2";
        public static final String ORACLE = "oracle";
        public static final String H2 = "h2";
        public static final String MYSQL = "mysql";
        public static final String POSTGRES = "postgres";
        public static final String MARIADB = "mariadb";
        public static final String DMDBMS = "dm";    // added
        public static final String[] SUPPORTED_DATABASES = {MSSQL, DB2, ORACLE, H2, MYSQL, POSTGRES, MARIADB, DMDBMS};  // DMDBMS added to the array
        ...
        }
    2. Add the following code to the static initializer of DbSqlSessionFactory (DM's SQL dialect is largely Oracle-compatible, which is why the Oracle variants of the statements are reused):
            databaseSpecificLimitBeforeStatements.put(DMDBMS, "select * from ( select a.*, ROWNUM rnum from (");
            optimizeDatabaseSpecificLimitBeforeWithoutOffsetStatements.put(DMDBMS, "select * from ( select a.*, ROWNUM rnum from (");
            databaseSpecificLimitAfterStatements.put(DMDBMS, "  ) a where ROWNUM < #{lastRow}) where rnum  >= #{firstRow}");
            optimizeDatabaseSpecificLimitAfterWithoutOffsetStatements.put(DMDBMS, "  ) a where ROWNUM <= #{maxResults})");
            databaseSpecificLimitBeforeWithoutOffsetStatements.put(DMDBMS, "");
            databaseSpecificLimitAfterWithoutOffsetStatements.put(DMDBMS, "AND ROWNUM <= #{maxResults}");
            databaseSpecificInnerLimitAfterStatements.put(DMDBMS, databaseSpecificLimitAfterStatements.get(DMDBMS));
            databaseSpecificLimitBetweenStatements.put(DMDBMS, "");
            databaseSpecificLimitBetweenFilterStatements.put(DMDBMS, "");
            databaseSpecificLimitBetweenAcquisitionStatements.put(DMDBMS, "");
      
            databaseSpecificOrderByStatements.put(DMDBMS, defaultOrderBy);
            databaseSpecificLimitBeforeNativeQueryStatements.put(DMDBMS, "");
            databaseSpecificDistinct.put(DMDBMS, "distinct");
            databaseSpecificLimitBeforeInUpdate.put(DMDBMS, "");
            databaseSpecificLimitAfterInUpdate.put(DMDBMS, "");
            databaseSpecificAuthJoinStart.put(DMDBMS, defaultAuthOnStart);
            databaseSpecificNumericCast.put(DMDBMS, "");
            databaseSpecificCountDistinctBeforeStart.put(DMDBMS, defaultDistinctCountBeforeStart);
            databaseSpecificCountDistinctBeforeEnd.put(DMDBMS, defaultDistinctCountBeforeEnd);
            databaseSpecificCountDistinctAfterEnd.put(DMDBMS, defaultDistinctCountAfterEnd);
      
            databaseSpecificEscapeChar.put(DMDBMS, defaultEscapeChar);
      
            databaseSpecificDummyTable.put(DMDBMS, "FROM DUAL");
            databaseSpecificBitAnd1.put(DMDBMS, "BITAND(");
            databaseSpecificBitAnd2.put(DMDBMS, ",");
            databaseSpecificBitAnd3.put(DMDBMS, ")");
            databaseSpecificDatepart1.put(DMDBMS, "to_number(to_char(");
            databaseSpecificDatepart2.put(DMDBMS, ",");
            databaseSpecificDatepart3.put(DMDBMS, "))");
      
            databaseSpecificTrueConstant.put(DMDBMS, "1");
            databaseSpecificFalseConstant.put(DMDBMS, "0");
            databaseSpecificIfNull.put(DMDBMS, "NVL");
      
            databaseSpecificDaysComparator.put(DMDBMS, "${date} <= #{currentTimestamp} - ${days}");
      
            databaseSpecificCollationForCaseSensitivity.put(DMDBMS, "");
      
            databaseSpecificAuthJoinEnd.put(DMDBMS, defaultAuthOnEnd);
            databaseSpecificAuthJoinSeparator.put(DMDBMS, defaultAuthOnSeparator);
      
            databaseSpecificAuth1JoinStart.put(DMDBMS, defaultAuthOnStart);
            databaseSpecificAuth1JoinEnd.put(DMDBMS, defaultAuthOnEnd);
            databaseSpecificAuth1JoinSeparator.put(DMDBMS, defaultAuthOnSeparator);
            databaseSpecificExtractTimeUnitFromDate.put(DMDBMS, defaultExtractTimeUnitFromDate);
      
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricProcessInstanceDurationReport", "selectHistoricProcessInstanceDurationReport_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricTaskInstanceDurationReport", "selectHistoricTaskInstanceDurationReport_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricTaskInstanceCountByTaskNameReport", "selectHistoricTaskInstanceCountByTaskNameReport_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectFilterByQueryCriteria", "selectFilterByQueryCriteria_oracleDb2");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricProcessInstanceIdsForCleanup", "selectHistoricProcessInstanceIdsForCleanup_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricDecisionInstanceIdsForCleanup", "selectHistoricDecisionInstanceIdsForCleanup_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricCaseInstanceIdsForCleanup", "selectHistoricCaseInstanceIdsForCleanup_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricBatchIdsForCleanup", "selectHistoricBatchIdsForCleanup_oracle");
      
            addDatabaseSpecificStatement(DMDBMS, "deleteAttachmentsByRemovalTime", "deleteAttachmentsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteCommentsByRemovalTime", "deleteCommentsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricActivityInstancesByRemovalTime", "deleteHistoricActivityInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDecisionInputInstancesByRemovalTime", "deleteHistoricDecisionInputInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDecisionInstancesByRemovalTime", "deleteHistoricDecisionInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDecisionOutputInstancesByRemovalTime", "deleteHistoricDecisionOutputInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDetailsByRemovalTime", "deleteHistoricDetailsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteExternalTaskLogByRemovalTime", "deleteExternalTaskLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricIdentityLinkLogByRemovalTime", "deleteHistoricIdentityLinkLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricIncidentsByRemovalTime", "deleteHistoricIncidentsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteJobLogByRemovalTime", "deleteJobLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricProcessInstancesByRemovalTime", "deleteHistoricProcessInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricTaskInstancesByRemovalTime", "deleteHistoricTaskInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricVariableInstancesByRemovalTime", "deleteHistoricVariableInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteUserOperationLogByRemovalTime", "deleteUserOperationLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteByteArraysByRemovalTime", "deleteByteArraysByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricBatchesByRemovalTime", "deleteHistoricBatchesByRemovalTime_oracle");
      
            constants = new HashMap<String, String>();
            constants.put("constant.event", "cast('event' as nvarchar2(255))");
            constants.put("constant.op_message", "NEW_VALUE_ || '_|_' || PROPERTY_");
            constants.put("constant_for_update", "for update");
            constants.put("constant.datepart.quarter", "'Q'");
            constants.put("constant.datepart.month", "'MM'");
            constants.put("constant.datepart.minute", "'MI'");
            constants.put("constant.null.startTime", "null START_TIME_");
            constants.put("constant.varchar.cast", "'${key}'");
            constants.put("constant.integer.cast", "NULL");
            constants.put("constant.null.reporter", "NULL AS REPORTER_");
            dbSpecificConstants.put(DMDBMS, constants);
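
    The limitBefore/limitAfter fragments registered above wrap paged queries in Oracle-style ROWNUM pagination. For illustration, a query paged with #{firstRow}/#{lastRow} is assembled roughly as follows (this SQL is simply the fragments above concatenated around an inner query; the task query is an illustrative example):

    select * from ( select a.*, ROWNUM rnum from (
        -- inner query produced by the engine, e.g. a task query
        SELECT * FROM ACT_RU_TASK ORDER BY ID_
      ) a where ROWNUM < #{lastRow}) where rnum  >= #{firstRow}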
      

    Save and compile with Jar Editor, then rebuild the JAR. Make sure the JDK version matches the JDK version Camunda is built for (JDK 11).

  4. Add the DM JDBC driver dependency.

    Taking Maven as an example, add the following inside <dependencies></dependencies> in pom.xml:

    <dependency>
       <groupId>com.dameng</groupId>
       <artifactId>Dm8JdbcDriver18</artifactId>
       <version>8.1.1.49</version>
    </dependency>
    

    Check the DM official website for the latest driver version.

  5. Configure the database connection.

    Using a YAML file as an example:

    spring:
      datasource:
        # database driver
        driver-class-name: dm.jdbc.driver.DmDriver
        # database IP address and port
        url: jdbc:dm://127.0.0.1:5236/DMSERVER?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf-8
        username: <database username>
        password: <database password>

    # specify the database type for the Camunda framework
    camunda:
      bpm:
        database:
          type: dm
    

Adapting Camunda to the KingbaseES database

  1. Open the dependency org.camunda.bpm:camunda-engine in IntelliJ IDEA and modify the JAR directly with the Jar Editor plugin.
  2. Modify the file “org/camunda/bpm/engine/impl/cfg/ProcessEngineConfigurationImpl” inside the JAR.

    In the getDefaultDatabaseTypeMappings method, add the mapping marked with the // added comment:
    protected static Properties getDefaultDatabaseTypeMappings() {
        Properties databaseTypeMappings = new Properties();
        databaseTypeMappings.setProperty("H2", "h2");
        databaseTypeMappings.setProperty(MY_SQL_PRODUCT_NAME, "mysql");
        databaseTypeMappings.setProperty(MARIA_DB_PRODUCT_NAME, "mariadb");
        databaseTypeMappings.setProperty("Oracle", "oracle");
        databaseTypeMappings.setProperty(POSTGRES_DB_PRODUCT_NAME, "postgres");
        databaseTypeMappings.setProperty("Microsoft SQL Server", "mssql");
        databaseTypeMappings.setProperty("DB2", "db2");
        databaseTypeMappings.setProperty("DB2", "db2");
        databaseTypeMappings.setProperty("DB2/NT", "db2");
        databaseTypeMappings.setProperty("DB2/NT64", "db2");
        databaseTypeMappings.setProperty("DB2 UDP", "db2");
        databaseTypeMappings.setProperty("DB2/LINUX", "db2");
        databaseTypeMappings.setProperty("DB2/LINUX390", "db2");
        databaseTypeMappings.setProperty("DB2/LINUXX8664", "db2");
        databaseTypeMappings.setProperty("DB2/LINUXZ64", "db2");
        databaseTypeMappings.setProperty("DB2/400 SQL", "db2");
        databaseTypeMappings.setProperty("DB2/6000", "db2");
        databaseTypeMappings.setProperty("DB2 UDB iSeries", "db2");
        databaseTypeMappings.setProperty("DB2/AIX64", "db2");
        databaseTypeMappings.setProperty("DB2/HPUX", "db2");
        databaseTypeMappings.setProperty("DB2/HP64", "db2");
        databaseTypeMappings.setProperty("DB2/SUN", "db2");
        databaseTypeMappings.setProperty("DB2/SUN64", "db2");
        databaseTypeMappings.setProperty("DB2/PTX", "db2");
        databaseTypeMappings.setProperty("DB2/2", "db2");
        databaseTypeMappings.setProperty("KingbaseEs", "kingbase8"); // added
        return databaseTypeMappings;
    }

    Save and compile with Jar Editor, then rebuild the JAR. Make sure the JDK version matches the JDK version Camunda is built for (JDK 11).

    Compilation errors can occur in some cases, typically because:
    • the versions of the dependencies used by the Camunda framework and by the source code are incompatible, or
    • the wrong JDK version is being used.

  3. Modify the file “org/camunda/bpm/engine/impl/db/sql/DbSqlSessionFactory” inside the JAR.

    1. Add the following class constants (marked // added) to DbSqlSessionFactory:
      public class DbSqlSessionFactory implements SessionFactory {
      
      public static final String MSSQL = "mssql";
      public static final String DB2 = "db2";
      public static final String ORACLE = "oracle";
      public static final String H2 = "h2";
      public static final String MYSQL = "mysql";
      public static final String POSTGRES = "postgres";
      public static final String MARIADB = "mariadb";
      public static final String KINGBASEES = "kingbase8";   // added
      public static final String[] SUPPORTED_DATABASES = {MSSQL, DB2, ORACLE, H2, MYSQL, POSTGRES, MARIADB, KINGBASEES};  // KINGBASEES added to the array
      ...
      }
    2. Add the following code to the static initializer of DbSqlSessionFactory (as with DM, the Oracle statement variants are reused because KingbaseES is largely Oracle-compatible):
      databaseSpecificLimitBeforeStatements.put(KINGBASEES, "select * from ( select a.*, ROWNUM rnum from (");
      optimizeDatabaseSpecificLimitBeforeWithoutOffsetStatements.put(KINGBASEES, "select * from ( select a.*, ROWNUM rnum from (");
      databaseSpecificLimitAfterStatements.put(KINGBASEES, "  ) a where ROWNUM < #{lastRow}) where rnum  >= #{firstRow}");
      optimizeDatabaseSpecificLimitAfterWithoutOffsetStatements.put(KINGBASEES, "  ) a where ROWNUM <= #{maxResults})");
      databaseSpecificLimitBeforeWithoutOffsetStatements.put(KINGBASEES, "");
      databaseSpecificLimitAfterWithoutOffsetStatements.put(KINGBASEES, "AND ROWNUM <= #{maxResults}");
      databaseSpecificInnerLimitAfterStatements.put(KINGBASEES, databaseSpecificLimitAfterStatements.get(KINGBASEES));
      databaseSpecificLimitBetweenStatements.put(KINGBASEES, "");
      databaseSpecificLimitBetweenFilterStatements.put(KINGBASEES, "");
      databaseSpecificLimitBetweenAcquisitionStatements.put(KINGBASEES, "");
      
      databaseSpecificOrderByStatements.put(KINGBASEES, defaultOrderBy);
      databaseSpecificLimitBeforeNativeQueryStatements.put(KINGBASEES, "");
      databaseSpecificDistinct.put(KINGBASEES, "distinct");
      databaseSpecificLimitBeforeInUpdate.put(KINGBASEES, "");
      databaseSpecificLimitAfterInUpdate.put(KINGBASEES, "");
      databaseSpecificAuthJoinStart.put(KINGBASEES, defaultAuthOnStart);
      databaseSpecificNumericCast.put(KINGBASEES, "");
      databaseSpecificCountDistinctBeforeStart.put(KINGBASEES, defaultDistinctCountBeforeStart);
      databaseSpecificCountDistinctBeforeEnd.put(KINGBASEES, defaultDistinctCountBeforeEnd);
      databaseSpecificCountDistinctAfterEnd.put(KINGBASEES, defaultDistinctCountAfterEnd);
      
      databaseSpecificEscapeChar.put(KINGBASEES, defaultEscapeChar);
      
      databaseSpecificDummyTable.put(KINGBASEES, "FROM DUAL");
      databaseSpecificBitAnd1.put(KINGBASEES, "BITAND(");
      databaseSpecificBitAnd2.put(KINGBASEES, ",");
      databaseSpecificBitAnd3.put(KINGBASEES, ")");
      databaseSpecificDatepart1.put(KINGBASEES, "to_number(to_char(");
      databaseSpecificDatepart2.put(KINGBASEES, ",");
      databaseSpecificDatepart3.put(KINGBASEES, "))");
      
      databaseSpecificTrueConstant.put(KINGBASEES, "1");
      databaseSpecificFalseConstant.put(KINGBASEES, "0");
      databaseSpecificIfNull.put(KINGBASEES, "NVL");
      
      databaseSpecificDaysComparator.put(KINGBASEES, "${date} <= #{currentTimestamp} - ${days}");
      
      databaseSpecificCollationForCaseSensitivity.put(KINGBASEES, "");
      
      databaseSpecificAuthJoinEnd.put(KINGBASEES, defaultAuthOnEnd);
      databaseSpecificAuthJoinSeparator.put(KINGBASEES, defaultAuthOnSeparator);
      
      databaseSpecificAuth1JoinStart.put(KINGBASEES, defaultAuthOnStart);
      databaseSpecificAuth1JoinEnd.put(KINGBASEES, defaultAuthOnEnd);
      databaseSpecificAuth1JoinSeparator.put(KINGBASEES, defaultAuthOnSeparator);
      databaseSpecificExtractTimeUnitFromDate.put(KINGBASEES, defaultExtractTimeUnitFromDate);
      
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricProcessInstanceDurationReport", "selectHistoricProcessInstanceDurationReport_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricTaskInstanceDurationReport", "selectHistoricTaskInstanceDurationReport_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricTaskInstanceCountByTaskNameReport", "selectHistoricTaskInstanceCountByTaskNameReport_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectFilterByQueryCriteria", "selectFilterByQueryCriteria_oracleDb2");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricProcessInstanceIdsForCleanup", "selectHistoricProcessInstanceIdsForCleanup_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricDecisionInstanceIdsForCleanup", "selectHistoricDecisionInstanceIdsForCleanup_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricCaseInstanceIdsForCleanup", "selectHistoricCaseInstanceIdsForCleanup_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricBatchIdsForCleanup", "selectHistoricBatchIdsForCleanup_oracle");
      
      addDatabaseSpecificStatement(KINGBASEES, "deleteAttachmentsByRemovalTime", "deleteAttachmentsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteCommentsByRemovalTime", "deleteCommentsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricActivityInstancesByRemovalTime", "deleteHistoricActivityInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDecisionInputInstancesByRemovalTime", "deleteHistoricDecisionInputInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDecisionInstancesByRemovalTime", "deleteHistoricDecisionInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDecisionOutputInstancesByRemovalTime", "deleteHistoricDecisionOutputInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDetailsByRemovalTime", "deleteHistoricDetailsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteExternalTaskLogByRemovalTime", "deleteExternalTaskLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricIdentityLinkLogByRemovalTime", "deleteHistoricIdentityLinkLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricIncidentsByRemovalTime", "deleteHistoricIncidentsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteJobLogByRemovalTime", "deleteJobLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricProcessInstancesByRemovalTime", "deleteHistoricProcessInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricTaskInstancesByRemovalTime", "deleteHistoricTaskInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricVariableInstancesByRemovalTime", "deleteHistoricVariableInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteUserOperationLogByRemovalTime", "deleteUserOperationLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteByteArraysByRemovalTime", "deleteByteArraysByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricBatchesByRemovalTime", "deleteHistoricBatchesByRemovalTime_oracle");
      
      constants = new HashMap<String, String>();
      constants.put("constant.event", "cast('event' as nvarchar2(255))");
      constants.put("constant.op_message", "NEW_VALUE_ || '_|_' || PROPERTY_");
      constants.put("constant_for_update", "for update");
      constants.put("constant.datepart.quarter", "'Q'");
      constants.put("constant.datepart.month", "'MM'");
      constants.put("constant.datepart.minute", "'MI'");
      constants.put("constant.null.startTime", "null START_TIME_");
      constants.put("constant.varchar.cast", "'${key}'");
      constants.put("constant.integer.cast", "NULL");
      constants.put("constant.null.reporter", "NULL AS REPORTER_");
      dbSpecificConstants.put(KINGBASEES, constants);
      

    Save and compile with Jar Editor, then rebuild the JAR. Make sure the JDK version matches the JDK version Camunda is built for (JDK 11).

  4. Add the KingbaseES JDBC driver.

    Download the driver package from the KingbaseES official website and import it locally, e.g. as sketched below.
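
    One common way to do this with Maven (a sketch; the file name, version, and coordinates are illustrative, adjust them to the driver you actually downloaded): install the JAR into the local repository, then declare it as a regular dependency.

    mvn install:install-file -Dfile=kingbase8-8.6.0.jar \
        -DgroupId=cn.com.kingbase -DartifactId=kingbase8 \
        -Dversion=8.6.0 -Dpackaging=jar

    Then reference it in pom.xml:

    <dependency>
       <groupId>cn.com.kingbase</groupId>
       <artifactId>kingbase8</artifactId>
       <version>8.6.0</version>
    </dependency>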

  5. Configure the database connection.

    Using a YAML file as an example:

    spring:
      datasource:
        # database driver
        driver-class-name: com.kingbase8.Driver
        # database IP address and port; adjust the URL to your actual database host
        url: jdbc:kingbase8://127.0.0.1:54321/<database name>
        username: <database username>
        password: <database password>

    # specify the database type for the Camunda framework
    camunda:
      bpm:
        database:
          type: kingbase8
    

Adapting Camunda to the Vastbase database

Vastbase is fully compatible with MySQL, so adapting Camunda to it only requires adding the MySQL driver dependency and updating the database settings in the configuration file. For example, if Camunda is embedded in a Spring Boot project, the migration can be completed by editing the configuration values shown below.
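
With Maven, the MySQL driver can be added the same way as the DM driver earlier (a sketch; pick a driver version compatible with your setup):

<dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <version>8.0.33</version>
</dependency>

With the driver on the classpath, adjust the datasource configuration. Using a YAML file as an example: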

spring.datasource:
  # MySQL database driver
  driver-class-name: com.mysql.cj.jdbc.Driver
  # the URL declares mysql as the database type
  url: jdbc:mysql://{Vastbase IP address}:{Vastbase port}/{database name}
  # example: jdbc:mysql://127.0.0.1:2881/CamundaProject
  username: {Vastbase username}
  password: {Vastbase password}
# database type used by the Camunda framework:
camunda:
  bpm:
    database:
      type: mysql
      # set the database type to mysql

After the configuration changes are complete, start the project. If the corresponding tables are created in the database, the adaptation succeeded; a quick way to verify this is sketched below.
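
A minimal verification sketch (assuming Camunda's standard ACT_ table prefix; ACT_GE_PROPERTY is created together with the other engine tables and records the schema version):

-- should return rows including the schema.version property
SELECT * FROM ACT_GE_PROPERTY;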