This manual describes the operating tools of GridDB.
It is written for the system designers responsible for building GridDB systems and the system administrators responsible for their operation management.
The contents of each chapter are as follows:
Service
This section explains the GridDB service, which starts GridDB automatically during OS start-up.
Operating commands
This section explains the various operating commands of GridDB.
Cluster operation control command interpreter (gs_sh)
This section explains the GridDB cluster operation control functions and the command interpreter (gs_sh) to provide data operations.
Integrated operation control GUI (gs_admin)
This section explains the web-based integrated operation control GUI (gs_admin) integrating the operating functions of a GridDB cluster.
Export/import tools
This section explains the export/import tools of GridDB.
The procedure to install and use the GridDB service is as follows.
See the “GridDB Administrator Guide” for the procedure to install GridDB and configure a GridDB node.
The table below shows the kinds of files used by the GridDB service.
Type | Meaning |
---|---|
systemd unit file | systemd unit definition file. It is installed in /usr/lib/systemd/system/gridstore.service by the server package of GridDB and registered on the system as GridDB service. |
Service script | Script file executed automatically during OS startup. It is installed in /usr/griddb/bin/gridstore by the server package of GridDB. |
PID file | File containing only the process ID (PID) of the gsserver process. It is created in $GS_HOME/conf/gridstore.pid when the gsserver process is started. |
Start configuration file | File containing the parameters that can be set for the service. It is installed in /etc/sysconfig/gridstore/gridstore.conf by the server package of GridDB. |
The GridDB service operation can be controlled with parameters. A list of the parameters is given below.
Property | Default | Note |
---|---|---|
GS_USER | admin | GridDB user name |
GS_PASSWORD | admin | GS_USER password |
CLUSTER_NAME | INPUT_YOUR_CLUSTER_NAME_HERE | Cluster name to join |
MIN_NODE_NUM | 1 | Number of nodes constituting a cluster |
To change the parameters, edit the start configuration file (/etc/sysconfig/gridstore/gridstore.conf).
When the server package is updated or uninstalled, the start configuration file is neither overwritten nor removed.
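For illustration, a start configuration file for a three-node cluster might look like the sketch below; the cluster name "myCluster" and the node count are hypothetical values, not defaults.

```shell
# /etc/sysconfig/gridstore/gridstore.conf -- sketch with hypothetical values
GS_USER=admin
GS_PASSWORD=admin
CLUSTER_NAME=myCluster   # cluster name to join (hypothetical)
MIN_NODE_NUM=3           # number of nodes constituting the cluster
```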
[Notes]
- After expanding a cluster, MIN_NODE_NUM of all the nodes needs to be changed to the number of nodes constituting the expanded cluster.
- See the boot log (/var/log/boot.log) and the operating command log ($GS_HOME/log) for details of the service log.
GridDB service commands are shown below.
[Notes]
- Set the environment variable GS_SSL_MODE to enable SSL connection for the communication used to issue operating commands. For details on SSL connection, see the GridDB Features Reference.
- Setting GS_SSL_MODE to REQUIRED validates SSL connection for the communication used to issue operating commands; setting the variable to VERIFY additionally performs server certificate verification.
- When performing server certificate verification (setting GS_SSL_MODE to VERIFY), specify the path to the certificate issued by the Certificate Authority (CA) in the environment variable SSL_CERT_FILE.

vi .bash_profile
GS_SSL_MODE=VERIFY
export GS_SSL_MODE
SSL_CERT_FILE=$GS_HOME/security/ca.crt
export SSL_CERT_FILE
Action:
$ sudo systemctl start gridstore
Set the cluster name in `CLUSTER_NAME`. Set the number of nodes constituting a cluster in `MIN_NODE_NUM`.
[Notes]
Action:
$ sudo systemctl stop gridstore
[Notes]
Action:
$ sudo systemctl status gridstore
Action:
Action:
Service error messages are as shown below.
Code | Message | Meaning |
---|---|---|
F00003 | Json load error | Reading of definition file failed. |
F01001 | Stop service timed out | Stop node process timed out. |
F01002 | Startnode error | An error occurred in the node startup process. |
F01003 | Startnode timed out | Start node process timed out. |
F01004 | Joincluster error | An error occurred in the join cluster process. |
F01005 | Joincluster timed out | Join cluster process timed out. |
F01006 | Leavecluster error | An error occurred in the leave cluster process. |
F02001 | Command execution error | An error occurred in the command execution. |
F02002 | Command execution timed out | Command execution timed out. |
[Memo]
The following commands are available in GridDB.
Type | Functions | Command | RPM package |
---|---|---|---|
(1) Start/stop node | start node | gs_startnode | server |
stop node | gs_stopnode | client | |
(2) User management | Registration of administrator user | gs_adduser | server |
Deletion of administrator user | gs_deluser | server | |
Change the password of an administrator user | gs_passwd | server | |
(3) Cluster management | Joining a cluster configuration | gs_joincluster | client |
Leaving a cluster configuration | gs_leavecluster | client | |
Stopping a cluster | gs_stopcluster | client | |
Getting cluster configuration data | gs_config | client | |
Getting node status | gs_stat | client | |
Adding a node to a cluster | gs_appendcluster | client | |
Manual failover of a cluster | gs_failovercluster | client | |
Getting partition data | gs_partition | client | |
Increasing the no. of nodes of the cluster | gs_increasecluster | client | |
Set up autonomous data redistribution of a cluster | gs_loadbalance | client | |
Set up data redistribution goal of a cluster | gs_goalconf | client | |
Controlling the checkpoint of the node | gs_checkpoint | server | |
(4) Log data | Displaying recent event logs | gs_logs | client |
Displaying and changing the event log output level | gs_logconf | client | |
(5) Backup/restoration | backup execution | gs_backup | server |
Check backup data | gs_backuplist | server | |
Backup/restoration | gs_restore | server | |
(6) Import/export | Import | gs_import | client |
Export | gs_export | client | |
(7) Maintenance | Displaying and changing parameters | gs_paramconf | client |
Managing user cache for authentication | gs_authcache | client |
[Memo]
[Command option]
The options below are common options that can be used in all commands.
Options | Note |
---|---|
-h|--help | Display the command help. |
--version | Display the version of the operating command. |
[Example]
Display the command help and version.
$ gs_startnode -h
Usage: gs_startnode [-u USER/PASS [-w [WAIT_TIME]] ]
Start the GridDB node.
$ gs_stat --version
gs_stat [V5.0.00]
The options below are common options that can be used in some of the commands.
Options | Note | |
---|---|---|
[-s <Server>[:<Port no.>] | -p <Port no.>] | Specify the server name (address) and port no., that is, the connection address and port of the operating command. The value "localhost (127.0.0.1):10040" is used by default. |
-u <User name>/<Password> | Specify authentication user and password. |
-w|--wait [<No. of sec>] | Wait for the process to end. There is no time limit if the time is not set or if the time is set to 0. |
-a|--address-type <Address type> | Specify the service type of the port and address to display. system: connection address of operating command. cluster: reception address used for cluster administration. transaction: reception address for transaction process. sync: reception address used for synchronization process. |
--no-proxy | If specified, the proxy will not be used. |
--ssl|--ssl-verify | Specifying --ssl validates SSL connection for the communication for operating commands; specifying --ssl-verify additionally performs server certificate verification. |
[Memo]
[Memo]
- Instead of the --ssl and --ssl-verify options, the environment variable GS_SSL_MODE is available. Specifying REQUIRED for GS_SSL_MODE validates SSL connection for the communication for operating commands; specifying VERIFY additionally performs server certificate verification.
[Notes]
- When using the environment variable GS_SSL_MODE, the following is required: when performing server certificate verification (setting GS_SSL_MODE to VERIFY), specify the path to the certificate issued by the Certificate Authority (CA) in the environment variable SSL_CERT_FILE.

vi .bash_profile
GS_SSL_MODE=VERIFY
export GS_SSL_MODE
SSL_CERT_FILE=$GS_HOME/security/ca.crt
export SSL_CERT_FILE
[Termination status]
The end status of the command is shown below.
[Log file]
Log file of the command will be saved in ${GS_LOG}/<command name>.log.
[Example] If the GS_LOG value is "/var/lib/gridstore/log" (the default) and the "gs_startnode" command is executed, the log file /var/lib/gridstore/log/gs_startnode.log is created.
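The naming rule can be sketched in shell; this is a trivial illustration of the rule above, simply joining the GS_LOG directory and the command name.

```shell
# Sketch: derive the log file path from GS_LOG and the command name.
GS_LOG=/var/lib/gridstore/log   # default value
command=gs_startnode
echo "${GS_LOG}/${command}.log"
# prints /var/lib/gridstore/log/gs_startnode.log
```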
[Before using an operating command]
If a proxy variable (http_proxy) has been set up, specify the --no-proxy option, or add the address (group) of the GridDB node to no_proxy to exclude it from the proxy. Because an operating command performs REST/HTTP communication, it may connect to the proxy server by mistake, which prevents the operating command from working.
$ export http_proxy=proxy.example.net:8080
$ gs_paramconf -u admin/admin --show storeMemoryLimit
A00110: Check the network setting. (HTTP Error 403: Forbidden)
$ gs_paramconf -u admin/admin --show storeMemoryLimit --no-proxy
"1024MB"
[To compose a cluster]
A cluster is composed of one or more nodes, one of which is the master and the rest of which are followers.
In a cluster configuration, the number of nodes already participating in a cluster and the number of nodes constituting a cluster are important. The number of nodes already participating in a cluster is the actual number of nodes joined to the cluster. The number of nodes constituting a cluster is the number of nodes that can join the cluster which is specified in the gs_joincluster command.
The number of nodes already participating in a cluster and the number of nodes constituting a cluster can be checked by executing a gs_stat command on the master node, with the values being /cluster/activeCount and /cluster/designatedCount respectively.
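For example, the two values could be pulled out of saved gs_stat output as sketched below; the sample JSON is illustrative, and in practice the file would hold the output of `gs_stat -u admin/admin` run against the master node.

```shell
# Sketch: extract activeCount and designatedCount from saved gs_stat output.
# The one-line sample JSON below stands in for real gs_stat output.
cat > /tmp/stat.json <<'EOF'
{"cluster": {"activeCount": 2, "designatedCount": 3}}
EOF
active=$(sed -n 's/.*"activeCount": \([0-9]*\).*/\1/p' /tmp/stat.json)
designated=$(sed -n 's/.*"designatedCount": \([0-9]*\).*/\1/p' /tmp/stat.json)
echo "participating=$active constituting=$designated"
# prints participating=2 constituting=3
```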
The main procedure to create/change a cluster configuration is shown below for reference purposes. See the following sections for details of each command.
Execute the GridDB start node command on the machine executing the node. This command needs to be executed for each GridDB node.
Command
Command | Option/argument |
---|---|
gs_startnode | [-w|--wait [<No. of sec>] -u <User name>/<Password>] [--releaseUnusedFileBlocks] [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--releaseUnusedFileBlocks | Deallocate unused file blocks. |
[Memo]
The following command is used to stop the GridDB node. To stop a node, the GridDB cluster management process needs to be stopped first.
Command
Command | Option/argument |
---|---|
gs_stopnode | [-f|--force] [-k|--kill] [-w|--wait [<No. of sec>]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-f|--force | Stop a node by force. |
-k|--kill | Force the node process of a local machine to stop. |
[Memo]
User management performs registration, deletion, and password changes for GridDB administrator users.
The default user below exists after installation.
Default user
User | Password | Use case example |
---|---|---|
admin | admin | Operation administrator user, for executing operation commands |
system | manager | Application user, for client operation |
[Notes]
- The administrator users are saved in the user definition file (/var/lib/gridstore/conf/password).
Command
Command | Option/argument |
---|---|
gs_adduser | <User name> [-p|--password <Password>]
Options
Options | Note | |
---|---|---|
<User name> | Specify the name of the user to be created. The username should start with “gs#”, and only one or more ASCII alphanumeric characters and the underscore sign “_” can be used after “gs#”. | |
-p|--password <Password> | Specify the user password. A prompt to input the password interactively appears by default. |
[Memo]
[Example]
Add an administrator user (“user name (gs#someone)”, “password (opensesame)”) to the user definition file.
$ gs_adduser -p opensesame gs#someone
$ gs_stopcluster -u admin/admin
Execute the following on all the nodes:
$ gs_stopnode -u admin/admin
$ cp [User definition file with additional users] /var/lib/gridstore/conf/password
$ gs_startnode
$ gs_joincluster -c clsA -n XX -u admin/admin
Command
Command | Option/argument |
---|---|
gs_deluser | <User name> |
[Memo]
[Example]
Delete the specified administrator user (gs#someone).
$ gs_deluser gs#someone
$ gs_stopcluster -u admin/admin
Execute the following on all the nodes:
$ gs_stopnode -u admin/admin
$ cp [User definition file with deleted users] /var/lib/gridstore/conf/password
$ gs_startnode
$ gs_joincluster -c clsA -n XX -u admin/admin
Command
Command | Option/argument |
---|---|
gs_passwd | <User name> [-p|--password <Password>]
Options
Options | Note | |
---|---|---|
<User name> | The name of the administrator user whose password is going to be changed. | |
-p|--password <Password> | Specify the password of the administrator user. A prompt to input the password interactively appears by default. |
[Memo]
[Example]
Change the password of a specified administrator user (“user name (gs#someone)”) to foobarxyz.
$ gs_passwd -p foobarxyz gs#someone
$ gs_stopcluster -u admin/admin
Execute the following on all the nodes:
$ gs_stopnode -u admin/admin
$ cp [Revised user definition file] /var/lib/gridstore/conf/password
$ gs_startnode
$ gs_joincluster -c clsA -n XX -u admin/admin
When composing a GridDB cluster, the nodes need to be attached (joined) to the cluster.
Command
Command | Option/argument |
---|---|
gs_joincluster | [-c|--clusterName <Cluster name>] [-n|--nodeNum <No. of nodes constituting a cluster>] [-w|--wait [<No. of sec>]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-c|--clusterName <Cluster name> | Specify the cluster name. Default value is "defaultCluster". |
-n|--nodeNum <No. of nodes constituting a cluster> | Specify the number of nodes of the cluster to be composed. Default value is 1 (single node configuration). |
[Memo]
- If the cluster name (/cluster/clusterName) has been set up in the cluster definition file, an error will occur if the specified cluster name does not match the value set.
[Example] Compose a 3-node cluster with the cluster name "example_three_nodes_cluster" using nodes A - C
Start the nodes constituting the cluster and attach them to the cluster.
Execute on node A
$ gs_startnode
$ gs_joincluster -c example_three_nodes_cluster -n 3 -u admin/admin -w
Execute on node B
$ gs_startnode
$ gs_joincluster -c example_three_nodes_cluster -n 3 -u admin/admin -w
Execute on node C
$ gs_startnode
$ gs_joincluster -c example_three_nodes_cluster -n 3 -u admin/admin -w
Alternatively to starting and joining each node locally (as shown above), the nodes can be started separately and all the join operations then issued from one specific node (as shown below).
Execute on node A - C respectively
$ gs_startnode
Execute on node A
$ gs_joincluster -c example_three_nodes_cluster -n 3 -s <node B's server address> -u admin/admin
$ gs_joincluster -c example_three_nodes_cluster -n 3 -s <node C's server address> -u admin/admin
$ gs_joincluster -c example_three_nodes_cluster -n 3 -u admin/admin -w
The following command is used to detach a node from a cluster.
Command
Command | Option/argument |
---|---|
gs_leavecluster | [-f|--force] [-w|--wait [<No. of sec>]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-f|--force | Detach a node by force. |
[Memo]
A cluster will be stopped automatically if the number of nodes participating in a cluster is reduced to less than half the number of nodes constituting the cluster due to nodes leaving the cluster.
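The "less than half" rule can be illustrated numerically; the sketch below only restates the statement above (it is not server logic), using a hypothetical 5-node cluster.

```shell
# Sketch: the cluster keeps running while the participating nodes
# are at least half of the nodes constituting the cluster.
designated=5           # nodes constituting the cluster
for active in 3 2; do
  if [ $((active * 2)) -ge "$designated" ]; then
    echo "active=$active: cluster continues"
  else
    echo "active=$active: cluster stops automatically"
  fi
done
```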
[Example]
Execute a leave cluster command on the node that you want to detach from the cluster.
$ gs_leavecluster -u admin/admin
The following command is used to stop a cluster.
Command
Command | Option/argument |
---|---|
gs_stopcluster | [-w|--wait [<No. of sec>]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
[Memo]
[Example]
Execute a cluster stop command.
$ gs_stopcluster -u admin/admin
The following command is used to get the cluster configuration data (data on list of nodes joined to a cluster).
Command
Command | Option/argument |
---|---|
gs_config | [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [-a|--address-type <Address type>] [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-a|--address-type <Address type> | Specify the service type of the port and address to display. system: connection address of operating command. cluster: reception address used for cluster administration. transaction: reception address for transaction process. sync: reception address used for synchronization process. |
[Memo]
[Example]
The following data is output when the cluster is composed of 3 nodes and cluster configuration data is acquired from the master.
$ gs_config -u admin/admin
{
"follower": [ // [array] follower data
{
"address": "192.168.11.10", // [string] connection address of operating command
"port": 10040 // [number] connection port of operating command
},
{
"address": "192.168.11.11",
"port": 10040
}
],
"master": { // master data
"address": "192.168.11.12", // [string] connection address of operating command
"port": 10040 // [number] connection port of operating command
},
"multicast": { // multicast data
"address": "239.0.0.20", // [string] address for multi-cast distribution to client
"port": 31999 // [number] Port for multi-cast distribution to client
},
"self": { // own node data
"address": "192.168.11.12", // [string] connection address of operating command
"port": 10040, // [number] connection port of operating command
"status": "ACTIVE" // [string] system status
}
}
The following command gets the cluster data (cluster configuration data and internal data), or backup progress status.
Command
Command | Option/argument |
---|---|
gs_stat | [-t|--type <Type>] [-a|--address-type <Address type>] [--member] [--csv] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-t|--type <Type> | Display data of the specified type. backup: display the backup status. |
-a|--address-type <Address type> | Specify the service type of the port and address to display. system: connection address of operating command. cluster: reception address used for cluster administration. transaction: reception address for transaction process. sync: reception address used for synchronization process. |
--csv | Cluster information is displayed in CSV format. |
[Memo]
[Example]
The following data is output when cluster data is acquired by nodes joined to the cluster in operation.
$ gs_stat -u admin/admin
{
:
:
"cluster": {
"activeCount": 1,
"clusterName": "defaultCluster",
"clusterStatus": "MASTER",
"designatedCount": 1,
"loadBalancer": "ACTIVE",
"master": {
"address": "192.168.10.11",
"port": 10010
},
"nodeList": [
{
"address": "192.168.10.11",
"port": 10010
}
],
"nodeStatus": "ACTIVE",
"partitionStatus": "NORMAL",
"startupTime": "2014-08-29T09:56:20+0900",
"syncCount": 3
},
:
:
}
Add a new node to a cluster in operation.
Command
Command | Option/argument |
---|---|
gs_appendcluster | --cluster <Server>:<Port no.> [-w|--wait [<No. of sec>]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--cluster <Server>:<Port no.> | Specify the server name (address) and port no. of the node to be added to the cluster. |
[Memo]
[Example]
Add a new node to a cluster in operation.
Check the status of the cluster to add the nodes
$ gs_stat -s 192.168.33.29:10040 -u admin/admin
{
:
"cluster":{ //cluster-related
"activeCount":5, //number of nodes already participating in a cluster
"clusterName":"function_1", //cluster name
"clusterStatus":"MASTER", //cluster status
"designatedCount":5, //number of nodes constituting a cluster
:
}
Check that the number of nodes constituting the cluster equals the number of nodes already participating in the cluster.
If the number of nodes constituting the cluster > the number of nodes already participating in the cluster, execute gs_joincluster (add node to cluster configuration) first.
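The decision can be sketched as a simple comparison; the counts are hardcoded here for illustration, and in practice they come from the gs_stat output shown above.

```shell
# Sketch: decide whether gs_appendcluster can be used, based on the two counts.
designatedCount=5   # number of nodes constituting the cluster
activeCount=5       # number of nodes already participating
if [ "$designatedCount" -gt "$activeCount" ]; then
  echo "run gs_joincluster on the missing nodes first"
else
  echo "counts match: ready to append a new node"
fi
# prints counts match: ready to append a new node
```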
Start the node you want to add and specify the server address and port no. of the node joined to the cluster in operation.
$ gs_startnode
$ gs_appendcluster --cluster 192.168.33.29:10040 -u admin/admin
Check the cluster status to see if the node has been added successfully to the cluster.
$ gs_stat -u admin/admin
{
:
"cluster":{ //cluster-related
"activeCount":6, //number of nodes already participating in a cluster
"clusterName":"function_1", //cluster name
"clusterStatus":"MASTER", //cluster status
"designatedCount":6, //number of nodes constituting a cluster
:
}
The following command is used to execute GridDB cluster failover.
Command
Command | Option/argument |
---|---|
gs_failovercluster | [--repair] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--repair | Accept the data loss and execute a forced failover. |
[Memo]
[Example]
Execute a cluster failover.
$ gs_failovercluster -u admin/admin
The following command is used to display the partition data of a GridDB node.
Command
Command | Option/argument |
---|---|
gs_partition | [-n|--partitionId <Partition ID>] [--loss] [-a|--address-type <Address type>] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-n|--partitionId <Partition ID> | Specify the partition ID to display data. (Display all data by default) |
--loss | Display only data from missing partitions. |
-a|--address-type <Address type> | Specify the service type of the port and address to display. system: connection address of operating command. cluster: reception address used for cluster administration. transaction: reception address for transaction process. sync: reception address used for synchronization process. |
[Memo]
[Example]
Get the partition data of a specific node of a cluster in operation.
$ gs_partition -u admin/admin
[
{
"backup": [],
"catchup": [],
"maxLsn": 300008,
"owner": {
"address": "192.168.11.10",
"lsn": 300008,
"port": 10010
},
"pId": "0",
"status": "ON"
},
:
]
Increase the no. of nodes of the GridDB cluster.
Command
Command | Option/argument |
---|---|
gs_increasecluster | [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
[Memo]
[Example]
Increase the no. of nodes of the GridDB cluster and append node to the cluster.
Confirm the cluster status.
$ gs_stat -s 192.168.33.29:10040 -u admin/admin
{
:
"cluster":{ //cluster-related
"activeCount":5, //number of nodes already participating in a cluster
"clusterName":"function_1", //cluster name
"clusterStatus":"MASTER", //cluster status
"designatedCount":5, //number of nodes constituting a cluster
:
}
Check that the number of nodes constituting the cluster equals the number of nodes already participating in the cluster.
Start the node to be added, and execute the gs_joincluster command with the number of nodes after expansion (6 nodes).
$ gs_startnode -u admin/admin -w
$ gs_joincluster -u admin/admin -c function_1 -n 6
Execute the gs_increasecluster for the cluster to be expanded.
$ gs_increasecluster -s 192.168.33.29:10040 -u admin/admin
Confirm that the node to be expanded has been added to the cluster.
$ gs_stat -u admin/admin
{
:
"cluster":{ //cluster-related
"activeCount":6, //number of nodes already participating in a cluster
"clusterName":"function_1", //cluster name
"clusterStatus":"MASTER", //cluster status
"designatedCount":6, //number of nodes constituting a cluster
:
}
Enable/disable autonomous data redistribution of a GridDB cluster, or display the setting. When stopping nodes and rejoining them to a cluster, as in a rolling upgrade, disabling autonomous data redistribution eliminates redundant redistribution processing and reduces the operational load.
Command
Command | Option/argument |
---|---|
gs_loadbalance | [--on|--off] [--cluster] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--on|--off | Enable (--on) or disable (--off) autonomous data redistribution. If these options are omitted, the current setting value is displayed. |
--cluster | Specifying this option applies the setting to all nodes of the cluster. If this option is omitted, the setting is applied only to the specified node. |
[Memo]
[Example]
Confirm the settings of autonomous data redistribution on all nodes in a cluster.
$ gs_loadbalance -s 192.168.33.29:10040 -u admin/admin --cluster
192.168.33.29 ACTIVE
192.168.33.30 ACTIVE
192.168.33.31 ACTIVE
Disable the setting of the node, "192.168.33.31".
$ gs_loadbalance -s 192.168.33.31:10040 -u admin/admin --off
Enable/disable autonomous data redistribution of a GridDB cluster, display the current data redistribution goal, or set the goal manually. These commands are used during a rolling upgrade to detach a node safely from the cluster.
Command
Command | Option/argument |
---|---|
gs_goalconf | [--on|--off] [--cluster] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Command | Option/argument |
---|---|
gs_goalconf | --manual [[--set JSON_FILE | --switch PARTITION_ID | --leaveNode HOST[:PORT]] [--cluster]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--on|--off | Enable (--on) or disable (--off) autonomous data redistribution. If these options are omitted, the current setting value is displayed. |
--cluster | Specifying this option applies the setting to all nodes of the cluster. If this option is omitted, the setting is applied only to the specified node. |
--manual | Display the current data redistribution goal. When setting a data redistribution goal, also specify one of the following options: --set, --switch, or --leaveNode. |
--set JSON_FILE | Set the specified JSON file as the data redistribution goal. |
--switch PARTITION_ID | Set a data redistribution goal in which the owner and the backup of the specified partition ID are swapped. |
--leaveNode HOST[:PORT] | Set a data redistribution goal in which the owner and the backup are swapped for all partitions on the specified node. |
[Example]
Confirm the settings of autonomous data redistribution on all nodes in a cluster.
$ gs_goalconf -s 192.168.33.29:10040 -u admin/admin --cluster
192.168.33.29 ACTIVE
192.168.33.30 ACTIVE
192.168.33.31 ACTIVE
Disable the setting of the node, "192.168.33.31".
$ gs_goalconf -s 192.168.33.31:10040 -u admin/admin --off
Set a data redistribution goal on all the nodes in the cluster so that the node 192.168.33.31 can leave.
$ gs_goalconf -u admin/admin --manual --leaveNode 192.168.33.31 --cluster
Switching 43 owners to backup on 192.168.33.31:10040 ...
Setting goal requests have been sent. Sync operations will be started when loadbalancer is active.
Enable/disable the periodic checkpoint of a GridDB node, or execute manual checkpoint.
Command
Command | Option/argument |
---|---|
gs_checkpoint | [--on|--off] | [--manual [-w|--wait [No. of sec]]] [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--on|--off | Enable (--on) or disable (--off) the periodic checkpoint. If these options are omitted, the current setting value is displayed. |
--manual | Perform the manual checkpoint and wait for it to complete. |
[Memo]
[Example]
Disable the periodic checkpoint
$ gs_checkpoint -u admin/admin --off
Perform the manual checkpoint and wait to complete.
$ gs_checkpoint -u admin/admin --manual -w
...
The manual checkpoint has been completed.
Re-enable the periodic checkpoint
$ gs_checkpoint -u admin/admin --on
The following command is used to get the most recent GridDB event log.
Command
Command | Option/argument |
---|---|
gs_logs | [-l|--lines <No. of rows acquired>] [-g|--ignore <Exclusion key word>] [-s <Server>[:<Port no.>] | -p <Port no.>] [--tracestats] [--slowlogs] [--csv] -u <User name>/<Password> [<First key word> [<Second key word>]] [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
-l|--lines <No. of rows acquired> | Specify the no. of rows to acquire. |
-g|--ignore <Exclusion key word> | Ignore rows that include the exclusion key word. |
--tracestats | Display the performance trace information in an event log in JSON format. |
--slowlogs | Display the slow query information in an event log in JSON format. |
--csv | When specified with --tracestats, display the performance trace information in an event log in CSV format. When specified with --slowlogs, display the slow query information in an event log in CSV format. |
<First key word> [<Second key word>] | Get only rows that contain the key word. |
[Memo]
[Example]
Get the three most recent log rows for checkpoint end (CP_END).
$ gs_logs -u admin/admin CP_END -l 3
2014-08-04T11:02:52.754+0900 NODE1 1143 INFO CHECKPOINT_SERVICE ../server/checkpoint_service.cpp void CheckpointService::runCheckpoint(EventContext&, int32_t, const std::string&) line=866 : [CP_END] mode=NORMAL_CHECKPOINT, backupPath=, commandElapsedMillis=132
2014-08-04T11:22:54.095+0900 NODE1 1143 INFO CHECKPOINT_SERVICE ../server/checkpoint_service.cpp void CheckpointService::runCheckpoint(EventContext&, int32_t, const std::string&) line=866 : [CP_END] mode=NORMAL_CHECKPOINT, backupPath=, commandElapsedMillis=141
2014-08-04T11:42:55.433+0900 NODE1 1143 INFO CHECKPOINT_SERVICE ../server/checkpoint_service.cpp void CheckpointService::runCheckpoint(EventContext&, int32_t, const std::string&) line=866 : [CP_END] mode=NORMAL_CHECKPOINT, backupPath=, commandElapsedMillis=138
The following command is used to display or change the event log output level. Get the list of settings if the argument is not specified.
Command
Command | Option/argument |
---|---|
gs_logconf | [-s <Server>[:<Port no.>] | -p <Port no.>] -u <User name>/<Password> [<Category name> <Output level>] [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
[<Category name> <Output level>] | Specify the category name and output level. |
[Memo]
[Example]
Change the log output level and display the event log status.
$ gs_logconf -u admin/admin CHUNK_MANAGER INFO
$ gs_logconf -u admin/admin
{
"levels": {
"CHECKPOINT_SERVICE": "INFO",
"CHECKPOINT_SERVICE_DETAIL": "ERROR",
"CHUNK_MANAGER": "INFO",
"CLUSTER_OPERATION": "INFO",
:
:
}
}
The following command is used to get GridDB backup data on a per-node basis while continuing services.
A backup of the entire cluster can be carried out while continuing services by backing up all the nodes constituting the cluster in sequence.
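As a sketch, the sequential per-node backup could be driven by a loop like the following; the host names are hypothetical and the commands are only printed (drop `echo`, or run the printed lines, to actually execute the backups on each node in turn).

```shell
# Dry-run sketch: print the backup command for each node in sequence.
# node-a, node-b, node-c are hypothetical host names.
BACKUPNAME=20150425
for host in node-a node-b node-c; do
  echo ssh "$host" gs_backup -u admin/admin "$BACKUPNAME"
done
```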
Command
Command | Option/argument |
---|---|
gs_backup | --mode <Mode> [--skipBaseline] -u <User name>/<Password> <Backup name> [--ssl|--ssl-verify]
Options
Options | Note |
---|---|
--mode <Mode> | Specify the backup mode. auto: auto backup. auto_nostop: auto backup (no node stop when an error occurs). baseline: create a full backup as the baseline of differential/incremental backups. since: after creating a baseline, perform a differential backup of the data blocks updated since the baseline. incremental: after creating a baseline, or after the last incremental or since backup, perform an incremental backup of the updated data blocks. |
--skipBaseline | If mode is auto or auto_nostop, omit the baseline backup operation. Otherwise, this option is ignored. |
<Backup name> | Specify the directory name of the backup data. |
<mode option>
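To illustrate how the modes relate, the dry-run sketch below prints a baseline backup followed by incremental and since backups; the commands are echoed rather than executed, and the backup name is hypothetical.

```shell
# Dry-run sketch of a differential/incremental backup sequence.
B=201911
echo gs_backup -u admin/admin --mode baseline "$B"     # full backup: the baseline
echo gs_backup -u admin/admin --mode incremental "$B"  # blocks updated since the last backup
echo gs_backup -u admin/admin --mode since "$B"        # blocks updated since the baseline
```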
[Memo]
[Example]
Perform a backup on a running node.
Check the directory where the backup file is stored (backup directory)
$ cat /var/lib/gridstore/conf/gs_node.json # configuration check
{
"dataStore":{
"dbPath":"/var/lib/gridstore/data",
"transactionLogPath":"/var/lib/gridstore/txnlog",
"backupPath":"/var/lib/gridstore/backup", # backup directory
"storeMemoryLimit":"1024MB",
"concurrency":4,
"logWriteMode":1,
"persistencyMode":"NORMAL"
:
:
}
Execute backup
$ gs_backup -u admin/admin 20150425 # backup execution
Depending on the data size and load condition, it may take several hours or more for the backup to be completed.
The progress status of the backup can be checked with a gs_stat command.
$ gs_stat -u admin/admin --type backup
BackupStatus: Processing # backup in progress
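Since a backup may run for hours, the check above can be turned into a simple polling loop that waits until BackupStatus is no longer Processing (the 60-second interval is an arbitrary choice):

```shell
# Poll the backup status until the node no longer reports "Processing".
while gs_stat -u admin/admin --type backup | grep -q "Processing"; do
    sleep 60
done
echo "Backup completed."
```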
The backup files are created under the directory with the backup name in the backup directory (/var/lib/gridstore/backup). During a differential/incremental backup, BACKUPNAME_lv0 (the baseline directory of the differential/incremental backup) and BACKUPNAME_lv1_NNN_MMM (the differential (Since) and incremental (Incremental) directories of the differential/incremental backup) are created.
The following command is used to get a list of the backup data in the backup directory set up in the node definition file (gs_node.json).
Command
Command | Option/argument |
---|---|
gs_backuplist | -u <User name>/<Password> [--partitionId <Partition ID>|<Backup name>] [--ssl|--ssl-verify] |
Options
Options | Note |
---|---|
--partitionId <Partition ID> | Display the LSN data of the specified partition in a list. |
<Backup name> | Specify the backup name. |
[Memo]
[Example]
Verify the backup data in the node where you want to check the list of backup data.
Display the list of backup names.
$ gs_backuplist -u admin/admin
BackupName Status StartTime EndTime
-------------------------------------------------------------------------
*201912 -- 2019-12-01T05:20:00+09:00 2019-12-01T06:10:55+09:00
*201911 -- 2019-11-01T05:20:00+09:00 2019-11-01T06:10:55+09:00
:
20191025NO2 OK 2019-10-25T06:37:10+09:00 2019-10-25T06:38:20+09:00
Specify the individual backup name and display the detailed data.
$ gs_backuplist -u admin/admin 201911
BackupName : 201911
BackupData Status StartTime EndTime
--------------------------------------------------------------------------------
201911_lv0 OK 2019-11-01T05:20:00+09:00 2019-11-01T06:10:55+09:00
201911_lv1_000_001 OK 2019-11-02T05:20:00+09:00 2019-11-02T05:20:52+09:00
201911_lv1_000_002 OK 2019-11-03T05:20:00+09:00 2019-11-03T05:20:25+09:00
201911_lv1_000_003 OK 2019-11-04T05:20:00+09:00 2019-11-04T05:20:33+09:00
201911_lv1_000_004 OK 2019-11-05T05:20:00+09:00 2019-11-05T05:21:25+09:00
201911_lv1_000_005 OK 2019-11-06T05:20:00+09:00 2019-11-06T05:21:05+09:00
201911_lv1_001_000 OK 2019-11-07T05:20:00+09:00 2019-11-07T05:22:11+09:00
201911_lv1_001_001 OK 2019-11-08T05:20:00+09:00 2019-11-08T05:20:55+09:00
Check the LSN numbers of the data maintained in a specific partition.
$ gs_backuplist -u admin/admin --partitionId=50
BackupName ID LSN
----------------------------------------------------------
*201912 50 2349
*201911 50 118
20190704 50 0
The following command is used to restore a GridDB backup file.
Command
Command | Option/argument |
---|---|
gs_restore | [--test] [--updateLogs] <Backup name> |
Options
Options | Note |
---|---|
--test | Get backup data used for restoration purposes without performing a restoration. |
--updateLogs | If specified, restore only log and json files and overwrite existing files. |
<Backup name> | Specify the directory name of the backup file to restore. |
[Memo]
[Example]
Restore backup data. Execute a restoration with the executing node stopped.
Move the files in the database file directory
Specify the database file directory with the node definition file (gs_node.json)
$ mv ${GS_HOME}/data/* ${GS_HOME}/temp_db # Move the data file and the checkpoint log file.
$ mv ${GS_HOME}/txnlog/* ${GS_HOME}/temp_txnlog # Move the transaction log file.
Check the data to be restored prior to the restoration
$ gs_restore --test 20190901
BackupName : 20190901
BackupFolder : /var/lib/gridstore/backup
RestoreData Status StartTime EndTime
--------------------------------------------------------------------------------
20190901_lv0 OK 2019-09-01T17:50:00+09:00 2019-09-01T17:52:10+09:00
20190901_lv1_001_000 OK 2019-09-02T17:50:00+09:00 2019-09-02T17:50:15+09:00
Execute a restoration
$ gs_restore 20190901 # restoration
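Putting the steps of this example together, the whole restore procedure on a stopped node can be sketched as a single script. The temp_* evacuation directories and the backup name 20190901 follow the example above; starting the node again with gs_startnode afterwards is optional and assumes the node should return to service.

```shell
#!/bin/sh
# Run with the node stopped. Evacuate the current database files first.
mkdir -p ${GS_HOME}/temp_db ${GS_HOME}/temp_txnlog
mv ${GS_HOME}/data/*   ${GS_HOME}/temp_db     # data and checkpoint log files
mv ${GS_HOME}/txnlog/* ${GS_HOME}/temp_txnlog # transaction log files

# Dry run: confirm which backup data would be restored.
gs_restore --test 20190901

# Perform the restoration, then start the node again.
gs_restore 20190901
gs_startnode -u admin/admin -w
```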
The following command is used to display or change the node parameters.
Command
Command | Option/argument |
---|---|
gs_paramconf | [-s <Server>[:<Port no.>]| -p <Port no.>] -u <User name>/<Password> --show [<Parameter name>] | --set <Parameter name> <Value> [--ssl|--ssl-verify] |
Options
Options | Note |
---|---|
--show [<Parameter name>] | Display the specified parameter. If the parameter is not specified in the command, all parameters will be displayed instead. |
--set <Parameter name> <Value> | Change the specified parameter to the specified value. |
[Memo]
[Example]
Change the parameter storeMemoryLimit and display the value.
$ gs_paramconf -u admin/admin --set storeMemoryLimit 2048MB
$ gs_paramconf -u admin/admin --show storeMemoryLimit
"2048MB"
Change the parameter traceLimitExecutionTime and display the value.
$ gs_paramconf -u admin/admin --set traceLimitExecutionTime 30s
$ gs_paramconf -u admin/admin --show traceLimitExecutionTime
"30s"
The following command lists and deletes the cached user information that is used to speed up authentication of general users and LDAP users.
For details on the authentication method, see the GridDB Features Reference.
Command
Command | Option/argument |
---|---|
gs_authcache | --show [-s <Server>[:<Port no.>]| -p <Port no.>] -u <User name>/<Password> [--db <Database name>] [--username <User name>] [--cluster] [--ssl-verify] |
gs_authcache | --clear [-s <Server>[:<Port no.>]| -p <Port no.>] -u <User name>/<Password> --db <Database name> | --username <User name> [--cluster] [--ssl-verify] |
Options
Options | Note |
---|---|
--show | Display a list of user information stored in cache. |
--clear | Delete user information stored in cache. |
--db | Specify the name of the database whose cached user information is to be operated on. |
--username | Specify the user name of the user whose cached information is to be operated on. |
--cluster | Apply the operation to all the nodes of the cluster. If this option is omitted, the operation is applied only to the specified node. |
[Memo]
[Example]
Display a list of information on all the users stored in cache.
$ gs_authcache -u admin/admin --show
{
"usercache": [
{
"count": 30,
"dbname": "mydb",
"username": "user01"
},
{
"count": 8,
"dbname": "mydb",
"username": "user02"
},
:
]
}
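Conversely, cached entries are discarded with --clear. The sketch below removes the cached user information for the database mydb from the example above, on all the nodes of the cluster:

```shell
# Clear the authentication cache for database mydb on every node.
gs_authcache -u admin/admin --clear --db mydb --cluster
```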
The cluster operation control command interpreter (hereinafter referred to as gs_sh) is a command line interface tool for managing GridDB cluster operations and data operations.
The following can be carried out by gs_sh.
Carry out the following preparations before using gs_sh.
* For details of the procedure, refer to the “Installation of GridDB” section of “GridDB Quickstart Guide” .
$ vi /etc/ssh/sshd_config
...
KexAlgorithms +diffie-hellman-group14-sha1
$ sudo systemctl reload sshd
There are two types of start modes in gs_sh.
The interactive mode is started when gs_sh is executed without any arguments. The gs_sh prompt will appear, allowing sub-commands to be entered.
$ gs_sh
//execution of sub-command "version"
gs> version
gs_sh version 5.0.0
When a script file created by the user is specified as an argument, gs_sh starts in batch mode: the series of sub-commands described in the script file is executed in order, and gs_sh terminates at the end of the batch processing.
// specify the script file (test.gsh) and execute
$ gs_sh test.gsh
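As an illustration, a batch-mode script file is just a sequence of sub-commands, one per line. The file below is a hypothetical test.gsh: the addresses, cluster name, and credentials are placeholders. It defines a two-node cluster, starts it, and terminates gs_sh.

```shell
setnode node0 192.168.0.1 10040
setnode node1 192.168.0.2 10040
setcluster cluster0 myCluster 239.0.0.1 31999 $node0 $node1
setuser admin admin gsadm
startnode $cluster0
startcluster $cluster0
exit
```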
[Memo]
The definition below is required in advance when executing a GridDB cluster operation control or data operation.
An explanation of node variables, cluster variables, and how to define user data is given below. An explanation of the definition of an arbitrary variable, display of variable definition details, and how to save and import variable definition details in a script file is also given below.
Define the IP address and port no. of a GridDB node in the node variable.
Sub-command
setnode <Node variable> <IP address> <Port no.> [<SSH port no.>] |
Argument
Argument | Note |
---|---|
Node variable | Specify the node variable name. If the same variable name already exists, its definition will be overwritten. |
IP address | Specify the IP address of the GridDB node (for connecting operation control tools). |
Port no. | Specify the port no. of the GridDB node (for connecting operation control tools). |
SSH port no. | Specify the SSH port number. Number 22 is used by default. |
Example:
//Define 4 GridDB nodes
gs> setnode node0 192.168.0.1 10000
gs> setnode node1 192.168.0.2 10000
gs> setnode node2 192.168.0.3 10000
gs> setnode node3 192.168.0.4 10000
[Memo]
Define the GridDB cluster configuration in the cluster variable.
Sub-command
Multicast method | setcluster <Cluster variable> <Cluster name> <Multicast address> <Port no.> [<Node variable> …] |
Fixed list method | setcluster <Cluster variable> <Cluster name> FIXED_LIST <Address list of fixed list method> [<Node variable> …] |
Provider method | setcluster <Cluster variable> <Cluster name> PROVIDER <URL of provider method> [<Node variable> …] |
Argument
Argument | Note |
---|---|
<Cluster variable> | Specify the cluster variable name. If the same variable name already exists, its definition will be overwritten. |
Cluster name | Specify the cluster name. |
Multicast address | [For the multicast method] Specify the GridDB cluster multicast address (for client connection). |
Port no. | [For the multicast method] Specify the GridDB cluster multicast port no. (for client connection). |
Node variable | Specify the nodes constituting a GridDB cluster with a node variable. When not performing operation management of GridDB clusters, the node variable may be omitted. |
Address list of fixed list method | [For fixed list method] Specify the list of transaction addresses and ports. Example: 192.168.15.10:10001,192.168.15.11:10001 When the cluster configuration defined in the cluster definition file (gs_cluster.json) is a fixed list method, specify the transaction address and port list of /cluster/notificationMember in the cluster definition file. |
URL of provider method | [For the provider method] Specify the URL of the address provider. If the cluster configuration defined in the cluster definition file (gs_cluster.json) is the provider method, specify the value of /cluster/notificationProvider/url in the cluster definition file. |
Example:
//define the GridDB cluster configuration
gs> setcluster cluster0 name 200.0.0.1 1000 $node0 $node1 $node2
[Memo]
Port no.: /transaction/notificationPort
*All the nodes constituting a GridDB cluster must have identical settings in their cluster definition files. If the settings differ, the cluster cannot be composed.
In addition, node variables can be added or deleted for a defined cluster variable.
Sub-command
modcluster <Cluster variable> add | remove <Node variable> … |
Argument
Argument | Note |
---|---|
<Cluster variable> | Specify the name of a cluster variable to add or delete a node. |
add | remove | Specify “add” when adding node variables, and “remove” when deleting node variables. |
Node variable | Specify node variables to add or delete a cluster variable. |
Example:
//Add a node to a defined GridDB cluster configuration
gs> modcluster cluster0 add $node3
//Delete a node from a defined GridDB cluster configuration
gs> modcluster cluster0 remove $node3
[Memo]
Define the SQL connection destination in the GridDB cluster configuration. This is set up only when using the GridDB NewSQL interface.
Sub-command
Multicast method | setclustersql <Cluster variable> <Cluster name> <SQL address> <SQL port no.> |
Fixed list method | setclustersql <Cluster variable> <Cluster name> FIXED_LIST < SQL address list of fixed list method> |
Provider method | setclustersql <Cluster variable> <Cluster name> PROVIDER <URL of provider method> |
Argument
Argument | Note |
---|---|
<Cluster variable> | Specify the cluster variable name. If the same variable name already exists, the SQL connection data will be overwritten. |
Cluster name | Specify the cluster name. |
SQL address | [For multicast method] Specify the reception address for the SQL client connection. |
SQL port no. | [For multicast method] Specify the port no. for the SQL client connection. |
SQL address list of fixed list method | [For fixed list method] Specify the list of transaction addresses and ports. Example: 192.168.15.10:20001,192.168.15.11:20001 When the cluster configuration defined in the cluster definition file (gs_cluster.json) is a fixed list method, specify the sql address and port list of /cluster/notificationMember in the cluster definition file. |
URL of provider method | [For the provider method] Specify the URL of the address provider. If the cluster configuration defined in the cluster definition file (gs_cluster.json) is the provider method, specify the value of /cluster/notificationProvider/url in the cluster definition file. |
Example:
// Definition method when using both NoSQL interface and NewSQL interface to connect to a NewSQL server
gs> setcluster cluster0 name 239.0.0.1 31999 $node0 $node1 $node2
gs> setclustersql cluster0 name 239.0.0.1 41999
[Memo]
Define the user and password to access the GridDB cluster.
Sub-command
setuser <User name> <Password> [<gsadm password>] |
Argument
Argument | Note |
---|---|
<User name> | Specify the name of the user accessing the GridDB cluster. |
<Password> | Specify the corresponding password. |
gsadm password | Specify the password of the OS user ‘gsadm’. This may be omitted if start node (startnode sub-command) is not going to be executed. |
Example:
//Define the user, password and gsadm password to access a GridDB cluster
gs> setuser admin admin gsadm
[Memo]
After a user is defined, the following variables are set.
Variable Name | Value |
---|---|
user | <User name> |
password | <Password> |
ospassword | gsadm password |
Multiple users cannot be defined. The user and password defined earlier will be overwritten. When operating multiple GridDB clusters in gs_sh, reset the user and password with the setuser sub-command every time the connection destination cluster is changed.
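For instance, a session that manages two clusters might alternate as follows; the user names, passwords, and cluster variables are illustrative and must already have been defined with setnode/setcluster:

```shell
//operate on the first cluster
gs> setuser admin1 password1
gs> configcluster $cluster1
//switch: redefine the user before operating on the second cluster
gs> setuser admin2 password2
gs> configcluster $cluster2
```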
Define an arbitrary variable.
Sub-command
set <Variable name> [<Value>] |
Argument
Argument | Note |
---|---|
Variable Name | Specify the variable name. |
Value | Specify the setting value. The setting value of the variable concerned can be cleared by omitting the specification. |
Example:
//Define variable
gs> set GS_PORT 10000
//Clear variable settings
gs> set GS_PORT
[Memo]
Display the detailed definition of the specified variable.
Sub-command
show [<Variable name>] |
Argument
Argument | Note |
---|---|
Variable Name | Specify the name of the variable to display the definition details. If the name is not specified, details of all defined variables will be displayed. |
Example:
//Display all defined variables
gs> show
Node variable:
node0=Node[192.168.0.1:10000,ssh=22]
node1=Node[192.168.0.2:10000,ssh=22]
node2=Node[192.168.0.3:10000,ssh=22]
node3=Node[192.168.0.4:10000,ssh=22]
Cluster variable:
cluster0=Cluster[name=name,200.0.0.1:1000,nodes=(node0,node1,node2)]
Other variables:
user=admin
password=*****
ospassword=*****
[Memo]
Save the variable definition details in the script file.
Sub-command
save [<Script file name>] |
Argument
Argument | Note |
---|---|
Script file name | Specify the name of the script file serving as the storage destination. The extension of the script file is gsh. If the name is not specified, the data will be saved in the .gsshrc file in the gsadm user home directory. |
Example:
//Save the defined variable in a file
gs> save test.gsh
[Memo]
Read and execute a script file.
Sub-command
load [<Script file name>] |
Argument
Argument | Note |
---|---|
Script file name | Specify the script file to execute. If the script file is not specified, the .gsshrc file in the gsadm user home directory will be imported again. |
Example:
//Execute script file
gs> load test.gsh
[Memo]
Connect to the running GridDB cluster and automatically define a cluster variable and a node variable.
Sub-command
sync IP address port number [cluster variable name [node variable] ] |
Argument
Argument | Note |
---|---|
IP address | Specify the IP address of a GridDB node participating in the GridDB cluster. |
port number | Specify the port number of the GridDB node (for connecting operation control tools). |
cluster variable name | Specify the cluster variable name. If omitted, the cluster variable name is set to “scluster”. |
node variable name | Specify the node variable name. If omitted, the node variable name is set to “snodeX” where X is a sequential number. |
Example:
gs> sync 192.168.0.1 10040 mycluster mynode
// Check the settings.
gs> show
Node variable:
mynode1=Node[192.168.0.1:10040,ssh=22]
mynode2=Node[192.168.0.2:10040,ssh=22]
mynode3=Node[192.168.0.3:10040,ssh=22]
mynode4=Node[192.168.0.4:10040,ssh=22]
mynode5=Node[192.168.0.5:10040,ssh=22]
Cluster variable:
mycluster=Cluster[name=mycluster,mode=MULTICAST,transaction=239.0.0.20:31999,sql=239.0.0.20:41999,nodes=($mynode1,$mynode2,$mynode3,$mynode4,$mynode5)]
// Save the settings
gs> save
[Memo]
The following operations can be executed by the administrator user only as functions to manage GridDB cluster operations.
This section explains the status of a GridDB node and GridDB cluster.
A cluster is composed of one or more nodes. A node status represents the status of the node itself, e.g., started or stopped. A cluster status represents the acceptance status of data operations from a client, and is determined by the statuses of the nodes constituting the cluster.
An example of the change in the node status and cluster status due to a gs_sh sub-command operation is shown below. A cluster is composed of 4 nodes. When the nodes constituting the cluster are started (startnode), the node status changes to “Start”. When the cluster is started after starting the nodes (startcluster), each node status changes to “Join”, and the cluster status also changes to “In Operation”.
A detailed explanation of the node status and cluster status is given below.
Node status
Node status changes to “Stop”, “Start” or “Join” depending on whether a node is being started, stopped, joined or detached. If a node has joined a cluster, there are 2 types of node status depending on the status of the joined cluster.
Status | Status name | Note |
---|---|---|
Join | SERVICING | Node is joined to the cluster, and the status of the joined cluster is “In Operation” |
WAIT | Node is joined to the cluster, and the status of the joined cluster is “Halted” | |
Start | STARTED | Node is started but has not joined a cluster |
STARTING | Starting node | |
Stop | STOP | Stopped node |
STOPPING | Stopping node |
Cluster status
GridDB cluster status changes to “Stop”, “Halted” or “In Operation” depending on the operation start/stop status of the GridDB cluster or the join/leave operation of the GridDB node. Data operations from the client can be accepted only when the GridDB cluster status is “In Operation”.
Status | Status name | Note |
---|---|---|
In Operation | SERVICE_STABLE | All nodes defined in the cluster configuration have joined the cluster |
SERVICE_UNSTABLE | More than half the nodes defined in the cluster configuration have joined the cluster | |
Halted | WAIT | Half or more of the nodes defined in the cluster configuration have left the cluster |
INIT_WAIT | 1 or more of the nodes defined in the cluster configuration have left the cluster (when the cluster is operated for the first time, the status will not change to “In Operation” unless all nodes have joined the cluster) | |
Stop | STOP | All nodes defined in the cluster configuration have left the cluster |
The GridDB cluster status will change from "Stop" to "In Operation" when all nodes constituting the GridDB cluster are allowed to join the cluster. In addition, the GridDB cluster status will change to "Halted" when half or more of the nodes have left the cluster, and to "Stop" when all the nodes have left the cluster.
Join and leave operations (which affect the cluster status) can be applied in batch to all the nodes in the cluster, or to an individual node.
Operation | When the operating targets are all nodes | When the operating target is a single node |
---|---|---|
Join | startcluster : Batch entry of a group of nodes that are already operating but have not joined the cluster yet. | joincluster : Entry by a node that is in operation but has not joined the cluster yet. |
Leave | stopcluster : Batch detachment of a group of nodes joined to a cluster. | leavecluster : Detachment of a node joined to a cluster. |
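As an example of the two granularities, the batch sub-commands start or stop the whole cluster, while the individual ones attach or detach one node (variables as defined earlier):

```shell
//batch: start all the nodes, then form the cluster
gs> startnode $cluster0
gs> startcluster $cluster0
//individual: detach one node, then re-attach it
gs> leavecluster $node2
gs> joincluster $cluster0 $node2
```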
[Memo]
Details of the various operating methods are explained below.
Start the specified node.
Sub-command
startnode <Node variable> | <Cluster variable> [ <Timeout time in sec.> ] |
Argument
Argument | Note |
---|---|
Node variable | cluster variable | Specify the node to start by its node variable or cluster variable. If the cluster variable is specified, all nodes defined in the cluster variable will be started. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//Start the node
gs> startnode $node1
The GridDB node node1 is starting up.
All GridDB node has been started.
[Memo]
Stop the specified node.
Sub-command
stopnode <Node variable> | <Cluster variable> [<Timeout time in sec>] |
Argument
Argument | Note |
---|---|
Node variable | Cluster variable | Specify the node to stop by its node variable or cluster variable. If the cluster variable is specified, all nodes defined in the cluster variable will be stopped. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//stop node
gs> stopnode $node1
The GridDB node node1 is stopping down.
The GridDB node node1 has started stopping down.
Waiting for a node to complete the stopping processing.
All GridDB node has been stopped.
In addition, the specified node can be forced to stop as well.
Sub-command
stopnodeforce <Node variable> | <Cluster variable> [<Timeout time in sec>] |
Argument
Argument | Note |
---|---|
Node variable | Cluster variable | Specify the node to stop by force by its node variable or cluster variable. If the cluster variable is specified, all nodes defined in the cluster variable will be stopped by force. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//stop node by force
gs> stopnodeforce $node1
The GridDB node node1 is stopping down.
The GridDB node node1 has started stopping down.
Waiting for a node to complete the stopping processing.
All GridDB node has been stopped.
[Memo]
The following explains how to attach nodes to a cluster in a batch. When a group of nodes that are operating but have not yet joined the cluster is attached, the cluster status changes to "In Operation".
Sub-command
startcluster <Cluster variable> [<Timeout time in sec.>] |
Argument
Argument | Note |
---|---|
Cluster variable | Specify a GridDB cluster by its cluster variable. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//start GridDB cluster
gs> startcluster $cluster1
Waiting for the GridDB cluster to start.
The GridDB cluster has been started.
[Memo]
To stop a GridDB cluster, simply make the attached nodes leave the cluster using the stopcluster command.
Sub-command
stopcluster <Cluster variable> [<Timeout time in sec.>] |
Argument
Argument | Note |
---|---|
Cluster variable | Specify a GridDB cluster by its cluster variable. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//stop GridDB cluster
gs> stopcluster $cluster1
Waiting for the GridDB cluster to stop.
The GridDB cluster has been stopped.
[Memo]
Rejoin a node into the cluster after it has temporarily left, whether through the leavecluster sub-command or because of a failure.
Sub-command
joincluster <Cluster variable> <Node variable> [<Timeout time in sec.>] |
Argument
Argument | Note |
---|---|
Cluster variable | Specify a GridDB cluster by its cluster variable. |
Node variable | Specify the node to join by its node variable. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//Start the node
gs> startnode $node2
The GridDB node node2 is starting up.
All GridDB node has been started.
//join node
joincluster $cluster1 $node2
Waiting for the GridDB node to join the GridDB cluster.
The GridDB node has joined to the GridDB cluster.
[Memo]
Detach the specified node from the cluster. The leaveclusterforce sub-command forcibly detaches the specified node from the cluster even while it is active.
Sub-command
leavecluster <Node variable> [<Timeout time in sec.>] |
leaveclusterforce <Node variable> [<Timeout time in sec.>] |
Argument
Argument | Note |
---|---|
Node variable | Specify the node to detach by its node variable. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//leave node
gs> leavecluster $node2
Waiting for the GridDB node to leave the GridDB cluster.
The GridDB node has leaved the GridDB cluster.
[Memo]
Add an undefined node to a pre-defined cluster.
Sub-command
appendcluster <Cluster variable> <Node variable> [<Timeout time in sec.>] |
Argument
Argument | Note |
---|---|
Cluster variable | Specify a GridDB cluster by its cluster variable. |
Node variable | Specify the node to join by its node variable. |
Timeout time in sec. | Set the number of seconds the command or a script is allowed to run. If -1 is specified, the command returns to the console immediately without waiting for completion. If 0 is specified or the value is omitted, there is no timeout and the command waits for completion indefinitely. |
Example:
//define node
gs> setnode node5 192.168.0.5 10044
//start node
gs> startnode $node5
//increase the number of nodes
gs> appendcluster $cluster1 $node5
Waiting for a node to be added to a cluster.
A node has been added to the cluster.
Add node variables $node5 to cluster variable $cluster1. (Execute a save command when saving changes to a variable. )
Cluster[name=name1,239.0.5.111:33333,nodes=($node1,$node2,$node3,$node4,$node5)]
[Memo]
Display the status of an active GridDB cluster, and each node constituting the cluster.
Sub-command
configcluster <Cluster variable> |
Argument
Argument | Note |
---|---|
Cluster variable | Specify a GridDB cluster by its cluster variable. |
Example:
//display cluster data
gs> configcluster $cluster1
Name : cluster1
ClusterName : defaultCluster
Designated Node Count : 4
Active Node Count : 4
ClusterStatus : SERVICE_STABLE
Nodes:
Name Role Host:Port Status
-------------------------------------------------
node1 F 10.45.237.151:10040 SERVICING
node2 F 10.45.237.152:10040 SERVICING
node3 M 10.45.237.153:10040 SERVICING
node4 F 10.45.237.154:10040 SERVICING
[Memo]
Display the cluster configuration data.
Sub-command
config <Node variable> |
Argument
Argument | Note |
---|---|
Node variable | Specify the node belonging to a GridDB cluster to be displayed with a node variable. |
Example:
//display cluster configuration data
gs> config $node1
{
"follower" : [ {
"address" : "10.45.237.151",
"port" : 10040
}, {
"address" : "10.45.237.152",
"port" : 10040
}, {
"address" : "10.45.237.153",
"port" : 10040
}, {
"address" : "10.45.237.154",
"port" : 10040
} ],
"master" : {
"address" : "10.45.237.155",
"port" : 10040
},
"multicast" : {
"address" : "239.0.5.111",
"port" : 33333
},
"self" : {
"address" : "10.45.237.150",
"port" : 10040,
"status" : "ACTIVE"
}
}
[Memo]
Display the node configuration data.
Sub-command
stat <Node variable> |
Argument
Argument | Note |
---|---|
Node variable | Specify the node to display by its node variable. |
Example:
//display node status, statistical data
gs> stat $node1
{
"checkpoint" : {
"archiveLog" : 0,
"backupOperation" : 0,
"duplicateLog" : 0,
"endTime" : 1413852025843,
"mode" : "NORMAL_CHECKPOINT",
:
:
}
[Memo]
Display the log of the specified node.
Sub-command
logs <Node variable> |
Argument
Argument | Note |
---|---|
Node variable | Specify the node to display by its node variable. |
Example:
//display log of node
gs> logs $node0
2013-02-26T13:45:58.613+0900 c63x64n1 4051 INFO SYSTEM_SERVICE ../server/system_service.cpp void SystemService::joinCluster(const char8_t*, uint32_t) line=179 : joinCluster requested (clusterName="defaultCluster", minNodeNum=1)
2013-02-26T13:45:58.616+0900 c63x64n1 4050 INFO SYSTEM_SERVICE ../server/system_service.cpp virtual void SystemService::JoinClusterHandler::callback(EventEngine&, util::StackAllocator&, Event*, NodeDescriptor) line=813 : ShutdownClusterHandler called g
2013-02-26T13:45:58.617+0900 c63x64n1 4050 INFO SYSTEM_SERVICE ../server/system_service.cpp void SystemService::completeClusterJoin() line=639 : completeClusterJoin requested
2013-02-26T13:45:58.617+0900 c63x64n1 4050 INFO SYSTEM_SERVICE ../server/system_service.cpp virtual void SystemService::CompleteClusterJoinHandler::callback(EventEngine&, util::StackAllocator&, Event*, NodeDescriptor) line=929 : CompleteClusterJoinHandler called
The output level of a log can be displayed and changed.
Sub-command
logconf <Node variable> [<Category name> [<Log level>]] |
Argument
Argument | Note |
---|---|
Node variable | Specify the node to operate by its node variable. |
Category name | Specify the log category name subject to the operation. Output level of all log categories will be displayed by default. |
Log level | Specify the log level to change the log level of the specified category. Log level of the specified category will be displayed by default. |
Example:
//display log level of node
gs> logconf $node0
{
"CHECKPOINT_SERVICE" : "INFO",
"CHUNK_MANAGER" : "ERROR",
:
}
// change the log level
gs> logconf $node0 SYSTEM WARNING
// display the log level specifying the category name
gs> logconf $node0 SYSTEM
{
"SYSTEM" : "WARNING"
}
[Memo]
Display the SQL processing under execution.
Sub-command
showsql <Query ID> |
Argument
Argument | Note |
---|---|
Query ID | ID identifying the SQL processing to be displayed. When specified, only the information on the SQL processing with that query ID is displayed. When not specified, a list of the SQL processes in progress is displayed. A query ID can be obtained from that list. |
Example:
gs[public]> showsql
=======================================================================
query id: e6bf24f5-d811-4b45-95cb-ecc643922149:3
start time: 2019-04-02T06:02:36.93900
elapsed time: 53
database name: public
application name: gs_admin
node: 192.168.56.101:10040
sql: INSERT INTO TAB_711_0101 SELECT a.id, b.longval FROM TAB_711_0001 a LEFT OU
job id: e6bf24f5-d811-4b45-95cb-ecc643922149:3:5:0
node: 192.168.56.101:10040
#---------------------------
[Memo]
Display the event list executed by the thread in each node in a cluster.
Sub-command
showevent |
Example:
gs[public]> showevent
=======================================================================
worker id: 0
start time: 2019-03-05T05:28:21.00000
elapsed time: 1
application name:
node: 192.168.56.101:10040
service type: TRANSACTION_SERVICE
event type: PUT_MULTIPLE_ROWS
cluster partition id: 5
#---------------------------
[Memo]
Display the list of connections.
Sub-command
showconnection |
Example:
gs[public]> showconnection
=======================================================================
application name: gs_admin
creation time: 2019-04-02T06:09:42.52300 service type: TRANSACTION
elapsed time: 106 node: 192.168.56.101:10001 remote: 192.168.56.101:56166
dispatching event count: 5 sending event count: 5
#---------------------------
[Memo]
Cancel the SQL processing in progress.
Sub-command
killsql <query ID> |
Argument
Argument | Note |
---|---|
Query ID | ID to specify SQL processing to be canceled. Can be obtained by displaying the SQL processing in progress. |
Example:
gs[public]> killsql 5b9662c0-b34f-49e8-92e7-7ca4a9c1fd4d:1
[Memo]
To execute a data operation, it is necessary to connect to the target cluster. Data in the database specified at connection time ("public" when the database name is omitted) will be subject to the operation.
Establish connection to a GridDB cluster to execute a data operation.
Sub-command
connect <Cluster variable> [<Database name>] |
Argument
Argument | Note |
---|---|
Cluster variable | Specify a GridDB cluster serving as the connection destination by its cluster variable. |
<Database name> | Specify the database name. If omitted, the connection is made to the public database. |
Example:
//connect to GridDB cluster
//for NoSQL
gs> connect $cluster1
The connection attempt was successful(NoSQL).
gs[public]>
gs> connect $cluster1 userDB
The connection attempt was successful(NoSQL).
gs[userDB]>
//For NewSQL (configure both NoSQL/NewSQL interfaces)
gs> connect $cluster1
The connection attempt was successful(NoSQL).
The connection attempt was successful(NewSQL).
gs[public]>
[Memo]
Execute a search and retain the search results.
Sub-command
tql <Container name> <Query;> |
Argument
Argument | Note |
---|---|
<Container name> | Specify the container subject to the search. |
Query; | Specify the TQL command to execute. A semicolon (;) is required at the end of a TQL command. |
Example:
//execute a search
gs[public]> tql c001 select *;
5 results. (25 ms)
[Memo]
Execute an SQL command and retain the search results.
Sub-command
sql <SQL command;> |
Argument
Argument | Note |
---|---|
<SQL command;> | Specify the SQL command to execute. A semicolon (;) is required at the end of the SQL command. |
Example:
gs[public]> sql select * from con1; -> execute an SQL search
10000 results. (52 ms)
gs[public]> get 1 -> display SQL results
id,name
----------------------
0,tanaka
The 1 result has been acquired.
The sub-command name ‘sql’ can be omitted when the first word of the SQL statement is one of the following.
[Memo]
The following results will appear depending on the type of SQL command.
Operation | Execution results when terminated normally |
---|---|
Search SELECT | Display the no. of search results found. Search results are displayed in sub-command get/getcsv/getnoprint. |
Update INSERT/UPDATE/DELETE | Display the no. of rows updated. |
DDL statement | Nothing is displayed. |
The following sub-commands get the query results and output them. There are three ways to output the results, as listed below.
(A) Display the results obtained in a standard output.
Sub-command
get [<No. of acquires>] |
Argument
Argument | Note |
---|---|
No. of acquires | Specify the number of search results to be acquired. All search results will be obtained and displayed by default. |
(B) Save the results obtained in a file in the CSV format.
Sub-command
getcsv <CSV file name> [<No. of acquires>] |
Argument
Argument | Note |
---|---|
CSV file name | Specify the name of the csv file where the search results are saved. |
No. of acquires | Specify the number of search results to be acquired. All search results will be obtained and saved in the file by default. |
(C) Results obtained will not be output.
Sub-command
getnoprint [<No. of acquires>] |
Argument
Argument | Note |
---|---|
No. of acquires | Specify the number of search results to be acquired. All search results will be obtained by default. |
Example:
//execute a search
gs[public]> tql c001 select *;
5 results.
//Get first result and display
gs[public]> get 1
name,status,count
mie,true,2
The 1 result has been acquired.
//Get second and third results and save them in a file
gs[public]> getcsv /var/lib/gridstore/test2.csv 2
The 2 results had been acquired.
//Get fourth result
gs[public]> getnoprint 1
The 1 result has been acquired.
//Get fifth result and display
gs[public]> get 1
name,status,count
akita,true,45
The 1 result has been acquired.
[Memo]
Display the execution plan of the specified TQL command. The search itself is not executed, so actual measurement values such as the number of rows processed are not included.
Sub-command
tqlexplain <Container name> <Query;> |
Argument
Argument | Note |
---|---|
<Container name> | Specify the target container. |
Query; | Specify the TQL command to get the execution plan. A semicolon (;) is required at the end of a TQL command. |
Example:
//Get an execution plan
gs[public]> tqlexplain c001 select * ;
0 0 SELECTION CONDITION NULL
1 1 INDEX BTREE ROWMAP
2 0 QUERY_EXECUTE_RESULT_ROWS INTEGER 0
In addition, by actually executing the specified TQL command, the actual measurement values such as the number of processed rows can be displayed together with the execution plan.
Sub-command
tqlanalyze <Container name> <Query;> |
Argument
Argument | Note |
---|---|
<Container name> | Specify the target container. |
Query; | Specify the TQL command to get the execution plan. A semicolon (;) is required at the end of a TQL command. |
Example:
//Execute a search to get an execution plan
gs[public]> tqlanalyze c001 select *;
0 0 SELECTION CONDITION NULL
1 1 INDEX BTREE ROWMAP
2 0 QUERY_EXECUTE_RESULT_ROWS INTEGER 5
3 0 QUERY_RESULT_TYPE STRING RESULT_ROW_ID_SET
4 0 QUERY_RESULT_ROWS INTEGER 5
[Memo]
Close the TQL and discard the saved search results.
Sub-command
tqlclose |
Close the query and discard the saved search results.
Sub-command
queryclose |
Example:
//Discard search results
gs[public]> tqlclose
gs[public]> queryclose
[Memo]
Disconnect from a GridDB cluster.
Sub-command
disconnect |
Example:
//Disconnect from a GridDB cluster
gs[public]> disconnect
gs>
[Memo]
Set whether to execute a count query when an SQL search is executed.
Sub-command
sqlcount <Boolean> |
Argument
Argument | Note |
---|---|
Boolean | If FALSE is specified, gs_sh does not count the number of results when a query is executed with the sql sub-command, and the hit count is not displayed. Default is TRUE. |
Example:
gs[public]> sql select * from mycontainer;
25550 results. (33 ms)
gs[public]> sqlcount FALSE
gs[public]> sql select * from mycontainer;
A search was executed. (33 ms)
[Memo]
This section explains the available sub-commands that can be used for database management. Connect to the cluster with the connect sub-command before performing database management.
Create a database with the specified name.
Sub-command
createdatabase <Database name> |
Argument
Argument | Note |
---|---|
<Database name> | Specify the name of the database to be created. |
Example:
//Create a database with the name "db1"
gs[public]> createdatabase db1
[Memo]
Delete the specified database.
Sub-command
dropdatabase <Database name> |
Argument
Argument | Note |
---|---|
<Database name> | Specify the name of the database to be deleted. |
Example:
//Delete databases shown below
//db1:No container exists in the database
//db2:Database does not exist
//db3:Container exists in the database
gs[public]> dropdatabase db1 // No error occurs
gs[public]> dropdatabase db2 // An error occurs
D20340: This database "db2" does not exists.
gs[public]> dropdatabase db3 // An error occurs
D20336: An unexpected error occurred while dropping the database. : msg=[[145045:JC_DATABASE_NOT_EMPTY]
Illegal target error by non-empty database.]
[Memo]
Display the current database name.
Sub-command
getcurrentdatabase |
Example:
gs[db1]> getcurrentdatabase
db1
List the databases with access right information.
Sub-command
showdatabase [<Database name>] |
Argument
Argument | Note |
---|---|
<Database name> | Specify the name of the database to be displayed. Display a list of all databases if omitted. |
Example:
gs[public]> showdatabase
Name ACL
---------------------------------
public ALL_USER
DATABASE001 user01 ALL
DATABASE001 user02 READ
DATABASE002 user03 ALL
DATABASE003
gs[public]> showdatabase DATABASE001
Name ACL
---------------------------------
DATABASE001 user01 ALL
DATABASE001 user02 READ
[Memo]
Grant the database access rights to user.
Sub-command
grantacl <Access rights> <Database name> <User name> |
Argument
Argument | Note |
---|---|
<Access right> | Specify the access right (ALL, READ). “ALL” permission indicates all operations to a container are allowed such as creating a container, adding a row, searching, and creating an index. “READ” permission indicates only search operations are allowed. |
<Database name> | Specify the name of the database for which access rights are going to be granted. |
<User name> | Specify the name of the user to assign access rights to. |
Example:
gs[public]> grantacl ALL DATABASE001 user01
[Memo]
Revoke access rights to the database.
Sub-command
revokeacl <Access rights> <Database name> <User name> |
Argument
Argument | Note |
---|---|
<Access right> | Specify the access right (ALL, READ). |
<Database name> | Specify the name of the database for which access rights are going to be revoked. |
<User name> | Specify the name of the user whose access rights are going to be revoked. |
Example:
gs[public]> revokeacl ALL DATABASE001 user02
[Memo]
This section explains the available sub-commands that can be used to perform user management. Connect to the cluster first prior to performing user management (sub-command connect).
Create a general user (username and password).
Sub-command
createuser <User name> <Password> |
Argument
Argument | Note |
---|---|
<User name> | Specify the name of the user to be created. |
<Password> | Specify the password of the user to be created. |
Example:
gs[public]> createuser user01 pass001
[Memo]
Delete the specified general user.
Sub-command
dropuser <User name> |
Argument
Argument | Note |
---|---|
<User name> | Specify the name of the user to be deleted. |
Example:
gs[public]> dropuser user01
[Memo]
Update the user password.
Sub-command
General user only | setpassword <password> |
Administrator user only | setpassword <User name> <Password> |
Argument
Argument | Note |
---|---|
<Password> | Specify the new password. |
<User name> | Specify the name of the user whose password is going to be changed. |
Example:
gs[public]> setpassword newPass009
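//An administrator user specifies both the user name and the new password. The following line is an illustrative sketch; the user name is hypothetical.
gs[public]> setpassword user01 newPass009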
[Memo]
List information on general users and roles.
Sub-command
showuser [user name | role name] |
Argument
Argument | Note |
---|---|
<User name> | Specify the name of the user or role to be displayed. |
Example:
gs[public]> showuser
Name Type
--------------------------------------------
user001 General User
ldapUser Role
ldapGroup Role
gs[public]> showuser user001
Name : user001
Type : General User
GrantedDB: public
DATABASE001 ALL
DATABASE003 READ
gs[public]> showuser ldapUser
Name : ldapUser
Type : Role
GrantedDB: public
DATABASE002 ALL
[Memo]
This section explains the available sub-commands that can be used when performing container operations. Connect to the cluster first before performing container management (sub-command connect). The container in the connected database will be subject to the operation.
Create a container.
Sub-command (Simple version)
Container (collection) | createcollection <Container name> <Column name> <Column type> [<Column name> <Column type> …] |
Container (timeseries container) | createtimeseries <Container name> <Compression method> <Column name> <Column type> [<Column name> <Column type> …] |
Sub-command (Detailed version)
Container (collection/timeseries container) | createcontainer <Container definition file> [<Container name>] |
Description of each argument
Argument | Note |
---|---|
<Container name> | Specify the name of the container to be created. If the name is omitted in the createcontainer command, a container with the name given in the container definition file will be created. |
Column name | Specify the column name. |
Column type | Specify the column type. |
Compression method | For time series data, specify the data compression method. |
Container definition file | Specify the file that stores the container definition information in JSON format. |
Simple version
Specify the container name and column data (column name and type) to create the container.
Detailed version
Specify the container definition data in a JSON file to create a container.
A metadata file is output when the --out option is specified in the export function. The output metadata file can be edited and used as a container definition file.
Example: When using the output metadata file as a container definition file
{
"version":"2.1.00", ←unused
"container":"container_354",
"database":"db2", ←unused
"containerType":"TIME_SERIES",
"containerFileType":"binary", ←unused
"containerFile":"20141219_114232_098_div1.mc", ←unused
"rowKeyAssigned":true,
"partitionNo":0, ←unused
"columnSet":[
{
"columnName":"timestamp",
"type":"timestamp",
"notNull":true
},
{
"columnName":"active",
"type":"boolean",
"notNull":true
},
{
"columnName":"voltage",
"type":"double",
"notNull":true
}
]
}
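The simple version can be sketched as follows. This is an illustrative session; the container names, column definitions, and the compression method value are hypothetical examples:
//create a collection with an INTEGER column "id" and a STRING column "name"
gs[public]> createcollection col001 id integer name string
//create a timeseries container, specifying NO (no compression) as the compression method
gs[public]> createtimeseries time001 NO ts timestamp value double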
Delete a container.
Sub-command
dropcontainer <Container name> |
Argument
Argument | Note |
---|---|
<Container name> | Specify the name of the container to be deleted. |
Example:
gs[public]> dropcontainer Con001
[Memo]
Register a row in a container.
Sub-command
putrow container name value [value…] |
Argument
Argument | Note |
---|---|
container name | Specify the name of a container where a row is to be registered. |
value | Specify the values of the row to be registered. |
Example:
gs[public]> putrow mycontainer 'key1' 1 1.0
gs[public]> putrow mycontainer 'key2' 2 2.0
gs[public]> putrow mycontainer 'key3' 3 null
// Check the results.
gs[public]> tql mycontainer select *;
3 results. (1 ms)
gs[public]> get
key,val1,val2
key1,1,1.0
key2,2,2.0
key3,3,(NULL)
3 results had been acquired.
Delete a row from a container.
Sub-command
removerow container name row key value [row key value…] |
Argument
Argument | Note |
---|---|
container name | Specify the name of a container from which a row is to be deleted. |
row key value | Specify the row key value of the row to be deleted. |
Example:
gs[public]> removerow mycontainer 'key1'
gs[public]> removerow mycontainer 'key2'
// Check the results.
gs[public]> tql mycontainer select *;
1 results. (1 ms)
gs[public]> get
key,val1,val2
key3,3,(NULL)
1 results had been acquired.
[Memo]
Display the container data.
Sub-command
showcontainer [<Container name>] |
Argument
Argument | Note |
---|---|
<Container name> | Specify the container name to be displayed. Display a list of all containers if omitted. |
Example:
//display container list
gs[public]> showcontainer
Database : public
Name Type PartitionId
---------------------------------------------
TEST_TIME_0001 TIME_SERIES 3
TEST_TIME_0004 TIME_SERIES 12
TEST_TIME_0005 TIME_SERIES 26
cont003 COLLECTION 27
TABLE_01 COLLECTION 58
TEST_COLLECTION_0001 COLLECTION 79
//display data of specified container
gs[public]> showcontainer cont003
Database : public
Name : cont003
Type : COLLECTION
Partition ID: 27
DataAffinity: -
Columns:
No Name Type CSTR RowKey
------------------------------------------------------------------------------
0 col1 INTEGER NN [RowKey]
1 col2 STRING
2 col3 TIMESTAMP
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col1
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col2
Name : myIndex
Type : TREE
Columns:
No Name
--------------------------
0 col2
1 col3
[Memo]
In the case of connecting through JDBC, the details of “Table partitioning data” are displayed. The displayed items are “Partitioning Type”, “Partitioning Column”, “Partition Interval Value”, “Partition Interval Unit” of interval partitioning, and “Partition Division Count” of hash partitioning. For interval-hash partitioning, the items of interval partitioning and hash partitioning are both displayed.
//Display the specified container data (in the case of connecting through JDBC)
gs[userDB]> showcontainer time018
Database : userDB
Name : time018
Type : TIME_SERIES
Partition ID: 37
DataAffinity: -
Partitioned : true
Partition Type : INTERVAL
Partition Column : date
Partition Interval Value : 730
Partition Interval Unit : DAY
Sub Partition Type : HASH
Sub Partition Column : date
Sub Partition Division Count : 16
:
:
//Display the specified container data (not in the case of connecting through JDBC)
gs[userDB]> showcontainer time018
Database : userDB
Name : time018
Type : TIME_SERIES
Partition ID: 37
DataAffinity: -
Partitioned : true (need SQL connection for details)
:
:
Display the table data. This is a compatible command of showcontainer.
Sub-command
showtable [<Table name>] |
Argument
Argument | Note |
---|---|
<Table name> | Specify the table name to be displayed. Display a list of all tables if omitted. |
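Example (an illustrative sketch; the output format follows that of showcontainer, and the table name is taken from the earlier showcontainer example):
//display table list
gs[public]> showtable
Database : public
Name Type PartitionId
---------------------------------------------
TABLE_01 COLLECTION 58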
Search for a container by specifying a container name.
Sub-command
searchcontainer [container name] |
Argument
Argument | Note |
---|---|
container name | Specify the container name to search for. If omitted, all containers are displayed. Wildcards (where % represents zero or more characters, and _ represents a single character) can be specified for a container name. |
Example:
gs[public]> searchcontainer mycontainer
mycontainer
gs[public]> searchcontainer my%
my
my_container
mycontainer
gs[public]> searchcontainer my\_container
my_container
Search for a view by specifying a view name.
Sub-command
searchview [view name] |
Argument
Argument | Note |
---|---|
view name | Specify the view name to search for. If omitted, all views are displayed. Wildcards (where % represents zero or more characters, and _ represents a single character) can be specified for a view name. |
Example:
gs[public]> searchview myview
myview
gs[public]> searchview my%
my
my_view
myview
gs[public]> searchview my\_view
my_view
Create an index in the column of a specified container.
Sub-command
createindex <Container name> <Column name> <Index type> … |
Argument
Argument | Note |
---|---|
<Container name> | Specify the name of container that the column subject to the index operation belongs to. |
Column name | Specify the name of the column subject to the index operation. |
Index type … | Specify the index type. Specify TREE or SPATIAL for the index type. |
Example:
//create index
gs[public]> createindex cont003 col2 tree
gs[public]> showcontainer cont003
Database : public
Name : cont003
Type : COLLECTION
Partition ID: 27
DataAffinity: -
Columns:
No Name Type CSTR RowKey
------------------------------------------------------------------------------
0 col1 INTEGER NN [RowKey]
1 col2 STRING
2 col3 TIMESTAMP
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col1
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col2
[Memo]
Create a composite index on the column of a specified container.
Sub-command
createcompindex <Container name> <Column name> <Column name> … |
Argument
Argument | Note |
---|---|
<Container name> | Specify the name of container that the column subject to the index operation belongs to. |
Column name | Specify the name of the column subject to the index operation. Specify two or more column names. |
Example:
//create index
gs[public]> createcompindex cont003 col2 col3
gs[public]> showcontainer cont003
Database : public
Name : cont003
Type : COLLECTION
Partition ID: 27
DataAffinity: -
Columns:
No Name Type CSTR RowKey
------------------------------------------------------------------------------
0 col1 INTEGER NN [RowKey]
1 col2 STRING
2 col3 TIMESTAMP
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col1
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col2
1 col3
[Memo]
Delete the index in the column of a specified container.
Sub-command
dropindex <Container name> <Column name> <Index type> … |
Argument
Argument | Note |
---|---|
<Container name> | Specify the name of container that the column subject to the index operation belongs to. |
Column name | Specify the name of the column subject to the index operation. |
Index type … | Specify the index type. Specify TREE or SPATIAL for the index type. |
Example:
//delete index
gs[public]> showcontainer cont004
Database : public
Name : cont004
:
:
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 id
Name : myIndex
Type : TREE
Columns:
No Name
--------------------------
0 value
gs[public]> dropindex cont004 value tree
gs[public]> showcontainer cont004
Database : public
Name : cont004
:
:
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 id
[Memo]
Delete the compound index in the column of a specified container.
Sub-command
dropcompindex <Container name> <Column name> <Column name> … |
Argument
Argument | Note |
---|---|
<Container name> | Specify the name of container that the column subject to the index operation belongs to. |
Column name | Specify the name of the column subject to the index operation. Specify two or more column names. |
Example:
//delete index
gs[public]> showcontainer cont003
Database : public
Name : cont003
Type : COLLECTION
Partition ID: 27
DataAffinity: -
Columns:
No Name Type CSTR RowKey
------------------------------------------------------------------------------
0 col1 INTEGER NN [RowKey]
1 col2 STRING
2 col3 TIMESTAMP
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col1
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col2
1 col3
gs[public]> dropcompindex cont003 col2 col3
gs[public]> showcontainer cont003
Database : public
Name : cont003
Type : COLLECTION
Partition ID: 27
DataAffinity: -
Columns:
No Name Type CSTR RowKey
------------------------------------------------------------------------------
0 col1 INTEGER NN [RowKey]
1 col2 STRING
2 col3 TIMESTAMP
Indexes:
Name :
Type : TREE
Columns:
No Name
--------------------------
0 col1
[Memo]
This section explains the sub-commands that display an SQL execution plan.
Display an SQL analysis result (global plan) in text format or in JSON format.
Sub-command
getplantxt [<Text file name>] |
Argument
Argument | Note |
---|---|
Text file name | Specify the name of the file where the results are saved. |
Example:
gs[public]> EXPLAIN ANALYZE select * from table1, table2 where table1.value=0 and table1.id=table2.id;
Search is executed (11 ms).
gs[public]> getplantxt
Id Type Input Rows Lead time Actual time Node And more..
--------------------------------------------------------------------------------------------------------------------
0 SCAN - - 0 0 192.168.15.161:10001 table: {table1} INDEX SCAN
1 SCAN 0 0 2 2 192.168.15.161:10001 table: {table1, table2} INDEX SCAN JOIN_EQ_HASH
2 RESULT 1 0 0 0 192.168.15.161:20001
[Memo]
Display an SQL analysis result (global plan) in JSON format.
Sub-command
getplanjson [<JSON file name>] |
Argument
Argument | Note |
---|---|
JSON file name | Specify the name of the file where the results are saved. |
Example:
gs[public]> getplanjson
{
"nodeList" : [ {
"cmdOptionFlag" : 65,
"id" : 0,
"indexInfoList" : [ 2, 0, 0 ],
"inputList" : [ ],
"outputList" : [ {
"columnId" : 0,
"columnType" : "STRING",
"inputId" : 0,
"op" : "EXPR_COLUMN",
"qName" : {
"db" : "public",
"name" : "id",
"table" : "collection_nopart_AA22"
},
"srcId" : 1
}, {
・
・
・
[Memo]
Display the detailed information of an SQL analysis result in JSON format.
Sub-command
gettaskplan <plan ID> |
Argument
Argument | Note |
---|---|
Plan ID | Specify the plan ID of the plan to display. |
Example:
gs[public]> gettaskplan 0
{
"cmdOptionFlag" : 65,
"id" : 0,
"indexInfoList" : [ 2, 0, 0 ],
"inputList" : [ ],
"outputList" : [ {
"columnId" : 0,
"columnType" : "STRING",
"inputId" : 0,
"op" : "EXPR_COLUMN",
"qName" : {
"db" : "public",
"name" : "id",
"table" : "collection_nopart_AA22"
},
"srcId" : 1
}, {
・
・
・
[Memo]
This section explains the sub-commands for other operations.
Display the executed sub-command in the standard output.
Sub-command
echo <Boolean> |
Argument
Argument | Note |
---|---|
Boolean | Display the executed sub-command in the standard output when TRUE is specified. Default value is FALSE. |
Example:
//display the executed sub-command in the standard output
gs> echo TRUE
[Memo]
Display the definition details of the specified character string or variable.
Sub-command
print <message> |
Argument
Argument | Note |
---|---|
Message | Specify the character string or variable to display. |
Example:
//display of character string
gs> print print executed.
print executed.
[Memo]
Sleep for the specified time.
Sub-command
sleep <No. of sec> |
Argument
Argument | Note |
---|---|
No. of sec | Specify the no. of sec to go to sleep. |
Example:
//sleep for 10 sec
gs> sleep 10
[Memo]
Execute an external command.
Sub-command
exec <External command> [<External command arguments>] |
Argument
Argument | Note |
---|---|
External command | Specify an external command. |
External command arguments | Specify the argument of an external command. |
Example:
//display the file data of the current directory
gs> exec ls -la
[Memo]
The following command is used to terminate gs_sh.
Sub-command
exit |
quit |
Example:
// terminate gs_sh.
gs> exit
In addition, gs_sh can be configured to terminate when an error occurs in a sub-command.
Sub-command
errexit <Boolean> |
Argument
Argument | Note |
---|---|
Boolean | If TRUE is specified, gs_sh ends when an error occurs in the sub-command. Default is FALSE. |
Example:
//configure the setting so as to end gs_sh when an error occurs in the sub-command
gs> errexit TRUE
[Memo]
Display a description of the sub-command.
Sub-command
help [<Sub-command name>] |
Argument
Argument | Note |
---|---|
Sub-command name | Specify the sub-command name to display its description. Display a list of the sub-commands if omitted. |
Example:
//display the description of the sub-command
gs> help exit
exit
terminate gs_sh.
[Memo]
Display the version of gs_sh.
Sub-command
version |
Example:
//display the version
gs> version
gs_sh version 2.0.0
[Memo]
Set the time zone.
Sub-command
settimezone [setting value] |
Argument
Argument | Note |
---|---|
Value | The format of the setting value is “±hh:mm”, “±hhmm”, “Z”, or “auto”. For example, for Japan time, the setting value is “+09:00”. When the value is not specified, the time zone setting is cleared. |
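Example (an illustrative sketch):
//set Japan time
gs> settimezone +09:00
//clear the time zone setting
gs> settimezone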
[Memo]
Specify the authentication method. For details on the authentication method, see the GridDB Features Reference.
Sub-command
setauthmethod [authentication method] |
Argument
Argument | Note |
---|---|
authentication method | Specify either INTERNAL (internal authentication) or LDAP (LDAP authentication) as an authentication method to be used. If unspecified, the set value is cleared. The set value can be checked by using the authentication variable. |
Example:
gs[public]> setauthmethod ldap
// Check the settings.
gs[public]> print $authentication
ldap
[Memo]
Use SSL to secure communication with the GridDB cluster.
After enabling SSL connection on the GridDB cluster, enable SSL connection with the following subcommand of the gs_sh command. For details on SSL connection, see the GridDB Features Reference.
Sub-command
setsslmode [SSL connection settings] |
Argument
Argument | Note |
---|---|
SSL connection settings | Specify DISABLED (SSL disabled), REQUIRED (SSL enabled), or VERIFY (SSL enabled with server certificate verification). The default value is DISABLED. The set value can be checked by using the variable sslMode. |
Example:
gs> setsslmode REQUIRED
//Check the settings.
gs[public]> print $sslMode
REQUIRED
[Memo]
To perform server certificate verification, set the truststore of the certificate in the environment variable GS_COMMON_JVM_ARGS. Note that the command interpreter does not support the checking of the expiration date of a CA certificate to ensure it is valid.
Example:
GS_COMMON_JVM_ARGS="-Djavax.net.ssl.trustStore=/var/lib/gridstore/admin/keystore.jks -Djavax.net.ssl.trustStorePassword=changeit"
export GS_COMMON_JVM_ARGS
To configure the cluster network in multicast mode when multiple network interfaces are available, specify the IP address of the interface to receive the multicast packets from.
Sub-command
setntfif [IP address] |
Argument
Argument | Note |
---|---|
IP address | Specify in IPv4 the IP address of the interface from which the multicast packet is received. If unspecified, the set value is cleared. The set value can be checked by using the variable notificationInterfaceAddress. |
Example:
gs[public]> setntfif 192.168.1.100
// Check the settings.
gs[public]> print $notificationInterfaceAddress
192.168.1.100
[Memo]
Display previously run subcommands.
Sub-command
history |
Rerun recent subcommands from the subcommand history displayed with the history subcommand.
Sub-command
!history number |
Argument
Argument | Note |
---|---|
history number | Specify the history number of the subcommand you want to rerun from the subcommand history displayed with the history subcommand. |
Rerun the previously run subcommand.
Sub-command
!! |
Example:
gs> history
1 connect $mycluster
2 showcontainer
3 select * from mytable;
:
210 configcluster $mycluster
211 history
gs> !210
gs> configcluster $mycluster
:
gs> !!
gs> configcluster $mycluster
:
[Memo]
Command list
gs_sh [<Script file>] |
gs_sh -v|--version |
gs_sh -h|--help |
Options
Options | Required | Note |
---|---|---|
-v|--version | Display the version of the tool. | |
-h|--help | Display the command list as a help message. |
[Memo]
GridDB cluster definition sub-command list
Sub-command | Argument | Note | *1 | |
---|---|---|---|---|
setnode | <Node variable> <IP address> <Port no.> [<SSH port no.>] | Define the node variable. | ||
setcluster | setcluster <Cluster variable> <Cluster name> <Multicast address> <Port no.> [<Node variable> …] | Define the cluster variable. | ||
setclustersql | setclustersql <Cluster variable> <Cluster name> <SQL address> <SQL port no.> | Define the SQL connection destination in the cluster configuration. | ||
modcluster | <Cluster variable> add | remove <Node variable> … | Add or delete a node variable to or from the cluster variable. | |
setuser | <User name> <Password> [<gsadm password>] | Define the user and password to access the cluster. | ||
set | <Variable name> [<Value>] | Define an arbitrary variable. | ||
show | [<Variable name>] | Display the detailed definition of the variable. | ||
save | [<Script file name>] | Save the variable definition in the script file. | ||
load | [<Script file name>] | Read and execute a script file. | |
sync | IP address port number [cluster variable name [node variable] ] | Connect to the running GridDB cluster and automatically define a cluster variable and a node variable. | * |
GridDB cluster operation sub-command list
Sub-command | Argument | Note | *1 |
---|---|---|---|
startnode | <Node variable> | <Cluster variable> [<Timeout time in sec>] | Start the specified node. | * |
stopnode | <Node variable> | <Cluster variable> [<Timeout time in sec>] | Stop the specified node. | * |
stopnodeforce | <Node variable> | <Cluster variable> [<Timeout time in sec>] | Stop the specified node by force. | * |
startcluster | <Cluster variable> [ <Timeout time in sec.> ] | Attach the active node groups to a cluster, together at once. | * |
stopcluster | <Cluster variable> [ <Timeout time in sec.> ] | Detach all of the currently attached nodes from a cluster, together at once. | * |
joincluster | <Cluster variable> <Node variable> [ <Timeout time in sec.> ] | Attach a node individually to a cluster. | * |
leavecluster | <Node variable> [ <Timeout time in sec.> ] | Detach a node individually from a cluster. | * |
leaveclusterforce | <Node variable> [ <Timeout time in sec.> ] | Detach a node individually from a cluster by force. | * |
appendcluster | <Cluster variable> <Node variable> [ <Timeout time in sec.> ] | Add an undefined node to a pre-defined cluster. | * |
configcluster | Cluster variable | Display the cluster status data. | * |
config | Node variable | Display the cluster configuration data. | * |
stat | Node variable | Display the node configuration data and statistical information. | * |
logs | Node variable | Display the log of the specified node. | *
logconf | <Node variable> [ <Category name> [ <Output level> ] ] | Display and change the log settings. | * |
showsql | Query ID | Display the SQL processing under execution. | |
showevent | Display the event list under execution. | ||
showconnection | Display the list of connections. | ||
killsql | Query ID | Cancel the SQL processing in progress. | * |
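To show how these sub-commands fit together, here is an illustrative sketch of a start-up sequence in gs_sh. The node variable `node1` and cluster variable `cluster1` are hypothetical and are assumed to have been defined beforehand with the cluster definition sub-commands.

```
gs> startnode node1
gs> startcluster cluster1
gs> configcluster cluster1
gs> stat node1
```

The output of `configcluster` and `stat` depends on the cluster, so it is omitted here.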
Data operation sub-command list
Sub-command | Argument | Note | *1 |
---|---|---|---|
connect | <Cluster variable> [<Database name>] | Connect to a GridDB cluster. | |
tql | <Container name> <Query;> | Execute a search and retain the search results. | |
get | [ <No. of acquires> ] | Get the search results and display them on stdout. | |
getcsv | <CSV file name> [<No. of acquires>] | Get the search results and save them in a file in the CSV format. | |
getnoprint | [ <No. of acquires> ] | Get the query results but do not display them on stdout. | |
tqlclose | Close the TQL and discard the search results saved. | ||
tqlexplain | <Container name> <Query;> | Execute the specified TQL command and display the execution plan and actual measurement values such as the number of cases processed etc. | |
tqlanalyze | <Container name> <Query;> | Display the execution plan of the specified TQL command. | |
sql | <SQL command;> | Execute an SQL command and retain the search results. | |
sqlcount | Boolean | Set whether to execute a count query when running an SQL search. | |
queryclose | Close the query and discard the search results saved. | ||
disconnect | Disconnect the user from a GridDB cluster. | |
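The data operation sub-commands are typically combined as in the sketch below; the cluster variable `cluster1` and the container name `con1` are hypothetical placeholders.

```
gs> connect $cluster1
gs> tql con1 select * limit 10;
gs> get 10
gs> tqlclose
gs> disconnect
```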
Database management sub-command list
Sub-command | Argument | Note | *1 |
---|---|---|---|
createdatabase | <Database name> | Create a database. | * |
dropdatabase | <Database name> | Delete a database. | * |
getcurrentdatabase | Display the current database name. | ||
showdatabase | <Database name> | List the databases with access right information. | |
grantacl | <access rights> <Database name> <User name> | Grant database access rights to a user. | *
revokeacl | <access rights> <Database name> <User name> | Revoke access rights to the database. | * |
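As an illustrative sketch of the database management flow (the database name `db1`, user name `user1`, and the access right value `ALL` are hypothetical placeholders; use an access right value accepted by your GridDB version):

```
gs> createdatabase db1
gs> grantacl ALL db1 user1
gs> showdatabase db1
gs> revokeacl ALL db1 user1
gs> dropdatabase db1
```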
User management sub-command list
Sub-command | Argument | Note | *1 | |
---|---|---|---|---|
createuser | <User name> <Password> | Create a general user. | * | |
dropuser | <User name> | Delete a general user. | * | |
setpassword | <Password> | Change one's own password. | |
setpassword | <User name> <Password> | Change the password of a general user. | * | |
showuser | [user name | role name] | Display information on a general user and a role. | * |
Container management sub-command list
Sub-command | Argument | Note | *1 |
---|---|---|---|
createcollection | <Container name> <Column name> <Column type> [<Column name> <Column type> …] | Create a container (collection). | |
createtimeseries | <Container name> <Compression method> <Column name> <Column type> [<Column name> <Column type> …] | Create a container (timeseries container). | |
createcontainer | <Container definition file> [<Container name>] | Create a container based on the container definition file. | |
dropcontainer | <Container name> | Delete a container | |
putrow | container name value [value…] | Register a row in a container. | |
removerow | container name row key value [row key value…] | Delete a row from a container. | |
showcontainer | [ <Container name> ] | Display the container data. | |
showtable | [ <Table name> ] | Display the table data. | |
searchcontainer | [container name] | Search for a container by specifying a container name. | |
searchview | [view name] | Search for a view by specifying a view name. | |
createindex | <Container name> <Column name> <Index type> … | Create an index in the specified column. | |
createcompindex | <Container name> <Column name> … | Create a composite index on the specified columns. |
dropindex | <Container name> <Column name> <Index type> … | Delete an index of the specified column. |
dropcompindex | <Container name> <Column name> … | Delete the composite index of the specified columns. |
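The container management sub-commands can be chained as in the sketch below. The container name `con1`, the column definitions, and the index type `TREE` are hypothetical placeholders following the argument patterns listed above.

```
gs> createcollection con1 id integer name string
gs> createindex con1 id TREE
gs> putrow con1 1 sensorA
gs> showcontainer con1
gs> dropcontainer con1
```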
Execution plan sub-command list
Sub-command | Argument | Note | *1 |
---|---|---|---|
getplantxt | [Text file name] | Display an SQL analysis result in text format. | |
getplanjson | [JSON file name] | Display an SQL analysis result in JSON format. | |
gettaskplan | Plan ID | Display the detailed information of an SQL analysis result in JSON format. |
Other operation sub-command list
Sub-command | Argument | Note | *1 |
---|---|---|---|
echo | Boolean | Set whether to echo back. | |
print | Message | Display the definition details of the specified character string or variable. | |
sleep | No. of sec | Set the time for the sleeping function. | |
exec | External command [External command arguments] | Execute an external command. | |
exit | Terminate gs_sh. | |
quit | Terminate gs_sh. | |
errexit | Boolean | Set whether to terminate gs_sh when an error occurs. | |
help | [ <Sub-command name> ] | Display a description of the sub-command. | |
version | Display the version of gs_sh. | ||
settimezone | [setting value] | Set the time zone. | |
setauthmethod | [ authentication method ] | Specify the authentication method. | |
sslMode | [ SSL connection settings ] | Specify whether to secure communication with the GridDB cluster through SSL. | |
setntfif | [ IP address ] | Specify the IP address of the interface from which the multicast packet is received. | |
history | Display previously run subcommands. | ||
![history number ] | Specify the history number of the subcommand you want to rerun from the subcommand history displayed with the history subcommand. | ||
!! | Rerun the previously run subcommand. |
The integrated operation control GUI (hereinafter described as gs_admin) is a Web application that integrates GridDB cluster operation functions.
The following operations can be carried out using gs_admin.
gs_admin needs to be installed either on a machine where the nodes constituting a cluster have been started, or on a machine on the same subnet to which multicast packets are delivered.
gs_admin is a Web application that includes a Web container.
To use gs_admin, Java has to be installed beforehand. The supported versions are as follows.
The GridDB version supported by gs_admin Ver. 5.0 is:
The procedure to use gs_admin is as follows.
See the “GridDB Quickstart Guide” for the procedure to configure a GridDB node.
The procedure to install and configure gs_admin is as follows.
Install the GridDB Web UI package (griddb-ee-webui).
On a machine where the Web application is placed, install the package using the command below.
```
$ sudo rpm -Uvh griddb-ee-webui-X.X.X-linux.x86_64.rpm
```

(Ubuntu Server)

```
$ sudo dpkg -i griddb-ee-webui_X.X.X_amd64.deb
```
*X.X.X indicates the GridDB version.
When the package is installed, a directory named admin is created in the GridDB home directory (`/var/lib/gridstore`). This directory (`/var/lib/gridstore/admin`) is referred to as adminHome hereinafter.
The configuration under adminHome is as follows.
```
capture/                                      # snapshot storage directory (*)
  <Node address>_<port>/YYYYMMDDHHMMSS.json   # snapshot file (*)
conf/                                         # configuration file directory
  gs_admin.properties                         # static parameter file to be configured initially
  gs_admin.settings                           # dynamic parameter file for display-related settings
  password                                    # gs_admin user definition file
  repository.json                             # node repository file
log/                                          # log file directory of gs_admin (*)
  gs_admin-YYYYMMDD.log                       # log file (*)
tree/                                         # structural file directory of container tree (*)
  foldertree-<cluster name>-<user name>.json  # folder tree file (*)
```
Files and directories marked with a (*) are created automatically by gs_admin.
[Notes]
When using gs_admin, perform authentication as a gs_admin user.
Administrator users of GridDB clusters under management need to be set up as gs_admin users.
The gs_admin user definition file is /var/lib/gridstore/admin/conf/password. This file is not created when the package is installed.
The easiest way to create it is to copy the user definition file of a node in the cluster you want to manage (/var/lib/gridstore/conf/password) over the gs_admin user definition file (/var/lib/gridstore/admin/conf/password). In this case, all administrator users listed in the copied user definition file become gs_admin users.
[Memo]
The configuration file is /var/lib/gridstore/admin/conf/gs_admin.properties. Configure it as the gsadm user, in line with the GridDB cluster configuration.
If the property file has been modified, restart the griddb-webui service.
gs_admin.properties contains the following settings.
Property | Default | Description |
---|---|---|
adminUser | admin | Set the gs_admin administrator users. Multiple user names can be set by separating the names with commas. The following functions can be used by a gs_admin administrator user: the cluster operation functions and the repository management function. |
ospassword | - | Set the password of the gsadm user (OS user) of the nodes. The following functions can be used when the password is set: the node start operation (start) in the cluster operation functions, and the OS information display screen. |
timeZone | - | Set timeZone as a property for cluster connection. The set value is used as the time zone of the TIMESTAMP type column value on the TQL screen and SQL screen. If not specified, the time zone will be UTC. |
logging.performance | FALSE | Specify TRUE to retrieve the performance log. |
gs_admin.debug | FALSE | Specify TRUE to start in debug mode. |
sqlLoginTimeout | - | Specify the SQL login timeout in seconds. |
authenticationMethod | *dependent on the GridDB cluster settings | Specify either INTERNAL (internal authentication) or LDAP (LDAP authentication) as an authentication method to be used. |
notificationInterfaceAddress | *OS-dependent | To configure the cluster network in multicast mode when multiple network interfaces are available, specify the IP address of the interface to receive the multicast packets from. |
sslMode | DISABLED | For SSL connection settings, specify DISABLED (SSL is invalid), REQUIRED (SSL is valid), or VERIFY (SSL is valid and performs server certificate verification). |
[Memo]
  * `ospassword` is required for the node start operation and the OS information display screen of gs_admin.
  * Note that checking the expiration date of a CA certificate to ensure it is valid is not supported.
  * To verify server certificates, set `GS_COMMON_JVM_ARGS` in /etc/environment, referring to the example below, and restart gs_admin to apply the settings.

Example:

```
GS_COMMON_JVM_ARGS="-Djavax.net.ssl.trustStore=/var/lib/gridstore/admin/keystore.jks -Djavax.net.ssl.trustStorePassword=changeit"
```
Example:

```
server.ssl.enabled=true
server.port=8443
server.ssl.key-store-type=JKS
server.ssl.key-store=/var/lib/gridstore/admin/keystore.jks
server.ssl.key-store-password=changeit
server.ssl.key-alias=tomcat
```
The node repository file (/var/lib/gridstore/admin/conf/repository.json) centrally manages cluster configuration data and node data. It is used to specify the clusters under management and by the cluster operation functions. Configure it as the gsadm user, in line with the GridDB cluster configuration.
The default file contents are as follows.
```
{
    "header" : {
        "lastModified" : "",
        "version" : "5.0.0"
    },
    "clusters" : [
        {
            "name" : "INPUT_YOUR_CLUSTER_NAME_HERE",
            "address" : "239.0.0.1",
            "port" : 31999,
            "jdbcAddress" : "239.0.0.1",
            "jdbcPort" : 41999
        }
    ],
    "nodes" : [
        {
            "address" : "192.168.1.10",
            "port" : 10040,
            "sshPort" : 22,
            "clusterName" : "INPUT_YOUR_CLUSTER_NAME_HERE"
        }
    ]
}
```
To configure a node repository, either edit the file directly or use the repository management screen; using the repository management screen is recommended. When configuring via the repository management screen, see the functions of the repository management screen and Starting management of a cluster in operation with gs_admin.
Use of the operating commands or the command interpreter (gs_sh) is recommended when performing cluster configuration for the first time.
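Because a syntax error in a hand-edited repository.json prevents the repository data from being read, it can be worth syntax-checking the file before restarting gs_admin. The helper below is a sketch, not part of gs_admin; it assumes python3 is available on the machine.

```shell
#!/bin/sh
# check_repo: print OK if the given node repository file is valid JSON, NG otherwise.
# Usage: check_repo /var/lib/gridstore/admin/conf/repository.json
check_repo() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo OK
    else
        echo NG
    fi
}
```

Note that this checks JSON syntax only; it does not validate the cluster or node entries themselves.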
Use the systemctl command to start and stop gs_admin.
```
$ sudo systemctl [start|stop|status|restart] griddb-webui
```
Access the application URI below to access gs_admin.
http://[Tomcat operating machine address]:8080/gs_admin
The login screen appears when you access the gs_admin application URI.
On the login screen, you can choose between 2 environments: cluster or repository manager. For the former, select the cluster you would like to manage from the drop-down list; once logged in, you will be taken to the integrated operation control screen. For the latter, you will be taken to the repository management screen.
To log in, enter your gs_admin user name and password in the boxes next to “user” and “password” respectively, and press the Login button.
[Memo]
The integrated operation control screen is shown below.
The integrated operation control screen is made up of the following elements.
Element | Abbreviation | Location | Functions |
---|---|---|---|
Tree view | Tree | Left | Display, select a list of operating targets |
Data display and input section | View | Right | Data display and data input subject to operation |
Menu area | ― | Top | Log out |
Message area | ― | Bottom | ― |
Tree function
In Tree, a cluster or container can be selected as the main operation target by switching tabs at the top.
Tab | Tree name | Main functions |
---|---|---|
ClusterTree | Cluster tree | Display a list of the clusters and nodes, select the operating targets |
ContainerTree | Container tree | Display a list of the databases, search for containers, select operating targets |
View function
The tabs displayed at the top of the View differ depending on the operating target selected in the Tree. The function can be switched by selecting a tab at the top.
See the items of each tree and screen for details.
This function can be used by a gs_admin administrator user only.
Select repository manager in the login screen and login as a gs_admin administrator user to arrive at the repository management screen.
The repository management screen is shown below.
The following functions are available in the repository management screen.
  * The content of the node repository file (`/var/lib/gridstore/admin/conf/repository.json`) is divided into 2 sections: the top half of the screen shows the cluster data, whereas the bottom half displays the node data.
  * Specify the value of /system/serviceAddress in the node definition file (gs_node.json) as the IP address.
  * Specify the value of /system/servicePort in the node definition file (gs_node.json) as the port.

The specifications of the input columns are as follows.
Cluster

  * Specify the value of /cluster/clusterName in the cluster definition file (gs_cluster.json).
  * Specify the value of /transaction/notificationAddress in the cluster definition file (gs_cluster.json).
  * Specify the value of /transaction/notificationPort in the cluster definition file (gs_cluster.json).
  * Specify the value of /sql/notificationAddress in the cluster definition file (gs_cluster.json).
  * Specify the value of /sql/notificationPort in the cluster definition file (gs_cluster.json).
  * Join the values of /cluster/notificationMember/transaction/address and /cluster/notificationMember/transaction/port in the cluster definition file (gs_cluster.json) with a ":", and specify the value for each node, separating the nodes with commas.
  * Join the values of /cluster/notificationMember/sql/address and /cluster/notificationMember/sql/port in the cluster definition file (gs_cluster.json) with a ":", and specify the value for each node, separating the nodes with commas.

Node

  * Specify the value of /system/serviceAddress in the node definition file (gs_node.json).
  * Specify the value of /system/servicePort in the node definition file (gs_node.json).

Summary
In a cluster tree, the nodes constituting the cluster under management, i.e., the repository nodes (whose clusterName is the cluster under management), are displayed in a tree format.
An * will appear at the beginning of a node which has not been registered in the repository.
A description of the icons shown in a cluster tree is given below.
Icon | Note |
---|---|
Cluster | |
Master node | |
Follower node | |
Started node | |
Stopped node | |
Status unconfirmed node | |
Message |
Context menu
When an element of the tree is right-clicked, a context menu appears according to which element was clicked, cluster or node. Data update and element operations can then be performed by selecting an item from the menu.
The menus and functions for the respective selected elements are as follows.
Selection element | Menu | Functions |
---|---|---|
Cluster | refresh | Get list of nodes in a tree again |
Node | refresh | Display the latest node information in View |
Operating target and view tab
When an element in the tree is left-clicked, functions appear in the View according to which element was clicked, cluster or node. The function can be switched by selecting a tab at the top of the View.
Selection element | Tab | Screen name | Functions |
---|---|---|---|
Cluster | Dashboard | Dashboard screen | The dashboard screen contains a variety of information related to the entire cluster such as memory usage, cluster health, log information, etc. |
Status | Cluster status screen | Display configuration data and information of cluster under management. | |
Monitor | OS data display screen | Display OS data of a machine with operating nodes. | |
Configuration | Cluster operation screen | The cluster operation screen consists of a table listing the running nodes, as well as node start and stop functions. | |
Node | System | System data screen | Display system data of the node. |
Container | Container list screen | The container list screen shows container information such as the container names and the databases they belong to. | |
Performance | Performance data screen | Display performance data of the node as a graph. | |
Snapshot | Snapshot screen | The snapshot screen shows the node’s performance at a point in time. The values can be compared with the values measured earlier. | |
Log | Log screen | The log screen contains the event log information of a node and the corresponding setting of its output level. |
[Memo]
Summary
The dashboard screen contains a variety of information related to the entire cluster such as memory usage, cluster health, log information, etc.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Cluster | Dashboard |
Screen
Functions
The following functions are available in the dashboard screen.
Summary
Display configuration data and information of cluster under management.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Cluster | Status |
Screen
Functions
The cluster status screen is comprised of the following components.
Data-related information display (◆Stored Data Information)
Summary
The OS data display screen comprises two components, Resource Information and OS Performance, for the current cluster. GridDB performance analysis is displayed as pie charts, while the CPU and network load status is displayed as line graphs.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Cluster | Monitor |
Screen
Functions
The OS data display screen is comprised of the following components.
[Memo]
  * The OS data cannot be displayed if `ospassword` has not been set up in gs_admin.properties.

This function can be used by the gs_admin administrator only.
Summary
The cluster operation screen consists of a table listing the running nodes, as well as node start and stop functions.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Cluster | Configuration |
Screen
Functions
The following functions are available in the cluster operation screen.
[Memo]
Summary
Display system data of the node.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Node | System |
Screen
Functions
The following functions are available in the system data screen.
Summary
The container list screen shows container information such as the container names and the databases they belong to.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Node | Container |
Screen
Functions
The following functions are available in the container list screen.
[Memo]
Summary
Display performance data of the node as a graph.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Node | Performance |
Screen
Functions
The following functions are available in the performance data screen.
Summary
The snapshot screen shows the node’s performance at a point in time. The values can be compared with the values measured earlier.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Node | Snapshot |
Screen
Functions
The following functions are available in the snapshot screen.
Summary
The log screen contains the event log information of a node and the corresponding setting of its output level.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Cluster tree | Node | Log |
Screen
Functions
The following functions are available in the log screen.
[Notes]
Summary
In a container tree, the databases and containers which exist in a cluster under management are displayed in a tree format.
The cluster under management is displayed at the top of the tree (the figure within the parentheses refers to the total number of databases in the cluster).
A description of the icons shown in a container tree is given below.
Icon | Note |
---|---|
Cluster | |
Database | |
Database (does not exist) | |
Container (collection) | |
Container (timeseries container) | |
Partitioned table (container) | |
Search folder | |
Temporary work folder | |
Message |
Functions
The following functions are available in a container tree.
  * The tree structure of the container tree is saved as /var/lib/gridstore/admin/tree/foldertree-[cluster name]-[user name].json on the Tomcat operating machine.

After login, the ClusterTree tab and node list are displayed automatically. Upon switching to the ContainerTree tab, the saved tree structure of the container tree is restored automatically if one has been saved. However, search folders will not be searched again automatically.
The following operations cannot be carried out in a container tree.
Context menu
When an element of the tree is right-clicked, a context menu appears according to which element was clicked, cluster or node. Data update and element operations can then be performed by selecting an item from the menu.
The menus and functions for the respective selected elements are as follows.
Selection element | Menu | Functions |
---|---|---|
Cluster | refresh | Read the tree structure of the tree again and automatically detect the database |
Database | refresh | Check the database existence and search for containers again |
Container | refresh | Display the latest container information in View |
drop | Deletion of container (with confirmation dialog) | |
Search folder | refresh | Search for container again |
remove | Deletion of the search folder | |
Temporary work folder | remove | Deletion of a temporary work folder |
[Memo]
Operating target and view tab
When an element in the tree is left-clicked, functions appear in the View according to which element was clicked, cluster or node. The function can be switched by selecting a tab at the top of the View.
Selection element | Tab | Screen name | Function overview |
---|---|---|---|
Cluster | Database | Database management screen | A database can be created or deleted, and access rights can be assigned or revoked. |
User | User management screen | On the user management screen, addition and deletion of general users, as well as password changes, can be performed. | |
SQL | SQL screen | The results of a SQL command executed on the database can be displayed. | |
Database | Create | Container creation screen | A container can be created in a database. |
SQL | SQL screen | The results of a SQL command executed on the database can be displayed. | |
Container | Details | Container details screen | The container details screen contains column and index configuration data of a container. |
Index | Index setting screen | Index setting window allows an index to be created or deleted for each column of a container. | |
TQL | TQL screen | Execute a TQL (query language) on a container and display the results. | |
Partition | Details | Container details screen | Column, index and table partitioning data of a container will be displayed. |
Summary
A database can be created or deleted, and access rights can be assigned or revoked.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Cluster | Database |
Screen
Functions
The following functions are available in the database management screen.
Summary
On the user management screen, addition and deletion of general users, as well as password changes, can be performed.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Cluster | User |
Screen
Functions
Summary
The results of a SQL command executed on the database can be displayed.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Cluster | SQL |
Container tree | Database | SQL |
Screen
Functions
The following functions are available in the SQL screen.
[Memo]
Summary
A container can be created in a database.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Database | Create |
Screen
Functions
The following functions are available in the container creation screen.
[Memo]
Summary
The container details screen contains column and index configuration data of a container.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Container | Details |
Screen
Functions
The following functions are available in the container details screen.
Summary
An index can be created or deleted for each column of a container.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Container | Index |
Screen
Functions
The following functions are available in the index setting screen.
[Memo]
Summary
Execute a TQL (query language) on a container and display the results.
Method of use
Type of tree | Operating target | Tab |
---|---|---|
Container tree | Container | TQL |
Screen
Functions
The following functions are available in the TQL screen.
[Memo]
This section provides a guide on how to use the various functions of gs_admin.
To start managing a currently active cluster with gs_admin, use the repository management function and follow the procedure below.
  * Specify the value of /system/serviceAddress in the node definition file (gs_node.json) as the IP address.
  * Specify the value of /system/servicePort in the node definition file (gs_node.json) as the port.
of the node definition file (gs_node.json) as the port.When managing multiple clusters as a single gs_admin user, take note of the gs_admin user settings.
gs_admin user is managed in a single file, therefore if an administrator managing multiple clusters use different passwords for each of the cluster, the admin cannot be specified as a gs_admin user.
Therefore, the appropriate settings need to be configured according to number of admin in charge of the entire clusters.
The procedure to register a new gs_admin user is shown below.
Use the gs_adduser command to add a new administrator user on one of the nodes of the cluster that you want to manage.
Example: If the new user name/password is gs#newuser/newuser
```
$ su - gsadm
$ gs_adduser gs#newuser -p newuser
$ cat /var/lib/gridstore/conf/password
admin,8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
gs#newuser,9c9064c59f1ffa2e174ee754d2979be80dd30db552ec03e7e327e9b1a4bd594e
system,6ee4a469cd4e91053847f5d3fcb61dbcc91e8f0ef10be7748da4c4a1ba382d17
```
As the Tomcat execution user, append the user name and password added above to the gs_admin user definition file.
Example: If the new user name/password is gs#newuser/newuser
```
$ echo gs#newuser,9c9064c59f1ffa2e174ee754d2979be80dd30db552ec03e7e327e9b1a4bd594e >> /var/lib/gridstore/admin/conf/password
```
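The second field in the password file entries above is consistent with the SHA-256 hex digest of the plain-text password (the digest of the string "admin" matches the admin entry shown earlier). Assuming that scheme, the digest for a new entry can be computed with standard tools as in the sketch below; gs_adduser remains the supported way to generate entries, so treat this only as a convenience check.

```shell
#!/bin/sh
# password_digest: print the SHA-256 hex digest of a password, in the same
# form as the second field of the GridDB user definition file.
password_digest() {
    printf '%s' "$1" | sha256sum | cut -d' ' -f1
}
```

For example, `password_digest newuser` prints the value to append after `gs#newuser,`.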
gs_admin error data and other logs are output to the adminHome log directory.
The default output level is info.
These logs are used to collect data when a gs_admin problem occurs, or when there is a request from the support desk, etc.
The log output level can be set in /webapps/gs_admin/WEB-INF/classes/logback.xml under the Tomcat home directory (/usr/local/tomcat by default).
Error type | Error no. | Message | Treatment method |
---|---|---|---|
Internal Server Error | E00104 | Cluster is not servicing. | Cluster under management is not operating. Use the configuration tab and other operation tools to operate the cluster, refresh the clusters from the cluster tree, or login again. |
Internal Server Error | E00105 | D10135: Failed to check a node status. | Nodes from Ver.1.5 or lower may have been registered in the nodes registered in the repository. Check the version of each node. |
Internal Server Error | | Failed to create <File path>. | File creation failed. Check whether any directory in the displayed path does not exist, or whether access rights for the Tomcat user have not been assigned to any directory. |
Internal Server Error | E0030C | [Code:******] <Error message> | Error message of GridDB node. See GridDB Error Codes and check the countermeasure with the corresponding code. |
Bad Request | E00300 | Container “Container name” already exists. | Container name is duplicated. Specify another container name to create a container. |
Bad Request | E00303 | Container “Container name” not found. | Specified container does not exist. Right click the ContainerTree cluster, select refresh and search for the container again. |
Bad Request | | [Code:******] <Error message> | Error message of GridDB node. See GridDB Error Codes and check the countermeasure with the corresponding code. |
Input Error | | <Field name> is required. | The input field has been left blank. Enter a value in the <Field name> input field. |
Input Error | | <Field name> is invalid. | An invalid value has been entered in the <Field name> input field. See the “GridDB Operation Tools Reference” and enter a value of a permitted type. |
To recover a database from local damage or to migrate a database, the GridDB export/import tools provide save and recovery functions at the database and container level.
The export tool saves the container and row data of a GridDB cluster in the file below. A specific container can also be exported by specifying its name.
The import tool reads the container data files and the export execution data file, and restores the container and row data in GridDB. Data of a specific container can also be imported.
Container data files are composed of metadata files and row data files.
A metadata file is a json-format file which contains the container type and schema, and the index data that has been set up.
There are 2 types of row data file, one of which is the CSV data file in which container data is stored in the CSV format, and the other is the binary data file in which data is stored in a zip format.
See Format of a container data file for details of the contents described in each file.
In addition, there are 2 types of container data file as shown below depending on the number of containers to be listed.
Hereinafter, container data files of various configurations will be written as single container data file and multi-container data file.
When a large number of containers is exported as single container data files, management becomes troublesome because a large number of metadata files and row data files are created. By contrast, even when a large number of containers is exported as a multi-container data file, only 1 metadata file and 1 row data file are output.
Therefore, it is recommended that these 2 configurations be used selectively depending on the application.
A single container data file is used in the following cases.
A multi-container data file is used in the following cases.
Data such as the export date and time, the number of containers, container name etc. is saved in the export execution data file. This file is required to directly recover exported data in a GridDB cluster.
[Memo]
The following settings are required to execute an export/import command.
To execute the export/import commands, the client package containing the export/import functions and Java library package need to be installed.
[Example]
# rpm -Uvh griddb-ee-client-X.X.X-linux.x86_64.rpm
Preparing... ########################################### [100%]
User and group has already been registered correctly.
GridDB uses existing user and group.
1:griddb-ee-client ########################################### [100%]
# rpm -Uvh griddb-ee-java_lib-X.X.X-linux.x86_64.rpm
Preparing... ########################################### [100%]
1:griddb-ee-java_lib ########################################### [100%]
Set the property file in accordance with the GridDB cluster configuration used by the gsadm user. The property file is /var/lib/gridstore/expimp/conf/gs_expimp.properties.
The property file contains the following settings.
Property | Required | Default value | Note |
---|---|---|---|
mode | Required | MULTICAST | Specify the type of connection method: MULTICAST (multicast method), FIXED_LIST (fixed list method), or PROVIDER (provider method). If the method is not specified, the multicast method is used. |
hostAddress | Essential if mode=MULTICAST | 239.0.0.1 | Specify the /transaction/notificationAddress in the GridDB cluster definition file (gs_cluster.json). Multicast address used by the export/import tool to access a cluster. |
hostPort | Essential if mode=MULTICAST | 31999 | Specify the /transaction/notificationPort in the GridDB cluster definition file (gs_cluster.json). Port of multicast address used by the export/import tool to access a cluster. |
jdbcAddress | Essential if mode=MULTICAST | 239.0.0.1 | Specify /sql/notificationAddress in the GridDB cluster definition file (gs_cluster.json) when using the multicast method. |
jdbcPort | Essential if mode=MULTICAST | 41999 | Specify /sql/notificationPort in the GridDB cluster definition file (gs_cluster.json) when using the multicast method. |
notificationMember | Essential if mode=FIXED_LIST | - | Specify /cluster/notificationMember/transaction of the cluster definition file (gs_cluster.json) when using the fixed list method to connect. Connect address and port with a “:” in the description. For multiple nodes, link them up using commas. Example)192.168.0.100:10001,192.168.0.101:10001 |
jdbcNotificationMember | Essential if mode=FIXED_LIST | - | Specify sql/address and sql/port under the /cluster/notificationMember of the cluster definition file (gs_cluster.json) when using the fixed list method to connect. Connect address and port with a “:” in the description. For multiple nodes, link them up using commas. Example)192.168.0.100:20001,192.168.0.101:20001 |
notificationProvider.url | Essential if mode=PROVIDER | - | Specify /cluster/notificationProvider/url of the cluster definition file (gs_cluster.json) when using the provider method to connect. |
restAddress | - | 127.0.0.1 | Specify /system/listenerAddress of the GridDB node definition file (gs_node.json). Parameter for future expansion. |
restPort | - | 10040 | Specify /system/listenerPort of the GridDB node definition file (gs_node.json). Parameter for future expansion. |
clusterName | Required | INPUT_YOUR_CLUSTER_NAME_HERE | Specify the cluster name of GridDB which is used in the command “gs_joincluster”. |
logPath | - | /var/lib/gridstore/log | Specify the directory to output the error data and other logs when using the export/import tools. Log is output in gs_expimp-YYYYMMDD.log under the directory. |
commitCount | - | 1000 | Specify the number of rows as a unit to register data when registering container data with the import tool. When the numerical value becomes larger, the buffer for data processing gets larger too. If the row size is small, raise the numerical value, and if the row size is large, lower the numerical value. The parameter affects the registration performance for data import. |
transactionTimeout | - | 2147483647 | Specify the time allowed from the start until the end of a transaction. When registering or acquiring a large volume of data, a large numerical value matching the data volume needs to be set. A maximum value has been specified for processing a large volume of data by default. (Unit: second) |
failoverTimeout | - | 10 | Specify the failover time to repeat retry starting from the time a node failure is detected. This is also used in the timeout of the initial connection to the cluster subject to import/export. Increase the value when performing a process such as registering/acquiring a large volume of data in/from a container. (Unit: second) |
jdbcLoginTimeout | - | 10 | Specify the time of initial connection timeout for JDBC. (Unit: second) |
authenticationMethod | - | *dependent on the GridDB cluster settings | Specify either INTERNAL (internal authentication) or LDAP (LDAP authentication) as an authentication method to be used. |
notificationInterfaceAddress | - | *OS-dependent | To configure the cluster network in multicast mode when multiple network interfaces are available, specify the IP address of the interface to receive the multicast packets from. |
sslMode | - | DISABLED | For SSL connection settings, specify DISABLED (SSL is invalid), REQUIRED (SSL is valid), or VERIFY (SSL is valid and performs server certificate verification). |
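As a concrete illustration, a minimal gs_expimp.properties for a cluster using the fixed list method might look like the following (the cluster name, addresses, and ports are placeholder values; set them to match your gs_cluster.json):

```properties
# Connection method and cluster name (placeholder values)
clusterName=myCluster
mode=FIXED_LIST
# /cluster/notificationMember/transaction of gs_cluster.json
notificationMember=192.168.0.100:10001,192.168.0.101:10001
# sql/address and sql/port under /cluster/notificationMember
jdbcNotificationMember=192.168.0.100:20001,192.168.0.101:20001
# Log and tuning settings (defaults shown)
logPath=/var/lib/gridstore/log
commitCount=1000
sslMode=DISABLED
```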
[Memo]
To perform server certificate verification in an SSL connection, specify the truststore in the environment variable GS_COMMON_JVM_ARGS. Note that the export/import tools do not support checking the expiration date of a CA certificate to ensure it is valid. Example:
GS_COMMON_JVM_ARGS="-Djavax.net.ssl.trustStore=/var/lib/gridstore/admin/keystore.jks -Djavax.net.ssl.trustStorePassword=changeit"
export GS_COMMON_JVM_ARGS
The options that can be specified when using the export function are explained here (based on usage examples of the export function).
There are 3 ways to specify containers in a GridDB cluster: by specifying all the containers of the cluster, by specifying the database, and by specifying containers individually.
(1) Specify all containers
Specify the --all option.
[Example]
$ gs_export --all -u admin/admin
(2) Specify the database
Specify the database name with the --db option. Multiple database names can be specified by separating the names with a “ ” (blank).
[Example]
$ gs_export --db db001 db002 -u admin/admin //Enumerate database names
(3) Specify container individually
[Example]
$ gs_export --container c001 c002 -u admin/admin //Enumerate container name
$ gs_export --containerregex "^c0" -u admin/admin //Regular expression: specify containers whose names start with c0
Rows can be exported selectively by specifying a search query that extracts rows from a container. All rows stored in a container for which no search query has been specified will be exported.
Specify search query
[Example] Execution example
$ gs_export -c c001 c002 -u admin/admin --filterfile filter1.txt
$ gs_export --all -u admin/admin --filterfile filter2.txt
[Example] Description of definition file
^cont_month :select * where time > 100
^cont_minutes_.*:select * where flag = 0
cont_year2014 :select * where timestamp > TIMESTAMP('2014-05-21T08:00:00.000Z')
[Memo]
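The way the definition file maps container names to search queries can be sketched as follows (an illustrative sketch in Python, not the export tool's actual implementation; `query_for` is a hypothetical helper):

```python
import re

def query_for(container, filter_lines):
    """Return the search query whose regular expression matches the
    container name, or None when no line matches (all rows exported).
    Hypothetical helper mirroring the filter-file format shown above."""
    for line in filter_lines:
        pattern, _, query = line.partition(":")
        pattern = pattern.strip()
        if pattern and re.search(pattern, container):
            return query.strip()
    return None

rules = [
    "^cont_month :select * where time > 100",
    "^cont_minutes_.*:select * where flag = 0",
]
print(query_for("cont_month", rules))       # select * where time > 100
print(query_for("cont_minutes_01", rules))  # select * where flag = 0
print(query_for("other", rules))            # None -> all rows exported
```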
Information on GridDB cluster users and their access rights can also be exported. Use the following command when migrating all data in the cluster.
[Example]
$ gs_export --all -u admin/admin --acl
[Memo]
A view of a GridDB cluster can also be exported as well as the container.
Specify the --all option or --db option to export the views of the database to be exported.
$ gs_export --db public -u admin/admin
Export Start.
Directory : /tmp/export
:
Number of target container:5 ( Success:5 Failure:0 )
The number of target views : 15
Export Completed.
A CSV data file or binary data file can be specified as the output format of a row data file.
[Example]
$ gs_export -c c001 c002 -u admin/admin --binary
$ gs_export --all -u admin/admin --binary 500 //Specify the output file size (MB)
Either a single container data file, which creates one container data file per container, or a multi-container data file, which outputs all containers to one container data file, can be specified.
[Example]
$ gs_export -c c001 c002 -u admin/admin --out test
$ gs_export --all -u admin/admin --out //File name will be the output start date and time
The output destination directory of the container data files can be specified. If the specified directory does not exist, it will be created. If the directory is not specified, data will be output to the current directory when a command is executed. Use the -d option to specify the output destination.
[Example]
$ gs_export --all -u admin/admin --out test -d /tmp
[Memo]
The export tool can acquire data by accessing a cluster in parallel. If a command is executed in parallel on a cluster composed of multiple nodes, data can be acquired at high speed as each node is accessed in parallel.
[Memo]
[Example]
$ gs_export --all -u admin/admin --binary --out --parallel 4
Before exporting a container, the user can assess whether the export can be carried out correctly.
[Example]
$ gs_export -u admin/admin --all --test
Export Start.
[TEST Mode]
Directory : /var/lib/gridstore/export
The number of target containers : 5
Name PartitionId Row
------------------------------------------------------------------
public.container_2 15 10
public.container_3 25 20
public.container_0 35 10
public.container_1 53 10
public.container_4 58 20
Number of target container:5 ( Success:5 Failure:0 )
The number of target views : 15
Export Completed.
Export processing can be continued even if a row data acquisition error occurs due to a lock conflict with another application.
[Example]
$ gs_export --all -u admin/admin --force
[Memo]
Detailed settings in the operating display
[Example]
$ gs_export --containerregex "^c0" -u admin/admin --verbose
Export Start.
Directory : /data/exp
Number of target container : 4
public.c003 : 1
public.c002 : 1
public.c001 : 1
public.c010 : 1
The row data has been acquired. : time=[5080]
Number of target container:4 ( Success:4 Failure:0 )
Export Completed.
Suppressed settings in the operating display
[Example]
$ gs_export -c c002 c001 -u admin/admin --silent
Import the data of container data files into the GridDB cluster.
The input data sources used by the import tool are as follows.
Data exported by the export function can be imported into a GridDB cluster in the exported data format.
Specify the data to be imported from the container data files.
There are 3 ways to specify containers: by specifying all the containers in the container data files, by specifying the database, and by specifying containers individually.
(1) Specify all containers
Specify the --all option.
[Example]
$ gs_import --all -u admin/admin
(2) Specify the database
[Example]
$ gs_import --db db001 db002 -u admin/admin //Enumerate database names
(3) Specify container individually
[Example]
$ gs_import --container c001 c002 -u admin/admin //Enumerate container name
$ gs_import --containerregex "^c0" -u admin/admin //Regular expression: specify containers whose names start with c0
[Points to note]
[Memo]
If data was exported by specifying the --acl option in the export function, data on the users and access rights can also be imported. Use the following command when migrating all data in the cluster.
[Example]
$ gs_import --all --acl -u admin/admin
[Memo]
If the view was exported using the export function, a view can also be imported together with the container data.
Specify the --all option or --db option to import the views of the database to be imported.
[Memo]
Specify the directory containing the container data files. If this is not specified, the files in the current directory will be processed.
[Example]
//Specify all containers from the current directory
$ gs_import --all -u admin/admin
//Specify multiple databases from a specific directory
$ gs_import --db db002 db001 -u admin/admin -d /data/expdata
//Specify multiple containers from a specific directory
$ gs_import -c c002 c001 -u admin/admin -d /data/expdata
[Memo]
The container data can be checked before importing.
[Example]
$ gs_import --list
Container List in local export file
DB Name Type FileName
public container_2 COLLECTION container_2.csv
public container_0 TIME_SERIES container_0.csv
public container_1 COLLECTION container_1.csv
userDB container_1_db TIME_SERIES userDB.container_1_db.csv
userDB container_2_db TIME_SERIES userDB.container_2_db.csv
userDB container_0_db COLLECTION userDB.container_0_db.csv
When importing, if no specific option is specified, an error will occur if the container that you are trying to register already exists in the GridDB cluster. Data can be added or replaced by specifying one of the following options. During data registration, the number of containers registered successfully and the number of containers which failed to be registered are shown.
The registration procedure according to the type of container is as follows.
Container type | Row key assigned | Behavior |
---|---|---|
Collection | TRUE | Rows with the same key will be updated while rows with different keys will be added. |
FALSE | All row data will be added and registered. | |
Timeseries | TRUE | Rows will be added and registered if their time is more recent than the existing registered data. Rows with the same time as existing data will be updated. |
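The --append behavior for a collection with a row key, as described in the table above, can be sketched as follows (an illustrative model in Python; the actual import tool operates on the cluster itself):

```python
def append_rows(existing, incoming):
    """Model of --append semantics for a row-keyed collection:
    rows with an existing key are updated, rows with a new key are added.
    Illustrative only; not part of the import tool."""
    merged = dict(existing)   # existing rows, keyed by row key
    merged.update(incoming)   # same key -> update, new key -> add
    return merged

existing = {1: "Tokyo", 2: "Kanagawa"}
incoming = {2: "Osaka", 3: "Kyoto"}
print(append_rows(existing, incoming))
# {1: 'Tokyo', 2: 'Osaka', 3: 'Kyoto'}
```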
[Example]
$ gs_import -c c002 c001 -u admin/admin --append
Import initiated (Append Mode)
Import completed
Success:2 Failure:0
$ gs_import -c c002 c001 -u admin/admin --replace
Import initiated (Replace Mode)
Import completed
Success:2 Failure:0
$ gs_import --all -u admin/admin -d /datat/expdata --replace
The import process can be continued even if a registration error were to occur in a specific row data due to a user editing error in the container data file.
[Example]
$ gs_import --all -u admin/admin -d /data/expdata --force
[Memo]
Detailed settings in the operating display
Command list
Command | Option/argument |
---|---|
gs_export | -u|--user <User name>/<Password> --all | --db <database name> [<database name>] | ( --container <container name> [<container name>] … | --containerregex <regular expression> [<regular expression>] …) [-d|--directory <output destination directory path>] [--out [<file identifier>]] [--binary [<file size>]] [--filterfile <definition file name>] [--parallel <no. of parallel executions>] [--acl] [--prefixdb <database name>] [--force] [-t|--test] [-v|--verbose] [--silent] [--schemaOnly] |
gs_export | --version |
gs_export | [-h|--help] |
Options
Options | Required | Note |
---|---|---|
-u|--user <user name>/<password> | ✓ | Specify the user and password used for authentication purposes. |
--all | ✓ | All containers of the cluster shall be exported. One of the --all, --container, --containerregex, or --db options needs to be specified. |
--db | ✓ | All containers in the specified database shall be exported. One of the --all, --container, --containerregex, or --db options needs to be specified. |
-c|--container <container name> … | ✓ | Specify the container to be exported. Multiple specifications are allowed by separating them with blanks. One of the --all, --container, --containerregex, or --db options needs to be specified. |
--containerregex <regular expression> … | ✓ | Specify the containers to be exported by regular expression. Multiple specifications are allowed by separating them with blanks. When using a regular expression, enclose it within double quotation marks. One of the --all, --container, --containerregex, or --db options needs to be specified. This option can be used in combination with the --container option. |
-d|--directory <output destination directory path> | | Specify the directory path of the export destination. Default is the current directory. |
--out [<file identifier>] | | Specify this when using the multi-container format for the file format of the output data. The single container format will be used by default. The number of characters in the file identifier is limited to 20. If the file identifier is specified, it will be used as the file name; if it is omitted, the output start date and time will be used as the file name. |
--binary [<file size>] | | Specify this when using the binary format for the output format of the row data file. The CSV format will be used by default. Specify the output file size in MB. Default is 100MB. A range from 1 to 1000 (1GB) can be specified. |
--filterfile <definition file name> | | Specify the definition file in which the search query used to export rows is described. All rows are exported by default. |
--parallel <no. of parallel executions> | | Execute in parallel for the specified number. When executed in parallel, the export data will be divided by the same number as the number of parallel executions. This can be specified only for the multi-container format (when the --out option is specified). A range from 2 to 32 can be specified. |
--acl | | Data on the database, users, and access rights will also be exported. This can be specified only if the user is an administrator user and the --all option is specified. |
--prefixdb <database name> | | If the --container option is specified, specify the database name of the containers. The containers in the default database will be processed if this is omitted. |
--force | | Processing is forced to continue even if an error occurs. Error descriptions are displayed in a list after processing ends. |
-t|--test | | Execute the tool in the test mode. |
-v|--verbose | | Output the operating display details. |
--silent | | Operating display is not output. |
--schemaOnly | | Export container definitions only; row data is not exported. |
--version | | Display the version of the tool. |
-h|--help | | Display the command list as a help message. |
[Memo]
Command list
Command | Option/argument |
---|---|
gs_import | -u|--user <User name>/<Password> --all | --db <database name> [<database name>] | ( --container <container name> [<container name>] … | --containerregex <regular expression> [<regular expression>] …) [--append|--replace] [-d|--directory <import target directory path>] [-f|--file <file name> [<file name> …]] [--count <commit count>] [--acl] [--prefixdb <database name>] [--force] [--schemaCheckSkip] [-v|--verbose] [--silent] |
gs_import | -l|--list [-d|--directory <directory path>] [-f|--file <file name> [<file name> …]] |
gs_import | --version |
gs_import | [-h|--help] |
Options
Options | Required | Note |
---|---|---|
-u|--user <user name>/<password> | ✓ | Specify the user and password used for authentication purposes. |
--all | ✓ | All containers in the import source file shall be imported. One of the --all, --container, --containerregex, or --db options needs to be specified. |
--db | ✓ | All containers in the specified database shall be imported. One of the --all, --container, --containerregex, or --db options needs to be specified. |
-c|--container <container name> … | ✓ | Specify the container subject to import. Multiple specifications are allowed by separating them with blanks. One of the --all, --container, --containerregex, or --db options needs to be specified. |
--containerregex <regular expression> … | ✓ | Specify the containers subject to import by regular expression. Multiple specifications are allowed by separating them with blanks. When using a regular expression, enclose it within double quotation marks. One of the --all, --container, --containerregex, or --db options needs to be specified. This option can be used in combination with the --container option. |
--append | | Register and update data in an existing container. |
--replace | | Delete the existing container, create a new container, and register data. |
-d|--directory <import target directory path> | | Specify the directory path of the import source. Default is the current directory. |
-f|--file <file name> [<file name> …] | | Specify the container data file to be imported. Multiple specifications are allowed. All container data files in the current directory or in the directory specified by -d (--directory) will be applicable by default. |
--count <commit count> | | Specify the number of input cases until the input data is committed together. |
--acl | | Data on the database, users, and access rights will also be imported. This can be specified only if the user is an administrator user and the --all option is specified for data exported by specifying the --acl option. |
--prefixdb <database name> | | If the --container option is specified, specify the database name of the containers. The containers in the default database will be processed if this is omitted. |
--force | | Processing is forced to continue even if an error occurs. Error descriptions are displayed in a list after processing ends. |
--schemaCheckSkip | | When the --append option is specified, a schema check of the existing container will not be executed. |
-v|--verbose | | Output the operating display details. |
--silent | | Operating display is not output. |
-l|--list | | Display a list of the specified containers to be imported. |
--version | | Display the version of the tool. |
-h|--help | | Display the command list as a help message. |
[Memo]
The respective file formats to configure container data files are shown below.
The metadata file stores the container data in the JSON format. The container data to be stored is shown below.
Item | Note |
---|---|
<Container name> | Name of the container. |
Container type | Refers to a collection or time series container. |
Schema data | Data of a group of columns constituting a row. Specify the column name, data type, and column constraints. |
Index setting data | Index data set in a container: whether indexes are set and, if set, the type of index, such as tree index or spatial index. |
Row key setting data | Set up a row key when a collection container is used. For time series containers, either no row key is set, or the default value, if set, will be valid. |
Table partitioning data | Specify table partitioning data. |
The tags and data items of the metadata in the JSON format are shown below. Tags that are essential when the user creates a new file are also indicated (tag setting conditions).
field | Item | Note | Setting conditions |
---|---|---|---|
Common parameters | |||
database | <Database name> | <Database name> | Arbitrary, “public” by default |
container | <Container name> | <Container name> | Required |
containerType | Container type | Specify either COLLECTION or TIME_SERIES | Required |
containerFileType | Container data file type | Specify either csv or binary. | Required |
containerFile | Container data file name | File name | Arbitrary |
dataAffinity | Data affinity name | Specify the data affinity name. | Arbitrary |
partitionNo | Partition | Null string indicates no specification. | Arbitrary, output during export. Not used even if it is specified when importing. |
columnSet | Column data set (, schema data) | Column data needs to match when adding data to an existing container | Required |
columnName | Column name | Required | |
type | Data type | Specify one of the following values: BOOLEAN/ STRING/ BYTE/ SHORT/ INTEGER/ LONG/ FLOAT/ DOUBLE/ TIMESTAMP/ GEOMETRY/ BLOB/ BOOLEAN[]/ STRING[]/ BYTE[]/ SHORT[]/ INTEGER[]/ LONG[]/ FLOAT[]/ DOUBLE[]/ TIMESTAMP[]. | Required |
notNull | NOT NULL constraint | true/false | Arbitrary, “false” by default |
rowKeyAssigned | Row key setting (*1) | Specify either true/false. Specifying rowKeySet as well causes an error. | Arbitrary, “false” by default |
rowKeySet | Row key column names | Specify row key column names in array format. The row key needs to match when adding data to an existing container. | Arbitrary (*2) |
indexSet | Index data set | Can be set for each column. Non-existent column name will be ignored or an error will be output. | Arbitrary |
columnNames | Column names | Specify column names in array format. | Arbitrary (essential when indexSet is specified) |
type | Index type | Specify one of the following values: TREE (STRING/ BOOLEAN/ BYTE/ SHORT/ INTEGER/ LONG/ FLOAT/ DOUBLE/ TIMESTAMP) or SPATIAL (GEOMETRY). | Arbitrary (essential when indexSet is specified) |
indexName | Index name | Index name | Arbitrary, not specified either by default or when null is specified. |
Table partitioning data | |||
tablePartitionInfo | Table partitioning data | For Interval-Hash partitioning, specify the following group of items for both Interval and Hash as an array in that order | Arbitrary |
type | Table partitioning type | Specify either HASH or INTERVAL | Essential if tablePartitionInfo is specified |
column | Partitioning key | Column types that can be specified are as follows: any type if type=HASH; BYTE, SHORT, INTEGER, LONG, or TIMESTAMP if type=INTERVAL | Essential if tablePartitionInfo is specified |
divisionCount | Number of hash partitions | (Effective only if type=HASH) Specify the number of hash partitions | Essential if type=HASH |
intervalValue | Interval value | (Effective only if type=INTERVAL) Specify the interval value | Essential if type=INTERVAL |
intervalUnit | Interval unit | (Effective only if type=INTERVAL) DAY only | Essential if type=INTERVAL and column=TIMESTAMP |
Interval or interval-hash partitioning only parameter | |||
expirationType | Type of expiry release function | Specify “partition”, when specifying partition expiry release. | Arbitrary |
expirationTime | Length of expiration | Integer value | Essential if expirationType is specified |
expirationTimeUnit | Elapsed time unit of row expiration | Specify either of the following values: DAY/ HOUR/ MINUTE/ SECOND/ MILLISECOND. | Essential if expirationType is specified |
[Memo]
Container metadata is described in a json array in the metadata file of a multi-container data file.
The database and container name in the file name are URL-encoded. If the length of “encoded database name.encoded container name” is over 140 characters, the file name is created by concatenating the first 140 characters and a sequential number.
Example:
In the case of importing the next three containers,
* database "db1", container "container_ ... _2017/08/01" (the container name that contains over 140 characters)
* database "db1", container "container_ ... _2017/09/01" (the container name that contains over 140 characters)
* database "db1", container "container_ ... _2017/10/01" (the container name that contains over 140 characters)
the name of each metadata file will be the container name encoded, trimmed to 140 characters, with a sequential number added, as follows:
db1.container・・・2017%2f08_0_properties.json
db1.container・・・2017%2f09_1_properties.json
db1.container・・・2017%2f10_2_properties.json
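The naming rule above can be sketched as follows (`metadata_file_name` is a hypothetical helper written for illustration; the exact truncation and encoding the tool applies may differ in detail):

```python
from urllib.parse import quote

def metadata_file_name(db, container, seq):
    """Sketch of the rule above: URL-encode the database and container
    names; if '<encoded db>.<encoded container>' exceeds 140 characters,
    keep the first 140 characters and append a sequential number.
    Hypothetical helper, not the tool's actual implementation."""
    base = quote(db, safe="") + "." + quote(container, safe="")
    if len(base) > 140:
        base = base[:140] + "_" + str(seq)
    return base + "_properties.json"

print(metadata_file_name("db1", "c001", 0))
# db1.c001_properties.json  (short names are left untouched)
```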
[Notes]
[Example1] Example of a collection in a single container data file (public.c001_properties.json)
A single collection is described.
{
"container": "c001",
"containerFile": "public.c001.csv",
"containerFileType": "csv",
"containerType": "COLLECTION",
"columnSet": [
{ "columnName": "COLUMN_ID", "type": "INTEGER" },
{ "columnName": "COLUMN_STRING", "type": "STRING"}
],
"indexSet": [
{ "columnName": "COLUMN_ID", "type": "TREE"},
{ "columnName": "COLUMN_STRING", "type": "TREE" }
],
"rowKeyAssigned": true
}
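A pre-flight check of the required tags listed in the table above might be sketched as follows (`check_metadata` is a hypothetical helper written for illustration, not part of the tools):

```python
import json

# Tags marked "Required" in the metadata table above
REQUIRED = ("container", "containerType", "containerFileType", "columnSet")

def check_metadata(text):
    """Return a list of problems found in a metadata file; empty means
    all required tags are present. Hypothetical validation helper."""
    meta = json.loads(text)
    missing = [tag for tag in REQUIRED if tag not in meta]
    for col in meta.get("columnSet", []):
        if "columnName" not in col or "type" not in col:
            missing.append("columnSet entry missing columnName/type")
    return missing

sample = '''{
  "container": "c001",
  "containerFile": "public.c001.csv",
  "containerFileType": "csv",
  "containerType": "COLLECTION",
  "columnSet": [
    {"columnName": "COLUMN_ID", "type": "INTEGER"},
    {"columnName": "COLUMN_STRING", "type": "STRING"}
  ],
  "rowKeyAssigned": true
}'''
print(check_metadata(sample))  # [] -> all required tags present
```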
[Example 2] Example of a collection and timeseries container in a multi-container data file (public.container01_properties.json)
For collections and timeseries containers >
[
{
"container": "c001",
"containerType": "collection",
"containerFileType":"csv",
"containerFile":"public.container01.csv",
"rowKeyAssigned":true,
"columnSet": [
{ "columnName": "COLUMN_FLAG", "type": "BOOLEAN" },
{ "columnName": "COLUMN_BLOB_DATA", "type": "BLOB" },
{ "columnName": "COLUMN_STRING", "type": "STRING" }
],
"indexSet":[
{ "columnName": "COLUMN_STRING", "type": "TREE" }
]
},
{
"container": "c002",
"containerType": "timeSeries",
"containerFileType":"csv",
"containerFile":"public.container01.csv",
"rowKeyAssigned":true,
"dataAffinity":"month",
"columnSet": [
{ "columnName": "COLUMN_TIMESTAMP", "type": "TIMESTAMP" },
{ "columnName": "COLUMN_FLAG", "type": "BOOLEAN" },
{ "columnName": "COLUMN_BLOB_DATA", "type": "BLOB" },
{ "columnName": "COLUMN_INTEGER", "type": "INTEGER" }
],
"indexSet":[
{ "columnName": "COLUMN_FLAG", "type": "TREE" }
]
}
]
[Example 3] Example of a description for table partitioning
For hash partitioning (Showing only the description for table partitioning data) >
"tablePartitionInfo":{
"type": "HASH",
"column": "column03",
"divisionCount": 16
}
For interval partitioning (Showing only the description for table partitioning data) >
"tablePartitionInfo":{
"type": "INTERVAL",
"column": "timecolumn05",
"intervalValue": 20,
"intervalUnit": "DAY"
}
For interval-hash partitioning (Showing only the description for table partitioning data) >
"tablePartitionInfo":[
{
"type": "INTERVAL",
"column": "timecolumn05",
"intervalValue": 10,
"intervalUnit": "DAY"
},
{
"type": "HASH",
"column": "column03",
"divisionCount": 16
}
]
[Memo]
A row data file in binary format is in zip format and can be created by gs_export only. It is not human-readable and cannot be edited.
A row data file in CSV format describes, in its container data file data section, the references to the metadata file which defines the rows.
[Memo]
<CSV data file format>
1. Header section (1st - 2nd row)
Header section contains data output during export. Header data is not required during import.
Assign a “#” at the beginning of the line to differentiate it. The format will be as follows.
"#(Date and time) GridDB release version"
"#User:(user name)"
[Example]
"#2017-10-01T17:34:36.520+0900 GridDB V4.0.00"
"#User:admin "
2. Container data file data section (3rd and subsequent rows)
Describe the references to the metadata file.
Assign a “%” at the beginning of the line to differentiate it. The format of one row will be as follows.
"%","metadata file name"
3. Row data section (container data and subsequent sections)
The following section describes the row data.
Separate the row data of the column with commas and describe them in one line of the CSV file.
"$","database name.container name"
"value","value","value", ... (number of column definitions)
"value","value","value", ... (number of column definitions)
:
: //Describe the number of row cases you want to register
:
[Memo]
4. Comments section
The comment section can be described anywhere in the CSV data file except the header section.
[Memo]
<File name format>
The name of the CSV data file output by the export tool is as follows.
[Example] A CSV data file corresponding to Example 1
"#2017-10-01T11:19:03.437+0900 GridDB V4.0.00"
"#User:admin"
"%","public.c001_properties.json"
"$","public.c001"
"1","Tokyo"
"2","Kanagawa"
"3","Osaka"
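Reading the sections of such a CSV data file can be sketched as follows (an illustrative Python sketch; `parse_csv_data_file` is a hypothetical helper, not part of the tools):

```python
import csv, io

def parse_csv_data_file(text):
    """Parse the CSV data file layout above: '#' header lines,
    '%' metadata file references, '$' container markers, then row data.
    Hypothetical helper for illustration only."""
    meta_files, containers, current = [], {}, None
    for rec in csv.reader(io.StringIO(text)):
        if not rec:
            continue
        if rec[0].startswith("#"):
            continue                      # header section
        if rec[0] == "%":
            meta_files.append(rec[1])     # metadata file reference
        elif rec[0] == "$":
            current = rec[1]              # "database name.container name"
            containers[current] = []
        elif current is not None:
            containers[current].append(rec)
    return meta_files, containers

text = '''"#2017-10-01T11:19:03.437+0900 GridDB V4.0.00"
"#User:admin"
"%","public.c001_properties.json"
"$","public.c001"
"1","Tokyo"
"2","Kanagawa"
"3","Osaka"
'''
meta, rows = parse_csv_data_file(text)
print(meta)                    # ['public.c001_properties.json']
print(rows["public.c001"][0])  # ['1', 'Tokyo']
```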
When the data below is included in some of the rows of the CSV data file, prepare an external object file as a file separate from the CSV data file. List the reference to the external object file in the target column of the CSV data file as follows: “@data type:(file name)”.
When an external object file is exported, the external object file name is created in accordance with the following rules during export.
For import purposes, any file name can be used for the external object file. In the relevant column of the CSV data file, list the file name together with its data type.
[Example] Naming example of an external object file
//When a collection (colb) having a BYTE array in the 3rd column is exported
Oct  4 12:51 2017 public.colb.csv
Oct  4 12:51 2017 public.colb_0_3.byte_array
Oct  4 12:51 2017 public.colb_1_3.byte_array
Oct  4 12:51 2017 public.colb_2_3.byte_array
Oct  4 12:51 2017 public.colb_3_3.byte_array
Oct  4 12:51 2017 public.colb_4_3.byte_array
Oct  4 12:51 2017 public.colb_properties.json
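Judging from the listing above, the generated name appears to follow the pattern `<database>.<container>_<row number>_<column number>.<data type>`. The sketch below encodes this inferred pattern; the rule is an assumption drawn from the example, not a stated specification, and `make_extobj_name` is a hypothetical helper.

```shell
# Build an external object file name following the pattern inferred
# from the export example above:
#   <database>.<container>_<row>_<column>.<type suffix>
make_extobj_name() {
  db=$1; container=$2; row=$3; col=$4; suffix=$5
  printf '%s.%s_%s_%s.%s\n' "$db" "$container" "$row" "$col" "$suffix"
}

make_extobj_name public colb 0 3 byte_array   # → public.colb_0_3.byte_array
```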
[Example] Description of an external object file in a single container data file is shown below.
Metadata file public.col01_properties.json
{
"version": "4.0.0",
"container": "col01",
"containerFile": "public.col01.csv",
"containerFileType": "csv",
"containerType": "COLLECTION",
"columnSet": [
{ "columnName": "name","type": "string" },
{ "columnName": "status", "type": "boolean"},
{ "columnName": "count", "type": "long" },
{ "columnName": "lob", "type": "byte[]"
}
],
"indexSet": [
{
"columnName": "name",
"type": "TREE"
},
{
"columnName": "count",
"type": "TREE"
}
],
"rowKeyAssigned": true
}
CSV data file public.col01.csv
"#2017-10-01T19:41:35.320+0900 GridDB V4.0.00"
"#User:admin"
"%","public.col01_properties.json"
"$","public.col01"
"name02","false","2","@BYTE_ARRAY:public.col01_0_3.byte_array"
External object file public.col01_0_3.byte_array
1,10,15,20,40,70,71,72,73,74
Migrate the database files created in GridDB V4 to the format that can be used in GridDB V5. The following section provides the procedure for migration by using the migration tool on a single machine.
[Notes]
Before migration, be sure to run backups of the database files.
The settings file (gs_node.json/gs_cluster.json) for V4 can also be used as the settings file for V5. The parameters added in V5 are enabled using the default value. The parameters that have been discontinued in V5 are ignored, and the following warning is recorded in the event log.
2021-12-26T16:10:41.211+09:00 RHEL83-1 31897 WARNING MAIN [100002:CT_PARAMETER_INVALID] Unknown parameter (source=gs_node.json, group=dataStore, name=storeWarmStart)
This migration tool supports migration of database files created in GridDB V4. Use the export/import tools to migrate database files for GridDB V3 or earlier.
Functions not supported in V5 (trigger, HASH index, row expiry release, and timeseries compression) cannot be migrated. Check in SQL whether these unsupported functions are used; if they are, they must be deleted before migration. For the procedure, see Preparation for migration.
If the V4 database meets any of the following conditions, the database file migration tool fails to run.
Before using the database file migration tool, delete the functions not supported in V5 from the database to be migrated and complete the background process.
To check the V4 database, start a GridDB V4 cluster. GridDB clusters can be started using the GridDB service and various tools. For detailed operation, see the following chapters:
Functions not supported in V5 (trigger, HASH index, row expiry release, and timeseries compression) cannot be migrated. Check in SQL whether these unsupported functions are used and delete them if found. The following procedure uses the tools described in the sections Cluster operation control command interpreter (gs_sh) and Export/import tools.
Execute the following SQL statement using gs_sh to check if there is any trigger.
gs[public]> select database_name, table_name, trigger_name from "#event_triggers";
If it results in no hit, then there is no trigger.
If it results in one or more hits, there is at least one trigger; delete trigger(s) using the gs_sh subcommand droptrigger.
> Sample output when there is any trigger
# Searching for a trigger
gs[public]> select database_name, table_name, trigger_name from "#event_triggers";
3 hits are returned.
gs[public]> get
DATABASE_NAME,TABLE_NAME,TRIGGER_NAME
public,c01,trigger1
public,c01,trigger2
public,c01,trigger3
Completed retrieving 3 hits.
> Example of deleting a trigger
# Deleting the trigger1 trigger in the c01 container. (Delete any remaining triggers in the same way.)
gs[public]> droptrigger c01 trigger1
gs[public]> select database_name, table_name, trigger_name from "#event_triggers";
No hit is returned.
Execute the following SQL statement using gs_sh to check if there is any Hash index.
gs[public]> select database_name, table_name, column_name from "#index_info" where index_type='HASH';
If it results in no hit, then there is no Hash index.
If it results in one or more hits, there is at least one Hash index; delete Hash index(es) using the gs_sh subcommand dropindex.
> Sample output when there is any Hash index
# Searching for a Hash index
gs[public]> select database_name, table_name, column_name from "#index_info" where index_type='HASH';
1 hit is returned.
gs[public]> get
DATABASE_NAME,TABLE_NAME,COLUMN_NAME
public,c02,status
Completed retrieving 1 hit.
> Example of deleting a Hash index
# Deleting the Hash index in the status column in the c02 container.
gs[public]> dropindex c02 status HASH
gs[public]> select database_name, table_name, column_name from "#index_info" where index_type='HASH';
No hit is returned.
Execute the following SQL statement using gs_sh to check if there is any container with row expiry release.
gs[public]> select database_name, table_name from "#tables" where EXPIRATION_TYPE='ROW';
If it results in no hit, then there is no container with row expiry release.
If it results in one or more hits, there is at least one container with row expiry release. Run the backups using gs_export and delete container(s) using the gs_sh subcommand dropcontainer.
> Sample output when there is any container with row expiry release
# Searching for a container with row expiry release
gs[public]> select database_name, table_name from "#tables" where EXPIRATION_TYPE='ROW';
1 hit is returned.
gs[public]> get
DATABASE_NAME,TABLE_NAME
public,c03
Completed retrieving 1 hit.
> Example of running the backups of a container with row expiry release
# Using the gs_export command, run backups of the c03 container.
$ mkdir c03
$ cd c03
$ gs_export -u admin/admin -d public -c c03
> Example of deleting a container with row expiry release
# Deleting the c03 container with row expiry release
gs[public]> dropcontainer c03
gs[public]> select database_name, table_name from "#tables" where EXPIRATION_TYPE='ROW';
No hit is returned.
Execute the following SQL statement using gs_sh to check if there is any container with timeseries compression.
gs[public]> select database_name, table_name from "#tables" where COMPRESSION_METHOD is not null;
If it results in no hit, then there is no container with timeseries compression.
If it results in one or more hits, there is at least one container with timeseries compression. Run the backups using gs_export and delete container(s) using the gs_sh subcommand dropcontainer.
> Sample output when there is any container with timeseries compression
# Searching for a container with timeseries compression
gs[public]> select database_name, table_name from "#tables" where COMPRESSION_METHOD is not null;
1 hit is returned.
gs[public]> get
DATABASE_NAME,TABLE_NAME
public,c04
Completed retrieving 1 hit.
> Example of running the backups of a container with timeseries compression
# Using the gs_export command, run backups of the c04 container.
$ mkdir c04
$ cd c04
$ gs_export -u admin/admin -d public -c c04
> Example of deleting a container with timeseries compression
# Deleting the c04 container with timeseries compression
gs[public]> dropcontainer c04
gs[public]> select database_name, table_name from "#tables" where EXPIRATION_TYPE='ROW';
No hit is returned.
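The four checks above can be collected into a single script file to reduce typing; this is a sketch only, assuming gs_sh can be fed the same statements that are typed at its prompt, and the file name check_v4.gsh is arbitrary. The statements themselves are exactly those shown in the sections above.

```shell
# Write the four pre-migration check statements to one script file.
cat > check_v4.gsh <<'EOF'
select database_name, table_name, trigger_name from "#event_triggers";
select database_name, table_name, column_name from "#index_info" where index_type='HASH';
select database_name, table_name from "#tables" where EXPIRATION_TYPE='ROW';
select database_name, table_name from "#tables" where COMPRESSION_METHOD is not null;
EOF

# Each statement should return no hit before running the migration tool.
```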
Using the cluster information retrieval command gs_stat, check the completion of a background process. If numBackground is equal to 0, the background process is completed.
$ gs_stat -u admin/admin | grep numBackground
"numBackground": 0,
Check the completion on all the nodes in the GridDB cluster. If numBackground is one or more, a background process is currently running. Keep the GridDB cluster running and wait until the background process is completed (i.e., until numBackground reaches 0).
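The grep shown above returns a line such as `"numBackground": 0,`, from which the numeric value can be extracted for use in a wait loop. The extraction is sketched below on a sample line, since polling with gs_stat itself requires a running node.

```shell
# Extract the numeric value from a gs_stat "numBackground" line.
line='    "numBackground": 0,'
n=$(printf '%s' "$line" | tr -dc '0-9')
echo "$n"   # → 0

# A possible wait loop on a live node (requires gs_stat on PATH):
# while [ "$(gs_stat -u admin/admin | grep numBackground | tr -dc '0-9')" -ne 0 ]; do
#   sleep 10
# done
```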
After checking the deletion of the functions not supported in V5 and the completion of the background process, stop the V4 cluster and then each node. To prepare for the installation of GridDB V5, rename the storage directory for database files.
Example:
$ mv /var/lib/gridstore/data /var/lib/gridstore/data-v4
Before installing GridDB V5, uninstall all the GridDB V4 packages.
[example of running on CentOS]
$ sudo rpm -e griddb-ee-server
$ sudo rpm -e griddb-ee-client
$ sudo rpm -e griddb-ee-java-lib
$ sudo rpm -e griddb-ee-c-lib
[Memo]
On Ubuntu Server, use dpkg to uninstall the packages:
$ sudo dpkg -r griddb-ee-server
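After uninstalling, it may help to confirm that no GridDB packages remain before installing V5. A hedged sketch for rpm-based systems follows; the package name pattern is assumed from the rpm commands above.

```shell
# List any GridDB packages still installed; print a message if none remain.
rpm -qa 2>/dev/null | grep griddb || echo "no GridDB packages found"
```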
After uninstalling GridDB V4, install GridDB V5. For details on installation, see the “GridDB Administrator Guide” .
On the machine where V4 databases reside, run the database file migration tool.
Command
Command | Option/argument |
---|---|
gs_convdbv4tov5 | --v4data: storage directory for V4 database files. --v5data: storage directory for V5 data files. --v5txnlog: storage directory for V5 transaction log files. [ --v5splitCount: number of data file splits. ] [ --parallel: number of parallel runs. ] |
Options
Options | Description |
---|---|
--v4data: storage directory for V4 database files. | Specify the storage directory for V4 database files. |
--v5data: storage directory for V5 data files. | Specify the storage directory for V5 data files generated from V4 database files. |
--v5txnlog: storage directory for V5 transaction log files. | Specify the storage directory for V5 transaction log files generated from V4 databases. |
--v5splitCount: number of data file splits. | Specify the number of V5 data file splits generated from V4 database files. The default value is zero (no split). |
--parallel: number of parallel runs. | Specify the number of parallel runs. The values that can be specified are 1, 2, 4, and 8. The default value is 1. |
[example of running the command]
$ gs_convdbv4tov5 --v4data /var/lib/gridstore/data-v4 --v5data /var/lib/gridstore/data --v5txnlog /var/lib/gridstore/txnlog
[Notes]
Before running the database file migration tool, raise the upper limits on the number of processes and the number of open files available to the user gsadm; otherwise, the tool might fail to run. After editing /etc/security/limits.conf, log in again as gsadm to apply the settings. Use the command ulimit -a to check the current settings.
[example of setting in limits.conf]
gsadm soft nproc 16384
gsadm hard nproc 16384
gsadm soft nofile 65536
gsadm hard nofile 65536
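After logging in again as gsadm, the relevant limits can be checked individually; in the output of ulimit -a, the process limit (nproc) appears as a "processes" line and the file limit (nofile) as an "open files" line (the exact labels vary by shell).

```shell
# Check that the limits after re-login reflect the limits.conf settings:
# the process limit should show 16384 and the open-file limit 65536.
ulimit -a | grep -iE 'process|open files'
```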
After the database file migration tool has finished running, start a GridDB V5 cluster and make sure that the data has been migrated. Then import the containers exported in the section 7.2.2 Checking unsupported functions and deleting them as needed.
[example of running import]
$ cd c03
$ gs_import --all -u admin/admin