General purpose backend library. The primary goal is to have a scalable platform for running and managing Node.js servers for Web services implementation.
This project only covers the lower portion of the Web services ecosystem: Node.js processes, HTTP servers, basic API functionality, database access, caching, messaging between processes, metrics and monitoring, a library of tools for developing Node.js servers.
For the UI and presentation layer there are no restrictions on what to use, as long as it can run on top of the Express server.
Features:
Check out the Documentation for more details.
To install the module with all optional dependencies, if they are available on the system:
npm install backendjs
To install from the git repository:
npm install git+https://github.com/vseryakov/backendjs.git
or simply
npm install vseryakov/backendjs
Only the core required dependencies are installed; many features require additional modules to work correctly.
All optional dependencies are listed in package.json under "modDependencies", which npm does not process; the required modules must be installed manually, or all optional dependencies can be installed at once for development purposes.
Here is the list of modules required for each internal feature:
- pg - PostgreSQL database access
- argon2 or bcrypt - user password hashing
- mmmagic - file type detection in uploads, only used when allow is passed to api.putFile
- redis - Redis queue and cache driver
- unix-dgram - syslog on Linux to use the local syslog
- bkjs-sqlite - SQLite database driver
- web-push - Web push notifications
- @parse/node-apn - Apple push notifications
- sharp - scaling images in uploads using VIPS imaging
- nats - NATS driver for queue and events
- amqplib - RabbitMQ driver for queue and events (alpha)

The command below will show all core and optional dependencies; npm install will install only the core dependencies:
bkjs deps -dry-run -mods
The simplest way of using backendjs; it will start the server listening on port 8000:
$ node
> const bkjs = require('backendjs')
> bkjs.server.start()
Access is allowed only with a valid signature, except for URLs explicitly allowed without one (see the api-allow config parameter below).
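For example, a few open paths could be listed in etc/config using the api-allow-path parameter that appears later in this document (the paths themselves are illustrative):

api-allow-path=/public
api-allow-path=/ping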
The same but using the helper tool; by default no database drivers are enabled, so here we use the embedded SQLite database and listen on port 8000.
bkjs web -db-pool sqlite -db-sqlite-pool default
or use a PostgreSQL server as the database backend:
bkjs web -db-pool pg -db-pg-pool postgresql://postgres@localhost/backend
If running on an EC2 instance with an IAM profile there is no need to specify AWS credentials:
bkjs web -db-pool dynamodb -db-dynamodb-pool default
To start the server and connect to DynamoDB (command line parameters can be saved in the etc/config file, see below about config files):
bkjs web -db-pool dynamodb -db-dynamodb-pool default -aws-key XXXX -aws-secret XXXX
or use an Elasticsearch server as the database backend:
bkjs web -db-pool elasticsearch -db-elasticsearch-pool http://127.0.0.1:9200
All commands above will behave exactly the same
Tables are not created by default; in order to initialize the database, run the server or the shell with the -db-create-tables flag. It is honored only inside a master process, a worker never creates tables on start.
To prepare the tables in the shell:
bksh -db-pool dynamodb -db-dynamodb-pool default -db-create-tables
or run the server and create tables on start; install and run Elasticsearch on the local machine first:
bkjs get-elasticsearch
bkjs run-elasticsearch
bkjs web -db-pool elasticsearch -db-elasticsearch-pool http://127.0.0.1:9200 -db-create-tables
While the local backendjs is running, the documentation is always available at http://localhost:8000/doc.html (or whichever port the server is using).
To add users from the command line
bksh -user-add login test secret test name TestUser email test@test.com
To start Node.js shell with backendjs loaded and initialized, all command line parameters apply to the shell as well
bkjs shell
To access the database while in the shell using callbacks
> db.select("bk_user", {}, lib.log);
> db.add("bk_user", { id: 'test2', login: 'test2', secret: 'test2', name' Test 2 name' }, lib.log);
> db.select("bk_user", { id: 'test2' }, lib.log);
> db.select("bk_user", { id: ['test1','test2'] }, { ops: { id: "in" } }, lib.log);
or the same using async/await; the same methods have an `a` prepended to the name:
> await db.aselect("bk_user", {});
> await db.aadd("bk_user", { id: 'test2', login: 'test2', secret: 'test2', name: 'Test 2 name' });
> await db.aselect("bk_user", { id: 'test2' });
To search using Elasticsearch (assuming it runs on EC2 and it is synced with DynamoDB using streams)
> await db.aselect("bk_user", { q: 'test' }, { pool: "elasticsearch" });
The library is packaged with copies of Bootstrap, jQuery and Knockout.js for quick Web development; they are located in the web/js and web/css directories and all scripts are available from the browser under the /js and /css paths. To use them all at once as a bundle run the following command:
cd node_modules/backendjs && npm run devbuild
Go to the examples/api directory.
Run the application, it will start the Web server on port 8000:
./app.sh
Now log in with the new account: go to http://localhost:8000/api.html and click on Login at the top-right corner, then enter 'test' as the login and 'test' as the secret in the login popup dialog.
To see your account details run the command /account/get in the console.
To see current metrics run the command /system/stats/get in the console.
When the web server is started with the -watch parameter or as bkjs watch, any change in the source files will make the server restart automatically, letting you focus on the source code and not server management. This mode is only enabled by default in development mode; check app.sh for parameters before running it in production.
Almost everything in the backend is configurable using config files, a config database or DNS. The whole principle behind it is that once deployed in production, even quick restarts are not always possible, so there should be a way to push config changes to the processes without restarting.
Every module defines a set of config parameters that define the behavior of the code. Due to the single-threaded nature of Node.js it is simple to update any config parameter to a new value so the code can operate differently. To achieve this the code must be written in a special way: driven by configuration which can be changed at any time.
All configuration goes through the configuration process that checks all inputs and produces valid output which is applied to the module variables. Config files or a database table with configuration can be loaded on demand or periodically; for example, all local config files are watched for modification and reloaded automatically, while the config database is loaded periodically at an interval defined by another config parameter.
When the backendjs server starts it spawns several processes that perform different tasks.
There are 2 major tasks of the backend that can be run at the same time or in any combination:
These features can be run standalone or under the guard of the monitor which tracks all running processes and restarts any failed ones.
This is the typical output from the ps command on Linux server:
ec2-user 891 0.0 0.6 1071632 49504 ? Ssl 14:33 0:01 bkjs: monitor
ec2-user 899 0.0 0.6 1073844 52892 ? Sl 14:33 0:01 bkjs: master
ec2-user 908 0.0 0.8 1081020 68780 ? Sl 14:33 0:02 bkjs: server
ec2-user 917 0.0 0.7 1072820 59008 ? Sl 14:33 0:01 bkjs: web
ec2-user 919 0.0 0.7 1072820 60792 ? Sl 14:33 0:02 bkjs: web
ec2-user 921 0.0 0.7 1072120 40721 ? Sl 14:33 0:02 bkjs: worker
To enable any task a command line parameter must be provided, it cannot be specified in the config file. The bkjs utility supports several commands that simplify running the backend in different modes:

- bkjs start - run at server startup as a service, it runs in the background and monitors all tasks; the env variable BKJS_SERVER must be set in the profile to one of master or monitor to define which run mode to use
- bkjs start-instance - run at server startup to perform system adjustments, it is run by bkjs start
- bkjs watch - runs the master and Web server in watcher mode checking all source files for changes, this is the common command to be used in development; it passes the command line switches: -watch -master
- bkjs monitor - run at server startup, it runs in the background and monitors all processes; the command line parameters are: -daemon -monitor -master -syslog
- bkjs master - run at server startup, it runs in the background and monitors all processes; the command line parameters are: -daemon -monitor -master -syslog, the web server and workers are started by default
- bkjs web - runs just the web server process with child processes as web workers
- bkjs run - runs without other parameters, all additional parameters can be added on the command line; this command is a barebone helper to be used with any other custom settings
- bkjs run -api - runs a single process as a web server, suitable for Docker
- bkjs run -worker - runs a single process worker, suitable for Docker
- bkjs shell or bksh - starts the backendjs shell, no API or Web server is initialized, only the database pools

The main purpose of backendjs is to provide an API to access the data; the data can be stored in the database or some other way, but access to it will be over HTTP with results returned back as JSON. This is the default functionality, but any custom application may return data in whatever format is required.
Basically backendjs is a Web server with the ability to perform data processing using local or remote jobs which can be scheduled similar to Unix cron.
The principle behind the system is that nowadays API services just return data which Web apps or mobile apps render to the user without the backend involved. It does not mean this is a simple gateway to the database; in many cases it is, but when special processing of the data is needed before sending it to the user, backendjs provides many convenient helpers and tools for it.
When the API layer is initialized, the api module contains an app object which is the Express server.
A special module/namespace app is designated for application development/extension. This module is available in the same way as api and core, which makes it easy to reference and extend with additional methods and structures.
The typical structure of a single file backendjs application is the following:
const bkjs = require('backendjs');
const core = bkjs.core;
const api = bkjs.api;
const app = bkjs.app;
const db = bkjs.db;
app.listArg = [];
// Define the module config parameters
core.describeArgs('app', [
{ name: "list-arg", array: 1, type: "list", descr: "List of words" },
{ name: "int-arg", type: "int", descr: "An integer parameter" },
]);
// Describe the tables or data models, all DB pools will use it, the master or shell
// process only creates new tables, workers just use the existing tables
db.describeTables({
...
});
// Optionally customize the Express environment, setup MVC routes or else, `api.app` is the Express server
app.configureMiddleware = function(options, callback)
{
...
callback()
}
// Register API endpoints, i.e. url callbacks
app.configureWeb = function(options, callback)
{
api.app.get('/some/api/endpoint', (req, res) => {
// to return an error, the message will be translated with internal i18n module if locales
// are loaded and the request requires it
api.sendReply(res, err);
// or with custom status and message, explicitly translated
api.sendReply(res, 404, res.__({ phrase: "not found", locale: "fr" }));
// with config check
if (app.intArg > 5) ...
if (app.listArg.indexOf(req.query.name) > -1) ...
// to send data back with optional postprocessing hooks
api.sendJSON(req, err, data);
// or simply
res.json(data);
});
...
callback();
}
// Optionally register post processing of the returned data from the default calls
api.registerPostProcess('', /^\/account\/([a-z\/]+)$/, (req, res, rows) => { ... });
...
// Optionally register access permissions callbacks
api.registerAccessCheck('', /^\/test\/list$/, (req, status, callback) => { ... });
api.registerPreProcess('', /^\/test\/list$/, (req, status, callback) => { ... });
...
bkjs.server.start();
Another, probably easier, way to create single file apps is to use your own namespace instead of app:
const bkjs = require("backendjs");
const api = bkjs.api;
const db = bkjs.db;
const mymod = {
name: "mymod",
args: [
{ name: "types", type: "list", descr: "Types allowed" },
{ name: "size", type: "int", descr: "Records in one page" },
],
tables: {
mytable: {
id: { type: "int", primary: 1 },
name: { primary: 2 },
type: { type: "list" },
descr: {}
}
}
};
module.exports = mymod;
bkjs.core.addModule(mymod);
mymod.configureWeb = function(options, callback)
{
    api.app.all("/mymod", async function(req, res) {
        if (!req.query.id) return api.sendReply(res, 400, "id is required");
        req.query.type = mymod.types;
        const rows = await db.aselect("mytable", req.query, { ops: { type: "in" }, count: mymod.size });
        api.sendJSON(req, null, rows);
    });
    callback();
}
bkjs.server.start();
Except for app.configureWeb and server.start(), all other functions are optional; they are here for the sake of completeness of the example. Also, because running the backend involves more than just running a web server, many things can be set up using configuration options, like common access permissions and cron job configuration, so the amount of code needed for a fully functioning production API server is not that much: basically only the request endpoint callbacks must be provided by the application.
As with any Node.js application, node modules are the way to build and extend the functionality, backendjs does not restrict how the application is structured.
By default no system modules are loaded except bk_user; the -preload-modules config parameter must be used to preload modules from backendjs/modules/.
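For example, to preload the system module used in the System API section later in this document:

preload-modules=bk_system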
Another way to add functionality to the backend is via external modules specific to the backend; these modules are loaded on startup from the modules/ subdirectory of the backend home. The format is the same as for regular Node.js modules, and only top level .js files are loaded on backend startup. Once loaded they have the same access to the backend as the rest of the code; the only difference is that they reside in the backend home and can be shipped independently of npm, node modules and other env setup. These modules are exposed in core.modules the same way as all other core submodules.
Let's assume modules/ contains a file facebook.js which implements custom Facebook logic:
const bkjs = require("backendjs");
const core = bkjs.core;
const mod = {
name: "facebook",
args: [
{ name: "token", descr: "API token" },
]
}
module.exports = mod;
mod.configureWeb = function(options, callback) {
...
}
mod.makeRequest = function(options, callback) {
    core.sendRequest({ url: options.path, query: { access_token: mod.token } }, callback);
}
This is the main app code:
const bkjs = require("backendjs");
const core = bkjs.core;
// Using facebook module in the main app
api.app.get("/me", (req, res) => {
core.modules.facebook.makeRequest({ path: "/me" }, (err, data) => {
bkjs.api.sendJSON(req, err, data);
});
});
bkj.server.start();
In case different modules are better kept separately for maintenance or development purposes, they can be split into separate NPM packages; the structure is the same, modules must be in the modules/ folder and the package must be loadable via require as usual. In most cases just an empty index.js is enough. Such modules will not be loaded via require though, but by the backendjs core.loadModule machinery; the NPM packages just keep different module directories separate from each other.
The config parameter preload-packages can be used to specify NPM package names to be loaded, separated by commas; as with the default application structure, all subfolders inside each NPM package will be added to the core:
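A sketch of such an entry in etc/config, with hypothetical package names:

preload-packages=bk-billing,bk-reports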
If there is a config file present as etc/config it will be loaded as well; this way each package can maintain its default config parameters if necessary without touching other or global configuration. Such config files will not be reloaded on changes though: when NPM installs or updates packages it moves files around, so watching the old config makes no sense because the updated config file will be a different file.
The backend supports multiple databases and provides the same db layer for access. Common operations are supported, and all other specific usage can be achieved by using SQL directly or another query language supported by a particular database.
The database operations supported in the unified way provide simple actions like db.get, db.put, db.update, db.del, db.select. The db.query method provides generic access to the database driver and executes the given query directly; it can be SQL or another driver-specific query request.
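A minimal sketch of a raw query in the shell, assuming the pg pool configured earlier and the { text, values } calling convention for db.query:

> db.query({ text: "SELECT * FROM bk_user WHERE login=$1", values: ["test"] }, { pool: "pg" }, lib.log);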
Before the tables can be queried the schema must be defined and created, the backend db layer provides simple functions to do it:
db.describeTables({
album: {
id: { primary: 1 }, // Primary key for an album
name: { pub: 1 }, // Album name, public column
mtime: { type: "now" }, // Modification timestamp
},
photo: {
album_id: { primary: 1 }, // Combined primary key
id: { primary: 1 }, // consisting of album and photo id
name: { pub: 1, index: 1 }, // Photo name or description, public column with the index for faster search
mtime: { type: "now" }
}
});
Each database may restrict how the schema is defined and used; the db layer does not provide an artificial layer hiding all specifics, it just provides the same API and syntax. For example, DynamoDB tables must have only a hash primary key or a combined hash and range key, so when creating a table to be used with DynamoDB only one or two columns can be marked with the primary property, while for SQL databases a composite primary key can consist of more than 2 columns.
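For instance, a hypothetical table with a 3-column composite primary key would work on SQL pools but would be rejected by DynamoDB:

db.describeTables({
    events: {
        app: { primary: 1 },  // hash key, fine for DynamoDB
        day: { primary: 2 },  // range key, fine for DynamoDB
        seq: { primary: 3 },  // a third key column only works in SQL databases
        data: {}
    }
});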
The backendjs always creates several tables in the configured database pools by default; these tables are required to support the default API functionality and some are required for backend operations. Refer below to the JavaScript modules documentation which describes which tables are created by default. In custom applications the db.describeTables method can modify columns in the default tables and add more columns if needed.
For example, to make the birthday and some other columns in the accounts table public and visible to other users, the following can be done in the api.initApplication method. It will extend the bk_user table and the application can use the new columns the same way as the already existing columns.
Using the birthday column we make an age property that is automatically calculated and visible in the result; this is done by the app.processAccountRow method below, which is registered as a post-process callback for the bk_user table. The computed property age will be returned because it is not present in the table definition, and all properties not defined and configured are passed as is.
The cleanup of the public columns is done by api.sendJSON, which is used by all API routes when ready to send data back to the client. If any post-process hooks are registered and return data themselves, then it is the hook's responsibility to clean up non-public columns.
db.describeTables({
    bk_user: {
        birthday: {},
        ssn: {},
        salary: { type: "int" },
        occupation: {},
        home_phone: {},
        work_phone: {}
    }
});
app.configureWeb = function(options, callback)
{
db.setProcessRow("post", "bk_user", this.processAccountRow);
...
callback();
}
app.processAccountRow = function(req, row, options)
{
if (row.birthday) row.age = Math.floor((Date.now() - core.toDate(row.birthday))/(86400000*365));
}
To define tables inside a module just provide a tables property in the module object; it will be picked up by the database initialization automatically.
const bkjs = require("backendjs");
const db = bkjs.db;

const mod = {
name: "billing",
tables: {
invoices: {
id: { type: "int", primary: 1 },
name: {},
price: { type: "real" },
mtime: { type: "now" }
}
}
}
module.exports = mod;
// Run db setup once all the DB pools are configured, for example produce dynamic icon property
// for each record retrieved
mod.configureModule = function(options, callback)
{
db.setProcessRows("post", "invoices", function(req, row, opts) {
if (row.id) row.icon = "/images/" + row.id + ".png";
});
callback();
}
Database table names can be aliased with the db-aliases config parameters; this is useful for easier naming conventions or for switching to a different table name on the fly without changing the code. Access to the table by its real name is always available.
For example:
bksh -db-aliases-bk_user users
> await db.aget("bk_user", { login: "u1" })
> { login: "u1", name: "user", .... }
> await db.aget("users", { login: "u1" })
> { login: "u1", name: "user", .... }
All methods will take input parameters from req.query, for both GET and POST.
One way to verify input values is to use lib.toParams: only the specified parameters will be returned and converted according to their type, everything else is ignored.
Example:
var params = {
    test1: {
        id: { type: "text" },
        count: { type: "int" },
        email: { regexp: /^[^@]+@[^@]+$/ }
    }
};
api.app.all("/endpoint/test1", function(req, res) {
const query = lib.toParams(req.query, params.test1);
if (typeof query == "string") return api.sendReply(res, 400, query);
...
});
Here is an example of how to create a simple TODO application using any database supported by the backend. It supports basic operations: add/update/delete a record, show all records.
Create a file named app.js
with the code below.
const bkjs = require('backendjs');
const api = bkjs.api;
const lib = bkjs.lib;
const app = bkjs.app;
const db = bkjs.db;
// Describe the table to store todo records
db.describeTables({
todo: {
id: { type: "uuid", primary: 1 }, // Store unique task id
due: {}, // Due date
name: {}, // Short task name
descr: {}, // Full description
mtime: { type: "now" } // Last update time in ms
}
});
// API routes
app.configureWeb = function(options, callback)
{
    api.app.get(/^\/todo\/([a-z]+)$/, async function(req, res) {
        var options = api.getOptions(req);
        // braces around cases with const avoid redeclaring rows in the shared switch scope
        switch (req.params[0]) {
        case "get": {
            if (!req.query.id) return api.sendReply(res, 400, "id is required");
            const rows = await db.aget("todo", { id: req.query.id }, options);
            api.sendJSON(req, null, rows);
            break;
        }
        case "select": {
            options.noscan = 0; // Allow empty scan of the whole table if no query is given, disabled by default
            const rows = await db.aselect("todo", req.query, options);
            api.sendJSON(req, null, rows);
            break;
        }
        case "add":
            if (!req.query.name) return api.sendReply(res, 400, "name is required");
            // By default the due date is tomorrow
            if (req.query.due) req.query.due = lib.toDate(req.query.due, Date.now() + 86400000).toISOString();
            db.add("todo", req.query, options, (err, rows) => {
                api.sendJSON(req, err, rows);
            });
            break;
        case "update": {
            if (!req.query.id) return api.sendReply(res, 400, "id is required");
            const rows = await db.aupdate("todo", req.query, options);
            api.sendJSON(req, null, rows);
            break;
        }
        case "del":
            if (!req.query.id) return api.sendReply(res, 400, "id is required");
            db.del("todo", { id: req.query.id }, options, (err, rows) => {
                api.sendJSON(req, err, rows);
            });
            break;
        }
});
callback();
}
bkjs.server.start();
Now run it with an option to allow API access without an account:
node app.js -log debug -web -api-allow-path /todo -db-create-tables
To use a different database, for example PostgreSQL (running locally) or DynamoDB (assuming an EC2 instance), all config parameters can be stored in etc/config as well:
node app.js -log debug -web -api-allow-path /todo -db-pool dynamodb -db-dynamodb-pool default -db-create-tables
node app.js -log debug -web -api-allow-path /todo -db-pool pg -db-pg-pool default -db-create-tables
API commands can be executed in the browser or using curl
:
curl 'http://localhost:8000/todo/add?name=TestTask1&descr=Descr1&due=2015-01-01'
curl 'http://localhost:8000/todo/select'
When the backend server starts and no -home argument is passed on the command line, the backend makes its home environment in the ~/.bkjs directory.
It is also possible to set the default home using the BKJS_HOME environment variable.
The backend directory structure is the following:
etc
- configuration directory, all config files are there
etc/profile
- shell script loaded by the bkjs utility to customize env variables
etc/config
- config parameters, same as specified in the command line but without the leading dash, one config parameter per line:
Example:
debug=1
db-pool=dynamodb
db-dynamodb-pool=http://localhost:9000
db-pg-pool=postgresql://postgres@127.0.0.1/backend
To specify another config file: bkjs shell -config-file file
etc/config.local
- same as config but for cases when the local environment differs from production, or for dev-specific parameters.
On startup the following local config files will be loaded if present: etc/config.runMode and etc/config.instance.tag. These are loaded after the main config but before config.local. The runMode is set to dev by default and can be changed with the -run-mode config parameter; the instance tag is set with the -instance-tag config parameter.
Config files support sections that can be used for conditions, see the lib.configParse description for details.
etc/crontab
- jobs to be run with intervals, JSON file with a list of cron jobs objects:
Example:
Create a file ~/.bkjs/etc/crontab with the following contents:
[ { "cron": "0 1 1 * * 1,3", "job": { "app.cleanSessions": { "interval": 3600000 } } } ]
Define the function that the cron will call with the specified options; the callback must be called at the end. Create this app.js file:
var bkjs = require("backendjs");
bkjs.app.cleanSessions = function(options, callback) {
bkjs.db.delAll("session", { mtime: options.interval + Date.now() }, { ops: "le" }, callback);
}
bkjs.server.start()
Start the jobs queue and the web server at once
bkjs master -jobs-workers 1 -jobs-cron
etc/crontab.local - additional local crontab that is read after the main one, for local or dev environment
The run-mode and db-pool config parameters can be configured in DNS as TXT records; on startup the backend will try to resolve such records and use the value if not empty.
All params marked with DNS TXT can be configured in the DNS server for the domain where the backend is running: the config parameter name is concatenated with the domain and queried for the TXT record type. For example, the run-mode parameter will be queried as a TXT record named run-mode.domain.name.
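A hypothetical TXT record for a backend running under example.com could look like this in a zone file:

run-mode.example.com.  IN  TXT  "prod"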
modules
- loadable modules with specific functionality
images
- all images to be served by the API server, every subfolder represent naming space with lots of subfolders for images
var
- database files created by the server
tmp
- temporary files
web
- Web pages served by the static Express middleware
On startup some env variables are used for initial configuration:
- the home directory: the -home config parameter overrides it
- the run mode: -run-mode overrides it
- the config file: -conf-file overrides it
- preloaded packages: -preload-packages overrides it
- config roles: -config-roles overrides it
- the db pool: -db-pool overrides it
- the config db: -db-config overrides it
- the instance tag: -instance-tag overrides it, and it may also be overridden by the AWS instance tag

The database layer supports caching of responses using the db.getCached call; it retrieves exactly one record from the configured cache, and if no record exists it pulls the record from the database and on success stores it in the cache before returning it to the client. When dealing with cached records, there is a special option that must be passed to all put/update/del database methods in order to clear the local cache, so that the next time the record is retrieved with new changes from the database and the cache is refreshed: { cached: true } can be passed in the options parameter for the db methods that may modify records with cached contents. When it is required to clear the cache manually, there is the db.clearCache method for that.
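A minimal sketch of the flow described above, assuming bk_user records are cached and the db.getCached("get", table, query, options, callback) calling convention:

// Served from the cache if present, otherwise fetched from the db and cached
db.getCached("get", "bk_user", { login: "test" }, lib.log);

// Pass cached: true so the stale cached copy is cleared on modification
db.update("bk_user", { login: "test", name: "New Name" }, { cached: true }, lib.log);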
Also there is a configuration option -db-caching to make any table automatically cached for all requests.
If no cache is configured, the local driver is used: it keeps the cache in the master process in an LRU pool, and any worker or Web process communicates with it via the internal messaging provided by the cluster module. This works only within a single server.
Set ipc-client=redis://HOST[:PORT] to point to the server running Redis.
The config option max_attempts defines the maximum number of reconnect attempts before giving up. Any other node-redis module parameter can be passed in the options or the url as well; the system supports special parameters that start with bk-, and it will extract them into options automatically.
For example:
ipc-client=redis://host1?bk-max_attempts=3
ipc-client-backup=redis://host2
ipc-client-backup-options-max_attempts=3
If configured, all processes subscribe to it and listen for system messages; it must support PUB/SUB and does not need to be reliable. Websockets in the API server also use the system bus to send broadcasts between multiple API instances.
ipc-client-system=redis://
ipc-system-queue=system
To configure the backend to use Redis for job processing set ipc-queue=redis://HOST, where HOST is the IP address or hostname of the single Redis server.
This driver implements a reliable Redis queue; with the visibilityTimeout config option it works similar to AWS SQS.
Once configured, all calls to jobs.submitJob will push jobs to be executed to the Redis queue; starting a backend master process somewhere with -jobs-workers 2 will launch 2 worker processes which will start pulling jobs from the queue and executing them.
The naming convention is that any function defined as function(options, callback)
can be used as a job to be executed in one of the worker processes.
An example of how to perform jobs in the API routes:
core.describeArgs('app', [
{ name: "queue", descr: "Queue for jobs" },
]);
app.queue = "somequeue";
app.processAccounts = function(options, callback) {
db.select("bk_user", { type: options.type || "user" }, (err, rows) => {
...
callback();
});
}
api.all("/process/accounts", (req, res) => {
jobs.submitJob({ job: { "app.processAccounts": { type: req.query.type } } }, { queueName: app.queue }, (err) => {
api.sendReply(res, err);
});
});
To use AWS SQS for job processing set ipc-queue=https://sqs.amazonaws.com....; this queue system will poll SQS for new messages on a worker and delete the message after successful execution. For long running jobs it will automatically extend the visibility timeout if it is configured.
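A hypothetical config entry, with placeholder account id and queue name:

ipc-queue=https://sqs.us-east-1.amazonaws.com/123456789012/jobs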
The local queue is implemented on the master process as a list; communication is done via local sockets between the master and workers. This is intended for single server development purposes only.
To use NATS (https://nats.io) configure a queue like ipc-queue-nats=nats://HOST:PORT, it supports broadcasts and job queues only, visibility timeout is supported as well.
To configure the backend to use RabbitMQ for messaging set ipc-queue=amqp://HOST and optionally amqp-options=JSON with options for the amqp module.
Additional objects from the config JSON are used for specific AMQP functions: { queueParams: {}, subscribeParams: {}, publishParams: {} }. These will be passed to the corresponding AMQP methods: amqp.queue, amqp.queue.subscribe, amqp.publish. See the AMQP Node.js module for more info.
This is the default setup of the backend: all API requests, with few exceptions, must provide a valid signature, while all HTML, JavaScript, CSS and image files are available to everyone. This mode assumes that Web development will be based on a 'single-page' design where only data is requested from the Web server and all rendering is done using JavaScript. This is how the examples/api/api.html developer console is implemented, using jQuery-UI and Knockout.js.
To see current default config parameters run any of the following commands:
bkjs bkhelp | grep api-allow
node -e 'require("backendjs").core.showHelp()'
This is a mode when the whole Web site is secure by default: even access to the HTML files must be authenticated. In this mode every html page must set bkjs.session = true during initialization; it will enable Web sessions for the site, and then there is no need to sign every API request.
The typical client JavaScript verification for an html page may look like this; it will redirect to the login page if needed. This assumes the default path /public is still allowed without a signature:
<link href="/css/bkjs.bundle.css" rel="stylesheet">
<script src="/js/bkjs.bundle.js" type="text/javascript"></script>
<script>
$(function () {
bkjs.session = true;
$(bkjs).on("bkjs.nologin", function() { window.location='/public/index.html'; });
bkjs.koInit();
});
</script>
On the backend side your application app.js needs more secure settings defined, i.e. no html except /public will be accessible, and in case of an error the server will redirect to the login page. Note: on the login page bkjs.session must be set to true for all html pages to work after login without signing every API request.
app.configureMiddleware = function(options, callback) {
this.allow.splice(this.allow.indexOf('^/$'), 1);
this.allow.splice(this.allow.indexOf('\\.html$'), 1);
callback();
}
api.registerPreProcess('', /^\/$|\.html$/, (req, status, callback) => {
if (status.status != 200) {
status.status = 302;
status.url = '/public/index.html';
}
callback(status);
});
The simplest way is to configure ws-port to the same value as the HTTP port. This will run a WebSockets server alongside the regular Web server.
In the browser the connection config is stored in bkjs.wsconf, and by default it connects to the local server on port 8000.
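For example, in etc/config (8000 matches the default HTTP port used in this document):

ws-port=8000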
There are two ways to send messages via Websockets to the server from a browser:
as urls, e.g. bkjs.wsSend('/project/update?id=1&name=Test2')
In this case the url will be parsed and checked for access and authorization before being passed to the Express routes. This method allows sharing the same route handlers between HTTP and Websockets requests; the handlers use the same code and all responses are sent back, only in the Websockets case the response arrives in the message listener (see the example below).
bkjs.wsConnect({ path: "/project/ws?id=1" });
$(bkjs).on("bkjs.ws.message", (msg) => {
switch (msg.op) {
case "/account/update":
bkjs.wsSend("/account/ws/account");
break;
case "/project/update":
for (const p in msg.project) app.project[p] = msg.project[p];
break;
case "/message/new":
bkjs.showAlert("info", `New message: ${msg.msg}`);
break;
}
});
as JSON objects, e.g. bkjs.wsSend({ op: "/project/update", project: { id: 1, name: "Test2" } })
In this case the server still has to check for access, so it treats all JSON messages as coming from the path which was used during the connect, i.e. the one stored in bkjs.wsconf.path. The Express route handler for this path will receive all messages from Websocket clients; the response will be received in the event listener the same way as in the first use case.
// Notify all clients who is using the project being updated
api.app.all("/project/ws", (req, res) => {
switch (req.query.op) {
case "/project/update":
// some code ....
api.wsNotify({ query: { id: req.query.project.id } }, { op: "/project/update", project: req.query.project });
break;
}
res.send("");
});
In any case all Websocket messages sent from the server will arrive in the event handler and must be formatted properly in order to distinguish what is what; this is the application logic. If the server needs to send a message to all or some specific clients, for example due to some updates in the DB, it must use the api.wsNotify function.
// Received a new message for a user from external API service, notify all websocket clients by account id
api.app.post("/api/message", (req, res) => {
....
... processing logic
....
api.wsNotify({ account_id: req.query.uid }, { op: "/message/new", msg: req.query.msg });
});
There is no ready-to-use support for different versions of the API because no single solution satisfies all applications. But there are ready-to-use tools that allow implementing such a versioning system in the backend. Some examples are provided below:
Fixed versions
This is similar to the AWS version system where versions are fixed and change infrequently. A client can specify the core version using the bk-version header. When a request is parsed and the version is provided, it will be set in the request options object as apiVersion.
All API routes are defined using Express middleware, and one possible way of dealing with different versions can look like this: by appending the version to the command it is very simple to call only the changed API code.
api.app.all(/\/domain\/(get|put|del)/, function(req, res) {
    var options = api.getOptions(req);
    var cmd = req.params[0];
    if (options.apiVersion) cmd += "/" + options.apiVersion;
    switch (cmd) {
    case "get":
        break;
    case "get/2015-01-01":
        break;
    case "put":
        break;
    case "put/2015-02-01":
        break;
    case "del":
        break;
    }
});
Application semver support
For cases when applications use Semver-style versioning and there may be too many releases, the method above can still be used while the number of versions is small; once there are too many different versions with different minor/patch numbers, it is easier to support greater/less comparisons.
The application version bk-app can be supplied in the query, as a header, or in the user-agent HTTP header, which is the easiest case for mobile apps.
In the middleware the code can look like this:
var options = api.getOptions(req);
var version = lib.toVersion(options.appVersion);
switch (req.params[0]) {
case "get":
    // checks must go from the oldest version up, otherwise later branches are unreachable
    if (version < lib.toVersion("1.1")) {
        res.json([1, "name"]);
        break;
    }
    if (version < lib.toVersion("1.2.5")) {
        res.json({ id: 1, name: "name", description: "descr" });
        break;
    }
    res.json({ id: 1, name: "name", descr: "descr" });
    break;
}
The actual implementation can be modularized, split into functions, controllers.... there are no restrictions how to build the working backend code, the backend just provides all necessary information for the middleware modules.
The purpose of the bkjs shell script is to act as a helper tool in configuring and managing the backend environment, as well as to be used in operations on production systems. It is not required for backend operations and is provided as a convenience tool; it is used in backend development and can be useful for others running or testing the backend.
Run bkjs help
to see description of all available commands.
The tool is a multi-command utility where the first argument is the command to be executed, with optional additional arguments if needed.
On Linux, when started, bkjs tries to load and source the following global config files:
/etc/conf.d/bkjs
/etc/sysconfig/bkjs
Then it tries to source the local config files:
$BKJS_HOME/etc/profile
$BKJS_HOME/etc/profile.local
Any of these config files can redefine any environment variable, thus pointing to the correct backend environment directory or customizing the running environment; they should be regular shell scripts using bash syntax.
To check all env variables inside bkjs just run the command bkjs env
The tool provides some simple functions to parse command line arguments; the convention is that an argument name must start with a single dash followed by a value.
- get_arg(name, dflt) - returns the value for the arg name or the default value if specified
- get_flag(name, dflt) - returns 1 if there is a command line arg with the name, or the default value

Example:
bkjs shell -log debug

- concat_arg(name, value) - returns the concatenated value from the arg and the provided value, to combine values from multiple sources

Example:
ssh=$(concat_arg -ssh $BKJS_SSH_ARGS)

- get_json(file, name, dflt, realpath) - returns a value from the json file; name can be a path deep into the object; the realpath flag, if nonempty, will treat all values as paths and convert each into an actual real path (this is used by the internal web bundler)
- get_json_flat - similar to get_json but property names are flattened for deep access

Example:
$(get_json package.json config.sync.path)
$(get_json package.json name)

- get_all_args(except) - returns all args not present in the except list; this is to pass all arguments to another script, for command development

Example:
The script is called: `bkjs cmd1 -skip 1 -filter 2 -log 3`
Your command handler processes -skip but must pass all other args to another script except -skip:

cmd1)
    skip=$(get_arg -skip)
    ...
    other_script $(get_all_args "-skip")
    ;;
The utility is extended via external scripts that reside in the tools/ folders.
When bkjs runs it treats the first arg as a command: $BKJS_CMD is set to the whole command. If no internal command matches, it starts loading external scripts that match bkjs-PART1-*, where PART1 is the first part of the command before the first dash.
For example, when called as bkjs ec2-check-hostname, it will check the command in the main bkjs script; if not found, it will search for all files that match bkjs-ec2-* in all known folders.
The files are loaded from the following directories in this particular order:
- the -tools command line argument
- $BKJS_TOOLS
- $BKJS_HOME/tools
- $BKJS_DIR/tools

BKJS_DIR always points to the backendjs installation directory.
The BKJS_TOOLS env variable may contain a list of directories separated by spaces; this variable or the command line arg -tools is the way to add custom commands to bkjs. The BKJS_TOOLS var is usually set in one of the profile config files mentioned above.
Example of a typical bkjs command:
We need to set BKJS_TOOLS to point to our package(s); on Darwin add it to ~/.bkjs/etc/profile as
BKJS_TOOLS="$HOME/src/node-pkg/tools"
Create a file bkjs-super in that tools directory:
#!/bin/sh
case "$BKJS_CMD" in
  super)
    arg1=$(get_arg -arg1)
    arg2=$(get_arg -arg2 1)
    [ -z "$arg1" ] && echo "-arg1 is required" && exit 1
    ...
    exit
    ;;
  super-all)
    ...
    exit
    ;;
  help)
    echo ""
    echo "$0 super -arg1 ARG -arg2 ARG ..."
    echo "$0 super-all ...."
    ;;
esac
Now calling bkjs super or bkjs super-all will use the new bkjs-super file.
Then run the dev build script to produce web/js/bkjs.bundle.js and web/css/bkjs.bundle.css
cd node_modules/backendjs && npm run devbuild
Now instead of including a bunch of .js or .css files in the html pages, only /js/bkjs.bundle.js and /css/bkjs.bundle.css are needed. The configuration is in the package.json file.
The list of files to be used in bundles is in the package.json under config.bundles
.
To enable the auto bundler in your project just add to the local config ~/.bkjs/etc/config.local a list of directories to be watched for changes. For example, adding these lines to the local config will enable the watcher and bundle support:
watch-web=web/js,web/css,$HOME/src/js,$HOME/src/css
watch-ignore=.bundle.(js|css)$
watch-build=bkjs bundle -dev
The simple script below builds the bundle and refreshes a Chrome tab automatically, saving several clicks:
#!/bin/sh
bkjs bundle -dev -file $2
[ "$?" != "0" ] && exit
osascript -e "tell application \"Google Chrome\" to reload (tabs of window 1 whose URL contains \"$1\")"
To use it, call this script instead in config.local:
watch-build=bundle.sh /website
NOTE: Because the rebuild happens while the watcher is running, there are cases (like the server restarting or pulling a large update from the repository) when the bundle build may not be called or is called too early. To force a rebuild run the command:
bkjs bundle -dev -all -force
start a new AWS instance via the AWS console, use Alpine 3.19 or later
login as alpine
install commands
doas apk add git
git clone --depth=1 https://github.com/vseryakov/backendjs.git
doas backendjs/bkjs setup-ec2
doas reboot
now login as ec2-user
NOTE: if running behind a load balancer and the actual IP address is needed, set the Express option on the command line: -api-express-options {"trust%20proxy":1}. In a config file replacing spaces with %20 is not required.
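So in etc/config the same option can be written with a plain space:

api-express-options={"trust proxy":1}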
On the running machine which will be used for an image:
bksh -aws-create-image -no-reboot
Use an instance by tag for an image:
bksh -aws-create-image -no-reboot -instance-id `bkjs ec2-show -tag api -fmt id | head -1`
bksh -aws-set-route53 -name elasticsearch.ec-internal -filter elasticsearch
The first thing to do when deploying the backend into production is to change the API HTTP port; by default it is 8000, but we want port 80. Regardless of how the environment is set up, there are ultimately 2 ways to specify the port for the HTTP server to use:
config file
The config file is always located in the etc/ folder in the backend home directory; how the home is specified depends on the system, but basically it can be defined via the command line argument -home or via environment variables when using bkjs. See the bkjs documentation; on AWS instances created with the bkjs setup-server command, for a non-standard home use the /etc/sysconfig/bkjs profile, specify BKJS_HOME=/home/backend there, and the rest will be taken care of.
command line arguments
When running node scripts which use the backend, just specify the -home command line argument with the directory where your backend home should be, and the backend will use it.
Example:
node app.js -home $HOME -port 80
config database
If -db-config is specified on the command line or db-config= in the local config file, this will trigger loading additional config parameters from the specified database pool; it will load all records from the bk_config table on that db pool. Using the database to store configuration makes it easier to maintain a dynamic environment, for example in the case of auto scaling or launching on demand: a new instance will query the current config from the database, which eliminates maintaining text files and distributing them to all instances.
The config database is refreshed from time to time according to the db-config-interval parameter; also, all records with a ttl property in bk_config will be pulled every ttl interval and updated in place.
DNS records
Some config options may be kept in DNS TXT records, and every time an instance is started it will query the local DNS for such parameters. Only a small subset of all config parameters supports the DNS store. To see which parameters can be stored in DNS run bkjs show-help and look for 'DNS TXT configurable'.
git clone https://github.com/vseryakov/backendjs.git
or git clone git@github.com:vseryakov/backendjs.git
cd backendjs
if Node.js is already installed skip to the next section
to install the binary release run the command below; it will install into ~/.bkjs on Darwin
bkjs install-node
# To install into different path
bkjs install-node -home ~/.local
Important: Add NODE_PATH=$BKJS_HOME/lib/node_modules to your environment in .profile or .bash_profile so node can find global modules; replace $BKJS_HOME with the actual path unless this variable is also set in the .profile.
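For example, in ~/.profile, assuming the default ~/.bkjs install location from above:

export NODE_PATH=$HOME/.bkjs/lib/node_modules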
to install all dependencies and make backendjs module and bkjs globally available:
npm link backendjs
to run local server on port 8000 run command:
bkjs web
to start the backend in command line mode: the backend environment is prepared and initialized, including all database pools. This command line access allows you to test and run all functions from all modules of the backend without running the full server, similar to the Node.js REPL functionality. All modules are accessible from the command line.
$ ./bkjs shell
> core.version
'0.70.0'
> logger.setLevel('info')
A simple testing tool is included; it is used for internal bkjs testing but can be used for other applications as well.
The convention is to create a test file in the tests/ folder; each test file can define one or more test functions named in the form tests.test_NAME, where NAME is any custom name for the test, for example:
File tests/example.js:
tests.test_example = function(callback)
{
expect(1 == 2, "expect 1 eq 2")
callback();
}
Then to run all tests
bkjs test-all
More details are in the documentation or doc.html
All API endpoints are optional and can be disabled or replaced easily. By default the naming convention is:
/namespace/command[/subname[/subcommand]]
Any HTTP method can be used because it is the command in the URL that defines the operation. The payload can be url-encoded query parameters, JSON, or any other format supported by a particular endpoint. This makes the backend universal and usable with any environment, not just a Web browser. The request signature can be passed in the query so it does not require HTTP headers at all.
All requests to the API server must be signed with an account login/secret pair.
The resulting signature is sent in the HTTP header bk-signature or in the header specified by the api-signature-name config parameter.
For the JSON content type, the method must be POST and no query parameters specified; instead, everything should be inside the JSON object placed in the body of the request. For additional safety, a SHA1 checksum of the JSON payload can be calculated and passed in the signature; this is the only way to ensure the body is not modified when not using query parameters.
See the web/js/bkjs.js function bkjs.createSignature or the api.js function api.createSignature for the JavaScript implementations.
/auth
This API request returns the current user record from the bk_user table if the request is verified and the signature provided is valid. If there is no signature or it is invalid, the result will be an error with the corresponding error code and message.
By default this endpoint is secured, i.e. it requires a valid signature.
Parameters:
- _session=1 - if the call is authenticated, a cookie with the session signature is returned; from then on all requests with such a cookie will be authenticated, the primary use for this is Web apps

/login
Same as /auth but it uses the secret for user authentication; this request does not need a signature, just simple login and secret query parameters sent to the backend. This must be sent over SSL.
Parameters:
- login - account login
- secret - account secret
- _session=1 - same as in the /auth request

On successful login the result contains the full account record including the secret; this is the only time the secret is returned back.
Example:
$.ajax({ url: "/login?login=test123&secret=test123&_session=1",
success: function(json, status, xhr) { console.log(json) }
});
> { id: "XXXX...", name: "Test User", login: "test123", ...}
/logout
Logout the current user, clear session cookies if they exist. For pure API access with a signature this does nothing on the backend side.
The accounts API manages accounts and authentication, it provides basic user account features with common fields like email, name, address.
/account/get
Returns information about the current account; all account columns are returned except the secret and other table columns with the property priv.
Response:
{ "id": "57d07a4e28fc4f33bdca9f6c8e04d6c3",
"name": "Test User",
"mtime": 1391824028,
"login": "testuser",
"type": ["user"],
}
How to make an account an admin:
# Run backend shell
bkjs shell
# Update record by login
> db.update("bk_user", { login: 'login@name', type: 'admin' });
/account/update
Update the current account with new values; the parameters are columns of the bk_user table, only columns with non-empty values will be updated.
Example:
/account/update?name=New%2BName
When running with an AWS load balancer there should be a url that the load balancer polls all the time, and it must be a very quick and lightweight request. For this purpose there is the API endpoint /ping that just responds with status 200. It is open by default in the default api-allow-path config parameter.
The data API is a generic way to access any table in the database with common operations; as opposed to the specific APIs above, this API only deals with one table and one record, without maintaining any other features like auto counters, cache...
Because it exposes the whole database to anybody who has a login, it is a good idea to disable this endpoint in production or provide an access callback that verifies who can access it.
To disable this endpoint completely in the config: deny-modules=bk_data
To allow only admins to access it, in the config: api-allow-admin=^/data
To allow only admins to access it in code:
api.registerPreProcess('GET', '/data', function(req, status, cb) {
    if (req.account.type != "admin") return cb({ status: 401, message: 'access denied' });
    cb(status);
});
This is implemented by the data module from the core.
/data/columns
/data/columns/TABLE
Return columns for all tables or the specific TABLE
/data/keys/TABLE
Return primary keys for the given TABLE
/data/(select|search|list|get|add|put|update|del|incr|replace)/TABLE
Perform the database operation on the given TABLE; all options for the db functions are passed as query parameters prepended with underscore, regular parameters are the table columns.
By default the API does not allow table scans without a condition, to avoid expensive and long queries; to enable a scan pass _noscan=0. For this to work the Data API must be configured as unsecure in the config file using the parameter api-unsecure=data.
Some tables like messages and connections perform data conversion before returning the results, mostly splitting combined columns like type into separate fields. To return raw data pass the parameter _noprocessrows=1.
Example:
/data/get/bk_user?login=12345
/data/update/bk_user?login=12345&name=Admin
/data/select/bk_user?name=john&_ops=name,gt&_select=name,email
/data/select/bk_user?_noscan=0&_noprocessrows=1
The system API returns information about the backend statistics and allows provisioning and configuration commands and other internal maintenance functions. By default it is open for access to all users, but the same security considerations apply here as for the Data API.
This is implemented by the system module from the core. To enable this functionality specify -preload-modules=bk_system.
/system/restart
Perform a restart of the Web processes; this is done gracefully, only one Web worker process restarts at a time while the other processes keep serving requests. The intention is to allow code updates on live systems without service interruption.
/system/cache/(init|stats|keys|get|set|put|incr|del|clear)
Access to the caching functions
/system/config/(init)
Access to the config functions
/system/msg/(init|send)
Access to the messaging functions
/system/jobs/(send)
Access to the jobs functions
/system/queue/(init|publish)
Access to the queue functions
/system/params/get
Return all config parameters applied from the config file(s) or remote database.
Vlad Seryakov
Check out the Documentation for more details.
HTTP API to the server from the clients; this module implements the basic HTTP(S) API functionality with some common features. The API module incorporates the Express server, which is exposed as the api.app object; the master server spawns Web workers which perform the actual operations, monitors the worker processes and restarts them automatically if they die. The number of processes to spawn can be configured via the -server-max-workers config parameter.
When an HTTP request arrives it goes through the Express middleware, but before processing any registered routes several steps are performed:
- the req object, which by convention is the Request object, is assigned with common backend properties to be used later
- if the request signature is valid, the corresponding user record from the bk_user table will be set
- when the response is sent with the api.sendJSON method, registered post process callbacks will be called for such a response

Config parameters
- api-images-url, descr: "URL where images are stored, for cases of central image server(s), must be full URL with optional path"
- api-images-s3, descr: "S3 bucket name where to store and retrieve images"
- api-images-raw, type: "bool", descr: "Return raw urls for the images, requires images-url to be configured. The path will reflect the actual 2 level structure and account id in the image name"
- api-images-s3-options, type: "json", logger: "warn", descr: "S3 options to sign images urls, may have expires:, key:, secret: properties"
- api-images-ext, descr: "Default image extension to use when saving images"
- api-images-mod, descr: "Images scaling module, sharp"
- api-files-raw, type: "bool", descr: "Return raw urls for the files, requires files-url to be configured. The path will reflect the actual 2 level structure and account id in the file name"
- api-files-url, descr: "URL where files are stored, for cases of central file server(s), must be full URL with optional path"
- api-files-s3, descr: "S3 bucket name where to store files uploaded with the File API"
- api-files-detect, descr: "File mime type detection method: file, default is mmmagic"
- api-max-request-queue, type: "number", min: 0, descr: "Max number of requests in the processing queue, if exceeded the server returns a too busy error"
- api-no-access-log, type: "bool", descr: "Disable access logging in both file or syslog"
- api-access-log-file, descr: "File for access logging"
- api-access-log-level, type: "int", descr: "Syslog level priority, default is local5.info, 21 * 8 + 6"
- api-access-log-fields, array: 1, type: "list", descr: "Additional fields from the request or account to put in the access log, prefix defines where the field is located: q: - query, h: - headers, a: - account otherwise from the request, Example: -api-log-fields h:Referer,a:name,q:action"
- api-salt, descr: "Salt to be used for scrambling credentials or other hashing activities"
- api-qs-options-(.+), autotype: 1, obj: "qsOptions", strip: "qs-options-", nocamel: 1, descr: "Options to pass to qs.parse: depth, arrayLimit, allowDots, comma, plainObjects, allowPrototypes, parseArrays"
- api-no-static, type: "bool", descr: "Disable static files from /web folder, no .js or .html files will be served by the server"
- api-static-options-(.+), autotype: 1, obj: "staticOptions", strip: "static-options-", nocamel: 1, descr: "Options to pass to serve-static module: maxAge, dotfiles, etag, redirect, fallthrough, extensions, index, lastModified"
- api-vhost-path-([^/]+), type: "regexp", obj: "vhostPath", nocamel: 1, strip: "vhost-path-", regexp: "i", descr: "Define a virtual host regexp to be matched against the hostname header to serve static content from a different root, a vhost path must be inside the web directory, if the regexp starts with !, that means negative match, example: api-vhost-path-test_dir=test.com$"
- api-no-vhost-path, type: "regexpobj", descr: "Add to the list of URL paths that should be served for all virtual hosts"
- api-templating, descr: "Templating engine package to use, it assumes it supports Express by exposing __express or renderfile methods"
- api-no-session, type: "bool", descr: "Disable cookie session support, all requests must be signed for Web clients"
- api-session-age, type: "int", min: 0, descr: "Session age in milliseconds, for cookie based authentication"
- api-session-domain-(.+), type: "regexp", obj: "session-domain", nocamel: 1, regexp: "i", descr: "Cookie domain by Host: header, if not matched session is bound to the exact host only, example: -api-session-domain-site.com=site.com$"
- api-session-same-site, descr: "Session SameSite option, for cookie based authentication"
- api-session-cache, descr: "Cache name for session control"
- api-session-secure, type: "bool", descr: "Set cookie Secure flag"
- api-query-token-secret, descr: "Name of the property to be used for encrypting tokens for pagination or other sensitive data, any property from bk_user can be used, if empty no secret is used, if not a valid property then it is used as the secret"
- api-app-header-name, descr: "Name for the app name/version query parameter or header, it can be used to tell the server about the application version"
- api-version-header-name, descr: "Name for the access version query parameter or header, this is the core protocol version that can be sent to specify which core functionality a client expects"
- api-no-cache-files, type: "regexpobj", descr: "Set cache-control=no-cache header for matching static files"
- api-tz-header-name, descr: "Name for the timezone offset header a client can send for time sensitive requests, the backend decides how to treat this offset"
- api-signature-header-name, descr: "Name for the access signature query parameter, header and session cookie"
- api-lang-header-name, descr: "Name for the language query parameter, header and session cookie, primary language for a client"
- api-signature-age, type: "int", descr: "Max age for request signature in milliseconds, how old the API signature can be to be considered valid, the 'expires' field in the signature must be less than current time plus this age, this is to support time drifts"
- api-access-time-interval, type: "int", min: 0, descr: "Intervals to refresh last access time for accounts, only updates the cache if bk_user is configured to be cached"
- api-access-token-secret, descr: "A generic secret to be used for API access or signatures"
- api-allow-authenticated, type: "regexpobj", descr: "Add URLs which can be accessed by any authenticated user account, can be partial urls or Regexp, it is checked before any other account types, if matched then no account specific paths will be checked anymore (any of the allow-account-...)"
- api-allow-acl-authenticated, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -api-allow-authenticated parameter"
- api-allow-admin, type: "regexpobj", descr: "Add URLs which can be accessed by admin accounts only, can be partial urls or Regexp"
- api-allow-acl-admin, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -api-allow-admin parameter"
- api-allow-account-([a-z0-9_]+), type: "regexpobj", obj: "allow-account", descr: "Add URLs which can be accessed by specific account type, can be partial urls or Regexp"
- api-allow-acl-([a-z0-9_]+), type: "rlist", obj: "allow-acl", descr: "Combine regexps from the specified acls for allow checks for the specified account type"
- api-only-account-([a-z0-9_,]+), type: "regexpobj", obj: "only-account", descr: "Add URLs which can be accessed by specific account type only, can be partial urls or Regexp"
- api-only-acl-([a-z0-9_,]+), type: "rlist", obj: "only-acl", descr: "Combine regexps from the specified acls allowed for the specified account type only"
- api-deny-authenticated, type: "regexpobj", descr: "Add URLs which CAN NOT be accessed by any authenticated user account, can be partial urls or Regexp, it is checked before any other account types, if matched then no account specific paths will be checked anymore (any of the deny-account-...)"
- api-deny-acl-authenticated, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -api-deny-authenticated parameter"
- api-deny-account-([a-z0-9_]+), type: "regexpobj", obj: "deny-account", descr: "Add URLs which CAN NOT be accessed by specific account type, can be partial urls or Regexp, this is checked before any allow parameters"
- api-deny-acl-([a-z0-9_]+), type: "list", obj: "deny-acl", descr: "Combine regexps from the specified acls for deny checks for the specified account type"
- api-acl-([a-z0-9_]+), type: "regexpobj", obj: "acl", descr: "Add URLs to the named ACL which can be used in allow/deny rules per account"
- api-allow, type: "regexpobj", set: 1, descr: "Regexp for URLs that don't need credentials, replaces the whole access list"
- api-allow-path, type: "regexpobj", key: "allow", descr: "Add to the list of allowed URL paths without authentication, adds to the -api-allow parameter"
- api-allow-acl, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -api-allow parameter"
- api-deny, type: "regexpobj", set: 1, descr: "Regexp for URLs that will be denied access, replaces the whole access list"
- api-deny-path, type: "regexpobj", key: "deny", descr: "Add to the list of URL paths to be denied without authentication, adds to the -api-deny parameter"
- api-deny-acl, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -api-deny parameter"
- api-allow-anonymous, type: "regexpobj", descr: "Add to the list of allowed URL paths that can be served with or without a valid account, the difference with -api-allow-path is that it will check for a signature and an account but will continue if no login is provided, return an error in case of a wrong account or no account found"
- api-allow-acl-anonymous, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -allow-anonymous parameter"
- api-allow-empty, type: "regexpobj", descr: "Regexp for URLs that should return empty responses if not found, for example return nothing for non-existent javascript or css files"
- api-ignore-allow, type: "regexpobj", descr: "Regexp for URLs that should be ignored by the allow rules, the processing will continue"
- api-ignore-allow-path, type: "regexpobj", key: "ignore-allow", descr: "Add to the list of URL paths which should be ignored by the allow rules, in order to keep allow/deny rules simple, for example to keep some js files from being open to all: -allow-path \.js -ignore-allow-path /secure/"
- api-ignore-allow-acl, type: "list", descr: "Combine regexps from the specified acls for the check explained by the -ignore-allow-path parameter"
- api-allow-ip, type: "regexpobj", descr: "Add to the list of regexps for IPs that are only allowed access. It is checked before the endpoint access list"
- api-deny-ip, type: "regexpobj", descr: "Add to the list of regexps for IPs that will be denied access. It is checked before the endpoint access list."
- api-allow-ssl, type: "regexpobj", descr: "Add to the list of allowed locations using HTTPS only, plain HTTP requests to these urls will be refused"
- api-ignore-ssl, type: "regexpobj", descr: "Allow plain HTTP from matched IP addresses or locations"
- api-redirect-ssl, type: "regexpobj", descr: "Add to the list of the locations to be redirected to the same path but using the HTTPS protocol"
- api-express-options, type: "json", logger: "warn", descr: "Set Express config options during initialization, example: -api-express-options { \"trust proxy\": 1, \"strict routing\": true }"
- api-mime-body, type: "regexpobj", descr: "Collect full request body in the req.body property for the given MIME type in addition to json and form posts, this is for custom body processing"
- api-mime-ignore, type: "regexpobj", descr: "Ignore the body for the following MIME content types, request body will not be parsed at all"
- api-mime-map-(.+), obj: "mime-map", descr: "File extension to MIME content type mapping, this is used by static-serve, example: -api-mime-map-mobileconfig application/x-apple-aspen-config"
- api-ignore-content-type, type: "regexpobj", descr: "Ignore the content type for the following endpoint paths, keep the body unparsed"
- api-platform-match, type: "regexpmap", regexp: "i", descr: "A JSON object with a list of regexps to match the user-agent header for platform detection, example: { 'ios|iphone|ipad': 'ios', 'android': 'android' }"
- api-cors-origin, descr: "Origin header for CORS requests"
- api-cors-allow, type: "regexpobj", descr: "Enable CORS requests if a request host/path matches the given regexp"
- api-server-header, descr: "Custom Server: header to return for all requests"
- api-error-message, descr: "Default error message to return in case of exceptions"
- api-restart, descr: "On address in use error condition restart the specified servers, this assumes an external monitor like monit to handle restarts"
- api-allow-error-code, type: "regexpobj", descr: "Error codes in exceptions to return in the response to the user, if not matched the error-message will be returned"
- api-rlimits-([a-z]+)$, obj: "rlimits", make: "$1", autotype: 1, descr: "Default rate limiter parameters, default interval is 1s, ttl is to expire old cache entries, message for the error"
- api-rlimits-(rate|max|interval|ttl|ip|delay|multiplier|queue)-(.+), autotype: 1, obj: "rlimitsMap.$2", make: "$1", descr: "Rate limiter parameters for the Token Bucket algorithm: queue to use a specific queue, ttl is to expire cache entries, ip is to limit by IP address as well, ex. -api-rlimits-ip-ip=10, -api-rlimits-rate-/path=1"
- api-rlimits-map-(.+), type: "map", obj: "rlimitsMap.$1", make: "$1", maptype: "auto", merge: 1, descr: "Rate limiter parameters for the Token Bucket algorithm, set all at once, ex. -api-rlimits-map-/url=rate:1,interval:2000"
- api-exit-on-error, type: "bool", descr: "Exit on uncaught exception in the route handler"
- api-timeout, type: "number", min: 0, max: 3600000, descr: "HTTP request idle timeout for servers in ms, how long to keep the connection socket open, this does not affect Long Poll requests"
- api-keep-alive-timeout, type: "int", descr: "Number of milliseconds to keep the HTTP connection alive"
- api-request-timeout, type: "int", min: 0, descr: "Number of milliseconds to receive the entire request from the client"
- api-max-requests-per-socket, type: "int", min: 0, descr: "The maximum number of requests a socket can handle before closing the keep alive connection"
- api-(query|header|upload)-limit, type: "number", descr: "Max size for query/headers/uploads, bytes"
- api-(files|fields)-limit, type: "number", descr: "Max number of files or fields in uploads"
- api-limiter-queue, descr: "Name of an ipc queue for API rate limiting"
- api-errlog-limiter-max, type: "int", descr: "How many error messages to put in the log before throttling kicks in"
- api-errlog-limiter-interval, type: "int", descr: "Interval for the error log limiter, max errors per this interval"
- api-errlog-limiter-ignore, type: "regexpobj", descr: "Do not show errors that match the regexp"
- api-routing-(.+), type: "regexpobj", reverse: 1, nocamel: 1, obj: 'routing', descr: "Locations to be re-routed to other path, this is done inside the server at the beginning, only the path is replaced, same format and placeholders as in redirect-url, use ! in front of regexp to remove particular redirect from the list, example: -api-routing-^/account/get /acount/read"
- api-ignore-routing, type: "regexpobj", descr: "Ignore locations from the routing"
- api-auth-routing-(.+), type: "regexpobj", reverse: 1, nocamel: 1, obj: 'auth-routing', descr: "URL path to be re-routed to other path after the authentication is successful, this is done inside the server, only the path is replaced, same format and placeholders as in redirect-url, example: -api-routing-auth-^/account/get /acount/read"
- api-redirect-url, type: "regexpmap", descr: "Add to the list a JSON object with a property name defining a location regexp to be matched early against in order to redirect using the value of the property, if the regexp starts with !, that means it must be removed from the list, variables can be used for substitution: @HOST@, @PATH@, @URL@, @BASE@, @DIR@, @QUERY@, a status code can be prepended to the location, example: { '^[^/]+/path/$': '/path2/index.html', '.+/$': '301:@PATH@/index.html' }"
- api-login-redirect-(.+), type: "regexpobj", reverse: 1, nocamel: 1, obj: "login-redirect", descr: "Define a location where to redirect if no login is provided, same format and placeholders as in redirect-url, example: api-login-redirect-^/admin/=/login.html"
- api-default-auth-status, type: "int", descr: "Default authenticated status, if no auth rules matched but the signature is valid this is the status returned"
- api-default-auth-message, descr: "Default authenticated message to be returned with the default auth status"
- api-reset-acl, type: "callback", callback: function(v) { if (v) this.resetAcl() }, descr: "Reset all ACL, auth, routing and login properties in the api module"
- api-response-headers, type: "regexpmap", json: 1, descr: "A JSON object with a list of regexps to match against the location and set response headers defined as a list of pairs name, value..., example: -api-response-headers={ "^/": ["x-frame-options","sameorigin","x-xss-protection","1; mode=block"] }"
- api-cleanup-rules-(.+), obj: "cleanupRules.$1", type: "map", maptype: "auto", merge: 1, nocamel: 1, descr: "Rules for the cleanupResult per table, ex. api-cleanup-rules-bk_user=email:0,phone:1"
- api-cleanup-strict, type: "bool", descr: "Default mode for cleanup results"
- api-request-cleanup, type: "list", array: 1, descr: "List of fields to explicitly cleanup on request end"
- api-query-defaults-([a-z0-9_]+)-(.+), obj: "queryDefaults.$2", make: "$1", autotype: 1, descr: "Global query defaults for getQuery, can be path specific, ex. -api-query-defaults-max-name 128 -api-query-defaults-max-/endpoint-name 255"
- api-csrf-set-path, type: "regexpobj", descr: "Regexp for URLs to set CSRF token for all methods, token type (account|pub) is based on the current session"
- api-csrf-pub-path, type: "regexpobj", descr: "Regexp for URLs to set a public CSRF token only if no valid CSRF token detected"
- api-csrf-check-path, type: "regexpobj", descr: "Regexp for URLs to set CSRF token for skip methods and verify for others"
- api-csrf-skip-method, type: "regexp", descr: "Do not check for CSRF token for specified methods"
- api-csrf-skip-status, type: "regexp", descr: "Do not return CSRF token for specified status codes"
- api-csrf-header-name, descr: "Name for the CSRF header"
- api-csrf-age, type: "int", min: 0, descr: "CSRF token age in milliseconds"
- api-delays-([0-9]+), type: "int", obj: "delays", nocamel: 1, descr: "Delays in ms by status code, useful for delaying error responses to slow down brute force attacks, ex. -api-delays-401 1000"
- api-err-(.+), descr: "Error messages for various cases"
- api-compressed-([^/]+), type: "regexp", obj: "compressed", nocamel: 1, strip: "compressed-", reverse: 1, regexp: "i", descr: "Match static paths to be returned compressed, files must exist and be pre-compressed with the given extension, example: -api-compress-bundle.js gz"
- api-allow-configure-(web|middleware), type: "regexp", descr: "Modules allowed to call configureWeb or Middleware, i.e. only allowed endpoints"

api.init(options, callback)
Initialize the API layer, this must be called before the api module can be used but it is called by the server module automatically so api.init rarely needs to be called directly, only for new server implementations or if using in the shell for testing.
During the init sequence, this function calls the api.initMiddleware and api.initApplication methods which by default are empty but can be redefined in the user applications.
The backendjs uses its own request parser that places query parameters into req.query or req.body depending on the method.
For the GET method, req.query contains all url-encoded parameters, for the POST method req.body contains url-encoded parameters or the parsed JSON payload or multipart payload.
The reason not to do this by default is that merging both sources may not always be the desired behavior and distinguishing data coming in the query or in the body may be preferable; also, this is only needed for Express .all handlers, when registering a handler by method like .get or .post the handler needs to deal with only one source of the request data.
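For illustration, a minimal sketch of method-specific handlers (the /hello endpoint is hypothetical):

    mod.configureWeb = function(options, callback) {
        // GET /hello?name=john - url-encoded parameters arrive in req.query
        api.app.get("/hello", function(req, res) {
            api.sendJSON(req, null, { hello: req.query.name });
        });
        // POST /hello with a JSON or url-encoded payload - parameters arrive in req.body
        api.app.post("/hello", function(req, res) {
            api.sendJSON(req, null, { hello: req.body.name });
        });
        callback();
    }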
api.shutdown(callback)
Gracefully close all connections, call the callback after that
api.shutdownWeb(options, callback)
Gracefully close all database pools when the shutdown is initiated by a Web process
api.configureStatic()
Templating and static paths
api.configureAccessLog()
Setup access log stream
api.handleServerRequest(req, res)
Start Express middleware processing wrapped in the node domain
api.prepareRequest(req)
Prepare request options that the API routes will merge with, can be used by pre process hooks, initialize required properties for subsequent use
api.prepareOptions(req)
Parse or re-parse special headers about app version, language and timezone, it is called early to parse headers first and then right after the query parameters are available, query values have higher priority than headers.
api.startMetrics(req, res, next)
This is supposed to be called at the beginning of request processing to start metrics and install the handler which will be called at the end to finalize the metrics and call the cleanup handlers
api.handleMetrics(req, elapsed)
Finish metrics collection about the current request
api.handleCleanup(req)
Call registered cleanup hooks and clear the request explicitly
api.checkQuery(req, res, next)
Parse incoming query parameters
api.checkBody(req, res, next)
Parse multipart forms for uploaded files
api.checkRouting(req, name, ignore)
Check if the current request must be re-routed to another endpoint
api.checkRedirectPlaceholders(req, pathname)
Replace redirect placeholders
api.checkRedirectSsl(req)
Check a request for possible SSL redirection, it checks the original URL
api.checkRedirectRules(req, name)
Check a request for possible redirection condition based on the configuration.
This is used by API servers for early redirections. It returns null if no redirects or errors happened, otherwise an object with a status that is expected by the api.sendStatus method.
The options is expected to contain the following cached request properties:
api.checkRateLimits(req, options, callback)
Perform rate limiting by specified property, if not given no limiting is done.
The following options properties can be used:
type - predefined: ip, path, opath
, determines by which property to perform rate limiting, when using account properties
the rate limiter should be called after the request signature has been parsed. Any other value is treated as
custom type and used as is. If it is an array all items will be checked sequentially.
This property is required.
The predefined types checked for every request:
ip - check every IP address
opath - same as path but uses original path before routing
path - limit number of requests for an API path by IP address, * can be used at the end to match only the beginning
-api-rlimits-rate-ip=100 -api-rlimits-rate-/api/path=2 -api-rlimits-ip-/api/path=1 -api-rlimits-rate-/api/path/*=1
ip - to use the specified IP address
max - max capacity to be used by default
rate - fill rate to be used by default
interval - interval in ms within which the rate is measured, default 1000 ms
message - more descriptive text to be used in the error message for the type, if not specified a generic error message is used
queue - which queue to use instead of the default, some limits are more useful with global queues like Redis instead of the default
delay - time in ms to delay the response, slowing down request rate
multiplier - multiply the interval after it consumed all tokens, subsequent checks use the increased interval, fractions supported, if the multiplier is positive then the interval will keep increasing indefinitely, if it is negative the interval will reset to the default value on first successful consumption
The metrics are kept in the LRU cache in the master process by default.
Example:
api.checkRateLimits(req, { type: "ip", rate: 100, interval: 60000 }, (err, info) => {
if (err) return api.sendReply(err);
...
});
api.sendJSON(req, err, data)
Send result back with possibly executing post-process callback, this is used by all API handlers to allow custom post processing in the apps. If err is not null the error message is returned immediately.
api.sendFormatted(req, err, data, options)
Send result back formatting according to the options properties:
api.sendStatus(res, options)
Return reply to the client using the options object, it contains the following properties:
i18n Note:
The API server attaches fake i18n functions req.__ and res.__ which are used automatically for the message property before sending the response.
With a real i18n module these can/will be replaced to perform actual translation without explicitly using the i18n.__ method for messages in the application code for the sendStatus or sendReply methods.
Replies can be delayed per status via api.delays if configured, to override any delay set req.options.sendDelay to a nonzero value, negative equals no delay
api.sendReply(res, status, text)
Send formatted JSON reply to an API client, if status is an instance of Error then error message with status 500 is sent back.
If the status is an object it is sent as is.
All Error objects will return a generic error message without exposing the real error message, it will log all error exceptions in the logger subject to log throttling configuration.
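For example, a sketch of a typical handler combining sendReply for errors and sendJSON for results (the endpoint is hypothetical):

    api.app.all("/user/get", function(req, res) {
        db.get("bk_user", { login: req.query.login }, function(err, row) {
            if (err) return api.sendReply(res, err);
            if (!row) return api.sendReply(res, 404, "not found");
            api.sendJSON(req, null, row);
        });
    });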
api.sendFile(req, file, redirect)
Send file back to the client, res is Express response object
api.handleLogout(req)
Clear the session and all cookies
api.handleSignature(req, res, callback)
Perform authorization of the incoming request for access and permissions
api.newSignature(req)
Returns a new signature object with all required properties filled from the request object
api.getSignature(req)
Parse incoming request for signature and return all pieces wrapped in an object, this object will be used by verifySignature
function.
If the signature is successfully recognized it is saved in the request as req.signature,
it always returns a signature object, a new one or the existing one
api.verifySignature(req, sig, account, callback)
Returns true if the signature sig
matches given account secret. account
object must be a bk_user
record.
api.createSignature(login, secret, method, host, uri, options)
Create secure signature for an HTTP request. Returns an object with HTTP headers to be sent in the response.
The options may contain the following:
api.checkRequestSignature(req, callback)
Verify request signature from the request object, uses properties: .host, .method, .url or .originalUrl, .headers
api.checkAccess(req, callback)
Perform URL based access checks, this is called before the signature verification, very early in the request processing step.
Checks access permissions, calls the callback with the following argument:
api.checkAuthorization(req, status, callback)
Perform authorization checks after the account has been checked for a valid signature, this is called even if the signature verification failed, in case of a custom authentication middleware this must be called at the end and use the status object returned in the callback to return an error or proceed with the request. In any case the result of this function is final.
If a user has valid login by default access to all API endpoints is granted, to restrict access to specific APIs use any combinations of
api-allow
or api-deny
config parameters.
api.getCsrfToken(req)
CSRF token format: TYPE,RANDOM_INT,EXPIRE_MS,[UID]
The type is h for a header token or c for a cookie token.
Implements double cookie protection using HTTP and cookie tokens, both must be present.
In addition a token may contain the account id which must be the same as logged in user.
Return HTTP CSRF token, can be used in templates or forms, the cookie token will reuse the same token
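A possible usage sketch, an endpoint handing the HTTP token to a client (the endpoint itself is hypothetical):

    api.app.get("/csrf", function(req, res) {
        // the cookie token reuses the same token as described above
        api.sendJSON(req, null, { csrf: api.getCsrfToken(req) });
    });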
api.verifyCsrfToken(req)
Returns .ok == false if the CSRF token verification fails, both header and cookie are checked and returned as .h and .c
api.checkCsrfToken(req, options)
For configured endpoints check for a token and fail if not present or invalid
api.skipCsrfToken(req)
Do not return the CSRF token in cookies or headers
api.clearCsrfToken(req)
Reset CSRF tokens from cookies and headers
api.fileUrl(file, options)
Returns absolute file url if it is configured with any prefix or S3 bucket, otherwise returns empty string
api.getFile(req, file, options)
Send a file to the client
api.readFile(file, options, callback)
Returns contents of a file, all specific parameters are passed as is, the contents of the file is returned to the callback,
see lib.readFile
or aws.s3GetFile
for specific options.
api.copyFile(source, dest, options, callback)
Copy a file from one location to another, can deal with local and S3 files if starts with s3:// prefix
api.listFile(options, callback)
Returns a list of file names inside the given folder, options.filter
can be a regexp to restrict which files to return
api.putFile(req, name, options, callback)
Upload file and store in the filesystem or S3, try to find the file in multipart form, in the body or query by the given name
Output file name is built according to the following options properties:
On return the options may have the following properties set:
api.storeFile(tmpfile, outfile, options, callback)
Place the uploaded tmpfile to the destination pointed by outfile
api.delFile(file, options, callback)
Delete file by name from the local filesystem or S3 drive if filesS3 is defined in api or options objects
api.detectFile(file, flags, callback)
Returns detected mime type and ext for a file, requires mmmagic
package,
if no flags given uses MAGIC_MIME_TYPE by default
api.findHook(type, method, path)
Find registered hooks for given type and path
api.addHook(type, method, path, callback)
Register a hook callback for the type and method and request url, if already exists does nothing.
api.registerRateLimits(name, rate, max, interval, queue)
Register an access rate limit for a given name, all other rate limit properties will be applied as described in checkRateLimits
api.registerControlParams(options)
Add special control parameters that will be recognized in the query and placed in the req.options
for every request.
Control params start with underscore and will be converted into the configured type according to the spec.
The options
is an object in the format that is used by lib.toParams
, no default type is allowed, even for string
it needs to be defined as { type: "string" }.
No existing control parameters will be overridden, also care must be taken when defining new control parameters so they do not conflict with the existing ones.
These are default common parameters that can be used by any module:
_count, _page, _tm, _sort, _select, _ext, _start, _token, _session, _format, _total, _encoding, _ops
These are the reserved names that cannot be used for parameters, they are defined by the engine for every request:
path, apath, ip, host, mtime, cleanup, secure, noscan, appName, appVersion, appLocale, appTimezone, apiVersion
NOTE: noscan
is set to 1 in every request to prevent accidental full scans, this means it cannot be enabled via the API but any module
can do it in the code if needed.
Example:
mod.configureMiddleware = function(options, callback) {
api.registerControlParams({ notify: { type: "bool" }, level: { type: "int", min: 1, max: 10 } });
callback();
}
Then if a request arrives for example as `_notify=true&_level=5`, it will be parsed and placed in the `req.options`:
mod.configureWeb = function(options, callback) {
api.app.all("/send", function(req, res) {
if (req.options.notify) { ... }
if (req.options.level > 5) { ... }
});
callback()
}
api.registerAccessCheck(method, path, callback)
Register a handler to check access for any given endpoint, it works the same way as the global accessCheck function and is called before validating the signature or session cookies. No account information is available at this point yet.
Example:
api.registerAccessCheck('', 'account', function(req, cb) { cb({ status: 500, message: "access disabled" }) })
api.registerAccessCheck('POST', '/account/add', function(req, cb) {
if (!req.query.invitecode) return cb({ status: 400, message: "invitation code is required" });
cb();
});
api.registerAuthCheck(method, path, callback)
This callback will be called after the signature or session is verified but before the ACL authorization is called. The req.account object will always exist at this point but may not contain the user in case of an error.
The purpose of this hook is to perform alternative authentication like API access with keys. Because it is called before the authorization it is also possible to customize user roles.
To just continue to the next hook or step return nothing in the cb, any returned status will be final, an error status will be immediately returned in the response, status 200 will continue to the authorization step
Example:
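A minimal sketch, assuming the callback receives (req, status, cb) like the pre-process hooks, with a hypothetical x-api-key header:

    api.registerAuthCheck('GET', '/api/', function(req, status, cb) {
        // grant access with an alternative API key when the signature check did not pass
        if (status.status != 200 && req.headers["x-api-key"] === process.env.API_KEY) {
            return cb({ status: 200, message: "ok" });
        }
        cb();
    });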
api.registerPreProcess(method, path, callback)
Similar to registerAuthCheck
, this callback will be called after the signature or session is verified and ACL authorization performed but before
the API route method is called. The req.account
object will always exist at this point but may not contain the user in case of an error.
The purpose of this hook is to perform some preparations or check permissions of a valid user to resources or in case of error perform any other action like redirection or returning something explaining what to do in case of failure.
Example:
api.registerPreProcess('GET', '/account/get', function(req, status, cb) {
if (status.status != 200) status = { status: 302, url: '/error.html' };
cb(status)
});
Example with admin access only:
api.registerPreProcess('POST', '/data/', function(req, status, cb) {
if (req.account.type != "admin") return cb({ status: 401, message: "access denied, admins only" });
cb();
});
api.registerPostProcess(method, path, callback)
Register a callback to be called after a successful API action, status 200 only. To trigger this callback the primary response handler must return results using the api.sendJSON or api.sendFormatted methods.
The purpose is to perform some additional actions after the standard API call completed or to customize the result.
Note: the req.account, req.options, req.query objects may become empty if any callback decided to do some async action, they are explicitly emptied at the end of the request, in such cases make a copy of the needed objects if they will be needed
Example, just update the rows, it will be sent at the end of processing all post hooks
api.registerPostProcess('', '/data/', function(req, res, rows) {
rows.forEach(function(row) { ...});
});
Example, add data to the rows and return result after it
api.registerPostProcess('', '/data/', function(req, res, row) {
db.get("bk_user", { id: row.id }, function(err, rec) {
row.name = rec.name;
res.json(row);
});
return true;
});
api.registerCleanup(method, path, callback)
Register a cleanup callback that will be called at the end of a request, all registered cleanup callbacks will be called in the order of registration. At this time the result has been sent so connection is not valid anymore but the request and account objects are still available.
Example, do custom logging of all requests
api.registerCleanup('', '/data/', function(req, next) {
db.add("log", req.query, next);
});
api.registerSendStatus(method, path, callback)
Register a status callback that will be called when api.sendReply
or api.sendStatus
is called,
all registered callbacks will be called in the order of registration. At this time the result has NOT been sent yet so connection is
still valid and can be changed. The callback behavior is similar to the api.registerPostProcess
.
To indicate that this hook will send the result eventually it must return true, otherwise the result will be sent after all hooks are called
Example, do custom logging of all requests
api.registerSendStatus('', '/data/', function(req, res, data) {
logger.info("response", req.path, data);
});
api.registerSignature(method, path, callback)
The purpose of this hook is to manage custom signatures.
Example:
api.registerSignature('', '/', function(req, account, sig, cb) {
if (sig) {
if (invalid) sig = null;
} else {
sig = api.createSignature(.....);
}
cb(sig)
});
api.registerSecret(login, callback)
Register a secret generation method.
api.registerPreHeaders(req, callback)
Register a callback to be called just before HTTP headers are flushed, the callback may update response headers
api.scaleIcon(infile, options, callback)
Scale an image, return an err if it failed.
If image module is not set (default) then the input data is returned or saved as is.
The callback takes 3 arguments: function(err, data, info)
where data
will contain a new image data and info
is an object with the info about the new or unmodified image: ext, width, height.
api.iconPath(id, options)
Full path to the icon, perform necessary hashing and sharding, id can be a number or any string.
options.type
may contain special placeholders:
api.iconUrl(file, options)
Returns constructed icon url from the icon record
api.sendIcon(req, id, options)
Send an icon to the client, only handles files
api.putIcon(req, name, id, options, callback)
Store an icon for an account, the options are the same as for the iconPath method along with the options to build the icon absolute path
api.saveIcon(file, id, options, callback)
Save the icon data to the destination, if api.imagesS3 or options.imagesS3 is specified then place the image on the S3 drive. Store in the proper location according to the types for the given id, this function is used after downloading a new image or when moving images from other places. On success the callback will be called with the second argument set to the output file name where the image has been saved. Valid properties in the options:
- other properties are passed to the scaleIcon function

api.delIcon(id, options, callback)
Delete an icon for account, .type defines icon prefix
api.isIconUrl(url)
Return true if the given file or url points to an image
api.isIcon(buf)
Returns detected image type if the given buffer contains an image, it checks the header only
api.getSessionCookie(req, name)
Return named encrypted cookie
api.setSessionCookie(req, name, value)
Set a cookie by name and domain, the value is always encrypted
api.handleSessionSignature(req, callback)
Setup session cookies or access token for automatic authentication without signing, req must be complete with all required properties after successful authorization.
api.checkAccountType(account, type)
Return true if the current user belongs to the specified type, the account type may contain more than one type.
NOTE: after this call the type property is converted into an array
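For example (a sketch, the endpoint is hypothetical):

    api.registerPreProcess('', '/internal/', function(req, status, cb) {
        if (!api.checkAccountType(req.account, "admin")) return cb({ status: 403, message: "admins only" });
        cb();
    });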
api.setCurrentAccount(req, account)
Assign or clear the current account record for the given request, if account is null the account is cleared.
All columns in the auth table marked with the auth
property will be also set in the req.options.account
which is used
for permissions when the full account record is not available and only the options are passed.
api.getOptions(req, controls)
Convert query options into internal options, such options are prepended with the underscore to distinguish control parameters from the query parameters.
For security purposes this is the only place that translates special control query parameters into the options properties,
all the supported options are defined in the api.controls
and can be used by the apps freely but with caution. See registerControlParams
.
If controls is an object it will be used to define additional control parameters or override existing ones for this request only. The same rules as for registerControlParams apply.
api.getOptions(req, { count: { min: 5, max: 100 } })
api.getQuery(req, params, options)
Parse query parameters according to the params. Uses the global api defaults; if defaults are provided in the options as well they are all merged.
Returns a query object, an error message, or null
var query = api.getQuery(req, { q: { required: 1 } }, { null: 1 });
api.getTokenSecret(req)
Return a secret to be used for encrypting tokens, it uses the account property if configured or the global API token
to be used to encrypt data and pass it to the clients. -api-query-token-secret
can be configured and if a column in the bk_user
with such name exists it is used as a secret, otherwise the value of this property is used as a secret.
api.getResultPage(req, options, rows, info)
Return an object to be returned to the client as a page of result data with possibly next token if present in the info. This result object can be used for pagination responses.
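For example, a select endpoint might return a pagination-ready result (a sketch, the endpoint is hypothetical):

    api.app.all("/user/list", function(req, res) {
        var options = api.getOptions(req);
        db.select("bk_user", req.query, options, function(err, rows, info) {
            // info may contain the next token to be returned for the next page
            api.sendJSON(req, err, api.getResultPage(req, options, rows, info));
        });
    });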
api.getPublicColumns(table, options)
Columns that are allowed to be visible, used in select to limit number of columns to be returned by a query
options may be used to define the following properties:
skip - a regexp with names to be excluded as well
allow - a list of properties which can be checked along with the pub
property for a column to be considered public
disallow - a list of properties which if set will prevent a column to be returned, it is checked before the 'allow' rule
api.getPublicColumns("bk_user", { allow: ["admins"], skip: /device_id|0$/ });
api.cleanupResult(table, data, options)
Process records and keep only public properties as defined in the table columns. This method is supposed to be used in the post process callbacks after all records have been processed and are ready to be returned to the client, the last step would be to cleanup all non public columns if necessary.
table can be a single table name or a list of table names whose combined public columns need to be kept in the rows. The list of request tables is kept in req.options.cleanup which by default is empty.
By default primary keys are not kept and must be marked with the pub property in the table definition to be returned.
If any column is marked with the priv property this means never return that column in the result even for the owner of the record.
The options.isInternal allows to return everything except secure columns.
Columns with the pub_admin property will be returned only if the options contain isAdmin, same with the pub_staff property, it requires options.isStaff.
To return data based on the current account roles a special property in the format pub_types must be set as a string or an array with roles to be present in the current account type field. This is checked only if the column is allowed, this is an additional restriction, i.e. a column must be allowed by the pub property or another way.
To restrict by role define priv_types in a column with a list of roles which should be denied access to the field.
The options.pool property must match the actual rowset to be applied properly, in case the records have been retrieved from a different database pool.
The options.cleanup_strict will enforce that all columns not present in the table definition are skipped as well, by default all new columns or columns created on the fly are returned to the client. api.cleanupStrict can be configured globally.
The options.cleanup_rules can be an object with property names and the values 0, or 1 for pub, 2 for admin, 3 for staff.
The options.cleanup_copy means to return a copy of every modified record, the original data is preserved.
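For example, a sketch of a post process hook that strips non-public columns before the rows are sent:

    api.registerPostProcess('', '/data/select/bk_user', function(req, res, rows) {
        // keep only columns marked with pub, return pub_admin columns to admins
        api.cleanupResult("bk_user", rows, { isAdmin: api.checkAccountType(req.account, "admin") });
    });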
api.clearQuery(table, query, options)
Clear request query properties specified in the table definition or in custom schema.
The table
argument can be a table name or an object with properties as columns.
If options.filter
is not specified the query
will only keep existing columns for the given table.
If options.filter
is a list then the query
will delete properties for columns that contain any specified
property from the filter list. This is used for the bk_user table to remove properties that are supposed to be updated by admins only.
The filter will keep non-existent columns in the query
. To remove such columns when using the filter specify options.force
.
If a name in the filter is prefixed with ! then the logic is reversed, keep all except this property
If options.keep
is a regexp it will be used to keep matched properties by name in the query
regardless of any condition.
If options.clear
is a regexp it will be used to remove matched properties by name in the query
.
Example:
api.clearQuery("bk_user", req.query)
api.clearQuery("bk_user", req.query, "internal")
api.clearQuery("bk_user", req.query, { filter: "internal" })
api.clearQuery("bk_user", req.query, { filter: ["internal"] })
api.clearQuery("bk_user", req.query, { filter: ["!pub"] })
api.clearQuery("bk_user", req.query, { filter: ["internal","priv"] })
api.clearQuery("bk_user", req.query, { filter: ["internal","!priv"], keep: /^__/ })
api.clearQuery({ name: {}, id: { admin: 1 } }, req.query, { filter: ["internal"] })
api.handleWebSocketUpgrade(req, socket, head)
Check if the request is allowed to upgrade to Websocket
api.handleWebSocketConnect(ws, req)
Wrap external WebSocket connection into the Express routing, respond on backend command
api.handleWebSocketRequest(ws, data)
Wrap a WebSocket message into an HTTP request to be processed by the Express routes
api.wsSet(type, req, value)
Update Websocket connection properties:
api.wsSend(wsid, msg)
Send to a websocket inside an api server directly
api.wsNotify(options, msg, callback)
Broadcast a message according to the options, if no websocket queue is defined send directly using wsBroadcast
api.wsBroadcast(options, msg)
Send a message to all websockets inside an api process that match the criteria from the options:
- lib.isMatched is used for comparison
- lib.isMatched is used for comparison
- a table for api.cleanupResult, if it is an array then the first item is a table and the second item is the property name inside the msg to be cleaned up only, eg. cleanup: ["bk_user","user"]. All properties starting with is or cleanup_ will be passed to the cleanupResult.
- a module.method to run the same way as the preprocess function, this is a more reliable way to use preprocess with wsNotify
This is a skeleton module to be extended by the specific application logic. It provides all callbacks and hooks that are called by the core backend modules during different phases, like initialization, shutting down, etc...
It should be used for custom functions and methods to be defined, the app
module is always available.
All app modules in the modules/ subdirectory use the same prototype, i.e. all hooks are available for custom app modules as well.
app.configure(options, callback)
Called after all config files are loaded and command line args are parsed, home directory is set but before the db is initialized, the primary purpose of this early call is to setup environment before connecting to the database. This is called regardless of the server to be started and intended to initialize the common environment before the database and other subsystems are initialized.
app.configureModule(options, callback)
Called after core.init has been initialized successfully, this can be redefined in the applications to add additional init steps that all processes require to have. All database pools and other configuration are ready at this point. This hook is called regardless of what kind of server is about to start, it is always called before starting a server or shell.
app.configureMiddleware(options, callback)
This handler is called during the Express server initialization just after the security middleware.
NOTE: api.app
refers to the Express instance.
app.configureWeb(options, callback)
This handler is called after the Express server has been setup and all default API endpoints initialized but the Web server is not ready for incoming requests yet. This handler can setup additional API endpoints, add/modify table descriptions.
NOTE: api.app
refers to the Express instance
app.shutdownWeb(options, callback)
Perform shutdown sequence when a Web process is about to exit
NOTE: api.app
refers to the Express instance
app.configureMaster(options, callback)
This handler is called during the master server startup, this is the process that monitors the worker jobs and performs jobs scheduling
app.configureServer(options, callback)
This handler is called during the Web server startup, this is the master process that creates Web workers for handling Web requests, this process interacts with the Web workers via IPC sockets between processes and relaunches them if any Web worker dies.
app.configureWorker(options, callback)
This handler is called on job worker instance startup after the tables are initialized and it is ready to process the job
app.shutdownWorker(options, callback)
Perform last minute operations inside a worker process before exit, the callback must be called eventually which will exit the process. This method can be overridden to implement a custom worker shutdown procedure in order to finish pending tasks like network calls.
app.configureMonitor(options, callback)
This callback is called when the monitor process is ready, no other code is supposed to run inside the monitor, but in case it is needed this is the hook to be used.
app.configureShell(options, callback)
This callback is called by the shell process to set up additional commands or to execute a command which is not supported by the standard shell. Setting options.done to 1 will stop the shell, this is a signal that the command has already been processed.
Config parameters
- auth-table, descr: "Table to use for user accounts"
- auth-err-(.+), descr: "Error messages for various cases"
- auth-admin-roles, type: "list", descr: "List of special super admin roles"
- auth-sigversion, type: "int", descr: "Signature version for secrets"
- auth-hash, descr: "Hashing method to use by default: bcrypt, argon2, none"
- auth-bcrypt, type: "int", min: 12, descr: "Number of iterations for bcrypt"
- auth-argon2, type: "map", maptype: "auto", nocamel: 1, descr: "Argon2 parameters, ex: type:2,memoryCost:1,hashLength:32"
- auth-max-length, type: "int", descr: "Max login and name length"
- auth-users, type: "json", logger: "error", descr: "An object with users"
- auth-users-file, descr: "A JSON file with a list of users"

auth.loadUsers(callback)
Load users from a JSON file, only add or update records
auth.prepareSecret(query, options, callback)
If specified in the options, prepare credentials to be stored in the db, if no error occurred return null, otherwise an error object
auth.checkSecret(user, password, callback)
Verify an existing user record with the given password
auth.get(query, options, callback)
Returns an account record by login or id, to make use of a cache add to the config db-cache-keys-bk_user-id=id
auth.add(query, options, callback)
Registers a new account, returns the new record in the callback; when options.isInternal is true all properties are allowed to be set, otherwise internal properties will not be added
auth.update(query, options, callback)
Updates an existing account by login or id, if options.isInternal is true then all properties are allowed to be updated, returns the new record in the callback
auth.del(query, options, callback)
Deletes an existing account by login or id, no admin checks, returns the old record in the callback
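A usage sketch in the shell (the user properties are illustrative):

    auth.add({ login: "john", secret: "Secret1!", name: "John" }, { isInternal: 1 }, function(err) {
        if (err) return logger.error("add:", err);
        auth.get({ login: "john" }, {}, function(err, user) {
            console.log(user);
        });
    });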
AWS Cloud API interface
Config parameters
- aws-key, descr: "AWS access key"
- aws-secret, descr: "AWS access secret"
- aws-token, descr: "AWS security token"
- aws-region, descr: "AWS region", pass: 1
- aws-zone, descr: "AWS availability zone"
- aws-meta, type: "bool", descr: "Retrieve instance metadata, 0 to disable"
- aws-sdk-profile, descr: "AWS SDK profile to use when reading credentials file"
- aws-sns-app-arn, descr: "SNS Platform application ARN to be used for push notifications"
- aws-key-name, descr: "AWS instance keypair name for remote job instances or other AWS commands"
- aws-elb-name, descr: "AWS ELB name to be registered with on start up or other AWS commands"
- aws-target-group, descr: "AWS ELB target group to be registered with on start up or other AWS commands"
- aws-elastic-ip, descr: "AWS Elastic IP to be associated on start"
- aws-host-name, type: "list", descr: "List of hosts to update in the Route53 zone with the current private IP address, hosts must be in FQDN format, supports @..@ core.instance placeholders"
- aws-iam-profile, descr: "IAM instance profile name for instances or commands"
- aws-image-id, descr: "AWS image id to be used for instances or commands"
- aws-subnet-id, descr: "AWS subnet id to be used for instances or commands"
- aws-vpc-id, descr: "AWS VPC id to be used for instances or commands"
- aws-group-id, array: 1, descr: "AWS security group(s) to be used for instances or commands"
- aws-instance-type, descr: "AWS instance type to launch on demand"
- aws-account-id, descr: "AWS account id if not running on an instance"
- aws-eni-id, type: "list", descr: "AWS Elastic Network Interfaces to attach on start, format is: eni[:index],eni..."
- aws-config-parameters, descr: "Prefix for AWS Config Parameters Store to load and parse as config before initializing the database pools, example: /bkjs/config/"
- aws-set-parameters, type: "list", descr: "AWS Config Parameters Store to set on start, supports @..@ core.instance placeholders: format is: path:value,...."
- aws-conf-file, descr: "S3 url for config file to download on start"
- aws-conf-file-interval, type: "int", descr: "Load S3 config file every specified interval in minutes"

aws.configure(options, callback)
Initialization of metadata
aws.configureServer(options, callback)
Execute on Web server startup
aws.configureMaster(options, callback)
Execute on master server startup
aws.configureJob(options, callback)
Process AWS alarms and state notifications, if such a job is pulled from the SQS queue it is handled here and never gets to the jobs. SNS alarms or EventBridge events must use an SQS queue as the target.
aws.queryIAM(action, obj, options, callback)
AWS IAM API request
aws.querySTS(action, obj, options, callback)
AWS STS API request
aws.queryCFN(action, obj, options, callback)
AWS CFN API request
aws.queryElastiCache(action, obj, options, callback)
AWS ElastiCache API request
aws.queryAS(action, obj, options, callback)
AWS Autoscaling API request
aws.queryRekognition(action, obj, options, callback)
Make a request to the Rekognition service
aws.querySSM(action, obj, options, callback)
AWS SSM API request
aws.queryACM(action, obj, options, callback)
AWS ACM API request
aws.queryComprehend(action, obj, options, callback)
AWS Comprehend API request
aws.queryTranscribe(action, obj, options, callback)
AWS Transcribe API request
aws.queryECS(action, obj, options, callback)
AWS ECS API request
aws.queryECR(action, obj, options, callback)
AWS ECR API request
aws.getTagValue(obj, key)
Returns a tag value by key, default key is Name
aws.queryCW(action, obj, options, callback)
AWS CloudWatch API request
aws.queryCWL(action, obj, options, callback)
AWS CloudWatch Log API request
aws.cwPutMetricAlarm(options, callback)
Creates or updates an alarm and associates it with the specified Amazon CloudWatch metric. The options specify the metric and alarm parameters such as the metric name (ex. CPUUtilization), the namespace (ex. AWS/EC2), the comparison operator (ex. >=), the statistic (ex. Average), the period (ex. 60), the number of evaluation periods (ex. 15) and the threshold (ex. 90).
aws.cwPutMetricData(namespace, data, options, callback)
Publishes metric data points to Amazon CloudWatch. The arguments specify the namespace (ex. AWS) and the data points.
The options can specify the following:
aws.cwListMetrics(options, callback)
Return metrics for the given query, the options can specify the following:
aws.cwGetMetricData(options, callback)
Return collected metric statistics
Options:
Example:
aws.cwGetMetricData({ age: 5, metrics: [{ name: "NetworkOut", label: "Traffic", stat: "Average", dimensions: { InstanceId: "i-1234567" } } ] }, lib.log)
aws.cwlFilterLogEvents(options, callback)
Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream. Options:
aws.cwPutLogEvents(options, callback)
Store events in the Cloudwatch Logs. Options:
aws._queryDDB(target, service, action, obj, options, callback)
DynamoDB requests
aws.toDynamoDB(value, level)
Convert a Javascript object into DynamoDB object
aws.fromDynamoDB(value, level)
Convert a DynamoDB object into Javascript object
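For illustration, using the standard DynamoDB attribute encoding (the exact output shape shown in the comments is an assumption):

    aws.toDynamoDB({ id: 1, name: "john" })
    // -> { id: { N: "1" }, name: { S: "john" } }
    aws.fromDynamoDB({ id: { N: "1" }, name: { S: "john" } })
    // -> { id: 1, name: "john" }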
aws.queryExpression(params, obj, options, join)
Build a condition expression for the given object, all properties in the obj are used
aws.ddbListTables(options, callback)
Return list of tables in .TableNames property of the result
Example:
{ TableNames: [ name, ...] }
aws.ddbDescribeTable(name, options, callback)
Return table definition and parameters in the result structure with property of the given table name
Example:
{ name: { AttributeDefinitions: [], KeySchema: [] ...} }
aws.ddbCreateTable(name, attrs, options, callback)
Create a table
Example:
ddbCreateTable('users', { id: 'S', mtime: 'N', name: 'S'},
{ keys: ["id", "name"],
local: { mtime: { mtime: "HASH" } },
global: { name: { name: 'HASH', ProvisionedThroughput: { ReadCapacityUnits: 50 } } },
projections: { mtime: ['gender','age'],
name: ['name','gender'] },
stream: "NEW_IMAGE",
readCapacity: 10,
writeCapacity: 10 });
aws.ddbUpdateTable(options, callback)
Update a table's provisioned throughput settings, options is used instead of a table name so this call can be used directly in cron jobs to adjust provisioned throughput on demand. Options must provide the following properties:
Example
aws.ddbUpdateTable({ name: "users", add: { name_id: { name: "S", id: 'N', readCapacity: 20, writeCapacity: 20, projections: ["mtime","email"] } })
aws.ddbUpdateTable({ name: "users", add: { name: { name: "S", readCapacity: 20, writeCapacity: 20, projections: ["mtime","email"] } })
aws.ddbUpdateTable({ name: "users", del: "name" })
aws.ddbUpdateTable({ name: "users", update: { name: { readCapacity: 10, writeCapacity: 10 } })
Example of crontab job in etc/crontab:
[
{ "type": "server", "cron": "0 0 1 * * *", "job": { "aws.ddbUpdateTable": { "name": "bk_user", "readCapacity": 1000, "writeCapacity": 1000 } } },
{ "type": "server", "cron": "0 0 6 * * *", "job": { "aws.ddbUpdateTable": { "name": "bk_user", "readCapacity": 2000, "writeCapacity": 2000 } } }
]
aws.ddbUpdateTimeToLive(options, callback)
Update TTL attribute. The options properties:
aws.ddbDescribeTimeToLive(name, options, callback)
Returns status of Time to live attribute for a table
aws.ddbDeleteTable(name, options, callback)
Remove a table from the database.
By default the callback will be called only after the table is deleted, specifying options.nowait will return immediately
aws.ddbWaitForTable(name, item, options, callback)
Call the callback after a specified period of time or when the table status becomes different from the given waiting status. If options.waitTimeout is not specified the callback is called immediately. options.waitStatus is checked if given and keeps waiting while the status is equal to it. options.waitDelay specifies how often to request new status, default is 250ms.
aws.ddbPutItem(name, item, options, callback)
Put or add an item
Example:
ddbPutItem("users", { id: 1, name: "john", mtime: 11233434 }, { expected: { name: null } })
aws.ddbUpdateItem(name, keys, item, options, callback)
Update an item
The options may contain the following properties:
- options.ops - operators for the properties, used the same way as for queries
- options.returning - * or new means ALL_NEW, old means ALL_OLD, updated means UPDATED_NEW, old_updated means UPDATED_OLD
Example:
ddbUpdateItem("users", { id: 1, name: "john" }, { gender: 'male', icons: '1.png' }, { action: { icons: 'add' }, expected: { id: 1 }, returning: "*" })
ddbUpdateItem("users", { id: 1, name: "john" }, { gender: 'male', icons: '1.png' }, { action: { icons: 'incr' }, expected: { id: null } })
ddbUpdateItem("users", { id: 1, name: "john" }, { gender: 'male', icons: '1.png', num: 1 }, { action: { num: 'add', icons: 'add' }, expected: { id: null, num: 0 }, ops: { num: "gt" } })
aws.ddbDeleteItem(name, keys, options, callback)
Delete an item from a table
Example:
ddbDeleteItem("users", { id: 1, name: "john" }, {})
aws.ddbBatchWriteItem(items, options, callback)
Update items from the list at the same time
Example:
{ table: [ { put: { id: 1, name: "tt" } }, { del: { id: 2 } }] }
aws.ddbBatchGetItem(items, options, callback)
Retrieve all items for given list of keys
Example:
{ users: { keys: [{ id: 1, name: "john" },{ id: .., name: .. }], select: ['name','id'], consistent: true }, ... }
aws.ddbGetItem(name, keys, options, callback)
Retrieve one item by primary key
Example:
ddbGetItem("users", { id: 1, name: "john" }, { select: 'id,name' })
aws.ddbQueryTable(name, condition, options, callback)
Query on a table, return all matching items
Example:
aws.ddbQueryTable("users", { id: 1, name: "john" }, { select: 'id,name', ops: { name: 'gt' } })
aws.ddbQueryTable("users", { id: 1, name: "john", status: "ok" }, { keys: ["id"], select: 'id,name', ops: { name: 'gt' } })
aws.ddbQueryTable("users", { id: 1 }, { expr: "status=:s", values: { s: "status" } })
aws.ddbScanTable(name, condition, options, callback)
Scan a table for all matching items
Example:
aws.ddbScanTable("users", { id: 1, name: 'a' }, { ops: { name: 'gt' }})
aws.ddbScanTable("users", "id=:id AND name=:name", { values: { id: 1, name: 'a' } });
aws.ddbTransactWriteItems(items, options, callback)
Update items from the list at the same time in one transaction, on any failure everything is rolled back
Example:
{ op: "put": table: "table-name", obj: { id: 1, name: "tt" } },
{ op: "del": table: "table-name", obj: { id: 2 } },
{ op: "update": table: "table-name", obj: { id: 1, name: "test" }, options: { expected: { status: "ok" } } },
{ op: "check": table: "table-name", obj: { id: 1 }, options: { expected: { status: "ok" } } }
aws.queryEC2(action, obj, options, callback)
AWS EC2 API request
aws.ec2RunInstances(options, callback)
Run AWS instances, supports all native EC2 parameters with the first letter capitalized but also accepts simplified parameters in the options:
- Name - instance name tag, any occurrences of %i will be replaced with the instance index
The callback will take 3 arguments: callback(err, rc, info) where info will contain properties that can be used by aws.ec2PrepareInstance
aws.ec2AfterRunInstances(options, callback)
Perform the final tasks after an instance has been launched, like waiting for status, assigning an Elastic IP or tags.
aws.ec2WaitForInstance(instanceId, status, options, callback)
Check an instance status and keep waiting until it equals the expected status or a timeout occurs.
The status can be one of: pending | running | shutting-down | terminated | stopping | stopped
The options can specify the following:
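For example, a hedged sketch (the waitTimeout and waitDelay option names are assumptions mirroring aws.ddbWaitForTable above):
aws.ec2WaitForInstance("i-1234567", "running", { waitTimeout: 300000, waitDelay: 5000 }, lib.log)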
aws.ec2DescribeSecurityGroups(options, callback)
Describe security groups; optionally, if an options.filter regexp is provided, limit the result to the matched groups only.
Returns the list of groups to the callback
aws.ec2DescribeInstances(options, callback)
Describe instances according to the query filters, returns a list with instances, the following properties can be used:
aws.ec2CreateTags(id, name, options, callback)
Create tags for a resource. The name is a string, an array or an object with tags. The options may also contain a tags property which is an object with tag keys and values
Example
aws.ec2CreateTags("i-1234","My Instance", { tags: { tag2 : "val2", tag3: "val3" } } )
aws.ec2CreateTags("i-1234", { tag2: "val2", tag3: "val3" })
aws.ec2CreateTags("i-1234", [ "tag2", "val2", "tag3", "val3" ])
aws.ec2AssociateAddress(instanceId, elasticIp, options, callback)
Associate an Elastic IP with an instance. Default behaviour is to reassociate if the EIP is taken. The options can specify the following:
aws.ec2CreateImage(options, callback)
Create an EBS image from the given instance or the currently running instance
aws.ec2DeregisterImage(ami_id, options, callback)
Deregister an AMI by id. If options.snapshots
is set, then delete all snapshots for this image as well
aws.ec2AttachNetworkInterface(eniId, instance, options, callback)
Attach the given ENIs in eniId to the instance, each ENI can be specified as 'eni:idx' where idx is the interface index
aws.elb2RegisterInstances(target, instance, options, callback)
Register an instance(s) with ELB, instance can be one id or a list of ids
aws.ssmSendCommand(cmds, instances, options, callback)
Run a shell command
aws.ssmWaitForCommand(cmdId, instanceId, options, callback)
Return command details
aws.readCredentials(profile, callback)
Read the key and secret from the AWS SDK credentials file; if no profile is given in the config or command line only the default profile will be loaded.
aws.readConfig(callback)
Read and apply config from S3 bucket
aws.getInstanceMeta(path, callback)
Retrieve instance meta data
aws.getInstanceCredentials(path, callback)
Retrieve instance credentials using EC2 instance profile and setup for AWS access
aws.getInstanceInfo(options, callback)
Retrieve instance launch index from the meta data if running on AWS instance
aws.getInstanceDetails(options, callback)
Get the current instance details if not retrieved already in aws.instance
aws.stsAssumeRole(options, callback)
Assume a role and return new credentials that can be used in other API calls
aws.detectLabels(name, options, callback)
Detect image features using the AWS Rekognition service, the name can be a Buffer, a local file or a URL to the S3 bucket. In the latter case
the url can be just a path to the file inside a bucket if options.bucket is specified, otherwise it must be a public S3 url with the bucket name
being the first part of the host name. For CDN/CloudFront cases use the options.bucket option.
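Example, a sketch using the options.bucket form described above (the file and bucket names are hypothetical):
aws.detectLabels("images/face1.jpg", { bucket: "my-uploads" }, lib.log)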
aws.listCertificates(options, callback)
Return a list of certificates; status can limit which certs to return: PENDING_VALIDATION | ISSUED | INACTIVE | EXPIRED | VALIDATION_TIMED_OUT | REVOKED | FAILED
aws.parseXMLResponse(err, params, options, callback)
Parse AWS response and try to extract error code and message, convert XML into an object.
aws.getServiceRegion(service, region)
Check for supported regions per service, return the first one if the given region is not supported
aws.copyCredentials(obj, options)
Copy all credentials properties from the options into the obj
aws.querySign(region, service, host, method, path, body, headers, credentials, options)
Build version 4 signature headers
aws.queryPrepare(action, version, obj, options)
Return a request object ready to be sent to AWS, properly formatted
aws.querySigner()
It is called in the context of a http request
aws.queryAWS(region, service, proto, host, path, obj, options, callback)
Make AWS request, return parsed response as Javascript object or null in case of error
aws.queryEndpoint(service, version, action, obj, options, callback)
AWS generic query interface
aws.queryRoute53(method, path, data, options, callback)
Make a request to Route53 service
aws.route53List(options, callback)
List all zones
aws.route53Get(options, callback)
Return a zone by domain or id
aws.route53Change(names, options, callback)
Create or update a host in the Route53 database.
names is a host name to be set with the current IP address or a list with objects in the format:
[ { name: "..", value: "1.1.1.1", type: "A", ttl: 300, zoneId: "Id", alias: "dnsname", hostedzone: "/hostedzone/id" } ...]
The options may contain the following:
aws.signS3(method, bucket, path, body, options)
Sign an S3 AWS request, returns the url to be sent to the S3 server, options will have all updated headers to be sent as well
aws.queryS3(bucket, path, options, callback)
S3 requests. Options may contain the following properties:
aws.s3List(path, options, callback)
Retrieve a list of files from S3 bucket, only files inside the path will be returned
aws.s3GetFile(path, options, callback)
Retrieve a file from S3 bucket, root of the path is a bucket, path can have a protocol prepended like s3://, it will be ignored
aws.s3PutFile(path, file, options, callback)
Upload a file to S3 bucket, file
can be a Buffer or a file name
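Example, a sketch of both transfers (the bucket and file names are hypothetical):
aws.s3PutFile("my-bucket/backups/data.json", Buffer.from("{}"), {}, lib.log)
aws.s3GetFile("s3://my-bucket/backups/data.json", {}, lib.log)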
aws.s3CopyFile(path, source, options, callback)
Copy existing S3 file, source must be in the format bucket/path
aws.s3ParseUrl(link)
Parse an S3 URL and return an object with bucket and path
aws.s3Proxy(res, bucket, file, options, callback)
Proxy a file from S3 bucket into the existing HTTP response res
aws.querySES(action, obj, options, callback)
AWS SES API request
aws.sesSendEmail(to, subject, body, options, callback)
Send an email via SES. The following options are supported:
aws.sesSendRawEmail(body, options, callback)
Send raw email. The following options are accepted:
aws.querySNS(action, obj, options, callback)
AWS SNS API request
aws.snsCreatePlatformEndpoint(token, options, callback)
Creates an endpoint for a device and mobile app on one of the supported push notification services, such as GCM and APNS.
The following properties can be specified in the options:
If not specified, the default -sns-app-arn config parameter will be used.
All capitalized properties in the options will be passed as is. The callback will be called with an error, if any, and the endpoint ARN
aws.snsSetEndpointAttributes(arn, options, callback)
Sets the attributes for an endpoint for a device on one of the supported push notification services, such as GCM and APNS.
The following properties can be specified in the options:
aws.snsDeleteEndpoint(arn, options, callback)
Deletes the endpoint from Amazon SNS.
aws.snsPublish(arn, msg, options, callback)
Sends a message to all of a topic's subscribed endpoints or to a mobile endpoint. If msg is an object, then it will be pushed as JSON. The options may take the following properties:
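Example (the topic ARN is hypothetical; an object is pushed as JSON as noted above):
aws.snsPublish("arn:aws:sns:us-east-1:123456789012:alerts", { alert: "disk full" }, {}, lib.log)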
aws.snsCreateTopic(name, options, callback)
Creates a topic to which notifications can be published. The callback returns topic ARN on success.
aws.snsSetTopicAttributes(arn, options, callback)
Updates the topic attributes. The following options can be used:
aws.snsDeleteTopic(arn, options, callback)
Deletes the topic from Amazon SNS.
aws.snsSubscribe(arn, endpoint, options, callback)
Subscribe an endpoint to the topic so it receives published notifications. The callback returns the subscription ARN on success; if the topic requires confirmation the arn returned will be null and a token will be sent to the endpoint for confirmation.
aws.snsConfirmSubscription(arn, token, options, callback)
Verifies an endpoint owner's intent to receive messages by validating the token sent to the endpoint by an earlier Subscribe action. If the token is valid, the action creates a new subscription and returns its Amazon Resource Name (ARN) in the callback.
aws.snsSetSubscriptionAttributes(arn, options, callback)
Updates the subscription attributes. The following options can be used:
aws.snsUnsubscribe(arn, options, callback)
Unsubscribe an endpoint from the topic.
aws.snsListTopics(options, callback)
Return the list of topics.
aws.querySQS(action, obj, options, callback)
AWS SQS API request
aws.sqsReceiveMessage(url, options, callback)
Receive message(s) from the SQS queue, the callback will receive a list with messages if no error. The following options can be specified:
aws.sqsSendMessage(url, body, options, callback)
Send a message to the SQS queue. The options can specify the following:
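Example, a send/receive round trip sketch (the queue URL is hypothetical):
aws.sqsSendMessage("https://sqs.us-east-1.amazonaws.com/123456789012/jobs", "hello", {}, lib.log)
aws.sqsReceiveMessage("https://sqs.us-east-1.amazonaws.com/123456789012/jobs", {}, (err, msgs) => {
    if (!err) console.log(msgs);
});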
The primary object containing all config options and common functions
Config parameters
- help, type: callback - Print help and exit
- log, type: callback, pass: 2 - Set debugging level to any of the supported logger levels
- log-file, type: callback, pass: 1 - Log to a file, if not specified the default logfile is used, disables syslog
- log-ignore, type: regexp - Regexp with property names which must not be exposed in the log when using a custom logger inspector
- log-inspect, type: callback - Install custom secure logger inspection instead of util.inspect
- log-filter, type: callback, pass: 1 - Enable debug filters, the format is: label,... to enable, and !label,... to disable. Only the first argument is used for the label in logger.debug
- no-log-filter, type: bool, pass: 1 - Clear all log filters
- syslog, type: callback, pass: 1 - Log messages to syslog, pass 0 to disable, 1 or a url (tcp|udp|unix):[//host:port][/path]?[facility=F][&tag=T][&retryCount=N][&bsd=1][&rfc5424=1][&rfc3164=1]...
- syslog-options, type: callback, pass: 1 - Update syslog options, the format is a map: name:val,...
- console, type: callback, pass: 1 - All logging goes to the console resetting all previous log related settings, this is used in the development mode mostly
- home, type: callback, pass: 2 - Specify the home directory for the server, the server will try to chdir there or exit if it is not possible, the directory must exist
- conf-file, pass: 1 - Name of the config file to be loaded instead of the default etc/config, can be a relative or absolute path
- err-file, type: path, pass: 1 - Path to the error log file where the daemon will put app errors and crash stacks
- etc-dir, type: path, pass: 1 - Path where to keep config files
- tmp-dir, type: path - Path where to keep temp files
- spool-dir, type: path - Path where to keep modifiable files
- log-dir, type: path, pass: 1 - Path where to keep other log files, log-file and err-file are not affected by this
- files-dir, type: path - Path where to keep uploaded files
- images-dir, type: path - Path where to keep images
- web-path, type: path, array - Path where to keep web pages and other static files to be served by the web servers
- views-path, type: path, array - Path where to keep virtual hosts web pages, every subdirectory name is a host name to match with the Host: header, www. is always stripped before matching the vhost directory
- modules-path, type: path, array, pass: 1 - Directory from where to load modules, these are the backendjs modules in the same format and conventions as regular node.js modules, the file format is NAME_{web,worker,shell}.js. The modules can load any other files or directories, this is just an entry point
- locales-path, type: path, array - Path where to keep locale translations
- role - Override server roles, this may have very strange side effects and should only be used for testing purposes
- umask, pass: 1 - Permissions mask for new files, calls system umask on startup, if not specified the current umask is used
- force-uid, type: list, pass: 1 - Drop privileges if running as root by all processes as early as possible, this requires uid being set to a non-root user. A convenient switch to start the backend without using any other tools like su or sudo.
- port, type: number - Port to listen for the HTTP server, this is the global default
- bind - Bind to this address only, if not specified listen on all interfaces
- backlog, type: int - The maximum length of the queue of pending connections, used by the HTTP server in listen
- ws-port, type: number - Port to listen for the WebSocket server, it can be the same as the HTTP/S ports to co-exist on existing web servers
- ws-bind - Bind to this address only for WebSocket, if not specified listen on all interfaces, only when the port is different from the existing web ports
- ws-ping, type: number - How often to ping WebSocket connections
- ws-path, type: regexp - WebSockets will be accepted only if the request path matches the pattern
- ws-origin, type: regexp - WebSockets will be accepted only if the request Origin: header matches the pattern
- ws-queue - A queue where to publish messages for websockets, the API process will listen for messages and proxy them to all matching connected websockets
- ssl-port, type: number - Port to listen for the HTTPS server, this is the global default
- ssl-bind - Bind to this address only for the HTTPS server, if not specified listen on all interfaces
- ssl-key, type: file - Path to the SSL private key
- ssl-cert, type: file - Path to the SSL certificate
- ssl-pfx, type: file - A string or Buffer containing the private key, certificate and CA certs of the server in PFX or PKCS12 format. (Mutually exclusive with the key, cert and ca options.)
- ssl-ca, type: file, array - An array of strings or Buffers of trusted certificates in PEM format. If this is omitted several well known root CAs will be used, like VeriSign. These are used to authorize connections.
- ssl-passphrase - A string of passphrase for the private key or pfx
- ssl-crl, type: file, array - Either a string or list of strings of PEM encoded CRLs (Certificate Revocation List)
- ssl-ciphers - A string describing the ciphers to use or exclude. Consult http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT for details on the format
- ssl-request-cert, type: bool - If true the server will request a certificate from clients that connect and attempt to verify that certificate
- ssl-reject-unauthorized, type: bool - If true the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if ssl-request-cert is true
- concurrency, type: number - How many simultaneous tasks to run at the same time inside one process, this is used by the async module only to perform several tasks at once, this is not multithreading and only makes sense for I/O related tasks
- daemon, type: none - Daemonize the process, go to the background, can be specified only in the command line
- shell, type: none - Run the command line shell, load the backend into memory and prompt for commands, can be specified only in the command line
- monitor, type: none - For production use, monitors the master and Web server processes and restarts them if crashed or exited, can be specified only in the command line
- master, type: none - Start the master server, can be specified only in the command line, this process handles job schedules and starts the Web server, keeps track of failed processes and restarts them
- web, type: callback - Start Web server processes, spawn workers that listen on the same port, for use without the master process which starts Web servers automatically
- salt, type: callback, pass: 1 - Set a random or specific salt value to be used for consistent suuid generation
- app-name, type: callback, pass: 1 - Set appName and version explicitly and skip reading them from package.json, it can be just a name or name-version
- app-version, pass: 1 - Set appVersion explicitly and skip reading it from package.json
- app-descr, pass: 1 - Set appDescr explicitly and skip reading it from package.json
- app-package, pass: 1 - NPM package containing the application package.json, it will be added to the list of package.json files for app name and version discovery. The package must be included in the -preload-packages list.
- instance-(.+), pass: 1 - Set instance properties explicitly: tag, region, zone
- run-mode, dns: 1, pass: 1 - Running mode for the app, used to separate different running environments and configurations
- no, type: callback - List of subsystems to disable instead of using many individual -no-NNN parameters
- no-monitor, type: none - Disable the monitor process, for cases when the master will be monitored by another tool like monit...
- no-master, type: none - Do not start the master process
- no-watch, type: bool - Disable the source code watcher
- no-web, type: bool - Disable Web server processes, without this flag Web servers start by default
- no-jobs, type: bool, pass: 1 - Do not initialize jobs processing
- no-ipc, type: bool, pass: 1 - Do not initialize IPC drivers
- no-db, type: bool, pass: 1 - Do not initialize DB drivers
- no-dbconf, type: bool, pass: 1 - Do not retrieve config from the DB
- no-dns, type: bool, pass: 1 - Do not use DNS configuration during the initialization
- no-modules, type: bool, pass: 1 - Do not load any external modules
- no-packages, type: bool, pass: 1 - Do not load any NPM packages
- no-configure, type: bool, pass: 1 - Do not run configure hooks during the initialization
- repl-port-([a-z]+)$, type: number - Base REPL port for a process role (server, master, web, worker), if specified it initializes REPL in the processes, for workers the port is computed by adding the worker id to the base port, for example if -repl-port-web 2090 is specified then a web worker will use any available 2091,2092...
- repl-bind - Listen only on the specified address for the REPL server in the master process
- repl-file - User specified file for the REPL history
- repl-size, type: int - Max size to read on start from the end of the history file
- worker, type: bool - Set this process as a worker even if it is actually a master, this skips some initializations
- preload-packages, type: list, pass: 1 - NPM packages to load on startup, the modules, locales, views, web subfolders from each package will be added automatically to the system paths, modules will be loaded if present, the config file in the etc subfolder will be parsed if present
- preload-modules, type: regexp, pass: 1 - Modules to preload first from any modules/ folders including the system folder, this can be used to preload default bkjs system modules
- exclude-modules, type: regexp, pass: 1 - Modules not to load, the whole path is checked
- depth-modules, type: int, pass: 1 - How deep to go looking for modules, it uses lib.findFileSync to locate all .js files
- user-agent, array - Add an HTTP user-agent header to be used in HTTP requests, for scrapers or other HTTP requests that need to pretend to come from Web browsers
- backend-host - Host of the master backend, can be used for backend node communications using core.sendRequest function calls with relative URLs, also used in tests
- backend-login - Credentials login for the master backend access when using core.sendRequest
- backend-secret - Credentials secret for the master backend access when using core.sendRequest
- host-name, type: callback - Hostname/domain to use for communications, default is the current domain of the host machine
- config-domain - Domain to query for configuration TXT records, must be specified to enable DNS configuration
- config-roles, type: list - Roles to assume when pulling config parameters from the config, used in config files or the config database
- locales, type: list - A list of locales to load from the locales/ directory, only the language name must be specified, example: en,es. It enables internal support for the res.__ and req.__ methods that can be used for translations, for each request the internal language header will be honored first, then HTTP Accept-Language
- no-locales, type: bool - Do not load locales on start
- email-from - Email address to be used when sending emails from the backend
- email-transport - Send emails via supported transports: ses:, sendgrid://?key=SG, if not set the default SMTP settings are used
- sendgrid-key - SendGrid API key
- smtp-(.+) - SMTP server parameters: user, password, host, ssl, tls... see nodemailer for details
- tmp-watcher-(.+), type: json - How long to keep files per subdirectory, age is in seconds, ex: { path: P, age: S, include: rx, exclude: rx, depth: 1, nodirs: 1 }
- stop-on-error, type: bool, pass: 1 - Exit the process on any error when loading modules, for dev purposes
- allow-methods-(.+), type: regexp, pass: 1 - Modules that are allowed to run methods by name, useful to restrict configure methods. Ex: -allow-methods-configureWeb app
core.init(options, callback)
Main initialization, must be called prior to performing any actions.
If options are given they may contain the following properties:
core.run(options, callback)
Run any backend function after the environment has been initialized, this is to be used in shell scripts. core.init will parse all command line arguments; the simplest case is to run from the /data directory and it will use the default environment, or pass -home dir so the script will reuse the same config and paths as the server. A context can be specified for the callback, otherwise it runs in the core context.
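A minimal shell-script sketch (reading core.home here is just for illustration):
const bkjs = require("backendjs");
bkjs.core.run(() => {
    console.log("initialized, home:", bkjs.core.home);
    bkjs.core.exit(0);
});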
core.exit(code, msg)
Exit the process with possible message to be displayed and status code
core.setHome(home)
Switch to a new home directory, exit if we cannot; this is important for relative paths to work if used. There is no need to do this in a worker because we already switched to the home directory in the master and all child processes inherit the current directory. Important note: if run with a combined server or as a daemon then this MUST be an absolute path, otherwise calling it in the spawned web master will fail because the home has already been set and a relative path will not work after that.
core.loadConfig(file, callback)
Parse the config file, configFile can point to a file or can be skipped and the default file will be loaded
core.reloadConfig(callback)
Reload all config files
core.loadDnsConfig(options, callback)
Load configuration from the DNS TXT records
core.runMethods(name, params, options, callback)
Run a method for every module; a method must conform to the following signature: function(options, callback) and
call the callback when finished. The callback's second argument will be the parameters passed to each method; the options, if provided, can
specify the conditions or parameters to be used by runMethods only.
The following properties can be specified in the options or params:
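For example, a sketch invoking the configureWeb methods of all registered modules (assuming the params and options arguments may be omitted):
core.runMethods("configureWeb", (err, params) => {
    if (err) console.error("configureWeb failed:", err);
});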
core.addModule(...args)
Adds references to the objects in the core for further access, specify module name, module reference pairs.
This is used by the core itself to register all internal modules and makes them available in the shell and in the core.modules
object.
Also this is used when creating a modular backend application by separating the logic into different modules; by registering such modules with the core each module becomes a first class citizen in the backendjs core with all the callbacks and methods exposed.
For example, the module below will register API routes and some methods
const bkjs = require("backendjs");
const mymod = { name: "mymod" }
exports.module = mymod;
core.addModule(mymod);
mymod.configureWeb = function(options, callback) {
bkjs.api.app.all("/mymod", function(req, res) {
res.json({});
});
}
In the main app.js just load it and the rest will be done automatically, i.e. routes will be created ...
const mymod = require("./mymod.js");
Running the shell will make the object mymod
available
./app.sh -shell
> mymod
{ name: "mymod" }
core.loadModules(dir, options, callback)
Dynamically load services from the specified directory.
The modules are loaded using require as normal nodejs modules, but in addition if a module exports an init method it is called immediately with the options passed as an argument. This is a synchronous function so it is supposed to be called on startup, not dynamically during request processing.
Only .js files from the top level are loaded by default unless a depth is provided. core.addModule is called automatically.
Each module is put into the global core.modules object by name, the name can be a name property of the module or the module base file name.
Modules can be sorted by priority: if a .priority property is defined in a module it will be used to sort the modules, the higher the priority the closer to the top of the list the module will be. The position of a module in core.modules defines the order in which runMethods will call them.
It uses lib.findFileSync to locate the modules, the options depth, include or exclude can be provided.
Caution must be taken for module naming, it is possible to override any default bkjs module which will result in unexpected behaviour
Example, to load all modules from the local relative directory
core.loadModules("modules")
core.loadPackages(list, options)
Load NPM packages and auto configure paths from each package, etc/config file inside each package will be parsed immediately. Returns all config files concatenated.
core.httpGet(uri, params, callback)
Make an HTTP request, see the httpGet
module for more details.
core.sendRequest(options, callback)
Make an HTTP request using httpGet with the ability to sign requests.
A POST request is made, and if data is an object it is converted into a string.
Returns params as in httpGet with the .json property assigned with an object from the parsed JSON response.
When used with API endpoints, the backend-host parameter must be set in the config or command line to the base URL of the backend,
like http://localhost:8000; this applies when the uri is a relative URL. Absolute URLs do not need this parameter.
Special parameters for options:
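A sketch of a signed relative request, assuming -backend-host is configured and that the login/secret option names mirror the backend-login/backend-secret config parameters:
core.sendRequest({ url: "/v1/status", login: "backend", secret: "secret" }, (err, params) => {
    if (!err) console.log(params.json);
});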
core.parseConfig(data, pass, file)
Parse config lines from a file or other source. Examples of sections from modules:
tag=T, instance.tag=T, runMode=M, appName=N, role=R, db.configRoles=dev, aws.region=R, aws.tags=T
core.parseArgs(argv, pass, file)
Parse command line arguments
core.processArgs(mod, argv, pass, file)
Config parameters are defined in a module as a list of parameter names prefixed with the module name; a parameter can be a string which defines a text parameter or an object with the properties: name, type, value, decimals, min, max, separator. The type can be bool, number, list, json
core.describeArgs(module, args)
Add custom config parameters to be understood and processed by the config parser
Example:
core.describeArgs("api", [ { name: "num", type: "int", descr: "int param" }, { name: "list", array: 1, descr: "list of words" } ]);
core.describeArgs([ { name: "api-list", array: 1, descr: "list of words" } ]);
core.watchLogs(options, callback)
Watch log files for errors and report via email or POST url, see config parameters starting with logwatcher-
about how this works
core.watchLogsSave(file, pos, callback)
Save current position for a log file
core.processName()
Return a unique process name based on the cluster status, worker or master, and the role. This can be reused by other workers within the role, thus making it usable for repeating environments or storage solutions.
core.showHelp(options)
Print help about command line arguments and exit
core.sendmail(options, callback)
Send email via nodemailer
with SMTP transport, other supported transports:
tmp/
core.killBackend(name, signal, callback)
Kill all backend processes that match name and not the current process
core.shutdown()
Shutdown the machine now
core.setTimeout(name, callback, timeout)
Set or reset a timer
core.createServer(options, callback)
Create a Web server with options and request handler, returns a server object.
Options can have the following properties:
core.createRepl(options)
Create REPL interface with all modules available
core.startRepl(port, bind, options)
Start command prompt on TCP socket, context can be an object with properties assigned with additional object to be accessible in the shell
core.watchTmp(options, callback)
Watch temp files and remove files that are older than given number of seconds since now, remove only files that match pattern if given Options properties:
core.parseCookies(header)
Parse Set-Cookie header and return an object of cookies: { NAME: { value: VAL, secure: true, expires: N ... } }
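For example (the attribute names in the result are illustrative):
core.parseCookies("sid=abc123; Secure; Expires=Wed, 01 Jan 2025 00:00:00 GMT")
// => { sid: { value: "abc123", secure: true, expires: ... } }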
core.loadLocales(options, callback)
Load configured locales
The Database API, a thin abstraction layer on top of SQLite, PostgreSQL, DynamoDB and Cassandra. The idea is not to introduce new abstraction layer on top of all databases but to make the API usable for common use cases. On the source code level access to all databases will be possible using this API but any specific usage like SQL queries syntax or data types available only for some databases will not be unified or automatically converted but passed to the database directly. Only conversion between JavaScript types and database types is unified to some degree meaning JavaScript data type will be converted into the corresponding data type supported by any particular database and vice versa.
Basic operations are supported for all database and modelled after NoSQL usage, this means no SQL joins are supported by the API, only single table access. SQL joins can be passed as SQL statements directly to the database using low level db.query API call, all high level operations like add/put/del perform SQL generation for single table on the fly.
The common convention is to pass an options object with flags that are common for all drivers along with driver specific ones; this options object can be modified with new properties but all drivers should try not to modify or delete existing properties, so the same options object can be reused in subsequent operations.
All queries and update operations ignore properties that start with an underscore.
Before the DB functions can be used the core.init
MUST be called first, the typical usage:
var backend = require("backendjs"), core = backend.core, db = backend.db;
core.init(function(err) {
db.add(...
...
});
All database methods can use default db pool or any other available db pool by using pool: name
in the options. If not specified,
then the default db pool is used; sqlite is the default if no -db-pool config parameter is specified in the command line or the config file.
Even if the specified pool does not exist, the default pool will be returned, this allows to pre-configure the app with different pools
in the code and enable or disable any particular pool at any time.
Example, use PostgreSQL db pool to get a record and update the current pool
db.get("bk_user", { login: "123" }, { pool: "pg" }, (err, row) => {
if (row) db.update("bk_user", row);
});
const user = await db.aget("bk_user", { login: "123" });
Most database pools can be configured with options min and max for the number of connections to be maintained, so no overload will happen and warm connections are kept for faster responses. Even for DynamoDB which uses HTTPS this can be configured without hitting provisioned limits (which would return an error) by putting extra requests into the waiting queue and executing them once some requests have finished.
Example:
db-pg-pool-max = 100
db-dynamodb-pool-max = 100
Also, to spread functionality between different databases it is possible to assign some tables to specific pools using the db-X-pool-tables parameters,
thus redirecting the requests to one or another database depending on the table; this for example can be useful when using a fast but expensive
database like DynamoDB for real-time requests and a slower SQL database running on some slow instance for rare requests, reports or statistics processing.
Example, run the backend with the default PostgreSQL database but keep all config parameters in the DynamoDB table for availability:
db-pool = pg
db-dynamodb-pool = default
db-dynamodb-pool-tables = bk_config
The following databases are supported with the basic db API methods: Sqlite, PostgreSQL, DynamoDB, Elasticsearch
Multiple connections of the same type can be opened, just add an N suffix to all database config parameters where N is a number;
refer to such pools in the code as poolN or by an alias.
Example:
db-sqlite1-pool = billing
db-sqlite1-pool-max = 10
db-sqlite1-pool-options-path = /data/db
db-sqlite1-pool-options-journal_mode = OFF
db-sqlite1-pool-alias = billing
in the Javascript:
db.select("bills", { status: "ok" }, { pool: "billing" }, lib.log)
await db.aselect("bills", { status: "ok" }, { pool: "billing" })
Config parameters
- db-none, type: bool - Disable all db pools
- db-pool, dns: 1 - Default pool to be used for db access without an explicit pool specified
- db-name - Default database name to be used for default connections in cases when no db is specified in the connection url
- db-create-tables, type: bool, master: 1, pass: 1 - Create tables in the database or perform table upgrades for new columns in all pools, only the shell or server process can perform this operation
- db-create-tables-roles, type: list, pass: 1 - Only processes with these roles can create tables
- db-cache-tables, type: list - List of tables that can be cached: bk_user, bk_counter. This list defines which DB calls will cache data with the currently configured cache. This is global for all db pools.
- db-skip-tables, type: list - List of tables that will not be created or modified, this is global for all pools
- db-cache-pools, type: list - List of pools which trigger cache flushes on update
- db-cache-sync, type: list - List of tables that perform synchronized cache updates before returning from a DB call, by default cache updates are done in the background
- db-cache-keys-([a-z0-9_]+)-(.+), type: list - List of columns to be used for the table cache, all update operations will flush the cache if the cache key can be created from the record columns. This is for ad-hoc caches to be used for custom selects which specified the cache key.
- db-describe-tables, type: callback - A JSON object with table descriptions to be merged with the existing definitions
- db-cache-ttl, type: int - Default global TTL for cached tables
- db-cache-ttl-(.+), type: int - TTL in milliseconds for each individual table being cached
- db-cache-name-(.+) - Cache client name to use for cache reading and writing for each table instead of the default in order to split cache usage for different tables, it can be just a table name or pool.table, use * to set the default cache for all tables
- db-cache-update-(.+) - Cache client name to use for updating only for each table instead of the default in order to split cache usage for different tables, it can be just a table name or pool.table or *. This cache takes precedence for updating cache over the cache-name parameter
- db-cache2-max, type: int - Max number of items to keep in the LRU Level 2 cache
- db-cache2-(.+), type: int - Tables with TTL for the level 2 cache, i.e. in the local process LRU memory. It works before the primary cache and keeps records in the local LRU cache for the given amount of time, the TTL is in ms and must be greater than zero for the level 2 cache to work
- db-custom-column-([a-zA-Z0-9_]+)-(.+) - A column that is allowed to be used in any table, the name is a column name regexp with the value to be a type, Ex: -db-custom-column-bk_user-^stats=counter
- db-describe-column-([a-z0-9_]+)-([a-zA-Z0-9_]+), type: map - Describe a table column's properties, can be a new or existing column, overrides the existing property, ex: -db-describe-column-bk_user-name max:255
- db-local - Local database pool for properties, cookies and other local instance only specific stuff
- db-config - Configuration database pool to be used to retrieve config parameters from the database, must be defined to use a remote db for config parameters, set to default to use the current default pool
- db-config-interval, type: number - Interval between loading configuration from the database configured with -db-config, in minutes, 0 disables refreshing config from the db
- db-config-count, type: number - Max number of records to read from the config table
- db-local-tables, type: bool - Only enable the local, default and config pools
- db-no-cache-columns, type: bool - Do not read/cache table columns
- db-cache-columns-interval, type: int - How often in minutes to refresh table columns from the database, it calls cacheColumns for each pool which supports it
- db-skip-drop, type: regexpobj - A pattern of table names which will be skipped in db.drop operations to prevent accidental table deletion
- db-aliases-(.+) - Table aliases to be used instead of the requested table name, only high level db operations will use it, all low level utilities use the real table names
- db-([a-z0-9]+)-pool$ - A database pool name, depending on the driver it can be an URL, name or pathname, examples of db pools: -db-pg-pool, -db-dynamodb-pool, url format: protocol://[user:password@]hostname[:port]/dbname or default
- db-([a-z0-9]+)-pool-(disabled)$, type: bool - Disable the specified pool but keep the configuration
- db-([a-z0-9]+)-pool-(max)$, type: number - Max number of open connections for a pool, default is Infinity
- db-([a-z0-9]+)-pool-(min)$, type: number - Min number of open connections for a pool
- db-([a-z0-9]+)-pool-(idle)$, type: number - Number of ms for a db pool connection to be idle before being destroyed
- db-([a-z0-9]+)-pool-(tables)$, type: list - Tables to be created only in this pool, to prevent creating all tables in every pool
- db-([a-z0-9]+)-pool-connect$, type: json - Connect options for a DB pool driver for a new connection, driver specific
- db-([a-z0-9]+)-pool-options$, type: map - General options for a DB pool
- db-([a-z0-9]+)-pool-options-([a-zA-Z0-9_.-]+)$ - General options for a DB pool given individually
- db-([a-z0-9]+)-pool-(create-tables)$, type: bool, master: 1 - Create tables for this pool on startup
- db-([a-z0-9]+)-pool-(skip-tables)$, type: list - Tables not to be created in this pool
- db-([a-z0-9]+)-pool-cache2-(.+), type: int - Level 2 cache TTL for the specified pool and table, data is JSON strings in the LRU cache
- db-([a-z0-9]+)-pool-alias - Pool alias to refer to the pool by an alternative name
Database tables
// Configuration store, same parameters as in the commandline or config file, can be placed in separate config groups
// to be used by different backends or workers
bk_config: {
name: { primary: 1 }, // name of the parameter
type: { primary: 2 }, // config type or tag
value: { type: "text" }, // the value
status: { value: "ok" }, // ok - availaible
ttl: { type: "int" }, // refresh interval in seconds since last read
version: { type: "text" }, // version conditions, >M.N,<M.N
sort: { type: "int" }, // sorting order
ctime: { type: "now", readonly: 1 },
mtime: { type: "now" }
},
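For example, a config record could be stored with a sketch like this (the put call mirrors the db API described later in this section):
db.put("bk_config", { name: "db-pool", type: "prod-web", value: "pg" }, {}, lib.log)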
createPool(options)
None database driver
db.dropTables(tables, options, callback)
Delete all specified tables from the specific pool or all active pools if options.pool
is empty, tables
can be a list of tables or an
object with table definitions
db.query(req, options, callback)
Execute query using native database driver, the query is passed directly to the driver.
req - an object with the following properties:
options may have the following properties:
- pool - the pool to use, tables assigned via the db-pool-tables parameter take precedence
- unique - return only unique records by the .unique property
- count - skip all post processing and conversion
- obj - the result will include all generated and updated columns in the obj property
- returning - like the returning property does, it only returns the query record with new columns from memory
The callback is called as callback(err, rows, info).
Example with SQL driver
db.query({ text: "SELECT a.id,c.type FROM bk_user a,bk_icon c WHERE a.id=c.id and a.id=?", values: ['123'] }, { pool: 'pg' }, (err, rows, info) => {
});
db.queryProcessSync(pool, req, row)
Post process hook to be used for replicating records to another pool, this is supposed to be used as this:
db.setProcessRow("post", "*", (req, row) => { db.queryProcessSync("elasticsearch", req, row) });
The conditions when to use it is up to the application logic.
It does not deal with the destination pool to be overloaded, all errors will be ignored, this is for simple and light load only
The destination pool must have the tables to be synced configured:
db-elasticsearch-pool-tables=table1,table2
db.get(table, query, options, callback)
Retrieve one record from the database by primary key, returns the found record or null if not found. Options can use the same special properties as in the db.select method.
NOTE: On return info.cached will be set if the record was served from the cache.
Example
db.get("bk_user", { login: '12345' }, function(err, row) {
if (row) console.log(row.name);
});
const user = await db.aget("bk_user", { login: '12345' });
db.select(table, query, options, callback)
Select objects from the database that match the supplied conditions.
Options can use the following special properties:
- ops - operators to use for properties in the query: >, gt, <, lt, =, !=, <>, >=, ge, <=, le, in, all_in, between, regexp, iregexp, begins_with, not_begins_with, like%, ilike%, contains, not_contains
- select - a list of columns to return, by default all columns are returned (same as in the db.get method); the select property might be used to get all required properties. For Elasticsearch if sort is null then a scrolling scan will be used, if no timeout or scroll are given the default is 1m.
- cacheKey - a custom cache key instead of the primary key as in the get method, configured with the db-cache-keys-table-name parameter
On return, the callback can check the third argument which is an object with some predefined properties along with driver specific state returned by the query:
Example: get by primary key, refer above for default table definitions
db.select("bk_message", { id: '123' }, { count: 2 }, (err, rows) => {
});
const rows = await db.aselect("bk_message", { id: '123' }, { count: 2 });
Example: get all icons with type greater or equal to 2
db.select("bk_icon", { id: '123', type: '2' }, { select: 'id,type', ops: { type: 'ge' } }, (err, rows) => {
});
Example: get unread msgs sorted by time, recent first
db.select("bk_message", { id: '123', status: 'N:' }, { sort: "status", desc: 1, ops: { status: "begins_with" } }, (err, rows) => {
});
Example: allow all accounts icons to be visible
db.select("bk_user", {}, (err, rows) => {
rows.forEach(function(row) {
row.acl_allow = 'auth';
db.update("bk_icon", row);
});
});
Example: scan accounts with custom filter, not by primary key: by exact zipcode
db.select("bk_user", { zipcode: '20000' }, (err, rows) => {
});
Example: select accounts by type for the last day
db.select("bk_user", { type: 'admin', mtime: Date.now()-86400000 }, { ops: { type: "contains", mtime: "gt" } }, (err, rows) => {
});
db.search(table, query, options, callback)
Perform full text search on the given table, the database implementation may ignore table name completely in case of global text index.
Query in general is a text string with the format that is supported by the underlying driver,
the db module DOES NOT PARSE the query at all if the driver supports full text search, otherwise it behaves like select.
Options may take the same properties as in the select method.
A special query property q may be used for generic search in all fields.
Without full text search support in the driver this may return nothing or an error.
Example:
db.search("bk_user", { type: "admin", q: "john*" }, { pool: "elasticsearch" }, lib.log);
db.search("bk_user", "john*", { pool: "elasticsearch" }, lib.log);
await db.asearch("bk_user", "john*", { pool: "elasticsearch" });
db.add(table, obj, options, callback)
Insert new object into the database
On return the obj
will contain all new columns generated before adding the record
Note: SQL, DynamoDB, MongoDB, Redis drivers are fully atomic but other drivers may be subject to race conditions
Example
db.add("bk_user", { id: '123', login: 'admin', name: 'test' }, function(err, rows, info) {
});
await db.aadd("bk_user", { id: '123', login: 'admin', name: 'test' })
db.incr(table, obj, options, callback)
Counter operation, increase or decrease column values, similar to update but all specified columns except primary key will be incremented, use negative value to decrease the value.
If no options.updateOps object is specified or no 'incr' operations are provided then
all columns with type 'counter' will be used for the incr action.
Note: The record must already exist for SQL databases; for DynamoDB and Cassandra a new record will be created
if it does not exist yet. To disable the upsert pass noupsert in the options.
Example
db.incr("bk_counter", { id: '123', like0: 1, invite0: 1 }, (err, rows, info) => {
});
await db.aincr("bk_counter", { id: '123', like0: 1, invite0: 1 })
db.put(table, obj, options, callback)
Add/update an object in the database, if the object already exists it will be replaced with all new properties from the obj.
The options take the same properties as the db.add method.
Example
db.put("bk_user", { id: '123', login: 'test', name: 'test' }, function(err, rows, info) {
});
await db.aput("bk_user", { id: '123', login: 'test', name: 'test' })
db.update(table, obj, options, callback)
Update existing object in the database.
The options take the same properties as the db.add method with the following additional properties:
- expected - an object with conditions to check in the existing record; a property named $or or $and inside it will be treated as a sub-expression if it is an object
- typesOps: { list: "add" } - will make sure all lists have updateOps set to add if not specified explicitly
Note: not all database drivers support atomic update with conditions; the SQL, DynamoDB, MongoDB, Redis drivers are fully atomic, but other drivers perform a get before put and so are subject to race conditions
make sure all lists will have updateOps set as add if not specified explicitlyNote: not all database drivers support atomic update with conditions, all drivers for SQL, DynamoDB, MongoDB, Redis fully atomic, but other drivers perform get before put and so subject to race conditions
Example
db.update("bk_user", { login: 'test', id: '123' }, (err, rows, info) => {
console.log('updated:', info.affected_rows);
});
await db.aupdate("bk_user", { login: 'test', name: 'Test' })
db.update("bk_user", { login: 'test', id: '123', first_name: 'Mr' }, { pool: 'pg' }, (err, rows, info) => {
console.log('updated:', info.affected_rows);
});
db.update("bk_user", { login: 'test', first_name: 'John' }, { expected: { first_name: "Carl" } }, (err, rows, info) => {
console.log('updated:', info.affected_rows);
});
db.update("bk_user", { login: 'test', first_name: 'John' }, { expected: { "$or": { first_name: "Carl", g1: null }, aliases: { g1: "first_name" } }, (err, rows, info) => {
console.log('updated:', info.affected_rows);
});
db.updateAll(table, query, obj, options, callback)
Update all records that match given condition in the query
, one by one, the input is the same as for db.select
and every record
returned will be updated using db.update
call by the primary key, so make sure options.select includes the primary key for every row found by the select.
All properties from the obj
will be set in every matched record.
The callback will receive on completion the err and all rows found and updated. This is mostly for non-SQL databases and for a very large range it may take a long time to finish due to sequentially updating every record one by one. Special properties that can be in the options for this call:
- op - can be set to put or add
- options.updateProcess(row, options) - called for every row, if it returns a non-empty value the update will stop and return it as the error
- options.updateFilter(row, options, (skip) => {}) - called for every row to decide whether to skip it
If no options.select is specified only the primary keys will be returned or collected
Example, update birthday format if not null
db.updateAll("bk_user",
{ birthday: 1 },
{ mtime: Date.now() },
{ ops: { birthday: "not null" },
updateProcess: function(r, o) {
r.birthday = lib.strftime(new Date(r.birthday), "%Y-%m-%d");
},
updateFilter: function(r, o, cb) {
cb(r.status == 'ok');
} },
function(err, count) {
console.log(count, "rows updated");
});
db.del(table, obj, options, callback)
Delete an object in the database, no error if the object does not exist
The options take the same properties as the db.update method.
Example
db.del("bk_user", { login: '123' }, function(err, rows, info) {
console.log('deleted:', info.affected_rows);
});
db.delAll(table, query, options, callback)
Delete all records that match given condition, one by one, the input is the same as for db.select
and every record
returned will be deleted using db.del
call. The callback will receive on completion the err and all rows found and deleted.
Special properties that can be in the options for this call:
- delAll - use a driver specific delAll for the given pool if available
- options.delProcess(row, options, info) - called for every row, if it returns a non-empty value the scan will stop and return it as the error
- options.delFilter(row, options, (skip) => {}) - called for every row to decide whether to skip deleting it
If no options.select is specified only the primary keys will be returned or collected.
If db-skip-drop matches the table name and there is no query provided it will exit with an error.
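Example, a sketch deleting old matching records with a filter (the options follow the list above):
db.delAll("bk_message", { id: "123", mtime: Date.now() - 86400000 }, { ops: { mtime: "lt" }, delFilter: function(r, o, cb) { cb(r.status == "ok") } }, lib.log);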
db.list(table, query, options, callback)
Convenient helper to retrieve all records by primary key; the query must be a list with key properties or a string with a comma-separated list of primary key values. Example:
db.list("bk_user", ["id1", "id2"], function(err, rows) { console.log(err, rows) });
db.list("bk_user", "id1,id2", function(err, rows) { console.log(err, rows) });
db.batch(list, options, callback)
Perform a batch of operations at the same time, all operations for the same table will be run together one by one but different tables will be updated in parallel.
- list - an array of objects to put/delete from the database, in the format shown in the example below
On return the second arg to the callback is a list of records with errors: the same input record with added properties errstatus and errmsg
Example:
var ops = [ { op: "add", table: "bk_counter", obj: { id:1, like:1 } },
{ op: "add", table: "bk_user", obj: { login: "test", id:1, name:"test" }]
db.batch(ops, { factorCapacity: 0.5 }, lib.log);
db.bulk(list, options, callback)
Bulk operations, it will be noop if the driver does not support it.
The input format is the same as for the db.batch
method.
On return the second arg to the callback is a list of records with errors: the same input record with added properties errstatus and errmsg.
NOTE: DynamoDB only supports add/put/del and 25 operations at a time, if more are specified it will send multiple batches
Example
var ops = [ { op: "add", table: "bk_counter", obj: { id:1, like:1 } },
{ op: "del", table: "bk_user", obj: { login: "test1" } },
{ op: "incr", table: "bk_counter", obj: { id:2, like:1 } },
{ op: "add", table: "bk_user", obj: { login: "test2", id:2, name:"test2" } }]
db.bulk(ops, { pool: "elasticsearch" }, lib.log);
db.transaction(list, options, callback)
Same as the db.bulk
but in transaction mode: all operations must succeed or fail together. Not every driver supports it; in the DynamoDB case only a limited number of operations can be done at once, so if the list is larger it will be run sequentially in batches.
In case of an error the second arg will contain the records of the failed batch.
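A minimal sketch, assuming a DynamoDB pool and the same list format as db.batch:

    // Move a like from one counter to another, both ops succeed or fail together
    var ops = [ { op: "incr", table: "bk_counter", obj: { id: 1, like: -1 } },
                { op: "incr", table: "bk_counter", obj: { id: 2, like: 1 } } ];
    db.transaction(ops, { pool: "dynamodb" }, function(err, failed) {
        if (err) console.log("failed batch:", failed);
    });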
db.scan(table, query, options, rowCallback, endCallback)
Convenient helper for scanning a table for some processing: rows are retrieved in batches and passed to the callback until there are no more records matching the given criteria. The query is the same as passed to the db.select method and defines the condition for which records to get.
The rowCallback must be present; it is called for every row or batch retrieved, with a second parameter which is the function to be called once the processing is complete. At the end, the endCallback will be called with just 1 argument, err, which indicates the end of the scan operation.
Basically, db.scan is the same as db.select but can be used to retrieve a large number of records in batches and allows async processing of such records.
To hint a driver that scanning is in progress the options.scanning
will be set to true.
Parameters:
- `query` - an object with query conditions, the same as in db.select
- `options` - the same as in db.select, with the following additions:
- `rowCallback(row, info)` - called for every row or batch retrieved
- `options.useCapacity` - which capacity to use, read by default
- `options.factorCapacity` - percentage of the table capacity to use, default is 0.9
- `options.tableCapacity` - use the capacity of a different table, useful for cases when the row callback performs writes into that other table and the capacity is different

Example:
db.scan("bk_user", {}, { count: 10, pool: "dynamodb" }, function(row, next) {
// Copy all accounts from one db into another
db.add("bk_user", row, { pool: "pg" }, next);
}, function(err) { });
db.copy(table, query, options, callback)
Copy records from one table to another between different DB pools or regions
Parameters:
- `sort` - sort by this index in desc order

db.join(table, rows, options, callback)
Join the given list of records with the records from the other table by primary key. The properties from the joined table will be merged with the original rows, preserving the existing properties.
A special case: when the table is empty db.join just returns the same rows to the callback; this is convenient for doing joins conditionally, triggering the join by setting the table name or skipping it completely.
Example:
db.join("bk_user", [{id:"123",key1:1},{id:"234",key1:2}], lib.log)
db.join("bk_user", [{aid:"123",key1:1},{aid:"234",key1:2}], { keysMap: { id: "aid" }}, lib.log)
db.join("bk_user", [{id:"123",state:"NY"},{id:"234",state:"VA"}], { columnsMap: { state: "astate" }}, lib.log)
db.create(table, columns, options, callback)
Create a table using column definitions represented as a list of objects. Each column definition may contain the following properties:
- `name` - column name
- `type` - column type: int, bigint, real, string, now, counter or other supported type
- `primary` - column is part of the primary key
- `unique` - column is part of a unique key
- `index` - column is part of an index, the value is a number for the column position in the index
- `indexN` - additional indexes where N is 1..5
- `value` - default value for the column
- `len` - column length
- `max` - ignore the column if a `text`, `json` or `obj` value is greater than the specified limit, unless `trunc` is provided
- `trunc` - truncate the column value before saving into the DB, uses `max` as the limit
- `maxlist` - max number of items in the `list` or `array` column types
- `pub` - column is public, this is a very important property because it allows anybody to see it when used in the default API functions, i.e. anybody with valid credentials can retrieve all public columns from all other tables, and if one of the other tables is the account table this may expose some personal information, so by default only a few columns are marked as public in the `bk_user` table
- `pub_admin` - a generic read permission, requires `options.isAdmin` when used with `api.cleanResult`
- `pub_staff` - a generic read permission, requires `options.isStaff` when used with `api.cleanResult`
- `pub_types` - a role or a list of roles which further restrict access to a public column to only users with the specified roles
- `priv_types` - a role or a list of roles which explicitly deny access to a column for users with the specified roles
- `priv` - the opposite of the pub property, if defined this property should never be returned to the client by the API handlers
- `auth` - this property will be set in `req.options.account` for access permission checks when only options are available
- `internal` - if set then this property can only be updated by admin/root or with the `isInternal` property, implemented by the `auth` module only
- `hidden` - completely ignored by all update operations but could be used by the public columns cleaning procedure, if it is computed and not stored in the db it can contain the pub property to be returned to the client
- `readonly` - only add/put operations will use the value, incr/update will not affect the value
- `writeonly` - only incr/update can change this value, add/put will ignore it
- `noresult` - delete this property from the result, mostly for joined artificial columns which are used for indexes only
- `random` - add a random number between 0 and this value, useful with type: "now"
- `lower` - make the string value lowercase
- `upper` - make the string value uppercase
- `strip` - if a regexp, perform a replace on the column value before saving
- `trim` - trim the string value of whitespace
- `cap` - capitalize into a title with `lib.toTitle`
- `word` - if a number, only save the nth word from the value, split by `separator`
- `clock` - for the `now` type use a high resolution clock in nanoseconds
- `epoch` - for the `now` type save as seconds since the Epoch, not milliseconds
- `multiplier` - for numeric columns apply this multiplier before saving
- `increment` - for numeric columns add this value before saving
- `decimal` - for numeric columns convert into a fixed number using this number of decimals
- `format` - a function (val, req) => {} that must return a new value for the given column, for custom formatting
- `prefix` - prefix to be prepended for autogenerated columns: `uuid`, `suuid`, `tuuid`
- `separator` - to be used as a separator in join or split depending on the column properties
- `list` - splits the column value into an array, an optional `separator` property can be used, the default separator is `,|`
- `autoincr` - for counter tables, mark the column to be auto-incremented by the connection API if the connection type has the same name as the column name
- `join` - a list with property names that must be joined together before performing a db operation, it will use the given record to produce a new property; this works both ways: to the db, and when reading a record from the db it will split the joined property and assign the individual properties the values from the joined value. See `db.joinColumns` for more options.
- `unjoin` - split the join column into the respective columns on retrieval
- `keepjoined` - keep the joined column value, if not specified the joined column is deleted after being unjoined
- `notempty` - do not allow empty columns, if not provided it is filled with the default value
- `skip_empty` - ignore the column if the value is empty, i.e. null or an empty string
- `fail_ifempty` - return an error if there is no value for the column, this is checked during record preprocessing
- `values` - an array with allowed values, ignore the column if the value is not present
- `values_map` - an array of pairs to be checked for exact match and replaced with the next item, ["", null, "", undefined, "null", ""]

Some properties may be defined multiple times with number suffixes like: unique1, unique2, index1, index2 to create more than one index for the table. The same properties define a composite key in the order of definition or sorted by the property value, for example: { a: { index:2 }, b: { index:1 } } will create index (b,a) because the index: property values differ. If all index properties are set to 1 then the composite index will use the order of the properties.
Special column types:
- `uuid` - autogenerate the column value with a UUID, the optional `prefix` property will be prepended, { type: "uuid", prefix: "u_" }
- `now` - defines a column to be automatically filled with the current timestamp, { type: "now" }
- `counter` - defines a column that will be automatically incremented by the db.incr command, on creation it is set to 0
- `uid` - defines a column to be automatically filled with the current user id, this assumes that the account object is passed in the options from the API level
- `uname` - defines a column to be automatically filled with the current user name, this assumes that the account object is passed in the options from the API level
- `ttl` - mark the column to be auto expired, can be set directly to a time in the future or use one of: `days`, `hours`, `minutes` as an interval in the future

NOTE: Index creation is not required and all index properties can be omitted, it can be done more effectively using native tools for any specific database; this format is for simple and common use cases without using any other tools, but it does not cover all possible variations for every database. All indexes and primary keys created outside of the backend application will be detected properly by db.cacheColumns and by each pool's cacheIndexes methods.
Each database pool also can support native options that are passed directly to the driver in the options, these properties are defined in the object with the same name as the db driver, all properties are combined, for example to define provisioned throughput for the DynamoDB index:
db.create("test_table", { id: { primary: 1, type: "int", index: 1, dynamodb: { readCapacity: 50, writeCapacity: 50 } },
type: { primary: 1, pub: 1, projections: 1 },
name: { index: 1, pub: 1 } }
});
Create a DynamoDB table with a global secondary index: if the first index property is not the same as the primary key hash it defines a global index, if it is the same then a local one; if the second key column contains the `global` property then it is a global index as well. Below we create a global secondary index on the property 'name' only, while in the example above it was a local secondary index on id and name. A local secondary index is also created on id,title.
DynamoDB projection is defined by a projections
property, can be a number/boolean or an array with index numbers:
db.create("test_table", { id: { primary: 1, type: "int", index1: 1 },
type: { primary: 1, projections: [0] },
name: { index: 1, projections: 1 },
title: { index1: 1, projections: [1] } },
descr: { index: 1, projections: [0, 1] },
});
When using real DynamoDB creating a table may take some time; in such cases, if options.waitTimeout is not specified it defaults to 1 min, so the callback is called as soon as the table is active or after the timeout, whichever comes first.
Pass MongoDB options directly:

    db.create("test_table", { id: { primary: 1, type: "int", mongodb: { w: 1, capped: true, max: 100, size: 100 } },
                              type: { primary: 1, pub: 1 },
                              name: { index: 1, pub: 1, mongodb: { sparse: true, min: 2, max: 5 } } });
db.upgrade(table, columns, options, callback)
Upgrade a table with missing columns from the definition list, if after the upgrade new columns must be re-read from the database
then info.affected_rows
must be non zero.
db.drop(table, options, callback)
Drop a table
db.sql(text, values, options, callback)
Execute arbitrary SQL-like statement if the pool supports it, values must be an Array with query parameters or can be omitted.
Example:
db.sql("SELECT * FROM bk_property WHERE value=? LIMIT 1", [1], { pool: "sqlite", count: 10 }, lib.log)
db.sql("SELECT * FROM bk_property", { pool: "dynamodb" }, lib.log)
db.sql("SELECT * FROM bk_property", { pool: "dynamodb", count: 10 }, lib.log)
db.getCached(op, table, query, options, callback)
Retrieve cached result or put a record into the cache prefixed with table:key[:key...]
Options accept the same parameters as for the usual get action, but it is very important that all the options be the same for every call, especially the select parameter which tells which columns to retrieve and cache.
Additional options:
Example:
db.getCached("get", "bk_user", { login: req.query.login }, { select: "latitude,longitude" }, function(err, row) {
var distance = lib.geoDistance(req.query.latitude, req.query.longitude, row.latitude, row.longitude);
});
db.getCache(table, query, options, callback)
Retrieve an object from the cache by key, sets cacheKey
in the options for later use
db.putCache(table, query, options)
Store a record in the cache
db.delCache(table, query, options)
Notify or clear cached record, this is called after del/update operation to clear cached version by primary keys
db.getCacheKey(table, query, options)
Returns concatenated values for the primary keys, this is used for caching records by primary key
db.getCacheOptions(table, options, update)
Setup common cache properties
db.getCache2Ttl(table, options)
Return TTL for level 2 cache, negative means use js cache
db.getCacheKeys(table, query, name)
Return a list of global cache keys, if a name is given only returns the matching key
db.delCacheKeys(req, result, options, callback)
Delete all global cache keys for the table record
db.init(options, callback)
Initialize all database pools. The options may contain the following properties:
db.initConfig(options, callback)
Load configuration from the config database, must be configured with db-config-type pointing to the database pool where the bk_config table contains configuration parameters.
The priority of the parameters is fixed and goes from the most broad to the most specific; the most specific always wins. This allows for very flexible configuration policies defined by the app or by the place where instances are running, separated by the run mode.
The following list of properties will be queried from the config database, and the sorting order is very important: the last values will override values received for the earlier properties. For example, if two properties are defined in the bk_config table with the types myapp and prod-myapp, then only the last value will be used.
The major elements are the following:
- the run mode, ex: -run-mode: production
- the application name, ex: myapp
- the process role, ex: -worker
- the instance tag, ex: -nat

The modifiers which are appended to each major attribute:
- the network, ex: -192.168
- the region, ex: us-east-1

All modifiers are appended for every item in the top level list, like runMode-network, runMode-appName-tag-region, ...
The options take the following properties:
NOTE: The config parameters from the DB always take precedence, even over config.local.
On return, the callback's second argument will receive all parameters received from the database as a list: -name value ...
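A minimal usage sketch; it assumes the config pool is already initialized and simply logs whatever parameters are stored:

    // Pull parameters from bk_config, returned as a flat list: ["-name", "value", ...]
    db.initConfig({}, function(err, params) {
        if (!err) console.log("config loaded:", params);
    });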
db.getConfig(options, callback)
Return all config records for the given instance, the result will be sorted with the most relevant at the top.
db.refreshConfig(options, callback)
Refresh parameters which are configured with a TTL
Pool.prototype.convertError(table, op, err, options)
Convert into recognizable error codes
Pool.prototype.query(client, req, options, callback)
Simulate query as in SQL driver but performing AWS call, text will be a table name and values will be request options
Create a database pool that works with ElasticSearch server, only the hostname and port will be used, by default each table is stored in its own index.
To define shards and replicas per index:
-db-elasticsearch-pool-options-shards-INDEX_NAME=NUM
-db-elasticsearch-pool-options-replicas-INDEX_NAME=NUM
To support multiple seed nodes a parameter -db-elasticsearch-pool-options-servers=1.1.1.1,2.2.2.2
can be specified, if the primary node
fails it will switch to other configured nodes. To control the switch retries and timeout there are options:
-db-elasticsearch-pool-options-retry-count=3
-db-elasticsearch-pool-options-retry-timeout=250
On a successful connect to any node the driver retrieves the full list of nodes in the cluster and switches to a random node; this happens every discovery-interval milliseconds, default is 1h, it can be specified as -db-elasticsearch-pool-options-discovery-interval=300000
Pool.prototype.cacheIndexes(options, callback)
Cache indexes using the information_schema
db.getPool(options)
Return database pool by name or default pool, options can be a pool name or an object with { pool: name } to return
the pool by given name. This call always returns a valid pool object, in case no requested pool found, it returns
the default pool, in case of invalid pool name it returns none
pool.
A special pool none
always returns empty result and no errors.
db.getPoolTables(name, options)
Return all tables known to the given pool; the returned tables are in an object with column information merged from the cached columns from the database and the description columns given by the application. If options.names is 1 then return just the table names as a list.
db.getPools()
Return a list of all active database pools, returns list of objects with name: and type: properties
db.Pool(options, defaults)
Create a new database pool with default methods and properties
The db methods cover most use cases, but in case the native driver needs to be used this is how to get the client and use it with its native API; it is required to call pool.release at the end to return the connection back to the connection pool.
var pool = db.getPool("mongodb");
pool.get(function(err, client) {
var collection = client.collection('bk_user');
collection.findOne({ id: '123' }, function() {
pool.release(client);
});
});
db.Pool.prototype.configure(options)
Reconfigure properties, only subset of properties are allowed here so it is safe to apply all of them directly, this is called during realtime config update
db.Pool.prototype.open(callback)
Open a connection to the database, default is to return an empty object as a client
db.Pool.prototype.close(client, callback)
Close a connection, default is do nothing
db.Pool.prototype.query(client, req, options, callback)
Query the database, always return an array as a result (i.e. the second argument for the callback)
db.Pool.prototype.cacheColumns(options, callback)
Cache columns for all tables
db.Pool.prototype.cacheIndexes(options, callback)
Cache indexes for all tables
db.Pool.prototype.nextToken(client, req, rows)
Return next token from the client object
db.Pool.prototype.prepareOptions(options)
Update the options with pool config parameters if needed, the options is from the request
db.Pool.prototype.prepareRow(req)
Default prepareRow is to perform pool specific actions for prepared row before passing it to the op specific columns filterting
db.Pool.prototype.prepare(req)
Default prepare is to return all parameters in an object
db.Pool.prototype.bindValue(req, name, value, op)
Return the value to be used in binding, mostly for SQL drivers, on input value and col info are passed, this callback may convert the value into something different depending on the DB driver requirements, like timestamp as string into milliseconds
db.Pool.prototype.convertError(table, op, err, options)
Converts native DB driver error into other human readable format
db.Pool.prototype.processColumns(pool)
A callback that is called after this pool has cached columns from the database, it is called synchronously inside the db.cacheColumns method.
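To tie these driver methods together, here is a minimal sketch of a custom in-memory pool; the constructor call and the req fields used (op, obj) are assumptions based on the descriptions above, not a definitive API:

    var db = require('backendjs').db;

    var pool = new db.Pool({ name: "mem", type: "mem" });
    var store = {};

    // query must always return an array of rows as the second callback argument
    pool.query = function(client, req, options, callback) {
        switch (req.op) {
        case "add":
        case "put":
            store[req.obj.id] = req.obj;
            return callback(null, []);
        case "get":
            var row = store[req.obj.id];
            return callback(null, row ? [row] : []);
        default:
            return callback(null, []);
        }
    };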
db.prepare(op, table, obj, options)
Prepare for execution for the given operation: add, del, put, update,... Returns prepared object to be passed to the driver's .query method. This method is a part of the driver helpers and is not used directly in the applications.
db.prepareRow(pool, req)
Preprocess an object for a given operation, convert types, assign defaults...
db.prepareForUpdate(pool, req)
Keep only columns from the table definition if we have it. Go over all properties in the object and make sure the types of the values correspond to the column definition types; this is for those databases which are very sensitive to types, like DynamoDB.
db.joinColumn(req, obj, name, col)
Join several columns to produce a combined property if configured, given a column description and an object record it replaces the column value with joined value if needed. Empty properties will be still joined as empty strings. It always uses the original value even if one of the properties has been joined already.
Checks for join
property in the column definition.
- `join_ops` - an array with operations for which to perform the column join only, if not specified it applies to all operations, allowed values: add, put, incr, update, del, get, select
- `join_ifempty` - only join if the column value is not provided
- `skip_join` - can be used to restrict joins, it is a list with columns that should not be joined
- `join_pools` - can be an array with pool names which are allowed to do the join, other pools will skip joining this column
- `nojoin_pools` - can be an array with pool names which are not allowed to do the join, other pools will skip joining this column
- `join_strict` - can be used to perform the join only if all columns in the list are not empty, so the join is for all columns or none
- `join_all` - can be used to proceed and join empty values; without it any join stops on the first empty value but is marked to be checked later in case the empty column is not empty anymore, e.g. for uuid or other auto-generated column types
- `join_force` - can be used to force the join regardless of the existing value; without it, if the existing value contains the separator it is skipped
- `join_hash` - can be used to store a hash of the joined column to reduce the space and make the result value easier to use
- `join_cap, join_lower, join_upper` - convert the joined value with toTitle, lower or upper case

db.unjoinColumns(rows, name, col, options)
Split joined columns for all rows
db.convertRows(pool, req, rows, options)
Convert rows returned by the database into the Javascript format or into the format defined by the table columns. The following special properties in the column definition change the format:
- type = json - if a column type is json and the value is a string, the returned value will be converted into a Javascript object
- dflt - if this property is defined for a json type and the record does not have a value, it will be set to the specified default value
- list - split the value into an array, an optional .separator property can be specified
- unjoin - a true value or a list of names, it produces new properties by splitting the value by a separator and assigning the pieces to separate properties using names from the list; this is the opposite of the join property and is used separately if splitting is required, if the joined properties are already in the record then there is no need to split. If not a list, the names are taken from the join property.
Example:

    db.describeTables([ { test: { id: {}, name: {}, pair: { join: ["left","right"], unjoin: 1 } } } ]);
    db.put("test", { id: "1", type: "user", name: "Test", left: "123", right: "000" });
    db.select("test", {}, lib.log);
db.setProcessColumns(callback)
Add a callback to be called after each cache columns event, it will be called for each pool separately. The callback to be called may take options argument and it is called in the context of the pool.
The primary goal for this hook is to allow management of existing tables which are not owned by the backendjs application. For such tables, because we have not created them, we need to define column properties after the fact, and keeping column definitions in the app for such cases is not realistic. This callback allows handling such situations and can be used to set the necessary properties on the table columns.
Example, a few public columns, allow an admin to see all the columns
db.setProcessColumns(function() {
var cols = db.getColumns("users", { pool: this.name });
for (var p in cols) {
if (["id","name"].indexOf(p) > -1) cols[p].pub = 1; else cols[p].admin = 1;
}
})
db.getProcessRows(type, table, options)
Returns a list of hooks to be used for processing rows for the given table
db.runProcessRows(type, table, req, rows)
Run registered pre- or post- process callbacks.
- `type` - one of `pre` or `post`
- `table` - the table to run the hooks for, usually the same as req.table but can be '*' for global hooks
- `req` - the original db request object with the following required properties: op, table, obj, options, info
- `rows` - the result rows for post callbacks and the same request object for pre callbacks

db.setProcessRow(type, table, options, callback)
Assign a processRow callback for a table, this callback will be called for every row on every result being retrieved from the specified table thus providing an opportunity to customize the result.
The type defines at what time the callback will be called:
- `pre` - before making a request to the db on the query record
- `post` - after the request has finished, to be called on the result rows

All callbacks assigned to a table will be called in the order of assignment.
The callback accepts 2 arguments: function(req, row) where:
- `req` - the original request for a db operation with the required properties:
  - `op` - current db operation, like add, put, ...
  - `table` - current table being updated
  - `obj` - the record with data
  - `pool` - current request db pool name
  - `options` - current request db options
  - `info` - an object returned with special properties like affected_rows, next_token, only passed to the `post` callbacks
- `row` - a row from the result

When producing complex properties by combining other properties, both pre and post callbacks need to be synchronized to keep the record consistent.
For queries returning rows, if the callback returns true for a row it will be filtered out and not included in the final result set.
Example
db.setProcessRow("post", "bk_user", function(req, row) {
if (row.birthday) row.age = Math.floor((Date.now() - lib.toDate(row.birthday))/(86400000*365));
});
db.setProcessRow("post", "bk_icon", function(req, row) {
if (row.type == "private" && row.id != req.options.account.id) return true;
});
db.SqlPool(options, defaults)
Create a database pool for SQL like databases
db.SqlPool.prototype.cacheColumns(options, callback)
Call column caching callback with our pool name
db.SqlPool.prototype.prepare(req)
Prepare for execution, return an object with formatted or transformed SQL query for the database driver of this pool
db.SqlPool.prototype.query(client, req, options, callback)
Execute a query or if req.text is an Array then run all queries in sequence
db.SqlPool.prototype.nextToken(client, req, rows)
Support for pagination, for SQL this is the OFFSET for the next request
db.sqlQuery(pool, client, req, options, callback)
Execute one or more SQL statements
db.sqlCacheColumns(pool, options, callback)
Cache columns using the information_schema
db.sqlPrepare(pool, req)
Prepare SQL statement for the given operation
db.sqlQuote(val)
Quote value to be used in SQL expressions
db.sqlValue(value, options)
Return properly quoted value to be used directly in SQL expressions, format according to the type
db.sqlValueIn(list, type)
Return list in format to be used with SQL IN ()
db.sqlExpr(pool, name, value, options)
Build SQL expressions for the column and value; options may contain the following properties:
db.sqlTime(d)
Return time formatted for SQL usage as ISO, if no date specified returns current time
db.sqlLimit(pool, req)
Build SQL orderby/limit/offset conditions, config can define defaults for sorting and paging
db.sqlWhere(pool, req, query, keys, join)
Build SQL where condition from the keys and object values, returns SQL statement to be used in WHERE
db.sqlCreate(pool, req)
Create SQL table using table definition
db.sqlUpgrade(pool, req)
Create ALTER TABLE ADD COLUMN statements for missing columns
db.sqlDrop(pool, req)
Create SQL DROP TABLE statement
db.sqlGet(pool, req)
Get one object from the database, options may define the following properties:
db.sqlSelect(pool, req)
Select object from the database, options may define the following properties:
db.sqlInsert(pool, req)
Build SQL insert statement
db.sqlUpdate(pool, req)
Build SQL statement for update
db.sqlDelete(pool, req)
Build SQL statement for delete
db.initTables()
Merge all tables from all modules
db.createTables(options, callback)
Create or upgrade the tables for the given pool
db.describeTables(tables, callback)
Define new tables or extend/customize existing tables. Table definitions are used with every database operation; on startup the backend reads all existing table columns from the database and caches them in memory, but some properties, like public columns, are only specific to the backend, so to mark such columns the table must be described using this method. Only columns with changed properties need to be specified, other columns will be left as is.
Example
    db.describeTables({
        bk_user: { name: { pub: 1 } },
        test: { id: { primary: 1, type: "int" },
                name: { pub: 1, index: 1 }
        }
    });
db.convertError(pool, table, op, err, options)
Convert native database error in some generic human readable string
db.refreshColumns(options, callback)
Refresh columns for all pools which need it
db.cacheColumns(options, callback)
Reload all columns into the cache for the pool, options can be a pool name or an object like { pool: name }.
If the tables property is an array it asks to refresh only the specified tables, if that is possible.
db.existsPool(name)
Returns true if a pool exists
db.table(table)
Return a normalized table name
db.alias(table)
Returns a table alias if mapped or the same table name normalized
db.existsTable(table, options)
Returns true if a table exists
db.getColumns(table, options)
Return columns for a table or null, columns is an object with column names and objects for definition.
db.getColumn(table, name, options)
Return the column definition for a table, for non-existent columns it returns an empty object
db.getCapacity(table, options)
Return an object with a capacity property which is the max write capacity for the table, for DynamoDB only. By default it checks the writeCapacity property of all table columns and picks the max.

The options can specify the capacity explicitly:
- `useCapacity` - `write`, `read` or a number with the max capacity to use

db.checkCapacity(cap, consumed, callback)
Check if the number of requests exceeds the capacity per second, delay if necessary; for DynamoDB only, but can be used for pacing requests with any database or generically. The cap must be initialized with a db.getCapacity call.
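A sketch of pacing writes during a scan; the capacity options mirror those described for db.scan above, and one unit is assumed consumed per update:

    // Throttle updates to half of the table write capacity
    var cap = db.getCapacity("bk_user", { useCapacity: "write", factorCapacity: 0.5 });
    db.scan("bk_user", {}, {}, function(row, next) {
        db.update("bk_user", row, function(err) {
            db.checkCapacity(cap, 1, next);
        });
    }, lib.log);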
db.getSelectedColumns(req)
Return list of selected or allowed only columns, empty list if no options.select
is specified or it is equal to *
. By default only allowed or existing
columns will be returned, to pass the list as is to the driver just use options.select_all
instead.
db.getFilteredColumns(table, filter, options)
Return table columns that match the given filter. The following filter values are special:
- `undefined` - means skip the column
- `null` - means the column property does not exist
- `Infinity` - means the column property is not undefined
- `name` - will match against the column name, not a property

The options can contain:
Example:
db.getFilteredColumns("bk_user", "pub")
db.getFilteredColumns("bk_user", { pub: undefined })
db.getFilteredColumns("bk_user", { pub: null, internal: 1 })
db.getFilteredColumns("bk_user", { type: "now" }, { list: 1 })
db.getFilteredColumns("bk_user", { name: /^email/ }, { select: ["type"] })
db.checkCustomColumn(req, name)
Returns type for a global custom column if exists otherwise null, all resolved
columns will be saved in req.custom
for further reference as name: type.
For request specific custom columns pass options.custom_columns
array in the format: [ RegExp, type, ...]
db.skipColumn(req, name, val)
Verify a column against common options for inclusion/exclusion in the operation, returns 1 if the column must be skipped. Options that affect the decision include:
- `options.no_columns=1`
- `options.skip_columns=["a","b"]`
- `options.allow_pools=["sqlite","mysql"]`
db.filterRows(query, rows, options)
Given an object with data and a list of keys, perform the comparison in memory for all rows and return only the rows that match all the keys. This method is used by custom filters in db.select for drivers which cannot perform comparisons on non-indexed columns, like DynamoDB or Cassandra.
The rows that satisfy the primary key conditions are returned first, and then this function is called to eliminate the records that do not satisfy the non-indexed column conditions.
Options support the following properties:
db.getKeys(table, options)
Return primary keys for a table or an empty array; if allkeys is given in the options then return a list of all properties involved in primary keys, including joined columns.
db.getIndexes(table, options)
Return indexes for a table or empty object, each item in the object is an array with index columns
db.getIndexColumns(table, options)
Return columns for all indexes as a list
db.getIndexForKeys(table, keys, options)
Return an index name that can be used for searching for the given keys, the index match is performed on the index columns from the left to right and stop on the first missing key, for example for given keys { id: "1", name: "2", state: "VA" } the index ["id", "state"] or ["id","name"] will be returned but the index ["id","city","state"] will not.
db.getSearchKeys(table, options)
Return keys for the table search, if options.keys provided and not empty it will be used otherwise table's primary keys will be returned. This is a wrapper that makes sure that valid keys are used and deals with input errors like empty keys list to be consistent between different databases. This function always returns an Array even if it is empty.
db.getSearchQuery(table, obj, options)
Return query object based on the keys specified in the options or primary keys for the table, only search properties will be returned in the query object
db.getQueryForKeys(keys, obj, options)
Returns an object based on the list of keys, basically returns a subset of properties.
options.keysMap
defines an object to map record properties with the actual names to be returned.
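A small illustrative example with hypothetical values:

    db.getQueryForKeys(["id", "name"], { id: 1, name: "a", status: "ok" })
    // => { id: 1, name: "a" }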
Event queue processor
If any of the events-worker-queue-XXX parameters are defined then workers subscribe to the configured event queues and listen for events.
Each event queue can run multiple functions independently but will ack/nack for all functions, so to deal with replay dups it is advised to split between multiple consumers using the syntax: queue#channel@consumer
Multiple event queues can be defined and processed at the same time.
An event processing function takes 2 arguments: an event and a callback to call on finish, as in the sketch below.
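A minimal sketch of such a handler; the module name and configuration line are hypothetical:

    // In a module "ticket", configured with: -events-worker-queue-ticket ticket.processEvents
    module.exports.processEvents = function(event, callback) {
        console.log("ticket event:", event);
        callback();
    };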
Config parameters
- `events-worker-queue-(.+)` - Queues to subscribe for workers, the same queues can be used at the same time with different functions, channels and consumers; the event queue format is `queue#channel@consumer`, ex: -events-worker-queue-ticket ticket.processEvents, -events-worker-queue-ticket#inbox@staff ticket.processInboxEvents, -events-worker-queue-ticket@staff ticket.processStaffEvents
- `events-worker-options-(.+)` (json) - Custom parameters by queue name, passed to `ipc.subscribeQueue` on worker start, useful with channels, ex: -events-worker-options-ticket {"count":3,"raw":1}
- `events-worker-delay` (int) - Delay in milliseconds for a worker before it will start accepting jobs, for cases when other dependencies may take some time to start
- `events-max-runtime` (int) - Max number of seconds event processing can run before being killed
- `events-routing-(.+)` (regexp) - Queue routing by event topic
- `events-properties` (list) - List of properties to copy into an event envelope from the provided options

events.shutdownWorker(options, callback)
Perform graceful worker shutdown, to be used for workers restart
events.checkTimes()
Check how long we have been running a job and force kill if exceeded, check if the total life time is exceeded.
If an exit is required, the shutdownWorker methods will receive options with the shutdownReason property set and the name-sake property will contain the exceeded value.
events.putEvent(topic, data, options)
Place an event into a queue by topic
events.getQueueByHandler(proc)
Return a queue name by the event handler
Downloads a file using HTTP and passes it to the callback if provided.
- query parameters are stringified with `qs.stringify`
- `params.mtime` - if present, or if `file` is given, the file's last modified timestamp is used as mtime
- `retryMultiplier` - increase the retry timeout by this multiplier on each retry

On end, the params object will contain the following updated properties:
Note: SIDE EFFECT: the params object is modified in place, so many options will be changed, removed or added
IPC communications between processes and support for caching and subscriptions via queues.
The module is EventEmitter and emits messages received.
Some drivers may support TTL, so a global options.ttl or a local options.ttl can be used for put/incr operations and it will be honored if supported.
For caches that support maps, like Redis or Hazelcast the options.mapName
can be used with get/put/incr/del to
work with maps and individual keys inside maps.
All methods use options.queueName
or options.cacheName
for non-default queue or cache.
If it is an array then a client will be picked sequentially by maintaining internal sequence number.
To specify a channel within a queue use the format queueName#channelName
, for drivers that support multiple channels like NATS/Redis
the channel will be used for another subscription within the same connection.
For drivers (NATS) that support multiple consumers the full queue syntax is queueName#channelName@groupName
or queueName@groupName
,
as well as the groupName
property in the subscribe options.
A special system queue can be configured and it will be used by all processes to listen for messages on the channel bkjs:role
, where the role
is the process role, the same messages that are processed by the server/worker message handlers like api:restart, config:init,....
All instances will be listening and processing these messages at once; the most useful use case is refreshing the DB config on demand or restarting without configuring any other means like SSH, keys, ...
Config parameters
- `ipc-ping-interval` (int) - Interval for worker keep-alive pings, if a ping is not received within this period the worker will be killed
- `ipc-lru-max` (int) - Max number of items in the limiter LRU cache, this cache is managed by the master Web server process and available to all Web processes, maintaining only one copy per machine
- `ipc-system-queue` - System queue name to send broadcast control messages, this is a PUB/SUB queue to process system messages like restart, re-init config, ...
- `ipc-(cache|queue)-?([a-z0-9]+)?$` - A URL that points to a cache/queue server in the format `PROTO://HOST[:PORT]?PARAMS`, multiple clients can be defined with unique names, all params starting with `bk-` will be copied into the options without the prefix and removed from the url, the rest of the params will be left in the url, ex: -ipc-queue-redis redis://localhost?bk-count=3&bk-ttl=3000
- `ipc-(cache|queue)(-([a-z0-9]+)?-?options)-(.+)$` - Additional parameters for clients, specific to each implementation, ex: -ipc-queue-options-count 10
- `ipc-(cache|queue)-([a-z0-9]*)-options$` (map) - Additional parameters for clients, specific to each implementation, ex: -ipc-queue--options count:10,interval:100

Ipc.prototype.handleServerMessages(worker, msg)
To be used in messages processing that came from the clients or other way
Ipc.prototype.sendReplPort(role, worker)
Send REPL port to a worker if needed
Ipc.prototype.newMsg(op, msg, options)
Returns an IPC message object, msg
must be an object if given.
Ipc.prototype.emitMsg(op, msg, options)
Wrapper around EventEmitter emit
call to send unified IPC messages in the same format
Ipc.prototype.sendMsg(op, msg, options, callback)
Send a message to the master process via IPC messages, callback is used for commands that return value back
- `timeout` - can be used to specify how long to wait for the reply, if not given the default is used

If called inside the server it processes the message directly; the reply is passed to the callback if given.
Examples:
ipc.sendMsg("op1", { data: "data" }, { timeout: 100 })
ipc.sendMsg("op1", { name: "name", value: "data" }, function(data) { console.log(data); })
ipc.sendMsg("op1", { 1: 1, 2: 2 }, { timeout: 100 })
ipc.sendMsg("op1", { 1: 1, 2: 2 }, function(data) { console.log(data); })
ipc.newMsg({ __op: "op1", name: "test" })
Ipc.prototype.initServer()
This function is called by a master server process to setup IPC channels and support for cache and messaging
Ipc.prototype.initWorker()
This function is called by a worker process to setup IPC channels and support for cache and messaging
Ipc.prototype.createClient(url, options)
Return a new client for the given host or null if not supported
Ipc.prototype.getQueue(options)
Return a cache or queue client by name if specified in the options or use default client which always exists,
use queueName
to specify a specific queue. If it is an array it will rotate items sequentially.
Ipc.prototype.initClients()
Initialize a client for cache or queue purposes, previous client will be closed.
Ipc.prototype.checkClients()
Initialize missing or new clients, existing clients stay the same
Ipc.prototype.closeClients()
Close all existing clients except empty local client
Ipc.prototype.stats(options, callback)
Returns the cache statistics, the format depends on the cache type used, for queues it returns a property 'queueCount' with currently visible messages in the queue, 'queueRunning' with currently in-flight messages
Ipc.prototype.clear(pattern, options, callback)
Clear all or only items that match the given pattern
Ipc.prototype.get(key, options, callback)
Retrieve an item from the cache by key.
- `options.set` - if given and no value exists in the cache it will be set as the initial value, still nothing will be returned to signify that a new value was assigned
- `options.mapName` - defines a map from which the key will be retrieved if the cache supports maps, to get the whole map the key must be set to *
- `options.listName` - defines a list from which to get items, if a key is given it will return 1 if it belongs to the list; if no key is provided it will return an array with 2 elements: [a random key, the length of the list], to get the whole list specify * as the key. Specifying `del` in the options will delete the returned items from the list.
- `options.ttl` - can be used with lists together with `del` and an empty key, in such a case all popped keys will be saved in the cache with the specified time to live; when being popped, every key is checked whether it has been served already, i.e. it exists in the cache and has not expired yet, such keys are ignored and only never seen keys are returned
- `options.datatype` - specifies that the returned value must be converted into the specified type using `lib.toValue`
If the key is an array then it returns an array with values for each key; for non-existent keys an empty string will be returned. For maps, only if the key is * will the whole object be returned, otherwise only the value(s) are returned.
Example
ipc.get(["my:key1", "my:key2"], function(err, data) { console.log(data) }); ipc.get("my:key", function(err, data) { console.log(data) }); ipc.get("my:counter", { set: 10 }, function(err, data) { console.log(data) }); ipc.get("*", { mapName: "my:map" }, function(err, data) { console.log(data) }); ipc.get("key1", { mapName: "my:map" }, function(err, data) { console.log(data) }); ipc.get(["key1", "key2"], { mapName: "my:map" }, function(err, data) { console.log(data) }); ipc.get(["key1", "key2"], { listName: "my:list" }, function(err, data) { console.log(data) }); ipc.get("", { listName: "my:list", del: 1 }, function(err, data) { console.log(data) }); ipc.get("", { listName: "my:list", del: 1, ttl: 30000 }, function(err, data) { console.log(data) });
Ipc.prototype.del(key, options, callback)
Delete an item by key(s); if the key is an array all keys will be deleted at once, atomically if supported.
- `options.mapName` - defines a map from which the item will be deleted if the cache supports maps, to delete the whole map the key must be set to *
- `options.listName` - defines a list from which the item should be removed

Example:
ipc.del("my:key")
ipc.del("key1", { mapName: "my:map" })
ipc.del("*", { mapName: "my:map" })
ipc.del("1", { listName: "my:list" })
Ipc.prototype.put(key, val, options, callback)
Replace or put a new item in the cache.
- `options.ttl` - can be passed in milliseconds if the driver supports it
- `options.mapName` - defines a map where the value will be stored if the cache supports maps, to store the whole map in one operation the key must be set to * and the val must be an object
- `options.setmax` - if not empty, tells the driver to set the new number only if there is no existing value or it is less than the new number, only works for numeric values
- `options.listName` - defines a list where to add items, val can be a value or an array of values, key is ignored in this case

Example:
ipc.put("my:key", 2)
ipc.put("my:key", 1, { setmax: 1 })
ipc.put("key1", 1, { mapName: "my:map" })
ipc.put("*", { key1: 1, key2: 2 }, { mapName: "my:map" })
ipc.put("", [1,2,3], { listName: "my:list" })
Ipc.prototype.incr(key, val, options, callback)
Increase/decrease a counter in the cache by val, non-existent items are treated as 0; if a callback is given, an error and the new value will be returned.
- `options.ttl` - in milliseconds, can be used if the driver supports it
- `options.mapName` - defines a map where the counter will be stored if the cache supports maps
- `options.returning` - return the old or new map object, if `new` or `*` it will be the first item in the result array, if `old` the last
- if `val` is an object then the key is treated as a map and all numeric properties will be incremented, other properties just set; this is the same as setting the key to '*' and defining mapName in the options

Example:
ipc.incr("my:key", 1)
ipc.incr("count", 1, { mapName: "my:map" })
ipc.incr("my:map", { count: 1, name: "aaa", mtime: Date.now().toString() })
ipc.incr("*", { count: 1, name: "bbb", mtime: Date.now().toString() }, { mapName: "my:map" })
Ipc.prototype.subscribe(channel, options, callback)
Subscribe to receive messages from the given channel, the callback will be called only on new message received.
- `options.queueName` - defines the queue, if not specified the default queue is used

Example:
ipc.subscribe("alerts", (msg) => {
req.res.json(data);
}, req);
Ipc.prototype.unsubscribe(channel, options, callback)
Close a subscription for the given channel, no more messages will be delivered.
- `options.queueName` - defines the queue, if not specified the default queue is used

Ipc.prototype.publish(channel, msg, options, callback)
Publish an event to the channel to be delivered to all subscribers. If the msg
is not a string it will be stringified.
- `options.queueName` - defines the queue, if not specified the message is sent to the default queue

Ipc.prototype.broadcast(channel, msg, options, callback)
Send a message to a channel, this is a high level routine that uses the corresponding queue, eventually calling ipc.publish.
If no client or queue is provided in the options it uses the default systemQueue.
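A sketch, assuming the system queue is configured and that Web processes listen on the bkjs:web channel as described above:

    // Ask every Web process to re-read the DB config
    ipc.broadcast("bkjs:web", "config:init");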
Ipc.prototype.sendBroadcast(msg, options)
Send a broadcast to all server roles
Ipc.prototype.subscribeQueue(options, callback)
Listen for messages from the given queue, the callback will be called only on new message received.
- `options.queueName` - defines the queue, if not specified the default queue is used

The callback accepts 2 arguments: a message and an optional next callback; if provided it must be called at the end to confirm or reject the message processing. Only errors with code >= 500 will result in rejection; not all drivers support the next callback if the underlying queue does not support message acknowledgement.
Depending on the implementation, this can work as fan-out, delivering messages to all subscribed to the same channel or
can implement job queue model where only one subscriber receives a message.
For some cases like Redis this is the same as subscribe
.
When the next callback is provided it means the queue implementation requires an acknowledgement of successful processing; returning an error with err.status >= 500 will keep the message in the queue to be processed later. The special code 600 means to keep the job in the queue and report it as a warning in the log.
Example:
    ipc.subscribeQueue({ queueName: "jobs" }, (msg, next) => {
        console.log("job:", msg);
        if (next) next();
    });
Ipc.prototype.unsubscribeQueue(options, callback)
Stop listening for message, if no callback is provided all listeners for the key will be unsubscribed, otherwise only the specified listener.
- `options.queueName` - defines the queue, if not specified the default queue is used

The callback will not be called.
It keeps a count of how many subscribe/unsubscribe calls have been made and stops any internal listeners once nobody is subscribed. This is specific to queues which rely on polling.
Ipc.prototype.publishQueue(msg, options, callback)
Submit a message to the queue, if the msg
is not a string it will be stringified.
- `options.queueName` - defines the queue, if not specified the message is sent to the default queue
- `options.stime` - defines when the message should be processed, it will be held in the queue until the time comes
- `options.etime` - defines when the message expires, i.e. it will be dropped if not executed before this time

Ipc.prototype.monitorQueue(options)
Queue specific monitor services that must be run in the master process, this is intended to perform queue cleanup or dealing with stuck messages (Redis)
Ipc.prototype.unpublishQueue(msg, options, callback)
Queue specific message deletion from the queue in case of abnormal shutdown or a job running too long, in order not to re-run it after the restart; this is for queues which require manual message deletion after execution (SQS). Each queue client must maintain the mapping or other means to identify messages; the options is the message passed to the listener.
Ipc.prototype.limiter(options, callback)
Check for rate limit using the default or specific queue, by default TokenBucket using local LRU cache is used unless a queue client provides its own implementation.
The options must have the following properties:
The callback takes 2 arguments:
- `delay` - the number of milliseconds till the bucket can be used again if not consumed, i.e. 0 means consumed
- `info` - an object with info about the state of the token bucket after the operation with properties: delay, count, total, elapsed

Ipc.prototype.checkLimiter(options, callback)
Keep checking the limiter until it is clear to proceed with the operation, if there is no available tokens in the bucket
it will wait and try again until the bucket is filled.
To support the same interface and the ability to abort the loop, pass options.retry with the number of loops to run before exiting.
The callback will receive the same arguments as ipc.limiter; options.retries will be set to how many times it tried.
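A sketch of a per-user rate limit inside a request handler; the token bucket parameters (rate, max, interval) are assumptions, and login stands for the current user id:

    // Allow roughly 10 requests per second per login
    ipc.limiter({ name: "rl:" + login, rate: 10, max: 10, interval: 1000 }, (delay) => {
        if (delay) return req.res.status(429).send("too many requests");
        // proceed with the request
    });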
Ipc.prototype.localLimiter(msg)
Uses msg.name as a key and returns the same message with consumed set to 1 or 0
Ipc.prototype.lock(name, options, callback)
Implementation of a lock with optional ttl, only one instance can lock it, can be for some period of time and will expire after timeout.
A lock must be uniquely named and the ttl period is specified by options.ttl
in milliseconds.
This is intended to be used for background job processing or something similar when
only one instance is needed to run. At the end of the processing ipc.unlock
must be called to enable another instance immediately,
otherwise it will be available after the ttl only.
- if `options.timeout` is given the function will keep trying to lock for the given number of milliseconds
- if `options.set` is given it will unconditionally set the lock for the specified ttl, this is for cases when the lock must be active longer because of a long running task
A callback must be passed which will receive an error and a boolean value; if true is returned it means the lock has been acquired by the caller, otherwise it is already locked by another instance. In case of an error the lock is not supposed to be held by the caller.
Example:
ipc.lock("my-lock", { ttl: 60000, timeout: 30000 }, function(err, locked) {
if (locked) {
...
ipc.unlock("my-lock");
}
});
Ipc.prototype.unlock(name, options, callback)
Unconditionally unlock the lock, any client can unlock any lock.
Queue client using RabbitMQ server
To enable install the npm module:
npm i -g amqplib
IpcClient.prototype.close()
Close the current connection, ports, etc.; the client is not valid after this call
IpcClient.prototype.applyOptions(options)
Prepare options to be used safely, parse the reserved params from the url
IpcClient.prototype.applyReservedOptions(options)
Handle reserved options
IpcClient.prototype.channel(options)
Return a subscription channel from the given name or options, the same client can support multiple subscriptions, additional
subscriptions are specified by appending #channel
to the options.queueName
, default is to use the primary queue name.
Consumer name if present is stripped off.
IpcClient.prototype.consumer(options)
Returns the consumer name for the given queue or empty if not specified, groupName
will be used as the consumer name if present
IpcClient.prototype.canonical(options)
Return the canonical queue name, the default channel is not appended, the default consumer is not appended
IpcClient.prototype.stats(options, callback)
CACHE MANAGEMENT. Returns the cache statistics to the callback as the first argument, the object structure is specific to each cache implementation
IpcClient.prototype.clear(pattern, callback)
Clear all or only matched keys from the cache
IpcClient.prototype.get(key, options, callback)
Returns an item from the cache by a key, the callback is required and it accepts only the item; on any error null or undefined will be returned
IpcClient.prototype.put(key, val, options, callback)
Store an item in the cache, options.ttl
can be used to specify TTL in milliseconds
IpcClient.prototype.incr(key, val, options, callback)
Add/subtract a number from an item, returns the new number in the callback if provided; in case of an error null/undefined should be returned
IpcClient.prototype.del(key, options, callback)
Delete an item from the cache
IpcClient.prototype.subscribe(channel, options, callback)
EVENT MANAGEMENT Subscribe to receive notification from the given channel
IpcClient.prototype.unsubscribe(channel, options, callback)
Stop receiving notifications on the given channel
IpcClient.prototype.publish(channel, msg, options, callback)
Publish an event
IpcClient.prototype.subscribeQueue(options, callback)
QUEUE MANAGEMENT Listen for incoming messages
IpcClient.prototype.unsubscribeQueue(options, callback)
Stop receiving messages
IpcClient.prototype.publishQueue(msg, options, callback)
Submit a job to a queue
IpcClient.prototype.unpublishQueue(options, callback)
Drop a job in case of abnormal shutdown or exceeded run time
IpcClient.prototype.pollQueue(options)
This method must take care of keeping the poller running via an interval or timeout as long as this._pollingQueue=1.
IpcClient.prototype.schedulePollQueue(options, timeout)
Schedule the next poller iteration immediately or after a timeout; check the configured polling rate and make sure it polls no more than the configured number of times per second. If not ready then keep polling until the ready signal is sent.
Two events can be used for back pressure support: pause and unpause to stop/restart queue processing
IpcClient.prototype.monitorQueue()
Queue monitor or cleanup service; when a poller is involved this will be started and can be used for cleaning up stale messages or other maintenance work that is required.
IpcClient.prototype.lock(name, options, callback)
LOCKING MANAGEMENT By default return an error
IpcClient.prototype.limiter(options, callback)
RATE CONTROL Rate limit check, by default it uses the master LRU cache meaning it works within one physical machine only.
The options must have the following properties:
- the parameters for the `metrics.TokenBucket` rate limiter

The callback arguments must be:
Queue client using NATS server
To enable install the npm modules:
npm i -g nats
Configuration:
ipc-queue-nats=nats://localhost:4222
Cache/queue client based on Redis server using https://github.com/NodeRedis/node_redis3
The queue client implements reliable queue using sorted sets, one for the new messages and one for the messages that are being processed if timeout is provided. With the timeout this queue works similar to AWS SQS.
The interval
config property defines in ms how often to check for new messages after processing a message, i.e. after a message is processed
it can poll immediately or after this amount of time
The retryInterval
config property defines in ms how often to check for new messages after an error or no data, i.e. on an empty
poll when no messages are processed it can poll immediately or after this amount of time
The visibilityTimeout
property specifies to use a shadow queue where all messages that are being processed are stored,
while the message is processed the timestamp will be updated so the message stays in the queue, if a worker exits or crashes without
confirming the message finished it will be put back into the work queue after visibilityTimeout
milliseconds. The queue name that
keeps active messages is appended with #.
Protocol rediss: will use TLS to connect to Redis servers, this is required for Redis Cache Serverless
The threshold
property defines the upper limit of how many active messages can be in the queue when to show an error message, this is
for monitoring queue performance
The rate limiter implements the Token Bucket algorithm using a Lua script inside Redis, the only requirement is that all workers use NTP for time synchronization
Examples:
ipc-client=redis://host1
ipc-client-options-interval=1000
ipc-client=redis://host1?bk-visibilityTimeout=30000&bk-count=2
Queue client using AWS SQS, full queue url can be used or just the name as sqs://queuename
The count
config property specifies how many messages to process at the same time, default is 1.
The interval
config property defines in ms how often to check for new messages after processing a message, i.e. after a message is processed
it can poll immediately or after this amount of time, default is 1000 milliseconds.
The retryInterval
config property defines in ms how often to check for new messages after an error or no data, i.e. on an empty
poll when no messages are processed it can poll immediately or after this amount of time, default is 5000 milliseconds.
The visibilityTimeout
property specifies how long the messages being processed stay hidden, in milliseconds.
The timeout
property defines how long to wait for new messages, i.e. the long poll, in milliseconds
The retryCount
and retryTimeout
define how many times to retry failed AWS HTTP requests, default is 5 times
with the backoff starting at 500 milliseconds.
For messages that have the startTime
property, which is a time in the future when a message must actually be processed, there
is a parameter maxTimeout
which defines in milliseconds the max time a message can stay invisible while waiting for its scheduled date,
default is 6 hours, the AWS max is 12 hours. The scheduling is implemented using the AWS visibilityTimeout
feature, keeping
scheduled messages hidden until the actual time.
Examples:
ipc-queue=sqs://messages?bk-interval=60000
ipc-queue=https://sqs.us-east-1.amazonaws.com/123456/messages?bk-visibilityTimeout=300&bk-count=2
Job queue processor
When launched with jobs-workers
parameter equal or greater than 0, the master spawns a number of workers which subscribe to
configured job queues or the default queue and listen for messages.
A job message is an object that defines what method from which module to run with the options as the first argument and a callback as the second.
Multiple job queues can be defined and processed at the same time.
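For illustration, job messages in the accepted formats might look like this (a sketch; scraper.run is a placeholder method, as in the cron examples below):

    { job: "scraper.run" }
    { job: { "scraper.run": { url: "http://host1" } } }
    { job: [ { "scraper.run": { url: "http://host1" } }, { "scraper.run": { url: "http://host2" } } ] }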
Config parameters
jobs-workers, type: "number", min: -1, max: 32, descr: "How many worker processes to launch to process the job queue, -1 disables jobs, 0 means launch as many as the CPUs available"
jobs-worker-cpu-factor, type: "real", min: 0, descr: "A number to multiply the number of CPUs available to make the total number of workers to launch, only used if workers is 0"
jobs-worker-args, type: "list", descr: "Node arguments for workers, for passing v8 options, see process"
jobs-worker-env, type: "json", logger: "warn", descr: "Environment to be passed to the worker via fork, see cluster.fork"
jobs-worker-delay, type: "int", descr: "Delay in milliseconds for a worker before it will start accepting jobs, for cases when other dependencies may take some time to start"
jobs-max-runtime, type: "int", min: 0, descr: "Max number of seconds a job can run before being killed"
jobs-max-lifetime, type: "int", min: 0, descr: "Max number of seconds a worker can live, after that amount of time it will exit once all the jobs are finished, 0 means indefinitely"
jobs-shutdown-timeout, type: "int", min: 500, descr: "Max number of milliseconds to wait for the graceful shutdown sequence to finish, after this timeout the process just exits"
jobs-worker-queue, type: "list", onupdate: function() { if (ipc.role=="worker"&&core.role=="worker") this.subscribeWorker() }, descr: "Queue(s) to subscribe for workers, multiple queues can be processed at the same time, i.e. more than one job can run from different queues"
jobs-worker-options-(.+), obj: "workerOptions", make: "$1", type: "json", descr: "Custom parameters by queue name, passed to ipc.subscribeQueue on worker start, useful with channels, ex: -jobs-worker-options-nats#events {\"count\":10}"
jobs-cron-queue, type: "list", min: 1, descr: "Default queue to use for cron jobs"
jobs-global-queue, type: "list", min: 1, descr: "Default queue for all jobs, the queueName is ignored"
jobs-global-ignore, type: "list", descr: "Queue names which ignore the global setting, the queueName is used as usual"
jobs-cron, type: "bool", descr: "Allow cron jobs to be executed from the local etc/crontab file or via config parameter"
jobs-schedule, type: "json", onupdate: function() { if (core.role == "master" && this.cron) this.scheduleCronjobs("config", this.schedule) }, logger: "error", descr: "Cron jobs to be scheduled, the JSON must be in the same format as crontab file"
jobs-unique-queue, descr: "Default queue name to use for keeping track of unique jobs"
jobs-unique-ignore, type: "regexp", descr: "Ignore all unique parameters if a job's uniqueKey matches"
jobs-unique-set-ttl-([0-9]+), type: "regexp", obj: "uniqueSetTtl", make: "$1", descr: "Override unique TTL to a new value if matches the unique key, ex: -jobs-unique-ttl-100 KEY"
jobs-unique-logger, descr: "Log level for unique error conditions"
jobs-retry-visibility-timeout, type: "map", maptype: "int", descr: "Visibility timeout by error code >= 500 for queues that support it"

jobs.configureMaster(options, callback)
Initialize jobs processing in the master process
jobs.configureWorker(options, callback)
Initialize a worker to be ready for jobs to execute, in instance mode setup timers to exit on no activity.
jobs.shutdownWorker(options, callback)
Perform graceful worker shutdown, to be used for worker restarts
jobs.exitWorker(options)
Perform graceful worker shutdown and then exit the process
jobs.initServer(options, callback)
Initialize a master that will manage jobs workers
jobs.initWorker(options, callback)
Initialize a worker for processing jobs
jobs.isCancelled(job, tag, value)
Returns true if a task with given name must be cancelled, this flag is set from the jobs master and
stoppable tasks must check it from time to time to terminate gracefully.
if value
is given it will return true only if it exactly equals the value set in the task cancel state.
The cancel state is cleared only if a tag is given, if only the name is matched the cancel state remains for other tasks
jobs.cancelTask(name, options)
Send cancellation request to a worker or all workers, this has to be called from the jobs master.
options.workers
can be a single worker id or a list of worker ids, if not given the request will be sent to all workers for the current process cluster.
options.tag
is opaque data that will be used to verify which task should be cancelled, without it all tasks with the given name will be cancelled.
jobs.getMaxRuntime()
Find the max runtime allowed in seconds
jobs.checkTimes()
Check how long a job has been running and force kill it if exceeded, check if the total life time is exceeded.
If an exit is required the shutdownWorker
methods will receive options with the shutdownReason
property
set and the name-sake property will contain the exceeded value.
jobs._badJob(jobspec)
Make sure the job is valid and has all required fields, returns a normalized job object or an error, the jobspec must be in one of the following formats:
"module.method"
{ job: "module.method" }
{ job: { "module.method": {}, .... } }
{ job: [ "module.method", { "module.method": {} ... } ...] }
any task in string format "module.method" will be converted into { "module.method": {} } automatically
jobs.checkOptions(jobspec, options)
Apply special job properties from the options
jobs.submitJob(jobspec, options, callback)
Submit a job for execution, it will be saved in a queue and will be picked up later and executed.
The queue and the way how it will be executed depends on the configured queue. See isJob
for
the format of the job objects.
jobspec.uniqueTtl
if greater than zero it defines the number of milliseconds for this job to stay in the queue or run,
it creates a global lock using the job object as the hash key, no other job can run until the ttl expires or the job
finishes, non-unique jobs will be kept in the queue and repeated later according to the visibilityTimeout
setting.
jobspec.uniqueKey
can define an alternative unique key for this job for cases when different jobs must be run sequentially
jobspec.uniqueKeep
if true then keep the unique lock after the jobs finished, otherwise it is cleared
jobspec.uniqueDrop
if true will make non-unique jobs to be silently dropped instead of keeping them in the queue
jobspec.logger
defines the logger level which will be used to log when the job is finished, default is debug
jobspec.maxRuntime
defines max number of seconds this job can run, if not specified then the queue default is used
jobspec.uniqueTag
defines additional tag to be used for job cancelling, for cases when multiple jobs are running with the same method
jobspec.uniqueOnce
if true then the visibility timeout is not kept alive while the job is running
jobspec.noWait
will run the job and delete it from the queue immediately, not at the end, for one-off jobs
jobspec.noWaitTimeout
number of seconds before deleting the job for one-off jobs, taking into account the uniqueKey and visibility timeout to give time
to check for uniqueness and exit, can be used regardless of the noWait flag
jobspec.noVisibility
will always delete messages after processing, ignore 600 errors as well
jobspec.visibilityTimeout
custom timeout for how long to keep this job invisible, overrides the default timeout
jobspec.retryVisibilityTimeout
an object with custom timeouts for how long to keep this job invisible by error status which results in keeping tasks in the queue for retry
jobspec.stopOnError
will stop task processing on the first error, otherwise all errors will be just logged. Errors with status >= 600 will
stop the job regardless of this flag
jobspec.startTime
and/or jobspec.endTime
will define the time period during which this job is allowed to run, if
outside the period it will be dropped
options.delay
is only supported by SQS currently, it delays the job execution for the specified amount of ms
options.dedup_ttl
- if set it defines the number of ms to keep track of duplicate messages, it tries to preserve once-only behaviour. To make
a queue automatically use dedup mode it can be set in the queue options: -ipc-queue[-NAME]-options-dedup_ttl 86400000
.
Note: uniqueTtl
settings take precedence and if present dedup is ignored.
Special queue name: jobs.selfQueue
is reserved to run the job immediately inside the current process,
it will call the runJob
directly, this is useful in cases when already inside a worker and instead of submitting a new job
just run it directly. Any queue can be configured to run in selfQueue
by setting -ipc-queue[-NAME]-options-self-queue 1
.
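A minimal sketch of submitting a job, assuming the jobs module is exposed on the main export and a queue named jobs is configured (the method name is a placeholder):

    const bkjs = require('backendjs');
    bkjs.jobs.submitJob({
        job: { "scraper.run": { url: "http://host1" } },
        uniqueTtl: 60000,      // hold the unique lock for 60 seconds
        logger: "info",        // log level used when the job finishes
    }, { queueName: "jobs" }, (err) => {
        if (err) console.error("submit failed:", err);
    });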
jobs.runJob(jobspec, options, callback)
Run all tasks in the job object
jobs.cancelJob(options, callback)
Send a cancellation request to the given job
with optional tag
and value
. The value must match exactly.
jobs._runJob(jobspec, options, callback)
Sequentially execute all tasks in the list, run all subtasks in parallel
jobs.runTask(name, jobspec, options, callback)
Execute a task by name, the options
will be passed to the function as the first argument, calls the callback on finish or error
jobs._finishTask(err, name, jobspec, options, callback)
Complete task execution, cleanup and update the status
jobs.scheduleCronjob(jobspec)
Create a new cron job, for remote jobs additional property args can be used in the object to define arguments for the instance backend process, properties must start with -
Example:
{ "cron": "0 */10 * * * *", "job": "server.processQueue" },
{ "cron": "0 */30 * * * *", "job": { "server.processQueue": { name: "queue1" } } },
{ "cron": "0 5 * * * *", "job": [ { "scraper.run": { "url": "host1" } }, { "scraper.run": { "url": "host2" } } ] }
jobs.scheduleCronjobs(type, list)
Schedule a list of cron jobs, the type is used to cleanup previous jobs of the same type for cases when
a new list needs to replace the existing jobs. An empty list does nothing, to reset the jobs for a particular type
an empty invalid job must be passed, like: [ {} ]
Returns the number of cron jobs actually scheduled.
jobs.loadCronjobs()
Load crontab from JSON file as list of job specs:
Example:
[ { cron: "0 0 * * * *", job: "scraper.run" }, ..]
jobs.parseCronjobs(type, data)
Parse a JSON data with cron jobs and schedule for the given type, this can be used to handle configuration properties
Common utilities and useful functions
lib.tryCall(callback, ...args)
Run a callback if a valid function, all arguments after the callback will be passed as is
lib.tryCatch(callback, ...args)
Run a callback inside try..catch block, all arguments after the callback will be passed as is, in case of error all arguments will be printed in the log
lib.log()
Print all arguments into the console, for debugging purposes, if the first arg is an error only print the error
lib.__()
Simple i18n translation method compatible with other popular modules, supports the following usage:
lib.getArg(name, dflt)
Return commandline argument value by name
lib.getArgInt(name, dflt)
Return commandline argument value as a number
lib.isArg(name)
Returns true if the given arg(s) are present in the command line, name can be a string or an array of strings.
lib.deferCallback(parent, msg, callback, timeout)
Register the callback to be run later for the given message, the message may have the __id
property which will be used for keeping track of the responses or it will be generated.
The parent
can be any object and is used to register the timer and keep reference to it.
A timeout is created for this message, if runCallback
for this message is not called in time the timeout handler will call the callback
anyway with the original message.
The callback passed will be called with only one argument which is the message, this function does not care what is inside the message. If any errors must be passed, use the message object for it, no other arguments are expected.
lib.onDeferCallback(msg)
To be called on timeout or when explicitly called by the runCallback
, it is called in the context of the message.
lib.runCallback(parent, msg)
Run delayed callback for the message previously registered with the deferCallback
method.
The message must have id
property which is used to find the corresponding callback, if the msg is a JSON string it will be converted into the object.
Same parent object must be used for deferCallback
and this method.
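A short sketch of the defer/run pair described above (the __id property follows the deferCallback description; the reply fields are arbitrary):

    const parent = {};                          // any object to hold the timer
    lib.deferCallback(parent, { __id: "req1" }, (msg) => {
        console.log("reply or timeout:", msg);  // called once, with the message only
    }, 5000);

    // later, when the response arrives, use the same parent object:
    lib.runCallback(parent, { __id: "req1", status: "ok" });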
lib.deferInterval(parent, interval, name, callback)
Assign or clear an interval timer, keep the reference in the given parent object
lib.sortByVersion(list, name)
Sort a list by version in descending order, an item can be a string or an object with
a property to sort by, in such case name
must be specified which property to use for sorting.
The name format is assumed to be: XXXXX-N.N.N
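For illustration, sorting version-suffixed names (the output is the expected descending order, not taken from the source):

    > lib.sortByVersion(["app-1.2.0", "app-1.10.1", "app-1.9.3"])
    [ 'app-1.10.1', 'app-1.9.3', 'app-1.2.0' ]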
lib.newError(msg, status, code)
Return a new Error object, msg can be a string or an object with message, code, status properties. The default error status is 400 if not specified.
lib.traceError(err)
Returns the error stack or the error itself, to be used in error messages
lib.loadLocale(file, callback)
Load a file with locale translations into memory
lib.shuffle(list)
Randomize the list items in place
lib.toVersion(str)
Returns a floating number from the version string, it assumes common semver format as major.minor.patch, all non-digits will be removed, underscores will be treated as dots. Returns a floating number which can be used in comparing versions.
Example:
> lib.toVersion("1.0.3")
1.000003
> lib.toVersion("1.0.3.4")
1.000003004
> lib.toVersion("1.0.3.4") > lib.toVersion("1.0.3")
true
> lib.toVersion("1.0.3.4") > lib.toVersion("1.0.0")
true
> lib.toVersion("1.0.3.4") > lib.toVersion("1.1.0")
false
lib.toTitle(name)
Convert text into capitalized words
lib.toCamel(name, chars)
Convert into camelized form, optional chars can define the separators, default is -, _ and .
lib.toUncamel(str, sep)
Convert camel case names into names separated by the given separator, or dash if not given.
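A quick illustration of the two conversions (expected results, assuming the default separators):

    > lib.toCamel("db-pool-size")
    'dbPoolSize'
    > lib.toUncamel("dbPoolSize")
    'db-pool-size'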
lib.toNumber(val, options, float)
Safe number conversion, uses 0 instead of NaN, handles booleans, if float is specified returns a float.
Options:
Example:
lib.toNumber("123")
lib.toNumber("1.23", { float: 1, dflt: 0, min: 0, max: 2 })
lib.toDigits(str)
Strip all non-digit characters from a string
lib.toClamp(num, min, max)
Return a number clamped between the range
lib.toBool(val, dflt)
Return true if value represents true condition, i.e. non empty value
lib.toDate(val, dflt, invalid)
Return a Date object for the given text or numeric date representation, for an invalid date returns 1969 unless the invalid
parameter is given, in which case an invalid date is returned as null. If dflt
is NaN, null or 0 returns null as well.
lib.toMtime(val, dflt)
Return milliseconds from the date or date string, only number as dflt is supported, for invalid dates returns 0
lib.toBase62(num, alphabet)
Return base62 representation for a number
lib.toUrl(val, options)
Return a well formatted and validated url or empty string
lib.toPrice(num, options)
Return a text representation of a number according to the money formatting rules, default is en-US, options may include: currency(USD), display(symbol), sign(standard), min(2), max(3)
lib.toEmail(val, options)
Return an email address if valid, options.parse
makes it extract the email from name <email>
format
lib.toValue(val, type, options)
Convert a value to the proper type, default is to return a string or convert the value to a string if no type is specified, special case if the type is "" or null return the value as is without any conversion
lib.toString(str, options)
Return the value as a string
RegExp.prototype.toJSON()
Serialize a regexp with a custom format, lib.toRegexp will be able to use it
lib.toRegexp(str, options)
Safely create a regexp object, if invalid returns undefined, the options can be a string with standard RegExp flags or an object with the following properties:
lib.toRegexpMap(obj, val, options)
Add a regexp to the list of regexp objects, this is used in the config type regexpmap
.
lib.toRegexpObj(obj, val, options)
Add a regexp to the object that consist of list of patterns and compiled regexp, this is used in the config type regexpobj
lib.toDuration(mtime, options)
Return duration in human format, mtime is in msecs
lib.toAge(mtime, options)
Given time in msecs, return how long ago it happened
lib.toSize(size, decimals)
Return size in human readable format
lib.toParams(query, schema, options)
Process incoming query and convert parameters according to the type definition, the schema contains the definition of the parameters against which to validate incoming data. It is an object with property names and definitions that at least must specify the type, all other options are type specific.
Returns a string message on error or an object
The options can define the following global properties:
defaults - an object with parameter defaults by name, by *.type where type is any valid supported type, or just * for all parameters (see the defaults object in the example below).
Schema parameter properties:
required - may be a condition object which is checked with lib.isMatched at the end
separator - for list types the default is |, for map type the default is :;
Supported types:
Example:
var query = lib.toParams(req.query, {
id: { type: "int" },
count: { type: "int", min: 1, max: 10, dflt: 5 },
name: { type: "string", max: 32, trunc: 1 },
pair: { type: "map", maptype: "int" },
code: { type: "string", regexp: /^[a-z]-[0-9]+$/, errmsg: "Valid code is required" },
start: { type: "token", required: 1 },
email: { type: "list", datatype: "email", novalue: ["a@a"] },
email1: { type: "email", required: { email: null } },
data: { type: "json", datatype: "obj" },
mtime: { type: "mtime", name: "timestamp" },
flag: { type: "bool", novalue: false },
descr: { novalue: { name: "name", value: "test" }, replace: { "<": "!" } },
internal: { ignore: 1 },
tm: { type: "timestamp", optional: 1 },
ready: { value: "ready" },
state: { values: [ "ok","bad","good" ] },
status: { value: [ "ok","done" ] },
obj: { type: "obj", params: { id: { type: "int" }, name: {} } },
arr: { type: "array", params: { id: { type: "int" }, name: {} } },
ssn: { type: "string", regexp: /^[0-9]{3}-[0-9]{3}-[0-9]{4}$/, errmsg: "Valid SSN is required" },
phone: { type: "list", datatype: "number" },
}, {
defaults: {
start: { secret: req.account.secret },
name: { dflt: "test" },
count: { max: 100 },
email: { ignore: req.account.type != "admin" },
"*.string": { max: 255 },
'*': { maxlist: 255 },
    }
})
if (typeof query == "string") return api.sendReply(res, 400, query);
lib.toFormat(format, data, options)
Convert a list of records into the specified format, supported formats are: xml, csv, json, jsontext
.
csv - the default separator is comma but can be specified with options.separator. To produce a columns header specify options.header.
json - each record is put as a separate JSON object on each line, so to read it back every line must be read, parsed and added to the list.
xml - the name of the row tag is <row> but can be specified with options.tag.

All formats support the property options.allow which is a list of property names that are allowed in the output for each record, non-existent
properties will be replaced by empty strings.
The mapping
object property can redefine different tag/header names to be put into the file instead of the exact column names from the records.
lib.toTemplate(text, obj, options)
Given a template with @..@ placeholders, replace each placeholder with the value from the obj.
The obj
can be an object or an array of objects in which case all objects will be checked for the value until non empty.
To use @ in the template specify it as @@
The options if given may provide the following:
Default placeholders:
Example:
lib.toTemplate("http://www.site.com/@code@/@id@", { id: 123, code: "YYY" }, { encoding: "url" })
lib.toTemplate("Hello @name|friend@!", {})
lib.toFlags(cmd, list, name)
Flags command utility, the commands are:
add - adds the name flags to the list if they do not exist, returns the same array
del - removes the name flags, returns the same array
the remaining commands test the list against name
lib.toRFC3339 (date)
Return RFC3339 formatted timestamp for a date or current time
lib.jsonToBase64(data, secret, options)
Stringify JSON into base64 string, if secret is given, sign the data with it
lib.base64ToJson(data, secret, options)
Parse base64 JSON into a JavaScript object, in some cases this can be just a number and then it is passed as is, if secret is given verify that the data was not changed and was signed with the same secret
lib.jsonFormat(obj, options)
Nicely format an object with indentations, optional indentlevel
can be used to control down to which depth
newlines are used for objects.
lib.stringify(obj, replacer, space)
JSON stringify without exceptions, on error just returns an empty string and logs the error
lib.encodeURIComponent(str)
Encode with additional symbols, convert these into percent encoded:
! -> %21, * -> %2A, ' -> %27, ( -> %28, ) -> %29
lib.decodeURIComponent(str)
No-exception version of the global function, on error returns an empty string
lib.escapeUnicode(text)
Convert all Unicode binary symbols into Javascript text representation
lib.unicode2Ascii(str)
Replace Unicode symbols with ASCII equivalents
lib.unescape(str)
Convert escaped characters into native symbols
lib.textToXml(str)
Convert all special symbols into xml entities
lib.textToEntity(str)
Convert all special symbols into html entities
lib.entityToText(str)
Convert html entities into their original symbols
lib.toBase32(buf, options)
Convert a Buffer into base32 string
lib.fromBase32(str, options)
Convert a string in base32 into a Buffer
lib.encrypt(key, data, options)
Encrypt data with the given key code
lib.decrypt(key, data, options)
Decrypt data with the given key code
lib.sign (key, data, algorithm, encode)
HMAC signing and base64 encoded, default algorithm is sha1
lib.hash (data, algorithm, encode)
Hash and base64 encoded, default algorithm is sha1
lib.random(size)
Generate random key, size if specified defines how many random bits to generate
lib.randomUShort()
Return random number between 0 and USHORT_MAX
lib.randomShort()
Return random number between 0 and SHORT_MAX
lib.randomUInt()
Return random number between 0 and ULONG_MAX
lib.randomFloat()
Returns random number between 0 and 1, 32 bits
lib.randomInt(min, max)
Return random integer between min and max inclusive using crypto generator, based on https://github.com/joepie91/node-random-number-csprng
lib.randomNum(min, max, decs)
Generates a random number between the given min and max (required). Optional third parameter indicates the number of decimal points to return:
lib.timingSafeEqual(a, b)
Timing safe string compare using double HMAC, from suryagh/tsscmp
lib.totp(key, options)
Create a Timed One-Time Password, RFC 6238
lib.toSkip32(op, key, n)
Encrypt/decrypt a number using a 10 byte key
array, op
== d
for decrypt, other is encrypt
lib.forEachLine(file, options, lineCallback, endCallback)
Call the callback for each line in the file, options may specify the following parameters:
options.header - if it is a function it will be called with the first line as an argument and must return true if this line needs to be skipped
the line separator regexp defaults to lib.rxLine, the default quote characters are "'
Properties updated and returned in the options:
lib.readLines(ctx, options, lineCallback, endCallback)
Process lines asynchronously, both callbacks must be provided
lib.forEachLineSync(file, options, lineCallback)
Sync version of the forEachLine
, read every line and call the callback, which may not do any async operations
because they will not be executed right away but only after all lines are processed
lib.writeLines(file, lines, options, callback)
Write given lines into a file, lines can be a string or list of strings or numbers
the existing file can be kept with an extension, old is the default, it can be in the strftime format to use dates, like %w, %d, %m

lib.moveFile(src, dst, overwrite, callback)
Copy the file and then remove the source, do not overwrite an existing file
lib.copyFile(src, dst, overwrite, callback)
Copy file, overwrite is optional flag, by default do not overwrite
lib.statSync(file)
Non-exception version, returns an empty object on error; mtime is 0 if the file does not exist, otherwise the number of seconds of the last modified time, mdate is a Date object with the last modified time
lib.readFileSync(file, options)
Return contents of a file, empty if not exist or on error.
Options can specify the format:
lib.readFile(file, options, callback)
Same as lib.readFileSync
but asynchronous
lib.findFilter(file, stat, options)
Filter function to be used in findFile methods
lib.findFileSync(dir, options)
Return a list of files that match the filter, recursively starting with the given path, dir is the starting path.
The options may contain the following:
Example:
lib.findFileSync("modules/", { depth: 1, types: "f", include: /\.js$/ }).sort()
lib.findFile(dir, options, callback)
Async version of find file, same options as in the sync version, the starting dir is not included
lib.watchFiles(options, fileCallback, endCallback)
Watch files in a dir for changes and call the callback, the parameters:
lib.makePathSync(dir)
Recursively create all directories, returns 1 if created or 0 on error or if they already exist, no exceptions are raised, errors are only logged
lib.makePath(dir, callback)
Async version of makePath, stops on first error
lib.unlink(name, callback)
Unlink a file, no error on non-existent file, callback is optional
lib.unlinkPath(dir, callback)
Recursively remove all files and folders in the given path, returns an error to the callback if any
lib.unlinkPathSync(dir)
Recursively remove all files and folders in the given path, stops on first error
lib.chownSync(uid, gid)
Change file owner, multiple files can be specified, does not report errors about non-existent files, the uid/gid must be set to a non-root user for this function to work and it is called by root only, all the rest of the arguments are used as file names
Example:
lib.chownSync(1, 1, "/path/file1", "/path/file2")
lib.mkdirSync()
Create directories if they do not exist, multiple dirs can be specified; intermediate parent directories are not created
Example:
lib.mkdirSync("dir1", "dir2")
lib.findProcess(options, callback)
Return a list of matching processes, Linux only
lib.execProcess(cmd, callback)
Run a process and return all output to the callback, this is a simple wrapper around child_process.exec so it can be used without importing the child_process module. All fatal errors are logged.
lib.spawnProcess(cmd, args, options, callback)
Run specified command with the optional arguments, this is similar to child_process.spawn with callback being called after the process exited
Example
lib.spawnProcess("ls", "-ls", { cwd: "/tmp" }, lib.log)
lib.checkRespawn(callback)
If respawning too fast, delay otherwise call the callback after a short timeout
lib.spawnSeries(cmds, options, callback)
Run a series of commands, cmds
is an object where a property name is a command to execute and the value is an array of arguments or null.
if options.error
is 1, then stop on the first error or when a process exits with a non-zero status.
Example:
lib.spawnSeries({"ls": "-la",
"ps": "augx",
"du": { argv: "-sh", stdio: "inherit", cwd: "/tmp" },
"uname": ["-a"] },
lib.log)
lib.forEach(list, iterator, callback, direct)
Apply an iterator function to each item in an array in parallel. Execute a callback when all items have been completed or immediately if an error occurred.
The direct
argument controls how the final callback is called, if true it is called directly, otherwise via setImmediate
lib.forEach([ 1, 2, 3 ], function (i, next) {
console.log(i);
next();
}, function (err) {
console.log('done');
});
lib.forEvery(list, iterator, callback, direct)
Same as forEach
except that the iterator will be called for every item in the list, all errors will be ignored
lib.forEachSeries(list, iterator, callback, direct)
Apply an iterator function to each item in an array serially. Execute a callback when all items have been completed or immediately if an error occurred.
lib.forEachSeries([ 1, 2, 3 ], function (i, next, data) {
console.log(i, data);
next(null, data);
}, function (err, data) {
console.log('done', data);
});
lib.forEverySeries(list, iterator, callback, direct)
Same as forEachSeries
except that the iterator will be called for every item in the list, all errors will be passed to the next
item with optional additional data argument.
lib.forEverySeries([ 1, 2, 3 ], function (i, next, err, data) {
console.log(i, err, data);
next(err, i, data);
}, function (err, data) {
console.log('done', err, data);
});
lib.forEachLimit(list, limit, iterator, callback, direct)
Apply an iterator function to each item in an array in parallel as many as specified in limit
at a time. Execute a callback when all items
have been completed or immediately if an error occurred.
lib.forEveryLimit(list, limit, iterator, callback, direct)
Same as forEachLimit
but does not stop on error, all items will be processed and errors will be collected in an array and
passed to the final callback
lib.forEachItem(options, next, iterator, callback, direct)
Apply an iterator function to each item returned by the next(item, cb)
function until it returns null
or the iterator returns an error in the callback,
the final callback will be called after all iterators are finished.
If no item is available the next()
should return empty value, it will be called again in options.interval
ms if specified or
immediately in the next tick cycle.
The max number of iterators to run at the same time is controlled by options.max
, default is 1.
The maximum time waiting for items can be specified by options.timeout
, it is not an error condition, just another way to stop
processing if it takes too long because the next()
function is a black box just returning items to process. Timeout will send null
to the queue and it will stop after all iterators are finished.
var list = [1, 2, "", "", 3, "", 4, "", "", "", null];
lib.forEachItem({ max: 2, interval: 1000, timeout: 30000 },
function(next) {
next(list.shift());
},
function(item, next) {
console.log("item:", item);
next();
},
(err) => {
console.log("done", err);
});
lib.parallel(tasks, callback, direct)
Execute a list of functions in parallel and execute a callback upon completion or occurrence of an error. Each function will be passed
a callback to signal completion. The callback accepts an error for the first argument. The iterator and callback will be
called via setImmediate function to allow the main loop to process I/O unless the direct
argument is true
lib.everyParallel(tasks, callback, direct)
Same as lib.parallel
but all functions will be called and any error will be ignored
lib.series(tasks, callback, direct)
Execute a list of functions serially and execute a callback upon completion or occurrence of an error. Each function will be passed a callback to signal completion. The callback accepts either an error for the first argument, in which case the flow will be aborted and the final callback will be called immediately, or some optional data to be passed to the next iterator function as a second argument.
The iterator and callback will be called via setImmediate function to allow the main loop to process I/O unless the direct
argument is true
lib.series([
function(next) {
next(null, "data");
},
function(next, data) {
setTimeout(function () { next(null, data); }, 100);
},
], function(err, data) {
console.log(err, data);
});
lib.everySeries(tasks, callback, direct)
Same as lib.series
but all functions will be called with errors passed to the next task, only the last passed error will be returned
lib.everySeries([
function(next) {
next("error1", "data1");
},
function(next, err, data) {
setTimeout(function () { next(err, "data2"); }, 100);
},
], function(err, data) {
console.log(err, data);
});
lib.whilst(test, iterator, callback, direct, _)
While the test function returns true keep running the iterator, call the callback at the end if specified.
All functions are called via setImmediate unless the direct
argument is true
var count = 0;
lib.whilst(
function(data) {
return count < 5;
},
function (next, data) {
count++;
setTimeout(next, 1000);
},
function (err, data) {
console.log(err, data, count);
});
lib.doWhilst(iterator, test, callback, direct, _)
Keep running iterator while the test function returns true, call the callback at the end if specified.
All functions are called via setImmediate unless the direct
argument is true
var count = 0;
lib.doWhilst(
(next, data) => {
count++;
setTimeout(next, 1000);
},
(data) => (count < 5),
(err, data) => {
console.log(err, data, count);
});
lib.typeName(v)
Return object type, try to detect any distinguished type
lib.isObject(v)
Returns true if the argument is a generic object, not null, Buffer, Date, RegExp or Array
lib.isNumber(val)
Return true if the value is a number
lib.isPrefix(val, prefix)
Return true if the value is prefixed
lib.isUuid(val, prefix)
Returns true if the value represents an UUID
lib.isTuuid(str)
Returns true if the value represent tuuid
lib.isUnicode(str)
Returns true if a string contains Unicode characters
lib.isPositive(val)
Returns true if a number is positive, i.e. greater than zero
lib.isArray(val, dflt)
Returns the array if the value is a non-empty array, or the dflt value if given, or undefined
lib.isEmpty(val)
Return true if the given value is considered empty
lib.isNumeric(val)
Returns true if the value is a number or string representing a number
lib.isNumericType(type)
Returns true if the given type belongs to the numeric family of data types
lib.isDate(d)
Returns true if the given date is valid
lib.isFlag(list, name)
Returns true if name
exists in the array list
, search is case sensitive. If name
is an array it will return true if
any element in the array exists in the list
.
lib.isWord(text, start, end, delimiters)
Returns true if it is a word at the position start
and end
in the text
string,
delimiters
define a character set to be used for word boundaries, if not given or an empty string the default will be used

lib.validNum(...args)
Returns first valid number from the list of arguments or 0
lib.validPositive(...args)
Returns first valid positive number from the list of arguments or 0
lib.validBool(...args)
Returns first valid boolean from the list of arguments or false
lib.validVersion(version, condition)
Return true if the version is within given condition(s), always true if either argument is empty. Conditions can be: >=M.N, >M.N, =M.N, <=M.N, <M.N, M.N-M.N
lib.LRUCache(max)
Simple LRU cache in memory, supports get, put, del operations only, TTL can be specified in milliseconds as a future time
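A usage sketch, assuming put takes a key, a value and an optional expiration time in milliseconds as described:

    const cache = new lib.LRUCache(1000);            // keep at most 1000 items
    cache.put("k", { a: 1 }, Date.now() + 60000);    // TTL as a future time in ms
    cache.get("k");                                  // -> { a: 1 } until expired
    cache.del("k");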
lib.exists(obj, name)
Return true if a variable or property in the object exists,
Example:
lib.exists({ 1: 1 }, "1")
lib.exists([ 1, 2, 3 ], 1)
lib.exists([ 1, 2, 3 ], [ 1, 5 ])
lib.isMatched(obj, condition, options)
All properties in the object obj
must match all properties in the object condition
, for comparison lib.isTrue
is used for each property
in the condition object.
Example:
lib.isMatched({ id: 1, name: "test", type: ["user", "admin"] }, { name: /^j/ })
true
lib.isMatched({ id: 1, name: "test", type: ["user", "admin"] }, { type: "admin" }, { ops: { type: "not_in" } })
false
lib.isMatched({ id: 1, name: "test", type: ["user", "admin"] }, { type: ["staff"] })
false
lib.isMatched({ id: 1, name: "test", type: ["user", "admin"] }, { id: 1 }, { ops: { id: "ge" } })
true
lib.isTrue(val, cond, op, type)
Evaluate an expression, compare 2 values with optional type and operation, compare a data value val against a condition cond.
lib.arrayLength(list)
Return the length of an array or 0 if it is not an array
lib.arrayRemove(list, item)
Remove the given item from the list in place, returns the same list
lib.arrayUnique(list, key)
Returns only unique items in the array, optional key
specified the name of the column to use when determining uniqueness if items are objects.
lib.arrayEqual(list1, list2)
Returns true if both arrays contain same items, only primitive types are supported
lib.arrayFlatten(list)
Flatten array of arrays into a single array
lib.objClone()
A copy of an object, this is a shallow copy, only arrays and objects are created but all other types are just referenced in the new object
lib.objNew()
Return new object using arguments as name value pairs for new object properties
lib.objFlatten(obj, options)
Flatten a javascript object into a single-depth object, all nested values will have property names appended, separated by a dot by default
The options properties:
Example
> lib.objFlatten({ a: { c: 1 }, b: { d: 1 } } )
{ 'a.c': 1, 'b.d': 1 }
> lib.objFlatten({ a: { c: 1 }, b: { d: [1,2,3] } }, { index: 1 })
{ 'a.c': 1, 'b.d.1': 1, 'b.d.2': 2, 'b.d.3': 3 }
lib.objClean(obj, options)
Cleanup object properties, delete all undefined values in place by default. Additional options:
null - if true then delete all null properties
empty - if true then delete all empty properties, i.e. null/undefined/""/[]
type - if a RegExp then all properties that match it by type will be deleted
name - if a RegExp then all properties that match it by name will be deleted
value - if a RegExp then all string|number|boolean properties that match it by value will be deleted
array - if true then process all array items recursively

Example
> lib.objClean({ a: 1, b: true, c: undefined, d: 2, e: null, l: ["a", "b", null, undefined, { a: 1, b: undefined } ] }, { null: 1, array: 1, type: /boolean/ })
{ a: 1, d: 2, l: [ 'a', 'b', { a: 1 } ] }
lib.objExtend(obj, val, options)
Add properties to an existing object, the first arg is an object, the second arg is an object to add properties from, the third argument is an options object that can control how the properties are merged.
Options properties:
allow - a regexp which properties are allowed to be merged
ignore - a regexp which properties should be ignored
del - a regexp which properties should be removed
strip - a regexp to apply to each property name before merging, the matching parts will be removed from the name
deep - extend all objects not just the top level
noempty - skip undefined, default is to keep
> lib.objExtend({ a:1, c:5 }, { c: { b: 2 }, d: [{ d: 3 }], _e: 4, f: 5, x: 2 }, { allow: /^(c|d|e)/, strip: /^_/, del: /^f/ })
{ a: 1, c: { b: 2 }, d: [ { d: 3 } ], e: 4 }
> lib.objExtend({ a:1, c:5 }, { c: { b: 2 }, d: [{ d: 3 }], _e: 4, f: 5, x: 2 }, { allow: /^(c|d|e)/, strip: /^_/, del: /^f/, deep: 1 })
{ a: 1, c: {}, d: [ { d: 3 } ], e: 4 }
lib.objMerge(obj, val, options)
Merge two objects, all properties from the val
override existing properties in the obj
, returns a new object
Options properties:
Example
var o = lib.objMerge({ a:1, b:2, c:3 }, { c:5, d:1, _e: 4, f: 5, x: 2 }, { allow: /^(c|d)/, remove: /^_/, del: /^f/ })
o = { a:1, b:2, c:5, d:1 }
lib.objDel()
Delete properties from the object, first arg is an object, the rest are properties to be deleted
lib.objSearch(obj, options)
Return a list of objects that matched the given criteria in the given object. Performs a deep search.
The options can define the following properties:
Example:
var obj = { id: { index: 1 }, name: { index: 3 }, descr: { type: "string", pub: 1 }, items: [ { name: "test" } ] };
lib.objSearch(obj, { matchValue: /string/ });
[ { name: 'descr', value: { type: "string", pub: 1 } } ]
lib.objSearch(obj, { matchName: /name/, matchValue: /^t/ });
[ { name: '0', value: { name: "test" } } ]
lib.objSearch(obj, { exists: 'index', sort: 1, value: "index" });
{ id: 1, name: 3 }
lib.objSearch(obj, { hasValue: 'test', count: 1 });
1
lib.objGet(obj, name, options)
Return a property from the object, name specifies the path to the property, if the required property belongs to another object inside the top one the name uses . to separate objects. This is a convenient method to extract properties from nested objects easily. Options may contain the following properties:
Example:
> lib.objGet({ response: { item : { id: 123, name: "Test" } } }, "response.item.name")
"Test"
> lib.objGet({ response: { item : { id: 123, name: "Test" } } }, "response.item.name", { list: 1 })
[ "Test" ]
> lib.objGet({ response: { item : { id: 123, name: "Test" } } }, "response.item.name", { owner: 1 })
{ item : { id: 123, name: "Test" } }
lib.objSet(obj, name, value, options)
Set a property of the object, name can be an array or a string with the property path inside the object, all non-existent intermediate objects will be created automatically. The options can have the following properties:
Example
var a = lib.objSet({}, "response.item.count", 1)
lib.objSet(a, "response.item.count", 1, { incr: 1 })
lib.objIncr(obj, name, count, result)
Increment a property by the specified number, if the property does not exist it will be created,
returns new incremented value or the value specified by the result
argument.
It uses lib.objSet
so the property name can be a nested path.
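For example, incrementing a nested counter (expected results per the description above):

    > var o = {}
    > lib.objIncr(o, "stats.count", 2)
    2
    > lib.objIncr(o, "stats.count", 1)
    3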
lib.objMult(obj, name, count, result)
Similar to objIncr
but does multiplication
lib.objKeys(obj)
Return all property names for an object
lib.objDescr(obj, options)
Return an object structure as a string by showing primitive properties only:
for arrays it shows the length and options.count or 25 first items,
for objects it will show up to options.keys or 25 first properties,
strings are limited by options.length or 256 bytes, if truncated the full string length is shown,
the object depth is limited by options.depth or 5 levels deep, the number of properties is limited by options.count or 15,
all properties that match options.ignore will be skipped from the output, if options.allow is a regexp only properties that match it will be output. Use options.replace for replacing anything in the final string.
lib.objSize(obj, options, priv)
Returns the size of the whole object, this is not exact JSON size, for speed it summarizes approximate size of each property recursively
depth - limits how deep it goes, on limit returns a MAX_SAFE_INTEGER+ number
nan - if true return NaN on reaching the limits
pad - extra padding added for each property, default is 5 to simulate JSON encoding, "..": ".."

lib.objReset(obj, options)
Reset properties of the obj
matching the regexp, simple types are removed but objects/arrays/maps are set to empty objects
Options properties:
lib.configParse(data, options)
Parse data as config format name=value per line, return an array of arguments in command line format ["-name", value,....]
Supports sections:
[name=value,...] or [name!=value,...] where name is a property name with optional value(s), != denotes a negative condition, i.e. not matching or NOT empty
module properties like aws.region or instance.tag will be checked inside the options.modules object only, all other names are checked in the top level options
Sections work like a filter, only if a property matches it is used otherwise skipped completely, it uses lib.isTrue for matching so checking an item in an array will work as well.
The [global] section can appear at any time to return to global mode
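A sketch of the config format described above, assuming a top-level property stage is present in the options for the section filter:

    var args = lib.configParse("db-pool=sqlite\n" +
                               "[stage=dev]\n" +
                               "db-pool=pg\n" +
                               "[global]\n" +
                               "api-allow=/health\n", { stage: "dev" });
    // expected: [ "-db-pool", "sqlite", "-db-pool", "pg", "-api-allow", "/health" ]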
lib.jsonParse(obj, options)
Silent JSON parse, returns null on error, no exceptions raised.
options can specify the following properties:
lib.xmlParse(obj, options)
Same arguments as for jsonParse
Combined parser with type validation
lib.matchRegexp(str, rx, index)
Perform match on a regexp for a string and return the matched value, if no index is specified returns item 1, using index 0 returns the whole matched string, -1 means return the whole matched array. If the rx object has the 'g' flag the result will be all matches in an array.
lib.matchAllRegexp(str, rx, index)
Perform match on a regexp and return all matches in an array, if no index is specified returns item 1
lib.testRegexp(str, rx)
Perform test on a regexp for a string and returns true only if matched.
lib.testRegexpObj(str, rx)
Run test on a regexpObj
lib.replaceRegexp(str, rx, val)
Safe version of replace for strings, always returns a string, if val
is not provided performs
removal of the matched patterns
lib.strTrim(str, chars)
Remove all whitespace from the beginning and end of the given string, if an array with characters is not given then it trims all whitespace
lib.strSplit(str, sep, options)
Split a string into an array, ignore empty items.
sep is a RegExp to use as a separator instead of the default pattern [,\|]
options is an object with the same properties as for toParams, datatype will be used with lib.toValue to convert the value for each item
keepempty - will preserve empty items, by default empty strings are ignored
notrim - will skip trimming strings, trim is the default
max - will skip strings over the specified size if no trunc
trunc - will truncate strings longer than max
regexp - will skip a string if not matching
noregexp - will skip a string if matching
replace - an object map of which characters to replace with new values

If str is an array and type is not specified then all non-string items will be returned as is.
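For example (expected results given the default separator pattern and trimming):

    > lib.strSplit("a, b,,c")
    [ 'a', 'b', 'c' ]
    > lib.strSplit("1 2 3", / /, { datatype: "int" })
    [ 1, 2, 3 ]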
lib.strSplitUnique(str, sep, options)
Split as above but keep only unique items, case-insensitive
lib.phraseSplit(str, options)
Split a string into phrases separated by options.separator
character(s) and surrounded by characters in options.quotes
.
The default separator is space and default quotes are both double and single quote.
If options.keepempty
is given all empty parts will be kept in the list.
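For example, with the default space separator and quotes (expected result per the description):

    > lib.phraseSplit('one "two three" four')
    [ 'one', 'two three', 'four' ]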
lib.zeropad(n, width)
Return a string with leading zeros
lib.sprintf(fmt, args)
C-sprintf alike based on http://stackoverflow.com/a/13439711 Usage:
lib.strCompress(data, encoding)
lib.strSimilarity(s1, s2, options)
Returns a score between 0 and 1 for two strings, 0 means no similarity, 1 means exactly similar. The default algorithm is Jaro-Winkler, options.type can be used to specify a different algorithm:
lib.AhoCorasick(keywords)
Text search using Aho-Corasick algorithm, based on https://github.com/BrunoRB/ahocorasick
lib.AhoCorasick.prototype.search(text, options)
Search given text for keywords, returns a list of matches in the format [ index, [ keywords] ]
where the index points to the last character of the found keywords. When options.list
is true it returns
the matched keywords only.
Example:
var ac = new lib.AhoCorasick(['keyword1', 'keyword2', 'etc']);
ac.search('should find keyword1 at position 19 and keyword2 at position 47.');
[ [ 19, [ 'keyword1' ] ], [ 47, [ 'keyword2' ] ] ]
ac.search('should find keyword1 at position 19 and keyword2 at position 47.', { list: 1 });
[ 'keyword1', 'keyword2' ]
If options.delimiters
is a string then return words only if surrounded by characters in the delimiters, this is
to return true words and not substrings, empty string means use the default delimiters which are all punctuation characters
lib.findWords(words, text, delimiters)
Return an array of words found in the given text separated by delimiters, this is a brute force search for every keyword
using lib.isWord
to detect boundaries.
This is an alternative to AhoCorasick if the number of words is less than 50-70.
lib.networkInterfaces(options)
Return a list of local interfaces, default is all active IPv4 unless the IPv6 property is set
lib.dropPrivileges(uid, gid)
Drop root privileges and switch to a regular user
lib.ip2int(ip)
Convert an IP address into integer
lib.int2ip(int)
Convert an integer into IP address
lib.inCidr(ip, cidr)
Return true if the given IP address is within the given CIDR block
lib.cidrRange(cidr)
Return first and last IP addresses for the CIDR block
lib.domainName(host, toplevel)
Extract domain from the host name, takes all host parts except the first one, if toplevel is true return 2 levels only
lib.localEpoch(type)
Returns current time in seconds (s), microseconds (m), time struct (tm) or milliseconds since the local lib._epoch
(2023-07-31 UTC)
lib.clock()
Returns current time in microseconds since January 1, 1970, UTC
lib.getTimeOfDay()
Return current time in an array as [ tv_sec, tv_usec ]
lib.now()
Return number of seconds for current time
lib.daysInMonth(year, month)
Return the number of days in the given month of the specified year.
lib.weekOfYear(date, utc)
Return an ISO week number for given date, from https://www.epochconverter.com/weeknumbers
lib.isDST(date)
Returns true if the given date is in DST timezone
lib.tzName(tz)
Return a timezone human name if matched (EST, PDT...), tz must be in GMT-NNNN format
lib.parseTime(time)
Parses a string with time and returns an array [hour, min], accepts 12 and 24hr formats, a single hour is accepted as well, returns undefined if it cannot parse
lib.isTimeRange(time1, time2, options)
Returns 0 if the current time is not within the specified valid time range or the range is invalid. Only a continuous time range is supported, it does not handle ranges that cross midnight, i.e. time1 must always be earlier than time2.
options.tz specifies the timezone, no timezone means the current timezone.
options.date if given must be a list of dates in the format: YYYY-MM-DD,...
lib.strftime(date, fmt, options)
Format date object
lib.getHashid(options)
Return a cached Hashids object for the given configuration. Properties:
lib.uuid(prefix, options)
Return unique Id without any special characters and in lower case
lib.slug(options)
Generate a 22 chars slug from a UUID, an alphabet can be provided, default is lib.uriSafe
lib.suuid(prefix, options)
Returns a short unique id within a microsecond
lib.murmurHash3(key, seed = 0)
32-bit MurmurHash3 implemented by bryc (github.com/bryc)
lib.sfuuid(options)
Generate a SnowFlake unique id as a 64-bit number. Format: time - 41 bit, node - 10 bit, counter - 12 bit. Properties can be provided:
m for microseconds, s for seconds

lib.sfuuidParse(id)
Parse an id into original components: now, node, counter
lib.tuuid(prefix, encode)
Returns time sortable unique id, inspired by https://github.com/paixaop/node-time-uuid
lib.tuuidTime(str)
Return time in milliseconds from the time uuid
Simple logger utility for debugging
logger.registerLevel(level, callback, options)
Register a custom level handler, must be invoked via logger.logger
only, if no handler is registered for the given level
the whole message will be logged as an error. The custom handler is called in the context of the module which means
the options are available inside the handler.
The following properties are supported automatically:
logger.setSyslog(facility, tag)
Set or close syslog mode
logger.setSyslogOptions(val)
Options for syslog: name:val,name:val,...
logger.setFile(file, options)
Redirect logging into file
logger.setLevel(level)
Set the output level, it can be a number or one of the supported level names
logger.setDebugFilter(str, func)
Enable debugging level for this label, if used with the same debugging level it will be printed regardless of the global level,
a label is the first argument to the logger.debug
methods, it is used as is, usually the first argument is
the current function name with a colon, like logger.debug("select:", name, args)
The func
can be a function to be instead of regular logging, this is for rerouting some output to a custom console or for
dumping the actual Javascript data without preformatting, most useful to use console.log
logger.errorWithOptions(err, options)
Prints the given error and the rest of the arguments, the logger level to be used is determined for the given error by code:
uses options or options.logger_error as the level if a string,
if options.logger_error is an object, extract the level by err.code or use * as the default level for non-matched codes,
the default is to use the error level,
options.logger_inspect if present is merged with the current inspect options to log the rest of the arguments.

logger.setInspectOptions(options)
Merge with existing inspect options temporarily, calling without options will reset to previous values
logger.trace()
Print stack backtrace as error
logger.logger(level, ...args)
A generic logger method, safe, first arg is supposed to be a logging level, if not valid the error level is used
logger.write(str)
Stream emulation
TokenBucket.prototype.configure(rate, max, interval, total)
Initialize existing token with numbers for rate calculations
TokenBucket.prototype.toJSON()
Return a JSON object to be serialized/saved
TokenBucket.prototype.toString()
Return a string to be serialized/saved
TokenBucket.prototype.toArray()
Return an array object to be serialized/saved
TokenBucket.prototype.equal(rate, max, interval)
Return true if this bucket uses the same rates in arguments
TokenBucket.prototype.consume(tokens)
Consume N tokens from the bucket, if no capacity, the tokens are not pulled from the bucket.
Refill the bucket by tracking elapsed time from the last time we touched it.
min(totalTokens, current + (fillRate * elapsedTime))
TokenBucket.prototype.delay(tokens)
Returns number of milliseconds to wait till number of tokens can be available again
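A sketch of the intended flow, assuming the TokenBucket class is exposed via the metrics module and consume() returns true when enough tokens are available:

    const bkjs = require('backendjs');
    const bucket = new bkjs.metrics.TokenBucket();
    bucket.configure(10, 20);            // rate: 10 tokens/sec, max burst: 20
    if (bucket.consume(1)) {
        // allowed, proceed with the request
    } else {
        const waitMs = bucket.delay(1);  // ms until a token is available again
    }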
Messaging and push notifications for mobile and other clients, supports Apple, Google and AWS/SNS push notifications.
Emits a signal uninstall(client, device_id, account_id)
on device invalidation or if a device token is invalid as reported by the server, account_id
may not be available.
Config parameters
msg-([^-]+)-key(@.+)?, obj: "config", make: "$1$2|_key", nocamel: 1, trim: 1, descr: "API private key for FCM/Webpush or similar services, if the suffix is specified in the config parameter it will be used as the app name, without the suffix it is global"
msg-([^-]+)-pubkey(@.+)?, obj: "config", make: "$1$2|_pubkey", nocamel: 1, trim: 1, descr: "API public key for Webpush or similar services, if the suffix is specified in the config parameter it will be used as the app name, without the suffix it is global"
msg-([^-]+)-authkey-([^-]+)-(.+), obj: "config", make: "$1@$2-$3|_authkey", nocamel: 1, descr: "An auth key for APN in p8 format, can be a file name with .p8 extension or a string with the key contents encoded with base64, the format is: -msg-apn-authkey-TEAMID-KEYID KEYDATA"
msg-([^-]+)-sandbox(@.+)?, obj: "config", make: "$1$2|_sandbox", nocamel: 1, type: "bool", descr: "Enable sandbox for a service, default is production mode"
msg-([^-]+)-options-([^@]+)(@.+)?, obj: "config", make: "$1$3|$2", autotype: 1, nocamel: 1, descr: "A config property for the specified agent, driver specific"
msg-shutdown-timeout, type: "int", min: 0, descr: "How long to wait in ms for messages to drain on shutdown before exiting"
msg-app-default, descr: "Default app id (app bundle id) to be used when no app_id is specified"
msg-app-dependency@(.+), obj: "dependency", make: "$1", type: "list", nocamel: 1, descr: "List of other apps that are considered in the same app family, sending to the primary app will also send to all dependent apps"
msg-app-team-(.+), obj: "teams", make: "$1", type: "regexp", nocamel: 1, descr: "Regexp that identifies all app bundles for a team"

Msg.prototype.init(options, callback)
Initialize supported notification services, it supports jobs arguments convention so can be used in the jobs that need to send push notifications in the worker process.
Msg.prototype.shutdown(options, callback)
Shutdown notification services, wait till all pending messages are sent before calling the callback
Msg.prototype.send(device, options, callback)
Deliver a notification for the given device token(s).
The device
is where to send the message to, can be multiple ids separated by , or |.
Options with the following properties:
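For illustration, a minimal delivery sketch; bkjs.msg as the module handle and the msg/badge option names are assumptions (the latter borrowed from bk_user.notifyAccount below):
const bkjs = require("backendjs");
const msg = bkjs.msg; // assumption: the Msg module is exposed as bkjs.msg

msg.init({}, (err) => {
    if (err) return console.error(err);
    // device URN format: [service://]device_token[@app]; multiple ids may be separated by , or |
    msg.send("apn://DEVICE_TOKEN@com.example.app", { msg: "Hello", badge: 1 }, (err) => {
        if (err) console.error("push failed:", err);
    });
});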
Msg.prototype.parseDevice(device)
Parse device URN and returns an object with all parts into separate properties. A device URN can be in the following format: [service://]device_token[@app]
Supported services: apn, gcm, sns; the default service uses APN delivery.
Msg.prototype.getClient(dev)
Return a client module that supports the given device
Msg.prototype.getConfig(name)
Return a list of all config cert/key parameters for the given name. Each item in the list is an object with the following properties: key, secret, app
Msg.prototype.getAgent(mod, dev)
Return an agent for the given module for the given device
Msg.prototype.getTeam(app)
Return a team for the given app
client.check(dev)
Returns true if given device is supported by APN
client.init(options)
Initialize the Apple Push Notification service in the current process. Apple supports multiple connections to the APN gateway but not too many, so this should be called on dedicated backend hosts; on multi-core servers every spawned web process will initialize a connection to the APN gateway.
client.close(callback)
Close APN agent, try to send all pending messages before closing the gateway connection
client.send(dev, options, callback)
Send push notification to an Apple device, returns true if the message has been queued.
The options may contain the following properties:
client.init(options)
Initialize Google Cloud Messaging service to send push notifications to mobile devices
client.close(callback)
Close GCM connection, flush the queue
client.send(dev, options, callback)
Send push notification to an Android device, return true if queued.
client.retryOnError()
Retry on server error, honor Retry-After header if present, use it only on the first error
client.send(dev, options, callback)
Send a Web push notification using the web-push
npm module, refer to it for details on how to generate the VAPID credentials to
configure this module with 3 required parameters:
msg-webpush-key - VAPID private key
msg-webpush-pubkey - VAPID public key
msg-webpush-options-email - an admin email for the VAPID subject
The device token must be generated in the browser after successful subscription:
navigator.serviceWorker.register("/js/webpush.js", { scope: "/" }).then(function(registration) {
    registration.pushManager.subscribe({ userVisibleOnly: true, applicationServerKey: vapidKeyPublic }).then(function(subscription) {
        // save the subscription as the device token using the wp:// service prefix
        bkjs.send({ url: '/uc/account/update', data: { device_id: "wp://" + window.btoa(JSON.stringify(subscription)) }, type: "POST" });
    }).catch((err) => { console.error("webpush subscription failed:", err) })
});
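The matching server-side configuration could then look like this (hypothetical key values; the VAPID keys can be generated with the web-push module's generateVAPIDKeys()):
bkjs web -msg-webpush-key VAPID_PRIVATE_KEY -msg-webpush-pubkey VAPID_PUBLIC_KEY -msg-webpush-options-email admin@example.com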
Create a resource pool, create
and close
callbacks must be given which perform allocation and deallocation of the resources like db connections.
Options defines the following properties:
The create callback must produce a resource and invoke its callback as function(err, item);
if no create implementation callback is given then all operations are basically noop but still call the callbacks.
Example:
var pool = new Pool({
    min: 1, max: 5,
    create: function(cb) { someDb.connect(function(err) { cb(err, this) }) },
    destroy: function(client) { client.close() }
})
pool.acquire(function(err, client) {
    ...
    client.findItem....
    ...
    pool.release(client);
});
Pool.prototype.init(options)
Initialize pool properties, this can be run anytime even on the active pool to override some properties
Pool.prototype.acquire(callback)
Return next available resource item, if not available immediately wait for defined amount of time before calling the callback with an error. The callback second argument is active resource item.
Pool.prototype.destroy(item, callback)
Destroy the resource item calling the provided close callback
Pool.prototype.release(item)
Return the resource item back to the list of available resources.
Pool.prototype.destroyAll()
Close all active items
Pool.prototype.stats()
Return an object with stats
Pool.prototype.shutdown(callback, maxtime)
Close all connections and shut down the pool; no more items will be opened and the pool cannot be used without re-initialization. If a callback is provided then wait until all items are released and call it; the optional maxtime can be used to restrict how long to wait for all items to be released, when expired the callback will be called.
Pool.prototype._call(name, callback)
Call registered method and catch exceptions, pass it to the callback if given
Pool.prototype._timer()
Timer to ensure pool integrity
The main server class that starts various processes
Config parameters
server-max-processes, type: "callback", callback: setWorkers
server-workers, type: "callback", callback: setWorkers, descr: "Max number of processes to launch for Web servers, 0 means NumberOfCPUs-1, < 0 means NumberOfCPUs*abs(N)"
server-crash-delay, type: "number", max: 30000, obj: "crash", descr: "Delay between respawning the crashed process"
server-restart-delay, type: "number", max: 30000, descr: "Delay between respawning the server after changes"
server-no-restart, type: "bool", descr: "Do not restart any processes terminated, for debugging crashes only"
server-log-errors, type: "bool", descr: "If true, log crash errors from child processes by the logger, otherwise write to the daemon err-file. The reason for this is that the logger puts everything into one line thus breaking formatting for stack traces."
server-process-name, descr: "Path to the command to spawn by the monitor instead of node, for external processes guarded by this monitor"
server-process-args, type: "list", re_map: ["%20", " "], descr: "Arguments for spawned processes, for passing v8 options or other flags in case of external processes"
server-worker-args, type: "list", re_map: ["%20", " "], descr: "Node arguments for workers, job and web processes, for passing v8 options"
server-api-restart-hours, type: "list", datatype: "int", descr: "List of hours when to restart api workers, only done once for each hour"
server.start()
Start the server process, call the callback to perform some initialization before launching any server, just after core.init
server.startMonitor(options)
Start process monitor, running as root
server.startMaster(options)
Setup worker environment
server.startWebServer(options)
Create Express server, setup worker environment, call supplied callback to set initial environment
server.startWebMaster()
Spawn web server from the master as a separate master with web workers, it is used when web and master processes are running on the same server
server.handleChildProcess(child, type, method)
Setup exit listener on the child process and restart it
server.startProcess()
Restart the main process with the same arguments and setup as a monitor for the spawn child
server.startDaemon()
Create daemon from the current process, restart node with -daemon removed in the background
server.onProcessExit()
Kill all child processes on exit
server.onProcessTerminate()
Terminates the server process, it is called on the SIGTERM signal but can be called manually for a graceful shutdown,
it runs shutdown[Role]
methods before exiting
server.shutdown(options, callback)
Shutdown the system immediately, mostly to be used in the remote jobs as the last task
server.shutdownServer(options, callback)
Graceful shutdown if the api server needs restart
server.spawnProcess(args, skip, opts)
Start new process reusing global process arguments, args will be added and args in the skip list will be removed
server.writePidfile()
Create a pid file for the current process
server.restartWebWorkers()
Performs graceful web worker restart
Shell command interface for bksh
This module is supposed to be extended with commands, the format is shell.cmdNAME
where NAME
is the command name in camel case
For example:
const bkjs = require("backendjs");
const shell = bkjs.shell;
shell.cmdMyCommand = function(options) { console.log("hello"); return "continue" }
Now calling bksh -my-command
will print hello and launch the repl;
if the command must exit instead of returning "continue", just call process.exit()
Run bksh -shell-help
to see all registered shell commands
shell.start(options)
Start REPL shell or execute any subcommand if specified in the command line. A subcommand may return a special string to indicate how to treat the flow:
-noexit - in the command line keep the shell running after executing the command
-exit - exit with error if no shell command found
-exit-timeout MS - will be set to the ms to wait before exit
-shell-delay MS - will wait before running the command
shell.awsCheckTags(obj, name)
Check all names in the tag set for given name pattern(s), all arguments after 0 are checked
shell.awsFilterSubnets(subnets, zone, name)
Return matched subnet ids by availability zone and/or name pattern
shell.awsSearchImages(options, callback)
Return Amazon AMIs for the given filter, sorted by create date in descending order
shell.awsLaunchInstances(options, callback)
Launch instances by run mode and/or other criteria
shell.cmdAwsLaunchInstances(options)
Launch instances by run mode and/or other criteria
shell.cmdAwsDeleteImage(options)
Delete an AMI with the snapshot
shell.cmdAwsCreateImage(options)
Create an AMI from the current instance of the instance by id
shell.cmdAwsRebootInstances(options)
Reboot instances by run mode and/or other criteria
shell.cmdAwsTerminateInstances(options)
Terminate instances by run mode and/or other criteria
shell.cmdAwsShowInstances(options)
Show running instances by run mode and/or other criteria
shell.cmdAwsSetupSsh(options)
Open/close SSH access to the specified group for the current external IP address
shell.cmdAwsS3Get(options)
Get file
shell.cmdAwsS3Put(options)
Put file
shell.cmdAwsS3List(options)
List folder
shell.cmdAwsSetRoute53(options)
Update a Route53 record with IP/names of all instances specified by the filter or with manually provided values
shell.cmdAwsCreateRoute53(options)
Create a new domain if it does not exist, assign an ELB alias to a hosted zone
shell.cmdDbGetConfig(options)
Show all config parameters
shell.cmdDbTables(options)
Show all tables
shell.cmdDbSelect(options)
Show records that match the search criteria, return up to -count N
records
shell.cmdDbScan(options)
Show all records that match search criteria
shell.cmdDbBackup(options)
Save all tables to the specified directory or the server home
shell.cmdDbRestore(options)
Restore tables
shell.cmdDbPut(options)
Put a record
shell.cmdDbUpdate(options)
Update a record
shell.cmdDbDel(options)
Delete a record
shell.cmdDbDelAll(options)
Delete all records
shell.cmdDbDrop(options)
Drop a table
shell.exit(err, msg)
Exit and write the message or error message to the console if not empty
shell.die(...args)
Exit with error code and dump all arguments to the stderr, backtrace as well
shell.getUser(obj, callback)
Resolves a user from obj.id
or obj.login
params and returns the record in the callback
shell.getQuery(options)
Returns an object with all command line params that do not start with dash(-), treat 2 subsequent params without dashes as a name value pair
shell.getQueryList()
Returns a list with all command line params that do not start with dash(-), only the trailing arguments will be collected
shell.getArgs(options)
Returns an object with all command line params starting with dash set with the value if the next param does not start with dash, or 1.
By default all args are stored as is with dashes; if options.camel is true then all args will be stored in camel form, if
options.underscore is true then all args will be stored with dashes converted into underscores.
options.index
can be used to get the args from any position, by default it only returns args after the current
commands processed from shell.cmdIndex
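A hypothetical sketch of the parsing described above (the exact output shapes are assumptions based on the description):
// given a command line like: bksh -my-command -verbose -name test
shell.cmdMyCommand = function(options) {
    shell.getArgs();             // => { "-verbose": 1, "-name": "test" }  (default: keys keep dashes)
    shell.getArgs({ camel: 1 }); // => { verbose: 1, name: "test" }
    return "continue";
}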
shell.getOption(name, options)
Return an argument by name from the options, options may contain parameters in camel form or with underscores, both formats will be checked
shell.getArg(name, options, dflt)
Return the first available value for the given name: options first, then the command arg, and then the default.
shell.getArgList(name, options)
Returns a list of all values for the given argument name, it handles duplicate arguments with the same name
shell.cmdShowInfo(options)
App version
shell.cmdRunFile(options)
Load a module and optionally execute it
Example:
var bkjs = require("backendjs")
bkjs.app.test = 123;
exports.run = function() {
console.log("run");
}
exports.newMethod = function() {
console.log(bkjs.core.version, "version");
}
Save into a file a.js and run
bksh -run-file a.js
In the shell the new methods can now be executed
> shell.newMethod()
shell.cmdRunConfig(options)
Load a config file
shell.cmdRunIpc(options)
Initialize more IPC clients
shell.cmdRunApi(options)
Run API server inside the shell
shell.cmdRunJobs(options)
Run jobs workers inside the shell
shell.cmdRunWorker(options)
Run jobs workers inside the shell
shell.cmdAuthGet(options)
Show account records by id or login
shell.cmdAuthAdd(options)
Add a user login
shell.cmdAuthUpdate(options)
Update a user login
shell.cmdAuthDel(options)
Delete a user login
shell.cmdLogWatch(options)
Run logwatcher and exit
shell.cmdSendRequest(options)
Send API request
shell.cmdSubmitJob(options)
Submit a job for execution
tests.expect(ok, ...args)
To be used in the tests, these global functions take the following arguments:
expect(ok, ....)
assert(failed, ....)
Example
tests.test_GetUser = function(next)
{
describe("Test user record existence by id");
db.get("bk_user", { login: "123" }, (err, row) => {
assert(err, "no error expected", row);
expect(row?.id == "123", `id must be 123`, row);
next();
});
}
tests.describe(...args)
Set the title and description of the next test, the title will be printed at the beginning of the global test object, this is a convenience utility to better document tests
tests.checkAccess(options, callback)
Generic access checker to be used in tests, accepts an array in .config with urls to check. The following properties can be used:
shell.cmdTestRun(options)
Run the test function which is defined in the global tests module, all arguments will be taken from the options or the command line. Options
use the same names as command line arguments without the preceding test-
prefix.
The main commands:
Optional parameters for the test-run:
All other common command line arguments are used normally, like -db-pool to specify which db to use.
After finish or in case of error the process exits if no callback is given.
Example, store it in tests/index.js:
tests.test_mytest = async function(next) {
describe("Check user record existence")
var row = await db.aget("bk_user", { login: "123" });
expect(row, "record must exists");
expect(row.id != "123", "Record id must not be 123", row)
next();
}
# bksh -test-run mytest
Custom tests:
to run all tests in tests/
bkjs test-all
to start all test commands in the shell using local ./tests/db.js
bksh -test-file db -test-run
or
bkjs test-db
to start a specific test
bksh -test-file db -test-run dynamodb
Watch the sources for changes and restart the server
Config parameters
watch-dir, type: "list", array: 1, descr: "Watch source directories for file changes to restart the server, for development only, the backend module files will be added to the watch list automatically, so only app specific directories should be added. In production -monitor must be used."
watch-ignore, type: "regexp", descr: "Files to be ignored by the watcher"
watch-match, type: "regexp", descr: "Files to be watched, .js and .css is the default"
watch-web, type: "list", array: 1, descr: "List of directories to be watched for file modifications and execute a buildWeb command to produce bundles, apps, etc... Relative paths will be applied to all packages, example: web/js,web/css"
watch-build, descr: "Command to run on web file modifications, to be used with tools like minify/uglify"
watch-mode, descr: "How to serialize web build launches for multiple files changed at the same time, if empty run one build per file, dir to run one launch per config directory, dir1 to run by the next top dir, dir3 to run by the third directory from the file...."
watch-delay, type: "int", descr: "Delay in ms before triggering the build web command to allow multiple files saved"
Account management
Config parameters
bk_data-perms, type: "map", maptype: "list", descr: "Tables and allowed operations, ex: -bk_data-perms bk_config:select;put"
bk_data.configureWeb(options, callback)
Create API endpoints and routes
bk_data.configureDataAPI()
API for full access to all tables
System management
Config parameters
bk_system-perms, type: "map", maptype: "list", descr: "Allowed operations, ex: -bk_system-perms restart:api,init:queue;config;db"
bk_system.configureWeb(options, callback)
Create API endpoints and routes
bk_system.configureSystemAPI()
API for internal provisioning and configuration
Account management
bk_user.configureWeb(options, callback)
Create API endpoints and routes
bk_user.configureAccountsAPI()
Account management
bk_user.getAccount(req, options, callback)
Returns current account, used in /account/get API call, req.account will be filled with the properties from the db
bk_user.notifyAccount(options, callback)
Send Push notification to the account. The delivery is not guaranteed, if the message was queued for delivery, no errors will be returned.
The options may contain the following:
In addition the device_id can be saved in the format service://id where the service is one of the supported delivery services, this way the notification system will pick the right delivery service depending on the device id, the default service is apple.
Example:
bk_user.notifyAccount({ account_id: "123", msg: "test", badge: 1, sound: 1 })
bk_user.addAccount(req, options, callback)
Register a new account, may be used as an API call, but the req does not have to be an Express request, it just needs to have the query and options objects.
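For illustration, a minimal sketch with a plain object in place of an Express request; the module handle and the query field names are assumptions:
const bkjs = require("backendjs");
// assumption: the module is reachable as bkjs.modules.bk_user and accepts login/secret/name fields
const req = { query: { login: "user@example.com", secret: "Passw0rd!", name: "Test User" }, options: {} };
bkjs.modules.bk_user.addAccount(req, {}, (err) => {
    if (err) console.error("addAccount failed:", err);
});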
bk_user.updateAccount(req, options, callback)
Update existing account, used in /account/update API call
bk_user.deleteAccount(req, callback)
Delete the account specified by the obj.
The options may contain a keep
array with tables to be kept, for example
to delete an account but keep all messages and locations: keep:["bk_user","bk_location"]
This method is suitable for background jobs