
knex

Created: 2015-12-14 11:11:43 -0800 Modified: 2019-12-09 10:22:59 -0800

  • Keep in mind that BIGINT is 8 bytes, but JavaScript can only safely represent integers up to 9007199254740991 (Number.MAX_SAFE_INTEGER). For autoincrementing IDs, you'll probably never hit this number, but a value like 9007199254741491 may not be representable in JavaScript (in this case, it's stored as 9007199254741492). See the connection-option sketch after this list for one way to guard against this.
  • “.increments()” automatically makes the column a primary key. If you don’t want this, you have to use “.raw” (reference, also see my MySQL note on how increments needs to be a key):
Adding
knex.schema.raw('ALTER TABLE my_table ADD cmdId INT UNSIGNED AUTO_INCREMENT');
Modifying
knex.schema.raw('ALTER TABLE my_table MODIFY cmdId INT UNSIGNED AUTO_INCREMENT');
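
Related to the first bullet: if non-ID BIGINT columns might ever exceed that safe range, one mitigation (not from the original notes, and specific to the mysql driver) is to have such values come back as strings instead of silently-rounded numbers. A minimal sketch with placeholder connection values:

const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: 'localhost',
    user: 'Adam',
    password: 'secret',
    database: 'my_db',
    // mysql driver option: enable parsing of BIGINT/DECIMAL values...
    supportBigNumbers: true,
    // ...and return them as strings so JavaScript never rounds them.
    bigNumberStrings: true,
  },
});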

Both of the queries below produce the same result, but the latter reads better.

The long way (“groupBy” followed by “_.size”):

const dbResp = await knex('campaign_missions')
  .select(knex.raw('1'))
  .join('users', 'users.id', '=', 'campaign_missions.user_id')
  .groupBy('campaign_missions.user_id');
const numUniquePlayers = _.size(dbResp);

The short way (“count distinct”):

const { COUNT: numUniquePlayers } = await knex('campaign_missions')
  .select(knex.raw('count(distinct users.id) as COUNT'))
  .join('users', 'users.id', '=', 'campaign_missions.user_id')
  .first();

I.e. “count distinct” is possible, but you need knex.raw for it.

I wasn't able to figure this out on my own. The closest I got to a working solution still didn't work for me at all with async/await; I would just get this error:

Error: ER_CANT_CHANGE_TX_CHARACTERISTICS: Transaction characteristics can’t be changed while a transaction is in progress

I had a situation where I had an array of user IDs and wanted to select all of their devices from a table with a composite primary key of [‘user_id’, ‘device_id’]. I used whereIn with an array of arrays to do this:

const dbDevices = await trx('devices')
  .select('user_id', 'device_id')
  .whereIn('user_id', userIds);
const userIdDeviceIdCombos = _.map(dbDevices, ({ user_id, device_id }) => [
  user_id,
  device_id,
]);
const dbSubscriptionArns = await trx('aws_sns_subscription_arns')
  .select('arn')
  .whereIn(['user_id', 'device_id'], userIdDeviceIdCombos);

From what I understand, if you’re doing an INSERT and you want Dataset X to be used if a row exists and Dataset Y to be used if it doesn’t exist, you can use this combination of insert() / whereNotExists():

16:04 HiDeoo You never use insert() / whereNotExists() ?

16:05 HiDeoo Example https://github.com/tgriesser/knex/issues/1322#issuecomment-204173708
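
I haven't verified this against that issue, but the usual shape of the trick is an INSERT ... SELECT guarded by WHERE NOT EXISTS via knex.raw. A minimal sketch (table and column names are made up):

await knex.raw(
  `INSERT INTO users (email, name)
   SELECT :email, :name FROM DUAL
   WHERE NOT EXISTS (SELECT 1 FROM users WHERE email = :email)`,
  { email: 'someone@example.com', name: 'Someone' }
);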

For both composite foreign keys and composite primary keys, you can accomplish this by specifying an array of columns:

Composite foreign key:

table
  .foreign(['user_id', 'device_id'])
  .onDelete('CASCADE')
  .references(['user_id', 'device_id'])
  .inTable('devices');

Composite primary key:

table.primary(['user_id', 'device_id', 'notification_type']);

Let’s say you have a bunch of rows in a database and you want to add a “status” column to those. You want all existing rows to get a status of “STARTED”, but all future rows to have to define their own “status” explicitly. Thus, you can’t just set a default and leave it, otherwise future rows would get that default, but you also can’t say “notNullable” or existing rows would violate that property.

Here’s how to do this in Knex:

function alterSeasonsTable(knex) {
  return knex.schema.alterTable('seasons', (table) => {
    // Because there's no "status" column to begin with and seasons may exist,
    // we want to set a default just to give the existing season a value.
    table
      .enu('status', _.keys(SeasonStatus))
      .notNullable()
      .defaultTo(SeasonStatus.STARTED);
    // However, we don't want a default to STAY on this column, so we alter it
    // again.
    table
      .enu('status', _.keys(SeasonStatus))
      .notNullable()
      .alter();
  });
}

First, remove the foreign key with “dropForeign”, passing the column names specified in the original “foreign” command, then re-add it with the new definition:

table.dropForeign('user_id');
table
  .foreign('user_id')
  .onDelete('CASCADE')
  .references('id')
  .inTable('users');

Here’s an example of the syntax for this (make sure your Node version supports async/await, i.e. 7.6+):

return this.knex.transaction(async (trx) => {
  const dbResp = await trx('users')
    .select('name')
    .where('id', '=', 1);
  console.log('dbResp: ' + JSON.stringify(dbResp));
});

Note: if you wrap an “await” with try/catch, the internal catch isn’t going to roll back a transaction; you need to throw for that: https://bpaste.net/show/ce6cc58b4f31 Alternatively, you can call “trx.rollback” if you’re going to catch manually, but it’s probably best to just throw and let Knex handle it.
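
Here’s a minimal sketch of the “just throw” approach (table and column names are made up); any error that escapes the transaction callback makes Knex roll back, whereas swallowing it in the catch would not:

await knex.transaction(async (trx) => {
  try {
    await trx('users').insert({ name: 'Adam' });
    await trx('audit_log').insert({ action: 'user_created' });
  } catch (error) {
    console.error('Insert failed; rolling back', error);
    // Rethrow so Knex rolls the whole transaction back.
    throw error;
  }
});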

I’ve gotten the advice that each database table should have a primary index. I had a table like this:

+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| user_id         | bigint(20) unsigned | NO   | MUL | <null>  |       |
| is_attack       | tinyint(1)          | NO   |     | <null>  |       |
| highest_rating  | float(8,2)          | NO   |     | <null>  |       |
| lowest_rating   | float(8,2)          | NO   |     | <null>  |       |
| starting_rating | float(8,2)          | NO   |     | <null>  |       |
+-----------------+---------------------+------+-----+---------+-------+

Rows should be unique here by user_id and is_attack, so to enforce that, I make this Knex call:

function setupForeignKeysForUsersLeagues(knex) {
  return knex.schema.table('users_leagues', (table) => {
    table
      .foreign('user_id')
      .references('id')
      .inTable('users');
    // This should be unique since users_leagues only represents the current
    // season.
    table.primary(
      ['user_id', 'is_attack'],
      'users_leagues_user_id_and_is_attack_index'
    );
  });
}

That produces this table (notice the “PRI” next to user_id and is_attack):

+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| user_id         | bigint(20) unsigned | NO   | PRI | <null>  |       |
| is_attack       | tinyint(1)          | NO   | PRI | <null>  |       |
| highest_rating  | float(8,2)          | NO   |     | <null>  |       |
| lowest_rating   | float(8,2)          | NO   |     | <null>  |       |
| starting_rating | float(8,2)          | NO   |     | <null>  |       |
+-----------------+---------------------+------+-----+---------+-------+

This uses knex.raw, but it’s worth noting that you can’t use DELIMITER from knex (it only exists in particular client programs like the mysql CLI):

const procedure = `
CREATE PROCEDURE \`getAllUserNames\` (IN unusedInParameter INT)
BEGIN
SELECT name
FROM users;
END
`;
knex.schema
  .raw(procedure)
  .then(() => {
    console.log("Success");
  })
  .catch((error) => {
    console.log("Failure", error);
  });

The documentation doesn’t talk about this in depth, but you can do something like this:

table
  .dateTime('created_at')
  .notNullable()
  .defaultTo(knex.fn.now());

Note that when I tried this, I ended up getting my local time put into the database, whereas in JavaScript, if I specified a “new Date()”, I’d get UTC (which I think was based on specifying “timezone”:“z” in the connection string).

Small note: ”=” is assumed as the operator:

knex.where('id', userId); // fine and short
knex.where('id', '=', userId); // fine, but more verbose

To select a column aliased to another name, just use “as” directly in the “select”:

knex('ongoing_matches')
  .join('matches', 'matches.id', '=', 'ongoing_matches.match_id')
  .select('ongoing_matches.id', 'matches.defender_id as match_defender')
  .where('matches.attacker_id', '=', 1)
  .first()

Result: {"id":9,"match_defender":4}

If you want to combine “as” with a “knex.raw” call, do something like this:

const { testeroni: avgExecutionTime } = await this.knex('execution_metrics')
  .avg({ testeroni: this.knex.raw('elapsed_time / num_turns') })
  .first()
  .where('created_at', '>', this.knex.raw('now() - interval 1 day'));

To join on multiple columns, you have to use the callback syntax, e.g.

knex.select('*').from('devices').join('notification_settings', function() {
  this.on('notification_settings.device_id', '=', 'devices.device_id')
    .andOn('notification_settings.user_id', '=', 'devices.user_id');
});

That turns into this SQL:

select * from `devices` inner join `notification_settings` on `notification_settings`.`device_id` = `devices`.`device_id` and `notification_settings`.`user_id` = `devices`.`user_id`

If you don’t do this with “as” and you instead just “select()” (i.e. “select *”), then conflicting names from joins will be overwritten. In this case, my ongoing_matches and matches tables both have an “id” column, so every once in a while I would end up getting the wrong column.

Your solutions here are to alias the entire table (sketched below) or to select specific columns as shown in the code snippet above. It’s apparently an antipattern to SELECT * from a table in MySQL anyway (probably for reasons like this and also performance/memory usage).
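
Here’s roughly what aliasing the entire table looks like (a sketch; the “om”/“m” aliases and the “table as alias” strings are just illustrative):

const row = await knex('ongoing_matches as om')
  .join('matches as m', 'm.id', '=', 'om.match_id')
  .select('om.id as ongoing_match_id', 'm.id as match_id')
  .where('m.attacker_id', '=', 1)
  .first();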

Note: MySQL has no problem returning two values with the same name (although you wouldn’t be able to differentiate between the two):

SELECT ongoing_matches.id,
       matches.id
FROM ongoing_matches
INNER JOIN matches ON matches.id = ongoing_matches.match_id
WHERE matches.attacker_id = 1 LIMIT 1

+----+----+
| id | id |
+----+----+
| 12 | 42 |
+----+----+

JavaScript will choke on that though because object keys need to be unique.

This can be fixed by using a query like this:

SELECT ongoing_matches.id AS ongoing_matches_id,
       matches.id AS matches_id
FROM ongoing_matches
INNER JOIN matches ON matches.id = ongoing_matches.match_id LIMIT 1

Note: there are two ways to do this with Knex:

  • knex.raw with “as”
knex("ongoing_matches")
.select([
knex.raw("matches.id as match_id"),
knex.raw("ongoing_matches.id as ongoing_match_id")
])
.join("matches", "ongoing_matches.match_id", "=", "matches.id");
  • You can also use “identifier syntax” (reference)
knex("ongoing_matches")
.select({
match_id: "matches.id",
ongoing_match_id: "ongoing_matches.id",
start_time: "start_time"
})
.join("matches", "ongoing_matches.match_id", "=", "matches.id");

Let’s say you have a function like this:

getUserByEmailAddress(emailAddress) {
  return this.knex('users')
    .select()
    .where('email_address', '=', emailAddress)
    .then((dbResp) => {
      // Return the user object.
      return dbResp[0];
    });
}

Instead of returning “dbResp[0];”, you can use “.first()”, which not only has the “dbResp[0]” built-in, but also adds a “limit 1” to the query (which could improve performance for certain requests):

knex('users').select().first().toString()

'select * from users limit 1'

Thus, the final function looks like this:

getUserByEmailAddress(emailAddress) {
  return this.knex('users')
    .select()
    .where('email_address', '=', emailAddress)
    .first();
}

Use “clone” if you plan on reusing a query later so that built-up internal state (added clauses) doesn’t leak between uses:

const baseQuery = knex('users').select('name').where('id', id);
const cloneForGetNumRows = baseQuery.clone();
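
As a sketch of why that matters (the “status” column and the count usage are just illustrative), each clone can be extended independently without polluting the original query:

const baseQuery = knex('users').where('status', 'ACTIVE');
// The clone gets its own count(); baseQuery is left untouched.
const { numRows } = await baseQuery.clone().count('id as numRows').first();
const activeNames = await baseQuery.clone().select('name');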

If you don’t need to reuse the query, then see “conditionally adding on to a query”.

Conditionally adding on to a query


You can conditionally add to a query like this since it’s just the builder pattern; the query only becomes a Promise once “.then” (or “await”) is called on it:

getScriptsForUser(userId, scriptIds = undefined) {
  const query = this.knex('scripts')
    .andWhere('user_id', '=', userId);
  if (util.exists(scriptIds)) {
    query.whereIn('id', scriptIds);
  }
  return query;
}

I didn’t actually try this out, but here’s an issue on how to do it: https://github.com/tgriesser/knex/issues/1056

10/17/2016

This can be helpful when you’re trying to do something that doesn’t exactly fit the paradigm that knex lays out for you. For example, when a user logged in, I wanted to update their number of logins and their last login time. I thought it would take an “update” and an “increments” in the query builder, but that didn’t do what I expected; I think it only did the increments. (UPDATE: update+increments doesn’t work only because of a bug.) Instead, I ended up with this:

return this.knex('users')
  .where('id', '=', userId)
  .update({
    last_login_date: new Date(),
    num_logins: this.knex.raw('num_logins + 1'),
  });

volatilemajesty: if you mark variables as ":id", ":instanceId" instead of "?", you can pass in an object. cleans things up

Be very careful not to open yourself up to injection attacks by using proper bindings (reference)

Here’s an example using named bindings and some extra functions so that I can concatenate with a “log” string that keeps getting built up from null (hence the “COALESCE” (reference)):

const details = 'you were banned';
yield trx('bans')
  .whereIn('user_id', ids)
  .update({
    ban_details: trx.raw("CONCAT(COALESCE(ban_details, ''), :details)", {
      details,
    }),
  });

This doesn’t exist in Knex and the creators don’t want it to exist, apparently because upsert semantics differ so much between database vendors. From what I can see, using “knex.raw” combined with “ON DUPLICATE KEY” (MySQL) or “ON CONFLICT” (PostgreSQL) is the best way to go.

const knexClientMySQL = require('knex/lib/dialects/mysql');
const knexClientMaria = require('knex/lib/dialects/maria');

/**
 * Inserts or updates in a single query.
 * @param {string} options.table - the name of the table to upsert into
 * @param {Object} options.object - the object to insert or update
 * @param {?Knex} trx - if specified, this should represent an existing
 * transaction if you want this to be rolled back/committed as part of
 * another set of queries, otherwise it will fall back to the instance-level
 * Knex object.
 * @return {Promise}
 */
upsert({ table, object }, trx = this.knex) {
  // This specific setup will only work with MySQL/Maria due to the raw SQL
  // syntax.
  assert(
    trx.client instanceof knexClientMySQL ||
      trx.client instanceof knexClientMaria
  );
  const insert = trx(table).insert(object);
  const update = trx
    .queryBuilder()
    .update(object)
    .toString();
  // In MySQL, "update" typically pairs with "SET", but it does NOT when doing
  // an upsert, so we have to remove the "set".
  const mySqlUpdate = update.replace(/update\s+set/i, 'update ');
  return trx.raw(`? ON DUPLICATE KEY ${mySqlUpdate}`, [insert]);
}

Here’s how you would call the function:

const objToUpsert = {
  user_id: 1,
  is_attack: true,
  games_played: 6,
};
await upsert({
  table: 'users_leagues',
  object: objToUpsert,
});

Be very careful when calling upsert with values based on existing data. For example, suppose you want to upsert a row so that “lowest_rating” is always the minimum of the existing lowest_rating and the new rating. You may think to do this:

const objectToUpsert = {
  user_id: userId,
  is_attack: isAttack,
  lowest_rating: trx.raw(
    `LEAST(lowest_rating, ${newRating})`
  ),
};

However, on insert, lowest_rating won’t exist, so it’ll get set to 0. Instead, you need this ridiculous query:

LEAST(COALESCE(NULLIF(lowest_rating, 0), ${newRating}), ${newRating})

“NULLIF” will set lowest_rating to null when it’s 0, so then COALESCE will use the newRating, and finally LEAST will always pick the lower of two now-defined ratings.
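
Here’s that expression dropped back into the upsert object, as a sketch (using a named binding for newRating rather than string interpolation, to stay safe from injection):

const objectToUpsert = {
  user_id: userId,
  is_attack: isAttack,
  lowest_rating: trx.raw(
    'LEAST(COALESCE(NULLIF(lowest_rating, 0), :newRating), :newRating)',
    { newRating }
  ),
};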

Rather than have to use a bunch of COALESCE and NULLIF, I think it may be better to just rewrite the “upsert” function from above to take in separate insert/update objects. Here’s an example:

/**
 * Inserts or updates in a single query.
 * @param {string} options.table - the name of the table to upsert into
 * @param {Object} options.insertObject - the object to insert
 * @param {Object} options.updateObject - the object to update
 * @param {Object} options.insertOrUpdateObject - the object to use for
 * both inserting and updating. You can't specify this if you're going to
 * specify the objects individually above.
 * @param {?Knex} trx - if specified, this should represent an existing
 * transaction if you want this to be rolled back/committed as part of
 * another set of queries, otherwise it will fall back to the instance-level
 * Knex object.
 * @return {Promise}
 */
upsert(
  { table, insertObject, updateObject, insertOrUpdateObject },
  trx = this.knex
) {
  assert(
    (!_.isNil(insertObject) &&
      !_.isNil(updateObject) &&
      _.isNil(insertOrUpdateObject)) ||
      (_.isNil(insertObject) &&
        _.isNil(updateObject) &&
        !_.isNil(insertOrUpdateObject)),
    'You can only specify either insertObject AND updateObject or just insertOrUpdateObject'
  );
  insertObject = _.defaultTo(insertObject, insertOrUpdateObject);
  updateObject = _.defaultTo(updateObject, insertOrUpdateObject);
  // This specific setup will only work with MySQL/Maria due to the raw SQL
  // syntax.
  assert(
    trx.client instanceof knexClientMySQL ||
      trx.client instanceof knexClientMaria
  );
  const insert = trx(table).insert(insertObject);
  const update = trx
    .queryBuilder()
    .update(updateObject)
    .toString();
  // In MySQL, "update" typically pairs with "SET", but it does NOT when doing
  // an upsert, so we have to remove the "set".
  const mySqlUpdate = update.replace(/update\s+set/i, 'update ');
  return trx.raw(`? ON DUPLICATE KEY ${mySqlUpdate}`, [insert]);
}

Note: the return value from “upsert” looks something like this:

dbResp: [
  {
    "fieldCount": 0,
    "affectedRows": 2,
    "insertId": 0,
    "serverStatus": 2,
    "warningCount": 0,
    "message": "",
    "protocol41": true,
    "changedRows": 0
  },
  null
]

  • insertId apparently only gets set if you have an auto-incremented primary key
  • affectedRows is 2 in the case of an update (rather than an insert); MySQL reports two affected rows when ON DUPLICATE KEY UPDATE changes an existing row

Originally, I thought that maybe I should do something like this:

const insertOrUpdateObject = { key, value: JSON.stringify(value) };
return this.knex.transaction(async (trx) => {
  const dbRowsAffected = await trx('overseer_config')
    .update(insertOrUpdateObject)
    .where({
      key,
    });
  if (dbRowsAffected === 0) {
    await trx('overseer_config').insert(insertOrUpdateObject);
  }
});

However, there’s a potential race condition here even though there’s a transaction in use. It is possible that each connection calls “update”, gets back 0 rows, then each connection tries inserting, which would result in an error on the second transaction to run.

There are two ways to address this race condition:

  • A table-level lock (which sounds like a bad idea)
  • A lock on a related table. For example, I ran into a problem with Bot Land where I needed to pull in rating information for two players (from the “users” table), update it, and save it back to both “users” and “users_leagues”. Saving to “users_leagues” would involve an upsert, which means that if multiple connections tried upserting into users_leagues for the same player, they could both take the “insert” route, meaning one of them would fail. To get around this, I perform the entirety of the updates (to both “users” and “users_leagues”) in a single transaction and make sure to start by selecting “forUpdate” on “users”. By doing this, we can’t get to the upsert in users_leagues unless we acquired the lock for “users”. See the sketch below.
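
Here’s a minimal sketch of that locking pattern (attackerId, defenderId, and the rating math are placeholders; forUpdate() issues SELECT ... FOR UPDATE):

await knex.transaction(async (trx) => {
  // Lock the two players' rows in "users" first; a competing transaction
  // blocks here instead of racing us to the upsert below.
  const players = await trx('users')
    .select('id', 'rating')
    .whereIn('id', [attackerId, defenderId])
    .forUpdate();

  // ...compute the new ratings and update "users"...

  // Only one transaction at a time can reach this upsert now.
  await upsert({ table: 'users_leagues', object: newLeagueRow }, trx);
});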

Upsert conclusions:

  • Upsert is database-specific, so the MySQL upsert won’t be compatible with PostgreSQL.
  • Upsert is great for queries whose insert/update semantics are the same. E.g. if you always just want to set “column”=“value”, then awesome, use upsert. If you want to do something specific on only insert, then you probably should use a transaction.
  • If using a transaction, you should lock on another table, since you can’t lock rows that don’t exist yet and you wouldn’t want a table-wide lock. That way you avoid having two transactions try to insert the same data and fail.

12/14/2015

  • Note: it’s a really good idea to take a back-up of a database before running migrations. Typically, migrations are going to alter tables in addition to adding new ones, and altering a table can’t be rolled back in MySQL (reference). To back up a database, check out this note.
  • knex migrations:
    • First, make your knexfile.js
      • knex init
    • Modify the knexfile.js to include your environments and connection information.
    • Set up your database. I never found out whether this is something you should do through knex or through the command line, so I just did it through the command line.

CREATE DATABASE knex_migration_db;
USE knex_migration_db;
GRANT ALL PRIVILEGES ON knex_migration_db.* TO 'Adam'@'localhost';

  • Make the knex migration file (see next section for more notes on the structure of this)

    • knex migrate:make migration_name
      • If this fails with ENOENT, then you probably just need to make the “migrations” folder.
      • The “migration_name” is used in the file name; it’s appended to the datetime-stamp of creation. I believe it’s just a moniker for you so that you can say something like “v1_to_v2” or something.
      • For Adam: if you need to make a new migration for Bot Land locally:
        • Convert all of B:\Code\BotLand\botland\packages\database\env.development into “set FOO=BAR” statements to put those environment variables into the current environment.
        • Go to the packages/database/src folder
        • “knex migrate:make migration_name”
  • Modify the migration file that was created to include any setup you need, e.g. making tables, inserting rows, etc.

  • Run the migrations

    • knex migrate:latest --env development
    • Remember: there is a configuration value in knexfile.js called migrations: {tableName: 'knex_migrations'}. This will create a knex_migrations table in your database if it doesn’t already exist. That table keeps track of which migration files were run so that it can know where to pick up next time.
    • For Adam (for Bot Land), yarn run db:migrate
  • Migration files

    • You should define functions like this:
      • exports.up = function(knex, Promise) {};
      • exports.down = function(knex, Promise) {};
      • The “up” function defines how you upgrade from the last migration file to this one.
      • The “down” function lets you specify rollback behavior if you want. This is not necessarily an inverse of the “up” function, e.g. you don’t need to drop a table that you created in “up”. Instead, you can use this for custom functionality like creating a backup table, filling that with data, and THEN dropping the original table so that you’ve rolled back correctly.
    • Migrations are automatically run inside of transactions, but keep in mind that altering tables in MySQL cannot be rolled back (reference). That means that if you’re going to be altering tables, and hit a failure, you would have to manually undo your changes just to be able to re-run the file. This is why it’s good to take a back-up beforehand.
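
Here’s a minimal sketch of a migration file (the “scores” table and its columns are made up):

exports.up = function(knex, Promise) {
  return knex.schema.createTable('scores', (table) => {
    table.increments('id');
    table.bigInteger('user_id').unsigned().notNullable();
    table.integer('value').notNullable();
    table.dateTime('created_at').notNullable().defaultTo(knex.fn.now());
  });
};

exports.down = function(knex, Promise) {
  // Not necessarily the inverse of "up"; dropping the table is just the
  // simplest rollback here.
  return knex.schema.dropTableIfExists('scores');
};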

I originally tried to use babel-node for this via an NPM script in package.json, but I ran into issues trying to use the migrations from the API in test code (knex.migrate.latest()), so I had to switch to babel-register:

  • npm install --save-dev babel-register
  • Modify knexfile.js to include require('babel-register');
  • Add this NPM script
    • "knex": "node node_modules/knex/bin/cli.js --knexfile database/knexfile.js",
    • Note that you can probably just say "knex --knexfile database/knexfile.js", but I have "knex.cmd" in the same directory and Windows is acting very strange when it comes to writing it that way.
  • Make sure you have a .babelrc like this
{
  // Note: this entire file only exists so that babel-register works
  "compact": false,
  "presets": [
    [
      "env",
      {
        "targets": {
          "node": "current"
        }
      }
    ]
  ]
}
  • Make sure babel-register shows up in your test code somewhere since I think calling knex.migrate.latest directly bypasses the knexfile.js (not sure about this).
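
For reference, here’s a minimal sketch of the knexfile.js this setup assumes (connection values are placeholders); the require('babel-register') line is what lets the migration files use ES module syntax:

require('babel-register');

module.exports = {
  development: {
    client: 'mysql',
    connection: {
      host: 'localhost',
      user: 'Adam',
      password: 'secret',
      database: 'knex_migration_db',
      timezone: 'Z',
    },
    migrations: { tableName: 'knex_migrations' },
  },
};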

Note: you can consider using ESM, which is a smaller library solely for working with ES modules. It was made by the creator of Lodash. To run using ESM, you just do something like this:

node -r esm index.js

By using this, you don’t need a .babelrc file.

Based on what I read at the reference and what tollus says below, it sounds like I need to use Babel and then change the command line that gets invoked (which would involve changing Ansible scripts for Bot Land):

10:15 tollus: if you want to use es6/imports you have to call babel first, i did it with a npm script called ‘knex’ with ‘babel-node node_modules/knex/bin/cli.js’

10:15 tollus: then i run ‘npm run knex -- migrate:xyz’

What I did for this:

npm install --save-dev babel-cli

Add this to package.json (note: I had to specify the knexfile.js only because it was in a different folder):

"knex": "babel-node node_modules/knex/bin/cli.js --knexfile database/knexfile.js",

Modified Ansible to have the command look something like this:

command: npm run knex -- migrate:latest --env development chdir={{ accountServerDir }}/account_server

Converting between MySQL Date and JavaScript Date objects


6/17/2019

This is quite tedious! If you do ".where('some_date_column', aJsDateObject)", it will work, but only if the Date object has the time portion set to 00:00:00. If you want to generate the current day’s Date object with a zeroed-out time portion, you can use this code:

const todayDate = new Date();
const todayAsUtc = new Date(
  Date.UTC(
    todayDate.getFullYear(),
    todayDate.getMonth(),
    todayDate.getDate(),
    0,
    0,
    0
  )
);

I’m on PDT, so it’s currently 11:48 AM, but the UTC time shows as 18:48, so the resulting Date from this is “2019-06-17T00:00:00.000Z”.

Alternatively, rather than doing anything in JavaScript, you can just use the MySQL built-in function “curdate()”, e.g.

.where('some_date_column', knex.raw('curdate()'))

Converting between MySQL DateTime and JavaScript Date objects


1/21/2016

Knex handles conversions between MySQL DateTime and JavaScript Date objects. However, you should make sure you connect with timezone set to ‘Z’:

knex = require('knex')({
  client: 'mysql',
  connection: {
    host: databaseHost,
    user: databaseUser,
    password: databasePassword,
    timezone: 'Z',
    database: databaseName,
  },
});

If you DID want a function to convert from JavaScript -> MySQL, you can use this code that I adapted from a StackOverflow answer:

/**
 * This is mostly used by convertJSDateToMySQLDateTime to pad number strings.
 * @param {number} d
 * @return {string}
 */
function twoDigits(d) {
  if (d >= 0 && d < 10) {
    return '0' + d;
  }
  if (d < 0 && d > -10) {
    return '-0' + (-1 * d);
  }
  return d.toString();
}

/**
 * Converts a JavaScript Date object into the MySQL format for DateTime.
 * E.g. the epoch turns into '1970-01-01 00:00:00'.
 * @param {Date} date
 * @return {string}
 */
export function convertJSDateToMySQLDateTime(date) {
  const year = date.getUTCFullYear();
  const month = twoDigits(1 + date.getUTCMonth());
  const dayOfMonth = twoDigits(date.getUTCDate());
  const hours = twoDigits(date.getUTCHours());
  const minutes = twoDigits(date.getUTCMinutes());
  const seconds = twoDigits(date.getUTCSeconds());
  return `${year}-${month}-${dayOfMonth} ${hours}:${minutes}:${seconds}`;
  // return `${date.getUTCFullYear()}-${twoDigits(1 + date.getUTCMonth())}` +
  //   `-${twoDigits(date.getUTCDate())} ${twoDigits(date.getUTCHours())}` +
  //   `:${twoDigits(date.getUTCMinutes())}:${twoDigits(date.getUTCSeconds())}`;
}

Just do something like this:

return this.knex
  .select(['id', 'def_estimated_skill'])
  .from('users')
  // They need to have a defense
  .whereNotNull('serialized_defense')
  // They need to have logged in recently
  .andWhere('last_login_date', '>', new Date(Date.now() - 30 * 24 * 60 * 60 * 1000))
  // Find users with the lowest difference in skill.
  .orderByRaw(`ABS(def_estimated_skill - ${atkEstimatedSkill}) ASC`)
  // We set the limit very high so that it's likely to find someone
  // who they haven't played against before. Ideally, this function
  // won't be called many times because we should cache the results
  // from every function like this in the Matchmaker.
  .limit(500);

On Windows:

set DEBUG=knex:tx

<run your program>

Knex:warning - Transaction was implicitly committed, do not mix transactions and DDL with MySQL (#805)


This happened when I was doing certain alterTable statements from inside a migration (even if they were all running sequentially with “await”) and then called into my own knex.transaction, even though the existing “knex” was already a transaction object (migrations run inside transactions). I’m pretty sure this means I could just ignore the warnings, but it also means that the auto-committing behavior would make it very difficult to recover from errors, since only part of the migration would have gone through.

This apparently happens because a savepoint doesn’t exist in MySQL (reference).

If you run into errors like this:

Error: ER_CANNOT_ADD_FOREIGN: Cannot add foreign key constraint

Type this into MySQL:

SHOW ENGINE INNODB STATUS;

You’ll see a section like this:


------------------------
LATEST FOREIGN KEY ERROR
------------------------
2017-02-11 14:13:58 0x1034 Error in foreign key constraint of table botland/#sql-abc_146:
foreign key (game_version) references game_versions (id) on delete SET NULL:
You have defined a SET NULL condition though some of the
columns are defined as NOT NULL.

In this particular case, I had “ON DELETE SET NULL” on a column that couldn’t be null.

In another sillier case, I had a MySQL statement of “drop A; drop B; drop C;”, but table C needed to be deleted before B, so I just reordered the drop statements.

Transactions are not “all or nothing” due to locking tables


UPDATE: I think this is just how autocommit works and that you can disable it for a session.

As far as I can tell, this is just how MySQL works (reference). If you have something like this code, it will persist to disk immediately due to locking semantics with transactions:

async function transactionTest() {
  await knex.transaction(async (trx) => {
    await trx.raw('LOCK TABLES users WRITE');
    // Kick off some promises
    // Because an error is thrown, you would expect the whole transaction to
    // roll back, but the lock causes certain things to be committed anyway.
    throw new Error('fake error');
    // TODO: unlock tables if you ever make it past this error in non-example
    // code (the command is "UNLOCK TABLES" regardless of which or how many
    // tables you have locked)
  });
}

As for a solution, I didn’t actually come up with one. I bypassed this by avoiding table-level locks altogether, which itself was only possible by never running the particular queries unless the database couldn’t be modified via user-controlled actions (i.e. the servers are essentially offline).

KEEP IN MIND THAT IF YOU’RE GOING TO EVER CALL “LOCK TABLES”, YOU STILL NEED TO USE A TRANSACTION, IT JUST WON’T BE ALL-OR-NOTHING.

If you don’t use a transaction, you could get an error like “Error: ER_TABLE_NOT_LOCKED: Table ‘current_season’ was not locked with LOCK TABLES”. The reason is that you’ll call “LOCK TABLES” on one connection without blocking other connections from going through, so if they try accessing any table that you haven’t locked, then you’ll get the aforementioned error.

In the end, here’s what your code should look like:

async someFunctionRequiringLockedTables() {
  return this.knex.transaction(async (trx) => {
    try {
      await trx.raw('LOCK TABLES some_table_name WRITE');
      // ...code to use some_table_name
    } finally {
      await trx.raw('UNLOCK TABLES');
    }
  });
}

The only issue is that the official MySQL + InnoDb docs (reference) say that you shouldn’t use “START TRANSACTION” when you’re going to lock a table because of the differences between MySQL and InnoDb locks. You should instead use “SET autocommit=0;” and then just call “UNLOCK TABLES” at the end.