Triply Client Tutorial

Automate Your Knowledge Graph

Wouter Beek

Create a new project

$ mkdir test && cd test
$ echo '{"name": "test", "version": "1.0.0"}' > package.json
$ npm install typescript @triply/client.js
This creates a new directory with a package.json file and installs the Triply Client library.

Create a New Script

// import
import Client from "@triply/client.js/build/src/App";
// configure
const client = Client.get({
  token: process.env.TRIPLY_API_TOKEN,
  url: process.env.TRIPLY_API_URL
});
async function run() {
  // Your code goes here …
}
// error handling
run().catch(e => {
  console.error(e);
  process.exit(1);
});
Store this in a file named script.ts inside your project directory.

Compile & Run the Script

$ ./node_modules/.bin/tsc script.ts
$ node script.js
After each change to script.ts, recompile it with tsc to produce script.js, which can then be run from the command line with node.
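Optionally, the compile and run steps can be wrapped as npm scripts. A minimal sketch of the resulting package.json (the script names are a suggestion, not required by Triply Client):

```json
{
  "name": "test",
  "version": "1.0.0",
  "scripts": {
    "build": "tsc script.ts",
    "start": "node script.js"
  }
}
```

With this in place, `npm run build` compiles the script and `npm start` runs it.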

Script 1: Obtain Your Account

const account = await client.getAccount();
console.log(account);
Prints information about your account to the terminal.

Script 2: Create a New Dataset

const dataset = await account.addDataset({
  accessLevel: "private",
  name: "some-dataset",
  description: "A dataset created with Triply Client.",
  license: "CC0 1.0"
});
console.log(dataset);
Creates the dataset and prints information about it to the terminal.

Script 3: Add Graphs to a Dataset

const account = await client.getAccount();
const dataset = await account.getDataset("some-dataset"); // the dataset from Script 2 (the retrieval step was elided on the slide)
await dataset.getJob().exec(); // runs the upload job for file.ttl.gz
Uploads the RDF data in file.ttl.gz to a specific dataset.

Script 4: Start Services

await dataset.addService("sparql", "endpoint-1");
await dataset.addService("sparql", "endpoint-2");
await dataset.addService("sparql", "endpoint-3");
await dataset.addService("sparql", "endpoint-4");
Starts four SPARQL endpoints over the same dataset. This allows you to create a SPARQL cluster of arbitrary size.
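The cluster does not balance load by itself; a client can spread queries over the endpoints, for example with a simple round-robin picker. The sketch below is illustrative and not part of the Triply Client API; only the endpoint names come from the `addService` calls above.

```typescript
// Round-robin picker over the endpoint names created above.
// Illustrative helper; not part of @triply/client.js.
function makeRoundRobin<T>(items: T[]): () => T {
  let i = 0;
  return () => items[i++ % items.length];
}

const nextEndpoint = makeRoundRobin([
  "endpoint-1", "endpoint-2", "endpoint-3", "endpoint-4"
]);
console.log(nextEndpoint()); // endpoint-1
console.log(nextEndpoint()); // endpoint-2
```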

Script 5: Query Your SPARQL Cluster

function sleep(ms: number) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
while (true) {
  for (const service of await dataset.getServices()) {
    if (await service.getStatus() === "running") {
      console.log("Found a running service:", service);
    }
  }
  await sleep(1000);
}
This script prints the endpoints from the SPARQL cluster that are currently available. This allows you to perform maintenance on individual SPARQL endpoints without downtime.
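The availability check in the loop above can also be factored into a small helper that filters a list of services down to the running ones. This is a sketch against an assumed `getStatus(): Promise<string>` shape mirroring the loop above; the helper itself is not part of the Triply Client API.

```typescript
// Illustrative helper; not part of @triply/client.js. Resolves to the
// subset of services whose asynchronous status check reports "running".
interface ServiceLike {
  getStatus(): Promise<string>;
}

async function runningServices<S extends ServiceLike>(services: S[]): Promise<S[]> {
  // Query all statuses concurrently, then keep the matching services.
  const statuses = await Promise.all(services.map(s => s.getStatus()));
  return services.filter((_, i) => statuses[i] === "running");
}
```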

Thank you for your attention!

Wouter Beek

Triply B.V.