Add initial draft of ACVP tool.

ACVP will be the replacement for CAVP. CAVP is the FIPS 140 test-vector
program. This commit contains some very rough support for ACVP.
Currently it only supports hash functions and it's not hard to hit
corner cases, but it's enough of a framework to work from.

Change-Id: Ifcde18ac560710e252220282acd66d08e7507262
Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36644
Commit-Queue: Adam Langley <agl@google.com>
Reviewed-by: David Benjamin <davidben@google.com>
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 4ff8663..c3992a9 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -559,6 +559,7 @@
 add_subdirectory(ssl/test)
 add_subdirectory(tool)
 add_subdirectory(util/fipstools/cavp)
+add_subdirectory(util/fipstools/acvp/modulewrapper)
 add_subdirectory(decrepit)
 
 if(FUZZ)
diff --git a/util/fipstools/acvp/ACVP.md b/util/fipstools/acvp/ACVP.md
new file mode 100644
index 0000000..ddae4ed
--- /dev/null
+++ b/util/fipstools/acvp/ACVP.md
@@ -0,0 +1,159 @@
+ACVP Client
+===========
+
+[ACVP](https://github.com/usnistgov/ACVP) is the next version of NIST's [CAVP](https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program)—a program for running cryptographic implementations against a set of test vectors. CAVP involved emailing around zip files of somewhat-INI-like test vectors where no two files had quite the same format. ACVP is supposed to replace that with a) TLS connections rather than email and b) JSON rather than bespoke formats.
+
+The tool in this directory can speak to ACVP servers and run the resulting test vectors through a candidate FIPS module by lowering the tests to a much simpler protocol. It also provides an interface for manipulating the ACVP database which includes lists of modules, vendors, contacts, operating environments etc.
+
+## Configuration
+
+Configuration is done via a `config.json` file in the current working directory. (Lines whose first non-whitespace characters are `//` are treated as comments and ignored.) Here's a template:
+
+```
+{
+        "ACVPServer": "https://demo.acvts.nist.gov/",
+        "CertPEMFile": "certificate_from_nist.pem",
+        "PrivateKeyDERFile": "your_private_key.key",
+        "TOTPSecret": "<base64 from NIST goes here>",
+        "SessionTokensCache": "~/.cache/acvp-session-tokens",
+        "LogFile": "log"
+}
+```
+
+NIST's ACVP servers use both TLS client certificates and TOTP for authentication. When registering with NIST, they'll sign a CSR and return a certificate in PEM format, which is pointed to by `CertPEMFile`. The corresponding PKCS#1, DER-encoded private key is expected in `PrivateKeyDERFile`. Lastly, NIST will provide a file that contains the base64-encoded TOTP seed, which must be pasted in as the value of `TOTPSecret`.
+
+NIST's ACVP server provides special access tokens for each test session and test sessions can _only_ be accessed via those tokens. The reasoning behind this is unclear, but this client can optionally keep records of these access tokens in the directory named by `SessionTokensCache`. If that directory name begins with `~/` then that prefix will be replaced with the value of `$HOME`.
+
+Lastly, a log of all HTTP traffic will be written to the file named by `LogFile`, if provided. This is useful for debugging.
+
+## Interactive Use
+
+ACVP provides a fairly complex interface to a database of several types of objects. A rough UI is provided for this which is triggered when the client is invoked with no command-line arguments.
+
+The simplest objects in ACVP are request objects. These record the status of requested changes to the database and, in practice, changes to the NIST demo database never succeed. The set of pending requests for the current user can be enumerated just by evaluating the `requests` object:
+
+```
+> requests
+[
+  {
+    "url": "/acvp/v1/requests/374",
+    "status": "processing"
+  },
+  {
+    "url": "/acvp/v1/requests/218",
+    "status": "processing"
+  }
+]
+```
+
+A specific request can be evaluated by using indexing syntax:
+
+```
+> requests[374]
+{
+  "url": "/acvp/v1/requests/374",
+  "status": "processing"
+}
+```
+
+The list of vendors provides a more complex example. Since there are a large number of duplicates in NIST's database, there are more than 10 000 vendor objects and enumerating them all takes a long time. Thus evaluating the `vendors` object doesn't do that:
+
+```
+> vendors
+[object set vendors]
+```
+
+It is still possible to use indexing syntax to read a specific vendor object if you know the ID:
+
+```
+> vendors[1234]
+{
+  "url": "/acvp/v1/vendors/1234",
+  "name": "Apple Inc.",
+  "website": "www.apple.com",
+  "contactsUrl": "/acvp/v1/vendors/1234/contacts",
+  "addresses": [
+    {
+      "url": "/acvp/v1/vendors/1234/addresses/1234",
+      "street1": "1 Infinite Loop",
+      "locality": "Cupertino",
+      "region": "CA",
+      "country": "USA",
+      "postalCode": "95014"
+    }
+  ]
+}
+```
+
+Finding a vendor when the ID is not known requires searching and the ACVP spec [documents](http://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.8.1), for each object type, what values and what relations can be searched on. This is reflected in a variant of the indexing syntax:
+
+```
+> vendors[where name contains "Google LLC"]
+[
+  {
+    "url": "/acvp/v1/vendors/11136",
+    "name": "Google LLC",
+    "website": "www.google.com",
+    "contactsUrl": "/acvp/v1/vendors/11136/contacts",
+    "addresses": [
+      {
+        "url": "/acvp/v1/vendors/11136/addresses/11136",
+        "street1": "1600 Amphitheatre Parkway",
+        "locality": "Mountain View",
+        "region": "CA",
+        "country": "USA",
+        "postalCode": "94043"
+      }
+    ]
+  },
+  {
+    "url": "/acvp/v1/vendors/11137",
+    "name": "Google LLC",
+    "website": "www.google.com",
+    "contactsUrl": "/acvp/v1/vendors/11137/contacts",
+    "addresses": [
+      {
+        "url": "/acvp/v1/vendors/11137/addresses/11137",
+        "street1": "1600 Amphitheatre Parkway",
+        "locality": "Mountain View",
+        "region": "CA",
+        "country": "USA",
+        "postalCode": "94043"
+      }
+    ]
+  }
+]
+```
+
+In general, `&&` and `||` can be used as in C and the relationships are `==`, `!=`, `contains`, `startsWith`, and `endsWith`. Only values and relations listed in the ACVP spec for a given object can be used.
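
Under the hood, such an expression is lowered into the `param[index]=relation:value` query-parameter format from the ACVP paging spec. The following is a minimal sketch of that lowering (the `condition` type and helper are illustrative, not the client's exported API):

```go
package main

import (
	"fmt"
	"net/url"
)

// condition represents one "param relation value" clause.
type condition struct {
	param, relation, value string
}

// toURLParams lowers a slice of OR'd conjunctions into ACVP's
// "param[i]=relation:value" search parameters. Conditions within the
// same conjunction share an index i; each conjunction gets its own.
func toURLParams(query [][]condition) string {
	var ret string
	for i, conj := range query {
		for _, cond := range conj {
			if len(ret) > 0 {
				ret += "&"
			}
			ret += fmt.Sprintf("%s[%d]=%s:%s", url.QueryEscape(cond.param), i, cond.relation, url.QueryEscape(cond.value))
		}
	}
	return ret
}

func main() {
	// vendors[where name contains "Google LLC"]
	q := [][]condition{{{param: "name", relation: "contains", value: "Google LLC"}}}
	fmt.Println(toURLParams(q))
	// name[0]=contains:Google+LLC
}
```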
+
+More complex interaction remains to be fleshed out. However, it is generally possible to create new objects by evaluating, for example, `vendors.new()`. That will invoke `$EDITOR` to edit the JSON to be submitted. (For now, however, no helpful templates are provided.)
+
+The current list of objects is:
+
+* `requests`
+* `vendors`
+* `persons`
+* `modules`
+* `oes` (operating environments)
+* `deps`
+* `algos`
+* `sessions`
+
+## Running test sessions
+
+Handling of test sessions (in non-interactive mode) is split into a “front” part, which talks to the ACVP server, and a “middle” part, which runs the actual test vectors. The middle part receives the raw JSON of the vector sets and returns the response. It also knows the set of algorithms that it supports and their respective parameters. For the moment, the only middle part provided is called `subprocess`, which lowers the ACVP tests to a simple binary protocol and talks to a FIPS module in a separate process to run the cryptographic algorithms.
+
+For development purposes, this code can be exercised by passing, say, `-run SHA2-256` to the client.
+
+### The subprocess protocol
+
+The lowering of ACVP to a simpler protocol might be useful for other projects so the protocol is described here. The C++ implementation for BoringSSL is in the `modulewrapper` directory.
+
+The protocol follows a strict request–response model over stdin/stdout: the subprocess only speaks in response to a request and there is exactly one response for every request. Conceptually requests consist of one or more byte strings and responses consist of zero or more byte strings.
+
+On the wire, a request involves sending the number of byte strings, then the length of each byte string in order, then the contents of each byte string. All numbers are little-endian and 32-bit. The first byte string is mandatory and is the name of the command to perform. A response has the same format except that there may be zero byte strings and the first byte string has no special semantics.
+
+All implementations must support the `getConfig` command which takes no arguments and returns a single byte string which is a JSON blob of ACVP algorithm configuration. This blob describes all the algorithms and capabilities that the module supports and is an array of JSON objects suitable for including as the `algorithms` value when [creating an ACVP vector set](http://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.15.2.1).
+
+Each supported algorithm will have its own commands that the module must implement. So far, only hash functions are supported and the commands take a byte string to hash and return a single byte string of the resulting digest. The commands are named after the ACVP algorithm names, i.e. `SHA-1`, `SHA2-224`, `SHA2-256`, `SHA2-384`, and `SHA2-512`.
diff --git a/util/fipstools/acvp/acvptool/acvp.go b/util/fipstools/acvp/acvptool/acvp.go
new file mode 100644
index 0000000..ed1a84f
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/acvp.go
@@ -0,0 +1,430 @@
+package main
+
+import (
+	"bufio"
+	"bytes"
+	"crypto/hmac"
+	"crypto/sha256"
+	"crypto/x509"
+	"encoding/base64"
+	"encoding/binary"
+	"encoding/json"
+	"encoding/pem"
+	"errors"
+	"flag"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"net/http"
+	neturl "net/url"
+	"os"
+	"path/filepath"
+	"strings"
+	"time"
+
+	"boringssl.googlesource.com/boringssl/util/fipstools/acvp/acvptool/acvp"
+	"boringssl.googlesource.com/boringssl/util/fipstools/acvp/acvptool/subprocess"
+)
+
+var (
+	configFilename = flag.String("config", "config.json", "Location of the configuration JSON file")
+	runFlag        = flag.String("run", "", "Name of primitive to run tests for")
+	wrapperPath    = flag.String("wrapper", "../../../../build/util/fipstools/acvp/modulewrapper/modulewrapper", "Path to the wrapper binary")
+)
+
+type Config struct {
+	CertPEMFile        string
+	PrivateKeyDERFile  string
+	TOTPSecret         string
+	ACVPServer         string
+	SessionTokensCache string
+	LogFile            string
+}
+
+func isCommentLine(line []byte) bool {
+	var foundCommentStart bool
+	for _, b := range line {
+		if !foundCommentStart {
+			if b == ' ' || b == '\t' {
+				continue
+			}
+			if b != '/' {
+				return false
+			}
+			foundCommentStart = true
+		} else {
+			return b == '/'
+		}
+	}
+	return false
+}
+
+func jsonFromFile(out interface{}, filename string) error {
+	in, err := os.Open(filename)
+	if err != nil {
+		return err
+	}
+	defer in.Close()
+
+	scanner := bufio.NewScanner(in)
+	var commentsRemoved bytes.Buffer
+	for scanner.Scan() {
+		if isCommentLine(scanner.Bytes()) {
+			continue
+		}
+		commentsRemoved.Write(scanner.Bytes())
+		commentsRemoved.WriteString("\n")
+	}
+	if err := scanner.Err(); err != nil {
+		return err
+	}
+
+	decoder := json.NewDecoder(&commentsRemoved)
+	decoder.DisallowUnknownFields()
+	if err := decoder.Decode(out); err != nil {
+		return err
+	}
+	if decoder.More() {
+		return errors.New("trailing garbage found")
+	}
+	return nil
+}
+
+// TOTP implements the time-based one-time password algorithm with the suggested
+// granularity of 30 seconds. See https://tools.ietf.org/html/rfc6238 and then
+// https://tools.ietf.org/html/rfc4226#section-5.3
+func TOTP(secret []byte) string {
+	const timeStep = 30
+	now := uint64(time.Now().Unix()) / timeStep
+	var nowBuf [8]byte
+	binary.BigEndian.PutUint64(nowBuf[:], now)
+	mac := hmac.New(sha256.New, secret)
+	mac.Write(nowBuf[:])
+	digest := mac.Sum(nil)
+	value := binary.BigEndian.Uint32(digest[digest[31]&15:])
+	value &= 0x7fffffff
+	value %= 100000000
+	return fmt.Sprintf("%08d", value)
+}
+
+type Middle interface {
+	Close()
+	Config() ([]byte, error)
+	Process(algorithm string, vectorSet []byte) ([]byte, error)
+}
+
+func loadCachedSessionTokens(server *acvp.Server, cachePath string) error {
+	cacheDir, err := os.Open(cachePath)
+	if err != nil {
+		if os.IsNotExist(err) {
+			if err := os.Mkdir(cachePath, 0700); err != nil {
+				return fmt.Errorf("Failed to create session token cache directory %q: %s", cachePath, err)
+			}
+			return nil
+		}
+		return fmt.Errorf("Failed to open session token cache directory %q: %s", cachePath, err)
+	}
+	defer cacheDir.Close()
+	names, err := cacheDir.Readdirnames(0)
+	if err != nil {
+		return fmt.Errorf("Failed to list session token cache directory %q: %s", cachePath, err)
+	}
+
+	loaded := 0
+	for _, name := range names {
+		if !strings.HasSuffix(name, ".token") {
+			continue
+		}
+		path := filepath.Join(cachePath, name)
+		contents, err := ioutil.ReadFile(path)
+		if err != nil {
+			return fmt.Errorf("Failed to read session token cache entry %q: %s", path, err)
+		}
+		urlPath, err := neturl.PathUnescape(name[:len(name)-6])
+		if err != nil {
+			return fmt.Errorf("Failed to unescape token filename %q: %s", name, err)
+		}
+		server.PrefixTokens[urlPath] = string(contents)
+		loaded++
+	}
+
+	log.Printf("Loaded %d cached tokens", loaded)
+	return nil
+}
+
+func trimLeadingSlash(s string) string {
+	if strings.HasPrefix(s, "/") {
+		return s[1:]
+	}
+	return s
+}
+
+func main() {
+	flag.Parse()
+
+	var config Config
+	if err := jsonFromFile(&config, *configFilename); err != nil {
+		log.Fatalf("Failed to load config file: %s", err)
+	}
+
+	if len(config.TOTPSecret) == 0 {
+		log.Fatal("Config file missing TOTPSecret")
+	}
+	totpSecret, err := base64.StdEncoding.DecodeString(config.TOTPSecret)
+	if err != nil {
+		log.Fatalf("Failed to decode TOTP secret from config file: %s", err)
+	}
+
+	if len(config.CertPEMFile) == 0 {
+		log.Fatal("Config file missing CertPEMFile")
+	}
+	certPEM, err := ioutil.ReadFile(config.CertPEMFile)
+	if err != nil {
+		log.Fatalf("failed to read certificate from %q: %s", config.CertPEMFile, err)
+	}
+	block, _ := pem.Decode(certPEM)
+	if block == nil {
+		log.Fatalf("failed to parse PEM certificate from %q", config.CertPEMFile)
+	}
+	certDER := block.Bytes
+
+	if len(config.PrivateKeyDERFile) == 0 {
+		log.Fatal("Config file missing PrivateKeyDERFile")
+	}
+	keyDER, err := ioutil.ReadFile(config.PrivateKeyDERFile)
+	if err != nil {
+		log.Fatalf("failed to read private key from %q: %s", config.PrivateKeyDERFile, err)
+	}
+
+	certKey, err := x509.ParsePKCS1PrivateKey(keyDER)
+	if err != nil {
+		log.Fatalf("failed to parse private key from %q: %s", config.PrivateKeyDERFile, err)
+	}
+
+	var middle Middle
+	middle, err = subprocess.New(*wrapperPath)
+	if err != nil {
+		log.Fatalf("failed to initialise middle: %s", err)
+	}
+	defer middle.Close()
+
+	configBytes, err := middle.Config()
+	if err != nil {
+		log.Fatalf("failed to get config from middle: %s", err)
+	}
+
+	var supportedAlgos []map[string]interface{}
+	if err := json.Unmarshal(configBytes, &supportedAlgos); err != nil {
+		log.Fatalf("failed to parse configuration from Middle: %s", err)
+	}
+
+	runAlgos := make(map[string]bool)
+	if len(*runFlag) > 0 {
+		for _, substr := range strings.Split(*runFlag, ",") {
+			runAlgos[substr] = false
+		}
+	}
+
+	var algorithms []map[string]interface{}
+	for _, supportedAlgo := range supportedAlgos {
+		algoInterface, ok := supportedAlgo["algorithm"]
+		if !ok {
+			continue
+		}
+
+		algo, ok := algoInterface.(string)
+		if !ok {
+			continue
+		}
+
+		if _, ok := runAlgos[algo]; ok {
+			algorithms = append(algorithms, supportedAlgo)
+			runAlgos[algo] = true
+		}
+	}
+
+	for algo, recognised := range runAlgos {
+		if !recognised {
+			log.Fatalf("requested algorithm %q was not recognised", algo)
+		}
+	}
+
+	if len(config.ACVPServer) == 0 {
+		config.ACVPServer = "https://demo.acvts.nist.gov/"
+	}
+	server := acvp.NewServer(config.ACVPServer, config.LogFile, [][]byte{certDER}, certKey, func() string {
+		return TOTP(totpSecret[:])
+	})
+
+	var sessionTokensCacheDir string
+	if len(config.SessionTokensCache) > 0 {
+		sessionTokensCacheDir = config.SessionTokensCache
+		if strings.HasPrefix(sessionTokensCacheDir, "~/") {
+			home := os.Getenv("HOME")
+			if len(home) == 0 {
+				log.Fatal("~ used in config file but $HOME not set")
+			}
+			sessionTokensCacheDir = filepath.Join(home, sessionTokensCacheDir[2:])
+		}
+
+		if err := loadCachedSessionTokens(server, sessionTokensCacheDir); err != nil {
+			log.Fatal(err)
+		}
+	}
+
+	if err := server.Login(); err != nil {
+		log.Fatalf("failed to login: %s", err)
+	}
+
+	if len(*runFlag) == 0 {
+		runInteractive(server, config)
+		return
+	}
+
+	requestBytes, err := json.Marshal(acvp.TestSession{
+		IsSample:    true,
+		Publishable: false,
+		Algorithms:  algorithms,
+	})
+	if err != nil {
+		log.Fatalf("Failed to serialise JSON: %s", err)
+	}
+
+	var result acvp.TestSession
+	if err := server.Post(&result, "acvp/v1/testSessions", requestBytes); err != nil {
+		log.Fatalf("Request to create test session failed: %s", err)
+	}
+
+	url := trimLeadingSlash(result.URL)
+	log.Printf("Created test session %q", url)
+	if token := result.AccessToken; len(token) > 0 {
+		server.PrefixTokens[url] = token
+		if len(sessionTokensCacheDir) > 0 {
+			if err := ioutil.WriteFile(filepath.Join(sessionTokensCacheDir, neturl.PathEscape(url))+".token", []byte(token), 0600); err != nil {
+				log.Printf("Failed to cache session token: %s", err)
+			}
+		}
+	}
+
+	log.Printf("Have vector sets %v", result.VectorSetURLs)
+
+	for _, setURL := range result.VectorSetURLs {
+		firstTime := true
+		for {
+			if firstTime {
+				log.Printf("Fetching test vectors %q", setURL)
+				firstTime = false
+			}
+
+			vectorsBytes, err := server.GetBytes(trimLeadingSlash(setURL))
+			if err != nil {
+				log.Fatalf("Failed to fetch vector set %q: %s", setURL, err)
+			}
+
+			var vectors acvp.Vectors
+			if err := json.Unmarshal(vectorsBytes, &vectors); err != nil {
+				log.Fatalf("Failed to parse vector set from %q: %s", setURL, err)
+			}
+
+			if retry := vectors.Retry; retry > 0 {
+				log.Printf("Server requested %d seconds delay", retry)
+				if retry > 10 {
+					retry = 10
+				}
+				time.Sleep(time.Duration(retry) * time.Second)
+				continue
+			}
+
+			replyGroups, err := middle.Process(vectors.Algo, vectorsBytes)
+			if err != nil {
+				log.Printf("Failed: %s", err)
+				log.Printf("Deleting test set")
+				server.Delete(url)
+				os.Exit(1)
+			}
+
+			headerBytes, err := json.Marshal(acvp.Vectors{
+				ID:   vectors.ID,
+				Algo: vectors.Algo,
+			})
+			if err != nil {
+				log.Printf("Failed to marshal result: %s", err)
+				log.Printf("Deleting test set")
+				server.Delete(url)
+				os.Exit(1)
+			}
+
+			var resultBuf bytes.Buffer
+			resultBuf.Write(headerBytes[:len(headerBytes)-1])
+			resultBuf.WriteString(`,"testGroups":`)
+			resultBuf.Write(replyGroups)
+			resultBuf.WriteString("}")
+
+			resultData := resultBuf.Bytes()
+			resultSize := uint64(len(resultData)) + 32 /* for framing overhead */
+			if resultSize >= server.SizeLimit {
+				log.Printf("Result is %d bytes, too much given server limit of %d bytes. Using large-upload process.", resultSize, server.SizeLimit)
+				largeRequestBytes, err := json.Marshal(acvp.LargeUploadRequest{
+					Size: resultSize,
+					URL:  setURL,
+				})
+				if err != nil {
+					log.Printf("Failed to marshal large-upload request: %s", err)
+					log.Printf("Deleting test set")
+					server.Delete(url)
+					os.Exit(1)
+				}
+
+				var largeResponse acvp.LargeUploadResponse
+				if err := server.Post(&largeResponse, "/large", largeRequestBytes); err != nil {
+					log.Fatalf("Failed to request large-upload endpoint: %s", err)
+				}
+
+				log.Printf("Directed to large-upload endpoint at %q", largeResponse.URL)
+				client := &http.Client{}
+				req, err := http.NewRequest("POST", largeResponse.URL, bytes.NewBuffer(resultData))
+				if err != nil {
+					log.Fatalf("Failed to create POST request: %s", err)
+				}
+				token := largeResponse.AccessToken
+				if len(token) == 0 {
+					token = server.AccessToken
+				}
+				req.Header.Add("Authorization", "Bearer "+token)
+				req.Header.Add("Content-Type", "application/json")
+				resp, err := client.Do(req)
+				if err != nil {
+					log.Fatalf("Failed writing large upload: %s", err)
+				}
+				resp.Body.Close()
+				if resp.StatusCode != 200 {
+					log.Fatalf("Large upload resulted in status code %d", resp.StatusCode)
+				}
+			} else {
+				log.Printf("Result size %d bytes", resultSize)
+				if err := server.Post(nil, trimLeadingSlash(setURL)+"/results", resultData); err != nil {
+					log.Fatalf("Failed to upload results: %s\n", err)
+				}
+			}
+
+			break
+		}
+	}
+
+FetchResults:
+	for {
+		var results acvp.SessionResults
+		if err := server.Get(&results, trimLeadingSlash(url)+"/results"); err != nil {
+			log.Fatalf("Failed to fetch session results: %s", err)
+		}
+
+		if results.Passed {
+			break
+		}
+
+		for _, result := range results.Results {
+			if result.Status == "incomplete" {
+				log.Print("Server hasn't finished processing results. Waiting 10 seconds.")
+				time.Sleep(10 * time.Second)
+				continue FetchResults
+			}
+		}
+
+		log.Fatalf("Server did not accept results: %#v", results)
+	}
+}
diff --git a/util/fipstools/acvp/acvptool/acvp/acvp.go b/util/fipstools/acvp/acvptool/acvp/acvp.go
new file mode 100644
index 0000000..2f5d363
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/acvp/acvp.go
@@ -0,0 +1,646 @@
+package acvp
+
+import (
+	"bytes"
+	"crypto"
+	"crypto/tls"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"net"
+	"net/http"
+	"net/url"
+	"os"
+	"reflect"
+	"strings"
+	"time"
+)
+
+// Server represents an ACVP server.
+type Server struct {
+	// PrefixTokens are access tokens that apply to URLs under a certain prefix.
+	// The keys of this map are strings like "acvp/v1/testSessions/1234" and the
+	// values are JWT access tokens.
+	PrefixTokens map[string]string
+	// SizeLimit is the maximum number of bytes that the server can accept as an
+	// upload before the large endpoint support must be used.
+	SizeLimit uint64
+	// AccessToken is the top-level access token for the current session.
+	AccessToken string
+
+	client   *http.Client
+	prefix   string
+	totpFunc func() string
+}
+
+// NewServer returns a fresh Server instance representing the ACVP server at
+// prefix (e.g. "https://acvp.example.com/"). A copy of all bytes exchanged
+// will be written to logFile, if not empty.
+func NewServer(prefix string, logFile string, derCertificates [][]byte, privateKey crypto.PrivateKey, totp func() string) *Server {
+	if !strings.HasSuffix(prefix, "/") {
+		prefix = prefix + "/"
+	}
+
+	tlsConfig := &tls.Config{
+		Certificates: []tls.Certificate{
+			tls.Certificate{
+				Certificate: derCertificates,
+				PrivateKey:  privateKey,
+			},
+		},
+		Renegotiation: tls.RenegotiateOnceAsClient,
+	}
+
+	client := &http.Client{
+		Transport: &http.Transport{
+			Dial: func(network, addr string) (net.Conn, error) {
+				panic("HTTP connection requested")
+			},
+			DialTLS: func(network, addr string) (net.Conn, error) {
+				conn, err := tls.Dial(network, addr, tlsConfig)
+				if err != nil {
+					return nil, err
+				}
+				if len(logFile) > 0 {
+					logFile, err := os.OpenFile(logFile, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0600)
+					if err != nil {
+						return nil, err
+					}
+					return &logger{Conn: conn, log: logFile}, nil
+				}
+				return conn, err
+			},
+		},
+		Timeout: 10 * time.Second,
+	}
+
+	return &Server{client: client, prefix: prefix, totpFunc: totp, PrefixTokens: make(map[string]string)}
+}
+
+type logger struct {
+	*tls.Conn
+	log           *os.File
+	lastDirection int
+}
+
+var newLine = []byte{'\n'}
+
+func (l *logger) Read(buf []byte) (int, error) {
+	if l.lastDirection != 1 {
+		l.log.Write(newLine)
+	}
+	l.lastDirection = 1
+
+	n, err := l.Conn.Read(buf)
+	if err == nil {
+		l.log.Write(buf[:n])
+	}
+	return n, err
+}
+
+func (l *logger) Write(buf []byte) (int, error) {
+	if l.lastDirection != 2 {
+		l.log.Write(newLine)
+	}
+	l.lastDirection = 2
+
+	n, err := l.Conn.Write(buf)
+	if err == nil {
+		l.log.Write(buf[:n])
+	}
+	return n, err
+}
+
+const requestPrefix = `[{"acvVersion":"1.0"},`
+const requestSuffix = "]"
+
+// parseHeaderElement parses the first JSON object that's always returned by
+// ACVP servers. If successful, it returns a JSON Decoder positioned just
+// before the second element.
+func parseHeaderElement(in io.Reader) (*json.Decoder, error) {
+	decoder := json.NewDecoder(in)
+	arrayStart, err := decoder.Token()
+	if err != nil {
+		return nil, errors.New("failed to read from server reply: " + err.Error())
+	}
+	if delim, ok := arrayStart.(json.Delim); !ok || delim != '[' {
+		return nil, fmt.Errorf("found %#v when expecting initial array from server", arrayStart)
+	}
+
+	var version struct {
+		Version string `json:"acvVersion"`
+	}
+	if err := decoder.Decode(&version); err != nil {
+		return nil, errors.New("parse error while decoding version element: " + err.Error())
+	}
+	if !strings.HasPrefix(version.Version, "1.") {
+		return nil, fmt.Errorf("expected version 1.* from server but found %q", version.Version)
+	}
+
+	return decoder, nil
+}
+
+// parseReplyToBytes reads the contents of an ACVP reply after removing the
+// header element.
+func parseReplyToBytes(in io.Reader) ([]byte, error) {
+	decoder, err := parseHeaderElement(in)
+	if err != nil {
+		return nil, err
+	}
+
+	buf, err := ioutil.ReadAll(decoder.Buffered())
+	if err != nil {
+		return nil, err
+	}
+
+	rest, err := ioutil.ReadAll(in)
+	if err != nil {
+		return nil, err
+	}
+	buf = append(buf, rest...)
+
+	buf = bytes.TrimSpace(buf)
+	if len(buf) == 0 || buf[0] != ',' {
+		return nil, errors.New("didn't find initial ','")
+	}
+	buf = buf[1:]
+
+	if len(buf) == 0 || buf[len(buf)-1] != ']' {
+		return nil, errors.New("didn't find trailing ']'")
+	}
+	buf = buf[:len(buf)-1]
+
+	return buf, nil
+}
+
+// parseReply parses the contents of an ACVP reply (after removing the header
+// element) into out. See the documentation of the encoding/json package for
+// details of the parsing.
+func parseReply(out interface{}, in io.Reader) error {
+	if out == nil {
+		// No reply expected.
+		return nil
+	}
+
+	decoder, err := parseHeaderElement(in)
+	if err != nil {
+		return err
+	}
+
+	if err := decoder.Decode(out); err != nil {
+		return errors.New("error while decoding reply body: " + err.Error())
+	}
+
+	arrayEnd, err := decoder.Token()
+	if err != nil {
+		return errors.New("failed to read end of reply from server: " + err.Error())
+	}
+	if delim, ok := arrayEnd.(json.Delim); !ok || delim != ']' {
+		return fmt.Errorf("found %#v when expecting end of array from server", arrayEnd)
+	}
+	if decoder.More() {
+		return errors.New("unexpected trailing data from server")
+	}
+
+	return nil
+}
+
+// expired returns true if the given JWT token has expired.
+func expired(tokenStr string) bool {
+	parts := strings.Split(tokenStr, ".")
+	if len(parts) != 3 {
+		return false
+	}
+	jsonBytes, err := base64.RawURLEncoding.DecodeString(parts[1])
+	if err != nil {
+		return false
+	}
+	var token struct {
+		Expiry uint64 `json:"exp"`
+	}
+	if json.Unmarshal(jsonBytes, &token) != nil {
+		return false
+	}
+	return token.Expiry > 0 && token.Expiry < uint64(time.Now().Unix())
+}
+
+func (server *Server) getToken(endPoint string) (string, error) {
+	for path, token := range server.PrefixTokens {
+		if endPoint != path && !strings.HasPrefix(endPoint, path+"/") {
+			continue
+		}
+
+		if !expired(token) {
+			return token, nil
+		}
+
+		var reply struct {
+			AccessToken string `json:"accessToken"`
+		}
+		if err := server.postMessage(&reply, "acvp/v1/login", map[string]string{
+			"password":    server.totpFunc(),
+			"accessToken": token,
+		}); err != nil {
+			return "", err
+		}
+		server.PrefixTokens[path] = reply.AccessToken
+		return reply.AccessToken, nil
+	}
+	return server.AccessToken, nil
+}
+
+// Login sends a login request and stores the returned access tokens for use
+// with future requests. The login process isn't specifically documented in
+// draft-fussell-acvp-spec and the best reference is
+// https://github.com/usnistgov/ACVP/wiki#credentials-for-accessing-the-demo-server
+func (server *Server) Login() error {
+	var reply struct {
+		AccessToken           string `json:"accessToken"`
+		LargeEndpointRequired bool   `json:"largeEndpointRequired"`
+		SizeLimit             uint64 `json:"sizeConstraint"`
+	}
+
+	if err := server.postMessage(&reply, "acvp/v1/login", map[string]string{"password": server.totpFunc()}); err != nil {
+		return err
+	}
+
+	if len(reply.AccessToken) == 0 {
+		return errors.New("login reply didn't contain access token")
+	}
+	server.AccessToken = reply.AccessToken
+
+	if reply.LargeEndpointRequired {
+		if reply.SizeLimit == 0 {
+			return errors.New("login indicated largeEndpointRequired but didn't provide a sizeConstraint")
+		}
+		server.SizeLimit = reply.SizeLimit
+	}
+
+	return nil
+}
+
+type Relation int
+
+const (
+	Equals           Relation = iota
+	NotEquals        Relation = iota
+	GreaterThan      Relation = iota
+	GreaterThanEqual Relation = iota
+	LessThan         Relation = iota
+	LessThanEqual    Relation = iota
+	Contains         Relation = iota
+	StartsWith       Relation = iota
+	EndsWith         Relation = iota
+)
+
+func (rel Relation) String() string {
+	switch rel {
+	case Equals:
+		return "eq"
+	case NotEquals:
+		return "ne"
+	case GreaterThan:
+		return "gt"
+	case GreaterThanEqual:
+		return "ge"
+	case LessThan:
+		return "lt"
+	case LessThanEqual:
+		return "le"
+	case Contains:
+		return "contains"
+	case StartsWith:
+		return "start"
+	case EndsWith:
+		return "end"
+	default:
+		panic("unknown relation")
+	}
+}
+
+type Condition struct {
+	Param    string
+	Relation Relation
+	Value    string
+}
+
+type Conjunction []Condition
+
+type Query []Conjunction
+
+func (query Query) toURLParams() string {
+	var ret string
+
+	for i, conj := range query {
+		for _, cond := range conj {
+			if len(ret) > 0 {
+				ret += "&"
+			}
+			ret += fmt.Sprintf("%s[%d]=%s:%s", url.QueryEscape(cond.Param), i, cond.Relation.String(), url.QueryEscape(cond.Value))
+		}
+	}
+
+	return ret
+}
+
+var NotFound = errors.New("acvp: HTTP code 404")
+
+func (server *Server) newRequestWithToken(method, endpoint string, body io.Reader) (*http.Request, error) {
+	token, err := server.getToken(endpoint)
+	if err != nil {
+		return nil, err
+	}
+	req, err := http.NewRequest(method, server.prefix+endpoint, body)
+	if err != nil {
+		return nil, err
+	}
+	if len(token) != 0 {
+		req.Header.Add("Authorization", "Bearer "+token)
+	}
+	return req, nil
+}
+
+func (server *Server) Get(out interface{}, endPoint string) error {
+	req, err := server.newRequestWithToken("GET", endPoint, nil)
+	if err != nil {
+		return err
+	}
+	resp, err := server.client.Do(req)
+	if err != nil {
+		return fmt.Errorf("error while fetching chunk for %q: %s", endPoint, err)
+	}
+
+	defer resp.Body.Close()
+	if resp.StatusCode == 404 {
+		return NotFound
+	} else if resp.StatusCode != 200 {
+		return fmt.Errorf("acvp: HTTP error %d", resp.StatusCode)
+	}
+	return parseReply(out, resp.Body)
+}
+
+func (server *Server) GetBytes(endPoint string) ([]byte, error) {
+	req, err := server.newRequestWithToken("GET", endPoint, nil)
+	if err != nil {
+		return nil, err
+	}
+	resp, err := server.client.Do(req)
+	if err != nil {
+		return nil, fmt.Errorf("error while fetching chunk for %q: %s", endPoint, err)
+	}
+
+	defer resp.Body.Close()
+	if resp.StatusCode == 404 {
+		return nil, NotFound
+	} else if resp.StatusCode != 200 {
+		return nil, fmt.Errorf("acvp: HTTP error %d", resp.StatusCode)
+	}
+	return parseReplyToBytes(resp.Body)
+}
+
+func (server *Server) write(method string, reply interface{}, endPoint string, contents []byte) error {
+	var buf bytes.Buffer
+	buf.WriteString(requestPrefix)
+	buf.Write(contents)
+	buf.WriteString(requestSuffix)
+
+	req, err := server.newRequestWithToken(method, endPoint, &buf)
+	if err != nil {
+		return err
+	}
+	req.Header.Add("Content-Type", "application/json")
+	resp, err := server.client.Do(req)
+	if err != nil {
+		return fmt.Errorf("error while writing to %q: %s", endPoint, err)
+	}
+
+	defer resp.Body.Close()
+	if resp.StatusCode == 404 {
+		return NotFound
+	} else if resp.StatusCode != 200 {
+		return fmt.Errorf("acvp: HTTP error %d", resp.StatusCode)
+	}
+	return parseReply(reply, resp.Body)
+}
+
+func (server *Server) postMessage(reply interface{}, endPoint string, request interface{}) error {
+	contents, err := json.Marshal(request)
+	if err != nil {
+		return err
+	}
+	return server.write("POST", reply, endPoint, contents)
+}
+
+// Post makes a POST request to the given endpoint with the given JSON
+// contents and parses the reply into out.
+func (server *Server) Post(out interface{}, endPoint string, contents []byte) error {
+	return server.write("POST", out, endPoint, contents)
+}
+
+// Put is like Post, but makes a PUT request.
+func (server *Server) Put(out interface{}, endPoint string, contents []byte) error {
+	return server.write("PUT", out, endPoint, contents)
+}
+
+// Delete makes a DELETE request for the given endpoint.
+func (server *Server) Delete(endPoint string) error {
+	req, err := server.newRequestWithToken("DELETE", endPoint, nil)
+	if err != nil {
+		return err
+	}
+	resp, err := server.client.Do(req)
+	if err != nil {
+		return fmt.Errorf("error while deleting %q: %s", endPoint, err)
+	}
+
+	defer resp.Body.Close()
+	if resp.StatusCode != 200 {
+		return fmt.Errorf("acvp: HTTP error %d", resp.StatusCode)
+	}
+	fmt.Printf("DELETE %q %d\n", server.prefix+endPoint, resp.StatusCode)
+	return nil
+}
+
+var (
+	uint64Type = reflect.TypeOf(uint64(0))
+	boolType   = reflect.TypeOf(false)
+	stringType = reflect.TypeOf("")
+)
+
+// GetPaged returns an array of records of some type using one or more requests to the server. See
+// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#paging_response
+//
+// The out argument must be a pointer to a slice of the record type, e.g.:
+//	var vendors []Vendor
+//	err := server.GetPaged(&vendors, "acvp/v1/vendors", nil)
+func (server *Server) GetPaged(out interface{}, endPoint string, condition Query) error {
+	output := reflect.ValueOf(out)
+	if output.Kind() != reflect.Ptr {
+		panic(fmt.Sprintf("GetPaged output parameter of non-pointer type %T", out))
+	}
+
+	token, err := server.getToken(endPoint)
+	if err != nil {
+		return err
+	}
+
+	outputSlice := output.Elem()
+
+	replyType := reflect.StructOf([]reflect.StructField{
+		{Name: "TotalCount", Type: uint64Type, Tag: `json:"totalCount"`},
+		{Name: "Incomplete", Type: boolType, Tag: `json:"incomplete"`},
+		{Name: "Data", Type: output.Elem().Type(), Tag: `json:"data"`},
+		{Name: "Links", Type: reflect.StructOf([]reflect.StructField{
+			{Name: "Next", Type: stringType, Tag: `json:"next"`},
+		}), Tag: `json:"links"`},
+	})
+	nextURL := server.prefix + endPoint
+	conditionParams := condition.toURLParams()
+	if len(conditionParams) > 0 {
+		nextURL += "?" + conditionParams
+	}
+
+	isFirstRequest := true
+	for {
+		req, err := http.NewRequest("GET", nextURL, nil)
+		if err != nil {
+			return err
+		}
+		if len(token) != 0 {
+			req.Header.Add("Authorization", "Bearer "+token)
+		}
+		resp, err := server.client.Do(req)
+		if err != nil {
+			return fmt.Errorf("error while fetching chunk for %q: %s", endPoint, err)
+		}
+		if resp.StatusCode == 404 && isFirstRequest {
+			resp.Body.Close()
+			return nil
+		} else if resp.StatusCode != 200 {
+			resp.Body.Close()
+			return fmt.Errorf("acvp: HTTP error %d", resp.StatusCode)
+		}
+		isFirstRequest = false
+
+		reply := reflect.New(replyType)
+		err = parseReply(reply.Interface(), resp.Body)
+		resp.Body.Close()
+		if err != nil {
+			return err
+		}
+
+		data := reply.Elem().FieldByName("Data")
+		for i := 0; i < data.Len(); i++ {
+			outputSlice.Set(reflect.Append(outputSlice, data.Index(i)))
+		}
+
+		if uint64(outputSlice.Len()) == reply.Elem().FieldByName("TotalCount").Uint() ||
+			reply.Elem().FieldByName("Links").FieldByName("Next").String() == "" {
+			break
+		}
+
+		nextURL = server.prefix + endPoint + fmt.Sprintf("?offset=%d", outputSlice.Len())
+		if len(conditionParams) > 0 {
+			nextURL += "&" + conditionParams
+		}
+	}
+
+	return nil
+}
+
+// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.8.3.1
+type Vendor struct {
+	URL         string    `json:"url,omitempty"`
+	Name        string    `json:"name,omitempty"`
+	ParentURL   string    `json:"parentUrl,omitempty"`
+	Website     string    `json:"website,omitempty"`
+	Emails      []string  `json:"emails,omitempty"`
+	ContactsURL string    `json:"contactsUrl,omitempty"`
+	Addresses   []Address `json:"addresses,omitempty"`
+}
+
+// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.9
+type Address struct {
+	URL        string `json:"url,omitempty"`
+	Street1    string `json:"street1,omitempty"`
+	Street2    string `json:"street2,omitempty"`
+	Street3    string `json:"street3,omitempty"`
+	Locality   string `json:"locality,omitempty"`
+	Region     string `json:"region,omitempty"`
+	Country    string `json:"country,omitempty"`
+	PostalCode string `json:"postalCode,omitempty"`
+}
+
+// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.10
+type Person struct {
+	URL          string   `json:"url,omitempty"`
+	FullName     string   `json:"fullName,omitempty"`
+	VendorURL    string   `json:"vendorUrl,omitempty"`
+	Emails       []string `json:"emails,omitempty"`
+	PhoneNumbers []struct {
+		Number string `json:"number,omitempty"`
+		Type   string `json:"type,omitempty"`
+	} `json:"phoneNumbers,omitempty"`
+}
+
+// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.11
+type Module struct {
+	URL         string   `json:"url,omitempty"`
+	Name        string   `json:"name,omitempty"`
+	Version     string   `json:"version,omitempty"`
+	Type        string   `json:"type,omitempty"`
+	Website     string   `json:"website,omitempty"`
+	VendorURL   string   `json:"vendorUrl,omitempty"`
+	AddressURL  string   `json:"addressUrl,omitempty"`
+	ContactURLs []string `json:"contactUrls,omitempty"`
+	Description string   `json:"description,omitempty"`
+}
+
+type RequestStatus struct {
+	URL         string `json:"url,omitempty"`
+	Status      string `json:"status,omitempty"`
+	Message     string `json:"message,omitempty"`
+	ApprovedURL string `json:"approvedUrl,omitempty"`
+}
+
+type OperationalEnvironment struct {
+	URL            string       `json:"url,omitempty"`
+	Name           string       `json:"name,omitempty"`
+	DependencyUrls []string     `json:"dependencyUrls,omitempty"`
+	Dependencies   []Dependency `json:"dependencies,omitempty"`
+}
+
+type Dependency map[string]interface{}
+
+type Algorithm map[string]interface{}
+
+type TestSession struct {
+	URL           string                   `json:"url,omitempty"`
+	ACVPVersion   string                   `json:"acvpVersion,omitempty"`
+	Created       string                   `json:"createdOn,omitempty"`
+	Expires       string                   `json:"expiresOn,omitempty"`
+	VectorSetURLs []string                 `json:"vectorSetUrls,omitempty"`
+	AccessToken   string                   `json:"accessToken,omitempty"`
+	Algorithms    []map[string]interface{} `json:"algorithms,omitempty"`
+	EncryptAtRest bool                     `json:"encryptAtRest,omitempty"`
+	IsSample      bool                     `json:"isSample,omitempty"`
+	Publishable   bool                     `json:"publishable,omitempty"`
+	Passed        bool                     `json:"passed,omitempty"`
+}
+
+type Vectors struct {
+	Retry    uint64 `json:"retry,omitempty"`
+	ID       uint64 `json:"vsId"`
+	Algo     string `json:"algorithm,omitempty"`
+	Revision string `json:"revision,omitempty"`
+}
+
+type LargeUploadRequest struct {
+	Size uint64 `json:"submissionSize,omitempty"`
+	URL  string `json:"vectorSetUrl,omitempty"`
+}
+
+type LargeUploadResponse struct {
+	URL         string `json:"url"`
+	AccessToken string `json:"accessToken"`
+}
+
+type SessionResults struct {
+	Passed  bool `json:"passed"`
+	Results []struct {
+		URL    string `json:"vectorSetUrl,omitempty"`
+		Status string `json:"status"`
+	} `json:"results"`
+}
diff --git a/util/fipstools/acvp/acvptool/interactive.go b/util/fipstools/acvp/acvptool/interactive.go
new file mode 100644
index 0000000..8de57ff
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/interactive.go
@@ -0,0 +1,707 @@
+package main
+
+import (
+	"bytes"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"io/ioutil"
+	neturl "net/url"
+	"os"
+	"os/exec"
+	"os/signal"
+	"path/filepath"
+	"reflect"
+	"strconv"
+	"strings"
+	"syscall"
+
+	"boringssl.googlesource.com/boringssl/util/fipstools/acvp/acvptool/acvp"
+	"golang.org/x/crypto/ssh/terminal"
+)
+
+func updateTerminalSize(term *terminal.Terminal) {
+	width, height, err := terminal.GetSize(0)
+	if err != nil {
+		return
+	}
+	term.SetSize(width, height)
+}
+
+func skipWS(node *node32) *node32 {
+	for ; node != nil && node.pegRule == ruleWS; node = node.next {
+	}
+	return node
+}
+
+func assertNodeType(node *node32, rule pegRule) {
+	if node.pegRule != rule {
+		panic(fmt.Sprintf("expected %q, found %q", rul3s[rule], rul3s[node.pegRule]))
+	}
+}
+
+// Object is a value in the interactive syntax: either an object (or set of
+// objects) on the ACVP server, or a local value such as a string literal.
+type Object interface {
+	String() (string, error)
+	Index(string) (Object, error)
+	Search(acvp.Query) (Object, error)
+	Action(action string, args []string) error
+}
+
+type ServerObjectSet struct {
+	env          *Env
+	name         string
+	searchKeys   map[string][]acvp.Relation
+	resultType   reflect.Type
+	subObjects   map[string]func(*Env, string) (Object, error)
+	canEnumerate bool
+}
+
+func (set ServerObjectSet) String() (string, error) {
+	if !set.canEnumerate {
+		return "[object set " + set.name + "]", nil
+	}
+
+	data := reflect.New(reflect.SliceOf(set.resultType)).Interface()
+	if err := set.env.server.GetPaged(data, "acvp/v1/"+set.name, nil); err != nil {
+		return "", err
+	}
+	ret, err := json.MarshalIndent(data, "", "  ")
+	return string(ret), err
+}
+
+func (set ServerObjectSet) Index(indexStr string) (Object, error) {
+	index, err := strconv.ParseUint(indexStr, 0, 64)
+	if err != nil {
+		return nil, fmt.Errorf("object set indexes must be unsigned integers, trying to parse %q failed: %s", indexStr, err)
+	}
+	return ServerObject{&set, index}, nil
+}
+
+func (set ServerObjectSet) Search(condition acvp.Query) (Object, error) {
+	if set.searchKeys == nil {
+		return nil, errors.New("this object set cannot be searched")
+	}
+
+	for _, conj := range condition {
+	NextCondition:
+		for _, cond := range conj {
+			allowed, ok := set.searchKeys[cond.Param]
+			if !ok {
+				return nil, fmt.Errorf("search key %q not valid for this object set", cond.Param)
+			}
+
+			for _, rel := range allowed {
+				if rel == cond.Relation {
+					continue NextCondition
+				}
+			}
+
+			return nil, fmt.Errorf("search key %q cannot be used with relation %q", cond.Param, cond.Relation.String())
+		}
+	}
+
+	return Search{ServerObjectSet: set, query: condition}, nil
+}
+
+func (set ServerObjectSet) Action(action string, args []string) error {
+	switch action {
+	default:
+		return fmt.Errorf("unknown action %q", action)
+
+	case "new":
+		if len(args) != 0 {
+			return fmt.Errorf("found %d arguments but %q takes none", len(args), action)
+		}
+
+		newContents, err := edit("")
+		if err != nil {
+			return err
+		}
+
+		if strings.TrimSpace(string(newContents)) == "" {
+			io.WriteString(set.env.term, "Resulting file was empty. Ignoring.\n")
+			return nil
+		}
+
+		var result map[string]interface{}
+		if err := set.env.server.Post(&result, "acvp/v1/"+set.name, newContents); err != nil {
+			return err
+		}
+
+		// In case it's a testSession that was just created, poke any access token
+		// into the server's lookup table and the cache.
+		if urlInterface, ok := result["url"]; ok {
+			if url, ok := urlInterface.(string); ok {
+				if tokenInterface, ok := result["accessToken"]; ok {
+					if token, ok := tokenInterface.(string); ok {
+						url = strings.TrimLeft(url, "/")
+						set.env.server.PrefixTokens[url] = token
+						if len(set.env.config.SessionTokensCache) > 0 {
+							ioutil.WriteFile(filepath.Join(set.env.config.SessionTokensCache, neturl.PathEscape(url))+".token", []byte(token), 0600)
+						}
+					}
+				}
+			}
+		}
+
+		ret, err := json.MarshalIndent(result, "", "  ")
+		if err != nil {
+			return err
+		}
+		set.env.term.Write(ret)
+		return nil
+	}
+}
+
+type ServerObject struct {
+	set   *ServerObjectSet
+	index uint64
+}
+
+func (obj ServerObject) String() (string, error) {
+	data := reflect.New(obj.set.resultType).Interface()
+	if err := obj.set.env.server.Get(data, "acvp/v1/"+obj.set.name+"/"+strconv.FormatUint(obj.index, 10)); err != nil {
+		return "", err
+	}
+	ret, err := json.MarshalIndent(data, "", "  ")
+	return string(ret), err
+}
+
+func (obj ServerObject) Index(index string) (Object, error) {
+	if obj.set.subObjects == nil {
+		return nil, errors.New("cannot index " + obj.set.name + " objects")
+	}
+	constr, ok := obj.set.subObjects[index]
+	if !ok {
+		return nil, fmt.Errorf("no such subobject %q", index)
+	}
+	return constr(obj.set.env, fmt.Sprintf("%s/%d", obj.set.name, obj.index))
+}
+
+func (ServerObject) Search(condition acvp.Query) (Object, error) {
+	return nil, errors.New("cannot search individual object")
+}
+
+func edit(initialContents string) ([]byte, error) {
+	tmp, err := ioutil.TempFile("", "acvp*.json")
+	if err != nil {
+		return nil, err
+	}
+	path := tmp.Name()
+	defer os.Remove(path)
+
+	_, err = io.WriteString(tmp, initialContents)
+	tmp.Close()
+	if err != nil {
+		return nil, err
+	}
+
+	editor := os.Getenv("EDITOR")
+	if len(editor) == 0 {
+		editor = "vim"
+	}
+
+	cmd := exec.Command(editor, path)
+	cmd.Stdout = os.Stdout
+	cmd.Stdin = os.Stdin
+	cmd.Stderr = os.Stderr
+	if err := cmd.Run(); err != nil {
+		return nil, err
+	}
+
+	return ioutil.ReadFile(path)
+}
+
+func (obj ServerObject) Action(action string, args []string) error {
+	switch action {
+	default:
+		return fmt.Errorf("unknown action %q", action)
+
+	case "edit":
+		if len(args) != 0 {
+			return fmt.Errorf("found %d arguments but %q takes none", len(args), action)
+		}
+
+		contents, err := obj.String()
+		if err != nil {
+			return err
+		}
+
+		newContents, err := edit(contents)
+		if err != nil {
+			return err
+		}
+
+		if trimmed := strings.TrimSpace(string(newContents)); len(trimmed) == 0 || trimmed == strings.TrimSpace(contents) {
+			io.WriteString(obj.set.env.term, "Resulting file was unchanged or empty. Not updating.\n")
+			return nil
+		}
+
+		var status acvp.RequestStatus
+		if err := obj.set.env.server.Put(&status, "acvp/v1/"+obj.set.name+"/"+strconv.FormatUint(obj.index, 10), newContents); err != nil {
+			return err
+		}
+
+		fmt.Fprintf(obj.set.env.term, "%#v\n", status)
+		return nil
+
+	case "delete":
+		if len(args) != 0 {
+			return fmt.Errorf("found %d arguments but %q takes none", len(args), action)
+		}
+		return obj.set.env.server.Delete("acvp/v1/" + obj.set.name + "/" + strconv.FormatUint(obj.index, 10))
+	}
+}
+
+type Search struct {
+	ServerObjectSet
+	query acvp.Query
+}
+
+func (search Search) String() (string, error) {
+	data := reflect.New(reflect.SliceOf(search.resultType)).Interface()
+	fmt.Printf("Searching for %#v\n", search.query)
+	if err := search.env.server.GetPaged(data, "acvp/v1/"+search.name, search.query); err != nil {
+		return "", err
+	}
+	ret, err := json.MarshalIndent(data, "", "  ")
+	return string(ret), err
+}
+
+func (search Search) Index(_ string) (Object, error) {
+	return nil, errors.New("indexing of search results not supported")
+}
+
+func (search Search) Search(condition acvp.Query) (Object, error) {
+	search.query = append(search.query, condition...)
+	return search, nil
+}
+
+func (Search) Action(_ string, _ []string) error {
+	return errors.New("no actions supported on search objects")
+}
+
+type Algorithms struct {
+	ServerObjectSet
+}
+
+func (algos Algorithms) String() (string, error) {
+	var result struct {
+		Algorithms []map[string]interface{} `json:"algorithms"`
+	}
+	if err := algos.env.server.Get(&result, "acvp/v1/algorithms"); err != nil {
+		return "", err
+	}
+	ret, err := json.MarshalIndent(result.Algorithms, "", "  ")
+	return string(ret), err
+}
+
+type Env struct {
+	line      string
+	variables map[string]Object
+	server    *acvp.Server
+	term      *terminal.Terminal
+	config    Config
+}
+
+func (e *Env) bytes(node *node32) []byte {
+	return []byte(e.line[node.begin:node.end])
+}
+
+func (e *Env) contents(node *node32) string {
+	return e.line[node.begin:node.end]
+}
+
+type stringLiteral struct {
+	env      *Env
+	contents string
+}
+
+func (s stringLiteral) String() (string, error) {
+	return s.contents, nil
+}
+
+func (stringLiteral) Index(_ string) (Object, error) {
+	return nil, errors.New("cannot index strings")
+}
+
+func (stringLiteral) Search(_ acvp.Query) (Object, error) {
+	return nil, errors.New("cannot search strings")
+}
+
+func (s stringLiteral) Action(action string, args []string) error {
+	switch action {
+	default:
+		return fmt.Errorf("action %q not supported on string literals", action)
+
+	case "GET":
+		if len(args) != 0 {
+			return fmt.Errorf("found %d arguments but %q takes none", len(args), action)
+		}
+
+		var results map[string]interface{}
+		if err := s.env.server.Get(&results, s.contents); err != nil {
+			return err
+		}
+		ret, err := json.MarshalIndent(results, "", "  ")
+		if err != nil {
+			return err
+		}
+		s.env.term.Write(ret)
+		return nil
+	}
+}
+
+type results struct {
+	env    *Env
+	prefix string
+}
+
+func (r results) String() (string, error) {
+	var results map[string]interface{}
+	if err := r.env.server.Get(&results, "acvp/v1/"+r.prefix+"/results"); err != nil {
+		return "", err
+	}
+	ret, err := json.MarshalIndent(results, "", "  ")
+	return string(ret), err
+}
+
+func (results) Index(_ string) (Object, error) {
+	return nil, errors.New("cannot index results objects")
+}
+
+func (results) Search(_ acvp.Query) (Object, error) {
+	return nil, errors.New("cannot search results objects")
+}
+
+func (results) Action(_ string, _ []string) error {
+	return errors.New("no actions supported on results objects")
+}
+
+func (e *Env) parseStringLiteral(node *node32) string {
+	assertNodeType(node, ruleStringLiteral)
+	in := e.bytes(node)
+	var buf bytes.Buffer
+	for i := 1; i < len(in)-1; i++ {
+		if in[i] == '\\' {
+			switch in[i+1] {
+			case '\\':
+				buf.WriteByte('\\')
+			case 'n':
+				buf.WriteByte('\n')
+			case '"':
+				buf.WriteByte('"')
+			default:
+				panic("unknown escape")
+			}
+			i++
+			continue
+		}
+		buf.WriteByte(in[i])
+	}
+
+	return buf.String()
+}
+
+func (e *Env) evalExpression(node *node32) (obj Object, err error) {
+	switch node.pegRule {
+	case ruleStringLiteral:
+		return stringLiteral{e, e.parseStringLiteral(node)}, nil
+
+	case ruleVariable:
+		varName := e.contents(node)
+		obj, ok := e.variables[varName]
+		if !ok {
+			return nil, fmt.Errorf("unknown variable %q", varName)
+		}
+		return obj, nil
+
+	case ruleIndexing:
+		node = node.up
+		assertNodeType(node, ruleVariable)
+		varName := e.contents(node)
+		obj, ok := e.variables[varName]
+		if !ok {
+			return nil, fmt.Errorf("unknown variable %q", varName)
+		}
+
+		node = node.next
+		for node != nil {
+			assertNodeType(node, ruleIndex)
+			indexStr := e.contents(node)
+			if obj, err = obj.Index(indexStr); err != nil {
+				return nil, err
+			}
+			node = node.next
+		}
+
+		return obj, nil
+
+	case ruleSearch:
+		node = node.up
+		assertNodeType(node, ruleVariable)
+		varName := e.contents(node)
+		obj, ok := e.variables[varName]
+		if !ok {
+			return nil, fmt.Errorf("unknown variable %q", varName)
+		}
+
+		node = skipWS(node.next)
+		assertNodeType(node, ruleQuery)
+		node = node.up
+
+		var query acvp.Query
+		for node != nil {
+			assertNodeType(node, ruleConjunctions)
+			query = append(query, e.parseConjunction(node.up))
+			node = skipWS(node.next)
+		}
+
+		if len(query) == 0 {
+			return nil, errors.New("cannot have empty query")
+		}
+
+		return obj.Search(query)
+	}
+
+	panic("unhandled")
+}
+
+func (e *Env) evalAction(node *node32) error {
+	assertNodeType(node, ruleExpression)
+	obj, err := e.evalExpression(node.up)
+	if err != nil {
+		return err
+	}
+
+	node = node.next
+	assertNodeType(node, ruleCommand)
+	node = node.up
+	assertNodeType(node, ruleFunction)
+	function := e.contents(node)
+	node = node.next
+
+	var args []string
+	for node != nil {
+		assertNodeType(node, ruleArgs)
+		node = node.up
+		args = append(args, e.parseStringLiteral(node))
+
+		node = skipWS(node.next)
+	}
+
+	return obj.Action(function, args)
+}
+
+func (e *Env) parseConjunction(node *node32) (ret acvp.Conjunction) {
+	for node != nil {
+		assertNodeType(node, ruleConjunction)
+		ret = append(ret, e.parseCondition(node.up))
+
+		node = skipWS(node.next)
+		if node != nil {
+			assertNodeType(node, ruleConjunctions)
+			node = node.up
+		}
+	}
+	return ret
+}
+
+func (e *Env) parseCondition(node *node32) (ret acvp.Condition) {
+	assertNodeType(node, ruleField)
+	ret.Param = e.contents(node)
+	node = skipWS(node.next)
+
+	assertNodeType(node, ruleRelation)
+	switch e.contents(node) {
+	case "==":
+		ret.Relation = acvp.Equals
+	case "!=":
+		ret.Relation = acvp.NotEquals
+	case "contains":
+		ret.Relation = acvp.Contains
+	case "startsWith":
+		ret.Relation = acvp.StartsWith
+	case "endsWith":
+		ret.Relation = acvp.EndsWith
+	default:
+		panic("relation not handled: " + e.contents(node))
+	}
+	node = skipWS(node.next)
+
+	ret.Value = e.parseStringLiteral(node)
+
+	return ret
+}
+
+func runInteractive(server *acvp.Server, config Config) {
+	oldState, err := terminal.MakeRaw(0)
+	if err != nil {
+		panic(err)
+	}
+	defer terminal.Restore(0, oldState)
+	term := terminal.NewTerminal(os.Stdin, "> ")
+
+	// The channel must be buffered so that a SIGWINCH delivered while the
+	// goroutine is busy isn't dropped.
+	resizeChan := make(chan os.Signal, 1)
+	go func() {
+		for range resizeChan {
+			updateTerminalSize(term)
+		}
+	}()
+	signal.Notify(resizeChan, syscall.SIGWINCH)
+
+	env := &Env{variables: make(map[string]Object), server: server, term: term, config: config}
+	env.variables["requests"] = ServerObjectSet{
+		env:          env,
+		name:         "requests",
+		resultType:   reflect.TypeOf(&acvp.RequestStatus{}),
+		canEnumerate: true,
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.8
+	env.variables["vendors"] = ServerObjectSet{
+		env:  env,
+		name: "vendors",
+		searchKeys: map[string][]acvp.Relation{
+			// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.8.1
+			"name":        []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"website":     []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"email":       []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"phoneNumber": []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+		},
+		subObjects: map[string]func(*Env, string) (Object, error){
+			"contacts": func(env *Env, prefix string) (Object, error) {
+				return ServerObjectSet{
+					env:          env,
+					name:         prefix + "/contacts",
+					resultType:   reflect.TypeOf(&acvp.Person{}),
+					canEnumerate: true,
+				}, nil
+			},
+			"addresses": func(env *Env, prefix string) (Object, error) {
+				return ServerObjectSet{
+					env:          env,
+					name:         prefix + "/addresses",
+					resultType:   reflect.TypeOf(&acvp.Address{}),
+					canEnumerate: true,
+				}, nil
+			},
+		},
+		resultType: reflect.TypeOf(&acvp.Vendor{}),
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.9
+	env.variables["persons"] = ServerObjectSet{
+		env:  env,
+		name: "persons",
+		searchKeys: map[string][]acvp.Relation{
+			// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.10.1
+			"fullName":    []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"email":       []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"phoneNumber": []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"vendorId":    []acvp.Relation{acvp.Equals, acvp.NotEquals, acvp.LessThan, acvp.LessThanEqual, acvp.GreaterThan, acvp.GreaterThanEqual},
+		},
+		resultType: reflect.TypeOf(&acvp.Person{}),
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.11
+	env.variables["modules"] = ServerObjectSet{
+		env:  env,
+		name: "modules",
+		searchKeys: map[string][]acvp.Relation{
+			// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.10.1
+			"name":        []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"version":     []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"website":     []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"description": []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"type":        []acvp.Relation{acvp.Equals, acvp.NotEquals},
+			"vendorId":    []acvp.Relation{acvp.Equals, acvp.NotEquals, acvp.LessThan, acvp.LessThanEqual, acvp.GreaterThan, acvp.GreaterThanEqual},
+		},
+		resultType: reflect.TypeOf(&acvp.Module{}),
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.12
+	env.variables["oes"] = ServerObjectSet{
+		env:  env,
+		name: "oes",
+		searchKeys: map[string][]acvp.Relation{
+			// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.12.1
+			"name": []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+		},
+		resultType: reflect.TypeOf(&acvp.OperationalEnvironment{}),
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.13
+	env.variables["deps"] = ServerObjectSet{
+		env:  env,
+		name: "dependencies",
+		searchKeys: map[string][]acvp.Relation{
+			// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.12.1
+			"name":        []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"type":        []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+			"description": []acvp.Relation{acvp.Equals, acvp.StartsWith, acvp.EndsWith, acvp.Contains},
+		},
+		resultType: reflect.TypeOf(&acvp.Dependency{}),
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.14
+	env.variables["algos"] = Algorithms{
+		ServerObjectSet{
+			env:          env,
+			name:         "algorithms",
+			resultType:   reflect.TypeOf(&acvp.Algorithm{}),
+			canEnumerate: true,
+		},
+	}
+	// https://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.15
+	env.variables["sessions"] = ServerObjectSet{
+		env:          env,
+		name:         "testSessions",
+		resultType:   reflect.TypeOf(&acvp.TestSession{}),
+		canEnumerate: true,
+		subObjects: map[string]func(env *Env, prefix string) (Object, error){
+			"results": func(env *Env, prefix string) (Object, error) {
+				return results{env: env, prefix: prefix}, nil
+			},
+		},
+	}
+
+	for {
+		if env.line, err = term.ReadLine(); err != nil {
+			return
+		}
+		if len(env.line) == 0 {
+			continue
+		}
+
+		stmt := Statement{Buffer: env.line, Pretty: true}
+		stmt.Init()
+		if err := stmt.Parse(); err != nil {
+			io.WriteString(term, err.Error())
+			continue
+		}
+
+		node := skipWS(stmt.AST().up)
+		switch node.pegRule {
+		case ruleExpression:
+			obj, err := env.evalExpression(node.up)
+			var repr string
+			if err == nil {
+				repr, err = obj.String()
+			}
+
+			if err != nil {
+				fmt.Fprintf(term, "error while evaluating expression: %s\n", err)
+			} else {
+				io.WriteString(term, repr)
+				io.WriteString(term, "\n")
+			}
+
+		case ruleAction:
+			if err := env.evalAction(node.up); err != nil {
+				io.WriteString(term, err.Error())
+				io.WriteString(term, "\n")
+			}
+
+		default:
+			fmt.Fprintf(term, "internal error parsing input.\n")
+		}
+	}
+}
diff --git a/util/fipstools/acvp/acvptool/parser.peg b/util/fipstools/acvp/acvptool/parser.peg
new file mode 100644
index 0000000..ca759c8
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/parser.peg
@@ -0,0 +1,25 @@
+package main
+
+type Statement Peg {}
+
+Statement <- WS? (Assignment / Action / Expression) WS? !.
+Assignment <- Variable WS? '=' WS? Expression
+Variable <- [a-zA-Z_][a-zA-Z0-9_]*
+Expression <- (StringLiteral / Indexing / Search / Variable)
+StringLiteral <- '"' QuotedText '"'
+QuotedText <- (EscapedChar / [^\\"])*
+EscapedChar <- '\\' [\\n"]
+Indexing <- Variable ('[' Index ']')+
+Index <- [0-9a-z]+
+Search <- Variable '[' WS? 'where' WS Query ']'
+Action <- Expression '.' Command
+Command <- Function '(' Args? ')'
+Function <- [a-zA-Z]+
+Args <- StringLiteral (WS? ',' WS? Args)?
+Query <- Conjunctions (WS? '||' WS? Conjunctions)?
+Conjunctions <- Conjunction (WS? '&&' WS? Conjunctions)?
+Conjunction <- Field WS? Relation WS? StringLiteral
+Field <- [a-z][a-zA-Z0-9]*
+Relation <- ('==' / '!=' / 'contains' / 'startsWith' / 'endsWith')
+
+WS <- [ \t]+
diff --git a/util/fipstools/acvp/acvptool/parser.peg.go b/util/fipstools/acvp/acvptool/parser.peg.go
new file mode 100644
index 0000000..c80a7dc
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/parser.peg.go
@@ -0,0 +1,1334 @@
+package main
+
+import (
+	"fmt"
+	"math"
+	"sort"
+	"strconv"
+)
+
+const endSymbol rune = 1114112
+
+/* The rule types inferred from the grammar are below. */
+type pegRule uint8
+
+const (
+	ruleUnknown pegRule = iota
+	ruleStatement
+	ruleAssignment
+	ruleVariable
+	ruleExpression
+	ruleStringLiteral
+	ruleQuotedText
+	ruleEscapedChar
+	ruleIndexing
+	ruleIndex
+	ruleSearch
+	ruleAction
+	ruleCommand
+	ruleFunction
+	ruleArgs
+	ruleQuery
+	ruleConjunctions
+	ruleConjunction
+	ruleField
+	ruleRelation
+	ruleWS
+)
+
+var rul3s = [...]string{
+	"Unknown",
+	"Statement",
+	"Assignment",
+	"Variable",
+	"Expression",
+	"StringLiteral",
+	"QuotedText",
+	"EscapedChar",
+	"Indexing",
+	"Index",
+	"Search",
+	"Action",
+	"Command",
+	"Function",
+	"Args",
+	"Query",
+	"Conjunctions",
+	"Conjunction",
+	"Field",
+	"Relation",
+	"WS",
+}
+
+type token32 struct {
+	pegRule
+	begin, end uint32
+}
+
+func (t *token32) String() string {
+	return fmt.Sprintf("\x1B[34m%v\x1B[m %v %v", rul3s[t.pegRule], t.begin, t.end)
+}
+
+type node32 struct {
+	token32
+	up, next *node32
+}
+
+func (node *node32) print(pretty bool, buffer string) {
+	var print func(node *node32, depth int)
+	print = func(node *node32, depth int) {
+		for node != nil {
+			for c := 0; c < depth; c++ {
+				fmt.Printf(" ")
+			}
+			rule := rul3s[node.pegRule]
+			quote := strconv.Quote(string(([]rune(buffer)[node.begin:node.end])))
+			if !pretty {
+				fmt.Printf("%v %v\n", rule, quote)
+			} else {
+				fmt.Printf("\x1B[34m%v\x1B[m %v\n", rule, quote)
+			}
+			if node.up != nil {
+				print(node.up, depth+1)
+			}
+			node = node.next
+		}
+	}
+	print(node, 0)
+}
+
+func (node *node32) Print(buffer string) {
+	node.print(false, buffer)
+}
+
+func (node *node32) PrettyPrint(buffer string) {
+	node.print(true, buffer)
+}
+
+type tokens32 struct {
+	tree []token32
+}
+
+func (t *tokens32) Trim(length uint32) {
+	t.tree = t.tree[:length]
+}
+
+func (t *tokens32) Print() {
+	for _, token := range t.tree {
+		fmt.Println(token.String())
+	}
+}
+
+func (t *tokens32) AST() *node32 {
+	type element struct {
+		node *node32
+		down *element
+	}
+	tokens := t.Tokens()
+	var stack *element
+	for _, token := range tokens {
+		if token.begin == token.end {
+			continue
+		}
+		node := &node32{token32: token}
+		for stack != nil && stack.node.begin >= token.begin && stack.node.end <= token.end {
+			stack.node.next = node.up
+			node.up = stack.node
+			stack = stack.down
+		}
+		stack = &element{node: node, down: stack}
+	}
+	if stack != nil {
+		return stack.node
+	}
+	return nil
+}
+
+func (t *tokens32) PrintSyntaxTree(buffer string) {
+	t.AST().Print(buffer)
+}
+
+func (t *tokens32) PrettyPrintSyntaxTree(buffer string) {
+	t.AST().PrettyPrint(buffer)
+}
+
+func (t *tokens32) Add(rule pegRule, begin, end, index uint32) {
+	if tree := t.tree; int(index) >= len(tree) {
+		expanded := make([]token32, 2*len(tree))
+		copy(expanded, tree)
+		t.tree = expanded
+	}
+	t.tree[index] = token32{
+		pegRule: rule,
+		begin:   begin,
+		end:     end,
+	}
+}
+
+func (t *tokens32) Tokens() []token32 {
+	return t.tree
+}
+
+type Statement struct {
+	Buffer string
+	buffer []rune
+	rules  [21]func() bool
+	parse  func(rule ...int) error
+	reset  func()
+	Pretty bool
+	tokens32
+}
+
+func (p *Statement) Parse(rule ...int) error {
+	return p.parse(rule...)
+}
+
+func (p *Statement) Reset() {
+	p.reset()
+}
+
+type textPosition struct {
+	line, symbol int
+}
+
+type textPositionMap map[int]textPosition
+
+func translatePositions(buffer []rune, positions []int) textPositionMap {
+	length, translations, j, line, symbol := len(positions), make(textPositionMap, len(positions)), 0, 1, 0
+	sort.Ints(positions)
+
+search:
+	for i, c := range buffer {
+		if c == '\n' {
+			line, symbol = line+1, 0
+		} else {
+			symbol++
+		}
+		if i == positions[j] {
+			translations[positions[j]] = textPosition{line, symbol}
+			for j++; j < length; j++ {
+				if i != positions[j] {
+					continue search
+				}
+			}
+			break search
+		}
+	}
+
+	return translations
+}
+
+type parseError struct {
+	p   *Statement
+	max token32
+}
+
+func (e *parseError) Error() string {
+	tokens, error := []token32{e.max}, "\n"
+	positions, p := make([]int, 2*len(tokens)), 0
+	for _, token := range tokens {
+		positions[p], p = int(token.begin), p+1
+		positions[p], p = int(token.end), p+1
+	}
+	translations := translatePositions(e.p.buffer, positions)
+	format := "parse error near %v (line %v symbol %v - line %v symbol %v):\n%v\n"
+	if e.p.Pretty {
+		format = "parse error near \x1B[34m%v\x1B[m (line %v symbol %v - line %v symbol %v):\n%v\n"
+	}
+	for _, token := range tokens {
+		begin, end := int(token.begin), int(token.end)
+		error += fmt.Sprintf(format,
+			rul3s[token.pegRule],
+			translations[begin].line, translations[begin].symbol,
+			translations[end].line, translations[end].symbol,
+			strconv.Quote(string(e.p.buffer[begin:end])))
+	}
+
+	return error
+}
+
+func (p *Statement) PrintSyntaxTree() {
+	if p.Pretty {
+		p.tokens32.PrettyPrintSyntaxTree(p.Buffer)
+	} else {
+		p.tokens32.PrintSyntaxTree(p.Buffer)
+	}
+}
+
+func (p *Statement) Init() {
+	var (
+		max                  token32
+		position, tokenIndex uint32
+		buffer               []rune
+	)
+	p.reset = func() {
+		max = token32{}
+		position, tokenIndex = 0, 0
+
+		p.buffer = []rune(p.Buffer)
+		if len(p.buffer) == 0 || p.buffer[len(p.buffer)-1] != endSymbol {
+			p.buffer = append(p.buffer, endSymbol)
+		}
+		buffer = p.buffer
+	}
+	p.reset()
+
+	_rules := p.rules
+	tree := tokens32{tree: make([]token32, math.MaxInt16)}
+	p.parse = func(rule ...int) error {
+		r := 1
+		if len(rule) > 0 {
+			r = rule[0]
+		}
+		matches := p.rules[r]()
+		p.tokens32 = tree
+		if matches {
+			p.Trim(tokenIndex)
+			return nil
+		}
+		return &parseError{p, max}
+	}
+
+	add := func(rule pegRule, begin uint32) {
+		tree.Add(rule, begin, position, tokenIndex)
+		tokenIndex++
+		if begin != position && position > max.end {
+			max = token32{rule, begin, position}
+		}
+	}
+
+	matchDot := func() bool {
+		if buffer[position] != endSymbol {
+			position++
+			return true
+		}
+		return false
+	}
+
+	/*matchChar := func(c byte) bool {
+		if buffer[position] == c {
+			position++
+			return true
+		}
+		return false
+	}*/
+
+	/*matchRange := func(lower byte, upper byte) bool {
+		if c := buffer[position]; c >= lower && c <= upper {
+			position++
+			return true
+		}
+		return false
+	}*/
+
+	_rules = [...]func() bool{
+		nil,
+		/* 0 Statement <- <(WS? (Assignment / Action / Expression) WS? !.)> */
+		func() bool {
+			position0, tokenIndex0 := position, tokenIndex
+			{
+				position1 := position
+				{
+					position2, tokenIndex2 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l2
+					}
+					goto l3
+				l2:
+					position, tokenIndex = position2, tokenIndex2
+				}
+			l3:
+				{
+					position4, tokenIndex4 := position, tokenIndex
+					if !_rules[ruleAssignment]() {
+						goto l5
+					}
+					goto l4
+				l5:
+					position, tokenIndex = position4, tokenIndex4
+					if !_rules[ruleAction]() {
+						goto l6
+					}
+					goto l4
+				l6:
+					position, tokenIndex = position4, tokenIndex4
+					if !_rules[ruleExpression]() {
+						goto l0
+					}
+				}
+			l4:
+				{
+					position7, tokenIndex7 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l7
+					}
+					goto l8
+				l7:
+					position, tokenIndex = position7, tokenIndex7
+				}
+			l8:
+				{
+					position9, tokenIndex9 := position, tokenIndex
+					if !matchDot() {
+						goto l9
+					}
+					goto l0
+				l9:
+					position, tokenIndex = position9, tokenIndex9
+				}
+				add(ruleStatement, position1)
+			}
+			return true
+		l0:
+			position, tokenIndex = position0, tokenIndex0
+			return false
+		},
+		/* 1 Assignment <- <(Variable WS? '=' WS? Expression)> */
+		func() bool {
+			position10, tokenIndex10 := position, tokenIndex
+			{
+				position11 := position
+				if !_rules[ruleVariable]() {
+					goto l10
+				}
+				{
+					position12, tokenIndex12 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l12
+					}
+					goto l13
+				l12:
+					position, tokenIndex = position12, tokenIndex12
+				}
+			l13:
+				if buffer[position] != rune('=') {
+					goto l10
+				}
+				position++
+				{
+					position14, tokenIndex14 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l14
+					}
+					goto l15
+				l14:
+					position, tokenIndex = position14, tokenIndex14
+				}
+			l15:
+				if !_rules[ruleExpression]() {
+					goto l10
+				}
+				add(ruleAssignment, position11)
+			}
+			return true
+		l10:
+			position, tokenIndex = position10, tokenIndex10
+			return false
+		},
+		/* 2 Variable <- <(([a-z] / [A-Z] / '_') ([a-z] / [A-Z] / [0-9] / '_')*)> */
+		func() bool {
+			position16, tokenIndex16 := position, tokenIndex
+			{
+				position17 := position
+				{
+					position18, tokenIndex18 := position, tokenIndex
+					if c := buffer[position]; c < rune('a') || c > rune('z') {
+						goto l19
+					}
+					position++
+					goto l18
+				l19:
+					position, tokenIndex = position18, tokenIndex18
+					if c := buffer[position]; c < rune('A') || c > rune('Z') {
+						goto l20
+					}
+					position++
+					goto l18
+				l20:
+					position, tokenIndex = position18, tokenIndex18
+					if buffer[position] != rune('_') {
+						goto l16
+					}
+					position++
+				}
+			l18:
+			l21:
+				{
+					position22, tokenIndex22 := position, tokenIndex
+					{
+						position23, tokenIndex23 := position, tokenIndex
+						if c := buffer[position]; c < rune('a') || c > rune('z') {
+							goto l24
+						}
+						position++
+						goto l23
+					l24:
+						position, tokenIndex = position23, tokenIndex23
+						if c := buffer[position]; c < rune('A') || c > rune('Z') {
+							goto l25
+						}
+						position++
+						goto l23
+					l25:
+						position, tokenIndex = position23, tokenIndex23
+						if c := buffer[position]; c < rune('0') || c > rune('9') {
+							goto l26
+						}
+						position++
+						goto l23
+					l26:
+						position, tokenIndex = position23, tokenIndex23
+						if buffer[position] != rune('_') {
+							goto l22
+						}
+						position++
+					}
+				l23:
+					goto l21
+				l22:
+					position, tokenIndex = position22, tokenIndex22
+				}
+				add(ruleVariable, position17)
+			}
+			return true
+		l16:
+			position, tokenIndex = position16, tokenIndex16
+			return false
+		},
+		/* 3 Expression <- <(StringLiteral / Indexing / Search / Variable)> */
+		func() bool {
+			position27, tokenIndex27 := position, tokenIndex
+			{
+				position28 := position
+				{
+					position29, tokenIndex29 := position, tokenIndex
+					if !_rules[ruleStringLiteral]() {
+						goto l30
+					}
+					goto l29
+				l30:
+					position, tokenIndex = position29, tokenIndex29
+					if !_rules[ruleIndexing]() {
+						goto l31
+					}
+					goto l29
+				l31:
+					position, tokenIndex = position29, tokenIndex29
+					if !_rules[ruleSearch]() {
+						goto l32
+					}
+					goto l29
+				l32:
+					position, tokenIndex = position29, tokenIndex29
+					if !_rules[ruleVariable]() {
+						goto l27
+					}
+				}
+			l29:
+				add(ruleExpression, position28)
+			}
+			return true
+		l27:
+			position, tokenIndex = position27, tokenIndex27
+			return false
+		},
+		/* 4 StringLiteral <- <('"' QuotedText '"')> */
+		func() bool {
+			position33, tokenIndex33 := position, tokenIndex
+			{
+				position34 := position
+				if buffer[position] != rune('"') {
+					goto l33
+				}
+				position++
+				if !_rules[ruleQuotedText]() {
+					goto l33
+				}
+				if buffer[position] != rune('"') {
+					goto l33
+				}
+				position++
+				add(ruleStringLiteral, position34)
+			}
+			return true
+		l33:
+			position, tokenIndex = position33, tokenIndex33
+			return false
+		},
+		/* 5 QuotedText <- <(EscapedChar / (!('\\' / '"') .))*> */
+		func() bool {
+			{
+				position36 := position
+			l37:
+				{
+					position38, tokenIndex38 := position, tokenIndex
+					{
+						position39, tokenIndex39 := position, tokenIndex
+						if !_rules[ruleEscapedChar]() {
+							goto l40
+						}
+						goto l39
+					l40:
+						position, tokenIndex = position39, tokenIndex39
+						{
+							position41, tokenIndex41 := position, tokenIndex
+							{
+								position42, tokenIndex42 := position, tokenIndex
+								if buffer[position] != rune('\\') {
+									goto l43
+								}
+								position++
+								goto l42
+							l43:
+								position, tokenIndex = position42, tokenIndex42
+								if buffer[position] != rune('"') {
+									goto l41
+								}
+								position++
+							}
+						l42:
+							goto l38
+						l41:
+							position, tokenIndex = position41, tokenIndex41
+						}
+						if !matchDot() {
+							goto l38
+						}
+					}
+				l39:
+					goto l37
+				l38:
+					position, tokenIndex = position38, tokenIndex38
+				}
+				add(ruleQuotedText, position36)
+			}
+			return true
+		},
+		/* 6 EscapedChar <- <('\\' ('\\' / 'n' / '"'))> */
+		func() bool {
+			position44, tokenIndex44 := position, tokenIndex
+			{
+				position45 := position
+				if buffer[position] != rune('\\') {
+					goto l44
+				}
+				position++
+				{
+					position46, tokenIndex46 := position, tokenIndex
+					if buffer[position] != rune('\\') {
+						goto l47
+					}
+					position++
+					goto l46
+				l47:
+					position, tokenIndex = position46, tokenIndex46
+					if buffer[position] != rune('n') {
+						goto l48
+					}
+					position++
+					goto l46
+				l48:
+					position, tokenIndex = position46, tokenIndex46
+					if buffer[position] != rune('"') {
+						goto l44
+					}
+					position++
+				}
+			l46:
+				add(ruleEscapedChar, position45)
+			}
+			return true
+		l44:
+			position, tokenIndex = position44, tokenIndex44
+			return false
+		},
+		/* 7 Indexing <- <(Variable ('[' Index ']')+)> */
+		func() bool {
+			position49, tokenIndex49 := position, tokenIndex
+			{
+				position50 := position
+				if !_rules[ruleVariable]() {
+					goto l49
+				}
+				if buffer[position] != rune('[') {
+					goto l49
+				}
+				position++
+				if !_rules[ruleIndex]() {
+					goto l49
+				}
+				if buffer[position] != rune(']') {
+					goto l49
+				}
+				position++
+			l51:
+				{
+					position52, tokenIndex52 := position, tokenIndex
+					if buffer[position] != rune('[') {
+						goto l52
+					}
+					position++
+					if !_rules[ruleIndex]() {
+						goto l52
+					}
+					if buffer[position] != rune(']') {
+						goto l52
+					}
+					position++
+					goto l51
+				l52:
+					position, tokenIndex = position52, tokenIndex52
+				}
+				add(ruleIndexing, position50)
+			}
+			return true
+		l49:
+			position, tokenIndex = position49, tokenIndex49
+			return false
+		},
+		/* 8 Index <- <([0-9] / [a-z])+> */
+		func() bool {
+			position53, tokenIndex53 := position, tokenIndex
+			{
+				position54 := position
+				{
+					position57, tokenIndex57 := position, tokenIndex
+					if c := buffer[position]; c < rune('0') || c > rune('9') {
+						goto l58
+					}
+					position++
+					goto l57
+				l58:
+					position, tokenIndex = position57, tokenIndex57
+					if c := buffer[position]; c < rune('a') || c > rune('z') {
+						goto l53
+					}
+					position++
+				}
+			l57:
+			l55:
+				{
+					position56, tokenIndex56 := position, tokenIndex
+					{
+						position59, tokenIndex59 := position, tokenIndex
+						if c := buffer[position]; c < rune('0') || c > rune('9') {
+							goto l60
+						}
+						position++
+						goto l59
+					l60:
+						position, tokenIndex = position59, tokenIndex59
+						if c := buffer[position]; c < rune('a') || c > rune('z') {
+							goto l56
+						}
+						position++
+					}
+				l59:
+					goto l55
+				l56:
+					position, tokenIndex = position56, tokenIndex56
+				}
+				add(ruleIndex, position54)
+			}
+			return true
+		l53:
+			position, tokenIndex = position53, tokenIndex53
+			return false
+		},
+		/* 9 Search <- <(Variable '[' WS? ('w' 'h' 'e' 'r' 'e') WS Query ']')> */
+		func() bool {
+			position61, tokenIndex61 := position, tokenIndex
+			{
+				position62 := position
+				if !_rules[ruleVariable]() {
+					goto l61
+				}
+				if buffer[position] != rune('[') {
+					goto l61
+				}
+				position++
+				{
+					position63, tokenIndex63 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l63
+					}
+					goto l64
+				l63:
+					position, tokenIndex = position63, tokenIndex63
+				}
+			l64:
+				if buffer[position] != rune('w') {
+					goto l61
+				}
+				position++
+				if buffer[position] != rune('h') {
+					goto l61
+				}
+				position++
+				if buffer[position] != rune('e') {
+					goto l61
+				}
+				position++
+				if buffer[position] != rune('r') {
+					goto l61
+				}
+				position++
+				if buffer[position] != rune('e') {
+					goto l61
+				}
+				position++
+				if !_rules[ruleWS]() {
+					goto l61
+				}
+				if !_rules[ruleQuery]() {
+					goto l61
+				}
+				if buffer[position] != rune(']') {
+					goto l61
+				}
+				position++
+				add(ruleSearch, position62)
+			}
+			return true
+		l61:
+			position, tokenIndex = position61, tokenIndex61
+			return false
+		},
+		/* 10 Action <- <(Expression '.' Command)> */
+		func() bool {
+			position65, tokenIndex65 := position, tokenIndex
+			{
+				position66 := position
+				if !_rules[ruleExpression]() {
+					goto l65
+				}
+				if buffer[position] != rune('.') {
+					goto l65
+				}
+				position++
+				if !_rules[ruleCommand]() {
+					goto l65
+				}
+				add(ruleAction, position66)
+			}
+			return true
+		l65:
+			position, tokenIndex = position65, tokenIndex65
+			return false
+		},
+		/* 11 Command <- <(Function '(' Args? ')')> */
+		func() bool {
+			position67, tokenIndex67 := position, tokenIndex
+			{
+				position68 := position
+				if !_rules[ruleFunction]() {
+					goto l67
+				}
+				if buffer[position] != rune('(') {
+					goto l67
+				}
+				position++
+				{
+					position69, tokenIndex69 := position, tokenIndex
+					if !_rules[ruleArgs]() {
+						goto l69
+					}
+					goto l70
+				l69:
+					position, tokenIndex = position69, tokenIndex69
+				}
+			l70:
+				if buffer[position] != rune(')') {
+					goto l67
+				}
+				position++
+				add(ruleCommand, position68)
+			}
+			return true
+		l67:
+			position, tokenIndex = position67, tokenIndex67
+			return false
+		},
+		/* 12 Function <- <([a-z] / [A-Z])+> */
+		func() bool {
+			position71, tokenIndex71 := position, tokenIndex
+			{
+				position72 := position
+				{
+					position75, tokenIndex75 := position, tokenIndex
+					if c := buffer[position]; c < rune('a') || c > rune('z') {
+						goto l76
+					}
+					position++
+					goto l75
+				l76:
+					position, tokenIndex = position75, tokenIndex75
+					if c := buffer[position]; c < rune('A') || c > rune('Z') {
+						goto l71
+					}
+					position++
+				}
+			l75:
+			l73:
+				{
+					position74, tokenIndex74 := position, tokenIndex
+					{
+						position77, tokenIndex77 := position, tokenIndex
+						if c := buffer[position]; c < rune('a') || c > rune('z') {
+							goto l78
+						}
+						position++
+						goto l77
+					l78:
+						position, tokenIndex = position77, tokenIndex77
+						if c := buffer[position]; c < rune('A') || c > rune('Z') {
+							goto l74
+						}
+						position++
+					}
+				l77:
+					goto l73
+				l74:
+					position, tokenIndex = position74, tokenIndex74
+				}
+				add(ruleFunction, position72)
+			}
+			return true
+		l71:
+			position, tokenIndex = position71, tokenIndex71
+			return false
+		},
+		/* 13 Args <- <(StringLiteral (WS? ',' WS? Args))> */
+		func() bool {
+			position79, tokenIndex79 := position, tokenIndex
+			{
+				position80 := position
+				if !_rules[ruleStringLiteral]() {
+					goto l79
+				}
+				{
+					position81, tokenIndex81 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l81
+					}
+					goto l82
+				l81:
+					position, tokenIndex = position81, tokenIndex81
+				}
+			l82:
+				if buffer[position] != rune(',') {
+					goto l79
+				}
+				position++
+				{
+					position83, tokenIndex83 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l83
+					}
+					goto l84
+				l83:
+					position, tokenIndex = position83, tokenIndex83
+				}
+			l84:
+				if !_rules[ruleArgs]() {
+					goto l79
+				}
+				add(ruleArgs, position80)
+			}
+			return true
+		l79:
+			position, tokenIndex = position79, tokenIndex79
+			return false
+		},
+		/* 14 Query <- <(Conjunctions (WS? ('|' '|') WS? Conjunctions)?)> */
+		func() bool {
+			position85, tokenIndex85 := position, tokenIndex
+			{
+				position86 := position
+				if !_rules[ruleConjunctions]() {
+					goto l85
+				}
+				{
+					position87, tokenIndex87 := position, tokenIndex
+					{
+						position89, tokenIndex89 := position, tokenIndex
+						if !_rules[ruleWS]() {
+							goto l89
+						}
+						goto l90
+					l89:
+						position, tokenIndex = position89, tokenIndex89
+					}
+				l90:
+					if buffer[position] != rune('|') {
+						goto l87
+					}
+					position++
+					if buffer[position] != rune('|') {
+						goto l87
+					}
+					position++
+					{
+						position91, tokenIndex91 := position, tokenIndex
+						if !_rules[ruleWS]() {
+							goto l91
+						}
+						goto l92
+					l91:
+						position, tokenIndex = position91, tokenIndex91
+					}
+				l92:
+					if !_rules[ruleConjunctions]() {
+						goto l87
+					}
+					goto l88
+				l87:
+					position, tokenIndex = position87, tokenIndex87
+				}
+			l88:
+				add(ruleQuery, position86)
+			}
+			return true
+		l85:
+			position, tokenIndex = position85, tokenIndex85
+			return false
+		},
+		/* 15 Conjunctions <- <(Conjunction (WS? ('&' '&') WS? Conjunctions)?)> */
+		func() bool {
+			position93, tokenIndex93 := position, tokenIndex
+			{
+				position94 := position
+				if !_rules[ruleConjunction]() {
+					goto l93
+				}
+				{
+					position95, tokenIndex95 := position, tokenIndex
+					{
+						position97, tokenIndex97 := position, tokenIndex
+						if !_rules[ruleWS]() {
+							goto l97
+						}
+						goto l98
+					l97:
+						position, tokenIndex = position97, tokenIndex97
+					}
+				l98:
+					if buffer[position] != rune('&') {
+						goto l95
+					}
+					position++
+					if buffer[position] != rune('&') {
+						goto l95
+					}
+					position++
+					{
+						position99, tokenIndex99 := position, tokenIndex
+						if !_rules[ruleWS]() {
+							goto l99
+						}
+						goto l100
+					l99:
+						position, tokenIndex = position99, tokenIndex99
+					}
+				l100:
+					if !_rules[ruleConjunctions]() {
+						goto l95
+					}
+					goto l96
+				l95:
+					position, tokenIndex = position95, tokenIndex95
+				}
+			l96:
+				add(ruleConjunctions, position94)
+			}
+			return true
+		l93:
+			position, tokenIndex = position93, tokenIndex93
+			return false
+		},
+		/* 16 Conjunction <- <(Field WS? Relation WS? StringLiteral)> */
+		func() bool {
+			position101, tokenIndex101 := position, tokenIndex
+			{
+				position102 := position
+				if !_rules[ruleField]() {
+					goto l101
+				}
+				{
+					position103, tokenIndex103 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l103
+					}
+					goto l104
+				l103:
+					position, tokenIndex = position103, tokenIndex103
+				}
+			l104:
+				if !_rules[ruleRelation]() {
+					goto l101
+				}
+				{
+					position105, tokenIndex105 := position, tokenIndex
+					if !_rules[ruleWS]() {
+						goto l105
+					}
+					goto l106
+				l105:
+					position, tokenIndex = position105, tokenIndex105
+				}
+			l106:
+				if !_rules[ruleStringLiteral]() {
+					goto l101
+				}
+				add(ruleConjunction, position102)
+			}
+			return true
+		l101:
+			position, tokenIndex = position101, tokenIndex101
+			return false
+		},
+		/* 17 Field <- <([a-z] ([a-z] / [A-Z] / [0-9])*)> */
+		func() bool {
+			position107, tokenIndex107 := position, tokenIndex
+			{
+				position108 := position
+				if c := buffer[position]; c < rune('a') || c > rune('z') {
+					goto l107
+				}
+				position++
+			l109:
+				{
+					position110, tokenIndex110 := position, tokenIndex
+					{
+						position111, tokenIndex111 := position, tokenIndex
+						if c := buffer[position]; c < rune('a') || c > rune('z') {
+							goto l112
+						}
+						position++
+						goto l111
+					l112:
+						position, tokenIndex = position111, tokenIndex111
+						if c := buffer[position]; c < rune('A') || c > rune('Z') {
+							goto l113
+						}
+						position++
+						goto l111
+					l113:
+						position, tokenIndex = position111, tokenIndex111
+						if c := buffer[position]; c < rune('0') || c > rune('9') {
+							goto l110
+						}
+						position++
+					}
+				l111:
+					goto l109
+				l110:
+					position, tokenIndex = position110, tokenIndex110
+				}
+				add(ruleField, position108)
+			}
+			return true
+		l107:
+			position, tokenIndex = position107, tokenIndex107
+			return false
+		},
+		/* 18 Relation <- <(('=' '=') / ('!' '=') / ('c' 'o' 'n' 't' 'a' 'i' 'n' 's') / ('s' 't' 'a' 'r' 't' 's' 'W' 'i' 't' 'h') / ('e' 'n' 'd' 's' 'W' 'i' 't' 'h'))> */
+		func() bool {
+			position114, tokenIndex114 := position, tokenIndex
+			{
+				position115 := position
+				{
+					position116, tokenIndex116 := position, tokenIndex
+					if buffer[position] != rune('=') {
+						goto l117
+					}
+					position++
+					if buffer[position] != rune('=') {
+						goto l117
+					}
+					position++
+					goto l116
+				l117:
+					position, tokenIndex = position116, tokenIndex116
+					if buffer[position] != rune('!') {
+						goto l118
+					}
+					position++
+					if buffer[position] != rune('=') {
+						goto l118
+					}
+					position++
+					goto l116
+				l118:
+					position, tokenIndex = position116, tokenIndex116
+					if buffer[position] != rune('c') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('o') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('n') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('t') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('a') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('i') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('n') {
+						goto l119
+					}
+					position++
+					if buffer[position] != rune('s') {
+						goto l119
+					}
+					position++
+					goto l116
+				l119:
+					position, tokenIndex = position116, tokenIndex116
+					if buffer[position] != rune('s') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('t') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('a') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('r') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('t') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('s') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('W') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('i') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('t') {
+						goto l120
+					}
+					position++
+					if buffer[position] != rune('h') {
+						goto l120
+					}
+					position++
+					goto l116
+				l120:
+					position, tokenIndex = position116, tokenIndex116
+					if buffer[position] != rune('e') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('n') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('d') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('s') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('W') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('i') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('t') {
+						goto l114
+					}
+					position++
+					if buffer[position] != rune('h') {
+						goto l114
+					}
+					position++
+				}
+			l116:
+				add(ruleRelation, position115)
+			}
+			return true
+		l114:
+			position, tokenIndex = position114, tokenIndex114
+			return false
+		},
+		/* 19 WS <- <(' ' / '\t')+> */
+		func() bool {
+			position121, tokenIndex121 := position, tokenIndex
+			{
+				position122 := position
+				{
+					position125, tokenIndex125 := position, tokenIndex
+					if buffer[position] != rune(' ') {
+						goto l126
+					}
+					position++
+					goto l125
+				l126:
+					position, tokenIndex = position125, tokenIndex125
+					if buffer[position] != rune('\t') {
+						goto l121
+					}
+					position++
+				}
+			l125:
+			l123:
+				{
+					position124, tokenIndex124 := position, tokenIndex
+					{
+						position127, tokenIndex127 := position, tokenIndex
+						if buffer[position] != rune(' ') {
+							goto l128
+						}
+						position++
+						goto l127
+					l128:
+						position, tokenIndex = position127, tokenIndex127
+						if buffer[position] != rune('\t') {
+							goto l124
+						}
+						position++
+					}
+				l127:
+					goto l123
+				l124:
+					position, tokenIndex = position124, tokenIndex124
+				}
+				add(ruleWS, position122)
+			}
+			return true
+		l121:
+			position, tokenIndex = position121, tokenIndex121
+			return false
+		},
+	}
+	p.rules = _rules
+}
diff --git a/util/fipstools/acvp/acvptool/subprocess/hash.go b/util/fipstools/acvp/acvptool/subprocess/hash.go
new file mode 100644
index 0000000..7be3162
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/subprocess/hash.go
@@ -0,0 +1,127 @@
+package subprocess
+
+import (
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+)
+
+// The following structures reflect the JSON of ACVP hash tests. See
+// https://usnistgov.github.io/ACVP/artifacts/draft-celi-acvp-sha-00.html#test_vectors
+
+type hashTestVectorSet struct {
+	Groups []hashTestGroup `json:"testGroups"`
+}
+
+type hashTestGroup struct {
+	ID    uint64 `json:"tgId"`
+	Type  string `json:"testType"`
+	Tests []struct {
+		ID        uint64 `json:"tcId"`
+		BitLength uint64 `json:"len"`
+		MsgHex    string `json:"msg"`
+	} `json:"tests"`
+}
+
+type hashTestGroupResponse struct {
+	ID    uint64             `json:"tgId"`
+	Tests []hashTestResponse `json:"tests"`
+}
+
+type hashTestResponse struct {
+	ID         uint64          `json:"tcId"`
+	DigestHex  string          `json:"md,omitempty"`
+	MCTResults []hashMCTResult `json:"resultsArray,omitempty"`
+}
+
+type hashMCTResult struct {
+	DigestHex string `json:"md"`
+}
+
+// hashPrimitive implements an ACVP algorithm by making requests to the
+// subprocess to hash strings.
+type hashPrimitive struct {
+	// algo is the ACVP name for this algorithm and also the command name
+	// given to the subprocess to hash with this hash function.
+	algo string
+	// size is the number of bytes of digest that the hash produces.
+	size int
+	m    *Subprocess
+}
+
+// hash uses the subprocess to hash msg and returns the digest.
+func (h *hashPrimitive) hash(msg []byte) []byte {
+	result, err := h.m.transact(h.algo, 1, msg)
+	if err != nil {
+		panic("hash operation failed: " + err.Error())
+	}
+	return result[0]
+}
+
+func (h *hashPrimitive) Process(vectorSet []byte) (interface{}, error) {
+	var parsed hashTestVectorSet
+	if err := json.Unmarshal(vectorSet, &parsed); err != nil {
+		return nil, err
+	}
+
+	var ret []hashTestGroupResponse
+	// See
+	// https://usnistgov.github.io/ACVP/artifacts/draft-celi-acvp-sha-00.html#rfc.section.3
+	// for details about the tests.
+	for _, group := range parsed.Groups {
+		response := hashTestGroupResponse{
+			ID: group.ID,
+		}
+
+		for _, test := range group.Tests {
+			if uint64(len(test.MsgHex))*4 != test.BitLength {
+				return nil, fmt.Errorf("test case %d/%d contains hex message of length %d but specifies a bit length of %d", group.ID, test.ID, len(test.MsgHex), test.BitLength)
+			}
+			msg, err := hex.DecodeString(test.MsgHex)
+			if err != nil {
+				return nil, fmt.Errorf("failed to decode hex in test case %d/%d: %s", group.ID, test.ID, err)
+			}
+
+			// https://usnistgov.github.io/ACVP/artifacts/draft-celi-acvp-sha-00.html#rfc.section.3
+			switch group.Type {
+			case "AFT":
+				response.Tests = append(response.Tests, hashTestResponse{
+					ID:        test.ID,
+					DigestHex: hex.EncodeToString(h.hash(msg)),
+				})
+
+			case "MCT":
+				// The Monte-Carlo Test starts from a seed
+				// message and hashes a sliding window of the
+				// previous three digests for 1000 inner
+				// iterations; the final digest of each of 100
+				// outer iterations is recorded and becomes
+				// the seed for the next.
+				if len(msg) != h.size {
+					return nil, fmt.Errorf("MCT test case %d/%d contains message of length %d but the digest length is %d", group.ID, test.ID, len(msg), h.size)
+				}
+
+				testResponse := hashTestResponse{ID: test.ID}
+
+				buf := make([]byte, 3*h.size)
+				var digest []byte
+				for i := 0; i < 100; i++ {
+					copy(buf, msg)
+					copy(buf[h.size:], msg)
+					copy(buf[2*h.size:], msg)
+					for j := 0; j < 1000; j++ {
+						digest = h.hash(buf)
+						copy(buf, buf[h.size:])
+						copy(buf[2*h.size:], digest)
+					}
+
+					testResponse.MCTResults = append(testResponse.MCTResults, hashMCTResult{hex.EncodeToString(digest)})
+					msg = digest
+				}
+
+				response.Tests = append(response.Tests, testResponse)
+
+			default:
+				return nil, fmt.Errorf("test group %d has unknown type %q", group.ID, group.Type)
+			}
+		}
+
+		ret = append(ret, response)
+	}
+
+	return ret, nil
+}
diff --git a/util/fipstools/acvp/acvptool/subprocess/subprocess.go b/util/fipstools/acvp/acvptool/subprocess/subprocess.go
new file mode 100644
index 0000000..7ab1c4b
--- /dev/null
+++ b/util/fipstools/acvp/acvptool/subprocess/subprocess.go
@@ -0,0 +1,162 @@
+package subprocess
+
+import (
+	"encoding/binary"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"os"
+	"os/exec"
+)
+
+// Subprocess is a "middle" layer that interacts with a FIPS module by running
+// a command and speaking a simple protocol over stdin/stdout.
+type Subprocess struct {
+	cmd        *exec.Cmd
+	stdin      io.WriteCloser
+	stdout     io.ReadCloser
+	primitives map[string]primitive
+}
+
+// New returns a new Subprocess middle layer that runs the given binary.
+func New(path string) (*Subprocess, error) {
+	cmd := exec.Command(path)
+	cmd.Stderr = os.Stderr
+	stdin, err := cmd.StdinPipe()
+	if err != nil {
+		return nil, err
+	}
+	stdout, err := cmd.StdoutPipe()
+	if err != nil {
+		return nil, err
+	}
+
+	if err := cmd.Start(); err != nil {
+		return nil, err
+	}
+
+	m := &Subprocess{
+		cmd:    cmd,
+		stdin:  stdin,
+		stdout: stdout,
+	}
+
+	m.primitives = map[string]primitive{
+		"SHA-1":    &hashPrimitive{"SHA-1", 20, m},
+		"SHA2-224": &hashPrimitive{"SHA2-224", 28, m},
+		"SHA2-256": &hashPrimitive{"SHA2-256", 32, m},
+		"SHA2-384": &hashPrimitive{"SHA2-384", 48, m},
+		"SHA2-512": &hashPrimitive{"SHA2-512", 64, m},
+	}
+
+	return m, nil
+}
+
+// Close signals the child process to exit and waits for it to complete.
+func (m *Subprocess) Close() {
+	m.stdout.Close()
+	m.stdin.Close()
+	m.cmd.Wait()
+}
+
+// transact performs a single request/response round trip with the subprocess.
+func (m *Subprocess) transact(cmd string, expectedResults int, args ...[]byte) ([][]byte, error) {
+	argLength := len(cmd)
+	for _, arg := range args {
+		argLength += len(arg)
+	}
+
+	buf := make([]byte, 4*(2+len(args)), 4*(2+len(args))+argLength)
+	binary.LittleEndian.PutUint32(buf, uint32(1+len(args)))
+	binary.LittleEndian.PutUint32(buf[4:], uint32(len(cmd)))
+	for i, arg := range args {
+		binary.LittleEndian.PutUint32(buf[4*(i+2):], uint32(len(arg)))
+	}
+	buf = append(buf, []byte(cmd)...)
+	for _, arg := range args {
+		buf = append(buf, arg...)
+	}
+
+	if _, err := m.stdin.Write(buf); err != nil {
+		return nil, err
+	}
+
+	buf = buf[:4]
+	if _, err := io.ReadFull(m.stdout, buf); err != nil {
+		return nil, err
+	}
+
+	numResults := binary.LittleEndian.Uint32(buf)
+	if int(numResults) != expectedResults {
+		return nil, fmt.Errorf("expected %d results from %q but got %d", expectedResults, cmd, numResults)
+	}
+
+	buf = make([]byte, 4*numResults)
+	if _, err := io.ReadFull(m.stdout, buf); err != nil {
+		return nil, err
+	}
+
+	var resultsLength uint64
+	for i := uint32(0); i < numResults; i++ {
+		resultsLength += uint64(binary.LittleEndian.Uint32(buf[4*i:]))
+	}
+
+	if resultsLength > (1 << 30) {
+		return nil, fmt.Errorf("results too large (%d bytes)", resultsLength)
+	}
+
+	results := make([]byte, resultsLength)
+	if _, err := io.ReadFull(m.stdout, results); err != nil {
+		return nil, err
+	}
+
+	ret := make([][]byte, 0, numResults)
+	var offset int
+	for i := uint32(0); i < numResults; i++ {
+		length := binary.LittleEndian.Uint32(buf[4*i:])
+		ret = append(ret, results[offset:offset+int(length)])
+		offset += int(length)
+	}
+
+	return ret, nil
+}
+
+// Config returns a JSON blob that describes the supported primitives. The
+// format of the blob is defined by ACVP. See
+// http://usnistgov.github.io/ACVP/artifacts/draft-fussell-acvp-spec-00.html#rfc.section.11.15.2.1
+func (m *Subprocess) Config() ([]byte, error) {
+	results, err := m.transact("getConfig", 1)
+	if err != nil {
+		return nil, err
+	}
+	var config []struct {
+		Algorithm string `json:"algorithm"`
+	}
+	if err := json.Unmarshal(results[0], &config); err != nil {
+		return nil, errors.New("failed to parse config response from wrapper: " + err.Error())
+	}
+	for _, algo := range config {
+		if _, ok := m.primitives[algo.Algorithm]; !ok {
+			return nil, fmt.Errorf("wrapper config advertises support for unknown algorithm %q", algo.Algorithm)
+		}
+	}
+	return results[0], nil
+}
+
+// Process runs a set of test vectors and returns the result.
+func (m *Subprocess) Process(algorithm string, vectorSet []byte) ([]byte, error) {
+	prim, ok := m.primitives[algorithm]
+	if !ok {
+		return nil, fmt.Errorf("unknown algorithm %q", algorithm)
+	}
+	ret, err := prim.Process(vectorSet)
+	if err != nil {
+		return nil, err
+	}
+	return json.Marshal(ret)
+}
+
+type primitive interface {
+	Process(vectorSet []byte) (interface{}, error)
+}
diff --git a/util/fipstools/acvp/modulewrapper/CMakeLists.txt b/util/fipstools/acvp/modulewrapper/CMakeLists.txt
new file mode 100644
index 0000000..8bee5cd
--- /dev/null
+++ b/util/fipstools/acvp/modulewrapper/CMakeLists.txt
@@ -0,0 +1,13 @@
+include_directories(../../../../include)
+
+if(FIPS)
+  add_executable(
+    modulewrapper
+
+    modulewrapper.cc
+  )
+
+  add_dependencies(modulewrapper global_target)
+
+  target_link_libraries(modulewrapper crypto)
+endif()
diff --git a/util/fipstools/acvp/modulewrapper/modulewrapper.cc b/util/fipstools/acvp/modulewrapper/modulewrapper.cc
new file mode 100644
index 0000000..79976dc
--- /dev/null
+++ b/util/fipstools/acvp/modulewrapper/modulewrapper.cc
@@ -0,0 +1,274 @@
+/* Copyright (c) 2019, Google Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
+ * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
+ * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */
+
+#include <string>
+#include <vector>
+
+#include <assert.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/uio.h>
+#include <unistd.h>
+
+#include <openssl/sha.h>
+#include <openssl/span.h>
+
+static constexpr size_t kMaxArgs = 8;
+static constexpr size_t kMaxArgLength = (1 << 20);
+static constexpr size_t kMaxNameLength = 30;
+
+static_assert(((kMaxArgs - 1) * kMaxArgLength) + kMaxNameLength < (1 << 30),
+              "Argument limits permit excessive messages");
+
+using namespace bssl;
+
+static bool ReadAll(int fd, void *in_data, size_t data_len) {
+  uint8_t *data = reinterpret_cast<uint8_t *>(in_data);
+  size_t done = 0;
+
+  while (done < data_len) {
+    ssize_t r;
+    do {
+      r = read(fd, &data[done], data_len - done);
+    } while (r == -1 && errno == EINTR);
+
+    if (r <= 0) {
+      return false;
+    }
+
+    done += r;
+  }
+
+  return true;
+}
+
+template <typename... Args>
+static bool WriteReply(int fd, Args... args) {
+  std::vector<Span<const uint8_t>> spans = {args...};
+  if (spans.empty() || spans.size() > kMaxArgs) {
+    abort();
+  }
+
+  uint32_t nums[1 + kMaxArgs];
+  iovec iovs[kMaxArgs + 1];
+  nums[0] = spans.size();
+  iovs[0].iov_base = nums;
+  iovs[0].iov_len = sizeof(uint32_t) * (1 + spans.size());
+
+  for (size_t i = 0; i < spans.size(); i++) {
+    const auto &span = spans[i];
+    nums[i + 1] = span.size();
+    iovs[i + 1].iov_base = const_cast<uint8_t *>(span.data());
+    iovs[i + 1].iov_len = span.size();
+  }
+
+  const size_t num_iov = spans.size() + 1;
+  size_t iov_done = 0;
+  while (iov_done < num_iov) {
+    ssize_t r;
+    do {
+      r = writev(fd, &iovs[iov_done], num_iov - iov_done);
+    } while (r == -1 && errno == EINTR);
+
+    if (r <= 0) {
+      return false;
+    }
+
+    size_t written = r;
+    for (size_t i = iov_done; written > 0 && i < num_iov; i++) {
+      iovec &iov = iovs[i];
+
+      size_t done = written;
+      if (done > iov.iov_len) {
+        done = iov.iov_len;
+      }
+
+      iov.iov_base = reinterpret_cast<uint8_t *>(iov.iov_base) + done;
+      iov.iov_len -= done;
+      written -= done;
+
+      if (iov.iov_len == 0) {
+        iov_done++;
+      }
+    }
+
+    assert(written == 0);
+  }
+
+  return true;
+}
+
+static bool GetConfig(const Span<const uint8_t> args[]) {
+  static constexpr char kConfig[] =
+      "["
+      "{"
+      "  \"algorithm\": \"SHA2-224\","
+      "  \"revision\": \"1.0\","
+      "  \"messageLength\": [{"
+      "    \"min\": 0, \"max\": 65528, \"increment\": 8"
+      "  }]"
+      "},"
+      "{"
+      "  \"algorithm\": \"SHA2-256\","
+      "  \"revision\": \"1.0\","
+      "  \"messageLength\": [{"
+      "    \"min\": 0, \"max\": 65528, \"increment\": 8"
+      "  }]"
+      "},"
+      "{"
+      "  \"algorithm\": \"SHA2-384\","
+      "  \"revision\": \"1.0\","
+      "  \"messageLength\": [{"
+      "    \"min\": 0, \"max\": 65528, \"increment\": 8"
+      "  }]"
+      "},"
+      "{"
+      "  \"algorithm\": \"SHA2-512\","
+      "  \"revision\": \"1.0\","
+      "  \"messageLength\": [{"
+      "    \"min\": 0, \"max\": 65528, \"increment\": 8"
+      "  }]"
+      "},"
+      "{"
+      "  \"algorithm\": \"SHA-1\","
+      "  \"revision\": \"1.0\","
+      "  \"messageLength\": [{"
+      "    \"min\": 0, \"max\": 65528, \"increment\": 8"
+      "  }]"
+      "}"
+      "]";
+  return WriteReply(
+      STDOUT_FILENO,
+      Span<const uint8_t>(reinterpret_cast<const uint8_t *>(kConfig),
+                          sizeof(kConfig) - 1));
+}
+
+template <uint8_t *(*OneShotHash)(const uint8_t *, size_t, uint8_t *),
+          size_t DigestLength>
+static bool Hash(const Span<const uint8_t> args[]) {
+  uint8_t digest[DigestLength];
+  OneShotHash(args[0].data(), args[0].size(), digest);
+  return WriteReply(STDOUT_FILENO, Span<const uint8_t>(digest));
+}
+
+static constexpr struct {
+  const char name[kMaxNameLength + 1];
+  uint8_t expected_args;
+  bool (*handler)(const Span<const uint8_t>[]);
+} kFunctions[] = {
+    {"getConfig", 0, GetConfig},
+    {"SHA-1", 1, Hash<SHA1, SHA_DIGEST_LENGTH>},
+    {"SHA2-224", 1, Hash<SHA224, SHA224_DIGEST_LENGTH>},
+    {"SHA2-256", 1, Hash<SHA256, SHA256_DIGEST_LENGTH>},
+    {"SHA2-384", 1, Hash<SHA384, SHA384_DIGEST_LENGTH>},
+    {"SHA2-512", 1, Hash<SHA512, SHA512_DIGEST_LENGTH>},
+};
+
+int main() {
+  uint32_t nums[1 + kMaxArgs];
+  uint8_t *buf = nullptr;
+  size_t buf_len = 0;
+  Span<const uint8_t> args[kMaxArgs];
+
+  for (;;) {
+    if (!ReadAll(STDIN_FILENO, nums, sizeof(uint32_t) * 2)) {
+      return 1;
+    }
+
+    const size_t num_args = nums[0];
+    if (num_args == 0) {
+      fprintf(stderr, "Invalid, zero-argument operation requested.\n");
+      return 2;
+    } else if (num_args > kMaxArgs) {
+      fprintf(stderr,
+              "Operation requested with %zu args, but %zu is the limit.\n",
+              num_args, kMaxArgs);
+      return 2;
+    }
+
+    if (num_args > 1 &&
+        !ReadAll(STDIN_FILENO, &nums[2], sizeof(uint32_t) * (num_args - 1))) {
+      return 1;
+    }
+
+    size_t need = 0;
+    for (size_t i = 0; i < num_args; i++) {
+      const size_t arg_length = nums[i + 1];
+      if (i == 0 && arg_length > kMaxNameLength) {
+        fprintf(stderr,
+                "Operation with name of length %zu exceeded limit of %zu.\n",
+                arg_length, kMaxNameLength);
+        return 2;
+      } else if (arg_length > kMaxArgLength) {
+        fprintf(
+            stderr,
+            "Operation with argument of length %zu exceeded limit of %zu.\n",
+            arg_length, kMaxArgLength);
+        return 2;
+      }
+
+      // static_assert around kMaxArgs etc enforces that this doesn't overflow.
+      need += arg_length;
+    }
+
+    if (need > buf_len) {
+      free(buf);
+      size_t alloced = need + (need >> 1);
+      if (alloced < need) {
+        abort();
+      }
+      buf = reinterpret_cast<uint8_t *>(malloc(alloced));
+      if (buf == nullptr) {
+        abort();
+      }
+      buf_len = alloced;
+    }
+
+    if (!ReadAll(STDIN_FILENO, buf, need)) {
+      return 1;
+    }
+
+    size_t offset = 0;
+    for (size_t i = 0; i < num_args; i++) {
+      args[i] = Span<const uint8_t>(&buf[offset], nums[i + 1]);
+      offset += nums[i + 1];
+    }
+
+    bool found = false;
+    for (const auto &func : kFunctions) {
+      if (args[0].size() == strlen(func.name) &&
+          memcmp(args[0].data(), func.name, args[0].size()) == 0) {
+        if (num_args - 1 != func.expected_args) {
+          fprintf(stderr,
+                  "\'%s\' operation received %zu arguments but expected %u.\n",
+                  func.name, num_args - 1, func.expected_args);
+          return 2;
+        }
+
+        if (!func.handler(&args[1])) {
+          return 4;
+        }
+
+        found = true;
+        break;
+      }
+    }
+
+    if (!found) {
+      const std::string name(reinterpret_cast<const char *>(args[0].data()),
+                             args[0].size());
+      fprintf(stderr, "Unknown operation: %s\n", name.c_str());
+      return 3;
+    }
+  }
+}