Lines Matching refs:token
67 # reported as a token.
100 # all colons are escaped, it produces a TERM token whose value
102 # there are unescaped colons, it produces an FTERM token whose
104 # type, the action key, and the token that followed the last
113 # the escaped colon in a token. The colon is matching either a
115 # token or field. The part after the colon is attempting to
128 token = fields[-1]
134 t.type = self.reserved.get(token, "TERM")
135 t.value = token
147 t.value = (pkg_name, action_type, key, token)
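
The matched lines 100-147 above describe how the lexer separates a plain TERM from a structured FTERM: a term whose colons are all escaped becomes a TERM (checked against the reserved-word table), while unescaped colons produce an FTERM carrying a (pkg_name, action_type, key, token) tuple, with token being the part after the last unescaped colon. Below is a minimal sketch of that decision, assuming colons are escaped with a backslash; split_fields, lex_term, and the exact field layout are illustrative assumptions, not taken from the source.

    import re

    def split_fields(raw):
        # Split on colons that are not escaped with a backslash,
        # then drop the escapes inside each field.
        fields = re.split(r"(?<!\\):", raw)
        return [f.replace("\\:", ":") for f in fields]

    def lex_term(raw, reserved):
        fields = split_fields(raw)
        token = fields[-1]
        if len(fields) == 1:
            # Every colon was escaped: a plain search term, unless it
            # matches a reserved word such as AND or OR.
            return reserved.get(token, "TERM"), token
        # Unescaped colons: a structured term.  Missing leading fields
        # default to the empty string (field order assumed here).
        padded = [""] * (4 - len(fields)) + fields
        pkg_name, action_type, key = padded[-4:-1]
        return "FTERM", (pkg_name, action_type, key, token)
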
170 def token(self):
171 return self.lexer.token()
186 tok = self.lexer.token()
257 # zap is split into a separate lexer TERM token. zap flows
266 pkg_name, at, key, token = p[1]
269 # If no token was attached to the FTERM, then attach
270 # the term found following it. If a token was attached
273 if token == "":
280 self.query_objs["TermQuery"](token)),
284 # no token was attached to the FTERM, it's necessary to make
287 if token == "":
288 token = "*"
290 self.query_objs["TermQuery"](token))
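
Lines 257-290 above cover the parser side: an FTERM that arrived with an empty token either adopts the TERM that follows it in the query string or, if nothing follows, falls back to the "*" wildcard before a TermQuery is built. A rough sketch of that fallback is shown below, assuming query_objs maps class names to query constructors as in the fragments above; reduce_fterm and following_term are illustrative names.

    def reduce_fterm(fterm_value, query_objs, following_term=None):
        pkg_name, action_type, key, token = fterm_value
        if token == "" and following_term is not None:
            # No token was attached to the FTERM itself, so attach the
            # term found immediately after it.
            token = following_term
        if token == "":
            # Still no token: match everything under this
            # pkg_name/action_type/key by using the wildcard.
            token = "*"
        # The structured fields would normally wrap this in a field-level
        # query; only the TermQuery construction is sketched here.
        return query_objs["TermQuery"](token)
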
835 of the value which matched the token. If it is, use the value
883 fourth field (the token) of the structured search."""
1032 """term is the string for the token to be searched for."""
1473 # tokens, return results for every known token.
1482 # Check that the token was what was expected.
1489 # Check the various action types this token was
1495 # Check the key types this token and action type
1502 # token.
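
The test comments at 1473-1502 outline a nested verification: with no tokens given, results come back for every known token, and each result is then checked token by token, action type by action type, and key by key. A hypothetical shape for that check follows; expected is an assumed token -> action_type -> key -> values mapping, not a structure from the source.

    def check_results(results, expected):
        for token, actions in results.items():
            # Check that the token was what was expected.
            assert token in expected
            for action_type, keys in actions.items():
                # Check the action types this token was found under.
                assert action_type in expected[token]
                for key, values in keys.items():
                    # Check the key types for this token and action type.
                    assert values == expected[token][action_type][key]
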