nexedi / gitlab-ce · Commit 73647263
authored Sep 25, 2017 by Kim "BKC" Carlbäcker
parent 9e4f081f

Update BurntSushi/toml

Showing 10 changed files with 248 additions and 127 deletions
Files changed:
  vendor/github.com/BurntSushi/toml/.gitignore    +0    -5
  vendor/github.com/BurntSushi/toml/.travis.yml   +0    -15
  vendor/github.com/BurntSushi/toml/COMPATIBLE    +1    -1
  vendor/github.com/BurntSushi/toml/COPYING       +17   -10
  vendor/github.com/BurntSushi/toml/README.md     +8    -10
  vendor/github.com/BurntSushi/toml/doc.go        +1    -1
  vendor/github.com/BurntSushi/toml/encode.go     +1    -1
  vendor/github.com/BurntSushi/toml/lex.go        +179  -84
  vendor/github.com/BurntSushi/toml/parse.go      +35   -0
  vendor/vendor.json                              +6    -0
vendor/github.com/BurntSushi/toml/.gitignore (deleted, file mode 100644 → 0)
-TAGS
-tags
-.*.swp
-tomlcheck/tomlcheck
-toml.test
vendor/github.com/BurntSushi/toml/.travis.yml (deleted, file mode 100644 → 0)
-language: go
-go:
-  - 1.1
-  - 1.2
-  - 1.3
-  - 1.4
-  - 1.5
-  - 1.6
-  - tip
-install:
-  - go install ./...
-  - go get github.com/BurntSushi/toml-test
-script:
-  - export PATH="$PATH:$HOME/gopath/bin"
-  - make test
vendor/github.com/BurntSushi/toml/COMPATIBLE
 Compatible with TOML version
-[v0.2.0](https://github.com/mojombo/toml/blob/master/versions/toml-v0.2.0.md)
+[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
vendor/github.com/BurntSushi/toml/COPYING
-DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
-Version 2, December 2004
-
-Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
-
-Everyone is permitted to copy and distribute verbatim or modified
-copies of this license document, and changing it is allowed as long
-as the name is changed.
-
-DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
-TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-0. You just DO WHAT THE FUCK YOU WANT TO.
+The MIT License (MIT)
+
+Copyright (c) 2013 TOML authors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
vendor/github.com/BurntSushi/toml/README.md
 ## TOML parser and encoder for Go with reflection

 TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
-reflection interface similar to Go's standard library `json` and `xml`
+reflection interface similar to Go's standard library `json` and `xml`
 packages. This package also supports the `encoding.TextUnmarshaler` and
-`encoding.TextMarshaler` interfaces so that you can define custom data
+`encoding.TextMarshaler` interfaces so that you can define custom data
 representations. (There is an example of this below.)

-Spec: https://github.com/mojombo/toml
+Spec: https://github.com/toml-lang/toml

 Compatible with TOML version
-[v0.2.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.2.0.md)
+[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)

-Documentation: http://godoc.org/github.com/BurntSushi/toml
+Documentation: https://godoc.org/github.com/BurntSushi/toml

 Installation:

@@ -26,8 +26,7 @@ go get github.com/BurntSushi/toml/cmd/tomlv
 tomlv some-toml-file.toml
 ```
-[…](https://travis-ci.org/BurntSushi/toml)
+[…](https://travis-ci.org/BurntSushi/toml)
 […](https://godoc.org/github.com/BurntSushi/toml)

 ### Testing

@@ -87,7 +86,7 @@ type TOML struct {
 ### Using the `encoding.TextUnmarshaler` interface

-Here's an example that automatically parses duration strings into
+Here's an example that automatically parses duration strings into
 `time.Duration` values:

 ```toml

@@ -120,7 +119,7 @@ for _, s := range favorites.Song {
 }
 ```

-And you'll also need a `duration` type that satisfies the
+And you'll also need a `duration` type that satisfies the
 `encoding.TextUnmarshaler` interface:

 ```go

@@ -217,4 +216,3 @@ Note that a case insensitive match will be tried if an exact match can't be
 found.

 A working example of the above can be found in `_examples/example.{go,toml}`.
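
The README hunks above refer to an `encoding.TextUnmarshaler` example whose body is collapsed in this view. Below is a rough, self-contained sketch of the pattern the README describes; the `duration` wrapper, the `[[song]]` document, and the struct names are illustrative, and only `toml.Decode` is taken from the library's public API.

```go
package main

import (
	"fmt"
	"time"

	"github.com/BurntSushi/toml"
)

// duration wraps time.Duration so TOML strings like "4m49s" can be decoded
// through the encoding.TextUnmarshaler interface.
type duration struct {
	time.Duration
}

func (d *duration) UnmarshalText(text []byte) error {
	var err error
	d.Duration, err = time.ParseDuration(string(text))
	return err
}

func main() {
	const doc = `
[[song]]
name = "Thunder Road"
duration = "4m49s"
`
	var favorites struct {
		Song []struct {
			Name     string
			Duration duration
		}
	}
	// toml.Decode populates favorites; the returned MetaData is ignored here.
	if _, err := toml.Decode(doc, &favorites); err != nil {
		panic(err)
	}
	for _, s := range favorites.Song {
		fmt.Printf("%s (%s)\n", s.Name, s.Duration.Duration)
	}
}
```
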
vendor/github.com/BurntSushi/toml/doc.go
@@ -4,7 +4,7 @@ files via reflection. There is also support for delaying decoding with
 the Primitive type, and querying the set of keys in a TOML document with the
 MetaData type.

-The specification implemented: https://github.com/mojombo/toml
+The specification implemented: https://github.com/toml-lang/toml

 The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
 whether a file is a valid TOML document. It can also be used to print the
vendor/github.com/BurntSushi/toml/encode.go
@@ -241,7 +241,7 @@ func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
 func (enc *Encoder) eTable(key Key, rv reflect.Value) {
     panicIfInvalidKey(key)
     if len(key) == 1 {
-        // Output an extra new line between top-level tables.
+        // Output an extra newline between top-level tables.
         // (The newline isn't written if nothing else has been written though.)
         enc.newline()
     }
vendor/github.com/BurntSushi/toml/lex.go
@@ -30,24 +30,28 @@ const (
     itemArrayTableEnd
     itemKeyStart
     itemCommentStart
+    itemInlineTableStart
+    itemInlineTableEnd
 )

 const (
-    eof             = 0
-    tableStart      = '['
-    tableEnd        = ']'
-    arrayTableStart = '['
-    arrayTableEnd   = ']'
-    tableSep        = '.'
-    keySep          = '='
-    arrayStart      = '['
-    arrayEnd        = ']'
-    arrayValTerm    = ','
-    commentStart    = '#'
-    stringStart     = '"'
-    stringEnd       = '"'
-    rawStringStart  = '\''
-    rawStringEnd    = '\''
+    eof              = 0
+    comma            = ','
+    tableStart       = '['
+    tableEnd         = ']'
+    arrayTableStart  = '['
+    arrayTableEnd    = ']'
+    tableSep         = '.'
+    keySep           = '='
+    arrayStart       = '['
+    arrayEnd         = ']'
+    commentStart     = '#'
+    stringStart      = '"'
+    stringEnd        = '"'
+    rawStringStart   = '\''
+    rawStringEnd     = '\''
+    inlineTableStart = '{'
+    inlineTableEnd   = '}'
 )

 type stateFn func(lx *lexer) stateFn

@@ -56,11 +60,18 @@ type lexer struct {
     input string
     start int
     pos   int
-    width int
     line  int
     state stateFn
     items chan item

+    // Allow for backing up up to three runes.
+    // This is necessary because TOML contains 3-rune tokens (""" and ''').
+    prevWidths [3]int
+    nprev      int // how many of prevWidths are in use
+    // If we emit an eof, we can still back up, but it is not OK to call
+    // next again.
+    atEOF bool
+
     // A stack of state functions used to maintain context.
     // The idea is to reuse parts of the state machine in various places.
     // For example, values can appear at the top level or within arbitrarily

@@ -88,7 +99,7 @@ func (lx *lexer) nextItem() item {
 func lex(input string) *lexer {
     lx := &lexer{
-        input: input + "\n",
+        input: input,
         state: lexTop,
         line:  1,
         items: make(chan item, 10),

@@ -103,7 +114,7 @@ func (lx *lexer) push(state stateFn) {
 func (lx *lexer) pop() stateFn {
     if len(lx.stack) == 0 {
-        return lx.errorf("BUG in lexer: no states to pop.")
+        return lx.errorf("BUG in lexer: no states to pop")
     }
     last := lx.stack[len(lx.stack)-1]
     lx.stack = lx.stack[0 : len(lx.stack)-1]

@@ -125,16 +136,25 @@ func (lx *lexer) emitTrim(typ itemType) {
 }

 func (lx *lexer) next() (r rune) {
+    if lx.atEOF {
+        panic("next called after EOF")
+    }
     if lx.pos >= len(lx.input) {
-        lx.width = 0
+        lx.atEOF = true
         return eof
     }

     if lx.input[lx.pos] == '\n' {
         lx.line++
     }
-    r, lx.width = utf8.DecodeRuneInString(lx.input[lx.pos:])
-    lx.pos += lx.width
+    lx.prevWidths[2] = lx.prevWidths[1]
+    lx.prevWidths[1] = lx.prevWidths[0]
+    if lx.nprev < 3 {
+        lx.nprev++
+    }
+    r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
+    lx.prevWidths[0] = w
+    lx.pos += w
     return r
 }

@@ -143,9 +163,20 @@ func (lx *lexer) ignore() {
     lx.start = lx.pos
 }

-// backup steps back one rune. Can be called only once per call of next.
+// backup steps back one rune. Can be called only twice between calls to next.
 func (lx *lexer) backup() {
-    lx.pos -= lx.width
+    if lx.atEOF {
+        lx.atEOF = false
+        return
+    }
+    if lx.nprev < 1 {
+        panic("backed up too far")
+    }
+    w := lx.prevWidths[0]
+    lx.prevWidths[0] = lx.prevWidths[1]
+    lx.prevWidths[1] = lx.prevWidths[2]
+    lx.nprev--
+    lx.pos -= w
     if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
         lx.line--
     }

@@ -182,7 +213,7 @@ func (lx *lexer) skip(pred func(rune) bool) {
 // errorf stops all lexing by emitting an error and returning `nil`.
 // Note that any value that is a character is escaped if it's a special
-// character (new lines, tabs, etc.).
+// character (newlines, tabs, etc.).
 func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
     lx.items <- item{
         itemError,

@@ -198,7 +229,6 @@ func lexTop(lx *lexer) stateFn {
     if isWhitespace(r) || isNL(r) {
         return lexSkip(lx, lexTop)
     }
-
     switch r {
     case commentStart:
         lx.push(lexTop)

@@ -207,7 +237,7 @@ func lexTop(lx *lexer) stateFn {
         return lexTableStart
     case eof:
         if lx.pos > lx.start {
-            return lx.errorf("Unexpected EOF.")
+            return lx.errorf("unexpected EOF")
         }
         lx.emit(itemEOF)
         return nil

@@ -222,12 +252,12 @@ func lexTop(lx *lexer) stateFn {
 // lexTopEnd is entered whenever a top-level item has been consumed. (A value
 // or a table.) It must see only whitespace, and will turn back to lexTop
-// upon a new line. If it sees EOF, it will quit the lexer successfully.
+// upon a newline. If it sees EOF, it will quit the lexer successfully.
 func lexTopEnd(lx *lexer) stateFn {
     r := lx.next()
     switch {
     case r == commentStart:
-        // a comment will read to a new line for us.
+        // a comment will read to a newline for us.
         lx.push(lexTop)
         return lexCommentStart
     case isWhitespace(r):

@@ -236,11 +266,11 @@ func lexTopEnd(lx *lexer) stateFn {
         lx.ignore()
         return lexTop
     case r == eof:
-        lx.ignore()
-        return lexTop
+        lx.emit(itemEOF)
+        return nil
     }
-    return lx.errorf("Expected a top-level item to end with a new line, "+
-        "comment or EOF, but got %q instead.", r)
+    return lx.errorf("expected a top-level item to end with a newline, "+
+        "comment, or EOF, but got %q instead", r)
 }

 // lexTable lexes the beginning of a table. Namely, it makes sure that

@@ -267,8 +297,8 @@ func lexTableEnd(lx *lexer) stateFn {
 func lexArrayTableEnd(lx *lexer) stateFn {
     if r := lx.next(); r != arrayTableEnd {
-        return lx.errorf("Expected end of table array name delimiter %q, "+
-            "but got %q instead.", arrayTableEnd, r)
+        return lx.errorf("expected end of table array name delimiter %q, "+
+            "but got %q instead", arrayTableEnd, r)
     }
     lx.emit(itemArrayTableEnd)
     return lexTopEnd

@@ -278,11 +308,11 @@ func lexTableNameStart(lx *lexer) stateFn {
     lx.skip(isWhitespace)
     switch r := lx.peek(); {
     case r == tableEnd || r == eof:
-        return lx.errorf("Unexpected end of table name. (Table names cannot " +
-            "be empty.)")
+        return lx.errorf("unexpected end of table name " +
+            "(table names cannot be empty)")
     case r == tableSep:
-        return lx.errorf("Unexpected table separator. (Table names cannot " +
-            "be empty.)")
+        return lx.errorf("unexpected table separator " +
+            "(table names cannot be empty)")
     case r == stringStart || r == rawStringStart:
         lx.ignore()
         lx.push(lexTableNameEnd)

@@ -317,8 +347,8 @@ func lexTableNameEnd(lx *lexer) stateFn {
     case r == tableEnd:
         return lx.pop()
     default:
-        return lx.errorf("Expected '.' or ']' to end table name, but got %q "+
-            "instead.", r)
+        return lx.errorf("expected '.' or ']' to end table name, "+
+            "but got %q instead", r)
     }
 }

@@ -328,7 +358,7 @@ func lexKeyStart(lx *lexer) stateFn {
     r := lx.peek()
     switch {
     case r == keySep:
-        return lx.errorf("Unexpected key separator %q.", keySep)
+        return lx.errorf("unexpected key separator %q", keySep)
     case isWhitespace(r) || isNL(r):
         lx.next()
         return lexSkip(lx, lexKeyStart)

@@ -359,7 +389,7 @@ func lexBareKey(lx *lexer) stateFn {
         lx.emit(itemText)
         return lexKeyEnd
     default:
-        return lx.errorf("Bare keys cannot contain %q.", r)
+        return lx.errorf("bare keys cannot contain %q", r)
     }
 }

@@ -372,7 +402,7 @@ func lexKeyEnd(lx *lexer) stateFn {
     case isWhitespace(r):
         return lexSkip(lx, lexKeyEnd)
     default:
-        return lx.errorf("Expected key separator %q, but got %q instead.",
+        return lx.errorf("expected key separator %q, but got %q instead",
             keySep, r)
     }
 }

@@ -381,9 +411,8 @@ func lexKeyEnd(lx *lexer) stateFn {
 // lexValue will ignore whitespace.
 // After a value is lexed, the last state on the next is popped and returned.
 func lexValue(lx *lexer) stateFn {
-    // We allow whitespace to precede a value, but NOT new lines.
-    // In array syntax, the array states are responsible for ignoring new
-    // lines.
+    // We allow whitespace to precede a value, but NOT newlines.
+    // In array syntax, the array states are responsible for ignoring newlines.
     r := lx.next()
     switch {
     case isWhitespace(r):

@@ -397,6 +426,10 @@ func lexValue(lx *lexer) stateFn {
         lx.ignore()
         lx.emit(itemArray)
         return lexArrayValue
+    case inlineTableStart:
+        lx.ignore()
+        lx.emit(itemInlineTableStart)
+        return lexInlineTableValue
     case stringStart:
         if lx.accept(stringStart) {
             if lx.accept(stringStart) {

@@ -420,7 +453,7 @@ func lexValue(lx *lexer) stateFn {
     case '+', '-':
         return lexNumberStart
     case '.': // special error case, be kind to users
-        return lx.errorf("Floats must start with a digit, not '.'.")
+        return lx.errorf("floats must start with a digit, not '.'")
     }
     if unicode.IsLetter(r) {
         // Be permissive here; lexBool will give a nice error if the

@@ -430,11 +463,11 @@ func lexValue(lx *lexer) stateFn {
         lx.backup()
         return lexBool
     }
-    return lx.errorf("Expected value but found %q instead.", r)
+    return lx.errorf("expected value but found %q instead", r)
 }

 // lexArrayValue consumes one value in an array. It assumes that '[' or ','
-// have already been consumed. All whitespace and new lines are ignored.
+// have already been consumed. All whitespace and newlines are ignored.
 func lexArrayValue(lx *lexer) stateFn {
     r := lx.next()
     switch {

@@ -443,10 +476,11 @@ func lexArrayValue(lx *lexer) stateFn {
     case r == commentStart:
         lx.push(lexArrayValue)
         return lexCommentStart
-    case r == arrayValTerm:
-        return lx.errorf("Unexpected array value terminator %q.", arrayValTerm)
+    case r == comma:
+        return lx.errorf("unexpected comma")
     case r == arrayEnd:
+        // NOTE(caleb): The spec isn't clear about whether you can have
+        // a trailing comma or not, so we'll allow it.
         return lexArrayEnd
     }

@@ -455,8 +489,9 @@ func lexArrayValue(lx *lexer) stateFn {
     return lexValue
 }

-// lexArrayValueEnd consumes the cruft between values of an array. Namely,
-// it ignores whitespace and expects either a ',' or a ']'.
+// lexArrayValueEnd consumes everything between the end of an array value and
+// the next value (or the end of the array): it ignores whitespace and newlines
+// and expects either a ',' or a ']'.
 func lexArrayValueEnd(lx *lexer) stateFn {
     r := lx.next()
     switch {

@@ -465,31 +500,88 @@ func lexArrayValueEnd(lx *lexer) stateFn {
     case r == commentStart:
         lx.push(lexArrayValueEnd)
         return lexCommentStart
-    case r == arrayValTerm:
+    case r == comma:
         lx.ignore()
         return lexArrayValue // move on to the next value
     case r == arrayEnd:
         return lexArrayEnd
     }
-    return lx.errorf("Expected an array value terminator %q or an array "+
-        "terminator %q, but got %q instead.", arrayValTerm, arrayEnd, r)
+    return lx.errorf(
+        "expected a comma or array terminator %q, but got %q instead",
+        arrayEnd, r,
+    )
 }

-// lexArrayEnd finishes the lexing of an array. It assumes that a ']' has
-// just been consumed.
+// lexArrayEnd finishes the lexing of an array.
+// It assumes that a ']' has just been consumed.
 func lexArrayEnd(lx *lexer) stateFn {
     lx.ignore()
     lx.emit(itemArrayEnd)
     return lx.pop()
 }

+// lexInlineTableValue consumes one key/value pair in an inline table.
+// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
+func lexInlineTableValue(lx *lexer) stateFn {
+    r := lx.next()
+    switch {
+    case isWhitespace(r):
+        return lexSkip(lx, lexInlineTableValue)
+    case isNL(r):
+        return lx.errorf("newlines not allowed within inline tables")
+    case r == commentStart:
+        lx.push(lexInlineTableValue)
+        return lexCommentStart
+    case r == comma:
+        return lx.errorf("unexpected comma")
+    case r == inlineTableEnd:
+        return lexInlineTableEnd
+    }
+    lx.backup()
+    lx.push(lexInlineTableValueEnd)
+    return lexKeyStart
+}
+
+// lexInlineTableValueEnd consumes everything between the end of an inline table
+// key/value pair and the next pair (or the end of the table):
+// it ignores whitespace and expects either a ',' or a '}'.
+func lexInlineTableValueEnd(lx *lexer) stateFn {
+    r := lx.next()
+    switch {
+    case isWhitespace(r):
+        return lexSkip(lx, lexInlineTableValueEnd)
+    case isNL(r):
+        return lx.errorf("newlines not allowed within inline tables")
+    case r == commentStart:
+        lx.push(lexInlineTableValueEnd)
+        return lexCommentStart
+    case r == comma:
+        lx.ignore()
+        return lexInlineTableValue
+    case r == inlineTableEnd:
+        return lexInlineTableEnd
+    }
+    return lx.errorf("expected a comma or an inline table terminator %q, "+
+        "but got %q instead", inlineTableEnd, r)
+}
+
+// lexInlineTableEnd finishes the lexing of an inline table.
+// It assumes that a '}' has just been consumed.
+func lexInlineTableEnd(lx *lexer) stateFn {
+    lx.ignore()
+    lx.emit(itemInlineTableEnd)
+    return lx.pop()
+}
+
 // lexString consumes the inner contents of a string. It assumes that the
 // beginning '"' has already been consumed and ignored.
 func lexString(lx *lexer) stateFn {
     r := lx.next()
     switch {
+    case r == eof:
+        return lx.errorf("unexpected EOF")
     case isNL(r):
-        return lx.errorf("Strings cannot contain new lines.")
+        return lx.errorf("strings cannot contain newlines")
     case r == '\\':
         lx.push(lexString)
         return lexStringEscape

@@ -506,11 +598,12 @@ func lexString(lx *lexer) stateFn {
 // lexMultilineString consumes the inner contents of a string. It assumes that
 // the beginning '"""' has already been consumed and ignored.
 func lexMultilineString(lx *lexer) stateFn {
-    r := lx.next()
-    switch {
-    case r == '\\':
+    switch lx.next() {
+    case eof:
+        return lx.errorf("unexpected EOF")
+    case '\\':
         return lexMultilineStringEscape
-    case r == stringEnd:
+    case stringEnd:
         if lx.accept(stringEnd) {
             if lx.accept(stringEnd) {
                 lx.backup()

@@ -534,8 +627,10 @@ func lexMultilineString(lx *lexer) stateFn {
 func lexRawString(lx *lexer) stateFn {
     r := lx.next()
     switch {
+    case r == eof:
+        return lx.errorf("unexpected EOF")
     case isNL(r):
-        return lx.errorf("Strings cannot contain new lines.")
+        return lx.errorf("strings cannot contain newlines")
     case r == rawStringEnd:
         lx.backup()
         lx.emit(itemRawString)

@@ -547,12 +642,13 @@ func lexRawString(lx *lexer) stateFn {
 }

 // lexMultilineRawString consumes a raw string. Nothing can be escaped in such
-// a string. It assumes that the beginning "'" has already been consumed and
+// a string. It assumes that the beginning "'''" has already been consumed and
 // ignored.
 func lexMultilineRawString(lx *lexer) stateFn {
-    r := lx.next()
-    switch {
-    case r == rawStringEnd:
+    switch lx.next() {
+    case eof:
+        return lx.errorf("unexpected EOF")
+    case rawStringEnd:
         if lx.accept(rawStringEnd) {
             if lx.accept(rawStringEnd) {
                 lx.backup()

@@ -605,10 +701,9 @@ func lexStringEscape(lx *lexer) stateFn {
     case 'U':
         return lexLongUnicodeEscape
     }
-    return lx.errorf("Invalid escape character %q. Only the following "+
-        "escape characters are allowed: "+
-        "\\b, \\t, \\n, \\f, \\r, \\\", \\/, \\\\, "+
-        "\\uXXXX and \\UXXXXXXXX.", r)
+    return lx.errorf("invalid escape character %q; only the following "+
+        "escape characters are allowed: "+
+        `\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
 }

 func lexShortUnicodeEscape(lx *lexer) stateFn {

@@ -616,8 +711,8 @@ func lexShortUnicodeEscape(lx *lexer) stateFn {
     for i := 0; i < 4; i++ {
         r = lx.next()
         if !isHexadecimal(r) {
-            return lx.errorf("Expected four hexadecimal digits after '\\u', "+
-                "but got '%s' instead.", lx.current())
+            return lx.errorf(`expected four hexadecimal digits after '\u', `+
+                "but got %q instead", lx.current())
         }
     }
     return lx.pop()

@@ -628,8 +723,8 @@ func lexLongUnicodeEscape(lx *lexer) stateFn {
     for i := 0; i < 8; i++ {
         r = lx.next()
         if !isHexadecimal(r) {
-            return lx.errorf("Expected eight hexadecimal digits after '\\U', "+
-                "but got '%s' instead.", lx.current())
+            return lx.errorf(`expected eight hexadecimal digits after '\U', `+
+                "but got %q instead", lx.current())
         }
     }
     return lx.pop()

@@ -647,9 +742,9 @@ func lexNumberOrDateStart(lx *lexer) stateFn {
     case 'e', 'E':
         return lexFloat
     case '.':
-        return lx.errorf("Floats must start with a digit, not '.'.")
+        return lx.errorf("floats must start with a digit, not '.'")
     }
-    return lx.errorf("Expected a digit but got %q.", r)
+    return lx.errorf("expected a digit but got %q", r)
 }

 // lexNumberOrDate consumes either an integer, float or datetime.

@@ -680,7 +775,7 @@ func lexDatetime(lx *lexer) stateFn {
         return lexDatetime
     }
     switch r {
-    case '-', 'T', ':', '.', 'Z':
+    case '-', 'T', ':', '.', 'Z', '+':
         return lexDatetime
     }

@@ -697,9 +792,9 @@ func lexNumberStart(lx *lexer) stateFn {
     r := lx.next()
     if !isDigit(r) {
         if r == '.' {
-            return lx.errorf("Floats must start with a digit, not '.'.")
+            return lx.errorf("floats must start with a digit, not '.'")
         }
-        return lx.errorf("Expected a digit but got %q.", r)
+        return lx.errorf("expected a digit but got %q", r)
     }
     return lexNumber
 }

@@ -745,7 +840,7 @@ func lexBool(lx *lexer) stateFn {
     var rs []rune
     for {
         r := lx.next()
-        if r == eof || isWhitespace(r) || isNL(r) {
+        if !unicode.IsLetter(r) {
             lx.backup()
             break
         }

@@ -757,7 +852,7 @@ func lexBool(lx *lexer) stateFn {
         lx.emit(itemBool)
         return lx.pop()
     }
-    return lx.errorf("Expected value but found %q instead.", s)
+    return lx.errorf("expected value but found %q instead", s)
 }

 // lexCommentStart begins the lexing of a comment. It will emit

@@ -769,7 +864,7 @@ func lexCommentStart(lx *lexer) stateFn {
 }

 // lexComment lexes an entire comment. It assumes that '#' has been consumed.
-// It will consume *up to* the first new line character, and pass control
+// It will consume *up to* the first newline character, and pass control
 // back to the last state on the stack.
 func lexComment(lx *lexer) stateFn {
     r := lx.peek()
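
For orientation, the main mechanical change in lex.go above is dropping the single `width int` field in favour of a `prevWidths [3]int` history plus an `nprev` counter, so `backup()` can step back across the three-rune tokens `"""` and `'''`. The following is a minimal standalone sketch of that bookkeeping, under the assumption that it is useful to see the idea outside the vendored code; the `runeScanner` type and its methods are illustrative names, not part of the library.

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// runeScanner reads runes from a string and can back up over the last few
// runes by remembering their encoded widths, mirroring the prevWidths/nprev
// bookkeeping added to the TOML lexer.
type runeScanner struct {
	input      string
	pos        int
	prevWidths [3]int // widths of the most recent runes, newest first
	nprev      int    // how many entries of prevWidths are valid
}

func (s *runeScanner) next() rune {
	if s.pos >= len(s.input) {
		return -1 // EOF sentinel
	}
	r, w := utf8.DecodeRuneInString(s.input[s.pos:])
	// Shift the width history and record the newest width at index 0.
	s.prevWidths[2], s.prevWidths[1] = s.prevWidths[1], s.prevWidths[0]
	s.prevWidths[0] = w
	if s.nprev < 3 {
		s.nprev++
	}
	s.pos += w
	return r
}

func (s *runeScanner) backup() {
	if s.nprev == 0 {
		panic("backed up too far")
	}
	// Undo the most recent next(): move pos back and drop the newest width.
	s.pos -= s.prevWidths[0]
	s.prevWidths[0], s.prevWidths[1] = s.prevWidths[1], s.prevWidths[2]
	s.nprev--
}

func main() {
	s := &runeScanner{input: `"""x`}
	a, b, c := s.next(), s.next(), s.next() // consume the three quotes
	fmt.Println(string(a), string(b), string(c))
	s.backup() // give back the third quote
	s.backup() // and the second
	fmt.Println(string(s.next()), string(s.next())) // reads them again
}
```
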
vendor/github.com/BurntSushi/toml/parse.go
@@ -269,6 +269,41 @@ func (p *parser) value(it item) (interface{}, tomlType) {
             types = append(types, typ)
         }
         return array, p.typeOfArray(types)
+    case itemInlineTableStart:
+        var (
+            hash         = make(map[string]interface{})
+            outerContext = p.context
+            outerKey     = p.currentKey
+        )
+
+        p.context = append(p.context, p.currentKey)
+        p.currentKey = ""
+        for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
+            if it.typ != itemKeyStart {
+                p.bug("Expected key start but instead found %q, around line %d",
+                    it.val, p.approxLine)
+            }
+            if it.typ == itemCommentStart {
+                p.expect(itemText)
+                continue
+            }
+
+            // retrieve key
+            k := p.next()
+            p.approxLine = k.line
+            kname := p.keyString(k)
+
+            // retrieve value
+            p.currentKey = kname
+            val, typ := p.value(p.next())
+            // make sure we keep metadata up to date
+            p.setType(kname, typ)
+            p.ordered = append(p.ordered, p.context.add(p.currentKey))
+            hash[kname] = val
+        }
+        p.context = outerContext
+        p.currentKey = outerKey
+        return hash, tomlHash
     }
     p.bug("Unexpected value type: %s", it.typ)
     panic("unreachable")
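
The parse.go hunk above is the decoder half of inline-table support: the new itemInlineTableStart/itemInlineTableEnd items produced by lex.go are collected into a map[string]interface{} and returned as a TOML hash. A small usage sketch of what the updated vendored package now accepts, assuming only the public `toml.Decode` API; the struct and key names are made up for illustration.

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// config uses an inline table for the "owner" key, which the updated
// vendored parser now accepts (TOML v0.4.0 syntax).
type config struct {
	Title string
	Owner struct {
		Name  string
		Email string
	}
}

func main() {
	const doc = `
title = "example"
owner = { name = "Tom", email = "tom@example.com" }
`
	var c config
	if _, err := toml.Decode(doc, &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s <%s>\n", c.Title, c.Owner.Name, c.Owner.Email)
}
```
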
vendor/vendor.json
@@ -2,6 +2,12 @@
     "comment": "",
     "ignore": "test",
     "package": [
+        {
+            "checksumSHA1": "Pc2ORQp+VY3Un/dkh4QwLC7R6lE=",
+            "path": "github.com/BurntSushi/toml",
+            "revision": "a368813c5e648fee92e5f6c30e3944ff9d5e8895",
+            "revisionTime": "2017-06-26T11:06:00Z"
+        },
         {
             "checksumSHA1": "spyv5/YFBjYyZLZa1U2LBfDR8PM=",
             "path": "github.com/beorn7/perks/quantile",