
Update to last common bleve (#3986)

Antoine GIRARD 2018-05-19 14:49:46 +02:00 committed by Lunny Xiao
parent 1b7cd3d0b0
commit 917b9641ec
184 changed files with 39576 additions and 121 deletions

7 vendor/github.com/philhofer/fwd/LICENSE.md generated vendored Normal file

@@ -0,0 +1,7 @@
Copyright (c) 2014-2015, Philip Hofer
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

315 vendor/github.com/philhofer/fwd/README.md generated vendored Normal file

@@ -0,0 +1,315 @@
# fwd
import "github.com/philhofer/fwd"
The `fwd` package provides a buffered reader
and writer. Each has methods that help improve
the encoding/decoding performance of some binary
protocols.
The `fwd.Writer` and `fwd.Reader` types provide similar
functionality to their counterparts in `bufio`, plus
a few extra utility methods that simplify read-ahead
and write-ahead. I wrote this package to improve serialization
performance for [github.com/tinylib/msgp](http://github.com/tinylib/msgp),
where it provided about a 2x speedup over `bufio` for certain
workloads. However, care must be taken to understand the semantics of the
extra methods provided by this package, as they allow
the user to access and manipulate the buffer memory
directly.
The extra methods for `fwd.Reader` are `Peek`, `Skip`
and `Next`. `(*fwd.Reader).Peek`, unlike `(*bufio.Reader).Peek`,
will re-allocate the read buffer in order to accommodate arbitrarily
large read-ahead. `(*fwd.Reader).Skip` skips the next `n` bytes
in the stream, and uses the `io.Seeker` interface if the underlying
stream implements it. `(*fwd.Reader).Next` returns a slice pointing
to the next `n` bytes in the read buffer (like `Peek`), but also
increments the read position. This allows users to process streams
in arbitrary block sizes without having to manage appropriately-sized
slices. Additionally, obviating the need to copy the data from the
buffer to another location in memory can improve performance dramatically
in CPU-bound applications.
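As a sketch of those semantics (the 4-byte record size is just an
assumed framing for illustration), a stream can be processed in
fixed-size blocks with `Peek` and `Next`:
``` go
package main

import (
	"bytes"
	"fmt"

	"github.com/philhofer/fwd"
)

func main() {
	r := fwd.NewReader(bytes.NewReader([]byte("aaaabbbbcccc")))

	// Peek inspects the leading bytes without consuming them.
	magic, _ := r.Peek(4)
	fmt.Printf("magic: %s\n", magic)

	for {
		// Next returns a slice into the read buffer and advances
		// the reader; the slice is only valid until the next call.
		rec, err := r.Next(4)
		if err != nil {
			// Next never returns io.EOF; an exhausted or short
			// stream surfaces as io.ErrUnexpectedEOF.
			break
		}
		fmt.Printf("record: %s\n", rec)
	}
}
```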
`fwd.Writer` has only one extra method, `(*fwd.Writer).Next`, which
returns a slice pointing to the next `n` bytes of the writer, and increments
the write position by the length of the returned slice. This allows users
to write directly to the end of the buffer.
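A minimal sketch of that pattern (the `putU32` helper is hypothetical,
not part of `fwd`): reserving space with `Next` and encoding into it
directly avoids the temporary slice a plain `Write` would need.
``` go
package main

import (
	"encoding/binary"
	"os"

	"github.com/philhofer/fwd"
)

// putU32 is a hypothetical helper: it reserves 4 bytes at the end
// of the write buffer and encodes a big-endian uint32 into them.
func putU32(w *fwd.Writer, v uint32) error {
	b, err := w.Next(4) // flushes first if fewer than 4 bytes are free
	if err != nil {
		return err
	}
	binary.BigEndian.PutUint32(b, v)
	return nil
}

func main() {
	w := fwd.NewWriter(os.Stdout)
	if err := putU32(w, 0xdeadbeef); err != nil {
		panic(err)
	}
	_ = w.Flush() // buffered bytes only reach os.Stdout on Flush
}
```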
## Constants
``` go
const (
// DefaultReaderSize is the default size of the read buffer
DefaultReaderSize = 2048
)
```
``` go
const (
// DefaultWriterSize is the
// default write buffer size.
DefaultWriterSize = 2048
)
```
## type Reader
``` go
type Reader struct {
// contains filtered or unexported fields
}
```
Reader is a buffered look-ahead reader
### func NewReader
``` go
func NewReader(r io.Reader) *Reader
```
NewReader returns a new *Reader that reads from 'r'
### func NewReaderSize
``` go
func NewReaderSize(r io.Reader, n int) *Reader
```
NewReaderSize returns a new *Reader that
reads from 'r' and has a buffer size 'n'
### func (\*Reader) BufferSize
``` go
func (r *Reader) BufferSize() int
```
BufferSize returns the total size of the buffer
### func (\*Reader) Buffered
``` go
func (r *Reader) Buffered() int
```
Buffered returns the number of bytes currently in the buffer
### func (\*Reader) Next
``` go
func (r *Reader) Next(n int) ([]byte, error)
```
Next returns the next 'n' bytes in the stream.
Unlike Peek, Next advances the reader position.
The returned bytes point to the same
data as the buffer, so the slice is
only valid until the next reader method call.
An EOF is considered an unexpected error.
If the returned slice is shorter than the
length asked for, an error will be returned,
and the reader position will not be incremented.
### func (\*Reader) Peek
``` go
func (r *Reader) Peek(n int) ([]byte, error)
```
Peek returns the next 'n' buffered bytes,
reading from the underlying reader if necessary.
It will only return a slice shorter than 'n' bytes
if it also returns an error. Peek does not advance
the reader. EOF errors are *not* returned as
io.ErrUnexpectedEOF.
### func (\*Reader) Read
``` go
func (r *Reader) Read(b []byte) (int, error)
```
Read implements `io.Reader`
### func (\*Reader) ReadByte
``` go
func (r *Reader) ReadByte() (byte, error)
```
ReadByte implements `io.ByteReader`
### func (\*Reader) ReadFull
``` go
func (r *Reader) ReadFull(b []byte) (int, error)
```
ReadFull attempts to read len(b) bytes into
'b'. It returns the number of bytes read into
'b', and an error if it does not return len(b).
EOF is considered an unexpected error.
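For instance (a sketch; the 8-byte header is an assumed framing),
`ReadFull` is the right call when a caller-owned buffer must be
filled completely:
``` go
package main

import (
	"fmt"
	"strings"

	"github.com/philhofer/fwd"
)

func main() {
	r := fwd.NewReader(strings.NewReader("8 bytes!payload"))
	hdr := make([]byte, 8)
	// ReadFull never returns io.EOF; a stream shorter than
	// len(hdr) surfaces as io.ErrUnexpectedEOF.
	if _, err := r.ReadFull(hdr); err != nil {
		panic(err)
	}
	fmt.Printf("header: %q\n", hdr)
}
```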
### func (\*Reader) Reset
``` go
func (r *Reader) Reset(rd io.Reader)
```
Reset resets the underlying reader
and the read buffer.
### func (\*Reader) Skip
``` go
func (r *Reader) Skip(n int) (int, error)
```
Skip moves the reader forward 'n' bytes.
Returns the number of bytes skipped and any
errors encountered. It is analogous to Seek(n, 1).
If the underlying reader implements io.Seeker, then
that method will be used to skip forward.
If the reader encounters
an EOF before skipping 'n' bytes, it
returns io.ErrUnexpectedEOF. If the
underlying reader implements io.Seeker, then
those rules apply instead. (Many implementations
will not return `io.EOF` until the next call
to Read.)
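A sketch of the seekable case ("data.bin" and its assumed 16-byte
header are illustrative only); because `*os.File` implements
`io.Seeker`, the skip becomes a seek rather than a read-and-discard:
``` go
package main

import (
	"fmt"
	"os"

	"github.com/philhofer/fwd"
)

func main() {
	f, err := os.Open("data.bin") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	r := fwd.NewReader(f) // *os.File is an io.Seeker, so Skip seeks
	if _, err := r.Skip(16); err != nil {
		panic(err)
	}
	b, err := r.ReadByte() // first byte after the skipped header
	if err == nil {
		fmt.Printf("first payload byte: %#x\n", b)
	}
}
```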
### func (\*Reader) WriteTo
``` go
func (r *Reader) WriteTo(w io.Writer) (int64, error)
```
WriteTo implements `io.WriterTo`
## type Writer
``` go
type Writer struct {
// contains filtered or unexported fields
}
```
Writer is a buffered writer
### func NewWriter
``` go
func NewWriter(w io.Writer) *Writer
```
NewWriter returns a new writer
that writes to 'w' and has a buffer
that is `DefaultWriterSize` bytes.
### func NewWriterSize
``` go
func NewWriterSize(w io.Writer, size int) *Writer
```
NewWriterSize returns a new writer
that writes to 'w' and has a buffer
that is 'size' bytes.
### func (\*Writer) BufferSize
``` go
func (w *Writer) BufferSize() int
```
BufferSize returns the maximum size of the buffer.
### func (\*Writer) Buffered
``` go
func (w *Writer) Buffered() int
```
Buffered returns the number of buffered bytes
in the writer.
### func (\*Writer) Flush
``` go
func (w *Writer) Flush() error
```
Flush flushes any buffered bytes
to the underlying writer.
### func (\*Writer) Next
``` go
func (w *Writer) Next(n int) ([]byte, error)
```
Next returns the next 'n' free bytes
in the write buffer, flushing the writer
as necessary. Next will return `io.ErrShortBuffer`
if 'n' is greater than the size of the write buffer.
Calls to Next increment the write position by
the size of the returned buffer.
### func (\*Writer) ReadFrom
``` go
func (w *Writer) ReadFrom(r io.Reader) (int64, error)
```
ReadFrom implements `io.ReaderFrom`
### func (\*Writer) Write
``` go
func (w *Writer) Write(p []byte) (int, error)
```
Write implements `io.Writer`
### func (\*Writer) WriteByte
``` go
func (w *Writer) WriteByte(b byte) error
```
WriteByte implements `io.ByteWriter`
### func (\*Writer) WriteString
``` go
func (w *Writer) WriteString(s string) (int, error)
```
WriteString is analogous to Write, but it takes a string.
- - -
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)

383 vendor/github.com/philhofer/fwd/reader.go generated vendored Normal file

@@ -0,0 +1,383 @@
// The `fwd` package provides a buffered reader
// and writer. Each has methods that help improve
// the encoding/decoding performance of some binary
// protocols.
//
// The `fwd.Writer` and `fwd.Reader` types provide similar
// functionality to their counterparts in `bufio`, plus
// a few extra utility methods that simplify read-ahead
// and write-ahead. I wrote this package to improve serialization
// performance for http://github.com/tinylib/msgp,
// where it provided about a 2x speedup over `bufio` for certain
// workloads. However, care must be taken to understand the semantics of the
// extra methods provided by this package, as they allow
// the user to access and manipulate the buffer memory
// directly.
//
// The extra methods for `fwd.Reader` are `Peek`, `Skip`
// and `Next`. `(*fwd.Reader).Peek`, unlike `(*bufio.Reader).Peek`,
// will re-allocate the read buffer in order to accommodate arbitrarily
// large read-ahead. `(*fwd.Reader).Skip` skips the next `n` bytes
// in the stream, and uses the `io.Seeker` interface if the underlying
// stream implements it. `(*fwd.Reader).Next` returns a slice pointing
// to the next `n` bytes in the read buffer (like `Peek`), but also
// increments the read position. This allows users to process streams
// in arbitrary block sizes without having to manage appropriately-sized
// slices. Additionally, obviating the need to copy the data from the
// buffer to another location in memory can improve performance dramatically
// in CPU-bound applications.
//
// `fwd.Writer` has only one extra method, `(*fwd.Writer).Next`, which
// returns a slice pointing to the next `n` bytes of the writer, and increments
// the write position by the length of the returned slice. This allows users
// to write directly to the end of the buffer.
//
package fwd
import "io"
const (
// DefaultReaderSize is the default size of the read buffer
DefaultReaderSize = 2048
// minimum read buffer; straight from bufio
minReaderSize = 16
)
// NewReader returns a new *Reader that reads from 'r'
func NewReader(r io.Reader) *Reader {
return NewReaderSize(r, DefaultReaderSize)
}
// NewReaderSize returns a new *Reader that
// reads from 'r' and has a buffer size 'n'
func NewReaderSize(r io.Reader, n int) *Reader {
rd := &Reader{
r: r,
data: make([]byte, 0, max(minReaderSize, n)),
}
if s, ok := r.(io.Seeker); ok {
rd.rs = s
}
return rd
}
// Reader is a buffered look-ahead reader
type Reader struct {
r io.Reader // underlying reader
// data[n:len(data)] is buffered data; data[len(data):cap(data)] is free buffer space
data []byte // data
n int // read offset
state error // last read error
// if the reader passed to NewReader was
// also an io.Seeker, this is non-nil
rs io.Seeker
}
// Reset resets the underlying reader
// and the read buffer.
func (r *Reader) Reset(rd io.Reader) {
r.r = rd
r.data = r.data[0:0]
r.n = 0
r.state = nil
if s, ok := rd.(io.Seeker); ok {
r.rs = s
} else {
r.rs = nil
}
}
// more() does one read on the underlying reader
func (r *Reader) more() {
// move data backwards so that
// the read offset is 0; this way
// we can supply the maximum number of
// bytes to the reader
if r.n != 0 {
if r.n < len(r.data) {
r.data = r.data[:copy(r.data[0:], r.data[r.n:])]
} else {
r.data = r.data[:0]
}
r.n = 0
}
var a int
a, r.state = r.r.Read(r.data[len(r.data):cap(r.data)])
if a == 0 && r.state == nil {
r.state = io.ErrNoProgress
return
} else if a > 0 && r.state == io.EOF {
// discard the io.EOF if we read more than 0 bytes.
// the next call to Read should return io.EOF again.
r.state = nil
}
r.data = r.data[:len(r.data)+a]
}
// pop error
func (r *Reader) err() (e error) {
e, r.state = r.state, nil
return
}
// pop error; EOF -> io.ErrUnexpectedEOF
func (r *Reader) noEOF() (e error) {
e, r.state = r.state, nil
if e == io.EOF {
e = io.ErrUnexpectedEOF
}
return
}
// buffered bytes
func (r *Reader) buffered() int { return len(r.data) - r.n }
// Buffered returns the number of bytes currently in the buffer
func (r *Reader) Buffered() int { return len(r.data) - r.n }
// BufferSize returns the total size of the buffer
func (r *Reader) BufferSize() int { return cap(r.data) }
// Peek returns the next 'n' buffered bytes,
// reading from the underlying reader if necessary.
// It will only return a slice shorter than 'n' bytes
// if it also returns an error. Peek does not advance
// the reader. EOF errors are *not* returned as
// io.ErrUnexpectedEOF.
func (r *Reader) Peek(n int) ([]byte, error) {
// in the degenerate case,
// we may need to realloc
// (the caller asked for more
// bytes than the size of the buffer)
if cap(r.data) < n {
old := r.data[r.n:]
r.data = make([]byte, n+r.buffered())
r.data = r.data[:copy(r.data, old)]
r.n = 0
}
// keep filling until
// we hit an error or
// read enough bytes
for r.buffered() < n && r.state == nil {
r.more()
}
// we must have hit an error
if r.buffered() < n {
return r.data[r.n:], r.err()
}
return r.data[r.n : r.n+n], nil
}
// Skip moves the reader forward 'n' bytes.
// Returns the number of bytes skipped and any
// errors encountered. It is analogous to Seek(n, 1).
// If the underlying reader implements io.Seeker, then
// that method will be used to skip forward.
//
// If the reader encounters
// an EOF before skipping 'n' bytes, it
// returns io.ErrUnexpectedEOF. If the
// underlying reader implements io.Seeker, then
// those rules apply instead. (Many implementations
// will not return `io.EOF` until the next call
// to Read.)
func (r *Reader) Skip(n int) (int, error) {
// fast path
if r.buffered() >= n {
r.n += n
return n, nil
}
// use seeker implementation
// if we can
if r.rs != nil {
return r.skipSeek(n)
}
// loop on filling
// and then erasing
o := n
for r.buffered() < n && r.state == nil {
r.more()
// we can skip forward
// up to r.buffered() bytes
step := min(r.buffered(), n)
r.n += step
n -= step
}
// at this point, n should be
// 0 if everything went smoothly
return o - n, r.noEOF()
}
// Next returns the next 'n' bytes in the stream.
// Unlike Peek, Next advances the reader position.
// The returned bytes point to the same
// data as the buffer, so the slice is
// only valid until the next reader method call.
// An EOF is considered an unexpected error.
// If the returned slice is shorter than the
// length asked for, an error will be returned,
// and the reader position will not be incremented.
func (r *Reader) Next(n int) ([]byte, error) {
// in case the buffer is too small
if cap(r.data) < n {
old := r.data[r.n:]
r.data = make([]byte, n+r.buffered())
r.data = r.data[:copy(r.data, old)]
r.n = 0
}
// fill at least 'n' bytes
for r.buffered() < n && r.state == nil {
r.more()
}
if r.buffered() < n {
return r.data[r.n:], r.noEOF()
}
out := r.data[r.n : r.n+n]
r.n += n
return out, nil
}
// skipSeek uses the io.Seeker to seek forward.
// only call this function when n > r.buffered()
func (r *Reader) skipSeek(n int) (int, error) {
o := r.buffered()
// first, clear buffer
n -= o
r.n = 0
r.data = r.data[:0]
// then seek forward remaining bytes
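// (whence 1 is io.SeekCurrent)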
i, err := r.rs.Seek(int64(n), 1)
return int(i) + o, err
}
// Read implements `io.Reader`
func (r *Reader) Read(b []byte) (int, error) {
// if we have data in the buffer, just
// return that.
if r.buffered() != 0 {
x := copy(b, r.data[r.n:])
r.n += x
return x, nil
}
var n int
// we have no buffered data; determine
// whether or not to buffer or call
// the underlying reader directly
if len(b) >= cap(r.data) {
n, r.state = r.r.Read(b)
} else {
r.more()
n = copy(b, r.data)
r.n = n
}
if n == 0 {
return 0, r.err()
}
return n, nil
}
// ReadFull attempts to read len(b) bytes into
// 'b'. It returns the number of bytes read into
// 'b', and an error if it does not return len(b).
// EOF is considered an unexpected error.
func (r *Reader) ReadFull(b []byte) (int, error) {
var n int // read into b
var nn int // scratch
l := len(b)
// either read buffered data,
// or read directly from the underlying
// reader, or fetch more buffered data.
for n < l && r.state == nil {
if r.buffered() != 0 {
nn = copy(b[n:], r.data[r.n:])
n += nn
r.n += nn
} else if l-n > cap(r.data) {
nn, r.state = r.r.Read(b[n:])
n += nn
} else {
r.more()
}
}
if n < l {
return n, r.noEOF()
}
return n, nil
}
// ReadByte implements `io.ByteReader`
func (r *Reader) ReadByte() (byte, error) {
for r.buffered() < 1 && r.state == nil {
r.more()
}
if r.buffered() < 1 {
return 0, r.err()
}
b := r.data[r.n]
r.n++
return b, nil
}
// WriteTo implements `io.WriterTo`
func (r *Reader) WriteTo(w io.Writer) (int64, error) {
var (
i int64
ii int
err error
)
// first, clear buffer
if r.buffered() > 0 {
ii, err = w.Write(r.data[r.n:])
i += int64(ii)
if err != nil {
return i, err
}
r.data = r.data[0:0]
r.n = 0
}
for r.state == nil {
// here we just do
// 1:1 reads and writes
r.more()
if r.buffered() > 0 {
ii, err = w.Write(r.data)
i += int64(ii)
if err != nil {
return i, err
}
r.data = r.data[0:0]
r.n = 0
}
}
if r.state != io.EOF {
return i, r.err()
}
return i, nil
}
func min(a int, b int) int {
if a < b {
return a
}
return b
}
func max(a int, b int) int {
if a < b {
return b
}
return a
}

224 vendor/github.com/philhofer/fwd/writer.go generated vendored Normal file

@@ -0,0 +1,224 @@
package fwd
import "io"
const (
// DefaultWriterSize is the
// default write buffer size.
DefaultWriterSize = 2048
minWriterSize = minReaderSize
)
// Writer is a buffered writer
type Writer struct {
w io.Writer // writer
buf []byte // 0:len(buf) is buffered data
}
// NewWriter returns a new writer
// that writes to 'w' and has a buffer
// that is `DefaultWriterSize` bytes.
func NewWriter(w io.Writer) *Writer {
if wr, ok := w.(*Writer); ok {
return wr
}
return &Writer{
w: w,
buf: make([]byte, 0, DefaultWriterSize),
}
}
// NewWriterSize returns a new writer
// that writes to 'w' and has a buffer
// that is 'size' bytes.
func NewWriterSize(w io.Writer, size int) *Writer {
if wr, ok := w.(*Writer); ok && cap(wr.buf) >= size {
return wr
}
return &Writer{
w: w,
buf: make([]byte, 0, max(size, minWriterSize)),
}
}
// Buffered returns the number of buffered bytes
// in the writer.
func (w *Writer) Buffered() int { return len(w.buf) }
// BufferSize returns the maximum size of the buffer.
func (w *Writer) BufferSize() int { return cap(w.buf) }
// Flush flushes any buffered bytes
// to the underlying writer.
func (w *Writer) Flush() error {
l := len(w.buf)
if l > 0 {
n, err := w.w.Write(w.buf)
// if we didn't write the whole
// thing, copy the unwritten
// bytes to the beginning of the
// buffer.
if n < l && n > 0 {
w.pushback(n)
if err == nil {
err = io.ErrShortWrite
}
}
if err != nil {
return err
}
w.buf = w.buf[:0]
return nil
}
return nil
}
// Write implements `io.Writer`
func (w *Writer) Write(p []byte) (int, error) {
c, l, ln := cap(w.buf), len(w.buf), len(p)
avail := c - l
// requires flush
if avail < ln {
if err := w.Flush(); err != nil {
return 0, err
}
l = len(w.buf)
}
// too big to fit in buffer;
// write directly to w.w
if c < ln {
return w.w.Write(p)
}
// grow buf slice; copy; return
w.buf = w.buf[:l+ln]
return copy(w.buf[l:], p), nil
}
// WriteString is analogous to Write, but it takes a string.
func (w *Writer) WriteString(s string) (int, error) {
c, l, ln := cap(w.buf), len(w.buf), len(s)
avail := c - l
// requires flush
if avail < ln {
if err := w.Flush(); err != nil {
return 0, err
}
l = len(w.buf)
}
// too big to fit in buffer;
// write directly to w.w
//
// yes, this is unsafe. *but*
// io.Writer is not allowed
// to mutate its input or
// maintain a reference to it,
// per the spec in package io.
//
// plus, if the string is really
// too big to fit in the buffer, then
// creating a copy to write it is
// expensive (and, strictly speaking,
// unnecessary)
if c < ln {
return w.w.Write(unsafestr(s))
}
// grow buf slice; copy; return
w.buf = w.buf[:l+ln]
return copy(w.buf[l:], s), nil
}
// WriteByte implements `io.ByteWriter`
func (w *Writer) WriteByte(b byte) error {
if len(w.buf) == cap(w.buf) {
if err := w.Flush(); err != nil {
return err
}
}
w.buf = append(w.buf, b)
return nil
}
// Next returns the next 'n' free bytes
// in the write buffer, flushing the writer
// as necessary. Next will return `io.ErrShortBuffer`
// if 'n' is greater than the size of the write buffer.
// Calls to Next increment the write position by
// the size of the returned buffer.
func (w *Writer) Next(n int) ([]byte, error) {
c, l := cap(w.buf), len(w.buf)
if n > c {
return nil, io.ErrShortBuffer
}
avail := c - l
if avail < n {
if err := w.Flush(); err != nil {
return nil, err
}
l = len(w.buf)
}
w.buf = w.buf[:l+n]
return w.buf[l:], nil
}
// take the bytes from w.buf[n:len(w.buf)]
// and put them at the beginning of w.buf,
// and resize to the length of the copied segment.
func (w *Writer) pushback(n int) {
w.buf = w.buf[:copy(w.buf, w.buf[n:])]
}
// ReadFrom implements `io.ReaderFrom`
func (w *Writer) ReadFrom(r io.Reader) (int64, error) {
// anticipatory flush
if err := w.Flush(); err != nil {
return 0, err
}
w.buf = w.buf[0:cap(w.buf)] // expand buffer
var nn int64 // written
var err error // error
var x int // read
// 1:1 reads and writes
for err == nil {
x, err = r.Read(w.buf)
if x > 0 {
n, werr := w.w.Write(w.buf[:x])
nn += int64(n)
if err != nil {
if n < x && n > 0 {
w.pushback(n - x)
}
return nn, werr
}
if n < x {
w.pushback(n - x)
return nn, io.ErrShortWrite
}
} else if err == nil {
err = io.ErrNoProgress
break
}
}
if err != io.EOF {
return nn, err
}
// we only clear here
// because we are sure
// the writes have
// succeeded. otherwise,
// we retain the data in case
// future writes succeed.
w.buf = w.buf[0:0]
return nn, nil
}

18 vendor/github.com/philhofer/fwd/writer_unsafe.go generated vendored Normal file

@@ -0,0 +1,18 @@
// +build !appengine
package fwd
import (
"reflect"
"unsafe"
)
// unsafe cast string as []byte
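// the returned slice aliases the string's backing memory, so it
// must never be written to; this avoids the allocation and copy
// of a plain []byte(b) conversion. the !appengine build tag above
// suggests a safe, copying fallback is used for appengine builds.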
func unsafestr(b string) []byte {
l := len(b)
return *(*[]byte)(unsafe.Pointer(&reflect.SliceHeader{
Len: l,
Cap: l,
Data: (*reflect.StringHeader)(unsafe.Pointer(&b)).Data,
}))
}