
Vendor Update Go Libs (#13166)

* update github.com/alecthomas/chroma v0.8.0 -> v0.8.1

* github.com/blevesearch/bleve v1.0.10 -> v1.0.12

* editorconfig-core-go v2.1.1 -> v2.3.7

* github.com/gliderlabs/ssh v0.2.2 -> v0.3.1

* migrate editorconfig.ParseBytes to Parse

* github.com/shurcooL/vfsgen to 0d455de96546

* github.com/go-git/go-git/v5 v5.1.0 -> v5.2.0

* github.com/google/uuid v1.1.1 -> v1.1.2

* github.com/huandu/xstrings v1.3.0 -> v1.3.2

* github.com/klauspost/compress v1.10.11 -> v1.11.1

* github.com/markbates/goth v1.61.2 -> v1.65.0

* github.com/mattn/go-sqlite3 v1.14.0 -> v1.14.4

* github.com/mholt/archiver v3.3.0 -> v3.3.2

* github.com/microcosm-cc/bluemonday 4f7140c49acb -> v1.0.4

* github.com/minio/minio-go v7.0.4 -> v7.0.5

* github.com/olivere/elastic v7.0.9 -> v7.0.20

* github.com/urfave/cli v1.20.0 -> v1.22.4

* github.com/prometheus/client_golang v1.1.0 -> v1.8.0

* github.com/xanzy/go-gitlab v0.37.0 -> v0.38.1

* mvdan.cc/xurls v2.1.0 -> v2.2.0

Co-authored-by: Lauris BH <lauris@nix.lv>
6543 2020-10-16 07:06:27 +02:00 committed by GitHub
parent 91f2afdb54
commit 12a1f914f4
656 changed files with 52967 additions and 25229 deletions


@ -1,6 +1,4 @@
-# Run only staticcheck for now. Additional linters will be enabled one-by-one.
+---
 linters:
   enable:
-  - staticcheck
-  - govet
-  disable-all: true
+  - golint


@ -0,0 +1,3 @@
## Prometheus Community Code of Conduct
Prometheus follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).


@ -2,17 +2,120 @@
Prometheus uses GitHub to manage reviews of pull requests.
* If you are a new contributor see: [Steps to Contribute](#steps-to-contribute)
* If you have a trivial fix or improvement, go ahead and create a pull request,
addressing (with `@...`) the maintainer of this repository (see
addressing (with `@...`) a suitable maintainer of this repository (see
[MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.
* If you plan to do something more involved, first discuss your ideas
on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
This will avoid unnecessary work and surely give you and us a good deal
of inspiration.
of inspiration. Also please see our [non-goals issue](https://github.com/prometheus/docs/issues/149) on areas that the Prometheus community doesn't plan to work on.
* Relevant coding style guidelines are the [Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).
Environments](https://peter.bourgon.org/go-in-production/#formatting-and-style).
* Be sure to sign off on the [DCO](https://github.com/probot/dco#how-it-works)
## Steps to Contribute
Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue.
Please check the [`help-wanted`](https://github.com/prometheus/procfs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) label to find issues that are good for getting started. If you have questions about one of the issues, with or without the tag, please comment on them and one of the maintainers will clarify it. For a quicker response, contact us over [IRC](https://prometheus.io/community).
For quickly compiling and testing your changes do:
```
make test # Make sure all the tests pass before you commit and push :)
```
We use [`golangci-lint`](https://github.com/golangci/golangci-lint) for linting the code. If it reports an issue and you think that the warning needs to be disregarded or is a false-positive, you can add a special comment `//nolint:linter1[,linter2,...]` before the offending line. Use this sparingly though, fixing the code to comply with the linter's recommendation is in general the preferred course of action.
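For illustration only (hypothetical code, not from this repository), such a directive might be used to silence a single `gosec` finding on the line that follows it:
```
package example

import "math/rand"

// RandomJitter deliberately uses math/rand rather than crypto/rand;
// the gosec warning (G404: weak random number generator) is suppressed
// only for the offending line below.
func RandomJitter() int {
	//nolint:gosec
	return rand.Intn(100)
}
```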
## Pull Request Checklist
* Branch from the master branch and, if needed, rebase to the current master branch before submitting your pull request. If it doesn't merge cleanly with master you may be asked to rebase your changes.
* Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).
* If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review on IRC channel [#prometheus](https://webchat.freenode.net/?channels=#prometheus) on irc.freenode.net (for the easiest start, [join via Riot](https://riot.im/app/#/room/#prometheus:matrix.org)).
* Add tests relevant to the fixed bug or new feature.
## Dependency management
The Prometheus project uses [Go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) to manage dependencies on external packages. This requires a working Go environment with version 1.12 or greater installed.
All dependencies are vendored in the `vendor/` directory.
To add or update a new dependency, use the `go get` command:
```bash
# Pick the latest tagged release.
go get example.com/some/module/pkg
# Pick a specific version.
go get example.com/some/module/pkg@vX.Y.Z
```
Tidy up the `go.mod` and `go.sum` files and copy the new/updated dependency to the `vendor/` directory:
```bash
# The GO111MODULE variable can be omitted when the code isn't located in GOPATH.
GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
```
You have to commit the changes to `go.mod`, `go.sum` and the `vendor/` directory before submitting the pull request.
## API Implementation Guidelines
### Naming and Documentation
Public functions and structs should normally be named according to the file(s) being read and parsed. For example,
the `fs.BuddyInfo()` function reads the file `/proc/buddyinfo`. In addition, the godoc for each public function
should contain the path to the file(s) being read and a URL of the linux kernel documentation describing the file(s).
### Reading vs. Parsing
Most functionality in this library consists of reading files and then parsing the text into structured data. In most
cases reading and parsing should be separated into different functions/methods with a public `fs.Thing()` method and
a private `parseThing(r Reader)` function. This provides a logical separation and allows parsing to be tested
directly without the need to read from the filesystem. Using a `Reader` argument is preferred over other data types
such as `string` or `*File` because it provides the most flexibility regarding the data source. When a set of files
in a directory needs to be parsed, then a `path` string parameter to the parse function can be used instead.
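As a sketch of that split — using a hypothetical `Uptime` type and standalone functions that are not part of this library's API — the pattern might look like:
```
package procfsdemo

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strconv"
	"strings"
)

// Uptime mirrors the file /proc/uptime that it is parsed from.
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
type Uptime struct {
	Up   float64 // seconds since boot
	Idle float64 // seconds spent idle, summed over all CPUs
}

// ReadUptime reads /proc/uptime and parses its contents.
func ReadUptime() (Uptime, error) {
	f, err := os.Open("/proc/uptime")
	if err != nil {
		return Uptime{}, err
	}
	defer f.Close()
	return parseUptime(f)
}

// parseUptime takes an io.Reader so that tests can supply fixture data
// directly, without touching the filesystem.
func parseUptime(r io.Reader) (Uptime, error) {
	s := bufio.NewScanner(r)
	if !s.Scan() {
		return Uptime{}, fmt.Errorf("empty uptime data")
	}
	fields := strings.Fields(s.Text())
	if len(fields) < 2 {
		return Uptime{}, fmt.Errorf("malformed uptime line: %q", s.Text())
	}
	up, err := strconv.ParseFloat(fields[0], 64)
	if err != nil {
		return Uptime{}, err
	}
	idle, err := strconv.ParseFloat(fields[1], 64)
	if err != nil {
		return Uptime{}, err
	}
	return Uptime{Up: up, Idle: idle}, nil
}
```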
### /proc and /sys filesystem I/O
The `proc` and `sys` filesystems are pseudo file systems and work a bit differently from standard disk I/O.
Many of the files are changing continuously and the data being read can in some cases change between subsequent
reads in the same file. Also, most of the files are relatively small (less than a few KBs), and system calls
to the `stat` function will often return the wrong size. Therefore, for most files it's recommended to read the
full file in a single operation using an internal utility function called `util.ReadFileNoStat`.
This function is similar to `ioutil.ReadFile`, but it avoids the system call to `stat` to get the current size of
the file.
Note that parsing the file's contents can still be performed one line at a time. This is done by first reading
the full file, and then using a scanner on the `[]byte` or `string` containing the data.
```
data, err := util.ReadFileNoStat("/proc/cpuinfo")
if err != nil {
return err
}
reader := bytes.NewReader(data)
scanner := bufio.NewScanner(reader)
```
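Continuing that sketch, the pre-read contents can then be consumed line by line; the loop body here is a hypothetical placeholder:
```
for scanner.Scan() {
	line := scanner.Text()
	// handle one line of /proc/cpuinfo here, e.g. split it on ":"
	_ = line
}
if err := scanner.Err(); err != nil {
	return err
}
```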
The `/sys` filesystem contains many very small files which contain only a single numeric or text value. These files
can be read using an internal function called `util.SysReadFile` which is similar to `ioutil.ReadFile` but does
not bother to check the size of the file before reading.
```
data, err := util.SysReadFile("/sys/class/power_supply/BAT0/capacity")
```


@ -69,12 +69,21 @@ else
GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH)
endif
PROMU_VERSION ?= 0.4.0
GOTEST := $(GO) test
GOTEST_DIR :=
ifneq ($(CIRCLE_JOB),)
ifneq ($(shell which gotestsum),)
GOTEST_DIR := test-results
GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml --
endif
endif
PROMU_VERSION ?= 0.5.0
PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz
GOLANGCI_LINT :=
GOLANGCI_LINT_OPTS ?=
GOLANGCI_LINT_VERSION ?= v1.16.0
GOLANGCI_LINT_VERSION ?= v1.18.0
# golangci-lint only supports linux, darwin and windows platforms on i386/amd64.
# windows isn't included here because of the path separator being different.
ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin))
@ -86,7 +95,8 @@ endif
PREFIX ?= $(shell pwd)
BIN_DIR ?= $(shell pwd)
DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
DOCKERFILE_PATH ?= ./
DOCKERFILE_PATH ?= ./Dockerfile
DOCKERBUILD_CONTEXT ?= ./
DOCKER_REPO ?= prom
DOCKER_ARCHS ?= amd64
@ -140,15 +150,29 @@ else
$(GO) get $(GOOPTS) -t ./...
endif
.PHONY: update-go-deps
update-go-deps:
@echo ">> updating Go dependencies"
@for m in $$($(GO) list -mod=readonly -m -f '{{ if and (not .Indirect) (not .Main)}}{{.Path}}{{end}}' all); do \
$(GO) get $$m; \
done
GO111MODULE=$(GO111MODULE) $(GO) mod tidy
ifneq (,$(wildcard vendor))
GO111MODULE=$(GO111MODULE) $(GO) mod vendor
endif
.PHONY: common-test-short
common-test-short:
common-test-short: $(GOTEST_DIR)
@echo ">> running short tests"
GO111MODULE=$(GO111MODULE) $(GO) test -short $(GOOPTS) $(pkgs)
GO111MODULE=$(GO111MODULE) $(GOTEST) -short $(GOOPTS) $(pkgs)
.PHONY: common-test
common-test:
common-test: $(GOTEST_DIR)
@echo ">> running all tests"
GO111MODULE=$(GO111MODULE) $(GO) test $(test-flags) $(GOOPTS) $(pkgs)
GO111MODULE=$(GO111MODULE) $(GOTEST) $(test-flags) $(GOOPTS) $(pkgs)
$(GOTEST_DIR):
@mkdir -p $@
.PHONY: common-format
common-format:
@ -200,7 +224,7 @@ endif
.PHONY: common-build
common-build: promu
@echo ">> building binaries"
GO111MODULE=$(GO111MODULE) $(PROMU) build --prefix $(PREFIX)
GO111MODULE=$(GO111MODULE) $(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES)
.PHONY: common-tarball
common-tarball: promu
@ -211,9 +235,10 @@ common-tarball: promu
common-docker: $(BUILD_DOCKER_ARCHS)
$(BUILD_DOCKER_ARCHS): common-docker-%:
docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \
-f $(DOCKERFILE_PATH) \
--build-arg ARCH="$*" \
--build-arg OS="linux" \
$(DOCKERFILE_PATH)
$(DOCKERBUILD_CONTEXT)
.PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS)
common-docker-publish: $(PUBLISH_DOCKER_ARCHS)


@ -1,6 +1,6 @@
# procfs
This procfs package provides functions to retrieve system, kernel and process
This package provides functions to retrieve system, kernel, and process
metrics from the pseudo-filesystems /proc and /sys.
*WARNING*: This package is a work in progress. Its API may still break in
@ -13,7 +13,8 @@ backwards-incompatible ways without warnings. Use it at your own risk.
## Usage
The procfs library is organized by packages based on whether the gathered data is coming from
/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc, /sys, or both. For example, current cpu statistics are gathered from
/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc,
/sys, or both. For example, cpu statistics are gathered from
`/proc/stat` and are available via the root procfs package. First, the proc filesystem mount
point is initialized, and then the stat information is read.
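A minimal sketch of that sequence (error checks elided for brevity):
```
fs, err := procfs.NewFS("/proc")
stats, err := fs.Stat()
```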
@ -29,10 +30,17 @@ Some sub-packages such as `blockdevice`, require access to both the proc and sys
stats, err := fs.ProcDiskstats()
```
## Package Organization
The packages in this project are organized according to (1) whether the data comes from the `/proc` or
`/sys` filesystem and (2) the type of information being retrieved. For example, most process information
can be gathered from the functions in the root `procfs` package. Information about block devices such as disk drives
is available in the `blockdevice` sub-package.
## Building and Testing
The procfs library is normally built as part of another application. However, when making
changes to the library, the `make test` command can be used to run the API test suite.
The procfs library is intended to be built as part of another application, so there are no distributable binaries.
However, most of the API includes unit tests which can be run with `make test`.
### Updating Test Fixtures


@ -31,7 +31,7 @@ type BuddyInfo struct {
Sizes []float64
}
// NewBuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem.
// BuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem.
func (fs FS) BuddyInfo() ([]BuddyInfo, error) {
file, err := os.Open(fs.proc.Path("buddyinfo"))
if err != nil {

vendor/github.com/prometheus/procfs/cpuinfo.go generated vendored Normal file

@ -0,0 +1,464 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package procfs
import (
"bufio"
"bytes"
"errors"
"regexp"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// CPUInfo contains general information about a system CPU found in /proc/cpuinfo
type CPUInfo struct {
Processor uint
VendorID string
CPUFamily string
Model string
ModelName string
Stepping string
Microcode string
CPUMHz float64
CacheSize string
PhysicalID string
Siblings uint
CoreID string
CPUCores uint
APICID string
InitialAPICID string
FPU string
FPUException string
CPUIDLevel uint
WP string
Flags []string
Bugs []string
BogoMips float64
CLFlushSize uint
CacheAlignment uint
AddressSizes string
PowerManagement string
}
var (
cpuinfoClockRegexp = regexp.MustCompile(`([\d.]+)`)
cpuinfoS390XProcessorRegexp = regexp.MustCompile(`^processor\s+(\d+):.*`)
)
// CPUInfo returns information about current system CPUs.
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
func (fs FS) CPUInfo() ([]CPUInfo, error) {
data, err := util.ReadFileNoStat(fs.proc.Path("cpuinfo"))
if err != nil {
return nil, err
}
return parseCPUInfo(data)
}
func parseCPUInfoX86(info []byte) ([]CPUInfo, error) {
scanner := bufio.NewScanner(bytes.NewReader(info))
// find the first "processor" line
firstLine := firstNonEmptyLine(scanner)
if !strings.HasPrefix(firstLine, "processor") || !strings.Contains(firstLine, ":") {
return nil, errors.New("invalid cpuinfo file: " + firstLine)
}
field := strings.SplitN(firstLine, ": ", 2)
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
firstcpu := CPUInfo{Processor: uint(v)}
cpuinfo := []CPUInfo{firstcpu}
i := 0
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "processor":
cpuinfo = append(cpuinfo, CPUInfo{}) // start of the next processor
i++
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].Processor = uint(v)
case "vendor", "vendor_id":
cpuinfo[i].VendorID = field[1]
case "cpu family":
cpuinfo[i].CPUFamily = field[1]
case "model":
cpuinfo[i].Model = field[1]
case "model name":
cpuinfo[i].ModelName = field[1]
case "stepping":
cpuinfo[i].Stepping = field[1]
case "microcode":
cpuinfo[i].Microcode = field[1]
case "cpu MHz":
v, err := strconv.ParseFloat(field[1], 64)
if err != nil {
return nil, err
}
cpuinfo[i].CPUMHz = v
case "cache size":
cpuinfo[i].CacheSize = field[1]
case "physical id":
cpuinfo[i].PhysicalID = field[1]
case "siblings":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].Siblings = uint(v)
case "core id":
cpuinfo[i].CoreID = field[1]
case "cpu cores":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].CPUCores = uint(v)
case "apicid":
cpuinfo[i].APICID = field[1]
case "initial apicid":
cpuinfo[i].InitialAPICID = field[1]
case "fpu":
cpuinfo[i].FPU = field[1]
case "fpu_exception":
cpuinfo[i].FPUException = field[1]
case "cpuid level":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].CPUIDLevel = uint(v)
case "wp":
cpuinfo[i].WP = field[1]
case "flags":
cpuinfo[i].Flags = strings.Fields(field[1])
case "bugs":
cpuinfo[i].Bugs = strings.Fields(field[1])
case "bogomips":
v, err := strconv.ParseFloat(field[1], 64)
if err != nil {
return nil, err
}
cpuinfo[i].BogoMips = v
case "clflush size":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].CLFlushSize = uint(v)
case "cache_alignment":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].CacheAlignment = uint(v)
case "address sizes":
cpuinfo[i].AddressSizes = field[1]
case "power management":
cpuinfo[i].PowerManagement = field[1]
}
}
return cpuinfo, nil
}
func parseCPUInfoARM(info []byte) ([]CPUInfo, error) {
scanner := bufio.NewScanner(bytes.NewReader(info))
firstLine := firstNonEmptyLine(scanner)
match, _ := regexp.MatchString("^[Pp]rocessor", firstLine)
if !match || !strings.Contains(firstLine, ":") {
return nil, errors.New("invalid cpuinfo file: " + firstLine)
}
field := strings.SplitN(firstLine, ": ", 2)
cpuinfo := []CPUInfo{}
featuresLine := ""
commonCPUInfo := CPUInfo{}
i := 0
if strings.TrimSpace(field[0]) == "Processor" {
commonCPUInfo = CPUInfo{ModelName: field[1]}
i = -1
} else {
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
firstcpu := CPUInfo{Processor: uint(v)}
cpuinfo = []CPUInfo{firstcpu}
}
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "processor":
cpuinfo = append(cpuinfo, commonCPUInfo) // start of the next processor
i++
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].Processor = uint(v)
case "BogoMIPS":
if i == -1 {
cpuinfo = append(cpuinfo, commonCPUInfo) // There is only one processor
i++
cpuinfo[i].Processor = 0
}
v, err := strconv.ParseFloat(field[1], 64)
if err != nil {
return nil, err
}
cpuinfo[i].BogoMips = v
case "Features":
featuresLine = line
case "model name":
cpuinfo[i].ModelName = field[1]
}
}
fields := strings.SplitN(featuresLine, ": ", 2)
for i := range cpuinfo {
cpuinfo[i].Flags = strings.Fields(fields[1])
}
return cpuinfo, nil
}
func parseCPUInfoS390X(info []byte) ([]CPUInfo, error) {
scanner := bufio.NewScanner(bytes.NewReader(info))
firstLine := firstNonEmptyLine(scanner)
if !strings.HasPrefix(firstLine, "vendor_id") || !strings.Contains(firstLine, ":") {
return nil, errors.New("invalid cpuinfo file: " + firstLine)
}
field := strings.SplitN(firstLine, ": ", 2)
cpuinfo := []CPUInfo{}
commonCPUInfo := CPUInfo{VendorID: field[1]}
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "bogomips per cpu":
v, err := strconv.ParseFloat(field[1], 64)
if err != nil {
return nil, err
}
commonCPUInfo.BogoMips = v
case "features":
commonCPUInfo.Flags = strings.Fields(field[1])
}
if strings.HasPrefix(line, "processor") {
match := cpuinfoS390XProcessorRegexp.FindStringSubmatch(line)
if len(match) < 2 {
return nil, errors.New("Invalid line found in cpuinfo: " + line)
}
cpu := commonCPUInfo
v, err := strconv.ParseUint(match[1], 0, 32)
if err != nil {
return nil, err
}
cpu.Processor = uint(v)
cpuinfo = append(cpuinfo, cpu)
}
if strings.HasPrefix(line, "cpu number") {
break
}
}
i := 0
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "cpu number":
i++
case "cpu MHz dynamic":
clock := cpuinfoClockRegexp.FindString(strings.TrimSpace(field[1]))
v, err := strconv.ParseFloat(clock, 64)
if err != nil {
return nil, err
}
cpuinfo[i].CPUMHz = v
}
}
return cpuinfo, nil
}
func parseCPUInfoMips(info []byte) ([]CPUInfo, error) {
scanner := bufio.NewScanner(bytes.NewReader(info))
// find the first "processor" line
firstLine := firstNonEmptyLine(scanner)
if !strings.HasPrefix(firstLine, "system type") || !strings.Contains(firstLine, ":") {
return nil, errors.New("invalid cpuinfo file: " + firstLine)
}
field := strings.SplitN(firstLine, ": ", 2)
cpuinfo := []CPUInfo{}
systemType := field[1]
i := 0
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "processor":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
i = int(v)
cpuinfo = append(cpuinfo, CPUInfo{}) // start of the next processor
cpuinfo[i].Processor = uint(v)
cpuinfo[i].VendorID = systemType
case "cpu model":
cpuinfo[i].ModelName = field[1]
case "BogoMIPS":
v, err := strconv.ParseFloat(field[1], 64)
if err != nil {
return nil, err
}
cpuinfo[i].BogoMips = v
}
}
return cpuinfo, nil
}
func parseCPUInfoPPC(info []byte) ([]CPUInfo, error) {
scanner := bufio.NewScanner(bytes.NewReader(info))
firstLine := firstNonEmptyLine(scanner)
if !strings.HasPrefix(firstLine, "processor") || !strings.Contains(firstLine, ":") {
return nil, errors.New("invalid cpuinfo file: " + firstLine)
}
field := strings.SplitN(firstLine, ": ", 2)
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
firstcpu := CPUInfo{Processor: uint(v)}
cpuinfo := []CPUInfo{firstcpu}
i := 0
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "processor":
cpuinfo = append(cpuinfo, CPUInfo{}) // start of the next processor
i++
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
cpuinfo[i].Processor = uint(v)
case "cpu":
cpuinfo[i].VendorID = field[1]
case "clock":
clock := cpuinfoClockRegexp.FindString(strings.TrimSpace(field[1]))
v, err := strconv.ParseFloat(clock, 64)
if err != nil {
return nil, err
}
cpuinfo[i].CPUMHz = v
}
}
return cpuinfo, nil
}
func parseCPUInfoRISCV(info []byte) ([]CPUInfo, error) {
scanner := bufio.NewScanner(bytes.NewReader(info))
firstLine := firstNonEmptyLine(scanner)
if !strings.HasPrefix(firstLine, "processor") || !strings.Contains(firstLine, ":") {
return nil, errors.New("invalid cpuinfo file: " + firstLine)
}
field := strings.SplitN(firstLine, ": ", 2)
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
firstcpu := CPUInfo{Processor: uint(v)}
cpuinfo := []CPUInfo{firstcpu}
i := 0
for scanner.Scan() {
line := scanner.Text()
if !strings.Contains(line, ":") {
continue
}
field := strings.SplitN(line, ": ", 2)
switch strings.TrimSpace(field[0]) {
case "processor":
v, err := strconv.ParseUint(field[1], 0, 32)
if err != nil {
return nil, err
}
i = int(v)
cpuinfo = append(cpuinfo, CPUInfo{}) // start of the next processor
cpuinfo[i].Processor = uint(v)
case "hart":
cpuinfo[i].CoreID = field[1]
case "isa":
cpuinfo[i].ModelName = field[1]
}
}
return cpuinfo, nil
}
func parseCPUInfoDummy(_ []byte) ([]CPUInfo, error) { // nolint:unused,deadcode
return nil, errors.New("not implemented")
}
// firstNonEmptyLine advances the scanner to the first non-empty line
// and returns the contents of that line
func firstNonEmptyLine(scanner *bufio.Scanner) string {
for scanner.Scan() {
line := scanner.Text()
if strings.TrimSpace(line) != "" {
return line
}
}
return ""
}

vendor/github.com/prometheus/procfs/cpuinfo_armx.go generated vendored Normal file

@ -0,0 +1,19 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
// +build arm arm64
package procfs
var parseCPUInfo = parseCPUInfoARM

vendor/github.com/prometheus/procfs/cpuinfo_mipsx.go generated vendored Normal file

@ -0,0 +1,19 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
// +build mips mipsle mips64 mips64le
package procfs
var parseCPUInfo = parseCPUInfoMips

vendor/github.com/prometheus/procfs/cpuinfo_others.go generated vendored Normal file

@ -0,0 +1,19 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
// +build !386,!amd64,!arm,!arm64,!mips,!mips64,!mips64le,!mipsle,!ppc64,!ppc64le,!riscv64,!s390x
package procfs
var parseCPUInfo = parseCPUInfoDummy

vendor/github.com/prometheus/procfs/cpuinfo_ppcx.go generated vendored Normal file

@ -0,0 +1,19 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
// +build ppc64 ppc64le
package procfs
var parseCPUInfo = parseCPUInfoPPC

vendor/github.com/prometheus/procfs/cpuinfo_s390x.go generated vendored Normal file

@ -0,0 +1,18 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package procfs
var parseCPUInfo = parseCPUInfoS390X

vendor/github.com/prometheus/procfs/cpuinfo_x86.go generated vendored Normal file

@ -0,0 +1,19 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
// +build 386 amd64
package procfs
var parseCPUInfo = parseCPUInfoX86


@ -14,10 +14,10 @@
package procfs
import (
"bufio"
"bytes"
"fmt"
"io/ioutil"
"strconv"
"io"
"strings"
"github.com/prometheus/procfs/internal/util"
@ -52,80 +52,102 @@ type Crypto struct {
// structs containing the relevant info. More information available here:
// https://kernel.readthedocs.io/en/sphinx-samples/crypto-API.html
func (fs FS) Crypto() ([]Crypto, error) {
data, err := ioutil.ReadFile(fs.proc.Path("crypto"))
path := fs.proc.Path("crypto")
b, err := util.ReadFileNoStat(path)
if err != nil {
return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err)
return nil, fmt.Errorf("error reading crypto %s: %s", path, err)
}
crypto, err := parseCrypto(data)
crypto, err := parseCrypto(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err)
return nil, fmt.Errorf("error parsing crypto %s: %s", path, err)
}
return crypto, nil
}
func parseCrypto(cryptoData []byte) ([]Crypto, error) {
crypto := []Crypto{}
// parseCrypto parses a /proc/crypto stream into Crypto elements.
func parseCrypto(r io.Reader) ([]Crypto, error) {
var out []Crypto
cryptoBlocks := bytes.Split(cryptoData, []byte("\n\n"))
for _, block := range cryptoBlocks {
var newCryptoElem Crypto
lines := strings.Split(string(block), "\n")
for _, line := range lines {
if strings.TrimSpace(line) == "" || line[0] == ' ' {
continue
}
fields := strings.Split(line, ":")
key := strings.TrimSpace(fields[0])
value := strings.TrimSpace(fields[1])
vp := util.NewValueParser(value)
switch strings.TrimSpace(key) {
case "async":
b, err := strconv.ParseBool(value)
if err == nil {
newCryptoElem.Async = b
}
case "blocksize":
newCryptoElem.Blocksize = vp.PUInt64()
case "chunksize":
newCryptoElem.Chunksize = vp.PUInt64()
case "digestsize":
newCryptoElem.Digestsize = vp.PUInt64()
case "driver":
newCryptoElem.Driver = value
case "geniv":
newCryptoElem.Geniv = value
case "internal":
newCryptoElem.Internal = value
case "ivsize":
newCryptoElem.Ivsize = vp.PUInt64()
case "maxauthsize":
newCryptoElem.Maxauthsize = vp.PUInt64()
case "max keysize":
newCryptoElem.MaxKeysize = vp.PUInt64()
case "min keysize":
newCryptoElem.MinKeysize = vp.PUInt64()
case "module":
newCryptoElem.Module = value
case "name":
newCryptoElem.Name = value
case "priority":
newCryptoElem.Priority = vp.PInt64()
case "refcnt":
newCryptoElem.Refcnt = vp.PInt64()
case "seedsize":
newCryptoElem.Seedsize = vp.PUInt64()
case "selftest":
newCryptoElem.Selftest = value
case "type":
newCryptoElem.Type = value
case "walksize":
newCryptoElem.Walksize = vp.PUInt64()
}
s := bufio.NewScanner(r)
for s.Scan() {
text := s.Text()
switch {
case strings.HasPrefix(text, "name"):
// Each crypto element begins with its name.
out = append(out, Crypto{})
case text == "":
continue
}
kv := strings.Split(text, ":")
if len(kv) != 2 {
return nil, fmt.Errorf("malformed crypto line: %q", text)
}
k := strings.TrimSpace(kv[0])
v := strings.TrimSpace(kv[1])
// Parse the key/value pair into the currently focused element.
c := &out[len(out)-1]
if err := c.parseKV(k, v); err != nil {
return nil, err
}
crypto = append(crypto, newCryptoElem)
}
return crypto, nil
if err := s.Err(); err != nil {
return nil, err
}
return out, nil
}
// parseKV parses a key/value pair into the appropriate field of c.
func (c *Crypto) parseKV(k, v string) error {
vp := util.NewValueParser(v)
switch k {
case "async":
// Interpret literal yes as true.
c.Async = v == "yes"
case "blocksize":
c.Blocksize = vp.PUInt64()
case "chunksize":
c.Chunksize = vp.PUInt64()
case "digestsize":
c.Digestsize = vp.PUInt64()
case "driver":
c.Driver = v
case "geniv":
c.Geniv = v
case "internal":
c.Internal = v
case "ivsize":
c.Ivsize = vp.PUInt64()
case "maxauthsize":
c.Maxauthsize = vp.PUInt64()
case "max keysize":
c.MaxKeysize = vp.PUInt64()
case "min keysize":
c.MinKeysize = vp.PUInt64()
case "module":
c.Module = v
case "name":
c.Name = v
case "priority":
c.Priority = vp.PInt64()
case "refcnt":
c.Refcnt = vp.PInt64()
case "seedsize":
c.Seedsize = vp.PUInt64()
case "selftest":
c.Selftest = v
case "type":
c.Type = v
case "walksize":
c.Walksize = vp.PUInt64()
}
return vp.Err()
}

File diff suppressed because it is too large.

vendor/github.com/prometheus/procfs/fscache.go generated vendored Normal file

@ -0,0 +1,422 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"bytes"
"fmt"
"io"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// Fscacheinfo represents fscache statistics.
type Fscacheinfo struct {
// Number of index cookies allocated
IndexCookiesAllocated uint64
// data storage cookies allocated
DataStorageCookiesAllocated uint64
// Number of special cookies allocated
SpecialCookiesAllocated uint64
// Number of objects allocated
ObjectsAllocated uint64
// Number of object allocation failures
ObjectAllocationsFailure uint64
// Number of objects that reached the available state
ObjectsAvailable uint64
// Number of objects that reached the dead state
ObjectsDead uint64
// Number of objects that didn't have a coherency check
ObjectsWithoutCoherencyCheck uint64
// Number of objects that passed a coherency check
ObjectsWithCoherencyCheck uint64
// Number of objects that needed a coherency data update
ObjectsNeedCoherencyCheckUpdate uint64
// Number of objects that were declared obsolete
ObjectsDeclaredObsolete uint64
// Number of pages marked as being cached
PagesMarkedAsBeingCached uint64
// Number of uncache page requests seen
UncachePagesRequestSeen uint64
// Number of acquire cookie requests seen
AcquireCookiesRequestSeen uint64
// Number of acq reqs given a NULL parent
AcquireRequestsWithNullParent uint64
// Number of acq reqs rejected due to no cache available
AcquireRequestsRejectedNoCacheAvailable uint64
// Number of acq reqs succeeded
AcquireRequestsSucceeded uint64
// Number of acq reqs rejected due to error
AcquireRequestsRejectedDueToError uint64
// Number of acq reqs failed on ENOMEM
AcquireRequestsFailedDueToEnomem uint64
// Number of lookup calls made on cache backends
LookupsNumber uint64
// Number of negative lookups made
LookupsNegative uint64
// Number of positive lookups made
LookupsPositive uint64
// Number of objects created by lookup
ObjectsCreatedByLookup uint64
// Number of lookups timed out and requeued
LookupsTimedOutAndRequed uint64
InvalidationsNumber uint64
InvalidationsRunning uint64
// Number of update cookie requests seen
UpdateCookieRequestSeen uint64
// Number of upd reqs given a NULL parent
UpdateRequestsWithNullParent uint64
// Number of upd reqs granted CPU time
UpdateRequestsRunning uint64
// Number of relinquish cookie requests seen
RelinquishCookiesRequestSeen uint64
// Number of rlq reqs given a NULL parent
RelinquishCookiesWithNullParent uint64
// Number of rlq reqs waited on completion of creation
RelinquishRequestsWaitingCompleteCreation uint64
// Relinqs rtr
RelinquishRetries uint64
// Number of attribute changed requests seen
AttributeChangedRequestsSeen uint64
// Number of attr changed requests queued
AttributeChangedRequestsQueued uint64
// Number of attr changed rejected -ENOBUFS
AttributeChangedRejectDueToEnobufs uint64
// Number of attr changed failed -ENOMEM
AttributeChangedFailedDueToEnomem uint64
// Number of attr changed ops given CPU time
AttributeChangedOps uint64
// Number of allocation requests seen
AllocationRequestsSeen uint64
// Number of successful alloc reqs
AllocationOkRequests uint64
// Number of alloc reqs that waited on lookup completion
AllocationWaitingOnLookup uint64
// Number of alloc reqs rejected -ENOBUFS
AllocationsRejectedDueToEnobufs uint64
// Number of alloc reqs aborted -ERESTARTSYS
AllocationsAbortedDueToErestartsys uint64
// Number of alloc reqs submitted
AllocationOperationsSubmitted uint64
// Number of alloc reqs waited for CPU time
AllocationsWaitedForCPU uint64
// Number of alloc reqs aborted due to object death
AllocationsAbortedDueToObjectDeath uint64
// Number of retrieval (read) requests seen
RetrievalsReadRequests uint64
// Number of successful retr reqs
RetrievalsOk uint64
// Number of retr reqs that waited on lookup completion
RetrievalsWaitingLookupCompletion uint64
// Number of retr reqs returned -ENODATA
RetrievalsReturnedEnodata uint64
// Number of retr reqs rejected -ENOBUFS
RetrievalsRejectedDueToEnobufs uint64
// Number of retr reqs aborted -ERESTARTSYS
RetrievalsAbortedDueToErestartsys uint64
// Number of retr reqs failed -ENOMEM
RetrievalsFailedDueToEnomem uint64
// Number of retr reqs submitted
RetrievalsRequests uint64
// Number of retr reqs waited for CPU time
RetrievalsWaitingCPU uint64
// Number of retr reqs aborted due to object death
RetrievalsAbortedDueToObjectDeath uint64
// Number of storage (write) requests seen
StoreWriteRequests uint64
// Number of successful store reqs
StoreSuccessfulRequests uint64
// Number of store reqs on a page already pending storage
StoreRequestsOnPendingStorage uint64
// Number of store reqs rejected -ENOBUFS
StoreRequestsRejectedDueToEnobufs uint64
// Number of store reqs failed -ENOMEM
StoreRequestsFailedDueToEnomem uint64
// Number of store reqs submitted
StoreRequestsSubmitted uint64
// Number of store reqs granted CPU time
StoreRequestsRunning uint64
// Number of pages given store req processing time
StorePagesWithRequestsProcessing uint64
// Number of store reqs deleted from tracking tree
StoreRequestsDeleted uint64
// Number of store reqs over store limit
StoreRequestsOverStoreLimit uint64
// Number of release reqs against pages with no pending store
ReleaseRequestsAgainstPagesWithNoPendingStorage uint64
// Number of release reqs against pages stored by time lock granted
ReleaseRequestsAgainstPagesStoredByTimeLockGranted uint64
// Number of release reqs ignored due to in-progress store
ReleaseRequestsIgnoredDueToInProgressStore uint64
// Number of page stores cancelled due to release req
PageStoresCancelledByReleaseRequests uint64
VmscanWaiting uint64
// Number of times async ops added to pending queues
OpsPending uint64
// Number of times async ops given CPU time
OpsRunning uint64
// Number of times async ops queued for processing
OpsEnqueued uint64
// Number of async ops cancelled
OpsCancelled uint64
// Number of async ops rejected due to object lookup/create failure
OpsRejected uint64
// Number of async ops initialised
OpsInitialised uint64
// Number of async ops queued for deferred release
OpsDeferred uint64
// Number of async ops released (should equal ini=N when idle)
OpsReleased uint64
// Number of deferred-release async ops garbage collected
OpsGarbageCollected uint64
// Number of in-progress alloc_object() cache ops
CacheopAllocationsinProgress uint64
// Number of in-progress lookup_object() cache ops
CacheopLookupObjectInProgress uint64
// Number of in-progress lookup_complete() cache ops
CacheopLookupCompleteInPorgress uint64
// Number of in-progress grab_object() cache ops
CacheopGrabObjectInProgress uint64
CacheopInvalidations uint64
// Number of in-progress update_object() cache ops
CacheopUpdateObjectInProgress uint64
// Number of in-progress drop_object() cache ops
CacheopDropObjectInProgress uint64
// Number of in-progress put_object() cache ops
CacheopPutObjectInProgress uint64
// Number of in-progress attr_changed() cache ops
CacheopAttributeChangeInProgress uint64
// Number of in-progress sync_cache() cache ops
CacheopSyncCacheInProgress uint64
// Number of in-progress read_or_alloc_page() cache ops
CacheopReadOrAllocPageInProgress uint64
// Number of in-progress read_or_alloc_pages() cache ops
CacheopReadOrAllocPagesInProgress uint64
// Number of in-progress allocate_page() cache ops
CacheopAllocatePageInProgress uint64
// Number of in-progress allocate_pages() cache ops
CacheopAllocatePagesInProgress uint64
// Number of in-progress write_page() cache ops
CacheopWritePagesInProgress uint64
// Number of in-progress uncache_page() cache ops
CacheopUncachePagesInProgress uint64
// Number of in-progress dissociate_pages() cache ops
CacheopDissociatePagesInProgress uint64
// Number of object lookups/creations rejected due to lack of space
CacheevLookupsAndCreationsRejectedLackSpace uint64
// Number of stale objects deleted
CacheevStaleObjectsDeleted uint64
// Number of objects retired when relinquished
CacheevRetiredWhenReliquished uint64
// Number of objects culled
CacheevObjectsCulled uint64
}
// Fscacheinfo returns information about current fscache statistics.
// See https://www.kernel.org/doc/Documentation/filesystems/caching/fscache.txt
func (fs FS) Fscacheinfo() (Fscacheinfo, error) {
b, err := util.ReadFileNoStat(fs.proc.Path("fs/fscache/stats"))
if err != nil {
return Fscacheinfo{}, err
}
m, err := parseFscacheinfo(bytes.NewReader(b))
if err != nil {
return Fscacheinfo{}, fmt.Errorf("failed to parse Fscacheinfo: %v", err)
}
return *m, nil
}
func setFSCacheFields(fields []string, setFields ...*uint64) error {
var err error
if len(fields) < len(setFields) {
return fmt.Errorf("Insufficient number of fields, expected %v, got %v", len(setFields), len(fields))
}
for i := range setFields {
*setFields[i], err = strconv.ParseUint(strings.Split(fields[i], "=")[1], 0, 64)
if err != nil {
return err
}
}
return nil
}
func parseFscacheinfo(r io.Reader) (*Fscacheinfo, error) {
var m Fscacheinfo
s := bufio.NewScanner(r)
for s.Scan() {
fields := strings.Fields(s.Text())
if len(fields) < 2 {
return nil, fmt.Errorf("malformed Fscacheinfo line: %q", s.Text())
}
switch fields[0] {
case "Cookies:":
err := setFSCacheFields(fields[1:], &m.IndexCookiesAllocated, &m.DataStorageCookiesAllocated,
&m.SpecialCookiesAllocated)
if err != nil {
return &m, err
}
case "Objects:":
err := setFSCacheFields(fields[1:], &m.ObjectsAllocated, &m.ObjectAllocationsFailure,
&m.ObjectsAvailable, &m.ObjectsDead)
if err != nil {
return &m, err
}
case "ChkAux":
err := setFSCacheFields(fields[2:], &m.ObjectsWithoutCoherencyCheck, &m.ObjectsWithCoherencyCheck,
&m.ObjectsNeedCoherencyCheckUpdate, &m.ObjectsDeclaredObsolete)
if err != nil {
return &m, err
}
case "Pages":
err := setFSCacheFields(fields[2:], &m.PagesMarkedAsBeingCached, &m.UncachePagesRequestSeen)
if err != nil {
return &m, err
}
case "Acquire:":
err := setFSCacheFields(fields[1:], &m.AcquireCookiesRequestSeen, &m.AcquireRequestsWithNullParent,
&m.AcquireRequestsRejectedNoCacheAvailable, &m.AcquireRequestsSucceeded, &m.AcquireRequestsRejectedDueToError,
&m.AcquireRequestsFailedDueToEnomem)
if err != nil {
return &m, err
}
case "Lookups:":
err := setFSCacheFields(fields[1:], &m.LookupsNumber, &m.LookupsNegative, &m.LookupsPositive,
&m.ObjectsCreatedByLookup, &m.LookupsTimedOutAndRequed)
if err != nil {
return &m, err
}
case "Invals":
err := setFSCacheFields(fields[2:], &m.InvalidationsNumber, &m.InvalidationsRunning)
if err != nil {
return &m, err
}
case "Updates:":
err := setFSCacheFields(fields[1:], &m.UpdateCookieRequestSeen, &m.UpdateRequestsWithNullParent,
&m.UpdateRequestsRunning)
if err != nil {
return &m, err
}
case "Relinqs:":
err := setFSCacheFields(fields[1:], &m.RelinquishCookiesRequestSeen, &m.RelinquishCookiesWithNullParent,
&m.RelinquishRequestsWaitingCompleteCreation, &m.RelinquishRetries)
if err != nil {
return &m, err
}
case "AttrChg:":
err := setFSCacheFields(fields[1:], &m.AttributeChangedRequestsSeen, &m.AttributeChangedRequestsQueued,
&m.AttributeChangedRejectDueToEnobufs, &m.AttributeChangedFailedDueToEnomem, &m.AttributeChangedOps)
if err != nil {
return &m, err
}
case "Allocs":
if strings.Split(fields[2], "=")[0] == "n" {
err := setFSCacheFields(fields[2:], &m.AllocationRequestsSeen, &m.AllocationOkRequests,
&m.AllocationWaitingOnLookup, &m.AllocationsRejectedDueToEnobufs, &m.AllocationsAbortedDueToErestartsys)
if err != nil {
return &m, err
}
} else {
err := setFSCacheFields(fields[2:], &m.AllocationOperationsSubmitted, &m.AllocationsWaitedForCPU,
&m.AllocationsAbortedDueToObjectDeath)
if err != nil {
return &m, err
}
}
case "Retrvls:":
if strings.Split(fields[1], "=")[0] == "n" {
err := setFSCacheFields(fields[1:], &m.RetrievalsReadRequests, &m.RetrievalsOk, &m.RetrievalsWaitingLookupCompletion,
&m.RetrievalsReturnedEnodata, &m.RetrievalsRejectedDueToEnobufs, &m.RetrievalsAbortedDueToErestartsys,
&m.RetrievalsFailedDueToEnomem)
if err != nil {
return &m, err
}
} else {
err := setFSCacheFields(fields[1:], &m.RetrievalsRequests, &m.RetrievalsWaitingCPU, &m.RetrievalsAbortedDueToObjectDeath)
if err != nil {
return &m, err
}
}
case "Stores":
if strings.Split(fields[2], "=")[0] == "n" {
err := setFSCacheFields(fields[2:], &m.StoreWriteRequests, &m.StoreSuccessfulRequests,
&m.StoreRequestsOnPendingStorage, &m.StoreRequestsRejectedDueToEnobufs, &m.StoreRequestsFailedDueToEnomem)
if err != nil {
return &m, err
}
} else {
err := setFSCacheFields(fields[2:], &m.StoreRequestsSubmitted, &m.StoreRequestsRunning,
&m.StorePagesWithRequestsProcessing, &m.StoreRequestsDeleted, &m.StoreRequestsOverStoreLimit)
if err != nil {
return &m, err
}
}
case "VmScan":
err := setFSCacheFields(fields[2:], &m.ReleaseRequestsAgainstPagesWithNoPendingStorage,
&m.ReleaseRequestsAgainstPagesStoredByTimeLockGranted, &m.ReleaseRequestsIgnoredDueToInProgressStore,
&m.PageStoresCancelledByReleaseRequests, &m.VmscanWaiting)
if err != nil {
return &m, err
}
case "Ops":
if strings.Split(fields[2], "=")[0] == "pend" {
err := setFSCacheFields(fields[2:], &m.OpsPending, &m.OpsRunning, &m.OpsEnqueued, &m.OpsCancelled, &m.OpsRejected)
if err != nil {
return &m, err
}
} else {
err := setFSCacheFields(fields[2:], &m.OpsInitialised, &m.OpsDeferred, &m.OpsReleased, &m.OpsGarbageCollected)
if err != nil {
return &m, err
}
}
case "CacheOp:":
if strings.Split(fields[1], "=")[0] == "alo" {
err := setFSCacheFields(fields[1:], &m.CacheopAllocationsinProgress, &m.CacheopLookupObjectInProgress,
&m.CacheopLookupCompleteInPorgress, &m.CacheopGrabObjectInProgress)
if err != nil {
return &m, err
}
} else if strings.Split(fields[1], "=")[0] == "inv" {
err := setFSCacheFields(fields[1:], &m.CacheopInvalidations, &m.CacheopUpdateObjectInProgress,
&m.CacheopDropObjectInProgress, &m.CacheopPutObjectInProgress, &m.CacheopAttributeChangeInProgress,
&m.CacheopSyncCacheInProgress)
if err != nil {
return &m, err
}
} else {
err := setFSCacheFields(fields[1:], &m.CacheopReadOrAllocPageInProgress, &m.CacheopReadOrAllocPagesInProgress,
&m.CacheopAllocatePageInProgress, &m.CacheopAllocatePagesInProgress, &m.CacheopWritePagesInProgress,
&m.CacheopUncachePagesInProgress, &m.CacheopDissociatePagesInProgress)
if err != nil {
return &m, err
}
}
case "CacheEv:":
err := setFSCacheFields(fields[1:], &m.CacheevLookupsAndCreationsRejectedLackSpace, &m.CacheevStaleObjectsDeleted,
&m.CacheevRetiredWhenReliquished, &m.CacheevObjectsCulled)
if err != nil {
return &m, err
}
}
}
return &m, nil
}


@ -1,6 +1,9 @@
module github.com/prometheus/procfs
go 1.12
require (
github.com/google/go-cmp v0.3.0
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4
github.com/google/go-cmp v0.3.1
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e
)


@ -1,4 +1,6 @@
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e h1:LwyF2AFISC9nVbS6MgzsaQNSUsRXI49GS+YQ5KX/QH0=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=


@ -26,7 +26,7 @@ const (
// DefaultSysMountPoint is the common mount point of the sys filesystem.
DefaultSysMountPoint = "/sys"
// DefaultConfigfsMountPoint is the commont mount point of the configfs
// DefaultConfigfsMountPoint is the common mount point of the configfs
DefaultConfigfsMountPoint = "/sys/kernel/config"
)


@ -73,6 +73,15 @@ func ReadUintFromFile(path string) (uint64, error) {
return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}
// ReadIntFromFile reads a file and attempts to parse a int64 from it.
func ReadIntFromFile(path string) (int64, error) {
data, err := ioutil.ReadFile(path)
if err != nil {
return 0, err
}
return strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
}
// ParseBool parses a string into a boolean pointer.
func ParseBool(b string) *bool {
var truth bool


@ -0,0 +1,38 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package util
import (
"io"
"io/ioutil"
"os"
)
// ReadFileNoStat uses ioutil.ReadAll to read contents of entire file.
// This is similar to ioutil.ReadFile but without the call to os.Stat, because
// many files in /proc and /sys report incorrect file sizes (either 0 or 4096).
// Reads a max file size of 512kB. For files larger than this, a scanner
// should be used.
func ReadFileNoStat(filename string) ([]byte, error) {
const maxBufferSize = 1024 * 512
f, err := os.Open(filename)
if err != nil {
return nil, err
}
defer f.Close()
reader := io.LimitReader(f, maxBufferSize)
return ioutil.ReadAll(reader)
}


@ -23,6 +23,8 @@ import (
// SysReadFile is a simplified ioutil.ReadFile that invokes syscall.Read directly.
// https://github.com/prometheus/node_exporter/pull/728/files
//
// Note that this function will not read files larger than 128 bytes.
func SysReadFile(file string) (string, error) {
f, err := os.Open(file)
if err != nil {
@ -35,7 +37,8 @@ func SysReadFile(file string) (string, error) {
//
// Since we either want to read data or bail immediately, do the simplest
// possible read using syscall directly.
b := make([]byte, 128)
const sysFileBufferSize = 128
b := make([]byte, sysFileBufferSize)
n, err := syscall.Read(int(f.Fd()), b)
if err != nil {
return "", err


@ -33,6 +33,9 @@ func NewValueParser(v string) *ValueParser {
return &ValueParser{v: v}
}
// Int interprets the underlying value as an int and returns that value.
func (vp *ValueParser) Int() int { return int(vp.int64()) }
// PInt64 interprets the underlying value as an int64 and returns a pointer to
// that value.
func (vp *ValueParser) PInt64() *int64 {
@ -40,16 +43,27 @@ func (vp *ValueParser) PInt64() *int64 {
return nil
}
v := vp.int64()
return &v
}
// int64 interprets the underlying value as an int64 and returns that value.
// TODO: export if/when necessary.
func (vp *ValueParser) int64() int64 {
if vp.err != nil {
return 0
}
// A base value of zero makes ParseInt infer the correct base using the
// string's prefix, if any.
const base = 0
v, err := strconv.ParseInt(vp.v, base, 64)
if err != nil {
vp.err = err
return nil
return 0
}
return &v
return v
}
// PUInt64 interprets the underlying value as an uint64 and returns a pointer to


@ -15,6 +15,7 @@ package procfs
import (
"bufio"
"bytes"
"encoding/hex"
"errors"
"fmt"
@ -24,6 +25,8 @@ import (
"os"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// IPVSStats holds IPVS statistics, as exposed by the kernel in `/proc/net/ip_vs_stats`.
@ -64,17 +67,16 @@ type IPVSBackendStatus struct {
// IPVSStats reads the IPVS statistics from the specified `proc` filesystem.
func (fs FS) IPVSStats() (IPVSStats, error) {
file, err := os.Open(fs.proc.Path("net/ip_vs_stats"))
data, err := util.ReadFileNoStat(fs.proc.Path("net/ip_vs_stats"))
if err != nil {
return IPVSStats{}, err
}
defer file.Close()
return parseIPVSStats(file)
return parseIPVSStats(bytes.NewReader(data))
}
// parseIPVSStats performs the actual parsing of `ip_vs_stats`.
func parseIPVSStats(file io.Reader) (IPVSStats, error) {
func parseIPVSStats(r io.Reader) (IPVSStats, error) {
var (
statContent []byte
statLines []string
@ -82,7 +84,7 @@ func parseIPVSStats(file io.Reader) (IPVSStats, error) {
stats IPVSStats
)
statContent, err := ioutil.ReadAll(file)
statContent, err := ioutil.ReadAll(r)
if err != nil {
return IPVSStats{}, err
}

vendor/github.com/prometheus/procfs/kernel_random.go generated vendored Normal file

@ -0,0 +1,62 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !windows
package procfs
import (
"os"
"github.com/prometheus/procfs/internal/util"
)
// KernelRandom contains information about the kernel's random number generator.
type KernelRandom struct {
// EntropyAvaliable gives the available entropy, in bits.
EntropyAvaliable *uint64
// PoolSize gives the size of the entropy pool, in bits.
PoolSize *uint64
// URandomMinReseedSeconds is the number of seconds after which the DRNG will be reseeded.
URandomMinReseedSeconds *uint64
// WriteWakeupThreshold the number of bits of entropy below which we wake up processes
// that do a select(2) or poll(2) for write access to /dev/random.
WriteWakeupThreshold *uint64
// ReadWakeupThreshold is the number of bits of entropy required for waking up processes that sleep
// waiting for entropy from /dev/random.
ReadWakeupThreshold *uint64
}
// KernelRandom returns values from /proc/sys/kernel/random.
func (fs FS) KernelRandom() (KernelRandom, error) {
random := KernelRandom{}
for file, p := range map[string]**uint64{
"entropy_avail": &random.EntropyAvaliable,
"poolsize": &random.PoolSize,
"urandom_min_reseed_secs": &random.URandomMinReseedSeconds,
"write_wakeup_threshold": &random.WriteWakeupThreshold,
"read_wakeup_threshold": &random.ReadWakeupThreshold,
} {
val, err := util.ReadUintFromFile(fs.proc.Path("sys", "kernel", "random", file))
if os.IsNotExist(err) {
continue
}
if err != nil {
return random, err
}
*p = &val
}
return random, nil
}
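A short usage sketch for this new file (illustrative only; it reuses the fs handle from the IPVSStats example above). The pointer fields stay nil when the corresponding sysctl file is absent, which is what the map of **uint64 in the loop above arranges:

random, err := fs.KernelRandom()
if err != nil {
	log.Fatal(err)
}
if random.EntropyAvaliable != nil { // field name keeps the upstream spelling
	fmt.Printf("entropy available: %d bits\n", *random.EntropyAvaliable)
}
if random.PoolSize != nil {
	fmt.Printf("pool size: %d bits\n", *random.PoolSize)
}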

62 vendor/github.com/prometheus/procfs/loadavg.go generated vendored Normal file

@ -0,0 +1,62 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"fmt"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// LoadAvg represents an entry in /proc/loadavg
type LoadAvg struct {
Load1 float64
Load5 float64
Load15 float64
}
// LoadAvg returns loadavg from /proc.
func (fs FS) LoadAvg() (*LoadAvg, error) {
path := fs.proc.Path("loadavg")
data, err := util.ReadFileNoStat(path)
if err != nil {
return nil, err
}
return parseLoad(data)
}
// Parse /proc loadavg and return 1m, 5m and 15m.
func parseLoad(loadavgBytes []byte) (*LoadAvg, error) {
loads := make([]float64, 3)
parts := strings.Fields(string(loadavgBytes))
if len(parts) < 3 {
return nil, fmt.Errorf("malformed loadavg line: too few fields in loadavg string: %s", string(loadavgBytes))
}
var err error
for i, load := range parts[0:3] {
loads[i], err = strconv.ParseFloat(load, 64)
if err != nil {
return nil, fmt.Errorf("could not parse load '%s': %s", load, err)
}
}
return &LoadAvg{
Load1: loads[0],
Load5: loads[1],
Load15: loads[2],
}, nil
}
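Usage sketch (illustrative, same fs handle as above):

la, err := fs.LoadAvg()
if err != nil {
	log.Fatal(err)
}
// Load1/Load5/Load15 are the classic 1-, 5- and 15-minute averages.
fmt.Printf("load: %.2f %.2f %.2f\n", la.Load1, la.Load5, la.Load15)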

vendor/github.com/prometheus/procfs/mdstat.go

@ -52,7 +52,7 @@ type MDStat struct {
func (fs FS) MDStat() ([]MDStat, error) {
data, err := ioutil.ReadFile(fs.proc.Path("mdstat"))
if err != nil {
return nil, fmt.Errorf("error parsing mdstat %s: %s", fs.proc.Path("mdstat"), err)
return nil, err
}
mdstat, err := parseMDStat(data)
if err != nil {
@ -107,11 +107,14 @@ func parseMDStat(mdStatData []byte) ([]MDStat, error) {
syncedBlocks := size
recovering := strings.Contains(lines[syncLineIdx], "recovery")
resyncing := strings.Contains(lines[syncLineIdx], "resync")
checking := strings.Contains(lines[syncLineIdx], "check")
// Append recovery and resyncing state info.
if recovering || resyncing {
if recovering || resyncing || checking {
if recovering {
state = "recovering"
} else if checking {
state = "checking"
} else {
state = "resyncing"
}

277 vendor/github.com/prometheus/procfs/meminfo.go generated vendored Normal file

@ -0,0 +1,277 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"bytes"
"fmt"
"io"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// Meminfo represents memory statistics.
type Meminfo struct {
// Total usable ram (i.e. physical ram minus a few reserved
// bits and the kernel binary code)
MemTotal uint64
// The sum of LowFree+HighFree
MemFree uint64
// An estimate of how much memory is available for starting
// new applications, without swapping. Calculated from
// MemFree, SReclaimable, the size of the file LRU lists, and
// the low watermarks in each zone. The estimate takes into
// account that the system needs some page cache to function
// well, and that not all reclaimable slab will be
// reclaimable, due to items being in use. The impact of those
// factors will vary from system to system.
MemAvailable uint64
// Relatively temporary storage for raw disk blocks shouldn't
// get tremendously large (20MB or so)
Buffers uint64
Cached uint64
// Memory that once was swapped out, is swapped back in but
// still also is in the swapfile (if memory is needed it
// doesn't need to be swapped out AGAIN because it is already
// in the swapfile. This saves I/O)
SwapCached uint64
// Memory that has been used more recently and usually not
// reclaimed unless absolutely necessary.
Active uint64
// Memory which has been less recently used. It is more
// eligible to be reclaimed for other purposes
Inactive uint64
ActiveAnon uint64
InactiveAnon uint64
ActiveFile uint64
InactiveFile uint64
Unevictable uint64
Mlocked uint64
// total amount of swap space available
SwapTotal uint64
// Memory which has been evicted from RAM, and is temporarily
// on the disk
SwapFree uint64
// Memory which is waiting to get written back to the disk
Dirty uint64
// Memory which is actively being written back to the disk
Writeback uint64
// Non-file backed pages mapped into userspace page tables
AnonPages uint64
// files which have been mapped, such as libraries
Mapped uint64
Shmem uint64
// in-kernel data structures cache
Slab uint64
// Part of Slab, that might be reclaimed, such as caches
SReclaimable uint64
// Part of Slab, that cannot be reclaimed on memory pressure
SUnreclaim uint64
KernelStack uint64
// amount of memory dedicated to the lowest level of page
// tables.
PageTables uint64
// NFS pages sent to the server, but not yet committed to
// stable storage
NFSUnstable uint64
// Memory used for block device "bounce buffers"
Bounce uint64
// Memory used by FUSE for temporary writeback buffers
WritebackTmp uint64
// Based on the overcommit ratio ('vm.overcommit_ratio'),
// this is the total amount of memory currently available to
// be allocated on the system. This limit is only adhered to
// if strict overcommit accounting is enabled (mode 2 in
// 'vm.overcommit_memory').
// The CommitLimit is calculated with the following formula:
// CommitLimit = ([total RAM pages] - [total huge TLB pages]) *
// overcommit_ratio / 100 + [total swap pages]
// For example, on a system with 1G of physical RAM and 7G
// of swap with a `vm.overcommit_ratio` of 30 it would
// yield a CommitLimit of 7.3G.
// For more details, see the memory overcommit documentation
// in vm/overcommit-accounting.
CommitLimit uint64
// The amount of memory presently allocated on the system.
// The committed memory is a sum of all of the memory which
// has been allocated by processes, even if it has not been
// "used" by them as of yet. A process which malloc()'s 1G
// of memory, but only touches 300M of it will show up as
// using 1G. This 1G is memory which has been "committed" to
// by the VM and can be used at any time by the allocating
// application. With strict overcommit enabled on the system
// (mode 2 in 'vm.overcommit_memory'), allocations which would
// exceed the CommitLimit (detailed above) will not be permitted.
// This is useful if one needs to guarantee that processes will
// not fail due to lack of memory once that memory has been
// successfully allocated.
CommittedAS uint64
// total size of vmalloc memory area
VmallocTotal uint64
// amount of vmalloc area which is used
VmallocUsed uint64
// largest contiguous block of vmalloc area which is free
VmallocChunk uint64
HardwareCorrupted uint64
AnonHugePages uint64
ShmemHugePages uint64
ShmemPmdMapped uint64
CmaTotal uint64
CmaFree uint64
HugePagesTotal uint64
HugePagesFree uint64
HugePagesRsvd uint64
HugePagesSurp uint64
Hugepagesize uint64
DirectMap4k uint64
DirectMap2M uint64
DirectMap1G uint64
}
// Meminfo returns information about current kernel/system memory statistics.
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
func (fs FS) Meminfo() (Meminfo, error) {
b, err := util.ReadFileNoStat(fs.proc.Path("meminfo"))
if err != nil {
return Meminfo{}, err
}
m, err := parseMemInfo(bytes.NewReader(b))
if err != nil {
return Meminfo{}, fmt.Errorf("failed to parse meminfo: %v", err)
}
return *m, nil
}
func parseMemInfo(r io.Reader) (*Meminfo, error) {
var m Meminfo
s := bufio.NewScanner(r)
for s.Scan() {
// Each line has at least a name and value; we ignore the unit.
fields := strings.Fields(s.Text())
if len(fields) < 2 {
return nil, fmt.Errorf("malformed meminfo line: %q", s.Text())
}
v, err := strconv.ParseUint(fields[1], 0, 64)
if err != nil {
return nil, err
}
switch fields[0] {
case "MemTotal:":
m.MemTotal = v
case "MemFree:":
m.MemFree = v
case "MemAvailable:":
m.MemAvailable = v
case "Buffers:":
m.Buffers = v
case "Cached:":
m.Cached = v
case "SwapCached:":
m.SwapCached = v
case "Active:":
m.Active = v
case "Inactive:":
m.Inactive = v
case "Active(anon):":
m.ActiveAnon = v
case "Inactive(anon):":
m.InactiveAnon = v
case "Active(file):":
m.ActiveFile = v
case "Inactive(file):":
m.InactiveFile = v
case "Unevictable:":
m.Unevictable = v
case "Mlocked:":
m.Mlocked = v
case "SwapTotal:":
m.SwapTotal = v
case "SwapFree:":
m.SwapFree = v
case "Dirty:":
m.Dirty = v
case "Writeback:":
m.Writeback = v
case "AnonPages:":
m.AnonPages = v
case "Mapped:":
m.Mapped = v
case "Shmem:":
m.Shmem = v
case "Slab:":
m.Slab = v
case "SReclaimable:":
m.SReclaimable = v
case "SUnreclaim:":
m.SUnreclaim = v
case "KernelStack:":
m.KernelStack = v
case "PageTables:":
m.PageTables = v
case "NFS_Unstable:":
m.NFSUnstable = v
case "Bounce:":
m.Bounce = v
case "WritebackTmp:":
m.WritebackTmp = v
case "CommitLimit:":
m.CommitLimit = v
case "Committed_AS:":
m.CommittedAS = v
case "VmallocTotal:":
m.VmallocTotal = v
case "VmallocUsed:":
m.VmallocUsed = v
case "VmallocChunk:":
m.VmallocChunk = v
case "HardwareCorrupted:":
m.HardwareCorrupted = v
case "AnonHugePages:":
m.AnonHugePages = v
case "ShmemHugePages:":
m.ShmemHugePages = v
case "ShmemPmdMapped:":
m.ShmemPmdMapped = v
case "CmaTotal:":
m.CmaTotal = v
case "CmaFree:":
m.CmaFree = v
case "HugePages_Total:":
m.HugePagesTotal = v
case "HugePages_Free:":
m.HugePagesFree = v
case "HugePages_Rsvd:":
m.HugePagesRsvd = v
case "HugePages_Surp:":
m.HugePagesSurp = v
case "Hugepagesize:":
m.Hugepagesize = v
case "DirectMap4k:":
m.DirectMap4k = v
case "DirectMap2M:":
m.DirectMap2M = v
case "DirectMap1G:":
m.DirectMap1G = v
}
}
return &m, nil
}
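Usage sketch (illustrative, same fs handle as above). Note that the parser drops the unit column, so values are the raw kB numbers from /proc/meminfo:

mi, err := fs.Meminfo()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("MemTotal=%d kB MemAvailable=%d kB\n", mi.MemTotal, mi.MemAvailable)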

vendor/github.com/prometheus/procfs/mountinfo.go

@ -15,19 +15,13 @@ package procfs
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"strconv"
"strings"
)
var validOptionalFields = map[string]bool{
"shared": true,
"master": true,
"propagate_from": true,
"unbindable": true,
}
"github.com/prometheus/procfs/internal/util"
)
// A MountInfo is a type that describes the details, options
// for each mount, parsed from /proc/self/mountinfo.
@ -35,10 +29,10 @@ var validOptionalFields = map[string]bool{
// is described in the following man page.
// http://man7.org/linux/man-pages/man5/proc.5.html
type MountInfo struct {
// Unique Id for the mount
MountId int
// The Id of the parent mount
ParentId int
// Unique ID for the mount
MountID int
// The ID of the parent mount
ParentID int
// The value of `st_dev` for the files on this FS
MajorMinorVer string
// The pathname of the directory in the FS that forms
@ -58,18 +52,10 @@ type MountInfo struct {
SuperOptions map[string]string
}
// Returns part of the mountinfo line, if it exists, else an empty string.
func getStringSliceElement(parts []string, idx int, defaultValue string) string {
if idx >= len(parts) {
return defaultValue
}
return parts[idx]
}
// Reads each line of the mountinfo file, and returns a list of formatted MountInfo structs.
func parseMountInfo(r io.Reader) ([]*MountInfo, error) {
func parseMountInfo(info []byte) ([]*MountInfo, error) {
mounts := []*MountInfo{}
scanner := bufio.NewScanner(r)
scanner := bufio.NewScanner(bytes.NewReader(info))
for scanner.Scan() {
mountString := scanner.Text()
parsedMounts, err := parseMountInfoString(mountString)
@ -89,58 +75,76 @@ func parseMountInfo(r io.Reader) ([]*MountInfo, error) {
func parseMountInfoString(mountString string) (*MountInfo, error) {
var err error
// OptionalFields can be zero, hence these checks to ensure we do not populate the wrong values in the wrong spots
separatorIndex := strings.Index(mountString, "-")
if separatorIndex == -1 {
return nil, fmt.Errorf("no separator found in mountinfo string: %s", mountString)
mountInfo := strings.Split(mountString, " ")
mountInfoLength := len(mountInfo)
if mountInfoLength < 10 {
return nil, fmt.Errorf("couldn't find enough fields in mount string: %s", mountString)
}
beforeFields := strings.Fields(mountString[:separatorIndex])
afterFields := strings.Fields(mountString[separatorIndex+1:])
if (len(beforeFields) + len(afterFields)) < 7 {
return nil, fmt.Errorf("too few fields")
if mountInfo[mountInfoLength-4] != "-" {
return nil, fmt.Errorf("couldn't find separator in expected field: %s", mountInfo[mountInfoLength-4])
}
mount := &MountInfo{
MajorMinorVer: getStringSliceElement(beforeFields, 2, ""),
Root: getStringSliceElement(beforeFields, 3, ""),
MountPoint: getStringSliceElement(beforeFields, 4, ""),
Options: mountOptionsParser(getStringSliceElement(beforeFields, 5, "")),
MajorMinorVer: mountInfo[2],
Root: mountInfo[3],
MountPoint: mountInfo[4],
Options: mountOptionsParser(mountInfo[5]),
OptionalFields: nil,
FSType: getStringSliceElement(afterFields, 0, ""),
Source: getStringSliceElement(afterFields, 1, ""),
SuperOptions: mountOptionsParser(getStringSliceElement(afterFields, 2, "")),
FSType: mountInfo[mountInfoLength-3],
Source: mountInfo[mountInfoLength-2],
SuperOptions: mountOptionsParser(mountInfo[mountInfoLength-1]),
}
mount.MountId, err = strconv.Atoi(getStringSliceElement(beforeFields, 0, ""))
mount.MountID, err = strconv.Atoi(mountInfo[0])
if err != nil {
return nil, fmt.Errorf("failed to parse mount ID")
}
mount.ParentId, err = strconv.Atoi(getStringSliceElement(beforeFields, 1, ""))
mount.ParentID, err = strconv.Atoi(mountInfo[1])
if err != nil {
return nil, fmt.Errorf("failed to parse parent ID")
}
// Has optional fields, which is a space separated list of values.
// Example: shared:2 master:7
if len(beforeFields) > 6 {
mount.OptionalFields = make(map[string]string)
optionalFields := beforeFields[6:]
for _, field := range optionalFields {
optionSplit := strings.Split(field, ":")
target, value := optionSplit[0], ""
if len(optionSplit) == 2 {
value = optionSplit[1]
}
// Checks if the 'keys' in the optional fields in the mountinfo line are acceptable.
// Allowed 'keys' are shared, master, propagate_from, unbindable.
if _, ok := validOptionalFields[target]; ok {
mount.OptionalFields[target] = value
}
if mountInfo[6] != "" {
mount.OptionalFields, err = mountOptionsParseOptionalFields(mountInfo[6 : mountInfoLength-4])
if err != nil {
return nil, err
}
}
return mount, nil
}
// Parses the mount options, superblock options.
// mountOptionsIsValidField checks a string against a valid list of optional fields keys.
func mountOptionsIsValidField(s string) bool {
switch s {
case
"shared",
"master",
"propagate_from",
"unbindable":
return true
}
return false
}
// mountOptionsParseOptionalFields parses a list of optional field strings into a map of string keys to values.
func mountOptionsParseOptionalFields(o []string) (map[string]string, error) {
optionalFields := make(map[string]string)
for _, field := range o {
optionSplit := strings.SplitN(field, ":", 2)
value := ""
if len(optionSplit) == 2 {
value = optionSplit[1]
}
if mountOptionsIsValidField(optionSplit[0]) {
optionalFields[optionSplit[0]] = value
}
}
return optionalFields, nil
}
// mountOptionsParser parses the mount options, superblock options.
func mountOptionsParser(mountOptions string) map[string]string {
opts := make(map[string]string)
options := strings.Split(mountOptions, ",")
@ -157,22 +161,20 @@ func mountOptionsParser(mountOptions string) map[string]string {
return opts
}
// Retrieves mountinfo information from `/proc/self/mountinfo`.
// GetMounts retrieves mountinfo information from `/proc/self/mountinfo`.
func GetMounts() ([]*MountInfo, error) {
f, err := os.Open("/proc/self/mountinfo")
data, err := util.ReadFileNoStat("/proc/self/mountinfo")
if err != nil {
return nil, err
}
defer f.Close()
return parseMountInfo(f)
return parseMountInfo(data)
}
// Retrieves mountinfo information from a processes' `/proc/<pid>/mountinfo`.
// GetProcMounts retrieves mountinfo information from a process's `/proc/<pid>/mountinfo`.
func GetProcMounts(pid int) ([]*MountInfo, error) {
f, err := os.Open(fmt.Sprintf("/proc/%d/mountinfo", pid))
data, err := util.ReadFileNoStat(fmt.Sprintf("/proc/%d/mountinfo", pid))
if err != nil {
return nil, err
}
defer f.Close()
return parseMountInfo(f)
return parseMountInfo(data)
}
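Usage sketch for the renamed exported API (illustrative only; note that MountId/ParentId became MountID/ParentID, so existing callers must be updated):

mounts, err := procfs.GetMounts()
if err != nil {
	log.Fatal(err)
}
for _, m := range mounts {
	fmt.Printf("%d parent=%d %s on %s type %s\n",
		m.MountID, m.ParentID, m.Source, m.MountPoint, m.FSType)
}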

vendor/github.com/prometheus/procfs/mountstats.go

@ -186,6 +186,8 @@ type NFSOperationStats struct {
CumulativeTotalResponseMilliseconds uint64
// Duration from when a request was enqueued to when it was completely handled.
CumulativeTotalRequestMilliseconds uint64
// The count of operations that complete with tk_status < 0. These statuses usually indicate error conditions.
Errors uint64
}
// A NFSTransportStats contains statistics for the NFS mount RPC requests and
@ -494,8 +496,8 @@ func parseNFSEventsStats(ss []string) (*NFSEventsStats, error) {
// line is reached.
func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) {
const (
// Number of expected fields in each per-operation statistics set
numFields = 9
// Minimum number of expected fields in each per-operation statistics set
minFields = 9
)
var ops []NFSOperationStats
@ -508,12 +510,12 @@ func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) {
break
}
if len(ss) != numFields {
if len(ss) < minFields {
return nil, fmt.Errorf("invalid NFS per-operations stats: %v", ss)
}
// Skip string operation name for integers
ns := make([]uint64, 0, numFields-1)
ns := make([]uint64, 0, minFields-1)
for _, st := range ss[1:] {
n, err := strconv.ParseUint(st, 10, 64)
if err != nil {
@ -523,7 +525,7 @@ func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) {
ns = append(ns, n)
}
ops = append(ops, NFSOperationStats{
opStats := NFSOperationStats{
Operation: strings.TrimSuffix(ss[0], ":"),
Requests: ns[0],
Transmissions: ns[1],
@ -533,7 +535,13 @@ func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) {
CumulativeQueueMilliseconds: ns[5],
CumulativeTotalResponseMilliseconds: ns[6],
CumulativeTotalRequestMilliseconds: ns[7],
})
}
if len(ns) > 8 {
opStats.Errors = ns[8]
}
ops = append(ops, opStats)
}
return ops, s.Err()
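The new Errors counter surfaces through the existing MountStats API; a sketch of reading it (illustrative only):

p, err := procfs.Self()
if err != nil {
	log.Fatal(err)
}
mounts, err := p.MountStats()
if err != nil {
	log.Fatal(err)
}
for _, m := range mounts {
	if nfs, ok := m.Stats.(*procfs.MountStatsNFS); ok {
		for _, op := range nfs.Operations {
			// Errors stays zero on kernels that emit only 8 numeric fields.
			fmt.Printf("%s: %d requests, %d errors\n", op.Operation, op.Requests, op.Errors)
		}
	}
}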

153 vendor/github.com/prometheus/procfs/net_conntrackstat.go generated vendored Normal file

@ -0,0 +1,153 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"bytes"
"fmt"
"io"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// A ConntrackStatEntry represents one line from net/stat/nf_conntrack
// and contains netfilter conntrack statistics for one CPU core
type ConntrackStatEntry struct {
Entries uint64
Found uint64
Invalid uint64
Ignore uint64
Insert uint64
InsertFailed uint64
Drop uint64
EarlyDrop uint64
SearchRestart uint64
}
// ConntrackStat retrieves netfilter's conntrack statistics, split by CPU cores
func (fs FS) ConntrackStat() ([]ConntrackStatEntry, error) {
return readConntrackStat(fs.proc.Path("net", "stat", "nf_conntrack"))
}
// Parses a slice of ConntrackStatEntries from the given filepath
func readConntrackStat(path string) ([]ConntrackStatEntry, error) {
// This file is small and can be read with one syscall.
b, err := util.ReadFileNoStat(path)
if err != nil {
// Do not wrap this error so the caller can detect os.IsNotExist and
// similar conditions.
return nil, err
}
stat, err := parseConntrackStat(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("failed to read conntrack stats from %q: %v", path, err)
}
return stat, nil
}
// Reads the contents of a conntrack statistics file and parses a slice of ConntrackStatEntries
func parseConntrackStat(r io.Reader) ([]ConntrackStatEntry, error) {
var entries []ConntrackStatEntry
scanner := bufio.NewScanner(r)
scanner.Scan()
for scanner.Scan() {
fields := strings.Fields(scanner.Text())
conntrackEntry, err := parseConntrackStatEntry(fields)
if err != nil {
return nil, err
}
entries = append(entries, *conntrackEntry)
}
return entries, nil
}
// Parses a ConntrackStatEntry from a given slice of fields
func parseConntrackStatEntry(fields []string) (*ConntrackStatEntry, error) {
if len(fields) != 17 {
return nil, fmt.Errorf("invalid conntrackstat entry, missing fields")
}
entry := &ConntrackStatEntry{}
entries, err := parseConntrackStatField(fields[0])
if err != nil {
return nil, err
}
entry.Entries = entries
found, err := parseConntrackStatField(fields[2])
if err != nil {
return nil, err
}
entry.Found = found
invalid, err := parseConntrackStatField(fields[4])
if err != nil {
return nil, err
}
entry.Invalid = invalid
ignore, err := parseConntrackStatField(fields[5])
if err != nil {
return nil, err
}
entry.Ignore = ignore
insert, err := parseConntrackStatField(fields[8])
if err != nil {
return nil, err
}
entry.Insert = insert
insertFailed, err := parseConntrackStatField(fields[9])
if err != nil {
return nil, err
}
entry.InsertFailed = insertFailed
drop, err := parseConntrackStatField(fields[10])
if err != nil {
return nil, err
}
entry.Drop = drop
earlyDrop, err := parseConntrackStatField(fields[11])
if err != nil {
return nil, err
}
entry.EarlyDrop = earlyDrop
searchRestart, err := parseConntrackStatField(fields[16])
if err != nil {
return nil, err
}
entry.SearchRestart = searchRestart
return entry, nil
}
// Parses a uint64 from a given hex string
func parseConntrackStatField(field string) (uint64, error) {
val, err := strconv.ParseUint(field, 16, 64)
if err != nil {
return 0, fmt.Errorf("couldn't parse \"%s\" field: %s", field, err)
}
return val, err
}
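Usage sketch (illustrative, same fs handle as above); each entry corresponds to one CPU, in file order:

entries, err := fs.ConntrackStat()
if err != nil {
	log.Fatal(err) // left unwrapped so os.IsNotExist still works when conntrack is absent
}
for cpu, e := range entries {
	fmt.Printf("cpu%d entries=%d found=%d drop=%d\n", cpu, e.Entries, e.Found, e.Drop)
}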

vendor/github.com/prometheus/procfs/net_dev.go

@ -183,7 +183,6 @@ func (netDev NetDev) Total() NetDevLine {
names = append(names, ifc.Name)
total.RxBytes += ifc.RxBytes
total.RxPackets += ifc.RxPackets
total.RxPackets += ifc.RxPackets
total.RxErrors += ifc.RxErrors
total.RxDropped += ifc.RxDropped
total.RxFIFO += ifc.RxFIFO

163 vendor/github.com/prometheus/procfs/net_sockstat.go generated vendored Normal file

@ -0,0 +1,163 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// A NetSockstat contains the output of /proc/net/sockstat{,6} for IPv4 or IPv6,
// respectively.
type NetSockstat struct {
// Used is non-nil for IPv4 sockstat results, but nil for IPv6.
Used *int
Protocols []NetSockstatProtocol
}
// A NetSockstatProtocol contains statistics about a given socket protocol.
// Pointer fields indicate that the value may or may not be present on any
// given protocol.
type NetSockstatProtocol struct {
Protocol string
InUse int
Orphan *int
TW *int
Alloc *int
Mem *int
Memory *int
}
// NetSockstat retrieves IPv4 socket statistics.
func (fs FS) NetSockstat() (*NetSockstat, error) {
return readSockstat(fs.proc.Path("net", "sockstat"))
}
// NetSockstat6 retrieves IPv6 socket statistics.
//
// If IPv6 is disabled on this kernel, the returned error can be checked with
// os.IsNotExist.
func (fs FS) NetSockstat6() (*NetSockstat, error) {
return readSockstat(fs.proc.Path("net", "sockstat6"))
}
// readSockstat opens and parses a NetSockstat from the input file.
func readSockstat(name string) (*NetSockstat, error) {
// This file is small and can be read with one syscall.
b, err := util.ReadFileNoStat(name)
if err != nil {
// Do not wrap this error so the caller can detect os.IsNotExist and
// similar conditions.
return nil, err
}
stat, err := parseSockstat(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("failed to read sockstats from %q: %v", name, err)
}
return stat, nil
}
// parseSockstat reads the contents of a sockstat file and parses a NetSockstat.
func parseSockstat(r io.Reader) (*NetSockstat, error) {
var stat NetSockstat
s := bufio.NewScanner(r)
for s.Scan() {
// Expect a minimum of a protocol and one key/value pair.
fields := strings.Split(s.Text(), " ")
if len(fields) < 3 {
return nil, fmt.Errorf("malformed sockstat line: %q", s.Text())
}
// The remaining fields are key/value pairs.
kvs, err := parseSockstatKVs(fields[1:])
if err != nil {
return nil, fmt.Errorf("error parsing sockstat key/value pairs from %q: %v", s.Text(), err)
}
// The first field is the protocol. We must trim its colon suffix.
proto := strings.TrimSuffix(fields[0], ":")
switch proto {
case "sockets":
// Special case: IPv4 has a sockets "used" key/value pair that we
// embed at the top level of the structure.
used := kvs["used"]
stat.Used = &used
default:
// Parse all other lines as individual protocols.
nsp := parseSockstatProtocol(kvs)
nsp.Protocol = proto
stat.Protocols = append(stat.Protocols, nsp)
}
}
if err := s.Err(); err != nil {
return nil, err
}
return &stat, nil
}
// parseSockstatKVs parses a string slice into a map of key/value pairs.
func parseSockstatKVs(kvs []string) (map[string]int, error) {
if len(kvs)%2 != 0 {
return nil, errors.New("odd number of fields in key/value pairs")
}
// Iterate two values at a time to gather key/value pairs.
out := make(map[string]int, len(kvs)/2)
for i := 0; i < len(kvs); i += 2 {
vp := util.NewValueParser(kvs[i+1])
out[kvs[i]] = vp.Int()
if err := vp.Err(); err != nil {
return nil, err
}
}
return out, nil
}
// parseSockstatProtocol parses a NetSockstatProtocol from the input kvs map.
func parseSockstatProtocol(kvs map[string]int) NetSockstatProtocol {
var nsp NetSockstatProtocol
for k, v := range kvs {
// Capture the range variable to ensure we get unique pointers for
// each of the optional fields.
v := v
switch k {
case "inuse":
nsp.InUse = v
case "orphan":
nsp.Orphan = &v
case "tw":
nsp.TW = &v
case "alloc":
nsp.Alloc = &v
case "mem":
nsp.Mem = &v
case "memory":
nsp.Memory = &v
}
}
return nsp
}
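Usage sketch (illustrative, same fs handle as above); the Used field is only populated for IPv4:

stat, err := fs.NetSockstat()
if err != nil {
	log.Fatal(err)
}
if stat.Used != nil {
	fmt.Printf("sockets used: %d\n", *stat.Used)
}
for _, p := range stat.Protocols {
	fmt.Printf("%s inuse=%d\n", p.Protocol, p.InUse)
}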

vendor/github.com/prometheus/procfs/net_softnet.go

@ -14,78 +14,89 @@
package procfs
import (
"bufio"
"bytes"
"fmt"
"io/ioutil"
"io"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// For the proc file format details,
// see https://elixir.bootlin.com/linux/v4.17/source/net/core/net-procfs.c#L162
// See:
// * Linux 2.6.23 https://elixir.bootlin.com/linux/v2.6.23/source/net/core/dev.c#L2343
// * Linux 4.17 https://elixir.bootlin.com/linux/v4.17/source/net/core/net-procfs.c#L162
// and https://elixir.bootlin.com/linux/v4.17/source/include/linux/netdevice.h#L2810.
// SoftnetEntry contains a single row of data from /proc/net/softnet_stat
type SoftnetEntry struct {
// SoftnetStat contains a single row of data from /proc/net/softnet_stat
type SoftnetStat struct {
// Number of processed packets
Processed uint
Processed uint32
// Number of dropped packets
Dropped uint
Dropped uint32
// Number of times processing packets ran out of quota
TimeSqueezed uint
TimeSqueezed uint32
}
// GatherSoftnetStats reads /proc/net/softnet_stat, parse the relevant columns,
// and then return a slice of SoftnetEntry's.
func (fs FS) GatherSoftnetStats() ([]SoftnetEntry, error) {
data, err := ioutil.ReadFile(fs.proc.Path("net/softnet_stat"))
var softNetProcFile = "net/softnet_stat"
// NetSoftnetStat reads data from /proc/net/softnet_stat.
func (fs FS) NetSoftnetStat() ([]SoftnetStat, error) {
b, err := util.ReadFileNoStat(fs.proc.Path(softNetProcFile))
if err != nil {
return nil, fmt.Errorf("error reading softnet %s: %s", fs.proc.Path("net/softnet_stat"), err)
return nil, err
}
return parseSoftnetEntries(data)
}
func parseSoftnetEntries(data []byte) ([]SoftnetEntry, error) {
lines := strings.Split(string(data), "\n")
entries := make([]SoftnetEntry, 0)
var err error
const (
expectedColumns = 11
)
for _, line := range lines {
columns := strings.Fields(line)
width := len(columns)
if width == 0 {
continue
}
if width != expectedColumns {
return []SoftnetEntry{}, fmt.Errorf("%d columns were detected, but %d were expected", width, expectedColumns)
}
var entry SoftnetEntry
if entry, err = parseSoftnetEntry(columns); err != nil {
return []SoftnetEntry{}, err
}
entries = append(entries, entry)
entries, err := parseSoftnet(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("failed to parse /proc/net/softnet_stat: %v", err)
}
return entries, nil
}
func parseSoftnetEntry(columns []string) (SoftnetEntry, error) {
var err error
var processed, dropped, timeSqueezed uint64
if processed, err = strconv.ParseUint(columns[0], 16, 32); err != nil {
return SoftnetEntry{}, fmt.Errorf("Unable to parse column 0: %s", err)
func parseSoftnet(r io.Reader) ([]SoftnetStat, error) {
const minColumns = 9
s := bufio.NewScanner(r)
var stats []SoftnetStat
for s.Scan() {
columns := strings.Fields(s.Text())
width := len(columns)
if width < minColumns {
return nil, fmt.Errorf("%d columns were detected, but at least %d were expected", width, minColumns)
}
// We only parse the first three columns at the moment.
us, err := parseHexUint32s(columns[0:3])
if err != nil {
return nil, err
}
stats = append(stats, SoftnetStat{
Processed: us[0],
Dropped: us[1],
TimeSqueezed: us[2],
})
}
if dropped, err = strconv.ParseUint(columns[1], 16, 32); err != nil {
return SoftnetEntry{}, fmt.Errorf("Unable to parse column 1: %s", err)
}
if timeSqueezed, err = strconv.ParseUint(columns[2], 16, 32); err != nil {
return SoftnetEntry{}, fmt.Errorf("Unable to parse column 2: %s", err)
}
return SoftnetEntry{
Processed: uint(processed),
Dropped: uint(dropped),
TimeSqueezed: uint(timeSqueezed),
}, nil
return stats, nil
}
func parseHexUint32s(ss []string) ([]uint32, error) {
us := make([]uint32, 0, len(ss))
for _, s := range ss {
u, err := strconv.ParseUint(s, 16, 32)
if err != nil {
return nil, err
}
us = append(us, uint32(u))
}
return us, nil
}
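Usage sketch for the renamed call (illustrative only; GatherSoftnetStats callers must migrate to NetSoftnetStat and its uint32 fields):

stats, err := fs.NetSoftnetStat()
if err != nil {
	log.Fatal(err)
}
for cpu, s := range stats {
	fmt.Printf("cpu%d processed=%d dropped=%d squeezed=%d\n",
		cpu, s.Processed, s.Dropped, s.TimeSqueezed)
}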

229 vendor/github.com/prometheus/procfs/net_udp.go generated vendored Normal file

@ -0,0 +1,229 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"encoding/hex"
"fmt"
"io"
"net"
"os"
"strconv"
"strings"
)
const (
// readLimit is used by io.LimitReader while reading the content of the
// /proc/net/udp{,6} files. The number of lines inside such a file is dynamic
// as each line represents a single used socket.
// In theory, the number of available sockets is 65535 (2^16 - 1) per IP.
// With e.g. 150 Byte per line and the maximum number of 65535,
// the reader needs to handle 150 Byte * 65535 =~ 10 MB for a single IP.
readLimit = 4294967296 // Byte -> 4 GiB
)
type (
// NetUDP represents the contents of /proc/net/udp{,6} file without the header.
NetUDP []*netUDPLine
// NetUDPSummary provides already computed values like the total queue lengths or
// the total number of used sockets. In contrast to NetUDP it does not collect
// the parsed lines into a slice.
NetUDPSummary struct {
// TxQueueLength shows the total queue length of all parsed tx_queue lengths.
TxQueueLength uint64
// RxQueueLength shows the total queue length of all parsed rx_queue lengths.
RxQueueLength uint64
// UsedSockets shows the total number of parsed lines representing the
// number of used sockets.
UsedSockets uint64
}
// netUDPLine represents the fields parsed from a single line
// in /proc/net/udp{,6}. Fields which are not used by UDP are skipped.
// For the proc file format details, see https://linux.die.net/man/5/proc.
netUDPLine struct {
Sl uint64
LocalAddr net.IP
LocalPort uint64
RemAddr net.IP
RemPort uint64
St uint64
TxQueue uint64
RxQueue uint64
UID uint64
}
)
// NetUDP returns the IPv4 kernel/networking statistics for UDP datagrams
// read from /proc/net/udp.
func (fs FS) NetUDP() (NetUDP, error) {
return newNetUDP(fs.proc.Path("net/udp"))
}
// NetUDP6 returns the IPv6 kernel/networking statistics for UDP datagrams
// read from /proc/net/udp6.
func (fs FS) NetUDP6() (NetUDP, error) {
return newNetUDP(fs.proc.Path("net/udp6"))
}
// NetUDPSummary returns already computed statistics like the total queue lengths
// for UDP datagrams read from /proc/net/udp.
func (fs FS) NetUDPSummary() (*NetUDPSummary, error) {
return newNetUDPSummary(fs.proc.Path("net/udp"))
}
// NetUDP6Summary returns already computed statistics like the total queue lengths
// for UDP datagrams read from /proc/net/udp6.
func (fs FS) NetUDP6Summary() (*NetUDPSummary, error) {
return newNetUDPSummary(fs.proc.Path("net/udp6"))
}
// newNetUDP creates a new NetUDP{,6} from the contents of the given file.
func newNetUDP(file string) (NetUDP, error) {
f, err := os.Open(file)
if err != nil {
return nil, err
}
defer f.Close()
netUDP := NetUDP{}
lr := io.LimitReader(f, readLimit)
s := bufio.NewScanner(lr)
s.Scan() // skip first line with headers
for s.Scan() {
fields := strings.Fields(s.Text())
line, err := parseNetUDPLine(fields)
if err != nil {
return nil, err
}
netUDP = append(netUDP, line)
}
if err := s.Err(); err != nil {
return nil, err
}
return netUDP, nil
}
// newNetUDPSummary creates a new NetUDP{,6} from the contents of the given file.
func newNetUDPSummary(file string) (*NetUDPSummary, error) {
f, err := os.Open(file)
if err != nil {
return nil, err
}
defer f.Close()
netUDPSummary := &NetUDPSummary{}
lr := io.LimitReader(f, readLimit)
s := bufio.NewScanner(lr)
s.Scan() // skip first line with headers
for s.Scan() {
fields := strings.Fields(s.Text())
line, err := parseNetUDPLine(fields)
if err != nil {
return nil, err
}
netUDPSummary.TxQueueLength += line.TxQueue
netUDPSummary.RxQueueLength += line.RxQueue
netUDPSummary.UsedSockets++
}
if err := s.Err(); err != nil {
return nil, err
}
return netUDPSummary, nil
}
// parseNetUDPLine parses a single line, represented by a list of fields.
func parseNetUDPLine(fields []string) (*netUDPLine, error) {
line := &netUDPLine{}
if len(fields) < 8 {
return nil, fmt.Errorf(
"cannot parse net udp socket line as it has less then 8 columns: %s",
strings.Join(fields, " "),
)
}
var err error // parse error
// sl
s := strings.Split(fields[0], ":")
if len(s) != 2 {
return nil, fmt.Errorf(
"cannot parse sl field in udp socket line: %s", fields[0])
}
if line.Sl, err = strconv.ParseUint(s[0], 0, 64); err != nil {
return nil, fmt.Errorf("cannot parse sl value in udp socket line: %s", err)
}
// local_address
l := strings.Split(fields[1], ":")
if len(l) != 2 {
return nil, fmt.Errorf(
"cannot parse local_address field in udp socket line: %s", fields[1])
}
if line.LocalAddr, err = hex.DecodeString(l[0]); err != nil {
return nil, fmt.Errorf(
"cannot parse local_address value in udp socket line: %s", err)
}
if line.LocalPort, err = strconv.ParseUint(l[1], 16, 64); err != nil {
return nil, fmt.Errorf(
"cannot parse local_address port value in udp socket line: %s", err)
}
// remote_address
r := strings.Split(fields[2], ":")
if len(r) != 2 {
return nil, fmt.Errorf(
"cannot parse rem_address field in udp socket line: %s", fields[1])
}
if line.RemAddr, err = hex.DecodeString(r[0]); err != nil {
return nil, fmt.Errorf(
"cannot parse rem_address value in udp socket line: %s", err)
}
if line.RemPort, err = strconv.ParseUint(r[1], 16, 64); err != nil {
return nil, fmt.Errorf(
"cannot parse rem_address port value in udp socket line: %s", err)
}
// st
if line.St, err = strconv.ParseUint(fields[3], 16, 64); err != nil {
return nil, fmt.Errorf(
"cannot parse st value in udp socket line: %s", err)
}
// tx_queue and rx_queue
q := strings.Split(fields[4], ":")
if len(q) != 2 {
return nil, fmt.Errorf(
"cannot parse tx/rx queues in udp socket line as it has a missing colon: %s",
fields[4],
)
}
if line.TxQueue, err = strconv.ParseUint(q[0], 16, 64); err != nil {
return nil, fmt.Errorf("cannot parse tx_queue value in udp socket line: %s", err)
}
if line.RxQueue, err = strconv.ParseUint(q[1], 16, 64); err != nil {
return nil, fmt.Errorf("cannot parse rx_queue value in udp socket line: %s", err)
}
// uid
if line.UID, err = strconv.ParseUint(fields[7], 0, 64); err != nil {
return nil, fmt.Errorf(
"cannot parse uid value in udp socket line: %s", err)
}
return line, nil
}
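Usage sketch (illustrative, same fs handle as above):

sum, err := fs.NetUDPSummary()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("udp sockets=%d tx_queue=%d rx_queue=%d\n",
	sum.UsedSockets, sum.TxQueueLength, sum.RxQueueLength)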

vendor/github.com/prometheus/procfs/net_unix.go

@ -15,7 +15,6 @@ package procfs
import (
"bufio"
"errors"
"fmt"
"io"
"os"
@ -27,25 +26,15 @@ import (
// see https://elixir.bootlin.com/linux/v4.17/source/net/unix/af_unix.c#L2815
// and https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/net.h#L48.
const (
netUnixKernelPtrIdx = iota
netUnixRefCountIdx
_
netUnixFlagsIdx
netUnixTypeIdx
netUnixStateIdx
netUnixInodeIdx
// Inode and Path are optional.
netUnixStaticFieldsCnt = 6
)
// Constants for the various /proc/net/unix enumerations.
// TODO: match against x/sys/unix or similar?
const (
netUnixTypeStream = 1
netUnixTypeDgram = 2
netUnixTypeSeqpacket = 5
netUnixFlagListen = 1 << 16
netUnixFlagDefault = 0
netUnixFlagListen = 1 << 16
netUnixStateUnconnected = 1
netUnixStateConnecting = 2
@ -53,129 +42,127 @@ const (
netUnixStateDisconnected = 4
)
var errInvalidKernelPtrFmt = errors.New("Invalid Num(the kernel table slot number) format")
// NetUNIXType is the type of the type field.
type NetUNIXType uint64
// NetUnixType is the type of the type field.
type NetUnixType uint64
// NetUNIXFlags is the type of the flags field.
type NetUNIXFlags uint64
// NetUnixFlags is the type of the flags field.
type NetUnixFlags uint64
// NetUNIXState is the type of the state field.
type NetUNIXState uint64
// NetUnixState is the type of the state field.
type NetUnixState uint64
// NetUnixLine represents a line of /proc/net/unix.
type NetUnixLine struct {
// NetUNIXLine represents a line of /proc/net/unix.
type NetUNIXLine struct {
KernelPtr string
RefCount uint64
Protocol uint64
Flags NetUnixFlags
Type NetUnixType
State NetUnixState
Flags NetUNIXFlags
Type NetUNIXType
State NetUNIXState
Inode uint64
Path string
}
// NetUnix holds the data read from /proc/net/unix.
type NetUnix struct {
Rows []*NetUnixLine
// NetUNIX holds the data read from /proc/net/unix.
type NetUNIX struct {
Rows []*NetUNIXLine
}
// NewNetUnix returns data read from /proc/net/unix.
func NewNetUnix() (*NetUnix, error) {
fs, err := NewFS(DefaultMountPoint)
if err != nil {
return nil, err
}
return fs.NewNetUnix()
// NetUNIX returns data read from /proc/net/unix.
func (fs FS) NetUNIX() (*NetUNIX, error) {
return readNetUNIX(fs.proc.Path("net/unix"))
}
// NewNetUnix returns data read from /proc/net/unix.
func (fs FS) NewNetUnix() (*NetUnix, error) {
return NewNetUnixByPath(fs.proc.Path("net/unix"))
}
// NewNetUnixByPath returns data read from /proc/net/unix by file path.
// It might returns an error with partial parsed data, if an error occur after some data parsed.
func NewNetUnixByPath(path string) (*NetUnix, error) {
f, err := os.Open(path)
// readNetUNIX reads data in /proc/net/unix format from the specified file.
func readNetUNIX(file string) (*NetUNIX, error) {
// This file could be quite large and a streaming read is desirable versus
// reading the entire contents at once.
f, err := os.Open(file)
if err != nil {
return nil, err
}
defer f.Close()
return NewNetUnixByReader(f)
return parseNetUNIX(f)
}
// NewNetUnixByReader returns data read from /proc/net/unix by a reader.
// It might returns an error with partial parsed data, if an error occur after some data parsed.
func NewNetUnixByReader(reader io.Reader) (*NetUnix, error) {
nu := &NetUnix{
Rows: make([]*NetUnixLine, 0, 32),
}
scanner := bufio.NewScanner(reader)
// Omit the header line.
scanner.Scan()
header := scanner.Text()
// From the man page of proc(5), it does not contain an Inode field,
// but in actually it exists.
// This code works for both cases.
hasInode := strings.Contains(header, "Inode")
// parseNetUNIX creates a NetUnix structure from the incoming stream.
func parseNetUNIX(r io.Reader) (*NetUNIX, error) {
// Begin scanning by checking for the existence of Inode.
s := bufio.NewScanner(r)
s.Scan()
minFieldsCnt := netUnixStaticFieldsCnt
// From the man page of proc(5), it does not contain an Inode field,
// but in practice it exists. This code works for both cases.
hasInode := strings.Contains(s.Text(), "Inode")
// Expect a minimum number of fields, but Inode and Path are optional:
// Num RefCount Protocol Flags Type St Inode Path
minFields := 6
if hasInode {
minFieldsCnt++
minFields++
}
for scanner.Scan() {
line := scanner.Text()
item, err := nu.parseLine(line, hasInode, minFieldsCnt)
var nu NetUNIX
for s.Scan() {
line := s.Text()
item, err := nu.parseLine(line, hasInode, minFields)
if err != nil {
return nu, err
return nil, fmt.Errorf("failed to parse /proc/net/unix data %q: %v", line, err)
}
nu.Rows = append(nu.Rows, item)
}
return nu, scanner.Err()
if err := s.Err(); err != nil {
return nil, fmt.Errorf("failed to scan /proc/net/unix data: %v", err)
}
return &nu, nil
}
func (u *NetUnix) parseLine(line string, hasInode bool, minFieldsCnt int) (*NetUnixLine, error) {
func (u *NetUNIX) parseLine(line string, hasInode bool, min int) (*NetUNIXLine, error) {
fields := strings.Fields(line)
fieldsLen := len(fields)
if fieldsLen < minFieldsCnt {
return nil, fmt.Errorf(
"Parse Unix domain failed: expect at least %d fields but got %d",
minFieldsCnt, fieldsLen)
l := len(fields)
if l < min {
return nil, fmt.Errorf("expected at least %d fields but got %d", min, l)
}
kernelPtr, err := u.parseKernelPtr(fields[netUnixKernelPtrIdx])
// Field offsets are as follows:
// Num RefCount Protocol Flags Type St Inode Path
kernelPtr := strings.TrimSuffix(fields[0], ":")
users, err := u.parseUsers(fields[1])
if err != nil {
return nil, fmt.Errorf("Parse Unix domain num(%s) failed: %s", fields[netUnixKernelPtrIdx], err)
return nil, fmt.Errorf("failed to parse ref count(%s): %v", fields[1], err)
}
users, err := u.parseUsers(fields[netUnixRefCountIdx])
flags, err := u.parseFlags(fields[3])
if err != nil {
return nil, fmt.Errorf("Parse Unix domain ref count(%s) failed: %s", fields[netUnixRefCountIdx], err)
return nil, fmt.Errorf("failed to parse flags(%s): %v", fields[3], err)
}
flags, err := u.parseFlags(fields[netUnixFlagsIdx])
typ, err := u.parseType(fields[4])
if err != nil {
return nil, fmt.Errorf("Parse Unix domain flags(%s) failed: %s", fields[netUnixFlagsIdx], err)
return nil, fmt.Errorf("failed to parse type(%s): %v", fields[4], err)
}
typ, err := u.parseType(fields[netUnixTypeIdx])
state, err := u.parseState(fields[5])
if err != nil {
return nil, fmt.Errorf("Parse Unix domain type(%s) failed: %s", fields[netUnixTypeIdx], err)
}
state, err := u.parseState(fields[netUnixStateIdx])
if err != nil {
return nil, fmt.Errorf("Parse Unix domain state(%s) failed: %s", fields[netUnixStateIdx], err)
return nil, fmt.Errorf("failed to parse state(%s): %v", fields[5], err)
}
var inode uint64
if hasInode {
inodeStr := fields[netUnixInodeIdx]
inode, err = u.parseInode(inodeStr)
inode, err = u.parseInode(fields[6])
if err != nil {
return nil, fmt.Errorf("Parse Unix domain inode(%s) failed: %s", inodeStr, err)
return nil, fmt.Errorf("failed to parse inode(%s): %v", fields[6], err)
}
}
nuLine := &NetUnixLine{
n := &NetUNIXLine{
KernelPtr: kernelPtr,
RefCount: users,
Type: typ,
@ -185,61 +172,56 @@ func (u *NetUnix) parseLine(line string, hasInode bool, minFieldsCnt int) (*NetU
}
// Path field is optional.
if fieldsLen > minFieldsCnt {
pathIdx := netUnixInodeIdx + 1
if l > min {
// Path occurs at either index 6 or 7 depending on whether inode is
// already present.
pathIdx := 7
if !hasInode {
pathIdx--
}
nuLine.Path = fields[pathIdx]
n.Path = fields[pathIdx]
}
return nuLine, nil
return n, nil
}
func (u NetUnix) parseKernelPtr(str string) (string, error) {
if !strings.HasSuffix(str, ":") {
return "", errInvalidKernelPtrFmt
}
return str[:len(str)-1], nil
func (u NetUNIX) parseUsers(s string) (uint64, error) {
return strconv.ParseUint(s, 16, 32)
}
func (u NetUnix) parseUsers(hexStr string) (uint64, error) {
return strconv.ParseUint(hexStr, 16, 32)
}
func (u NetUnix) parseProtocol(hexStr string) (uint64, error) {
return strconv.ParseUint(hexStr, 16, 32)
}
func (u NetUnix) parseType(hexStr string) (NetUnixType, error) {
typ, err := strconv.ParseUint(hexStr, 16, 16)
func (u NetUNIX) parseType(s string) (NetUNIXType, error) {
typ, err := strconv.ParseUint(s, 16, 16)
if err != nil {
return 0, err
}
return NetUnixType(typ), nil
return NetUNIXType(typ), nil
}
func (u NetUnix) parseFlags(hexStr string) (NetUnixFlags, error) {
flags, err := strconv.ParseUint(hexStr, 16, 32)
func (u NetUNIX) parseFlags(s string) (NetUNIXFlags, error) {
flags, err := strconv.ParseUint(s, 16, 32)
if err != nil {
return 0, err
}
return NetUnixFlags(flags), nil
return NetUNIXFlags(flags), nil
}
func (u NetUnix) parseState(hexStr string) (NetUnixState, error) {
st, err := strconv.ParseInt(hexStr, 16, 8)
func (u NetUNIX) parseState(s string) (NetUNIXState, error) {
st, err := strconv.ParseInt(s, 16, 8)
if err != nil {
return 0, err
}
return NetUnixState(st), nil
return NetUNIXState(st), nil
}
func (u NetUnix) parseInode(inodeStr string) (uint64, error) {
return strconv.ParseUint(inodeStr, 10, 64)
func (u NetUNIX) parseInode(s string) (uint64, error) {
return strconv.ParseUint(s, 10, 64)
}
func (t NetUnixType) String() string {
func (t NetUNIXType) String() string {
switch t {
case netUnixTypeStream:
return "stream"
@ -251,7 +233,7 @@ func (t NetUnixType) String() string {
return "unknown"
}
func (f NetUnixFlags) String() string {
func (f NetUNIXFlags) String() string {
switch f {
case netUnixFlagListen:
return "listen"
@ -260,7 +242,7 @@ func (f NetUnixFlags) String() string {
}
}
func (s NetUnixState) String() string {
func (s NetUNIXState) String() string {
switch s {
case netUnixStateUnconnected:
return "unconnected"

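Usage sketch for the renamed API (illustrative only; NewNetUnix/NewNetUnixByPath became the fs.NetUNIX method and the NetUNIX* types):

nu, err := fs.NetUNIX()
if err != nil {
	log.Fatal(err)
}
for _, row := range nu.Rows {
	// Type and State render via their String methods, e.g. "stream"/"connected".
	fmt.Printf("%s %s %s %s\n", row.KernelPtr, row.Type, row.State, row.Path)
}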
vendor/github.com/prometheus/procfs/proc.go

@ -22,6 +22,7 @@ import (
"strings"
"github.com/prometheus/procfs/internal/fs"
"github.com/prometheus/procfs/internal/util"
)
// Proc provides information about a running process.
@ -121,13 +122,7 @@ func (fs FS) AllProcs() (Procs, error) {
// CmdLine returns the command line of a process.
func (p Proc) CmdLine() ([]string, error) {
f, err := os.Open(p.path("cmdline"))
if err != nil {
return nil, err
}
defer f.Close()
data, err := ioutil.ReadAll(f)
data, err := util.ReadFileNoStat(p.path("cmdline"))
if err != nil {
return nil, err
}
@ -139,9 +134,9 @@ func (p Proc) CmdLine() ([]string, error) {
return strings.Split(string(bytes.TrimRight(data, string("\x00"))), string(byte(0))), nil
}
// Comm returns the command name of a process.
func (p Proc) Comm() (string, error) {
f, err := os.Open(p.path("comm"))
// Wchan returns the wchan (wait channel) of a process.
func (p Proc) Wchan() (string, error) {
f, err := os.Open(p.path("wchan"))
if err != nil {
return "", err
}
@ -152,6 +147,21 @@ func (p Proc) Comm() (string, error) {
return "", err
}
wchan := string(data)
if wchan == "" || wchan == "0" {
return "", nil
}
return wchan, nil
}
// Comm returns the command name of a process.
func (p Proc) Comm() (string, error) {
data, err := util.ReadFileNoStat(p.path("comm"))
if err != nil {
return "", err
}
return strings.TrimSpace(string(data)), nil
}
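A sketch of the new Wchan accessor (illustrative only); it yields an empty string when the process is not blocked in the kernel:

p, err := procfs.Self()
if err != nil {
	log.Fatal(err)
}
wchan, err := p.Wchan()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("wchan=%q\n", wchan) // e.g. "ep_poll", or "" when not waiting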
@ -252,13 +262,11 @@ func (p Proc) MountStats() ([]*Mount, error) {
// It supplies information missing in `/proc/self/mounts` and
// fixes various other problems with that file too.
func (p Proc) MountInfo() ([]*MountInfo, error) {
f, err := os.Open(p.path("mountinfo"))
data, err := util.ReadFileNoStat(p.path("mountinfo"))
if err != nil {
return nil, err
}
defer f.Close()
return parseMountInfo(f)
return parseMountInfo(data)
}
func (p Proc) fileDescriptors() ([]string, error) {

98 vendor/github.com/prometheus/procfs/proc_cgroup.go generated vendored Normal file

@ -0,0 +1,98 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"bytes"
"fmt"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// Cgroup models one line from /proc/[pid]/cgroup. Each Cgroup struct describes the placement of a PID inside a
// specific control hierarchy. The kernel has two cgroup APIs, v1 and v2. v1 has one hierarchy per available resource
// controller, while v2 has one unified hierarchy shared by all controllers. Regardless of v1 or v2, all hierarchies
// contain all running processes, so the question answerable with a Cgroup struct is 'where is this process in
// this hierarchy' (where==what path on the specific cgroupfs). By prefixing this path with the mount point of
// *this specific* hierarchy, you can locate the relevant pseudo-files needed to read/set the data for this PID
// in this hierarchy
//
// Also see http://man7.org/linux/man-pages/man7/cgroups.7.html
type Cgroup struct {
// HierarchyID that can be matched to a named hierarchy using /proc/cgroups. Cgroups V2 only has one
// hierarchy, so HierarchyID is always 0. For cgroups v1 this is a unique ID number
HierarchyID int
// Controllers using this hierarchy of processes. Controllers are also known as subsystems. For
// Cgroups V2 this may be empty, as all active controllers use the same hierarchy
Controllers []string
// Path of this control group, relative to the mount point of the cgroupfs representing this specific
// hierarchy
Path string
}
// parseCgroupString parses each line of the /proc/[pid]/cgroup file
// Line format is hierarchyID:[controller1,controller2]:path
func parseCgroupString(cgroupStr string) (*Cgroup, error) {
var err error
fields := strings.Split(cgroupStr, ":")
if len(fields) < 3 {
return nil, fmt.Errorf("at least 3 fields required, found %d fields in cgroup string: %s", len(fields), cgroupStr)
}
cgroup := &Cgroup{
Path: fields[2],
Controllers: nil,
}
cgroup.HierarchyID, err = strconv.Atoi(fields[0])
if err != nil {
return nil, fmt.Errorf("failed to parse hierarchy ID")
}
if fields[1] != "" {
ssNames := strings.Split(fields[1], ",")
cgroup.Controllers = append(cgroup.Controllers, ssNames...)
}
return cgroup, nil
}
// parseCgroups reads each line of the /proc/[pid]/cgroup file
func parseCgroups(data []byte) ([]Cgroup, error) {
var cgroups []Cgroup
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
mountString := scanner.Text()
parsedMounts, err := parseCgroupString(mountString)
if err != nil {
return nil, err
}
cgroups = append(cgroups, *parsedMounts)
}
err := scanner.Err()
return cgroups, err
}
// Cgroups reads from /proc/<pid>/cgroup and returns a []Cgroup struct locating this PID in each process
// control hierarchy running on this system. On every system (v1 and v2), all hierarchies contain all processes,
// so the length of the returned slice is equal to the number of active hierarchies on this system
func (p Proc) Cgroups() ([]Cgroup, error) {
data, err := util.ReadFileNoStat(fmt.Sprintf("/proc/%d/cgroup", p.PID))
if err != nil {
return nil, err
}
return parseCgroups(data)
}
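Usage sketch (illustrative, reusing the p handle from the Wchan example):

cgroups, err := p.Cgroups()
if err != nil {
	log.Fatal(err)
}
for _, cg := range cgroups {
	// HierarchyID is 0 on a pure cgroups v2 system.
	fmt.Printf("hierarchy=%d controllers=%v path=%s\n",
		cg.HierarchyID, cg.Controllers, cg.Path)
}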

vendor/github.com/prometheus/procfs/proc_environ.go

@ -14,22 +14,16 @@
package procfs
import (
"io/ioutil"
"os"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// Environ reads process environments from /proc/<pid>/environ
func (p Proc) Environ() ([]string, error) {
environments := make([]string, 0)
f, err := os.Open(p.path("environ"))
if err != nil {
return environments, err
}
defer f.Close()
data, err := ioutil.ReadAll(f)
data, err := util.ReadFileNoStat(p.path("environ"))
if err != nil {
return environments, err
}

vendor/github.com/prometheus/procfs/proc_fdinfo.go

@ -15,19 +15,20 @@ package procfs
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"bytes"
"errors"
"regexp"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// Regexp variables
var (
rPos = regexp.MustCompile(`^pos:\s+(\d+)$`)
rFlags = regexp.MustCompile(`^flags:\s+(\d+)$`)
rMntID = regexp.MustCompile(`^mnt_id:\s+(\d+)$`)
rInotify = regexp.MustCompile(`^inotify`)
rPos = regexp.MustCompile(`^pos:\s+(\d+)$`)
rFlags = regexp.MustCompile(`^flags:\s+(\d+)$`)
rMntID = regexp.MustCompile(`^mnt_id:\s+(\d+)$`)
rInotify = regexp.MustCompile(`^inotify`)
rInotifyParts = regexp.MustCompile(`^inotify\s+wd:([0-9a-f]+)\s+ino:([0-9a-f]+)\s+sdev:([0-9a-f]+)(?:\s+mask:([0-9a-f]+))?`)
)
// ProcFDInfo contains represents file descriptor information.
@ -40,27 +41,21 @@ type ProcFDInfo struct {
Flags string
// Mount point ID
MntID string
// List of inotify lines (structed) in the fdinfo file (kernel 3.8+ only)
// List of inotify lines (structured) in the fdinfo file (kernel 3.8+ only)
InotifyInfos []InotifyInfo
}
// FDInfo constructor. On kernels older than 3.8, InotifyInfos will always be empty.
func (p Proc) FDInfo(fd string) (*ProcFDInfo, error) {
f, err := os.Open(p.path("fdinfo", fd))
data, err := util.ReadFileNoStat(p.path("fdinfo", fd))
if err != nil {
return nil, err
}
defer f.Close()
fdinfo, err := ioutil.ReadAll(f)
if err != nil {
return nil, fmt.Errorf("could not read %s: %s", f.Name(), err)
}
var text, pos, flags, mntid string
var inotify []InotifyInfo
scanner := bufio.NewScanner(strings.NewReader(string(fdinfo)))
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
text = scanner.Text()
if rPos.MatchString(text) {
@ -103,15 +98,21 @@ type InotifyInfo struct {
// InotifyInfo constructor. Only available on kernel 3.8+.
func parseInotifyInfo(line string) (*InotifyInfo, error) {
r := regexp.MustCompile(`^inotify\s+wd:([0-9a-f]+)\s+ino:([0-9a-f]+)\s+sdev:([0-9a-f]+)\s+mask:([0-9a-f]+)`)
m := r.FindStringSubmatch(line)
i := &InotifyInfo{
WD: m[1],
Ino: m[2],
Sdev: m[3],
Mask: m[4],
m := rInotifyParts.FindStringSubmatch(line)
if len(m) >= 4 {
var mask string
if len(m) == 5 {
mask = m[4]
}
i := &InotifyInfo{
WD: m[1],
Ino: m[2],
Sdev: m[3],
Mask: mask,
}
return i, nil
}
return i, nil
return nil, errors.New("invalid inode entry: " + line)
}
// ProcFDInfos represents a list of ProcFDInfo structs.

vendor/github.com/prometheus/procfs/proc_io.go

@ -15,8 +15,8 @@ package procfs
import (
"fmt"
"io/ioutil"
"os"
"github.com/prometheus/procfs/internal/util"
)
// ProcIO models the content of /proc/<pid>/io.
@ -43,13 +43,7 @@ type ProcIO struct {
func (p Proc) IO() (ProcIO, error) {
pio := ProcIO{}
f, err := os.Open(p.path("io"))
if err != nil {
return pio, err
}
defer f.Close()
data, err := ioutil.ReadAll(f)
data, err := util.ReadFileNoStat(p.path("io"))
if err != nil {
return pio, err
}

209 vendor/github.com/prometheus/procfs/proc_maps.go generated vendored Normal file

@ -0,0 +1,209 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build aix darwin dragonfly freebsd linux netbsd openbsd solaris
package procfs
import (
"bufio"
"fmt"
"os"
"strconv"
"strings"
"golang.org/x/sys/unix"
)
// ProcMapPermissions contains permission settings read from /proc/[pid]/maps
type ProcMapPermissions struct {
// mapping has the [R]ead flag set
Read bool
// mapping has the [W]rite flag set
Write bool
// mapping has the [X]ecutable flag set
Execute bool
// mapping has the [S]hared flag set
Shared bool
// mapping is marked as [P]rivate (copy on write)
Private bool
}
// ProcMap contains the memory mappings of a process,
// read from /proc/[pid]/maps.
type ProcMap struct {
// The start address of the current mapping.
StartAddr uintptr
// The end address of the current mapping
EndAddr uintptr
// The permissions for this mapping
Perms *ProcMapPermissions
// The current offset into the file/fd (e.g., shared libs)
Offset int64
// Device owner of this mapping (major:minor) in Mkdev format.
Dev uint64
// The inode of the device above
Inode uint64
// The file or pseudofile (or empty==anonymous)
Pathname string
}
// parseDevice parses the device token of a line and converts it to a dev_t
// (mkdev) like structure.
func parseDevice(s string) (uint64, error) {
toks := strings.Split(s, ":")
if len(toks) < 2 {
return 0, fmt.Errorf("unexpected number of fields")
}
major, err := strconv.ParseUint(toks[0], 16, 0)
if err != nil {
return 0, err
}
minor, err := strconv.ParseUint(toks[1], 16, 0)
if err != nil {
return 0, err
}
return unix.Mkdev(uint32(major), uint32(minor)), nil
}
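A small sketch of the Mkdev round trip used above; the major/minor numbers are illustrative (8:1 is conventionally /dev/sda1 on Linux):

package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    dev := unix.Mkdev(8, 1)
    fmt.Println(unix.Major(dev), unix.Minor(dev)) // 8 1
}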
// parseAddress just converts a hex-string to a uintptr
func parseAddress(s string) (uintptr, error) {
a, err := strconv.ParseUint(s, 16, 0)
if err != nil {
return 0, err
}
return uintptr(a), nil
}
// parseAddresses parses the start-end address
func parseAddresses(s string) (uintptr, uintptr, error) {
toks := strings.Split(s, "-")
if len(toks) < 2 {
return 0, 0, fmt.Errorf("invalid address")
}
saddr, err := parseAddress(toks[0])
if err != nil {
return 0, 0, err
}
eaddr, err := parseAddress(toks[1])
if err != nil {
return 0, 0, err
}
return saddr, eaddr, nil
}
// parsePermissions parses a permission token (e.g. "rwxp") and returns the flags that are set.
func parsePermissions(s string) (*ProcMapPermissions, error) {
if len(s) < 4 {
return nil, fmt.Errorf("invalid permissions token")
}
perms := ProcMapPermissions{}
for _, ch := range s {
switch ch {
case 'r':
perms.Read = true
case 'w':
perms.Write = true
case 'x':
perms.Execute = true
case 'p':
perms.Private = true
case 's':
perms.Shared = true
}
}
return &perms, nil
}
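To make the flag semantics concrete, a standalone sketch decoding a typical maps permission token; '-' positions simply fall through the switch, exactly as above:

package main

import "fmt"

func main() {
    // "r-xp" is typical for a mapped code segment: readable, executable,
    // and private (copy-on-write).
    for _, ch := range "r-xp" {
        switch ch {
        case 'r':
            fmt.Println("read")
        case 'w':
            fmt.Println("write")
        case 'x':
            fmt.Println("execute")
        case 'p':
            fmt.Println("private")
        case 's':
            fmt.Println("shared")
        }
    }
}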
// parseProcMap will attempt to parse a single line within a proc/[pid]/maps
// buffer.
func parseProcMap(text string) (*ProcMap, error) {
fields := strings.Fields(text)
if len(fields) < 5 {
return nil, fmt.Errorf("truncated procmap entry")
}
saddr, eaddr, err := parseAddresses(fields[0])
if err != nil {
return nil, err
}
perms, err := parsePermissions(fields[1])
if err != nil {
return nil, err
}
offset, err := strconv.ParseInt(fields[2], 16, 0)
if err != nil {
return nil, err
}
device, err := parseDevice(fields[3])
if err != nil {
return nil, err
}
inode, err := strconv.ParseUint(fields[4], 10, 0)
if err != nil {
return nil, err
}
pathname := ""
if len(fields) >= 5 {
pathname = strings.Join(fields[5:], " ")
}
return &ProcMap{
StartAddr: saddr,
EndAddr: eaddr,
Perms: perms,
Offset: offset,
Dev: device,
Inode: inode,
Pathname: pathname,
}, nil
}
// ProcMaps reads from /proc/[pid]/maps to get the memory-mappings of the
// process.
func (p Proc) ProcMaps() ([]*ProcMap, error) {
file, err := os.Open(p.path("maps"))
if err != nil {
return nil, err
}
defer file.Close()
maps := []*ProcMap{}
scan := bufio.NewScanner(file)
for scan.Scan() {
m, err := parseProcMap(scan.Text())
if err != nil {
return nil, err
}
maps = append(maps, m)
}
return maps, nil
}
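A minimal usage sketch of ProcMaps, assuming a mounted /proc; it prints the executable mappings of the current process:

package main

import (
    "fmt"

    "github.com/prometheus/procfs"
)

func main() {
    p, err := procfs.Self()
    if err != nil {
        panic(err)
    }
    maps, err := p.ProcMaps()
    if err != nil {
        panic(err)
    }
    for _, m := range maps {
        if m.Perms.Execute {
            fmt.Printf("%x-%x %s\n", m.StartAddr, m.EndAddr, m.Pathname)
        }
    }
}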

View file

@ -24,11 +24,13 @@ package procfs
// > full avg10=0.00 avg60=0.13 avg300=0.96 total=8183134
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"strings"
"github.com/prometheus/procfs/internal/util"
)
const lineFormat = "avg10=%f avg60=%f avg300=%f total=%d"
@ -55,24 +57,21 @@ type PSIStats struct {
// resource from /proc/pressure/<resource>. At time of writing this can be
// either "cpu", "memory" or "io".
func (fs FS) PSIStatsForResource(resource string) (PSIStats, error) {
file, err := os.Open(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource)))
data, err := util.ReadFileNoStat(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource)))
if err != nil {
return PSIStats{}, fmt.Errorf("psi_stats: unavailable for %s", resource)
}
defer file.Close()
return parsePSIStats(resource, file)
return parsePSIStats(resource, bytes.NewReader(data))
}
// parsePSIStats parses the specified file for pressure stall information
func parsePSIStats(resource string, file io.Reader) (PSIStats, error) {
func parsePSIStats(resource string, r io.Reader) (PSIStats, error) {
psiStats := PSIStats{}
stats, err := ioutil.ReadAll(file)
if err != nil {
return psiStats, fmt.Errorf("psi_stats: unable to read data for %s", resource)
}
for _, l := range strings.Split(string(stats), "\n") {
scanner := bufio.NewScanner(r)
for scanner.Scan() {
l := scanner.Text()
prefix := strings.Split(l, " ")[0]
switch prefix {
case "some":

165
vendor/github.com/prometheus/procfs/proc_smaps.go generated vendored Normal file
View file

@ -0,0 +1,165 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !windows
package procfs
import (
"bufio"
"errors"
"fmt"
"os"
"regexp"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
var (
// match the header line before each mapped zone in /proc/pid/smaps
procSMapsHeaderLine = regexp.MustCompile(`^[a-f0-9].*$`)
)
// ProcSMapsRollup models the memory statistics of a process summed over all
// of its mappings, as found in /proc/[pid]/smaps_rollup.
type ProcSMapsRollup struct {
// Amount of the mapping that is currently resident in RAM
Rss uint64
// Process's proportional share of this mapping
Pss uint64
// Size in bytes of clean shared pages
SharedClean uint64
// Size in bytes of dirty shared pages
SharedDirty uint64
// Size in bytes of clean private pages
PrivateClean uint64
// Size in bytes of dirty private pages
PrivateDirty uint64
// Amount of memory currently marked as referenced or accessed
Referenced uint64
// Amount of memory that does not belong to any file
Anonymous uint64
// Amount of would-be-anonymous memory currently on swap
Swap uint64
// Process's proportional memory on swap
SwapPss uint64
}
// ProcSMapsRollup reads from /proc/[pid]/smaps_rollup to get summed memory information of the
// process.
//
// If smaps_rollup does not exist (requires kernel >= 4.15), the content of
// /proc/pid/smaps will be read and summed.
func (p Proc) ProcSMapsRollup() (ProcSMapsRollup, error) {
data, err := util.ReadFileNoStat(p.path("smaps_rollup"))
if err != nil && os.IsNotExist(err) {
return p.procSMapsRollupManual()
}
if err != nil {
return ProcSMapsRollup{}, err
}
lines := strings.Split(string(data), "\n")
smaps := ProcSMapsRollup{}
// skip the first line, which doesn't contain information we need
lines = lines[1:]
for _, line := range lines {
if line == "" {
continue
}
if err := smaps.parseLine(line); err != nil {
return ProcSMapsRollup{}, err
}
}
return smaps, nil
}
// Read /proc/pid/smaps and do the roll-up in Go code.
func (p Proc) procSMapsRollupManual() (ProcSMapsRollup, error) {
file, err := os.Open(p.path("smaps"))
if err != nil {
return ProcSMapsRollup{}, err
}
defer file.Close()
smaps := ProcSMapsRollup{}
scan := bufio.NewScanner(file)
for scan.Scan() {
line := scan.Text()
if procSMapsHeaderLine.MatchString(line) {
continue
}
if err := smaps.parseLine(line); err != nil {
return ProcSMapsRollup{}, err
}
}
return smaps, nil
}
func (s *ProcSMapsRollup) parseLine(line string) error {
kv := strings.SplitN(line, ":", 2)
if len(kv) != 2 {
return fmt.Errorf("invalid smaps line, missing colon: %q", line)
}
k := kv[0]
if k == "VmFlags" {
return nil
}
v := strings.TrimSpace(kv[1])
v = strings.TrimRight(v, " kB") // strip the " kB" unit suffix (note: TrimRight takes a cutset of characters, not a suffix)
vKBytes, err := strconv.ParseUint(v, 10, 64)
if err != nil {
return err
}
vBytes := vKBytes * 1024
s.addValue(k, v, vKBytes, vBytes)
return nil
}
func (s *ProcSMapsRollup) addValue(k string, vString string, vUint uint64, vUintBytes uint64) {
switch k {
case "Rss":
s.Rss += vUintBytes
case "Pss":
s.Pss += vUintBytes
case "Shared_Clean":
s.SharedClean += vUintBytes
case "Shared_Dirty":
s.SharedDirty += vUintBytes
case "Private_Clean":
s.PrivateClean += vUintBytes
case "Private_Dirty":
s.PrivateDirty += vUintBytes
case "Referenced":
s.Referenced += vUintBytes
case "Anonymous":
s.Anonymous += vUintBytes
case "Swap":
s.Swap += vUintBytes
case "SwapPss":
s.SwapPss += vUintBytes
}
}
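A minimal usage sketch of ProcSMapsRollup, assuming a mounted /proc (values are in bytes after the kB conversion above):

package main

import (
    "fmt"

    "github.com/prometheus/procfs"
)

func main() {
    p, err := procfs.Self()
    if err != nil {
        panic(err)
    }
    smaps, err := p.ProcSMapsRollup()
    if err != nil {
        panic(err)
    }
    fmt.Println(smaps.Rss, smaps.Pss, smaps.Swap)
}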

View file

@ -16,10 +16,10 @@ package procfs
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"github.com/prometheus/procfs/internal/fs"
"github.com/prometheus/procfs/internal/util"
)
// Originally, this USER_HZ value was dynamically retrieved via a sysconf call
@ -113,13 +113,7 @@ func (p Proc) NewStat() (ProcStat, error) {
// Stat returns the current status information of the process.
func (p Proc) Stat() (ProcStat, error) {
f, err := os.Open(p.path("stat"))
if err != nil {
return ProcStat{}, err
}
defer f.Close()
data, err := ioutil.ReadAll(f)
data, err := util.ReadFileNoStat(p.path("stat"))
if err != nil {
return ProcStat{}, err
}

View file

@ -15,13 +15,13 @@ package procfs
import (
"bytes"
"io/ioutil"
"os"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// ProcStat provides status information about the process,
// ProcStatus provides status information about the process,
// read from /proc/[pid]/stat.
type ProcStatus struct {
// The process ID.
@ -29,38 +29,41 @@ type ProcStatus struct {
// The process name.
Name string
// Thread group ID.
TGID int
// Peak virtual memory size.
VmPeak uint64
VmPeak uint64 // nolint:golint
// Virtual memory size.
VmSize uint64
VmSize uint64 // nolint:golint
// Locked memory size.
VmLck uint64
VmLck uint64 // nolint:golint
// Pinned memory size.
VmPin uint64
VmPin uint64 // nolint:golint
// Peak resident set size.
VmHWM uint64
VmHWM uint64 // nolint:golint
// Resident set size (sum of RssAnon, RssFile and RssShmem).
VmRSS uint64
VmRSS uint64 // nolint:golint
// Size of resident anonymous memory.
RssAnon uint64
RssAnon uint64 // nolint:golint
// Size of resident file mappings.
RssFile uint64
RssFile uint64 // nolint:golint
// Size of resident shared memory.
RssShmem uint64
RssShmem uint64 // nolint:golint
// Size of data segments.
VmData uint64
VmData uint64 // nolint:golint
// Size of stack segments.
VmStk uint64
VmStk uint64 // nolint:golint
// Size of text segments.
VmExe uint64
VmExe uint64 // nolint:golint
// Shared library code size.
VmLib uint64
VmLib uint64 // nolint:golint
// Page table entries size.
VmPTE uint64
VmPTE uint64 // nolint:golint
// Size of second-level page tables.
VmPMD uint64
VmPMD uint64 // nolint:golint
// Swapped-out virtual memory size by anonymous private.
VmSwap uint64
VmSwap uint64 // nolint:golint
// Size of hugetlb memory portions
HugetlbPages uint64
@ -68,17 +71,16 @@ type ProcStatus struct {
VoluntaryCtxtSwitches uint64
// Number of involuntary context switches.
NonVoluntaryCtxtSwitches uint64
// UIDs of the process (Real, effective, saved set, and filesystem UIDs)
UIDs [4]string
// GIDs of the process (Real, effective, saved set, and filesystem GIDs)
GIDs [4]string
}
// NewStatus returns the current status information of the process.
func (p Proc) NewStatus() (ProcStatus, error) {
f, err := os.Open(p.path("status"))
if err != nil {
return ProcStatus{}, err
}
defer f.Close()
data, err := ioutil.ReadAll(f)
data, err := util.ReadFileNoStat(p.path("status"))
if err != nil {
return ProcStatus{}, err
}
@ -113,8 +115,14 @@ func (p Proc) NewStatus() (ProcStatus, error) {
func (s *ProcStatus) fillStatus(k string, vString string, vUint uint64, vUintBytes uint64) {
switch k {
case "Tgid":
s.TGID = int(vUint)
case "Name":
s.Name = vString
case "Uid":
copy(s.UIDs[:], strings.Split(vString, "\t"))
case "Gid":
copy(s.GIDs[:], strings.Split(vString, "\t"))
case "VmPeak":
s.VmPeak = vUintBytes
case "VmSize":

View file

@ -26,7 +26,7 @@ var (
procLineRE = regexp.MustCompile(`(\d+) (\d+) (\d+)`)
)
// Schedstat contains scheduler statistics from /proc/schedstats
// Schedstat contains scheduler statistics from /proc/schedstat
//
// See
// https://www.kernel.org/doc/Documentation/scheduler/sched-stats.txt
@ -36,7 +36,6 @@ var (
// jiffies when they are actually in nanoseconds since 2.6.23 with the
// introduction of CFS. A fix to the documentation is pending. See
// https://lore.kernel.org/patchwork/project/lkml/list/?series=403473
type Schedstat struct {
CPUs []*SchedstatCPU
}
@ -57,6 +56,7 @@ type ProcSchedstat struct {
RunTimeslices uint64
}
// Schedstat reads data from /proc/schedstat
func (fs FS) Schedstat() (*Schedstat, error) {
file, err := os.Open(fs.proc.Path("schedstat"))
if err != nil {
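A hedged usage sketch of the Schedstat accessor; the SchedstatCPU field names below (CPUNum, RunningNanoseconds, WaitingNanoseconds) are as in this version of the package:

package main

import (
    "fmt"

    "github.com/prometheus/procfs"
)

func main() {
    fs, err := procfs.NewDefaultFS()
    if err != nil {
        panic(err)
    }
    stats, err := fs.Schedstat()
    if err != nil {
        panic(err)
    }
    for _, cpu := range stats.CPUs {
        fmt.Println(cpu.CPUNum, cpu.RunningNanoseconds, cpu.WaitingNanoseconds)
    }
}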

View file

@ -15,13 +15,14 @@ package procfs
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/fs"
"github.com/prometheus/procfs/internal/util"
)
// CPUStat shows how much time the CPU spends in various stages.
@ -164,16 +165,15 @@ func (fs FS) NewStat() (Stat, error) {
// Stat returns information about current cpu/process statistics.
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
func (fs FS) Stat() (Stat, error) {
f, err := os.Open(fs.proc.Path("stat"))
fileName := fs.proc.Path("stat")
data, err := util.ReadFileNoStat(fileName)
if err != nil {
return Stat{}, err
}
defer f.Close()
stat := Stat{}
scanner := bufio.NewScanner(f)
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
line := scanner.Text()
parts := strings.Fields(scanner.Text())
@ -237,7 +237,7 @@ func (fs FS) Stat() (Stat, error) {
}
if err := scanner.Err(); err != nil {
return Stat{}, fmt.Errorf("couldn't parse %s: %s", f.Name(), err)
return Stat{}, fmt.Errorf("couldn't parse %s: %s", fileName, err)
}
return stat, nil
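A minimal usage sketch of fs.Stat(), assuming a mounted /proc; BootTime and ContextSwitches are fields of the returned Stat struct:

package main

import (
    "fmt"

    "github.com/prometheus/procfs"
)

func main() {
    fs, err := procfs.NewDefaultFS()
    if err != nil {
        panic(err)
    }
    stat, err := fs.Stat()
    if err != nil {
        panic(err)
    }
    fmt.Println(stat.BootTime, stat.ContextSwitches)
}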

89
vendor/github.com/prometheus/procfs/swaps.go generated vendored Normal file
View file

@ -0,0 +1,89 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"bufio"
"bytes"
"fmt"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/util"
)
// Swap represents an entry in /proc/swaps.
type Swap struct {
Filename string
Type string
Size int
Used int
Priority int
}
// Swaps returns a slice of all configured swap devices on the system.
func (fs FS) Swaps() ([]*Swap, error) {
data, err := util.ReadFileNoStat(fs.proc.Path("swaps"))
if err != nil {
return nil, err
}
return parseSwaps(data)
}
func parseSwaps(info []byte) ([]*Swap, error) {
swaps := []*Swap{}
scanner := bufio.NewScanner(bytes.NewReader(info))
scanner.Scan() // ignore header line
for scanner.Scan() {
swapString := scanner.Text()
parsedSwap, err := parseSwapString(swapString)
if err != nil {
return nil, err
}
swaps = append(swaps, parsedSwap)
}
err := scanner.Err()
return swaps, err
}
func parseSwapString(swapString string) (*Swap, error) {
var err error
swapFields := strings.Fields(swapString)
swapLength := len(swapFields)
if swapLength < 5 {
return nil, fmt.Errorf("too few fields in swap string: %s", swapString)
}
swap := &Swap{
Filename: swapFields[0],
Type: swapFields[1],
}
swap.Size, err = strconv.Atoi(swapFields[2])
if err != nil {
return nil, fmt.Errorf("invalid swap size: %s", swapFields[2])
}
swap.Used, err = strconv.Atoi(swapFields[3])
if err != nil {
return nil, fmt.Errorf("invalid swap used: %s", swapFields[3])
}
swap.Priority, err = strconv.Atoi(swapFields[4])
if err != nil {
return nil, fmt.Errorf("invalid swap priority: %s", swapFields[4])
}
return swap, nil
}
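Finally, a minimal usage sketch of the new Swaps accessor, assuming a mounted /proc with at least one swap device configured (otherwise the slice is simply empty):

package main

import (
    "fmt"

    "github.com/prometheus/procfs"
)

func main() {
    fs, err := procfs.NewDefaultFS()
    if err != nil {
        panic(err)
    }
    swaps, err := fs.Swaps()
    if err != nil {
        panic(err)
    }
    for _, s := range swaps {
        fmt.Println(s.Filename, s.Type, s.Size, s.Used, s.Priority)
    }
}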