Compare commits

..

50 Commits

Author SHA1 Message Date
T.v.Dein
787178b17e Update to tableWriter 1.0.6 (#50) 2025-05-27 13:09:18 +02:00
T.v.Dein
eae39bbff1 Merge pull request #48 from TLINDEN/updatego
update to go1.23, update dependencies
2025-03-06 17:28:42 +01:00
40fbf17779 also update ci to go 1.23 2025-03-06 17:25:54 +01:00
832841c1ff deprecate testscript.RunMain() 2025-03-06 17:24:16 +01:00
5726ed3f7f update to go1.23, update dependencies 2025-03-06 17:16:20 +01:00
T.v.Dein
5e52cd9ce0 Merge pull request #45 from TLINDEN/dependabot/go_modules/github.com/spf13/cobra-1.9.1
Bump github.com/spf13/cobra from 1.8.1 to 1.9.1
2025-03-06 17:11:24 +01:00
T.v.Dein
8c7c89c9ea Merge pull request #47 from TLINDEN/headernumbers
reverse the meaning of -n
2025-03-06 17:11:00 +01:00
25aa172c41 reverse the meaning of -n, setting it enables numbered headers 2025-03-06 17:02:34 +01:00
dependabot[bot]
c436a92bcb Bump github.com/spf13/cobra from 1.8.1 to 1.9.1
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.8.1 to 1.9.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.8.1...v1.9.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-01 10:40:32 +00:00
T.v.Dein
65732a58d0 Merge pull request #38 from TLINDEN/feature/yank
add yank support
2025-02-23 18:21:15 +01:00
T.v.Dein
ace7f76210 Merge branch 'main' into feature/yank 2025-02-23 18:09:04 +01:00
fda365bd8b bump version 2025-02-23 18:06:42 +01:00
c1cfc08c23 fix windows test, add clean to test target 2025-02-23 18:02:52 +01:00
150fdddd2a use latest go-clipboard 2025-02-23 18:00:29 +01:00
6b659773f1 build release bins w/o symbols and debug, +static 2025-02-19 18:09:05 +01:00
74d82fa356 fix ci tests on windows: make clean before running test 2025-02-12 14:08:04 +01:00
3949411c57 add change log generator, update release builder 2025-02-05 17:51:14 +01:00
a455f6b79a bump version 2025-01-30 17:31:56 +01:00
2c08687c29 add support for negative filters (-F field!=regex) 2025-01-30 17:31:26 +01:00
200f1f32f8 using patched tiagomeol/go-clipboard/clipboard, fixes #37 2025-01-28 14:40:17 +01:00
768a19b4d6 fine tuning, added test, which hangs, but yanking works anyway 2025-01-23 13:59:02 +01:00
Thomas von Dein
dc718392b6 fix-import 2025-01-22 23:15:12 +01:00
Thomas von Dein
e8f4fef41c fix #37: make yank portable 2025-01-22 23:12:42 +01:00
6566dd66f0 fixed pattern regex, fixed pattern AND operation 2025-01-22 17:53:10 +01:00
1593799c03 added multi pattern tests 2025-01-22 17:53:10 +01:00
ea3dd75fec fix linting error 2025-01-22 17:53:10 +01:00
a306f2c601 implement multiple regex support and icase and negate flags 2025-01-22 17:53:10 +01:00
82f54c120d catch err 2025-01-21 18:42:04 +01:00
T.v.Dein
2d5799e2f2 Use primary clipboard on unix 2025-01-20 21:27:26 +01:00
8e33cadcaa add -y 2025-01-20 19:28:19 +01:00
03f3225f24 build release binaries using ci workflow 2025-01-18 10:51:28 +01:00
63c7ef26b6 add -k<name> and sort by multiple columns support, fixes #34 2025-01-15 18:53:34 +01:00
dependabot[bot]
c2e7d8037a Bump github.com/hashicorp/hcl/v2 from 2.22.0 to 2.23.0
Bumps [github.com/hashicorp/hcl/v2](https://github.com/hashicorp/hcl) from 2.22.0 to 2.23.0.
- [Release notes](https://github.com/hashicorp/hcl/releases)
- [Changelog](https://github.com/hashicorp/hcl/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/hcl/compare/v2.22.0...v2.23.0)

---
updated-dependencies:
- dependency-name: github.com/hashicorp/hcl/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-14 13:12:06 +01:00
323c070caa tests don't work on windows 2025-01-14 13:10:09 +01:00
53cf1e2ebe fix for windows 2025-01-14 13:10:09 +01:00
16c5053752 satisfy linter 2025-01-14 13:10:09 +01:00
7d2d9a55d3 added prior art, fixes #30 as well 2025-01-14 13:10:09 +01:00
14c50b4e63 get rid of lisp interpreter, -R and -F are enough, fixes #30 2025-01-14 13:10:09 +01:00
0e68dc585d added testscript test to test the combination of all tasks 2025-01-14 13:10:09 +01:00
6ca835add1 changed file handling, use -r <file> or nothing to use stdin 2025-01-14 13:10:09 +01:00
306f583522 fixed transpose error message if count is incorrect 2025-01-14 13:10:09 +01:00
9f971ed3b9 fix #32: treat header filters case insensitively 2025-01-14 13:10:09 +01:00
2ae2d2b33d add transpose stuff to README, bump version 2025-01-14 13:10:09 +01:00
cf1a555b9b added tests, reorganized Parse() by dismantling parsing and processing 2025-01-14 13:10:09 +01:00
4d894a728b added transpose function (-T + -R) 2025-01-14 13:10:09 +01:00
8792c5a40f fix regex in example 2025-01-10 18:33:55 +01:00
7ab1a1178a add zygo reference 2025-01-10 18:27:41 +01:00
1e44da4f6e added documentation about current state of lisp support 2025-01-10 18:26:33 +01:00
59171f0fab bump versions 2024-12-13 10:37:44 +01:00
8098ccf000 fix #29: fix stat() error checking 2024-12-13 10:35:56 +01:00
46 changed files with 2070 additions and 1129 deletions

View File

@@ -4,8 +4,8 @@ jobs:
build:
strategy:
matrix:
version: ['1.22']
os: [ubuntu-latest, windows-latest, macos-latest]
version: ['1.23']
os: [ubuntu-latest, macos-latest, windows-latest]
name: Build
runs-on: ${{ matrix.os }}
steps:
@@ -30,7 +30,7 @@ jobs:
steps:
- uses: actions/setup-go@v5
with:
go-version: 1.22
go-version: 1.23
- uses: actions/checkout@v4
- name: golangci-lint
uses: golangci/golangci-lint-action@v6

87
.github/workflows/release.yaml vendored Normal file
View File

@@ -0,0 +1,87 @@
name: build-release
on:
push:
tags:
- "v*.*.*"
jobs:
release:
name: Build Release Assets
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: 1.22.11
- name: Build the executables
run: ./mkrel.sh tablizer ${{ github.ref_name}}
- name: List the executables
run: ls -l ./releases
- name: Upload the binaries
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
tag: ${{ github.ref_name }}
file: ./releases/*
file_glob: true
- name: Build Changelog
id: github_release
uses: mikepenz/release-changelog-builder-action@v5
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
mode: "PR"
configurationJson: |
{
"template": "#{{CHANGELOG}}\n\n**Full Changelog**: #{{RELEASE_DIFF}}",
"pr_template": "- #{{TITLE}} (##{{NUMBER}}) by #{{AUTHOR}}\n#{{BODY}}",
"empty_template": "- no changes",
"categories": [
{
"title": "## New Features",
"labels": ["add", "feature"]
},
{
"title": "## Bug Fixes",
"labels": ["fix", "bug", "revert"]
},
{
"title": "## Documentation Enhancements",
"labels": ["doc"]
},
{
"title": "## Refactoring Efforts",
"labels": ["refactor"]
},
{
"title": "## Miscellaneus Changes",
"labels": []
}
],
"ignore_labels": [
"duplicate", "good first issue", "help wanted", "invalid", "question", "wontfix"
],
"label_extractor": [
{
"pattern": "(.) (.+)",
"target": "$1"
},
{
"pattern": "(.) (.+)",
"target": "$1",
"on_property": "title"
}
]
}
- name: Create Release
uses: softprops/action-gh-release@v2
with:
body: ${{steps.github_release.outputs.changelog}}

1
.gitignore vendored
View File

@@ -1,2 +1,3 @@
releases
tablizer
*.out

View File

@@ -53,8 +53,7 @@ buildlocal:
go build -ldflags "-X 'github.com/tlinden/tablizer/cfg.VERSION=$(VERSION)'"
release:
./mkrel.sh $(tool) $(version)
gh release create $(version) --generate-notes releases/*
gh release create $(version) --generate-notes
install: buildlocal
install -d -o $(UID) -g $(GID) $(PREFIX)/bin
@@ -65,13 +64,12 @@ install: buildlocal
clean:
rm -rf $(tool) releases coverage.out
test:
go test -v ./...
bash t/test.sh
test: clean
go test ./... $(OPTS)
singletest:
@echo "Call like this: ''make singletest TEST=TestPrepareColumns MOD=lib"
go test -run $(TEST) github.com/tlinden/tablizer/$(MOD)
@echo "Call like this: 'make singletest TEST=TestPrepareColumns MOD=lib'"
go test -run $(TEST) github.com/tlinden/tablizer/$(MOD) $(OPTS)
cover-report:
go test ./... -cover -coverprofile=coverage.out

View File

@@ -8,6 +8,51 @@ Tablizer can be used to re-format tabular output of other
programs. While you could do this using standard unix tools, in some
cases it's a hard job.
Usage:
```default
Usage:
tablizer [regex] [file, ...] [flags]
Operational Flags:
-c, --columns string Only show the speficied columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --no-numbering Disable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field[!]=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the speficied columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive):
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-y, --yank-columns Yank specified columns (separated by ,) to clipboard,
space separated
Sort Mode Flags (mutually exclusive):
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
Other Flags:
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
```
Let's take this output:
```
% kubectl get pods -o wide
@@ -83,6 +128,23 @@ otherwise on all rows.
There are more output modes like org-mode (orgtbl) and markdown.
You can also use it to modify certain cells using regular expression
matching. For example:
```shell
kubectl get pods | tablizer -n -T4 -R '/ /-/'
NAME READY STATUS RESTARTS AGE
repldepl-7bcd8d5b64-7zq4l 1/1 Running 1-(69m-ago) 5h26m
repldepl-7bcd8d5b64-m48n8 1/1 Running 1-(69m-ago) 5h26m
repldepl-7bcd8d5b64-q2bf4 1/1 Running 1-(69m-ago) 5h26m
```
Here, we modified the 4th column (`-T4`) by replacing every space with
a dash. If you need to work with `/` characters, you can also use any
other separator, for instance: `-R '| |-|'`.
## Demo
[![asciicast](demo/tablizer-demo.gif)](https://asciinema.org/a/9FKc3HPnlg8D2X8otheleEa9t)
@@ -138,6 +200,41 @@ In order to report a bug, unexpected behavior, feature requests
or to submit a patch, please open an issue on github:
https://github.com/TLINDEN/tablizer/issues.
## Prior Art
When I started with tablizer I was not aware that other tools
exist. Here is a non-exhaustive list of the ones I find especially
awesome:
### [miller](https://github.com/johnkerl/miller)
This is a really powerful tool to work with tabular data, and it also
accepts other input formats such as JSON, CSV, etc. You can filter,
manipulate and create pipelines; there's even a built-in programming
language to do even more amazing things.
### [csvq](https://github.com/mithrandie/csvq)
Csvq allows you to query CSV and TSV data using SQL queries. How nice
is that? Highly recommended if you have to work with a large (and
wide) dataset and need to apply a complicated set of rules.
### [goawk](https://github.com/benhoyt/goawk)
Goawk is a 100% POSIX compliant AWK implementation in GO, which also
supports CSV and TSV data as input (using `-i csv` for example). You
can apply any kind of awk code to your tabular data; there is no
limit to your creativity!
### [teip](https://github.com/greymd/teip)
I particularly like teip; it's a real gem. You can use it to drill
"holes" into your tabular data and modify these "holes" using small
external unix commands such as grep or sed. The possibilities are
endless; you can even use teip to modify data inside a hole created by
teip. Highly recommended.
## Copyright and license
This software is licensed under the GNU GENERAL PUBLIC LICENSE version 3.

10
TODO.md
View File

@@ -6,13 +6,3 @@
- add --no-headers option
### Lisp Plugin Infrastructure using zygo
Hooks:
| Filter | Purpose | Args | Return |
|-----------|-------------------------------------------------------------|---------------------|--------|
| filter | include or exclude lines | row as hash | bool |
| process | do calculations with data, store results in global lisp env | whole dataset | nil |
| transpose | modify a cell | headername and cell | cell |
| append | add one or more rows to the dataset (use this to add stats) | nil | rows |

View File

@@ -1,5 +1,5 @@
/*
Copyright © 2022-2024 Thomas von Dein
Copyright © 2022-2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -23,16 +23,14 @@ import (
"regexp"
"strings"
"github.com/glycerine/zygomys/zygo"
"github.com/gookit/color"
"github.com/hashicorp/hcl/v2/hclsimple"
)
const DefaultSeparator string = `(\s\s+|\t)`
const Version string = "v1.2.2"
const Version string = "v1.4.2"
const MAXPARTS = 2
var DefaultLoadPath = os.Getenv("HOME") + "/.config/tablizer/lisp"
var DefaultConfigfile = os.Getenv("HOME") + "/.config/tablizer/config"
var VERSION string // maintained by -x
@@ -49,24 +47,47 @@ type Settings struct {
HighlightHdrBG string `hcl:"HighlightHdrBG"`
}
type Transposer struct {
Search regexp.Regexp
Replace string
}
type Pattern struct {
Pattern string
PatternRe *regexp.Regexp
Negate bool
}
type Filter struct {
Regex *regexp.Regexp
Negate bool
}
// internal config
type Config struct {
Debug bool
NoNumbering bool
Numbering bool
NoHeaders bool
Columns string
UseColumns []int
YankColumns string
UseYankColumns []int
Separator string
OutputMode int
InvertMatch bool
Pattern string
PatternR *regexp.Regexp
Patterns []*Pattern
UseFuzzySearch bool
UseHighlight bool
SortMode string
SortDescending bool
SortByColumn int
SortMode string
SortDescending bool
SortByColumn string // 1,2
UseSortByColumn []int // []int{1,2}
TransposeColumns string // 1,2
UseTransposeColumns []int // []int{1,2}
Transposers []string // []string{"/ /-/", "/foo/bar/"}
UseTransposers []Transposer // {Search: re, Replace: string}
/*
FIXME: make configurable somehow, config file or ENV
@@ -79,13 +100,6 @@ type Config struct {
NoColor bool
// special case: we use the config struct to transport the lisp
// env trough the program
Lisp *zygo.Zlisp
// a path containing lisp scripts to be loaded on startup
LispLoadPath string
// config file, optional
Configfile string
@@ -93,7 +107,10 @@ type Config struct {
// used for field filtering
Rawfilters []string
Filters map[string]*regexp.Regexp
Filters map[string]Filter //map[string]*regexp.Regexp
// -r <file>
InputFile string
}
// maps outputmode short flags to output mode, ie. -O => -o orgtbl
@@ -125,9 +142,6 @@ type Sortmode struct {
Age bool
}
// valid lisp hooks
var ValidHooks []string
// default color schemes
func (conf *Config) Colors() map[color.Level]map[string]color.Color {
colors := map[color.Level]map[string]color.Color{
@@ -263,12 +277,20 @@ func (conf *Config) PrepareModeFlags(flag Modeflag) {
}
func (conf *Config) PrepareFilters() error {
conf.Filters = make(map[string]*regexp.Regexp, len(conf.Rawfilters))
conf.Filters = make(map[string]Filter, len(conf.Rawfilters))
for _, filter := range conf.Rawfilters {
parts := strings.Split(filter, "=")
for _, rawfilter := range conf.Rawfilters {
filter := Filter{}
parts := strings.Split(rawfilter, "!=")
if len(parts) != MAXPARTS {
return errors.New("filter field and value must be separated by =")
parts = strings.Split(rawfilter, "=")
if len(parts) != MAXPARTS {
return errors.New("filter field and value must be separated by '=' or '!='")
}
} else {
filter.Negate = true
}
reg, err := regexp.Compile(parts[1])
@@ -277,7 +299,31 @@ func (conf *Config) PrepareFilters() error {
parts[0], err)
}
conf.Filters[strings.ToLower(parts[0])] = reg
filter.Regex = reg
conf.Filters[strings.ToLower(parts[0])] = filter
}
return nil
}
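As an aside, the negated filter syntax handled above can be illustrated with a minimal, standalone Go sketch (hypothetical names, not the project's code): a raw spec is split on "!=" first and only then on "=", so the negate flag is set exactly when the "!=" form was used.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// filterSpec mirrors the idea of the Filter struct above: a compiled
// regex plus a flag that inverts the match.
type filterSpec struct {
	regex  *regexp.Regexp
	negate bool
}

// parseFilter splits "field!=regex" or "field=regex" into a lowercased
// field name and a filterSpec. The "!=" form is tried first, so the
// negate flag is set exactly when that form was used.
func parseFilter(raw string) (string, filterSpec, error) {
	spec := filterSpec{}

	parts := strings.SplitN(raw, "!=", 2)
	if len(parts) != 2 {
		parts = strings.SplitN(raw, "=", 2)
		if len(parts) != 2 {
			return "", spec, fmt.Errorf("filter %q must use '=' or '!='", raw)
		}
	} else {
		spec.negate = true
	}

	re, err := regexp.Compile(parts[1])
	if err != nil {
		return "", spec, err
	}
	spec.regex = re

	return strings.ToLower(parts[0]), spec, nil
}

func main() {
	field, spec, err := parseFilter("STATUS!=Running")
	fmt.Println(field, spec.negate, spec.regex.MatchString("Pending"), err)
	// Output: status true false <nil>
}
```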
// check if transposers match transposer columns and prepare transposer structs
func (conf *Config) PrepareTransposers() error {
if len(conf.Transposers) != len(conf.UseTransposeColumns) {
return fmt.Errorf("the number of transposers needs to correspond to the number of transpose columns: %d != %d",
len(conf.Transposers), len(conf.UseTransposeColumns))
}
for _, transposer := range conf.Transposers {
parts := strings.Split(transposer, string(transposer[0]))
if len(parts) != 4 {
return fmt.Errorf("transposer function must have the format /regexp/replace-string/")
}
conf.UseTransposers = append(conf.UseTransposers,
Transposer{
Search: *regexp.MustCompile(parts[1]),
Replace: parts[2]},
)
}
return nil
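The separator handling above can also be shown in isolation: the first character of a transposer spec is used as the delimiter, so "/ /-/" and "| |-|" are interchangeable. A minimal standalone sketch with made-up names:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// parseTransposer splits a spec such as "/search/replace/" into a
// compiled search regex and a replacement string. The first character
// of the spec is used as the delimiter, so "|search|replace|" works
// just as well.
func parseTransposer(spec string) (*regexp.Regexp, string, error) {
	if spec == "" {
		return nil, "", fmt.Errorf("empty transposer spec")
	}

	// splitting "/a/b/" on "/" yields ["", "a", "b", ""] => 4 parts
	parts := strings.Split(spec, string(spec[0]))
	if len(parts) != 4 {
		return nil, "", fmt.Errorf("spec %q must look like /regexp/replacement/", spec)
	}

	re, err := regexp.Compile(parts[1])
	if err != nil {
		return nil, "", err
	}

	return re, parts[2], nil
}

func main() {
	re, repl, _ := parseTransposer("| |-|")
	fmt.Println(re.ReplaceAllString("1 (69m ago)", repl)) // 1-(69m-ago)
}
```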
@@ -286,10 +332,10 @@ func (conf *Config) PrepareFilters() error {
func (conf *Config) CheckEnv() {
// check for environment vars, command line flags have precedence,
// NO_COLOR is being checked by the color module itself.
if !conf.NoNumbering {
_, set := os.LookupEnv("T_NO_HEADER_NUMBERING")
if !conf.Numbering {
_, set := os.LookupEnv("T_HEADER_NUMBERING")
if set {
conf.NoNumbering = true
conf.Numbering = true
}
}
@@ -304,21 +350,41 @@ func (conf *Config) CheckEnv() {
func (conf *Config) ApplyDefaults() {
// mode specific defaults
if conf.OutputMode == Yaml || conf.OutputMode == CSV {
conf.NoNumbering = true
conf.Numbering = false
}
ValidHooks = []string{"filter", "process", "transpose", "append"}
}
func (conf *Config) PreparePattern(pattern string) error {
PatternR, err := regexp.Compile(pattern)
func (conf *Config) PreparePattern(patterns []*Pattern) error {
// regex checks if a pattern looks like /$pattern/[i!]
flagre := regexp.MustCompile(`^/(.*)/([i!]*)$`)
if err != nil {
return fmt.Errorf("regexp pattern %s is invalid: %w", conf.Pattern, err)
for _, pattern := range patterns {
matches := flagre.FindAllStringSubmatch(pattern.Pattern, -1)
// we have a regex with flags
for _, match := range matches {
pattern.Pattern = match[1] // the inner part is our actual pattern
flags := match[2] // the flags
for _, flag := range flags {
switch flag {
case 'i':
pattern.Pattern = `(?i)` + pattern.Pattern
case '!':
pattern.Negate = true
}
}
}
PatternRe, err := regexp.Compile(pattern.Pattern)
if err != nil {
return fmt.Errorf("regexp pattern %s is invalid: %w", pattern.Pattern, err)
}
pattern.PatternRe = PatternRe
}
conf.PatternR = PatternR
conf.Pattern = pattern
conf.Patterns = patterns
return nil
}
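For illustration, a standalone sketch of the "/pattern/flags" handling above (hypothetical names, not the project's code): the "i" flag prepends Go's "(?i)" modifier, the "!" flag marks the pattern as negated.

```go
package main

import (
	"fmt"
	"regexp"
)

// flagged matches /pattern/ with optional trailing "i" and/or "!" flags.
var flagged = regexp.MustCompile(`^/(.*)/([i!]*)$`)

// compilePattern turns "/foo/i!" into a case-insensitive regex for
// "foo" plus a negate flag; patterns without the /.../ wrapper are
// compiled unchanged.
func compilePattern(raw string) (*regexp.Regexp, bool, error) {
	negate := false

	if m := flagged.FindStringSubmatch(raw); m != nil {
		raw = m[1] // the inner part is the actual pattern
		for _, flag := range m[2] {
			switch flag {
			case 'i':
				raw = `(?i)` + raw
			case '!':
				negate = true
			}
		}
	}

	re, err := regexp.Compile(raw)

	return re, negate, err
}

func main() {
	re, negate, _ := compilePattern("/account/i")
	fmt.Println(re.MatchString("ServiceAccount"), negate) // true false
}
```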
@@ -328,7 +394,16 @@ func (conf *Config) PreparePattern(pattern string) error {
func (conf *Config) ParseConfigfile() error {
path, err := os.Stat(conf.Configfile)
if os.IsNotExist(err) || path.IsDir() {
if err != nil {
if os.IsNotExist(err) {
// ignore non-existent files
return nil
}
return fmt.Errorf("failed to stat config file: %w", err)
}
if path.IsDir() {
// ignore non-existent or dirs
return nil
}
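The reworked stat handling follows the usual Go pattern for optional files: missing files and directories are skipped silently, any other stat error is reported. A minimal sketch with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"os"
)

// optionalFile reports whether path should be read: missing files and
// directories are skipped silently, any other stat error is returned.
func optionalFile(path string) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil // ignore non-existent files
		}

		return false, fmt.Errorf("failed to stat %s: %w", path, err)
	}

	if info.IsDir() {
		return false, nil // ignore directories
	}

	return true, nil
}

func main() {
	ok, err := optionalFile(os.Getenv("HOME") + "/.config/tablizer/config")
	fmt.Println(ok, err)
}
```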

View File

@@ -79,20 +79,55 @@ func TestPrepareSortFlags(t *testing.T) {
func TestPreparePattern(t *testing.T) {
var tests = []struct {
pattern string
wanterr bool
patterns []*Pattern
name string
wanterr bool
wanticase bool
wantneg bool
}{
{"[A-Z]+", false},
{"[a-z", true},
{
[]*Pattern{{Pattern: "[A-Z]+"}},
"simple",
false,
false,
false,
},
{
[]*Pattern{{Pattern: "[a-z"}},
"regfail",
true,
false,
false,
},
{
[]*Pattern{{Pattern: "/[A-Z]+/i"}},
"icase",
false,
true,
false,
},
{
[]*Pattern{{Pattern: "/[A-Z]+/!"}},
"negate",
false,
false,
true,
},
{
[]*Pattern{{Pattern: "/[A-Z]+/!i"}},
"negicase",
false,
true,
true,
},
}
for _, testdata := range tests {
testname := fmt.Sprintf("PreparePattern-pattern-%s-wanterr-%t",
testdata.pattern, testdata.wanterr)
testname := fmt.Sprintf("PreparePattern-pattern-%s-wanterr-%t", testdata.name, testdata.wanterr)
t.Run(testname, func(t *testing.T) {
conf := Config{}
err := conf.PreparePattern(testdata.pattern)
err := conf.PreparePattern(testdata.patterns)
if err != nil {
if !testdata.wanterr {

View File

@@ -1,5 +1,5 @@
/*
Copyright © 2022-2024 Thomas von Dein
Copyright © 2022-2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -117,9 +117,6 @@ func Execute() {
conf.DetermineColormode()
conf.ApplyDefaults()
// setup lisp env, load plugins etc
wrapE(lib.SetupLisp(&conf))
// actual execution starts here
wrapE(lib.ProcessFiles(&conf, args))
},
@@ -128,7 +125,7 @@ func Execute() {
// options
rootCmd.PersistentFlags().BoolVarP(&conf.Debug, "debug", "d", false,
"Enable debugging")
rootCmd.PersistentFlags().BoolVarP(&conf.NoNumbering, "no-numbering", "n", false,
rootCmd.PersistentFlags().BoolVarP(&conf.Numbering, "numbering", "n", false,
"Disable header numbering")
rootCmd.PersistentFlags().BoolVarP(&conf.NoHeaders, "no-headers", "H", false,
"Disable header display")
@@ -150,9 +147,13 @@ func Execute() {
"Custom field separator")
rootCmd.PersistentFlags().StringVarP(&conf.Columns, "columns", "c", "",
"Only show the speficied columns (separated by ,)")
rootCmd.PersistentFlags().StringVarP(&conf.YankColumns, "yank-columns", "y", "",
"Yank the speficied columns (separated by ,) to the clipboard")
rootCmd.PersistentFlags().StringVarP(&conf.TransposeColumns, "transpose-columns", "T", "",
"Transpose the speficied columns (separated by ,)")
// sort options
rootCmd.PersistentFlags().IntVarP(&conf.SortByColumn, "sort-by", "k", 0,
rootCmd.PersistentFlags().StringVarP(&conf.SortByColumn, "sort-by", "k", "",
"Sort by column (default: 1)")
// sort mode, only 1 allowed
@@ -185,16 +186,19 @@ func Execute() {
rootCmd.MarkFlagsMutuallyExclusive("extended", "markdown", "orgtbl",
"shell", "yaml", "csv")
// lisp options
rootCmd.PersistentFlags().StringVarP(&conf.LispLoadPath, "load-path", "l", cfg.DefaultLoadPath,
"Load path for lisp plugins (expects *.zy files)")
// config file
rootCmd.PersistentFlags().StringVarP(&conf.Configfile, "config", "f", cfg.DefaultConfigfile,
"config file (default: ~/.config/tablizer/config)")
// filters
rootCmd.PersistentFlags().StringArrayVarP(&conf.Rawfilters, "filter", "F", nil, "Filter by field (field=regexp)")
rootCmd.PersistentFlags().StringArrayVarP(&conf.Rawfilters,
"filter", "F", nil, "Filter by field (field=regexp || field!=regexp)")
rootCmd.PersistentFlags().StringArrayVarP(&conf.Transposers,
"regex-transposer", "R", nil, "apply /search/replace/ regexp to fields given in -T")
// input
rootCmd.PersistentFlags().StringVarP(&conf.InputFile, "read-file", "r", "",
"Read input data from file")
rootCmd.SetUsageTemplate(strings.TrimSpace(usage) + "\n")

View File

@@ -6,42 +6,46 @@ NAME
SYNOPSIS
Usage:
tablizer [regex] [file, ...] [flags]
tablizer [regex,...] [file, ...] [flags]
Operational Flags:
-c, --columns string Only show the speficied columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --no-numbering Disable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times
-c, --columns string Only show the speficied columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --numbering Enable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int|name Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field[!]=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the speficied columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive):
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-y, --yank-columns Yank specified columns (separated by ,) to clipboard,
space separated
Sort Mode Flags (mutually exclusive):
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
Other Flags:
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
DESCRIPTION
Many programs generate tabular output. But sometimes you need to
@@ -101,10 +105,19 @@ DESCRIPTION
highlighted. You can disable this behavior with the -N option.
Use the -k option to specify by which column to sort the tabular data
(as in GNU sort(1)). The default sort column is the first one. To
disable sorting at all, supply 0 (Zero) to -k. The default sort order is
ascending. You can change this to descending order using the option -D.
The default sort order is by string, but there are other sort modes:
(as in GNU sort(1)). The default sort column is the first one. You can
specify column numbers or names. Column numbers start with 1, names are
case insensitive. You can specify multiple columns to sort by,
separated by commas, but they must all have the same type. For example,
if you want to sort numerically, all specified columns must contain
numbers. If you use column numbers, be aware that these refer to the
columns before column extraction. For example, if you have a table with
4 columns and specify "-c4", only one column (the fourth) will be
printed; however, if you want to sort by this column, you still have to
specify "-k4".
The default sort order is ascending. You can change this to descending
order using the option -D. The default sort order is by alphanumeric
string, but there are other sort modes:
-a --sort-age
Sorts duration strings like "1d4h32m51s".
@@ -119,30 +132,43 @@ DESCRIPTION
for the developer.
PATTERNS AND FILTERING
You can reduce the rows being displayed by using a regular expression
pattern. The regexp is PCRE compatible, refer to the syntax cheat sheet
here: <https://github.com/google/re2/wiki/Syntax>. If you want to read a
more comprehensive documentation about the topic and have perl installed
you can read it with:
You can reduce the rows being displayed by using one or more regular
expression patterns. The regexp syntax used is that of Go; refer to the
syntax cheat sheet here:
<https://pkg.go.dev/regexp/syntax>.
If you want to read a more comprehensive documentation about the topic
and have perl installed you can read it with:
perldoc perlre
Or read it online: <https://perldoc.perl.org/perlre>.
Or read it online: <https://perldoc.perl.org/perlre>. But please note
that the GO regexp engine does NOT support all perl regex terms,
especially look-ahead and look-behind.
A note on modifiers: the regexp engine used in tablizer uses another
modifier syntax:
If you want to supply flags to a regex, then surround it with slashes
and append the flag. The following flags are supported:
(?MODIFIER)
The most important modifiers are:
"i" ignore case "m" multiline mode "s" single line mode
i => case insensitive
! => negative match
Example for a case insensitive search:
kubectl get pods -A | tablizer "(?i)account"
kubectl get pods -A | tablizer "/account/i"
You can use the experimental fuzzy search feature by providing the
If you use the "!" flag, then the regex match will be negated, that is,
if a line in the input matches the given regex, but "!" is supplied,
tablizer will NOT include it in the output.
For example, here we want to get all lines matching "foo" but not "bar":
cat table | tablizer foo '/bar/!'
This would match a line "foo zorro" but not "foo bar".
The flags can also be combined.
You can also use the experimental fuzzy search feature by providing the
option -z, in which case the pattern is regarded as a fuzzy search term,
not a regexp.
@@ -157,6 +183,10 @@ DESCRIPTION
If you specify more than one filter, all of them have to match (AND
operation).
These field filters can also be negated:
fieldname!=regexp
If the option -v is specified, the filtering is inverted.
COLUMNS
@@ -185,6 +215,44 @@ DESCRIPTION
where "C" is our regexp which matches CMD.
If a column specifier doesn't look like a regular expression, matching
against header fields will be case insensitive. So, if you have a field
with the name "ID" then these will all match: "-c id", "-c Id". The same
rule applies to the options "-T" and "-F".
TRANSPOSE FIELDS USING REGEXPS
You can manipulate field contents using regular expressions. You have to
tell tablizer which field[s] to operate on using the option "-T" and the
search/replace pattern using "-R". The number of columns and patterns
must match.
A search/replace pattern consists of the following elements:
/search-regexp/replace-string/
The separator can be any character, which is especially useful if you
want to use a regexp containing the "/" character, e.g.:
|search-regexp|replace-string|
Example:
cat t/testtable2
NAME DURATION
x 10
a 100
z 0
u 4
k 6
cat t/testtable2 | tablizer -T2 -R '/^\d/4/' -n
NAME DURATION
x 40
a 400
z 4
u 4
k 4
OUTPUT MODES
There might be cases when the tabular output of a program is way too
large for your current terminal but you still need to see every column.
@@ -218,12 +286,24 @@ DESCRIPTION
markdown which prints a Markdown table, yaml, which prints yaml encoding
and CSV mode, which prints a comma separated value file.
PUT FIELDS TO CLIPBOARD
You can let tablizer put fields to the clipboard using the option "-y".
This best fits the use-case when the result of your filtering yields
just one row. For example:
cloudctl cluster ls | tablizer -yid matchbox
If "matchbox" matches one cluster, you can immediately use the id of
that cluster somewhere else and paste it. Of course, if there are
multiple matches, then all id's will be put into the clipboard separated
by one space.
ENVIRONMENT VARIABLES
tablizer supports certain environment variables which you can use to
influence program behavior. Command line flags always take precedence
over environment variables.
<T_NO_HEADER_NUMBERING> - disable numbering of header fields, like -n.
<T_HEADER_NUMBERING> - enable numbering of header fields, like -n.
<T_COLUMNS> - comma separated list of columns to output, like -c
<NO_COLORS> - disable colorization of matches, like -N
@@ -343,42 +423,46 @@ AUTHORS
var usage = `
Usage:
tablizer [regex] [file, ...] [flags]
tablizer [regex,...] [file, ...] [flags]
Operational Flags:
-c, --columns string Only show the speficied columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --no-numbering Disable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times
-c, --columns string Only show the speficied columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --numbering Enable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int|name Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field[!]=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the speficied columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive):
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-y, --yank-columns Yank specified columns (separated by ,) to clipboard,
space separated
Sort Mode Flags (mutually exclusive):
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
Other Flags:
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
`

49
go.mod
View File

@@ -1,44 +1,43 @@
module github.com/tlinden/tablizer
go 1.22
go 1.23.0
toolchain go1.23.5
require (
github.com/alecthomas/repr v0.4.0
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de
github.com/glycerine/zygomys v5.1.2+incompatible
github.com/gookit/color v1.5.4
github.com/hashicorp/hcl/v2 v2.22.0
github.com/hashicorp/hcl/v2 v2.23.0
github.com/lithammer/fuzzysearch v1.1.8
github.com/olekukonko/tablewriter v0.0.5
github.com/spf13/cobra v1.8.1
github.com/olekukonko/tablewriter v1.0.6
github.com/rogpeppe/go-internal v1.14.1
github.com/spf13/cobra v1.9.1
github.com/tiagomelo/go-clipboard v0.1.2
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/agext/levenshtein v1.2.1 // indirect
github.com/agext/levenshtein v1.2.3 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/glycerine/blake2b v0.0.0-20151022103502-3c8c640cd7be // indirect
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31 // indirect
github.com/glycerine/greenpack v5.1.1+incompatible // indirect
github.com/glycerine/liner v0.0.0-20160121172638-72909af234e0 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/gopherjs/gopherjs v1.17.2 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jtolds/gls v4.20.0+incompatible // indirect
github.com/mattn/go-runewidth v0.0.10 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect
github.com/philhofer/fwd v1.1.2 // indirect
github.com/rivo/uniseg v0.1.0 // indirect
github.com/shurcooL/go v0.0.0-20200502201357-93f07166e636 // indirect
github.com/shurcooL/go-goon v1.0.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/tinylib/msgp v1.1.9 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 // indirect
github.com/zclconf/go-cty v1.13.0 // indirect
golang.org/x/mod v0.13.0 // indirect
golang.org/x/sys v0.13.0 // indirect
github.com/olekukonko/errors v1.1.0 // indirect
github.com/olekukonko/ll v0.0.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/zclconf/go-cty v1.13.3 // indirect
golang.org/x/mod v0.21.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.11.0 // indirect
golang.org/x/tools v0.14.0 // indirect
golang.org/x/tools v0.26.0 // indirect
)

116
go.sum
View File

@@ -1,5 +1,5 @@
github.com/agext/levenshtein v1.2.1 h1:QmvMAjj2aEICytGiWzmxoE0x2KZvE0fvmqMOfy2tjT8=
github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc=
github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=
github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw=
@@ -8,76 +8,95 @@ github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de h1:FxWPpzIjnTlhPwqqXc4/vE0f7GvRjuAsbW+HOIe8KnA=
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de/go.mod h1:DCaWoUhZrYW9p1lxo/cm8EmUOOzAPSEZNGF2DK1dJgw=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/glycerine/blake2b v0.0.0-20151022103502-3c8c640cd7be h1:XBJdPGgA3qqhW+p9CANCAVdF7ZIXdu3pZAkypMkKAjE=
github.com/glycerine/blake2b v0.0.0-20151022103502-3c8c640cd7be/go.mod h1:OSCrScrFAjcBObrulk6BEQlytA462OkG1UGB5NYj9kE=
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31 h1:gclg6gY70GLy3PbkQ1AERPfmLMMagS60DKF78eWwLn8=
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/glycerine/greenpack v5.1.1+incompatible h1:fDr9i6MkSGZmAy4VXPfJhW+SyK2/LNnzIp5nHyDiaIM=
github.com/glycerine/greenpack v5.1.1+incompatible/go.mod h1:us0jVISAESGjsEuLlAfCd5nkZm6W6WQF18HPuOecIg4=
github.com/glycerine/liner v0.0.0-20160121172638-72909af234e0 h1:4ZegphJXBTc4uFQ08UVoWYmQXorGa+ipXetUj83sMBc=
github.com/glycerine/liner v0.0.0-20160121172638-72909af234e0/go.mod h1:AqJLs6UeoC65dnHxyCQ6MO31P5STpjcmgaANAU+No8Q=
github.com/glycerine/zygomys v5.1.2+incompatible h1:jmcdmA3XPxgfOunAXFpipE9LQoUL6eX6d2mhYyjV4GE=
github.com/glycerine/zygomys v5.1.2+incompatible/go.mod h1:i3SPKZpmy9dwF/3iWrXJ/ZLyzZucegwypwOmqRkUUaQ=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/gookit/color v1.5.4 h1:FZmqs7XOyGgCAxmWyPslpiok1k05wmY3SJTytgvYFs0=
github.com/gookit/color v1.5.4/go.mod h1:pZJOeOS8DM43rXbp4AZo1n9zCU2qjpcRko0b6/QJi9w=
github.com/gopherjs/gopherjs v1.17.2 h1:fQnZVsXk8uxXIStYb0N4bGk7jeyTalG/wsZjQ25dO0g=
github.com/gopherjs/gopherjs v1.17.2/go.mod h1:pRRIvn/QzFLrKfvEz3qUuEhtE/zLCWfreZ6J5gM2i+k=
github.com/hashicorp/hcl/v2 v2.22.0 h1:hkZ3nCtqeJsDhPRFz5EA9iwcG1hNWGePOTw6oyul12M=
github.com/hashicorp/hcl/v2 v2.22.0/go.mod h1:62ZYHrXgPoX8xBnzl8QzbWq4dyDsDtfCRgIq1rbJEvA=
github.com/hashicorp/hcl/v2 v2.23.0 h1:Fphj1/gCylPxHutVSEOf2fBOh1VE4AuLV7+kbJf3qos=
github.com/hashicorp/hcl/v2 v2.23.0/go.mod h1:62ZYHrXgPoX8xBnzl8QzbWq4dyDsDtfCRgIq1rbJEvA=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/lithammer/fuzzysearch v1.1.8 h1:/HIuJnjHuXS8bKaiTMeeDlW2/AyIWk2brx1V8LFgLN4=
github.com/lithammer/fuzzysearch v1.1.8/go.mod h1:IdqeyBClc3FFqSzYq/MXESsS4S0FsZ5ajtkr5xPLts4=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-runewidth v0.0.10 h1:CoZ3S2P7pvtP45xOtBw+/mDL2z0RKI576gSkzRRpdGg=
github.com/mattn/go-runewidth v0.0.10/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 h1:DpOJ2HYzCv8LZP15IdmG+YdwD2luVPHITV96TkirNBM=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/olekukonko/errors v0.0.0-20250405072817-4e6d85265da6 h1:r3FaAI0NZK3hSmtTDrBVREhKULp8oUeqLT5Eyl2mSPo=
github.com/olekukonko/errors v0.0.0-20250405072817-4e6d85265da6/go.mod h1:ppzxA5jBKcO1vIpCXQ9ZqgDh8iwODz6OXIGKU8r5m4Y=
github.com/olekukonko/errors v1.1.0 h1:RNuGIh15QdDenh+hNvKrJkmxxjV4hcS50Db478Ou5sM=
github.com/olekukonko/errors v1.1.0/go.mod h1:ppzxA5jBKcO1vIpCXQ9ZqgDh8iwODz6OXIGKU8r5m4Y=
github.com/olekukonko/ll v0.0.6-0.20250511102614-9564773e9d27 h1:LgDwLQDELPB6wMOx1x4DSXnH2pjQNDKFgqv2inJuiAU=
github.com/olekukonko/ll v0.0.6-0.20250511102614-9564773e9d27/go.mod h1:En+sEW0JNETl26+K8eZ6/W4UQ7CYSrrgg/EdIYT2H8g=
github.com/olekukonko/ll v0.0.8 h1:sbGZ1Fx4QxJXEqL/6IG8GEFnYojUSQ45dJVwN2FH2fc=
github.com/olekukonko/ll v0.0.8/go.mod h1:En+sEW0JNETl26+K8eZ6/W4UQ7CYSrrgg/EdIYT2H8g=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/philhofer/fwd v1.1.2 h1:bnDivRJ1EWPjUIRXV5KfORO897HTbpFAQddBdE8t7Gw=
github.com/philhofer/fwd v1.1.2/go.mod h1:qkPdfjR2SIEbspLqpe1tO4n5yICnr2DY7mqEx2tUTP0=
github.com/olekukonko/tablewriter v1.0.2 h1:nxz/j28kPYQUuc4veIv3Ymmef7gHKn8rhr42aauENnk=
github.com/olekukonko/tablewriter v1.0.2/go.mod h1:eUa4ArVhHJYomS27xrJB/GyLtnzKKVkZeLM6/MNO+pA=
github.com/olekukonko/tablewriter v1.0.4 h1:Lnz32TW+q/MQhA4qwhIyLA+j5hZ3dcNpZrcpPC+4iaM=
github.com/olekukonko/tablewriter v1.0.4/go.mod h1:eUa4ArVhHJYomS27xrJB/GyLtnzKKVkZeLM6/MNO+pA=
github.com/olekukonko/tablewriter v1.0.6 h1:/T45mIHc5hcEvibgzBzvMy7ruT+RjgoQRvkHbnl6OWA=
github.com/olekukonko/tablewriter v1.0.6/go.mod h1:SJ0MV1aHb/89fLcsBMXMp30Xg3g5eGoOUu0RptEk4AU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.1.0 h1:+2KBaVoUmb9XzDsrx/Ct0W/EYOSFf/nWTauy++DprtY=
github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scylladb/termtables v0.0.0-20191203121021-c4c0b6d42ff4/go.mod h1:C1a7PQSMz9NShzorzCiG2fk9+xuCgLkPeCvMHYR2OWg=
github.com/shurcooL/go v0.0.0-20200502201357-93f07166e636 h1:aSISeOcal5irEhJd1M+IrApc0PdcN7e7Aj4yuEnOrfQ=
github.com/shurcooL/go v0.0.0-20200502201357-93f07166e636/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v1.0.0 h1:BCQPvxGkHHJ4WpBO4m/9FXbITVIsvAm/T66cCcCGI7E=
github.com/shurcooL/go-goon v1.0.0/go.mod h1:2wTHMsGo7qnpmqA8ADYZtP4I1DD94JpXGQ3Dxq2YQ5w=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/tinylib/msgp v1.1.9 h1:SHf3yoO2sGA0veCJeCBYLHuttAVFHGm2RHgNodW7wQU=
github.com/tinylib/msgp v1.1.9/go.mod h1:BCXGB54lDD8qUEPmiG0cQQUANC4IUQyB2ItS2UDlO/k=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 h1:QldyIu/L63oPpyvQmHgvgickp1Yw510KJOqX7H24mg8=
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tiagomelo/go-clipboard v0.1.2 h1:Ph2icR0vZRIj3v5ExvsGweBwsbbDUTlS6HoF40MkQD8=
github.com/tiagomelo/go-clipboard v0.1.2/go.mod h1:kXtjJBIMimZaGbxmcKZ8+JqK+acSNf5tAJiChlZBOr8=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zclconf/go-cty v1.13.0 h1:It5dfKTTZHe9aeppbNOda3mN7Ag7sg6QkBNm6TkyFa0=
github.com/zclconf/go-cty v1.13.0/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0=
github.com/zclconf/go-cty v1.13.3 h1:m+b9q3YDbg6Bec5rr+KGy1MzEVzY/jC2X+YX4yqKtHI=
github.com/zclconf/go-cty v1.13.3/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.13.0 h1:I/DsJXRlw/8l/0c24sM9yb0T4z9liZTduXvdAWYiysY=
golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0=
golang.org/x/mod v0.21.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
@@ -85,15 +104,20 @@ golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.4.0 h1:zxkM55ReGkDlKSM+Fu41A+zmbZuaPVbGMzvvdUPznYQ=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -108,8 +132,8 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.14.0 h1:jvNa2pY0M4r62jkRQ6RwEZZyPcymeL9XZMLBbV7U2nc=
golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg=
golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ=
golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View File

@@ -34,3 +34,39 @@ func (data *Tabdata) CloneEmpty() Tabdata {
return newdata
}
// add a TAB (\t) in front of every cell, but not the first
func (data *Tabdata) TabEntries() [][]string {
newentries := make([][]string, len(data.entries))
for rowidx, row := range data.entries {
newentries[rowidx] = make([]string, len(row))
for colidx, cell := range row {
switch colidx {
case 0:
newentries[rowidx][colidx] = cell
default:
newentries[rowidx][colidx] = "\t" + cell
}
}
}
return newentries
}
// add a TAB (\t) in front of every header, but not the first
func (data *Tabdata) TabHeaders() []string {
newheaders := make([]string, len(data.headers))
for colidx, cell := range data.headers {
switch colidx {
case 0:
newheaders[colidx] = cell
default:
newheaders[colidx] = "\t" + cell
}
}
return newheaders
}
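For illustration, the tab-prefixing done by these helpers, applied to a plain string slice (a standalone sketch, not the package's unexported types):

```go
package main

import "fmt"

// tabPrefix prepends a TAB to every cell except the first, mirroring
// what TabEntries does for each row (and TabHeaders for the headers).
func tabPrefix(row []string) []string {
	out := make([]string, len(row))

	for idx, cell := range row {
		if idx == 0 {
			out[idx] = cell
		} else {
			out[idx] = "\t" + cell
		}
	}

	return out
}

func main() {
	fmt.Printf("%q\n", tabPrefix([]string{"NAME", "READY", "STATUS"}))
	// ["NAME" "\tREADY" "\tSTATUS"]
}
```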

View File

@@ -1,5 +1,5 @@
/*
Copyright © 2022-2024 Thomas von Dein
Copyright © 2022-2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -19,7 +19,6 @@ package lib
import (
"bufio"
"fmt"
"io"
"strings"
@@ -28,15 +27,46 @@ import (
)
/*
* [!]Match a line, use fuzzy search for normal pattern strings and
* regexp otherwise.
*/
* [!]Match a line, use fuzzy search for normal pattern strings and
* regexp otherwise.
'foo bar' foo, /bar/! => false => line contains foo and not (not bar)
'foo nix' foo, /bar/! => true => line contains foo and (not bar)
'foo bar' foo, /bar/ => true => line contains both foo and bar
'foo nix' foo, /bar/ => false => line does not contain bar
'foo bar' foo, /nix/ => false => line does not contain nix
*/
func matchPattern(conf cfg.Config, line string) bool {
if conf.UseFuzzySearch {
return fuzzy.MatchFold(conf.Pattern, line)
if len(conf.Patterns) == 0 {
// any line always matches ""
return true
}
return conf.PatternR.MatchString(line)
if conf.UseFuzzySearch {
// fuzzy search only considers the 1st pattern
return fuzzy.MatchFold(conf.Patterns[0].Pattern, line)
}
var match int
//fmt.Printf("<%s>\n", line)
for _, re := range conf.Patterns {
patmatch := re.PatternRe.MatchString(line)
if re.Negate {
// toggle the meaning of match
patmatch = !patmatch
}
if patmatch {
match++
}
//fmt.Printf("patmatch: %t, match: %d, pattern: %s, negate: %t\n", patmatch, match, re.Pattern, re.Negate)
}
// fmt.Printf("result: %t\n", match == len(conf.Patterns))
//fmt.Println()
return match == len(conf.Patterns)
}
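A standalone sketch of the AND semantics implemented above (made-up names, not the project's code): every pattern must agree, where a negated pattern agrees by not matching. This reproduces the documented example of keeping "foo zorro" but dropping "foo bar" for the patterns foo and /bar/!.

```go
package main

import (
	"fmt"
	"regexp"
)

// pattern is a compiled regex plus an optional negation, like the
// Pattern struct used above.
type pattern struct {
	re     *regexp.Regexp
	negate bool
}

// keepLine returns true only if every pattern agrees: a normal pattern
// has to match the line, a negated one has to not match it.
func keepLine(line string, patterns []pattern) bool {
	for _, p := range patterns {
		match := p.re.MatchString(line)
		if p.negate {
			match = !match
		}

		if !match {
			return false
		}
	}

	return true
}

func main() {
	patterns := []pattern{
		{re: regexp.MustCompile("foo")},
		{re: regexp.MustCompile("bar"), negate: true},
	}

	fmt.Println(keepLine("foo zorro", patterns)) // true
	fmt.Println(keepLine("foo bar", patterns))   // false
}
```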
/*
@@ -44,10 +74,10 @@ func matchPattern(conf cfg.Config, line string) bool {
* more filters match on a row, it will be kept, otherwise it will be
* excluded.
*/
func FilterByFields(conf cfg.Config, data Tabdata) (Tabdata, bool, error) {
func FilterByFields(conf cfg.Config, data *Tabdata) (*Tabdata, bool, error) {
if len(conf.Filters) == 0 {
// no filters, no checking
return Tabdata{}, false, nil
return nil, false, nil
}
newdata := data.CloneEmpty()
@@ -56,15 +86,19 @@ func FilterByFields(conf cfg.Config, data Tabdata) (Tabdata, bool, error) {
keep := true
for idx, header := range data.headers {
if !Exists(conf.Filters, strings.ToLower(header)) {
lcheader := strings.ToLower(header)
if !Exists(conf.Filters, lcheader) {
// do not filter by unspecified field
continue
}
if !conf.Filters[strings.ToLower(header)].MatchString(row[idx]) {
// there IS a filter, but it doesn't match
keep = false
match := conf.Filters[lcheader].Regex.MatchString(row[idx])
if conf.Filters[lcheader].Negate {
match = !match
}
if !match {
keep = false
break
}
}
@@ -75,7 +109,44 @@ func FilterByFields(conf cfg.Config, data Tabdata) (Tabdata, bool, error) {
}
}
return newdata, true, nil
return &newdata, true, nil
}
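// Example (sketch): -F status=Run keeps only rows whose STATUS column matches
// /Run/, while -F status!=Run keeps the rows that do NOT match. Filters on
// different fields are ANDed: a single non-matching filter drops the row.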
/*
* Transpose fields using search/replace regexp.
*/
func TransposeFields(conf cfg.Config, data *Tabdata) (*Tabdata, bool, error) {
if len(conf.UseTransposers) == 0 {
// nothing to be done
return nil, false, nil
}
newdata := data.CloneEmpty()
transposed := false
for _, row := range data.entries {
transposedrow := false
for idx := range data.headers {
transposeidx, hasone := findindex(conf.UseTransposeColumns, idx+1)
if hasone {
row[idx] =
conf.UseTransposers[transposeidx].Search.ReplaceAllString(
row[idx],
conf.UseTransposers[transposeidx].Replace,
)
transposedrow = true
}
}
if transposedrow {
// also apply -v
newdata.entries = append(newdata.entries, row)
transposed = true
}
}
return &newdata, transposed, nil
}
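// Usage sketch (the field names Search and Replace are taken from the calls
// above; the transposer struct itself lives in cfg): a -R '/Running/OK/'
// transposer boils down to
//
//   cell = tr.Search.ReplaceAllString(cell, tr.Replace)
//
// applied to every cell whose column index is listed in conf.UseTransposeColumns.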
/* generic map.Exists(key) */
@@ -87,8 +158,11 @@ func Exists[K comparable, V any](m map[K]V, v K) bool {
return false
}
/*
* Filters the whole input lines, returns filtered lines
*/
func FilterByPattern(conf cfg.Config, input io.Reader) (io.Reader, error) {
if conf.Pattern == "" {
if len(conf.Patterns) == 0 {
return input, nil
}
@@ -100,25 +174,13 @@ func FilterByPattern(conf cfg.Config, input io.Reader) (io.Reader, error) {
line := strings.TrimSpace(scanner.Text())
if hadFirst {
// don't match 1st line, it's the header
if conf.Pattern != "" && matchPattern(conf, line) == conf.InvertMatch {
if matchPattern(conf, line) == conf.InvertMatch {
// by default -v is false, so if a line does NOT
// match the pattern, we will ignore it. However,
// if the user specified -v, the matching is inverted,
// so we ignore all lines, which DO match.
continue
}
// apply user defined lisp filters, if any
accept, err := RunFilterHooks(conf, line)
if err != nil {
return input, fmt.Errorf("failed to apply filter hook: %w", err)
}
if !accept {
// IF there are filter hook[s] and IF one of them
// returns false on the current line, reject it
continue
}
}
lines = append(lines, line)

View File

@@ -1,5 +1,5 @@
/*
Copyright © 2024 Thomas von Dein
Copyright © 2024-2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -27,21 +27,21 @@ import (
func TestMatchPattern(t *testing.T) {
var input = []struct {
name string
fuzzy bool
pattern string
line string
name string
fuzzy bool
patterns []*cfg.Pattern
line string
}{
{
name: "normal",
pattern: "haus",
line: "hausparty",
name: "normal",
patterns: []*cfg.Pattern{{Pattern: "haus"}},
line: "hausparty",
},
{
name: "fuzzy",
pattern: "hpt",
line: "haus-party-termin",
fuzzy: true,
name: "fuzzy",
patterns: []*cfg.Pattern{{Pattern: "hpt"}},
line: "haus-party-termin",
fuzzy: true,
},
}
@@ -55,7 +55,7 @@ func TestMatchPattern(t *testing.T) {
conf.UseFuzzySearch = true
}
err := conf.PreparePattern(inputdata.pattern)
err := conf.PreparePattern(inputdata.patterns)
if err != nil {
t.Errorf("PreparePattern returned error: %s", err)
}
@@ -98,6 +98,20 @@ func TestFilterByFields(t *testing.T) {
},
},
{
name: "one-field-negative",
filter: []string{"one!=asd"},
expect: Tabdata{
headers: []string{
"ONE", "TWO", "THREE",
},
entries: [][]string{
{"19191", "EDD 1", "x"},
{"8d8", "AN 1", "y"},
},
},
},
{
name: "one-field-inverted",
filter: []string{"one=19"},
@@ -153,8 +167,8 @@ func TestFilterByFields(t *testing.T) {
t.Errorf("PrepareFilters returned error: %s", err)
}
data, _, _ := FilterByFields(conf, data)
if !reflect.DeepEqual(data, inputdata.expect) {
data, _, _ := FilterByFields(conf, &data)
if !reflect.DeepEqual(*data, inputdata.expect) {
t.Errorf("Filtered data does not match expected data:\ngot: %+v\nexp: %+v", data, inputdata.expect)
}
})

View File

@@ -40,6 +40,16 @@ func contains(s []int, e int) bool {
return false
}
func findindex(s []int, e int) (int, bool) {
for i, a := range s {
if a == e {
return i, true
}
}
return 0, false
}
// validate the consistency of parsed data
func ValidateConsistency(data *Tabdata) error {
expectedfields := len(data.headers)
@@ -55,31 +65,102 @@ func ValidateConsistency(data *Tabdata) error {
}
// parse columns list given with -c, modifies config.UseColumns based
// on eventually given regex
// on eventually given regex.
// This is an output filter, because -cN,N,... is being applied AFTER
// processing of the input data.
func PrepareColumns(conf *cfg.Config, data *Tabdata) error {
if conf.Columns == "" {
return nil
// -c columns
usecolumns, err := PrepareColumnVars(conf.Columns, data)
if err != nil {
return err
}
for _, use := range strings.Split(conf.Columns, ",") {
if len(use) == 0 {
return fmt.Errorf("could not parse columns list %s: empty column", conf.Columns)
conf.UseColumns = usecolumns
// -y columns
useyankcolumns, err := PrepareColumnVars(conf.YankColumns, data)
if err != nil {
return err
}
conf.UseYankColumns = useyankcolumns
return nil
}
// Same thing as above but for -T option, which is an input option,
// because transposers are being applied before output.
func PrepareTransposerColumns(conf *cfg.Config, data *Tabdata) error {
// -T columns
usetransposecolumns, err := PrepareColumnVars(conf.TransposeColumns, data)
if err != nil {
return err
}
conf.UseTransposeColumns = usetransposecolumns
// verify that columns and transposers match and prepare transposer structs
if err := conf.PrepareTransposers(); err != nil {
return err
}
return nil
}
// output option, prepare -k1,2 sort fields
func PrepareSortColumns(conf *cfg.Config, data *Tabdata) error {
// -k columns
usecolumns, err := PrepareColumnVars(conf.SortByColumn, data)
if err != nil {
return err
}
conf.UseSortByColumn = usecolumns
return nil
}
func PrepareColumnVars(columns string, data *Tabdata) ([]int, error) {
if columns == "" {
return nil, nil
}
usecolumns := []int{}
isregex := regexp.MustCompile(`\W`)
for _, columnpattern := range strings.Split(columns, ",") {
if len(columnpattern) == 0 {
return nil, fmt.Errorf("could not parse columns list %s: empty column", columns)
}
usenum, err := strconv.Atoi(use)
usenum, err := strconv.Atoi(columnpattern)
if err != nil {
// might be a regexp
colPattern, err := regexp.Compile(use)
if err != nil {
msg := fmt.Sprintf("Could not parse columns list %s: %v", conf.Columns, err)
// not a number
return errors.New(msg)
}
if !isregex.MatchString(columnpattern) {
// is not a regexp (contains no non-word chars)
// lc() it so that word searches are case insensitive
columnpattern = strings.ToLower(columnpattern)
// find matching header fields
for i, head := range data.headers {
if colPattern.MatchString(head) {
conf.UseColumns = append(conf.UseColumns, i+1)
for i, head := range data.headers {
if columnpattern == strings.ToLower(head) {
usecolumns = append(usecolumns, i+1)
}
}
} else {
colPattern, err := regexp.Compile("(?i)" + columnpattern)
if err != nil {
msg := fmt.Sprintf("Could not parse columns list %s: %v", columns, err)
return nil, errors.New(msg)
}
// find matching header fields, ignoring case
for i, head := range data.headers {
if colPattern.MatchString(strings.ToLower(head)) {
usecolumns = append(usecolumns, i+1)
}
}
}
} else {
@@ -87,27 +168,28 @@ func PrepareColumns(conf *cfg.Config, data *Tabdata) error {
// a column spec is not a number, we process them above
// inside the err handler for atoi(). so only add the
// number, if it's really just a number.
conf.UseColumns = append(conf.UseColumns, usenum)
usecolumns = append(usecolumns, usenum)
}
}
// deduplicate: put all values into a map (value gets map key)
// thereby removing duplicates, extract keys into new slice
// and sort it
imap := make(map[int]int, len(conf.UseColumns))
for _, i := range conf.UseColumns {
imap := make(map[int]int, len(usecolumns))
for _, i := range usecolumns {
imap[i] = 0
}
conf.UseColumns = nil
// fill with deduplicated columns
usecolumns = nil
for k := range imap {
conf.UseColumns = append(conf.UseColumns, k)
usecolumns = append(usecolumns, k)
}
sort.Ints(conf.UseColumns)
sort.Ints(usecolumns)
return nil
return usecolumns, nil
}
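// Worked example (sketch): with headers NAME, DURATION, COUNT the spec
// "name,2,du." resolves as follows: "name" is a plain word and must equal a
// lowercased header (-> 1), "2" is taken literally (-> 2), "du." contains a
// non-word character and is compiled as a case-insensitive regexp (-> 2);
// after deduplication and sorting the result is []int{1, 2}.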
// prepare headers: add numbers to headers
@@ -126,13 +208,13 @@ func numberizeAndReduceHeaders(conf cfg.Config, data *Tabdata) {
}
}
if conf.NoNumbering {
numberedHeaders = append(numberedHeaders, head)
headlen = len(head)
} else {
if conf.Numbering {
numhead := fmt.Sprintf("%s(%d)", head, idx+1)
headlen = len(numhead)
numberedHeaders = append(numberedHeaders, numhead)
} else {
numberedHeaders = append(numberedHeaders, head)
headlen = len(head)
}
if headlen > maxwidth {
@@ -172,17 +254,6 @@ func reduceColumns(conf cfg.Config, data *Tabdata) {
}
}
// FIXME: remove this when we only use Tablewriter and strip in ParseFile()!
func trimRow(row []string) []string {
var fixedrow = make([]string, len(row))
for idx, cell := range row {
fixedrow[idx] = strings.TrimSpace(cell)
}
return fixedrow
}
// FIXME: refactor this beast!
func colorizeData(conf cfg.Config, output string) string {
switch {
@@ -219,12 +290,20 @@ func colorizeData(conf cfg.Config, output string) string {
return colorized
case len(conf.Pattern) > 0 && !conf.NoColor && color.IsConsole(os.Stdout):
r := regexp.MustCompile("(" + conf.Pattern + ")")
case len(conf.Patterns) > 0 && !conf.NoColor && color.IsConsole(os.Stdout):
out := output
return r.ReplaceAllStringFunc(output, func(in string) string {
return conf.ColorStyle.Sprint(in)
})
for _, re := range conf.Patterns {
if !re.Negate {
r := regexp.MustCompile("(" + re.Pattern + ")")
out = r.ReplaceAllStringFunc(out, func(in string) string {
return conf.ColorStyle.Sprint(in)
})
}
}
return out
default:
return output

View File

@@ -67,8 +67,8 @@ func TestPrepareColumns(t *testing.T) {
}{
{"1,2,3", []int{1, 2, 3}, false},
{"1,2,", []int{}, true},
{"T", []int{2, 3}, false},
{"T,2,3", []int{2, 3}, false},
{"T.", []int{2, 3}, false},
{"T.,2,3", []int{2, 3}, false},
{"[a-z,4,5", []int{4, 5}, true}, // invalid regexp
}
@@ -90,6 +90,86 @@ func TestPrepareColumns(t *testing.T) {
}
}
func TestPrepareTransposerColumns(t *testing.T) {
data := Tabdata{
maxwidthHeader: 5,
columns: 3,
headers: []string{
"ONE", "TWO", "THREE",
},
entries: [][]string{
{
"2", "3", "4",
},
},
}
var tests = []struct {
input string
transp []string
exp int
wanterror bool // expect error
}{
{
"1",
[]string{`/\d/x/`},
1,
false,
},
{
"T.", // will match [T]WO and [T]HREE
[]string{`/\d/x/`, `/.//`},
2,
false,
},
{
"TH.,2",
[]string{`/\d/x/`, `/.//`},
2,
false,
},
{
"1",
[]string{},
1,
true,
},
{
"",
[]string{`|.|N|`},
0,
true,
},
{
"1",
[]string{`|.|N|`},
1,
false,
},
}
for _, testdata := range tests {
testname := fmt.Sprintf("PrepareTransposerColumns-%s-%t", testdata.input, testdata.wanterror)
t.Run(testname, func(t *testing.T) {
conf := cfg.Config{TransposeColumns: testdata.input, Transposers: testdata.transp}
err := PrepareTransposerColumns(&conf, &data)
if err != nil {
if !testdata.wanterror {
t.Errorf("got error: %v", err)
}
} else {
if len(conf.UseTransposeColumns) != testdata.exp {
t.Errorf("got %d, want %d", conf.UseTransposeColumns, testdata.exp)
}
if len(conf.Transposers) != len(conf.UseTransposeColumns) {
t.Errorf("got %d, want %d", conf.UseTransposeColumns, testdata.exp)
}
}
})
}
}
func TestReduceColumns(t *testing.T) {
var tests = []struct {
expect [][]string
@@ -136,21 +216,21 @@ func TestNumberizeHeaders(t *testing.T) {
}
var tests = []struct {
expect []string
columns []int
nonum bool
expect []string
columns []int
numberize bool
}{
{[]string{"ONE(1)", "TWO(2)", "THREE(3)"}, []int{1, 2, 3}, false},
{[]string{"ONE(1)", "TWO(2)"}, []int{1, 2}, false},
{[]string{"ONE", "TWO"}, []int{1, 2}, true},
{[]string{"ONE(1)", "TWO(2)", "THREE(3)"}, []int{1, 2, 3}, true},
{[]string{"ONE(1)", "TWO(2)"}, []int{1, 2}, true},
{[]string{"ONE", "TWO"}, []int{1, 2}, false},
}
for _, testdata := range tests {
testname := fmt.Sprintf("numberize-headers-columns-%+v-nonum-%t",
testdata.columns, testdata.nonum)
testdata.columns, testdata.numberize)
t.Run(testname, func(t *testing.T) {
conf := cfg.Config{Columns: "x", UseColumns: testdata.columns, NoNumbering: testdata.nonum}
conf := cfg.Config{Columns: "x", UseColumns: testdata.columns, Numbering: testdata.numberize}
usedata := data
numberizeAndReduceHeaders(conf, &usedata)
if !reflect.DeepEqual(usedata.headers, testdata.expect) {

107
lib/io.go
View File

@@ -29,90 +29,79 @@ import (
const RWRR = 0755
func ProcessFiles(conf *cfg.Config, args []string) error {
fds, pattern, err := determineIO(conf, args)
fd, patterns, err := determineIO(conf, args)
if err != nil {
return err
}
if err := conf.PreparePattern(pattern); err != nil {
if err := conf.PreparePattern(patterns); err != nil {
return err
}
for _, fd := range fds {
data, err := Parse(*conf, fd)
if err != nil {
return err
}
if err = ValidateConsistency(&data); err != nil {
return err
}
err = PrepareColumns(conf, &data)
if err != nil {
return err
}
printData(os.Stdout, *conf, &data)
data, err := Parse(*conf, fd)
if err != nil {
return err
}
if err = ValidateConsistency(&data); err != nil {
return err
}
err = PrepareSortColumns(conf, &data)
if err != nil {
return err
}
err = PrepareColumns(conf, &data)
if err != nil {
return err
}
printData(os.Stdout, *conf, &data)
return nil
}
func determineIO(conf *cfg.Config, args []string) ([]io.Reader, string, error) {
var filehandles []io.Reader
var pattern string
func determineIO(conf *cfg.Config, args []string) (io.Reader, []*cfg.Pattern, error) {
var filehandle io.Reader
var patterns []*cfg.Pattern
var haveio bool
stat, _ := os.Stdin.Stat()
if (stat.Mode() & os.ModeCharDevice) == 0 {
// we're reading from STDIN, which takes precedence over file args
filehandles = append(filehandles, os.Stdin)
switch {
case conf.InputFile == "-":
filehandle = os.Stdin
haveio = true
case conf.InputFile != "":
fd, err := os.OpenFile(conf.InputFile, os.O_RDONLY, RWRR)
if len(args) > 0 {
// ignore any args > 1
pattern = args[0]
conf.Pattern = args[0] // used for colorization by printData()
if err != nil {
return nil, nil, fmt.Errorf("failed to read input file %s: %w", conf.InputFile, err)
}
filehandle = fd
haveio = true
} else if len(args) > 0 {
// there were args left, take a look
if args[0] == "-" {
// in traditional unix programs a dash denotes STDIN (forced)
filehandles = append(filehandles, os.Stdin)
}
if !haveio {
stat, _ := os.Stdin.Stat()
if (stat.Mode() & os.ModeCharDevice) == 0 {
// we're reading from STDIN, which takes precedence over file args
filehandle = os.Stdin
haveio = true
} else {
if _, err := os.Stat(args[0]); err != nil {
// first one is not a file, consider it as regexp and
// shift arg list
pattern = args[0]
conf.Pattern = args[0] // used for colorization by printData()
args = args[1:]
}
}
}
if len(args) > 0 {
// consider any other args as files
for _, file := range args {
filehandle, err := os.OpenFile(file, os.O_RDONLY, RWRR)
if err != nil {
return nil, "", fmt.Errorf("failed to read input file %s: %w", file, err)
}
filehandles = append(filehandles, filehandle)
haveio = true
}
}
if len(args) > 0 {
patterns = make([]*cfg.Pattern, len(args))
for i, arg := range args {
patterns[i] = &cfg.Pattern{Pattern: arg}
}
}
if !haveio {
return nil, "", errors.New("no file specified and nothing to read on stdin")
return nil, nil, errors.New("no file specified and nothing to read on stdin")
}
return filehandles, pattern, nil
return filehandle, patterns, nil
}
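// Resulting invocation semantics (sketch):
//
//   tablizer -r table.txt foo bar    # read table.txt, patterns foo AND bar
//   cat table.txt | tablizer foo     # read piped stdin, pattern foo
//   tablizer -r - foo                # force stdin via "-", pattern foo
//
// Without -r and without piped stdin the function errors out; positional file
// arguments are no longer consumed here, every remaining arg becomes a pattern.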

View File

@@ -1,313 +0,0 @@
/*
Copyright © 2023 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package lib
import (
"errors"
"fmt"
"log"
"os"
"strings"
"github.com/glycerine/zygomys/zygo"
"github.com/tlinden/tablizer/cfg"
)
/*
needs to be global because we can't feed a cfg object to AddHook()
which is being called from user lisp code
*/
var Hooks map[string][]*zygo.SexpSymbol
/*
AddHook() (called addhook from lisp code) can be used by the user to
add a function to one of the available hooks provided by tablizer.
*/
func AddHook(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
var hookname string
if len(args) < 2 {
return zygo.SexpNull, errors.New("argument of %add-hook should be: %hook-name %your-function")
}
switch sexptype := args[0].(type) {
case *zygo.SexpSymbol:
if !HookExists(sexptype.Name()) {
return zygo.SexpNull, errors.New("Unknown hook " + sexptype.Name())
}
hookname = sexptype.Name()
default:
return zygo.SexpNull, errors.New("hook name must be a symbol ")
}
switch sexptype := args[1].(type) {
case *zygo.SexpSymbol:
_, exists := Hooks[hookname]
if !exists {
Hooks[hookname] = []*zygo.SexpSymbol{sexptype}
} else {
Hooks[hookname] = append(Hooks[hookname], sexptype)
}
default:
return zygo.SexpNull, errors.New("hook function must be a symbol ")
}
return zygo.SexpNull, nil
}
/*
Check if a hook exists
*/
func HookExists(key string) bool {
for _, hook := range cfg.ValidHooks {
if hook == key {
return true
}
}
return false
}
/*
* Basic sanity checks and load lisp file
*/
func LoadAndEvalFile(env *zygo.Zlisp, path string) error {
if strings.HasSuffix(path, `.zy`) {
code, err := os.ReadFile(path)
if err != nil {
return fmt.Errorf("failed to read lisp file %s: %w", path, err)
}
// FIXME: check what res (_ here) could be and mean
_, err = env.EvalString(string(code))
if err != nil {
log.Fatal(env.GetStackTrace(err))
}
}
return nil
}
/*
* Setup lisp interpreter environment
*/
func SetupLisp(conf *cfg.Config) error {
// iterate over load-path and evaluate all *.zy files there, if any
// we ignore if load-path does not exist, which is the default anyway
path, err := os.Stat(conf.LispLoadPath)
if os.IsNotExist(err) {
return nil
}
// init global hooks
Hooks = make(map[string][]*zygo.SexpSymbol)
// init sandbox
env := zygo.NewZlispSandbox()
env.AddFunction("addhook", AddHook)
if !path.IsDir() {
// load single lisp file
err = LoadAndEvalFile(env, conf.LispLoadPath)
if err != nil {
return err
}
} else {
// load all lisp file in load dir
dir, err := os.ReadDir(conf.LispLoadPath)
if err != nil {
return fmt.Errorf("failed to read lisp dir %s: %w",
conf.LispLoadPath, err)
}
for _, entry := range dir {
if !entry.IsDir() {
err := LoadAndEvalFile(env, conf.LispLoadPath+"/"+entry.Name())
if err != nil {
return err
}
}
}
}
RegisterLib(env)
conf.Lisp = env
return nil
}
/*
Execute every user lisp function registered as filter hook.
Each function is given the current line as argument and is expected to
return a boolean. True indicates to keep the line, false to skip
it.
If there are multiple such functions registered, then the first one
returning false wins, that is if each function returns true the line
will be kept, if at least one of them returns false, it will be
skipped.
*/
func RunFilterHooks(conf cfg.Config, line string) (bool, error) {
for _, hook := range Hooks["filter"] {
var result bool
conf.Lisp.Clear()
res, err := conf.Lisp.EvalString(fmt.Sprintf("(%s `%s`)", hook.Name(), line))
if err != nil {
return false, fmt.Errorf("failed to evaluate hook loader: %w", err)
}
switch sexptype := res.(type) {
case *zygo.SexpBool:
result = sexptype.Val
default:
return false, fmt.Errorf("filter hook shall return bool")
}
if !result {
// the first hook which returns false leads to complete false
return result, nil
}
}
// if no hook returned false, we succeed and accept the given line
return true, nil
}
/*
These hooks get the data (Tabdata) readily processed by tablizer as
argument. They are expected to return a SexpPair containing a boolean
denoting if the data has been modified and the actual modified
data. Columns must be the same, rows may differ. Cells may also have
been modified.
Replaces the internal data structure Tabdata with the user supplied
version.
Only one process hook function is supported.
The somewhat complicated code stems from the fact that we need to convert
our internal structure to a lisp variable and back again afterwards.
*/
func RunProcessHooks(conf cfg.Config, data Tabdata) (Tabdata, bool, error) {
var userdata Tabdata
lisplist := []zygo.Sexp{}
if len(Hooks["process"]) == 0 {
return userdata, false, nil
}
if len(Hooks["process"]) > 1 {
fmt.Println("Warning: only one process hook is allowed!")
}
// there are hook[s] installed, convert the go data structure 'data to lisp
for _, row := range data.entries {
var entry zygo.SexpHash
for idx, cell := range row {
err := entry.HashSet(&zygo.SexpStr{S: data.headers[idx]}, &zygo.SexpStr{S: cell})
if err != nil {
return userdata, false, fmt.Errorf("failed to convert to lisp data: %w", err)
}
}
lisplist = append(lisplist, &entry)
}
// we need to add it to the env so that the function can use the struct directly
conf.Lisp.AddGlobal("data", &zygo.SexpArray{Val: lisplist, Env: conf.Lisp})
// execute the actual hook
hook := Hooks["process"][0]
conf.Lisp.Clear()
var result bool
res, err := conf.Lisp.EvalString(fmt.Sprintf("(%s data)", hook.Name()))
if err != nil {
return userdata, false, fmt.Errorf("failed to eval lisp loader: %w", err)
}
// we expect (bool, array(hash)) as return from the function
switch sexptype := res.(type) {
case *zygo.SexpPair:
switch th := sexptype.Head.(type) {
case *zygo.SexpBool:
result = th.Val
default:
return userdata, false, errors.New("xpect (bool, array(hash)) as return value")
}
switch sexptailtype := sexptype.Tail.(type) {
case *zygo.SexpArray:
lisplist = sexptailtype.Val
default:
return userdata, false, errors.New("expect (bool, array(hash)) as return value ")
}
default:
return userdata, false, errors.New("filter hook shall return array of hashes ")
}
if !result {
// no further processing required
return userdata, result, nil
}
// finally convert lispdata back to Tabdata
for _, item := range lisplist {
row := []string{}
switch hash := item.(type) {
case *zygo.SexpHash:
for _, header := range data.headers {
entry, err := hash.HashGetDefault(
conf.Lisp,
&zygo.SexpStr{S: header},
&zygo.SexpStr{S: ""})
if err != nil {
return userdata, false, fmt.Errorf("failed to get lisp hash entry: %w", err)
}
switch sexptype := entry.(type) {
case *zygo.SexpStr:
row = append(row, sexptype.S)
default:
return userdata, false, errors.New("hsh values should be string ")
}
}
default:
return userdata, false, errors.New("rturned array should contain hashes ")
}
userdata.entries = append(userdata.entries, row)
}
userdata.headers = data.headers
return userdata, result, nil
}

View File

@@ -1,88 +0,0 @@
/*
Copyright © 2023 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package lib
import (
"errors"
"fmt"
"regexp"
"strconv"
"github.com/glycerine/zygomys/zygo"
)
func Splice2SexpList(list []string) zygo.Sexp {
slist := []zygo.Sexp{}
for _, item := range list {
slist = append(slist, &zygo.SexpStr{S: item})
}
return zygo.MakeList(slist)
}
func StringReSplit(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
if len(args) < 2 {
return zygo.SexpNull, errors.New("expecting 2 arguments")
}
var separator, input string
switch t := args[0].(type) {
case *zygo.SexpStr:
input = t.S
default:
return zygo.SexpNull, errors.New("second argument must be a string")
}
switch t := args[1].(type) {
case *zygo.SexpStr:
separator = t.S
default:
return zygo.SexpNull, errors.New("first argument must be a string")
}
sep := regexp.MustCompile(separator)
return Splice2SexpList(sep.Split(input, -1)), nil
}
func String2Int(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
var number int
switch t := args[0].(type) {
case *zygo.SexpStr:
num, err := strconv.Atoi(t.S)
if err != nil {
return zygo.SexpNull, fmt.Errorf("failed to convert string to number: %w", err)
}
number = num
default:
return zygo.SexpNull, errors.New("argument must be a string")
}
return &zygo.SexpInt{Val: int64(number)}, nil
}
func RegisterLib(env *zygo.Zlisp) {
env.AddFunction("resplit", StringReSplit)
env.AddFunction("atoi", String2Int)
}

View File

@@ -33,11 +33,31 @@ import (
Parser switch
*/
func Parse(conf cfg.Config, input io.Reader) (Tabdata, error) {
var data Tabdata
var err error
// first step, parse the data
if len(conf.Separator) == 1 {
return parseCSV(conf, input)
data, err = parseCSV(conf, input)
} else {
data, err = parseTabular(conf, input)
}
return parseTabular(conf, input)
if err != nil {
return data, err
}
// 2nd step, apply filters, code or transposers, if any
postdata, changed, err := PostProcess(conf, &data)
if err != nil {
return data, err
}
if changed {
return *postdata, nil
}
return data, err
}
/*
@@ -77,16 +97,6 @@ func parseCSV(conf cfg.Config, input io.Reader) (Tabdata, error) {
}
}
// apply user defined lisp process hooks, if any
userdata, changed, err := RunProcessHooks(conf, data)
if err != nil {
return data, fmt.Errorf("failed to apply filter hook: %w", err)
}
if changed {
data = userdata
}
return data, nil
}
@@ -110,9 +120,6 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
if !hadFirst {
// header processing
data.columns = len(parts)
// if Debug {
// fmt.Println(parts)
// }
// process all header fields
for _, part := range parts {
@@ -130,7 +137,7 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
}
} else {
// data processing
if conf.Pattern != "" && matchPattern(conf, line) == conf.InvertMatch {
if matchPattern(conf, line) == conf.InvertMatch {
// by default -v is false, so if a line does NOT
// match the pattern, we will ignore it. However,
// if the user specified -v, the matching is inverted,
@@ -138,18 +145,6 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
continue
}
// apply user defined lisp filters, if any
accept, err := RunFilterHooks(conf, line)
if err != nil {
return data, fmt.Errorf("failed to apply filter hook: %w", err)
}
if !accept {
// IF there are filter hook[s] and IF one of them
// returns false on the current line, reject it
continue
}
idx := 0 // we cannot use the header index, because we could exclude columns
values := []string{}
for _, part := range parts {
@@ -174,29 +169,42 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
return data, fmt.Errorf("failed to read from io.Reader: %w", scanner.Err())
}
return data, nil
}
func PostProcess(conf cfg.Config, data *Tabdata) (*Tabdata, bool, error) {
var modified bool
// filter by field filters, if any
filtereddata, changed, err := FilterByFields(conf, data)
if err != nil {
return data, fmt.Errorf("failed to filter fields: %w", err)
return data, false, fmt.Errorf("failed to filter fields: %w", err)
}
if changed {
data = filtereddata
modified = true
}
// apply user defined lisp process hooks, if any
userdata, changed, err := RunProcessHooks(conf, data)
// check if transposers are valid and turn into Transposer structs
if err := PrepareTransposerColumns(&conf, data); err != nil {
return data, false, err
}
// transpose if demanded
modifieddata, changed, err := TransposeFields(conf, data)
if err != nil {
return data, fmt.Errorf("failed to apply filter hook: %w", err)
return data, false, fmt.Errorf("failed to transpose fields: %w", err)
}
if changed {
data = userdata
data = modifieddata
modified = true
}
if conf.Debug {
repr.Print(data)
}
return data, nil
return data, modified, nil
}
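// Processing order (sketch): FilterByFields (-F field[!]=regex) runs first,
// then PrepareTransposerColumns validates -T/-R, then TransposeFields applies
// the search/replace transposers; "modified" becomes true as soon as any
// stage changed the data.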

View File

@@ -83,36 +83,42 @@ func TestParser(t *testing.T) {
func TestParserPatternmatching(t *testing.T) {
var tests = []struct {
entries [][]string
pattern string
invert bool
want bool
name string
entries [][]string
patterns []*cfg.Pattern
invert bool
want bool
}{
{
name: "match",
entries: [][]string{
{"asd", "igig", "cxxxncnc"},
},
pattern: "ig",
invert: false,
patterns: []*cfg.Pattern{{Pattern: "ig"}},
invert: false,
},
{
name: "invert",
entries: [][]string{
{"19191", "EDD 1", "X"},
},
pattern: "ig",
invert: true,
patterns: []*cfg.Pattern{{Pattern: "ig"}},
invert: true,
},
}
for _, inputdata := range input {
for _, testdata := range tests {
testname := fmt.Sprintf("parse-%s-with-pattern-%s-inverted-%t",
inputdata.name, testdata.pattern, testdata.invert)
inputdata.name, testdata.name, testdata.invert)
t.Run(testname, func(t *testing.T) {
conf := cfg.Config{InvertMatch: testdata.invert, Pattern: testdata.pattern,
Separator: inputdata.separator}
conf := cfg.Config{
InvertMatch: testdata.invert,
Patterns: testdata.patterns,
Separator: inputdata.separator,
}
_ = conf.PreparePattern(testdata.pattern)
_ = conf.PreparePattern(testdata.patterns)
readFd := strings.NewReader(strings.TrimSpace(inputdata.text))
gotdata, err := Parse(conf, readFd)
@@ -125,7 +131,7 @@ func TestParserPatternmatching(t *testing.T) {
} else {
if !reflect.DeepEqual(testdata.entries, gotdata.entries) {
t.Errorf("Parser returned invalid data (pattern: %s, invert: %t)\nExp: %+v\nGot: %+v\n",
testdata.pattern, testdata.invert, testdata.entries, gotdata.entries)
testdata.name, testdata.invert, testdata.entries, gotdata.entries)
}
}
})

View File

@@ -1,5 +1,5 @@
/*
Copyright © 2022 Thomas von Dein
Copyright © 2022-2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -22,26 +22,32 @@ import (
"fmt"
"io"
"log"
"regexp"
"strconv"
"strings"
"github.com/gookit/color"
"github.com/olekukonko/tablewriter"
"github.com/olekukonko/tablewriter/renderer"
"github.com/olekukonko/tablewriter/tw"
"github.com/tlinden/tablizer/cfg"
"gopkg.in/yaml.v3"
)
func printData(writer io.Writer, conf cfg.Config, data *Tabdata) {
// add numbers to headers and remove this we're not interested in
// Sort the data first, before headers+entries are being
// reduced. That way the user can specify any valid column to sort
// by, independently if it's being used for display or not.
sortTable(conf, data)
// put one or more columns into clipboard
yankColumns(conf, data)
// add numbers to headers and remove those we're not interested in
numberizeAndReduceHeaders(conf, data)
// remove unwanted columns, if any
reduceColumns(conf, data)
// sort the data
sortTable(conf, data)
switch conf.OutputMode {
case cfg.Extended:
printExtendedData(writer, conf, data)
@@ -71,36 +77,58 @@ Emacs org-mode compatible table (also orgtbl-mode)
*/
func printOrgmodeData(writer io.Writer, conf cfg.Config, data *Tabdata) {
tableString := &strings.Builder{}
table := tablewriter.NewWriter(tableString)
table := tablewriter.NewTable(tableString,
tablewriter.WithRenderer(
renderer.NewBlueprint(
tw.Rendition{
Borders: tw.Border{
Left: tw.On,
Right: tw.On,
Top: tw.On,
Bottom: tw.On,
},
Settings: tw.Settings{
Separators: tw.Separators{
ShowHeader: tw.On,
ShowFooter: tw.Off,
BetweenRows: tw.Off,
BetweenColumns: 0,
},
},
Symbols: tw.NewSymbols(tw.StyleASCII),
})),
tablewriter.WithConfig(
tablewriter.Config{
Header: tw.CellConfig{
Formatting: tw.CellFormatting{
Alignment: tw.AlignLeft,
AutoFormat: tw.Off,
},
},
Row: tw.CellConfig{
Formatting: tw.CellFormatting{
Alignment: tw.AlignLeft,
},
},
},
),
)
if !conf.NoHeaders {
table.SetHeader(data.headers)
table.Header(data.headers)
}
for _, row := range data.entries {
table.Append(trimRow(row))
if err := table.Bulk(data.entries); err != nil {
log.Fatalf("Failed to add data to table renderer: %s", err)
}
table.Render()
if err := table.Render(); err != nil {
log.Fatalf("Failed to render table: %s", err)
}
/* fix output for org-mode (orgtbl)
tableWriter output:
+------+------+
| cell | cell |
+------+------+
Needed for org-mode compatibility:
|------+------|
| cell | cell |
|------+------|
*/
leftR := regexp.MustCompile(`(?m)^\\+`)
rightR := regexp.MustCompile(`\\+(?m)$`)
output(writer, color.Sprint(
colorizeData(conf,
rightR.ReplaceAllString(
leftR.ReplaceAllString(tableString.String(), "|"), "|"))))
output(writer, color.Sprint(colorizeData(conf, tableString.String())))
}
/*
@@ -108,20 +136,57 @@ Markdown table
*/
func printMarkdownData(writer io.Writer, conf cfg.Config, data *Tabdata) {
tableString := &strings.Builder{}
table := tablewriter.NewWriter(tableString)
table := tablewriter.NewTable(tableString,
tablewriter.WithRenderer(
renderer.NewBlueprint(
tw.Rendition{
Borders: tw.Border{
Left: tw.On,
Right: tw.On,
Top: tw.Off,
Bottom: tw.Off,
},
Settings: tw.Settings{
Separators: tw.Separators{
ShowHeader: tw.On,
ShowFooter: tw.Off,
BetweenRows: tw.Off,
BetweenColumns: 0,
},
},
Symbols: tw.NewSymbols(tw.StyleMarkdown),
})),
tablewriter.WithConfig(
tablewriter.Config{
Header: tw.CellConfig{
Formatting: tw.CellFormatting{
Alignment: tw.AlignLeft,
AutoFormat: tw.Off,
},
},
Row: tw.CellConfig{
Formatting: tw.CellFormatting{
Alignment: tw.AlignLeft,
},
},
},
),
)
if !conf.NoHeaders {
table.SetHeader(data.headers)
table.Header(data.headers)
}
for _, row := range data.entries {
table.Append(trimRow(row))
if err := table.Bulk(data.entries); err != nil {
log.Fatalf("Failed to add data to table renderer: %s", err)
}
table.SetBorders(tablewriter.Border{Left: true, Top: false, Right: true, Bottom: false})
table.SetCenterSeparator("|")
if err := table.Render(); err != nil {
log.Fatalf("Failed to render table: %s", err)
}
table.Render()
output(writer, color.Sprint(colorizeData(conf, tableString.String())))
}
@@ -130,33 +195,53 @@ Simple ASCII table without any borders etc, just like the input we expect
*/
func printASCIIData(writer io.Writer, conf cfg.Config, data *Tabdata) {
tableString := &strings.Builder{}
table := tablewriter.NewWriter(tableString)
table := tablewriter.NewTable(tableString,
tablewriter.WithRenderer(
renderer.NewBlueprint(tw.Rendition{
Borders: tw.BorderNone,
Symbols: tw.NewSymbols(tw.StyleASCII),
Settings: tw.Settings{
Separators: tw.Separators{BetweenRows: tw.Off, BetweenColumns: tw.Off},
Lines: tw.Lines{ShowFooterLine: tw.Off, ShowHeaderLine: tw.Off},
},
})),
tablewriter.WithConfig(tablewriter.Config{
Header: tw.CellConfig{
Formatting: tw.CellFormatting{
AutoFormat: tw.Off,
},
Padding: tw.CellPadding{
Global: tw.Padding{Left: "", Right: ""},
},
},
Row: tw.CellConfig{
Formatting: tw.CellFormatting{
AutoWrap: tw.WrapNone,
Alignment: tw.AlignLeft,
},
Padding: tw.CellPadding{
Global: tw.Padding{Left: "", Right: ""},
},
},
Debug: true,
}),
tablewriter.WithPadding(tw.PaddingNone),
)
if !conf.NoHeaders {
table.SetHeader(data.headers)
table.Header(data.TabHeaders())
}
table.AppendBulk(data.entries)
table.SetAutoWrapText(false)
table.SetAutoFormatHeaders(true)
table.SetHeaderAlignment(tablewriter.ALIGN_LEFT)
table.SetAlignment(tablewriter.ALIGN_LEFT)
table.SetCenterSeparator("")
table.SetColumnSeparator("")
table.SetRowSeparator("")
table.SetHeaderLine(false)
table.SetBorder(false)
table.SetNoWhiteSpace(true)
if !conf.UseHighlight {
// the tabs destroy the highlighting
table.SetTablePadding("\t") // pad with tabs
} else {
table.SetTablePadding(" ")
if err := table.Bulk(data.TabEntries()); err != nil {
log.Fatalf("Failed to add data to table renderer: %s", err)
}
if err := table.Render(); err != nil {
log.Fatalf("Failed to render table: %s", err)
}
table.Render()
output(writer, color.Sprint(colorizeData(conf, tableString.String())))
}

View File

@@ -63,9 +63,9 @@ var tests = []struct {
name string // so we can identify which one fails, can be the same
// for multiple tests, because flags will be appended to the name
sortby string // empty == default
column int // sort by this column, 0 == default first or NO Sort
column int // sort by this column (numbers start by 1)
desc bool // sort in descending order, default == ascending
nonum bool // hide numbering
numberize bool // add header numbering
mode int // shell, orgtbl, etc. empty == default: ascii
usecol []int // columns to display, empty == display all
usecolstr string // for testname, must match usecol
@@ -73,17 +73,19 @@ var tests = []struct {
}{
// --------------------- Default settings mode tests ``
{
mode: cfg.ASCII,
name: "default",
mode: cfg.ASCII,
numberize: true,
name: "default",
expect: `
NAME(1) DURATION(2) COUNT(3) WHEN(4)
beta 1d10h5m1s 33 3/1/2014
alpha 4h35m 170 2013-Feb-03
NAME(1) DURATION(2) COUNT(3) WHEN(4)
beta 1d10h5m1s 33 3/1/2014
alpha 4h35m 170 2013-Feb-03
ceta 33d12h 9 06/Jan/2008 15:04:05 -0700`,
},
{
mode: cfg.CSV,
name: "csv",
mode: cfg.CSV,
numberize: false,
name: "csv",
expect: `
NAME,DURATION,COUNT,WHEN
beta,1d10h5m1s,33,3/1/2014
@@ -91,40 +93,42 @@ alpha,4h35m,170,2013-Feb-03
ceta,33d12h,9,06/Jan/2008 15:04:05 -0700`,
},
{
name: "default",
mode: cfg.Orgtbl,
name: "orgtbl",
numberize: true,
mode: cfg.Orgtbl,
expect: `
+---------+-------------+----------+----------------------------+
| NAME(1) | DURATION(2) | COUNT(3) | WHEN(4) |
| NAME(1) | DURATION(2) | COUNT(3) | WHEN(4) |
+---------+-------------+----------+----------------------------+
| beta | 1d10h5m1s | 33 | 3/1/2014 |
| alpha | 4h35m | 170 | 2013-Feb-03 |
| ceta | 33d12h | 9 | 06/Jan/2008 15:04:05 -0700 |
| beta | 1d10h5m1s | 33 | 3/1/2014 |
| alpha | 4h35m | 170 | 2013-Feb-03 |
| ceta | 33d12h | 9 | 06/Jan/2008 15:04:05 -0700 |
+---------+-------------+----------+----------------------------+`,
},
{
name: "default",
mode: cfg.Markdown,
name: "markdown",
mode: cfg.Markdown,
numberize: true,
expect: `
| NAME(1) | DURATION(2) | COUNT(3) | WHEN(4) |
| NAME(1) | DURATION(2) | COUNT(3) | WHEN(4) |
|---------|-------------|----------|----------------------------|
| beta | 1d10h5m1s | 33 | 3/1/2014 |
| alpha | 4h35m | 170 | 2013-Feb-03 |
| ceta | 33d12h | 9 | 06/Jan/2008 15:04:05 -0700 |`,
| beta | 1d10h5m1s | 33 | 3/1/2014 |
| alpha | 4h35m | 170 | 2013-Feb-03 |
| ceta | 33d12h | 9 | 06/Jan/2008 15:04:05 -0700 |`,
},
{
name: "default",
mode: cfg.Shell,
nonum: true,
name: "shell",
mode: cfg.Shell,
numberize: false,
expect: `
NAME="beta" DURATION="1d10h5m1s" COUNT="33" WHEN="3/1/2014"
NAME="alpha" DURATION="4h35m" COUNT="170" WHEN="2013-Feb-03"
NAME="ceta" DURATION="33d12h" COUNT="9" WHEN="06/Jan/2008 15:04:05 -0700"`,
},
{
name: "default",
mode: cfg.Yaml,
nonum: true,
name: "yaml",
mode: cfg.Yaml,
numberize: false,
expect: `
entries:
- count: 33
@@ -141,8 +145,9 @@ entries:
when: "06/Jan/2008 15:04:05 -0700"`,
},
{
name: "default",
mode: cfg.Extended,
name: "extended",
mode: cfg.Extended,
numberize: true,
expect: `
NAME(1): beta
DURATION(2): 1d10h5m1s
@@ -162,36 +167,39 @@ DURATION(2): 33d12h
//------------------------ SORT TESTS
{
name: "sortbycolumn",
column: 3,
sortby: "numeric",
desc: false,
name: "sortbycolumn3",
column: 3,
sortby: "numeric",
numberize: true,
desc: false,
expect: `
NAME(1) DURATION(2) COUNT(3) WHEN(4)
ceta 33d12h 9 06/Jan/2008 15:04:05 -0700
beta 1d10h5m1s 33 3/1/2014
NAME(1) DURATION(2) COUNT(3) WHEN(4)
ceta 33d12h 9 06/Jan/2008 15:04:05 -0700
beta 1d10h5m1s 33 3/1/2014
alpha 4h35m 170 2013-Feb-03`,
},
{
name: "sortbycolumn",
column: 4,
sortby: "time",
desc: false,
name: "sortbycolumn4",
column: 4,
sortby: "time",
desc: false,
numberize: true,
expect: `
NAME(1) DURATION(2) COUNT(3) WHEN(4)
ceta 33d12h 9 06/Jan/2008 15:04:05 -0700
alpha 4h35m 170 2013-Feb-03
NAME(1) DURATION(2) COUNT(3) WHEN(4)
ceta 33d12h 9 06/Jan/2008 15:04:05 -0700
alpha 4h35m 170 2013-Feb-03
beta 1d10h5m1s 33 3/1/2014`,
},
{
name: "sortbycolumn",
column: 2,
sortby: "duration",
desc: false,
name: "sortbycolumn2",
column: 2,
sortby: "duration",
numberize: true,
desc: false,
expect: `
NAME(1) DURATION(2) COUNT(3) WHEN(4)
alpha 4h35m 170 2013-Feb-03
beta 1d10h5m1s 33 3/1/2014
NAME(1) DURATION(2) COUNT(3) WHEN(4)
alpha 4h35m 170 2013-Feb-03
beta 1d10h5m1s 33 3/1/2014
ceta 33d12h 9 06/Jan/2008 15:04:05 -0700`,
},
@@ -199,75 +207,85 @@ ceta 33d12h 9 06/Jan/2008 15:04:05 -0700`,
{
name: "usecolumns",
usecol: []int{1, 4},
numberize: true,
usecolstr: "1,4",
expect: `
NAME(1) WHEN(4)
beta 3/1/2014
alpha 2013-Feb-03
NAME(1) WHEN(4)
beta 3/1/2014
alpha 2013-Feb-03
ceta 06/Jan/2008 15:04:05 -0700`,
},
{
name: "usecolumns",
usecol: []int{2},
numberize: true,
usecolstr: "2",
expect: `
DURATION(2)
1d10h5m1s
4h35m
DURATION(2)
1d10h5m1s
4h35m
33d12h`,
},
{
name: "usecolumns",
usecol: []int{3},
numberize: true,
usecolstr: "3",
expect: `
COUNT(3)
33
170
COUNT(3)
33
170
9`,
},
{
name: "usecolumns",
column: 0,
usecol: []int{1, 3},
numberize: true,
usecolstr: "1,3",
expect: `
NAME(1) COUNT(3)
beta 33
alpha 170
NAME(1) COUNT(3)
beta 33
alpha 170
ceta 9`,
},
{
name: "usecolumns",
usecol: []int{2, 4},
numberize: true,
usecolstr: "2,4",
expect: `
DURATION(2) WHEN(4)
1d10h5m1s 3/1/2014
4h35m 2013-Feb-03
DURATION(2) WHEN(4)
1d10h5m1s 3/1/2014
4h35m 2013-Feb-03
33d12h 06/Jan/2008 15:04:05 -0700`,
},
}
func TestPrinter(t *testing.T) {
for _, testdata := range tests {
testname := fmt.Sprintf("print-sortcol-%d-desc-%t-sortby-%s-mode-%d-usecolumns-%s",
testdata.column, testdata.desc, testdata.sortby, testdata.mode, testdata.usecolstr)
testname := fmt.Sprintf("print-%s-%d-desc-%t-sortby-%s-mode-%d-usecolumns-%s-numberize-%t",
testdata.name, testdata.column, testdata.desc, testdata.sortby,
testdata.mode, testdata.usecolstr, testdata.numberize)
t.Run(testname, func(t *testing.T) {
// replaces os.Stdout, but we ignore it
var writer bytes.Buffer
// cmd flags
conf := cfg.Config{
SortByColumn: testdata.column,
SortDescending: testdata.desc,
SortMode: testdata.sortby,
OutputMode: testdata.mode,
NoNumbering: testdata.nonum,
Numbering: testdata.numberize,
UseColumns: testdata.usecol,
NoColor: true,
}
if testdata.column > 0 {
conf.UseSortByColumn = []int{testdata.column}
}
conf.ApplyDefaults()
// the test checks the len!

View File

@@ -18,6 +18,7 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
package lib
import (
"cmp"
"regexp"
"sort"
"strconv"
@@ -27,34 +28,41 @@ import (
)
func sortTable(conf cfg.Config, data *Tabdata) {
if conf.SortByColumn <= 0 {
if len(conf.UseSortByColumn) == 0 {
// no sorting wanted
return
}
// slightly modified here to match internal array indices
col := conf.SortByColumn
col-- // ui starts counting by 1, but use 0 internally
// sanity checks
if len(data.entries) == 0 {
return
}
if col >= len(data.headers) {
// fall back to default column
col = 0
}
// actual sorting
sort.SliceStable(data.entries, func(i, j int) bool {
return compare(&conf, data.entries[i][col], data.entries[j][col])
// holds the result of a sort of one column
comparators := []int{}
// iterate over all columns to be sorted, conf.SortMode must be identical!
for _, column := range conf.UseSortByColumn {
comparators = append(comparators, compare(&conf, data.entries[i][column-1], data.entries[j][column-1]))
}
// return the combined result
res := cmp.Or(comparators...)
switch res {
case 0:
return true
default:
return false
}
})
}
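// Note: cmp.Or (Go 1.22+) returns the first of its arguments that is not the
// zero value, and zero if all of them are, e.g. cmp.Or(0, 0, 1) == 1 and
// cmp.Or(0, 0, 0) == 0. The combined result above is therefore 0, i.e. "keep
// this order", only if every configured sort column reported 0.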
// config is not modified here, but it would be inefficient to copy it every loop
func compare(conf *cfg.Config, left string, right string) bool {
func compare(conf *cfg.Config, left string, right string) int {
var comp bool
switch conf.SortMode {
@@ -88,7 +96,12 @@ func compare(conf *cfg.Config, left string, right string) bool {
comp = !comp
}
return comp
switch comp {
case true:
return 0
default:
return 1
}
}
/*

View File

@@ -53,18 +53,18 @@ func TestCompare(t *testing.T) {
mode string
a string
b string
want bool
want int
desc bool
}{
// ascending
{"numeric", "10", "20", true, false},
{"duration", "2d4h5m", "45m", false, false},
{"time", "12/24/2022", "1/1/1970", false, false},
{"numeric", "10", "20", 0, false},
{"duration", "2d4h5m", "45m", 1, false},
{"time", "12/24/2022", "1/1/1970", 1, false},
// descending
{"numeric", "10", "20", false, true},
{"duration", "2d4h5m", "45m", true, true},
{"time", "12/24/2022", "1/1/1970", true, true},
{"numeric", "10", "20", 1, true},
{"duration", "2d4h5m", "45m", 0, true},
{"time", "12/24/2022", "1/1/1970", 0, true},
}
for _, testdata := range tests {
@@ -75,7 +75,7 @@ func TestCompare(t *testing.T) {
c := cfg.Config{SortMode: testdata.mode, SortDescending: testdata.desc}
got := compare(&c, testdata.a, testdata.b)
if got != testdata.want {
t.Errorf("got %t, want %t", got, testdata.want)
t.Errorf("got %d, want %d", got, testdata.want)
}
})
}

51
lib/yank.go Normal file
View File

@@ -0,0 +1,51 @@
/*
Copyright © 2022-2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package lib
import (
"log"
"strings"
"github.com/tiagomelo/go-clipboard/clipboard"
"github.com/tlinden/tablizer/cfg"
)
func yankColumns(conf cfg.Config, data *Tabdata) {
var yank []string
if len(data.entries) == 0 || len(conf.UseYankColumns) == 0 {
return
}
for _, row := range data.entries {
for i, field := range row {
for _, idx := range conf.UseYankColumns {
if i == idx-1 {
yank = append(yank, field)
}
}
}
}
if len(yank) > 0 {
cb := clipboard.New(clipboard.ClipboardOptions{Primary: true})
if err := cb.CopyText(strings.Join(yank, " ")); err != nil {
log.Fatalln("error writing string to clipboard:", err)
}
}
}

72
lib/yank_test.go Normal file
View File

@@ -0,0 +1,72 @@
/*
Copyright © 2025 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package lib
import (
"bytes"
"fmt"
"testing"
"github.com/tiagomelo/go-clipboard/clipboard"
"github.com/tlinden/tablizer/cfg"
)
var yanktests = []struct {
name string
yank []int // -y$colum,$column... after processing
filter string
expect string
}{
{
name: "one",
yank: []int{1},
filter: "beta",
},
}
func DISABLED_TestYankColumns(t *testing.T) {
cb := clipboard.New()
for _, testdata := range yanktests {
testname := fmt.Sprintf("yank-%s-filter-%s",
testdata.name, testdata.filter)
t.Run(testname, func(t *testing.T) {
conf := cfg.Config{
OutputMode: cfg.ASCII,
UseYankColumns: testdata.yank,
NoColor: true,
}
conf.ApplyDefaults()
data := newData() // defined in printer_test.go, reused here
var writer bytes.Buffer
printData(&writer, conf, &data)
got, err := cb.PasteText()
if err != nil {
t.Errorf("failed to fetch yanked text from clipboard")
}
if got != testdata.expect {
t.Errorf("not yanked correctly:\n+++ got:\n%s\n+++ want:\n%s",
got, testdata.expect)
}
})
}
}

10
main.go
View File

@@ -18,9 +18,17 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"os"
"github.com/tlinden/tablizer/cmd"
)
func main() {
cmd.Execute()
os.Exit(Main())
}
func Main() int {
cmd.Execute()
return 0 // cmd takes care of exit 1 itself
}

19
main_test.go Normal file
View File

@@ -0,0 +1,19 @@
package main
import (
"testing"
"github.com/rogpeppe/go-internal/testscript"
)
func TestMain(m *testing.M) {
testscript.Main(m, map[string]func(){
"tablizer": main,
})
}
func TestTablizer(t *testing.T) {
testscript.Run(t, testscript.Params{
Dir: "t",
})
}
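// With this wiring every *.txtar script below t/ can invoke the tool via
// "exec tablizer ..."; testscript derives the available commands from the map
// passed to testscript.Main above.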

View File

@@ -42,8 +42,15 @@ for D in $DIST; do
binfile="releases/${tool}-${os}-${arch}-${version}"
tardir="${tool}-${os}-${arch}-${version}"
tarfile="releases/${tool}-${os}-${arch}-${version}.tar.gz"
pie=""
if test "$D" = "linux/amd64"; then
pie="-buildmode=pie"
fi
set -x
GOOS=${os} GOARCH=${arch} go build -o ${binfile} -ldflags "-X 'github.com/tlinden/tablizer/cfg.VERSION=${version}'"
GOOS=${os} GOARCH=${arch} go build -tags osusergo,netgo -ldflags "-extldflags=-static -w -X 'github.com/tlinden/tablizer/cfg.VERSION=${version}'" --trimpath $pie -o ${binfile}
strip --strip-all ${binfile}
mkdir -p ${tardir}
cp ${binfile} README.md LICENSE ${tardir}/
echo 'tool = tablizer

43
t/test-basics.txtar Normal file
View File

@@ -0,0 +1,43 @@
# usage
exec tablizer -h
stdout Usage
# version
exec tablizer -V
stdout version
# manpage
exec tablizer -m
stdout SYNOPSIS
# completion
exec tablizer --completion bash
stdout __tablizer_init_completion
# use config (configures colors, but these are not being used, since
# this env doesn't support it), but at least it should succeed.
exec tablizer -f config.hcl -r testtable.txt Runn
stdout Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s
-- config.hcl --
BG = "lightGreen"
FG = "white"
HighlightBG = "lightGreen"
HighlightFG = "white"
NoHighlightBG = "white"
NoHighlightFG = "lightGreen"
HighlightHdrBG = "red"
HighlightHdrFG = "white"

26
t/test-csv.txtar Normal file
View File

@@ -0,0 +1,26 @@
# reading from file and matching with lowercase words
exec tablizer -c name,status -r testtable.csv -s,
stdout grafana.*Runn
# matching mixed case
exec tablizer -c NAME,staTUS -r testtable.csv -s,
stdout grafana.*Runn
# matching using numbers
exec tablizer -c 1,3 -r testtable.csv -s,
stdout grafana.*Runn
# matching using regex
exec tablizer -c 'na.*,stat.' -r testtable.csv -s,
stdout grafana.*Runn
# will be automatically created in work dir
-- testtable.csv --
NAME,READY,STATUS,RESTARTS,AGE
alertmanager-kube-prometheus-alertmanager-0,2/2,Running,35 (45m ago),11d
grafana-fcc54cbc9-bk7s8,1/1,Running,17 (45m ago),1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7,1/1,Running,17 (45m ago),1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f,1/1,Running,20 (45m ago),45m
kube-prometheus-node-exporter-bfzpl,1/1,Running,17 (45m ago),54s

21
t/test-filtering.txtar Normal file
View File

@@ -0,0 +1,21 @@
# filtering
exec tablizer -r testtable.txt -F name=grafana
stdout grafana.*Runn
# filtering two columns
exec tablizer -r testtable.txt -F name=prometh -F age=1h
stdout blackbox.*Runn
# filtering two same columns
exec tablizer -r testtable.txt -F name=prometh -F name=alert
stdout prometheus-alertmanager.*Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

View File

@@ -0,0 +1,25 @@
# reading from file and matching with lowercase words
exec tablizer -c name,status -r testtable.txt
stdout grafana.*Runn
# matching mixed case
exec tablizer -c NAME,staTUS -r testtable.txt
stdout grafana.*Runn
# matching using numbers
exec tablizer -c 1,3 -r testtable.txt
stdout grafana.*Runn
# matching using regex
exec tablizer -c 'na.*,stat.' -r testtable.txt
stdout grafana.*Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

View File

@@ -0,0 +1,46 @@
# filtering
# a AND b
exec tablizer -r testtable.txt -H -cspecies invasive imperium
stdout 'namak'
! stdout human
# a AND !b
exec tablizer -r testtable.txt -H -cspecies invasive '/imperium/!'
stdout 'human'
! stdout namak
# a AND !b AND c
exec tablizer -r testtable.txt -H -cspecies peaceful '/imperium/!' planetary
stdout 'kenaha'
! stdout 'namak|heduu|riedl'
# case insensitive
exec tablizer -r testtable.txt -H -cspecies '/REGIONAL/i'
stdout namak
! stdout 'human|riedl|heduu|kenaa'
# case insensitive negated
exec tablizer -r testtable.txt -H -cspecies '/REGIONAL/!i'
stdout 'human|riedl|heduu|kenaa'
! stdout namak
# !a AND !b
exec tablizer -r testtable.txt -H -cspecies '/galactic/!' '/planetary/!'
stdout namak
! stdout 'human|riedl|heduu|kenaa'
# same case insensitive
exec tablizer -r testtable.txt -H -cspecies '/GALACTIC/i!' '/PLANETARY/!i'
stdout namak
! stdout 'human|riedl|heduu|kenaa'
# will be automatically created in work dir
-- testtable.txt --
SPECIES TYPE HOME STAGE SPREAD
human invasive earth brink planetary
riedl peaceful keauna civilized pangalactic
namak invasive namak imperium regional
heduu peaceful iu imperium galactic
kenaha peaceful kohi hunter-gatherer planetary

49
t/test-sort.txtar Normal file
View File

@@ -0,0 +1,49 @@
# sort by name
exec tablizer -r testtable.txt -k 1
stdout '^alert.*\n^grafana.*\n^kube'
# sort by name reversed
exec tablizer -r testtable.txt -k 1 -D
stdout 'kube.*\n^grafana.*\n^alert'
# sort by starts numerically
exec tablizer -r testtable.txt -k 4 -i -c4
stdout '17\s*\n^20\s*\n^35'
# sort by starts numerically reversed
exec tablizer -r testtable.txt -k 4 -i -c4 -D
stdout '35\s*\n^20\s*\n^17'
# sort by age
exec tablizer -r testtable.txt -k 5 -a
stdout '45m\s*\n.*1h44m'
# sort by age reverse
exec tablizer -r testtable.txt -k 5 -a -D
stdout '1h44m\s*\n.*45m'
# sort by time
exec tablizer -r timetable.txt -k 2 -t
stdout '^sel.*\n^foo.*\nbar'
# sort by time reverse
exec tablizer -r timetable.txt -k 2 -t -D
stdout '^bar.*\n^foo.*\nsel'
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS STARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 11d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 1h44m
grafana-fcc54cbc9-bk7s8 1/1 Running 17 1d
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 54s
-- timetable.txt --
NAME TIME
foo 2024-11-18T12:00:00+01:00
bar 2024-11-18T12:45:00+01:00
sel 2024-07-18T12:00:00+01:00

18
t/test-stdin.txtar Normal file
View File

@@ -0,0 +1,18 @@
# reading from stdin and matching with lowercase words
stdin testtable.txt
exec tablizer -c name,status
stdout grafana.*Runn
# reading from -r stdin and matching with lowercase words
stdin testtable.txt
exec tablizer -c name,status -r -
stdout grafana.*Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

21
t/test-transpose.txtar Normal file
View File

@@ -0,0 +1,21 @@
# transpose one field
exec tablizer -r testtable.txt -T status -R '/Running/OK/'
stdout grafana.*OK
# transpose two fields
exec tablizer -r testtable.txt -T name,status -R '/alertmanager-//' -R '/Running/OK/'
stdout prometheus-0.*OK
# transpose one field and show one column
exec tablizer -r testtable.txt -T status -R '/Running/OK/' -c name
! stdout grafana.*OK
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s


@@ -1,45 +0,0 @@
#!/bin/sh
# simple commandline unit test script
t="../tablizer"
fail=0
ex() {
# execute a test, report+exit on error, stay silent otherwise
log="/tmp/test-tablizer.$$.log"
name=$1
shift
echo -n "TEST $name "
$* > $log 2>&1
if test $? -ne 0; then
echo "failed, see $log"
fail=1
else
echo "ok"
rm -f $log
fi
}
# only use files in test dir
cd $(dirname $0)
echo "Executing commandline tests ..."
# io pattern tests
ex io-pattern-and-file $t bk7 testtable
cat testtable | ex io-pattern-and-stdin $t bk7
cat testtable | ex io-pattern-and-stdin-dash $t bk7 -
# same w/o pattern
ex io-just-file $t testtable
cat testtable | ex io-just-stdin $t
cat testtable | ex io-just-stdin-dash $t -
if test $fail -ne 0; then
echo "!!! Some tests failed !!!"
exit 1
fi

6
t/testtable.csv Normal file

@@ -0,0 +1,6 @@
NAME,DURATION
x,10
a,100
z,0
u,4
k,6

6
t/testtable3 Normal file

@@ -0,0 +1,6 @@
NAME READY STATUS STARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 11d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 1h44m
grafana-fcc54cbc9-bk7s8 1/1 Running 17 1d
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 54s

4
t/testtable4 Normal file

@@ -0,0 +1,4 @@
ONE TWO
1 4
3 1
5 2

6
t/testtable5 Normal file

@@ -0,0 +1,6 @@
SPECIES TYPE HOME STAGE
human invasive earth brink
riedl peaceful keauna civilized
namak invasive namak imperium
heduu peaceful iu imperium
kenaha peaceful kohi hunter-gatherer


@@ -133,7 +133,7 @@
.\" ========================================================================
.\"
.IX Title "TABLIZER 1"
.TH TABLIZER 1 "2024-05-07" "1" "User Commands"
.TH TABLIZER 1 "2025-03-06" "1" "User Commands"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
@@ -144,42 +144,46 @@ tablizer \- Manipulate tabular output of other programs
.IX Header "SYNOPSIS"
.Vb 2
\& Usage:
\& tablizer [regex] [file, ...] [flags]
\& tablizer [regex,...] [file, ...] [flags]
\&
\& Operational Flags:
\& \-c, \-\-columns string Only show the specified columns (separated by ,)
\& \-v, \-\-invert\-match select non\-matching rows
\& \-n, \-\-no\-numbering Disable header numbering
\& \-N, \-\-no\-color Disable pattern highlighting
\& \-H, \-\-no\-headers Disable headers display
\& \-s, \-\-separator string Custom field separator
\& \-k, \-\-sort\-by int Sort by column (default: 1)
\& \-z, \-\-fuzzy Use fuzzy search [experimental]
\& \-F, \-\-filter field=reg Filter given field with regex, can be used multiple times
\& \-c, \-\-columns string Only show the specified columns (separated by ,)
\& \-v, \-\-invert\-match select non\-matching rows
\& \-n, \-\-numbering Enable header numbering
\& \-N, \-\-no\-color Disable pattern highlighting
\& \-H, \-\-no\-headers Disable headers display
\& \-s, \-\-separator string Custom field separator
\& \-k, \-\-sort\-by int|name Sort by column (default: 1)
\& \-z, \-\-fuzzy Use fuzzy search [experimental]
\& \-F, \-\-filter field[!]=reg Filter given field with regex, can be used multiple times
\& \-T, \-\-transpose\-columns string Transpose the specified columns (separated by ,)
\& \-R, \-\-regex\-transposer /from/to/ Apply /search/replace/ regexp to fields given in \-T
\&
\& Output Flags (mutually exclusive):
\& \-X, \-\-extended Enable extended output
\& \-M, \-\-markdown Enable markdown table output
\& \-O, \-\-orgtbl Enable org\-mode table output
\& \-S, \-\-shell Enable shell evaluable output
\& \-Y, \-\-yaml Enable yaml output
\& \-C, \-\-csv Enable CSV output
\& \-A, \-\-ascii Default output mode, ascii tabular
\& \-L, \-\-hightlight\-lines Use alternating background colors for tables
\& \-X, \-\-extended Enable extended output
\& \-M, \-\-markdown Enable markdown table output
\& \-O, \-\-orgtbl Enable org\-mode table output
\& \-S, \-\-shell Enable shell evaluable output
\& \-Y, \-\-yaml Enable yaml output
\& \-C, \-\-csv Enable CSV output
\& \-A, \-\-ascii Default output mode, ascii tabular
\& \-L, \-\-hightlight\-lines Use alternating background colors for tables
\& \-y, \-\-yank\-columns Yank specified columns (separated by ,) to clipboard,
\& space separated
\&
\& Sort Mode Flags (mutually exclusive):
\& \-a, \-\-sort\-age sort according to age (duration) string
\& \-D, \-\-sort\-desc Sort in descending order (default: ascending)
\& \-i, \-\-sort\-numeric sort according to string numerical value
\& \-t, \-\-sort\-time sort according to time string
\& \-a, \-\-sort\-age sort according to age (duration) string
\& \-D, \-\-sort\-desc Sort in descending order (default: ascending)
\& \-i, \-\-sort\-numeric sort according to string numerical value
\& \-t, \-\-sort\-time sort according to time string
\&
\& Other Flags:
\& \-\-completion <shell> Generate the autocompletion script for <shell>
\& \-f, \-\-config <file> Configuration file (default: ~/.config/tablizer/config)
\& \-d, \-\-debug Enable debugging
\& \-h, \-\-help help for tablizer
\& \-m, \-\-man Display manual page
\& \-V, \-\-version Print program version
\& \-\-completion <shell> Generate the autocompletion script for <shell>
\& \-f, \-\-config <file> Configuration file (default: ~/.config/tablizer/config)
\& \-d, \-\-debug Enable debugging
\& \-h, \-\-help help for tablizer
\& \-m, \-\-man Display manual page
\& \-V, \-\-version Print program version
.Ve
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
@@ -248,11 +252,20 @@ By default, if a \fBpattern\fR has been specified, matches will be
highlighted. You can disable this behavior with the \fB\-N\fR option.
.PP
Use the \fB\-k\fR option to specify by which column to sort the tabular
data (as in \s-1GNU\s0 \fBsort\fR\|(1)). The default sort column is the first one. To
disable sorting at all, supply 0 (Zero) to \-k. The default sort order
is ascending. You can change this to descending order using the option
\&\fB\-D\fR. The default sort order is by string, but there are other sort
modes:
data (as in \s-1GNU\s0 \fBsort\fR\|(1)). The default sort column is the first
one. You can specify column numbers or names. Column numbers start
with 1, names are case insensitive. You can sort by multiple columns
separated by commas, but they must all have the same type; for example,
if you want to sort numerically, all columns must contain numbers. If you
use column numbers, be aware that these are the numbers before
column extraction. For example, if you have a table with 4 columns and
specify \f(CW\*(C`\-c4\*(C'\fR, then only one column (the fourth) will be printed;
however, if you want to sort by this column, you still have to specify
\&\f(CW\*(C`\-k4\*(C'\fR.
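.PP
For example, an illustrative invocation (assuming a table like the bundled
test data whose fourth column, \s-1STARTS,\s0 is numeric) that sorts by the
fourth column numerically but prints only that column:
.PP
.Vb 1
\& tablizer \-r testtable \-k4 \-i \-c4
.Ve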
.PP
The default sort order is ascending. You can change this to
descending order using the option \fB\-D\fR. The default sort order is by
alphanumeric string, but there are other sort modes:
.IP "\fB\-a \-\-sort\-age\fR" 4
.IX Item "-a --sort-age"
Sorts duration strings like \*(L"1d4h32m51s\*(R".
@@ -267,38 +280,52 @@ Finally the \fB\-d\fR option enables debugging output which is mostly
useful for the developer.
.SS "\s-1PATTERNS AND FILTERING\s0"
.IX Subsection "PATTERNS AND FILTERING"
You can reduce the rows being displayed by using a regular expression
pattern. The regexp is \s-1PCRE\s0 compatible, refer to the syntax cheat
sheet here: <https://github.com/google/re2/wiki/Syntax>. If you want
to read a more comprehensive documentation about the topic and have
perl installed you can read it with:
You can reduce the rows being displayed by using one or more regular
expression patterns. The regexp language being used is that of
\&\s-1GOLANG\s0; refer to the syntax cheat sheet here:
<https://pkg.go.dev/regexp/syntax>.
.PP
If you want to read a more comprehensive documentation about the
topic and have perl installed you can read it with:
.PP
.Vb 1
\& perldoc perlre
.Ve
.PP
Or read it online: <https://perldoc.perl.org/perlre>.
Or read it online: <https://perldoc.perl.org/perlre>. But please note
that the \s-1GO\s0 regexp engine does \s-1NOT\s0 support all perl regex terms,
especially look-ahead and look-behind.
.PP
A note on modifiers: the regexp engine used in tablizer uses another
modifier syntax:
If you want to supply flags to a regex, then surround it with slashes
and append the flag. The following flags are supported:
.PP
.Vb 1
\& (?MODIFIER)
.Vb 2
\& i => case insensitive
\& ! => negative match
.Ve
.PP
The most important modifiers are:
.PP
\&\f(CW\*(C`i\*(C'\fR ignore case
\&\f(CW\*(C`m\*(C'\fR multiline mode
\&\f(CW\*(C`s\*(C'\fR single line mode
.PP
Example for a case insensitive search:
.PP
.Vb 1
\& kubectl get pods \-A | tablizer "(?i)account"
\& kubectl get pods \-A | tablizer "/account/i"
.Ve
.PP
You can use the experimental fuzzy search feature by providing the
If you use the \f(CW\*(C`!\*(C'\fR flag, the regex match will be negated:
if a line in the input matches the given regex but \f(CW\*(C`!\*(C'\fR is
supplied, tablizer will \s-1NOT\s0 include it in the output.
.PP
For example, here we want to get all lines matching \*(L"foo\*(R" but not
\&\*(L"bar\*(R":
.PP
.Vb 1
\& cat table | tablizer foo \*(Aq/bar/!\*(Aq
.Ve
.PP
This would match a line \*(L"foo zorro\*(R" but not \*(L"foo bar\*(R".
.PP
The flags can also be combined.
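.PP
For example, an illustrative case insensitive negated match, combining the
\&\f(CW\*(C`i\*(C'\fR and \f(CW\*(C`!\*(C'\fR flags described above:
.PP
.Vb 1
\& cat table | tablizer foo \*(Aq/BAR/!i\*(Aq
.Ve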
.PP
You can also use the experimental fuzzy search feature by providing the
option \fB\-z\fR, in which case the pattern is regarded as a fuzzy search
term, not a regexp.
.PP
@@ -315,6 +342,12 @@ Fieldnames (== columns headers) are case insensitive.
If you specify more than one filter, all filters have to match (\s-1AND\s0
operation).
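.PP
For example, an illustrative sketch combining two filters (assuming
\&\s-1STATUS\s0 and \s-1NAME\s0 columns as in the examples above):
.PP
.Vb 1
\& kubectl get pods | tablizer \-F status=Running \-F name=grafana
.Ve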
.PP
These field filters can also be negated:
.PP
.Vb 1
\& fieldname!=regexp
.Ve
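.PP
For example, an illustrative negated filter (assuming a \s-1STATUS\s0 column
as above):
.PP
.Vb 1
\& kubectl get pods | tablizer \-F status!=Running
.Ve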
.PP
If the option \fB\-v\fR is specified, the filtering is inverted.
.SS "\s-1COLUMNS\s0"
.IX Subsection "COLUMNS"
@@ -348,6 +381,50 @@ We want to see only the \s-1CMD\s0 column and use a regex for this:
.Ve
.PP
where \*(L"C\*(R" is our regexp which matches \s-1CMD.\s0
.PP
If a column specifier doesn't look like a regular expression, matching
against header fields will be case insensitive. So, if you have a
field with the name \f(CW\*(C`ID\*(C'\fR then these will all match: \f(CW\*(C`\-c id\*(C'\fR, \f(CW\*(C`\-c
Id\*(C'\fR. The same rule applies to the options \f(CW\*(C`\-T\*(C'\fR and \f(CW\*(C`\-F\*(C'\fR.
.SS "\s-1TRANSPOSE FIELDS USING REGEXPS\s0"
.IX Subsection "TRANSPOSE FIELDS USING REGEXPS"
You can manipulate field contents using regular expressions. You have
to tell tablizer which field[s] to operate on using the option \f(CW\*(C`\-T\*(C'\fR
and the search/replace pattern using \f(CW\*(C`\-R\*(C'\fR. The number of columns and
patterns must match.
.PP
A search/replace pattern consists of the following elements:
.PP
.Vb 1
\& /search\-regexp/replace\-string/
.Ve
.PP
The separator can be any character, which is especially useful if you
want to use a regexp containing the \f(CW\*(C`/\*(C'\fR character, e.g.:
.PP
.Vb 1
\& |search\-regexp|replace\-string|
.Ve
.PP
Example:
.PP
.Vb 7
\& cat t/testtable2
\& NAME DURATION
\& x 10
\& a 100
\& z 0
\& u 4
\& k 6
\&
\& cat t/testtable2 | tablizer \-T2 \-R \*(Aq/^\ed/4/\*(Aq \-n
\& NAME DURATION
\& x 40
\& a 400
\& z 4
\& u 4
\& k 4
.Ve
.SS "\s-1OUTPUT MODES\s0"
.IX Subsection "OUTPUT MODES"
There might be cases when the tabular output of a program is way too
@@ -387,13 +464,27 @@ more output modes available: \fBorgtbl\fR which prints an Emacs org-mode
table and \fBmarkdown\fR which prints a Markdown table, \fByaml\fR, which
prints yaml encoding and \s-1CSV\s0 mode, which prints a comma separated
value file.
.SS "\s-1PUT FIELDS TO CLIPBOARD\s0"
.IX Subsection "PUT FIELDS TO CLIPBOARD"
You can let tablizer put fields to the clipboard using the option
\&\f(CW\*(C`\-y\*(C'\fR. This best fits the use-case when the result of your filtering
yields just one row. For example:
.PP
.Vb 1
\& cloudctl cluster ls | tablizer \-yid matchbox
.Ve
.PP
If \*(L"matchbox\*(R" matches one cluster, you can immediately use the id of
that cluster somewhere else and paste it. Of course, if there are
multiple matches, then all ids will be put into the clipboard
separated by one space.
.SS "\s-1ENVIRONMENT VARIABLES\s0"
.IX Subsection "ENVIRONMENT VARIABLES"
\&\fBtablizer\fR supports certain environment variables which you can use
to influence program behavior. Commandline flags always take
precedence over environment variables.
.IP "<T_NO_HEADER_NUMBERING> \- disable numbering of header fields, like \fB\-n\fR." 4
.IX Item "<T_NO_HEADER_NUMBERING> - disable numbering of header fields, like -n."
.IP "<T_HEADER_NUMBERING> \- enable numbering of header fields, like \fB\-n\fR." 4
.IX Item "<T_HEADER_NUMBERING> - enable numbering of header fields, like -n."
.PD 0
.IP "<T_COLUMNS> \- comma separated list of columns to output, like \fB\-c\fR" 4
.IX Item "<T_COLUMNS> - comma separated list of columns to output, like -c"


@@ -5,42 +5,46 @@ tablizer - Manipulate tabular output of other programs
=head1 SYNOPSIS
Usage:
tablizer [regex] [file, ...] [flags]
tablizer [regex,...] [file, ...] [flags]
Operational Flags:
-c, --columns string Only show the specified columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --no-numbering Disable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times
-c, --columns string Only show the specified columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --numbering Enable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int|name Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field[!]=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the specified columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive):
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-y, --yank-columns Yank specified columns (separated by ,) to clipboard,
space separated
Sort Mode Flags (mutually exclusive):
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
Other Flags:
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
=head1 DESCRIPTION
@@ -104,11 +108,20 @@ By default, if a B<pattern> has been specified, matches will be
highlighted. You can disable this behavior with the B<-N> option.
Use the B<-k> option to specify by which column to sort the tabular
data (as in GNU sort(1)). The default sort column is the first one. To
disable sorting at all, supply 0 (Zero) to -k. The default sort order
is ascending. You can change this to descending order using the option
B<-D>. The default sort order is by string, but there are other sort
modes:
data (as in GNU sort(1)). The default sort column is the first
one. You can specify column numbers or names. Column numbers start
with 1, names are case insensitive. You can sort by multiple columns
separated by commas, but they must all have the same type; for example,
if you want to sort numerically, all columns must contain numbers. If you
use column numbers, be aware that these are the numbers before
column extraction. For example, if you have a table with 4 columns and
specify C<-c4>, then only one column (the fourth) will be printed;
however, if you want to sort by this column, you still have to specify
C<-k4>.
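For example, an illustrative invocation (assuming a table like the bundled
test data whose fourth column, STARTS, is numeric) that sorts by the fourth
column numerically but prints only that column:
tablizer -r testtable -k4 -i -c4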
The default sort order is ascending. You can change this to
descending order using the option B<-D>. The default sort order is by
alphanumeric string, but there are other sort modes:
=over
@@ -131,32 +144,44 @@ useful for the developer.
=head2 PATTERNS AND FILTERING
You can reduce the rows being displayed by using a regular expression
pattern. The regexp is PCRE compatible, refer to the syntax cheat
sheet here: L<https://github.com/google/re2/wiki/Syntax>. If you want
to read a more comprehensive documentation about the topic and have
perl installed you can read it with:
You can reduce the rows being displayed by using one or more regular
expression patterns. The regexp language being used is that of
GOLANG; refer to the syntax cheat sheet here:
L<https://pkg.go.dev/regexp/syntax>.
If you want to read a more comprehensive documentation about the
topic and have perl installed you can read it with:
perldoc perlre
Or read it online: L<https://perldoc.perl.org/perlre>.
Or read it online: L<https://perldoc.perl.org/perlre>. But please note
that the GO regexp engine does NOT support all perl regex terms,
especially look-ahead and look-behind.
A note on modifiers: the regexp engine used in tablizer uses another
modifier syntax:
If you want to supply flags to a regex, then surround it with slashes
and append the flag. The following flags are supported:
(?MODIFIER)
The most important modifiers are:
C<i> ignore case
C<m> multiline mode
C<s> single line mode
i => case insensitive
! => negative match
Example for a case insensitive search:
kubectl get pods -A | tablizer "(?i)account"
kubectl get pods -A | tablizer "/account/i"
You can use the experimental fuzzy search feature by providing the
If you use the C<!> flag, the regex match will be negated: if a line
in the input matches the given regex but C<!> is supplied, tablizer
will NOT include it in the output.
For example, here we want to get all lines matching "foo" but not
"bar":
cat table | tablizer foo '/bar/!'
This would match a line "foo zorro" but not "foo bar".
The flags can also be combined.
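For example, an illustrative case insensitive negated match, combining the
C<i> and C<!> flags described above:
cat table | tablizer foo '/BAR/!i'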
You can also use the experimental fuzzy search feature by providing the
option B<-z>, in which case the pattern is regarded as a fuzzy search
term, not a regexp.
@@ -171,6 +196,10 @@ Fieldnames (== columns headers) are case insensitive.
If you specify more than one filter, all filters have to match (AND
operation).
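For example, an illustrative sketch combining two filters (assuming STATUS
and NAME columns as in the examples above):
kubectl get pods | tablizer -F status=Running -F name=grafana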
These field filters can also be negated:
fieldname!=regexp
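For example, an illustrative negated filter (assuming a STATUS column as
above):
kubectl get pods | tablizer -F status!=Running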
If the option B<-v> is specified, the filtering is inverted.
@@ -203,6 +232,46 @@ We want to see only the CMD column and use a regex for this:
where "C" is our regexp which matches CMD.
If a column specifier doesn't look like a regular expression, matching
against header fields will be case insensitive. So, if you have a
field with the name C<ID> then these will all match: C<-c id>, C<-c
Id>. The same rule applies to the options C<-T> and C<-F>.
=head2 TRANSPOSE FIELDS USING REGEXPS
You can manipulate field contents using regular expressions. You have
to tell tablizer which field[s] to operate on using the option C<-T>
and the search/replace pattern using C<-R>. The number of columns and
patterns must match.
A search/replace pattern consists of the following elements:
/search-regexp/replace-string/
The separator can be any character, which is especially useful if you
want to use a regexp containing the C</> character, e.g.:
|search-regexp|replace-string|
Example:
cat t/testtable2
NAME DURATION
x 10
a 100
z 0
u 4
k 6
cat t/testtable2 | tablizer -T2 -R '/^\d/4/' -n
NAME DURATION
x 40
a 400
z 4
u 4
k 4
=head2 OUTPUT MODES
There might be cases when the tabular output of a program is way too
@@ -239,6 +308,19 @@ table and B<markdown> which prints a Markdown table, B<yaml>, which
prints yaml encoding and CSV mode, which prints a comma separated
value file.
=head2 PUT FIELDS TO CLIPBOARD
You can let tablizer put fields to the clipboard using the option
C<-y>. This best fits the use-case when the result of your filtering
yields just one row. For example:
cloudctl cluster ls | tablizer -yid matchbox
If "matchbox" matches one cluster, you can immediately use the id of
that cluster somewhere else and paste it. Of course, if there are
multiple matches, then all ids will be put into the clipboard
separated by one space.
=head2 ENVIRONMENT VARIABLES
B<tablizer> supports certain environment variables which you can use
@@ -247,7 +329,7 @@ precedence over environment variables.
=over
=item <T_NO_HEADER_NUMBERING> - disable numbering of header fields, like B<-n>.
=item <T_HEADER_NUMBERING> - enable numbering of header fields, like B<-n>.
=item <T_COLUMNS> - comma separated list of columns to output, like B<-c>
@@ -341,6 +423,8 @@ the C<-L> parameter).
Colorization can be turned off completely either by setting the
parameter C<-N> or the environment variable B<NO_COLOR> to a true value.
=head1 BUGS
In order to report a bug, unexpected behavior, feature requests