Compare commits

...

12 Commits

31 changed files with 929 additions and 940 deletions

View File

@@ -5,7 +5,9 @@ jobs:
strategy: strategy:
matrix: matrix:
version: ['1.22'] version: ['1.22']
os: [ubuntu-latest, windows-latest, macos-latest] # windows-latest removed, see:
# https://github.com/rogpeppe/go-internal/issues/284
os: [ubuntu-latest, macos-latest]
name: Build name: Build
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
steps: steps:

.gitignore vendored
View File

@@ -1,2 +1,3 @@
releases releases
tablizer tablizer
*.out

View File

@@ -66,11 +66,10 @@ clean:
rm -rf $(tool) releases coverage.out rm -rf $(tool) releases coverage.out
test: test:
go test -v ./... go test ./... $(OPTS)
bash t/test.sh
singletest: singletest:
@echo "Call like this: ''make singletest TEST=TestPrepareColumns MOD=lib" @echo "Call like this: 'make singletest TEST=TestPrepareColumns MOD=lib'"
go test -run $(TEST) github.com/tlinden/tablizer/$(MOD) go test -run $(TEST) github.com/tlinden/tablizer/$(MOD)
cover-report: cover-report:

View File

@@ -8,6 +8,49 @@ Tablizer can be used to re-format tabular output of other
programs. While you could do this using standard unix tools, in some programs. While you could do this using standard unix tools, in some
cases it's a hard job. cases it's a hard job.
Usage:
```default
Usage:
tablizer [regex] [file, ...] [flags]
Operational Flags:
-c, --columns string Only show the specified columns (separated by ,)
-v, --invert-match select non-matching rows
-n, --no-numbering Disable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator string Custom field separator
-k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the specified columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive):
-X, --extended Enable extended output
-M, --markdown Enable markdown table output
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
Sort Mode Flags (mutually exclusive):
-a, --sort-age sort according to age (duration) string
-D, --sort-desc Sort in descending order (default: ascending)
-i, --sort-numeric sort according to string numerical value
-t, --sort-time sort according to time string
Other Flags:
--completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-d, --debug Enable debugging
-h, --help help for tablizer
-m, --man Display manual page
-V, --version Print program version
```
Let's take this output: Let's take this output:
``` ```
% kubectl get pods -o wide % kubectl get pods -o wide
@@ -83,9 +126,22 @@ otherwise on all rows.
There are more output modes like org-mode (orgtbl) and markdown. There are more output modes like org-mode (orgtbl) and markdown.
Last but not least tablizer has support for plugins written in You can also use it to modify certain cells using regular expression
lisp. This feature is expermental yet. Take a look into the manpage matching. For example:
for details.
```shell
kubectl get pods | tablizer -n -T4 -R '/ /-/'
NAME READY STATUS RESTARTS AGE
repldepl-7bcd8d5b64-7zq4l 1/1 Running 1-(69m-ago) 5h26m
repldepl-7bcd8d5b64-m48n8 1/1 Running 1-(69m-ago) 5h26m
repldepl-7bcd8d5b64-q2bf4 1/1 Running 1-(69m-ago) 5h26m
```
Here, we modified the 4th column (`-T4`) by replacing every space with
a dash. If you need to work with `/` characters, you can also use any
other separator, for instance: `-R '| |-|'`.
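As a quick sketch (assuming the same `kubectl` output as in the example above), the following invocation is equivalent to the `-R '/ /-/'` call, just with `|` as the pattern separator:

```shell
# using '|' as the separator avoids escaping when the pattern itself contains '/'
kubectl get pods | tablizer -n -T4 -R '| |-|'
```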
## Demo ## Demo
@@ -142,6 +198,41 @@ In order to report a bug, unexpected behavior, feature requests
or to submit a patch, please open an issue on github: or to submit a patch, please open an issue on github:
https://github.com/TLINDEN/tablizer/issues. https://github.com/TLINDEN/tablizer/issues.
## Prior Art
When I started with tablizer I was not aware that other tools
existed. Here is a non-exhaustive list of the ones I find especially
awesome:
### [miller](https://github.com/johnkerl/miller)
This is a really powerful tool for working with tabular data, and it
also accepts other input formats such as JSON and CSV. You can filter,
manipulate, and build pipelines; there's even a built-in programming
language to do even more amazing things.
### [csvq](https://github.com/mithrandie/csvq)
Csvq allows you to query CSV and TSV data using SQL queries. How nice
is that? Highly recommended if you have to work with a large (and
wide) dataset and need to apply a complicated set of rules.
### [goawk](https://github.com/benhoyt/goawk)
Goawk is a 100% POSIX-compliant AWK implementation in Go which also
supports CSV and TSV data as input (using `-i csv`, for example). You
can apply any kind of awk code to your tabular data; there is no limit
to your creativity!
### [teip](https://github.com/greymd/teip)
I particularly like teip; it's a real gem. You can use it to drill
"holes" into your tabular data and modify these "holes" using small
external unix commands such as grep or sed. The possibilities are
endless: you can even use teip to modify data inside a hole created by
teip. Highly recommended.
## Copyright and license ## Copyright and license
This software is licensed under the GNU GENERAL PUBLIC LICENSE version 3. This software is licensed under the GNU GENERAL PUBLIC LICENSE version 3.

TODO.md
View File

@@ -6,13 +6,3 @@
- add --no-headers option - add --no-headers option
### Lisp Plugin Infrastructure using zygo
Hooks:
| Filter | Purpose | Args | Return |
|-----------|-------------------------------------------------------------|---------------------|--------|
| filter | include or exclude lines | row as hash | bool |
| process | do calculations with data, store results in global lisp env | whole dataset | nil |
| transpose | modify a cell | headername and cell | cell |
| append | add one or more rows to the dataset (use this to add stats) | nil | rows |

View File

@@ -23,16 +23,14 @@ import (
"regexp" "regexp"
"strings" "strings"
"github.com/glycerine/zygomys/zygo"
"github.com/gookit/color" "github.com/gookit/color"
"github.com/hashicorp/hcl/v2/hclsimple" "github.com/hashicorp/hcl/v2/hclsimple"
) )
const DefaultSeparator string = `(\s\s+|\t)` const DefaultSeparator string = `(\s\s+|\t)`
const Version string = "v1.2.3" const Version string = "v1.3.0"
const MAXPARTS = 2 const MAXPARTS = 2
var DefaultLoadPath = os.Getenv("HOME") + "/.config/tablizer/lisp"
var DefaultConfigfile = os.Getenv("HOME") + "/.config/tablizer/config" var DefaultConfigfile = os.Getenv("HOME") + "/.config/tablizer/config"
var VERSION string // maintained by -x var VERSION string // maintained by -x
@@ -49,6 +47,11 @@ type Settings struct {
HighlightHdrBG string `hcl:"HighlightHdrBG"` HighlightHdrBG string `hcl:"HighlightHdrBG"`
} }
type Transposer struct {
Search regexp.Regexp
Replace string
}
// internal config // internal config
type Config struct { type Config struct {
Debug bool Debug bool
@@ -68,6 +71,11 @@ type Config struct {
SortDescending bool SortDescending bool
SortByColumn int SortByColumn int
TransposeColumns string // 1,2
UseTransposeColumns []int // []int{1,2}
Transposers []string // []string{"/ /-/", "/foo/bar/"}
UseTransposers []Transposer // {Search: re, Replace: string}
/* /*
FIXME: make configurable somehow, config file or ENV FIXME: make configurable somehow, config file or ENV
see https://github.com/gookit/color. see https://github.com/gookit/color.
@@ -79,13 +87,6 @@ type Config struct {
NoColor bool NoColor bool
// special case: we use the config struct to transport the lisp
// env trough the program
Lisp *zygo.Zlisp
// a path containing lisp scripts to be loaded on startup
LispLoadPath string
// config file, optional // config file, optional
Configfile string Configfile string
@@ -94,6 +95,9 @@ type Config struct {
// used for field filtering // used for field filtering
Rawfilters []string Rawfilters []string
Filters map[string]*regexp.Regexp Filters map[string]*regexp.Regexp
// -r <file>
InputFile string
} }
// maps outputmode short flags to output mode, ie. -O => -o orgtbl // maps outputmode short flags to output mode, ie. -O => -o orgtbl
@@ -125,9 +129,6 @@ type Sortmode struct {
Age bool Age bool
} }
// valid lisp hooks
var ValidHooks []string
// default color schemes // default color schemes
func (conf *Config) Colors() map[color.Level]map[string]color.Color { func (conf *Config) Colors() map[color.Level]map[string]color.Color {
colors := map[color.Level]map[string]color.Color{ colors := map[color.Level]map[string]color.Color{
@@ -277,7 +278,30 @@ func (conf *Config) PrepareFilters() error {
parts[0], err) parts[0], err)
} }
conf.Filters[strings.ToLower(parts[0])] = reg conf.Filters[strings.ToLower(strings.ToLower(parts[0]))] = reg
}
return nil
}
// check if transposers match transposer columns and prepare transposer structs
func (conf *Config) PrepareTransposers() error {
if len(conf.Transposers) != len(conf.UseTransposeColumns) {
return fmt.Errorf("the number of transposers needs to correspond to the number of transpose columns: %d != %d",
len(conf.Transposers), len(conf.UseTransposeColumns))
}
for _, transposer := range conf.Transposers {
parts := strings.Split(transposer, string(transposer[0]))
if len(parts) != 4 {
return fmt.Errorf("transposer function must have the format /regexp/replace-string/")
}
conf.UseTransposers = append(conf.UseTransposers,
Transposer{
Search: *regexp.MustCompile(parts[1]),
Replace: parts[2]},
)
} }
return nil return nil
@@ -306,8 +330,6 @@ func (conf *Config) ApplyDefaults() {
if conf.OutputMode == Yaml || conf.OutputMode == CSV { if conf.OutputMode == Yaml || conf.OutputMode == CSV {
conf.NoNumbering = true conf.NoNumbering = true
} }
ValidHooks = []string{"filter", "process", "transpose", "append"}
} }
func (conf *Config) PreparePattern(pattern string) error { func (conf *Config) PreparePattern(pattern string) error {

View File

@@ -117,9 +117,6 @@ func Execute() {
conf.DetermineColormode() conf.DetermineColormode()
conf.ApplyDefaults() conf.ApplyDefaults()
// setup lisp env, load plugins etc
wrapE(lib.SetupLisp(&conf))
// actual execution starts here // actual execution starts here
wrapE(lib.ProcessFiles(&conf, args)) wrapE(lib.ProcessFiles(&conf, args))
}, },
@@ -150,6 +147,8 @@ func Execute() {
"Custom field separator") "Custom field separator")
rootCmd.PersistentFlags().StringVarP(&conf.Columns, "columns", "c", "", rootCmd.PersistentFlags().StringVarP(&conf.Columns, "columns", "c", "",
"Only show the speficied columns (separated by ,)") "Only show the speficied columns (separated by ,)")
rootCmd.PersistentFlags().StringVarP(&conf.TransposeColumns, "transpose-columns", "T", "",
"Transpose the speficied columns (separated by ,)")
// sort options // sort options
rootCmd.PersistentFlags().IntVarP(&conf.SortByColumn, "sort-by", "k", 0, rootCmd.PersistentFlags().IntVarP(&conf.SortByColumn, "sort-by", "k", 0,
@@ -185,16 +184,19 @@ func Execute() {
rootCmd.MarkFlagsMutuallyExclusive("extended", "markdown", "orgtbl", rootCmd.MarkFlagsMutuallyExclusive("extended", "markdown", "orgtbl",
"shell", "yaml", "csv") "shell", "yaml", "csv")
// lisp options
rootCmd.PersistentFlags().StringVarP(&conf.LispLoadPath, "load-path", "l", cfg.DefaultLoadPath,
"Load path for lisp plugins (expects *.zy files)")
// config file // config file
rootCmd.PersistentFlags().StringVarP(&conf.Configfile, "config", "f", cfg.DefaultConfigfile, rootCmd.PersistentFlags().StringVarP(&conf.Configfile, "config", "f", cfg.DefaultConfigfile,
"config file (default: ~/.config/tablizer/config)") "config file (default: ~/.config/tablizer/config)")
// filters // filters
rootCmd.PersistentFlags().StringArrayVarP(&conf.Rawfilters, "filter", "F", nil, "Filter by field (field=regexp)") rootCmd.PersistentFlags().StringArrayVarP(&conf.Rawfilters,
"filter", "F", nil, "Filter by field (field=regexp)")
rootCmd.PersistentFlags().StringArrayVarP(&conf.Transposers,
"regex-transposer", "R", nil, "apply /search/replace/ regexp to fields given in -T")
// input
rootCmd.PersistentFlags().StringVarP(&conf.InputFile, "read-file", "r", "",
"Read input data from file")
rootCmd.SetUsageTemplate(strings.TrimSpace(usage) + "\n") rootCmd.SetUsageTemplate(strings.TrimSpace(usage) + "\n")

View File

@@ -18,6 +18,8 @@ SYNOPSIS
-k, --sort-by int Sort by column (default: 1) -k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental] -z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times -F, --filter field=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the specified columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive): Output Flags (mutually exclusive):
-X, --extended Enable extended output -X, --extended Enable extended output
@@ -38,7 +40,6 @@ SYNOPSIS
Other Flags: Other Flags:
--completion <shell> Generate the autocompletion script for <shell> --completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config) -f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-l, --load-path <path> Load path for lisp plugins (expects *.zy files)
-d, --debug Enable debugging -d, --debug Enable debugging
-h, --help help for tablizer -h, --help help for tablizer
-m, --man Display manual page -m, --man Display manual page
@@ -186,6 +187,44 @@ DESCRIPTION
where "C" is our regexp which matches CMD. where "C" is our regexp which matches CMD.
If a column specifier doesn't look like a regular expression, it is
matched against the header fields case-insensitively. So, if you have a
field named "ID", both "-c id" and "-c Id" will match it. The same
rule applies to the options "-T" and "-F".
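As a short sketch (assuming a hypothetical file data.txt, read with
"-r", whose header row contains an ID column), the following two
invocations select the same column:
tablizer -c id -r data.txt
tablizer -c ID -r data.txt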
TRANSPOSE FIELDS USING REGEXPS
You can manipulate field contents using regular expressions. You have to
tell tablizer which field[s] to operate on using the option "-T" and the
search/replace pattern using "-R". The number of columns and patterns
must match.
A search/replace pattern consists of the following elements:
/search-regexp/replace-string/
The separator can be any valid character. This is especially useful if
you want to use a regexp containing the "/" character, e.g.:
|search-regexp|replace-string|
Example:
cat t/testtable2
NAME DURATION
x 10
a 100
z 0
u 4
k 6
cat t/testtable2 | tablizer -T2 -R '/^\d/4/' -n
NAME DURATION
x 40
a 400
z 4
u 4
k 4
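Multiple columns can be transposed at once by giving one "-R" pattern
per column listed in "-T". A sketch adapted from the test suite
(t/test-transpose.txtar):
tablizer -r testtable.txt -T name,status -R '/alertmanager-//' -R '/Running/OK/'
Here the first pattern strips the "alertmanager-" prefix from the NAME
column and the second replaces "Running" with "OK" in the STATUS
column.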
OUTPUT MODES OUTPUT MODES
There might be cases when the tabular output of a program is way too There might be cases when the tabular output of a program is way too
large for your current terminal but you still need to see every column. large for your current terminal but you still need to see every column.
@@ -304,60 +343,6 @@ CONFIGURATION AND COLORS
Colorization can be turned off completely either by setting the Colorization can be turned off completely either by setting the
parameter "-N" or the environment variable NO_COLOR to a true value. parameter "-N" or the environment variable NO_COLOR to a true value.
LISP PLUGINS [experimental]
Tablizer supports plugins written in zygomys lisp. You can supply a
directory to the "-l" parameter containing *.zy files or a single .zy
file containing lisp code.
You can put as much code as you want into the file, but you need to add
one lips function to a hook at the end.
The following hooks are available:
filter
The filter hook works one a whole line of the input. Your hook
function is expected to return true or false. If you return true,
the line will be included in the output, otherwise not.
Multiple filter hook functions are supported.
Example:
/*
Simple filter hook function. Splits the argument by whitespace,
fetches the 2nd element, converts it to an int and returns true
if it s larger than 5, false otherwise.
*/
(defn uselarge [line]
(cond (> (atoi (second (resplit line ` +`))) 5) true false))
/* Register the filter hook */
(addhook %filter %uselarge)
process
The process hook function gets a table containing the parsed input
data (see "lib/common.go:type Tabdata struct". It is expected to
return a pair containing a bool to denote if the table has been
modified, and the [modified] table. The resulting table may have
less rows than the original and cells may have changed content but
the number of columns must persist.
transpose
not yet implemented.
append
not yet implemented.
Beside the existing language features, the following additional lisp
functions are provided by tablizer:
(resplit [string, regex]) => list
(atoi [string]) => int
(matchre [string, regex]) => bool
The standard language is described here:
<https://github.com/glycerine/zygomys/wiki/Language>.
BUGS BUGS
In order to report a bug, unexpected behavior, feature requests or to In order to report a bug, unexpected behavior, feature requests or to
submit a patch, please open an issue on github: submit a patch, please open an issue on github:
@@ -410,6 +395,8 @@ Operational Flags:
-k, --sort-by int Sort by column (default: 1) -k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental] -z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times -F, --filter field=reg Filter given field with regex, can be used multiple times
-T, --transpose-columns string Transpose the specified columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive): Output Flags (mutually exclusive):
-X, --extended Enable extended output -X, --extended Enable extended output
@@ -430,7 +417,6 @@ Sort Mode Flags (mutually exclusive):
Other Flags: Other Flags:
--completion <shell> Generate the autocompletion script for <shell> --completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config) -f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-l, --load-path <path> Load path for lisp plugins (expects *.zy files)
-d, --debug Enable debugging -d, --debug Enable debugging
-h, --help help for tablizer -h, --help help for tablizer
-m, --man Display manual page -m, --man Display manual page

go.mod
View File

@@ -30,6 +30,7 @@ require (
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect
github.com/philhofer/fwd v1.1.2 // indirect github.com/philhofer/fwd v1.1.2 // indirect
github.com/rivo/uniseg v0.2.0 // indirect github.com/rivo/uniseg v0.2.0 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/shurcooL/go v0.0.0-20230706063926-5fe729b41b3a // indirect github.com/shurcooL/go v0.0.0-20230706063926-5fe729b41b3a // indirect
github.com/shurcooL/go-goon v1.0.0 // indirect github.com/shurcooL/go-goon v1.0.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect github.com/spf13/pflag v1.0.5 // indirect
@@ -37,8 +38,8 @@ require (
github.com/ugorji/go/codec v1.2.12 // indirect github.com/ugorji/go/codec v1.2.12 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/zclconf/go-cty v1.13.3 // indirect github.com/zclconf/go-cty v1.13.3 // indirect
golang.org/x/mod v0.13.0 // indirect golang.org/x/mod v0.18.0 // indirect
golang.org/x/sys v0.13.0 // indirect golang.org/x/sys v0.21.0 // indirect
golang.org/x/text v0.11.0 // indirect golang.org/x/text v0.11.0 // indirect
golang.org/x/tools v0.14.0 // indirect golang.org/x/tools v0.22.0 // indirect
) )

go.sum
View File

@@ -55,6 +55,8 @@ github.com/rivo/uniseg v0.1.0 h1:+2KBaVoUmb9XzDsrx/Ct0W/EYOSFf/nWTauy++DprtY=
github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY= github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scylladb/termtables v0.0.0-20191203121021-c4c0b6d42ff4/go.mod h1:C1a7PQSMz9NShzorzCiG2fk9+xuCgLkPeCvMHYR2OWg= github.com/scylladb/termtables v0.0.0-20191203121021-c4c0b6d42ff4/go.mod h1:C1a7PQSMz9NShzorzCiG2fk9+xuCgLkPeCvMHYR2OWg=
github.com/shurcooL/go v0.0.0-20200502201357-93f07166e636 h1:aSISeOcal5irEhJd1M+IrApc0PdcN7e7Aj4yuEnOrfQ= github.com/shurcooL/go v0.0.0-20200502201357-93f07166e636 h1:aSISeOcal5irEhJd1M+IrApc0PdcN7e7Aj4yuEnOrfQ=
@@ -92,6 +94,8 @@ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.13.0 h1:I/DsJXRlw/8l/0c24sM9yb0T4z9liZTduXvdAWYiysY= golang.org/x/mod v0.13.0 h1:I/DsJXRlw/8l/0c24sM9yb0T4z9liZTduXvdAWYiysY=
golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.18.0 h1:5+9lSbEzPSdWkH32vYPBwEpX8KwDbM52Ud9xBUvNlb0=
golang.org/x/mod v0.18.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
@@ -108,6 +112,8 @@ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE= golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.21.0 h1:rF+pYz3DAGSQAxAu1CbC7catZg4ebC4UIeIhKxBZvws=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -124,6 +130,8 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.14.0 h1:jvNa2pY0M4r62jkRQ6RwEZZyPcymeL9XZMLBbV7U2nc= golang.org/x/tools v0.14.0 h1:jvNa2pY0M4r62jkRQ6RwEZZyPcymeL9XZMLBbV7U2nc=
golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg= golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg=
golang.org/x/tools v0.22.0 h1:gqSGLZqv+AI9lIQzniJ0nZDRG5GBPsSi+DRNHWNz6yA=
golang.org/x/tools v0.22.0/go.mod h1:aCwcsjqvq7Yqt6TNyX7QMU2enbQ/Gt0bo6krSeEri+c=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View File

@@ -19,7 +19,6 @@ package lib
import ( import (
"bufio" "bufio"
"fmt"
"io" "io"
"strings" "strings"
@@ -44,10 +43,10 @@ func matchPattern(conf cfg.Config, line string) bool {
* more filters match on a row, it will be kept, otherwise it will be * more filters match on a row, it will be kept, otherwise it will be
* excluded. * excluded.
*/ */
func FilterByFields(conf cfg.Config, data Tabdata) (Tabdata, bool, error) { func FilterByFields(conf cfg.Config, data *Tabdata) (*Tabdata, bool, error) {
if len(conf.Filters) == 0 { if len(conf.Filters) == 0 {
// no filters, no checking // no filters, no checking
return Tabdata{}, false, nil return nil, false, nil
} }
newdata := data.CloneEmpty() newdata := data.CloneEmpty()
@@ -75,7 +74,44 @@ func FilterByFields(conf cfg.Config, data Tabdata) (Tabdata, bool, error) {
} }
} }
return newdata, true, nil return &newdata, true, nil
}
/*
* Transpose fields using search/replace regexp.
*/
func TransposeFields(conf cfg.Config, data *Tabdata) (*Tabdata, bool, error) {
if len(conf.UseTransposers) == 0 {
// nothing to be done
return nil, false, nil
}
newdata := data.CloneEmpty()
transposed := false
for _, row := range data.entries {
transposedrow := false
for idx := range data.headers {
transposeidx, hasone := findindex(conf.UseTransposeColumns, idx+1)
if hasone {
row[idx] =
conf.UseTransposers[transposeidx].Search.ReplaceAllString(
row[idx],
conf.UseTransposers[transposeidx].Replace,
)
transposedrow = true
}
}
if transposedrow {
// also apply -v
newdata.entries = append(newdata.entries, row)
transposed = true
}
}
return &newdata, transposed, nil
} }
/* generic map.Exists(key) */ /* generic map.Exists(key) */
@@ -107,18 +143,6 @@ func FilterByPattern(conf cfg.Config, input io.Reader) (io.Reader, error) {
// so we ignore all lines, which DO match. // so we ignore all lines, which DO match.
continue continue
} }
// apply user defined lisp filters, if any
accept, err := RunFilterHooks(conf, line)
if err != nil {
return input, fmt.Errorf("failed to apply filter hook: %w", err)
}
if !accept {
// IF there are filter hook[s] and IF one of them
// returns false on the current line, reject it
continue
}
} }
lines = append(lines, line) lines = append(lines, line)

View File

@@ -153,8 +153,8 @@ func TestFilterByFields(t *testing.T) {
t.Errorf("PrepareFilters returned error: %s", err) t.Errorf("PrepareFilters returned error: %s", err)
} }
data, _, _ := FilterByFields(conf, data) data, _, _ := FilterByFields(conf, &data)
if !reflect.DeepEqual(data, inputdata.expect) { if !reflect.DeepEqual(*data, inputdata.expect) {
t.Errorf("Filtered data does not match expected data:\ngot: %+v\nexp: %+v", data, inputdata.expect) t.Errorf("Filtered data does not match expected data:\ngot: %+v\nexp: %+v", data, inputdata.expect)
} }
}) })

View File

@@ -40,6 +40,16 @@ func contains(s []int, e int) bool {
return false return false
} }
func findindex(s []int, e int) (int, bool) {
for i, a := range s {
if a == e {
return i, true
}
}
return 0, false
}
// validate the consitency of parsed data // validate the consitency of parsed data
func ValidateConsistency(data *Tabdata) error { func ValidateConsistency(data *Tabdata) error {
expectedfields := len(data.headers) expectedfields := len(data.headers)
@@ -55,31 +65,81 @@ func ValidateConsistency(data *Tabdata) error {
} }
// parse columns list given with -c, modifies config.UseColumns based // parse columns list given with -c, modifies config.UseColumns based
// on eventually given regex // on eventually given regex.
// This is an output filter, because -cN,N,... is being applied AFTER
// processing of the input data.
func PrepareColumns(conf *cfg.Config, data *Tabdata) error { func PrepareColumns(conf *cfg.Config, data *Tabdata) error {
if conf.Columns == "" { // -c columns
usecolumns, err := PrepareColumnVars(conf.Columns, data)
if err != nil {
return err
}
conf.UseColumns = usecolumns
return nil return nil
} }
for _, use := range strings.Split(conf.Columns, ",") { // Same thing as above but for -T option, which is an input option,
if len(use) == 0 { // because transposers are being applied before output.
return fmt.Errorf("could not parse columns list %s: empty column", conf.Columns) func PrepareTransposerColumns(conf *cfg.Config, data *Tabdata) error {
// -T columns
usetransposecolumns, err := PrepareColumnVars(conf.TransposeColumns, data)
if err != nil {
return err
} }
usenum, err := strconv.Atoi(use) conf.UseTransposeColumns = usetransposecolumns
if err != nil {
// might be a regexp
colPattern, err := regexp.Compile(use)
if err != nil {
msg := fmt.Sprintf("Could not parse columns list %s: %v", conf.Columns, err)
return errors.New(msg) // verify that columns and transposers match and prepare transposer structs
if err := conf.PrepareTransposers(); err != nil {
return err
} }
// find matching header fields return nil
}
func PrepareColumnVars(columns string, data *Tabdata) ([]int, error) {
if columns == "" {
return nil, nil
}
usecolumns := []int{}
isregex := regexp.MustCompile(`\W`)
for _, columnpattern := range strings.Split(columns, ",") {
if len(columnpattern) == 0 {
return nil, fmt.Errorf("could not parse columns list %s: empty column", columns)
}
usenum, err := strconv.Atoi(columnpattern)
if err != nil {
// not a number
if !isregex.MatchString(columnpattern) {
// is not a regexp (contains no non-word chars)
// lc() it so that word searches are case insensitive
columnpattern = strings.ToLower(columnpattern)
for i, head := range data.headers { for i, head := range data.headers {
if colPattern.MatchString(head) { if columnpattern == strings.ToLower(head) {
conf.UseColumns = append(conf.UseColumns, i+1) usecolumns = append(usecolumns, i+1)
}
}
} else {
colPattern, err := regexp.Compile("(?i)" + columnpattern)
if err != nil {
msg := fmt.Sprintf("Could not parse columns list %s: %v", columns, err)
return nil, errors.New(msg)
}
// find matching header fields, ignoring case
for i, head := range data.headers {
if colPattern.MatchString(strings.ToLower(head)) {
usecolumns = append(usecolumns, i+1)
}
} }
} }
} else { } else {
@@ -87,27 +147,28 @@ func PrepareColumns(conf *cfg.Config, data *Tabdata) error {
// a colum spec is not a number, we process them above // a colum spec is not a number, we process them above
// inside the err handler for atoi(). so only add the // inside the err handler for atoi(). so only add the
// number, if it's really just a number. // number, if it's really just a number.
conf.UseColumns = append(conf.UseColumns, usenum) usecolumns = append(usecolumns, usenum)
} }
} }
// deduplicate: put all values into a map (value gets map key) // deduplicate: put all values into a map (value gets map key)
// thereby removing duplicates, extract keys into new slice // thereby removing duplicates, extract keys into new slice
// and sort it // and sort it
imap := make(map[int]int, len(conf.UseColumns)) imap := make(map[int]int, len(usecolumns))
for _, i := range conf.UseColumns { for _, i := range usecolumns {
imap[i] = 0 imap[i] = 0
} }
conf.UseColumns = nil // fill with deduplicated columns
usecolumns = nil
for k := range imap { for k := range imap {
conf.UseColumns = append(conf.UseColumns, k) usecolumns = append(usecolumns, k)
} }
sort.Ints(conf.UseColumns) sort.Ints(usecolumns)
return nil return usecolumns, nil
} }
// prepare headers: add numbers to headers // prepare headers: add numbers to headers

View File

@@ -67,8 +67,8 @@ func TestPrepareColumns(t *testing.T) {
}{ }{
{"1,2,3", []int{1, 2, 3}, false}, {"1,2,3", []int{1, 2, 3}, false},
{"1,2,", []int{}, true}, {"1,2,", []int{}, true},
{"T", []int{2, 3}, false}, {"T.", []int{2, 3}, false},
{"T,2,3", []int{2, 3}, false}, {"T.,2,3", []int{2, 3}, false},
{"[a-z,4,5", []int{4, 5}, true}, // invalid regexp {"[a-z,4,5", []int{4, 5}, true}, // invalid regexp
} }
@@ -90,6 +90,86 @@ func TestPrepareColumns(t *testing.T) {
} }
} }
func TestPrepareTransposerColumns(t *testing.T) {
data := Tabdata{
maxwidthHeader: 5,
columns: 3,
headers: []string{
"ONE", "TWO", "THREE",
},
entries: [][]string{
{
"2", "3", "4",
},
},
}
var tests = []struct {
input string
transp []string
exp int
wanterror bool // expect error
}{
{
"1",
[]string{`/\d/x/`},
1,
false,
},
{
"T.", // will match [T]WO and [T]HREE
[]string{`/\d/x/`, `/.//`},
2,
false,
},
{
"TH.,2",
[]string{`/\d/x/`, `/.//`},
2,
false,
},
{
"1",
[]string{},
1,
true,
},
{
"",
[]string{`|.|N|`},
0,
true,
},
{
"1",
[]string{`|.|N|`},
1,
false,
},
}
for _, testdata := range tests {
testname := fmt.Sprintf("PrepareTransposerColumns-%s-%t", testdata.input, testdata.wanterror)
t.Run(testname, func(t *testing.T) {
conf := cfg.Config{TransposeColumns: testdata.input, Transposers: testdata.transp}
err := PrepareTransposerColumns(&conf, &data)
if err != nil {
if !testdata.wanterror {
t.Errorf("got error: %v", err)
}
} else {
if len(conf.UseTransposeColumns) != testdata.exp {
t.Errorf("got %d, want %d", conf.UseTransposeColumns, testdata.exp)
}
if len(conf.Transposers) != len(conf.UseTransposeColumns) {
t.Errorf("got %d, want %d", conf.UseTransposeColumns, testdata.exp)
}
}
})
}
}
func TestReduceColumns(t *testing.T) { func TestReduceColumns(t *testing.T) {
var tests = []struct { var tests = []struct {
expect [][]string expect [][]string

View File

@@ -29,7 +29,7 @@ import (
const RWRR = 0755 const RWRR = 0755
func ProcessFiles(conf *cfg.Config, args []string) error { func ProcessFiles(conf *cfg.Config, args []string) error {
fds, pattern, err := determineIO(conf, args) fd, pattern, err := determineIO(conf, args)
if err != nil { if err != nil {
return err return err
@@ -39,7 +39,6 @@ func ProcessFiles(conf *cfg.Config, args []string) error {
return err return err
} }
for _, fd := range fds {
data, err := Parse(*conf, fd) data, err := Parse(*conf, fd)
if err != nil { if err != nil {
return err return err
@@ -55,64 +54,47 @@ func ProcessFiles(conf *cfg.Config, args []string) error {
} }
printData(os.Stdout, *conf, &data) printData(os.Stdout, *conf, &data)
}
return nil return nil
} }
func determineIO(conf *cfg.Config, args []string) ([]io.Reader, string, error) { func determineIO(conf *cfg.Config, args []string) (io.Reader, string, error) {
var filehandles []io.Reader var filehandle io.Reader
var pattern string var pattern string
var haveio bool var haveio bool
switch {
case conf.InputFile == "-":
filehandle = os.Stdin
haveio = true
case conf.InputFile != "":
fd, err := os.OpenFile(conf.InputFile, os.O_RDONLY, RWRR)
if err != nil {
return nil, "", fmt.Errorf("failed to read input file %s: %w", conf.InputFile, err)
}
filehandle = fd
haveio = true
}
if !haveio {
stat, _ := os.Stdin.Stat() stat, _ := os.Stdin.Stat()
if (stat.Mode() & os.ModeCharDevice) == 0 { if (stat.Mode() & os.ModeCharDevice) == 0 {
// we're reading from STDIN, which takes precedence over file args // we're reading from STDIN, which takes precedence over file args
filehandles = append(filehandles, os.Stdin) filehandle = os.Stdin
haveio = true
if len(args) > 0 {
// ignore any args > 1
pattern = args[0]
conf.Pattern = args[0] // used for colorization by printData()
} }
haveio = true
} else if len(args) > 0 {
// threre were args left, take a look
if args[0] == "-" {
// in traditional unix programs a dash denotes STDIN (forced)
filehandles = append(filehandles, os.Stdin)
haveio = true
} else {
if _, err := os.Stat(args[0]); err != nil {
// first one is not a file, consider it as regexp and
// shift arg list
pattern = args[0]
conf.Pattern = args[0] // used for colorization by printData()
args = args[1:]
} }
if len(args) > 0 { if len(args) > 0 {
// consider any other args as files pattern = args[0]
for _, file := range args { conf.Pattern = args[0]
filehandle, err := os.OpenFile(file, os.O_RDONLY, RWRR)
if err != nil {
return nil, "", fmt.Errorf("failed to read input file %s: %w", file, err)
}
filehandles = append(filehandles, filehandle)
haveio = true
}
}
}
} }
if !haveio { if !haveio {
return nil, "", errors.New("no file specified and nothing to read on stdin") return nil, "", errors.New("no file specified and nothing to read on stdin")
} }
return filehandles, pattern, nil return filehandle, pattern, nil
} }

View File

@@ -1,319 +0,0 @@
/*
Copyright © 2023 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package lib
import (
"errors"
"fmt"
"log"
"os"
"strings"
"github.com/glycerine/zygomys/zygo"
"github.com/tlinden/tablizer/cfg"
)
/*
needs to be global because we can't feed an cfg object to AddHook()
which is being called from user lisp code
*/
var Hooks map[string][]*zygo.SexpSymbol
/*
AddHook() (called addhook from lisp code) can be used by the user to
add a function to one of the available hooks provided by tablizer.
*/
func AddHook(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
var hookname string
if len(args) < 2 {
return zygo.SexpNull, errors.New("argument of %add-hook should be: %hook-name %your-function")
}
switch sexptype := args[0].(type) {
case *zygo.SexpSymbol:
if !HookExists(sexptype.Name()) {
return zygo.SexpNull, errors.New("Unknown hook " + sexptype.Name())
}
hookname = sexptype.Name()
default:
return zygo.SexpNull, errors.New("hook name must be a symbol ")
}
switch sexptype := args[1].(type) {
case *zygo.SexpSymbol:
_, exists := Hooks[hookname]
if !exists {
Hooks[hookname] = []*zygo.SexpSymbol{sexptype}
} else {
Hooks[hookname] = append(Hooks[hookname], sexptype)
}
default:
return zygo.SexpNull, errors.New("hook function must be a symbol ")
}
return zygo.SexpNull, nil
}
/*
Check if a hook exists
*/
func HookExists(key string) bool {
for _, hook := range cfg.ValidHooks {
if hook == key {
return true
}
}
return false
}
/*
* Basic sanity checks and load lisp file
*/
func LoadAndEvalFile(env *zygo.Zlisp, path string) error {
if strings.HasSuffix(path, `.zy`) {
code, err := os.ReadFile(path)
if err != nil {
return fmt.Errorf("failed to read lisp file %s: %w", path, err)
}
// FIXME: check what res (_ here) could be and mean
_, err = env.EvalString(string(code))
if err != nil {
log.Fatal(env.GetStackTrace(err))
}
}
return nil
}
/*
* Setup lisp interpreter environment
*/
func SetupLisp(conf *cfg.Config) error {
// iterate over load-path and evaluate all *.zy files there, if any
// we ignore if load-path does not exist, which is the default anyway
path, err := os.Stat(conf.LispLoadPath)
if err != nil {
if os.IsNotExist(err) {
// ignore non-existent files
return nil
}
return fmt.Errorf("failed to stat path: %w", err)
}
// init global hooks
Hooks = make(map[string][]*zygo.SexpSymbol)
// init sandbox
env := zygo.NewZlispSandbox()
env.AddFunction("addhook", AddHook)
if !path.IsDir() {
// load single lisp file
err = LoadAndEvalFile(env, conf.LispLoadPath)
if err != nil {
return err
}
} else {
// load all lisp file in load dir
dir, err := os.ReadDir(conf.LispLoadPath)
if err != nil {
return fmt.Errorf("failed to read lisp dir %s: %w",
conf.LispLoadPath, err)
}
for _, entry := range dir {
if !entry.IsDir() {
err := LoadAndEvalFile(env, conf.LispLoadPath+"/"+entry.Name())
if err != nil {
return err
}
}
}
}
RegisterLib(env)
conf.Lisp = env
return nil
}
/*
Execute every user lisp function registered as filter hook.
Each function is given the current line as argument and is expected to
return a boolean. True indicates to keep the line, false to skip
it.
If there are multiple such functions registered, then the first one
returning false wins, that is if each function returns true the line
will be kept, if at least one of them returns false, it will be
skipped.
*/
func RunFilterHooks(conf cfg.Config, line string) (bool, error) {
for _, hook := range Hooks["filter"] {
var result bool
conf.Lisp.Clear()
res, err := conf.Lisp.EvalString(fmt.Sprintf("(%s `%s`)", hook.Name(), line))
if err != nil {
return false, fmt.Errorf("failed to evaluate hook loader: %w", err)
}
switch sexptype := res.(type) {
case *zygo.SexpBool:
result = sexptype.Val
default:
return false, fmt.Errorf("filter hook shall return bool")
}
if !result {
// the first hook which returns false leads to complete false
return result, nil
}
}
// if no hook returned false, we succeed and accept the given line
return true, nil
}
/*
These hooks get the data (Tabdata) readily processed by tablizer as
argument. They are expected to return a SexpPair containing a boolean
denoting if the data has been modified and the actual modified
data. Columns must be the same, rows may differ. Cells may also have
been modified.
Replaces the internal data structure Tabdata with the user supplied
version.
Only one process hook function is supported.
The somewhat complicated code is being caused by the fact, that we
need to convert our internal structure to a lisp variable and vice
versa afterwards.
*/
func RunProcessHooks(conf cfg.Config, data Tabdata) (Tabdata, bool, error) {
var userdata Tabdata
lisplist := []zygo.Sexp{}
if len(Hooks["process"]) == 0 {
return userdata, false, nil
}
if len(Hooks["process"]) > 1 {
fmt.Println("Warning: only one process hook is allowed!")
}
// there are hook[s] installed, convert the go data structure 'data to lisp
for _, row := range data.entries {
var entry zygo.SexpHash
for idx, cell := range row {
err := entry.HashSet(&zygo.SexpStr{S: data.headers[idx]}, &zygo.SexpStr{S: cell})
if err != nil {
return userdata, false, fmt.Errorf("failed to convert to lisp data: %w", err)
}
}
lisplist = append(lisplist, &entry)
}
// we need to add it to the env so that the function can use the struct directly
conf.Lisp.AddGlobal("data", &zygo.SexpArray{Val: lisplist, Env: conf.Lisp})
// execute the actual hook
hook := Hooks["process"][0]
conf.Lisp.Clear()
var result bool
res, err := conf.Lisp.EvalString(fmt.Sprintf("(%s data)", hook.Name()))
if err != nil {
return userdata, false, fmt.Errorf("failed to eval lisp loader: %w", err)
}
// we expect (bool, array(hash)) as return from the function
switch sexptype := res.(type) {
case *zygo.SexpPair:
switch th := sexptype.Head.(type) {
case *zygo.SexpBool:
result = th.Val
default:
return userdata, false, errors.New("expect (bool, array(hash)) as return value")
}
switch sexptailtype := sexptype.Tail.(type) {
case *zygo.SexpArray:
lisplist = sexptailtype.Val
default:
return userdata, false, errors.New("expect (bool, array(hash)) as return value ")
}
default:
return userdata, false, errors.New("process hook shall return array of hashes ")
}
if !result {
// no further processing required
return userdata, result, nil
}
// finally convert lispdata back to Tabdata
for _, item := range lisplist {
row := []string{}
switch hash := item.(type) {
case *zygo.SexpHash:
for _, header := range data.headers {
entry, err := hash.HashGetDefault(
conf.Lisp,
&zygo.SexpStr{S: header},
&zygo.SexpStr{S: ""})
if err != nil {
return userdata, false, fmt.Errorf("failed to get lisp hash entry: %w", err)
}
switch sexptype := entry.(type) {
case *zygo.SexpStr:
row = append(row, sexptype.S)
default:
return userdata, false, errors.New("hash values should be string ")
}
}
default:
return userdata, false, errors.New("returned array should contain hashes ")
}
userdata.entries = append(userdata.entries, row)
}
userdata.headers = data.headers
return userdata, result, nil
}

View File

@@ -1,110 +0,0 @@
/*
Copyright © 2023 Thomas von Dein
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package lib
import (
"errors"
"fmt"
"regexp"
"strconv"
"github.com/glycerine/zygomys/zygo"
)
func Splice2SexpList(list []string) zygo.Sexp {
slist := []zygo.Sexp{}
for _, item := range list {
slist = append(slist, &zygo.SexpStr{S: item})
}
return zygo.MakeList(slist)
}
func StringReSplit(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
if len(args) < 2 {
return zygo.SexpNull, errors.New("expecting 2 arguments: <string>, <regex>")
}
var separator, input string
switch t := args[0].(type) {
case *zygo.SexpStr:
input = t.S
default:
return zygo.SexpNull, errors.New("first argument must be a string")
}
switch t := args[1].(type) {
case *zygo.SexpStr:
separator = t.S
default:
return zygo.SexpNull, errors.New("second argument must be a string")
}
sep := regexp.MustCompile(separator)
return Splice2SexpList(sep.Split(input, -1)), nil
}
func String2Int(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
var number int
switch t := args[0].(type) {
case *zygo.SexpStr:
num, err := strconv.Atoi(t.S)
if err != nil {
return zygo.SexpNull, fmt.Errorf("failed to convert string to number: %w", err)
}
number = num
default:
return zygo.SexpNull, errors.New("argument must be a string")
}
return &zygo.SexpInt{Val: int64(number)}, nil
}
func RegMatch(env *zygo.Zlisp, name string, args []zygo.Sexp) (zygo.Sexp, error) {
if len(args) != 2 {
return zygo.SexpNull, fmt.Errorf("argument must be <regexp>, <string>")
}
arguments := []string{}
for _, arg := range args {
switch t := arg.(type) {
case *zygo.SexpStr:
arguments = append(arguments, t.S)
default:
return zygo.SexpNull, errors.New("argument must be a string")
}
}
reg := regexp.MustCompile(arguments[0])
return &zygo.SexpBool{Val: reg.MatchString(arguments[1])}, nil
}
func RegisterLib(env *zygo.Zlisp) {
env.AddFunction("resplit", StringReSplit)
env.AddFunction("atoi", String2Int)
env.AddFunction("matchre", RegMatch)
}

View File

@@ -33,11 +33,31 @@ import (
Parser switch Parser switch
*/ */
func Parse(conf cfg.Config, input io.Reader) (Tabdata, error) { func Parse(conf cfg.Config, input io.Reader) (Tabdata, error) {
var data Tabdata
var err error
// first step, parse the data
if len(conf.Separator) == 1 { if len(conf.Separator) == 1 {
return parseCSV(conf, input) data, err = parseCSV(conf, input)
} else {
data, err = parseTabular(conf, input)
} }
return parseTabular(conf, input) if err != nil {
return data, err
}
// 2nd step, apply filters, code or transposers, if any
postdata, changed, err := PostProcess(conf, &data)
if err != nil {
return data, err
}
if changed {
return *postdata, nil
}
return data, err
} }
/* /*
@@ -77,16 +97,6 @@ func parseCSV(conf cfg.Config, input io.Reader) (Tabdata, error) {
} }
} }
// apply user defined lisp process hooks, if any
userdata, changed, err := RunProcessHooks(conf, data)
if err != nil {
return data, fmt.Errorf("failed to apply filter hook: %w", err)
}
if changed {
data = userdata
}
return data, nil return data, nil
} }
@@ -110,9 +120,6 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
if !hadFirst { if !hadFirst {
// header processing // header processing
data.columns = len(parts) data.columns = len(parts)
// if Debug {
// fmt.Println(parts)
// }
// process all header fields // process all header fields
for _, part := range parts { for _, part := range parts {
@@ -138,18 +145,6 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
continue continue
} }
// apply user defined lisp filters, if any
accept, err := RunFilterHooks(conf, line)
if err != nil {
return data, fmt.Errorf("failed to apply filter hook: %w", err)
}
if !accept {
// IF there are filter hook[s] and IF one of them
// returns false on the current line, reject it
continue
}
idx := 0 // we cannot use the header index, because we could exclude columns idx := 0 // we cannot use the header index, because we could exclude columns
values := []string{} values := []string{}
for _, part := range parts { for _, part := range parts {
@@ -174,29 +169,42 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
return data, fmt.Errorf("failed to read from io.Reader: %w", scanner.Err()) return data, fmt.Errorf("failed to read from io.Reader: %w", scanner.Err())
} }
return data, nil
}
func PostProcess(conf cfg.Config, data *Tabdata) (*Tabdata, bool, error) {
var modified bool
// filter by field filters, if any // filter by field filters, if any
filtereddata, changed, err := FilterByFields(conf, data) filtereddata, changed, err := FilterByFields(conf, data)
if err != nil { if err != nil {
return data, fmt.Errorf("failed to filter fields: %w", err) return data, false, fmt.Errorf("failed to filter fields: %w", err)
} }
if changed { if changed {
data = filtereddata data = filtereddata
modified = true
} }
// apply user defined lisp process hooks, if any // check if transposers are valid and turn into Transposer structs
userdata, changed, err := RunProcessHooks(conf, data) if err := PrepareTransposerColumns(&conf, data); err != nil {
return data, false, err
}
// transpose if demanded
modifieddata, changed, err := TransposeFields(conf, data)
if err != nil { if err != nil {
return data, fmt.Errorf("failed to apply filter hook: %w", err) return data, false, fmt.Errorf("failed to transpose fields: %w", err)
} }
if changed { if changed {
data = userdata data = modifieddata
modified = true
} }
if conf.Debug { if conf.Debug {
repr.Print(data) repr.Print(data)
} }
return data, nil return data, modified, nil
} }

main.go
View File

@@ -18,9 +18,17 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
package main package main
import ( import (
"os"
"github.com/tlinden/tablizer/cmd" "github.com/tlinden/tablizer/cmd"
) )
func main() { func main() {
cmd.Execute() os.Exit(Main())
}
func Main() int {
cmd.Execute()
return 0 // cmd takes care of exit 1 itself
} }

main_test.go Normal file
View File

@@ -0,0 +1,20 @@
package main
import (
"os"
"testing"
"github.com/rogpeppe/go-internal/testscript"
)
func TestMain(m *testing.M) {
os.Exit(testscript.RunMain(m, map[string]func() int{
"tablizer": Main,
}))
}
func TestTablizer(t *testing.T) {
testscript.Run(t, testscript.Params{
Dir: "t",
})
}

t/test-basics.txtar Normal file
View File

@@ -0,0 +1,43 @@
# usage
exec tablizer -h
stdout Usage
# version
exec tablizer -V
stdout version
# manpage
exec tablizer -m
stdout SYNOPSIS
# completion
exec tablizer --completion bash
stdout __tablizer_init_completion
# use config (configures colors; these are not used since this env doesn't
# support them, but at least the run should succeed)
exec tablizer -f config.hcl -r testtable.txt Runn
stdout Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s
-- config.hcl --
BG = "lightGreen"
FG = "white"
HighlightBG = "lightGreen"
HighlightFG = "white"
NoHighlightBG = "white"
NoHighlightFG = "lightGreen"
HighlightHdrBG = "red"
HighlightHdrFG = "white"

t/test-csv.txtar Normal file
View File

@@ -0,0 +1,26 @@
# reading from file and matching with lowercase words
exec tablizer -c name,status -r testtable.csv -s,
stdout grafana.*Runn
# matching mixed case
exec tablizer -c NAME,staTUS -r testtable.csv -s,
stdout grafana.*Runn
# matching using numbers
exec tablizer -c 1,3 -r testtable.csv -s,
stdout grafana.*Runn
# matching using regex
exec tablizer -c 'na.*,stat.' -r testtable.csv -s,
stdout grafana.*Runn
# will be automatically created in work dir
-- testtable.csv --
NAME,READY,STATUS,RESTARTS,AGE
alertmanager-kube-prometheus-alertmanager-0,2/2,Running,35 (45m ago),11d
grafana-fcc54cbc9-bk7s8,1/1,Running,17 (45m ago),1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7,1/1,Running,17 (45m ago),1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f,1/1,Running,20 (45m ago),45m
kube-prometheus-node-exporter-bfzpl,1/1,Running,17 (45m ago),54s

t/test-filtering.txtar Normal file
View File

@@ -0,0 +1,21 @@
# filtering
exec tablizer -r testtable.txt -F name=grafana
stdout grafana.*Runn
# filtering two columns
exec tablizer -r testtable.txt -F name=prometh -F age=1h
stdout blackbox.*Runn
# filtering two same columns
exec tablizer -r testtable.txt -F name=prometh -F name=alert
stdout prometheus-alertmanager.*Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

View File

@@ -0,0 +1,25 @@
# reading from file and matching with lowercase words
exec tablizer -c name,status -r testtable.txt
stdout grafana.*Runn
# matching mixed case
exec tablizer -c NAME,staTUS -r testtable.txt
stdout grafana.*Runn
# matching using numbers
exec tablizer -c 1,3 -r testtable.txt
stdout grafana.*Runn
# matching using regex
exec tablizer -c 'na.*,stat.' -r testtable.txt
stdout grafana.*Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

t/test-sort.txtar Normal file
View File

@@ -0,0 +1,49 @@
# sort by name
exec tablizer -r testtable.txt -k 1
stdout '^alert.*\n^grafana.*\n^kube'
# sort by name reversed
exec tablizer -r testtable.txt -k 1 -D
stdout 'kube.*\n^grafana.*\n^alert'
# sort by starts numerically
exec tablizer -r testtable.txt -k 4 -i -c4
stdout '17\s*\n^20\s*\n^35'
# sort by starts numerically reversed
exec tablizer -r testtable.txt -k 4 -i -c4 -D
stdout '35\s*\n^20\s*\n^17'
# sort by age
exec tablizer -r testtable.txt -k 5 -a
stdout '45m\s*\n.*1h44m'
# sort by age reverse
exec tablizer -r testtable.txt -k 5 -a -D
stdout '1h44m\s*\n.*45m'
# sort by time
exec tablizer -r timetable.txt -k 2 -t
stdout '^sel.*\n^foo.*\nbar'
# sort by time reverse
exec tablizer -r timetable.txt -k 2 -t -D
stdout '^bar.*\n^foo.*\nsel'
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS STARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 11d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 1h44m
grafana-fcc54cbc9-bk7s8 1/1 Running 17 1d
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 54s
-- timetable.txt --
NAME TIME
foo 2024-11-18T12:00:00+01:00
bar 2024-11-18T12:45:00+01:00
sel 2024-07-18T12:00:00+01:00

18
t/test-stdin.txtar Normal file
View File

@@ -0,0 +1,18 @@
# reading from stdin and matching with lowercase words
stdin testtable.txt
exec tablizer -c name,status
stdout grafana.*Runn
# reading from -r stdin and matching with lowercase words
stdin testtable.txt
exec tablizer -c name,status -r -
stdout grafana.*Runn
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

21
t/test-transpose.txtar Normal file
View File

@@ -0,0 +1,21 @@
# transpose one field
exec tablizer -r testtable.txt -T status -R '/Running/OK/'
stdout grafana.*OK
# transpose two fields
exec tablizer -r testtable.txt -T name,status -R '/alertmanager-//' -R '/Running/OK/'
stdout prometheus-0.*OK
# transpose one field and show one column
exec tablizer -r testtable.txt -T status -R '/Running/OK/' -c name
! stdout grafana.*OK
# will be automatically created in work dir
-- testtable.txt --
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-alertmanager-0 2/2 Running 35 (45m ago) 11d
grafana-fcc54cbc9-bk7s8 1/1 Running 17 (45m ago) 1d
kube-prometheus-blackbox-exporter-5d85b5d8f4-tskh7 1/1 Running 17 (45m ago) 1h44m
kube-prometheus-kube-state-metrics-b4cd9487-75p7f 1/1 Running 20 (45m ago) 45m
kube-prometheus-node-exporter-bfzpl 1/1 Running 17 (45m ago) 54s

View File

@@ -1,45 +0,0 @@
#!/bin/sh
# simple commandline unit test script
t="../tablizer"
fail=0
ex() {
# execute a test, report+exit on error, stay silent otherwise
log="/tmp/test-tablizer.$$.log"
name=$1
shift
echo -n "TEST $name "
$* > $log 2>&1
if test $? -ne 0; then
echo "failed, see $log"
fail=1
else
echo "ok"
rm -f $log
fi
}
# only use files in test dir
cd $(dirname $0)
echo "Executing commandline tests ..."
# io pattern tests
ex io-pattern-and-file $t bk7 testtable
cat testtable | ex io-pattern-and-stdin $t bk7
cat testtable | ex io-pattern-and-stdin-dash $t bk7 -
# same w/o pattern
ex io-just-file $t testtable
cat testtable | ex io-just-stdin $t
cat testtable | ex io-just-stdin-dash $t -
if test $fail -ne 0; then
echo "!!! Some tests failed !!!"
exit 1
fi

6
t/testtable.csv Normal file
View File

@@ -0,0 +1,6 @@
NAME,DURATION
x,10
a,100
z,0
u,4
k,6

View File

@@ -133,7 +133,7 @@
.\" ======================================================================== .\" ========================================================================
.\" .\"
.IX Title "TABLIZER 1" .IX Title "TABLIZER 1"
.TH TABLIZER 1 "2025-01-10" "1" "User Commands" .TH TABLIZER 1 "2025-01-14" "1" "User Commands"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents. .\" way too many mistakes in technical documents.
.if n .ad l .if n .ad l
@@ -156,6 +156,8 @@ tablizer \- Manipulate tabular output of other programs
\& \-k, \-\-sort\-by int Sort by column (default: 1) \& \-k, \-\-sort\-by int Sort by column (default: 1)
\& \-z, \-\-fuzzy Use fuzzy search [experimental] \& \-z, \-\-fuzzy Use fuzzy search [experimental]
\& \-F, \-\-filter field=reg Filter given field with regex, can be used multiple times \& \-F, \-\-filter field=reg Filter given field with regex, can be used multiple times
\& \-T, \-\-transpose\-columns string Transpose the specified columns (separated by ,)
\& \-R, \-\-regex\-transposer /from/to/ Apply /search/replace/ regexp to fields given in \-T
\& \&
\& Output Flags (mutually exclusive): \& Output Flags (mutually exclusive):
\& \-X, \-\-extended Enable extended output \& \-X, \-\-extended Enable extended output
@@ -176,7 +178,6 @@ tablizer \- Manipulate tabular output of other programs
\& Other Flags: \& Other Flags:
\& \-\-completion <shell> Generate the autocompletion script for <shell> \& \-\-completion <shell> Generate the autocompletion script for <shell>
\& \-f, \-\-config <file> Configuration file (default: ~/.config/tablizer/config) \& \-f, \-\-config <file> Configuration file (default: ~/.config/tablizer/config)
\& \-l, \-\-load\-path <path> Load path for lisp plugins (expects *.zy files)
\& \-d, \-\-debug Enable debugging \& \-d, \-\-debug Enable debugging
\& \-h, \-\-help help for tablizer \& \-h, \-\-help help for tablizer
\& \-m, \-\-man Display manual page \& \-m, \-\-man Display manual page
@@ -349,6 +350,50 @@ We want to see only the \s-1CMD\s0 column and use a regex for this:
.Ve .Ve
.PP .PP
where \*(L"C\*(R" is our regexp which matches \s-1CMD.\s0 where \*(L"C\*(R" is our regexp which matches \s-1CMD.\s0
.PP
If a column specifier doesn't look like a regular expression, matching
against header fields will be case insensitive. So, if you have a
field with the name \f(CW\*(C`ID\*(C'\fR then these will all match: \f(CW\*(C`\-c id\*(C'\fR, \f(CW\*(C`\-c
Id\*(C'\fR. The same rule applies to the options \f(CW\*(C`\-T\*(C'\fR and \f(CW\*(C`\-F\*(C'\fR.
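As a quick, hedged illustration (the two-column input below is invented for this example and is not part of the test data; it assumes a tablizer binary built from this tree):

```shell
# "id" doesn't look like a regexp, so it matches the upper-case ID header;
# -c Id or -c ID would select the same column
printf 'ID   NAME\n1    foo\n2    bar\n' | tablizer -c id
```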
.SS "\s-1TRANSPOSE FIELDS USING REGEXPS\s0"
.IX Subsection "TRANSPOSE FIELDS USING REGEXPS"
You can manipulate field contents using regular expressions. You have
to tell tablizer which field[s] to operate on using the option \f(CW\*(C`\-T\*(C'\fR
and the search/replace pattern using \f(CW\*(C`\-R\*(C'\fR. The number of columns and
patterns must match.
.PP
A search/replace pattern consists of the following elements:
.PP
.Vb 1
\& /search\-regexp/replace\-string/
.Ve
.PP
The separator can be any valid character, which is especially useful
if the regexp itself contains the \f(CW\*(C`/\*(C'\fR character, e.g.:
.PP
.Vb 1
\& |search\-regexp|replace\-string|
.Ve
.PP
Example:
.PP
.Vb 7
\& cat t/testtable2
\& NAME DURATION
\& x 10
\& a 100
\& z 0
\& u 4
\& k 6
\&
\& cat t/testtable2 | tablizer \-T2 \-R \*(Aq/^\ed/4/\*(Aq \-n
\& NAME DURATION
\& x 40
\& a 400
\& z 4
\& u 4
\& k 4
.Ve
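When the search pattern itself contains a slash, picking another separator keeps the expression readable. A sketch under the same assumptions (the date table is invented, and whether every match in a cell is replaced is an assumption hedged in the comment):

```shell
# "|" acts as the separator because the pattern contains "/"; this should
# rewrite 2024/11/18 to 2024-11-18, assuming all matches in a cell are replaced
printf 'NAME   DATE\nfoo    2024/11/18\nbar    2024/07/01\n' | tablizer -T2 -R '|/|-|' -n
```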
.SS "\s-1OUTPUT MODES\s0" .SS "\s-1OUTPUT MODES\s0"
.IX Subsection "OUTPUT MODES" .IX Subsection "OUTPUT MODES"
There might be cases when the tabular output of a program is way too There might be cases when the tabular output of a program is way too
@@ -495,63 +540,6 @@ the \f(CW\*(C`\-L\*(C'\fR parameter).
.PP .PP
Colorization can be turned off completely either by setting the Colorization can be turned off completely either by setting the
parameter \f(CW\*(C`\-N\*(C'\fR or the environment variable \fB\s-1NO_COLOR\s0\fR to a true value. parameter \f(CW\*(C`\-N\*(C'\fR or the environment variable \fB\s-1NO_COLOR\s0\fR to a true value.
.SH "LISP PLUGINS [experimental]"
.IX Header "LISP PLUGINS [experimental]"
Tablizer supports plugins written in zygomys lisp. You can supply a
directory to the \f(CW\*(C`\-l\*(C'\fR parameter containing \fB*.zy\fR files or a single
\&.zy file containing lisp code.
.PP
You can put as much code as you want into the file, but you need to
add one lisp function to a hook at the end.
.PP
The following hooks are available:
.IP "\fBfilter\fR" 4
.IX Item "filter"
The filter hook works on a whole line of the input. Your hook
function is expected to return true or false. If you return true, the
line will be included in the output, otherwise not.
.Sp
Multiple filter hook functions are supported.
.Sp
Example:
.Sp
.Vb 7
\& /*
\& Simple filter hook function. Splits the argument by whitespace,
\& fetches the 2nd element, converts it to an int and returns true
\& if it is larger than 5, false otherwise.
\& */
\& (defn uselarge [line]
\& (cond (> (atoi (second (resplit line \` +\`))) 5) true false))
\&
\& /* Register the filter hook */
\& (addhook %filter %uselarge)
.Ve
.IP "\fBprocess\fR" 4
.IX Item "process"
The process hook function gets a table containing the parsed input
data (see \f(CW\*(C`lib/common.go:type Tabdata struct\*(C'\fR). It is expected to
return a pair containing a bool to denote if the table has been
modified, and the [modified] table. The resulting table may have fewer
rows than the original and cells may have changed content, but the
number of columns must stay the same.
.IP "\fBtranspose\fR" 4
.IX Item "transpose"
not yet implemented.
.IP "\fBappend\fR" 4
.IX Item "append"
not yet implemented.
.PP
Beside the existing language features, the following additional lisp
functions are provided by tablizer:
.PP
.Vb 3
\& (resplit [string, regex]) => list
\& (atoi [string]) => int
\& (matchre [string, regex]) => bool
.Ve
.PP
The standard language is described here: <https://github.com/glycerine/zygomys/wiki/Language>.
.SH "BUGS" .SH "BUGS"
.IX Header "BUGS" .IX Header "BUGS"
In order to report a bug, unexpected behavior, feature requests In order to report a bug, unexpected behavior, feature requests

View File

@@ -17,6 +17,8 @@ tablizer - Manipulate tabular output of other programs
-k, --sort-by int Sort by column (default: 1) -k, --sort-by int Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental] -z, --fuzzy Use fuzzy search [experimental]
-F, --filter field=reg Filter given field with regex, can be used multiple times -F, --filter field=reg Filter given field with regex, can be used multiple times
  -T, --transpose-columns string Transpose the specified columns (separated by ,)
-R, --regex-transposer /from/to/ Apply /search/replace/ regexp to fields given in -T
Output Flags (mutually exclusive): Output Flags (mutually exclusive):
-X, --extended Enable extended output -X, --extended Enable extended output
@@ -37,7 +39,6 @@ tablizer - Manipulate tabular output of other programs
Other Flags: Other Flags:
--completion <shell> Generate the autocompletion script for <shell> --completion <shell> Generate the autocompletion script for <shell>
-f, --config <file> Configuration file (default: ~/.config/tablizer/config) -f, --config <file> Configuration file (default: ~/.config/tablizer/config)
-l, --load-path <path> Load path for lisp plugins (expects *.zy files)
-d, --debug Enable debugging -d, --debug Enable debugging
-h, --help help for tablizer -h, --help help for tablizer
-m, --man Display manual page -m, --man Display manual page
@@ -204,6 +205,46 @@ We want to see only the CMD column and use a regex for this:
where "C" is our regexp which matches CMD. where "C" is our regexp which matches CMD.
If a column specifier doesn't look like a regular expression, matching
against header fields will be case insensitive. So, if you have a
field with the name C<ID> then these will all match: C<-c id>, C<-c
Id>. The same rule applies to the options C<-T> and C<-F>.
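For example, filtering also accepts a lower-case field name. A minimal sketch reusing the testtable.txt sample from the test scripts above (and the -r flag they use to read a file), assuming a tablizer binary built from this tree:

```shell
# "status" matches the upper-case STATUS header case-insensitively;
# the part after "=" is an ordinary regexp
tablizer -F status=Run -r testtable.txt
```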
=head2 TRANSPOSE FIELDS USING REGEXPS
You can manipulate field contents using regular expressions. You have
to tell tablizer which field[s] to operate on using the option C<-T>
and the search/replace pattern using C<-R>. The number of columns and
patterns must match.
A search/replace pattern consists of the following elements:
/search-regexp/replace-string/
The separator can be any valid character, which is especially useful
if the regexp itself contains the C</> character, e.g.:
|search-regexp|replace-string|
Example:
cat t/testtable2
NAME DURATION
x 10
a 100
z 0
u 4
k 6
cat t/testtable2 | tablizer -T2 -R '/^\d/4/' -n
NAME DURATION
x 40
a 400
z 4
u 4
k 4
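Because the number of -T columns and -R patterns must match, transposing two fields takes two patterns. A hedged sketch, again reusing the testtable.txt pod listing from the test scripts rather than anything shipped with this manual:

```shell
# one -R pattern per column named in -T: strip a leading "kube-" from NAME
# and rewrite "Running" in STATUS
tablizer -r testtable.txt -T name,status -R '/^kube-//' -R '/Running/up/'
```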
=head2 OUTPUT MODES =head2 OUTPUT MODES
There might be cases when the tabular output of a program is way too There might be cases when the tabular output of a program is way too
@@ -342,67 +383,7 @@ the C<-L> parameter).
Colorization can be turned off completely either by setting the Colorization can be turned off completely either by setting the
parameter C<-N> or the environment variable B<NO_COLOR> to a true value. parameter C<-N> or the environment variable B<NO_COLOR> to a true value.
=head1 LISP PLUGINS [experimental]
Tablizer supports plugins written in zygomys lisp. You can supply a
directory to the C<-l> parameter containing B<*.zy> files or a single
.zy file containing lisp code.
You can put as much code as you want into the file, but you need to
add one lisp function to a hook at the end.
The following hooks are available:
=over
=item B<filter>
The filter hook works on a whole line of the input. Your hook
function is expected to return true or false. If you return true, the
line will be included in the output, otherwise not.
Multiple filter hook functions are supported.
Example:
/*
Simple filter hook function. Splits the argument by whitespace,
fetches the 2nd element, converts it to an int and returns true
if it is larger than 5, false otherwise.
*/
(defn uselarge [line]
(cond (> (atoi (second (resplit line ` +`))) 5) true false))
/* Register the filter hook */
(addhook %filter %uselarge)
=item B<process>
The process hook function gets a table containing the parsed input
data (see C<lib/common.go:type Tabdata struct>). It is expected to
return a pair containing a bool to denote if the table has been
modified, and the [modified] table. The resulting table may have fewer
rows than the original and cells may have changed content, but the
number of columns must stay the same.
=item B<transpose>
not yet implemented.
=item B<append>
not yet implemented.
=back
Beside the existing language features, the following additional lisp
functions are provided by tablizer:
(resplit [string, regex]) => list
(atoi [string]) => int
(matchre [string, regex]) => bool
The standard language is described here: L<https://github.com/glycerine/zygomys/wiki/Language>.
=head1 BUGS =head1 BUGS