Compare commits

..

13 Commits

Author SHA1 Message Date
bc717baa3f fix typo 2025-10-25 21:51:56 +02:00
c34f030914 add stew 2025-10-25 21:50:31 +02:00
T.v.Dein f1aa9d0000 add json output mode (-J) (#87) 2025-10-14 07:18:30 +02:00
736dd37f16 fixed feature entry 2025-10-13 07:24:35 +02:00
e0dc6bb845 updated and added feature list 2025-10-13 07:23:54 +02:00
T.v.Dein 8bdb3db105 fix #85: add --auto-headers and --custom-headers (#86) 2025-10-10 13:08:16 +02:00
4ce6c30f54 fix short usage formatting 2025-10-09 23:16:07 +02:00
T.v.Dein ec0b210167 add some handy builtin character classes as split separators (#84) 2025-10-09 23:03:57 +02:00
253ef8262e fix builder go version 2025-10-08 10:36:09 +02:00
da48994744 fix comment 2025-10-06 23:27:48 +02:00
39f06fddc8 md fix 2025-10-06 23:02:28 +02:00
T.v.Dein 50a9378d92 use column order of -c when specified (#81) 2025-10-06 22:55:04 +02:00
T.v.Dein 35b726fee4 Fix json parser (#80): fix #77: parse floats and nils as well and convert them to string 2025-10-06 22:54:31 +02:00
15 changed files with 601 additions and 91 deletions

View File

@@ -15,7 +15,7 @@ jobs:
   - name: Set up Go
     uses: actions/setup-go@v6
     with:
-      go-version: 1.22.11
+      go-version: 1.24.0
   - name: Build the executables
     run: ./mkrel.sh tablizer ${{ github.ref_name}}

View File

@@ -65,7 +65,7 @@ clean:
 	rm -rf $(tool) releases coverage.out
 test: clean
-	go test -cover ./... $(OPTS)
+	go test -count=1 -cover ./... $(OPTS)
 singletest:
 	@echo "Call like this: 'make singletest TEST=TestPrepareColumns MOD=lib'"

View File

@@ -11,6 +11,23 @@ ignore certain column[s] by regex, name or number. It can output the
 tabular data in a range of formats (see below). There's even an
 interactive filter/selection tool available.
+## FEATURES
+- supports csv, json or ascii format input from files or stdin
+- split any tabular input data by character or regular expression into columns
+- add headers if input data doesn't contain them (automatically or manually)
+- print tabular data as ascii table, org-mode, markdown, csv, shell-evaluable or yaml format
+- filter rows by regular expression (saves a call to `| grep ...`)
+- filter rows by column filter
+- filters may also be negations, e.g. `-Fname!=cow.*` or `-v`
+- modify cells with regular expressions
+- reduce columns by specifying which columns to show, with regex support
+- color support
+- sort by any field[s], multiple sort modes are supported
+- shell completion for options
+- regularly used options can be put into a config file
+- filter TUI where you can interactively sort and filter rows
 ## Demo
 ![demo cast](vhsdemo/demo.gif)
@@ -36,6 +53,9 @@ Operational Flags:
 -R, --regex-transposer </from/to/> Apply /search/replace/ regexp to fields given in -T
 -j, --json Read JSON input (must be array of hashes)
 -I, --interactive Interactively filter and select rows
+    --auto-headers Generate headers if there are none present in input
+    --custom-headers a,b,... Use custom headers, separated by comma
 Output Flags (mutually exclusive):
 -X, --extended Enable extended output
@@ -167,6 +187,11 @@ you can interactively filter and select rows:
 There are multiple ways to install **tablizer**:
+- You can use [stew](https://github.com/marwanhawari/stew) to install tablizer:
+  ```default
+  stew install tlinden/tablizer
+  ```
 - Go to the [latest release page](https://github.com/tlinden/tablizer/releases/latest),
   locate the binary for your operating system and platform.
@@ -192,10 +217,9 @@ hesitate to ask me about it, I'll add it.
 ## Documentation
 The documentation is provided as a unix man-page. It will be
-automatically installed if you install from source. However, you can
-read the man-page online:
-https://github.com/TLINDEN/tablizer/blob/main/tablizer.pod
+automatically installed if you install from source.
+[However, you can read the man-page online](https://github.com/TLINDEN/tablizer/blob/main/tablizer.pod).
 Or if you cloned the repository you can read it this way (perl needs
 to be installed though): `perldoc tablizer.pod`.

View File

@@ -27,13 +27,26 @@ import (
 	"github.com/hashicorp/hcl/v2/hclsimple"
 )
-const DefaultSeparator string = `(\s\s+|\t)`
-const Version string = "v1.5.7"
-const MAXPARTS = 2
+const (
+	Version  = "v1.5.11"
+	MAXPARTS = 2
+)
-var DefaultConfigfile = os.Getenv("HOME") + "/.config/tablizer/config"
-var VERSION string // maintained by -x
+var (
+	DefaultConfigfile = os.Getenv("HOME") + "/.config/tablizer/config"
+	VERSION           string // maintained by -x
+	SeparatorTemplates = map[string]string{
+		":tab:":      `\s*\t\s*`,   // tab but eats spaces around
+		":spaces:":   `\s{2,}`,     // 2 or more spaces
+		":pipe:":     `\s*\|\s*`,   // one pipe eating spaces around
+		":default:":  `(\s\s+|\t)`, // 2 or more spaces or tab
+		":nonword:":  `\W`,         // non-word character
+		":nondigit:": `\D`,         // non-digit character
+		":special:":  `[\*\+\-_\(\)\[\]\{\}?\\/<>=&$§"':,\^]+`, // match any special char
+		":nonprint:": `[[:^print:]]+`, // non printables
+	}
+)
 // public config, set via config file or using defaults
 type Settings struct {
@@ -80,6 +93,8 @@ type Config struct {
 	UseHighlight bool
 	Interactive  bool
 	InputJSON    bool
+	AutoHeaders   bool
+	CustomHeaders []string
 	SortMode       string
 	SortDescending bool
@@ -126,6 +141,7 @@ type Modeflag struct {
 	Y bool
 	A bool
 	C bool
+	J bool
 }
 // used for switching printers
@@ -137,6 +153,7 @@ const (
 	Yaml
 	CSV
 	ASCII
+	Json
 )
 // various sort types
@@ -275,6 +292,8 @@ func (conf *Config) PrepareModeFlags(flag Modeflag) {
 		conf.OutputMode = Yaml
 	case flag.C:
 		conf.OutputMode = CSV
+	case flag.J:
+		conf.OutputMode = Json
 	default:
 		conf.OutputMode = ASCII
 	}
@@ -356,6 +375,13 @@ func (conf *Config) ApplyDefaults() {
 	if conf.OutputMode == Yaml || conf.OutputMode == CSV {
 		conf.Numbering = false
 	}
+	if conf.Separator[0] == ':' && conf.Separator[len(conf.Separator)-1] == ':' {
+		separator, ok := SeparatorTemplates[conf.Separator]
+		if ok {
+			conf.Separator = separator
+		}
+	}
 }
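For illustration, the class-to-regexp lookup added above can be exercised in isolation. This is a sketch with an abridged copy of the template map; `resolveSeparator` is a hypothetical helper name (the real code mutates `conf.Separator` inside `ApplyDefaults` and does not guard against an empty string):

```go
package main

import "fmt"

// abridged copy of the SeparatorTemplates map from the diff above
var separatorTemplates = map[string]string{
	":tab:":     `\s*\t\s*`,
	":pipe:":    `\s*\|\s*`,
	":default:": `(\s\s+|\t)`,
}

// resolveSeparator mirrors the ApplyDefaults logic: a value wrapped in
// colons is looked up in the template map; unknown classes and plain
// separators pass through unchanged.
func resolveSeparator(sep string) string {
	if len(sep) > 1 && sep[0] == ':' && sep[len(sep)-1] == ':' {
		if tmpl, ok := separatorTemplates[sep]; ok {
			return tmpl
		}
	}
	return sep
}

func main() {
	fmt.Println(resolveSeparator(":pipe:")) // the :pipe: regexp
	fmt.Println(resolveSeparator(":oops:")) // unknown class, unchanged
	fmt.Println(resolveSeparator(","))      // plain separator, unchanged
}
```

Note that an unknown `:class:` falls through silently and is later treated as an ordinary separator string.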
 func (conf *Config) PreparePattern(patterns []*Pattern) error {
@@ -393,6 +419,12 @@ func (conf *Config) PreparePattern(patterns []*Pattern) error {
 	return nil
 }
+func (conf *Config) PrepareCustomHeaders(custom string) {
+	if len(custom) > 0 {
+		conf.CustomHeaders = strings.Split(custom, ",")
+	}
+}
 // Parse config file. Ignore if the file doesn't exist but return an
 // error if it exists but fails to read or parse
 func (conf *Config) ParseConfigfile() error {

View File

@@ -59,6 +59,7 @@ func Execute() {
 		ShowCompletion string
 		modeflag       cfg.Modeflag
 		sortmode       cfg.Sortmode
+		headers        string
 	)
 	var rootCmd = &cobra.Command{
@@ -91,6 +92,7 @@ func Execute() {
 			conf.CheckEnv()
 			conf.PrepareModeFlags(modeflag)
 			conf.PrepareSortFlags(sortmode)
+			conf.PrepareCustomHeaders(headers)
 			wrapE(conf.PrepareFilters())
@@ -123,7 +125,7 @@ func Execute() {
 		"Use alternating background colors")
 	rootCmd.PersistentFlags().StringVarP(&ShowCompletion, "completion", "", "",
 		"Display completion code")
-	rootCmd.PersistentFlags().StringVarP(&conf.Separator, "separator", "s", cfg.DefaultSeparator,
+	rootCmd.PersistentFlags().StringVarP(&conf.Separator, "separator", "s", cfg.SeparatorTemplates[":default:"],
 		"Custom field separator")
 	rootCmd.PersistentFlags().StringVarP(&conf.Columns, "columns", "c", "",
 		"Only show the speficied columns (separated by ,)")
@@ -133,10 +135,14 @@ func Execute() {
 		"Transpose the speficied columns (separated by ,)")
 	rootCmd.PersistentFlags().BoolVarP(&conf.Interactive, "interactive", "I", false,
 		"interactive mode")
-	rootCmd.PersistentFlags().StringVarP(&conf.OFS, "ofs", "", "",
+	rootCmd.PersistentFlags().StringVarP(&conf.OFS, "ofs", "o", "",
 		"Output field separator (' ' for ascii table, ',' for CSV)")
 	rootCmd.PersistentFlags().BoolVarP(&conf.InputJSON, "json", "j", false,
 		"JSON input mode")
+	rootCmd.PersistentFlags().BoolVarP(&conf.AutoHeaders, "auto-headers", "g", false,
+		"Generate headers automatically")
+	rootCmd.PersistentFlags().StringVarP(&headers, "custom-headers", "x", "",
+		"Custom headers")
 	// sort options
 	rootCmd.PersistentFlags().StringVarP(&conf.SortByColumn, "sort-by", "k", "",
@@ -165,6 +171,8 @@ func Execute() {
 		"Enable shell mode output")
 	rootCmd.PersistentFlags().BoolVarP(&modeflag.Y, "yaml", "Y", false,
 		"Enable yaml output")
+	rootCmd.PersistentFlags().BoolVarP(&modeflag.J, "jsonout", "J", false,
+		"Enable json output")
 	rootCmd.PersistentFlags().BoolVarP(&modeflag.C, "csv", "C", false,
 		"Enable CSV output")
 	rootCmd.PersistentFlags().BoolVarP(&modeflag.A, "ascii", "A", false,

View File

@@ -1,16 +1,18 @@
 package cmd
 const shortusage = `tablizer [regex,...] [-r file] [flags]
 -c col,... show specified columns -L highlight matching lines
 -k col,... sort by specified columns -j read JSON input
 -F col=reg filter field with regexp -v invert match
 -T col,... transpose specified columns -n numberize columns
 -R /from/to/ apply replacement to columns in -T -N do not use colors
 -y col,... yank columns to clipboard -H do not show headers
 --ofs char output field separator -s specify field separator
 -r file read input from file -z use fuzzy search
 -f file read config from file -I interactive filter mode
--d debug
--O org -C CSV -M md -X ext -S shell -Y yaml -D sort descending order
+-x col,... use custom headers -d debug
+-o char use char as output separator -g auto generate headers
+-O org -C CSV -M md -X ext -S shell -Y yaml -J json -D sort descending order
 -m show manual --help show detailed help -v show version
 -a sort by age -i sort numerically -t sort by time`

View File

@@ -14,7 +14,7 @@ SYNOPSIS
 -n, --numbering Enable header numbering
 -N, --no-color Disable pattern highlighting
 -H, --no-headers Disable headers display
--s, --separator <string> Custom field separator
+-s, --separator <string> Custom field separator (may be a char, string or :class:)
 -k, --sort-by <int|name> Sort by column (default: 1)
 -z, --fuzzy Use fuzzy search [experimental]
 -F, --filter <field[!]=reg> Filter given field with regex, can be used multiple times
@@ -22,6 +22,8 @@ SYNOPSIS
 -R, --regex-transposer </from/to/> Apply /search/replace/ regexp to fields given in -T
 -j, --json Read JSON input (must be array of hashes)
 -I, --interactive Interactively filter and select rows
+-g, --auto-headers Generate headers if there are none present in input
+-x, --custom-headers a,b,... Use custom headers, separated by comma
 Output Flags (mutually exclusive):
 -X, --extended Enable extended output
@@ -29,12 +31,13 @@ SYNOPSIS
 -O, --orgtbl Enable org-mode table output
 -S, --shell Enable shell evaluable output
 -Y, --yaml Enable yaml output
+-J, --jsonout Enable JSON output
 -C, --csv Enable CSV output
 -A, --ascii Default output mode, ascii tabular
 -L, --hightlight-lines Use alternating background colors for tables
+-o, --ofs <char> Output field separator, used by -A and -C.
 -y, --yank-columns Yank specified columns (separated by ,) to clipboard,
                    space separated
---ofs <char> Output field separator, used by -A and -C.
 Sort Mode Flags (mutually exclusive):
 -a, --sort-age sort according to age (duration) string
@@ -141,6 +144,57 @@ DESCRIPTION
 Finally the -d option enables debugging output which is mostly useful
 for the developer.
+SEPARATOR
+The option -s can be a single character, in which case the CSV parser
+will be invoked. You can also specify a string as separator. The string
+will be interpreted as a literal string unless it is a valid go regular
+expression. For example:
+    -s '\t{2,}'
+is used as a regexp and will match two or more consecutive tabs.
+    -s 'foo'
+on the other hand is not a regular expression and will be used literally.
+To make life easier, there are a couple of predefined regular
+expressions, which you can specify as classes:
+* :tab:
+  Matches a tab and eats spaces around it.
+* :spaces:
+  Matches 2 or more spaces.
+* :pipe:
+  Matches a pipe character and eats spaces around it.
+* :default:
+  Matches 2 or more spaces or a tab. This is the default separator if
+  none is specified.
+* :nonword:
+  Matches a non-word character.
+* :nondigit:
+  Matches a non-digit character.
+* :special:
+  Matches one or more special chars like brackets, dollar sign,
+  slashes etc.
+* :nonprint:
+  Matches one or more non-printable characters.
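The interpretation rule above can be sketched in Go. `classifySeparator` is a hypothetical helper for illustration, not part of tablizer; note that a string like `foo` also compiles as a regexp and simply matches itself literally, so "literal" here means "does not even compile as a regexp":

```go
package main

import (
	"fmt"
	"regexp"
	"unicode/utf8"
)

// classifySeparator illustrates the documented rule: a single
// character selects the CSV parser; longer strings are used as a
// regexp when they compile, and literally otherwise.
func classifySeparator(sep string) string {
	if utf8.RuneCountInString(sep) == 1 {
		return "csv"
	}
	if _, err := regexp.Compile(sep); err == nil {
		return "regexp"
	}
	return "literal"
}

func main() {
	fmt.Println(classifySeparator(","))      // csv
	fmt.Println(classifySeparator(`\t{2,}`)) // regexp
	fmt.Println(classifySeparator(`a(b`))    // literal (unbalanced paren)
}
```

Counting runes rather than bytes matters here, since a multi-byte character such as a tab alternative or an emoji is still a single separator character.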
 PATTERNS AND FILTERING
 You can reduce the rows being displayed by using one or more regular
 expression patterns. The regexp language being used is the one of
@@ -458,7 +512,7 @@ Operational Flags:
 -n, --numbering Enable header numbering
 -N, --no-color Disable pattern highlighting
 -H, --no-headers Disable headers display
--s, --separator <string> Custom field separator
+-s, --separator <string> Custom field separator (may be a char, string or :class:)
 -k, --sort-by <int|name> Sort by column (default: 1)
 -z, --fuzzy Use fuzzy search [experimental]
 -F, --filter <field[!]=reg> Filter given field with regex, can be used multiple times
@@ -466,6 +520,8 @@ Operational Flags:
 -R, --regex-transposer </from/to/> Apply /search/replace/ regexp to fields given in -T
 -j, --json Read JSON input (must be array of hashes)
 -I, --interactive Interactively filter and select rows
+-g, --auto-headers Generate headers if there are none present in input
+-x, --custom-headers a,b,... Use custom headers, separated by comma
 Output Flags (mutually exclusive):
 -X, --extended Enable extended output
@@ -473,12 +529,13 @@ Output Flags (mutually exclusive):
 -O, --orgtbl Enable org-mode table output
 -S, --shell Enable shell evaluable output
 -Y, --yaml Enable yaml output
+-J, --jsonout Enable JSON output
 -C, --csv Enable CSV output
 -A, --ascii Default output mode, ascii tabular
 -L, --hightlight-lines Use alternating background colors for tables
+-o, --ofs <char> Output field separator, used by -A and -C.
 -y, --yank-columns Yank specified columns (separated by ,) to clipboard,
                    space separated
---ofs <char> Output field separator, used by -A and -C.
 Sort Mode Flags (mutually exclusive):
 -a, --sort-age sort according to age (duration) string

View File

@@ -22,7 +22,7 @@ import (
 	"fmt"
 	"os"
 	"regexp"
-	"sort"
+	"slices"
 	"strconv"
 	"strings"
@@ -30,16 +30,6 @@ import (
 	"github.com/tlinden/tablizer/cfg"
 )
-func contains(s []int, e int) bool {
-	for _, a := range s {
-		if a == e {
-			return true
-		}
-	}
-	return false
-}
 func findindex(s []int, e int) (int, bool) {
 	for i, a := range s {
 		if a == e {
@@ -172,48 +162,32 @@ func PrepareColumnVars(columns string, data *Tabdata) ([]int, error) {
 		}
 	}
-	// deduplicate: put all values into a map (value gets map key)
-	// thereby removing duplicates, extract keys into new slice
-	// and sort it
-	imap := make(map[int]int, len(usecolumns))
-	for _, i := range usecolumns {
-		imap[i] = 0
-	}
-	// fill with deduplicated columns
-	usecolumns = nil
-	for k := range imap {
-		usecolumns = append(usecolumns, k)
-	}
-	sort.Ints(usecolumns)
-	return usecolumns, nil
+	// deduplicate columns, preserve order
+	deduped := []int{}
+	for _, i := range usecolumns {
+		if !slices.Contains(deduped, i) {
+			deduped = append(deduped, i)
+		}
+	}
+	return deduped, nil
 }
 // prepare headers: add numbers to headers
 func numberizeAndReduceHeaders(conf cfg.Config, data *Tabdata) {
-	numberedHeaders := []string{}
+	numberedHeaders := make([]string, len(data.headers))
 	maxwidth := 0 // start from scratch, so we only look at displayed column widths
+	// add numbers to headers if needed, get widest cell width
 	for idx, head := range data.headers {
 		var headlen int
-		if len(conf.Columns) > 0 {
-			// -c specified
-			if !contains(conf.UseColumns, idx+1) {
-				// ignore this one
-				continue
-			}
-		}
 		if conf.Numbering {
-			numhead := fmt.Sprintf("%s(%d)", head, idx+1)
-			headlen = len(numhead)
-			numberedHeaders = append(numberedHeaders, numhead)
+			newhead := fmt.Sprintf("%s(%d)", head, idx+1)
+			numberedHeaders[idx] = newhead
+			headlen = len(newhead)
 		} else {
-			numberedHeaders = append(numberedHeaders, head)
 			headlen = len(head)
 		}
@@ -222,7 +196,24 @@ func numberizeAndReduceHeaders(conf cfg.Config, data *Tabdata) {
 		}
 	}
-	data.headers = numberedHeaders
+	if conf.Numbering {
+		data.headers = numberedHeaders
+	}
+	if len(conf.UseColumns) > 0 {
+		// re-align headers based on user requested column list
+		headers := make([]string, len(conf.UseColumns))
+		for i, col := range conf.UseColumns {
+			for idx := range data.headers {
+				if col-1 == idx {
+					headers[i] = data.headers[col-1]
+				}
+			}
+		}
+		data.headers = headers
+	}
 	if data.maxwidthHeader != maxwidth && maxwidth > 0 {
 		data.maxwidthHeader = maxwidth
@@ -234,17 +225,17 @@ func reduceColumns(conf cfg.Config, data *Tabdata) {
 	if len(conf.Columns) > 0 {
 		reducedEntries := [][]string{}
-		var reducedEntry []string
 		for _, entry := range data.entries {
-			reducedEntry = nil
-			for i, value := range entry {
-				if !contains(conf.UseColumns, i+1) {
-					continue
-				}
-				reducedEntry = append(reducedEntry, value)
-			}
+			var reducedEntry []string
+			for _, col := range conf.UseColumns {
+				col--
+				for idx, value := range entry {
+					if idx == col {
+						reducedEntry = append(reducedEntry, value)
+					}
+				}
+			}
 			reducedEntries = append(reducedEntries, reducedEntry)

View File

@@ -19,6 +19,7 @@ package lib
 import (
 	"fmt"
+	"slices"
 	"testing"
 	"github.com/stretchr/testify/assert"
@@ -38,7 +39,7 @@ func TestContains(t *testing.T) {
 	for _, tt := range tests {
 		testname := fmt.Sprintf("contains-%d,%d,%t", tt.list, tt.search, tt.want)
 		t.Run(testname, func(t *testing.T) {
-			answer := contains(tt.list, tt.search)
+			answer := slices.Contains(tt.list, tt.search)
 			assert.EqualValues(t, tt.want, answer)
 		})
@@ -72,7 +73,8 @@ func TestPrepareColumns(t *testing.T) {
 	}
 	for _, testdata := range tests {
-		testname := fmt.Sprintf("PrepareColumns-%s-%t", testdata.input, testdata.wanterror)
+		testname := fmt.Sprintf("PrepareColumns-%s-%t",
+			testdata.input, testdata.wanterror)
 		t.Run(testname, func(t *testing.T) {
 			conf := cfg.Config{Columns: testdata.input}
 			err := PrepareColumns(&conf, &data)

View File

@@ -25,6 +25,7 @@ import (
 	"fmt"
 	"io"
 	"log"
+	"math"
 	"regexp"
 	"strings"
@@ -65,6 +66,43 @@ func Parse(conf cfg.Config, input io.Reader) (Tabdata, error) {
 	return data, err
 }
+/*
+ * Setup headers. The given headers might be usable headers or just the
+ * first row, which we use to determine how many headers to generate,
+ * if enabled.
+ */
+func SetHeaders(conf cfg.Config, headers []string) []string {
+	if !conf.AutoHeaders && len(conf.CustomHeaders) == 0 {
+		return headers
+	}
+	if conf.AutoHeaders {
+		heads := make([]string, len(headers))
+		for idx := range headers {
+			heads[idx] = fmt.Sprintf("%d", idx+1)
+		}
+		return heads
+	}
+	if len(conf.CustomHeaders) == len(headers) {
+		return conf.CustomHeaders
+	}
+	// use as many custom ones as we have, generate the remainder
+	heads := make([]string, len(headers))
+	for idx := range headers {
+		if idx < len(conf.CustomHeaders) {
+			heads[idx] = conf.CustomHeaders[idx]
+		} else {
+			heads[idx] = fmt.Sprintf("%d", idx+1)
+		}
+	}
+	return heads
+}
 /*
 Parse CSV input.
 */
@@ -86,7 +124,7 @@ func parseCSV(conf cfg.Config, input io.Reader) (Tabdata, error) {
 	}
 	if len(records) >= 1 {
-		data.headers = records[0]
+		data.headers = SetHeaders(conf, records[0])
 		data.columns = len(records)
 		for _, head := range data.headers {
@@ -97,9 +135,14 @@ func parseCSV(conf cfg.Config, input io.Reader) (Tabdata, error) {
 			}
 		}
-		if len(records) > 1 {
-			data.entries = records[1:]
+		if len(records) >= 1 {
+			if conf.AutoHeaders || len(conf.CustomHeaders) > 0 {
+				data.entries = records
+			} else {
+				data.entries = records[1:]
+			}
+		}
 		}
 	}
 	return data, nil
@@ -127,7 +170,9 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
 			data.columns = len(parts)
 			// process all header fields
-			for _, part := range parts {
+			firstrow := make([]string, len(parts))
+			for idx, part := range parts {
 				// register widest header field
 				headerlen := len(part)
 				if headerlen > data.maxwidthHeader {
@@ -135,11 +180,22 @@ func parseTabular(conf cfg.Config, input io.Reader) (Tabdata, error) {
 				}
 				// register fields data
-				data.headers = append(data.headers, strings.TrimSpace(part))
+				firstrow[idx] = strings.TrimSpace(part)
 				// done
 				hadFirst = true
 			}
+			data.headers = SetHeaders(conf, firstrow)
+			if conf.AutoHeaders || len(conf.CustomHeaders) > 0 {
+				// the first row did not serve as headers, so keep it as a data row
+				if matchPattern(conf, line) == conf.InvertMatch {
+					continue
+				}
+				data.entries = append(data.entries, firstrow)
+			}
 		} else {
 			// data processing
 			if matchPattern(conf, line) == conf.InvertMatch {
@@ -222,6 +278,32 @@ func parseRawJSON(conf cfg.Config, input io.Reader) (Tabdata, error) {
 				row[idxmap[currentfield]] = val
 			}
 		}
+	case float64:
+		var value string
+		// we set precision to 0 if the float is a whole number
+		if val == math.Trunc(val) {
+			value = fmt.Sprintf("%.f", val)
+		} else {
+			value = fmt.Sprintf("%f", val)
+		}
+		if !haveheaders {
+			row = append(row, value)
+		} else {
+			row[idxmap[currentfield]] = value
+		}
+	case nil:
+		// we ignore here whether the value should be an int or a string,
+		// because tablizer only works with strings anyway
+		if !haveheaders {
+			row = append(row, "")
+		} else {
+			row[idxmap[currentfield]] = ""
+		}
 	case json.Delim:
 		if val.String() == "}" {
 			data = append(data, row)
@@ -240,6 +322,8 @@ func parseRawJSON(conf cfg.Config, input io.Reader) (Tabdata, error) {
 			haveheaders = true
 		}
 		isjson = true
+	default:
+		fmt.Printf("unknown token: %v type: %T\n", t, t)
 	}
 	iskey = !iskey

View File

@@ -34,7 +34,7 @@ var input = []struct {
 }{
 	{
 		name:      "tabular-data",
-		separator: cfg.DefaultSeparator,
+		separator: cfg.SeparatorTemplates[":default:"],
 		text: `
 ONE TWO THREE
 asd igig cxxxncnc
@@ -148,7 +148,7 @@ asd igig
 19191 EDD 1 X`
 	readFd := strings.NewReader(strings.TrimSpace(table))
-	conf := cfg.Config{Separator: cfg.DefaultSeparator}
+	conf := cfg.Config{Separator: cfg.SeparatorTemplates[":default:"]}
 	gotdata, err := wrapValidateParser(conf, readFd)
 	assert.NoError(t, err)
@@ -180,6 +180,38 @@ func TestParserJSONInput(t *testing.T) {
 		expect: Tabdata{},
 	},
+	{
+		// contains nil, int and float values
+		name:      "niljson",
+		wanterror: false,
+		input: `[
+  {
+    "NAME": "postgres-operator-7f4c7c8485-ntlns",
+    "READY": "1/1",
+    "STATUS": "Running",
+    "RESTARTS": 0,
+    "AGE": null,
+    "X": 12,
+    "Y": 34.222
+  }
+]`,
+		expect: Tabdata{
+			columns: 7,
+			headers: []string{"NAME", "READY", "STATUS", "RESTARTS", "AGE", "X", "Y"},
+			entries: [][]string{
+				[]string{
+					"postgres-operator-7f4c7c8485-ntlns",
+					"1/1",
+					"Running",
+					"0",
+					"",
+					"12",
+					"34.222000",
+				},
+			},
+		},
+	},
 	{
 		// one field missing + different order
 		// but shall not fail
@@ -282,6 +314,108 @@ func TestParserJSONInput(t *testing.T) {
 	}
 }
+func TestParserSeparators(t *testing.T) {
+	list := []string{"alpha", "beta", "delta"}
+	tests := []struct {
+		input string
+		sep   string
+	}{
+		{
+			input: `🎲`,
+			sep:   ":nonprint:",
+		},
+		{
+			input: `|`,
+			sep:   ":pipe:",
+		},
+		{
+			input: `  `,
+			sep:   ":spaces:",
+		},
+		{
+			input: " \t ",
+			sep:   ":tab:",
+		},
+		{
+			input: `-`,
+			sep:   ":nonword:",
+		},
+		{
+			input: `//$`,
+			sep:   ":special:",
+		},
+	}
+	for _, testdata := range tests {
+		testname := fmt.Sprintf("parse-%s", testdata.sep)
+		t.Run(testname, func(t *testing.T) {
+			header := strings.Join(list, testdata.input)
+			row := header
+			content := header + "\n" + row
+			readFd := strings.NewReader(strings.TrimSpace(content))
+			conf := cfg.Config{Separator: testdata.sep}
+			conf.ApplyDefaults()
+			gotdata, err := wrapValidateParser(conf, readFd)
+			assert.NoError(t, err)
+			assert.EqualValues(t, [][]string{list}, gotdata.entries)
+		})
+	}
+}
+func TestParserSetHeaders(t *testing.T) {
+	row := []string{"c", "b", "c", "d", "e"}
+	tests := []struct {
+		name   string
+		custom []string
+		expect []string
+		auto   bool
+	}{
+		{
+			name:   "default",
+			expect: row,
+		},
+		{
+			name:   "auto",
+			expect: strings.Split("1 2 3 4 5", " "),
+			auto:   true,
+		},
+		{
+			name:   "custom-complete",
+			custom: strings.Split("A B C D E", " "),
+			expect: strings.Split("A B C D E", " "),
+		},
+		{
+			name:   "custom-too-short",
+			custom: strings.Split("A B", " "),
+			expect: strings.Split("A B 3 4 5", " "),
+		},
+		{
+			name:   "custom-too-long",
+			custom: strings.Split("A B C D E F G", " "),
+			expect: strings.Split("A B C D E", " "),
+		},
+	}
+	for _, testdata := range tests {
+		testname := fmt.Sprintf("parse-%s", testdata.name)
+		t.Run(testname, func(t *testing.T) {
+			conf := cfg.Config{
+				AutoHeaders:   testdata.auto,
+				CustomHeaders: testdata.custom,
+			}
+			headers := SetHeaders(conf, row)
+			assert.NotNil(t, headers)
+			assert.EqualValues(t, testdata.expect, headers)
+		})
+	}
+}
 func wrapValidateParser(conf cfg.Config, input io.Reader) (Tabdata, error) {
 	data, err := Parse(conf, input)

View File

@@ -19,6 +19,7 @@ package lib
 import (
 	"encoding/csv"
+	"encoding/json"
 	"fmt"
 	"io"
 	"log"
@@ -61,6 +62,8 @@ func printData(writer io.Writer, conf cfg.Config, data *Tabdata) {
printShellData(writer, data)
case cfg.Yaml:
printYamlData(writer, data)
case cfg.Json:
printJsonData(writer, data)
case cfg.CSV:
printCSVData(writer, conf, data)
default:
@@ -291,6 +294,35 @@ func printShellData(writer io.Writer, data *Tabdata) {
output(writer, out)
}
func printJsonData(writer io.Writer, data *Tabdata) {
objlist := make([]map[string]any, len(data.entries))
if len(data.entries) > 0 {
for i, entry := range data.entries {
obj := make(map[string]any, len(entry))
for idx, value := range entry {
num, err := strconv.Atoi(value)
if err == nil {
obj[data.headers[idx]] = num
} else {
obj[data.headers[idx]] = value
}
}
objlist[i] = obj
}
}
jsonstr, err := json.MarshalIndent(&objlist, "", " ")
if err != nil {
log.Fatal(err)
}
output(writer, string(jsonstr))
}
func printYamlData(writer io.Writer, data *Tabdata) {
type Data struct {
Entries []map[string]interface{} `yaml:"entries"`


@@ -125,6 +125,31 @@ ceta,33d12h,9,06/Jan/2008 15:04:05 -0700`,
NAME="beta" DURATION="1d10h5m1s" COUNT="33" WHEN="3/1/2014"
NAME="alpha" DURATION="4h35m" COUNT="170" WHEN="2013-Feb-03"
NAME="ceta" DURATION="33d12h" COUNT="9" WHEN="06/Jan/2008 15:04:05 -0700"`,
},
{
name: "json",
mode: cfg.Json,
numberize: false,
expect: `[
{
"COUNT": 33,
"DURATION": "1d10h5m1s",
"NAME": "beta",
"WHEN": "3/1/2014"
},
{
"COUNT": 170,
"DURATION": "4h35m",
"NAME": "alpha",
"WHEN": "2013-Feb-03"
},
{
"COUNT": 9,
"DURATION": "33d12h",
"NAME": "ceta",
"WHEN": "06/Jan/2008 15:04:05 -0700"
}
]`,
},
{
name: "yaml",
@@ -292,6 +317,7 @@ func TestPrinter(t *testing.T) {
conf.UseSortByColumn = []int{testdata.column}
}
conf.Separator = cfg.SeparatorTemplates[":default:"]
conf.ApplyDefaults()
// the test checks the len!


@@ -133,7 +133,7 @@
.\" ========================================================================
.\"
.IX Title "TABLIZER 1"
.TH TABLIZER 1 "2025-10-01" "1" "User Commands"
.TH TABLIZER 1 "2025-10-13" "1" "User Commands"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
@@ -152,7 +152,7 @@ tablizer \- Manipulate tabular output of other programs
\& \-n, \-\-numbering Enable header numbering
\& \-N, \-\-no\-color Disable pattern highlighting
\& \-H, \-\-no\-headers Disable headers display
\& \-s, \-\-separator <string> Custom field separator
\& \-s, \-\-separator <string> Custom field separator (may be a char, string or :class:)
\& \-k, \-\-sort\-by <int|name> Sort by column (default: 1)
\& \-z, \-\-fuzzy Use fuzzy search [experimental]
\& \-F, \-\-filter <field[!]=reg> Filter given field with regex, can be used multiple times
@@ -160,6 +160,8 @@ tablizer \- Manipulate tabular output of other programs
\& \-R, \-\-regex\-transposer </from/to/> Apply /search/replace/ regexp to fields given in \-T
\& \-j, \-\-json Read JSON input (must be array of hashes)
\& \-I, \-\-interactive Interactively filter and select rows
\& \-g, \-\-auto\-headers Generate headers if there are none present in input
\& \-x, \-\-custom\-headers a,b,... Use custom headers, separated by comma
\&
\& Output Flags (mutually exclusive):
\& \-X, \-\-extended Enable extended output
@@ -167,12 +169,13 @@ tablizer \- Manipulate tabular output of other programs
\& \-O, \-\-orgtbl Enable org\-mode table output
\& \-S, \-\-shell Enable shell evaluable output
\& \-Y, \-\-yaml Enable yaml output
\& \-J, \-\-jsonout Enable JSON output
\& \-C, \-\-csv Enable CSV output
\& \-A, \-\-ascii Default output mode, ascii tabular
\& \-L, \-\-hightlight\-lines Use alternating background colors for tables
\& \-o, \-\-ofs <char> Output field separator, used by \-A and \-C.
\& \-y, \-\-yank\-columns Yank specified columns (separated by ,) to clipboard,
\& space separated
\& \-\-ofs <char> Output field separator, used by \-A and \-C.
\&
\& Sort Mode Flags (mutually exclusive):
\& \-a, \-\-sort\-age sort according to age (duration) string
@@ -293,6 +296,62 @@ Sorts timestamps.
.PP
Finally the \fB\-d\fR option enables debugging output which is mostly
useful for the developer.
.SS "\s-1SEPARATOR\s0"
.IX Subsection "SEPARATOR"
The option \fB\-s\fR can be a single character, in which case the \s-1CSV\s0
parser will be invoked. You can also specify a string as
separator. The string will be interpreted as literal string unless it
is a valid go regular expression. For example:
.PP
.Vb 1
\& \-s \*(Aq\et{2,}\*(Aq
.Ve
.PP
is being used as a regexp and will match two or more consecutive tabs.
.PP
.Vb 1
\& \-s \*(Aqfoo\*(Aq
.Ve
.PP
on the other hand is no regular expression and will be used literally.
.PP
To make life easier, there are a couple of predefined regular
expressions, which you can specify as classes:
.Sp
.RS 4
* :tab:
.Sp
Matches a tab and eats spaces around it.
.Sp
* :spaces:
.Sp
Matches 2 or more spaces.
.Sp
* :pipe:
.Sp
Matches a pipe character and eats spaces around it.
.Sp
* :default:
.Sp
Matches 2 or more spaces or tab. This is the default separator if none
is specified.
.Sp
* :nonword:
.Sp
Matches a non-word character.
.Sp
* :nondigit:
.Sp
Matches a non-digit character.
.Sp
* :special:
.Sp
Matches one or more special chars like brackets, dollar sign, slashes etc.
.Sp
* :nonprint:
.Sp
Matches one or more non-printable characters.
.RE
.SS "\s-1PATTERNS AND FILTERING\s0"
.IX Subsection "PATTERNS AND FILTERING"
You can reduce the rows being displayed by using one or more regular


@@ -13,7 +13,7 @@ tablizer - Manipulate tabular output of other programs
-n, --numbering Enable header numbering
-N, --no-color Disable pattern highlighting
-H, --no-headers Disable headers display
-s, --separator <string> Custom field separator
-s, --separator <string> Custom field separator (may be a char, string or :class:)
-k, --sort-by <int|name> Sort by column (default: 1)
-z, --fuzzy Use fuzzy search [experimental]
-F, --filter <field[!]=reg> Filter given field with regex, can be used multiple times
@@ -21,6 +21,8 @@ tablizer - Manipulate tabular output of other programs
-R, --regex-transposer </from/to/> Apply /search/replace/ regexp to fields given in -T
-j, --json Read JSON input (must be array of hashes)
-I, --interactive Interactively filter and select rows
-g, --auto-headers Generate headers if there are none present in input
-x, --custom-headers a,b,... Use custom headers, separated by comma
Output Flags (mutually exclusive):
-X, --extended Enable extended output
@@ -28,12 +30,13 @@ tablizer - Manipulate tabular output of other programs
-O, --orgtbl Enable org-mode table output
-S, --shell Enable shell evaluable output
-Y, --yaml Enable yaml output
-J, --jsonout Enable JSON output
-C, --csv Enable CSV output
-A, --ascii Default output mode, ascii tabular
-L, --hightlight-lines Use alternating background colors for tables
-o, --ofs <char> Output field separator, used by -A and -C.
-y, --yank-columns Yank specified columns (separated by ,) to clipboard,
space separated
--ofs <char> Output field separator, used by -A and -C.
Sort Mode Flags (mutually exclusive):
-a, --sort-age sort according to age (duration) string
@@ -153,6 +156,62 @@ Sorts timestamps.
Finally the B<-d> option enables debugging output which is mostly
useful for the developer.
=head2 SEPARATOR
The option B<-s> can be a single character, in which case the CSV
parser will be invoked. You can also specify a string as
separator. The string will be interpreted as literal string unless it
is a valid go regular expression. For example:
-s '\t{2,}'
is being used as a regexp and will match two or more consecutive tabs.
-s 'foo'
on the other hand is no regular expression and will be used literally.
To make life easier, there are a couple of predefined regular
expressions, which you can specify as classes:
=over
* :tab:
Matches a tab and eats spaces around it.
* :spaces:
Matches 2 or more spaces.
* :pipe:
Matches a pipe character and eats spaces around it.
* :default:
Matches 2 or more spaces or tab. This is the default separator if none
is specified.
* :nonword:
Matches a non-word character.
* :nondigit:
Matches a non-digit character.
* :special:
Matches one or more special chars like brackets, dollar sign, slashes etc.
* :nonprint:
Matches one or more non-printable characters.
=back
=head2 PATTERNS AND FILTERING

You can reduce the rows being displayed by using one or more regular