2021/11/29

rebar3 configuration

Configuration

Base Config

  • OS environment variables
REBAR_PROFILE="term"         # force a base profile
HEX_CDN="https://..."        # change the Hex endpoint for a private one
QUIET=1                      # only display errors
DEBUG=1                      # show debug output
                             # "QUIET=1 DEBUG=1" displays both errors and warnings
REBAR_COLOR="low"            # reduces amount of color in output if supported
REBAR_CACHE_DIR              # override where Rebar3 stores cache data
REBAR_GLOBAL_CONFIG_DIR      # override where Rebar3 stores config data
REBAR_CONFIG="rebar3.config" # changes the name of rebar.config files
REBAR_GIT_CLONE_OPTIONS=""   # pass additional options to all git clone operations
                             # for example, a cache across projects can be set up
                             # with "--reference ~/.cache/repos.reference"
http_proxy                   # standard proxy ENV variable is respected
https_proxy                  # standard proxy ENV variable is respected
TERM                         # standard terminal definition value. TERM=dumb disables color
  • Alias

An alias runs multiple commands at once

{alias, [{check, [eunit, {ct, "--sys_config=config/app.config"}]}]}.
  • Artifacts

A list of files that must exist after a successful compile

This lets rebar3 verify that non-Erlang artifacts (for example, shared libraries built from C code) were built successfully

{artifacts, [file:filename_all()]}.

template keys

template key description default value
profile_dir  base output dir with the profile string appended  _build/default/
base_dir     base output dir                                   _build
out_dir      application's output dir                          _build/default/lib/
{escript_name, rebar3}.

{provider_hooks, [{post, [{compile, escriptize}]}]}.

%% the path is relative to the top level of an umbrella project
{artifacts, ["bin/rebar3"]}.

%% {{profile_dir}}
{artifacts, ["{{profile_dir}}/bin/rebar3"]}.
  • Compilation

Compiler options; see the documentation for the available options

% version string might look like "22.0-x86_64-apple-darwin18.5.0-64"
{erl_opts, [{platform_define,
               "(linux|solaris|freebsd|darwin)",
               'HAVE_SENDFILE'},
              {platform_define, "(linux|freebsd)",
                'BACKLOG', 128},
              {platform_define, "^18",
                'OTP_GREATER_THAN_18'},
              {platform_define, "^R13",
                'old_inets'}]
}.

erl_first_files compiles the listed modules before any other files

{erl_first_files, ["src/mymodule.erl"]}.

There are also some general options

{validate_app_modules, true}. % Make sure modules in .app match those found in code
{app_vars_file, undefined | Path}. % file containing elements to put in all generated app files
%% Paths the compiler outputs when reporting warnings or errors
%% relative (default), build (all paths are in _build, default prior
%% to 3.2.0, and absolute are valid options
{compiler_source_format, relative}.

Other related compiler options

rebar3 compiler options

%% Disable or enable recursive compiling globally
{erlc_compiler,[{recursive,boolean()}]}.

%%%%%%%%%%
%% Disable or enable recursive compiling on src_dirs
{src_dirs, [{"src", [{recursive, true|false}]}]}
%% Or alternatively:
{erl_opts, [{src_dirs,[{string(),[{recursive,boolean()}]}]}]}.

%%%%%%%%%%%
%% Disable or enable recursive compiling on for extra_src_dirs:
{extra_src_dirs, [{"test", [{recursive, true | false}]}]}
%% or
{erl_opts, [{extra_src_dirs,[{string(),[{recursive,boolean()}]}]}]}.

example: Disable recursive compiling globally, but enable it for a few dirs

{erlc_compiler,[{recursive,false}]}.
{erl_opts,[{src_dirs,["src",{"other_src",[{recursive,true}]}]}]}.

example: Disable recursive compiling on test and other dirs

{erl_opts, [
            {extra_src_dirs,[
                    {"test", [{recursive,false}]},
                    {"other_dir", [{recursive,false}]}]}
            ]
}.
  • Common Test
{ct_first_files, [...]}. % {erl_first_files, ...} but for CT
{ct_opts, [...]}. % same as options for ct:run_test(...)
{ct_readable, true | false}. % disable Rebar3 modifying CT output in the shell

ref: ct_opts

  • Cover

{cover_enabled, true} enables code coverage in tests

{cover_opts, [verbose]} prints the coverage report to the terminal instead of only writing it to files

{cover_excl_mods, [Modules]} blacklists some modules

{cover_excl_apps, [AppNames]} blacklists some apps

  • Dialyzer
-type warning() :: no_return | no_unused | no_improper_lists | no_fun_app
                 | no_match | no_opaque | no_fail_call | no_contracts
                 | no_behaviours | no_undefined_callbacks | unmatched_returns
                 | error_handling | race_conditions | overspecs | underspecs
                 | specdiffs.

{dialyzer, [{warnings, [warning()]},
            {get_warnings, boolean()},
            {plt_apps, top_level_deps | all_deps}, % default: top_level_deps
            {plt_extra_apps, [atom()]},
            {plt_location, local | file:filename()},
            {plt_prefix, string()},
            {base_plt_apps, [atom(), ...]},
            {base_plt_location, global | file:filename()},
            {base_plt_prefix, string()}]}.
  • Distribution
{dist_node, [
    {setcookie, 'atom-cookie'},
    {name | sname, 'nodename'}
]}.
  • Directories

These are the default values of the available directory variables

%% directory for artifacts produced by Rebar3
{base_dir, "_build"}.
%% directory in '<base_dir>/<profile>/' where deps go
{deps_dir, "lib"}.
%% where Rebar3 operates from; defaults to the current working directory
{root_dir, "."}.
%% where checkout dependencies are to be located
{checkouts_dir, "_checkouts"}.
%% directory in '<base_dir>/<profile>/' where plugins go
{plugins_dir, "plugins"}.
%% directories where OTP applications for the project can be located
{project_app_dirs, ["apps/*", "lib/*", "."]}.
%% Directories where source files for an OTP application can be found
{src_dirs, ["src"]}.
%% Paths to miscellaneous Erlang files to compile for an app
%% without including them in its modules list
{extra_src_dirs, []}.
%% Paths the compiler outputs when reporting warnings or errors
%% relative (default), build (all paths are in _build, default prior
%% to 3.2.0, and absolute are valid options
{compiler_source_format, relative}.

rebar3 also stores data in the directories below; the location can be changed with {global_rebar_dir, "./some/path"}.

%% configuration data
~/.config/rebar3

%% cache
~/.cache/rebar3
  • EDoc

See the documentation

{edoc_opts, [...]}.
  • Escript

See the escriptize command

{escript_main_app, AppName}. % specify which app is the escript app
{escript_name, "FinalName"}. % name of final generated escript
{escript_incl_apps, [App]}. % apps (other than the main one and its deps) to be included
{escript_emu_args, "%%! -escript main Module\n"}. % emulator args
{escript_shebang, "#!/usr/bin/env escript\n"}. % executable line
{escript_comment, "%%\n"}. % comment at top of escript file
  • EUnit

eunit options

{eunit_first_files, [...]}. % {erl_first_files, ...} but for EUnit
{eunit_opts, [...]}. % same as options for eunit:test(Tests, ...)
{eunit_tests, [...]}. % same as Tests argument in eunit:test(Tests, ...)
  • Hex Repos and Indexes

Multiple Hex repositories are supported since rebar3 version 3.7.0

To use a private repository, install the rebar3_hex plugin

% authenticate
rebar3 hex auth

% this writes the keys to ~/.config/rebar3/hex.config
{hex, [
   {repos, [
      %% A self-hosted repository that allows publishing may look like this
      #{name => <<"my_hexpm">>,
        api_url => <<"https://localhost:8080/api">>,
        repo_url => <<"https://localhost:8080/repo">>,
        repo_public_key => <<"-----BEGIN PUBLIC KEY-----
        ...
        -----END PUBLIC KEY-----">>
      },
      %% A mirror looks like a standard repo definition, but uses the same
      %% public key as hex itself. Note that the API URL is not required
      %% if all you do is fetch information
      #{name => <<"jsDelivr">>,
        repo_url => <<"https://cdn.jsdelivr.net/hex">>,
        ...
       },
       %% If you are a paying hex.pm user with a private organisation, your
       %% private repository can be declared as:
       #{name => <<"hexpm:private_repo">>}
       %% and authenticate with the hex plugin, rebar3 hex user auth
   ]}
]}.

%% The default Hex config is always implicitly present.
%% You could however replace it wholesale by using a 'replace' value,
%% which in this case would redirect to a local index with no signature
%% validation being done. Any repository can be replaced.
{hex, [
   {repos, replace, [
      #{name => <<"hexpm">>,
        api_url => <<"https://localhost:8080/api">>,
        repo_url => <<"https://localhost:8080/repo">>,
        ...
       }
   ]}
]}.
  • Minimum OTP Version
{minimum_otp_vsn, "17.4"}.
  • Overrides

Overrides modify the configuration of a dependency, which helps quickly work around configuration problems in a package

There are three kinds: add, override on one app, and override on all apps

{overrides, [{add, app_name(), [{atom(), any()}]},
             {del, app_name(), [{atom(), any()}]},
             {override, app_name(), [{atom(), any()}]},
             {add, [{atom(), any()}]},
             {del, [{atom(), any()}]},
             {override, [{atom(), any()}]}]}.

They are applied in this order: overrides on all apps, app-specific overrides, then per-app additions

example: force debug_info during compilation, and force no_debug_info in the production profile

{overrides, [{override, [{erl_opts, [debug_info]}]}]}.

{profiles, [{prod, [{overrides, [{override, [{erl_opts,[no_debug_info]}]}]},
                    {relx, [{dev_mode, false},
                            {include_erts, true}]}]}
           ]}.

example: remove the warnings_as_errors compile option for all apps

{overrides, [
    %% For all apps:
    {del, [{erl_opts, [warnings_as_errors]}]},
    %% Or for just one app:
    {del, one_app, [{erl_opts, [warnings_as_errors]}]}
]}.
  • Shell Hooks
-type hook() :: {atom(), string()}
              | {string(), atom(), string()}.

{pre_hooks, [hook()]}.
{post_hooks, [hook()]}.

example: building merl with rebar3

{pre_hooks, [{"(linux|darwin|solaris)", compile, "make -C \"$REBAR_DEPS_DIR/merl\" all -W test"},
             {"(freebsd|netbsd|openbsd)", compile, "gmake -C \"$REBAR_DEPS_DIR/merl\" all"},
             {"win32", compile, "make -C \"%REBAR_DEPS_DIR%/merl\" all -W test"},
             {eunit, "erlc -I include/erlydtl_preparser.hrl -o test test/erlydtl_extension_testparser.yrl"},
             {"(linux|darwin|solaris)", eunit, "make -C \"$REBAR_DEPS_DIR/merl\" test"},
             {"(freebsd|netbsd|openbsd)", eunit, "gmake -C \"$REBAR_DEPS_DIR/merl\" test"},
             {"win32", eunit, "make -C \"%REBAR_DEPS_DIR%/merl\" test"}
            ]}.
  • Provider Hooks

Run clean before compile

{provider_hooks, [{pre,  [{compile, clean}]},
                  {post, [{compile, {erlydtl, compile}}]}]}.

Hookable Points in Providers

Hook         before and after
clean        each application and dependency, and/or before and after all top-level applications are compiled*
ct           the entire run
compile      each application and dependency, and/or before and after all top-level applications are compiled*
edoc         the entire run
escriptize   the entire run
eunit        the entire run
release      the entire run
tar          the entire run
erlc_compile compilation of the beam files for an app
app_compile  building of the .app file from .app.src for an app
Hooks can be defined in, and are applied in the order of:

  1. the rebar.config file at the application root

  2. each top-level app’s (in apps/ or libs/) rebar.config

  3. each dependency’s rebar.config

  • Relx

See the Release chapter

  • Plugins

See the Plugins chapter

  • Shell

If there is a relx entry, the rebar3 shell REPL automatically boots the applications

The apps to boot can be specified with {shell, [{apps, [App]}]}.

Option Value Description
apps [app1, app2, …] Applications to be booted by the shell; overrides the relx entry values
config "path/to/a/file.config" Loads a .config file
script_file "path/to/a/file.escript" Evaluates an escript before booting the apps
app_reload_blacklist [app1, app2, …] Apps not to be reloaded when calling r3:compile();
useful for apps such as ranch, which crashes when loaded twice
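A sketch combining these options in rebar.config (the app name myapp and the paths are hypothetical):

```erlang
%% Hypothetical example combining the shell options above
{shell, [{apps, [myapp]},
         {config, "config/sys.config"},
         {app_reload_blacklist, [ranch]}]}.
```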
  • XRef
{xref_warnings,false}.
{xref_extra_paths,[]}.
{xref_checks,[undefined_function_calls,undefined_functions,locals_not_used,
              exports_not_used,deprecated_function_calls,
              deprecated_functions]}.
{xref_queries,[{"(xc - uc) || (xu - x - b - (\"mod\":\".*foo\"/\"4\"))", []}]}.
{xref_ignores, [Module, {Module, Fun}, {Module, Fun, Arity}]}.

Warnings can be suppressed with -ignore_xref(_).

-ignore_xref({other, call, 0}).   % ignore warnings for calls to "external" module function
-ignore_xref([{other, call, 0}]). % equivalent to the previous declaration

-ignore_xref({function,0}).       % ignore warnings for locally exported function not used in the analysis scope
-ignore_xref([{function,0}]).     % equivalent to the previous declaration
-ignore_xref(function/0).         % equivalent to the previous declaration
-ignore_xref([function/0]).       % equivalent to the previous declaration

-ignore_xref(module).             % ignore warnings related to a given module
-ignore_xref([module]).           % equivalent to previous declaration

Dependencies

  • Declaring dependencies

Dependencies are declared in rebar.config and can be inspected with rebar3 tree

rebar3 supports two kinds of dependencies

  1. source (git, Mercurial)

  2. package dependencies

    packages provided by hex.pm, cached under ~/.cache/rebar3/

{deps,[
  %% Packages
  rebar,
  {rebar,"1.0.0"},
  {rebar, {pkg, rebar_fork}}, % rebar app under a different pkg name
  {rebar, "1.0.0", {pkg, rebar_fork}},
  %% Source Dependencies
  {rebar, {git, "git://github.com/erlang/rebar3.git"}},
  {rebar, {git, "http://github.com/erlang/rebar3.git"}},
  {rebar, {git, "https://github.com/erlang/rebar3.git"}},
  {rebar, {git, "git@github.com:erlang/rebar3.git"}},
  {rebar, {hg, "https://othersite.com/erlang/rebar3"}},
  {rebar, {git, "git://github.com/erlang/rebar3.git", {ref, "aef728"}}},
  {rebar, {git, "git://github.com/erlang/rebar3.git", {branch, "master"}}},
  {rebar, {git, "git://github.com/erlang/rebar3.git", {tag, "3.0.0"}}},
  %% Source dependencies (git only) in subdirectories, from version 3.14 onwards
  {rebar, {git_subdir, "git://github.com/erlang/rebar3.git", {branch, "main"}, "subdir"}},
  {rebar, {git_subdir, "git://github.com/erlang/rebar3.git", {tag, "3.14"}, "sub/dir"}},
  {rebar, {git_subdir, "git://github.com/erlang/rebar3.git", {ref, "aeaefd"}, "dir"}}
]}.
  • Runtime Dependencies

Runtime dependencies must be added to the app's .app.src file

{application, <APPNAME>,
 [{description, ""},
  {vsn, "<APPVSN>"},
  {registered, []},
  {modules, []},
  {applications, [kernel
                 ,stdlib
                 ,cowboy
                 ]},
  {mod, {<APPNAME>_app, []}},
  {env, []}
 ]}.
  • Dependency Version Handling

Hex dependencies can use a semver-like syntax

{deps,[
  rebar,                   % fetches latest known version, ignoring pre-releases
  {rebar, "~> 2.0.0"},     % >= 2.0.0 and < 2.1.0
  {rebar, "~> 2.1.2"},     % >= 2.1.2 and < 2.2.0
  {rebar, "~> 2.1.3-dev"}, % >= 2.1.3-dev and < 2.2.0
  {rebar, "~> 2.0"},       % >= 2.0.0 and < 3.0.0
  {rebar, "~> 2.1"}        % >= 2.1.0 and < 3.0.0
]}.
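For illustration only (this is not Hex's implementation), the "~>" requirements above can be modeled by bumping the last given version segment to obtain the exclusive upper bound:

```python
def approx_bounds(requirement):
    """Model of the "~>" operator: "~> 2.1.2" means >= 2.1.2 and < 2.2.0.
    Prerelease tags (e.g. "-dev") are ignored in this sketch."""
    version = requirement.split()[-1].split("-")[0]
    parts = [int(p) for p in version.split(".")]
    lower = tuple(parts + [0] * (3 - len(parts)))
    upper = parts[:-1]
    upper[-1] += 1                      # bump the second-to-last given segment
    upper = tuple(upper + [0] * (3 - len(upper)))
    return lower, upper

print(approx_bounds("~> 2.1.2"))  # ((2, 1, 2), (2, 2, 0))
print(approx_bounds("~> 2.0"))    # ((2, 0, 0), (3, 0, 0))
```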

Update the package index with

rebar3 update

To use a different hex.pm CDN, add this setting to rebar.config

{rebar_packages_cdn, "https://s3-eu-west-1.amazonaws.com/s3-eu.hex.pm"}.
  • Checkout Dependencies

To work on a dependency without repeatedly publishing new versions, use the _checkouts directory: simply symlink the dependency into _checkouts

_checkouts
└── depA
    └── src
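Assuming the dependency's source sits next to the project (the names here are hypothetical), the symlink can be created like this:

```shell
# Hypothetical layout: the project and the dependency are siblings
mkdir -p depA/src myproj/_checkouts
# Symlink the dependency into _checkouts; rebar3 will use this copy
# instead of the locked version
ln -sf ../../depA myproj/_checkouts/depA
ls myproj/_checkouts/depA
```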
  • Fetching Order
  A
 / \
B   C

A -> B -> C

   A
 /   \
B    C1
|
C2

A -> B -> C1 -> skip C2 (same app as C1)

   A
 /   \
B     C
|     |
D1    D2

A -> B -> C -> D1 -> skip D2

  A D2
 /   \
B     C
|     |
D1    D2

A -> B -> C -> D2 (D2 is promoted to the top level)
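This breadth-first, first-wins resolution can be sketched as follows (the tuple shapes are made up for illustration):

```python
from collections import deque

def fetch_order(top_deps):
    """Breadth-first resolution: each dep is (name, version, children).
    The first version of an app encountered (closest to the root) wins;
    deeper occurrences are skipped."""
    picked = {}
    queue = deque(top_deps)
    while queue:
        name, version, children = queue.popleft()
        if name in picked:
            continue  # already satisfied by a dep closer to the top level
        picked[name] = version
        queue.extend(children)
    return picked

# A depends on B and C1; B depends on C2, which is skipped because C1
# was seen first at a shallower level
deps = [("B", "1.0", [("C", "2.0", [])]), ("C", "1.0", [])]
print(fetch_order(deps))  # {'B': '1.0', 'C': '1.0'}
```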

  • Lock Files
rebar.lock

Generated by rebar3; it must be checked into source control. It records the exact version of each dependency, protecting against overly loose version constraints in rebar.config

  • Upgrading Dependencies
A  B
|  |
C  D

Dependencies can be upgraded individually or together

rebar3 upgrade A
rebar3 upgrade B
rebar3 upgrade A,B

% upgrade all of them
rebar3 upgrade
% flush the lock file
rebar3 unlock

% inspect the dependency tree
rebar3 tree

Profiles

A profile is a set of configuration settings applied in a specific context

There are three ways to select one

  1. rebar3 as <profile> <command>, or rebar3 as <profile1>,<profile2> <command>
  2. commands with a built-in profile, e.g. eunit and ct both use the test profile
  3. the REBAR_PROFILE environment variable

Profiles are configured in rebar.config as

{profiles, [{ProfileName1, [Options, ...]},
            {ProfileName2, [Options, ...]}]}.

example: adding the meck dependency in the test profile

{profiles, [{test, [{deps, [meck]}]}]}.

example:

{deps, [...]}.
{relx, [
    ...
]}.

{profiles, [
    {prod, [
        {erl_opts, [no_debug_info, warnings_as_errors]},
        {relx, [{dev_mode, false}]}
    ]},
    {native, [
        {erl_opts, [{native, {hipe, o3}}]}
    ]},
    {test, [
        {deps, [meck]},
        {erl_opts, [debug_info]}
    ]}
]}.

This defines four profiles

  1. default
  2. prod, typically used to generate full releases without symlinks
  3. native, for HiPE compilation
  4. test, which loads mocking libraries

When running commands

  1. rebar3 ct

    runs the Common Test suites, using the default and test profiles

  2. rebar3 as test ct

    same as above

  3. rebar3 as native ct

    native mode; profiles: default -> native -> test

  4. rebar3 as test,native ct

    profiles: default -> test -> native

  5. rebar3 release

    profile: default

  6. rebar3 as prod release

    builds the release without development mode

  7. rebar3 as prod,native release

    builds the release, compiling modules to native code

  8. rebar3 as prod release with REBAR_PROFILE=native

    builds the release; profiles: native -> prod

Profile application order

  1. default
  2. REBAR_PROFILE
  3. profiles given with as
  4. the command's own profile

Only the default profile's dependencies, i.e. the ones in rebar.config, are written to rebar.lock

  • Option-Merging Algorithm

rebar3 can merge options written in any of these three shapes

  1. native

  2. {native, {hipe, o3}}

  3. {native, still, supported}

example:

{profiles, [
    {prod, [
        {erl_opts, [no_debug_info, warnings_as_errors]}
    ]},
    {native, [
        {erl_opts, [{native, {hipe, o3}}, {d, 'NATIVE'}]}
    ]},
    {test, [
        {erl_opts, [debug_info]}
    ]}
]}.

Different profile orders produce different options for erl_opts

  1. rebar3 as prod,native,test <command>: [debug_info, {d, 'NATIVE'}, {native, {hipe, o3}}, no_debug_info, warnings_as_errors]
  2. rebar3 as test,prod,native <command>: [{d, 'NATIVE'}, {native, {hipe, o3}}, no_debug_info, warnings_as_errors, debug_info]
  3. rebar3 as native,test,prod <command>: [no_debug_info, warnings_as_errors, debug_info, {d, 'NATIVE'}, {native, {hipe, o3}}]
  4. rebar3 as native,prod,test <command>: [debug_info, no_debug_info, warnings_as_errors, {d, 'NATIVE'}, {native, {hipe, o3}}]

Note that the last profile listed is applied first, so its options end up at the front of the list

Plugins

A plugin can be installed per project (listed under plugins in the project's rebar.config) or globally (in ~/.config/rebar3/rebar.config).

  • Including Plugins

Plugins needed to build the application are built into _build/<profile>/plugins/.

They can be used in provider_hooks

{plugins, [{rebar_erl_vsn, "~> 0.1"}]}.
{provider_hooks, [{pre, [{compile, {default, erl_vsn}}]}]}.
  • Project Plugins and Overriding Commands

project_plugins defines plugins that are available when the project is built directly with rebar3

example: using cuttlefish when building a release

{project_plugins, [rebar3_cuttlefish]}.

Running rebar3 release or rebar3 tar will then invoke the rebar3_cuttlefish providers

example: using a plugin during development only

Previously this required putting the plugin in a dev profile; now it can be declared as a project plugin

{project_plugins, [rebar3_gpb_plugin]}.

Then it can be invoked directly

rebar3 protobuf
  • Upgrading Plugins

Plugins are only upgraded when explicitly requested

  1. rebar3 plugins upgrade <plugin_name>

    upgrades a project-local plugin

  2. rebar3 as global plugins upgrade <plugin_name>

    upgrades a global plugin

If a plugin installed from a Hex package resolves to a wrong version, run rebar3 update to refresh the Hex index

Because plugins are not pinned by the lock file, it is recommended to specify their versions

  • Recommended Plugins

Auto Compile and Load

{plugins, [rebar3_auto]}.

auto is recommended as a global plugin, i.e. placed in ~/.config/rebar3/rebar.config

rebar3 auto starts a shell (same as rebar3 shell), watches the src dirs for file changes, and recompiles automatically whenever something is updated.

Auto-Test

{plugins, [{rebar3_autotest, "0.1.1"}]}.

Recommended as a global plugin

rebar3 as test autotest starts eunit and watches for changes in sources, headers, and test files

Hex Package Management

{plugins, [rebar3_hex]}.

Recommended as a global plugin

See Hex Package Management

Port Compiler

A rebar interface for building C and C++ code

Add the following to the project's rebar.config

{plugins, [pc]}.

{provider_hooks,
 [
  {pre,
   [
    {compile, {pc, compile}},
    {clean, {pc, clean}}
   ]
  }
 ]
}.

Supported configuration variables

%% Supported configuration variables:
%%
%% * port_specs - Erlang list of tuples of the forms
%%                {ArchRegex, TargetFile, Sources, Options}
%%                {ArchRegex, TargetFile, Sources}
%%                {TargetFile, Sources}
%%
%% * port_env - Erlang list of key/value pairs which will control
%%              the environment when running the compiler and linker.
%%              Variables set in the surrounding system shell are taken
%%              into consideration when expanding port_env.
%%
%%              By default, the following variables are defined:
%%              CC       - C compiler
%%              CXX      - C++ compiler
%%              CFLAGS   - C compiler
%%              CXXFLAGS - C++ compiler
%%              LDFLAGS  - Link flags
%%              ERL_CFLAGS  - default -I paths for erts and ei
%%              ERL_LDFLAGS - default -L and -lerl_interface -lei
%%              DRV_CFLAGS  - flags that will be used for compiling
%%              DRV_LDFLAGS - flags that will be used for linking
%%              EXE_CFLAGS  - flags that will be used for compiling
%%              EXE_LDFLAGS - flags that will be used for linking
%%              ERL_EI_LIBDIR - ei library directory
%%              DRV_CXX_TEMPLATE      - C++ command template
%%              DRV_CC_TEMPLATE       - C command template
%%              DRV_LINK_TEMPLATE     - C Linker command template
%%              DRV_LINK_CXX_TEMPLATE - C++ Linker command template
%%              EXE_CXX_TEMPLATE      - C++ command template
%%              EXE_CC_TEMPLATE       - C command template
%%              EXE_LINK_TEMPLATE     - C Linker command template
%%              EXE_LINK_CXX_TEMPLATE - C++ Linker command template
%%
%%              Note that if you wish to extend (vs. replace) these variables,
%%              you MUST include a shell-style reference in your definition.
%%              e.g. to extend CFLAGS, do something like:
%%
%%              {port_env, [{"CFLAGS", "$CFLAGS -MyOtherOptions"}]}
%%
%%              It is also possible to specify platform specific options
%%              by specifying a triplet where the first string is a regex
%%              that is checked against Erlang's system architecture string.
%%              e.g. to specify a CFLAG that only applies to x86_64 on linux
%%              do:
%%
%%              {port_env, [{"x86_64.*-linux", "CFLAGS",
%%                           "$CFLAGS -X86Options"}]}
%%
%%              Cross-arch environment variables to configure the toolchain:
%%              GET_ARCH sets the toolchain name to use
%%              GET_ARCH_WORDSIZE (optional, to determine the word size;
%%              otherwise a word size of 32 is assumed)
%%              GET_ARCH_VSN (optional, when a specific version of CC/CXX
%%              is requested)

Run Release

{plugins, [rebar3_run]}.

rebar3 run starts the release console, replacing _build/default/rel/<release>/bin/<release> console

Alias

Built into rebar3 since v3.5.0; no plugin needed

For older rebar3 versions

{plugins, [rebar_alias]}.

{alias, [{check, [eunit, {ct, "--sys_config=config/app.config"}]}]}.

Arguments are given as {Provider, Args}.

QuickCheck

{plugins, [rebar3_eqc]}.

eqc_opts

{eqc_opts, [{numtests, 500}]}.
Config Option Type Description
numtests integer Number of test executions, default 100.
testing_time integer Time in seconds to execute property. If both are specified, the testing_time setting is ignored.

Command line options

Option Type Description
-n integer Number of test executions, default 100.
-t integer Time in seconds to execute property. If both are specified, the testing_time setting is ignored.
-p string Property to execute. This can be either module:property or property and the plugin will determine the module.

PropEr

An open-source alternative to Quviq QuickCheck

%% the plugin itself
{plugins, [rebar3_proper]}.

%% The PropEr dependency is still required to compile the test cases
{profiles,
    [{test, [
        {deps, [{proper, "1.1.1-beta"}]}
    ]}
]}.

options

{proper_opts, Options}

rebar.config key Command Line Description
{dir, String} -d, --dir directory where the property tests are located (defaults to "test")
{module, [Modules]} -m, --module name of one or more modules to test
{properties, [PropNames]} -p, --prop name of properties to test within a specified module
{numtests, N} -n, --numtests number of tests to run when testing a given property
verbose | quiet -v, --verbose whether each property tested shows its output or not (defaults to true/verbose)
{cover, true | false} -c, --cover generate cover data (default: false)
long_result --long_result enables long-result mode, displaying counter-examples on failure rather than just false
{start_size, N} --start_size specifies the initial value of the size parameter
{max_size, N} --max_size specifies the maximum value of the size parameter
{max_shrinks, N} --max_shrinks specifies the maximum number of times a failing test case should be shrunk before returning
noshrink --noshrink instructs PropEr to not attempt to shrink any failing test cases
{constraint_tries, N} --constraint_tries specifies the maximum number of tries before the generator subsystem gives up on producing an instance that satisfies a ?SUCHTHAT constraint
{spec_timeout, Millisecs} --spec_timeout duration, in milliseconds, after which PropEr considers an input to be failing
any_to_integer --any_to_integer converts instances of the any() type to integers in order to speed up execution

Diameter

Compiles .dia files under rebar3

{plugins, [rebar3_diameter_compiler]}.

%% hooks to automatically compile and clean the diameter dictionaries
{provider_hooks, [
    {pre, [
        {compile, {diameter, compile}},
        {clean, {diameter, clean}}
    ]}
]}.

options

Config Option Type Description
dia_opts list Options from diameter_make:codec/2 supported with exception of inherits.
dia_first_files list Files to compile first, in sequence.

ErlyDTL

erlydtl compiler

{plugins, [
    {rebar3_erlydtl_plugin, ".*",
     {git, "https://github.com/tsloughter/rebar3_erlydtl_plugin.git", {branch, "master"}}}
]}.

erlydtl_opts

Config Option Type Description
doc_root string Where to find templates to compile; "priv/templates" by default.
compiler_options proplist Template compilation options to pass to erlydtl. Descriptions here.
out_dir string Where to put compiled template beam files; "ebin" by default.
source_ext string The file extension the template sources have; ".dtl" by default.
module_ext string Characters to append to the template's module name; "_dtl" by default.
recursive boolean Whether doc_root(s) should be scanned recursively for matching template file names; 'true' by default.

Neotoma

Compiles PEG files using Sean Cribbs' neotoma app; the plugin is published on Hex

{plugins, [rebar3_neotoma_plugin]}.

%% hook to compile automatically
{provider_hooks, [
    {pre, [{compile, {neotoma, compile}}]}
]}.

Protocol Buffers

Compiles .proto files using Tomas Abrahamsson's gpb; the plugin is published on Hex

{erl_opts, [{i, "./_build/default/plugins/gpb/include/"}]}.
{plugins, [{rebar3_gpb_plugin, "2.10.0"}]}.

{gpb_opts, [{i, "proto"},
        {o_erl, "src"},
        {o_hrl, "include"}]}.

%% hook to compile automatically
{provider_hooks, [
    {pre, [{compile, {protobuf, compile}}]}
]}.

Appup

Generates, compiles, and validates .appup.src files; published on Hex

{plugins, [rebar3_appup_plugin]}.

%% hooks to compile and clean automatically
{provider_hooks, [
    {post, [{compile, {appup, compile}},
            {clean, {appup, clean}}]}
]}.

Steps to generate the .appup between two releases

git checkout <from version>
rebar3 release
git checkout <to version>
rebar3 release
rebar3 appup generate
rebar3 relup tar
Argument Type Description
previous optional Path location of the previous release to compare with
current optional Path location of the current release to compare with, defaults to _build/<profile>/rel/<appname>
target_dir optional Location of where to generate the .appup file.
previous_version optional Version to update from

Vendoring dependencies

Storing vendored dependencies within a project

Since rebar3 v3.7.0, the rebar3_path_deps plugin can be used to specify vendored paths for dependency retrieval

example:

Add a hello_utils OTP app inside the hello_world project

# inside of hello_world/
$ rebar3 new app hello_utils

This creates a hello_utils folder containing a rebar.config file and a src directory

To make rebar3 aware of this app, add hello_utils to hello_world/rebar.config

That is, hello_world now depends on hello_utils

{deps, [
  {hello_utils, {path, "hello_utils"}},
  ...
]}.

Then add the plugin to rebar.config

{plugins, [
   rebar3_path_deps
]}.

When compiling

$ rebar3 compile
===> Compiling rebar3_path_deps
===> Verifying dependencies...
===> Fetching hello_utils ({path,"hello_utils",
                            {mtime,<<"2018-10-17T11:21:18Z">>}})
===> Compiling hello_utils
===> Compiling hello_world

Before v3.7.0, a plugin had to be added

{plugins, [rebar3_vendor]}.

Store the fetched dependencies under ./deps/

rebar3 vendor store

Retrieve the vendored dependencies and place them in the build dir

rebar3 vendor apply

SVN Dependencies

rebar3_svn_deps

Elixir Dependencies

Since v3.7.0, Mix dependencies are supported through the rebar_mix plugin

{plugins, [rebar_mix]}.

{provider_hooks, [{post, [{compile, {mix, consolidate_protocols}}]}]}.

The consolidate_protocols hook places the beams in _build/<profile>/consolidated

They must be copied into the release manually

{overlay, [{copy, "{{base_dir}}/consolidated", "releases/{{release_version}}/consolidated"}]}

and update vm.args.src with

-pa releases/${REL_VSN}/consolidated

Before v3.7.0, the rebar3_elixir_compile plugin was the recommended approach

Config Scripts

rebar.config and *.app.src files can be configured dynamically through file:script/2

If a .script file exists in the same directory as the original file, it is evaluated and its result is used as the configuration

e.g. rebar.config and rebar.config.script

example: rebar.config.script

case os:getenv("REBAR_DEPS") of
    false -> CONFIG; % env var not defined
    []    -> CONFIG; % env var set to empty string
    Dir ->
    lists:keystore(deps_dir, 1, CONFIG, {deps_dir, Dir})
end.

To build normally again, first run unset REBAR_DEPS

References

rebar3 docs

eeeggghit rebar3

Erlang:Rebar3的简单使用

Automatic release upgrades in Erlang

relx

2021/11/22

rebar3

Installation

Just download the binary executable from the rebar3 releases page; it is then ready to use.

Workflow

The recommended workflow with rebar3 is as follows

Choose a project type

template type description
escript short script or util end users must install Erlang themselves; dependencies in C can be bundled into the project or installed by the user
release / umbrella full, self-contained, executable system production deployments of an Erlang system
lib / app / umbrella library lib: a stateless library containing modules; app: a stateful library with a supervision tree; umbrella: a collection of multiple libraries in one project, where multiple top-level apps are used

Set up dependencies

Two things should be done when setting up

  1. track the rebar.lock file

    This gives repeatable builds, and lets rebar3 automatically re-update dependencies when switching branches

  2. ignore the _build directory

    The _build directory can be deleted outright, but that should not be necessary: rebar3 tracks all the applications declared in rebar.config. Deleting _build is useful when a strange bug shows up, or after changing the project structure (for example, moving the src directory)
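For a git-based project, the two points above amount to:

```shell
# Ignore rebar3's build output; the lock file stays under version control
echo "_build/" >> .gitignore
# rebar.lock itself should be committed (e.g. git add rebar.lock)
cat .gitignore
```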

Next, add the dependencies

  1. If a dependency is needed at runtime (for example, a web server, or a library called directly from the code), add it to {applications, [stdlib, kernel, ...]} in the application's .app.src, so the Erlang VM knows the app cannot boot without it
  2. If a dependency is only needed in the release (for example, observer or recon, i.e. debugging tools), add it to the {release, ...} entry

Update dependencies

  1. update the index cache
  2. update the dependency itself

rebar3 keeps a cache of the Hex.pm repository packages and of previously used versions. It does not automatically learn about newly published versions, so query for them explicitly

rebar3 update

This updates the Hex packages but does not modify the existing project. To change what the project itself uses, the lock file must be modified; the following command rewrites the lock file entry so the new version is fetched on the next build

rebar3 upgrade <depname>

Avoid deleting the lock file. Several apps can be updated at once; with no app argument, all applications are updated

rebar3 upgrade app1,app2,app3

create aliases for common tasks

xref finds dead code

dialyzer performs type analysis

ct runs the Common Test suites

cover runs coverage analysis

An alias can combine multiple tasks into a single simple command

{alias, [
    {check, [xref, dialyzer, edoc,
             {proper, "--regressions"},
             {proper, "-c"}, {ct, "-c"}, {cover, "-v --min_coverage=80"}]}
]}.

Running rebar3 check then performs, in order

  1. xref, checking for calls to undefined functions
  2. dialyzer, checking for inconsistencies and type errors
  3. edoc generation
  4. proper, running the regression tests (via the PropEr plugin)
  5. the regular PropEr properties, compiled with coverage analysis
  6. the Common Test suites, compiled with coverage analysis
  7. cover, printing the results to the shell

Recommended settings

Hide overly verbose reports

{dialyzer, [
    {warnings, [
       %% Warn about undefined types and unknown functions
       unknown
    ]}
]}.

{xref_checks,[
    %% enable most checks, but avoid 'unused calls' which is often
    %% very verbose
    undefined_function_calls, undefined_functions, locals_not_used,
    deprecated_function_calls, deprecated_functions
]}.

{profiles, [
    {test, [
        %% Avoid warnings when test suites use `-compile(export_all)`
        {erl_opts, [nowarn_export_all]}
    ]}
]}.

References

rebar3 docs

eeeggghit rebar3

Erlang:Rebar3的简单使用

Automatic release upgrades in Erlang

relx

2021/11/15

Boyer-Moore String Matching Algorithm

The Boyer-Moore algorithm (also called the BM algorithm) is an efficient string search algorithm proposed in 1977 by Robert S. Boyer and J Strother Moore.

Before the search starts, the pattern is preprocessed. Preprocessing yields two tables used during the search: the good-suffix shift and the bad-character shift.

When the pattern is then run against the text, the good-suffix shift and the bad-character shift are used to slide the pattern right by as many positions as possible whenever the current alignment fails to match, cutting down the number of text-versus-pattern character comparisons and thus speeding up the search.

The good suffix is the suffix of the pattern that matches the text at the current alignment; the bad character is the first mismatching character found when comparing from the right, i.e. the text character aligned just before the start of the good suffix.

Definitions

bad-character

The text character that fails to match the pattern at the current alignment; comparison proceeds from the last character of the pattern backwards

good-suffix

The substring that matched the pattern successfully before the bad character was hit

bad-character shift rule

characters shifted right = the pattern index aligned with the bad character − the rightmost index at which the bad character occurs in the pattern

In other words: shift the pattern right by the minimum number of positions so that an occurrence of the bad character inside the pattern lines up with the bad character just found in the text.

If the bad character does not occur in the pattern, its rightmost position is taken to be one position before the pattern; that is, the pattern is shifted right just far enough that its first character lands at the index right after the text's bad character.
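As a sketch, the bad-character rule can be implemented as a rightmost-occurrence map (the function name here is illustrative, not from any standard library):

```python
def bad_char_table(pattern):
    """Rightmost occurrence index of each character in the pattern.

    For a mismatching text character c aligned with pattern index j,
    the bad-character shift is j - table.get(c, -1); a character
    absent from the pattern defaults to -1, which moves the pattern
    fully past the bad character.
    """
    return {c: i for i, c in enumerate(pattern)}


table = bad_char_table("EXAMPLE")
# 'P' occurs rightmost at index 4; a mismatch on 'P' against the
# final pattern position (index 6) shifts the pattern by 6 - 4 = 2
shift_p = 6 - table.get("P", -1)
# 'S' never occurs, so a mismatch on 'S' at index 6 shifts by 6 - (-1) = 7
shift_s = 6 - table.get("S", -1)
```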

good-suffix shift rule

characters shifted right = the position of the good suffix in the pattern − the rightmost position in the pattern where the good suffix reoccurs with a different preceding character

Shift the pattern right by the minimum number of positions so that the good suffix contained in the pattern, or a suffix of the good suffix, lines up with the good suffix (or that same suffix of it) just matched in the text.

Worked example

Search pattern:

EXAMPLE

Text:

HERE IS A SIMPLE EXAMPLE

Step 1

HERE IS A SIMPLE EXAMPLE
EXAMPLE
  • S and E differ, so S is the bad character
  • No character matched, so there is no good suffix
  • Since EXAMPLE contains no S, the rule "shift the pattern EXAMPLE right by the minimum number of positions so that an S inside the pattern aligns with the bad character found in the text" cannot be satisfied
  • The bad-character shift therefore becomes: shift EXAMPLE right so that its first character lands at the index right after the text's bad character

Step 2

HERE IS A SIMPLE EXAMPLE
       EXAMPLE
  • P and E differ, so P is the bad character
  • No character matched, so there is no good suffix
  • Applying the rule "shift the pattern EXAMPLE right by the minimum number of positions so that a P inside the pattern aligns with the bad character found in the text" gives a shift of 2 characters

Step 3

HERE IS A SIMPLE EXAMPLE
         EXAMPLE
  • MPLE matches; this is the good suffix
  • I and A differ, so I is the bad character
  • Since EXA contains no I, the rule "shift EXAMPLE right so that an I inside the pattern aligns with the bad character found in the text" cannot be satisfied
  • The bad-character shift therefore becomes: shift EXAMPLE right so that its first character lands at the index right after the text's bad character, which gives 3
  • Meanwhile, the good-suffix shift is computed from the good suffix MPLE and its suffixes PLE, LE, and E.
  • The goal is to shift EXAMPLE right by the minimum number of positions so that a reoccurrence of the good suffix, or a suffix of it, inside the pattern aligns with the good suffix (or that suffix) just matched in the text.
  • MPLE: no right shift of the pattern can align with MPLE again
  • PLE: no right shift aligns with PLE
  • LE: no right shift aligns with LE
  • E: the pattern can align with E after shifting right by 6 characters
  • The bad-character shift is 3 and the good-suffix shift is 6, so the good-suffix shift of 6 is used

Step 4

HERE IS A SIMPLE EXAMPLE
               EXAMPLE
  • P and E differ, so P is the bad character
  • Applying the rule "shift EXAMPLE right so that a P inside the pattern aligns with the bad character found in the text" gives a shift of 2 characters

Step 5

HERE IS A SIMPLE EXAMPLE
                 EXAMPLE
  • EXAMPLE matches in full; the search is complete
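The walkthrough above can be reproduced in code. The sketch below is the simplified Boyer-Moore-Horspool variant, which uses only the bad-character rule (no good-suffix table); the function name is illustrative. On this particular example it happens to visit the same alignments (0, 7, 9, 15, 17) as the full algorithm:

```python
def horspool_search(text, pattern):
    """Simplified Boyer-Moore (Horspool): bad-character rule only.

    The shift table records, for every pattern character except the
    last, its distance from the end of the pattern; characters not in
    the pattern shift by the full pattern length.
    """
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        # compare from the rightmost pattern character backwards
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1
        if j < 0:
            return i          # full match starting at text index i
        # shift by the table entry for the text character under the
        # last pattern position (default: the whole pattern length)
        i += shift.get(text[i + m - 1], m)
    return -1


horspool_search("HERE IS A SIMPLE EXAMPLE", "EXAMPLE")  # returns 17
```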

References

Boyer-Moore-MagicLen(BM-MagicLen)字串搜尋演算法,超快速的全文搜尋演算法

演算法: Boyer-Moore字串匹配演算法

演算法――字串匹配之BM演算法

字串匹配Boyer-Moore演算法:文字編輯器中的查詢功能是如何實現的?

字串搜尋演算法Boyer-Moore的Java實現

字符串匹配的Boyer-Moore算法

图解字符串匹配的KMP算法

2021/11/08

asyncio - transports and protocols

The transport is responsible for how data bytes get transmitted; it is an abstraction over a socket (an I/O endpoint)

The protocol decides when to send which data bytes; it is an abstraction of the application

A transport and a protocol object always have a 1:1 relationship: the protocol calls transport methods to send data, and the transport calls protocol methods to hand over received data

The most commonly used event loop method is loop.create_connection(). It usually takes a protocol_factory argument that produces a Protocol object to handle the connection represented by a Transport object, and it usually returns a (transport, protocol) tuple

tcp echo server

import asyncio


class EchoServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(self.peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('{} received: {!r}'.format(self.peername, message))

        print('{} Send: {!r}'.format(self.peername, message))
        self.transport.write(data)

    def connection_lost(self, exc):
        print('{} closed the connection'.format(self.peername))


async def main():
    # Get a reference to the event loop as we plan to use
    # low-level APIs.
    loop = asyncio.get_running_loop()

    server = await loop.create_server(
        lambda: EchoServerProtocol(),
        '127.0.0.1', 8888)

    async with server:
        await server.serve_forever()


try:
    asyncio.run(main())
except KeyboardInterrupt:
    pass

tcp echo client

import asyncio
# the aioconsole package reads console input, one line at a time
from aioconsole import ainput


class EchoClientProtocol(asyncio.Protocol):
    def __init__(self, on_con_lost):
        self.on_con_lost = on_con_lost

    def connection_made(self, transport):
        self.transport = transport

    def write(self, message):
        self.transport.write(message.encode())
        # print('Data sent: {!r}'.format(message))

    def data_received(self, data):
        print('Data received: {!r}'.format(data.decode()))

    def connection_lost(self, exc):
        print('The server closed the connection')
        self.on_con_lost.set_result(True)


async def main():
    # Get a reference to the event loop as we plan to use
    # low-level APIs.
    loop = asyncio.get_running_loop()

    on_con_lost = loop.create_future()

    transport, protocol = await loop.create_connection(
        lambda: EchoClientProtocol(on_con_lost),
        '127.0.0.1', 8888)


    # protocol.write('Hello World! 22')
    while True:
        # aioconsole reads one line of input from stdin
        cmd = await ainput('')
        if cmd == 'q':
            transport.close()
            break
        else:
            protocol.write(cmd)

    # Wait until the protocol signals that the connection
    # is lost and close the transport.
    try:
        await on_con_lost
    finally:
        transport.close()


try:
    asyncio.run(main())
except KeyboardInterrupt:
    pass

udp echo server

import asyncio


class EchoServerProtocol:
    def __init__(self, on_con_lost):
        self.on_con_lost = on_con_lost
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        message = data.decode()
        print('Received %r from %s' % (message, addr))
        print('Send %r to %s' % (message, addr))
        self.transport.sendto(data, addr)

    def connection_lost(self, exc):
        print("server closed")
        self.on_con_lost.set_result(True)


async def main():
    print("Starting UDP server")

    # Get a reference to the event loop as we plan to use
    # low-level APIs.
    loop = asyncio.get_running_loop()
    on_con_lost = loop.create_future()

    # One protocol instance will be created to serve all
    # client requests.
    transport, protocol = await loop.create_datagram_endpoint(
        lambda: EchoServerProtocol(on_con_lost),
        local_addr=('127.0.0.1', 9999))

    try:
        await asyncio.sleep(3600)  # Serve for 1 hour.
    finally:
        transport.close()

try:
    asyncio.run(main())
except KeyboardInterrupt:
    pass

udp echo client

import asyncio
# the aioconsole package reads console input, one line at a time
from aioconsole import ainput


class EchoClientProtocol:
    def __init__(self, on_con_lost):
        self.on_con_lost = on_con_lost
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport

    def datagram_send(self, message):
        self.transport.sendto(message.encode())

    def datagram_received(self, data, addr):
        print("Received:", data.decode())

    # called when a previous send/receive operation raised an OSError; rare
    def error_received(self, exc):
        print('Error received:', exc)

    # callback invoked when the connection is lost or closed
    def connection_lost(self, exc):
        print("Connection closed")
        self.on_con_lost.set_result(True)

    def close(self):
        self.transport.close()


async def main():
    # Get a reference to the event loop as we plan to use
    # low-level APIs.
    loop = asyncio.get_running_loop()

    on_con_lost = loop.create_future()

    transport, protocol = await loop.create_datagram_endpoint(
        lambda: EchoClientProtocol(on_con_lost),
        remote_addr=('127.0.0.1', 9999))

    while True:
        # aioconsole reads one line of input from stdin
        cmd = await ainput('')
        if cmd == 'q':
            protocol.close()
            break
        else:
            protocol.datagram_send(cmd)


    try:
        # wait for EchoClientProtocol to fulfill on_con_lost in
        # connection_lost, ending the connection and yielding control back
        await on_con_lost
    finally:
        transport.close()


try:
    asyncio.run(main())
except KeyboardInterrupt:
    pass

tcp echo server with Streams

Streams are a high-level async/await API for network connections; data can be sent and received without callbacks or the low-level protocols and transports.

import asyncio

async def handle_echo(reader, writer):
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')

    print(f"Received {message!r} from {addr!r}")

    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()

    print("Close the connection")
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)

    addr = server.sockets[0].getsockname()
    print(f'Serving on {addr}')

    async with server:
        await server.serve_forever()

asyncio.run(main())

tcp echo client with Streams

import asyncio
from aioconsole import ainput

async def tcp_echo_client():
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888)

    message = await ainput('')
    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()

    print('wait for response')
    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')

    print('Close the connection')
    writer.close()

asyncio.run(tcp_echo_client())

References

Transports and Protocols

udp2tcp

2021/11/01

socat

socat stands for SOcket CAT, a tool that relays data between two arbitrary addresses. An address can be a network socket, a file descriptor, TCP/UDP over IPv4/IPv6, a pipe, readline, stdio, and more.

The basic command is

socat [options] <bi-address> <bi-address>

Given two addresses, socat wires their data streams together: the left address's output is fed to the right address, and the right address's output is fed to the left, forming a data channel between the two addresses.

network service testing

Remote service ports used to be probed with the telnet command; socat can now take over that role

Probing a service port

# -d[ddd] raises log verbosity; -dd prints fatal, error, warning, and notice messages
socat -dd - TCP:192.168.1.88:3306

# -v verbose output
# -z send no data, closing the connection immediately for a quick result
nc -vz 192.168.1.88 3306

# -vv even more verbose output
# -w2 set the timeout to 2 seconds
# simple port scan with nc
nc -vv -w2 -z 192.168.1.88 20-500

TCP/UDP

Listen on a local TCP port; stdin is sent to the client, and received data goes to stdout

# server side, with socat / nc
socat TCP-LISTEN:7000 -
# -l listen mode
nc -l 7000

# client side, with socat / nc
socat TCP:192.168.1.157:7000 -
nc 192.168.1.157 7000

udp

socat UDP-LISTEN:7000 -

socat UDP:192.168.1.157:7000 -

TLS

ref: Securing Traffic Between two Socat Instances Using SSL

Generate the server cert

# generate a public/private key pair
openssl genrsa -out server.key 2048
# generate a self-signed certificate
openssl req -new -key server.key -x509 -days 3650 -out server.crt

# build the pem
cat server.key server.crt > server.pem

chmod 600 server.key server.pem

Generate the client certificate the same way

# generate a public/private key pair
openssl genrsa -out client.key 2048
# generate a self-signed certificate
openssl req -new -key client.key -x509 -days 3650 -out client.crt

# build the pem
cat client.key client.crt > client.pem

chmod 600 client.key client.pem
# server side
socat openssl-listen:4433,reuseaddr,cert=server.pem,cafile=client.crt,verify=0 -

# client side
socat - openssl-connect:192.168.1.157:4433,cert=client.pem,cafile=server.crt,verify=0

port redirect

# redirect port 8080 to remote port 80 (single connection)
socat TCP-LISTEN:8080 TCP:192.168.1.157:80

# redirect port 8080 to remote port 80 (multiple concurrent connections)
socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.1.157:80

# redirect port 8080 to remote port 80 (fork processes with nobody's privileges)
socat TCP-LISTEN:8080,fork,reuseaddr,su=nobody TCP:192.168.1.157:80

file transfer

To transfer a file from host A to host B

First, on A

socat -u open:filename tcp-listen:12345

Then, on B

socat -u tcp:192.168.1.157:12345 open:filename,create

# or simply redirect stdout
socat -u tcp:192.168.1.157:12345 - > filename

With nc

# start the server side on the receiving host first
nc -l -p 8080 > filename

# then start the client side on the sending host to push the data
nc 192.168.1.157 8080 < filename

web server

socat \
    -v -d -d \
    TCP-LISTEN:8080,crlf,reuseaddr,fork \
    SYSTEM:"
        echo HTTP/1.1 200 OK; 
        echo Content-Type\: text/plain; 
        echo; 
        echo \"Server: \$SOCAT_SOCKADDR:\$SOCAT_SOCKPORT\";
        echo \"Client: \$SOCAT_PEERADDR:\$SOCAT_PEERPORT\";
    "

A browser can then connect directly to http://localhost:8080/

Separating reads and writes

#server
socat open:filename\!\!open:log.txt,create,append tcp-listen:12345,reuseaddr,fork

\!\! is needed because ! must be escaped in Linux shells; the left side of !! is the read address, the right side the write address

open:filename reads the contents of filename

open:log.txt writes the received data to log.txt

#client
socat TCP:192.168.1.157:12345 -

The client receives the contents of filename

References

Linux 網路工具中的瑞士軍刀 - socat & netcat