Enable Core Dump On Linux

Outline

  1. set up the core-dump limit
  2. set up the core pattern
  3. prepare a folder for core dumps
  4. check that bash limits are removed
  5. verify that everything works

 

Setup Resource Limit

First we need to set up the resource limit at the system-wide level.

  • edit /etc/security/limits.conf, add or un-comment this line
    *        soft    core    unlimited   # or number in KB
  • (optional) edit /etc/init.d/functions and add this line
    ulimit -S -c ${daemon_corefile_limit:-0} >/dev/null 2>&1
  • (centos/redhat only) edit /etc/sysconfig/init to enable core-dump globally
    DAEMON_COREFILE_LIMIT='unlimited'
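For a single shell session you can inspect (and, up to the hard limit, adjust) the core limit directly with ulimit; a minimal sketch:

```shell
# Show the current soft and hard core-file limits for this shell
ulimit -S -c    # soft limit, what the kernel enforces (0 disables cores)
ulimit -H -c    # hard limit, the ceiling for the soft limit
# The soft limit can be raised up to the hard limit for this session:
# ulimit -S -c unlimited
```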

 

Setup Core-Pattern

The current working directory is usually not a good place to save cores

  • the process might not have write permission there
  • there might not be enough space
  • the cwd might change while the program runs, and a core might be dumped at any time
  • cores would be left scattered all over the filesystem

Edit the file /etc/sysctl.conf and add the following lines

# setup core-pattern
fs.suid_dumpable = 1          # enable dumping suid app
kernel.core_uses_pid = 1      # append pid to the following string
kernel.core_pattern = /tmp/core  # core-dump prefix string
# the final core file will be /tmp/core.$pid

Let the settings take effect

sudo /sbin/sysctl -p
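The new values can be read back to confirm they took effect (reading needs no root):

```shell
# Read the effective values back via sysctl...
/sbin/sysctl kernel.core_pattern kernel.core_uses_pid fs.suid_dumpable
# ...or directly from the backing /proc files
cat /proc/sys/kernel/core_pattern
cat /proc/sys/kernel/core_uses_pid
```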

 

Prepare Folders for Cores

After setting the core pattern, don’t forget to create the folder that holds the cores. The folder must be writable by any process that might create cores.
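A sketch, using a hypothetical /var/cores directory (adjust the path to match your core_pattern):

```shell
# Hypothetical directory for cores; must match kernel.core_pattern
CORE_DIR=/var/cores
sudo mkdir -p "$CORE_DIR"
# 1777 = world-writable with the sticky bit (like /tmp): any process
# can write a core, but users cannot remove each other's files
sudo chmod 1777 "$CORE_DIR"
ls -ld "$CORE_DIR"
```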

 

Check Bash limits

Resource limits of a process are inherited from its parent. A process can reduce but cannot increase a limit. Many shell profiles restrict core dumps by default. Check for and remove/edit restrictions like

ulimit -S -c 0 > /dev/null 2>&1

from the profile of the shell that launches the application:

  • /etc/profile
  • ~/.profile
  • ~/.bashrc
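A quick way to find such lines in the profiles listed above:

```shell
# List any lines in common shell profiles that touch the core limit
grep -n 'ulimit.*-c' /etc/profile ~/.profile ~/.bashrc 2>/dev/null
```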

 

Verify That Everything Works

  1. Log out and log in again.
  2. Type ulimit -a to check the core file size.
  3. Run any command and send it SIGSEGV while it is still running.
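The check can be scripted; a sketch, assuming the /tmp/core pattern with pid appended as configured above (the final ls only succeeds if the limits really are in place):

```shell
# Start a long-running background process
sleep 60 &
pid=$!
# Send it SIGSEGV; the kernel treats this like a real segfault
kill -SEGV "$pid"
wait "$pid" || true   # exit status 139 = 128 + SIGSEGV(11)
# If the limits and pattern are correct, the core appears here:
ls -l /tmp/core."$pid"
```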

Debian vs Alpine – aka glibc vs musl

Intro

Alpine is commonly chosen for light-weight docker containers. However, most applications are built on “normal” systems, i.e. glibc-based ones. Are programs compatible across the two distributions?

 

About Alpine

Alpine is a minimal linux distro based on musl. musl is a re-implementation of glibc that claims to be smaller, faster and more secure. A typical alpine docker image is about 5 MB (compared to roughly 180 MB for debian).

$ docker pull alpine
$ docker pull frolvlad/alpine-gxx
$ docker images
REPOSITORY            TAG      IMAGE ID       CREATED        SIZE
centos                latest   2d194b392dd1   13 days ago    195MB
frolvlad/alpine-gxx   latest   1ef8d941aadd   2 weeks ago    151MB
alpine                latest   3fd9065eaf02   2 months ago   4.15MB
hello-world           latest   f2a91732366c   3 months ago   1.85kB

 

Sample Programs

Here are two simple C/C++ programs. We are going to build them on debian and run them on alpine, and vice versa.

cat t.c

#include <stdio.h>
int main()
{
 printf("Hello, world\n");
 return 0;
}

cat t.cpp

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello, this is c++" << endl;
    return 0;
}

 

From Debian To Alpine

Nothing special is needed to build these two programs. We build them both statically and dynamically and see what differs.

jzou@debian9:~/tmp/exdk$ gcc -o t t.c
jzou@debian9:~/tmp/exdk$ gcc -o ts -static t.c
jzou@debian9:~/tmp/exdk$ g++ -o t+ t.cpp
jzou@debian9:~/tmp/exdk$ g++ -o t+s -static t.cpp
jzou@debian9:~/tmp/exdk$ ls -lh t*
-rwxr-xr-x 1 jzou jzou 8.5K Mar 19 16:04 t
-rwxr-xr-x 1 jzou jzou 9.1K Mar 19 16:04 t+
-rw-r--r-- 1 jzou jzou 73 Mar 19 13:07 t.c
-rw-r--r-- 1 jzou jzou 107 Mar 19 13:08 t.cpp
-rwxr-xr-x 1 jzou jzou 792K Mar 19 16:04 ts
-rwxr-xr-x 1 jzou jzou 2.0M Mar 19 16:04 t+s

 

Launch alpine docker and run those binaries.

docker run -it --name alpine -v ~/tmp/exdk:/home/jzou alpine
# cd /home/jzou
/home/jzou # ./t
/bin/sh: ./t: not found
/home/jzou # ./t+
/bin/sh: ./t+: not found
/home/jzou # ./ts
Hello, world
/home/jzou # ./t+s
Hello, this is c++

Go back to the debian host and check the dependencies

jzou@debian9:~/tmp/exdk$ ldd t
 linux-vdso.so.1 (0x00007ffe2fdd1000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fadd02b7000)
 /lib64/ld-linux-x86-64.so.2 (0x00007fadd0858000)
jzou@debian9:~/tmp/exdk$ ldd t+
 linux-vdso.so.1 (0x00007ffffa1d2000)
 libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f4259f97000)
 libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4259c93000)
 libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4259a7c000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f42596dd000)
 /lib64/ld-linux-x86-64.so.2 (0x00007f425a51b000)

Copy the dependencies for the alpine container

mkdir mylibs && cd mylibs
cp /lib64/ld-linux-x86-64.so.2 .
cp /lib/x86_64-linux-gnu/libc.so.6 .
cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 .
cp /lib/x86_64-linux-gnu/libgcc_s.so.1 .
cp /lib/x86_64-linux-gnu/libm.so.6 .
ls -lh
-rwxr-xr-x 1 jzou jzou 150K Mar 19 13:14 ld-linux-x86-64.so.2
-rwxr-xr-x 1 jzou jzou 1.7M Mar 19 13:12 libc.so.6
-rw-r--r-- 1 jzou jzou 91K Mar 19 13:18 libgcc_s.so.1
-rw-r--r-- 1 jzou jzou 1.1M Mar 19 13:17 libm.so.6
-rw-r--r-- 1 jzou jzou 1.5M Mar 19 13:17 libstdc++.so.6

In the alpine container, set up the proper loader and libc/libstdc++

mkdir -p /lib64
cp mylibs/ld-linux-x86-64.so.2 /lib64
/home/jzou # ./t
./t: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
/home/jzou # export LD_LIBRARY_PATH=/home/jzou/mylibs
/home/jzou # ./t
Hello, world
/home/jzou # ./t+
Hello, this is c++

Well done!

 

From Alpine to Debian

Build programs on alpine with its gcc/g++, which are based on musl.

docker run -it --name agxx -v /home/jzou/tmp/exdk:/home/jzou "frolvlad/alpine-gxx"
# cd /home/jzou
/home/jzou # gcc -o at t.c
/home/jzou # ./at
Hello, world
/home/jzou # g++ -o at+ t.cpp
/home/jzou # ./at+
Hello, this is c++
/home/jzou # gcc -o ats -static t.c
/home/jzou # g++ -o at+s -static t.cpp
/home/jzou # ./ats
Hello, world
/home/jzou # ./at+s
Hello, this is c++
/home/jzou # ls -lh a*
-rwxr-xr-x 1 root root 10.4K Mar 19 23:27 at
-rwxr-xr-x 1 root root 11.0K Mar 19 23:27 at+
-rwxr-xr-x 1 root root 5.8M Mar 19 23:28 at+s
-rwxr-xr-x 1 root root 78.5K Mar 19 23:28 ats
/home/jzou # ls -lh t*
-rwxr-xr-x 1 1000 1000 8.4K Mar 19 23:04 t
-rwxr-xr-x 1 1000 1000 9.1K Mar 19 23:04 t+
-rwxr-xr-x 1 1000 1000 2.0M Mar 19 23:04 t+s
-rw-r--r-- 1 1000 1000 73 Mar 19 20:07 t.c
-rw-r--r-- 1 1000 1000 107 Mar 19 20:08 t.cpp
-rwxr-xr-x 1 1000 1000 791.7K Mar 19 23:04 ts

Back on the debian host, try to run those binaries

jzou@debian9:~/tmp/exdk$ ./at
bash: ./at: No such file or directory
jzou@debian9:~/tmp/exdk$ ./at+
bash: ./at+: No such file or directory
jzou@debian9:~/tmp/exdk$ ./ats
Hello, world
jzou@debian9:~/tmp/exdk$ ./at+s
Hello, this is c++

As we did before, let’s copy the loader and supporting libraries to debian.

Back to alpine, find dependencies and grab them.

/home/jzou # ldd at
 /lib/ld-musl-x86_64.so.1 (0x7efdeef3c000)
 libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7efdeef3c000)
/home/jzou # ldd at+
 /lib/ld-musl-x86_64.so.1 (0x7f852e70b000)
 libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f852e1b7000)
 libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f852e70b000)
 libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f852dfa5000)
/home/jzou # mkdir mymusl
/home/jzou # cp /lib/ld-musl-x86_64.so.1 mymusl
/home/jzou # cp /usr/lib/libstdc++.so.6 mymusl
/home/jzou # cp /usr/lib/libgcc_s.so.1 mymusl
/home/jzou # ls -lh mymusl
total 1964
-rwxr-xr-x 1 root root 550.5K Mar 20 00:04 ld-musl-x86_64.so.1
-rw-r--r-- 1 root root 69.7K Mar 20 00:09 libgcc_s.so.1
-rwxr-xr-x 1 root root 1.3M Mar 20 00:05 libstdc++.so.6

Go to debian

jzou@debian9:~/tmp/exdk$ sudo cp mymusl/ld-musl-x86_64.so.1 /lib
jzou@debian9:~/tmp/exdk$ export LD_LIBRARY_PATH=`pwd`/mymusl
jzou@debian9:~/tmp/exdk$ ./at
Hello, world
jzou@debian9:~/tmp/exdk$ ./at+
Hello, this is c++

Well done!

Conclusion

  • statically linked apps can run everywhere, of course
  • dynamically linked apps (by default) need the matching loader and supporting libs (libc/libstdc++) set up

Quick Steps on Docker

Install Docker

On debian

sudo apt-get update
sudo apt-get install \
 apt-transport-https \
 ca-certificates \
 curl \
 gnupg2 \
 software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
 "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
 $(lsb_release -cs) \
 stable"
sudo apt-get update
sudo apt-get install docker-ce
# apt-cache madison docker-ce
# verify everything is fine by launching a simple container
sudo docker run hello-world

Run docker commands from a non-root account

sudo groupadd docker
sudo usermod -aG docker jzou

Log out and log back in (or run newgrp docker) for the new group membership to take effect.

 

Change Docker Storage

By default docker saves all its data (images, containers, etc.) under /var/lib/docker.

Use this command to check the current location:

docker info

Quite often there is not enough space on the /var volume and you want to move the data to some other mount. Here is a procedure that moves the data to /mnt/huge/docker

  1. Stop Docker
    systemctl stop docker
    systemctl daemon-reload
  2. Edit docker daemon configuration file /lib/systemd/system/docker.service
    FROM ExecStart=/usr/bin/dockerd
    TO ExecStart=/usr/bin/dockerd -g /mnt/huge/docker/
  3. Copy content
    rsync -aqxP /var/lib/docker/ /mnt/huge/docker/
  4. Restart docker
    systemctl start docker

Ref: https://stackoverflow.com/questions/32070113/how-do-i-change-the-default-docker-container-location

Pull Docker Images from Docker Hub

Here are some sample pulls:

docker pull debian

In addition to common base images such as CentOS and Debian/Ubuntu, some minimal linux distros are widely used, such as alpine and busybox

docker pull alpine

Start A Container

docker run -it --name cname docker_image [ command ]
  • -i        keep STDIN open (interactive)
  • -t        allocate a terminal (bind with the current terminal)
  • --name    give the container a name for easy reference (instead of the hash string)
  • command   defaults to what the image specifies, or /bin/bash for most OS images

 

Share Directory With Host

docker run --name cname -v host_path:container_path docker_image

 

 

Understanding Property Sheet In Visual Studio Project Management

Naive Editing Properties

The naive way to manage a Visual Studio project across configurations (configuration + platform, such as Debug + x64) is to modify values directly in the Property Editor for each configuration. The disadvantages are

  • duplication across configurations, which makes consistency difficult to maintain
  • difficulty applying the same settings to multiple users or to multiple similar projects

The appropriate solution is customized property sheets, which support cascading overrides.

Two Ways to Access Property Editor

  • naive: from the project explorer, you can access properties for each configuration
  • advanced: from the Property Manager. An upper sheet in each configuration overrides values in a lower sheet. For VS2015, the global sheets are located at <drive>\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140, and user sheets at <userprofile>\AppData\Local\Microsoft\MSBuild\v4.0. User sheets are obsolete and should be avoided (it is recommended to delete them from projects).

 

In the Property Manager, you can create/add custom property sheets at the project level (applies to all configurations) or for a specific configuration.

custom-props.png

Project Property Sheet

Custom property sheets are saved in stand-alone XML files (for example, MyProps4All.props), instead of project files (.vcxproj). The advantage of stand-alone property sheet files is that they can be shared by different projects and configurations via importing.

The property inheritance (or override order) is

  1. Default settings from the MSBuild CPP Toolset (..\Program Files\MSBuild\Microsoft.Cpp\v4.0\Microsoft.Cpp.Default.props, which is imported by the .vcxproj file.)
  2. Property sheets
  3. .vcxproj file. (Can override the default and property sheet settings.)
  4. Item metadata

 


 

Displaying Chinese UTF-8 Characters in gvim On Windows

The following tip is copied from https://www.dzhang.com/blog/2013/04/02/displaying-chinese-utf-8-characters-in-gvim-on-windows

By default, gvim on my Windows machines just displays question marks, boxes, or garbled characters when I try to open files with Chinese text. The fix was rather simple:

  1. From the gettext project on SourceForge, get libiconv-1.9.1.bin.woe32.zip, which contains bin/iconv.dll. Put that file into gvim’s installation directory (for me, it’s C:\Program Files (x86)\Vim\vim73).
  2. Put this into vimrc:
    set encoding=utf8
    set guifontwide=NSimSun

Note: I’m using gvim 7.3.46 from the official site.

Credit goes to user Tobbe for this answer on superuser.com, which pointed out the key ingredient (iconv.dll).

 

Extra info

There is an extra vim plug-in that might be interesting; I haven’t verified it yet.

VimIM : Vim Input Method — Vim 中文输入法

Remote Debug With gdb/gdbserver

Overview

Remote debugging is useful or necessary, for example, in the following scenarios:

  • There’s no full debugger on the target host, but a small stub (e.g. gdbserver) is available.
  • There’s no full source on the target host (for various reasons, such as size or security). In a large project, synchronizing source code across hosts is neither convenient nor safe. This is probably the most common case in the real world.
  • Debugger input/output may pollute the target application’s input/output, for example when you are debugging a full-screen editor.

In practice, it’s better to have gdb and gdbserver at the same version. I once saw gdb (on centos 5, v7.0.1) fail to connect to gdbserver (on debian 8, v7.7.1).

In the following example, the source code lives on the host debian, the application is built and debugged on debian, and it actually runs on the remote host centos.

Prepare The Executable

Needless to say, you need -g to keep symbols when you build your application. One additional tip: because the target application runs remotely, you don’t actually need to keep symbols in the distributed binary.

jason@debian$ gcc -g -o app app.c
jason@debian$ objcopy --only-keep-debug app app.debug
jason@debian$ strip -g -o app.remote app
jason@debian$ scp app.remote tony@centos:path/app

Explanation:

  1. build the app with symbols
  2. extract the symbols into a separate file; app.debug now contains all the debug information. This is common distribution practice.
  3. generate a version of the executable without symbols; app.remote is the version you actually distribute. The binary app still contains full symbols for the debugger (you could instead use app.remote plus app.debug together, but there is little point when you already have app).
  4. distribute the binary to the target host (host: centos, user: tony)

Launch At Target Host

On the target host (centos), start the application

tony@centos$ gdbserver localhost:4444 app
Process app created; pid = 10307
Listening on port 4444

The example uses the TCP protocol. In some special cases (such as embedded systems) you can use a serial line instead. The host part of the host:port pair is not actually used by the current gdbserver version, so you can put anything there.

Start Debugging Session

Start debugging on the host debian. From the execution point of view, the program runs on centos and the debugger runs remotely (on debian); from the debugger’s point of view, gdb (on debian) is debugging a program running on the remote host (centos).

jason@debian$ gdb app
... license info
Reading symbols from /home/jason/app...done.
(gdb) target remote centos:4444
Remote debugging using centos:4444
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
(gdb) break main
(gdb) continue

Once gdb connects to gdbserver, you can debug as in a normal gdb session: step, set breakpoints, print variables, etc. One thing to remember is that the program’s input and output happen on the remote host.

On the remote host (centos), the gdbserver session looks like

...
Listening on port 4444
Remote debugging from host 192.168.205.96

... normal application input/output

Child exited with status 0
GDBserver exiting
tony@centos$ 

p4 – perforce common commands for batch operations

p4 help
get online help
p4 dirs //Drivers/*
list direct directories under given path (//Drivers)
p4 files //Drivers/…
list files (recursively) under given path (//Drivers)
p4 labels -m 5 //Drivers/…
list latest 5 labels of a path
p4 changes -m 1 //Drivers/…
list the latest change list
p4 user jason
list user jason’s information
p4 client
show current client
p4 opened
list opened files (that are in the pending change lists)
p4 sync
get files to the workspace

p4 sync file#rev
p4 sync @label
p4 sync //depot/proj/...@rev
p4 sync @2011/06/24
p4 sync file#none           # delete from workspace
p4 unshelve -s changelist [ file_pattern … ]
unshelve a file/change list

perforce file name format

file#n              the n-th revision of file
file#m,n            revision range m,n
file#none           nonexistent revision (delete from workspace)
file#0
file#head           head (latest) revision
file#have           the revision on the current client
file@=n             change number
file@n
file@label          file in label
file@clientname     revision of file last taken into client workspace
file@datespec       datespec    yyyy/mm/dd([ :]hh:mm:ss)?
file@now