Understanding Property Sheets In Visual Studio Project Management

Naive Property Editing

The naive way to manage a Visual Studio project across different configurations (configuration + platform, such as Debug|x64) is to modify values directly in the property editor for each configuration. The disadvantages are:

  • duplication across configurations, which makes consistency difficult to maintain
  • difficulty applying the same settings to multiple users or to multiple similar projects

The appropriate solution is to use custom property sheets, which support cascading overrides.

Two Ways to Access the Property Editor

  • naive: from the project in Solution Explorer, you can access properties for each configuration
  • advanced: from the Property Manager. An upper sheet in each configuration overrides values in a lower sheet. For VS2015, the global sheets are located at <drive>\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140 and the user sheets at <userprofile>\AppData\Local\Microsoft\MSBuild\v4.0. User sheets are obsolete and should be avoided (it is recommended to delete them from projects).

 

In the Property Manager, you can create or add custom property sheets at the project level (applying to all configurations) or for a specific configuration.

[Image custom-props.png: adding a custom property sheet in the Property Manager]

Project Property Sheet

Custom property sheets are saved in stand-alone XML files (for example, MyProps4All.props) rather than in the project file (.vcxproj). The advantage of stand-alone property sheet files is that they can be shared by different projects and configurations by importing them.
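A property sheet is just a small MSBuild XML file; the Property Manager wires it into the .vcxproj through an <Import> element inside the PropertySheets import group. Below is a minimal sketch of what such a file might contain (the file name, preprocessor define, and include path are made-up examples, not settings from this post):

<?xml version="1.0" encoding="utf-8"?>
<!-- MyProps4All.props : settings shared by several projects/configurations -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ClCompile>
      <!-- values are appended to whatever the project already defines -->
      <PreprocessorDefinitions>MY_SHARED_DEFINE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>C:\thirdparty\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>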

The property inheritance (or override order) is

  1. Default settings from the MSBuild CPP Toolset (..\Program Files\MSBuild\Microsoft.Cpp\v4.0\Microsoft.Cpp.Default.props, which is imported by the .vcxproj file.)
  2. Property sheets
  3. .vcxproj file. (Can override the default and property sheet settings.)
  4. Items metadata

 


Displaying Chinese UTF-8 Characters in gvim On Windows

The following tip is copied from https://www.dzhang.com/blog/2013/04/02/displaying-chinese-utf-8-characters-in-gvim-on-windows

By default, gvim on my Windows machines just displays question marks, boxes, or garbled characters when I try to open files with Chinese text. The fix was rather simple:

  1. From the gettext project on SourceForge, get libiconv-1.9.1.bin.woe32.zip, which contains bin/iconv.dll. Put that file into gvim’s installation directory (for me, it’s C:\Program Files (x86)\Vim\vim73).
  2. Put this into vimrc:
    set encoding=utf8
    set guifontwide=NSimSun

Note: I’m using gvim 7.3.46 from the official site.

Credit goes to user Tobbe for this answer on superuser.com, which pointed out the key ingredient (iconv.dll).

 

Extra info

There is an extra vim plug-in that might be interesting; I haven’t verified it yet.

VimIM : Vim Input Method (Vim 中文输入法, i.e. a Chinese input method for Vim)

Remote Debug With gdb/gdbserver

Overview

Remote debugging is useful or necessary, for example, in the following scenarios:

  • There’s no full debugger on the target host, but a small stub (e.g. gdbserver) is available.
  • There’s no full source on the target host (for various reasons, such as size, security, etc.). In a large project, synchronizing source code across hosts is neither convenient nor safe. This is probably the most common case in the real world.
  • Debug input/output may pollute the target application’s input/output, for example when you are debugging a full-screen editor.

In practice, it’s better to use gdb and gdbserver of the same version. I once had gdb (on CentOS 5, v7.0.1) fail to connect to gdbserver (on Debian 8, v7.7.1).

In the following example, the source code is located on the host debian, the application is built and debugged on debian, and it actually runs on the remote host centos.

Prepare The Executable

Needless to say, you need -g to keep symbols when you build your application. One additional tip: because the target application runs remotely, you don’t actually need to keep symbols in the binary deployed to the target.

jason@debian$ gcc -g -o app app.c
jason@debian$ objcopy --only-keep-debug app app.debug
jason@debian$ strip -g -o app.remote app
jason@debian$ scp app.remote tony@centos:path/app

Explanation:

  1. build app with symbols
  2. extract the symbols into a separate file; now app.debug contains all the debug information. This is a common distribution practice.
  3. generate a version of the executable without debug symbols (app.remote), which is the version you actually distribute. The binary app still contains full symbols for the debugger (you could also debug with app.remote plus app.debug; see the sketch after this list).
  4. distribute the binary to the target host (host: centos, user: tony)
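
If you ever need to debug with only the stripped binary plus the separate debug file, one option (a sketch, not part of the workflow above) is to record a debug link in the stripped binary, so that gdb locates app.debug automatically when you load app.remote:

jason@debian$ objcopy --add-gnu-debuglink=app.debug app.remote
jason@debian$ gdb app.remote

Alternatively, the symbols can be loaded explicitly inside gdb with the symbol-file app.debug command.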

Launch At Target Host

On the target host (centos), start the application

tony@centos$ gdbserver localhost:4444 app
Process app created; pid = 10307
Listening on port 4444

The example uses TCP. In some special cases (such as embedded systems) you can use a serial connection instead. The host part of the host:port pair is not actually used by the current gdbserver version, so you can put anything there.
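
gdbserver can also attach to a process that is already running instead of launching it. A sketch (the pid here is purely illustrative; use whatever pid your target process actually has):

tony@centos$ gdbserver --attach localhost:4444 10307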

Start Debugging Session

Start debugging on the host debian. From the program’s point of view, execution happens on centos while gdb runs remotely (on debian); from the debugger’s point of view, gdb (on debian) is debugging a program running on the remote host (centos).

jason@debian$ gdb app
... license info
Reading symbols from /home/jason/app...done.
(gdb) target remote centos:4444
Remote debugging using centos:4444
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
(gdb) break main
(gdb) continue

Once gdb connects to gdbserver, you can debug as in a normal gdb session: step, set breakpoints, print variables, and so on. Note that with target remote the program has already been started by gdbserver, so you resume it with continue rather than run. One thing to remember is that the input and output of the program happen on the remote host.

On the remote host (centos), the gdbserver session looks like

...
Listening on port 4444
Remote debugging from host 192.168.205.96

... normal application input/output

Child exited with status 0
GDBserver exiting
tony@centos$ 

Integrate Python and C/C++ (1)

In the official Python documentation there are two chapters on the topic of integrating Python and C/C++:

  • Extending and Embedding – a tutorial for C/C++ programmers
  • Python/C API

In addition, there are several open-source utilities that make programmers’ lives easier; the most popular ones are:

  • SWIG
  • Boost.Python/pybind11
  • Cython
  • CFFI

This series of posts describes practical experience with integrating Python and C/C++.

Create a simple python module with C

The Python documentation gives a very simple example (spam), with full source code and a detailed explanation. What is missing is how you run it on a real computer and what the output looks like.

Source Code

Here is the full source code with some concise comments:

#include <Python.h>  /* must be included before any standard headers */
#include <stdlib.h>  /* for system() */

/* module method

 the C function name does not matter and should be static: the method is
 exposed under the name given in the method-definition array
 (see SpamMethods below), not under its C name.

 as a practical convention, name the function module_methodname
*/

static PyObject*
spam_system(PyObject *self, PyObject *args)
{
 const char* command;

 int rc;

 if (!PyArg_ParseTuple(args, "s", &command))
 return NULL;

 rc = system(command);
 return Py_BuildValue("i", rc);
}

/*
 An array of PyMethodDef is passed to module initializer.
 This array defines all methods, each item is

 { name, function_pointer, flags (calling convention, e.g. METH_VARARGS), docstring }

 This array should also be static (no need to be exposed)
*/
static PyMethodDef SpamMethods[] = {
 { "system", spam_system, METH_VARARGS, "Execute a shell command." },

 // other methods

 // end of list
 {NULL, NULL, 0, NULL }
};


/*
 Each module should have one (and only one) module initializer, which
 introduces the module objects into the Python namespace.

 In Python 2 its name MUST be init followed by the module name
 (initspam for this module).

 For example, if the name is changed to init_spam, the module still
 compiles but python can't import it:

python test.py echo hello
Traceback (most recent call last):
 File "test.py", line 4, in 
 import spam
ImportError: dynamic module does not define init function (initspam)

*/
PyMODINIT_FUNC
initspam(void)
{
 PyObject *m;

 m = Py_InitModule("spam", SpamMethods);
 if (m == NULL)
 return;
}
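
For reference only: this example targets Python 2.7, where the initializer must be named init<modulename>. Under Python 3 the same module would instead define a PyModuleDef and a PyInit_spam function. A sketch (not used anywhere else in this post):

/* Python 3 only: module definition plus PyInit_spam initializer */
static struct PyModuleDef spammodule = {
    PyModuleDef_HEAD_INIT,
    "spam",        /* module name */
    NULL,          /* module documentation */
    -1,            /* per-interpreter state size; -1 = keep state in globals */
    SpamMethods
};

PyMODINIT_FUNC
PyInit_spam(void)
{
    return PyModule_Create(&spammodule);
}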

Manually build (on linux)

The documentation doesn’t describe a manual build; instead it recommends using the distutils module. That is good advice, but out of curiosity I still decided to try building manually. Here is the makefile:

$ cat manual.mk
# use pkg-config to find python information
#
# pkg-config --list-all
# pkg-config --cflags --libs python2
#
# -I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7 -lpython2.7
CFLAGS = -g -fPIC -I/usr/include/python2.7
LDFLAGS = -g -shared -fPIC -L/usr/lib/python2.7
LIBS = -lpython2.7

all: spam.so

spam.o: spam.c
        $(CC) -c $(CFLAGS) -o $@ $<

spam.so: spam.o
        $(LD) -o $@ $< $(LDFLAGS) $(LIBS)

clean:
        rm spam.so spam.o

The build process

$ make -f manual.mk
cc -c -g -fPIC -I/usr/include/python2.7 -o spam.o spam.c
ld -o spam.so spam.o -g -shared -fPIC -L/usr/lib/python2.7 -lpython2.7

The python script that tests our new module spam

$ cat test.py
#! /usr/bin/env python

import sys,os
import spam

cmd = ' '.join(sys.argv[1:])
print 'cmd=[{}]'.format(cmd)
rc = spam.system(cmd)
print 'rc={:x}'.format(rc)

And the result of running. Note that spam.system() returns the raw status from the C library’s system() call, so a failed command lookup shows up as rc=7f00: exit code 127 (“command not found”) shifted left by 8 bits.

$ python test.py hello
cmd=[hello]
sh: 1: hello: not found
rc=7f00
$ python test.py echo hello
cmd=[echo hello]
hello
rc=0

Proper method of building (on linux)

As the Python documentation recommends, it is much easier (and more standard and flexible) to build with the distutils/setuptools machinery:

$ cat setup.py
#from distutils.core import setup, Extension
# distutils.core is obsolete, use setuptools instead
from setuptools import setup, Extension

# a package consists of one or more modules, each module consists of
# one or more source files
# this is module spam, built by source: spam.c
module1 = Extension('spam', sources = ['spam.c'])

# this is package spam, consisting of module1
setup(name='spam',
      version='1.0',
      description='This is a demo package',
      ext_modules=[module1])

The build process with distutils:

$ python setup.py build
running build
running build_ext
building 'spam' extension
creating build
creating build/temp.linux-x86_64-2.7
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c spam.c -o build/temp.linux-x86_64-2.7/spam.o
creating build/lib.linux-x86_64-2.7
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/spam.o -o build/lib.linux-x86_64-2.7/spam.so
jasonz@jzdebian $ find build
build
build/lib.linux-x86_64-2.7
build/lib.linux-x86_64-2.7/spam.so
build/temp.linux-x86_64-2.7
build/temp.linux-x86_64-2.7/spam.o
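
The shared object ends up under build/lib.linux-x86_64-2.7/, so a script in the source directory cannot import it yet. Two standard options (not shown in the run above) are to build in place, which drops spam.so next to the sources, or to install the package:

$ python setup.py build_ext --inplace
$ python setup.py install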

Compare the files produced by the two builds (the listing shows the manual build’s intermediate object file spam.o and the shared object spam.so produced by distutils):

$ ls -l spam.o build/lib.linux-x86_64-2.7/spam.so
-rwxr-xr-x 1 jasonz jasonz 17304 Nov 30 13:46 build/lib.linux-x86_64-2.7/spam.so
-rw-r--r-- 1 jasonz jasonz 16752 Nov 30 13:49 spam.o
$ file spam.o build/lib.linux-x86_64-2.7/spam.so
spam.o:                             ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
build/lib.linux-x86_64-2.7/spam.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=9cdc9b1ebf2df10d243fbd931ecea38f3bd63dfc, not stripped

Build with distutils on Windows

Python 3.5 and later support VS2015 and later. For Python 2.7, Microsoft offers a special package, Microsoft Visual C++ Compiler for Python 2.7, which is essentially the VC++ 2008 compiler engine.

Note:

  1. Python 2.7 for Windows is built with VC2008; you can verify this by launching the python interpreter and checking its build banner, which should say MSC v.1500.
  2. Without VC2008, setup.py’s auto-detection fails with the error “Unable to find vcvarsall.bat”.
  3. Some old versions of setup.py use the module distutils.core, which lacks the auto-detection ability; replace it with setuptools.
  4. Microsoft requires setuptools 6.0 or later for this special VC++ package to work.
  5. download ez_setup.py (for setuptools 6.0.1)
  6. run the command python ez_setup.py

Here is the process of building the module on Windows.

D:\codex\python\ext>python setup.py build
running build
running build_ext
building 'spam' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
C:\Users\jasonz\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Python27\include -IC:\Python27\PC /Tcspam.c /Fobuild\temp.win-amd64-2.7\Release\spam.obj
spam.c
creating build\lib.win-amd64-2.7
C:\Users\jasonz\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:C:\Python27\PC\VS9.0\amd64 /EXPORT:initspam build\temp.win-amd64-2.7\Release\spam.obj /OUT:build\lib.win-amd64-2.7\spam.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\spam.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\spam.pyd.manifest
spam.obj : warning LNK4197: export 'initspam' specified multiple times; using first specification
   Creating library build\temp.win-amd64-2.7\Release\spam.lib and object build\temp.win-amd64-2.7\Release\spam.exp

D:\codex\python\ext>python test.py echo hello
cmd=[echo hello]
hello
rc=0

D:\codex\python\ext>python test.py  hello
cmd=[hello]
'hello' is not recognized as an internal or external command,
operable program or batch file.
rc=1

D:\codex\python\ext>

Find DLL/SO of an Executable

On Linux

ldd

ldd – print shared library dependencies

$ cat hello.cpp
#include <iostream>
int main()
{
        std::cout << "Hello, world!" << std::endl;
        return 0;
}
$ g++ -o hello hello.cpp
$ ldd -v hello
        linux-vdso.so.1 =>  (0x00007fffb0bfd000)
        libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x0000003964a00000)
        libm.so.6 => /lib64/libm.so.6 (0x000000395e600000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x0000003964200000)
        libc.so.6 => /lib64/libc.so.6 (0x000000395e200000)
        /lib64/ld-linux-x86-64.so.2 (0x000000395de00000)

        Version information:
        ./hello:
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
                libstdc++.so.6 (CXXABI_1.3) => /usr/lib64/libstdc++.so.6
                libstdc++.so.6 (GLIBCXX_3.4) => /usr/lib64/libstdc++.so.6
        /usr/lib64/libstdc++.so.6:
                ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
                libgcc_s.so.1 (GCC_4.2.0) => /lib64/libgcc_s.so.1
                libgcc_s.so.1 (GCC_3.3) => /lib64/libgcc_s.so.1
                libgcc_s.so.1 (GCC_3.0) => /lib64/libgcc_s.so.1
                libc.so.6 (GLIBC_2.3.2) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
        /lib64/libm.so.6:
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
        /lib64/libgcc_s.so.1:
                libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
        /lib64/libc.so.6:
                ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
                ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2

locate

locate – reads one or more databases prepared by updatedb(8) and writes file names matching at least one of the PATTERNs to standard output, one per line. It is handy for finding where a library reported by ldd actually lives, and which other copies exist.

By default, locate does not check whether files found in the database still exist; locate can never report files created after the most recent update of the relevant database.
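
For example, to list every indexed copy of the C++ runtime that ldd resolved above (the exact paths depend on your system and on how recently updatedb ran):

$ locate libstdc++.so.6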

On Windows

Process Monitor

Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. Because its events include image (DLL) loads, filtering on the Load Image operation shows which DLLs an executable actually pulls in. Its feature list includes:

  • More data captured for operation input and output parameters
  • Non-destructive filters allow you to set filters without losing data
  • Capture of thread stacks for each operation make it possible in many cases to identify the root cause of an operation
  • Reliable capture of process details, including image path, command line, user and session ID
  • Configurable and moveable columns for any event property
  • Filters can be set for any data field, including fields not configured as columns
  • Advanced logging architecture scales to tens of millions of captured events and gigabytes of log data
  • Process tree tool shows relationship of all processes referenced in a trace
  • Native log format preserves all data for loading in a different Process Monitor instance
  • Process tooltip for easy viewing of process image information
  • Detail tooltip allows convenient access to formatted data that doesn’t fit in the column
  • Cancellable search
  • Boot time logging of all operations

Environment in Make

The following description is based on Linux/bash; it is easy to adapt to other platforms/shells.

Set Environment

There are two ways to set environment variables

export MYVAR=myvalue

This command sets the environment variable in the current shell; the variable stays valid until it is deleted or changed, and is passed to all child processes by default.

MYVAR=myvalue command

This sets the environment variable for that command only (it is inherited by that process and its children); it does not change the current shell environment.

We use this Perl script to show a process’s environment:

$ cat env.pl
#! /usr/bin/env perl

my $en = shift || 'MYVAR';
my $ev = $ENV{$en};
$ev = '' unless defined($ev);
print "$en=$ev\n";

Here is the result

$ ./env.pl
MYVAR=
$ export MYVAR=hello
$ echo $MYVAR
hello
$ ./env.pl
MYVAR=hello
$ unset MYVAR
$ echo $MYVAR

$ ./env.pl
MYVAR=
$ MYVAR=Hello ./env.pl
MYVAR=Hello
$ echo $MYVAR

$

Set Make Variable

A make variable can be set from the environment (by exporting it before invoking make), on the command line (by passing VAR=value after make), or inside the makefile.

  1. both the environment and the command line set the initial variable value
  2. a setting inside the makefile overrides the environment value (unless make is run with -e)
  3. a command-line setting always overrides the makefile setting (in other words, the makefile cannot change command-line settings)

Here is a makefile

$(info init value CC=$(CC), CXX=$(CXX))

CC :=gcc
CXX :=g++

all:
        @echo CC=$(CC) CXX=$(CXX)

The execution result is

$ CC=hello make
init value CC=hello, CXX=g++
CC=gcc CXX=g++
$ CC=hello make -e
init value CC=hello, CXX=g++
CC=hello CXX=g++
$ make CC=hello
init value CC=hello, CXX=g++
CC=hello CXX=g++
$ CC=hello make CXX=world
init value CC=hello, CXX=world
CC=gcc CXX=world
$
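
There is one exception to rule 3: in GNU make, the override directive lets the makefile win even over a command-line setting. A minimal sketch (behavior as described by the GNU make manual, not taken from the transcript above):

# override wins even against a command-line assignment
override CC := gcc

all:
        @echo CC=$(CC)

With this makefile, make CC=hello should still print CC=gcc.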