Debugging C++ code linked to from an R library

I’m trying to understand how the R library DESeq2 works. It contains some compiled C++ code and has several dependencies. I’ve compiled this C++ code with debug flags, but I’m stumped about what to do next. I’m using R-3.4.2 and GCC-4.9.2 on CentOS 6.6.

Within the R console, I’ve freshly installed the dependencies of DESeq2.

> .libPaths()
[1] "~/Scratch/R-3.4.2_debug"
[2] "/gpfs0/export/opt/R/3.4.2/lib64/R/library"      
> source("")
> biocLite("S4Vectors", lib.loc="~/Scratch/R-3.4.2_debug/", lib="~/Scratch/R-3.4.2_debug/")
> biocLite("IRanges", lib.loc="~/Scratch/R-3.4.2_debug/", lib="~/Scratch/R-3.4.2_debug/")
> biocLite("GenomicRanges", lib.loc="~/Scratch/R-3.4.2_debug/", lib="~/Scratch/R-3.4.2_debug/")
> biocLite("SummarizedExperiment", lib.loc="~/Scratch/R-3.4.2_debug/", lib="~/Scratch/R-3.4.2_debug/")
> biocLite("RcppArmadillo", lib.loc="~/Scratch/R-3.4.2_debug/", lib="~/Scratch/R-3.4.2_debug/")
> biocLite("Rcpp", lib.loc="~/Scratch/R-3.4.2_debug/", lib="~/Scratch/R-3.4.2_debug/")

Then from the terminal, within ~/Scratch/DESeq2/src, I compiled DESeq2:

gcc -I/opt/R/3.4.2/lib64/R/include -I~/Scratch/R-3.4.2_debug/RcppArmadillo/include/ -I~/Scratch/R-3.4.2_debug/Rcpp/include/ -DNDEBUG  -fpic  -g -c DESeq2.cpp -o DESeq2.o
gcc -I/opt/R/3.4.2/lib64/R/include -I~/Scratch/R-3.4.2_debug/RcppArmadillo/include/ -I~/Scratch/R-3.4.2_debug/Rcpp/include/ -DNDEBUG  -fpic  -g -c RcppExports.cpp -o RcppExports.o
gcc -shared -L/opt/R/3.4.2/lib64/R/lib -lRlapack -lRblas -lR -g -o ~/Scratch/R-3.4.2_debug/DESeq2/libs/ RcppExports.o DESeq2.o 

The final compiled shared object finds all of its dependencies.

Then in R, DESeq2 cannot be found:

> library("DESeq2")
Error in library("DESeq2") : ‘DESeq2’ is not a valid installed package
No traceback available 

Comparing this to a previous install of DESeq2, I’m missing the (seemingly) important files ~/Scratch/DESeq2/R/DESeq2, ~/Scratch/DESeq2/R/DESeq2.rdb and ~/Scratch/DESeq2/R/DESeq2.rdx.

There are two main gaps in my understanding here:

  1. I don’t understand how biocLite / install.packages builds packages from BioConductor / CRAN (i.e. where does the R code go?).

  2. I’m not sure how to step through both R and C++ code.


How do I build the DESeq2 package such that I can seamlessly step through both the R and C++ code? This seems promising, but gdb ends up running R itself, and I have no control over the actual R code (even after inserting browser() into it).
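One route that may help (a sketch only; paths are taken from the question, and the breakpoint name is purely illustrative): let R CMD INSTALL build the package, so the R/DESeq2.rdb / .rdx lazy-load database is generated for you, inject debug flags through ~/.R/Makevars, and then run R under gdb via R's -d flag.

```shell
# Compile the package's C++ with symbols and no optimisation; R CMD INSTALL
# and install.packages/biocLite both honour ~/.R/Makevars.
mkdir -p ~/.R
printf 'CXXFLAGS = -g -O0\n' >> ~/.R/Makevars

# Build and install from the source tree: this also creates the
# R/DESeq2, R/DESeq2.rdb and R/DESeq2.rdx files that library() needs.
R CMD INSTALL --library=~/Scratch/R-3.4.2_debug ~/Scratch/DESeq2

# Start R under gdb; C++ breakpoints and browser() calls in the R code
# can then be used side by side in the same terminal.
R -d gdb
# (gdb) break fitBeta        <- hypothetical C++ symbol from DESeq2.cpp
# (gdb) run
# > library(DESeq2, lib.loc="~/Scratch/R-3.4.2_debug")
```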

Chain REST requests using restbed C++

I am trying to chain some REST requests using restbed lib and I have an issue.
So the workflow is something like this: the frontend sends a GET request to the backend. The backend does some processing and should return a response to the frontend, but at the same time it should also POST the response to another REST server.

void CCMService::get_method_handler(const shared_ptr< Session > session)
{
    const auto request = session->get_request();

    int content_length = request->get_header("Content-Length", 0);

    session->fetch(content_length, [](const shared_ptr< Session > session, const Bytes & body)
    {
        std::vector<std::string> resultImages;
        fprintf(stdout, "%.*s\n", (int)body.size(), body.data());
        const auto request = session->get_request();

        const string parameter = request->get_path_parameter("camGroupId");
        try
        {
            resultImages = prepareImages(parameter.c_str());
        }
        catch (const std::exception& e)
        {
            std::string error = e.what();
            std::string message = "{error: \"" + error + "\"}";
            throw std::runtime_error(message);
        }

        fprintf(stderr, "Return response\n");
        session->close(OK, resultImages[0], { { "Content-Length", std::to_string(resultImages[0].length()) } });
        fprintf(stderr, "Send tiles to inference\n");
        //send POST request
    });
}

void CCMService::sendResult(char* result)
{
    auto request = make_shared< Request >(Uri(""));

    request->set_header("Accept", "*/*");
    request->set_header("Content-Type", "application/json");
    request->set_header("Host", "");
    //request->set_header("Cache-Control", "no-cache");

    //create json from result - jsonContent

    request->set_header("Content-Length", std::to_string(jsonContent.length()));

    auto settings = make_shared< Settings >();

    auto response = Http::sync(request, settings);
}

What happens is that when I do the POST request from the sendResult function, it immediately gets an error response and does not wait for the real response.
What am I doing wrong?

Is “using namespace” transitive in C++?

To my astonishment the following code compiles and prints “X” on VC++ 2017:

#include <string>
#include <iostream>

namespace A {
    using namespace std;
}

namespace B {
    using namespace A;
}

namespace C {
    using namespace B;
    string a;
}

int main()
{
    C::a = "X";
    std::cout << C::a;
    return 0;
}

It looks like the using namespace std works from namespace A through namespace B into namespace C.

Is this a bug in Visual C++, or does it agree with the language specification?

I had expected that using namespace std ends at the end of the enclosing scope, which is at the end of the definition of namespace A.

EDIT: I understand that the accepted answer to this question also answers my question. But that post is more about anonymous namespaces, while this one is about the transitivity of the using namespace directive. So I think this is a better example and the question makes sense.
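For what it’s worth, GCC and Clang accept the same program: as far as I can tell, using-directives are transitive during unqualified name lookup in standard C++ ([namespace.udir]), so this appears to be conforming behavior rather than a VC++ quirk. A minimal standalone version (the demo function is mine, added for illustration):

```cpp
#include <string>

namespace A { using namespace std; }  // std's names become visible in A
namespace B { using namespace A; }    // ...and, transitively, in B
namespace C {
    using namespace B;                // ...and in C
    string a;                         // resolves to std::string
}

// The chain of using-directives makes C::a a perfectly ordinary
// std::string object in namespace C.
std::string demo() {
    C::a = "X";
    return C::a;
}
```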

Why is my array declared as an array of references

Today, while compiling some code with GCC 4.9.2 for the first time, I encountered a strange error about an array being interpreted as an array of references.

I was able to reproduce the error with a quick example. Why is the constructor of Link interpreting buses as an array of references in the constructor of Stuff?

The following code works with MSVC10 and ICC 11.1

#include <iostream>
#include <string>

struct Bus
{
    Bus(std::string n) : name(n) {}
    std::string name;
};

template<typename T>
class Link
{
public:
    Link(const T* i)
    {
        data = (T*)i;
    }

    const T* get() const
    {
        return data;
    }

    T* data = nullptr;
};

class Stuff
{
public:
    Stuff(Link<Bus> l_b) : link(l_b) {}
    Link<Bus> link;
};

void print(Link<Bus> l)
{
    std::cout << l.get()->name << '\n';
}

int main(void) {
    Bus buses[4] = { Bus("0"), Bus("1"), Bus("2"), Bus("3") };

    Stuff s(Link<Bus>(&buses[0]));

    return 0;
}
But with GCC and Clang, this gives an error:

main.cpp: In function 'int main()':
main.cpp:44:32: error: declaration of 'buses' as array of references
     Stuff s(Link<Bus>(&buses[0]));

Yet, the call to the print function works as intended. I am clueless about why the constructor fails.

I found a workaround for the problem, by wrapping buses like this in the call to the constructor of Stuff:

Stuff s(Link<Bus>((&buses)[0]));    

But I’m really interested to know why it fails.

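This looks like an instance of the “most vexing parse”: since the statement can syntactically be a declaration, GCC and Clang read it as declaring a function s whose parameter buses has type “array of references to Link&lt;Bus&gt;” (in a declarator, &buses[0] means an array of references), which is ill-formed, hence the error. Brace-initialization or an extra pair of parentheses forces an expression instead. A reduced sketch (types trimmed from the question):

```cpp
#include <string>

struct Bus {
    Bus(std::string n) : name(n) {}
    std::string name;
};

template <typename T>
struct Link {
    Link(const T* i) : data(const_cast<T*>(i)) {}
    const T* get() const { return data; }
    T* data = nullptr;
};

struct Stuff {
    Stuff(Link<Bus> l_b) : link(l_b) {}
    Link<Bus> link;
};

std::string demo() {
    Bus buses[4] = { Bus("0"), Bus("1"), Bus("2"), Bus("3") };

    // Brace-initialization cannot be parsed as a function declaration:
    Stuff s{ Link<Bus>(&buses[0]) };
    // An extra pair of parentheses works too:
    Stuff t((Link<Bus>(&buses[0])));

    return s.link.get()->name;
}
```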

unique_ptr does not see custom constructor for derived class

class A{
    string m_name;
    int m_num;

public:
    A(string name="", int number=0) : m_name(name), m_num(number)
    { cout << "ctorA " << m_name << endl; }

    virtual ~A(){ cout << "dtorA " << m_name << endl; }

    string getName(){ return m_name; }
    void setName(const string name){ m_name = name; }
    int getNumber(){ return m_num; }
};

class B : public A{
    string m_s;

public:
    B(string name="", int number=0, string s="")
        : A(name, number){ m_s = s; }

    string getS(){ return m_s; }
};


auto upB = unique_ptr<B>("B", 2, "B");   //ERROR HERE

error: no matching function for call to 'std::unique_ptr<B>::unique_ptr(const char [2], int, const char [2])'

I don’t understand why it doesn’t see B’s constructor. All seems fine to me. It works with the default constructor, as in:

auto upB = unique_ptr<B>();

Am I doing something wrong or is there some special issue with derived classes?
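This isn’t specific to derived classes: std::unique_ptr’s constructors take a pointer to an already-constructed object (or nothing); they never forward arguments to B’s constructor. The B has to be created first, e.g. with std::make_unique (C++14) or new. A sketch with the cout noise removed:

```cpp
#include <memory>
#include <string>

class A {
public:
    A(std::string name = "", int number = 0) : m_name(name), m_num(number) {}
    virtual ~A() = default;
    std::string getName() { return m_name; }
    int getNumber() { return m_num; }
private:
    std::string m_name;
    int m_num;
};

class B : public A {
public:
    B(std::string name = "", int number = 0, std::string s = "")
        : A(name, number), m_s(s) {}
    std::string getS() { return m_s; }
private:
    std::string m_s;
};

// unique_ptr wraps a pointer that already exists; construct the B first,
// then hand ownership to the smart pointer.
std::unique_ptr<B> makeB() {
    return std::make_unique<B>("B", 2, "B");              // C++14
    // or, pre-C++14: return std::unique_ptr<B>(new B("B", 2, "B"));
}
```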

C-style array vs std::array for library interface

I want to write a library whose interface provides a read function.
A C-style array is error-prone but allows passing a buffer of any size.
std::array is safer but fixes the buffer size at compile time.

// interface.h

// C-style array
int read (std::uint8_t* buf, size_t len);

// C++ array
int read (std::array<std::uint8_t, 16>& buff);

How can I have the best of both worlds?

I was thinking about a function template, but it does not seem practical for a library interface.

template <size_t N>
int read (std::array<std::uint8_t, N>& buf);

std::vector could be a good candidate, but unlike std::uint8_t* and std::array it requires dynamic allocation.

EDIT I like the gsl::span solution a lot. I am stuck with C++14, so no std::span. I don’t know if using a third-party library (GSL) will be an issue / will be allowed.

EDIT I did not think that using char over another type could influence the answer, so to be clearer: this is for manipulating bytes. I changed char to std::uint8_t.

EDIT Since C++11 guarantees that a returned std::vector will be moved rather than copied, returning std::vector<std::uint8_t> is acceptable.

std::vector<std::uint8_t> read();
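One common C++14 compromise, if gsl::span is off the table, is to export a single pointer-plus-length function from the library and add thin inline adapters in the header, so std::array (and std::vector) callers get the size deduced for them. The fill logic below is only a placeholder standing in for the real read:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// The library exports exactly this one symbol, with a stable ABI.
int read(std::uint8_t* buf, std::size_t len) {
    for (std::size_t i = 0; i < len; ++i)
        buf[i] = static_cast<std::uint8_t>(i);  // placeholder "read"
    return static_cast<int>(len);
}

// Header-only conveniences: the size is deduced, nothing is allocated,
// and no template code ends up in the library binary itself.
template <std::size_t N>
int read(std::array<std::uint8_t, N>& buf) {
    return read(buf.data(), N);
}

inline int read(std::vector<std::uint8_t>& buf) {
    return read(buf.data(), buf.size());
}
```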

Numerical differentiation from a table

Can you help me with numerical differentiation, please?
I need to find the first and second derivatives. I tried to do it on my own, finding the first derivative. Can you check whether I’ve got it right?

I have:

double xx[] = {0.5,1.,1.5,2.,2.5};
double yy[] = { -0.69,0.,0.41,0.69,0.92 };

//check the formula, please
void num(double *x, double* y)
{
    double f, f1;
    double d;

    for (int i = 0; i < 5; i++)
    {
        f = (y[i + 1] - y[0]) / (x[i + 1] - 0.5);

        std::cout << "f': " << f << std::endl;
    }

    std::cout << std::endl;
}

Is this right? Can you correct me if it is not? Please.
And how should I go about taking the second derivative?

Protocol Buffers 2 and 3 in same C++ Linux application

I’ve got a C++ Linux application that already uses proto2.

It now needs to be able to parse a particular proto3 schema as well.

Unfortunately, upgrading the proto2 schema is not an option; neither is downgrading the proto3 one.

I’m aware the design smells, but this is what I’ve got to work with.

What’s the least painful way of supporting both at the same time?

I understand the C++ PB libraries allow dynamically loading a .proto instead of using protoc. If I were to go down this path, would I have to completely change the proto2 bits as well? Can the proto2 and proto3 dynamic loaders coexist?

The other solution that comes to mind is doing the proto3 parsing in a shared object and dynamically linking it.

Any other ideas?