Question

Is boost::lexical_cast redundant now that C++11 introduces stoi, stof, and family, or is there still a reason to use it (apart from not having a C++11 compiler)? Do they provide exactly the same functionality?


Solution

boost::lexical_cast

  • handles more kinds of conversion, including iterator pairs, arrays, C strings, etc. (see the sketch after this list)
  • offers the same generic interface (the sto* functions have a different name for each type)
  • is locale-sensitive (sto*/to_string are only partially so; e.g. lexical_cast can process thousands separators, while stoul usually does not)
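
A minimal sketch of the first two points, assuming Boost is available (the counted-range overload takes a pointer and a length):

#include <boost/lexical_cast.hpp>
#include <iostream>
#include <string>

int main() {
    // Null-terminated C string source:
    int a = boost::lexical_cast<int>("42");

    // Counted character range (no null terminator required):
    const char buf[] = {'1', '2', '3'};
    int b = boost::lexical_cast<int>(buf, 3);

    // The same generic spelling works for every target type:
    double d = boost::lexical_cast<double>("3.14");
    std::string s = boost::lexical_cast<std::string>(b);

    std::cout << a << ' ' << b << ' ' << d << ' ' << s << '\n';
}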

OTHER TIPS

boost::lexical_cast gives you a uniform interface across types, which is often very important in generic code.

In general, a consistent interface across types for the same functionality makes it easier to write generic code. For example, the following can be used as a generic parser from string tokens into a std::tuple:

#include <string>
#include <tuple>
#include <type_traits>
#include <vector>
#include <boost/lexical_cast.hpp>

template<typename T>
void fill(T& item, const std::string& token) {
    item = boost::lexical_cast<T>(token);
}

template<int N, typename ...Ts>
void parse(std::integral_constant<int, N>, std::tuple<Ts...>& info, std::vector<std::string>& tokens) {
    fill(std::get<N>(info), tokens[N]);
    parse(std::integral_constant<int, N - 1>{}, info, tokens);
}

template<typename ...Ts>
void parse(std::integral_constant<int, 0>, std::tuple<Ts...>& info, std::vector<std::string>& tokens) {
    fill(std::get<0>(info), tokens[0]);
}

Instead of a tuple, I often use a Boost.Fusion-adapted struct to deserialize tokenized strings directly into a struct in a generic way, as sketched below.
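
A hedged sketch of that approach, assuming Boost.Fusion is available (Person, FillFromTokens, and parsePerson are names made up for illustration):

#include <boost/fusion/include/adapt_struct.hpp>
#include <boost/fusion/include/for_each.hpp>
#include <boost/lexical_cast.hpp>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical record type; the macro adapts it into a Fusion sequence.
struct Person {
    std::string name;
    int age;
    double height;
};
BOOST_FUSION_ADAPT_STRUCT(Person, (std::string, name)(int, age)(double, height))

// Functor applied to each adapted member in declaration order.
struct FillFromTokens {
    const std::vector<std::string>* tokens;
    std::size_t* index;

    template<typename Field>
    void operator()(Field& field) const {
        field = boost::lexical_cast<Field>((*tokens)[(*index)++]);
    }
};

Person parsePerson(const std::vector<std::string>& tokens) {
    Person p;
    std::size_t i = 0;
    FillFromTokens filler = { &tokens, &i };
    boost::fusion::for_each(p, filler);
    return p;
}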

boost::lexical_cast does more than convert to a fixed set of types:

#include <istream>
#include <ostream>
#include <boost/lexical_cast.hpp>

struct A {};
std::ostream& operator << (std::ostream& stream, const A&) {
    return stream;
}

struct B {};
std::istream& operator >> (std::istream& stream, B&) {
    return stream;
}

int main() {
    A a;
    B b = boost::lexical_cast<B>(a);
    (void)b;
}

Its strength and its weakness is that it accepts any pair of types for the conversion, going through an intermediate std::stringstream (unless an optimized specialization applies).
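
Conceptually, the general path behaves roughly like this sketch (lexical_cast_sketch is a made-up name; the real implementation additionally verifies that the whole input was consumed and uses optimized routines for common arithmetic and string types):

#include <boost/lexical_cast.hpp>
#include <sstream>

template<typename Target, typename Source>
Target lexical_cast_sketch(const Source& src) {
    std::stringstream interpreter;
    Target result;
    // Stream the source in, then read the target back out.
    if (!(interpreter << src) || !(interpreter >> result))
        throw boost::bad_lexical_cast();
    return result;
}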

Performance-wise, you can do the comparison using the following code (it's a variation of my post here):

#include <iostream>
#include <string>
#include <sstream>
#include <vector>
#include <algorithm>
#include <chrono>
#include <random>
#include <stdexcept>
#include <type_traits>
#include <boost/lexical_cast.hpp>

using namespace std;

// 1. A way to easily measure elapsed time -------------------
template<typename TimeT = std::chrono::milliseconds>
struct measure
{
    template<typename F>
    static typename TimeT::rep execution(F const &func)
    {
        auto start = std::chrono::system_clock::now();
        func();
        auto duration = std::chrono::duration_cast< TimeT>(
            std::chrono::system_clock::now() - start);
        return duration.count();
    }
};
// -----------------------------------------------------------

// 2. Define the conversion functions ========================
// A. Using boost::lexical_cast ------------------------------
template<typename Ret> 
Ret NumberFromString(string const &value) {
    return boost::lexical_cast<Ret>(value);
}

// B. Using c++11 stoi() -------------------------------------
int IntFromString(string const &value) { 
    return std::stoi(value);
}

// C. Using c++11 stof() -------------------------------------
float FloatFromString(string const &value) { 
    return std::stof(value);
}
// ===========================================================

// 3. A wrapper to measure the different executions ----------
template<typename T, typename F> long long 
MeasureExec(std::vector<string> const &v1, F const &func)
{
    return measure<>::execution([&]() {
        for (auto const &i : v1) {
            if (func(i) != NumberFromString<T>(i)) {
                throw std::runtime_error("FAIL");
            }
        }
    });
}
// -----------------------------------------------------------

// 4. Machinery to generate random numbers into a vector -----
template<typename T>
typename std::enable_if<std::is_integral<T>::value>::type
FillVec(vector<T> &v)
{
    mt19937 e2(1);
    uniform_int_distribution<> dist(3, 1440);
    generate(v.begin(), v.end(), [&]() { return dist(e2); });
}

template<typename T>
typename std::enable_if<!std::is_integral<T>::value>::type
FillVec(vector<T> &v)
{
    mt19937 e2(1);
    uniform_real_distribution<> dist(-1440., 1440.);
    generate(v.begin(), v.end(), [&]() { return dist(e2); });
}

template<typename T>
void FillVec(vector<T> const &vec, vector<string> *result)
{
    result->resize(vec.size());
    for (size_t i = 0; i < vec.size(); i++)
        result->at(i) = boost::lexical_cast<string>(vec[i]);
}
// -----------------------------------------------------------

int main()
{
    std::vector<int> vi(991908);
    FillVec(vi);
    std::vector<float> vf(991908);
    FillVec(vf);

    std::vector<string> vsi, vsf;
    FillVec(vi, &vsi);
    FillVec(vf, &vsf);

    cout << "C++ 11 stof function .. " <<
        MeasureExec<float>(vsf, FloatFromString) << endl;
    cout << "Lexical cast method ... " <<
        MeasureExec<float>(vsf, NumberFromString<float>) << endl;

    cout << endl << endl;

    cout << "C++ 11 stoi function .. " <<
        MeasureExec<int>(vsi, IntFromString) << endl;
    cout << "Lexical cast method ... " <<
        MeasureExec<int>(vsi, NumberFromString<int>) << endl;

    return 0;
}

When executed with

g++ -std=c++11 -Ofast -march=native -Wall -pedantic main.cpp && ./a.out

The results are

C++ 11 stof function .. 540
Lexical cast method ... 559

C++ 11 stoi function .. 117
Lexical cast method ... 156

The C++11 specialized functions certainly seem to perform better. But they are exactly that, specialized; as such, they make the construction of abstract interfaces harder than lexical_cast does.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow