C++ – From goto to std::transform (github.com/dobiasd)
88 points by jlemoine on April 11, 2014 | 92 comments



What I find painful about writing pseudo-functional C++ using <algorithm> and other tricks is that you have to allocate storage for intermediate values. This makes C++ algorithms a lot less composable than similar things in other languages.

So if I wanted to do something like (in Haskell):

    sumOfOddSquares xs = sum . map (^2) . filter odd $ xs
In C++, I would need to allocate temporary storage for the intermediate values.

    std::vector<int> odd_numbers;
    std::copy_if(xs.begin(), xs.end(), std::back_inserter(odd_numbers), isOdd);
    std::transform(odd_numbers.begin(), odd_numbers.end(), odd_numbers.begin(), square);
    std::accumulate(odd_numbers.begin(), odd_numbers.end(), 0, std::plus<int>());
This becomes quite painful very quickly. You can use solutions like boost::filter_iterator, but soon you're knee deep in C++ templates and the long compile times and awful error messages that follow.
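
For the record, a minimal sketch of what the iterator-adaptor route can look like (assuming Boost.Iterator; hand-written functors rather than lambdas, since the adaptors want copy-assignable function objects):

    #include <boost/iterator/filter_iterator.hpp>
    #include <boost/iterator/transform_iterator.hpp>
    #include <numeric>
    #include <vector>

    struct IsOdd  { bool operator()(int i) const { return i % 2 != 0; } };
    struct Square { int  operator()(int i) const { return i * i; } };

    // No intermediate container: elements are filtered and squared on the fly.
    int sumOfOddSquares(const std::vector<int>& xs)
    {
        IsOdd odd; Square sq;
        auto first = boost::make_transform_iterator(
            boost::make_filter_iterator(odd, xs.begin(), xs.end()), sq);
        auto last = boost::make_transform_iterator(
            boost::make_filter_iterator(odd, xs.end(), xs.end()), sq);
        return std::accumulate(first, last, 0);
    }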

The example above is a bit contrived since it can easily be done with a single accumulate, but you get the idea...

    std::accumulate(xs.begin(), xs.end(), 0,
        [](int sum, int i) { return sum + ((i&1) ? i*i : 0); });
So while you can have lots of fun with <algorithm>, it's quite limited in what it can do.


On the other hand, C++ has more flexibility. Since C++ allows you to specify your allocation scheme, you can guarantee a piece of code will never trigger an allocation, allocate in a special way to ensure cache coherence, etc.

For example, you could have written equivalent logic that allocates odd_numbers on the stack or from a special memory pool.
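
e.g. a minimal sketch of the stack variant (kMaxSize is an assumed bound, not something from the original example):

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Same pipeline, but the intermediate buffer lives on the stack.
    // kMaxSize is an assumed bound; real code would check it or fall back.
    int sumOfOddSquaresNoHeap(const std::vector<int>& xs)
    {
        constexpr std::size_t kMaxSize = 1024;
        std::array<int, kMaxSize> odd_numbers; // stack storage, never the heap
        auto end = std::copy_if(xs.begin(), xs.end(), odd_numbers.begin(),
                                [](int i) { return i % 2 != 0; });
        std::transform(odd_numbers.begin(), end, odd_numbers.begin(),
                       [](int i) { return i * i; });
        return std::accumulate(odd_numbers.begin(), end, 0);
    }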

Sure, application developers shouldn't care about that kind of fine-grained control. However, they should care about having fast, small, safe, and high-quality libraries available. For library authors and developers in special domains (high-performance, safety-critical, etc.), these tools are essential.


> On the other hand, C++ has more flexibility. Since C++ allows you to specify your allocation scheme, you can guarantee a piece of code will never trigger an allocation, allocate in a special way to ensure cache coherence, etc.

This is true but the way C++ standard containers work with allocators is rather clumsy. So clumsy that often when custom allocation logic is required, standard containers are not used at all and custom containers are built instead. For example, EA has used a custom STL-esque container library to avoid memory allocation pressure in console and PC games.

> For example, you could have written equivalent logic that allocates odd_numbers on the stack or from a special memory pool.

But this example does not require any allocated space at all. Yet there's no way of doing it with standard C++ algorithms without allocating additional space.


Luckily, functions from <algorithm> don't only work with STL containers; they work with any class that adheres to the STL container interface.
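
e.g. even a plain array works, since pointers model the iterator interface:

    int xs[] = {1, 2, 3, 4};
    std::transform(std::begin(xs), std::end(xs), std::begin(xs),
                   [](int i) { return i * i; });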


> But this example does not require any allocated space at all.

That's not possible if you're comparing algorithms apples-to-apples. If you have an intermediate collection, it's in memory somewhere, even if the language does the allocation for you transparently.


> If you have an intermediate collection, it's in memory somewhere, even if the language does the allocation for you transparently.

This toy example algorithm, and many practical algorithms, do not need an intermediate collection to be stored in memory.

Using some kind of laziness, you need exactly one temporary element at a time, not the whole collection. You can make this happen not only in Haskell but also with Python generators, Clojure lazy sequences, or even in C++ using Folly.

The lack of means of composition, the inability to pipe the output of one algorithm as the input of another, is in my opinion the greatest shortcoming of C++ <algorithm>.


> you can guarantee a piece of code will never trigger an allocation

I'd be interested to see an example of this, if possible, say, with an application of flatMap and filtering in a chain or something similar (or just exDM69's example), without pre-scanning the collection.

I think C++ does really well when memory needs are known or can be bounded to something reasonable up front, but not as well as a good high performance GC when many temporary allocations need to be made where their size can't be known until runtime. I would love to be wrong about that though, the language has a lot going for it.


You can guarantee the heap will not be touched in any code that:

- does not use malloc or new
- does not call any function that uses malloc or new

When you do that, you make allocation decisions orthogonal to the algorithm. This is how all the std algorithms are implemented.
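
A minimal sketch of that separation (squareInto is a made-up name):

    #include <algorithm>
    #include <iterator>

    // The algorithm itself never allocates; the caller decides where the
    // output lives (stack, pool, preallocated buffer, ...).
    template <typename InIt, typename OutIt>
    OutIt squareInto(InIt first, InIt last, OutIt out)
    {
        return std::transform(first, last, out, [](int i) { return i * i; });
    }

    void caller()
    {
        int in[4] = {1, 2, 3, 4};
        int out[4]; // stack storage, heap untouched
        squareInto(std::begin(in), std::end(in), out);
    }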

Haskell also separates allocation from algorithm, but it does so by making decisions for the programmer -- by using GC. This is not an acceptable trade-off for a C++ developer, or else she would be using a different language (or smart pointers).


> You can guarantee the heap will not be touched in any code that: does not use malloc or new; does not call any function that uses malloc or new

Which is extremely limiting, in the context of data analysis chains like the examples given.


No it isn't. You just need C++ libraries that do data analysis chains in a similar way. There are examples elsewhere in the comments here.


This is indeed painful. However, with C++11's move semantics, I suspect libraries will over time develop in ways that make this less painful. See nly's comment for an example.


It's mostly painful because of all the iterator pairs, not because of the allocation.

Though I agree that better libraries could clean this sort of trivial transformation up immensely.


<algorithm> is so 90s. In C++14 you'll be able to do this:

    auto square_copy = [](auto v) {
       for (auto& e: v) { e *= e; }
       return v;
    };
Another option:

    auto square = [](auto& v) { v *= v; };

    auto map = [](auto range, auto fun) {
        transform (begin(range), end(range), begin(range), fun);
        return range;
    };

    vector<int> vec = {1,2,3,4,5,6};
    auto squared = map (vec, square);
Note that, in both of these cases, if you use std::move() to gift a container to these functions, they will not actually allocate any memory and will effectively operate in-place.
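
e.g. (a quick sketch reusing the map and square definitions above):

    vector<int> vec = {1,2,3,4,5,6};
    auto squared = map (std::move(vec), square); // vec's buffer is moved in and reused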


Am I missing something? I don't see anything beyond C++11 in here.


Parameters can't be auto in C++11. Polymorphic lambdas are a C++14 feature.

Incidentally, are you Stormhawk on Freenode's #C++?


Ah right. I have to admit I was so astounded this was omitted in C++11 (with some really dumb rationale, iirc) that I keep forgetting it wasn't.

And nope, that's not me. I go by stormbrew on freenode as well and haven't been in a C++ focused channel since the late 90s on efnet. :)


Not quite as astounding as omitting make_unique :P No matter, C++14 is going to be here very soon. Both of the open source compilers are almost there, and I've read the standard will be final once the Ts are crossed.


Using Facebook's `folly::gen` library [1], you can get it down to:

    vector<int> squareVec3(const vector<int>& v) {
      return from(v) | mapped([](int i){ return i * i; }) | as<vector>();
    }
which is pretty slick. There are a bunch of the usual LINQ/functional operators in `folly::gen` that are quite nice. As an example, to filter out the odd elements of `v`, you can do

    vector<int> squareVecFiltered(const vector<int>& v) {
      return from(v) 
           | filter([](int i){ return i % 2 == 0; })
           | mapped([](int i){ return i * i; })  
           | as<vector>();
    }
https://github.com/facebook/folly/tree/master/folly/gen


Or using [Linq](https://github.com/pfultz2/Linq):

    vector<int> squareVec(const vector<int>& v)
    {
        return LINQ(from(i, v) select(i*i)) | linq::to_container;
    }
And also filtering out odd elements:

    vector<int> squareVecFiltered(const vector<int>& v)
    {
        return LINQ(from(i, v) where(i % 2 == 0) select(i*i)) | linq::to_container;
    }


That's cool, but I'd rather not bitwise-or my functional operators together. Something like this would be better:

    vector<int> squareVecFiltered(const vector<int>& v) {
      return from(v,
           filter([](int i){ return i % 2 == 0; }),
           mapped([](int i){ return i * i; }),
           as<vector>());
    }


That is actually really nice.


C++ is evolving at a kind of breakneck pace lately, and it's already a lot different from the language it used to be. Auto, range loops, and lambdas are making it much more pleasant to work in while still maintaining a lot of performance advantages.

I know it's pretty popular to hate on C++, but I'm pretty excited about where it's going, personally.


I've recently started working in C++ (I've had some prior knowledge, but mostly basics), and whenever I try to Google stuff up, a few of the results are "Don't use C++ because of X".

I read a few of them out of curiosity and realised that they are mostly religious reasons to hate C++; hardly any of them are proper, factual, or empirical reasons. So my hypothesis is the following: they were unable to learn C++ and, out of immaturity, decided it was stupid[0][1].

[0]: I'm no saint to this. I said on numerous occasions Erlang is shit because I couldn't wrap my head around the syntax.

[1]: I believe it's a phenomenon associated with Psychological Projection, but I might've confused it with something else.


C++ isn't the easiest language to work with, but what I find truly bizarre are all the internet people who apparently hate C++ while also praising C as wonderful and "elegant". I'm convinced that the vast majority of them have never written anything significant in C, which is a painful experience tolerated only by systems programmers who actually need something that's one step above a portable assembly language.

C++ only fails if you try to write it like C, and don't take advantage of all the lovely new tools it gives you. It's always been a more useful language for nontrivial projects, and C++11 elevates it so much further.


Yes, many fail to understand that most C++ warts are caused by its C compatibility and are actually C issues.

Which incidentally also affect Objective-C for the same reasons.


Dear HN,

Is it really necessary to start another language war thread?

Sincerely, a happy C developer.


I am not starting another language war.

Just stating the fact that when someone posts "C++ is horrible because of X", that X is either also present in ANSI C, caused by compatibility with C semantics, or caused by compatibility with the C toolchain.


How about this one: "C++ is horrible because of implicit copy constructors"? That one's not in C :-).

The newest additions to C++ for functional programming and automatic typing are quite intriguing. Blaming everything wrong with C++ on C is as disingenuous as labeling everything about C++ as wrong because one thing about it is wrong.


> How about this one: "C++ is horrible because of implicit copy constructors"? That one's not in C :-).

I also mentioned compatibility with C semantics.

"The Design and Evolution of C++" contains lots of such examples.


C doesn't implicitly convert one struct into another. Why implicitly convert one class into another just because I created a constructor in one class that takes the other class as a parameter? The "explicit" keyword should have been "implicit", with the default behavior reversed.
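
A sketch of the hazard (Meters is a made-up type):

    struct Meters {
        Meters(double v) : value(v) {} // not explicit, so a double converts silently
        double value;
    };

    void drive(Meters distance);

    // drive(5.0); // compiles: 5.0 is implicitly converted to Meters.
    // Declaring the constructor `explicit` would reject this call;
    // the point above is that rejection should have been the default.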


I'm a 20 year C programmer finally making the transition to C++, and as far as I can tell the main advantages C has left are:

- Faster compilation
- Comprehensible error messages
- No cout


I use C++ daily at work.

C++ code also tends to have a lot of allocations and data copying going on. I know you can get around those, but it seems like C++ programmers don't.

C++ looks nicer at first sight, but when you want to make sure error handling is correct, there's practically little advantage compared to C. Sure, C++ does have the STL, but programmers still seem to occasionally allocate on the function stack, return a pointer to it, and manage to produce buffer overflows.

C has a much simpler flow of execution. C++ exceptions mean there end up being a lot of different possible code execution paths. In C, the flow of execution is almost always simple and predictable.

C++ invites a style where logic is spread over many layers of abstraction. It's often hard to see the whole picture of what's going on because of that. Instead you have to check all those constructors and destructors, watch out for inheritance, etc. In C, most related logic tends to be in the same source file.

A lot of bugs I've seen in legacy code are related to these issues.


I have more of an issue with the poor performance of iostreams than with the interface, personally. Typesafe, type-deducing string formatting is a pretty nice thing to have.


Maybe because iostreams are synchronized with stdio by default? It's possible to disable that for a big speed boost.
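
i.e. something like:

    #include <iostream>

    int main()
    {
        std::ios_base::sync_with_stdio(false); // stop syncing with C stdio
        std::cin.tie(nullptr);                 // don't flush cout before every cin read
        // ... iostreams-heavy I/O ...
    }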

There's also the issue that iostreams might be implemented on top of stdio for simplicity. Hard to make it faster in that case!


Even then, it's pretty much the only place in the stdlib that extensively uses virtual calls, via std::locale. It's very complicated implementation-wise (I actually worked on an implementation of it once, and it's not fun at all), and it's hard to find places to optimize.

It's not that it's too heavy for a console app's output, really, but for things like converting numbers to strings and vice versa (à la boost::format) it is definitely a lot slower than atoi/sprintf.
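
A sketch of the two conversion paths being compared:

    #include <cstdio>
    #include <sstream>
    #include <string>

    // C-style: straight into a buffer, no locale machinery involved.
    std::string viaSnprintf(int n)
    {
        char buf[16];
        std::snprintf(buf, sizeof buf, "%d", n);
        return buf;
    }

    // iostreams: the insertion runs through std::num_put via the imbued locale.
    std::string viaStream(int n)
    {
        std::ostringstream oss;
        oss << n;
        return oss.str();
    }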


I agree. I think Stroustrup got many things right, particularly RAII. It's not a consistent language, but it's a very practical one.


Most Internet hivemind behavior (euphemistically called "conventional wisdom") is more about belonging and less about discussing truth. It's similar to the rants about Perl's ugliness: it's up to the programmer to make it readable.

C++ can be a perilous language, but it affords a surprising amount of expressive power with very low performance overhead. It requires you to be focused and think clearly about things like object lifetimes. It rewards thinking hard ahead of time over simply jumping in, writing code, and debugging it until it works.


I have seen people give up on C++ without actually even trying to learn it.

Somehow all these pointers are "scary" and "hard". And people usually try to learn C++ the wrong way. ("Let's download this big library/framework and figure it out by doing.")


There are good arguments against C++, but they come down to its extreme complexity and its lack of extra safety compared to C. Both are fair points, and they're why I avoid it.


I consider modern C++ to be a lot safer than C. Between RAII, copy semantics, and move semantics, pointer use has gone down quite a bit.

When you do use pointers, we finally have a real null pointer literal now. (I believe the latest C spec has also received this.)
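
e.g. the classic overload case:

    void f(int);
    void f(char*);

    void demo()
    {
        // f(NULL);  // NULL is 0 or 0L: picks f(int), or is ambiguous
        f(nullptr);  // nullptr has its own type; unambiguously calls f(char*)
    }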

C++ is certainly crufty. The language has been around for a long time and many features in the specification exist to support legacy code. This is a side effect of being a living language with a huge user base.

Modern C++ tends to be fairly clean and reasonably easy to work in. Herb Sutter has some great talks and blog articles around the topic. For example: http://channel9.msdn.com/Events/Build/2014/2-661


I think it is somewhat telling that even gcc switched to C++.


What do you mean by "Lack of extra safety compared to C"? I'd have imagined the exact opposite, but maybe I'm missing something there.


C++ is way safer than C thanks to stronger type checking and what you can achieve via library types like STL collections.

However the C unsafety is still present due to the compatibility compromises with C in C++'s design.

With teams it is very hard to control what each developer makes use of, even with static analysis.


That was kind of my point. I really think that if anything, C++ cannot be as unsafe as C. You can do crap in C++ just as well as you can do in C, but at least C++ gives you tools to do things way more safely IMO.


Yes, that is one reason why I never liked C, but do enjoy C++ for personal projects or when everyone on the team is quite comfortable with it.

For the other type of enterprise projects, I'd rather use languages that follow the Algol lineage of memory safety.


I'm not really sure why references are being used here if you're just making a copy straight away.

    vector<int> square(vector<int> vcopy)
    {
        for (int &i : vcopy)
            i *= i;
        return vcopy;
    }
is shorter, (IMO) much more readable, and, in my tests, measurably faster than the reference version. If you really want the std::transform version,

    vector<int> square(vector<int> vcopy)
    {
        transform(begin(vcopy), end(vcopy), begin(vcopy),
                  [](int i) { return i * i; });
        return vcopy;
    }
works just as well.

Still, I guess it's beside the point.


Awesome! It is more readable and faster. Thank you very much. I changed my article accordingly.


I like your version better, but I suspect it could be significantly slower on compilers you haven't tested.


After coding in languages like Scala/Python for a while, it took me a while to parse this. Scala equivalent: `def square(a: List[Int]) = a map {i => i*i}`


After coding in languages like C/C++/ObjC for a while, it took me a while to parse your Scala equivalent.

Personally I find the C++ much clearer – I don’t see either as that much better/worse.


Convenience overloads for common use cases (iterating over the full container, accumulating onto a default-constructed accumulator, etc.) would be great to have as standard. Do they exist and I've missed them? If not, what's the rationale? (This goes for the STL in general, not specifically these higher-order algorithms.)



That's a valid technical reason why there isn't a std::sort that takes containers. It's not a valid reason why there can't be a std::range_sort or std::range::sort that takes container arguments.
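
Something like this sketch (the range namespace here is hypothetical, nothing standard):

    #include <algorithm>
    #include <iterator>

    namespace range {
        template <typename Container>
        void sort(Container& c)
        {
            std::sort(std::begin(c), std::end(c));
        }
    }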


If I were doing heavy C++11 today, that kind of range namespace would be the second thing I'd write; the first being a bunch of string functions for the operations I actually want to do with strings (as opposed to the low-level "character buffer manipulation primitives" the STL offers).

I was ok with that low level, low semantics design in the STL until I heard that stuff about integrating a graphics library. Now I'm just confused. :)


Readability improved from squareVec1 to squareVec4, then went downhill. Hiding loops behind new names doesn't make programming better.


At Going Native last year Sean Parent, from Adobe, gave a superb talk called 'C++ Seasoning' where his number one pro-tip for C++ was: no raw loops.

He made a good case that anywhere you have a non-trivial loop in C++, you're almost always going to be better off folding the problem into a composition of algorithms. The code gets more readable and more performant, and redundant code can often be removed.

Please take an hour to watch it here in glorious high quality:

http://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoni...


And if you have nothing better to do, may I suggest watching anything by Sean Parent. Some of the old Adobe talks might be in low-res, but they're still very interesting on the abstraction side of things.


Where can we find these old talks?


Here's one :

A Possible Future of Software Development: www.youtube.com/watch?v=4moyKUHApq4


I think most of them are on YouTube. I'll get back to you tomorrow.


It does if you're familiar with the common higher-order functions used in functional programming.


If you are familiar with both C++ and the common higher-order functions used in functional programming, I still think that squareVec4 is, by leaps and bounds, the most readable.

squareVec4 is simple, concise, and leaves little to no room for ambiguity. squareVec6 is a monstrosity that requires reading more text, involves more complicated ASCII, and invokes more concepts.

And for the author to then claim that making use of "cool stuff" in <algorithm> is more maintainable code? That's just crazy talk man!


Isn't this an artifact of C++ rather than a general truth, though? If you could just invoke something like:

   return v.fold([], def (acc, i) { acc.push(i*i) });
Or better:

   return v.map(def(i) { i * i })
Something like 'map' in this case is much more appropriate, since it's operation on a collection which returns a new collection with a given operation applied on each element.


My thoughts exactly. It's simply not true that "everybody knows without thinking" what the std::accumulate version will do. It's specific not only to a particular pattern but to the expression of that pattern in a particular language. People new to C++ won't be able to parse "[](vector<int> acc" right away, and it's key to understanding the whole. By contrast, the range-based for loop is obvious even to a C++ novice and easily guessable even by those who don't know C++.

If you want something that's both concise and readable, try the Python list-comprehension version. That is an elegant solution. The C++ accumulate/transform solutions are the very opposite of elegant. In a code review, I would flag them as obfuscatory.


Just as a side note: if you're looking to write a convenience wrapper like the author's transformVec, do not have it take the function as `const function<T(T)>&`. Instead, do something like:

    template <typename T, typename F>
    vector<T> transformVec(const vector<T>& v, F op)
Using std::function in a case where you just want something callable just makes the optimizer's job a lot harder. Using std::function is a good idea if you actually need type erasure (for example if you want to pass something callable to a function that is not a template).
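
Usage stays the same as with the std::function version (assuming the transformVec above):

    // F is deduced, so the lambda's call operator can be inlined at the call site:
    auto squared = transformVec(v, [](int i) { return i * i; });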


Wow, awesome tip, thanks. This makes the wrapped version as fast as the others. I updated my article accordingly.


[deleted]


How would you feel about

    copy_if( begin(v), end(v), back_inserter(result), is_even );
Having reusable named functions increases readability over the roll-your-own for-loop version: you turn your iterations into named primitives. It can be easy to get lazy about doing that where appropriate.


The author probably thinks that all versions are equally fast, but in fact they are just equally slow. Neither gcc 4.8 nor icc 14 can vectorize this loop (likely because of aliasing).


It's not because of aliasing; it's because of the push_back call inside the loop, which may potentially have to allocate (in this case we know it doesn't, but the compiler does not). If you replace squareVec6 with

    vector<int> squareVec6(const vector<int>& v)
    {
        vector<int> result(v.size()); // preallocate (reserve() alone wouldn't do; transform writes through begin(result))
        transform(begin(v), end(v), begin(result), [](int i)
        {
            return i*i;
        });
        return result;
    }
the compiler will be happy to vectorize the loop.


  def square(vector)
    vector.collect { |i| i**2 }
  end


Cool, and it will also probably be 30x slower.


Does speed really matter? I still prefer the readable Lisp version (SBCL, no optimization):

  (defun square-vect (vect)
    (map 'vector (lambda (x) (* x x)) vect))

  (setq *vector*
        (map 'vector (lambda (x) (random 32767))
             (make-array 10000 :element-type 'integer)))

  (time
   (dotimes (i 1000)
     (square-vect *vector*)))
Results :

  0.084 seconds of real time
  0.084005 seconds of total run time (0.084005 user, 0.000000 system)
  100.00% CPU
  218,686,836 processor cycles
  79,876,912 bytes consed
On my system, the C++ versions perform like this:

   ./main 1
   1: 270756377 - elapsed time: 0.022324s
   ~$ ./main 2
   2: 270756377 - elapsed time: 0.018883s
   ~$ ./main 3
   3: 270756377 - elapsed time: 0.016929s
   ~$ ./main 4
   4: 270756377 - elapsed time: 0.014268s
   ~$ ./main 5
   5: 270756377 - elapsed time: 0.016093s
   ~$ ./main 6
   6: 270756377 - elapsed time: 0.018651s
   ~$ ./main 7 # :-)
   7: -1455582645 - elapsed time: 5.3e-05s


Does speed really matter?

Yes. C++ is chosen by developers who tend to choose performance over elegance.

When I was doing a lot more C++, I was intimately aware of the deep performance implications of choosing one data type and implementation technique over another.


Honestly, I understand; I am really bothered too when I see people applying techniques without even thinking about performance implications (and btw, this is also true for security, correctness, portability ... implications).

C++ is viewed as the ultimate language when it comes to performance, for good reasons; of course, if I ever need a fast program, I will consider it.

But as an exercise, why not try to optimize the CL code, first?

By using type declarations and avoiding memory allocation, here is an optimized CL version that runs in under 0.01s, beating the C++ code (note that the original C++ code could be optimized too; e.g. why check the command-line argument every time?).

     (defpackage :test (:use :cl))
     (in-package :test)

     (defconstant +vector+
       (map '(simple-array fixnum (10000))
            (lambda (x) (declare (ignorable x)) (random 32767))
            (make-array 10000 :element-type 'fixnum)))

     (let ((result (make-array 10000 :element-type 'fixnum)))
       (declare (type (simple-array fixnum (10000)) result))
       (flet ((square (x)
                (declare (optimize (speed 3) (safety 0) (debug 0))
                         (fixnum x))
                (the fixnum (* x x))))
         (defun square-vect ()
           (declare (optimize (speed 3) (safety 0) (debug 0))           
                    (type (simple-array fixnum (10000)) +vector+))
           (loop for i from 0 to 9999
              do (setf (aref result i) (square (aref +vector+ i))))
           result)))

     (time
      (dotimes (i 1000)
        (square-vect)))
       
     Evaluation took:
       0.009 seconds of real time
       0.008000 seconds of total run time (0.008000 user, 0.000000 system)
       88.89% CPU
       22,650,364 processor cycles
       0 bytes consed
       
     Evaluation took:
       0.009 seconds of real time
       0.008000 seconds of total run time (0.008000 user, 0.000000 system)
       88.89% CPU
       22,789,708 processor cycles
       0 bytes consed
       
     Evaluation took:
       0.009 seconds of real time
       0.008001 seconds of total run time (0.008001 user, 0.000000 system)
       88.89% CPU
       22,418,566 processor cycles
       0 bytes consed
       
     Evaluation took:
       0.009 seconds of real time
       0.012001 seconds of total run time (0.012001 user, 0.000000 system)
       133.33% CPU
       22,498,018 processor cycles
       0 bytes consed
EDIT: of course, now you can say that the program is unsafe under certain conditions; you may have an issue with the "defconstant" declaration (replacing it with a variable leads to 0.012s results).

But after all, there is no specification. In practice, one must deal with actual requirements and sensible measurements.


Speed always matters. (Sometimes this is speed of development/iteration, sometimes this is speed of execution.)


Likely not that much slower. I'd rather have slightly slower code that is easy to develop and maintain than code that is fast but hard to read, hard to maintain, and easy to introduce bugs into.


I write a lot of C++, and nothing about a standard algorithm loop is hard to read or maintain. For the Python or C++ guru both are trivial to work with.

If I were doing fancy conditional transforms on maps with non-trivial modifications (ie, string manipulation) I'd almost always try to use Python first, though.


I like the Python version of:

  def square(vector):
    return [v**2 for v in vector]


Haskell:

  let square = map (^ 2)
Real partial application and currying are something I'm longing for in other languages; it's easy to fall for the beauty of the point-free style.


Well, in Python that would be:

  from functools import partial
  square = partial(map, lambda i: i**2)
EDIT: I guess it's still not the same, since the lambda isn't a partial application. But I don't think we can just set the exponent in pow(), we need to create a function that takes the arguments in the reverse order. Maybe:

  from functools import wraps, partial
  def flip(func):
      'Create a new function from the original with the arguments reversed'
      @wraps(func)
      def newfunc(*args):
          return func(*args[::-1])
      return newfunc

  _pow = flip(pow) #Take Y, then X

  # Then the equivalent would be
  square = partial(map, partial(_pow, 2))
Of course, flip is rather long-winded because it's a generic function, one could just do:

  _pow = lambda y, x: pow(x, y)


Or SQL:

    SELECT x*x FROM vector;
There's just so many things that SQL got right:

- Non-programmers actually read and modify SQL code. I swear, it happens. It doesn't happen very much even with Python, and it will never happen with C++.

- It can join multiple data structures in a single query, on arbitrary criteria.

- A query can be optimized as a whole. Optimizers freely reorder the parts and actually come up with good plans.

- The presence or absence of indexes (data structures used for acceleration) is transparent to the query. Can any other language do that?

A lot of the code I've seen would be 10x to 100x smaller if expressed in SQL, or an SQL-like language that can handle nested data structures (which is something that I really want).


F#, multiple implementations (comprehension, partial application, parallel):

  //array comprehension
  let squareVec vector = [| for x in vector -> x ** 2.0 |]
  squareVec [| 1.0; 2.0; 3.0 |]
  [| 1.0; 2.0; 3.0 |] |> squareVec

  //inline function
  [| 1;2;3 |] |> Array.map (fun x -> x * x)

  //extract function
  let square x = x * x
  [| 1;2;3 |] |> Array.map square

  //partial
  let squareVec3 = Array.map square
  [| 1;2;3 |] |> squareVec3

  //partial
  let pow x y = y ** x
  [| 1.0; 2.0; 3.0 |] |> Array.map (pow 2.0)
 
  //parallel
  [| 1.0; 2.0; 3.0 |] |> Array.Parallel.map (pow 2.0)

  //can use SIMD on mono (next version on .NET too)
  //no first hand experience, but can also use GPU from F#


Why not map?


collect and map are the same thing, just different names. IIRC, collect was the name used in Smalltalk for map operations.


`result = [i * i for i in v]` looks much more readable to me.

What if you need to square only the even elements? copy_if and transform then don't look so pretty anymore. `result = [i * i for i in v if i % 2 == 0]` is still more readable.

What if you need to do something else and you don't know if there is a transform-like function for it?


> What if you need to do something else and you don't know if there is a transform-like function for it?

What about the same thing in Python? As far as I know, the two examples you gave exhaust list comprehensions' functionality.


It seems like it's the loop itself that is redundant. Why couldn't I just do:

    squaredVec = vec * vec
or

    squaredVec = vec ^ 2


You can, but not with vector. Use valarray if you need to perform those kinds of computations.
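
e.g. a sketch:

    #include <valarray>

    std::valarray<int> vec = {1, 2, 3, 4, 5, 6};
    std::valarray<int> squared = vec * vec; // element-wise multiply: {1,4,9,16,25,36}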


The example is too trivial. You could have 100 ways to write "hello world", but it does not matter.


Why were all the examples using iterators, and not square bracket indexing?


Am I the only one who thinks the std::transform version is LESS readable than the for loop?


  but you still have to read the whole line
Oh the pain!



