OpenMP: Is task_reduction the same as reduction? What is 'in_reduction'?

The way task reduction works is that each task needs to know where to contribute its local result. So, what you have to do is have a taskgroup that "creates" the reduction and then have tasks contribute to it:

// Assume these helper functions are defined elsewhere:
int have_to_create_tasks(void);
int do_something(void);
int do_something_else(void);

void example() {
    int result = 0;
#pragma omp parallel   // create parallel team
#pragma omp single     // have only one task creator
    {
        #pragma omp taskgroup task_reduction(+:result)
        {
            while (have_to_create_tasks()) {
                #pragma omp task in_reduction(+:result)
                {   // this task contributes to the reduction
                    result += do_something();
                }
                #pragma omp task firstprivate(result)
                {   // this task does not contribute to the reduction
                    result = do_something_else();
                }
            }
        }
    }
}

So, the in_reduction clause is needed for a task to contribute to a reduction that has been created by the task_reduction clause of an enclosing taskgroup region.

The reduction clause cannot be used with the task construct; it is only available on worksharing constructs and other loop-like constructs.
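
For contrast, here is a minimal sketch (my own example, not from your code) of the ordinary reduction clause on a worksharing loop:

#include <stdio.h>

int main(void) {
    int sum = 0;
    // Each thread gets a private copy of "sum"; the copies are combined
    // into the original variable at the end of the worksharing loop.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 10; i++) {
        sum += i;
    }
    printf("sum = %d\n", sum);   // prints 45
    return 0;
}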

The only tasking construct that accepts the reduction clause is the taskloop construct, which uses it as a shortcut for an implicit task_reduction that encloses all the tasks it creates; each of those tasks then gets an implicit in_reduction clause, too.
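
Here is a minimal sketch of that shortcut (my own example, not from your code):

#include <stdio.h>

int main(void) {
    int a[10] = {0,1,2,3,4,5,6,7,8,9};
    int sum = 0;

    #pragma omp parallel
    #pragma omp single
    {
        // The reduction clause on taskloop is shorthand for an implicit
        // taskgroup with task_reduction(+:sum) around the generated tasks,
        // each of which gets an implicit in_reduction(+:sum).
        #pragma omp taskloop reduction(+:sum)
        for (int i = 0; i < 10; i++) {
            sum += a[i];
        }
    }
    printf("taskloop sum = %d\n", sum);   // prints 45
    return 0;
}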

UPDATE (to cover the edits by the original poster):

The problem with the code now is that two different things are happening (see the inline comments in your updated code):

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
int main(int argc, char* argv[]){
    int Array [10]= {0,1,2,3,4,5,6,7,8,9};
    int Array_length = 10;
    int counter = 0;
    int result = 0;

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp taskgroup task_reduction(+:result)
        {
            // "result" is a shared variable in this parallel region
            while (Array_length!=counter) {
                if (counter%2==0){
                    #pragma omp task in_reduction(+:result)
                    {
                        // This task will contribute to the reduction result
                        // as you would expect.
                        result+=Array[counter];
                    }
                } else {
                    // This addition to "result" is performed by the "single"
                    // thread and thus hits the shared variable.  You can see
                    // this when you print the address of "result" here
                    // and before the parallel region.
                    result+=Array[counter];
                }
                counter=counter+1;
            }
        } // Here the "single" thread waits for the taskgroup to complete
          // and the reduction to happen.  So, here the value coming from
          // the task reduction is added to the shared variable "result":
          // result = 25 from the "single" thread and result = 20 from the
          // tasks add up to result = 45.
    }
    printf("The sum of all array elements is equal to %d.\n", result);
}

The addition at the end of the taskgroup looks like a race condition: the direct updates to the shared "result" coming from the "single" thread and the update performed when the task reduction is combined at the end of the taskgroup are not synchronized. I guess that the race does not show up here because the code is too short and fast to expose it clearly.
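
If you want to see this yourself, here is a standalone sketch (my own example) of the address-printing check mentioned in the comments above; on typical implementations the direct access by the "single" thread shows the same address as outside the parallel region, while "result" inside the in_reduction task refers to a private reduction copy:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int result = 0;
    printf("outside:       &result = %p\n", (void*)&result);

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp taskgroup task_reduction(+:result)
        {
            // Direct access by the "single" thread: same storage as outside.
            printf("single thread: &result = %p\n", (void*)&result);

            #pragma omp task in_reduction(+:result)
            {
                // Here "result" names the task's private reduction copy,
                // so the printed address typically differs.
                printf("in task:       &result = %p\n", (void*)&result);
                result += 1;
            }
        }
    }
    printf("result = %d\n", result);   // prints 1
    return 0;
}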

To fix the code, you'd also have to put a task construct around the update for the odd elements, like so:

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
int main(int argc, char* argv[]){
    int Array [10]= {0,1,2,3,4,5,6,7,8,9};
    int Array_length = 10;
    int counter = 0;
    int result = 0;

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp taskgroup task_reduction(+:result)
        {
            // "result" is a shared variable in this parallel region
            while (Array_length!=counter) {
                if (counter%2==0){
                    #pragma omp task in_reduction(+:result)
                    {
                        // This task will contribute to the reduction result
                        // as you would expect.
                        result+=Array[counter];
                    }
                } else {
                    #pragma omp task firstprivate(result)
                    {
                        // "result" is now a task-local variable that is not
                        // shared.  If you remove the firstprivate, then the
                        // race condition on the shared variable "result" is
                        // back.
                        result+=Array[counter];
                    }
                }
                counter=counter+1;
            }
        } // Here the "single" thread waits for the taskgroup to complete
          // and the reduction to happen.  The odd-element additions went
          // into the tasks' firstprivate copies of "result" and are
          // discarded, so only the value coming from the task reduction
          // (result = 20) is added to the shared "result".
    }
    printf("The sum of all array elements is equal to %d.\n", result);
}

In my first answer, I failed to add a proper firstprivate or private clause to the task. I'm sorry about that.
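
Note that with this fix the odd-element additions end up in firstprivate copies that are thrown away, so the program prints 20 rather than the full sum of 45. If you want every element to go through the reduction, a minimal sketch (my own variant, not your original code) is to use in_reduction for all tasks and make the loop counter firstprivate:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int Array[10] = {0,1,2,3,4,5,6,7,8,9};
    int result = 0;

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp taskgroup task_reduction(+:result)
        {
            for (int counter = 0; counter < 10; counter++) {
                // Every task contributes to the reduction; "counter" is
                // captured per task so each task adds its own element.
                #pragma omp task in_reduction(+:result) firstprivate(counter)
                {
                    result += Array[counter];
                }
            }
        }
    }
    printf("The sum of all array elements is equal to %d.\n", result);   // 45
    return 0;
}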