Industrial challenge 2017

The 2017 industrial challenge and its solutions to be presented at WATERS'17
arne.hamann
Posts: 34
Joined: Thu Jan 28, 2016
Location: Renningen

Industrial challenge 2017

Post by arne.hamann » Thu Feb 16, 2017

We are happy to announce the industrial challenge of WATERS 2017 proposed by Arne Hamann, Simon Kramer, Michael Pressler, Dakshina Dasari, Falk Wurst, and Dirk Ziegenbein from Robert Bosch GmbH.

Details about the intention of the challenge and a high level description can be found here: https://waters2017.inria.fr/challenge/.

A detailed description of the actual challenge can be found in the attached document:
WATERS2017_Industrial_Challenge_Bosch.pdf
(124.04 KiB) Downloaded 1069 times
The corresponding challenge model file is available in two formats:

The latest version of the Amalthea format, which is now supported by the tool chain of the official Eclipse project APP4MC: https://projects.eclipse.org/proposals/app4mc
The previously used Amalthea format, supported by the tool chain of the Amalthea4public project: http://www.amalthea-project.org/

arne.hamann
Posts: 34
Joined: Thu Jan 28, 2016
Location: Renningen

Re: Industrial challenge 2017

Post by arne.hamann » Fri Feb 17, 2017

The attached archive contains some screenshots explaining how to work with the model using the above-mentioned tool chains.
workflowExample.zip
(135.44 KiB) Downloaded 920 times

medinajl
Posts: 9
Joined: Fri Jun 26, 2015

Re: Industrial challenge 2017 -> Please clarify "Local" and "Frequency"

Post by medinajl » Mon Feb 27, 2017

Hi Arne, hi Sophie,

First of all thanks for your efforts in preparing this challenge!

We are starting the work to prepare a response for this edition's challenge and have seen some points we'd need to understand a bit better:

1 - By looking at the last paragraph in section B. Implicit Communication, which says:
"However on the memory storage front, more local storage is required, since for every task which accesses the label, an extra local copy is required"

we understand that the "local" copies of the global data labels are made local to each task, not to each core, so when there are many tasks in the same core using the same global label there are as many different copies as tasks in that core. Is this interpretation correct?

2 - Similarly, we'd also like to clarify the same question for the Logical Execution Time Model described in section C, which says:
"As with implicit communication, LET requires that a local copy is available for each variable accessed by a task."

So again we understand that there are as many "local" copies of the label as there are tasks that access it, regardless of whether they are on the same or different cores.

3 - Could you please clarify what the specified "frequency" means for a label used by a runnable? Is it the number of times the label is accessed by each single instance of the runnable? Or, on the contrary, is it the number of runnable instances after which the label is accessed once? Also, please give us the validity range and data type of this parameter: if it is a real value, could it be less than one? If it is always an integer, is it present only when it is greater than one?

Cheers,
Julio, Javi, Juan, and Michael

Simon Kramer
Posts: 10
Joined: Mon Jul 13, 2015

Re: Industrial challenge 2017 -> Please clarify "Local" and "Frequency"

Post by Simon Kramer » Wed Mar 01, 2017

Hi Julio, Javi, Juan, and Michael,

thanks for your questions and your participation in the challenge! :-) Please find some answers below:
medinajl wrote: 1 - By looking at the last paragraph in section B. Implicit Communication, which says:
"However on the memory storage front, more local storage is required, since for every task which accesses the label, an extra local copy is required"

we understand that the "local" copies of the global data labels are made local to each task, not to each core, so when there are many tasks in the same core using the same global label there are as many different copies as tasks in that core. Is this interpretation correct?
Correct! Every task that uses a label gets a separate copy.
medinajl wrote: 2 - Similarly, we'd also like to clarify the same question for the Logical Execution Time Model described in section C, which says:
"As with implicit communication, LET requires that a local copy is available for each variable accessed by a task."

So again we understand that there are as many "local" copies of the label as there are tasks that access it, regardless of whether they are on the same or different cores.
Again correct. Every task that accesses a label gets a copy of it, independent of the placement.
medinajl wrote: 3 - Could you please clarify what the specified "frequency" means for a label used by a runnable? Is it the number of times the label is accessed by each single instance of the runnable? Or, on the contrary, is it the number of runnable instances after which the label is accessed once? Also, please give us the validity range and data type of this parameter: if it is a real value, could it be less than one? If it is always an integer, is it present only when it is greater than one?
The frequency is the number of times a label is accessed from a runnable during one run/instance. That means that if a frequency of 5 is given, the runnable accesses this label 5 times during one execution. In the given Amalthea model this is specified in the LabelAccessStatistic; it is a real value and could thus, in principle, also be less than one. This would allow representing a label that is accessed only in every second instance. (But I think that we only specified values >= 1; I have to check that.) The range is then all positive values.
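
To make this concrete, here is a small illustrative Python sketch. The data structures and numbers are made up for illustration; they are not the Amalthea/APP4MC API.

from dataclasses import dataclass

@dataclass
class LabelAccess:
    label: str
    frequency: float   # accesses per runnable instance; in principle may be < 1

def accesses_in_window(access: LabelAccess, task_period_ms: float, window_ms: float) -> float:
    # Expected number of accesses to the label within an observation window,
    # assuming the runnable's task is activated once per period.
    activations = window_ms / task_period_ms
    return access.frequency * activations

# frequency 5, 10 ms task, 1 s window -> 500 accesses
print(accesses_in_window(LabelAccess("Label_X", 5.0), 10.0, 1000.0))
# frequency 0.5 would mean "accessed in every second instance" -> 50 accesses
print(accesses_in_window(LabelAccess("Label_Y", 0.5), 10.0, 1000.0))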

Hope that helps!
Best, Simon

medinajl
Posts: 9
Joined: Fri Jun 26, 2015

Re: Industrial challenge 2017

Post by medinajl » Wed Mar 01, 2017

Hi Simon,

Thanks for your quick response.

I just forgot to include one detail to clarify: we understand that, in any of the proposed synchronization cases, for each label there is only one task that acts as a writer; all the other tasks that access that label only read it. Is this the correct interpretation?

Cheers,

Julio

Simon Kramer
Posts: 10
Joined: Mon Jul 13, 2015

Re: Industrial challenge 2017

Post by Simon Kramer » Wed Mar 01, 2017

Hi Julio,

yes, that interpretation is correct. Every label has exactly one writer task, or more precisely one writer runnable. All other tasks only read that label.
In the provided Amalthea model, the writer or reader role is attached to the label access within a runnable.
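
For illustration, here is a small Python sketch that checks this single-writer property. It uses a made-up flat representation of the label accesses, not the APP4MC API.

from collections import defaultdict

def labels_with_multiple_writers(accesses):
    # accesses: iterable of (runnable, label, kind) tuples with kind "read" or "write"
    writers = defaultdict(set)
    for runnable, label, kind in accesses:
        if kind == "write":
            writers[label].add(runnable)
    return {label: sorted(runs) for label, runs in writers.items() if len(runs) > 1}

accesses = [("R1", "Label_A", "write"), ("R2", "Label_A", "read"),
            ("R3", "Label_A", "read"), ("R3", "Label_B", "write")]
print(labels_with_multiple_writers(accesses))   # {} -> every label has at most one writer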

Best, Simon

medinajl
Posts: 9
Joined: Fri Jun 26, 2015

Re: Industrial challenge 2017

Post by medinajl » Thu Mar 09, 2017

Hi Simon, Arne,

We need to ask you a couple more questions/clarifications:

(1) About the LET model for synchronization, you say:

"With the LET model, tasks always read data at the beginning of the activation interval and write data at the end of the activation interval."

This does not clearly state the timing requirement for when the writing of the data must end.

Our first interpretation (Option A) is that, at every activation, the labels processed in the previous activation are written and, immediately afterwards, the labels needed for the next activation are read (at high priority); then the runnables can work at their nominal priority. This allows a simple, fully periodic implementation of the activation mechanism, but it also implies that the deadline for finishing the read + processing runnables + write of the task has to be slightly longer than the period. This additional latency (let's call it MaxWt) can be calculated by the analysis, considering all other high-priority task sections.

Another alternative (Option B) is that the deadline for the task is strictly its period; consequently, the writing of the labels has to be scheduled towards the end of the activation period, at a fixed pre-calculated instant before the next activation starts (at least MaxWt seconds earlier), so that in the worst case (including any other activity that may preempt it) the writing of the task's labels completes before the start of the next activation. The implementation of this option is a bit more complex, since it requires knowing the analysis results for MaxWt at compile time, but it can guarantee the availability of the newly written label values in any subsequent job of the tasks that read them.

Our question, then, is which of the two options, (A) or (B), is the one you expect.

(2) Regarding the number of execution cycles given for each runnable: we understand that, since the label access frequency is provided, the actual execution times are the given numbers plus the time needed to access the local copies of the labels multiplied by the given frequency. Is this interpretation correct? Or do the execution time values given in the model for the runnables already include the accesses to the local copies of the labels?

Thanks a lot for your care.

Cheers,

Julio, Michael, Javi, and Juan

medinajl
Posts: 9
Joined: Fri Jun 26, 2015

Re: Industrial challenge 2017

Post by medinajl » Sun Mar 12, 2017

Hi Arne,

I have a question regarding the degree of freedom you have, in reality, to implement the order of the runnables inside the Amalthea tasks.

My doubt is about the actual functional/logical/local-memory-state dependencies between the runnables that are grouped in a task. Of course, if the implementation invokes the runnables one after the other in the order they appear in the Amalthea tasks, there is a clear (imposed) causal flow of execution among them. But my question goes back to the real-world requirements, those that must be respected according to the original real application from which the Amalthea model was synthesized. Does the functional logic of the real application's runnables depend on the order in which they finally go into the corresponding periodic task?
Consider the case where, using an optimization tool, we could rearrange the runnables in order to meet effect-chain deadlines or improve their response times. Would it be possible to do that?
In case the order is only sometimes relevant, would it be helpful to organize them into non-disorderable chunks, or simply collapse them into a kind of composed runnable?
I guess that any change to the Amalthea model that would be necessary to support something like that would not fit here and would belong in a different challenge instead, but I just wonder how it would work.
Thanks for your attention.
Cheers,
Julio

Simon Kramer
Posts: 10
Joined: Mon Jul 13, 2015

Re: Industrial challenge 2017

Post by Simon Kramer » Mon Mar 13, 2017

medinajl wrote: Hi Simon, Arne,

We need to ask you a couple more questions/clarifications:

(1) About the LET model for synchronization, you say:

"With the LET model, tasks always read data at the beginning of the activation interval and write data at the end of the activation interval."

This does not clearly state the timing requirement for when the writing of the data must end.

Our first interpretation (Option A) is that, at every activation, the labels processed in the previous activation are written and, immediately afterwards, the labels needed for the next activation are read (at high priority); then the runnables can work at their nominal priority. This allows a simple, fully periodic implementation of the activation mechanism, but it also implies that the deadline for finishing the read + processing runnables + write of the task has to be slightly longer than the period. This additional latency (let's call it MaxWt) can be calculated by the analysis, considering all other high-priority task sections.

Another alternative (Option B) is that the deadline for the task is strictly its period; consequently, the writing of the labels has to be scheduled towards the end of the activation period, at a fixed pre-calculated instant before the next activation starts (at least MaxWt seconds earlier), so that in the worst case (including any other activity that may preempt it) the writing of the task's labels completes before the start of the next activation. The implementation of this option is a bit more complex, since it requires knowing the analysis results for MaxWt at compile time, but it can guarantee the availability of the newly written label values in any subsequent job of the tasks that read them.

Our question, then, is which of the two options, (A) or (B), is the one you expect.

(2) Regarding the number of execution cycles given for each runnable: we understand that, since the label access frequency is provided, the actual execution times are the given numbers plus the time needed to access the local copies of the labels multiplied by the given frequency. Is this interpretation correct? Or do the execution time values given in the model for the runnables already include the accesses to the local copies of the labels?

Thanks a lot for your care.

Cheers,

Julio, Michael, Javi, and Juan
Hello Julio, Michael, Javi, and Juan,

thanks a lot for your questions!

To answer your first question:
LET demands that the copying happens exactly at the interval boundary (conceptually in zero time). As you clearly found out, this is not possible to implement, so in practice an implementation has to be as close to that as possible. You sketched two alternatives, which are both valid. To achieve equivalent semantics, the produced data may not be used or altered before it has been copied, which can, for instance, be achieved by performing the copy operations at high priority, as you suggested. In your analyses you could use both approaches and compare them with respect to the ideal behavior (for example, Option A would lead to execution jitter).
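
As a small illustration of Option A, here is a simplified single-threaded Python sketch. All names are made up; a real implementation would run the copy phase in a high-priority OS task or with preemption disabled.

def let_job(global_mem, local_in, local_out, runnables, inputs, outputs):
    # Copy phase at the activation boundary: publish the results of the
    # previous job, then snapshot the inputs for this job. In a real system
    # this phase runs at high priority so nobody observes half-copied data.
    for label in outputs:
        if label in local_out:
            global_mem[label] = local_out[label]
    for label in inputs:
        local_in[label] = global_mem[label]
    # Normal phase: the runnables work only on the local copies.
    for runnable in runnables:
        runnable(local_in, local_out)

# Example: one runnable reading A and writing B; B becomes globally visible
# only at the copy phase of the *next* activation.
gmem = {"A": 1, "B": 0}
lin, lout = {}, {}
step = lambda i, o: o.update(B=i["A"] + 1)
let_job(gmem, lin, lout, [step], ["A"], ["B"])   # B still 0 in global memory
let_job(gmem, lin, lout, [step], ["A"], ["B"])   # now B = 2 is published
print(gmem)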

To the second question:
Your first interpretation is correct. The execution time given for the runnables excludes label accesses. To calculate the complete execution time, you have to add the label access times (latency multiplied by frequency, plus contention) to the runtime specified in the runnable.
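
In other words (illustrative Python; the latency and frequency numbers below are placeholders, not values from the challenge model):

def runnable_execution_time_s(cycles, core_freq_hz, label_accesses, contention_s=0.0):
    # label_accesses: list of (access_latency_s, frequency) pairs for this runnable
    compute_s = cycles / core_freq_hz
    access_s = sum(latency * freq for latency, freq in label_accesses)
    return compute_s + access_s + contention_s

# e.g. 20000 cycles on a 200 MHz core, two labels accessed with frequencies 5 and 1,
# each access assumed to cost 9 cycles (placeholder value)
lat = 9 / 200e6
print(runnable_execution_time_s(20000, 200e6, [(lat, 5), (lat, 1)]))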

Best,
Simon & Arne

arne.hamann
Posts: 34
Joined: Thu Jan 28, 2016
Location: Renningen

Re: Industrial challenge 2017

Post by arne.hamann » Mon Mar 13, 2017

Hi Julio,

very good question ;-)
In general, not all order relations between runnables are important (even if they exchange data). What counts are "functionally sensitive" cause-effect chains, e.g. chains that contribute to controlling physical processes with high dynamics. In a perfect world, all critical timing requirements would be known and specified, i.e. all cause-effect chains along with their reaction or data age constraints. If that is the case, your optimization idea could be carried out. However, in reality the system "grew" and many timing constraints are not known. In such cases, our engineers prefer not to touch the runnable order. This could be modeled, as you suggested, with non-disorderable chunks (we call these process grouping constraints in Amalthea). If you want, you can do such an optimization for the given chains (e.g. treating all read-write dependencies as chains). It would be interesting to see whether you have smart ideas about how to optimize the latencies.
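
For illustration, a tiny Python sketch (made-up names) of reordering under such grouping constraints: the members of a chunk stay together and in their original internal order, while the chunks themselves may be moved.

def reorder(task_runnables, groups, new_order):
    # task_runnables: original runnable order in the task
    # groups: dict group_id -> list of runnables (fixed internal order)
    # new_order: desired order of group ids; ungrouped runnables act as singletons
    grouped = {r for members in groups.values() for r in members}
    singletons = {r: [r] for r in task_runnables if r not in grouped}
    all_chunks = {**groups, **singletons}
    return [r for gid in new_order for r in all_chunks[gid]]

# keep (R2, R3) as one non-disorderable chunk, but move the chunk ahead of R1
print(reorder(["R1", "R2", "R3", "R4"], {"G1": ["R2", "R3"]}, ["G1", "R1", "R4"]))
# -> ['R2', 'R3', 'R1', 'R4']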

Arne
--
medinajl wrote: Hi Arne,

I have a question regarding the degree of freedom you have, in reality, to implement the order of the runnables inside the Amalthea tasks.

My doubt is about the actual functional/logical/local-memory-state dependencies between the runnables that are grouped in a task. Of course, if the implementation invokes the runnables one after the other in the order they appear in the Amalthea tasks, there is a clear (imposed) causal flow of execution among them. But my question goes back to the real-world requirements, those that must be respected according to the original real application from which the Amalthea model was synthesized. Does the functional logic of the real application's runnables depend on the order in which they finally go into the corresponding periodic task?
Consider the case where, using an optimization tool, we could rearrange the runnables in order to meet effect-chain deadlines or improve their response times. Would it be possible to do that?
In case the order is only sometimes relevant, would it be helpful to organize them into non-disorderable chunks, or simply collapse them into a kind of composed runnable?
I guess that any change to the Amalthea model that would be necessary to support something like that would not fit here and would belong in a different challenge instead, but I just wonder how it would work.
Thanks for your attention.
Cheers,
Julio

medinajl
Posts: 9
Joined: Fri Jun 26, 2015

Re: Industrial challenge 2017

Post by medinajl » Tue Mar 14, 2017

Hi Simon, hi Arne,

Thanks for your prompt answer. Just a small detail to get it fully clear...

When you say:
Simon Kramer wrote:
.... The execution time given for the runnables excludes label accesses. To calculate the complete execution time, you have to add the label access times (latency multiplied by frequency, plus contention) to the runtime specified in the runnable.
You refer mainly to the explicit communication case... Are we right?

We understand that the "contention" in the acces to the remote labels does not affect the execution times of the individual runnables in the LET and implicit cases, since (1) The writing is always local and (2) the remote accesses for reading are done only at the beginnig of the activation period and tasks executions respectively, so the "contention" is to be modelled/considered at that other instant/priority not in the execution of the runnable itself.

Thanks again,
Julio, Michael, Javi, and Juan

Simon Kramer
Posts: 10
Joined: Mon Jul 13, 2015

Re: Industrial challenge 2017

Post by Simon Kramer » Thu Mar 16, 2017

Hi Julio, Michael, Javi, and Juan,

the statement that the runnable execution time excludes label accesses, and that those have to be added on top, is valid for all communication semantics. From the view of the runnable, the semantics only differ in the placement of the accessed labels, and thus in access times and contention.

Let me give you a short example:
Consider a runnable R that reads label A and writes label B. Both labels reside in a global memory. With explicit communication, R accesses A and B directly, giving two bus transfers to the global memory plus contention. (The write, however, does not lead to additional execution time due to the store buffer.)

With implicit or LET communication, there is a function running before R - either at the beginning of the interval or of the task - that creates a local copy of A. So there is a read from global memory and a store to a local memory. When R is then executed, it accesses the copy of A and puts the content into a register. Here there could also be some contention due to parallel tasks that can access this memory too, even if it is a local memory. For the writing of B it is the same: R writes to a copy (not really a copy, more an intermediate buffer) of B, putting some register content into it. At the end of the task/interval this copy is read from the local memory and stored back into global memory. In total we now have the following memory accesses: read global, store local - read local, store local - read local, store global.

As you can see, there are now more memory accesses, and for every memory access there might be contention. But, as you already said, they occur at different points in time and also go to different memories. And given a proper mapping (e.g. when only the local core accesses the local memory), the contention for the local memory is negligible.
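
The example above can be summarized as follows (illustrative Python, a hypothetical helper that simply enumerates the operations from the example):

def memory_ops(semantics):
    # one runnable that reads label A and writes label B
    if semantics == "explicit":
        return [("read", "A", "global"), ("write", "B", "global")]
    if semantics in ("implicit", "LET"):
        return [
            ("read", "A", "global"), ("write", "A_copy", "local"),   # copy-in before the task/interval
            ("read", "A_copy", "local"),                             # runnable reads its copy of A
            ("write", "B_copy", "local"),                            # runnable writes the buffer for B
            ("read", "B_copy", "local"), ("write", "B", "global"),   # copy-out at the end of the task/interval
        ]
    raise ValueError(semantics)

for op in memory_ops("LET"):
    print(op)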

Hope that helps!
Best,
Simon

p.pazzaglia
Posts: 3
Joined: Mon Mar 20, 2017

Re: Industrial challenge 2017

Post by p.pazzaglia » Wed Mar 22, 2017

Hi Arne, Simon and Sophie

First of all, thank you very much for your great work on organizing the WATERS 2017 challenge.

I'm Paolo Pazzaglia, a PhD student at the RETIS lab of Sant'Anna School, and together with Alessandro Biondi, Alessio Balsini, and Marco Di Natale we are currently working on your challenge.

We would like to ask a couple of questions about the task model:

1) Using the data provided in the Amalthea model, the total task utilization is >100% when using the maximum execution times of the runnables (even neglecting memory accesses). How can we deal with this problem when using worst-case analysis?

2) We weren't able to find any deadline constraints regarding event chains in the Amalthea model; did we miss them? Do you have these data available? Otherwise, can we assume that minimizing the end-to-end latencies is not a constraint of the optimization problem?

Thank you in advance, best,
Paolo, Alessandro, Alessio, Marco

medinajl
Posts: 9
Joined: Fri Jun 26, 2015

Re: Industrial challenge 2017

Post by medinajl » Thu Mar 30, 2017

Hi Paolo,

I guess the proponents of the challenge would confirm or correct these suggestions, so please take them with care.

Last year, the system proposed in the challenge was very similar to the one proposed now, and the questions you are asking now are very close to our first questions in the forum at that time :-)

Regarding your comment:
...utilization is >100% when using the maximum execution times...
I may say that the first version of last year's model had a utilization over 1000%; they soon corrected it so it was not so extremely large, but still larger than 100%. Then, after discussing the issue with the organizers, we agreed on criteria like: (a) you can try using average execution times as WCETs, and/or (b) you can calculate and make proposals on how much the processor frequency needs to be increased so that the maximum utilization goes below 100% (plus your schedulability results).
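
A quick illustrative calculation for suggestion (b), with a made-up utilization value: since pure execution times scale inversely with the core clock while periods stay fixed, the minimum scaling factor is simply the current utilization (memory access latencies may not scale the same way, so treat this as a lower bound).

def required_clock_speedup(core_utilization):
    # utilization > 1.0 means the core is overloaded; scaling the clock by at
    # least this factor brings the pure-computation utilization back below 100%
    return max(core_utilization, 1.0)

print(required_clock_speedup(1.35))   # a core loaded at 135% needs at least a 1.35x faster clock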

About:
...deadline constraints regarding event chains...

I guess you can either play with implicit deadlines (d=t) or with sufficiently larger deadlines assigned in a proportional way.

I hope this helps you to keep working... :-)
Cheers,
Julio
