Hello_via_task Ada example on Raspberry

Jan Sommer soja-misc at aries.uberspace.de
Tue Jul 21 16:07:12 UTC 2015


I applied the patch, but it was not the cause (I could build 4.9.2 before without it).
Pavel's explanations of his JTAG setup got me thinking: if I got my hands on a JTAG debugger, would it be possible to step into the delay functions and see the calls to pthread after the delay and during the initialization phase? If yes, I would try to get one and see if I can find the problem.
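For reference, roughly what I would expect to see under the debugger: a minimal C sketch of a relative delay built on top of the POSIX layer (everything except pthread_cond_timedwait itself is illustrative, not the actual GNAT runtime symbols):

#include <pthread.h>
#include <time.h>

/* Sketch only: an Ada "delay" on a POSIX layer typically lowers to an
   absolute-time wait on a condition variable that nobody signals, so
   the wait returns only when the timeout expires. */
static pthread_mutex_t delay_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  delay_cond  = PTHREAD_COND_INITIALIZER;

static void delay_seconds(time_t secs)
{
  struct timespec abstime;

  clock_gettime(CLOCK_REALTIME, &abstime);
  abstime.tv_sec += secs;

  pthread_mutex_lock(&delay_mutex);
  /* ETIMEDOUT is the expected result; hanging here would point at the
     clock tick or the timeout handling in the POSIX layer. */
  while (pthread_cond_timedwait(&delay_cond, &delay_mutex, &abstime) == 0)
    ; /* spurious wakeup, wait again */
  pthread_mutex_unlock(&delay_mutex);
}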

In the meantime I will try to build gcc/gnat for the Leon3 again and run the task example on it at work.

Cheers,

   Jan

On Thursday, 16 July 2015, 16:58:23, Joel Sherrill wrote:
> The free lines look like you might not be printing the address freed
> correctly, but it doesn't matter. I don't think it explains anything you
> are seeing.
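
True, the Workspace_Free(0) lines suggest the freed address never makes it into the printk. A hypothetical sketch of the kind of debug print that would show it (not the exact wkspace.c code):

#include <rtems/bspIo.h>

/* Hypothetical debug wrapper: print the block being freed and the
   caller, so the trace shows the real address instead of 0. */
void debug_workspace_free(void *block)
{
  printk("Workspace_Free(%p) from %p\n",
         block, __builtin_return_address(0));
  /* ...the real free would follow here... */
}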
> 
> I built a SPARC Ada toolset. Can you post your test case including
> Makefile, etc.? Just tar up the directory.
> 
> I want to see if I can tell what is happening.
> 
> FWIW, attached is the patch I needed to build the 4.9 branch of gcc,
> current as of today.
> 
> On 7/15/2015 4:23 PM, Jan Sommer wrote:
> > I placed the debug statements in the files you suggested, rebuilt RTEMS and compiled both examples: hello_world_ada (which works) and hello_via_task (which hangs).
> > The outputs generated by workspace_allocate are basically the same: they differ only in the addresses they print, but the sizes and order of the allocations are identical, except that the hello_via_task example performs one additional allocation at the very end.
> > However, several allocations seem to have failed, especially in the beginning (but in both examples). I don't know whether this is a severe failure or whether it is expected to happen. The output I get for the hello_via_task example is the following (as said before, the output for the hello_ada example is the same except for the last allocation):
> >
> > Workspace_Allocate_or_fatal_error(24) from 44A50/0 -> 10ED78
> > Workspace_Allocate_or_fatal_error(112) from 4161C/0 -> 10EDB8
> > Workspace_Allocate_or_fatal_error(28) from 41660/0 -> 10EE50
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate_or_fatal_error(1304) from 4161C/0 -> 10EE90
> > Workspace_Allocate_or_fatal_error(24) from 41660/0 -> 10F3D0
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate_or_fatal_error(26080) from 4161C/0 -> 10F410
> > Workspace_Allocate_or_fatal_error(100) from 41660/0 -> 115A18
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(480) from 417CC/0 -> 115AA0
> > Workspace_Allocate(52) from 417E0/0 -> 115CA8
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(736) from 417CC/0 -> 115D00
> > Workspace_Allocate(52) from 417E0/0 -> 116008
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate_or_fatal_error(1512) from 4161C/0 -> 116060
> > Workspace_Allocate_or_fatal_error(128) from 41660/0 -> 116670
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(384) from 417CC/0 -> 116718
> > Workspace_Allocate(52) from 417E0/0 -> 1168C0
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(1856) from 417CC/0 -> 116918
> > Workspace_Allocate(52) from 417E0/0 -> 117080
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(224) from 417CC/0 -> 1170D8
> > Workspace_Allocate(52) from 417E0/0 -> 1171E0
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(1664) from 417CC/0 -> 117238
> > Workspace_Allocate(52) from 417E0/0 -> 1178E0
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(416) from 417CC/0 -> 117938
> > Workspace_Allocate(52) from 417E0/0 -> 117B00
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(180) from 417CC/0 -> 117B58
> > Workspace_Allocate(56) from 417E0/0 -> 117C30
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate_or_fatal_error(288) from 402EC/0 -> 117C90
> > Workspace_Allocate(36512) from 417CC/0 -> 117DD8
> > Workspace_Allocate(132) from 417E0/0 -> 120CA0
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(1232) from 417CC/0 -> 120D48
> > Workspace_Allocate(132) from 417E0/0 -> 121240
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(2128) from 417CC/0 -> 1212E8
> > Workspace_Allocate(172) from 417E0/0 -> 121B60
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(960) from 417CC/0 -> 121C30
> > Workspace_Allocate(52) from 417E0/0 -> 122018
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(192) from 417CC/0 -> 122070
> > Workspace_Allocate(52) from 417E0/0 -> 122158
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(480) from 417CC/0 -> 1221B0
> > Workspace_Allocate(52) from 417E0/0 -> 1223B8
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(928) from 417CC/0 -> 122410
> > Workspace_Allocate(52) from 417E0/0 -> 1227D8
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(384) from 417CC/0 -> 122830
> > Workspace_Allocate(52) from 417E0/0 -> 1229D8
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(384) from 417CC/0 -> 122A30
> > Workspace_Allocate(52) from 417E0/0 -> 122BD8
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(256) from 417CC/0 -> 122C30
> > Workspace_Allocate(52) from 417E0/0 -> 122D58
> > Workspace_Free(0) from 4173C/0
> > Workspace_Allocate(4096) from 4448C/0 -> 122DB0
> > Workspace_Allocate(8192) from 4448C/0 -> 124000
> > Workspace_Allocate(102400) from 4448C/0 -> 126398
> > Workspace_Allocate(16384) from 4448C/0 -> 142CF8
> > *** GNAT/RTEMS Hello World Test ***
> >
> > Welcome to the World of Lady Ada
> >
> > Initiating 2.5 second delay
> >
> >
> >
> > On Tuesday, 14 July 2015, 15:26:01, Joel Sherrill wrote:
> >>
> >> On 7/14/2015 3:13 PM, Jan Sommer wrote:
> >>> One question: if the Ada main thread is created using pthread in the beginning and then creates new tasks
> >>> (in Ada), do these tasks take their stack from the Ada main thread, or do they get their own stacks
> >>> allocated from available memory?
> >>
> >> Ada tasks are just pthreads and their stack comes from the workspace.
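
A minimal C illustration of this point, assuming the standard POSIX API; on RTEMS the stack requested below would come out of the workspace, not out of the creating thread's stack:

#include <pthread.h>
#include <stdio.h>

/* Each Ada task corresponds to its own pthread, and each pthread gets
   its own stack; nothing is carved out of the creator's stack. */
static void *task_body(void *arg)
{
  int local; /* lives on this task's own stack */
  (void) arg;
  printf("task stack near %p\n", (void *) &local);
  return NULL;
}

int main(void)
{
  pthread_t tid;
  pthread_attr_t attr;

  pthread_attr_init(&attr);
  pthread_attr_setstacksize(&attr, 32 * 1024);
  pthread_create(&tid, &attr, task_body, NULL);
  pthread_join(tid, NULL);
  return 0;
}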
> >>
> >> One option is some selectively placed debug statements.
> >>
> >> + score/src/objectallocate.c already has a debug section you
> >> can turn on to print on object allocation failure. This would
> >> let you know if you ran out of pthreads, semaphores, etc.
> >>
> >> + libcsupport/src/malloc.c for a NULL being returned.
> >>
> >> + score/src/wkspace.c - _Workspace_Allocate for a NULL being
> >> returned. There is a printk in here which prints every allocation
> >> and the return addresses.
> >>
> >> If it is a resource issue, that should help narrow it down.
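
The pattern at all three spots is the same; a rough sketch with hypothetical names (not the exact RTEMS code):

#include <stdlib.h>
#include <rtems/bspIo.h>

/* Rough sketch of the debug pattern described above: report every
   allocation and make a NULL return loudly visible, so resource
   exhaustion shows up immediately on the console. */
static void *traced_allocate(size_t size)
{
  void *p = malloc(size);

  printk("allocate(%u) from %p -> %p\n",
         (unsigned) size, __builtin_return_address(0), p);
  if (p == NULL)
    printk("*** allocation of %u bytes FAILED ***\n", (unsigned) size);
  return p;
}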
> >>
> >>> On Sunday, 12 July 2015, 18:31:57, Jan Sommer wrote:
> >>>> On Sunday, 12 July 2015, 10:26:56, Joel Sherrill wrote:
> >>>>> Just to make sure we are on the same page, what gcc version?
> >>>>
> >>>> gcc 4.9.2
> >>>>
> >>>>>
> >>>>> It sounds like the number of resources configured might be off. Can you try unlimited objects and unified workspace? They are CONFIGURE_ parameters.
> >>>>>
> >>>>
> >>>> I added
> >>>> #define CONFIGURE_UNIFIED_WORK_AREAS
> >>>> #define CONFIGURE_UNLIMITED_OBJECTS
> >>>> to the rtems-init.c of the Ada examples. I hope this does not conflict with the other defines like
> >>>> #define CONFIGURE_MAXIMUM_TASKS          20
> >>>> #define CONFIGURE_MAXIMUM_SEMAPHORES     20
> >>>>
> >>>> #define CONFIGURE_GNAT_RTEMS
> >>>> #define CONFIGURE_MAXIMUM_ADA_TASKS      20
> >>>>
> >>>> #if !defined(CONFIGURE_MAXIMUM_FAKE_ADA_TASKS)
> >>>>     #define CONFIGURE_MAXIMUM_FAKE_ADA_TASKS 0
> >>>> #endif
> >>>>
> >>>> #if !defined(ADA_APPLICATION_NEEDS_EXTRA_MEMORY)
> >>>>     #define ADA_APPLICATION_NEEDS_EXTRA_MEMORY 0
> >>>> #endif
> >>>>
> >>>> /* Account for any extra task stack size */
> >>>> #define CONFIGURE_MEMORY_OVERHEAD \
> >>>>     (ADA_APPLICATION_NEEDS_EXTRA_MEMORY + GNAT_MAIN_STACKSPACE)
> >>>>
> >>>> Still the task hangs after it calls the delay.
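
One thing worth double-checking, if I read confdefs.h correctly: maxima that are explicitly defined are not overridden by CONFIGURE_UNLIMITED_OBJECTS, so the CONFIGURE_MAXIMUM_TASKS/CONFIGURE_MAXIMUM_SEMAPHORES lines above may still act as hard caps. A minimal self-contained configuration sketch (the exact option set the Ada examples need is an assumption on my part):

/* Sketch only, assuming the usual confdefs.h idiom. */
#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

#define CONFIGURE_UNIFIED_WORK_AREAS
#define CONFIGURE_UNLIMITED_OBJECTS /* leave the per-class maxima undefined */

#define CONFIGURE_GNAT_RTEMS
#define CONFIGURE_MAXIMUM_ADA_TASKS 20

#define CONFIGURE_RTEMS_INIT_TASKS_TABLE

#define CONFIGURE_INIT
#include <rtems/confdefs.h>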
> >>>>
> >>>> Best regards,
> >>>>
> >>>>     Jan
> >>>>


