style question: breaking inline assembly lines

Gedare Bloom gedare at rtems.org
Mon Jun 12 22:04:40 UTC 2023


On Mon, Jun 12, 2023 at 4:03 PM Gedare Bloom <gedare at rtems.org> wrote:
>
> On Mon, Jun 12, 2023 at 3:54 PM Joel Sherrill <joel at rtems.org> wrote:
> >
> >
> >
> > On Mon, Jun 12, 2023 at 4:50 PM Gedare Bloom <gedare at rtems.org> wrote:
> >>
> >> On Mon, Jun 12, 2023 at 2:28 PM Joel Sherrill <joel at rtems.org> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, Jun 12, 2023 at 3:10 PM Gedare Bloom <gedare at rtems.org> wrote:
> >> >>
> >> >> We have a mix of ways to write inline assembly. It would be convenient
> >> >> to choose one way. The prevailing options are based on breaking around
> >> >> the colons (the reg/field separators): either break at the colons only
> >> >> when the line length exceeds 80, or always break at the colons.
> >> >>
> >> >> I personally find inline assembly easier to read when it is broken
> >> >> at the colons. But we have these options:
> >> >> 1. Always break at the colon
> >> >> 2. Only break when the line length is exceeded
> >> >
> >> >
> >> > I lean to (1).
> >> >>
> >> >>
> >> >> With #2, we can also break as usual, or we can force breaks at the
> >> >> colons. I have seen examples of both in the source tree. Any strong
> >> >> opinions one way or another?
> >> >
> >> >
> >> > There may also be cases of multiple assembly instructions on the
> >> > same line, or split across multiple lines with line continuation. I
> >> > don't know of one offhand though.
> >> A complicated example of that would be
> >> https://git.rtems.org/rtems/tree/cpukit/score/cpu/arm/armv7m-exception-default.c#n60
> >>
> >
> > Yikes! Does that look ok after clang-format processes it?
> >
> Yes.
>
> Below is a snippet of a patch for that. One thing to note: there's
> currently no way to avoid adding the spaces inside
> '__attribute__((naked))' as a special case. If this is desired, it will
> probably need to be added to clang-format.
>
> diff --git a/cpukit/score/cpu/arm/armv7m-exception-default.c
> b/cpukit/score/cpu/arm/armv7m-exception-default.c
> index 35dde50dc3..ff00ad2a72 100644
> --- a/cpukit/score/cpu/arm/armv7m-exception-default.c
> +++ b/cpukit/score/cpu/arm/armv7m-exception-default.c
> @@ -42,23 +42,23 @@
>
>  #ifdef ARM_MULTILIB_ARCH_V7M
>
> -void __attribute__((naked)) _ARMV7M_Exception_default( void )
> +void __attribute__( ( naked ) ) _ARMV7M_Exception_default( void )
>  {
> -    /* On exception entry, ARMv7M saves context state onto a stack pointed to
> -     * by either MSP or PSP. The value stored in LR indicates whether we were
> -     * in Thread or Handler mode, whether we were using the FPU (if any),
> -     * and which stack pointer we were using.
> -     * In particular, bit 2 of LR will be 0 if we were using MSP.
> -     *
> -     * For a more detailed explanation, see the Exception Entry Behavior
> -     * section of the ARMv7M Architecture Reference Manual.
> -     */
> +  /* On exception entry, ARMv7M saves context state onto a stack pointed to
> +   * by either MSP or PSP. The value stored in LR indicates whether we were
> +   * in Thread or Handler mode, whether we were using the FPU (if any),
> +   * and which stack pointer we were using.
> +   * In particular, bit 2 of LR will be 0 if we were using MSP.
> +   *
> +   * For a more detailed explanation, see the Exception Entry Behavior
> +   * section of the ARMv7M Architecture Reference Manual.
> +   */
>
> -    /* As we're in Handler mode here, we'll always operate on MSP.
> -     * However, we need to store the right SP in our CPU_Exception_frame.
> -     */
> -  __asm__ volatile (
> -    "sub sp, %[cpufsz]\n"   /* Allocate space for a CPU_Exception_frame. */
> +  /* As we're in Handler mode here, we'll always operate on MSP.
> +   * However, we need to store the right SP in our CPU_Exception_frame.
> +   */
> +  __asm__ volatile(

One other note: there's currently no way to force a space after
volatile. Again, if that's something we want to preserve, it probably
needs to be added somehow.
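For reference, a sketch of the kind of .clang-format fragment these two special cases touch. The first option is, as far as I can tell, what produces the "( ( naked ) )" spacing; the second is only a guess at a workaround, not something confirmed to work:

```yaml
# Sketch only; whether this helps with the __attribute__((naked)) and
# "volatile (" cases above is an open question, not a confirmed fix.
SpacesInParentheses: true    # the setting that yields "( ( naked ) )"
WhitespaceSensitiveMacros:
  - __attribute__            # guess: exempt attribute arguments from reformatting
```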

> +    "sub sp, %[cpufsz]\n" /* Allocate space for a CPU_Exception_frame. */
>      "stm sp, {r0-r12}\n"
>      "tst lr, #4\n"          /* Check if bit 2 of LR is zero. If so,
> PSR.Z = 1 */
>      "itte eq\n"             /* IF bit 2 of LR is zero... (PSR.Z == 1) */
> @@ -67,15 +67,15 @@ void __attribute__((naked))
> _ARMV7M_Exception_default( void )
>      "mrsne r3, psp\n"       /* ELSE it is not zero; we were using PSP */
>      "add r2, r3, %[v7mlroff]\n"
>      "add r1, sp, %[cpuspoff]\n"
> -    "ldm r2, {r4-r6}\n"     /* Grab LR, PC and xPSR from the stack */
> -    "tst lr, #16\n"         /* Check if we have an FP state on the frame */
> +    "ldm r2, {r4-r6}\n" /* Grab LR, PC and xPSR from the stack */
> +    "tst lr, #16\n"     /* Check if we have an FP state on the frame */
>      "ite eq\n"
> -    "addeq r3, #104\n"      /* Back to previous SP with FP state */
> -    "addne r3, #32\n"       /* Back to previous SP without FP state */
> -    "tst r6, #512\n"        /* Check xPSR if the SP was aligned */
> +    "addeq r3, #104\n" /* Back to previous SP with FP state */
> +    "addne r3, #32\n"  /* Back to previous SP without FP state */
> +    "tst r6, #512\n"   /* Check xPSR if the SP was aligned */
>      "it ne\n"
> -    "addne r3, #4\n"        /* Undo alignment */
> -    "stm r1, {r3-r6}\n"     /* Store to CPU_Exception_frame */
> +    "addne r3, #4\n"    /* Undo alignment */
> +    "stm r1, {r3-r6}\n" /* Store to CPU_Exception_frame */
>      "mrs r1, ipsr\n"
>      "str r1, [sp, %[cpuvecoff]]\n"
>
> @@ -113,13 +113,13 @@ void __attribute__((naked))
> _ARMV7M_Exception_default( void )
>
>      "b _ARM_Exception_default\n"
>      :
> -    : [cpufsz] "i" (sizeof(CPU_Exception_frame)),
> -      [cpuspoff] "i" (offsetof(CPU_Exception_frame, register_sp)),
> -      [v7mlroff] "i" (offsetof(ARMV7M_Exception_frame, register_lr)),
> -      [cpuvecoff] "J" (offsetof(CPU_Exception_frame, vector)),
> -      [cpuvfpoff] "i" (ARM_EXCEPTION_FRAME_VFP_CONTEXT_OFFSET),
> -      [cpacr] "i" (ARMV7M_CPACR),
> -      [vfpsz] "i" (ARM_VFP_CONTEXT_SIZE)
> +    : [cpufsz] "i"( sizeof( CPU_Exception_frame ) ),
> +      [cpuspoff] "i"( offsetof( CPU_Exception_frame, register_sp ) ),
> +      [v7mlroff] "i"( offsetof( ARMV7M_Exception_frame, register_lr ) ),
> +      [cpuvecoff] "J"( offsetof( CPU_Exception_frame, vector ) ),
> +      [cpuvfpoff] "i"( ARM_EXCEPTION_FRAME_VFP_CONTEXT_OFFSET ),
> +      [cpacr] "i"( ARMV7M_CPACR ),
> +      [vfpsz] "i"( ARM_VFP_CONTEXT_SIZE )
>    );
>  }
>
>
> >>
> >> >>
> >> >>
> >> >> Here's a line broken because of line length that has not split the
> >> >> arguments at the colons:
> >> >> https://git.rtems.org/rtems/tree/cpukit/score/cpu/arm/include/rtems/score/aarch32-system-registers.h#n69
> >> >>
> >> >> Here's a line broken because of line length:
> >> >> https://git.rtems.org/rtems/tree/cpukit/score/cpu/arm/include/rtems/score/cpu.h#n501
> >> >>
> >> >> Here's a line that is always broken:
> >> >> https://git.rtems.org/rtems/tree/cpukit/score/cpu/aarch64/cpu.c#n153
> >> >>
> >> >> And for good measure, here's an unbroken line that should be broken:
> >> >> https://git.rtems.org/rtems/tree/cpukit/score/cpu/microblaze/include/rtems/score/cpu.h#n206
> >> >>
> >> >> With the newest version of clang-format we will be able to accommodate
> >> >> always breaking the lines. It is currently inconsistent about whether
> >> >> it puts the first argument on its own line or keeps it with the
> >> >> "__asm__ volatile (", which I could probably make consistent if we
> >> >> decide we need it to be.
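The relevant knob in newer clang-format (16 and later) is BreakBeforeInlineASMColon; a minimal fragment for option 1 might look like:

```yaml
# clang-format >= 16: always break before the colons in inline asm.
BreakBeforeInlineASMColon: Always
# For option 2, OnlyMultiline breaks at the colons only when the
# statement does not fit on one line.
```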
> >> >>
> >> >> Gedare
> >> >> _______________________________________________
> >> >> devel mailing list
> >> >> devel at rtems.org
> >> >> http://lists.rtems.org/mailman/listinfo/devel

