
Think Before You Code



We are going to extend the sequential counter to implement a Gray code sequence rather than a simple linear increment. The sequence we will implement is as follows:
Nth step after reset (in binary)    Gray code

    000                                 000
    001                                 001
    010                                 011
    011                                 010
    100                                 110
    101                                 111
    110                                 101
    111                                 100

which yields the following always block for a Gray code counter, derived directly from the above truth table:

    always @(negedge reset or posedge clock)
      if (~reset)
        q <= 0;
      else
        case (q)
          3'b000 : q <= 3'b001;
          3'b001 : q <= 3'b011;
          3'b011 : q <= 3'b010;
          3'b010 : q <= 3'b110;
          3'b110 : q <= 3'b111;
          3'b111 : q <= 3'b101;
          3'b101 : q <= 3'b100;
          3'b100 : q <= 3'b000;
          default: q <= 3'bx;
        endcase

Obviously, the Verilog code doesn't look like a normal counter always block. Perhaps more worrying is that this approach to coding is going to take a long time for a counter of more than 3 or 4 bits. Yes, the code space increases as O(2^N). In the same way that we try to avoid algorithms whose execution times or output data sets grow at anything greater than O(N^(3/2)), it is also wise to avoid that kind of growth from a coding standpoint.

As an example of algorithm choice, consider the options for sorting a set of data. Various algorithms exist: ye olde BubbleSort, Williams' HeapSort and Hoare's QuickSort spring to mind. The HeapSort algorithm executes in time O(N log2 N), whereas BubbleSort executes in time O(N^2). QuickSort is generally faster than HeapSort by about 25% on typical (< 100 items) random data sets, providing the data set is not already ordered. As an aside, if you don't know how to code the BubbleSort algorithm, don't bother; go straight to the HeapSort algorithm (do not pass GO, but do collect 200 coding Brownie points!). I know, we'll do a HeapSort model as next month's Model of the Month...
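Returning to the counter: for readers who want to simulate the case-statement version as written, here is a minimal sketch of a complete module built around it. The module name gray_counter_case and the exact port list are assumptions made for illustration; they are not part of the original article.

    // Hypothetical wrapper around the case-statement Gray counter
    // (module and port names are assumed for illustration).
    module gray_counter_case (
      input  wire       clock,
      input  wire       reset,   // asynchronous, active low
      output reg  [2:0] q        // Gray-coded count
    );

      always @(negedge reset or posedge clock)
        if (~reset)
          q <= 3'b000;
        else
          case (q)
            3'b000 : q <= 3'b001;
            3'b001 : q <= 3'b011;
            3'b011 : q <= 3'b010;
            3'b010 : q <= 3'b110;
            3'b110 : q <= 3'b111;
            3'b111 : q <= 3'b101;
            3'b101 : q <= 3'b100;
            3'b100 : q <= 3'b000;
            default: q <= 3'bx;
          endcase

    endmodule

Note how the case statement already needs eight arms for a 3-bit counter; every extra bit doubles the number of arms again, which is exactly the O(2^N) code-space growth described above.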


So let's find a way to code the Gray code sequence in as succinct a way as possible. By examining the sequence in the table, we can see that, as we increment through each step, bit N+1 inverts the value of bit N when it is 1. This looks just like an XOR operation, and so it proves. Note that this inversion rule applies because the sequence is incrementing, and thus an incrementer is required too. Thus, the horrid case statement in our first code snippet becomes somewhat simpler:

    q <= q + 1;   // the incrementer

and for the inversion,

    q_gray = {q[3], q[3]^q[2], q[2]^q[1], q[1]^q[0]};

so the Verilog code now becomes:

    always @(negedge reset or posedge clock)
      if (~reset)
        q <= 0;
      else
        q <= q + 1;

    assign q_gray = {q[3], ^q[3:2], ^q[2:1], ^q[1:0]};

Yes, we took advantage of Verilog's unary XOR operator to make the code even shorter, but well, that's what I thought it was there for!

So, a summary of the key points:

- avoid any kind of complexity greater than O(N^(3/2)), whether it be code space, execution time or data set
- always try to recognise patterns in your code (look at the Synchronizer Scaler model for yet another example of this cognitive approach to hardware design)
- use neat language features where they fit, such as Verilog's unary ^ operator

One final point: if the code looks neat from a coding, performance (execution and real-time) and data set perspective, you're probably not going to do much better. Unless, of course, you use a better algorithm...
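The same adjacent-bit XOR pattern generalises to any width. As a sketch (not from the original article), a parameterised version could use Verilog's shift operator, since q ^ (q >> 1) XORs every bit with its more significant neighbour; the module name gray_counter and the parameter N are assumptions made for illustration.

    // Hypothetical parameterised binary counter with Gray-coded output.
    // Module name and parameter N are assumed for illustration.
    module gray_counter #(parameter N = 4) (
      input  wire         clock,
      input  wire         reset,    // asynchronous, active low
      output reg  [N-1:0] q,        // binary count
      output wire [N-1:0] q_gray    // Gray-coded count
    );

      always @(negedge reset or posedge clock)
        if (~reset)
          q <= {N{1'b0}};
        else
          q <= q + 1;

      // Equivalent to {q[N-1], q[N-1]^q[N-2], ..., q[1]^q[0]}
      assign q_gray = q ^ (q >> 1);

    endmodule

For N = 4 this produces exactly the same q_gray as the concatenation of unary XORs shown above; the shift form simply avoids writing out each bit pair by hand.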

Copyright 2005-2012 Doulos. All rights reserved.

