
http://www.microchip.com/forums/m115924print.aspx

1. Hi, how do I display the binary result read from an input port on an LCD? I am writing a program to check the voltage level of input pins RB0-RB7, and I would like to display the result on an LCD. Say the value of RB7-RB0 is 00010111; this binary result is 23 in decimal. How can I display the decimal value on the LCD? If I load the binary value directly onto the data pins of the LCD, it shows me strange characters. Does anyone know how to solve this?

Ans: Your LCD is probably a character LCD, so you need to send it ASCII data. To display the decimal value 23, you need to send an ASCII '2' (0x32) followed by an ASCII '3' (0x33).

2. That's the problem... I get the data from port B, and the data format is binary. How do I tell the PIC to convert this value to ASCII before sending it to the LCD? A sample of my code:

movfw   PORTB      ; port B is connected to the switches
movwf   LCD_Data   ; move the value in W register to a location and send it to the LCD

Ans: "Convert to ASCII" is not very meaningful on its own. What format do you want for your 8-bit number? Hexadecimal, like "C3" (0x43, 0x33); binary, like "11000011" (0x31, 0x31, 0x30, 0x30, 0x30, 0x30, 0x31, 0x31); or decimal, like "195" (0x31, 0x39, 0x35)?

3. Sorry for my poor explanation... As we know, the LCD driver only accepts ASCII, no? If I want to display 'C', I need to write movlw 'C' or movlw 0x43, right? What about 00010111? If I load this binary number directly onto the LCD, it displays strange characters because there is no printable ASCII character for it. Now my problem is that I won't know in advance what binary value will appear on port B; it depends on which of the switches connected to port B the user turns on and off. My task is to tell the MCU to get the value from port B and then display the result on the LCD. But I need to convert the result from binary format to ASCII format so that the LCD shows readable characters. How do I do that? For example, if the result I get from port B is 01111101, how do I display it on the LCD (the LCD should show 125)?

Ans: No, the LCD can also display extended ASCII characters.

You could do a check on the 8th (most significant) bit: if the bit is 0, you send a '0'; if the bit is 1, you send a '1'. After that you do a left rotate of the variable you are using, and repeat this loop 8 times. You will be sending the most significant bit first, which is the appropriate order for a left-to-right print of the binary number. Of course the LCD cursor should be in increment-to-right mode. (A C sketch of this loop appears after the links below.) I hope this helps.

4. Hi magio, I would like to display the decimal number that 00010111 represents on the LCD. How do I do that? I saw some examples on the internet that change the binary value to BCD format, but I am not clear how to do that...

Ans: First, you need a HEX to BCD routine. Such a routine is fed a hex byte and gives you back three nibbles (two bytes) that are the BCD representation of the number. Take a look at www.piclist.com. Given a number, for example 0xF3 (243 decimal), after the HEX-to-BCD conversion you will have something like HIGH BYTE = 02h, LOW BYTE = 43h. Then you take the low nibble of the high byte and add 48 decimal to it, converting the number 2 into the ASCII '2'. After that, take the high nibble of the LOW BYTE: swap it into a temporary variable so that you temporarily have 34h stored there, then AND it with 0x0F, which erases the high part. Add 48 decimal to the result and you have the ASCII '4'. The last step is LOWBYTE AND 0x0F to get the lowest nibble; add 48 decimal to the result and you have the '3'. That should work. (See the C sketch after the links below.)

5. Ya, that's what I am looking for! But I can't find the HEX to BCD routine on the piclist web page, because there is a lot of info on the website (a bit messy :> ). Can you give me the code?

Ans: I just put BCD in the search box at the top. Yes, I agree it is a little bit messy, but they will arrange it one day (at least I hope so)! Check these links:

BCD packed to binary, 2 digit to 8 bit:
http://www.piclist.com/techref/microchip/math/radix/bp2b2d8b.htm?key=bcd&from=%2Ftechref%2Fpiclist%2Findex%2Ehtm

PIC Microcontroller Radix Math Method:
http://www.piclist.com/techref/microchip/math/radix/bcd2bin16.htm?key=bcd&from=%2Ftechref%2Fpiclist%2Findex%2Ehtm

Binary to BCD packed and ASCII, 32 bit to 10 digits:
http://www.piclist.com/techref/microchip/math/radix/b2bp32b10d.htm?key=bcd&from=%2Ftechref%2Fpiclist%2Findex%2Ehtm

Next time I advise you to be more patient. If you are, you will succeed in the "Embedded World". I hope this helps.
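To make the rotate-and-send answer above concrete, here is a minimal C sketch of the same loop. The thread itself is about PIC assembly, so treat this purely as an illustration of the control flow; print_binary is a name made up for this example and stands in for the LCD send routine.

#include <stdio.h>
#include <stdint.h>

/* Sketch of the rotate-left approach described above: test the most
 * significant bit, emit '0' or '1', shift left, and repeat 8 times,
 * so the number prints left to right. */
static void print_binary(uint8_t v)
{
    for (int i = 0; i < 8; i++) {
        putchar((v & 0x80) ? '1' : '0');  /* check the 8th (MSB) bit */
        v <<= 1;                          /* rotate the next bit into position */
    }
}

int main(void)
{
    print_binary(0x17);  /* prints 00010111 */
    putchar('\n');
    return 0;
}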
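And here is a minimal C sketch of magio's HEX-to-BCD plus add-48 scheme. The hex_to_bcd helper is a hypothetical stand-in written with division for clarity; on a PIC you would use one of the piclist routines linked above instead.

#include <stdio.h>
#include <stdint.h>

/* Convert an 8-bit value to packed BCD: hundreds digit in the high byte,
 * tens and units digits in the two nibbles of the low byte. */
static uint16_t hex_to_bcd(uint8_t n)
{
    uint16_t bcd = 0;
    bcd |= (n / 100) << 8;        /* hundreds digit -> high byte */
    bcd |= ((n / 10) % 10) << 4;  /* tens digit -> low byte, high nibble */
    bcd |= n % 10;                /* units digit -> low nibble */
    return bcd;
}

int main(void)
{
    uint8_t  n   = 0xF3;                       /* 243 decimal, as in the example */
    uint16_t bcd = hex_to_bcd(n);              /* -> 0x0243 */

    char hundreds = ((bcd >> 8) & 0x0F) + 48;  /* '2' */
    char tens     = ((bcd >> 4) & 0x0F) + 48;  /* '4' (the "swap, then AND 0x0F" step) */
    char units    = (bcd & 0x0F) + 48;         /* '3' */

    printf("%c%c%c\n", hundreds, tens, units); /* prints 243 */
    return 0;
}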

Ans: Application note AN-526 shows an efficient way to do this for 8- and 16-bit binary integers.

If you want to do it the hard way, you can divide the binary number by powers of ten to get the digits. For a 16-bit integer N, start by subtracting 10,000 from N until the result is negative. Let D4 be the number of times that the subtraction produced a non-negative difference (D4 can be zero); then D4 is the most significant digit of the decimal equivalent of N. Add 10,000 back to the last result so that you have a non-negative number again, and continue by subtracting 1,000. Let D3 be the number of times that a non-negative result is obtained. Add 1,000 back, and continue with 100, 10, and 1 to get D2, D1, and D0. Convert D4, D3, D2, D1, and D0 to ASCII by adding 0x30 to each. For an 8-bit byte, convert it by dividing by 100, 10, and 1 to get D2, D1, and D0. There are more efficient algorithms for doing a division than subtracting the divisor multiple times, and the most efficient algorithm for the total conversion is the one given in AN-526. (A C sketch of the subtraction method appears below.)

6. Thanks to all! I have solved my conversion problem...

I need to take an 8-bit binary input and display it on an LCD display using VHDL. The problem is that the LCD only takes one ASCII character at a time and displays them in sequence. How do I take the 8-bit binary number and send individual characters to the LCD? I tried converting the 8-bit number into decimal and then sending the individual characters back in binary, but I can't get the conversion right. My conversion code is below:

library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_arith.all;
--use ieee.std_logic_unsigned.all; -- these libraries conflict with ieee.std_logic_arith.all !!!!!
--use ieee.numeric_std.all;        -- these libraries conflict with ieee.std_logic_arith.all !!!!!

entity bintodec is
  PORT(
    clock:  in  std_logic;
    dip1:   in  std_logic_vector(7 downto 0);
    output: out std_logic_vector(7 downto 0)  -- using to test the outputs
  );
end bintodec;

ARCHITECTURE arch of bintodec is
  signal var2:    integer range 0 to 255 := 8;
  signal var:     integer;
  signal varout:  std_logic_vector(7 downto 0);
  signal dip1_un: unsigned(7 downto 0);
  signal sumof2:  integer;
  signal unsnd:   unsigned(7 downto 0);
BEGIN
  process(clock)
  begin
    for i in 0 to 7 loop
      if dip1(i) = '1' then
        sumof2 <= 1;
        for j in 0 to i loop
          sumof2 <= sumof2 * 2;
        end loop;
        var <= var + sumof2;
      end if;
    end loop;

    unsnd  <= conv_UNSIGNED(var2, 8);                      -- DOESNT WORK!!
    output <= std_logic_vector( to_unsigned( unsnd, 8 ));  -- COMPILES BUT NO OUTPUT !!

What am I doing wrong?

Ans: I cannot see where you use the clock. You just have it in your sensitivity list... Apart from that, first try to understand how a display works.

dunda wrote:
> use ieee.std_logic_arith.all;
> --use ieee.std_logic_unsigned.all; -- these libraries conflict with ieee.std_logic_arith.all !!!!!
> --use ieee.numeric_std.all;        -- these libraries conflict with ieee.std_logic_arith.all !!!!!

Never use std_logic_arith and std_logic_(un)signed. Use only numeric_std and you'll solve the "conflicting libraries" problem. Blame your FPGA vendor for not fixing their docs and their code-generation tools.

> for i in 0 to 7 loop
>   if dip1(i) = '1' then
>     sumof2 <= 1;
>     for j in 0 to i loop
>       sumof2 <= sumof2 * 2;
>     end loop;
>     var <= var + sumof2;
>   end if;
> end loop;

You should use variables instead of signals for the intermediate stuff in your for loop. Signal assignments inside a process do not take effect until the process suspends, so sumof2 and var never accumulate the way the loop intends.

=-a

dunda wrote:
> I need to take an 8-bit binary input and display it on an LCD display using VHDL.
> [quoted code snipped]

Aside from what the others have mentioned, I don't see that var2 is ever assigned a value, except for the initializer in "signal var2 ... := 8", and I'm not sure whether that gets synthesized or not. Is dip1 a signed or an unsigned value? When you convert dip1 to be displayed, are you going to display it as two hex digits (00h - FFh) or as decimal digits (-1, 255, etc.)? Your output port "output" will only hold a single ASCII character. Does your LCD have a serial or a parallel interface? This affects how you send ASCII characters to the LCD.
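Since the thread never shows a working conversion, here is a sketch of the standard hardware-friendly alternative: the shift-and-add-3 ("double dabble") binary-to-BCD algorithm. It is written in C for readability, and byte_to_ascii is a name made up for this sketch; the structure (a shift register plus per-digit correction adds) maps directly onto an unrolled combinational block or a small clocked process in VHDL using numeric_std types.

#include <stdio.h>
#include <stdint.h>

/* Shift the 8-bit input left into a 3-digit packed-BCD register, adding 3
 * to any BCD digit >= 5 before each shift so it carries correctly, then
 * turn each BCD digit into an ASCII character. */
static void byte_to_ascii(uint8_t bin, char out[4])
{
    uint16_t bcd = 0;  /* 12 bits: hundreds | tens | units */

    for (int i = 0; i < 8; i++) {
        if (((bcd >> 0) & 0xF) >= 5) bcd += 0x003;  /* correct units */
        if (((bcd >> 4) & 0xF) >= 5) bcd += 0x030;  /* correct tens */
        if (((bcd >> 8) & 0xF) >= 5) bcd += 0x300;  /* correct hundreds */
        bcd = (bcd << 1) | ((bin >> (7 - i)) & 1);  /* shift in the next MSB */
    }
    out[0] = ((bcd >> 8) & 0xF) + '0';
    out[1] = ((bcd >> 4) & 0xF) + '0';
    out[2] = (bcd & 0xF) + '0';
    out[3] = '\0';
}

int main(void)
{
    char digits[4];
    byte_to_ascii(0x7D, digits);   /* 125, as in the earlier question */
    printf("%s\n", digits);        /* prints 125; send one char at a time to the LCD */
    return 0;
}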

FLOATING POINT ARITHMETIC ON FPGA

http://www.scribd.com/doc/30441900/Floating-PointArithmetic-final -- very good, it includes Verilog code


2. ABSTRACT

Floating point operations are hard to implement on FPGAs because of the complexity of their algorithms. On the other hand, many scientific problems require floating point arithmetic with high levels of accuracy in their calculations. Therefore, we have explored FPGA implementations of addition and multiplication for IEEE-754 single precision floating-point numbers. For floating point multiplication in IEEE single precision format, we have to multiply two 24-bit significands. Dedicated multipliers are already present in the Spartan 3E; the main idea is to replace the existing multiplier with a dedicated 24-bit multiplier designed from small 4-bit multipliers. For floating point addition, the exponent matching, the shifting of the 24-bit mantissa, and the sign logic are coded in behavioral style. Our entire project is divided into 4 modules:

1. Design of the floating point adder/subtractor.
2. Design of the floating point multiplier.
3. Creation of the combined control and data paths.
4. I/O interfacing: interfacing the LCD for displaying the output and taking inputs from block RAM.

Prototypes have been implemented on a Xilinx Spartan 3E.

INTRODUCTION:

Image and digital signal processing applications require high floating point calculation throughput, and nowadays FPGAs are being used for performing these digital signal processing (DSP) operations. Floating point operations are hard to implement on FPGAs because their algorithms are quite complex. In order to combat this performance bottleneck, FPGA vendors, including Xilinx, have introduced FPGAs with nearly 254 dedicated 18x18-bit multipliers. These architectures can cater to the need for high speed integer operations but are not suitable for performing floating point operations, especially multiplication. Floating point multiplication is one of the performance bottlenecks in high speed and low power image and digital signal processing applications. Recently there has been significant work on the analysis of high-performance floating-point arithmetic on FPGAs, but so far no one has addressed the issue of replacing the dedicated 18x18 multipliers in FPGAs with an alternative implementation to improve floating point efficiency. It is a well known concept that the single precision floating point multiplication algorithm is divided into three main parts corresponding to the three parts of the single precision format. In FPGAs, the bottleneck of any single precision floating-point design is the 24x24-bit integer multiplier required for multiplication of the mantissas. In order to circumvent the aforesaid problems, we designed floating point multiplication and addition. The designed architecture can perform both single precision floating point addition and single precision floating point multiplication with a single dedicated 24x24-bit multiplier block built from small 4x4-bit multipliers. The basic idea is to replace the existing 18x18 multipliers in FPGAs with dedicated 24x24-bit multiplier blocks implemented with dedicated 4x4-bit multipliers. This architecture can also be used for integer multiplication.

3.1. FLOATING POINT FORMAT USED:

As mentioned above, the IEEE Standard for Binary Floating-Point Arithmetic (ANSI/IEEE Std 754-1985) will be used throughout our work. The single precision format is shown in Figure 1. Numbers in this format are composed of the following three fields:

1-bit sign, S: A value of 1 indicates that the number is negative, and a 0 indicates a positive number.

Bias-127 exponent, e = E + bias: This gives us an exponent range from Emin = -126 to Emax = +127.

Fraction, f (mantissa): The fractional part of the number. The fractional part must not be confused with the significand, which is 1 plus the fractional part. The leading 1 in the significand is implicit. When performing arithmetic with this format, the implicit bit is usually made explicit.

To determine the value of a floating point number in this format we use the following formula:

Value = (-1)^sign x 2^(exponent - 127) x 1.f22 f21 f20 ... f1 f0

Fig 1. Representation of a floating point number.

3.2. DETECTION OF SPECIAL INPUTS:

IEEE-754 single precision floating point numbers support three special inputs.

Signed Infinities: The two infinities, +∞ and -∞, represent the maximum positive and negative real numbers, respectively, that can be represented in the floating-point format. Infinity is always represented by a zero significand (fraction and integer bit) and the maximum biased exponent allowed in the specified format (for example, 255 decimal for the single-real format). The signs of infinities are observed, and comparisons are possible. Infinities are always interpreted in the affine sense; that is, -∞ is less than any finite number and +∞ is greater than any finite number. Arithmetic on infinities is always exact. Exceptions are generated only when the use of infinity as a source operand constitutes an invalid operation. Whereas de-normalized numbers represent an underflow condition, the two infinity numbers represent the result of an overflow condition: the normalized result of a computation has a biased exponent greater than the largest allowable exponent for the selected result format.

NaNs: Since NaNs are non-numbers, they are not part of the real number line. The encoding space for NaNs in the FPU floating-point formats lies beyond the ends of the real number line. This space includes any value with the maximum allowable biased exponent and a non-zero fraction. (The sign bit is ignored for NaNs.) The IEEE standard defines two classes of NaNs: quiet NaNs (QNaNs) and signaling NaNs (SNaNs). A QNaN is a NaN with the most significant fraction bit set; an SNaN is a NaN with the most significant fraction bit clear. QNaNs are allowed to propagate through most arithmetic operations without signaling an exception. SNaNs generally signal an invalid-operation exception whenever they appear as operands in arithmetic operations.

Though zero is not a special input, if one of the operands is zero then the result is known without performing any operation, so zero, which is denoted by a zero exponent and a zero mantissa, is detected as well. One more reason to detect zeroes is that it is otherwise difficult to get the correct result, as the adder may interpret zero as the decimal value 1 after adding the hidden 1 to the mantissa.
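As a hedged illustration of the field layout and the special-input checks described above, here is a small C sketch that decodes a single precision bit pattern; the function name classify is made up for this example, and the infinity test case assumes IEEE division semantics.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Pull out the sign, biased exponent, and fraction fields of an IEEE-754
 * single, then classify zero, infinity, and NaN (QNaN vs SNaN by the
 * fraction's most significant bit). */
static void classify(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);       /* reinterpret the 32-bit pattern */

    uint32_t sign = bits >> 31;
    uint32_t exp  = (bits >> 23) & 0xFF;  /* biased exponent, 0..255 */
    uint32_t frac = bits & 0x007FFFFF;    /* 23 fraction bits */

    if (exp == 255 && frac == 0)
        printf("%cinfinity\n", sign ? '-' : '+');
    else if (exp == 255)
        printf("%s\n", (frac & 0x00400000) ? "QNaN" : "SNaN");
    else if (exp == 0 && frac == 0)
        printf("%czero\n", sign ? '-' : '+');
    else
        printf("sign=%u exponent=%d fraction=0x%06X\n",
               sign, (int)exp - 127, frac);
}

int main(void)
{
    float zero = 0.0f;
    classify(243.0f);        /* sign=0 exponent=7 fraction=0x730000 */
    classify(-1.0f / zero);  /* -infinity */
    classify(0.0f);          /* +zero */
    return 0;
}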

5. FLOATING POINT MULTIPLIER:

The single precision floating point multiplication algorithm is divided into three main parts corresponding to the three parts of the single precision format. The first part of the product, the sign, is determined by an exclusive OR of the two input signs. The second part, the exponent of the product, is calculated by adding the two input exponents. The third part, the significand of the product, is determined by multiplying the two input significands, each with a 1 concatenated to it. The figure below shows the architecture and flowchart of the single precision floating point multiplier. It can be easily observed from the figure that the 24x24...
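A minimal C sketch of this three-part algorithm follows, ignoring rounding, exponent overflow, and the special inputs of Section 3.2 for brevity; fmul_sketch is a name made up for this example, and the 24x24-bit product of the significands is exactly the multiplication the dedicated multiplier block performs.

#include <stdio.h>
#include <stdint.h>

/* sign = XOR of the input signs; exponent = sum of biased exponents minus
 * 127; significand = 24x24-bit product of the mantissas with the hidden 1
 * made explicit, renormalized if the product reaches 2.0 or more. */
static uint32_t fmul_sketch(uint32_t a, uint32_t b)
{
    uint32_t sign = (a ^ b) & 0x80000000u;
    int32_t  exp  = (int32_t)((a >> 23) & 0xFF) + (int32_t)((b >> 23) & 0xFF) - 127;

    uint32_t ma = (a & 0x007FFFFFu) | 0x00800000u;  /* 1.f, 24 bits */
    uint32_t mb = (b & 0x007FFFFFu) | 0x00800000u;
    uint64_t prod = (uint64_t)ma * mb;              /* the 24x24 multiply */

    if (prod & (1ull << 47)) {  /* product in [2.0, 4.0): shift, bump exponent */
        prod >>= 24;
        exp++;
    } else {                    /* product in [1.0, 2.0) */
        prod >>= 23;
    }
    return sign | ((uint32_t)exp << 23) | ((uint32_t)prod & 0x007FFFFFu);
}

int main(void)
{
    union { float f; uint32_t u; } x = { 3.0f }, y = { 2.5f }, r;
    r.u = fmul_sketch(x.u, y.u);
    printf("%g\n", r.f);   /* prints 7.5 */
    return 0;
}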
