
Little endian to Big conversion, simulation results are not clear..?

Status
Not open for further replies.

Jansi Meena

Please help me. I have been given this file, which is said to convert little endian to big endian. One module outputs its data as (0 to 33) and another module expects (33 downto 0). I've been told that the code below will do the conversion:
Code:
entity le_2_be is
    port (
        ARRAY_IN    : in    std_logic_vector(0 to 33);
        ARRAY_OUT   : out   std_logic_vector(33 downto 0)
    );
end le_2_be;

architecture le_2_be_arch of le_2_be is

begin
LE_2_BE_PROC: process(ARRAY_IN)
begin
    for i in 0 to 33  loop
        ARRAY_OUT(i)    <= ARRAY_IN(33-i);
    end loop;
end process;

end le_2_be_arch;
When I simulate by feeding
ARRAY_IN <= x"000000001";
I still get ARRAY_OUT = x"000000001", the same as the input, but I expected x"100000000" (is that right?)
When I look at the individual bits, the order has changed, yet the simulation displays the same value as the input. Why is it read like this?

Please help. I use ISim (Xilinx).


Will the reading order cause such an issue? If so, how should it be read?
Also, is "downto" little endian, and "to" big endian?
 

It has changed. If you look at the actual number, ARRAY_OUT(0) will be equal to ARRAY_IN(33). The way it is displayed is correct, because index 0 is the MSB for ARRAY_IN, and index 33 is the MSB for ARRAY_OUT. Only the MSB index changes.

The code you have is a bit of overkill. You could do it simply by writing:

ARRAY_OUT <= ARRAY_IN;

No for loop or process needed.
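As a check, the one-line version really is the same circuit as the loop: VHDL array assignment pairs elements positionally, leftmost to leftmost. A minimal sketch (the entity name le_2_be_simple is made up here; the ports match the original le_2_be):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical entity name; same ports as the original le_2_be.
entity le_2_be_simple is
    port (
        ARRAY_IN  : in  std_logic_vector(0 to 33);
        ARRAY_OUT : out std_logic_vector(33 downto 0)
    );
end le_2_be_simple;

architecture rtl of le_2_be_simple is
begin
    -- Positional copy: ARRAY_OUT(33) gets ARRAY_IN(0), ...,
    -- ARRAY_OUT(0) gets ARRAY_IN(33) -- identical to the explicit for-loop.
    ARRAY_OUT <= ARRAY_IN;
end rtl;
```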
 

OMG, the code is useless? I never thought of it that way. Then I wonder why it has been used in so many places....
So even if I pass a value from a "downto" range to a "to" range, the integer value won't change?

Please clarify, as I am greatly confused....
For example, this is just for my clarity; the code has no other meaning.

Code:
process(a,b)
variable a : std_logic_vector(0 to 3)
variable b : std_logic_vector(3 to 0)

begin

a(0 to 3) <:= "0001";               -- line 1
b <= a;                                -- line 2
c <= b+1;                            -- line 3
end process
Now what will the value of c be?
a) Is it "0010" or "1001"?
b) If "0010", why doesn't the little/big-endian change alter the number?
c) Is my sensitivity list correct?

I assume the value of a will change from 1 to 8, is that correct?
Is my doubt clear?

Thanks
 

First of all, the syntax error: you mean 3 downto 0, not 3 to 0.

a) C is "0010"
"0010" on its own has no meaning until applied to an array. For a (3 downto 0) array, bits (3 downto 1) are '0' and bit 0 is '1'. For (0 to 3), bits (0 to 2) are '0' and bit 3 is '1'.

b) It is "0010". Big or little endianness doesn't matter; the number is the same. It is just the MSB index that changes.

c) No, unless a and b are signals externally. You can only put signals in a sensitivity list. You have also declared local variables a and b that hide the external signal names, so the internal references are to the variables (and you have used the wrong assignment operator; for variables it is := ).
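For reference, a sketch of the same process with the syntax fixed. This assumes c is a std_logic_vector(3 downto 0) signal declared outside the process, and uses ieee.numeric_std for the "+ 1"; a run-once testbench-style process replaces the sensitivity list, since variables cannot go in one:

```vhdl
-- Assumes: use ieee.std_logic_1164.all; use ieee.numeric_std.all;
-- and a signal c : std_logic_vector(3 downto 0); in scope.
process
    variable a : std_logic_vector(0 to 3);      -- ascending range
    variable b : std_logic_vector(3 downto 0);  -- descending range
begin
    a := "0001";                             -- variables use :=
    b := a;                                  -- positional copy: b(3)=a(0) ... b(0)=a(3)
    c <= std_logic_vector(unsigned(b) + 1);  -- "0001" + 1 = "0010"
    wait;                                    -- run once, testbench style
end process;
```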
 
c will be "0010" only.
As Tricky said, endianness only affects the index; it won't alter your numeric value.
When a module reading little endian receives data written as big endian, the bit order is simply reversed and the data still fits into the slot. It only matters when you read bitwise: for example, if you write 32-bit data, reading (15 downto 0) will not equal (0 to 15).
But as a whole 32-bit dword, the numeric value always remains the same.

like this....

0 1 2 3
0 0 1 0 (a)

3 2 1 0
0 0 1 0 (b)

Here a(0 to 2) is not equal to b(2 downto 0), but still a = b

Cleared?....
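The picture above can be stated directly in VHDL; a small sketch (declarations assumed to sit in a testbench):

```vhdl
-- Same bit pattern, opposite index directions.
signal a : std_logic_vector(0 to 3)     := "0010";  -- a(2) = '1'
signal b : std_logic_vector(3 downto 0) := "0010";  -- b(1) = '1'
-- As whole vectors the values are equal:  a = b            (both "0010")
-- but index-based slices are not:         a(0 to 2)     = "001"
--                                         b(2 downto 0) = "010"
```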
 
first of all, the syntax error. You mean 3 downto 0, not 3 to 0.


Yes, it is supposed to be "downto".

Thanks, friends.
One more doubt: does endianness not apply bitwise?
 

Endianness does apply bitwise. 0 to 31 is big endian and 31 downto 0 is little endian (IIRC). It can be a little more complicated with computers, because the bit numbering within a byte and the byte ordering within a word can follow opposite conventions. Reading bitmap files is a bit of a pain.
 

In general, I would say: MSB-first order, whatever the unit, bit/byte/word/dword/qword etc., is big-endian. That is easy to remember, right?...

Tricky:
because the bytes can be big endian when it comes down to bits, and little endian when it comes to byte ordering. Reading bitmap files is a bit of a pain.

I don't understand what you mean here?....
 

I don't understand what you mean here?....

Each byte is ordered (7 downto 0), but then byte zero is the least sig byte of the first dword.
so the bit order is
7..0 15..8 23..16 31..24 etc

annoying.
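One way to live with that ordering is to index through it rather than reformat the stream; a sketch (the byte_array type and dword_bit function are made-up names):

```vhdl
-- Bytes arrive least-significant first, each byte internally (7 downto 0).
type byte_array is array (natural range <>) of std_logic_vector(7 downto 0);

-- Pick dword bit k straight out of the byte stream:
-- byte index = k / 8, bit within the byte = k mod 8.
function dword_bit(bytes : byte_array; k : natural) return std_logic is
begin
    return bytes(k / 8)(k mod 8);
end function;
```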
 
Each byte is ordered (7 downto 0), but then byte zero is the least sig byte of the first dword.
so the bit order is
7..0 15..8 23..16 31..24 etc

annoying.
That is little-endian both for bits and bytes, and I prefer it. It may be natural in our culture that "left" is first, but there is no "left" and "right" in the hardware. I prefer that bit 0 is the least significant bit, and that byte 0 is the least significant byte. The Intel processors are like this.

Old Motorola processors like 68000 are little-endian for bits but big-endian for bytes (31..24 23..16 15..8 7..0).
It may look "right" for a human, but there is no advantage for hardware or software.

Power PC is big endian for bits and switchable for bytes. In the fully big-endian mode the bits are like this: (0..7 8..15 16..23 24..31)
Bit 0 is the most significant.

Say you want to check if a number in memory is even or odd. In a little-endian (for byte ordering) system you only need to know the start address of the word. In a big-endian (for byte ordering) system you must know the start address and the word size.
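In VHDL terms the even/odd point looks like this; a sketch assuming a byte-wide memory modelled as an array (the names byte_mem, mem, start_addr, and is_odd are invented here):

```vhdl
-- Little-endian byte order: the byte at the start address is already the
-- least significant one, so its bit 0 answers even/odd for any word size.
type byte_mem is array (natural range <>) of std_logic_vector(7 downto 0);
signal mem        : byte_mem(0 to 255);
signal start_addr : natural range 0 to 255;
signal is_odd     : std_logic;

is_odd <= mem(start_addr)(0);  -- no word size needed
-- A big-endian byte order would need mem(start_addr + word_bytes - 1)(0).
```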
 
In a big-endian (for byte ordering) system you must know the start address and the word size.

What do you mean? Can you give an example?...


Each byte is ordered (7 downto 0), but then byte zero is the least sig byte of the first dword.
so the bit order is
7..0 15..8 23..16 31..24 etc

annoying.

(7..0\15..8) is a real headache.

How do you handle such a case? Do we have to reformat the byte order first and then process it?.... Or how is this taken care of in modern architectures...?
 
What do you mean? Can you give an example?
In a big-endian system pointers point to the most significant byte (the first byte).
To find the least significant byte you must know the word size.

You get a similar problem if you want to do a multi-word/byte adder or a serial adder. You must start with the lowest word/byte/bit.
(7..0\15..8) is a real headache.
std_logic_vector(7 downto 0) doesn't mean that bit 7 is the first bit. It is the "left" bit. The hardware doesn't care about "left" or "right" so why say that bit 7 is the first? I think everything is easier if I consider the element with the lowest index as the "first". Then 7..0 15..8 etc. is the most logical bit/byte order since I want bit 0 to be the least significant bit. This is how Intel processors do it.
 
In a big-endian system pointers point to the most significant byte (the first byte).
To find the least significant byte you must know the word size.

Yeah, agreed. But in the little-endian case, isn't the word size still required?

std_logic_vector(7 downto 0) doesn't mean that bit 7 is the first bit. It is the "left" bit. The hardware doesn't care about "left" or "right" so why say that bit 7 is the first? I think everything is easier if I consider the element with the lowest index as the "first". Then 7..0 15..8 etc. is the most logical bit/byte order since I want bit 0 to be the least significant bit. This is how Intel processors do it.

Well, I meant to say that reading in this order, (31..24) (23..16) (15..8) (7..0), might be easier compared to (7..0) (15..8) etc..... That is an endianness change in byte order only.

Instead, this might be easy: (0..7) (8..15) etc.
But that is a bit-order change.
 
Thanks. Most of the devices I've worked with, like some TI DSPs, are big endian only. That is what they use for SPI and other communications...
What is the problem with read order? In VHDL I find no trouble switching between big and little endian....
 

I think it is independent. Some manufacturers make it a fixed standard and some make it programmable.

In an FPGA there is no problem, because we always deal with bit vectors, and in fact it is very easy. But in a 32-bit processor it is a tough job; it may take extra machine cycles to get bit-level information out of a dword. Hence this convention can become a headache....
 
