What matters most when choosing an embedded processor?
I remember the first embedded project I worked on where I had visibility into choosing the processor. I was a junior member of the technical staff, and I assisted a more senior member in selecting and documenting our choice of processor for a proposal. I say assisted because my contribution consisted mostly of writing up the proposal rather than actually evaluating the different options. What stuck with me over the years about that experience was the large number of options and the apparent ease with which the other team member chose the target processor (an 80C196KB). I felt that processor had been chosen mostly based on his prior experience and familiarity with the part.

Today, the number of processing options available to embedded developers is vastly larger; just check out the Embedded Processing Directory to get a sense of the current market players and the types of parts they offer. While prior experience with a processor family is valuable, I suspect it is only one of many considerations when choosing a target processor. Today's device families offer many peripherals and hardware accelerators in a single package that were not available just a few years ago. Today's devices are so complex that it is insufficient for processor vendors to supply just a datasheet and a cross assembler. Today, most processor suppliers provide substantial amounts of software to go with their processors. I view most of this software as low-hanging integration fruit rather than a necessary evil to sell processors, but that is a topic for another day.

I suspect that while instruction set architecture and maximum processing performance are important, they are not necessarily the deciding criteria they used to be. There are entire processor platforms built around low energy or value pricing that trade processing performance to enable entirely different sets of end applications. There is a growing body of bundled, vertically targeted software that many processor platforms support, and I suspect the bundled software is playing a larger role in getting a processor across the line into a design win.

With the recent launch of the Embedded Processing Directory, I would like to ask what matters most to you when choosing an embedded processor. Is having access to the on-chip resources in a table format still sufficient, or are there other types of information that you must evaluate before selecting a target processor? We have a roadmap planned for the directory to incorporate more application-specific criteria, as well as information about the entire ecosystem that surrounds a processor offering. Is this the right idea, or do you need something different? Please share your thoughts, and include what types of applications you are working on to provide context for the criteria you examine when selecting an embedded processor.
This entry was posted on Wednesday, July 14th, 2010 at 4:38 pm and is filed under Question of the Week. You can follow any responses to this entry through the RSS 2.0 feed. You can skip to the end and leave a response. Pinging is currently not allowed.
38 Responses to What matters most when choosing an embedded processor?
RVDW @ LI says:
July 19, 2010 at 12:30 pm

1. Decide what software it is to run.
2. Then select a core with peripherals that can run that software.

Cost, development tools, core speed, and peripherals are really all determined by 1, because if there are several ICs that fulfill 1, all that's left at 2 is picking the least expensive option. The only secret is that memory is the largest expense in a computer, and cores can differ substantially in code density, so sometimes a slightly more expensive core can reduce the system's total cost. In my tests last year, popular modern cores differed by less than 5% in code density when using leading compilers. So that leaves the costs of silicon and licensing the IP. IP vendors seem to use similar technologies, so the silicon areas are similar for similar classes of CPU. Of course, the most popular cores have the highest licensing costs. So it boils down to popular, but not *too* popular. We picked ColdFire.
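[Editor's note: a minimal sketch of the total-cost arithmetic this comment describes. Every figure below (unit prices, code size, flash cost, the 5% density delta) is invented for illustration; only the shape of the calculation is the point.]

```c
#include <stdio.h>

int main(void) {
    /* Assumed, illustrative figures -- not from the comment */
    double core_a_price = 1.20;     /* USD: cheaper core            */
    double core_b_price = 1.45;     /* USD: denser, pricier core    */
    double code_kb_a = 512.0;       /* code size on core A, in KB   */
    double density_gain = 0.05;     /* core B ~5% denser code       */
    double flash_usd_per_kb = 0.02; /* memory cost per KB           */

    double code_kb_b = code_kb_a * (1.0 - density_gain);
    double total_a = core_a_price + code_kb_a * flash_usd_per_kb;
    double total_b = core_b_price + code_kb_b * flash_usd_per_kb;

    /* Once memory dominates, the pricier-but-denser core wins */
    printf("core A: $%.3f per system\n", total_a); /* $11.440 */
    printf("core B: $%.3f per system\n", total_b); /* $11.178 */
    return 0;
}
```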
R.A. @ LI says:
M. is correct that it is the requirements of the product that you are building that matter most. The most common mistake made in choosing a processor is in the choice of who the decision maker should be. In general, the software team responsible for the application should be the decision maker for the processor. The reason for this is simple: the processor is there for no other reason than to support the application software, and the software is directly answerable to the vast majority of product requirements (typically more than 90% of documented product requirements these days are implemented by the software).

Items that M. didn't mention:

1. Does the processor have an MMU? With the complexity of device software today, an MMU is mandatory for pretty much every application.
2. Does the processor have a cycle counter? Again, because of the complexity of devices, advanced debugging support for the software is required. Cycle counters allow the debugging of race conditions that are exceedingly difficult (if not impossible) to debug otherwise.
3. Is the instruction set a target for any binary software components that might be used? If the application uses any off-the-shelf third-party binaries, then the processor must support that instruction set.
4. What calling conventions does the processor support? This is related to the tools support that M. mentioned, but it requires analysis by the software team. A good example of this is the Freescale e500 cores. They do not have a floating-point unit, but they do have an SPE unit that can be used to do hardware floating point. On more than one occasion I have seen the selection team or individual choose this processor because they felt the SPE could be used for the floating-point requirement, only to learn that the calling convention is different, and thus incompatible with the system software.
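[Editor's note: to make point 2 concrete, here is a bare-metal sketch of enabling and reading a cycle counter. The thread never names a part; the register addresses below are for the DWT block on an ARM Cortex-M3/M4, chosen purely as a common example.]

```c
#include <stdint.h>

/* Cortex-M3/M4 DWT registers (fixed memory-mapped addresses) */
#define DEMCR      (*(volatile uint32_t *)0xE000EDFCu) /* Debug Exception and
                                                          Monitor Control    */
#define DWT_CTRL   (*(volatile uint32_t *)0xE0001000u)
#define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004u)

void cyccnt_init(void) {
    DEMCR      |= (1u << 24); /* TRCENA: power up the DWT block     */
    DWT_CYCCNT  = 0;
    DWT_CTRL   |= 1u;         /* CYCCNTENA: start the cycle counter */
}

/* Timestamp two events with near-zero intrusion; comparing the deltas
   across runs exposes ordering and race problems that printf-style
   tracing would perturb far more. */
volatile uint32_t t_event_a, t_event_b;

void on_event_a(void) { t_event_a = DWT_CYCCNT; }
void on_event_b(void) { t_event_b = DWT_CYCCNT; }
```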
R.A. @ LI says:
July 20, 2010 at 1:12 pm

"I've never designed a system with an MMU and don't see it on the horizon for the kind of work I do."

That's why I said "pretty much." There are, of course, always exceptions, but the vast majority of embedded developers today are working on systems that have significantly more than 9KB of code throughout the product.

"Actually R., I'm not sure about letting software people choose a chip for low power either; a flat battery trumps even totally bug-free software."

What makes you think that using MMU-enabled chips increases the power consumption of the total system? The truth is that having a single MMU-based chip that replaces 15 MMU-less micro-controllers actually reduces the system's overall power consumption. Sure, if your system has only a single micro-controller with 9KB of code, then an MMU is pointless; but unless you are building something on the scale of a musical greeting card or a microwave oven, you aren't likely to have the luxury of this degree of simplicity. In any case, such simple applications are universally well understood, and while this may have been an interesting discussion in 1974, it really isn't all that interesting in 2010.
J.H. @ LI says:
July 20, 2010 at 8:11 pm R., there are plenty of real-time embedded systems out there without MMUs, and there will be for a very long time. Maybe if someone is coming to embedded applications from a big-systems programming background, or developing things with graphical touchscreen interfaces and TCP stacks for networking, then there could be a point in having an MMU, but plenty of real-time embedded applications do not require one.
M.Z. @ LI says:
July 20, 2010 at 11:29 pm The most stable will have MMUs, though. I completely agree with R. on this one. You *can* code on a system without one, but if you want stability, you have to be very, very careful.
R.A. @ LI says:
July 21, 2010 at 4:13 pm

"Your truth that merging many micro-based subsystems into one mega-system is an assertion which would require some actual evidence to be convincing."

If you perform an electronic systems reliability prediction computation such as SR-332 on two systems, one implemented out of many distributed processors and one implemented using a single processor, you will have your evidence (unless you elect to discount the decades of research, development, and testing that went into these analysis techniques). There are many reasons, some commercial, some regulatory, and some performance-based, for building large systems from a collection of smaller components. As long as cost, development effort, simplicity, debuggability, and overall system reliability aren't concerns, then systems composed of multiple extraneous processors are always an option.

"My assertion (unsupported by hard facts, but I still think it is true) is that building complex systems from many simple, proven components gets you to a reliable result faster and cheaper."

If by components you are referring to hardware components, then not only is this assertion not supported by facts, it flies in the face of many decades of research and analysis which, in fact, show the opposite conclusion to be true. It also seems intuitively obvious that the more components there are, the less reliable the overall system will be (i.e., this isn't one of those cases where the reality is counter-intuitive). One counter-intuitive result that is manifest by adding complexity is that if one builds two systems that are functionally identical and implements software that synchronizes one system as a back-up, the overall SERVICE AVAILABILITY increases (though, as intuition would indicate, because of the additional complexity, the overall reliability of the system does indeed decrease).
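[Editor's note: the series-reliability arithmetic behind this exchange is easy to sketch. The per-component figures below are invented; the thread quotes none.]

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Series model: the system works only if every part works */
    double r_part = 0.999;           /* one component surviving the mission */
    double r_15 = pow(r_part, 15.0); /* fifteen controllers in series       */
    double r_1  = r_part;            /* one consolidated processor          */

    printf("15 parts in series: R = %.4f\n", r_15); /* ~0.9851 */
    printf("1 part:             R = %.4f\n", r_1);  /*  0.9990 */

    /* Redundant pair: availability rises even though the part count
       (and hence the failure rate) grows -- the counter-intuitive
       point made at the end of the comment. */
    double a_unit = 0.999;
    double a_pair = 1.0 - (1.0 - a_unit) * (1.0 - a_unit);
    printf("availability, synchronized pair: %.6f\n", a_pair); /* 0.999999 */
    return 0;
}
```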
J.H. @ LI says:
July 22, 2010 at 12:25 pm

M.: Actually, I'd take code for an 8-bit micro not written with any dynamic memory usage at all over a huge, overblown contrivance with an MMU for stability. (OK, I admit you can't do everything this way, and without dynamic memory, things like comms protocols have to be written to survive throwing away data sometimes, but simple systems written well this way are pretty much bulletproof.)

Rennie: there's a whole world of embedded engineering you clearly don't have the faintest clue about. Good luck with that.

As for the religious war, what I want to know is who let all the computer programmers into embedded engineering anyway? Shouldn't you all be sitting about wearing turtlenecks and writing iPad apps or something?
R.A. @ LI says:
July 22, 2010 at 12:25 pm

"M.: Actually, I'd take code for an 8-bit micro not written with any dynamic memory usage at all over a huge, overblown contrivance with an MMU for stability."

The presence or absence of dynamic memory allocation is completely unrelated to the absence or presence of an MMU.
J.H. @ LI says:
I'm not talking about an MMU that's just for simple bank switching; in that case we should also be listing the importance of the processor having pins or pads to solder to, a VCC and a GND connection, and some kind of packaging to allow transport of the part from the manufacturer to the assembly house and loading into a pick-and-place machine.

"The presence or absence of dynamic memory allocation is completely unrelated to the absence or presence of an MMU."

So, professor, what exactly would you want an MMU for in a system (say 16K of code space and 1K of RAM) where you statically allocated all the memory that was needed? MMUs do have a place, but to assume that systems without them are irrelevant, when such massive volumes of devices built with small microcontrollers are sold these days that don't use them for anything more than bank switching, if at all, is just plain ignorant.
R.A. @ LI says:
July 25, 2010 at 12:22 am

"So, professor, what exactly would you want an MMU for in a system (say 16K of code space and 1K of RAM) where you statically allocated all the memory that was needed?"

It seems you have misunderstood. I never suggested that micro-controllers should have MMUs. I am suggesting that micro-controllers are becoming less and less relevant (by the minute, it seems) as they simply become components in more complex systems, where a micro-controller's functionality can simply be replicated, for zero incremental cost, as a thread in a process within the protection domain of an MMU within a larger processor. The benefit is not only cost but reliability, since an appropriately sized 32-bit processor with an MMU amortizes the cost of the supporting circuitry, thus reducing the overall gate count and therefore improving reliability.
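[Editor's note: a minimal sketch of the consolidation pattern this comment describes, assuming a POSIX host. The "motor" and "sensor" loops are invented stand-ins for what were once separate micro-controllers' main loops; the hosting process's MMU-backed address space is the protection domain.]

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Each thread replaces one formerly separate MMU-less controller */
static void *motor_loop(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) { puts("motor: control step"); usleep(100000); }
    return NULL;
}

static void *sensor_loop(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) { puts("sensor: sample and filter"); usleep(100000); }
    return NULL;
}

int main(void) {
    pthread_t motor, sensor;
    pthread_create(&motor, NULL, motor_loop, NULL);
    pthread_create(&sensor, NULL, sensor_loop, NULL);
    pthread_join(motor, NULL);
    pthread_join(sensor, NULL);
    return 0;
}
```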
T.W. @ LI says:
July 30, 2010 at 7:41 pm

After quite a few years working around embedded systems, the only use I saw for an MMU was to run a hypervisor on a multiple-core system and delegate each core to run another OS in its own address space, plus stack-overflow checking (not sure if the second reason is still valid; it was there primarily for diagnosis). Right now even VxWorks still plans to support memory protection within a hypervisor client, but they still don't have it (I guess it's going to show up somewhere early next year, when 6.9 is out?).

There have been some topics on embedded systems and OS functions to manage memory here in the past, and I agree that dynamic memory allocation is not exactly going to do any good to anybody as long as we talk about embedded systems. You'd profit from an MMU mostly during debug, when you have to catch something that messes around in your software (later in the field it won't help you recover: if a page fault, DSI, ISI, or such kills your surveillance task and everything goes havoc, you'd need to reboot anyway; you know all your applications already), but even that can be done without an MMU. Besides, maintaining TLBs with all the rights and coherency requires you to be consistent all the time, even when it's not required (or even unwanted, say in an interrupt or exception handler); but still, since your memory will most of the time be preallocated, an MMU is something you don't really need to care about.

Apart from what has been said already, I would only add one more point: just think carefully about what you need to achieve, what the software is supposed to do. There are plenty of projects around that still utilize good old 8-bit CPUs with no real OSes or MMU, and they fit just perfectly.
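[Editor's note: one concrete form of the "profit from an MMU during debug" point is a guard page: a minimal POSIX sketch, with an intentional out-of-bounds write that the MMU turns into an immediate fault rather than silent corruption.]

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    /* One page of usable data followed by one guard page */
    unsigned char *buf = mmap(NULL, (size_t)(2 * page),
                              PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Revoke all access to the second page so any stray write faults */
    if (mprotect(buf + page, (size_t)page, PROT_NONE) != 0) {
        perror("mprotect");
        return 1;
    }

    memset(buf, 0xAA, (size_t)page); /* fine: within the writable page */
    puts("writing one past the end of the buffer...");
    buf[page] = 0x55; /* SIGSEGV here -- the bug is caught at the write,
                         not discovered later as silent corruption      */
    return 0;
}
```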
D.M. @ LI says:
August 31, 2010 at 11:01 am Availability into the future is something that also needs due consideration. A product that I currently work on was designed over 15 years ago. The embedded processor chosen at the time is no longer available, and yet the product is still being manufactured and sold.
R.A. @ LI says:
August 31, 2010 at 11:01 am

"A product that I currently work on was designed over 15 years ago. The embedded processor chosen at the time is no longer available, and yet the product is still being manufactured and sold."

It is certainly an important factor. Longevity of design can be aided through the selection of the operating system as well. The OS can abstract the processor, allowing the hardware to be re-engineered with current components and minimal changes to the application code (assuming that the code is also well designed and implemented).
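[Editor's note: a minimal sketch of the abstraction layer this comment alludes to. The names (hal_uart, board_uart_*) are invented for illustration; the idea is only that application code depends on an interface, so re-spinning the silicon means rewriting the small port layer, not the application.]

```c
#include <stddef.h>
#include <stdint.h>

/* Interface the application sees; never changes with the silicon */
typedef struct {
    void (*init)(uint32_t baud);
    void (*write)(const uint8_t *buf, size_t len);
} hal_uart;

/* Port layer for the current part -- the only code tied to the hardware */
static void board_uart_init(uint32_t baud) {
    (void)baud; /* would program this part's UART registers */
}
static void board_uart_write(const uint8_t *buf, size_t len) {
    (void)buf; (void)len; /* would push bytes to this part's FIFO */
}

static const hal_uart uart0 = { board_uart_init, board_uart_write };

/* Application code depends only on hal_uart */
static void app_log(const hal_uart *u, const char *msg, size_t len) {
    u->write((const uint8_t *)msg, len);
}

int main(void) {
    uart0.init(115200);
    app_log(&uart0, "boot ok\n", 8);
    return 0;
}
```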
[…] able to get by with a small part now, but if your design exhibits feature creep (which all embedded products do), can you easily expand to a larger part?

#4 Environmental concerns. Clock frequency, temperature ranges, I/O, RAM/ROM. All of those things are important. While it may be possible to run a chip faster, does a faster crystal cause EMC issues? Does the chip have a PLL if you need a faster clock speed but cannot have a faster crystal? How about ESD protection? Is your part susceptible to ESD problems? All parts are in theory, but some manage this better.
J.G. @ LI says:
September 2, 2010 at 11:45 pm The only simple answer to this question is: how well does it meet the requirements of the project? As you can see, that's another question. The reason is that there isn't any one factor to be considered. Like buying a house, you'll need to list a number of needs and wants. Then you'll look at the available options and select the one that meets all the needs and as many of the wants as possible.
R.K. @ LI says:
September 12, 2010 at 6:33 pm In descending order of priority:

1. The application to be implemented
2. The speed required for the application
3. The features available (does it support the application?)
4. And, obviously, price