
Embedded Vision Insights Newsletter

Embedded Vision Insights, the newsletter of the Embedded Vision Alliance, is periodically distributed via email. To receive Embedded Vision Insights, please register as a website user or, if you've already done so, update your user profile in order to add your email address to the distribution list.


September 9, 2014 Edition

Dear Colleague,

I'm pleased to report that the full suite of content from the late May Embedded Vision Summit is now published on the Alliance website. Demonstration videos that have appeared in just the last two weeks include those from:

  • CEVA: Various computer vision functions running on the company's MM3101 imaging and vision DSP core
  • PercepTonic: One video on smart surveillance systems, the other on Harris corner detection and Lucas-Kanade feature tracking
  • Synopsys: A HOG-based pedestrian detection application running on the company's processor cores
  • Texas Instruments: One video on an ADAS surround view application, the other showcasing a structured light depth camera setup
  • VanGogh Imaging: Object detection and recognition using the company's software, and
  • videantis: Pedestrian detection, video encode/decode and feature tracking all running on the company's vision processor core
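For readers curious what the Harris corner detection featured in the PercepTonic demo actually computes, here is a minimal, illustrative NumPy sketch of the Harris corner response on a toy image. This is a simplified assumption-laden version for intuition only (crude box smoothing instead of Gaussian weighting), not code from any of the demos above:

```python
import numpy as np

def harris_response(img, k=0.04, r=1):
    """Harris corner response R = det(M) - k * trace(M)^2 at each pixel."""
    Iy, Ix = np.gradient(img.astype(float))  # image gradients

    def box_filter(a):
        # Average over a (2r+1) x (2r+1) window via shifted sums
        # (a crude stand-in for the usual Gaussian weighting).
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2

    # Smoothed structure-tensor components
    Sxx = box_filter(Ix * Ix)
    Syy = box_filter(Iy * Iy)
    Sxy = box_filter(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Toy image: one bright square. Flat regions score ~0, edges score
# negative, and the strongest responses land near the square's corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
```

In a full detector, the response map `R` would then be thresholded and non-maximum suppressed to yield discrete corner points, which a Lucas-Kanade tracker (as in the PercepTonic video) would follow from frame to frame.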

And don't forget about all the other already-published Summit material: 36 videos in all, plus a downloadable proceedings set. You can access it all from one place, the May 2014 Embedded Vision Summit Content page. And one click away from it is the overview page for next year's Embedded Vision Summit, taking place May 12, 2015 (with accompanying partial- and full-day workshops on both the 11th and 13th), once again at the Santa Clara Convention Center. Mark your calendars and plan to attend; additional information, including sponsors, workshops and registration details, is forthcoming.

Speaking of content, last time I mentioned that the Alliance had joined with Apress Media and author Scott Krig to enable free publication of Scott's new book "Computer Vision Metrics: Survey, Taxonomy, and Analysis" on the Alliance website. Four of the book's eight chapters are now published, along with the introduction and bibliography. The remainder of the chapters, plus the four appendices, will follow in the coming weeks. And don't forget: if you're based in the United States and are willing to post a review of the book to the website's discussion forums, we have a limited number of complimentary print copies available as thank-you gifts. Email us with your commitment and contact information for consideration.

If you're interested in purchasing a print copy of "Computer Vision Metrics", visit Apress's website for more information. And while you're on the Alliance website, make sure you check out all the other great new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. As always, I welcome your suggestions on what the Alliance can do to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


August 28, 2014 Edition

Dear Colleague,

I'm happy to be able to share some exciting news with you! The Embedded Vision Alliance has joined with Apress Media and author Scott Krig to enable free publication of Scott's new book "Computer Vision Metrics: Survey, Taxonomy, and Analysis" on the Alliance website. At the moment, the book's introduction, bibliography and first two chapters are available for you to both read online and download as PDFs; successive chapters and other book sections will follow in future weeks.

If you're based in the United States and are willing to post a review of the book to the website's discussion forums, we have a limited number of complimentary print copies available as thank-you gifts. Email us with your commitment and contact information for consideration. And if you're interested in purchasing a print copy, visit Apress's website for more information.

Here's an abridged version of the book description, provided by Apress:

Computer Vision Metrics provides an extensive survey and analysis of over 100 current and historical feature description and machine vision methods, with a detailed taxonomy for local, regional and global features. This book provides the necessary background to develop intuition about why interest point detectors and feature descriptors actually work and how they are designed, with observations about tuning the methods to achieve robustness and invariance targets for specific applications.

The survey is broader than it is deep, with over 540 references provided to dig deeper. The taxonomy includes search methods, spectra components, descriptor representation, shape, distance functions, accuracy, efficiency, robustness and invariance attributes, and more. Rather than providing 'how-to' source code examples and shortcuts, this book provides a counterpoint discussion to the many fine OpenCV community source code resources available for hands-on practitioners.

Make sure you also check out Jeff Bier's recent interview with Scott, where the author discusses his motivations for writing the book and its contents in greater detail. And while you're on the Alliance website, make sure you check out all the other great new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


August 12, 2014 Edition

Dear Colleague,

After having wrapped up publication of the videos of technical presentations from the May Embedded Vision Summit, we're now about halfway through publishing that event's product demonstration videos. In reviewing them, I'm reminded of the breadth and depth of practical computer vision educational content that they (and the event they came from) represent. AMD, for example, demonstrated heterogeneous processing of vision algorithms, while Analog Devices discussed its processors' face detection, character recognition and pattern recognition capabilities.

ARM also demonstrated heterogeneous processing, specifically in the form of facial analysis and gesture interface algorithms accelerated on the company's GPU cores via OpenCL. Bluetechnix and inrevium AMERICA both showcased time-of-flight sensors, representing one of the three common 3D camera approaches. And Cadence's demonstrations highlighted face detection and real-time image processing, while CogniVue's application focus spanned both ADAS and consumer electronics designs.

I encourage you to take a few minutes to check out the above-mentioned videos, and to keep an eye out for those still to come. And of course, don't forget to mark your calendars for next year's Embedded Vision Summit, which promises to raise the quality bar even further. In past newsletters, I'd indicated that the event would take place on April 30; it's recently been rescheduled to May 12, 2015 (still at the Santa Clara Convention Center in California), thereby enabling us to provide you with more technical sessions and more (and bigger) technology workshops the prior day.

I'm also happy to report that the Alliance has added nearly a dozen new members in just the past few months; many of their company descriptions (along with all-important links to their websites for more information) can be found on the Alliance member company overview page. One of them, Aspera, has already published an interesting and informative technical article on network protocol alternatives for optimizing bandwidth and latency, which I commend to your inspection. And of course, while you're on the Alliance website, make sure you check out all the other great new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 31, 2014 Edition

Dear Colleague,

Amazon's Fire Phone, rumors of which I passed along to you in mid-April, went on sale last Friday, following its mid-June public unveiling. The overall reviews thus far have been somewhat lukewarm. However, the Fire Phone is chock-full of vision-based features, which have held up quite well to review scrutiny.

First off, there are the computational photography capabilities, for still image and video capture, enabled both by the Fire Phone's Google Android foundation and Amazon-developed enhancements (not to mention its 13 Mpixel rear and 2.1 Mpixel front cameras). Next is Firefly technology, which uses object and text recognition algorithms to identify whatever you point the handset's camera at, including (of course) items you might want to price-match and potentially buy from Amazon; television shows and movies shown on a screen in front of you; and web addresses, email addresses, and phone numbers.

Finally, there's Dynamic Perspective, which leverages infrared transmitters and sensors on each of the phone's front four corners to track your head location and orientation, presenting you with parallax-adjusted 3D representations of on-screen objects, along with enabling sophisticated but intuitive one-handed user interface gestures. And, as Qualcomm is happy to point out, a notable percentage of the vision processing takes place on the Hexagon DSP core integrated within the company's Snapdragon application processor.

I encourage you to check out the recently published analysis of the Fire Phone’s prospects by John Feland of Argus Insights, followed by a perusal of iFixit's product teardown. Then head to a nearby AT&T store (if you're in the United States, that is) to try out a Fire Phone for yourself. While you're on the Alliance website, please also peruse the other great content that's appeared there the past two weeks, including multiple product demonstration videos from May's Embedded Vision Summit, two article reprints (one on vision applications in industrial automation, the other on computational photography), and several press releases from Alliance member companies.

And speaking of Summits, mark your calendars now for next spring's event, currently scheduled to take place on April 30, 2015 at the Santa Clara (California) Convention Center. Thanks for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. As always, I welcome your suggestions on what the Alliance can do to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 15, 2014 Edition

Dear Colleague,

The steady stream of new videos on the Alliance website from the recent Embedded Vision Summit continues unabated. Newly published technical tutorials cover augmented reality for wearable devices and the "Internet of Things" (from AugmentedReality.org), processor optimization for pedestrian detection (from Synopsys), and the implementation of HOG, the histogram of oriented gradients algorithm used in object detection (from videantis). And still to come are nearly two dozen product demonstration videos captured at the event. Regularly revisit the website, keeping an eye on the Summit content archive page for them. If you sign up for the Alliance's Facebook, LinkedIn or Twitter channels, or its RSS feed, you'll receive notification each time a new piece of content appears.
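For readers who haven't yet watched the videantis HOG tutorial, the core idea of the histogram of oriented gradients is simple: divide the image into cells and, within each cell, accumulate gradient magnitude into orientation bins. The following is a minimal, illustrative NumPy sketch of that step alone (no block normalization or classifier, and not the tutorial's actual code):

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Per-cell histograms of oriented gradients (no block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bin_idx = np.minimum((ang * bins / 180.0).astype(int), bins - 1)

    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = bin_idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            for k in range(bins):
                # Each pixel votes for its orientation bin,
                # weighted by its gradient magnitude.
                hist[i, j, k] = m[b == k].sum()
    return hist

# A horizontal intensity ramp has purely horizontal gradients, so all
# of its energy falls into the 0-degree orientation bin of every cell.
ramp = np.tile(np.arange(32.0), (32, 1))
h = hog_cells(ramp)
```

In a full pedestrian detector, these per-cell histograms are normalized over overlapping blocks, concatenated into a descriptor vector, and fed to a classifier such as a linear SVM that is slid across the image.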

The day after the Summit, the Embedded Vision Alliance held its quarterly Member Meeting. Last time, I told you about the first video from this meeting, the presentation on "Project Tango" depth-sensing mobile devices from Google's Johnny Lee. And now I have the pleasure of sharing with you the other four published sessions from that day. They are:

And speaking of Embedded Vision Summits, I'll close out with a "teaser" about next year's event. It's currently scheduled for April 30, once again preceded the prior day by Alliance member company-led workshops, and will feature an expanded technical program. An overview page is now up on the website; mark your calendars and check back periodically for updates. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 1, 2014 Edition

Dear Colleague,

Johnny Lee, Technical Program Lead at Google, was one of the invited speakers at the Embedded Vision Alliance's May 30 Member Meeting. The video of his presentation, "Project Tango: Integrating 3D Vision Into Smartphones," is now published on the Embedded Vision Alliance website. Lee's talk followed up his company's earlier announced 3D smartphone and was just ahead of its more recent depth-sensing tablet announcement. Complete with multiple entertaining and interesting demos, Lee's presentation was also reprised at last week's Google I/O developer conference. I enthusiastically commend it to your attention.

Last time, I mentioned that videos of presentations from the May 29 Embedded Vision Summit West had begun appearing on the Alliance website. That trend has continued; newly published in the past two weeks are technical presentations on heterogeneous processing architectures (AMD), pipelined video processor usage (Analog Devices), and vision processor options and development tools (both from BDTI). You'll also find educational tutorials on object detection (CEVA), augmented reality (CogniVue), the OpenVX vision hardware acceleration API (Khronos), and Lucas-Kanade tracking (PercepTonic). Regardless of whether you attended these and the other Summit presentations first-hand, there's plenty to learn in each of them and a (re-)view is well worth your time.

Plenty of additional material from both the late-May Summit and Member Meeting is en route. Sign up for the Alliance's Facebook, LinkedIn and Twitter social media channels, along with its RSS feed, to receive proactive notification each time a new piece of content appears. And of course, while you're on the Alliance website, make sure you check out all the other great new content published there in recent weeks, including market analysis reports and summaries, news writeups, and press releases. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


June 17, 2014 Edition

Dear Colleague,

Videos of presentations from the recent Embedded Vision Summit West have begun to appear on the Alliance website. We've just published the two outstanding keynotes delivered that day, from Facebook's Yann LeCun and Google's Nathaniel Fairfield, in the Embedded Vision Academy area of the site. LeCun wowed the crowd with demonstrations of object recognition implemented using the convolutional neural network machine learning approach he discussed. And Fairfield's talk on Google's self-driving cars was particularly timely given that Google had just announced its first internally developed autonomous vehicles.

In the Academy, you'll also find technical presentations on performance and energy optimization (from Cadence), object detector development (MathWorks), gesture interfaces (Qualcomm), real-time 3D object recognition (VanGogh Imaging), and algorithm programming on heterogeneous architectures (Xilinx). And plenty of additional material from the Summit is en route. Keep an eye on the event's Content page for it. And if you sign up for the Alliance's Facebook, LinkedIn and Twitter social media channels, along with its RSS feed, you'll receive proactive notification each time a new piece of content appears.

Of course, while you're on the Alliance website, make sure you check out all the other great new content published there in recent weeks. Thanks for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. As always, I welcome your suggestions on what the Alliance can do to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


June 3, 2014 Edition

Dear Colleague,

Last Thursday's Embedded Vision Summit West was an absolutely amazing day. The keynotes from Yann LeCun of Facebook and Nathaniel Fairfield of Google provided compelling insights into the future of vision-enabled recognition and autonomy. The sixteen technical presentations from Alliance member companies and partners supplied an abundance of know-how on a diversity of vision processing topics. And in the technology showcase, more than twenty member companies and partners delivered demonstrations of vision technologies and products.

Whether or not you were present in person, visit the Embedded Vision Academy area of the Alliance website, where you can now download the Summit presentation slides. In the coming weeks, they'll be joined by videos of the presentations and demonstrations. Sign up for the Alliance's Facebook, LinkedIn and Twitter social media channels, along with its RSS feed, and you'll receive proactive notification each time a new piece of content appears.

And of course, while you're up on the Alliance website, make sure you also check out all of the other great new content regularly published there. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


May 15, 2014 Edition

Dear Colleague,

Two weeks from today, my colleagues at the Embedded Vision Alliance and I will kick off the next iteration of the Embedded Vision Summit, our biggest and best yet, taking place on May 29 at the Santa Clara (California) Convention Center. Yann LeCun, Director of AI Research at Facebook, will deliver the morning keynote, "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems." Machine learning, found in some of the most sophisticated image-understanding systems deployed today, provides a framework that enables system training through examples. It is at the forefront of applications such as face recognition, visual navigation, and handwriting recognition, and LeCun will discuss a breakthrough method for implementing such tasks.

Nathaniel Fairfield, technical lead at Google, will deliver the afternoon keynote, "Self-Driving Cars." Google recently announced that its autonomous car fleet has logged more than 700,000 miles and is increasingly capable of self-navigating complex city street settings. Dr. Fairfield will discuss Google's overall approach to solving the driving problem, the capabilities of the car, progress so far, and remaining challenges. The Embedded Vision Summit will also include two tracks' worth of sixteen total technical presentations revolving around the themes of visual recognition and visual intelligence, and technology demonstrations from nearly two dozen Alliance member companies. If you haven't registered yet, do so today without further delay; keep in mind that last year's Summit sold out!

While you're registering, don't forget about the two in-depth technical workshops also taking place at the Santa Clara Convention Center, the prior day (May 28). The first workshop, from Alliance founding member BDTI, is entitled "Implementing Computer Vision and Embedded Vision: A Technical Introduction". It will provide a practical tutorial on processors, sensors, algorithms, and development techniques for vision-based application and system design, including OpenCV and OpenCL. The second workshop is co-presented by BDTI and fellow Alliance members Analog Devices and Avnet Electronics. It will explore hardware and software for image processing and video analytics in a hands-on fashion, featuring the Avnet/Analog Devices Embedded Vision Starter Kit.

And while you're up on the Alliance website, make sure you check out all the other great new content published there in recent weeks. One particular highlight is the presentation "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices," delivered by NASA's Larry Matthies at the March Alliance Member Meeting. Dr. Matthies is a Senior Research Scientist at JPL, and Supervisor of the Computer Vision Group in the Mobility and Robotic Systems Section. His talk discussed in detail the various vision processing-inclusive projects he's worked on over the years, some familiar (Mars Exploration Rover and Mars Pathfinder) and others likely more of a surprise to you (Google's Project Tango 3D mapping smartphone, for example).

I'd also like to draw your attention to a recently published article on the CENTR 360-degree panorama camera, a compelling case study of the embedded vision opportunity, and an example of a system uniquely enabled by the technologies and products that will be on display at the upcoming Embedded Vision Summit. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


April 24, 2014 Edition

Dear Colleague,

There’s been quite a burst of interesting news lately about vision technology being used in mobile devices, a topic which has also been regularly covered in past presentations and articles hosted on the Alliance website. A month back, for example, we discussed the rumored depth-sensing capabilities of the latest "M8" variant of HTC's One smartphone, capabilities that were confirmed at the handset's unveiling a short time later, complete with a product teardown. We also covered Google's revolutionary Project Tango handset, which showcases robust 3D mapping facilities.

Project Tango has also recently been disassembled and analyzed, and was found to contain an infrared projector and multiple embedded vision processors. And just a few days ago, the first photos of the rumored, coming-soon Amazon-branded smartphone surfaced, along with some intriguing claimed embedded resources: a beefy Qualcomm application processor, front and rear conventional cameras, and four front-mounted infrared sensors supposedly for head- and eye-tracking purposes.

These and other trendsetting embedded vision capabilities, not just for mobile electronics devices but a plethora of systems, will be on display at the Embedded Vision Summit West in just over a month. Taking place May 29th at the Santa Clara (California) Convention Center, its comprehensive program encompasses two tracks' worth of sixteen total technical presentations, hour-long keynotes from both Facebook and Google, and technology demonstrations from nearly two dozen Alliance member companies. Two in-depth technical workshops are additionally offered the prior day. And the Embedded Vision Summit West is also co-located with the Augmented World Expo, with special discounts available for Summit attendees.

Last year's Embedded Vision Summit sold out, so I encourage you to register today without further delay! And while you're up on the Alliance website, make sure you check out all the other great new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


April 10, 2014 Edition

Dear Colleague,

I’m pleased to report that videos of three interesting presentations from the recent Embedded Vision Alliance Member Meeting are now available:

Looking forward, we've just published the full technical program for the Embedded Vision Summit West, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. The Summit takes place on May 29 in Santa Clara, California. Online registration for the Embedded Vision Summit West is now available. Last year's Summit sold out, so I encourage you to register right away before the attendance slots are filled.

And as a reminder, two in-depth technical workshops will also take place the prior day. The first, from Alliance founding member BDTI, is entitled "Implementing Computer Vision and Embedded Vision: A Technical Introduction". It will provide a practical tutorial on processors, sensors, algorithms, and development techniques for vision-based application and system design, including OpenCV and OpenCL. The second is co-presented by BDTI and fellow Alliance members Analog Devices and Avnet Electronics. It will explore hardware and software for image processing and video analytics in a hands-on fashion, featuring the Avnet/Analog Devices Embedded Vision Starter Kit.

In addition to the latest content on the Alliance website, I also encourage you to head to EE Journal and peruse the recently published article "Augmented Reality: A Compelling Mobile Embedded Vision Opportunity", authored by the Alliance and member companies CogniVue, SoftKinetic and videantis. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


March 27, 2014 Edition

Dear Colleague,

Preparations for the May 29th Embedded Vision Summit West in Santa Clara are progressing rapidly. We're pleased to announce that the Summit will feature presentations by top-rated speakers, including Francis MacDougall of Qualcomm on gesture interfaces and Simon Morris of CogniVue on recognition and classification in augmented reality. Check out the published abstracts on the Summit Presentations page. Personally, I am also looking forward to the morning keynote by Yann LeCun of Facebook and the afternoon keynote by Nathaniel Fairfield of Google.

I'm also happy to be able to tell you about two technical workshops that will take place on May 28, the day before the Summit. The first, from Alliance founding member BDTI, takes place from 9:00 AM to 3:00 PM and is entitled "Implementing Computer Vision and Embedded Vision: A Technical Introduction". It will provide a practical tutorial on processors, sensors, tools, and development techniques for vision-based application and system design, including OpenCV and OpenCL. The second, occurring from 7:30 AM to 2:00 PM that same day, is co-presented by BDTI and fellow Alliance members Analog Devices and Avnet Electronics. It will explore hardware and software for image processing and video analytics in a hands-on fashion, featuring the Avnet/Analog Devices Embedded Vision Starter Kit.

Online registration for the Embedded Vision Summit West is now available, along with travel and housing information. Last year's Summit sold out, so I encourage you to sign up right away before all of the attendance slots are filled. And while you're on the Alliance website, make sure you check out all the new content published there the last two weeks; tutorial and demonstration videos from NVIDIA and Xilinx, news writeups on vision-centric smartphones from Google and HTC, a press release from CEVA, and a plethora of market analysis report summaries. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


March 11, 2014 Edition

Dear Colleague,

In the previous edition of Embedded Vision Insights, I mentioned that we'd published initial details on the afternoon keynote for the May 29 Embedded Vision Summit, to be held in Santa Clara. The presenter, Nathaniel Fairfield of Google's self-driving car team, has subsequently finalized the title and abstract for his talk, which addresses the second of the two foundation themes of the Summit, recognition and autonomy. Here's what Fairfield says about his planned presentation:

Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was created to rapidly advance autonomous driving technology and build on previous research. For the past four years, Google has been working to make cars that drive reliably on many types of roads, using lasers, cameras, and radar, together with a detailed map of the world. Fairfield will describe how Google leverages maps to assist with challenging perception problems such as detecting traffic lights, and how the different sensors can be used to complement each other. Google's self-driving cars have now traveled more than half a million miles autonomously. In this talk, Fairfield will discuss Google's overall approach to solving the driving problem, the capabilities of the car, the company's progress so far, and the remaining challenges to be resolved.

The Embedded Vision Summit West is a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. Online registration is now available, along with travel and housing information, so I encourage you to sign up for the conference right away before all of the attendance slots are filled. And while you're on the Alliance website, make sure you check out all the new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


February 25, 2014 Edition

Dear Colleague,

In the February 11 edition of this newsletter, I told you about the morning keynote presentation planned for the May 29 Embedded Vision Summit West, by Yann LeCun, Director of AI Research at Facebook and a distinguished professor at New York University. I now have the pleasure of telling you about the event's afternoon keynote presentation. It's by Nathaniel Fairfield, the technical lead on Google's self-driving car team, which you can learn more about by watching the "Self Driving Car Test: Steve Mahan" video on YouTube. LeCun and Fairfield's presentations will respectively address the two foundation themes of the Summit, recognition and autonomy.

The Embedded Vision Summit West, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software, will take place in Santa Clara, California. The "Early Bird" reduced registration fee of $149 is only available through this Friday, so don't delay in registering. This year’s Summit is co-located with the Augmented World Expo (AWE), and Embedded Vision Summit attendees may obtain an AWE Exhibits-Only Pass for only $20, an 80% discount. Just add the AWE Exhibits-Only Pass to your online registration submission for the Embedded Vision Summit. And it's not too early to begin booking your transportation and hotel reservations, either; for assistance in these matters, see the newly launched Summit travel page.

Nearer term, the next Alliance Member Meeting will be held at Qualcomm's corporate headquarters campus in San Diego, CA on March 13. We will be expanding the attendee list for the afternoon portion of the day's program, and have a limited amount of available space for guests who are not affiliated with Alliance Member companies. This will be a unique opportunity to network with Alliance member company representatives, learn about the products and services available to assist you in completing embedded vision-inclusive designs, and evaluate possible Alliance membership for your own company.

Planned afternoon presentations include:

  • "Vision-based Navigation Applications: from Planetary Exploration to Consumer Devices" by Larry Matthies, Supervisor in the Computer Vision Group at NASA’s Jet Propulsion Laboratory
  • "Who Watches the Machines Watching You? Regulating Privacy in the Era of Machines That See" by Brian Wassom, Partner and Chair of the Social, Mobile, and Emerging Media Practice Group at Honigman Miller Schwartz and Cohn LLP
  • "Recent Developments in Khronos Standards for Embedded Vision" by Neil Trevett, President of Khronos, and
  • "Forecasting Consumer Adoption of Embedded Vision in Mobile Devices in 2014" by John Feland, PhD, CEO of Argus Insights

You'll also have the opportunity to audition Alliance member companies' technology and product demonstrations during the mid-afternoon break. If you're interested in attending the afternoon portion of the March 13 Alliance Member Meeting, please email your credentials to info@embedded-vision.com for consideration. Priority will be given to applicants currently working with embedded vision technology. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


February 11, 2014 Edition

Dear Colleague,

Last time, I had the pleasure of passing along the news that online registration for the upcoming Embedded Vision Summit West was live on the Alliance website. This time, I'm happy to share with you some key aspects of the conference program. The Embedded Vision Summit West, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software, will take place on May 29 in Santa Clara, California.

The conference's opening keynote will be delivered by Yann LeCun, a seminal figure in computer vision research and applications. LeCun is Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University. His talk is a don't-miss opportunity to gain unique insights into image recognition challenges and techniques. More generally, the Alliance has just published a preliminary technical program agenda for the day. Regularly monitor the website for coming-soon details on the afternoon keynote and various technical presentations, as well as information on the half- and full-day workshops to be held the prior day, May 28.

As I noted last time, an "Early Bird" reduced registration fee of $149 for the Embedded Vision Summit West is only available through February 28, so don't delay in registering. This year’s Summit is co-located with the Augmented World Expo (AWE), and Embedded Vision Summit attendees may obtain an AWE Exhibits-Only Pass for only $20, an 80% discount. Just add the AWE Exhibits-Only Pass to your online registration submission for the Embedded Vision Summit.

While you're on the Alliance website, make sure you check out all the new content published there the past two weeks, including four demonstration videos from January's Consumer Electronics Show, a contributed article on embedded vision in ADAS (advanced driver assistance systems), contributed blog posts on face analysis and pedestrian detection, and multiple market analysis reports and press releases. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


January 28, 2014 Edition

Dear Colleague,

I'm pleased to be able to kick off this edition of Embedded Vision Insights with the news that online registration for the next Embedded Vision Summit West, which will take place May 29 in Santa Clara, California, is now live on the Alliance website. Embedded Vision Summits are technical educational forums for engineers interested in incorporating visual intelligence into electronic systems and software.

An "Early Bird" reduced registration fee of $149 for the Embedded Vision Summit West is only available through February 28, so don't delay in registering. This year’s Summit is co-located with the Augmented World Expo (AWE), and Embedded Vision Summit attendees may also obtain an AWE Exhibits-Only Pass for only $20, an 80% discount. Just add the AWE Exhibits-Only Pass to your online registration submission for the Embedded Vision Summit.

Speaking of Embedded Vision Summits, the last few videos from last October's East Coast event have now been added to the Alliance website. Take a look at the entire Summit East content suite for a sense of the breadth and depth of information you'll obtain at the upcoming May Summit West; a detailed program for the event will be published shortly. And while you're on the Alliance website, also make sure you check out the just-published additional presentations from the December 2013 Alliance Member Meeting: from Professor Li Zhang of the University of Wisconsin (on computational photography), from Professor Sanjay Patel of the University of Illinois (on computational imaging), and from Neil Trevett, President of Khronos and Vice President at NVIDIA (on the OpenVX vision processing API).

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


January 14, 2014 Edition

Dear Colleague,

Happy 2014! And welcome back from the holidays, hopefully well rested and ready to tackle all kinds of embedded vision opportunities in the new year. While you've been away, the Alliance has remained busy creating and publishing additional material. Specifically, since the last edition of Embedded Vision Insights went out four weeks ago, eight new videos have appeared on the Alliance website, along with 19 new press releases and two new market analysis reports.

Content highlights include additional tutorials and demonstrations from the October 2013 Embedded Vision Summit East, and the first of several presentations (plus a demonstration) from the December 2013 Alliance Member Meeting. Also make sure you check out a two-demonstration video series from Goksel Dedeoglu, Manager of Embedded Vision R&D at Texas Instruments, on stereo vision for depth perception and Lucas-Kanade feature tracking. See, too, a presentation on the new Camera 3 framework in Android 4.4 "KitKat" from Aptina's Balwinder Kaur, Software Architect for Emerging Technologies. And don't overlook the blizzard of technology and product news from Alliance member companies at last week's Consumer Electronics Show.

Speaking of the Embedded Vision Summit, I also want to remind you of the next Summit, to be held in Santa Clara, California on May 29. The Embedded Vision Summit West, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software, will be co-located with the Augmented World Expo. Summit attendees will have the option of also attending the full Augmented World Expo conference or accessing the Augmented World Expo exhibit floor at a discounted price. And preceding the Embedded Vision Summit, on Wednesday, May 28, Embedded Vision Alliance member companies will present workshops exploring hardware and software for embedded vision product development. Please revisit the event overview page in the coming weeks for more information, such as detailed agendas, keynote, technical tutorial and other presentation details, speaker biographies, and online registration.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


December 18, 2013 Edition

Dear Colleague,

"Never put off till tomorrow what you can do today." This well-known quote provides the background for my first important announcement. May 29, 2014 is the currently scheduled date for the next Embedded Vision Summit, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software, to be held in Santa Clara, California. The end of May may seem a long way away, but we're busily preparing to make the upcoming Embedded Vision Summit West the biggest and best Summit yet!

The Embedded Vision Summit West will be co-located with the Augmented World Expo, and Summit attendees will have the option of also attending the full Augmented World Expo conference or accessing the Augmented World Expo exhibit floor at a discounted price. And preceding the Embedded Vision Summit, on Wednesday, May 28, 2014, Embedded Vision Alliance member companies will present partial- and full-day workshops exploring hardware and software for various embedded vision implementations. Please revisit the event overview page in the coming weeks for more information, such as detailed agendas, keynote, technical tutorial and other presentation details, speaker biographies, and online registration.

Speaking of the Embedded Vision Summit, the steady flow of content coming from October's event in Westford, Massachusetts continues to appear on the Alliance website. Since the publication of the previous newsletter edition, several new demo videos (from Bluetechnix, Cadence and Qualcomm partner Intrinsyc Software) and tutorial videos (from CogniVue, on efficiently computing disparity maps, and Qualcomm, on harnessing heterogeneous computing) have been published, with others queued up to appear in the coming days. I encourage you to take advantage of the upcoming-holiday time off from work to look them (and other site content) over, as a means of bolstering your embedded vision understanding and staying current on technology and product developments.

And speaking of the upcoming holidays, I'll close with another well-known quote, "Have yourself a merry little Christmas"...and Hanukkah, Kwanzaa and New Year, as well! This will be the last edition of Embedded Vision Insights for 2013; the next issue will appear in your inbox mid-next month. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


December 3, 2013 Edition

Dear Colleague,

Back in early July, I told you about Find A Supplier, an online resource provided by the Alliance that connects you with relevant member companies. If you're interested in a particular embedded vision technology, product or service but don't already know which Alliance members provide it, send the Alliance an email via the Find A Supplier page form. Alliance staff members will promptly review your inquiry and forward it to relevant company representatives, who will follow up with you directly.

Recently, the Alliance launched another website tool that encompasses the various "players" in the embedded vision industry, this time in an intuitive graphical manner. The Industry Map represents a "system" view of vision technology and applications, and is constructed in a hierarchical format. The top level of the Map includes information on:

  • The Embedded Vision Alliance
  • The ecosystem, including Alliance member companies and industry standards
  • Market data
  • Applications
  • Vision technology
  • Products and services

The Alliance's objectives in developing the Industry Map are:

  • To educate engineers, press and analysts about embedded vision
  • To bring together information in an interactive, visual, searchable format
  • To catalogue the various elements needed to build vision systems and applications
  • To promote discussion and ideas on how to make the Industry Map more valuable

This resource is in the form of an "Idea Map," and the approach allows members and users to suggest extensions and other alterations, thereby allowing the map to evolve in step with industry evolution. The Map is in Adobe Flash format and can be either viewed directly from its webpage using a browser or downloaded and accessed standalone. It has been verified with the latest versions of Chrome, Firefox, Internet Explorer, and Safari.

We hope you find the Industry Map valuable, and we appreciate your feedback on it. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


November 12, 2013 Edition

Dear Colleague,

In the late October edition of Embedded Vision Insights, I wrote, "Additional content from the Embedded Vision Summit East is queuing up for publication in the coming days." At that point in time, we'd published five videos from the event. Two weeks later, as promised, we've published seven more: technical tutorials from Apical ("Better Image Understanding Through Better Sensor Understanding"), Synopsys ("Designing a Multi-Core Architecture Tailored for Pedestrian Detection Algorithms"), Texas Instruments ("Embedded Lucas-Kanade Tracking: How it Works, How to Implement It, and How to Use It") and VanGogh Imaging ("Using FPGAs to Accelerate 3D Vision Processing: A System Developer's View"), and technology and product demonstrations from Analog Devices, Apical and ARM.

I've also been plenty busy in the background reviewing additional material which, just as I said last time, is also queued up for publication in the coming days. The latest website content isn't solely sourced from the Embedded Vision Summit East, however. Check out, for example, the presentation delivered by BDTI Senior Software Engineer Eric Gregori at the recent Qualcomm UPLINQ Conference, on "Accelerating Computer Vision Applications with the Hexagon DSP." Check out IHS's just-published research note on video surveillance camera and analytics trends. And check out the latest embedded vision news and press releases.

Enough to keep you busy for a while, staying current on embedded vision technology and product developments? I hope so! Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


October 29, 2013 Edition

Dear Colleague,

Last time, regarding the recent Embedded Vision Summit East, I wrote, "We're busy editing the videos of the various presentations and demonstrations, which will begin appearing on the website shortly." I hope you'll be pleased with what we've accomplished so far in following through on that promise. Visit the "videos" resource page on the Alliance website, and you'll find the outstanding keynote presented by Mario Munich, Vice President of Advanced Development at iRobot. You'll also be able to view the sequence of presentations delivered by DARPA and two of its contractors, Next Century Corporation and SRI International, discussing two general-purpose vision algorithm selection and development tools that the organization and its contractors will soon make widely available on an open source basis as an adjunct of the OpenCV library. And you'll find a tutorial on feature detection and tracking from Marco Jacobs, Technical Marketing Director at videantis.

Additional content from the Embedded Vision Summit East is queuing up for publication in the coming days. But for now, there's plenty of other new material to satiate your embedded vision appetite. Make sure, for example, to audition the augmented reality keynote from Ori Inbar, founder and CEO of AugmentedReality.org, delivered at the October 2013 Embedded Vision Alliance Member Meeting. Check out, too, the contributed article on vision-enhanced manufacturing robotics in Control Engineering Magazine from Alliance member company National Instruments. And of course there's the usual assortment of new market analysis reports, news writeups, press releases, and other materials.

I hope that the steadily increasing content on the Alliance website is helpful in keeping you current on embedded vision technology and product developments. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


October 15, 2013 Edition

Dear Colleague,

Another Embedded Vision Summit has come and gone, and I'm feeling no shortage of satisfaction. This year's Embedded Vision Summit East drew even more attendees than last year, and they seemed pleased with the expanded program; the overall event rating of 8.6 (out of 10) matched last year's impressive score. We're busy editing the videos of the various presentations and demonstrations, which will begin appearing on the website shortly. Keep an eye on the "Videos" page of the Alliance website for the content as it's published; subscribe to the Alliance's Facebook, LinkedIn and Twitter social media channels and site RSS feed for proactive notification.

For now, I encourage you to check out Alliance founder Jeff Bier's interview with Michael Tusch, founder and CEO of Apical, which took place the night before the Summit. Apical chose the event to introduce its Assertive Vision processor core, a real-time detection, classification and tracking engine capable of accurate analysis of people and other objects, and designed for integration into SoCs. With the Assertive Engine, Apical formally expands its business beyond conventional image processing into embedded vision processing, and Michael Tusch shares interesting perspectives on differences between the two processing approaches, as well as how they may coexist going forward.

Speaking of conferences, the Alliance will be well represented at several upcoming shows, both physical and virtual. Both Jeff Bier and SoftKinetic's Tim Droz will present at the late-October Interactive Technology Summit (formerly Touch-Gesture-Motion Conference) put on by IHS (which acquired IMS Research in early 2012). In mid-November, Jeff Bier will represent the Alliance at AMD's Developer Summit. And while member company Nvidia's next GPU Technology Conference won't occur until next March, the company is supplementing the physical event with a yearlong series of online webinars, including one in early November on facial recognition algorithm acceleration given by Brian Lovell, Alliance advisor, Professor at the University of Queensland, Australia, and CTO of Imagus Technology.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


September 30, 2013 Edition

Dear Colleague,

It's finally here: the week that those of us in the Alliance have been steadily and intensively working toward ever since the conclusion of the mid-April Embedded Vision Summit. In two days, on October 2, the next Embedded Vision Summit, a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, will take place in Westford, Massachusetts. One day later, Alliance member companies will hold two half-day hands-on embedded vision workshops. And in parallel, Alliance member representatives will be meeting to (among other things) begin planning the next set of Embedded Vision Summits.

If you've already registered for the Embedded Vision Summit East (and one or both workshops, of course), congratulations! I look forward to seeing and meeting you there. And if you haven't yet registered, especially if you're located in (or can travel to) the Boston, MA region, what are you waiting for? Check out the comprehensive (and, I must say, mighty impressive) Embedded Vision Summit agenda, along with the next-day workshops' event pages. Peruse the detailed presentation abstracts, along with the illustrious presenter biographies. And then hit up the registration page without delay, because advance registration is required and space is limited.

I'm really looking forward to the keynote presentation by Mario E. Munich, Vice President of Advanced Development at iRobot. Munich was formerly the CTO of Evolution Robotics, a company focused on the development of key technology primitives for consumer robotics. Appropriately, his talk is entitled "Embedding Computer Vision in Everyday Life." An additional set of special presentations will come from Mike Geertsen, program manager at DARPA, and representatives of two DARPA partner companies. Geertsen, Next Century Corporation's Clark Dornan and SRI International's Jayan Eledath will discuss two general-purpose vision algorithm selection and development tools which the organization and its contractors will soon make widely available on an open source basis as an adjunct of the OpenCV library. Then there are the nearly twenty other "how-to" presentations, along with demonstrations from nearly twenty Alliance member companies...wow!

Embedded Vision Summit East aside, I also want to draw your attention to all of the great new content that's shown up on the site the past two weeks: news writeups, market analysis reports, press releases, blog posts, and discussion forum conversations. Check them out to keep current on embedded vision technology and product developments. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


September 17, 2013 Edition

Dear Colleague,

Good news, everyone! The complete agenda for the Embedded Vision Summit East, to be held in Westford, Massachusetts on October 2, has just been published to the Alliance website. The Embedded Vision Summit East is a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. Scan through the day's agenda, and I'm confident you'll be impressed with the breadth and depth of the technical program we've assembled for your benefit.

The day will begin with the keynote, "Embedding Computer Vision in Everyday Life," by iRobot's Vice President of Advanced Development, Mario E. Munich. Also notable is a triumvirate of presentations from DARPA's Mike Geertsen and two DARPA contractors, Next Century Corporation and SRI International, regarding a set of general-purpose vision algorithm development tools which will soon be released in open-source form. The two-track, day-long program also includes 17 other technical presentations from the Alliance and 15 of its member companies.

As with past Summits, there'll also be a demo area packed with examples of vision technology. The demo area will be open during the morning and afternoon coffee breaks, the lunch hour, and the concluding cocktail reception, providing opportunities to enjoy some refreshments, get up close with some of the latest vision technology, and interact with experts from Alliance member companies. The Embedded Vision Summit East takes place in just two weeks, and registration submissions are rapidly rolling in. Attendance spots are limited and interest is strong, so I encourage you to register without delay. Consider, too, attending one or both of the half-day hands-on workshops held the day after the Summit, respectively sponsored by Analog Devices, Avnet Electronics and BDTI, and by Avnet and Xilinx.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


September 4, 2013 Edition

Dear Colleague,

The Embedded Vision Summit East, a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, will be held in one month, on October 2. It will take place at the Regency Inn and Conference Center in Westford, Massachusetts. Details of the event program are coalescing, and I have a few more updates to share with you in this edition of Embedded Vision Insights.

Back in late July, I passed along the details of the "Blackfin Embedded Vision Starter Kit Hands-on Workshop," presented by Alliance members Analog Devices, Avnet Electronics Marketing and BDTI and taking place from 8:30AM-1:30PM on October 3, the day after the Embedded Vision Summit East. I'm happy to announce that a second half-day workshop has now been added to the October 3 program. The "Smarter Vision Hands-On Workshop" co-presented by Avnet and Xilinx will run from 1:00PM-5:00PM, and will introduce SoC application development with a specific focus on embedded vision using the Xilinx Vivado Design Suite to target the Xilinx Zynq-7000 All Programmable SoC. For more information on both sessions, including online registration forms, please see the workshops page on the Alliance website.

We've also published an initial set of technical presentations (and presenters) for the Summit, with more to come in the coming days. In some cases, the sessions will be updated versions of top-rated talks given at the April Embedded Vision Summit San Jose (such as "Targeting Computer Vision Algorithms to Embedded Hardware" by Mario Bergeron of Avnet); in other cases, the top-rated presenters will be the same, but the topics will be brand new (for example, "Embedded Lucas-Kanade Tracking: How it Works, How to Implement It, and How to Use It" by Goksel Dedeoglu of Texas Instruments). Regularly monitor the speakers and presentations pages on the website for updates. And don't delay in registering, as attendance spots are limited and interest is strong (and will undoubtedly increase in the coming weeks).

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


August 20, 2013 Edition

Dear Colleague,

Last time, I alerted you to the recent publication of one of the videorecorded presentations from the July Embedded Vision Alliance Member Meeting, from Argus Insights' CEO, John Feland, Ph.D. This time, I'm happy to pass along the news of the publication of the other video from that event. It's the keynote, "High Speed Vision and Its Applications -- Sensor Fusion, Dynamic Image Control, Vision Architecture, and Meta-Perception," from Professor Masatoshi Ishikawa of the University of Tokyo. Some of you may remember having previously heard of Professor Ishikawa's work from a news writeup, "Vision-Superior Robot Trumps Humans At Rock-Paper-Scissors, Ping Pong Balls," published on the Alliance website a year ago. His more recent presentation on the diversity of applications enabled by ultra-high speed vision processing is both highly entertaining and educational, and I commend it to your inspection.

I'd also like to pass along some additional information about the upcoming October 2 Embedded Vision Summit East, a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, to be held at the Regency Inn and Conference Center in Westford, Massachusetts. I've previously mentioned the scheduled keynote by Mario Munich, Vice President of Advanced Development at iRobot. New to the event agenda this time is a special presentation from Mike Geertsen, Program Manager at DARPA. As part of its Visual Media Reasoning (VMR) program, DARPA has created two general-purpose vision system development tools, which it plans to release as an adjunct to the OpenCV open source computer vision software library in late 2013 or early 2014. Geertsen will present an overview of the VMR program and the enabling tools developed under it. And don't forget about the 25% discount on your Embedded Vision Summit East registration fee, only through August 31!

That's not all: other new content on the site includes a two-part tutorial from Texas Instruments' embedded vision R&D manager, Goksel Dedeoglu, a digital video stabilizer demo video from CEVA, an "embedded vision on mobile devices" article reprint authored by CogniVue and the Alliance, and downloads, blog and discussion forum posts, news writeups, market analysis reports, and press releases from Alliance member companies. Check them out to keep current on embedded vision technology and product developments. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


August 6, 2013 Edition

Dear Colleague,

In the last issue of this newsletter, I mentioned that we were in the process of editing some of the content captured during the July 17 Embedded Vision Alliance Member Meeting. One of the videos is now published; it's the market trends presentation "Who Watches The Watchers? Consumer Perceptions of Embedded Vision Features in Consumer Electronics," delivered by Argus Insights' CEO, John Feland, Ph.D. I think you'll find Feland's talk not only highly entertaining but also very informative; Argus Insights employs novel data collection and market analysis methods derived from postings made by consumers on various social media platforms. In his July 17 talk, Feland discussed how consumers are responding to the increasing ubiquity of image sensors in every aspect of their lives, particularly the benefits and perceived risks of unintended surveillance via these new solutions.

In the last issue, I also shared with you the biographical information of Mario Munich, Vice President of Advanced Development at iRobot and the scheduled keynote presenter for the upcoming October 2 Embedded Vision Summit East, a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. Until recently, I'd only been able to indicate that the Embedded Vision Summit East would be located in the "Boston, Massachusetts area". However, I can now be more precise: both it and the next-day embedded vision workshops delivered by Alliance member companies will take place at the Regency Inn and Conference Center in Westford, Massachusetts, convenient both to Boston itself and to the Route 128 Technology Corridor. Registration is now open both for the Embedded Vision Summit East and for the hands-on workshop co-presented by Analog Devices, Avnet Electronics and BDTI. Space is limited at both events, so I encourage you to register without delay!

Plenty of other fresh content has also appeared on the Alliance website in the past two weeks: multiple news writeups, market analysis reports, and product announcements from Alliance member companies. Check them out to keep current on embedded vision technology and product developments. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 23, 2013 Edition

Dear Colleague,

Last Wednesday's Embedded Vision Alliance Member Meeting, held in San Jose, California at Aptina's headquarters, was as always a great opportunity for member company representatives to reconnect, network and advance the common embedded vision cause. Three of the presentations delivered that day are currently being edited for near-future publication on the Alliance website: the keynote on high-speed vision from Professor Masatoshi Ishikawa of the University of Tokyo (whose pioneering work I covered in a news writeup last year), the market trends presentation on consumer reactions to vision technology from John Feland of Argus Insights, and the image sensor technology trends presentation from Aptina's Curtis Stith. Subscribe to the Alliance's LinkedIn, Twitter and Facebook social media channels, along with its RSS feed, for notification when the videos show up on the site.

One of the many pieces of good news that we were able to share with the membership last week was the addition of a new member company. Sony is a name doubtless familiar to most, if not all, of you; the company makes a range of still and video image capture systems, from dedicated cameras to smartphones and other devices. What fewer of you may realize, however, is that the company's semiconductor division is also a leading supplier of camera components, such as image sensors and processors. Welcome, Sony, to the Embedded Vision Alliance! Several other members-to-be are in the process of wrapping up their membership paperwork; when they formally join, I'll be sure to alert you both via this newsletter and in on-site news writeups.

In the previous edition of Embedded Vision Insights, I mentioned the next Embedded Vision Summit, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software, to be held in the Boston, Massachusetts area on October 2. Today I'm pleased to announce the Summit keynote speaker: Mario Munich, Vice President of Advanced Development at iRobot. Mario has been a pioneer in creating vision-enabled consumer products, and his insights from that experience are sure to be relevant to engineers implementing vision in many kinds of cost-constrained applications.

I'm also excited to announce that, in conjunction with the Embedded Vision Summit, Analog Devices, Avnet and BDTI will be running their "Blackfin Embedded Vision Starter Kit Hands-on Workshop" on October 3. This workshop was recently held in San Jose, California, where it received excellent attendee evaluation scores. Registration is now open both for the Embedded Vision Summit and for the Blackfin workshop. Space is limited at both events, so register early.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 9, 2013 Edition

Dear Colleague,

The Silicon Valley edition of the Embedded Vision Summit, a series of technical educational forums for engineers interested in incorporating visual intelligence into electronic systems and software, occurred less than three months ago. But we're already hard at work on planning the next event. It's currently scheduled to take place on Wednesday, October 2, in the Boston, Massachusetts area, the same region that hosted the first Embedded Vision Summit in September 2012.

Preliminary information on the October 2 event can be found on the newly published Summit-focused area of the website, along with archives of video content and other materials from past Summits. One aspect of the new Summit section that'll undoubtedly catch your eye is a promotional video that just went live a few days ago. In the last newsletter, I pointed you to a promo video containing testimonials and other footage captured at the April Summit, intended for potential new members of the Alliance. This new video, on the other hand, is intended for potential attendees of future Summits. Please check out the video, provide me your feedback on it, and share it with colleagues who you think may be interested in attending a future Summit.

Finally, I'd like to draw your attention to yet another new area of the site. If you're interested in a particular embedded vision technology or product, but don't already know which Alliance members provide it, send the Alliance an email via the "Find a Supplier" page form. Alliance staff members will review and forward it on to relevant company representatives in a timely manner, who will follow up with you directly.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


June 18, 2013 Edition

Dear Colleague,

Last time, I mentioned that nearly 12 hours' worth of content from the April Embedded Vision Alliance Summit and next-day Alliance Member Meeting was now published on the website. There's one more video from the former event that I'd like to draw to your attention today. If you were at the Embedded Vision Summit, you might have noticed that the camera crew that captured the presentations was also filming the demo room activities that day, as well as testimonials from attendees. We've subsequently combined select excerpts from both, along with still photographs and other Summit content, into a promotional video a few minutes in length.

Check out the clip on the website's newly enhanced "Joining the Alliance" page, and let me know what you think of it. As its publication location suggests, this particular video is focused on potential new Alliance members. Stay tuned for a companion promotional clip, targeting potential attendees of future Summits and other Alliance events, to come in a few weeks' time. And while you're on the site, make sure you peruse some of the other newly published material there; several market summary reports from IMS Research, for example, plus multiple news writeups and press releases.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

And for those of you in the United States, since the next edition of Embedded Vision Insights won't be published until after the 4th of July, I wish you an enjoyable extended holiday weekend.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


June 4, 2013 Edition

Dear Colleague,

Two weeks ago, in the previous edition of Embedded Vision Insights, I mentioned that 17 videos from the April Embedded Vision Summit had been published to the Alliance website. In subsequently finalizing the content from the Summit and next-day Alliance Member Meeting, the number of published videos has doubled, to 34. They span a diversity of topics and include the keynote and track overview presentations, technical talks, new-product introductions, and technology and product demonstrations, along with an update on the Khronos OpenVX vision processing API. I encourage you to check them all out, but make sure you reserve sufficient time in your schedule... those 34 videos represent nearly 12 hours of material!

The Alliance and its member companies are also quite busy with other embedded vision outreach opportunities to the engineering community. Synopsys just completed seminars in both Silicon Valley and Tokyo, with Alliance Business Development Director Jeremy Giddings presenting at the latter. At the upcoming SIGGRAPH conference, NVIDIA will be delivering a presentation on mobile vision applications using the Android operating system. And Alliance founder Jeff Bier will be discussing embedded vision at three upcoming events: the mid-June meeting of the Chinese American Information Storage Society in Silicon Valley, along with the IEEE Embedded Vision Workshop and the Vision Industry and Entrepreneur Workshop, both held on June 24th as part of the CVPR (Computer Vision and Pattern Recognition) Conference in Portland, Oregon. See the news section of the Alliance website for information on all of these events.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


May 21, 2013 Edition

Dear Colleague,

In the previous edition of Embedded Vision Insights, I mentioned that videos of the various presentations and demonstrations at the April 25 Embedded Vision Summit "will begin showing up on the site shortly." I wasn't kidding: shortly thereafter, the floodgates opened wide and content publication began in earnest. Sixteen event-related videos are now live on the Alliance website (with more following shortly), some of which I've highlighted below. They include the keynote from UC Berkeley's Professor Pieter Abbeel, along with overview presentations by the OpenCV Foundation's Gary Bradski and by the Embedded Vision Alliance's own Jeff Bier.

The first few product demo clips captured by the Alliance's video crew are now published, and Electronic Design's editorial staff shot footage that day as well; it's also on the site, and I commend it to your inspection. And speaking of video, CEVA just published three interesting demonstrations that showcase the company's MM3101 imaging and vision processing core in action on three compelling applications: face recognition, ADAS (specifically, lane departure warning), and gesture recognition, the latter in partnership with fellow Alliance member company eyeSight.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


May 7, 2013 Edition

Dear Colleague,

Although the first Silicon Valley edition of the Embedded Vision Summit took place nearly two weeks ago, I'm admittedly still somewhat exhausted. It's a happy sort of tired, however, because by all discernible measures the event was an unqualified success. We're still tabulating the full suite of attendee feedback scores. But in response to the all-important and all-encompassing "Overall, how satisfied were you..." question, the Summit received an average score of 8.6 out of 10. Any of you who have been involved in similar events will, I'm sure, attest to the robustness of that rating.

Speaking of attendees, their sheer number was also a testament to the event's achievements. We blew through our original target of 300 attendees several weeks in advance of the show, but some creative last-minute scrambling freed up additional seats (and lunches, and badges and lanyards, and USB flash drives, and....). The final tallies: 565 registration applicants, 441 of them accepted, with 395 event attendees. The fact that nearly 90% of the registrants actually showed up attests both to the robustness of the event agenda and to the burgeoning popularity of embedded vision technology.

Videos of the day's various presentations, along with product demonstration and attendee testimonial clips, will begin showing up on the site shortly. Also currently being edited is the video of a technology trends presentation on the OpenVX vision processing acceleration API standard, from the next day's Alliance member meeting, delivered by Khronos (and NVIDIA) representative Frank Brill. I'll highlight their availability in future editions of this newsletter.

I also encourage you to monitor the Alliance website's primary RSS feed, along with its various social media channels (LinkedIn, Twitter, and Facebook) for immediate alerts as soon as each new piece of content is published. For now, you can content yourself with the day's various presentation slides in PDF format, along with an event highlights article and slideshow recently published in EE Times. Additional press coverage should appear shortly.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


April 23, 2013 Edition

Dear Colleague,

It's finally here: the Embedded Vision Summit is later this week (Thursday, to be exact) in San Jose, California. This is a moment that many of us in the Embedded Vision Alliance have been steadily working towards ever since the conclusion of the last (and first) Embedded Vision Summit last September in Boston, Massachusetts. And if you're fearing it's too late to register, don't panic (as Douglas Adams used to say). We recently bumped up the attendance limit, enabling even more of you to participate. So check out the event agenda, ascertain which tracks you plan to sit in on, and submit a registration application right away...before the additional seats fill up, too.

While you're in San Jose, don't forget about the additional embedded vision-themed events that surround the Summit this week. On Tuesday and Wednesday, Alliance members BDTI, Freescale, and National Instruments will be delivering technical presentations at the DESIGN West conference. And on Friday, Analog Devices, Avnet Electronics Marketing and BDTI will partner to present a half-day embedded vision workshop exploring hardware and software for image processing and video analytics, featuring the Avnet/Analog Devices Embedded Vision Starter Kit.

Summit aside, there's also been a whole lot of great new content added to the Alliance website over the past two weeks: reprints of contributed articles authored by various Alliance member companies and covering gesture interfaces, the Embedded Vision Academy, and vision-centric medical applications, along with a bevy of press releases. And Embedded.com also just published another excellent Alliance-spearheaded piece, on 3D sensor technologies. Although I'm admittedly biased, I think they're all great writeups, and I'm confident you'll agree.

Last but not least, and speaking of press releases, I want to make sure that you know about the Alliance's latest member. Bluetechnix joined the membership roster after the recent publication of my new-member summary news writeup. The Austria-based company provides embedded vision hardware and software design services, along with a broad range of standard products. For example, Bluetechnix just released its new line of 3D time-of-flight cameras, featuring the Argos3D-P100 off-the-shelf camera and the Sentis-M100 board-level camera. And the company's name may already be familiar to you from a recently published demo video taken at the Embedded World show a few months ago. Welcome, Bluetechnix!

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


April 9, 2013 Edition

Dear Colleague,

It's almost here: the Embedded Vision Summit, a free day-long technical educational forum to be held on April 25th in San Jose, California and intended for engineers interested in incorporating visual intelligence into electronic systems and software. I first mentioned the Silicon Valley version of the Embedded Vision Summit in the early January edition of Embedded Vision Insights, and with every subsequent newsletter, I've been able to pass along additional event information.

This time around, I'm pleased to tell you about the finalization of the agenda. When you visit the main event page on the Alliance website, I'm confident you'll be impressed with the breadth and depth of the program that various Alliance member company representatives are scheduled to deliver in conjunction with influential industry visionaries such as Gary Bradski from the OpenCV Foundation and Professor Pieter Abbeel from the University of California, Berkeley. Trust me: it will be quite a challenge, in the process of submitting a registration application, to decide which of the two presentation tracks you plan to attend in each portion of the program!

If you haven't yet submitted a registration application, I encourage you to do so without delay. We're nearing capacity, and it would be a shame for you to miss this one-of-a-kind event due to a lack of space. And while you're scheduling the week's events, I encourage you to also keep in mind (and plan on attending) the following related embedded vision activities: the DESIGN West presentations from BDTI, Freescale and National Instruments on Tuesday and Wednesday, and the Friday half-day embedded vision workshop exploring hardware and software for image processing and video analytics, delivered by BDTI and fellow Alliance members Analog Devices and Avnet Electronics Marketing, and featuring the Avnet/Analog Devices Embedded Vision Starter Kit.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


March 26, 2013 Edition

Dear Colleague,

By now, I'm hopeful that at least most of you are already aware of the upcoming Embedded Vision Summit, a free day-long technical educational forum to be held on April 25th (less than a month from now) in San Jose, California and intended for engineers interested in incorporating visual intelligence into electronic systems and software. If this is the first time you've heard about the Summit, I encourage you to check out the main event page. And if you've heard about the Summit before, but haven't yet registered, I encourage you to do so right away, while attendance spots are still available.

In this edition of Embedded Vision Insights, I'd also like to alert you to additional events surrounding the Summit. On April 17 (and one week ahead of the Summit), for example, Embedded Vision Alliance founder Jeff Bier will be presenting on "Embedded Vision: Giving Mobile Devices the Ability to See and Understand" at the Linley Tech Mobile Conference, in Santa Clara, California. On April 24, the day before the Summit, Alliance member companies BDTI and National Instruments will deliver four embedded vision-themed presentations at the DESIGN West conference. And on April 26, the day after the Summit, BDTI and fellow Alliance members Analog Devices and Avnet Electronics Marketing will offer a half-day embedded vision workshop in San Jose, California exploring hardware and software for image processing and video analytics, and featuring the Avnet/Analog Devices Embedded Vision Starter Kit. See the relevant news writeups below for additional details, including event registration links.

Speaking of online events, the Alliance (represented by member companies Apical, BDTI, CogniVue, National Instruments and Xilinx) just completed a successful five-day series of online classes on various interesting embedded vision topics. The presentation slides, audio recordings and chat archives remain available on the Design News website, so if you missed the live deliveries (or if you'd benefit from a revisit), I encourage you to check them out. And speaking of Alliance companies, I'll close with a last-but-not-least acknowledgement of the latest entry (as of April 1) to the member roster: Fidus Systems, an electronic design services company (and Xilinx Premier Design Services Member) with offices in Ottawa and Kitchener, Ontario, and San Jose, California. Welcome, Fidus!

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


March 12, 2013 Edition

Dear Colleague,

First and foremost, I want to begin this edition of Embedded Vision Insights by alerting you to next week's (March 18-22) free five-day embedded vision webinar series co-delivered by the Embedded Vision Alliance and several of its member companies, in partnership with Design News Magazine. Entitled "Implementing Embedded Vision: Designing Systems That See & Understand Their Environments," it takes place each day at 11 AM PDT/2 PM EDT/6 PM GMT; I encourage you to attend the entire five-part series. Advance registration is necessary; note that each session requires a separate registration. For more information, including registration links, please see this news writeup on the Alliance website.

Speaking of upcoming events, we're now only a bit more than a month away from the first Silicon Valley iteration of the Embedded Vision Summit, a free day-long technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. I've begun filling in the day's agenda with specific presentation titles, presenter biographies and (coming soon) presentation abstracts. I again encourage you to reserve the day on your calendar and submit an online registration application now, while attendance spots are still available.

And if you look closely at the Embedded Vision Summit agenda, you might notice at least one presenting company that's a surprise. You're certainly forgiven if you didn't know that Qualcomm is a member of the Alliance, because the company only just joined a few days ago! In addition to supplying the well-known Snapdragon line of ARM-based application processors, Qualcomm develops a number of other key embedded vision innovations: the Vuforia augmented reality software platform, for example, and the FastCV algorithm library and API. Welcome, Qualcomm, to the Embedded Vision Alliance!

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better service your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


February 26, 2013 Edition

Dear Colleague,

I began the previous edition of Embedded Vision Insights with the words, "I'm pleased when I'm able to regularly pass along announcements about new Embedded Vision Alliance members, and lately I've been pretty pleased." In that particular edition, for example, I mentioned that GEO Semiconductor had just joined the Alliance. Two weeks earlier, I'd told you about the then-latest member, Tensilica.

Well, I'm thankfully still feeling pleased! This time, in fact, I have the pleasure of telling you about two new Alliance members. FireFly DSP is a developer of processor IP cores and associated software development tools, whose management team has a substantial semiconductor and software pedigree. And SoftKinetic is a name that may already be familiar to some of you via its partnerships with Alliance members such as Intel and Texas Instruments; the company develops time-of-flight sensors for 3-D camera designs. Welcome, FireFly DSP and SoftKinetic!

The Alliance is also less than two months away from presenting the first Silicon Valley iteration of the Embedded Vision Summit, a free day-long technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. Notable updates to the published event information in the last few weeks include details on the keynote presenter, Professor Pieter Abbeel of the University of California at Berkeley, and biographies of several of the technical presentation speakers. On the main Summit page, you'll also now find a detailed agenda of the day's planned events. Reiterating my comments from last time, I encourage you to reserve the day on your calendar and submit an online registration application now, while attendance spots are still available.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


February 12, 2013 Edition

Dear Colleague,

I'm pleased when I'm able to regularly pass along announcements about new Embedded Vision Alliance members, and lately I've been pretty pleased. Two weeks ago, for example, in the previous edition of Embedded Vision Insights, I mentioned that processor IP core supplier Tensilica had joined the Alliance. And this time around, I'm happy to share the news that GEO Semiconductor is the Alliance's latest membership entrant.

In early December, GEO Semiconductor announced its intention to acquire the video processing business (PDF) of fellow Alliance member Maxim Integrated Products; the deal closed (PDF) approximately one month later, about one month ago. And more generally, if you're not already familiar with GEO Semiconductor, here's what the company description on the Alliance member page says:

GEO Semiconductor is a pioneer in geometric correction for images and video. GEO’s proprietary algorithms allow for incredibly efficient, low latency transforms to correct, dewarp and calibrate video from any lens or sensor configuration. GEO’s intelligent and infinitely configurable warping engine enables embedded vision systems to capture images in an ultra-wide field of view, breaking open new opportunities for innovation in automotive, consumer and industrial markets. With the addition of the Mobilygen/Maxim H.264 video compression and human interface business in 2012, GEO added the capabilities to compress, process and transport video to enable a new class of camera and vision systems that are connected to the cloud as well as enable new methods of interacting with devices through gesture and voice recognition.

That all sure sounds like embedded vision to me! Welcome, GEO Semiconductor!

I'd also like to draw your attention to a series of interesting embedded vision case studies just published on the Alliance website by National Instruments and three of its customers. I'm always amazed at the compelling implementation ideas that engineers come up with in combining one or several cameras, a processor, and software uniting the two. And these particular examples certainly exemplify that creativity.

Speaking of engineers, in closing I'd like to remind you once again of the upcoming Embedded Vision Summit, to be held April 25 in San Jose, CA. The Embedded Vision Summit is a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. A preliminary agenda of the day's events is now on the Alliance website, with more details to come shortly. Also live on the site is an online event registration form; I encourage you to reserve the time on your calendar and submit an application now, while attendance spots are still available.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better service your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


January 29, 2013 Edition

Dear Colleague,

At the beginning of the month, I pointed out the most recent press release from the Embedded Vision Alliance, which announced new members Digital Media Professionals and LSI. I also mentioned that PathPartner Technology had more recently joined the Alliance. And at this time, I'm happy to pass along news of a further expansion of the Alliance membership, to include processor core supplier Tensilica. Several additional Alliance members-to-be are in the process of completing their enrollment paperwork, and I look forward to telling you about them in future newsletter editions.

The company overview on Tensilica's website notes, "As the recognized leader in customizable DPUs [dataplane processor units], Tensilica is helping top-tier semiconductor companies, innovative start-ups and system OEMs build high-volume, trend-setting products. Tensilica’s IP cores power SoC designs at system OEMs and seven of the top 10 semiconductor companies for designs in mobile wireless, telecom and network infrastructure, computing and storage, and home and auto entertainment." By virtue of its membership in the Alliance, Tensilica has clearly identified embedded vision as a key growth market opportunity, both today and in the future. Welcome, Tensilica!

Two weeks ago, I mentioned that online registration had just become available for the upcoming April 25 Embedded Vision Summit, to be held in San Jose, California. I've subsequently added to the website a preliminary agenda of the day's events, which we'll further fill out in the coming days and weeks as we identify specific keynoters, tutorial presenters and topics. The Embedded Vision Summit is a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. I encourage you to reserve the time on your calendar and submit a registration application right now, while you're thinking of it, in order to increase your likelihood of securing a spot in this limited-space event.

Finally, I’m also very excited to share a new batch of excellent video content just published on the site. It includes technology and product demonstrations from Alliance members eyeSight Mobile Technologies and Omek Interactive at the early-January Consumer Electronics Show, Analog Devices' presentation on its BF60x family of embedded vision processors at the 2012 IEEE Hot Chips symposium, and a tutorial on the Android imaging software stack (and associated hardware) delivered by Aptina's Balwinder Kaur and Joe Rickson. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Let me know how the Alliance can do a better job of servicing your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


January 15, 2013 Edition

Dear Colleague,

In the previous edition of Embedded Vision Insights, I mentioned that material from the December Alliance Member Summit was beginning to appear on the website. The content suite is now complete; videos published in the last several weeks include a fascinating keynote on machine learning by Professor Kristen Grauman of the University of Texas at Austin, and the technology trends presentation on embedded vision in mobile electronics, co-delivered by BDTI senior software engineer Eric Gregori and myself.

Also now on the site in video form is an interesting discussion that Embedded Vision Alliance Founder Jeff Bier held last month with Daniel Wilding, a digital hardware engineer at National Instruments, who hosted the Member Summit. Bier and Wilding discuss National Instruments' presence and plans in the embedded vision application space, the advantages of FPGAs as embedded vision processors, and the company's development tools for FPGA-based embedded vision designs.

I commend all of this compelling content to your attention. And with the December Alliance Member Summit material published, my attention has now fully turned to the upcoming April 25 Embedded Vision Summit in San Jose, California. As promised last time, online registration for this public event is now live on the website. Space is limited, and I therefore encourage you to submit your registration application right away in order to increase your likelihood of securing a spot in this one-of-a-kind event. In the coming weeks, I'll publish additional event information; a detailed agenda, for example, along with speaker biographies.

And speaking of conferences, last time I also mentioned that Jeff Bier and the Alliance's business director, Jeremy Giddings, would be attending the Consumer Electronics Show. I'm happy to report that both Jeff and Jeremy have survived their time in Las Vegas, and have returned with an abundance of good news regarding the burgeoning presence of embedded vision-based user interfaces (gesture, gaze, etc.) and other technologies in computers, televisions, game consoles, smartphones and tablets, and numerous other devices. Jeff and Jeremy shot several video demonstrations, which I'll get on the site as soon as possible, along with various news write-ups I've written. Keep an eye out for them, and for now, check out the CES-related press releases published by various Alliance member companies.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And also as always, I welcome your feedback on how the Alliance can do a better job of servicing your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


January 3, 2013 Edition

Dear Colleague,

Happy New Year! As the Embedded Vision Alliance's third year begins, the organization's accomplishments and announcements continue unabated. Last September, the Alliance successfully produced its first public conference for engineers, the Embedded Vision Summit, in Boston, Massachusetts. Plans are well underway for an expanded second iteration of the event in San Jose, California on April 25. For more information on the Silicon Valley version of the Embedded Vision Summit, please visit www.embeddedvisionsummit.com. Online registration and additional details will be published on the site shortly.

More recently, in early December the Alliance held its latest Member Summit in Austin, Texas, hosted by member company National Instruments. Selected content from that event is now beginning to show up on the site (with more to come soon): product demonstrations from NVIDIA and videantis, along with IMS Research senior analyst John Morse's market trends presentation on machine vision applications. At the event, the Alliance announced new members Digital Media Professionals and LSI. And less than a month later, I'm pleased to announce yet another new member of the Alliance, its 28th, PathPartner Technology. Stay tuned for additional details on the company to appear in a near-future news writeup.

The Alliance's founder, Jeff Bier, and business director, Jeremy Giddings, will be representing the organization at next week's Consumer Electronics Show in Las Vegas, Nevada. If you'd like to learn more about the Alliance and how it can help your company accomplish its business objectives, please contact Jeremy at giddings@embedded-vision.com and 510-451-1800. And if you have any feedback on this newsletter or the website whose content it showcases, please don't hesitate to drop me an email. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


December 10, 2012 Edition

Dear Colleague,

As you read these words, I'm in Austin, Texas at the Q4 2012 Embedded Vision Alliance Member Summit. With video camera in hand, I look forward to capturing, editing, uploading and sharing with you some of the presentations that I'll be both attending and participating in:

  • The keynote from Kristen Grauman (Associate Professor, Computer Science, University of Texas at Austin) on "Big Challenges and Recent Advances in Machine Learning for Vision"
  • The market trends tutorial from John Morse (Senior Market Analyst, IMS Research) on Machine Vision Applications, and
  • The technology trends tutorial on Mobile Vision Applications

I'll be co-delivering the latter presentation with BDTI Senior Engineer Eric Gregori, likely a familiar name to those of you who've already perused other content on the Embedded Vision Alliance website, and with Rony Greenberg, Vice President of Business Development at eyeSight Mobile Technologies. In the process of developing the presentation, Eric came across some interesting topics and questions regarding Android development and computational photography, which he's posted to the Alliance website's discussion forum section for your feedback. More generally, I encourage you to tap into the collective wisdom of the embedded vision community on an ongoing basis, both by posting new discussion forum topics and by responding to the comments published by your peers.

This will be the last edition of Embedded Vision Insights for 2012; the next iteration is currently scheduled for send-out in early January. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you think of an idea of how the Alliance can do a better job of servicing your needs, you know where to find me. And on behalf of my fellow Embedded Vision Alliance representatives, I'd like to wish you and yours a fulfilling holiday season.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


November 27, 2012 Edition

Dear Colleague,

The next Embedded Vision Alliance Member Summit is exactly two weeks away as I write these words, so you can imagine that I'm neck-deep in final preparations for numerous associated projects. Among other things, BDTI's Eric Gregori and I will be co-delivering (along with Rony Greenberg of eyeSight Mobile Technologies) the technology trends presentation this time, on the topic of embedded vision for mobile devices such as smartphones and tablets. Embedded vision development on consumer electronics products such as these is a subject that I regularly revisit in content published on the Embedded Vision Alliance website.

As I wrote in the introduction to the November 15, 2011 Embedded Vision Insights newsletter edition, "Cellular handsets and tablet computers are compelling platforms for implementing embedded vision, by virtue of the prevalence of both front- and rear-mounted image sensors of sufficient resolution, the substantial available memory and processing resources, the systems' application-enabling portability, and (perhaps most importantly) the often-subsidized prices at which they're sold and their consequent large installed user base." Even though those image sensors primarily exist for photography and videoconferencing applications, they can also be leveraged for innumerable other compelling functions, some of which are discussed in the news writeups showcased in this newsletter edition. I look forward to discussing the subject with the Alliance membership in mid-December, and to sharing the resultant video with the rest of you afterward.

Considering the above-mentioned large installed user base, mobile electronics devices are forecast to be one of the initial "boom" markets for embedded vision. Another likely early adopter is the vehicle, via another frequently discussed embedded vision application, ADAS (advanced driver assistance systems). The Terminology page of the Alliance website defines ADAS as "an umbrella term used to describe various technologies used in assisting a driver in navigating a vehicle." Examples include:

  • In-vehicle navigation with up-to-date traffic information
  • Adaptive cruise control
  • Lane departure warning
  • Lane change assistance
  • Collision avoidance
  • Intelligent speed adaptation/advice
  • Night vision
  • Adaptive headlight control
  • Pedestrian protection
  • Automatic parking (or parking assistance)
  • Traffic sign recognition
  • Blind spot detection
  • Driver drowsiness detection
  • Inter-vehicular communications, and
  • Hill descent control

And considering these applications' appeal to drivers and passengers, to vehicle manufacturers, to law enforcement agencies, and to insurance companies, it's no coincidence that both a highlighted article and a video listed below cover ADAS in greater detail.
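To make one of the list's entries, lane departure warning, a bit more concrete, the decision logic at its heart is surprisingly small. The sketch below is purely illustrative: the function name, pixel positions, and 15% threshold are all my own assumptions, and a real ADAS stack would derive the lane-line positions from actual detection (edge extraction plus a Hough transform or a fitted lane model) on camera frames rather than take them as inputs:

```python
def lane_departure(left_x, right_x, frame_width, margin=0.15):
    """Flag a departure when the detected lane center drifts more than
    `margin` (as a fraction of frame width) away from the image center.
    left_x / right_x: x positions of the lane lines at the bottom row."""
    lane_center = (left_x + right_x) / 2.0
    offset = abs(lane_center - frame_width / 2.0) / frame_width
    return offset > margin

# Centered lane in a 1280-pixel-wide frame: lane center at x=640 -> no warning
print(lane_departure(200, 1080, 1280))   # False
# Lane shifted right: lane center at x=900, ~20% offset -> warning
print(lane_departure(500, 1300, 1280))   # True
```

The hard engineering, of course, is upstream of this comparison: robustly finding the lane lines across lighting, weather, and road-marking variation is where the vision processing horsepower goes.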

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I always welcome your email feedback on how the Alliance can do a better job of servicing your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


November 13, 2012 Edition

Dear Colleague,

In the previous edition of the Embedded Vision Insights newsletter, I indicated that we'd recently bolstered the amount of published video content sourced from the September Embedded Vision Summit to approximately six hours of cumulative material. You can now add around another hour's worth of video to that tally, with yet another clip still to come. I recently re-watched some of the content cache in the process of bolstering each video's associated description text, and it's quite outstanding in its embedded vision topic diversity, depth, and technical accuracy.

If you haven't yet perused the video content overview page on the Embedded Vision Alliance website, I encourage you to do so at your earliest convenience. We're all perpetually busy, I know, but this will be time and attention well spent on your long-term education. Perhaps the upcoming Thanksgiving holiday will provide the necessary schedule "breathing room?"

Speaking of perpetually busy, the Alliance team is putting final preparations in place for next month's Embedded Vision Alliance Member Summit in Austin, Texas. Development is also already well underway for the next Embedded Vision Summit public event, currently scheduled for the week of April 22-25 in conjunction with the DESIGN West conference in San Jose, California. And nearer term, the Alliance plans to have a presence at December's Touch-Gesture-Motion conference in Austin, next January's Consumer Electronics Show in Las Vegas, Nevada, and several other upcoming industry events.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. And remember: whenever you think of an idea for how the Alliance can do a better job of servicing your needs, I'm only an email away.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


October 25, 2012 Edition

Dear Colleague,

In the most recent edition of Embedded Vision Insights, I mentioned that the first three videos from last month's Embedded Vision Summit had just been published to the Alliance website. Over the past two weeks, I've been busy posting additional content, including a market analysis presentation from the next-day Embedded Vision Alliance Member Summit. So here's an update.

As of today, nearly six hours' worth of new, publicly-accessible videos spanning a diversity of embedded vision hardware, software and other topics are now online. This translates to more than a day's worth of cumulative video content on the Alliance website. Links to several of the new videos can be found below. More than an hour's worth of additional Summit content should appear soon. And if you're an Alliance company representative who would like a tutorial on embedded vision and the Alliance opportunity, please let me know and I'll send you a private link to last month's "Boot Camp" presentation.

We're already hard at work on the next Embedded Vision Alliance Member Summit, currently scheduled for Tuesday, December 11 in Austin, Texas. And speaking of Austin, the 2012 edition of the IMS Research Touch Gesture Motion conference will be (not coincidentally) held the subsequent two days in the same city, at the Barton Creek Resort. If you're interested in attending, use discount code EVA10 when registering, for a 10% discount off the full delegate price. And if you're an Alliance company representative who's interested in attending, drop me an email for your particular code and discount percentage.

Also mentioned in a recent Embedded Vision Insights newsletter was the announcement that Aptina Imaging had joined the Alliance. A just-published news writeup provides more details on Aptina and its embedded vision activities and aspirations. Stay tuned for coming-soon equivalent coverage on the Alliance's newest (and 26th) member, LSI (which some of you may better recognize by its former name, LSI Logic). Welcome, LSI, to the Embedded Vision Alliance!

I'll close with the reminder that I always welcome correspondence from any newsletter recipient or website visitor, with your feedback on how the Alliance can do a better job of servicing your needs. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


October 10, 2012 Edition

Dear Colleague,

The premiere Embedded Vision Summit took place three weeks ago, attended by more than 160 audience members, the vast majority of whom stuck around for the entire conference. Embedded Vision Alliance representatives also supported multiple activities the prior day, as well as the quarterly Embedded Vision Alliance Member Summit the next day. And I'm delighted to report that attendees of the various events passed along overwhelmingly positive feedback in their reviews.

If you weren't able to be in Boston, Massachusetts for the Embedded Vision Summit on September 19, or if you were in attendance but would like to refresh your memory, feel free to head to the Embedded Vision Academy section of the website, where you'll already find the first three published videos from the event. They are the keynotes from Professor Rosalind Picard of MIT and from Gary Bradski of the OpenCV Foundation, along with Alliance Founder Jeff Bier's introductory talk on embedded vision.

In the Academy, you'll also find an archive of all of the presentations delivered that day. And additional video and other Summit content will continue to appear on the site in the coming weeks. Keep an eye out on the Alliance's various social media channels for publication alerts: RSS, LinkedIn, Twitter, and Facebook.

Thank you as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I'm always open to your feedback on how the Alliance can do a better job of servicing your needs; don't be shy about emailing me!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


September 18, 2012 Edition

Dear Colleague,

In the previous edition of this newsletter, published just two weeks ago, I shared the news that the Alliance had just added two new members, Synopsys and VanGogh Imaging. At that time, I had already published a detailed writeup on Synopsys' embedded vision strategies and aspirations, and I promised that one would shortly follow on VanGogh. Well, I'm happy to extend the trend. The aforementioned VanGogh coverage is now live on the website, and the Alliance has added its third member for this month, image sensor developer Aptina Imaging. Stay tuned for more in-depth coverage of Aptina to appear soon.

Speaking of news, this week's a notable one for the Alliance. As you're receiving this newsletter on Tuesday, Alliance representatives Jeff Bier and Eric Gregori from BDTI will be presenting two classes and moderating an exhibit floor theater presentation on embedded vision applications from Analog Devices, Texas Instruments and Xilinx at the Embedded Systems Conference Boston, part of the DESIGN East series of shows. Later this same day, Bier and I will co-deliver a "Boot Camp" embedded vision tutorial to Alliance newcomers. Wednesday's the day-long Embedded Vision Summit, combining two keynotes and a variety of technical presentations from Alliance member companies, along with diverse product demonstrations. And Thursday morning is the quarterly Alliance Member Summit.

If you happen to be in the Boston, Massachusetts area tomorrow, send a registration request to summit@embedded-vision.com, because there's a good chance (although I can't definitively guarantee it) that a few Embedded Vision Summit attendee slots will still be open for day-of-event registrants. And speaking of Summits, I'll close with a bit of a "back to the future" explanation. One of this newsletter edition's showcase videos, listed below, is an interview I recently conducted with John Morse, IMS Research senior analyst covering the machine vision market. Morse will also be the featured market analysis presenter at December's upcoming Alliance Member Summit. And that Summit will be hosted by National Instruments, whose product demonstration from the April Alliance Member Summit is the other showcase video below.

Thank you as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. As always, I welcome your feedback on this newsletter, the Alliance website, and anything else Alliance-related.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


September 4, 2012 Edition

Dear Colleague,

Those of you who perused the Embedded Vision Alliance member page beginning Saturday morning may have already figured out what I'm about to tell the rest of you. I'm happy to announce two new members of the Alliance, Synopsys and VanGogh Imaging. VanGogh Imaging provides affordable, easy-to-use embedded vision solutions for high volume applications that can use mobile devices to accurately capture, measure, and display objects and scenes in 3D and in real time. And Synopsys' diverse product line encompasses many items with direct embedded vision relevance: embedded processor cores, high-level synthesis and other EDA toolsets, hardware and software prototyping technologies and services, etc. A just-published news writeup provides more details on Synopsys' multiple embedded vision thrusts; stay tuned for a companion writeup on VanGogh Imaging to come later this week.

The other big news is, of course, the Embedded Vision Summit, which will take place in just two weeks (and one day) in Boston, Massachusetts. I'm happy to announce that Professor Rosalind Picard of MIT, the morning keynoter, will be joined by Gary Bradski of the OpenCV Foundation, who delivered the keynote at the July Embedded Vision Alliance Member Summit and will also keynote in the afternoon at the upcoming Embedded Vision Summit. More generally, the Alliance has just published on the main event page a fairly detailed agenda, which will be further fleshed out in the days to come. Alliance member company representatives will present on a diversity of embedded vision topics: applications and algorithms, processors, tools, APIs, design techniques, image sensors, etc. Space is limited and is filling up fast, so don't delay; register today!

Ahead of the Summit, several other events deserve your attention. Tomorrow, Xilinx and iVeia will co-present a webcast in which company representatives will do a teardown of an embedded vision system design based on the Zynq-7000 Extensible Processing Platform SoC containing a programmable FPGA fabric and dual ARM Cortex-A9 processor cores. Next Monday through Friday, Jeff Bier (founder of the Embedded Vision Alliance) and Eric Gregori (senior software engineer at BDTI) will co-present a five-session embedded vision tutorial series, discussing image sensors, processors, algorithms, and toolsets. And next Wednesday and Thursday is IMS Research's Touch-Gesture-Motion EMEA conference in London. Click on the links in the preceding sentences for more details on these exciting embedded vision activities, including registration information.

Thank you as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to contact me at any time with your ideas on how the Embedded Vision Alliance website and other resources can more effectively address your needs. And if you know someone who might be interested in receiving this newsletter, please forward this email along with an encouragement to register for his or her own copy in the future.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


August 21, 2012 Edition

Dear Colleague,

With less than a month to go until the September 19 Embedded Vision Summit in Boston, Massachusetts, the technical program is shaping up very nicely. The Summit will feature over a dozen technical presentations focused on providing practical know-how for engineers interested in incorporating vision capabilities into their products. Presentations will cover embedded vision applications, algorithms, processors, image sensors, and tools and design techniques. Check out the event page on the Alliance website for more details, and register right away, as space is limited and seats are filling up!

Speaking of registrations, one week prior to the Summit, Design News Magazine will deliver a free five-session tutorial series called "Fundamentals of Embedded Computer Vision: Creating Machines That See", September 10-14 at 2PM ET (11AM PT) each day. The content presenters will be Jeff Bier, Embedded Vision Alliance founder, and Eric Gregori, BDTI senior software engineer. Bier and Gregori will begin with an introductory overview, followed by more in-depth details on image sensors, processors, algorithms, tools, and the OpenCV software library. See the first news item listing in this newsletter for more information, including registration links.

Also one week ahead of the Summit, on September 12 and 13 to be exact, Alliance Platinum member IMS Research will present the European edition of the Touch-Gesture-Motion Conference, in London. Vision-based gesture- and motion-related technologies will be particularly showcased on day 2 of the event. For example, Stephane Gervais-Ducouret, sensor product line director at Alliance member Freescale, will deliver the conference keynote that morning. The keynote will be followed by a technical session that includes presentations from Alliance members eyeSight and PointGrab. And the Alliance will be officially represented by Apical's founder and CEO, Michael Tusch. Again, see the news section of this newsletter for a writeup containing additional details, including detailed agenda and registration page links.

Last but not least, there's a substantial amount of new video content recently published on the website that's sourced from last month's Member Summit: the market and technology trends presentations, for example, along with various product demonstrations. Also, make sure you check out my interview with IMS Research's senior analyst and machine vision expert, John Morse, who will be delivering the market trends presentation at the upcoming December Member Summit. Thank you as always for your support of the Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your feedback at any time on how we can do a better job of addressing your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


August 9, 2012 Edition

Dear Colleague,

The most recent Embedded Vision Alliance Member Summit was only a few weeks ago, yet lots of video content from that event is already flowing onto the Alliance website. Specifically, I'd like to direct your attention to the keynote presentation on OpenCV delivered by Gary Bradski of Industrial Perception and the product demonstration from Summit host Xilinx, both of which are highlighted in the Featured Videos section of this newsletter. Also recently published to the site is the market trends presentation on video surveillance and security from IMS Research's Jon Cropley, along with product demonstrations from CogniMem Technologies and videantis. The July Summit's technology trends tutorial on OpenCL will be posted to the site shortly, as will be my interview with IMS Research's machine vision expert, John Morse.

I'd also like to draw your attention once again to next month's Embedded Vision Summit in Boston, Massachusetts, which I first mentioned in the previous Embedded Vision Insights newsletter. Free of charge to qualified engineers, and also open to invited press and analysts, the Embedded Vision Summit will provide a technical educational forum for attendees, including how-to presentations, seminars, demonstrations, and opportunities to interact with Alliance member companies.

Keynoted by Professor Rosalind Picard of MIT, a pioneer in the field of affective computing (i.e. systems that discern or influence emotions), the Embedded Vision Summit is intended to:

  • Inspire engineers' imaginations about the potential applications for embedded vision technology through exciting presentations and demonstrations,
  • Offer practical know-how for engineers to help them incorporate vision capabilities into their products, and
  • Provide opportunities for engineers to meet and talk with leading embedded vision technology companies and learn about their offerings.

Newly added to the main event page is a preliminary agenda. Space is limited, and seats are filling up, so don't delay in registering for this compelling event! Simply send an email to summit@Embedded-Vision.com to begin the registration process.

These are, I'm sure you agree, exciting times to be an embedded vision industry and Embedded Vision Alliance participant! As always, I welcome your feedback on how the Embedded Vision Alliance can more effectively help you harness the burgeoning and abundant embedded vision opportunities. Thank you for your support of the Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 24, 2012 Edition

Dear Colleague,

Phew! Another Embedded Vision Alliance Member Summit has come and gone, last Thursday to be precise. I'd like to thank all of the member companies who sent representatives, for your attendance and participation in the day's various activities. Special thanks go to Platinum member Xilinx for hosting the event, as well as to Industrial Perception's Gary Bradski, IMS Research's Jon Cropley, and BDTI's Shehrzad Qureshi for (respectively) their informative and interesting keynote, market trends and technology trends presentations.

And, speaking of events, the Alliance is pleased to announce its first-ever public embedded vision event for the engineering community. The Embedded Vision Summit will take place on September 19 in Boston, Massachusetts, concurrent with (and at the same venue as) the DESIGN East series of conferences, which include the Embedded Systems Conference Boston along with others. The event will be free of charge to qualified engineers, and will also be open to invited press and analysts.

The Embedded Vision Summit will provide a technical educational forum for engineers, including how-to presentations, seminars, demonstrations, and opportunities to interact with Alliance member companies. This event is intended to:

  • Inspire engineers’ imaginations about the potential applications for embedded vision technology through exciting presentations and demonstrations,
  • Offer practical know-how for engineers to help them incorporate vision capabilities into their products, and
  • Provide opportunities for engineers to meet and talk with leading embedded vision technology companies and learn about their offerings.

The keynote speaker will be Professor Rosalind Picard of MIT. Professor Picard is the founder and director of the Affective Computing Research Group at the MIT Media Laboratory, co-director of the Things That Think Consortium (the largest industrial sponsorship organization at the lab) and leader of the new and growing Autism & Communication Technology Initiative at MIT. She is also co-founder, chief scientist and chairman of Affectiva, Inc., which develops technology to help measure and communicate emotion.

If you're interested in attending the Embedded Vision Summit, please visit the event page for more information, including registration application instructions. And if you know someone who might be interested in attending the event, please forward this newsletter to him or her. Thanks as always for your support of the Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


July 12, 2012 Edition

Dear Colleague,

As I write these words, final preparations are underway for next week's Embedded Vision Alliance Member Summit in Silicon Valley. If you're an Alliance member company representative, I look forward to seeing you again, or to meeting you for the first time if you're from one of the companies who've joined the Alliance since March (including the latest member addition, PointGrab). And for those of you who are part of the broader Embedded Vision Alliance community, I look forward to sharing video content with you after the Alliance Member Summit's conclusion. We plan to "film" the keynote by OpenCV guru Gary Bradski, the market trends presentation on surveillance systems by IMS Research's Jon Cropley, and the technology trends presentation on OpenCL by BDTI's Shehrzad Qureshi.

Several companies also plan to provide product demonstrations at the upcoming Alliance Member Summit, and we intend to "film" them for you, as well. Speaking of which...earlier this week I published demo videos from the March Alliance Member Summit, from Analog Devices, Apical, CEVA, CogniMem Technologies, The MathWorks, National Instruments, Omek Interactive, Texas Instruments, and Xilinx. They're all short, only a few minutes each in length, and quite informative. I commend them to your inspection; you can find them in the Video Interviews & Demos section of the Alliance website.

In addition to wrapping up lingering loose ends ahead of next week's event, I've also been fine-tuning the website. You'll notice the next time you visit, for example, that the Embedded Vision Academy now has its own top-level menu entry at the top of each page, versus being relegated to one in a list of menu options under "About Embedded Vision," as it was initially. I hope that this appropriate elevation of the Academy's visibility will make it easier for you and your industry peers to access the useful content found in this free online training facility for embedded vision product developers. I've also added an extensive page of definitions for various embedded vision terminology; I welcome your feedback both on any terms I may have overlooked and on enhancements to existing terms' names and/or definitions.

And beginning next week, we'll be expanding both the site's press release and upcoming events pages to encompass announcements and activities not only of the Embedded Vision Alliance itself, but also from various Alliance member companies. More generally, as always, please let me know how the Embedded Vision Alliance website and other resources can more effectively address your needs. Thank you for your support of the Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


June 19, 2012 Edition

Dear Colleague,

It's mid-June and, for many of you, thoughts might be wandering towards planned and potential summer activities. Admittedly, mine are too, but they're also fixated on a similar-sounding word, Summit. Mid-next month in Silicon Valley, the Embedded Vision Alliance will hold its next Summit, sponsored by Platinum member Xilinx. As I type these words, agendas are being finalized, presentations are being developed, and myriad other loose ends both big and small are being tied down. I look forward to seeing many of you in four weeks' time!

I'm also still focusing a fair bit of attention on the most recent Embedded Vision Alliance Summit, held in late March. Video content from that very successful event, which brought together the Alliance membership and key representatives of the technology press and analyst ranks, is still regularly being published on the Alliance website. Below, for example, you'll find a link to the hour-long embedded vision market analysis presentation delivered by IMS Research senior analyst Tom Hackenberg. You'll also find a pointer to the equally in-depth embedded vision technology trends tutorial on 2-D, 3-D and "4-D" image sensors, led by BDTI senior engineers Eric Gregori and Shehrzad Qureshi.

The technology trends presentation, along with a technical article from Alliance member Apical's Michael Tusch, also formed the content foundation of a recently published cover story at EDN Magazine. But wait, there's more! (I sound like a TV commercial, don't I?) Last week, the Alliance published video recordings of the short new-product presentations delivered by Analog Devices, Apical, Omek Interactive, Texas Instruments and Xilinx at the March Summit. And just in the past few days, a number of new demonstration and tutorial videos have also appeared on the Alliance site, from CogniMem Technologies and CogniVue.

Speaking of videos, I'd like to draw your particular attention to two of the recent news write-ups highlighted below. Two weeks ago, Alliance founder Jeff Bier conducted two webcasts within the same day, respectively published by Vision Systems Design Magazine and EE Times. Archive recordings of both technical talks are available on the publications' websites. And two days from today, Alliance members CEVA and eyeSight will deliver yet another technical tutorial webcast in partnership with EE Times' TechOnline; pre-registration is required, and I encourage your attendance.

I know I've said it before, but I'll say it again, because week after week I'm repeatedly reminded of it... these are exciting times to be an embedded vision industry and Embedded Vision Alliance participant!  As always, I welcome your feedback on how the Embedded Vision Alliance can more effectively help you harness the abundant opportunities in this burgeoning technology category. Thank you for your support of the Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


June 5, 2012 Edition

Dear Colleague,

Rarely is it the case that I'm able to construct an entire newsletter around a common concept. But this time, the planets aligned. Multiple pieces of new content have appeared on the Embedded Vision Alliance website in recent weeks, all focused on ADAS (advanced driver assistance systems), thereby enabling me to publish a "theme" edition. And the popularity of this budding application shouldn't be a surprise to embedded vision industry observers and participants, particularly if you've been tracking the Alliance website as it has expanded over time.

In late March, after all, Analog Devices unveiled a family of four new multi-core DSPs, two of which contain embedded vision co-processors and specifically target the ADAS application space. Last November, fellow Alliance Platinum member Xilinx published a detailed application note on the opportunities for FPGAs in ADAS systems, suggesting a corporate focus on this market segment, too. IMS Research senior analyst Tom Hackenberg spoke at length about promising ADAS opportunities, fueled in no small part by pending legislation both in the United States and elsewhere, during his market trends presentation at the most recent Embedded Vision Alliance Summit. And as you'll soon see, Platinum member Texas Instruments is bullish on ADAS, too, as are many other companies in the Alliance.

Some of you may already have rear-view cameras in your vehicles, although their capabilities are currently quite "dumb": they simply display their view on an LCD and rely on you to detect and react to objects behind you. But, as some luxury automobiles already implement (and aggressively promote), in-car cameras are poised to explode both in their functions and their per-vehicle count, as well as to migrate beyond high-end models into mainstream vehicles. The potential operating modes are myriad, both standalone and paired with synergistic technologies such as infrared, radar, and ultrasound, and in various implementation forms:

  • Rear collision warning
  • Front collision warning and active avoidance (i.e. automatic braking)
  • Driver distraction and drowsiness alerts
  • Inadvertent lane change warning and active avoidance (i.e. steering override)
  • Adaptive cruise control
  • Headlight high beam auto-disable, and
  • Roadway sign discernment and alerts (e.g. excessive speed warnings, road construction heads-ups, and the like)

For more information on this promising embedded vision application, I encourage you to check out the newly published article listed below from Analog Devices, as well as a brand new white paper from Texas Instruments. Spend some time, too, watching my recently published video interview with IMS Research senior market analyst Helena Perslow, who specializes in various automotive technologies. And in another showcased video, Embedded Vision Alliance founder Jeff Bier demonstrates the Mobileye vehicle safety system, which Perslow mentions in her discussion with me, and which Bier has installed in the family minivan.

These are exciting times for embedded vision, both in ADAS and elsewhere. On that note, I'd like to welcome videantis GmbH, the Embedded Vision Alliance's newest member, to our ever-expanding and vibrant organization. To keep on top of industry and Alliance developments as they occur, I encourage you to rely not just on this twice-monthly newsletter but to also subscribe to the Embedded Vision Alliance website's RSS feed and to the Alliance's various social media channels on Facebook, LinkedIn, and Twitter, all of which are updated each time a news writeup or other piece of content is added to the site. As always, I welcome your feedback on how the Embedded Vision Alliance can better serve your needs. Thank you for your support of the Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


May 15, 2012 Edition

Dear Colleague,

Microsoft's Kinect peripheral for the Xbox 360 game console and Windows 7-based PCs singlehandedly brought awareness of vision-based applications such as gesture interfaces and facial recognition to the masses. It's also the embedded vision foundation for a plethora of other system implementations, either based on Microsoft's O/S and thereby leveraging the official Kinect for Windows SDK, or via harnessing unofficial third-party toolsets. Not a day seemingly goes by without news of some cool new Kinect-based implementation; pipe organ control, for example, or augmented reality-augmented (pun intended) magic tricks, or Force-tapping video games, or holographic videoconferencing systems, or navigation assistance for the blind among us. Were I to try to even briefly mention each of the ones I've heard about in just the past few months, let alone explain them in depth, this introductory letter alone would be several pages in length. Instead, at least for the purposes of this particular newsletter, I'll focus on Microsoft-announced Kinect advancements.

  • Later this month, the company will release v1.5 of the Kinect SDK. According to the blog post revealing the news, "Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, playback and debug clips of users engaging with their applications.  Also coming is what we call 'seated' or '10-joint' skeletal tracking, which provides the capability to track the head, neck and arms of either a seated or standing user." The enhancements will work in both standard and "near mode", and won't require new hardware.
  • Last November, the company announced that it was co-creating (with TechStars) an accelerator program intended to promote startups that are harnessing Kinect for commercial applications. Applications were accepted through late January; the victors will take part in a three-month incubation program at Microsoft, as well as receive $20,000 in seed funding. Early last month, the company unveiled the 11 winners, selected from nearly five hundred applications with concepts spanning nearly 20 different industries, including healthcare, education, retail, and entertainment.
  • Kinect, at least in its Xbox 360 form, will likely soon show up in a lot more homes. That's because Microsoft, taking a page from cellular service providers, just announced a subsidized version of the 4 GByte console-plus-peripheral bundle. You pay only $99 upfront, but commit to a two-year Xbox LIVE Gold subscription at $14.99/month. At the end of the two-year term, you've shelled out roughly $100 more than if you had bought the console-plus-subscription in one shot, but it's an attractive entry to the Kinect experience for folks without a lot of extra cash on hand.
  • And this last one should be treated as a rumor, at least for the moment. The most recent upgrade of the Xbox 360 user interface, which rolled out last December, focused the bulk of its Kinect attention on the peripheral's array microphone audio input subsystem. Persistent speculation fueled by unnamed insiders, however, suggests that the next Xbox 360 UI upgrade, currently being tested, will showcase numerous vision enhancements. Specifically, while the console currently supports Bing search engine-powered media explorations on various websites, Microsoft will supposedly soon bring a full-featured Internet Explorer browsing experience to the Xbox 360, powered by both voice commands and gestures.

There's plenty more where those came from; the best ways to track Microsoft's ongoing Kinect developments are to regularly monitor the company blog (via RSS if you wish), Twitter feed and Facebook page.

I'm curious: how many of you are planning on using Kinect (either sanctioned on the Xbox 360 or PC, or unsanctioned on another platform via enthusiast-developed SDKs) as the basis for your embedded vision implementations? And how many others of you, while you might not be harnessing Kinect directly, are still leveraging one or several of its technology building blocks; the PrimeSense depth-map processor, for example, or the structured light depth-discerning technique? I look forward to hearing from you; I'll certainly keep your comments anonymous if you wish.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


May 1, 2012 Edition

Dear Colleague,

I've got Israel on the brain of late, it seems. And it's not just because the 40 days of Lent wrapped up a few weeks ago with Easter (roughly coincident with Passover for my Jewish friends and associates). It's because Israel has become a particular hotbed of embedded vision technology and product innovation. Of the twenty current members of the Embedded Vision Alliance, three (CEVA, eyeSight Mobile Technologies, and Omek Interactive) are headquartered in Israel; many other member companies have Israeli subsidiaries.

Gesture interface software developer eyeSight is the latest company to become a member of the Embedded Vision Alliance. The company may be a familiar name to many of you, as I've covered it in at least two past news write-ups, along with a video-recorded interview and demonstration at January's Consumer Electronics Show. Last week, the Alliance issued a press release announcing eyeSight's (and other recent companies') memberships and upgrades. And a couple of days later, I discussed the company and its products and technologies in more detail.

Alliance Members who attended last month's Summit already know that the Alliance has contracted with New ARTech Technologies, Ltd. to solicit new member candidates and broaden the awareness of the Alliance within the computer and embedded vision industries in Israel. ARTech's principals, Roni Amir and Shai Mor, have cultivated impressively extensive contacts in the Israeli technology sector. This week, they are representing the Alliance at the ChipEx conference in Tel Aviv, which runs May 1-2.

Roni and Shai will be occasionally contributing content to this newsletter and the Alliance website. In response to my query about why Israel has such a large concentration of embedded vision technology companies, Roni and Shai write: "Geopolitical, demographic and cultural circumstances are behind the substantial infrastructure that has emerged for all aspects of embedded vision in Israel. The geopolitical situation in the Middle East has resulted in the development of numerous embedded vision-based applications, such as surveillance, security, and guidance systems. Demographic factors include a massive immigration of highly educated Russian scientists and engineers. And cultural influences include an entrepreneurial environment, partly the result of mandatory service in the Israel Defense Forces."

"All of these variables have combined to create a flourishing industry," they continue, "encompassing numerous high technology firms, including more than 100 companies traded in U.S. stock exchanges, plus hundreds of startups. A diversity of disciplines is represented; industrial electronics, defense systems, semiconductors, Internet technologies, etc. Companies in embedded vision-related areas include those focused on image sensors and associated software (e.g., CMOS image sensors, touch-free interfaces for digital devices, and gesture recognition), semiconductors (DSPs, video processing ICs, camera chips, and SoCs for multimedia phones), mobile videoconferencing solutions, and various system implementations (surveillance, security, capsule endoscopy, and the like)."

Thanks as always for your support of the Alliance and your interest and involvement in embedded vision technologies, products and applications. Don't hesitate to drop me an email with any ideas you might have on how the Alliance can better serve you and the industry at large.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


April 17, 2012 Edition

Dear Colleague,

I'm admittedly feeling pretty good right now. That's because I've just reviewed the attendee feedback from the late-March Embedded Vision Alliance Summit, and the strong ratings and positive supporting comments combine to confirm my gut feel that it was an extremely valuable event for Alliance members and press-and-analyst attendees alike. To the latter point, in the previous (April 3) newsletter I linked to Summit coverage from Rick Merritt at EE Times and Dean Takahashi at VentureBeat. Since then, Takahashi has published a second writeup on the event ("Military Wants Better Machine Vision for Smarter Robot Cameras") and has been joined by Kevin Morris of Electronic Engineering Journal ("Envisioning the Future: Embedded Vision Lunges Forward"). Based on conversations with other analysts and technology journalists, I anticipate more coverage to come; keep an eye out on the Embedded Vision Alliance's Facebook and LinkedIn pages and Twitter feed for alerts when the material is published.

Putting together a Summit is a lot of work, and I'm admittedly tempted after each one to temporarily throttle back and coast for a bit. But if anything, the pace has accelerated in the past few weeks. In the last newsletter, I mentioned that Analog Devices had chosen the Summit as a forum both for introducing a series of embedded-vision-tailored Blackfin SoCs and to upgrade its Alliance membership to the premier Platinum tier. The day before the Summit I spoke with Colin Duggan, ADI's director of marketing, about these and other embedded vision topics, and you'll find a link to the video of our interview below. Newly published to the site, too, is a video demonstration by Navanee Sundaramoorthy, Xilinx product manager, of the compelling capabilities of the Zynq-7000 Extensible Processing Platform (containing both a dual-core ARM Cortex-A9 CPU and FPGA fabric) as an embedded vision processor.

You'll find plenty of new written content on the site, too. Two Texas Instruments engineers have, for example, developed a detailed white paper on sensor, processor and software alternatives for implementing rich gesture interfaces. Michael Tusch, founder and CEO of Alliance member Apical Limited, has just published the third article in his series on image quality optimization, this one discussing various HDR (high dynamic range) sensor and algorithm techniques. I have, as usual, been writing regular installments on the latest embedded vision industry breaking news. And there's much more compelling content to come in the near future.

The video of Jim Donlon's (DARPA) Summit keynote, "The Way Ahead for Visual Intelligence," is nearly complete, for example. And it'll shortly be followed by videos of the day's other sessions:

  • My introductory embedded vision presentation to the press and analyst attendees
  • The panel discussion, "Beyond Kinect; from Research to Revenue", moderated by Embedded Vision Alliance founder Jeff Bier, with Donlon and representatives from Analog Devices (Duggan), Texas Instruments (Bruce Flinchbaugh) and Xilinx (Bruce Kleinman) participating
  • The market trends presentation "Embedded Vision Markets in 2012 and Beyond: Established, Developing and Emerging" by Tom Hackenberg, semiconductor research manager at IMS Research
  • The technology trends presentation, "Image Sensor Technologies for Embedded Vision," by BDTI senior engineers Eric Gregori and Shehrzad Qureshi
  • Member product announcements by Analog Devices, Apical Limited, Omek Interactive, Texas Instruments and Xilinx, and
  • An online slideshow of various snapshots taken throughout the day

Keep an eye out on the Embedded Vision Alliance website for this and other upcoming material, and thanks for your support of the Alliance and your interest and involvement in embedded vision technologies, products and applications. As always, I welcome your feedback on how the Alliance, its website and this newsletter can do a better job of serving your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


April 3, 2012 Edition

Dear Colleague,

Last week's quarterly Embedded Vision Alliance Summit, held in Silicon Valley, was notably successful on several fronts. With nearly 50 member attendees representing nearly all of the Alliance companies, the Summit was an effective opportunity to touch base with existing business contacts, make new connections, exchange information, and strategize the way forward for the Alliance in particular and the embedded vision industry in general. Special thanks go to DARPA's Jim Donlon for an in-depth and engaging keynote on the Mind's Eye program that he manages. Keep an eye out (bad pun admittedly intended) for the video of Jim's presentation, to be published soon on the Embedded Vision Alliance website. And particular acknowledgment also goes to Analog Devices, which not only sponsored the Summit but also announced the company's upgrade to the Platinum Alliance membership tier there.

In attendance for the majority of the day were more than a dozen highly influential press and analyst representatives, who learned about the burgeoning embedded vision technology opportunity both via a number of formal presentations and through informal discussions with Alliance representatives. Several writeups covering the event have already been published, such as Rick Merritt's (EE Times) "DARPA Seeks Breakthroughs in Computer Vision" and Dean Takahashi's (VentureBeat) "This Chip Can Count Dice Rolls Faster than You Can". And further coverage of the Alliance and its activities will undoubtedly appear in the days and weeks to come. Please drop me an email with any coverage you come across, in case you see it before I do!

Speaking of press and analysts, the Embedded Vision Alliance also has several upcoming activities planned in this regard. Towards the end of this month, Alliance founder Jeff Bier will present to approximately 50 international journalists at the Globalpress Electronics Summit in Santa Cruz, California. In early May, the Alliance will be promoted by its new Israel representative, New Artech Technologies, at the ChipEx Conference in Tel Aviv, Israel. And Jeff Bier will also deliver the keynote presentation at the Embedded Vision Workshop associated with the IEEE CVPR (Conference on Computer Vision and Pattern Recognition) in mid-June, in Providence, Rhode Island. For a regularly updated listing of these and other advocacy and education activities in which the Embedded Vision Alliance will participate, please visit the events page on the Alliance website.

The Embedded Vision Alliance is clearly on a roll, and you're a key part of the reason why. Thanks for your support of the Alliance, and for your interest and involvement in embedded vision technologies, products and applications. As always, I welcome your feedback on how the Alliance, its website and this newsletter can do a better job of serving your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


March 13, 2012 Edition

Dear Colleague,

In the previous newsletter, I told you about embedded vision-related developments coming out of the Mobile World Congress show in Barcelona, Spain. Since then, two additional notable conferences have come and gone, both including plenty of embedded vision news of their own. CeBIT took place from March 6-10 in Hannover, Germany, while the GDC (Game Developer Conference) ran in near-parallel (March 5-9) in San Francisco, California. As before, I was personally unable to attend either show, so I welcome feedback from those of you who saw any of the products I mention below first-hand (as well as on any products whose coverage I might have overlooked).

Take Tobii, for example. I've recently written about the company's eye-tracking technology on several occasions, and Embedded Vision Alliance Founder Jeff Bier got a personal demonstration at January's Consumer Electronics Show. What I didn't realize until recently is that the company doesn't just code embedded vision software algorithms; it's also a hardware developer. At CeBIT, Tobii unveiled a next-generation eye tracking sensor module called the IS-2S which, according to a company spokesperson, fits on a single board, is 75% smaller than its precursor, consumes 40% less power and will be "cheaper to implement" (although the company declined to provide specifics).

At the show, Tobii was demonstrating its technology on the cleverly named EyeAsteroids 3D, a pupil-controlled variant of one of my favorite childhood quarter-gobbling gaming diversions, complete with a glasses-free autostereoscopic 3D display. But Tobii wasn't the only company talking up eye-tracking implementations at the time; GazeHawk just got acquired by Facebook. The social networking giant was compelled to do the deal for the human talent it bought, but it apparently didn't have direct interest in the startup's existing products, which use a computer's webcam to log a user's eye movements. The applicability to online advertising (and broader web page design) is perhaps obvious; the startup's founders welcome emails at team@gazehawk.com from parties interested in picking up the current-product torch.

Turning your attention to gestures, SoftKinetic is a company that Jeff Bier also spoke with at CES. Like Tobii, its business model encompasses both hardware and software. And like Tobii, it uses popular game titles to showcase its products' capabilities. SoftKinetic recently demonstrated on YouTube its DepthSense DS311 camera and accompanying drivers gesture-controlling a "stock" copy of Angry Birds running on a PC. And at the GDC, the company impressed Engadget with the accuracy and usability range of its infrared time-of-flight-based approach.
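For readers curious about the underlying principle, a continuous-wave time-of-flight camera recovers depth from the phase shift of a modulated infrared signal. The sketch below illustrates only the generic math, not SoftKinetic's actual pipeline, and the 30 MHz modulation frequency is an assumed illustrative value:

```python
# Generic continuous-wave time-of-flight depth math (illustrative only).
# The sensor measures the phase shift of an amplitude-modulated infrared
# signal; depth follows from depth = c * phase / (4 * pi * f_mod).

import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad: float, f_mod_hz: float) -> float:
    """Depth in meters from a measured phase shift (radians) at modulation f_mod."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum depth before the phase wraps around at 2*pi and aliases."""
    return C / (2.0 * f_mod_hz)

# A 30 MHz modulation gives roughly 5 m of unambiguous range -- living-room scale.
print(round(unambiguous_range(30e6), 2))   # -> 5.0
print(round(tof_depth(math.pi, 30e6), 2))  # half-wrap phase -> 2.5
```

Note the tradeoff the last line implies: raising the modulation frequency improves depth resolution but shrinks the unambiguous range, which is one reason living-room gesture cameras target tens of MHz.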

When it comes to consoles, Microsoft's Kinect has gotten the lion's share of press attention, but it's not the only game (pun intended) in town. Consider Sony's PlayStation Move, for example. Unlike the "you are the controller" Kinect, Move requires that the user hold a hardware controller with an illuminated-orb end; Sony's setup is still vision-based, however, by virtue of the console-located PlayStation Eye camera that tracks the orb's size and movement, thereby discerning distance, direction and speed over time. At the GDC, Sony reported that to date it had shipped more than 10.5 million Move controllers worldwide.
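Sony hasn't published its tracking algorithm, but inferring distance from a fixed-size sphere's apparent radius follows directly from the pinhole camera model. Here's an illustrative sketch; the orb radius and focal-length values are my assumptions, not Sony's specifications:

```python
# Illustrative sketch: distance to a tracked sphere of known size under a
# pinhole camera model. An orb of physical radius R projects to r pixels at
# distance Z, so r = f_px * R / Z, which rearranges to Z = f_px * R / r.
# Both constants below are assumed values for illustration.

ORB_RADIUS_M = 0.0225  # assumed orb radius (~4.5 cm diameter)
FOCAL_PX = 540.0       # assumed camera focal length, expressed in pixels

def orb_distance(radius_px: float) -> float:
    """Distance to the orb in meters, from its apparent radius in pixels."""
    return FOCAL_PX * ORB_RADIUS_M / radius_px

def orb_speed(r1_px: float, r2_px: float, dt_s: float) -> float:
    """Approach/retreat speed (m/s, negative = approaching) from two frames dt apart."""
    return (orb_distance(r2_px) - orb_distance(r1_px)) / dt_s

print(round(orb_distance(12.15), 2))  # a 12.15-pixel radius -> 1.0 m away
```

The key property is that apparent size varies inversely with distance, so a single calibrated camera suffices; no depth sensor is needed.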

Considering the "shipped" qualifier (versus "sold"), that most PlayStation 3 consoles probably use at least two controllers, and that Move launched several months before Kinect, you may conclude that Sony's accomplishment has to date undershot that of competitor Microsoft. Nonetheless, it's a notable achievement. And Sony's also an early adopter of another embedded vision technology, augmented reality. The company's latest-generation portable game console, the PlayStation Vita, extends and expands on the augmented reality capabilities first pioneered in competitor Nintendo's 3DS.

While it might be tempting to dismiss games and other consumer-tailored implementations as casual "toys", don't underestimate their power to fuel broad consumer awareness of gesture (and eye) controlled interfaces, augmented reality, face recognition and other embedded vision technologies, awareness from which developers of other embedded vision products can also benefit. Equally, their high volumes will also fuel accelerated development of new silicon (and other hardware) and software system building blocks, along with cost reductions of those building blocks, from which other embedded vision applications can also benefit.

Speaking of conferences, stay tuned for the Silicon Valley-based Design West show at the end of this month, from which will come notable news from several Embedded Vision Alliance members, along with other companies. More on that next time...for now (and returning once more to the topic of gaze-tracking user interfaces), please join me in welcoming the latest member of the Embedded Vision Alliance, CogniMem Technologies. As always, I thank you for your interest and involvement in the field of embedded vision, and for your support of the Embedded Vision Alliance.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


March 1, 2012 Edition

Dear Colleague,

As I write these words, the Mobile World Congress show is underway in Barcelona, Spain. At this yearly event, one of the most important cellular communications conferences, mobile handset and tablet manufacturers and their service provider partners reveal their latest and greatest offerings. And alongside them are the silicon and software providers, unveiling next-generation system building blocks which they hope will show up in handsets and tablets at next year's MWC.

One of the biggest announcements coming out of this year's show, at least so far, is Nokia's model 808 PureView phone. I've written many times in recent months about the potential for cameraphones to render standalone cameras obsolete, as well as about the notable embedded vision development potential implied in the burgeoning still and video image capture capabilities of mobile electronics devices. Although the Nokia 808 will probably not sell in large quantities, due both to its fairly high price point (450 Euros, roughly $600 USD) and its archaic Symbian operating system foundation, it's a leading-edge case study of where mainstream handsets will likely be in short order.

The Nokia 808 contains a 41 Mpixel image sensor (no, that's not a typo), notable not only for its high resolution but also for its relatively relaxed 1.4 um pixel pitch, the latter translating into larger-than-otherwise silicon die size and cost but also into better-than-otherwise low-light performance. Equally compelling to me is Nokia's motivation for going with such a robust image capture foundation. The company had unsuccessfully tried for many years to implement robust optical zoom capabilities into its cameraphone designs, and decided this time around to take a different tack.

The largest resolution still images that the Nokia 808 can capture are 38 Mpixels (7,152x5,368 pixels) in 4:3 aspect ratio mode, and 34 Mpixels (7,728x4,354 pixels) in 16:9 aspect ratio mode. Alternatively, the PureView algorithms combine multiple pixels' data to create lower-resolution 8 Mpixel, 5 Mpixel or 3 Mpixel photographs. The resultant oversampling not only improves the per-pixel light sensitivity, it also enhances image sharpness. And, as you "zoom" in on objects at the sensor's center, the input-to-output scaling ratio decreases, until it reaches 1:1. Note, though, that there's no traditional "digital zoom" upscaling, which leads to soft and otherwise artifact-filled results. As Nokia's documentation (PDF) states, "We've taken the radical decision not to use any upscaling whatsoever. There isn't even a setting for it."

With the default 5 Mpixel and 16:9 aspect ratio still image capture settings, the effective zoom range is 3x. Higher resolution still images support a narrower zoom range, lower resolution images a wider range, and the same process applies by extension to video. 720p (1280x720 pixel, i.e. 0.9 Mpixel) video capture, for example, supports a 6x lossless zoom range. I'm excited about the Nokia 808 for a number of reasons, beginning with the implications of such a robust sensor (and accompanying image processor) appearing in a volume consumer electronics device. Can you imagine how accurate an optical character recognition algorithm could be, for example, if it leveraged such a high pixel count foundation? And let's also not underestimate the downward price pressure and upward feature set pressure that the 41 Mpixel image sensor and companion processing SoC, presumably not exclusive to Nokia, will put on today's conventional counterparts.

In other news, we've recently added an archive of past Embedded Vision Insights editions to the website. And, as you may have already ascertained given that the previous edition of Embedded Vision Insights came out exactly two weeks ago, we're moving to a twice-monthly newsletter publication pace. I'd like to thank my partners in the Embedded Vision Alliance for their ongoing new-content submissions, without which this distribution step-up wouldn't be possible. And I hope that you're as enthusiastic about the uptick as I am. Regardless of whether you love it or dislike it, I always welcome your comments. Thanks for your interest and involvement in the field of embedded vision, and for your support of the Embedded Vision Alliance.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


February 16, 2012 Edition

Dear Colleague,

Shortly after declaring bankruptcy on January 19th, longstanding photography pioneer Eastman Kodak announced last week that it was winding down its digital imaging product line this year, focusing going forward on patent licenses, printers, enterprise services, photo labs and (ironically) disposable silver halide film-based cameras. Yet, as anyone who uses online services such as Facebook, Flickr, Instagram, Picasa and YouTube already knows, still and video photography is more popular than ever. So what happened? To some degree, Kodak's digital debacle, despite the company having debuted the technology nearly 40 years ago, was the result of its unwillingness to turn its back on its silver halide heritage and fully embrace the digital future.

But Kodak's not the only company having problems; traditional competitors such as Canon and Sony are also struggling. That's because standalone cameras have largely fallen out of favor in recent years, with the increasingly capable imaging subsystems integrated within cellphones, tablet computers and other multi-function devices as the heirs apparent and ascendant. This evolutionary transition is good news for embedded vision developers. As I've written about on numerous occasions in recent months, cellphones and tablets are open systems that represent fruitful development ground for independent developers. Whether the application involves health care, security, automotive driver assistance, a gesture-augmented user interface or any of the countless other implementations that have emerged, they wouldn't have been possible in the comparatively closed-system camera past.

In other news, as I first mentioned in an Embedded Vision Insights newsletter edition published shortly after the December 2011 Embedded Vision Alliance Summit, EVA member Texas Instruments (TI) has upgraded its membership to the premier Platinum tier. According to Niels Anderskouv, vice president, Digital Signal Processing Systems at TI, "Embedded vision and vision analytics are becoming pervasive in many applications, including video security, machine vision, automotive safety or even your refrigerator at home. Texas Instruments’ digital signal processing products provide the real-time precision and high performance that’s at the core of many of these innovative applications. TI is proud to now be a platinum member of the Embedded Vision Alliance and anticipate that our role will help the Alliance further spur exciting innovation in the industry." As of earlier this week, TI's Platinum Portal on the Embedded Vision Alliance website is live and ready for your perusal. I encourage you to check it out, learn more about the company and its embedded vision involvement, and periodically revisit the Portal as TI (and I) add more material.

Finally, speaking of Embedded Vision Alliance Summits, the next one will be held on Thursday, March 29, in San Jose, CA. Alliance member representatives should have already received preliminary email communication about the event; please confirm your planned attendance as soon as possible and look for a detailed agenda to come shortly. The Summit will be held coincident with and close by the Design West Conference series, which includes the Embedded Systems Conference Silicon Valley. We'll be inviting key members of the technology analyst and press community to join us beginning mid-day for a compelling series of embedded vision presentations, panel discussions, and product introductions, not to mention a cocktail reception. If you're an analyst or journalist who'll be in Silicon Valley that week and hasn't yet heard from us, I apologize; please drop us an email and we'll be sure to add you to the attendance list. And for the rest of you, stay tuned for more information on the earlier-mentioned product introductions, representing compelling embedded vision breakthroughs from several Alliance member companies.

As always, I encourage you to contact me with your ideas about making the Alliance, this newsletter and the website better. Thanks for your interest and involvement in the field of embedded vision, and for your support of the Embedded Vision Alliance.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


January 17, 2012 Edition

Dear Colleague,

Welcome to the premier 2012 edition of Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.

Last week's Consumer Electronics Show provided a plethora of reminders that embedded vision is no longer just the promising future but is also the already-successful present. Embedded Vision Alliance member CEVA, for example, publicly released a vision-optimized processor core. EVA member CogniVue unveiled a small, low-power smart camera development module. And check out the diversity of other daily news writeups that I filed throughout the week:

Jeff Bier and Jeremy Giddings represented the Embedded Vision Alliance at CES and video-recorded many of the demonstrations they saw, some of which are already posted, with others to be published pending interviewee approval. Bier and Giddings met with both current and prospective EVA members at CES, educated press representatives on Alliance progress and plans, and left Las Vegas inspired by the visionary embedded vision implementations they heard about and auditioned.

I encourage you to regularly revisit the news, technical article and video sections of the EVA website for additional embedded vision content coming from both last week's CES and from future conferences. Similarly, if you haven't yet perused the technical information found at the Embedded Vision Academy, unveiled last month, I commend it to your inspection. And as always, I encourage you to send me an email with any and all thoughts regarding making the Alliance, this newsletter and the website better.

Best wishes for the new year!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


December 20, 2011 Edition

Dear Colleague,

Welcome to the fourth edition of Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.

This past month has been a productive one for the Embedded Vision Alliance. In early December, the Alliance launched the Embedded Vision Academy, a free online training facility for embedded vision product developers. The Academy incorporates training videos, technical interviews, demonstrations, downloadable code and demos, and other developer resources. Access is free to all, with registration. The Academy makes it possible for engineers worldwide to gain the skills needed for embedded vision product development. I encourage you to check out the wealth of Academy content, which will steadily increase over time, at your earliest convenience.

Two weeks ago, the Alliance held its second quarterly Summit meeting for member companies, following up September's premier Summit. This well-attended event in Dallas, TX was sponsored by Texas Instruments, who generously provided not only facilities but also an enthusiastically received technical session on the BeagleBoard evaluation module series and its applicability to embedded vision applications. At the December Summit, Texas Instruments also announced that it will be upgrading its Alliance membership to the Platinum level, demonstrating the company's commitment to the Embedded Vision Alliance's mission of inspiring and empowering design engineers to create machines that see.

Speaking of enthusiastic receptions, Nik Gagvani from Cernium provided the mid-day keynote address. Nik and his team developed the Archerfish Solo, the first low-cost smart surveillance camera for consumer use. Nik shared his insightful perspective on the challenges faced by embedded vision system designers, and what these system designers need most from their suppliers. Stay tuned for Nik's video-recorded interview with me, along with an article about his keynote and a copy of his foil set, all to appear soon on the website and in next month's edition of Embedded Vision Insights for all registered website users.

Also in attendance at the December Summit event was Jim Donlon, the program manager of DARPA's Mind's Eye Program. Donlon is scheduled to deliver the keynote at the next Embedded Vision Alliance Member Summit, currently scheduled for March 29, 2012 in Silicon Valley, coincident with DESIGN West (formerly the Embedded Systems Conference Silicon Valley). As currently envisioned, a portion of the day will be open to invited press and industry analyst attendees; an evening cocktail reception will provide additional opportunity for Alliance member interactions with press and analyst representatives, including demonstrations. Alliance members, please mark this date in your calendar and plan to attend.

For Alliance members, registered website users and visitors alike, please send me an email with any and all thoughts regarding making the Alliance, this newsletter and the website better. Happy holiday wishes from the Embedded Vision Alliance!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


November 15, 2011 Edition

Dear Colleague,

Welcome to the third edition of Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.

The past few weeks have been particularly newsworthy for camera-inclusive smartphones and tablets. Consider, for example, handsets such as the HTC MyTouch Slide 4G and its plethora of "power user" snapshot settings, the 1080p video capture capabilities of the Apple iPhone 4S, the stitch-free panorama mode supported by the Samsung Galaxy Nexus and the high quality Carl Zeiss optics built into the Nokia Lumia 800. Key to new capabilities such as these are the systems' microprocessors; now-sampling CPUs built from Qualcomm's latest Krait and ARM's latest Cortex-A15 microarchitectures, for example, along with Nvidia's in-production quad-core (or more accurately, penta-core) Tegra 3 and Apple's dual-core A5.

To be clear, these systems (and the SoCs on which they're based) are useful for a diversity of embedded vision functions, not just for picture-snapping and videography purposes. Take a look, for example, at the Kinect-reminiscent gesture interfaces supported by Kinectimals for Windows Phone 7, included in latest-generation Pantech handsets, documented in both filed and granted patents from Apple, and suggested by recent Qualcomm acquisitions. Ponder the facial recognition-based unlock capabilities built into Google's "Ice Cream Sandwich" Android v4 and Nokia's Symbian O/S. And appraise the fresh perspectives represented by embryonic applications such as television program identification, augmented reality, and traffic flow optimization.

Cellular handsets and tablet computers are compelling platforms for implementing embedded vision, by virtue of the prevalence of both front- and rear-mounted image sensors of sufficient resolution, the substantial available memory and processing resources, the systems' application-enabling portability, and (perhaps most importantly) the often-subsidized prices at which they're sold and their consequent large installed user base. How do you hope to harness mobile electronics' potential in actualizing your embedded vision, and what barriers exist to transforming your aspirations into reality? Drop me an email with your thoughts, and enjoy this issue of Embedded Vision Insights.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


October 18, 2011 Edition

Dear Colleague,

Welcome to the second edition of Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.

The Embedded Vision Alliance achieved a key milestone on September 20 with its successful premier Alliance Summit meeting, hosted by Alliance member Xilinx at its San Jose, CA facilities. The daylong series of briefings, planning sessions and relationship-building opportunities were judged highly rewarding by all in attendance, and set in place a solid foundation for 2012-and-beyond activities. Please see here for a detailed report on the day's events and outcomes.

The next quarterly Alliance Summit is scheduled for Tuesday, December 6 in Dallas, TX, hosted by Alliance member Texas Instruments. It immediately precedes Alliance member IMS Research's Touch-Gesture-Motion Conference in nearby Austin. If you're already a member of the Embedded Vision Alliance, mark that date in your calendar and plan to attend. If your company is interested in joining the Alliance, contact Jeremy Giddings at 510-451-1800 or giddings@embedded-vision.com for membership information. And please also continue to send us your feedback about this newsletter and how we can improve it. I look forward to hearing from you.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

Click here for the remainder of the newsletter


September 20, 2011 Edition

Dear Colleague,

Welcome to Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.

The Embedded Vision Alliance is an industry partnership dedicated to helping engineers use embedded vision technology to design "machines that see." The Alliance currently comprises 18 companies, including leaders in semiconductors, tools, algorithms, cameras, and design services for embedded vision applications. Our web site, www.Embedded-Vision.com, is growing rapidly with video seminars, technical articles, coverage of industry news, and discussion forums.

We are excited to launch the Embedded Vision Insights newsletter to help keep the industry informed on developments related to designing machines that see. Initially the newsletter will be published on a monthly basis. Please help us get the word out by forwarding Embedded Vision Insights to colleagues who will find it valuable, and encouraging them to subscribe. Please also send us your feedback about the newsletter and how we can improve it. I look forward to hearing from you.

Jeff Bier
Founder, Embedded Vision Alliance

Click here for the remainder of the newsletter