What can be done with a Raspberry Pi but not with an Arduino?

You can think of the Raspberry Pi as a laptop with programmable pins, or as a micro-sized computer.

I think we can classify the differences into two categories: hardware and software.

Hardware

For example (and not as an exhaustive list):

  1. You can display graphics and video after attaching an LCD, whether it is as large as a TV or a credit-card-sized screen, which the Arduino obviously cannot do.
  2. You can use many features simultaneously that cannot be combined on an Arduino, such as Bluetooth, WiFi, Ethernet networking, and USB at the same time. On an Arduino you can use each of these individually with a dedicated shield, but not simultaneously, since most shields cannot accept other shields stacked on top of them.
  3. Its high processing speed lets it replace some circuits, such as a PWM circuit, with simple software that does not burden the processor the way it would on an Arduino. You can therefore extend your capabilities even without the dedicated hardware, as long as enough GPIO pins are available. Of course, some circuits cannot be bypassed: analog reading, for example, needs hardware and cannot be done in software alone. (A software-PWM sketch follows this list.)
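As a concrete illustration of the software-PWM point in item 3, here is a minimal sketch using the RPi.GPIO Python library; the pin number (BCM 18), the frequency, and the LED-fading loop are assumptions made for this example, not part of the original post.

import RPi.GPIO as GPIO   # software PWM on the Pi's GPIO pins
import time

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)            # assumed pin: BCM 18

pwm = GPIO.PWM(18, 1000)            # 1 kHz software PWM, no extra circuit needed
pwm.start(0)                        # begin at 0% duty cycle

try:
    while True:
        # Sweep the duty cycle up and down to fade an LED.
        for duty in list(range(0, 101, 5)) + list(range(100, -1, -5)):
            pwm.ChangeDutyCycle(duty)
            time.sleep(0.05)
finally:
    pwm.stop()
    GPIO.cleanup()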

On the other side

Software

  1. You can program the Pi as you program your laptop. It is not limited to a certain language, like C on the Arduino; you can program it in many languages, including C, as well as Python, Java, C++, etc. You can also install a meta-operating system such as ROS for robotics applications that, in some cases, need a direct connection to sensors through the pins.
  2. Although the Arduino can do multithreading using FreeRTOS, the Pi is more reliable and faster: it can process images, read sensors, make decisions, and command actuators simultaneously.
  3. You can use it as a web server, since you can install a full operating system on it (a minimal example follows this list).
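As an illustration of the web-server point in item 3, here is a minimal sketch using only Python's standard library; the port number and page contents are arbitrary choices for the example.

from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a tiny HTML page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Hello from the Raspberry Pi</h1>")

if __name__ == "__main__":
    # Listen on all interfaces; port 8080 is an arbitrary choice.
    HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()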

What is SLAM in robotics and self-driving cars?

The need for high-definition maps, mapping of unknown GPS-denied areas, exploration, and autonomous missions are the main motivations for solving the Simultaneous Localization And Mapping (SLAM) problem.

Given a map, a robot can localize itself easily by comparing its observations to the stored map. Conversely, given the robot's pose relative to a global frame (localization), it is easy to map the environment.

A map is built by incorporating and stitching together the robot's observations, but these observations are measured relative to the robot's frame of reference, because the observing sensor is mounted on board.

To refer these observations to the map axes, the robot's pose needs to be known so that each observation can be transformed from the robot frame to the map frame. That is why localization is needed for mapping.
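As a small illustration of that transformation (my own sketch, not code from any particular SLAM system), assume a 2D robot pose (x, y, theta) and a point landmark measured in the robot frame:

import numpy as np

def robot_to_map(pose, point_robot):
    # Rotate the observation by the robot's heading, then shift by its position.
    x, y, theta = pose
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return rotation @ np.array(point_robot) + np.array([x, y])

# A landmark seen 2 m straight ahead of a robot standing at (1, 1)
# and facing 90 degrees lands at roughly (1, 3) in the map frame.
print(robot_to_map((1.0, 1.0, np.pi / 2), (2.0, 0.0)))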

OK, that seems good: given a map we can get the pose, and given a pose we can get the map. But what if we know neither of them, such as being in an unknown place with no known map and no localization system?

You have to map it yourself. But the robot has no map with which to localize itself, and no localization with which to build the map. That is why it is a chicken-and-egg problem, and where the “simultaneous” in the name comes from.

If there were no noise associated with the sensors, the problem could be solved easily by tracking the robot's position from the origin and mapping the environment accordingly.

Unfortunately, noise does exist and accumulates over time, leading to catastrophic results. Probabilistic approaches are therefore introduced to filter and minimize sensor noise.
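A toy simulation (my own, with an arbitrary step length and noise level) makes that accumulation visible: the robot really moves in fixed steps, but integrating noisy step measurements makes the estimated position drift further and further from the truth.

import random

random.seed(0)
true_x = 0.0
estimated_x = 0.0

for step in range(1, 1001):
    true_step = 0.1                                       # actual motion: 0.1 m per step
    measured_step = true_step + random.gauss(0.0, 0.01)   # noisy odometry reading
    true_x += true_step
    estimated_x += measured_step
    if step % 250 == 0:
        print(f"step {step}: accumulated error = {abs(estimated_x - true_x):.3f} m")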

Filtering out this noise, using methods such as least-squares error minimization or recursive filtering, is the main task of the SLAM problem.

Extended Kalman filter (EKF) SLAM, Rao-Blackwellized particle filter SLAM, and graph SLAM are some of the approaches used to solve this problem.

What is the difference between SLAM and visual odometry?

First, we have to distinguish between SLAM and odometry.

Odometry is a part of the SLAM problem. It estimates the agent/robot trajectory incrementally, step after step, measurement after measurement.

new_state = old_state + step_measurement

The next state is the current state plus the incremental change in motion.

Incremental change can be measured using various sensors. Wheel encoders, for instance, count the number of wheel rotations per time step, leading to an estimate of the distance travelled per step.
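A minimal sketch of that incremental update for a wheel encoder (the wheel radius, encoder resolution, and tick counts below are assumed values for the example):

import math

WHEEL_RADIUS = 0.05    # metres, assumed
TICKS_PER_REV = 360    # encoder resolution, assumed

def distance_from_ticks(ticks):
    # Convert the ticks counted in one time step into distance travelled.
    return (ticks / TICKS_PER_REV) * 2.0 * math.pi * WHEEL_RADIUS

x = 0.0                              # start at the origin
for ticks in [90, 95, 88, 92]:       # ticks counted in four successive steps
    x += distance_from_ticks(ticks)  # new_state = old_state + step_measurement
print(f"estimated distance travelled: {x:.3f} m")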

A camera can do odometry as well. Each frame is treated as a step measurement. Moving right or left, up or down, changes the scene from frame to frame; using computer vision algorithms, these changes can be interpreted as a change in pose.
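A rough sketch of that idea with OpenCV, assuming a calibrated camera (the camera matrix and frame file names below are placeholders) and two consecutive frames; it matches features between the frames and recovers the relative rotation and the direction of translation.

import cv2
import numpy as np

frame1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Placeholder intrinsics: focal length and principal point of the camera.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

# Estimate the essential matrix, then recover the relative pose (R, t).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("rotation:\n", R)
print("translation direction:", t.ravel())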

SLAM, on the other hand, is Simultaneous Localization And Mapping. Localization is a subset of SLAM: odometry affects localization, which in turn affects SLAM.

What is CNN in machine learning?

A CNN (Convolutional Neural Network) is most commonly listed under deep learning algorithms, which are a subset of machine learning and AI.

Convolution means applying a kernel/filter of n×n dimensions to a selected pixel and its surroundings, then moving the same kernel to the next pixel and its surroundings, and so on, in order to assess each pixel.

CNNs are mainly used with images to extract features. Although features, shapes, and patterns can be detected directly using plain multilayer sequential neural networks, a CNN is more accurate.

A CNN applies filters to each pixel of the image to examine which type of feature the pixel belongs to, whether a line, a half circle, a curve, etc.

Pixels that give a high score after being convolved with a straight-line filter most probably belong to a straight line.

A 3×3 kernel that gives its maximum value when detecting a diagonal. Image source: Deep Learning Udacity Nanodegree.
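To make the scoring idea concrete, here is a tiny hand-made example (my own pixel values and kernel weights, not the ones from the figure) of applying a 3×3 diagonal kernel to small patches; the patch that actually contains a diagonal gives the highest response. Note that CNN “convolution” is implemented as cross-correlation, which is what correlate2d computes.

import numpy as np
from scipy.signal import correlate2d

diagonal_kernel = np.array([[ 1, -1, -1],
                            [-1,  1, -1],
                            [-1, -1,  1]])

diagonal_patch = np.array([[1, 0, 0],
                           [0, 1, 0],
                           [0, 0, 1]])

vertical_patch = np.array([[0, 1, 0],
                           [0, 1, 0],
                           [0, 1, 0]])

# A 3x3 kernel over a 3x3 patch in 'valid' mode gives a single score per patch.
print(correlate2d(diagonal_patch, diagonal_kernel, mode="valid"))  # [[3]]  high score
print(correlate2d(vertical_patch, diagonal_kernel, mode="valid"))  # [[-1]] low score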

The filter/kernel size determines how large a region around the pixel is considered during feature examination.

Detecting a square is done by first detecting its edges (straight lines), then forming a complete understanding of the shape.

A horizontal edge together with a vertical one forms half a square; half a square with another half signals a complete square. This process is carried out over multiple CNN layers, each layer with its own function, until the shape is assembled.

The same happens with circles, as shown.

A circle is detected in steps: the upper-left curve combined with the upper-right one, and so on, results in a complete understanding of the circle, as shown in the above figure. See more in this YouTube channel.

The same approach is used for detecting handwritten characters. Each set of layers can detect a feature in the image, and combining different features yields the different shapes of the digits.

In this figure, both digits, 8 and 9, have an upper loop, while 9 has a lower straight line and 8 has a lower loop. One or more layers in this network are dedicated to loop detection, and others to vertical-line detection. Analysing the features in the input leads to classifying the digit in the image.

The beauty is that the values of these filters are determined by training the network, not by your own selection :).
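As a minimal sketch of such a network (an illustration only, not the exact networks behind the figures), here is a small PyTorch CNN for 28×28 grayscale digit images; the layer sizes are arbitrary choices, and the filter weights start random and are then learned during training, exactly as described above.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # early filters: edges, curves
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # later filters: loops, strokes
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Pass one random "image" through the untrained network to show the shapes.
logits = SmallCNN()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])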

Introduce Yourself (Example Post)

This is an example post, originally published as part of Blogging University. Enroll in one of our ten programs, and start your blog right.

You’re going to publish a post today. Don’t worry about how your blog looks. Don’t worry if you haven’t given it a name yet, or you’re feeling overwhelmed. Just click the “New Post” button, and tell us why you’re here.

Why do this?

  • Because it gives new readers context. What are you about? Why should they read your blog?
  • Because it will help you focus your own ideas about your blog and what you’d like to do with it.

The post can be short or long, a personal intro to your life or a bloggy mission statement, a manifesto for the future or a simple outline of the types of things you hope to publish.

To help you get started, here are a few questions:

  • Why are you blogging publicly, rather than keeping a personal journal?
  • What topics do you think you’ll write about?
  • Who would you love to connect with via your blog?
  • If you blog successfully throughout the next year, what would you hope to have accomplished?

You’re not locked into any of this; one of the wonderful things about blogs is how they constantly evolve as we learn, grow, and interact with one another — but it’s good to know where and why you started, and articulating your goals may just give you a few other post ideas.

Can’t think how to get started? Just write the first thing that pops into your head. Anne Lamott, author of a book on writing we love, says that you need to give yourself permission to write a “crappy first draft”. Anne makes a great point — just start writing, and worry about editing it later.

When you’re ready to publish, give your post three to five tags that describe your blog’s focus — writing, photography, fiction, parenting, food, cars, movies, sports, whatever. These tags will help others who care about your topics find you in the Reader. Make sure one of the tags is “zerotohero,” so other new bloggers can find you, too.
