Research Activities on Intelligent Systems

The Bio-Inspired Machine Intelligence (BIMI) Laboratory investigates intelligence, using insights from biological systems to build autonomous systems. An autonomous vehicle is a good example of a system with machine intelligence. The BIMI Lab, directed by Prof. Kwon, has been working on various areas of intelligent systems, ranging from brain research to real-world applications.

Research Interests

  • Machine Learning, Neural Networks, Artificial Intelligence
  • Computer Vision, Perception, Decision Making, Autonomous Vehicles
  • Computational Neuroscience, Neuroevolution

Research Activities

Top-down theory-driven and bottom-up data-driven approaches: these two approaches must be pursued together to achieve true intelligence. First, the BIMI Lab has been exploring what true machine intelligence is and how it can be implemented; internal neural dynamics and neural delay compensation mechanisms have been investigated by Prof. Kwon. Second, the BIMI Lab has also been probing the neuronal structure of the brain to understand its function.

Intelligent System Implementations

The following projects were selected to show the lab's efforts to implement intelligent systems.

Sensor Fusion for Autonomous Vehicles using End-to-End Driving

Three Deep Neural Network architectures were designed and tested to determine which sensor fusion approach performs best. The BIMI Lab tested Deep Neural Networks driven by a camera and a LIDAR; the three architectures are shown below.

Figure 1. Three types of neural network architectures

Two types of sensor fusion were tested. In the first, a single image input was constructed from the two sensor streams and fed to a Deep Neural Network (left figure below). In the second, the two sensor inputs were fed to separate Deep Neural Networks and their outputs were combined into a weighted output (right figure below).
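
The sketch below illustrates the two fusion schemes in PyTorch. It is a minimal illustration only: the layer sizes, input resolution, and the 0.7/0.3 ensemble weights are assumptions, not the networks the lab actually trained.

import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """A small end-to-end driving network: image tensor in, steering angle out."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(36, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Early fusion: 2D LIDAR returns projected onto the image plane are stacked
# on the camera image as an extra channel, and one network processes both.
camera = torch.rand(1, 3, 120, 160)        # RGB camera frame
lidar_plane = torch.rand(1, 1, 120, 160)   # LIDAR scan projected to the image plane
stacked = torch.cat([camera, lidar_plane], dim=1)
steer_early = SteeringNet(in_channels=4)(stacked)

# Late fusion (ensemble): one network per sensor, outputs combined with weights.
cam_net, lidar_net = SteeringNet(3), SteeringNet(1)
steer_late = 0.7 * cam_net(camera) + 0.3 * lidar_net(lidar_plane)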

Figure 2. Two sensor fusion methods. 2D LIDAR data is projected onto the image plane and stacked on the camera image (left). Per-sensor network outputs are combined with different weights (right).

These four cases were tested in an indoor environment. 

Figure 3. Demo with four different approaches. Camera only (left-top), LIDAR only (right-top), camera and LIDAR stacked (left-bottom), camera and LIDAR ensemble (right-bottom).

Comparative Analysis of Behavioral Cloning Approaches for Autonomous Vehicles

This research introduces two learning approaches to behavioral cloning: an architecture that combines a CNN (Convolutional Neural Network) with an LSTM (Long Short-Term Memory, a variant of Recurrent Neural Network), and an architecture that uses a state-of-the-art ResNet for transfer learning. Both architectures are end-to-end trainable and were trained on monocular vision inputs. Combining a CNN with an LSTM allows the network to update the parameters of the visual feature extractor and the temporal module simultaneously. The transfer learning approach lets the network reuse parameters already learned for a different task and start training with weights close to convergence rather than from random initialization.
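
The sketch below outlines both variants in PyTorch. The layer sizes, sequence length, and the choice of resnet18 as the pretrained backbone are assumptions for illustration, not the architectures reported in the study.

import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMCloner(nn.Module):
    """Per-frame CNN features fed to an LSTM; both parts train end to end."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # steering angle

    def forward(self, frames):                # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # predict from the last time step

# Transfer-learning variant: start from ImageNet-pretrained weights (recent
# torchvision API) and replace the classifier with a single regression output.
resnet_cloner = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet_cloner.fc = nn.Linear(resnet_cloner.fc.in_features, 1)

clip = torch.rand(2, 8, 3, 120, 160)          # a short clip of monocular frames
steer_seq = CNNLSTMCloner()(clip)
steer_single = resnet_cloner(torch.rand(2, 3, 224, 224))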

Figure 4. System overview

Figure 5. Simulation environment. A Chevy Bolt model was created and tested.

Figure 6. Data collection. A student was collecting driving data using a driving simulator.

Development of Low-Cost Autonomous Vehicle Platform

The BIMI Lab designed a low-cost autonomous vehicle platform for research and education. The platform was based on a ride-on electric car. To implement X-by-Wire, two encoder motors were attached and a custom-built controller module was designed. To improve the versatility of the system, ROS (Robot Operating System) Kinetic was used.
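
As a rough illustration of how the X-by-Wire platform can be commanded from ROS, the node below publishes velocity commands at a fixed rate. The /cmd_vel topic and the use of geometry_msgs/Twist are assumptions for the example; the lab's custom controller module may expose a different interface.

#!/usr/bin/env python
# Minimal ROS (Kinetic-era) sketch: send speed and steering commands to the
# drive-by-wire controller over a velocity topic.
import rospy
from geometry_msgs.msg import Twist

def drive():
    rospy.init_node("xbywire_commander")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(20)            # 20 Hz command loop
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.5           # forward speed in m/s (placeholder)
        cmd.angular.z = 0.1          # steering rate in rad/s (placeholder)
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    drive()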

Figure 7. The system overview. LIDAR and camera sensors were attached to test the system. A GPU-powered laptop served as the main processing unit.

Software-in-the-Loop Modeling and Simulation

The BIMI Lab designed a software-in-the-loop framework to test various algorithms in a simulated environment. ROS Kinetic and Gazebo 9 were used to implement this project. A fully simulated Chevy Bolt model was constructed and a test track was built.
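
The snippet below sketches one way to close the software-in-the-loop: a small ROS node reads the simulated vehicle's pose back from Gazebo so that control code can be evaluated against the simulator. The node name and the "chevy_bolt" model name are assumptions for illustration.

# Read the simulated Bolt's pose from Gazebo's model state topic and log it.
import rospy
from gazebo_msgs.msg import ModelStates

def on_model_states(msg):
    if "chevy_bolt" in msg.name:
        pose = msg.pose[msg.name.index("chevy_bolt")]
        rospy.loginfo_throttle(1.0, "Bolt at x=%.2f y=%.2f",
                               pose.position.x, pose.position.y)

rospy.init_node("sil_monitor")
rospy.Subscriber("/gazebo/model_states", ModelStates, on_model_states)
rospy.spin()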

Figure 8. Chevy Bolt simulated model.

Figure 9. A graph representation of the Chevy Bolt robot model.

Self-Driving with Traffic Sign Detection

Traffic signs were placed in a simulated environment. Using transfer learning, we trained Darknet with our traffic sign datasets. To accomplish lateral control, we collected a human operator's driving record and trained a Deep Neural Network to mimic the human driver's behavior.
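
The following sketch shows a typical way to run inference with a Darknet-trained detector through OpenCV's dnn module. The signs.cfg and signs.weights file names stand in for the lab's fine-tuned model, and the 0.5 confidence threshold is a placeholder.

import cv2

net = cv2.dnn.readNetFromDarknet("signs.cfg", "signs.weights")
frame = cv2.imread("camera_frame.png")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward(net.getUnconnectedOutLayersNames())
# Each detection row holds [cx, cy, w, h, objectness, class scores...];
# keep boxes whose best class score clears the confidence threshold.
for output in detections:
    for det in output:
        scores = det[5:]
        if scores.max() > 0.5:
            print("sign class", scores.argmax(), "box", det[:4])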

Figure 10. Object detection. Data labeling (left). Test in a simulated environment (right)

Semantic Segmentation

The semantic segmentation project was conducted to detect the drivable region of the road. A Fully Convolutional Network (FCN) was trained to perform the segmentation.
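
A hedged example of such a network is shown below, using torchvision's FCN implementation with a two-class (drivable vs. not drivable) head; the backbone, input size, and class layout are assumptions rather than the lab's exact configuration.

import torch
from torchvision.models.segmentation import fcn_resnet50

# Two output classes: 0 = not drivable, 1 = drivable (assumed layout).
model = fcn_resnet50(weights=None, num_classes=2).eval()
frame = torch.rand(1, 3, 360, 640)            # a normalized camera frame
with torch.no_grad():
    logits = model(frame)["out"]              # (1, 2, 360, 640)
drivable_mask = logits.argmax(dim=1) == 1     # True where the pixel is drivable
print("drivable pixels:", int(drivable_mask.sum()))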

Figure 11. An example of semantic segmentation. The green area was classified as a drivable region.

Traffic Light Detection with Path Planning

This was the capstone project of Udacity's Self-Driving Car Nanodegree. The BIMI Lab successfully completed the capstone by making the car follow the planned path while detecting and reacting to traffic lights. The TensorFlow object detection module was used to detect traffic lights and identify their color.
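
The reaction logic amounts to a simple rule: hold the vehicle at zero speed while the detector reports a red light, and otherwise track the planned path at cruise speed. The sketch below expresses that rule; the function name, the string labels, and the cruise speed are hypothetical.

CRUISE_SPEED = 4.0   # m/s, placeholder value

def target_speed(detected_light_color):
    """Return the speed command given the latest traffic-light detection."""
    if detected_light_color == "red":
        return 0.0            # hold at the stop line while the light stays red
    return CRUISE_SPEED       # otherwise keep following the planned waypoints

assert target_speed("red") == 0.0
assert target_speed("green") == CRUISE_SPEED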

Figure 12. The car detects a red traffic signal and stops while the signal stays red.

Deep Learning Related Research

Facial Landmark Detection (Experimental)

To automatically extract the region of the face, a facial landmark detection method was used.
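
A typical landmark-based pipeline of this kind is sketched below with dlib's 68-point predictor; the specific detector, the model file, and the simple bounding-box crop are illustrative assumptions, and the predictor file must be downloaded separately from dlib.

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    landmarks = predictor(gray, face)
    points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
    # Bound the 68 landmark points to crop the facial region.
    xs, ys = zip(*points)
    face_crop = image[min(ys):max(ys), min(xs):max(xs)]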

Figure 13. An automatic facial landmark detection was implemented to extract a facial region.

Generation of Video using RecycleGAN (Experimental)

Using RecycleGAN, the BIMI Lab was able to generate one person's face from another person's facial expression.
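
What distinguishes Recycle-GAN from a plain CycleGAN is a recycle loss that adds a temporal predictor in the target domain before mapping back to the source. The sketch below expresses that loss in PyTorch under simplifying assumptions (a single prior frame and an L1 penalty); G_xy, G_yx, and P_y are illustrative names for the two generators and the temporal predictor, whose definitions are omitted.

import torch.nn.functional as F

def recycle_loss(x_t, x_t1, G_xy, G_yx, P_y):
    """x_t, x_t1: consecutive source-domain frames, shape (B, C, H, W)."""
    y_t = G_xy(x_t)                 # map frame t into the target domain
    y_t1_pred = P_y(y_t)            # predict the next target-domain frame
    x_t1_rec = G_yx(y_t1_pred)      # map the prediction back to the source
    return F.l1_loss(x_t1_rec, x_t1)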

Figure 14. The right side of the image was generated from the facial expression shown on the left.


Data-Driven Approaches

Bottom-up data-driven approaches were also taken; the following are selected examples of what we achieved.

Touch-Enabled Virtual Room for Scientific Data Exploration

We proposed to develop a virtual room where a user can touch objects to explore scientific data. The virtual room system will consist of an Oculus Rift (a virtual reality headset), a Leap Motion 3D Jam (a hand position tracking system), a GloveOne (a haptic glove), and a host computer hosting our custom software system and the 3D scientific data. Users receive haptic feedback from touching objects while they explore the scientific data.

Figure 15. 3D data visualization and gesture-based user interface.

3D Brain Tissue Scanner

A high-throughput and high-resolution 3D tissue scanner was proposed and developed by our team.

National Science Foundation (NSF) Award Abstract #1337983. Development of High-Throughput and High-Resolution Three-Dimensional Tissue Scanner with Internet-Connected 3D Virtual Microscope for Large-Scale Automated Histology, $341,563.00 (2013 – 2017) 

Figure 16. Internet-Enabled Robotics Microscope
