BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Missouri State University/Calendar of Events//EN
CALSCALE:GREGORIAN
X-WR-TIMEZONE:America/Chicago
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:a91d5320-f11d-42f9-b148-b9e6899dd673.192463@calendar.missouristate.edu
CREATED:20181031T151518Z
LAST-MODIFIED:20181031T151518Z
LOCATION:Cheek Hall 0151
SUMMARY:An Adaptive Memory Based Reinforcement Learning Controller
DESCRIPTION:Thesis Defense - Keith Cissell\n\n\nAbstract:\n\n\nRecently\, 
 the use of autonomous robots for exploration has drastically expanded--la
 rgely due to innovations in both hardware technology and the development 
 of new artificial intelligence (AI) methods. The wide variety of robotic 
 agents and operating environments has led to the creation of many unique 
 control strategies that cater to each specific agent and their goal withi
 n an environment. Most control strategies are single purpose\, meaning th
 ey are built from the ground up for each given operation. Here we present
  a single\, reinforcement learning control solution for autonomous explor
 ation intended to work across multiple agent types\, goals\, and environm
 ents. Our solution includes a memory of past actions and rewards to effic
 iently analyze an agent’s current state when planning future actions. The
  agent’s objective is to safely navigate an environment and collect data 
 to achieve a defined goal. The control solution is first compared with ra
 ndom and heuristic control schemas. To test the controller for adaptabili
 ty\, the controller is next subjected to changes in the agent’s sensors\,
 environments\, and goals. Control strategies are compared by examining
  goal completion rates\, the number of actions taken\, and the agent’s rem
 aining health and energy at the end of a simulation. Results indicate tha
 t our newly developed control strategy is adaptable to new situations. A 
 reinforcement-learning based controller\, such as the one presented in th
 is research\, could help provide a universal solution for controlling aut
 onomous robots in the field of exploration.
X-ALT-DESC;FMTTYPE=text/html:<html><head><title></title></head><body><p>Thesis
  Defense - Keith Cissell</p>\n<p>Abstract:</p>\n<p><span lang="en-US">Recently\,
  the use of autonomous robots for exploration has drastically expanded--largely
  due to innovations in both hardware technology and the development of new
  artificial intelligence (AI) methods. The wide variety of robotic agents and
  operating environments has led to the creation of many unique control
  strategies that cater to each specific agent and their goal within an
  environment. Most control strategies are single purpose\, meaning they are
  built from the ground up for each given operation. Here we present a single\,
  reinforcement learning control solution for autonomous exploration intended to
  work across multiple agent types\, goals\, and environments. Our solution
  includes a memory of past actions and rewards to efficiently analyze an
  agent’s current state when planning future actions. The agent’s objective is
  to safely navigate an environment and collect data to achieve a defined goal.
  The control solution is first compared with random and heuristic control
  schemas. To test the controller for adaptability\, the controller is next
  subjected to changes in the agent’s sensors\, environments\, and goals.
  Control strategies are compared by examining<span>&nbsp\; </span>goal
  completion rates\, the number of actions taken\, and the agent’s remaining
  health and energy at the end of a simulation. Results indicate that our newly
  developed control strategy is adaptable to new situations. A
  reinforcement-learning based controller\, such as the one presented in this
  research\, could help provide a universal solution for controlling autonomous
  robots in the field of exploration.</span><span lang="en-US"> </span></p>\n<p><span
  lang="en-US">&nbsp\;</span></p></body></html>
DTSTART;TZID=America/Chicago:20181112T110000
DTEND;TZID=America/Chicago:20181112T120000
SEQUENCE:0
CATEGORIES:Current Students,Faculty,Staff
END:VEVENT
END:VCALENDAR