
Past Member

Pedro Colon-Hernandez

  • Personal Robots

I am a computer engineer from Puerto Rico, currently a graduate research assistant in the Object-Based Media Group. I have worked on developing applications for 8K displays and have experience in various areas of software development. I want to explore wearable technologies and how they can help us interface with the world around us more seamlessly than our phones do.

I am now working on intelligent assistants such as Siri, Alexa, and Google Assistant. These assistants have been built to handle a predetermined set of actions and situations; when asked to perform an action outside that set, they simply send the user to a web search for the action in the hope that the user can figure it out on their own. But what if intelligent assistants could perform that web search (or a similar action), assimilate the information, and relay it back to us themselves? Take, for example, asking one of these agents about a minor role in a movie: some agents may know the general summary of the movie, but have no idea how to answer questions about specific minor details in a given context.

We are looking to develop natural language processing pipelines, combined with higher-level reasoning and planning techniques, that take unstructured data from specific web searches and generate actionable knowledge graphs from it. We expect to process these knowledge graphs to answer questions, generate procedures for accomplishing actions (e.g., replacing a phone's battery), and generate dynamic plans (e.g., travel plans). The concept we are exploring is best summarized as a domain adaptation technique: in very broad terms, we are looking to adapt general knowledge (primarily commonsense knowledge) into a specific knowledge domain so that we can make inferences in that narrower domain.
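To make the knowledge-graph idea concrete, here is a minimal sketch in Python. It is purely illustrative, not the group's actual pipeline: it turns a few unstructured sentences into subject-relation-object triples, stores them as a small graph, and reads facts back out. The sentences, the extraction pattern, and the helper names are all invented for this example.

```python
# Illustrative sketch only: naive triple extraction into a toy knowledge graph.
import re
from collections import defaultdict

# Invented "unstructured" sentences standing in for web-search results.
SENTENCES = [
    "The back cover is held by five screws.",
    "The screws are removed with a T5 screwdriver.",
    "The battery is connected to the logic board.",
]

# Naive pattern: "The <subject> is/are <relation> by|with|to <object>."
PATTERN = re.compile(
    r"^the (.+?) (?:is|are) (\w+) (?:by|with|to) (?:a |the )?(.+?)\.$", re.I
)

def extract_triples(sentences):
    """Extract (subject, relation, object) triples from simple sentences."""
    triples = []
    for sentence in sentences:
        match = PATTERN.match(sentence.strip())
        if match:
            subject, relation, obj = (g.strip().lower() for g in match.groups())
            triples.append((subject, relation, obj))
    return triples

def build_graph(triples):
    """Store triples as an adjacency map: subject -> list of (relation, object)."""
    graph = defaultdict(list)
    for subject, relation, obj in triples:
        graph[subject].append((relation, obj))
    return graph

def facts_about(graph, subject):
    """Answer 'what do we know about <subject>?' by reading its outgoing edges."""
    return [f"{subject} --{relation}--> {obj}" for relation, obj in graph.get(subject, [])]

if __name__ == "__main__":
    graph = build_graph(extract_triples(SENTENCES))
    for fact in facts_about(graph, "screws"):
        print(fact)  # e.g. "screws --removed--> t5 screwdriver"
```

A real pipeline would replace the regex with learned extraction and layer reasoning and planning on top of the graph, but the same basic shape (unstructured text in, queryable structured knowledge out) applies.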