While automation is not intelligence, the real value of automation, especially software automation, comes from taking the robot out of the human. The global pandemic has shown just how valuable automation has become for companies and governments alike in making sure that operations run smoothly, bottlenecks are avoided, and repetitive tasks are taken away from humans, freeing them up for higher-value work.
However, if you want to see real value from automation, and in particular Robotic Process Automation (RPA), it’s important to know what these bots can and can’t do, and how AI is being applied to help handle more complex tasks. USPTO uses automation and AI to improve operational efficiency and empower its highly skilled Examining Corps, automating various processes to lighten the manual load on its Examiners.
Timothy Goodwin, Deputy Director, Office of Organizational Policy and Governance at the United States Patent and Trademark Office (USPTO), shares how they are leveraging automation and cognitive technology at America’s Innovation Agency. Timothy will also be presenting at an upcoming ATARC CPMAI “Methodologies and Best Practices for Successful RPA Implementation” event on July 21, 2021, 2:00-3:00 p.m. to dig deeper into some of the questions below.
How are you leveraging automation at USPTO?
Timothy Goodwin: The depth and breadth of automation technologies being leveraged within USPTO is vast. Automation is a critical enabler for driving business value. Recently we have used AI/ML to reduce the manual patent classification actions performed by an examiner; RPA to free up valuable time spent performing suspension checks on trademark applications; and virtual Data as-a-Service (vDaaS) to increase the quality of applications in development through on-demand provisioning of test data. All of this has helped propel more and more automation capabilities and is enabling our agency to deliver higher-quality services to the public.
How do you identify which problem area(s) to start with for your automation and cognitive technology projects?
Timothy Goodwin: I am going to narrow this question and focus on RPA. When we first started our RPA program in 2019, we were looking for any USPTO process that could be used to demonstrate capabilities. This started with a “first-in-first-out” model, where the requests being submitted were only helping an individual or a small number of users. Since then, we have evolved our intake process to look more broadly at automation requests and find critical problem areas impacting USPTO business lines. A recent example was developing RPA solutions to help reduce the backlog created by the high volume of trademark applications submitted over the past twelve months.
How do you measure ROI for these sorts of automation, advanced AI and analytics projects?
Timothy Goodwin: Measurements are always based upon the business value derived from the automation’s demonstrated capabilities. This can come in many different forms depending on the solution being implemented. For provisioning of cloud infrastructure, it can be something as simple as creating a routine that terminates idle virtual services when not in use, avoiding unnecessary expenses. For RPA, it can be looking at the number of productivity hours recouped from automating one or more process instances. The key metric is always centered on asking ourselves, “how does this help disseminate and issue timely and high-quality patents and trademarks?”
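As a hypothetical illustration of the kind of cost-saving routine Goodwin describes, the sketch below flags running cloud instances that have stayed idle past a threshold so a scheduled job could stop them. The thresholds, field names, and selection logic here are assumptions for illustration, not USPTO's actual implementation or any specific cloud provider's API.

```python
from datetime import datetime, timedelta

# Assumed policy values, for illustration only.
IDLE_CPU_PERCENT = 5.0            # below this average CPU, consider "idle"
IDLE_WINDOW = timedelta(hours=2)  # must stay idle at least this long

def find_idle_instances(instances, now):
    """Return IDs of running instances that have been idle longer than IDLE_WINDOW.

    Each instance is a dict with 'id', 'state', 'avg_cpu_percent',
    and 'idle_since' (when low CPU was first observed).
    """
    idle = []
    for inst in instances:
        if inst["state"] != "running":
            continue  # stopped instances cost nothing extra; skip them
        if (inst["avg_cpu_percent"] < IDLE_CPU_PERCENT
                and now - inst["idle_since"] >= IDLE_WINDOW):
            idle.append(inst["id"])
    return idle

if __name__ == "__main__":
    now = datetime(2021, 7, 1, 12, 0)
    fleet = [
        {"id": "vm-1", "state": "running", "avg_cpu_percent": 1.2,
         "idle_since": now - timedelta(hours=3)},   # idle long enough
        {"id": "vm-2", "state": "running", "avg_cpu_percent": 42.0,
         "idle_since": now},                        # busy
        {"id": "vm-3", "state": "stopped", "avg_cpu_percent": 0.0,
         "idle_since": now - timedelta(days=1)},    # already stopped
    ]
    print(find_idle_instances(fleet, now))  # → ['vm-1']
```

In practice, a routine like this would pull utilization metrics from the cloud provider's monitoring service and call its stop API on the selected IDs; the decision logic stays the same.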
What are some of the unique opportunities the public sector has when it comes to data and AI?
Timothy Goodwin: In very general terms, the public sector is a steward of, and has access to, vast amounts of very unique data that is largely inaccessible to any other entity in the world. This, of course, refers to the data in its totality, not the views available through open data platforms. There is immense potential in combining these unique data sets with AI to advance research into every single discipline known today. Quite simply, it is boundless. The challenges, on the other hand, span legal, technical, and ethical boundaries. However, I’d like to point back to our responsibilities as data stewards and to ensuring public trust is upheld. For me, this is the fundamental topic that should be addressed when determining how data should be used. Ultimately, the question of how to use data, and for what AI purposes, has to be explicitly defined and vetted before pursuits are made, to ensure we are exceeding the public’s expectations.
How do analytics, automation, and AI work together at the USPTO?
Timothy Goodwin: USPTO data is unique, and with that come unique challenges and opportunities. The three areas are naturally woven together and build upon each other to enable advanced capabilities. Automations help feed our patent and trademark data lakes, where preparations are made to address data quality and security. This, in turn, feeds our AI/ML models, which eventually get rolled out and provide data insights and visualizations to broader groups. All of this helps create a sustainable environment for making data-driven decisions for the agency and ensuring USPTO can continually provide high-quality services.
What are you doing to develop an AI-ready workforce?
Timothy Goodwin: Workforce development within advanced technologies is already a challenge for many federal agencies. At USPTO we are fortunate to have strong leadership within the data science, analytics, and AI space from Scott Beliveau and our new emerging technologies director, Jerry Ma [For additional insights, Jerry Ma presented at a previous AI In Government event, and Scott Beliveau will be sharing insights at the October 2021 AI In Government event]. With support from their teams, they are forging a new path for other USPTO personnel to follow by creating opportunities and allowing innovation to be explored. Enabling focused experimentation within AI that provides strong business value is one of the best tools we can leverage for developing our workforce. In the more practical sense, we have also been growing our workforce through traditional training, and many employees have participated in various levels of AI/ML and advanced analytics courses. [The best-practices approach to doing AI and big data analytics is the CPMAI methodology, which large organizations are increasingly adopting.]
What AI technologies are you most looking forward to in the coming years?
Timothy Goodwin: I am really trying to keep an eye on how AI is evolving in the domain of cybersecurity research and development. There has already been a vast amount of work and success achieved in this area, to the point that virtually any modern antivirus product is utilizing AI for static analysis and trending better with dynamic analysis. What I am most interested in is seeing how AI can “heal” vulnerable or compromised systems in real time. Knowing how vulnerability research is traditionally conducted, there are ample opportunities to utilize AI to prevent a bug from being viably exploited. Recognizing and disseminating AI-driven patching actions before compromise occurs is what I hope matures in the coming years.