Shortlisting Appropriate Search Results – Can IP Intelligence Help?

Published on 21 Jul, 2016



Looking for the right prior art in a sea of patents is as drawn-out as it is dreary.

It’s about time we tapped Big Data and AI to cut to the chase.

In a previous post, we mused about how artificial intelligence could make patent searches easier. If Big Data and AI in IP analytics can really help ask the right questions, perhaps they could also help sift through the right answers.

If you’re someone who dabbles in intellectual property research or advisory for a living, you’ve got over a hundred patent databases to sift through.

In an effort to set themselves apart from the herd, most patent databases try to lure researchers through features like semantic search, context search, cluster search, and relevancy sorts, among others.

Most of them are pretty nifty at finding you reams of data.

Some of them may even help you sort them, with a semblance of sanity.

All of them fall short on the real grunt work, however: figuring out which of those search results matter the most.

Every IP researcher longs for the day when they won’t have to deep-dive into the results their query returns. If the most appropriate results would, somehow, bubble up to the surface all on their own, it’d save us some trouble; and a whole lot of time.

Seeing the most appropriate hits at the top of your results pile is, to an IP researcher, what seeing Santa Claus is to a five-year-old.

It’d be a dream come true alright.

To be honest, I’m surprised it’s not already happening.

Big Data analytics has been around for a while now. It’s certainly matured enough to be able to pull something like that off. Most patent databases use pretty run-of-the-mill information retrieval techniques, usually based on keyword matching and indexing. Some advanced databases use patent-specific sections such as title or abstract, while others are more adept at prioritizing keyword hits from some sections above the rest. Some even go as far as high-level patent classification. All these improved systems can handle your query processing faster and better than ever before.
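To get a feel for what that amounts to under the hood, here is a minimal Python sketch of section-weighted keyword scoring, the kind of retrieval described above. The field names, weights, and sample records are purely illustrative, not any particular database's implementation.

```python
from collections import Counter

# Hypothetical section weights; real databases tune these very differently.
FIELD_WEIGHTS = {"title": 3.0, "abstract": 2.0, "description": 1.0}

def tokenize(text):
    return [t.lower().strip(".,;:()") for t in text.split()]

def keyword_score(patent, query_terms):
    """Count query-term hits in each section, weighted by how much that section matters."""
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        counts = Counter(tokenize(patent.get(field, "")))
        score += weight * sum(counts[term] for term in query_terms)
    return score

# Usage: rank a toy corpus against a query.
patents = [
    {"id": "A", "title": "Battery cooling system", "abstract": "A cooling loop for lithium cells.", "description": "..."},
    {"id": "B", "title": "Solar charger", "abstract": "Charging circuit for panels.", "description": "..."},
]
query = tokenize("battery cooling")
ranked = sorted(patents, key=lambda p: keyword_score(p, query), reverse=True)
print([p["id"] for p in ranked])
```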

It’s still not good enough.

There’s a lot more that your search tool and database could do for you.

Most patent searches are about identifying anything out there that may be similar to your invention or idea. A "new invention" is all about solving a problem that hasn't been solved yet. Any invention described in a patent database generally elaborates the problem to be solved as well as the solutions the invention provides. If the database entries and the patent document you're searching with both carry the same general fields, finding a match should be a simple matter of tallying their problem and solution fields, right?

Wrong.

Searching through patents is usually a harrowing affair simply because these two key aspects — problem and solution — aren’t always straightforward. 

Usually, the “Background” section in a patent contains the problem that it’s supposed to address, whereas the “Summary” section contains the solution (or advantages) it brings to the table. There’s also an “Abstract” section that contains briefs on the background of the problem as well as a description of the solution provided.

When the lines get blurry like that, you’re going to need time to make sense of what’s what. You’ll also need a lot of it. That, or some serious help.

That’s where Big Data analytics and AI come in.

A tandem system could analyze those relevant sections, formulating standardized “Problems” and “Solutions” pertinent to each patent.

To begin with, we could split patents by a specific domain (say, an IPC class) and apply Big Data analytics and heuristic algorithms to specific sections of that patent subset. Dedicated software or tools could do that. A tool could, for instance, scan an entire patent document to locate keywords such as problem, drawback, background, and prior art, and intelligently summarize the sentences around those words to give you a problem section for the patent. A similar approach with words like solution, solved, or advantages can then help deduce the solution section.
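Here is a rough sketch of what that keyword-cued extraction might look like in Python. It assumes the patent is available as plain text, and both the cue-word lists and the keep-the-whole-sentence heuristic are crude stand-ins for the smarter summarization a real tool would need.

```python
import re

# Cue words mentioned above; these lists and the sentence-window heuristic are illustrative.
PROBLEM_CUES = {"problem", "drawback", "disadvantage", "background", "prior art"}
SOLUTION_CUES = {"solution", "solved", "advantage", "advantages"}

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract_around_cues(text, cues):
    """Keep sentences that mention a cue word - a crude stand-in for real summarization."""
    return " ".join(s for s in split_sentences(text) if any(c in s.lower() for c in cues))

def problem_and_solution(patent_text):
    """Deduce standardized 'problem' and 'solution' fields from raw patent text."""
    return {
        "problem": extract_around_cues(patent_text, PROBLEM_CUES),
        "solution": extract_around_cues(patent_text, SOLUTION_CUES),
    }
```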

Once that’s done, your tool can compare the specific problem and solution that you’re looking into with the deduced problems and solutions that your tool readied — figuring out and shortlisting the most appropriate results.
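A minimal sketch of that comparison-and-shortlisting step, using a crude word-overlap score as the matching metric; any real tool would use something far smarter, and the field names and cutoff here are assumptions for illustration only.

```python
def jaccard(a, b):
    """Crude word-overlap similarity between two short texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa or sb else 0.0

def shortlist(query_problem, query_solution, candidates, top_n=10):
    """Rank candidate patents by combined problem + solution overlap with the query."""
    scored = sorted(
        candidates,  # each candidate carries precomputed 'problem' and 'solution' text
        key=lambda p: jaccard(query_problem, p["problem"]) + jaccard(query_solution, p["solution"]),
        reverse=True,
    )
    return scored[:top_n]
```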

To whittle down that list further, advanced semantic techniques and natural language processing can then be applied. Of course, there could be other modalities, logic, and sophistications in play while implementing this approach. Given what Big Data analytics and AI are currently capable of, this looks within reach.
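To illustrate one such refinement, here is a sketch that re-ranks the shortlist using TF-IDF vectors and cosine similarity via scikit-learn. It is just one of many possible techniques and stands in for whatever semantic machinery a production tool would actually use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(query_text, shortlisted):
    """Re-rank shortlisted patents by TF-IDF cosine similarity to the query text."""
    docs = [p["problem"] + " " + p["solution"] for p in shortlisted]
    matrix = TfidfVectorizer(stop_words="english").fit_transform([query_text] + docs)
    sims = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return [shortlisted[i] for i in sims.argsort()[::-1]]
```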

An intelligent tool that sorts stockpiles of data to give us our best bets, you say?

Boy, something like that could really shave off the time we take to hit pay dirt. Precious time that we’d otherwise waste sifting through the noise.

While it’ll be a tall order to expect 100% accuracy from some fledgling AI, a functional system like that could sure help take a load off.
