Call me naive, but I would have thought that entering the same search query on, say, both Westlaw and Lexis Advance would return fairly similar results, at least among the cases ranked highest for relevance. After all, shouldn’t the cases that are most relevant to the query be largely the same, regardless of the research platform?
Turns out, the results they deliver vary widely — not just between Westlaw and Lexis Advance, but among several legal research platforms. In fact, in a comparison of six leading research platforms — Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel and Westlaw — there was hardly any overlap in the cases that appeared in the top-10 results returned by each database.
This finding comes out of research performed by Susan Nevelow Mart, director of the law library and associate professor at the University of Colorado Law School, where she teaches advanced legal research and analysis and environmental legal research. Mart has published a draft of her research paper reporting these results, The Algorithm as a Human Artifact: Implications for Legal [Re]Search, and she presented some of her findings in a program I attended at the recent annual meeting of the American Association of Law Libraries.
In my column this week at Above the Law, I go deeper into Mart’s research and findings and discuss the implications for those of us who perform legal research. Read it here: Legal Research Services Vary Widely in Results, Study Finds.