An excellent article on PlagiarismToday.
As a blogger, feed scraping is one of my pet peeves. It irks me no end that sploggers use automated tools to copy my copyrighted content from my site to sites that exist solely to attract clicks on AdSense and other ads.
Jonathan Bailey likely feels the same way. He writes about the topic regularly in his blog, providing well-researched and insightful commentary to help understand and fight the problem.
From his recent article on PlagiarismToday, “Using Creative Commons to Stop Scraping”:
Many sites, including this one, have expressed concerns that CC licenses may be encouraging or enabling scraping.
The problem seems to be straightforward. If a blog licenses all of their content under a CC license, then a scraper that follows the terms of said license is just as protected as a human copying one or two works….
However, after talking with Mike Linksvayer, the Vice President of Creative Commons, I’m relieved to say that is not the case. CC licenses have several built-in mechanisms that can prevent such abuse.
In fact, when one looks at the future of RSS, it is quite possible that using a CC license might provide better protection than using no license at all.
The article then goes on to explain what a Creative Commons license is and what it requires of the licensee. As Jonathan explains, the automation tools that sploggers use simply cannot meet all of the requirements of a CC license, thus putting the sploggers in clear violation of the license terms.
If you’ve been wondering about copyright as it applies to your blog or Web site, be sure to check out this article. While you’re at PlagiarismToday, poke around a bit. I think you’ll find plenty of other good material to help you understand copyright and what you can do when your rights are violated.