On December 29, 2015, in this Google Hangout, John Mueller said that if you use an OnHover event on your website to turn plain text into a link, GoogleBot will most likely not identify that link.
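To make the pattern concrete, here is a minimal sketch of what such a hover-revealed link might look like, assuming jQuery is loaded; the class name, data attribute, and URL are hypothetical:

```html
<!-- Initial markup: plain text, no <a> tag for GoogleBot to find -->
<span class="hover-link" data-href="https://example.com/partner">Partner page</span>

<script>
// On the first mouseenter, swap the plain text for a real link.
// GoogleBot renders the page once and never hovers, so it only
// ever sees the <span> above, never the generated <a>.
$('.hover-link').one('mouseenter', function () {
  var $span = $(this);
  $span.replaceWith(
    $('<a>', { href: $span.data('href'), text: $span.text() })
  );
});
</script>
```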
This is because GoogleBot doesn't move a mouse over the text of the page it is crawling, and thus never fires the event that transforms that text into a link. To quote the question from the hangout:
We’ve tested reversed links with jQuery. Would this be considered cloaking, since when the link is hovered over by a real visitor, the real link is revealed?
John responded with:
What would happen in a case like that is that we would probably not pick up those links because GoogleBot isn’t going to hover over every part of the page. It will pull out the page, render it once, like a browser; it is not going to interact with the page to see what is actually going to happen when you do physical things.
If you need those links to be found by GoogleBot, then make sure we can find them when we load the page. If you just want to make them available for your users, then sure, I think that might be an option. I think in most cases you wouldn’t want to do this.
And if you are having problems with scrapers, then I’d try to find something different to kind of attack that more directly than to try to obfuscate the links like this which could end up causing more problems for your website in search than the scrapers anyway.
So there you have it: don’t hide your links behind JavaScript events unless you have a really special case. GoogleBot will most likely not identify them, and they might even get you into trouble.
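Following John’s advice to make links findable when the page loads, if the hover behavior is only cosmetic, one safer approach is to keep a real anchor in the initial HTML and handle the visual reveal with CSS. A small sketch, with hypothetical class names:

```html
<!-- The href is in the initial HTML, so GoogleBot finds it on its single render -->
<a class="subtle-link" href="https://example.com/partner">Partner page</a>

<style>
/* Looks like plain text until hovered, but the link itself is never hidden */
.subtle-link { color: inherit; text-decoration: none; }
.subtle-link:hover { text-decoration: underline; }
</style>
```

The difference is that the crawlable information (the href) is present on first load, and only the styling reacts to the hover.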