From a functionality perspective, I don't think Splunk will see a difference or care. The underlying SAN storage is still abstracted behind a filesystem. (Of course, I am assuming the SAN volume manager doesn't present anything funky, like an unsupported filesystem.)
That said, any type of abstracted storage provisioning (LVM, thin provisioning, etc.) can introduce I/O latencies you don't necessarily expect. If the logical volume ends up spread all over the physical array, performance can suffer, very much like disk fragmentation on a normal filesystem. Usually the storage system has a tool or function to help clean this up.
To my knowledge, there are no showstoppers or blocking issues with a volume manager such as LVM or Veritas Volume Manager being in charge of the filesystem that hosts your Splunk indexes.
The things to be careful with are the same as with any other I/O-intensive application. For example, if your volume management software allows dynamic volume and filesystem growth, but that operation can have a heavy impact on I/O performance, you might want to stop Splunk while you expand your storage capacity so as not to disrupt indexing.
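As a sketch only, a conservative "stop first, then grow" procedure on LVM could look like the following. The volume group, logical volume, and Splunk paths are hypothetical examples, not real defaults, and the exact resize commands depend on your filesystem:

```shell
# Stop Splunk first so indexing doesn't compete with the resize I/O
# (install path and volume names below are examples only)
/opt/splunk/bin/splunk stop

# Grow the logical volume hosting the index filesystem by 100 GB
lvextend -L +100G /dev/vg_splunk/lv_indexes

# Grow the filesystem to fill the new space
# (resize2fs for ext4 shown; XFS would use xfs_growfs instead)
resize2fs /dev/vg_splunk/lv_indexes

# Restart Splunk once the expansion is complete
/opt/splunk/bin/splunk start
```

Whether you actually need the stop/start depends on how disruptive the grow operation is on your particular array; on many setups an online resize is perfectly safe.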
The same goes for live snapshots, again in case they are disruptive to your I/O performance.
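For instance, an LVM snapshot imposes copy-on-write overhead on every write to the origin volume for as long as the snapshot exists, so it pays to keep its lifetime short. A rough sketch, with hypothetical volume names:

```shell
# Create a snapshot of the index volume; while it exists, each write
# to the origin volume incurs copy-on-write overhead
lvcreate --size 10G --snapshot --name idx_snap /dev/vg_splunk/lv_indexes

# ... run your backup from the snapshot here ...

# Remove the snapshot promptly to stop paying the write penalty
lvremove -f /dev/vg_splunk/idx_snap
```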
That being said, it remains important to adequately manage the underlying topology of the logical devices assigned to Splunk's indexes in order to maintain optimal performance. As a reminder, the ideal topology for Splunk indexes is hardware RAID 10.
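One common way to exploit that topology is to split an index's bucket tiers across different logical volumes in `indexes.conf`, keeping the hot/warm path on the fastest (e.g. RAID 10) storage. A minimal sketch, where the index name and mount points are made up for illustration:

```ini
# indexes.conf -- tiering one index across two logical volumes
# (index name and mount points are examples only)
[my_index]
# Hot/warm buckets see the busiest I/O; keep them on the RAID 10 volume
homePath = /mnt/fast_raid10/splunk/my_index/db
# Cold buckets are mostly sequential reads; slower storage is fine
coldPath = /mnt/bulk/splunk/my_index/colddb
thawedPath = /mnt/bulk/splunk/my_index/thaweddb
```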
From a support perspective, although volume management software solutions are not part of our Quality Assurance cycle, they are supported to host Splunk's indexes as long as they are used in a reasonable manner and the indexes are hosted on supported filesystems.