All Posts



I have tried to get the indexes not used in any KO, but I am not getting all the details.

| rest /services/data/indexes
| fields index
| eval index=1 [index=_audit| stats count as accessed by index, search ]
| stats sum(accessed) as accessed, values(index) as index by
| fillnull accessed value=0
| where index=1 AND accessed=0

Total Index: 100
Index Not used in Any Knowledge Object: 25
Index has 0 data last 90 days: 10
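For reference, a minimal sketch of one way to combine the two searches, assuming the REST endpoint exposes the index name in the title field and that tstats is available; the defined/accessed field names are illustrative, and this only covers the "no data in the last 90 days" half (knowledge-object usage would still need its own search):

| rest /services/data/indexes splunk_server=local
| fields title
| rename title AS index
| eval defined=1
| append
    [| tstats count AS accessed WHERE index=* earliest=-90d BY index]
| stats max(defined) AS defined, sum(accessed) AS accessed BY index
| fillnull value=0 defined accessed
| where defined=1 AND accessed=0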
Have you tried not escaping the < and > chars ? I've read somewhere escaping a non-special char might not work here.
For Error 1 (Modular Input): Verify the script exists and is executable. Install Java if missing and ensure it’s in the PATH. Test the script manually and adjust permissions or dependencies as needed.  How do I do this? Can you please guide me... how do I install Java on my AWS Splunk instance?
@Karthikeya For Error 1 (Modular Input): Verify the script exists and is executable. Install Java if missing and ensure it’s in the PATH. Test the script manually and adjust permissions or dependencies as needed.  For Error 2 (KV Store): Check mongod.log and splunkd.log for details. Validate and renew server.pem if expired. Fix permissions or reinitialize KV Store if necessary.  
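For the log check on Error 1, a hedged example search over Splunk's internal index (the component names below are the usual splunkd ones, but verify them in your environment):

index=_internal sourcetype=splunkd log_level=ERROR (component=ModularInputs OR component=ExecProcessor)
| table _time host component _raw

Any script-launch failures for the modular input should show up here around the times the input is scheduled to run.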
Hi @livehybrid, I didn't modify the serverName on my instance. If I search "index=_internal source=*splunkd.log", I see the 2 sources in the Interesting Fields. I had configured the forwarding of the data from the UF and the main instance, both using port 9997. In a real environment, the UF and the server should not be on the same machine, right? Thanks.
I am trying to ingest a CSV file whose headers contain double quotes (") and %. The fields are separated by commas. But after ingestion, if two field names are identical except that one contains # and the other contains %, Splunk merges both of them into one field in the table output. How do I fix this issue? If Splunk doesn't support such CSV headers, I will have to remove those characters before ingesting. Any ideas?
I left out a character.  Try my updated query.
Hi @Karthikeya  I think it would be worth focussing on the KV Store issue first as that might (although might not!) rectify your other issue if the app relies on the KV Store. Have you made any other recent changes to the KV Store or Splunk version? Are there any logs in splunkd.log ($SPLUNK_HOME/var/log/splunk/splunkd.log) which might indicate what the issue with KV Store is? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
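As a starting point for that log check, a hedged search for KV Store related errors in the internal index (component names can vary by Splunk version):

index=_internal sourcetype=splunkd log_level=ERROR (component=KVStore* OR component=Mongo*)
| stats count BY host, component
| sort - count

A spike against a certificate- or mongod-related component would point back at the server.pem and mongod.log checks mentioned in this thread.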
Hi @hazem , first of all, the last row isn't mandatory; it's an old configuration, and if you use it, you should add one row for each server. Anyway, if you configure more than one Indexer, logs are forwarded to all the Indexers, changing destination every 30 seconds using a round robin algorithm for load balancing. Then, if an Indexer isn't available, the Forwarder tries another one; if no Indexers are available, it saves logs in a local cache and forwards them when the connection is established again. Ciao. Giuseppe
We are planning to on-board Akamai platform logs to Splunk. We are following this link to implement the same - SIEM Splunk connector. In the process we have installed this Akamai add-on - Akamai SIEM Integration | Splunkbase. When we go to Settings > Data Inputs as mentioned here - SIEM Splunk connector - we are unable to find this data input - Akamai Security Incident Event Manager API. And we are getting the following error in Splunk after installing the add-on (screenshots: Deployer, Search head). Can you help us with this challenge? We are stuck at "data inputs". I think we need to perform these pre-requisites to get this Akamai add-on (Modular Input) to work. Please help us install Java on our Splunk instance, and confirm whether KV Store is installed and working fine.
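On the KV Store half of that question, one quick check is the server info REST endpoint; a hedged sketch, assuming the kvStoreStatus field is exposed on your version:

| rest /services/server/info splunk_server=local
| fields splunk_server kvStoreStatus version

A kvStoreStatus of "ready" means KV Store is up; "disabled" or "failed" means it needs attention before KV Store dependent add-ons will work.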
Good day. Unfortunately, this did not trigger an alert even after changing the usage value to a lower number when testing it. Thank you, though.
@cpetterborg Can you please help me install Java on our Splunk instance?
Morning, Splunkers! I've been running a dashboard that monitors the performance of the various systems my customer uses, and I recently switched all of my timechart line graphs over to the Downsampled Line Chart because it allows a user to zoom in on a specific time/date range that is already displayed (most of my customer's users aren't Splunk-savvy in the slightest). My customer has users literally all over the country, so our Splunk is set to show all times as UTC by default for every account. The problem is that the Downsampled Line Chart insists on showing everything in local time, regardless of what our account configurations are set to, and I can't find any documentation on how to make it stop (I'm not an admin, so I can't just go into settings and start editing configuration files). Does anybody have any idea how to get it to stop? I'd hate to have to give up the functionality of the chart because it won't show the same times for people on opposite sides of the country, but I'm out of options here.
In terms of further breakdown to the previous answer:

Automatic Failover: If mysplunk_indexer1 goes down, the UF will detect the failure and automatically stop sending data to that indexer.
Continued Forwarding to Available Indexers: The UF will continue forwarding data to mysplunk_indexer2:9997. The forwarder does not stop forwarding entirely but rather distributes the load among the remaining available indexers.
Retry Logic: The UF will periodically attempt to reconnect to mysplunk_indexer1. Once it becomes available again, data will resume being sent to it.
Load Balancing (if applicable): If both indexers were previously receiving traffic in a load-balanced manner (e.g., using autoLBFrequency), the UF would shift all the load to the remaining functional indexer.

Also, you might want to consider the following:

If no indexers are available, events will be queued locally in memory (or on disk if a persistent queue is configured).
Ensure you configure proper connectionTimeout and autoLBFrequency settings to optimize failover behavior.
If useACK=true (for reliable delivery), the UF will queue events until an indexer acknowledges them.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
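A hedged outputs.conf sketch tying these settings together (values are illustrative, not recommendations):

# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997
# Switch destination every 30 seconds (round-robin load balancing)
autoLBFrequency = 30
# Wait for indexer acknowledgement before dropping events from the wait queue
useACK = true
# Seconds to wait before a connection attempt is treated as failed
connectionTimeout = 20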
Hi @hazem

Will the UF continue sending data to both indexers? No, it will only send data to the available indexer (mysplunk_indexer2).

Will the UF detect that mysplunk_indexer1 is unreachable? Yes, the UF will detect the unreachability and automatically adjust its forwarding strategy.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Dear all,

I have the following outputs.conf configuration:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997

[tcpout-server://mysplunk_indexer1:9997]

Could you please clarify the Universal Forwarder (UF) behavior in the event that mysplunk_indexer1 goes down? Will the UF continue sending data to both indexers despite mysplunk_indexer1 being down? Or will the UF detect that mysplunk_indexer1 is unreachable and stop forwarding traffic to it?
Hi @yeahnah @gcusello  I used it in the way below, but the unique user count is not matching. Why do I need to provide a specific JSON? I want to fetch, from all events in the Splunk log, the unique user list for their specific group. A group can be represented as [group 1, group 2] or [group 1]... then fetch the unique user list of [App.Au1, App.Au2] in one row and the unique user list of [App.Au1] in a second row.
Hi @sufs2000  Have a look at the image below, does this help you work out the settings required to have colour dependent on the value?   Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @shabamichae

What do your monitor stanzas currently look like for monitoring these files? Do the logs roll to a "logName.log.1" format (.1 being yesterday)? If so, you may be able to update your existing monitor stanzas to add a whitelist (see https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Monitorfilesanddirectorieswithinputs.conf#:~:text=whitelist%20%3D%20%3Cregular%20expression%3E)

whitelist = <regular expression>
If set, the Splunk platform monitors files whose names match the specified regular expression.

## inputs.conf ##
[monitor:///var/log/*]
index = syslog
sourcetype = example
..etc..
whitelist = .*\.1$

Also check out https://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will