All Topics


Every time I search, I get these errors:
Could not load lookup=LOOKUP-cisco_asa_change_analysis
Could not load lookup=LOOKUP-cisco_asa_ids_lookup
Could not load lookup=LOOKUP-cisco_asa_intrusion_severity_lookup
Could not load lookup=LOOKUP-cisco_asa_severity_lookup
How can this be fixed in Splunk Cloud?
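One way to check whether the underlying lookup definitions exist and are shared to the app and role you search from (a sketch only; in Splunk Cloud you also need permission to read the REST endpoint, and the wildcard is an assumption about the add-on's naming):

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="cisco_asa_*"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

If the definitions are missing or restricted, the usual fix is to share the add-on's lookups more widely or adjust their read permissions.
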
Hey guys, we are trying to configure Splunk with S3 and are facing issues. We have a few questions:
1) What should go under "Configure the remote volume"? We have storageType: remote. What does [volume:s3] signify?
2) Do the entries below suffice?
storageType = remote
path = s3://splunk-smartstore/indexes
remote.s3.supports_versioning = false
remote.s3.endpoint = http://<IP-address>/splunk-smartstore
remote.s3.access_key = <Access_key>
remote.s3.secret_key = <secret key>
We keep seeing the following errors:
/opt/splunk/etc/master-apps/_cluster/local]# /opt/splunk/bin/./splunk cmd splunkd rfs -- ls
error: <remote_id> expected
error: operation failed; check log for details
Which log file can help with debugging this?
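For reference, a minimal sketch of how a remote volume and an index usually fit together in indexes.conf (the volume name, index name, bucket path, and endpoint below are placeholders, not your real values): [volume:s3] simply declares a named remote volume, and each SmartStore index points at it through remotePath.

[volume:s3]
storageType = remote
path = s3://splunk-smartstore/indexes
remote.s3.endpoint = http://<IP-address>
remote.s3.access_key = <Access_key>
remote.s3.secret_key = <secret_key>
remote.s3.supports_versioning = false

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:s3/my_index

For the rfs test, errors are normally written to splunkd.log under $SPLUNK_HOME/var/log/splunk/.
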
Hi folks, I am trying to enrich my search with a subsearch in the same time bucket/bin. The search can be found below.
Details:
Main search: looks for 5 or more failed login attempts from an account/user. If a login attempt fails, the userid doesn't show up; however, if it succeeds on a subsequent attempt, the userid shows up in the logs.
Subsearch: looks up the username by using the userid. This username enriches the main search's username field along with the userid.
Two complications:
1. userid is supposed to be unique, but not always, so both the main search and the subsearch should look at the same time frame to produce correct results.
2. Sometimes the subsearch cannot find a username because there was no successful login; in this case I want my main search to still show the result, either without a username or with the username filled with NULL or similar.
Note: I am not sure whether the following approach is proper, but it seems to work until the second complication above occurs. Thanks,

index="useractivity" event=login response.login=failed
| eval temp=split(userid, ":")
| eval urole=mvindex(temp,5)
| bucket _time span=15m
| join type=inner userid
    [ search index="useractivity"
    | eval userid_tmp=split(userid, ":")
    | eval userid=mvindex(userid_tmp, 0), username=mvindex(userid_tmp, 1)
    | bucket _time span=15m
    | stats latest(userid) as userid by username ]
| stats values(src_ip) values(event) count(event) as total by _time user urole userid username
| where total >= 5
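A sketch of one way to handle the second complication, assuming the same field extractions as above: make the join a left join so rows without a matching username survive, join on both userid and the 15-minute bucket so the two sides stay in the same time frame, aggregate the subsearch by userid (rather than by username) so each userid maps to one username, and fill the missing usernames afterwards.

index="useractivity" event=login response.login=failed
| eval temp=split(userid, ":")
| eval urole=mvindex(temp,5)
| bucket _time span=15m
| join type=left userid, _time
    [ search index="useractivity"
    | eval userid_tmp=split(userid, ":")
    | eval userid=mvindex(userid_tmp, 0), username=mvindex(userid_tmp, 1)
    | bucket _time span=15m
    | stats latest(username) as username by userid _time ]
| fillnull value="NULL" username
| stats values(src_ip) values(event) count(event) as total by _time user urole userid username
| where total >= 5
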
Hello! I have a search with timechart where I need to filter on time AFTER the timechart, based on the current time.
I've tried:
search blablabla
| timechart span=1m limit=0 eval(sum(SOM)/sum(VOL)) by VAR
| where earliest=-3m@m latest=@m
but got the error: Error in 'where' command: The operator at 'm@m latest=@m' is invalid.
And:
search blablabla
| timechart span=1m limit=0 eval(sum(SOM)/sum(VOL)) by VAR
| search earliest=-3m@m latest=@m
but got no results.
Does anyone know how to do that? Thank you!
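earliest and latest are time-range modifiers for retrieving events, so neither where nor a mid-pipeline search understands them; after the timechart you can compare _time directly. A sketch of one way to keep only the last three whole minutes:

search blablabla
| timechart span=1m limit=0 eval(sum(SOM)/sum(VOL)) by VAR
| where _time >= relative_time(now(), "-3m@m") AND _time < relative_time(now(), "@m")
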
We have a multi-site installation of Splunk and would like to test whether forwarder_site_failover is working properly. In the forwarders' outputs.conf we have the following:

[indexer_discovery:master1]
pass4SymmKey = $secretstuff$
master_uri = https://yadayada:8089

[tcpout:group1]
indexerDiscovery = master1
useACK = false
clientCert = /opt/splunk/etc/auth/certs/s2s.pem
sslRootCAPath = /opt/splunk/etc/auth/certs/ca.crt

[tcpout]
forceTimebasedAutoLB = true
autoLBFrequency = 30
defaultGroup = group1

As far as the yadayada cluster master goes, we have the following config:

/opt/splunk/etc/apps/clustermaster_base_conf/default/server.conf
[clustering]
(...)
forwarder_site_failover = site1:site2

One thing I was trying to figure out is whether we need to explicitly set site2:site1 as well, or whether the existing configuration is enough to fail over both from site1 to site2 and from site2 to site1. My approach was to cut the connection between the forwarder and the site1 indexers by setting iptables rules on the indexers that DROP the connections from the forwarder:

#e.g. iptables rule
iptables -I INPUT 1 -s <forwarder ip> -p tcp --dport 9997 -j DROP

#forwarder splunkd.log
07-15-2021 16:20:41.729 +0000 WARN TcpOutputProc - Cooked connection to ip=<site1 indexer ip>:9997 timed out

The iptables rules didn't make the forwarder fail over, so I wonder whether the failover process only kicks in when the cluster master loses visibility of the indexers. In a live setup this seems riskier and less flexible. What is the recommended approach to perform this kind of testing?
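If you do want explicit failover in both directions, forwarder_site_failover takes a comma-separated list of site pairs, so a sketch of the cluster master's server.conf (assuming just the two sites) would be:

[clustering]
(...)
forwarder_site_failover = site1:site2,site2:site1

Note that this mapping is applied when the cluster master considers a site down, which is consistent with what you observed: blocking the forwarder-to-indexer connections alone does not trigger it.
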
For non-admin roles, when I navigate to the user web page "Account Settings", it shows "page not found". Is there a way to allow certain roles to access this page? My user role already has the default capabilities, including change_own_password, but I am still not able to access "Account Settings". Thanks in advance.
I want to fetch an availability report for all the network devices that we have in our data center. Requesting helping hands on this platform to help me formulate a query in Splunk. I am enclosing the results that I have fetched from NNMi (Network Node Monitoring Performance Tool); I want similar results from Splunk as well (Node Availability %). Thanks & Regards, Sahil Vaishnavi
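How to compute this depends entirely on what device status data is already indexed, which the post does not say. Purely as a sketch, assuming periodic up/down poll results are indexed with host and status fields (the index, sourcetype, and field names below are all assumptions), availability % per node could be computed like this:

index=network_monitoring sourcetype=device_status
| eval up=if(status="up", 1, 0)
| stats avg(up) as availability by host
| eval availability=round(availability * 100, 2)
| rename host as "Node Name", availability as "Node Availability %"
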
I've got a JSON event that I'd like to tabulate by using `index=myindex | table *`. When I do this, though, it includes some system fields, such as `host`, `index`, `linecount`, `punct`, `source`, `sourcetype`. Does anyone know if there's a way to exclude them without naming them all individually, via a built-in method/variable? e.g. `index=myindex | fields - $SYSTEM_FIELDS$ | table *` Thanks, Henri
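I am not aware of a single built-in variable that covers exactly those fields, but as a sketch you can drop the usual default fields explicitly before the table (the exact list can vary by sourcetype and version, so adjust as needed; removing a field that does not exist is harmless):

index=myindex
| fields - host, index, linecount, punct, source, sourcetype, splunk_server, timestamp, eventtype, date_*
| table *
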
Hi, I have a file server where, every day, backups of the servers are copied to the following path:
/backup/files/
/backup/files/server1/$DATE.zip
/backup/files/server2/$DATE.zip
...
How can I trigger this with Splunk: check that path every day, and whenever one server has not copied its backup files, have Splunk alert me? E.g. the backup file is ready every night at 04:00; every morning at 07:00 check that path, and if there is a directory that does not contain a file created today, alert me. Any ideas? Thanks,
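Splunk can only alert on what it indexes, so one approach (a sketch, with every name below an assumption) is to index a daily listing of the backup tree, for example via a scripted input that runs ls -lR /backup/files into an index called backup_audit with the full file path extracted into a backup_path field, and then schedule an alert at 07:00 that looks for server directories with no file seen since midnight:

index=backup_audit earliest=-7d
| rex field=backup_path "^/backup/files/(?<server>[^/]+)/"
| stats max(_time) as last_backup by server
| where last_backup < relative_time(now(), "@d")
| eval last_backup=strftime(last_backup, "%Y-%m-%d %H:%M:%S")

Trigger the alert when the number of results is greater than zero.
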
If we have logs being pushed to a text file stored on our drive, can Splunk monitor the content of these files and can we search the content of these files?
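Yes, that is exactly what a file monitor input does. A minimal sketch of an inputs.conf stanza on the machine holding the files (the path, index, and sourcetype are placeholders):

[monitor:///var/log/myapp/*.log]
index = main
sourcetype = myapp_logs
disabled = false

Once indexed, the file contents are searchable like any other events, e.g. index=main sourcetype=myapp_logs.
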
How do I do Windows monitoring?
Good afternoon. I have a dashboard with multiple timecharts where I am using a time picker of -7 days to +7 days. The problem is that not all the timecharts end on the same day, because there are no events for future days. Is it possible for the timecharts to always show the future days, even when there are no events for those days? Image attached as an example.
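One trick (a sketch, assuming a one-day span and the +7d end of your time picker; your_index and category are placeholders for your own search) is to append a single empty result at the far edge of the range after the timechart and then fill in the missing buckets, so every panel spans the full window:

index=your_index
| timechart span=1d count by category
| append [| makeresults | eval _time=relative_time(now(), "+7d@d")]
| makecontinuous _time span=1d
| fillnull value=0
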
So, long story short... I am trying to determine the event count by source, which host is producing the most events in that source, and who owns the host (custom_field). Any suggestions on how to accomplish this would be helpful. Thank you. This is what I have tried so far:

| tstats count as events where index=wineventlog sourcetype=* by _time host custom_field source
| search custom_field=unit1 OR custom_field=unit_2 OR custom_field=unit_3

Then I run a stats command to collect the event count and list the event count by the custom_field:

| stats sum(events) as total_events list(events) as event_counts list(source) as source list(host) as host by custom_field

I understand that event_counts is now a string. However, I would like to be able to use these numbers to determine which source is producing the most events for each custom_field. I have tried:

| convert num(event_counts)
| eval num_events = tonumber(event_counts)

but these don't work unless I use | mvexpand event_counts, and that skews the results to the point where they don't make any sense. I want to convert the event_counts field to a number so I can make a chart or a timechart from it as well, to analyze the growth over time. Thanks in advance.
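A sketch that keeps the counts numeric instead of using list() (field names taken from your search): aggregate per source and owner, then sort so the heaviest source per custom_field comes first; keeping _time in the tstats by-clause also lets you feed the same data to timechart for growth over time.

| tstats count as events where index=wineventlog sourcetype=* by _time span=1d host custom_field source
| search custom_field=unit1 OR custom_field=unit_2 OR custom_field=unit_3
| stats sum(events) as events by custom_field source
| sort 0 custom_field -events
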
When I try to create entities using a search in Splunk ITSI, it throws the error below and the entity load fails.
ERROR: KeyError at "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/csv_import/itoa_bulk_import_entity.py", line 172 : 'abcde servers : os'
"abcde servers : os" happens to be the old title of an existing service; the service was initially named "abcde servers : os" and now has a different name. I am not sure if this service is somehow related to the error thrown by ITSI while importing entities. Can anyone help in fixing this error?
I want to see any failed job, ad-hoc and scheduled. For instance, I was creating a new search command, and it failed a lot until I got it right. I expect to see the same errors I see in the web search in the logs:
| rest /servicesNS/-/-/search/jobs shows a handful over 4 hours; there were far more than that.
_audit shows plenty of failed searches, but not the reason.
_internal doesn't show anything useful.
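A sketch of one way to surface the underlying reasons from splunkd's own logging (the component names vary by error type, so this simply groups whatever errors occurred in the window you care about and shows a sample message for each):

index=_internal sourcetype=splunkd log_level=ERROR earliest=-4h
| stats count latest(_raw) as sample_message by component
| sort - count
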
I have a user who is asking me to look at the file hashes of every file that comes into Splunk across today and yesterday. I can compare one just fine:

index=my_index RuleName="Rule_name" FileName="file.exe" earliest="06/11/2021:00:00:00" latest="06/11/2021:24:00:00"
| rename FileHash as "todays_hash"
| append
    [ search index=my_index RuleName="Rule_name" FileName="file.exe" earliest="06/12/2021:00:00:00" latest="06/12/2021:24:00:00"
    | rename FileHash as "yesterdays_hash"]
| stats values(*) as * by FileName
| eval description=case(todays_hash=yesterdays_hash,"Hash has not changed", todays_hash!=yesterdays_hash,"Hash has changed")
| table FileName description todays_hash yesterdays_hash

This makes a table showing the two hashes and a message telling me whether the hash has changed. Now, is there a way to run this through foreach or something that can do this for the whole list of file names? Something like:

index=my_index RuleName="Rule_name"
| stats values
| foreach FieldName
| append
    [ search index=my_index RuleName="Rule_name" FileName="file.exe" earliest="06/12/2021:00:00:00" latest="06/12/2021:24:00:00"
    | rename FileHash as "yesterdays_hash"]
| stats values(*) as * by FileName
| eval description=case(todays_hash=yesterdays_hash,"Hash has not changed", todays_hash!=yesterdays_hash,"Hash has changed")
| table FileName description todays_hash yesterdays_hash
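A sketch that compares every FileName in one pass instead of append/foreach: search both days at once, label each event by day, and pivot with chart so the two hashes land in separate columns. (Note it maps the current day to todays_hash and the previous day to yesterdays_hash, which is the opposite of the labeling in your example, so adjust if you really want it the other way round.)

index=my_index RuleName="Rule_name" earliest=-1d@d latest=now
| eval day=if(_time < relative_time(now(), "@d"), "yesterdays_hash", "todays_hash")
| chart values(FileHash) over FileName by day
| eval description=case(isnull(todays_hash) OR isnull(yesterdays_hash), "Only seen on one day", todays_hash==yesterdays_hash, "Hash has not changed", true(), "Hash has changed")
| table FileName description todays_hash yesterdays_hash
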
Hi, I want to build a bar chart in JS that shows the anomaly_count for each data_source, but I also want to keep the database_id field to be used in the drilldown. Using the search query below, I get a chart (image attached) where database_id is also plotted. How can I hide the database_id field in the chart but use it as a key to drill down to another dashboard?

index="assets_py" asset_type=database
| fields data_source, anomaly_count, database_id
| fields - _time _cd _bkt _indextime _raw _serial _si _sourcetype

This is my JS code for the drilldown:

anomalycountchart.on("click", function(e) {
    e.preventDefault();
    tokenSet.set("databaseID_tok", "");
    utils.redirect("anomaly?databaseID_tok="+e.data['row.database_id']);
});

Thank you in advance!
Hi, I am displaying a token that refreshes every 10 seconds, but now that I have added a base search, the token flicks to $result.TIME$ on the screen and then back to the value. How do I use a base search and not have the token flick? I have put both examples below, one working (no base search) and one not working. I have tried to change finalized to done, but nothing changed. In the image we can see one working and one displaying the token (only for a second, until the search finishes, but it does not look nice).

Not working (with base search):
<search base="basesearch_MAIN">
  <!-- Displays the last time a pack entered Splunk - this needs to be updated to use the base search of the main search -->
  <query>| rename _time as TIME | eval TIME=strftime(TIME,"%m/%d/%y %H:%M:%S") | table TIME | tail 1</query>
  <finalized>
    <set token="Token_TIME_OF_LAST_DATA">$result.TIME$</set>
  </finalized>
</search>

NO BASE SEARCH - this does not jump on the screen:
<search>
  <query>| mstats max("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=MONITORING_MVP span=10s | rename _time as TIME | eval TIME=strftime(TIME,"%m/%d/%y %H:%M:%S") | table TIME | tail 1</query>
  <earliest>-1m</earliest>
  <latest>now</latest>
  <finalized>
    <set token="Token_TIME_OF_LAST_DATA1">$result.TIME$</set>
  </finalized>
  <refresh>10s</refresh>
</search>
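A sketch of one thing to try (not verified against your dashboard): move the token update into a <done> handler and only set the token when the post-process actually returned a row, so the literal $result.TIME$ placeholder is never written while a refresh is still running.

<search base="basesearch_MAIN">
  <query>| rename _time as TIME | eval TIME=strftime(TIME,"%m/%d/%y %H:%M:%S") | table TIME | tail 1</query>
  <done>
    <condition match="'job.resultCount' &gt; 0">
      <set token="Token_TIME_OF_LAST_DATA">$result.TIME$</set>
    </condition>
  </done>
</search>
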
Hi, I am trying to return results if an item in the array has both values set to specific values, i.e. bu = "blob" and disp = "enforce" on the same array item. However, my search seems to match across items.

| makeresults
| eval _raw ="{ \"sp_v\":[ {\"bu\":\"blob\",\"disp\":\"enforce\"}, {\"bu\":\"inline\",\"disp\":\"report\"} ] }"
| spath
| search sp_v{}.bu=blob AND sp_v{}.disp=report

This returns a result because the first item has 'blob' and the second has 'report'; I would not expect any results from this search. Would appreciate any help. Kind regards, Maurice
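A sketch of one way to evaluate both conditions against the same array element: pull each element of sp_v out as its own multivalue entry, expand to one row per element, and parse each element on its own before filtering. With the sample data this returns nothing for bu=blob AND disp=report, and returns the first element for bu=blob AND disp=enforce.

| makeresults
| eval _raw ="{ \"sp_v\":[ {\"bu\":\"blob\",\"disp\":\"enforce\"}, {\"bu\":\"inline\",\"disp\":\"report\"} ] }"
| spath path=sp_v{} output=item
| mvexpand item
| spath input=item
| search bu=blob AND disp=report
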
Hello, I am trying to change the cron_schedule of saved searches/alerts by calling the REST API from a bash script. I am reading the cron_schedule, search title, and app name from a CSV file. The curl commands work fine for changing the cron_schedule of all the private searches/alerts, but in the case of global searches/alerts, it makes a private copy of the global search and changes the cron_schedule of that copy, not the original one. I want to change the schedule of both local and global searches/alerts without creating a private copy of the global one.

#!/bin/bash
INPUT=data.csv
OLDIFS=$IFS
IFS=','
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
echo "-----------------------------------------------------" >> output.txt
while read app cron search_name
do
  SEARCH=${search_name// /%20}
  QUERY="https://localhost:8089/servicesNS/admin/$app/saved/searches/$SEARCH"
  echo $QUERY >> output.txt
  echo -e "\n---------------------------------------------------------\n"
  echo -e "---Search Name-->$search_name"
  echo -e "---Rest API URI-->$QUERY"
  curl -i -k -u <admin_user>:<password> $QUERY -d cron_schedule=$cron -d output_mode=json >> response.txt
done < $INPUT
IFS=$OLDIFS
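One thing to try (a sketch, assuming standard REST namespace behaviour): app- and globally-shared knowledge objects are owned by the user "nobody", so addressing them through /servicesNS/admin/... can end up writing a private copy in the admin user's context instead. Building the URI with nobody as the owner targets the shared object directly:

QUERY="https://localhost:8089/servicesNS/nobody/$app/saved/searches/$SEARCH"
curl -i -k -u <admin_user>:<password> "$QUERY" -d cron_schedule="$cron" -d output_mode=json >> response.txt
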