All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How to do Windows monitoring?
Good afternoon. I have a dashboard with multiple timecharts where I am using a time picker of -7 days to +7 days. The problem is that not all the timecharts end on the same day, because there are no events for future days. Is it possible for the timecharts to always show the future days, even when there are no events for those days? Image as an example:
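A minimal sketch of one way to pad a timechart out to the +7 day edge (the index name and the daily span here are assumptions, not from the post): append a single zero-count scaffold row at the far edge of the window so every daily bucket gets drawn, then fill the empty buckets with zero.

index=my_index earliest=-7d@d latest=+7d@d
| timechart span=1d count
| append [| makeresults | eval _time=relative_time(now(), "+7d@d"), count=0]
| timechart span=1d sum(count) as count
| fillnull value=0 count

An alternative with the same intent is | makecontinuous _time span=1d after the first timechart, followed by fillnull.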
So, long story short... I am trying to determine the event count by source, which host is producing the most events in that source, and who owns the host (custom_field). Any suggestions on how to accomplish this would be helpful. Thank you. This is what I have tried so far:

| tstats count as events where index=wineventlog sourcetype=* by _time host custom_field source
| search custom_field=unit1 OR custom_field=unit_2 OR custom_field=unit_3

Then I run a stats command to collect the event count and list the event count by the custom_field:

| stats sum(events) as total_events list(events) as event_counts list(source) as source list(host) as host by custom_field

I understand that event_counts is now a multivalue string field. However, I would like to be able to use these numbers to determine which source is producing the most events for each custom_field. I have tried:

| convert num(event_counts)
| eval num_events = tonumber(event_counts)

But these don't work unless I use | mvexpand event_counts, and that skews the results to where they don't make any sense. I want to convert the event_counts field to a number so I can make a chart or a timechart from it as well and analyze the growth over time. Thanks in advance.
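A hedged sketch of one way to keep the counts numeric: aggregate by both custom_field and source instead of collecting list() strings, then pick the top source per custom_field (field and index names are taken from the post; the exact grouping you want is an assumption):

| tstats count as events where index=wineventlog sourcetype=* by _time host custom_field source
| search custom_field=unit1 OR custom_field=unit_2 OR custom_field=unit_3
| stats sum(events) as events by custom_field source
| eventstats sum(events) as total_events by custom_field
| sort custom_field -events
| dedup custom_field

Because events stays a single numeric field per row, it can feed a chart or timechart directly without tonumber or mvexpand.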
When I am trying to create entities using a search in Splunk ITSI, it throws the below error and the entity load fails.

ERROR: KeyError at "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/csv_import/itoa_bulk_import_entity.py", line 172 : 'abcde servers : os'

"abcde servers : os" happens to be the old title of an existing service. The service was initially named "abcde servers : os " and now has a different name. I am not sure whether this service is somehow related to the error thrown by ITSI while importing entities. Can anyone help in fixing this error?
I want to see any failed job, both ad-hoc and scheduled. For instance, I was creating a new search command, and it failed a lot until I got it right. I expect to see the same error I see in the web search somewhere in the logs. | rest /servicesNS/-/-/search/jobs shows only a handful of jobs over 4 hours; there were far more than that. _audit shows plenty of failed searches, but not the reason. _internal doesn't show anything useful.
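For the scheduled side, a hedged starting point is the scheduler log in _internal (a sketch only; it does not cover ad-hoc searches, and the status values can vary by version):

index=_internal sourcetype=scheduler
| stats count by savedsearch_name, status
| where status!="success"

Ad-hoc failure details generally have to be dug out of the per-job search.log in the dispatch directory, or correlated from _audit by search_id.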
I have a user that is asking me to look at the file hashes of every file that comes into Splunk across today and yesterday. I can compare one just fine:

index=my_index RuleName="Rule_name" FileName="file.exe" earliest="06/11/2021:00:00:00" latest="06/11/2021:24:00:00"
| rename FileHash as "todays_hash"
| append [ search index=my_index RuleName="Rule_name" FileName="file.exe" earliest="06/12/2021:00:00:00" latest="06/12/2021:24:00:00" | rename FileHash as "yesterdays_hash"]
| stats values(*) as * by FileName
| eval description=case(todays_hash=yesterdays_hash,"Hash has not changed", todays_hash!=yesterdays_hash,"Hash has changed")
| table FileName description todays_hash yesterdays_hash

This makes a table showing the two hashes and a message telling me whether the hash has changed. Now, is there a way to run this through foreach or something similar for the whole list of file names? Something like:

index=my_index RuleName="Rule_name"
| stats values
| foreach FieldName
| append [ search index=my_index RuleName="Rule_name" FileName="file.exe" earliest="06/12/2021:00:00:00" latest="06/12/2021:24:00:00" | rename FileHash as "yesterdays_hash"]
| stats values(*) as * by FileName
| eval description=case(todays_hash=yesterdays_hash,"Hash has not changed", todays_hash!=yesterdays_hash,"Hash has changed")
| table FileName description todays_hash yesterdays_hash
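A hedged sketch of one way to do this for every file in a single search, without per-file appends: pull the whole two-day window, label each event as today or yesterday, and compare per FileName (field names come from the post; the day labelling is an assumption):

index=my_index RuleName="Rule_name" earliest=-1d@d latest=now
| eval period=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(eval(if(period="today", FileHash, null()))) as todays_hash values(eval(if(period="yesterday", FileHash, null()))) as yesterdays_hash by FileName
| eval description=case(todays_hash=yesterdays_hash, "Hash has not changed", todays_hash!=yesterdays_hash, "Hash has changed")
| table FileName description todays_hash yesterdays_hash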
Hi, I want to build a bar chart that shows the anomaly_count for each data_source in JS. But I also want to keep the database_id field to be used in the drilldown. Using the search query below, I got a chart like the attached screenshot, where database_id is also counted. How can I hide the database_id field in the chart but use it as a key to drill down to another dashboard?

index="assets_py" asset_type=database
| fields data_source, anomaly_count, database_id
| fields - _time _cd _bkt _indextime _raw _serial _si _sourcetype

This is my JS code for the drilldown:

anomalycountchart.on("click", function(e) {
    e.preventDefault();
    tokenSet.set("databaseID_tok", "");
    utils.redirect("anomaly?databaseID_tok="+e.data['row.database_id']);
});

Thank you in advance!
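One hedged option to try (an assumption that the SplunkJS ChartView honours the same charting.* properties as Simple XML charts, which is not confirmed here): keep database_id in the search results but hide it from the rendered chart with the fieldHideList option, so it is still present in the row data for the click handler.

// assumes ChartView is required from splunkjs/mvc/chartview; ids are hypothetical
var anomalycountchart = new ChartView({
    id: "anomalycountchart",
    managerid: "anomaly_search",            // hypothetical search manager id
    type: "bar",
    "charting.fieldHideList": '["database_id"]',
    el: $("#anomalycountchart")
}).render();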
Hi, I am displaying a token that refreshes every 10 seconds, but now that I have added a base search the token flicks to $result.TIME$ on the screen and then back to the value. How do I use a base search and not have the token flick? I have put both examples below, one not working (with base search) and one working (no base search). I have tried changing finalized to done, but nothing changed. In the image you can see one working and one displaying the raw token (only for a second until the search finishes, but it does not look nice).

WITH BASE SEARCH - this one flicks:

<search base="basesearch_MAIN">
  <!-- Displays the last time data entered Splunk - this needs updating to use the base search off the main search -->
  <query>| rename _time as TIME | eval TIME=strftime(TIME,"%m/%d/%y %H:%M:%S") | table TIME | tail 1</query>
  <finalized>
    <set token="Token_TIME_OF_LAST_DATA">$result.TIME$</set>
  </finalized>
</search>

NO BASE SEARCH - this does not jump on the screen:

<search>
  <query>| mstats max("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=MONITORING_MVP span=10s | rename _time as TIME | eval TIME=strftime(TIME,"%m/%d/%y %H:%M:%S") | table TIME | tail 1</query>
  <earliest>-1m</earliest>
  <latest>now</latest>
  <finalized>
    <set token="Token_TIME_OF_LAST_DATA1">$result.TIME$</set>
  </finalized>
  <refresh>10s</refresh>
</search>
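One hedged workaround (a sketch under the assumption that the flicker comes from the label rendering before the token is set, not a confirmed fix): only set the token when the post-process search is done, and hide the element that displays it until the token exists, so the raw $result.TIME$ text never shows.

<done>
  <set token="Token_TIME_OF_LAST_DATA">$result.TIME$</set>
</done>

<html depends="$Token_TIME_OF_LAST_DATA$">
  <p>Last data received: $Token_TIME_OF_LAST_DATA$</p>
</html>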
Hi, I am trying to return results only if a single item in the array has both values set to specific values, i.e. bu = "blob" and disp = "enforce" on the one array item. However, my search seems to match across items.

|makeresults
|eval _raw ="{ \"sp_v\":[ {\"bu\":\"blob\",\"disp\":\"enforce\"}, {\"bu\":\"inline\",\"disp\":\"report\"} ] }"
| spath
| search sp_v{}.bu=blob AND sp_v{}.disp=report

This returns a result because the first item has 'blob' and the second has 'report'. I would not expect any results from this search. Would appreciate any help. Kind Regards, Maurice
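A hedged sketch of one per-item approach: zip the two multivalue fields together so each array element stays paired, expand, and then filter on the combined value (this relies on spath returning the two fields in the same element order, which it normally does):

| makeresults
| eval _raw ="{ \"sp_v\":[ {\"bu\":\"blob\",\"disp\":\"enforce\"}, {\"bu\":\"inline\",\"disp\":\"report\"} ] }"
| spath
| eval pair=mvzip('sp_v{}.bu', 'sp_v{}.disp', "|")
| mvexpand pair
| where pair="blob|enforce"

With pair="blob|report" this returns nothing, which is the behaviour you were expecting from the original search.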
Hello, I am trying to change the cron_schedule of saved searches/alerts by calling the REST API from a bash script. I am reading cron_schedule, search title, and app name from a CSV file. The curl commands work fine for changing cron_schedule on all the private searches/alerts, but in the case of globally shared searches/alerts, the call creates a private copy of the global search and changes the cron_schedule of the copy, not the original. I want to change the schedule of both local and global searches/alerts without creating a private copy of the global one.

#!/bin/bash
INPUT=data.csv
OLDIFS=$IFS
IFS=','
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
echo "-----------------------------------------------------" >> output.txt
while read app cron search_name
do
    SEARCH=${search_name// /%20}
    QUERY="https://localhost:8089/servicesNS/admin/$app/saved/searches/$SEARCH"
    echo $QUERY >> output.txt
    echo -e "\n---------------------------------------------------------\n"
    echo -e "---Search Name-->$search_name"
    echo -e "---Rest API URI-->$QUERY"
    curl -i -k -u <admin_user>:<password> $QUERY -d cron_schedule=$cron -d output_mode=json >> response.txt
done < $INPUT
IFS=$OLDIFS
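One hedged thing to try (an assumption about how your globally shared objects are owned, not a confirmed fix): shared knowledge objects are typically owned by nobody rather than by the calling user, so addressing the endpoint under that owner may update the shared object in place instead of cloning a private copy:

QUERY="https://localhost:8089/servicesNS/nobody/$app/saved/searches/$SEARCH"
curl -k -u <admin_user>:<password> "$QUERY" -d cron_schedule="$cron" -d output_mode=json

If the object is owned by a specific user instead, substituting that owner for nobody in the URI is the equivalent move.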
High CPU utilization observed for the splunkd and python3.7 processes on a Splunk HF after upgrading Splunk Enterprise from 7.x to 8.1.4. Any help would be appreciated. Thank you.
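A hedged starting point for narrowing down which process is burning the CPU (a sketch assuming the _introspection index is populated on the heavy forwarder; the host value is a placeholder and field names can vary slightly by version):

index=_introspection host=<your_hf> sourcetype=splunk_resource_usage component=PerProcess
| timechart span=5m max(data.pct_cpu) by data.process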
I've almost created a framework to update Splunk configuration items for Search Heads (transforms, props, savedsearches, etc.) and to create new apps via the Splunk REST API. This works well on a standalone SH and in a SH cluster. Does anyone know if there are restrictions or capability limits in place for the Splunk Cloud offering? i.e., in the Cloud offering: - Can I create a new app via the REST API? - Can I create/modify configuration items remotely?
Hi All, we are trying to install the ServiceNow Security Operations add-on for Splunk, and after we add in the required details, including the password, we cannot locate where the password is being stored. We were expecting a passwords.conf to be created with the password encrypted, but we are not seeing anything in:
/opt/splunk/etc/apps/TA-ServiceNow-SecOps/default
or in
/opt/splunk/etc/apps/TA-ServiceNow-SecOps/local
ServiceNow Security Operations Addon | Splunkbase
We do have a sn_sec_instance.conf created in /local, but it only lists the URL of our ServiceNow instance and the username. Thanks
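A hedged way to check whether the credential went into Splunk's secure credential store rather than a plain .conf on disk (assuming your role is allowed to read the passwords endpoint):

| rest /servicesNS/-/-/storage/passwords splunk_server=local
| search eai:acl.app="TA-ServiceNow-SecOps"
| table title eai:acl.app username realm

If the add-on stores its secret this way, the entry shows up here, usually backed by an encrypted passwords.conf in whichever app context the credential was saved under, rather than in sn_sec_instance.conf.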
Hi, I'm running the below search on Splunk Enterprise to get traffic logs from Fortigate firewalls:

index="fortinet" "devname=" "xxxxx-xxxxxx" "vd=" "xxx-xxxxx" policyid=5 action=accept
| stats count by srcip, dstip, dstport, service, action, date, time, policyid
| dedup srcip dstip dstport service consecutive=true
| sort 0 field

This gives me all TCP and UDP traffic, which I can then download and filter in a .csv, but it doesn't pick up ICMP traffic (specifically ICMP type 8). I have to run a separate search to get just ICMP, as below:

index="fortinet" "devname=" "xxxxx-xxxxxx" "vd=" "xxx-xxxxx" policyid=5 action=accept
| stats count by srcip, dstip, service, action, date, time, policyid
| dedup srcip dstip service consecutive=true
| sort 0 field

It seems that because ICMP has no dstport, the search needs adjusting. What I need is a search that will return all traffic, i.e. TCP, UDP and ICMP. Please advise? Naz
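A hedged sketch of one way to keep ICMP in the same result set: give events that have no dstport a placeholder value before the stats, so the group-by does not silently drop them (the placeholder value itself is arbitrary):

index="fortinet" "devname=" "xxxxx-xxxxxx" "vd=" "xxx-xxxxx" policyid=5 action=accept
| fillnull value="-" dstport
| stats count by srcip, dstip, dstport, service, action, date, time, policyid
| dedup srcip dstip dstport service consecutive=true
| sort 0 srcip

ICMP rows then appear with "-" in the dstport column alongside the TCP and UDP rows.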
Hello, I am trying to get the perc99 and perc95 of the total transactions in IIS with the below search:

source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U" index="main"
| bucket span=1w _time
| stats count by _time
| eventstats perc95(count) as p95, perc95(count) as p95

However, it just gives the same value for both. Any help would be greatly appreciated. Thanks, Joe
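A hedged reading of the query as posted: both eventstats terms call perc95, so the two columns will always be identical; computing perc99 in the second term should give two distinct percentiles:

source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U" index="main"
| bucket span=1w _time
| stats count by _time
| eventstats perc95(count) as p95, perc99(count) as p99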
Hi All, we are trying to integrate ServiceNow with Splunk using the latest version of the Splunk Add-on for ServiceNow, and we are getting the below error.
Heavy forwarder version: 8.1
ServiceNow add-on: 7.1

error: file=splunk_ta_snow_account_validation.py:validate:110 | Unable to reach ServiceNow instance at https://XXXX.service-now.com. The reason for failure is=Traceback (most recent call last):
  File "/splunk/etc/apps/Splunk_TA_snow/bin/splunk_ta_snow_account_validation.py", line 106, in validate
    resp, content = http.request(url)
  File "/splunk/etc/apps/Splunk_TA_snow/lib/httplib2/__init__.py", line 1709, in request
    conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
  File "/splunk/etc/apps/Splunk_TA_snow/lib/httplib2/__init__.py", line 1424, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/splunk/etc/apps/Splunk_TA_snow/lib/httplib2/__init__.py", line 1346, in _conn_request
    conn.connect()
  File "/splunk/etc/apps/Splunk_TA_snow/lib/httplib2/__init__.py", line 1138, in connect
    self.sock = self._context.wrap_socket(sock, server_hostname=self.host)
  File
Current query:

index=salcus sourcetype=ticket_mgmt_rest source=http:ticket_mgmt_rest
| rename "properties.o2-TroubleTicket-ReqId" as REQID
| transaction REQID keepevicted=true
| search eventcount=2
| table REQID duration
| sort -duration

Now I want only the top 1 record, the one with the maximum duration. How can I modify the above query?
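A hedged sketch (just limiting the query you already have to one row):

index=salcus sourcetype=ticket_mgmt_rest source=http:ticket_mgmt_rest
| rename "properties.o2-TroubleTicket-ReqId" as REQID
| transaction REQID keepevicted=true
| search eventcount=2
| table REQID duration
| sort 1 -duration

sort 1 -duration keeps only the single highest-duration row; keeping the existing sort and appending | head 1 does the same thing.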
Hello, community. What is a skipped search? Do I understand correctly that it is a search which finished with an error? How can I generate a skipped search? (A weird task, but I have it.) Thank you.
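A hedged sketch for seeing skipped scheduled searches together with the scheduler's stated reason (scheduler events live in _internal; a skipped search is one the scheduler declined to run, for example because of concurrency limits, rather than one that ran and errored):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason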
Hi all, I'm having an issue with the add-on "Microsoft Teams Add-on for Splunk". I currently use the add-on to get Teams call detail records downloaded to Splunk. For small calls this runs well without issue. However, for big meetings with 200+ participants, the call detail record is divided into several pages, and the URL to download the next page comes via "@odata.nextLink". This add-on doesn't seem to download the next page; it stops after downloading the first page, and therefore not all participants' details are downloaded to Splunk. Via PowerShell, I can confirm that the Graph API does return the "@odata.nextLink" (which was missing before; they only fixed it recently). Looking into the Python code, the add-on uses the function get_item to fetch the data, and this function doesn't handle nextLink at all, which explains why I encountered the issue. Below is the code of this get_item function:

def get_item(helper, access_token, url):
    headers = {}
    headers["Authorization"] = "Bearer %s" % access_token
    headers["Content-type"] = "application/json"
    proxies = get_proxy(helper, "requests")
    try:
        r = requests.get(url, headers=headers, proxies=proxies)
        r.raise_for_status()
        response_json = None
        response_json = json.loads(r.content)
        item = response_json
    except Exception as e:
        raise e
    return item

I found another function within the same library that does follow the nextLink data, which is get_items:

def get_items(helper, access_token, url, items=[]):
    headers = {}
    headers["Authorization"] = "Bearer %s" % access_token
    headers["Content-type"] = "application/json"
    proxies = get_proxy(helper, "requests")
    try:
        r = requests.get(url, headers=headers, proxies=proxies)
        if r.status_code != 200:
            return items
        r.raise_for_status()
        response_json = None
        response_json = json.loads(r.content)
        items += response_json['value']
        if '@odata.nextLink' in response_json:
            nextLink = response_json['@odata.nextLink']
            # This should never happen, but just in case...
            if not is_https(nextLink):
                raise ValueError("nextLink scheme is not HTTPS. nextLink URL: %s" % nextLink)
            helper.log_debug("_Splunk_ nextLink URL (@odata.nextLink): %s" % nextLink)
            get_items(helper, access_token, nextLink, items)
    except Exception as e:
        raise e
    return items

So, what I did was change the call from

call_record = azutils.get_item(helper, access_token, url)

to

call_record = azutils.get_items(helper, access_token, url)

but it doesn't work. Does anyone know how to get around this?

Thanks a lot
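Two hedged observations based only on the code shown (not on the add-on's internals). First, get_items builds a list from each page's value array, while get_item returns the raw response dict; if the response for a single call record has no value key, the line items += response_json['value'] raises a KeyError, and any downstream code expecting one dict now receives a list. Second, the mutable default items=[] persists across invocations, so records can accumulate between calls. A sketch of how the call site might look under those assumptions (process_call_record is a hypothetical placeholder for whatever the script already does with each record):

# pass a fresh list so Python's shared default argument doesn't carry
# records over from earlier invocations of get_items
call_records = azutils.get_items(helper, access_token, url, items=[])

# get_items returns a list of page entries, not the single dict get_item returned,
# so the existing per-record logic has to run once per element
for call_record in call_records:
    process_call_record(helper, call_record)  # hypothetical downstream handler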
I have a JSON object in the msg field as:

"objectA":{
  "aggrStatus":"SUCCESS",
  "attempts":[
    {
      "aggrStatus":"FAILURE",
      "responses":[
        { "requestTime":1626329472707, "responseTime":1626329474713, "status":"FAILURE" }
      ]
    },
    {
      "aggrStatus":"SUCCESS",
      "responses":[
        { "requestTime":1626330378365, "responseTime":1626330378622, "status":"SUCCESS" }
      ]
    }
  ]
}

I want to find the average total time taken by successful responses; in the example above, the second attempt's response time should be counted because it succeeded, and the first attempt's should not. Total time taken = responseTime - requestTime. So how do I find:
1. The average response time of all successful events found.
2. A table with counts of response times under 1 sec, between 1 and 2 sec, between 2 and 3 sec, and over 3 sec.
Can you please help with the query? Thank you so much for your help and efforts.
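A hedged sketch (the index name is a placeholder, and the field paths assume the JSON sits in the msg field exactly as shown): extract the per-response fields with spath, keep requestTime/responseTime/status paired with mvzip before expanding, then compute the duration for successful responses only.

index=my_index
| spath input=msg
| eval pair=mvzip(mvzip('objectA.attempts{}.responses{}.requestTime', 'objectA.attempts{}.responses{}.responseTime', "|"), 'objectA.attempts{}.responses{}.status', "|")
| mvexpand pair
| eval reqTime=tonumber(mvindex(split(pair,"|"),0)), respTime=tonumber(mvindex(split(pair,"|"),1)), respStatus=mvindex(split(pair,"|"),2)
| where respStatus="SUCCESS"
| eval duration_ms=respTime-reqTime
| stats avg(duration_ms) as avg_response_ms,
        count(eval(duration_ms<1000)) as "under 1s",
        count(eval(duration_ms>=1000 AND duration_ms<2000)) as "1s to 2s",
        count(eval(duration_ms>=2000 AND duration_ms<3000)) as "2s to 3s",
        count(eval(duration_ms>=3000)) as "over 3s"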