All Topics


I installed DB Connect on a Heavy Forwarder, but I get a message that the task server cannot start. Earlier I installed DB Connect on a Search Head (the same way) and it worked fine. What is the problem with the Heavy Forwarder? Can anyone please give me a clear answer?
Good afternoon! I have a search on which I base an alert:

index="main" sourcetype="testsystem-script707"
| eval srcMsgId_Исх_Сообщения=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'),'srcMsgId','Correlation_srcMsgId')
| eventstats count as Message_Number by srcMsgId_Исх_Сообщения
| rex field=routepointID "^(?<Routepoint_ID_num>\d+)\."
| sort routepointID
| eventstats first(Routepoint_ID_num) as RouteID by srcMsgId_Исх_Сообщения
| sort _time
| eventstats first(_time) as MessageTime by srcMsgId_Исх_Сообщения
| eval Time_Now=now()
| eval diff_time=Time_Now-MessageTime
| where RouteID!=1 AND diff_time>15

But the alert constantly triggers on old message chains that match the conditions of the query above. I would like to stop it triggering on old chains by adding a field to messages, alert=true or alert=0, and then adding a condition to my search: only fire when alert=0. Can you tell me how to do this?
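One minimal alternative, sketched below under the assumption that an "old chain" is one whose first message is older than some window (one hour here, an arbitrary threshold): instead of tagging events with an alert field, put an upper bound on diff_time so old chains simply stop matching:

```spl
index="main" sourcetype="testsystem-script707"
| eval srcMsgId_Исх_Сообщения=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'),'srcMsgId','Correlation_srcMsgId')
| rex field=routepointID "^(?<Routepoint_ID_num>\d+)\."
| sort routepointID
| eventstats first(Routepoint_ID_num) as RouteID by srcMsgId_Исх_Сообщения
| sort _time
| eventstats first(_time) as MessageTime by srcMsgId_Исх_Сообщения
| eval diff_time=now()-MessageTime
| where RouteID!=1 AND diff_time>15 AND diff_time<=3600
```

With this shape, a chain fires only while its age is between 15 seconds and one hour; tune the upper bound to however long the alert should keep re-triggering.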
We have alerts for high Windows Server CPU usage, and we have automated vulnerability scanners that can trip these alerts, which we'd like to ignore. The existing alert has just one nested search to check CPU usage, and the main search uses that host name to find the processes with high CPU, roughly like this:

```analyze processes on high-cpu hosts```
index=perf source=process cpu>25
    ```find high-CPU hosts```
    [search index=perf source=cpudata cpu>95
    | stats count by host
    | search count > 2
    | fields host ]
```omitted: other macros, formatting, etc.```

To ignore the vulnerability scans, we have to check the IIS index for requests from servers whose names start with a particular prefix (scanner*) -- this works fine:

```analyze processes on high-cpu hosts```
index=perf source=process cpu>25
    ```disregard hosts taking scanner traffic```
    [search index=iis
    | lookup dnslookup clientip OUTPUT clienthost
    | eval scanner=if(like(clienthost,"scanner%"),1,0)
    | stats sum(scanner) as count by host
    | search count<10
    | fields host
    ```find high-CPU hosts```
    | search [search index=perf source=cpudata cpu>95
        | stats count by host
        | search count > 2
        | fields host ]]
```omitted: other macros, formatting, etc.```

The problem is that not all servers use IIS. For example, a SQL Server will never appear in the IIS index. So I'm trying to find a way to have the host value passed to the main search when the IIS sub-search has no hits at all. I suspect I should combine the IIS search into the high-CPU subsearch (referencing both indexes), but I'm having a hard time wrapping my head around how that would work. As a side note, performance seems pretty bad. Each subsearch runs subsecond stand-alone (and our indexes like IIS have over 1 million events per minute), but the multi-subsearch version takes a little over a minute -- this surprises me, since only a few host values are passed out of the innermost search.
Performance suggestions are very welcome but we can live with the processing time if I can solve this other issue.
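One way to restructure this, sketched under the assumption that the real goal is "exclude hosts with heavy scanner traffic" rather than "include hosts with light scanner traffic": invert the IIS subsearch into a NOT exclusion at the top level. Hosts that never appear in the IIS index (SQL Server, etc.) then pass through automatically, because they are simply never excluded:

```spl
index=perf source=process cpu>25
    [ search index=perf source=cpudata cpu>95
      | stats count by host
      | where count > 2
      | fields host ]
    NOT [ search index=iis
      | lookup dnslookup clientip OUTPUT clienthost
      | where like(clienthost, "scanner%")
      | stats count by host
      | where count >= 10
      | fields host ]
```

On performance: the unrestricted `index=iis` scan is the likely cost, since it reads every IIS event in the time range before filtering; restricting it (for example to the scanners' known clientip range) should help far more than reordering the subsearches.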
I am trying to create a search that alerts when a few field values change by 30 percent within 5 minutes. The search should take a sum of the field values whose names look like traffic_in#abc and traffic_out#abc, and then use the delta command to find the difference between current and previous values. The issue is that I have 11 values like abc (e.g., abc, cde, efg, etc.), and I want the delta of the total of (traffic_in#*** + traffic_out#***), and then to find when traffic has changed by over 30%. The search I have works when there is only one value like abc, but I want to change it so that it works with multiple values. The search is:

eventtype=cacti:mirage host="onl-cacti-02" host_id=193 ldi IN("8835","8836","8837","8839","8840","8841","8846","8847","8848","8843","8844")
| reverse
| eval combination=rrdn+"#"+name_cache
| streamstats current=t window=2 global=f range(_time) as deltaTime range(rrdv) AS rrd_value_delta by combination
| eval isTraffic = if(like(rrdn,"%traffic%"),1,0)
| eval kpi = if(isTraffic==1,rrd_value_delta*8/deltaTime,rrd_value_delta/deltaTime)
| timechart limit=0 useother=f span=5min last(kpi) by combination
| addtotals fieldname=total
| delta total as change
| eval change_percent=change/(total-change)*100
| timechart span=5min first(total) AS total_traffic, first(change_percent) AS traffic_change
| where abs(traffic_change) > 30
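One sketch of a per-group version, assuming the text after "#" in combination (abc, cde, ...) identifies the group and that in/out traffic for a group should be summed together. It replaces the single addtotals/delta pair with an untable plus a streamstats over the previous value per group:

```spl
eventtype=cacti:mirage host="onl-cacti-02" host_id=193 ldi IN("8835","8836","8837","8839","8840","8841","8846","8847","8848","8843","8844")
| reverse
| eval combination=rrdn+"#"+name_cache
| streamstats current=t window=2 global=f range(_time) as deltaTime range(rrdv) as rrd_value_delta by combination
| eval isTraffic=if(like(rrdn,"%traffic%"),1,0)
| eval kpi=if(isTraffic==1,rrd_value_delta*8/deltaTime,rrd_value_delta/deltaTime)
| rex field=combination "#(?<group>.+)$"
| timechart limit=0 useother=f span=5min sum(kpi) by group
| untable _time group traffic
| streamstats current=f window=1 last(traffic) as prev_traffic by group
| eval change_percent=(traffic-prev_traffic)/prev_traffic*100
| where abs(change_percent) > 30
```

The streamstats with current=f window=1 carries the previous 5-minute bucket's value for each group, which preserves the sign of the change (delta-like), unlike range(), which would discard it.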
Good afternoon! I figured out how to set up alerts and understood the Cron Expression parameter. Currently I am using */1 * * * * (run every minute). Can you tell me how to run in seconds? I tried many options, but Splunk complains and gives an error. How, for example, do I run every 30 or 40 seconds? Thanks in advance!
The goal is to take all eventIds with "Transaction failed" and exclude events with "Duplicate key" and "Event processed successfully":

index="idx" "Transaction failed"
| table eventId
| dedup eventId
| search NOT [search index="idx" "Duplicate key" | table eventId ]
| search NOT [search index="idx" "Event processed successfully" | table eventId ]

But for some reason the last NOT subsearch doesn't exclude the events which processed successfully:

| search NOT [search index="idx" "Event processed successfully" | table eventId ]
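The usual cause of this symptom is subsearch limits: a subsearch is capped (by default at 10,000 results and a runtime limit), so the "Event processed successfully" list can be silently truncated. A sketch that avoids subsearches entirely, assuming all three message types carry the same eventId field:

```spl
index="idx" ("Transaction failed" OR "Duplicate key" OR "Event processed successfully")
| eval status=case(searchmatch("Duplicate key"), "dup",
                   searchmatch("Event processed successfully"), "ok",
                   searchmatch("Transaction failed"), "failed")
| stats values(status) as statuses by eventId
| where statuses="failed" AND mvcount(statuses)=1
```

The final where keeps only eventIds whose sole observed status is "failed", i.e. those never seen with a duplicate-key or success message, with no subsearch cap involved.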
Hi  For example  Using below query i can see  when we received the last log to splunk, based on that if I search for events it's not showing  Using below spl i can see when we we received latest events with below combination with 30 days timerange   |Tstats latest(_time) as _time where index=abc sourcetype="Cisco devises" host=1234 by index source sourcetype host.     But if I search same spl for seeing events not showing the results --same timerange index=abc sourcetype="Cisco devises" host=1234     
Add field "A" from another index if "B" and "C" are equal across indexes. I have a search that returns events with these fields:

index=1: "B" "C" "D"
index=2: "A" "B" "C"

On a search in index 1, I would like to dynamically 'look up' a field from index=2. I know "B" and "C" will most likely be equal, so I would like to use them to find the matching event and pull the value of field "A". The same activity is recorded similarly in both indexes, but only one has "A". Within a 5-minute timeframe of an event (search & map?): if I find an event where fields B and C are equal across both indexes, then pull the value of field "A" from index=2 and map it to field "D" on the index=1 event; else do a lookup from a lookup table (if false | lookup table x as B OUTPUTNEW z as D). I hope it's clear; please let me know in case of any questions.
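A sketch of one approach, with hypothetical names idx1, idx2, and mylookup standing in for the real indexes and lookup table: search both indexes at once, spread A across events sharing B and C with eventstats, and fall back to the lookup when no match was found:

```spl
(index=idx1) OR (index=idx2)
| eventstats values(A) as A_seen by B C
| where index="idx1"
| lookup mylookup B OUTPUTNEW z as D_fallback
| eval D=coalesce(A_seen, D_fallback)
```

This ignores the 5-minute window; adding | bin _time span=5m before the eventstats and including the bucket in the by clause approximates it, at the cost of missing matches that straddle bucket boundaries. join or map could enforce the window exactly but are slower and subject to subsearch limits.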
Hi, I have created a dashboard with column charts and tables and it is working fine, but when I export the dashboard to PDF I am unable to see the bar charts, though the statistics tables are visible in the PDF. Any help will be appreciated. Thanks
Hello Splunkers!!

Last week                  | Current week               | New Error
"enableEnhancedCheckout"   | "enableEnhancedCheckout"   |
"error_in_python_script"   | "error_in_python_script"   |

Above is my use case, in which I want to compare errors across two weeks, and if any new error is introduced I want to highlight it. Below is the SPL I have used so far. Please let me know what I need to correct in the query below, and how I can achieve this if you have another approach.

index="ABC" source="/abc.log" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
| rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex field=_raw "Error:(?<Error2>.+)"
| rex field=_raw "(?<ErrorM>Error in template script)+"
| rex field=_raw "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
| rex "service=(?<Service>[A-Za-z._]+)"
| rex "Sites-(?<Country>[A-Z]{2})"
| eval Error_Exception= coalesce(Message,Error2,Exception,ErrorM)
| eval Week=case(now()-_time<604800,"Current_Week",_time>604800, "Last_Week")
| stats dc(Week) AS Week_count values(Week) AS Week by Error_Exception
| eval Error_Status=if(Week_count=2,"Both Weeks",Week)
| eval Difference1= abs(tonumber(Last_Week) - tonumber(Current_Week))
| stats count by Difference1
| fields - count
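One sketch for listing only newly introduced errors. Note that the Week eval in the query above compares _time itself against 604800, which is always true for recent epoch timestamps; it should compare now()-_time. With that fixed, errors seen only in the current week can be isolated directly (the rex extractions from the original are elided here for brevity):

```spl
index="ABC" source="/abc.log" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
| eval Error_Exception=coalesce(Message, Error2, Exception, ErrorM)
| eval Week=if(now()-_time < 604800, "Current_Week", "Last_Week")
| stats values(Week) as Weeks by Error_Exception
| where Weeks="Current_Week" AND mvcount(Weeks)=1
```

Anything surviving the final where appeared this week but never last week, which is exactly the "New Error" column.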
Hi all, I have a server that collects logs from Cisco ISE and pipes them to Splunk, which then generates a Splunk report on a weekly basis. The logs stopped coming into Splunk about two weeks ago. I did some basic troubleshooting and found no issues at the network level, as other directories are still sending logs to Splunk as normal. I would like some direction for my troubleshooting process on this issue. If more information is required, please feel free to let me know. Thank you all in advance.
Hi, we have close to 1000 indexers in our Splunk cluster on AWS. Each indexer has 15 TB of local SSD storage. Our retention is 30 days and we have enabled SmartStore with AWS S3. The total S3 bucket size for our cluster is around 9 PB; however, the disk usage on almost all of our indexers is around 95%, which works out to (1000 * 0.95 * 15 TB) = 14.25 PB. What is taking up the additional ~5 PB of disk space on the indexers? I'm sure the hot data (which isn't on S3) is definitely not 2.5 PB (RF=2) in size. Can someone please shed some light here? Thanks.
Hi, sorry for the direct question. This match() is inside an eval, and I get the error "Regex: quantifier doesn't follow a repeatable item". Do you know where the issue is? Thank you.
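Since the actual regex isn't shown, a general note: that error means a quantifier (*, + or ?) appears with nothing before it to repeat, most often a pattern that begins with a quantifier or a literal asterisk left unescaped. A minimal illustration with a hypothetical field and pattern:

```spl
| makeresults
| eval message="ERROR *timeout*"
| eval hit=if(match(message, "\*timeout\*"), "yes", "no")
```

Here match(message, "*timeout*") would raise exactly that error, because the leading * has nothing to quantify; escaping it as \* makes it a literal asterisk.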
Hi, I need help fine-tuning my SPL query. The _time field is not properly formatted when we configure it in a dashboard.

index=sslvpn sourcetype="sslvpnsourcetype" action=failure
| iplocation accessIP
| search Country="Canada"
| stats values(accessIP), count by user, _time, reason
| eval _time=strftime(_time, "%d/%m/%Y %I:%M:%S %p")
| table _time, user, values(accessIP), reason, count
| rename user as Username, values(accessIP) as "Access IP", reason as "Reason", count as Count

This is the result (_time column) when running in the Search & Reporting app: [screenshot]. This is the result (_time column) when we configure it in a dashboard (Dashboard Studio): [screenshot]. Please assist us. Thank you.
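A common workaround, sketched below: dashboard table renderers apply their own formatting to the _time column, so a strftime string assigned back into _time gets reinterpreted. Writing the formatted value into a differently named field sidesteps that:

```spl
index=sslvpn sourcetype="sslvpnsourcetype" action=failure
| iplocation accessIP
| search Country="Canada"
| stats values(accessIP) as "Access IP", count as Count by user, _time, reason
| eval Time=strftime(_time, "%d/%m/%Y %I:%M:%S %p")
| table Time, user, "Access IP", reason, Count
| rename user as Username, reason as Reason
```

Because the column is named Time rather than _time, Dashboard Studio displays the string exactly as produced by strftime.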
Bad Request — editTracker failed, reason='Unable to connect to license master=https://172.31.17.138:8089 Error connecting: Connect Timeout'
Hi Fellow Splunkers, good day. I am currently migrating some applications from on-prem to Splunk Cloud. From app vetting, would anyone be able to suggest possible fixes/resolutions for this check_hotlinking_splunk_web_libraries check? The error points to JS files that work well on-prem and are in their correct location for packaging an application (appserver/static).

Name: check_hotlinking_splunk_web_libraries
Description: Check that the app files are not importing files directly from the search head.
Details: Embed all your app's front-end JS dependencies in the /appserver directory. If you import files from Splunk Web, your app might fail when Splunk Web updates in the future.
Bad imports: ['vizapi/SplunkVisualizationBase', 'vizapi/SplunkVisualizationUtils']
File: /tmp/tmp4bxeox7h/splunk_app/appserver/static/visualizations/VUmeter/src/visualization_source.js

Appreciate any help/advice. Thank you.
Below are my queries.

Query 1:

index=adc source=abc "FilesTrasfered DO980"
| timechart span=1d count
| stats count as "DO980 Files"

Query 2:

index=adc source=abc "FilesTrasfered DO981"
| timechart span=1d count
| stats count as "DO981 Files"

I tried to combine the two queries and get the result in table format, so I used the append command, but I am getting the results in two different rows:

DO980 Files   DO981 Files
500
              230

But I want to get the results in the same row, like this:

DO980 Files   DO981 Files
500           230
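A sketch that avoids append entirely by counting both patterns in a single search, so the two counts land on one row by construction:

```spl
index=adc source=abc ("FilesTrasfered DO980" OR "FilesTrasfered DO981")
| stats sum(eval(if(searchmatch("FilesTrasfered DO980"),1,0))) as "DO980 Files"
        sum(eval(if(searchmatch("FilesTrasfered DO981"),1,0))) as "DO981 Files"
```

If the per-day breakdown from the original timechart is still needed, add | bin _time span=1d before the stats and append by _time to it.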
Hi, I have a set of events from an index with user details as below, and I am looking to populate the events with their manager's name.

ID   Name    MgrID
1    Tom     4
2    Rick    1
3    Harry   1
4    Boss    5
5    CEO     5

I want to add another column to the result with MgrName, like below, using the MgrID and re-referencing the same index:

ID   Name    MgrID   MgrName
1    Tom     4       Boss
2    Rick    1       Tom
3    Harry   1       Tom
4    Boss    5       CEO
5    CEO     5       CEO

I tried to come up with something, but so far no luck. I'd appreciate it if someone has any suggestions or has done this before. Thanks
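One sketch using a self-join on the same index (index=people is a hypothetical stand-in for the real index): the subsearch re-reads the data with ID renamed to MgrID, so the join attaches each manager's Name as MgrName:

```spl
index=people
| table ID Name MgrID
| join type=left MgrID
    [ search index=people
      | rename ID as MgrID, Name as MgrName
      | table MgrID MgrName ]
```

join's subsearch is subject to result limits; for large user sets, writing the ID-to-Name mapping out with outputlookup and enriching via lookup scales better.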
Hello all, I have a search that's something like this:

index=* sourcetype=* ID=* (value=1 OR value=2 OR value=3)
| stats list(_raw) as events by ID value msg
| table ID value msg

Next, I utilize a drilldown option that adds the chosen value into a new search. Basically:

index=* sourcetype=* ID=* value=1
| table ID value msg

The point is to group events into one list based on them having the same ID and a specific value. Now, when I click the drilldown, sometimes the table will include rows with value=1 whose "msg" field is irrelevant to the data I'm searching for. Is it possible to do something like:

index=* sourcetype=* ID=* value=1
| table ID value msg
| eval msg=if(msg=="bad", "Remove From Table", msg)

Sorry for being vague, but I cannot post the actual searches. I hope this makes sense.
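If the goal is to drop those rows rather than relabel them, filtering in the search itself is simpler ("bad" here is a stand-in for whatever msg value should be excluded):

```spl
index=* sourcetype=* ID=* value=1 NOT msg="bad"
| table ID value msg
```

The eval-relabel form in the question also works as written, provided "bad" is quoted; an unquoted bare word in eval is treated as a field name, not a string.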
Hi all, so let's suppose I have the following table:

Job_ID   Parameter_A   Parameter_B
1        "Car"         "Red"
2        "Bus"         "Blue"

I want to get the value "Red" and use it in an eval function. How do I do it? Thanks!

@bowesmana calling you for help like always!!!
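One sketch using values(eval(...)) inside eventstats to copy the Car row's Parameter_B onto every row, where it can then feed further evals. The makeresults block just recreates the sample table (9.x syntax); in practice, replace it with the real search:

```spl
| makeresults format=csv data="Job_ID,Parameter_A,Parameter_B
1,Car,Red
2,Bus,Blue"
| eventstats values(eval(if(Parameter_A=="Car", Parameter_B, null()))) as car_color
| eval is_red=if(car_color=="Red", "yes", "no")
```

After the eventstats, every row carries car_color="Red", so any eval on any row can reference the value pulled from the Job_ID=1 row.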