All Topics



How would I go about having an alert fire at a given threshold? When I run the following, I sometimes get incomplete results in the stats table because not every field reaches a count of 6:

index=_internal AND NOT email="blank@domain.com"
| stats count by email, Message, client.ipAddress, geographical.city
| where count>6
| sort -count

When I instead set the trigger condition to "Number of Results is > 6", I get an alert for 6 total events without the per-row threshold criteria being met. The desired outcome is an alert that fires only when the set threshold is met: for example, an alert that fires when the 'count' of a given event reaches 6 occurrences. Appreciate any tips in advance.
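One common pattern (a sketch against the search above, untested): leave the `where count>6` filter inside the search itself, so every row that survives has already met the threshold, and set the alert's trigger condition to "Number of Results is greater than 0":

```
index=_internal AND NOT email="blank@domain.com"
| stats count by email, Message, client.ipAddress, geographical.city
| where count > 6
| sort -count
```

That way the alert fires exactly when at least one email/Message/IP/city combination exceeds 6 events, rather than when the raw event count exceeds 6.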
I need an alert that notifies me when the SAME Account_Name logs into 2 specific hosts within the same 30-minute window. I'd like to see the events grouped by Account_Name. We authenticate with AD. Not sure of the best way to do this. Logically it works, but I only see events from the bracketed [search]. Any help would be appreciated. Thank you. Here's what I have so far:

index=wineventlog earliest=-30m latest=now source="WinEventLog:Security" (src_ip="10.14.111.60")
| join Account_Name
    [ search index=wineventlog earliest=-30m latest=now source="WinEventLog:Security" (src_ip="10.13.111.60") ]
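A join-free alternative worth trying (a sketch, untested against your data): search both hosts in one pass and group by Account_Name, keeping only accounts seen on both source IPs within the window:

```
index=wineventlog earliest=-30m latest=now source="WinEventLog:Security"
    (src_ip="10.14.111.60" OR src_ip="10.13.111.60")
| stats dc(src_ip) as ip_count, values(src_ip) as ips, count by Account_Name
| where ip_count >= 2
```

This avoids the subsearch limits that make `join` lose events, and the `values(src_ip)` column shows which hosts each account touched.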
I am writing a modular input in Splunk and need to store an API key and secret in a .conf file. I see how I can read the .conf file, but how do I make Splunk encrypt (and subsequently decrypt) these values in my .conf file? I cannot have them in clear text; they need to be encrypted. Any help is welcome.
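One approach worth considering (a sketch, not verified against your setup): instead of a plain .conf value, modular inputs can keep secrets in Splunk's encrypted credential store via the storage/passwords REST endpoint. With the splunk-sdk-python that looks roughly like this; the realm and names below are placeholders:

```python
import splunklib.client as client

# Inside a modular input you would normally reuse the session token that
# splunkd passes to the script, rather than connecting with credentials.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Store the secret; splunkd encrypts it on disk with splunk.secret.
service.storage_passwords.create("my-api-secret", "my_api_user",
                                 "my_modinput_realm")

# Read it back later, e.g. when the input runs:
for sp in service.storage_passwords:
    if sp.realm == "my_modinput_realm" and sp.username == "my_api_user":
        api_secret = sp.clear_password
```

Note that hashing would not work here: since the input must use the key, the value has to be recoverable, which is what the encrypted credential store provides.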
I am trying to determine a way to search for user logins over time to get an idea of application usage. If I have a set number of user IDs, I want to get a count of their logins based on a search string in our logs. My search is like this:

"Looking for user based on login/user_id=" "abc123" OR "zxy987"

This gives me all the logs where each of these users attempted to log in and use the application; it could be multiple times per day, and there could be multiple entries per day per user. I would like to chart this on a dashboard with time on the x-axis and the per-user match count (i.e. login count) on the y-axis. How would I accomplish this in Splunk? Would I need to change my search?
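A sketch of one approach, assuming the user ID can be extracted from the raw event text (the rex pattern below is a guess at the log format and would need adjusting):

```
"Looking for user based on login/user_id=" ("abc123" OR "zxy987")
| rex "login/user_id=\"(?<user_id>[^\"]+)\""
| timechart span=1d count by user_id
```

`timechart` puts _time on the x-axis automatically, and `count by user_id` produces one series per user, which is exactly the per-user login count over time.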
Hello, this is my query:

| loadjob savedsearch="myquery"
| where strftime(_time, "%Y-%m-%d") = "2020-02-24"
| eval show=if(STEP="show",strftime(_time, "%Y-%m-%d %H:%M:%S"),NULL), click=if(STEP="Click",strftime(_time, "%Y-%m-%d %H:%M:%S"),NULL)
| stats max(show) as show, min(click) as click by client

I have two dates, show and click, and I want to calculate the difference between the two dates by client. Example of the result:

client: RYU5890  show: 2020-02-24 10:15:00  click: 2020-02-24 10:20:00  Diff: 5
client: FH5Y411  show: 2020-02-24 09:20:00  click: 2020-02-24 09:21:00  Diff: 1
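One way to get the difference (a sketch based on the query above): keep _time as epoch seconds through the stats so arithmetic still works, subtract, and only format for display at the end:

```
| loadjob savedsearch="myquery"
| where strftime(_time, "%Y-%m-%d") = "2020-02-24"
| eval show=if(STEP="show", _time, null()), click=if(STEP="Click", _time, null())
| stats max(show) as show, min(click) as click by client
| eval Diff=round((click - show) / 60, 0)
| fieldformat show=strftime(show, "%Y-%m-%d %H:%M:%S")
| fieldformat click=strftime(click, "%Y-%m-%d %H:%M:%S")
```

The original query converted to a string before the stats, which makes the subtraction impossible; `fieldformat` changes only how the epoch values are displayed.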
I reinstalled Splunk with clustering today. The problem is that I keep getting 'Signature mismatch between license slave' errors. I have the same splunk.secret on all servers, so during installation I added the key to server.conf already encrypted with that splunk.secret. I have already tried the following:

- Decrypting the pass4SymmKey: works!
- Putting the pass4SymmKey into etc/system/local/server.conf: no effect!
- Adding a new pass4SymmKey: no effect!
- Reinstalling: no effect!

Is this a bug, or am I missing something fundamental? Thanks for your help. Rafael
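A pattern that often resolves this (a sketch; the key value is a placeholder): on the license slave, remove the old encrypted value, enter the pass4SymmKey in plain text, and restart. Splunk then re-encrypts it with that host's own splunk.secret. A value that was pre-encrypted under a different splunk.secret decrypts to garbage and produces exactly this signature mismatch.

```
# etc/system/local/server.conf -- enter the key unencrypted;
# splunkd rewrites it in encrypted form on the next restart
[general]
pass4SymmKey = <your plaintext key>
```

The same plaintext key must be set on the license master so both sides derive the same signature.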
I am using the bin command on the _time field to get 10-minute sections of data, like below:

| bin _time span=10m minspan=10m
| stats sum(myField) as myField by _time
| streamstats avg(myField) as avg by anotherField

I am trying to compare the most recent value in the last completed 10-minute block with the average of the earlier 10-minute blocks. My understanding is that span should create 10-minute buckets and minspan should filter out buckets that are not yet 10 minutes long. So if it is 10:25 right now, the bucket for 10:20 should not be created. But that is not the case: I am getting the 10:20 bucket, with partial data, compared against the average. Any workaround, better option, or more efficient way of doing this would be very helpful. Thanks in advance!
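As far as I can tell, minspan influences automatic span selection rather than filtering incomplete buckets, so the in-progress bucket still appears. One workaround (a sketch, untested): drop any bucket that starts inside the current 10-minute window before averaging:

```
| bin _time span=10m
| stats sum(myField) as myField by _time
| eval current_bucket=floor(now() / 600) * 600
| where _time < current_bucket
| fields - current_bucket
| streamstats avg(myField) as avg
```

`floor(now() / 600) * 600` snaps "now" down to the nearest 10-minute boundary, so only fully elapsed buckets survive the `where`.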
Hi! First question and relative newbie, so bear with me! I created the query below to show the number of missing server IDs per rack, but I can't get the BY clause and the percentage calculation to work in the same query.

index=servers
| eval serverId_present=if(isnotnull(serverId), "OK", "Missing server ID")
| stats count as totalServers, count(eval(serverId_present="Missing server ID")) as missingServers
| eval missingPercentage=round(100*missingServers/totalServers, 2)
| chart sum(totalServers), sum(missingServers), sum(missingPercentage) by rack

No results are found when using the BY clause at the end of the chart pipe. When I remove the BY clause, it generates a table with only one row where the missingPercentage is correct. Results without the BY clause:

sum(totalServers)  sum(missingServers)  sum(missingPercentage)
854043             16326                1.91

So I created a second query to see if that worked instead, but no:

index=servers
| eval serverId_present=if(isnotnull(serverId), "OK", "Missing server ID")
| chart count AS totalServers count(eval(serverId_present="Missing server ID")) AS missingServers count(eval(round(100*missingServers/totalServers,2))) AS missingPercentage by rack

This query generates a table with a row for each rack, but missingPercentage is always just "0" (still "0" if I remove the BY clause). The BY clause is working, though, and showing a row per rack. Results with the BY clause:

rack   totalServers  missingServers  missingPercentage
Rack1  575555        2502            0
Rack2  278488        13824          0

I tried the stats command instead of chart, but no difference.
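The first query's `stats` collapses everything into one row and discards rack, so the later `chart ... by rack` has nothing to group on. One sketch that keeps rack throughout: group in the stats itself, then compute the percentage per row:

```
index=servers
| eval serverId_present=if(isnotnull(serverId), "OK", "Missing server ID")
| stats count as totalServers, count(eval(serverId_present="Missing server ID")) as missingServers by rack
| eval missingPercentage=round(100 * missingServers / totalServers, 2)
```

In the second query, `count(eval(...))` counts rows rather than computing the ratio, which is why missingPercentage comes out as 0; doing the division in an `eval` after the aggregation avoids that.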
Hello everyone, I am trying to use the iplocation command to look up IP address info within my network. My search is as below:

eventtype=wineventlog_security
| iplocation src_ip prefix=srcip_
| table src_ip, City, Country

I am getting the IP list with the other columns blank. I did some research and found that iplocation.py is not present in the expected directory. I do have the GeoLite2-City.mmdb and iso3166 files in the "$SPLUNK_HOME/share/" directory. I am wondering if the missing .py file is the reason for my issue. If so, how can I resolve it? Any help would be much appreciated. Thank you!
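One thing worth checking before chasing the missing .py file: with `prefix=srcip_`, iplocation names its output fields srcip_City, srcip_Country, and so on, so a table on the unprefixed City and Country would show blank columns even when the lookup succeeds. A sketch:

```
eventtype=wineventlog_security
| iplocation src_ip prefix=srcip_
| table src_ip, srcip_City, srcip_Country
```

Alternatively, drop the prefix argument and keep the original table command.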
Hi guys, I have a query that takes 2 fields from a specific index/sourcetype and then goes out to the main index to get more useful info for the search. The query works only when I pass 1 field from the subsearch, but now I want to pass 2 fields. It's something like:

MAIN INDEX SEARCH
    [ specific sourcetype/index search=xxx | table field1 field2 ]
| stats values(fieldx) values(fieldy) values(field1) by field2

So I need to pass 2 of the fields from the subsearch, but it only works with 1 field at a time; I can't do it with both. I would like to hear suggestions on how to pass 2 fields (or more) from a subsearch to the main search. Thanks!
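A subsearch that returns two fields is normally rendered as paired conditions, e.g. (field1="a" AND field2="b") OR (field1="c" AND field2="d"), so both fields can be passed at once. Two common gotchas: the field names must match fields that actually exist in the outer events (rename in the subsearch if they don't), and making the rendering explicit with `format` can help debugging. A sketch:

```
index=main
    [ search index=other sourcetype=xxx
      | fields field1 field2
      | format ]
| stats values(fieldx) values(fieldy) values(field1) by field2
```

Running the bracketed part on its own with `| format` appended shows exactly the condition string the outer search will receive.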
Hi all, please help me with a query to display the time difference between the events mentioned below.

index=opennms nodelabel="GQML2-WANRTC001" "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| rename _time as Time_CST
| sort - Time_CST
| fieldformat Time_CST=strftime(Time_CST,"%x %X")
| table nodelabel, eventuei, Time_CST

Output of the above query:

nodelabel        eventuei                        Time_CST
GQML2-WANRTC001  uei.opennms.org/nodes/nodeUp    02/27/20 04:41:00
GQML2-WANRTC001  uei.opennms.org/nodes/nodeDown  02/27/20 04:40:00

A second, separate query I use:

| rex field=eventuei "uei.opennms.org/nodes/node(?<State>.+)"
| rename _time as Time_CST
| fieldformat Time_CST=strftime(Time_CST,"%x %X")
| dedup nodelabel sortby - Time_CST
| table nodelabel State Time_CST

Output for this query:

nodelabel        State  Time_CST
GQML2-WANRTC001  Up     02/27/20 04:41:00

Expected output if an Up event came:

nodelabel        Status  downtime
GQML2-WANRTC001  Up      00:01

Expected output if no Up event came:

nodelabel        Status  downtime
GQML2-WANRTC001  Down

Let me know all the possibilities for this.
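One sketch (untested) that pairs each Down with the following Up and reports open outages as still Down, using `transaction` with `keepevicted` so incomplete Down/Up pairs are not discarded:

```
index=opennms nodelabel="GQML2-WANRTC001"
    ("uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown")
| rex field=eventuei "uei\.opennms\.org/nodes/node(?<State>.+)"
| sort 0 _time
| transaction nodelabel startswith=eval(State="Down") endswith=eval(State="Up") keepevicted=true
| eval Status=if(closed_txn=1, "Up", "Down"),
       downtime=if(closed_txn=1, tostring(duration, "duration"), null())
| table nodelabel Status downtime
```

`closed_txn` is 1 only when both the Down and the matching Up were found, which maps directly onto the two expected outputs above.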
Hi, I have a scheduled search that detects assets when they enter and leave geofences. For that I calculate things like total travel time, distance, average speed and so forth. The search is pretty long and complicated; it runs every 3 minutes and looks back 1 hour for changes. The detected events are collected into a summary index.

Now comes the problem: the logic for geofence detection has changed, which means my summary index has become useless. I made changes to my search according to customer requests, and the currently detected events are fine, but everything up to this point has no value.

My plan is to rebuild the period from January 1st to today in a secondary summary index. But in theory I would have to set the time window manually, as far as I understand:

1. Go to the search app.
2. Copy my search into the search window.
3. Set the time range to 01/01/20 00:00:00 --> 01/01/20 01:00:00.
4. Let the search run.
5. Set the time range to 01/01/20 00:03:00 --> 01/01/20 01:03:00.
6. Let the search run.
...
1723434. Set the time range to 02/27/20 15:00:00 --> 02/27/20 15:03:00.
1723435. Let the search run.

Another way would be to write a JavaScript program in the backend of a dashboard and run the search in a loop that sets the time range programmatically. Is there another way I am not seeing to run an entire search over successive time slots? The obvious solution would be to rewrite the entire search to be compatible with time-window-based commands, but I don't see a way to make sure the results would be identical after changing the entire search.
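Rather than looping by hand, Splunk ships a backfill helper, fill_summary_index.py, which runs a scheduled search repeatedly over its past scheduled windows and writes the results into the summary index. A sketch (the app, search name, and credentials are placeholders; check the script's -h output for the accepted time formats):

```
$SPLUNK_HOME/bin/splunk cmd python $SPLUNK_HOME/bin/fill_summary_index.py \
    -app your_app -name "your_scheduled_search" \
    -et 01/01/2020:00:00:00 -lt now -j 8 -dedup true -auth admin:changeme
```

`-dedup true` skips windows that already have summary data and `-j` controls how many backfill searches run concurrently. To target a secondary summary index, you would first clone the saved search with its collect/summary index setting pointed at the new index.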
I have a table with 3 fields as follows:

SOR   Datafeed  Status
1art  xxx       Met SLA
1art  yyy       Missed SLA
1art  zzz       Met SLA

Now I would like to consider the status of a SOR as "Missed SLA" if it has even a single status of "Missed SLA". Also, there are cases where I don't see any "Missed SLA" status; in that case it has to be calculated as "Met SLA". Can you please help me, guys?
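A sketch of one roll-up approach: flag the bad status per row, take the maximum per SOR, and translate back, so a single "Missed SLA" row wins and a SOR with none defaults to "Met SLA":

```
| eval missed=if(Status="Missed SLA", 1, 0)
| stats max(missed) as missed by SOR
| eval Status=if(missed=1, "Missed SLA", "Met SLA")
| fields SOR Status
```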
Hi, I need help adding b + c together to get a total; I will then calculate a percentage using a / (b + c). Is this possible?

| stats dc(Users) as UsersCount by Label app
| stats sum(UsersCount) by Label

Label  sum(UsersCount)
a      14
b      2
c      19
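One sketch (untested): aggregate per Label as before, then transpose so a, b, and c become columns of a single row, which turns the percentage into a plain eval:

```
| stats dc(Users) as UsersCount by Label app
| stats sum(UsersCount) as total by Label
| transpose header_field=Label column_name=metric
| eval pct=round(100 * a / (b + c), 2)
```

With the sample numbers this would compute 100 * 14 / (2 + 19), i.e. roughly 66.67.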
Hi, I use the search below to count the number of degradations by model. This is a scheduled search, and I call it from my dashboard with a loadjob command. In my dashboard I have a dropdown list that filters the data by SITE (a test.csv field). To do that, I need to keep the field SITE in my search:

| lookup test.csv HOSTNAME as host output SITE MODEL
| stats values(MODEL) as Model, count(DegradationTime) as DegradationTime by host
| stats count(host) as "Number of degradation" by Model
| sort -"Number of degradation"

So I added "SITE" to the by clauses of my stats commands:

| lookup test.csv HOSTNAME as host output SITE MODEL
| stats values(MODEL) as Model, count(DegradationTime) as DegradationTime by host SITE
| stats count(host) as "Number of degradation" by Model SITE
| sort -"Number of degradation"

But when I do this, I get duplicate models, because the same model can exist for different SITEs. So how can I keep the field SITE available for my dropdown list without displaying duplicate models? Please note that I don't need this field visible in my table panel.
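One way to avoid carrying SITE through the by clauses (a sketch; $site_token$ stands in for your dropdown's token): filter on SITE right after the lookup, then aggregate exactly as before, so the final table never contains SITE and models are not duplicated:

```
| lookup test.csv HOSTNAME as host output SITE MODEL
| search SITE="$site_token$"
| stats values(MODEL) as Model, count(DegradationTime) as DegradationTime by host
| stats count(host) as "Number of degradation" by Model
| sort -"Number of degradation"
```

The dropdown would default to "*" so that the unfiltered view still shows everything.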
Hello, we'd like to monitor configuration changes on our Linux hosts. For that we want to detect when, in the Auditd data model, the field name is equal to /etc/audit/*, /etc/audisp/*, or /etc/libaudit.conf. Here is our basic search:

| tstats `security_content_summariesonly` count from datamodel=Auditd where nodename=Auditd.Path by _time span=1s host Auditd.name
| `drop_dm_object_name("Auditd")`

The question is how we can combine the 3 conditions below in the same search:

| where like(name,"%/etc/audit/%")
| where like(name,"%/etc/audisp/%")
| where name="/etc/libaudit.conf"

Logically it could be done via a case statement, but we weren't able to implement it. Do you have any ideas? Thanks for the help.
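Since the requirement is "any of the three", the conditions combine with OR in a single where clause rather than a case statement. A sketch against the search above:

```
| tstats `security_content_summariesonly` count from datamodel=Auditd where nodename=Auditd.Path by _time span=1s host Auditd.name
| `drop_dm_object_name("Auditd")`
| where like(name, "%/etc/audit/%") OR like(name, "%/etc/audisp/%") OR name="/etc/libaudit.conf"
```

Chaining three separate `| where` commands ANDs them, which is why no event can match all three at once. Pushing the filter into the tstats where clause (e.g. with a wildcarded Auditd.name IN (...) list) may also be worth testing for performance.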
How can I load data from Microsoft Azure (via its APIs) into Splunk? Is there a way to do this?
Hi, I have an alert, and I am sending a notification via email and MS Teams. The result is a table:

apple --- 3 ---- link_to_dashboard

In the email it is fine, but in Teams the underscores ( _ ) are removed from link_to_dashboard, so the link is invalid. I tried to find a solution online but didn't find anything. Any tips on how to fix this? Thanks in advance.
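Teams renders message text as Markdown, where an underscore starts italics, which is a likely reason the characters disappear. One thing to try (a sketch; whether it works depends on the Teams add-on you are using) is escaping the underscores in the result field before the notification is sent:

```
| eval link_to_dashboard=replace(link_to_dashboard, "_", "\\_")
```

If the add-on supports sending the link as a proper Markdown link or card field instead of raw text, that avoids the escaping problem entirely.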
We have a requirement to send Splunk data to Prometheus: as events arrive in Splunk, they should be forwarded to Prometheus. Can anyone guide me on how to achieve this? --Poornima
Hi, I am trying to fetch Splunk events created in the last 30 days with the query below, selecting a time range of "last 30 days", but it seems I am getting events from all time. Please suggest.

Query used:

index=servicenow eventtype=snow_change* sourcetype="snow:change_request" (change_state_name="Work Complete" OR change_state_name=Closed) earliest=-30d@d
| dedup number
| eval diff=strptime(dv_work_end,"%Y-%m-%d %H:%M:%S")-strptime(dv_work_start,"%Y-%m-%d %H:%M:%S")
| eval Downtime=round((diff/60),3)
| table number Downtime host dv_work_start dv_work_end

Splunk events output: 1,285 events (1/28/20 12:00:00.000 AM to 2/27/20 5:30:31.555 PM)

number      Downtime   host          dv_work_start        dv_work_end
CHG0129357  300.000    kmci4odw2023  2020-01-19 21:00:00  2020-01-20 02:00:00
CHG0129566  120.000    kmci4odw2023  2020-01-19 23:30:00  2020-01-20 01:30:00
CHG0129494  99.250     kmci4odw2023  2020-01-19 23:48:54  2020-01-20 01:28:09
CHG0129795  4320.367   kmci4odw2023  2020-01-20 10:55:10  2020-01-23 10:55:32
CHG0129116  1110.000   kmci4odw2023  2020-01-20 13:00:00  2020-01-21 07:30:00
CHG0129536  1380.000   kmci4odw2023  2020-01-20 13:30:00  2020-01-21 12:30:00
CHG0129632  88.250     kmci4odw2023  2020-01-20 15:05:04  2020-01-20 16:33:19
CHG0129634  120.000    kmci4odw2023  2020-01-20 16:15:00  2020-01-20 18:15:00
CHG0129585  120.000    kmci4odw2023  2020-01-20 17:00:00  2020-01-20 19:00:00
CHG0129389  155.100    kmci4odw2023  2020-01-20 22:30:25  2020-01-21 01:05:31
CHG0129593  0.000      kmci4odw2023  2020-01-20 23:30:00  2020-01-20 23:30:00
CHG0129647  90.667     kmci4odw2023  2020-01-21 04:30:00  2020-01-21 06:00:40
CHG0129323  1440.000   kmci4odw2023  2020-01-21 07:00:00  2020-01-22 07:00:00
CHG0128642  60.000     kmci4odw2023  2020-01-21 09:00:00  2020-01-21 10:00:00
CHG0129555  151.300    kmci4odw2023  2020-01-21 09:00:25  2020-01-21 11:31:43
CHG0128772  90.000     kmci4odw2023  2020-01-21 09:30:00  2020-01-21 11:00:00
CHG0129613  1440.000   kmci4odw2023  2020-01-21 09:30:00  2020-01-22 09:30:00
CHG0129234  1440.000   kmci4odw2023  2020-01-21 09:30:00  2020-01-22 09:30:00
CHG0129955  10080.000  kmci4odw2023  2020-01-21 09:55:51  2020-01-28 09:55:51
CHG0129650  57.800     kmci4odw2023  2020-01-21 10:00:00  2020-01-21 10:57:48
CHG0128646  120.000    kmci4odw2023  2020-01-21 10:00:00  2020-01-21 12:00:00
CHG0129667  1230.000   kmci4odw2023  2020-01-21 13:00:00  2020-01-22 09:30:00
CHG0128650  3120.000   kmci4odw2023  2020-01-21 13:00:00  2020-01-23 17:00:00
CHG0129676  120.000    kmci4odw2023  2020-01-21 13:15:00  2020-01-21 15:15:00
CHG0129461  119.500    kmci4odw2023  2020-01-21 13:30:30  2020-01-21 15:30:00
CHG0129446  60.000     kmci4odw2023  2020-01-21 16:00:00  2020-01-21 17:00:00
CHG0129292  50.000     kmci4odw2023  2020-01-21 17:00:00  2020-01-21 17:50:00
CHG0129679  35.000     kmci4odw2023  2020-01-21 17:20:00  2020-01-21 17:55:00
CHG0129709  420.000    kmci4odw2023  2020-01-21 19:00:00  2020-01-22 02:00:00
CHG0129526  167.917    kmci4odw2023  2020-01-21 21:00:00  2020-01-21 23:47:55
CHG0129677  180.000    kmci4odw2023  2020-01-21 21:30:00  2020-01-22 00:30:00
CHG0129646  40.183     kmci4odw2023  2020-01-21 23:35:37  2020-01-22 00:15:48
CHG0129567  296.883    kmci4odw2023  2020-01-22 00:25:57  2020-01-22 05:22:50
CHG0129417  1450.000   kmci4odw2023  2020-01-22 07:00:00  2020-01-23 07:10:00
CHG0129295  10.000     kmci4odw2023  2020-01-22 07:00:00  2020-01-22 07:10:00