All Topics

I've seen a few posts on the subject, but I'd like to know how we can disable multiple alerts during a maintenance window. For example, I'd like to disable alerts 1, 2, and 3 from Saturday 11:30 p.m. until Sunday 6:00 a.m. Thank you in advance.

Reference alert query:

index=ABC sourcetype=XYZ ("Internal System Error")
| stats count
| where count >= 30
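One hedged approach, assuming the alert stays scheduled: build the window check into the search itself so it returns nothing during the maintenance window. The day/time logic below is only a sketch (it assumes the search head's local timezone and a Saturday-23:30-to-Sunday-06:00 window):

```spl
index=ABC sourcetype=XYZ ("Internal System Error")
| addinfo
| eval dow=strftime(info_max_time, "%w"), hhmm=strftime(info_max_time, "%H%M")
| where NOT ((dow="6" AND hhmm>="2330") OR (dow="0" AND hhmm<"0600"))
| stats count
| where count >= 30
```

With strftime, "%w" yields 0 for Sunday through 6 for Saturday, so the where clause suppresses results inside the window while leaving the schedule untouched.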
I have the following fields, where some of them might be null, empty, or similar values. I would like to split the Services values, which might have 1-N values separated by a comma, into separate columns/fields prefixed with "Sp.". For example:

| makeresults
| eval Platform="p1", Ent="ent1", Ext="100", Fieldx=null(), Fieldy="", Services="user,role,func1,func2"
| append [ | makeresults | eval Platform="p1", Ent="ent2", Ext="100", Fieldx="", Fieldy=null(), Services="user2,role2,func4,func8,func5,role3" ]
| fields _time Platform Ent Ext Fieldx Fieldy Services

gives, for example:

_time                Platform  Ent   Ext  Fieldx  Fieldy  Services
2022-09-30 08:56:11  p1        ent1  100                  user,role,func1,func2
2022-09-30 08:56:11  p1        ent2  100                  user2,role2,func4,func8,func5,role3

How do I split Services into separate fields? I think I cannot just use stats list() by "all fields" because of the possible null values in the other fields. The desired output has one Sp.<name> column per distinct service (Sp.func1, Sp.func2, Sp.func4, Sp.func5, Sp.func8, Sp.role, Sp.role2, Sp.role3, Sp.user, Sp.user2), where each row fills only the columns matching its own Services list (e.g. row 1 has Sp.func1=func1, Sp.func2=func2, Sp.role=role, Sp.user=user; row 2 has Sp.func4=func4, Sp.func5=func5, Sp.func8=func8, Sp.role2=role2, Sp.role3=role3, Sp.user2=user2).
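A sketch of one possible approach, using mvexpand plus eval's {field} dynamic-name syntax to create the Sp.* columns, then collapsing back to one row per original event with first()/values() so the null fields survive (untested against edge cases such as an empty Services):

```spl
... base search ...
| streamstats count as row
| eval Services_list=Services
| makemv delim="," Services_list
| mvexpand Services_list
| eval colname="Sp." . Services_list
| eval {colname}=Services_list
| stats first(_time) as _time first(Platform) as Platform first(Ent) as Ent
        first(Ext) as Ext first(Fieldx) as Fieldx first(Fieldy) as Fieldy
        first(Services) as Services values(Sp.*) as Sp.* by row
| fields - row
```

The streamstats row counter is a hypothetical row key so the stats recombination doesn't need the nullable fields in its by clause.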
I have log messages in the format below. I want to group the messages by BID. I tried the query below, but I am not getting any events even though there are events that qualify.

{ "details" : [ { "BID" : "123" }, { "BID" : "456" } ] }

Expected output:

BID  Count
123  4
456  3

Query I am using:

{my_search}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| spath input=JSON path=details.BID
| stats values(details.BID) as "BID" by CORRID
| stats count as Count by "BID"
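Since details is a JSON array, spath generally needs the {} array notation in the path; a hedged rework of the query (assuming each event's MESSAGE holds one such JSON object):

```spl
{my_search}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| spath input=JSON path=details{}.BID output=BID
| mvexpand BID
| stats count as Count by BID
```

The mvexpand splits the multivalue BID field so each array element counts once.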
Hello, I am using a Python script to read from a remote API with pagination. The problem: once the script starts and pulls data, if I then disable the script, the data does not show up in Splunk even though it passed through the print statement.
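Without seeing the script, a common cause is unflushed stdout plus lost pagination state when the input is disabled mid-run. A minimal sketch under those assumptions (the checkpoint path and event format below are placeholders, not your actual setup): flush after every page, and persist a checkpoint so a restart resumes where it left off instead of losing work:

```python
import json
import os
import sys

CHECKPOINT = "/tmp/api_checkpoint.json"  # hypothetical checkpoint location

def load_checkpoint():
    """Return the next page to fetch (1 if no checkpoint exists yet)."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f).get("page", 1)
    return 1

def save_checkpoint(page):
    """Persist the next page to fetch before moving on."""
    with open(CHECKPOINT, "w") as f:
        json.dump({"page": page}, f)

def emit_page(events):
    """Print one page of events and flush immediately, so the output is
    captured even if the script is stopped right afterwards."""
    for event in events:
        print(json.dumps(event))
    sys.stdout.flush()
```

A fetch loop would then call emit_page and save_checkpoint once per page, in that order.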
Hello. I have a forwarder installed on a server, and it shows up under Clients in Forwarder Management. I then created a new app in deployment-apps, with local/input.conf like this:

[monitor:///home/cnttm/Vibus/logTransit/application.log]
crcSalt = <SOURCE>
disable = false
index = mynewindex

[monitor:///home/cnttm/Vibus/logTransit/*.log]
crcSalt = <SOURCE>
disable = false
index = mynewindex

[monitor:///home/cnttm/Vibus/logTransit/*]
crcSalt = <SOURCE>
disable = false
index = mynewindex

[monitor:///home/cnttm/*]
crcSalt = <SOURCE>
disable = false
index = mynewindex

The log file is /home/cnttm/Vibus/logTransit/application.log. I then created a server class and app, enabled it, and restarted. But when I search index=mynewindex I get no results, and I'm pretty sure there are logs in that directory. Does anyone know what is wrong with my syntax? And how do I check whether my deployment app is working?
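As a hedged side-by-side for comparison: the standard file name is inputs.conf (plural), and the attribute is disabled, not disable. A single minimal stanza under those assumptions:

```ini
# deployment-apps/<app>/local/inputs.conf
[monitor:///home/cnttm/Vibus/logTransit/application.log]
disabled = false
index = mynewindex
```

To check whether the app arrived and the monitor attached, the usual places to look on the forwarder are the app directory under $SPLUNK_HOME/etc/apps and $SPLUNK_HOME/var/log/splunk/splunkd.log; also verify that the index mynewindex actually exists on the indexer.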
I have the following sample events:

2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU5 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU4 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU3 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU2 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU1 IOS: 76
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU0 IOS: 96

I want to compare the IOS value for each host, and if any one of them shows a different value, output the result. In the events above, for host=abc, all GPUs have an IOS value of 96 except GPU1, which is 76. I want to output GPU1 and its IOS value. I tried diff, but it's not working.

Thanks in advance.
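One hedged way to flag the odd one out: count how many GPUs share each IOS value per host, then keep only the minority rows. The rex below is itself an assumption about the event layout, and the base search is a placeholder:

```spl
index=... "log-inventory.sh"
| rex "GPU(?<gpu>\d+)\s+IOS:\s+(?<ios>\d+)"
| stats latest(ios) as ios by host gpu
| eventstats count as value_count by host ios
| eventstats max(value_count) as majority_count by host
| where value_count < majority_count
| table host gpu ios
```

In the sample above this would leave host=abc, gpu=1, ios=76, since 96 is the majority value.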
Hello, I have a log file that goes like this:

2022-09-30 09:43:41,038: INSTANCE=34-bankgw1, REF=237324562, MESSSAGE=IST2InterfaceModel.ResponseVerifyCardFromBank:{"F0":"0210","F2":"970422xxxxxx6588","F3":"050000","F4":"000001000000","F7":"0930094340","F9":"00000001","F11":"277165","F12":"094340","F13":"0930","F15":"0930","F18":"7399","F25":"08","F32":"970471","F37":"273094277165","F38":"277165","F39":"15","F41":"00005782",0822,237324562,VNPAYCE","F49":"704","F54":"0000000000000000000000000000000000000000","F62":"EC_CARDVER","F63":"AAsA7QKwYzZX3AAB","F102":"0000000000000000"}

With a log structure like this, I can't really extract the field I want with the Splunk field extractor. The field I want to extract is F39 (which means status), for monitoring purposes. I'm a real amateur when it comes to rex, so can anyone help me with it?
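Since the payload is not quite well-formed JSON (the quoting breaks around the F41 value), spath may choke on it, but a targeted rex keyed on the "F39" key should still work; a sketch (the table fields are just examples):

```spl
... | rex "\"F39\":\"(?<F39>[^\"]+)\""
| table _time, INSTANCE, REF, F39
```

Against the sample line this would yield F39=15.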
Hello, did anyone try sending Moogsoft alerts/events to Splunk? Thanks
I need to create a field (30days) holding a date 30 days after the date in a given field (pubdate). I believe I have that part working, but I can't seem to get the date to convert to the format I want.

| makeresults
| eval pubdate="2022-09-30,2021-08-31"
| makemv delim="," pubdate
| mvexpand pubdate
| eval epochtime=strptime(pubdate, "%Y-%m-%d")
| eval 30days=epochtime + 2592000
| convert ctime(30days)
| table pubdate, 30days

Which produces:

pubdate     30days
2022-09-30  10/30/2022 00:00:00.000000
2021-08-31  09/30/2021 00:00:00.000000

All I want is to format the 30days field the same way as pubdate, "%Y-%m-%d". Everything I'm trying produces an error.
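convert ctime applies its own display format; strftime lets you choose one. Note also that a field name starting with a digit, like 30days, needs single quotes when referenced inside eval expressions. A sketch:

```spl
| makeresults
| eval pubdate="2022-09-30,2021-08-31"
| makemv delim="," pubdate
| mvexpand pubdate
| eval epochtime=strptime(pubdate, "%Y-%m-%d")
| eval 30days=strftime(epochtime + 2592000, "%Y-%m-%d")
| table pubdate, 30days
```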
Is cloud data stored in Canada? 
According to the cron docs, the code for Sunday is 0. When I try to run this cron for the first Sunday of the month, it displays Saturday!

00 12 1,2,3,4,5,6,7 * 0

Of course, when I use 6 for Saturday, it works!

00 12 1,2,3,4,5,6,7 * 6

What code am I supposed to use for Sunday? TIA! David
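For what it's worth, standard crontab semantics may be part of the confusion: per POSIX, when both the day-of-month and day-of-week fields are restricted, the job runs when either one matches, so the expression below fires on days 1-7 of every month and additionally on every Sunday. Whether Splunk's scheduler follows POSIX exactly here is worth verifying against its docs:

```
# intended: first Sunday of the month at 12:00
# POSIX semantics: every day 1-7, plus every Sunday of the month
00 12 1,2,3,4,5,6,7 * 0
```

Sunday is indeed 0 (7 also works in many cron implementations).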
Please advise on my query. The line in question:

| where ('result.code'=-1 OR 'result.code'=1 OR 'result.code'=21 OR 'result.code'=23 OR 'result.code'=SMEV-403)

The query finds messages with all the result.code values except SMEV-403. How can I fix this?
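A likely cause: in a where clause, an unquoted SMEV-403 is parsed as an expression (the field SMEV minus 403), not as a string. String literals need double quotes:

```spl
| where 'result.code'=-1 OR 'result.code'=1 OR 'result.code'=21
     OR 'result.code'=23 OR 'result.code'="SMEV-403"
```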
Hi, I have been using Splunk actively for three months. I have created custom insights in AWS Security Hub to monitor continuous compliance tasks, but these are not set up to send alerts when the number of failed resources changes. I understand it is possible to recreate these AWS insights in Splunk and set up alerts on such changes. How is this done? I imagine these would be standard searches that anyone can use.
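A heavily hedged sketch of what such an alert search might look like, assuming Security Hub findings are already being indexed; the index, sourcetype, and field names below are placeholders for whatever your AWS add-on actually produces:

```spl
index=aws sourcetype="aws:securityhub:finding"
| spath "Compliance.Status"
| search "Compliance.Status"=FAILED
| stats dc(Id) as failed_resources
```

Saved as a scheduled alert, a trigger condition on failed_resources (or a comparison against the previous run's value stored in a lookup) would fire on changes.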
Hello all, new Splunker here, so forgive me if this is totally the wrong way to do it. I was asked to make a dashboard comparing application performance before and after the monthly patch. I was able to do so with the following code:

index=erp sourcetype=erp_heartbeat tenant=AX2 earliest=-31d@month latest=-1d@month
| eval custate="Post-Update"
| append [ search index=erp sourcetype=erp_heartbeat tenant=AX2 earliest=-61d@month latest=-30d@month
  | eval custate="Pre-Update" ]
| chart avg(duration) by trans_name, custate

In a recent touchpoint, it was requested that users be able to change the dates to look at prior months' numbers. I can't figure out how to accomplish this since I'm using specific earliest and latest time modifiers, so any help would be tremendously appreciated. Thank you all; I've gotten this far with this community's help.
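One hedged approach in Simple XML: add two time pickers and replace the hard-coded modifiers with their tokens (the token names below are made up):

```xml
<input type="time" token="post_window">
  <label>Post-update window</label>
</input>
<input type="time" token="pre_window">
  <label>Pre-update window</label>
</input>
```

The search then becomes, roughly:

```spl
index=erp sourcetype=erp_heartbeat tenant=AX2
    earliest=$post_window.earliest$ latest=$post_window.latest$
| eval custate="Post-Update"
| append [ search index=erp sourcetype=erp_heartbeat tenant=AX2
    earliest=$pre_window.earliest$ latest=$pre_window.latest$
  | eval custate="Pre-Update" ]
| chart avg(duration) by trans_name, custate
```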
I destroyed some VMs in my production environment recently, but I can still see them on the "IT Essentials Work" => "Infrastructure Overview" => Unix/Linux Add-on page. They already have inactive status, but how do I completely remove them from Splunk Cloud? I don't want to monitor them anymore.
My task: I need to group by two fields, eventid and dest, with a count and the first and last time each combination occurred.

eventid  dest                count  firsttime            lasttime
256      drdydyf.google.com  56     2022-09-28T19:21:10  2022-09-28T19:21:34
249      bigdaddy.com        78     2022-09-28T19:22:10  2022-09-28T19:22:20
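The shape described maps onto a single stats call; a sketch (the base search is a placeholder, and the strftime formatting is optional):

```spl
... | stats count earliest(_time) as firsttime latest(_time) as lasttime by eventid dest
| eval firsttime=strftime(firsttime, "%Y-%m-%dT%H:%M:%S")
| eval lasttime=strftime(lasttime, "%Y-%m-%dT%H:%M:%S")
```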
I have an SPL query that gives a result, and I want a trend of that result. I tried the timechart command, but it is not working.

Query:

| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta by source,Notable_Events_Meta.rule_id
| `drop_dm_object_name("Notable_Events_Meta")`
| `get_correlations`
| join rule_id [| from inputlookup:incident_review_lookup | eval _time=time | stats earliest(_time) as review_time by rule_id]
| eval ttt=review_time-_time
| stats avg(ttt) as avg_ttt
| sort - avg_ttt
| `uptime2string(avg_ttt, avg_ttt)`
| rename *_ttt* as *(Time_To_Triage)*
| fields - *_dec
| table avg(Time_To_Triage)
| rename avg(Time_To_Triage) as "Mean/Average Time To Respond"
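One likely reason timechart has nothing to plot: stats avg(ttt) collapses everything into a single row with no _time. A hedged rework is to keep _time and bucket it instead of taking the final stats (this drops the uptime2string formatting, which only suits a single scalar):

```spl
... | eval ttt=review_time-_time
| timechart span=1d avg(ttt) as "Mean/Average Time To Respond"
```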
Hi, I have a lookup file with the fields biz_department, biz_unit, biz_owner, and data_usage. I have a query that generates the data_usage values based on biz_unit. I will schedule the report so that it periodically updates only the data_usage values in the lookup file. How can I call the lookup file and update only that specific field? Thanks, MS
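outputlookup always rewrites the whole file, so "update one field" usually means: read the lookup, drop the stale column, join in fresh values, and write the file back. A sketch (the file name and the usage search are placeholders):

```spl
| inputlookup biz_usage.csv
| fields - data_usage
| join type=left biz_unit
    [ search index=usage_idx | stats sum(bytes) as data_usage by biz_unit ]
| outputlookup biz_usage.csv
```

The type=left join keeps rows whose biz_unit produced no fresh usage; whether that is the right behavior for your data is a design choice.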
Hello, I have basic questions about how to geolocate devices with Splunk. Does an add-on exist? If not, is it possible to correlate a tool like NetDB with Splunk using DB Connect? https://web.stanford.edu/group/networking/netdb/help/prod/netdb.html If yes, what are the prerequisites for doing this? Thanks
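For public IP addresses, Splunk's built-in iplocation command already provides geolocation without any add-on; for internal devices you would typically join against an asset/location lookup (which is where NetDB data via DB Connect could fit). A placeholder sketch of the built-in route:

```spl
index=netlog
| iplocation src_ip
| stats count by src_ip, City, Country
```

The index and field names here are assumptions; iplocation only returns useful results for routable addresses, not RFC 1918 space.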
The search below is intended to get status codes from two different sources and put them together in one table. It works, except that it keeps codes separate when they come from different searches. In the table at the bottom, I want only one row for 504, with entries for both searches and the sum (=5).

| multisearch
    [search index=ABC status.code>399 | rename status.code as StatusCode | eval type="search1"]
    [search index=DEF data.status>399 | rename data.status as StatusCode | eval type="search2"]
| chart count over StatusCode by type
| eval sum = search1+search2

StatusCode  search1  search2  sum
400         17       0        17
406         10       0        10
500         647      0        647
504         0        1        1
504         4        0        4
530         8        0        8
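Duplicate 504 rows usually mean the two sources yield values that differ in type or carry stray whitespace, so chart treats them as distinct. A hedged fix is to normalize StatusCode before charting, and fill nulls before summing (otherwise a code seen by only one search gets a null sum):

```spl
| multisearch
    [search index=ABC status.code>399 | rename status.code as StatusCode | eval type="search1"]
    [search index=DEF data.status>399 | rename data.status as StatusCode | eval type="search2"]
| eval StatusCode=tonumber(trim(StatusCode))
| chart count over StatusCode by type
| fillnull value=0 search1 search2
| eval sum=search1+search2
```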