All Posts

Hi @innoce, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @BrC_Sys99, it isn't possible at the moment, but there's a Splunk Ideas request about this. The only option is to download all the objects one by one. Ciao. Giuseppe
The opposite of filldown is to reverse sort the data, use filldown, then re-sort to the original order.
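Using the Schedule field from this thread, that "fill up" can be sketched like this (a sketch, not a tested search; field names assumed from the thread, and note that search results come back newest-first by default):

```
<your base search>
| sort 0 _time
| filldown Schedule
| sort 0 - _time
```

The first sort puts the oldest events first so filldown carries the last known Schedule forward in time; the final sort restores the newest-first order.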
Our network team is using Splunk Cloud, but they asked me to see if it is possible to create a local copy on one of the servers in our data center for redundancy. Is it possible to copy the apps and switch to that copy in case the forwarder's connection to the cloud goes down? Thank you.
@richgalloway @isoutamo Rich was able to help out with his search. I now have this as my search:

index=plc source="middleware" sourcetype="plc:" Tag="Channel1*"
| where Value != ""
| eval Schedule = if(Tag="Schedule", Value, null())
| eval Incident = if(Tag="Incident", Value, null())
| table _time Schedule Incident Value

This returns the following results:

_time        Schedule        Incident   Value
11:54:31 AM                  Alarm2     Alarm2
11:30:15 AM  Productive                 Productive
10:59:15 AM  Non-productive             Non-productive
10:59:09 AM                  Alarm2     Alarm2
10:55:10 AM                  Alarm1     Alarm1
10:47:59 AM                  Alarm2     Alarm2
10:27:40 AM                  Alarm2     Alarm2
10:17:12 AM                  Alarm2     Alarm2
10:15:03 AM                  Alarm2     Alarm2
10:13:12 AM                  Alarm2     Alarm2
10:01:49 AM                  Alarm2     Alarm2
9:54:00 AM                   Alarm2     Alarm2
9:48:44 AM                   Alarm2     Alarm2
9:38:20 AM                   Alarm2     Alarm2
9:27:36 AM                   Alarm2     Alarm2
9:21:20 AM                   Alarm2     Alarm2
9:16:33 AM                   Alarm2     Alarm2
9:15:22 AM                   Alarm3     Alarm3
9:10:15 AM   Productive                 Productive
8:59:14 AM   Non-productive             Non-productive
8:59:13 AM                   Alarm2     Alarm2
8:52:15 AM                   Alarm2     Alarm2
8:48:59 AM                   Alarm1     Alarm1
8:46:41 AM                   Alarm1     Alarm1
8:42:16 AM                   Alarm1     Alarm1
8:39:58 AM                   Alarm1     Alarm1
8:27:52 AM                   Alarm2     Alarm2
8:20:13 AM                   Alarm2     Alarm2
8:15:44 AM                   Alarm2     Alarm2
8:11:46 AM                   Alarm2     Alarm2
8:09:37 AM                   Alarm1     Alarm1
8:07:23 AM                   Alarm1     Alarm1
8:01:53 AM                   Alarm1     Alarm1
7:58:28 AM                   Alarm1     Alarm1
7:57:16 AM                   Alarm1     Alarm1

I think I need the opposite of the filldown command (if there is one?), where I take the last known value of Schedule and populate the Schedule field with it whenever I get a value at a timestamp where Schedule is null.
Hi, how can we efficiently schedule the correlation searches in Splunk ITSI to avoid skipping of concurrently running jobs? We can see the below message in skipped searches. Thanks!
Hey, thanks for the reply. I figured out that what was causing this is the mobile action in the alert. When I configured the alert to send the action via email, it worked! I don't know why sending a push notification to mobile is not working, although it's configured correctly in my Splunk Mobile app and in Splunk Secure Gateway! Thank you for mentioning the email action to me.
Hi, how can we identify the correlation search name using the report name found in skipped searches? We are trying to resolve the skipped-searches issue. Any help would be much appreciated. Thanks!
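As a starting point (a hedged sketch, not an official procedure): correlation searches are stored as saved searches, so you can list them with their labels via the REST endpoint and match the title against the report name from the skipped-searches message, assuming your role can list saved searches:

```
| rest /servicesNS/-/-/saved/searches
| search action.correlationsearch.enabled=1
| table title action.correlationsearch.label eai:acl.app
```

The title column is the internal saved-search (report) name that appears in skipped-search messages, while action.correlationsearch.label is the display name shown in the correlation search editor.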
Thanks for responding. Do you have any examples using a summary index?
Try removing the append=true from the outputlookup command. However, if you do this, you will have to modify the search so it retains all the information you want to keep, i.e. the last 28 days' worth. Another possibility is to use a summary index and set the retention period for that index to 28 days.
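A sketch of the first option, using the lookup file and timestamp field names from the query posted in this thread (adjust to your own names): read the lookup back, keep only the last 28 days, and overwrite it without append=true:

```
| inputlookup lkp_wms_print_slislo1.csv
| where timestamp > relative_time(now(), "-28d@d")
| outputlookup lkp_wms_print_slislo1.csv
```

This could run as its own scheduled housekeeping search, or the where clause can be folded into the main search before its outputlookup, so the file never grows past 28 days of rows.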
You could start by creating fields based on various criteria, e.g.

| eval startTime=if(eventtype="startExport", _time, null())
| eval exportComplete=if(eventtype="exportSuccess", _time, null())

Then you can gather the values of these fields by requestId:

| stats values(startTime) as startTime values(exportComplete) as exportComplete by requestId
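The same pattern extends to the remaining columns the question asks for (a sketch: the "send" event type and the "in progress" / "email not sent" placeholders come from the question; the exportDuration calculation is assumed):

```
| eval startTime=if(eventtype="startExport", _time, null())
| eval exportComplete=if(eventtype="exportSuccess", _time, null())
| eval emailSent=if(eventtype="send", _time, null())
| stats values(startTime) as startTime
        values(exportComplete) as exportComplete
        values(emailSent) as emailSent
        by requestId
| eval exportDuration=exportComplete-startTime
| fillnull value="in progress" exportComplete
| fillnull value="email not sent" emailSent
```

The fillnull lines substitute the placeholder strings for requests that never produced an exportSuccess or send event.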
Hi there, I was wondering if I could get some assistance on whether the following is possible. I am quite new to creating tables in Splunk. In Splunk, we have logs for an export process. Each step of the export process has the same ID to show it's part of the same request, and each event in the chain has a type. I'd like to create a table that lists all exports over a given time period:

requestID actor.email export.duration startTime exportComplete emailSent

- Each event for the same export has the same requestID
- startTime would be the timestamp of the event with type "startExport"
- exportComplete would be the timestamp of the event with type "exportSuccess" (or "in progress" if an event of that type is not present for that request ID)
- emailSent would be the timestamp of the event with type "send" (or "email not sent" if an event of that type is not present for that request ID)

All of this information is available in the original results, but the table I have created so far just lists each event sorted by timestamp, which is definitely helpful versus raw results, but getting a table like this would be so much better.
Hello All, I have some dashboards which use reports for calculations, and those reports use some lookup files. The problem is that when the CSV file reaches the set size limit, the graphs stop showing on the dashboard, and I have to create a new lookup file every time and update the dashboards. I don't want to keep doing that. Is there any way this can be avoided? I want the outputlookup file to keep just the last 28 days of data and delete the rest. I am trying the Splunk search below but am not sure if I am doing it correctly. I have also tried the max option, and it just stops the query from dumping records into the CSV file above the set value.

index="idx_rwmsna" sourcetype=st_rwmsna_printactivity source="E:\\Busapps\\rwms\\mna1\\geodev12\\Edition\\logs\\DEFAULT_activity_1.log"
| transaction host, JobID, Report, Site startswith="Print request execution start."
| eval duration2=strftime(duration, "%Mm %Ss %3Nms")
| fields *
| rex field=_raw "The request was (?<PrintState>\w*) printed."
| rex field=_raw "The print request ended with an (?<PrintState>\w*)"
| rex field=_raw ".*Dest : (?<Dest>\w+).*"
| search PrintState=successfully Dest=Printer
| table _time, host, Client, Site, JobID, Report, duration, duration2
| stats count as valid_events count(eval(duration<180)) as good_events avg(duration) as averageDuration
| eval sli=round((good_events/valid_events) * 100, 2)
| eval slo=99, timestamp=now()
| eval burnrate=(100-sli)/(100-slo), date=strftime(timestamp,"%Y-%m-%d"), desc="WMS Global print perf"
| eval time=now()
| sort 0 - time
| fields date, desc, sli, slo, burnrate, timestamp, averageDuration
| outputlookup lkp_wms_print_slislo1.csv append=true override_if_empty=true
| where time > relative_time(now(), "-2d@d") OR isnull(time)
After a little tweaking this gives the desired results:

| inputlookup appJobLogs
| search
    [ | inputlookup appJobLogs
      | where match(MessageText, "(?i)general error")
      | fields RunID
      | uniq
      | format ]
| rex mode=sed field=MessageText "s/, /\n/g"
| sort RunStartTimeStamp asc, LogTimeStamp asc, LogID asc
"keep the last schedule tag value" == filldown index=plc source="middleware" sourcetype="plc:___" Tag = "Channel1*" | where Value != "" AND Value != "nothing" | eval Schedule=if(Tag="Schedule", Valu... See more...
"keep the last schedule tag value" == filldown index=plc source="middleware" sourcetype="plc:___" Tag = "Channel1*" | where Value != "" AND Value != "nothing" | eval Schedule=if(Tag="Schedule", Value, null()) | filldown Schedule
Hi Team, we were trying to enable the AppDynamics agent for PHP, and now the Apache server stops intermittently. The following is our configuration:

PHP 7.4.33
Apache/2.4.57
appdynamics-php-agent-23.7.1.751-1.x86_64

Please find the error logs below.

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom

Kindly let me know why I am getting this issue after installing the AppD PHP agent. Thank you.
This is pretty much what I want, but there are other RunID lines that do not have the "general error" message that I want to capture as well. Your example groups all RunIDs with the MessageText "general error". What I need is all entries for any RunID whose MessageText contains "general error". I.e.:

RunId  MessageText
1      Start
1      There has been a general error.
1      Finish
2      Start

So I find RunID 1 having the error, and I want to output the Start, Finish and the error too, but (in this example) not RunID 2, if that is possible.
I got an error: "The 'NOT ()' filter could not be optimized for search results." I'll look into it. Thanks for the suggestion.
Hi, I need to collect the logs from Windows Defender, and I was looking for an official app but couldn't find one. I read some people recommending "TA for Microsoft Windows Defender", but I see that it hasn't been updated since 2017. Is there a more recent option? Thanks.
I apologize for the confusion. Here's a general query to grab the information:

index=plc source="middleware" sourcetype="plc:___" Tag="Channel1*"
| where Value != "" AND Value != "nothing"

Here are the results for the last 120 minutes. You can see around 9 AM that the Schedule tag value changes. I would almost want to keep the last Schedule tag value and tack that onto the Incident tags as they come in.

Time         Event
9:27:36 AM   2023-09-01 13:27:36.260 +0000 Tag="Incident" Value="ALARM3" Quality="good"
9:21:20 AM   2023-09-01 13:21:20.297 +0000 Tag="Incident" Value="ALARM3" Quality="good"
9:16:33 AM   2023-09-01 13:16:32.918 +0000 Tag="Incident" Value="ALARM3" Quality="good"
9:15:22 AM   2023-09-01 13:15:22.263 +0000 Tag="Incident" Value="ALARM4" Quality="good"
9:10:15 AM   2023-09-01 13:10:15.419 +0000 Tag="Schedule" Value="Productive" Quality="good"
8:59:14 AM   2023-09-01 12:59:14.164 +0000 Tag="Schedule" Value="Non-productive" Quality="good"
8:59:13 AM   2023-09-01 12:59:12.661 +0000 Tag="Incident" Value="ALARM3" Quality="good"
8:52:15 AM   2023-09-01 12:52:14.779 +0000 Tag="Incident" Value="ALARM3" Quality="good"
8:48:59 AM   2023-09-01 12:48:59.291 +0000 Tag="Incident" Value="ALARM1" Quality="good"
8:46:41 AM   2023-09-01 12:46:41.037 +0000 Tag="Incident" Value="ALARM1" Quality="good"
8:42:16 AM   2023-09-01 12:42:16.314 +0000 Tag="Incident" Value="ALARM1" Quality="good"
8:39:58 AM   2023-09-01 12:39:58.018 +0000 Tag="Incident" Value="ALARM1" Quality="good"
8:27:52 AM   2023-09-01 12:27:51.918 +0000 Tag="Incident" Value="ALARM3" Quality="good"
8:20:13 AM   2023-09-01 12:20:13.465 +0000 Tag="Incident" Value="ALARM3" Quality="good"
8:15:44 AM   2023-09-01 12:15:44.416 +0000 Tag="Incident" Value="ALARM3" Quality="good"
8:11:46 AM   2023-09-01 12:11:46.442 +0000 Tag="Incident" Value="ALARM3" Quality="good"
8:09:37 AM   2023-09-01 12:09:37.184 +0000 Tag="Incident" Value="ALARM1" Quality="good"
8:07:23 AM   2023-09-01 12:07:23.474 +0000 Tag="Incident" Value="ALARM1" Quality="good"
8:01:53 AM   2023-09-01 12:01:52.538 +0000 Tag="Incident" Value="ALARM1" Quality="good"
7:58:28 AM   2023-09-01 11:58:27.990 +0000 Tag="Incident" Value="ALARM1" Quality="good"
7:57:16 AM   2023-09-01 11:57:15.859 +0000 Tag="Incident" Value="ALARM1" Quality="good"
7:49:31 AM   2023-09-01 11:49:31.305 +0000 Tag="Incident" Value="ALARM1" Quality="good"
7:48:21 AM   2023-09-01 11:48:20.686 +0000 Tag="Incident" Value="ALARM2" Quality="good"
7:47:13 AM   2023-09-01 11:47:13.069 +0000 Tag="Incident" Value="ALARM1" Quality="good"
7:35:14 AM   2023-09-01 11:35:14.139 +0000 Tag="Incident" Value="ALARM1" Quality="good"
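The "keep the last Schedule value and tack it onto the Incident events" idea can be sketched on top of that base search (a sketch only; the sourcetype appears redacted as "plc:___" in the thread and is left as-is, and results are sorted oldest-first so filldown carries the last seen Schedule forward in time):

```
index=plc source="middleware" sourcetype="plc:___" Tag="Channel1*"
| where Value != "" AND Value != "nothing"
| sort 0 _time
| eval Schedule=if(Tag="Schedule", Value, null())
| filldown Schedule
| where Tag="Incident"
| table _time Schedule Value
```

The final where keeps only the Incident rows, each now annotated with the Schedule that was in effect when the incident occurred.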