All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am new to Splunk and I am trying to extract fields using the built-in feature. Since the log format contains both pipes and spaces, the built-in field extraction did not work. I am trying to extract the value before the pipe as "name", the value after the pipe as "size", and the value after the first space as "value", as shown below. I don't care about the last values like 1547, 1458, 1887. Any help would be appreciated.

Name size value
abc-pendingcardtransfer-networki 30 77784791
log-incomingtransaction-datainpu 3 78786821
dog-acceptedtransactions-incoming 1 7465466

Sample logs:

9/2/22 11:52:39.005 AM abc-pendingcardtransfer-networki|30 77784791 1547
9/2/22 11:50:39.005 AM log-incomingtransaction-datainpu|3 78786821 1458
9/2/22 11:45:39.005 AM [INFO] 2022-09-01 13:52:38.22 [main] ApacheInactivityMonitor - Number of input traffic is 25
9/2/22 11:44:39.005 AM dog-acceptedtransactions-incoming|1 7465466 1887

Thank you
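In case a worked example helps, a minimal rex sketch along these lines might do it (the field names name, size, and value come from the question; the pattern itself is an assumption based on the sample logs and may need adjusting):

  <your base search>
  | rex field=_raw "(?<name>[\w-]+)\|(?<size>\d+)\s+(?<value>\d+)"
  | where isnotnull(name)
  | table name size value

Events without a pipe (such as the ApacheInactivityMonitor line) simply will not match and are dropped by the where clause.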
I was given incorrect information in my last post. Our Splunk is on-prem and we want to migrate to the cloud. Will we have the option to run on-prem and cloud as a hybrid during the migration? Also, what are the options for forwarding redundancy during migration? Thank you.
Do we have anything (e.g., an add-on or built-in functionality) to check the code quality of our Splunk dashboards, reports, and alerts?
Deferred searches:

| rest /servicesNS/-/-/search/jobs splunk_server=local
| search dispatchState="DEFERRED" isSavedSearch=1
| search title IN ("*outputcsv*","*outputlookup*","*collect*")
| table label dispatchState reason published updated title

Skipped searches:

index=_internal sourcetype=scheduler status=skipped
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search IN ("*outputcsv *","*outputlookup *")
    | table title
    | rename title as savedsearch_name]
| stats count by app search_type reason savedsearch_name
| sort - count

Searches that ran with errors:

| rest /servicesNS/-/-/search/jobs splunk_server=local
| search isSavedSearch=1 isFailed=1
| search title IN ("*outputcsv*","*outputlookup*","*collect*")
| table label dispatchState reason published updated messages.fatal title

Saved searches with the collect command that generated 0 events:

index=_internal sourcetype=scheduler result_count=0
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search="*collect*"
    | table title
    | rename title as savedsearch_name]
| table _time user app savedsearch_name status scheduled_time run_time result_count
| convert ctime(scheduled_time)
Our network team is using Splunk in the cloud; however, they asked me to see whether it is possible to create a local copy on one of our servers in our data center for redundancy. Is it possible to copy the apps and have them use that copy in case forwarding to the cloud goes down? Thank you.
Hi, how can we efficiently schedule correlation searches in Splunk ITSI so that concurrently running jobs are not skipped? We can see the corresponding message under skipped searches. Thanks!
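One common mitigation (a sketch only, not ITSI-specific guidance) is to stagger the cron schedules and allow a schedule window in savedsearches.conf so the correlation searches do not all start in the same minute; the stanza name below is hypothetical:

  [Example Correlation Search]
  cron_schedule = 3-58/5 * * * *
  schedule_window = auto

Whether this is appropriate depends on how time-sensitive each correlation search is.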
Hi, how can we identify the correlation search name from the report name shown in skipped searches? We are trying to resolve our skipped-searches issue. Any help would be much appreciated. Thanks!
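As a rough starting point (a sketch that assumes the report name shown for skipped searches corresponds to the savedsearch_name logged by the scheduler), the skipped searches can be listed by name and app like this:

  index=_internal sourcetype=scheduler status=skipped
  | stats count by app savedsearch_name reason
  | sort - count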
Hi there, I was wondering if I could get some assistance on whether the following is possible. I am quite new to creating tables in Splunk. We have logs for an export process. Each step of the export process carries the same ID to show it is part of the same request, and each event in the chain has a type. I'd like to create a table that lists all exports over a given time period with the columns:

requestID, actor.email, export.duration, startTime, exportComplete, emailSent

- Each event for the same export has the same requestID
- startTime would be the timestamp of the event with type "startExport"
- exportComplete would be the timestamp of the event with type "exportSuccess" (or "in progress" if an event of that type is not present for that requestID)
- emailSent would be the timestamp of the event with type "send" (or "email not sent" if an event of that type is not present for that requestID)

All of this information is available in the original results, but the table I have created so far just lists each event sorted by timestamp. That is definitely helpful compared to raw results, but a table like this would be so much better.
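A sketch of one way to build such a table with stats, assuming the field names requestID, type, and actor.email taken from the description above:

  <your base search>
  | stats min(eval(if(type="startExport", _time, null()))) as startTime
          min(eval(if(type="exportSuccess", _time, null()))) as exportComplete
          min(eval(if(type="send", _time, null()))) as emailSent
          values(actor.email) as actor_email
          by requestID
  | eval export_duration = exportComplete - startTime
  | convert ctime(startTime) ctime(exportComplete) ctime(emailSent)
  | fillnull value="in progress" exportComplete
  | fillnull value="email not sent" emailSent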
Hello all, I have some dashboards that use reports for calculations, and those rely on lookup files. The problem is that when the CSV file reaches the size limit, the graphs stop showing on the dashboard, and I have to create a new lookup file each time and update the dashboards. I don't want to keep doing that. Is there any way this can be avoided? I want the outputlookup file to keep only the last 28 days of data and delete the rest. I am trying the search below but I'm not sure whether I am doing it correctly. I have also tried the max option, but it only restricts the query from dumping records into the CSV file above the set value.

index="idx_rwmsna" sourcetype=st_rwmsna_printactivity source="E:\\Busapps\\rwms\\mna1\\geodev12\\Edition\\logs\\DEFAULT_activity_1.log"
| transaction host, JobID, Report, Site startswith="Print request execution start."
| eval duration2=strftime(duration, "%Mm %Ss %3Nms")
| fields *
| rex field=_raw "The request was (?<PrintState>\w*) printed."
| rex field=_raw "The print request ended with an (?<PrintState>\w*)"
| rex field=_raw ".*Dest : (?<Dest>\w+).*"
| search PrintState=successfully Dest=Printer
| table _time, host, Client, Site, JobID, Report, duration, duration2
| stats count as valid_events count(eval(duration<180)) as good_events avg(duration) as averageDuration
| eval sli=round((good_events/valid_events) * 100, 2)
| eval slo=99, timestamp=now()
| eval burnrate=(100-sli)/(100-slo), date=strftime(timestamp,"%Y-%m-%d"), desc="WMS Global print perf"
| eval time=now()
| sort 0 - time
| fields date, desc, sli, slo, burnrate, timestamp, averageDuration
| outputlookup lkp_wms_print_slislo1.csv append=true override_if_empty=true
| where time > relative_time(now(), "-2d@d") OR isnull(time)
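One common pattern (a sketch, not a drop-in fix) is to leave the append in the scheduled report as-is and run a separate scheduled maintenance search that rewrites the lookup with only the last 28 days, since a where clause placed after outputlookup filters the search results but not the file itself:

  | inputlookup lkp_wms_print_slislo1.csv
  | where timestamp >= relative_time(now(), "-28d@d")
  | outputlookup lkp_wms_print_slislo1.csv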
Hi team, we were trying to enable the AppDynamics agent for PHP, and the Apache server stops intermittently. The following is our configuration:

PHP 7.4.33
Apache/2.4.57
appdynamics-php-agent-23.7.1.751-1.x86_64

Please find the error logs below:

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom

Kindly let me know why I am getting this issue after installing the AppDynamics PHP agent. Thank you.
Hi, I need to collect logs from Windows Defender, and I was looking for an official app but couldn't find one. I have read some people recommending "TA for Microsoft Windows Defender", but I see that it hasn't been updated since 2017. Is there a more recent option? Thanks.
Hi, please help me extract a new field URI using rex from events like the following:

/area/label/health/readiness||||||||||METRICS|--
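A minimal rex sketch, assuming the URI is the part of the raw event from the first "/" up to the first pipe:

  <your base search>
  | rex field=_raw "(?<URI>/[^|\s]+)"

For the sample event this would yield URI=/area/label/health/readiness.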
Sometimes, after a change is made to an app and it is deployed to our Universal Forwarders on Windows computers, the conf files are malformed. Specifically, the inputs.conf file has all of its spaces and formatting removed, and the forwarder no longer uses the file. The only fix I have found is to delete the app from the forwarder and wait for the deployment server to re-deploy it.
We are not able to send data to the application Controller dashboard. When we enable the AppDynamics agent for PHP, the Apache process terminates. Please find the error logs below:

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
Apologies, I am quite new to Splunk, so I'm not sure whether this is possible. I have the following simple query:

| inputlookup appJobLogs
| where match(MessageText, "(?i)general error")
| rex mode=sed field=MessageText "s/, /\n/g"
| sort RunStartTimeStamp asc, LogTimeStamp asc, LogID asc

This works and gets the data I need for the error I am after, but I want all associated values for the error by RunID. The headers are: Host, InvocationID, Name, LogID, LogTS, LogName, MessageID, MessageText, RunID, RunTS, RunName. I would like to do something like:

| inputlookup appJobLogs
| where RunID in
    [ | search appJobLogs
      | where match(MessageText, "(?i)general error")
      | fields RunID ]

I have tried various forms, and the closest I got was a join, which gave me the not-found fields (should be fixable) but is limited to 10,000 results, so that seems like the wrong solution.
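One way to express the "RunID in" idea without join is a subsearch that returns the matching RunIDs as a filter (a sketch; it assumes the number of distinct RunIDs stays under the default subsearch result limit):

  | inputlookup appJobLogs
  | search
      [| inputlookup appJobLogs
       | where match(MessageText, "(?i)general error")
       | dedup RunID
       | fields RunID
       | format]
  | rex mode=sed field=MessageText "s/, /\n/g"
  | sort RunStartTimeStamp asc, LogTimeStamp asc, LogID asc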
I'm ingesting logs from DNS (NextDNS via API) and struggling to exclude the header. I have seen @woodcock resolve some other examples and I can't quite see where I'm going wrong. The common mistake is not doing this on the UF.

Sample data (comes in via a curl command and is written out to a file):

timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
2023-09-01T09:09:21.561936+00:00,beam.scs.splunk.com,AAAA,false,DNS-over-HTTPS,213.31.58.70,,,,splunk.com,8D512,"NUC10i5",,,,nextdns-cli
2023-09-01T09:09:09.154592+00:00,time.cloudflare.com,A,true,DNS-over-HTTPS,213.31.58.70,,,,cloudflare.com,14D3C,"NUC10i5",,,,nextdns-cli

UF (on the syslog server), v8.1.0:

props.conf
[nextdns:dns]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER =,
FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
TIMESTAMP_FIELDS = timestamp

inputs.conf
[monitor:///opt/remote-logs/nextdns/nextdns.log]
index = nextdns
sourcetype = nextdns:dns
initCrcLength = 375

Indexer (SVA S1), v9.1.0. I disabled the options here; I will apply the Great 8 once this is fixed. All the work needs to happen on the UF.

[nextdns:dns]
#INDEXED_EXTRACTIONS = CSV
#HEADER_FIELD_LINE_NUMBER = 1
#HEADER_FIELD_DELIMITER =,
#FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
#TIMESTAMP_FIELDS = timestamp

Challenge: I'm still getting the header line ingested. I have deleted the indexed data, regenerated an updated log, and re-ingested, and the issue persists. Obviously, I have restarted Splunk on each instance after the respective changes.
Hello to all, I have the following issue: I receive logs from an older machine for which I cannot adjust the logging settings. When extracting data in Splunk, I encounter the following field and some of its values:

id = EF_jblo_fdsfew42_sla
id = EF_space_332312_sla
id = EF_97324_pewpew_sla

With a field extraction I then get my location from the id. For example:

id = EF_jblo_fdsfew42_sla    => location = jblo
id = EF_space_332312_sla     => location = space
id = EF_97324_pewpew_sla     => location = 97324 <- which is not a location here.

Now I aim to replace the location using an automatic lookup based on the ID "EF_97324_pewpew_sla". Unfortunately, I either retrieve only the location from the table, omitting the rest, or I only receive the values produced by the field extraction. I've reviewed the search-time operation sequence in the documentation, and field extraction does precede the lookup. However, I'm perplexed as to why it consistently erases all the values rather than just overwriting a single one. Is there an automated solution running in the background, similar to an automatic lookup, that could resolve this?

The lookup I had in mind:

ID                    Solution
EF_97324_pewpew_sla   TSINOC

My original concept was as follows: data is ingested into Splunk, and a field extraction extracts the location from the ID. For the IDs that I know do not contain any location information, I intend to replace the extracted value with the lookup data. I want the whole thing to run in the "background" so that users do not have to include it in their search string. I also tried calculated fields to build one field from two, but since the calculation takes place before the lookup, that was unfortunately not possible.

Hope someone can help me. Kind regards, Flenwy
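For reference, a sketch of how the automatic lookup wiring described above is normally expressed (the stanza names, CSV file name, and sourcetype are hypothetical; this only illustrates the setup and does not by itself change the overwrite behaviour described in the question):

  # transforms.conf
  [ef_location_overrides]
  filename = ef_location_overrides.csv

  # props.conf
  [your:sourcetype]
  LOOKUP-location_override = ef_location_overrides ID AS id OUTPUT Solution AS location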
Hi, in some of the dashboards of my Splunk Monitoring Console, I get this error: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." The developer console shows a 404 Not Found error for many scripts, and the same error is also issued for other JS files like PopTart.js or Base.js. Searching for these files in the Splunk folder, I noticed the scripts are all stored in a folder called quarantined_files, an odd folder placed directly under the /opt/splunk/ path. Any ideas on how to debug this error?
Hello team, I have logs like this:

File Records count is 2
File Records count is 5
File Records count is 45
File Records count is 23

I have extracted the values 2, 5, 45, 23 as a separate field called Count. When I use "base search | table Count", I get the expected values in a statistics table, but I want 2, 5, 45, 23 plotted in a line graph. I tried stats commands, but they only show the number of events with Count, not the values of Count. Could you please advise on how I can plot the values of Count in a graph?
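A sketch, assuming each event has its own timestamp so the raw values can be charted against time (stats count by Count would only count events per value, which matches the behaviour described):

  <your base search>
  | table _time Count
  | sort 0 _time

With the visualization set to a line chart, this plots the individual Count values over time; timechart avg(Count) would be an alternative if aggregation per time bucket is acceptable.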
We have the Splunk Add-on for AWS, in which we configured a CloudTrail input of type SQS-Based S3. We are not receiving logs continuously.