All Topics
Hi, I am new to Splunk. I set up a single-site cluster to parse a JSON-formatted log. On the cluster manager I placed the props.conf and transforms.conf configuration files under /opt/splunk/etc/manager-apps/_cluster/local, and they were distributed to the indexers under /opt/splunk/etc/peer-apps/_cluster/local. However, when I search from the search head, I do not see the desired effect.

props.conf

[itsd]
DATETIME_CONFIG = CURRENT
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
category = Structured
disabled = false
pulldown_type = true
TRANSFORMS-null1 = replace_null
TRANSFORMS-null2 = replace_null1

transforms.conf

[replace_null]
REGEX = ^\[
DEST_KEY = queue
FORMAT = nullQueue

[replace_null1]
REGEX = (.*)(\}\s?\})
DEST_KEY = _raw
FORMAT = $1$2
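Not a definitive fix for the null-queue behavior, but as a sketch of the usual distribution steps on the cluster manager (assuming a default /opt/splunk install): the two files sit in the manager's _cluster/local directory and the configuration bundle is pushed with apply cluster-bundle. Also note that index-time TRANSFORMS only affect data indexed after the bundle is applied, so events that were already indexed will still look unchanged on the search head.

/opt/splunk/etc/manager-apps/_cluster/local/props.conf
/opt/splunk/etc/manager-apps/_cluster/local/transforms.conf

/opt/splunk/bin/splunk apply cluster-bundle --answer-yes
/opt/splunk/bin/splunk show cluster-bundle-status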
I would like a search query that would display a graph with the number of closed notables divided by urgency in the last 12 hours, but the notables need to be retrieved based on the time they were cl... See more...
I would like a search query that would display a graph with the number of closed notables divided by urgency in the last 12 hours, but the notables need to be retrieved based on the time they were closed. I'm using this search:

| inputlookup append=T incident_review_lookup
| rename user as reviewer
| `get_realname(owner)`
| `get_realname(reviewer)`
| eval nullstatus=if(isnull(status),"true","false")
| `get_reviewstatuses`
| eval status=if((isnull(status) OR isnull(status_label)) AND nullstatus=="false",0,status)
| eval status_label=if(isnull(status_label) AND nullstatus=="false","Unassigned",status_label)
| eval status_description=if(isnull(status_description) AND nullstatus=="false","unknown",status_description)
| eval _time=time
| `uitime(time)`
| fields - nullstatus

What's wrong?
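For the graph itself, a minimal sketch along these lines may be closer to what is needed. It assumes incident_review_lookup carries an urgency field on each review entry, that its time field is the moment the status change (the closure) was recorded, and that `get_reviewstatuses` resolves a "Closed" status_label:

| inputlookup incident_review_lookup
| eval _time=time
| where _time >= relative_time(now(), "-12h")
| `get_reviewstatuses`
| search status_label="Closed"
| timechart span=1h count by urgency

Because _time is set from the review time rather than the notable's original event time, the chart buckets notables by when they were closed, which is the behavior described above.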
I choose source from forwarded input selection to input in splunk. I can't see sysmon in logs from source. I made the inputs.conf setting via forwarder, unfortunately I couldn't see it again. I have ... See more...
I chose the source from the forwarded-data input selection in Splunk, but I can't see Sysmon among the sources in my logs. I also made the inputs.conf setting on the forwarder, but unfortunately I still couldn't see it. The forwarders are working and my other logs are coming in; only the Sysmon logs are not arriving. I would appreciate your help.
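If the forwarder is a Windows universal forwarder, a minimal inputs.conf sketch for the Sysmon operational channel looks like the stanza below. The index name is an assumption to replace with your own, and it assumes Sysmon is actually installed and writing to Applications and Services Logs > Microsoft > Windows > Sysmon > Operational (check Event Viewer first):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = wineventlog

Restart the forwarder service after adding the stanza, then check the source/sourcetype again on the search head.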
Hello, I have been trying to integrate Nessus Essentials with SOAR since days but with failure till now, I installed Nessus App in SOAR, and configured the new asset with the APIs from Nessus Essen... See more...
Hello, I have been trying to integrate Nessus Essentials with SOAR for days, without success so far. I installed the Nessus app in SOAR and configured a new asset with the API keys from Nessus Essentials plus the Nessus IP address and port:

Nessus server IP/hostname: https://192.168.199.78 (I tried http and without it)
Port that the Nessus server is listening on: 8834

When I test connectivity I get:

1 action failed
Error Connecting to server. Details: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: //192.168.199.78:8834/users (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fc8b0364940>: Failed to establish a new connection: [Errno -2] Name or service not known'))

I searched the community and other sources but didn't find anything that helps. Can anybody help me? Many thanks.
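One observation from the traceback: urllib3 is treating the literal string "https" as the hostname (host='https', port=443), which usually means the scheme was entered in a field that expects only a host or IP. As a sketch, and assuming the app builds the URL itself from the host and port fields, the asset values would look like:

Nessus server IP/hostname: 192.168.199.78
Port that the Nessus server is listening on: 8834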
I'm doing the course on Splunk education  I  did download on this link below, but didn't have this type of logs on the exemple:  Someone can help with this?  https://docs.splunk.com/Documentation/... See more...
I'm doing a Splunk Education course. I downloaded the data from the link below, but the example doesn't have this type of logs. Can someone help with this?

https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchTutorial/Systemrequirements#Download_the_tutorial_data_files
I have an index A and another index B. logs in A have a correlation to logs in B. But the only common field between them is 'timestamp'. There is a field 'fa' in index A and field 'fb' in index B. t... See more...
I have an index A and another index B. Logs in A correlate with logs in B, but the only common field between them is the timestamp. There is a field 'fa' in index A and a field 'fb' in index B, and the timestamp in index A has a +5 minute drift relative to index B. I want to write a query that takes field 'fa' from index A, finds the corresponding log in index B based on the timestamp (accounting for the +5 minute drift), and returns field 'fb' from index B.
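As a starting point, here is a sketch that shifts index A's timestamps back by the 5-minute drift and then matches events on the adjusted time. The index names, the 1-minute alignment span, and the assumption that at most one event from each index lands in a given time bucket are all assumptions to adapt:

index=A OR index=B
| eval match_time=if(index=="A", _time - 300, _time)
| bin match_time span=1m
| stats values(fa) as fa values(fb) as fb by match_time
| where isnotnull(fa) AND isnotnull(fb)

If the drift is not exactly 300 seconds, widening the bin span (or bucketing both sides and joining on the bucket) is the usual way to tolerate the slack.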
When I create report and enable summary index, the results are getting in the below format.   Table: id    _time 1      2022-06-01 12:01:30.802 1      2022-06-01 12:11:47.069   But when... See more...
When I create a report and enable summary indexing, the results are stored in the format below.

Table:
id    _time
1     2022-06-01 12:01:30.802
1     2022-06-01 12:11:47.069

But when I query this summary index with SPL, the milliseconds are missing from the _time column. This is the query I have used (it fetches the latest run's results):

index="summary" report="yy"
| eventstats max(search_now) as latestsearch by id, report
| where search_now = latestsearch
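As a quick check (and display workaround, not a fix for how the summary events are written), you can render _time with millisecond precision yourself to confirm whether the subseconds are still present in the summary data. A minimal sketch:

index="summary" report="yy"
| eval time_ms=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| table id time_ms

If time_ms shows .000 for every row, the subseconds were lost when the events were collected, not by the table rendering.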
I am new to Splunk so I'm learning and I know that it can do quite a bit.  I am searching for similar network traffic for users based on our proxy indexes.  I want to know if there is a particular si... See more...
I am new to Splunk, so I'm learning, and I know that it can do quite a bit. I am searching for similar network traffic for users based on our proxy indexes. I want to know if there is a particular site visited by all of the users in our list of 50 or so, so user and URL are both necessary, and I need to pull it from all of their data in our network proxy. Here is a redacted portion of a search I have honed down to, but feel free to suggest something better. Edit to provide a clear question: the search below doesn't work; can you provide a different search, or edits, that would help me get the data I'm looking for?

index=<network one> <userID> IN (userID1,userID2) AND url=*
| stats dc(userID) as count by url
| where count=2
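A sketch of the common-URL approach; the index and field names are taken from the redacted search and are assumptions, and the final where clause should equal the number of users listed so that only URLs visited by every one of them survive:

index=<network one> userID IN (userID1, userID2) url=*
| stats dc(userID) as user_count by url
| where user_count=2

For a list of ~50 users, a lookup is easier to maintain than typing them inline. Assuming a hypothetical lookup file my_users.csv with a userID column:

index=<network one> url=*
    [| inputlookup my_users.csv | fields userID ]
| stats dc(userID) as user_count by url
| where user_count=50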
My client wants to know whether users who have not connected in 90 days can be blocked.
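Splunk can report on that kind of inactivity; the actual blocking would normally happen in the identity or access system. A minimal sketch, assuming authentication events live in an index called auth with a user field (both assumptions), run over at least the last 90+ days:

index=auth
| stats max(_time) as last_seen by user
| where last_seen < relative_time(now(), "-90d")
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")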
Hello, I am new to splunk and I trying to extract the fields using built-in feature.  Since the log format contain both the pipe as well as spaces, the in-built field extraction was unable to work. ... See more...
Hello, I am new to Splunk and I am trying to extract fields using the built-in feature. Since the log format contains both a pipe and spaces, the built-in field extraction was unable to work. I am trying to extract the field before the pipe as "name", the field after the pipe as "size", and the field after the first space as "value", as shown below. I don't care about the last values like 1547, 1458, 1887. Any help would be appreciated.

Name                                size  value
abc-pendingcardtransfer-networki    30    77784791
log-incomingtransaction-datainpu    3     78786821
dog-acceptedtransactions-incoming   1     7465466

Sample logs:

9/2/22 11:52:39.005 AM abc-pendingcardtransfer-networki|30 77784791 1547
9/2/22 11:50:39.005 AM log-incomingtransaction-datainpu|3 78786821 1458
9/2/22 11:45:39.005 AM [INFO] 2022-09-01 13:52:38.22 [main] ApacheInactivityMonitor - Number of input traffic is 25
9/2/22 11:44:39.005 AM dog-acceptedtransactions-incoming|1 7465466 1887

Thank you
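An inline rex sketch that pulls out the three fields and simply leaves events without a pipe (such as the ApacheInactivityMonitor line) unextracted; the field names follow the example above and the pattern assumes size and value are plain digits:

| rex field=_raw "(?<name>[^|\s]+)\|(?<size>\d+)\s+(?<value>\d+)"
| table _time name size value

If this works, the same regex can be moved into a permanent search-time extraction (EXTRACT- in props.conf) for the sourcetype.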
Was given the incorrect information on last post. Our Splunk is On-Prem and we want to migrate to the Cloud.  Will we be given the option to use On-Prem and cloud as a hybrid when migrating ?  Als... See more...
I was given incorrect information on my last post. Our Splunk is on-prem and we want to migrate to the cloud. Will we be given the option to use on-prem and cloud as a hybrid while migrating? Also, what are the options for forwarding redundancy during the migration? Thank you.
Do we have anything (e.g. an add-on or built-in functionality) to check the code quality of our Splunk dashboards, reports, and alerts?
Hey folks, I'm scheduling a meetup on 21 September! We'll meet at The Hoppy Monk on 1604 and 281 (North Central). Hope to see y'all there!
  Deferred Searches:   | rest /servicesNS/-/-/search/jobs splunk_server=local | search dispatchState="DEFERRED" isSavedSearch=1 | search title IN ("*outputcsv*","*outputlookup*","*collect*") ... See more...
Deferred Searches:

| rest /servicesNS/-/-/search/jobs splunk_server=local
| search dispatchState="DEFERRED" isSavedSearch=1
| search title IN ("*outputcsv*","*outputlookup*","*collect*")
| table label dispatchState reason published updated title

Skipped Searches:

index=_internal sourcetype=scheduler status=skipped
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search IN ("*outputcsv *", "*outputlookup *")
    | table title
    | rename title as savedsearch_name]
| stats count by app search_type reason savedsearch_name
| sort - count

Searches that ran with errors:

| rest /servicesNS/-/-/search/jobs splunk_server=local
| search isSavedSearch=1 isFailed=1
| search title IN ("*outputcsv*","*outputlookup*","*collect*")
| table label dispatchState reason published updated messages.fatal title

Saved searches with the collect command that generated 0 events:

index=_internal sourcetype=scheduler result_count=0
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search="*collect*"
    | table title
    | rename title as savedsearch_name]
| table _time user app savedsearch_name status scheduled_time run_time result_count
| convert ctime(scheduled_time)
Our network team is using Splunk in the cloud however they asked me to see if it was possible to create a local copy on one of our Servers in our Data Center as a redundancy.  Is it possible to copy ... See more...
Our network team is using Splunk in the cloud; however, they asked me to see whether it is possible to keep a local copy on one of the servers in our data center as redundancy. Is it possible to copy the apps and have data go to that copy in case the forwarder connection to the cloud goes down? Thank you.
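If the goal is to keep a second copy of the data flowing while the cloud remains the primary destination, one common pattern is to have the forwarders clone their output to both destinations in outputs.conf. A minimal sketch: the group names and server addresses are placeholders, the cloud group is normally supplied by the Splunk Cloud universal forwarder credentials app, and cloning doubles the outbound traffic from each forwarder:

[tcpout]
defaultGroup = splunkcloud_indexers, local_indexers

[tcpout:splunkcloud_indexers]
# normally provided by the Splunk Cloud forwarder credentials app
server = <your-stack-input-endpoint>:9997

[tcpout:local_indexers]
server = local-indexer.example.com:9997

The local indexer would also need copies of the relevant index definitions and apps to make the data searchable there.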
Hi, how can we efficiently schedule the correlation searches in Splunk ITSI to avoid skipped concurrently running jobs? We can see the message below in skipped searches. Thanks!
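Two common levers are staggering the cron schedules so the correlation searches do not all start on the same minute, and giving each search a schedule_window so the scheduler can delay it instead of skipping it when concurrency is exhausted. A sketch in savedsearches.conf with a hypothetical search name; the 10-minute window assumes the search can tolerate a delayed start:

[My ITSI correlation search]
cron_schedule = 7 * * * *
schedule_window = 10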
hi, How to identify the correlation search name using the Report name found in skipped searches. We are trying to resolve the skipped searches issue. Any help would be much appreciated.   Tha... See more...
Hi, how can I identify the correlation search name from the report name found in skipped searches? We are trying to resolve the skipped-searches issue. Any help would be much appreciated. Thanks!
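Correlation searches are stored as ordinary saved searches, so the report name shown in the skipped-searches message is normally the saved search title itself. A quick way to confirm and to see which app owns it (the title value below is a placeholder):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="<report name from the skipped-search message>"
| table title eai:acl.app description cron_schedule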
Hi there, I was wondering if I could get some assistance on whether the following is possible. I am quite new to creating tables in Splunk. In Splunk, we have logs for an export process. Each step ... See more...
Hi there, I was wondering if I could get some assistance on whether the following is possible; I am quite new to creating tables in Splunk. We have logs for an export process. Each step of the export process has the same ID to show it is part of the same request, and each event in the chain has a type. I'd like to create a table that lists all exports over a given time period with the columns: requestID, actor.email, export.duration, startTime, exportComplete, emailSent. A sketch is shown after this list.
- Each event for the same export has the same requestID.
- startTime would be the timestamp of the event with type "startExport".
- exportComplete would be the timestamp of the event with type "exportSuccess" (or "in progress" if an event of that type is not present for that request ID).
- emailSent would be the timestamp of the event with type "send" (or "email not sent" if an event of that type is not present for that request ID).
All of this information is available in the original results, but the table I have created so far just lists each event sorted by timestamp, which is definitely helpful versus raw results, but getting a table like this would be so much better.
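A minimal sketch of one way to build that table with stats, grouping all events by requestID and picking out the per-type timestamps. The index and sourcetype are placeholders, and it assumes the field names type, requestID, and actor.email exist exactly as described above:

index=<your index> sourcetype=<your sourcetype>
| stats min(eval(if(type=="startExport", _time, null()))) as startTime
        min(eval(if(type=="exportSuccess", _time, null()))) as exportComplete
        min(eval(if(type=="send", _time, null()))) as emailSent
        values(actor.email) as actor_email
        by requestID
| eval export_duration=if(isnotnull(exportComplete), exportComplete - startTime, null())
| eval exportComplete=if(isnotnull(exportComplete), strftime(exportComplete, "%Y-%m-%d %H:%M:%S"), "in progress")
| eval emailSent=if(isnotnull(emailSent), strftime(emailSent, "%Y-%m-%d %H:%M:%S"), "email not sent")
| eval startTime=strftime(startTime, "%Y-%m-%d %H:%M:%S")
| table requestID actor_email export_duration startTime exportComplete emailSent

The min(eval(...)) pattern keeps the earliest timestamp of each event type per request, and the eval lines afterwards supply the "in progress" / "email not sent" placeholders when that step never happened.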
Hello All, I have some dashboards  which are using reports for calculations, it has some lookup files, the problem is when the csv file limit reaches the set value, it stopped showing the Graphs on... See more...
Hello all, I have some dashboards which use reports for calculations, and these rely on lookup files. The problem is that when the CSV file reaches the configured size limit, the graphs stop showing on the dashboard, and I have to create a new lookup file each time and update the dashboards. I don't want to keep doing that. Is there any way this can be avoided? I want the output lookup file to keep only the last 28 days of data and delete the rest. I am trying the search below but I am not sure I am doing it correctly. I have also tried the max option, but that just stops the query from writing records to the CSV file beyond the set value.

index="idx_rwmsna" sourcetype=st_rwmsna_printactivity source="E:\\Busapps\\rwms\\mna1\\geodev12\\Edition\\logs\\DEFAULT_activity_1.log"
| transaction host, JobID, Report, Site startswith="Print request execution start."
| eval duration2=strftime(duration, "%Mm %Ss %3Nms")
| fields *
| rex field=_raw "The request was (?<PrintState>\w*) printed."
| rex field=_raw "The print request ended with an (?<PrintState>\w*)"
| rex field=_raw ".*Dest : (?<Dest>\w+).*"
| search PrintState=successfully Dest=Printer
| table _time, host, Client, Site, JobID, Report, duration, duration2
| stats count as valid_events count(eval(duration<180)) as good_events avg(duration) as averageDuration
| eval sli=round((good_events/valid_events) * 100, 2)
| eval slo=99, timestamp=now()
| eval burnrate=(100-sli)/(100-slo), date=strftime(timestamp,"%Y-%m-%d"), desc="WMS Global print perf"
| eval time=now()
| sort 0 - time
| fields date, desc, sli, slo, burnrate, timestamp, averageDuration
| outputlookup lkp_wms_print_slislo1.csv append=true override_if_empty=true
| where time > relative_time(now(), "-2d@d") OR isnull(time)
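For the 28-day retention itself, a common pattern is a separate scheduled search that rewrites the lookup with only the rows to keep; the where clause after outputlookup in the search above only filters what the search returns, not what was written to the file. A sketch, assuming the timestamp column in the CSV holds epoch seconds (as written by now() above):

| inputlookup lkp_wms_print_slislo1.csv
| where timestamp >= relative_time(now(), "-28d@d")
| outputlookup lkp_wms_print_slislo1.csv

Scheduling this once a day keeps the file bounded, so the append=true report can keep adding rows without hitting the size limit.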
Hi Team, We were trying to enable appdynamics agent for php . Apache server stops intermittently. following is our configuration PHP 7.4.33 Apache/2.4.57 appdynamics-php-agent-23.7.1.751-1.x86_64... See more...
Hi team, we are trying to enable the AppDynamics agent for PHP, and the Apache server stops intermittently. The following is our configuration:

PHP 7.4.33
Apache/2.4.57
appdynamics-php-agent-23.7.1.751-1.x86_64

Please find the error logs below.

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom

Kindly let me know why I am getting this issue after installing the AppDynamics PHP agent. Thank you.